
Theoretical examples

 

In section 3 the synchronization methods were used on experimental output for which the underlying individual output function is not known. To get some insight into the validity of the methods, we here apply them to theoretical output generated from a known individual output function. We will show that the methods do very well when applied to output generated under the assumptions they were derived for, but that severe mistakes can be made when they are applied to output generated from the wrong set of assumptions. The example treated here is, however, a worst-case scenario.

For all the theoretical examples we assume that the individual output function is

f(a) = cos(a).

As a first example, for both methods 1 and 2, we choose the exponential distribution (14). The output function then reads

  eqnarray367

where tex2html_wrap_inline1062 for method 1 or tex2html_wrap_inline1064 for method 2. The inverse function equals

  equation380

A second distribution we use for method 1 is a variation of the distribution given in (9). In this case the output function and its inverse read

   eqnarray386

For method 2 we use a uniform distribution (11) for the probability of passing some crucial state in the cell cycle, resulting in

   eqnarray398
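To make the qualitative effect of these output functions concrete, the following sketch blurs f(a) = cos(a) with a uniform window of width h. The moving-average model and the helper distorted_output below are illustrative assumptions only; they are not the output functions defined by the equations above.

import numpy as np

def distorted_output(f, t, h, n_quad=400):
    # Average the individual output f over the window [t - h, t].
    # This mimics the qualitative effect of averaging over a population
    # whose members lag each other by up to h time units.
    s = np.linspace(t - h, t, n_quad)
    return np.trapz(f(s), s) / h

t_grid = np.linspace(0.0, 4.0 * np.pi, 65)
blurred = np.array([distorted_output(np.cos, t, h=1.0) for t in t_grid])
# 'blurred' plays the role of an observed average output: for small h it stays
# close to cos(t), while for larger h the oscillation is increasingly damped.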

In figure 6, f(a), the probability distributions, and the resulting distorted output functions are plotted.

Figure 6: (A) The probability distributions used in the examples and (B) the function f(t)=cos(t) and its variation-distorted outputs.

Suppose we have a series of observed values of an output function F(t). How close do we get to f(a) if we apply the inverse functions to this series? In the case of the exponential distribution and the uniform distribution, in either model 1 or 2, we need estimates of the first derivative only. Derivatives are found numerically by differentiating spline functions running through the data points (see appendix C). Finding f(t) from the series of X(t) and Z(t) values is (not surprisingly) very accurate (fig. 7-A and 7-C). In the case of the distribution (9), method 1 requires an infinite series of derivatives of Y. A finite series of data points can at most be described by a polynomial of finite order, so truncation of the series of derivatives is unavoidable. However, a sufficient approximation of the original function does not require estimates of all the derivatives, as seen in fig. 7-B. This figure also shows that having more data points does not necessarily improve the approximation; rather, it is the quality of the estimates of the higher derivatives of the output function that makes the approximation better. Only once the right number of derivatives is included (in all our computations a third-order approximation turned out to be sufficient) do more data points result in a better fit (results not shown).
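The spline-differentiation step can be sketched as follows. This is only a minimal illustration, using scipy's make_interp_spline as a stand-in for the spline routines of appendix C, and a cosine series as a stand-in for observed output values.

import numpy as np
from scipy.interpolate import make_interp_spline

def estimate_derivatives(t_obs, F_obs, order=3, n_derivs=3):
    # Fit a spline of degree `order` through the data points and return
    # callables for the first `n_derivs` derivatives of the fitted curve.
    spl = make_interp_spline(t_obs, F_obs, k=order)
    return [spl.derivative(nu) for nu in range(1, n_derivs + 1)]

t_obs = np.linspace(0.0, 4.0 * np.pi, 65)      # 64 subintervals
F_obs = np.cos(t_obs)                          # stand-in for observed output values
dF, d2F, d3F = estimate_derivatives(t_obs, F_obs)
print(dF(np.pi), d2F(np.pi), d3F(np.pi))       # compare with the true values 0, 1, 0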

Figure 7: Results of retrieving f from a series of observed output values on the interval [0, 4π]. A) x(t) from X(t) using method 1. B) y(t) from Y(t) using method 1. C) z(t) from Z(t) using method 2. In the upper, middle, and lower graphs the interval was split into 64, 32, and 16 subintervals, respectively, and spline functions consisting of linear, quadratic, and third-order elements, respectively, were used to determine the required derivatives.
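The interplay between the number of subintervals and the spline order can be probed with a small experiment along the lines of figure 7. The cosine again stands in for an observed output, and the error measure (maximum absolute deviation of the estimated derivatives on a fine grid) is chosen here only for convenience.

import numpy as np
from scipy.interpolate import make_interp_spline

t_fine = np.linspace(0.0, 4.0 * np.pi, 2001)
for n_sub in (16, 32, 64):                     # numbers of subintervals, as in figure 7
    for k in (1, 2, 3):                        # linear, quadratic, cubic spline elements
        t_obs = np.linspace(0.0, 4.0 * np.pi, n_sub + 1)
        spl = make_interp_spline(t_obs, np.cos(t_obs), k=k)
        for nu in range(1, k + 1):             # only derivatives up to the spline order exist
            true_d = np.cos(t_fine + nu * np.pi / 2.0)   # nu-th derivative of cos(t)
            err = np.max(np.abs(spl.derivative(nu)(t_fine) - true_d))
            print(f"subintervals={n_sub:3d}  order={k}  derivative {nu}: max error {err:.2e}")

Higher-order splines give usable estimates of the higher derivatives, whereas adding data points to a low-order spline does not.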

In a real experiment we have to judge whether method 1 or method 2 applies. We used the examples with the uniform distributions to see what mistakes are made if the wrong inverse function is applied to the output function. One mistake is to use method 2 to obtain a synchronized output function from data generated by the output function underlying method 1, i.e. to apply (29) to Y(t) and take the result as an estimate of y(t). So we have

eqnarray436

Note that Y(0) is nonzero for most h, whereas data generated by an output function underlying method 2 necessarily has a zero value at t=0. Therefore we cannot apply eq. (29) directly. If in addition we assume that all cells in the culture have a constant output Y(0) during all phases of their life, applying eq. (29) results in

  eqnarray445

Figure 8 shows that the resulting estimate is very different from f(t) for the particular choice of h used here.

Figure 8: Results of retrieving f using the wrong synchronization method. For explanation see text.

Another mistake is evident if we use method 1 to synchronize data generated by an output function underlying method 2, i.e. we apply (27) to Z(t) to obtain an estimate of z(t).

  eqnarray462

For the same particular choice of h, the estimate obtained with method 1 also differs from f(t), but by much less than the estimate of the previous example (fig. 8). For sufficiently small h the difference vanishes altogether.

The reason that differences occur in this example, after applying the wrong method to the average output functions, is that method 1 implicitly assumes a history which directly influences the time course of the average output function at t=0. Such a history is absent in method 2. Mistakes are absent if the history is zero, i.e. if f(a)=0 for all ages present at t=0, or when the underlying distribution is exponential. Errors made by using the wrong method become smaller when the synchrony of the population is increased (in our particular example, small values of h).

