
Example 1


[NEW]

DATA: [ [1.05   0.11    0.02]
        [2.03   0.233   0.03]
        [3.02   0.694   0.02]
        [4.0    2.130   0.02]
        [5.05   0.638   0.02]
        [6.03   0.219   0.02]
        [7.02   0.105   0.03]
        [8.0    0.061   0.02] ]

The data show a peak; therefore it seems reasonable to try

MODEL:   '1/(A+(X-B)^2)'

[OK]

The peak seems to be located between x=3 and x=5 ...
this motivates the following choice for the start values:

 A   1.0
 B   3.5

 [OK]

 1 [LEMA]                                       ( 1 iteration )
            CHI2: 4783.019                   ( quite large ...)
 1 [LEMA]
            CHI2: 193.706                     ( it's getting better )
 1 [LEMA]
            CHI2: .116                          ( good enough )
 [INFO]
            Cova: [ [  1.90E-5  -1.22E-6  ]
                    [ -1.22E-6   2.34E-4  ] ]
            CHI2: .1167
            PROB: .9999                       ( probability of nearly 1 )
 [SPAR]
            A: .46914                           ( estimates for the parameters A and B )
            B: 3.9997
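
For readers who want to check these numbers outside the program, here is a minimal
sketch in Python (not part of the program itself) that fits the same data with the
same model and start values. It assumes SciPy's Levenberg-Marquardt based curve_fit,
takes the third data column as the measurement error of y, and computes PROB as the
usual chi-square goodness-of-fit probability for N - M = 8 - 2 = 6 degrees of freedom.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

# data columns: x, y, sigma (measurement error of y)
data = np.array([
    [1.05, 0.11,  0.02],
    [2.03, 0.233, 0.03],
    [3.02, 0.694, 0.02],
    [4.0,  2.130, 0.02],
    [5.05, 0.638, 0.02],
    [6.03, 0.219, 0.02],
    [7.02, 0.105, 0.03],
    [8.0,  0.061, 0.02],
])
x, y, sigma = data.T

def f(x, A, B):
    # model function '1/(A+(X-B)^2)'
    return 1.0 / (A + (x - B) ** 2)

# start values A = 1.0, B = 3.5 as in example 1
popt, pcov = curve_fit(f, x, y, p0=[1.0, 3.5], sigma=sigma, absolute_sigma=True)

# weighted chi^2 and goodness-of-fit probability (8 data points, 2 parameters)
chi_sq = np.sum(((y - f(x, *popt)) / sigma) ** 2)
prob = chi2.sf(chi_sq, df=len(x) - len(popt))

print("A, B       :", popt)   # should come out close to A ~ 0.469, B ~ 4.000
print("covariance :", pcov)
print("chi^2      :", chi_sq)
print("PROB       :", prob)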

 Here are some suggestions for further 'explorations':
 



Example 2


In example 1 it took only three iterations to obtain a very good fit. This
was due to the choice of reasonable start values for the parameters and
an appropriate model function. The influence of the start values is
demonstrated by the following example:
(Same data and model function f as in example 1.)

 [NEW] [OK]

 A 5.0
 B 0.1

 [OK]

 1 [LEMA]
            CHI2: 12827.1304                     ( hm, quite large )
 1 [LEMA]
            CHI2: 12453.5042                     ( nearly unchanged ... therefore: )
 4 [LEMA]                                            ( let's take 4 iterations at once! )
            CHI2: 10653.5412
 3 [LEMA]                                            ( 3 iterations at once )
            CHI2: 5968.9701                      ( algorithm begins to find its way )
 2 [LEMA]
            CHI2: 4777.3626
 2 [LEMA]
            CHI2: 3851.0126
 2 [LEMA]
            CHI2: 9.1893                           ( already quite good )
 1 [LEMA]
            CHI2: 0.04534                         ( certainly good enough )

 [INFO]
            Cova: [ [  1.90E-5  -3.01E-6  ]
                    [ -3.01E-6   2.34E-4  ] ]
            CHI2: .04534
            PROB: .99999
 [SPAR]
            A: .46906
            B: 4.0039
 
 

Of course, the algorithm yields essentially the same values for the parameters A and B
as in example 1. But it took 16 iterations to obtain a set of values a_i
satisfying chi^2 < 1.
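
The effect of the start values can also be seen with a small, self-contained sketch
(again assuming SciPy; its solver reports the number of function evaluations rather
than the iteration counts shown above, but the same effect should show up: the poor
start values of example 2 make the solver work considerably harder).

import numpy as np
from scipy.optimize import least_squares

data = np.array([[1.05, 0.11, 0.02], [2.03, 0.233, 0.03], [3.02, 0.694, 0.02],
                 [4.0, 2.130, 0.02], [5.05, 0.638, 0.02], [6.03, 0.219, 0.02],
                 [7.02, 0.105, 0.03], [8.0, 0.061, 0.02]])
x, y, sigma = data.T

def residuals(params, x, y, sigma):
    # weighted residuals of the model '1/(A+(X-B)^2)'
    A, B = params
    return (y - 1.0 / (A + (x - B) ** 2)) / sigma

for start in ([1.0, 3.5],    # example 1: reasonable start values
              [5.0, 0.1]):   # example 2: poor start values
    res = least_squares(residuals, start, args=(x, y, sigma), method='lm')
    print(start, "->", res.x, " chi^2 =", 2 * res.cost, " evaluations =", res.nfev)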

How does one find good start values? There is no general prescription or
method for finding them. One has to rely on intuition,
but it helps if you have an idea of how the graph of f changes when you
change the values of the parameters.
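
One way to build that intuition is simply to plot the model for a few parameter
values. The sketch below (assuming NumPy and matplotlib, not part of the fitting
program) shows that B shifts the position of the peak while A sets its height
(1/A at x = B) and its width.

import numpy as np
import matplotlib.pyplot as plt

xs = np.linspace(0, 9, 400)
for A, B in [(0.5, 4.0), (2.0, 4.0), (0.5, 2.0)]:
    plt.plot(xs, 1.0 / (A + (xs - B) ** 2), label=f"A={A}, B={B}")

# overlay the data of example 1 to see which curve is roughly right
plt.scatter([1.05, 2.03, 3.02, 4.0, 5.05, 6.03, 7.02, 8.0],
            [0.11, 0.233, 0.694, 2.130, 0.638, 0.219, 0.105, 0.061],
            color="black", label="data")
plt.legend()
plt.show()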

The value of chi^2 will stabilize after a certain number of iterations. This indicates
that the algorithm has determined the values of the parameters a_i as well as
possible. However, it might be that you have found only a local minimum of the
chi^2 merit function. To be more explicit: the value of chi^2 depends on the
data, the model function and the values of the parameters. But the data and
the model function are considered fixed during a fitting session, so chi^2 is
effectively a function of the parameters a_i only. The algorithm tries to
adjust the parameters a_i such that they correspond to a minimum of the chi^2
function. However, this minimum might be a local one rather than the global one.
The lesson to be learned: choose 'good' start values.
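
To make the last point concrete, the sketch below (assuming NumPy and the data of
example 1) treats chi^2 as a function of the parameters only and evaluates it on a
grid of (A, B) values. The smallest value on the grid should lie close to the fitted
parameters above, while the broad region far away from it, where chi^2 changes only
slowly, may help explain why example 2 made little progress in its first iterations.

import numpy as np

data = np.array([[1.05, 0.11, 0.02], [2.03, 0.233, 0.03], [3.02, 0.694, 0.02],
                 [4.0, 2.130, 0.02], [5.05, 0.638, 0.02], [6.03, 0.219, 0.02],
                 [7.02, 0.105, 0.03], [8.0, 0.061, 0.02]])
x, y, sigma = data.T

def chi_sq(A, B):
    # chi^2 for fixed data and model, viewed as a function of the parameters only
    return np.sum(((y - 1.0 / (A + (x - B) ** 2)) / sigma) ** 2)

A_grid = np.linspace(0.1, 5.0, 50)
B_grid = np.linspace(0.0, 9.0, 90)
values = np.array([[chi_sq(A, B) for B in B_grid] for A in A_grid])

i, j = np.unravel_index(np.argmin(values), values.shape)
print("smallest chi^2 on the grid:", values[i, j],
      "at A =", A_grid[i], ", B =", B_grid[j])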