Author | Topic
akhan
China
3 Posts
Posted - 05/15/2003 : 11:33:01 PM
While fitting my data with my own function (something like a sum of two normal distributions), I found that the result depended very much on the initial values of the parameters, and the fits usually ended with different R^2 values. Sometimes small changes in the initial parameter values result in quite different fitting results. What should I do to get the best result? Is there any way to let Origin find the best fitting curve more automatically?
easwar
USA
1965 Posts
Posted - 05/16/2003 : 09:04:59 AM
Hi,
The minimization process tries to find a set of parameter values that minimize the chi-sq value. Your data and choice of function could lead to situations where the chi-sq surface is flat (does not change appreciably over a range of values for each/some parameters). In such a case, you could obtain different parameter values depending on what initial values you started from. Do you see large errors on your parameters? Do you see large dependencies between parameters? Did you perform enough iterations in each case - in other words, did the fit converge?
You could try sharing parameters in your function (say, the widths of the two distributions, if that makes physical sense) to cut down on the number of parameters. You could also try setting lower and upper bounds on parameters (based on what makes physical sense for their values) to prevent the minimization from moving parameters into regions/values that do not make sense.
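The advice above applies inside Origin's fitter, but the effect of initial values and bounds is easy to see in a short script. The sketch below (Python with NumPy/SciPy, purely illustrative; the two-Gaussian function, the synthetic data, and the bound values are my assumptions, not the poster's actual model) fits a sum of two Gaussians from two different starting points, with physically motivated bounds keeping both fits in a sensible region:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sum of two Gaussians, similar to the poster's model (names are illustrative)
def two_gauss(x, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-((x - mu1) ** 2) / (2 * s1 ** 2))
            + a2 * np.exp(-((x - mu2) ** 2) / (2 * s2 ** 2)))

# Synthetic data with a little noise, so we know the true parameters
rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 200)
true = (1.0, -1.5, 0.6, 0.7, 1.5, 0.8)
y = two_gauss(x, *true) + rng.normal(0, 0.01, x.size)

# Two different initial guesses; without constraints these can land in
# different local minima of the chi-square surface.
p0_a = (1.0, -1.0, 1.0, 1.0, 1.0, 1.0)
p0_b = (0.5, -3.0, 2.0, 0.5, 3.0, 2.0)

# Physically motivated bounds: amplitudes and widths positive,
# peak centers inside the data range.
lower = (0, -5, 0.01, 0, -5, 0.01)
upper = (5, 5, 5.00, 5, 5, 5.00)

popt_a, _ = curve_fit(two_gauss, x, y, p0=p0_a, bounds=(lower, upper))
popt_b, _ = curve_fit(two_gauss, x, y, p0=p0_b, bounds=(lower, upper))
```

With the bounds in place, both starting points converge to essentially the same parameter set; removing the `bounds` argument and making the guesses worse is a quick way to reproduce the starting-value sensitivity described in the question.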
If you need further help, please send your OPJ to your nearest tech support office. Also, take a look at this article: http://www.tipmagazine.com/tip/INPHFA/vol-9/iss-2/p24.html
Easwar, OriginLab
Edited by - easwar on 05/16/2003 09:07:05 AM
akhan
China
3 Posts
Posted - 05/17/2003 : 8:04:19 PM
Hi,
Yes, the dependencies were usually very close to 1. However, the fit converged and the errors were not very large (less than 0.1). Must I reduce the number of parameters? Do the large dependencies mean that the model doesn't explain the data well?
Thanks a lot for your help!
easwar
USA
1965 Posts
Posted - 05/19/2003 : 10:08:08 AM
Hi,
Yes, large dependencies could mean over-parametrization: one or more parameters could be eliminated, or set as fixed (if fixing makes sense).
For example, the function y = a * exp( b * (x - x0) ) is over-parametrized. It can be rewritten as y = a * exp( -b * x0 ) * exp( b * x ), and then as y = c * exp( b * x ) with c = a * exp( -b * x0 ), reducing the number of parameters.
However, you may want to fit with the first expression involving x0 when your data does not start at x = 0 and x0 represents an offset. In that case you could fix x0 to the "known" offset to avoid over-parametrization.
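As a quick numeric check of the identity above (a throwaway Python snippet, not Origin code; the parameter values are arbitrary), the three-parameter form and the two-parameter form agree exactly once c absorbs a and x0:

```python
import numpy as np

a, b, x0 = 2.0, 0.5, 3.0            # arbitrary illustrative values
x = np.linspace(0.0, 10.0, 11)

y_over = a * np.exp(b * (x - x0))   # over-parametrized form: a, b, x0
c = a * np.exp(-b * x0)             # a and x0 collapse into one constant
y_reduced = c * np.exp(b * x)       # equivalent two-parameter form: c, b

print(np.allclose(y_over, y_reduced))  # → True
```

This is exactly why a fitter cannot determine a and x0 separately in the first form: only the combination a * exp( -b * x0 ) affects the curve, so the two parameters show a dependency near 1.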
Easwar, OriginLab
Edited by - easwar on 05/19/2003 10:10:36 AM