TOPIC REVIEW
Cytheria
Posted - 06/13/2011 : 03:13:29 AM

My question concerns nonlinear fitting of extremely large datasets (a 1D function with about 100,000 rows in a workbook). The Origin help says that the Levenberg-Marquardt (L-M) method is used for this purpose.
Does Origin apply any preprocessing to large datasets (for example, to reduce the dataset) before applying L-M?
And if it does, what is that algorithm?
Thanks.
2 LATEST REPLIES (Newest First)
easwar
Posted - 06/13/2011 : 5:31:23 PM

quote: Originally posted by Cytheria
Does Origin apply any preprocessing to large datasets (for example, to reduce the dataset) before applying L-M?
Hi,
You can first reduce your dataset to a smaller size using the tools under the Analysis->Data Manipulation menu, and then fit the reduced dataset.
If you set both the reduction operation and the fit to auto update, then once you have a satisfactory fit you can change the reduction factor to, say, 1, and let the fit recalculate over the full dataset if desired.
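For anyone who wants to prototype this reduce-then-refit workflow outside Origin, here is a minimal sketch using Python/SciPy. This is not Origin's internal code; the model function, the reduction factor of 50, and the noise level are all illustrative assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, b, c):
        # Illustrative fitting function; substitute your own model.
        return a * np.exp(-b * x) + c

    # Synthetic stand-in for a ~100,000-row worksheet column pair.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 100_000)
    y = model(x, 2.5, 0.7, 1.0) + rng.normal(scale=0.05, size=x.size)

    # Step 1: reduce the dataset (keep every 50th point) and fit that.
    factor = 50  # assumed reduction factor
    p_reduced, _ = curve_fit(model, x[::factor], y[::factor], method='lm')

    # Step 2: "change the reduction factor to 1", i.e. refit the full
    # dataset, seeded with the parameters from the reduced fit.
    p_full, _ = curve_fit(model, x, y, p0=p_reduced, method='lm')
    print(p_full)

Since the L-M implementation wrapped by curve_fit handles a 100,000-point 1D fit directly, the reduced pass here mainly serves to find good starting values quickly before the full fit.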
Perhaps you can explain in more detail what you are trying to do.
Easwar
OriginLab
larry_lan
Posted - 06/13/2011 : 5:13:58 PM

Hi:
There is no special preprocessing of large datasets before L-M is applied.
Thanks,
Larry
OriginLab