My question concerns nonlinear fitting of extremely large datasets (a 1D function with about 100 000 records in a workbook). The Origin help says that the Levenberg-Marquardt method is used for this purpose. Does Origin apply any preprocessing to large datasets (mainly to reduce the dataset size) before running L-M? If it does, which algorithm is that? I would also be very glad to receive some references.
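For context (I can't speak to Origin's internals): Levenberg-Marquardt itself does not require any dataset reduction; implementations such as SciPy's `curve_fit` (which wraps MINPACK's L-M) simply feed all points into every iteration. A minimal sketch on a synthetic ~100 000-point exponential dataset (the model and parameter values here are made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic 1D dataset of ~100 000 records, standing in for the workbook data
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 100_000)
y = 2.5 * np.exp(-0.7 * x) + 0.3 + rng.normal(0.0, 0.02, x.size)

def model(x, a, b, c):
    """Hypothetical 1D model: exponential decay plus offset."""
    return a * np.exp(-b * x) + c

# method="lm" selects Levenberg-Marquardt (via MINPACK); no preprocessing
# or subsampling is done -- all points enter each iteration directly.
popt, pcov = curve_fit(model, x, y, p0=(1.0, 1.0, 0.0), method="lm")
print(popt)
```

With this many points the fit converges quickly despite the full dataset being used at every step, so a reduction pass is not strictly necessary for L-M; whether Origin nonetheless performs one is exactly the question.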