User1456
2 Posts
Posted - 10/29/2025 : 04:18:18 AM
Origin Ver. and Service Release (Select Help-->About Origin): 10.1.0.178 (Academic) Operating System: Windows
When using the "Compare Datasets" function in OriginPro to compare how the fits of two datasets to the same model differ, do I need to test any assumptions before using this tool? The documentation page does not list any requirements, but it does say the function uses an F-test, which usually applies only to normally distributed data. So:
1. Does the data need to be normally distributed to use "Compare Datasets"?
2. If yes, is there a nonparametric alternative?
Thanks!
NadeeshaSupport
USA
7 Posts
Posted - 10/29/2025 : 1:50:47 PM
Hello,
Origin does not perform any assumption checking in the "Compare Datasets" function. When you input the two fits into the tool, it calculates the F statistic and reports it.
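For readers who want to see what is being computed, the usual form of this comparison is an extra sum-of-squares F-test, in which the two datasets are fitted separately and also as one combined dataset. The expression below is that standard form, written out here as an assumption rather than quoted from Origin's documentation:

```latex
% Assumed extra sum-of-squares form: RSS_separate = RSS_1 + RSS_2 from the two
% individual fits, RSS_combined from fitting both datasets together, and df the
% corresponding residual degrees of freedom.
F = \frac{\left(\mathrm{RSS}_{\mathrm{combined}} - \mathrm{RSS}_{\mathrm{separate}}\right)
          / \left(df_{\mathrm{combined}} - df_{\mathrm{separate}}\right)}
         {\mathrm{RSS}_{\mathrm{separate}} / df_{\mathrm{separate}}}
```

A large F (small p-value) indicates that fitting the datasets separately improves the fit enough that one set of parameters is not adequate for both.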
You can check for normality yourself. After fitting the two datasets, check the residual plots and make sure there are no patterns. Alternatively, Origin outputs the residuals, and you can run them through Statistics > Descriptive Statistics > Normality Test.
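If you prefer to script the residual check outside of Origin, here is a minimal Python sketch of the same idea, using SciPy's Shapiro-Wilk test in place of the Normality Test dialog (the data and the exponential model are hypothetical placeholders):

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical example data: x values and measured y values for one dataset.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3.0 * np.exp(-0.4 * x) + rng.normal(scale=0.05, size=x.size)

# Fit the same model you used in Origin (an exponential decay is assumed here).
def model(x, a, k):
    return a * np.exp(-k * x)

params, _ = optimize.curve_fit(model, x, y, p0=(1.0, 0.1))

# Residuals = observed - fitted; these correspond to the residuals Origin outputs.
residuals = y - model(x, *params)

# Shapiro-Wilk test: null hypothesis is that the residuals are normally distributed.
stat, p_value = stats.shapiro(residuals)
print(f"Shapiro-Wilk W = {stat:.3f}, p = {p_value:.3f}")
# p > 0.05 -> no evidence against normality; p <= 0.05 -> normality is doubtful.
```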
1. Yes, the F-test is sensitive to departures from normality, so this assumption should be checked before reporting the results of the F-test.
2. Origin does not currently offer a readily available nonparametric alternative. If you would like to share an OPJU file or describe what you are trying to do, we can assist you further.
Thanks, Nadeesha
User1456
2 Posts
Posted - 10/29/2025 : 2:40:06 PM
Thank you, Nadeesha.
I have fit an exponential model to 20 datasets, and I would like to test whether the model fit differs significantly between datasets. The sample sizes range from 18 to 150 data points per dataset.
Unfortunately, the residuals are not normally distributed (Shapiro-Wilk p > 0.05) and show a very clear pattern, so I am not able to confidently use the Compare Datasets function.
Do you have any recommendations for another approach in Origin that can handle non-normally distributed data and that will allow me to test model outputs against each other for significant differences?
Thanks!
NadeeshaSupport
USA
7 Posts
Posted - 10/29/2025 : 4:51:14 PM
Hello,
For the Shapiro-Wilk test, the null hypothesis is that the data follow a normal distribution; the alternative hypothesis is that they do not. With a p-value > 0.05, the null is not rejected, so the test you report gives no evidence against normality for your residuals.
Instead of checking the 20 datasets one by one, you can also do something like this:
1. Fit a reduced model whose parameters are shared by all datasets.
2. Fit a full model that includes group-specific parameters.
3. Run "Compare Models" to see whether there is added value in including the group-specific parameters. The null hypothesis is that the reduced model performs just as well as the full model; the alternative is that the full model performs differently (better) than the reduced model.
If the normality test or residual plots show that the normality assumption does not hold, use the AIC and BIC values to draw conclusions about your reduced and full models instead.
https://www.originlab.com/doc/Origin-Help/GlobalFitting-Param-Sharing
https://www.originlab.com/doc/Tutorials/Fitting-NLFit-Built-in
https://www.originlab.com/doc/Origin-Help/PostFit-CompareFitFunc-Dialog
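As an illustration of the reduced-versus-full comparison outside of Origin, the following Python sketch fits a shared-parameter model and a group-specific model to some hypothetical exponential-decay data, then computes the extra sum-of-squares F-test and Gaussian-likelihood AIC/BIC. The formulas are the standard textbook ones and may differ in detail from what Compare Models reports:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

# Hypothetical data: a few datasets following y = a * exp(-k * x) + noise.
# Full model: each group has its own (a, k). Reduced model: one shared (a, k).
n_groups, n_points = 3, 40
x_single = np.linspace(0, 10, n_points)
x = np.tile(x_single, n_groups)
group = np.repeat(np.arange(n_groups), n_points)
true_a, true_k = [3.0, 3.2, 2.8], [0.40, 0.45, 0.35]
y = np.concatenate([a * np.exp(-k * x_single) + rng.normal(scale=0.05, size=n_points)
                    for a, k in zip(true_a, true_k)])

# Reduced model: one shared (a, k) for all datasets.
def reduced(x, a, k):
    return a * np.exp(-k * x)

p_red, _ = optimize.curve_fit(reduced, x, y, p0=(1.0, 0.1))
rss_red = np.sum((y - reduced(x, *p_red)) ** 2)
k_red = 2  # fitted parameters in the reduced model

# Full model: group-specific (a, k), fitted jointly.
def full(xg, *params):
    xv, gv = xg                       # curve_fit passes (x, group) stacked as floats
    idx = gv.astype(int)
    a = np.asarray(params[:n_groups])[idx]
    k = np.asarray(params[n_groups:])[idx]
    return a * np.exp(-k * xv)

p_full, _ = optimize.curve_fit(full, (x, group), y,
                               p0=[1.0] * n_groups + [0.1] * n_groups)
rss_full = np.sum((y - full((x, group), *p_full)) ** 2)
k_full = 2 * n_groups

n = y.size
# Extra sum-of-squares F-test: null = reduced model is adequate.
df1, df2 = k_full - k_red, n - k_full
F = ((rss_red - rss_full) / df1) / (rss_full / df2)
p_value = stats.f.sf(F, df1, df2)
print(f"F = {F:.2f}, p = {p_value:.4g}")

# AIC/BIC (Gaussian-likelihood form, constants dropped); lower is better and
# no normality-based p-value is involved.
def aic_bic(rss, k):
    return n * np.log(rss / n) + 2 * k, n * np.log(rss / n) + k * np.log(n)

print("reduced AIC, BIC:", aic_bic(rss_red, k_red))
print("full    AIC, BIC:", aic_bic(rss_full, k_full))
```

In Origin itself the same comparison would be set up through global fitting with parameter sharing and the Compare Models dialog linked above; the sketch only mirrors the logic of that workflow.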
Thanks, Nadeesha