The Origin Forum
 NLSF fit routine crashes when called from Origin C

obauer

Germany
15 Posts

Posted - 06/26/2009 :  11:55:22 AM
Origin Ver. and Service Release (Select Help-->About Origin): OriginPro 8 SR4
Operating System: Windows XP

Hello everyone!

I have written an NLSF routine (i.e. an FDF file) which runs smoothly when I start it from the "Non-Linear Curve Fit Dialog". That routine includes two convolutions (first a convolution of two theoretical curves, then a second convolution with a Gaussian). It takes about 2 min to run on a dataset of 30 data points. Yet if I call this very routine from Origin C code via "using NLSF = LabTalk.NLSF;", two things can happen: either Origin crashes, or the routine runs for 100 iterations (as stated in the code) and does not converge. To make things even worse, these 100 iterations take more than 1 hour!

Of course this is very inconvenient. My question is: has something like this happened to anyone else? What could be the reason?

My feeling is that the FDF routine itself is OK, since it works fine and gives reasonable results when started from the "Non-Linear Curve Fit Dialog". Nonetheless, I will paste the routine below. Maybe there are incompatibilities when calling it from OC?

I would really appreciate your comments and advice, since I have been struggling with this issue for days now... Thank you so much in advance!

best regards,
Oliver



The fdf file starts here:

********************************************************

[General Information]
Function Name = XSW_ReflectivityFit_Ag100_RmonoConvolution_test5
Brief Description = fit to experimental reflectivity (including convolution with Gaussian function)
Function Source = fgroup.NewFunction
Number Of Parameters = 4
Function Type = User-Defined
Function Form = Equations
Path =
Number Of Independent Variables = 1
Number Of Dependent Variables = 1
FunctionPrev = \\pcfs1\abt-soko$\bauer\OriginLab\Origin8\Anwenderdateien\fitfunc\NewFunction.fdf


[Fitting Parameters]
Names = y0,A,wG,xcG
Initial Values = --(V),--(V),--(V),--(V),--(F),--(F),--(F),--(F),--(F),--(F),--(F),--(F),--(F),--(F),--(F),--(F),--(F),--(F),--(F),--(F),--(F),--(F),--(F),--(F),--(F)
Meanings = y offset,amplitude,Gaussian width,Gaussian center,lower bound for integration,upper bound for integration,integration stepsize,Bragg energy (eV),Bragg angle (rad),Sigma,structure factor Re(F0),structure factor Im(F0),structure factor Re(FH),structure factor Im(FH),Debye-Waller factor,asymmetry parameter,Monochromator Bragg energy (eV),Monochromator Bragg angle (rad),Monochromator Sigma,Monochromator structure factor Re(F0),Monochromator structure factor Im(F0),Monochromator structure factor Re(FH),Monochromator structure factor Im(FH),Monochromator Debye-Waller factor,Monochromator asymmetry parameter
Lower Bounds = --(X, Off),0.000000(X, On),0.000000(X, On),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off)
Upper Bounds = --(X, Off),--(X, Off),--(I, Off),--(I, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off),--(X, Off)
Naming Method = User-Defined
Number Of Significant Digits = -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1


[Independent Variables]
x =


[Dependent Variables]
y =


[Formula]



//********************************************

double E_Bragg = 3.03489 * 1000.0;
double theta_Bragg = 90 * PI/180;
double Sigma = 2.195624e-006;
double F0_real = 151.4992;
double F0_imag = 15.33788;
double FH_real = 105.4824;
double FH_imag = 15.33788;
double DW = 0.98;
double b = -1.0;


double E_Bragg_mono = 3.03489 * 1000.0;
double theta_Bragg_mono = 40.6 * PI/180;
double Sigma_mono = 9.346760e-007;
double F0_mono_real = 113.775;
double F0_mono_imag = 15.3858;
double FH_mono_real = -60.8679;
double FH_mono_imag = -10.8794;
double DW_mono = 0.988344;
double b_mono = -1.0;


double t_initial = 3.027 * 1000.0;
double t_final = 3.0426 * 1000.0;
double dt = 0.0001 * 1000.0;





//*********************************************






// Parameters for integration

double dIntegral = 0.0;
double dInt = 0.0;
double t = t_initial; // The integration interval ranges over experimental photon energy range +/- some eV. The latter is necessary to ensure the quality of the fit at the boundaries of the experimental photon energy range.

double dSize = (t_final - t_initial) / dt + 1.0;
int nSize; // number of simulated data points in the reflectivity curves
nSize = (int) (dSize + 0.5); // round to the nearest integer



//printf("\n nSize = %i \n", nSize);
//



double F[10000], G1, G2; // F = convolved theoretical reflectivity (filled from dsResponse below); G1, G2 = Gaussian response values



// Definition of created Datasets

Dataset dsReflectivity; // substrate reflectivity
dsReflectivity.Create(nSize);

Dataset dsReflectivity_mono; // monochromator reflectivity
dsReflectivity_mono.Create(nSize);

Dataset dsReflectivity_mono_squared; // squared monochromator reflectivity
dsReflectivity_mono_squared.Create(nSize);

Dataset dsReflectivity_mono_squared_modified; // squared monochromator reflectivity, "wrapped around" and normalised
dsReflectivity_mono_squared_modified.Create(nSize);


Dataset dsResponse;
dsResponse.Create(nSize);




// Parameters for reflectivity calculation


// substrate Reflectivity

double Delta_E[10000]; // deviation from calculated Bragg energy in eV
double E_photon[10000]; // photon energy in eV = E_Bragg + Delta_E

// definition of complex structure factors F0, FH
complex F0(F0_real,F0_imag);
complex FH(FH_real,FH_imag);

// temperature correction of reflected intensity
FH = FH*DW;

// Polarisation P = cos (2 * theta_Bragg)
double P;
P = cos ( 2 * theta_Bragg);


// Calculation of eta and Reflectivity = |b| * |eta +- (eta^2 -1)^0.5|^2

complex eta[10000]; // eta = ((-2.0*pow(sin(theta_Bragg),2.0)*Delta_E[ii]/E_photon[ii] + Sigma * F0) / (|P| * Sigma * FH)
complex z0[10000]; // z0 = ((-2.0*pow(sin(theta_Bragg),2.0)*Delta_E[ii]/E_photon[ii] + Sigma * F0)
complex z1[10000]; // z1 = eta^2
complex z2[10000]; // z2 = eta^2 - 1
complex z3[10000]; // z3 = (eta^2 - 1)^0.5
double d1[10000]; // d1 = | eta +/- (eta^2 - 1)^0.5 |
double d2[10000]; // d2 = Re(eta)
double Reflectivity[10000]; // Reflectivity = |b| * | eta +/- (eta^2 - 1)^0.5 |^2



for (int ii = 0; ii < nSize; ii ++)
{
Delta_E[ii] = (t_initial + ii * dt) - E_Bragg; // in eV
E_photon[ii] = (t_initial + ii * dt);
z0[ii] = -2.0*pow(sin(theta_Bragg),2.0)*Delta_E[ii]/E_photon[ii] + Sigma * F0;
eta[ii] = z0[ii] / (abs(P) * Sigma * FH);
z1[ii] = cpow(eta[ii],2.0+0i);
z2[ii] = z1[ii] - 1;
z3[ii] = sqrt(z2[ii]);
d2[ii] = eta[ii].m_re;

if (d2[ii] < 0)
{
d1[ii] = cabs( eta[ii] + z3[ii] );
Reflectivity[ii] = abs(b) * pow( d1[ii], 2.0 );
}
else
{
d1[ii] = cabs( eta[ii] - z3[ii] );
Reflectivity[ii] = abs(b) * pow( d1[ii], 2.0 );
}
}



// monochromator Reflectivity

double Delta_E_mono[10000]; // deviation from calculated Bragg energy in eV
double E_photon_mono[10000]; // photon energy in eV = E_Bragg + Delta_E

// definition of complex structure factors F0, FH
complex F0_mono(F0_mono_real,F0_mono_imag);
complex FH_mono(FH_mono_real,FH_mono_imag);

// temperature correction of reflected intensity
FH_mono = FH_mono * DW_mono;

// Polarisation P = cos (2 * theta_Bragg)
double P_mono;
P_mono = cos ( 2 * theta_Bragg_mono);


// Calculation of eta and Reflectivity = |b| * |eta +- (eta^2 -1)^0.5|^2

complex eta_mono[10000]; // eta = ((-2.0*pow(sin(theta_Bragg),2.0)*Delta_E[ii]/E_photon[ii] + Sigma * F0) / (|P| * Sigma * FH)
complex z0_mono[10000]; // z0 = ((-2.0*pow(sin(theta_Bragg),2.0)*Delta_E[ii]/E_photon[ii] + Sigma * F0)
complex z1_mono[10000]; // z1 = eta^2
complex z2_mono[10000]; // z2 = eta^2 - 1
complex z3_mono[10000]; // z3 = (eta^2 - 1)^0.5
double d1_mono[10000]; // d1 = | eta +/- (eta^2 - 1)^0.5 |
double d2_mono[10000]; // d2 = Re(eta)
double Reflectivity_mono[10000]; // Reflectivity = |b| * | eta +/- (eta^2 - 1)^0.5 |^2
double Reflectivity_mono_squared[10000]; // Reflectivity^2 = ( |b| * | eta +/- (eta^2 - 1)^0.5 |^2 ) ^2
double Reflectivity_mono_squared_modified[10000]; // Reflectivity^2 = ( |b| * | eta +/- (eta^2 - 1)^0.5 |^2 ) ^2, "wrapped around" and normalised


for (int jj = 0; jj < nSize; jj ++)
{
Delta_E_mono[jj] = (t_initial + jj * dt) - E_Bragg_mono; // in eV
E_photon_mono[jj] = (t_initial + jj * dt);
z0_mono[jj] = -2.0*pow(sin(theta_Bragg_mono),2.0)*Delta_E_mono[jj]/E_photon_mono[jj] + Sigma_mono * F0_mono;
eta_mono[jj] = z0_mono[jj] / (abs(P_mono) * Sigma_mono * FH_mono);
z1_mono[jj] = cpow(eta_mono[jj],2.0+0i);
z2_mono[jj] = z1_mono[jj] - 1;
z3_mono[jj] = sqrt(z2_mono[jj]);
d2_mono[jj] = eta_mono[jj].m_re;

if (d2_mono[jj] < 0)
{
d1_mono[jj] = cabs( eta_mono[jj] + z3_mono[jj] );
Reflectivity_mono[jj] = abs(b_mono) * pow( d1_mono[jj], 2.0 );
}
else
{
d1_mono[jj] = cabs( eta_mono[jj] - z3_mono[jj] );
Reflectivity_mono[jj] = abs(b) * pow( d1_mono[jj], 2.0 );
}

Reflectivity_mono_squared[jj] = pow ( Reflectivity_mono[jj], 2.0 );
}





//*********** Statistics on squared monochromator reflectivity data: used for modification of the data in order to perform convolution ***********

for(int kk = 0; kk < nSize; kk++)
{
dsReflectivity[kk] = Reflectivity[kk];
dsReflectivity_mono[kk] = Reflectivity_mono[kk];
dsReflectivity_mono_squared[kk] = Reflectivity_mono_squared[kk];
}


// Prepare everything for the convolution of substrate reflectivity and squared monochromator reflectivity

// Take response function and "wrap it around" so that the point with the max value is now the first point in the response dataset. Also normalize sum to 1
// Also find the row index of the maximum value in the response (= squared monochromator reflectivity)

BasicStats bsStats; // Data structure of the output statistics
Data_sum(&dsReflectivity_mono_squared, &bsStats); // returns sum of data in specified dataset
double Reflectivity_mono_squared_sum = bsStats.total; // sum of data
int Reflectivity_mono_squared_Max_iRow = bsStats.iMax; // index of row which holds maximum of squared monochromator reflectivity


for (int ll = 0; ll < nSize; ll++)
{
if (ll >= Reflectivity_mono_squared_Max_iRow)
{
Reflectivity_mono_squared_modified[ll - Reflectivity_mono_squared_Max_iRow] = Reflectivity_mono_squared[ll] / Reflectivity_mono_squared_sum;
dsReflectivity_mono_squared_modified[ll - Reflectivity_mono_squared_Max_iRow] = Reflectivity_mono_squared_modified[ll - Reflectivity_mono_squared_Max_iRow];
}
if (ll < Reflectivity_mono_squared_Max_iRow)
{
Reflectivity_mono_squared_modified[nSize + ll - Reflectivity_mono_squared_Max_iRow] = Reflectivity_mono_squared[ll] / Reflectivity_mono_squared_sum;
dsReflectivity_mono_squared_modified[nSize + ll - Reflectivity_mono_squared_Max_iRow] = Reflectivity_mono_squared_modified[nSize + ll - Reflectivity_mono_squared_Max_iRow];
}
}


int convolution1;
vector x1 = dsReflectivity;
vector y1 = dsReflectivity_mono_squared_modified;

convolution1 = fft_fft_convolution(nSize, x1, y1);
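// note: the convolution result is returned in the signal vector x1, hence the assignment to dsResponse on the next line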
dsResponse = x1;





//++++++++++++++++++++++++++++++++++++++++++++++++


// response function

G1 = 1 / (wG * sqrt(2.0 * PI)) * exp( -0.5*((x - xcG) - t)^2/wG^2 );



// theoretical Reflectivity curve: Assign function values F(t) by dataset index [mm] of dataset "dsResponse".
for(int mm = 0; mm < nSize; mm++)
{
F[mm] = dsResponse[mm];
}




// The following lines actually integrate the function; the trapezoidal rule is used.

do
{
// response function

G2 = 1 / (wG * sqrt(2.0 * PI)) * exp( -0.5*((x - xcG) - (t + dt))^2/wG^2 );


// employing trapezoidal rule for integration

dInt = 0.5 * (G1 * F[(t - t_initial) / dt] + G2 * F[(t + dt - t_initial) / dt]) * dt ;
dIntegral += dInt;
t +=dt;

G1 = G2;

}while (t < t_final);

y = A * dIntegral + y0;



[Initializations]


[After Fitting]


[Controls]
General Linear Constraints = 0
Initialization Scripts = 0
Scripts After Fitting = 0
Number Of Duplicates = N/A
Duplicate Offset = N/A
Duplicate Unit = N/A
Generate Curves After Fitting = 1
Curve Point Spacing = Uniform on X-Axis Scale
Generate Peaks After Fitting = 1
Generate Peaks During Fitting = 1
Generate Peaks with Baseline = 1
Paste Parameters to Plot After Fitting = 1
Paste Parameters to Notes Window After Fitting = 1
Generate Residuals After Fitting = 0
Keep Parameters = 0
Compile On Param Change Script = 0
Enable Parameters Initialization = 0


[Compile Function]
Compile = 1
Compile Parameters Initialization = 1
OnParamChangeScriptsEnabled = 0.


[Parameters Initialization]


[Origin C Function Header]
#pragma warning(error : 15618)
#include <origin.h>

// Add your special include files here.
// For example, if you want to fit with functions from the NAG library,
// add the header file for the NAG functions here.



#include <..\originlab\fft.h> // path points to C:\Program Files\OriginLab\Origin8\OriginC\OriginLab\fft.h; used for convolution of data



// Add code here for other Origin C functions that you want to define in this file,
// and access in your fitting function.

// You can access C functions defined in other files, if those files are loaded and compiled
// in your workspace, and the functions have been prototyped in a header file that you have
// included above.

// You can access NLSF object methods and properties directly in your function code.

// You should follow C-language syntax in defining your function.
// For instance, if your parameter name is P1, you cannot use p1 in your function code.
// When using fractions, remember that integer division such as 1/2 is equal to 0, and not 0.5
// Use 0.5 or 1/2.0 to get the correct value.

// For more information and examples, please refer to the "User-Defined Fitting Function"
// section of the Origin Help file.



[Origin C Parameter Initialization Header]
#include <origin.h>

// Add your special include files here.
// For example, if you want to use functions from the NAG library,
// add the header file for the NAG functions here.

// Add code here for other Origin C functions that you want to define in this file,
// and access in your parameter initialization.

// You can access C functions defined in other files, if those files are loaded and compiled
// in your workspace, and the functions have been prototyped in a header file that you have
// included above.

// You can access NLSF object methods and properties directly in your function code.
// You should follow C-language syntax in defining your function.
// For instance, if your parameter name is P1, you cannot use p1 in your function code.
// When using fractions, remember that integer division such as 1/2 is equal to 0, and not 0.5
// Use 0.5 or 1/2.0 to get the correct value.

// For more information and examples, please refer to the "User-Defined Fitting Function"
// section of the Origin Help file.


[Constraints]


cpyang

USA
1406 Posts

Posted - 06/26/2009 :  9:21:20 PM
Basically, Origin 8 has a new fitter, which is what you see in the dialog. LabTalk.NLSF is the older Origin 7.5 fitter, which does not support some of the new features of the Origin 8 FDF format. Also, the older 7.5 fitter (NLSF) is much slower when called from Origin C.

See this thread.

http://www.originlab.com/forum/topic.asp?TOPIC_ID=7848


obauer

Germany
15 Posts

Posted - 06/29/2009 :  07:19:14 AM
Hi cpyang!

Thank you for your response! Over the weekend the idea of using NLFitSession instead of NLSF also came to my mind since that was the only thing I had not yet tried...

So I tried to call the FDF fit routine which I posted above from OC, following this example:

http://ocwiki.originlab.com/index.php?title=OriginC:NLFitSession-Fit#Examples

And again Origin crashed!!! Please help me: I am desperate and can't think of anything else that could be wrong. My FDF routine above works if run directly from the Non-Linear Curve Fit Dialog. If you apply it to this dataset (column 1 = X, column 2 = Y):

+++++++++++++++++++++++++++++

3032 2303.71661
3032.2 2343.07114
3032.4 2352.03741
3032.6 2391.51741
3032.8 2432.98028
3033 2494.09065
3033.2 2560.76282
3033.4 2655.35729
3033.6 2748.48704
3033.8 2925.27642
3034 3158.98841
3034.2 3530.0834
3034.4 4230.9175
3034.6 5768.54022
3034.8 10533.68015
3035 21524.61856
3035.2 30412.99312
3035.4 26896.49702
3035.6 19650.8882
3035.8 10661.65874
3036 5671.02219
3036.2 4178.63704
3036.4 3323.2433
3036.6 2932.04889
3036.8 2736.11784
3037 2612.00931
3037.2 2529.99623
3037.4 2472.44245
3037.6 2425.92686
3037.8 2392.27594
3038 2370.43395

+++++++++++++++++++++++++++++

with the following initial values for the 4 parameters to be fitted:

+++++++++++++++++++++++++++++

y0 = 2300; // y offset
A = 12500; // amplitude
wG = 0.1; // Gaussian width
xcG = -0.03; // Gaussian center

+++++++++++++++++++++++++++++

you get the following result within less than three minutes:

+++++++++++++++++++++++++++++

y0 = 2300.43145 +/- 62.82; // y offset
A = 35946.5635 +/- 308.95; // amplitude
wG = 0.14213 +/- 0.00581; // Gaussian width
xcG = -0.13155 +/- 0.00285; // Gaussian center



+++++++++++++++++++++++++++++

I have run out of ideas for how to get my FDF to run from OC code... It would be so nice to get some input from your side on what the problem could be. I hope you have some helpful ideas and find the time to answer my post. Thank you so much for your time!!!

best regards and thanks in advance,
Oliver

PS: Sorry, one more (short) question: How do I enable error weighting in NLFitSession when I call it from OC?

cpyang

USA
1406 Posts

Posted - 06/29/2009 :  2:53:00 PM
It would be best if you could email us the FDF, so we can try it without having to recreate it from the forum text.

Please email it to me:

cp@originlab.com

CP

obauer

Germany
15 Posts

Posted - 06/30/2009 :  03:45:23 AM
Dear cpyang!

I have just emailed you the FDF file and the example data file. I am curious to hear what my mistake was in trying to run the FDF from OC...

Thank you and your colleagues so much; I would have been totally lost in OC and would not have come even close to where I am now with my OC routine without the help of you guys!

best regards and thanks in advance,
Oliver

cpyang

USA
1406 Posts

Posted - 07/01/2009 :  2:11:09 PM
I think the key is to close Code Builder. When you are working from OC, you are most likely keeping CB open. When CB is open, OC runs much slower if you have a for loop, since ESC can break into OC code while CB is open. Your OC-based FDF does have a for loop, so it becomes much slower than from the dialog, where CB is most likely not open.

I tried the following code (I will modify it a bit to put on ocwiki as another example shortly), and it works fine.



BOOL test1(int nMaxIter = 30, string strFunc = "XSW_ReflectivityFit_Ag100_RmonoConvolution_test5", int nXCol=0, int nYCol=1)
{
    Worksheet wks = Project.ActiveLayer();
    if(!wks)
        return false;
    
    NLFitSession    FitSession;
    int             nDataIndex = 0; // only one set in our case
    DWORD           dwRules = DRR_GET_DEPENDENT | DRR_NO_FACTORS;
    
    // 1. Set function
    if(!FitSession.SetFunction(strFunc, NULL)) // set function; category name can be ignored
        return error_report("invalid fit function");
    
    vector<string>  vsParamNames;
    int 			nParam;
	vector 			vParamValues, vErrors;
    int             nNumParamsInFunction = FitSession.GetParamNamesInFunction(vsParamNames);
    int             nFitOutcome;
    
    DataRange   drInputData;
    drInputData.Add(wks, nXCol, "X");
    drInputData.Add(wks, nYCol, "Y");
    int         nNumData = drInputData.GetNumData(dwRules);
    ASSERT(1==nNumData);
        
    // 2. Set the dataset
    vector  vX1, vY1;
    drInputData.GetData( dwRules, nDataIndex, NULL, NULL, &vY1, &vX1 );     
    if(!FitSession.SetData(vY1, vX1, NULL, nDataIndex, nNumData))  
        return error_report("err setting data");  
    // 3. Set the init parameters
    vector vParams(nNumParamsInFunction);
    vParams[0] = 2300;//y0
    vParams[1] = 12500;//Amplitude
    vParams[2] = 0.1; //w
    vParams[3] = -0.03; // center
    int nErr = FitSession.SetParamValues(vParams);
    if(nErr != 0)
    	return error_report("Fail to set init parameters: err=" + nErr);

    // 4. Iterate with default settings
    FitSession.SetMaxNumIter(nMaxIter);   
    if(!FitSession.Fit(&nFitOutcome))
    {
        string strOutcome = FitSession.GetFitOutCome(nFitOutcome);
        printf("fit failed:%d->%s\n", nFitOutcome, strOutcome);
        return false;
    }
    // 5. success, get results and put to wksOutput
    RegStats        fitStats;
    NLSFFitInfo     fitInfo;
    FitSession.GetFitResultsStats(&fitStats, &fitInfo, false, nDataIndex);
    FitSession.GetFitResultsParams(vParamValues, vErrors);
	printf("# Iterations=%d, Reduced Chisqr=%g\n", fitInfo.Iterations, fitStats.ReducedChiSq);
    
	for( nParam = 0; nParam < vParamValues.GetSize(); nParam++)
    {
        printf("%s = %f\n", vsParamNames[nParam], vParamValues[nParam]);
    }   
    return true;
}
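
Once the function above is compiled in Code Builder, it should be callable from the Script Window with the data worksheet active; a hypothetical usage line (the function and argument names are simply those declared above) would be:

test1(100, "XSW_ReflectivityFit_Ag100_RmonoConvolution_test5", 0, 1); // up to 100 iterations, fitting columns 0 (X) and 1 (Y) of the active worksheet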



Iris_Bai

China
Posts

Posted - 07/02/2009 :  03:55:22 AM
Hi Oliver,

NLFitSession does not have a way to set error weighting data for now. We will add a function to set error data and the weighting method in Origin 8.1.

Iris

obauer

Germany
15 Posts

Posted - 07/06/2009 :  11:29:18 AM
Hi cpyang!

Brilliant, it all works fine now!!!! I would never have thought that the answer to my problem would be so simple: just close Code Builder... Thank you so much for your help - you did a tremendously good job there!

Thanks for your time! I am so happy that my code now runs the way I want it to - this wouldn't have been the case without your help!

Best regards and thanks again,
Oliver

Stefan.E.S

Germany
11 Posts

Posted - 01/22/2011 :  5:39:30 PM
Hi Iris,

Has weighting been implemented in NLFitSession in Origin 8.5 SR1? I cannot find it in the help file.

Stefan

quote:
Originally posted by Iris_Bai

Hi Oliver,

NLFitSession does not have a way to set error weighting data for now. We will add a function to set error data and the weighting method in Origin 8.1.

Iris



StSch

Iris_Bai

China
Posts

Posted - 01/24/2011 :  04:02:09 AM
Hi StSch,

Sorry, I had not updated the document.

Please click here to see the document about how to set weight data for the NLFitSession object.

Two arguments, nDataMode and vW, have been added to the NLFitSession::SetData method.
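
For illustration, a minimal sketch of how such a weighted call might look, reusing the variable names from the example posted earlier in this thread (FitSession, vY1, vX1, nDataIndex, nNumData); the data-mode constant and the interpretation of vW as a weight vector are assumptions here, not confirmed documentation:

// hypothetical sketch only: pass a data mode and a weight vector to SetData
vector vW(vY1.GetSize());
for( int ii = 0; ii < vW.GetSize(); ii++ )
    vW[ii] = 1.0; // placeholder for uniform weights; replace with your error-based weights
if( !FitSession.SetData(vY1, vX1, NULL, nDataIndex, nNumData, DATA_MODE_INDEP_CONSOLID, vW) )
    return error_report("err setting weighted data");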


Iris

Stefan.E.S

Germany
11 Posts

Posted - 01/24/2011 :  08:40:08 AM
Hi Iris,

thank you for your quick reply.

I just tested the weighting in my routine. The fit via FitSession produces the same results for the fit parameters as a fit "by hand". However, the value of the reduced ChiSqr is very different: the usual user interface returns 1.0057 (as expected), whereas FitSession gives RegStats.ReducedChiSq = 1285.12138. Do you happen to have an idea where this difference might come from?

Another question is whether the FitSession object also allows for setting bounds on the parameters.

Cheers,

Stefan


Stefan.E.S

Germany
11 Posts

Posted - 01/24/2011 :  3:26:30 PM
Hi Iris,

I looked a bit more into my fitting problem and found that the fit always gives the same result no matter what I choose for the weights. Even in the extreme case of zero weights, as below, I still get the same results.

Is this behaviour a matter of the DataMode? Or is the functionality simply not implemented?

By the way, in my OriginLab directories I cannot find the definition of the data modes. There is no file "analysis_utils.h" as mentioned in the wiki. I use the German OriginPro 8.5 SR1.


vector vWeights = 0;
FITsession.SetData(COLsimul_y, COLsimul_x, NULL, 0, 1, INVALID_DATA_MODE, vWeights);


Stefan

Iris_Bai

China
Posts

Posted - 01/26/2011 :  10:16:11 PM
Hi Stefan,

Data Mode is only available with multiple range selections. For more details about Data Mode, see Origin Help -> Regression and Curve Fitting book -> Nonlinear Curve Fitting book -> The NLFit Dialog Box book -> Settings Tab (Upper Panel) -> Data Selection section.

The analysis_utils.h file is under the Origin installation path, in the OriginC\system folder.

The data mode enum is:

enum 
{
	DATA_MODE_INDEP_CONSOLID, // the default used in NLFit dialog
	DATA_MODE_INDEP_SEP,	
	DATA_MODE_CONCATENATE,
	DATA_MODE_GLOBAL,
};

Since DATA_MODE_INDEP_CONSOLID is the default mode used in the NLFit dialog, please also use this data mode in NLFitSession code when there are multiple datasets to fit.

Regarding parameter bounds, NLFitSession does not support them yet. We have added a function, NLFitSession::SetParamBounds, to set parameter lower/upper bounds, but it requires the next release build, Origin 8.5.1.
Click here to see the usage of the new method in a complete example.

Iris

Edited by - Iris_Bai on 01/27/2011 02:25:39 AM

obauer

Germany
15 Posts

Posted - 11/12/2013 :  07:58:37 AM
quote:
Originally posted by cpyang

When CB is open, OC runs much slower if you have a for loop, since ESC can break into OC code while CB is open. Your OC-based FDF does have a for loop, so it becomes much slower than from the dialog, where CB is most likely not open.



Sorry for warming up this thread, but I am currently writing documentation for my code. For this purpose I would like to know what "ESC" means and what it does. Your explanation is highly appreciated. Thank you very much!

Oliver

obauer

Germany
15 Posts

Posted - 11/15/2013 :  03:41:33 AM
Hi,

I have tried to figure out what the statement

quote:
... OC runs much slower if you have a for loop, since ESC can break into OC code while CB is open.


exactly means. Back then, I was just happy that cpyang could solve my problem (see above), and I didn't care much about the reasons behind it. If I understand correctly, there is some sort of cross-communication between OC (which, I guess, refers to the Origin C fitting routine in NLSF, right?) and Code Builder. If the fitting routine contains a loop, this becomes very time-consuming. I imagine this cross-communication as something like a debug mode. Is this correct?

Your help and explanation are highly appreciated. Thank you very much for your efforts!

Best regards,
Oliver