TOPIC REVIEW
Seb136
Posted - 07/18/2018 : 07:27:39 AM Origin Ver. and Service Release (Select Help-->About Origin): 2017 b9.4.0.220 Operating System: Windows 10
Hello everyone,
I have data with several columns (Pressure, Leakage, Speed, Temperature). My goal is to find sections in my data where all pressure columns and the speed column are at a constant level at the same time, and then to get the average of all leakage columns and all temperature columns over those sections.
My problem at the moment is the definition of "constant" levels.
My approach right now is to import the data and smooth all the columns (moving average) so that the noise doesn't affect the output. After importing, I go through all the columns, look at small sections of a specified length (e.g. 15 rows) and calculate the average of each section. Then I calculate the difference between the first value of the section and the average, and between the last value and the average. I divide both differences by the average, and if either result (in absolute value) is greater than a specified limit (e.g. 0.05), all values of this section are set to NANUM.
I tried to find the right combination of section length and limit value so that only the constant levels are left, but when I find this "perfect" combination and use it on another set of data, it doesn't work anymore and I have to find a new combination.
Is there a way to automatically find all constant levels of a column as accurately as possible and set all other values to NANUM?
Here's the code for my approach:
// 'n' is the length of one section (in rows) and 'Grenzwert' the relative
// deviation limit; both are defined elsewhere in my code.
for (int nCol = 2; nCol < wks.GetNumCols(); nCol++)
{
    if (wks.Columns(nCol).GetUnits() == "[bar]" || wks.Columns(nCol).GetUnits() == "[Nl/min]" || wks.Columns(nCol).GetUnits() == "[rpm]")
    {
        DataRange dr;
        dr.Add("X", wks, 0, nCol, -1, nCol);
        vector vData;
        dr.GetData(&vData, 0);

        for (int nRowVector = 0; nRowVector + n <= vData.GetSize(); nRowVector += n)
        {
            double Sum = 0;
            for (int nn = 0; nn < n; nn++)
                Sum += vData[nRowVector + nn]; // sum of all values of the current section
            double Average = Sum / n;          // average of the current section

            // relative deviations of the first and the last value of the section from its average
            double Abweichung1 = (vData[nRowVector] - Average) / Average;
            double Abweichung2 = (vData[nRowVector + n - 1] - Average) / Average;

            // if either end deviates by more than the limit, the section is not constant
            if (fabs(Abweichung1) > Grenzwert || fabs(Abweichung2) > Grenzwert)
            {
                for (int jj = 0; jj < n; jj++)
                    vData[nRowVector + jj] = NANUM;
            }
        }
        dr.SetData(vData);
    }
}
Thank you in advance, Seb
6 LATEST REPLIES (Newest First)
Seb136
Posted - 07/30/2018 : 03:16:16 AM Hi Alexei,
Thank you for your article, it looks really interesting!
I'll have a look at it and see if it is helpful for me.
aplotnikov |
Posted - 07/24/2018 : 05:09:57 AM Hi Seb136,
Depending on the S/N ratio, you may need a somewhat more complicated detection method than simple averaging or thresholding. In the general case, statistical tests like the Page-Hinckley test are used for the detection of abrupt signal changes (see A. Benveniste, Detection of Abrupt Changes in Signals and Dynamical Systems, Lecture Notes in Control and Information Sciences, Vol. 77, Springer-Verlag, 1986). Your "approach" (it is in fact the well-known moving/adjacent-averaging smoothing) is valid only for stepwise signals with an extremely high S/N ratio, like those in your graphs. You can also try the smooth X-Function for this purpose:
smooth sg:=aav npts:=... It may be faster than your own implementation, but I'm not sure.
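Coming back to the Page-Hinckley idea, here is a rough illustration only (not a tested implementation) of a one-sided detector in Origin C. The function name and the parameters 'delta' (tolerated drift) and 'lambda' (alarm threshold) are made up for this sketch and would have to be tuned for your data:

// Sketch of a one-sided Page-Hinckley detector (reacts to upward mean shifts).
// 'delta' and 'lambda' are illustrative tuning parameters, not values from this thread.
int page_hinckley_first_change(vector& vData, double delta, double lambda)
{
    double dMean = 0; // running mean of the samples seen so far
    double dCum = 0;  // cumulative deviation m_T
    double dMin = 0;  // running minimum of the cumulative deviation
    for (int ii = 0; ii < vData.GetSize(); ii++)
    {
        dMean += (vData[ii] - dMean) / (ii + 1); // incremental update of the running mean
        dCum += vData[ii] - dMean - delta;       // accumulate the drift-corrected deviation
        if (dCum < dMin)
            dMin = dCum;
        if (dCum - dMin > lambda) // alarm: the mean has shifted upwards
            return ii;            // index where the change was detected
    }
    return -1; // no change detected
}

To catch downward steps as well, you would run the same detector on the negated signal, and restart it after every detected change to split the data into segments.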
Regards,
Alexei
Seb136
Posted - 07/23/2018 : 08:22:51 AM Hi Yuki,
Thank you very much for your help!
The Worksheet Query feature is really helpful :)
yuki_wu
Posted - 07/20/2018 : 03:48:02 AM Hi Seb,
For your case, I think you could try the Worksheet Query and Differentiate features.
For Worksheet Query, you could set a threshold value yourself and then pick the values that satisfy a condition such as:
abs(pressure(i) - pressure(i-1)) > threshold value
More info about Worksheet Query: https://www.originlab.com/doc/Origin-Help/Wks-Query-QS
For Differentiate, you could perform a derivative calculation on the data and then set the data in the ranges where the derivative is clearly different from 0 to missing values. Note that you may need to use the smoothing option in the Differentiate dialog.
More info about Differentiate: https://www.originlab.com/doc/Origin-Help/Math-Differentiate
If you need to do this in code, you could use the corresponding X-Functions wxt and differentiate: https://www.originlab.com/doc/X-Function/ref/wxt https://www.originlab.com/doc/X-Function/ref/differentiate
To call X-Function in Origin C, please refer to: https://www.originlab.com/doc/OriginC/guide/Accessing-X-Function
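Just as a rough sketch of the Differentiate idea in plain Origin C (it does not call the wxt or differentiate X-Functions themselves): approximate the derivative with a row-to-row difference and set the points where it exceeds a threshold to missing values. The function name and the parameter 'dSlopeLimit' are made up for illustration and would need tuning:

// Sketch only: mark rows as non-constant where the row-to-row difference
// (a crude derivative estimate) exceeds a threshold.
void mask_nonconstant_by_slope(vector& vData, double dSlopeLimit)
{
    vector vOrig = vData; // work on a copy so earlier masking does not affect the differences
    for (int ii = 1; ii < vOrig.GetSize(); ii++)
    {
        double dDiff = vOrig[ii] - vOrig[ii - 1]; // backward difference
        if (fabs(dDiff) > dSlopeLimit)
        {
            // derivative clearly different from 0 -> not a constant level
            vData[ii] = NANUM;
            vData[ii - 1] = NANUM;
        }
    }
}

As mentioned above, this only behaves well on smoothed data; on raw noisy data the point-to-point differences jump around too much.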
Hope it helps.
Regards, Yuki
OriginLab
Seb136
Posted - 07/19/2018 : 02:40:08 AM Hi Yuki,
I'll show you three pictures so you understand what I mean.
The first image shows my data after importing and smoothing.
The second image shows my desired result, where only the constant sections are left.
I achieved this result by the method I tried to explain earlier, but it works pretty much by trial and error.
So I'm looking for a method that finds those sections without the user having to look at the result and decide whether it is OK or not.
The third image would be a result I wouldn't like to get.
Hope you understand my problem now.
[Three screenshots attached: the data after importing and smoothing, the desired result with only the constant sections left, and an example of an undesired result.]
yuki_wu
Posted - 07/18/2018 : 10:59:31 PM Hi Seb,
I'm not sure if I understand your question fully, but I suppose you are looking for this function, right?
ocmath_compare_data https://www.originlab.com/doc/OriginC/ref/ocmath_compare_data
Hope it helps.
Regards, Yuki
OriginLab