The Origin Forum
 FFT with Mask


T O P I C    R E V I E W
niccnacc Posted - 01/19/2013 : 10:33:40 AM
Origin Ver. and Service Release (Select Help-->About Origin): 9.0.0
Operating System: win 7

Hi.
I have many velocity/time datasets which I need to run an FFT over.
First I interpolate the time axis onto constant intervals and then FFT the data.

The thing is, I have a periodic error in the data due to the way the experiment is realised. So I want to cut out the error spikes and use only the sections in between for the analysis. But when I mask the region containing the error and run FFT on the column, I get an error message.

Is there any way to do this properly? There are longer measurements with more of these error gaps in between, and just selecting the rows by hand is not feasible with tens of thousands of rows.

Thanks for any help.
10   L A T E S T    R E P L I E S    (Newest First)
snowli Posted - 01/31/2013 : 12:56:34 PM
To get the mode of a column, you can use our Statistics on Columns tool.

You can select column(s) or a range in the worksheet, and then choose Statistics: Descriptive Statistics: Statistics on Columns.

You can also set Grouping and weighting.

There is a long list of Quantities to Compute, including Mode.

You can also watch our videos about descriptive statistics.
http://www.originlab.com/Index.aspx?go=Support/VideoTutorials&pid=1553.
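
If you would rather do it purely in script, here is a rough LabTalk sketch (hedged: a brute-force count that assumes the values repeat exactly, with no binning, and that wks.maxrows gives the last data row of the active worksheet):

double modeVal = 0;    // most frequent value found so far
int bestCount = 0;     // how often it occurs
int cnt;
loop(ii, 1, wks.maxrows)
{
    cnt = 0;
    loop(jj, 1, wks.maxrows)
    {
        if (col(1)[jj] == col(1)[ii]) cnt = cnt + 1;
    }
    if (cnt > bestCount)
    {
        bestCount = cnt;
        modeVal = col(1)[ii];
    }
}
modeVal=;              // print the mode of column 1

Note that the double loop scales as N squared, so for tens of thousands of rows the Statistics on Columns tool above will be far faster.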

Thanks, Snow

quote:
Originally posted by niccnacc

When I started this, I thought it couldn't be so hard that you would need to switch to another programming/scripting language.
And now I need the mode of a column, and even by scripting that seems to be an impossible task.



niccnacc Posted - 01/31/2013 : 02:31:40 AM
When I started this, I thought it couldn't be so hard that you would need to switch to another programming/scripting language.
And now I need the mode of a column, and even by scripting that seems to be an impossible task.
greg Posted - 01/30/2013 : 4:35:08 PM
No matter what you have selected in a worksheet, you can get the same simple statistics as the Status bar shows (plus the Standard Deviation and the number of Missing Values) with this two-line script:
stats;   // compute statistics on the current selection
stats.=; // print all the resulting stats.* values

It sounds like you do not know about referencing cells by row ...
// For rows 1 to 32
loop(ii,1,32)
{
col(3)[ii] = col(1)[ii] / (col(2)[ii]*col(2)[ii]);
}
but for your simple function you don't need row-by-row referencing, since Origin can do vector math:
col(3) = col(1) / (col(2) * col(2));
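
The same computation can also be written with column short names and the power operator - a small variant, assuming the three columns carry the default short names A, B and C:

col(C) = col(A) / col(B)^2;   // col1 divided by the square of col2, stored in col3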
niccnacc Posted - 01/25/2013 : 05:28:04 AM
Thank you two for the very detailed descriptions!
To be honest, the interface of Origin is very counter-intuitive. Sometimes things are done here, sometimes there. And with big sets of data - if you get that far - the process is fairly clean and is done with a few clicks here and there, or maybe a script. But the downside is that nobody thought of the really simple things. In the status bar in the lower right-hand corner you can see the sum/average of the marked cells, but there is no easy way (as far as I can see) to calculate this in a cell for just 3 or 4 cells. I don't mean for a whole column - there it's fine - but for cells you have to pick by hand; even with scripting I didn't get that far.
So as far as I can see, a lot of streamlining and simplifying remains to be done! Most of the data - at least what I get handed as an engineer - is always a little different than before, so you have to be able to take that into account. And an easy way of doing simple data manipulation is crucial for some tasks where a dialog form isn't the best approach.

Anyway, besides all that I am quite impressed by Origin.

PS: If there's an easy way to address a single cell, please don't hesitate to post it :D. Something like: to calculate column 3, take a cell of col1 and divide it by the square of the corresponding cell of col2.

Thanks, mike
snowli Posted - 01/22/2013 : 11:39:56 AM
Maybe Set Column Values isn't the best choice here.

Suppose you have many worksheets in one workbook, and each sheet contains FFT data, e.g. Frequency as the X column and Amplitude as the Y column.

To find the max(Amplitude) and the corresponding Frequency for one Amplitude column:

1. Highlight the Amplitude column.
2. Choose Analysis: Peaks and Baseline: Peak Analyzer.
3. In the PA dialog that opens, choose Find Peaks as the Goal.
4. Click the Next button 3 times to go to the Find Peaks page.
5. Set the Peak Filtering method to By Number. Uncheck the Auto checkbox below it and set Number of Peaks to 1. This will find the tallest peak.
6. Click the little triangle next to Dialog Theme and choose Save As... from the context menu.
7. Save it as Find_1_peak.
8. Click OK to go back to the Peak Analyzer dialog.
9. Click Finish.
--> You will see two sheets added to your current workbook. One of them is Peak_Centers1. It shows the max Amplitude in the 2nd column and the corresponding Frequency in the 1st column.

Since we saved the Find_1_peak theme just now, we can use it to batch process all your amplitude data. Here is how to do it.

1. With the workbook containing the many FFT datasets active, choose Analysis: Peaks and Baseline: Batch Peak Analysis with Theme...
2. Click the triangle next to the Input node and choose Select Columns.
3. Make sure "In Current Book" is selected in the List Datasets drop-down list.
--> The box in the middle lists all the Y columns in the worksheets of the current workbook that can be analyzed.
4. Select all the Amplitude columns.
5. Click Add to add them to the bottom panel.
6. Click OK to go back to the Batch Analysis dialog.
7. Choose Find_1_peak as the Theme.
8. Choose Peak Centers as the Result Sheet.
9. You can keep the current [Summary]Results! as the Output Sheet.
10. Click OK.
--> A Summary workbook is created. On its Results sheet, it lists the maximum Amplitude for each FFT dataset and the corresponding frequency. (A script-only alternative is sketched below.)
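
If you prefer to stay entirely in script, a rough LabTalk alternative is to loop over the sheets of the active workbook and use the limit command (hedged: this assumes column 1 holds Frequency and column 2 holds Amplitude in every sheet, and that page.nlayers, page.active and limit.imax behave as described in the LabTalk reference):

loop(ii, 1, page.nlayers)
{
    page.active = ii;   // switch to sheet ii of the active workbook
    limit col(2);       // basic statistics of the Amplitude column
    type "Sheet $(ii): max amplitude $(limit.max) at frequency $(col(1)[limit.imax])";
}

This only prints one line per sheet instead of collecting the results into a summary sheet, so the Batch Peak Analysis route above remains the more convenient one for 50 datasets.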

You can go to www.originlab.com/videos to watch many useful videos on how to use Origin, including some Set Column Values videos and Batch Peak Analysis using a theme.

Thanks, Snow Li
OriginLab Corp.

quote:


Thanks, yes, you stated my problem more clearly.
This is what I originally intended, but it seemed quite unwieldy to me, because I have about 50 datasets with at least 3 parts each to be analyzed.
Anyway, it seems like the only reasonable method. I think I have this nailed down quite well now; the only thing left is averaging.
I have a workbook with many sheets of FFT results. Now, in the first sheet with the data, I go to Set Column Values and get the columns of the FFT data as range variables. Then I'm trying something like:

one field should be: Max(r1_Amplitude) of each file,
but for the Frequency I need the X value corresponding to Max(r1_Amplitude).

Is Set Column Values the best way to do this, and how can I get the row number of Max(r1)? Thanks again!

Cheers, Mike

Edit: It would be best if you know of somewhere (a link) to learn Set Column Values in greater detail. The material on the OriginLab site covers just the basics and is not complete in any way.

Drbobshepherd Posted - 01/22/2013 : 11:38:00 AM
Mike,

I generally find Set Column Values very limited for analysis work, so I quickly resort to writing a LabTalk script.

You asked a good question. Analysis functions work well on a single dataset to find a particular Y-value (e.g. the maximum), but what is the best way to find the corresponding X-value?

Because this is not the LabTalk Forum, I will suggest using the Worksheet/Sort_Columns drop-down menu.

1. Select your X and Y columns.
2. Select the Worksheet/Sort_Columns/Custom drop-down menu item.
3. In the Nested Sort dialog box, select your Y-column and click on the Descending>> button to make the Y-data your sort criterion.
4. Click on the OK button. Your max Y-value and corresponding X-value can be found in row 1.
5. (Optional) Save these X and Y values however you choose (copy and paste into another wksheet, pencil and paper, etc.), then select the Edit/Undo drop-down item (or hit Ctrl-Z, or type Undo in the script window) to restore the columns to their original order.

If you would like to automate this process, try running the following script:

double X4maxY; // Declare variable for result.

undo -wa; // Save backup copy of active wksheet.
wsort descending:=1 bycol:=2 c1:=1 c2:=2; // Sort(X,Y) by Y in desc order (note:col(1)=X, col(2)=Y).
X4maxY=col(1)[1]; // Save x-value corresponding to max(Y).
undo; // Restore wksheet.

X4maxY=; // Print result.
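
An alternative that avoids sorting altogether is the limit command (hedged: assuming limit fills limit.imax with the row index of the maximum, as in the LabTalk reference):

limit col(2);                          // scan the Y column
double X4maxY2 = col(1)[limit.imax];   // X at the row where Y is largest
X4maxY2=;                              // print result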


Good luck,
DrBobShepherd
niccnacc Posted - 01/22/2013 : 05:45:17 AM
quote:
Originally posted by Drbobshepherd

It sounds to me like you have a time-dependent curve with blank (i.e. drop-out) periods of data acquisition. This type of data is commonly called a chopped signal. Chopped signals are often processed by applying FFT analysis to each section between the drop-out periods. This shouldn't be very difficult if the chopping frequency is constant.

1. Separate the data into datasets of uninterrupted intervals. (The first and last data points in each dataset should be approximately 0.)
2. Apply an FFT to each dataset. (I suggest using the Hanning window.) You should now have a spectrum for each section.
3. Try averaging your spectra to reduce the noise level. (Add them up and divide by N, the number of datasets.) This is called signal averaging.

DrBobShepherd



Thanks, yes, you stated my problem more clearly.
This is what I originally intended, but it seemed quite unwieldy to me, because I have about 50 datasets with at least 3 parts each to be analyzed.
Anyway, it seems like the only reasonable method. I think I have this nailed down quite well now; the only thing left is averaging.
I have a workbook with many sheets of FFT results. Now, in the first sheet with the data, I go to Set Column Values and get the columns of the FFT data as range variables. Then I'm trying something like:

one field should be: Max(r1_Amplitude) of each file,
but for the Frequency I need the X value corresponding to Max(r1_Amplitude).

Is Set Column Values the best way to do this, and how can I get the row number of Max(r1)? Thanks again!

Cheers, Mike

Edit: It would be best if you know of somewhere (a link) to learn Set Column Values in greater detail. The material on the OriginLab site covers just the basics and is not complete in any way.
Drbobshepherd Posted - 01/21/2013 : 3:33:42 PM
It sounds to me like you have a time-dependent curve with blank (i.e. drop-out) periods of data acquisition. This type of data is commonly called a chopped signal. Chopped signals are often processed by applying FFT analysis to each section between the drop-out periods. This shouldn't be very difficult if the chopping frequency is constant.

1. Separate the data into datasets of uninterrupted intervals. (The first and last data points in each dataset should be approximately 0.)
2. Apply an FFT to each dataset. (I suggest using the Hanning window.) You should now have a spectrum for each section.
3. Try averaging your spectra to reduce the noise level. (Add them up and divide by N, the number of datasets.) This is called signal averaging; a minimal script sketch follows below.
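
As a minimal script illustration of step 3 (hedged: assuming the individual spectra have already been placed in one worksheet, aligned row by row, with Frequency in column 1, three amplitude spectra in columns 2-4, and column 5 reserved for the result):

col(5) = (col(2) + col(3) + col(4)) / 3;   // signal-averaged spectrum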

DrBobShepherd
niccnacc Posted - 01/21/2013 : 1:50:30 PM
quote:
Originally posted by greg

You could mask the erroneous data before you do the interpolation, or try smoothing the data to see if that eliminates the spikes without altering the good data too much.



Thanks for your input.
I tried masking the regions before interpolation, but the result is an interpolation with gaps, and that is exactly what the FFT tool complains about - that there must be no gaps in the data. Smoothing is not an option, because the error is inherent to the experiment: it's a metal ball running through the velocity measurement. So this data must be excluded somehow.

When I try your tip and interpolate after masking, I get interpolated data with straight lines where the masks (erroneous data) were. This, however, produces its own frequencies. Or do you know a way to interpolate where the masked points are simply left out and x is shortened by the masked part? Sounds complicated.

Does anyone have any input on this one?

I'm starting to feel like there's no solution to this problem...
greg Posted - 01/21/2013 : 11:53:20 AM
You could mask the erroneous data before you do the interpolation, or try smoothing the data to see if that eliminates the spikes without altering the good data too much.
