The Origin Forum
 Automatic calculation when importing data


TOPIC REVIEW
yt47106 Posted - 08/15/2011 : 04:19:23 AM
Origin Ver. and Service Release (Select Help-->About Origin): 8.5 SR1
Operating System: Windows XP

Hi,

After importing new data I need to perform the following operations:
1. Add 10 new columns
2. Name these columns according to a predefined list
3. Perform a Set Column Values calculation for each column, consisting of either a single-line column formula or a multi-line LabTalk script.

What is the most time-saving way to perform these operations on a regular basis? My idea is to implement the needed LabTalk code in an import filter as an advanced filter option, but perhaps there is a better way?

Is there perhaps some example of how these or similar operations could be done with LabTalk commands already available?
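To make it a bit more concrete, something roughly along these lines is the kind of thing I have in mind (the long names in the list and the formula below are only placeholders for my real columns and calculations):

%N = "Name01 Name02 Name03 Name04 Name05 Name06 Name07 Name08 Name09 Name10"; // predefined list of long names (placeholders)
loop (ii, 1, 10) {
wks.addCol(); // 1. add a new column
nn = wks.nCols; // index of the column just added
%B = %[%N, #ii]; // pick the ii-th name from the list
col($(nn))[L]$=%B; // 2. set its long name
col($(nn)) = col(1)*ii; // 3. single-line Set Column Values formula (a multi-line LabTalk calculation could go here instead)
}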
15 LATEST REPLIES (Newest First)
Drbobshepherd Posted - 01/20/2012 : 2:04:23 PM
Forget the previous script-- it still had a bug. Try:

smooth iy:=3 oy:=7 method:=aav npts:=30; // Smooth raw data.
document -cw Raw; // Count wksheets w/ "Raw" in name
window -r %H Raw$(count+1); // Rename workbook into the "Raw" sequence.
%R=%H; // Store Raw workbook name.
%S=page.longname$; // Store sourcefile name.
%T="Smoothed"; // Define window name for smoothed results.
itest=exist(%T);
if (itest==2) {
win -a %T; // Activate Smoothed Results Window.
wks.addCol(); // Add a column.
wks.addCol(); // Add a column.
nn=wks.nCols-1; // Set nn to next-to-last col#.
wks.col$(nn).type=4; // Make data in col(nn) type X.
nn++; // Set nn to last col#.
wks.col$(nn).type=1; // Make data in col(nn) type Y.
}
else {
newbook option:= lsname name:= "Smoothed"; // Create "Smoothed" workbook.
nn=2; // Set nn to last col#.
}
nn--; // Set nn to next-to-last col#.
range rr=[%R]1!Col(2), tt=[%T]1!Col($(nn)); // Assign datasets for copying using range notation.
colcopy irng:=rr orng:=tt lname:=1; // Copy X data and longname.
wcolwidth irng:= tt width:=10; // Widen column to 10 characters.
nn++; // Set nn to last col#.
Col($(nn))[L]$=%S; // Label column longname with raw data source filename.
range rr=[%R]1!Col(7), tt=[%T]1!Col($(nn)); // Assign datasets for copying using range notation.
colcopy irng:=rr orng:=tt; // Copy Y data.
wcolwidth irng:= tt width:=24; // Widen column to 24 characters.

I tested the above script and it really seems to work now. You can even use it to execute several drag-and-drops. In other words, you can drag your raw datafiles in groups of 10, or 20, one at a time, or all 200. Caution: This script will allow you to import the same file more than once, and it doesn't test for invalid text files. I can't do everything for you-- I would be robbing you of the learning experience.

Try this, and let me know how it works.

-DrBobShepherd
Drbobshepherd Posted - 01/20/2012 : 1:03:05 PM
onsight8c,

This is the first time you have indicated to me that you also want to copy your X dataset. So try this in your filter:

smooth iy:=3 oy:=7 method:=aav npts:=30;
document -cw Raw; // Count wksheets w/ "Raw" in name
window -r %H Raw$(count+1); // Rename workbook into the "Raw" sequence.
%R=%H; // Store Raw workbook name.
%S=page.longname$; // Store sourcefile name.
%T="Smoothed"; // Define window name for smoothed results.
itest=exist(%T);
if (itest==2) {
win -a %T; // Activate Smoothed Results Window.
wks.addCol(); // Add a column.
wks.col1.type=4; // Make data in col(1) type X.
wks.addCol(); // Add a column.
wks.col1.type=1; // Make data in col(1) type Y.
}
else {
newbook option:= lsname name:= "Smoothed"; // Create "Smoothed" workbook.
}
nn=wks.nCols-1; // Get no. of columns.
range rr=[%R]1!Col(2), tt=[%T]1!Col($(nn)); // Assign datasets for copying using range notation.
colcopy irng:=rr orng:=tt lname:=1; // Copy X data and longname.
wcolwidth irng:= tt width:=10; // Widen column to 10 characters.
nn++; // Increment nn.
Col($(nn))[L]$=%S; // Label column longname with raw data source filename.
range rr=[%R]1!Col(7), tt=[%T]1!Col($(nn)); // Assign datasets for copying using range notation.
colcopy irng:=rr orng:=tt; // Copy Y data.
wcolwidth irng:= tt width:=24; // Widen column to 24 characters.

Yes, the counting routine and the exist routine work on the entire project; however, filter-embedded scripts do not simply handle different folders, because the import is executed in the active folder before the script runs. I wrote the above script assuming all books are in the same folder. I think if you try it with just one folder, you will get the results you need. But if you really want to put each raw dataset into its own folder, check the help files on the "document" command.

The reason I suggested the import filter script is the drag-and-drop capability. If all your raw datafiles are in the same directory, it is really quick to select them, drag them over to the Origin window, and drop them.

Let me know if this now works for you.
onsight8c Posted - 01/19/2012 : 6:35:48 PM
Hi,

First of all, let me thank you again for your great help - your script almost does what I was looking for. I made some minor changes, like adding some columns and changing their type prior to importing the data.

Over the last few days I didn't focus much on Origin, but today I tried to import my 8 x 200 datasets. Unfortunately I discovered another error while:

1. creating folder "sample 1" in project xyz by hand
2. importing files of sample 1 via script --> no error, perfect job!!
3. creating folder "sample 2" in the same project by hand
4. importing files of sample 2 via script --> error!!!

--> The first file from sample 2 is processed in folder "sample 2", but the remaining 199 files are created in folder "sample 1". I can't say whether the workbook "Smoothed" is overwritten as well...

Whatever I tried, I could only import one dataset as intended; all following datasets were processed in the folder of the first dataset...

Does the script count the "Raw" files within the whole project, i.e. in my case including those from sample 1? And might the solution be to count the "Raw" files only in the selected/active folder, in my case folder "sample 2"?

Obviously it's not over yet

-------------------------------------

Here is my adapted code:

smooth iy:=3 oy:=7 method:=aav npts:=30; // Smooth Col(3). Save in Col(7).
document -cw Raw; // Count wksheets w/ "Raw" in name.
window -r %H Raw$(count+1); // Rename workbook into the "Raw" sequence.
%R=%H; // Store Raw workbook name.
%S=page.longname$; // Store sourcefile name.
%T="Smoothed"; // Define window name for smoothed results.
itest=exist(%T);
if (itest==2) {
win -a %T; // Activate Smoothed Results Window.
wks.addCol(); // Add a column.
}
else {
newbook option:= lsname name:= "Smoothed"; // Create "Smoothed" workbook.
//del col(B); // Make no. of columns = 1.
wks.addCol(); // Add a column
//wks.col1.type=1; // Make data in col(1) type Y because all smoothed datasets are type Y.
wks.col2.type=4; // Make data in col(2) type X because this will be x-axis "Frequency"
}
nn=wks.nCols; // Get no. of columns.
Col($(nn))[L]$=%S; // Label column longname with raw data source filename.
range rr=[%R]1!Col(7), tt=[%T]1!Col($(nn)); // Assign datasets for copying using range notation.
colcopy irng:=rr orng:=tt; // Copy
wcolwidth irng:= tt width:=24; // Widen column to 24 characters.
Drbobshepherd Posted - 01/17/2012 : 5:26:35 PM
I just caught another error: First sentance in my reply should read, "I do NOT recommend using an Analysis template when you have to write a script anyway."
Drbobshepherd Posted - 01/17/2012 : 5:03:24 PM
Typo error: Last line comment should say "24 characters".
Drbobshepherd Posted - 01/17/2012 : 5:00:56 PM
OK, first, I do recommend using an Analysis template when you have to write a script anyway. With an Analysis template, you do not have as much control. You can do a lot more processing in a script, and you can execute your script when you want and how you want. Use the Import Wizard on a single file. Select to rename the workbook long name using the imported filename. As you did before, set the filter to read 4 Main Header lines and 1 subheader line into the Longname row, and then the rest of the lines as data. Save values from the Main Header lines if you want. Check the box to save your import filter and name it. Check the box to "Specify advanced filter options" and enter the following code:


smooth iy:=3 oy:=7 method:=aav npts:=30; // Smooth Col(3). Save in Col(7).
document -cw Raw; // Count wksheets w/ "Raw" in name.
window -r %H Raw$(count+1); // Rename workbook into the "Raw" sequence.
%R=%H; // Store Raw workbook name.
%S=page.longname$; // Store sourcefile name.
%T="Smoothed"; // Define window name for smoothed results.
itest=exist(%T);
if (itest==2) {
win -a %T; // Activate Smoothed Results Window.
wks.addCol(); // Add a column.
}
else {
newbook option:= lsname name:= "Smoothed"; // Create "Smoothed" workbook.
del col(B); // Make no. of columns = 1.
wks.col1.type=1; // Make data in col(1) type Y because all smoothed datasets are type Y.
}
nn=wks.nCols; // Get no. of columns.
Col($(nn))[L]$=%S; // Label column longname with raw data source filename.
range rr=[%R]1!Col(7), tt=[%T]1!Col($(nn)); // Assign datasets for copying using range notation.
colcopy irng:=rr orng:=tt; // Copy
wcolwidth irng:= tt width:=24; // Widen column to 22 characters.


After you have entered this script, CHECK THE BUTTON TO RUN THE SCRIPT AFTER EACH FILE IS IMPORTED. This is important. This script was written to add data to the "Smoothed" window only one file at a time.

Then hit "Finish". The file will be imported the way I think you want it. I think I have all the bugs out now.

You should be able to drag and drop your files now, from any MS Explorer window, into the Origin workspace and get them processed. You can even drag them in bunches. Helpful hint: save your project after each successful import. Please let me know how well this works for you.
onsight8c Posted - 01/17/2012 : 11:14:32 AM
Damn, I forgot to mention that I can upload some datafiles and my analysis template for trying it out.

http://www.originlab.com/ftp/forum_and_kbase/Images/s04.zip

It contains 4 data-txt-files and my Analysis Template "smooth transfer ratio _AnaTemp.ogw"
onsight8c Posted - 01/17/2012 : 11:01:18 AM
Hi,

Thank you very much for your help - but unfortunately it doesn't work 100%.

My data files look like this:

[screenshot: data file in a text editor]

[screenshot: the same file after import in Origin]

I created an Analysis Template following this tutorial: http://www.youtube.com/watch?v=SWAG7vUX0LA, using the Adjacent-Averaging method with Weighted Average, a 30-point window, and Auto Recalculation. The following is an example of how it looks when using the Analysis Template without any coding:

[screenshot: worksheet produced by the Analysis Template]
Column G(Y) is my desired column to copy to the "Smoothed" workbook.


For importing my files I use the Import Wizard:

[screenshots: Import Wizard pages used for the import]
When I now use your code in the Advanced Filter Options:

document -cw Raw; // Count wksheets w/ "Raw" in name
window -r %H Raw$(count+1); // Rename workbook into the "Raw" sequence.
%R=%H; // Store Raw workbook name.
%S=page.longname$; // Store sourcefile name.
%T="Smoothed"; // Define window name for smoothed results.
itest=exist(%T);
if (itest==2){
win -a %T; // Activate Smoothed Results Window.
wks.addCol(); // Add a column.
}
else {
newbook name:= "Smoothed"; // Create "Smoothed" workbook.
wks.nCols=1; // Make no. of columns = 1.
wks.col1.type=1; // Make data in col(1) type Y because all smoothed datasets are type Y.
}
nn=wks.nCols; // Get no. of columns.
col($(nn))[L]$=%S; // Label column longname with raw data source filename.
range rr=[%R]1!Col(7), tt=[%T]1!Col($(%(nn))); // Assign datasets for copying using range notation.
colcopy irng:=rr orng:=tt; // Copy


As I want to copy column G(Y) which is the 7th of the dataset, I set range rr=[%R]1!Col(7).

When the script is compiled, the Script Window pops up and I get an error:

Failed to resolve range string, VarName = irng, VarValue = [Raw2]1!Col(7)
Failed to resolve range string, VarName = irng, VarValue = [Raw3]1!Col(7)
Failed to resolve range string, VarName = irng, VarValue = [Raw4]1!Col(7)


My output looks like this:

[screenshot: resulting workbooks after the import]
Well, the first dataset (s04_prevac_1) has been analyzed as desired. In the "Smoothed" workbook the name is set correctly, but the data from "Raw1 - s04_prevac_1..." is missing. Additionally, a "Smoothed1" workbook is opened, as shown. But then nothing more is processed... Where is the error?

Just to show you why I'm so particular about the naming of the files: the attached screenshot shows the naming specification on the left. This is necessary for me to recover specific books for later analysis. As long as I have the books named like this in the Project Explorer I am more than happy :)

Thanks again for your help - I really appreciate it!
Drbobshepherd Posted - 01/16/2012 : 6:55:31 PM
The routine doesn't work because %H changes whenever you activate a workbook. There may be a missing line that should activate the window assumed by the code writer. I also think you could have a problem with creating windows that already exist and then deleting them. I suspect it is quicker to test for existence first, then create the window only if it doesn't already exist.

I am suggesting that you rename your imported workbooks, because your import script may be executed when other workbooks already exist with the default name "Book#". The script you showed me will apply to all workbooks in the workspace and this may not be your intention.

Since you told me that you got the smoothing routine to work, try adding the following to your script:
document -cw Raw; // Count wksheets w/ "Raw" in name
window -r %H Raw$(count+1); // Rename workbook into the "Raw" sequence.
%R=%H; // Store Raw workbook name.
%S=page.longname$; // Store sourcefile name.
%T="Smoothed"; // Define window name for smoothed results.
itest=exist(%T);
if (itest==2){
win -a %T; // Activate Smoothed Results Window.
wks.addCol(); // Add a column.
}
else {
newbook name:= "Smoothed"; // Create "Smoothed" workbook.
wks.nCols=1; // Make no. of columns = 1.
wks.col1.type=1; // Make data in col(1) type Y because all smoothed datasets are type Y.
}
nn=wks.nCols; // Get no. of columns.
col($(nn))[L]$=%S; // Label column longname with raw data source filename.
range rr=[%R]1!Col(2), tt=[%T]1!Col($(%(nn))); // Assign datasets for copying using range notation.
colcopy irng:=rr orng:=tt; // Copy.

I hope this works for you.
onsight8c Posted - 01/16/2012 : 05:33:29 AM
Hi Drbobshepherd,

I need all columns imported for further calculations. First I need all smoothed columns from all imported datafiles in one workbook to get an overview. Later I have to access some specific datasets to perform further calculations. So importing all datafiles at once would save me time. That's also the reason why renaming is not an option.

I followed your instructions and I'm familiar with most of them, but after a weekend of coding I still can't get it done...

In the forum I found this post:
http://www.originlab.com/forum/topic.asp?TOPIC_ID=4351
That's pretty much the same thing I want to do.

Using the Import Wizard and my previously generated smoothing template (which smooths one column in my txt data file [Adjacent Averaging, 30-point window] and adds the smoothed data in G(Y)), I specified the Advanced Filter Options:

cc=7; // col with smoothed data in G(Y)
win -t D; // new wks
%W=%H; // save name
del col(2); // remove Y col
nn=1;
doc -e W {
// skip if %H=%W
if("%H"!="%W") {
nn++;
%W!wks.addCol();
%W!wks.col$(nn).label$=%H;
copy %(%H,cc) %(%W,nn);
win -cd %H; // close wks ??
};
};
win -a %W; // activate %W
wks.labels(); // show col labels

When I start importing, I get this error message: "Can not copy, % is not a dataset".

I don't get it... What I definitely don't want is switching to MS Excel; writing macros for about a thousand datasets is not the nicest work to do :)
Drbobshepherd Posted - 01/12/2012 : 6:47:13 PM
I still think an import filter file (.OIF) that includes a LabTalk script is your best solution.

Use the Import Wizard on one of your data files to create your input filter. Be sure to check the box that gets the long name of your window from your input filename. The default short name is still going to be Book1. It is still not clear to me why you need all 5 columns of data since you seem to only process (smooth) one. It seems to me that all you need is 2-- one column of x-data and one column of y-data. But, if you say you need all 5, then just don't check the Partial Import box in the Wizard (BTW, complete import is default).

Use the Import Wizard to define how Origin should read the header lines, subheader lines, and data. On the "Save Filters" window, check the box to save your filter (with your description, filtername, applicable extensions, ...). Check the "Specify filter options" box, and click on Next; this will open up your filter script window in which you will write the Labtalk commands you want executed on your data.

Start small. Try something simple, like:

smooth iy:=col(3) oy:=col(6);

Then click on Finish to save your filter. Now if you drag any number of similar datafiles into the Origin workspace, your filter should automatically import your data into Origin workbooks in the format you defined, and col(3) in each book will be smoothed.

Use the Import Wizard to edit and debug your filter file, and add more LabTalk commands, such as ones to count and rename your data workbooks (e.g. Raw1, Raw2, ...), create or test for your Results window (seek Help on "Exist" and "Create"), and combine your smoothed datasets into the Results window.
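For instance, the counting, renaming and window-test steps could start out as simple as this (just a skeleton; the window names are up to you):

document -cw Raw; // Count workbooks with "Raw" in the name; the result is stored in "count".
window -r %H Raw$(count+1); // Rename the just-imported book into the Raw sequence.
itest=exist(Results); // Does the Results window already exist?
if (itest==2) {
win -a Results; // Yes - just activate it.
}
else {
newbook option:=lsname name:="Results"; // No - create it.
}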

Once you get your filter file to execute the way you want on a few of your datafiles, then you can use it to easily drag-and-drop large numbers of files from MS Explorer windows into the Origin workspace.
onsight8c Posted - 01/12/2012 : 3:08:03 PM
Thank you Drbobshepherd for your post.

I have never done any scripting with Origin - that's why I'm struggling with your solution.

There are some more difficulties:

1) partial import is not an option as I need all information of the data files

2) & 3) All files have a similar structure, but the filename is always different as it contains a time stamp. How do I count them with the "doc -cw Raw;" command? Renaming is not possible.

What I have to do is import each data file as a separate workbook, perform Signal Processing: Smooth on one specific column in each file/workbook, and then copy these smoothed data columns into one overall result workbook.

That basically means copying columns from one workbook to another via batch processing instead of copying and pasting manually... Hope this makes it clearer.
Drbobshepherd Posted - 01/12/2012 : 12:48:45 PM
This is what I have done to process multiple files-
Create a custom import filter that will:
1. import your raw data into a workbook (consider partial import),
2. count the existing similar raw data workbooks using "doc -cw Raw;" (This assumes your workbooks are named Raw#. The value is stored in a variable named "count".),
3. rename the active wbook using "win -r %H Raw$(count+1);".
4. If Results wbook doesn't exist create one.
5. Add a column to Results wbook.
6. Copy dataset from Raw wbook to Results wbook.
7. Give the new column a unique longname (I suggest the import file name).

After you save this filter, you can use it on multiple files by drag-and-drop method.
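In outline, such a filter script might look like this (the column number and the window names below are placeholders for your own data):

// Step 1 is handled by the filter itself: the raw data is imported into the active workbook.
document -cw Raw; // 2. Count existing Raw# workbooks; the result is stored in "count".
window -r %H Raw$(count+1); // 3. Rename the active workbook into the Raw sequence.
%R=%H; // Remember the Raw workbook name.
%S=page.longname$; // Remember the imported filename.
itest=exist(Results);
if (itest==2) {
win -a Results; // 4. The Results workbook already exists - activate it.
wks.addCol(); // 5. Add a column for this file's data.
}
else {
newbook option:=lsname name:="Results"; // 4./5. Otherwise create the Results workbook (it already has empty columns).
}
nn=wks.nCols; // Index of the column that will receive the data.
range rr=[%R]1!Col(3), tt=[Results]1!Col($(nn)); // 6. Source and destination ranges.
colcopy irng:=rr orng:=tt; // Copy the dataset from the Raw workbook.
Col($(nn))[L]$=%S; // 7. Give the new column the imported filename as its long name.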
onsight8c Posted - 01/12/2012 : 07:31:39 AM
Hi,

Is there a way to copy, for example, one specific column from an imported file and paste it into a result sheet?

My problem is the following:

I have about 100 data files with a similar structure of 5 columns. From each of these files I want to copy one column (e.g. column 3) and paste it into one overall sheet. The result should be one sheet containing the 100 copies of column 3.

I tried this tutorial http://qhwiki.originlab.com/~originla/howto/index.php?title=Tutorial:Batch_Processing but I didn't get more than one column in the result sheet...

Google didn't help me, so maybe one of you guys can?!

Thank you in advance!
Boris

---
OriginPro 8.5 SR1 on Win7-x64
Hideo Fujii Posted - 08/15/2011 : 1:26:21 PM
Hi yt47106,

If the number of columns in your import source data is always the same (say, two), then you can predefine a worksheet with 10 additional columns (say, 12 in total) with the names of your choice, set up the Set Column Values associations between the columns, and save the worksheet as a template for reuse. Then, when you pour data into the worksheet by importing, the columns recalculate automatically.
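For example, a minimal sketch of this approach (assuming the prepared worksheet has been saved as a worksheet template named MyCalcTemplate, that its Set Column Values formulas use Auto recalculation, and that newbook's template option is available; the file name below is also just a placeholder):

newbook template:=MyCalcTemplate; // Open a fresh workbook from the saved template (assumes newbook's template option; otherwise open the template from the File menu).
impasc fname:="C:\data\run001.dat"; // Import the new data into it; the predefined column formulas then recalculate automatically.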

Otherwise, using an import filter for the Import Wizard combined with a script may be a more flexible approach for your task. The import can be performed not only via the Import Wizard, but can also be triggered by a drag-and-drop operation, and it can deal with multiple files at once. You don't have to write the import command (e.g., impASC) yourself in addition to the script code for the operations you specified.

If your task evolves into a further application with other analysis or statistics, you may go even further and use an analysis template for batch processing...

--Hideo Fujii
OriginLab
