TOPIC REVIEW
LC_
Posted - 09/10/2012 : 11:43:15 AM
Origin Ver. and Service Release (Select Help-->About Origin): 8.5.0 SR1
Operating System: Win7
Hi
I wrote the code below, which filters around 200 columns out of a .txt file with ~1200 columns; it runs as part of the file import process. The code works well, except that the execution time seems to increase with the number of files dropped into Origin in a single run. I got intrigued when my job list of 50 cases still wasn't finished after several hours, so I decided to measure the time needed to run the script. So far I have the following running times:
1 file:   2.5 minutes/file
3 files:  2.5 minutes/file
5 files:  2.5 minutes/file
10 files: 3.8 minutes/file
15 files: 7.0 minutes/file
50 files: > 7.5 minutes/file (still running after 6 hours)
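(Not part of the original measurements, just a note on how they can be reproduced from within LabTalk: a minimal sketch using the classic sec/watch timer pair, assuming those commands are still available in 8.5 and behave as in older versions, i.e. sec resets Origin's internal timer and watch prints the elapsed seconds.)

sec;   // reset Origin's internal timer to zero (assumption: classic sec command)
// ... run the column-filter script here ...
watch; // print the time elapsed since the sec reset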
The files have different sizes (from 6 to 50 MB), but the execution time should not depend mainly on file size (I clocked the runs mostly with the larger files), which makes sense, since most of the time should be spent searching for and deleting the columns.
Any idea why this happens, and/or how to improve the script's execution time? The constraint is that I need to find the variables by their long names.
Thanks
//deletes unwanted columns
StringArray AA = {"'time'", "'Alpha'"}; // plus ~200 variables
//time, alpha, alpha_dot -> necessary to import header for export
//int ncs = wks.nCols;
int kk = 1;
//for(int ii=1; ii<=ncs; ii++) // loops through all columns
for (int ii = wks.nCols; ii >= 1; ii = ii - 1)
{
    for (int jj = 1; jj <= AA.GetSize(); jj++) // loops through all elements of AA
    {
        if (AA.GetAt(jj)$ == col($(ii))[L]$)
        {
            kk = 0;
        }
    }
    if (kk == 1)
    {
        del col($(ii));
    }
    kk = 1;
}
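For comparison, a minimal sketch of the same keep-list filter with two small changes that reduce the string-comparison work: the column's long name is read once per column instead of once per list entry, and the inner scan stops at the first match. This assumes LabTalk's break statement is available; the names and the right-to-left deletion order are unchanged.

// Sketch only: same filter, with the long name cached and an early break
StringArray AA = {"'time'", "'Alpha'"}; // plus ~200 variables
for (int ii = wks.nCols; ii >= 1; ii = ii - 1)
{
    string lname$ = col($(ii))[L]$; // read the long name once per column
    int found = 0;
    for (int jj = 1; jj <= AA.GetSize(); jj++)
    {
        if (AA.GetAt(jj)$ == lname$)
        {
            found = 1;
            break; // stop scanning once the column is known to be kept
        }
    }
    if (found == 0)
    {
        del col($(ii)); // long name not in the keep-list: delete the column
    }
}

This only cuts the constant work per column (up to ~200 comparisons x ~1200 columns per file); whether it also touches the per-file slowdown across a batch is a separate question.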