03-18-2009 09:42 PM - edited 03-18-2009 09:43 PM
Hi,
Yeah, about the column width: it will always be the same for the same file.
I have attached the corrected file again. Thanks for the explanation here, and for everything!
03-19-2009 07:48 AM - edited 03-19-2009 07:49 AM
If I were in your shoes, I would read one line (or several lines) and shuffle the content, then write the content back to a new file. This can be done with a simple while loop, since LabVIEW keeps track of the current file position. See the picture. When you reach the end of the file you will get error 4. Remember to append the end-of-line constant to the shuffled data, because this constant is stripped from the output of "Read from Text File".
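Since LabVIEW is graphical, here is a rough Python sketch of the same loop idea: read each line, shuffle its content, and write it to a new file with the end-of-line re-appended. The file names, the tab delimiter, and the per-line column shuffle are illustrative assumptions, not from the original VI.

```python
import random

def shuffle_columns_to_new_file(src_path, dst_path, seed=0):
    """Read src line by line, shuffle each line's columns, write to dst.

    Hypothetical sketch: assumes tab-separated columns. Python's file
    iteration stops cleanly at end of file (LabVIEW signals this with
    error 4 instead).
    """
    rng = random.Random(seed)
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            cols = line.rstrip("\n").split("\t")   # EOL is stripped here
            rng.shuffle(cols)                      # shuffle this line's content
            dst.write("\t".join(cols) + "\n")      # re-append the end-of-line
```

The key point the post makes carries over: the read position advances automatically on each iteration, so the loop body stays trivial.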

03-19-2009 08:57 AM
In essence, that was precisely what I had done in my quick and dirty VI. I had chosen to use Read from Binary File rather than reading lines so that I could get an array of bytes directly. This allowed me to easily pick off the columns I wanted. You could, of course, convert the string that is read by Read from Text File into an array using the Spreadsheet String to Array, but reading the line as an array eliminates this step.
I found that the limiting factor is file I/O. On a quiet system I was able to read 750K+ lines and extract the six columns in about 30 seconds. Not too bad.
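A rough Python sketch of the "read the whole file as bytes, then pick off the wanted columns" approach described above. The function name, the tab delimiter, and the one-shot binary read are assumptions for illustration; the LabVIEW version uses Read from Binary File to get the byte array directly.

```python
def extract_columns(path, wanted, delimiter=b"\t"):
    """Read the file in one binary pass and keep only the wanted columns.

    Avoids a separate string-to-array conversion step per line, mirroring
    the trade-off discussed in the post.
    """
    with open(path, "rb") as f:
        data = f.read()                     # single I/O call for the whole file
    rows = []
    for line in data.splitlines():
        fields = line.split(delimiter)
        rows.append([fields[i] for i in wanted])  # pick off the wanted columns
    return rows
```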
03-19-2009 09:23 AM
If the data is numeric only, I would definitely write it to the new shuffled file in this format (a 2D array, binary), not ASCII format.
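As a sketch of that suggestion, here is one way to round-trip a numeric 2D array in binary rather than ASCII, in Python. The little-endian double format and the small row/column header are my own illustrative choices; LabVIEW's Write to Binary File stores a flattened array with an optional size header in a similar spirit.

```python
import struct

def write_2d_binary(path, rows):
    """Write a rectangular 2D list of floats in a compact binary layout."""
    n_rows, n_cols = len(rows), len(rows[0])
    with open(path, "wb") as f:
        f.write(struct.pack("<ii", n_rows, n_cols))      # size header
        for row in rows:
            f.write(struct.pack("<%dd" % n_cols, *row))  # doubles, little-endian

def read_2d_binary(path):
    """Read back the array written by write_2d_binary."""
    with open(path, "rb") as f:
        n_rows, n_cols = struct.unpack("<ii", f.read(8))
        return [list(struct.unpack("<%dd" % n_cols, f.read(8 * n_cols)))
                for _ in range(n_rows)]
```

Binary storage skips number-to-text formatting on write and text-to-number parsing on read, which is where the ASCII route spends its time.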

03-20-2009 01:40 AM
Erm... I think I will try replacing the "Read from Text File and Spreadsheet String to Array" combination with the Read File shown above.
Thanks for your advice, guys. 🙂
03-20-2009 04:42 AM
But I have found that converting with Spreadsheet String to Array takes only 0.032 seconds, and Array to Spreadsheet String takes less than 0.000X seconds.
For my experiment, there are 250 files to extract from, each with 1K rows.
From each file I have to extract the columns:
(0,10,20,30,40,50) => 1st file
(1,11,21,31,41,51) => 2nd file
(2,12,22,32,42,52) => 3rd file
Extracting the columns of all 250 files will produce 250 files × 3 column files = 750 files.
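The column pattern listed above is regular, so it can be computed rather than typed out. A small hypothetical helper (names and defaults are my own, matching the six columns spaced 10 apart in the post):

```python
def columns_for_file(k, n_groups=6, stride=10):
    """Return the column indices for the k-th file (0-based):
    k, k+10, k+20, ... following the pattern in the post."""
    return [k + stride * j for j in range(n_groups)]
```

For example, file 0 gets columns [0, 10, 20, 30, 40, 50] and file 2 gets [2, 12, 22, 32, 42, 52], matching the list above.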
The entire process took me 32 seconds.
Spreadsheet String to Array for 250 files = 250 × 0.032 = 8 seconds.
So the other processing took up 24 seconds.
And even if the "Read File" VI took no time at all to read, the most I could save is probably only those 8 seconds, since the rest of the operation still involves the array VIs...
Is that correct?
03-20-2009 05:13 AM

03-20-2009 05:24 AM - edited 03-20-2009 05:24 AM
Here I paste it...
03-20-2009 08:54 AM
You seem to have changed the premise of the question. Initially you said you were dealing with one file with 840K rows. Now you say you're dealing with 250 files, each with 1K rows. That's a big difference.
It seems that in your inner loop you are extracting columns based on some sort of input string, though it's not clear since we can't see the rest of the code. It would be easier if you actually uploaded the VI. If my assumption is correct, then my guess is that you can replace the while-loop with a for-loop in this case.