10-14-2011 02:57 PM
Hi Mark,
This looks like a nice method I'd like to try!
I'm working with LabVIEW 8.6.1 and can't open your VIs; any chance you can post something I can open?
Do you think I can use Read from Text File (with the Read Lines option) instead of Read from Binary File in the read loop?
Regarding your remark about the Read Bytes VI - you're right, of course. The problem is I have to write the first 2 columns to the new file first, and only then the other 2. So I was hoping to find a way to read the whole file in chunks, without running out of memory, and then write the data in the correct order. Since, as you mentioned, the Read Bytes VI still needs a lot of memory, I guess I'll need to read the big file twice, chunk by chunk: read all the data each time, but write only the 2 necessary columns on each pass. Seems ugly, but I don't have a better idea...
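For anyone following along without LabVIEW, the two-pass idea above can be sketched in Python (a stand-in for the VIs; the file layout of 4 tab-separated columns is assumed, and the function and chunk size are illustrative):

```python
from itertools import islice

def append_columns(src_path, dst_path, columns, chunk_lines=10_000):
    """Append only the given tab-separated columns of src to dst,
    reading chunk_lines lines at a time so memory stays bounded."""
    with open(src_path) as src, open(dst_path, "a") as dst:
        while True:
            chunk = list(islice(src, chunk_lines))
            if not chunk:
                break
            for line in chunk:
                fields = line.rstrip("\n").split("\t")
                dst.write("\t".join(fields[c] for c in columns) + "\n")

# Two passes over the big file: columns 0-1 first, then columns 2-3.
# append_columns("big.txt", "out.txt", [0, 1])
# append_columns("big.txt", "out.txt", [2, 3])
```

Each pass touches only `chunk_lines` lines at a time, so the memory cost stays flat even though the file is read twice.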
10-14-2011 03:48 PM
Here is the 8.6 version.
As for using Read Lines, you could, but I think the binary read will be faster. Whether you read n lines or a big chunk, you will need to parse the data either way. The parsing example I included already looks for lines of text and will take care of partial lines at the end of a read. There is no benefit to using the Read Lines VI, and it may actually be slower. Using my approach you can specify the chunk size, such as 10K, and you will get fast reading of the file.
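The partial-line handling described above can be sketched outside LabVIEW as well. A hedged Python equivalent (the function name and chunk size are illustrative, not from the posted VIs): read fixed-size binary chunks and carry any incomplete line at the end of a chunk over to the next read.

```python
def read_lines_in_chunks(path, chunk_size=10 * 1024):
    """Yield complete lines from a file read in fixed-size binary chunks,
    carrying a partial line at the end of one chunk into the next."""
    leftover = b""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                if leftover:              # file didn't end with a newline
                    yield leftover.decode()
                break
            data = leftover + chunk
            lines = data.split(b"\n")
            leftover = lines.pop()        # last piece may be incomplete
            for line in lines:
                yield line.decode()
```

Because only one chunk plus one partial line is ever held in memory, this scales to files far larger than RAM, which is the point of the chunked approach.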
10-16-2011 11:47 AM
Hi Mark,
I ran some preliminary tests and it really looks stable!
I now have all the options.
As Tim said, it depends what works best for the application.
I think I'll perform some tests with some big files I have and pick the best method accordingly.
Thanks a lot for all your help!!!
Mentos.
12-21-2012 11:05 AM
Thank you for your advice.
12-21-2012 01:19 PM
Depending on how the data is formatted I'd use a different approach, but if, as you mention, you have lines of 4 tab-separated columns of text, I'd definitely go with Read Text File as lines. That way I'll never run into the issue of ending a read mid-line.
/Y