How to improve the execution time of my VI?

Thanks so much for the great suggestions. I don't quite understand how to implement your suggestion in the last paragraph, and wonder if you could provide a simple code example.

Thanks again.

Bryan.
Message 11 of 17
Hi Bryan!

I think what CMB is trying to say is to just read out the whole string at one time and parse it yourself. I'm attaching a .zip file that includes a VI and a .txt file to read. I don't know how much this will speed up your application, but you can look and see. Furthermore, I think CMB's first recommendation is the best approach (reading and writing the data as a binary file - instead of converting all of the #'s to strings). This will definitely increase the speed of the process.
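If it helps to see the text-versus-binary difference outside of a block diagram, here is a rough sketch of the idea in Python (the file names and array size are made up, and this is not the attached VI):

    import numpy as np

    data = np.random.rand(1000, 8)                     # e.g. 1000 samples x 8 channels

    # Text route: every number is converted to characters on write and parsed back on read.
    np.savetxt("data.txt", data, delimiter="\t")
    text_copy = np.loadtxt("data.txt", delimiter="\t")

    # Binary route: the doubles are written byte-for-byte, with no number/string conversion.
    data.astype("<f8").tofile("data.bin")
    binary_copy = np.fromfile("data.bin", dtype="<f8").reshape(-1, 8)

The binary route skips the formatting and parsing work entirely, which is where most of the time goes when everything is stored as text.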

Hope this helps!

Travis H.
LabVIEW R&D
National Instruments
Message 12 of 17
Well, I don't know if your analysis requires historical data or not. In other words, can you do what you need to do with just a single sample (of each channel)? If you can't, forget this idea. If you can, then do what I suggested, which was:
Read the whole file as a string, then extract from that string a line at a time, and convert that to a 1-D array of channels.



You open the file and get the EOF (which is the file length). Then you read that many bytes, as one long string, and close the file. That's only one file read, which is better than many.

If you know how many columns are in the file, you're golden - just set up a SCAN FROM STRING inside a WHILE loop. Keep track of the offset, and start each loop at an offset past the end of the last one. Each output of the SCAN function is one sample of one channel. Do your peak detection, or averaging, or whatever there.
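In rough text-language form (a Python sketch only; the file name and the per-line parsing are assumptions, and splitlines() is standing in for the offset bookkeeping that Scan From String would do):

    with open("data.txt", "r") as f:          # open once
        whole_file = f.read()                 # one read of the entire file (EOF = file length)
    # the file is already closed again: one open, one read, one close

    for line in whole_file.splitlines():      # pull out one line at a time
        if not line.strip():
            continue                          # skip blank lines
        samples = [float(x) for x in line.split()]   # one row -> 1-D array, one sample per channel
        # ... do the peak detection / averaging on `samples` right here ...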

Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


LinkedIn

Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 13 of 17
This approach rings a bell.

I did something like this before (at a previous employer), where I had to read in many small data sets from a spreadsheet-type file. In the end I got the best results by reading in N lines at a time (N < total number of lines, to reduce the memory overhead) and parsing the text myself, as has just been suggested. By sticking this N-line read in a while loop that checks for EOF, you can implement this without even knowing how many "lines" of data you have in the file. I noticed a significant speed increase, but a lot of that was probably because the files I was opening were on the network.

I don't have an example to hand; the code is at my old job...
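It was something along these lines, though (a rough sketch only; the chunk size, file name, and parsing are arbitrary):

    N = 1000                                   # lines per chunk; pick it for your memory budget

    with open("data.txt", "r") as f:
        while True:
            chunk = [line for line in (f.readline() for _ in range(N)) if line]
            if not chunk:
                break                          # readline() returned nothing: EOF reached
            for line in chunk:
                values = [float(x) for x in line.split()]
                # ... parse / process this small data set here ...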

Hope this helps (even a little)

Shane
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 14 of 17
What's typically a good approach is to use a combination of disk streaming and buffering.

Put all the data in one file and then just open it once, read and process the data in chunks, and close when you have processed all the data.

This way you reduce the number of file operations and the use of memory, making the software fast and memory efficient.
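As a rough text-language sketch of that pattern (the file names and chunk size here are placeholders):

    CHUNK_BYTES = 64 * 1024                    # buffer size; tune it for your data rate

    with open("input.dat", "rb") as src, open("output.dat", "wb") as dst:   # open each file once
        while True:
            chunk = src.read(CHUNK_BYTES)      # read the next buffer-full
            if not chunk:
                break                          # end of file
            processed = chunk                  # ... the real conversion/processing goes here ...
            dst.write(processed)               # stream the result straight back out
    # both files are closed exactly once, after all the data has been processed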

I used this technique on a file converter we had that used to load an entire file, convert it and then write the converted file. Conversion time was reduced by 90%.
Message 15 of 17
Hello All,

Thanks for all your replies. I didn't expect to receive so many good ideas over the past few days. As a beginner programmer, I especially appreciate benefiting from such in-depth analysis. I haven't had time to try to implement many of your suggestions yet (an urgent side project and Chinese New Year are my excuses :)). I shall post a summary of the effects of the different approaches after I try them.

Bryan.
Message 16 of 17
Hi All,

This is a much-delayed reply to all of you who have helped on this subject. If you still recall, my program needs to either open and read 500 files or read a single huge file (>3 MB). I ended up with the multiple-file approach, using a simple low-level open-read-close operation inside the loop. This turned out to be pretty fast compared to my earlier attempt using Read From Spreadsheet File.vi, which slowed me down by almost a factor of ten. Later I tried the single-file approach, but it has to handle a huge array, and my code was not easy to adapt to reading sections of that array, so this approach was 2-3 times as slow as the first one. I still have to try the binary file handling later. Thanks again for all the helpful ideas.
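In rough text-language form (a sketch only, with made-up file names), what the per-file loop boils down to is:

    results = []
    for i in range(500):                               # one pass per data file
        name = "run_%03d.txt" % i                      # hypothetical file name
        with open(name, "r") as f:                     # open
            text = f.read()                            # read the whole small file
        # the file is closed as soon as the "with" block ends
        rows = [[float(x) for x in line.split()]
                for line in text.splitlines() if line.strip()]
        results.append(rows)                           # ... or reduce to whatever the analysis needs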

Lately I have been stuck on another challenge that turns out to be a little harder to solve. If you are interested in taking a look, the link is: http://forums.ni.com/ni/board/message?board.id=170&message.id=107848

regards,

Bryan.
Message 17 of 17