LabVIEW

Large arrays and loop speed

I'll apologize in advance for not posting a VI, I'm currently away from any computer where I could do so.

 

Anyway, I have a situation where I need to perform operations on individual columns of data from two separate large arrays (100 rows by 36,000 columns) every 10 ms.  Basically, my process is to measure a voltage from a cDAQ, find where that value lies in a column of the first array, interpolate a corresponding value in the matching column of the second array, and output that value back to the DAQ on every 10 ms iteration.  This needs to be performed on all of the columns.  When I run my program with the entire array, my loop time is on the order of 65 ms, while when I run it with a smaller array (100 rows by 600 columns is what I used), the speed is just fine.  These large arrays are calculated by a MATLAB script in a previous state and passed over a shift register to the state where they are used (I'm using the producer/consumer structure).
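Since LabVIEW is graphical, here is a rough Python/NumPy analogue of the per-iteration lookup described above, just to pin down the operation.  The array contents, seed, and function name are all hypothetical stand-ins; the only assumption carried over from the post is that each x-column is monotonically increasing, which linear interpolation requires.

```python
import numpy as np

# Stand-ins for the two MATLAB-generated arrays (100 rows x 36,000 columns).
rng = np.random.default_rng(0)
x_curves = np.sort(rng.random((100, 36000)), axis=0)  # each column ascending
y_curves = rng.random((100, 36000))

def lookup(voltage, col):
    """One 10 ms iteration: locate `voltage` on curve `col` and interpolate."""
    return np.interp(voltage, x_curves[:, col], y_curves[:, col])

value = lookup(0.5, 0)  # measure -> interpolate -> output
```

Note that only one column pair is touched per iteration, so the cost of the lookup itself should not depend on the total number of columns.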

 

I've already tried some things to help speed up my program.  I've stopped using the DAQ Assistant Express VI in my loop and started passing tasks over the shift register.  I also tried exporting the spreadsheet files from MATLAB and saving them to the hard drive before referencing them in LabVIEW, but that didn't change anything.  I don't have a strong background in programming, but I get the feeling that this is a RAM issue.  Does anyone have any suggestions for what I can do to get my program up to speed?

 

Thanks,

 

Josh

Message 1 of 8

What do the 100 rows represent?   What do the 36,000 columns represent?  What needs to be performed on every column and how often?  You need to interpolate a value on each of 36,000 columns in every 10 msec?  And how would you output 35,999 values out of the cDAQ?

 

What is the datatype of this array?  100 x 36,000 is 3.6 million elements (x 8 bytes each if they are all doubles, or about 29 MB).  That isn't a huge amount of memory.

 

Ultimately you are going to have to post some code to get some help.  You need to do some benchmarking.  See how long each part of your code is taking.  What are you trying to do with your algorithm?  There may be a much better way to do this and get the result you are looking for.  Perhaps it is a matter of breaking up your array into smaller arrays and allowing the interpolation functions to operate on the smaller array portions simultaneously.

Message 2 of 8

The columns in the first array are x values and the columns in the second array are y values.  Together they make up curves, where the first curve is the first column from each array plotted against the other (which is a small part of what I'm doing).  Every 10 ms I'll be measuring a voltage coming into the cDAQ, finding that voltage's position in a column of the first array, interpolating a corresponding value (a current value) in the matching column of the second array, and outputting that value back to the DAQ.  In addition, after every 100 ms, a different pair of curves is used: when the loop first runs, column 0 from each array is retrieved and 10 readings, interpolations, and outputs are performed; then column 1 from each array is retrieved and 10 more readings, interpolations, and outputs are performed; and so on.
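The timing scheme described above (10 readings per curve at 10 ms each, then advance to the next column pair) can be sketched as a simple iteration-to-column mapping.  This is an illustrative Python sketch; the constant names are made up.

```python
READS_PER_CURVE = 10   # 10 readings x 10 ms = 100 ms per curve
N_CURVES = 36000       # one curve per column pair

def curve_for_iteration(i):
    """Map loop iteration i to the column (curve) index currently in use."""
    return (i // READS_PER_CURVE) % N_CURVES
```

Framed this way, the column index only changes once every 10 iterations, so the expensive column extraction only needs to happen on those boundaries, not on every 10 ms tick.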

 

All of the values are doubles.  I'm also new to the concept of benchmarking.  I do have millisecond timers set up so that I can see how long each iteration takes, but I'm not sure how to do that for separate parts of the loop.  I determined that the large array was my problem by disabling other parts of my code and seeing how that affected the overall loop speed.  I went through and disabled my graphs, DAQ VIs, and my interpolation subVI individually, and none of them sped up the program sufficiently.  (Though disabling the interpolation did cut my loop time in half.  I'll be looking at that tomorrow.)  I'll try to post some code tomorrow.

Message 3 of 8

Benchmarking is a good use for a flat sequence structure.  If you have a piece of code you want to time, put it in the middle frame of a three-frame flat sequence.  Put a millisecond timer in the first frame and another in the third frame.  This setup takes a timestamp immediately before and immediately after the operation of interest, and subtracting the two shows how long it took.
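The same before/after pattern, sketched in Python for readers outside LabVIEW (the helper name is made up): take a timestamp on either side of the code of interest and subtract.

```python
import time

def benchmark(fn, *args):
    """Time one call to fn(*args); analogue of the flat-sequence pattern."""
    t0 = time.perf_counter()          # first frame: timestamp before
    result = fn(*args)                # middle frame: code under test
    elapsed_ms = (time.perf_counter() - t0) * 1000.0  # third frame: after
    return result, elapsed_ms

result, ms = benchmark(sum, range(1_000_000))
```

Wrapping each candidate piece of the loop this way makes it easy to see which part actually dominates the 65 ms iteration time.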

Message 4 of 8

Post some code so we can help.

 

An Action Engine initialized with the array and a sorted index array (a separate array!) should run fast, in less than 1 usec on modern PCs.

 

Show us the code.

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI
Message 5 of 8

Hi Josh,

 

It sounds like your loop is a bit overburdened with these large arrays, and slimming down the amount of data that needs to be involved in each iteration might help. I'm in agreement with the recommendations Ben and RavensFan have made thus far.

 

If your program requires access to the entirety of the array on each iteration, we may have problems because of its size.  If only a subsection is required, however, it is much more economical in terms of speed to pass only the required elements through the loop.  Looking at the code will make it easier for us to recommend where you should go next.
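The suggestion above can be sketched in Python terms (the names are illustrative): extract the current column pair once per 100 ms window, so each 10 ms iteration works on a 100-element slice instead of the full 100 x 36,000 array.

```python
import numpy as np

# Stand-ins for the two large arrays.
xs = np.sort(np.random.default_rng(1).random((100, 36000)), axis=0)
ys = np.random.default_rng(2).random((100, 36000))

col = 5
x_col = xs[:, col].copy()  # small working copies, refreshed every 100 ms
y_col = ys[:, col].copy()

out = np.interp(0.5, x_col, y_col)  # fast 10 ms work on the slice only
```

This keeps the large arrays out of the fast inner loop entirely, which is the economy being described.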

Verne D. // Software R&D // National Instruments
Message 6 of 8
Sorry for the slow response.  In the process of putting together a demo VI for you to look at, I discovered the source of my problem.  I was bundling the large arrays into the shift-register cluster together with some other quantities that changed value on every iteration.  Once I separated them into two separate clusters, everything worked fine.  Thanks, everyone, for your willingness to help me out.
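For readers hitting the same issue, here is a rough Python analogue of the fix (the structures and names are illustrative, not LabVIEW APIs): keep the large, unchanging arrays in their own structure, and update only the small, frequently-changing state, rather than rebuilding one big bundle every iteration, which in LabVIEW can force a copy of the whole cluster.

```python
import numpy as np

big = np.zeros((100, 36000))           # large constant data: built once, never rebundled

state = {"col": 0, "last_out": 0.0}    # small per-iteration state, kept separate

def iterate(state):
    """Update only the small state; `big` is referenced, never copied."""
    state["col"] = (state["col"] + 1) % big.shape[1]
    state["last_out"] = float(big[0, state["col"]])
    return state

state = iterate(state)
```

Separating the constant bulk data from the mutable bookkeeping is exactly what splitting the cluster accomplishes.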
Message 7 of 8

I'd love to see a demo as well.  The array size (100 x 36,000) isn't really "massive" for a modern PC, but this KB article can give you some idea of how LabVIEW handles data and how to optimize code for large data sets.  This info (which I don't often recommend, as it promotes poor style) can also help.  Check out the In Place Element structure in the LabVIEW help, as well as the rest of the memory management functions that can reduce overhead.

 

A lot of great ideas can come from this thread, so please keep it going.  (I love a good learning experience, and some of the other posters on this thread have taught me a BUNCH about this type of problem too.)  It could be very instructive for a lot of users.


"Should be" isn't "Is" -Jay
Message 8 of 8