Need to improve speed when graphing large arrays

Earl,
 
I don't have DAQmx or IMAQ installed, so I cannot really do much with your code. Still, here are some general observations.
  1. Your image data is only U8, so I think it would be much more efficient to do the scaling (contrast/brightness) in the upper loop and queue the data as U8. This will reduce the queue data by a factor of 8 (!!!). The scaling is such a simple VI, maybe you want to flatten it into the diagram or at least set it to subroutine priority (see the sketch after this list).
  2. I am not sure why you execute "set image size" with every iteration. Doesn't that belong before the loop?
  3. "Array to Image" belongs inside the case structure containing the image terminal. It is of no use to convert unless you actually display, right? (10x less work!)
  4. It would seem more logical to display the image containing the new line, so you should tap into the 2D array after, not before, the line replacement.
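In case a text version of points 1-4 helps, here is roughly the structure I have in mind, written as a Python/numpy analogue rather than LabVIEW (the sizes, the scaling function, and the "display every 10th row" choice are placeholders of mine, not your actual code):

```python
import queue
import threading
import numpy as np

LINE_LEN = 2048     # placeholder: samples per acquired line
N_LINES = 500       # placeholder: lines per image
line_queue = queue.Queue()

def scale_to_u8(line, brightness=0.0, contrast=1.0):
    """Contrast/brightness scaling done in the producer; output is U8 (1 byte/pixel)."""
    return np.clip(line * contrast + brightness, 0, 255).astype(np.uint8)

def producer():
    for _ in range(N_LINES):
        raw = np.random.rand(LINE_LEN) * 255      # stand-in for one acquired DBL line
        line_queue.put(scale_to_u8(raw))          # queue U8, not DBL: ~8x less queue data
    line_queue.put(None)                          # sentinel: acquisition finished

def consumer():
    image = np.zeros((N_LINES, LINE_LEN), np.uint8)   # 2D array allocated once
    row = 0
    while True:
        line = line_queue.get()
        if line is None:
            break
        image[row, :] = line                      # the "Replace Array Subset" step
        row += 1
        if row % 10 == 0:
            # only here would the array be converted and displayed, i.e. keep
            # "Array to Image" inside the case that actually updates the display
            _ = image.copy()

threading.Thread(target=producer).start()
consumer()
```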

Attached are a few quick edits. Sorry, since I don't have the drivers/toolkits, I cannot wire one of the property nodes, so just hook 'em up again (see diagram comments). Since I am working on a laptop, I shrunk the diagram a bit for my convenience. Just ctrl+drag on some strategic places to "inflate" it again. 😉

Please try and see if there is an improvement. You might also want to disable debugging on the main VI for performance reasons.

Are you sure you really need the queue at all? Have you tried putting everything in one loop and running things synchronously?

I am sure many more improvements are possible. What determines the loop speed of the upper loop? 😄


Message 11 of 66
(2,951 Views)

Shane,

The code was initially in only one loop, but I would miss every other line (1D array) because the loop would process the first line & miss the second one in doing so. Hence the classic "Producer/Consumer" loop.

In response to your other questions:

 

1. Your image data is only U8, so I think it would be much more efficient to do the scaling (contrast/brightness) in the upper loop and queue the data as U8. This will reduce the queue data by a factor of 8 (!!!). The scaling is such a simple VI, maybe you want to flatten it into the diagram or at least set it to subroutine priority.

I have tried converting to U8 right away as you suggest, but found that it slows the producer loop, so I miss lines as discussed above.

2. I am not sure why you execute "set image size" with every iteration. Doesn't that belong before the loop?

For my application, the array size is changed depending upon the equipment acquisition speed; the slower the acquisition, the larger the file.

3. "Array to Image" belongs into the case structure containing the image terminal. It is of no use to convert, unless you actually display, right? (10x less work!).

Ideally I would like to redisplay 1 line at a time. I have redisplayed up to 6 lines at a time to save overhead & processing time, but this has limitations. Most of the time is spent on "Replace Array Subset".

4. It would seem more logical to display the image containing the new line, so you should tap into the 2D array after, not before, the line replacement.

I tried it both ways; it doesn't seem to make much difference either way.

As you can see, the code is a prototype & far from finished. I have limited & tested the loops while monitoring memory usage & CPU performance. About 30% of the CPU time is spent in the array handling, about 25% is spent on displaying the data (IMAQ), and the balance goes to acquiring the data & general overhead.

Thanks for your help, but the main topic was the inefficient manner in which LV handles arrays. NI has acknowledged to me personally & in telephone calls that this is a problem. As the last instructor put it, "NI recognizes that LV does not handle arrays efficiently. R & D is working on it".

If you do a search on this forum for "Large Arrays", there are no fewer than 30 pages (10 responses per page) on this topic, many of them commenting on how slow the application becomes when handling large arrays.

In any case, I have sped up my application considerably. Instead of using "Replace Array Subset" to replace only one line, I have it replace 5 lines at a time (1D array x 5), as sketched below. This has reduced my overhead by about 20%, & the "number of elements" in the enqueue scheme has dropped to zero rather than almost 1,000 in the prior version.
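In text form, the batching idea looks roughly like this (a numpy analogue with placeholder sizes, not my actual LabVIEW code):

```python
import numpy as np

image = np.zeros((1000, 2048), dtype=np.uint8)   # preallocated 2D array (placeholder size)
new_lines = np.random.randint(0, 256, size=(5, 2048), dtype=np.uint8)  # 5 acquired lines

# one "Replace Array Subset"-style call writes a 5-row block in place,
# instead of paying the per-call overhead once per line
row = 100
image[row:row + 5, :] = new_lines
```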

I have been coding since January of this year, but I am finding that every language has its limits; arrays are one of LV's.

 

Happy Wiring.

Earl

 

 

 

Message 12 of 66
(2,941 Views)
Check out the tutorial Managing Large Data Sets in LabVIEW.  It addresses many of the issues brought up by this thread.  You can handle large data sets in LabVIEW without slowing yourself down unnecessarily.  I have seamlessly displayed data sets which switched between 2000 points and 20 million points using max-min decimation and chunking algorithms.  If you have used the NI-SCOPE Soft Front Panel (ships with NI-SCOPE), you have seen it in action.  The key is data management.  The tutorial will give you the info you need to apply the principles to your application.  If you continue having problems, post again and we will try to help.
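To give a flavor of the max-min decimation idea in text form (a simplified numpy illustration of the principle, not the tutorial's or NI-SCOPE's actual code): each chunk of the raw record is reduced to its minimum and maximum, so peaks survive while the plotted array shrinks to roughly twice the number of pixels you can display.

```python
import numpy as np

def max_min_decimate(data, n_out):
    """Reduce `data` to 2*n_out points while preserving the signal envelope."""
    chunk = len(data) // n_out
    trimmed = data[:chunk * n_out].reshape(n_out, chunk)
    out = np.empty(2 * n_out, dtype=data.dtype)
    out[0::2] = trimmed.min(axis=1)   # per-chunk minimum
    out[1::2] = trimmed.max(axis=1)   # per-chunk maximum
    return out

big = np.random.randn(20_000_000)       # e.g. a 20-million-point record
small = max_min_decimate(big, 2000)     # ~4000 points are plenty for any graph
```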
Message 13 of 66
(2,923 Views)
I'd suggest not using C; you'll find more headaches using a C DLL than finding a way in LabVIEW, and you actually end up with more memory operations by bringing C into the picture.

Anyway, LabVIEW is not very efficient at handling memory blocks, but on the other hand it is easy to use. Decimation is a way to reduce the problem, but it does not eliminate it entirely. You might also want to check your display settings and your computer's performance settings. For example, you probably don't need the highest color depth or the fancy shadow effects.

-Joe
Message 14 of 66
(2,911 Views)


2. I am not sure why you execute "set image size" with every iteration. Doesn't that belong before the loop?

eweltmer wrote:
For my application, the array size is changed depending upon the equipment acquisition speed; the slower the acquisition, the larger the file.



In your code, the terminal of the control that sets the image size is NOT inside the loop! Even if you changed the value at run time, it would have no effect, because the terminal only gets read once at the beginning. Think dataflow! 😄

The subVI "set image size" and its control terminal belong on the same side of the loop boundary in any case. 
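If the dataflow point is unclear, a loose plain-Python analogy (just an illustration; read_image_size_control is a made-up stand-in for the front-panel control): a terminal left outside the loop behaves like a value captured once before the loop starts.

```python
def read_image_size_control():
    """Hypothetical stand-in for reading the front-panel control's terminal."""
    return 512

size = read_image_size_control()   # terminal outside the loop: read exactly once
for i in range(3):
    # every iteration sees the captured value; changing the control now has no effect
    print("iteration", i, "uses image size", size)
```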

Message 15 of 66
(2,896 Views)

I think there might be a little bit of confusion about the issue with arrays and memory allocation in LabVIEW.  The dataflow paradigm of LabVIEW removes much of the difficulty involved with managing memory.  LabVIEW also tries to minimize the reallocation of memory, since it can be an expensive operation.  Reallocation allocates a larger memory location for the data and moves the contents from the previously allocated memory to the new location.  National Instruments recommends fixing the memory issues by reducing or preventing situations where LabVIEW must reallocate memory.

If you create an array of data by constantly calling Build Array to concatenate a new element, the VI must continually resize the buffer (in each loop iteration) to make room for the new element and append it.  The execution speed is slow, especially if the loop executes many times. (See Build Array Example.JPG, attached.)
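For readers who cannot open the attachment, a rough Python analogue of that anti-pattern (illustrative only; the attached example is a LabVIEW diagram): growing the array one element per iteration forces a reallocation and copy on every pass.

```python
import numpy as np

data = np.empty(0)                 # array starts empty
for i in range(100_000):
    # each append allocates a new, larger buffer and copies the old contents into it,
    # so this loop is noticeably slow -- which is exactly the point
    data = np.append(data, float(i))
```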

If you must conditionally add values to an array but can determine an upper limit on the array size, you might want to consider preallocating the array and using Replace Array Subset to fill it.  The array is created only once, and Replace Array Subset can reuse the input buffer for the output buffer.  The performance of this is very similar to auto-indexing.  It avoids resizing the output array at every iteration, since the total size was specified up front with Initialize Array.  If you use this technique, be careful that the array in which you are replacing data is large enough to hold the resulting data, because Replace Array Subset does not resize the array for you.  An example of the process is shown in the attachment Replace Array Subset Example.JPG.
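And a matching sketch of the preallocate-and-replace approach (again a Python analogue of the attached LabVIEW example, with a made-up size): the buffer is allocated once and elements are written in place.

```python
import numpy as np

N = 100_000
data = np.zeros(N)        # "Initialize Array": one allocation up front
for i in range(N):
    data[i] = float(i)    # "Replace Array Subset": reuses the same buffer, no resizing
# note: the caller must ensure N is large enough, since replacing never grows the array
```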



Mark Walters
Application Engineer
National Instruments




Message 16 of 66
(2,873 Views)

Hi Mark,

 

Well said.

As I said in my previous message above, I initialize the array and then use the "Replace Array Subset" VI as opposed to the "Append Array" VI.

It is about as efficient as it gets given the array size.

I think the whole point of this thread is being missed: LV does not handle large arrays very well.

There are techniques to improve the efficiency, as you describe, by using the "Replace Array Subset" VI instead of the "Append Array" VI.

But much of the discussion in these forums is dedicated to large arrays slowing down the application.

This is precisely what I have.

We (you & I) discussed this when you gave the beginner's course here in Santa Ana, CA.

After speaking with some of the other NI applications engineers such as yourself, I thought it might be prudent to write some code in "C" and then use a "CIN" to execute it.

I have made the application much more efficient without using "C" code.

Instead of replacing only one 1D array at a time, I replace 5 to 10 1D arrays at a time, still using the Replace Array Subset VI.

Apparently much of the overhead goes into addressing the array & replacing the data.

As expected, LV would rather replace blocks of data than bits of data.

 

Best Regards,

 

Earl

 

 

Message 17 of 66
(2,870 Views)

Hi Earl,

I will grant you that dealing with arrays in LV can be tricky, but I will not easily agree with the people who have told you LV can't handle large array operations.

Please see this discussion and my reply #71

http://forums.ni.com/ni/board/message?board.id=170&message.id=142706&view=by_date_descending&page=4

for an example of using arrays efficiently.

I sometimes think writing efficient code in LV is like surfing.

It is possible to think about how the undulations in the board are interacting with the waves, but doing it while you are riding a curl is a bad idea.

Somewhere along the line you have to abstract the idea and express it as efficient code.

The coding challenges (like the one discussed in the above cited thread) are an excellent way to learn how the "waves" (wires) act as you explore the "What if I did it THIS way?" questions.

Ben

 

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper
Message 18 of 66
(2,860 Views)
Dear Ben,
 
I have never said that "LV can't handle large array operations".
 
I said "LV does not handle large arrays very well".
 
This is not my inexperienced opinion but the opinion of the following NI applications engineers:
 
Mark Walters
Brooke Waite
Somendra Sreedhar
 
As I said on page 1 of this thread, I have asked these engineers personally.
 
It is not that I do not appreciate the more experienced programmers' comments, but I must defer to NI's applications staff as well as the 30 or so pages in the Discussion Forum about LV slowing down when using large arrays.
 
Anyway, as I said three responses ago, I have my answer.
 
NI has looked at my code and suggested little or no changes.
 
I am not LV bashing, just making a point.
 
Thanks to All.
 
Earl
 
 
 
 
Message 19 of 66
(2,853 Views)
In my previous post, I described two situations in which LabVIEW allocates memory for arrays.  The first way (Build Array Example.JPG) handles large arrays poorly by reallocating the memory each time Build Array is called.  The second way (Replace Array Subset Example.JPG) handles memory more efficiently by preallocating the array and avoiding the resizing of the output array at every iteration.

There are ways to handle large arrays poorly in LabVIEW, but that does not mean you cannot handle large arrays well in LabVIEW.  By getting rid of memory reallocations, you can optimize the handling of large arrays, as seen in the Replace Array Subset Example.


Mark Walters
Application Engineer
National Instruments

Message 20 of 66
(2,823 Views)