LabVIEW

Are there guidelines on which tool (e.g. Insert Into Array vs. Build Array) is more efficient, so code runs in less time?

My colleague and I played around with some code yesterday. We took a file with 20k datapoints and turned them into some arrays to interpret them and show them in some waveforms and the like. While trying to get it to work a bit faster we experimented with a few things, and when we exchanged the Build Array function for Insert Into Array in a loop with a shift register, our processing time fell from 10 seconds to basically 1 second!

 

So that got us thinking: are there any guidelines somewhere on which tools in Labview are the most time-efficient?

 

Thank you for your input ^^

Message 1 of 5

Hi Thien,

 


@ThienEnthusiast wrote:

My colleague and I played around with some code yesterday. We took a file with 20k datapoints and turned them into some arrays to interpret them and show them in some waveforms and the like.

While trying to get it to work a bit faster we experimented with a few things,

 

Thank you for your input


The description of what you did is hard to follow…

 

Why don't you attach your code???

 


@ThienEnthusiast wrote:

are there any guidelines somewhere on which tools in Labview are the most time-efficient?


There is a paragraph on efficient code in the LabVIEW help. Did you read it?

 

IMHO I would prefer Build Array over Insert Into Array.

Insert Into Array should only be used when you insert elements into the middle of an array…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 5

Hello Thien,

 

It is known that using the indexing output of a For Loop is fast, since LabVIEW knows in advance how many elements the array will have.

If you use the Build Array node in a loop, LabVIEW can't predict the final size of the array and has to repeatedly allocate new memory (I suppose it doubles the allocation whenever it fills up). The indexing output combined with a conditional terminal will decrease the speed too, presumably for the same reason.

The fastest way is to use the In Place Element Structure, which needs a pre-initialized array (so you have to know, or guess, the size of the array beforehand), and then set the value of every element. In my experience the cost of the Replace Array Element node is the same as the in-place solution when you replace a single element.
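The allocation behaviour Dave describes is not specific to LabVIEW. Here is a minimal Python sketch of the three patterns (an analogue only; LabVIEW is graphical, and the exact growth strategy of its memory manager is an assumption here):

```python
# Python analogues of three LabVIEW array-building patterns.
# Illustrates allocation behaviour only; this is not LabVIEW code.

def grow_one_by_one(n):
    """Naive growth: every addition copies everything built so far,
    like reallocating an array on each append (O(n^2) total work)."""
    out = ()
    for i in range(n):
        out = out + (i,)   # tuples are immutable, so this copies every time
    return out

def grow_amortized(n):
    """Adaptive growth: Python lists over-allocate (roughly doubling),
    so append is amortized O(1) -- similar to a double-when-full strategy."""
    out = []
    for i in range(n):
        out.append(i)
    return out

def preallocated(n):
    """One allocation up front, then in-place writes -- the analogue of
    Initialize Array followed by Replace Array Element."""
    out = [0] * n          # single allocation
    for i in range(n):
        out[i] = i         # a plain write, no reallocation
    return out

assert list(grow_one_by_one(1000)) == grow_amortized(1000) == preallocated(1000)
```

All three produce the same array; only the number of reallocations and copies differs, which is exactly where the runtime difference comes from.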

Greets, Dave
Message 3 of 5

@daveTW wrote:

Hello Thien,

 

It is known that using the indexing output of a For Loop is fast, since LabVIEW knows in advance how many elements the array will have.

If you use the Build Array node in a loop, LabVIEW can't predict the final size of the array and has to repeatedly allocate new memory (I suppose it doubles the allocation whenever it fills up). The indexing output combined with a conditional terminal will decrease the speed too, presumably for the same reason.

The fastest way is to use the In Place Element Structure, which needs a pre-initialized array (so you have to know, or guess, the size of the array beforehand), and then set the value of every element. In my experience the cost of the Replace Array Element node is the same as the in-place solution when you replace a single element.


Some of these things are not set in stone. For instance, since the earliest days LabVIEW has preallocated output tunnels on For Loops, used array-size doubling on output tunnels of While Loops, and grown the array by a single element on each Build Array call. But in recent versions it uses a more adaptive algorithm for Build Array, similar to the output tunnel on While Loops.

 

Still, if you can estimate the array size ahead of time it is always a good idea to plan for that, for instance by initializing an array with the expected size, putting it in a shift register, and using Replace Array Element inside the loop with the necessary index. If the build-up of an array is linear I would not bother with this nowadays and would simply use auto-indexing on a For or While Loop, but for algorithms that require random access inside the loop and whose size can be estimated well in advance, Replace Array Element is the perfect solution.
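The random-access case is where appending fundamentally cannot work, because elements arrive out of order. A small Python sketch of the pattern Rolf describes (a preallocated buffer standing in for Initialize Array in a shift register, index assignment standing in for Replace Array Element):

```python
import random

def scatter_fill(indices_and_values, size):
    """Fill a buffer from (index, value) pairs that arrive in arbitrary
    order. Appending is impossible here; a preallocated buffer with
    indexed writes handles it in a single pass."""
    buf = [0.0] * size                # one allocation, like Initialize Array
    for idx, val in indices_and_values:
        buf[idx] = val                # like Replace Array Element
    return buf

# Elements arriving in shuffled order still land in the right slots:
pairs = [(i, float(i) * 2.0) for i in range(10)]
random.shuffle(pairs)
assert scatter_fill(pairs, 10) == [float(i) * 2.0 for i in range(10)]
```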

 

Basically, every time your code resizes an array (or string) it costs time. If that happens a few hundred times, you will be hard-pressed to measure a significant difference on modern hardware. If we talk about many thousands or millions of iterations, those array resizes start to really count and can make the difference between an application that feels like a slug and one that feels like a panther.

 

And no, Replace Array Element is not exactly the same as the In Place Element Structure. The In Place Element Structure can make a few more assumptions about the safety of access to elements in the array, but requires extra overhead to lock the array with a mutex each time the structure is entered and unlock it when leaving. While mutex locking and unlocking is fast, it is not free either. So replacing a single Replace Array Element function with a single In Place Element Structure merely trades more complex array handling (to prevent concurrent access) for additional mutex handling. If you can, however, put a significant part of your array manipulation inside the In Place Element Structure, you will indeed gain a measurable advantage, since you pay the mutex overhead once but benefit from the simplified array handling many times inside the structure.
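The lock-granularity trade-off Rolf describes can be sketched in Python, with `threading.Lock` standing in for LabVIEW's internal mutex (an analogue under that assumption, not a description of LabVIEW internals):

```python
import threading

shared = [0] * 10_000
lock = threading.Lock()

def per_element_locking(n):
    """One lock/unlock per element: the analogue of wrapping each single
    Replace Array Element in its own In Place Element Structure."""
    for i in range(n):
        with lock:            # mutex acquired and released n times
            shared[i] += 1

def batched_locking(n):
    """One lock around the whole batch: the analogue of putting all the
    array manipulation inside one In Place Element Structure. The mutex
    cost is paid once -- but note the resource stays locked for the
    entire batch, so other readers must wait."""
    with lock:                # mutex acquired once
        for i in range(n):
            shared[i] += 1
```

Both functions produce the same result; the batched version pays the locking overhead once instead of `n` times, at the price of holding the lock longer, which is exactly the responsiveness caveat in the next paragraph.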

 

However, the In Place Element Structure is not a free panacea to speed up a badly designed application. While your code inside the structure does its (possibly very lengthy) thing, the resource protected by the structure is locked for anyone else who wants to even peek at it. So if you place your minute-long analysis algorithm inside that In Place Element Structure, your UI or another part of your program wanting to read the current state from the same resource will simply have to wait until your algorithm finishes and leaves the structure before it can enter its own In Place Element Structure. Parallel programming is never free; it has certain rules and limits you have to play by to avoid creating a crash generator.

 

Also, the conditional terminal in loops has an influence that is hard to measure. For For Loops it adds ONE additional resize at the end, to shrink the array from the initially assumed size to the actual length, and resizing an array to a smaller size is usually pretty cheap. For While Loops there is not even a significant difference, since the array is resized anyhow from the exponentially grown buffer (doubled every time it fills up) to the final size after the loop finishes! Of course the conditional terminal has some influence in a For Loop, as the code now needs to check an additional condition on every iteration, but that is independent of the use of auto-indexing array tunnels.
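Why the double-when-full strategy plus one final trim stays cheap can be made concrete by counting element copies. A small Python sketch (the doubling model is an assumption about the strategy described above, not measured LabVIEW behaviour):

```python
def doubling_copies(n):
    """Count element copies performed by a double-when-full growth
    strategy while building an n-element array, plus the single
    final trim to the exact size."""
    capacity, length, copies = 1, 0, 0
    for _ in range(n):
        if length == capacity:
            copies += length      # reallocate: copy all existing elements
            capacity *= 2
        length += 1
    copies += length              # one final resize/trim to the exact size
    return copies

# Total copies stay linear in n, i.e. amortized O(1) per element,
# and the final trim adds only a single extra resize:
assert doubling_copies(20_000) < 3 * 20_000
```

Compare that with growing by one element at a time, which copies `1 + 2 + ... + (n-1)` elements, i.e. quadratic work; that gap is what the original poster most likely measured.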

Rolf Kalbermatter
My Blog
Message 4 of 5

@ThienEnthusiast wrote:

My colleague and I played around with some code yesterday. We took a file with 20k datapoints and turned them into some arrays to interpret them and show them in some waveforms and the like. While trying to get it to work a bit faster we experimented with a few things, and when we exchanged the Build Array function for Insert Into Array in a loop with a shift register, our processing time fell from 10 seconds to basically 1 second!

 

So that got us thinking: are there any guidelines somewhere on which tools in Labview are the most time-efficient?


Once you start writing the word "LabVIEW" with the correct lettercase, we will take your posts more seriously 😄

 

"Build Array" and "Insert Into Array" (with the index unwired) do exactly the same thing, and chances are that the binary code is identical after the compiler has applied all optimizations. If you see a significant difference, your benchmark result is flawed.

 

The main problem is that neither function is efficient, especially if you can predict the final array size before the loop even starts.

 

Even 1 second seems very long for a measly 20k points! Once you show us what the code is doing, I bet I can get the same result in a few microseconds with 10% of the code complexity! 😄

 

Your post is very vague. You don't even mention whether you use a FOR or a WHILE loop, what the debugging settings were, what was involved in the data "interpretation", or the overall code architecture. Where were the indicator terminals placed? (e.g. what is "show in a waveform"?) I assume, without real evidence, that this is a graph. If you have 20k datapoints in a graph that is maybe 1k pixels wide and update it inside a tight loop with an ever-changing array size, autoscaling, and fancy point styles, that would not be reasonable! You don't even describe how you measured the elapsed time!

 

Benchmarking code is a very (very!) hard problem. To start, maybe study our presentation from a few years ago.
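The pitfalls listed above apply in any language. As an illustration only (a LabVIEW benchmark would instead use a flat sequence structure with high-resolution timestamps and debugging disabled), here is minimal benchmarking hygiene in Python:

```python
import timeit

def build_by_append(n):
    """Grow the array inside the loop (the pattern being benchmarked)."""
    out = []
    for i in range(n):
        out.append(i)
    return out

def build_preallocated(n):
    """Preallocate once, then write by index."""
    out = [0] * n
    for i in range(n):
        out[i] = i
    return out

# Basic hygiene: verify both variants compute the same thing before
# timing them -- otherwise the comparison is meaningless.
assert build_by_append(20_000) == build_preallocated(20_000)

# Take the minimum of several repeats to reduce scheduler noise; never
# trust a single run, and keep display updates out of the timed region.
for fn in (build_by_append, build_preallocated):
    t = min(timeit.repeat(lambda: fn(20_000), number=20, repeat=3))
    print(f"{fn.__name__}: {t:.4f} s for 20 runs of 20k elements")
```

The same discipline (equal outputs, repeated runs, minimum-of-repeats, nothing extraneous inside the timed region) is what the questions above about loop type, debugging settings, and indicator placement are really probing.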

Message 5 of 5