
Performance decrease using 2D array

Solved!

Hello, guys.

 

I'm trying to create an acquisition buffer for pre- and post-trigger data. It acquires 5k points of data into a buffer (2D array), and on every new iteration it deletes the oldest line and adds a new line of data. When a trigger occurs it freezes the old buffer and starts adding another 5k lines of data.

The problem is that without this code my main VI runs at 250 Hz, but when I add the code as a subVI the performance continuously degrades until the loop runs at about 1 Hz.

I traced the problem to the 2D array: adding a new line of points and then deleting the oldest. How can I manage this without jeopardizing the performance of the main VI?

 

I even tried initializing the 2D array with 14 columns and 5k lines, but that didn't help at all.

 

Thanks to you all.

Message 1 of 7

You should attach the code, preferably back-saved to LabVIEW 2020.

 

You are inserting into an array: that creates a new copy of the data and a new buffer. You are deleting from an array: that also creates a new copy and a new buffer. Copies and new buffer allocations are slow!

 

Make an array of the correct size before your loop, connect it via a shift register, then use Replace Array Subset to replace elements. If you need to rotate the array, that is more difficult with a 2D array. I suggest using a fixed-size queue with a lossy enqueue. Use Get Queue Status to read the whole array if necessary, but that will make a copy of all your data elements. Otherwise make a fixed-size queue of DVRs instead, but looking at your VI, I am not sure you are ready for that: if the references are not closed properly you will have a huge memory leak.
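Since LabVIEW code is graphical and can't be pasted here as text, here is a rough sketch of the lossy fixed-size buffer idea in Python. `collections.deque` with `maxlen` is a stand-in for a LabVIEW queue created with a max size and written with a lossy enqueue; the row width of 14 matches the OP's column count:

```python
from collections import deque

# A fixed-size, lossy buffer: once full, appending a new row silently
# drops the oldest one -- analogous to a LabVIEW queue created with a
# max size and written with Lossy Enqueue Element.
N_ROWS = 5000
buffer = deque(maxlen=N_ROWS)

# Simulate acquisition: each "row" is one line of 14 channel samples.
for i in range(6000):
    row = [float(i)] * 14
    buffer.append(row)      # O(1): the other 4999 rows are not copied

# Reading the whole buffer (like Get Queue Status returning elements)
# DOES copy all elements into a new list.
snapshot = list(buffer)
print(len(snapshot))        # 5000
print(snapshot[0][0])       # 1000.0 -- the oldest surviving row
```

The key property is that each insert is constant-time; only a full read-out pays the copy cost, which matches the Get Queue Status caveat above.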

Message 2 of 7

" every new iteration it deletes the oldest line and add the new line of data" 

 

In LabVIEW, every time this occurs it creates a copy of the whole array in memory; this is probably why the performance is so slow.

 

The best way is to initialize the 2D array with 5k rows BEFORE the loop begins. For each new item, use a Replace Array Subset as suggested before; the loop iteration count can be used to compute the correct index into the array.

When the number of elements reaches 5000, re-initialize the array (or simply wrap the index back to zero).
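To make the "pre-allocate and replace" pattern concrete, here is a sketch in Python/NumPy (a stand-in for a LabVIEW Initialize Array wired into a shift register, with Replace Array Subset inside the loop); the 5000×14 shape mirrors the OP's buffer:

```python
import numpy as np

N_ROWS, N_COLS = 5000, 14

# Pre-allocate ONCE, before the loop (LabVIEW: Initialize Array wired
# into a shift register).
buf = np.zeros((N_ROWS, N_COLS))

write_idx = 0                 # the insert "pointer" kept alongside the array
for i in range(12000):
    new_row = np.full(N_COLS, float(i))
    buf[write_idx] = new_row              # in-place replace, no reallocation
    write_idx = (write_idx + 1) % N_ROWS  # wrap instead of re-initializing

# buf now holds the last 5000 rows in rotated order; write_idx marks
# where the oldest row currently sits.
print(buf[write_idx][0])      # 7000.0 -- the oldest row in the buffer
```

No allocation happens inside the loop, so the cost per iteration stays constant no matter how long the acquisition runs.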

 

 

From Copilot :

In LabVIEW, when you add a new element to an array, it can indeed create a copy of the array in memory. This behavior occurs because LabVIEW arrays are contiguous in memory, meaning all elements are stored sequentially. When you modify an array (e.g., by appending an element), LabVIEW may need to allocate a new block of memory to accommodate the updated array, especially if the original memory block doesn't have enough space to expand.

Key Points:

  1. Memory Reallocation: If the array's current memory block cannot accommodate the new size, LabVIEW allocates a new block of memory, copies the existing array into it, and then adds the new element. This process can be computationally expensive for large arrays.

  2. Performance Optimization:

    • Preallocate Memory: If you know the maximum size of the array in advance, you can preallocate memory by initializing the array to its maximum size and then replacing elements as needed. This avoids repeated memory reallocation.
    • Use Shift Registers: In loops, use shift registers to manage arrays efficiently. This minimizes unnecessary memory copies.
    • In-Place Element Structure: Use the "In-Place Element Structure" to modify array elements directly without creating additional copies.
  3. Debugging Memory Copies: You can use LabVIEW's "Show Buffer Allocations" tool to identify where memory copies occur in your code. This helps you optimize performance by reducing unnecessary copies.

By carefully managing how arrays are modified, you can minimize memory overhead and improve the efficiency of your LabVIEW application.
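The reallocation cost described above is easy to demonstrate outside LabVIEW as well. This Python/NumPy timing sketch compares growing an array element by element (`np.append` copies the whole array on every call, much like repeated Build Array) against pre-allocating and replacing:

```python
import time
import numpy as np

N = 20000

# Growing with np.append: each call allocates a new array and copies
# everything written so far -- O(n^2) total work.
t0 = time.perf_counter()
a = np.empty(0)
for i in range(N):
    a = np.append(a, float(i))
append_time = time.perf_counter() - t0

# Pre-allocated + replace: one allocation up front, O(n) total work.
t0 = time.perf_counter()
b = np.empty(N)
for i in range(N):
    b[i] = float(i)
replace_time = time.perf_counter() - t0

print(f"append:  {append_time:.4f} s")
print(f"replace: {replace_time:.4f} s")   # typically far faster
```

Exact timings depend on the machine, but the replace loop is reliably orders of magnitude faster, and the gap widens as N grows.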

Message 3 of 7

We can't get the full picture from a picture, because there are way too many things we cannot see (but there is a lot we can smell). Since there is no top-level loop, I assume this is a subVI? What are the connectors? How is it called?

 

 

Your "loop pre" is very inefficient, because you are prepending new data after removing the last row; both operations copy the whole array.

I am wildly guessing that you write to the global variable and empty the inner feedback node once the size reaches 10k. That array could instead be pre-allocated at an invariant size, keeping track of the insert point.

 

Once you attach your code (LabVIEW 2020 or below) and explain how you run/call it, we can give more targeted advice.

Message 4 of 7

Another smell with 2D arrays is unintentional padding with unneeded elements. If you have a 2x2 2D array and append a 1000-element row, the size becomes 3x1000, even though you only have 1004 actual data points plus 1996 padding zeros.
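The padding arithmetic can be checked with a small sketch. NumPy doesn't pad automatically the way LabVIEW's Build Array does on 2D arrays, so `pad_append` below is a hypothetical helper written only to mimic that LabVIEW behavior:

```python
import numpy as np

def pad_append(arr2d, new_row):
    """Mimic LabVIEW Build Array on a 2D array: appending a longer row
    forces the result to stay rectangular, zero-padding every shorter row.
    (Hypothetical helper for illustration, not a LabVIEW primitive.)"""
    width = max(arr2d.shape[1], len(new_row))
    padded = np.zeros((arr2d.shape[0] + 1, width))
    padded[:arr2d.shape[0], :arr2d.shape[1]] = arr2d
    padded[-1, :len(new_row)] = new_row
    return padded

small = np.ones((2, 2))       # 4 real data points
big_row = np.ones(1000)       # 1000 real data points
result = pad_append(small, big_row)

print(result.shape)                 # (3, 1000)
print(int((result == 0).sum()))     # 1996 padding zeros
```

So a 3x1000 array carries 3000 elements for only 1004 real data points, exactly the ratio described above.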

Message 5 of 7
Solution
Accepted by Andre_Simoes

@Andre_Simoes wrote:

 

The problem is that without this code my main VI is running at 250hz, and when I add the code as a subVI it decrease continuously the performance until to 1 hz.



- inline the subVI, turn off debugging

 


@Andre_Simoes wrote:

 

I even tried to initialize the 2D array as 14 columns and 5k lines, but this helped in nothing.

 


Like this? "Replace Array Subset" is faster than insert, and the gap grows as the arrays get bigger.

 

replace-vs-insert.png

 

 

Message 6 of 7

@alexderjuengere wrote:

 

- inline the subVI, turn off debugging

I just did this and the performance decrease stopped.

 

Thanks.

Message 7 of 7