LabVIEW

Replacing a 2D array section of a 3D array

I have a pre-allocated 3D array. The pages represent different measurement channels. The rows are voltage and current. The columns are data points. So a size could be [10][2][10000] ([channels][I+V][data]).

 

A new array of data points arrives and I want to insert it into the buffer. Let's focus on one channel for now (the first one, page index 0). How can I insert this new 2D array into the 3D array at a specified location? Say each data array that arrives is 100 points. The first one must be inserted at row 0, column 0; the second at row 0, column 100, and so on. But when I wire both the page and column index of the Replace Array Subset function, the function only accepts 1D arrays.

 

I have a version that works, but I don't think it's the most efficient, since it uses a For Loop instead of writing the data directly. The final version may have a lot of data (1M points) and a lot of channels (100+), so I'd like to be as efficient as possible. (In the future I'll use a DVR for the buffer, but that's not the issue now.)
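LabVIEW code is graphical, so as a text-based stand-in, here is the indexing logic in a Python sketch (sizes taken from the post; the flat list approximates LabVIEW's contiguous array memory, and the slice assignment plays the role of Replace Array Subset without a per-element loop):

```python
# Sketch of the buffer layout from the post: [channels][I+V][data].
# A flat 1D list stands in for LabVIEW's contiguous array memory.
CHANNELS, ROWS, COLS = 10, 2, 10000  # pages, I+V rows, data points

buffer = [0.0] * (CHANNELS * ROWS * COLS)

def replace_subset(buf, page, row, col, chunk):
    """Replace len(chunk) points starting at [page][row][col]
    with one slice assignment (no per-element loop)."""
    start = (page * ROWS + row) * COLS + col
    buf[start:start + len(chunk)] = chunk

# First 100-point chunk goes to page 0, row 0, column 0;
# the second chunk to page 0, row 0, column 100, etc.
replace_subset(buffer, 0, 0, 0, [1.0] * 100)
replace_subset(buffer, 0, 0, 100, [2.0] * 100)
```

The same offset formula `(page * ROWS + row) * COLS + col` is what a row-major 3D array computes internally for any element access.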

Basjong53_2-1778236924456.png

Any suggestions? 

Maybe even a better data structure?

Message 1 of 12
From a performance point of view, I would like to recommend ensuring that you write data strictly sequentially. You can clearly feel the difference:
snippet.png
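The snippet itself is graphical; as a hypothetical Python stand-in, the two write orders below show what "strictly sequentially" means for a row-major buffer. Both produce the same data; only the memory access pattern (and therefore cache behavior) differs.

```python
# Hypothetical stand-in for the two LabVIEW write orders.
ROWS, COLS = 2, 1000

def write_sequential(buf):
    # Column index varies fastest: consecutive writes hit adjacent memory.
    i = 0
    for r in range(ROWS):
        for c in range(COLS):
            buf[i] = float(r)
            i += 1

def write_strided(buf):
    # Row index varies fastest: each write jumps COLS elements ahead.
    for c in range(COLS):
        for r in range(ROWS):
            buf[r * COLS + c] = float(r)

a = [0.0] * (ROWS * COLS)
b = [0.0] * (ROWS * COLS)
write_sequential(a)
write_strided(b)
```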
Message 2 of 12

Hi Basjong,

 


@Basjong53 wrote:

I have a pre-allocated 3D array. The pages represent different measurement channels. The rows are voltage and current. The columns are data points. So a size could be [10][2][10000] ([channels][I+V][data]).

 

Maybe even a better data structure?


Another (but not immediately better) data structure:

  • 1D array of DVRs
  • each DVR contains a cluster of two 1D arrays (one array for "I", the other for "V"), as you currently use different columns for I/V…
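In text form, that alternative might look like the following Python sketch (a dict stands in for the LabVIEW cluster, a mutable list reference loosely plays the role of a DVR, and all names are illustrative):

```python
# One entry per channel; each entry holds a cluster of two 1D arrays
# ("I" and "V") that grow independently of all other channels.
N_CHANNELS = 10

channels = [{"I": [], "V": []} for _ in range(N_CHANNELS)]

def append_chunk(ch, i_data, v_data):
    """Append a newly arrived chunk to one channel's I and V arrays."""
    channels[ch]["I"].extend(i_data)
    channels[ch]["V"].extend(v_data)

append_chunk(0, [0.1] * 100, [1.5] * 100)
append_chunk(0, [0.2] * 100, [1.6] * 100)
```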

@Basjong53 wrote:

The final version may have a lot of data (1M points) and a lot of channels (100+), so I'd like to be as efficient as possible. (in the future, I'll use a DVR for the buffer, but that's not the issue now).


As a 3D array this would require 1M points * 8 B/point * 2 * 100 channels = 1600 MB: I guess you need LabVIEW 64-bit to handle this safely…
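The back-of-the-envelope figure checks out (Python arithmetic, assuming 8-byte DBL values and MB = 10^6 bytes):

```python
# Memory estimate from the post: 100 channels, 2 rows (I and V),
# 1M points per row, 8 bytes per DBL value.
points = 1_000_000
bytes_per_point = 8
rows = 2            # I and V
n_channels = 100

total_bytes = points * bytes_per_point * rows * n_channels
total_mb = total_bytes / 1e6  # 1600.0 MB
```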

Breaking up the data in smaller chunks (like an array of DVRs) might help to manage this amount of data.

Do you really need to hold ALL the data in memory?

Can't you use a (TDMS?) file or a database to store/manage the data?

 


@Basjong53 wrote:

But when I wire both the page and column index of the replace array subset function, the function only accepts 1d arrays.


Yes: by wiring both indices you restrict the "entry point" to a certain page + column, so only a 1D array fits there…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 3 of 12

@GerdW wrote:

Hi Basjong,

 


@Basjong53 wrote:

I have a pre-allocated 3D array. The pages represent different measurement channels. The rows are voltage and current. The columns are data points. So a size could be [10][2][10000] ([channels][I+V][data]).

 

Maybe even a better data structure?


Another (but not immediately better) data structure:

  • 1D array of DVRs
  • each DVR contains a cluster of two 1D arrays (one array for "I", the other for "V"), as you currently use different columns for I/V…

A 1D array of a cluster of Voltage and Current is probably better. I tried to be clever by using a single array. 

Basjong53_0-1778240633201.png

 


@GerdW wrote:

@Basjong53 wrote:

The final version may have a lot of data (1M points) and a lot of channels (100+), so I'd like to be as efficient as possible. (in the future, I'll use a DVR for the buffer, but that's not the issue now).


As a 3D array this would require 1M points * 8 B/point * 2 * 100 channels = 1600 MB: I guess you need LabVIEW 64-bit to handle this safely…

Breaking up the data in smaller chunks (like an array of DVRs) might help to manage this amount of data.

Do you really need to hold ALL the data in memory?

Can't you use a (TDMS?) file or a database to store/manage the data?


Yeah, I should probably manage this better. Not all channels will be visualized at the same time. So I can keep just the active channels in memory and store all the others in a file.

Message 4 of 12

Hi Basjong,

 


@Basjong53 wrote:
A 1D array of a cluster of Voltage and Current is probably better. I tried to be clever by using a single array. 

You still can go with just one array (per cluster) once you combine voltage+current into a complex number 🙂
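A Python sketch of the idea: store each (V, I) pair as one complex value, so a single 1D array per channel carries both quantities.

```python
# Pack voltage + current into one complex number per sample, so each
# channel needs only a single 1D array. Values are illustrative.
voltage = [1.0, 2.0, 3.0]
current = [0.1, 0.2, 0.3]

packed = [complex(v, i) for v, i in zip(voltage, current)]

# Recover the two quantities from the real and imaginary parts.
v_back = [z.real for z in packed]
i_back = [z.imag for z in packed]
```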

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 5 of 12

@Basjong53 wrote:

A 1D array of a cluster of Voltage and Current is probably better. I tried to be clever by using a single array. 

I agree.  Think in terms of the structure of your data.  You have two measurements (Voltage and Current) that are related to each other (by being acquired at the same time, and manifesting different "aspects" of the data you are saving).  They are acquired simultaneously, and any analysis that you do will likely require both to make an interpretation of the data.  

 

So if the basic "quantity of interest" is a Cluster of V and I, the other two dimensions are "Channel Number" (how many simultaneously-acquired sets of V,I data you are handling) and Time, itself. Here, your DAQ device helps you with creating a logical data structure.  Because the DAQ device (probably) gives you N (= 10?) channels simultaneously, and for design reasons gives you a "chunk" of data (say, 1000 samples of your 10 channels) at a time, the "natural" way of saving these data are in a 2-D "Total # Samples" (variable) by "Total # channels" (fixed, 10 in this example).  The "rows" are individual samples of 10 ("columns") channels.  You are taking 1000 samples (of 10 channels) at a time, so these become the rows of your data matrix.  And within each "sample" (row) of each "channel" (column) there is a "pair" (cluster) of Voltage and Current readings.
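A sketch of that layout in Python (the chunk size and channel count are the example numbers from the post; tuples stand in for the V/I cluster):

```python
# Rows are samples, columns are channels, each element is a (V, I) pair.
N_CHANNELS = 10
CHUNK = 1000  # samples delivered per DAQ read

def make_chunk():
    # One DAQ read: CHUNK rows, each holding N_CHANNELS (V, I) pairs.
    return [[(0.0, 0.0)] * N_CHANNELS for _ in range(CHUNK)]

data = []                  # grows by whole chunks, not single samples
data.extend(make_chunk())  # first read
data.extend(make_chunk())  # second read
```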

 

Is it obvious/intuitive why you'd take 1000 samples at a time and save them in such large "chunks", rather than one sample at a time?

 

Bob Schor

Message 6 of 12



@Andrey_Dmitriev wrote:
From a performance point of view, I would like to recommend ensuring that you write data strictly sequentially. You can clearly feel the difference:
 

Interesting! This is also true with "allow debugging" turned off, reentrant execution, and some slight randomization.

alexderjuengere_0-1778592538095.png

 

In LabVIEW 2020 64-bit under Windows 11 Pro, on a Surface Pro 8.

 

Conclusion: always index 3D arrays sequentially (keeping the page constant). Accessing by rows is significantly faster than by columns due to the memory layout.
Data comparison:
  • 140.2 µs is in the range of 10^-4 s
  • 3.1492 ms is in the range of 10^-3 s
This confirms a difference of one order of magnitude between the results!
Message 7 of 12

@alexderjuengere wrote:



@Andrey_Dmitriev wrote:
From a performance point of view, I would like to recommend ensuring that you write data strictly sequentially. You can clearly feel the difference:
 

Interesting! This is also true with "allow debugging" turned off, reentrant execution, and some slight randomization...


Yes, and this is obviously true for read access as well. A classical example is a 2D array (like an image), which can be iterated either by rows or by columns; it is always better to access it sequentially. Sometimes the data needs to be reorganized: either trivially, by transposing it, or in a more involved way, for example by migrating from an array of clusters to a cluster of arrays. Classically, a Structure of Arrays (SoA) is usually faster than an Array of Structures (AoS), but it depends on the data and its volume (whether or not it fits into the cache).
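A short Python sketch of the AoS vs SoA distinction (names are illustrative):

```python
# Array of Structures (AoS): one record per sample, fields interleaved.
aos = [{"v": float(i), "i": float(i) / 10} for i in range(1000)]

# Structure of Arrays (SoA): one contiguous array per field.
soa = {
    "v": [float(i) for i in range(1000)],
    "i": [float(i) / 10 for i in range(1000)],
}

# Summing a single field touches every record in AoS, but walks one
# contiguous array in SoA, which is the cache-friendly access pattern.
v_sum_aos = sum(rec["v"] for rec in aos)
v_sum_soa = sum(soa["v"])
```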

 

2D-1.png

Message 8 of 12

LabVIEW is not good with 3D arrays in general. Editing functions become cumbersome (compared with NumPy or MATLAB) and speed is very slow (e.g. reading multiple values along the 3rd dimension).
Speed-wise it was always best for me to reshape to a 1D array, but then you can run into problems with insufficient I32 indexing.
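The I32 limit is easy to check up front for a given shape (Python sketch; 2**31 - 1 is the largest signed 32-bit index, and the helper names are illustrative):

```python
# Flattening [channels][2][points] to 1D: the total element count must
# stay below LabVIEW's signed 32-bit index limit.
I32_MAX = 2**31 - 1

def flat_index(page, row, col, rows, cols):
    """Row-major offset of element [page][row][col] in the flat array."""
    return (page * rows + row) * cols + col

def fits_in_i32(channels, rows, points):
    """True if every flat index of the array fits in a signed I32."""
    return channels * rows * points <= I32_MAX

# 100 channels * 2 rows * 1M points  = 200M elements: still fine.
# 100 channels * 2 rows * 20M points = 4G elements: overflows I32.
```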

CLA
Message 9 of 12

@Quiztus2 wrote:

LabVIEW is not good with 3D arrays in general. Editing functions become cumbersome (compared with NumPy or MATLAB) and speed is very slow (e.g. reading multiple values along the 3rd dimension).
Speed-wise it was always best for me to reshape to a 1D array, but then you can run into problems with insufficient I32 indexing.


Yes, of course, and here is a rational explanation. If we compare LabVIEW-generated machine code side by side with, for example, more or less equivalent Rust code (where, honestly, a triple Vec is far from an optimal solution), the debugger screenshot below makes clear what happens in the loops under the hood (ADDSD is the summation; the accumulator is xmm0):

Screenshot 2026-05-13 11.44.40.png

The reason is that LabVIEW performs triple bounds checking, resulting in roughly four times more machine instructions and a lot of conditional jumps.

On the other hand, the question of "to flatten or not to flatten into a 1D array" depends on the context. For moderately sized arrays it often makes little sense to reorganize the data without a significant reason, especially if it is more convenient to work with a "native" 3D array (since index computation becomes slightly more complex when flattened). It only really pays off when there is a real performance bottleneck.

Message 10 of 12