05-08-2026 05:45 AM
I have a pre-allocated 3D array. The pages represent different measurement channels. The rows are voltage and current. The columns are data points. So a size could be [10][2][10000] ([channels][I+V][data]).
A new array of data points arrives and I want to insert it into the array. Let's focus on one channel for now (the first one, page index 0). How can I insert this new 2D array into the 3D array at a specified location? Say each data array that arrives is 100 points. The first iteration must be inserted at row 0, column 0; the second data array at row 0, column 100, etc. But when I wire both the page and column index of the Replace Array Subset function, the function only accepts 1D arrays.
I have a version that works, but I don't think it's the most efficient, since it uses a For Loop instead of writing the data directly. The final version may have a lot of data (1M points) and a lot of channels (100+), so I'd like to be as efficient as possible. (In the future, I'll use a DVR for the buffer, but that's not the issue now.)
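To make the intent concrete, here is a rough sketch in NumPy terms (placeholder names and sizes, not LabVIEW code): the goal is a direct block write at a column offset instead of a point-by-point loop.

```python
import numpy as np

n_channels, n_points, chunk_len = 10, 10_000, 100

# pre-allocated buffer: [channels][I+V][data]
buf = np.zeros((n_channels, 2, n_points))

# one incoming 2 x 100 block (e.g. row 0 = V, row 1 = I)
new_data = np.random.rand(2, chunk_len)

# goal: write the whole block at a column offset in one go
# (iteration k of channel 0) instead of looping over single points
k = 1
buf[0, :, k * chunk_len:(k + 1) * chunk_len] = new_data
```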
Any suggestions?
Maybe even a better data structure?
05-08-2026 06:12 AM
Hi Basjong,
@Basjong53 wrote:
I have a pre-allocated 3D array. The pages represent different measurement channels. The rows are voltage and current. The columns are data points. So a size could be [10][2][10000] ([channels][I+V][data]).
Maybe even a better data structure?
Another (but not immediately better) data structure:
- 1D array of DVRs
- each DVR contains a cluster of two 1D arrays (one array for "I", the other for "V"), as you currently use different columns for I/V…
@Basjong53 wrote:
The final version may have a lot of data (1M points) and a lot of channels (100+), so I'd like to be as efficient as possible. (in the future, I'll use a DVR for the buffer, but that's not the issue now).
As a 3D array this would require 1M points * 8 B/point * 2 * 100 channels = 1600 MB (1.6 GB): I guess you need LabVIEW 64-bit to handle this safely…
Breaking up the data in smaller chunks (like an array of DVRs) might help to manage this amount of data.
Do you really need to hold ALL the data in memory?
Can't you use a (TDMS?) file or a database to store/manage the data?
@Basjong53 wrote:
But when I wire both the page and column index of the replace array subset function, the function only accepts 1d arrays.
Yes, because you restrict the "entry point" to a certain page + column: each index you wire fixes one dimension, so only a 1D array (along the single remaining dimension) fits there…
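A rough NumPy-flavoured sketch of the same idea (fixing two of the three indices leaves only a 1D slice), plus the two-row workaround; the array names and sizes are placeholders:

```python
import numpy as np

buf = np.zeros((10, 2, 10_000))    # [channels][I+V][data]
block = np.random.rand(2, 100)     # one incoming 2 x 100 chunk
offset = 100                       # column where this chunk starts

# fixing page AND column leaves only the row dimension, i.e. a
# length-2 1D slice; that is why only a 1D array is accepted there
buf[0, :, offset] = block[:, 0]

# fixing page AND row instead leaves the column dimension free, so the
# chunk can go in as two 1D subset replacements (one per row)
buf[0, 0, offset:offset + 100] = block[0]   # V row
buf[0, 1, offset:offset + 100] = block[1]   # I row
```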
05-08-2026 06:45 AM
@GerdW wrote:
Hi Basjong,
@Basjong53 wrote:
I have a pre-allocated 3D array. The pages represent different measurement channels. The rows are voltage and current. The columns are data points. So a size could be [10][2][10000] ([channels][I+V][data]).
Maybe even a better data structure?
Another (but not immediately better) data structure:
- 1D array of DVRs
- each DVR contains a cluster of two 1D arrays (one array for "I", the other for "V"), as you currently use different columns for I/V…
A 1D array of a cluster of Voltage and Current is probably better. I tried to be clever by using a single array.
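For illustration only, a sketch of that per-channel layout with a NumPy structured array standing in for a LabVIEW array of clusters (all names and sizes are made up):

```python
import numpy as np

# one "cluster" of voltage and current per sample
sample_t = np.dtype([('V', np.float64), ('I', np.float64)])

# per channel: a pre-allocated 1D array of these clusters
channel_buf = np.zeros(10_000, dtype=sample_t)

# an incoming chunk of 100 samples, written at an offset in one go
chunk = np.zeros(100, dtype=sample_t)
chunk['V'] = np.random.rand(100)
chunk['I'] = np.random.rand(100)
channel_buf[200:300] = chunk
```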
@GerdW wrote:
@Basjong53 wrote:
The final version may have a lot of data (1M points) and a lot of channels (100+), so I'd like to be as efficient as possible. (in the future, I'll use a DVR for the buffer, but that's not the issue now).
As a 3D array this would require 1M points * 8 B/point * 2 * 100 channels = 1600 MB (1.6 GB): I guess you need LabVIEW 64-bit to handle this safely…
Breaking up the data in smaller chunks (like an array of DVRs) might help to manage this amount of data.
Do you really need to hold ALL the data in memory?
Can't you use a (TDMS?) file or a database to store/manage the data?
Yeah, I should probably manage this better. Not all channels will be visualized at the same time. So I can keep just the active channels in memory and store all others just in a file.
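A minimal sketch of that idea, assuming plain memory-mapped binary files as the on-disk store; in the real application a TDMS file or a database would play the same role:

```python
import numpy as np

N_POINTS = 1_000_000     # per channel
N_CHANNELS = 100

def channel_store(ch):
    # one memory-mapped [2][N_POINTS] block per channel (V row + I row)
    return np.memmap(f"channel_{ch:03d}.dat", dtype=np.float64,
                     mode="w+", shape=(2, N_POINTS))

stores = {ch: channel_store(ch) for ch in range(N_CHANNELS)}
active = {0, 3, 7}                       # channels currently visualized

def write_chunk(ch, offset, chunk):      # chunk is a 2 x N array
    stores[ch][:, offset:offset + chunk.shape[1]] = chunk

# only the active channels are copied into RAM for display
in_memory = {ch: np.array(stores[ch]) for ch in active}
```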
05-08-2026 07:05 AM
Hi Basjong,
@Basjong53 wrote:
A 1D array of a cluster of Voltage and Current is probably better. I tried to be clever by using a single array.
You can still go with just one array (per cluster) once you combine voltage + current into a complex number 🙂
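A quick sketch of that representation (NumPy used only to illustrate the idea):

```python
import numpy as np

voltage = np.random.rand(10_000)
current = np.random.rand(10_000)

# one array per channel: real part = voltage, imaginary part = current
vi = voltage + 1j * current

# both quantities stay directly accessible
v_again = vi.real
i_again = vi.imag
```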
05-08-2026 04:39 PM
@Basjong53 wrote:
A 1D array of a cluster of Voltage and Current is probably better. I tried to be clever by using a single array.
I agree. Think in terms of the structure of your data. You have two measurements (Voltage and Current) that are related to each other (by being acquired at the same time, and manifesting different "aspects" of the data you are saving). They are acquired simultaneously, and any analysis that you do will likely require both to make an interpretation of the data.
So if the basic "quantity of interest" is a Cluster of V and I, the other two dimensions are "Channel Number" (how many simultaneously-acquired sets of V,I data you are handling) and Time itself. Here, your DAQ device helps you create a logical data structure.
Because the DAQ device (probably) gives you N (= 10?) channels simultaneously, and for design reasons delivers a "chunk" of data (say, 1000 samples of your 10 channels) at a time, the "natural" way of saving these data is a 2D array of "Total # Samples" (variable) by "Total # Channels" (fixed, 10 in this example). The rows are individual samples of the 10 channels (columns). You are taking 1000 samples (of 10 channels) at a time, so each chunk adds 1000 rows to your data matrix. And within each sample (row) of each channel (column) there is a pair (Cluster) of Voltage and Current readings.
Is it obvious/intuitive why you'd take 1000 samples at a time and save them in such large "chunks", rather than one sample at a time?
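A rough sketch of that layout, assuming 10 channels and 1000-sample chunks, with a NumPy structured array standing in for the 2D array of clusters:

```python
import numpy as np

N_CHANNELS, CHUNK, N_CHUNKS = 10, 1000, 50
pair_t = np.dtype([('V', np.float64), ('I', np.float64)])

# rows = samples (filled chunk by chunk), columns = channels,
# each element = one (V, I) pair acquired at the same instant
data = np.zeros((N_CHUNKS * CHUNK, N_CHANNELS), dtype=pair_t)

for k in range(N_CHUNKS):
    # stand-in for one DAQ read: 1000 samples x 10 channels, V and I together
    block = np.zeros((CHUNK, N_CHANNELS), dtype=pair_t)
    block['V'] = np.random.rand(CHUNK, N_CHANNELS)
    block['I'] = np.random.rand(CHUNK, N_CHANNELS)
    data[k * CHUNK:(k + 1) * CHUNK, :] = block
```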
Bob Schor
05-12-2026 08:30 AM - edited 05-12-2026 08:38 AM
@Andrey_Dmitriev wrote:
From a performance point of view, I would like to recommend ensuring that you write data strictly sequentially. You can clearly feel the difference:
Interesting! This is also true with "allow debugging" turned off, reentrant execution, and some slight randomization,
in LabVIEW 2020 64-bit under Windows 11 Pro, on a Surface Pro 8.
05-12-2026 09:21 AM - edited 05-12-2026 09:24 AM
@alexderjuengere wrote:
@Andrey_Dmitriev wrote:
From a performance point of view, I would like to recommend ensuring that you write data strictly sequentially. You can clearly feel the difference:
Interesting! This is also true with "allow debugging" turned off, reentrant execution, and some slight randomization...
Yes, and this is obviously true for read access as well. A classical example is a 2D array, which can be iterated either by rows or by columns (like an image); it is always better to access it sequentially. Sometimes the data needs to be reorganized, either trivially by transposing it, or in a more involved way, for example by migrating from an array of clusters to a cluster of arrays: classically, Structure of Arrays (SoA) is usually faster than Array of Structures (AoS), but it depends on the data and its volume (whether or not it fits into the cache).
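A small sketch of the sequential vs. strided access point, assuming a C-ordered (row-major) 2D array as NumPy uses by default; the actual ratio depends on the machine and the array size:

```python
import numpy as np
import time

a = np.random.rand(4000, 4000)   # C order: elements of a row are contiguous

def sum_by_rows(a):              # sequential memory access
    total = 0.0
    for row in a:                # iterate over contiguous rows
        total += row.sum()
    return total

def sum_by_cols(a):              # strided access: large jumps between elements
    total = 0.0
    for j in range(a.shape[1]):
        total += a[:, j].sum()
    return total

for f in (sum_by_rows, sum_by_cols):
    t0 = time.perf_counter()
    f(a)
    print(f.__name__, round(time.perf_counter() - t0, 3), "s")
```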
05-13-2026 02:17 AM
LabVIEW is not good with 3D arrays in general. Editing functions become cumbersome (compared with NumPy or MATLAB) and speed is super slow (e.g. reading multiple values along the 3rd dimension).
Speed-wise it was always best for me to reshape to a 1D array, but then you can run into problems with the I32 indexing being insufficient.
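A sketch of the flattened-1D idea with explicit index arithmetic (placeholder sizes); the I32 concern is that the flat index can exceed roughly 2.1e9 elements for large buffers, so it needs 64-bit arithmetic:

```python
import numpy as np

N_CHANNELS, N_ROWS, N_COLS = 10, 2, 10_000      # placeholder sizes
flat = np.zeros(N_CHANNELS * N_ROWS * N_COLS)   # one contiguous 1D buffer

def flat_index(ch, row, col):
    # Python integers are arbitrary precision; in LabVIEW the same product
    # needs I64/U64 once it can exceed the I32 range (~2.1e9)
    return (ch * N_ROWS + row) * N_COLS + col

# write one 100-point chunk for channel 5, row 1 (current), at column 300
chunk = np.random.rand(100)
start = flat_index(5, 1, 300)
flat[start:start + 100] = chunk
```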
05-13-2026 04:52 AM
@Quiztus2 wrote:
LabVIEW is not good with 3D arrays in general. Editing functions become cumbersome (compared with NumPy or MATLAB) and speed is super slow (e.g. reading multiple values along the 3rd dimension).
Speed-wise it was always best for me to reshape to a 1D array, but then you can run into problems with the I32 indexing being insufficient.
Yes, of course, and here is a rational explanation. If we compare LabVIEW-generated machine code side by side with, for example, more or less equivalent Rust code (where, honestly, a triple Vec is far from an optimal solution), we can clearly see why: as the debugger screenshot below shows, there are whole loops under the hood (ADDSD does the summation; the accumulator is xmm0):
The reason is that LabVIEW performs triple bounds checking, resulting in roughly four times more machine instructions and a lot of conditional jumps.
On the other hand, the question of “to flatten or not to flatten into a 1D array” depends on the context. In the case of moderately sized arrays, it often makes little sense to reorganize the data without a significant reason, especially if it is more convenient to work with a “native” 3D array (since index computation becomes slightly more complex when flattened). It only really makes sense in case of a real performance bottleneck.