LabVIEW


Reconstruction of 2D Arrays from Multiple Data Value References After Batch Acquisition

Hi everyone,

I'm working on a LabVIEW application where I perform a large number of repetitive measurements, and I need to store the results of each measurement as a row in a 2D array. Each row represents one complete scan, and I typically collect hundreds or even thousands of such scans in a single run.

To prepare the data storage structure, I create a 2D array where each row corresponds to a measurement, and each column holds a sample from that scan. Before acquisition starts, I create a Data Value Reference for each row using the New Data Value Reference node. Each of these DVRs is then associated with the corresponding row of the preallocated 2D array.

After acquisition is complete, my challenge is to reconstruct the full 2D array by extracting the contents of each reference. Currently, I use a For Loop to go through all the DVRs one by one. Inside the loop, I use an In Place Element Structure with the Data Value Reference Read/Write element to read the data from the reference, and I insert the result into the appropriate row of the final matrix. After that, I delete the reference using Delete Data Value Reference. This works correctly, but with a high number of references (for example, 1024 or more) the process becomes extremely slow and introduces a significant bottleneck in the system.

What I’m looking for is a way to make this step faster. Ideally, I’d like to avoid reading and deleting each reference individually. I’m wondering if there’s a more efficient way in LabVIEW to build a 2D array from many individual DVRs, or if there's a better design pattern that allows me to manage this kind of segmented data without having to rely on one reference per row. It would also help to know whether there’s a way to encapsulate the entire 2D array in a single DVR, and work on it more directly after acquisition is finished.

Any suggestions, experiences or architectural advice on how to efficiently handle this type of bulk DVR reading and memory cleanup would be very welcome. Thanks a lot in advance!

Message 1 of 11

You can create a DVR containing a 2D array and then replace each row in the array as needed. You should probably post what you are doing. You could also just have an initialized 2D array on a shift register in your loop and replace rows in it there.

 

snip.png

Message 2 of 11

Yes, show us what you are doing. Creating thousands of data value references, one for each row, is certainly not the way to go. Herding cats! You should not need any DVRs at all!

 

mdcuff already hinted at the correct solution.

 

You seem to know the final size of the 2D array before the program starts, so all you need to do is initialize a shift register with the final array size, then replace rows as you go. If multiple program locations need to access the data, place it into an action engine.
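For anyone who prefers text to block diagrams, here is a rough C sketch of that preallocate-and-replace pattern (the scan/sample counts and acquire_row are placeholders, and the real solution is of course a LabVIEW diagram with Initialize Array, a shift register, and Replace Array Subset):

#include <stdlib.h>

#define NUM_SCANS   1024    /* placeholder: number of scans (rows)     */
#define NUM_SAMPLES 4096    /* placeholder: samples per scan (columns) */

/* Stand-in for whatever produces one scan's worth of samples. */
static void acquire_row(double *row, size_t n_samples)
{
    for (size_t i = 0; i < n_samples; i++)
        row[i] = 0.0;   /* dummy data */
}

int main(void)
{
    /* "Initialize Array": allocate the whole 2D buffer once, up front. */
    double *data = calloc((size_t)NUM_SCANS * NUM_SAMPLES, sizeof *data);
    if (data == NULL)
        return 1;

    /* The shift-register loop: each iteration overwrites one row in place
       ("Replace Array Subset"); no per-row allocations and no DVRs. */
    for (size_t scan = 0; scan < NUM_SCANS; scan++)
        acquire_row(&data[scan * NUM_SAMPLES], NUM_SAMPLES);

    /* ... use the finished 2D array here ... */
    free(data);
    return 0;
}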

 

Once you attach a simplified version of your code and define the requirements, we can offer suggestions. At the moment we are flying blind.

Message 3 of 11

@altenbach wrote:

You seem to know the final size of the 2D array before the program starts, so all you need to do is initialize a shift register with the final array size, then replace rows as you go.


Here's a quick example for that.

 

(Note that the indicator is inside the loop for demonstration purposes. You would probably only need the output after the loop, of course, which also frees up the UI thread.)

Message 4 of 11

Hi again everyone,

Thanks for the feedback so far! 

Just to clarify the architecture: I'm acquiring signals from a photodiode using a PicoScope, capturing three channels simultaneously. I preallocate three 2D arrays (one per channel), where each row corresponds to a single acquisition. Since I know the array dimensions in advance, I create a DVR for each row before acquisition starts. As you might guess, this setup is designed to work seamlessly with SetDataBuffer and GetValuesBulk, which rely on the DVRs to correctly populate the preallocated arrays during acquisition.

Each row-level DVR points to the memory region that gets filled directly during the acquisition. After triggering and confirming with IsReady, I call GetValuesBulk, and at that point the arrays are populated as expected; this part works well and is performant.

Figure 1, Figure 2

 

The bottleneck comes after acquisition: I currently iterate through all DVRs in a For Loop (Figure 2), read each one with an In Place Element Structure to copy the data into the final array, and then release it with Delete Data Value Reference. With large acquisition sets (>10k scans), this final step becomes painfully slow.

The question is whether there's a better approach to collapse the DVRs into the final 2D array without this per-row dereferencing. Ideally, I'd like to avoid that final loop altogether.

 

Thanks again for the help.

T

Message 5 of 11

You should post your actual code. We cannot troubleshoot pictures. Having default data saved would also be helpful.

Tim
GHSP
Message 6 of 11

@tbianconi wrote:

 

The question is whether there's a better approach to collapse the DVRs into the final 2D array without this per-row dereferencing. Ideally, I'd like to avoid that final loop altogether.

 

Thanks again for the help.

T


I do not see the utility of DVRs here in your picture. Why not just build the arrays in the first for loop? 

Is memory an issue? If so, preallocate your arrays and then use the Replace Array Subset function as altenbach showed earlier.

 

What you have shown is that the PicoScope grabs a channel of data, then you put that data in a DVR and dereference it later. The DVR does nothing in those steps.

Message 7 of 11

@tbianconi wrote:

 

The question is whether there's a better approach to collapse the DVRs into the final 2D array without this per-row dereferencing. Ideally, I'd like to avoid that final loop altogether.

 

Thanks again for the help.

T


In other words, you're doing something like this:

var 1.png

Is there anything against simply preallocating the buffers and replacing the data? This approach will be about fifty times faster:

var 2.png

Isn't it?

Message 8 of 11

I remember a topic from a while back that looked similar. It seems the implementers of the PicoScope drivers are not fans of dataflow. The driver updates the preallocated arrays, which are somewhat protected from being freed by LabVIEW by being placed in a DVR.
Re: When is dataflow not data flow? Updating LabVIEW Arrays through Call Library Function Nodes? 

 

You could maybe manually allocate an array with DSNewPtr and pass offsets to the driver. But then you have a pointer that is very inconvenient/impossible to use in the rest of LabVIEW. I am kinda glad I don't have to use that driver.
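For what it's worth, below is a minimal sketch of that idea, built as a DLL and called through a Call Library Function Node. It uses DSNewPtr/DSDisposePtr from LabVIEW's extcode.h (cintools); the exact signatures may vary between LabVIEW versions, and set_driver_buffer is a purely hypothetical stand-in for the real driver call:

/* Sketch only: allocate one contiguous block through the LabVIEW memory
   manager and register per-row offsets with the driver. */
#include <stdint.h>
#include "extcode.h"

static void set_driver_buffer(int16_t *rowBuffer, int32_t nSamples)
{
    (void)rowBuffer;   /* a real wrapper would call into the scope DLL here */
    (void)nSamples;
}

/* Allocate rows*cols int16 samples in one block and register each row. */
UPtr AllocateAndRegisterBuffers(int32_t rows, int32_t cols)
{
    UPtr block = DSNewPtr((size_t)rows * cols * sizeof(int16_t));
    if (block == NULL)
        return NULL;

    int16_t *samples = (int16_t *)block;
    for (int32_t r = 0; r < rows; r++)
        set_driver_buffer(samples + (size_t)r * cols, cols);

    return block;   /* to LabVIEW this is just an opaque pointer-sized integer */
}

/* Free the block once the data has been copied out after acquisition. */
void ReleaseBuffers(UPtr block)
{
    if (block != NULL)
        DSDisposePtr(block);
}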

There are two VI packages that would make this less awkward: https://github.com/ni/labview-memory-management-tools, https://www.vipm.io/package/easlib_memory_manager/

 

 

Message 9 of 11

@cordm wrote:

I remember a topic from a while back that looked similar. It seems the implementers of the PicoScope drivers are not fans of dataflow. The driver updates the preallocated arrays, which are somewhat protected from being freed by LabVIEW by being placed in a DVR...

 

 


Ah, OK, I see now. Then in this special case it might be simpler (and faster) to take the C/C++ SDK (if available) and create your own wrapper DLL, rather than dealing with LabVIEW-allocated memory that is "tampered with" inside the Pico DLL in some "non-dataflow-friendly" way.
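As a very rough illustration of that approach (with entirely hypothetical driver-function names, since the actual SDK calls depend on the scope model), such a wrapper could keep the per-segment buffers on the C side and fill one LabVIEW-preallocated 2D array in a single call:

/* Hypothetical wrapper-DLL sketch: the C side owns the acquisition
   buffers, and LabVIEW only sees one preallocated 2D array that is
   filled in a single Call Library Function Node call.
   driver_set_buffer/driver_get_values are stand-ins, not real
   PicoScope SDK names. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static int32_t driver_set_buffer(int32_t segment, int16_t *buf, int32_t nSamples)
{
    (void)segment; (void)buf; (void)nSamples;   /* real SDK call goes here */
    return 0;
}

static int32_t driver_get_values(void)
{
    return 0;                                   /* real bulk readout goes here */
}

/* "data" points to a rows x cols int16 2D array preallocated in LabVIEW. */
int32_t AcquireAll(int16_t *data, int32_t rows, int32_t cols)
{
    int16_t *scratch = malloc((size_t)rows * cols * sizeof *scratch);
    if (scratch == NULL)
        return -1;

    for (int32_t r = 0; r < rows; r++)
        driver_set_buffer(r, scratch + (size_t)r * cols, cols);

    int32_t err = driver_get_values();          /* bulk readout into scratch */
    if (err == 0)
        memcpy(data, scratch, (size_t)rows * cols * sizeof *data);

    free(scratch);
    return err;
}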

Message 10 of 11