

moving large data real-time

Anyone have an idea for moving a large data set from one core's thread to another core's thread, in a Real-Time system, where it will be put into a DAQ's output buffer? I have an array, which at the max is 125625 x 23 of I16, that is built column by column (the 23 dimension; actually it is done in a replace-array-element mode), and is then to be "exported" to the other core's thread. There it is to be loaded into the DAQ cards' memory, to be output. I need to be able to do this in a pretty timely manner. When I try to use a Functional Global Variable that is "loaded" one column at a time, it is _slow_! I need a faster method!

Thanks,

Putnam
Certified LabVIEW Developer

Senior Test Engineer North Shore Technology, Inc.
Currently using LV 2012-LabVIEW 2018, RT8.5


LabVIEW Champion



0 Kudos
Message 1 of 15
(5,050 Views)
More details: I'm running this program on a dual-core processor on the PXI chassis platform, running LabVIEW 8.5 RT. My plan was to calculate the waveforms (and do a lot of other stuff: comms over TCP/IP, etc.) on one core and then "send" the resulting array over to the other core, which would have the DAQ "stuff". My original method uses an FGV to store the array, which would be loaded one "column" of calculated data (a 1 x 23 array of I16) at a time, each column representing one calculated point in each of the 23 waveforms. To minimize accesses to the memory manager, the "loading" would be into a pre-allocated 2D array, using replace array element.

Doing it this way seems to really kick up the time to calculate a whole waveform, compared to having the replace array element internal to the actual calculation VI (passing the array in, doing the calcs and replaces, and passing it back out). Unfortunately I then have the problem of getting that 5 MB worth of data over the fence to the other core's side to output it. Any suggestions would be greatly appreciated; I think I'm standing too close to the trees to see the forest!
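
In case it helps frame the problem, here is a rough text-language sketch of what the calculation loop is doing (LabVIEW is graphical, so this Python/numpy analogy is purely illustrative; the buffer name and the placeholder calc_point function are made up):

```python
import numpy as np

MAX_POINTS = 125625   # longest expected waveform
N_CHANNELS = 23       # 23 waveforms, one I16 value each per calculated point

# Allocate the full 2D buffer once (~5-6 MB of I16) so the memory manager is
# only involved at startup, not on every calculated point.
buffer = np.zeros((MAX_POINTS, N_CHANNELS), dtype=np.int16)

def calc_point(i):
    """Stand-in for the real calculation of one 1 x 23 point (hypothetical)."""
    return np.full(N_CHANNELS, i % 32768, dtype=np.int16)

for i in range(MAX_POINTS):
    # Equivalent of replace array element on the pre-allocated array: the row
    # is overwritten in place, nothing is appended or reallocated.
    buffer[i, :] = calc_point(i)
```

The open question is then how to get that whole pre-built buffer across to the DAQ loop cheaply.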



Putnam
Certified LabVIEW Developer

Senior Test Engineer North Shore Technology, Inc.
Currently using LV 2012-LabVIEW 2018, RT8.5


LabVIEW Champion



Message 2 of 15
(5,011 Views)

Hi Putnam,

If the "data path" has a single source and single sink, then try a queue.

They are the fastest way of moving data that I have run across in that situation.
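
A rough text analogy of that producer/consumer idea (Python's queue module standing in for LabVIEW's Obtain Queue / Enqueue Element / Dequeue Element; daq_write is a made-up placeholder): the key point is that the whole 2D array travels as one queue element, so the consumer gets it in a single dequeue rather than column by column.

```python
import queue
import threading
import numpy as np

data_q = queue.Queue(maxsize=4)    # bounded queue between the two loops

def producer():
    # Calculated waveform set (placeholder data); in the real program this is
    # the 125625 x 23 I16 array built point by point on the first core.
    waveforms = np.zeros((125625, 23), dtype=np.int16)
    data_q.put(waveforms)           # one enqueue for the whole block

def consumer():
    waveforms = data_q.get()        # one dequeue on the DAQ side
    # daq_write(waveforms)          # placeholder for handing it to the output buffer

threading.Thread(target=producer).start()
threading.Thread(target=consumer).start()
```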

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper | LinkedIn Profile | YouTube Channel
Message 3 of 15
(5,004 Views)
I haven't tried queues for core-to-core communication before (hadn't considered them), but since some recent revelations have shown that RT-level determinism isn't a critical issue here, I guess I can examine them as well. Not that queues necessarily have jitter issues; I was just being conservative. I will put together a trial test case. Thanks, as usual, for your help, Ben. Hope the weather in your neck of PA wasn't too bad. Up in Syracuse there is 400% more snow than at this time last year, and last year the season had 140+" (the norm is closer to 115"), and that was in a very truncated winter, with not much until the third week of January.


Hmm, three stars. Wonder what I said that annoyed someone, or underwhelmed them?

Even weirder is that at the LabVIEW forum level it is showing 4 stars, two voters. Hmmm, too early for me; the coffee isn't hitting (actually, I haven't been able to drink coffee lately).





Message Edited by LV_Pro on 12-17-2007 09:15 AM

Message Edited by LV_Pro on 12-17-2007 09:19 AM
Putnam
Certified LabVIEW Developer

Senior Test Engineer North Shore Technology, Inc.
Currently using LV 2012-LabVIEW 2018, RT8.5


LabVIEW Champion



Message 4 of 15
(4,994 Views)

Hang in there Putnam!

If you still aren't getting the speed you need, post an example of what you are trying and I'll take look.

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper | LinkedIn Profile | YouTube Channel
0 Kudos
Message 5 of 15
(4,977 Views)
You may want to try an RT FIFO, as well.  I have never benchmarked them, but would expect similar performance to a queue.  In my tests on WinXP, the single-element queue has always outperformed functional globals when using large data sets (see this post).  However, I have never tested on RT, so your mileage may vary.  Please let us know what you come up with.
Message 6 of 15
(4,939 Views)
I'm planning on trying out the RT FIFO, but it may have the same issue as the RT shared variable configured as a FIFO, which, if I understand it, is that it is a single-element process. What I mean is that you declare the element as, in my example, an array of 23 I16s. Then you declare that there will be (in the shared variable) a buffer of x of these elements. The problem is that I load them one element at a time in my waveform calculation routine, which may be OK, but then when I go to "load" the DAQmx Write function's buffer I need to hand it a 2D array, and all I have are 1D elements that I'm popping off the FIFO.

The queue is able to take the whole, large, 2D array in and make it available at the other end as essentially a single, big data element. I can't think of an efficient means to pop the elements off the FIFO that doesn't take just as long, nor can I think of a means to make it variable in length (and quick). Of course, defining the largest expected "element" when creating the queue also poses some problems (I don't want to be outputting a waveform that ends up being largely zeros when the actual waveform length is less than the max).
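
To make the comparison concrete, here is a hypothetical text sketch of the two patterns (fifo_pop and dequeue are stand-ins for the RT FIFO read and the queue's Dequeue Element; this is not LabVIEW code, just the shape of the problem):

```python
import numpy as np

N_CHANNELS = 23

def drain_fifo(fifo_pop, n_points):
    """Pattern A: fixed-size FIFO elements. One pop plus one in-place row
    replace per point, so a 125K-point waveform costs 125K iterations before
    the DAQmx Write can even be called."""
    out = np.empty((n_points, N_CHANNELS), dtype=np.int16)
    for i in range(n_points):
        out[i, :] = fifo_pop()       # each pop returns one 1 x 23 I16 element
    return out

def take_whole_array(dequeue):
    """Pattern B: the queue element IS the whole 2D array, so a single dequeue
    hands back something the write function can take directly."""
    return dequeue()
```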


Putnam
Certified LabVIEW Developer

Senior Test Engineer North Shore Technology, Inc.
Currently using LV 2012-LabVIEW 2018, RT8.5


LabVIEW Champion



0 Kudos
Message 7 of 15
(4,926 Views)
Just brainstorming, but could you use the RT FIFO as a "parallel port" connection?  Some possible options are:
  1. Use a unique start and end line of data for "STX", "ETX". This may be difficult to do if your data is truly random. Perhaps a unique sequence of 23-element frames would work better.
  2. Add an extra line to your data for flow control.  This would be like the "XON" "XOFF" lines of a serial port.
In either case, the "element" of the FIFO is a 23 or 24 element array with maximum size set in advance.  The data source puts a start sequence in the FIFO, adds the data one line at a time, then puts the end sequence.  It is decoded at the receiving end.  You could include a header with size info in your start sequence. 
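
One way that framing could look, sketched here in Python with hypothetical fifo_push/fifo_pop stand-ins for the RT FIFO write and read (the reserved header value and the 15-bit size split are illustrative assumptions, not anything LabVIEW-specific):

```python
import numpy as np

N_CHANNELS = 23
HEADER_MARK = -32768    # reserved value marking a header element; assumes the
                        # real data never produces this exact 1 x 23 pattern

def send_waveform(fifo_push, waveform):
    n_points = waveform.shape[0]
    header = np.full(N_CHANNELS, HEADER_MARK, dtype=np.int16)
    header[1] = n_points & 0x7FFF            # low 15 bits of the point count
    header[2] = (n_points >> 15) & 0x7FFF    # high bits (125625 needs 17 bits)
    fifo_push(header)                        # "STX" plus size info
    for row in waveform:
        fifo_push(row)                       # data, one 1 x 23 element at a time

def receive_waveform(fifo_pop):
    header = fifo_pop()
    assert header[0] == HEADER_MARK          # frame sync check
    n_points = int(header[1]) | (int(header[2]) << 15)
    out = np.empty((n_points, N_CHANNELS), dtype=np.int16)
    for i in range(n_points):
        out[i, :] = fifo_pop()               # reassemble for the DAQ write
    return out
```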

You could do this with queues, as well, and they can be preallocated at creation time to avoid memory hits.

Good luck.
Message 8 of 15
(4,877 Views)
I will say up front that just having others to brainstorm with is a big help, giving me new paths to ponder.
The data is "sort of" random (an analog waveform), so anything I can think of for an "ETX" would probably come up sooner or later. My bigger issue is how to get up to 125K "samples" (a sample being a 1 x 23 array of I16) out of a FIFO and into the write buffer of my D/A DAQ in an efficient manner. As to the length issue, I was planning on seeing what kind of hit I take if I send the max array size in my queue, along with another value indicating the actual size of this waveform, and then "peel off" the valid data. The problem is that the actual waveforms vary in length from about 2K to 125K. All 23 of them are the same length each time, though (thank goodness). I don't want to have my DAQ sitting there outputting a 125K-long stream where 110K of it is zeros; when they are short I need to finish up and get ready for the next one.
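
For what it's worth, here is roughly what I mean by the max-size-plus-length approach, again as a hypothetical Python sketch (enqueue, dequeue, and daq_write stand in for the queue primitives and the DAQmx Write call):

```python
import numpy as np

MAX_POINTS, N_CHANNELS = 125625, 23

def enqueue_waveform(enqueue, waveform):
    """Producer side: always ship a fixed-size buffer plus the valid length."""
    n_valid = waveform.shape[0]                  # anywhere from ~2K to 125K points
    padded = np.zeros((MAX_POINTS, N_CHANNELS), dtype=np.int16)
    padded[:n_valid, :] = waveform
    enqueue((padded, n_valid))                   # one element: data plus count

def dequeue_and_write(dequeue, daq_write):
    """Consumer side: trim to the valid length before the write, so the card
    never sits there clocking out the trailing zeros."""
    padded, n_valid = dequeue()
    daq_write(padded[:n_valid, :])
```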

Putnam


Putnam
Certified LabVIEW Developer

Senior Test Engineer North Shore Technology, Inc.
Currently using LV 2012-LabVIEW 2018, RT8.5


LabVIEW Champion



0 Kudos
Message 9 of 15
(4,873 Views)
Putnam wrote "I will say up front that just having others to brainstorm with is a big help, giving me new paths to ponder. "
 
I believe it is Proverbs that says (paraphrased) "As face answers to face and iron answers to iron, so does a man trieth words."
 
Ben
 
My guess at what that means:
 
"face answers to face"--- look at yourself in the mirror.
 
"iron answers to iron" --- a file is used to sharpen an axe.
 
"man trieth words" --- tosses out ideas
Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper | LinkedIn Profile | YouTube Channel
0 Kudos
Message 10 of 15
(4,864 Views)