LabVIEW


Handling circular buffers


- I am configuring two AI channels for a 10 kHz sampling rate and starting the Task. (LV automatically sets the buffer at 100k samples.)

- The Task then feeds into a 50 ms timed loop, and inside the loop I have a DAQmx AI Read set to acquire 200 samples/channel. Thus the timed loop will fire 20 times every second, and in each instance read 400 samples, for a total of 20 x 400 = 8000 samples/sec.

- Thus I will be filling the data buffer at the rate of 10,000 samples every second and reading them inside the timed loop at the rate of 8,000 samples per second.

- For each second there will be 2,000 samples of unread data, and the 100k-sample buffer will overflow in 50 seconds. This will defeat the FIFO data flow.
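As a quick sanity check on those numbers, here is the same arithmetic in a few lines of plain Python (values taken straight from the post above):

```python
# Back-of-the-envelope check of the backlog described above.
fill_rate   = 10_000        # samples/s going into the DAQmx buffer
read_rate   = 20 * 400      # samples/s read out: 20 iterations x 400 samples = 8000
buffer_size = 100_000       # samples, the size LabVIEW allocated automatically

backlog_per_second = fill_rate - read_rate     # 2000 unread samples/s
print(buffer_size / backlog_per_second)        # -> 50.0 seconds to overflow
```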

Even assuming that I manage to exactly match the rate of acquisition with the rate of reading, what happens when the timed loop misses a beat?

I am in fact running one such piece of code and have been having random lock-ups that did not exist when I was just acquiring 1 sample/channel inside the timed loop with a software trigger.

Can someone shed some light on the proper configuration of such a setup and how to ensure that buffer overflows/underflows are avoided?

Thanks
Raghunathan
LabVIEW to Automate Hydraulic Test rigs.
Message 1 of 14
Can you tell us what you are doing with the data in the timed loop? Are you post-processing it, displaying it, logging it (or maybe all three)?


@Raghunathan wrote:

- The Task then feeds into a 50 ms timed loop, and inside the loop I have a DAQmx AI Read set to acquire 200 samples/channel. Thus the timed loop will fire 20 times every second, and in each instance read 400 samples, for a total of 20 x 400 = 8000 samples/sec.



You certainly won't be able to display full-rate 10 kHz data in real time. I've done something similar with high-speed UDP data. I created a UDP receiver that pulled data off the network stack as fast as it could. This receiver passed the data to two queues: one called logger and the other called UI. All data was passed to the logger queue, and decimated data was sent to the UI.

The logger ran as an independent VI with a timed loop that flushed the queue four times a second and wrote the data to disk. The UI also ran as an independent VI at ~20 Hz and post-processed the decimated data flushed from the UI queue (least-squares fit and averaging), about 5 data points at a time, then passed the results out via a notifier. This notifier allowed me to pop up dialog boxes to graphically display or monitor the processed data.
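For anyone who wants to see that two-queue structure spelled out in text, here is a minimal sketch in plain Python (threads and queues stand in for the LabVIEW loops; the decimation factor, queue size, timing, and fake data source are placeholders, not Phillip's actual implementation):

```python
import queue, random, threading, time

logger_q = queue.Queue()             # lossless: every block goes to the logger
ui_q = queue.Queue(maxsize=100)      # lossy: the display can afford to drop data
DECIMATE = 10                        # pass every 10th block to the UI

def receiver(stop):
    """Producer: stands in for the UDP/DAQ read loop."""
    i = 0
    while not stop.is_set():
        block = [random.random() for _ in range(200)]   # fake acquisition
        logger_q.put(block)                             # all data to the logger
        if i % DECIMATE == 0:
            try:
                ui_q.put_nowait(block)                  # decimated data to the UI...
            except queue.Full:
                pass                                    # ...dropped if the UI falls behind
        i += 1
        time.sleep(0.02)

def logger(stop):
    """Consumer 1: flush the queue a few times a second and write to disk."""
    while not stop.is_set():
        while not logger_q.empty():
            block = logger_q.get()                      # write block to file here
        time.sleep(0.25)

def ui(stop):
    """Consumer 2: post-process whatever decimated data has arrived."""
    while not stop.is_set():
        while not ui_q.empty():
            block = ui_q.get()
            avg = sum(block) / len(block)               # stand-in for the real processing
        time.sleep(0.05)

stop = threading.Event()
for worker in (receiver, logger, ui):
    threading.Thread(target=worker, args=(stop,), daemon=True).start()
time.sleep(2)
stop.set()
```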

There are a few discussions of techniques for high-speed data handling on the LAVA forums. I tried to find my posts to provide a link, but LAVA seems to be down this morning. When LAVA is available, search for "lossy queue".


Message 2 of 14

In short, you've overconstrained yourself by specifying both a fixed DAQ read rate (via the Timed Loop) and a fixed DAQ read quantity (via the # of samples), which don't match up cleanly with your DAQ acquisition rate. Two options I've used:

1. No Timed Loop. In a regular While Loop, call the DAQmx Read function with your fixed # of samples (200? 400? Not clear in your post). By default, DAQmx will always return the oldest data that you have not previously read. Within a few loop cycles, you should catch up to your acquisition task and find that the DAQmx Read call actually waits for those samples to arrive. This is fine -- the DAQmx driver yields the CPU during this waiting time, unlike the old legacy driver. Once you get to this point, your loop rate will generally be constant and stay in sync with your acquisition task. You'll also be dealing with constant chunk sizes of data, which is an important consideration in some apps because you can make special accommodations to overwrite rather than re-allocate memory. (See the sketch after option 2.)

2. Wire -1 to read "All Available Samples". Stick with your Timed Loop to give yourself the best shot at a constant loop rate, but each time you call DAQmx Read, use the -1 input value to specify that you want all available samples. This Read call will return immediately with no waiting, but will give you varying quantities of samples from one call to the next. The variable chunk size may not be a problem in some apps.
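Here is a rough transliteration of both options, sketched with NI's nidaqmx Python package rather than G (the device name, channel range, rates, and the 500-sample chunk are assumptions; in LabVIEW these correspond to a DAQmx Read with a fixed sample count inside a plain While Loop, and a DAQmx Read wired with -1 inside the Timed Loop):

```python
import time
import nidaqmx
from nidaqmx.constants import AcquisitionType, READ_ALL_AVAILABLE

def read_fixed_chunks(task, iterations=100):
    """Option 1: plain loop; the blocking read itself paces the loop."""
    for _ in range(iterations):
        # Waits until 500 samples/channel have arrived, then returns exactly 500.
        data = task.read(number_of_samples_per_channel=500)
        # ... process a constant-size chunk here ...

def read_all_available(task, iterations=100):
    """Option 2: fixed-rate loop; each read returns whatever has arrived."""
    for _ in range(iterations):
        time.sleep(0.05)                      # stand-in for the 50 ms Timed Loop
        data = task.read(READ_ALL_AVAILABLE)  # returns immediately, variable chunk size
        # len(data[0]) varies from one iteration to the next

if __name__ == "__main__":
    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0:1")   # two channels, assumed device
        task.timing.cfg_samp_clk_timing(10_000, sample_mode=AcquisitionType.CONTINUOUS)
        task.start()
        read_fixed_chunks(task)
```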

Like Phillip, I often use Queues and/or Notifiers so I can defer things like datalogging, display, and processing and not bog down the data acquisition process. He and I both agreed in that LAVA thread that a set of "lossy queue" or "circular buffer" functions would be a great addition to LabVIEW. Much of that functionality seems to have been worked out inside the DAQmx driver -- we just don't have the ability to do similar things with RAM in our own apps.
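As a stand-in for that "lossy queue" / "circular buffer" idea (not an existing LabVIEW feature, just an illustration of the requested behaviour in plain Python): a fixed-capacity buffer that silently discards the oldest elements when full.

```python
from collections import deque

# Circular buffer holding at most 1000 elements; appending beyond that
# pushes the oldest elements out instead of blocking or raising an error.
history = deque(maxlen=1000)

for sample in range(5000):
    history.append(sample)

print(len(history), history[0], history[-1])   # -> 1000 4000 4999
```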

-Kevin P.

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 3 of 14
Hi

I agree with Kevin that you have overconstrained yourself. However, if you want to use a Timed Loop and continuous hardware-timed DAQ, I have a suggestion:

1) Use the AI task as a timing source for your timed loop (the VI you need is called something like "DAQmx Create Timing Source"). This VI will also start your data acquisition using the given AI task (which must be configured for continuous acquisition).

2) Calculate a block size according to: block size = sampling freq / timed loop freq (in your case: block size = 10000 / 20 = 500).

3) Use this block size as the value for both the period (dt) and the offset (t0) of your timed loop.

4) Place an AI Read with number of samples = block size inside your timed loop.

In this way the timing of the timed loop corresponds to that of the AI Read. Furthermore, since you start the first iteration of the timed loop with an offset, the first block of data has already been acquired by the time the AI Read is called, so the AI Read returns the data immediately. (The block-size arithmetic is written out below.)
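The calculation from step 2 in text form (the Timed Loop and DAQmx Create Timing Source parts are LabVIEW-specific and have no direct text-code analog, so only the arithmetic is shown):

```python
sampling_freq = 10_000      # Hz, continuous hardware-timed AI task
loop_freq     = 20          # desired Timed Loop iterations per second

block_size = sampling_freq // loop_freq    # samples/channel per iteration
print(block_size)                          # -> 500, used for dt, t0 and the AI Read count
```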


Greetings

Georg

P.S.: I am using LV 8.5 and PCI-6221 hardware.



Message 4 of 14
Hi Kevin,

Thanks to you and all those who responded.

Instead of too much description, I have attached a typical piece of code (LV 8.0) which should make matters clear.
Could you kindly advise how to make this code work reliably?

Raghunathan
LabVIEW to Automate Hydraulic Test rigs.
Message 5 of 14

In the DAQmx Read property node where you set overwrite samples, you may want to use the task and error wires so that it is guaranteed to run before the task starts. As it stands, it is a race condition as to whether that node or the Task Start node executes first. If the Task Start node happens first, it may or may not be possible to set "Overwrite Samples": the help file says it depends on the device whether this setting can be changed while a task is executing.
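In text form the fix amounts to setting the overwrite property strictly before the task is started. A hedged sketch with the nidaqmx Python package (device and channel names assumed) looks like this; in LabVIEW the equivalent is chaining the property node into the task/error wires ahead of DAQmx Start:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, OverwriteMode

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:1")       # assumed device/channels
    task.timing.cfg_samp_clk_timing(10_000, sample_mode=AcquisitionType.CONTINUOUS)
    # Set the overwrite behaviour *before* starting the task, so there is no
    # race with a running task (some devices reject the change once running).
    task.in_stream.overwrite = OverwriteMode.OVERWRITE_UNREAD_SAMPLES
    task.start()
    data = task.read(number_of_samples_per_channel=500)
```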

I would recommend passing all data to another parallel loop using queues and a producer/consumer design pattern for display and analysis.

In your For Loop, you are averaging each row of data, then building that back up into an array from which you index out only the first row. So the averaging being done on 7/8ths of your array is discarded. Why not index out row 0 first and average only that?
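A minimal illustration of that point with placeholder data: index out the row you need first, then average only that row.

```python
# 8 rows x 200 columns of placeholder data (e.g. 8 channels, 200 samples each).
data = [[float(r * 200 + c) for c in range(200)] for r in range(8)]

# Wasteful: average every row, then keep only the first result.
first_avg = [sum(row) / len(row) for row in data][0]

# Better: index out row 0 first and average only that.
first_avg = sum(data[0]) / len(data[0])
```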

Message 6 of 14
HI Ravens,

Yes, I can bring in a dependency by using the error wire for the Overwrite Samples property.

But I am not clear on this: "I would recommend passing all data to another parallel loop using queues and a producer/consumer design pattern for display and analysis." Could you possibly modify my VI to reflect this?

As for me using only the first channel's value and discarding the rest -- it's just for checkout purposes! Sorry I did not explain that beforehand.

Thanks

Raghunathan
LabVIEW to Automate Hydraulic Test rigs.
Message 7 of 14
Try the attached.
 
This should keep the graph display and calculations from slowing down your data acquisition loop.
Message 8 of 14
Hi Ravens,

OK I get the idea.

But does the introduction of the queue provide any additional safeguard against buffer over/underflows?

Maybe the data dependency created with the error wire for the DAQmx overwrite property is by itself enough of a safeguard against buffer overflows? Not sure, though.

Thanks for your time.

Raghunathan
LabVIEW to Automate Hydraulic Test rigs.
Message 9 of 14
Hi George,

Yes, the timed loop behaves perfectly when I bring DAQmx Create Timing Source into the picture and use its terminal instead of a fixed 50 ms. I also saw an exact example of this in the LV help: Cont. Acq&Graph Voltage-Int Clk-TimedLoop.vi.

But life, as always, is not simple. In my application I collect all the AI channels (10 of them) in Main.vi and load them into a functional global, which is read by sub-VIs. Then there are many VIs that, depending on the user's choice, get loaded into the subpanel of the main VI (only one loads at a time), and each of these sub-VIs has a 50 ms timed loop to handle DIs and DOs. These timed loops have a lower priority than the Main.vi timed loop.

When I try to load the sub-VI, I get an error message that reads:

"Error 50103 occurred at DAQmx Start.vi.
The specified resource is reserved. The operation could not be completed.
Task Name: Unnamed Task <xx>"

and the sub-VI just halts.

I really am not sure what's going on.
Raghunathan
LabVIEW to Automate Hydraulic Test rigs.
Message 10 of 14