Multifunction DAQ


DAQmx continuous sampling problem

I am (finally) changing my DAQ VIs from Traditional to DAQmx and I'm having problems.  I am trying to replace code that formerly used "AI Continuous Scan.vi", which worked great.  I have attached a simplified version of my attempt at recreating my use of this VI.  The problem is that data seems to be made available to my DAQmx Read function only in multiples of 256 scans.  Am I missing a setting somewhere that lets me get data as fast as my loop runs?
 
I've also attached a simple representation of how I used Traditional DAQ to solve my problem.
 
Thanks for any help!!
 
Melissa
Message 1 of 20
I'm sorry, I meant to post this to the LabVIEW forum.  In case this is an appropriate question for this forum, I am using the DAQCard-6036E.
Message 2 of 20
Melissa,

I looked at your code and I am quite surprised that you are reading data in multiples of 256. 

You asked if there is a way for DAQmx to get data as fast as the loop runs.  In order to answer that question, I need to know whether you want to perform a software-timed acquisition or a hardware-timed, buffered acquisition.  I looked at your code and it seems to have elements of both, so I couldn't tell.

In case you are wondering about my terminology this is what I mean:
Software timed:  This continuously pulls one data point from each channel every time your loop executes.  This means that your sampling rate is determined by loop speed, but since Windows is non-deterministic the loop won't run at exactly 100 Hz.
Hardware timed:  This continuously pulls a set of data points from the buffer on your DAQ card.  This is ideal because your card can sample at 100 Hz with a tremendous amount of accuracy; your loop speed has nothing to do with the sampling rate in this case.  If you want to do this, I recommend that you specify how big you want your set to be.  I generally start with something between 1/10 and 1/2 of your sampling rate.  I modified your code a little bit to reflect those changes.  I also included some error handling (which is tremendously important) just in case DAQmx is throwing some errors.
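The rule of thumb above can be sketched in plain Python (the VIs in this thread are LabVIEW; `samples_per_read` is just a hypothetical helper for the arithmetic): for a hardware-timed, buffered acquisition, ask the read for a block between 1/10 and 1/2 of the sampling rate.

```python
# Hypothetical sizing helper illustrating the rule of thumb above:
# read between 1/10 and 1/2 of a second's worth of samples per loop.
def samples_per_read(sample_rate_hz, fraction=0.1):
    if not (0.1 <= fraction <= 0.5):
        raise ValueError("fraction should be between 1/10 and 1/2")
    return max(1, round(sample_rate_hz * fraction))

print(samples_per_read(100))        # 10 samples per read at 100 S/s
print(samples_per_read(1000, 0.5))  # 500 samples per read at 1 kS/s
```

Smaller reads mean fresher data on the front panel; larger reads mean fewer read calls and less CPU overhead.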

I hope this helps.  Have a great day!

Jeff Tipps
Applications Engineer
National Instruments
Message 3 of 20

Melissa,

I took your VI and modified it slightly to display the number of samples in each of the waveforms returned from the 'data' output of the DAQmx Read VI.  I usually see 10 samples give or take a couple now and then, which is what I would expect.  With a sample rate of 100 Hz and a delay of 0.1 seconds, you should be getting about 10 samples (100 S/s * 0.1 s = 10 S) from each read.  I certainly would not have expected to get samples in multiples of 256 scans.

Which kind of device do you have installed as Dev1?  I have a PCI-MIO-16E-1, which is a fairly typical data acquisition board.

Message 4 of 20
I didn't see that you said you were using the DAQCard-6036E.  My apologies.
 
I notice that the AI FIFO on the 6036E is 512 samples deep, so having two channels in your task makes it effectively 256 scans deep.  This is a plausible starting point for explaining why you're getting samples in chunks of 256 scans.  However, I checked the default Data Transfer Request Condition property (DAQmx Channel >> Analog Input >> General Properties >> Advanced >> Data Transfer and Memory >> Data Transfer Request Condition) and it seems to be Onboard Memory Not Empty, which implies to me that transfers should begin as soon as samples enter the FIFO, not when the entire FIFO is full.  I don't have a 6036E to test with, but I'd suggest playing around with both the Data Transfer Mechanism (DAQmx Channel >> Analog Input >> General Properties >> Advanced >> Data Transfer and Memory >> Data Transfer Mechanism), switching it from DMA to Interrupts, and the aforementioned Data Transfer Request Condition property to see if some combination of the two works.
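The FIFO arithmetic above can be checked with a quick back-of-the-envelope calculation (plain Python, not DAQmx): a 512-sample FIFO shared by a two-channel task is effectively 256 scans deep, and its half-full threshold is 128 scans.

```python
# FIFO depth expressed in scans rather than samples,
# using the figures from the post above.
FIFO_SAMPLES = 512   # AI FIFO depth on the 6036E, per the post
CHANNELS = 2         # channels in the task

fifo_scans = FIFO_SAMPLES // CHANNELS   # effective depth in scans
half_full_scans = fifo_scans // 2       # half-full threshold in scans

print(fifo_scans, half_full_scans)  # 256 128
```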
 
Post your findings, as it may be a DAQmx driver bug.
Message 5 of 20
Wow.  You just saved me hours of frustration!
 
I looked at the "Data Transfer Request Condition" property, and it was set as "Onboard memory more than half full".  Once I changed that to "Onboard memory not empty" everything worked like a charm!!!
 
Even the weird thing I was seeing with loops that didn't have a delay has been corrected.  For example, the modified version you sent, Jeff (along with some of the NI LabVIEW examples), would only update front panel indicators and block diagram probes every 51 iterations (when one channel was being read) or at 25, 51, 76, 102, etc. iterations (when reading two channels).  That was the oddest thing I've ever seen.  Why would the data transfer have any effect on display function?  I noticed this by probing the loop counter, and you could also tell the graphs were only being updated every 2.5 or 5 seconds (loop time was set to 0.1 s).  I didn't look at it enough to see whether all of the data was being graphed or just the data that was acquired during those iterations.  Those iterations did only have the correct 10 points, though, not 256.
 
So, it's working for me now.  Is this a bug?  I don't know.  I've never had to worry about data transfer settings before.  Is this something you generally have to pay more attention to in mx?
 
Jeff, about your modified version: maybe it's a carryover from many versions of LabVIEW ago when I first started, but I guess I'm wary of having loops wait for samples to be acquired.  Is this done with very little overhead in LV8/mx?  Also, I like to have the Read VI either read *all* the samples that are available, or have a query after each read that finds the number of scans still available (the backlog) and add that to the next loop's read.  That way, if something happens and the loop slows down enough that an iteration takes longer than acquiring the requested samples, I won't accumulate a backlog, which would basically present itself as data lag.  With loop times around 0.1 seconds this probably isn't a problem, but that may not be the timing that is always used.
 
Thanks very much Aaron and Jeff.  I really appreciate your time.
 
Best regards,
Melissa
 

Message Edited by Melissa Niesen on 07-19-2006 03:31 PM

Message 6 of 20

Glad to see your program is working as intended now.

I'll take a stab at explaining the update behaviour you were seeing.  Specifying a timeout of -1 means that the DAQmx Read VI will wait indefinitely for the requested number of samples to become available for reading--10 samples, in your case.  However, with the Data Transfer Request Condition attribute set to Onboard Memory More Than Half Full, the device will not transfer samples from the AI FIFO to the host until this condition has been met.  For a sampling rate of 100 S/s and a FIFO size of 512 samples, it should take (512 S / 2) / (100 S/s) = 256 S / (100 S/s) = 2.56 seconds for the condition to be met and for the 256 samples to be transferred to the host buffer.  The DAQmx Read VI now has 256 samples available in the buffer to read, so the request for 10 samples is satisfied immediately, leaving 246 samples in the buffer.  Now the loop continues and comes back around to the DAQmx Read call again.  Since there are 246 samples in the buffer, that 10-sample read is satisfied instantly, leaving 236 samples in the buffer, and so on.  In other words, you see updates at 25, 51, etc., because the loop is actually running approximately 25 times every time the data transfer request condition is satisfied.
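The fill-and-drain cycle described above can be mimicked with a toy simulation (plain Python, not real DAQmx): samples enter the device FIFO, a burst moves to the host buffer only once 256 samples have accumulated, and each loop iteration reads 10 samples, blocking until they're available.  The idealized numbers land near, though not exactly on, the 25/51/76 iterations reported in the thread.

```python
# Toy simulation of the Onboard Memory More Than Half Full behaviour.
SAMPLES_PER_ITERATION = 10   # 100 S/s * 0.1 s loop delay
READ_SIZE = 10               # samples requested per DAQmx Read call
HALF_FULL = 256              # FIFO half-full transfer threshold

fifo = 0                     # samples sitting in the device FIFO
host = 0                     # samples in the host (PC) buffer
transfer_iterations = []     # iterations at which a FIFO->host burst occurs

for iteration in range(1, 80):
    # Samples arrive in the FIFO while the loop delay elapses.
    fifo += SAMPLES_PER_ITERATION
    if fifo >= HALF_FULL:
        host += fifo         # burst transfer to the host buffer
        fifo = 0
        transfer_iterations.append(iteration)
    # The read blocks until READ_SIZE samples reach the host buffer;
    # meanwhile the device keeps acquiring one sample at a time.
    while host < READ_SIZE:
        fifo += 1
        if fifo >= HALF_FULL:
            host += fifo
            fifo = 0
            transfer_iterations.append(iteration)
    host -= READ_SIZE        # the 10-sample read is satisfied

print(transfer_iterations)   # bursts arrive roughly every 25-26 iterations
```

Between bursts, every read is satisfied instantly from the host buffer, which is why the loop races through ~25 iterations per transfer.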

In general, you shouldn't really need to play around with the Data Transfer Request Condition, as the defaults tend to work for the most part.  It just so happened that the DTRC setting in conjunction with your sampling rate and the way the VI was written led to some confusing behaviour.  The DTRC can sometimes be used to tune latency-critical applications to transfer samples from the device to the host sooner than it normally would.  In your case, this is exactly what you needed to do for your application to work as you would have expected.

Message 7 of 20

One more thing I just found out.  BEFORE the DAQmx Timing VI,  AI.DataXferReqCon=Onboard Memory Not Empty.  AFTER the DAQmx Timing VI, it equals Onboard Memory More than Half Full!!  It is counterintuitive to me that that setting would be used for a task using Continuous Samples.

Your explanation of the 25, 51, etc. loop problem makes sense up to a point.  Why, though, would the display and probes stop updating with every iteration?  I wish I could take a video of this because I might not be explaining it well.  I put a probe on a wire coming from the loop counter.  The only numbers this probe displays are 25, 51, 76, etc.  In other words, it is obvious that the loop is running 25/26 times between updates of the probe display.  Normally, the probe display is updated with EVERY iteration of the loop, at least in every other VI I've ever written.  Once I correct the Onboard Memory Not Empty setting, I see the probe count up every 0.1 seconds as expected.

BTW, the buffer description you gave IS what I see when I run the "Cont Acq&Graph Voltage-Int Clk-Timed Loop.vi" example when I monitor the available scans property, but it doesn't have any display delays.  That one does update every loop iteration.  With the default data xfer setting, it waits 5 seconds before even displaying data, then it has a buffer to read from and works fine, although the data would always be delayed by 5 seconds.  Not an option for my application, so I'm glad you pointed me to this setting.  If I change the setting back to Onboard Memory Not Empty after the Timing VI, this example works as expected, not delaying in the beginning at all.

Message Edited by Melissa Niesen on 07-19-2006 04:24 PM

Message 8 of 20
Before the DAQmx Timing VI is called in a DAQmx task, the default timing type (for an analog acquisition, at least) is software-timed, non-buffered.  After the DAQmx Timing (Sample Clock) VI is used, the task becomes buffered.  The driver is attempting to make a tradeoff between latency and bus usage.  With the DTRC set to Onboard Memory Not Empty, new samples being acquired enter the FIFO and are immediately transferred into the host buffer.  This leads to a lot of small, bursty transfers across the bus, which means a lower maximum transfer rate and higher CPU usage, as more interrupts are being processed.  At the same time, though, it decreases the overall amount of time between the conversion of a sample and that sample being available for the user to read.  With the DTRC set to Onboard Memory More Than Half Full, on the other hand, bus transfers are fewer in number but tend to contain more data per transfer.  This is a more efficient use of bus and CPU resources and allows faster maximum transfer rates to be achieved, at the expense of a little extra time between the sample conversion and the transfer into the host buffer.  This latency is more prominent at slower sampling rates.  The driver is essentially trying to tune the data transfer settings to whatever would be most appropriate given the circumstances; it's just that in this case, it happens to have a negative impact on your VI's behaviour.
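The tradeoff above can be put into rough numbers (plain Python, using the figures already in this thread: a 100 S/s task and a 512-sample FIFO):

```python
# Rough numbers behind the latency vs. bus-usage tradeoff.
RATE = 100.0   # S/s
FIFO = 512     # samples

# Onboard Memory Not Empty: samples move to the host almost as soon
# as they are converted -> many small transfers, minimal latency.
not_empty_transfers_per_s = RATE       # up to ~100 transfers/s
not_empty_latency_s = 1.0 / RATE       # ~10 ms for a sample to reach the host

# Onboard Memory More Than Half Full: one 256-sample burst at a time.
batch = FIFO // 2                          # 256 samples per transfer
half_full_transfers_per_s = RATE / batch   # ~0.39 transfers/s
half_full_latency_s = batch / RATE         # up to 2.56 s before the first
                                           # sample is readable on the host

print(half_full_transfers_per_s, half_full_latency_s)  # 0.390625 2.56
```

At 100 S/s the burst mode does roughly 250x fewer bus transactions, but at the cost of seconds of latency; at high rates the same tradeoff becomes much more attractive.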

I am not sure why you might be seeing the issue with the probes.  I can only surmise that LabVIEW might be updating the probes less frequently when running quickly through the loop (when the samples are immediately available and the DAQmx Read call returns right away).  Perhaps someone more familiar with LabVIEW could shed some light on why the probes act that way.
Message 9 of 20
I can see what you mean about trying to optimize the process.  That would certainly be helpful at high scan rates.  It does seem like a pretty drastic change from Traditional DAQ, though.  If I were to take data at 1 S/s, the data would be delayed 8.5 minutes.  (The FIFO buffer on the DAQCard-6036E is 1024 samples, as opposed to 512 on the PCI version.)  I probably wouldn't even have noticed it if I had used a faster scan rate.  I just realized this is also probably the cause of a problem a colleague was seeing when writing a different DAQ application.  At least I know now to watch out for this whenever I do continuous reads, which is what I almost always do due to the length of my tests.
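The 8.5-minute figure above checks out with the same arithmetic as before (plain Python; 1024-sample FIFO per the post, half-full condition, 1 S/s):

```python
# Worst-case latency before the first half-full transfer at a slow rate.
FIFO = 1024    # samples, DAQCard-6036E FIFO depth per the post
RATE = 1.0     # S/s

delay_s = (FIFO // 2) / RATE   # 512 samples must accumulate first
print(delay_s, delay_s / 60)   # 512.0 seconds, i.e. ~8.5 minutes
```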
 
Just as a side note to anyone else encountering this problem: the FIFO apparently isn't simulated for a device simulated in MAX.  A simulated device presents the data as if the "Onboard Memory Not Empty" option were in use, even when it isn't.
 
Thanks again for all the help!!!
 
Melissa
Message 10 of 20