LabVIEW


Difference between Acquisition Modes when placing a DAQ Assistant in While Loop

In the past, I have configured the DAQ Assistant express VI to acquire 1 sample on demand within a while loop, and by using a time delay I have managed to roughly control the sampling rate. I do understand that I am not able to have precise control of the sampling rate because I am relying on the operating system's clock.
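In text form, that software-timed approach looks roughly like this (a Python sketch with a dummy read standing in for the hardware call; the names are made up for illustration):

```python
import time

def read_one_sample():
    # Stand-in for a single on-demand hardware read.
    return 0.0

samples = []
period_s = 0.01  # target ~100 Hz, but only as accurate as the OS scheduler
for _ in range(5):
    samples.append(read_one_sample())
    time.sleep(period_s)  # software delay: actual interval = period + OS jitter

print(len(samples))  # 5 samples, spaced roughly (not exactly) 10 ms apart
```

The delay sets only a lower bound on the loop period, which is why the sample spacing drifts with OS load.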

 

To achieve more accurate sampling, I am told that I should use either the "N Samples" or "Continuous" Acquisition Mode, but I am rather confused about the difference between the two when configuring a DAQ Assistant and using it within a while loop. How are these two modes different? Also, should I avoid using a time delay within the while loop?

Message 1 of 4

Hi,

Continuous Samples acquisition mode means that you acquire N samples (the Number of Samples control) at the sample rate you specify. Say you have configured it to read 1 kS at 1 kHz and have placed it in a while loop. The DAQ Assistant would then execute for one second, since 1 kS / 1 kHz = 1 s, and return 1000 samples each iteration. The loop is thereby timed by your acquisition hardware and not in software. Including a wait in your loop will mean a loop time > 1 s and a break in your acquisition.
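That timing arithmetic can be sketched out in a few lines (purely illustrative Python, not any DAQ API):

```python
samples_to_read = 1000   # N samples returned per DAQ Assistant call
sample_rate_hz = 1000.0  # hardware sample clock

# Each call blocks until N samples are available, so the loop
# period is set by the hardware clock, not by software timing:
loop_period_s = samples_to_read / sample_rate_hz
print(loop_period_s)  # 1.0 second per loop iteration
```

Changing either the sample count or the rate changes the loop period accordingly (e.g. 100 samples at 1 kHz gives a 0.1 s loop).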

The only difference between the N Samples acquisition mode and Continuous mode is the while loop. If you remove it from the above example, only 1000 samples will be acquired.

Hope it helped,

Pelle S
Account Manager
National Instruments Sweden
Message 2 of 4
Hello upthekhyber,

Just to elaborate a little on Pelle S's explanation, the easiest way for me to think about the difference is to take the name of each mode literally:

1 Sample is fairly obvious: every time it is called, it takes one sample.  By calling it in a loop with a 1000 ms wait, you will call it once every ~1 s.

N Samples is slightly more complicated.  It takes N samples each time it is called.  The rate at which the samples are collected is your sampling rate; you tell it how many (N) with the Samples to Read setting.  So using Pelle S's example, every time you call one of these (each iteration of your loop), you would get 1000 samples at a rate of 1000 Hz (0.001 s between samples).  This is hardware-timed with an oscillator, so it's much more accurate.

Continuous is just what it implies.  There is no waiting to be called; it runs continuously from when it is started.  It takes samples at the rate you specify (e.g. 1 kHz) and fills a FIFO buffer constantly.  Every time you call the VI, it reads a number of points out of that buffer (Samples to Read, e.g. 1000 samples).  For our example, once started (I believe the first time the VI runs), it will take a sample every 0.001 s.  Every time our loop runs, it will take the next 1000 samples out of the buffer (1st iteration = 1st second, 2nd iteration = 2nd second, etc., no matter how slowly we run our loop).  This method is good for not missing any data, but you can run into buffer issues (reading too fast or not fast enough).
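That buffered behavior can be modeled in a few lines (a toy Python simulation; the class and method names are invented for illustration and are not the NI-DAQmx API):

```python
from collections import deque

class SimulatedContinuousTask:
    """Toy model of continuous acquisition: 'hardware' appends to a FIFO
    at the sample clock rate; each read() pops the next chunk, so no data
    is skipped no matter how slowly the reading loop runs."""
    def __init__(self):
        self.buffer = deque()
        self.next_sample = 0

    def hardware_fills(self, n):
        # Pretend the sample clock produced n more points.
        for _ in range(n):
            self.buffer.append(self.next_sample)
            self.next_sample += 1

    def read(self, samples_to_read):
        # Returns the *next* chunk in order, like a buffered DAQmx read.
        return [self.buffer.popleft() for _ in range(samples_to_read)]

task = SimulatedContinuousTask()
task.hardware_fills(3000)   # 3 s of data at 1 kHz piled up while our loop was slow
first = task.read(1000)     # 1st iteration: samples 0..999
second = task.read(1000)    # 2nd iteration: samples 1000..1999, no gap
print(first[0], first[-1], second[0], len(task.buffer))
# -> 0 999 1000 1000 (a backlog of 1000 samples still waiting: overflow risk)
```

The backlog left in the buffer is exactly the "reading too slowly" failure mode: on real hardware the buffer is finite, and falling behind eventually raises an overflow error.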

I hope this helps clear things up.
Neal M.
Applications Engineering       National Instruments        www.ni.com/support
Message 3 of 4
I appreciate your replies. The difference is now clear to me.
Message 4 of 4