08-11-2012 10:31 PM
I'm a little new to the NI-DAQmx methodology and was wondering if someone could give me some pointers on speeding up some voltage measurements I'm doing with an NI USB-6363 DAQ box.
I have a python script that controls and makes measurements with a few pieces of lab equipment over GPIB, and also makes measurements from the DAQ box via NI-DAQmx (I use a wrapper library called pylibdaqmx that interfaces with the native C libraries). The measurement I'm making with the DAQ box is for 32k samples at 2MHz via a differential pair at AI0. Some sample code that accomplishes this is:
from nidaqmx import AnalogInputTask

# set up task & input channel
task = AnalogInputTask()
task.create_voltage_channel(phys_channel='Dev1/ai0', terminal='diff',
                            min_val=0., max_val=5.)
task.configure_timing_sample_clock(rate=2e6, sample_mode='finite',
                                   samples_per_channel=32000)

for i in range(number_of_loops):
    < ... set up/adjust instruments ... >
    task.start()
    # read() returns an array of 32k float64 samples
    # (same as DAQmxReadAnalogF64 in the C API)
    data = task.read(32000)
    task.stop()
    < ... process data ... >

# clear task, release resources
task.clear()
del task
< ... etc ... >
The code works fine and I can grab all 32k samples, but if I want to repeat this measurement several times in a loop, I have to start and stop the task each time, which takes a while and is really slowing down my overall measurement.
I thought maybe there'd be a way to speed this up by configuring the task for continuous sample mode and just reading from the channel whenever I want the data, but when I configure the sample clock for continuous mode and issue the read command, NI-DAQmx gives me an error saying the samples are no longer available and that I need to slow down the acquisition rate or increase the buffer size. (I'm guessing the API wants to pull the first 32k samples from the buffer, but they've already been overwritten by the time I get to the read command.)
What I'm wondering is: how can I set up the task to have the DAQ box acquire samples continuously, but give me only the last 32k samples from the buffer on demand? Seems like I'm missing something basic here, maybe some kind of triggering I need to implement before I make a reading? This doesn't seem like it should be difficult to do, but as I mentioned, I'm a bit of a newbie to this.
I understand the Python implementation I'm using isn't something that's supported by NI, but if someone could give me some examples of how to make a measurement like this in either LabVIEW or C (or any other ideas you have to speed up this kind of measurement), I can test it out in those environments and implement it on my own in Python.
Thanks in advance!
toki
08-13-2012 10:39 AM
That's something I do quite a bit, but I can only describe how I'd do it in LabVIEW -- I'm no help about particulars of the C-function prototypes or the python wrapper.
In LabVIEW, there are properties accessed via the "DAQmx Read property node" that help set this up. One is "Overwrite Mode", which I'm pretty sure needs to be set before starting the task. The other two are "RelativeTo" and "Offset", which let you specify what part of the data acq buffer to read from. If you configure "RelativeTo" = "Most Recent Sample" and "Offset" = -32000, then each time you read 32000 samples they will be the very most recent 32000 already available in the data acq buffer. In between reads, the task is free to overwrite old data over and over indefinitely.
Note that you'll need to do this in continuous sampling mode and that you might want to explicitly set a smaller buffer size than the default that DAQmx will choose based on your fast sample rate.
Here's a LV 2010 snippet:
-Kevin P
08-13-2012 03:50 PM
Great, thanks for the example. It works fine in LabVIEW with a little tweaking.
Just in case anyone else runs into this, it looks like the C functions for these parameters are:
DAQmxSetReadOverWrite()
DAQmxSetReadOffset()
DAQmxSetReadRelativeTo()
Unfortunately, they're not supported in the python API I'm using, but it's an open-source project and I can write a patch to implement it.
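In the meantime, one workaround is to call those three C functions directly from Python via ctypes against the NI-DAQmx driver library, alongside the wrapper. This is only a sketch, not a tested implementation: the constant values are copied from my reading of NIDAQmx.h (verify them against your own header), the library name differs by platform (nicaiu.dll on Windows, libnidaqmx.so on Linux), and how you extract the raw TaskHandle from your particular Python wrapper is an assumption you'd need to check.

```python
import ctypes

# Constant values as defined in NIDAQmx.h (assumed; verify against your header)
DAQmx_Val_OverwriteUnreadSamps = 10252  # let the circular buffer wrap over unread data
DAQmx_Val_MostRecentSamp       = 10428  # position reads relative to the newest sample

def configure_latest_window(task_handle, n_samples=32000,
                            lib_name='nicaiu.dll'):
    """Configure a continuous task so each read returns the most recent
    n_samples from the acquisition buffer.

    task_handle must be the raw TaskHandle used by the C API; getting it
    out of a Python wrapper is wrapper-specific.
    """
    dll = ctypes.windll.LoadLibrary(lib_name)  # on Linux: ctypes.cdll
    # allow DAQmx to overwrite samples we haven't read yet
    dll.DAQmxSetReadOverWrite(task_handle, DAQmx_Val_OverwriteUnreadSamps)
    # read relative to the most recent sample in the buffer...
    dll.DAQmxSetReadRelativeTo(task_handle, DAQmx_Val_MostRecentSamp)
    # ...backed up by n_samples, so the read window is the last n_samples
    dll.DAQmxSetReadOffset(task_handle, -n_samples)
```

Set the overwrite mode before starting the task (as Kevin noted for the LabVIEW property), then call read(32000) whenever you want the latest window, without stopping the task.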
toki