06-30-2009 11:05 AM
I am going to bump this up since I don't think it has been addressed.
Please, please fix the documentation for the DAQmx Base Read VI. For "Analog 2D DBL NChan NSamp", the "number of samples per channel" documentation is woefully incomplete. It just bit me again: I found this thread as I was about to run into the same problem I did two years ago.
The documentation states:
If you leave this input unwired or set it to -1, NI-DAQmx Base determines how many samples to read based on if the task acquires samples continuously or acquires a finite number of samples. If the task acquires a finite number of samples and you set this input to -1, the VI waits for the task to acquire all requested samples, then reads those samples.
However, this does not tell you what the behavior is when the task is set to continuous sampling! If I am sampling continuously, how many samples are returned when a -1 is wired to the number of samples input?
Further, if a timeout occurs, will this VI return the number of samples acquired up to that point, or nothing at all?
Please file a CAR and get this documentation completed. Thanks.
And lastly, is there any way to retrieve all the samples in the buffer? Or to find out how many samples are in the buffer? Or to clear the buffer without taking the humongous performance hit of stopping and restarting the acquisition?
07-15-2009 11:16 AM
Ok, a CAR has been filed for cleaning up that documentation a little bit.
I believe the behavior is as follows (I tested this with a USB-621x family device, and I think the behavior is the same with other product families... but disclaimer, I'm not 100% certain and I didn't test with all product families).
If you use a -1 for samples to read on a continuous AI task, the Read VI uses the value of the "Samples per Channel" input to the DAQmx Base Timing VI. If that input is left unwired, it falls back to the default for that input, which is 1000.
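For anyone working from text code instead of the VIs, here is a rough sketch of the same configuration using the NI-DAQmx Base C API. The device name, rate, and buffer size are placeholders, error checking is omitted, and I have only verified the -1 behavior described above on the LabVIEW side, so treat the C-side behavior as an assumption to confirm on your own hardware.

```c
/* Sketch: continuous AI task, then a read with -1 samples per channel.
 * "Dev1" and the numeric values are placeholders. Error checking omitted
 * for brevity. */
#include <stdio.h>
#include "NIDAQmxBase.h"

int main(void)
{
    TaskHandle task = 0;
    float64    data[10000];   /* room for at least sampsPerChan * nChannels */
    int32      read = 0;

    DAQmxBaseCreateTask("", &task);
    DAQmxBaseCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_RSE,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);

    /* The 1000 here is the "Samples per Channel" timing input -- per the
     * note above, the count a -1 read falls back to on a continuous task. */
    DAQmxBaseCfgSampClkTiming(task, "OnboardClock", 10000.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 1000);

    DAQmxBaseStartTask(task);

    /* numSampsPerChan = -1: let the driver choose the count, as discussed. */
    DAQmxBaseReadAnalogF64(task, -1, 10.0, DAQmx_Val_GroupByChannel,
                           data, 10000, &read, NULL);
    printf("Read %ld samples per channel\n", (long)read);

    DAQmxBaseStopTask(task);
    DAQmxBaseClearTask(task);
    return 0;
}
```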
Other questions: with the current DAQmx Base API, there is no way to retrieve all the samples in the buffer, find out how many samples are in the buffer, or clear the buffer. However, since most of the API is written in LabVIEW, you could always add that functionality yourself.
-Alan
07-15-2009 02:58 PM
Alan [DE] wrote: Other questions: with the current DAQmx Base API, there is no way to retrieve all the samples in the buffer, find out how many samples are in the buffer, or clear the buffer. However, since most of the API is written in LabVIEW, you could always add that functionality yourself.
I found out where I got the idea that -1 would read the whole buffer. It came from upgrading from Traditional DAQ (Mac OS 9), where the control label said exactly that. This is a hazard of carrying labels forward when the underlying API changes: the control on one of my VIs was auto-generated with the label "Number of Scans (-1 for all)". When I dropped in the DAQmx Base VIs, that label didn't change, of course, but it is now obviously wrong.
True. *BUT* that modification then has to be reapplied at each upgrade of DAQmx Base. Maintaining an alternate build of DAQmx Base, and merging every future update into it, is a lot more work than I want to take on.
My current hack/kludge does what I need, but it is just a bit off. I want to read at least 1 second of data, or all the samples in the buffer if there are more than that. To do this I read 1 second of data, then loop, reading 1 more second of data each time with a zero-second timeout. The read times out when there is no more data in the buffer, which stops the loop.
The "bit off" part is that the subsequent reads take finite time, so I always get a few more points than 1 second's worth even if the buffer starts empty; in my case, about an extra 4 ms per loop iteration.
07-15-2009 04:39 PM
The CAR number for the documentation issue is #179187.
I understand your hesitation about manually editing the NI-DAQmx Base API. You are running into real limitations of NI-DAQmx Base, but it looks like you were able to work around them.
-Alan
07-15-2009 06:49 PM
There are definite limitations. I have looked into the code, and in many cases it is practical to build workarounds there. Everyone has NI-DAQmx now, even the Linux heads. We poor red-headed-stepchild OS X folks are lumped in with the Windows PDA group, which is really disheartening.
I like the concept of NI-DAQmx Base. As soon as a very basic driver layer is ported, everything above it is cross-platform and just works, which is a great step toward LabVIEW everywhere. The problem is that there simply are not enough resources for parallel development. If NI were willing to drink its own Kool-Aid here and commit that a driver stack built in G can be just as good as one written in C, we would have the ideal we are shooting for.
Unfortunately that doesn't seem to be happening.