08-31-2009 01:13 AM
Hi all,
I am rather new to NI and have run into a frustrating issue. We are developing an application that will run on Linux and sample data from a USB-9215A at 50 kS/s (an ideal rate for the project, and about half the device's capability). The initial development was done on Windows (by our partners) using NIDAQmx and utilises callbacks to read the data when the buffer is ready. This works fine. However, while porting it to Linux we discovered we can only use NIDAQmxBase. With this driver we can only do continuous reads in a loop, and now we get -200361 device buffer overflow errors! The Linux system is running minimal services and has a CPU load of <5%. From reading other posts this appears to be not uncommon. It would seem development of NIDAQmx was discontinued for non-Windows OSes.
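To make the failure mode concrete for anyone hitting the same error: -200361 means the device fills the driver's input buffer faster than the application loop drains it. The plain-C simulation below (no NI API; all sizes and timings are illustrative assumptions, not driver values) shows how a polling-loop reader overflows a fixed buffer whenever each read removes fewer samples than arrive between reads.

```c
#include <assert.h>

/* Simplified model of a driver-side acquisition buffer.
 * The device deposits samples at a fixed rate; the application
 * drains them in fixed-size reads. If the drain rate falls below
 * the fill rate, the backlog grows until the buffer overflows --
 * the situation NI-DAQmx Base reports as error -200361. */
int simulate(double sample_rate,   /* device fill rate, S/s       */
             long   buffer_size,   /* driver buffer, samples      */
             long   read_size,     /* samples drained per read    */
             double read_period_s, /* time between reads, seconds */
             double duration_s)    /* simulated wall time         */
{
    double backlog = 0.0;
    for (double t = 0.0; t < duration_s; t += read_period_s) {
        backlog += sample_rate * read_period_s;  /* device keeps filling */
        if (backlog > buffer_size)
            return -200361;                      /* overflow             */
        backlog -= (backlog < read_size) ? backlog : read_size;
    }
    return 0;                                    /* reader kept up       */
}
```

With a 50,000-sample buffer and a read every 10 ms, draining 500 samples per call exactly keeps pace with 50 kS/s, while draining 250 per call overflows within a couple of seconds.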
Is this the case?
The Base version appears to be quite "crippled" in comparison.
Darren
08-31-2009 04:55 PM
Regards,
Dan King
08-31-2009 08:59 PM
Dan_K,
Thanks for the quick response. We are using the C API for this project. The contAcquireNChans.c example runs fine, but it samples at 5 kS/s, and our code also runs fine at that rate. If we increase the sample rate to 50 kS/s in the example code, it too fails periodically with the -200361 error (though not as often as our code). I noticed the example creates the input buffer 100 times larger than the samples/chan to read.
Is there a rule of thumb for determining the optimal size of the input buffer?
We were able to reduce the instances of the error in our code by keeping the sample rate at 50 kS/s but reducing the samples read per call of the readAnalogF64 function. The cost is that CPU utilisation increases. This may be adequate for our needs, but it's certainly not ideal. For the record, the test machine is a 1.5 GHz Core 2 Duo with 1 GB RAM running Linux kernel 2.6.26 SMP.
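On the buffer-sizing question: I am not aware of an official formula, but the headroom a configuration buys is easy to compute: at a fixed sample rate, a buffer of B samples per channel tolerates the reader stalling for B / rate seconds before overflowing. The sketch below (illustrative numbers, not values prescribed by the driver) applies the 100x rule seen in the NI example, and also shows the CPU-side trade-off of shrinking each read.

```c
#include <assert.h>

/* Seconds a stalled reader can survive before the input buffer
 * overflows: buffer depth divided by the sample rate. */
double headroom_s(long buffer_samps_per_chan, double sample_rate)
{
    return buffer_samps_per_chan / sample_rate;
}

/* The NI example sizes the input buffer at 100x the samples
 * read per call. */
long buffer_from_read_size(long samps_per_chan_to_read)
{
    return 100 * samps_per_chan_to_read;
}

/* Smaller reads mean more read calls per second, i.e. more
 * per-call overhead and higher CPU utilisation. */
double reads_per_second(double sample_rate, long read_size)
{
    return sample_rate / read_size;
}
```

Reading 500 samples per call, the 100x rule gives a 50,000-sample buffer: at 5 kS/s that is 10 s of headroom, but at 50 kS/s only 1 s, which may explain why the stock example survives at the lower rate and fails at the higher one. Meanwhile 500-sample reads at 50 kS/s already require 100 calls per second, so shrinking the reads further raises the call rate proportionally.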
Darren
09-01-2009 01:45 PM
Regards,
Dan King
09-01-2009 05:59 PM
We had spent some time tinkering with the samples/channel parameter over the last couple of days; as it happens, we are presently reading in blocks of 500. That got us to successfully reading about 98% of samples without the error occurring. We were then able to reliably achieve 100% by tweaking some thread-scheduling parameters.
Thanks for your help.
Darren