Multifunction DAQ


NI-DAQmx Base - Samples Not Available?

Solved!

aNIta B wrote:

         Thanks for that additional information.  I'll give everything a shot here and hopefully reproduce what you're seeing.  Thanks again!

 


Hi aNIta,
Any progress in reproducing this? It seems like a really big bug to me, and the code I can most easily reproduce it with, on several systems, is the NI-provided example code.

 

LabVIEW Champion | LabVIEW Channel Wires

Message 11 of 19

Hi sth,

     Sorry for the delay getting back to you. I was able to reproduce this, and it looks like a few customers in the past have also seen performance issues when running at the full rate of these cards. I would take a look at This Forum; the developers there have been very helpful in overcoming the issue you're seeing and should be able to help you out. Have a good one!

Message 12 of 19

No, no, I am not near the maximum for this card. This is a 100 kS/s card, and I am sampling 16 channels at 500 Hz. That is only 8 kS/s, or 8% of the maximum rate. Further, I can sample near the maximum rate, and have in the past. As long as I don't request 1 second of data, it works fine!

 

It is a buffer size problem, NOT a sampling rate problem. If I sample at 500 Hz and plot every 400 scans, it works fine. If I do the same and request 500 scans each cycle to plot, it fails. It says that there aren't enough samples, not that there is a buffer overrun. This is the complete opposite of what you are indicating. Please ask R&D to look at this again.

  

LabVIEW Champion | LabVIEW Channel Wires

Message 13 of 19

Hi sth,

       I've spoken with R&D and reproduced your exact error. May I have your permission to get your email address so I can put you in contact with them directly?

Message 14 of 19

aNIta,

 

Certainly! I assume that you folks have my email address either from my forum membership, my LV Champion status, or just from the hundreds of support requests I have filed over the past 20 years.

LabVIEW Champion | LabVIEW Channel Wires

Message 15 of 19
Solution
Accepted by sth

Hi Scott,

The problem is not in the DMA library, but in the logic for choosing the DMA buffer size. Currently, for continuous tasks, DAQmx Base uses only the number of channels in the task and the sample rate to calculate the DMA buffer size. As you've noticed, the driver starts to have problems once reads start approaching one second's worth of data. That directly matches the logic in DAQmx Base: the DMA buffer size is the number of channels multiplied by the sample rate, which gives a buffer exactly equal to a second's worth of data.

The reason this works for lower channel counts is that after the driver performs the multiplication, it coerces the buffer size to the range [8 kB, 2 MB]. At your lower sample rate (500 S/s), the DMA buffer size doesn't exceed the lower limit until there are 9 channels in the task, so the buffer can hold more than a second's worth of data. If you increased your read size, you would run into the problem again.
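
For concreteness, here is a minimal C sketch of that sizing logic (not the actual driver source; the 16-bit sample size is an assumption, chosen because it makes the 8 kB floor stop applying at exactly 9 channels x 500 S/s, matching the arithmetic above):

/* Sketch of the DMA buffer sizing described above, not driver code. */
#include <stdint.h>

#define DMA_MIN_BYTES  (8u * 1024u)           /* 8 kB lower coercion limit */
#define DMA_MAX_BYTES  (2u * 1024u * 1024u)   /* 2 MB upper coercion limit */
#define BYTES_PER_SAMP 2u                     /* assumed 16-bit samples    */

static uint32_t dma_buffer_bytes(uint32_t nChans, double ratePerChan)
{
    uint32_t bytes = (uint32_t)(nChans * ratePerChan * BYTES_PER_SAMP);
    if (bytes < DMA_MIN_BYTES) bytes = DMA_MIN_BYTES;   /* coerce up   */
    if (bytes > DMA_MAX_BYTES) bytes = DMA_MAX_BYTES;   /* coerce down */
    /* 8 ch x 500 S/s = 8000 B, coerced to 8192 B: just over 1 s of data.
       9 ch x 500 S/s = 9000 B, no coercion: exactly 1 s of data.        */
    return bytes;
}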

In general, whenever you attempt to read the entire circular buffer at once, you enter a race condition. Either the driver is faster than the DMA transfer, sees that not all of the samples have been returned, and tells you so; or the driver is slower and you get the DMA overflow error.

The workaround is to call the DAQmx Base Configure Input Buffer VI after the timing VI and manually configure a buffer larger than the number of samples you want to read. An increase as small as 2% worked in my tests, so for a 500 S/s continuous task reading 500 samples at a time, configure a buffer size of at least 510 S/ch.
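
The LabVIEW wiring is shown in the attached image below. For readers using the DAQmx Base C API instead, the same workaround would look roughly like this (a sketch only: the device and channel names are placeholders, and error checking is omitted):

/* Configure the input buffer after the timing call, slightly larger
 * than the read size, before starting the task. */
#include "NIDAQmxBase.h"

int main(void)
{
    TaskHandle task = 0;
    DAQmxBaseCreateTask("", &task);
    DAQmxBaseCreateAIVoltageChan(task, "Dev1/ai0:15", "", DAQmx_Val_RSE,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxBaseCfgSampClkTiming(task, "OnboardClock", 500.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 500);
    DAQmxBaseCfgInputBuffer(task, 510);  /* >= 1.02 x the 500-sample read */
    DAQmxBaseStartTask(task);
    /* ... DAQmxBaseReadAnalogF64 loop goes here ... */
    DAQmxBaseStopTask(task);
    DAQmxBaseClearTask(task);
    return 0;
}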

 

[Attached image: Set Buffer Size.png, showing DAQmx Base Configure Input Buffer wired after the timing VI]

 

I've filed a CAR to the group so we can track the problem. The ID number is 182112.

Thanks for your diligent testing and detailed reports.

Joe Friedchicken
NI Configuration Based Software
Get with your fellow OS users
[ Linux ] [ macOS ]
Principal Software Engineer :: Configuration Based Software
Senior Software Engineer :: Multifunction Instruments Applications Group (until May 2018)
Software Engineer :: Measurements RLP Group (until Mar 2014)
Applications Engineer :: High Speed Product Group (until Sep 2008)
Message 16 of 19

Joe F. wrote:

The problem is not in the DMA library, but in the logic for choosing the DMA buffer size. Currently, for continuous tasks, DAQmx Base uses only the number of channels in the task and the sample rate to calculate the DMA buffer size. As you've noticed, the driver starts to have problems once reads start approaching one second's worth of data. That directly matches the logic in DAQmx Base: the DMA buffer size is the number of channels multiplied by the sample rate, which gives a buffer exactly equal to a second's worth of data.


 

Joe,

 

Thanks for figuring out the basis of this error.

 

But why, oh why, doesn't it use the number-of-samples value?! That should tell the software how many samples you plan on collecting at one time! The rate is much less relevant to the problem at hand.

 

The buffer size shouldn't depend directly on the rate (maybe as a second-order effect)! The buffer should be #channels * #samples * SafetyFactor, and I would have picked a safety factor of at least 2. That is effectively what you do in the example, with a safety factor of 1.02. That logic should just be in the driver itself for the default buffer.
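
As a sketch, that suggestion amounts to something like this (hypothetical helper, not driver code):

/* Proposed default: size the buffer from the read size, not the rate. */
#include <stdint.h>

static uint32_t proposed_buffer_samps(uint32_t nChans, uint32_t sampsPerRead)
{
    const uint32_t safetyFactor = 2;  /* the posted example effectively used 1.02 */
    return nChans * sampsPerRead * safetyFactor;
}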

 

In most of my real-world programming I set a buffer size explicitly, but in tracking down the error I hit this test case that made no sense.

 

Please, please add a note to the CAR about changing the error message. It should be a buffer overflow error, not a "Samples Not Available" timeout error! I don't see how you can ever get a "samples not available" message after waiting 10 seconds for 1 second's worth of data. It should not matter how fast the driver is, because it is programmed to wait for all those samples.

 

This is a perfect example of where the software should make slightly smarter calculations for the default settings.

 

I am going to start another thread on NI-DAQmx Base and how to read a buffer. I would appreciate your comments on that.

 

Thanks

-sth 

 

LabVIEW Champion | LabVIEW Channel Wires

Message 17 of 19
sth wrote:

But why, oh why, doesn't it use the number-of-samples value?! That should tell the software how many samples you plan on collecting at one time! The rate is much less relevant to the problem at hand.

 

The buffer size shouldn't depend directly on the rate (maybe as a second-order effect)! The buffer should be #channels * #samples * SafetyFactor, and I would have picked a safety factor of at least 2. That is effectively what you do in the example, with a safety factor of 1.02. That logic should just be in the driver itself for the default buffer.


You are operating at one end of the spectrum, but that doesn't mean everyone else is there, too 😉 The rate does matter, and it's a zero-order effect when the rate is high. What if I specified a sample rate of 200 kS/s and a read size of 100? Following your suggestion, I'd get a DMA buffer that can hold 200 samples, or one millisecond of data. Few general-purpose OSes can service an application that quickly, let alone do any post-processing on the incoming data. The workaround in this scenario is to read more samples at a time, which is (admittedly) slightly less work than configuring the input buffer manually with a VI call. Should we also provide an error in this case ("read size too small for sample rate"), or expect our users to know their host OS and how to program for it? We want to make writing applications quick and simple, but we don't want to make you feel that we're holding your hand or limiting your creativity. There's a happy medium here, and I'm working to find that balance.
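
Worked through as numbers (a quick sketch with a hypothetical helper):

/* How long the buffer lasts under the read-size rule. */
#include <stdint.h>

static double buffer_duration_s(uint32_t sampsPerRead, uint32_t safety, double rate)
{
    return (double)(sampsPerRead * safety) / rate;
}
/* buffer_duration_s(100, 2, 200000.0) = 0.001 s: only 1 ms of headroom.
   buffer_duration_s(500, 2,    500.0) = 2.0   s: comfortable at 500 S/s. */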

Every programmer needs to know how their hardware and driver work, and then make adjustments so that they can operate at their quiescent point. It clearly doesn't make the situation straightforward when there are bugs in the driver. There is room for improvement in Base's algorithm (which is a simpler implementation of DAQmx's), but yours is the first report of this behavior in the history of the driver, which suggests we chose a generalization that works for most people. It's inconvenient and frustrating for you, I recognize, but the majority of our users have never encountered this problem.

sth wrote:

Please, please add a note to the CAR about changing the error message. It should be a buffer overflow error, not a "Samples Not Available" timeout error! I don't see how you can ever get a "samples not available" message after waiting 10 seconds for 1 second's worth of data. It should not matter how fast the driver is, because it is programmed to wait for all those samples.


The unclear error message is the effect of the race condition, and likely a problem in the DMA library itself, a result of trying to read the entire circular buffer with each DAQmx Base Read call. My guess is that it isn't tracking its state very well and should be reporting a DMA overflow in every case; but when the driver is faster than the DMA and sees fewer than the requested number of samples available, it gets fixated on waiting for them to show up. Unfortunately, between subsequent checks, the board has already overflowed the DMA buffer, and those samples will never arrive.
Joe Friedchicken
NI Configuration Based Software
Get with your fellow OS users
[ Linux ] [ macOS ]
Principal Software Engineer :: Configuration Based Software
Senior Software Engineer :: Multifunction Instruments Applications Group (until May 2018)
Software Engineer :: Measurements RLP Group (until Mar 2014)
Applications Engineer :: High Speed Product Group (until Sep 2008)
Message 18 of 19

Joe F. wrote:  
You are operating at one end of the spectrum, but that doesn't mean everyone else is there, too 😉 The rate does matter, and it's a zero-order effect when the rate is high. What if I specified a sample rate of 200 kS/s and a read size of 100? Following your suggestion, I'd get a DMA buffer that can hold 200 samples, or one millisecond of data. Few general-purpose OSes can service an application that quickly, let alone do any post-processing on the incoming data. The workaround in this scenario is to read more samples at a time, which is (admittedly) slightly less work than configuring the input buffer manually with a VI call. Should we also provide an error in this case ("read size too small for sample rate"), or expect our users to know their host OS and how to program for it? We want to make writing applications quick and simple, but we don't want to make you feel that we're holding your hand or limiting your creativity. There's a happy medium here, and I'm working to find that balance.

Every programmer needs to know how their hardware and driver work, and then make adjustments so that they can operate at their quiescent point.

Alright, I understand the misleading error message, but somehow you need to give more accurate feedback to the user as to when he is running up against those hardware/driver limits. A buffer overflow should terminate the read immediately, instead of waiting an additional 9 seconds to realize that the data isn't going to show up. If the overflow occurs, the read should error out immediately, even if it thinks there aren't enough samples, and report not that the buffer is too small but that I am overrunning it. That was the basic error, and if the driver had returned it after 1 second, when the buffer filled up, I would have fixed my code and moved on.
Agreed, I am working at one end of the spectrum (or around the middle, at 1 second), which just means that you need to include that second-order effect at small sample sizes. So make the default buffer the number of samples * 2, or 1 second of data, whichever is larger. That is a simple way of taking care of the other end of the spectrum as well. Reading 1 second of data is probably a mid-range application, since I sometimes work on hour time scales and sometimes on millisecond ones.
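
As a sketch, that refined rule would be (hypothetical helper, per-channel samples, not driver code):

/* Default the buffer to the larger of twice the read size or one second. */
#include <stdint.h>

static uint32_t refined_buffer_samps(uint32_t sampsPerRead, double ratePerChan)
{
    uint32_t fromReads = 2u * sampsPerRead;
    uint32_t fromRate  = (uint32_t)ratePerChan;   /* one second's worth */
    return (fromReads > fromRate) ? fromReads : fromRate;
}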
Setting the default, with adequate documentation, is not limiting my creativity in overriding it. Just making a *good* suggestion as to what that buffer should be is maybe the good kind of hand-holding. Trying to keep errors from occurring before they impact a majority of the customers is a good goal, right? 🙂
Minimally, the example should at least set the buffer size so that you don't crash out like that while merely playing with values that seem reasonable in the controls. I know a lot about the hardware and OS limitations on my machine, but I was very puzzled by that response from the test program. My actual code sets the buffer size to 10 seconds and had another reason for its problem, but going back to the example for testing purposes just confused me more.
If only I could let the buffer overwrite old data without stopping the acquisition, so I could just access the most recent 1 second of data and let the rest go. That would be a nice feature for NI-DAQmx Base. I think I submitted it a while ago to the NI suggestion box.

 

LabVIEW Champion | LabVIEW Channel Wires

Message 19 of 19