LabVIEW


DAQmx Memory overflow error (Error -200361) with finite sample acquisition

I'm using a USB-6361 and sampling one analog voltage channel at a 2 MHz sample rate, the maximum sample rate. When running continuously, after a few hours I will usually get an onboard device memory overflow error (Error -200361), and I'm trying to figure out what could be causing the issue.

 

Here is a simplified version of the code. The actual code breaks up each of these DAQmx functions into a state machine, and interfaces with many other devices. The program acquires 1 second of data and is looped to acquire data indefinitely, with small delays (< 50 ms) between iterations.

 

Is there anything wrong with this design? The error is always thrown by the Wait Until Done vi. I'm using Wait Until Done instead of calling DAQmx Read directly to avoid blocking. Alternatively, I could have used parallel loops and queues to pass information between them, but I thought this solution was simpler. Maybe switching to that design would fix the errors though?

 

[Attached block diagram: DAQmx memory overflow error.png]
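
Since the block diagram is only attached as an image, here is a rough text-only sketch of the pattern described above, written against the Python nidaqmx API. The device name, PFI terminal, and delay values are placeholder assumptions, not details taken from the actual VI:

```python
# Rough sketch of the described pattern: finite acquisition of 1 s of data,
# restarted indefinitely. "Dev1", "/Dev1/PFI0", and the delays are placeholders.
import time
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 2_000_000      # 2 MHz sample rate (maximum for the USB-6361)
SAMPLES = 2_000_000   # 1 second of data per iteration

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    # External sample clock on a PFI terminal, finite acquisition of SAMPLES points
    task.timing.cfg_samp_clk_timing(
        rate=RATE,
        source="/Dev1/PFI0",
        sample_mode=AcquisitionType.FINITE,
        samps_per_chan=SAMPLES,
    )

    while True:                            # loop to acquire indefinitely
        task.start()
        while not task.is_task_done():     # the "Wait Until Done" polling state
            time.sleep(0.01)               # other states / UI updates run here
        data = task.read(number_of_samples_per_channel=SAMPLES)
        task.stop()
        time.sleep(0.05)                   # small (< 50 ms) delay between iterations
```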

 

Message 1 of 7

@slourette wrote:

I'm using a USB-6361 and sampling one analog voltage channel at a 2 MHz sample rate, the maximum sample rate. When running continuously, after a few hours I will usually get an onboard device memory overflow error (Error -200361), and I'm trying to figure out what could be causing the issue.

 

Here is a simplified version of the code. The actual code breaks up each of these DAQmx functions into a state machine, and interfaces with many other devices. The program acquires 1 second of data and is looped to acquire data indefinitely, with small delays (< 50 ms) between iterations.

 

Is there anything wrong with this design? The error is always thrown by the Wait Until Done vi. I'm using Wait Until Done instead of calling DAQmx Read directly to avoid blocking. Alternatively, I could have used parallel loops and queues to pass information between them, but I thought this solution was simpler. Maybe switching to that design would fix the errors though?


So, you start a task, get data for 1 s, then stop it and reacquire. For finite tasks, DAQmx should set the buffer to hold all the samples, so you don't need to set that up yourself. I always use continuous acquisition, and that rate is not an issue, especially for a single channel.

 

Here's a suggestion: rather than acquire 2M points, acquire 2,000,896 points (this assumes your disk sector size is 512). Acquiring a number of points that is an integer multiple of the disk sector size is extremely efficient. I would also decimate the data before plotting it.
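
As an illustration of that rounding (the 512 sector size is the same assumption as above; the decimation factor is arbitrary), a minimal sketch in Python:

```python
# Minimal sketch: round the requested sample count up to a multiple of the
# assumed 512 sector size, then decimate before plotting.
import numpy as np

SECTOR = 512
requested = 2_000_000
samples = -(-requested // SECTOR) * SECTOR   # ceiling to the next multiple of 512

data = np.random.rand(samples)   # placeholder for the acquired waveform

decim = 1000                     # plot ~2000 points instead of ~2 million
plot_data = data[::decim]
```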

 

Is there a reason you are using an external clock? Have you tried with the internal clock?

Message 2 of 7

Thanks for the reply!

 

Yes, that's right. I actually just recently tested using 1,835,008 samples after reading similar advice in this forum, but unfortunately I got the same error.

 

The main reason we don't use continuous acquisition is because we typically change one or more parameters (of other devices) between each measurement loop, and while parameters are changing the voltage measurement is meaningless.

 

The external clock signal is actually made up of 2 MHz pulse trains with gaps between them. Basically, we don't need to be acquiring 100% of the time, so instead of continuously acquiring and throwing away some of the data, we just skip the acquisition by leaving the clock signal low. Under current conditions—the ones that are producing this error—we are acquiring over 90% of the time, but we weren't getting the error previously when we were acquiring closer to 20% of the time. Though this could also just be a coincidence, as many things have changed.

 

I might be able to try switching to an internal clock, especially if it's just for troubleshooting purposes. I think an internal clock might cause synchronization issues though, as well as making the data analysis quite a bit more complicated.

 

A few years ago, we used a more complex setup where instead of an external clock, we used an external trigger, which we used to generate the clock signal using a counter channel. Kevin Price actually helped me set that up a few years ago.

 

The error isn't an insurmountable problem. I just finished adding proper error handling so that the measurement can recover from this error. Still though, I would like to explore why the error is occurring in the first place. Do you know whether it is a problem to wait for the task to be finished before calling DAQmx Read?

Message 3 of 7

If Kevin gave you the sample clock it is good.

 

You are using functions that I rarely use; I do continuous acquisition on multiple channels at 10 MSa/s, and for that I can set the buffer size myself. For a finite acquisition the buffer is set to the number of points being acquired, and I don't believe there is any way to change this. Since the buffer holds exactly the number of points requested, any delay in downloading it can result in the error. It is possible that waiting on Task Done delays the read enough that the buffer overflows.

  1. Try without the Task Done wait (see the sketch after this list).
  2. Make sure your device is attached to its own USB port, and that the port is not shared with anything else.
  3. I get that you are concerned about blocking. Do you want to abort the task before completion? You can always branch the task wire and use that to abort the task if necessary.
  4. How often do you check Task Done? Maybe set the timeout to a small number, like 10 ms?
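
For suggestion 1, a minimal sketch of what "read without the Task Done wait" looks like in the Python nidaqmx API; the device name and timeout are placeholders, and the read itself blocks until the finite acquisition completes:

```python
# Suggestion 1 in sketch form: drop the Task Done polling and let DAQmx Read
# block until the finite acquisition completes. "Dev1/ai0" and the timeout
# are placeholders.
import nidaqmx
from nidaqmx.constants import AcquisitionType

SAMPLES = 2_000_000

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(2_000_000,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=SAMPLES)
    task.start()
    # The read blocks (up to `timeout` seconds) until all SAMPLES points are
    # available, so no separate task-done wait state is needed.
    data = task.read(number_of_samples_per_channel=SAMPLES, timeout=5.0)
    task.stop()
```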
Message 4 of 7

1. I think you are suggesting that I remove the while loop altogether and immediately call DAQmx Read. (Technically, since I'm using a state machine I would just be removing that "state" from the state machine.) If I do that, then the UI will only update once, and then lock up until the readout is done, which isn't the end of the world, but the VI really needs to be rewritten to accommodate parallel processing using queues to transfer data between the structures. This is definitely something I can test.

 

2. This is something I completely overlooked. Yes, the device is sharing the port with other devices, since we recently ran out of USB ports. I'll try switching this tomorrow.

 

3. Aborting the task before completion isn't really important. The blocking is more of a concern for the UI.

 

4. There's quite a lot that happens between calls of "Wait until Done." I'm not sure how long it takes between loops but if I had to guess it's probably 50-100 ms between loops.

 

I'll work on testing out these changes, but it will take some time, since the error can sometimes take as long as 12 hours before appearing.

Message 5 of 7

@slourette wrote:

1. I think you are suggesting that I remove the while loop altogether and immediately call DAQmx Read. (Technically, since I'm using a state machine I would just be removing that "state" from the state machine.) If I do that, then the UI will only update once, and then lock up until the readout is done, which isn't the end of the world, but the VI really needs to be rewritten to accommodate parallel processing using queues to transfer data between the structures. This is definitely something I can test.


I do not have LabVIEW here to try; I really never use finite acquisition, and I don't use Wait Until Done either. I am not sure whether finite reads support DAQmx Events (I don't think so). In my applications, I register for a DAQmx Read "N Samples in the buffer" event, so I can do other things while waiting for the points to be ready. Look in the Example Finder for continuous voltage acquisition with events. If it works with a finite task, then set the number of samples to your finite acquisition count. So rather than a wait loop, your application will have an event structure that fires when it has finished acquiring data.
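
For reference, the same idea expressed against the Python nidaqmx API. This is only a sketch with a placeholder device name, and whether a finite task actually accepts this event registration still needs to be verified on the hardware:

```python
# Sketch: register an "every N samples acquired into buffer" event instead of
# polling Task Done. "Dev1/ai0" is a placeholder, and whether a finite task
# accepts this registration still needs to be verified on the device.
import nidaqmx
from nidaqmx.constants import AcquisitionType

SAMPLES = 2_000_000

task = nidaqmx.Task()
task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
task.timing.cfg_samp_clk_timing(2_000_000,
                                sample_mode=AcquisitionType.FINITE,
                                samps_per_chan=SAMPLES)

def on_samples(task_handle, event_type, num_samples, callback_data):
    # Fires once SAMPLES points are in the buffer; read them here.
    data = task.read(number_of_samples_per_channel=SAMPLES)
    return 0    # DAQmx callbacks must return an int

task.register_every_n_samples_acquired_into_buffer_event(SAMPLES, on_samples)
task.start()
# ... the rest of the application (UI, other instruments) keeps running here,
# analogous to an event structure waiting for the acquisition-done event ...
```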

Message 6 of 7

@slourette wrote:

I'm using a USB-6361 and sampling one analog voltage channel at a 2 MHz sample rate, the maximum sample rate. When running continuously, after a few hours I will usually get an onboard device memory overflow error (Error -200361)...

The error you're getting is more of a system-level error which typically isn't fixable with code.  Sure there are exceptions, but let's focus on what's most likely.

 

Onboard memory overflow signals that the DAQmx driver isn't able to keep transferring data from the device to PC memory fast enough to prevent the onboard FIFO from overflowing.  At 2 MHz sample rate and a 2047 sample FIFO, that would only take about 1 millisec.  For as long as the task is running (whether finite or continuous), DAQmx needs to keep moving data and freeing up space in that FIFO to avoid the error you see.

 

1. I'm confident that a PCIe device would not have this problem due to the ability to transfer data via DMA without burdening the CPU.  I'm far from shocked that a USB device does.   I'm actually kind of surprised you can run for hours before you get the error.

 

2. A shared USB hub certainly wouldn't be helping matters.  The more you can prevent any sharing of USB bandwidth and access, the better.

 

3. With less than a millisec as a margin for error, many system-level things encroach on DAQmx's ability to "keep up".   You'll have little to no control over many of them.

 

4. However, it's at least *possible* that some of the CPU contention is *within* your control, at least the part driven by your other code that's running.

I'm doubtful you'll get long-term reliable behavior with a USB device and a 1 millisec overflow time limit.  You can try dabbling about with stuff mentioned in #'s 2-4 above, but I expect that to get to a fully reliable solution you'll need a different DAQ device.

 

 

-Kevin P

 

 

P.S.  I recall from years ago that there were some speed-limiting quirks about querying for whether the task was done.  Not sure they're still around in newer DAQmx versions and don't think it's having an impact on the problem you have anyway.  Nonetheless, you might consider instead querying for the # samples acquired in the loop.  You can use a DAQmx Read property to do that as shown below.

[Attached snippet: Kevin_Price_0-1751375073060.png]
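
For readers without the snippet image, a rough Python nidaqmx analog of that property query looks like this (device name and poll interval are placeholders):

```python
# Rough analog of the LabVIEW DAQmx Read property query: poll the number of
# samples acquired instead of Task Done. "Dev1/ai0" and the 50 ms poll
# interval are placeholders.
import time
import nidaqmx
from nidaqmx.constants import AcquisitionType

SAMPLES = 2_000_000

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(2_000_000,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=SAMPLES)
    task.start()
    # Query the samples-acquired count rather than asking "is the task done?"
    while task.in_stream.total_samp_per_chan_acquired < SAMPLES:
        time.sleep(0.05)    # do other work / keep the UI alive here
    data = task.read(number_of_samples_per_channel=SAMPLES)
```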

 

Another (and probably better) idea might be to occasionally call DAQmx Read in your loop to retrieve the (partial) data that's been acquired so far.  You should ship it off to some other loop via a Queue where you can accumulate the segments while keeping the DAQ loop lean and mean.  Even though with Finite Acquisition you *can* wait until the end to read it all, you don't *have* to.  It can be helpful to read incrementally to avoid total data loss in case of this error.

Reducing the *consequence* of the overflow error might even get you to "good enough" and save you the cost of new DAQ hardware.
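
A rough sketch of that producer/consumer idea in the Python nidaqmx API, with a plain queue standing in for the LabVIEW Queue; names, device, and timings are placeholders:

```python
# Sketch of the incremental-read idea: read whatever has arrived so far and
# hand it to a consumer via a queue, so an overflow only costs the unread tail.
import queue
import threading
import time
import nidaqmx
from nidaqmx.constants import AcquisitionType, READ_ALL_AVAILABLE

SAMPLES = 2_000_000
chunks = queue.Queue()          # stands in for the LabVIEW Queue between loops

def consumer():
    acquired = []
    while True:
        chunk = chunks.get()
        if chunk is None:       # sentinel: acquisition finished (or errored out)
            break
        acquired.extend(chunk)  # accumulate segments outside the DAQ loop

threading.Thread(target=consumer, daemon=True).start()

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(2_000_000,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=SAMPLES)
    task.start()
    total = 0
    while total < SAMPLES:
        # Grab everything available so far; keep the DAQ loop lean and mean.
        data = task.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)
        if data:
            total += len(data)
            chunks.put(data)
        time.sleep(0.05)
    chunks.put(None)
```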

Message 7 of 7