LabVIEW


error 42 DAQmx Base DMA overflow

Solved!
Go to solution

I need to perform a simultaneous AO of a non-sinusoidal wave and collect data from 2 AI channels, all at 1000 Hz.  I'm using LV8.5.1 on a G4-1.5GHz running OS 10.4.11.  I have a PCI-6733 AO board and a PCI-6143 (S-Series) AI board.  Everything I tried yields a DMA overflow error after less than 5,000,000 samples.  I tried the ProducerConsumerDAQmxBase.vi from this thread:

 

 http://forums.ni.com/ni/board/message?board.id=170&message.id=364720&query.id=52503#M364720

 

When I run it at 100 Hz it overflows in <140 s, at 150 Hz in 43 s, and at 200 Hz in 28 s. 

 

Am I fundamentally limited by my meager horsepower or can I use advanced functions to speed things up?  I can transfer the whole project to a dual G5/2.7GHz with a 1.3GHz bus, but will that buy me anything?

 

If I use a vi with one call to the AO board and have the AI call in a loop just writing to an array indicator, it actually works at 1000 Hz.  But in addition to needing to feed a 900,000-point wave into the 6733, which only has an 8k buffer, I need to perform some GPIB measurements and save and display data.  When I try any of these tasks I get the DMA overflow.  Setting a large output buffer didn't seem to help.

 

 Any suggestions are welcome.

0 Kudos
Message 1 of 14
(4,243 Views)
I forgot to add that in my vi I'm using 4000 samples/channel in both AO and AI for a 4 s run of each function.
0 Kudos
Message 2 of 14
(4,240 Views)

Hey Tanzella,

 

On the thread here, did you modify the read to do multiple samples rather than 1 sample, as Steve mentioned in the thread?  I've been running it on my 6251 (M-series) for a while with no trouble.   Please see my attached vi for the modifications.  I removed the file I/O for ease of testing, so you will need to add that back in.  Please confirm the results and let me know if you are still getting the error.  With 1-sample reads, it becomes system-dependent how fast your processor can pull one sample at a time from memory before it overflows.  I would also like to mention that analog output does not support DMA transfers in DAQmx Base 3.2.  This means the output rate you see with the 6733 might be limited when generating that 900,000-point waveform.
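[Editor's note] The reasoning behind the multi-sample advice can be sketched with a little arithmetic: each driver call carries a roughly fixed per-call overhead, so the rate at which the consumer can drain the buffer scales with the number of samples moved per call. This is a conceptual sketch only (the 1 ms overhead figure is an assumption, and `max_sustainable_rate` is not an NI function):

```python
# Conceptual sketch, NOT the NI API: why multi-sample reads prevent
# buffer overflow. Assume each DAQmx Base Read call costs a fixed
# per-call overhead regardless of how many samples it returns.

def max_sustainable_rate(samples_per_read, call_overhead_s):
    """Highest sample rate (S/s) the consumer can drain:
    samples moved per call divided by time per call."""
    return samples_per_read / call_overhead_s

# Hypothetical 1 ms of overhead per driver call:
OVERHEAD = 0.001
single = max_sustainable_rate(1, OVERHEAD)      # ~1000 S/s: marginal at 1 kHz
block = max_sustainable_rate(4000, OVERHEAD)    # ~4,000,000 S/s: ample headroom
```

With 1-sample reads the drain rate barely matches the 1 kHz acquisition rate, so any extra work in the loop tips the buffer into overflow; 4000-sample reads leave orders of magnitude of headroom.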

 

I hope this helps,

Paul C.

0 Kudos
Message 3 of 14
(4,209 Views)

Paul,

   

    Thanks for the reply and updated vi.  I had already eliminated one of the file saves and made the other automatic.  I added a "# of samples/channel" control to your latest.  You are correct: at 100 to 4000 samples the vi runs well with no problems at 1000 Hz.  Since I only need 4000 S/ch at 1000 Hz, I'm satisfied that this solves the AI side of the problem.  In fact, when I eliminated the AO function from my subvi that collects data from both the 6143 and my GPIB instruments and saves the data every 60 s, my vi worked fine.  Could the non-DMA problem with the 6733 have been causing my buffer error on the 6143?  (I have separate error indicators on the AI and AO lines.)

 

    On the AO side, my 900,000-point array is generated once at the beginning of the main vi and parsed into the polymorphic AO vi 4000 points at a time using Array Subset, calculating the index from the loop iteration terminal.   Would using an auto-indexed tunnel operate faster?  Is having both the AO and AI polymorphic vi's in the same loop a good or bad idea?  Would using digital triggering on both boards help?
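[Editor's note] The Array Subset chunking scheme described above can be sketched as follows; `chunk` is an illustrative stand-in for Array Subset with the start index computed from the loop iteration terminal:

```python
# Sketch of the chunking scheme: a 900,000-point waveform written
# 4,000 points per loop iteration, with the start index derived
# from the loop counter (equivalent to LabVIEW's Array Subset).

def chunk(waveform, iteration, chunk_size=4000):
    start = iteration * chunk_size
    return waveform[start:start + chunk_size]

wave = list(range(900_000))
assert chunk(wave, 0)[0] == 0        # iteration 0 starts at index 0
assert chunk(wave, 2)[0] == 8000     # iteration 2 starts at index 8000
assert len(chunk(wave, 224)) == 4000 # 900,000 / 4,000 = 225 chunks (0..224)
```

An auto-indexed tunnel over a pre-split 225 x 4000 2D array would do the same indexing implicitly; whether it is measurably faster depends on how LabVIEW copies the subarrays.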

 

PS.  Should I call this question solved and start a new thread on the mixed AO/AI problem?

0 Kudos
Message 4 of 14
(4,200 Views)

Hey Tanzella,

 

In the future, I would recommend posting additional questions to a new thread, just to keep things more easily searchable by other users.  For now, we can keep it in this thread. 

 

I believe the loop rate was slowed by the added functionality.  This caused the DAQmx Base Read to be called less frequently, allowing the buffer to overflow.  On the AO side, it sounds like you're writing one sample at a time.  If so, have you tried changing this to multiple samples?  I believe this, or something else, is slowing down the loop rate and causing the read to be called less often.  If you could post a screenshot of your code, it may help me identify the problem. 
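[Editor's note] The fix implied above is the producer/consumer pattern from the vi this thread started with: the acquisition loop does nothing but move data into a queue, and the slow work (GPIB, file save, display) runs in a separate consumer loop so the read is never blocked. A minimal sketch using Python threads as stand-ins for the two LabVIEW loops (none of these names are NI APIs):

```python
# Producer/consumer sketch: the producer stands in for the fast
# DAQmx Base Read loop, the consumer for the slow GPIB/save/display
# loop. The queue decouples them so the read loop never stalls.
import queue
import threading

q = queue.Queue()

def producer(n_blocks, block_size=4000):
    for i in range(n_blocks):
        q.put([i] * block_size)  # stand-in for one multi-sample read
    q.put(None)                  # sentinel: acquisition finished

def consumer(results):
    while True:
        block = q.get()
        if block is None:
            break
        results.append(len(block))  # stand-in for save/GPIB/display work

results = []
t = threading.Thread(target=producer, args=(10,))
t.start()
consumer(results)
t.join()
total = sum(results)  # 10 blocks x 4000 samples = 40000 samples handled
```

In LabVIEW the same decoupling is done with two parallel while loops and a Queue (Enqueue Element in the read loop, Dequeue Element in the processing loop).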

 

Regards,

Paul C.

0 Kudos
Message 5 of 14
(4,171 Views)

It looks like I spoke too soon.  The DMA error crops up between 55 and 65 minutes (over 3,500,000 data points), even without saving any data.  I'm trying it again with no GPIB or data save.  If that doesn't work, I'll remove the output data chart next.

 

Francis Tanzella 

0 Kudos
Message 6 of 14
(4,154 Views)

Hi Tanzella,

 

Was this test with both AI and AO?  If so, are you monitoring the CPU usage when you run the application?  I'm curious whether you are hitting 99%-100% CPU usage when the error occurs.  Please let me know how the test goes with no GPIB or data save.

 

Regards,

Paul C.

0 Kudos
Message 7 of 14
(4,133 Views)

Hi Paul,

 

    All tests are run with both AI and AO operating at 4000 points/call.  Without GPIB or save, the vi ran for ~80 minutes before error 42.  Every time I checked, CPU usage was below 10%.  I had manually set the AI buffer to 1,800,000 points for the 80-minute run.  When I let it revert to its default buffer, error 42 cropped up after 22 minutes.  I set the AI buffer to 36,000,000 points and let it run overnight without GPIB or save, plotting the midpoint of every 4000-point sawtooth repeating every 4 s.   The vi is still running after ~1000 minutes (60,000,000 S/ch).  The AI midpoint generally alternates between in sync and 140 points behind, with occasional excursions to 400 points behind.  Interestingly, it always returns to being in sync.  I'm going to try digitally triggering both the AI and AO on the same signal.

 

Thanks,

Francis Tanzella

0 Kudos
Message 8 of 14
(4,130 Views)

Hi Francis,

 

Please let me know how things go with the triggering.  I have confirmed that the performance problems you are seeing are a known issue with the DAQmx Base driver, documented in ID #125793.  It is typically characterized by a "there is not enough data" error.  The DMA overflow error you are seeing is typically due to slow loop rates, read sizes that are not large enough, or too fast a sample rate.  I do believe the known issue is likely having an effect on the problems you are seeing.  If you would like to find out the status of the bug report in the future, you can request support here and use the Ask an Engineer button to contact our support staff.  It is currently being looked into by R&D.  If you keep me posted on your progress, I will do my best to help you find a workaround that suits your needs until the issue is fixed.

 

Regards,

Paul C.

0 Kudos
Message 9 of 14
(4,075 Views)

Hi Paul,

 

    The triggering didn't seem to change anything.  The AI was still up to 400 ms out of sync occasionally, but would sync up on its own.  I can't trigger the AO PFI0 from the AO CTR0, so I connected the AO CTR0 to the AI PFI0.  Somehow the AI and AO still fell out of sync.  I may need to trigger both from an external source, but I don't have one that I trust.  What is the proper way to trigger both the AO and AI cards simultaneously?

 

   I put the AO polymorphic VI in a While loop that doesn't exit until all 4000 points are written.  This seems to have nearly fixed the problem, except for one reproducible offset every 15 minutes.  The AI VI doesn't have a "# of points collected" terminal, so I can't do the same for it.  After I figure out what's giving me the regular offset, I'll declare it done.
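[Editor's note] The "loop until all 4000 points are written" fix described above is a standard partial-write retry loop: keep offering the remaining samples until the device has accepted the whole chunk. A minimal sketch, where `write_chunk` and `fake_write` are hypothetical stand-ins for a write call that may accept fewer samples than offered:

```python
# Partial-write retry loop: re-offer the unwritten tail of the chunk
# until the whole thing has been accepted. write_chunk is a stand-in
# for a driver write that returns how many samples it actually took.

def write_all(write_chunk, data):
    written = 0
    while written < len(data):
        written += write_chunk(data[written:])
    return written

# Fake device that accepts at most 1500 samples per call:
def fake_write(samples):
    return min(len(samples), 1500)

n = write_all(fake_write, [0.0] * 4000)  # takes 3 calls: 1500 + 1500 + 1000
```

Note this sketch assumes the write always makes progress; a real loop would also check the call's error output to avoid spinning forever.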

 

Francis Tanzella

0 Kudos
Message 10 of 14
(4,060 Views)