Multifunction DAQ

NI-DAQ PCI-6110 Unaccounted Overhead

The "Wait Until Done" for AO isn't really necessary.   You can already know that it's ok to Stop the AO task as soon as the function call to DAQmx Read for the AI has completed.  Just set up sequencing to make sure the AO stop happens right after AI read.

 

Onward to the untriggered finite pulse train used as a shared sample clock. Again, "Wait Until Done" shouldn't be needed; similar sequencing can be used to avoid it. It may also help to commit the counter task before the loop. But these are minor points about reducing overhead; they don't address the timeout error.
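
Committing the task ahead of the loop is a single call. Here's a rough sketch in the Python nidaqmx API (the device, counter, and rates are placeholders, not taken from the posted VI):

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, TaskMode

    # Counter output task generating a finite pulse train, used as the shared
    # AO/AI sample clock via its internal output terminal.
    co_task = nidaqmx.Task()
    co_task.co_channels.add_co_pulse_chan_freq("Dev1/ctr0", freq=1.0e6, duty_cycle=0.5)
    co_task.timing.cfg_implicit_timing(sample_mode=AcquisitionType.FINITE,
                                       samps_per_chan=1000)

    # Commit once, before the loop: the hardware gets reserved and programmed
    # here, so each start inside the loop has much less work left to do.
    co_task.control(TaskMode.TASK_COMMIT)

    for _ in range(10):
        co_task.start()
        co_task.wait_until_done()   # or sequence the stop off the AI read instead
        co_task.stop()

    co_task.close()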

Frankly, I don't see a good reason for the timeout you got. When you were trying to trigger, there was a circular dependency in the timing signals preventing anything from getting started; here, the config and sequencing appear OK to me.

   Which task(s) are producing an error and what's the error # and descriptive text?  How many loop iterations do you get before the error?

 

 

-Kevin P

Message 21 of 37

We know that the earlier timeout was due to the trigger never firing in the first place. In this case there is no trigger, as the CO is being started/stopped manually.

 

The VI pauses for 10 seconds and I see that one loop iteration completes. After that there is no response; when I press the stop button, the errors below appear.

Message 22 of 37

You can right click on those errors and pick "Explain Error..." from the popup menu for more info.  The error for hw-timed AO (-200018) suggests either that the sample clock intervals are too short for the board to handle or that not enough data was available in the buffer for the # of sample clocks that occurred.  

   It's possible that the board only handles 4 MHz when AO uses its own internal sample clock.  Perhaps the limit is lower for an external clock?   You could check this by running trials with lower sample clock rates.

   To deal with the "not enough data" possibility, you could explicitly configure the AO task to allow regeneration.  I thought this was already the default behavior, but it wouldn't hurt to make it explicit.
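
In the Python nidaqmx API the same setting is a single property write. A minimal sketch, assuming a hypothetical AO task on "Dev1/ao0":

    import nidaqmx
    from nidaqmx.constants import RegenerationMode

    ao_task = nidaqmx.Task()
    ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0")    # hypothetical channel

    # Explicitly allow the driver to regenerate (re-use) the data already in the
    # AO buffer, so the task never starves for samples even if nothing new is written.
    ao_task.out_stream.regen_mode = RegenerationMode.ALLOW_REGENERATION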

 

The AI error (-200284) is a timeout error.  When the timeout input on DAQmx Read is unwired, the default timeout is 10 sec, as you observed.  For some reason, the AI task doesn't seem to be getting enough pulses from the counter (maybe none) while the AO task seems to be getting too many or getting them too fast.  I can't explain why from the code I see.  I can only suggest doing some standard "divide-and-conquer" troubleshooting.  Make some temporary copies of your "real" VI and use those for troubleshooting; then you don't have to worry about remembering which changes to undo.
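
For reference, the timeout is just an input to the read call. A tiny sketch of the Python nidaqmx equivalent (the channel name is hypothetical), where 10 seconds is the same default you observed:

    import nidaqmx

    ai_task = nidaqmx.Task()
    ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0")    # hypothetical channel

    # An unwired timeout in LabVIEW means 10 seconds; the Python equivalent is
    # simply the default value of the timeout argument on the read call.
    data = ai_task.read(number_of_samples_per_channel=1000, timeout=10.0)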

 

 

-Kevin P

Message 23 of 37

I solved the errors by making the CO timing continuous instead of finite and reducing the rate to 2.5 MHz. 3 MHz also worked fine, but I did not test above that.
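
For anyone following along, the change amounts to roughly this in the Python nidaqmx API (a sketch only -- my VI is LabVIEW, and the device/counter names here are placeholders):

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    co_task = nidaqmx.Task()
    # 2.5 MHz pulse train on ctr0, used as the shared AO/AI sample clock
    co_task.co_channels.add_co_pulse_chan_freq("Dev1/ctr0", freq=2.5e6, duty_cycle=0.5)

    # Continuous instead of finite implicit timing: the counter free-runs until
    # the task is stopped, so there is no pulse count for the AO/AI tasks to outrun.
    co_task.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS)

    co_task.start()
    # ... AO/AI tasks clocked from /Dev1/Ctr0InternalOutput run here ...
    co_task.stop()
    co_task.close()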

 

I've successfully implemented the image scan by making AO/AI continuous and using custom waveforms as discussed before.

The only issue is that the 2nd line of each scan appears to be shifted. I am using a triangle waveform and inserting a constant-voltage delay at the beginning and at the middle (at 270° phase). Suppose I place a delay of 100 us to account for when the y galvo moves; AI is still scanning during that time, so I discard 100 us worth of data from each buffer read, i.e. at 5 MHz, 100 us corresponds to 500 data points, so I would discard 500 points. For 200 us it is 1000 points, etc. This doesn't seem to be working as intended: I have to delete an additional 1725 points on top of the expected 1000, so 2725 data points in total. I'm thinking there is some small overhead from when I call the Read VI and process the data, since 1725 points corresponds to 345 us of unaccounted overhead.
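
To spell out the bookkeeping I'm describing, it's just (sample rate x delay); a quick sketch of my intent in Python, not the actual VI:

    SAMPLE_RATE_HZ = 5.0e6    # AI sample rate
    DELAY_S = 100e-6          # constant-voltage settling delay placed in the waveform

    # Samples acquired during the delay, to be discarded from each buffer read:
    samples_to_discard = int(SAMPLE_RATE_HZ * DELAY_S)    # 5e6 * 100e-6 = 500 points

    raw_data = [0.0] * 5000                    # placeholder for one buffer read of AI data
    line_data = raw_data[samples_to_discard:]  # keep only the data taken while scanning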

 

I've tried taking this overhead into account by adding an additional delay of 345 us, but then the image is completely scrambled. Is carefully tweaking AO/AI to account for the Read VI overhead the right direction, or is there another route?

Message 24 of 37

You do not need to worry about "compensating" for 345 microsec or whatever due to the call to DAQmx Read.  The beauty of continuous tasks is that data isn't lost during the time between calls to Read functions.  DAQmx is buffering it in the background, losslessly.  The next call to DAQmx Read will find it waiting there.  You won't lose data unless you fall so far behind that the buffer fills all the way up with unread data, in which case the task will return an error to alert you anyway.  So fear not - if you aren't getting an error, you won't be losing data.
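
A minimal sketch of that behavior using the Python nidaqmx API (the channel name, rate, and counts are made up): every read simply returns the next contiguous chunk from the buffer, no matter how long the program dawdled in between.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    SAMPLES_PER_READ = 5000

    ai_task = nidaqmx.Task()
    ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0")        # hypothetical channel
    ai_task.timing.cfg_samp_clk_timing(5.0e6,
                                       sample_mode=AcquisitionType.CONTINUOUS,
                                       samps_per_chan=SAMPLES_PER_READ * 10)   # buffer size hint
    ai_task.start()

    try:
        for _ in range(100):
            # Each call returns the *next* SAMPLES_PER_READ samples, contiguous with
            # the previous read -- nothing is lost while the loop was busy elsewhere.
            chunk = ai_task.read(number_of_samples_per_channel=SAMPLES_PER_READ)
            # ... process chunk ...
    finally:
        ai_task.stop()
        ai_task.close()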

 

Hard to comment on specifics of your current observations without having the associated code to examine.  I'll just reiterate that the procedure I outlined in msg 10 should be sufficient.  

 

Either post the exact code that led to your most recent observations, or if you've done some more mods since then, post *that* and re-describe your new observations in detail.  It's impossible from this end to know exactly what variation of code led to your observations unless you make the connection explicitly and post the corresponding code.

 

 

-Kevin P

Message 25 of 37

The overall layout of the VI is similar to what I've posted before.

 

I modified the waveforms for both the X and Y galvos so that the first delay is removed and placed at the end of the waveform instead. This decreased the shift significantly, and at the correct delay I don't see these alternating shifts unless I run the galvos quickly, i.e. at a high waveform frequency. I also notice that the shift increases over time when the galvos are running at high speeds. When the galvos run at higher speeds, I read the buffer more times per second.

This shift also increases when I open other programs during program operation.

 

The only thing that comes to mind is that I'm still performing data processing inside the main loop. The next step will be to separate everything into a producer/consumer system using queues.
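
The structure I have in mind is the usual queue-based pattern; sketched in plain Python (purely illustrative -- the real implementation will use LabVIEW queues):

    import queue
    import threading

    data_queue = queue.Queue()
    stop_event = threading.Event()

    def producer(ai_task, samples_per_read):
        # Producer loop: nothing but reading the DAQ buffer and enqueueing chunks,
        # so it can always keep up with the hardware.
        while not stop_event.is_set():
            chunk = ai_task.read(number_of_samples_per_channel=samples_per_read)
            data_queue.put(chunk)

    def consumer():
        # Consumer loop: all of the slow processing and display happens here,
        # decoupled from the acquisition loop by the queue.
        while not stop_event.is_set() or not data_queue.empty():
            try:
                chunk = data_queue.get(timeout=0.5)
            except queue.Empty:
                continue
            # ... discard settling samples, reshape into image lines, display ...

    # threading.Thread(target=producer, args=(ai_task, 5000), daemon=True).start()
    # threading.Thread(target=consumer, daemon=True).start()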

If this doesn't work, then I can only say that either drift is being introduced over time when using triggering (since AO/AI aren't synced) or something is wrong with the galvo(s).

 

Will keep you posted.

Message 26 of 37

I just opened MATLAB while the VI was running and AO spat out error -200279. Is this basically telling me the PC cannot keep up with the data acquisition?

Message 27 of 37

Yeah, that error should be coming from the AI task though, not AO.  It means your software isn't reading data out of the buffer fast enough to keep up with the rate that the board & driver are re-filling it.  If it's going to occur, it makes sense that it's during the launch of another sizeable program, making the CPU less available to your data acq program.

 

Within the same data acq board, the hardware timing for AO and AI sampling won't drift relative to one another.  Both derive timing from a common oscillator on the board.

 

The *apparent* drift you describe will have another explanation.  I'm away from LabVIEW and can't recall the code clearly now.  But one thing that could look like drift would be a tendency to slowly fall behind with the AI Read loop.

Here's an example.  Suppose you request a fixed # of samples every iteration, and that # corresponds to 5 msec.  Now suppose Windows decides to give a lot of CPU to MATLAB while you launch it.  Over the course of 100 msec of real time, you only iterate 10 times and read 50 msec of data.  The other 50 msec of data is in the buffer, waiting for you to read it on subsequent iterations.

On that next iteration, you'll read another 5 msec of data, but it will be stale data from 50 msec ago.  Still, the read will return quickly and you'll be ready to take another read in maybe 1 msec.  Eventually, you'll spin through iterations faster than normal, slowly eating up the backlog, until you get back to real time.

   However, if you have other code in the loop to help enforce the nominal 5 msec loop time, then you won't be able to catch up.  Your *reads* will cause your data to *appear* to drift relative to AO.
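
One easy way to check whether that's happening is to watch the read backlog on every iteration -- in LabVIEW that's the AvailSampPerChan property on a DAQmx Read property node. A rough sketch of the same check in the Python nidaqmx API (the channel, rate, and read size are made up to match the 5 msec example above):

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    ai_task = nidaqmx.Task()
    ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0")     # hypothetical channel
    ai_task.timing.cfg_samp_clk_timing(5.0e6, sample_mode=AcquisitionType.CONTINUOUS)
    ai_task.start()

    for _ in range(1000):
        data = ai_task.read(number_of_samples_per_channel=25000)   # ~5 msec at 5 MHz
        # Backlog check: if this number keeps growing from iteration to iteration,
        # the loop is falling behind and each read is returning ever-older data.
        backlog = ai_task.in_stream.avail_samp_per_chan
        print("samples still waiting in buffer:", backlog)

    ai_task.stop()
    ai_task.close()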

 

I don't know if that's what *is* going on; it's just an example to illustrate that what looks like drift can have causes that aren't related to the hardware sample clocks.

 

 

-Kevin P

Message 28 of 37

I attached a snapshot of the VI, now with producer/consumer implemented.

The performance has improved, and I can now take more data points without the PC complaining about memory issues or lagging behind AI.

It was actually AI giving that error code; AO has had no problems so far.

Before, I was limited to 5 ms per line with 500 pixels per line, but now I can go below 1 ms and above 1000 pixels per line.

Unfortunately, it did not change the "shift" at all, and at this point I am pretty much lost. Here are my observations:

1. There is no shift when using the sawtooth waveform. This is true for finite mode; I have not tried continuous mode yet.

2. There is a shift when using the triangle waveform in both finite and continuous modes.

3. I can fix the shift by either putting in a particular delay or manually deleting a number of points at the beginning of the data, as described in previous posts, i.e. deleting 2725 points instead of the expected 1000.

 

At this point I am wondering if this has something to do with the galvos. The reason I am fixated on the triangle waveform is that I can run the galvos above 1000 Hz with the triangle, whereas they are limited to 100 Hz with the sawtooth.

 

When running this VI on the programming machine, I see a fringe pattern due to whatever the simulated AO is set to. There are no shifts.

Message 29 of 37

In one of the previous posts, I mentioned that I reduced the shift by removing the delay at the start of each waveform for both the x and y galvos and placing it at the end. I went back to the original waveform, with the delay back at the start, and the shift worsened and cannot be fixed with any delay time. Remember that the x galvo is using the triangle and the y galvo is using the sawtooth.

I changed only the y waveform back, and the shift did not change. I then reverted the y waveform, changed only the x waveform, and the shift was fixed with a certain delay time.

I'm thinking that either the galvos have issues with the tri waveform or I'm not processing the data correctly. Or, the x/y waveforms are desynced somehow.

Message 30 of 37