LabVIEW


Large constant latency when generating AO signals as an immediate copy of the AI signals.

Solved!
First of all, I use a PCI-6229 M-Series card and LabVIEW 8.0.
 
I am trying to develop a general-purpose program consisting of three parts, each inside an independent while loop. Part 1 continuously acquires data through DAQmx Read, part 2 analyzes the data passed on by part 1, and part 3 continuously generates part 2's results through DAQmx Write. I used FIFO queues for seamless data passing among the three parts, and a Rendezvous to synchronize their start times. However, I found that all of the output signals have a constant delay of about 140 ms (pretty long) relative to the input signals from DAQmx Read.
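Since block diagrams don't paste well into a forum post, here is a rough text-language sketch of the three-loop structure (Python, with stub functions standing in for the DAQmx calls; all names here are illustrative, not from my actual VI):

import queue
import threading
import time

def daqmx_read():                  # stand-in for DAQmx Read
    return [0.0] * 100

def analyze(block):                # stand-in for the analysis in part 2
    return block

def daqmx_write(block):            # stand-in for DAQmx Write
    pass

raw_q = queue.Queue()              # FIFO queue: part 1 -> part 2
result_q = queue.Queue()           # FIFO queue: part 2 -> part 3
start = threading.Barrier(3)       # plays the role of the Rendezvous
running = True

def part1_acquire():
    start.wait()                   # all three loops start together
    while running:
        raw_q.put(daqmx_read())

def part2_analyze():
    start.wait()
    while running:
        result_q.put(analyze(raw_q.get()))

def part3_generate():
    start.wait()
    while running:
        daqmx_write(result_q.get())

for loop in (part1_acquire, part2_analyze, part3_generate):
    threading.Thread(target=loop, daemon=True).start()
time.sleep(1)                      # let the loops run briefly, then stop
running = False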
 
To clarify how my program works, I have attached a simple VI consisting of the three parts described above. This VI uses DAQmx Write to generate, on the fly, a copy of the signals coming in through DAQmx Read. I connected the analog input channel to a microphone for sound input into DAQmx Read, and used an oscilloscope to monitor both the sound signals and the copy generated by DAQmx Write. Even in this simple case, the generated signals show a constant delay of ~140 ms relative to the sound signals. Apart from that latency, the two sets of signals are identical. I tried feeding an external timing signal into a PFI channel shared by DAQmx Read and Write as the sampling clock, but still got the same latency.
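For readers without LabVIEW 8.0 handy, the attached VI's logic translated into NI's nidaqmx Python API would look roughly like this (a sketch only; the device name, rate, and block size are my assumptions, not the VI's actual values):

import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 10000       # S/s, assumed sample clock shared by AI and AO
BLOCK = 1000       # samples read and written per loop iteration, assumed

with nidaqmx.Task() as ai, nidaqmx.Task() as ao:
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ai.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)
    ao.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)
    ai.start()
    # A buffered AO task needs data in its buffer before it can start,
    # so prime it with the first block of input.
    ao.write(ai.read(number_of_samples_per_channel=BLOCK))
    ao.start()
    while True:
        # Each new block goes into the AO buffer behind any data already there.
        ao.write(ai.read(number_of_samples_per_channel=BLOCK))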
 
I have searched NI's Knowledge Base, tried hundreds of tests, and posted my problem on the Multifunction DAQ board, but still got no help. Is this simply a limitation of LabVIEW for this type of task without the Real-Time module? Or am I missing some key point? I appreciate any help or comments, and if I haven't stated my question clearly, please let me know. Thanks.
 
Claude.
 
Message 1 of 3
Solution
Accepted by topic author ClaudeWang

I don't have LV near my network PC and can't look at the code now.  Since you're dealing with audio signals, I'm guessing that both your AI and AO are buffered tasks, right?   If so, that fact is probably at the root of the latency you see.

Each time you call AO Write, the data you feed it is *appended* after any data previously written to the buffer. It won't actually pass through the D/A converter until all of that earlier data has gone out, one sample at a time at the AO sample clock rate.

In general, some latency is necessary to make sure you don't hit a buffer underflow error, where the AO task needs data that hasn't yet been written to the buffer. The only other tip I can think of is to check out the following old thread and see if it's helpful.
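As a back-of-the-envelope check, buffered-AO latency is roughly the number of samples queued ahead of your newest write divided by the AO sample rate. A quick Python illustration (the rate and backlog here are assumed numbers, chosen to match your ~140 ms):

# Latency from buffered AO ~= (samples queued ahead) / (sample clock rate)
rate = 10000                             # S/s, assumed AO sample clock
queued = 1400                            # samples already in the AO buffer
print(f"{queued / rate * 1e3:.0f} ms")   # -> 140 ms, matching what you see

Shrinking the AO buffer (e.g., with DAQmx Configure Output Buffer) and writing smaller blocks more often will cut that backlog down, at the price of a higher underflow risk.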

-Kevin P.

Message 2 of 3
Hi, Kevin:
 
You're really my hero... the thread you referred to solved my problem exactly. Thank you very much.
That said, if you have time to take a look at my VI, please feel free to share any comments. Thanks again!
 
Best,
 
Claude.
Message 3 of 3