Measurement Studio for VC++


2 second delay

Hi,

In my previous question I was not specific.

Here's what I'm doing:

1) I'm sending data to the PCI-6713 DAQ card using the double-buffering method (analog output).

2) For the update rate, I'm setting the update clock, the delay clock, and the delay clock prescaler:

iWhichClock = 0;     // update clock
iUpdateTB   = 1;     // timebase, from WFM_Rate
ulUpdateInt = 1000;  // update interval, from WFM_Rate
iDelayMode  = 0;
WFM_ClockRate(1, 1, iWhichClock, iUpdateTB, ulUpdateInt, iDelayMode);

iWhichClock = 1;     // delay clock
ulUpdateInt = 1;
iDelayMode  = 1;
WFM_ClockRate(1, 1, iWhichClock, iUpdateTB, ulUpdateInt, iDelayMode);

iWhichClock = 2;     // delay clock prescaler
ulUpdateInt = 1;
iDelayMode  = 1;
WFM_ClockRate(1, 1, iWhichClock, iUpdateTB, ulUpdateInt, iDelayMode);


According to the documentation, the delay time should be:

Delay Time = timebase period * delay interval * delay interval prescaler 1 * delay interval prescaler 2
           = 1 us * 1000 * 1 * 1 * 2
           = 2 milliseconds

3) I am outputting ECG signals continuously to the card, which in turn is connected to a scope.
I'm using a double-buffered method. I have 8 channels, and I get 8 bytes of data for each channel every 10 ms.
My buffer size is 16 bytes. I'm writing to the buffer using the WFM_DB_HalfReady() and WFM_DB_Transfer() functions.
My signals look good on the scope.
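(For reference, the write side of my double-buffered loop has the usual shape from the NI-DAQ examples. The sketch below only shows the structure; the argument order of WFM_DB_HalfReady()/WFM_DB_Transfer() is written from memory and should be checked against the NI-DAQ Function Reference, and the half-buffer size is only illustrative.)

i16 chanVect[8] = {0, 1, 2, 3, 4, 5, 6, 7};
i16 halfBuf[16];        // next half buffer of ECG samples (illustrative size)
i16 halfReady = 0;

// Each pass through the loop: ask the driver whether it can accept another
// half buffer, and if so hand over the freshly generated ECG data.
WFM_DB_HalfReady(1, 8, chanVect, &halfReady);
if (halfReady == 1)
    WFM_DB_Transfer(1, 8, chanVect, halfBuf, 16);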


Problem: I am observing a 2-second delay, from start to finish.
The signals look good, but there is always a 2-second delay. For example, on my ECG Lead II there is periodically
a double beat, which occurs right away in my program but only shows up on the scope 2 seconds later.


What am I doing wrong in the above code?

I have tried different buffer sizes. However, I get a 2-second delay on the output independent of the buffer size.


Please Help


Chandra
Message 1 of 11
Chandra,

Thank you for clarifying the question. As Kevin pointed out in your previous posting, it was important to know how you were measuring this delay. From your current description I can see that there is a constant delay of 2 s, independent of buffer size. This delay can be interpreted as the time elapsed between when you write a pattern to the buffer and when it actually shows up at the output, assuming the generation is already running.

The delay is probably caused by having regeneration enabled for the continuous waveform output operation.

Enabling it would allow you to avoid any under-run errors. Basically the driver would reuse the data until you provide a new half buffer. When regeneration is allowed the driver fills the on-board memory with as many buffer copies (of the same data buffer) as possible. This means that you will not actually see the updated pattern at the output of the board until the previous data has been flushed out.

This probably explains the 2 s delay for the 6713 board, which has a FIFO size of 16,384 samples. Dividing the FIFO size by the number of channels gives about 2,048 samples per channel (16,384 / 8) already stored in the FIFO. If you divide that number by your update rate (1,000 updates per second), you can see that it would take around 2 seconds before the old data has been flushed out of the FIFO and the new pattern shows up at the output.

If you disable regeneration, the driver does not reuse any data and you must provide a pattern every time. This means you could get under-run errors. The advantage is that the driver only uses as much on-board memory as your buffer size, which means that you are probably going to see the update at the output of the board sooner than with regeneration enabled. (It all depends on the buffer size, of course.)
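If I recall the Traditional NI-DAQ API correctly, this behavior is selected when you configure double buffering; the call below is a sketch from memory, so please verify the parameter meanings against the WFM_DB_Config entry in the NI-DAQ Function Reference:

i16 chanVect[8] = {0, 1, 2, 3, 4, 5, 6, 7};

// DBmode = 1 turns on double buffering.
// oldDataStop = 1 tells the driver not to regenerate old data: if a new
// half buffer has not arrived in time, the generation stops with an
// under-run error instead of reusing the previous data.
// partialTransfer = 0 disallows partial half-buffer transfers.
WFM_DB_Config(1, 8, chanVect, 1, 1, 0);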

The buffer size in your application seems relatively small so you might run into under-run errors if the driver is not provided with new data soon enough. You'd have to experiment to see if you can find a compromise between buffer size, stability and responsiveness to new patterns.

Chapter 4 of the NI-DAQ User Manual explains the double-buffered scheme for an output operation. The concepts explained there apply mostly to the non-regenerative output operations.

There are other more advanced things that can be tried but I would suggest starting with the suggestions above.

I hope this helps,

Alejandro Asenjo
Applications Engineer
National Instruments



This question had previously been posted by Chandra at this link.
Message 2 of 11

I realize this is an old post, but it is relevant to what I'm working on.  I have created a software lock-in amplifier using LabVIEW 8.2 and a PCIe-6259.  I am generating CW sinusoidal output with a frequency range of 0.01 to 2,000 Hz, sampling at 10 kHz.  The output drives a voltage-controlled current amplifier which feeds a large magnet coil.  The input reads an analog voltage from a magnetic field sensor.  I want to use the phase array from my AO loop to generate the sin and cos reference signals for demodulating the input, and here's where the delay issue comes into play.  I am doing double-buffered acquisition and excitation by setting the circular buffer size to 8,000 samples and doing reads/writes of half that size.  The AO loop writes its output phase array to a USR-style global variable every time through its loop.  The AI loop does the same with the data it reads and then uses a notifier to signal a third thread for processing.

As a test, I've wired the output (DAC0) to the input (AI0).  Initially, I tried an excitation frequency of 2.5 Hz (the half-buffer update rate).  At integer multiples of this, the phase is consistent, but by changing the excitation frequency in 0.01 Hz steps, I found a phase error of 687 deg/Hz!
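(For anyone unfamiliar with the demodulation step itself, it is just the textbook multiply-and-average against sin/cos references. The C-style sketch below is only to illustrate what I do with the AO phase array; it is not my actual LabVIEW code.)

#include <math.h>

// Demodulate one half-buffer of acquired samples against sin/cos references
// built from the same phase array used by the AO loop.
// 'phase' is in radians; 'n' is the half-buffer length (4,000 samples here).
void lockin_demod(const double *ai, const double *phase, int n,
                  double *x_out, double *y_out)
{
    double x = 0.0, y = 0.0;
    for (int i = 0; i < n; ++i) {
        x += ai[i] * cos(phase[i]);   // in-phase component
        y += ai[i] * sin(phase[i]);   // quadrature component
    }
    *x_out = 2.0 * x / n;             // scale so a unit-amplitude, in-phase
    *y_out = 2.0 * y / n;             // input gives x of about 1, y of about 0
}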

To remedy this, the USR-type global housing the phase array became a 2-D array with read and write pointers to different rows.  The phase aligns when the "written" output and the input are shifted by 7 half-buffers.  Would disabling regeneration shorten this delay to what I would expect, i.e., 3 half-buffers?  If so, where is this property accessed in DAQmx under LabVIEW 8.2?  Thanks.

Kevin Pratt

Tristan Technologies, Inc.

Message 3 of 11
Hi Kevin,

I'm having some trouble understanding your application and questions.  I think it would be helpful to see your code so that we can have a better idea of how you are configuring the analog input and output tasks.  We may be able to provide some suggestions regarding the DAQmx programming, but I can't make any guarantees about being able to troubleshoot the lock-in amplifier algorithms.

If this is possible, please highlight the portions of the code that you believe to be introducing the errors.  You may also provide screenshots of the front panel in situations where the operation of the application is in error.

To answer your question at the end of the post, I do not know yet if disabling regeneration would shorten the delay.  To disable regeneration, use the DAQmx Write Property Node in line with the analog output task and wire in a value of "Do Not Allow Regeneration".
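For completeness, the same setting exists in the DAQmx C API as the write regeneration mode. A minimal sketch follows; the device name, voltage range, rate, and buffer size are placeholders, not values taken from your application:

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle aoTask = 0;

    DAQmxCreateTask("", &aoTask);
    DAQmxCreateAOVoltageChan(aoTask, "Dev1/ao0", "", -10.0, 10.0,
                             DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(aoTask, "", 10000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 8000);

    // Equivalent of wiring "Do Not Allow Regeneration" into the DAQmx Write
    // property node: the driver will not reuse old data, so new half buffers
    // must keep arriving or the task stops with an under-run error.
    DAQmxSetWriteRegenMode(aoTask, DAQmx_Val_DoNotAllowRegen);

    // ... write the first buffer with DAQmxWriteAnalogF64, start the task,
    //     then keep writing new half buffers in a loop ...

    DAQmxClearTask(aoTask);
    return 0;
}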

It sounds like your application is already fully built and there are just a few minor things to overcome.  In any case, there is an example program on our Developer Zone for implementing a lock-in amplifier with DAQmx, and it can be found here:  Multi Channel Count Lock-In Amplifier with NI-4472 and DAQmx
This is not completely applicable to your situation, however, because it does not incorporate any voltage generation.

Regards,
Andrew W
National Instruments
Message 4 of 11
Hi Andrew,
 
As far as sending the code is concerned, it's about 5 MB worth of VIs, so it may be quite a handful to go through. Let me know if this hasn't dissuaded you from wanting to view it.
 
As far as the regeneration is concerned, I don't know how I missed that one. Maybe I just probed the task after the DAQmx Write command, but I could have sworn it wasn't there before.  I'll try it for sure and see if this changes anything.
 
One more question: this 7-half-buffer delay that I've measured, do you think it would be the same for a PCI-6052E, or would a different system need to be characterized separately?
 
Thanks,
Kevin
Message 5 of 11
Hello squidmixer,

If I am understanding the issue correctly, you are writing data to your continuous analog output and are seeing a large delay from when you write it to when it is actually generated.  From what I know about the buffers and the previous posts on this topic, I believe that the FIFO size has a big effect on the behavior.  This is due to the number of samples you must "flush out of the system" before your new data is output.  Non-regeneration would shorten (or even eliminate) this delay, since the data is not copied multiple times, and would essentially remove the FIFO size from the equation.

That being said, the analog output FIFOs on the two cards you mention are dramatically different.  The PCIe-6259 has an AO FIFO of 8,191 samples, which is shared between all channels used (as seen on page 3 of its spec sheet).  The PCI-6052E, on the other hand, only has 2,048 samples for its AO FIFO (page 6 of its spec sheet).  This could change the delay quite significantly.
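As a rough sanity check (assuming a single AO channel at your 10 kS/s update rate, and regeneration keeping the FIFO full), the PCIe-6259 FIFO alone holds 8,191 / 10,000, or roughly 0.8 s of data, while the PCI-6052E FIFO holds 2,048 / 10,000, or roughly 0.2 s, before counting whatever is queued in the host-side circular buffer.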

I am very interested in finding out whether turning off regeneration does what we expect it to. If so, is that amount of delay acceptable?  If this does not get rid of the issue, then we may be misunderstanding how your program is working.  What is the end goal of this application in terms of the AO and the AI?  How much delay is acceptable? Since we are doing this in software and not in hardware (i.e., an FPGA), we have to use buffers, so there has to be a delay, but there may be an easier way of going about this.

Since what we are working on is essentially just the DAQ aspect of your program, it would help if you could make a simpler VI with just the DAQ portion (not the lock-in amplifier, etc.) and post it, or even a screenshot of that section.
Neal M.
Applications Engineering       National Instruments        www.ni.com/support
Message 6 of 11

I found and implemented the "Do Not Allow Regeneration".  It reduced the delay from when the output is "written" to when it actually shows up at the output from 7 half buffers (2.4 seconds) to 6 (2 seconds).  Anyway, here's the code.  It's in 8.2.1.

Kevin

Message 7 of 11

Hello again squidmixer,

I took a look at your VI, and although the structure is good, it is a little difficult for me to follow since I am not familiar with your application.  I am working on writing an example that shows the behavior we are talking about so I can actually see the delay on my hardware, but I need to make sure it is correct, clean it up, and document it.  I will post it as soon as I have something worthy of posting.  If you could whittle your program down as much as possible while still showing the behavior, I would appreciate it.



Message Edited by Neal M on 12-06-2007 06:55 PM
Neal M.
Applications Engineering       National Instruments        www.ni.com/support
Message 8 of 11

Hi Neal,

I pared down the execution to eliminate the FFT, FRF, and noise reduction.  I still included the lock-in because it is instrumental in demonstrating the phase shift vs buffer size and frequency.  To test this, simply attach a BNC-2110 to your DAQ card and connect DAC0 to AI0.

Thanks,

Kevin Pratt

Message 9 of 11

Can someone link this thread to the LabVIEW forum? I realize I kind of hijacked it from the wrong message board.

Kevin

Message 10 of 11