07-19-2011 04:54 PM - edited 07-19-2011 04:54 PM
I would like to synchronize an S Series board with an M Series board, by sharing a trigger via RTSI. When the sample rates of the two boards are the same, the t0's for the two measurements are separated by only 1 microsecond, but when the rates are not the same, the t0's are separated by 100 microseconds. Is this delay inherent in LabVIEW, or can I reduce it somehow? I have attached a picture of my VI.
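For reference, the setup in the attached VI corresponds roughly to the following in the DAQmx text API (a Python nidaqmx sketch, not the actual code; "Dev1" for the S Series board and "Dev2" for the M Series board are placeholder names, and the boards share a RTSI cable registered in MAX):

import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as s_task, nidaqmx.Task() as m_task:
    s_task.ai_channels.add_ai_voltage_chan("Dev1/ai0")   # S Series (master)
    m_task.ai_channels.add_ai_voltage_chan("Dev2/ai0")   # M Series (slave)

    # Each board runs its own sample clock; only the start trigger is shared.
    s_task.timing.cfg_samp_clk_timing(1_000_000, sample_mode=AcquisitionType.FINITE,
                                      samps_per_chan=100_000)
    m_task.timing.cfg_samp_clk_timing(10_000, sample_mode=AcquisitionType.FINITE,
                                      samps_per_chan=1_000)

    # Slave waits for the master's start trigger, routed over RTSI.
    m_task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/ai/StartTrigger")

    m_task.start()   # armed, waiting for the trigger
    s_task.start()   # fires ai/StartTrigger, so both should begin together

    s_data = s_task.read(number_of_samples_per_channel=100_000)
    m_data = m_task.read(number_of_samples_per_channel=1_000)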
Thank you
07-20-2011 07:21 AM
I am not sure how the boards handle their clocks, but in general, with two different sample rates you should expect a random offset of up to one period of the slower clock. The master oscillators probably run continuously. The sample clocks may run continuously and be gated to the digitizer, or they may be started by the trigger. Either way, some difference in start offsets is reasonable.
One way of ensuring synchronization is to generate both sample clocks from the same source and share a trigger.
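In text-API terms, a minimal sketch of that shared-clock arrangement (Python nidaqmx; "Dev1"/"Dev2" are placeholder device names) would be:

import nidaqmx
from nidaqmx.constants import AcquisitionType

rate = 10_000  # both tasks must run at the shared clock's rate

with nidaqmx.Task() as master, nidaqmx.Task() as slave:
    master.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    slave.ai_channels.add_ai_voltage_chan("Dev2/ai0")

    master.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.CONTINUOUS)
    # The slave samples off the master's exported AI sample clock, so the
    # conversions on both boards line up edge for edge.
    slave.timing.cfg_samp_clk_timing(rate, source="/Dev1/ai/SampleClock",
                                     sample_mode=AcquisitionType.CONTINUOUS)
    slave.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/ai/StartTrigger")

    slave.start()   # armed: waits for the master's clock and trigger
    master.start()  # starting the master starts both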
Lynn
07-20-2011 11:26 AM
Lynn, thanks for your reply. A delay of 100 microseconds makes sense if we're expecting an offset of up to one period at a rate of 10 kS/s (1/10,000 s = 100 µs).
About synchronizing the sample clocks, I tried that previously and had some interesting results. First, the offset was greater, around 15 milliseconds. And, for some reason I haven't figured out yet, I was not able to sample the 6133 at a rate above 1.4 MS/s (it's capable of up to 2.5 MS/s); I got Error -200019: ADC conversion attempted before prior conversion was complete.
07-20-2011 06:11 PM
Hi estrandb,
Have you taken a look at any of the DAQmx synchronization examples? They can be found in the NI Example Finder: in LabVIEW, go to Help -> Find Examples.
10-11-2011 02:31 PM - edited 10-11-2011 02:34 PM
Use the example "Multi-Device Synch-Analog Input-Cont Acquisition.vi".
Run it with two PCI X Series cards linked with an RTSI cable.
Put a delay of 500 ms between the two "Start.vi" calls.
Run the VI.
The t0's of the two read VIs will be 500 ms apart.
Keep changing the delay value, and they will be separated by whatever time delay exists between the two starts.
Run it for only a single iteration and check it.
Make the delay too large and you'll get a buffer overrun on the slave.
I have to conclude that this example, or this method, does not actually work.
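For anyone who wants to reproduce the test outside LabVIEW, this is roughly the same experiment in the DAQmx text API (a Python nidaqmx sketch; "Dev1"/"Dev2" stand in for the two RTSI-linked cards):

import time
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as master, nidaqmx.Task() as slave:
    master.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    slave.ai_channels.add_ai_voltage_chan("Dev2/ai0")
    for t in (master, slave):
        t.timing.cfg_samp_clk_timing(10_000, sample_mode=AcquisitionType.CONTINUOUS)
    slave.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/ai/StartTrigger")

    slave.start()    # armed, supposedly waiting on the RTSI trigger
    time.sleep(0.5)  # the 500 ms delay between the two starts
    master.start()   # only now should either board actually acquire

    # There is no waveform t0 in the text API, but the tell is the same:
    # if the trigger is honored, the slave cannot have buffered anything
    # during the 0.5 s wait, and the two reads should cover the same instant.
    m_data = master.read(number_of_samples_per_channel=1000)
    s_data = slave.read(number_of_samples_per_channel=1000)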
10-13-2011 08:54 AM
Hi FTI,
First, are you using actual or simulated devices? If you are using simulated devices, triggers do not get simulated: each task starts as soon as it hits DAQmx Start Task, regardless of how you configure the start trigger. This would explain why you are getting the buffer overflow error; the slave task starts and acquires simulated data as soon as it reaches the Start Task VI.
If you are using physical hardware, one thing to check is that you have the Synchronization Type control set to X Series (PCIe); some of the other types will run without throwing an error but won't necessarily work correctly.
A good resource to take a look at is this knowledge base article on where a timestamp actually comes from.
Another thing to keep in mind, since you are using X Series cards: if you are using the same sampling rate on both devices, you can use a single task and include channels from both devices, as sketched below.
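A minimal sketch of that single-task approach in the text API (Python nidaqmx; placeholder device names, RTSI cable registered in MAX):

import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    # One task, channels from both X Series cards; DAQmx routes the shared
    # sample clock and start trigger between the devices automatically.
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.ai_channels.add_ai_voltage_chan("Dev2/ai0")
    task.timing.cfg_samp_clk_timing(10_000, sample_mode=AcquisitionType.CONTINUOUS)
    task.start()
    data = task.read(number_of_samples_per_channel=1000)  # one t0 for everything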
I hope this helps,
Regards,
Luke B.
10-13-2011 10:05 AM - edited 10-13-2011 10:06 AM
Both simulated and actual devices exhibit the same behaviour: the t0's of the two read tasks are separated by the same amount of time as the delay between the starts.
Make the delay too long and the slave task's buffer overruns. I do not know how that is possible unless the slave task is starting well ahead of the master task; that's the only explanation I could come up with, but there may be another.
A single task is not the solution, since you can't always guarantee that you'll be working with X Series cards, or that both cards will be from the same family anyway.
I have since submitted a support ticket on this, and the AE is seeing the same thing, so don't worry about supplying an answer unless you want to try it yourself.