LabVIEW


Acquisition using waveforms gives incorrect time info when written to tdms

Solved!

Hi all, In most of my DAQ applications I am looking at fairly slow phenomena (temperatures and static pressures) over 30 to 60 channels, as well as two or three serial instruments, for a very long time (e.g., days to weeks). Typically I also have to calculate some values based on the acquired data. I had been using 2D arrays to accomplish this, but discovered that the metadata inherent in the waveform datatype is extremely useful for organizing my data sets. I've attached a screenshot of my typical acquisition code using waveforms.

 

The problem I'm noticing is that the timestamp at the end of the data set is never correct. For example, if I start a test at 8am on Monday and end it at 4pm Tuesday, when I pull the data out of the tdms file and plot it on a waveform chart the final timestamp displayed could be 7pm Wednesday or 3pm Monday or some other time nowhere close to the actual time the test was stopped. I suspect this has something to do with the "dt" parameter of the waveforms, but so far I haven't been able to figure it out. What am I doing wrong? Thanks.

CLAD
Message 1 of 6

If you want accurate timing you need to do the acquisition in parallel, not in serial. With a waveform you set t0 (start time) and dt (time between samples). Your serial data capture is ADDING time to every acquisition and throwing off your timestamps. Example: if dt is 1 second but your acquisition loop takes 1.1 seconds because of the serial port read, then all your timestamps will be off.
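To put numbers on it (a rough text sketch in Python, since the actual code is a LabVIEW block diagram; every value below is made up for illustration), the end timestamp computed from t0 and dt drifts away from the real end time by the extra loop time multiplied by the number of samples:

```python
from datetime import datetime, timedelta

# Illustrative values only: the waveform claims dt = 1 s, but the serial
# read stretches each loop iteration to 1.1 s of real time.
t0 = datetime(2024, 1, 1, 8, 0, 0)    # hypothetical test start
dt_claimed = 1.0                      # seconds, stored in the waveform
loop_period = 1.1                     # seconds, actual iteration time
n_samples = 32 * 3600                 # ~32 hours of nominally 1 Hz data

# End time computed from the waveform metadata (what the chart shows):
apparent_end = t0 + timedelta(seconds=(n_samples - 1) * dt_claimed)
# End time of the last sample as it was really taken:
actual_end = t0 + timedelta(seconds=(n_samples - 1) * loop_period)

print(apparent_end)                   # from t0 + (N-1)*dt
print(actual_end)                     # when the test really ended
print(actual_end - apparent_end)      # drift of roughly 3.2 hours
```

Over a multi-day test that drift easily reaches hours, which is consistent with the end timestamps described above being nowhere near the actual stop time.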

 

Matt

 

Matthew Fitzsimons

Certified LabVIEW Architect
LabVIEW 6.1 ... 2013, LVOOP, GOOP, TestStand, DAQ, and Vision
Message 2 of 6

So would arranging the VI as below help solve the problem? 

 

Also, since I'm only acquiring one sample per loop iteration, where is the dt value coming from?  Is there a default value or does the "write to tdms" calculate it based on the "t0" of successive writes?

CLAD
Message 3 of 6

dt gets automatically populated from the sample rate. Drop a probe on the waveform coming out of the DAQ read and you will see that it contains t0, dt, and Y (your data).

 

You are getting closer and this will work better, BUT it is still not correct because the file write is still taking time inside the acquisition loop. If that time is small enough you may be able to get away with it. The correct way would be to queue the data and send it to another processing loop that writes the file. This also assumes that the DAQ read always takes more time than the serial read. Look at the producer/consumer template that ships with LabVIEW.

 

The producer is the DAQ & serial reads.  The consumer would get the data and write it to file. That would give you the most accurate timing.
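The producer/consumer template itself is a LabVIEW block diagram, so as a rough text-language analogue (Python, with a stand-in function in place of the real DAQmx and serial reads, and a plain file write standing in for the TDMS write) the shape of the pattern is:

```python
import queue
import threading
import time

data_q = queue.Queue()        # plays the role of the LabVIEW queue
STOP = object()               # sentinel telling the consumer to finish

def producer(n_iterations):
    """Acquire at a steady rate and enqueue (the DAQ + serial reads)."""
    for i in range(n_iterations):
        sample = {"t": time.time(), "value": i}   # stand-in for a real read
        data_q.put(sample)
        time.sleep(1.0)       # pacing; hardware timing would do this in DAQmx
    data_q.put(STOP)

def consumer(path):
    """Dequeue and write to disk without slowing the producer down."""
    with open(path, "w") as f:                    # stand-in for the TDMS write
        while True:
            item = data_q.get()
            if item is STOP:
                break
            f.write(f"{item['t']},{item['value']}\n")

acq = threading.Thread(target=producer, args=(10,))
log = threading.Thread(target=consumer, args=("log.csv",))
acq.start(); log.start()
acq.join(); log.join()
```

The point of the split is that the file write (and anything else slow) happens in the consumer loop, so the producer's loop period, and therefore dt, stays steady.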

 

Matthew Fitzsimons

Certified LabVIEW Architect
LabVIEW 6.1 ... 2013, LVOOP, GOOP, TestStand, DAQ, and Vision
Message 4 of 6

So then it defaults to dt = 1 when the AI read is in "N chan 1Sample" mode? When I change my ms wait value to 4000, the AI read does execute once every four seconds, but the dt value on the waveforms coming out of the AI Read VI is still 1. This explains the inaccurate timestamps in the TDMS file. Do I need to independently calculate the actual dt and change it on all the waveforms in the array before writing them to file, or is there a way to fix this using DAQmx VIs?
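To make the first option concrete, here is roughly what I mean, sketched in Python with a plain dict standing in for the LabVIEW waveform (the 4 s period matches the wait above; everything else is illustration only):

```python
import time

# Hypothetical stand-in for one waveform that came out of the
# one-sample read with the wrong default dt = 1 s.
waveform = {"t0": time.time(), "dt": 1.0, "Y": []}

last = time.time()
for i in range(5):
    time.sleep(4.0)                   # the 4000 ms wait described above
    now = time.time()
    waveform["Y"].append(float(i))    # stand-in for the acquired sample
    waveform["dt"] = now - last       # override dt with the measured loop period
    last = now

print(waveform["dt"])                 # ~4.0 s instead of the default 1.0
# The corrected waveform would then be written to the TDMS file.
```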

 

Thanks again.

CLAD
Message 5 of 6
Solution
Accepted by topic author testingHotAir

It seems that the DAQmx Read VI does in fact default to dt = 1 when the "N chan 1 Samp wfm" instance is selected. Switching to continuous sampling and parallel loops with a queue solved the problem.
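For reference, the fix is a DAQmx configuration change rather than new code, but the equivalent idea in the Python nidaqmx package looks roughly like this; the device, channel, and rate are placeholders, not my actual setup:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    # Placeholder channel; the real setup reads 30-60 channels.
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")

    # Hardware-timed continuous sampling: dt is now set by the sample
    # clock (here 1 Hz) instead of defaulting to 1 with one-sample reads.
    task.timing.cfg_samp_clk_timing(
        rate=1.0,
        sample_mode=AcquisitionType.CONTINUOUS,
    )

    task.start()
    # Each read returns the samples accumulated since the previous read;
    # a parallel consumer loop then logs them to the TDMS file.
    data = task.read(number_of_samples_per_channel=10)
```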

 

Thanks for all your help!

CLAD
Message 6 of 6