I am working on a LabVIEW application to display and analyze physiological data. The data is imported as a waveform built from Y amplitude data, t0, and dt, which is also how our physiological recording device saves data – we use the BioRadio 110 from Cleveland Medical Inc.
I noticed a few discrepancies between the X-Y pairs reported by the Waveform\WDTOps.dll VIs and the graph data points. A cursor snapped to a point of interest on the graph reports a different amplitude value than the point at the same timestamp in the corresponding X-Y pairs returned by the “Waveform to XY Pairs” VI (from the analog waveform palette).
Note also that the “Get Y Value” VI shows the same discrepancies depending on which input is wired (i.e., index or relative time). On further investigation, the two timestamp values I came across correspond to the time derived from the same index by two different algorithms. The first is the typical floating-point programming error: deriving t = t0 + n*dt in a For Loop by adding dt at each iteration, so that a small rounding error accumulates with every step. The other minimizes the error by computing t = t0 + n*dt directly, only once, with n = index (no iteration). At a sampling rate of 640 Hz, after 100,000 data points (i.e., 156.25 s) the error is about 9 ms. My recordings are 30 minutes or more, and I need to track and correlate time-stamped annotations throughout with consistent accuracy for my experiments.
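To spell out the difference outside of LabVIEW, here is a small text-language sketch of the two algorithms (Python, purely as a stand-in for the two block-diagram patterns; the function names and the t0 value are mine, with t0 chosen to mimic an absolute timestamp of roughly the current date):

    # Two ways of deriving timestamps from t0, dt and a sample index.
    # Both use double precision, just as the relative/absolute times above do.

    def timestamps_accumulated(t0, dt, n_samples):
        """Iterative version: t is advanced by dt on every loop iteration,
        so the rounding error of each addition accumulates."""
        times = []
        t = t0
        for _ in range(n_samples):
            times.append(t)
            t += dt
        return times

    def timestamps_direct(t0, dt, n_samples):
        """Direct version: each timestamp is computed once as t0 + n*dt,
        keeping the error near one unit in the last place."""
        return [t0 + n * dt for n in range(n_samples)]

    if __name__ == "__main__":
        t0 = 3.28e9              # assumed absolute time, ~2008 in seconds since the 1904 epoch
        dt = 1.0 / 640.0         # 640 Hz sampling rate
        n = 100_000
        acc = timestamps_accumulated(t0, dt, n)
        direct = timestamps_direct(t0, dt, n)
        drift_ms = abs(acc[-1] - direct[-1]) * 1e3
        print(f"drift after {n} samples: {drift_ms:.2f} ms")

With an absolute t0 of that magnitude, the accumulated version drifts by roughly 9-10 ms over 100,000 samples, the same order as the error I measured.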
The “testTime” VI illustrates all this. I also added the data files saved by the BioRadio 110, which are used as raw data, and the VI that reads them (with a function converting the BioRadio Crystal binary files from “little endian” to LabVIEW “big endian” – I wish there were a LabVIEW function to handle this, but that is another subject). The two VIs and two data files are compressed into one WinZip file.
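In case it helps to see what that conversion function does in a text language, here is a minimal sketch (Python; the 16-bit signed sample format is only an assumption for illustration, not a description of the actual BioRadio Crystal layout, which also has a header and channel interleaving):

    import struct

    def read_little_endian_samples(path, sample_format="<h"):
        """Read a flat stream of little-endian samples from a binary file.
        The '<' prefix makes struct interpret the bytes as little-endian,
        which is the byte swap the reader VI performs before typecasting;
        '<h' assumes 16-bit signed samples purely for illustration."""
        size = struct.calcsize(sample_format)
        samples = []
        with open(path, "rb") as f:
            while True:
                chunk = f.read(size)
                if len(chunk) < size:
                    break  # stop at end of file (or a trailing partial sample)
                samples.append(struct.unpack(sample_format, chunk)[0])
        return samples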
I am puzzled by all this, as I expected waveforms and their related timestamps to be reported and set accurately and consistently in LabVIEW, so that they can serve all sorts of signal acquisition and processing applications.
Could you help me clarify which time (array) is derived cleanly and unambiguously, and which can be trusted among the source (i.e., the waveform itself), “Get Y Value”, “Waveform to XY Pairs”, and/or any similar (analog) waveform function?
Isn’t the timestamp supposed to be an accurate format, built from two 64-bit integers (a signed count of whole seconds since the 1904 epoch and an unsigned 64-bit fraction of a second)? How can I make the best use of this format?
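To make the precision point concrete, here is another small sketch (Python again, only as a stand-in; the helper names are mine) of that 64.64 fixed-point layout and of how much is lost when such a value is collapsed into a double around the present date:

    from fractions import Fraction

    TWO_64 = 2 ** 64

    def timestamp_exact(seconds_i64, fraction_u64):
        """Exact value of a 128-bit (64.64) timestamp: signed whole seconds
        since the 1904 epoch plus an unsigned 64-bit fraction of a second."""
        return Fraction(seconds_i64) + Fraction(fraction_u64, TWO_64)

    def timestamp_as_double(seconds_i64, fraction_u64):
        """Same value collapsed into one double-precision float: at ~3.3e9
        seconds the double's resolution is only about half a microsecond."""
        return float(seconds_i64) + fraction_u64 / TWO_64

    if __name__ == "__main__":
        secs = 3_280_000_000                   # roughly 2008, in seconds since 1904-01-01 (assumed)
        frac = int(0.123456789012 * TWO_64)    # an arbitrary sub-second offset
        exact = timestamp_exact(secs, frac)
        approx = timestamp_as_double(secs, frac)
        print("error from collapsing to double:", float(exact - Fraction(approx)), "s")

So the 128-bit timestamp itself is more than precise enough; the trouble starts whenever it is converted to a double for arithmetic such as t0 + n*dt.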
Although using a double-precision float for absolute time is asking for trouble, since it is a source of inaccuracy, doesn’t it seem that there are also some inconsistencies leading to inflated errors reported by some of the WDTOps.dll functions, as described above?
In the meantime I will try to use the index instead of the timestamp as much as possible when manipulating my data for analysis – which is not very convenient and will only get me so far, since I need to keep track of and correlate events reported as discrete absolute times.
Please let me know if you have any problem with the attachment and do not hesitate to contact me if you need further details. I look forward to your reply and guidance.
Donat-Pierre LUIGI, Ph.D.
Research Associate
Institute for Creative Technologies
University of Southern California
13274 Fiji Way
Marina Del Rey, CA 90292
desk: 310-574-1620
fax: 310-574-5725