04-12-2012 12:41 PM
Dear LabVIEW forum,
I am the end user of a pre-compiled application that uses LabVIEW coupled with a USB-6211 DAQ card. These cards are fed a GPS signal (10 MHz clock and 1 PPS pulse) and an analog input. Timing is very critical, and I've noticed that different devices are offset from one another by milliseconds despite sharing a common input signal.
Because I don't have access to the software, I was wondering whether any delay information is embedded in the DAT binary format. I have a good idea of what's contained in the header, except for one thing: a 32-bit integer giving the number of floats, followed by four 64-bit floating-point numbers. A hexadecimal extract follows, with the decoded floating-point value after the left arrow (<-).
% Number of floats
0000 0004
% Four unknown 64-bit floating-point numbers
3fbf 08f8 796e 8657 <- 0.1212e0
3f35 8a6c cc47 cc40 <- 3.2869e-4
3d24 4e98 aef0 59c0 <- 3.6073e-14
3c65 7ad6 26ce 98c0 <- 9.3154e-18
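In case it helps anyone reproduce the decode, here is a minimal Python sketch that parses the extract above. It assumes the count is a big-endian unsigned 32-bit integer and the values are big-endian IEEE 754 doubles; that byte order is an assumption on my part, but it is the one that reproduces the quoted values.

```python
# Parse the header extract: a big-endian uint32 count, followed by
# that many big-endian IEEE 754 doubles. The big-endian layout is
# assumed, not documented.
import struct

header_hex = (
    "00000004"            # number of floats
    "3fbf08f8796e8657"
    "3f358a6ccc47cc40"
    "3d244e98aef059c0"
    "3c657ad626ce98c0"
)
raw = bytes.fromhex(header_hex)

# First 4 bytes: the float count.
(count,) = struct.unpack_from(">I", raw, 0)

# Remaining bytes: `count` doubles starting at offset 4.
values = struct.unpack_from(f">{count}d", raw, 4)

for v in values:
    print(f"{v:.4e}")
```

If the values come out wildly wrong on a real file, trying "<" (little-endian) instead of ">" would be the first thing to check.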
My two questions:
1. I'd like to systematically correct for the delay. Any ideas about its origin?
2. What information is contained in the floats above?
Please let me know if you need additional information, and I'll provide it.
Thanks,
Sean
04-13-2012 03:35 PM
Sean,
DAT is not a standardized format; a .dat file can hold data in any layout its developer chose.
Sorry.
You will need to obtain that information from the developer of the code that creates the file, or else undertake a possibly very difficult reverse-engineering job on the files.
Lynn