03-07-2012 12:29 PM
Hi all,
I have a data acquisition application where I am trying to compute velocity from my position values, so the actual data acquisition time has a crucial influence on the accuracy of my application. To do that, I need to check the accuracy of my time stamps against the real CPU time. So I am trying to read the CPU time and write it to the same .txt file as my position and velocity data.
So I was wondering how I can access the CPU time stamps in LabVIEW. Does "Get Date/Time in Seconds.vi" get its values from the CPU, or is it a value calculated by LabVIEW?
According to the help files, it is calculated by LabVIEW, which I think makes it dependent on LabVIEW and hence not something I should rely on.
But I also found the following example on the NI website that says it is reading the CPU time:
So is it the CPU time or a LabVIEW-calculated time? Can someone help me with this?
I just need to access the CPU time stamps in my block diagram.
Thanks.
Amir
03-07-2012 12:33 PM - edited 03-07-2012 12:37 PM
Get Date/Time in Seconds.vi calls the system clock (simply change your system time and test it). It does convert the time from the OS representation to a LabVIEW (OS-independent) format; however, it is the OS's time.
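If you want to convince yourself that it really is the OS clock, here is a minimal C sketch that reads the same Windows clock directly through the Win32 API (just an illustration for comparison, not what the VI literally calls internally):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Windows keeps the wall-clock time; LabVIEW only converts it
       into its own timestamp format. Read it straight from the OS. */
    SYSTEMTIME st;
    GetLocalTime(&st);
    printf("%04d-%02d-%02d %02d:%02d:%02d.%03d\n",
           st.wYear, st.wMonth, st.wDay,
           st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);
    return 0;
}

Compare that output with what Get Date/Time in Seconds.vi returns at the same moment and you will see they track each other.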
03-09-2012 02:17 AM - edited 03-09-2012 02:22 AM
Jeff already answered your question, but your use of "CPU time" is rather ambiguous. I assume you mean the time of the real-time clock, but you should be aware that it only has a relatively low resolution of milliseconds, and the real resolution might in fact be even lower, since the RTC hardware does not necessarily go that low and Windows might be deriving the higher resolution from its timer tick. The timer tick is another sort of clock: it is simply a software counter that gets incremented continuously in the background. Its starting point is the initialization of the operating system when the computer boots, so its absolute value is meaningless, and it simply rolls over when the maximum value of 2^32 ms is reached. Its nominal resolution is 1 ms, but on today's Windows the update rate is usually around 10 ms. In earlier days (talking pre-Win32 here) that used to be 55 ms, but you could reduce it to 1 ms with an ini-file hack.
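To see that timer tick for yourself, a small C sketch against the Win32 call looks like this (from LabVIEW the Tick Count (ms) function gives you essentially the same counter):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Millisecond tick counter, started at boot; the 32-bit version
       rolls over after 2^32 ms (about 49.7 days). Nominal resolution
       is 1 ms, but it is typically only updated every timer tick. */
    DWORD t1 = GetTickCount();
    Sleep(1);                      /* sleep roughly 1 ms */
    DWORD t2 = GetTickCount();
    printf("elapsed ticks: %lu ms\n", (unsigned long)(t2 - t1));
    return 0;
}

Run it a few times and you will often see 0 or a full tick interval rather than 1, which shows the difference between the 1 ms resolution and the actual update rate.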
Last but not least, there are several counters directly in the CPU, usually intended for performance measurements. Their resolution goes below microseconds, down to nanoseconds, but you need to call into the Windows API to get at their values (and accept that the delay of the Windows API call adds a relatively large uncertainty to those values).
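In C those counters are reached through QueryPerformanceCounter/QueryPerformanceFrequency (from LabVIEW you would wrap the same calls with a Call Library Function Node). A minimal sketch:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* High-resolution counter; the frequency tells how many counts
       per second, so the per-count resolution is well below 1 us. */
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    Sleep(10);                                   /* something to measure */
    QueryPerformanceCounter(&stop);
    double elapsed = (double)(stop.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    printf("resolution: %.1f ns per count\n", 1e9 / (double)freq.QuadPart);
    printf("elapsed:    %.6f s\n", elapsed);
    return 0;
}

Just keep in mind that each of those API calls itself takes a (variable) amount of time, which is the uncertainty I mentioned above.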