04-11-2018 01:13 PM
Isn't the card you have in the computer supposed to deal with most of those criticisms? I thought you were just worried about the accuracy and latency of the LabVIEW and Windows API calls themselves.
04-11-2018 01:50 PM
You're mixing and matching various time-related tidbits together and then trying to claim that precision is the same as accuracy. It is not!
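To make the distinction concrete, here is a small sketch (all names and numbers are made up for illustration): a clock can report nanosecond resolution (high precision) and still be milliseconds away from the true time (low accuracy).

```python
# Hypothetical illustration of precision vs. accuracy.
# The "true" instant and the 3 ms offset are invented for the example.

def true_time_ns() -> int:
    """Stand-in for the real time, fixed for the example (nanoseconds)."""
    return 1_000_000_000_000

def precise_but_inaccurate_clock_ns() -> int:
    """Reports full nanosecond resolution, but with a 3 ms systematic offset."""
    return true_time_ns() + 3_000_000

reading = precise_but_inaccurate_clock_ns()
resolution_ns = 1                      # precision: the clock ticks in 1 ns steps
error_ns = reading - true_time_ns()    # accuracy: it is still 3 ms off

print(f"resolution: {resolution_ns} ns, error: {error_ns} ns")
```

The nanosecond digits tell you nothing about how close the reading is to the truth; that's the whole point of the distinction.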
The Windows time service may now be able to keep the internal clock accurate to microseconds, but that does not mean a user application can see that accuracy (and it can't). User applications run in ring 3 of the x86 CPU, the least privileged level. The time service can be, and probably was, moved for a substantial part into the kernel in ring 0, which has much more direct access to the kernel structures that maintain the timer value. Your user application, however, has to call the corresponding API, which then has to call into the kernel, causing a costly context switch to ring 0, retrieve the value, possibly do some calculations too, then switch back to ring 3 before it returns to the caller. This data path alone most likely takes well over a microsecond.

But that is not all. The thread that called the API has to be scheduled by the OS and has to share the CPU with all the other threads on the system. And when an interrupt occurs, for instance because a new data packet has arrived on the network interface, the corresponding device driver is invoked; it runs in the kernel at a higher priority than any of your user application threads. All of that together means your thread calling the OS API to retrieve the very accurate time can be delayed by anywhere from many microseconds to several milliseconds. The time Windows maintains internally may be accurate to 1 microsecond, but the accuracy you see in a user application is much worse, on the order of milliseconds.
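You can observe this yourself from user space: sample a high-resolution clock back to back and look at the spread of the deltas. The minimum delta bounds the per-call cost of the whole API/kernel round trip, while the outliers show where the scheduler or an interrupt got in the way. A minimal sketch in Python (whose `time.perf_counter_ns` is backed by `QueryPerformanceCounter` on Windows); exact numbers will vary by machine and OS load:

```python
import statistics
import time

# Sample the high-resolution clock back to back. Each delta includes the
# full user-mode -> kernel -> user-mode round trip described above.
N = 100_000
samples = []
prev = time.perf_counter_ns()
for _ in range(N):
    now = time.perf_counter_ns()
    samples.append(now - prev)
    prev = now

print(f"min:    {min(samples)} ns")                    # best-case call cost
print(f"median: {statistics.median(samples)} ns")      # typical call cost
print(f"max:    {max(samples)} ns")                    # preemption/interrupt outliers
```

On a typical desktop the median sits in the tens to hundreds of nanoseconds, but the maximum can jump by several orders of magnitude whenever the thread is preempted, which is exactly the scheduling jitter the post describes.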