08-12-2009 11:13 PM
I find that the 'Tick Count' VI fails every 41 or 42 ms: it properly reports the changing time in milliseconds, then fails by repeating the same 'time' (number) for a second millisecond ("the clock loses time"). The spacing between errors is 42 ms (twice), then 41 ms (once), then back to 42 ms (twice), and so on. This is also visible on an oscilloscope watching a USB-8451 Chip Select (SPI) line driven by my own 'Tick Count'-based timer, and it also appears to occur with the LabVIEW "Timed Loop". These are whole 1 ms errors, not the smaller random timing errors caused by grabbing the USB bus.
I wrote a VI (attached, LV version 8.0) that is independent of the USB bus, to expose this behavior; it reveals the same problem on two computers, one running LV 8.0 and one running LV 5.01. (Also attached is a JPEG screenshot of the VI results.) I've stumbled into something: does anyone know what? Or how to fix it?
Finally, fixing the "Tick Count" VI would significantly help the subsequent signal processing of my SPI-bus ADIS16130 (Analog Devices) gyroscope data, which is presently software-timed with an NI USB-8451.
Thank you,
Guyanalog
08-12-2009 11:29 PM
First, I'll have to admit I don't understand what you are trying to do.
However, it looks to me like a regular pattern of 41 to 42 milliseconds. Could you be dealing with calculations based on a fraction of a millisecond, with the in-between values rounded up or down?
For example, at zero milliseconds the timer would be zero. At 2/3 of a millisecond the timer would be 0.66667 (rounded up to 1). At the next 2/3 of a millisecond it would be 1.33333 (rounded down to 1). At the next 2/3 of a millisecond it would be exactly 2. So you would wind up with 0, 1, 1, 2, and two of those readings would look like the same millisecond.
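Roughly what I mean, as a small C sketch (just an illustration of the rounding idea, not your VI): sample an ideal clock every 2/3 ms and round the reading to whole milliseconds.

/* Illustration only: sample an ideal clock every 2/3 ms and round the
 * reading to whole milliseconds, as described above. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double period_ms = 2.0 / 3.0;    /* assumed sample period */
    for (int i = 0; i <= 6; i++) {
        double t = i * period_ms;          /* true elapsed time in ms */
        long tick = lround(t);             /* what a rounded ms counter reports */
        printf("sample %d: t = %.4f ms, reported tick = %ld\n", i, t, tick);
    }
    return 0;
}

This prints ticks 0, 1, 1, 2, 3, 3, 4: the doubled values are exactly the "same millisecond" effect.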
08-13-2009 02:28 AM
Dear Raven's Fan,
My Tick Count VI test program runs a dummy calculation to use up some CPU time (in the JPEG it's about 16 microseconds). It then checks whether the Tick Count VI is reporting a different time than on the last check; if not, it continues with more dummy calculations until the Tick Count VI changes. When the Tick Count reports a time change, the loop resets, the number of dummy calculations needed to fill that 1 millisecond (as measured by the Tick Count VI) is saved for the graph, and the process repeats for the next millisecond.
In essence I made my own crude clock out of dummy calculations and watch how many of them are needed to fill up each millisecond, as dictated by when the Tick Count reports a change (of 1 millisecond).
The JPEG shows that mostly 65 'dummy calculations' fit into one millisecond, but periodically 130 dummy calculations are needed to fill the 1 millisecond period. I claim that the Tick Count gets stuck every 42/41 milliseconds and doesn't change (causing my dummy calculations to run twice as long before a change is reported by the Tick Count VI).
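For anyone without LabVIEW handy, a rough text-language equivalent of what the VI does would look something like this C sketch (not the actual VI, just the idea; GetTickCount() plays the role of the Tick Count VI, and the dummy workload size is arbitrary):

#include <windows.h>
#include <stdio.h>

static volatile double sink;               /* keeps the dummy work from being optimized away */

static void dummy_work(void)               /* stand-in for the dummy calculation */
{
    double x = 0.0;
    for (int i = 0; i < 1000; i++)
        x += (double)i * 1.000001;
    sink = x;
}

int main(void)
{
    DWORD last = GetTickCount();
    for (int ms = 0; ms < 200; ms++) {     /* watch 200 reported milliseconds */
        long passes = 0;
        DWORD now;
        do {                               /* keep working until the counter changes */
            dummy_work();
            passes++;
            now = GetTickCount();
        } while (now == last);
        printf("tick %lu -> %lu : %ld passes\n",
               (unsigned long)last, (unsigned long)now, passes);
        last = now;
    }
    return 0;
}

If the tick counter skips a millisecond, the pass count for that window roughly doubles, which is what the graph shows.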
I know the argument can be made that the computer, the Windows OS, etc. is letting my dummy-calculation program run faster now and then. However, I see the whole-millisecond delays on an oscilloscope setup (with the USB-8451), with my own Tick Count-based timer or with the LabVIEW "Timed Loop".
By the way, I also see this kind of problem using the "Wait (ms)" VI unless it's set to 1 ms. Using the Wait VI set to 3 ms has the described problem, but concatenating 3 of the Wait VIs, with each set to 1ms, does not have the problem.
Thanks,
Guyanalog
08-13-2009 02:59 AM
08-13-2009 02:45 PM
Good Afternoon Guyanalog,
Coq Rouge brings up a very important point. I attached an image which is a compilation of screenshots of the "Time between tick timer problems (ms)" graph from 4 (nearly consecutive) runs of your test VI. You should be able to see very similar results if you run this code a few times. If not, try opening some other programs or starting an anti-virus scan.
Windows is not a Real Time OS, so you should expect to see jitter in your code's execution. If this is not acceptable behavior for your application, you may want to look into using a Real Time (http://www.ni.com/realtime/) system.
08-13-2009 03:41 PM
The way you do your timing could also affect the results. You have no assurance that the tick count is always read before or after the time-consuming for loop: there is no data dependency. Here is one way to modify it. The cases not shown are empty or have wires passing through.
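In text form, the ordering the modification enforces is simply: read the tick count, run the time-consuming work, then read the tick count again; in LabVIEW only the wiring guarantees that order. A throwaway C sketch of that ordering (illustrative only, not the VI):

#include <windows.h>
#include <stdio.h>

static volatile double sink;

static void rogers_clock(long passes)       /* stand-in for the "Roger's Clock" workload */
{
    double x = 0.0;
    for (long i = 0; i < passes; i++)
        x += (double)i;
    sink = x;
}

int main(void)
{
    DWORD before = GetTickCount();          /* read strictly before the workload */
    rogers_clock(10000);
    DWORD after  = GetTickCount();          /* read strictly after the workload */
    printf("elapsed: %lu ms\n", (unsigned long)(after - before));
    return 0;
}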
I had to increase the "Roger's Clock" value to >10000 before I saw any variation in the Time between problems graph. This was on a Mac with LV2009. Improvements in LV or OS differences?
Lynn
08-15-2009 11:48 PM
Hi Lynn, Charlie, Coq, and Raven's Fan,
Thank you all.
Yes, changing the Tick Count data dependency improves the accuracy of the test VI.
I recommend adjusting the "Roger's Clock" value so that the top graph gives numbers between 30 and 1000; this is the number of passes of the whole 'Roger's Clock' that fit into 1 ms. Pushing the clock adjustment number too high makes one pass use more than 1 ms, so the top graph shows numbers around 1 and does strange things.
This odd problem of the 'Tick Count' losing time every 41/42 ms also shows up with (and disrupts) the LabVIEW "Timed Loop" and the 'Wait (ms)' VI.
I doubt that this problem of "losing time" is due to a distracted operating system: I'm guessing it has to do with the way LabVIEW's internals access the computer clock. The problem is not wobbly timing, but whole time cycles being lost. I see wobbly timing on the oscilloscope, and I also see whole one-millisecond timing errors occurring like clockwork: I set the scan speed of the oscilloscope carefully and watch the location of the regular 1 ms hole march smoothly across the screen.
Guyanalog
08-16-2009 01:17 AM
08-17-2009 01:49 AM
The problem here is also that the tick count is not a real clock in any way. It is a Windows timer interrupt that gets triggered in hardware but serviced in software. The hardware trigger is derived from an onboard quartz oscillator that is more or less stable, but certainly not to 100 ppm or less. After all, high-precision quartz oscillators cost a dollar or two, and with today's PCs, where the whole system costs only a few hundred dollars, that is a very substantial increase in production costs.
So this timer tick has somewhat accurate timing, but it is not in any way synchronous to a real clock. Then there is real-time clock hardware in the computer that uses a quartz too, of course also not high precision. This is used to maintain the actual time in the PC. If connected to the internet, this real-time clock is synchronized about once a day to an internet time server. So you really end up with three time bases in a computer that run more or less asynchronously.
Last but not least, let's assume your CPU load from one iteration takes about 24.2 us. Your millisecond time counter is limited to a resolution of 1 ms. So it will count up to 41 twice and to 42 once, as it cannot report 41.3 iterations of the loop. But reality is not so nice. The fact that it seems to go very regularly with 2*41 and 1*42 iterations would make it seem that Windows is very constant in execution, but it certainly is not. In the old days we had a rule of thumb that if you needed reproducibility better than 100 ms you were never going to get it with Windows at all. Nowadays it is probably more in the range of 10 ms. A guarantee is never possible under Windows, since it is not a real-time system, and therefore your process can be locked up for seconds in extreme situations. That is why anti-virus scanners are so "loved": some of them tend to monopolize the computer regularly for very long periods of time.
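Just to put numbers on that (purely illustrative; I am assuming a perfectly constant 24.2 us per iteration, which real Windows execution never gives you), counting the whole iterations that complete inside each 1 ms window gives mostly 41 with a 42 roughly every third window, about 41.3 on average:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double iter_us = 24.2;            /* assumed per-iteration cost from above */
    long prev = 0;
    for (int ms = 1; ms <= 12; ms++) {
        /* whole iterations completed by the time the 1 ms counter reaches 'ms' */
        long done = (long)floor(ms * 1000.0 / iter_us);
        printf("window %2d: %ld iterations\n", ms, done - prev);
        prev = done;
    }
    return 0;
}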
Rolf Kalbermatter