LabVIEW

Can LabVIEW threads sleep in increments less than a millisecond?

I am aware of two LabVIEW sleep functions:

1) All Functions | Time & Dialog | "Wait (ms)"
2) All Functions | Time & Dialog | "Wait Until Next ms Multiple"

In this day and age, when 3GHz processors sell for less than $200, it seems to me that a millisecond is an eternity. Is there any way to tell your LabVIEW threads to sleep for something less than a millisecond?

In Java, the standard Thread.sleep() method takes its argument in milliseconds [sorry, the bulletin board software won't let me link directly]:

http://java.sun.com/j2se/1.4.2/docs/api/java/lang/Thread.html#sleep(long)

but there is a second version of the method that allows for the possibility of nanoseconds:

http://java.sun.com/j2se/1.4.2/docs/api/java/lang/Thread.html#sleep(long, int)

So there does seem to be some consensus that millisecond sleep times are getting a little long in the tooth...
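(For what it's worth, here is a small Python sketch, not from the original thread, of how one might measure what a sub-millisecond sleep request actually costs on a desktop OS. The function name `measure_sleep` is my own invention for illustration.)

```python
import time

def measure_sleep(requested_s, trials=50):
    """Measure the actual elapsed time of time.sleep() requests."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        time.sleep(requested_s)
        samples.append(time.perf_counter() - start)
    return min(samples), sum(samples) / len(samples)

# Ask for 100 microseconds; on a typical desktop OS the actual wait is
# usually much longer, governed by the scheduler's tick, not the CPU speed.
shortest, average = measure_sleep(100e-6)
print(f"requested 100 us, shortest actual {shortest*1e6:.0f} us, "
      f"average {average*1e6:.0f} us")
```

On most desktop systems the measured average comes back well above the requested 100 microseconds, which previews the answers given below.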
Message 1 of 20

The problem is that even milliseconds are pretty meaningless on an OS like Windows. Such precision cannot be guaranteed in software.

What exactly are you trying to achieve? Typically, such timings only make sense in hardware control and in this case you can use the board timers. Have a look at e.g. the Counter/Timer boards.

Of course, LabVIEW RT has a 1MHz resolution in software.
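(As an aside, not from the thread: you can ask Python what timing resolution the OS actually advertises for its clocks. The numbers are whatever your platform reports, not guarantees.)

```python
import time

# Report the resolution the OS advertises for the clocks Python exposes.
resolutions = {}
for name in ("time", "monotonic", "perf_counter"):
    resolutions[name] = time.get_clock_info(name).resolution
    print(f"{name:12s} advertised resolution: {resolutions[name]} s")
```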

Message 2 of 20
What exactly are you trying to achieve? Typically, such timings only make sense in hardware control and in this case you can use the board timers. Have a look at e.g. the Counter/Timer boards.

Of course, LabVIEW RT has a 1MHz resolution in software.

Look, the idea of millisecond sleeps goes back a good thirty years or more, to the era when state-of-the-art computers ran at about 1MHz. Nowadays, computers are several THOUSAND times faster than that, so it seems to me there shouldn't be any reason why sleep times couldn't be about (1/1000)th of what they were back in the day.

At the other end of the spectrum, LabVIEW has a 128-bit Timestamp data type with at least 64 bits' worth of precision for the fractional part, so someone at NI realized early on that computers were destined to become faster over the years.

Message 3 of 20
I'm not all that familiar with Java but I'm wondering if the millisecond and nanosecond timers only apply to Sun platforms. Years ago, I did a conversion from a C program on a Sun to a C program on Windows/Intel. Sun/Solaris supported a millisecond sleep, but the Windows/Intel platform simply had no hooks into a timer with that kind of resolution.
Message 4 of 20
Hi Tarheel!

Maybe you should get some idea of the kind of timing accuracy you can reach when using a loop.
Use the attached VI, which repeatedly runs a For loop (10 iterations) reading the time, then calculates the average and standard deviation of the time difference between loop iterations.
On my PC (P4, 2.6 GHz, W2K), I get a standard deviation of about 8 ms, which appears to be independent of the sleep duration I asked for.
Same thing with a timed loop.
Under MacOS X (PowerBook, 1.5GHz), the SD falls to 0.4 ms.
I tried to disable most of the background processes running on my PC, but I could not get a better resolution.
It seems the issue is not in LV but in the way the OS manages its internal reference clock.
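(The VI itself is not reproduced here, but a rough text-based equivalent of the experiment it describes, sleeping each iteration and computing the mean and standard deviation of the inter-iteration intervals, might look like this in Python. The helper name `loop_jitter` is invented for illustration.)

```python
import statistics
import time

def loop_jitter(sleep_s, iterations=10):
    """Sleep each loop iteration, timestamp it, and return the mean and
    standard deviation of the time between consecutive iterations."""
    stamps = []
    for _ in range(iterations):
        time.sleep(sleep_s)
        stamps.append(time.perf_counter())
    deltas = [b - a for a, b in zip(stamps, stamps[1:])]
    return statistics.mean(deltas), statistics.stdev(deltas)

mean_dt, sd_dt = loop_jitter(0.010)  # ask for 10 ms per iteration
print(f"mean {mean_dt*1000:.2f} ms, standard deviation {sd_dt*1000:.2f} ms")
```

Running this on different machines and operating systems gives jitter figures in the same spirit as the 8 ms / 0.4 ms numbers quoted above.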

Since you are a Java aficionado, maybe you could produce something equivalent?
A proof that nanosecond resolution is available on a PC could be of great help to NI. Why bother with costly timers on DAQ cards?

By the way, it took me about one minute to create the attached VI. I would like to have an idea of the time required to do the same thing in Java.

Tempus fugit...

Chilly Charly (aka CC)
Message 5 of 20
Look, you guys are all thinking in terms of polling in an RTOS, which is fine, and which might very well need a high degree of accuracy.

But all I want is some way to put the thread to sleep and get it off the CPU so that the scheduler can give some other thread a chance to accomplish something. For all I care, the thread can be ordered to sleep for any epsilon greater than zero, just so long as it goes to sleep and releases its hold on the processor.

Here's another example of a need for a very short sleep interval: When you stress test an application, it helps to overwhelm it by a factor of a thousand, or a million: Loops that might normally be run ten times get run a million times; arrays that might have ten elements get expanded to a million elements; sleep times of (1/10)th of a second are sped up to a mere (1/1000000)th of a second.

OOPS - can't do that last one: if your app has sleep times, there's no way it can ever be sped up past that (1/1000)th-of-a-second minimum. What's worse, if I tell an empty FOR-LOOP to put itself to sleep for a millisecond, I get only about 500 [not 1000] iterations of the loop each second.

So that's my theoretical upper bound in any app with sleep times: the guts of the app can iterate no more than 500 times a second.

Again, in this day and age of 3 GHz processors, it just seems like there's something wrong with an upper bound of 500 iterations per second.
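(The ~500-iterations-per-second observation is easy to reproduce outside LabVIEW; this Python sketch, not from the thread, counts how many iterations of an "empty" loop with a 1 ms sleep complete in half a second.)

```python
import time

# The OS scheduler tick, not the CPU clock, sets the ceiling here:
# each 1 ms sleep request typically costs at least one tick.
deadline = time.perf_counter() + 0.5
iterations = 0
while time.perf_counter() < deadline:
    time.sleep(0.001)
    iterations += 1
print(f"{iterations} iterations in 0.5 s (~{iterations * 2} per second)")
```

On a stock desktop OS the count usually lands far below the 3-billion-cycles-per-second the hardware could theoretically manage.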

Of course, at this point, I almost expect some LabVIEW guru to speak up and say, "Oh, you don't have to worry about putting your threads to sleep so as to free up the CPU - LabVIEW does all that for you behind the scenes."

Or maybe "sleep" isn't the correct terminology in LabVIEW; maybe there's some "relinquish the processor" command which I don't know about.
Message 6 of 20

@tarheel_hax0r wrote:
Here's another example of a need for a very short sleep interval: When you stress test an application, it helps to overwhelm it by a factor of a thousand, or a million: Loops that might normally be run ten times get run a million times; arrays that might have ten elements get expanded to a million elements; sleep times of (1/10)th of a second are sped up to a mere (1/1000000)th of a second.

OOPS - can't do that last one: If your App has sleep times, there's no way that it can ever be sped up faster than that (1/1000)th of a second minimum.


Don't forget that you can wire a zero to the wait function. This is NOT a NO-OP, but causes the execution system to switch to a different task. Maybe that's all you need. This is especially useful in parallel loops that need to run fast but should not block each other.

You can also put your 1ms wait into a case structure which is active only every n'th iteration of the loop.
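(Both idioms above translate directly into text-based languages; here is a hedged Python analogue, not from the thread: a zero-length sleep that merely yields the thread's timeslice, plus a real wait taken only every Nth iteration.)

```python
import time

N = 100  # take a real 1 ms wait only once every N iterations
start = time.perf_counter()
for i in range(1000):
    # ... one unit of loop work would go here ...
    time.sleep(0)            # zero wait: yields the CPU, no enforced delay
    if i % N == N - 1:
        time.sleep(0.001)    # throttle once per N iterations
elapsed = time.perf_counter() - start
print(f"1000 iterations in {elapsed*1000:.1f} ms "
      f"(vs ~1000+ ms if every iteration slept 1 ms)")
```

Only 10 real sleeps are taken across 1000 iterations, so the loop's average per-iteration cost drops far below one millisecond while still periodically releasing the processor.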

Also, have a look at Application note 114 [broken link removed], for example.

Message 7 of 20
Don't forget that you can wire a zero to the wait function. This is NOT a NO-OP, but causes the execution system to switch to a different task. Maybe that's all you need. This is especially useful in parallel loops that need to run fast but should not block each other.

This would be an absolutely perfect solution to my problem. Unfortunately, it doesn't appear to be documented.

Help for "Wait (ms)" says: U32 milliseconds to wait specifies how many milliseconds to wait. This function does not wait for longer than 0x7FFFFFFF or 2,147,483,647 ms. To wait for a longer period, execute the function twice.

Help for "Wait Until Next ms Multiple" says: U32 millisecond multiple is the input that specifies how many milliseconds lapse when the VI runs.

Do you know of any authoritative documentation that says that setting the U32 value to zero will do nothing more than cause the thread to relinquish the CPU?

Thanks!
Message 8 of 20

@tarheel_hax0r wrote:
Do you know of any authoritative documentation that says that setting the U32 value to zero will do nothing more than cause the thread to relinquish the CPU?

Thanks!


This is something I have used for many, many years. I could not find the "authoritative documentation" in a casual glance at the docs, but I'll keep looking.

In the meantime I attach a simple demonstration VI (LV 7.0, let me know if you need an earlier version) where you can verify this fact yourself. There are two independent parallel loops that each append either a "1" or "-1" to a local variable array.

Without 0ms waits, the final output contains long stretches of the same number in a row.
With a 0ms wait added to each loop, the output nicely alternates with each element.

Enjoy! 🙂
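(The attached VI is not reproduced here, but the same demonstration can be sketched in Python with two threads, each appending its own value to a shared list and taking a zero-length sleep each iteration to hand the CPU to the other thread. The names `producer` and `output` are invented for illustration.)

```python
import threading
import time

output = []
lock = threading.Lock()

def producer(value, count):
    for _ in range(count):
        with lock:
            output.append(value)
        time.sleep(0)   # zero wait: give the other thread a turn

t1 = threading.Thread(target=producer, args=(1, 100))
t2 = threading.Thread(target=producer, args=(-1, 100))
t1.start(); t2.start()
t1.join(); t2.join()

# With the zero waits the two values tend to interleave; remove them
# and long runs of the same value appear instead.
print(output[:10])
```

The alternation is a tendency, not a guarantee; the point is simply that the zero-length sleep gives the scheduler a chance to switch.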
Message 9 of 20
The major problem with waits under Windows, especially NT based versions, is that the OS timeslice for an application is 20ms by default. What this means is that when your process (LabVIEW) is swapped out so another process can run (e.g. your e-mail gets mail, your mp3 player is decoding mp3s, the system clock ticks, the keyboard gets data), there is a minimum of 20ms before LabVIEW gets the processor back. This is an automatic delay of 20ms+ whenever it happens. As a result, delays of less than 20ms are not particularly accurate and are generally only useful for throttling CPU use.

You can change this 20ms to 1ms, but it will result in a lot more thread switching overhead for the OS. This is probably not much of a problem with a fast processor, but something to consider.
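(On Windows the mechanism for requesting a finer scheduler tick is the `timeBeginPeriod` API in winmm.dll; this Python/ctypes sketch, not from the thread, shows the call pattern and is a no-op on other platforms.)

```python
import ctypes
import sys

# Windows-only: timeBeginPeriod() asks the OS for a finer timer tick
# (here 1 ms); it must always be paired with timeEndPeriod().
if sys.platform == "win32":
    winmm = ctypes.WinDLL("winmm")
    winmm.timeBeginPeriod(1)
    try:
        pass  # ... timing-sensitive work runs under the finer tick ...
    finally:
        winmm.timeEndPeriod(1)
    used_fine_tick = True
else:
    used_fine_tick = False
    print("timeBeginPeriod is a Windows API; nothing to do on this platform")
```

As noted above, the finer tick buys resolution at the cost of more thread-switching overhead system-wide, so release it as soon as the timing-sensitive work is done.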

Take home message - this is a problem with the operating system, not LabVIEW. Windows operating systems are nowhere near real time. If you really need to sleep for 10 microseconds, you need to use LabVIEW RT, where the function exists. Note that this is usually not necessary: creative use of your hardware timers and buffering data will handle most problems. For those it won't (and they do exist), there is LabVIEW RT and LabVIEW FPGA.
Message 10 of 20