LabVIEW


Wait Function - Deep down what's it doing? Should people be using it?

I get that it's essentially delaying for the allotted amount of time before allowing the code to continue on.  But I've always seen the wait function as something that's holding your code hostage.  And maybe I see it this way if it's improperly utilized (like the code needs to stop, but the stop was "executed" during a wait and won't execute until wait is complete).  Also, some part of me has thought of it as a resource hog, but maybe that's not fair?  I've read that people will use the wait to help slow down loops so they're not going crazy.  But then what is the wait doing that's different than a "free spinning (the while loop is a circle...)" piece of code if it's not executing its own while loop counting to the desired amount of time?  Does the wait instead work like an interrupt?  Basically, I've rationalized (for probably no good reason) that using waits is just lazy coding and dangerous at the same time and offers no real value outside of very slightly inaccurate timing.  Explain to me why I'm wrong, please.  Because I've adopted the practice of making my own "wait" where I give it a time, and during that time the code is allowed to watch for "exits" and possibly do some other mundane task, like watch for or accumulate data.

Message 1 of 9

Sometimes you have to wait. For instance, when sending multiple commands to an instrument, you may need to wait several milliseconds between commands so you don't overflow the instrument's input buffer.

 

A While Loop that is doing nothing but polling a Boolean control will use 100% of the CPU core it is running on just to poll that Boolean. This will affect the GUI of that LabVIEW program and everything else currently running on the computer. But even a 1 ms delay in that loop will dramatically reduce CPU usage and allow Windows to better control time slicing.
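Since G code is graphical, here is a rough Python analogue of that polling loop (the names and timings are made up for illustration). With `delay_s = 0` the loop free-spins and pins a core; even a 1 ms sleep per pass hands the core back to the scheduler between polls:

```python
import threading
import time

def poll_flag(stop_flag: threading.Event, delay_s: float) -> int:
    """Poll a boolean flag until it is set, sleeping `delay_s` each pass.

    With delay_s = 0 the loop free-spins and pins a CPU core; even
    delay_s = 0.001 (1 ms) lets the OS run other work between polls.
    Returns the number of polling iterations performed.
    """
    iterations = 0
    while not stop_flag.is_set():
        iterations += 1
        if delay_s > 0:
            time.sleep(delay_s)  # yields this thread to the scheduler
    return iterations

stop = threading.Event()
threading.Timer(0.1, stop.set).start()  # simulate a user stopping it after 100 ms
count = poll_flag(stop, 0.001)
print(count)  # typically on the order of 100 iterations, not millions
```

With the sleep removed, the same 100 ms window yields millions of iterations of pure busy-waiting, which is the 100%-CPU case described above.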

 


@LuminaryKnight wrote:

 I've adopted the practice of making my own "wait" where I give it a time, and during that time the code is allowed to watch for "exits" and possibly do some other mundane task, like watch for or accumulate data.


 

Frankly, you are doing the right thing, as my While Loop polling example is not really the most efficient way to handle a GUI, but it was one of the things we had to do before the Event Structure came along.
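The home-made abortable wait described above can be sketched in Python (a text-language stand-in for G; the class name and timings are invented for illustration). The key is that `threading.Event.wait` blocks without burning CPU and wakes up immediately when the event fires:

```python
import threading

class AbortableWait:
    """Wait up to `timeout_s`, but return early if abort() is called."""

    def __init__(self) -> None:
        self._abort = threading.Event()

    def abort(self) -> None:
        self._abort.set()

    def wait(self, timeout_s: float) -> bool:
        # Event.wait blocks without spinning and wakes as soon as the
        # event is set; returns True if aborted, False on timeout.
        return self._abort.wait(timeout_s)

w = AbortableWait()
threading.Timer(0.05, w.abort).start()  # simulate a user pressing Stop
aborted = w.wait(timeout_s=5.0)         # returns long before 5 s elapse
print(aborted)  # True
```

This is exactly the "stop executed during a wait" problem from the original post solved: the wait is interruptible, so a 5-second delay does not hold a shutdown request hostage.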

========================
=== Engineer Ambiguously ===
========================
Message 2 of 9

@LuminaryKnight wrote:

I get that it's essentially delaying for the allotted amount of time before allowing the code to continue on.  But I've always seen the wait function as something that's holding your code hostage.  And maybe I see it this way if it's improperly utilized (like the code needs to stop, but the stop was "executed" during a wait and won't execute until wait is complete).  Also, some part of me has thought of it as a resource hog, but maybe that's not fair?  I've read that people will use the wait to help slow down loops so they're not going crazy.  But then what is the wait doing that's different than a "free spinning (the while loop is a circle...)" piece of code if it's not executing its own while loop counting to the desired amount of time?  Does the wait instead work like an interrupt?  Basically, I've rationalized (for probably no good reason) that using waits is just lazy coding and dangerous at the same time and offers no real value outside of very slightly inaccurate timing.  Explain to me why I'm wrong, please.  Because I've adopted the practice of making my own "wait" where I give it a time, and during that time the code is allowed to watch for "exits" and possibly do some other mundane task, like watch for or accumulate data.


At the company that taught me to use LabVIEW, my mentor/boss expressly forbade waits in code. It was his opinion that there is never any reason for code to wait. For many years I never used waits because I was taught that there was always a better way. Of note is that the code base I was working on had libraries for sequencing and timing of code execution. On the other side of that learning process, I rarely think about putting a wait in code, but I can now see that there are some use cases where waits are handy; these use cases are typically in very small, single-purpose programs.

 

Instead of waiting, use the event loop's timeout case to check the state of things and see if it's OK to do the thing you are waiting for.
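A minimal Python analogue of that pattern, assuming a queue-driven event loop (LabVIEW's Event Structure is graphical, so this is only a sketch): a blocking `get` with a timeout plays the role of the Event Structure, and the `queue.Empty` branch plays the role of the timeout case where periodic checks live.

```python
import queue

events: "queue.Queue[str]" = queue.Queue()

def run_once(q: "queue.Queue[str]", timeout_s: float) -> str:
    """One pass of an event loop: handle an event if one is pending,
    otherwise fall into the 'timeout case' to do housekeeping."""
    try:
        event = q.get(timeout=timeout_s)
        return f"handled:{event}"       # the event case
    except queue.Empty:
        return "timeout-case"           # periodic state checks go here

events.put("button-click")
print(run_once(events, 0.1))  # handled:button-click
print(run_once(events, 0.1))  # timeout-case
```

The loop never spins: it sleeps inside `get` until an event arrives or the timeout elapses, which is the same CPU-friendly behavior the Event Structure's timeout terminal gives you.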

______________________________________________________________
Have a pleasant day and be sure to learn Python for success and prosperity.
Message 3 of 9

@LuminaryKnight wrote:

I've read that people will use the wait to help slow down loops so they're not going crazy.  But then what is the wait doing that's different than a "free spinning (the while loop is a circle...)" piece of code if it's not executing its own while loop counting to the desired amount of time?


A simple test would be to place a couple of empty, independent While Loops on the diagram and look at the CPU use. Then place a 1 ms wait in each and observe again. Maybe you are thinking of the "high resolution polling wait", which is pretty ugly in what it is doing and most likely quite pointless on Windows.

 

Back in the day, the event structure did not exist, and there was no reason to poll the UI faster than the average reaction time of the user. Even now, any independent part that is not mission critical should be reasonably paced. LabVIEW is a highly parallel programming language, and it is important not to starve the more critical parts by burning CPU on menial repetitive work where it does not matter. Especially if you do hard computations using parallel FOR loops, you don't want too many distractions.

 

Exception: Waits don't belong in inner loops or non-interactive subVIs.

 

In LabVIEW, dealing with time is a core part of programming. Don't ignore it.

Message 4 of 9

@altenbach wrote:

@LuminaryKnight wrote:

I've read that people will use the wait to help slow down loops so they're not going crazy.  But then what is the wait doing that's different than a "free spinning (the while loop is a circle...)" piece of code if it's not executing its own while loop counting to the desired amount of time?


In LabVIEW, dealing with time is a core part of programming. Don't ignore it.


Duly noted, thank you.

Message 5 of 9

Saying that Wait (ms) or Wait Until Next ms Multiple should never be used is definitely an oversimplification of things.

If you write communication drivers talking to devices over GPIB, serial, or network, it is indeed almost never the right solution. Often it goes together with abuse of that other non-solution, "Bytes at Serial Port"! Both of these are in almost every case a misuse. Instead, one should use the termination character or known-message-length features of every sane communication protocol.

 

But if you have loops handling user inputs, you should either use the event structure or, if the loop is trivial, at least a wait.

 

Also, when handling polling loops, for instance for physical I/O states, you should slow the loop down to whatever is an acceptable reaction time by adding a wait inside.

Rolf Kalbermatter
My Blog
Message 6 of 9

One thing alluded to, but not explicitly stated here, is that Wait (ms) yields the CPU to other threads. Viewed simply, it uses an interrupt to know when to resume, and other processes are allowed to run in that time. You can even use a 0 ms wait and you will still see this CPU yield.
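The zero-length-wait yield has a direct analogue in Python, where `time.sleep(0)` relinquishes the CPU (and, in CPython, the GIL) without adding any real delay. This sketch uses it to let a background worker make progress while the main thread spins on a liveness check:

```python
import threading
import time

progress = {"worker": 0}

def worker() -> None:
    for _ in range(100_000):
        progress["worker"] += 1

t = threading.Thread(target=worker)
t.start()
# A cooperative wait loop: sleep(0) adds no delay, but it yields the
# scheduler so the worker thread gets CPU time instead of being starved.
while t.is_alive():
    time.sleep(0)
t.join()
print(progress["worker"])  # 100000
```

The same idea applies to the LabVIEW 0 ms wait: it costs essentially nothing in elapsed time, but it gives the scheduler a chance to run everything else.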

 

Now there is the question of when should you use these waits. In instrument communications, I have had several situations where I needed to wait, usually to give the instrument time to perform a command before I send the next one or to allow for settling before taking a measurement. There is also the case of waiting for an instrument to send any kind of data (if no data has come in, wait X ms and then try again).
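That "if no data has come in, wait X ms and then try again" pattern can be sketched in Python (the function and fake instrument below are hypothetical stand-ins, not a real driver API):

```python
import time

def read_with_retry(read_fn, poll_interval_s: float = 0.05,
                    timeout_s: float = 1.0) -> bytes:
    """Poll `read_fn` (which returns b'' when no data is ready yet)
    until data arrives or the overall timeout elapses. The short sleep
    between attempts keeps the loop from spinning at 100% CPU."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        data = read_fn()
        if data:
            return data
        time.sleep(poll_interval_s)  # give the instrument time to respond
    raise TimeoutError("instrument sent no data within timeout")

# Fake instrument that has data ready on the third poll:
attempts = {"n": 0}
def fake_read() -> bytes:
    attempts["n"] += 1
    return b"1.2345E+0" if attempts["n"] >= 3 else b""

print(read_with_retry(fake_read))  # b'1.2345E+0'
```

The wait here is not laziness; it matches the loop's pace to how fast the instrument can plausibly respond, exactly as described above.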

 

In the RT realm, I have needed to use a wait in non-critical loops just to make sure my critical loop got plenty of CPU time, using that yield principle I mentioned at the start of this post.


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 7 of 9

@LuminaryKnight wrote:

  But I've always seen the wait function as something that's holding your code hostage.   Also, some part of me has thought of it as a resource hog, but maybe that's not fair? 


It's not a hog; quite the opposite. It yields this particular thread to any other thread wanting to do some work. Holding the code hostage is a witty and quite fitting way of seeing it, and sometimes that's exactly what's needed. Just like holding your toddler to prevent them from running off a pier, you might need to rein in your code. In pure software there are often better solutions, but when interacting with hardware you often need to, e.g., poll a DI or a serial port, or send a heartbeat.

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 8 of 9

@crossrulz wrote:

 

Now there is the question of when should you use these waits. In instrument communications, I have had several situations where I needed to wait, usually to give the instrument time to perform a command before I send the next one or to allow for settling before taking a measurement. There is also the case of waiting for an instrument to send any kind of data (if no data has come in, wait X ms and then try again).


Generally, using a Wait so as not to overrun a slow device with too many commands too quickly is a valid use case. For waiting on received data, the corresponding Read function already has an inherent wait built in IF you can use the known termination characteristics of the protocol. This can be a known protocol message size, either standardized in the protocol or carried in a header, or it can be a termination character/byte. If one of these is given, all that is needed is a long enough timeout, but not so long that it holds your system hostage when, for instance, the user decides to quit your application and it sits there waiting and waiting for the timeout to occur before it can finally terminate. Then just execute the Read and it will return after any of these events occurs:

- the requested number of characters have been received

- the configured termination character has been received

- there was a communication error in the lower level driver

- the timeout has elapsed
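Those four exit conditions can be sketched in Python against a hypothetical byte-oriented port object (this is an illustration of the logic, not the actual VISA or serial API):

```python
import time

def read_message(port, termchar: bytes = b"\n",
                 max_len: int = 4096, timeout_s: float = 2.0) -> bytes:
    """Accumulate bytes from `port.read(1)` (which returns b'' when
    nothing is available yet) until the termination character arrives,
    `max_len` bytes have been collected, or the timeout elapses --
    mirroring the exit conditions listed above. `port` is a made-up
    object for this sketch, not a real VISA/pyserial handle."""
    buf = bytearray()
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        chunk = port.read(1)
        if not chunk:
            time.sleep(0.001)  # nothing yet; yield briefly, don't spin
            continue
        buf += chunk
        if chunk == termchar or len(buf) >= max_len:
            return bytes(buf)
    raise TimeoutError("no terminated message before timeout")

class FakePort:
    """Hypothetical port that hands out a canned payload byte by byte."""
    def __init__(self, payload: bytes) -> None:
        self._data = bytearray(payload)
    def read(self, n: int) -> bytes:
        out, self._data = bytes(self._data[:n]), self._data[n:]
        return out

print(read_message(FakePort(b"IDN?\n")))  # b'IDN?\n'
```

In a real driver the Read itself blocks efficiently instead of polling byte by byte, which is exactly why the built-in termination handling beats a hand-rolled wait.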

 

This is not directed at you, Tim! Your presentation here should be a must-watch for anyone doing instrument communication in LabVIEW (and it is even applicable to non-LabVIEW solutions).

 

Rolf Kalbermatter
My Blog
Message 9 of 9