Getting Exact Loop Timing for Data Measurements

I'm sure this is a simple problem, but I didn't see an existing topic that covered it.  I'm looking for a more efficient way of timing a loop for data gathering.  The Elapsed Time express VI runs too slow.  I can use the timing from cDAQ system, but that only controls how fast the instruments loop, not how fast the other stuff in the loop runs (obviously).  Being hyper exact on the loop timing isn't important, but knowing the exact time between loop iterations is for accuracy purposes.  I was going to use the High Resolution Relative Seconds VI as in the attached picture, but didn't know if this was going to lead me to grief later.

 

I would use the Timed Loop, but I can't figure out how to make it use MHz timing.  The option is grayed out for me.  KHz timing is too slow.

Message 1 of 19

You haven't stated what OS you are using, but given your comments in the last paragraph I'm going to go out on a limb and say you're probably using Windows. Windows is non-deterministic (irrespective of LabVIEW), so you will never achieve highly deterministic control over loop execution rates, even with a Timed Loop (those structures were really designed for a real-time OS running LabVIEW Real-Time). With Windows you'll always get a lot of jitter. This is the reason why Timed Loops can't run faster than 1 kHz on a Windows OS.

 

However, it seems that what you really need to know is the delta between each execution - and the cycle jitter is not important. In that case the High Resolution Relative Seconds VI is the one you should use. Also note that having debugging enabled on a VI will additionally slow down its execution, which might matter if you are looking at timescales an order of magnitude below 1 ms.
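Since LabVIEW is graphical, here is a rough textual analogue of that approach - in Python, purely for illustration (the function name and workload are made up): read a high-resolution monotonic clock once per iteration and record the delta between consecutive readings, so each iteration's duration is known even though it jitters.

```python
import time

# Conceptual Python analogue of reading the High Resolution Relative
# Seconds VI once per loop iteration (names here are illustrative).
def measure_iteration_deltas(n_iterations, work):
    deltas = []
    last = time.perf_counter()       # high-resolution monotonic clock
    for _ in range(n_iterations):
        work()                       # the rest of the loop body
        now = time.perf_counter()
        deltas.append(now - last)    # exact elapsed time this iteration
        last = now
    return deltas

deltas = measure_iteration_deltas(100, lambda: sum(range(1000)))
# On a desktop OS the deltas will jitter, but each one is known precisely.
```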

 

I'm not sure where your comment that the "Elapsed Time express VI runs too slow" comes from. It implies you have already done some benchmarking against something else.

Message 2 of 19

Yes, using Windows, should have said so.  I have debugging switched off.

 

When I try to run the loop in excess of about 10,000 Hz, the loop bogs down, and the culprit turned out to be the Elapsed Time express VI.  That was also when I learned that you can't use the dt output from DAQmx Read 1D Analog Waveform for timing either.

Message 3 of 19

The dt from a waveform reflects the rate at which the waveform was captured - in your case it was captured in hardware, completely independently of your block diagram code. Obviously this is useful if you need well-defined timing between samples - hardware timing on a DAQ card will always outperform timing controlled by the Windows OS.

 

If you need to minimize jitter at rates faster than 1 kHz, then you probably need to move off Windows onto an RTOS. If you aren't worried about jitter but need high enough resolution to measure the delta between loop iterations, then the High Resolution Relative Seconds VI will work. There are other things that can be done to improve performance besides disabling debugging on that VI, such as:

  • Run the contents of the loop in a subroutine-priority VI; this ensures the LabVIEW execution scheduler forces the entire contents into a single clump that executes on the designated execution system until completed.
  • Allocate the VI (and thus its called subVIs) to a dedicated execution system.
  • Front panel updates hog the UI thread and force thread swaps to transfer data; consider not showing the front panel (as a subroutine-priority VI would do) or deferring panel updates.

You could also attach an example VI so we can have a more targeted discussion. It's not clear to me what else you have besides a waveform read.

Message 4 of 19

For the best timing, you want to use the accurate clocks in the DAQ devices themselves.  Suppose you are acquiring data at 10 kHz: set the task up for 1000 samples per read and "Continuous Samples" mode.  Once you start the DAQ task (with DAQmx Start Task), every time you do a DAQmx Read you will have 1000 samples after precisely 100 ms (1000 samples / 10 kHz = 0.1 s = 100 ms).  Immediately hand these points off (via a Queue or, my favorite, a Channel Wire) to another loop which processes them in parallel, then do another read (in a While loop), generating another 1000 points in the next 100 ms.  You will effectively be collecting as many samples as you want at 10 kHz with no breaks, and with very little CPU time being used, as most of the time the loop is "waiting" for the hardware to gather the next 1000 points.
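The structure described above - a hardware-paced acquisition loop handing chunks to a parallel processing loop - can be sketched in Python for illustration, with a plain list standing in for the acquired samples and a queue.Queue playing the role of the LabVIEW Queue or Channel Wire. None of these names are real DAQmx calls; this is only a shape sketch.

```python
import queue
import threading

SAMPLE_RATE = 10_000        # 10 kHz hardware sample clock (assumed)
CHUNK = 1_000               # samples per read -> ~100 ms per loop iteration

def producer(q, n_chunks):
    # Stand-in for the DAQmx Read loop: in LabVIEW the read blocks until
    # the hardware has buffered CHUNK samples, so the loop is paced by
    # the DAQ clock, not by any software timer.
    for i in range(n_chunks):
        chunk = [i] * CHUNK          # placeholder for acquired samples
        q.put(chunk)                 # hand off immediately (Queue/Channel Wire)
    q.put(None)                      # sentinel: acquisition finished

def consumer(q, out):
    # The processing loop runs in parallel and never stalls acquisition.
    while True:
        chunk = q.get()
        if chunk is None:
            break
        out.append(len(chunk))       # placeholder for real processing

q = queue.Queue()
results = []
t = threading.Thread(target=consumer, args=(q, results))
t.start()
producer(q, n_chunks=5)
t.join()
# results now holds the size of each of the 5 processed chunks.
```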

 

Why don't you attach a VI (which lets us poke around and change your code easily in ways that may prove helpful to you)?

 

Note that there is no explicit timing (no timers, no Timed Loop) - it's all done with the clocks in the DAQ hardware, which are much more reliable than the clocks in Windows.  Also, of course, you must not use the Dreaded DAQ Assistant, nor its Evil Twin, the Dynamic Data Wire.

 

Bob Schor

Message 5 of 19

@SmithGo wrote:

I can use the timing from cDAQ system, but that only controls how fast the instruments loop, not how fast the other stuff in the loop runs (obviously).


Not exactly true.  If you have your DAQ task set to continuous sampling, you can set your loop rate by stating how many samples to read.  For example, if your DAQ is sampling at 1 kHz and you tell DAQmx Read to read 100 samples, then your loop will iterate approximately every 100 ms.  No additional wait needed.
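The arithmetic behind that pacing is simple enough to show directly (a sketch; the variable names are illustrative, not DAQmx properties):

```python
# The read itself paces the loop: DAQmx Read blocks until the requested
# number of samples has been buffered under the hardware sample clock.
sample_rate_hz = 1_000       # continuous sampling at 1 kHz
samples_per_read = 100
loop_period_s = samples_per_read / sample_rate_hz   # 100 / 1000 = 0.1 s
```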


Message 6 of 19

It's actually a really large program, as it controls a hydraulic test stand that has a lot of functionality.  I'm in the process of updating it with the new timing scheme, so I don't have a working VI or picture of same to show.  I don't think it will be worth it to post the VI because it relies on several dozen sub-VIs to function.  When I get this working again tomorrow (I hope), I will post a picture for it.

 

I have looked into the multiple-samples-per-read approach before, but can't use it for two reasons.  The first is that the data rate is user-selectable from 5 Hz to 100 kHz, so it would be difficult to target the correct number of samples to keep the loop timed correctly; users can also put the software on different hardware, which again makes targeting the correct number of samples difficult.  The second is that the front panel needs to update constantly, or as near as possible, with the data being taken.  It can't just update once every second.  A further consideration is that users don't want to record more data than absolutely required, in order to limit file size.

 

I have just looked up this white paper on Producer/Consumer loops (http://www.ni.com/white-paper/3023/en/), and that looks pretty promising for a future version of the program.

Message 7 of 19

Hi Go,

 

The first is that the data rate is variable by the user from 5 Hz - 100 kHz, so it would be difficult to target the correct number of samples to keep the loop timed correctly.

You can always read as many samples as you would expect in ~100 ms. So for sample rates <= 10 Hz you would read just one sample per DAQmxRead; above 10 Hz you could calculate "samplerate * 0.1" - after all, it's just simple math…
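That calculation can be sketched as a one-line helper (Python, for illustration only; the function name is made up):

```python
def samples_per_read(sample_rate_hz, target_period_s=0.1):
    """Read ~target_period_s worth of samples per DAQmx Read,
    clamped to at least one sample for very slow rates."""
    return max(1, round(sample_rate_hz * target_period_s))

print(samples_per_read(5))        # 1     (5 Hz: one sample per read)
print(samples_per_read(1_000))    # 100   (1 kHz: ~100 ms per read)
print(samples_per_read(100_000))  # 10000 (100 kHz: still ~100 ms per read)
```

This keeps the loop period near 100 ms across the whole 5 Hz to 100 kHz range, regardless of hardware.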

 

Also, users can put the software on different hardware, which would again make targeting the correct number of samples difficult.

Your users port the software to use different hardware? Usually the programmer of the software should do this…

 

The second is that the front panel needs to update constantly, or as near as possible, with the data being taken.  It can't just update once every second.

An update rate of 10 Hz is usually more than enough.

How fast can your users actually read the screen (aka "recognize the content displayed on screen")?

 

A consideration for the users is also that they don't want to record more data than absolutely required in order to limit file size.

File save rate is not related to sample rate or screen update rate at all!

Use a producer-consumer scheme to decouple data acquisition from data saving…
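The decoupling of the three rates can be sketched like this (Python, illustrative only - all constants and the commented-out acquisition call are assumptions, not real DAQmx code):

```python
# Producer-consumer decoupling sketch: every acquired chunk is saved to
# file, but the display only refreshes every ~100 ms, independent of the
# sample rate.
SAMPLE_RATE_HZ = 100_000
CHUNK = 1_000                     # 10 ms of data per read
DISPLAY_PERIOD_S = 0.1            # ~10 Hz screen refresh is plenty

chunks_per_refresh = int(DISPLAY_PERIOD_S * SAMPLE_RATE_HZ / CHUNK)  # 10

saved_chunks = 0
display_refreshes = 0
for i in range(20):               # 20 chunks = 200 ms of acquisition
    # chunk = daq_read(CHUNK)     # hypothetical acquisition step
    saved_chunks += 1             # file consumer sees every chunk
    if (i + 1) % chunks_per_refresh == 0:
        display_refreshes += 1    # UI consumer refreshes at ~10 Hz
```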

Best regards,
GerdW


Message 8 of 19

It's sold as commercial software, so I don't really have any control over what they do with it once it's out the door, so to speak.

 

As I linked in the message above yours, I only just learned about the producer-consumer concept, and will look into implementing it in the future.  For now, I just wanted to tighten up the timing measurement a bit.

Message 9 of 19

SmithGo,

 

The MHz timing for the timed loop is only available for Real Time targets. As far as tightening timing, I'd make sure you don't have any Express VIs, as they tend to bog down execution. I don't see any issues off the bat with checking the timing as you suggested in your first post. What progress have you been able to make? 

 

I'd definitely agree that the Producer Consumer would be a solid avenue to explore for future applications.

Claire M.
Technical Support
National Instruments
Certified LabVIEW Developer
Message 10 of 19