LabVIEW

Need help with timing statistics and timing details function

I tried to figure out how quickly a certain loop I had written would execute (per loop cycle). An NI representative advised me to use the following timing feature.

I went to Project > Show Profile Window

From there, I checked timing statistics and details and ran the loop. I found that the individual loop iterations were not timed, so I set the loop to run only once. After running again, I consistently found run times of 0.0 msec. I placed a delay in the VI and eventually got it to read 1.0 msec. I also tried reading in microseconds, but I only ever got 0 and 1000.

Therefore, it seems that the smallest increment this feature measures is 1 msec (1000 microseconds). As I am trying to ensure that the loop can run in under 0.2 msec, I need better resolution. I did try running the loop through several iterations to increase the total run time, but I always got a measurement of a whole millisecond.

Is this because I do not have the latest LabVIEW version? Does 7.0 measure to the microsecond? Am I doing something wrong, or is there another way I can measure more accurately with 5.1?
Message 1 of 3
The time measurements are a limitation of the PC's clock and Windows and have nothing to do with LabVIEW. There's an example VI called Timing Template that you can use to get finer resolution by specifying a large number of trials. It takes the elapsed time in milliseconds, converts it to seconds, and divides that by the number of trials.
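
The same averaging idea, written as a minimal C sketch for reference (assuming Windows; GetTickCount is a millisecond tick comparable to LabVIEW's Tick Count (ms) function, and the loop body and trial count here are just placeholders):

/* Average many trials to beat the ~1 ms clock resolution. */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    const int trials = 100000;     /* enough iterations for a long elapsed time */
    volatile double x = 0.0;       /* volatile so the work isn't optimized away */

    DWORD start = GetTickCount();  /* millisecond tick, like Tick Count (ms) */
    for (int i = 0; i < trials; i++) {
        x += i * 0.5;              /* stand-in for the real loop body */
    }
    DWORD elapsed_ms = GetTickCount() - start;

    /* elapsed ms -> seconds, divided by trials = seconds per iteration */
    printf("%.9f s per iteration\n", (elapsed_ms / 1000.0) / trials);
    return 0;
}

Even with a 1 msec clock, averaging over enough iterations makes the quantization error negligible; for example, a 1 msec error over a 10-second run is only 0.01%.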
Message 2 of 3
> Is this because I do not have the latest LabVIEW version? Does 7.0
> measure to the microsecond? Am I doing something wrong, or is there
> another way I can measure more accurately with 5.1?

The profiler measures time at the VI level, not per loop. It works well for determining how much time the various elements of an application take, but it doesn't show timing transitions such as "loop started at HH:MM:SS.sss, and ended at ...". There is a tool in development that will mark those sorts of events and display them on a timeline, but it is a very different tool from the profiler.

If you are trying to determine how much time a loop iteration takes, you need to run enough iterations to consume somewhere around 10 ms of wall-clock time, then divide by the number of iterations. You can do this with the profiler, or just with two of the clock tick VIs and simple math. The clock resolution for this is about 1 ms, though some OSes may have coarser granularity. It is possible to use a high-resolution timer to get something like 100 ns resolution, but those VIs don't ship with LV. I searched the ni site and found
http://sine.ni.com/apps/we/niepd_web_display.DISPLAY_EPD4?p_guid=B45EACE3DE8556A4E034080020E74861&p_node=DZ52018&p_submitted=N&p_rank=&p_answer=&p_source=External
which I think is the library I am referring to.

You can sprinkle calls to these around your diagram, but beware that
reading these clocks may take tens of microseconds depending on the OS.
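
For reference, on Windows the kind of high-resolution timer behind libraries like that one is the QueryPerformanceCounter API. Here is a minimal C sketch (not the NI library itself; the loop body and trial counts are stand-ins) that times a loop this way and also estimates the cost of reading the clock:

#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   /* counter ticks per second */

    /* Estimate the overhead of reading the clock itself. */
    const int reads = 100000;
    QueryPerformanceCounter(&t0);
    for (int i = 0; i < reads; i++) {
        LARGE_INTEGER dummy;
        QueryPerformanceCounter(&dummy);
    }
    QueryPerformanceCounter(&t1);
    double per_read = (double)(t1.QuadPart - t0.QuadPart) / freq.QuadPart / reads;
    printf("clock read overhead: %.9f s\n", per_read);

    /* Time the loop under test the same way. */
    const int trials = 100000;
    volatile double x = 0.0;            /* stand-in for the real loop body */
    QueryPerformanceCounter(&t0);
    for (int i = 0; i < trials; i++) {
        x += i * 0.5;
    }
    QueryPerformanceCounter(&t1);
    double per_iter = (double)(t1.QuadPart - t0.QuadPart) / freq.QuadPart / trials;
    printf("%.9f s per iteration\n", per_iter);
    return 0;
}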

Greg McKaskle
Message 3 of 3