Time to call sub VIs

    altenbach wrote:
    Let me try to understand your question: so you are worried that sometimes it takes a few ms and sometimes 0 ms, and you want a reliable way to always execute the subVI in sub-ms time.

Yes.

        * First of all your benchmarking is flawed, because you also measure the reading of the FP controls and the updating of the FP indicators (2x blue, 2x orange). Updating of indicators is asynchronous, so iterations that include an update will be slower than the others.

I'm sorry, I don't understand "FP"; some terms in the Japanese version of LabVIEW are probably different from the English version.
But I think I understand what you mean. I fixed the VIs to measure the times correctly and attached them as question20080903max.zip and profile20080903.png. I also changed them to calculate a maximum value, shown as "Max".
In profile20080903.png, sub.vi's longest time is again 15.6 ms, but "Max" is 1 ms. I don't know why the longest time differs from "Max".

        * A few ms might well involve an OS task switch or other things out of your control, such as cache hits/misses.
I tried running under a Prime95 load and in other environments, and the behavior was the same. Therefore, it isn't that kind of problem.

        * Is debugging enabled?
I tried with debugging both enabled and disabled. The results were the same.

        * What are the priority and thread settings?
I tried both low and high priorities, and the results were also the same. But I don't know where I can change the thread settings. Where are they?

        * Is the panel of the subVI open or closed?

It was closed.

        * Do you really have a REAL performance issue or are you just curious about the somewhat "quantized" values in the profile window?
Both. At first I found some delays in my research program, so I profiled it and found some 15.6 ms entries.
Therefore, I made simple VIs (test.vi and sub.vi) to find the cause, and the profile of the simple VIs also showed 15.6 ms as the longest time.

Moreover, I also found a strange difference between the Tick Count result and the longest time in the profiles shown in profile20080902.png and profile20080903.png.
Therefore, I now have three problems.

        * I hate to tell you this, but if you are worried about ms jitter, a general-purpose OS like Windows is not the correct tool. You need LabVIEW RT.
I think 15.6 ms is too long for a PC, and I confirmed the problem on several environments, so I think it concerns the core of LabVIEW.

I don't know whether LabVIEW RT would avoid it.

Message 11 of 15

1. An occasional 15 msec delay is not unusual for any programming language when you use Windows.

2. Place a wait inside the while loop like you are supposed to.

3. I would trust the result of the tick count more than the profiler.

4. I cannot reproduce your issue (with the modified while loop), but even if I could, it would not be a concern. If you want deterministic behavior, you need a deterministic OS.

Message 12 of 15

Dennis Knutson wrote:

1. An occasional 15 msec delay is not unusual for any programming language when you use Windows.

2. Place a wait inside the while loop like you are supposed to.

3. I would trust the result of the tick count more than the profiler.

4. I cannot reproduce your issue (with the modified while loop), but even if I could, it would not be a concern. If you want deterministic behavior, you need a deterministic OS.


1. I don't think so.

I cut unneeded services and processes on Windows and measured the time to call a function with the attached Ruby script "time.rb".

I couldn't attach a .rb file, so I compressed it into a .zip file. It runs on Cygwin with Ruby 1.8.7. The result is below.

---------------------------

% ./time.rb

0.000999927520751953
0.00100016593933105
---------------------------

The units are seconds. Thus, delays of over 10 ms don't happen frequently.

I think LabVIEW should be faster than scripting languages like Ruby.
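
For comparison, below is a minimal sketch of the same kind of measurement in plain C (it is not the attached time.rb, and not LabVIEW code; the dummy function and the loop count are just placeholders). It times a trivial call with QueryPerformanceCounter and keeps the worst case.

---------------------------

#include <stdio.h>
#include <windows.h>

static volatile int sink;                 /* keeps the call from being optimized away */
static void dummy_call(void) { sink++; }  /* stands in for the subVI being timed */

int main(void)
{
    LARGE_INTEGER freq, t0, t1;
    double max_s = 0.0;

    QueryPerformanceFrequency(&freq);     /* counter ticks per second */

    for (int i = 0; i < 100000; i++) {
        QueryPerformanceCounter(&t0);
        dummy_call();
        QueryPerformanceCounter(&t1);
        double dt = (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
        if (dt > max_s)
            max_s = dt;                   /* remember the worst single call */
    }

    printf("worst call time: %.6f s\n", max_s);
    return 0;
}

---------------------------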

 

2. I put a Wait in the while loop. Then the longest time became 0.0 ms, and the Max shown in profile20080903.png also became 0 ms.

I think the problem is caused by the profiler. I'll explain that next.

 

3. I think so too.

All of the values in the profile results except the average times are roughly integer multiples of 15.6 ms.

After all, I think 15.6 ms is the resolution, as the Japanese support engineers said.

The average times are presumably calculated from the VI times and the execution count, and each environment presumably has its own resolution: 15.6 ms on one and 10 ms on the other.

However, I also think these resolutions are too coarse. Should I not use the LabVIEW profiler?
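
A rough illustration of this reasoning is below (the tick rate, run count, and number of ticks are made-up numbers): every raw figure is a whole number of 15.625 ms ticks, but the average, being the total divided by the run count, can land anywhere.

---------------------------

#include <stdio.h>

int main(void)
{
    const double tick_ms = 1000.0 / 64.0;  /* assumed 64 Hz tick = 15.625 ms  */
    const int    runs    = 1000;           /* hypothetical execution count    */
    const int    ticks   = 3;              /* ticks charged to the VI overall */

    double total_ms   = ticks * tick_ms;   /* always a multiple of 15.625 ms  */
    double average_ms = total_ms / runs;   /* need not be a multiple          */

    printf("total   : %.3f ms\n", total_ms);    /* prints 46.875 ms   */
    printf("average : %.6f ms\n", average_ms);  /* prints 0.046875 ms */
    return 0;
}

---------------------------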

 

4. I don't have the money to buy LabVIEW RT.

However, even if I had enough money, I wouldn't want to use RT if I can't trust non-RT.

Message 13 of 15

yuta,

 

As the Japanese support engineers told you, it is a problem with resolution.  The profiler tool uses a Windows API function to get the thread time of each execution thread.  I believe this API is based on the number of thread switches that are required for your code execution.  The Windows thread scheduler divides time between applications in blocks of around 20 ms.  I believe the actual time depends on the system and some underlying hardware clock, so that's why you see 10.0 ms on one computer (100 Hz clock) and 15.6 ms on another (64 Hz clock).  The reported times you see in the profiler will therefore always be a multiple of this base time.  This is why the VI you put together that checks whether the execution time is above 10 ms never increments its counter, yet times of 15.6 ms are reported by the profiler.

 

This resolution issue is not specific to LabVIEW but is a limitation of the Windows thread scheduler.  Instead of using this API, the profiler could instead use the system wall time to obtain a better resolution.  However, the drawback of this is that highly parallel applications will begin to report bloated execution times.  Although the resolution would be finer, the actual execution time of a VI may end up being more inaccurate.  It is possible that we will move towards this alternative at some point in the future, but at present I do not know of any plans to do so.
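
A rough sketch of the difference between the two clock sources is below. GetThreadTimes is shown only as an example of a thread-time API that advances in whole scheduler ticks; I am not saying it is the exact call the profiler uses, and the busy-loop length is arbitrary.

---------------------------

#include <stdio.h>
#include <windows.h>

/* Convert a FILETIME (100 ns units) to milliseconds. */
static double filetime_to_ms(FILETIME ft)
{
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart / 10000.0;
}

int main(void)
{
    FILETIME created, exited, kern0, user0, kern1, user1;
    LARGE_INTEGER freq, t0, t1;

    QueryPerformanceFrequency(&freq);
    GetThreadTimes(GetCurrentThread(), &created, &exited, &kern0, &user0);
    QueryPerformanceCounter(&t0);

    for (volatile long i = 0; i < 5000000; i++)   /* a few ms of busy work */
        ;

    QueryPerformanceCounter(&t1);
    GetThreadTimes(GetCurrentThread(), &created, &exited, &kern1, &user1);

    /* Wall time from the performance counter has sub-microsecond resolution... */
    printf("QPC elapsed      : %.3f ms\n",
           1000.0 * (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart);

    /* ...while thread time only advances in whole scheduler ticks
       (about 15.6 ms at 64 Hz, 10 ms at 100 Hz). */
    printf("thread user time : %.3f ms\n",
           filetime_to_ms(user1) - filetime_to_ms(user0));
    return 0;
}

---------------------------

The gap between the two printed numbers is the same effect that makes the profiler columns come out as multiples of the tick.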

 

Message 14 of 15

I see. The profiler has a limitation.

Now I use Tick Count to measure the sub-VI time, as shown in http://forums.ni.com/ni/attachments/ni/170/353075/2/profile20080903.png.

Doesn't Tick Count have such a limitation?

 

Message 15 of 15