LabVIEW


SubVI overhead and performance

I have some time-critical loops that call subVI functions, and I need to get the most performance out of them. I have set the subVIs to subroutine priority. Is this the same as inlining a function in C/C++? Is there still overhead associated with such a call, and how can I minimize it? The subVIs are relatively small in code size, but I made them subVIs for modularity and ease of maintenance. Does anyone know what the issues with subroutine priority are, and how I can squeeze the most out of LabVIEW (version 7.0)? Is there an inline equivalent in LabVIEW, i.e. something that essentially pastes the subVI code into the calling diagram and removes the overhead of the call and of pushing parameters on and off the stack?
-Paul
Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
LabVIEW 4.0 - 2013, RT, Vision, FPGA
Message 1 of 3
Yes, subroutine priority lowers the calling overhead, so it should give you the best performance in this scenario. The gain can be very small, though.

If this is critical, you should make a few alternate versions of the same code (different subVI priorities, the subVI flattened out onto the main diagram, etc.), and always make sure the subVI front panel is closed. Put each version in a FOR loop in the center frame of a three-frame flat sequence; the first and last frames each contain a Tick Count (ms) primitive, and subtracting the two readings gives the time in ms of the inner frame.
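The flat-sequence timing harness described above can be sketched in textual form. This is a hypothetical Python analog, not the original LabVIEW diagram; `time.perf_counter` plays the role of the Tick Count primitive:

```python
import time

def benchmark(func, *args, iterations=1000):
    """Average ms per call, mirroring the flat-sequence/Tick Count pattern."""
    start = time.perf_counter()            # first frame: read the tick count
    for _ in range(iterations):            # middle frame: FOR loop around the code under test
        func(*args)
    elapsed = time.perf_counter() - start  # last frame: read again and subtract
    return elapsed * 1000 / iterations     # convert to ms per iteration
```

Racing two candidate implementations is then just calling `benchmark` on each and comparing the returned times.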

Now race them and see who's fastest 🙂

Many times, it is much more important how you code it, and not how you call it. 😉

For example, have a look at the two alternative ways to remove certain rows from a 2D array posted earlier. They do exactly the same thing, but for a 20000x3 array input, the upper version takes 22 seconds while the lower one takes 14 ms on my rig. (The attached LabVIEW 7.0 benchmark code compares the two versions.)
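The attached VIs are not reproduced here, but the general pattern behind that kind of speedup can be sketched as follows (a hypothetical Python/NumPy rendering, not the original LabVIEW code): deleting rows one at a time reallocates the whole array on every call, while building a keep-mask and indexing once touches the data a single time:

```python
import numpy as np

def remove_rows_slow(arr, rows):
    # Delete one row per call: each np.delete allocates a fresh copy
    # of the entire array, so cost grows with rows * array size.
    for r in sorted(rows, reverse=True):
        arr = np.delete(arr, r, axis=0)
    return arr

def remove_rows_fast(arr, rows):
    # Build a boolean keep-mask and index once: a single pass, one allocation.
    mask = np.ones(arr.shape[0], dtype=bool)
    mask[list(rows)] = False
    return arr[mask]
```

Both return the same result; only the number of allocations and data copies differs, which is exactly the kind of "how you code it" difference the post is about.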

Typically it is not worth bothering to solve a problem that can also be solved by buying a 5% faster computer. Also be aware that exact results can vary quite a bit between different processors (e.g. AMD vs. Intel): one can be faster at one problem, while the other is better at a different algorithm. There are no hard rules of thumb; you need to test on your exact machine. Finally, make sure your critical code does not get starved for CPU by other processes (e.g. don't run a parallel while loop without any wait statement).
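The last point, a parallel loop without a wait, can be illustrated with a small sketch (a hypothetical Python rendering using threads; in LabVIEW this would be a parallel while loop with or without a Wait (ms) primitive):

```python
import threading
import time

stop = threading.Event()

def polling_loop_bad():
    # Free-running loop: spins as fast as possible, pegging one CPU
    # core and starving any time-critical code running alongside it.
    while not stop.is_set():
        pass

def polling_loop_good():
    # A short wait each iteration yields the CPU to other code,
    # like dropping a Wait (ms) into a LabVIEW while loop.
    while not stop.is_set():
        time.sleep(0.001)
```

Even a 0 ms wait in LabVIEW lets the scheduler give other loops a turn; the free-running version does not.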

Message Edited by altenbach on 04-13-2005 09:38 AM

Message 2 of 3
One thing Altenbach didn't mention is using LabVIEW's VI profiler to check the overall effect each VI has on your complete application. That may come a little late in the game, but if you do have problems, you will know exactly where they're coming from.

___________________
Try to take over the world!
Message 3 of 3