
Help with execution efficiency when using subVIs and object-oriented methods

Here's one for the Labvillians (NI LV gurus and other power users) out there:

 

Background:

I am developing a large application that handles a lot of data in many different forms, ranging from synchronous analogue waveforms to asynchronous CAN message/frame data. This necessitates several different data types that can be appended to.

 

The polymorphism offered by classes in object-oriented programming makes it a very attractive solution.
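To show why it's attractive, here's a rough sketch of the idea in Python rather than LabVIEW (the class names are made up for illustration): one parent class defines append, and each data type overrides it, so one logging loop can treat every buffer identically.

from abc import ABC, abstractmethod

# Hypothetical analogue of the class design: an abstract parent defines
# "append"; each data source overrides it for its own payload type.
class DataBuffer(ABC):
    @abstractmethod
    def append(self, sample):
        ...

class WaveformBuffer(DataBuffer):
    def __init__(self):
        self.samples = []
    def append(self, sample):
        self.samples.append(sample)   # synchronous analogue point

class CANFrameBuffer(DataBuffer):
    def __init__(self):
        self.frames = []
    def append(self, sample):
        self.frames.append(sample)    # asynchronous CAN frame

def log(buffers, new_data):
    for buf, item in zip(buffers, new_data):
        buf.append(item)              # dynamic dispatch picks the right type

bufs = [WaveformBuffer(), CANFrameBuffer()]
log(bufs, [1.23, b"\x01\x02"])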

 

The Problem:

Writing to an object takes twice as long as writing to an array located in the main VI.

A subVI has slightly better performance.

Enforcing the new "By-Ref" method is slower again.

Put "By-Ref" in a subVI and execution time increases to nearly 4x (see the sketch below).
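As a loose picture of why by-ref costs more per call: the reference mechanism (a DVR in LV 2009, I assume) serialises access to the shared data, so every append pays an acquire/release on top of the write itself. A minimal Python sketch of that difference, where a lock stands in for the reference access (an analogy only, not LabVIEW's actual implementation):

import threading

class ByRefBuffer:
    def __init__(self):
        self._lock = threading.Lock()
        self._data = []

    def append(self, x):
        with self._lock:              # serialise access, like a reference node
            self._data.append(x)

plain = []                            # "Main": direct, no synchronisation
shared = ByRefBuffer()                # "By Ref": same work plus lock traffic

for i in range(100_000):
    plain.append(float(i))
    shared.append(float(i))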

 

Attached is my LV 2009 test code "Speedy", which appends 100,000 data points one at a time to an array. I have also included results for 1,000,000.

 

Method              100,000    1,000,000    CPU    Scaling
------------------  --------   ---------    ----   ----------------------
Main                17 ms      165 ms       19%    x10 increase (linear)
Sub VI              30 ms      357 ms       24%    x10 increase (linear)
By Ref              47 ms      795 ms       53%    x17 increase
By Ref subVI        62 ms      988 ms       58%    x15 increase
Object              35 ms      681 ms       25%    x20 increase
Object subVI        48 ms      808 ms       35%    x17 increase
Object by ref       75 ms      1123 ms      44%    x15 increase
Obj by ref subVI    87 ms      1252 ms      40%    x15 increase
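For reference, the shape of the measurement is easy to reproduce in a text language. A rough Python analogue of the "Main" case (illustrative only, not the attached VI): append N points one at a time and report elapsed milliseconds, so the two problem sizes can be compared for linear scaling.

import time

def time_append(n):
    data = []
    t0 = time.perf_counter()
    for i in range(n):
        data.append(float(i))         # one element per iteration, as in the VI
    return (time.perf_counter() - t0) * 1000.0  # ms

for n in (100_000, 1_000_000):
    print(f"{n:>9} points: {time_append(n):7.1f} ms")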

 

 

Be careful when running this module: if you open a subVI's front panel, execution can be as much as 50 times slower (this makes sense, since LabVIEW has to keep making array copies for the front panel).

For subVIs using "By-Ref", execution time does not increase (by much) when the subVI is open; by design it is largely immune to this problem.

Note: this is a simplified example. The code could easily be optimised for speed (e.g. subroutine priority), but these optimisations are not practical in my application - subroutines can't be functional globals.
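For anyone unfamiliar with the pattern: a functional global keeps its state in the uninitialised shift register of a non-reentrant VI. A loose Python analogue, purely for illustration, where a closure plays the part of the shift register:

def make_functional_global():
    data = []                         # the "uninitialised shift register"
    def action(op, value=None):
        if op == "append":
            data.append(value)        # state persists across calls
        elif op == "read":
            return list(data)         # copy out on demand
    return action

buffer = make_functional_global()
buffer("append", 1.0)
buffer("append", 2.0)
print(buffer("read"))                 # [1.0, 2.0]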

 

My Questions:

0. Are there any fundamental flaws in my coding that are causing the differences?

1. Why does a subVI have such a large overhead? Can it be reduced?

2. Why do "By-Ref" methods have such a large overhead?

3. Which is the best short-term method to use? (A subVI is the best balance between good code structure and performance.)

4. Which is the best long-term method to use (are NI planning any fixes)?

 

Note: my application is very busy and this thread is low priority, so a method that takes a while but doesn't use a lot of CPU is acceptable.

 

I appreciate help or insight of any sort.

 

Kind regards,

 

Tim L.

 

iTm - Senior Systems Engineer
uses: LABVIEW 2012 SP1 x86 on Windows 7 x64. cFP, cRIO, PXI-RT
Message 1 of 6
STOP IT>>>>>........:smileymad:
Message 2 of 6

Timmar wrote:

Here's one for the Labvillians (NI LV gurus and other power users) out there:

...

Attached is my LV 2009 test code "Speedy", which appends 100,000 data points one at a time to an array. I have also included results for 1,000,000.

[results table snipped - see above]

...

Tim L.

 


Those are very interesting numbers, Tim.

 

If your benchmarks are correct, I am curious about the methods that do not scale linearly. I don't think I'll get a chance to look at your posted code so I'll have to let others dig and comment.

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper
Message 3 of 6

Thanks Ben,

 

I can speculate about the nature of the non-linearities.

Memory allocation is a fickle thing. Adding one element to a small array is a relatively simple operation.

Add an element to a large array on a memory boundary and chaos ensues.
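To illustrate the point outside LabVIEW, here's a quick Python sketch of growing an array one append at a time versus preallocating once and replacing in place - the same idea as LabVIEW's usual Initialize Array + Replace Array Subset fix. The numbers are illustrative only:

import time

def grow_by_append(n):
    a = []
    for i in range(n):
        a.append(i)                   # may trigger reallocation + copy
    return a

def preallocated(n):
    a = [0] * n                       # single allocation up front
    for i in range(n):
        a[i] = i                      # replace in place, no reallocation
    return a

for f in (grow_by_append, preallocated):
    t0 = time.perf_counter()
    f(1_000_000)
    print(f.__name__, f"{(time.perf_counter() - t0) * 1000:.1f} ms")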

 

My major concern was the massive performance hit when using a subVI.

Good coding practices discourage the "all in one sheet" approach to coding.

My results hint that (in LV 2009 at least) it is by far the most execution-efficient way.

 

I am looking for someone in the NI LabVIEW world who can explain this.

Any hints?

 

iTm - Senior Systems Engineer
uses: LABVIEW 2012 SP1 x86 on Windows 7 x64. cFP, cRIO, PXI-RT
Message 4 of 6

Running benchmarks like this can be a very frustrating act.

 

You need to make sure the order of testing is not influencing your timings due to memory allocation within LV.
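One way I'd guard against that: repeat each case a few times and keep the best time, so one-off allocations and warm-up don't systematically favour whichever case happens to run last. A rough Python sketch of the idea (the workload here is just a stand-in, not your code):

import time

def best_of(func, reps=5):
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        func()
        best = min(best, time.perf_counter() - t0)
    return best * 1000.0              # ms; the minimum filters warm-up noise

cases = {
    "main": lambda: [float(i) for i in range(100_000)],
    # ... the other seven cases would go here
}
for name, case in cases.items():
    print(f"{name}: {best_of(case):.1f} ms")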

 

I don't see any immediate reason why the scaling between one method and another should be different unless you're making more copies of the array in one version than another. Speed-wise they should scale similarly.

 

What IS interesting is the absolute timing differences between the methods. I think the Object method does pretty well there, coming very close to subVI levels (ignoring the dodgy scaling with larger arrays for the moment).

 

I might have a look at the code if I have some free time.  This is unlikely to happen any time soon though.

 

Shane

Message 5 of 6

Intaris wrote:

Running benchmarks like this can be a very frustrating act.

...

Shane


Timmar,

 

If you have not already seen the "Clear as mud" thread, then I highly recommend reading it, since it changed my coding habits.

 

Yes, getting good benchmarks can be tricky and requires time and patience to make sure we are really timing what we think we are timing.

 

I have the next five days off, so I'll try to review these findings myself.

 

I'll share what I find.

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper
Message 6 of 6