Can I increase the execution speed of this VI?

Ah, Tim was faster :). In my defense, I was using a phone to respond. 😮

Message 11 of 30

@altenbach wrote:

@MrJackHamilton wrote:

Another way to speed up your program is not to build the output array from the FOR loop. Preallocate the result array and use 'Replace Array Subset' to insert each result element (value) into the result array.


Building an array in an auto-indexing output tunnel of a FOR loop is equally efficient because the final array size is known when the loop starts and can be fully allocated at once.

 

Your suggestion just dramatically complicates the code without any benefit.


LabVIEW, when left to its own devices, will tend to allocate more memory, copy memory, and allocate memory in parallel with execution. When looking to 'lean out' your LabVIEW code, it's very important to deal with memory allocation explicitly. LabVIEW does not use pointers. I interpreted the question as one of optimization and therefore proposed that the user explicitly allocate the memory for the results.

 

When looking at the diagram, a 2D array is created as the output of the function. As implemented, you are not defining when this preallocation will occur. To optimize the execution of the calculation, it would be ideal for the allocation not to run in parallel with the calculation. Thus, preallocating the 2D array before the calculation explicitly ensures that LabVIEW performs this task in the order you intend.

 

So yeah, you can leave it to LabVIEW to 'handle' it, which it will. But optimization opens up an entire world of complexity that would be nice to avoid, yet is necessary nonetheless.
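LabVIEW is graphical, so the difference doesn't show well in text, but as a rough analogy in Python/NumPy (array size and values are made up for illustration), growing the result versus preallocating it looks like this:

    import numpy as np

    n = 10_000

    # Growing the array one element at a time: each np.append allocates a
    # new, larger array and copies everything accumulated so far.
    grown = np.empty(0)
    for i in range(n):
        grown = np.append(grown, i * 0.5)

    # Explicit preallocation: one allocation up front, then in-place writes
    # (the textual analogue of 'Replace Array Subset').
    result = np.empty(n)
    for i in range(n):
        result[i] = i * 0.5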

 

Regards

Jack Hamilton

Message 12 of 30

@altenbach wrote:

Ah, Tim was faster :). In my defense, I was using a phone to respond. 😮


To make you feel worse, I was using my phone as well.


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 13 of 30

MrJackHamilton wrote:

So yeah, you can leave it to LabVIEW to 'handle' it, which it will. But optimization opens up an entire world of complexity that would be nice to avoid, yet is necessary nonetheless.


In my experience, complications tend to cause inefficiencies. There are exceptions, but those rarely gain much and are only truly useful in extremely tight loops in an RT system. LabVIEW has come a long way in making optimizations. As long as you are halfway careful, it really will handle it. You cannot get more efficient than a FOR loop autoindexing an output, and it is simple: the best of both worlds.


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 14 of 30

MrJackHamilton wrote: 

Thus, preallocating the 2D array before the calculation explicitly ensures that LabVIEW performs this task in the order you intend.

 


The final 2D array will be pre-allocated in both cases before the FOR loop runs. Chances are that the compiled code is nearly identical.
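(Claims like this are easy to measure rather than argue about. In LabVIEW you would wrap each version in Tick Count (ms) readings; the same kind of micro-benchmark sketched in Python, with made-up sizes and a made-up calculation, looks like this:)

    import timeit

    setup = "import numpy as np; n = 100_000"

    build_at_once = "np.array([i * 0.5 for i in range(n)])"
    preallocated = (
        "out = np.empty(n)\n"
        "for i in range(n):\n"
        "    out[i] = i * 0.5"
    )

    print("build at once:", timeit.timeit(build_at_once, setup=setup, number=20))
    print("preallocated :", timeit.timeit(preallocated, setup=setup, number=20))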

 


@MrJackHamilton wrote:
LabVIEW, when left to its own devices, will tend to allocate more memory, copy memory, and allocate memory in parallel with execution. When looking to 'lean out' your LabVIEW code, it's very important to deal with memory allocation explicitly.

 


The LabVIEW compiler does optimizations way beyond what you think. An autoindexing output tunnel of a FOR loop is one of the most highly optimized constructs. Trying to micromanage the compiler by throwing much more code at it could potentially confuse it.

With autoindexing output tunnels, the compiler knows that the elements will be placed in order. If you preallocate manually, the compiler first needs to figure out that the index (wired to Replace Array Subset) is wired directly to the iteration terminal in order to make that same simplifying assumption, which requires more analysis.
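As a loose textual analogy (a Python sketch with a placeholder function f and placeholder data; this illustrates the shape of the two patterns, not LabVIEW's actual semantics):

    def f(x):
        return 2.0 * x  # stand-in for the per-element calculation

    data = list(range(10))

    # Analogue of the auto-indexing output tunnel: the output size is known
    # up front, and each element is written exactly once, in order.
    out_auto = [f(x) for x in data]

    # Analogue of preallocate + Replace Array Subset: correct, but the write
    # index is now explicit, so a compiler must first prove that it is simply
    # the iteration count before it can make the same assumption.
    out_manual = [0.0] * len(data)
    for i, x in enumerate(data):
        out_manual[i] = f(x)

    assert out_auto == out_manual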

Message 15 of 30

Jack,

 

I recommend the following reading:

LV Compiler under the hood

 

I hope you find it interesting.

 

Norbert
----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 16 of 30

@altenbach wrote:

Ah, Tim was faster :). In my defense, I was using a phone to respond. 😮


You probably wrote the answer as ASCII code in binary, right? 🙂

/Y

G# - Award-winning reference-based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 17 of 30

 

My experience is lots of FPGA and RT coding on cRIO, sbRIO, and the SOM, which are much more restrictive platforms than Windows. I don't really change my technique when coding LV on a Windows box; it's too hard for me to go back to inefficient methods.

 

LabVIEW is wonderfully forgiving, mainly because PC hardware is so insanely powerful. It's amazing what a 40 MHz FPGA can do compared to an i7 core running Windows. Of course, they are completely different hardware, but an FPGA can run laps around an i7, so much so that it is the de facto platform for true RT tasks. On paper, the i7 should outperform the FPGA by an order of magnitude or more, but it does not. [Yes, this is mainly due to the OS running on the CPU.]

 

It's all experience-based; there are no 'correct' answers. After coding some FPGA and RT applications, one quickly learns what efficient coding truly is. I work with a lot of very experienced LabVIEW coders who have a hard time accepting what they think they know about coding when they run into performance problems, and they have to unlearn a lot of stuff. Especially when they take their 10+ years of LabVIEW coding techniques on Windows and attempt to throw them at an RT system.

 

If you don't want anyone else to post on this forum, simply say so... or we can arm wrestle, I guess?

 

 

Message 18 of 30

@MrJackHamilton wrote:

 [...]

If you don't want anyone else to post on this forum, simply say so... or we can arm wrestle, I guess?

 

I think nobody intends to restrict opinions in postings. However, LV has a short release cycle of one year per major version (which quite a few customers complain about), and in this specific case that has a great advantage: NI can implement and provide improvements and more up-to-date technologies on a regular basis.

The biggest changes to the LV compiler were made in LV 2009 and 2010. These changes included the rework and introduction of the two-layer compilation model using DFIR and LLVM. This concept applies to all targets (Windows, macOS, Linux, RT, and FPGA). However, LLVM (Low Level Virtual Machine) code generation is target-specific, which means, for example, that for FPGA the Xilinx compiler kicks in.

Subsequent LV versions included smaller changes which can still affect the runtime performance of any application.

 

You are correct that, for all targets, it is recommended to help the compiler optimize as well as it can. Contrary to your experience, though, the way to do this can change between LV versions. So what was suggested in, for instance, LV 8.x is not necessarily suggested for, e.g., LV 2015 anymore. On the contrary, it might even have a negative impact... (my personal experience, for instance: the Inplace Element Structure).

The forum is a good place to discuss this stuff; however, it is always important to point out the LV version when talking about a specific optimization implementation.

Norbert
----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 19 of 30

@MrJackHamilton wrote:

 

My experience is lots of FPGA and RT coding on cRIO, sbRIO, and the SOM, which are much more restrictive platforms than Windows. I don't really change my technique when coding LV on a Windows box; it's too hard for me to go back to inefficient methods.

...

 

Especially when they take their 10+ years of LabVIEW coding techniques on Windows and attempt to throw them at an RT system.

 

If you don't want anyone else to post on this forum, simply say so... or we can arm wrestle, I guess?

 

 


 

 Hi Jack,

 

Tim is a CLED and Christian wrote the book on performance.

 

Your points were perfectly valid for the LabVIEW of 10 years ago, but LabVIEW is now much smarter about allocating memory for auto-indexing FOR loops.

 

I also have to agree with you that what is "good code" for Windows sucks for RIO, and vice versa. It still makes me cringe when I see locals being used in high-performance loops. But I have to accept these differences, since on the Windows platform all code has to run through a shared CPU, while on an FPGA the logic is spread out and duplicated many times over. A bit of a paradigm shift, but it is what it is.

 

Keep posting and the contributors here will help. 

 

😉

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper | LinkedIn Profile | YouTube Channel
Message 20 of 30