
Exe consumes much more CPU than development

I am working on an old project in LV 2011. The project has a long history going back to LV 8.6 or even earlier.

I just noticed that the newly built EXE consumes about 30 % CPU, whereas the same main VI running in the development environment uses only between 2 and 4 %.

An interesting fact is that the EXE produces about the same CPU load on different machines. I have one with an i3 CPU and two others with i7 CPUs, all running under Win10.

A web search found only one thread describing a similar problem. It recommended replacing any property nodes used for opening the front panel with the Open Front Panel method. I have done that, but it made no difference.

The application is for a custom monitoring system, for configuration and visualization of the measurement data.

It is about 500 subVIs, including the library functions. Communication with the hardware is via a (virtual) COM port or TCP/IP. For COM it uses LVSerial.DLL and a virtual COM port driver from STM (an Arm-class CPU in the monitoring hardware). For TCP it uses the native LabVIEW TCP functions. There is NO VISA involved.

The main VI does some initialization during startup and then waits for user action. This is implemented as a case structure with the default case doing nothing. Nowadays I would implement this with an Event Structure, but the main VI was written before that even existed. Yes, the while loop has a Wait to release CPU time to other tasks.

 

I still don’t see any silly programming mistakes, as the application runs fine in the development environment, consuming about 2 to 4 % CPU load. The EXE uses about 30% CPU !!!

I have tried the remote debugging, running the EXE on a target machine and the debugging on the development machine, both connected in the local network.

Now imagine what happens: as soon as remote debugging is invoked, the CPU load of the target machine drops to about 4 to 6 %. How can that be? It seems the target machine somehow uses the development environment instead of the runtime engine while in remote debug mode? That would lead to the conclusion that the runtime engine is the culprit.

 

Just to mention: LabVIEW 2011, all machines Win10. One i3 target, one i7 target, i7 development.

All machines go to the same 30% CPU load when the exe is run and just sits there waiting for user action.

Message 1 of 27

Soooo many thoughts...

 

The debugging might introduce idle time in something that otherwise runs free. Or it makes the code a tiny bit slower, so some deadline is missed, and the next one introduces a wait.

 

Similar things could happen with your display. A lower refresh rate, or even a faster one, could make updates hit or miss marks that would otherwise introduce idle time.

 

The serial stuff could be the same story.

 

Of course the code could have a time-dependent wait, resulting in a steady 30%. I once made a wait controlled by a PID loop, with the CPU load as feedback... Something like that could have happened accidentally.

 

If Wait Until Next ms Multiples are used (by mistake) instead of Wait (ms), then faster code execution could actually result in less idle time... That would be quick to search for.

 

30% does make sense if all machines are quad-core. 25% could mean one core is completely used (e.g. no wait), plus ~5% of other stuff running in parallel. If an octa-core or dual-core machine also shows 30%, then yes, it's weird.
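The arithmetic behind that guess, as a quick sanity check (the 5 % background figure is just the assumption from above):

```python
cores = 4                          # assume a quad-core machine
one_core_pegged = 100.0 / cores    # a loop with no wait pins one core: 25 % overall
background = 5.0                   # assumed load from everything else running in parallel
total = one_core_pegged + background
print(total)                       # 30.0 -> matches the observed ~30 % load
```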

 

It's all a stretch, but I guess you already went over the obvious stuff...

 

Is the CPU load still low when you build with debugging enabled but don't connect a debugger (i.e. LabVIEW)?

Are you absolutely sure that no part of your code (code that might wait) is skipped because of some error?

Is it possible to turn off parts of the code, to locate where the problem is?

Or can you make a test executable to which you add features of the original step by step?

Are those DLL calls running in the UI thread? Is that needed?

Message 2 of 27

 I had a similar problem back in the LV8.x era and it turned out to be greedy loops.

 

In reality I do not remember the "greedy loop" being as big a problem before LV 8.0, but you have to take care now, because a loop spinning while waiting for a control to change can suck up 100% of the CPU basically doing nothing.
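Since the block diagrams themselves are graphical, here is a text-language sketch (Python, with made-up names) of the difference between a greedy polling loop and one with a wait. The `wait_ms` input plays the role of LabVIEW's Wait (ms) inside the loop:

```python
import time

def poll(condition, timeout_s, wait_ms):
    """Spin until `condition` is true or the timeout expires; count iterations."""
    iterations = 0
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline and not condition():
        iterations += 1
        if wait_ms:
            time.sleep(wait_ms / 1000.0)  # yields the CPU, like a Wait (ms) in the loop
    return iterations

never = lambda: False  # a control that never changes: worst case for polling

greedy = poll(never, 0.05, 0)    # no wait: the loop spins flat out on one core
polite = poll(never, 0.05, 10)   # 10 ms wait: a handful of iterations, near-zero CPU
print(greedy, polite)            # greedy runs orders of magnitude more iterations
```

The greedy variant does nothing useful in its extra iterations; it just burns a core, which is exactly what a waitless while loop does in a built EXE.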

========================
=== Engineer Ambiguously ===
========================
Message 3 of 27

Just to illustrate how Wait Until Next ms Multiple can cause this:

 

In LabVIEW something takes 101 ms. WUNmsM set to 100, will wait 99 ms.

In the executable, the same thing takes 99 ms. WUNmsM set to 100, will wait 1 ms.
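A small hypothetical helper (not a LabVIEW primitive, just a model of the node's arithmetic) makes that concrete: WUNmsM pads the loop out to the next multiple of the interval, so the idle time depends entirely on how long the work happened to take:

```python
def wunmsm_idle(work_ms, multiple_ms):
    """Simplified model of Wait Until Next ms Multiple:
    the idle time added after work_ms of work, padding out
    to the next multiple of multiple_ms."""
    return multiple_ms - (work_ms % multiple_ms)

print(wunmsm_idle(101, 100))  # 99 -> IDE: 101 ms of work, 99 ms idle (period 200 ms)
print(wunmsm_idle(99, 100))   # 1  -> EXE: 99 ms of work, 1 ms idle (period 100 ms)
```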

Message 4 of 27

wiebe@CARYA wrote:

Just to illustrate how Wait Until Next ms Multiple can cause this:

 

In LabVIEW something takes 101 ms. WUNmsM set to 100, will wait 99 ms.

In the executable, the same thing takes 99 ms. WUNmsM set to 100, will wait 1 ms.


One of my pet peeves is the Wait Until Next ms Multiple... I never use them, and if I find them in code I change them to the normal Wait.

 

This is my soapbox speech on that critter...

 

In LabVIEW we use "cooperative multitasking": we write multithreaded code such that no single thread gobbles up all of the available CPU. This is done using a wait function, which removes a thread from the execution queue and allows other threads to execute. Even a "0 ms Wait" is enough to let more than one thread execute.

 

When we develop multithreaded code using a "Wait Until ...", the threads are put to sleep until the next even ms multiple occurs.

 

Now when we have many loops waiting for their turn to run, we use wait values that make sense to us, like "500" or "100", depending on what the loop does and how responsive we need the loop to be. But the issue with values like "100", "250", "500" is that twice a second all of the sleeping loops wake up and fight over the CPU. It is like rush hour, with everyone waking up at the same time and vying for the same limited resource. Meanwhile the CPU is wasted between the multiples.

 

In the case of a limited-access roadway, it makes sense to stagger the times and spread out the usage. Allowing for different start times can fix the road bottleneck.

 

Same thing applies to threads and the limited CPU.

 

Sure, you can still use the "Wait Until ..." and avoid the fighting by making sure the wait values are not even multiples of each other. So instead of values like "100" we have to use values like "81", "125", "121", "343", whose multiples do not share common prime factors. After all, we are not cicadas.
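The pile-up is easy to quantify with a quick simulation (a Python sketch with a deliberately simplified wake-up model; the loop periods are the values from the text). Round values collide on every common multiple, while the staggered values almost never wake together:

```python
from collections import Counter

def collisions(periods_ms, horizon_ms):
    """Count the instants where two or more loops wake on the same millisecond."""
    wakeups = Counter(t for p in periods_ms
                        for t in range(p, horizon_ms + 1, p))
    return sum(1 for n in wakeups.values() if n > 1)

print(collisions([100, 250, 500], 10_000))      # 20: all three loops pile up every 500 ms
print(collisions([81, 125, 121, 343], 10_000))  # 1: only 81 and 121 meet, at 9801 ms
```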

 

Why bend over backwards to use the "Wait Until ..." when we can simply use the normal "Wait" and take advantage of the fact that different threads will run at different times, as they are each scheduled by the OS?

 

So use the normal "Wait" and spread out the load on the CPU without having to memorize all of the exponents of prime numbers.

 

Stepping down from my soap box.

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper
Message 5 of 27

Thank you, Ben, for explaining the differences between the "wait" functions.

I have replaced all WUNmsM with pure wait.

Unfortunately this does not make any difference in my case.

Anyway, it might have positive impact in other projects.

 

Message 6 of 27

What exactly do you mean with "greedy loops"?

Do you refer to the wait in a while loop (WUNmsM)?

I have changed from WUNmsM to pure wait, but that did not change anything in the CPU load.

 

Message 7 of 27

@Balanceman wrote:

What exactly do you mean with "greedy loops"?

Do you refer to the wait in a while loop (WUNmsM)?

I have changed from WUNmsM to pure wait, but that did not change anything in the CPU load.

 


Any loop that does not release the CPU.

 

A loop can release the CPU by waiting on an I/O operation to complete or by explicitly using a "wait". For the most part, the wait value used is determined by how often that code has to iterate. In the case of code that is crunching numbers, a "wait" with a "zero" wired to it can be used to allow other processes to get control of the CPU.
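The same trick exists in text languages: a zero-length sleep still hands the scheduler a chance to run something else, so even a pure number-crunching loop stays cooperative. A minimal Python sketch of the pattern (the names and the 50 ms window are made up for the illustration):

```python
import threading
import time

progress = {"cruncher": 0, "other": 0}
stop = threading.Event()

def cruncher():
    # A number-crunching loop that still cooperates: sleep(0) is the
    # text-language analogue of a Wait with a "zero" wired to it.
    while not stop.is_set():
        progress["cruncher"] += 1
        time.sleep(0)  # yield so other threads can get the CPU

worker = threading.Thread(target=cruncher)
worker.start()
end = time.monotonic() + 0.05
while time.monotonic() < end:      # meanwhile the "other" work runs too
    progress["other"] += 1
stop.set()
worker.join()
print(progress)                    # both counters advanced: neither thread starved
```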

 

Now keep in mind that some of the things I have mentioned in this thread were very important when we were running on 400 MHz single-core CPUs. Multicore processors can handle more abuse.

 

Now to see if I can help with your specific issue.

 

Is there any aspect of the application that is different when running as an exe?

 

When you say the exe uses more CPU, is it on the same machine where you test it in the development environment?

 

Take care,

 

Ben

Message 8 of 27

@Ben wrote:

wiebe@CARYA wrote:

Just to illustrate how Wait Until Next ms Multiple can cause this:

 

In LabVIEW something takes 101 ms. WUNmsM set to 100, will wait 99 ms.

In the executable, the same thing takes 99 ms. WUNmsM set to 100, will wait 1 ms.


One of my pet peeves is the Wait Until Next ms Multiple... I never use them, and if I find them in code I change them to the normal Wait.


Not to say that they are evil, but I don't want them in my projects.

 

Multiple Timed Loops can behave similarly (they have lots of options). Some seem to prefer them over 'plain' while loops. Perhaps from a course showing off fancy new LV stuff? I don't know, but I avoid them.

 

Both Ben's explanation and mine (which describe different problems) apply to Timed Loops.

 

So if you have timed loops, you could consider changing them next.

Message 9 of 27

I have replaced the WUNmsM with Wait.

I don't have any Timed Loops in that project.

I have disabled several functions as a test (via False cases).

Have checked the compile logfile. No warnings.

The main VI just sits there and waits for user input / clicking any button which would then call a subVI. The main loop has now a 100 ms wait.

CPU load is still about 30% when running the EXE.

Somehow I get the feeling that the EXE builder does something weird...

Message 10 of 27