LabVIEW


High CPU usage: using clusters vs handling data individually?


@Yamaeda wrote:

@altenbach wrote:

@Yamaeda wrote:

11% is very close to 1 CPU spinning wildly, which is typical of a greedy loop.


The N200 only has four regular cores, so one core would be close to 25%.


Fair enough. Though those two loops should not take 50% of one core regardless, unless it's underclocked to 1 MHz. 🙂
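For context, a "greedy" loop is one that free-runs with no wait, so it pins a CPU core at 100% doing nothing useful. A quick sketch (Python used here as a textual stand-in for the LabVIEW diagram; the function names are illustrative) shows how dramatically an unwaited loop differs from the same loop with even a 1 ms wait, which is why a Wait (ms) primitive in a polling loop matters:

```python
import time

def spin(duration):
    """Greedy loop: burns a CPU core for `duration` seconds, no wait."""
    end = time.perf_counter() + duration
    count = 0
    while time.perf_counter() < end:
        count += 1
    return count

def polite(duration, wait=0.001):
    """Same loop with a short wait each iteration, yielding the CPU."""
    end = time.perf_counter() + duration
    count = 0
    while time.perf_counter() < end:
        time.sleep(wait)
        count += 1
    return count

greedy = spin(0.2)
nice = polite(0.2)
# The greedy loop iterates orders of magnitude more often while doing
# no more useful work -- that surplus is the core "spinning wildly".
print(greedy, nice)
```

The same idea applies on the block diagram: a While Loop with no wait consumes a full core, which on a 4-core N200 shows up as roughly 25% total CPU per loop.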


As I usually say — a processor with a 6W Max TDP will also perform at 'six watts', so don't expect much. Guess why I have a 175W CPU?

[Attachment: Screenshot 2025-05-06 09.07.14.png]

LabVIEW needs some resources.

I have a strong feeling that the subVIs are not inlined, and that there may be a penalty when the cluster is passed across terminals. The only way to find out is to strip the code down step by step and see where the bottleneck is. Check the individual execution time of each subVI, as well as the overall CPU load of each loop. LabVIEW ships with a profiler (Tools » Profile » Performance and Memory), which should be the first troubleshooting step here. Also, I don't think a Timed Loop pinned to a dedicated CPU will help. What happens if it is replaced with a regular While Loop? What if the first loop is disabled, or the second?
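The profiling workflow described above can be illustrated in a text language. This is a minimal sketch using Python's built-in cProfile in the role of LabVIEW's profiler; `slow_step` and `fast_step` are made-up stand-ins for the subVIs, and the deliberate delay simulates a bottleneck:

```python
import cProfile
import io
import pstats
import time

def fast_step():
    # Cheap work: stands in for a lightweight subVI.
    return sum(range(100))

def slow_step():
    # Simulated bottleneck: stands in for an expensive subVI.
    time.sleep(0.02)

def main_loop(iterations=10):
    # Stands in for one iteration loop calling both subVIs.
    for _ in range(iterations):
        fast_step()
        slow_step()

profiler = cProfile.Profile()
profiler.enable()
main_loop()
profiler.disable()

# Sort by cumulative time so the bottleneck floats to the top,
# just as the LabVIEW profiler ranks VIs by total time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
report = stream.getvalue()
print(report)
```

The report makes it obvious that `slow_step` dominates the cumulative time, which is exactly the per-subVI timing information needed before optimizing anything.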

Message 21 of 22

@Andrey_Dmitriev wrote:


As I usually say — a processor with a 6W Max TDP will also perform at 'six watts', so don't expect much. Guess why I have a 175W CPU?


I generally use a decent laptop (Dell Precision)

[Attachment: Yamaeda_0-1746633699544.png]

 

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 22 of 22