03-30-2020 08:49 AM
Hello,
I have a Python script that I am running inside a LabVIEW For Loop.
I noticed that the execution of the Python script itself is very fast (just a few microseconds), but each iteration of the For Loop takes hundreds of milliseconds.
I think the communication between LabVIEW and Python is slow.
Is it possible to make it faster?
I also noticed that memory and CPU usage are very low.
I have LabVIEW 2019 64-bit and Python 3.6 64-bit.
How can I use the full power of my machine (Core i7 vPro 9th Gen and 32 GB memory)?
I am using less than 1% of it.
Thanks,
03-30-2020 09:10 AM
Why do you think any of this? Whenever I see someone complaining that they aren't utilizing enough of their computer, it is usually a misconception about what constitutes an efficient application.
03-30-2020 10:29 AM
03-30-2020 11:36 AM
Oh yes, I was opening and closing the Python session for each iteration of the For Loop.
Please forgive my stupidity.
I moved the Open and Close nodes outside the loop and it is much faster now.
Thank you both for your help.
I still see that only 6% of the CPU is used. Probably I am missing something else.
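For anyone else who hits this: the slowdown is interpreter/session startup, not the script itself. A rough Python-only sketch of the difference (the subprocess call stands in for opening a new session on each iteration; this is an illustration, not the LabVIEW Python Node API):

```python
# Compares the cost of starting a fresh Python process per call (analogous
# to Open/Close Python Session inside the loop) with calling an
# already-loaded function in a long-lived session.
import subprocess
import sys
import time

def fitness(x):
    """Stand-in for the real script's function: trivially fast."""
    return x * x

N = 5

# Fresh interpreter per call (like Open/Close inside the loop).
t0 = time.perf_counter()
for _ in range(N):
    subprocess.run([sys.executable, "-c", "print(42 * 42)"],
                   capture_output=True, check=True)
per_call_startup = (time.perf_counter() - t0) / N

# One long-lived session, many calls (like Open/Close outside the loop).
t0 = time.perf_counter()
for _ in range(N):
    fitness(42)
in_session = (time.perf_counter() - t0) / N

print(f"fresh process per call: {per_call_startup * 1e3:.1f} ms")
print(f"call in open session:   {in_session * 1e6:.3f} us")
```

The startup cost is typically tens of milliseconds per call, which matches the "hundreds of milliseconds per iteration" symptom above.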
03-30-2020 11:40 AM
03-30-2020 12:00 PM - edited 03-30-2020 12:01 PM
Is there an example of how to use parallelization for the For Loop?
Actually, I am running hundreds of thousands of iterations, but each iteration uses the data from the previous one.
I read that Python can use multithreading, but I am not enough of an expert in Python to make it talk to LabVIEW across multiple threads.
03-30-2020 12:05 PM
@ziedhosni wrote:
Is there an example of how to use parallelization for the For Loop?
Actually, I am running hundreds of thousands of iterations, but each iteration uses the data from the previous one.
I read that Python can use multithreading, but I am not enough of an expert in Python to make it talk to LabVIEW across multiple threads.
Hmm. When the loop iterations are not independent ("each iteration uses the data from the previous one") then you'll have a much harder time parallelizing it.
If you can make them independent (or cut them into chunks), then in LabVIEW it's trivial: right-click on the For Loop and choose "Configure Iteration Parallelism..." (although note that you might need multiple Python sessions, so you'd probably want to be careful about the chunking).
If not, you can't do that.
In Python, the same is probably true, although you'll have to use the Python libraries to obtain new threads - a much more detailed/tedious process. If you're confident with Python, it's certainly possible, but it's more work than in LabVIEW.
However, with your current algorithm, you'll probably need to see if you can rearrange it to not have all iterations depend on the previous ones.
What are you doing?
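If the iterations can be made independent, the Python side of that chunking could look something like the sketch below, using the standard library's `concurrent.futures`. The function names (`evaluate`, `evaluate_chunk`) are made-up stand-ins for whatever the real script computes:

```python
# Sketch: split independent iterations into chunks and evaluate the chunks
# in parallel with a process pool. `evaluate` is a hypothetical stand-in
# for the real per-iteration computation.
from concurrent.futures import ProcessPoolExecutor

def evaluate(x):
    return x * x  # placeholder fitness computation

def evaluate_chunk(chunk):
    # Each worker handles a whole chunk to amortise the dispatch overhead.
    return [evaluate(x) for x in chunk]

def parallel_map(values, n_workers=4, chunk_size=1000):
    chunks = [values[i:i + chunk_size]
              for i in range(0, len(values), chunk_size)]
    results = []
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for chunk_result in pool.map(evaluate_chunk, chunks):
            results.extend(chunk_result)
    return results

if __name__ == "__main__":
    values = list(range(10_000))
    print(parallel_map(values) == [evaluate(v) for v in values])
```

The `if __name__ == "__main__"` guard matters on Windows, where worker processes re-import the script.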
03-30-2020 01:42 PM
I don't think I can apply parallelization to the current problem: I am using an optimisation algorithm that runs in a loop, and the fitness is updated from a Python script.
I thought that using a 64-bit version of LabVIEW would automatically activate multithreading.
03-30-2020 02:28 PM
AFAIK you can't multithread an algorithm that uses the previous results. Intermediate steps might be able to be multithreaded, but we don't know what the algorithm is so we can't say if it can be multithreaded or not.
64 vs 32 bit LabVIEW has nothing to do with multithreading, and since your code uses an external Python script anyway there's nothing LabVIEW can do to make it multithreaded.
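One common way out for optimisation loops specifically: the generations stay sequential, but if the algorithm evaluates a whole population per generation (as genetic algorithms and particle swarms do), the fitness evaluations *within* one generation are independent and can run in parallel. A minimal sketch of that pattern, with all names and the toy update rule being illustrative assumptions, not the poster's algorithm:

```python
# Sequential outer loop (generations), parallel inner evaluation (fitness
# of each candidate in the population). All names are hypothetical.
from concurrent.futures import ProcessPoolExecutor
import random

def fitness(candidate):
    # Stand-in for the real fitness script: distance from the point (3, 3, ...).
    return sum((c - 3.0) ** 2 for c in candidate)

def next_generation(population, scores):
    # Toy update rule: keep the best half, add mutated copies of it.
    ranked = [p for _, p in sorted(zip(scores, population))]
    best = ranked[:len(ranked) // 2]
    children = [[c + random.gauss(0, 0.1) for c in p] for p in best]
    return best + children

def optimise(generations=5, pop_size=8, dims=3):
    population = [[random.uniform(-5, 5) for _ in range(dims)]
                  for _ in range(pop_size)]
    with ProcessPoolExecutor() as pool:
        for _ in range(generations):                      # sequential
            scores = list(pool.map(fitness, population))  # parallel
            population = next_generation(population, scores)
        scores = list(pool.map(fitness, population))
    return min(scores)
```

Whether this applies depends on the algorithm; a strictly one-candidate-at-a-time method (e.g. plain gradient descent) stays sequential.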
04-01-2020 02:33 PM
Hi again,
I just realised that the output of the Python node is not reliable when I put the Open Python Session node outside the loop.
But it works properly when I put the Open node inside the loop.
Is there any way to keep the Open Python Session node outside the loop and flush or clean the session's memory so that the Python script runs properly?
The reason I don't want the Open node inside the loop is that the connection gets slow and each iteration takes almost 100 times longer to run.
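A likely cause of the unreliable output: when the session stays open, module-level state in the Python script persists between calls, whereas a fresh session starts clean every time. One workaround is to have the script reset its own state explicitly at the start of each run, so the Open node can stay outside the loop. A sketch, with all names hypothetical:

```python
# Hypothetical script structure for a long-lived Python session: any state
# the computation needs lives in one dict that is rebuilt explicitly,
# instead of accumulating in module-level globals between calls.
_state = {}

def reset():
    """Clear everything left over from previous calls in this session."""
    _state.clear()
    _state["history"] = []

def step(x):
    """One iteration of the computation; appends to session state."""
    _state["history"].append(x)
    return sum(_state["history"])

def run(values):
    """Entry point called from LabVIEW: resets, then runs all steps."""
    reset()
    return [step(v) for v in values]
```

With this pattern each call to `run` gets a clean slate without paying the session startup cost on every iteration.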