03-04-2026 10:47 AM
This is a very general question; I'm trying it with the user group before researching further (so here goes).
I have an optical wafer test system, and I use a Python program to calculate the results from a raw spectrum (as I step through the wafer and test each LED device).
The Python calculation doubles the overall test time at each device on the wafer.
(Yes, I could run the calculations on the stored raw spectra at a later date.)
But I'm wondering if an alternative would be to run the Python calculation separately, so the main program doesn't pause while the Python calculation completes. (The overall Python calculation takes approximately three seconds.)
This concept would be very new to me, so I was just wondering if I'm looking at this the right way, and is it possible?
03-04-2026 11:13 AM
Since you posted here, can you explain how LabVIEW fits into all this?
03-04-2026 11:20 AM
Oh sorry for the confusion
The entire program is in LabVIEW.
I'm using a System Exec VI... (I call into Python to do the calculations at each measurement point).
03-04-2026 11:23 AM
Well, I guess I could just run the Python program separately, outside of LabVIEW, while the LabVIEW program is running.
It's just that I've always heard about dual processing in LabVIEW and I was wondering if this is an instance where it could be used.
03-04-2026 11:24 AM
If you are asking "Can LabVIEW support concurrent processing?", then the answer is "Yes, because LabVIEW is a 'Data Flow' language and can have multiple processing routines running in parallel". Of course, if the two routines are "CPU-bound" (meaning each requires 100% of the CPU to do its calculations), then the answer is "No", as 100% effort + 100% effort = twice as much time.
But if one task's speed is governed by, say, DAQ hardware that acquires data from external instruments and hands you, say, a million data points every three seconds (which usually means it "waits" for 2.99 seconds while it gathers points into a buffer, then in the last 10 milliseconds dumps them into PC memory), you can use the Producer/Consumer design pattern to ship those points off to a parallel (LabVIEW) analysis routine that can occupy the CPU for the next 2.99 seconds. (Whether you can do this with Python in the mix, I don't know.)
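To make the Producer/Consumer idea concrete outside of LabVIEW's graphical diagrams, here is a minimal Python sketch of the same pattern: one thread stands in for the acquisition loop and hands each batch to a queue, while a second thread analyzes batches in parallel. Everything here (the batch contents, the `sum` "analysis") is a placeholder for illustration, not the actual wafer-test code; in LabVIEW the equivalent would be two While Loops connected by a Queue.

```python
import queue
import threading
import time

def producer(q, n_batches):
    """Stand-in for the acquisition loop: hand off each batch as soon as it's ready."""
    for _ in range(n_batches):
        time.sleep(0.01)          # stand-in for the hardware's buffering time
        q.put(list(range(100)))   # stand-in for one batch of spectrum points
    q.put(None)                   # sentinel: no more data coming

def consumer(q, results):
    """Stand-in for the analysis loop: process batches while acquisition continues."""
    while True:
        batch = q.get()
        if batch is None:
            break
        results.append(sum(batch))  # stand-in for the real calculation

q = queue.Queue()
results = []
t1 = threading.Thread(target=producer, args=(q, 5))
t2 = threading.Thread(target=consumer, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)
```

The key property, in either language, is that the producer never waits for the analysis: it just enqueues the data and goes back to acquiring.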
It would have helped to understand your situation if you provided descriptions of what the LabVIEW part of the program was doing in its three-second time slot. I'm guessing moving the wafer, "turning on and off" parts of the wafer, acquiring data from the wafer, and handing it off to Python. But that's just a guess.
Bob Schor
03-04-2026 11:50 AM
Since the device does not need to wait for the results, you could handle this in a process running in parallel with the main loop.
How is the input to the Python program relayed (a file?)
What should happen to the result of the Python calculation?
How hard are the calculations? Why not do them in plain LabVIEW, in parallel via a queue?
03-04-2026 12:11 PM
Thank you for the response
As you point out, yes, there is a stepper motor, a spectrometer, device power on/off, forward voltage for the LED in question, etc.
When I reach the completion point for the LED in question, I output the raw spectrum to a text file.
Once the measurement is completed and I have the raw spectrum in a text file, the LabVIEW program executes the System Exec call for Python.
If I disable that part of the LabVIEW program and compare the overall time for each individual LED test, I find that it is three seconds shorter.
Of course, I then don't get the Python calculations, which come from the LuxPy program.
The System Exec call for Python is shown in the attachment; it is the beginning of outputting those results into a final CSV file with the other data.
03-04-2026 12:50 PM
Do you really need the outputs from System Exec? If not, you can wire a FALSE to "wait until completion?" and it will no longer block the code's progress.
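As a rough Python analogy (not the LabVIEW node itself): wiring FALSE to "wait until completion?" is like using `subprocess.Popen` instead of `subprocess.run`. `Popen` launches the external program and returns immediately, while `run` blocks until the program finishes. The inline `time.sleep` scripts below are just stand-ins for a long-running calculation.

```python
import subprocess
import sys
import time

start = time.monotonic()
# Fire-and-forget: returns immediately, like "wait until completion?" = FALSE.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(2)"])
launch_time = time.monotonic() - start
print(f"Popen returned after {launch_time:.3f} s")  # far less than the 2 s the script runs

# Blocking call, like "wait until completion?" = TRUE (the default behavior).
subprocess.run([sys.executable, "-c", "import time; time.sleep(0.1)"])
print("run() only returns once the child script has finished")

proc.wait()  # reap the background process before exiting
```

The trade-off is the same in both environments: a non-blocking launch means you can't read the program's output (stdout, exit code) at the call site, so the results have to come back some other way, e.g. a file the script writes.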
03-04-2026 01:07 PM
No idea what version of LabVIEW you are using, but there is a Python Node in newer versions. You could call your script in another loop, with no System Exec.
However, even if all of this runs in parallel, it won't speed up the program that much. Assume you have 100 steps and it takes 3 s to process each step in Python; then your program runs for 5 minutes, unless you have multiple "workers", that is, as one data set starts to be processed, another "worker" handles the next data set before the previous one is finished, and so on.
The best way to improve your speed is to optimize your Python script, rewrite it in LabVIEW, or use another language that is faster than Python.
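The "multiple workers" idea can be sketched on the Python side with `concurrent.futures`. Everything below is a placeholder: `analyze` stands in for the real LuxPy calculation (with a scaled-down `sleep` instead of the 3 s processing time), and the spectra are dummy lists. With four workers, eight batches finish in roughly a quarter of the serial time.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def analyze(spectrum):
    """Placeholder for the real LuxPy calculation (~3 s per device in the thread)."""
    time.sleep(0.2)        # scaled-down stand-in for the processing time
    return sum(spectrum)   # stand-in result

spectra = [list(range(100))] * 8  # stand-in for eight devices' raw spectra

start = time.monotonic()
serial = [analyze(s) for s in spectra]  # one at a time: ~8 x 0.2 s
serial_time = time.monotonic() - start

start = time.monotonic()
# Four workers process batches concurrently: ~2 x 0.2 s instead of 8 x 0.2 s.
# (sleep releases the GIL, so threads overlap here; a genuinely CPU-bound
# LuxPy calculation would need ProcessPoolExecutor instead.)
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(analyze, spectra))
parallel_time = time.monotonic() - start

print(f"serial: {serial_time:.2f} s, parallel: {parallel_time:.2f} s")
```

Note the caveat in the comment: worker pools only help if the workers can genuinely run at the same time, which for CPU-bound Python code means separate processes (or, as suggested above, a faster implementation of the calculation itself).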
03-04-2026 01:54 PM
Makes sense. Thank you.