Is Python faster than LabVIEW for PicoScope 6424E Rapid Block with Overlapped Acquisition?


Hi everyone,

I'm currently using a PicoScope 6424E with LabVIEW 2023 and have implemented a producer/consumer architecture using rapid block mode with overlapped acquisition. My setup acquires data from 3 channels simultaneously at 10-bit resolution, triggered by an external laser firing at 100 kHz.

To collect a total of 60k segments, I loop 6 times with 10k segments per run.

The producer handles acquisition and pushes arrays into a queue, while the consumer processes the data in parallel.

 

Everything works fine, but I'm wondering:
Would rewriting this entire pipeline in Python result in faster or more efficient performance?
I'm considering Python because of potential advantages in buffer control, multithreading, and low-level SDK access.

However, LabVIEW does a good job managing driver-level timing and producer/consumer flow.

 

Switching from LabVIEW to Python would also require me to rewrite all the parts of the VI that handle communication with other devices in the setup, such as motorized delay stages and external instruments currently controlled via LabVIEW drivers or VISA.

Has anyone done a similar comparison between LabVIEW and Python for high-speed acquisition with the PicoScope 6000 series, especially with external triggering and large segmented memory?

I should also mention that I have never used Python so far; I have a bit of experience with MATLAB, and I would base the code mainly on the examples provided by the manufacturers.

I'd appreciate any insight or suggestion.

 

Thanks in advance!

Solution
Accepted by topic author tbianconi

It all depends on the implementation of the PicoScope interface library. But assuming that they are equivalent, it is EXTREMELY unlikely that you would get better performance in Python. Python is a bytecode-interpreted language: your Python code gets translated into bytecode for a virtual CPU, and that bytecode is then read, interpreted, and translated to instructions for the actual CPU underneath every time it runs.

 

LabVIEW compiles its diagram directly into the target CPU instructions and that is what gets executed natively on the CPU.

 

This difference can be extreme, for instance if you operate on arrays of data without special libraries such as numpy. Python itself has no native array datatype, only lists, which are extremely inefficient for handling large amounts of data. numpy solves that by implementing arrays as a custom object. LabVIEW simply crunches happily through arrays in a very similar manner to a program written in C, Rust, or a similar compiled language.
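To illustrate the point: the same element-wise operation on a plain Python list versus a numpy array. This requires numpy, and the array size and repeat count are arbitrary choices for the demonstration.

```python
import timeit
import numpy as np

n = 100_000
py_list = list(range(n))
np_arr = np.arange(n)

# Double every element: pure-Python list comprehension vs. vectorized numpy.
t_list = timeit.timeit(lambda: [x * 2 for x in py_list], number=10)
t_numpy = timeit.timeit(lambda: np_arr * 2, number=10)
print(f"list: {t_list:.4f}s, numpy: {t_numpy:.4f}s")
```

On typical hardware the numpy version is one to two orders of magnitude faster, because the loop runs in compiled C code instead of the bytecode interpreter.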

If the operation is IO bound, this difference is much smaller, since the actual execution speed is mostly determined by the speed of the IO device, and the code execution itself is secondary in terms of performance. So here LabVIEW and Python can be rather similar in speed, but given a similar implementation of the IO interface library, Python can never be faster than LabVIEW.
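The IO-bound case can be sketched in Python too: while one thread waits on a (simulated) device, the others run, so total wall time is dominated by the device latency rather than by interpreter speed. Here `time.sleep` is just a stand-in for a blocking driver call; the 50 ms latency and worker count are made up for the example.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io_read(i):
    time.sleep(0.05)  # pretend the device takes 50 ms to respond
    return i * i

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fake_io_read, range(8)))
elapsed = time.perf_counter() - start
print(results, f"{elapsed:.2f}s")  # ~0.05 s wall time, not 8 x 0.05 s
```

Because the threads spend their time blocked on IO, the interpreter overhead barely matters here, which is exactly why the LabVIEW/Python gap shrinks for IO-bound pipelines.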

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390