08-07-2017 09:52 AM - edited 08-07-2017 10:09 AM
Hi Everybody,
this is my first post and I'm a fairly new user of LabVIEW. Pardon me in advance for my mistakes.
I'm designing and implementing a VI to allow acquisition, processing, and visualization of live data captured from an Optical Coherence Tomography system running at around a 100 kHz A-scan rate (triggered acquisition).
I'm using the producer/consumer design pattern to read buffers from the (AlazarTech) high-speed digitizer memory in a continuous while loop, and then to process and visualize them in a second while loop that pulls buffers out of the queue. By manually adjusting the number of buffers that the VI pushes into the queue and how many records of each buffer are displayed (to account for different scenarios), I managed to achieve a rudimentary but usable VI. I want to underline that I'm mostly interested in an application that can show me results live, so dropping buffers is acceptable (also, one buffer coincides with a single B-scan, which is the information unit of the system). No logging is required.
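In text form, the pattern looks roughly like the sketch below (Python, only as an analogy since LabVIEW is graphical; the queue size, buffer contents, and timing are illustrative, not my actual VI):

```python
import queue
import threading
import time

BUF_QUEUE = queue.Queue(maxsize=4)   # small bound: old B-scans may be dropped

def producer(n_buffers=100):
    """Simulates reading buffers (B-scans) from the digitizer."""
    for i in range(n_buffers):
        buf = f"B-scan {i}"          # placeholder for an 8 MB record block
        try:
            BUF_QUEUE.put_nowait(buf)
        except queue.Full:
            # Consumer is behind: drop this buffer (acceptable for live display)
            pass
        time.sleep(0.01)             # stand-in for the hardware acquisition time
    BUF_QUEUE.put(None)              # sentinel: acquisition finished

def consumer():
    """Simulates processing/visualizing buffers pulled from the queue."""
    while True:
        buf = BUF_QUEUE.get()
        if buf is None:
            break                    # producer signalled completion
        print("displaying", buf)

threading.Thread(target=producer).start()
consumer()
```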
My question is related to the manual adjustment of the number of buffers to be pushed into the queue (please see the "Start Acquisition" section of the attached VI). My understanding is that, if a while loop doesn't have any front panel control/indicator, it will run in a thread that is NOT the UI thread and therefore will never block to wait for user input. So, what is the LabVIEW mechanism/construct/technique that allows user interaction with this loop through the front panel (for instance, so I can adjust during execution how many buffers are dropped) while guaranteeing that the "acquisition thread" (or any thread) DOESN'T run in the UI thread? I've read about properties, events, and messages, but couldn't find specific answers on how to effectively keep acquisition/processing separate from the UI thread (or even how to determine whether it is separate).
Please excuse the poor quality of the attached VI.
Thank you in advance for your replies!
Lorenzo
08-08-2017 11:00 AM
Perhaps I've posted my question in the wrong place, or maybe what I'm asking doesn't even make sense?
Thank you!
08-08-2017 11:16 AM - edited 08-08-2017 11:20 AM
Just use controls/indicators. LabVIEW handles this for you. Although the Front-Panel control is in the UI thread, the control terminal on the Block Diagram is not. Reading/writing to a terminal (or local variable) does not involve the UI thread, and never blocks. You can even use an Event Structure, as it doesn’t use the UI thread either. Only Property Nodes execute in the UI thread.
08-08-2017 11:17 AM
I have briefly looked at your VI. Here are some issues:
Try to break the code into more subVIs and add comments so others can follow it.
I understand that you cannot display your MathScript routines, and that is all right, but native LabVIEW functions are typically faster and more efficient.
As far as I can tell you have no references or property nodes, which is good; these can cause a switch to the UI thread. Other than that I cannot say much more.
I can show you how I typically set up my stuff, but I can't post the actual code due to IP restrictions. Hopefully you can read the comments on the template.
Cheers,
mcduff
08-13-2017 03:47 PM
Thank you for the answers!
Knowing that controls/indicators are handled efficiently and automatically by LabVIEW helps quite a lot. I have now managed to get a decent data flow by setting a timeout on the consumer dequeue (before, it was set to infinite). This way I can execute some control logic that automatically terminates the consumer while loops when the queue becomes empty after acquisition has been terminated, and the whole VI now exits by itself.
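In rough text form the shutdown logic looks like this (a Python sketch, only an analogy of my VI; `acquisition_done` is a hypothetical flag standing in for the Boolean the producer sets when it stops):

```python
import queue
import threading

buf_queue = queue.Queue()
acquisition_done = threading.Event()   # hypothetical flag set by the producer when it stops

def process_and_display(buf):
    print("processed", buf)            # stand-in for the real processing/plot code

def consumer():
    while True:
        try:
            # Finite timeout on the dequeue instead of waiting forever
            buf = buf_queue.get(timeout=0.5)
        except queue.Empty:
            # The queue stayed empty for the whole timeout: if acquisition
            # has already finished, the loop terminates on its own.
            if acquisition_done.is_set():
                break
            continue
        process_and_display(buf)
```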
@mcduff: yes, something like that is what I will aim for in the future for a better application.
I have one question, though. I read somewhere that when using subVIs LabVIEW can copy data a lot, and I'm working with a pretty large amount of data (each acquired buffer is 8 MB, at a rate of 300 Hz max). What would be an efficient design pattern (event/message handling + producer/consumer) to acquire, process, and visualize data at, say, 10 buffers/s? (Currently I can achieve 1-2 buffers/s.)
Thank you!
Lorenzo
08-14-2017 08:18 AM
@Lorenzo wrote:
Yes, something like that is what I will aim for in the future for a better application.
I have one question, though. I read somewhere that when using subVIs LabVIEW can copy data a lot, and I'm working with a pretty large amount of data (each acquired buffer is 8 MB, at a rate of 300 Hz max). What would be an efficient design pattern (event/message handling + producer/consumer) to acquire, process, and visualize data at, say, 10 buffers/s? (Currently I can achieve 1-2 buffers/s.)
I have used a similar architecture to stream data from an NI USB-6366 at 2 MSa/s per channel on 8 channels. Some of the options I used to limit data copies, memory use, etc. (my colleagues like to run DAQ on old equipment) are below.
A good tool to check for data copies is the Windows Task Manager, along with other built-in Windows tools. For the case above, acquiring 8 channels at 1 MSa/s for 1 s, my memory increased by only about 80 MB: 16 MB for the raw I16 data (8 channels x 1 MSa/s x 1 s x 2 bytes/sample) plus 64 MB for the processed doubles (8 bytes/sample).
For another system that used high-speed digitizers, the file save was the limiting factor. I used a combination of queues and DVRs to limit memory copies. In this case, I limited the size of the queue storing the raw data so the user would not run out of memory. If the file write was fast enough (for example, if the user had a RAID array), the queue would not fill up; if the file write was slow, the queue filled up and the DAQ proceeded at a slower rate.
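As a rough text analogy of that throttling behaviour (a Python sketch, not the actual LabVIEW code; the queue bound, buffer size, and timings are made up for illustration):

```python
import queue
import threading
import time

raw_queue = queue.Queue(maxsize=8)          # bound the raw-data queue to cap memory use

def acquire():
    """Stand-in for the digitizer read loop."""
    for _ in range(50):
        record = bytes(8 * 1024 * 1024)     # stand-in for one 8 MB raw buffer
        # Blocking put: if the file writer keeps up, this never waits;
        # if the writer is slow, the queue fills and acquisition is throttled.
        raw_queue.put(record)
    raw_queue.put(None)                     # sentinel: acquisition finished

def write_to_disk():
    """Stand-in for the file-save loop (the slow consumer)."""
    while True:
        record = raw_queue.get()
        if record is None:
            break
        time.sleep(0.02)                    # stand-in for the actual file write

threading.Thread(target=acquire).start()
write_to_disk()
```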
cheers,
mcduff