LabVIEW


Real-time data processing at extremely high sampling rates

Hi everyone, 

First of all, I want to mention that my knowledge of LabVIEW is limited and that I have never worked with real-time data processing before. 

I have written a VI to acquire data from ultrasonic accelerometers. The experiment lasts several hours, so it is not possible to record all the data in a single file. In addition, I only need the data from the regions where a specific threshold is exceeded. 

In the VI I have programmed, there is a while loop that processes the data every 2 seconds. The first iterations calculate the threshold, and in the following ones the data is written to a text file only if there is a peak above this threshold/trigger.

This VI works well for me up to a 100 kHz sampling frequency. At higher sampling frequencies (500 kHz) the while loop starts to lag, and above 500 kHz it runs out of memory. I don't really need 2-second iterations; 0.5 seconds or less would be fine. However, the problem is the same: the while loop does not have enough time to write the file.

Could anyone suggest ideas for a more efficient way to rewrite this program? 

 

Thank you in advance for your time!

Message 1 of 6

Just based on what I see, if it were me, I'd set up a Queue to take the data you want to save to file and offload the writing to another loop. That would be the biggest benefit I can see.
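Since LabVIEW is graphical, a text-language sketch may help make the structure concrete. Below is a rough Python equivalent of the two-loop idea (one acquisition loop, one file loop, connected by a Queue); the chunk size, iteration count, and file name are just placeholders.

```python
import queue
import threading

import numpy as np

CHUNK = 5_000                 # samples handed off per iteration (placeholder)
data_q = queue.Queue()        # plays the role of the LabVIEW Queue refnum

def producer():
    """Acquisition loop: only reads the hardware and enqueues each chunk."""
    for _ in range(100):                  # stand-in for the DAQ while loop
        samples = np.random.randn(CHUNK)  # stand-in for a DAQmx/ULx Read
        data_q.put(samples)
    data_q.put(None)                      # sentinel: tells the consumer to stop

def consumer():
    """File loop: dequeues chunks and writes them, off the acquisition path."""
    with open("capture.bin", "wb") as f:
        while (samples := data_q.get()) is not None:
            f.write(samples.tobytes())    # binary write, no text formatting

t = threading.Thread(target=producer)
t.start()
consumer()
t.join()
```

The point is that the acquisition loop never blocks on the disk: the Queue absorbs bursts, and the writes happen in parallel.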

Message 2 of 6

Additionally, use the lower-level File I/O primitives so that the file is opened only once: loop over all your writes with a write function that takes a file *refnum* rather than a file *path* (the path-based convenience functions open and close the file on every call).
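To illustrate the difference, here is a rough Python analogue (file name and sizes are made up); the first pattern corresponds to a path-based write inside the loop, the second to opening once and wiring the refnum through the loop:

```python
import numpy as np

# Slow pattern: equivalent to a path-based "Write to File" inside the loop.
# The file is opened, appended to, and closed on every single iteration.
for _ in range(1_000):
    with open("capture.bin", "ab") as f:
        f.write(np.random.randn(5_000).tobytes())

# Fast pattern: equivalent to Open/Create/Replace File once, then writing
# via the refnum. The open/close cost is paid once; each iteration is just
# a write.
with open("capture.bin", "wb") as f:
    for _ in range(1_000):
        f.write(np.random.randn(5_000).tobytes())
```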

-Kevin P

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 3 of 6

Hi,

 

One more comment: it is usually a bad idea to place a Wait next to a DAQ(mx) Read function that already has a well-defined sample rate… (it doesn't matter whether you use DAQmx or ULx).

 

Instead, you usually set a reasonable number of samples to read (and process) per iteration, and let the Read function itself pace the loop!
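For illustration, here is what that looks like in NI's nidaqmx Python package, assuming a device called "Dev1" and a 500 kHz acquisition; note there is no Wait anywhere, because the Read call itself blocks until the next 5000 samples are available (10 ms at this rate):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")   # placeholder channel
    task.timing.cfg_samp_clk_timing(
        rate=500_000,                                  # hardware-timed clock
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=1_000_000,                      # buffer size hint
    )
    for _ in range(100):
        # Blocks ~10 ms until 5000 samples arrive; this paces the loop.
        data = task.read(number_of_samples_per_channel=5_000)
```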

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 4 of 6

Several comments (repeating some already mentioned):

  • You almost never need to use a Frame Sequence structure, especially when you have made proper use of the Error Line to serialize your code.  The Frame adds no benefit, and takes up a lot of space.  I do see you are creating an "Elapsed Time" clock -- though I'm not a fan of Express VIs, the Elapsed Time Express VI is much more compact.
  • When you are trying to maximize throughput and want high sampling rates, you want to take full advantage of The Principle of Data Flow, which allows LabVIEW to do "two things at once" (such as "Collect A/D Samples" and simultaneously spool them to disk).  LabVIEW has a Template showing how to do this.  Here's what to do:
    • Open LabVIEW.
    • Go to File, New ... (the second item on the Drop-down, with the three dots).
    • Open "From Template", "Frameworks", "Design Patterns", and choose "Producer/Consumer Design Patterns (Data)".
    • Study the VI that this generates.  The top part, the "Producer", is where you put your DAQmx (or ULx) code that generates, say, 5000 samples at 500 kHz.  If you do nothing else with these data, this loop will run at precisely 100 iterations/second (as the very accurate hardware clock in the A/D hardware takes 5000/500 k = 0.01 seconds to gather and give you the data).  What you need to do is to "hand the data off" to another loop, so you put it on a Queue.
    • The lower Loop is the "Consumer", which accepts the data from the Queue and "does something with it".  This can include streaming the data to disk (you should be able to keep up), plotting, or "on-the-fly" processing (this might be more difficult at these rates), etc.  Note that for any task that requires additional time, you can put the data on yet another Queue and send it to another Consumer whose first job would be to "slim down" the data (throw away 999 points out of 1000, say, and analyze the remaining 0.1%).  A text-language sketch of the whole pattern follows this list.
  • Now that we have Data Acquisition "split" from "Data Analysis/Data Saving", concentrate on optimizing these two sub-tasks.  Data Acquisition is simple -- learn more about DAQmx (which will also apply to ULx) and keep it "lean, mean, and fast".  NI has excellent tutorials on DAQmx -- my favorite is "Learn 10 Functions in NI-DAQmx and Handle 80 Percent of your Data Acquisition Applications" (look this up on the Web, and skip the first suggestion involving the Dreaded DAQ Assistant).
  • To optimize saving a lot of data, learn more about how to save data.  Saving Text files is one of the slowest ways to save data, particularly a lot of it; binary files are among the fastest, and LabVIEW has other "fast" formats meant for sampled data (TDMS, for example).  Look them up.
  • If you want to do data processing during acquisition at high data rates, you probably won't be able to keep up with the data.  There are "tricks" you can do (similar to Producer/Consumer) to process "the current Buffer of data" (which might have you processing every 10th "chunk" of samples, for example).  Especially when data rates are rapid, you can't "visualize it all", so you really have to rely on some form of data decimation for on-going monitoring.
  • When you are ready to start coding, you might consider writing a little routine to act as a Data Generator, and write routines that "do a few things at a time" to get sample code that seems to work at the data rate you need.  Once you have the algorithms down, you can put the real DAQmx/ULx code and real Analysis routines in place.
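Since the Template itself is graphical, here is a rough Python sketch of the overall pattern described above, combining the Producer/Consumer handoff, the threshold trigger from the original post, binary saving, and a decimated monitoring branch. The rates, the simple max-based trigger, and all names are illustrative assumptions, not the Template's actual contents.

```python
import queue
import threading

import numpy as np

RATE = 500_000      # samples/s (assumed)
CHUNK = 5_000       # samples per read -> 100 handoffs/s at 500 kHz
THRESHOLD = 3.0     # trigger level; the real VI computes this at startup

data_q = queue.Queue()

def producer(n_chunks):
    """Acquisition loop: read one fixed-size chunk, enqueue it, nothing else."""
    for _ in range(n_chunks):
        samples = np.random.randn(CHUNK)   # stand-in for the DAQmx/ULx Read
        data_q.put(samples)
    data_q.put(None)                       # sentinel so the consumer can exit

def consumer():
    """Save/analyze loop: runs in parallel with the acquisition."""
    with open("triggered.bin", "wb") as f:
        n = 0
        while (samples := data_q.get()) is not None:
            if np.abs(samples).max() > THRESHOLD:
                f.write(samples.tobytes())   # binary stream, only when triggered
            if n % 10 == 0:                  # decimated monitoring branch:
                _preview = samples[::100]    # 50 points per 5000-sample chunk
            n += 1

t = threading.Thread(target=producer, args=(1_000,))
t.start()
consumer()
t.join()
```

Even if every chunk triggered, this writes about 4 MB/s of float64 data at 500 kHz, which an ordinary disk sustains easily; the essential property is that the producer never waits on the disk or the analysis.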

Bob Schor

Message 5 of 6

 

Thank you everyone for the suggestions!!

Special thanks to Bob Schor for the really detailed explanation.

I will try to implement all the proposed changes and will give feedback as soon as I have tested them.

Message 6 of 6