Better than polling for a full buffer?

Hi all,

 

For background: I am writing a DAQ VI with a producer/consumer model, strongly based on the Continuous Acquisition and Logging VI.

 

I am writing the acquired data to a buffer. When the buffer is full, I want to write that data to a file -- in other words, tell the "logging" loop to start writing.

 

I would like to know the best way to do this. Right now I am polling for a full buffer and, if it is full, enqueuing a command for the logging loop to start writing the data to file.

Message 1 of 13

Why does the logging loop need to be told anything? It has access to the buffer, right? Why can't it decide on its own that it's time to start saving data?

 

Mike...


Certified Professional Instructor
Certified LabVIEW Architect
LabVIEW Champion

"... after all, He's not a tame lion..."

For help with grief and grieving.
Message 2 of 13
That's a good point. I should have mentioned that I'm trying to minimize the number of writes-to-file. There might be something silly about what I'm doing, so please tell me if this is inefficient:
Message 3 of 13
OK, that doesn't really change the question. Somebody has to decide when to write to the file using criteria that are likely to change over time. It seems to me that the most reasonable place to put that logic is in the loop that is going to be writing the data.

Mike...

Message 4 of 13

So should one poll for a full buffer before writing to file, then? Or is there a better way? I feel like polling for anything is generally inelegant.

Message 5 of 13

The point is that your OS does buffered file I/O. Even if you write to your file at every iteration of a while loop (let's say at 1 Hz), the OS does not actually hit the disk that often: it writes to your file in chunks. So I would not worry about buffering. Just put your data into a queue, and write to file in the consumer loop. What is the DAQ rate? If you are doing fast acquisition, you might want to have a look at the DAQmx TDMS operations, which are very fast. Either way, you do not need to handle buffering explicitly...
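Since LabVIEW is graphical, here is a rough Python stand-in for the queue-based producer/consumer handoff described above. The packet contents, file name, and loop counts are all illustrative, not part of anyone's actual VI:

```python
# Minimal producer/consumer sketch (Python stand-in for the LabVIEW
# queue pattern; all names and sizes are illustrative).
import queue
import threading

data_queue = queue.Queue()
SENTINEL = None  # tells the consumer loop to stop

def producer(n_packets):
    # Stand-in for the DAQ loop: enqueue each acquired chunk as-is.
    for i in range(n_packets):
        data_queue.put([i] * 4)  # fake "samples"
    data_queue.put(SENTINEL)

def consumer(path):
    # Logger loop: write every packet as it arrives.  The OS buffers
    # these writes, so small frequent writes get coalesced into chunks.
    with open(path, "w") as f:
        while True:
            packet = data_queue.get()
            if packet is SENTINEL:
                break
            f.write(",".join(map(str, packet)) + "\n")

t = threading.Thread(target=consumer, args=("log.txt",))
t.start()
producer(3)
t.join()
```

The consumer blocks on the queue (no polling), and the file writes are left to the OS to batch.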

Message 6 of 13

A little more context would be nice here.  What file format are you using?  Where is the data coming from?  Where/what is this buffer you are referring to?



There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 7 of 13

Using DAQmx built-in logging is probably better. If that doesn't fit, you're doing premature optimization. You only need to focus on write size if you need crazy high performance, and then you start having to worry about buffering and the like. Since LabVIEW will buffer for you and the OS will buffer for you, you can just leave it be until you've identified that loop as the bottleneck.

 

Also, this is a peeve of mine so I'm going to call it out specifically:

"I am writing the acquired data to a buffer. When the buffer is full, I want to write that data to a file -- in other words, tell the "logging" loop to start writing."

 

The logging loop should know, itself, when to log. After all, it only has that one job. Is that job really so hard? It just sits there, waits for data, and shoves it in a file. 🙂

My point is, *if* your loop needs buffering to hit the write-to-disk performance you need, that buffering is the responsibility of the logger loop not the producer loop. That way the producer has just one job -- producing -- and the logger has just one job -- logging. Any other configuration can and will lead to suffering, eventually.

 

So to answer your question rather than giving the super obnoxious "don't optimize" statement above, I would say you leave your producer just writing to the queue. On the receiving side you have an allocated array of data and a counter. Every time you receive a packet from the producer you replace some subset of your buffer array and increment your counter. When the counter reaches N, reset the counter and write your data to disk. This keeps everything event-based (reading from the queue) and also eliminates the pesky problem of having the producer doing the logger's job.
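A rough sketch of that buffered consumer, as a Python stand-in for the LabVIEW loop. The flush size N, the packet size, and the sentinel-based shutdown are illustrative assumptions, and the `writes` list stands in for actual disk writes:

```python
# Buffered consumer sketch: preallocated buffer + counter, one big
# write every N packets (all sizes illustrative).
import queue

N = 4        # packets per flush (assumed)
PACKET = 2   # samples per packet (assumed)

def run_consumer(q, writes):
    buf = [0] * (N * PACKET)   # preallocated buffer array
    count = 0
    while True:
        packet = q.get()        # event-based: blocks until data arrives
        if packet is None:      # sentinel: flush any remainder and stop
            if count:
                writes.append(buf[: count * PACKET])
            return
        # Replace a subset of the buffer in place and bump the counter.
        buf[count * PACKET : (count + 1) * PACKET] = packet
        count += 1
        if count == N:          # buffer full: one big write, reset counter
            writes.append(list(buf))
            count = 0

q = queue.Queue()
writes = []                     # stands in for writes to disk
for i in range(6):
    q.put([i, i])
q.put(None)
run_consumer(q, writes)
# 6 packets with N = 4 -> one full flush of 4 packets, one partial of 2
```

The producer never touches this logic; all batching lives on the logger side, exactly as described above.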

Message 8 of 13

@AlecSt wrote:

Hi all,

 

I am writing the acquired data to a buffer. When the buffer is full, I want to write that data to a file -- in other words, tell the "logging" loop to start writing.

 

I would like to know the best way to do this. Right now I am doing the following thing: polling for a full buffer, and then if full, enqueuing a command for the logging loop to start writing the data to file.



So, assuming the buffer is a queue or AE in its own consumer state, it can queue a write command to the consumer with the data and clear its buffer.
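A minimal Python sketch of that idea; the `Buffer` class, its limit, and the command tuple are all illustrative stand-ins for the AE and its queue:

```python
# Sketch of the pattern above: the buffer holder enqueues a
# ("write", data) command to the consumer and clears itself.
import queue

consumer_q = queue.Queue()

class Buffer:                  # stand-in for the AE / buffer state
    def __init__(self, limit):
        self.limit = limit
        self.data = []

    def add(self, sample):
        self.data.append(sample)
        if len(self.data) >= self.limit:
            # Hand the full buffer off as a write command...
            consumer_q.put(("write", self.data))
            # ...and clear our own copy.
            self.data = []

b = Buffer(limit=3)
for s in range(5):
    b.add(s)
cmd, data = consumer_q.get()   # ("write", [0, 1, 2]); [3, 4] still buffered
```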

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 9 of 13
The context: I'm taking data with a high-speed digitizer and writing to binary files that we can read with our C++ analysis software. I'm exploring the option of using TDMS, but only after we are up and running with the current setup.

The digitizer can sample every 2 ns, and we take around 1000 samples per event. So in the extreme case we would need to write every 2 µs or so, on average.
Message 10 of 13