LabVIEW


Better than polling for a full buffer?


@AlecSt wrote:
The context: I'm taking data with a high-speed digitizer and writing to binary files that we can read with our C++ analysis software. I'm exploring the option of using TDMS, but only after we are up and running with the current setup.

The digitizer can sample every 2 ns and we take around 1000 samples per event. So in the extreme case we would need to write every 2 µs or so, on average.
Set up the DAQ to take 1k samples per read and you can write the full array at once.
/Y
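Not from the original post, but for anyone reading this on the C++ side, here is a minimal sketch of that advice in text form: acquire a full 1k-sample record per event and push it to disk in one write call rather than sample-by-sample. acquire_event() is a hypothetical stand-in for the digitizer driver's fetch call (here it just fills a ramp so the example runs on its own), and the file name is made up.

#include <array>
#include <cstddef>
#include <fstream>

constexpr std::size_t kSamplesPerEvent = 1000;

// Hypothetical stand-in for one triggered record from the digitizer.
std::array<double, kSamplesPerEvent> acquire_event() {
    std::array<double, kSamplesPerEvent> block{};
    for (std::size_t i = 0; i < kSamplesPerEvent; ++i) {
        block[i] = static_cast<double>(i);   // placeholder waveform
    }
    return block;
}

int main() {
    std::ofstream out("events.bin", std::ios::binary);
    for (int event = 0; event < 1000; ++event) {
        const auto block = acquire_event();
        // One write per event: 1000 doubles = 8000 bytes in a single call.
        out.write(reinterpret_cast<const char*>(block.data()),
                  block.size() * sizeof(double));
    }
    return 0;
}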
Message 11 of 13

@AlecSt wrote:
The context: I'm taking data with a high-speed digitizer and writing to binary files that we can read with our C++ analysis software. I'm exploring the option of using TDMS, but only after we are up and running with the current setup.

The digitizer can sample every 2 ns and we take around 1000 samples per event. So in the extreme case we would need to write every 2 µs or so, on average.

I made a simple Producer/Consumer setup and timed it.  On my Windows 7 system, I was writing about 40,000 arrays of 1000 Dbls per second.  No buffering, just "write as fast as we keep feeding you ..."  Note that the Consumer was pretty much keeping up with the Producer -- there were only 2 elements left on the Queue when it exited.  On the other hand, if I sped up the Producer by putting the For loop outside the While loop, the write speed went up to around 60,000 arrays per second, but there were almost 200,000 elements left in the Queue.

 

So as long as you aren't generating more than 40,000 arrays of 1000 Dbls per second (40,000 × 1000 × 8 bytes ≈ 320 MB/s), a simple Producer/Consumer should suffice.
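For readers following along in text rather than in LabVIEW, here is a rough C++ analogue of that Producer/Consumer structure. It is only a sketch: a std::queue guarded by a mutex and condition_variable stands in for the LabVIEW Queue, and the block count and file name are invented for the example.

#include <condition_variable>
#include <fstream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main() {
    std::queue<std::vector<double>> q;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Producer: emulate the digitizer handing over 1000-sample events.
    std::thread producer([&] {
        for (int event = 0; event < 40000; ++event) {
            std::vector<double> block(1000, static_cast<double>(event));
            {
                std::lock_guard<std::mutex> lock(m);
                q.push(std::move(block));
            }
            cv.notify_one();
        }
        {
            std::lock_guard<std::mutex> lock(m);
            done = true;
        }
        cv.notify_one();
    });

    // Consumer: dequeue each full array and stream it to disk in one write.
    std::ofstream out("spool.bin", std::ios::binary);
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [&] { return !q.empty() || done; });
        if (q.empty() && done) break;
        std::vector<double> block = std::move(q.front());
        q.pop();
        lock.unlock();
        out.write(reinterpret_cast<const char*>(block.data()),
                  block.size() * sizeof(double));
    }
    producer.join();
    return 0;
}

The behavior mirrors the measurement above: as long as the consumer's write keeps pace, the queue stays nearly empty; if the producer outruns it, elements pile up in the queue (and in memory).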

(Attachment: Test Spooling.png)

Bob Schor

 

 

Message 12 of 13

Bob, that's a great point. I'm going to test this out myself with our setup.

Message 13 of 13