High-Speed Digitizers


Fetch More Than Available Memory, TDMS Write.

Can anyone advise on the most efficient method for fetch-write?  1 GHz sampling rate, record length 300 points.  The records are created at 50 kHz, i.e. a 50 kHz trigger.  Acquisition length is 10 seconds, 500,000 records in total.  With records per fetch set to 10,000 the VI runs smoothly and the points-fetched count keeps pace with the points-acquired count.  When I add a TDMS write to the VI, everything slows down and I start overwriting unfetched records.  I think the problem is the time taken to convert the 1D cluster to a waveform (not shown in the attached VI).  It's also possible I'm not fetching using the most effective method.
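
For scale, here is a rough back-of-envelope sketch of the sustained data rate involved (the 2-byte and 8-byte sample widths below are only assumptions for a binary fetch and a DBL waveform respectively; adjust for the actual fetch type):

points_per_record = 300
trigger_rate_hz = 50_000             # records created per second
acquisition_s = 10
records_total = trigger_rate_hz * acquisition_s       # 500,000 records

samples_per_s = points_per_record * trigger_rate_hz   # 15,000,000 samples/s
MB = 1024 * 1024
print(f"sustained, 2 B/sample (binary fetch) : {samples_per_s * 2 / MB:.0f} MB/s")
print(f"sustained, 8 B/sample (DBL waveform) : {samples_per_s * 8 / MB:.0f} MB/s")
print(f"total on disk at 2 B/sample          : {records_total * points_per_record * 2 / MB:.0f} MB")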

Message 1 of 10
(6,808 Views)

Maybe you can try the producer/consumer pattern. In LabVIEW, choose "File -> New -> VI -> From Template -> Producer/Consumer Design Pattern (Data)". There are also examples shipped with LabVIEW that you can refer to.

Message 2 of 10
(6,799 Views)

You can fetch a waveform instead of a cluster to gain a bit more speed, but the real issue is probably writing to disk.  As my esteemed colleague mentioned, use a producer/consumer architecture to put the TDMS write in a different loop from the scope fetch.  Use a queue to pass data between the loops.  You will see your memory use go up as the acquisition gets ahead of the write to disk, then drop when the acquisition finishes and the disk write catches up.
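
If it helps to see the structure in text form, here is a minimal Python sketch of the same idea (the fetch and TDMS write below are stand-in stubs, not the actual NI-SCOPE or TDMS APIs; only the two-loops-plus-queue structure is the point):

import queue
import threading

def fetch_records(n):
    # Stand-in for the scope fetch: returns n dummy 300-point records.
    return [[0.0] * 300 for _ in range(n)]

def write_tdms(path, records):
    # Stand-in for the TDMS write: a real consumer would stream these records to disk.
    pass

data_q = queue.Queue()   # plays the role of the LabVIEW queue between the two loops
SENTINEL = None          # tells the consumer the acquisition is finished

def producer(num_fetches, records_per_fetch):
    # Fetch loop: pull records off the digitizer as fast as possible and never touch the disk.
    for _ in range(num_fetches):
        data_q.put(fetch_records(records_per_fetch))
    data_q.put(SENTINEL)

def consumer(tdms_path):
    # Write loop: drain the queue and write at whatever rate the disk allows.
    while True:
        records = data_q.get()
        if records is SENTINEL:
            break
        write_tdms(tdms_path, records)

writer = threading.Thread(target=consumer, args=("acq.tdms",))
writer.start()
producer(num_fetches=10, records_per_fetch=1_000)   # small demo numbers
writer.join()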

 

The LabVIEW help and these forums contain many examples of producer/consumer architectures.  Let us know if you need more assistance.

Message 3 of 10
(6,793 Views)

Thanks for the pointers.  I'm now writing to a TDMS file using producer/consumer loops, but I'm still converting from 1D cluster to waveform in the consumer loop.  If I increase the record creation rate to 100 kHz it cannot keep up.  Do you know what data throughput can be achieved using this method?

Message 4 of 10
(6,788 Views)

As long as you don't run out of memory, I would expect you to be able to keep up with your device.  However, there are a lot of things that can cause this to fail.  Please post your current code so we can have a look.  In the meantime, you may want to try fetching data as a waveform data type instead of the cluster to avoid the extra conversion.

Message 5 of 10
(6,778 Views)

I tried out the waveform fetch; it resulted in an overwrite error.  It was showing the first fetch as 18 records and the second fetch as 48,000.  Cluster versus waveform VIs attached.

 

LabVIEW 8.5

Message 6 of 10
(6,773 Views)

Hi bmann,

 

Approximately how many loops (how many total records fetched/acquired) does it perform before the overwrite error occurs? Are you saying that the fetch returns 18 waveforms in the 1D array on the first loop, then 48000 waveforms on the next loop iteration, and then it gives an overwrite error on the third loop iteration?

 

I noticed you are doing some graphing in the producer loop, which could be slowing down the fetching of data off the digitizer. Can you try moving the graphing to the consumer loop and see what effect this has on the fetching of the data? Maybe even try disabling the graphing for now and see if any errors still occur.

 

Whatever is happening is most likely due to the producer loop not executing fast enough; the only trouble the consumer loop could cause would be filling up your LabVIEW memory (as DFGray mentioned), which I doubt will happen in this situation.

 

Regards,

Daniel S.
National Instruments
Message 7 of 10
(6,762 Views)

Hi, yes, it is failing on the 3rd iteration.  However, if you change the number of records per fetch it will fail on a later iteration; e.g. at 300 records per fetch it manages over 100 iterations before the overwrite error appears.  Is there a formula to calculate the number of records per fetch based on sample rate, points per record and number of records?  I'd like the user to be able to alter these variables on the fly without introducing an overwrite error.
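
My rough reasoning so far, with made-up numbers and names (I may well be off here): the onboard memory holds roughly onboard_bytes / bytes_per_record unfetched records of slack, so each fetch-plus-enqueue cycle of records_per_fetch records has to finish in under records_per_fetch / trigger_rate seconds on average to avoid an overwrite.

def overwrite_headroom(onboard_bytes, points_per_record, bytes_per_sample, trigger_rate_hz):
    # Made-up model: how many records the onboard memory can hold before unfetched
    # data is overwritten, and how long that slack lasts at the trigger rate.
    bytes_per_record = points_per_record * bytes_per_sample
    records_of_slack = onboard_bytes // bytes_per_record
    return records_of_slack, records_of_slack / trigger_rate_hz

# Guessed numbers: 256 MB onboard memory, 300 points/record, 1 byte/sample, 50 kHz trigger.
slack_records, slack_seconds = overwrite_headroom(256 * 1024 ** 2, 300, 1, 50_000)
print(f"~{slack_records:,} records (~{slack_seconds:.1f} s) of slack before an overwrite")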

 

I've messed about with the 1D cluster versus waveform fetch: the 1D cluster works fine, but the waveform results in overwrite errors.  All graphing has been moved to the consumer loop; that didn't seem to make a difference.

 

I've also noticed the memory usage climbs when I first run the VI, then climbs again on the second run.  The only way to reduce memory usage back to pre-first-run levels is to close LabVIEW; I'm assuming the graph display is taking up the memory.  Should I be coding to manage the memory better?

 

Also, I'd like to trigger from the FPGA card via the PXI bus; at the moment I'm using a BNC cable to the EXT TRIG of the 5154.  I can see PXI_TRIG0 on the FPGA I/O to write the trigger.  Is there a good example showing an FPGA triggering an NI-SCOPE product?  I've searched the examples to no avail.

 

Latest 2 VIs attached.

Message 8 of 10
(6,755 Views)

I've since noticed the attached 1D cluster version is not working either; the consumer loop stops at iteration 16 with an out-of-memory error.  I think it's time to count up the number of bytes I'm working with.

Message 9 of 10
(6,749 Views)

I don't see anything wrong LabVIEW-wise with your code, so you are probably running into hardware limitations at this point.  As you noticed, however, your speed depends on how many records you fetch and on the record length.  I have seen order-of-magnitude changes from optimizing these (file I/O is similar).  I would recommend changing your buffer sizes in a coherent fashion and plotting the results to optimize.  If you are really gung-ho, you can do polynomial fits on the results to find minima (yes, I have been known to do this).  You should be able to get close to your theoretical maximum bandwidth in this fashion.
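
As a sketch of what I mean, here it is in Python rather than LabVIEW (the fetch-and-write routine is only a synthetic placeholder so the fit has something to find; substitute your real fetch plus TDMS write):

import time
import numpy as np

def fetch_and_write(buffer_bytes):
    # Placeholder for one fetch-plus-TDMS-write cycle at the given transfer size.
    # The synthetic cost below fakes a U-shaped curve; replace it with the real work.
    time.sleep(1e-3 + abs(buffer_bytes - 256 * 1024) * 2e-9)

buffer_sizes = np.array([64, 128, 256, 512, 1024, 2048]) * 1024   # bytes
times = []
for size in buffer_sizes:
    t0 = time.perf_counter()
    for _ in range(20):                   # average over several cycles per size
        fetch_and_write(size)
    times.append((time.perf_counter() - t0) / 20)

# Fit a quadratic to the timings and solve for its vertex (assumes the curve is roughly convex).
a, b, c = np.polyfit(buffer_sizes, times, 2)
print(f"fastest measured size : {buffer_sizes[int(np.argmin(times))] / 1024:.0f} kB")
print(f"fitted minimum        : {-b / (2 * a) / 1024:.0f} kB")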

 

When I was doing this sort of benchmarking, transfer buffer sizes of about 200 kB to 300 kB worked best.  But that was with a 5112 on a PCI bus, so your mileage may vary.

Message 10 of 10
(6,741 Views)