
Problems with continuous sampling and writing to disk, PXI-6071E

I'm getting kind of desperate now. I'm attempting something that should be extremely simple: sample one analog channel (single-ended, referenced) at 1 MHz continuously and write it to disk. At first I tried using an RT FIFO in a time-critical loop, then systematically reading off the FIFO and dumping it to disk. Over time, the system gets slower and slower until it freezes, and the timestamp I record for each data frame drifts further and further out of sync with the actual clock. I gave up on that and decided to attempt something even more basic: addressing the card directly, as in the example Cont Acq to File (binary).vi. The problem is that the backlog builds up FAST, and no matter how big I make the buffer, I simply will not get an hour's worth of data at 1 MHz. Is such a sampling rate doable with a 6071? If so, what is the approach? Or is it simply not possible to have a functional, continuously read ring buffer and have it stream to disk?

-Alex
Message 1 of 14
Hello Alex,

It looks like we are dealing with several issues here.

First, the PXI-6071E's maximum sampling rate is 1.25 MS/s, so you should be OK there.

Second, there's a chance that your hard drive cannot write at that speed. Included below is a sample VI that tests your hard drive's write speed. I tested it on my computer (two years old) and was able to write at about 18 MB/s, so that shouldn't be the problem either.
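(If you can't open the attachment, here is the logic of the test as a rough Python sketch, purely a stand-in for the G code; the chunk size, total size, and file name are illustrative, not taken from the VI:)

    # Time how long it takes to write a fixed amount of dummy data in
    # DAQ-sized chunks; total bytes / elapsed time = sustained write speed.
    import os, time

    CHUNK = 2 * 1024 * 1024            # 2 MB per write, ~1 s of data at 1 MS/s x 2 bytes
    TOTAL = 200 * 1024 * 1024          # 200 MB test file
    data = b"\x00" * CHUNK

    start = time.time()
    with open("speed_test.bin", "wb") as f:
        for _ in range(TOTAL // CHUNK):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())           # make sure the data actually hit the disk
    elapsed = time.time() - start
    print(f"{TOTAL / elapsed / 1e6:.1f} MB/s")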

Third is your disk size. 1 MS/s x 2 bytes per sample (12-bit data is actually written as 16 bits) x 1 hour (3600 s) = 7.2 x 10^9 bytes, about 6.7 GiB. You might be running out of disk space.
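(The arithmetic spelled out, using only the numbers above:)

    # One hour of 1 MS/s data, with 12-bit samples stored as 16-bit values.
    rate = 1_000_000              # samples per second
    bytes_per_sample = 2          # 12 bits padded to 16 bits on disk
    seconds = 3600                # one hour
    print(rate * bytes_per_sample * seconds)   # 7_200_000_000 bytes, ~6.7 GiB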

Fourth issue is file size. What file system are you using? NTFS can accommodate file sizes up to 16 exabytes, FAT32 only up to 4 GB per file, and LabVIEW itself can only handle 2 GB file sizes. Off the record, and unsupported by National Instruments, openg.org has a VI that will let you access larger files.

As far as efficiency goes, the "High Speed Data Logger.vi" example (found in NI Example Finder) may provide useful code.
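(The example itself is G code, so I can't paste it here, but the structure it uses is a classic producer/consumer: the acquisition loop only hands data off to a queue, and a separate loop does the slow file writes. A rough Python sketch of that same structure, purely illustrative:)

    # Producer/consumer: the DAQ loop never blocks on disk I/O.
    import queue, threading

    q = queue.Queue(maxsize=64)            # bounded queue = bounded memory use

    def writer(path):
        with open(path, "wb") as f:
            while True:
                chunk = q.get()
                if chunk is None:          # sentinel: acquisition finished
                    return
                f.write(chunk)

    t = threading.Thread(target=writer, args=("log.bin",))
    t.start()

    for _ in range(20):                    # stand-in for the DAQ read loop
        chunk = b"\x00" * (500_000 * 2)    # half a second of data at 1 MS/s
        q.put(chunk)                       # hand off; the DAQ loop keeps running

    q.put(None)                            # tell the writer we're done
    t.join()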

If none of these suggestions help, please post the software and versions you are using, the applicable portions of your code, and any other information that may help, and I'll be happy to look further into it.

Have a nice day!

Robert M
Applications Engineer
National Instruments
Message 2 of 14
Robert,

I'll test the hard drive speed, but I don't think that's the issue. I tried eliminating the file writing and just sampling the data while watching the backlog. The backlog slowly creeps up. When I slow the sampling rate to about 650 kS/s, the backlog is almost insignificant whether or not I stream the data. Can you possibly point me to the simplest code I can use to test 1 MS/s, or better still, the maximum rate I can continuously stream to the hard drive (in this case a PXI-8176 with a 20 GB HDD)? I originally thought that I'd be able to do some processing of the data by queuing a given frame and processing it outside the DAQ loop. Is this possible at the upper-end rates?

Thank you.

Alex Golovitser
Message 3 of 14
Alex,

I agree that the bottleneck is not your hard drive, especially since you're still getting a backlog when not streaming to disk. Let's look at some of the other factors that can affect performance.

1. How many scans are you reading at a time? If this is set too low, the overhead of so many file transfers can slow things down. The "High Speed Data Logger.vi" (the simplest and fastest code I've found) has a default "scans to read" set to half the scan rate, which works out to about 2 file transfers per second. At 1 MS/s, if your "scans to read" is set to 1000, that would be 1000 file transfers per second. (See the quick arithmetic sketch after this list.)

2. How much memory do you have? The PXI-8176 comes with 128 MB standard, 512 MB max. If you only have 128 MB, that might be your slowdown.

3. What's your processor speed? The PXI-8176 comes with a 1.26 GHz Pentium III, which might also slow things down at that rate.
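(The point 1 arithmetic, in Python for lack of a way to paste G code; the chunk sizes are just examples:)

    # DAQ reads (and file writes) per second for a given "scans to read".
    scan_rate = 1_000_000                  # 1 MS/s
    for scans_to_read in (1_000, 100_000, 500_000):
        print(scans_to_read, "scans/read ->", scan_rate // scans_to_read, "reads/s")
    # 1000 scans/read   -> 1000 reads/s: per-call overhead dominates.
    # 500000 scans/read ->    2 reads/s: the example's default of scan rate / 2.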

As far as processing outside of the loop goes, it just depends on your configuration and what you are doing. Anything graphical would take up way too much memory and processor time for this application, so you wouldn't want any graphs or charts to be running on the data. You'll just have to test and see what works for your application.

If nothing here helps, post me a screen shot of your application, or the code itself, and I�ll see what else I can find out.

Robert M
Applications Engineer
National Instruments
Message 4 of 14
This is the simplest version of what I'm attempting. There is no processing or FIFO stack or anything of that nature. If I can get this to work properly, I can finish the actual task, but alas, this backlogs out.
The PXI-8176 is a 1.2 GHz PIII with 512 MB RAM and a 20 GB HDD.

Thank you.

Alex Golovitser
Message 5 of 14
Hi Alex,

I modified the code you posted to read the backlog first.

What kind of numbers can you do now?

Ben
Message 6 of 14
Unfortunately, this doesn't solve my problem. All it appears to do is force the AI reads up to the maximum amount of acquired data. I still end up with a 10846 (retrieve data from background) error in about the same amount of time.

Alex
Message 7 of 14
Hi Alex,

If you toss the file write, I believe the code I posted should burn!

Some quick math indicates that you are trying to write an extremely large file, something like 14,400,000,000 bytes (14.4 GB). This is based on 1,000,000 samples per second, 4 bytes per sample, etc. (Maybe I'm wrong on these numbers?)

I'm now suspecting the disk again.

Try modifying your example to get rid of the DAQ stuff and test how fast you can write a file.

I believe that if you create your output file and fill it up with dummy data (enough to ensure that it will not have to be expanded), and then write your data over top of the junk during the run, you MAY get better performance. This is also one of those special cases where ensuring your disk is not fragmented will help as well.
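(Here's the pre-allocation idea as a rough Python sketch; sizes and names are made up for illustration:)

    # Create the whole file up front with dummy data so the file system is not
    # growing (and fragmenting) it during the acquisition, then overwrite in place.
    CHUNK = 2 * 1024 * 1024
    TOTAL = 200 * 1024 * 1024              # scaled down; a real run would be ~7.2 GB

    with open("capture.bin", "wb") as f:
        dummy = b"\x00" * CHUNK
        for _ in range(TOTAL // CHUNK):
            f.write(dummy)                 # physically allocate every block now

    with open("capture.bin", "r+b") as f:  # reopen without truncating
        offset = 0
        for _ in range(3):                 # stand-in for the DAQ loop
            data = b"\x01" * CHUNK
            f.seek(offset)
            f.write(data)                  # overwrite the junk, no allocation needed
            offset += len(data)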

Another test would involve collecting all of the data into a pre-allocated memory buffer and only writing to disk when the acquisition is done. You will probably only be able to do 15 minutes of collection, if my math is correct.
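(Sketched in Python with the buffer scaled down; at 2 bytes per sample, every 2 MB of free RAM buys you one second of data:)

    # Pre-allocate one big in-memory buffer, fill it during the acquisition,
    # and write it to disk in a single pass at the end.
    import array

    SAMPLES = 10_000_000                       # 10 s at 1 MS/s, scaled down
    buf = array.array("h", bytes(2 * SAMPLES)) # pre-allocated 16-bit buffer

    pos = 0
    while pos < SAMPLES:                       # stand-in for the DAQ read loop
        chunk = array.array("h", [0] * 500_000)
        buf[pos:pos + len(chunk)] = chunk      # copy into the big buffer, no growth
        pos += len(chunk)

    with open("capture.bin", "wb") as f:
        buf.tofile(f)                          # one long sequential write at the end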

I am always interested in pushing speed and performance so please update this thread with your findings.

Trying to help,

Ben
Message 8 of 14
OK, my math was off!

I still think you are looking at two issues.

1) Keeping up with the acq. The example I posted will do that, provided the file writes do not get in the way.

2) File writing involves not only recording the data but also allocating space on the disk to store it. The pre-allocation I suggested MAY help there.

2 GB of memory would let you buffer a much longer run in memory until the acq was done.

Ben
Message 9 of 14
Alex,

How long does this code run before it backlogs out? Does the backlog immediately start building up, or does it run fine for a while and then start building? If it runs for a while, you have enough processing power, but you are either running out of memory or disk space.

One other issue: is anything else running on the computer at the same time? If anything intensive is running, a 4 MB buffer may not be big enough. You have the memory to bump the buffer up significantly; doing so will let the processor be elsewhere for a while, come back, and still catch up.
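(The headroom arithmetic, for reference; the buffer sizes are just examples:)

    # Seconds of catch-up headroom a DAQ buffer gives at 1 MS/s, 2 bytes/sample.
    rate_bytes = 1_000_000 * 2                 # 2 MB of new data per second
    for buf_mb in (4, 64, 256):
        print(buf_mb, "MB buffer ->", buf_mb * 1024 * 1024 // rate_bytes, "s of headroom")
    # 4 MB is barely 2 s; 256 MB gives the processor over 2 minutes to catch up.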

How long do you need this to run? How much data do you actually need? You could also try opening several files before your while loop starts, and then simply switch between files when one gets fairly large.
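(A rough Python sketch of the rollover logic; the file count, names, and size cap are illustrative:)

    # Pre-open several files, then switch to the next one when the current
    # file nears a size cap, so no single file approaches the 2 GB limit.
    FILE_CAP = 1_800_000_000               # stay well under 2 GB per file
    files = [open(f"capture_{i}.bin", "wb") for i in range(4)]

    current, written = 0, 0
    for _ in range(100):                   # stand-in for the DAQ loop
        chunk = b"\x00" * (500_000 * 2)    # half a second of data
        if written + len(chunk) > FILE_CAP:
            current += 1                   # roll over to the next pre-opened file
            written = 0
        files[current].write(chunk)
        written += len(chunk)

    for f in files:
        f.close()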

Let me know if these suggestions help, and have a nice day!

Robert M
Applications Engineer
National Instruments
Message 10 of 14