The fastest way of acquiring data?

Fast forward >>>

 

If you sit down and do some math using the sample rate, taking into consideration that the data has to be written to disk, you will find that your disk I/O is a bottleneck (regardless of what software you develop in).

 

Search this site for "Producer/Consumer" design pattern.

 

You will want to do a continuous double-buffered acquisition in the producer and use queues to transfer the updates to one consumer that displays the data and another that writes it to disk.
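Purely for illustration: in LabVIEW this is two parallel loops joined by a queue. In text form the same idea looks roughly like the following Python sketch (the block sizes are made up and random numbers stand in for the DAQ read):

    # Sketch only: two "loops" (threads) joined by a bounded queue.
    import queue
    import threading
    import numpy as np

    data_q = queue.Queue(maxsize=64)   # bounded queue gives back-pressure
    BLOCK = 250_000                    # samples per read (~0.1 s at 2.5 MS/s)
    N_BLOCKS = 100

    def producer():
        # Stands in for the acquisition loop (the DAQmx Read in LabVIEW).
        for _ in range(N_BLOCKS):
            block = np.random.randint(-32768, 32767, BLOCK).astype(np.int16)
            data_q.put(block)          # blocks if the consumer falls behind
        data_q.put(None)               # sentinel: acquisition finished

    def consumer(path="stream.bin"):
        # Stands in for the logging loop: raw binary, no text conversion.
        with open(path, "wb") as f:
            while True:
                block = data_q.get()
                if block is None:
                    break
                f.write(block.tobytes())

    t_acq = threading.Thread(target=producer)
    t_log = threading.Thread(target=consumer)
    t_acq.start(); t_log.start()
    t_acq.join(); t_log.join()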

 

Write the data as binary, since converting to text will demand more throughput.
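To put rough numbers on that (assuming the single-channel, 2.5 MS/s, 16-bit case discussed later in the thread): raw I16 binary is 2 bytes per sample, about 5 MB/s, while each sample formatted as ASCII text (sign, several digits, a separator) easily takes 8-10 bytes, i.e. 20-25 MB/s to the disk, plus the CPU cost of the number-to-string conversion.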

 

Throttle your graph updates, because even a single second's worth of data would require every pixel to represent roughly 2,000 data points.
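One common way to do that throttling (my wording, not a quote from the post) is to decimate the data before it ever reaches the graph, for example keeping only a min/max pair per pixel column, while the full-rate data goes only to disk. A small numpy sketch:

    # Hypothetical min/max decimation for display only; the full data still goes to disk.
    import numpy as np

    def decimate_minmax(samples, n_pixels):
        # Reduce a long record to two points (min, max) per pixel column.
        usable = (len(samples) // n_pixels) * n_pixels
        cols = samples[:usable].reshape(n_pixels, -1)
        return np.column_stack((cols.min(axis=1), cols.max(axis=1))).ravel()

    # e.g. one second at 2.5 MS/s plotted on a ~1000-pixel-wide graph:
    # plot_points = decimate_minmax(one_second_of_data, 1000)   # 2000 points, not 2.5 M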

 

Just trying to push it along,

 

Ben

Message 11 of 24

Norbert, even in that one the error occurs when I get it to 2.5 MHz. It actually works for a few seconds, and then it happens again.

Message 12 of 24

Ben wrote:

Throttle your graph updates, because even a single second's worth of data would require every pixel to represent roughly 2,000 data points.


Ben,

 

what do you mean by "Throttle your graph updates"?

And thanks for the Producer/Consumer tip, I'm on it now...

Message 13 of 24

Rafael,

 

If this example runs for several seconds before producing the error, you are running into the issue Ben describes.

The disk is definitely a bottleneck. How many channels are you currently using?

If more than one, please reduce the number of channels to one and see if the example works there. If so, you should split acquisition and logging into two different loops using producer/consumer. Then you can increase the number of channels to benchmark the maximum transfer rate to your disk. If it is too low for your requirements, you have to think about dedicated data storage, such as a RAID system connected to the machine.
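As a crude way to benchmark that transfer rate outside of LabVIEW (the file name and duration here are arbitrary), you can time how long it takes to write the same amount of binary data the acquisition would produce:

    # Rough disk-throughput check: write one simulated second of data per iteration.
    import os
    import time
    import numpy as np

    block = np.zeros(2_500_000, dtype=np.int16)   # 1 s of one I16 channel at 2.5 MS/s = 5 MB
    seconds = 20

    start = time.perf_counter()
    with open("throughput_test.bin", "wb") as f:
        for _ in range(seconds):
            f.write(block.tobytes())
        f.flush()
        os.fsync(f.fileno())                      # make sure it really hit the disk
    elapsed = time.perf_counter() - start
    print(f"{seconds * block.nbytes / elapsed / 1e6:.1f} MB/s sustained")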

 

You can find useful knowledge base articles on streaming here, and here.

 

hope this helps,

Norbert 

 

[Edit]: Ben is talking about removing any graphical display of the data during acquisition. The data should be included in the file and available for offline analysis. So there is no need to display it during runtime. 

Message 14 of 24

Norbert,

 

I'm actually using only one channel for these tests.

About the graphics, I'd already removed them, thanks.

 

And thanks for the links, I'll be reading them as well.

Message 15 of 24

Danigno wrote:

Ben wrote:

Throttle your graph updates, because even a single second's worth of data would require every pixel to represent roughly 2,000 data points.


Ben,

 

what do you mean by "Throttle your graph updates"?

And thanks for the Producer/Consumer tip, I'm on it now...


Please review Dr. Gray's KB article on large data sets here.  And while I am at it, this list of tags covers many aspects of LabVIEW performance that could sneak up and bite you. They are high on my list of recommended reading. ;)

 

Have fun!

 

Ben

Message 16 of 24

Since Ben invoked me, I have to pontificate. ;)

 

The biggest thing you can do to help performance is use raw binary instead of scaled data.  Use the DAQ Wizard to generate code, then change the output type to unscaled binary.  This will give you a 4X improvement in speed (2 bytes/sample for I16 vs. 8 bytes/sample for DBL).  You can query for the scaling coefficients and save them either at the beginning or end of your acquisition.  You should be able to stream at least 4 channels at 2.5 MS/s this way (20 MB/s) on a generic set of modern hardware (defrag your disk first).  If you have the appropriate RAID drive or a very fast single drive, your bandwidth limit may switch to the PCI bus, or you may be able to stream all 8 channels.  I would have to try it.
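The thread is about code generated inside LabVIEW, but purely to illustrate the same idea (unscaled I16 read, raw binary straight to disk), here is a sketch using NI's nidaqmx Python package; the device name "Dev1", channel count, and buffer sizes are assumptions, not values from this thread:

    # Sketch: continuous unscaled I16 acquisition streamed to a binary file.
    import numpy as np
    import nidaqmx
    from nidaqmx.constants import AcquisitionType
    from nidaqmx.stream_readers import AnalogUnscaledReader

    RATE = 2_500_000
    BLOCK = 250_000            # samples per channel per read
    N_CHANNELS = 4

    with nidaqmx.Task() as task, open("raw_i16.bin", "wb") as f:
        task.ai_channels.add_ai_voltage_chan(f"Dev1/ai0:{N_CHANNELS - 1}")
        task.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS,
                                        samps_per_chan=10 * BLOCK)
        reader = AnalogUnscaledReader(task.in_stream)
        buf = np.zeros((N_CHANNELS, BLOCK), dtype=np.int16)
        task.start()
        for _ in range(100):   # about 10 s of data
            reader.read_int16(buf, number_of_samples_per_channel=BLOCK)
            f.write(buf.tobytes())   # 2 bytes/sample; channel-major within each block

In a real program the file write would sit in a separate consumer loop, as Ben described earlier, rather than in the acquisition loop itself.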

 

One last point.  The write speeds of the file formats you have easy access to are, from fastest to slowest: the LabVIEW Write primitive, NI-HWS, and NI-TDMS.  I would recommend using the one highest on the list that you can maintain easily (don't forget how you will use the data later!).

 

Good luck!  Let us know if you need more help.

Message 17 of 24

Hey Gray, thanks for coming :)

 

So, using the raw unscaled binary with I16 makes the graph show only one weird measurement. I believe that's because the raw unscaled binary format is not meant for the graph... am I right? But I don't need graphs, so they have already been removed.

How can I be sure, in this "generated code" environment, that it is acquiring samples at 2.5 MHz? I can change the number of samples, but there's no place that tells me the frequency... If the frequency I set in the "DAQ Assistant" remains the same, then that's OK, but how can I be sure of that?

And where do I get the scaling coefficients from? (They are so I can understand and read the saved file, right?)

 

Sorry for all the beginner's questions, but I know this is a "piece of cake" for you guys, and it's probably a few minutes' answer. Please don't think that I'm not looking for answers elsewhere; it's just that, as usual, my deadline is running in my direction...

 

Message 18 of 24

Where the sample rate lives depends on how you generate your code.  If you generate from a task, the info will be contained in the task.  If you convert the task to a configuration VI, the info will be there.  If you simply open the front panel of the Express VI, the info will be there as well (unless you chose a task instead of a configuration).  The VI which sets it is DAQmx Timing.vi.
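One way to be sure of the actual rate, whichever way the code was generated, is to read the sample clock rate back after it has been configured; the driver coerces it if the requested value is not achievable. In LabVIEW that is a DAQmx Timing property node; continuing the earlier Python sketch, the equivalent would be roughly:

    # Read back the coerced sample clock rate after cfg_samp_clk_timing(...).
    actual_rate = task.timing.samp_clk_rate
    print(f"Requested 2.5 MS/s, driver is using {actual_rate / 1e6:.3f} MS/s")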

 

The scaling information can be read from a property node.  Open the DAQmx - Data Acquisition palette; it is probably the first property node on the left (the third row is property nodes in LabVIEW 8.2).  Drop this and wire it to your DAQ reference.  Your first item should set the Active Channel.  Now select Analog Input->General Properties->Channel Calibration->Scaling Parameters->xxx.  DAQmx has two calibration schemes - query for which one your device is using and then get the coefficients.  You can read more about these properties and how to use them in the LabVIEW help file.  (NOTE: I am not a DAQmx expert.  There may be a better/easier way to do this.)
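Once the coefficients have been queried (through the property node described above), applying them to the raw readings is just a polynomial evaluation. A small sketch, assuming the coefficients come back lowest power first; the example values at the end are made up:

    # Convert raw I16 readings to volts using DAQmx device scaling coefficients.
    import numpy as np

    def raw_to_volts(raw, coeffs):
        # coeffs assumed ordered c0, c1, c2, ... (lowest power first)
        raw = raw.astype(np.float64)
        volts = np.zeros_like(raw)
        for power, c in enumerate(coeffs):
            volts += c * raw ** power
        return volts

    # made-up coefficients, roughly a 10 V range mapped onto 16-bit codes:
    # volts = raw_to_volts(raw_i16, [0.0, 3.05e-4])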

 

Wiring an array of I16s to the graph should have worked.  Did you have the Y-axis set to autoscale?  The range will change dramatically between scaled and unscaled data.

Message 19 of 24

Nice, I've learned about the scaling now and I'm already using it... Thanks Gray! Now I'm looking at "saving time samples"...

 

I've recorded the data and viewed it in a graph with one channel... How can I, using "Read Binary File.vi", read more than one channel? I'm thinking of splitting the array retrieved from the file into two arrays and then plotting both... Is there a better way to do that, knowing that later I'll need to do that with 8 channels?
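Just to sketch the array bookkeeping I mean (outside LabVIEW, with numpy; whether this particular reshape is right depends entirely on the order in which the writer put the samples into the file):

    # Split a multi-channel raw I16 file into per-channel arrays.
    # Assumes samples were written interleaved: ch0, ch1, ch0, ch1, ...
    import numpy as np

    N_CHANNELS = 2                                        # later: 8

    raw = np.fromfile("raw_i16.bin", dtype=np.int16)
    raw = raw[: len(raw) - len(raw) % N_CHANNELS]         # drop any trailing partial frame
    channels = raw.reshape(-1, N_CHANNELS).T              # shape (N_CHANNELS, n_samples)
    # channels[0] is the first channel, channels[1] the second, and so on.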

Message 20 of 24