LabVIEW


16 channels of data at 30kHz on the NI USB-6259 - is it possible?

Hello -- I would like to use the NI USB-6259 for electrophysiology.  My application requires 16 channels of analog input digitized at 30 kHz.  This should be possible, since the listed aggregate digitization rate is 1 MS/s and 16 channels at 30 kS/s is only 480 kS/s.  However, I've run into several problems, all of which I believe come down to how memory is handled in LabVIEW (or maybe I'm just programming it wrong).

 

First off, if I pull in continuous voltage input using the DAQ Assistant (DAQA), reading anywhere from 5k to 30k samples at a time -- without any visualization -- things run all right.  I record the data in binary (TDMS) format using the "Write to Measurement File" (WTMF) routine.  However, I notice that the RAM used by LabVIEW creeps up at a steady pace, roughly a megabyte every few seconds.  This makes long-term recording infeasible.  Is there any way to avoid this?  Basically I just have the DAQA and WTMF in the while loop that was automatically created when I set the acquisition mode to continuous.
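As a sanity check on numbers like "a megabyte every few seconds", it helps to compute the raw data rate this acquisition actually produces.  The sketch below is plain Python arithmetic (not LabVIEW); the 8-byte figure assumes scaled double-precision waveform data, while the device itself produces 16-bit samples:

```python
# Back-of-envelope throughput for 16 channels at 30 kS/s.
# Assumption: DAQmx hands LabVIEW scaled doubles (8 bytes/sample);
# the USB-6259's raw ADC codes are 16-bit (2 bytes/sample).
channels = 16
rate_hz = 30_000
bytes_per_scaled_sample = 8   # DBL waveform data
bytes_per_raw_sample = 2      # 16-bit codes on the wire

scaled_mb_per_s = channels * rate_hz * bytes_per_scaled_sample / 1e6
raw_mb_per_s = channels * rate_hz * bytes_per_raw_sample / 1e6

print(f"scaled data rate: {scaled_mb_per_s:.2f} MB/s")  # 3.84 MB/s
print(f"raw data rate:    {raw_mb_per_s:.2f} MB/s")     # 0.96 MB/s
```

So the data stream itself is a few MB/s; a steady RAM climb of ~1 MB every few seconds on top of that points to buffered history (charts, growing arrays) rather than the acquisition itself.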

 

Secondly, I would like to visualize my data as I record it.  If I set up 16 graphs -- one for each signal -- I need to raise the "Samples to Read" (STR) to 30k to avoid the "Attempted to read samples that are no longer available" error (-200279).  This is annoying, since it makes the display look jerky, but is probably livable.

 

Now if I choose to display the data in 16 charts rather than graphs (charts, as defined in LabVIEW, display a bit of cumulative history along with the real-time signal), the RAM used by LabVIEW increases by several megabytes a second, regardless of whether I'm saving the data.  After a short time, I get an out-of-memory error.

 

Ideally I would like to display 16 channels of 30 kHz analog voltage data while saving it, and as you can see I'm having some trouble doing either of these things.  The bare minimum for my application would be to pull in the data with an STR of 30k, visualize it in graphs, and save it.  Should this be possible in LabVIEW 8.6 or 2009 (I use 8.6, but have tried these steps on the trial version of 2009 as well)?  Even better, I would like to use an STR closer to 5k and display the data in charts as it's saved.  Should this be possible?

 

I'm using a reasonably powerful machine -- 32-bit Windows 7, 3.24 GB of RAM, 2.4 GHz quad-core, etc.

 

Thanks

Message 1 of 7
Please post your code.  What you wish to do is perfectly possible with properly designed code, so my first thought is that it has something to do with the way you've set things up.  Are you using separate loops for the data collection and the front panel display / file write?
Message 2 of 7

I've attached code that contains all of the components I'd like to have in an ideal program.  The data collection and display are in the same loop -- I suppose that might not be the best way to code it?  Ideally the display would be running on a different thread from that of the data collection.  There's some filtering thrown in there as well, but that doesn't seem to contribute to the problem (and if it did it could be removed).

 

Thanks!

Message 3 of 7

Hello!

 

I will admit right now that I can't stand any of the "assistants" and never use them.  I don't like having any part of my code hidden from me.  Therefore, looking at your code gave me a headache.  🙂

 

So what I did was rewrite your code using the lower-level DAQmx functions (basically what you'd see if you selected "Open Front Panel" on the DAQ Assistant icon).  You can go in and put the DAQ Assistant back in if you so desire; this is just to give you an idea of the approach you should take.  I'm grabbing 15000 points per loop iteration, just because I happen to like 500 ms loop rates (15000 samples at 30 kS/s).  You can tailor this number to your needs.

 

I have two parallel loops -- one collects the data and the other displays it on the front panel and writes it to a file.  (I used the "Write waveform to file" function -- you can put your assistant back in there instead if you like.)  The data is passed from the DAQ loop to the display loop using a queue.  I use the "index array" function to select out the individual channels of data for display.  I show 3 channels here, but you can easily expand that to accommodate all 16.  You can also add your filtering, etc.
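Since LabVIEW is graphical, the structure above can't be shown as LabVIEW text, but the producer-consumer pattern it describes can be sketched in Python: one loop acquires blocks and enqueues them, a second loop dequeues, splits out channels ("Index Array"), and displays/logs.  All names here are illustrative, and a list of zeros stands in for the DAQmx read:

```python
# Sketch of the two-loop producer-consumer pattern: the DAQ loop only
# acquires and enqueues; the display loop dequeues, splits channels,
# and would update graphs / write the file.  Hypothetical stand-ins
# replace the actual LabVIEW DAQmx and file-write calls.
import queue
import threading

SAMPLES_PER_READ = 15_000   # ~500 ms per block at 30 kS/s
NUM_CHANNELS = 16

def daq_loop(q, stop, iterations):
    """Producer: acquire a block and enqueue it; never touches the display."""
    for _ in range(iterations):
        if stop.is_set():
            break
        # Stand-in for "DAQmx Read": one 2-D block, channels x samples.
        block = [[0.0] * SAMPLES_PER_READ for _ in range(NUM_CHANNELS)]
        q.put(block)
    stop.set()  # plays the role of the notifier: tell the consumer to finish

def display_loop(q, stop, consumed):
    """Consumer: dequeue blocks, split out channels, display/log each one."""
    while not (stop.is_set() and q.empty()):
        try:
            block = q.get(timeout=0.2)
        except queue.Empty:
            continue
        channels = [block[ch] for ch in range(NUM_CHANNELS)]  # "Index Array"
        consumed.append(len(channels))  # here: update graphs / write TDMS

q, stop, consumed = queue.Queue(), threading.Event(), []
producer = threading.Thread(target=daq_loop, args=(q, stop, 3))
consumer = threading.Thread(target=display_loop, args=(q, stop, consumed))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(consumed)  # [16, 16, 16]
```

The key property is the same as in the LabVIEW version: the queue decouples the fixed-rate acquisition from the slower display/file work, so a slow redraw never causes the DAQ buffer to overflow.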

 

I am using a notifier to stop the two loops with a single button, or in case of an error.  If "stop" is pressed or an error occurs in the DAQ loop, a "T" value stops the DAQ loop and is sent to the notifier; when the display loop receives that "T", it stops as well.

 

I don't have a 6259 on hand, so I simulated one in MAX.  I didn't have a problem with the processor running at 100% -- on my clunky old laptop here, the processor typically showed ~40-50% usage.

 

I've added comments to the code to help you understand what I'm doing here.  I hope this helps!

d

 

 

P.S.  I have a question...how are you currently stopping your loop?  You have "continuous samples" selected, and no stop button.

Message Edited by DianeS on 12-30-2009 07:28 PM
Message 4 of 7

Oh wow, thanks!  I'm going to be away from my DAQ for a couple of days, but I'll try out the code and report back when I can.  I really appreciate it!

 

Regards,

Brian

Message 5 of 7

Brian,

 

I have not looked at the code, but one point should be made: using a chart to store millions of samples is probably not a good plan.  Charts and graphs must reduce the data if more points are sent to them than there are pixels in the displayed line.  If your chart is 1000 pixels wide, send it only 1000 data points.  There are several ways of reducing the data: decimation, averaging, and others.  If you feed millions of points to the chart, LabVIEW will reduce them to the number of pixels for display but still store all the data in the chart's history buffer.  You may also be making extra copies of the data, which aggravates your memory growth.
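The two reductions mentioned above can be sketched in a few lines of Python (illustrative only; the function names and the 1000-pixel width are made up for the example):

```python
# Reduce a block of samples to roughly the chart's pixel width before
# plotting.  Two common reductions: plain decimation (keep every k-th
# point) and block averaging (average each run of k samples).

def decimate(samples, width):
    """Keep roughly `width` evenly spaced points."""
    k = max(1, len(samples) // width)
    return samples[::k]

def block_average(samples, width):
    """Average each run of k samples down to one point."""
    k = max(1, len(samples) // width)
    return [sum(samples[i:i + k]) / len(samples[i:i + k])
            for i in range(0, len(samples), k)]

raw = list(range(30_000))             # one second of one channel at 30 kS/s
print(len(decimate(raw, 1000)))       # 1000
print(len(block_average(raw, 1000)))  # 1000
```

Either way, the chart only ever receives about as many points as it can draw, so its history buffer stays bounded while the full-rate data goes to the file.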

 

Users cannot perceive updates faster than a few times per second, so there is no need to write to the chart more often than that.
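That update-rate limit can be enforced with a simple time-based throttle.  Here is a Python sketch (again illustrative, not LabVIEW; `redraw` is a hypothetical stand-in for updating a chart): every data block is still logged, but the display is only refreshed when enough time has elapsed.

```python
# Throttle display updates to a few per second: redraw only when at
# least `interval` seconds have passed since the last redraw.
import time

MIN_REDRAW_INTERVAL = 0.25  # seconds -> at most ~4 updates/s

def make_throttled(redraw, interval=MIN_REDRAW_INTERVAL, clock=time.monotonic):
    last = [float("-inf")]
    def maybe_redraw(block):
        now = clock()
        if now - last[0] >= interval:
            last[0] = now
            redraw(block)   # update the chart
            return True
        return False        # skip this redraw; data is still saved elsewhere
    return maybe_redraw

# Usage with a fake clock: 10 blocks arriving every 100 ms yield 4 redraws.
t = [0.0]
drawn = []
update = make_throttled(drawn.append, clock=lambda: t[0])
for i in range(10):
    update(i)
    t[0] += 0.1
print(len(drawn))  # 4
```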

 

Lynn 

Message 6 of 7

Hi Diane and Lynn -- thanks so much for your replies.  Diane: there's still a small memory leak when using your code, but if I replace the charts with graphs it works beautifully.  I'm able to visualize 16 channels and save the data without any memory leak.  Lynn, on your advice I'll do a little averaging of the signal to minimize the memory leak from drawing to the charts.

 

Thanks again!

-Brian

Message 7 of 7