Multifunction DAQ

DAQmx 32 bit DMA and Number of Booleans per Channel property

I have an application that uses change detection on an M-series board to latch state changes on 6 digital lines (port 0, lines 0-5).  My application also needs to respond immediately and programmatically depending on which lines change, so I dynamically register the task to an event structure so that I can process which lines changed and respond.  Because I'm only sampling 6 lines, I cannot use DMA transfers, which require a 32-bit minimum transfer size, so I'm currently using hardware-timed single-point sampling with programmed I/O reads.  This means that I cannot use a buffer, so it's possible that I can miss line-change events if they occur too quickly (all I can do is check whether any were missed using the Channel property Status->Advanced->ChangeDetection->Overflowed).  I thought about using other digital lines on the board (ports 1 and 2) as "dummy" lines that I can sample, but many are used for other tasks, and even if I could, that would still only provide 24 bits.
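In case it helps show what I mean by "respond depending on which lines change": the event-handler logic boils down to an XOR against the previous sample, masked to the lines I care about. A minimal sketch in plain C, independent of the driver (assuming the 6 monitored lines occupy the low 6 bits of each sample; the names are just illustrative):

```c
#include <stdint.h>

/* Assumption: port 0, lines 0-5 land in the low 6 bits of each sample. */
#define LINE_MASK 0x3Fu

/* Returns a bitmask of the monitored lines that differ between two
   consecutive samples; any padding bits above line 5 are ignored. */
static inline uint32_t changed_lines(uint32_t prev, uint32_t curr)
{
    return (prev ^ curr) & LINE_MASK;
}
```

On each change-detection event I would XOR the new sample against the stored previous one and branch on the resulting mask.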

However, I've discovered that the DAQmx Read class has a property called Advanced->Digital Input->Number of Booleans per Channel.  This property apparently allows you to pad reads with extra bits that can be ignored.  If I set this to 32, will I be able to go back to continuous sampling with DMA reads, and therefore have a buffer that can handle event overflow? Is there any other way I could implement my application?

Thanks.
Message 1 of 5

Semi-random assortment of thoughts:

Have you actually tested DMA and found it to fail?  I wasn't aware that you can't or shouldn't use DMA for data transfers in this type of app.  Maybe it depends on the underlying timed digital port size?  I've only used the 6259 with a 32-bit port.  I've thought that it transfers a 32-bit port reading immediately on a bit change, and masking gets applied somewhere under the hood so that the 26 bits I don't care about are always reported as 0.

It sounds like you've got an M-series board with only 8 bits for timed DIO, right?  I guess then I can picture how you might have trouble.  It seems unlikely to me that it should matter, but have you tried to see whether it matters how you define your channel(s) -- one channel for all bits vs. one channel for each bit?

Have you tried buffering based on interrupts?  Not as fast as DMA, but probably faster than on-demand.  Dunno if it's fast enough to give you reliable no-miss running though.

If you *do* try DMA, what happens?  Does the "available samples" property remain at 0 during changes 1,2,3 and then jump to 4 at change 4?  Or does it increment 1,2,3 but not allow you to successfully Read until you get to 4?

Wish I had a definite answer to give you, but will be interested to learn more from any other followup posts.

-Kevin P.

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 2 of 5
I did much of the testing of modes/settings a while back, but I can answer a few of your thoughts from what I remember:

I have tested DMA transfers, and indeed no samples show as available for reading until 32 bits' worth of data (4 line changes) is available.  I can't remember whether the "Available Samples" property increments or jumps, but I do know that I get timeout errors if I try to read before there is a transfer.  This behavior was the same whether I assigned all lines to 1 channel or gave each line a separate channel.

Transfers using interrupts still require 32 bits' worth of data to be available (I was quite surprised by this).  By dynamically registering the Change Detection signal to an event structure, I am essentially creating interrupts on each line change so that I can transfer the data myself.  The only way I have found to read data in non-32-bit chunks is with programmed I/O.  I do not use On-Demand sampling--that would mean the software provides the sample clock.  Here the Change Detection signal is the sample clock, while the data transfers are software controlled (programmed I/O).  Although technically, because I transfer data in software from a dynamic event hooked to the Change Detection event, it's essentially the same thing.  In my case, however, I also latch a counter on each Change Detection so that I can attach a timestamp to each line change.  On-Demand sampling would make these timestamps inaccurate.

Bottom line is that my sampling is hardware controlled and well defined.  It works well for me.  It's the data transfers that cause the trouble.
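For anyone finding this thread later, the hardware-timed change-detection setup I described looks roughly like this in the DAQmx C API (a sketch only: "Dev1" and the line range are placeholders for the actual hardware, and error handling is omitted):

```c
#include <NIDAQmx.h>

/* Sketch of a change-detection task on 6 digital lines; "Dev1" is a
   placeholder device name. Error checking omitted for brevity. */
int setup_change_detection(TaskHandle *taskHandle)
{
    const char *lines = "Dev1/port0/line0:5";

    DAQmxCreateTask("", taskHandle);
    DAQmxCreateDIChan(*taskHandle, lines, "", DAQmx_Val_ChanForAllLines);
    /* Sample on both rising and falling edges of all six lines;
       hardware-timed single point, as described above. */
    DAQmxCfgChangeDetectionTiming(*taskHandle, lines, lines,
                                  DAQmx_Val_HWTimedSinglePoint, 1);
    return DAQmxStartTask(*taskHandle);
}
```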

Any help/thoughts are appreciated.
Message 3 of 5

Thanks for the details.  I'm afraid I've got nothing useful to offer.  Having used only the 6259 with a 32-bit port, I never encountered anything like this personally and have no experience to draw from.  You've already explored all the options I know of and then some.  In fact, while I think I recall that the '# available samples' property incremented by 1's on every change, I'm not 100% certain how I read the values out of the buffer.  So I *think* a 6259 or other board with a 32-bit timed DIO port could work for you but I'd have to stop a little short of guaranteeing it.  Assuming of course that the project has budget for one.

Any blue bars from NI out there with ideas for a better workaround?

-Kevin P.

Message 4 of 5
Yuri,

The property you referred to in your first post can't actually be configured or set - the value of the property can only be read.  You've described a difficult and interesting problem, and I agree that it seems you've come up with some good ideas.
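If it helps to see this in the C API: the attribute appears there only as a getter, with no corresponding Set function, which is why it can't be used to pad reads out to 32 bits. A minimal sketch (taskHandle is assumed to be an already-configured digital input task):

```c
#include <NIDAQmx.h>
#include <stdio.h>

/* Query the "Number of Booleans per Channel" read attribute.
   Only a Get exists for this attribute; it is read-only. */
void query_booleans_per_chan(TaskHandle taskHandle)
{
    uInt32 boolsPerChan = 0;
    if (DAQmxGetReadDigitalLinesBytesPerChan(taskHandle, &boolsPerChan) == 0)
        printf("booleans per channel: %u\n", (unsigned)boolsPerChan);
}
```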

However, I wanted to ask about your experiments using interrupts instead of DMA.  From what I understand of your application, interrupts should work.  Did you set the property for the data transfer mechanism to DAQmx_Val_Interrupts?

DAQmxSetDIDataXferMech(taskHandle, "Dev1/port0/line0:5", DAQmx_Val_Interrupts);  /* DI variant for a digital input task; "Dev1" is a placeholder device name */


Elijah Kerry
NI Director, Software Community
Message 5 of 5