Multifunction DAQ

NI-DAQ single-point analog sample latency

I may be in my own little world here, wanting to do software-triggered analog sampling and wanting to do it fast. Having bought a PCI-6014, which is capable of 4-microsecond analog sampling, I found a simple AI_Read took 95 usec. I was somewhat disappointed. Much reading of FAQs and others' postings here didn't help. I posted my own question and got little help.

Well, having spent days working out such problems as how to access the hardware under Windows, I would just like to throw this out to the NI world:

Digital in/out: was 6 usec, now 0.9 usec.
Analog in: was 95 usec, now 19 usec if a channel/gain change is required, 6 usec if not.

To the hardware developers - let's have a cheer. The register-level programming is a bit convoluted, but that comes with the territory when the triggering, etc., is so flexible.

To the software developers - boo, hiss. Wake up, get your act together. So far as programming the card is concerned, I did exactly as described in the "E-Series register-level programming manual". The no-change-to-gain/channel case just performs the last step, the trigger.

Now, imagine what I could charge if I could make the latest nVidia or ATI video card go 10x faster. I suspect those guys know how to write drivers though.
0 Kudos
Message 1 of 5
(2,889 Views)
NI-DAQmx has considerably improved the performance of single-point software-timed analog measurements. We have measured rates of 83 kHz (12 usec per sample) on a 2.5 GHz machine. If you find the complexity of register-level programming too daunting or desire any of the features offered by NI-DAQ, NI-DAQmx may be a good option.

However, if your performance requirements are such that register-level programming is the only option, you may wish to investigate the Measurements Hardware DDK.
--
Geoff Schmit
Huskie Robotics, FIRST Team 3061 Lead Mentor
http://team3061.org/
@team3061
Message 2 of 5
Hi Michael,

There's no question that efficiencies can be exploited once we start using RLP. Configuring the hardware is definitely quicker using this approach. However, programming specific applications using RLP begins to be a headache once you leave the realm of the simple application.

Using the NI-DAQ driver, on the other hand, gives me a consistent, intuitive interface across almost all of our hardware products. I am also able to pick up LabVIEW, look at some example code, and have an application up and running in less than a couple of hours without ever having programmed data acquisition before. NI-DAQ also has an abstraction layer which offers programming benefits but will invariably add some delay to function calls.

It is also important to note that acquisition time is not lost to the latency of the configuration calls. You will be able to acquire triggered and continuous data at hardware rates!! The only thing you lose is the ability to start a software call to the card within 6 us. But this seems to be a moot point since your software timing is OS dependent anyway. Why push a software call down to 6 us when you are not sure when the OS will schedule your application's code?

The only time a quick configuration would be a benefit is if you are constantly reconfiguring in a software loop. But then you are still constrained by OS timing, so you will never have truly deterministic results unless you go to an RT system.

The developers of the NI-DAQ driver have had to make many trade-offs in building such a comprehensive and flexible driver, and accommodating every scenario for every user is an impossibility. This particular instance happens to be one of those trade-offs.

I would like it to be known, though, that I have been successful, with the NI-DAQ 7.0 driver, at acquiring continuous pattern input with the PCI-6534 card at 50 ns resolution (20 MHz). Obviously, the driver has its benefits!!

Anyway, I appreciate the chance to discuss this type of situation.

Ron
Applications Engineer
National Instruments
Message 3 of 5
NI-DAQmx does not yet support either of the cards I am using, a PCI-6014 and a PCI-DIO-96. I am stuck with traditional NI-DAQ or RLP.
Message 4 of 5
Para 1, 2, 5: I am not questioning the interface; I am wondering why NI-DAQ takes 5x as long as when I follow the sequence described in the RLP manual.

Para 3, 6: Continuous and triggered sampling (DAQ_* and SCAN_*) work nicely once they get going, but (on a 2.4 GHz P4) have 0.9 MILLIsecond latency when first called and again when they reach terminal count. Most of the time I am on the same analogue channel, but I need to sample others regularly and don't really feel like losing that 2 ms each time.

Para 3, 4: Yes, the OS (Win98) makes life interesting, but I am reading a sample-and-hold circuit, so I just have to get the data before the next clock, not at an exact time. My cycle is 250 usec (4 kHz). The original 95 usec AI sample time was a frustratingly big chunk out of that.

I have no issue with the unified interface, hardware vs. software triggering, flexibility trade-offs, etc. And yes, some of these carry an overhead, while others require more silicon. But when I see a 95 usec AI read on a multi-GHz machine, I try to imagine what could be going on to consume the equivalent of 100,000 CPU instructions. Especially when I can do 5x better by following your own instructions.

Sorry if I have sounded bitchy about this. Like I said, the hardware is great. I have just found the software a bit frustrating.
Message 5 of 5