Para 1,2,5: I am not questioning the interface; I am wondering why NI-DAQ takes 5x as long as the register-level sequence described in the RLP manual.
Para 3, 6: Continuous and triggered sampling (DAQ_* and SCAN_*) work nicely once they get going, but (on a 2.4GHz P4) they have 0.9 MILLIsecond latency when first called, and again when they reach terminal count. Most of the time I am on the same analogue channel, but I need to sample the others regularly and don't really feel like losing nearly 2 ms each time I switch.
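For reference, here is roughly how I measured those bookends (a minimal sketch against the Traditional NI-DAQ C API; device 1, channel 0, gain 1, and the 250 usec interval are just my setup, and error checking is stripped):

#include <windows.h>
#include <stdio.h>
#include "nidaq.h"                      /* Traditional NI-DAQ header */

int main(void)
{
    static i16 buffer[100];
    LARGE_INTEGER freq, t0, t1;
    i16 stopped = 0;
    u32 retrieved = 0;

    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&t0);
    /* timebase 1 = 1 MHz clock, so sampInterval 250 -> 250 usec/sample */
    DAQ_Start(1, 0, 1, buffer, 100, 1, 250);
    QueryPerformanceCounter(&t1);       /* t1 - t0 = the startup hit */

    while (!stopped)                    /* poll to completion; time spent   */
        DAQ_Check(1, &stopped, &retrieved); /* past 100 * 250 usec = tail hit */

    DAQ_Clear(1);
    printf("startup: %.1f usec\n",
           (double)(t1.QuadPart - t0.QuadPart) * 1e6 / (double)freq.QuadPart);
    return 0;
}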
Para 3, 4: Yes, the OS (Win98) makes life interesting, but I am reading a sample-and-hold circuit, so I just have to get the data before the next clock, not at an exact time. My cycle is 250 usec (4 kHz). The original 95 usec AI sample time was a frustratingly big chunk out of that.
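For what it's worth, the 95 usec figure came from a loop like this (again a minimal sketch, same assumed device/channel/gain as above):

#include <windows.h>
#include <stdio.h>
#include "nidaq.h"

int main(void)
{
    LARGE_INTEGER freq, t0, t1;
    i16 value;
    const int N = 1000;
    int i;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    for (i = 0; i < N; i++)
        AI_Read(1, 0, 1, &value);   /* one software-timed conversion */
    QueryPerformanceCounter(&t1);

    printf("mean AI_Read: %.1f usec\n",
           (double)(t1.QuadPart - t0.QuadPart) * 1e6
           / (double)freq.QuadPart / N);
    return 0;
}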
I have no issue with the unified interface, hardware triggering vs software triggering, flexibility trade-offs, etc. And yes, some of these have an overhead, while others require more silicon. But when I see a 95 usec AI read on a multi-GHz machine, I try to think of what might be going on to consume the equivalent of well over 100,000 CPU instructions (95 usec at 2.4 GHz is roughly 228,000 clock cycles). Especially when I can do 5x better by following your own instructions.
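The 5x-faster path, for the curious, is just the polled single-read sequence out of the RLP manual. Very roughly, it looks like the sketch below; the register names, offsets, and bit masks here are placeholders (the real ones are board-specific and in the RLP manual), and on Win98 plain ring-3 port I/O works:

#include <conio.h>   /* _inpw / _outpw */

#define BASE            0x0220          /* placeholder: board's I/O base  */
#define AI_COMMAND_REG  (BASE + 0x00)   /* placeholder offsets -- see the */
#define AI_STATUS_REG   (BASE + 0x02)   /* RLP manual for your board      */
#define AI_FIFO_REG     (BASE + 0x04)
#define AI_START        0x0001          /* placeholder bit masks */
#define FIFO_NOT_EMPTY  0x0002

/* One polled conversion: strobe a start, spin until data is ready, read it.
   The point is there is no driver call, no interrupt, no buffer setup. */
static short ai_read_rlp(void)
{
    _outpw(AI_COMMAND_REG, AI_START);               /* trigger a conversion */
    while (!(_inpw(AI_STATUS_REG) & FIFO_NOT_EMPTY))
        ;                                           /* wait for the ADC */
    return (short)_inpw(AI_FIFO_REG);               /* pull the result */
}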
Sorry if I have sounded bitchy about this. Like I said, the hardware is great. I have just found the software a bit frustrating.