Platform: NI-DAQ 6.9.3, PCI 6036E, MSVC C++ 6.0 SP5, W2K SP3, Intel Xeon 1.8 GHz, 1 GB RAM.
Does DAQ_DB_Transfer go into an efficient wait-state when data is not ready, or does it poll constantly, consuming 100% CPU?
My setup sequence is roughly as follows (sketched in code after the list):
1. Set XFER_MODE_AI to INTERRUPTS.
2. Enable double buffering.
3. Call SCAN_Setup with 1 analog channel.
4. Call Config_ATrig_Event_Message to watch for certain values.
5. Call SCAN_Start with a 400-entry buffer and 5 ms between scans.
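For concreteness, here is roughly what that sequence looks like against the Traditional NI-DAQ C API. The device number, channel string, trigger level/window/slope, and window-message constant are placeholders, error handling is collapsed into a macro, and the Config_ATrig_Event_Message argument list is from memory; treat this as a sketch, not my exact code.

```cpp
// Sketch of the setup sequence using the Traditional NI-DAQ C API.
#include <windows.h>
#include "nidaq.h"
#include "nidaqcns.h"

#define CHECK(call) { i16 err = (call); if (err != 0) return err; }

static i16 aiBuffer[400];          // circular double buffer: 2 x 200 entries

i16 SetupAcquisition(i16 device, HWND hwnd)
{
    i16 chanVector[1] = { 0 };     // one analog input channel (placeholder)
    i16 gainVector[1] = { 1 };

    // 1. Interrupt-driven AI transfers instead of DMA
    CHECK(Set_DAQ_Device_Info(device, ND_DATA_XFER_MODE_AI, ND_INTERRUPTS));

    // 2. Enable double-buffered (circular) acquisition
    CHECK(DAQ_DB_Config(device, 1));

    // 3. One analog channel
    CHECK(SCAN_Setup(device, 1, chanVector, gainVector));

    // 4. Post a window message when the signal crosses a level
    //    (channel string, level, window, and slope are placeholders)
    CHECK(Config_ATrig_Event_Message(device, 1, "AI0", 2.5, 0.1,
                                     0, 0, 0, 0, hwnd, WM_USER + 1, 0));

    // 5. Start scanning: 400-entry buffer, 5 ms per scan
    //    (timebase 1 = 1 us ticks, so 5000 ticks = 5 ms)
    CHECK(SCAN_Start(device, aiBuffer, 400, 1, 5000, 1, 5000));

    return 0;
}
```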
On another thread, I continuously call DAQ_DB_Transfer with a 200-entry buffer. The call returns once per second, as expected (200 scans at 5 ms each fill one half buffer per second). But CPU usage soars to 100%. If I remove the transfer call, CPU usage drops to almost 0%. This has led me to believe that DAQ_DB_Transfer is implemented as a tight polling loop.
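The reader thread amounts to this (a sketch; the device number and the processing step are placeholders):

```cpp
// Sketch of the reader thread. DAQ_DB_Transfer blocks until a half
// buffer (200 entries here) is ready, then copies it out.
#include <windows.h>
#include "nidaq.h"

DWORD WINAPI ReaderThread(LPVOID /*unused*/)
{
    const i16 device = 1;          // placeholder device number
    i16 halfBuffer[200];           // half of the 400-entry circular buffer
    u32 ptsTransferred = 0;
    i16 stopped = 0;

    while (!stopped) {
        // Returns about once per second, but appears to busy-wait
        // rather than sleep while waiting.
        i16 err = DAQ_DB_Transfer(device, halfBuffer,
                                  &ptsTransferred, &stopped);
        if (err != 0)
            break;
        // ... process ptsTransferred entries from halfBuffer ...
    }
    return 0;
}
```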
Is this how DAQ_DB_Transfer is supposed to work (or is there some other problem)?
I can kludge around this by using repeated calls to DAQ_DB_HalfReady interspersed with Sleep() calls, as sketched below.
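Something like the following, where the 50 ms poll interval is an arbitrary choice:

```cpp
// Sketch of the workaround: poll DAQ_DB_HalfReady at a coarse interval
// and only call DAQ_DB_Transfer once a half buffer is actually full,
// so the thread sleeps instead of spinning.
#include <windows.h>
#include "nidaq.h"

i16 WaitAndTransfer(i16 device, i16 *halfBuffer, u32 *ptsTransferred)
{
    i16 halfReady = 0;
    i16 stopped   = 0;

    // Half buffers fill once per second here, so a 50 ms poll interval
    // adds little latency and negligible CPU load.
    while (!halfReady && !stopped) {
        i16 err = DAQ_DB_HalfReady(device, &halfReady, &stopped);
        if (err != 0)
            return err;
        if (!halfReady)
            Sleep(50);
    }
    return DAQ_DB_Transfer(device, halfBuffer, ptsTransferred, &stopped);
}
```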
Thanks.