Hello all. I would like to request your suggestions on the following DAQ/LV
resource consumption problem.
I run LV 5.1 on a W2000 notebook (300+ MHz CPU, 128 MB RAM) using a 16E-4
PCMCIA card with NI-DAQ.
I would like to have a simple VI running continuously that reads 5 channels
at 1000 samples/sec, 100 samples at a time (about 100 ms per acquisition). The
plan is to do some simple arithmetic on each array and write global variables
that can be read by other VIs running in parallel with the acquisition VI.
These VIs all run slower than the DAQ; they update at anywhere from 0.5-1
second intervals up to ten-minute intervals, and since they just grab data
from the globals, they never talk to the hardware.
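Since I can't paste a block diagram into a post, here is a rough Python-style
sketch of the intended architecture; read_block() and the dict playing the
role of the LV globals are made-up stand-ins, not a real API. The point is one
writer, many readers, and no reader ever touching the hardware.

    import threading, time

    latest = {}                    # stands in for the LV global variables
    lock = threading.Lock()

    def acquisition_loop(read_block):
        # One producer: grab 100 scans (~100 ms at 1000 S/s), do simple
        # arithmetic, publish the result.  read_block() is hypothetical.
        while True:
            block = read_block(n_scans=100)              # 5 channels x 100
            means = [sum(ch) / len(ch) for ch in block]  # simple arithmetic
            with lock:
                latest['means'] = means

    def reader_loop(period_s):
        # Many consumers: read the published values at their own rate
        # (0.5 s up to ten minutes); they never call the DAQ hardware.
        while True:
            with lock:
                means = latest.get('means')
            if means is not None:
                print(means)
            time.sleep(period_s)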
I tried:
1) Circular-buffered continuous data acquisition: consumes 90+% of the CPU due
to polling the AI Read VI, so the other VIs cannot run, or run very slowly.
One of the other VIs is a vendor-supplied instrument driver that uses VISA and
takes about 40-50% of the CPU by itself. I got that down to <15%, but there is
still a CPU-usage conflict with the DAQ. The CPU issue with circular-buffered
acquisition is known; tech support suggestions were to try the
occurrence-driven approach in (2) below. A sketch of the polling pattern in
(1) follows.
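For concreteness, (1) amounts to a tight polling loop like this pseudocode;
ai_read() stands in for the AI Read VI and process_and_publish() for the
arithmetic/global-write step above.

    def circular_buffered_loop(ai_read, process_and_publish):
        # Attempt (1): continuous circular-buffered acquisition.  AI Read is
        # called in a tight loop and (apparently) busy-polls the buffer until
        # the next 100 scans arrive -- which seems to be where the 90+% of
        # the CPU goes.
        while True:
            data, backlog = ai_read(n_scans=100)  # polls until data is ready
            process_and_publish(data)             # cheap next to the polling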
2) Occurrence-driven VIs from the example program database. The CPU load goes
way down, but the scan backlog behaves oddly: 1000 samples/sec with 100-sample
reads causes the backlog to increase monotonically until it hits the buffer
limit, while other combinations of rates and read sizes give about a 100-scan
backlog before clearing. Other CPU activity can also make the backlog jump,
e.g. hold down the left mouse button on an LV pane and the backlog freezes,
then jumps to a larger number on release and hovers around the new value.
Obviously, changes in the signals then take longer to clear the backlog.
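In pseudocode, the difference from (1) is just that the loop sleeps on an
occurrence instead of polling; wait_on_occurrence() stands in for the LV
occurrence that NI-DAQ fires when ~100 scans are ready.

    def occurrence_driven_loop(wait_on_occurrence, ai_read,
                               process_and_publish):
        # Attempt (2): block on an occurrence, so the loop uses no CPU while
        # waiting; AI Read should then return immediately.  CPU load drops,
        # but the scan backlog misbehaves as described above.
        while True:
            wait_on_occurrence()                  # sleep until scans arrive
            data, backlog = ai_read(n_scans=100)  # should not need to poll
            process_and_publish(data)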
---
Then I decided that if the dead time between acquisition and the simple
processing were short compared to the 100 ms acquisition time, I could
dispense with buffers and just restart the DAQ, but without doing AI Config
over again, i.e. 100 ms acquire, 10 ms process, repeat. The 10 ms of dead time
is lost data, but that is not crucial. So,
3) Modified "Acquire N Scans.vi" from the example programs to run
continuously: put in occurrences, added a while loop around AI Start and AI
Read, plus a variable wait state (Wait Until Next ms Multiple, or a simple
Wait (ms) for x milliseconds), etc. There is never a scan backlog, but CPU
usage starts out low and slowwwwly creeps up, almost logarithmically, until it
reaches the mid-90% range. The VI eventually "freezes": the running-VI
indicator is still on, but there are no data transfers and no updates, and the
VI must be aborted, closed, and restarted. I verified this on an NT4 system
with a PCI version of the 16E-series card, so I don't think it is a
transfer-rate difference between PCI and PCMCIA.
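The structure of (3), again as hedged pseudocode with ai_config/ai_start/
ai_read standing in for the corresponding VIs:

    def restart_loop(ai_config, ai_start, ai_read, process_and_publish,
                     wait_ms):
        # Attempt (3): configure once, then repeatedly start/read a finite
        # 100-scan acquisition -- no circular buffer, so no backlog, at the
        # cost of ~10 ms of dead time per cycle.
        task = ai_config(channels=5, rate=1000)   # AI Config done only once
        while True:
            ai_start(task, n_scans=100)           # ~100 ms finite acquisition
            data = ai_read(task, n_scans=100)
            process_and_publish(data)             # the ~10 ms of processing
            wait_ms(10)                           # the variable wait state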
I chose this line of DAQ after trying to sync multiple VIs with semaphores
etc. to keep them from calling the DAQ hardware simultaneously; that was an
even bigger brick wall. With this approach, only one VI talks to the card and
the others just read variables.
Since sound cards, video players, etc. can stream large amounts of data
without unduly burdening the CPU, I think my plan to scan, process, write
variables, and repeat ought to work. I am, however, at a loss to understand
why there is such an issue with the CPU load and what I'm overlooking. I have
been through what is currently available on Deja and the NI databases, along
with talking to some good support folks at NI.
Thanks in advance,
--- Ravi Narasimhan
--
Ravi Narasimhan
Dept. of Physics and Astronomy, UCLA
http://www.physics.ucla.edu/~oski
oski@physics.ucla.edu