PCI-6281, DAQmx, programming in Python.
I continue to develop my app, which is supposed to work like an oscilloscope:
I send a ramp (actually a triangle) generated by the DAQ to my hardware, while at the same time reading two voltages.
It works most of the time.
But for some combinations of parameters it appears that the read and write get out of sync by an ever-increasing amount.
In other words, sometimes the graphical display on my monitor shows what I want: the signal.
But for certain choices of parameters (e.g., total number of samples per sweep) the signal drifts, eventually moving off the edge of the display.
Schematically, here's what I do (I hope this is clear enough; a code sketch of the same flow follows the outline):
- in s/w, create the array that holds the triangle wave. Let's call the length of the array N (one sweep = one up ramp and one down ramp)
- create array to hold input data. There are two Analog Input channels; the size of this array is 2*M where M > N
- Set up and start a Continuous Analog Output task.
-- in DAQmxCfgSampClkTiming I set sampsPerChanToAcquire = N.
-- I use DAQmxCfgDigEdgeStartTrig to set this task to start on /Dev1/ai/StartTrigger, the trigger that the Input task (started later) will emit.
-- Start the output with DAQmxWriteAnalogF64, with numSampsPerChan = N
- Set up a Continuous Analog Input task
-- in DAQmxCfgSampClkTiming, sampsPerChanToAcquire = N
(this call is identical to the DAQmxCfgSampClkTiming call in the Output task,
including the rate, but not, of course, the taskHandle)
- Start a loop in a separate thread to handle reading the input, then processing and displaying the data.
-- loop: while flag == True
--- DAQmxReadAnalogF64, with numSampsPerChan = N and arraySizeInSamps = 2*M
--- process and display the data
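To make the outline concrete, here is roughly what that flow looks like in code. This is a trimmed, untested sketch, not my actual program: I'm assuming the PyDAQmx bindings (whose Task methods mirror the C API), and the device/channel names, RATE, N, and M are placeholders.

    import numpy as np
    from ctypes import byref
    from PyDAQmx import (Task, int32, DAQmx_Val_Volts, DAQmx_Val_Rising,
                         DAQmx_Val_ContSamps, DAQmx_Val_GroupByChannel,
                         DAQmx_Val_Cfg_Default)

    RATE = 10000.0   # samples/s, same for both tasks
    N = 1000         # samples per sweep (one up ramp + one down ramp), even
    M = 4 * N        # per-channel size of the read array, M > N

    # Triangle wave: up ramp followed by its mirror image
    up = np.linspace(-1.0, 1.0, N // 2, endpoint=False)
    triangle = np.concatenate([up, up[::-1]])

    # --- Output task: armed first, but waits for the AI start trigger ---
    ao = Task()
    ao.CreateAOVoltageChan("Dev1/ao0", "", -10.0, 10.0, DAQmx_Val_Volts, None)
    ao.CfgSampClkTiming("", RATE, DAQmx_Val_Rising, DAQmx_Val_ContSamps, N)
    ao.CfgDigEdgeStartTrig("/Dev1/ai/StartTrigger", DAQmx_Val_Rising)
    written = int32()
    ao.WriteAnalogF64(N, False, 10.0, DAQmx_Val_GroupByChannel,
                      triangle, byref(written), None)
    ao.StartTask()   # arms the task; generation begins on the trigger

    # --- Input task: starting it emits /Dev1/ai/StartTrigger ---
    ai = Task()
    ai.CreateAIVoltageChan("Dev1/ai0:1", "", DAQmx_Val_Cfg_Default,
                           -10.0, 10.0, DAQmx_Val_Volts, None)
    ai.CfgSampClkTiming("", RATE, DAQmx_Val_Rising, DAQmx_Val_ContSamps, N)
    ai.StartTask()   # this also releases the waiting AO task

    # --- Read loop (runs in a separate thread in the real program) ---
    data = np.zeros(2 * M)
    read = int32()
    flag = True
    while flag:
        ai.ReadAnalogF64(N, 10.0, DAQmx_Val_GroupByChannel,
                         data, 2 * M, byref(read), None)
        # with GroupByChannel, data[:N] is ai0 and data[N:2*N] is ai1
        # ... process and display, set flag = False to stop ...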
As I said, that often works, but sometimes the data drifts off the display, as if
the input and output had not started at the same time; worse, the offset between
the two grows monotonically, which looks more like a rate mismatch than a fixed
start-time offset.
I suspected that perhaps I'm delivering two different values of N to the Input and Output
tasks, but that doesn't seem to be the case. I'll grant that there might be a subtle
logic flaw that does just that, but I've looked, and found nothing yet.
It seems to me that there are at least two possibilities:
1. The input and output buffers/arrays are not being filled in sync
2. The DAQmxReadAnalogF64 function is not starting its read at the beginning of the buffer
I'm a little concerned about the relationship between M and N. I'm guessing that
I want to choose M to be larger than the largest N I'll ever use, so that a read of
N samples per channel always fits in the 2*M array.
I've done nothing explicit to ensure that the Read starts when I want it to start.
I just crossed my fingers and hoped that it would.
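One thing I do plan to add is a way to watch the problem happen: if the reads were falling behind the acquisition, the backlog in the input buffer should grow from one loop pass to the next. A sketch of that check (untested; assuming PyDAQmx, where the property getters mirror DAQmxGetReadAvailSampPerChan and DAQmxGetReadCurrReadPos, and ai stands for the Input task above):

    from ctypes import byref
    from PyDAQmx import uInt32, uInt64

    backlog = uInt32()
    ai.GetReadAvailSampPerChan(byref(backlog))  # unread samples per channel
    pos = uInt64()
    ai.GetReadCurrReadPos(byref(pos))           # absolute read position
    # log these once per loop pass: a steadily growing backlog means the
    # reads are losing ground against the acquisition
    print(backlog.value, pos.value)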
I want each Read to start at the start of a new period of the Output, and I've
been hoping that DAQmxReadAnalogF64 will do that. Perhaps I am just lucky that
it syncs at all, but I haven't seen anything in the docs that tells me how to
achieve this sync explicitly.
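Two things I'm considering trying (both untested, and the method and terminal names are my assumptions based on the C API): first, ask DAQmx what rates it actually coerced the two tasks to, since if those differ even slightly, a monotonically growing offset is exactly what I'd expect; second, clock the AI task from the AO sample clock so the two tasks tick together by construction.

    from ctypes import byref
    from PyDAQmx import float64, DAQmx_Val_Rising, DAQmx_Val_ContSamps

    # 1. Compare the coerced rates (GetSampClkRate mirrors DAQmxGetSampClkRate)
    actual_ai, actual_ao = float64(), float64()
    ai.GetSampClkRate(byref(actual_ai))
    ao.GetSampClkRate(byref(actual_ao))
    print(actual_ai.value, actual_ao.value)  # any difference means drift

    # 2. Or share one clock: terminal name "/Dev1/ao/SampleClock" assumed,
    #    verify it under Device Routes in MAX. AI then samples only when
    #    the AO clock ticks, so the two tasks cannot drift apart.
    ai.CfgSampClkTiming("/Dev1/ao/SampleClock", RATE, DAQmx_Val_Rising,
                        DAQmx_Val_ContSamps, N)

If I read the routing right, option 2 should still work with my existing trigger arrangement: starting the AI task fires /Dev1/ai/StartTrigger, the AO task starts generating, and its sample clock then drives the AI conversions.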
I considered trying to sync using DAQmxRegisterEveryNSamplesEvent, but that
raises the issue of the callback mechanism, and I'm not sure how it would work
in Python, so I'm looking for alternatives first.
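(For completeness, here's as far as I got with the callback idea before setting it aside. The PyDAQmx docs suggest subclassing Task and overriding EveryNCallback, with AutoRegisterEveryNSamplesEvent doing the registration, but I haven't run this, so treat it as my reading of those docs rather than working code. Note also that the callback fires on a driver thread, not mine.)

    import numpy as np
    from ctypes import byref
    from PyDAQmx import (Task, int32, DAQmx_Val_Acquired_Into_Buffer,
                         DAQmx_Val_GroupByChannel)

    class InputTask(Task):
        def __init__(self, n, m):
            Task.__init__(self)
            self.n = n
            self.data = np.zeros(2 * m)
            # ...CreateAIVoltageChan / CfgSampClkTiming as in the outline...
            # ask the driver to call EveryNCallback once per sweep of n samples
            self.AutoRegisterEveryNSamplesEvent(
                DAQmx_Val_Acquired_Into_Buffer, n, 0)

        def EveryNCallback(self):
            read = int32()
            self.ReadAnalogF64(self.n, 10.0, DAQmx_Val_GroupByChannel,
                               self.data, len(self.data), byref(read), None)
            # hand self.data off to the processing/display code here
            return 0  # the driver expects an integer return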
Can anyone help me sync my Input to Output?
thanks,
garyp