09-19-2011 01:17 AM - edited 09-19-2011 01:19 AM
Zach,
Thank you for the reply and for trying it out. I have experimented with the various possibilities, and the final answer is not something you'd expect.
Absolute offset does not work because of the 32-bit limitation and because of the issue related to my previous post; I found no way around it. If I understand your pseudo-code correctly, it does what my original version did. Its ability to produce errors depends on the fine timing of the DAQmxWrite call with respect to the FIFO stepping. I won't go into all the details because this is not the "right" answer, but it is disconcerting.
I followed your suggestion to recode for relative offset, and that is a cleaner solution. In the most obvious form it does NOT work and produces the following error whenever the FIFO runs past the current write position:
NI Error: Attempted to write to an invalid combination of position and offset. The position and offset specified a sample prior to the first sample generated (sample 0).
Make sure any negative write offset specified will select a valid sample when combined with the write position.
Property: DAQmx_Write_RelativeTo
Requested Value: DAQmx_Val_CurrWritePos
Property: DAQmx_Write_Offset
Requested Value: -94208
Task Name: PWMDO_Chip
Status Code: -200287
Now, the log shows that the current write position was 204800 (frame 14, highlighted in red), which, even with the negative offset, will not result in a negative index. HOWEVER, note that from Frame 13 the write position grew magically by one full buffer wrap, which is 131072 samples.
OldWriteIndex is the "frame" (of which there are 16 per buffer) computed from the result of DAQmxGetWriteCurrWritePos.
NewWriteIndex is the "frame" (of which there are 16 per buffer) computed from the FIFO position, as you suggested in the previous post, with some safety padding.
Delta is simply the difference between the two; it is used to compute the signed offset, in samples, that is supplied to DAQmxSetWriteOffset. DAQmxSetWriteRelativeTo is always set to DAQmx_Val_CurrWritePos.
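To make the bookkeeping concrete, here is a minimal sketch of that index/offset arithmetic. This is not the actual application code: the function names are mine, and the 4096-sample frame size is an assumption inferred from the log (a delta of -23 frames maps to an offset of -94208 samples, and a write position of 73728 maps to index 18).

```c
#include <assert.h>

/* Assumed frame size in samples, inferred from the log:
   offset = delta * 4096 (e.g. -23 * 4096 = -94208). */
#define FRAME_SAMPLES 4096LL

/* Frame index from an absolute sample position,
   e.g. the result of DAQmxGetWriteCurrWritePos. */
long long frame_from_position(unsigned long long pos) {
    return (long long)(pos / FRAME_SAMPLES);
}

/* Signed sample offset to hand to DAQmxSetWriteOffset when
   DAQmxSetWriteRelativeTo is DAQmx_Val_CurrWritePos. */
long long offset_samples(long long old_index, long long new_index) {
    return (new_index - old_index) * FRAME_SAMPLES;
}
```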
FrameCount= 13 SamplesGen= 63242 DMAIndexEst= 15.9397 WritePos= 73728 WriteDelta= 0 Pad= 0 :: OldWriteIndex = 18 NewWriteIndex= 17 delta= -1 tmsec= 55.2745 / 0 tid= 6348 **** Write operation skipped, duplicate index
FrameCount= 14 SamplesGen= 103854 DMAIndexEst= 25.8547 WritePos= 204800 WriteDelta= 131072 Pad= 0 :: OldWriteIndex = 50 NewWriteIndex= 27 delta= -23 tmsec= 75.588 / 0.609568 tid= 6348 *** SamplesWritten mismatch @ 0 **** Err=-200287
That "magic" 131072 must be taken into account, but only for that frame and not for the subsequent ones! This is crazy and more than mildly counterintuitive. What's even more counterintuitive is that the subsequent frames must disregard that offset and go "back in time"; otherwise each write takes 10+ msec! Here's the log with the "pad" logic enabled, which accounts for the extra 131072 samples any time there's a drop-out:
FrameCount= 8 SamplesGen= 72297 DMAIndexEst= 18.1504 WritePos= 86016 WriteDelta= 4096 Pad= 0 :: OldWriteIndex = 21 NewWriteIndex= 20 delta= -1 tmsec= 59.7433 / 0 tid= 6384 **** Write operation skipped, duplicate index
FrameCount= 9 SamplesGen= 112382 DMAIndexEst= 27.9368 WritePos= 217088 WriteDelta= 131072 Pad= 131072 :: OldWriteIndex = 53 NewWriteIndex= 61 delta= 8 tmsec= 79.7863 / 0.0220813 tid= 6384
FrameCount= 10 SamplesGen= 112883 DMAIndexEst= 28.0591 WritePos= 122880 WriteDelta= -94208 Pad= 0 :: OldWriteIndex = 30 NewWriteIndex= 30 delta= 0 tmsec= 80.0335 / 0.0195933 tid= 6384
And, just for comparison, this is what happens if the "pad" does not get removed; note the duration of the next DAQmxWrite call:
FrameCount= 14 SamplesGen= 103809 DMAIndexEst= 25.8438 WritePos= 212992 WriteDelta= 135168 Pad= 131072 :: OldWriteIndex = 52 NewWriteIndex= 59 delta= 7 tmsec= 76.317 / 0.0233253 tid= 6904
FrameCount= 15 SamplesGen= 104580 DMAIndexEst= 26.032 WritePos= 114688 WriteDelta= -98304 Pad= 131072 :: OldWriteIndex = 28 NewWriteIndex= 60 delta= 32 tmsec= 76.7008 / 6.44308 tid= 6904
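The one-shot "pad" bookkeeping can be sketched as follows. This is a hedged reconstruction, not the actual code: the constants and function name are my assumptions, and the base index of 29 used in the check below is an assumed value consistent with the Frame 9 log line.

```c
#include <assert.h>

/* Assumed sizes: one buffer wrap is 131072 samples, a frame is 4096 samples. */
#define BUFFER_SAMPLES 131072LL
#define FRAME_SAMPLES  4096LL

/* One-shot pad: when the reported write position jumps by a full buffer
   (the FIFO overran the write position and regeneration kicked in),
   shift the target index forward by one buffer's worth of frames for
   that single write only. Subsequent frames see a normal write_delta
   and therefore get no pad. */
long long padded_new_index(long long fifo_index, long long write_delta) {
    long long pad = (write_delta >= BUFFER_SAMPLES) ? BUFFER_SAMPLES : 0;
    return fifo_index + pad / FRAME_SAMPLES;
}
```

The key point the logs illustrate is that the pad is stateless per write: only the frame whose reported position jumped by a full buffer gets it, and the very next frame goes "back in time" without it.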
This appears to be related to some kind of logic error that kicks in when the FIFO overruns the write position, which is to say, when regeneration kicks in. The next write "insists" on being in the future (as if to force the user to detect the dropout?), but the one after that has to be in the "past".
So... I have a working solution that I suspect depends on some kind of undocumented behavior in the write buffer code. This brings me no comfort. I would like to request that an option be added to either:
a) disable the "DMA nanny" and to allow DMA overwrites at will, -OR-
b) even better, add another regen mode, "auto-sync": a regen mode in which each subsequent write is placed as close to the end of the currently running FIFO as possible. It would have to be equipped with a buffer granularity control, like the frames I have in my code, to make sure that PWM applications can maintain their pulse timing.
Is that possible, and how do I escalate this request?
Thank you,
-Alex
P.S. The code is available if you'd like, although I suggest that you take the whole app to make sure that you're seeing the same thing that I'm seeing. You can find my contact info at www.a-vue.com if you can't get it from my profile.
09-19-2011 04:14 PM
First, I apologize this is such a hassle. I will admit that your use case is not the general use case, though. Glitches aren't something to be tolerated in most use cases; it just so happens that your specific use case has a good reason not to care. Furthermore, using digital lines as PWMs is also not typical, but once again you have a good reason to do so (you need lots of PWMs, and X Series has only 4 counters).
Second, on the signed integer overflow issue: it will take about 35 minutes to overflow at a 1 MS/s update rate, and half that at 2 MS/s (I don't remember what rate you were running at, as I think it has changed a couple of times). So if your test doesn't have to run that long, using Absolute Positions is probably advisable. If it has to run longer, I did get it to work; see below.
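The 35-minute figure is just INT32_MAX divided by the sample rate; a quick sanity check (the function name is my own):

```c
#include <assert.h>
#include <stdint.h>

/* Seconds until a signed 32-bit sample counter overflows at a given rate. */
long long seconds_to_overflow(long long samples_per_sec) {
    return (long long)INT32_MAX / samples_per_sec;
}
```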
Anyways, the latest issue you're seeing is actually the original issue I described with the pictures. It turns out that when you query 'Current Write Position', we also do that nastiness where we notice that the hardware is actually outputting data at the 'Current Write Position' and automatically move it forward for you. So once again that throws a wrench into everything. I got things to work by basically ignoring DAQ's reported 'Current Write Position'. Instead, I think it is advisable to keep track of it yourself: you know where you just wrote, so why bother asking DAQ? Here is the algorithm I had working.
WritePosition = TotalSampPerChanGenerated + FIFOSize
WriteIndex = (WritePosition / WriteSize) + 1   # integer division: round up to the next frame
WritePosition = WriteIndex * WriteSize
# NOTE: Everything so far is the exact same as before
if WritePosition != OldWritePosition:
    # The first iteration is a little weird since we wrote a bunch, but haven't output it yet.
    if OldWritePosition == 0:
        set WriteRelativeTo to First Sample
        set WriteOffset to WritePosition
        Write
    else:
        # Since we wrote last time, the current write position has advanced
        CurrentWritePosition = OldWritePosition + WriteSize
        RelativeWritePosition = WritePosition - CurrentWritePosition
        set WriteRelativeTo to Current Write Position
        set WriteOffset to RelativeWritePosition
        Write
    OldWritePosition = WritePosition
This ran for over an hour with no errors and no Writes taking any time.
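For anyone wanting to try this, here is a self-contained C sketch of that algorithm, with the actual DAQmx calls replaced by a returned "plan" struct so the position arithmetic can be checked on its own. The struct, function names, and the FIFO/frame sizes are my assumptions, not DAQmx API.

```c
#include <assert.h>

#define WRITE_SIZE 8192ULL   /* hypothetical frame size in samples */
#define FIFO_SIZE  2047ULL   /* hypothetical onboard FIFO depth */

typedef struct {
    unsigned long long old_write_position;  /* tracked locally, never queried from DAQ */
} StreamState;

typedef struct {
    int relative_to_first_sample;  /* 1 => DAQmx_Val_FirstSample, 0 => DAQmx_Val_CurrWritePos */
    long long offset;              /* value that would go to DAQmxSetWriteOffset */
    int do_write;                  /* whether a write is issued this iteration */
} WritePlan;

/* Plan the next write from the total samples generated so far. */
WritePlan plan_write(StreamState *s, unsigned long long total_samples_generated) {
    WritePlan p = {0, 0, 0};
    unsigned long long pos = total_samples_generated + FIFO_SIZE;
    unsigned long long index = pos / WRITE_SIZE + 1;   /* round up to the next frame */
    pos = index * WRITE_SIZE;
    if (pos == s->old_write_position)
        return p;                                      /* same frame: skip this write */
    p.do_write = 1;
    if (s->old_write_position == 0) {
        /* First iteration: we wrote a bunch but haven't output it yet,
           so write at an absolute position. */
        p.relative_to_first_sample = 1;
        p.offset = (long long)pos;
    } else {
        /* Since we wrote last time, the device's write pointer has
           advanced by one frame. */
        unsigned long long current = s->old_write_position + WRITE_SIZE;
        p.offset = (long long)pos - (long long)current;
    }
    s->old_write_position = pos;
    return p;
}
```

In a real task, do_write == 1 would translate into a DAQmxSetWriteRelativeTo / DAQmxSetWriteOffset pair followed by the write call; the plan struct only exists here so the bookkeeping can be exercised without hardware.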
As for Feature Requests I would suggest posting them to the DAQ Idea Exchange. I would add a third possibility to your list as well:
c) Add 64-bit versions of the Streaming Properties for 64-bit processes. Internally, all of the streaming properties are actually of type size_t, so they're already 64 bits. We should go ahead and support those in the API as well; then there may be less headache.
09-19-2011 04:43 PM
Zach,
Tracking the write position manually as you suggest results in error -200287, which is one of the examples in my previous post. This is why I need to add the full-buffer increment for that one write. The error is time-sensitive; it depends on where the call lands relative to the precise FIFO DMA position. The "bad zone" is small, but with my particular timing it occurs often enough to be a problem. I have code that reliably produces the error, but it stops doing so if I insert a small (<500 usec) wait.
The other question that remains unanswered is why I have to go back to the old position after the increment. I don't want my code to stop working with the next version of NI-DAQmx.
-Alex