08-04-2009 10:32 PM - edited 08-04-2009 10:36 PM
Thanks for updating the files, Tom. I grabbed all the latest files a few weeks ago and integrated them with my code. Unfortunately, even with the fixes referenced in this thread, the original problem I was chasing down is still there, so I'm back for some more ideas.
Following is a quick overview of what I am doing:
The problem I’ve found is that after running for a couple of weeks, the A/I data no longer appears on the correct channel. In other words, normally when reading the next chunk of 16 samples (1 sample per channel, 32 total bytes) I expect to see the data for channels 0, 1, 2, ... 15. However, after several weeks, the 16 samples that I read in contain the data for channels 2, 3, 4, ... 14, 15, 0, 1. So somehow I’ve slipped either 2 channels forward or 14 channels backwards. I've seen this several times, and the circular buffer is always rotated by 2 channels (never 1 or 3, at least that I've seen). Restarting my process immediately lines everything back up again, as you would expect.
I had really been hoping that the fix for the stable DAR problem mentioned in this thread would take care of this, but I’ve once again found my channels misaligned.
So first of all, have you ever seen anything like this before and/or know of a solution? (I figure I’ll shoot for the stars first.)
Second, while debugging this I found something that confuses me further. Since my process is checking for new data at 1 kHz, but I’ve requested data at 10 kHz, I expect to see a total of 10 samples per channel per loop. So I expect to see tDMAChannel::_readIdx increasing by 320 bytes on each pass (10 samples/channel * 16 channels * 2 bytes/sample). However, in reality I see this number changing at about half that rate. If I decrease the A/I sampling rate to 1 kHz and then start bumping it up again, everything behaves as I expect up through 4 kHz/channel; in that case _readIdx increases by 128 bytes per pass (4 * 16 * 2). But when I raise the A/I rate to 5 kHz/channel, all of a sudden I consistently find _readIdx increasing by less than 100 bytes per loop (per 1 ms). It’s as if the system simply can’t push that much data through.
The specs on the 6229 card indicate the A/I can run at a maximum of 250 kS/s aggregate across all active channels, so my understanding is that when running 16 channels I should be able to get up to 250 kHz / 16 channels = 15.625 kHz max sampling rate per channel. So what am I missing? Is there an additional limit on DMA transfer rates?
Thanks again for the help,
AJ
08-05-2009 09:46 AM
AJ,
I was reading every 250 us, and I had the exact same problem. Every so often (once every 13 hours or more) my loop would get preempted for a while (several ms), and these preemptions were always the trigger for channel slips. I had the same setup where my buffer was huge, so I couldn't possibly fill it up. I don't remember exactly what I found out, but at the time I could not solve it. Since my acquisition was very sensitive (slipping channels could literally kill someone) and since it could possibly run for months, I opted for very short acquisitions that run for 200 us (a little shorter than the loop time to allow for some jitter). Every time I read the card, I cleared the DMA and restarted the acquisition. That way I could guarantee the channels would be in the order I expected. If you try to start the next acquisition before the previous task has finished, you'll run into problems; I cannot remember offhand whether it was because the cleared DMA memory would continue to fill or something different.
As for your second problem, I also experienced that problem as well. Check out http://forums.ni.com/ni/board/message?board.id=90&message.id=1310#M1310. I am the poster child for problems with the 6229.
Hope that helps. I can dig up some old code or look for other tweaks in my version of the DAQ code if you have further concerns. I think most everything is posted, but there may have been one or two tweaks (not bug fixes per se) that are not posted.
Aaron
08-05-2009 04:37 PM
Thanks for the quick response, Aaron. I took a look at the other forum you linked to and now I understand a little bit more why I am not getting the sampling rate that I request. I decreased my value for the samplePeriodDivisor and now I can get all of the requested samples. But as you suggested, there is clearly something more to the relationship between the samplePeriodDivisor and the convertPeriodDivisor -- it's not straightforward to calculate one from the other based purely on the number of channels and the sampleDelayDivisor. Like you, the only way I could get it to work is to add in some extra fudge factor. The timing diagrams in the user manual help but still there must be something missing. Can someone from NI help to provide some insight into what is required or share the code that is used by NI-DAQ (for instance) to calculate these values?
As for stopping and restarting the DMA on each loop through the program, I understand what you're describing, but it's not clear to me how much of the DMA I need to shut down and reinitialize. Can you share some of your code, or at least identify which calls can be done once during initialization and which must be done after each read?
Thanks,
AJ
08-05-2009 10:51 PM
Attached is some of my code based on the original DDK. I call startAITask during startup and, in the case of DMA errors, after shutting everything down. The getAI function is called every time I need to get the data (i.e. every 250 us). I couldn't find a reliable way to determine if a task had finished, so I came up with this somewhat embarrassing and kludgy "done" system that makes a best guess at whether the task has finished. The real-time system I use tries to catch up if it misses a loop, so every so often it will do bad things and run several loops one right after another. This system catches that and restarts the DMA if things go too badly. It's not at all how I wanted to do things, but I just couldn't get it to work the way I wanted. You may not be able to make sense of all of it, but it should definitely answer your question about how much code needs to be run every loop. If you get something working better than I have, I would appreciate any feedback.
#define MAX_DAQ_DIVISOR 20000000.0   // 20 MHz timebase

// Aggregate convert rate: 4 kHz per channel across all active inputs.
double actualDaqRate = m_numInputs * 4000.0;

// Convert (per-channel) clock divisor off the 20 MHz timebase.
m_convertPeriodDivisor = (u32)floor( MAX_DAQ_DIVISOR / actualDaqRate );

// Scan period: one convert per channel plus 12 * 0.15 us of settling margin.
double desiredSamplePeriod = m_convertPeriodDivisor / MAX_DAQ_DIVISOR * (double)m_numInputs + 12*0.15e-6;
m_samplePeriodDivisor = (u32)ceil( MAX_DAQ_DIVISOR * desiredSamplePeriod );

// Number of scans that fit in the 200 us acquisition window.
m_numSamplesPerInput = (u32)floor( 200e-6 / (m_samplePeriodDivisor / MAX_DAQ_DIVISOR) );
That's the general gist. There are some other checks and balances in the real code, but the above should give you some insight into the calculation I use for the convertPeriodDivisor and such.
Hope that helps.
Aaron