01-12-2007 12:30 PM
01-12-2007 01:41 PM
1. I referred back to the datasheet to confirm what I suspected. Sure enough, the transfer rate is much faster in finite acquisition mode than in continuous mode. I don't know the exact reason why, but it seems inherent in the board / driver.
2. So no, there probably isn't anything you can do to regain that bandwidth while in continuous mode.
3. Just thinking out loud now. I'm assuming that you'd like to get back to finite acquisition mode for the bandwidth, but need a way to fake a reference trigger. Let's stick with the basic idea of 3 buffered event counters and 1 separate counter generating a sample clock. I suspect 1 more counter should be enough, but it isn't yet clear to me what other signals / timing info you have to work with. What is your "reference trigger?" Where does it come from?
If you start your 3 buffered edge tasks before starting your sample clock, you can be sure the sample indices are in sync. The only little glitch is that the very first value measured will be # edges between starting the event task and starting the sample clock. The tasks are started with software "Start" calls, so the 3 edge counters start in quick succession but not simultaneously. Personally, I'd just subtract index 0 from each of the arrays of edge counts and treat the result as "# edges since initial sample clock."
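That zero-referencing step can be sketched in pure Python (plain lists standing in for the DAQmx read results; the raw counts are made up for illustration, and no hardware is involved):

```python
def zero_reference(edge_counts):
    """Subtract the index-0 sample from a buffered edge-count array so
    every value becomes '# edges since the initial sample clock'."""
    first = edge_counts[0]
    return [c - first for c in edge_counts]

# Hypothetical raw reads from the 3 buffered edge-counting tasks.
# Index 0 of each holds the edges accumulated between the software
# "Start" call and the first sample clock tick.
raw = [[5, 12, 20, 31], [2, 2, 3, 7], [0, 4, 9, 9]]
aligned = [zero_reference(ch) for ch in raw]
# aligned -> [[0, 7, 15, 26], [0, 0, 1, 5], [0, 4, 9, 9]]
```

After this, index i in all 3 arrays refers to the same sample-clock tick, regardless of the small startup skew between the tasks.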
Next, I'd configure 1 more counter to do a single unbuffered "two edge separation" measurement. I'd use the sample clock counter output as the timebase to count # of samples between edge 1 and edge 2. I'd also use the sample clock as the initiating 1st edge and the reference trigger as the 2nd edge. Give or take an off-by-1 you need to be careful with, the resulting count value tells you the index into the edge count arrays where the reference trigger occurred. Now you can post-process as desired with whatever quantity of pre-trigger and post-trigger samples are available.
One last tip: I'd make the sample clock be a very short pulse rather than a square wave. Then I'd be careful to make the two edge separation measurement pay attention to both edges of that pulse. One edge would be used for the initiating 1st edge, while the *other* edge would act as the timebase. This avoids a hardware race condition. You'll also want to consider which edge is most appropriate for the sampling clock on the edge-counting tasks. Another one of those little things that can create an off-by-1 to deal with.
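Once the two-edge-separation counter reports how many sample-clock ticks elapsed before the reference trigger, the post-processing reduces to array slicing. A pure-Python sketch of that bookkeeping (the counts and the off-by-1 convention here are illustrative assumptions, not measured hardware behavior):

```python
def split_at_trigger(edge_counts, ticks_before_trigger):
    """Split a zero-referenced edge-count array into pre- and
    post-trigger portions.  'ticks_before_trigger' stands in for the
    unbuffered two-edge-separation result: # of sample-clock ticks
    between starting the clock and seeing the reference trigger.
    Depending on which edges the counter uses, the index may need a
    +/-1 correction."""
    trigger_index = ticks_before_trigger  # apply off-by-1 fix here if needed
    return edge_counts[:trigger_index], edge_counts[trigger_index:]

edges = [0, 7, 15, 26, 40, 41, 55]
pre, post = split_at_trigger(edges, 4)
# pre  -> [0, 7, 15, 26]  (samples before the trigger)
# post -> [40, 41, 55]    (trigger sample onward)
```

The same trigger index applies to all 3 edge-count arrays, so one unbuffered measurement fakes the reference trigger for the whole set.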
-Kevin P.
01-12-2007 02:23 PM
Kevin,
Thanks for your reply. To answer your question, my reference trigger is the start trigger from an analog input card (PXI-6132); the AI task waits for a signal to get above a certain level before starting. I knew about the bandwidth differences for finite and continuous acquisitions, but didn't think this mattered, since my acquisition task uses a circular buffer and is constantly overwriting old values with the new ones. Also, the 500 kS/s is about 10x higher than the stated continuous acquisition bandwidth, while the 3 MS/s bandwidth that I get for finite acquisitions is comparable to what is stated in the datasheet. Incidentally, when using my "reference triggered" buffered event counting task, I always get an error saying that I overwrote samples before they could be read (which is, of course, true). However, this doesn't prevent me from reading the buffer or getting the correct data. I set overwrite mode to DAQmx_Val_OverwriteUnreadSamps, but still get the error. Could it be that interrupts are being generated when samples are overwritten, thus reducing the bandwidth?
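The circular-buffer behavior Jason describes can be mimicked in pure Python with a bounded deque (this is only an analogy for the DAQmx buffer, not a model of the driver; the buffer size here is arbitrary):

```python
from collections import deque

BUFFER_SIZE = 8                  # stand-in for the configured DAQmx buffer size
buf = deque(maxlen=BUFFER_SIZE)  # oldest samples silently overwritten

for sample in range(20):         # "acquire" 20 samples continuously
    buf.append(sample)

# Only the most recent BUFFER_SIZE samples survive; samples 0-11 were
# overwritten before any read occurred -- the very condition the
# "samples were overwritten before they could be read" error refers to.
print(list(buf))                 # [12, 13, 14, 15, 16, 17, 18, 19]
```

With overwriting allowed, the question becomes where a later Read call points within this surviving window, which is what the RelativeTo / Offset discussion below is about.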
Jason
01-12-2007 05:01 PM
01-14-2007 09:53 PM
01-15-2007 03:23 PM
Thanks for clarifying, reddog. It helps a lot to learn more about why things work the way they do, such as the throughput drop for continuous acquisition.
Also, good point on emphasizing the distinction between the actual overwrite of a circular buffer and the later Read call that attempts to retrieve data from the buffer. I've apparently had a wrong notion of what the "samples were overwritten before they could be read" error meant. I took it as implying that the overwrite was the cause of the error rather than the read itself. Apparently I never tried querying the task status before attempting the read -- with my faulty understanding, I'd have expected the status query to reveal an overwrite error. Had I tried it, I wouldn't have gotten the expected error and could then have discovered that it was my Read call creating the problem. Anyway, your explanation definitely helped me make the mental connection about why the read properties need to be adjusted in order to avoid the error.
Small suggestion: if ever the error text is being updated, it'd be helpful to inject some #'s into it, along the lines of, "failed attempt to read at sample #101. Earliest sample in buffer is #4500."
Couple followup questions while on the topic: when setting Read properties for "RelativeTo" and "Offset" with a DAQmx Read property node, are they both persistent and independent? If I change just one of them, say "Offset", will my prior value for the "RelativeTo" property still hold (and vice versa)? Do they need to be set after the task has been started, or can they be configured prior to task start? Once I perform a Read with a non-default setting for RelativeTo and Offset, what happens to the internal read mark? Does it always move to point one sample past what was just read, or does it only get adjusted when the default settings for RelativeTo and Offset are both in effect?
Sorry for so many questions -- I'd experiment a bit, but I won't be able to get on a LV PC for several days and am liable to forget by then.
-Kevin P.
01-16-2007 11:00 AM
01-18-2007 09:03 AM
1. Hmmm. When I set RelativeTo = MostRecentSample, and Offset = -(# to Read) I got data *without* an error. I had also set the property allowing unread samples to be overwritten. I posed a question earlier in the thread about persistence -- perhaps that's a factor?
In my app, I created a kind of DAQmx task driver with an interface supporting different types of Read modes. At the time, we were not fully decided on whether we would need to perform continuous streaming, take occasional snapshots of the recent past, or request the next several future samples. So the code inside would set the RelativeTo and Offset properties on every single call, because in principle they could change from call to call. I'm not sure that I ever investigated whether the property settings would have remained persistent over the course of many Reads, though it sure seems like they *should*.
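The MostRecentSample / negative-Offset pattern amounts to "grab the last N samples in the buffer." As a pure-Python analogy (the property names are borrowed from DAQmx, but the buffer here is just a list, so this only illustrates the indexing, not the driver's read-mark behavior):

```python
def read_relative_to_most_recent(buffer, offset, n_to_read):
    """Mimic a DAQmx Read with RelativeTo = MostRecentSample.
    A negative offset backs up from the newest sample; with
    offset == -n_to_read the call returns the n most recent samples."""
    start = len(buffer) + offset       # offset is negative
    return buffer[start:start + n_to_read]

buffer = list(range(100, 110))         # 10 samples currently in the buffer
n = 4
recent = read_relative_to_most_recent(buffer, -n, n)
# recent -> [106, 107, 108, 109]
```

Because the requested window always lies inside the data still present in the buffer, a read positioned this way never asks for samples that have already been overwritten, which is consistent with getting data *without* the error.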
2. Can't speak to the hw directly, but the 2-sample FIFO has been known as a bottleneck for years. Here's hoping that we soon see a new multi-channel counter board. The 660x series is from pre-Y2K, which is getting old for a DAQ board design...
-Kevin P.
01-18-2007 10:00 AM
Kevin,
Thanks for the reply. I was already setting the appropriate property to allow overwriting of samples, but I still get the error. I do still get the correct data, so I can live with the error. However, the fact that I get an error makes me think that I'm doing something wrong. It is interesting that you don't get the error, since your case is very similar to mine.
Regarding the 6602 FIFO, I ran a finite acquisition, but set the sampling mode to continuous. I set the buffer size to 1000000. If I acquire 1000002 samples or less, everything is fine. If I go above 1000002 samples, I don't get any data. This seems to suggest that the 6602 really does only have a 2-sample FIFO, as reddog pointed out. It still would be helpful, though, if someone could comment on whether or not the MITE chip has any additional FIFO memory. Also, is the FIFO 2 samples per channel, or 2 samples total?
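The arithmetic behind that test is just buffer depth plus FIFO depth, assuming the samples sitting in the FIFO count toward the total that can be acquired before data is lost:

```python
HOST_BUFFER = 1_000_000  # samples, as configured in the test above
FIFO_DEPTH = 2           # samples, inferred from where the reads start failing

max_acquirable = HOST_BUFFER + FIFO_DEPTH
print(max_acquirable)    # 1000002 -- matches the observed cutoff
```

If the MITE had extra buffering in this path, the cutoff would presumably land higher than buffer size + 2, which is why the observed limit points at a bare 2-sample FIFO.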
Jason
01-24-2007 12:02 AM