

unexpected AI_read behaviour

Hello all,

My application continuously acquires data from all 8 DAQ channels using an external start trigger and an external scan clock. The scan clock source is a stable 1000 Hz signal. The buffer size is 10,000.
Inside the while loop, AI_buffer_read.VI reads 1000 samples from the buffer on every iteration; the data columns are averaged over every 50 points to produce a 20 Hz equivalent data array, and these new data are written to a binary file. The file is opened on every loop iteration and closed after each data portion is written.

I assume that, since the scan clock runs at precisely 1 kHz, when I pull 1000 samples from the buffer there is no data left after the AI_read call. Every second I have 8 × 1000 data points. This would allow me to timestamp the data accurately, since I know the start time, dt and the data array size.
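In rough Python terms, the per-iteration processing looks like this (a minimal sketch only; the names and array shapes are assumptions, and in the real program this work is done by AI_buffer_read.VI and the averaging/file code):

```python
import numpy as np

SCAN_RATE = 1000          # external scan clock, Hz
SAMPLES_PER_READ = 1000   # samples taken from the buffer per loop iteration
DECIMATION = 50           # average every 50 points -> 20 Hz output
N_CHANNELS = 8

def process_block(block, start_time, block_index):
    """block: (SAMPLES_PER_READ, N_CHANNELS) array read from the DAQ buffer."""
    # Average every 50 consecutive samples per channel (1000 -> 20 points).
    averaged = block.reshape(-1, DECIMATION, N_CHANNELS).mean(axis=1)
    # Timestamps follow from the start time, dt and the sample count only,
    # never from the computer clock.
    dt_out = DECIMATION / SCAN_RATE                               # 0.05 s per output point
    t0 = start_time + block_index * SAMPLES_PER_READ / SCAN_RATE  # start of this block
    timestamps = t0 + np.arange(averaged.shape[0]) * dt_out
    return timestamps, averaged
```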

When the program is started I see no backlog or timeout warnings; everything works fine. But after some hours random backlog numbers start to appear, and some of them are really weird. I made a list of these numbers with the corresponding time marks. The most ridiculous part is this:

time       backlog
01:08:40   6210
01:08:40   5227
01:08:40   4242
01:08:40   3254
01:08:40   2277
01:08:40   1296
01:08:40    311

That would mean AI_read was busy doing something for about 6 seconds while data kept being acquired into the buffer.
That looks like nonsense to me. I'm totally frustrated with the way the program runs, since I see no reason for such strange backlog numbers to appear, and most of the time the application runs fine. Precise timestamping is absolutely crucial for this application, which is why I cannot leave samples in the backlog to be read on the next loop iteration: that would produce a time shift in the data flow.

So my questions are:
Is it possible that some computer activity interrupts the data-logging process?
How can I find out why such unexpected behaviour occurs, and what are the possible approaches to get rid of it?

My system is: PowerMac G4, 1.25 GHz; Mac OS 9; LabVIEW 6.1; PCI-MIO-16XE-50 DAQ board.
Message 1 of 8
It is very likely that OS actions are interrupting things. However, the interruptions should be on the retrieval end, not the acquisition. The DAQ cards are smart to the extent that they maintain the specified sample rate regardless of what the OS is doing (within limits, of course). What you are seeing is the DAQ board using its buffering to maintain your desired sample rate. Base your timestamp calculations on the number of points you have read from the buffer and all should be well.

Mike...
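In pseudocode terms, that timestamp rule amounts to something like this (a one-line Python sketch; the function name is made up and the 1000 Hz rate is taken from the first post):

```python
def block_start_time(acquisition_start, samples_read_so_far, scan_rate=1000.0):
    """Timestamp of the first sample of the next block, derived only from
    how many samples have already been read, not from the wall clock."""
    return acquisition_start + samples_read_so_far / scan_rate
```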

Certified Professional Instructor
Certified LabVIEW Architect
LabVIEW Champion

"... after all, He's not a tame lion..."

For help with grief and grieving.
Message 2 of 8
Thanks for the advice, Mike,

I've been thinking about that option, but there are other things that puzzle me a lot. I still cannot understand why my application is doing these tricks with the backlog. Let me explain.
AI_read is set to read a fixed number of samples from the buffer: 1000. Since the scan clock is a 1000 Hz signal phase-locked to the GPS clock, I see the process going this way:
1st cycle begins - acquisition triggered - 1000 samples acquired - AI_read takes these 1000 samples off the buffer - new cycle.
That's what I see most of the time: no backlog and no timeouts. Now here is my problem. I would assume that if, for some reason, AI_read left 50 samples in the buffer unread, they would be read on the next iteration, i.e. the new data sequence would consist of 50 old samples + 950 newly acquired ones. This is equivalent to a time shift, and I should see it in one of my analog channels, which carries a short pulse at the beginning of every second (that pulse is triggered by the GPS PPS signal).
But this pulse is always at the beginning: no offsets, no zeroes in the data. It looks like, despite the large backlog numbers, I'm well in time with the acquisition. I can hardly believe this is a LabVIEW bug, but where am I wrong then?
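One way to double-check that reasoning (a hypothetical Python helper, not part of the original VI; the 2.5 V threshold is an assumption): locate the GPS PPS pulse inside each 1000-sample block and confirm it always sits at the very start of the block.

```python
import numpy as np

def pps_offset(pps_channel_block, threshold=2.5):
    """Index of the first sample above threshold in this block's PPS channel,
    or -1 if no pulse is found."""
    above = np.nonzero(pps_channel_block > threshold)[0]
    return int(above[0]) if above.size else -1

# If a read ever mixed 50 old samples with 950 new ones, this offset would
# drift away from 0 even though the backlog numbers look alarming.
```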
Message 3 of 8
Sounds like the hardware is working fine. The scenario of 50 old plus 950 new isn't likely, as the DAQ card will keep sampling at 1000 Hz, thus ensuring that each AI read contains an entire scan (unless a buffer overflow occurs, but then the data is a mess).

I would imagine that the acquisition is fine, but the software lags a little behind. If the computer pauses for a couple of seconds (as a Windows user, I have come to consider this a part of life), the buffer read will be delayed. The next 1000 points WILL be an entire scan though, as will the next. While the computer is busy catching up with the DAQ card, the backlog will fall. (Your timestamp can be wrong in this instance, as the actual acquisition and the transfer to your program are skewed.) This can also be the reason why you see more than one data update per second (the AI read function reads from the buffer without delay because of the backlog).
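A toy model of that catch-up behaviour (assumed numbers, just to show why the backlog readings in the first post shrink by roughly 1000 per iteration):

```python
def simulate_stall(stall_seconds=6, reads=7, scan_rate=1000, read_size=1000,
                   loop_time=0.02):
    """Backlog after each AI read once the software resumes after a stall."""
    backlog = stall_seconds * scan_rate            # samples piled up during the stall
    history = []
    for _ in range(reads):
        backlog += int(loop_time * scan_rate)      # samples arriving during one loop
        backlog = max(backlog - read_size, 0)      # one AI read drains read_size samples
        history.append(backlog)
    return history

print(simulate_stall())   # [5020, 4040, 3060, 2080, 1100, 120, 0]
```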

I might be wrong, but this is how it looks to me.

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 4 of 8
In other words - What mikereporter said.

Shane
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 5 of 8
mikeporter, sorry.

How I hate mondays.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 6 of 8
Thanks for the comment, Shane,

Yes, I think you're right: it is an occasional "time sliding" between the DAQ data acquisition (running perfectly) and the software operation (delayed from time to time).
The funniest thing is that I do not have to do anything except keep the buffer large enough for the data not to be overwritten.
Since I do not use the computer clock for the timestamp, and 1000 scans are guaranteed to be 1 second of data, even when there is an AI_read delay I still get a correct timestamp in the end. In every cycle I add the cycle number to a pre-calculated vector of milliseconds, thus obtaining the corresponding time row. That's why I do not see missing data or offsets.
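As a minimal sketch of that time-row calculation (the names are assumptions, matched to the 20 Hz output described earlier in the thread):

```python
import numpy as np

ms_vector = np.arange(0, 1000, 50)   # pre-calculated millisecond offsets: 0, 50, ..., 950

def time_row(cycle_number):
    """Seconds since the triggered start for the 20 averaged points of one cycle."""
    return cycle_number + ms_vector / 1000.0
```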

Many thanks to you and Mike for your interest.
Message 7 of 8
Dear Alle,

I'm trying to fetch a timestamp from a GPS engine every 1 second, but I'm not sure how to do it. Have you got an example for an external trigger?

Thank you in advance,

John
Message 8 of 8