12-12-2011 10:55 AM
Hi everybody.
I need to acquire an analog signal without losing any samples and filter it with a simple average.
My idea was to acquire at the maximum speed (in my case, using a USB-6215, that is 250 kS/s) and periodically average the acquired samples to record the waveform.
For example, I acquire 1 channel at 250 kS/s, and every time I have collected 25000 samples (which should happen every 0.1 s) I average them and save the result to a log containing the average value and the 0.1 s interval time.
So I expect to record one filtered point every 0.1 seconds, each being the average of 25000 samples taken at 250 kS/s.
This gives me very good noise filtering while maintaining decent reactivity.
I strictly need EXACT timing for each (averaged) sample acquired. It doesn't matter if the first is sampled after 100 ms and the next after 101 ms, but I need to know those times in order to reconstruct the original signal correctly. (In fact, I need it for real-time Kalman filtering.)
The problem is that I don't know whether my code achieves this behaviour.
Currently I'm using C# with the following code:
analogInReader.BeginMemoryOptimizedReadWaveform(samplesPerChannel, analogCallback, niTask, this.data); // start the read
where analogCallback is the callback function that is called whenever all samplesPerChannel (e.g. 25000) samples have been read.
The callback then looks something like this:
if (niTask == ar.AsyncState)
{
    // Bring the block of samples just read into application memory
    dataLogger.data = dataLogger.analogInReader.EndMemoryOptimizedReadWaveform(ar);

    // Average the block of channel 0 into a single filtered point
    signals.P = dataLogger.data[0].Samples.Average(sample => sample.Value);

    // Queue the next read
    dataLogger.analogInReader.BeginMemoryOptimizedReadWaveform(AppSettings.Default.cellSamplesPerChannel,
        dataLogger.analogCallback, dataLogger.niTask, dataLogger.data);
}
What worries me is the point at which I do the average: the line between EndMemoryOptimizedReadWaveform() and BeginMemoryOptimizedReadWaveform().
This looks to me like a delay in the acquisition, so the blocks of 25000 samples wouldn't really arrive every 25000 samples / 250000 S/s = 0.1 s; instead each acquisition would be delayed by some milliseconds in order to do the average.
Even if I postpone the average, putting the BeginRead immediately after the EndRead, there is still a small delay to stop and restart... 😞
Isn't there a way to never stop the acquisition and at the same time get groups of samples to be processed?
12-13-2011 11:03 AM
Well, no reply means I was too obscure 🙂
To be clearer, I went through all the continuous voltage acquisition examples for .NET 4.0, and I noticed the same callback paradigm in all of them.
The paradigm is this one:
private void AnalogInCallback(IAsyncResult ar)
{
    try
    {
        if (runningTask == ar.AsyncState)
        {
            // Read the available data from the channels
            data = analogInReader.EndReadWaveform(ar);

            // Plot your data here
            dataToDataTable(data, ref dataTable);

            // Queue the next read
            analogInReader.BeginMemoryOptimizedReadWaveform(Convert.ToInt32(samplesPerChannelNumeric.Value),
                analogCallback, myTask, data);
        }
    }
    catch (DaqException exception)
    {
        [...]
    }
}
My question regards the correct timing, because in your example there is a (supposed) dead time after the EndReadWaveform and before the BeginReadWaveform. So if the first waveform is sampled from time 0 to time X, the second waveform doesn't seem to run from time X to time 2X; instead it has a leading dead time D, so that:
First waveform: from time 0 to time X
Second waveform: from time X+D to time X+D+X = 2X+D
Third waveform: from time 2X+D+D = 2X+2D to time 2X+2D+X = 3X+2D
...
Nth waveform: from time (N-1)(X+D) to time NX+(N-1)D.
Is this correct, or does the dead time not apply? If it doesn't, why not?
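(To make the worry concrete with made-up numbers: if X = 0.1 s and D = 2 ms, the 10th waveform would start at 9 × (0.1 + 0.002) s = 0.918 s instead of 0.9 s, so the timing error would keep accumulating.)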
12-13-2011 02:30 PM
Hi Matmer,
Assuming you are doing a hardware-timed measurement, the software delay 'D' does not affect the measurement. This is because the card has its own clock and acquires samples based on it. These samples are continuously transferred to a temporary buffer in PC memory, where they wait for your application to read them into application memory. BeginMemoryOptimizedReadWaveform does not start the acquisition on the card; it starts the read from the temporary buffer. As long as you don't overfill the temporary buffer, the data you receive will be exactly the same regardless of when the read occurs.
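For reference, a minimal sketch of such a hardware-timed continuous task with the DAQmx .NET API might look like the following; the device/channel name, input range, buffer size, and the analogCallback delegate are illustrative assumptions, not taken from this thread:

// Assumed using: NationalInstruments.DAQmx
Task niTask = new Task();
niTask.AIChannels.CreateVoltageChannel("Dev1/ai0", "",
    AITerminalConfiguration.Differential, -10.0, 10.0, AIVoltageUnits.Volts);

// The sample clock runs on the card: 250 kS/s, continuous mode.
// The last argument sizes the PC-memory buffer in samples per channel.
niTask.Timing.ConfigureSampleClock("", 250000.0,
    SampleClockActiveEdge.Rising, SampleQuantityMode.ContinuousSamples, 250000);

AnalogMultiChannelReader analogInReader = new AnalogMultiChannelReader(niTask.Stream);
analogInReader.SynchronizeCallbacks = true; // marshal callbacks to the UI thread

// The card starts sampling when the task starts; the read only pulls
// already-acquired samples out of the PC-memory buffer. (A read will
// also auto-start the task if Start() was never called explicitly.)
niTask.Start();
analogInReader.BeginReadWaveform(25000, analogCallback, niTask);

Subsequent reads issued from inside the callback can then use BeginMemoryOptimizedReadWaveform, reusing the array returned by EndReadWaveform, exactly as in the example above.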
One thing to keep in mind is that the temporary buffer is a finite resource. If you read too infrequently, it can fill up and throw a buffer overflow error. In your example, if 'D' becomes too large, it can cause the reads to happen too slowly. For instance, if it takes 10 ms to do each read plus calculations while we are sampling at 1000 Hz and getting 5 samples per read, we will overflow the buffer, because we are only pulling 5 samples / 10 ms = 500 samples/s out of it while 1000 samples/s are going in.
For this reason, we often recommend having another thread do the data-processing portion. So instead of having the AnalogInCallback do the averaging, it might be better to offload that work to another thread, so that the processing time does not burden the acquisition.
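A minimal sketch of that producer/consumer split might look like this (it assumes using directives for System.Collections.Concurrent, System.Threading, System.Linq and NationalInstruments; the names are illustrative):

// Shared queue between the DAQ callback (producer) and a worker (consumer).
BlockingCollection<AnalogWaveform<double>[]> blocks =
    new BlockingCollection<AnalogWaveform<double>[]>();

// In the callback, enqueue instead of averaging inline. Note: the
// memory-optimized read reuses the same buffer on every call, so either
// copy the samples before enqueueing or use the plain EndReadWaveform,
// which returns a fresh array each time:
//   data = analogInReader.EndReadWaveform(ar);
//   blocks.Add(data);
//   analogInReader.BeginReadWaveform(samplesPerChannel, analogCallback, niTask);

// Worker thread: does the averaging without delaying the next read.
Thread worker = new Thread(() =>
{
    foreach (AnalogWaveform<double>[] block in blocks.GetConsumingEnumerable())
    {
        double average = block[0].Samples.Average(s => s.Value);
        // log 'average' together with its timestamp here
    }
});
worker.IsBackground = true;
worker.Start();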
12-14-2011 02:02 AM
Well, excellent reply, thank you very much, you addressed my doubts exactly.
To be perfectly sure of the hardware timing, is there a variable or anything in which I could find the exact timing of each waveform?
I mean, is there a timestamp for each sample acquired? And is it reset across waveforms, or does it increase continuously? The second option would be better, since I need to know the time between the last sample of one waveform and the last sample of the next (to measure the first-order derivative of my signal).
Another thing: if a buffer overflow occurs, I suppose an exception is thrown, so I can know for certain when there's an error, right?
That way I can be sure that, if no exception is thrown, each sampled waveform (averaged into a single point) represents the last n samples taken over a fixed timespan x which never changes (because the acquisition is hardware-timed), without any loss of data. Is that right?
12-14-2011 05:56 PM
Hi Matmer,
Rather than thinking of each block of data as an individual waveform, think of the blocks as parts of one continuous waveform. The time between samples is going to be exactly the same across all samples.
For example, if we are acquiring at 1000 Hz and the first callback gives us the first 100 samples (sample indices 0-99) while the next callback gives us the next 100 samples (sample indices 100-199), then the time difference between samples 99 and 100 is 1 ms, which is also the time between every other pair of consecutive samples.
Since we know the time between all the samples, we can calculate when each one was sampled based on when the initial sample occurred, which is right after the task start is called. It can be useful to get a timestamp from the computer's real-time clock right before or right after calling the task start; we can then use this timestamp to calculate when a particular sample was taken in absolute time. Keep in mind that since this is a software timestamp it is not going to be particularly exact, but it can be useful for getting a general idea.
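A small sketch of that bookkeeping (the variable names are illustrative, not from NI's examples):

// Wall-clock reference captured right around the task start.
DateTime t0 = DateTime.Now;
niTask.Start();

double sampleRate = 250000.0; // must match the configured sample clock
long samplesSoFar = 0;        // running sample index across all callbacks

// In each callback, after EndReadWaveform: sample i of the current block
// was taken at approximately t0 + (samplesSoFar + i) / sampleRate seconds.
int n = data[0].Samples.Count;
DateTime lastSampleTime = t0.AddSeconds((samplesSoFar + n - 1) / sampleRate);
samplesSoFar += n;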
To answer your second question: yes, it will throw an exception. If you do not receive an exception, the acquisition is proceeding correctly without any data loss.
12-15-2011 05:11 AM
Ok, you've cleared up almost all my doubts, thank you very much!
When you talk about getting system ticks just before or after the task start, do you mean before or after the first "BeginMemoryOptimizedReadWaveform" call? (Because in the continuous acquisition examples I see no explicit Task.Start() command.)
Also, could you point me to an example in which the data processing is done in a separate thread? (Referring to your first post.)