Decimation - aaaarrrggghhhh, the horror, the horror!!!

Decimation. It's all supposed to make sense, and be easy: you input a number of samples, then you decimate, that is, you cut down the number of samples by the decimation factor. A good explanation is given at http://digital.ni.com/public.nsf/allkb/5776744A9946416986256D170079700D. Yes, I am using the Express VI, and this after unsuccessfully using Decimate (continuous).vi.

 

Here is my story:

I am gathering data from multiple sensors. It is the classic scenario: using DAQmx, you get waveforms from various sensors. The problem is that I have quite a few channels and I can only use one sampling rate, so I have to use the accelerometer sampling rate for the thermocouples too. However, the data file becomes too big, and I have no need to sample the temperature at 5 kHz, for example. Therefore I decided to decimate the temperature waveform.

First I tried Decimate (continuous).vi, but I started to see time mismatches: if I used averaging, the waveforms were nicely aligned in time; if I did not use averaging, the time I was getting for the temperature was longer. Long story short, I then tried the "Sample Compression" Express VI. This one seemed to work fine until I noticed spikes at the end of recordings. They go either up or down by a few orders of magnitude when I stop the acquisition using the reference trigger. It does not happen every time; it is rather random and does not seem to be influenced by the sampling rate. This did not happen until I started using the Sample Compression Express VI.

 

I attached some snapshots of the diagram in an effort to shed some light on my case. Has this happened to any of you? Do you have any suggestions as to why it may happen and how to get rid of it?

 

Thank you.

Message 1 of 18

I use the Decimate (continuous) VI with averaging, but my decimation is at most 1:100. You have to keep in mind that, due to the averaging, you get lowpass filtering (which is fine for the TC) and a phase shift (i.e., a time delay) of half the decimation factor. If you read 5000 samples every second, you could also just use Mean.vi.

Depending on your chosen interpolation/filter type you can (and will) get filter artifacts at the start and end. A simple mean function fed only with data avoids that.
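To make the block-mean idea concrete, here is a minimal Python sketch (a plain re-implementation for illustration, not NI's code) showing mean-based decimation and the half-block delay it introduces:

```python
def decimate_avg(samples, factor):
    """Block-mean decimation: average each group of `factor` samples.

    This acts as a crude lowpass filter, and each output value
    effectively represents the centre of its block, i.e. it is delayed
    by (factor - 1) / 2 input samples relative to the block start.
    """
    n_blocks = len(samples) // factor  # any incomplete trailing block is dropped
    return [sum(samples[i * factor:(i + 1) * factor]) / factor
            for i in range(n_blocks)]

# 10 samples, factor 5 -> two block means
print(decimate_avg(list(range(10)), 5))  # [2.0, 7.0]
```

Note that the first output value (2.0) is the value of the input ramp at sample index 2, i.e. (factor - 1) / 2 = 2 samples after the block start: that is the time delay mentioned above.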

 

About the timeshift/leakage/mismatch: it sounds like the classic rounding problem when many small numbers with small errors get summed up. That's what the timestamp datatype is good for. Try a sample rate whose dt has an error-free binary representation (1/2^n).
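The rounding effect is easy to demonstrate in any language with IEEE doubles; here is a small Python sketch (the sample rates are illustrative, chosen to make the effect obvious):

```python
# dt = 0.1 s (10 S/s) has no exact binary representation, so each
# addition rounds, and after enough additions the accumulated time
# no longer matches the true elapsed time.
t = 0.0
for _ in range(10):
    t += 0.1
print(t)          # 0.9999999999999999, not 1.0

# dt = 0.125 s (8 S/s) is exactly 2**-3, so every partial sum is
# exactly representable and no error accumulates.
t = 0.0
for _ in range(8):
    t += 0.125
print(t)          # 1.0
```

Over minutes of acquisition at a "decimal" rate like 5 kS/s, these per-sample errors sum into a visible timestamp drift, which is consistent with the mismatch described above.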

 

I haven't looked at your pictures; however, this task is a nice fit for a producer/consumer architecture...

 

Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


Message 2 of 18
I use the Decimate VI (single shot). I have a signal sampled at 20 kHz, and use it to downsample to a 50 Hz sample rate. It works fine for me, with no timing problems. Can you post a working example of your problem?


Besides which, my opinion is that Express VIs, like Carthage, must be ~~destroyed~~ deleted
(Sorry no Labview "brag list" so far)
Message 3 of 18

For decimation factors of 1:100 and less it appears to work fine (1000 samples per channel), but I want to build some generality into that VI: 1:250 or 1:500, for example (or even 1:300 or 1:1000).

 

I already have a producer/consumer loop but for other operations.

 

Thank you for your suggestions / notes.

Message 4 of 18

I did not try Decimate (single shot). I may try it at some point, but I do not think it will work for me: I have a continuous data acquisition scenario.

 

I can replicate what decimation does using Index Array and statistical functions. This works fine if the number of samples I am reading is a multiple of the decimation factor. The problems start when it is NOT a multiple, because then I have to 'wait' and append the samples needed to complete the next decimation. For example, if I read 1000 samples per while-loop iteration and take a decimating factor of 300, I get 3 full decimations at dt = initial dt * 300, but then 100 samples remain for which I cannot take dt = initial dt * 300; I have to 'wait' and append 200 more samples from the next while-loop read.

I am trying to do this 'manually' (that is, without Express VIs), but I am in a real time crunch and would have been so happy if I could have used the VIs provided by NI. However, the explanations in the help file are somewhat insufficient, and therefore I am not sure how to troubleshoot my application...
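The 'wait and append' logic described above can be sketched in a few lines; this is a hypothetical helper (the class name and API are illustrative, not from any NI library) that holds leftover samples between reads until they fill a complete block:

```python
class BlockDecimator:
    """Averaging decimator that carries leftover samples between reads.

    Samples that do not fill a complete block of `factor` samples are
    held in `carry` until the next read completes the block.
    """

    def __init__(self, factor):
        self.factor = factor
        self.carry = []

    def process(self, samples):
        """Feed one read's worth of samples; return the completed block means."""
        self.carry.extend(samples)
        out = []
        while len(self.carry) >= self.factor:
            out.append(sum(self.carry[:self.factor]) / self.factor)
            del self.carry[:self.factor]
        return out

# 1000 samples per read, factor 300: 3 full blocks, 100 samples carried over
d = BlockDecimator(300)
first = d.process(list(range(1000)))
print(len(first), len(d.carry))   # 3 100
# the next read's first 200 samples complete the pending block
second = d.process(list(range(1000, 2000)))
print(len(second), len(d.carry))  # 3 200
```

Because the carry persists across calls, the output dt stays exactly (initial dt * factor) regardless of whether the read size is a multiple of the factor; only the block boundaries straddle the reads.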

Message 5 of 18

Hi,

 

Could you please post the VI? Pictures alone are not enough for us to understand your VIs...

 

What you said above is right. I think the Decimation (Continuous) VI processes data in exactly this way.

 

Thanks!

 

Zhijun Gu

Message 6 of 18

Hi RPJ,

 

The Decimate (continuous) VI decimates the input signal as you described.

 

Suppose you read 1000 samples per loop, and the decimate factor is 300.

 

If you do not use averaging, the VI picks one sample every 300 samples.

 

So in the first loop, the VI returns the 0th, 300th, 600th, and 900th elements. Note there are 4 elements.

In the second loop, the next 1000 samples come in. The VI returns the 200th, 500th, and 800th elements of the second 1000 samples.

 

If you use averaging, the VI averages every 300 samples.

 

So in the first loop, the VI returns average values for the 0-299th, 300-599th, and 600-899th samples. Note there are only 3 elements.

In the second loop, the VI averages the 900-999th samples of the first loop together with the 0-199th samples of the second loop. It then averages the 200-499th and 500-799th samples of the second loop.
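The two behaviors just described can be sketched in Python (a re-implementation for illustration only, not NI's code) to show exactly which samples come out of each loop iteration:

```python
def decimate_pick(blocks, factor):
    """No averaging: keep one sample every `factor`, across block borders."""
    out, skip = [], 0
    for block in blocks:
        picked = block[skip::factor]          # indices skip, skip+factor, ...
        # position of the next pick relative to the start of the next block
        skip += factor * len(picked) - len(block)
        out.append(picked)
    return out

def decimate_avg(blocks, factor):
    """With averaging: mean of every `factor` samples, carrying leftovers."""
    out, carry = [], []
    for block in blocks:
        carry.extend(block)
        means = []
        while len(carry) >= factor:
            means.append(sum(carry[:factor]) / factor)
            del carry[:factor]
        out.append(means)
    return out

blocks = [list(range(1000)), list(range(1000, 2000))]
print(decimate_pick(blocks, 300))
# [[0, 300, 600, 900], [1200, 1500, 1800]]
```

For the ramp input, `decimate_pick` returns 4 then 3 samples (the second loop's picks are the 200th, 500th, and 800th elements of that block), while `decimate_avg` returns 3 means per loop, with the first mean of the second loop straddling the block boundary: exactly the behavior described above.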


The example VI, Continuous Decimating.vi, shows how to use the Decimation (continuous) VI to process a large data sequence consisting of smaller blocks of data. You can find it in the detailed help for this VI.


If possible, could you please post a VI that shows your problem, so we can understand it better?

Message 7 of 18

OK. I am attaching the first three versions. I am currently using version 3 (my own design, which does not use Decimate (continuous) or Sample Compression).

 

My version allows me to use a decimating factor that is not an exact sub-multiple of the samples per channel without affecting the time of the output waveform(s); the decimating factor can actually be bigger than the number of samples per channel read in one "rotation" of the while loop. The sub-VI I created is also intended to read the very last samples in case my decimating factor is not a sub-multiple of the samples per channel; it takes those samples, processes them, and puts the last result at a dt 'distance' from the last sample.

 

Questions:

Is my version 3 OK?

Can Sample Compression do the same thing as my version 3? If yes, how should it be configured?

 

Thank you.

Message 8 of 18

Hi RPJ,

 

I checked your version 3 VI, Decimate waveform.vi.


Your VI can handle the case where the decimating factor is not an exact sub-multiple of the samples per channel.
However, your VI returns an incorrect result when the decimating factor IS a sub-multiple of the samples per channel.

Consider the following case:

The input waveform is [1 2 3 4 5 6 7 8 9 10] and the decimating factor is 5.
It is the last read, so the "Last read" Boolean is TRUE.

Here is the result. You can see there is one redundant element in the decimated array.

 

[attached screenshot: aa.PNG]

 

 

Your VI does not contain any data. Do you have any data that shows the problem?

For example, what is the input waveform, what is the decimating factor, and what is your expected result?
I am not clear on why the Decimate (continuous) VI does not work for you.

 

Message 9 of 18

Good catch. Thank you.

I have corrected the problem (I think) - see attached. Nevertheless, when I watch the sub-VI while the application is running, say with 1000 samples per channel and a decimating factor of 500, I see only 2 values in the output terminals most of the time. And here is the problem: sometimes the output appears to have more than two values. In other words, when I look at the values of the output arrays on the front panel, from time to time, apparently quite randomly, the arrays get more than 2 values. This should not happen unless the samples per channel become more than 1000 for some reason. I am not sure why I see that problem.

 

In any case, the attached VI has some actual data. I would appreciate it if you could take a look at it again. Meanwhile I will try to re-implement the Sample Compression VI.

 

The problems I had with the NI VIs were that they either do not give me the right time if I do not average, or they show some weird spikes if I choose not to reset. (Open the only example provided by NI and play a little with the reset and the averaging and you'll see some discrepancies; those are probably expected, but as a user I wouldn't know about them because the functions are insufficiently explained.)

Any additional explanations are welcome.

Message 10 of 18