LabVIEW


Interpolating lost data

I have an application with data streaming in real time, but occasionally some of the existing data needs to be "filled in" with interpolated data. The data that needs to be interpolated will always have the integer value zero; all other data can take on values between 1 and 255. My current solution uses a formula node. The formula scans the data and, when it detects a zero, looks for the next value that is NOT zero. It then has two data points: the last non-zero value before the "blank" zero data points, and the first non-zero value after them. The formula node then replaces the zeros with values that lie on a line between the two valid data points. In a sense this is interpolation, but it does not add elements to the array; it just replaces existing "blank" data to fill in the missing values. If you are at all lost, the following explanation may shed some light on this.

This application is fed by a USB wireless receiver that I have designed. A wireless transmitter sends an electroencephalography signal that has been digitized with an ADC. Occasionally some data is lost in transmission, but my USB receiver can detect the loss. To maintain time synchronization, my receiver sends LabVIEW a value of zero in place of each lost sample, since the TX data is gone. On the LabVIEW end, my application should detect this missing data as any value that is zero and insert a linearized approximation of what the missing data might be. Since my sampling rate is well above any frequency component of my signal, and assuming only a small amount of data is lost at a time, the end result should show no noticeable distortion; a linear approximation of the lost data should be sufficient.

My question is: Is there any kind of built-in LabVIEW VI that can handle this more efficiently than my current method? Or would some low-level C code be the better route for such a DSP-intensive task? Attached is my VI that does this task.
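
For anyone who can't open the attachment, the formula-node logic amounts to something like the following C sketch (illustrative names, not the exact formula-node text):

```c
#include <stdint.h>
#include <stddef.h>

/* Scan-and-replace, as described above: every run of zeros is replaced
 * by points on the straight line between the last valid sample before
 * the run and the first valid sample after it. Runs touching either
 * end of the buffer are left alone (no bracketing sample exists). */
void fill_gaps_linear(uint16_t *buf, size_t n)
{
    size_t i = 0;
    while (i < n && buf[i] == 0)          /* leading zeros: no left anchor */
        i++;
    while (i + 1 < n) {
        if (buf[i + 1] != 0) {            /* buf[i] is always a valid sample */
            i++;
            continue;
        }
        size_t j = i + 1;                 /* find the end of the zero run */
        while (j < n && buf[j] == 0)
            j++;
        if (j == n)
            break;                        /* trailing zeros: no right anchor */
        double y0 = buf[i], y1 = buf[j];
        double step = (y1 - y0) / (double)(j - i);
        for (size_t k = i + 1; k < j; k++)
            buf[k] = (uint16_t)(y0 + step * (double)(k - i) + 0.5);
        i = j;                            /* resume after the gap */
    }
}
```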





Message 1 of 7

It would help if you could place reasonable default values in the X-in and Y-in arrays (why are they 2D instead of 1D?). Make the current values the default, save under a new name, and attach it here.

 

Why aren't the inputs U8? That's sufficient for 0..255.

 

Are the values equally spaced in x?

Message 2 of 7

Ah, sorry about the lack of clarity. I didn't think to set any default values; that's a good idea.


The arrays are 2D because there are multiple channels, each with its own independent data. The VI just loops through each channel and performs the actions independently of the other channels.

 

The values currently fit in a U8, but I have plans to upgrade my architecture so that the incoming samples are 12-bit (0 to 4095). That is why the inputs are U16 at the moment.

Yes, the values are equally spaced in time (X).

Message 3 of 7

Since the data changes very little during the drops, as a first approximation we could just hold the last valid value.
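
In C terms, the hold-last-value idea is just a sketch like this (illustrative names; the real suggestion is the attached LabVIEW diagram):

```c
#include <stdint.h>
#include <stddef.h>

/* Zeroth-order hold: each zero is replaced by the most recent valid
 * sample. Leading zeros stay zero, since nothing precedes them. */
void fill_gaps_hold(uint16_t *buf, size_t n)
{
    uint16_t last = 0;
    for (size_t i = 0; i < n; i++) {
        if (buf[i] != 0)
            last = buf[i];
        else
            buf[i] = last;
    }
}
```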

 

Here's what I would do. If you really want to do linear interpolation, it would need a little more code.

 

 

Message 4 of 7

Thanks a bunch for the help. If you look at the commented-out line of code in my VI, holding the current value is exactly what it does. That was my first stab at the problem, and I occasionally revert to that method alone (no linear interpolation) when debugging.

In your opinion, which method is more efficient: a formula node that gets right to the point, or a wired-up VI like the one you came up with? Both seem to work, so my goal now is code efficiency.

Message 5 of 7

Philip_McCorkle wrote:

In your opinion, which method is more efficient: a formula node that gets right to the point, or a wired-up VI like the one you came up with? Both seem to work, so my goal now is code efficiency.


I don't have opinions. I simply benchmark both and see what transpires. 😄

 

A quick test shows that both are the same speed within a few % (~20ms on my old quad core), so it does not really matter much. Personally, I prefer the simplicity of the graphical code. It is much easier to understand. 😉

 

Anyway, I made a few changes to optimize my code (inplaceness) and it is now down to 10ms. I am sure there is quite a bit of slack left. 😉
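
For reference, "inplaceness" here means the data is modified in the buffer it already occupies instead of being copied to a new one. A rough C analogy of the difference (hypothetical functions, not the actual change I made):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

void fill_gaps_linear(uint16_t *buf, size_t n);  /* from the earlier sketch */

/* Out-of-place style: every call pays for an allocation plus a full
 * copy before any gap is filled. */
uint16_t *fill_gaps_copying(const uint16_t *in, size_t n)
{
    uint16_t *out = malloc(n * sizeof *out);
    if (out == NULL)
        return NULL;
    memcpy(out, in, n * sizeof *out);
    fill_gaps_linear(out, n);
    return out;                            /* caller must free */
}

/* In-place style: the samples are patched where they already live,
 * so there is no allocation and no copy at all. */
void fill_gaps_in_place(uint16_t *buf, size_t n)
{
    fill_gaps_linear(buf, n);
}
```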

Message 6 of 7

Well, here's a very quick modification that seems about 40x faster than yours (~0.5ms).

 

(This is very rough and can still be improved in many places, but it shows the general idea. It takes advantage of the fact that all channels are zero at the same places and that the gaps are relatively rare.)
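
The attachment isn't shown here, but the idea reads roughly like this C sketch (assuming a row-per-channel layout and reusing the interpolation step from earlier; names are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* data holds n_ch channels of n samples each, stored row by row
 * (data[ch * n + i] is sample i of channel ch). Because every channel
 * is zero at the same indices, the zero runs are located only once, on
 * channel 0, and each run is then filled across all channels. Since
 * gaps are rare, most of the time is spent in the cheap scan. */
void fill_gaps_all_channels(uint16_t *data, size_t n_ch, size_t n)
{
    size_t i = 0;
    while (i < n && data[i] == 0)           /* leading zeros: no left anchor */
        i++;
    while (i + 1 < n) {
        if (data[i + 1] != 0) {
            i++;
            continue;
        }
        size_t j = i + 1;                   /* locate the run on channel 0 */
        while (j < n && data[j] == 0)
            j++;
        if (j == n)
            break;                          /* trailing zeros: no right anchor */
        for (size_t ch = 0; ch < n_ch; ch++) {
            uint16_t *row = data + ch * n;
            double y0 = row[i], y1 = row[j];
            double step = (y1 - y0) / (double)(j - i);
            for (size_t k = i + 1; k < j; k++)
                row[k] = (uint16_t)(y0 + step * (double)(k - i) + 0.5);
        }
        i = j;                              /* resume after the gap */
    }
}
```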

Message 7 of 7