LabVIEW


FPGA Filter response issue

I am observing a difference between 16-bit, 24-bit, and 32-bit inputs to the Butterworth filter Express VI in LabVIEW FPGA. I cannot find a reference to the differences in the documentation. I have attached a test case showing the input vs. output of the function as well as a frequency response.
I believe that all options should produce the same data. The 24-bit and 32-bit modes would support a greater input range but should produce the same output values if an I16 input is supplied. Am I wrong?
 
LabVIEW 8.5
 
Stu
Message 1 of 8
Hi Stu,
 
This Butterworth filter that you are using was designed to run on the FPGA. The code that you have attached is running on a PC, which is why you are seeing unusual behavior. Please note that when running on the FPGA, this VI will be accepting binary data. I hope this answers your question; please let me know if there is anything else I can do to help. Thanks!
Message 2 of 8
I understand that it is designed for an FPGA.  I observed this behavior on the FPGA and created this test case so that NI could investigate.  Are you saying that you have tested this code on the FPGA and observed different behavior than I am reporting?  I understood that I could test code written for the FPGA on Windows with certain limitations.
Stu
Message 3 of 8

Hi Stu,

I apologize for not understanding your question. If these are the same results that you are seeing when running this Butterworth filter on the FPGA, then this is definitely an issue that we need to investigate. I have brought this to our R&D team, and they are currently looking into it. We will keep you updated. Thanks, Stu; any feedback is greatly appreciated!

Message 4 of 8

Hi Stu,

The short answer is that the 24- and 32-bit modes are not bit-true with the 16-bit mode for 16-bit input data. I agree that the documentation is deficient, and I will work to correct that (probably via the DevZone article linked from the online help). I consulted the original designer, who provided the explanation below. Both of us are operating under the assumption that your filter response curves were all produced with a 16-bit stimulus; let me know if that is not the case.

"The FPGA filters are designed as a trade-off between quality and FPGA resource usage. The goal was to cover most practical real world applications and the filters therefore use internal dynamic corresponding to 32 bit resulting in an overall dynamic range of 26-28 bit depending on filter order and cut-off frequency. You will always lose some bits due to scaling for internal headroom and re-quantization errors. But 26-28 bits is still much better than practically any real world signal, the best A/D converters can not give you more than 24 bit or even less.

When you input an I16 bit signal the dynamic range is internally moved up to use the upper 16 bit of the 26-28 bit range and therefore you do not loose any dynamic in the process. However if you are using the 32 bit mode but only input an I16 signal, you are applying your signal to the lower 16 bit and your output noise will now correspond to at best (26-28)-16 = (10-12 bit). You are not using your dynamic range optimally. It is like inputting a low-level input signal of 10 mV when using the 10 V input range of an acquisition board. To fix 'the problem' you need to prescale/post-scale your signal. Try for example to shift your input signal 16 bit up and your output signal 16 bit down like shown on the attached screenshot."
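In text form, the scaling trick in that screenshot amounts to something like the rough C sketch below. Note that fpga_butterworth_i32() is only a hypothetical stand-in for the 32-bit filter VI (implemented here as a trivial first-order low-pass just so the example runs); the shifts around the call are the point.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the 32-bit FPGA Butterworth filter:
   a trivial first-order low-pass, y += (x - y)/8. */
static int32_t fpga_butterworth_i32(int32_t x)
{
    static int64_t y = 0;
    y += ((int64_t)x - y) >> 3;
    return (int32_t)y;
}

static int16_t filter_i16_via_i32(int16_t sample)
{
    /* Prescale: move the 16-bit sample into the upper 16 bits of the
       32-bit word so it sits at the top of the internal dynamic range. */
    int32_t scaled_in  = (int32_t)sample * 65536;

    int32_t scaled_out = fpga_butterworth_i32(scaled_in);

    /* Post-scale: shift back down; the discarded low bits are the ones
       dominated by the filter's internal quantization noise. */
    return (int16_t)(scaled_out >> 16);
}

int main(void)
{
    for (int n = 0; n < 10; n++)
        printf("%d\n", filter_i16_via_i32(1000));  /* step input settling toward 1000 */
    return 0;
}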

I hope this explanation helps. The three response curves produced with the scaled data described above are practically indistinguishable. It is possible to modify the 32-bit implementation to use a 64-bit internal path and make it behave more like a superset of the 16-bit implementation; let me know if you have a use case for this.

Jim



Message 5 of 8
Thanks for the explanation. Are you saying that any input I provide to the 24- or 32-bit version should be in the MSBs of the 32-bit word? If that is true, why isn't this done in the filter code instead of burdening the user with shifting up/down?
My use case is filtering a velocity signal computed from an encoder. Most of the time the value does not change (DC). The DC value must be a true representation.
From your explanation, even if I had a 24-bit A/D value, I would have to shift it up/down in order to use the filter function. Correct?

Stu
Message 6 of 8

Hi Stu,

The filters expect the input data to be scaled so that it utilizes most of the specified range. The modifications to the example above were mainly to demonstrate the behavior; if your input signal is really only 16 bits, the recommended course of action is to use the 16-bit filter, not to scale up and use the 32-bit version. In fact, the only difference between the 24-bit and 32-bit implementations is that for 24 bits we take advantage of your promise not to exceed an input magnitude of 2^23, so we can scale up internally to give more accurate results while still preventing overflow for steady-state signals. The 32-bit implementation has no room to scale up, because it assumes the input data will fill the entire 32-bit range.
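As a rough back-of-the-envelope illustration of that headroom argument (the 2^8 factor below is mine, chosen for illustration only, not the filter's actual internal scale):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const int64_t max_i24_input = (1LL << 23) - 1;  /* promised input bound: 8,388,607 */
    const int64_t prescale      = 1LL << 8;         /* illustrative internal scale-up  */
    const int64_t scaled_peak   = max_i24_input * prescale;

    /* 2,147,483,392 <= 2,147,483,647, so the scaled signal still fits in int32_t;
       a full-range 32-bit input would leave no such room. */
    printf("scaled peak = %lld, INT32_MAX = %d, fits: %s\n",
           (long long)scaled_peak, INT32_MAX,
           scaled_peak <= INT32_MAX ? "yes" : "no");
    return 0;
}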

It sounds like you have a 32-bit velocity encoder, and your DC value is relatively small? If you need to retain the full 32-bit precision (i.e., scaling the encoder output down to 24 or 16 bits is not an option), then I think you will need a customized 32-bit implementation that uses 64-bit internal paths. I will work on putting together an example of this for you to try.
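To see why a wider internal path matters for a slowly changing (DC-like) signal, here is a toy C illustration (not the actual filter code). Both loops implement the same first-order low-pass, y += (x - y)/8, but the narrow accumulator loses the small increment to truncation while the one carrying guard bits keeps it.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const int32_t dc_input = 5;   /* small, essentially constant velocity value */
    int32_t y_narrow = 0;         /* accumulator with no extra fractional bits  */
    int64_t y_wide   = 0;         /* accumulator carrying 16 guard bits         */

    for (int n = 0; n < 200; n++) {
        /* Narrow path: (5 - 0) >> 3 truncates to 0, so the output never moves. */
        y_narrow += (dc_input - y_narrow) >> 3;

        /* Wide path: keep 16 fractional bits internally, scale down only at the end. */
        y_wide += (((int64_t)dc_input << 16) - y_wide) >> 3;
    }

    printf("narrow DC output: %d\n", y_narrow);                          /* prints 0 */
    printf("wide DC output:   %d\n", (int32_t)((y_wide + 32768) >> 16)); /* prints 5 */
    return 0;
}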

Jim

Message 7 of 8
Hi Stu,
 
After some further investigation, I did find a scaling problem that was degrading the DC performance beyond the limitations I discussed above (which still apply). This has been fixed in LabVIEW 8.6 by expanding some internal paths slightly and delaying some scaling operations in order to minimize loss of precision due to underflows.
 
One thing you can do to improve the behavior in LabVIEW 8.5 is to check the "Show configuration terminal" option. The original nonreconfigurable design (from LabVIEW 8.2) uses a modified (and more expensive) implementation for filters with very low cutoff frequencies (less than 0.01 * Fs), and the standard implementation for other cutoffs. With the online reconfigurability feature introduced in 8.5, we need to handle all cutoff frequencies with a single run-time filter architecture, so we used a more accurate generic implementation at the expense of an extra multiplier (only if you choose to show the configuration terminal). The nonreconfigurable implementations were left as-is to maintain compatibility with existing code.
 
Thanks for the feedback!
 
Jim
Message 8 of 8