02-27-2008 10:22 PM
02-28-2008 03:59 PM
02-28-2008 04:16 PM
02-28-2008 05:16 PM
Hi Stu,
I apologize for not understanding your question. If these are the same results you are seeing when running this Butterworth filter on the FPGA, then this is definitely an issue we need to investigate. I have brought it to our R&D team, and they are currently looking into it. We will keep you updated. Thanks, Stu. Any feedback is greatly appreciated!
02-29-2008 11:41 AM - edited 02-29-2008 11:50 AM
Hi Stu,
The short answer is that the 24- and 32-bit modes are not bit-true with the 16-bit mode for 16-bit input data. I agree that the documentation is deficient and will work to correct that (probably via the DevZone article linked from the online help). I consulted with the original designer, who provided the explanation below. Both of us are operating under the assumption that your filter response curves were all produced with a 16-bit stimulus. Let me know if that is not the case.
"The FPGA filters are designed as a trade-off between quality and FPGA resource usage. The goal was to cover most practical real world applications and the filters therefore use internal dynamic corresponding to 32 bit resulting in an overall dynamic range of 26-28 bit depending on filter order and cut-off frequency. You will always lose some bits due to scaling for internal headroom and re-quantization errors. But 26-28 bits is still much better than practically any real world signal, the best A/D converters can not give you more than 24 bit or even less.
When you input an I16 bit signal the dynamic range is internally moved up to use the upper 16 bit of the 26-28 bit range and therefore you do not loose any dynamic in the process. However if you are using the 32 bit mode but only input an I16 signal, you are applying your signal to the lower 16 bit and your output noise will now correspond to at best (26-28)-16 = (10-12 bit). You are not using your dynamic range optimally. It is like inputting a low-level input signal of 10 mV when using the 10 V input range of an acquisition board. To fix 'the problem' you need to prescale/post-scale your signal. Try for example to shift your input signal 16 bit up and your output signal 16 bit down like shown on the attached screenshot."
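To make the prescale/post-scale idea concrete outside of LabVIEW, here is a rough C sketch of the same trick. The filter32 function is just a hypothetical stand-in that throws away low-order bits the way a ~26-28 bit internal path would; it is not the actual FPGA implementation.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the filter's ~26-28 bit internal path:
 * it simply drops the lowest 6 bits to mimic where precision is lost.
 * The real filter is an IIR implemented in LabVIEW FPGA. */
static int32_t filter32(int32_t x)
{
    return (x >> 6) << 6;
}

int main(void)
{
    int16_t sample = 12345;                     /* a 16-bit input value */

    /* Naive use: the I16 value only occupies the low 16 bits of the
     * 32-bit path, so the effective resolution drops to roughly
     * (26-28) - 16 = 10-12 bits. */
    int32_t y_naive = filter32((int32_t)sample);

    /* Workaround from the explanation above: shift the input up 16 bits
     * so it fills the upper half of the 32-bit range, run the filter,
     * then shift the result back down 16 bits. */
    int32_t y_scaled = filter32((int32_t)sample << 16) >> 16;

    printf("naive: %d  prescaled: %d\n", y_naive, y_scaled);
    return 0;
}
```

In this toy example the naive call returns 12288 while the prescaled call returns the original 12345, which is the same loss-of-resolution effect you are seeing in the response curves, just on a single sample.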
I hope this explanation helps. The three response curves produced with the scaled data described above are practically indistinguishable. It is possible to modify the 32-bit implementation to use a 64-bit internal path and make it behave more like a superset of the 16-bit implementation; let me know if you have a use case for this.
Jim
02-29-2008 10:11 PM
03-03-2008 09:29 AM
Hi Stu,
The filters expect the input data to be scaled so that it uses most of the specified range. The modifications to the example above were mainly to demonstrate the behavior; if your input signal really is only 16 bits, the recommended course of action is to use the 16-bit filter, not to scale up and use the 32-bit version. In fact, the only difference between the 24-bit and 32-bit implementations is that for 24 bits we take advantage of your promise that the input magnitude will not exceed 2^23, so we can scale up internally for better accuracy while still preventing overflow for steady-state signals. The 32-bit implementation has no room to scale up, because it assumes the input data will fill the entire 32-bit range.
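To put numbers on the headroom point, here is a small C sketch; the 8-bit pre-scale shift is an assumed value chosen for illustration, not necessarily what the shipped 24-bit filter uses internally.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A 24-bit input promises |x| < 2^23. */
    int32_t x24 = (1 << 23) - 1;       /* largest positive 24-bit sample */

    /* Assumed internal pre-scale: with 8 unused bits of headroom, the
     * 24-bit path can shift the sample up by 8 bits to occupy the full
     * 32-bit word, gaining precision in the intermediate arithmetic
     * without overflowing on steady-state signals. */
    int32_t prescaled = x24 << 8;
    assert(prescaled > 0);             /* 0x7FFFFF00, still fits in int32 */

    /* A full-scale 32-bit input already uses the whole word, so the
     * 32-bit path has no room for the same trick. */
    int32_t x32 = INT32_MAX;

    printf("24-bit prescaled: %d, 32-bit full scale: %d\n", prescaled, x32);
    return 0;
}
```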
It sounds like you have a 32-bit velocity encoder and your DC value is relatively small? If you need to retain the full 32-bit precision (i.e., scaling the encoder output down to 24 or 16 bits is not an option), then I think you will need a customized 32-bit implementation that uses 64-bit internal paths. I will work on putting together an example of this for you to try.
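In the meantime, here is a minimal C sketch of what one second-order section of such a filter could look like. The direct-form-I structure and the Q2.30 coefficient format are assumptions for illustration only, not the implementation I will be sending; rounding and saturation are also omitted for brevity.

```c
#include <stdint.h>

/* One direct-form-I biquad section with 32-bit input/output and a
 * 64-bit accumulator, so no intermediate precision is lost even when
 * the input fills the full 32-bit range. Coefficients are assumed to
 * be Q2.30 fixed point. */
typedef struct {
    int32_t b0, b1, b2, a1, a2;   /* Q2.30 filter coefficients   */
    int32_t x1, x2, y1, y2;       /* previous inputs and outputs */
} biquad32_t;

static int32_t biquad32_step(biquad32_t *f, int32_t x)
{
    /* Accumulate all five products in 64 bits. */
    int64_t acc = 0;
    acc += (int64_t)f->b0 * x;
    acc += (int64_t)f->b1 * f->x1;
    acc += (int64_t)f->b2 * f->x2;
    acc -= (int64_t)f->a1 * f->y1;
    acc -= (int64_t)f->a2 * f->y2;

    /* Convert back from Q2.30 to the 32-bit output range
     * (rounding/saturation omitted in this sketch). */
    int32_t y = (int32_t)(acc >> 30);

    /* Update the delay line. */
    f->x2 = f->x1;  f->x1 = x;
    f->y2 = f->y1;  f->y1 = y;
    return y;
}
```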
Jim
08-06-2008 04:02 PM