problem with integer filter attenuation

Hi all,
I am collecting data at 200 kS/s with a 7833R and want to low-pass filter it before doing some processing and storing to memory/disk.

I'm confused about the way integer filters work in LabVIEW (8.5). I'm using the 'Classical Filter Design' VI to design Butterworth low-pass filters, then the 'DFD Scale Filter' VI, and then 'DFD FXP Quantize Coef' to create 16-bit coefficients. Using filter analysis, the FLP and FXP magnitude responses look almost identical.
I test the FLP filter with 'DFD Filtering.vi' which seems to work fine.
Then I create integer code for either LabVIEW (testing) or FPGA (my real application) with 'DFD FXP Code Generator.vi'

Now, comparing the FLP and the FXP filters, I get strange results. I generate Gaussian white noise with a standard deviation of 1 and pass it through both filters. For the FXP filter I will be reading the AD converter, so I convert the signal to a signed 16-bit integer.

Using 'Dual Channel Spectral Measurement' I then compare the filter output to its input. The floating-point filter works mostly fine.
The FXP filter has a pass-band attenuation of around -18 dB, and for some cut-off frequencies it doesn't seem to work at all. What is going on??

My program is here:
http://electronics.physics.helsinki.fi/personal/awallin/lvfilters/create_filt.vi

Here are screenshots from the response testing:




Hi, vompatti
 
There are several issues you may want to be aware of in your code.
 
1. The integer filter actually works as a fixed-point filter; you can refer to the Digital Filter Design Toolkit documentation for more information on the fixed-point concept. The input data and the output data therefore each have their own fixed-point data type. Since you only used the DFD FXP Quantize Coef VI to quantize the filter coefficients, the VIs assume that your input word length and output word length are both 16. That is to say, the input data type is I16.1 (16-bit word length with a 1-bit integer word length). Meanwhile, the VI calculates the output integer word length for you automatically, say iwl, so the output data type is I16.iwl. Your code assumes the output data type is the same as that of the input, which is incorrect. You can use the DFD FXP Get Quantizer VI to take a look at the output quantizer settings. I suspect this misinterpretation of the output data type is the key issue introducing the attenuation.
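To see how a mismatched integer word length turns into a flat pass-band loss, here is a minimal Python sketch of the idea (a stand-in for the LabVIEW fixed-point types; the sample value 7.2 and the iwl values are illustrative assumptions, not taken from your filter). Misreading an I16.4 output as I16.1 scales every sample by 2^-3, i.e. about -18 dB:

```python
import numpy as np

WL = 16  # total word length in bits

def to_fxp(x, iwl):
    """Quantize x to a signed WL-bit raw integer with iwl integer bits (incl. sign)."""
    fwl = WL - iwl                       # number of fractional bits
    return int(np.round(x * 2**fwl))

def from_fxp(raw, iwl):
    """Interpret a raw integer as a fixed-point value with iwl integer bits."""
    fwl = WL - iwl
    return raw / 2**fwl

true_value = 7.2                          # a filter output needing 4 integer bits
raw = to_fxp(true_value, iwl=4)           # stored by the filter as I16.4
wrong = from_fxp(raw, iwl=1)              # but misread by the caller as I16.1
gain_db = 20 * np.log10(abs(wrong / true_value))  # constant ~ -18 dB "attenuation"
```

Every bit of difference between the real and assumed integer word length costs a factor of 2 (about 6 dB), so a -18 dB pass-band level suggests the output iwl differs from your assumption by 3 bits.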
 
2. You can use the DFD FXP Simulation VI to run the fixed-point simulation directly, instead of generating the integer filter first. It provides the same results as the generated integer filter.
 
3. Since the input data type is assumed to have a 1-bit integer word length, you must scale the input data identically for both the floating-point filter and the fixed-point filter; otherwise you will definitely get different results. I notice that the Gaussian noise is not in the range [-1, 1]. You'd better use the Quick Scale VI first.
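In Python terms, the idea behind the Quick Scale step is just to divide by the peak absolute value (NumPy stands in for LabVIEW here; the seed and sample count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 100_000)    # sigma = 1, so peaks land well outside [-1, 1]
scaled = noise / np.max(np.abs(noise))   # peak scaled to exactly 1, like Quick Scale
```

Feed the same `scaled` array to both filters, so the fixed-point input with its 1-bit integer word length does not clip.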
 
4. With 16-bit quantization, the filter magnitude response at low frequencies is actually not as good as you expect. You'd better refer to the following example in NI Example Finder to analyze the filter at log-spaced frequency bins.
NI Example Finder >> Toolkits and Modules >> Digital Filter Design >> Getting Started >> Analyze Frequency Response of Filter with Log Spaced Freq Bins.vi
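A rough Python/SciPy equivalent of what that example does (SciPy stands in for the DFD analysis VIs; the 4th-order 1 kHz filter is just an illustrative assumption):

```python
import numpy as np
from scipy.signal import butter, freqz

fs = 200e3                                      # 200 kS/s, as in the original post
b, a = butter(4, 1e3 / (fs / 2))                # hypothetical 1 kHz Butterworth low-pass
f = np.logspace(0, np.log10(0.99 * fs / 2), 512)  # log-spaced bins, 1 Hz up to near Nyquist
w, h = freqz(b, a, worN=2 * np.pi * f / fs)     # evaluate response at exactly those bins
mag_db = 20 * np.log10(np.abs(h))
```

Linearly spaced bins put almost no points inside a narrow low-frequency pass-band; log spacing resolves it properly, which is why the quantization damage at low frequencies only shows up in this kind of plot.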
 
5. When the passband is 10 Hz, the filter coefficients are quite sensitive to quantization error. You'd better feed an in-band signal to test the numeric performance; out-of-band signals lead to very small outputs, which do not give you much useful information.
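To illustrate just how fragile a 10 Hz passband at 200 kS/s is, here is a Python/SciPy sketch (SciPy stands in for the DFD VIs, and this naive Q1.15 quantization deliberately omits the scaling stage that DFD Scale Filter provides). The direct-form numerator coefficients come out around 1e-8, far below one 16-bit LSB, so they quantize to exactly zero:

```python
import numpy as np
from scipy.signal import butter

fs = 200e3
b, a = butter(2, 10 / (fs / 2))      # 2nd-order 10 Hz low-pass at 200 kS/s
bq = np.round(b * 2**15) / 2**15     # naive 16-bit (Q1.15) coefficient quantization
# b is roughly [2.5e-8, 4.9e-8, 2.5e-8]: every value is below one LSB (2**-15),
# so bq is all zeros and the quantized filter's output is identically zero
```

This is the kind of failure mode behind "for some cut-off frequencies it doesn't seem to work at all", and why the toolkit's coefficient scaling and filter structure matter so much for such narrow passbands.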
 
Please let me know if you have further questions.
 
Thanks,
Tianming Liang