Dynamic Signal Acquisition

Strange measurements of white noise on 4472B

I'm prepping some NI hardware for an upcoming test series, and wanted to use a white noise input to verify the near-perfect phase matching between all channels on 3 PXI-4472B boards.
 
Using a waveform generator with a standard range of 2-8 volts, all simple signals look good and the channels correlate very well. However, when white noise is applied to the inputs, the measured signal drops below 100 mV on every channel, and a small offset appears, growing more negative (toward -50 mV) as the input voltage is increased. The channel correlation goes bad because, while the signal appears to be measured correctly frequency-wise, the amplitudes are mismatched across channels.
 
What the heck is going on? A scope on the data shows the white noise at the correct amplitude. I'm simply watching the acquisition through MAX at all sampling rates and various input voltages. Input a sine wave at 4 V and it looks great; hit the noise button and the measured amplitude goes to crap. The Tektronix scope shows no problems. I don't want whatever is causing this problem to creep up during experiments.
 
Is there some fundamental concept I'm missing here?
 
Thanks,
Eric
Message 1 of 6
Hey Eric,
I don't have a good idea of what might be going on, so I'd like to get some more information from you.
1) Could you post a screen shot of the data from the 4472B before and after adding the noise?
2) What settings do you have on the 4472?  Sampling Rate, coupling, etc.
 
Hopefully one of those will help.
 
-gaving
Message 2 of 6

Thanks for the reply, I'll gladly attempt to post some screenies.

This occurs with either AC or DC coupling, and across the full range of sampling rates. The signal used here is 8 V peak-to-peak, and the only change going from the sine wave to the noise input is the press of a button on the waveform generator. Again, the scope shows everything normally.

The auto-scaled noise sample shows a -150 mV average. This gets closer to 0 as the input voltage is turned down (as does the already small amplitude of the measured signal).

The last plot shows 2 channels from each of the 3 cards, illustrating the harmonic match but failing to match the amplitudes of the diminished signal on each channel.

Eric

[Four screenshots hosted at www.ImageShack.us]
Message 3 of 6
Hi, Eric.

This is just a wild guess here, but keep in mind that at your sample rate of 20 kS/s, the 4472 bandwidth is limited to about 10 kHz. If your white noise generator has a high bandwidth (say, several MHz), then the 4472 is only reporting the 10 kHz spectral slice it sees. That could cut down the reported signal quite a bit, since most of the signal power would lie beyond 10 kHz.
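Here's a rough sketch of that arithmetic in Python (the 15 MHz noise bandwidth and the 2 Vrms level are just assumed numbers for illustration; the point is that the reported RMS scales with the square root of the retained power fraction):

```python
import math

# Back-of-envelope: how much of a flat, wideband noise spectrum survives the
# 4472's alias-protected bandwidth at 20 kS/s?  All numbers are assumptions.
noise_bw_hz = 15e6      # assumed generator noise bandwidth
daq_bw_hz = 10e3        # ~ alias-free bandwidth at 20 kS/s
total_rms_v = 2.0       # assumed total noise level at the input, Vrms

power_fraction = daq_bw_hz / noise_bw_hz                 # power scales with bandwidth
in_band_rms_v = total_rms_v * math.sqrt(power_fraction)  # RMS scales as sqrt(power)

print(f"in-band power fraction  : {power_fraction:.3%}")
print(f"RMS of the in-band slice: {in_band_rms_v * 1e3:.0f} mV")
```

With those numbers only about 0.07% of the power is in band, and the in-band slice comes out in the tens of millivolts, which is at least the right order of magnitude for what you're seeing.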

If this is the case, then there are several possible explanations for the voltage offset and poor correlation. The first possibility is that the input circuitry is simply overloaded and is clipping the signal before it's passed along to the ADC chip (or perhaps it's clipping in the ADC chip itself, though that's less likely). The asymmetrical nature of the clipping could account for the offset, while the slight difference in clipping levels between channels could account for the poor correlation. With true random noise, peak voltage is impossible to define or characterize, and you can only describe the probability that the voltage exceeds a certain level, such as the input range of the input circuitry. As the level of the noise signal is decreased, so decreases the probability that the signal exceeds the input voltage range, and so should the apparent offset decrease.
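To put a number on that last point, here's a quick sketch (assuming Gaussian noise and a purely hypothetical 10 V clip level; the real headroom of the input circuitry may be quite different):

```python
import math

def exceed_probability(rms_v: float, clip_v: float) -> float:
    """P(|v| > clip_v) for zero-mean Gaussian noise with the given RMS."""
    return math.erfc(clip_v / (rms_v * math.sqrt(2)))

clip_v = 10.0  # hypothetical clip level of the input circuitry (illustration only)
for rms_v in (4.0, 3.0, 2.0, 1.0):
    p = exceed_probability(rms_v, clip_v)
    print(f"noise RMS {rms_v:.1f} V -> P(|v| > {clip_v:.0f} V) = {p:.1e}")
```

The exceedance probability collapses very quickly as the level comes down, which would match the apparent offset shrinking as you reduce the input voltage.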

Another possibility is that the ADC modulator and digital filter are swamped with more out-of-band signal energy than they're designed to handle. In the case of an internal overload, the modulator is programmed to reset to keep it stable, but that reset will cause discontinuities in the processed signal, which in turn could produce unpleasant artifacts such as those you've observed.

One more possibility is that the input amplifiers are exhibiting non-linearity (other than clipping, such as slew-rate limiting) in the presence of large, high-frequency signals. This could lead to rectification, which would show up as offsets and perhaps poor correlation between slightly different channel amplifiers. As I think about it, this may be the most likely possibility.
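If you want to see that mechanism in isolation, here's a toy simulation, not a model of the actual board: the sample rate, noise level, and slightly asymmetric slew limits are all assumed numbers, and the sign and size of the resulting offset depend entirely on which direction limits first.

```python
import numpy as np
from scipy.signal import butter, lfilter

rng = np.random.default_rng(0)

fs = 200e6                  # simulation sample rate (assumed)
dt = 1.0 / fs
n = 500_000

# Wideband Gaussian noise, band-limited to roughly 15 MHz and scaled to about
# 1.3 Vrms -- very loosely an "8 Vpp" generator setting.  Assumed numbers.
b, a = butter(2, 15e6 / (fs / 2))
x = lfilter(b, a, rng.normal(0.0, 1.0, n))
x *= 1.3 / x.std()

# Idealized follower with slightly asymmetric slew-rate limits (illustrative
# values; a real op amp is far more complicated than this).
slew_up = 18e6 * dt         # max rise per sample (18 V/us)
slew_dn = 20e6 * dt         # max fall per sample (20 V/us)
y = np.empty(n)
prev = 0.0
for i in range(n):
    prev += np.clip(x[i] - prev, -slew_dn, slew_up)
    y[i] = prev

print(f"input mean : {x.mean() * 1e3:+.1f} mV")
print(f"output mean: {y.mean() * 1e3:+.1f} mV  (rectified offset from asymmetric slewing)")
```

The input is zero-mean, but the slew-limited output is not: the stage tracks one polarity slightly better than the other, and the difference shows up as a DC offset.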

In any case, my guess is that you're presenting the 4472 with too much out-of-band signal power. Can you tell us what noise generator you're using? And can you give us an idea of how much out-of-band signal power you anticipate in your upcoming tests?

Cheers,
Ed L.

Message 4 of 6

Ed,

That is certainly, in theory, what is occurring here. Let me follow up with more information to see if it fits.

I've tested two different noise generators. One in particular has a range of 20 Hz - 20 MHz. Another is a waveform generator with a max range of 15 MHz. I've also sampled at the max rate for these cards, 102.4 kS/s, while trying to figure this problem out.

What confuses me is that the 4472B card has two lowpass filters. The first is a two-pole analog Butterworth filter with a 400 kHz cutoff frequency, applied before the signal reaches the ADC. The second is the 'brick-wall' digital filter tied to the sampling rate to remove aliasing. Testing the digital filter, it appears to be working as intended, since sine signals above 50 kHz bring the measured signal down to zero. So, assuming the analog filter is working, the frequencies between roughly 50 and 400 kHz are being discarded, or about a 90% reduction in energy. Given an 8 Vpp signal, I would expect measurements in roughly a 1 Vpp range... instead I'm getting less than half that, along with the nasty bias described above.
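Doing that bookkeeping explicitly (assuming, for the sake of argument, that the generator's noise is flat all the way out to 20 MHz and that only the 0-51.2 kHz slice survives, and remembering that the measured RMS scales as the square root of the retained power rather than linearly with it):

```python
import math

# Energy bookkeeping under assumed conditions: flat noise from 20 Hz to 20 MHz,
# digital filter keeping everything below ~51.2 kHz (half the 102.4 kS/s rate).
gen_bw_hz = 20e6
kept_bw_hz = 51.2e3
input_vpp = 8.0

power_fraction = kept_bw_hz / gen_bw_hz           # fraction of noise power kept
amplitude_fraction = math.sqrt(power_fraction)    # amplitude scales as sqrt(power)

print(f"power kept     : {power_fraction:.3%}")
print(f"amplitude kept : {amplitude_fraction:.1%}")
print(f"8 Vpp of noise would read roughly {input_vpp * amplitude_fraction:.2f} Vpp")
```

With the flat-spectrum assumption the retained power is closer to 0.3% than 10%, so the expected amplitude is quite sensitive to what the generator's spectrum actually looks like.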

I pulled out an old (and still very useful) spectrum analyzer which outputs noise with a maximum range of probably 25-50 kHz. Since that falls within the Nyquist rate, the corresponding acquisition was dead on. Not having any low-pass hardware filters on hand, this will serve our needs for performing channel phase calibration. Apart from noise, our actual experiments won't have content beyond the 5-10 kHz range, and ultimately a perfect channel calibration signal would include just the experimental frequency range. I was just being lazy and using available equipment.

I'm not certain about the analog filter either. When ramping the sine wave frequency from the waveform generator up past the Nyquist rate to a maximum of 15 MHz, no signal was measured, but a negative bias did appear, up to -0.5 V at around 6 MHz. This is coincidentally close to a narrow hole in the digital filter at 6.55 MHz for the max Fs, but not exact. If the low-pass analog filter were doing its job, why is there this bias from the high-frequency components?

In previous experiments a couple of years ago with other hardware, we sampled noise from these other noise generators without problems. I'm disappointed that the ADC's digital aliasing filter apparently gets overloaded by signals with 90% of their frequency content above the Nyquist rate. Then again, as I'm just coming on board to set up the equipment for signal analysis, I'm not yet very knowledgeable about the breadth of the hardware's capabilities, so maybe I shouldn't be surprised.

Thanks for your knowledgeable input,

Eric

 

Message 5 of 6
Hi, Eric.

Everything seems to point to the too-much-out-of-band-power diagnosis. The crux of the problem is that there's no passive filtering at the input of the board before the first op amp. This is to keep the noise low and the input impedance high. So the first op amp, an OPA2134, sees the full brunt of your wideband noise.

Now consider the noise signal: if the bandwidth were 15 MHz (-3 dB cutoff) and the level were 1 Vrms (which, at 3-sigma, we might call 6 V "peak-to-peak" (with our noses wrinkled)), the rms slew rate would be about 90 V/us. So the instantaneous slew rate would exceed 20 V/us most of the time, and that's a big problem, since the OPA2134's slew rate limit is specified at 20 V/us. And with that much level and bandwidth at its doorstep, this poor little heap of transistors would spend most of its time in a nonlinear mode, just trying to keep up. Thus the poor performance.
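For anyone who wants to check that estimate, here's the back-of-envelope version, idealizing the noise as brick-wall band-limited at 15 MHz. That idealization is conservative; the real generator's roll-off carries energy past the -3 dB point, which only pushes the figure higher.

```python
import math

sigma_v = 1.0           # noise level, Vrms (assumed)
bw_hz = 15e6            # noise bandwidth, treated as a brick wall (assumed)
limit_v_per_us = 20.0   # OPA2134 specified slew-rate limit

# RMS slew rate of brick-wall band-limited Gaussian noise: 2*pi*sigma*B/sqrt(3)
rms_slew = 2 * math.pi * sigma_v * bw_hz / math.sqrt(3) / 1e6   # V/us

# Fraction of time the (Gaussian) instantaneous slew rate exceeds the limit
p_exceed = math.erfc(limit_v_per_us / (rms_slew * math.sqrt(2)))

print(f"RMS slew rate: {rms_slew:.0f} V/us")
print(f"|dv/dt| exceeds {limit_v_per_us:.0f} V/us about {p_exceed:.0%} of the time")
```

Even with the conservative brick-wall idealization the amplifier is past its slew-rate limit the majority of the time, so the exact number doesn't change the conclusion.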

The same thing is probably happening when you put in the 6 MHz signal - slew rate limiting, and asymmetrical limiting at that, which will cause rectification and offsets.

Because of the analog filtering in the next stage, I think that the problem is mainly in the first stage and is probably not caused by the ADC or any other components.

So what to do about it? Well, I can say with some certainty that the board was designed with the expectation that "nobody would ever have a signal like that." (!) And in truth that's probably a reasonable expectation for most sound and vibration applications (for which the board was principally intended), though the data sheet would seem to suggest that you can hurl anything at it and nothing bad will happen. I wouldn't anticipate any hardware fixes, but at the very least perhaps the data sheet specifications should be amended. In any case, I hope that the band-limited generator you have is an acceptable work-around.

In the meantime, let us know if you have any more problems.

Cheers,
Ed L.

Message 6 of 6