Multifunction DAQ

analog in calibration on PCI-6221

Solved!

I am not getting the volts per bit that I expect from my PCI-6221 board.

I'm running the "Test Panel" available in Measurement & Automation Explorer v4.3. I have an NI-6221 multifunction board with a BNC-2090 interface box installed. When I run the "analog input" test, with differential input and ai0 connected to ai8, I expect 0 volts. The min and max input values are at the defaults of -10 V and +10 V. The graph shows zero +/-1 to 2 bits of noise. See screenshot. The actual amplitudes reported are +0.000468, +0.000144, -0.000180, -0.000504. This corresponds to 0.000324 V/bit. I expect 20 V/(2^16 bits) = 20 V/65536 bits = 0.000360 V/bit. What accounts for this 10% discrepancy? Thanks.

Bill

Message 1 of 3
Solution (accepted by WCR)

Hey Bill,

 

I can see a couple of things. With M-series, realize that the data returned from the ADCs are not linear, so the V/bit will vary across the range of the acquisition; we use MCal to correct for the nonlinearity. That may account for the discrepancy right there. Also, note that the range is actually larger than that: in order to calibrate across the full desired range, some of the codes are outside the 10 V range, generally by 5% (though this would actually push the V/bit up). This is mentioned in the M-series user manual in the Analog Input Range section. All of this is factored into the absolute accuracy specifications, so they are still valid.
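(For reference, the MCal correction is a polynomial in the raw ADC code. For M-series, DAQmx reports four device scaling coefficients, so the scaling has the form V = c0 + c1*D + c2*D^2 + c3*D^3, where D is the raw code and c0..c3 come from the device's calibration. Because of the D^2 and D^3 terms, V/bit is not a single constant across the range.)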

 

Finally, when I calculate 20/65536 I get 0.0003052, which, multiplied by 1.05, gives me 0.0003204 - that looks much better. You may have swapped the 6 and 5, or Windows Calculator is fibbing to me yet again. :)
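As a quick sanity check of that arithmetic (in Python; nothing here is device-specific):

```python
# Nominal code width for a +/-10 V span on a 16-bit converter:
v_per_bit = 20.0 / 2**16     # 0.00030517578125 V/bit
# With the ~5% calibration over-range mentioned above:
print(v_per_bit * 1.05)      # ~0.0003204 V/bit, close to the measured 0.000324
```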

 

So this brings me to my question: are you just noticing the difference between theoretical and measured and wondering what is up, or were you planning on using this info? If you're reading raw binary (a common practice that results in questions like yours), take a look at this KB when you get to scaling raw data to calibrated data:

AE KB 3SKGA409: Is DAQmx Raw Data Calibrated and/or Scaled?
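For what it's worth, here is a minimal sketch of that raw-to-calibrated scaling in text form, using the nidaqmx Python package rather than LabVIEW. The device name "Dev1" is an assumption (check yours in MAX), and enum/property names may differ slightly by driver version:

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import TerminalConfiguration
from nidaqmx.stream_readers import AnalogUnscaledReader

N = 100
with nidaqmx.Task() as task:
    # "Dev1/ai0" is an assumed device/channel name; adjust to match MAX.
    ch = task.ai_channels.add_ai_voltage_chan(
        "Dev1/ai0",
        terminal_config=TerminalConfiguration.DIFF,  # DIFFERENTIAL in older versions
        min_val=-10.0, max_val=10.0)
    task.timing.cfg_samp_clk_timing(1000.0, samps_per_chan=N)

    # Read unscaled ADC codes (int16 on a 16-bit M-series board).
    raw = np.zeros((1, N), dtype=np.int16)
    AnalogUnscaledReader(task.in_stream).read_int16(raw, number_of_samples_per_channel=N)

    # Device scaling coefficients [c0, c1, c2, c3]; DAQmx uses these to
    # convert raw codes to calibrated volts, per the KB above.
    c = ch.ai_dev_scaling_coeff
    codes = raw.astype(np.float64)   # avoid int16 overflow in the polynomial
    volts = c[0] + c[1]*codes + c[2]*codes**2 + c[3]*codes**3
    print(volts[0, :10])
```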

Hope this helps, 

Andrew S

 

Message Edited by stilly32 on 01-29-2009 05:48 PM
Message 2 of 3
Thanks Andrew. Your info and the link were very informative. And you're right, I goofed on the calculator, so the difference is 6%, not 10%, and in the opposite direction. I didn't know a nonlinear calibration was used. Your post inspired me to write the attached test VI, which gets the calibration coefficients and computes the V/bit at the high, middle, and low ends of the binary range. It shows that, for my device, the second- and third-order terms in the calibration polynomial are negligible: microvolts/bit = 324.120 +/- 0.010 at the bottom, middle, and top of the range. In other words, it's highly linear. And the V/bit is 6% more than "nominal", which means the full-scale range is +/-10.6 V instead of +/-10 V.
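(The attachment is a LabVIEW VI, not shown here. A rough Python equivalent of the check Bill describes is below; the c0, c2, and c3 values are made-up placeholders, with c1 set to the ~324.120 uV/bit he reports. On real hardware, pull the coefficients from ai_dev_scaling_coeff as in the sketch above.)

```python
# V/bit across the 16-bit code range is the derivative of the scaling
# polynomial V(D) = c0 + c1*D + c2*D^2 + c3*D^3, i.e. c1 + 2*c2*D + 3*c3*D^2.
c0, c1, c2, c3 = 1.0e-4, 324.120e-6, 1.0e-14, 1.0e-18   # placeholder values

def v_per_bit(code):
    return c1 + 2*c2*code + 3*c3*code**2

for code in (-32768, 0, 32767):    # bottom, middle, top of the int16 range
    print(f"{code:6d}: {v_per_bit(code)*1e6:.3f} uV/bit")
```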
Message 3 of 3