
pxi-4070 calibration

I would like to calibrate pxi-4070 cards with a Fluke 5520A instead of the Fluke 5720A called for in the calibration procedure.

 

I think the 5520A would be adequate (spec-wise) for the application we are using the pxi-4070 cards for. I will use another Fluke 5520A to verify that the performance of the pxi-4070 meets our requirements after calibration.

 

My first of many questions involves the Ohms calibration procedure. 

The procedure calls out connecting the pxi-4070 to the calibrator in both 2-wire and 4-wire configurations.

 

While in the 4-wire configuration, the 5720A calibrator is placed in a mode where external sense is turned on and 2-wire compensation is turned off. I can't seem to find a similar configuration on the 5520A; only the comp off, comp 2-wire, and comp 4-wire selections appear to be available.

Is there a way to do this with the 5520A?

 

We will not be using the pxi-4070 cards in the 4-wire configuration. Is there a workaround that would allow calibrating the 2-wire configuration only?

 

Message 1 of 18

Test Engxx,

 

I do not have extensive familiarity with the details of the various Fluke calibrators, but based on the documentation I've looked at (5520A and 5720A), there does not appear to be a way to both disable compensation and enable sense.

 

With regard to your question about working around the 4-wire calibration: using any process other than what is dictated in the calibration documentation results in unknown calibration accuracy, but you should still be able to perform the 2-wire calibration with the options available on the 5520A. If you were to perform the calibration steps manually and bypass the 4-wire steps, you may be able to calibrate your system to within a range suitable for your application. But again, deviating from the prescribed calibration procedure will not guarantee a fully calibrated device.

 

 

Regards,

 


 

National Instruments
Message 2 of 18

You mention 'bypassing' the 4-wire calibration steps as a possible workaround.  I was under the impression that 'skipping' any steps in a calibration would cause the cal values to revert to their previous values.

That is, it appears that you safeguard against anyone 'skipping' steps during calibration.  Is there a way around this?

 

I have an additional question regarding the AC voltage calibration.  There are frequency/voltage points in the cal table that the 5520A cannot hit: I cannot get > 100 kHz out at 5, 50, and 100 Volts.

 

For my application I will not be specifying performance of the module for frequencies greater than 20 kHz.  For the cal points the 5520A cannot hit, can I simply provide the correct voltage at the highest frequency the 5520A will provide?  Will this still allow the module to meet specification down at 20 kHz?  That is, what effect, if any, would the higher-frequency cal points have at 20 kHz and below?

 

I would also ask: is there a way to 'work around / skip' the higher-frequency cal points and still save the calibration data to the module?

Message 3 of 18

Upon further review of the manual, I do see what you're referring to.  It seems that for this module, specifically, the steps that can be made optional are explicitly stated (which I missed the first time through), and resistance is one of the required steps for calibration.  Sorry for not catching that the first time.

 

For your other questions:  I am not certain how the device would react to using a lower frequency for a high frequency calibration.  I'd suspect it would still calibrate based on the values that are read if they are within the expected range.

 

There is no documentation of whether individual components of a step can be omitted without nulling the test, so the best I can suggest is to try leaving those points out and check, as best you can, whether adjustments are made.  If the module will not accept the shortened test, it should fall back to the previous values, so no harm would be done to the calibration.

 

I apologize for not having a more complete answer, but the low-level behavior of the hardware is only really documented for the expected-use cases, and beyond that it is difficult to say with certainty what will happen.

 

Regards,

National Instruments
Message 4 of 18

Hi TestEngxx,

We cannot guarantee the specifications of the 4070 DMM at any AC voltage/frequency if an improper setup is used.  When the calibration coefficients are calculated, there are many interdependencies between test points that could prevent proper adjustment if the incorrect frequency or voltage is output; each test point calibrated in the procedure is not independent of the others.  This is why each step in the procedure must be performed in order, without any previous steps skipped.  Unfortunately, I cannot recommend the 5520A to calibrate the 4070.

 

Brandon G

 

National Instruments
Precision DC Hardware Engineer
Message 5 of 18

I now have access to a Fluke 5720, and this has raised some additional questions.

 

1)  In the resistance calibration, the procedure has you enter the expected value ('The display on the calibrator' for 10 MOhms, etc.).  On the 5720A, the displayed value is in different units depending on which cal step you are on.  What units should the 'expected value' be entered in?  Or do I just enter the value displayed, regardless of units?  (Should this always be entered in ohms? kOhms? MOhms?)

 

E.g., my calibrator displays 9.9999075 kOhm for the 10 kOhm cal point.  How would I enter this value?

 

E.g., my calibrator displays 9.999481 MOhm for the 10 MOhm cal point.  How would I enter this value?

 

2)  My intent is to automate this calibration.  Is there a command to query the displayed resistance remotely on the 5720A?

I could not seem to find it in Fluke's documentation.

 

3)  In the current adjustment section, one of the first steps is to call niDMMConfigureMeasurement.  It appears that the

resolution argument is not listed.  What resolution should be used?

 

 

 

Message 6 of 18

1. You will want to enter the value on the display as you see it; you should not need to make any adjustment for units.

 

2. Taking a look at the 5720A user guide, it looks like it does support remote interaction to some extent:

 

http://us.flukecal.com/products/electrical-calibration/electrical-calibrators/5700a5720a-multifuncti...

 

Unfortunately that manual doesn't provide much detail on which commands to send to query the displayed value, but there is a remote programming reference guide that should have some of the necessary information.  If the functionality you're looking for is detailed there, then you can likely interface as you mentioned.
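If the calibrator's remote reply does turn out to be an ASCII string, parsing it is straightforward. The sketch below is only an illustration: the "value,unit" reply shape and the function name are assumptions, and the actual query/reply format must be confirmed against Fluke's remote programming reference.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical parser for a calibrator reply assumed to look like
 * "9.999481E+06,OHM".  Returns 0 on success, -1 if the reply does
 * not match that assumed shape. */
static int parse_calibrator_reply(const char *reply, double *value,
                                  char *unit, size_t unit_len)
{
    char buf[16] = {0};

    /* %lf accepts scientific notation; the comma is a literal. */
    if (sscanf(reply, "%lf,%15s", value, buf) != 2)
        return -1;

    strncpy(unit, buf, unit_len - 1);
    unit[unit_len - 1] = '\0';
    return 0;
}
```

The numeric part could then be fed into the cal-adjust calls after any unit scaling.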

 

3. I will have to look into this as it appears to be a typo in the manual at first glance.  I will get back to you once I have more information.

 

 

National Instruments
Message 7 of 18

I just wanted to ensure that I have interpreted your response correctly.  Sorry, but it seems somewhat counterintuitive, especially when I start trying to automate this.

 

This is what I took to be correct from your response:

 

When entering the Ohms value displayed, I should enter it exactly as displayed, with no consideration for units.  For 10 MOhms and for 10 kOhms, the value I enter will be practically the same (~10).

 

This is the call from the calibration procedure:

ViStatus = niDMM_CalAdjustGain(handle, NIDMM_VAL_2_WIRE_RES, Range, NIDMM_VAL_RESISTANCE_NA, 'The display on the calibrator');

 

I would then interpret that the following is what I would send to the DMM based on the calibrator values I see.

For 10 MOhm:

ViStatus = niDMM_CalAdjustGain(handle, NIDMM_VAL_2_WIRE_RES, 10e6, NIDMM_VAL_RESISTANCE_NA, 9.999481);

For 10 kOhm:

ViStatus = niDMM_CalAdjustGain(handle, NIDMM_VAL_2_WIRE_RES, 10e3, NIDMM_VAL_RESISTANCE_NA, 9.999075);

 

1) Is what I have above correct?

2) Does the DMM simply assume the value entered is in the same units as those dictated by the range?

 

When I query the value from the calibrator, it comes out in scientific notation, essentially in base units.

3) Will I need to divide the number returned by the calibrator by factors of 10 to adjust for what the DMM is expecting based on range?

 

4) Is there a way to query all of the existing cal values before I try to perform a calibration?  If there is, can I also restore them?  (I see that you can abort instead of store, and even do some verification prior to store, but I would really hate to accidentally screw up the calibration values.)

 

5) What happens if any errors occur at a cal step?  Will the module still allow the cals to be stored, or will it revert to the old values?  Also, does the module check that the applied signal is within acceptable bounds, or does it just cal regardless of the signal applied?  Do I need to verify the return status from each and every cal command sent to the instrument?  Does a value of zero returned indicate success in all cases?  Do I need to abort the calibration at the first instance of an error being returned?  (I am trying to write the calibration for modules that should all be in working order, and I want to be able to ensure the calibration was successful.  I also need to be able to detect possible bad modules as they fail over time.  There will be a fairly large number of modules to be calibrated.)

 

6)  How long will the Self Cal take to complete?

 

(Sorry for all the questions, but I am working in Linux and have had trouble locating all of the documentation that would come with the driver install under Windows.  I am also limited in communicating directly with the module since it is in an embedded environment.)

 

 

 

Message 8 of 18

I should start by suggesting our Calibration Executive software for calibration automation (if you were to switch to a Windows system).  

 

That said...

 

What you said for 1) and 2) is correct.  You will need to scale whatever the calibrator returns to match what is displayed for each step.  Essentially, each step expects whatever units that step works in, so for the kOhm step you'd enter "9.999xx" kOhm, and for the MOhm step you'd enter "9.999xx" MOhm.
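That scaling can be sketched as follows. This is a minimal illustration, not driver code: the helper name is made up, and the unit convention (ohm steps below 1 kOhm, kOhm steps up to 1 MOhm, MOhm steps above) is inferred from the calibrator displays quoted earlier in this thread.

```c
#include <math.h>

/* Hypothetical helper: scale a calibrator reading (in ohms) to the
 * "as-displayed" value a given cal step expects, assuming the step
 * works in ohms below 1 kOhm, kOhm up to 1 MOhm, and MOhm above. */
static double to_display_units(double ohms, double range_ohms)
{
    double unit = 1.0;
    if (range_ohms >= 1e6)
        unit = 1e6;          /* MOhm steps (e.g. the 10 MOhm point) */
    else if (range_ohms >= 1e3)
        unit = 1e3;          /* kOhm steps (e.g. the 10 kOhm point) */
    return ohms / unit;
}
```

So a queried reading of 9999.075 ohms on the 10 kOhm range becomes 9.999075, matching the calibrator's kOhm display.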

 

For 4), there is a function called Restore Last External Cal Constants that will restore the values of your last calibration in the event of a fatal error during the calibration process.  Unfortunately, there is no way to read out the calibration values stored on the device.

 

5) If there were a fatal error during the calibration procedure, you could use the function I mentioned above to revert to the previous external calibration values and then recalibrate.  If there is a non-fatal error, you should still be able to abort the calibration and revert automatically.  In the case of a manual calibration, it is the responsibility of the user to check that the measurements being taken fall within the expected ranges per the verification tables of the manual; in other words, it will cal regardless of the signal applied.  You would want to error-check at each function call to ensure proper error handling and to be able to address any issues that might arise.  Depending on the error, you may not need to abort the calibration; that depends on what went wrong and how it affects the subsequent functions.
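The check-every-call pattern might look like the sketch below. The driver calls are stubbed out (no NI-DMM header or hardware is assumed here, and the error code is invented); the negative-status-means-error convention is the usual one for IVI-style drivers, with 0 indicating success.

```c
#include <stdio.h>

typedef int ViStatus;   /* 0 (VI_SUCCESS) means OK in IVI-style drivers */

/* Stub standing in for one cal-adjust call; step 2 fails on purpose
 * with a made-up error code. */
static ViStatus stub_cal_step(int step)
{
    return (step == 2) ? -1074000000 : 0;
}

/* Run the steps in order, checking every return status.  Returns -1 if
 * every step succeeded, or the index of the first failing step, after
 * which one would restore the previous cal constants and abort rather
 * than store the new ones. */
static int run_cal_sequence(int num_steps)
{
    for (int step = 0; step < num_steps; ++step) {
        ViStatus status = stub_cal_step(step);
        if (status < 0) {   /* negative status = error; stop here */
            fprintf(stderr, "cal step %d failed (status %d); aborting\n",
                    step, status);
            return step;
        }
    }
    return -1;              /* every step passed; safe to store */
}
```

In a real automation script, each stubbed call would be one cal-adjust function from the procedure, and the failure branch would invoke the restore function before closing the session.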

 

6) This is not explicitly documented, but for my M Series DAQ device it takes 10-15 seconds.  I'd say 10-30 seconds overall.

National Instruments
Message 9 of 18

The resolution you should use in the current adjustment step should be the best for the device, which is 10 nA at that range (20 mA).


@Test_Engxx wrote:

...

 

3)  In the current adjustment section, one of the first steps is to call niDMMConfigureMeasurement.  It appears that the

resolution argument is not listed.  What resolution should be used?

 



I will be filing a request to have that added to the manual; thank you for pointing it out.

 

National Instruments
Message 10 of 18