Multifunction DAQ


How do I calibrate binary voltage input on an M-series 6221?

Hi,
 
Would someone please show me how to calibrate the binary voltage input from an M-series 6221 board using DAQmx (preferably in ANSI C, not LabVIEW)?
 
I have a global task handle for an AnalogInput task
  TaskHandle  hAItask=0;
  int16  AIbinaryBuffer[20000];
I create hAItask with this code (for use with a circular buffer):
  DAQmxErrChk (DAQmxCreateTask("",&hAItask));
  DAQmxErrChk (DAQmxCreateAIVoltageChan(hAItask, "Dev1/ai0:3", "", DAQmx_Val_RSE, -10.0, 10.0, DAQmx_Val_Volts, NULL));
  DAQmxErrChk (DAQmxCfgSampClkTiming(hAItask, "/Dev1/ao/SampleClock", 40000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 800000));
  DAQmxErrChk (DAQmxSetReadOverWrite(hAItask, DAQmx_Val_OverwriteUnreadSamps));
  DAQmxErrChk (DAQmxSetReadRelativeTo(hAItask, DAQmx_Val_FirstSample));
I start the task using
  DAQmxErrChk (DAQmxStartTask(hAItask));
And I read it using
  DAQmxErrChk (DAQmxSetReadOffset(hAItask, DAQmxInternalAIbuffer_Offset));  // I provide the offset
  DAQmxErrChk (DAQmxReadBinaryI16(hAItask, 4000, 10.0, DAQmx_Val_GroupByScanNumber, AIbinaryBuffer, 20000, &sampsPerChanRead, NULL));
 
ALL THIS SO FAR WORKS FINE.  My code scales the DAQmxReadBinaryI16() input fine, but I still need to calibrate it.
 
To calibrate it, after creating hAItask but before starting it, I call the code below (based on the example file SaveChannelCalibrationInfo.c) to get the calibration polynomial coefficients - BUT IT DOESN'T WORK!!!  DAQmxGetAIChanCalHasValidCalInfo(taskHandle, chanName, &hasValidCalInfo) returns hasValidCalInfo = 0, which seems ridiculous.
WHAT AM I DOING WRONG?????
 
int main__(void)        // from SaveChannelCalibrationInfo.c
{
TaskHandle taskHandle=0;
//char  *taskName="<Specify task name here>";  // MY TASK IS ALREADY PROGRAMMATICALLY CREATED
// Load the task
//DAQmxErrChk2( DAQmxLoadTask(taskName,&taskHandle));   // Used to get task from MAX 
taskHandle = hAItask;  // THIS I'VE MODIFIED - IS THIS WHERE THE PROBLEM IS ?????
 
// If the calibration is expired, DAQmx will return errors when we get the values of calibration attributes.
// We enable the Apply Calibration If Expired attribute to avoid those errors.  This setting is not saved
// and will not affect the channel calibration.
DAQmxErrChk2( DAQmxSetAIChanCalApplyCalIfExp(taskHandle, "", 1) );
 
// Get the names of all channels in the task
DAQmxErrChk2( DAQmxGetTaskNumChans(taskHandle, &numChannels) );
DAQmxErrChk2( GetChannelNames(taskHandle, numChannels, &channelNames) );
 
// Read and format the calibration info
for (i = 0; i < numChannels; i++)
  {
  DAQmxErrChk2( GetChanCalInfoSize(taskHandle, channelNames[i], &chanCalInfoSize) );  
  if( chanCalInfoSize > 0 )
    OneOrMoreChansWithValidCalInfo = TRUE;
  calInfoSize += chanCalInfoSize;
  }
 if( !OneOrMoreChansWithValidCalInfo )
  {
  printf("The channels in this task do not contain valid channel calibration information.\n");    // WENT HERE - NO ai0 CALIBRATION INFO AVAILABLE
  goto Error;
  }
// and so forth (rest of code not included because it never got past here)
...
}

int32 GetChanCalInfoSize (TaskHandle taskHandle, const char *chanName, uInt32 *chanCalInfoSize)
{ ...
DAQmxErrChk2( DAQmxGetAIChanCalHasValidCalInfo(taskHandle, chanName, &hasValidCalInfo));  // ERROR IS HERE - returns hasValidCalInfo = 0 = NO
if (hasValidCalInfo)
  {
  // NEVER GETS TO HERE!!!!!
  DAQmxErrChk2( DAQmxGetAIChanCalScaleType(taskHandle,chanName,&scaleType));
  DAQmxErrChk2( numScalingVals = DAQmxGetAIChanCalTableScaledVals(taskHandle,chanName,NULL,0));
  DAQmxErrChk2( numCoeffVals = DAQmxGetAIChanCalPolyForwardCoeff(taskHandle,chanName,NULL,0));
  // AND SO FORTH
  ...
  }
*chanCalInfoSize = size;    // chanCalInfoSize is always 0 !!!!!!!!!!!!!
Error:
return error;
 }
 
0 Kudos
Message 1 of 15
(5,481 Views)
Hi Bill,

I don't know exactly what you want to do. Do you want to do a self-calibration or an external calibration? From what you have said, I don't think you want an external calibration. You could run a self-calibration after task creation and before reads, but it would need to happen before you enter the read loop, so that also seems unlikely.

Do you have a transducer that MAX does not have a type for, so that you need to tell the driver how to scale the unscaled data? If that is the case, you have to provide the forward coefficients, and the driver can then work out the reverse coefficients.
This seems more likely.

Finally, do you want the external calibration constants previously saved to the EEPROM used to scale your data?
If this is the case, you do not need to worry: the unscaled data already uses the latest calibration constants at a board level. As soon as the data is read in it goes through the CALDACs, and the unscaled or scaled data you get out has been calibrated for the board.

To go from unscaled to scaled in your program, you will need to get the Device Scaling Coefficients from the driver.

Please reply with exactly what you want the driver to do.

Regards
JamesC
NIUK and Ireland

Message 2 of 15

Hi James,

Thanks for quickly responding. Yes, I want to Self-Calibrate once (or maybe a few times) during my program start.  I have actually called
    DAQmxErrChk2( DAQmxSelfCal("Dev1") );
after running
    taskHandle = hAItask;
in the above program, but it did nothing except update the date and time returned by DAQmxGetSelfCalLastDateAndTime().

I am new to programming NI boards (this is my first), so I don't know the "National Instruments way" of calibrating.  (My program was originally written for the Axon Instruments 1322A board, which self-calibrated during program startup if needed.)

I don't even know whether I have to calibrate the 6221 board while in MAX, or whether I can calibrate the board by calling DAQmxSelfCal("Dev1") in my program at startup if the board has not yet been self-calibrated.  I would definitely prefer the second way because it would be much simpler for my users.  I would like to use MAX only to get the "Dev1" device name, if possible.

I am actually measuring picoamps from a single-channel patch clamp amplifier (a neurophysiological amplifier), so I guess it's a transducer that MAX does not have a type for.  But if I could just get the board accurately calibrated in volts at the board input, I would be completely satisfied.

But so far, I can't even get the SaveChannelCalibrationInfo.c demo program working, and I think the problem could be in using my taskHandle created earlier in my above program
    taskHandle = hAItask;                                                                 // THIS I'VE MODIFIED - IS THIS WHERE THE PROBLEM IS ?????
rather than this commented out DAQmxLoadTask() code to get the task from MAX
    //DAQmxErrChk2( DAQmxLoadTask(taskName,&taskHandle));      // Used to get task from MAX

Then the code later fails in
  DAQmxErrChk2( DAQmxGetAIChanCalHasValidCalInfo(taskHandle, chanName, &hasValidCalInfo));  // ERROR IS HERE 
because hasValidCalInfo returns 0, or NO.  Remember, I have tried putting DAQmxSelfCal("Dev1") after taskHandle = hAItask, and that did not make the above code work.

I'll worry about how to use the coefficients after I get DAQmxGetAIChanCalHasValidCalInfo() to work.  But I do want the previous SELF (NOT EXTERNAL) calibration constants saved to the EEPROM used to scale my data.

Hope to hear from you soon.

Cheers, Bill

Message 3 of 15
Hi Bill-
 
It sounds like there is some confusion about the DAQmxGetAIChanCalHasValidCalInfo property.  That property is used to query whether the channel has been software-calibrated using the DAQmx Global Virtual Channel Calibration feature in MAX.  This "calibration" allows you to assign reference values to unknown inputs and apply a linear or other fit in between.  For example, if you have an external signal at 9.8 volts that you want the automatic scaling in DAQmx to interpret as 10 V, you can use this Channel Calibration feature.  The property does NOT refer to whether the device itself is correctly or successfully calibrated.
 
So, unless you use Global Virtual Channels in MAX and use the Channel Calibration feature, this property will always return False.
 
The DAQmxSelfCal function does perform a hardware calibration.  This includes measuring a precise onboard reference voltage and updating the calibration date/time as well as the calibration coefficients the device will use to scale its native units (i.e. binary) to voltage.  The coefficients returned by DAQmxGetAIDevScalingCoeff will be adjusted automatically by the DAQmxSelfCal function.  The DAQmxGetAIChanCalHasValidCalInfo is not related to this process.
 
Hopefully this helps-
Tom W
National Instruments
Message 4 of 15

Hi Tom,

Thanks for your response; it clears up a lot.  I won't bother with DAQmxGetAIChanCalHasValidCalInfo().

However, I don't think I can use DAQmxGetAIDevScalingCoeff().  I want to convert unscaled, uncalibrated binary values to UNSCALED, CALIBRATED binary values (not SCALED, calibrated voltages), in part so that I can save them to a binary disk file.  My understanding is that DAQmxGetAIDevScalingCoeff() converts unscaled, uncalibrated binary values to SCALED, calibrated voltages.

What function should I use?

I tried DAQmxGetAIChanCalPolyForwardCoeff(), but when using this code,
    DAQmxErrChk( numCoeffVals = DAQmxGetAIChanCalPolyForwardCoeff(taskHandle,chanName,NULL,0));
DAQmxGetAIChanCalPolyForwardCoeff() returned numCoeffVals = 0, indicating that it was NOT WORKING.

In contrast, using this code,
    DAQmxErrChk( numCoeffVals = DAQmxGetAIDevScalingCoeff(taskHandle,chanName,NULL,0));
I could get DAQmxGetAIDevScalingCoeff() to return numCoeffVals = 4, suggesting that at least it works even though it is not the right function.

Cheers,  Bill

Message 5 of 15

Hi Bill-

There is no way to return scaled binary values with NI-DAQmx.  If you're performing high-speed logging, the best method would be to get the coefficients with DAQmxGetAIDevScalingCoeff(), log them to file, and then log the raw data as it is returned by DAQmxReadBinary...  When you need to read the file back, just apply the calibration and scaling via the 3rd-order polynomial coefficients you saved previously.

Hopefully this helps-

Tom W
National Instruments
Message 6 of 15

Hi,

  What exactly do you mean by unscaled, uncalibrated? I think you're referring to the fact that, if your device supports software calibration (M-series boards), NI-DAQmx does not calibrate unscaled samples.

Your device uses software calibration to adjust the software scaling of signals read from and produced by your device. Using calibration pulse width modulated (PWM) sources with a reference voltage, your device measures and calculates scaling constants for analog input and analog output. The scaling constants are stored in nonvolatile memory (EEPROM) on your device. NI recommends that you self-calibrate your device just before a measurement session but after your computer and the device have been powered on and warmed up for at least 15 minutes. You should allow this same warm-up time before performing any calibration of your system. Frequent calibration produces the most stable and repeatable measurement performance. The device is not harmed in any way if you recalibrate it often.

For the M-series board, the acquired raw data will be uncalibrated. However, the calibration information and the scaling information are combined by the driver and available through the AI.DeviceScalingCoeff property of a DAQmx Channel Property Node. This property is selected by clicking Properties » Analog Input » General Properties » Advanced » Device Scaling Coefficients. The data returned by this property is an array; each element is a polynomial coefficient. This polynomial can be applied to the raw data to get the scaled (and calibrated) data.

Therefore, if you want unscaled, calibrated values, you'll need to use the scaling coefficients to convert the data to a real number, which is then calibrated, and then convert the values back to an integer appropriate for the resolution of the board. You might as well read the data fully scaled and calibrated and then convert it back, since the driver performs the scaling efficiently.

Otherwise, as Tom says, the DeviceScalingCoefficients will take you from the stored raw, uncalibrated data to the real-world values you ultimately need (i.e. the DeviceScalingCoefficients include the calibration).

Thanks

Sacha Emery
National Instruments (UK)

Message 7 of 15

Hi Tom and Sacha,

I essentially got the Analog Input calibrated using this code to convert from unscaled, uncalibrated binary values to scaled, calibrated voltages, and then to calibrated, unscaled binary values:

    DAQmxErrChk( DAQmxGetAIDevScalingCoeff(hAItask, "Dev1/ai1", ai0CoeffVals, 4) );    // 4 = NUM_COEFF_VALS

                UnCalBinVal = AIbuffer_RepSweep[sn];
                // I assume the polynomial is of the form Coeff_0 + Coeff_1*x + Coeff_2*x^2 + Coeff_3*x^3 ....
                dCalVoltVal =  Mseries_AD0_CoeffVals[0]
                             + (Mseries_AD0_CoeffVals[1] * UnCalBinVal)
                             + (Mseries_AD0_CoeffVals[2] * UnCalBinVal * UnCalBinVal)      // products, not ^:
                             + (Mseries_AD0_CoeffVals[3] * UnCalBinVal * UnCalBinVal * UnCalBinVal);  // in C, ^ is bitwise XOR
                CalBinVal = ((dCalVoltVal / 10.0) * 32767);
                dg[DGnum].arraydata.ADaryCh[AD0].ADarray[uLinArraySmplNum+lsn] = CalBinVal;

Message 8 of 15

Hi Tom and Sacha,

(SORRY ABOUT THE ABOVE GIBBERISH MESSAGE, I PUSHED THE WRONG (SUBMIT) BUTTON BY MISTAKE!)

Thanks for your help.  I essentially have the Analog Input calibrated using the code below to convert from unscaled, uncalibrated binary values to scaled, calibrated voltages, and then to calibrated, unscaled binary values, as Sacha suggested.

// get the coefficients
float64 ai0CoeffVals[4];
DAQmxErrChk( DAQmxGetAIDevScalingCoeff(hAItask, "Dev1/ai1", ai0CoeffVals, 4) );    // 4 = NUM_COEFF_VALS

// get the uncalibrated binary value
UnCalBinVal = AIbuffer[ii];

// get the calibrated, scaled voltage
dCalVoltVal = ai0CoeffVals[0] + (ai0CoeffVals[1] * UnCalBinVal) + (ai0CoeffVals[2] * UnCalBinVal * UnCalBinVal) + (ai0CoeffVals[3] * UnCalBinVal * UnCalBinVal * UnCalBinVal);   // products, not ^, since in C ^ is bitwise XOR

// then convert the voltage back to a CALIBRATED binary value (10.0 is half the p-to-p voltage).  This is a bit of a kludge to be worked on later, but it should only be off by 1 LSB.
CalBinVal = ((dCalVoltVal / 10.0) * 32767);

For a 5.0 V output pulse, my calibrated, scaled graph reads 5.068 and a 4-digit multimeter reads 5.065 - within about 0.06%, which seems OK for the Analog Input calibration.  To speed it up I may put the calculations in a TABLE to read from.

I ONLY HAVE ONE MAJOR PROBLEM LEFT: the analog output voltage is about 5.065 rather than 5.000, or about 1.3% too high.  This could be a problem internal to my program, but I don't think so.

I therefore have a final question: do I have to software-calibrate the binary values used for Analog Output with DAQmxWriteBinaryI16(), or are they calibrated by DAQmxSelfCal("Dev1")?

Thanks for your help.

Cheers,  Bill

Message 9 of 15
Hi Bill-
 
I'm glad to hear you got the effect of calibrated, scaled binary data, but I'm not clear why this method is so useful to you.  You basically scale the data three times: once to voltage, then back to binary, then back to voltage when you ultimately read the file back.  It would seem more useful to read the samples as calibrated, scaled voltages first and then convert to binary for file logging if you absolutely must have calibrated binary data.  Otherwise, the most efficient method would be to read the raw binary values, log them directly to file, and scale only when you read the file; you are already performing a similar operation to re-scale the calibrated binary data in your current implementation.  Regardless, I'm glad to see you're getting the operation you desire now.
 
In order to properly scale your AO signals, you will need to take into account the device scaling coefficients for AO, which can be read using DAQmxGetAODevScalingCoeff().  The method for using these linear scaling coefficients is similar to the AI method you're using currently.
 
Hopefully this helps-
Tom W
National Instruments
Message 10 of 15