
AD4114 calibration


  We are still seeing a measurement error when using the AD4114. Our measurement error is about 60 mV, and we require it to be within 15 mV. We suspect a calibration problem. Since the formula we use for calibration is our own attempt at the conversion, how can we confirm the values of OFFSET and GAIN in the relationship below?

  Is there any way to calibrate the AD4114 manually (non-automatically)? Thanks.


  • Hi, 

    Have you considered using the ADC's system calibrations, or do you really want to adjust the offset manually? I would recommend performing system offset and gain calibrations once and then reading back the offset and gain calibration registers. You can then write the same offset and gain values to the other channels or devices. However, there is some part-to-part variation, so you can't expect identical performance from each device, but it should at least minimize the errors.

    Just to add some information regarding the equation above: it shows the calculations associated with the offset and gain for the bipolar modes of operation. You can manually adjust the gain and offset via their own registers, as the ADC provides access to the on-chip offset and gain calibration registers, allowing the microprocessor to read the device calibration coefficients and to write stored calibration coefficients.

    • The 0.075 number reflects the attenuation of the analog input, first to 0.1 and then to 75%, before the offset and gain coefficients are applied. This is done to avoid modulator saturation after the offset and gain corrections are applied. This number varies slightly from part to part because of manufacturing tolerances.
    • The value 0x800000 is the default offset coefficient. In the offset register, each bit is equal to 1 LSB.
    • The additional 0x800000 in the bipolar equation implements the offset binary coding used in bipolar mode.
    • The 0x400000, together with the gain coefficient, inverts the 0.75 scaling.

    If we work through an example where no offset and no gain correction is applied to the ADC output codes, with an offset coefficient of 0x800000 and a gain coefficient of 0x555550: the offset correction is 0x800000 − 0x800000 = 0, and the gain coefficient 0x555550 together with the fixed value 0x400000 gives a value close to 1/0.75. So the 0.75 attenuation is reverted here.
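The arithmetic in this example can be checked directly. A minimal sketch (the 0.75 attenuation factor and the coefficient values are taken from this discussion; the ADC's exact internal pipeline is simplified):

```python
# Sketch of the bipolar offset/gain post-processing described above.
OFFSET_DEFAULT = 0x800000   # default offset coefficient, 1 LSB per bit
GAIN_COEFF     = 0x555550   # example gain coefficient from the post
GAIN_FIXED     = 0x400000   # fixed value applied together with the gain register

# Offset correction with the default coefficient is zero:
offset_correction = OFFSET_DEFAULT - 0x800000
print(offset_correction)        # 0

# Gain coefficient over the fixed value approximates 1/0.75,
# undoing the internal 0.75 attenuation:
scale = GAIN_COEFF / GAIN_FIXED
print(scale)                    # close to 1.3333 = 1/0.75
```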

    But please note that the ADC does all this processing internally, so the main code-to-voltage conversion below should be used for the final results.

    Code = 2^(N − 1) × [(0.1 × VIN/VREF) + 1]
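A small sketch of this relationship and its inverse, assuming N = 24 and the evaluation board's nominal 2.5 V reference (both assumptions, not taken from a measurement):

```python
# Bipolar code/voltage relationship from the thread, with assumed N and VREF.
N = 24
VREF = 2.5

def vin_to_code(vin: float) -> int:
    """Code = 2^(N-1) * ((0.1 * VIN / VREF) + 1)."""
    return round(2 ** (N - 1) * ((0.1 * vin / VREF) + 1))

def code_to_vin(code: int) -> float:
    """Inverse: VIN = 10 * VREF * ((Code / 2^(N-1)) - 1)."""
    return 10 * VREF * ((code / 2 ** (N - 1)) - 1)

print(hex(vin_to_code(0.0)))    # 0x800000: mid-scale at 0 V input
print(code_to_vin(0x800000))    # 0.0
print(code_to_vin(vin_to_code(5.0)))   # round-trips back to ~5.0 V
```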



  • Hello,

    I have a question regarding system calibration. As I understand it (from the datasheet), the procedure for system calibration is as follows:

    1. Apply zero voltage on the input being calibrated.

    2. Start zero-scale (offset) calibration and wait for RDY pin to go low.

    3. Apply full scale voltage on the input being calibrated.

    4. Start system full-scale (gain) calibration and wait for RDY pin to go low.

    When I try to calibrate by this procedure, only the offset register updates; the gain register stays the same. How does the AD4114 know the value of my full-scale voltage? The AD4114 would need to know this voltage to correctly calculate the GAIN value, or am I wrong? Or am I supposed to manually calculate the GAIN value (after offset calibration) and write it to the register?

    I also have a question regarding the data-to-voltage calculation. I calculate the voltage by inverting the equation for Vin (for bipolar operation) from the datasheet (I assume Code means the raw data read from the data register):

    Vin = 10 * Vref * ((Code / 2^23) - 1)

    By this equation I somehow get the ratio of Vin to Vref. For example, when I apply 5 V to the input pins, the value calculated for Vin is around 2, and when I apply 2.5 V I get around 1, and so on. So if I multiply this calculated value by Vref I get the correct input voltage, but with some error: 0.2 V at 5 V input and 0.4 V at 10 V input. I am using the AD4114 evaluation board with an external reference (the default setting), which I assume is 2.5 V. What could be causing this error? My input power supply should be very accurate, and I confirmed the exact values with a multimeter.
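A quick check of these two error figures (the sign of the error isn't stated, so only the magnitudes from the post are used):

```python
# Reported |error| at each applied voltage, from the post above.
errors = {5.0: 0.2, 10.0: 0.4}

for applied, err in errors.items():
    print(f"{applied} V input: error is {err / applied:.1%} of the reading")

# Both work out to 4% of the reading - a constant fractional error,
# which points to a gain/scaling issue (reference value, divider
# attenuation, or missing calibration) rather than an offset error.
```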

    Please help me understand calculation from data to voltage and also correct calibration procedure.

    Best regards, Matej

  • Hi, 

    There is a limitation on the calibration range of the ADC gain: for a system full-scale calibration on a voltage input, the range is from 3.75 × VREF to 10.5 × VREF. However, if 10.5 × VREF is greater than the absolute input voltage specification for the applied AVDD, use that specification as the upper limit instead of 10.5 × VREF (see the Specifications section).

    May I know what input voltage you used for the system full-scale calibration? May I also know how you performed the system zero-scale calibration? Are you using unipolar or bipolar coding? Did you float the input for the offset calibration, or did you apply 0 V?

    One more thing: when you performed the calibrations or measurements, had you enabled the analog input buffers? Please note that this is required, as the high impedance of the resistive input dividers may cause an error when these buffers are disabled.
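Putting the two calibration steps from the thread into a minimal sketch. Note that `spi_write()` is a placeholder for your SPI driver, and the register address, mode codes, and bit position below are illustrative assumptions only; verify every value against the AD4114 datasheet before use:

```python
# Hypothetical system-calibration sequence; register values are assumptions.
written = []

def spi_write(reg: int, value: int) -> None:
    """Placeholder SPI write; records the transfer for inspection."""
    written.append((reg, value))

def wait_ready() -> None:
    """Placeholder: poll DOUT/RDY (or the status register) until it goes low."""
    pass

ADCMODE_REG     = 0x01      # assumed ADCMODE register address
MODE_SYS_OFFSET = 0b110     # assumed system offset calibration mode code
MODE_SYS_GAIN   = 0b111     # assumed system gain calibration mode code
MODE_SHIFT      = 4         # assumed position of the mode bits in ADCMODE

# 1. With 0 V applied to the input, start system offset calibration.
spi_write(ADCMODE_REG, MODE_SYS_OFFSET << MODE_SHIFT)
wait_ready()

# 2. With the full-scale voltage applied, start system gain calibration.
spi_write(ADCMODE_REG, MODE_SYS_GAIN << MODE_SHIFT)
wait_ready()

print(written)
```

The point of the sketch is the ordering: offset calibration first with 0 V applied, then gain calibration with the full-scale voltage applied, waiting for RDY after each step, and with the analog input buffers enabled throughout.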



  • Hello,

    As it turned out, the problem was that I had not enabled the analog input buffers. Now the calibration is successful.

    Thank you for your help.
