
voltage drifting in AD7176 vs AD7710

Hello All,

We have already used the AD7710 with a very simple schematic (attached) to read a pressure transducer output. Our board was a simple two-layer PCB with no separation of digital and analog grounds, no ground polygon, no low-noise regulator for the reference, and no isolation between our microcontroller and the motor driver. The system resolution in this scenario was 16 noise-free bits at 5 SPS.

In our new design we use the AD7176-2 with ALL of the considerations we ignored in the previous one (schematic attached), plus EMI film capacitors on the excitation lines and a Chebyshev digital filter on the microcontroller side. The resolution in this scenario is around 19 noise-free bits at 1000 SPS, BUT we face a big issue: voltage drifting!

In the old system, even a long time after calibration, we did not see even 0.5% voltage drift. However, in the new system, with the same mechanics and the same sensor specifications, we see 3-5% voltage drift a few hours after calibration, and it varies from day to day. The interesting thing is that when we reset our microcontroller (ARM SAM4E), the value drifts by 1-3%.

Has anyone faced this issue before, or can anyone guess what the cause could be?

I would appreciate any help with this issue.

attachments.zip
  • Hi, Mostafa.

    May I request more information regarding the issue? Could you provide the complete schematics for the old and new systems? When you say voltage drift, are you referring to offset voltage with respect to temperature, or drift at room temperature? The AD7176-2 has an evaluation board that is tested and functional within the specifications. Have you tried comparing your schematic with the existing EVAL-AD7176-2SDZ?

    Thanks,

    Jellenie

  • Hi, Mostafa.

    If you want better calibration results, the AD7176-2 provides three calibration modes that can be used: internal zero-scale, system zero-scale, and system full-scale. For the internal offset calibration, the input pins should be disconnected; they are connected internally. For the system calibrations, however, the ADC expects the system zero-scale (offset) voltage, followed by the system full-scale (gain) voltage, to be applied to the input pins before each calibration mode is initiated. You can use these three calibration modes to eliminate the offset and gain errors of your system. Try running them at a lower output data rate so that the results are accurate for all output data rates.
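
    For reference, here is a minimal sketch of how the three modes might be sequenced, assuming a driver helper like the AD7176_channel_calibration() function used elsewhere in this thread (the header name below is only a placeholder):

    #include <stdint.h>
    #include "AD7176.h"   /* assumed: provides enum AD7176_mode and AD7176_channel_calibration() */

    /* Sketch only: run all three calibration modes in order. Before calling
     * this, lower the output data rate (via the filter configuration register)
     * so the resulting coefficients are accurate at every output data rate,
     * then restore the normal rate afterwards. */
    int32_t AD7176_full_calibration(void)
    {
        int32_t ret;

        /* 1. Internal zero-scale: the ADC shorts its inputs internally. */
        ret = AD7176_channel_calibration(INTERNAL_OFFSET_CALIBRATE);
        if (ret < 0) return ret;

        /* 2. System zero-scale: the system's zero-scale voltage must already
         *    be applied to the input pins and held stable during this step. */
        ret = AD7176_channel_calibration(SYSTEM_OFFSET_CALIBRATE);
        if (ret < 0) return ret;

        /* 3. System full-scale: apply the system's full-scale voltage and
         *    hold it stable during this step. */
        return AD7176_channel_calibration(SYSTEM_GAIN_CALIBRATE);
    }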

    Thanks,

    Jellenie

  • Hi Jellenie,

    Thank you for your reply. Actually, I am not sure it is correct to call it a voltage drift. The ADC raw data drifts, but the input voltage and the amplifier output stay nearly the same, with less than 0.2% change.

    We have already worked a lot with the EVAL-AD7176 board and did not see any voltage drift on it. The issue we faced on that board was the amplifier stage. The AD8656 is not an ideal choice for amplifying the input signal at gains above 10; beyond that gain we see significant nonlinearity in the amplifier. The AD4940 is even worse in both linearity and output noise. That is why we chose the AD8221; we get good results even at a gain of 300.

    Regarding the voltage drift, as far as I understand it is not related to temperature. Last week we solved the issue by commenting out the "system offset calibration". Before that, every time the microcontroller started up, we called the "internal offset calibration" and "system offset calibration" functions. It seems these calls were the reason the ADC value changed whenever the microcontroller was reset.

    In conclusion, this does not seem to be a hardware or schematic issue (or at least that is my conclusion); it appears to be a calibration-related problem. Now we run only the "internal offset calibration" and our voltage drift is below 0.4%. However, we no longer have a "system offset calibration" in our system, which I do not think is a good thing, and I am not even sure whether the "internal offset calibration" works correctly. I have put the code below for your evaluation; a sketch of one way to keep the system offset calibration without re-running it on every reset follows the code.

    We have a function that runs at microcontroller initialization; in that function we make these two calls:

    ret = AD7176_channel_calibration(INTERNAL_OFFSET_CALIBRATE);
    if (ret < 0) return ret;

    /* Commented out because of the voltage drifting issue. */
    //ret = AD7176_channel_calibration(SYSTEM_OFFSET_CALIBRATE);
    //if (ret < 0) return ret;

    The channel calibration function:

    int32_t AD7176_channel_calibration(enum AD7176_mode calib_mode)
    {
        int32_t ret;

        /* Accept only the three supported calibration modes. */
        if ((calib_mode != INTERNAL_OFFSET_CALIBRATE) &&
            (calib_mode != SYSTEM_OFFSET_CALIBRATE) &&
            (calib_mode != SYSTEM_GAIN_CALIBRATE))
        {
            puts("Invalid calibration mode.\n\r");
            return -1;
        }

        /* Select the requested calibration mode and start it. */
        CLEAR_AD7176_OPR_MODE();
        AD7176_regs[ADC_Mode_Register].value |= ADC_MODE_REG_MODE(calib_mode);
        ret = AD7176_WriteRegister(AD7176_regs[ADC_Mode_Register]);
        if (ret < 0) return ret;

        /* Wait for the calibration to complete. */
        ret = AD7176_WaitForReady(TIME_OUT);
        if (ret < 0)
        {
            puts("Calibrate timeout\n\r");
            return ret;
        }

        /* Return the ADC to its normal operating mode. */
        CLEAR_AD7176_OPR_MODE();
        ret = AD7176_WriteRegister(AD7176_regs[ADC_Mode_Register]);
        if (ret < 0) return ret;

        return 0;
    }
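
    For what it is worth, here is a minimal sketch (only an illustration, with a hypothetical zero_scale_applied flag) of how the system offset calibration could be kept without re-running it on every reset: issue it only when the mechanics are known to be at the zero-scale point, and run just the internal offset calibration otherwise.

    #include <stdbool.h>
    #include <stdint.h>
    #include "AD7176.h"   /* assumed: provides enum AD7176_mode and AD7176_channel_calibration() */

    /* Sketch: boot-time calibration that does not disturb the system offset
     * unless the true zero-scale input is actually present. */
    int32_t calibrate_at_boot(bool zero_scale_applied)
    {
        int32_t ret;

        /* Always safe: the ADC shorts its own inputs for this mode. */
        ret = AD7176_channel_calibration(INTERNAL_OFFSET_CALIBRATE);
        if (ret < 0) return ret;

        /* Only valid when the real zero-scale voltage is applied and stable;
         * running it against an arbitrary live input shifts every later
         * reading, which matches the drift seen after each reset. */
        if (zero_scale_applied)
        {
            ret = AD7176_channel_calibration(SYSTEM_OFFSET_CALIBRATE);
            if (ret < 0) return ret;
        }

        return 0;
    }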

  • Hi Again,

    Thanks again for your support. As you suggested, and as you can see in my previous post, I used the calibration modes you mentioned. However, the "system offset calibration", instead of improving our results, causes 3-5% drift. When I do not use this mode (simply by commenting it out in my code), we do not see this drift in our results. I am wondering how to verify that these calibration modes/functions are being performed correctly.

  • Hi, Mostafa.

    System calibration is a two-step process that compensates for external system gain and offset errors as well as the ADC's own internal errors. It is basically a conversion performed on two specific input voltages (zero-scale for the offset calibration and full-scale for the gain calibration). The zero-scale point must be presented before the full-scale point. These voltages must be applied to the analog inputs of the converter before each calibration step is initiated and must remain stable until that step is complete. May I know how you calibrated your system, and what full-scale and zero-scale input voltages you used?
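
    As an illustration only, the sequence could look like the sketch below, reusing the AD7176_channel_calibration() helper from your code; wait_for_zero_scale_input() and wait_for_full_scale_input() are hypothetical stand-ins for however your system presents the two stable voltages.

    #include <stdint.h>
    #include "AD7176.h"   /* assumed: provides enum AD7176_mode and AD7176_channel_calibration() */

    /* Hypothetical: block until the corresponding stable voltage is applied
     * to the analog inputs. */
    extern void wait_for_zero_scale_input(void);
    extern void wait_for_full_scale_input(void);

    /* Sketch of the two-step system calibration: zero scale first, then
     * full scale, each voltage held stable until its step completes. */
    int32_t AD7176_system_calibration(void)
    {
        int32_t ret;

        /* Step 1: system zero-scale (offset) calibration. */
        wait_for_zero_scale_input();
        ret = AD7176_channel_calibration(SYSTEM_OFFSET_CALIBRATE);
        if (ret < 0) return ret;

        /* Step 2: system full-scale (gain) calibration. */
        wait_for_full_scale_input();
        return AD7176_channel_calibration(SYSTEM_GAIN_CALIBRATE);
    }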

    Thanks,

    Jellenie

  • This question has been assumed as answered either offline via email or with a multi-part answer. This question has now been closed out. If you have an inquiry related to this topic please post a new question in the applicable product forum.

    Thank you,
    EZ Admin