I need to digitize analog signals from sensors at the maximum conversion rate, but the output data I get is incorrect: the values keep decreasing while the input signal is constant. I use the following ADC and core-frequency configuration:

1) POWCON = 0x00

2) ADCCON = 0x3A4 (16 clocks, max f, continuous software conversion)

3) internal Vref = 2.5 V (REFCON = 0x01)

ADC channels 0, 1, 2, 3, 4, 12, and 13 are used as inputs. The code is:

ADCCP = channel;                        /* select input channel         */
ADCCON = 0x3A4;                         /* start continuous conversions */
for (i = 0; i < 128; i++)
{
    while (!ADCSTA);                    /* wait for conversion result   */
    ADCdata[i] = (ADCDAT >> 16) & 0xFFF;
}
ADCCON = 0x324;

The result is: ADCdata[0] = 2055 ... ADCdata[127] = 2042. The real input signal corresponds to ADCdata[0].

How can I get the correct digital signal?

Hi Jul511

If you measure the voltage at the ADuC7020 pins, you will probably see the voltage drop as indicated by the conversion results. The reason is that the ADC has a capacitive input which drains some charge from your signal on each conversion. Unless your signal source is a fast, low-impedance circuit, this loss of charge will cause the signal to droop. If, for instance, you have a typical RC filter in your signal path, the repeated loss of charge means a current through the resistor, with a resulting voltage drop across that resistor. If you buffer the signal with an op-amp, you need a fast, accurate one to achieve good results.

The capacitor in the ADC that needs charging for every conversion, and is then discharged, has a nominal value of 16 pF. This capacitor must be charged during the sample time of the ADC. To achieve this we can place a capacitor on the input pin which can quickly provide the necessary charge. To ensure that the voltage on this capacitor does not drop significantly, it must be sufficiently large: at least 4096 x 16 pF = 65536 pF, so typically 100 nF could be used. This charge must then be replaced before the next conversion is required, normally through a resistor plus the output impedance of the signal source. At 1 MHz sampling of a 5 V signal, the average current through this impedance is I = CV/t = 16 pF * 5 V / 1 us = 80 uA. For an error < 1 bit we can allow 5 V / 4096 =~ 1 mV of droop, so the charging resistance can be at most R = V/I = 1 mV / 80 uA = 12.5 Ohm. This is not a trivial task at 1 MHz. Note also that we have several errors near 1 bit above, which will add together; to keep the total error under 1 bit, choose even more conservative values.