How to select the right ADC?

Hello

I have to design an analog-to-digital conversion stage with high accuracy (RMS value error < 0.1%) over a large dynamic range of 44 dB (62.5 mA to 10 A).

The number of bits necessary to code the full dynamic range is log2(10 A / 62.5 mA) = 7.32 bits.

The signal to be measured is a 50 Hz sine, so I digitized a normalized 50 Hz sine at quantizations of 2^8 to 2^12 and calculated the RMS value error due to quantization. From this, I determined that the number of bits necessary to meet the accuracy requirement must be > 9.35 bits.

Ignoring noise, the ADC must therefore have 7.32 + 9.35 = 16.67 bits, or equivalently an SNR greater than 16.67 x 6.02 + 1.76 = 102 dB.
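For reference, the simulation described above can be sketched in Python with NumPy. Note that this is only an illustration of the method: the exact bit count you obtain depends on how the quantizer and the full-scale range are modeled (mid-tread vs. mid-rise, bipolar vs. unipolar scaling), and on whether "RMS value error" means the RMS of the error signal or the error in the computed RMS value, so the numbers printed here will not necessarily reproduce the 9.35-bit figure.

```python
import numpy as np

def rms_error_pct(bits, f=50.0, fs=10_000.0, periods=10):
    """Quantize a normalized 50 Hz sine with `bits` of resolution and
    return the RMS quantization error as a percentage of the signal RMS."""
    t = np.arange(0, periods / f, 1.0 / fs)
    x = np.sin(2 * np.pi * f * t)        # normalized sine, amplitude 1
    lsb = 2.0 / 2**bits                  # full-scale range taken as -1..+1
    xq = np.round(x / lsb) * lsb         # ideal mid-tread quantizer
    err_rms = np.sqrt(np.mean((xq - x) ** 2))
    sig_rms = np.sqrt(np.mean(x ** 2))
    return 100.0 * err_rms / sig_rms

for bits in (8, 10, 12):
    print(f"{bits:2d} bits -> RMS error {rms_error_pct(bits):.4f} %")
```

Sweeping `bits` and finding where the error crosses 0.1% gives the resolution requirement for the smallest signal; the 7.32 bits for the dynamic range are then added on top, as in the calculation above.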

In your opinion, is this method correct?

Top Replies

• Hi,

ADI has a wide range of ADCs, and you can do a parametric search and filter on the parameters you want. You can go to ADI's website and click on Parametric Search, then Precision A/D Converters. Here is the link.

https://www.analog.com/en/parametricsearch/10825#/p7=8|12&p4913=10|40&d=sel|7|4748|3062|4363|4913|4365|1746|4907|4364|s3|s5|3970

Aside from the above requirements, are there other critical specs you need? Or could you share your specific/target application so we can help you narrow down the selection?

For example, I'd also like to understand more about the AFE. The dynamic range is given in current units (A): how are you planning to condition this signal for the ADC to read? Are you looking for an ADC that can read current inputs directly, or do you have a front end that converts the current to a voltage? If so, what will be the equivalent input range at the ADC?

Thanks,

Jellenie

• Hi,

I know ADI can propose ADCs for my needs, but that isn't the goal of my question.

The application in question is a Stand-Alone Merging Unit used in a high-voltage substation.

The ADC must have voltage inputs; the current will be converted by a sensor that is still to be defined.

The answer I need now is: is my method for determining the number of bits necessary for the ADC correct?

Regards

• Hi,

In general, the usual method of determining the required ADC resolution is to start from the smallest change in the input that you would like to detect.

For example, suppose you want a resolution of 1 uV over a full-scale range of 5 V (DC).

Using the same equation as above, the required number of bits to detect a 1 uV change is log2(5 V / 1 uV) = 22.25 bits.

This means that you need an ADC with at least 23 bits of noise-free resolution (DC) or ENOB (AC).

The noise-free resolution/ENOB is in turn limited by the ADC noise and other noise in the system. These should be checked in the ADC datasheet and in the rest of the signal chain to know whether the 22.25 bits of effective resolution can actually be achieved given the noise of the ADC or of the entire system.

For a DC input, noise free resolution can be computed as

noise-free bits = log2(FSR / pk-pk noise).

For an AC input, ENOB is computed from the measured Signal-to-Noise and Distortion Ratio (SINAD): subtract the 1.76 dB quantization-noise term and divide by 6.02 dB per bit (i.e., 20 log10(2)), giving ENOB = (SINAD - 1.76 dB) / 6.02.

There are also some cases where the ENOB is calculated based on the wideband dynamic range (DR):

ENOB = (DR - 1.76 dB) / 6.02.

From these equations you can determine whether the ADC meets your target resolution of 22.25 bits.
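As a quick numeric sketch of the two formulas above (the 10 uV peak-to-peak noise figure is an assumed example value, not a spec from this thread; the 102 dB figure is the SNR target derived earlier):

```python
import math

# DC case: noise-free resolution from full-scale range and peak-to-peak noise.
fsr = 5.0              # full-scale range in volts (example from above)
pkpk_noise = 10e-6     # assumed input-referred peak-to-peak noise, 10 uV
noise_free_bits = math.log2(fsr / pkpk_noise)

# AC case: ENOB from a measured SINAD (or wideband dynamic range) in dB.
sinad_db = 102.0       # the SNR target derived earlier in the thread
enob = (sinad_db - 1.76) / 6.02

print(f"noise-free bits = {noise_free_bits:.2f}")  # about 18.93
print(f"ENOB            = {enob:.2f}")             # about 16.65
```

In this example the 18.93 noise-free bits fall short of the 22.25-bit target, which is exactly the kind of check the datasheet comparison above is meant to catch.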

Thanks,

Jellenie

• Hi,

I don't know which smallest value (to be detected by the ADC) I need in order to obtain an accuracy of 0.1% of the RMS value.

This is why I made simulations to determine the number of bits necessary to obtain this accuracy.

Regards

• Hi,

The accuracy is more affected, or dominated, by the ADC gain and offset errors. It is different from the resolution, which depends mainly on noise.

I am not familiar with the quantization method that you mentioned above. Can you explain it further? As I mentioned, the resolution is usually defined by the noise, while the accuracy is set by the gain and offset errors, which are usually larger than the ADC noise.

Thanks,

Jellenie