
12 bits or 16 bits? Why N matters, and the 'sensitivity' of ADCs

Question asked by hoo on Dec 5, 2016
Latest reply on Dec 11, 2016 by hoo

For an ideal high-speed ADC, SNR = 6.02*N + 1.76 dB. A real ADC's SNR is much lower than this, and as the input frequency increases the SNR becomes dominated by sampling-clock jitter, i.e. SNR = 20*log10(1/(2*pi*fin*T_jitter)).
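To make those two limits concrete, here is a quick Python sketch of the formulas above (the 100 fs rms jitter value is only an example I picked, not a datasheet number):

```python
# Quantization-limited vs. jitter-limited SNR, using the textbook formulas.
import math

def snr_ideal_db(n_bits: int) -> float:
    """Quantization-limited SNR of an ideal N-bit ADC with a full-scale sine input."""
    return 6.02 * n_bits + 1.76

def snr_jitter_db(f_in_hz: float, t_jitter_s: float) -> float:
    """Jitter-limited SNR for a full-scale sine at f_in with rms clock jitter t_jitter."""
    return 20 * math.log10(1.0 / (2 * math.pi * f_in_hz * t_jitter_s))

print(f"Ideal 16-bit SNR: {snr_ideal_db(16):.1f} dB")   # ~98.1 dB
print(f"Ideal 12-bit SNR: {snr_ideal_db(12):.1f} dB")   # ~74.0 dB
# 470 MHz input with 100 fs rms jitter (example value) -> ~70.6 dB
print(f"Jitter-limited SNR @ 470 MHz, 100 fs rms: {snr_jitter_db(470e6, 100e-15):.1f} dB")
```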

Below is the AC performance of two ADC chips:

CHIP_A (16 bits)

AC performance, sample rate = 1 Gsps:

fIN = 470 MHz, Ain = -1 dBFS, SNR = 66.5 dBFS, NSD = -153.5 dBFS/Hz

CHIP_B (12 bits)

AC performance, sample rate = 1 Gsps:

fIN = 600 MHz, Ain = -1 dBFS, SNR = 58.2 dBFS, NSD should be -153.5 + (66.5 - 58.2) = -145.2 dBFS/Hz
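To double-check my NSD arithmetic, here is a small sketch assuming the usual relation NSD ≈ -SNR - 10*log10(fs/2), i.e. the noise power implied by the SNR spread evenly across the Nyquist band:

```python
# NSD implied by an SNR quoted in dBFS at a given sample rate.
import math

def nsd_dbfs_per_hz(snr_dbfs: float, fs_hz: float) -> float:
    """Noise spectral density from SNR, assuming noise is flat over fs/2."""
    return -snr_dbfs - 10 * math.log10(fs_hz / 2)

fs = 1e9  # 1 Gsps for both parts
print(f"CHIP_A NSD: {nsd_dbfs_per_hz(66.5, fs):.1f} dBFS/Hz")  # ~ -153.5
print(f"CHIP_B NSD: {nsd_dbfs_per_hz(58.2, fs):.1f} dBFS/Hz")  # ~ -145.2
```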

1. CHIP_A is 16 bits and CHIP_B is 12 bits, but the SNR difference is only about 8.3 dB. The 16-bit ADC does not seem to have a significant performance advantage, even though it is much more expensive. So why does N matter?

2. If the input signal is 6.02*16 = 96.3 dB below full scale but above the NSD, e.g. a 100 MHz input at -110 dBFS (or even lower, say -120 dBFS, if the FFT gain is high enough), can the ADC 'receive' this input? In other words, what is the ADC's input 'sensitivity'?
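To put numbers on the FFT-gain part of the question, here is a rough sketch assuming the noise in a single bin of an M-point FFT is NSD + 10*log10(fs/M) (window effects ignored; the 2^20 FFT length is just a value I chose):

```python
# Per-bin FFT noise floor vs. a low-level tone, using CHIP_A's quoted NSD.
import math

fs = 1e9          # 1 Gsps
nsd = -153.5      # dBFS/Hz (CHIP_A figure above)
m = 2**20         # FFT length (example choice)

noise_per_bin_dbfs = nsd + 10 * math.log10(fs / m)
print(f"Noise per FFT bin: {noise_per_bin_dbfs:.1f} dBFS")   # ~ -123.7 dBFS

for tone_dbfs in (-110.0, -120.0):
    margin = tone_dbfs - noise_per_bin_dbfs
    print(f"{tone_dbfs:.0f} dBFS tone is {margin:+.1f} dB relative to the bin noise")
```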

My understanding of Q2: since a signal whose amplitude is below 1 LSB (20*log10(1/2^16) = -96.3 dBFS) does not toggle the ADC's output code, the ADC cannot receive such a low-level input. Is that right? I am also not sure what the ADC's input 'sensitivity' should be related to: the real SNR performance, or just N? It can't be the NSD alone, since the ADC is not an amplifier, can it?
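To illustrate my own reasoning (and where it might be wrong), here is a minimal simulation sketch, assuming an ideal 16-bit mid-tread quantizer plus Gaussian noise scaled to CHIP_A's quoted 66.5 dBFS SNR; everything here is illustrative, not from a datasheet:

```python
# Does a -110 dBFS tone survive quantization? Compare a noiseless ideal
# quantizer against one preceded by the ADC's own (much larger) noise.
import numpy as np

n_bits = 16
fs = 1e9                     # 1 Gsps
m = 2**20                    # FFT length
fs_amp = 1.0                 # normalized full-scale sine amplitude
lsb = 2 * fs_amp / 2**n_bits

# -110 dBFS tone placed exactly on FFT bin k (coherent sampling, no window)
tone_dbfs = -110.0
k = 104857                   # bin index -> f_in ~ 100 MHz
t = np.arange(m) / fs
tone = fs_amp * 10**(tone_dbfs / 20) * np.sin(2 * np.pi * (k * fs / m) * t)

# rms noise implied by SNR = 66.5 dBFS (about 11 LSB rms, i.e. noise >> 1 LSB)
noise_rms = (fs_amp / np.sqrt(2)) * 10**(-66.5 / 20)

def quantize(x):
    """Ideal mid-tread quantizer with 1-LSB steps, clipped to full scale."""
    return np.clip(np.round(x / lsb) * lsb, -fs_amp, fs_amp)

for label, x in (("no noise", tone),
                 ("with ADC noise", tone + np.random.randn(m) * noise_rms)):
    y = quantize(x)
    spec = np.fft.rfft(y) / (m / 2)                    # amplitude-scaled spectrum
    bin_dbfs = 20 * np.log10(np.abs(spec[k]) + 1e-30)  # -600 dB means "nothing there"
    print(f"{label:>15}: output codes {y.min()/lsb:+.0f}..{y.max()/lsb:+.0f}, "
          f"tone bin = {bin_dbfs:.1f} dBFS")
```

If this sketch is right, the noiseless quantizer outputs only code 0, but with the ADC's own noise present the -110 dBFS tone still shows up in a long FFT, which would mean the 'sensitivity' is set by the noise in the measurement bandwidth rather than by the LSB size alone. Please correct me if that reasoning is off.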
