
CN0288 Ripple voltage question

Category: Hardware
Product Number: AD598

Based on Figure 2 of CN0288, the document reports that the main output noise source of the AD598 is the output ripple voltage of 0.4 mVrms. This leads to 10.9 bits of Noise-Free Code Resolution.

In the test results section, the document reports that the measured voltage ripple is 6.6 mV, reduced by the low-pass filter to 2 mVpp, which leads to 11 bits of Noise-Free Code Resolution.

My question is: how is 2 mVpp better than 0.4 mVrms?

If we back-calculate from the Noise-Free Code Resolution to the RMS noise, we have:

Effective Resolution = 11 + 2.7 = 13.7

Total RMS Counts = 2^(13.7) = 13308

Noise = 5V/13308 = 0.375mVrms of noise
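
A quick Python sketch of the same back-calculation, just restating the numbers above (the 5 V full scale and the 2.7-bit offset come from CN0288):

```python
# Back-calculate the implied RMS noise from the noise-free code resolution (CN0288 numbers).
FSR = 5.0                                  # full-scale range, volts
nfcr_bits = 11.0                           # noise-free code resolution after the filter
effective_bits = nfcr_bits + 2.7           # peak-to-peak -> RMS offset used in CN0288
total_rms_counts = 2 ** effective_bits     # ~13308
rms_noise = FSR / total_rms_counts         # ~0.375 mVrms

print(f"Total RMS counts: {total_rms_counts:.0f}")
print(f"Implied RMS noise: {rms_noise * 1e3:.3f} mV")
```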

...so how is 2 mVpp of ripple noise equal to 0.375 mVrms?

  • Hi BestUser,

    Not sure how you're adding 2.7 bits to 11 bits - the AD7992 is a 12-bit converter, so the resolution cannot be greater than 12-bits. It is true that resolution can be improved by averaging, but only if the noise is "well behaved", ideally Gaussian, which is not the case. You can see this in CN0288 Figure 4 - only two ADC codes are represented, so you would only see two bins if you plotted as a histogram.

    RMS noise is related to the peak-to-peak noise by the crest factor. And the crest factor depends on how many readings are observed - refer to https://www.analog.com/media/en/technical-documentation/application-notes/an96fa.pdf, appendix B.

    In the case of the CN0288, this would imply a crest factor of 2 / 0.375 = 5.3, which is a practical number in many cases. Often crest factors of 6 or 6.6 are used, where 6.6 corresponds to a 99.9% probability.

    But there is a distinct difference between the AN96 example and CN0288 - the CN0288 is limited by quantization noise, which is NOT Gaussian, so these textbook relationships fall short.

    Did this start to answer your question? Did you have a particular performance requirement for an LVDT application?

    -Mark

  • Hi Mark, thanks for your answer.

    My question was related to the fact that by inserting the 3 kΩ / 0.01 µF low-pass filter, the ripple is reduced to 2 mVpp, and this produces 11 bits of noise-free code resolution.

    Initially the Noise-free Code Resolution was 10.9 bits, calculated from 0.4 mVrms of noise.

    So my question was: why does 2 mVpp of ripple (which produces 11 bits of Noise-free Code Resolution) come out better than 0.4 mVrms (which produces 10.9 bits of Noise-free Code Resolution)?

    I tried to invert the formulas, starting from the last one:

    Effective resolution = Noise-free Code Resolution + 2.7bits = 11 + 2.7 = 13.7bits

    Total RMS Counts = 2^(Effective resolution) = 2^13.7 = 13308

    RMS Noise = FSR / Total RMS Counts = 5 V / 13308 = 0.375 mVrms

    So I was trying to understand how to derive 0.375 mVrms from 2 mVpp.

    Yes, we are developing a circuit that uses an LVDT.

    The application circuit is like the one in the AD598 datasheet with a single supply. The output span is 0.5 V to 4.5 V with a 2.5 V offset generated by a voltage reference. The op amp is the ADA4099-1 (we have a low-temperature requirement of -55°C).

    The sensitivity is calculated to be 19.52 mV/V/mm (the span is ±20 mm) with a 24 V supply.

    The resolution goal is 0.015 mm, which produces an output change of 1.55 mV. I chose the 16-bit ADC AD7980.
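
    As a quick sanity check of the ADC choice (assuming a 5 V reference for the AD7980, which is an assumption for this estimate and not a final design value), the margin between that 1.55 mV step and the ADC LSB would be:

    ```python
    # Rough resolution-margin check for the LVDT signal chain.
    VREF = 5.0                     # assumed AD7980 reference voltage, volts
    ADC_BITS = 16                  # AD7980 is a 16-bit SAR ADC
    lsb = VREF / 2 ** ADC_BITS     # ~76 uV per code

    signal_step = 1.55e-3          # output change for the 0.015 mm resolution goal, volts
    codes_per_step = signal_step / lsb

    print(f"LSB size: {lsb * 1e6:.1f} uV")
    print(f"Codes per 0.015 mm step: {codes_per_step:.1f}")   # ~20 codes before any noise
    ```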

    To verify that we have enough noise-free code resolution, I tried to calculate the total noise of the system using the following model:

    I can calculate all the noise contributions, but the noise contribution of the AD598 is not easy to estimate since the output voltage ripple is filtered by the differential amplifier.
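
    To get at least a first-order estimate, I was thinking of treating the filtering that follows the AD598 output as a single pole and scaling the ripple by its attenuation. A minimal sketch, where the ripple frequency is a placeholder assumption and the corner frequency is just the CN0288 3 kΩ / 0.01 µF filter used as an example:

    ```python
    import math

    # First-order (single-pole) attenuation estimate for the AD598 output ripple.
    # The ripple frequency below is a placeholder assumption, not a measured value.
    ripple_in_pp = 6.6e-3          # unfiltered ripple, volts peak-to-peak (CN0288 measured value)
    f_ripple = 20e3                # assumed dominant ripple frequency, Hz (to be replaced)
    f_corner = 1 / (2 * math.pi * 3e3 * 0.01e-6)   # ~5.3 kHz for a 3 kOhm / 0.01 uF filter

    attenuation = 1 / math.sqrt(1 + (f_ripple / f_corner) ** 2)
    ripple_out_pp = ripple_in_pp * attenuation
    print(f"Corner frequency: {f_corner / 1e3:.1f} kHz")
    print(f"Estimated filtered ripple: {ripple_out_pp * 1e3:.2f} mVpp")
    ```

    The real ripple frequency and the actual filtering of my differential amplifier would have to go in there before the number means anything.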

    So, are my circuit and assumptions right?

    Thanks for the support

  • Okay I'm starting to see where the confusion is arising from. Firstly, it's better to NOT think in terms of "counts" or "effective number of bits". Rather, think in terms of RMS voltage noise, which can more easily be translated to what you care about - noise in terms of your input, which for an LVDT is distance RMS (meters RMS, millimeters RMS, etc.)

    This assertion may be correct, at least when no ADC averaging is applied, but you've got a signal with 12500 count resolution feeding an ADC with 4096 counts, so the ADC will be the limiting factor:

    The maximum number of rms counts that can be resolved can now be calculated by dividing the full-scale output by the total system rms noise.

    Total RMS Counts = 5 V / 0.4 mV = 12,500

    And subtracting 2.7 bits means that the author assumed a crest factor of 2^2.7 ≈ 6.5, so about 1000 readings in the observation period.
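
    To illustrate that relationship, here is a rough Monte Carlo sketch (ideal Gaussian noise, nothing specific to the AD598 or CN0288) of how the peak-to-peak to RMS ratio grows with the number of readings:

    ```python
    import numpy as np

    # Expected peak-to-peak / RMS ratio of N Gaussian samples, cf. AN96 appendix B.
    rng = np.random.default_rng(0)
    for n in (100, 1000, 10000):
        trials = rng.standard_normal((200, n))    # 200 records of n samples each
        pp_over_rms = (trials.max(axis=1) - trials.min(axis=1)) / trials.std(axis=1)
        print(f"N = {n:5d}: average peak-to-peak / RMS = {pp_over_rms.mean():.1f}")
    # Around N = 1000 the ratio lands near 6.5, i.e. the 2^2.7 factor used in CN0288.
    ```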

    To the question of why the 11-bit measured resolution is better than the 10.9-bit calculated resolution, that looks like a typical measurement error. Almost no noise measurement is going to be that close to its calculated value.

    Have you built your circuit using the AD7980? A 16-bit converter will allow you to observe the noise characteristics of the analog signal with much greater fidelity - you SHOULD be able to take a histogram of readings from the ADC, convert to voltage, take the RMS value, and if everything works out, get a number close to 0.4 mV RMS.
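
    A minimal sketch of that post-processing, assuming you already have a block of raw AD7980 codes captured with a DC input and a 5 V reference (both assumptions on my part - the capture itself depends on your setup):

    ```python
    import numpy as np

    VREF = 5.0        # assumed AD7980 reference, also taken as the full-scale range
    ADC_BITS = 16     # AD7980 resolution

    def noise_stats(codes):
        """Convert raw ADC codes (DC input) to RMS noise, p-p noise, and noise-free bits."""
        volts = np.asarray(codes) * VREF / 2 ** ADC_BITS
        rms = volts.std()                       # RMS noise (std removes the DC value)
        pp = volts.max() - volts.min()          # peak-to-peak noise over the record
        noise_free_bits = np.log2(VREF / pp) if pp > 0 else ADC_BITS
        return rms, pp, noise_free_bits

    # Synthetic data just to exercise the function: codes spread ~2 mVpp around mid-scale.
    codes = 32768 + np.random.default_rng(1).integers(-13, 14, size=4096)
    rms, pp, nfb = noise_stats(codes)
    print(f"RMS = {rms * 1e3:.3f} mV, p-p = {pp * 1e3:.2f} mV, noise-free bits = {nfb:.1f}")
    ```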

    Again, part of the difficulty with noise calculations on the CN0288 is that it's not really clear what the dominant noise source is - the ADC (quantization noise) or the signal chain (thermal noise). Noise calculations are much more straightforward when the dominant noise source is thermal, so any data you take looks like nice, smooth, Gaussian histograms.

    Hopefully this helps.

    -Mark