ADAU1701
Hi,
I am developing a module based on the ADAU1701 - the circuitry is essentially a simplified version of the evaluation board. The intended sample rate is 96kHz using the internal ADCs and DACs.
However, I have found it impossible to get correct results using the formulae given in the datasheet for calculating the ADC input and current-setting resistors at sample rates higher than 48K. Another user reported the same issue around a year ago in the thread "1701 input noise", but no definitive explanation resulted and I am a little surprised that the subject has not come up again.
The datasheet states that the total value (including 2K of internal resistance) of both the input and current-setting (ADC_RES) resistors should scale inversely with sample rate. Using the formulae given results in the following external values (also worked through in the short sketch after this list):
@ 48K, ADC_RES resistor = 18K, input resistors = 7K (for unity overall gain)
@ 96K, ADC_RES resistor = 8K, input resistors = 2K5 (for unity overall gain)
@ 192K, ADC_RES resistor = 3K, input resistors = 250R (for unity overall gain)
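For clarity, this is the scaling I am applying (a quick Python sketch of my reading of the datasheet; the 20K and 9K totals at 48K are inferred from the external values above plus the 2K internal resistance, so treat the constants as my assumptions rather than datasheet quotes):

```python
# My reading of the datasheet scaling: the TOTAL resistance (external part
# plus 2 kOhm internal) scales inversely with sample rate, referenced to
# 20 kOhm ADC_RES and 9 kOhm input (unity gain) at 48 kHz. The reference
# totals are inferred from the external values quoted above.

R_INTERNAL = 2_000            # internal series resistance, ohms
ADC_RES_TOTAL_48K = 20_000    # 18k external + 2k internal at 48 kHz
INPUT_TOTAL_48K = 9_000       # 7k external + 2k internal at 48 kHz (unity gain)

def external_resistors(fs_hz):
    """Return (ADC_RES, input) external values in ohms for sample rate fs_hz."""
    scale = 48_000 / fs_hz
    adc_res = ADC_RES_TOTAL_48K * scale - R_INTERNAL
    r_in = INPUT_TOTAL_48K * scale - R_INTERNAL
    return adc_res, r_in

for fs in (48_000, 96_000, 192_000):
    adc_res, r_in = external_resistors(fs)
    print(f"{fs} Hz: ADC_RES = {adc_res/1000:g}k, input = {r_in/1000:g}k")
# 48000 Hz: ADC_RES = 18k, input = 7k
# 96000 Hz: ADC_RES = 8k, input = 2.5k
# 192000 Hz: ADC_RES = 3k, input = 0.25k
```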
Adopting those resistor values at 96K (and with the inputs linked directly to the outputs in SigmaStudio) results in a noisy, jittery output with large amounts of both waveform and zero-crossing distortion. Routing a tone generator directly to the DACs in SigmaStudio resulted in a clean output, thereby excusing them of blame and casting doubt on the ADC setup.
Replacing the fixed resistors with 20K potentiometers, I evaluated the performance with various combinations of resistance and sample rate. The 18K value for the ADC_RES resistor (as implied in the old thread) actually works fine for all sample rates - the variation in overall gain is within 0.2dB for sample rates between 48K and 192K and there seems to be no noticeable change in distortion performance.
As this resistance is reduced, the overall gain falls somewhat, indicating that the full-scale input current is increasing, but if the value is decreased below around 13K (at which point the gain has dropped by around 2dB), the distortion increases massively as previously observed. Again, this behaviour is largely independent of sample rate.
Leaving the ADC_RES resistor as 18K, I then adjusted the input resistor to give unity overall gain from input to output and obtained the (nicely convenient) value of 5K6. This indicates that the datasheet formula for the input resistors may also not be entirely accurate, and double-checking my '1701 evaluation board did indeed show that the gain there is also somewhat lower than predicted by the formula.
So I have ended up with a working system, but have had to ignore the datasheet in order to do so - not a situation which inspires confidence. My questions are:
- can others confirm the behaviour I have observed?
- have I misunderstood the datasheet, or is there genuinely a discrepancy between it and real-life behaviour?
- if there is, can anyone from AD provide any insight into what is actually happening and how to properly optimise ADC performance at all sample rates?
Thanks very much - I hope this at least helps anyone who is struggling with the same issue.
Steve L.
Hi Steve, and welcome to the forum.
Fortunately, I think I know the exact reason why this is happening! Unfortunately, I think it points out a lack of clarity in our datasheet.
As explained in the datasheet, the only time the series resistors on the ADC inputs need to be changed "is if a sampling rate other than 48 kHz is used." That being said, the definition of "sample rate" used here is somewhat ambiguous and misleading.
In some types of DSP (or codecs), in order to increase or decrease the sample rate, you simply scale the clock signals accordingly. For example, if you're feeding a 12.288 MHz master clock into the system for a 48 kHz sample rate, then you can simply scale that down by 8.125% to 11.2896 MHz for a 44.1 kHz sample rate, or double it to 24.576 MHz for a 96 kHz sample rate, etc... In many kinds of ICs, you are free to do this (as long as you stay within the boundaries of allowable clock frequencies) and all of the internally generated clocks will simply scale accordingly.
The ADAU1701 is a bit different. The PLL takes in a master clock and divides it down before multiplying it up to a much higher frequency. For a 48 kHz sample rate, the DSP core clock is 1024 * 48000 = 49.152 MHz. Now, when you want to use a 96 kHz sample rate, logically you might assume that the core clock doubles to 98.304 MHz. However, this is not the case for the ADAU1701. Its DSP core clock can only run up to somewhere in the 50 MHz range before things start to break down and function incorrectly. Scaling down for a 44.1 kHz sample rate is not a problem (the DSP core clock can easily run at 45.1584 MHz).
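If it helps to see the numbers, here is that arithmetic written out (a quick sketch; the 1024 x fS relationship and the roughly 50 MHz ceiling come from the description above, and the ceiling is an approximate figure rather than a hard spec):

```python
# Core clock arithmetic for the ADAU1701 as described above: the PLL runs
# the DSP core at 1024 * fs, and the core tops out somewhere around 50 MHz
# (approximate figure, not an exact spec).
CORE_CLOCK_MULTIPLIER = 1024
CORE_CLOCK_LIMIT_HZ = 50e6   # rough ceiling

for fs in (44_100, 48_000, 96_000):
    f_core = CORE_CLOCK_MULTIPLIER * fs
    ok = "OK" if f_core <= CORE_CLOCK_LIMIT_HZ else "too fast - not possible"
    print(f"fs = {fs/1000:g} kHz -> core clock {f_core/1e6:.4f} MHz ({ok})")
# fs = 44.1 kHz -> core clock 45.1584 MHz (OK)
# fs = 48 kHz -> core clock 49.1520 MHz (OK)
# fs = 96 kHz -> core clock 98.3040 MHz (too fast - not possible)
```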
So, an alternate method is used for higher sample rates. For 96 kHz (which we refer to as "double rate" or "dual rate" in some other SigmaDSP datasheets), the DSP core simply grabs a sample twice as often as it did at 48 kHz. This also makes the interpolation and decimation filters for the converters run twice as fast, and has the side effect that the DSP core can only execute half as many instructions per sample as it could at the "normal rate" of 48 kHz.
Quad rate is possible as well, using the same method. The DSP core quadruples the rate at which it grabs samples from the converters and serial ports, and the interpolation and decimation filters do the same. Instead of scaling the frequency of all of the clocks in the system up or down, you're simply doubling or quadrupling the clock frequencies for certain subsystems.
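To put rough numbers on that per-sample budget (a quick sketch of the arithmetic implied above; treating one instruction per core clock cycle as a simplifying assumption on my part):

```python
# The core clock stays at 1024 * 48 kHz = 49.152 MHz in all modes, but in
# dual/quad rate the core grabs samples 2x/4x as often, so the per-sample
# program budget shrinks accordingly.
F_CORE_HZ = 1024 * 48_000           # 49.152 MHz, unchanged in all modes

for mode, fs in (("normal", 48_000), ("dual", 96_000), ("quad", 192_000)):
    budget = F_CORE_HZ // fs
    print(f"{mode} rate ({fs/1000:g} kHz): ~{budget} instructions per sample")
# normal rate (48 kHz): ~1024 instructions per sample
# dual rate (96 kHz): ~512 instructions per sample
# quad rate (192 kHz): ~256 instructions per sample
```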
That being said, the explanation regarding the ADC resistors refers more to the case I originally described above, where you might be running your system at 44.1 kHz or maybe at some strange non-standard rate like 49 kHz or 42 kHz. In that case, you're literally scaling the frequency of the MCLK, so the ADC resistor values need to scale as well.
Conversely, if you are running at 96 kHz, the MCLK frequency itself remains unchanged, and you're simply doubling the rate of the DSP core and converter filters. In that case, you only need to change a register setting, and no hardware changes are required.
In summary, the ADC_RES values for 48 kHz and 96 kHz should be the same, because the core clock frequencies are the same in both cases.
In retrospect, the datasheet does a poor job of explaining this. I will make this into a FAQ on EngineerZone in order to help clear up any confusion on the subject.
Let me know if you have questions!
Brett
Hi Brett,
Thank you very much for your answer, which makes perfect sense.
So where the datasheet refers to the sample rate in the ADC calculations, it in fact means the "effective sample rate" (summarised in the small sketch after this list), where:
effective sample rate = actual sample rate (for normal rate systems) or
effective sample rate = actual sample rate/2 (for double-rate systems) or
effective sample rate = actual sample rate/4 (for quad-rate systems)
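Or, written out as a quick sketch (my own summary of the above, using the normal/dual/quad mode names from your explanation):

```python
# Effective sample rate as I now understand it: the rate that should be
# plugged into the datasheet's ADC resistor calculations. Dual-rate 96 kHz
# and quad-rate 192 kHz both come back to 48 kHz, so they keep the 48 kHz
# resistor values.
RATE_DIVIDERS = {"normal": 1, "dual": 2, "quad": 4}

def effective_sample_rate(actual_fs_hz, mode):
    """Return the sample rate to use in the ADC resistor formulae."""
    return actual_fs_hz / RATE_DIVIDERS[mode]

print(effective_sample_rate(48_000, "normal"))   # 48000.0
print(effective_sample_rate(96_000, "dual"))     # 48000.0
print(effective_sample_rate(192_000, "quad"))    # 48000.0
```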
I feel more comfortable about what I have done now. With the hardware working, tomorrow I can get on with testing the code...this is my first project using the SigmaDSPs and I am excited by the possibilities that they open up.
I hope that making this a FAQ will save others some head-scratching!
Thanks again and kind regards,
Steve L.
Steve, that's exactly right! In fact, some of the newer SigmaDSP datasheets (like the ADAU144x) use that same kind of terminology. In the Master Clock and PLL section of the ADAU144x datasheet, the concepts of fCORE, fS,NORMAL, fS,DUAL, and fS,QUAD are described in detail.
If you have any questions, check out pages 20 and 21 of Rev. C: http://www.analog.com/static/imported-files/data_sheets/ADAU1442_1445_1446.pdf
This is a similar architecture to that of the ADAU1701.