Can someone explain why a DDS, like the AD9859 for example, can have 32 bits of frequency resolution, 14 bits of phase resolution, but a DAC with only 10 bits of resolution?

DDS frequency granularity depends only on the number of bits in the DDS's phase accumulator, as the DDS output frequency equation shows: Fout = Fs*FTW/(2^N), where Fout is the DDS output frequency, Fs is the DDS system clock frequency (or sample rate), N is the number of bits in the phase accumulator, and FTW is the N-bit digital frequency tuning word. So the AD9859, which has a 32-bit accumulator (N=32), has 32 bits of frequency resolution. That is, its smallest frequency step is Fs/(2^32), or Fs/4,294,967,296.
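As a quick numerical sketch of that equation (using Fs = 400 MHz, the AD9859's maximum sample rate, purely for illustration):

```python
N = 32                      # phase accumulator width in bits
FS = 400_000_000            # system clock (sample rate) in Hz

# Smallest frequency step: one LSB of the frequency tuning word
step_hz = FS / 2**N
print(f"frequency resolution: {step_hz:.6f} Hz")   # about 0.093 Hz

def ftw_for(fout_hz: float) -> int:
    """Nearest N-bit frequency tuning word for a desired output frequency."""
    return round(fout_hz * 2**N / FS) % 2**N

ftw = ftw_for(10_700_000)                          # e.g. a 10.7 MHz output
actual = FS * ftw / 2**N                           # frequency actually produced
print(f"FTW = 0x{ftw:08X}, actual Fout = {actual:.3f} Hz")
```

Note that any achievable output frequency lands within half a step (about 46 mHz here) of the requested one, which is why a 32-bit accumulator gives such fine tuning even at a 400 MHz clock.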

By design, a DDS with a D-bit DAC requires an angle-to-amplitude converter with a minimum of D+3 bits of phase resolution. This is necessary to guarantee that the smallest phase step (1 LSB of phase) results in a DAC step of no more than 1/2 LSB of amplitude. That is, there is no chance that a minimum phase step will skip over a DAC code. So a 12-bit DAC, for example, requires that the DDS's angle-to-amplitude converter have at least 15 bits of phase granularity.
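The D+3 rule is easy to check numerically. The sketch below (illustrative only) evaluates the worst-case amplitude change per phase LSB at the sine wave's zero crossing, where its slope is steepest:

```python
import math

def max_step_lsb(dac_bits: int, phase_bits: int) -> float:
    """Largest amplitude change, in DAC LSBs, produced by one LSB of phase.
    The steepest part of a sine wave is its zero crossing, so evaluate there."""
    lsbs_per_unit = 2**(dac_bits - 1)      # DAC codes per unit of amplitude
    dphi = 2 * math.pi / 2**phase_bits     # one phase LSB, in radians
    return lsbs_per_unit * math.sin(dphi)

# D + 3 phase bits keep the worst-case step under 1/2 LSB; D + 2 do not.
print(max_step_lsb(10, 13))   # ~0.393 LSB, below the 1/2 LSB limit
print(max_step_lsb(10, 12))   # ~0.785 LSB, above it
```

In general the worst-case step with D+3 phase bits works out to roughly pi/8 of an LSB (about 0.39), regardless of D, which is why the rule holds for any DAC width.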

Therefore, in the case of the AD9859, the 10-bit DAC requires only 13 bits of phase resolution (its 14 bits comfortably meet this), far less than the 32 bits of phase resolution provided by the DDS's phase accumulator. This means that the angle-to-amplitude converter only needs the 14 most significant bits of the phase accumulator in order to generate a 10-bit accurate amplitude value for the DAC. That is, the 32-bit output of the phase accumulator can be truncated to 14 bits because of the limitation imposed by the DAC resolution, which is why the frequency resolution (32 bits) can so far exceed the phase and amplitude resolutions (14 and 10 bits).