AD9954 has worse stability than AD9854

Hello,

Recently I evaluated AD9954 and AD9854.

Under the same conditions, I used a DFT to test the stability of their output signals. I consistently found that the stability of the AD9954 output only reaches 0.07%~0.04%, while the stability of the AD9854 output normally achieves 0.013%~0.004%. Reading the datasheet of each chip, I did not expect the stability of the AD9954 to be so obviously worse than that of the AD9854.

Sincerely hoping someone can provide ideas.

Thanks.

  •  Analog Employees
    on Jun 11, 2021 1:39 PM

    Please clarify what you mean by "stability".

    Technically, stability refers to maintaining a constant frequency over time. In the case of a DDS, the stability of the output signal derives solely from the stability of the device providing the system clock signal to the DDS. If the clock source is a general purpose quartz crystal resonator, then stability should be a few tens of parts-per-million (ppm). An OCXO source can provide stability in the fractions of a ppm range. Rubidium or cesium sources can offer stability in the range of fractions of a ppb (parts-per-billion).

    The main difference between the AD9854 and AD9954 is the frequency resolution of their respective DDS cores. The AD9854 has a 48-bit core, while the AD9954 has a 32-bit core. This means the AD9854 can tune much closer to a desired frequency than the AD9954. For example, suppose both devices use a system clock source of exactly 100 MHz and both devices are programmed to output exactly 10 MHz. The actual output frequency is then:

    • AD9854: 10.000000000000142108547152020037 MHz
    • AD9954: 10.00000000931322574615478515625 MHz

    That is, a tuning error of:

    • AD9854: 0.14 micro-Hz
    • AD9954: 9.3 milli-Hz
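    To make the arithmetic concrete, here is a short Python sketch (not ADI code; the function name is mine) that rounds the ideal tuning word to the nearest integer FTW and converts it back to the actual output frequency. With a 100 MHz clock and a 10 MHz target it reproduces the ~0.14 micro-Hz and ~9.3 milli-Hz errors quoted above.

```python
from fractions import Fraction

def dds_actual_freq(f_target_hz, f_sysclk_hz, n_bits):
    """Round the ideal tuning word to the nearest integer FTW, then
    convert the FTW back to the exact output frequency it produces."""
    ftw = round(Fraction(f_target_hz, 1) / f_sysclk_hz * 2**n_bits)
    return ftw, Fraction(ftw * f_sysclk_hz, 2**n_bits)

# AD9854: 48-bit core; AD9954: 32-bit core; 100 MHz clock, 10 MHz target
for name, n in (("AD9854", 48), ("AD9954", 32)):
    ftw, f_actual = dds_actual_freq(10_000_000, 100_000_000, n)
    err_hz = float(f_actual - 10_000_000)
    print(f"{name}: FTW = {ftw}, tuning error = {err_hz:.3e} Hz")
```

    The `Fraction` type keeps the frequency arithmetic exact, so the only error shown is the genuine FTW quantization error.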

    Regarding your "DFT" measurements...

    A DFT is not suitable as a stability measurement tool. It is merely a mathematical tool that converts a sampled time-domain sequence to a sampled frequency domain sequence (by "sampled" frequency, I mean Fourier frequencies (i.e., bin frequencies)). That is, a DFT does not provide explicit indication of frequency variation over time. Rather, it is a snapshot of a signal's frequency content (spectrum).
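    To illustrate the point, here is a minimal pure-Python sketch of a single DFT bin: it is just a correlation of the entire record against one bin frequency, so any variation of the signal within the record is folded into a single number rather than tracked over time.

```python
import cmath, math

def dft_bin(x, k):
    """One normalized DFT bin: correlate the whole record with a
    complex exponential at bin frequency k/N. The result is a single
    snapshot value with no information about variation over time."""
    N = len(x)
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
               for n in range(N)) / N

N = 256
x = [math.sin(2 * math.pi * 8 * n / N) for n in range(N)]  # tone at bin 8
amp = 2 * abs(dft_bin(x, 8))  # factor 2: energy split between +k and -k bins
print(amp)  # amplitude estimate of the tone, ~1.0
```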

    Accurate and reliable stability measurement requires a fairly sophisticated test bench and suitable analysis software.

  • Thanks for your reply.
      
    In my application, I use the DDS to provide a stimulus signal to a device under test (such as a resistor) and measure the voltage across the device with an ADC. I then extract the frequency and amplitude information through a DFT, from which I calculate some other parameters.
      
    To be exact, my question is this: when I set both the AD9954 and AD9854 under the same conditions to output a certain sine signal, I collected hundreds of DFT measurement results from each DDS to examine the relative amplitude error. I found that the relative amplitude error of the AD9954 stays at 0.07%~0.04%, while it stays at 0.013%~0.004% for the AD9854. I also drew histograms to confirm that the measurement results follow a normal distribution.
      
    I also tried connecting the outputs of the AD9954 and AD9854 to the Signal-Input and Ref-Input ports of a lock-in amplifier and observed how the amplitude measurement changed over a period of time. The result shows that the AD9954 output has a larger relative amplitude change than the AD9854.
      
    So I wonder whether the AD9854 really does output a more stable signal (in amplitude, probably) than the AD9954. But from the datasheets I see the AD9954 has a 14-bit DAC while the AD9854 has only a 12-bit DAC. Could the finer 48-bit DDS core of the AD9854 be what makes its measurement results better?
      
    Looking forward to your reply.

  •  Analog Employees
    on Jun 14, 2021 1:13 PM in reply to flyinglight

    Because the AD9854 and AD9954 have 12b and 14b amplitude resolution, respectively, the difference in your measurements between the two is reasonable. However, to be fair, the resolution (and linearity) of the ADC also plays into the total measurement error. The AD9854 and AD9954 have differing DAC designs, which means that resolution, harmonic distortion and other non-linearities associated with DACs (and ADCs, for that matter) also play a role in signal quality.

    Again, to be clear, the term "stable" or "stability" typically applies to frequency variation over time. "Distortion" is probably more at play here than stability.

  • Thanks for your reply.

    I read the datasheets of the AD9854 and AD9954, and it seems the SFDR of the AD9854's DAC output is not as good as that of the AD9954. Yet under the same measurement conditions, I found the amplitude distortion result of the AD9854 is better. Could this be the influence of the wider FTW of the AD9854?

    Another question I'd like to ask: the AD9954 datasheet mentions that "Truncation of the LSBs is implemented to reduce the power consumption of the DDS core. This truncation does not reduce frequency resolution." Could this truncation lead to the amplitude distortion of the AD9954?


  •  Analog Employees
    on Jun 16, 2021 1:47 PM in reply to flyinglight

    It could very well be that the finer tuning resolution of the AD9854 over the AD9954 makes your amplitude measurements appear slightly better.

    Both devices rely on phase truncation to optimize the DDS core. The number of bits truncated is directly related to the DAC resolution, as follows.

    To optimize the DDS for power consumption, it is common to use only a portion of the phase accumulator output for phase-to-amplitude conversion. Consider a DDS that has a phase accumulator with N bits of frequency tuning resolution and a DAC with D bits of amplitude resolution. To optimize, we can use a certain number (P) of the phase accumulator's most significant bits for phase-to-amplitude conversion. That is, we truncate N-P LSBs of the accumulator output for phase-to-amplitude conversion. Generally, we choose P to be in the range of D+3 to D+5. The AD9854 uses N=48, D=12 and P=D+4=16, while the AD9954 uses N=32, D=14 and P=D+5=19. Hence, the number of truncated bits (N-P) is 32 for the AD9854 and 13 for the AD9954.
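    As a hypothetical model (the bit widths N, P, and D are taken from the discussion above; everything else is illustrative, not the actual silicon), the truncation step can be sketched in a few lines of Python:

```python
import math

def dds_sample(ftw, n, p, d, t):
    """Model one DDS output sample at accumulator step t: an n-bit
    phase accumulator, of which only the p MSBs feed the
    phase-to-amplitude converter, driving a signed d-bit DAC."""
    acc = (ftw * t) % (1 << n)        # n-bit accumulator wraps modulo 2^n
    phase = acc >> (n - p)            # truncate the n-p LSBs
    amp = math.sin(2 * math.pi * phase / (1 << p))
    return round(amp * ((1 << (d - 1)) - 1))  # quantize to a d-bit code

# AD9854: n=48, p=16, d=12; AD9954: n=32, p=19, d=14
print(dds_sample(1 << 46, 48, 16, 12, 1))  # quarter-cycle phase -> 2047
```

    Whenever any of the truncated n-p LSBs are nonzero, the phase seen by the sine lookup lags the true accumulator phase, which is the phase-modulation effect discussed below.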

    Keep in mind that the overall performance of the DDS is only partially dependent on the digital design of the DDS core (N, P, and D). Also at play is the performance of the DAC itself (harmonic distortion, linearity, etc.), as well as the quality of the system clock source (spurious/noise content).

  • Thanks for your reply.

    I'm sorry, there is still something I find confusing that I would like to ask you about. The picture below shows my measurement system.

    The DDS (AD9854 or AD9954) uses a temperature-compensated crystal oscillator as its clock source and outputs a 0.5 Vrms signal (the two are kept at the same amplitude). The outputs have been observed on an oscilloscope and look very good; that is, the SFDR and other characteristics are at the same level. The output of the AD9854 or AD9954 is then fed to a lock-in amplifier (an SR830 or OE1022; the DDS and the instruments are warmed up before measurement), one output serving as the reference signal and one as the signal to be measured at the input terminal.

    The lock-in amplifier tracks the externally supplied reference signal, so algorithms such as the DFT can be used to extract the amplitude of the measured signal and its phase relative to the reference. Because the lock-in amplifier self-tracks the reference frequency, the higher frequency resolution of the AD9854 should not matter here. For example, suppose both devices are programmed to output 1 kHz: the closest real frequency the AD9954 can produce might be 1000.1 Hz, while the AD9854 produces 1000.00001 Hz. The lock-in amplifier will self-track to the 1000.1 Hz point (or, if you like, 1000.10000 Hz) to measure the AD9954's output amplitude, and to 1000.00001 Hz to measure the AD9854's.

    Repeating the measurement many times (also varying the frequency points and amplitudes), the amplitude fluctuation of the AD9854 is about 0.005% (that is, when the exact signal amplitude should be 0.5 Vrms, the measured amplitude ranges from 0.4999975 Vrms to 0.5000025 Vrms), while the amplitude fluctuation of the AD9954 is 0.05%. The AD9854's fluctuation is much smaller, but according to the explanation above, this amplitude fluctuation should have nothing to do with frequency resolution.

    What's more, as mentioned above, "Also at play is the performance of the DAC itself (harmonic distortion, linearity, etc.)". But during the measurement I simply let the AD9954 and AD9854 output a 0.5 Vrms signal; I do not apply any control to either device. I consider it a stable system during measurement, so the measured amplitude should be stable (that is, the fluctuation of the amplitude should be basically the same for both). In fact, however, the AD9854's fluctuation is better than the AD9954's, as I just mentioned, and I don't quite understand this. Are there other factors that could cause the AD9954 to fluctuate more?

  •  Analog Employees
    on Jun 25, 2021 2:50 PM in reply to flyinglight

    Because the AD9854 has a 48b tuning resolution (vs. 32b for the AD9954), there can be very low frequency components in the output signal (sub 1Hz). Hence, the amplitude measurement must span a long enough averaging time to capture the low frequency components. If the averaging time is too short, then there will be differences between subsequent measurements (due to the amplitude varying slowly from the low frequency components).
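    A small Python sketch illustrates the averaging-time effect (the 0.1 Hz ripple frequency and 0.1% depth are invented numbers for illustration only): windows much shorter than the modulation period scatter by roughly the modulation depth, while windows spanning whole periods of the ripple agree closely.

```python
import math

FS = 1000.0    # sample rate, Hz (illustrative)
F_MOD = 0.1    # slow amplitude ripple, Hz (period 10 s)
DEPTH = 0.001  # 0.1% peak modulation depth

def windowed_amplitude(t0, t_window):
    """Mean of a slowly amplitude-modulated level over one window
    starting at time t0."""
    n = int(t_window * FS)
    return sum(1.0 + DEPTH * math.sin(2 * math.pi * F_MOD * (t0 + k / FS))
               for k in range(n)) / n

# 0.5 s windows sample different phases of the 10 s ripple...
short = [windowed_amplitude(t0, 0.5) for t0 in (0.0, 2.5, 5.0)]
# ...while 20 s windows span two full ripple periods and average it out.
long_ = [windowed_amplitude(t0, 20.0) for t0 in (0.0, 2.5, 5.0)]
print(max(short) - min(short))  # spread comparable to DEPTH
print(max(long_) - min(long_))  # spread collapses toward zero
```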

    I have a thought to try a different frequency tuning word as a test, but I need to know the system clock frequency for the DDS. What are you using as a system clock frequency for the DDSs?

  • The system clock I used for the DDSs is either 20MHz or 400MHz: one option is the output of a 20MHz crystal oscillator, while the other is a crystal oscillator plus clock-management chips, with which the frequency can be changed.

    In my measurements I tried different DDS system clock frequencies, different output signal frequencies, and different DFT period lengths. But increasing the DFT period length brought no obvious improvement in the amplitude distortion results.

    In addition, I'm curious: if there can be very low frequency components in the output signal of a DDS, how can it work well in I/Q signal output mode with two output channels?

  •  Analog Employees
    on Jun 28, 2021 1:58 PM in reply to flyinglight

    Given your clock arrangement, there is a significant difference in output spectral content between using a 20MHz clock (which necessitates the use of the internal PLL) and 400MHz clock (which bypasses the PLL). The PLL is a secondary source of potential noise contribution, etc. Hence, you can expect a difference in performance between using and not using the internal PLL.

    Regarding low frequency spectral content, the effect of its presence is only problematic for applications sensitive to very low frequency modulation components.

    The source of the low frequency components is phase truncation, as phase truncation is essentially a form of phase modulation. Most DDS designs incorporate phase truncation (a tradeoff between power consumption and spurious spectral content). Note, the low frequency components (if present) are typically of very low amplitude (< -95dBc for the AD9854 and < -110dBc for the AD9954).

    It turns out that certain choices of tuning word avoid the effects of phase truncation altogether. In fact, that was the direction of my "thought" in my previous post. However, your use of different clock rates renders that direction moot.

    In any case, by choosing only tuning words for which none of the truncated bits are Logic 1, you can avoid the effects of phase truncation. For example, the AD9854 has a 48b phase accumulator of which the 32 LSBs are truncated prior to phase-to-amplitude conversion. The AD9954, on the other hand, has a 32b accumulator of which the 13 LSBs are truncated. Therefore, if you use only tuning words that make use of the 16 MSBs of the AD9854 or the 19 MSBs of the AD9954, then phase truncation is nonexistent. Of course, this greatly reduces the frequency tuning resolution of the DDS.
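    As a sketch, checking whether a candidate tuning word is truncation-free is a single mask operation (the helper name is mine, not from the datasheets):

```python
def truncation_free(ftw, n, p):
    """True when the n-p truncated LSBs of the tuning word are all
    zero, so phase truncation introduces no phase error."""
    return ftw & ((1 << (n - p)) - 1) == 0

# AD9854: n=48, p=16 (32 truncated LSBs); AD9954: n=32, p=19 (13 truncated LSBs)
print(truncation_free(1 << 32, 48, 16))        # True: only the 16 MSBs in use
print(truncation_free((1 << 32) + 1, 48, 16))  # False: a truncated LSB is set
print(truncation_free(1 << 13, 32, 19))        # True for the AD9954
```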

    If frequency tuning resolution is an important feature in your application, then you must accept the consequences of phase truncation. However, you can mitigate the lowest frequency components by not using the lowest LSBs of the tuning word. Essentially, the less significant the tuning word bit, the lower the frequency of the phase modulation.