AD5641 generating glitches in a sine wave with DC offset

Hi all!

I have an AD5641 running from a 5 V supply, controlled by a 3.3 V SPI bus at a 20 MHz clock rate (the datasheet allows up to 30 MHz). I need to generate the sum of a small 100 kHz sine wave (20 to 40 mV Vpp) and a DC offset. A larger signal at 100 kHz would approach the DAC's specified slew rate, so I can't test at higher amplitudes.
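For reference, here's roughly how I'm building the samples and frames (a minimal Python sketch, not my actual FPGA code; it assumes the AD5641's 16-bit frame of two power-down bits followed by 14 data bits, and a 5 V full scale — please correct me if I've misread the frame format):

```python
import math

VREF = 5.0               # supply used as reference
NBITS = 14               # AD5641 resolution
FS_CODE = (1 << NBITS) - 1

def dac_code(voltage):
    """Clamp and quantize a target voltage to a 14-bit code."""
    code = round(voltage / VREF * FS_CODE)
    return max(0, min(FS_CODE, code))

def frame(code):
    """16-bit SPI word: PD1/PD0 = 00 (normal mode) in the two MSBs,
    then 14 data bits."""
    return code & 0x3FFF

# one period of a 20 mVpp sine on a 2.5 V offset, 50 samples/period
samples = [dac_code(2.5 + 0.010 * math.sin(2 * math.pi * n / 50))
           for n in range(50)]
```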

At midscale the sine wave looks acceptable (small negative spikes are present between steps, but the reconstruction filter catches them well enough; see picture below). Once I start adding a DC offset, some steps of the sine start to spike out:

I've dumped a large chunk of the control data with a logic analyzer and plotted it with an SPI decoder: it contains a perfect sine, as expected, and the SPI analyzer had no problems decoding any frames. The SPI lines are about 5 cm long and roughly 75 Ω, with 75 Ω series termination enabled in the FPGA.

Here is a screenshot of a single frame to ensure my SPI implementation is within spec:

Any idea what I am missing? The specified DNL doesn't seem to allow spikes that high. Is a glitch like this allowed during the internal register update?

  • Hi,

    Honestly, no idea... the worst-case scenario for the glitch is at midscale, but you mentioned that by default the signal is biased at midscale, right?

    Could you identify the code transition that generates the spike?



  • Hi Miguel,

    thank you for your reply!

    The glitch moves around quite significantly with the offset; there is at least one sweet spot near midscale that doesn't generate it. I might be able to identify later which code patterns generate which glitches. However, I've slowed the sine (including the sample rate) down to 10 kHz, and this is what it looks like now:

    So the staircase shape suggests that the codes are received correctly and that it's the transition that contains the spike. At the 10x faster sample rate the spike probably consumes almost the entire step. There seem to be smaller spikes between some other steps, but crossing specific levels seems to generate bigger ones than the others.
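    To answer your question about which transitions are the culprits, I can scan the decoded sample stream for sample pairs where many bits toggle at once (the classic major-carry case). This is a hypothetical sketch; the toggle threshold is my own guess, not anything from the datasheet:

```python
def toggled_bits(a, b):
    """Number of bits that differ between two DAC codes."""
    return bin(a ^ b).count("1")

def carry_transitions(codes, min_toggles=8):
    """Flag sample-to-sample transitions where many bits switch at once:
    the major-carry points, where code-dependent glitches are worst."""
    return [(i, codes[i], codes[i + 1])
            for i in range(len(codes) - 1)
            if toggled_bits(codes[i], codes[i + 1]) >= min_toggles]

# e.g. 0x1FFF -> 0x2000 is a 1-LSB step but toggles all 14 bits
```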

    The only explanation I can come up with is that the DAC's internal register does not update all at once but with significant skew between bits, so that a wrong code is applied to the resistor network for long enough to pass through the output amplifier. But I can't find anything about such behaviour in the datasheet. (See EDIT.) If the DAC turns out to be unfit for the required sample rate, it would be nice to have hints on what to look for in the datasheet when selecting an alternative part.

    BTW I have 2 DACs of this type on the board, both doing this, so it's unlikely to be a damaged part.




    Looking at the shot above and at Figure 27 of the datasheet, it does make sense. I guess a glitch at the second/third MSB (which are crossed more frequently) is significant enough to distort my signal at this sample rate. And since the figure specifies an RC filter as the load, it also makes sense that the glitch amplitude I see is higher than in the figure, as I've been measuring directly at the output with a 100 MHz probe.
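    A quick back-of-envelope check also supports this (assuming the glitch is roughly triangular, which the datasheet doesn't state; the 100 ns duration is my own estimate from the scope shot):

```python
def glitch_peak(area_nvs, duration_ns):
    """Peak of a triangular glitch with impulse area `area_nvs` (nV*s)
    and base duration `duration_ns` (ns): area = 0.5 * peak * duration.
    nV*s / ns cancels to volts."""
    return 2 * area_nvs / duration_ns

# 5 nV*s spread over a 100 ns glitch -> ~0.1 V peak, unfiltered
```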

    Learned something new today =) Guess I'll see if I can find a precision DAC with a smaller glitch or change my architecture...

  • Hey,

    I'm a bit surprised by the R-2R DAC recommendation: reading comparisons of the architectures, the articles suggest that string DACs are superior in terms of glitch energy:

    "Secondly, the glitch energy is greater for an R2R architecture than for a string architecture, making R2R DACs less attractive for waveform generation and other glitch-sensitive applications" (Planet Analog, "Understand the differences between R2R and String DAC architectures").

    The AD5541A is specified at 1.1 nV·s glitch energy (the AD5641 at 5 nV·s). Could you please advise whether the AD5541A would be superior to, for example, TI's internally buffered ultra-low-glitch string DAC8560 (0.15 nV·s) for generating a precise 0.2-4.8 V DC offset plus a 20 mV 100 kHz sine? The DAC8560 seems superior in glitch energy (power saving doesn't matter much here), but not in slew rate and full-scale settling time. It seems to me now that glitch energy is the major limiting factor to consider.

    Thank you for your time!



  • Hi Evil,

    Let me explain: the AD5641 is a string DAC. These DACs are mainly designed for static applications, as they are optimized for low power consumption (high resistor impedance).

    For your application, I'd recommend a part like the AD5541A or the AD5542A. Those use an R-2R architecture and are much faster. The main trade-off is that the part does not include an output buffer, which is pretty common in this type of architecture, as it lets you optimize the system for speed or accuracy with your choice of external amplifier.



  • Hi Ilya,

    That is not true... the R-2R architecture offers better dynamic characteristics than the string DAC, as the impedance is lower, so the parasitics (i.e. capacitances) charge and discharge faster.

    TI implements a different architecture which makes their glitch reasonably good. That said, their main trade-off is linearity, which is pretty poor.

    We did a really good job on glitch energy in our latest nanoDAC+, which offers an impressive 2 LSB INL at 16 bits.

    That said, if you don't care about power, I'd recommend an R-2R architecture.