
RX DC offset problem

We are using the AD9371 and seeing large and variable DC offsets, which have forced us to reduce the amplitude of the incoming received signal to avoid clipping and distortion.

In addition, this DC offset is causing unreliable recovery of our transmitted data.

Here is an example of the RX I/Q data showing large DC offsets and clipping of the tops of I and Q.

So, from both perspectives, we wish to remove the DC offset.

We have the DC_OFFSET initial calibration enabled, which corrects for DC offset within the RX chain. There are no errors on initial calibration.

The AD9371 has DC offset removal on both the TX and RX sides; however, there is very little information in UG-992 on how this works in detail, or on whether the settings can be optimised in cases where problems are seen.

In addition, I have seen posts on the ADI community forum for a different but similar transceiver, the AD9361. There is a known issue there: "DC offset correction algorithm fails sometimes whenever there is a signal very close to DC (less than 7.5 kHz)".

Can a more detailed explanation of how the DC offset removal works for the AD9371 TX and RX sides be provided?

Can an explanation be provided of how to use the ADI SW APIs to optimise the DC offset removal?

Also does the AD9371 DC offset correction algorithm suffer from the “fails sometimes whenever there is a signal very close to DC” problem?

Also, are there any other known issues with this function?



  • The calibrations are done by the ARM inside the chip, and there are really no knobs for the user. Table 65 in UG-992 lists the different calibrations.

    Please check the UG-992 section on "Tx LOL Tracking Calibration". This will help in reducing the LO leakage on the Tx side.

    Can you share a spectrum plot of the LO leakage on Tx and an FFT plot (GUI plot) of the Rx LO leakage?

    What are the signal (waveform) and LO frequency set on Tx and Rx?

  • Hi Vinod,

    It is a shame there is no control over the DC offset calibrations (TX and RX).

    Can you confirm if the DC offset calibration suffers from any known issues like “fails sometimes whenever there is a signal very close to DC” which I have seen mentioned on other ADI devices with this calibration?

    The LO is set the same on Tx and Rx, at 1.5 GHz.

    We are not using the ADI "out of the box experience" with its GUI.

    We have our own FPGA design.

    I'm a bit confused why you mention LO Leakage calibration and reducing LO leakage, as this is not what I was asking about. We are concerned with DC offset.

    Our spectrum extends from 70 MHz down to DC. We are considering offsetting it so that it does not extend to DC.

    I understand the DC offset corrections are an initial calibration.

    DC offset is not in the tracking calibration enum. Is it also a tracking calibration that is "always on"?

    Any suggestions on how to make the DC offset calibration work and not leave a large and variable DC offset?



  • The MYKONOS_setDigDcOffsetEn API is used to enable/disable digital DC offset correction per channel, using a channel mask.


    There is no API provided to enable/disable RF DC offset.

  • Thanks - that's useful info. I am still investigating this issue. It seems that with a pure tone input (we have an inbuilt test pattern generator) and selecting specific repetitive symbols, the DC offset algorithm does a good job, and the RX digital signal is well centred for I and Q.

    However when we send a real data packet consisting of many symbols, the DC tracking immediately goes wrong and large offsets are seen in both I and Q. This is a real problem as it reduces dynamic range and often causes clipping.

    It seems that the DC offset correction does not work well with the waveform we use. To be able to adapt the waveform to work better with the DC offset algorithm, we need some information on how it works.

    Can you explain how the algorithm works?

    Many thanks.


  • Does anyone have experience in the DC offset performance of this device (either good or bad)?

    Also is there any knowledge of how the DC tracking works (either at a system overall level, or detailed algorithm)?

    We are still seeing DC offset problems, and investigating - any help is much appreciated.

  • Today's DC offset measurements show:

    * DC offset within a 4 ms packet (small signal) = -17 to -33 dBFS

    * DC offset within a 4 ms packet (larger signals) = -9 to -23 dBFS

    * DC offset for a repeated signal (long term) = -62 to -72 dBFS

    * AD9371 specification: better than -80 dBFS

    So the suspicion is the transient performance is poor/slow.

    Our packets are sent infrequently (one 4 ms packet every minute or two), so there will be a pure carrier followed by a 4 ms burst of waveform. It seems the default DC correction settings do not respond well to this scenario, leaving large and variable DC offsets.

    How can the DC offset correction (Rx tracking) be optimised to respond quickly to a different waveform?

  • The values of -33 and -23 seem very high. I hope you have enabled the DC_OFFSET init cal during the ARM initialization calibrations.

    Tracking calibrations need a minimum of 800 µs of continuous data.

    Can you try the above experiment after giving a CW signal for some time and then pulsing the signal?

    From UG-992.

    For Tracking cals, the scheduler performs calibrations in batches, where the Tx data can be observed in chunks of 800 μs. When sufficient batches of a tracking calibration run, the algorithm then computes its correction based on the observed data across all the batches. It is only after the correction parameters update that the pending bit is cleared, as shown in Figure 30.
    This batch operation means that when a calibration is pending, and is selected by the scheduler to be run (based on the three conditions described previously), it initiates a batch to observe the Tx data for 800 μs. When this batch is complete, the scheduler again determines which calibrations can be run. If the same calibration cannot continue to run, for example, in time division duplex (TDD) mode, when the path to be calibrated may be no longer active, or if a higher priority calibration is pending, it awaits its next opportunity before it takes another batch of data.
    Note that, if a tracking calibration batch is started but the observation is disrupted (for example, if the Tx/Rx path is disabled, or if the ORx path is reacquired by user for DPD captures) before 800 μs has completed, then the observation that has been made up to this point is discarded.

    Please refer to the user guide for more details.

  • Vinod,

    Thank you for your reply explaining how the DC offset tracking works; unfortunately, none of this is in the user guide you refer to (UG-992).

    After lab experiments yesterday, I think I now understand what is causing the large DC offsets I am seeing. The root cause is that the TX output defaults to sending a signal that contains a large DC offset. This is nulled out by the tracking cal. When the actual packet is sent, it is then interpreted by the RX side as having a large offset. As the minimum DC offset correction time is slow compared to our 4 ms waveform, only 4 corrections are made.

    If I disable the DC offset tracking cal, I see very low offsets and no problem during the packet. Of course the DC offset could drift over time so this is not a solution.

    I am thinking of the following solutions:

    1. Change the TX waveform so it does not default to a signal containing a large DC offset. It should be representative of the packet and ideally contain no DC offset.

    2. turn off TX1 when not transmitting a packet.

    3. Only enable the DC offset correction shortly before a packet is sent, and ensure the TX output contains data that is representative of the packet. The tracking could then be disabled to avoid changes during the packet.

    I have some questions for you to support this work -

    Q1 - Do you have any alternative suggestion on how to solve this problem?

    * Problem: DC offset tracking introduces very large DC offsets when packet data is transmitted.

    Q2 - How can I easily enable/disable the TX1 output (we only use TX1 and RX1)?

    * TX1_ENABLE pin (pin M6) - can this be controlled by HW with no SW process/state-change sequence? How fast does this enable/disable TX1?

    * Software enable/disable of TX1 - can this be controlled by SW? What is the API? Is there any SW process/state-change sequence? How fast does this enable/disable TX1?

    Q3 - What do the MShift values mean?

    * There is no explanation of MShift in UG-992.

    * The only information is the header comments/SW provided by ADI in the headless driver.

    * These say "M-Shift is the corner frequency of Rx notch filter for the given channel", and that valid values are 8 to 20.

    * what is the notch filter used for in DC offset tracking cal?

    * what corner frequencies do 8 to 20 represent?



  • Q1. "The root cause is that the TX output defaults to sending a signal that contains a large DC offset. This is nulled out by the tracking cal. When the actual packet is sent, it is interpreted by the RX side as having a large offset." Are you doing a loopback from Tx to Rx? What is the frequency of the tone that you are giving at the Rx input?

    Q2. The ARM enables and disables the signal chains of the device. This can be performed either through pin control or over the SPI interface. Pin control mode is real time and its delay is not significant, so you can use that.

    To enable or disable TX1 or TX2 using pin control mode, first do the required configuration using the following API:


    Then call the API:


    Note: The device should be in the radio off state before calling this function.

    Q3. Please refer to the post below for details on M-Shift and the notch filter.

  • Thank you for the reply. The link to the ADRV9009 device user guide was very useful. It seems to have a complete description of the DC offset function.

    Can you please confirm the ADRV9009 DC offset function is implemented in the same way for the AD9371?

    To answer your questions

    1. Yes, at the moment it's a loopback until we fix the problems with the AD9371. Our modulation scheme uses tones in the range ±50 MHz of the centre frequency.

    2. I have set up pin control as you suggested. However, I got errors back from the AD9371 when I tried to set this after initialisation, in the radio on state. So I used the API to change to the radio off state, but still got the same error. I have therefore changed the configuration to set pin control for tx-rx.

    Why does the ARM/AD9371 reply with an error when sending the pin control API after issuing the radio off API?