We are using the AD9371 and seeing large and variable DC offsets, which have forced us to reduce the amplitude of the incoming received signal to avoid clipping and distortion.
In addition this DC offset is causing unreliability in the recovery of our transmitted data.
Here is an example of the RX I/Q data showing large DC offsets and clipping at the top of both I and Q.
So from two perspectives we wish to remove the DC offset.
We have the DC_OFFSET initial calibration enabled, which corrects for DC offset within the RX chain. There are no errors reported during initial calibration.
The AD9371 has DC offset removal on both the TX and RX sides, however there is very little detailed information in UG-992 on how this works, or on whether the settings can be optimised in cases where problems are seen.
In addition I have seen posts on the AD community forum for a different, but similar, AD9361 transceiver. There, there is a known issue: "DC offset correction algorithm fails sometimes whenever there is a signal very close to DC (less than 7.5 kHz)".
Can a more detailed explanation of how the DC offset removal works for the AD9371 TX and RX sides be provided?
Can an explanation be provided on how to use ADI SW APIs to optimise the DC offset removal?
Also, does the AD9371 DC offset correction algorithm suffer from the "fails sometimes whenever there is a signal very close to DC" problem?
Also, are there any other known issues with this function?
The calibrations are done by ARM inside the chip and there are really no knobs for the user. Table 65 in UG-992 lists the different calibrations.
Please check the UG-992 section on "Tx LOL Tracking Calibration". This will help reduce the LO leakage on the Tx side.
Can you share a spectrum plot of the LO leakage on Tx and an FFT plot (GUI plot) of the Rx LO leakage?
What signal (waveform) and LO frequency are set on Tx and Rx?
It's a shame there is no control over the DC offset calibrations (TX and RX).
Can you confirm if the DC offset calibration suffers from any known issues like “fails sometimes whenever there is a signal very close to DC” which I have seen mentioned on other ADI devices with this calibration?
The LO is set to the same frequency, 1.5 GHz, on both Tx and Rx.
We are not using the ADI "out of the box experience" with its GUI.
We have our own FPGA design.
I'm a bit confused as to why you mention the LO leakage calibration and reducing LO leakage, as this is not what I was asking about. Our concern is DC offset.
Our spectrum extends from 70 MHz down to DC. We are considering offsetting it so that it does not extend to DC.
I understand the DC offset corrections are an initial calibration.
DC offset does not appear in the tracking calibration enum. Are the DC offset corrections also tracking calibrations that are "always on"?
Any suggestions on how to make the DC offset calibration work and not leave a large and variable DC offset?
The MYKONOS_setDigDcOffsetEn API is used to enable/disable the digital DC offset correction per channel using a channel mask.
There is no API provided to enable/disable RF DC offset.
Thanks - that's useful info. I am still investigating this issue. It seems with a pure tone input (we have an inbuilt test pattern generator), and selecting specific repetitive symbols the DC offset algorithm does a good job, and the RX digital signal is well centred for I and Q.
However when we send a real data packet consisting of many symbols, the DC tracking immediately goes wrong and large offsets are seen in both I and Q. This is a real problem as it reduces dynamic range and often causes clipping.
It seems that the DC offset correction does not work well with the waveform we use. To be able to adapt the waveform to work better with the DC offset algorithm, we need some information on how it works.
Can you explain how the algorithm works?
Does anyone have experience in the DC offset performance of this device (either good or bad)?
Also is there any knowledge of how the DC tracking works (either at a system overall level, or detailed algorithm)?
We are still seeing DC offset problems, and investigating - any help is much appreciated.