I have been experimenting with the single-shot mode of the RX quadrature calibration, and I have found that the accuracy of the calibration degrades as the channel bandwidth decreases.
I have built a script based on the details in the calibrations document. When I run the script in a 10 MHz channel, the algorithm returns the following coefficients:
When I move to a 1.25 MHz channel (same receive and transmit structure, changing only the BBPLL and related settings per the customer software), the algorithm returns the following coefficients:
In all cases I generate the TX test tone at +BBBW/2 relative to the RX LO.
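For reference, this is how I place the tone; a minimal sketch with an illustrative helper name (`tx_test_tone_hz` is not from any driver API), just the arithmetic described above:

```python
def tx_test_tone_hz(rx_lo_hz: float, bbbw_hz: float) -> float:
    """Return the absolute TX test-tone frequency, offset +BBBW/2 from the RX LO."""
    return rx_lo_hz + bbbw_hz / 2.0

# 10 MHz channel: tone sits 5 MHz above the LO
print(tx_test_tone_hz(2.4e9, 10e6))    # 2405000000.0
# 1.25 MHz channel: tone sits 625 kHz above the LO
print(tx_test_tone_hz(2.4e9, 1.25e6))  # 2400625000.0
```

So in the narrow channel the tone is proportionally closer to the LO, but still at the same relative position within the band.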
You can see that the sign and magnitude of the coefficients are completely different between the two channel bandwidths. I have run the script several times and the results are very consistent.
On the other hand, the TX quadrature calibration runs in both cases and returns essentially the same coefficients in 1.25 MHz as in 10 MHz (read back from registers x08E - x091). With a CW tone I can see that the TX quadrature calibration behaves essentially the same in both channels.
The EVM in the 10 MHz case is quite good, but not in the 1.25 MHz case. If I write x3 into register x183 and then, while in the 1.25 MHz channel, manually write into registers x170 - x173 the values obtained in the 10 MHz channel, the EVM is significantly better.
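The manual override I am doing can be sketched as follows; this assumes an 8-bit SPI register write helper (`spi_write` here is a placeholder for whatever your platform provides), and the coefficient bytes are the ones read back from x170 - x173 after calibrating in the 10 MHz channel:

```python
def apply_rx_quad_coeffs(spi_write, coeffs):
    """Force the RX quadrature correction coefficients by hand.

    spi_write -- hypothetical callable spi_write(addr, value) doing an
                 8-bit register write.
    coeffs    -- four bytes previously read back from registers
                 0x170-0x173 after a 10 MHz-channel calibration.
    """
    assert len(coeffs) == 4
    spi_write(0x183, 0x3)  # take manual control of the correction path
    for offset, value in enumerate(coeffs):
        spi_write(0x170 + offset, value)

# Example with a dict standing in for the register map:
regs = {}
apply_rx_quad_coeffs(lambda addr, val: regs.__setitem__(addr, val),
                     [0x12, 0x34, 0x56, 0x78])  # placeholder coefficient bytes
print(hex(regs[0x183]))  # 0x3
```

With this override in place in the 1.25 MHz channel, the EVM improvement described above is reproducible.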
Are there register settings needed to improve accuracy for narrower channel bandwidths? Are there optimizations I am missing? I am only using the customer software settings together with the description in the calibrations document.