The function `ad9361_tx_quad_calib` in the AD9361 API finds the optimal RX NCO phase value by checking whether the "TX1_LO_CONV" and "TX1_SSB_CONV" bits are set. The reference manual implies these bits are set once the LO-leakage and quadrature-calibration algorithms converge.
At a high level, what is the chip doing to run these convergence algorithms? Does the AD9361 RX chain internally filter and measure the LO-leakage and image power, determining whether a given RX NCO phase offset reduces that power, so as to effectively build an RX NCO phase vs. dBc plot like the one in "AD9361 Transmit Quadrature Calibration (Tx Quad Cal)"?