I'm wondering how I should interpret the automatic calibration results I am getting.
I followed the procedure, but GAIN_CAL_OF is always set after each pass, even when I increment DAC_GAIN_RNG. I have tried all CAL_CLK_DIV settings with the same result.
Here is my setup: an AD9102 on my own board, based largely on the reference design (AD9102-EBZ RevA Schematic.pdf). FSADJ and CAL_SENSE are tied together and grounded via an 8.06 kΩ resistor. My source clock is 38 MHz (kept low for testing); the chip otherwise produces DDS waveforms OK. I am using all internal LDOs.
SPICONFIG = 0x0000
POWERCONFIG = 0x0E07
CLOCKCONFIG = 0x0700
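For reference, my setup writes look roughly like the sketch below. `spi_write_reg` and the simulated register store are placeholders for my real bus driver, and the register addresses (0x00/0x01/0x02) are my reading of the datasheet register map, so verify them:

```c
#include <stdint.h>

/* Simulated register store standing in for the SPI bus;
   spi_write_reg is a placeholder for the real driver. */
static uint16_t regs[0x60];
static void spi_write_reg(uint8_t addr, uint16_t val) { regs[addr] = val; }

/* Assumed AD9102 register addresses -- check against the datasheet. */
#define REG_SPICONFIG   0x00
#define REG_POWERCONFIG 0x01
#define REG_CLOCKCONFIG 0x02

/* Initial configuration as listed above. */
static void dac_setup(void)
{
    spi_write_reg(REG_SPICONFIG,   0x0000);
    spi_write_reg(REG_POWERCONFIG, 0x0E07);
    spi_write_reg(REG_CLOCKCONFIG, 0x0700);
}
```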
0) Disable external trigger and do hardware reset.
1) Set DAC_GAIN_RNG=0 and COMP_CAL_RNG = 0
2) Set CAL_CLK
3) Set CAL_CLK_DIV = 0 (have tried all values 0 to 7)
4) Set CAL_MODE_EN
5) Set START_CAL
6) If !CAL_FIN goto 6)
7) At this point CAL_FIN=1 and CAL_MODE=1. The full register values are CALCONFIG = 0x0448 and DACAGAIN = 0x3F00. I then clear START_CAL.
8) Since GAIN_CAL_OF reports an overflow, I increment DAC_GAIN_RNG and return to 5).
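My step 6 polling looks roughly like the sketch below, except with a bounded poll count so a stuck calibration cannot hang the firmware. The register address and CAL_FIN bit position are assumptions (placeholders I have not verified against the datasheet), and the `spi_read_reg` here is a simulation standing in for my driver, reporting CAL_FIN on the third poll:

```c
#include <stdbool.h>
#include <stdint.h>

#define REG_CALCONFIG 0x0D        /* assumed address -- verify */
#define CAL_FIN_MASK  (1u << 14)  /* assumed bit position -- verify */

/* Simulated SPI read standing in for the real driver: this fake
   chip raises CAL_FIN on the third poll of CALCONFIG. */
static unsigned polls;
static uint16_t spi_read_reg(uint8_t addr)
{
    (void)addr;
    return (++polls >= 3) ? CAL_FIN_MASK : 0;
}

/* Poll CAL_FIN with an iteration limit instead of an unconditional
   "if !CAL_FIN goto 6)" loop; returns false on timeout. */
static bool wait_cal_done(unsigned max_polls)
{
    while (max_polls--) {
        if (spi_read_reg(REG_CALCONFIG) & CAL_FIN_MASK)
            return true;
    }
    return false;
}
```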
So it never gets beyond step 8): DAC_GAIN_RNG just overflows after 3 passes and I'm stuck in a loop. CFG_ERROR = 0x0000, but I don't think that's relevant.
Without knowing whether either is required, I have tried RAMUPDATE after setup and CAL_RESET in different combinations, with the same result. Could a damaged chip cause this?
Something interesting I have noticed is that the reserved bits do not match the datasheet. For example, CLOCKCONFIG should be all zeros on reset, but I see bits 8, 9, and 10 set, disabling the non-existent DAC clocks. For this reason I always read back the full register before modifying it and use a mask to avoid altering reserved bits; that seems like the only safe approach if we cannot know whether any of them are modified at run time by the chip.
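The read-modify-write I use is sketched below. `spi_read_reg`/`spi_write_reg` and the in-memory register store are placeholders for the real SPI driver; only the masking logic is the point:

```c
#include <stdint.h>

/* Simulated register store standing in for the SPI bus; the two
   helpers below are placeholders for the real driver. */
static uint16_t regs[0x60];
static uint16_t spi_read_reg(uint8_t addr)                 { return regs[addr]; }
static void     spi_write_reg(uint8_t addr, uint16_t val)  { regs[addr] = val; }

/* Read-modify-write that touches only the bits selected by `mask`,
   preserving whatever the chip currently reports in reserved bits. */
static void spi_update_reg(uint8_t addr, uint16_t mask, uint16_t value)
{
    uint16_t cur = spi_read_reg(addr);
    spi_write_reg(addr, (cur & ~mask) | (value & mask));
}
```

So even if the chip has set reserved bits behind my back (e.g. 0x0700 in CLOCKCONFIG), updating only the low byte leaves them untouched.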