Hello. I am currently using the Qsys ADRV9371 reference design for the 2017 R1 branch.
I am also using the patch for the DMA IP core that fixes an issue where pushing a single buffer always resulted in cyclic transmit behavior. The fix was described here:
I am also using the latest Arria10 Linux ADI drivers from 2017 R1.
The problem I am seeing is that when I push an IIO buffer (cyclic or non-cyclic, both cases) while the Tx and Rx sampling rates differ, I see corruption in the frame where the data is intermittently zero. This was tested with the default rates of 245.76 MHz for Tx and 122.88 MHz for Rx. I confirmed this using the BIST loopback mode (Tx->Rx digital loopback), so nothing was transmitted over analog. I receive the data through IIO calls that define the buffer size (iio_device_create_buffer). On the PL, I command a data-capture routine by specifying how many clock cycles to capture data for, using the valid signals entering the CPACK IP core.
Here is an example where I push a Tx ramp counter running from 1 to 2^15. Most ramps come through fine, but sometimes I see drops in samples.
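For reference, this is roughly how the ramp pattern is generated before it is pushed (a minimal sketch; the actual sample packing, I/Q interleave, and scaling in my test code may differ):

```c
#include <stdint.h>
#include <stddef.h>

/* Fill a sample buffer with the ramp described above: a counter running
 * 1..2^15 and then wrapping back to 1. Note that 2^15 (32768) wraps to
 * -32768 when stored as a two's-complement int16_t sample. */
static void fill_ramp(int16_t *buf, size_t nsamples)
{
    int32_t counter = 1;
    for (size_t i = 0; i < nsamples; i++) {
        buf[i] = (int16_t)counter;
        counter = (counter == 32768) ? 1 : counter + 1;
    }
}
```

The resulting buffer is then handed to the Tx DMA via the IIO buffer push call.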
The plot was generated using iio_buffer_refill on the Rx side. The receiver is not overflowing. I command the PL over AXI4 to capture a configurable number of samples matching the number set by the IIO buffer API call, so I can control exactly how many frames I get. Another test I did was with the DDS, which works fine: using the same AXI4 capture-trigger mechanism, I was able to receive a full, uncorrupted frame from a DDS BIST loopback. Based on the testing above, I believe data is being dropped somewhere on the Tx-DMA data path.
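For what it's worth, this is how I check a received frame for the dropped-sample symptom (an illustrative sketch, not my exact test code): every received sample should be the previous one plus 1, with 2^15 wrapping back to 1, so any zero-run or drop shows up as a break in the sequence.

```c
#include <stdint.h>
#include <stddef.h>

/* Count ramp discontinuities in a received frame. A clean frame returns 0;
 * dropped or zeroed samples each break the expected counter sequence.
 * 2^15 is stored as -32768 in int16_t, and the ramp wraps back to 1. */
static size_t count_ramp_breaks(const int16_t *buf, size_t nsamples)
{
    size_t breaks = 0;
    for (size_t i = 1; i < nsamples; i++) {
        int16_t expected = (buf[i - 1] == -32768) ? 1
                                                  : (int16_t)(buf[i - 1] + 1);
        if (buf[i] != expected)
            breaks++;
    }
    return breaks;
}
```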
If I set the baseband sample rates for Tx and Rx to match exactly, the problem is no longer observed. I tested this by setting both rates to 61.44 MHz. The config file is attached:
<profile AD9371 version=0 name=Rx 50, IQrate 61.440>
  <clocks>
    <deviceClock_kHz=76800>
    <clkPllVcoFreq_kHz=9830400>
    <clkPllVcoDiv=2>
    <clkPllHsDiv=4>
  </clocks>
  <rx>
    <adcDiv=1>
    <rxFirDecimation=2>
    <rxDec5Decimation=5>
    <enHighRejDec5=1>
    <rhb1Decimation=2>
    <iqRate_kHz=61440>
    <rfBandwidth_Hz=50000000>
    <rxBbf3dBCorner_kHz=50000>
    <filter FIR gain=-6 num=72>
      0 -1 2 3 -5 -7 11 15 -23 -29 43 54 -75 -92 125 150 -198 -235 302 355
      -447 -524 646 759 -920 -1089 1302 1568 -1864 -2324 2763 3696 -4513 -7179
      9583 31418 31418 9583 -7179 -4513 3696 2763 -2324 -1864 1568 1302 -1089
      -920 759 646 -524 -447 355 302 -235 -198 150 125 -92 -75 54 43 -29 -23
      15 11 -7 -5 3 2 -1 0
    </filter>
    <adc-profile num=16>
      596 358 201 98 1280 134 1509 64 1329 25 818 39 48 40 23 190
    </adc-profile>
  </rx>
  <obs>
    <adcDiv=1>
    <rxFirDecimation=2>
    <rxDec5Decimation=5>
    <enHighRejDec5=1>
    <rhb1Decimation=2>
    <iqRate_kHz=61440>
    <rfBandwidth_Hz=40000000>
    <rxBbf3dBCorner_kHz=20000>
    <filter FIR gain=-6 num=72>
      3 2 -8 -10 8 30 6 -52 -51 55 127 1 -206 -147 216 375 -63 -596 -319 639
      889 -296 -1445 -570 1618 1893 -951 -3304 -968 4064 4403 -2936 -9821
      -4360 14179 31305 31305 14179 -4360 -9821 -2936 4403 4064 -968 -3304
      -951 1893 1618 -570 -1445 -296 889 639 -319 -596 -63 375 216 -147 -206
      1 127 55 -51 -52 6 30 8 -10 -8 2 3
    </filter>
    <adc-profile num=16>
      599 357 201 98 1280 112 1505 53 1331 21 820 40 48 40 23 191
    </adc-profile>
    <lpbk-adc-profile num=16>
      599 357 201 98 1280 112 1505 53 1331 21 820 40 48 40 23 191
    </lpbk-adc-profile>
  </obs>
  <tx>
    <dacDiv=2.5>
    <txFirInterpolation=2>
    <thb1Interpolation=2>
    <thb2Interpolation=2>
    <txInputHbInterpolation=1>
    <iqRate_kHz=61440>
    <primarySigBandwidth_Hz=20000000>
    <rfBandwidth_Hz=40000000>
    <txDac3dBCorner_kHz=92000>
    <txBbf3dBCorner_kHz=20000>
    <filter FIR gain=0 num=32>
      -11 -15 58 78 -190 -264 475 693 -1024 -1644 1893 3779 -2438 -7373 2905
      19451 19451 2905 -7373 -2438 3779 1893 -1644 -1024 693 475 -264 -190 78
      58 -15 -11
    </filter>
  </tx>
</profile>
Is it recommended to use the same baseband sampling rate for Tx and Rx when pushing a buffer out from software?