We did some performance testing (FMCOMMS5 and ZC706 eval board) with libiio and the DMAC, where we got around 300 to 350 MB/s. The reason I am picking up on this thread is exactly the discussion about ADC-to-DMA-to-user-code congestion control (should a new thread be started?).
Let me start out with what we observed and what we have:
On the PL: the ADC DMAC input uses a native FIFO and asserts a FIFO_WR_OVERFLOW signal, which is connected to the status register mentioned by Lars.
On the ARM we run the IIO Oscilloscope with a window of 1024.
We generate a TX signal that is one on both real and imag once every 1024 samples (we do this via a TX->RX loopback).
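For reference, the test pattern is nothing more than an impulse train in an interleaved I/Q buffer. A minimal sketch in plain C (the function name and the int16_t sample format are my assumptions, not anything from the actual test code):

```c
#include <stdint.h>
#include <string.h>

#define PERIOD 1024 /* one impulse every 1024 complex samples */

/* Fill an interleaved I/Q buffer (int16_t pairs) with zeros, and put a
 * "one" on both the real and imaginary part every PERIOD samples.
 * Hypothetical helper for illustration only. */
static void fill_impulse_buffer(int16_t *buf, size_t nsamples)
{
    memset(buf, 0, nsamples * 2 * sizeof(*buf));
    for (size_t i = 0; i < nsamples; i += PERIOD) {
        buf[2 * i]     = 1; /* I */
        buf[2 * i + 1] = 1; /* Q */
    }
}
```

With a TX->RX loopback, every received 1024-sample window should then show exactly one impulse at a fixed offset.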
On the display we see this "one" at a specific position, say position 57 of 1024, as expected. It stays at that position about 80% of the time, but sometimes it jumps to another position and then returns to position 57, where it again stays stable.
This indicates either data loss or a display problem. We poll REG_CHAN_STATUS to look at OVER_RANGE, but it stays zero. What is strange is that things recover back to position 57, which also suggests there was no FIFO_WR_OVERFLOW. But how can the ADI IIO Oscilloscope miss data if there is congestion control all the way down to the DMAC FIFO?
Is it correctly understood that if the DMAC runs out of descriptors, or in cyclic mode meets an already-processed descriptor, that will lead to FIFO_WR_OVERFLOW being asserted?
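To make sure I have the mechanism right, here is my reading of it as a toy model (all names are mine, and this is a sketch of my understanding, not the actual DMAC HDL): the write side can only fill a descriptor the consumer has already released; when it meets a busy one, data is dropped and an overflow flag latches, but nothing stops the flow.

```c
#include <stdbool.h>

#define NDESC 4 /* arbitrary ring size for the model */

/* Toy model of a cyclic descriptor ring. "Hardware" fills one block per
 * call; a descriptor becomes free only after "software" consumes it. */
struct ring {
    bool busy[NDESC]; /* true = filled, waiting for software */
    int  wr;          /* next descriptor the hardware will use */
    bool overflow;    /* models FIFO_WR_OVERFLOW latching */
};

static void hw_write_block(struct ring *r)
{
    if (r->busy[r->wr]) {
        r->overflow = true; /* nowhere to put data: block dropped */
        return;             /* flow continues; nobody is notified */
    }
    r->busy[r->wr] = true;
    r->wr = (r->wr + 1) % NDESC;
}

static void sw_consume_block(struct ring *r, int i)
{
    r->busy[i] = false; /* descriptor recycled for reuse */
}
```

In this model the overflow condition is visible only if someone polls the flag; that matches my reading that the driver is never interrupted about it.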
But the DMA driver is not told about this error (I have also read dma-axi-dmac.c to confirm). So what I see is that the flow continues as if there were no error? So no real congestion control?
So what I conjecture is that the DMA flow is running without overflow (FIFO_WR_OVERFLOW not asserted), which is good news, but that libiio / the IIO Oscilloscope is not fast enough to fetch the filled buffers. How that correlates with the signal "coming back to position 57" is still to be explained. I lack knowledge of how the samples get from the DMA'ed buffer to "the screen" via libiio; I know it goes via iio_get_sample..., but what happens if that path is too slow compared to the rate at which data lands in the DMA'ed buffer? I think what I am really asking is: is there congestion control from the user-code side of libiio all the way down to FIFO_WR_OVERFLOW being asserted? And if not, can we miss data on the processor side without knowing?
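One piece of arithmetic that may be relevant here (a toy model, not a claim about where the loss actually happens): with an impulse every 1024 samples, the position seen in a 1024-sample display window depends only on the number of lost samples modulo 1024. So losing whole buffers whose length is a multiple of 1024 would leave the impulse at position 57, while losing a partial buffer would shift it.

```c
#include <stddef.h>

#define PERIOD 1024 /* impulse repetition period in samples */

/* Toy model: the impulse sits at absolute sample index p0 + k*PERIOD.
 * After `lost` samples vanish from the stream, the position seen in a
 * PERIOD-long display window shifts by (-lost) modulo PERIOD. */
static int displayed_position(int p0, long lost)
{
    long p = (p0 - lost) % PERIOD;
    if (p < 0)
        p += PERIOD; /* C's % can be negative; normalize to [0, PERIOD) */
    return (int)p;
}
```

If user-side loss happens in whole DMA-buffer units whose size is a multiple of 1024, that would at least be consistent with the impulse always settling back at 57.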
Another possible reason for never seeing OVER_RANGE asserted in REG_CHAN_STATUS is that something clears it before I poll it? But then again, why would my position 57 recover so fast....