I'm having a difficult time with the BF536 DMA. I currently have a VDK-based VoIP-like application which streams RTP out of the EMAC at one packet per 10 ms, so each frame is 92 bytes of UDP plus the Ethernet overhead.
In a 30-second streaming test I notice choppy audio at the far end. Inspecting the capture in Wireshark shows the stream drops ~5 packets. Looking at the core registers under VisualDSP++ I can see that there were exactly that number of underruns (the EMAC_TX_DMAUND count). Repeating the test consistently shows these two values match, meaning that the dropped packets are being lost during the DMA transfer to the EMAC.
The only higher-priority DMA channel in use is EMAC Receive (DMA 1), and no packets are being received during the test, so this channel shouldn't be doing anything. There are other lower-priority channels in use (SPORT0RX, SPORT0TX, UART0RX, UART0TX), but my understanding is that these would be pre-empted by the EMAC TX at priority 2. Is this assessment correct?
I have been working through some ideas to try to eliminate the problem. One was to move any data buffers in use into L1 memory. All buffers within my EMAC driver are already in L1, and application buffers and variables (such as sockets) have since been moved into L1 as well. This does not seem to make any improvement.
I also tried enabling DMA Traffic Control by setting DMA_TC_PER to 0x3800, but this actually makes it worse!
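For reference, that experiment is a single MMR write. A minimal sketch, assuming the VisualDSP++ `cdefBF536.h` MMR pointer macros (check the hardware reference for the exact DMA_TC_PER field layout before reusing the value):

```c
#include <cdefBF536.h>   /* VisualDSP++ system MMR pointer definitions */

/* Enable DMA traffic control, which groups same-direction bus requests
 * to reduce read/write turnaround on the DMA and external buses.
 * 0x3800 is simply the value tried in this thread; the bit fields set
 * the per-bus traffic periods. */
static void enable_dma_traffic_control(void)
{
    *pDMA_TC_PER = 0x3800;
}
```

Disabling it again is just `*pDMA_TC_PER = 0;`.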
I also tried switching off the UART code so only the SPORT and EMAC were used. No change.
The only thing that seemed to make a difference in my application was altering the way in which the threads work. This included varying the duration of VDK_Sleep() within some loops, and exchanging VDK_Sleep() for VDK_Yield() in others where the only purpose of the sleep was to yield priority. This made the problem almost disappear in some cases but made it even worse in others. Furthermore, the same alterations removed and then put back would yield differing effects between code rebuilds, which is even more perplexing.
So if it is related to threading, or the VDK kernel, what should I consider that might affect DMA?
It appears that we have corrected the problem.
We went through and checked our clock values and found that SCLK was only running at 50 MHz. This was affecting the EBIU, which we thought was running at 125 MHz, so we had the wrong value set in EBIU_SDRRC. We altered EBIU_SDRRC to suit a 50 MHz SCLK.
Somehow correcting this mismatch also cleared the DMA underruns, although we aren't certain why. However, we have now tested one TX RTP stream simultaneously with three RX RTP streams and there are no underruns or overflows.
Can SDRAM running at a different speed from the DMA controller cause a problem? Is this anything to do with the speed of the SDRAM relative to L1 memory (which is in the CCLK domain, equal to VCO, i.e. 250 MHz)?
If we change the value of EBIU_SDRRC back to what it was (to suit a 125 MHz SCLK), the problem returns, so it is reproducible.
To add further detail, from ADI support:
"From your update in the forum thread, I understand wrong configuration of the EBIU was causing the issue at your end. This is so because, the LwIP uses the DDR memory space and hence the wrong configuration of EBIU can cause it not to function in the way expected."