I made a post a couple of weeks ago about irregular behaviour when transmitting or receiving at high sample rates. That turned out to be buffer underflows/overflows caused by a limitation in the current implementation of the IIO daemon, namely its lack of a zero-copy feature.

I combined the "iio_adi_xflow_check" program with parts of the "ad9361_iiostream" example into a new program (attached) that sets the relevant parameters, transmits a single-tone signal, and prints to the console, each time the buffer is pushed, whether an underflow occurred. This program works fine over a network context (provided the sample rate stays below 3 MSps), but if I run it locally on the Zynq, even at the minimum sample rate of 1 MSps, I get severe underflows: roughly 4 seconds of no data going through for every second of actual transmission (see attached video clip). Running a single-tone transmission in GRC locally on the Zynq gives only slightly better results. In both cases the Zynq processor runs at 100% usage.
My impression from reading the wiki was that the local context uses a high-speed memory-mapped interface. If the local backend really uses a faster memory interface, why does a simple single-tone transmission perform so much worse when run locally than when it is done over the network?