
How to improve TCP/IP networking with uClinux/BF561 ?

Question asked by Wojtek on Jun 30, 2012
Latest reply on Jan 24, 2013 by Aaronwu

We are trying to achieve decent TCP/IP performance on the Blackfin, over either regular RJ-45 Ethernet or Ethernet-over-USB. As of today the performance is poor in both cases: GigE through the AX88180 and TCP/IP over USB. Do the results below suggest that the TCP/IP implementation on Blackfin is inefficient in general? If there are knobs to crank up, where might they be hiding? Please note that the throughput reported below is roughly 20% of what the hardware should be capable of, which raises the question of whether it can be improved, and how.

 

The Ethernet benchmark was done with uClinux --> gigE switch --> Windows.

During the USB/RNDIS benchmark we connected the board to a Windows PC with a USB cable and used the Ethernet Gadget, version: Memorial Day 2008.

 

The numbers were collected with the following commands. The results fluctuated within roughly plus or minus half a MByte/sec. Similar results were obtained with the board connected to a Linux desktop rather than Windows, so the poor performance cannot be blamed on Windows.

 

server> iperf -m -w 64k -s

client> iperf -f M -w 64k -c <server_IP>
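As a cross-check independent of iperf, a minimal bulk-send test along the following lines could be run on the board. This is only a sketch, not part of the benchmark above: the server address, port, and transfer size are placeholders, and the receiving end is assumed to be any TCP sink (for example a netcat listener piping to /dev/null).

/* Minimal TCP bulk-send throughput check (sketch).
 * Placeholders: server IP, port 5001, 64 MBytes transfer, 64 KiB chunks.
 * The peer can be any TCP sink, e.g. "nc -l -p 5001 > /dev/null". */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

/* Static buffer: uClinux stacks are small, so avoid a 64 KiB stack array. */
static char buf[64 * 1024];

int main(int argc, char **argv)
{
    const char *server_ip = (argc > 1) ? argv[1] : "192.168.0.2"; /* placeholder */
    int port = 5001;
    long target = 64L * 1024 * 1024;   /* push 64 MBytes */
    long sent = 0;
    int wnd = 64 * 1024;               /* mirror iperf -w 64k */
    struct sockaddr_in addr;
    struct timeval t0, t1;
    double secs, mbytes;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &wnd, sizeof(wnd));

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, server_ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    memset(buf, 0xA5, sizeof(buf));
    gettimeofday(&t0, NULL);
    while (sent < target) {
        ssize_t n = write(fd, buf, sizeof(buf));
        if (n <= 0) { perror("write"); break; }
        sent += n;
    }
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    mbytes = sent / (1024.0 * 1024.0);
    printf("%.1f MBytes in %.2f s = %.1f MBytes/sec\n", mbytes, secs, mbytes / secs);
    close(fd);
    return 0;
}

If such a plain sender hits the same ceiling as iperf, the bottleneck is below the benchmark tool (driver, bus timing, or the stack itself) rather than in iperf.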

 

Server is Windows, client is uClinux

************************************

uClinux --> ASIX --> Windows 12.9 MBytes/sec

uClinux --> USB  --> Windows 8.1 MBytes/sec

Server is uClinux, client is Windows

************************************

Windows --> ASIX --> uClinux 10.9 MBytes/sec

Windows --> USB  --> uClinux 6.3 MBytes/sec

  

Our board setup:

BlackVME board with a BF561 running single-core at CCLK = 600 MHz and SCLK = 120 MHz; Core B was not enabled. The NET2272 USB chip is wired to AMS2 in 16-bit mode (following the ADI USB-LAN EZ-Extender reference board). The AX88180 is wired to AMS3 in 32-bit mode, following the ADI reference design for the AX88180. The Blackfin board was freshly booted and essentially idle, so as far as I can tell nothing major was competing for the network bandwidth.

 

Both async banks run with the fastest AMBCTL settings we could get to work. We tried even more aggressive timings, but they turned out to be too fast to work, so the values below were used for benchmarking (a short code sketch follows the values).

 

Bank #2 = 0x3314 (3 cycles AWE/ARE, 0 hold, 1 setup, 1 bank transition)

Bank #3 = 0x8854 (8 cycles AWE/ARE, 1 hold, 1 setup, 1 bank transition)
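For reference, a minimal sketch of how these timings could be applied from kernel code is below. It assumes the standard Blackfin MMR accessor macros from the uClinux kernel headers (bfin_write_EBIU_AMBCTL1, SSYNC); in practice the values are normally set through the BANK_2/BANK_3 kernel configuration options or by the bootloader rather than by hand.

/* Sketch only: apply the bank 2/3 timings above on BF561.
 * Assumes asm/blackfin.h provides the bfin_write_* MMR accessors. */
#include <asm/blackfin.h>

static void blackvme_apply_async_timings(void)
{
    /* EBIU_AMBCTL1 covers async banks 2 and 3 on the BF561:
     * bank 2 (NET2272 on AMS2) occupies bits 15:0,
     * bank 3 (AX88180 on AMS3) occupies bits 31:16. */
    bfin_write_EBIU_AMBCTL1((0x8854UL << 16) | 0x3314UL);
    SSYNC();
}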

Outcomes