Linux cache usage with ftpd

Question asked by RobertCraig on Apr 29, 2013
Latest reply on May 6, 2013 by RobertCraig

I'm using the 2010R1-RC5 release with the 2.6.34.7 kernel on a BF518F processor.  The application is a pretty standard setup: an SD card connected to the RSI lines, with inetd used to spawn ftpd sessions for uploading / downloading files.

What I'm finding is that during FTP transfers of large files (tens of MB), memory usage grows until the system runs out of memory, which often causes the FTP session to fail.  If the session does complete, the memory eventually gets freed and the system returns to normal.  It's not uncommon, though, for the OOM killer to step in and kill ftpd; when that happens the memory stays allocated and the system is effectively dead, since there isn't enough memory left to run anything from the command line.  I can clearly see that the memory is being taken up by the kernel buffering (I'm assuming) the file system transactions.  This looks similar to a previous thread in which someone used something along the lines of "echo 3 > /proc/sys/vm/drop_caches" to help free the memory.
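
In case it helps frame the question, the commands below are just for illustration: roughly how the cache growth can be watched from /proc/meminfo during a transfer, and the drop_caches workaround from that other thread.

    # watch the page cache grow while the FTP transfer runs
    while true; do cat /proc/meminfo; sleep 5; done

    # workaround from the other thread: flush dirty data first,
    # then drop the page cache, dentries and inodes
    sync
    echo 3 > /proc/sys/vm/drop_caches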

So... my questions are:

Is this a known deficiency in the Linux kernel?  I can understand that memory fragmentation becomes a problem given the lack of a full MMU, but it still strikes me that rendering a system unusable because of a file write is a pretty serious flaw.

Is there any kernel configuration setting that I can use to help alleviate this problem?
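
To give a better idea of what I'm asking, the sort of thing I have in mind is the writeback tuning under /proc/sys/vm.  The knobs and values below are only a guess on my part; I don't know whether they are appropriate (or even the right ones) for a nommu Blackfin system:

    # start background writeback sooner, and block writers earlier
    echo 2 > /proc/sys/vm/dirty_background_ratio
    echo 5 > /proc/sys/vm/dirty_ratio
    # wake the flusher threads more often (units are centiseconds)
    echo 100 > /proc/sys/vm/dirty_writeback_centisecs
    # keep a larger free-memory reserve for the rest of the system
    echo 2048 > /proc/sys/vm/min_free_kbytes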

Will going to the latest kernel help at all?

Thanks!

Robert Craig.
