2009-08-28 15:02:12 Memory too fragmented to start large application
Steve Strobel (UNITED STATES)
Message: 79366
We start a dozen or so applications on startup (indirectly from /etc/rc). One of them allocates a relatively large (3.3MB) chunk of memory. Even when we start it before most of the other applications, memory is sometimes too fragmented to satisfy that allocation request. In one test we had 9MB free in total, but no chunk larger than 2MB. When all of the allocations do succeed on startup (success is inconsistent), we end up with around 3MB free.
What are my options for ensuring that all of the applications will be able to start? The things that come to mind include:
- Reduce the amount of memory used by the applications
- Do the large allocations first
- Change memory allocators (we are currently using SLAB with non-power-of-2 allocations disabled)
Our applications use dynamic memory allocation very rarely, so allocation speed isn't a big deal for us. Would a different memory allocator (SLUB or SLOB) be more appropriate?
Allowing non-power-of-2 allocations would seem to be an advantage for us, but docs.blackfin.uclinux.org/doku.php?id=linux-kernel:memory_allocation says the non-power-of-2 allocator "has not (yet) been ported to the 2.6 kernel." An option for non-power-of-2 allocations does seem to be selectable via "make menuconfig", independently of which allocator is used. Is it now simply an option available with any of the three allocators?
When an allocation does fail and show_free_areas() gets called, it prints very helpful info about the available memory blocks. Is there a way to get that same information by running a command at the root:~> prompt? It would be helpful to insert it in the scripts that run at startup so we could see what sizes of blocks are available at each step as it loads programs into memory.
I am also curious about what things get allocated in a single large chunk when a program starts. In this case the program itself is a 1.2MB file; it obviously needs to be loaded into memory. What happens with the bss and data segments and the stack; are they allocated as a single block, or separately?
Thanks for any suggestions,
Steve
Allocation of length 3198976 from process 250 failed
DMA per-cpu:
CPU 0: Hot: hi: 6, btch: 1 usd: 4 Cold: hi: 2, btch: 1 usd: 0
Active:13 inactive:2 dirty:3 writeback:0 unstable:0
free:2311 slab:494 mapped:0 pagetables:0 bounce:0
DMA free:9244kB min:732kB low:912kB high:1096kB active:52kB inactive:8kB present:33528kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0
DMA: 117*4kB 89*8kB 34*16kB 29*32kB 19*64kB 8*128kB 5*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB 0*8192kB 0*16384kB 0*32768kB = 9244kB
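Reading the dump against the failure line bears this out: the failed request is 3198976 bytes = 3124kB, but the largest free block listed is the single 2048kB one. The per-size counts still add up to plenty of total memory (117*4kB + 89*8kB + 34*16kB + 29*32kB + 19*64kB + 8*128kB + 5*256kB + 1*1024kB + 1*2048kB = 9244kB), so the allocation fails even with roughly 9MB free.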
2009-08-28 15:35:48 Re: Memory too fragmented to start large application
Mike Frysinger (UNITED STATES)
Message: 79367
ignore the docs about the NP2 allocator. it's all dead code and obsoleted by stuff already in mainline.
/proc/buddyinfo should be pretty much the same info as you see in the dump message.
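for the startup scripts, a fragment like this between launches would capture the free-block picture at each step (log path and marker text are just placeholders):

# /etc/rc fragment (sketch): snapshot the buddy lists before each app
echo "=== before big_app ===" >> /tmp/frag.log
cat /proc/buddyinfo >> /tmp/frag.log
big_app &

each column in /proc/buddyinfo counts free blocks of one order: column N is blocks of PAGE_SIZE << N bytes, i.e. 4kB, 8kB, 16kB, ... with 4kB pages.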
how things get loaded depends on the file format (FLAT vs FDPIC). for FDPIC, run `readelf -l` on the file to see the load sections. stack is always allocated dynamically.
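for example (binary name hypothetical):

root:~> readelf -l /usr/bin/big_app

in the LOAD program headers, MemSiz minus FileSiz is the zero-filled bss the loader also has to find room for on top of what is in the file.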
your best bet is to allocate the large chunks early and never free them.
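a minimal sketch of that approach (size taken from the original post; names made up):

/* reserve the large block once, before the heap fragments */
#include <stdio.h>
#include <stdlib.h>

#define BIG_CHUNK_SIZE (3300 * 1024)  /* the ~3.3MB allocation that fails */

static void *big_chunk;               /* held for the life of the process */

int main(void)
{
    big_chunk = malloc(BIG_CHUNK_SIZE);
    if (big_chunk == NULL) {
        fprintf(stderr, "startup: no %d contiguous bytes available\n",
                BIG_CHUNK_SIZE);
        return 1;
    }
    /* ... run the application, reusing big_chunk; never free(big_chunk) ... */
    return 0;
}

on no-MMU targets malloc has to find physically contiguous memory, so doing this first thing in the first app launched from /etc/rc is what matters.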
2009-08-31 13:26:11 Re: Memory too fragmented to start large application
Steve Strobel (UNITED STATES)
Message: 79413
Thanks for the info, Mike. We switched to SLOB and enabled NP2; it is working fine.
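For reference, the allocator choice ends up in the kernel .config roughly like this (I haven't confirmed the exact symbol name for the NP2 option in the Blackfin tree):

# CONFIG_SLAB is not set
# CONFIG_SLUB is not set
CONFIG_SLOB=y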
2009-09-01 03:38:39 Re: Memory too fragmented to start large application
Hari Prasad (INDIA)
Message: 79422
Hi Steve,
May I know which uClinux release version you are using?
2009-09-29 13:56:28 Re: Memory too fragmented to start large application
Steve Strobel (UNITED STATES)
Message: 80642
Sorry for the belated reply; I didn't notice your post until now.
We are using uClinux-dist-2008R1.5-RC3.