d6269543ef
The version of SLOB in -mm always scans its free list from the beginning, which results in small allocations and free segments clustering at the beginning of the list over time. This causes the average search to scan over a large stretch at the beginning on each allocation.

By starting each page search where the last one left off, we evenly distribute the allocations and greatly shorten the average search.

Without this patch, kernel compiles on a 1.5G machine spend a large amount of system time on list scanning. With this patch, compiles are within a few seconds of the performance of a SLAB kernel, with no notable change in system time.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
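The change described above amounts to switching the page search from a first-fit scan that always restarts at the head of the free-page list to a next-fit scan that resumes where the previous search stopped. The sketch below is a minimal illustration of that "rotor" idea on a hypothetical circular list of pages, not the actual slob.c code; the names (`page_node`, `search_rotor`, `page_has_space`) are invented for the example.

```c
/* Minimal sketch of "start where the last search left off".
 * All names here (page_node, search_rotor, page_has_space) are
 * hypothetical; the real slob.c uses its own list and block types.
 */
#include <stddef.h>

struct page_node {
	struct page_node *next;	/* circular free-page list */
	size_t free_units;	/* free space remaining on this page */
};

/* Rotor: remembers where the previous successful search ended. */
static struct page_node *search_rotor;

static int page_has_space(struct page_node *p, size_t need)
{
	return p->free_units >= need;
}

/* First-fit: always restart from the list head, so the cluster of
 * small leftovers near the head is re-examined on every allocation. */
struct page_node *find_page_first_fit(struct page_node *head, size_t need)
{
	struct page_node *p = head;

	do {
		if (page_has_space(p, need))
			return p;
		p = p->next;
	} while (p != head);
	return NULL;
}

/* Next-fit: resume from the rotor, walk at most once around the ring,
 * and leave the rotor at the page that satisfied the request. */
struct page_node *find_page_next_fit(struct page_node *head, size_t need)
{
	struct page_node *start = search_rotor ? search_rotor : head;
	struct page_node *p = start;

	do {
		if (page_has_space(p, need)) {
			search_rotor = p;
			return p;
		}
		p = p->next;
	} while (p != start);
	return NULL;
}
```

Because successive searches pick up from different points around the ring, allocations spread evenly across the pages instead of piling up behind the head of the list, which is what shortens the average scan.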