kernel-ark/mm
Nick Piggin 95b35127f1 slob: rework freelist handling
Improve slob by turning the freelist into a list of pages using struct page
fields; each page then carries a singly linked freelist of slob blocks via
a pointer in its struct page (see the sketch after the list below).

- The first benefit is that the slob freelists can be indexed by a smaller
  type (2 bytes, if the PAGE_SIZE is reasonable).

- Next is that freeing is much quicker because it does not have to traverse
  the entire freelist. Allocation can be slightly faster too, because we can
  skip almost-full freelist pages completely.

- Slob pages are then freed immediately when they become empty, rather than
  having a periodic timer try to free them. This improves both efficiency
  and memory consumption.
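
To make the per-page layout above concrete, here is a rough sketch of a slob
page descriptor overlaid on struct page. The type and field names here
(slobidx_t, slob_t, struct slob_page, units, free, list) are illustrative
assumptions, not a quote of the actual code:

#include <linux/mm.h>
#include <linux/list.h>
#include <linux/types.h>

/*
 * Free blocks are indexed within a page by a small signed type,
 * so 2 bytes is enough for any reasonable PAGE_SIZE.
 */
typedef s16 slobidx_t;

struct slob_block {
	slobidx_t units;
};
typedef struct slob_block slob_t;

/*
 * A per-page slob descriptor overlaid on struct page, reusing
 * otherwise-unused struct page fields for slob bookkeeping.
 */
struct slob_page {
	union {
		struct {
			unsigned long flags;	/* mandatory struct page field */
			atomic_t _count;	/* mandatory struct page field */
			slobidx_t units;	/* free units left in this page */
			unsigned long pad[2];	/* struct page fields slob leaves alone */
			slob_t *free;		/* head of this page's free-block list */
			struct list_head list;	/* list of pages with free space */
		};
		struct page page;
	};
};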

Then, rather than encoding separate size and next fields into each slob
block, we use the sign bit to distinguish between "size" and "next": size-1
blocks contain just a "next" offset, while larger blocks contain the "size"
in the first unit and the "next" offset in the second unit (a sketch of this
encoding follows the list below).

- This allows minimum slob allocation alignment to go from 8 bytes to 2
  bytes on 32-bit and 12 bytes to 2 bytes on 64-bit. In practice, it is
  best to align them to word size; however, some architectures (e.g. cris)
  could gain space savings by turning off this extra alignment.
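
A minimal sketch of this encoding, assuming the slob_t/slobidx_t layout
sketched earlier; set_slob(), slob_units() and slob_next() are illustrative
helper names rather than a claim about the real code:

static void set_slob(slob_t *s, slobidx_t size, slob_t *next)
{
	slob_t *base = (slob_t *)((unsigned long)s & PAGE_MASK);
	slobidx_t offset = next - base;	/* offset of the next free block, in units */

	if (size > 1) {
		s[0].units = size;	/* first unit holds the size... */
		s[1].units = offset;	/* ...second unit holds "next" */
	} else
		s[0].units = -offset;	/* size-1 block: sign bit marks "next" */
}

static slobidx_t slob_units(slob_t *s)
{
	if (s->units > 0)
		return s->units;
	return 1;			/* negative units means a one-unit block */
}

static slob_t *slob_next(slob_t *s)
{
	slob_t *base = (slob_t *)((unsigned long)s & PAGE_MASK);
	slobidx_t next;

	if (s[0].units < 0)
		next = -s[0].units;	/* one-unit block: "next" is stored negated */
	else
		next = s[1].units;	/* larger block: "next" is in the second unit */
	return base + next;
}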

Then, make kmalloc use its own slob_block at the front of the allocation
in order to encode the allocation size, rather than relying on not
overwriting slob's existing header block (see the sketch after this list).

- This reduces kmalloc allocation overhead, similar to the alignment
  reductions above.

- This decouples the kmalloc layer from the slob allocator.
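
A rough sketch of the small-allocation path this describes; slob_alloc()
and SLOB_UNIT are assumed allocator internals, and the large-allocation
path is omitted:

void *__kmalloc(size_t size, gfp_t gfp)
{
	slob_t *m;

	/* Requests close to a page or larger are handled separately
	 * (whole pages); that path is not shown in this sketch. */
	if (size >= PAGE_SIZE - SLOB_UNIT)
		return NULL;

	/* Ask for one extra unit to hold kmalloc's own header. */
	m = slob_alloc(size + SLOB_UNIT, gfp, 0);
	if (!m)
		return NULL;
	m->units = size;	/* record the request size in the header unit */
	return m + 1;		/* the caller's memory starts past the header */
}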

Then, add a page flag specific to slob pages.

- This means kfree of a page-aligned slob block doesn't have to traverse
  the bigblock list (see the sketch below).
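
A sketch of how kfree() can use such a flag, assuming hypothetical
slob_page() (tests the flag) and slob_free() helpers; pages without the
flag are whole-page allocations and are released directly:

void kfree(const void *block)
{
	struct slob_page *sp;

	if (!block)
		return;

	sp = (struct slob_page *)virt_to_page(block);
	if (slob_page(sp)) {
		/* The block came from a slob page: step back to the
		 * kmalloc header to recover the allocation size. */
		slob_t *m = (slob_t *)block - 1;
		slob_free(m, m->units + SLOB_UNIT);
	} else {
		/* A whole-page ("bigblock") allocation: no list walk needed. */
		put_page(&sp->page);
	}
}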

I would get benchmarks, but my test box's network doesn't come up with
slob before this patch. I think something is timing out. Anyway, things
are faster after the patch.

Code size goes up about 1K, however dynamic memory usage _should_ be
lower even on relatively small memory systems.

A future todo item is to restore the cyclic free-list search, rather than
always beginning at the start.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-16 09:05:35 -07:00
allocpercpu.c
backing-dev.c [PATCH] nfs: fix congestion control 2007-03-16 19:25:05 -07:00
bootmem.c
bounce.c block: blk_max_pfn is somtimes wrong 2007-03-27 08:52:47 +02:00
fadvise.c
filemap_xip.c xip sendfile removal 2007-07-10 08:04:15 +02:00
filemap.c sendfile: kill generic_file_sendfile() 2007-07-10 08:04:14 +02:00
filemap.h
fremap.c
highmem.c [PATCH] i386: PARAVIRT: add kmap_atomic_pte for mapping highpte pages 2007-05-02 19:27:15 +02:00
hugetlb.c hugetlb: remove unnecessary nid initialization 2007-07-16 09:05:35 -07:00
internal.h Make page->private usable in compound pages 2007-05-07 12:12:53 -07:00
Kconfig sh64: generic quicklist support. 2007-05-14 09:55:35 +09:00
madvise.c Detach sched.h from mm.h 2007-05-21 09:18:19 -07:00
Makefile Quicklists for page table pages 2007-05-07 12:12:54 -07:00
memory_hotplug.c memory hotplug: fix unnecessary calling of init_currenty_empty_zone() 2007-06-01 08:18:29 -07:00
memory.c MM: use DIV_ROUND_UP() in mm/memory.c 2007-07-16 09:05:35 -07:00
mempolicy.c [PATCH] Page migration: Fix vma flag checking 2007-03-05 07:57:51 -08:00
mempool.c
migrate.c page migration: fix NR_FILE_PAGES accounting 2007-04-24 08:23:08 -07:00
mincore.c
mlock.c Detach sched.h from mm.h 2007-05-21 09:18:19 -07:00
mmap.c security: Protection for exploiting null dereference using mmap 2007-07-11 22:52:29 -04:00
mmzone.c
mprotect.c
mremap.c security: Protection for exploiting null dereference using mmap 2007-07-11 22:52:29 -04:00
msync.c Detach sched.h from mm.h 2007-05-21 09:18:19 -07:00
nommu.c security: Protection for exploiting null dereference using mmap 2007-07-11 22:52:29 -04:00
oom_kill.c oom: fix constraint deadlock 2007-05-07 12:12:55 -07:00
page_alloc.c MM: alloc_large_system_hash() can free some memory for non power-of-two bucketsize 2007-07-16 09:05:35 -07:00
page_io.c
page-writeback.c consolidate generic_writepages and mpage_writepages 2007-05-11 08:29:35 -07:00
pdflush.c
prio_tree.c
quicklist.c Quicklists for page table pages 2007-05-07 12:12:54 -07:00
readahead.c readahead: code cleanup 2007-05-07 12:12:52 -07:00
rmap.c mm: kill validate_anon_vma to avoid mapcount BUG 2007-06-28 11:34:53 -07:00
shmem_acl.c
shmem.c shmem: convert to using splice instead of sendfile() 2007-07-10 08:04:15 +02:00
slab.c Make /proc/slabinfo use seq_list_xxx helpers 2007-07-16 09:05:35 -07:00
slob.c slob: rework freelist handling 2007-07-16 09:05:35 -07:00
slub.c slub: remove useless EXPORT_SYMBOL 2007-07-06 11:45:11 -07:00
sparse.c Move three functions that are only needed for CONFIG_MEMORY_HOTPLUG 2007-06-08 17:23:33 -07:00
swap_state.c
swap.c Add suspend-related notifications for CPU hotplug 2007-05-09 12:30:56 -07:00
swapfile.c mm: make read_cache_page synchronous 2007-05-07 12:12:51 -07:00
thrash.c Bug in mm/thrash.c function grab_swap_token() 2007-05-11 08:29:32 -07:00
tiny-shmem.c
truncate.c fs: convert core functions to zero_user_page 2007-05-09 12:30:55 -07:00
util.c
vmalloc.c Make __vunmap static 2007-05-17 05:23:04 -07:00
vmscan.c Add suspend-related notifications for CPU hotplug 2007-05-09 12:30:56 -07:00
vmstat.c mm: fixup /proc/vmstat output 2007-07-06 10:26:50 -07:00