kernel-ark/mm
Christoph Lameter 51ed449127 [PATCH] Reorder ZVCs according to cacheline
The global and per zone counter sums are in arrays of longs.  Reorder the ZVCs
so that the most frequently used ZVCs are placed in the same cacheline.  That
way calculations of the global, node and per zone vm state touch only a
single cacheline.  This is mostly important for 64 bit systems, where one 128
byte cacheline holds only 16 longs.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-02-11 10:51:17 -08:00
allocpercpu.c
backing-dev.c
bootmem.c
bounce.c
fadvise.c
filemap_xip.c [PATCH] mm: mremap correct rmap accounting 2007-01-30 08:33:32 -08:00
filemap.c [PATCH] mm: remove find_trylock_page 2007-02-09 08:06:14 -08:00
filemap.h
fremap.c
highmem.c [PATCH] Use ZVC for free_pages 2007-02-11 10:51:17 -08:00
hugetlb.c [PATCH] hugetlb: preserve hugetlb pte dirty state 2007-02-09 09:25:46 -08:00
internal.h
Kconfig
madvise.c
Makefile
memory_hotplug.c
memory.c [PATCH] page_mkwrite caller race fix 2007-02-11 10:51:17 -08:00
mempolicy.c
mempool.c
migrate.c
mincore.c
mlock.c
mmap.c [PATCH] Add install_special_mapping 2007-02-09 09:25:47 -08:00
mmzone.c
mprotect.c
mremap.c [PATCH] mm: mremap correct rmap accounting 2007-01-30 08:33:32 -08:00
msync.c
nommu.c
oom_kill.c
page_alloc.c [PATCH] Use ZVC for free_pages 2007-02-11 10:51:17 -08:00
page_io.c
page-writeback.c Fix balance_dirty_page() calculations with CONFIG_HIGHMEM 2007-01-29 16:37:38 -08:00
pdflush.c
prio_tree.c
readahead.c
rmap.c
shmem_acl.c
shmem.c
slab.c [PATCH] slab: use parameter passed to cache_reap to determine pointer to work structure 2007-02-11 10:51:17 -08:00
slob.c
sparse.c
swap_state.c
swap.c
swapfile.c
thrash.c
tiny-shmem.c
truncate.c
util.c
vmalloc.c
vmscan.c [PATCH] Use ZVC for inactive and active counts 2007-02-11 10:51:17 -08:00
vmstat.c [PATCH] Reorder ZVCs according to cacheline 2007-02-11 10:51:17 -08:00