kernel-ark/include/asm-generic
Andi Kleen 2b4a08150e [PATCH] x86-64: Increase TLB flush array size
The generic TLB flush functions kept up to 506 pages per
CPU to avoid too frequent IPIs.

This value was sized for the L1 cache of older x86 CPUs,
but on modern CPUs it no longer makes much sense.
TLB flushing is slow enough that using the L2 cache is fine.

This patch increases the flush array on x86-64 to cache
5350 pages. That is roughly 20MB with 4K pages. It speeds
up large munmaps in multithreaded processes on SMP considerably.

The cost is roughly 42k of memory per CPU, which is reasonable.

I only increased it on x86-64 for now, but it would probably
make sense to increase it everywhere. Embedded architectures
with SMP may keep it smaller to save some memory per CPU.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-12 10:49:58 -07:00
4level-fixup.h
bitops.h
bug.h
cputime.h
div64.h
dma-mapping-broken.h
dma-mapping.h
emergency-restart.h
errno-base.h
errno.h
fcntl.h [PATCH] Clean up struct flock64 definitions 2005-09-07 16:57:38 -07:00
ide_iops.h
iomap.h
ipc.h
local.h
page.h
pci-dma-compat.h
pci.h [PATCH] Make sparc64 use setup-res.c 2005-09-08 14:57:25 -07:00
percpu.h
pgtable-nopmd.h
pgtable-nopud.h
pgtable.h [PATCH] x86: ptep_clear optimization 2005-09-05 00:05:48 -07:00
resource.h
rtc.h
sections.h [PATCH] Kprobes: prevent possible race conditions generic 2005-09-07 16:57:59 -07:00
siginfo.h
signal.h
statfs.h
termios.h
tlb.h [PATCH] x86-64: Increase TLB flush array size 2005-09-12 10:49:58 -07:00
topology.h
uaccess.h
unaligned.h [PATCH] optimise 64bit unaligned access on 32bit kernel 2005-09-07 16:57:36 -07:00
vmlinux.lds.h [PATCH] i386 / uml: add dwarf sections to static link script 2005-09-10 12:00:17 -07:00
xor.h