um: implement flush_cache_vmap/flush_cache_vunmap

vmalloc()-heavy workloads in UML are extremely slow, due to
flushing the entire kernel VM space (flush_tlb_kernel_vm())
on the first segfault.

Implement flush_cache_vmap() to avoid that, and while at it
also add flush_cache_vunmap() since it's trivial.

This speeds up my vmalloc()-heavy test of copying files out
from /sys/kernel/debug/gcov/ by 30x (from 30s to 1s).

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Acked-By: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Johannes Berg 2021-03-15 23:38:04 +01:00 committed by Richard Weinberger
parent dd3035a21b
commit 80f849bf54
2 changed files with 10 additions and 1 deletion

arch/um/include/asm/cacheflush.h (new file)

@@ -0,0 +1,9 @@
+#ifndef __UM_ASM_CACHEFLUSH_H
+#define __UM_ASM_CACHEFLUSH_H
+#include <asm/tlbflush.h>
+#define flush_cache_vmap flush_tlb_kernel_range
+#define flush_cache_vunmap flush_tlb_kernel_range
+#include <asm-generic/cacheflush.h>
+#endif /* __UM_ASM_CACHEFLUSH_H */

arch/um/include/asm/tlb.h

@@ -5,7 +5,7 @@
 #include <linux/mm.h>

 #include <asm/tlbflush.h>
-#include <asm-generic/cacheflush.h>
+#include <asm/cacheflush.h>
 #include <asm-generic/tlb.h>

 #endif