fee852e374
Here is something I spotted (while looking for something entirely different) the other day. Rather than using a completion in each and every struct gfs2_holder, this removes it in favour of hashed wait queues, thus saving a considerable amount of memory both on the stack (where a number of gfs2_holder structures are allocated) and in particular in the gfs2_inode, which has 8 gfs2_holder structures embedded within it. As a result, on x86_64 the gfs2_inode shrinks from 2488 bytes to 1912 bytes, a saving of 576 bytes per inode (no, that's not a typo!). In practice the result is even better than that: now that a gfs2_inode fits under the 2048-byte barrier, two fit per 4k slab page, effectively halving the amount of memory required to store gfs2_inodes.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Files changed:

locking
acl.c
acl.h
bmap.c
bmap.h
daemon.c
daemon.h
dir.c
dir.h
eaops.c
eaops.h
eattr.c
eattr.h
gfs2.h
glock.c
glock.h
glops.c
glops.h
incore.h
inode.c
inode.h
Kconfig
lm.c
lm.h
locking.c
log.c
log.h
lops.c
lops.h
main.c
Makefile
meta_io.c
meta_io.h
mount.c
mount.h
ondisk.c
ops_address.c
ops_address.h
ops_dentry.c
ops_dentry.h
ops_export.c
ops_export.h
ops_file.c
ops_file.h
ops_fstype.c
ops_fstype.h
ops_inode.c
ops_inode.h
ops_super.c
ops_super.h
ops_vm.c
ops_vm.h
quota.c
quota.h
recovery.c
recovery.h
rgrp.c
rgrp.h
super.c
super.h
sys.c
sys.h
trans.c
trans.h
util.c
util.h