gluster-devel

[Gluster-devel] performance improvements


From: Vincent Régnard
Subject: [Gluster-devel] performance improvements
Date: Tue, 23 Oct 2007 18:36:53 +0200
User-agent: Thunderbird 1.5.0.10 (X11/20070221)

Hi all,

We are presently trying to tune our non-gluster configuration to improve glusterfs performance. My config is glusterfs 1.3.7 with fuse-2.7.0-glfs5 on Linux 2.6.16.55. We have 3 clients and 3 servers on a 100 Mb/s network, with a 5 ms round trip between clients and servers. The 3 clients replicate with AFR on the client side over the 3 servers.

Our read/write throughput, measured with dbench, is between 2 and 5 MB/s.
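
For reference, the benchmark is run roughly as follows; the mount point, duration and client count here are illustrative, not our exact invocation:

# 10 simulated clients against the glusterfs mount for 60 seconds
dbench -D /mnt/glusterfs -t 60 10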

The AFR synchronisation using the "find -mtime -1 -type f -exec head -c1" trick (spelled out below) takes approximately 30 minutes for a 20 GB filesystem with 300,000 files, which seems too long to be acceptable for us. I'd like to tune some parameters to increase performance.
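
Concretely, the full command looks like this (the mount point is an assumption); reading the first byte of each file forces AFR to open it and heal any out-of-date replica:

# Read one byte of every recently modified file to trigger AFR
# self-heal on open; /mnt/glusterfs stands in for the real mount point.
find /mnt/glusterfs -mtime -1 -type f -exec head -c1 {} \; > /dev/null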

I can imagine that reducing the round trip between clients and servers might help, but I cannot actually do anything about that. The only thing I might be able to do is configure some QoS. Do you have any suggestions about how to do that? Would giving priority to tcp/6996 between clients and servers really help? A sketch of what I have in mind follows.
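
Something along these lines with tc, for example; the interface name is an assumption, and this simply puts GlusterFS traffic in the highest-priority band of a prio qdisc:

# Classify tcp/6996 into band 0 (highest priority) of a prio qdisc on eth0
tc qdisc add dev eth0 root handle 1: prio
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip protocol 6 0xff match ip dport 6996 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip protocol 6 0xff match ip sport 6996 0xffff flowid 1:1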

At the (Linux) kernel level, could changing the preemption model (CONFIG_PREEMPT) and the timer frequency (CONFIG_HZ) produce an improvement?

Our present config is as follows:

# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y
CONFIG_PREEMPT_BKL=y

# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
# CONFIG_HZ_1000 is not set
CONFIG_HZ=250

Is it better to prefer SMP to non-SMP kernel builds? (We presently have SMP enabled for our dual-core machines.) What impact would deactivating SMP have on glusterfs performance?

We use LinuxThreads (glibc 2.3) and have no NPTL support; can this influence performance as well?

We naturally already have the glusterfs performance translators in our configuration (io-threads, io-cache, read-ahead and write-behind); a simplified view of our client-side stack is below.
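
The stack looks roughly like this in gluster 1.3 volume-spec syntax; server names, subvolume names and option values are simplified placeholders, not our exact spec:

# One protocol/client volume per server (client2/client3 are analogous)
volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume brick
end-volume

# Client-side replication over the three servers
volume afr
  type cluster/afr
  subvolumes client1 client2 client3
end-volume

# Performance translators stacked on top of the replica
volume iot
  type performance/io-threads
  option thread-count 4
  subvolumes afr
end-volume

volume wb
  type performance/write-behind
  subvolumes iot
end-volume

volume ra
  type performance/read-ahead
  subvolumes wb
end-volume

volume ioc
  type performance/io-cache
  subvolumes ra
end-volume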

Thanks in advance for your comments or suggestions.

Vincent.



