From: Kelly Burkhart
Subject: Re: [Gluster-devel] libgfapi threads
Date: Wed, 12 Feb 2014 22:26:50 -0600
We've noticed that gfapi threads won't die until process exit; they aren't joined in glfs_fini(). Is that expected? The following will create 4*N threads:

for( idx=0; idx<N; ++idx ) {
    glfs_new
    glfs_set_volfile_server
    glfs_init
    // pause a bit here
    glfs_fini
}
-K
On Fri, Jan 31, 2014 at 9:07 AM, Kelly Burkhart <address@hidden> wrote:
Thanks Anand,

I notice three different kinds of threads: gf_timer_proc and syncenv_processor in libglusterfs, and glfs_poller in the api. Right off the bat two syncenv threads are created, plus one each of the other two. In my limited testing, it doesn't seem to take much for more threads to be created.

The reason I'm concerned is that we intend to run our gluster client on a machine with all but one core dedicated to latency-critical apps. The remaining core will handle everything else. In this scenario, creating scads of threads seems likely to be a pessimization compared to just having one thread with an epoll loop handling everything. Would any of you familiar with the guts of gluster predict a problem with pegging a gfapi client and all of its child threads to a single core?

BTW, attached is a simple patch to help me track what threads are created; it's Linux-specific, but I think it's useful. It adds an identifier and instance count to each kind of child thread, so I see this in top:

top - 08:35:47 up 48 min,  3 users,  load average: 0.12, 0.07, 0.05
Tasks:   9 total,   0 running,   9 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.2%us, 0.1%sy, 0.0%ni, 98.9%id, 0.0%wa, 0.0%hi, 0.7%si, 0.0%st
Mem:  16007M total,  1372M used, 14634M free,    96M buffers
Swap:  2067M total,     0M used,  2067M free,   683M cached

  PID USER  PR NI VIRT RES SHR S %CPU %MEM  TIME+   COMMAND
22979 kelly 20  0 971m 133m 16m S   0  0.8 0:00.06 tst
22987 kelly 20  0 971m 133m 16m S   0  0.8 0:00.00 tst/sp:0
22988 kelly 20  0 971m 133m 16m S   0  0.8 0:00.00 tst/sp:1
22989 kelly 20  0 971m 133m 16m S   0  0.8 0:00.03 tst/gp:0
22990 kelly 20  0 971m 133m 16m S   0  0.8 0:00.00 tst/tm:0
22991 kelly 20  0 971m 133m 16m S   0  0.8 0:00.00 tst/sp:2
22992 kelly 20  0 971m 133m 16m S   0  0.8 0:00.00 tst/sp:3
22993 kelly 20  0 971m 133m 16m S   0  0.8 0:01.98 tst/gp:1
22994 kelly 20  0 971m 133m 16m S   0  0.8 0:00.00 tst/tm:1

Thanks,
-K

On Thu, Jan 30, 2014 at 4:38 PM, Anand Avati <address@hidden> wrote:
Thread count is independent of the number of servers. The number of sockets/connections is a function of the number of servers/bricks. There is a minimum set of threads (the timer thread, syncop exec threads, io-threads, the epoll thread, and, depending on the interconnect, RDMA event reaping threads), and some of those counts (syncop and io-thread) depend on the workload. All communication with servers is completely asynchronous, and we do not spawn a new thread per server.

HTH,
Avati

On Thu, Jan 30, 2014 at 1:17 PM, James <address@hidden> wrote:
On Thu, Jan 30, 2014 at 4:15 PM, Paul Cuzner <address@hidden> wrote:
> Wouldn't the thread count relate to the number of bricks in the volume,
> rather than peers in the cluster?

My naive understanding is:
1) Yes, you should expect to see one connection to each brick.
2) Some of the "scaling gluster to 1000 nodes" work might address the
issue, so as to avoid 1000 * (bricks per server) connections.
But yeah, Kelly: I think you're seeing the expected number of threads,
though this is outside of my expertise.
James
_______________________________________________
Gluster-devel mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/gluster-devel