
From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH 03/15] coroutine-ucontext: reduce stack size to 64kB
Date: Tue, 28 Jun 2016 12:35:01 +0100
User-agent: Mutt/1.6.1 (2016-04-27)

* Peter Lieven (address@hidden) wrote:
> Am 28.06.2016 um 12:57 schrieb Dr. David Alan Gilbert:
> > * Paolo Bonzini (address@hidden) wrote:
> > > 
> > > On 28/06/2016 11:01, Peter Lieven wrote:
> > > > Evaluation with the recently introduced maximum stack size monitoring
> > > > revealed that the actual used stack size was never above 4kB, so
> > > > allocating a 1MB stack for each coroutine is a lot of wasted memory.
> > > > So reduce the stack size to 64kB, which should still give enough
> > > > head room.
> > > If we make the stack this much smaller, there is a non-zero chance of
> > > smashing it.  You must add a guard page if you do this (actually more
> > > than one because QEMU will happily have stack frames as big as 16 KB).
> > > The stack counts for RSS but it's not actually allocated memory, so why
> > > does it matter?
> > I think I'd be interested in seeing the /proc/.../smaps before and after
> > this change, to see if anything is visible and if we can see the
> > difference in rss etc.
> 
> Can you advise what in smaps should especially be looked at?
> 
> As for RSS, I can report that the long term usage is significantly lower.
> I had the strange observation that after the VM has been running for some
> minutes, the RSS suddenly increases to the whole stack size.

You can see the Rss of each mapping; if you knew where your stacks were,
it would be easy to see whether it is the stacks that account for the Rss
and whether there is anything else odd about them.
If you set the mapping as growsdown, then you can spot the area as the one
with 'gd' in its VmFlags.

Dave

> 
> Peter
> 
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


