From: Yang Zhong
Subject: Re: [Qemu-devel] [PATCH v3] rcu: reduce more than 7MB heap memory by malloc_trim()
Date: Mon, 4 Dec 2017 20:03:22 +0800
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, Dec 01, 2017 at 01:52:49PM +0100, Paolo Bonzini wrote:
> On 01/12/2017 11:56, Yang Zhong wrote:
> >   This issue should be caused by the large number of system calls made by
> >   malloc_trim(); Shannon's test script includes 60 scsi disks and 31 ioh3420
> >   devices. We need to trade off VM performance against memory optimization.
> >   Would the method below be suitable?
> > 
> >   int num = 1;
> >   ......
> > 
> >   #if defined(CONFIG_MALLOC_TRIM)
> >         if (!(num++ % 5)) {
> >             malloc_trim(4 * 1024 * 1024);
> >         }
> >   #endif
> >  
> >   Any comments are welcome! Thanks a lot!
> 
> Indeed something like this will do, perhaps only trim once per second?
> 
  Hello Paolo,

  Thanks for the comments!
  If we trim once per second, the frequency may still be a little high; what's
  more, we would need to maintain a timer to trigger the trim, which also costs
  CPU resources.

  I added logging and ran a test here with my test qemu command line: during VM
  bootup, the rcu thread performed more than 600 free operations and 9 memory
  trims. With our ClearContainer qemu command line, the number of memory trims
  drops to 6. With Shannon's test command, the number of malloc trims will
  certainly increase.

  In the method above, the trim is only executed on every fifth pass, which
  reduces the number of trims and does not heavily impact VM bootup performance.
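
  For illustration, here is a minimal sketch of how such a counter-based
  throttle could sit in the RCU callback thread. The loop body, the counter
  name, and the 4MB pad are assumptions for illustration only, not the actual
  util/rcu.c code or the exact patch:

    #include <malloc.h>    /* glibc malloc_trim() */

    static void *call_rcu_thread(void *opaque)
    {
        unsigned batches = 0;

        for (;;) {
            /* ... dequeue and run the pending RCU callbacks ... */

    #if defined(CONFIG_MALLOC_TRIM)
            /* Only trim on every fifth batch, so freed heap memory is
             * still returned to the kernel but without issuing a trim
             * syscall for every batch of callbacks. */
            if (++batches % 5 == 0) {
                malloc_trim(4 * 1024 * 1024);  /* leave up to 4MB of slack */
            }
    #endif
        }

        return NULL;
    }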

  I also considered using synchronize_rcu() plus free() to replace call_rcu(),
  but that approach serializes malloc() and free(), which would reduce VM
  performance.
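
  Roughly, the difference between the two reclamation styles looks like the
  sketch below, using the helpers from include/qemu/rcu.h; the Foo type and
  the two wrapper functions are hypothetical examples, not code from the patch:

    #include "qemu/osdep.h"
    #include "qemu/rcu.h"

    /* Hypothetical example object; only the embedded rcu_head matters. */
    typedef struct Foo {
        struct rcu_head rcu;
        int payload;
    } Foo;

    static void reclaim_deferred(Foo *foo)
    {
        /* Returns immediately; the g_free() runs later in the rcu thread,
         * so allocation and freeing stay concurrent with this thread. */
        g_free_rcu(foo, rcu);
    }

    static void reclaim_synchronous(Foo *foo)
    {
        /* Blocks this thread until the grace period ends, serializing it
         * behind the readers before the memory can be freed. */
        synchronize_rcu();
        g_free(foo);
    }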

  The ultimate aim is to reduce the number of trim-related system calls while
  the VM is booting and running. Any better suggestions would be appreciated.

  Regards,

  Yang

> Thanks,
> 
> Paolo


