
Re: [Qemu-devel] [PATCH v9 00/56] Postcopy implementation


From: Bharata B Rao
Subject: Re: [Qemu-devel] [PATCH v9 00/56] Postcopy implementation
Date: Mon, 9 Nov 2015 15:58:43 +0530
User-agent: Mutt/1.5.23 (2014-03-12)

On Mon, Nov 09, 2015 at 09:08:33AM +0000, Dr. David Alan Gilbert wrote:
> * Bharata B Rao (address@hidden) wrote:
> > On Fri, Nov 06, 2015 at 03:48:11PM +0000, Dr. David Alan Gilbert wrote:
> > > * Bharata B Rao (address@hidden) wrote:
> > > 
> > > > > Where we have iterable, but non-postcopiable devices (e.g. htab
> > > > > or block migration), complete them before forming the 'package'
> > > > > but with the CPUs stopped.  This stops them filling up the package.
> > > > 
> > > > That helps, and the migration now succeeds when I switch to postcopy
> > > > immediately after starting the migration.
> > > 
> > > Excellent.
> > > 
> > > > However after postcopy migration, when I attempt to start an incoming
> > > > instance again to migrate the guest back, I see this failure:
> > > > 
> > > > qemu-system-ppc64: cannot set up guest memory 'ppc_spapr.ram': Cannot allocate memory
> > > > 
> > > > The same doesn't happen with normal migration.
> > > 
> > > Huh that's fun; that's the original source guest that's running out of 
> > > RAM?
> > > Its original QEMU should be gone by that point.
> > 
> > Yes, the original source QEMU is gone, but there is not enough memory left
> > in the host to start another incoming QEMU instance because...
> 
> Ah, so this is ping-pong on one host?

Yes, migrating within the same host (localhost).
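
For reference, the ping-pong is driven from the HMP monitor roughly as
follows (a sketch; the postcopy capability may still be carrying the
x-postcopy-ram name in this series, and the port is a placeholder):

    # destination started with: -incoming tcp:0:4444
    # on the source monitor (telnet 127.0.0.1 1234):
    migrate_set_capability x-postcopy-ram on
    migrate -d tcp:127.0.0.1:4444
    migrate_start_postcopy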

> 
> > At the beginning
> > -----------------
> > $ grep -i mem /proc/meminfo 
> > MemTotal:       132816832 kB
> > MemFree:        128781632 kB
> > MemAvailable:   131668224 kB
> > 
> > After starting the guest (-m 64G,slots=32,maxmem=128G)
> > ------------------------
> > $ grep -i mem /proc/meminfo 
> > MemTotal:       132816832 kB
> > MemFree:        124866880 kB
> > MemAvailable:   127753728 kB
> > 
> > After starting the destination instance (incoming)
> > -------------------------------------------------
> > $ grep -i mem /proc/meminfo 
> > MemTotal:       132816832 kB
> > MemFree:        122514880 kB
> > MemAvailable:   125401920 kB
> > 
> > After postcopy migration completes
> > ----------------------------------
> > $ grep -i mem /proc/meminfo 
> > MemTotal:       132816832 kB
> > MemFree:        55150592 kB
> > MemAvailable:   58037888 kB
> > 
> > After terminating the source instance
> > -------------------------------------
> > $ grep -i mem /proc/meminfo 
> > MemTotal:       132816832 kB
> > MemFree:        59432448 kB
> > MemAvailable:   62319872 kB
> > 
> > So as you can see, postcopy migration results in the guest claiming its
> > entire RAM from the host (MemFree drops by ~67 GB across the postcopy
> > step, roughly the full 64G of guest RAM). This doesn't happen during
> > normal migration.
> 
> I'll try and see if I can replicate this.
> 
> Can you :
>    1) show me the command line you're using

./ppc64-softmmu/qemu-system-ppc64 --enable-kvm --nographic -vga none \
    -machine pseries -m 64G,slots=32,maxmem=128G -smp 16 \
    -device virtio-blk-pci,drive=rootdisk \
    -drive file=/home/bharata/F20-snap1,if=none,cache=none,id=rootdisk,format=qcow2 \
    -monitor telnet:127.0.0.1:1234,server,nowait \
    -trace events=my-trace-events

>    2) Show me /proc/pid/smaps for the destination qemu

Attached.
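
The dump was captured along these lines (a sketch; the pgrep pattern is
illustrative):

    # snapshot the destination QEMU's mappings after migration completes
    cat /proc/$(pgrep -f 'qemu-system-ppc64.*incoming')/smaps > dst-qemu-smaps.txt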

>    3) Turn on the trace     postcopy_place_page_zero
>       the theory is that most of your pages should end up as zero pages
>       and not be allocated.

No hits for postcopy_place_page_zero either in source or destination QEMU.
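
(The event was enabled through the events file already passed on the
command line above; a sketch, assuming my-trace-events is that file:

    echo postcopy_place_page_zero >> my-trace-events

and the trace output was then checked on both sides.)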

Regards,
Bharata.

Attachment: dst-qemu-smaps.txt
Description: Text document

