

From: Balamuruhan S
Subject: Re: [Qemu-devel] [PULL 16/16] migration: fix crash in when incoming client channel setup fails
Date: Fri, 29 Jun 2018 14:41:26 +0530
User-agent: Mutt/1.9.2 (2017-12-15)

On Thu, Jun 28, 2018 at 01:06:25PM +0200, Juan Quintela wrote:
> Balamuruhan S <address@hidden> wrote:
> > On Wed, Jun 27, 2018 at 02:56:04PM +0200, Juan Quintela wrote:
> >> From: Daniel P. Berrangé <address@hidden>
> 
> ....
> 
> > Hi Juan,
> >
> > I tried a multifd-enabled migration: from the qemu monitor I enabled the
> > multifd capability on both source and target,
> > (qemu) migrate_set_capability x-multifd on
> > (qemu) migrate -d tcp:127.0.0.1:4444
> >
> > The migration succeeds and it's cool to have the feature :)
> 
> Thanks.
> 
> > (qemu) info migrate
> > globals:
> > store-global-state: on
> > only-migratable: off
> > send-configuration: on
> > send-section-footer: on
> > decompress-error-check: on
> > capabilities: xbzrle: off rdma-pin-all: off auto-converge: off
> > zero-blocks: off compress: off events: off postcopy-ram: off x-colo:
> > off release-ram: off block: off return-path: off
> > pause-before-switchover: off x-multifd: on dirty-bitmaps: off
> > postcopy-blocktime: off late-block-activate: off
> > Migration status: completed
> > total time: 1051 milliseconds
> > downtime: 260 milliseconds
> > setup: 17 milliseconds
> > transferred ram: 8270 kbytes
> 
> What is your setup?  This value looks really small.  I can see that you

I applied this patchset on top of upstream qemu to test multifd migration.

The qemu command line is as follows:

/home/bala/qemu/ppc64-softmmu/qemu-system-ppc64 --enable-kvm --nographic \
-vga none -machine pseries -m 4G,slots=32,maxmem=32G -smp 16,maxcpus=32 \
-device virtio-blk-pci,drive=rootdisk \
-drive file=/home/bala/hostos-ppc64le.qcow2,if=none,cache=none,format=qcow2,id=rootdisk \
-monitor telnet:127.0.0.1:1234,server,nowait \
-net nic,model=virtio -net user -redir tcp:2000::22

> have 4GB of RAM, it should be a bit higher.  And setup time is also
> quite low from my experience.

Sure, I will try with 32G of memory. I am not familiar with what the setup
time value should be.

> 
> > throughput: 143.91 mbps
> 
> I don't know what networking you are using, but in my experience increasing
> packet_count to 64 or so helps a lot to increase bandwidth.

How do I configure packet_count to 64?
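
If by packet_count you mean the x-multifd-page-count parameter listed below,
my guess (just an assumption on my side, please correct me) is that setting it
on both sides before migrating would look like:

(qemu) migrate_set_parameter x-multifd-page-count 64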

> 
> What is your networking, page_count and number of channels?

I tried localhost migration, but I still need to try multihost migration.
page_count and number of channels are the default values:

x-multifd-channels: 2
x-multifd-page-count: 16
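
I suppose the channel count can be raised the same way; this is only my guess
at the command (untested assumption on my side):

(qemu) migrate_set_parameter x-multifd-channels 4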

> 
> > remaining ram: 0 kbytes
> > total ram: 4194560 kbytes
> > duplicate: 940989 pages
> > skipped: 0 pages
> > normal: 109635 pages
> > normal bytes: 438540 kbytes
> > dirty sync count: 3
> > page size: 4 kbytes
> >
> >
> > But when I just enable the multifd in souce but not in target
> >
> > source:
> > x-multifd: on
> >
> > target:
> > x-multifd: off
> >
> > when migration is triggered with,
> > migrate -d tcp:127.0.0.1:4444 (port I used)
> >
> > The VM on the source is lost with a segmentation fault.
> >
> > I think the correct way is to enable multifd on both source and target,
> > similar to postcopy, but in this negative scenario we should handle it
> > properly and error out appropriately instead of losing the VM.
> 
> It is necessary to enable it on both sides.  And it "used" to be detected
> correctly when it was not enabled on one of the sides.  The check must have
> been lost in some rebase or other change.
> 
> Will take a look.
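
For reference, this is the sequence I would expect to be the correct one on
the destination side (only a sketch, assuming the destination is started with
-incoming defer rather than a fixed -incoming URI):

(qemu) migrate_set_capability x-multifd on
(qemu) migrate_incoming tcp:127.0.0.1:4444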

Thank you.

-- Bala

> 
> > Please correct me if I miss something.
> 
> Sure, thanks for the report.
> 
> Later, Juan.
> 



