
Re: [Qemu-devel] [PATCH v6 00/11] rdma: migration support


From: Chegu Vinod
Subject: Re: [Qemu-devel] [PATCH v6 00/11] rdma: migration support
Date: Fri, 03 May 2013 16:28:17 -0700
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130328 Thunderbird/17.0.5


Hi Michael,

I picked up the qemu bits from your github branch and gave them a try. (BTW, the setup I was given temporary access to has a pair of Mellanox IB QDR cards connected back to back via QSFP cables.)

I observed a couple of things and wanted to share them. Perhaps you are already aware of them, or perhaps they are unrelated to your specific changes? (Note: I still haven't finished reviewing your changes.)

a) x-rdma-pin-all off case:

This seems to work only sometimes and fails at other times. Here is an example...

(qemu) rdma: Accepting rdma connection...
rdma: Memory pin all: disabled
rdma: verbs context after listen: 0x555556757d50
rdma: dest_connect Source GID: fe80::2:c903:9:53a5, Dest GID: fe80::2:c903:9:5855
rdma: Accepted migration
qemu-system-x86_64: VQ 1 size 0x100 Guest index 0x4d2 inconsistent with Host index 0x4ec: delta 0xffe6
qemu: warning: error while loading state for instance 0x0 of device 'virtio-net'
load of migration failed
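
For reference, here is roughly how I am driving the migration on both ends (the x-rdma URI and capability names are as I understood them from the patch series' documentation, so please treat the exact syntax below as my assumption):

    # destination host
    qemu-system-x86_64 <guest options> -incoming x-rdma:<dest-ip>:<port>

    # source monitor
    (qemu) migrate_set_capability x-rdma-pin-all off    # 'on' for case (b) below
    (qemu) migrate -d x-rdma:<dest-ip>:<port>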


b) x-rdma-pin-all on case:

The guest does not resume on the target host, i.e. the source host's qemu reports that migration is complete, but the guest is no longer responsive (it doesn't seem to have crashed, but it is stuck somewhere). Have you seen this behavior before? Any tips on how I could extract additional information?

Besides the noted restrictions/issues around having to pin all of guest memory: if the pinning is done as part of starting the migration, it takes a noticeably long time for larger guests. I wonder whether that time should be counted as part of the total migration time?

Also, the act of pinning all the memory seems to "freeze" the guest. For larger enterprise-sized guests (say 128GB and higher), the guest is "frozen" for anywhere from nearly a minute (~50 seconds) to multiple minutes as the guest size increases, which IMO kind of defeats the purpose of live guest migration. A standalone sketch of how I would try to measure the registration cost outside qemu follows below.
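
Something like the following little microbenchmark is what I have in mind for timing the pinning cost by itself (just a sketch, not qemu's actual code path; it assumes libibverbs, the first listed device, and a large enough memlock limit; build with "gcc -O2 pin_bench.c -libverbs"):

    /* Time how long it takes to register (pin) a large region with the HCA,
     * roughly imitating what a pin-all migration setup has to do up front. */
    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        size_t len = (argc > 1 ? strtoull(argv[1], NULL, 0) : 8ULL) << 30; /* GB */
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) { fprintf(stderr, "no IB devices\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
        if (!pd) { fprintf(stderr, "failed to open device / alloc PD\n"); return 1; }

        /* Anonymous mapping standing in for guest RAM; touch it first so we
         * time only the registration/pinning, not the initial page faults. */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }
        memset(buf, 0, len);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE |
                                       IBV_ACCESS_REMOTE_READ);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("ibv_reg_mr of %zu GB: %.2f s (mr=%p)\n", len >> 30,
               (double)(t1.tv_sec - t0.tv_sec) +
               (t1.tv_nsec - t0.tv_nsec) / 1e9, (void *)mr);

        if (mr) ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        munmap(buf, len);
        return 0;
    }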

I would like to hear whether you have already thought about any other alternatives to address this issue. For example, would it be better to pin all of the guest's memory as part of starting the guest itself? Yes, there are restrictions when we do pinning, but it can help with performance.
---
BTW, a different (yet sort of related) topic: recently a patch went into upstream that provides a qemu option to mlock all of guest memory:

https://lists.gnu.org/archive/html/qemu-devel/2013-04/msg03947.html

but when attempting the mlock for larger guests, a lot of time is spent bringing each page in and clearing/zeroing it, etc.:

https://lists.gnu.org/archive/html/qemu-devel/2013-04/msg04161.html 
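
A similar standalone sketch for the mlock side (again just a sketch, not qemu's code path) would deliberately not pre-touch the mapping, so that the measurement includes faulting in and zeroing every page, which is where I believe most of the time goes; it assumes the memlock limit permits it or that it is run as root:

    /* Time mlock() on a large untouched anonymous mapping. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        size_t len = (argc > 1 ? strtoull(argv[1], NULL, 0) : 8ULL) << 30; /* GB */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (mlock(buf, len) != 0) { perror("mlock"); return 1; }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("mlock of %zu GB: %.2f s\n", len >> 30,
               (double)(t1.tv_sec - t0.tv_sec) +
               (t1.tv_nsec - t0.tv_nsec) / 1e9);

        munlock(buf, len);
        munmap(buf, len);
        return 0;
    }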


----

Note: Basic TCP-based live guest migration with the same qemu version still works fine on the same hosts over a pair of non-RDMA 10Gb NICs connected back-to-back.

Thanks
Vinod

