Re: [Qemu-devel] [PATCH v7 00/12] rdma: migration support
From: Michael R. Hines
Subject: Re: [Qemu-devel] [PATCH v7 00/12] rdma: migration support
Date: Thu, 13 Jun 2013 10:55:24 -0400
User-agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130329 Thunderbird/17.0.5
On 06/13/2013 10:26 AM, Chegu Vinod wrote:
>> 1. start QEMU with the lock option *first*
>> 2. Then enable x-rdma-pin-all
>> 3. Then perform the migration
>> What happens here? Does pinning "in advance" help you?
>
> Yes, it does help, by avoiding the freeze time at the start of the
> pin-all migration.
> I already mentioned this in my earlier responses as an option to
> consider for larger guests
> (https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00435.html).
> But pinning all of guest memory has a few drawbacks... as you may
> already know.
> Just to be sure, I double-checked it again with your v7 bits.
> I started a 64GB/10VCPU guest (started qemu with the "-realtime
> mlock=on" option) and, as expected, the guest startup took about 20
> seconds longer (i.e. the time taken to mlock the 64GB of guest RAM),
> but the pin-all migration started fine, i.e. I didn't observe any
> freezes at the start of the migration.
> (CC-ing qemu-devel.)
OK, that's good to know. This means that we need to bring up the mlock()
problem as a "larger" issue in the Linux community instead of the QEMU
community.
In the meantime, how about I update the RDMA patch to do one of the
following:
1. Solution #1:
       If the user requests "x-rdma-pin-all", then
           if QEMU has enabled "-realtime mlock=on",
               allow the capability;
           else
               disallow the capability.
2. Solution #2: Create a NEW qemu monitor command which locks memory *in
   advance*, before the migrate command occurs, to clearly indicate to
   the user that the cost of locking memory must be paid before the
   migration starts.
Which solution do you prefer? Or do you have an alternative idea?
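Solution #1 amounts to a one-line guard when the capability is set. A hedged sketch of the check, with stand-in names (`mem_locked`, `rdma_pin_all_allowed`), since the exact QEMU hook point and flag name are assumptions here:

```c
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for QEMU's knowledge of whether -realtime mlock=on was
 * given on the command line; the real flag name is an assumption. */
static bool mem_locked;

/* Hypothetical capability check for Solution #1: only permit
 * x-rdma-pin-all when guest RAM is already locked, so the migration
 * itself never pays the mlock() freeze. Returns true if allowed. */
static bool rdma_pin_all_allowed(void)
{
    if (!mem_locked) {
        fprintf(stderr,
                "x-rdma-pin-all requires starting QEMU with "
                "-realtime mlock=on\n");
        return false;
    }
    return true;
}
```

Solution #2 would instead flip the same state on demand: a new monitor command mlock()s guest RAM when invoked, so the user pays the locking cost explicitly before issuing `migrate`.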
> https://lists.gnu.org/archive/html/qemu-devel/2013-04/msg04161.html
> Again, this is a generic linux mlock/clearpage related issue and not
> directly related to your changes.
Do you have any ideas on how linux can be improved to solve this?
Is there any ongoing work that you know of on mlock() performance?
Is there, perhaps, some way for linux to "parallelize" the
mlock()/clearpage operation?
- Michael