
[Qemu-devel] Fwd: Local storage-migration plus network disks


From: Blair Bethwaite
Subject: [Qemu-devel] Fwd: Local storage-migration plus network disks
Date: Sun, 20 Apr 2014 22:33:19 +1000

Hi, just wondering if devs think this behaviour is bug-worthy?

---------- Forwarded message ----------
From: Blair Bethwaite <address@hidden>
Date: 16 April 2014 16:29
Subject: Local storage-migration plus network disks
To: address@hidden


Hi all,

We have a production OpenStack cloud, currently on Qemu 1.0 & 1.5, using local storage with storage-migration when we need to move machines around. We noticed that with network storage attached (we have seen this with both iSCSI and Ceph RBD targets) the migration moves all of the network storage contents as well, which for any non-toy disk sizes pretty much renders it useless, as the migration time is then bounded not only by the guest memory size and activity but also by the block storage size.

I've been tracking (or at least trying to track) the changes to storage migration over the last few releases in the hope that this might have been fixed, and I recently found this: http://wiki.libvirt.org/page/NBD_storage_migration, which suggests that "{shared, readonly, source-less}" disks won't be transferred.
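To make the question concrete, this is roughly the kind of block-migration request that ends up being issued through the libvirt Python bindings (a sketch, not the actual Nova code path; the guest name and destination URI are made up):

import libvirt

# A sketch of a "block migration": the NON_SHARED_* flags ask
# libvirt/QEMU to copy the guest's disks along with its memory.
# The open question is which disks that copy should include.
src = libvirt.open('qemu:///system')           # source hypervisor
dom = src.lookupByName('instance-0000abcd')    # hypothetical guest name

flags = (libvirt.VIR_MIGRATE_LIVE
         | libvirt.VIR_MIGRATE_PEER2PEER
         | libvirt.VIR_MIGRATE_NON_SHARED_INC)  # copy non-shared storage

# Destination URI is made up; bandwidth 0 means no explicit limit.
dom.migrateToURI('qemu+tcp://dest-host/system', flags, None, 0)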

But even with Qemu 1.5 we see the behaviour described above. For example, we have just migrated a guest with a 300GB Ceph RBD attached; it took over an hour to complete (over a 20GbE network), and we observed similar amounts of RX and TX traffic on both the source and destination, as the source reads blocks from the Ceph cluster, streams them to the destination, and the destination in turn writes them back to the same Ceph cluster.

So why is Qemu performing storage migration on "network" type devices?
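For reference, this is roughly how we list which of a guest's disks libvirt reports as "network" type (a quick sketch assuming the libvirt Python bindings; the guest name is made up):

import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-0000abcd')   # hypothetical guest name

# Walk the domain XML and print each disk with its libvirt disk type,
# so the 'network' (rbd/iscsi) ones stand out.
root = ET.fromstring(dom.XMLDesc(0))
for disk in root.findall('./devices/disk'):
    dev = disk.find('target').get('dev')        # e.g. vda, vdb
    dtype = disk.get('type')                    # file, block, network, ...
    source = disk.find('source')
    proto = source.get('protocol') if source is not None else ''
    print(dev, dtype, proto)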

--
Cheers,
~Blairo

