
From: Dennis Krul
Subject: [Qemu-devel] [Bug 1013714] [NEW] Data corruption after block migration (LV->LV)
Date: Fri, 15 Jun 2012 14:53:23 -0000

Public bug reported:

We quite frequently use the live block migration feature to move VMs
between nodes without downtime. These migrations sometimes result in
data corruption on the receiving end. It only happens when the VM is
actually doing I/O (it doesn't have to be much to trigger the issue).
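
A block migration of this kind is typically started roughly as shown
below; the destination host name and port are placeholders for the
example, not our exact values:

  # destination node: start the VM with identical drive definitions
  # and wait for the incoming migration stream
  qemu-kvm [usual VM options] -incoming tcp:0:4444

  # source node's monitor: live migration with full block migration (-b)
  (qemu) migrate -b tcp:dest-node:4444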

We use logical volumes, and each VM has two disks. We use cache=none
for all VM disks.

All guests use virtio (a mix of various Linux distros and Windows
2008R2).
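
To illustrate the disk setup, the relevant part of the command line
looks roughly like this (the volume group and LV names are made up for
the example):

  qemu-kvm ... \
    -drive file=/dev/vg0/vm1-disk0,if=virtio,cache=none,format=raw \
    -drive file=/dev/vg0/vm1-disk1,if=virtio,cache=none,format=raw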

We currently have two stacks in use and have seen the issue on both of
them:

Fedora - qemu-kvm 0.13
Scientific Linux 6.2 (RHEL-derived) - qemu-kvm package 0.12.1.2

Even though we don't run the most recent versions of KVM, I strongly
suspect this issue is still unreported and that filing a bug is
therefore appropriate. (There doesn't seem to be any similar bug report
in Launchpad or Red Hat's Bugzilla, and nothing related in change logs,
release notes or git commit logs.)

I have no idea where to look or where to start debugging this issue,
but if there is any way I can provide useful debug information, please
let me know.

** Affects: qemu
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1013714




