From: Stefan Hajnoczi
Subject: [Qemu-devel] [PULL 03/24] pflash_cfi01: write flash contents to bdrv on incoming migration
Date: Mon, 8 Sep 2014 11:51:28 +0100

From: Laszlo Ersek <address@hidden>

A drive that backs a pflash device is special:
- it is very small,
- its entire contents are kept in a RAMBlock at all times, covering the
  guest-phys address range that provides the guest's view of the emulated
  flash chip.
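
For orientation, the struct pflash_t fields that the rest of this message
refers to are roughly the following (the field names appear in the patch
and in hw/block/pflash_cfi01.c; the exact types and the omitted members are
from memory, so treat this as a sketch, not the authoritative definition):

    struct pflash_t {
        BlockDriverState *bs;  /* the backing drive; NULL if no drive given */
        uint32_t nb_blocs;     /* number of erase blocks */
        uint64_t sector_len;   /* size of one erase block, in bytes */
        int ro;                /* nonzero: read-only drive, or no drive */
        MemoryRegion mem;      /* guest-phys window, backed by a RAMBlock */
        void *storage;         /* host pointer into that RAMBlock */
        /* ... further fields omitted ... */
    };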

The pflash device model keeps the drive (the host-side file) and the
guest-visible flash contents in sync. When migrating the guest, the
guest-visible flash contents (the RAMBlock) are migrated by default, but on
the target host, the drive (the host-side file) remains in full sync with
the RAMBlock only if:
- the source and target hosts share the storage underlying the pflash
  drive,
- or the migration requests full or incremental block migration too, which
  then covers all drives.

Due to the special nature of pflash drives, the following scenario makes
sense as well:
- neither full nor incremental block migration (which would cover all
  drives) is requested alongside the base migration (justified e.g. by
  shared storage for "normal" (big) drives),
- non-shared storage for pflash drives.

In this case, currently only those portions of the flash drive that the
guest reprograms while running on the target host are updated on the
target disk.

In order to restore consistency, dump the entire flash contents to the bdrv
in a post_load() callback.

- The read-only check follows the other call-sites of pflash_update();
- both "pfl->ro" and pflash_update() already account for the case when
  "pfl->bs" is NULL (see the sketch right after this list);
- the total size of the flash device is calculated as in
  pflash_cfi01_realize().
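
For context (not part of this diff), pflash_update() is the existing helper
in hw/block/pflash_cfi01.c that flushes a byte range of the in-RAM flash
image back to the backing drive. Roughly, and from memory rather than from
the tree, it looks like this; note how a NULL "pfl->bs" turns it into a
no-op, which is why the new post_load hook only has to test "pfl->ro":

    /* Sketch of the existing helper; consult the tree for the real code. */
    static void pflash_update(pflash_t *pfl, int offset, int size)
    {
        int offset_end;

        if (pfl->bs) {
            offset_end = offset + size;
            /* widen [offset, offset_end) to 512-byte sector boundaries */
            offset = offset >> 9;
            offset_end = (offset_end + 511) >> 9;
            /* copy from the RAMBlock-backed buffer to the drive */
            bdrv_write(pfl->bs, offset, pfl->storage + (offset << 9),
                       offset_end - offset);
        }
    }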

When using shared storage, or requesting full or incremental block
migration along with the normal migration, the patch should only incur a
harmless, redundant rewrite on the target side.

It is assumed that, on the target host, RAM is loaded ahead of the call to
pflash_post_load(); by then the flash RAMBlock already holds the migrated
contents, so the callback flushes up-to-date rather than stale data to the
drive. This matches the usual migration stream layout, in which RAM
precedes device state.

Suggested-by: Paolo Bonzini <address@hidden>
Signed-off-by: Laszlo Ersek <address@hidden>
Signed-off-by: Stefan Hajnoczi <address@hidden>
---
 hw/block/pflash_cfi01.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/hw/block/pflash_cfi01.c b/hw/block/pflash_cfi01.c
index fddef39..593fbc5 100644
--- a/hw/block/pflash_cfi01.c
+++ b/hw/block/pflash_cfi01.c
@@ -94,10 +94,13 @@ struct pflash_t {
     void *storage;
 };
 
+static int pflash_post_load(void *opaque, int version_id);
+
 static const VMStateDescription vmstate_pflash = {
     .name = "pflash_cfi01",
     .version_id = 1,
     .minimum_version_id = 1,
+    .post_load = pflash_post_load,
     .fields = (VMStateField[]) {
         VMSTATE_UINT8(wcycle, pflash_t),
         VMSTATE_UINT8(cmd, pflash_t),
@@ -982,3 +985,14 @@ MemoryRegion *pflash_cfi01_get_memory(pflash_t *fl)
 {
     return &fl->mem;
 }
+
+static int pflash_post_load(void *opaque, int version_id)
+{
+    pflash_t *pfl = opaque;
+
+    if (!pfl->ro) {
+        DPRINTF("%s: updating bdrv for %s\n", __func__, pfl->name);
+        pflash_update(pfl, 0, pfl->sector_len * pfl->nb_blocs);
+    }
+    return 0;
+}
-- 
1.9.3