From: Juan Quintela
Subject: [Qemu-devel] VMState state of the union
Date: Fri, 18 Jan 2013 13:13:10 +0100

Hi

Executive summary: Nothing changed since last time

Abstract:

Things missing:
- cpus:   Blocked by the review comment that we should be able to send
          generated fields.  Stalled :-(
- slirp:  I have preliminary patches for it.  Stalled on how to describe
          LISTS.
- virtio: Stalled for even longer.  The suspect is LISTS again.

- rest of the devices: not much left.


What is the problem with lists?

Below is the current code that I have for virtio block requests; it is
basically a backport that open-codes the list handling.  How do we
describe this in an easier-to-use way?  The problems are listed below
(a rough sketch of what they imply follows the list):

- we need to do malloc() on the receiving side.
- the "next" pointer can be anywhere (and instead of QLISTs they could
  be using any other list structure).
- they are, by definition, a substructure, so we also need to pass a
  vmstate for it.


Any good ideas?

Thanks, Juan.

-static void virtio_blk_save(QEMUFile *f, void *opaque)
+static const VMStateDescription vmstate_virtio_blk_req = {
+    .name = "virtio-blk-req",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_BUFFER_UNSAFE(elem, VirtIOBlockReq, 0, sizeof(VirtQueueElement)),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static void put_virtio_req(QEMUFile *f, void *pv, size_t size)
 {
-    VirtIOBlock *s = opaque;
+    VirtIOBlockReqHead *rq = pv;
     VirtIOBlockReq *req;

-    virtio_save(&s->vdev, f);
-
-    QLIST_FOREACH(req, &s->rq, next) {
+    QLIST_FOREACH(req, rq, next) {
         qemu_put_sbyte(f, 1);
         qemu_put_buffer(f, (unsigned char*)&req->elem, sizeof(req->elem));
     }
     qemu_put_sbyte(f, 0);
 }

-static int virtio_blk_load(QEMUFile *f, void *opaque, int version_id)
+static int get_virtio_req(QEMUFile *f, void *pv, size_t size)
 {
-    VirtIOBlock *s = opaque;
+    VirtIOBlockReqHead *rq = pv;
+    VirtIOBlock *s = container_of(rq, struct VirtIOBlock, rq);

-    if (version_id != 2)
-        return -EINVAL;
-
-    virtio_load(&s->vdev, f);
     while (qemu_get_sbyte(f)) {
         VirtIOBlockReq *req = virtio_blk_alloc_request(s);
         qemu_get_buffer(f, (unsigned char*)&req->elem, sizeof(req->elem));
@@ -567,6 +573,25 @@ static const BlockDevOps virtio_block_ops = {
     .resize_cb = virtio_blk_resize,
 };

+const VMStateInfo vmstate_info_virtio_blk_req = {
+    .name = "virtio_blk_req",
+    .get  = get_virtio_req,
+    .put  = put_virtio_req,
+};
+
+static const VMStateDescription vmstate_virtio_blk = {
+    .name = "virtio-blk",
+    .version_id = 2,
+    .minimum_version_id = 2,
+    .minimum_version_id_old = 2,
+    .fields      = (VMStateField []) {
+        VMSTATE_VIRTIO(vdev, VirtIOBlock),
+        VMSTATE_SINGLE(rq, VirtIOBlock, 0,
+                       vmstate_info_virtio_blk_req, VirtIOBlockReqHead),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
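
For completeness, the resulting description would then have to be
registered in place of the old register_savevm() call for
virtio_blk_save/virtio_blk_load.  This is only a sketch of one possible
hookup (the helper name is made up, and the surrounding init code and
headers are assumed):

/* Sketch: registering vmstate_virtio_blk instead of the old
 * register_savevm(..., virtio_blk_save, virtio_blk_load, ...) call.
 * virtio_blk_register_vmstate() is a made-up helper name. */
static void virtio_blk_register_vmstate(DeviceState *dev, VirtIOBlock *s)
{
    /* an instance_id of -1 asks the migration core to allocate one */
    vmstate_register(dev, -1, &vmstate_virtio_blk, s);
}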




