Re: [Qemu-devel] Re: [PATCH 09/21] Introduce event-tap.


From: Marcelo Tosatti
Subject: Re: [Qemu-devel] Re: [PATCH 09/21] Introduce event-tap.
Date: Tue, 30 Nov 2010 11:11:06 -0200
User-agent: Mutt/1.5.20 (2009-08-17)

On Tue, Nov 30, 2010 at 07:35:54PM +0900, Yoshiaki Tamura wrote:
> Marcelo Tosatti wrote:
> >On Tue, Nov 30, 2010 at 06:28:55PM +0900, Yoshiaki Tamura wrote:
> >>2010/11/30 Marcelo Tosatti <address@hidden>:
> >>>On Thu, Nov 25, 2010 at 03:06:48PM +0900, Yoshiaki Tamura wrote:
> >>>>event-tap controls when to start an FT transaction, and provides proxy
> >>>>functions to be called from net/block devices.  During an FT
> >>>>transaction, it queues up net/block requests and flushes them when the
> >>>>transaction completes.
> >>>>
> >>>>Signed-off-by: Yoshiaki Tamura <address@hidden>
> >>>>Signed-off-by: OHMURA Kei <address@hidden>
> >>>
> >>>>+static void event_tap_alloc_blk_req(EventTapBlkReq *blk_req,
> >>>>+                                    BlockDriverState *bs, BlockRequest *reqs,
> >>>>+                                    int num_reqs, BlockDriverCompletionFunc *cb,
> >>>>+                                    void *opaque, bool is_multiwrite)
> >>>>+{
> >>>>+    int i;
> >>>>+
> >>>>+    blk_req->num_reqs = num_reqs;
> >>>>+    blk_req->num_cbs = num_reqs;
> >>>>+    blk_req->device_name = qemu_strdup(bs->device_name);
> >>>>+    blk_req->is_multiwrite = is_multiwrite;
> >>>>+
> >>>>+    for (i = 0; i < num_reqs; i++) {
> >>>>+        blk_req->reqs[i].sector = reqs[i].sector;
> >>>>+        blk_req->reqs[i].nb_sectors = reqs[i].nb_sectors;
> >>>>+        blk_req->reqs[i].qiov = reqs[i].qiov;
> >>>>+        blk_req->reqs[i].cb = cb;
> >>>>+        blk_req->reqs[i].opaque = opaque;
> >>>>+        blk_req->cb[i] = reqs[i].cb;
> >>>>+        blk_req->opaque[i] = reqs[i].opaque;
> >>>>+    }
> >>>>+}
> >>>
> >>>bdrv_aio_flush should also be logged, so that a guest-initiated flush is
> >>>respected on replay.
> >>
> >>So in the current implementation, without flush logging, there might be
> >>order inversion after replay?
> >>
> >>Yoshi
> >
> >Yes, since a vcpu is allowed to continue after synchronization is
> >scheduled via a bh. For virtio-blk, for example:
> >
> >1) bdrv_aio_write, event queued.
> >2) bdrv_aio_flush
> >3) bdrv_aio_write, event queued.
> >
> >On replay, there is no flush between the two writes.
> >
> >Why can't synchronization be done from event-tap itself, synchronously,
> >to avoid this kind of problem?
> 
> Thanks.  I'll fix it.
> 
> >The way you hook synchronization into savevm seems unclean. Perhaps
> >better separation between standard savevm path and FT savevm would make
> >it cleaner.
> 
> I think you're referring to the changes in migration.c?
> 
> Yoshi

The important point is to stop vcpu activity after the event is queued,
and resume once synchronization is performed. Stopping the vm after
Kemari event queueing should do it, once Michael's "stable migration"
patchset is in (and net/block layers fixed).
