From: Roman Kagan
Subject: Re: [Qemu-devel] [PATCH 2/2] virtio-balloon: resume collecting stats on vmload
Date: Fri, 2 Sep 2016 10:21:58 +0300
User-agent: NeoMutt/20160827 ()

On Thu, Sep 01, 2016 at 10:26:54PM +0300, Michael S. Tsirkin wrote:
> On Thu, Sep 01, 2016 at 09:14:00PM +0300, Roman Kagan wrote:
> > Upon save/restore virtio-balloon stats acquisition stops.  The reason is
> > that the in-use virtqueue element is not saved, and upon restore it's
> > not released to the guest, making further progress impossible.
> > 
> > Saving the information about the used element would introduce
> > unjustified vmstate incompatibility.
> > 
> > However, the number of virtqueue in-use elements can be deduced from the
> > data available on restore (see bccdef6b1a204db0f41ffb6e24ce373e4d7890d4
> > "virtio: recalculate vq->inuse after migration").
> > 
> > So make the stats virtqueue forget that an element was popped from it,
> > and start over.  As this tackles the problem on the "load" side, it
> > is compatible with the state saved by earlier QEMU versions.
> > 
> > Signed-off-by: Roman Kagan <address@hidden>
> > Cc: "Michael S. Tsirkin" <address@hidden>
> > Cc: Ladi Prosek <address@hidden>
> > Cc: Stefan Hajnoczi <address@hidden>
> > ---
> >  hw/virtio/virtio-balloon.c | 21 ++++++++++++++++++++-
> >  1 file changed, 20 insertions(+), 1 deletion(-)
> > 
> > diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
> > index 5af429a..8660052 100644
> > --- a/hw/virtio/virtio-balloon.c
> > +++ b/hw/virtio/virtio-balloon.c
> > @@ -406,7 +406,13 @@ static void virtio_balloon_save_device(VirtIODevice *vdev, QEMUFile *f)
> >  
> >  static int virtio_balloon_load(QEMUFile *f, void *opaque, size_t size)
> >  {
> > -    return virtio_load(VIRTIO_DEVICE(opaque), f, 1);
> > +    VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
> > +    VirtIOBalloon *s = VIRTIO_BALLOON(vdev);
> > +    int ret = virtio_load(vdev, f, 1);
> > +    /* rewind needs vq->inuse populated which happens in virtio_load() after
> > +     * vdev->load */
> > +    virtqueue_rewind(s->svq);
> > +    return ret;
> >  }
> >  
> >  static int virtio_balloon_load_device(VirtIODevice *vdev, QEMUFile *f,
> > @@ -468,6 +474,18 @@ static void virtio_balloon_device_reset(VirtIODevice *vdev)
> >      }
> >  }
> >  
> > +static void virtio_balloon_set_status(VirtIODevice *vdev, uint8_t status)
> > +{
> > +    VirtIOBalloon *s = VIRTIO_BALLOON(vdev);
> > +
> > +    if (vdev->vm_running && balloon_stats_supported(s) &&
> > +        (status & VIRTIO_CONFIG_S_DRIVER_OK) && !s->stats_vq_elem) {
> > +        /* poll stats queue for the element we may have discarded when the VM
> > +         * was stopped */
> > +        virtio_balloon_receive_stats(vdev, s->svq);
> > +    }
> > +}
> > +
> >  static void virtio_balloon_instance_init(Object *obj)
> >  {
> >      VirtIOBalloon *s = VIRTIO_BALLOON(obj);
> > @@ -505,6 +523,7 @@ static void virtio_balloon_class_init(ObjectClass *klass, void *data)
> >      vdc->get_features = virtio_balloon_get_features;
> >      vdc->save = virtio_balloon_save_device;
> >      vdc->load = virtio_balloon_load_device;
> > +    vdc->set_status = virtio_balloon_set_status;
> >  }
> >  
> >  static const TypeInfo virtio_balloon_info = {
> 
> 
> I'm sorry - I don't like this patch. This means that
> virtio_balloon_receive_stats will be called and will poke
> at the ring even if the ring was never kicked.

I'm not sure I understand what the problem is with that:
virtio_balloon_receive_stats just returns early if virtio_queue_empty(),
which is no more poking at the ring than is already done in virtio_load.
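
Roughly the shape I mean (a paraphrased sketch, not the verbatim code; the
empty-queue check happens via virtqueue_pop() handing back NULL):

    /* sketch: the stats handler bails out as soon as there is nothing to
     * pop, so an empty or never-kicked ring is left alone */
    static void virtio_balloon_receive_stats(VirtIODevice *vdev, VirtQueue *vq)
    {
        VirtIOBalloon *s = VIRTIO_BALLOON(vdev);
        VirtQueueElement *elem = virtqueue_pop(vq, sizeof(VirtQueueElement));

        if (!elem) {
            return;     /* nothing queued by the guest: don't touch the ring */
        }

        /* ... parse the stats buffer and stash elem in s->stats_vq_elem ... */
    }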

> This is why in my opinion it is cleaner to have
> virtqueue_rewind return a true/false status,
> and only process the ring if the ring was rewound.
> 
> I think Stefan's patches did something similar.
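
IIUC, on top of my patch that would look roughly like the sketch below
(assuming virtqueue_rewind() is changed to return bool; the stats_vq_rewound
field is made up purely for illustration):

    static int virtio_balloon_load(QEMUFile *f, void *opaque, size_t size)
    {
        VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
        VirtIOBalloon *s = VIRTIO_BALLOON(vdev);
        int ret = virtio_load(vdev, f, 1);

        /* hypothetical bool return: true only if an in-use element was
         * actually forgotten */
        s->stats_vq_rewound = virtqueue_rewind(s->svq);
        return ret;
    }

    static void virtio_balloon_set_status(VirtIODevice *vdev, uint8_t status)
    {
        VirtIOBalloon *s = VIRTIO_BALLOON(vdev);

        if (s->stats_vq_rewound && vdev->vm_running &&
            balloon_stats_supported(s) &&
            (status & VIRTIO_CONFIG_S_DRIVER_OK) && !s->stats_vq_elem) {
            /* only poke the ring if we actually discarded an element */
            s->stats_vq_rewound = false;
            virtio_balloon_receive_stats(vdev, s->svq);
        }
    }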

I'm fine with Stefan's patches, too.

Roman.


