Re: [Qemu-devel] [PATCH] xen_disk: fix unmapping of persistent grants


From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH] xen_disk: fix unmapping of persistent grants
Date: Thu, 13 Nov 2014 12:42:57 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On 12.11.2014 at 18:41, Stefano Stabellini wrote:
> On Wed, 12 Nov 2014, Roger Pau Monne wrote:
> > This patch fixes two issues with persistent grants and the disk PV backend
> > (Qdisk):
> > 
> >  - Don't use batch mappings when using persistent grants, doing so prevents
> >    unmapping single grants (the whole area has to be unmapped at once).
> 
> The real issue is that destroy_grant cannot work with batch_maps.
> One could reimplement destroy_grant to build a single array with all the
> grants to unmap and make a single xc_gnttab_munmap call.
> 
> Do you think that would be feasible?
> 
> Performance wise, it would certainly be better.
> 
> 
> >  - Unmap persistent grants before switching to the closed state, so the
> >    frontend can also free them.
> >
> > Signed-off-by: Roger Pau Monné <address@hidden>
> > Reported-and-Tested-by: George Dunlap <address@hidden>
> > Cc: Stefano Stabellini <address@hidden>
> > Cc: Kevin Wolf <address@hidden>
> > Cc: Stefan Hajnoczi <address@hidden>
> > Cc: George Dunlap <address@hidden>
> > ---
> >  hw/block/xen_disk.c | 35 ++++++++++++++++++++++++-----------
> >  1 file changed, 24 insertions(+), 11 deletions(-)
> > 
> > diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
> > index 231e9a7..1300c0a 100644
> > --- a/hw/block/xen_disk.c
> > +++ b/hw/block/xen_disk.c
> > @@ -43,8 +43,6 @@
> >  
> >  /* ------------------------------------------------------------- */
> >  
> > -static int batch_maps   = 0;
> > -
> >  static int max_requests = 32;
> >  
> >  /* ------------------------------------------------------------- */
> > @@ -105,6 +103,7 @@ struct XenBlkDev {
> >      blkif_back_rings_t  rings;
> >      int                 more_work;
> >      int                 cnt_map;
> > +    bool                batch_maps;
> >  
> >      /* request lists */
> >      QLIST_HEAD(inflight_head, ioreq) inflight;
> > @@ -309,7 +308,7 @@ static void ioreq_unmap(struct ioreq *ioreq)
> >      if (ioreq->num_unmap == 0 || ioreq->mapped == 0) {
> >          return;
> >      }
> > -    if (batch_maps) {
> > +    if (ioreq->blkdev->batch_maps) {
> >          if (!ioreq->pages) {
> >              return;
> >          }
> > @@ -386,7 +385,7 @@ static int ioreq_map(struct ioreq *ioreq)
> >          new_maps = ioreq->v.niov;
> >      }
> >  
> > -    if (batch_maps && new_maps) {
> > +    if (ioreq->blkdev->batch_maps && new_maps) {
> >          ioreq->pages = xc_gnttab_map_grant_refs
> >              (gnt, new_maps, domids, refs, ioreq->prot);
> >          if (ioreq->pages == NULL) {
> > @@ -433,7 +432,7 @@ static int ioreq_map(struct ioreq *ioreq)
> >               */
> >              grant = g_malloc0(sizeof(*grant));
> >              new_maps--;
> > -            if (batch_maps) {
> > +            if (ioreq->blkdev->batch_maps) {
> >                  grant->page = ioreq->pages + (new_maps) * XC_PAGE_SIZE;
> >              } else {
> >                  grant->page = ioreq->page[new_maps];
> > @@ -718,7 +717,9 @@ static void blk_alloc(struct XenDevice *xendev)
> >      QLIST_INIT(&blkdev->freelist);
> >      blkdev->bh = qemu_bh_new(blk_bh, blkdev);
> >      if (xen_mode != XEN_EMULATE) {
> > -        batch_maps = 1;
> > +        blkdev->batch_maps = TRUE;
> > +    } else {
> > +        blkdev->batch_maps = FALSE;
> >      }
> 
> true and false, lowercase

Or just blkdev->batch_maps = (xen_mode != XEN_EMULATE);

Kevin
