From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH 2/2] ide/atapi: partially avoid deadlock if the storage backend is dead
Date: Thu, 3 Sep 2015 17:59:28 +0100
User-agent: Mutt/1.5.23 (2014-03-12)

On Thu, Aug 20, 2015 at 10:14:08AM +0200, Peter Lieven wrote:
> the blk_drain_all() that is executed if the guest issues a DMA cancel
> leads to a stuck main loop if the storage backend (e.g. an NFS share)
> is unresponsive.
> 
> This scenario is a common case for CDROM images mounted from an
> NFS share. In this case a broken NFS server can take down the
> whole VM even if the mounted CDROM is not used and was just not
> unmounted after usage.
> 
> This approach avoids the blk_drain_all() for read-only media, cancels
> the AIO locally, and makes the callback a NOP if the original request
> completes after the NFS share becomes responsive again.
> 
> Signed-off-by: Peter Lieven <address@hidden>
> ---
>  hw/ide/pci.c | 32 ++++++++++++++++++--------------
>  1 file changed, 18 insertions(+), 14 deletions(-)
> 
> diff --git a/hw/ide/pci.c b/hw/ide/pci.c
> index d31ff88..a8b4175 100644
> --- a/hw/ide/pci.c
> +++ b/hw/ide/pci.c
> @@ -240,21 +240,25 @@ void bmdma_cmd_writeb(BMDMAState *bm, uint32_t val)
>      /* Ignore writes to SSBM if it keeps the old value */
>      if ((val & BM_CMD_START) != (bm->cmd & BM_CMD_START)) {
>          if (!(val & BM_CMD_START)) {
> -            /*
> -             * We can't cancel Scatter Gather DMA in the middle of the
> -             * operation or a partial (not full) DMA transfer would reach
> -             * the storage so we wait for completion instead (we beahve
> -             * like if the DMA was completed by the time the guest trying
> -             * to cancel dma with bmdma_cmd_writeb with BM_CMD_START not
> -             * set).
> -             *
> -             * In the future we'll be able to safely cancel the I/O if the
> -             * whole DMA operation will be submitted to disk with a single
> -             * aio operation with preadv/pwritev.
> -             */
>              if (bm->bus->dma->aiocb) {
> -                blk_drain_all();
> -                assert(bm->bus->dma->aiocb == NULL);
> +                if (!bdrv_is_read_only(bm->bus->dma->aiocb->bs)) {
> +                    /* We can't cancel Scatter Gather DMA in the middle of the
> +                     * operation or a partial (not full) DMA transfer would
> +                     * reach the storage so we wait for completion instead
> +                     * (we beahve like if the DMA was completed by the time the
> +                     * guest trying to cancel dma with bmdma_cmd_writeb with
> +                     * BM_CMD_START not set). */
> +                    blk_drain_all();
> +                    assert(bm->bus->dma->aiocb == NULL);
> +                } else {
> +                    /* On a read-only device (e.g. CDROM) we can't cause incon-
> +                     * sistencies and thus cancel the AIOCB locally and avoid
> +                     * to be called back later if the original request is
> +                     * completed. */
> +                    BlockAIOCB *aiocb = bm->bus->dma->aiocb;
> +                    aiocb->cb(aiocb->opaque, -ECANCELED);
> +                    aiocb->cb = NULL;

I'm concerned that this isn't safe.

What happens if the request does complete (e.g. will guest RAM be
modified by the read operation)?

What happens if a new request is started and then the old NOPed request
completes?

Taking a step back, what are the semantics of writing !(val &
BM_CMD_START)?  Is the device guaranteed to cancel/complete requests
during the register write?
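
To make the second question concrete, here is a minimal standalone sketch
(plain C, with hypothetical names such as FakeAIOCB and FakeDMAState rather
than QEMU's actual API) of the interleaving being asked about: the old
request's callback is NOPed, a new request reuses the same DMA state, and
then the old request finally completes.

/* Standalone sketch (not QEMU code; names are hypothetical). */
#include <stdio.h>
#include <errno.h>

typedef void CompletionFunc(void *opaque, int ret);

typedef struct {
    CompletionFunc *cb;
    void *opaque;
} FakeAIOCB;

typedef struct {
    FakeAIOCB *aiocb;   /* mirrors bm->bus->dma->aiocb */
    int sector;         /* stand-in for the scatter/gather state */
} FakeDMAState;

static void dma_complete(void *opaque, int ret)
{
    FakeDMAState *dma = opaque;
    /* In the real device model this is where the transfer would be copied
     * into guest RAM and dma->aiocb cleared. */
    printf("completion ran for sector %d, ret=%d\n", dma->sector, ret);
    dma->aiocb = NULL;
}

int main(void)
{
    FakeDMAState dma = { 0 };
    FakeAIOCB old_req = { dma_complete, &dma };
    dma.aiocb = &old_req;
    dma.sector = 1;

    /* Guest cancels: the callback is invoked with -ECANCELED and NOPed. */
    old_req.cb(old_req.opaque, -ECANCELED);
    old_req.cb = NULL;

    /* Guest starts a new request that reuses the same DMA state. */
    FakeAIOCB new_req = { dma_complete, &dma };
    dma.aiocb = &new_req;
    dma.sector = 2;

    /* The storage backend now finishes the *old* request.  The NULL check
     * keeps the callback from running twice, but any completion work done
     * outside the callback (e.g. read data landing in guest memory) has
     * still happened against state that now belongs to the new request. */
    if (old_req.cb) {
        old_req.cb(old_req.opaque, 0);
    }
    return 0;
}

The sketch only shows the callback side; the open question is whether the
block layer or device model does any transfer work (e.g. filling guest RAM)
before the completion ever reaches the NOPed callback.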


