qemu-stable
[Top][All Lists]
Advanced

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Qemu-stable] [PATCH v2] iscsi: Fix divide-by-zero regression on raw


From: Paolo Bonzini
Subject: Re: [Qemu-stable] [PATCH v2] iscsi: Fix divide-by-zero regression on raw SG devices
Date: Wed, 21 Sep 2016 18:42:57 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.3.0


On 07/09/2016 23:27, Eric Blake wrote:
> When qemu uses iscsi devices in sg mode, iscsilun->block_size
> is left at 0.  Prior to commits cf081fca and similar, when
> block limits were tracked in sectors, this did not matter:
> various block limits were just left at 0.  But when we started
> scaling by block size, this caused SIGFPE.
> 
> Then, in a later patch, commit a5b8dd2c added an assertion to
> bdrv_open_common() that request_alignment is always non-zero;
> which was not true for SG mode.  Rather than relax that assertion,
> we can just provide a sane value (we don't know of any SG device
> with a block size smaller than qemu's default sizing of 512 bytes).
> 
> One possible solution for SG mode is to just blindly skip ALL
> of iscsi_refresh_limits(), since we already short circuit so
> many other things in sg mode.  But this patch takes a slightly
> more conservative approach, and merely guarantees that scaling
> will succeed, while still using multiples of the original size
> where possible.  Resulting limits may still be zero in SG mode
> (that is, we mostly only fix block_size used as a denominator
> or which affect assertions, not all uses).
> 
> Reported-by: Holger Schranz <address@hidden>
> Signed-off-by: Eric Blake <address@hidden>
> CC: address@hidden
> 
> ---
> v2: avoid second assertion failure
> ---
>  block/iscsi.c | 14 +++++++++-----
>  1 file changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/block/iscsi.c b/block/iscsi.c
> index 95ce9e1..c01e955 100644
> --- a/block/iscsi.c
> +++ b/block/iscsi.c
> @@ -1813,19 +1813,23 @@ static void iscsi_refresh_limits(BlockDriverState 
> *bs, Error **errp)
> 
>      IscsiLun *iscsilun = bs->opaque;
>      uint64_t max_xfer_len = iscsilun->use_16_for_rw ? 0xffffffff : 0xffff;
> +    unsigned int block_size = MIN_NON_ZERO(BDRV_SECTOR_SIZE,
> +                                           iscsilun->block_size);
> 
> -    bs->bl.request_alignment = iscsilun->block_size;
> +    assert(iscsilun->block_size >= BDRV_SECTOR_SIZE || bs->sg);
> +
> +    bs->bl.request_alignment = block_size;
> 
>      if (iscsilun->bl.max_xfer_len) {
>          max_xfer_len = MIN(max_xfer_len, iscsilun->bl.max_xfer_len);
>      }
> 
> -    if (max_xfer_len * iscsilun->block_size < INT_MAX) {
> +    if (max_xfer_len * block_size < INT_MAX) {
>          bs->bl.max_transfer = max_xfer_len * iscsilun->block_size;
>      }
> 
>      if (iscsilun->lbp.lbpu) {
> -        if (iscsilun->bl.max_unmap < 0xffffffff / iscsilun->block_size) {
> +        if (iscsilun->bl.max_unmap < 0xffffffff / block_size) {
>              bs->bl.max_pdiscard =
>                  iscsilun->bl.max_unmap * iscsilun->block_size;
>          }
> @@ -1835,7 +1839,7 @@ static void iscsi_refresh_limits(BlockDriverState *bs, 
> Error **errp)
>          bs->bl.pdiscard_alignment = iscsilun->block_size;
>      }
> 
> -    if (iscsilun->bl.max_ws_len < 0xffffffff / iscsilun->block_size) {
> +    if (iscsilun->bl.max_ws_len < 0xffffffff / block_size) {
>          bs->bl.max_pwrite_zeroes =
>              iscsilun->bl.max_ws_len * iscsilun->block_size;
>      }
> @@ -1846,7 +1850,7 @@ static void iscsi_refresh_limits(BlockDriverState *bs, 
> Error **errp)
>          bs->bl.pwrite_zeroes_alignment = iscsilun->block_size;
>      }
>      if (iscsilun->bl.opt_xfer_len &&
> -        iscsilun->bl.opt_xfer_len < INT_MAX / iscsilun->block_size) {
> +        iscsilun->bl.opt_xfer_len < INT_MAX / block_size) {
>          bs->bl.opt_transfer = pow2floor(iscsilun->bl.opt_xfer_len *
>                                          iscsilun->block_size);
>      }
> 

Queued, thanks.

Paolo



reply via email to

[Prev in Thread] Current Thread [Next in Thread]