qemu-devel
From: Daniel Henrique Barboza
Subject: Re: [Qemu-devel] [PATCH v2] scsi-disk: Don't enlarge min_io_size to max_io_size
Date: Tue, 27 Mar 2018 14:05:04 -0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.6.0



On 03/27/2018 01:41 PM, Fam Zheng wrote:
Some backends report a big max_io_sectors. Setting min_io_size to the
same value in this case makes it impossible for the guest to align
memory, so the disk may not be usable at all.

Do not enlarge them when they are zero.

Reported-by: David Gibson <address@hidden>
Signed-off-by: Fam Zheng <address@hidden>

---

v2: Leave the values alone if zero. [Paolo]
     At least we can consult the block layer for a slightly more
     sensible opt_io_size, but that's for another patch.
---
  hw/scsi/scsi-disk.c | 10 ++++++----
  1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index f5ab767ab5..f8ed8cf2b4 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -714,10 +714,12 @@ static int scsi_disk_emulate_inquiry(SCSIRequest *req, uint8_t *outbuf)

                  /* min_io_size and opt_io_size can't be greater than
                   * max_io_sectors */
-                min_io_size =
-                    MIN_NON_ZERO(min_io_size, max_io_sectors);
-                opt_io_size =
-                    MIN_NON_ZERO(opt_io_size, max_io_sectors);
+                if (min_io_size) {
+                    min_io_size = MIN(min_io_size, max_io_sectors);
+                }
+                if (opt_io_size) {
+                    opt_io_size = MIN(opt_io_size, max_io_sectors);
+                }
              }
              /* required VPD size with unmap support */
              buflen = 0x40;

Reviewed-by: Daniel Henrique Barboza <address@hidden>



