From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH WIP 0/4] vhost-scsi: new device supporting the tcm_vhost Linux kernel module
Date: Thu, 31 Jan 2013 13:12:29 +0200

On Wed, Jan 30, 2013 at 05:41:22PM +0100, Paolo Bonzini wrote:
> Ok, so here is my attempt at a vhost-scsi device.  I'm creating an
> entirely separate device, with the common parts of virtio-scsi and
> vhost-scsi (actually little more than the initialization) grouped into
> a VirtIOSCSICommon type.  The device is used simply like "-device
> vhost-scsi-pci,wwpn=WWPN", with all configuration done in configfs
> beforehand.
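For context, a complete command line along these lines might look like the following; this is only a sketch, since everything except the vhost-scsi-pci device itself (machine setup, memory) is an assumed placeholder, and the wwpn must match the configfs target created beforehand:

```shell
# Hypothetical invocation: only the -device vhost-scsi-pci option comes
# from the text above; KVM and memory settings are placeholders.
qemu-system-x86_64 \
    -enable-kvm -m 1024 \
    -device vhost-scsi-pci,wwpn=naa.600140554cf3a18e
```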
> 
> As expected, using a separate device finds a snag: vhost-scsi is passing
> force=false to vhost_dev_init, and the BIOS does not use MSI-X so it
> will actually use the non-vhost implementation which is wrong.  I fixed
> this by passing force=true; I'm not sure what that would break, but I
> figured "not much" since the BIOS polls and does not rely on interrupts.
> 
> That makes vhost start, but it still doesn't work for me with a 3.7.2
> kernel on the host.  Even Nick's patches hang the guest as soon as vhost
> starts, and I get the same behavior with mine.  (Of course with my
> patches the BIOS hangs and you never reach Linux; but try a BIOS without
> virtio-scsi support, and you'll see Linux hanging in the same way).
> 
> Here is my configuration:
> 
>   cd /sys/kernel/config/target
>   mkdir -p core/fileio_0/fileio
>   echo 'fd_dev_name=/home/pbonzini/test.img,fd_dev_size=5905580032' > core/fileio_0/fileio/control
>   echo 1 > core/fileio_0/fileio/enable
>   mkdir -p vhost/naa.600140554cf3a18e/tpgt_0/lun/lun_0
>   cd vhost/naa.600140554cf3a18e/tpgt_0
>   ln -sf ../../../../../core/fileio_0/fileio/ lun/lun_0/virtual_scsi_port
>   echo naa.60014053226f0388 > nexus
> 
> Nick's patches are run with "-vhost-scsi id=vs,tpgt=0,wwpn=naa.600140554cf3a18e
> -device virtio-scsi-pci,vhost-scsi=vs".  Perhaps I'm doing something wrong.
> 
> Another small bug I found is an ordering problem between
> VHOST_SET_VRING_KICK and VHOST_SCSI_SET_ENDPOINT.  Starting the vq
> causes a "vhost_scsi_handle_vq endpoint not set" error in dmesg.
> Because of this I added the first two patches, which let me do
> VHOST_SCSI_SET_ENDPOINT before VHOST_SET_VRING_KICK but after setting
> up the vring.
> 
> Unfortunately, this is not enough to fix the hang.  And anyway, it's
> probably simpler to avoid the two patches and remove this test from the
> tcm_vhost.c vhost_scsi_set/clear_endpoint functions:
> 
>         mutex_lock(&vs->dev.mutex);
>         /* Verify that ring has been setup correctly. */
>         for (index = 0; index < vs->dev.nvqs; ++index) {
>                 /* Verify that ring has been setup correctly. */
>                 if (!vhost_vq_access_ok(&vs->vqs[index])) {
>                         mutex_unlock(&vs->dev.mutex);
>                         return -EFAULT;
>                 }
>         }
>         mutex_unlock(&vs->dev.mutex);

Well, userspace should initialize the kick eventfd to 0;
it seems to init it to 1, which is why we get the error.
But I think the only real issue is the pr_err: vhost-net
already ignores such a kick when no backend is set. So
let's just remove it, preferably for 3.8.

--->
tcm_vhost: fix pr_err on early kick

It's OK to get a kick before the backend is set or after
it is cleared; we can just ignore it.

Signed-off-by: Michael S. Tsirkin <address@hidden>

---

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index b20df5c..22321cf 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -575,10 +575,8 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs)
 
        /* Must use ioctl VHOST_SCSI_SET_ENDPOINT */
        tv_tpg = vs->vs_tpg;
-       if (unlikely(!tv_tpg)) {
-               pr_err("%s endpoint not set\n", __func__);
+       if (unlikely(!tv_tpg))
                return;
-       }
 
        mutex_lock(&vq->mutex);
        vhost_disable_notify(&vs->dev, vq);


