

From: Hans de Goede
Subject: [Qemu-devel] [PATCH 03/12] usb-host-linux: Only enable pipelining for output endpoints
Date: Mon, 8 Oct 2012 09:51:27 +0200

With the upcoming input pipelining support, large input packets may get
submitted to the device, and these require special handling when the
packets end up being split again by usb-host-linux due to usbfs
limitations. The exact demands for properly handling larger split input
transfers are explained in detail in this libusb commit:
https://github.com/libusbx/libusbx/commit/ede02ba91920f9be787a7f3cd006c5a4b92b5eab

Specifically, see the large comment block that commit adds. Note that IMHO
it would be better to just port usb-host-linux to libusb and let libusb
worry about such usbfs details, rather than reproducing all this code
inside host-linux.c.

Note that the current lack of proper handling can already be a problem
when usb-host-linux is used in combination with an emulated xHCI
controller, as that can already submit transfers large enough to
trigger this.

Signed-off-by: Hans de Goede <address@hidden>
---
 hw/usb/host-linux.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/usb/host-linux.c b/hw/usb/host-linux.c
index 44f1a64..3a258b4 100644
--- a/hw/usb/host-linux.c
+++ b/hw/usb/host-linux.c
@@ -1224,7 +1224,8 @@ static int usb_linux_update_endp_table(USBHostDevice *s)
                 usb_ep_set_type(&s->dev, pid, ep, type);
                 usb_ep_set_ifnum(&s->dev, pid, ep, interface);
                 if ((s->options & (1 << USB_HOST_OPT_PIPELINE)) &&
-                    (type == USB_ENDPOINT_XFER_BULK)) {
+                    (type == USB_ENDPOINT_XFER_BULK) &&
+                    (pid == USB_TOKEN_OUT)) {
                     usb_ep_set_pipeline(&s->dev, pid, ep, true);
                 }
 
-- 
1.7.12.1
