From: Stefan Hajnoczi
Subject: [Qemu-devel] [PULL for-2.4 3/9] vmxnet3: Fix incorrect small packet padding
Date: Tue, 7 Jul 2015 13:38:19 +0100

From: Brian Kress <address@hidden>

When running ESXi under QEMU there is an issue with the ESXi guest
discarding packets that are too short.  The guest discards any packet
shorter than the normal minimum length for an Ethernet frame (60
bytes).  This results in odd behaviour where other hosts, or VMs on
other hosts, can communicate with the ESXi guest just fine (since
there is a physical NIC somewhere doing the padding), but VMs on the
same host, and the host itself, cannot, because their ARP request
packets are too small for the ESXi guest to accept.

Someone in the past thought this was worth fixing, and added code to
the vmxnet3 QEMU emulation so that if it receives a packet smaller
than 60 bytes, it pads the packet out to 60.  Unfortunately this code
is wrong (or at least in the wrong place): it pads BEFORE taking into
account the vnet_hdr that the tap device prepends to the packet.  As
a result it may add padding, but it never adds enough.  Specifically,
it adds 10 bytes less (the length of the vnet_hdr) than it needs to.
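
To make the arithmetic concrete, here is a small standalone C sketch
(illustrative only, not QEMU code; the 42-byte ARP request length is
an assumption for the example, and the 10-byte header length matches
the sizeof(struct virtio_net_hdr) figure above):

  #include <stdio.h>

  #define VNET_HDR_LEN 10   /* sizeof(struct virtio_net_hdr) */
  #define ARP_LEN      42   /* minimal ARP request: 14B Ethernet + 28B ARP */
  #define ETH_MIN_LEN  60   /* minimum Ethernet frame length, without FCS */

  int main(void)
  {
      /* What vmxnet3_receive() gets from the tap device: header + frame. */
      size_t size = VNET_HDR_LEN + ARP_LEN;                      /* 52 */

      /* Buggy order: pad first, then strip the vnet header. */
      size_t padded = size < ETH_MIN_LEN ? ETH_MIN_LEN : size;   /* 60 */
      size_t frame  = padded - VNET_HDR_LEN;                     /* 50 */
      printf("pad-then-strip: %zu-byte frame (still short)\n", frame);

      /* Fixed order: strip the vnet header, then pad. */
      size_t stripped = size - VNET_HDR_LEN;                     /* 42 */
      frame = stripped < ETH_MIN_LEN ? ETH_MIN_LEN : stripped;   /* 60 */
      printf("strip-then-pad: %zu-byte frame (valid)\n", frame);
      return 0;
  }

With these numbers the buggy order still hands the guest a 50-byte
frame, which ESXi drops; swapping the order pads the frame to a full
60 bytes.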

The following (hopefully "obviously correct") patch simply swaps the
order of processing the vnet header and the padding.  With this patch an
ESXi guest is able to communicate with the host or other local VMs.

Signed-off-by: Brian Kress <address@hidden>
Reviewed-by: Paolo Bonzini <address@hidden>
Reviewed-by: Dmitry Fleytman <address@hidden>
Signed-off-by: Stefan Hajnoczi <address@hidden>
---
 hw/net/vmxnet3.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/net/vmxnet3.c b/hw/net/vmxnet3.c
index 104a0f5..706e060 100644
--- a/hw/net/vmxnet3.c
+++ b/hw/net/vmxnet3.c
@@ -1879,6 +1879,12 @@ vmxnet3_receive(NetClientState *nc, const uint8_t *buf, size_t size)
         return -1;
     }
 
+    if (s->peer_has_vhdr) {
+        vmxnet_rx_pkt_set_vhdr(s->rx_pkt, (struct virtio_net_hdr *)buf);
+        buf += sizeof(struct virtio_net_hdr);
+        size -= sizeof(struct virtio_net_hdr);
+    }
+
     /* Pad to minimum Ethernet frame length */
     if (size < sizeof(min_buf)) {
         memcpy(min_buf, buf, size);
@@ -1887,12 +1893,6 @@ vmxnet3_receive(NetClientState *nc, const uint8_t *buf, size_t size)
         size = sizeof(min_buf);
     }
 
-    if (s->peer_has_vhdr) {
-        vmxnet_rx_pkt_set_vhdr(s->rx_pkt, (struct virtio_net_hdr *)buf);
-        buf += sizeof(struct virtio_net_hdr);
-        size -= sizeof(struct virtio_net_hdr);
-    }
-
     vmxnet_rx_pkt_set_packet_type(s->rx_pkt,
         get_eth_packet_type(PKT_GET_ETH_HDR(buf)));
 
-- 
2.4.3



