Re: [RFC v4 0/5] Add packed virtqueue to shadow virtqueue


From: Eugenio Perez Martin
Subject: Re: [RFC v4 0/5] Add packed virtqueue to shadow virtqueue
Date: Mon, 16 Dec 2024 09:39:32 +0100

On Sun, Dec 15, 2024 at 6:27 PM Sahil Siddiq <icegambit91@gmail.com> wrote:
>
> Hi,
>
> On 12/10/24 2:57 PM, Eugenio Perez Martin wrote:
> > On Thu, Dec 5, 2024 at 9:34 PM Sahil Siddiq <icegambit91@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> There are two issues that I found while trying to test
> >> my changes. I thought I would send the patch series
> >> as well in case that helps in troubleshooting. I haven't
> >> been able to find an issue in the implementation yet.
> >> Maybe I am missing something.
> >>
> >> I have been following the "Hands on vDPA: what do you do
> >> when you ain't got the hardware v2 (Part 2)" [1] blog to
> >> test my changes. To boot the L1 VM, I ran:
> >>
> >> sudo ./qemu/build/qemu-system-x86_64 \
> >> -enable-kvm \
> >> -drive file=//home/valdaarhun/valdaarhun/qcow2_img/L1.qcow2,media=disk,if=virtio \
> >> -net nic,model=virtio \
> >> -net user,hostfwd=tcp::2222-:22 \
> >> -device intel-iommu,snoop-control=on \
> >> -device virtio-net-pci,netdev=net0,disable-legacy=on,disable-modern=off,iommu_platform=on,guest_uso4=off,guest_uso6=off,host_uso=off,guest_announce=off,ctrl_vq=on,ctrl_rx=on,packed=on,event_idx=off,bus=pcie.0,addr=0x4 \
> >> -netdev tap,id=net0,script=no,downscript=no \
> >> -nographic \
> >> -m 8G \
> >> -smp 4 \
> >> -M q35 \
> >> -cpu host 2>&1 | tee vm.log
> >>
> >> Without "guest_uso4=off,guest_uso6=off,host_uso=off,
> >> guest_announce=off" in "-device virtio-net-pci", QEMU
> >> throws "vdpa svq does not work with features" [2] when
> >> trying to boot L2.
> >>
> >> The enums added in commit #2 of this series are new and
> >> weren't in the earlier versions of the series. Without
> >> this change, x-svq=true throws "SVQ invalid device feature
> >> flags" [3] and x-svq is consequently disabled.
> >>
> >> The first issue is related to running traffic in L2
> >> with vhost-vdpa.
> >>
> >> In L0:
> >>
> >> $ ip addr add 111.1.1.1/24 dev tap0
> >> $ ip link set tap0 up
> >> $ ip addr show tap0
> >> 4: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state 
> >> UNKNOWN group default qlen 1000
> >>      link/ether d2:6d:b9:61:e1:9a brd ff:ff:ff:ff:ff:ff
> >>      inet 111.1.1.1/24 scope global tap0
> >>         valid_lft forever preferred_lft forever
> >>      inet6 fe80::d06d:b9ff:fe61:e19a/64 scope link proto kernel_ll
> >>         valid_lft forever preferred_lft forever
> >>
> >> I am able to run traffic in L2 when booting without
> >> x-svq.
> >>
> >> In L1:
> >>
> >> $ ./qemu/build/qemu-system-x86_64 \
> >> -nographic \
> >> -m 4G \
> >> -enable-kvm \
> >> -M q35 \
> >> -drive file=//root/L2.qcow2,media=disk,if=virtio \
> >> -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 \
> >> -device virtio-net-pci,netdev=vhost-vdpa0,disable-legacy=on,disable-modern=off,ctrl_vq=on,ctrl_rx=on,event_idx=off,bus=pcie.0,addr=0x7 \
> >> -smp 4 \
> >> -cpu host \
> >> 2>&1 | tee vm.log
> >>
> >> In L2:
> >>
> >> # ip addr add 111.1.1.2/24 dev eth0
> >> # ip addr show eth0
> >> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state 
> >> UP group default qlen 1000
> >>      link/ether 52:54:00:12:34:57 brd ff:ff:ff:ff:ff:ff
> >>      altname enp0s7
> >>      inet 111.1.1.2/24 scope global eth0
> >>         valid_lft forever preferred_lft forever
> >>      inet6 fe80::9877:de30:5f17:35f9/64 scope link noprefixroute
> >>         valid_lft forever preferred_lft forever
> >>
> >> # ip route
> >> 111.1.1.0/24 dev eth0 proto kernel scope link src 111.1.1.2
> >>
> >> # ping 111.1.1.1 -w3
> >> PING 111.1.1.1 (111.1.1.1) 56(84) bytes of data.
> >> 64 bytes from 111.1.1.1: icmp_seq=1 ttl=64 time=0.407 ms
> >> 64 bytes from 111.1.1.1: icmp_seq=2 ttl=64 time=0.671 ms
> >> 64 bytes from 111.1.1.1: icmp_seq=3 ttl=64 time=0.291 ms
> >>
> >> --- 111.1.1.1 ping statistics ---
> >> 3 packets transmitted, 3 received, 0% packet loss, time 2034ms
> >> rtt min/avg/max/mdev = 0.291/0.456/0.671/0.159 ms
> >>
> >>
> >> But if I boot L2 with x-svq=true as shown below, I am unable
> >> to ping the host machine.
> >>
> >> $ ./qemu/build/qemu-system-x86_64 \
> >> -nographic \
> >> -m 4G \
> >> -enable-kvm \
> >> -M q35 \
> >> -drive file=//root/L2.qcow2,media=disk,if=virtio \
> >> -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,x-svq=true,id=vhost-vdpa0 \
> >> -device virtio-net-pci,netdev=vhost-vdpa0,disable-legacy=on,disable-modern=off,ctrl_vq=on,ctrl_rx=on,event_idx=off,bus=pcie.0,addr=0x7 \
> >> -smp 4 \
> >> -cpu host \
> >> 2>&1 | tee vm.log
> >>
> >> In L2:
> >>
> >> # ip addr add 111.1.1.2/24 dev eth0
> >> # ip addr show eth0
> >> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state 
> >> UP group default qlen 1000
> >>      link/ether 52:54:00:12:34:57 brd ff:ff:ff:ff:ff:ff
> >>      altname enp0s7
> >>      inet 111.1.1.2/24 scope global eth0
> >>         valid_lft forever preferred_lft forever
> >>      inet6 fe80::9877:de30:5f17:35f9/64 scope link noprefixroute
> >>         valid_lft forever preferred_lft forever
> >>
> >> # ip route
> >> 111.1.1.0/24 dev eth0 proto kernel scope link src 111.1.1.2
> >>
> >> # ping 111.1.1.1 -w10
> >> PING 111.1.1.1 (111.1.1.1) 56(84) bytes of data.
> >> From 111.1.1.2 icmp_seq=1 Destination Host Unreachable
> >> ping: sendmsg: No route to host
> >> From 111.1.1.2 icmp_seq=2 Destination Host Unreachable
> >> From 111.1.1.2 icmp_seq=3 Destination Host Unreachable
> >>
> >> --- 111.1.1.1 ping statistics ---
> >> 3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2076ms
> >> pipe 3
> >>
> >> The other issue is related to booting L2 with "x-svq=true"
> >> and "packed=on".
> >>
> >> In L1:
> >>
> >> $ ./qemu/build/qemu-system-x86_64 \
> >> -nographic \
> >> -m 4G \
> >> -enable-kvm \
> >> -M q35 \
> >> -drive file=//root/L2.qcow2,media=disk,if=virtio \
> >> -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0,x-svq=true \
> >> -device virtio-net-pci,netdev=vhost-vdpa0,disable-legacy=on,disable-modern=off,guest_uso4=off,guest_uso6=off,host_uso=off,guest_announce=off,ctrl_vq=on,ctrl_rx=on,event_idx=off,packed=on,bus=pcie.0,addr=0x7 \
> >> -smp 4 \
> >> -cpu host \
> >> 2>&1 | tee vm.log
> >>
> >> The kernel throws "virtio_net virtio1: output.0:id 0 is not
> >> a head!" [4].
> >>
> >
> > So this series implements the descriptor forwarding from the guest to
> > the device in packed vq. We also need to forward the descriptors from
> > the device to the guest. The device writes them in the SVQ ring.
> >
> > The functions responsible for that in QEMU are
> > hw/virtio/vhost-shadow-virtqueue.c:vhost_svq_flush, which is called
> > when the device writes used descriptors to the SVQ, and which in turn
> > calls hw/virtio/vhost-shadow-virtqueue.c:vhost_svq_get_buf. We need to
> > make modifications similar to vhost_svq_add: make them conditional on
> > whether we're in split or packed vq, and "copy" the code from Linux's
> > drivers/virtio/virtio_ring.c:virtqueue_get_buf.
> >
> > After these modifications you should be able to ping and forward
> > traffic. As always, it is totally ok if it needs more than one
> > iteration, and feel free to ask any questions you have :).
> >
>
> I misunderstood this part. While working on extending
> hw/virtio/vhost-shadow-virtqueue.c:vhost_svq_get_buf() [1]
> for packed vqs, I realized that this function and
> vhost_svq_flush() already support split vqs. However, I am
> unable to ping L0 when booting L2 with "x-svq=true" and
> "packed=off" or when the "packed" option is not specified
> in QEMU's command line.
>
> I tried debugging these functions for split vqs after running
> the following QEMU commands while following the blog [2].
>
> Booting L1:
>
> $ sudo ./qemu/build/qemu-system-x86_64 \
> -enable-kvm \
> -drive file=//home/valdaarhun/valdaarhun/qcow2_img/L1.qcow2,media=disk,if=virtio \
> -net nic,model=virtio \
> -net user,hostfwd=tcp::2222-:22 \
> -device intel-iommu,snoop-control=on \
> -device virtio-net-pci,netdev=net0,disable-legacy=on,disable-modern=off,iommu_platform=on,guest_uso4=off,guest_uso6=off,host_uso=off,guest_announce=off,ctrl_vq=on,ctrl_rx=on,packed=off,event_idx=off,bus=pcie.0,addr=0x4 \
> -netdev tap,id=net0,script=no,downscript=no \
> -nographic \
> -m 8G \
> -smp 4 \
> -M q35 \
> -cpu host 2>&1 | tee vm.log
>
> Booting L2:
>
> # ./qemu/build/qemu-system-x86_64 \
> -nographic \
> -m 4G \
> -enable-kvm \
> -M q35 \
> -drive file=//root/L2.qcow2,media=disk,if=virtio \
> -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,x-svq=true,id=vhost-vdpa0 \
> -device virtio-net-pci,netdev=vhost-vdpa0,disable-legacy=on,disable-modern=off,ctrl_vq=on,ctrl_rx=on,event_idx=off,bus=pcie.0,addr=0x7 \
> -smp 4 \
> -cpu host \
> 2>&1 | tee vm.log
>
> I printed out the contents of the VirtQueueElement returned
> by vhost_svq_get_buf() in vhost_svq_flush() [3].
> I noticed that the "len" set by "vhost_svq_get_buf"
> is always 0, while VirtQueueElement.len is non-zero.
> I haven't understood the difference between these two "len"s.
>

VirtQueueElement.len is the length of the buffer, while the len of
vhost_svq_get_buf is the number of bytes written by the device. In the
case of the tx queue, VirtQueueElement.len is the length of the tx
packet, and the len returned by vhost_svq_get_buf is always 0 because
the device does not write to the buffer. In the case of rx,
VirtQueueElement.len is the available length for an rx frame, and the
vhost_svq_get_buf len is the actual length written by the device.
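
In the flush path it roughly goes like this (only a sketch, not the
exact code; "written" is just my name here for the value that
vhost_svq_get_buf returns through its len pointer, and i is the fill
index as in vhost_svq_flush):

    uint32_t written = 0;
    VirtQueueElement *elem = vhost_svq_get_buf(svq, &written);
    if (elem) {
        /*
         * tx queue: written == 0, the device only reads the packet.
         * rx queue: 0 < written <= buffer capacity, the device wrote
         * a frame into the guest buffer.
         */
        virtqueue_fill(svq->vq, elem, written, i++);
    }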

To be 100% accurate, an rx packet can span multiple buffers, but SVQ
does not need special code to handle this.

So vhost_svq_get_buf should return a len > 0 for the rx queue
(svq->vq->index % 2 == 0), and 0 for the tx queue (svq->vq->index % 2 == 1).
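
If it helps to confirm that, you can print something like this from
vhost_svq_flush right after getting the element (just a debugging
sketch; I'm assuming the locals are called elem and len, adjust the
names to whatever you have):

    fprintf(stderr, "vq %d (%s): device wrote %u bytes, elem->len = %u\n",
            virtio_get_queue_index(svq->vq),
            virtio_get_queue_index(svq->vq) % 2 ? "tx" : "rx",
            len, elem->len);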

Take into account that vhost_svq_get_buf only handles split vqs at the
moment! It should be renamed or split into vhost_svq_get_buf_split.
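
Something along these lines is what I have in mind, mirroring how
vhost_svq_add dispatches. This is only a sketch: vhost_svq_get_buf_packed
does not exist yet, and the packed-ring state names (vring_packed,
used_wrap_counter) are just placeholders for whatever your series adds:

static VirtQueueElement *vhost_svq_get_buf(VhostShadowVirtqueue *svq,
                                           uint32_t *len)
{
    if (virtio_vdev_has_feature(svq->vdev, VIRTIO_F_RING_PACKED)) {
        return vhost_svq_get_buf_packed(svq, len);
    }

    return vhost_svq_get_buf_split(svq, len);
}

/*
 * Rough sketch of the packed "is there a used descriptor?" check, modeled
 * on Linux's drivers/virtio/virtio_ring.c: a descriptor is used when both
 * the avail and used flags match the device's current wrap counter.
 */
static bool vhost_svq_more_used_packed(const VhostShadowVirtqueue *svq)
{
    uint16_t flags =
        le16_to_cpu(svq->vring_packed.desc[svq->last_used_idx].flags);
    bool avail = flags & (1 << VRING_PACKED_DESC_F_AVAIL);
    bool used = flags & (1 << VRING_PACKED_DESC_F_USED);

    return avail == used && used == svq->used_wrap_counter;
}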

> The "len" that is set to 0 is used in "virtqueue_fill()" in
> virtio.c [4]. Could this point to why I am not able to ping
> L0 from L2?
>

It depends :). Let me know in which vq you see that.
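
To give a bit more context on why the vq matters: the len you pass to
virtqueue_fill is what the guest sees as the used length of that
element. Roughly, vhost_svq_flush does:

    /*
     * virtqueue_fill() records "len" as the used length of this element
     * and virtqueue_flush() publishes it to the guest. On the tx queue a
     * used length of 0 is expected; on the rx queue it makes the guest
     * driver see an empty frame, which it will most likely drop.
     */
    virtqueue_fill(svq->vq, elem, len, i++);
    ...
    virtqueue_flush(svq->vq, i);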

> Thanks,
> Sahil
>
> [1] 
> https://gitlab.com/qemu-project/qemu/-/blob/master/hw/virtio/vhost-shadow-virtqueue.c#L418
> [2] 
> https://www.redhat.com/en/blog/hands-vdpa-what-do-you-do-when-you-aint-got-hardware-part-2
> [3] 
> https://gitlab.com/qemu-project/qemu/-/blob/master/hw/virtio/vhost-shadow-virtqueue.c#L488
> [4] 
> https://gitlab.com/qemu-project/qemu/-/blob/master/hw/virtio/vhost-shadow-virtqueue.c#L501
>



