> Command line example:
>
> qemu-system-x86_64 -enable-kvm -m 3072 -smp 3 \
> -machine q35,kernel-irqchip=split -cpu host \
> -k fr \
> -serial stdio \
> -net none \
> -qmp unix:/tmp/qmp.socket,server,nowait \
> -monitor telnet:127.0.0.1:5555,server,nowait \
> -device pcie-root-port,id=root0,multifunction=on,chassis=0,addr=0xa \
> -device pcie-root-port,id=root1,bus=pcie.0,chassis=1 \
> -device pcie-root-port,id=root2,bus=pcie.0,chassis=2 \
> -netdev tap,script=/root/bin/bridge.sh,downscript=no,id=hostnet1,vhost=on \
> -device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:6f:55:cc,bus=root2,failover=on \
> /root/rhel-guest-image-8.0-1781.x86_64.qcow2
>
> Then the primary device can be hotplugged via
> (qemu) device_add vfio-pci,host=5e:00.2,id=hostdev0,bus=root1,standby=net1
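For completeness, the same hotplug could be driven over the -qmp unix
socket from the command line above rather than the HMP monitor; a
minimal sketch (the JSON payload below is my own rendering of the
device_add arguments quoted above, not taken from the original mail):

```shell
# QMP payload mirroring the HMP device_add arguments above exactly
qmp_hotplug='{"execute": "device_add",
  "arguments": {"driver": "vfio-pci", "host": "5e:00.2",
                "id": "hostdev0", "bus": "root1", "standby": "net1"}}'
# a QMP client must negotiate capabilities before any other command
qmp_caps='{"execute": "qmp_capabilities"}'
# e.g. (assuming socat is installed), against the socket declared
# with -qmp in the command line above:
#   printf '%s\n%s\n' "$qmp_caps" "$qmp_hotplug" \
#       | socat - UNIX-CONNECT:/tmp/qmp.socket
```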
I guess this is the command line on the migration destination, and as
far as I understand from this example, on the destination we (meaning
libvirt or a higher-level management application) must *not* include
the assigned device on the qemu command line, but must instead hotplug
the device later, after the guest CPUs have been restarted on the
destination.
So if I'm understanding correctly, the idea is that on the migration
source, the device may have been hotplugged, or may have been included
when qemu was originally started. Then qemu automatically handles the
unplug of the device on the source, but it seems qemu does nothing on
the destination, leaving that up to libvirt or a higher layer to
implement.
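One way that destination-side ordering might look from the management
layer's point of view (a sketch only: wait_for_resume is a hypothetical
helper I'm inventing here, the naive string match included; RESUME is
the QMP event qemu emits when the guest CPUs start running):

```shell
# Read QMP event lines (one JSON object per line) from stdin and
# return success once the RESUME event is seen; the pattern match is
# deliberately naive and assumes qemu's usual '"event": "RESUME"'
# formatting rather than parsing the JSON properly.
wait_for_resume() {
    while IFS= read -r line; do
        case "$line" in
            *'"event": "RESUME"'*) return 0 ;;
        esac
    done
    return 1
}
# In practice the event stream would come from the QMP socket, e.g.:
#   socat - UNIX-CONNECT:/tmp/qmp.socket | wait_for_resume \
#       && issue the device_add shown earlier
```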
Then in order for this to work, libvirt (or OpenStack or oVirt or
whoever) needs to understand that the device in the libvirt config (it
will still be in the libvirt config, since from libvirt's POV it hasn't
been unplugged):
1) shouldn't be included in the qemu commandline on the destination,