From: Wang, Wei W
Subject: Re: [Qemu-devel] [virtio-comment] [PATCH] *** Vhost-pci RFC v2 ***
Date: Tue, 30 Aug 2016 12:59:11 +0000

On Tuesday, August 30, 2016 7:11 PM, Michael S. Tsirkin wrote:
> On Tue, Aug 30, 2016 at 10:08:01AM +0000, Wang, Wei W wrote:
> > On Monday, August 29, 2016 11:25 PM, Stefan Hajnoczi wrote:
> > > To: Wang, Wei W <address@hidden>
> > > Cc: address@hidden; address@hidden; virtio-
> > > address@hidden; address@hidden; address@hidden
> > > Subject: Re: [virtio-comment] [PATCH] *** Vhost-pci RFC v2 ***
> > >
> > > On Mon, Jun 27, 2016 at 02:01:24AM +0000, Wang, Wei W wrote:
> > > > On Sun 6/19/2016 10:14 PM, Wei Wang wrote:
> > > > > This RFC proposes a design of vhost-pci, which is a new virtio device 
> > > > > type.
> > > > > The vhost-pci device is used for inter-VM communication.
> > > > >
> > > > > Changes in v2:
> > > > > 1. changed the vhost-pci driver to use a controlq to send
> > > > >    acknowledgement messages to the vhost-pci server rather than
> > > > >    writing to the device configuration space;
> > > > >
> > > > > 2. re-organized all the data structures and the description
> > > > >    layout;
> > > > >
> > > > > 3. removed the VHOST_PCI_CONTROLQ_UPDATE_DONE socket message,
> > > > >    which is redundant;
> > > > >
> > > > > 4. added a message sequence number to the msg info structure to
> > > > >    identify socket messages, and the socket message exchange does
> > > > >    not need to be blocking;
> > > > >
> > > > > 5. changed to use uuid to identify each VM rather than using the
> > > > >    QEMU process id
> > > > >
> > > >
> > > > One more point that should be added is that the server needs to send
> > > > periodic socket messages to check if the driver VM is still alive.
> > > > I will add this message support in the next version.  (*v2-AR1*)
> > >
> > > Either the driver VM could go down or the device VM (server) could
> > > go down.  In both cases there must be a way to handle the situation.
> > >
> > > If the server VM goes down it should be possible for the driver VM
> > > to resume either via hotplug of a new device or through messages
> > > reinitializing the dead device when the server VM restarts.
> >
> > I got feedback from people that the names "device VM" and "driver VM"
> > are difficult to remember. Can we use client (or frontend) VM and
> > server (or backend) VM in the discussion? I think that would sound
> > more straightforward :)
> 
> So server is the device VM?

Yes. 

> Sounds even more confusing to me :)
> 
> frontend/backend is kind of ok if you really prefer it, but let's add some
> text that explains how this translates to the device/driver terms that the
> rest of the text uses.

OK. I guess most people are more comfortable with frontend and backend :)

> >
> > Here are the two cases:
> >
> > Case 1: When the client VM powers off, the server VM will notice that
> > the connection is closed (the client calls the socket close()
> > function, which notifies the server about the disconnection). Then the
> > server will need to remove the vhost-pci device for that client VM.
> > When the client VM boots up and connects to the server again, the
> > server VM re-establishes the inter-VM communication channel (i.e.
> > creating a new vhost-pci device and hot-plugging it to the server VM).
> 
> So on reset you really must wait for backend to stop doing things before you
> proceed. Closing socket won't do this, it's asynchronous.

Agree.

From the logical point of view, I think we can state the following in the spec:

Before the frontend VM is destroyed or migrated, all the clients that connect to
the server SHOULD send a VHOST_PCI_MSG_TYPE_DEVICE_INFO(DEL) socket message to
the server. The destroy or migrate operation MUST wait until all the
corresponding VHOST_PCI_MSG_TYPE_DEVICE_INFO_ACK(DEL_DONE) socket messages
have been received.
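
Just to make the handshake concrete, here is a minimal sketch of the client
(frontend QEMU) side of that exchange. The message framing, field names and
numeric values below are only assumptions for illustration; the actual socket
message layout is the one defined in the RFC.

/* Hypothetical message framing -- real layout/values are defined by the spec. */
#include <stdint.h>
#include <unistd.h>

enum {
    VHOST_PCI_MSG_TYPE_DEVICE_INFO     = 1,    /* assumed value */
    VHOST_PCI_MSG_TYPE_DEVICE_INFO_ACK = 2,    /* assumed value */
};

enum {
    VHOST_PCI_DEVICE_INFO_DEL      = 1,        /* assumed value */
    VHOST_PCI_DEVICE_INFO_DEL_DONE = 2,        /* assumed value */
};

struct vhost_pci_msg_hdr {
    uint16_t msg_type;
    uint16_t op;       /* DEL / DEL_DONE */
    uint32_t msg_seq;  /* sequence number introduced in v2 */
};

/* Send DEVICE_INFO(DEL) and block until the matching DEL_DONE ack arrives;
 * only then may the destroy/migrate operation proceed. */
static int vhost_pci_send_del_and_wait(int sock_fd, uint32_t seq)
{
    struct vhost_pci_msg_hdr msg = {
        .msg_type = VHOST_PCI_MSG_TYPE_DEVICE_INFO,
        .op       = VHOST_PCI_DEVICE_INFO_DEL,
        .msg_seq  = seq,
    };
    struct vhost_pci_msg_hdr ack;

    if (write(sock_fd, &msg, sizeof(msg)) != sizeof(msg))
        return -1;

    do {
        if (read(sock_fd, &ack, sizeof(ack)) != sizeof(ack))
            return -1;
    } while (ack.msg_type != VHOST_PCI_MSG_TYPE_DEVICE_INFO_ACK ||
             ack.msg_seq != seq);

    return 0;
}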


From the implementation point of view, I think we can implement it like this:

Add a new virtio device_status value: VIRTIO_CONFIG_S_DRIVER_DEL_OK.
On reset, the virtio driver's .remove() function will be invoked. At the
beginning of that function, we can patch in the following two lines of code:

..->set_status(dev, VIRTIO_CONFIG_S_DRIVER_DEL_OK);           /* the request, "OK?" */
while (..->get_status(dev) != VIRTIO_CONFIG_S_DRIVER_DEL_OK); /* wait for the ack, "OK!" */

The first call traps to QEMU. There, QEMU invokes the client socket to send a
DEVICE_INFO(DEL) socket message to the server, and returns without setting the
status to "OK!". The frontend driver then waits in the while() loop until it is
"OK!" to do the removal.
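
As a rough illustration of the frontend QEMU side (not actual QEMU code; the
structure, field and helper names below are hypothetical, and the bit value of
the new status flag is just a placeholder), the status-write handling could
look like this:

#define VIRTIO_CONFIG_S_DRIVER_DEL_OK 0x20  /* assumed, unassigned status bit */

struct frontend_dev {
    int      client_sock;     /* socket to the vhost-pci server */
    uint32_t next_seq;        /* next socket message sequence number */
    uint8_t  device_status;   /* guest-visible virtio device_status */
};

/* Hypothetical handler for the guest's device_status write. */
static void frontend_handle_status_write(struct frontend_dev *dev, uint8_t status)
{
    if (status & VIRTIO_CONFIG_S_DRIVER_DEL_OK) {
        /* Forward the request to the server over the client socket, but do
         * NOT reflect DEL_OK into the guest-visible status yet ("OK?"). */
        vhost_pci_send_device_info_del(dev->client_sock, dev->next_seq++);
        return;
    }
    dev->device_status = status;   /* ordinary status writes */
}

/* Called once the DEVICE_INFO(DEL_DONE) ack arrives from the server. */
static void frontend_handle_del_done(struct frontend_dev *dev)
{
    /* Now the guest's busy-wait on get_status() observes the ack ("OK!"). */
    dev->device_status = VIRTIO_CONFIG_S_DRIVER_DEL_OK;
}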

Once the server receives that DEVICE_INFO(DEL) message, it stops the
corresponding vhost-pci driver and sends back a DEVICE_INFO(DEL_DONE) socket
message. Upon receiving that message, the client sets the device status to
"OK!", and the driver's .remove() function exits the while() loop and continues
its removal work.
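
On the server (backend) side, the handling could be sketched as below. This
reuses the hypothetical message framing from the earlier sketch, and
stop_vhost_pci_driver() is only a placeholder for whatever teardown the
backend device actually needs.

/* Hypothetical server-side handling of DEVICE_INFO(DEL). */
static void server_handle_msg(int sock_fd, struct vhost_pci_dev *vp_dev,
                              const struct vhost_pci_msg_hdr *msg)
{
    if (msg->msg_type == VHOST_PCI_MSG_TYPE_DEVICE_INFO &&
        msg->op == VHOST_PCI_DEVICE_INFO_DEL) {
        struct vhost_pci_msg_hdr ack = {
            .msg_type = VHOST_PCI_MSG_TYPE_DEVICE_INFO_ACK,
            .op       = VHOST_PCI_DEVICE_INFO_DEL_DONE,
            .msg_seq  = msg->msg_seq,     /* echo the request's sequence number */
        };

        /* Quiesce the backend before acknowledging, so the frontend only
         * proceeds with removal once the server has really stopped. */
        stop_vhost_pci_driver(vp_dev);    /* placeholder for the teardown */

        write(sock_fd, &ack, sizeof(ack));
    }
}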

Best,
Wei



