
Re: [Qemu-devel] From virtio_kick until VM-exit?


From: Charls D. Chap
Subject: Re: [Qemu-devel] From virtio_kick until VM-exit?
Date: Wed, 27 Jul 2016 16:20:21 +0300

Hello List (again),
Thank you, Stefan, for your quick responses! You are great.


On Wed, Jul 27, 2016 at 3:52 PM, Stefan Hajnoczi <address@hidden> wrote:
>
> On Wed, Jul 27, 2016 at 12:19:52PM +0300, charls chap wrote:
> > Hello All,
> >
> > I am new with qemu, I am trying to understand the I/O path of a synchronous
> > I/O.
>
> What exactly do you mean by "synchronous I/O"?

An I/O request on a device opened with the O_SYNC and O_DIRECT flags,
so that we are sure it goes all the way down, until the data is
actually written to the physical device.
That's why I can't understand how the vcpu can continue execution
without waiting on a condition or sleeping.
If the vcpu is not sleeping, does that mean the vcpu didn't execute
the kick in the guest kernel?
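
To make it concrete, this is the kind of guest-side access I mean (a
minimal sketch; /dev/vdb and the 4096-byte block size are just
placeholders, and O_DIRECT needs an aligned buffer):

#define _GNU_SOURCE           /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *buf;
    int fd = open("/dev/vdb", O_WRONLY | O_SYNC | O_DIRECT);
    if (fd < 0)
        return 1;
    if (posix_memalign(&buf, 4096, 4096))
        return 1;
    /* write(2) should not return before the data reached the device */
    if (write(fd, buf, 4096) != 4096)
        return 1;
    close(fd);
    return 0;
}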


For the return path
--------------------------
> After the ioeventfd has been signalled, kvm.ko does a vmenter and
> resumes guest code execution.  The guest finds itself back after the
> instruction that wrote to VIRTIO_PCI_QUEUE_NOTIFY.

> During this time there has been no QEMU userspace activity because
> ioeventfd signalling happens in the kernel in the kvm.ko module.  So
> QEMU is still inside ioctl(KVM_RUN).

The iothread is in control, and this is the thread that will follow
the common kernel path for I/O submission and completion. I mean that,
after submitting the I/O, the iothread will be waiting in a host-kernel
I/O wait queue.

In the meantime, kvm does a VM-entry to where?
Since the interrupt has not completed yet, the return point can't be
the guest's interrupt handler...

In short, I still can't find the following:
from which thread, in which function, do we VM-exit to which point in kvm.ko?
and
from which point of kvm.ko do we VM-enter to which point/function in QEMU?

From which point of the host kernel does virtual interrupt injection
reach which point/function in QEMU?
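
For the injection part, if I read the KVM API documentation right,
without irqfd userspace would assert the line with something like this
(a sketch against the raw KVM API, not QEMU code; vm_fd and GSI 5 are
placeholders):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Sketch of virtual interrupt injection through the raw KVM API with
 * an in-kernel irqchip; vm_fd and GSI 5 are placeholders. */
static void inject_irq(int vm_fd)
{
    struct kvm_irq_level irq = { .irq = 5, .level = 1 };

    ioctl(vm_fd, KVM_IRQ_LINE, &irq);  /* assert the line */
    irq.level = 0;
    ioctl(vm_fd, KVM_IRQ_LINE, &irq);  /* deassert again */
}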


>
>
> Most modern devices have asynchronous interfaces (i.e. a ring or list of
> requests that complete with an interrupt after the vcpu submits them and
> continues execution).
>
> > 1) If I am correct:
> > when we run QEMU in emulation mode, WITHOUT KVM, then we run on the TCG
> > runtime. No vcpu threads?
> >
> > qemu_tcg_cpu_thread_fn
> > tcg_exec_all();
> >
> > No interactions with the kvm module. On the other hand, when we have
> > virtualization, there are no
> > interactions with any part of the TCG implementation.
>
> Yes, it's either TCG or KVM.
>
> > tb_gen_code in translate-all, and tb_find_slow and tb_find_fast, are not
> > part of TCG proper, so are they still
> > executed in the KVM case?
> > So if we have
> > for (;;)
> >     c++;
> >
> > does the vcpu thread execute this code using cpu-exec?
>
> In the KVM case the vcpu thread does ioctl(KVM_RUN) to execute guest
> code.
ioctl(KVM_RUN) means that we have a QEMU/host switch. So how can we say
that guest code is executed natively?
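
My mental model of that loop, against the raw KVM API (a sketch, not
QEMU's actual kvm_cpu_exec), is the following; if I understand
correctly, while the thread is blocked inside the ioctl, the physical
CPU is executing guest instructions directly:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Shape of a vcpu thread's run loop against the raw KVM API (a sketch,
 * not QEMU's actual kvm_cpu_exec).  'run' is the vcpu's mmap'ed
 * struct kvm_run. */
static void vcpu_loop(int vcpu_fd, struct kvm_run *run)
{
    for (;;) {
        /* Guest instructions run natively on the CPU inside here;
         * the ioctl only returns on a heavyweight exit. */
        ioctl(vcpu_fd, KVM_RUN, 0);

        switch (run->exit_reason) {
        case KVM_EXIT_IO:
            /* dispatch the port I/O access to device emulation */
            break;
        case KVM_EXIT_MMIO:
            /* dispatch the memory-mapped access */
            break;
        default:
            break;
        }
        /* looping back makes kvm.ko vmenter the guest again */
    }
}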


>
> > 2)
> > What is this pipe? I mean, between whom?
> > When is it used?
> > int event_notifier_test_and_clear(EventNotifier *e)
> > {
> >     int value;
> >     ssize_t len;
> >     char buffer[512];
> >
> >     /* Drain the notify pipe.  For eventfd, only 8 bytes will be read.  */
> >     value = 0;
> >     do {
> >         len = read(e->rfd, buffer, sizeof(buffer));
> >         value |= (len > 0);
> >     } while ((len == -1 && errno == EINTR) || len == sizeof(buffer));
> >
> >     return value;
> > }
>
> Read eventfd(2) to understand this primitive.  The "pipe" part is a
> fallback for systems that don't support eventfd(2).  eventfd is used for
> signalling between threads.
>
A pipe between whom?
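
From the man page, it does not look like a pipe pair at all, just a
single fd with an 8-byte counter that one thread writes and another
reads (a minimal sketch):

#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Minimal sketch of eventfd(2) signalling: a single fd carrying an
 * 8-byte counter.  One thread (or kvm.ko) write(2)s a count, and the
 * monitoring thread's read(2) returns it and resets the counter. */
int main(void)
{
    uint64_t val = 1;
    int efd = eventfd(0, 0);

    write(efd, &val, sizeof(val));  /* signaller side */
    read(efd, &val, sizeof(val));   /* monitor side wakes up */

    close(efd);
    return 0;
}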


> The kvm.ko module can signal an ioeventfd when a particular memory or
> I/O address is written.  This means that the thread monitoring the
> ioeventfd will run when the guest has written to the memory or I/O
> address.
>
> This ioeventfd mechanism is an alternative to the "heavyweight exit"
> code path (return from ioctl(KVM_RUN) and dispatch the memory or I/O
> access in QEMU vcpu thread context before calling ioctl(KVM_RUN) again).
> The advantage of ioeventfd is that device emulation can happen in a
> separate thread while the vcpu continues executing guest code.
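
So the binding itself would be something like this, if I follow (a
sketch against the raw KVM API; vm_fd, efd and the port number are
placeholders):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Sketch of binding an eventfd to a guest I/O port so kvm.ko signals
 * it on write instead of taking the heavyweight exit to userspace.
 * vm_fd, efd and the port number are placeholders. */
static int wire_ioeventfd(int vm_fd, int efd)
{
    struct kvm_ioeventfd ioev = {
        .addr  = 0xc050,                   /* guest I/O port to watch */
        .len   = 2,                        /* width of the guest write */
        .fd    = efd,
        .flags = KVM_IOEVENTFD_FLAG_PIO,   /* I/O port, not MMIO */
    };
    return ioctl(vm_fd, KVM_IOEVENTFD, &ioev);
}

After this, a guest write to that port signals efd and the vcpu can
vmenter immediately, while whichever thread is polling efd does the
device emulation.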
>
> >
> > 3)
> > I've tried to trace the iothread.
> > It seems that the following functions are executed once:
> > iothread_class_init
> > iothread_register_types
> >
> > But I have no idea when static void *iothread_run(void *opaque) runs.
> > Actually, when is the iothread created?
>
> An IOThread is only created if you put -object iothread,id=iothread0 on
> the command-line.  Then you can associate a virtio-blk or virtio-scsi
> device with a particular IOThread: -device
> virtio-blk-pci,iothread=iothread0,drive=drive0.
>
> When no IOThread is given on the command-line, the ioeventfd processing
> happens in the QEMU main loop thread.
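
So a complete invocation would look something like this, if I read
that right (disk.img is a placeholder):

qemu-system-x86_64 -enable-kvm \
    -object iothread,id=iothread0 \
    -drive if=none,id=drive0,file=disk.img,format=raw \
    -device virtio-blk-pci,iothread=iothread0,drive=drive0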
>



> Stefan



Thanks,
Charls


