qemu-devel

Re: [Qemu-devel] [RFC] block: Removed coroutine ownership assumption


From: Peter Crosthwaite
Subject: Re: [Qemu-devel] [RFC] block: Removed coroutine ownership assumption
Date: Mon, 2 Jul 2012 20:18:37 +1000

On Mon, Jul 2, 2012 at 8:01 PM, Kevin Wolf <address@hidden> wrote:
> Am 02.07.2012 11:42, schrieb Peter Crosthwaite:
>> On Mon, Jul 2, 2012 at 7:04 PM, Kevin Wolf <address@hidden> wrote:
>>> Am 02.07.2012 10:57, schrieb Peter Crosthwaite:
>>>> No conditional on the qemu_coroutine_create. So it will always create
>>>> a new coroutine for its work which will solve my problem. All I need
>>>> to do is pump events once at the end of machine model creation. And my
>>>> coroutines will never yield or get queued by block/AIO. Sound like a
>>>> solution?
>>>
>>> If you don't need the read data in your initialisation code,
>>
>> definitely not :) Just as long as the read data is there by the time
>> the machine goes live. What's the current policy with bdrv_read()ing
>> from init functions anyway? Several devices in qemu have init
>> functions that read the entire storage into a buffer (then the guest
>> just talks to the buffer rather than the backing store).
>
> Reading from block devices during device initialisation breaks
> migration, so I'd like to see it go away wherever possible. Reading in
> the whole image file doesn't sound like something for which a good
> excuse exists,

Makes sense for small devices on embedded platforms. You end up with a
very simple and clean device model. The approach is totally broken for
HDDs but it does make some sense for the little embedded flashes where
you can get away with caching the entire device storage in RAM for the
lifetime of the system.

> you can do that as well during the first access.
>

Kind of painful though to change this for a lot of existing devices.
It's a reasonable policy for new devices, but can we just fix the
init->bdrv_read() calls in place for the moment?

>> Pflash (pflash_cfi_01.c) is the device that is causing me interference
>> here and it works exactly like this. If we make the bdrv_read() aio
>> though, how do you ensure that it has completed before the guest talks
>> to the device? Will this just happen at the end of machine_init
>> anyways? Can we put a one liner in the machine init framework that
>> pumps all AIO events then just mass convert all these bdrv_reads (in
>> init functions) to bdrv_aio_read with a nop completion callback?
>
> The initialisation function of the device can wait at its end for all
> AIOs to return.

You lose the performance increase discussed below however, as instead
of the init function returning to continue on with the machine init,
you block on disk IO.

> I wouldn't want to encourage more block layer use during
> the initialisation phase by supporting it in the infrastructure.
>
>>> then yes,
>>> that would work. bdrv_aio_* will always create a new coroutine. I just
>>> assumed that you wanted to use the data right away, and then using the
>>> AIO functions wouldn't have made much sense.
>>
>> You'd get a small performance increase, no? Your machine init continues
>> on while your I/O happens rather than being synchronous so there is
>> motivation beyond my situation.
>
> Yeah, as long as the next statement isn't "while (!returned)
> qemu_aio_wait();", which it is in the common case.
>
> Kevin

Regards,
Peter


