Re: [Qemu-devel] [PATCH v6 2/2] block: Support GlusterFS as a QEMU block backend

From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH v6 2/2] block: Support GlusterFS as a QEMU block backend
Date: Thu, 06 Sep 2012 12:07:50 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:13.0) Gecko/20120605 Thunderbird/13.0
On 06.09.2012 11:38, Paolo Bonzini wrote:
> On 06/09/2012 11:06, Kevin Wolf wrote:
>>>> If it works, I think this change would be preferable to using a "magic"
>>>> BH in every driver.
>> The way it works in posix-aio-compat is that the request is first
>> removed from the list and then the callback is called. This way
>> posix_aio_flush() can return 0 and bdrv_drain_all() completes.
>
> So the same could be done in gluster: first decrease qemu_aio_count,
> then call the callback, then call qemu_aio_release.
>
> But in either case, wouldn't that leak the AIOCBs until the end of
> qcow2_create?
>
> The AIOCB is already invalid at the time the callback is entered, so we
> could release it before the call. However, not all implementation of
> AIO are ready for that and I'm not really in the mood for large scale
> refactoring...
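A minimal sketch of the reordering Paolo suggests above (decrease qemu_aio_count, then call the callback, then release the AIOCB); it assumes the GlusterAIOCB and BDRVGlusterState layout of the patch under review, so names may differ:

    static void qemu_gluster_complete_aio(GlusterAIOCB *acb)
    {
        BDRVGlusterState *s = acb->common.bs->opaque;

        /* Account the request as finished first, so that .io_flush()
         * can return 0 and bdrv_drain_all() is able to make progress. */
        s->qemu_aio_count--;

        /* Only then hand the result to the guest-visible callback... */
        acb->common.cb(acb->common.opaque, acb->ret);

        /* ...and finally release the AIOCB. */
        qemu_aio_release(acb);
    }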
By the way, what I'd really want to see in the end is to get rid of
qemu_aio_flush() and replace it with .bdrv_drain() callbacks in each
BlockDriver. The way we're doing it today is a layering violation.
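Roughly the shape this could take (purely hypothetical, no such callback exists in BlockDriver today):

    struct BlockDriver {
        /* ... existing members ... */
        /* hypothetical: wait for all in-flight requests that this
         * driver has issued for the given BDS */
        void (*bdrv_drain)(BlockDriverState *bs);
    };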
Doesn't change anything about this problem, though. So the options that
we have are:
1. Delay the callback using a BH. Doing this in each driver is ugly.
But is there actually more than one possible callback in today's
coroutine world? I only see bdrv_co_io_em_complete(), which could
reenter the coroutine from a BH.
2. Delay the callback by just calling it later when the cleanup has
been completed and .io_flush() can return 0. You say that it's hard
to implement for some drivers, except if the AIOCBs are leaked until
the end of functions like qcow2_create().
3. Add a delay only later in functions like bdrv_drain_all() that assume
that the request has completed. Are there more functions of this type? AIOCBs
are leaked until a bdrv_drain_all() call. Does it work with draining
specific BDSes instead of everything?
Unless I forgot some important point, it almost looks like option 1 is
the easiest and safest.
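For reference, a minimal sketch of what option 1 could look like in the gluster driver (illustrative only; it assumes the GlusterAIOCB fields from the patch under review plus a bh pointer, none of which is settled):

    static void qemu_gluster_aio_bh(void *opaque)
    {
        GlusterAIOCB *acb = opaque;
        BDRVGlusterState *s = acb->common.bs->opaque;

        qemu_bh_delete(acb->bh);

        /* The request is accounted as finished before the callback
         * runs, so .io_flush() already returns 0 at this point. */
        s->qemu_aio_count--;
        acb->common.cb(acb->common.opaque, acb->ret);
        qemu_aio_release(acb);
    }

    /* ...and in the completion path, instead of invoking the callback
     * directly: */
    acb->bh = qemu_bh_new(qemu_gluster_aio_bh, acb);
    qemu_bh_schedule(acb->bh);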
Kevin