From: Bharata B Rao
Subject: Re: [Qemu-devel] [PATCH v6 2/2] block: Support GlusterFS as a QEMU block backend
Date: Wed, 5 Sep 2012 13:11:06 +0530
User-agent: Mutt/1.5.21 (2010-09-15)

On Thu, Aug 09, 2012 at 06:32:16PM +0530, Bharata B Rao wrote:
> +static void qemu_gluster_complete_aio(GlusterAIOCB *acb)
> +{
> +    int ret;
> +
> +    if (acb->canceled) {
> +        qemu_aio_release(acb);
> +        return;
> +    }
> +
> +    if (acb->ret == acb->size) {
> +        ret = 0; /* Success */
> +    } else if (acb->ret < 0) {
> +        ret = acb->ret; /* Read/Write failed */
> +    } else {
> +        ret = -EIO; /* Partial read/write - fail it */
> +    }
> +    acb->common.cb(acb->common.opaque, ret);

The .cb() here is bdrv_co_io_em_complete(). It records the return value,
re-enters the waiting coroutine via qemu_coroutine_enter(), and control
then normally comes back to this point.
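
For reference, that callback in block.c is essentially the following
(paraphrased from the block layer of this era, not part of this patch,
so the details may differ slightly):

static void bdrv_co_io_em_complete(void *opaque, int ret)
{
    CoroutineIOCompletion *co = opaque;

    co->ret = ret;
    /* Wake up the coroutine that yielded in bdrv_co_io_em() */
    qemu_coroutine_enter(co->coroutine, NULL);
}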

But if bdrv_read, bdrv_write or bdrv_flush was itself called from
coroutine context (as opposed to those routines creating a new coroutine
of their own), the .cb() call above doesn't return to this point. Hence
I won't be able to release the acb and decrement qemu_aio_count.

What could be the issue here? In general, how do I ensure that my aio
calls get completed correctly in scenarios where bdrv_read etc. are
called from coroutine context rather than from main thread context?
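
To make the two calling contexts concrete, bdrv_read()/bdrv_write()
dispatch roughly like this in block.c (condensed and paraphrased, with
fields elided, so treat it as a sketch rather than the exact code):

static int bdrv_rw_co(BlockDriverState *bs, int64_t sector_num,
                      uint8_t *buf, int nb_sectors, bool is_write)
{
    RwCo rwco = { .bs = bs, /* ... */ .ret = NOT_DONE };

    if (qemu_in_coroutine()) {
        /* Fast path: we are already inside a coroutine. The request runs
         * in the caller's coroutine and yields in bdrv_co_io_em() until
         * the completion callback re-enters it. */
        bdrv_rw_co_entry(&rwco);
    } else {
        /* Main thread path: a new coroutine is created and the caller
         * waits in qemu_aio_wait(), which is also what dispatches
         * qemu_gluster_aio_event_reader() above. */
        Coroutine *co = qemu_coroutine_create(bdrv_rw_co_entry);
        qemu_coroutine_enter(co, &rwco);
        while (rwco.ret == NOT_DONE) {
            qemu_aio_wait();
        }
    }
    return rwco.ret;
}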

Creating a qcow2 image leads to exactly this scenario: ->bdrv_create
(= qcow2_create) creates a coroutine, and the subsequent reads and
writes are then issued from within qcow2_create, i.e. in coroutine
context itself.
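
The relevant part of bdrv_create() is, again paraphrased from block.c
rather than quoted exactly:

int bdrv_create(BlockDriver *drv, const char *filename,
                QEMUOptionParameter *options)
{
    CreateCo cco = { .drv = drv, .filename = g_strdup(filename),
                     .options = options, .ret = NOT_DONE };
    Coroutine *co;

    /* ... error checks elided ... */
    if (qemu_in_coroutine()) {
        /* Fast-path if already in coroutine context */
        bdrv_create_co_entry(&cco);
    } else {
        co = qemu_coroutine_create(bdrv_create_co_entry);
        qemu_coroutine_enter(co, &cco);
        while (cco.ret == NOT_DONE) {
            qemu_aio_wait();
        }
    }
    g_free(cco.filename);
    return cco.ret;
}

So for "qemu-img create -f qcow2 ...", bdrv_create() creates the
coroutine, and the reads and writes that qcow2_create() issues then take
the qemu_in_coroutine() fast path shown above.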

> +    qemu_aio_release(acb);
> +}
> +
> +static void qemu_gluster_aio_event_reader(void *opaque)
> +{
> +    BDRVGlusterState *s = opaque;
> +    GlusterAIOCB *event_acb;
> +    int event_reader_pos = 0;
> +    ssize_t ret;
> +
> +    do {
> +        char *p = (char *)&event_acb;
> +
> +        ret = read(s->fds[GLUSTER_FD_READ], p + event_reader_pos,
> +                   sizeof(event_acb) - event_reader_pos);
> +        if (ret > 0) {
> +            event_reader_pos += ret;
> +            if (event_reader_pos == sizeof(event_acb)) {
> +                event_reader_pos = 0;
> +                qemu_gluster_complete_aio(event_acb);
> +                s->qemu_aio_count--;
> +            }
> +        }
> +    } while (ret < 0 && errno == EINTR);
> +}
> +



