From: Jan Kiszka
Subject: Re: [Qemu-devel] [RFC] [PATCHv10 00/31] aio / timers: Add AioContext timers and use ppoll
Date: Tue, 13 Aug 2013 14:22:41 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2013-08-11 18:42, Alex Bligh wrote:
> [ This patch set is available from git at:
>    https://github.com/abligh/qemu/tree/aio-timers10
> As autogenerated patch 30 of the series is too large for the mailing list. ]
> 
> This patch series adds support for timers attached to an AioContext clock
> which get called within aio_poll.

OK, here are my findings from using this API to run the timers of the
RTC device model in their own thread:


In general, the new timer API makes sense to me and turned out to be
reusable for factoring out separate timer threads. There are basically
two ways to do this. One is to use the timerlist abstraction directly,
mostly reimplementing what AioContext provides (timerlist processing +
early wakeups via notification). The second is to tweak aio_poll and the
AioContext setup according to the needs of a timer handling thread.
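
For reference, a minimal sketch of the first variant could look like the
following (the timerlist and event notifier calls are from this series
and existing code; the RTCState fields and the function itself are made
up for illustration, and includes/error handling are omitted as in the
fragments below):

static void *rtc_timerlist_thread(void *opaque)
{
    RTCState *s = opaque;
    struct pollfd pfd = {
        .fd = event_notifier_get_fd(&s->notifier),
        .events = POLLIN,
    };

    for (;;) {
        /* ns until the next expiry, or -1 if no timer is pending
         * (then block indefinitely) */
        int64_t deadline = timerlist_deadline_ns(s->timer_list);
        struct timespec ts, *tsp = NULL;

        if (deadline >= 0) {
            ts.tv_sec = deadline / 1000000000LL;
            ts.tv_nsec = deadline % 1000000000LL;
            tsp = &ts;
        }
        if (ppoll(&pfd, 1, tsp, NULL) > 0) {
            /* early wakeup: someone modified a timer */
            event_notifier_test_and_clear(&s->notifier);
        }
        qemu_mutex_lock(&s->lock);
        timerlist_run_timers(s->timer_list);
        qemu_mutex_unlock(&s->lock);
    }

    return NULL;
}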

By tweaking aio_poll (the second variant), I mean:

bool aio_poll(AioContext *ctx, bool blocking,
              void (*blocking_cb)(bool, void *),
              void *blocking_cb_opaque);

i.e. adding a callback that aio_poll invokes right before and right after
waiting for events/timeouts. This makes it possible to drop and reacquire
locks that protect data structures used both by the timer thread and by
other threads running the device model. The result looks like this:

/* invoked by aio_poll with blocking == true right before it waits
 * and with blocking == false right after it wakes up */
static void rtc_aio_blocking_cb(bool blocking, void *opaque)
{
    RTCState *s = opaque;

    if (blocking) {
        qemu_mutex_unlock(&s->lock);
    } else {
        qemu_mutex_lock(&s->lock);
    }
}

static void *rtc_aio_thread(void *opaque)
{
    RTCState *s = opaque;

    qemu_mutex_lock(&s->lock);
    /* handshake: tell the creating thread we are up and polling */
    s->thread_init_done = true;
    qemu_cond_signal(&s->init_cond);

    while (1) {
        aio_poll(s->aio_ctx, true, rtc_aio_blocking_cb, s);
    }

    return NULL;
}
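
For completeness, the counterpart on the creating side is just the usual
startup handshake (field names as above, otherwise hypothetical):

static void rtc_start_aio_thread(RTCState *s)
{
    qemu_mutex_lock(&s->lock);
    qemu_thread_create(&s->thread, rtc_aio_thread, s,
                       QEMU_THREAD_JOINABLE);
    /* wait for the handshake from rtc_aio_thread before arming timers */
    while (!s->thread_init_done) {
        qemu_cond_wait(&s->init_cond, &s->lock);
    }
    qemu_mutex_unlock(&s->lock);
}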


Another trick necessary to make this work is the following:

/* pretend there is always pending work on the notifier so that
 * aio_poll is willing to block on it */
static int rtc_aio_flush_true(EventNotifier *e)
{
    return 1;
}

...
    s->aio_ctx = aio_context_new();
    aio_set_event_notifier(s->aio_ctx, &s->aio_ctx->notifier,
                           (EventNotifierHandler *)
                           event_notifier_test_and_clear,
                           rtc_aio_flush_true);

i.e. enable blocking of aio_poll via the only I/O channel a timer thread
has: the event notifier.
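
With that in place, whoever (re)arms a timer from another thread just has
to kick the blocking aio_poll so that the new deadline takes effect. A
sketch, assuming timer_mod from this series and a hypothetical
periodic_timer field (if the timer list's notify callback already calls
aio_notify, the explicit call is redundant):

static void rtc_arm_periodic(RTCState *s, int64_t expire_ns)
{
    qemu_mutex_lock(&s->lock);
    timer_mod(s->periodic_timer, expire_ns);
    qemu_mutex_unlock(&s->lock);
    /* wake the timer thread out of its ppoll */
    aio_notify(s->aio_ctx);
}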

But these shortcomings in reusing AioContext are not due to the new
timer API; they predate it.


I've just tested a prototype here. It consists of Paolo's RCU work, this
series, support for real-time threads, unlocked address space access
dispatching, and several hacks to make the RTC work outside the BQL (most
of the time). Our benchmark is a guest that validates the periodic RTC
timer IRQ against a second clock source at ~1 kHz - and it's happy with
the results so far.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux


