

From: Luigi Rizzo
Subject: Re: FreeBSD timing issues and qemu (was: Re: [Qemu-devel] Re: Breakage with local APIC routing)
Date: Sat, 12 Sep 2009 17:48:36 +0200
User-agent: Mutt/1.4.2.3i

On Fri, Sep 11, 2009 at 01:01:54PM -0400, John Baldwin wrote:
> On Friday 11 September 2009 1:03:17 pm Luigi Rizzo wrote:
...
> > Note that the per-CPU ticks I was proposing were only visible to the
> > timing wheels, which don't use absolute timeouts anyway.
> > So I think the mechanism would be quite safe: right now, when you
> > request a callout after x ticks, the code first picks a CPU
> > (by some criterion), then puts the request in the timer wheel for
> > that CPU using (at present) the global 'ticks'. Replacing ticks with
> > cc->cc_ticks would completely remove the races in insertion and removal.
> > 
> > I actually find the per-CPU ticks even less intrusive than this change.
> 
> Well, it depends.  If TCP ever started using per-CPU callouts (i.e. 
> callout_reset_on())

It seems that this is already the case in practice.

callout_reset() is just #defined to callout_reset_on(c, ..., (c)->c_cpu),
so all calls end up there.
c->c_cpu is initialized in callout_init() as c->c_cpu = timeout_cpu
(timeout_cpu is a static int variable; I still don't understand what
final value it ends up with, because the comment says that
kern_timeout_callwheel_alloc() can be called multiple times,
and that is where timeout_cpu is initialized).

cheers
luigi



