


From: Alex Bligh
Subject: Re: [Qemu-devel] [RFC] [PATCHv8 09/30] aio / timers: Add QEMUTimerListGroup and helper functions
Date: Sun, 11 Aug 2013 09:29:12 +0100

Paolo,

--On 11 August 2013 09:53:38 +0200 Paolo Bonzini <address@hidden> wrote:

> There is actually a disadvantage of moving TimerListGroup to AioContext.
> The disadvantage is that GSources can only work with millisecond
> resolution.  Thus you would need anyway some special casing of the
> "timer AioContext" to get the deadline in nanoseconds.
>
> We also need to special case the notifier as it needs to qemu_notify()
> rather than aio_notify().
>
> So let's keep the TimerListGroup for now.
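
Right - and just to illustrate the resolution mismatch: a GSource's
prepare() hands back its timeout in whole milliseconds, so a nanosecond
deadline has to be rounded up before glib can use it, along these lines
(a sketch only; the helper name is invented, not from the series):

static int ns_deadline_to_ms_timeout(int64_t deadline_ns)
{
    int64_t ms;

    if (deadline_ns < 0) {
        return -1;                 /* no pending timer: block forever */
    }
    /* Round up: rounding down could wake us just before the deadline,
     * only to sleep another full millisecond. */
    ms = (deadline_ns + 999999) / 1000000;
    return (ms > INT32_MAX) ? INT32_MAX : (int)ms;
}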

OK - do you want me to wrap it in a struct? Something like the sketch
below is what I have in mind. Other than that, I think I've addressed
all the comments on v8. Happy to fold that into v10 if there are other
comments on v9.
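
A minimal sketch, assuming the wrapping you mean is simply putting the
bare array inside a struct:

/* Sketch only: TimerListGroup as a struct rather than an array
 * typedef - one QEMUTimerList per clock type. */
typedef struct QEMUTimerListGroup {
    QEMUTimerList *tl[QEMU_CLOCK_MAX];
} QEMUTimerListGroup;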

I note that no one has yet commented on the changes to the icount code,
where a timeout is apparently arbitrarily capped at 2^31 ns (about
2.1 seconds) in PATCHv9 19/31, attached below. That's the area I'm
worried about, as I'm not sure I've understood the code.
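
For reference, the clamped deadline then goes through
qemu_icount_round(), which - if I'm reading cpus.c correctly - rounds
the nanosecond value up to a whole number of icount units:

/* As in cpus.c at the time (quoted from memory - check the tree):
 * convert a nanosecond deadline into an instruction-count budget,
 * rounding up to the icount resolution. */
static int64_t qemu_icount_round(int64_t count)
{
    return (count + (1 << icount_time_shift) - 1) >> icount_time_shift;
}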

--
Alex Bligh



---------- Forwarded Message ----------

Subject: [RFC] [PATCHv9 19/31] aio / timers: Use all timerlists in icount warp calculations

...

For compatibility, maintain an apparent bug whereby, when using
icount, if no vm_clock timer was set, qemu_clock_deadline would
return INT32_MAX, always setting an icount clock expiry about
2 seconds ahead.
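
[Annotation, not part of the commit message: INT32_MAX is
2,147,483,647 ns, i.e. just under 2.15 seconds - the "about 2 seconds"
above. The clamp, repeated verbatim in both hunks below, could equally
be a helper, sketched here with an invented name:

static int64_t icount_deadline_clamp(int64_t deadline_ns)
{
    /* -1 (no vm_clock timer set) and anything more than INT32_MAX ns
     * ahead both become INT32_MAX ns, preserving the historic
     * ~2 second expiry. */
    if (deadline_ns < 0 || deadline_ns > INT32_MAX) {
        return INT32_MAX;
    }
    return deadline_ns;
}

]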

...

diff --git a/cpus.c b/cpus.c
index 0f65e76..673d506 100644
--- a/cpus.c
+++ b/cpus.c

...

@@ -314,7 +314,18 @@ void qemu_clock_warp(QEMUClock *clock)
    }

    vm_clock_warp_start = qemu_get_clock_ns(rt_clock);
-    deadline = qemu_clock_deadline(vm_clock);
+    /* We want to use the earliest deadline from ALL vm_clocks */
+    deadline = qemu_clock_deadline_ns_all(vm_clock);
+
+    /* Maintain prior (possibly buggy) behaviour where if no deadline
+     * was set (as there is no vm_clock timer) or it is more than
+     * INT32_MAX nanoseconds ahead, we still use INT32_MAX
+     * nanoseconds.
+     */
+    if ((deadline < 0) || (deadline > INT32_MAX)) {
+        deadline = INT32_MAX;                 ///// <-------- THIS
+    }
+
    if (deadline > 0) {
        /*
         * Ensure the vm_clock proceeds even when the virtual CPU goes to
@@ -333,8 +344,8 @@ void qemu_clock_warp(QEMUClock *clock)
         * packets continuously instead of every 100ms.
         */
        qemu_mod_timer(icount_warp_timer, vm_clock_warp_start + deadline);
-    } else {
-        qemu_notify_event();
+    } else if (deadline == 0) {               //// <---------- AND THIS
+        qemu_clock_notify(vm_clock);
    }
}

...

@@ -1145,11 +1161,23 @@ static int tcg_cpu_exec(CPUArchState *env)
#endif
    if (use_icount) {
        int64_t count;
+        int64_t deadline;
        int decr;
        qemu_icount -= (env->icount_decr.u16.low + env->icount_extra);
        env->icount_decr.u16.low = 0;
        env->icount_extra = 0;
-        count = qemu_icount_round(qemu_clock_deadline(vm_clock));
+        deadline = qemu_clock_deadline_ns_all(vm_clock);
+
+        /* Maintain prior (possibly buggy) behaviour where if no deadline
+         * was set (as there is no vm_clock timer) or it is more than
+         * INT32_MAX nanoseconds ahead, we still use INT32_MAX
+         * nanoseconds.
+         */
+        if ((deadline < 0) || (deadline > INT32_MAX)) {
+            deadline = INT32_MAX;          ///// <----------- AND THIS
+        }
+
+        count = qemu_icount_round(deadline);
        qemu_icount += count;
        decr = (count > 0xffff) ? 0xffff : count;
        count -= decr;

--
Alex Bligh


