From: Ulrich Obergfell
Subject: [Qemu-devel] [PATCH v4 4/5] hpet 'driftfix': add code in update_irq() to detect coalesced interrupts (x86 apic only)
Date: Mon, 9 May 2011 09:03:20 +0200

update_irq() uses a method similar to the 'rtc_td_hack' approach to detect
coalesced interrupts. The function entry addresses are retrieved
via 'target_get_irq_delivered' and 'target_reset_irq_delivered'.

This change can be replaced once a generic feedback infrastructure for
tracking coalesced IRQs of periodic, clock-providing devices becomes
available.

Signed-off-by: Ulrich Obergfell <address@hidden>
---
 hw/hpet.c |   13 +++++++++++--
 1 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/hw/hpet.c b/hw/hpet.c
index 7ab6e62..e57c654 100644
--- a/hw/hpet.c
+++ b/hw/hpet.c
@@ -175,11 +175,12 @@ static inline uint64_t hpet_calculate_diff(HPETTimer *t, 
uint64_t current)
     }
 }
 
-static void update_irq(struct HPETTimer *timer, int set)
+static int update_irq(struct HPETTimer *timer, int set)
 {
     uint64_t mask;
     HPETState *s;
     int route;
+    int irq_delivered = 1;
 
     if (timer->tn <= 1 && hpet_in_legacy_mode(timer->state)) {
         /* if LegacyReplacementRoute bit is set, HPET specification requires
@@ -204,8 +205,16 @@ static void update_irq(struct HPETTimer *timer, int set)
         qemu_irq_raise(s->irqs[route]);
     } else {
         s->isr &= ~mask;
-        qemu_irq_pulse(s->irqs[route]);
+        if (s->driftfix) {
+            target_reset_irq_delivered();
+            qemu_irq_raise(s->irqs[route]);
+            irq_delivered = target_get_irq_delivered();
+            qemu_irq_lower(s->irqs[route]);
+        } else {
+            qemu_irq_pulse(s->irqs[route]);
+        }
     }
+    return irq_delivered;
 }
 
 static void hpet_pre_save(void *opaque)
-- 
1.6.2.5