Re: [Qemu-devel] [patch v4 12/16] e1000: apply fine lock on e1000


From: Jan Kiszka
Subject: Re: [Qemu-devel] [patch v4 12/16] e1000: apply fine lock on e1000
Date: Tue, 23 Oct 2012 11:04:57 +0200

On 2012-10-22 11:23, Liu Ping Fan wrote:
> Use a local lock to protect e1000. When calling system functions, the
> fine-grained lock is dropped before acquiring the big lock. This can
> expose inconsistent device state, which needs extra effort to fix.
> 
> Signed-off-by: Liu Ping Fan <address@hidden>
> ---
>  hw/e1000.c |   24 +++++++++++++++++++++++-
>  1 files changed, 23 insertions(+), 1 deletions(-)
> 
> diff --git a/hw/e1000.c b/hw/e1000.c
> index ae8a6c5..5eddab5 100644
> --- a/hw/e1000.c
> +++ b/hw/e1000.c
> @@ -85,6 +85,7 @@ typedef struct E1000State_st {
>      NICConf conf;
>      MemoryRegion mmio;
>      MemoryRegion io;
> +    QemuMutex e1000_lock;
>  
>      uint32_t mac_reg[0x8000];
>      uint16_t phy_reg[0x20];
> @@ -223,13 +224,27 @@ static const uint32_t mac_reg_init[] = {
>  static void
>  set_interrupt_cause(E1000State *s, int index, uint32_t val)
>  {
> +    QemuThread *t;
> +
>      if (val && (E1000_DEVID >= E1000_DEV_ID_82547EI_MOBILE)) {
>          /* Only for 8257x */
>          val |= E1000_ICR_INT_ASSERTED;
>      }
>      s->mac_reg[ICR] = val;
>      s->mac_reg[ICS] = val;
> -    qemu_set_irq(s->dev.irq[0], (s->mac_reg[IMS] & s->mac_reg[ICR]) != 0);
> +
> +    t = pthread_getspecific(qemu_thread_key);
> +    if (t->context_type == 1) {
> +        qemu_mutex_unlock(&s->e1000_lock);
> +        qemu_mutex_lock_iothread();
> +    }
> +    if (DEVICE(s)->state < DEV_STATE_STOPPING) {
> +        qemu_set_irq(s->dev.irq[0], (s->mac_reg[IMS] & s->mac_reg[ICR]) != 0);
> +    }
> +    if (t->context_type == 1) {
> +        qemu_mutex_unlock_iothread();
> +        qemu_mutex_lock(&s->e1000_lock);
> +    }

This is ugly for many reasons. First of all, it is racy: the register
contents may change while the device lock is dropped, no? Then you would
spuriously raise or clear an IRQ.

Second, it clearly shows that we need to address lock-less IRQ delivery.
We gain almost nothing if we have to take the global lock again just to
push an IRQ event to the guest. I'm repeating myself, but the problem to
be solved here is almost identical to fast IRQ delivery for assigned
devices (which we have only addressed in a pretty ad-hoc way for PCI so
far).

And third: too much boilerplate code... :-/

>  }
>  
>  static void
> @@ -268,6 +283,7 @@ static void e1000_reset(void *opaque)
>      E1000State *d = opaque;
>  
>      qemu_del_timer(d->autoneg_timer);
> +
>      memset(d->phy_reg, 0, sizeof d->phy_reg);
>      memmove(d->phy_reg, phy_reg_init, sizeof phy_reg_init);
>      memset(d->mac_reg, 0, sizeof d->mac_reg);
> @@ -448,7 +464,11 @@ e1000_send_packet(E1000State *s, const uint8_t *buf, int size)
>      if (s->phy_reg[PHY_CTRL] & MII_CR_LOOPBACK) {
>          s->nic->nc.info->receive(&s->nic->nc, buf, size);
>      } else {
> +        qemu_mutex_unlock(&s->e1000_lock);
> +        qemu_mutex_lock_iothread();
>          qemu_send_packet(&s->nic->nc, buf, size);
> +        qemu_mutex_unlock_iothread();
> +        qemu_mutex_lock(&s->e1000_lock);

And that is also a problem to be discussed next: how do we handle
locking of backends? Do we want separate locks for backend and frontend,
even though they are typically in a 1:1 relationship? Oh, I'm revealing
the content of my talk... ;)

>      }
>  }
>  
> @@ -1221,6 +1241,8 @@ static int pci_e1000_init(PCIDevice *pci_dev)
>      int i;
>      uint8_t *macaddr;
>  
> +    qemu_mutex_init(&d->e1000_lock);
> +
>      pci_conf = d->dev.config;
>  
>      /* TODO: RST# value should be 0, PCI spec 6.2.4 */
> 

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux
