
Re: [Qemu-devel] [PATCH] remove unused field from pci device.


From: Blue Swirl
Subject: Re: [Qemu-devel] [PATCH] remove unused field from pci device.
Date: Mon, 8 Jun 2009 20:40:29 +0300

On 6/8/09, Gleb Natapov <address@hidden> wrote:
> On Mon, Jun 08, 2009 at 08:31:09PM +0300, Blue Swirl wrote:
>  > On 6/8/09, Gleb Natapov <address@hidden> wrote:
>  > > Signed-off-by: Gleb Natapov <address@hidden>
>  > >  diff --git a/hw/pci.c b/hw/pci.c
>  > >  index 0ab5b94..02b335f 100644
>  > >  --- a/hw/pci.c
>  > >  +++ b/hw/pci.c
>  > >  @@ -268,7 +268,7 @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev, PCIBus *bus,
>  > >          config_write = pci_default_write_config;
>  > >      pci_dev->config_read = config_read;
>  > >      pci_dev->config_write = config_write;
>  > >  -    pci_dev->irq_index = pci_irq_index++;
>  > >  +    pci_irq_index++;
>  > >      bus->devices[devfn] = pci_dev;
>  > >      pci_dev->irq = qemu_allocate_irqs(pci_set_irq, pci_dev, 4);
>  > >      return pci_dev;
>  > >  diff --git a/hw/pci.h b/hw/pci.h
>  > >  index 0405837..fb7b89a 100644
>  > >  --- a/hw/pci.h
>  > >  +++ b/hw/pci.h
>  > >  @@ -154,8 +154,6 @@ struct PCIDevice {
>  > >      PCIConfigReadFunc *config_read;
>  > >      PCIConfigWriteFunc *config_write;
>  > >      PCIUnregisterFunc *unregister;
>  > >  -    /* ??? This is a PC-specific hack, and should be removed.  */
>  > >  -    int irq_index;
>  > >
>  > >      /* IRQ objects for the INTA-INTD pins.  */
>  > >      qemu_irq *irq;
>  >
>  > After the patch, pci_irq_index no longer tracks irqs but how many PCI
>  > devices there are in the system, so perhaps it should be renamed as
>  > well.
>
> pci_irq_index tracked the number of PCI devices before this patch too
>  since irq_index is unused anyway. Do you want me to resend the patch
>  with the renaming, or do the renaming in a separate patch?

I guess it does not affect bisectability if you also do the renaming
in the same patch.
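[For reference, a minimal sketch (not part of the thread) of what the follow-up rename could look like, assuming the counter stays a plain device counter and is renamed to a hypothetical pci_device_index; the identifier and hunk context are illustrative, not the name the submitter actually chose:]

--- a/hw/pci.c
+++ b/hw/pci.c
@@ (declaration of the counter; exact context line is illustrative)
-static int pci_irq_index;
+static int pci_device_index; /* illustrative name: counts registered PCI devices */
@@ (inside do_pci_register_device)
-    pci_irq_index++;
+    pci_device_index++;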



