Re: [Qemu-devel] [PATCH] Align PCI capabilities in pci_find_space


From: Alex Williamson
Subject: Re: [Qemu-devel] [PATCH] Align PCI capabilities in pci_find_space
Date: Tue, 25 Sep 2012 15:37:40 -0600

On Tue, 2012-09-25 at 15:59 -0500, address@hidden wrote:
> From: Matt Renzelmann <address@hidden>
> 
> The current implementation of pci_find_space() does not keep PCI
> capabilities aligned in PCI configuration space, even though the low
> two bits of each capability pointer are reserved.  This patch makes
> pci_find_space() return 4-byte-aligned offsets.
> 
> Signed-off-by: Matt Renzelmann <address@hidden>
> ---
> 
> This is my first patch to QEMU, so I hope I'm not screwing up too much.
> The purpose of this patch is to keep capability offsets 4-byte aligned
> by masking off their low-order two bits.  Linux, for example, masks
> these bits while scanning the PCI configuration space, so QEMU's
> behavior needs to match the standard.
> 
> No current QEMU device is likely to be affected, but it may become
> important later.
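> 
> For context, a rough sketch of the reader-side capability walk that
> motivates the masking (hypothetical code; read_cfg_byte() is a
> stand-in accessor, not a real QEMU or Linux function):
> 
>     #include <stdint.h>
> 
>     /* Hypothetical stand-in for a real config-space accessor. */
>     extern uint8_t read_cfg_byte(void *dev, uint8_t off);
> 
>     #define PCI_CAPABILITY_LIST     0x34  /* header: capabilities ptr */
>     #define PCI_CONFIG_HEADER_SIZE  0x40
> 
>     static uint8_t find_cap(void *dev, uint8_t wanted)
>     {
>         /* Mask the reserved low two bits of every pointer we follow. */
>         uint8_t pos = read_cfg_byte(dev, PCI_CAPABILITY_LIST) & ~3;
> 
>         while (pos >= PCI_CONFIG_HEADER_SIZE) {
>             if (read_cfg_byte(dev, pos) == wanted) { /* capability ID */
>                 return pos;
>             }
>             pos = read_cfg_byte(dev, pos + 1) & ~3;  /* next pointer  */
>         }
>         return 0;                                    /* not found     */
>     }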
> 
>  hw/pci.c |   14 ++++++++++----
>  1 files changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/pci.c b/hw/pci.c
> index e149305..8771b7e 100644
> --- a/hw/pci.c
> +++ b/hw/pci.c
> @@ -1571,11 +1571,17 @@ static int pci_find_space(PCIDevice *pdev, uint8_t size)
>      int config_size = pci_config_size(pdev);
>      int offset = PCI_CONFIG_HEADER_SIZE;
>      int i;
> -    for (i = PCI_CONFIG_HEADER_SIZE; i < config_size; ++i)
> -        if (pdev->used[i])
> -            offset = i + 1;
> -        else if (i - offset + 1 == size)
> +    int masked;
> +
> +    for (i = PCI_CONFIG_HEADER_SIZE; i < config_size; ++i) {
> +        masked = i & (~3);
> +        if (pdev->used[i]) {
> +            offset = masked + 4;
> +        } else if (i - offset + 1 == size) {
>              return offset;
> +        }
> +    }
> +
>      return 0;
>  }
>  

I think you could just search every 4th byte.  In fact, this whole used
byte-map could be turned into a single uint64_t bitmap for standard
config space.  Thanks,
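
Something like this minimal sketch, for illustration only (the name
pci_find_space_bitmap and the exact layout are made up): standard
config space is 256 bytes, i.e. 64 four-byte slots, so one bit per
slot fits in a single uint64_t:

    #include <stdint.h>

    #define PCI_CONFIG_HEADER_SIZE 64         /* standard header bytes */

    /* Find 'size' bytes of free, 4-byte-aligned config space.  'used'
     * holds one bit per 4-byte slot of the 256-byte standard space. */
    static int pci_find_space_bitmap(uint64_t used, uint8_t size)
    {
        int slots = (size + 3) / 4;           /* 4-byte slots needed */
        int i, j;

        for (i = PCI_CONFIG_HEADER_SIZE / 4; i + slots <= 64; i++) {
            for (j = 0; j < slots; j++) {
                if (used & (1ULL << (i + j))) {
                    break;                    /* slot in use */
                }
            }
            if (j == slots) {
                return i * 4;                 /* free run; byte offset */
            }
        }
        return 0;                             /* no room */
    }

Stepping one slot at a time also gives you the search-every-4th-byte
behavior for free, since offsets only ever advance in 4-byte units.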

Alex



