Re: [Qemu-devel] [PATCH] PCI: minor performance optimization


From: Cao jin
Subject: Re: [Qemu-devel] [PATCH] PCI: minor performance optimization
Date: Fri, 20 Nov 2015 19:04:07 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0



On 11/20/2015 06:45 PM, Michael S. Tsirkin wrote:
> On Fri, Nov 20, 2015 at 06:45:01PM +0800, Cao jin wrote:
>> 1. Do param check in pci_add_capability2(), as it is a public API.
>
> Separate patch pls.

OK


>> 2. As the spec says, each capability must be DWORD aligned, so an
>>    optimization can be done via loop unrolling.
>
> Why do we want to optimize it?

For a tiny performance improvement from fewer loop iterations. Take the PCI Express capability (60 bytes at most) for example: the byte-wise search may loop 60 times, while a 4-byte stride needs just 15 iterations, a quarter of before.
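To make the iteration count concrete, here is a minimal standalone sketch, not the QEMU code itself: the helper names and the main() harness are made up for illustration, and HEADER_SIZE/SPACE_SIZE stand in for QEMU's PCI_CONFIG_HEADER_SIZE (64) and PCI_CONFIG_SPACE_SIZE (256). It compares the byte-wise scan with the DWORD-stride scan:

/* Standalone sketch, not QEMU code: compare the byte-wise free-space
 * scan with the DWORD-stride variant from the patch. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define HEADER_SIZE 64
#define SPACE_SIZE  256

/* Original approach: one loop iteration per config-space byte. */
static int find_space_bytewise(const uint8_t *used, uint8_t size, int *iters)
{
    int offset = HEADER_SIZE;
    int i;

    for (i = HEADER_SIZE; i < SPACE_SIZE; ++i) {
        ++*iters;
        if (used[i]) {
            offset = i + 1;
        } else if (i - offset + 1 == size) {
            return offset;
        }
    }
    return 0;
}

/* Patched approach: capabilities are DWORD aligned, so only every fourth
 * byte needs testing.  The `i - offset >= size` test mirrors the patch;
 * it is conservative and may skip one extra dword before declaring a fit. */
static int find_space_stride4(const uint8_t *used, uint8_t size, int *iters)
{
    int offset = HEADER_SIZE;
    int i;

    for (i = HEADER_SIZE; i < SPACE_SIZE; i += 4) {
        ++*iters;
        if (used[i]) {
            offset = i + 4;
        } else if (i - offset >= size) {
            return offset;
        }
    }
    return 0;
}

int main(void)
{
    uint8_t used[SPACE_SIZE] = { 0 };
    int it1 = 0, it4 = 0;

    /* Mark bytes 64..123 as taken, then search for a 60-byte hole. */
    memset(used + HEADER_SIZE, 1, 60);

    printf("byte-wise: offset %d after %d iterations\n",
           find_space_bytewise(used, 60, &it1), it1);
    printf("stride-4:  offset %d after %d iterations\n",
           find_space_stride4(used, 60, &it4), it4);
    return 0;
}

With this input the byte-wise scan takes 120 iterations and the stride-4 scan 31, both returning offset 124, roughly the quarter claimed above.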


Signed-off-by: Cao jin <address@hidden>
---
  hw/pci/pci.c | 12 ++++++++----
  1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index 168b9cc..1e99603 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -1924,13 +1924,15 @@ PCIDevice *pci_create_simple(PCIBus *bus, int devfn, const char *name)
  static uint8_t pci_find_space(PCIDevice *pdev, uint8_t size)
  {
      int offset = PCI_CONFIG_HEADER_SIZE;
-    int i;
-    for (i = PCI_CONFIG_HEADER_SIZE; i < PCI_CONFIG_SPACE_SIZE; ++i) {
+    int i = PCI_CONFIG_HEADER_SIZE;
+
+    for (; i < PCI_CONFIG_SPACE_SIZE; i += 4) {
          if (pdev->used[i])
-            offset = i + 1;
-        else if (i - offset + 1 == size)
+            offset = i + 4;
+        else if (i - offset >= size)
              return offset;
      }
+
      return 0;
  }

@@ -2144,6 +2146,8 @@ int pci_add_capability2(PCIDevice *pdev, uint8_t cap_id,
      uint8_t *config;
      int i, overlapping_cap;

+    assert(size > 0);
+
      if (!offset) {
          offset = pci_find_space(pdev, size);
          if (!offset) {
--
2.1.0


--
Yours Sincerely,

Cao Jin


