Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset


From: Alex Williamson
Subject: Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
Date: Fri, 29 Apr 2011 10:52:06 -0600

On Fri, 2011-04-29 at 09:38 -0600, Alex Williamson wrote:
> On Fri, 2011-04-29 at 17:29 +0200, Jan Kiszka wrote:
> > On 2011-04-29 17:06, Michael S. Tsirkin wrote:
> > > On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
> > >> When we're trying to get a newly registered phys memory client updated
> > >> with the current page mappings, we end up passing the region offset
> > >> (a ram_addr_t) as the start address rather than the actual guest
> > >> physical memory address (target_phys_addr_t).  If your guest has less
> > >> than 3.5G of memory, these are coincidentally the same thing.  If
> > 
> > I think this broke even with < 3.5G as phys_offset also encodes the
> > memory type while region_offset does not. So everything became RAM this
> > way, no MMIO was announced.
> > 
> > >> there's more, the region offset for the memory above 4G starts over
> > >> at 0, so the set_memory client will overwrite its lower memory entries.
> > >>
> > >> Instead, keep track of the guest physical address as we're walking the
> > >> tables and pass that to the set_memory client.
> > >>
> > >> Signed-off-by: Alex Williamson <address@hidden>
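
For illustration only, a toy standalone sketch (made-up sizes and a hypothetical
helper, not QEMU code) of the overwrite described above: a client that records
ranges keyed on the start address it is handed ends up with a single range when
fed region offsets, because the region offset of the above-4G chunk starts over
at 0 and collides with the below-4G chunk, but with two distinct ranges when fed
guest physical addresses.

#include <stdint.h>
#include <stdio.h>

#define MAX_RANGES 16

struct range { uint64_t start, size; };

static struct range ranges[MAX_RANGES];
static int nb_ranges;

/* Toy stand-in for a CPUPhysMemoryClient's set_memory callback. */
static void set_memory(uint64_t start, uint64_t size)
{
    int i;
    for (i = 0; i < nb_ranges; i++) {
        if (ranges[i].start == start) {   /* same key: overwrite the entry */
            ranges[i].size = size;
            return;
        }
    }
    ranges[nb_ranges].start = start;
    ranges[nb_ranges].size = size;
    nb_ranges++;
}

int main(void)
{
    /* Made-up layout: 3.5G of RAM below 4G, the rest above 4G. */
    uint64_t guest_addr[2]    = { 0x000000000ULL, 0x100000000ULL };
    uint64_t region_offset[2] = { 0x000000000ULL, 0x000000000ULL };
    uint64_t size[2]          = { 0xe0000000ULL,  0x20000000ULL };
    int i;

    /* Buggy: key on region_offset, so both chunks collapse onto key 0. */
    for (i = 0; i < 2; i++) {
        set_memory(region_offset[i], size[i]);
    }
    printf("keyed on region_offset: %d range(s)\n", nb_ranges);  /* prints 1 */

    nb_ranges = 0;
    /* Fixed: key on the guest physical address, giving two distinct ranges. */
    for (i = 0; i < 2; i++) {
        set_memory(guest_addr[i], size[i]);
    }
    printf("keyed on guest_addr:    %d range(s)\n", nb_ranges);  /* prints 2 */
    return 0;
}
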
> > > 
> > > Acked-by: Michael S. Tsirkin <address@hidden>
> > > 
> > > Given all this, can you tell how much time it takes
> > > to hotplug a device with, say, a 40G RAM guest?
> > 
> > Why not collect pages of identical types and report them as one chunk
> > once the type changes?
> 
> Good idea, I'll see if I can code that up.  I don't have a terribly
> large system to test with, but with an 8G guest, it's surprisingly not
> very noticeable.  For vfio, I intend to only have one memory client, so
> adding additional devices won't have to rescan everything.  The memory
> overhead of keeping the list that the memory client creates is probably
> also low enough that it isn't worthwhile to tear it all down if all the
> devices are removed.  Thanks,

Here's a first pass at a patch to do this.  For a 4G guest, it reduces
the number of registration-induced set_memory callbacks from 1048866 to
296.
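
(Roughly what the page counts suggest: a 4G guest has 4G / 4K = 1,048,576
RAM pages, so nearly every page previously triggered its own set_memory
callback; the remaining few hundred presumably come from ROM and device
memory regions.)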

Signed-off-by: Alex Williamson <address@hidden>
---
diff --git a/exec.c b/exec.c
index e670929..5510b0b 100644
--- a/exec.c
+++ b/exec.c
@@ -1741,8 +1741,15 @@ static int cpu_notify_migration_log(int enable)
     return 0;
 }
 
+struct last_map {
+    target_phys_addr_t start_addr;
+    ram_addr_t size;
+    ram_addr_t phys_offset;
+};
+
 static void phys_page_for_each_1(CPUPhysMemoryClient *client,
-                                 int level, void **lp, target_phys_addr_t addr)
+                                 int level, void **lp,
+                                 target_phys_addr_t addr, struct last_map *map)
 {
     int i;
 
@@ -1754,15 +1761,28 @@ static void phys_page_for_each_1(CPUPhysMemoryClient *client,
         addr <<= L2_BITS + TARGET_PAGE_BITS;
         for (i = 0; i < L2_SIZE; ++i) {
             if (pd[i].phys_offset != IO_MEM_UNASSIGNED) {
-                client->set_memory(client, addr | i << TARGET_PAGE_BITS,
-                                   TARGET_PAGE_SIZE, pd[i].phys_offset);
+                target_phys_addr_t cur = addr | i << TARGET_PAGE_BITS;
+                if (map->size &&
+                    cur == map->start_addr + map->size &&
+                    pd[i].phys_offset == map->phys_offset + map->size) {
+
+                    map->size += TARGET_PAGE_SIZE;
+                    continue;
+                } else if (map->size) {
+                    client->set_memory(client, map->start_addr,
+                                       map->size, map->phys_offset);
+                }
+
+                map->start_addr = addr | i << TARGET_PAGE_BITS;
+                map->size = TARGET_PAGE_SIZE;
+                map->phys_offset = pd[i].phys_offset;
             }
         }
     } else {
         void **pp = *lp;
         for (i = 0; i < L2_SIZE; ++i) {
             phys_page_for_each_1(client, level - 1, pp + i,
-                                 (addr << L2_BITS) | i);
+                                 (addr << L2_BITS) | i, map);
         }
     }
 }
@@ -1770,9 +1790,15 @@ static void phys_page_for_each_1(CPUPhysMemoryClient *client,
 static void phys_page_for_each(CPUPhysMemoryClient *client)
 {
     int i;
+    struct last_map map = { 0 };
+
     for (i = 0; i < P_L1_SIZE; ++i) {
         phys_page_for_each_1(client, P_L1_SHIFT / L2_BITS - 1,
-                             l1_phys_map + i, i);
+                             l1_phys_map + i, i, &map);
+    }
+    if (map.size) {
+        client->set_memory(client, map.start_addr,
+                           map.size, map.phys_offset);
     }
 }
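
For reference, a rough standalone sketch of the coalescing idea, using assumed
names and a toy page walk rather than the QEMU structures above: contiguous
pages whose phys_offset advances in step with the guest address are merged into
a single set_memory report, and the last accumulated range is flushed once the
walk ends, mirroring the map.size check added to phys_page_for_each() in the
patch.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL

struct last_map {
    uint64_t start_addr;
    uint64_t size;
    uint64_t phys_offset;
};

/* Toy stand-in for client->set_memory(). */
static void set_memory(uint64_t start, uint64_t size, uint64_t phys_offset)
{
    printf("set_memory: start=0x%" PRIx64 " size=0x%" PRIx64
           " phys_offset=0x%" PRIx64 "\n", start, size, phys_offset);
}

/* Called once per assigned page, in guest-address order. */
static void report_page(struct last_map *map, uint64_t addr, uint64_t phys_offset)
{
    if (map->size &&
        addr == map->start_addr + map->size &&
        phys_offset == map->phys_offset + map->size) {
        map->size += PAGE_SIZE;             /* page extends the current run */
        return;
    }
    if (map->size) {                        /* flush the previous run */
        set_memory(map->start_addr, map->size, map->phys_offset);
    }
    map->start_addr = addr;                 /* start a new run */
    map->size = PAGE_SIZE;
    map->phys_offset = phys_offset;
}

int main(void)
{
    struct last_map map = { 0 };
    uint64_t i;

    /* 16 contiguous identity-mapped pages collapse into one callback. */
    for (i = 0; i < 16; i++) {
        report_page(&map, i * PAGE_SIZE, i * PAGE_SIZE);
    }
    /* A page whose phys_offset is not contiguous starts a second run. */
    report_page(&map, 16 * PAGE_SIZE, 0x100000ULL);

    if (map.size) {                         /* final flush, as in the patch */
        set_memory(map.start_addr, map.size, map.phys_offset);
    }
    return 0;
}
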
 





