From: Paolo Bonzini
Subject: [Qemu-devel] [PULL 08/15] exec: Factor out section_covers_addr
Date: Mon, 7 Mar 2016 18:36:54 +0100

From: Fam Zheng <address@hidden>

This will be shared by the next patch.

Also add a comment explaining the non-obvious condition on "size.hi".

Signed-off-by: Fam Zheng <address@hidden>
Message-Id: <address@hidden>
[Small change to the comment. - Paolo]
Signed-off-by: Paolo Bonzini <address@hidden>
---
 exec.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/exec.c b/exec.c
index ad8b826..9279af5 100644
--- a/exec.c
+++ b/exec.c
@@ -307,6 +307,17 @@ static void phys_page_compact_all(AddressSpaceDispatch *d, int nodes_nb)
     }
 }
 
+static inline bool section_covers_addr(const MemoryRegionSection *section,
+                                       hwaddr addr)
+{
+    /* Memory topology clips a memory region to [0, 2^64); size.hi > 0 means
+     * the section must cover the entire address space.
+     */
+    return section->size.hi ||
+           range_covers_byte(section->offset_within_address_space,
+                             section->size.lo, addr);
+}
+
 static MemoryRegionSection *phys_page_find(PhysPageEntry lp, hwaddr addr,
                                            Node *nodes, MemoryRegionSection *sections)
 {
@@ -322,9 +333,7 @@ static MemoryRegionSection *phys_page_find(PhysPageEntry lp, hwaddr addr,
         lp = p[(index >> (i * P_L2_BITS)) & (P_L2_SIZE - 1)];
     }
 
-    if (sections[lp.ptr].size.hi ||
-        range_covers_byte(sections[lp.ptr].offset_within_address_space,
-                          sections[lp.ptr].size.lo, addr)) {
+    if (section_covers_addr(&sections[lp.ptr], addr)) {
         return &sections[lp.ptr];
     } else {
         return &sections[PHYS_SECTION_UNASSIGNED];
-- 
2.5.0
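
A note for readers outside the patch itself: a MemoryRegionSection's size is an Int128 stored as lo/hi 64-bit halves, while guest physical addresses are 64-bit, so a non-zero size.hi means the section is at least 2^64 bytes long and therefore covers every possible address. The standalone sketch below restates the check with plain 64-bit fields; the toy_* names and the simplified range test are illustrative assumptions, not QEMU's actual helpers.

/*
 * Standalone illustration (not part of the patch).  If size_hi is
 * non-zero the section spans at least 2^64 bytes and so covers every
 * 64-bit address; otherwise only the low half needs a range check.
 * The toy_* names are made up for this example.
 */
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t hwaddr;

struct toy_section {
    hwaddr   offset_within_address_space;
    uint64_t size_lo;        /* low 64 bits of the Int128 size */
    uint64_t size_hi;        /* high 64 bits; non-zero => size >= 2^64 */
};

/* Answers the same question as QEMU's range_covers_byte():
 * offset <= byte < offset + len, written so that offset + len wrapping
 * around 64 bits does not give a wrong answer.
 */
static bool toy_range_covers_byte(uint64_t offset, uint64_t len, uint64_t byte)
{
    return byte >= offset && byte - offset < len;
}

static bool toy_section_covers_addr(const struct toy_section *section,
                                    hwaddr addr)
{
    return section->size_hi ||
           toy_range_covers_byte(section->offset_within_address_space,
                                 section->size_lo, addr);
}

int main(void)
{
    /* A 4 KiB section at 0x1000 covers 0x1fff but not 0x2000. */
    struct toy_section s = { .offset_within_address_space = 0x1000,
                             .size_lo = 0x1000, .size_hi = 0 };
    return toy_section_covers_addr(&s, 0x1fff) ? 0 : 1;
}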




