Re: [Qemu-devel] [PATCH 08/10] Rework ram block hash


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH 08/10] Rework ram block hash
Date: Tue, 19 May 2015 19:55:04 +0100
User-agent: Mutt/1.5.23 (2014-03-12)

* Michael R. Hines (address@hidden) wrote:
> On 04/20/2015 10:57 AM, Dr. David Alan Gilbert (git) wrote:
> >From: "Dr. David Alan Gilbert" <address@hidden>
> >
> >RDMA uses a hash from block offset->RAM Block; this isn't needed
> >on the destination, and now that the destination sorts the ramblock
> >list, is harder to maintain.
> 
> Destination sorts the ramblock list? Is the patchset out of order?
> I didn't see that yet... Why is it sorted?

It's the next patch in the series - please see that one.
Since I use an index into the list, a sorted list was the easiest
thing to index on.
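
For illustration, the index-based lookup described above (a sorted block
list searched by offset, rather than a hash table) might look like the
sketch below. The types and names here are hypothetical stand-ins, not
QEMU's actual RDMALocalBlock code:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for RDMALocalBlock: only the fields the
 * lookup needs.  The array is assumed sorted by offset, so a block
 * can be found by binary search instead of a hash lookup. */
typedef struct {
    uintptr_t offset;   /* start of the block in ram_addr_t space */
    uint64_t  length;   /* size of the block in bytes */
} Block;

/* Return the index of the block containing 'addr', or -1 if no
 * block covers that address. */
static int find_block(const Block *blocks, int nb_blocks, uintptr_t addr)
{
    int lo = 0, hi = nb_blocks - 1;

    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (addr < blocks[mid].offset) {
            hi = mid - 1;                       /* target is to the left */
        } else if (addr >= blocks[mid].offset + blocks[mid].length) {
            lo = mid + 1;                       /* target is to the right */
        } else {
            return mid;                         /* addr falls inside this block */
        }
    }
    return -1;
}
```

Since the destination already keeps the list sorted, this gives O(log n)
lookup with no extra structure to keep in sync when the list changes.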

> I would like to keep the ramblock list directly addressable by hash
> on both sides, because, as I mentioned earlier, we want as much
> flexibility in registering RAMBlock memory as possible by being
> able to add or delete arbitrary blocks in the list at any time during
> a migration.
> 
> I will try to get the patchset that allows anyone to register memory
> for transfer out as soon as I can.

Hmm OK, I think I can rework that to regenerate the hash; it's a little
difficult without knowing how you're intending to use it.
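
Regenerating the hash amounts to one pass over the block list, as the
patch does in qemu_rdma_source_init() below. A minimal sketch of the
idea, with a fixed-size linear-probing table standing in for GLib's
g_hash_table (all names here are hypothetical, and offsets are assumed
non-zero since an empty key marks a free slot):

```c
#include <stdint.h>
#include <string.h>

#define MAP_SLOTS 64            /* power of two, larger than the block count */

typedef struct {
    uintptr_t keys[MAP_SLOTS];  /* block offsets; 0 marks an empty slot */
    int       vals[MAP_SLOTS];  /* index into the block list */
} OffsetMap;

/* Rebuild the offset->index map from scratch after the block list has
 * changed - e.g. when blocks are added or deleted mid-migration, as
 * discussed above. */
static void map_rebuild(OffsetMap *m, const uintptr_t *offsets, int n)
{
    memset(m, 0, sizeof(*m));
    for (int i = 0; i < n; i++) {
        uintptr_t slot = (offsets[i] >> 12) & (MAP_SLOTS - 1);
        while (m->keys[slot]) {          /* linear probing on collision */
            slot = (slot + 1) & (MAP_SLOTS - 1);
        }
        m->keys[slot] = offsets[i];
        m->vals[slot] = i;
    }
}

/* Return the block index for 'offset', or -1 if it is not mapped. */
static int map_lookup(const OffsetMap *m, uintptr_t offset)
{
    uintptr_t slot = (offset >> 12) & (MAP_SLOTS - 1);
    while (m->keys[slot]) {
        if (m->keys[slot] == offset) {
            return m->vals[slot];
        }
        slot = (slot + 1) & (MAP_SLOTS - 1);
    }
    return -1;
}
```

Because the rebuild is cheap and self-contained, the map only needs to
exist on the side that actually performs offset lookups - which is the
point of building it in qemu_rdma_source_init() only.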

Dave

> 
> >
> >Split the hash so that it's only generated on the source.
> >
> >Signed-off-by: Dr. David Alan Gilbert <address@hidden>
> >---
> >  migration/rdma.c | 32 ++++++++++++++++++++------------
> >  1 file changed, 20 insertions(+), 12 deletions(-)
> >
> >diff --git a/migration/rdma.c b/migration/rdma.c
> >index fe3b76e..185817e 100644
> >--- a/migration/rdma.c
> >+++ b/migration/rdma.c
> >@@ -533,23 +533,22 @@ static int rdma_add_block(RDMAContext *rdma, const char *block_name,
> >                           ram_addr_t block_offset, uint64_t length)
> >  {
> >      RDMALocalBlocks *local = &rdma->local_ram_blocks;
> >-    RDMALocalBlock *block = g_hash_table_lookup(rdma->blockmap,
> >-        (void *)(uintptr_t)block_offset);
> >+    RDMALocalBlock *block;
> >      RDMALocalBlock *old = local->block;
> >
> >-    assert(block == NULL);
> >-
> >      local->block = g_malloc0(sizeof(RDMALocalBlock) * (local->nb_blocks + 1));
> >
> >      if (local->nb_blocks) {
> >          int x;
> >
> >-        for (x = 0; x < local->nb_blocks; x++) {
> >-            g_hash_table_remove(rdma->blockmap,
> >-                                (void *)(uintptr_t)old[x].offset);
> >-            g_hash_table_insert(rdma->blockmap,
> >-                                (void *)(uintptr_t)old[x].offset,
> >-                                &local->block[x]);
> >+        if (rdma->blockmap) {
> >+            for (x = 0; x < local->nb_blocks; x++) {
> >+                g_hash_table_remove(rdma->blockmap,
> >+                                    (void *)(uintptr_t)old[x].offset);
> >+                g_hash_table_insert(rdma->blockmap,
> >+                                    (void *)(uintptr_t)old[x].offset,
> >+                                    &local->block[x]);
> >+            }
> >          }
> >          memcpy(local->block, old, sizeof(RDMALocalBlock) * local->nb_blocks);
> >          g_free(old);
> >@@ -571,7 +570,9 @@ static int rdma_add_block(RDMAContext *rdma, const char *block_name,
> >
> >      block->is_ram_block = local->init ? false : true;
> >
> >-    g_hash_table_insert(rdma->blockmap, (void *) block_offset, block);
> >+    if (rdma->blockmap) {
> >+        g_hash_table_insert(rdma->blockmap, (void *) block_offset, block);
> >+    }
> >
> >      trace_rdma_add_block(block_name, local->nb_blocks,
> >                           (uintptr_t) block->local_host_addr,
> >@@ -607,7 +608,6 @@ static int qemu_rdma_init_ram_blocks(RDMAContext *rdma)
> >      RDMALocalBlocks *local = &rdma->local_ram_blocks;
> >
> >      assert(rdma->blockmap == NULL);
> >-    rdma->blockmap = g_hash_table_new(g_direct_hash, g_direct_equal);
> >      memset(local, 0, sizeof *local);
> >      qemu_ram_foreach_block(qemu_rdma_init_one_block, rdma);
> >      trace_qemu_rdma_init_ram_blocks(local->nb_blocks);
> >@@ -2248,6 +2248,14 @@ static int qemu_rdma_source_init(RDMAContext *rdma, Error **errp, bool pin_all)
> >          goto err_rdma_source_init;
> >      }
> >
> >+    /* Build the hash that maps from offset to RAMBlock */
> >+    rdma->blockmap = g_hash_table_new(g_direct_hash, g_direct_equal);
> >+    for (idx = 0; idx < rdma->local_ram_blocks.nb_blocks; idx++) {
> >+        g_hash_table_insert(rdma->blockmap,
> >+                (void *)(uintptr_t)rdma->local_ram_blocks.block[idx].offset,
> >+                &rdma->local_ram_blocks.block[idx]);
> >+    }
> >+
> >      for (idx = 0; idx < RDMA_WRID_MAX; idx++) {
> >          ret = qemu_rdma_reg_control(rdma, idx);
> >          if (ret) {
> 
> 
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


