Re: [Qemu-devel] [PATCH v5 11/12] rdma: core logic


From: Michael R. Hines
Subject: Re: [Qemu-devel] [PATCH v5 11/12] rdma: core logic
Date: Tue, 23 Apr 2013 19:53:31 -0400
User-agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130106 Thunderbird/17.0.2

On 04/23/2013 04:59 PM, Paolo Bonzini wrote:
> On 23/04/2013 03:55, address@hidden wrote:
>> +static size_t qemu_rdma_get_max_size(QEMUFile *f, void *opaque,
>> +                                     uint64_t transferred_bytes,
>> +                                     uint64_t time_spent,
>> +                                     uint64_t max_downtime)
>> +{
>> +    static uint64_t largest = 1;
>> +    uint64_t max_size = ((double) (transferred_bytes / time_spent))
>> +                            * max_downtime / 1000000;
>> +
>> +    if (max_size > largest) {
>> +        largest = max_size;
>> +    }
>> +
>> +    DPRINTF("MBPS: %f, max_size: %" PRIu64 " largest: %" PRIu64 "\n",
>> +                qemu_get_mbps(), max_size, largest);
>> +
>> +    return largest;
>> +}
> Can you point me to the discussion of this algorithmic change and
> qemu_get_max_size?  It seems to me that it assumes that the IB link is
> basically dedicated to migration.
>
> I think it is a big assumption and it may be hiding a bug elsewhere.  At
> the very least, it should be moved to a separate commit and described in
> the commit message, but actually I'd prefer to not include it in the
> first submission.
>
> Paolo


Until now, I had stopped using our 40G hardware and was testing
only on our 10G hardware.

But when I switched back to our 40G hardware, the throughput
was artificially limited to less than 10 Gbps.

So, I started investigating the problem, and I noticed that whenever
I disabled the max_size limit, the throughput went back to normal
(a peak of 26 Gbps).
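To make the effect of the quoted max_size hook concrete, here is a small
self-contained demo of the same arithmetic outside of QEMU (the numbers
and units below are illustrative assumptions only: time_spent in
milliseconds, max_downtime in nanoseconds). It shows the high-water-mark
behaviour: once a high bandwidth has been observed, the returned limit
never drops back down, which is why it behaves as if the link were
dedicated to migration.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Same arithmetic as qemu_rdma_get_max_size() above, minus the
   QEMUFile plumbing, to show the "largest" (high-water-mark) logic. */
static uint64_t demo_get_max_size(uint64_t transferred_bytes,
                                  uint64_t time_spent,
                                  uint64_t max_downtime)
{
    static uint64_t largest = 1;
    uint64_t max_size = ((double) (transferred_bytes / time_spent))
                            * max_downtime / 1000000;

    if (max_size > largest) {
        largest = max_size;
    }

    return largest;
}

int main(void)
{
    uint64_t downtime_ns = 30000000;    /* 30 ms allowed downtime */

    /* ~26 Gbps sample: 325 MB moved in 100 ms -> limit of ~97.5 MB */
    printf("fast: %" PRIu64 " bytes\n",
           demo_get_max_size(325000000, 100, downtime_ns));

    /* ~5 Gbps sample: the limit stays at the earlier high-water mark */
    printf("slow: %" PRIu64 " bytes\n",
           demo_get_max_size(62500000, 100, downtime_ns));

    return 0;
}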

So, rather than change the default max_size calculation for TCP,
which would adversely affect existing users of TCP migration,
I introduced a new QEMUFileOps hook to solve the problem.
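As a rough sketch of the shape of that change (the struct and function
names below are illustrative assumptions, not necessarily the ones used
in the series): a per-transport callback in QEMUFileOps that, when set,
overrides the default bandwidth * downtime calculation, so TCP keeps the
existing behaviour while RDMA can supply qemu_rdma_get_max_size().

#include <stddef.h>
#include <stdint.h>

typedef struct QEMUFile QEMUFile;   /* opaque for this sketch */

/* Per-transport override for the max_size calculation.  The RDMA
   transport would install qemu_rdma_get_max_size() (quoted above);
   TCP leaves it NULL and keeps the existing behaviour. */
typedef size_t (QEMURamGetMaxSizeFunc)(QEMUFile *f, void *opaque,
                                       uint64_t transferred_bytes,
                                       uint64_t time_spent,
                                       uint64_t max_downtime);

typedef struct QEMUFileOps {
    /* ... existing get_buffer/put_buffer/close callbacks ... */
    QEMURamGetMaxSizeFunc *get_max_size;
} QEMUFileOps;

static size_t migration_max_size(const QEMUFileOps *ops,
                                 QEMUFile *f, void *opaque,
                                 uint64_t transferred_bytes,
                                 uint64_t time_spent,
                                 uint64_t max_downtime)
{
    if (ops->get_max_size) {
        /* Transport-specific calculation (e.g. RDMA). */
        return ops->get_max_size(f, opaque, transferred_bytes,
                                 time_spent, max_downtime);
    }

    /* Default path, unchanged: bandwidth (bytes/ms) times the
       allowed downtime. */
    return ((double) transferred_bytes / time_spent)
               * max_downtime / 1000000;
}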

What do you think?

- Michael
