On Thu, Apr 11, 2013 at 01:49:34PM -0400, Michael R. Hines wrote:
On 04/11/2013 10:56 AM, Michael S. Tsirkin wrote:
On Thu, Apr 11, 2013 at 04:50:21PM +0200, Paolo Bonzini wrote:
Il 11/04/2013 16:37, Michael S. Tsirkin ha scritto:
pg1 -> pin -> req -> res -> rdma -> done
pg2 -> pin -> req -> res -> rdma -> done
pg3 -> pin -> req -> res -> rdma -> done
pg4 -> pin -> req -> res -> rdma -> done
pg5 -> pin -> req -> res -> rdma -> done
It's like an assembly line, see? So while software does the registration
round-trip dance, the hardware is processing RDMA requests for previous
chunks.
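The assembly-line argument above can be sketched numerically. This is a
minimal back-of-the-envelope model, not QEMU code: the latency numbers
(PIN, REQ_RES, RDMA) and the page count are made-up illustrative values,
and the model assumes one outstanding registration can fully overlap one
in-flight RDMA write.

```python
# Hypothetical per-page latencies in microseconds -- illustrative only.
PIN = 5        # cost to pin the page
REQ_RES = 50   # registration request/response round trip
RDMA = 40      # RDMA write of the page data
PAGES = 1000

# Serialized: each page waits for its own pin + registration round trip
# before its RDMA write can start.
serial = PAGES * (PIN + REQ_RES + RDMA)

# Pipelined: while the hardware performs the RDMA write for page N,
# software pins and registers page N+1. Steady-state per-page cost is
# the larger of the control path and the data path, plus one initial
# control round trip to fill the pipe.
pipelined = (PIN + REQ_RES) + PAGES * max(PIN + REQ_RES, RDMA)

print(serial, pipelined)
```

With these toy numbers the serialized flow costs 95,000 us and the
pipelined flow about 55,055 us: the control round trip is hidden behind
the data transfer instead of adding to it, which is the whole point of
the pipeline.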
Does this only affect the implementation, or also the wire protocol?
It affects the wire protocol.
I *do* believe chunked registration was a *very* useful request by
the community, and I want to thank you for convincing me to implement it.
But, with all due respect, pipelining is a "solution looking for a problem".
The problem is bad performance, isn't it?
If it wasn't we'd use chunk based all the time.
Improving the protocol does not help the behavior of any well-known
workloads, because it is based on the idea that the memory footprint of
a VM would *rapidly* grow and shrink during the steady-state iteration
rounds while the migration is taking place.
What gave you that idea? Not at all. It is based on the idea
of doing control actions in parallel with data transfers,
so that control latency does not degrade performance.
This simply does not happen; workloads don't behave that way. They
either grow really big or shrink really small, and they settle that way
for a reasonable amount of time before the load on the application
changes again at some future point.
- Michael
What is the bottleneck for chunk-based? Can you tell me that? Find out,
and maybe you will see that pipelining helps.
Basically to me, when you describe the protocol in detail the problems
become apparent.
I think you worry too much about what the guest does, what APIs are
exposed from the migration core and the specifics of the workload. Build
a sane protocol for data transfers and layer the workload on top.