The migration speed was set to 40G and the max downtime to 2 sec for all the experiments below. (A rough sketch of the monitor commands used is included after the first set of results.)

Note: Idle guests are not that interesting (tons of zero pages etc.), but they are included here to highlight the overhead of pinning.

1) 20vcpu/64GB guest (kind of a larger-sized Cloud-type guest):

a) Idle guest with no pinning (default):

capabilities: xbzrle: off x-rdma-pin-all: off
Migration status: completed
total time: 51062 milliseconds
downtime: 1948 milliseconds
pin-all: 0 milliseconds
transferred ram: 1816547 kbytes
throughput: 6872.23 mbps
remaining ram: 0 kbytes
total ram: 67117632 kbytes
duplicate: 16331552 pages
skipped: 0 pages
normal: 450038 pages
normal bytes: 1800152 kbytes

b) Idle guest with pinning:

capabilities: xbzrle: off x-rdma-pin-all: on
Migration status: completed
total time: 47451 milliseconds
downtime: 2639 milliseconds
pin-all: 22780 milliseconds
transferred ram: 67136643 kbytes
throughput: 25222.91 mbps
remaining ram: 0 kbytes
total ram: 67117632 kbytes
duplicate: 0 pages
skipped: 0 pages
normal: 16780064 pages
normal bytes: 67120256 kbytes

There were no freezes observed in the guest at the start of the migration, but the qemu monitor prompt was not responsive for the duration of the memory pinning. Total migration time was affected by the cost of pinning at the start of the migration, as shown above. (This issue can be pursued and optimized later.)

c) Pinning + guest running a Java warehouse workload (I cranked the workload up to keep the guest 95+% busy):

capabilities: xbzrle: off x-rdma-pin-all: on
Migration status: active
total time: 412706 milliseconds
expected downtime: 499 milliseconds
pin-all: 22758 milliseconds
transferred ram: 657243669 kbytes
throughput: 25241.89 mbps
remaining ram: 7281848 kbytes
total ram: 67117632 kbytes
duplicate: 0 pages
skipped: 0 pages
normal: 164270810 pages
normal bytes: 657083240 kbytes
dirty pages rate: 369925 pages

No convergence! (For workloads where the memory dirty rate is very high, there are other alternatives that have been discussed in the past...)
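For reference, all of the runs here were driven from the qemu (HMP) monitor. A rough sketch of the commands follows; the x-rdma: URI spelling, the "40g" form of the speed value, and the <dest-ip>:<port> placeholder are illustrative rather than copied from the actual test logs:

(qemu) migrate_set_speed 40g
(qemu) migrate_set_downtime 2
(qemu) migrate_set_capability x-rdma-pin-all on
(qemu) migrate -d x-rdma:<dest-ip>:<port>
(qemu) info migrate

For the default (non-pinned) runs the x-rdma-pin-all capability was simply left off.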
--- Enterprise-type guests tend to get fatter (more memory per cpu) than the larger Cloud guests...so here are a couple of them.

a) 20VCPU/256G Idle guest:

Default:
capabilities: xbzrle: off x-rdma-pin-all: off
Migration status: completed
total time: 259259 milliseconds
downtime: 3924 milliseconds
pin-all: 0 milliseconds
transferred ram: 5522078 kbytes
throughput: 6586.06 mbps
remaining ram: 0 kbytes
total ram: 268444224 kbytes
duplicate: 65755168 pages
skipped: 0 pages
normal: 1364124 pages
normal bytes: 5456496 kbytes

Pinned:
capabilities: xbzrle: off x-rdma-pin-all: on
Migration status: completed
total time: 219053 milliseconds
downtime: 4277 milliseconds
pin-all: 118153 milliseconds
transferred ram: 268512809 kbytes
throughput: 22209.32 mbps
remaining ram: 0 kbytes
total ram: 268444224 kbytes
duplicate: 0 pages
skipped: 0 pages
normal: 67111817 pages
normal bytes: 268447268 kbytes

b) 40VCPU/512GB Idle guest:

Default:
capabilities: xbzrle: off x-rdma-pin-all: off
Migration status: completed
total time: 670577 milliseconds
downtime: 6139 milliseconds
pin-all: 0 milliseconds
transferred ram: 10279256 kbytes
throughput: 6150.93 mbps
remaining ram: 0 kbytes
total ram: 536879680 kbytes
duplicate: 131704099 pages
skipped: 0 pages
normal: 2537017 pages
normal bytes: 10148068 kbytes

Pinned:
capabilities: xbzrle: off x-rdma-pin-all: on
Migration status: completed
total time: 527576 milliseconds
downtime: 6314 milliseconds
pin-all: 312984 milliseconds
transferred ram: 537129685 kbytes
throughput: 20177.27 mbps
remaining ram: 0 kbytes
total ram: 536879680 kbytes
duplicate: 0 pages
skipped: 0 pages
normal: 134249644 pages
normal bytes: 536998576 kbytes

No freezes in the guest due to memory pinning. (Freezes were only due to the dirty bitmap sync-up, which is done while the BQL is held; Juan is already working on addressing this for qemu 1.6.)
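Dividing the pin-all time by the total time for the pinned idle runs above gives a rough feel for how much of each migration is spent just registering memory:

64GB guest:  22780 / 47451 milliseconds, roughly 48% of total migration time
256GB guest: 118153 / 219053 milliseconds, roughly 54%
512GB guest: 312984 / 527576 milliseconds, roughly 59%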