This should be run under the iothread lock. Pay attention to avoiding
lock inversion: the I/O thread always takes the iothread lock outside
and the ramlist lock within, so the migration thread must do the same.
BTW, I think this code in the migration thread patch also needs the
iothread lock:
    if (stage < 0) {
        cpu_physical_memory_set_dirty_tracking(0);
        return 0;
    }

    if (cpu_physical_sync_dirty_bitmap(0, TARGET_PHYS_ADDR_MAX) != 0) {
        qemu_file_set_error(f);
        return 0;
    }
Callers of the code snippets above (sync_migration_bitmap etc.) hold the
iothread mutex, and it has been ensured that the original qemu dirty
bitmap is only accessed while that mutex is held.
+ if (stage != 3) {
+ qemu_mutex_lock_ramlist();
+ qemu_mutex_unlock_iothread();
+ }
+
while (!qemu_file_rate_limit(f)) {
int bytes_sent;
@@ -308,6 +315,11 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
}
}
+ if (stage != 3) {
+ qemu_mutex_lock_iothread();