

From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH RFC v3 00/27] COarse-grain LOck-stepping(COLO) Virtual Machines for Non-stop Service
Date: Tue, 24 Feb 2015 11:08:45 +0000
User-agent: Mutt/1.5.23 (2014-03-12)

* zhanghailiang (address@hidden) wrote:
> 3. Prepare host kernel
> The colo-proxy kernel module needs to cooperate with the Linux kernel.
> You should apply the kernel patch 'colo-patch-for-kernel.patch'
> (based on Linux kernel 3.19), which you can get from
> https://github.com/gao-feng/colo-proxy.git,
> and then compile and install the new kernel.
> 
> 4. Proxy module
> The proxy module is used for network packet comparison; you can also get the
> latest version from: https://github.com/gao-feng/colo-proxy.git.
> You can compile and install it with 'make' && 'make install'.
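
For reference, a rough sketch of that whole build sequence (the directory
names here are illustrative, not from the instructions above):

# git clone https://github.com/gao-feng/colo-proxy.git
# cd linux-3.19
# patch -p1 < ../colo-proxy/colo-patch-for-kernel.patch
# make && make modules_install && make install    (then reboot into the new kernel)
# cd ../colo-proxy
# make && make install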

I'm seeing an RCU hang when a COLO-enabled qemu quits:

Feb 24 05:29:14 virtlab413 kernel: INFO: task qemu-system-x86:13033 blocked for more than 120 seconds.
Feb 24 05:29:14 virtlab413 kernel:      Tainted: G           OE  3.18.0uf-colo-00028-g75d30f0-dirty #18
Feb 24 05:29:14 virtlab413 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 24 05:29:14 virtlab413 kernel: qemu-system-x86 D 0000000000000000     0 13033  13004 0x00000080
Feb 24 05:29:14 virtlab413 kernel: ffff880ff5837b38 0000000000000096 ffff880ff2f35b00 0000000000012d40
Feb 24 05:29:14 virtlab413 kernel: ffff880ff5837fd8 0000000000012d40 ffff88100bdc16c0 ffff880ff2f35b00
Feb 24 05:29:14 virtlab413 kernel: 0000000000000000 7fffffffffffffff ffff880ff5837ca0 ffff880ff5837c98
Feb 24 05:29:14 virtlab413 kernel: Call Trace:
Feb 24 05:29:14 virtlab413 kernel: [<ffffffff81669d29>] schedule+0x29/0x70
Feb 24 05:29:14 virtlab413 kernel: [<ffffffff8166f16c>] schedule_timeout+0x1ec/0x350
Feb 24 05:29:14 virtlab413 kernel: [<ffffffff8166b4f2>] ? wait_for_completion+0x32/0x120
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff8166b5a4>] wait_for_completion+0xe4/0x120
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff8108e110>] ? wake_up_state+0x20/0x20
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff810c2460>] ? rcu_barrier+0x20/0x20
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff810beb3c>] wait_rcu_gp+0x5c/0x80
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff810beac0>] ? ftrace_raw_output_rcu_utilization+0x50/0x50
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff810c29bf>] synchronize_rcu.part.54+0x1f/0x40
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff810c29f8>] synchronize_rcu+0x18/0x20
Feb 24 05:29:15 virtlab413 kernel: [<ffffffffa078d175>] colo_node_unregister+0x45/0x70 [nf_conntrack_colo]
Feb 24 05:29:15 virtlab413 kernel: [<ffffffffa078d9b5>] colonl_close_event+0xa5/0xac [nf_conntrack_colo]
Feb 24 05:29:15 virtlab413 kernel: [<ffffffffa078d948>] ? colonl_close_event+0x38/0xac [nf_conntrack_colo]
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff81080b25>] ? __atomic_notifier_call_chain+0x5/0xa0
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff8108035c>] notifier_call_chain+0x4c/0x70
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff81080b82>] __atomic_notifier_call_chain+0x62/0xa0
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff81080b25>] ? __atomic_notifier_call_chain+0x5/0xa0
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff8150c02d>] ? skb_dequeue+0x5d/0x80
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff81080bd6>] atomic_notifier_call_chain+0x16/0x20
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff8155eb02>] netlink_release+0x302/0x340
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff81503c8f>] sock_release+0x1f/0x90
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff81503d12>] sock_close+0x12/0x20
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff811dce53>] __fput+0xd3/0x210
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff811dcfde>] ____fput+0xe/0x10
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff8107d5a7>] task_work_run+0xa7/0xe0
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff81002dd7>] do_notify_resume+0x97/0xb0
Feb 24 05:29:15 virtlab413 kernel: [<ffffffff81671047>] int_signal+0x12/0x17
Feb 24 05:29:15 virtlab413 kernel: INFO: lockdep is turned off.
Feb 24 05:29:15 virtlab413 kernel: INFO: rcu_preempt detected stalls on CPUs/tasks: {} (detected by 4, t=240014 jiffies, g=60214, c=60213, q=0)
Feb 24 05:29:15 virtlab413 kernel: INFO: Stall ended before state dump start
Feb 24 05:31:15 virtlab413 kernel: INFO: task qemu-system-x86:13033 blocked for more than 120 seconds.
Feb 24 05:31:15 virtlab413 kernel:      Tainted: G           OE  3.18.0uf-colo-00028-g75d30f0-dirty #18
Feb 24 05:31:15 virtlab413 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 24 05:31:15 virtlab413 kernel: qemu-system-x86 D 0000000000000000     0 13033  13004 0x00000080
Feb 24 05:31:15 virtlab413 kernel: ffff880ff5837b38 0000000000000096 ffff880ff2f35b00 0000000000012d40
Feb 24 05:31:15 virtlab413 kernel: ffff880ff5837fd8 0000000000012d40 ffff88100bdc16c0 ffff880ff2f35b00
Feb 24 05:31:15 virtlab413 kernel: 0000000000000000 7fffffffffffffff ffff880ff5837ca0 ffff880ff5837c98
Feb 24 05:31:15 virtlab413 kernel: Call Trace:
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff81669d29>] schedule+0x29/0x70
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff8166f16c>] schedule_timeout+0x1ec/0x350
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff8166b4f2>] ? wait_for_completion+0x32/0x120
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff8166b5a4>] wait_for_completion+0xe4/0x120
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff8108e110>] ? wake_up_state+0x20/0x20
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff810c2460>] ? rcu_barrier+0x20/0x20
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff810beb3c>] wait_rcu_gp+0x5c/0x80
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff810beac0>] ? ftrace_raw_output_rcu_utilization+0x50/0x50
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff810c29bf>] synchronize_rcu.part.54+0x1f/0x40
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff810c29f8>] synchronize_rcu+0x18/0x20
Feb 24 05:31:15 virtlab413 kernel: [<ffffffffa078d175>] colo_node_unregister+0x45/0x70 [nf_conntrack_colo]
Feb 24 05:31:15 virtlab413 kernel: [<ffffffffa078d9b5>] colonl_close_event+0xa5/0xac [nf_conntrack_colo]
Feb 24 05:31:15 virtlab413 kernel: [<ffffffffa078d948>] ? colonl_close_event+0x38/0xac [nf_conntrack_colo]
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff81080b25>] ? __atomic_notifier_call_chain+0x5/0xa0
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff8108035c>] notifier_call_chain+0x4c/0x70
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff81080b82>] __atomic_notifier_call_chain+0x62/0xa0
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff81080b25>] ? __atomic_notifier_call_chain+0x5/0xa0
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff8150c02d>] ? skb_dequeue+0x5d/0x80
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff81080bd6>] atomic_notifier_call_chain+0x16/0x20
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff8155eb02>] netlink_release+0x302/0x340
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff81503c8f>] sock_release+0x1f/0x90
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff81503d12>] sock_close+0x12/0x20
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff811dce53>] __fput+0xd3/0x210
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff811dcfde>] ____fput+0xe/0x10
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff8107d5a7>] task_work_run+0xa7/0xe0
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff81002dd7>] do_notify_resume+0x97/0xb0
Feb 24 05:31:15 virtlab413 kernel: [<ffffffff81671047>] int_signal+0x12/0x17
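
Looking at the trace, that synchronize_rcu() is reached from
colonl_close_event(), i.e. from a callback run by
__atomic_notifier_call_chain(), and atomic notifier chains invoke their
callbacks under rcu_read_lock(); a grace period can never complete while
this CPU sits inside a read-side critical section, which would explain the
stall. A minimal sketch of the pattern as I read it (the colo_* name below
is a stand-in, not the module's actual code):

/* Sketch of the suspected self-deadlock pattern, not the real module code. */
#include <linux/notifier.h>
#include <linux/rcupdate.h>

static int colo_close_event(struct notifier_block *nb,
                            unsigned long event, void *ptr)
{
        /*
         * atomic_notifier_call_chain() runs this callback under
         * rcu_read_lock(), so blocking here until a grace period
         * elapses can never make progress: the grace period cannot
         * end while we are inside a read-side critical section.
         */
        synchronize_rcu();      /* <-- blocks forever, as in the trace */
        return NOTIFY_DONE;
}

Deferring the unregister (or at least the synchronize_rcu()) to process
context, e.g. via a workqueue, would presumably avoid this.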

> 
> 5. Modified iptables
> We have added a new rule to the iptables command, so please get the patch from
> https://github.com/gao-feng/colo-proxy/blob/master/COLO-library_for_iptables-1.4.21.patch
> It is based on iptables version 1.4.21.

I see there's also an arptables patch, which I built.

Dave

> 
> 6. Qemu colo
> Check out the latest colo branch from
> https://github.com/coloft/qemu/commits/colo-v1.0
> then configure and make:
> # ./configure --target-list=x86_64-softmmu --enable-colo --enable-quorum 
> # make
> 
> * Test steps:
> 1. Load modules
> # modprobe nf_conntrack_colo (other colo modules will be loaded automatically
> by the script colo-proxy-script.sh)
> # modprobe xt_mark
> # modprobe kvm-intel
> 
> 2. Start up qemu
> master:
> # qemu-system-x86_64 -enable-kvm \
>     -netdev tap,id=hn0,colo_script=./scripts/colo-proxy-script.sh,colo_nicname=eth1 \
>     -device virtio-net-pci,id=net-pci0,netdev=hn0 -boot c \
>     -drive driver=quorum,read-pattern=first,children.0.file.filename=suse11_3.img,children.0.driver=raw,children.1.file.driver=nbd+colo,children.1.file.host=192.168.2.88,children.1.file.port=8889,children.1.file.export=colo1,children.1.driver=raw,if=virtio \
>     -vnc :7 -m 2048 -smp 2 -device piix3-usb-uhci -device usb-tablet \
>     -monitor stdio -S
> slave:
> # qemu-system-x86_64 -enable-kvm \
>     -netdev tap,id=hn0,colo_script=./scripts/colo-proxy-script.sh,colo_nicname=eth1 \
>     -device virtio-net-pci,id=net-pci0,netdev=hn0 -boot c \
>     -drive driver=blkcolo,export=colo1,backing.file.filename=suse11_3.img,backing.driver=raw,if=virtio \
>     -vnc :7 -m 2048 -smp 2 -device piix3-usb-uhci -device usb-tablet \
>     -monitor stdio -incoming tcp:0:8888
> 
> 3. On the Secondary VM's QEMU monitor, run
> (qemu) nbd_server_start 192.168.2.88:8889
> 
> 4. On the Primary VM's QEMU monitor, run the following commands:
> (qemu) migrate_set_capability colo on
> (qemu) migrate tcp:192.168.2.88:8888
> 
> 5. Done
> You will see two running VMs; whenever you make changes to the PVM, the SVM
> will be synced to the PVM's state.
> 
> 6. Failover test
> You can kill the SVM (PVM) and run 'colo_lost_heartbeat' in the PVM's (SVM's)
> monitor at the same time; the PVM (SVM) will then fail over, and clients will
> not notice the change.
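
For example, to exercise the primary-surviving case: kill the SVM's qemu
process, then on the PVM's monitor run

(qemu) colo_lost_heartbeat

and the PVM should carry on serving clients on its own.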
> 
> It is still a framework, far from commercial use,
> so any comments/feedback are warmly welcomed ;)
> 
> PS:
> We (Huawei) have cooperated with Fujitsu on the COLO work;
> we work mainly on the COLO framework, and Fujitsu will focus on COLO block.
> 
> TODO list:
> 1) Optimize the checkpoint process to shorten the time it takes
> 2) Add more debug/stat info
> 3) Strengthen failover
> 4) Add the capability of continuous FT
> 
> v3:
> - use proxy instead of colo agent to compare network packets
> - add block replication
> - Optimize failover handling
> - handle shutdown
> 
> v2:
> - use QEMUSizedBuffer/QEMUFile as COLO buffer
> - colo support is enabled by default
> - add nic replication support
> - addressed comments from Eric Blake and Dr. David Alan Gilbert
> 
> v1:
> - implement the framework of colo
> 
> 
> zhanghailiang (27):
>   configure: Add parameter for configure to enable/disable COLO support
>   migration: Introduce capability 'colo' to migration
>   COLO: migrate colo related info to slave
>   migration: Integrate COLO checkpoint process into migration
>   migration: Integrate COLO checkpoint process into loadvm
>   migration: Don't send vm description in COLO mode
>   COLO: Implement colo checkpoint protocol
>   COLO: Add a new RunState RUN_STATE_COLO
>   QEMUSizedBuffer: Introduce two help functions for qsb
>   COLO: Save VM state to slave when do checkpoint
>   COLO RAM: Load PVM's dirty page into SVM's RAM cache temporarily
>   COLO VMstate: Load VM state into qsb before restore it
>   COLO RAM: Flush cached RAM into SVM's memory
>   COLO failover: Introduce a new command to trigger a failover
>   COLO failover: Implement COLO master/slave failover work
>   COLO failover: Don't do failover during loading VM's state
>   COLO: Add new command parameter 'colo_nicname' 'colo_script' for net
>   COLO NIC: Init/remove colo nic devices when add/cleanup tap devices
>   COLO NIC: Implement colo nic device interface configure()
>   COLO NIC : Implement colo nic init/destroy function
>   COLO NIC: Some init work related with proxy module
>   COLO: Do checkpoint according to the result of net packets comparing
>   COLO: Improve checkpoint efficiency by do additional periodic
>     checkpoint
>   COLO NIC: Implement NIC checkpoint and failover
>   COLO: Disable qdev hotplug when VM is in COLO mode
>   COLO: Implement shutdown checkpoint
>   COLO: Add block replication into colo process
> 
>  arch_init.c                            | 196 ++++++++-
>  configure                              |  14 +
>  hmp-commands.hx                        |  15 +
>  hmp.c                                  |   7 +
>  hmp.h                                  |   1 +
>  include/exec/cpu-all.h                 |   1 +
>  include/migration/migration-colo.h     |  57 +++
>  include/migration/migration-failover.h |  22 +
>  include/migration/migration.h          |  14 +
>  include/migration/qemu-file.h          |   3 +-
>  include/net/colo-nic.h                 |  25 ++
>  include/net/net.h                      |   4 +
>  include/sysemu/sysemu.h                |   3 +
>  migration/Makefile.objs                |   2 +
>  migration/colo-comm.c                  |  81 ++++
>  migration/colo-failover.c              |  48 +++
>  migration/colo.c                       | 743 +++++++++++++++++++++++++++++++++
>  migration/migration.c                  |  74 +++-
>  migration/qemu-file-buf.c              |  57 +++
>  net/Makefile.objs                      |   1 +
>  net/colo-nic.c                         | 438 +++++++++++++++++++
>  net/tap.c                              |  45 +-
>  qapi-schema.json                       |  27 +-
>  qemu-options.hx                        |  10 +-
>  qmp-commands.hx                        |  19 +
>  savevm.c                               |  10 +-
>  scripts/colo-proxy-script.sh           |  88 ++++
>  stubs/Makefile.objs                    |   1 +
>  stubs/migration-colo.c                 |  49 +++
>  vl.c                                   |  36 +-
>  30 files changed, 2047 insertions(+), 44 deletions(-)
>  create mode 100644 include/migration/migration-colo.h
>  create mode 100644 include/migration/migration-failover.h
>  create mode 100644 include/net/colo-nic.h
>  create mode 100644 migration/colo-comm.c
>  create mode 100644 migration/colo-failover.c
>  create mode 100644 migration/colo.c
>  create mode 100644 net/colo-nic.c
>  create mode 100755 scripts/colo-proxy-script.sh
>  create mode 100644 stubs/migration-colo.c
> 
> -- 
> 1.7.12.4
> 
> 
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


