[Qemu-devel] KVM Networking 2 Bridges


From: Robert P
Subject: [Qemu-devel] KVM Networking 2 Bridges
Date: Tue, 20 Dec 2011 08:51:05 +0000

Hello,

I'm setting up a KVM / DRBD / Pacemaker cluster with 5 to 7 KVM instances.

The basic layout for all KVM instances is as follows:
The physical HP server ("hera", an HP DL360 G6) has two Ethernet devices.
For every virtual machine there are network bridges with a corresponding tun device, like this:

address@hidden ~]# brctl show
bridge name     bridge id           STP enabled     interfaces
brRnet          8000.68b599cde132   yes             eth1
                                                    tapAPHrnet   <- tun interface for KVM "APH"; this is eth0 inside the guest
brYnet          8000.68b599cde130   yes             eth0
                                                    tapAPHynet   <- this is eth1 inside KVM "APH"

eth0 is in 192.168.1.0/24, eth1 is in 10.214.0.0/24, and the guests' eth0 / eth1 are in the same networks, of course.
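
For completeness, the bridges themselves are set up roughly like this (just a simplified sketch; the tap devices are created by kvm at guest startup and attached to the right bridge by the ifup netscripts mentioned further below):

# one bridge per network, with the physical NIC enslaved; STP is enabled, as shown above
brctl addbr brYnet
brctl addif brYnet eth0
brctl stp brYnet on

brctl addbr brRnet
brctl addif brRnet eth1
brctl stp brRnet on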

with a "dmesg" on the physical server i get messages like that:

[ 5761.995699] brYnet: port 2(tapAPHynet) entering learning state
[ 5762.011734] brRnet: port 2(tapAPHrnet) entering learning state
[ 5776.966953] brYnet: topology change detected, sending tcn bpdu
[ 5776.966968] brYnet: port 2(tapAPHynet) entering forwarding state
[ 5776.982873] brRnet: topology change detected, sending tcn bpdu
[ 5776.982883] brRnet: port 2(tapAPHrnet) entering forwarding state

Is there any way to get rid of such messages in the kernel logs?

As long as there's just one KVM instance running on the server, there's no problem.
The kvm command for all instances looks like this:
kvm --enable-kvm -monitor unix:/var/run/qemu-server/100.mon,server,nowait \
    -daemonize -vnc 0.0.0.0:0 -m 4096 -smp 4 -cpu host -name Aphrodite \
    -drive file=/media/athene/backup/KVM/APH/KVM_Aphrodite_Ubuntu11.04_32bit_Master.img.raw.sav,if=virtio,format=raw,boot=on \
    -net nic,macaddr=12:34:56:78:10:01,model=virtio \
    -net tap,ifname=tapAPHynet,script=/media/athene/server/kvm/netscripts/ifup-ynet,downscript=/media/athene/server/kvm/netscripts/ifdown-ynet \
    -net nic,macaddr=12:34:56:78:10:02,model=virtio \
    -net tap,ifname=tapAPHrnet,script=/media/athene/server/kvm/netscripts/ifup-rnet,downscript=/media/athene/server/kvm/netscripts/ifdown-rnet
where the macaddr, name, id, tap device names and base image file differ for each instance.
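
The ifup/ifdown netscripts referenced in the command basically just attach or detach the tap device to/from the right bridge. Roughly like this (a simplified sketch of ifup-ynet / ifdown-ynet, not the literal scripts; kvm passes the tap device name as the first argument):

#!/bin/sh
# ifup-ynet (sketch): $1 is the tap device created by kvm
ifconfig "$1" 0.0.0.0 up        # bring the tap up without an IP address
brctl addif brYnet "$1"         # attach it to the "yellow" bridge

#!/bin/sh
# ifdown-ynet (sketch)
brctl delif brYnet "$1"
ifconfig "$1" down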

The Problem:
If I start another KVM instance while the first one is running, the CPU load of the new KVM process just "explodes", and we get a lot of lag in our WHOLE network.
It seems that traffic is looping from one tun interface of the newly started KVM into the other tun interface; I checked that via "ifconfig".
It then becomes almost impossible to log into any server on the network, and even locally on the host where the second KVM was started, logging in as root no longer works .... ?!
The shell just "hangs" :-(
I guess there's some misconfiguration of our network bridges.
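
By "checked via ifconfig" I mean roughly this: the packet/byte counters of the second guest's tap devices keep climbing in lockstep even with no real traffic on them, watched with something like:

watch -n 1 'ifconfig tapPOSynet; ifconfig tapPOSrnet'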

When the second KVM instance comes up, dmesg looks like this:
[ 831.953312] tapPOSrnet: received packet with own address as source address
[ 831.953349] tapPOSrnet: received packet with own address as source address
[ 836.372390] brRnet: neighbor 8000.68:b5:99:cd:e1:30 lost on port 3(tapPOSrnet)
[ 836.372394] brRnet: topology change detected, propagating
[ 836.376117] brYnet: received tcn bpdu on port 3(tapPOSynet)
[ 836.376119] brYnet: topology change detected, sending tcn bpdu
[ 836.934685] __ratelimit: 1079 callbacks suppressed
[ 836.934688] tapAPHrnet: received packet with own address as source address
[ 836.934704] tapAPHrnet: received packet with own address as source address
[ 836.934718] tapAPHrnet: received packet with own address as source address
[ 836.934731] tapAPHrnet: received packet with own address as source address
[ 836.934747] tapAPHrnet: received packet with own address as source address
[ 836.934848] tapAPHrnet: received packet with own address as source address
[ 836.934896] tapAPHrnet: received packet with own address as source address
[ 836.934911] tapAPHrnet: received packet with own address as source address
[ 836.934927] tapAPHrnet: received packet with own address as source address
[ 836.934956] tapAPHrnet: received packet with own address as source address
[ 856.346067] brRnet: neighbor 8000.68:b5:99:cd:e1:30 lost on port 3(tapPOSrnet)
[ 856.346072] brRnet: topology change detected, propagating
[ 856.346146] brYnet: received tcn bpdu on port 3(tapPOSynet)
[ 856.346148] brYnet: topology change detected, sending tcn bpdu
[ 858.342241] brYnet: received tcn bpdu on port 3(tapPOSynet)
[ 858.342244] brYnet: topology change detected, sending tcn bpdu
[ 878.275703] brRnet: neighbor 8000.68:b5:99:cd:e1:30 lost on port 3(tapPOSrnet)
[ 878.275707] brRnet: topology change detected, propagating
[ 938.143879] brRnet: neighbor 8000.68:b5:99:cd:e1:30 lost on port 3(tapPOSrnet)
[ 938.143883] brRnet: topology change detected, propagating
[ 938.143950] brYnet: received tcn bpdu on port 3(tapPOSynet)
[ 938.143952] brYnet: topology change detected, sending tcn bpdu
[ 940.144074] brYnet: received tcn bpdu on port 3(tapPOSynet)
[ 940.144077] brYnet: topology change detected, sending tcn bpdu
[ 959.155231] brRnet: neighbor 8000.68:b5:99:cd:e1:30 lost on port 3(tapPOSrnet)
[ 959.155236] brRnet: topology change detected, propagating
[ 959.155331] brYnet: received tcn bpdu on port 3(tapPOSynet)
[ 959.155333] brYnet: topology change detected, sending tcn bpdu
[ 961.155469] brYnet: received tcn bpdu on port 3(tapPOSynet)
[ 961.155472] brYnet: topology change detected, sending tcn bpdu
[ 981.104826] brRnet: neighbor 8000.68:b5:99:cd:e1:30 lost on port 3(tapPOSrnet)
[ 981.104830] brRnet: topology change detected, propagating
[ 981.104949] brYnet: received tcn bpdu on port 3(tapPOSynet)
[ 981.104951] brYnet: topology change detected, sending tcn bpdu
[ 983.104906] brYnet: received tcn bpdu on port 3(tapPOSynet)
[ 983.104909] brYnet: topology change detected, sending tcn bpdu
[ 1003.050508] brRnet: neighbor 8000.68:b5:99:cd:e1:30 lost on port 3(tapPOSrnet)
[ 1003.050512] brRnet: topology change detected, propagating
[ 1078.685214] INFO: task kvm:2483 blocked for more than 120 seconds.
[ 1078.685297] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1078.685381] kvm D 0000000000000000 0 2483 1 0x00000004
[ 1078.685386] ffff88021ec754c0 0000000000000082 0000000000000000 ffffffffa038021f
[ 1078.685389] ffff88021b0cf078 ffff88021d716800 000000000000f9e0 ffff8801e3733fd8
[ 1078.685393] 0000000000015780 0000000000015780 ffff88021c20b880 ffff88021c20bb78
[ 1078.685396] Call Trace:
[ 1078.685409] [] ? rpc_wake_up_next+0x181/0x188 [sunrpc]
[ 1078.685415] [] ? call_transmit_status+0x3c/0x57 [sunrpc]
[ 1078.685422] [] ? sync_page+0x0/0x46

The OS on the physical server is Debian Squeeze, 64-bit.

Can somebody please give me ANY help with this problem?
Or maybe somebody could post a useful link on how to set up a KVM guest with 2 or more network bridges correctly?
I'm also not 100% sure whether we need Spanning Tree Protocol enabled on the network bridges...
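I know STP can be switched per bridge with brctl, e.g.:

brctl stp brYnet off
brctl stp brRnet off

but I'm not sure whether turning it off (or leaving it on) is the right thing to do with this setup.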

Thanks a lot.
Robert
