Re: [Qemu-devel] [PATCH V2 00/20] Multiqueue virtio-net


From: Wanlong Gao
Subject: Re: [Qemu-devel] [PATCH V2 00/20] Multiqueue virtio-net
Date: Tue, 29 Jan 2013 13:36:07 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130110 Thunderbird/17.0.2

On 01/28/2013 12:24 PM, Jason Wang wrote:
> On 01/28/2013 11:27 AM, Wanlong Gao wrote:
>> On 01/25/2013 06:35 PM, Jason Wang wrote:
>>> Hello all:
>>>
>>> This series is an update of the last version of the multiqueue virtio-net support.
>>>
>>> This series tries to bring multiqueue support to virtio-net through a
>>> multiqueue-capable tap backend and multiple vhost threads.
>>>
>>> To support this, multiqueue NIC support was added to qemu. This is done by
>>> introducing an array of NetClientStates in NICState and making each pair of
>>> peers a queue of the NIC. This is done in patches 1-7.
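
(Just to picture the layout being described, I understand it as something
roughly like the sketch below; the field names are illustrative, not
necessarily the exact ones in the patches.)

    /* Rough sketch of the queue layout described above; names are
     * illustrative, not the exact patch code. */
    typedef struct NICState {
        NetClientState *ncs;    /* one NetClientState per queue, each peered
                                   with one backend queue */
        int queues;             /* hypothetical count of queue pairs */
        /* ... existing NICState fields ... */
    } NICState;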
>>>
>>> Tap was also converted to be able to create a multiqueue backend.
>>> Currently, only Linux supports this, by issuing TUNSETIFF N times with
>>> the same device name to create N queues. Each fd returned by TUNSETIFF
>>> is a queue supported by the kernel. Three new command line options were
>>> introduced: "queues" tells qemu how many queues to create; "fds" passes
>>> multiple pre-created tap file descriptors to qemu; "vhostfds" passes
>>> multiple pre-created vhost descriptors to qemu. This is done in patches
>>> 8-13.
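
(For reference, creating the queues from userspace looks roughly like the
sketch below; it assumes the IFF_MULTI_QUEUE flag from the 3.8 tuntap
headers, omits error cleanup, and is only an illustration of the mechanism,
not qemu's actual tap code.)

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/if.h>
    #include <linux/if_tun.h>

    /* Open one fd per queue by repeating TUNSETIFF with the same device name. */
    static int open_tap_queues(const char *ifname, int *fds, int queues)
    {
        struct ifreq ifr;
        int i;

        for (i = 0; i < queues; i++) {
            fds[i] = open("/dev/net/tun", O_RDWR);
            if (fds[i] < 0)
                return -1;
            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
            ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
            if (ioctl(fds[i], TUNSETIFF, &ifr) < 0)
                return -1;      /* each successful call yields one queue fd */
        }
        return 0;
    }

Each of these fds could then be handed to qemu via the new "fds" option.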
>>>
>>> A method of deleting a queue and a queue_index were also introduced for
>>> virtio; this is done in patches 14-15.
>>>
>>> Vhost was also changed to support multiqueue by introducing a start vq
>>> index, which tracks the first virtqueue that will be used by vhost,
>>> instead of assuming that vhost always uses virtqueues starting from
>>> index 0. This is done in patch 16.
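
(If I read this right, the mapping is just an offset, i.e. something like
the illustrative helper below; this reflects my reading of the description,
not the patch itself.)

    /* Illustrative only: a vhost device now covers a contiguous slice of the
     * virtio device's virtqueues starting at its start vq index, rather than
     * always starting at 0. */
    static int vhost_vq_to_virtio_vq(int start_vq_index, int i)
    {
        return start_vq_index + i;    /* absolute virtqueue index, 0 <= i < nvqs */
    }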
>>>
>>> The last part is the multiqueue userspace changes; this is done in
>>> patches 17-20.
>>>
>>> With these changes, a user can start a multiqueue virtio-net device through
>>>
>>> ./qemu -netdev tap,id=hn0,queues=2,vhost=on -device virtio-net-pci,netdev=hn0
>>>
>>> Management tools such as libvirt can pass multiple pre-created fds/vhostfds through
>>>
>>> ./qemu -netdev tap,id=hn0,fds=X:Y,vhostfds=M:N -device virtio-net-pci,netdev=hn0
>>>
>>> No git tree this round since github is unavailable in China...
>> I saw that github is accessible again; I can use it.
> 
> Thanks for the reminder, I've pushed the new bits to
> git://github.com/jasowang/qemu.git.

I got a host kernel oops here using your qemu tree and a 3.8-rc5 kernel on the host:

[31499.754779] BUG: unable to handle kernel NULL pointer dereference at (null)
[31499.757098] IP: [<ffffffff816475ef>] _raw_spin_lock_irqsave+0x1f/0x40
[31499.758304] PGD 0 
[31499.759498] Oops: 0002 [#1] SMP 
[31499.760704] Modules linked in: tcp_lp fuse xt_CHECKSUM lockd ipt_MASQUERADE 
sunrpc bnep bluetooth rfkill bridge stp llc iptable_nat nf_nat_ipv4 nf_nat 
iptable_mangle nf_conntr
ack_ipv4 nf_defrag_ipv4 nf_conntrack snd_hda_codec_realtek snd_hda_intel 
snd_hda_codec vhost_net tun snd_hwdep macvtap snd_seq macvlan coretemp 
kvm_intel snd_seq_device kvm snd_p
cm crc32c_intel r8169 snd_page_alloc snd_timer ghash_clmulni_intel snd mei 
iTCO_wdt mii microcode iTCO_vendor_support uinput serio_raw wmi i2c_i801 
lpc_ich soundcore pcspkr mfd_c
ore i915 video i2c_algo_bit drm_kms_helper drm i2c_core [last unloaded: 
ip6t_REJECT]
[31499.766412] CPU 2 
[31499.766426] Pid: 18742, comm: vhost-18728 Not tainted 3.8.0-rc5 #1 LENOVO QiTianM4300/To be filled by O.E.M.
[31499.769340] RIP: 0010:[<ffffffff816475ef>]  [<ffffffff816475ef>] _raw_spin_lock_irqsave+0x1f/0x40
[31499.770861] RSP: 0018:ffff8801b2f9dd08  EFLAGS: 00010086
[31499.772380] RAX: 0000000000000286 RBX: 0000000000000000 RCX: 0000000000000000
[31499.773916] RDX: 0000000000000100 RSI: 0000000000000286 RDI: 0000000000000000
[31499.775394] RBP: ffff8801b2f9dd08 R08: ffff880132ed4368 R09: 0000000000000000
[31499.776923] R10: 0000000000000001 R11: 0000000000000001 R12: ffff880132ed8590
[31499.778466] R13: ffff880232a6c290 R14: ffff880132ed42b0 R15: ffff880132ed0078
[31499.780012] FS:  0000000000000000(0000) GS:ffff88023fb00000(0000) knlGS:0000000000000000
[31499.781574] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[31499.783126] CR2: 0000000000000000 CR3: 0000000132d9c000 CR4: 00000000000427e0
[31499.784696] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[31499.786267] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[31499.787822] Process vhost-18728 (pid: 18742, threadinfo ffff8801b2f9c000, task ffff880036959740)
[31499.788821] Stack:
[31499.790392]  ffff8801b2f9dd38 ffffffff81082534 0000000000000000 0000000000000001
[31499.792029]  ffff880132ed0000 ffff880232a6c290 ffff8801b2f9dd48 ffffffffa023fab6
[31499.793677]  ffff8801b2f9de28 ffffffffa0242f64 ffff8801b2f9ddb8 ffffffff8109e0e0
[31499.795332] Call Trace:
[31499.796974]  [<ffffffff81082534>] remove_wait_queue+0x24/0x50
[31499.798641]  [<ffffffffa023fab6>] vhost_poll_stop+0x16/0x20 [vhost_net]
[31499.800313]  [<ffffffffa0242f64>] handle_tx+0x4c4/0x680 [vhost_net]
[31499.801995]  [<ffffffff8109e0e0>] ? idle_balance+0x1b0/0x2f0
[31499.803685]  [<ffffffffa0243155>] handle_tx_kick+0x15/0x20 [vhost_net]
[31499.805128]  [<ffffffffa023f95d>] vhost_worker+0xed/0x190 [vhost_net]
[31499.806842]  [<ffffffffa023f870>] ? vhost_work_flush+0x110/0x110 [vhost_net]
[31499.808553]  [<ffffffff81081b70>] kthread+0xc0/0xd0
[31499.810259]  [<ffffffff81010000>] ? ftrace_define_fields_xen_mc_entry+0x30/0xf0
[31499.811996]  [<ffffffff81081ab0>] ? kthread_create_on_node+0x120/0x120
[31499.813726]  [<ffffffff8164fb2c>] ret_from_fork+0x7c/0xb0
[31499.815442]  [<ffffffff81081ab0>] ? kthread_create_on_node+0x120/0x120
[31499.817168] Code: 08 61 cb ff 48 89 d0 5d c3 0f 1f 00 66 66 66 66 90 55 48 89 e5 9c 58 66 66 90 66 90 48 89 c6 fa 66 66 90 66 66 90 ba 00 01 00 00 <f0> 66 0f c1 17 0f b6 ce 38 d1 74 0e 0f 1f 44 00 00 f3 90 0f b6
[31499.821098] RIP  [<ffffffff816475ef>] _raw_spin_lock_irqsave+0x1f/0x40
[31499.823040]  RSP <ffff8801b2f9dd08>
[31499.824976] CR2: 0000000000000000
[31499.844842] ---[ end trace b7130aab34f0ed9c ]---


By printing the values, I saw that the NULL pointer is poll->wqh in vhost_poll_stop():

[  136.616527] vhost_net: poll = ffff8802081f8578
[  136.616529] vhost_net: poll>wqh =           (null)
[  136.616530] vhost_net: &poll->wait = ffff8802081f8590
[  136.622478] Modules linked in: fuse ebtable_nat xt_CHECKSUM lockd sunrpc ipt_MASQUERADE nf_conntrack_netbios_ns bnep nf_conntrack_broadcast bluetooth bridge rfkill ip6table_mangle stp llc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 iptable_nat nf_nat_ipv4 nf_nat iptable_mangle nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ebtable_filter ebtables ip6table_filter ip6_tables snd_hda_codec_realtek snd_hda_intel vhost_net snd_hda_codec tun macvtap snd_hwdep macvlan snd_seq snd_seq_device coretemp snd_pcm kvm_intel kvm snd_page_alloc crc32c_intel snd_timer ghash_clmulni_intel snd r8169 iTCO_wdt microcode iTCO_vendor_support mei lpc_ich pcspkr mii soundcore mfd_core i2c_i801 serio_raw wmi uinput i915 video i2c_algo_bit drm_kms_helper drm i2c_core
[  136.663172]  [<ffffffffa0283afc>] vhost_poll_stop+0x5c/0x70 [vhost_net]
[  136.664880]  [<ffffffffa0286cf2>] handle_tx+0x262/0x650 [vhost_net]
[  136.668289]  [<ffffffffa0287115>] handle_tx_kick+0x15/0x20 [vhost_net]
[  136.670013]  [<ffffffffa028395d>] vhost_worker+0xed/0x190 [vhost_net]
[  136.671737]  [<ffffffffa0283870>] ? vhost_work_flush+0x110/0x110 [vhost_net]


But I don't know whether we should check poll->wqh here, or whether it's a
qemu bug that causes the host kernel oops.
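
Something like the minimal sketch below is the kind of check I mean
(untested, and it may only paper over a bug on the qemu side):

    /* drivers/vhost/vhost.c -- minimal sketch of the discussed check (untested):
     * only detach from the wait queue if we actually attached to one. */
    #include <linux/wait.h>
    #include "vhost.h"

    void vhost_poll_stop(struct vhost_poll *poll)
    {
            if (poll->wqh) {
                    remove_wait_queue(poll->wqh, &poll->wait);
                    poll->wqh = NULL;
            }
    }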

Thanks,
Wanlong Gao

>>
>> Thanks,
>> Wanlong Gao
>>
>>
> 
> 



