From: Amos Kong
Subject: Re: [Qemu-devel] [Bug 1253563] [NEW] bad performance with rng-egd backend
Date: Tue, 26 Nov 2013 23:57:30 +0800
User-agent: Mutt/1.5.21 (2010-09-15)

On Thu, Nov 21, 2013 at 09:24:11AM -0000, Amos Kong wrote:
> Public bug reported:
> 
> 
> 1. create listen socket
> # cat /dev/random | nc -l localhost 1024
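Aside, for anyone reproducing this: netcat variants differ slightly. The
OpenBSD nc accepts "nc -l localhost 1024", while nc.traditional wants
"nc -l -p 1024". Assuming iproute2's ss is available, a quick sanity check
that the listener is really up before starting the guest:

# ss -lnt | grep 1024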
> 
> 2. start vm with rng-egd backend
> 
> ./x86_64-softmmu/qemu-system-x86_64 --enable-kvm \
>     -mon chardev=qmp,mode=control,pretty=on \
>     -chardev socket,id=qmp,host=localhost,port=1234,server,nowait \
>     -m 2000 \
>     -device virtio-net-pci,netdev=h1,id=vnet0 -netdev tap,id=h1 \
>     -vnc :0 -drive file=/images/RHEL-64-virtio.qcow2 \
>     -chardev socket,host=localhost,port=1024,id=chr0 \
>     -object rng-egd,chardev=chr0,id=rng0 \
>     -device virtio-rng-pci,rng=rng0,max-bytes=1024000,period=1000
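Before measuring, it is also worth confirming inside the guest that
virtio-rng was bound as the hardware RNG. Assuming the guest kernel exposes
the usual hw_random sysfs nodes:

(guest) # cat /sys/class/misc/hw_random/rng_available
(guest) # cat /sys/class/misc/hw_random/rng_current

Both should mention something like "virtio" (the exact name depends on the
kernel version).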
> 
> (guest) # dd if=/dev/hwrng of=/dev/null
> 
> note: when the dd process is cancelled with Ctrl+C, it prints the read speed.
> 
> Problem:   the speed is around 1k/s
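For a more repeatable number than interrupting dd by hand, a bounded read
also works; the block size and count below are arbitrary:

(guest) # dd if=/dev/hwrng of=/dev/null bs=1k count=10

dd prints the transfer rate when it finishes.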
> 
> ===================
> 
> If I use the rng-random backend (filename=/dev/random), the speed is about
> 350k/s.
> 
> It seems that when a request entry is added to the list, we don't read the
> data from the queue immediately.
> The chr_read() is delayed, so the virtio_notify() is delayed, and the next
> request is delayed as well. This affects the speed.

Currently we have a request queue to cache unprocessed requests, but a new
entropy request only arrives after the last request has been fully processed
(the chardev data is filled into request->data, copied to the VQ, and virtio
is notified).

So the queue always holds 0 or 1 items. The request processing is designed
to be synchronous, but that doesn't work well.

Does this limitation come from the virtio-rng driver in the guest?
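To make the round trip concrete, below is a heavily simplified,
self-contained C sketch of how I read the egd backend / virtio-rng
interaction. It is not the real QEMU code; the names only loosely mirror
backends/rng-egd.c and hw/virtio/virtio-rng.c, and REQ_SIZE is an assumed
per-request size.

/* Simplified model of the rng-egd <-> virtio-rng round trip.  Not the
 * real QEMU code: names only loosely mirror backends/rng-egd.c and
 * hw/virtio/virtio-rng.c, and REQ_SIZE is an assumed per-request size. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define REQ_SIZE 64

struct rng_request {
    unsigned char data[REQ_SIZE];
    size_t offset;              /* bytes filled so far */
    size_t size;                /* bytes requested */
};

/* In practice the request queue holds 0 or 1 entries, so a single
 * pointer is enough for the model. */
static struct rng_request *pending;

/* virtio-rng side: issue one entropy request.  In QEMU this only happens
 * after the previous request has completed and the guest was notified. */
static void request_entropy(void)
{
    pending = calloc(1, sizeof(*pending));
    pending->size = REQ_SIZE;
    /* rng-egd would now write an EGD request header to the chardev
     * socket and wait for the poll loop to deliver data back. */
}

/* chardev poll: how many bytes are we willing to accept right now?
 * 0 whenever no request is queued, so the socket is not read at all
 * between requests. */
static int chr_can_read(void)
{
    return pending ? (int)(pending->size - pending->offset) : 0;
}

/* chardev poll: data arrived from the egd socket. */
static void chr_read(const unsigned char *buf, int len)
{
    int want = chr_can_read();
    int n = len < want ? len : want;

    memcpy(pending->data + pending->offset, buf, (size_t)n);
    pending->offset += (size_t)n;

    if (pending->offset == pending->size) {
        /* complete: copy to the virtqueue, virtio_notify() the guest... */
        free(pending);
        pending = NULL;
        /* ...and only now is the next request issued, so every REQ_SIZE
         * bytes pay a full request/response round trip. */
        request_entropy();
    }
}

int main(void)
{
    unsigned char junk[REQ_SIZE] = { 0 };
    int i;

    request_entropy();
    for (i = 0; i < 4; i++) {
        if (chr_can_read() > 0) {
            chr_read(junk, REQ_SIZE);
            printf("served request %d (%d bytes)\n", i + 1, REQ_SIZE);
        }
    }
    return 0;
}

The point is that chr_can_read() returns 0 whenever the queue is empty, so
the socket is not even polled between requests, and every chunk of entropy
pays a full request/response round trip.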
 
> I tried changing rng_egd_chr_can_read() to always return 1; the speed
> improved to about 400k/s.
> 
> Problem: currently we can't poll the content in time.
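The experiment amounted to roughly this (approximate, against
rng_egd_chr_can_read() in backends/rng-egd.c; it is a hack rather than a
fix, because data that arrives while no request is queued has nowhere
to go):

static int rng_egd_chr_can_read(void *opaque)
{
    /* experiment only: always claim we can accept data, so the chardev
     * poll keeps reading the egd socket even when no request is queued */
    return 1;
}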
 
 
> Any thoughts?
> 
> Thanks, Amos


