From: Amit Shah
Subject: Re: [Qemu-devel] [PATCH RFC 1/2] rng-egd: improve egd backend performance
Date: Wed, 8 Jan 2014 21:53:02 +0530

On (Wed) 08 Jan 2014 [17:14:41], Amos Kong wrote:
> On Wed, Dec 18, 2013 at 11:05:14AM +0100, Giuseppe Scrivano wrote:
> > Markus Armbruster <address@hidden> writes:
> > 
> > > Amos Kong <address@hidden> writes:
> > >
> > >> Bugzilla: https://bugs.launchpad.net/qemu/+bug/1253563
> > >>
> > >> We have a request queue to cache the random data, but the second
> > >> request only comes in when the first one is returned, so we always
> > >> have only one item in the queue. This hurts performance.
> > >>
> > >> This patch changes the iothread to fill a fixed buffer with
> > >> random data from the egd socket; request_entropy() returns
> > >> data to the virtio queue whenever the buffer has data available.
> > >>
> > >> (tested with a fast source: a disguised egd socket)
> > >>  # cat /dev/urandom | nc -l localhost 8003
> > >>  # qemu .. -chardev socket,host=localhost,port=8003,id=chr0 \
> > >>         -object rng-egd,chardev=chr0,id=rng0,buf_size=1024 \
> > >>         -device virtio-rng-pci,rng=rng0
> > >>
> > >>   bytes     kB/s
> > >>   ------    ----
> > >>   131072 ->  835
> > >>    65536 ->  652
> > >>    32768 ->  356
> > >>    16384 ->  182
> > >>     8192 ->   99
> > >>     4096 ->   52
> > >>     2048 ->   30
> > >>     1024 ->   15
> > >>      512 ->    8
> > >>      256 ->    4
> > >>      128 ->    3
> > >>       64 ->    2
> > >
> > > I'm not familiar with the rng-egd code, but perhaps my question has
> > > value anyway: could aggressive read-ahead on a source of randomness
> > > cause trouble by depleting the source?
> > >
> > > Consider a server restarting a few dozen guests after reboot, where each
> > > guest's QEMU then tries to slurp in a couple of KiB of randomness.  How
> > > does this behave?
> 
> Hi Giuseppe,
>  
> > I hit this performance problem while I was working on RNG device
> > support in virt-manager, and I also noticed that the bottleneck is the
> > egd backend, which responds slowly to requests.
> 
> o Current situation:
>   The rng-random backend reads data from non-blocking character devices.
>   A new entropy request is only sent from the guest after the last one
>   has been processed, so the request queue can only ever hold one request.
>   Almost all requests are 64 bytes in size.
>   The egd socket responds to requests slowly.
> 
> o Solution 1: pre-reading; performance improves, but it costs a lot of memory
>   In my V1 patch, I tried adding a configurable buffer to pre-read data
>   from the egd socket. Performance improved, but it used a large amount
>   of memory for the buffer.
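
For readers following along, the pre-read approach in solution 1 (and in
the patch quoted above) amounts to roughly the following. This is an
untested sketch with hypothetical names, not the actual patch code: a
fixed buffer is topped up from the egd socket by the iothread, and guest
requests are served straight from it.

    /* Untested sketch of the pre-read buffer idea; names are
     * hypothetical, not the actual QEMU rng-egd code. */
    #include <stddef.h>
    #include <string.h>
    #include <unistd.h>

    #define BUF_SIZE 1024        /* matches buf_size=1024 in the test above */

    static unsigned char buf[BUF_SIZE];
    static size_t buf_len;       /* bytes of entropy currently buffered */

    /* Run from the iothread whenever the egd socket is readable:
     * top the buffer up instead of reading once per guest request. */
    static void fill_from_egd(int egd_fd)
    {
        while (buf_len < BUF_SIZE) {
            ssize_t n = read(egd_fd, buf + buf_len, BUF_SIZE - buf_len);
            if (n <= 0) {
                break;           /* would block (non-blocking fd), or EOF/error */
            }
            buf_len += n;
        }
    }

    /* Per guest request: satisfied immediately from the buffer when
     * data is available; returns the number of bytes copied. */
    static size_t request_entropy(unsigned char *out, size_t want)
    {
        size_t n = want < buf_len ? want : buf_len;

        memcpy(out, buf, n);
        memmove(buf, buf + n, buf_len - n);
        buf_len -= n;
        return n;
    }

The memory cost Amos mentions is the BUF_SIZE bytes held per backend
regardless of guest demand.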

I really dislike buffering random numbers or entropy from the host;
let's rule these options out.

> o Solution 2: pre-sending requests to the egd socket; the improvement is trivial
>   In another test, I pre-sent entropy requests to the egd socket without
>   actually reading the data into a buffer.
> 
> o Solution 3: blind polling, not good
>   Always return a positive value from rng_egd_chr_can_read(); performance
>   improves to 120 kB/s, because this reduces the delay caused by the
>   polling mechanism.
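
To make solution 3 concrete: in QEMU's chardev model, the backend's
can_read() callback tells the chardev layer how many bytes the backend
will accept, and returning 0 stops polling. As I understand it, the
blind-poll variant just returns a fixed positive value so the socket
keeps being polled even with no request queued. A sketch, with an
illustrative constant:

    /* Sketch only: the real rng-egd callback returns the total size of
     * the queued requests; always returning a positive value keeps the
     * chardev layer polling the egd socket. */
    static int rng_egd_chr_can_read(void *opaque)
    {
        return 64;   /* always willing to accept data */
    }

The downside is that data may arrive when no request is pending, which
is presumably why Amos marks it "not good".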
> 
> o Solution 4:
>   Try a new message type to improve the response speed of the egd socket.
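
For context on message types: the de-facto protocol of the original
egd.pl (which, as noted below, isn't documented anywhere in QEMU) is a
single command byte, optionally followed by a length byte. Roughly:

    /* The conventional egd.pl command bytes; a blocking entropy
     * request is just two bytes on the socket. Sketch only. */
    #include <stdint.h>
    #include <unistd.h>

    enum {
        EGD_GET_COUNT     = 0x00,  /* query available entropy */
        EGD_READ_NONBLOCK = 0x01,  /* read entropy, may return short */
        EGD_READ_BLOCK    = 0x02,  /* read entropy, block until filled */
        EGD_ADD_ENTROPY   = 0x03,  /* feed entropy to the daemon */
        EGD_GET_PID       = 0x04,  /* query the daemon's pid */
    };

    static void egd_request(int fd, uint8_t len)
    {
        uint8_t header[2] = { EGD_READ_BLOCK, len };

        if (write(fd, header, sizeof(header)) != (ssize_t)sizeof(header)) {
            /* real code would handle short writes and errors */
        }
    }

Which "new message type" solution 4 has in mind isn't stated; switching
between the blocking and non-blocking read commands would be one
candidate.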
> 
> o Solution 5:
>   Non-blocking reads?
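
If solution 5 means putting the egd socket into non-blocking mode so
reads never stall the backend, the setup is the standard fcntl() dance;
sketch:

    /* Put a file descriptor into non-blocking mode; read() then
     * returns -1/EAGAIN instead of stalling when no data is ready. */
    #include <fcntl.h>

    static int set_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);

        if (flags < 0) {
            return -1;
        }
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }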

I'd just say let the "problem" be.  I don't really get the point of
egd.  The egd backend was something Anthony wanted, but I can't
remember whether there was ever enough justification for it.  Certainly
the protocol isn't documented, and not using the backend has no
drawbacks.

Moreover, reasonable guests won't request a whole lot of random
numbers in a short interval, so the theoretical performance problem
we're seeing is going to remain theoretical for well-behaved guests.

We have enough documentation about this issue by now; I say let's just
drop this patch and worry about it only if there's a proven need to
improve things here.

                Amit


