From: Amit Shah
Subject: Re: [Qemu-ppc] [PATCH v2 2/2] ppc/spapr_hcall: Implement H_RANDOM hypercall in QEMU
Date: Wed, 2 Sep 2015 15:36:32 +0530

On (Wed) 02 Sep 2015 [10:58:57], Thomas Huth wrote:
> On 02/09/15 09:48, David Gibson wrote:
> > On Wed, Sep 02, 2015 at 11:04:12AM +0530, Amit Shah wrote:
> >> On (Mon) 31 Aug 2015 [20:46:02], Thomas Huth wrote:
> >>> The PAPR interface provides a hypercall to pass high-quality
> >>> hardware generated random numbers to guests. So let's provide
> >>> this call in QEMU, too, so that guests that do not support
> >>> virtio-rnd yet can get good random numbers, too.
> >>
> >> virtio-rng, not rnd.
> 
> Oh, sorry, I'll fix the description.

Thanks.  (It's that way in patch 0 too.)

> >> Can you elaborate what you mean by 'guests that do not support
> >> virtio-rng yet'?  The Linux kernel has had the virtio-rng driver since
> >> 2.6.26, so I'm assuming that's not the thing you're alluding to.
> >>
> >> Not saying this hypercall isn't a good idea, just asking why.  I think
> >> there are valid reasons: the driver fails to load, the driver is
> >> compiled out, or it is simply loaded too late in the boot cycle.
> > 
> > Yeah, I think we'd be talking about guests that just don't have it
> > configured, although I suppose it's possible someone out there is
> > using something earlier than 2.6.26 as well.  Note that H_RANDOM has
> > been supported under PowerVM for a long time, and PowerVM doesn't have
> > any virtio support.  So it is plausible that there are guests out
> > there with H_RANDOM support but no virtio-rng support, although I
> > don't know of any examples specifically.  RHEL6 had virtio support,
> > including virtio-rng more or less by accident (since it was only
> > supported under PowerVM).  SLES may not have made the same fortunate
> > error - I don't have a system handy to check.
> 
> Right, thanks David, I couldn't have explained it better.
> 
> >>> Please note that this hypercall should provide "good" random data
> >>> instead of pseudo-random, so the function uses the RngBackend to
> >>> retrieve the values instead of using a "simple" library function
> >>> like rand() or g_random_int(). Since there are multiple RngBackends
> >>> available, the user must select an appropriate backend via the
> >>> "h-random" property of the the machine state to enable it, e.g.
> >>>
> >>>  qemu-system-ppc64 -M pseries,h-random=rng-random ...
> >>>
> >>> to use the /dev/random backend, or "h-random=rng-egd" to use the
> >>> Entropy Gathering Daemon instead.
> >>
> >> I was going to suggest using -object here, but already I see you and
> >> David have reached an agreement for that.
> >>
> >> Out of curiosity: what does the host kernel use for its source when
> >> going the hypercall route?
> > 
> > I believe it draws from the same entropy pool as /dev/random.
> 
> The H_RANDOM handler in the kernel uses powernv_get_random_real_mode()
> in arch/powerpc/platforms/powernv/rng.c ... that seems to be a
> powernv-only pool (but it is also used to feed the normal kernel entropy
> pool, I think), but I am not an expert here so I might be wrong.

Thanks for the pointer, I'm going to take a look.

> >>> +static void random_recv(void *dest, const void *src, size_t size)
> >>> +{
> >>> +    HRandomData *hrcrdp = dest;
> >>> +
> >>> +    if (src && size > 0) {
> >>> +        memcpy(&hrcrdp->val.v8[hrcrdp->received], src, size);
> >>> +        hrcrdp->received += size;
> >>> +    }
> >>> +    qemu_sem_post(&hrcrdp->sem);
> >>> +}
> >>> +
> >>> +static target_ulong h_random(PowerPCCPU *cpu, sPAPRMachineState *spapr,
> >>> +                             target_ulong opcode, target_ulong *args)
> >>> +{
> >>> +    HRandomData hrcrd;
> >>> +
> >>> +    if (!hrandom_rng) {
> >>> +        return H_HARDWARE;
> >>> +    }
> >>> +
> >>> +    qemu_sem_init(&hrcrd.sem, 0);
> >>> +    hrcrd.val.v64 = 0;
> >>> +    hrcrd.received = 0;
> >>> +
> >>> +    qemu_mutex_unlock_iothread();
> >>> +    while (hrcrd.received < 8) {
> >>> +        rng_backend_request_entropy((RngBackend *)hrandom_rng,
> >>> +        rng_backend_request_entropy((RngBackend *)hrandom_rng,
> >>> +                                    8 - hrcrd.received, random_recv,
> >>> +                                    &hrcrd);
> >>> +        qemu_sem_wait(&hrcrd.sem);
> >>> +    }
> >>
> >> Is it possible for a second hypercall to arrive while the first is
> >> waiting for the backend to provide data?
> > 
> > Yes it is.  The hypercall itself is synchronous, but you could get
> > concurrent calls from different guest CPUs.  Hence the need for
> > iothread unlocking.
> 
> BQL and semaphore handling should be ok, I think, but one remaining
> question is: Can the RngBackend deal with multiple requests in flight
> from different vCPUs? Or is it limited to one consumer only? Amit, do
> you know this?

It's not limited to one consumer; it should work fine for the way
you're using it.  For virtio-rng, though, I've had this feeling for a
while that it won't do the right thing (i.e. it will source more bytes
than asked for), which bothers me.  One of the things I want to look
at later.


                Amit


