
Re: [Qemu-ppc] [PATCH v2 2/2] ppc/spapr_hcall: Implement H_RANDOM hypercall in QEMU


From: Amit Shah
Subject: Re: [Qemu-ppc] [PATCH v2 2/2] ppc/spapr_hcall: Implement H_RANDOM hypercall in QEMU
Date: Wed, 2 Sep 2015 15:32:01 +0530

On (Wed) 02 Sep 2015 [17:48:01], David Gibson wrote:
> On Wed, Sep 02, 2015 at 11:04:12AM +0530, Amit Shah wrote:
> > On (Mon) 31 Aug 2015 [20:46:02], Thomas Huth wrote:
> > > The PAPR interface provides a hypercall to pass high-quality
> > > hardware generated random numbers to guests. So let's provide
> > > this call in QEMU, too, so that guests that do not support
> > > virtio-rnd yet can get good random numbers, too.
> > 
> > virtio-rng, not rnd.
> > 
> > Can you elaborate what you mean by 'guests that do not support
> > virtio-rng yet'?  The Linux kernel has had the virtio-rng driver since
> > 2.6.26, so I'm assuming that's not the thing you're alluding to.
> > 
> > Not saying this hypercall isn't a good idea, just asking why.  I think
> > there are valid reasons, like the driver failing to load, the driver
> > being compiled out, or it simply being loaded too late in the boot cycle.
> 
> Yeah, I think we'd be talking about guests that just don't have it
> configured, although I suppose it's possible someone out there is
> using something earlier than 2.6.26 as well.  Note that H_RANDOM has
> been supported under PowerVM for a long time, and PowerVM doesn't have
> any virtio support.  So it is plausible that there are guests out
> there with H_RANDOM support but no virtio-rng support, although I
> don't know of any examples specifically.  RHEL6 had virtio support,
> including virtio-rng, more or less by accident (since RHEL6 on Power
> was only supported under PowerVM).  SLES may not have made the same
> fortunate error - I don't have a system handy to check.

RHEL6 also used 2.6.32, which means it inherited the virtio-rng driver
from upstream.  But you're right that x86 didn't have a host-side
virtio-rng device in QEMU back then.

> > > Please note that this hypercall should provide "good" random data
> > > instead of pseudo-random, so the function uses the RngBackend to
> > > retrieve the values instead of using a "simple" library function
> > > like rand() or g_random_int(). Since there are multiple RngBackends
> > > available, the user must select an appropriate backend via the
> > > "h-random" property of the the machine state to enable it, e.g.
> > > 
> > >  qemu-system-ppc64 -M pseries,h-random=rng-random ...
> > > 
> > > to use the /dev/random backend, or "h-random=rng-egd" to use the
> > > Entropy Gathering Daemon instead.
> > 
> > I was going to suggest using -object here, but already I see you and
> > David have reached an agreement for that.
> > 
> > Out of curiosity: what does the host kernel use for its source when
> > going the hypercall route?
> 
> I believe it draws from the same entropy pool as /dev/random.

OK - I'll take a look there as well.

> > > +static void random_recv(void *dest, const void *src, size_t size)
> > > +{
> > > +    HRandomData *hrcrdp = dest;
> > > +
> > > +    if (src && size > 0) {
> > > +        memcpy(&hrcrdp->val.v8[hrcrdp->received], src, size);
> > > +        hrcrdp->received += size;
> > > +    }
> > > +    qemu_sem_post(&hrcrdp->sem);
> > > +}
> > > +
> > > +static target_ulong h_random(PowerPCCPU *cpu, sPAPRMachineState *spapr,
> > > +                             target_ulong opcode, target_ulong *args)
> > > +{
> > > +    HRandomData hrcrd;
> > > +
> > > +    if (!hrandom_rng) {
> > > +        return H_HARDWARE;
> > > +    }
> > > +
> > > +    qemu_sem_init(&hrcrd.sem, 0);
> > > +    hrcrd.val.v64 = 0;
> > > +    hrcrd.received = 0;
> > > +
> > > +    qemu_mutex_unlock_iothread();
> > > +    while (hrcrd.received < 8) {
> > > +        rng_backend_request_entropy((RngBackend *)hrandom_rng,
> > > +                                    8 - hrcrd.received, random_recv, &hrcrd);
> > > +        qemu_sem_wait(&hrcrd.sem);
> > > +    }
> > 
> > Is it possible for a second hypercall to arrive while the first is
> > waiting for the backend to provide data?
> 
> Yes it is.  The hypercall itself is synchronous, but you could get
> concurrent calls from different guest CPUs.  Hence the need for
> iothread unlocking.

OK, thanks!


                Amit


