qemu-devel

Re: [RFC PATCH v4] ptp: Add vDSO-style vmclock support


From: David Woodhouse
Subject: Re: [RFC PATCH v4] ptp: Add vDSO-style vmclock support
Date: Wed, 17 Jul 2024 09:16:47 +0100
User-agent: Evolution 3.44.4-0ubuntu2

On Tue, 2024-07-16 at 15:20 +0200, Peter Hilber wrote:
> On 16.07.24 14:32, David Woodhouse wrote:
> > On 16 July 2024 12:54:52 BST, Peter Hilber <peter.hilber@opensynergy.com> wrote:
> > > On 11.07.24 09:50, David Woodhouse wrote:
> > > > On Thu, 2024-07-11 at 09:25 +0200, Peter Hilber wrote:
> > > > > 
> > > > > IMHO this phrasing is better, since it directly refers to the state
> > > > > of the structure.
> > > > 
> > > > Thanks. I'll update it.
> > > > 
> > > > > AFAIU, if there were abnormal delays in store buffers, causing some
> > > > > driver to still see the old clock for some time, monotonicity could
> > > > > be violated:
> > > > > 
> > > > > 1. device writes new, much slower clock to store buffer
> > > > > 2. some time passes
> > > > > 3. driver reads old, much faster clock
> > > > > 4. device writes store buffer to cache
> > > > > 5. driver reads new, much slower clock
> > > > > 
> > > > > But I hope such delays do not occur.
> > > > 
> > > > For the case of the hypervisor←→guest interface this should be handled
> > > > by the use of memory barriers and the seqcount lock.
> > > > 
> > > > The guest driver reads the seqcount, performs a read memory barrier,
> > > > then reads the contents of the structure. Then performs *another* read
> > > > memory barrier, and checks the seqcount hasn't changed:
> > > > https://git.infradead.org/?p=users/dwmw2/linux.git;a=blob;f=drivers/ptp/ptp_vmclock.c;hb=vmclock#l351
> > > > 
> > > > The converse happens with write barriers on the hypervisor side:
> > > > https://git.infradead.org/?p=users/dwmw2/qemu.git;a=blob;f=hw/acpi/vmclock.c;hb=vmclock#l68
> > > 
> > > My point is that, looking at the above steps 1. - 5.:
> > > 
> > > 3. read HW counter, smp_rmb, read seqcount
> > > 4. store seqcount, smp_wmb, stores, smp_wmb, store seqcount become effective
> > > 5. read seqcount, smp_rmb, read HW counter
> > > 
> > > AFAIU this would still be a theoretical problem suggesting the use of
> > > stronger barriers.
> > 
> > This seems like a bug on the guest side. The HW counter needs to be read 
> > *within* the (paired, matching) seqcount reads, not before or after.
> > 
> > 
> 
> There would be paired reads:
> 
> 1. device writes new, much slower clock to store buffer
> 2. some time passes
> 3. read seqcount, smp_rmb, ..., read HW counter, smp_rmb, read seqcount
> 4. store seqcount, smp_wmb, stores, smp_wmb, store seqcount all become
>    effective only now
> 5. read seqcount, smp_rmb, read HW counter, ..., smp_rmb, read seqcount
> 
> I just omitted the parts which do not necessarily need to happen close to
> 4. for the monotonicity to be violated. My point is that 1. could become
> visible to other cores long after it happened on the local core (during
> 4.).

Oh, I see. That would be a bug on the device side then. And as you say,
it could be fixed by using the appropriate barriers. Or my alternative
of just documenting "Don't Do That Then".


