From: Dan Williams
Subject: Re: [Qemu-devel] [PATCH v5 1/6] libnvdimm: nd_region flush callback support
Date: Mon, 22 Apr 2019 12:44:21 -0700

On Mon, Apr 22, 2019 at 8:59 AM Jeff Moyer <address@hidden> wrote:
>
> Dan Williams <address@hidden> writes:
>
> > On Thu, Apr 18, 2019 at 9:18 AM Christoph Hellwig <address@hidden> wrote:
> >>
> >> On Thu, Apr 18, 2019 at 09:05:05AM -0700, Dan Williams wrote:
> >> > > > I'd either add a comment about avoiding retpoline overhead here or
> >> > > > just make ->flush == NULL mean generic_nvdimm_flush(). Just so that
> >> > > > people don't get confused by the code.
> >> > >
> >> > > Isn't this premature optimization?  I really don't like adding things
> >> > > like this without some numbers to show it's worth it.
> >> >
> >> > I don't think it's premature given this optimization technique is
> >> > already being deployed elsewhere, see:
> >> >
> >> > https://lwn.net/Articles/774347/
> >>
> >> For one, this one was backed by numbers, and second, after feedback
> >> from Linus we switched to the NULL pointer check instead.
> >
> > Ok I should have noticed the switch to NULL pointer check. However,
> > the question still stands do we want everyone to run numbers to
> > justify this optimization, or make it a new common kernel coding
> > practice to do:
> >
> >     if (!object->op)
> >         generic_op(object);
> >     else
> >         object->op(object);
> >
> > ...in hot paths?
>
> I don't think nvdimm_flush is a hot path.  Numbers of some
> representative workload would prove one of us right.

I'd rather say that the "if (!op) do_generic()" pattern is more
readable in the general case, and it saves grepping for who set the op
in the common case. The fact that it has the potential to be faster is
gravy at that point.
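
For reference, a minimal C sketch of the NULL-callback dispatch being
discussed, loosely modeled on the nvdimm_flush() / generic_nvdimm_flush()
split this series introduces. The struct layout and signatures below are
simplified for illustration and are not the exact code from the patch:

    /* Simplified stand-in for the real struct nd_region. */
    struct nd_region {
        /*
         * Optional driver-specific flush.  Drivers that are happy with
         * the generic behaviour leave this NULL, so the common path is
         * a direct call rather than an indirect one.
         */
        int (*flush)(struct nd_region *nd_region);
    };

    /* Generic write-pending-to-media flush (real body elided here). */
    static int generic_nvdimm_flush(struct nd_region *nd_region)
    {
        return 0;
    }

    static int nvdimm_flush(struct nd_region *nd_region)
    {
        /* The "if (!op) do_generic()" pattern from the thread above. */
        if (!nd_region->flush)
            return generic_nvdimm_flush(nd_region);

        return nd_region->flush(nd_region);
    }

In the common case nothing registers ->flush, so the call to
generic_nvdimm_flush() is direct and no retpoline is involved; a reader
also sees immediately what runs when no callback has been set.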


