bug-make

Re: Optimization for reading *.d files


From: Paul Smith
Subject: Re: Optimization for reading *.d files
Date: Sun, 19 Mar 2017 01:02:23 -0400

Before you go too far with performance improvements, you should really
move to the latest version of GNU make (4.2.1), or even try the current
Git HEAD (but you'll need to install autotools etc. to build from Git).

Doing performance testing with a version of make that old is much less
useful.


On Sat, 2017-03-18 at 19:25 -0700, brenorg wrote:
> There are lots of dependency files and they can be processed in parallel,
> before being merged into the database.

Well, make is not multithreaded, so it can't process files in parallel.
I suppose that for slower disks some kind of asynchronous file reading
could be useful, letting data be retrieved from the disk while make
works on previously-retrieved data, but I'm not aware of any async file
IO that is anywhere close to portable.  Also, with SSDs more common
these days, file IO latency is nowhere near what it used to be, and it's
decreasing all the time.

Someone would have to prove the extra complexity was gaining a
significant amount of performance before I would be interested.

> For that, GNU make would need an extension to the include directive to
> handle "include *.d" differently, since it knows dependency files won't
> alter/create variables but just add dependencies.

I'm certainly not willing to do something like declare that all included
files ending in .d should be treated as dependency files: people might
use .d files for other things, and they might create dependency files
with different names.  ".d" is just a convention, and not a strong one
at that, IMO.
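
For context, the convention in question is the usual compiler-generated
dependency setup; a generic sketch (file and variable names here are
illustrative, not taken from the original report) looks roughly like:

    SRCS := $(wildcard *.c)
    OBJS := $(SRCS:.c=.o)
    DEPS := $(OBJS:.o=.d)

    # -MMD asks the compiler to write foo.d alongside foo.o; -MP adds
    # phony targets for headers so deleted headers don't break the build.
    # (The recipe line must be indented with a tab.)
    %.o: %.c
            $(CC) -MMD -MP -c $< -o $@

    # The leading "-" keeps a fresh build from failing before any .d
    # files exist.
    -include $(DEPS)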

However, the user could declare in her makefile that all included files
matching a given pattern should be treated as simple files (for
example).  That would be acceptable, again if the gains were significant
enough.
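
For reference, a compiler-generated dependency fragment normally
contains nothing but plain prerequisite lines (no variable assignments,
functions, or recipes), which is what would make a restricted "simple
file" parse feasible.  An illustrative example, not taken from the
poster's build:

    foo.o: foo.c foo.h util.h
    foo.h:
    util.h:

(The empty header rules are what -MP generates, so a deleted header
doesn't cause a "No rule to make target" error.)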

I'm not convinced that it's impossible to speed up the parser in
general, though.  Parsing a simple line with the full scanner shouldn't
take twice as long as parsing it with a targeted scanner.  After all,
such a line doesn't use any of the parser's special features, so those
should cost very little.
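
To make "special features" concrete (an invented illustration, with
made-up variable names): the first line below is the sort of thing a .d
file contains, while the second needs variable and function expansion
before make can even split it into targets and prerequisites:

    foo.o: foo.c foo.h

    $(OBJDIR)/foo.o: $(filter %.c,$(SRCS)) $(EXTRA_DEPS)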

I'd prefer to investigate improving the existing parser, rather than
create a completely separate parser path.


