From: Stefano Lattarini
Subject: Re: [Automake-NG] [PATCH 00/11] Several changes to parallel-tests support
Date: Thu, 10 May 2012 17:01:23 +0200

On 05/10/2012 04:46 PM, Bob Friesenhahn wrote:
> On Thu, 10 May 2012, Stefano Lattarini wrote:
>>
>> ===================================
>> BEFORE the series is applied
>> ===================================
>>
>> $ make all                                      # 12 times
>> 1.5 seconds
>>
>> ===================================
>> AFTER the series has been applied
>> ===================================
>>
>> $ make all                                      # 12 times
>> 10 seconds
>>
>> These numbers are still acceptable IMHO, and are a price worth paying
>> for the improved cleanliness of the code.
> 
> 10 seconds would not be at all acceptable to me.
>
But notice it is 10 seconds for *12 runs* and with *5 thousand tests*.  That
is already a corner-case situation, not a typical one.

> To me classic Automake 'make all' "NOP" time is already 2X what it should be
> for my package due to the recursion (just over a second, or 1/2 a second,
> depending on the machine).
> 
> Building the whole package might take 40 seconds, and the time is primarily
> to blame on the compiler.  A typical case is to edit a single source file
> and type 'make'.  This typical case needs to be as optimized as possible.
> Parsing and evaluating the Makefile should be a fraction of the time
> required to run the compiler on one source module and re-link.
> 
>> So, in conclusion: while I think we should keep an eye on performance,
>> I believe this series should be merged as-is, and optimizations should
>> be done later (once we have more of the framework for variable handling
>> in place).
>>
>> WDYT?
> 
> There is some concern about corner-painting.  There needs to be a plan
> to avoid being painted into a corner without hope of restoring the
> performance which was lost.
>
In my mail I drafted such a plan (at least for the kind of issues under
discussion here, i.e., deeply nested lazy-eval variables evaluated several
times).  Didn't that sound reasonable to you?  If not, why?  Honest
question, not rhetorical.
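
For concreteness, the class of construct at issue can be sketched with a
toy GNU make fragment (variable names here are invented for illustration,
not taken from the actual series):

```make
# Recursively expanded (lazy) variables: the right-hand side is
# re-expanded on *every* reference, so a deeply nested chain like
# this pays the full expansion cost each time the final variable
# is used.
am__base     = $(shell seq 1 5000)
am__filtered = $(filter-out 13,$(am__base))
am__sorted   = $(sort $(am__filtered))

# A later optimization pass could memoize the hot spots with a
# simply expanded assignment, which is evaluated only once, at
# the point of definition:
am__sorted_cached := $(sort $(filter-out 13,$(shell seq 1 5000)))
```

The plan, roughly, is that once the variable-handling framework is in
place, converting the hot lazy chains to cached, simply expanded forms
like the last line should be a mechanical change.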

Thanks,
  Stefano



