From: Howard Chu
Subject: Re: Fwd: [RFC] serialize the output of parallel make?
Date: Mon, 02 Aug 2010 08:42:45 -0700
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; rv:1.9.3a6pre) Gecko/20100708 Firefox 3.6
Edward Welbourne wrote:
>> 2x is too much. 1.5x has been the best in my experience; any more than
>> that and you're losing too much CPU to scheduling overhead instead of
>> real work. Any less and you're giving up too much in idle or I/O time.
>
> This depends a bit on whether you're using icecc or some similar
> distributed compilation system. I believe a better approach is to set a
> generous -j, such as twice the count of CPUs, but impose a load limit
> using -l, tuned rather more carefully. Scheduling overhead contributes
> to load, so is taken into account this way.
Perhaps in a perfect world -l would be useful. In fact, since load averages are calculated so slowly, by the time your -l limit is reached the actual CPU load will have blown past it and your machine will be thrashing. That's the entire reason I came up with the -j implementation in the first place.
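The two strategies above can be sketched concretely. This is a minimal illustration, not anything from the thread itself: it assumes GNU make and coreutils' `nproc`, and it only prints the invocations rather than running a build.

```shell
#!/bin/sh
# Hypothetical sketch of the two tuning strategies discussed.
ncpu=$(nproc)

# Howard's approach: a fixed -j at ~1.5x the CPU count, no -l at all,
# so the job cap itself bounds scheduling overhead.
jobs=$(( ncpu * 3 / 2 ))
echo "make -j $jobs"

# Edward's approach: a generous -j (2x CPUs) plus a load-average ceiling
# via -l. Note make compares -l against the 1-minute load average, which
# updates slowly -- the lag Howard objects to below.
echo "make -j $(( ncpu * 2 )) -l $ncpu"
```

On a quad-core machine this would print `make -j 6` and `make -j 8 -l 4`.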
--
  -- Howard Chu
  CTO, Symas Corp.               http://www.symas.com
  Director, Highland Sun         http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP      http://www.openldap.org/project/