Re: Slow start to cope with load
From: Ole Tange
Subject: Re: Slow start to cope with load
Date: Thu, 22 Mar 2012 17:20:52 +0100
On Thu, Mar 22, 2012 at 4:48 PM, Jay Hacker <jayqhacker@gmail.com> wrote:
> Perhaps this is a bit simplistic, but what if you took your idea and
> also kept a running estimate of the amount of load added by each job?
> Start out assuming each job adds 1 unit of load, and then measure:
> "Okay, I started 4 jobs last time, and the load went up by 8, so I
> estimate each job causes 2 units of load." Then when you sample and the
> current load is say 12 with 16 procs, the difference is 4, so you'll only
> add 2 jobs, and the load doesn't go over the max.
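A rough sketch of that feedback loop, just to make the quoted idea concrete
(MAX_LOAD, start_job, the Linux-only /proc/loadavg and the sleep intervals
are all illustrative assumptions, not anything from GNU parallel):

    #!/bin/sh
    # Sketch: adapt the per-job load estimate from the measured rise in load.
    MAX_LOAD=16
    per_job=1                      # initial guess: each job adds 1 unit of load
    start_job() { sleep 60 & }     # placeholder for the real job

    while true; do
      before=$(awk '{print $1}' /proc/loadavg)
      n=$(awk -v max="$MAX_LOAD" -v cur="$before" -v p="$per_job" \
            'BEGIN { n = int((max - cur) / p); print (n > 0 ? n : 0) }')
      i=0
      while [ "$i" -lt "$n" ]; do start_job; i=$((i + 1)); done
      sleep 60                     # let the 1-minute load average react
      after=$(awk '{print $1}' /proc/loadavg)
      if [ "$n" -gt 0 ]; then
        # re-estimate the load added per job, but never let it drop below 0.1
        per_job=$(awk -v a="$after" -v b="$before" -v n="$n" \
                    'BEGIN { d = (a - b) / n; print (d > 0.1 ? d : 0.1) }')
      fi
    done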
That would only work on dedicated single-user systems.
My servers are (ab)used by 3-5 people at the same time.
But I am warming up to the idea of ignoring load and instead just looking
at 'ps -A -o s' (rough sketch after the rules below):
1: If number of 'R' == number of cpus: Do not start another.
2: If number of 'D' amongst (grand)children >= 1: Do not start another.
3: Else start one more job.
CPU-limited tasks will be limited by rule 1.
Disk- and NFS-I/O-limited tasks will be limited by rule 2.
Network I/O will not be limited.
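A rough sketch of rules 1-3 as a polling loop (NUMCPUS, start_job and the
1-second poll are illustrative assumptions; rule 2 is simplified here to
count every process in state 'D', not only the (grand)children of the
started jobs):

    #!/bin/sh
    NUMCPUS=$(nproc)
    start_job() { sleep 60 & }     # placeholder for the real job

    while true; do
      running=$(ps -A -o s | grep -c '^R')    # rule 1: processes in state R (running)
      diskwait=$(ps -A -o s | grep -c '^D')   # rule 2: processes in state D (disk/NFS wait)
      if [ "$running" -lt "$NUMCPUS" ] && [ "$diskwait" -eq 0 ]; then
        start_job                             # rule 3: room for one more job
      fi
      sleep 1
    done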
I have not tested what will happen if the machine is swapping.
/Ole