Re: need a server-busting test case
From: Todd Denniston
Subject: Re: need a server-busting test case
Date: Wed, 16 Apr 2008 09:36:59 -0400
User-agent: Thunderbird 2.0.0.12 (X11/20080213)
Ted Stern wrote, On 04/15/2008 06:47 PM:
> Hi all,
> At a company That Shall Not Be Named,
Voldemort international?
> the bean counters in IT support
S. Snape?
> are telling us that our new server can't be a standalone workstation.
Of course, it's a SERVER, not a workstation.
That, however, does not mean it could not be a standalone SERVER, which could
bring on requirements for physical security, backups, and fire safety.
> Instead, they're going to give us a virtual Linux server on a
> mainframe.
> We will be running various SCM utilities on this server. That means
> some web hosting, CVS, subversion, bugzilla, etc.
> A virtual server in itself wouldn't be so bad, but this one will be
> static -- it will have about half the memory and I/O bandwidth and
> capacity of the standalone Linux workstation we had envisioned. This
> doesn't make a lot of sense to me, since it seems like we're going
> backwards from what we have now.
> So I'm trying to come up with a test case we can use to break the
> server, in order to show that their assessment of our requirements is
> incorrect.
> Is there something I can do with CVS that will demonstrate high I/O
> and/or memory requirements? I have access to a repository with ~43K
> files. Would it suffice to have some tagging, checkouts and commits
> going (almost) concurrently?
> Ted
IIRC, checking out, diffing against, or merging with files that are on a branch
and have been changed many times can produce sub-optimal performance, because
cvs has to get the branch revision and apply deltas to it to produce the
version you asked for, and IIRC that work happens on the server. And as Kevin
indicated, earlier revisions force cvs to construct the file from differences
relative to the current HEAD. So if you combine these -- do your operations
between a fairly recent branch and an active branch that branched off early in
the file's history -- I think you should be able to do some interesting beating.
Also, if you have a large binary file that has had many revisions, a checkout
of it may put some stress on the system.
Also consider the processor load: using compression on the data stream would
lessen the required bandwidth, but it would increase the needed processor time.
(see -z9) :)
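To make that concrete, here is a minimal sketch of one such stressing checkout. The CVSROOT, module, and branch names are placeholders -- substitute whatever early, heavily-revised branch exists in your repository. The -r forces the server to reconstruct every file by walking deltas, and -z9 shifts load from the network onto the server's CPU:

```shell
#!/bin/sh
# Sketch only: cvsroot/module/branch arguments below are placeholders.

stress_checkout() {
    cvsroot=$1 module=$2 branch=$3
    workdir=$(mktemp -d) || return 1
    # -z9: maximum compression, trading server CPU for network bandwidth.
    # -r "$branch": an old, much-revised branch makes the server apply
    # many deltas per file to reconstruct the requested revisions.
    ( cd "$workdir" && cvs -z9 -d "$cvsroot" checkout -r "$branch" "$module" )
}

# Example invocation (placeholder values):
# stress_checkout :pserver:user@server:/cvsroot mymodule OLD_BRANCH_1
```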
Before you do any of that, you need to set something to measure by... as only
running the server out of memory (or disk space) is going to _crash_ it. And
if you don't tell your bosses what you need _before_ you start crying 'it is
not fast/big enough', they will say 'it works at all, so I don't care'.
i.e.
A) what is a reasonable time for one of your developers to have to wait for:
a) a full checkout (from a well used early branch ... hehehe)
b) a full update (on a well used branch)
c) a full diff
B) how many developers do you expect to be able to get the results set in (A)
at the same time? i.e., how many concurrent connections to the server?
I would suspect it would be hard to claim that more than ~80-90% of the team
is going to hit it at the same time.
C) Why? (i.e., the hard question, why does the server need to meet those
requirements?)
[Note: the time estimates below are arbitrary; replace them with what seems
sane for your shop.]
possible answer: spending an hour of developer time each day to do an update
is a lot more expensive to the project than the cost of a measly decent PC
with RAID and a tape drive, which would allow them to do it in 1 minute.
Oh, and that one-minute daily update means the developers spend 0.5 hour each
day working together to keep the whole baseline moving forward, instead of
spending two weeks a month integrating their changes together in big bangs
that never quite get everything working together correctly.
D) these tests need to be run using several workstations to drive them,
because 1) testing on the server itself does not exercise the server's network
I/O, and 2) using several workstations brings any possible network contention
to light and allows race-condition blockers to be hit. And don't forget, this
test needs to happen at the same time as the project(s) that 'own' the other
virtual machines on the server are conducting similar tests (got to make sure
you don't have any negative impacts on them).
E) don't forget to make sure the bosses are doing bugzilla queries and page
surfing on the web server while you are running this test (flush caches first).
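Putting (A), (B), and (D) together, here is a sketch of a driver you could run on each workstation: it forks the agreed number of concurrent clients and records each client's wall-clock time, so you have numbers to compare against the targets you set in (A). CLIENTS and CMD are assumptions -- set CMD to the real checkout/update/diff you are measuring:

```shell
#!/bin/sh
# Sketch only: CLIENTS and CMD below are placeholders to be overridden.
CLIENTS=${CLIENTS:-8}
CMD=${CMD:-'cvs -z9 checkout -r SOME_BRANCH mymodule'}

run_clients() {
    i=1
    while [ "$i" -le "$CLIENTS" ]; do
        (
            # Each client works in its own scratch directory.
            workdir=$(mktemp -d) && cd "$workdir" || exit 1
            start=$(date +%s)
            sh -c "$CMD" > /dev/null 2>&1
            echo "client $i: $(( $(date +%s) - start ))s elapsed"
        ) &
        i=$((i + 1))
    done
    wait    # block until every concurrent client finishes
}

# run_clients   # uncomment on each driving workstation (never on the server)
```

Run it simultaneously from several machines (point D) while the bosses browse bugzilla (point E), and compare the reported elapsed times against the thresholds from (A).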
--
Todd Denniston
Crane Division, Naval Surface Warfare Center (NSWC Crane)
Harnessing the Power of Technology for the Warfighter