Re: Long response times from elpa.gnu.org


From: Bob Proulx
Subject: Re: Long response times from elpa.gnu.org
Date: Sat, 8 Feb 2014 14:02:38 -0700
User-agent: Mutt/1.5.21 (2010-09-15)

Johan Andersson wrote:
> Contacting host: elpa.gnu.org:80
                   ^^^^^^^^^^^^
> Failed to download `gnu' archive.

Stefan Monnier wrote:
> I indeed saw this problem recently (a couple weeks ago), and when
> I logged into elpa.gnu.org to investigate, I saw a flood of connections
                ^^^^^^^^^^^^

Eli Zaretskii wrote:
> > Bob Proulx wrote:
> > > When you see such slow response times, please go to savannah.gnu.org and
> > > open a support request about it.
> > 
> > Good idea but as far as I know elpa.gnu.org is not a Savannah machine.
> 
> ??? Then how come I have in my elpa/.git/config this snippet:
> 
>   [remote "origin"]
>         url = git+ssh://git.savannah.gnu.org/srv/git/emacs/elpa

Because git.savannah.gnu.org != elpa.gnu.org.  Those are different
systems.  That remote entry points at the Savannah git server hosting
the elpa.git repository; it says nothing about where the elpa.gnu.org
web host itself lives.
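
If anyone wants to quantify the slow responses, below is a rough
probe, a minimal sketch rather than any official tooling.  It times
fetches of the archive-contents file that package.el downloads from
the GNU archive; the exact URL, timeout, and retry count here are my
assumptions.

  #!/usr/bin/env python3
  """Time a few fetches of the file package.el asks elpa.gnu.org for."""

  import time
  import urllib.request

  URL = "http://elpa.gnu.org/packages/archive-contents"  # assumed path

  def probe(url, timeout=60):
      """Return (seconds, bytes) for one fetch; raises on failure."""
      start = time.monotonic()
      with urllib.request.urlopen(url, timeout=timeout) as resp:
          body = resp.read()
      return time.monotonic() - start, len(body)

  if __name__ == "__main__":
      for attempt in range(1, 4):
          try:
              elapsed, size = probe(URL)
              print(f"attempt {attempt}: {elapsed:.2f}s for {size} bytes")
          except OSError as err:  # urllib's URLError subclasses OSError
              print(f"attempt {attempt}: failed: {err}")

Consistently long times from a probe like this, while other GNU hosts
respond quickly, would be useful data to attach to a support request.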

However if you are talking about the vcs.sv.gnu.org VM (hosting
git.sv.gnu.org, aka git.savannah.gnu.org) then the entire VM stack has
known performance problems.  There are at least 24 VMs hosted on one
Xen-based dom0 system.  Karl and I believe that the dom0 is I/O
saturated when several of the guests are active at the same time.
That saturation causes long I/O waits with little CPU usage, which
shows up as a high load average even while the CPU is idle.
Meanwhile, any performance metrics observed from inside a VM are
misleading; they always report that everything is okay.  The only way
to really know what is happening would be to observe the dom0 host
during a performance brownout.  If we had some visibility into the
dom0 we would know something.  So far we don't, and until someone can
look at the dom0 we can't actually know anything.
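
To illustrate that symptom, here is a minimal sketch, assuming a Linux
guest with a standard /proc: it samples /proc/stat twice and compares
the iowait share of CPU time against the one-minute load average.  A
high load with most of the time in iowait rather than user/system is
the signature described above.  The same caveat applies, though: the
numbers a domU sees are themselves suspect, so this shows the shape of
the problem, not the dom0 truth.

  #!/usr/bin/env python3
  """Compare the iowait share of CPU time with the load average.
  Field layout of /proc/stat per proc(5)."""

  import time

  def cpu_times():
      """Return the aggregate 'cpu' counters from /proc/stat as ints."""
      with open("/proc/stat") as f:
          fields = f.readline().split()
      # fields: ['cpu', user, nice, system, idle, iowait, irq, ...]
      return [int(v) for v in fields[1:]]

  def iowait_share(interval=5.0):
      """Fraction of CPU time spent in iowait over a sampling interval."""
      before = cpu_times()
      time.sleep(interval)
      after = cpu_times()
      deltas = [b - a for b, a in zip(after, before)]
      total = sum(deltas)
      return deltas[4] / total if total else 0.0  # index 4 == iowait

  if __name__ == "__main__":
      share = iowait_share()
      with open("/proc/loadavg") as f:
          load1 = f.readline().split()[0]
      print(f"1-min load average: {load1}, iowait share: {share:.0%}")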

The FSF admins are so far unconvinced that the dom0 is at I/O
capacity.  They think the dom0 should be able to handle all of the
current load plus more.  And so we have the current status.  But I
don't think anything short of additional hardware, to increase the
available I/O capacity, will improve the situation.  Three dom0
systems instead of one would give three times the capability.  Put vcs
onto its own hardware and I believe the problem would go away.  Note
that they require their systems to run a coreboot BIOS, so I can't
just send additional hardware to Boston to help.

Bob


