From: Bob Proulx
Subject: Re: [Savannah-hackers-public] FSF public IP addresses are changing between December 20 and January 7th
Date: Thu, 3 Jan 2019 21:51:09 -0700
User-agent: Mutt/1.10.1 (2018-07-13)

Bob Proulx wrote:
> My plan is that tomorrow, Thursday, late morning I am going to start
> walking through the systems and moving them to the new IP addresses.
> This will be at least somewhat disruptive as in some cases four
> systems must all be up and online simultaneously in order to function.

Today was a busy day!  The systems except for frontend0 all have IPv6
addresses now.  frontend-dev was previously migrated.  nfs1 is
migrated.  internal0 now allows connections from both IPv4 and IPv6.
download0 has been mostly migrated but is still using the old IPv4
connection to internal0.  I have been rolling the systems from the
hand-generated iptables rules over to Shorewall as I go, because of
the protection Shorewall provides with "safe-restart" and other
helpfulness.
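
For context on the "safe-restart" protection: a sketch of the
intended workflow, assuming the stock Shorewall CLI, is to validate
the new rule set first and then apply it with the command that
restores the previous working configuration if the change is not
confirmed, so a bad rule cannot lock you out of a remote box.

  shorewall check && shorewall safe-restart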

> Hopefully things will be concluded by early afternoon.  Hopefully.

Hahaha!  It's always easy to be optimistic before really getting into
it.  :-)

It's just a very tedious and time-consuming process to walk through
everything.  Progress is being made.  We will get there.  It just
takes a little time.

Two big snafus caught me.  The worst was a fragility I didn't realize
existed.  On vcs0 and download0, ssh access is controlled by ssh keys
through a custom sshd configuration:

  AuthorizedKeysCommand /root/bin/sv_get_authorized_keys

The sv_get_authorized_keys script contacts internal0 for the account
database.  The fragility is that if, for whatever reason, it cannot
communicate with internal0, the old version of the script would hang
and block all login access, including root login.  It turns out I had
rewritten that script a couple of years ago and it was installed on
vcs0 but not yet on download0.  Needless to say, during the migration
download0 could not connect to internal0, at which point I could no
longer log into the system.  Oops.  I had to get the FSF admins to
mount the image and neuter that file so I could log in again until I
could figure it out.

This needs improvement so that root can still log into the system
even when the connection to the database system is impossible.  That
is currently not the case.  The script also cannot use a raw IPv6
address and requires a host name.  That is less than optimal too.
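
The improvement described above could look something like the
following sketch, assuming a POSIX shell and the coreutils `timeout`
command; the wrapper logic, fallback directory, and key material are
all hypothetical stand-ins, not the actual Savannah setup:

```shell
#!/bin/sh
# Hypothetical replacement for the AuthorizedKeysCommand wrapper.
# Idea: bound the remote lookup with a timeout, and never depend on
# the internal0 database for the root account, so root can still log
# in when the database is unreachable.

lookup_keys() {
    user="$1"

    # Root's keys always come from a local on-disk file.
    if [ "$user" = "root" ]; then
        cat "$FALLBACK_DIR/root" 2>/dev/null
        return 0
    fi

    # Bound the remote lookup so a dead internal0 cannot hang sshd;
    # on failure, fall back to a local copy of the user's keys.
    timeout 5 "$LOOKUP_CMD" "$user" 2>/dev/null ||
        cat "$FALLBACK_DIR/$user" 2>/dev/null
    return 0
}

# Demonstration with stand-in paths (the real script is
# /root/bin/sv_get_authorized_keys talking to internal0).
FALLBACK_DIR=$(mktemp -d)
LOOKUP_CMD=/nonexistent/sv_get_authorized_keys  # simulate internal0 down
echo "ssh-ed25519 AAAA... root@frontend" > "$FALLBACK_DIR/root"
lookup_keys root    # prints the local fallback key for root
```

In a real deployment the sshd side would also need
AuthorizedKeysCommandUser set, and the local fallback files would have
to be kept in sync out of band.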

A perhaps more serious problem involved libnss-mysql, which is used
for the account database.  I rolled it from the IPv4 addresses to the
new IPv6 addresses so that I could then move the IPv4 ones.  This
seemed to work.  Initially.  But for some reason it does not work
long term.  It very quickly complains that connections to internal0
are refused due to too many database connections.  internal0 is
configured for 250 connections, which has always been sufficient
previously.
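
For reference, the switch itself is just the database host line in
the libnss-mysql configuration; the field names below are from memory
and the values are illustrative, not our actual settings:

  # /etc/libnss-mysql.cfg (illustrative values)
  host     2001:db8::10        # was the old IPv4 address
  database savannah
  username nss
  password secret

  # internal0 side, MySQL my.cnf, matching the 250 figure above:
  [mysqld]
  max_connections = 250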

The real root cause is not yet known.  It is weird that using the
IPv6 address instead of the IPv4 one would cause a
too-many-connections error, but that was seemingly the behavior.  I
switched it back to IPv4 so that we could all go home and get some
dinner, since it was getting late by that time.  I'll note that
editing files while hundreds of wall messages from libnss complaining
about too many connections are being thrown at every login is
interesting.  (Emacs Tramp remote file editing works around that
problem.  FTW!)

There was also something funky with the download0 http /release mirror
redirector.  It also failed the regression test when the database
connection was IPv6 but magically came back to working order when
returning to IPv4.

The big systems left are vcs0 and the backend NFS storage on the old
vcs and olddownload machines.  As with everything else, it is the
connected pairs of systems that need to move together.

Lots still left to do for tomorrow.

Bob


