nmh-workers

Re: [Nmh-workers] Re: nmh 1.2 is released


From: Igor Sobrado
Subject: Re: [Nmh-workers] Re: nmh 1.2 is released
Date: Fri, 23 Dec 2005 10:38:31 +0100

In message <address@hidden>, Bill Wohler writes:
> Igor Sobrado <address@hidden> writes:
> 
> > Seriously, I hope that the tarballs for nmh-1.2 will be available
> > from ftp://ftp.mhost.com/ too.
> 
> That's extra work with potentially little or no payoff.

I assumed that all that was required was uploading the source tarball
to the anonymous FTP server and updating the reference to it at
http://www.nongnu.org/nmh/.  I also assumed that ftp.mhost.com was
being maintained by MCCS, Inc., so no additional maintenance work
would be needed.

> >                               I do not like a lot using a browser
> > to download the nmh source code.
> 
> Try wget, curl.

Indeed, they are a good replacement for ftp where only HTTP downloads
are available.  Thanks for the advice.
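
For instance, either of these does the job (the URL below is only a
placeholder; the real link is whatever is published at
http://www.nongnu.org/nmh/):

$ wget http://www.example.org/nmh/nmh-1.2.tar.gz      # keeps the remote file name
$ curl -O http://www.example.org/nmh/nmh-1.2.tar.gz   # -O does the same with curl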

> >                                       http://www.nongnu.org/ is good
> > for communication between developers and to provide a description
> > of nmh to first-time users, but anyone that likes MH/nmh will
> > enjoy an anonymous FTP server for downloading it too.
> 
> You perhaps, but not anyone. Certainly not me. I prefer the browser
> interface. I'll use wget over ftp any day. If SourceForge didn't
> require the use of ftp to upload tarballs to incoming, I'd delete
> ncftp from my system.

Browsers are usually big and complex software packages with a lot of
dependencies on third-party software (e.g., gtk).  I enjoy FTP for
downloading files when I am working on servers (I usually do not run
window managers on them), but I believe it is a useful service for
workstations too.

> I definitely don't run any ftp servers. Unnecessary security risk. The
> servers tend to be older, with lots of buffer overflow bugs that are
> going unpatched since folks are busy using newer things.

Certainly there are FTP servers with a history of security weaknesses
that makes them unsuitable for production environments, but those are
usually overfeatured software packages.  I agree that setting up an
anonymous FTP area is difficult; if anonymous uploads are allowed, it
becomes a real nightmare.

But there are some reliable and well maintained FTP servers too.

In any case, I was only suggesting uploading the latest sources to
ftp://ftp.mhost.com/; I assumed that the latest tarballs had not been
uploaded yet simply for lack of time.  If you, or someone else,
believe that this service should be dropped, then do not upload the
tarballs to it.  HTTP downloads are an acceptable replacement for FTP
and, as you say, they mean fewer services must be maintained.

In fact, like you, I prefer to run as few services as possible:

$ netstat -a
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address          Foreign Address        State
tcp        0      0  192.168.1.230.65529    string1.ciencias.ssh   ESTABLISHED
tcp        0      0  localhost.smtp         *.*                    LISTEN
tcp        0      0  *.ssh                  *.*                    LISTEN
Active Internet6 connections (including servers)
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp6       0      0  localhost.smtp         *.*                    LISTEN
tcp6       0      0  *.ssh                  *.*                    LISTEN
Active UNIX domain sockets
[...]

(only ssh is listening to the world on my laptop)

> You're welcome to volunteer to maintain the nmh tarballs at
> ftp.mhost.com although in that case I would *definitely* include
> checksums for safety.

No, thanks.  It is a big responsibility that I would really prefer
not to accept.  If you feel that FTP is a superfluous service now
that HTTP downloads are provided, drop it.  In any case, I would
suggest adding these checksums for the HTTP downloads too.

There is a fine article by Jonathan Stone and Craig Partridge,
"When the CRC and TCP Checksum Disagree", on this matter.  The
results come from NFS services without UDP checksums enabled, if I
remember correctly.  In some cases, one packet in every 1,000 to
30,000 fails the TCP checksum.  Even though gzip(1) adds its own
checksum, it is usually a good idea to provide checksums and,
perhaps, even digital signatures for the files.  Besides, gzip(1)'s
checksum (a 32-bit CRC) is larger than TCP's 16-bit one.
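
Just as a sketch of what I mean (the file names are only an example),
generating the checksums and a detached signature next to the tarball
is a couple of commands:

$ md5sum nmh-1.2.tar.gz > nmh-1.2.tar.gz.md5
$ sha1sum nmh-1.2.tar.gz > nmh-1.2.tar.gz.sha1
$ gpg --armor --detach-sign nmh-1.2.tar.gz     # writes nmh-1.2.tar.gz.asc

and whoever downloads the tarball over HTTP can then check it with:

$ md5sum -c nmh-1.2.tar.gz.md5
$ gpg --verify nmh-1.2.tar.gz.asc nmh-1.2.tar.gz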

Cheers,
Igor.



