Sorry, I'm having trouble with my email client. My last
post got munged. Trying again, hope it works, bear with me...
Phil Holmes wrote:
If robots.txt was getting updated properly, all of our
Google search bar problems would be solved. We could
then stop telling Google to restrict the search results
to a particular version from the search box itself. The
robots.txt file only allows the current stable docs to
be indexed.
No - as far as I can see, it would only prevent indexing
of docs prior to current stable. It would still index the
current development docs, which I believe remains correct.
I know I've been out of the loop, but when was it decided
that we should allow Google to index the development docs?
The CG indicates that the robots.txt file should disallow
the current devel docs with the line
"Disallow: /doc/v2.CURRENT-DEVELOPMENT/":
http://lilypond.org/doc/v2.17/Documentation/contributor/major-release-checklist#Housekeeping-requirements
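For reference, the CG's rule implies a robots.txt along these lines. This is only a sketch: the version numbers are illustrative placeholders, and the real file lives at git/Documentation/web/server/robots.txt.

```
User-agent: *
# Older stable docs blocked from indexing (versions illustrative):
Disallow: /doc/v2.14/
Disallow: /doc/v2.15/
# Current development docs, per the CG's
# "Disallow: /doc/v2.CURRENT-DEVELOPMENT/" rule:
Disallow: /doc/v2.17/
```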
OK - I've checked the server, and you're quite right:
there appears to be no mechanism for
git/Documentation/web/server/robots.txt to update the root
of the web server.
That is a bug, and if no one has a solution ready, it needs
to be added to the tracker, either as a new issue or as an
addendum to #2909, #3209, or #3367. I think all 3 could
profitably be merged into one.
I believe that make website copies it to
/website/robots.txt, which is essentially useless. As I
see it, there are 3 options:
1) I could manually copy robots.txt. This is not a
long-term solution, but would be a step forward right
now. If Mark wants me to do this and no-one shouts,
I will.
2) We could have a cron job on the server do the copy.
   This strikes me as less good than
3) updating make website to do the copy itself.
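Whichever option wins, the underlying step is the same copy. A minimal sketch, assuming nothing about the real server layout (the paths in the comments are guesses, not the actual locations):

```shell
#!/bin/sh
# Sketch only: deploy robots.txt from the git checkout to the
# web-server root. Real paths on lilypond.org may differ.
deploy_robots() {
    src="$1"   # e.g. git/Documentation/web/server/robots.txt
    dest="$2"  # e.g. <webroot>/robots.txt
    # Copy to a temp name, then rename atomically, so crawlers
    # never fetch a half-written file.
    cp "$src" "$dest.tmp" && mv "$dest.tmp" "$dest"
}
```

For option 2, a crontab entry would just invoke this (or a plain cp) on a schedule; for option 3, the same copy would become a rule run as part of make website.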
Option no. 3! I'm not opposed to option 1 right now, as
long as option 3 is recorded in the tracker. Or if anyone
knows how to fix it, feel free to chime in!
- Mark