Following up on all the emails, I just want to update you on what happened.
I let it run for a while and noticed it was retrieving mostly pages from Google, like subfolders on
, and also copying image files and so on.
I didn't really see it going for my website's cache files on Google.
Also, I only really need the HTML pages, nothing more.
Also, when I stopped the process and looked at my server, I didn't see any new folder where those files would have been copied to. Would it create a new folder by itself?
I really really appreciate your effort to help me. Thanks.
Below I am pasting some of the status output from the process so you can see what happened:
100%[==========================================================================>] 10,776 --.-K/s in 0.04s
HTTP request sent, awaiting response... 200 OK
Length: 1647 (1.6K) [image/gif]
100%[==========================================================================>] 1,647 --.-K/s in 0s
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
[ <=> ] 25,578 --.-K/s in 0.04s
HTTP request sent, awaiting response... 200 OK
Length: 1525 (1.5K) [image/gif]
100%[==========================================================================>] 1,525 --.-K/s in 0s
HTTP request sent, awaiting response... 302 Moved Temporarily
HTTP request sent, awaiting response... 200 OK
Length: 11720 (11K) [text/html]
100%[==========================================================================>] 11,720 --.-K/s in 0.02s
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
[ <=> ] 45,225 --.-K/s in 0.02s
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: /support/accounts/?hl=en [following]
HTTP request sent, awaiting response... 200 OK
Length: 16183 (16K) [text/html]
100%[==========================================================================>] 16,183 --.-K/s in 0.04s
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
[ <=> ] 11,591 --.-K/s in 0.001s
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
[ <=> ] 28,659 --.-K/s in 0.08s
HTTP request sent, awaiting response... 302 Moved Temporarily
HTTP request sent, awaiting response... 200 OK
Length: 14490 (14K) [text/html]
100%[==========================================================================>] 14,490 --.-K/s in 0.04s
HTTP request sent, awaiting response... 302 Moved Temporarily
HTTP request sent, awaiting response... 200 OK
Length: 11866 (12K) [text/html]
100%[==========================================================================>] 11,866 --.-K/s in 0.04s
HTTP request sent, awaiting response... 302 Found
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
[ <=> ] 25,578 --.-K/s in 0.04s
HTTP request sent, awaiting response... 200 OK
Length: 10833 (11K) [text/html]
100%[==========================================================================>] 10,833 --.-K/s in 0.04s
HTTP request sent, awaiting response... 200 OK
Length: 8558 (8.4K) [image/gif]
100%[==========================================================================>] 8,558 --.-K/s in 0.02s
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
[ <=> ] 36,833 --.-K/s in 0.04s
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
[ <=> ] 18,495 --.-K/s in 0.04s
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
On Thu, Nov 13, 2008 at 7:37 AM, Ben Smith
<address@hidden> wrote:
Actually, I realized there's an easier way.
Make a text file (filelist.txt) with the addresses of all the results pages.
Then use this command (all on one line, with no spaces after --exclude-domains= until the space before --input-file):
wget -r -l1 -UFirefox -H -erobots=off --wait 1 --exclude-domains=images.google.com,maps.google.com,news.google.com,mail.google.com,video.google.com,groups.google.com,books.google.com,scholar.google.com,finance.google.com,blogsearch.google.com,www.youtube.com,picasaweb.google.com,docs.google.com,sites.google.com,www.snowbrasil.com,translate.google.com --input-file=filelist.txt
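As a sketch, filelist.txt is just a plain text file with one results-page address per line; using the addresses quoted further down in this thread (and continuing the start= parameter up to 570), it might look like:

http://www.google.com/search?q=site%3Awww.snowbrasil.com%2Ffotos&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a
http://www.google.com/search?hl=en&client=firefox-a&rls=org.mozilla:en-US:official&hs=zle&q=site:www.snowbrasil.com/fotos&start=10&sa=N
http://www.google.com/search?hl=en&client=firefox-a&rls=org.mozilla:en-US:official&hs=o6J&q=site:www.snowbrasil.com/fotos&start=20&sa=N
(...and so on, up to start=570)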
All of your cache files will end up in a single subdirectory named after the IP address that hosted the cached files. When I tested it, it was
74.125.45.104, but that may vary. They are easy to identify, since they have "cache" in the filename and look similar to this:
address@hidden
3Awww.snowbrasil.com%2Ffotos%2Fv%2Fcbdn2007%2FDSC01076_resize.JPG.html+site%3Awww.snowbrasil.com%2Ffotos&hl=en&ct=clnk&cd=20&gl=us&ie=UTF-8&client=firefox-a
To: Ben Smith <address@hidden>
Sent: Thursday, November 13, 2008 2:34:56 AM
Subject: Re: [Bug-wget] Fwd: Trying to download HTML from Google's Cache. Pls help]
Thanks so much for responding. Do I need to write a script with these commands, or do I run them one at a time on the command line on my server?
Would you please just tell me what the syntax is so I only download the cache files?
Thanks so much
On Wed, Nov 12, 2008 at 9:30 PM, Ben Smith
<address@hidden> wrote:
From: Yan Grossman
<address@hidden>
To: address@hidden
Sent: Wednesday, November 12, 2008 2:03:58 PM
Subject: Fwd: [Fwd: Re: [Bug-wget] Fwd: Trying to download HTML from Google's Cache. Pls help]
---------- Forwarded message ----------
From: Yan Grossman <address@hidden>
Date: Wed, Nov 12, 2008 at 10:49 AM
Subject: Re: [Fwd: Re: [Bug-wget] Fwd: Trying to download HTML from Google's Cache. Pls help]
To: Micah Cowan <address@hidden>
Thanks so much. But what does this mean: "Then grep each of the results files to find the line with links to all the cached pages. You can pipe that output into sed"?
I am not familiar with "grep" and "sed". Could you please elaborate?
Thanks
On Wed, Nov 12, 2008 at 10:32 AM, Micah Cowan
<address@hidden> wrote:
-------- Original Message --------
Subject: Re: [Bug-wget] Fwd: Trying to download HTML from Google's
Cache. Pls help
Date: Wed, 12 Nov 2008 10:00:34 -0800 (PST)
From: Ben Smith <address@hidden>
To: Micah Cowan <address@hidden>
References: <address@hidden>
<address@hidden>
Adding -UFirefox allows the download. So you should first wget
-UFirefox all the listed results pages from Google:
http://www.google.com/search?q=site%3Awww.snowbrasil.com%2Ffotos&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a
http://www.google.com/search?hl=en&client=firefox-a&rls=org.mozilla:en-US:official&hs=zle&q=site:www.snowbrasil.com/fotos&start=10&sa=N
http://www.google.com/search?hl=en&client=firefox-a&rls=org.mozilla:en-US:official&hs=o6J&q=site:www.snowbrasil.com/fotos&start=20&sa=N
etc., up to start=570 (since there are 577 results).
Then grep each of the results files to find the line with links to all the
cached pages. You can pipe that output into sed, which you can use
to remove everything but the links to the cached pages (replace the info
before, after, and between the cache links with a space). Then simply
pipe that to wget -UFirefox, and you should get all your files.
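For illustration, that pipeline could take roughly the following shape in the shell. The pattern used to pick out the cached-page links (q=cache:) and the names of the saved results files are assumptions here, so they may need adjusting against the real HTML of the results pages:

# Build a list of the results pages (start=0,10,...,570) and fetch them;
# the exact query parameters can be copied from the addresses above.
for n in $(seq 0 10 570); do
  echo "http://www.google.com/search?q=site:www.snowbrasil.com/fotos&start=$n&client=firefox-a"
done > results-pages.txt
wget -UFirefox --wait 1 -i results-pages.txt

# Pull the cached-page links out of the saved results HTML, undo the &amp;
# escaping, and feed the links back to wget on standard input (-i -).
grep -oh 'http://[^"]*q=cache:[^"]*' search* | sed 's/&amp;/\&/g' | wget -UFirefox --wait 1 -i -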
----- Original Message ----
> From: Micah Cowan <address@hidden>
> To: Ben Smith <address@hidden>
> Cc: address@hidden
> Sent: Tuesday, November 11, 2008 3:27:05 PM
> Subject: Re: [Bug-wget] Fwd: Trying to download HTML from Google's Cache. Pls help
>
> Ben Smith wrote:
>
>> Subject: Re: [Bug-wget] Re: Bug-wget Digest, Vol 1, Issue 10
>
>>> When replying, please edit your Subject line so it is more specific
>>> than "Re: Contents of Bug-wget digest..."
>
> It's helpful if you adhere to this guideline; otherwise it's hard to
> follow threads. (I've fixed the subject in my reply.)
>
>> It would be theoretically possible by using grep and sed to strip out
>> the links to the cached files and piping that to wget. However,
>> Google appears to block access to results pages and cached pages via
>> wget. I tried to download several using wget and got a 403 Forbidden
>> response.
>
>
http://wget.addictivecode.org/FrequentlyAskedQuestions#not-downloading
> should be helpful for such problems (using -U is the most applicable
> suggestion, but you may also run into the others). Please also consider
> adding --limit-rate or --wait.
>
--
Micah J. Cowan
Programmer, musician, typesetting enthusiast, gamer.
GNU Maintainer: wget, screen, teseq
http://micah.cowan.name/
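For reference, the options Micah points to in that quoted message (-U, --wait, --limit-rate) can be combined in a single invocation. Reusing the filelist.txt idea from the newer message above, a hedged example would be (the 50k rate is just an illustrative value):

wget -UFirefox --wait 1 --limit-rate=50k -i filelist.txt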