Re: [Bug-wget] retry if rate drops below some threshold


From: larytet
Subject: Re: [Bug-wget] retry if rate drops below some threshold
Date: Sat, 13 Feb 2010 09:47:54 -0800 (PST)

I did the patch against 1.12 (see the end of this e-mail). I added the option and have been running the patched version for about 48 hours already. The patch can come in handy if the target can be downloaded from different mirrors and the current server slows down for some reason.

Right now my patch is rather ugly - I drop the connection by returning from 
fd_read_body() with a negative return code. Still, somebody will probably find it 
useful until there is official support for the feature. The full source code 
can be downloaded via Git:
git clone git://git.assembla.com/wgetplus.git

If you think the patch can be improved and eventually go into an official 
release, then please let me know. I will try to fix it.

I am also considering adding support for sites like rapidshare - these sites 
introduce delays between chunks. Some template-based solution would be cool.
The existing open-source solutions for rapidshare, like Tucan, come with a GUI. 
My ultimate goal is to run all downloads from a low-power Linux router, like 
OpenWrt/Tomato, and store the files on a network drive.
I would appreciate any tips.
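As an illustration (not part of the patch), a wrapper script could combine the new option with a mirror list: when the patched wget gives up on a slow connection and exits with a nonzero status, the script moves on to the next mirror. The mirror URLs and the 75K/30K rates below are made-up examples; the option name --limit-rate-min is the one introduced by the patch.

```shell
#!/bin/sh
# Hypothetical mirror list; replace with real URLs for the same file.
MIRRORS="http://mirror1.example.com/file.iso http://mirror2.example.com/file.iso"

fetch_with_min_rate () {
    for url in $MIRRORS; do
        # -c resumes a partial download; --limit-rate-min (added by the
        # patch) makes wget abort when the rate stays below 30K/s.
        if wget -c --limit-rate=75k --limit-rate-min=30k "$url"; then
            echo "downloaded from $url"
            return 0
        fi
    done
    echo "all mirrors too slow" >&2
    return 1
}
```

Because -c resumes from the partial file, each retry continues where the previous mirror left off instead of starting over.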

Best regards.


--- a/src/connect.c
+++ b/src/connect.c
@@ -333,6 +333,18 @@ connect_to_ip (const ip_address *ip, int port, const char *print)
       /* When we add limit_rate support for writing, which is useful
          for POST, we should also set SO_SNDBUF here.  */
     }
+  if (opt.limit_rate && opt.limit_rate < 8192)
+    {
+      int bufsize = opt.limit_rate;
+      if (bufsize < 512)
+        bufsize = 512;          /* avoid pathologically small values */
+#ifdef SO_RCVBUF
+      setsockopt (sock, SOL_SOCKET, SO_RCVBUF,
+                  (void *)&bufsize, (socklen_t)sizeof (bufsize));
+#endif
+      /* When we add limit_rate support for writing, which is useful
+         for POST, we should also set SO_SNDBUF here.  */
+    }

   if (opt.bind_address)
     {
diff --git a/src/init.c b/src/init.c
index 5a05d03..c7ed4a4 100644
--- a/src/init.c
+++ b/src/init.c
@@ -184,6 +184,7 @@ static const struct {
   { "iri",              &opt.enable_iri,        cmd_boolean },
   { "keepsessioncookies", &opt.keep_session_cookies, cmd_boolean },
   { "limitrate",        &opt.limit_rate,        cmd_bytes },
+  { "limitratemin",     &opt.limit_rate_min,    cmd_bytes },
   { "loadcookies",      &opt.cookies_input,     cmd_file },
   { "localencoding",    &opt.locale,            cmd_string },
   { "logfile",          &opt.lfilename,         cmd_file },
diff --git a/src/main.c b/src/main.c
index 9836a01..503906e 100644
--- a/src/main.c
+++ b/src/main.c
@@ -484,6 +484,8 @@ Download:\n"),
     N_("\
        --limit-rate=RATE         limit download rate to RATE.\n"),
     N_("\
+       --limit-rate-min=RATE     limit minimum download rate to RATE.\n"),
+    N_("\
        --no-dns-cache            disable caching DNS lookups.\n"),
     N_("\
       --restrict-file-names=OS  restrict chars in file names to ones OS allows.\n"),
diff --git a/src/options.h b/src/options.h
index a895863..b97baaf 100644
--- a/src/options.h
+++ b/src/options.h
@@ -121,6 +121,8 @@ struct options

   wgint limit_rate;            /* Limit the download rate to this
                                   many bps. */
+  wgint limit_rate_min;        /* Limit the minimum download rate to
+                                  this many bps. */
   SUM_SIZE_INT quota;          /* Maximum file size to download and
                                   store. */

diff --git a/src/retr.c b/src/retr.c
index edc4829..30cdad4 100644
--- a/src/retr.c
+++ b/src/retr.c
@@ -86,10 +86,11 @@ limit_bandwidth_reset (void)
    is the timer that started at the beginning of download.  */

static void
-limit_bandwidth (wgint bytes, struct ptimer *timer)
+limit_bandwidth (wgint bytes, struct ptimer *timer, bool *drop_slow_connection, wgint total_read)
 {
   double delta_t = ptimer_read (timer) - limit_data.chunk_start;
   double expected;
+  double download_rate;

   limit_data.chunk_bytes += bytes;

@@ -130,6 +131,20 @@ limit_bandwidth (wgint bytes, struct ptimer *timer)
         limit_data.sleep_adjust = -0.5;
     }

+  *drop_slow_connection = false;
+
+  /* Calculate the rate.  If the rate is lower than the minimum allowed,
+     drop the connection and retry the file.  */
+  download_rate = (double) limit_data.chunk_bytes / delta_t;
+  /* The minimum rate is interesting only for large files.  Let the system
+     download some data before declaring a "slow connection".  */
+  if ((download_rate < opt.limit_rate_min) && (total_read > 1024*1024))
+    {
+      DEBUGP (("\ndownload rate %.2f is lower than minimum allowed rate %.2f\n",
+               download_rate, (double)opt.limit_rate_min));
+      *drop_slow_connection = true;
+    }
+
   limit_data.chunk_bytes = 0;
   limit_data.chunk_start = ptimer_read (timer);
}
@@ -223,6 +238,7 @@ fd_read_body (int fd, FILE *out, wgint toread, wgint startpos,
      values are used so that the gauge can update the display when
      data arrives slowly. */
   bool progress_interactive = false;
+  bool drop_slow_connection = false;

   bool exact = !!(flags & rb_read_exactly);
   wgint skip = 0;
@@ -250,7 +266,7 @@ fd_read_body (int fd, FILE *out, wgint toread, wgint startpos,
   /* A timer is needed for tracking progress, for throttling, and for
      tracking elapsed time.  If either of these are requested, start
      the timer.  */
-  if (progress || opt.limit_rate || elapsed)
+  if (progress || opt.limit_rate || opt.limit_rate_min || elapsed)
     {
       timer = ptimer_new ();
       last_successful_read_tm = 0;
@@ -301,7 +317,7 @@ fd_read_body (int fd, FILE *out, wgint toread, wgint startpos,
       else if (ret <= 0)
         break;                  /* EOF or read error */

-      if (progress || opt.limit_rate)
+      if (progress || opt.limit_rate || opt.limit_rate_min)
         {
           ptimer_measure (timer);
           if (ret > 0)
@@ -318,8 +334,14 @@ fd_read_body (int fd, FILE *out, wgint toread, wgint startpos,
             }
         }

-      if (opt.limit_rate)
-        limit_bandwidth (ret, timer);
+      if (opt.limit_rate || opt.limit_rate_min)
+        limit_bandwidth (ret, timer, &drop_slow_connection, sum_read);
+      if (drop_slow_connection)
+        {
+          ret = -1;
+          break;
+        }
+

       if (progress)
         progress_update (progress, ret, ptimer_read (timer));
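To make the semantics of the rate check in limit_bandwidth concrete, here is a small shell sketch of the same computation (the patch uses doubles; integer arithmetic is used here for simplicity). All the numeric values are made-up examples; the 1024*1024 threshold is the same 1 MiB grace period the patch applies before declaring a connection slow.

```shell
# Example values (bytes and milliseconds, chosen for illustration only)
chunk_bytes=15000        # bytes read in the current chunk
delta_ms=500             # elapsed time for the chunk, in milliseconds
limit_rate_min=40960     # --limit-rate-min=40k -> 40960 bytes/second
total_read=2097152       # bytes read so far; the check applies only past 1 MiB

rate=$(( chunk_bytes * 1000 / delta_ms ))    # bytes per second
if [ "$total_read" -gt $(( 1024 * 1024 )) ] && [ "$rate" -lt "$limit_rate_min" ]; then
    echo "drop connection: ${rate} B/s below minimum ${limit_rate_min} B/s"
fi
```

With these numbers the measured rate is 30000 B/s, which is below the 40960 B/s minimum, so the connection would be dropped and the download retried.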

--- On Fri, 2/12/10, Micah Cowan <address@hidden> wrote:

> From: Micah Cowan <address@hidden>
> Subject: Re: [Bug-wget] retry if rate drops below some threshold
> To: "larytet" <address@hidden>
> Cc: address@hidden
> Date: Friday, February 12, 2010, 7:11 PM
> larytet wrote:
> > It looks like my ISP started to traffic-shape the connection. It
> > works like this:
> >  - Using --limit-rate I specify a maximum download speed of 75K
> > (from the 95K available). This is the only application accessing
> > the Internet.
> >  - For approximately 10 minutes wget works just fine and pulls at
> > 75K/s. After that the rate drops to 30K/s. The drop is fast; it
> > looks like the lights went off. I did not check in a sniffer what
> > is going on, but I suspect I would see dropped packets and TCP
> > retransmissions.
> >  - If I restart wget for the same file, it returns to downloading
> > at 70K/s for 10 more minutes.
> >
> > This is not an issue with the server. I tried very fast servers,
> > including pulling Eclipse from the Amazon cloud, rapidshare, etc. -
> > servers which usually saturate my downstream.
> >
> > I am looking at two possible approaches to the problem:
> > - Is there a patch which allows forcing a retry if the rate drops
> > below some preset limit? Any tips on how such a patch could be
> > implemented (I probably could do the work)?
> > - Replace the ISP (so far I cannot make them fix the issue).
> 
> As far as I know, there's no current patch for that. Someone suggested
> addressing this in the past, but no one's working on it. I _may_ be
> misremembering, and the person who suggested fixing it may have supplied
> a patch. You might search the mail archives.
> 
> -- 
> Micah J. Cowan
> http://micah.cowan.name/
> 






