
Re: [Help-gnunet] INDIRECTION_TABLE_SIZE and download speed


From: James Blackwell
Subject: Re: [Help-gnunet] INDIRECTION_TABLE_SIZE and download speed
Date: Fri, 6 Sep 2002 10:32:11 -0400

> 
> The other side must have been using a 9600 baud modem because it was
> pathetic. :) And I was trying to download around 8 files. Only 5 ever
> actually started receiving data after a couple of hours and those five
> only received a hundred k or so each. Odds are the files were coming from
> different places so they should not have all been incredibly slow.

I can't help but wonder if by coincidence you are pulling from me. I did
a large insertion of a few thousand of what we could probably best call
"popular data". At the same time, though, I have the following limits set:

MAXNETUPBPSTOTAL = 15000
MAXNETDOWNBPSTOTAL = 30000


I figure that if there were 5 people trying to get 3 things each (or 10
people getting 2 each, etc.) from my node all at the same time, the
effective download rate for any given file would be 1000 bytes/s (about
8000 bits/s), which is disturbingly close to "must have been using a
9600 baud modem".
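To spell out the arithmetic (the upload cap is the one quoted from my config above; the rest is just the hypothetical scenario):

```python
# Bandwidth split across simultaneous transfers on one capped node.
MAXNETUPBPSTOTAL = 15000      # total upload cap, bytes per second
transfers = 5 * 3             # 5 people fetching 3 files each

per_file_bytes = MAXNETUPBPSTOTAL / transfers
print(per_file_bytes)         # bytes/s available per transfer: 1000.0
print(per_file_bytes * 8)     # in bits/s: 8000.0 -- 9600-baud territory
```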

I can't help but wonder: isn't this an Achilles' heel for all p2p
systems?

If person A with a T1 (roughly 190 KB/s upstream) donates 5,000 unique
files to a p2p network, and there is immediate interest in 50 files of 5
megabytes each, then the p2p system is going to try to serve all 50 at
the same time. That works out to an average download rate of under 4 KB/s
per file, which means that for these requests to finish downloading (and
*then* be cached), we wait over 20 minutes for the 5 megabyte files to
complete -- all at the same time!


The problem is that until files propagate beyond the original serving
node and get cached elsewhere, there is a huge bottleneck.

Christian, do you think it would make sense to throttle the number of
unique-to-this-node uploads sent out from the nodes? Something along 
the lines of : 


MAXNEWTIME  = 600; // Seconds most people would be willing to wait to
                   // get an average file in full. A social engineering
                   // question.
MINNEWFILES = 1;   // Always allow at least this many unique file
                   // uploads to occur at once.

if (!FILE_INSERTED_HERE || FILE_UPLOADED_BEFORE) {
    // Do the normal GNUnet thing.
} else {
    size = getuniquesize(filename);
    time = size / MAXNETUPBPSTOTAL;  // Seconds to send it at the full cap.
    if (activeunique < MINNEWFILES || time < MAXNEWTIME) {
        allowfileupload;
    } else {
        denyfileupload;
    }
}


Basically, the idea is to throttle how many of these "new to gnunet"
files can go out at once so that we don't clog so badly that nobody gets
anything new.
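To make the sketch concrete, here it is as runnable Python. The constants match the pseudocode above; the function name and the idea of passing in the count of active unique uploads are just my own framing, not anything that exists in GNUnet:

```python
# Sketch of the proposed throttle for "new to GNUnet" uploads.
MAXNETUPBPSTOTAL = 15000   # bytes/s, total upload cap (from my config)
MAXNEWTIME = 600           # seconds a requester will wait for a full file
MINNEWFILES = 1            # always allow at least this many unique uploads

def allow_unique_upload(size, active_unique,
                        inserted_here=True, uploaded_before=False):
    """Return True if a unique-to-this-node upload should start now."""
    if not inserted_here or uploaded_before:
        return True  # not a first-time upload: do the normal GNUnet thing
    # Estimated seconds to push this file at the full upload cap.
    est_time = size / MAXNETUPBPSTOTAL
    return active_unique < MINNEWFILES or est_time < MAXNEWTIME

# A 5 MB unique file takes ~333 s at 15000 bytes/s, under the limit:
print(allow_unique_upload(5_000_000, active_unique=3))   # True
# A 50 MB unique file would take ~3333 s, so defer it for now:
print(allow_unique_upload(50_000_000, active_unique=3))  # False
# ...unless nothing unique is going out at all:
print(allow_unique_upload(50_000_000, active_unique=0))  # True
```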


> Practically none of my bandwidth was used at the time. I am using 0.4.6c.

As am I. :)



-- 
GnuPG fingerprint AAE4 8C76 58DA 5902 761D  247A 8A55 DA73 0635 7400
James Blackwell  --  Director http://www.linuxguru.net



