
Re: [Help-gnunet] INDIRECTION_TABLE_SIZE and download speed


From: James Blackwell
Subject: Re: [Help-gnunet] INDIRECTION_TABLE_SIZE and download speed
Date: Fri, 6 Sep 2002 11:24:21 -0400

> Hey James! Small world.

Hey there. I figured you came to gnunet because of some posts I put on
freenet-support politely suggesting that people who feel freenet isn't
doing it for them yet should give something else, such as gnunet, a
try. :)

>> I can't help but wonder if by coincidence you are pulling from me. I did
>>
>> MAXNETUPBPSTOTAL = 15000
>
> Why bother setting such limits? Once the popular file is cached off your
> machine (shouldn't take long) the bandwidth usage will subside.

Temporarily that would be the case. Eventually GNUNet will start
actually pushing and caching as part of its trust thing. I'm willing to 
share 10% of a T1 for just GNUNet, not a full 100%. After all, I do have
other interests besides GNUNet, however you capitalize that second n. ;)

At the very least, by having set it low prematurely, we may have
figured out a logical flaw in p2p networks in general. :)
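
For anyone curious about where that number comes from: a T1 is roughly
1.5 Mbit/s, or about 190 kilobytes per second of payload, so 10% of one
is on the order of 19000 bytes/s, and 15000 is a bit under that.
Assuming MAXNETUPBPSTOTAL is read as bytes per second (that's how I've
been treating it; check the gnunetd docs if it matters to you), the
relevant fragment of my config looks more or less like this:

    # cap uploads at a bit under 10% of a T1
    # (T1 ~ 1.5 Mbit/s ~ 190 KB/s of payload)
    MAXNETUPBPSTOTAL = 15000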

 
>> I can't help but wonder if this isn't an achilles heel for all p2p
>> systems?
> 
> Probably. At least until GNUnet gets big enough to actually cache popular
> stuff.

That was rather my point. Anytime someone shows up on a network and puts
up a bunch of files that are all popular at once, they all get crammed
through the same pipe at the same time.


>> The problem is that until things get out from the initial serving place
>> and cached, there is a huge bottleneck.
> 
> The sooner you can get the files cached the sooner the bottleneck goes
> away. It seems like rather than limiting bandwidth it would be a better
> idea to limit the number of simultaneous downloads.

Ahhh. But when we use cp, rsync or any other file copying tool, do we
tell the hard drive to copy all of the files at once, or only one or two
at a time? The latter, of course, because otherwise we would be so busy
thrashing between all of the files we're trying to copy that we would
never get done (well, eventually we would, and then everything shows up
at about the same time).

Maybe an example that's closer: imagine 50 of us are talking to an NFS
server at once and we all want a different file. Which is better for the
server? To serve us one or two at a time serially, or to try and push
all 50 files at the same time concurrently?

Granted, once those files are "out in the wild" the problem ceases to
exist, because there would be twenty other guys there to assist in
getting a file out. But during the initial process...
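
To put some purely made-up numbers on the serial-versus-concurrent
point (nothing here is measured from gnunetd; the file size and count
are invented, and the uplink cap is just the one from my config):

    # Back-of-the-envelope: one capped uplink, several equally popular
    # files that so far exist only on this node.
    UPLINK_BPS = 15000            # bytes/second, the cap from my config
    FILE_SIZE = 5 * 1024 * 1024   # pretend each new file is 5 MB
    N_FILES = 10                  # ten new files requested at once

    # Fair-share concurrent serving: each transfer crawls along at
    # UPLINK_BPS / N_FILES, so no complete copy exists anywhere else
    # until the very end.
    concurrent_first_done = (FILE_SIZE * N_FILES) / float(UPLINK_BPS)

    # Serial ("one or two at a time"): the first file is out in the
    # wild after one file's worth of time, and other nodes can start
    # re-serving it while the rest wait their turn.
    serial_first_done = FILE_SIZE / float(UPLINK_BPS)

    print("concurrent: first copy done after %.1f hours"
          % (concurrent_first_done / 3600))
    print("serial:     first copy done after %.1f hours"
          % (serial_first_done / 3600))

The total time to push all ten copies off this node is the same either
way; the difference is how soon the rest of the network can start
helping.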


>> Basically, the idea is to throttle how many of these "new to gnunet"
>> files can go out at once so that we don't clog so badly that nobody gets
>> anything new.
> 
> Exactly.
> 
> I started downloading half a dozen files and they are all down to an
> average of around 100bps now. I still get a new block every now and then
> so the machine must still be online. I don't know why it has gotten so
> slow.

Tell me in private email which filenames you are looking for and I'll
tell you whether I happen to be that bad node you are talking about.



-- 
GnuPG fingerprint AAE4 8C76 58DA 5902 761D  247A 8A55 DA73 0635 7400
James Blackwell  --  Director http://www.linuxguru.net



