
Re: [Help-gnunet] INDIRECTION_TABLE_SIZE and download speed


From: Igor Wronsky
Subject: Re: [Help-gnunet] INDIRECTION_TABLE_SIZE and download speed
Date: Sat, 7 Sep 2002 02:21:04 +0300 (EEST)

On Fri, 6 Sep 2002, Tracy R Reed wrote:

> Hey James! Small world.
> Why bother setting such limits? Once the popular file is cached off your
> machine (shouldn't take long) the bandwidth useage will subside. 

Nope. ;) The node will route queries and replies for other nodes,
and what's more, the nodes will replicate queries this way and that,
causing more load. I don't know if the bandwidth usage will go down
or not (CG certainly knows more about this) when there are more
surplus nodes around not generating any traffic of their own, but
I've got a feeling that most of the traffic related to a node is
not related to locally indexed content. Besides, you're right
that even if it were, the demand would subside after the stuff
gets off the local node - unfortunately the load would probably
stay quite the same. ;)
 
> The sooner you can get the files cached the sooner the bottleneck goes
> away. It seems like rather than limiting bandwidth it would be a better
> idea to limit the number of simultaneous downloads.

Sending a block from a node to any other node using active migration
makes it more probable that the block won't have to be downloaded
from the local node anymore. And because the blocks are downloaded
in random order, it's not likely that people are asking for the
same blocks at the same time.

> I started downloading half a dozen files and they are all down to an
> average of around 100bps now. I still get a new block every now and then
> so the machine must still be online. I don't know why it has gotten so
> slow.

The reason might be that some blocks/queries got lost on the
way, and each time a block is requeried, a longer timeout
is used before issuing the query. If I remember right, it's
exponential. This means that when you start the download,
it sends lots of queries happily, but if/when a couple
of blocks fail, it will take a much longer time before
those blocks are asked for again. And if they fail again,
it's even longer again, etc.

Btw, making many demands at once might make it more probable
for them to disappear on the way. Besides, there's the economy
issue... A too-greedy node might not be the first to be served
under load, and 0.4.6c might even consider some more of that
load stuff than previous versions. ;)


I.





