gzz-dev

Re: [Gzz] 25th, 26th, 27th & 28th (hh)


From: B. Fallenstein
Subject: Re: [Gzz] 25th, 26th, 27th & 28th (hh)
Date: Fri, 29 Nov 2002 17:04:12 +0200

Hi,

address@hidden wrote:
> Quoting Tuomas Lukka <address@hidden>:
> > > > Does this work also with the "local store" model?
> > >
> > > What do you mean by "local store" (replication or something else) ?
> >
> > I mean: for each node, how much capacity should it allocate to the P2P
> > network? I.e. if everyone has 50MB of P2P data that they *WANT* to store
> > and allocates 2MB for data that the algorithm wants to store, what
> > happens? Does only the 2MB get really used by others?
> 
> As far as I know, there is no limitation like this in DHTs (or in other
> algorithms (?)).

A DHT stores mappings from keys to values. A node is allocated part of
the key space; it must store all mappings whose keys fall into that key
space. This is, indeed, limited by the local store. Let's take
retrieving Storm blocks as a simple example; here, in the DHT, we store
locations for each block, i.e., we have (block-id, peer-address) entries
in the database. Let's assume that the block-ids are Storm IDs
represented as strings (ca. 40 bytes/id) and peer-addresses are
string representations of IP addresses and ports (ca. 20
bytes/address). Rounding up to 64 bytes/item in the DHT, in 2MB, we can
store 2**15, i.e. roughly 32'000 items. If we assume 16 replicas
for each item, a peer can insert about 2'000 entries into the DHT, i.e.
it can publish 2'000 blocks.
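This back-of-the-envelope calculation can be sketched as a small script;
the byte sizes and replica count below are the rough assumptions from the
discussion, not measured values:

```python
# Capacity estimate for the (block-id, peer-address) DHT example.
# All constants are rough assumptions from the discussion above.

BYTES_PER_BLOCK_ID = 40       # Storm ID represented as a string
BYTES_PER_PEER_ADDR = 20      # string form of IP address + port
BYTES_PER_ITEM = 64           # 40 + 20, rounded up
LOCAL_STORE = 2 * 1024 ** 2   # 2MB allocated by the node to the DHT
REPLICAS = 16                 # copies of each entry across the network

# How many DHT entries fit into the node's local store.
items_per_node = LOCAL_STORE // BYTES_PER_ITEM

# With each inserted entry replicated 16 times, a peer can only
# publish this many blocks before exhausting its fair share.
publishable_blocks = items_per_node // REPLICAS

print(items_per_node, publishable_blocks)  # 32768 2048
```

So a 2MB local store holds 2**15 = 32768 entries, and dividing by the
16 replicas gives roughly 2'000 publishable blocks per peer.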

- Benja



