[Gzz-commits] gzz/Documentation/misc/hemppah-progradu mastert...
From: Hermanni Hyytiälä
Subject: [Gzz-commits] gzz/Documentation/misc/hemppah-progradu mastert...
Date: Wed, 20 Nov 2002 07:46:49 -0500
CVSROOT: /cvsroot/gzz
Module name: gzz
Changes by: Hermanni Hyytiälä <address@hidden> 02/11/20 07:46:49
Modified files:
Documentation/misc/hemppah-progradu: masterthesis.tex
Log message:
Chapter about current search methods. Please note: this is not my
own text; it is supporting text. I'll write my own text based on this and
add correct refs!!
CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/gzz/Documentation/misc/hemppah-progradu/masterthesis.tex.diff?tr1=1.1&tr2=1.2&r1=text&r2=text
Patches:
Index: gzz/Documentation/misc/hemppah-progradu/masterthesis.tex
diff -u gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.1
gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.2
--- gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.1 Thu Nov
7 08:03:58 2002
+++ gzz/Documentation/misc/hemppah-progradu/masterthesis.tex Wed Nov 20
07:46:49 2002
@@ -97,6 +97,532 @@
\section{Existing Peer-to-Peer systems}
+\subsection{Search Methods}
+
+Current discovery methods are not suitable for large decentralized networks.
+Centralized discovery methods that are acceptable for dedicated servers
+hosting relatively static content break down when applied to large peer-based
+networks, while current decentralized methods lack the efficiency,
+flexibility, and performance to be effective at that scale.
+
+Searching the Internet and other large networks is currently a very
+centralized process. All of the major search engines, such as Google, rely on
+very large databases and servers to process queries. These servers and
+storage systems are very expensive to build and maintain, and often have
+problems keeping the information they contain current and relevant.
+
+Search engines are also limited in the sites they can crawl to obtain the
+data stored in their databases. The typical peer-based network client is far
+beyond their grasp, which leaves the vast amount of data available within
+each peer invisible to this traditional method. Data stored in databases
+accessed via HTML forms and CGI queries is also outside the reach of
+traditional web crawlers.
+
+Peer-based networks such as Freenet and Gnutella rely on a different approach
+to searching. In some cases this is a shared index or external indexing
+system; in other cases it entails querying specific peers or groups of peers
+until the resource is located (or you grow tired of the search).
+
+All of these approaches lack the flexibility and performance needed for use
+in large peer-based networks.
+
+
+
+Resource discovery in peer-based networks is critical to the value of the
+network as a whole
+
+The main benefit provided by peer-based networks is that they allow access to
+all kinds of information and resources which were previously unavailable.
+These may be files and documents of interest, or computing power for complex
+computational tasks.
+
+An important feature of these decentralized peer networks is that their
+perceived value is directly related to the quantity and quality of the
+resources available within them. More resources can be added by increasing
+the number of peers in the network. Thus, the value of the network grows as
+its popularity increases, which further fuels its growth, and so on.
+
+There comes a point, however, at which more peers no longer increase the
+number of resources available to each peer, and may even cause the
+availability of resources to drop. If the network cannot locate resources
+among its large numbers of peers, or locating resources becomes exponentially
+more expensive as the network grows, it will be forever crippled at this
+threshold.
+
+The ability to locate resources efficiently and effectively regardless of
+network size is therefore critical to the value and utility of the network as
+a whole.
+
+
+
+Locating resources requires a diverse range of information to be widely
+effective.
+
+Effective discovery methods must rely on a large variety of information about
+the desired resources, typically in the form of metadata.
+
+Metadata varies widely between the kinds of resources described. It can be as
+simple as a filename and SHA-1 hash value, or as detailed as a full cast and
+credits roster for a motion picture. How this metadata is interpreted can
+also vary widely between types of resources. A search for a given amount of
+processor time for a complex grid computation may require checking system
+resources, such as scheduled jobs and system load, before a reply can be
+provided.
+
+Metadata can vastly improve the accuracy and efficiency of a search, which
+directly affects the utility and popularity of the network.
+
+Support for a wide variety of metadata and searching options is critical to
+the value and utility of any peer-based network.
+
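As a concrete illustration of the range such metadata can cover, the sketch below builds a minimal record around a SHA-1 content key. The function and field names are hypothetical, invented for this example rather than taken from any system described here.

```python
import hashlib

def make_metadata(content: bytes, **fields):
    """Minimal metadata record: a SHA-1 content key plus arbitrary
    descriptive fields (filename, cast roster, and so on)."""
    return {"sha1": hashlib.sha1(content).hexdigest(), **fields}

# A record can be as sparse or as rich as the resource demands.
simple = make_metadata(b"some file bytes", filename="film.avi")
rich = make_metadata(b"some file bytes", filename="film.avi",
                     cast=["A. Actor", "B. Actress"], director="C. Director")
```

Both records share the same content key, but carry very different descriptive payloads, which is exactly the diversity a discovery mechanism must cope with.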
+
+
+Any discovery mechanism for large peer-based networks must provide a minimum
+set of features
+
+To summarize, an effective discovery mechanism is critical to the value and
+utility of a peer-based network. To be effective, a discovery mechanism must
+support a minimum set of features, including:
+
+    Efficient operation in small or large networks
+    Efficient operation with small or large numbers of resources
+    Support for a wide variety of metadata and query processing
+    Accurate, relevant information for each query
+    Resistance to malicious attack or exploitation
+
+Existing Decentralized Discovery Methods
+
+A short description and assessment of existing decentralized discovery
+mechanisms is provided for comparison with the new approach presented in this
+document.
+
+
+
+All existing discovery methods fail to meet all the desired requirements for
+use in large networks.
+
+There are a number of existing decentralized discovery methods in use today,
+built on a variety of designs and architectures. All of these methods have
+strengths which make them attractive in certain circumstances; however, none
+of them meet all the criteria desired for use in large peer-based networks.
+
+The major types of discovery methods we will examine are:
+
+ Flooding broadcast of queries
+ Selective forwarding/routing of queries
+ Decentralized hash table networks
+ Centralized indexes and repositories
+ Distributed indexes and repositories
+ Relevance driven network crawlers
+
+
+
+
+
+Flooding broadcast systems do not scale well
+
+The original Gnutella implementation is a prime example of a flooding
+broadcast discovery mechanism. This type of method has the advantage of
+flexibility in query processing: each peer can determine how it will process
+the query and respond accordingly. Unfortunately, this type of method is
+efficient only for small networks.
+
+Due to the broadcast nature of each query, the traffic generated per query
+grows rapidly (roughly exponentially in the number of hops each query is
+forwarded) as the network grows. Rising popularity will cause the network to
+quickly reach a bandwidth saturation point. This fragments the network into
+smaller groups of peers, and consumes a large amount of bandwidth while in
+operation.
+
+Segmentation of the network reduces the number of peers visible and the
+quantity of resources available. Queries must be sent over and over again to
+compensate for their reduced reach in a highly segmented network. It may take
+a long time for a suitable number of peers to be queried, which further
+reduces the effectiveness of this approach.
+
+This type of discovery mechanism is very susceptible to malicious activity.
+Rogue peers can send out large numbers of bogus queries, which place a
+significant load on the network and disproportionately reduce its
+effectiveness.
+
+False replies to queries can be fabricated for spam or advertising purposes,
+which reduces the accuracy of query results.
+
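The broadcast behavior described above can be sketched as a TTL-limited flood. The `Peer` class, its fields, and the TTL values are illustrative stand-ins, not the actual Gnutella protocol.

```python
from dataclasses import dataclass, field

@dataclass
class Peer:
    id: int
    files: set
    neighbours: list = field(default_factory=list)

def flood_query(peer, query, ttl, seen=None):
    """Answer locally, then forward the query to every neighbour until
    the time-to-live runs out; `seen` suppresses duplicate visits."""
    seen = set() if seen is None else seen
    if ttl <= 0 or peer.id in seen:
        return []
    seen.add(peer.id)
    hits = [f for f in peer.files if query in f]
    for n in peer.neighbours:
        hits.extend(flood_query(n, query, ttl - 1, seen))
    return hits
```

Every extra hop multiplies the number of messages by the typical neighbour count, which is why traffic balloons as the TTL or the network grows.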
+
+
+Selective forwarding systems are susceptible to malicious activity
+
+Selective forwarding systems are much more scalable than flooding broadcast
+networks. Instead of sending a query to all peers, it is selectively
+forwarded to specific peers that are considered likely to be able to locate
+the resource. While this approach greatly reduces the bandwidth limitations
+on scalability, it still suffers from a number of shortcomings.
+
+First and foremost is susceptibility to malicious activity. Because a much
+smaller number of peers receives each query, it is vastly more important that
+each of these peers be reputable for this operation to be effective.
+
+A rogue peer can insert itself into the network at various points and
+misroute queries, or discard them altogether. Results can be falsified to
+degrade the accuracy and relevance of responses. Depending on the
+pervasiveness and operation of such peers, performance can be degraded
+significantly.
+
+Any system that relies on trust in an open, decentralized network will
+inevitably run into problems from misuse and malicious activity.
+
+Each peer must also maintain some amount of additional information used to
+route or direct the queries it receives. For small networks this overhead is
+negligible; in larger networks, however, it may grow to unsupportable levels.
+
+While an improvement over flooding broadcast techniques, this approach is
+still not suitable for a large peer-based network.
+
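A minimal sketch of the idea, under the assumption that each peer can score its neighbours' likelihood of holding the resource; the keyword-overlap heuristic below is invented for illustration, not any deployed routing scheme.

```python
from dataclasses import dataclass, field

@dataclass
class RoutingPeer:
    id: int
    files: set
    keywords: set                      # advertised interests, used for routing
    neighbours: list = field(default_factory=list)

def route_query(peer, query, ttl, fanout=1, seen=None):
    """Forward the query only to the `fanout` neighbours whose advertised
    keywords best match it, rather than to everyone."""
    seen = set() if seen is None else seen
    if ttl <= 0 or peer.id in seen:
        return []
    seen.add(peer.id)
    hits = [f for f in peer.files if query in f]
    ranked = sorted(peer.neighbours,
                    key=lambda n: len({query} & n.keywords), reverse=True)
    for n in ranked[:fanout]:
        hits.extend(route_query(n, query, ttl - 1, fanout, seen))
    return hits
```

Note that a single misbehaving peer on the chosen path can now discard or falsify everything downstream, which is the trust problem discussed above.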
+
+
+Decentralized hash table networks do not support robust search
+
+Decentralized hash table networks further optimize the ability to locate a
+given piece of information. Every document or file stored within the system
+is given a unique ID, typically a SHA-1 hash of its contents, which is used
+to identify and locate the resource. The network and peers are designed so
+that a given key can be located very quickly regardless of network size. This
+type of system nevertheless has severe drawbacks which preclude its use as a
+robust searching and discovery method.
+
+Since data is identified solely by ID, it is impossible to perform a fuzzy or
+keyword search within the network; everything must be retrieved or inserted
+using an ID.
+
+These systems are also susceptible to malicious activity by rogue peers. A
+rogue peer may misdirect queries, insert large amounts of frivolous data to
+clutter the keyspace, or flood the network with queries to degrade
+performance. In such hierarchical or shared index systems these attacks can
+inflict much more damage than the bandwidth and CPU resources required to
+initiate them, giving the attacks an amplifying effect.
+
+While more resilient than flooding broadcast networks, and efficient at
+locating known pieces of information, these networks are still not able to
+perform robust discovery in large peer-based networks.
+
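A toy content-addressed store makes the limitation concrete: everything goes in and comes out under an exact SHA-1 key, so there is no hook for fuzzy matching. The class and method names are illustrative, not drawn from any real DHT implementation.

```python
import hashlib

class TinyDHT:
    """Single-process stand-in for a distributed hash table: data is
    stored and fetched only by the SHA-1 hash of its contents."""
    def __init__(self):
        self._store = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha1(data).hexdigest()
        self._store[key] = data
        return key                      # the ID callers must remember

    def get(self, key: str):
        return self._store.get(key)    # exact key or nothing
```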
+
+
+Centralized indexes are expensive and legally troublesome
+
+Centralized indexes have provided the best performance for resource discovery
+to date. However, they still entail a number of significant drawbacks which
+preclude their use in large peer-based networks.
+
+The most serious issue is cost. The bandwidth and hardware required to
+support large networks of peers are prohibitively expensive. Scaling this
+kind of network requires substantial capital investment and may still reach
+limits that are unsupportable.
+
+Recent court rulings cast serious doubt on the legal liability involved in
+using centralized servers to index resources in a peer-based network. Recent
+legal precedents have been read as requiring any such system to monitor the
+usage and activity of the network closely enough to ensure that no copyright
+violations are occurring. Monitoring and enforcing this requirement is quite
+challenging, and may be too much of a risk.
+
+Centralized index systems are not suitable solutions for resource discovery
+in large peer-based networks.
+
+
+
+Distributed indexes are difficult to maintain and susceptible to malicious
+activity
+
+Distributed indexes eliminate the need for expensive centralized servers by
+sharing the indexing burden among peers in the network. Legal vulnerability
+is greatly decreased by removing central control of indexing operations. When
+designed correctly, these types of networks provide the best performance and
+scalability of any solution, even more so than most centralized solutions.
+
+The most difficult problem with these types of indexing systems is cache
+coherence of the indexed data. Peer networks are much more volatile, in terms
+of peers joining and leaving the network as well as the resources contained
+within the index. The overhead of keeping everything up to date and
+efficiently distributed is a major detriment to scalability.
+
+There have been a number of proposals and implementations of shared index
+systems which address this problem. Unfortunately, distributed indexes
+encounter problems in the following situations:
+
+    The number of peers supporting the index network is large
+    Many peers join and depart the network that maintains the index
+    The amount of data to be indexed is significant
+    The metadata for the indexed data is very diverse
+    Malicious peers exploit the trust implicit in a shared index
+
+All large peer-based networks exhibit these features, making a distributed
+index system incredibly complicated.
+
+Their susceptibility to malicious attack is also increased. Rogue peers may
+insert large amounts of frivolous data, which burdens the shared index and
+reduces the accuracy of searches within it. A much larger degree of trust is
+placed on each peer, because each peer must handle and search the indexed
+data correctly, and must also help maintain the shared index (in terms of
+bandwidth and physical storage) equally, or at least to the best of its
+ability given finite resources. This makes resilience in the face of rogue
+peers extremely difficult.
+
+Supporting a wide range of metadata can also be difficult. An XML schema may
+be provided to contain this data; however, tracking the metadata in addition
+to keys or names significantly increases the indexing overhead, further
+reducing the scalability of the network. Since each peer must search its
+section of the index at given times, each peer must also be able to
+understand the metadata as it relates to the query it is processing. This is
+a significant burden, as diverse peers may or may not understand the metadata
+and how to interpret it.
+
+Distributed indexing systems as they currently exist cannot provide robust
+discovery in large networks. I hope that will change at some point in the
+future, as this would be the best solution hands down.
+
+
+
+Relevance driven network crawlers lack support for proactive queries and
+diverse data
+
+Relevance driven network crawlers take a different approach to the resource
+discovery problem. Instead of performing a specific query at the peer's
+request, they use a database of information the peer has accumulated to
+determine which of the resources they encounter may be relevant or
+interesting to the peer.
+
+Over time, a large amount of information is accrued and analyzed to determine
+what common elements the peer has found relevant. The crawler then traverses
+the network, usually consisting of HTML documents, looking for new
+information which matches the profile distilled from previous peer activity.
+
+The problem with this system is that it lacks support for proactive queries
+for specific information, since it is directed by past information. Support
+for a wide variety of resources is also missing, since the relevance engine
+expects a certain kind of data on which it can operate, usually HTML or other
+text documents.
+
+Finally, this type of discovery can be too slow for most uses. The time
+required for the crawler to traverse a significant amount of content can be
+prohibitively long for users on modem or DSL connections.
+
+Relevance driven network crawlers are not suitable for discovery in large
+networks.
+
+
+Optimizations to Existing Discovery Methods
+
+Many of the aforementioned discovery methods have been tweaked and tuned in
+various ways to increase the efficiency and accuracy of their operation. A
+few of these enhancements are described below.
+
+
+
+Intelligence and hierarchy in flooding broadcast networks
+
+The Gnutella network has come a long way since its conception in April of
+2000. The first new feature is increased intelligence in the peers of the
+network. The second is the use of hierarchy to differentiate high-bandwidth,
+dedicated peers from slower, less powerful peer clients.
+
+The original Gnutella specification was very simple and intended for small
+groups of peers. This simple protocol lacked the forethought required for
+scaling to larger networks. Once the network gained popularity, it became
+obvious to all involved that additional features were required to avoid
+congestion in a larger, busy network.
+
+One popular modification was denying access to Gnutella resources for
+web-based Gnutella clients. These web interfaces allowed a large number of
+users to search the network without participating, and thus placed a large
+load on the network with no return value. Many clients will also no longer
+share files with peers who do not themselves share.
+
+Other expensive protocol operations, such as unnecessary broadcast replies,
+were quickly replaced with intelligent forwarding to the intended
+destinations.
+
+Connection profiles were implemented to favor higher-bandwidth connections
+over slower modem connections, so that slow users were pushed to the outer
+edges of the network and no longer presented a bottleneck to network
+communication.
+
+Expanding on this theme, the Clip2 Reflector was introduced to allow
+high-bandwidth broadband users to act as proxies for slower modem users.
+
+All in all, the Gnutella network and related systems have made vast progress.
+In many cases they may provide adequate performance despite their intrinsic
+weaknesses.
+
+
+
+Catalogs and meta indexes in distributed hash table networks
+
+The desire to allow flexible keyword and metadata searching in distributed
+hash table networks has resulted in various methods of cataloging the data
+contained within them.
+
+A new project called Espra stores catalog documents within Freenet itself;
+these describe the resources represented by their hash key identifiers.
+Additions and searches can be performed on these catalogs to locate resources
+efficiently and quickly within the network.
+
+Other networks use similar methods which keep the catalog or index on
+external web servers or in external documents.
+
+The main drawback of this approach is that it requires maintenance of these
+catalogs. Locating a given catalog or index in the first place may also be a
+problem.
+
+These methods have provided a much-needed ability to search for resources in
+distributed hash table networks; however, they still lack the robustness and
+flexibility desired in an optimal solution.
+
+
+
+Keyword search for distributed hash table networks
+
+Another use of distributed hash tables is keyword searching using an
+individual hash value for each keyword in a query. Each keyword produces a
+set of matches, which can then be combined for complex multi-word searches.
+
+This approach looks very promising, as it retains the attractive performance
+and scalability of distributed hash tables while providing the flexibility of
+keyword and metadata based searching. Some implementations of this should
+appear sometime in 2002; however, none are in a stable, usable state at this
+time.
+
+Implementations of searching over distributed hash tables need to solve two
+hard problems. The first is load distribution for hotspots: very popular hash
+keys. Some keywords are very popular, and these could drive an unsupportable
+amount of traffic to a single node (or small set of nodes) in the distributed
+hash table network. There must be some mechanism for many nodes to share the
+load of popular keywords.
+
+The second problem is protection of the insert mechanism for the keyword
+indexes. It is hard to ensure that all users returning hits for a given
+keyword are legitimate, and false or malicious results stored or appended at
+a given keyword could severely impact the performance of the search.
+
+Once these problems are solved or minimized, searching over distributed hash
+table networks could provide a very robust search mechanism for large peer
+networks.
+
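The scheme can be sketched as follows: each keyword hashes to its own key, whose stored value is a set of document IDs, and a multi-word query intersects those sets. The class below is an illustrative single-process model, not one of the 2002-era implementations mentioned above.

```python
import hashlib

def keyword_key(word: str) -> str:
    """Each keyword gets its own hash key in the table."""
    return hashlib.sha1(word.lower().encode()).hexdigest()

class KeywordIndex:
    def __init__(self):
        self.table = {}                # stands in for the distributed table

    def insert(self, doc_id: str, words):
        for w in words:
            self.table.setdefault(keyword_key(w), set()).add(doc_id)

    def search(self, words):
        """Intersect the per-keyword match sets for multi-word queries."""
        sets = [self.table.get(keyword_key(w), set()) for w in words]
        return set.intersection(*sets) if sets else set()
```

Note how a very popular keyword maps to a single key, and therefore a single responsible node: exactly the hotspot problem described above.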
+
+
+Hybrid networks using super peers and self organization
+
+A popular type of hybrid network has been implemented by FastTrack and used
+in the Morpheus and KaZaa media sharing applications. This approach was also
+implemented in the now-defunct Clip2 Reflector and in the JXTA Search
+implementation.
+
+This type of network replaces the dedicated central servers used for indexing
+content with a large number of super peers. These peers have above-average
+bandwidth and processing power, which allows them to take on this additional
+workload without greatly affecting their performance. Every peer in the
+network contacts one or more of these super peers to search for matches to a
+given query.
+
+Super peers are selected automatically based on some kind of bandwidth and
+memory/CPU metric. Often there is some collaboration between super peers to
+relay queries if no matches are found locally, and to provide super peer
+nodes to new clients.
+
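Election of super peers might look like the sketch below. The metric weights and field names are invented for illustration, since FastTrack's actual criteria are not public.

```python
def elect_super_peers(peers, count=2):
    """Promote the `count` candidates with the best combined
    bandwidth/CPU metric (the weights are purely illustrative)."""
    def metric(p):
        return 0.7 * p["bandwidth_kbps"] + 0.3 * p["cpu_score"]
    return [p["id"] for p in sorted(peers, key=metric, reverse=True)[:count]]
```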
+This architecture provides the best solution to date. By avoiding fully
+centralized servers, these networks have been somewhat more resilient legally
+(although KaZaa and FastTrack are currently involved in legal maneuvers).
+
+These types of networks appear to be the current sweet spot for search in
+peer networks. Napster was too centralized, and Gnutella not centralized
+enough; meeting in the middle with a hybrid super peer network gives you the
+best of both worlds.
+
+There are still a number of problems with this architecture. Despite being
+less of a legal target than a true centralized server, super peers are still
+'mini' centralized servers in function. Given the recent court rulings, these
+nodes would have to monitor and filter content to avoid possible copyright
+infringement. Requiring each node to hold a list of all filter information
+would be nearly impossible to implement, given the current size of the
+filters used by the RIAA alone. The now-defunct OpenNap server network was a
+distributed collection of smaller centralized servers, and they were
+threatened out of existence. It is likely that once the encryption used in
+FastTrack has been circumvented, the super peers will be a prime target for
+RIAA/MPAA nastygrams.
+
+Support for robust metadata is also difficult to provide with this type of
+architecture. Each super peer must support all of the metadata types used in
+matching queries for the resources it indexes. For a wide variety of
+metadata, this requires a large amount of overhead in synchronizing metadata
+support across all super peers, as well as adding the functionality for
+specific metadata types to each super peer.
+
+Super peers are also prime targets for malicious attack. Since each connected
+peer provides them with index information as well as queries, it takes only a
+small amount of effort for a peer to send a large volume of false index
+information and large numbers of bogus queries. Depending on the specific
+implementation, this may cause excessive memory usage, truncated indexes, and
+low performance.
+
+Finally, this type of network relies on the generosity of peers in the
+network to provide the super peers. In current implementations this is an
+optional feature, and it may or may not be feasible in a large network.
+
+
+
+
+
+An Adaptive Social Discovery Mechanism for Large Peer Based Networks
+
+We now describe the architecture of an adaptive social discovery mechanism
+that is designed to work efficiently, effectively, and scalably for large
+peer-based networks.
+
+
+
+Social discovery implies a direct, continued interaction between peers in the
+network
+
+One of the fundamental differences of this approach is that it requires a
+direct connection between each peer and the peers it communicates with. We
+will see that this impacts a large number of the requirements for a robust
+discovery mechanism.
+
+Each peer directly controls which peers it communicates with, how bandwidth
+is consumed, and how the network is used. This provides powerful abilities to
+resist abuse of the network, to allocate bandwidth according to the user's
+preferences, and, last but not least, to enable many optimizations of the
+discovery process which would not be available otherwise.
+
+Each connection is also much longer lived than a typical TCP connection.
+These connections can be re-established when a dialup user changes IP
+addresses or a NAT user changes ports; they persist as long as the peers
+agree to communicate.
+
+This longevity of connections allows peers to maintain a history of their
+interaction with each of their peers, which in turn is used for reputation
+management and optimization of discovery operations within the network.
+
+
+
+Simple, low overhead messaging forms the foundation of peer communication
+
+At the base of this discovery implementation is the use of UDP for simple,
+low-overhead messaging via small data packets. All communication between
+peers is performed through a single UDP socket. An application-level
+multiplexing protocol supports the large number of direct connections with
+very little overhead, similar to the way TCP and UDP connections are
+multiplexed over IP using port numbers.
+
+All discovery operations require a certain amount of communication between
+peers to locate a given resource. In large decentralized networks this often
+consumes the majority of the available bandwidth. By making the messaging
+protocol as compact and lightweight as possible, we reduce the overhead
+required for sending any given message.
+
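The multiplexing idea can be sketched as a small fixed header carrying a logical connection ID in front of every datagram payload. The field layout below is hypothetical, not the actual wire format of this system.

```python
import struct

# connection id (4 bytes), message type (1 byte), payload length (2 bytes)
HEADER = struct.Struct("!IBH")

def pack_message(conn_id: int, msg_type: int, payload: bytes) -> bytes:
    """Prefix a payload with the header that multiplexes many logical
    connections over one UDP socket, much as ports multiplex IP."""
    return HEADER.pack(conn_id, msg_type, len(payload)) + payload

def unpack_message(datagram: bytes):
    """Split a datagram back into its connection id, type, and payload."""
    conn_id, msg_type, length = HEADER.unpack_from(datagram)
    return conn_id, msg_type, datagram[HEADER.size:HEADER.size + length]
```

Seven bytes of framing per message keeps the per-query overhead small, which is the whole point of the compact protocol.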
+
+
+Connection persistence allows profile and performance tracking of peers
+
+The base protocol also uses much longer connection lifetimes between peers.
+Connections can be re-established if the application is restarted, if a modem
+line disconnects, or if the ports change on a NAT firewall. As long as the
+peers wish to remain connected, they may do so.
+
+The reason for this feature is to maintain a history for each peer. This
+history is used to build a profile of the peer, to determine how 'valuable'
+it is for discovery operations and how many resources it has used.
+
+Peers that are outright malicious can be identified because they provide no
+value yet use large amounts of bandwidth or other resources. Their
+connections are then terminated.
+
+Peers who consume but do not share resources will in turn be viewed as very
+low quality peers and their connections terminated as well. This prevents
+abuse of the network, or the tragedy of the commons effect, and encourages
+peers to provide resources and be good neighbors.
+
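A minimal profile of this kind only needs two running totals, value provided versus resources consumed. The class name, grace period, and threshold below are illustrative assumptions.

```python
class PeerProfile:
    """History of one long-lived connection: value the peer has
    provided versus the resources it has consumed."""
    GRACE_BYTES = 1_000        # don't judge brand-new peers

    def __init__(self):
        self.value = 0.0
        self.bytes_used = 0

    def record(self, value=0.0, bytes_used=0):
        self.value += value
        self.bytes_used += bytes_used

    def should_disconnect(self, bytes_per_value=10_000):
        """Drop peers that consume far more than they contribute."""
        if self.bytes_used < self.GRACE_BYTES:
            return False
        return self.bytes_used > self.value * bytes_per_value
```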
+
+
+Past query responses are used to optimize resource discovery
+
+The actual search for resources within the network is accomplished by sending
+a single compact query packet to each peer in the group to be queried. This
+proceeds in a linear fashion until a sufficient number of resources is
+located, or the user terminates the query.
+
+This would be a rather slow and inefficient operation if no further
+optimizations were made. To increase the efficiency of the discovery
+operation, the profile associated with each peer is used to determine the
+order in which each peer is sent a query packet.
+
+Peers that have responded with relevant, quality resources in the past will
+have a higher quality value in their profile than peers that have not.
+
+By querying the peers with the higher quality values first, the chances of
+finding a resource quickly are greatly increased. This in turn decreases the
+total amount of bandwidth and time required for a search.
+
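Sketching the ordering optimization: peers are asked in descending profile quality and the scan stops once enough hits arrive. `quality` and `ask()` are hypothetical stand-ins for the profile machinery.

```python
from dataclasses import dataclass

@dataclass
class RatedPeer:
    quality: float
    files: list

    def ask(self, query):              # stand-in for one query packet
        return [f for f in self.files if query in f]

def discover(peers, query, wanted=2):
    """Query best-rated peers first; stop when enough hits are found."""
    hits = []
    for peer in sorted(peers, key=lambda p: p.quality, reverse=True):
        hits.extend(peer.ask(query))
        if len(hits) >= wanted:
            break
    return hits
```

Because the best peers usually answer first, most searches never touch the low-quality tail, saving both time and bandwidth.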
+
+
+Social discovery and profiling encourage sharing and good behavior
+
+Most searching networks provide little incentive for peers to provide more
+resources; the 'free loader' problem comes up quite often in discussions of
+peer networking. There have been some attempts to eliminate free loading and
+bad behavior using agorics or reputation systems; however, these methods have
+proven very difficult to apply.
+
+In a social discovery network, each peer must contribute or risk losing the
+peers it is connected to. Likewise, if you want to be able to connect to high
+quality peers, you must strive to be a high quality peer yourself. This is
+all handled autonomously, given the adaptive nature of peer organization
+during queries and other operations.
+
+As peers continually refine their peer groups, the bad or low quality peers
+will be dropped and replaced with new peers that might have better
+characteristics. In this way, good behavior and large numbers of quality
+resources are rewarded and encouraged.
+
+
+
+Distinct groups of peers are supported for distinct types of discovery
+
+In many cases a user will search for various types of resources on the same
+network. While a peer may be very good for one type of query, it may be very
+poor for another. For this reason, groups of peers are supported so that
+peers can be queried when most appropriate.
+
+This prevents high quality peers from receiving poor ratings for queries they
+do not support, and increases the efficiency of the discovery operation by
+providing groups of peers tuned to the specific type of discovery operation.
+
+For example, one set of peers may be used to locate classical recordings,
+while another may be used to locate small animation files. Each peer may be
+useful for one type of query and not the other, and groups ensure that peers
+are treated appropriately based on their performance for specific types of
+queries.
+
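Keeping a separate rating table per query type might look like this; the group names and rating scheme are invented for the example.

```python
from collections import defaultdict

class PeerGroups:
    """Per-query-type quality ratings, so a peer's standing for one
    kind of discovery never colors its standing for another."""
    def __init__(self):
        self.groups = defaultdict(dict)   # query type -> {peer: rating}

    def rate(self, query_type, peer, delta):
        self.groups[query_type][peer] = (
            self.groups[query_type].get(peer, 0.0) + delta)

    def best(self, query_type, count=3):
        """Peers to query first for this type, highest rating first."""
        ranked = sorted(self.groups[query_type].items(),
                        key=lambda kv: kv[1], reverse=True)
        return [peer for peer, _ in ranked[:count]]
```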
+
+
+Extensions are supported for a wide range of metadata and functionality
+
+Another core feature of this approach is the use of modular extensions to the
+discovery operations and application functionality. A protocol extension ID
+is specified within each query packet. Any third party can define a set of
+metadata or protocol extensions and assign it a unique extension ID. Any
+client which supports that extension can then process the metadata
+appropriately, for much greater flexibility and accuracy during the discovery
+operation.
+
+Often additional processing is required for a given set of protocol or
+metadata extensions. This is supported using dynamic modules which contain
+the code required to process this information. These modules can be loaded
+and unloaded at runtime according to a user's needs.
+
+This modular, extensible system provides the flexibility to support a wide
+range of metadata and protocol extensions, further increasing the quality and
+value of the responses received.
+
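A registry keyed by extension ID is the natural shape for this dispatch; the handler signature below is an assumption made for illustration.

```python
handlers = {}   # extension ID -> metadata handler loaded at runtime

def register_extension(ext_id: int, handler):
    """Third parties register the code that understands their metadata."""
    handlers[ext_id] = handler

def process_query(ext_id: int, metadata: dict, query: str):
    """Dispatch on the query packet's extension ID; queries carrying an
    unknown extension are skipped rather than rejected."""
    handler = handlers.get(ext_id)
    return handler(metadata, query) if handler else None
```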
+
+
+Adaptive social discovery relates directly to the interaction of a user with
+his/her peers
+
+Taken as a whole, this process maps closely to the actual interaction that
+occurs between a user and the peers (s)he communicates with in the network.
+
+Groups of peers with similar interests will organize spontaneously, as they
+would in the physical world, and can remain in continued interaction with
+each other as long as they find the relationship valuable.
+
+Conversely, peers which do not contribute to the group, or which attack its
+peers outright, will find themselves ostracized until they cease their
+undesirable behavior.
+
+By taking advantage of this style of interaction, the quality, performance,
+and flexibility required for decentralized resource discovery in large
+peer-based networks can be achieved.
+
+
+
\subsection{Business}