
Re: [Nel] Game Object Models


From: Vincent Caron
Subject: Re: [Nel] Game Object Models
Date: 26 Jan 2002 04:55:38 +0100

On Tue, 2002-01-22 at 00:47, John Hayes wrote:
> The problem is TCP is substantially slower for retransmission [...]

At least you didn't say the famous 'TCP is slow' statement :)
TCP is designed with precise purposes in mind, which are (in order):
sequential delivery of datagrams and reliability, the two together
making up the 'stream' service TCP provides. The notion of 'speed' is
tied to the notion of reliability: it depends on your viewing window
on the stream. If you expect a large amount of data to be delivered on
a given schedule, you will never notice that some packets in the
window were late because they had to be retransmitted: the whole was
globally on time. When you shrink your window down to the datagram
size, a dropped packet means an immediate bandwidth drop, since you
don't want to consider the following packets until the lost one has
been retransmitted.
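To make the window argument concrete, here is a toy Python sketch (my own illustration; the 20 ms send interval and 200 ms retransmission timeout are invented numbers, not from any real stack) comparing how one retransmitted packet stalls in-order delivery versus raw datagrams:

```python
# Toy model: packets are sent every 20 ms; a lost packet's copy
# arrives only after a 200 ms retransmission timeout.
SEND_INTERVAL_MS = 20
RETRANSMIT_MS = 200

def delivery_times(n_packets, lost, in_order):
    """Return the time (ms) each packet becomes usable by the app."""
    times = []
    for i in range(n_packets):
        arrival = i * SEND_INTERVAL_MS
        if i == lost:
            arrival += RETRANSMIT_MS  # retransmitted copy arrives late
        times.append(arrival)
    if in_order:
        # Stream (TCP-like) service: a packet is usable only once all
        # earlier packets have been delivered -- head-of-line blocking.
        usable, latest = [], 0
        for t in times:
            latest = max(latest, t)
            usable.append(latest)
        return usable
    return times  # datagram service: each packet usable on arrival

stream = delivery_times(10, lost=3, in_order=True)
dgram = delivery_times(10, lost=3, in_order=False)
# With in-order delivery, every packet behind the hole is held back
# until the retransmission lands; with datagrams only the lost packet
# itself is late.
```

Running it shows the point of the paragraph above: in the stream case packets 3 through 9 all become usable at 260 ms, while in the datagram case only packet 3 is late.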

I believe the 'game model' actually sits somewhere between the stream
and datagram notions. The fact that TCP has been designed with 'long'
(for our problem) timeouts comes from the congestion problem: all
parameters of TCP have been finely tuned (in every implementation of
the TCP/IP stack) for a model that provides good cooperation between
network nodes. TCP will bring sequentiality, reliability and
internet-wide cooperation, but certainly not delay guarantees; we all
know that.

> Ok, so no TCP, UDP is great but you don't want to reimplement TCP all
> over again - reliability is good but you want the minimum number of
> packet dependencies (loose ordering so the game doesn't stall) and the
> minimum amount of retransmission (discard what you no longer care about)
> but conflicting goal of a minimum amount of latency for retransmission
> when it's required.

The Internet infrastructure is packet driven: every packet is supposed
to take an unpredictable path between the two endpoints, every packet
can suffer a variable delay, and finally packets can be dropped. We
have to live with that, this is real life :)

Sequentiality, reliability, delay and bandwidth are all linked. In most
cases you can't improve one without degrading the others. From there
you can progress in two ways:

- find the set of features above IP (the lowest level you can access
with Internet interoperability in mind) that matches your needs most
closely. Expect to end up with your own layer-5 protocol over UDP,
since it is rather difficult to 'remove' features from TCP :)

- find the network properties you can live with. This is the most
disregarded path. I keep wondering why people blindly try to implement
more or less awkward reliability hacks over UDP before even asking
whether they need reliability at all. Yes, you can often live with
dropped packets: this is what interpolation and extrapolation are
about. Detecting them is easy if you have numbered your packets for
sequentiality. You might even find yourself dropping a late packet
because you already have fresher info (depending on the info carried
by the packet, of course).
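The "drop a late packet because you already have fresher info" idea can be sketched in a few lines of Python (a minimal illustration of my own; the class and names are invented, and sequence-number wrap-around is ignored):

```python
class FreshnessFilter:
    """Keep only the newest state update per object. Anything older
    than what we've already applied (a reordered or retransmitted
    packet) is simply discarded -- dropped packets never stall us."""
    def __init__(self):
        self.latest_seq = {}  # object id -> highest sequence seen

    def accept(self, obj_id, seq):
        last = self.latest_seq.get(obj_id, -1)
        if seq <= last:
            return False  # late or duplicate: we have fresher info
        self.latest_seq[obj_id] = seq
        return True

f = FreshnessFilter()
assert f.accept("player1", 1)
assert f.accept("player1", 3)      # seq 2 was lost: fine, 3 is fresher
assert not f.accept("player1", 2)  # seq 2 finally arrives: stale, drop
```

Note that gaps in the sequence (here, seq 2) also tell you exactly which packets were lost, in case some of them do deserve an application-level retransmission request.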


> So - what kind of rules can we make up for this? Network messages from
> the server to the client have two functions - updating state and
> "one-shot" actions (which is really a special case of updating state).

Your description is interesting. I won't comment on it directly;
instead I would stress the importance of a dimension that is so
commonly neglected: time.

Considering the kind of real-time info you're mentioning (player
position, shots, etc.), time is vital information. The problem is that
you don't have an absolute clock shared by every network node. With a
synchronization protocol such as NTP you can achieve 1 ms coherence on
a LAN, maybe 500 ms on a WAN/MAN if you're lucky, and on top of that
you'll need something like an hour to converge toward this figure! :)

Until computers can get a picosecond timestamp from a GPS satellite,
we'll have a tough ride. [Well, it is perfectly possible to connect a
GPS device to your computer, but you can't ask that of all gamers...
oh, and why not? :)]

So we don't have synchronized nodes, and what's worse, we don't know
the path delay between nodes, but we do know this value is not
constant! Ouch. Now you replicate a client object to a bunch of target
nodes. What do they get _when_ they receive their copy? The state of
the object at a slightly earlier time. They don't know how much
earlier. The only measure you can get between two nodes is the
round-trip time (RTT), i.e. the A->B->A time (what 'ping' measures).
Since most packet delivery is asymmetrical, RTT/2 gives you only a
poor estimate of the A->B or B->A delay, and this measure is _not_
constant.
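For what it's worth, the standard (NTP-style) way to estimate both RTT and the clock offset between two nodes from four timestamps looks like this in Python; the symmetry assumption baked into the offset formula is exactly the RTT/2 weakness mentioned above:

```python
def estimate_offset_and_rtt(t0, t1, t2, t3):
    """NTP-style estimate: client sends at t0 (client clock),
    server receives at t1 and replies at t2 (server clock),
    client receives the reply at t3 (client clock).
    Assumes symmetric one-way delays -- which, as noted above,
    real Internet paths do not guarantee."""
    rtt = (t3 - t0) - (t2 - t1)          # total wire time, both ways
    offset = ((t1 - t0) + (t2 - t3)) / 2  # server clock minus client clock
    return offset, rtt

# Invented example: server clock 100 ms ahead, 40 ms delay each way.
offset, rtt = estimate_offset_and_rtt(t0=0, t1=140, t2=150, t3=90)
# offset -> 100.0, rtt -> 80
```

With asymmetric paths the RTT value stays exact, but the offset picks up an error of half the delay asymmetry, and there is no way to measure that asymmetry from the endpoints alone.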

Once you realize that every node (clients and servers) has its own,
time-shifted view of the objects, my guess is that you end up with two
models of distribution:

- centralized : one (server/master) node is considered the only one to
have a fully coherent view of all objects at any time. All other nodes
(clients, or more generally 'slaves') have a slightly and globally
delayed view of the reference view. If you're lucky, the RTT between
master and slaves is homogeneous and you'll end up with tightly
synchronized slaves. You'll only get into trouble when the master must
integrate client responses, since they 1) are a reaction to info which
is 1 RTT late and 2) carry client info which is ~RTT/2 late.

  This approach has a well-known problem: if a client is notably
lagging, it will contribute only sparse info to the server, and thus
to all other clients. And that is often an advantage for the lagging
player, since with poor interpolation he will only exist at discrete
time-space points of your scene (ever seen those Quake players jumping
from place to place between frames? :)). But it is easy to implement.
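That "jumping between frames" symptom is what an interpolation buffer addresses: render remote objects slightly in the past, so there are (almost) always two known snapshots to blend between. A minimal 1-D Python sketch, with an invented 100 ms buffer and scalar positions for simplicity:

```python
INTERP_DELAY = 100  # ms: render remote entities this far in the past

def interpolate(snapshots, now):
    """snapshots: list of (timestamp_ms, position), sorted by time.
    Render the position at (now - INTERP_DELAY) by linearly blending
    the two surrounding snapshots; fall back to the newest snapshot
    when the buffer is starved (a lagging sender)."""
    target = now - INTERP_DELAY
    prev = None
    for t, pos in snapshots:
        if t >= target and prev is not None:
            t0, p0 = prev
            alpha = (target - t0) / (t - t0)  # 0..1 between snapshots
            return p0 + alpha * (pos - p0)
        prev = (t, pos)
    # Starved: no snapshot newer than the target time. This is where
    # extrapolation (dead reckoning) would take over in a real game.
    return snapshots[-1][1]

snaps = [(0, 0.0), (50, 5.0), (100, 10.0), (150, 15.0)]
assert interpolate(snaps, now=225) == 12.5  # target=125ms: midway
```

The price is that everyone sees remote players INTERP_DELAY milliseconds in the past, which is usually far less disturbing than the discrete jumps.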

- fully distributed : the 'holy grail' :). In theory, each node is
connected to all the other participating nodes. Of course, it's a
bandwidth killer and it doesn't scale. But there are some points that
make it somewhat conceivable :

 * a node is often affected by only a few other nodes : the players in
sight of your gun, etc. If every node can maintain a 'neighborhood
dependency', you'll end up with a finite number of links per node
(this depends on your game design, and might prove wrong in a virtual
stadium :))

 * you might find optimal time paths between nodes. You actually have
the internet model : choose any path to go from A to B ! Clients are not
dependent on a server location/reachability.

- in-between: such as the FastTrack P2P (peer-to-peer) model (Morpheus,
Kazaa, etc.). A server holds a directory of nodes with relevant
per-node info. You connect to another node when you decide that it has
relevant information for you (hey, I want this DivX ;)). Not a
real-time example, but a distribution model. Servers themselves can
belong to a P2P subnet, easily solving the directory scaling problem.
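The 'neighborhood dependency' idea from the distributed model above can be sketched as a simple distance-based area-of-interest filter (purely illustrative, with an invented radius; real engines use spatial grids or visibility data rather than a linear scan):

```python
import math

AOI_RADIUS = 30.0  # game units: nodes farther away don't affect us

def neighbors(me, others, radius=AOI_RADIUS):
    """Return the ids of nodes inside our area of interest; only
    these links need to be maintained and replicated to."""
    mx, my = me
    result = []
    for node_id, (x, y) in others.items():
        if math.hypot(x - mx, y - my) <= radius:
            result.append(node_id)
    return result

positions = {"a": (10, 0), "b": (100, 100), "c": (0, 25)}
assert sorted(neighbors((0, 0), positions)) == ["a", "c"]
```

As noted above, the scheme only helps if the neighbor count stays bounded; in a virtual stadium where everyone sees everyone, it degenerates back to all-to-all links.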

Of course the P2P model is in its infancy, and what's more important,
the infrastructure is not ready. Multicast would be a blessing, for
instance, but it is only integrated in IPv6, and it will be a long
time before global routers do multicast. You'll also eagerly wait for
scheduling options (again in IPv6; IPv4 has precedence/'urgent' bits
which are ignored by most routers): they would let you request
VoIP-like (real-time) behaviour, which would of course be very
desirable for a MMORPG.

--

Well, just my 2 cents. I had something like this in mind when the TCP
vs UDP debate occurred. My main concerns are (as a conclusion):

- can't you cope with the inherently non-sequential and unreliable
Internet infrastructure rather than applying a TCP-like workaround?

- think 4D, _time_ counts! I wouldn't talk about a 'replication
problem', rather an 'information transportation problem'. This is
where the strict object concept fails: an object must be considered in
its environment. An object stored at A at time t0, and arriving at B
at t0+t, doesn't carry the same information. Pretty disturbing to
locate objects in time and space, eh? Time slips, nothing to do about
that... :))

- data/computing distribution is an open problem today. However, there
are solutions for many specific applications at very different scales
(Mosix clusters, NUMA, FastTrack, [the Internet at the packet
level]!), so why not a good model for MMORPGs?

So long. If I was clear somewhere, you're lucky ;)



