Re: vision


From: glen e. p. ropella
Subject: Re: vision
Date: Tue, 19 Nov 1996 11:54:18 -0700

Alex,

> Yes, it would be if we assumed 30 deg arc per neuron - but I was
> thinking of a 25 degree arc per neuron model. I don't know how
> realistic this would be in terms of how real vision actually works,
> but it's a first approximation...

I was just wondering why you chose 7 neurons to cover 180deg,
I guess.

> At this stage the answer would be: (given that we are proposing a
> fairly simple model of vision - the hard work in terms of our
> simulation will be in the neural network construction, rather than the
> "realism" of the world, per se) pre-defined for each bug. If we were
> to make it bug-specific we would probably go for "directed" light
> which would mean that all "objects" in the world would be 100%
> absorptive. If there are any easy ways to calculate ambient light, I
> would prefer to go that way - but I would imagine it would be a fairly
> computationally intensive task.

I agree that modelling real vision is impractical at the moment, at
least as a sub-model of something else.  However, if you make all the
bugs the same (or use a small number of bug types), then it would be
fairly trivial to write a position-independent algorithm that searches
the surrounding "ball" of each bug.  This algorithm could be tuned
fairly well, I think, and it would save the overhead that using a
quadtree would entail.  (I.e. every time a bug moves, the tree has to
be updated as well as the grid-space.)
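
Concretely, the sort of thing I have in mind looks roughly like the
sketch below -- plain C rather than Swarm code, and every name in it
(buildBallOffsets, ballSearch, the flat grid array, the toroidal wrap)
is made up for illustration.  The point is that the offset table is
computed once and shared by all bugs, so nothing has to be rebuilt
when a bug moves.

/* Position-independent "ball" search on a 2d grid.  The offset table
   is built once at setup time; every bug reuses it.  None of these
   names are Swarm API -- this is only a sketch. */
#include <stdlib.h>

#define WIDTH  80
#define HEIGHT 80
#define RADIUS 3

typedef struct { int dx, dy; } Offset;

static Offset *ballOffsets;    /* shared by all bugs */
static int     nOffsets;

/* Build the offset table once, at model setup time. */
void buildBallOffsets(void)
{
    int dx, dy, n = 0;

    ballOffsets = malloc((2*RADIUS+1) * (2*RADIUS+1) * sizeof(Offset));
    for (dy = -RADIUS; dy <= RADIUS; dy++)
        for (dx = -RADIUS; dx <= RADIUS; dx++)
            if (dx != 0 || dy != 0) {          /* skip the bug's own cell */
                ballOffsets[n].dx = dx;
                ballOffsets[n].dy = dy;
                n++;
            }
    nOffsets = n;
}

/* Scan the ball around (x, y); return the first occupied cell's agent
   id, or -1 if the ball is empty.  grid[] holds -1 for empty cells and
   an agent id otherwise; the space wraps around at the edges. */
int ballSearch(const int *grid, int x, int y)
{
    int i;

    for (i = 0; i < nOffsets; i++) {
        int cx = (x + ballOffsets[i].dx + WIDTH)  % WIDTH;
        int cy = (y + ballOffsets[i].dy + HEIGHT) % HEIGHT;

        if (grid[cy * WIDTH + cx] != -1)
            return grid[cy * WIDTH + cx];
    }
    return -1;
}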

Now, if your space were actually a graph space instead of a 2d
space, the quadtree would be the way to go.  But if all of your
operators and agent behaviour are based on a Moore or von Neumann
neighborhood in a 2d grid, it seems a bit much to implement the
quadtree just so that searching a region will be efficient.
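
(Just so we're talking about the same neighborhoods: by Moore and von
Neumann I mean the usual chessboard-distance and Manhattan-distance
tests, something like the plain-C helpers below.  These are only
illustrative, not Swarm code.)

#include <stdlib.h>

/* A cell at offset (dx, dy) is in the Moore neighborhood of radius r
   when max(|dx|, |dy|) <= r, and in the von Neumann neighborhood when
   |dx| + |dy| <= r. */
int inMoore(int dx, int dy, int r)
{
    int ax = abs(dx), ay = abs(dy);

    return (ax > ay ? ax : ay) <= r;
}

int inVonNeumann(int dx, int dy, int r)
{
    return abs(dx) + abs(dy) <= r;
}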

(I'd still like to see Ginger's quadtree implementation, though. :-)

> We would like to make the decision-to-move dynamics emerge at the
> level of the neural processing of each bug. In other words, the bug
> would "look", process information about the "world" and then there
> would be a "move" phase that may or may not result in actual bug
> movement. So in answer to your question the bug would look then move
> in each time step. Again, this is a simplification and a more complex
> simulation would allow the possibility of different forms of
> sequencing.  Ideally, the precise sequencing of action would be
> another emergent property of the neural network and not be imposed on
> the simulation.

Why not, then, use dynamic scheduling instead of "null op" step calls?
You could have your bug do a "look" and then, if it saw something, it
could schedule an action or not.  An example of this kind of thing is
in our Mousetrap demo.  The traps schedule "triggers" of other traps.
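
The Mousetrap source is the real reference for how that's done with
Swarm's scheduling machinery, but the bare idea, stripped down to
plain C with made-up names, is just a time-ordered queue that only
gets an entry when a "look" actually sees something:

/* Toy sketch of schedule-only-when-needed, not Swarm's Schedule API.
   The queue is kept sorted latest-first, so the earliest pending
   event is always at the end and can be popped cheaply. */
#define MAX_EVENTS 256

typedef struct { long time; int agentId; } Event;

static Event queue[MAX_EVENTS];
static int   nEvents = 0;

/* Insert a "move" event, keeping the queue sorted by time. */
void scheduleMove(long time, int agentId)
{
    int i;

    if (nEvents >= MAX_EVENTS)
        return;                              /* queue full; drop it */
    i = nEvents++;
    while (i > 0 && queue[i-1].time < time) {
        queue[i] = queue[i-1];
        i--;
    }
    queue[i].time = time;
    queue[i].agentId = agentId;
}

/* Pop the earliest pending event; return 0 if there is none. */
int nextEvent(Event *out)
{
    if (nEvents == 0)
        return 0;
    *out = queue[--nEvents];
    return 1;
}

/* Called from an agent's "look" phase: schedule a move only if the
   ball search actually found something. */
void lookAndMaybeSchedule(int agentId, int sawSomething, long now)
{
    if (sawSomething)
        scheduleMove(now + 1, agentId);      /* act on the next tick */
}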

> Hope this helps let you know where I'm at. Let me know if you have any
> ideas or successes with your own vision projects. Have there been many
> Swarm projects involving vision, out there that you are aware of?
> 
> Alex

I'm going to use the above method of searching a ball.  Each agent in
my sim will have available to it an efficient search algorithm for
some small set of radii around any given cell.  At first, there will
only be one algorithm that searches a ball of some fixed radius.  But,
later, I hope to be able to generate the ball search algorithm based
on the space (1d, 2d, 3d) and the radius.  Then when the modelSwarm
runs its buildObjects, it will create each object with whatever
algorithm corresponds to how far the agent can see.
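
A rough sketch of what "generate the ball search from the space and
the radius" might look like, again in plain C with a made-up function
name and layout; buildObjects would then hand each agent the offset
table that matches how far it can see:

#include <stdlib.h>

/* Build a table of neighbor offsets for a ball of the given radius in
   a space of dimension 1, 2, or 3.  Offsets are returned flat, `dim`
   ints per entry, with the agent's own cell excluded.  Illustrative
   only -- not Swarm API. */
int *makeBallOffsets(int dim, int radius, int *nOut)
{
    int side = 2 * radius + 1;
    long total = 1, i, rem;
    int d, n = 0, allZero;
    int coord[3];
    int *offsets;

    for (d = 0; d < dim; d++)
        total *= side;
    offsets = malloc(total * dim * sizeof(int));

    for (i = 0; i < total; i++) {
        rem = i;
        allZero = 1;
        for (d = 0; d < dim; d++) {          /* decode i as base-`side` digits */
            coord[d] = (int)(rem % side) - radius;
            rem /= side;
            if (coord[d] != 0)
                allZero = 0;
        }
        if (allZero)
            continue;                        /* skip the agent's own cell */
        for (d = 0; d < dim; d++)
            offsets[n * dim + d] = coord[d];
        n++;
    }
    *nOut = n;
    return offsets;
}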

Let me know how yours goes.

glen

