swarm-support

Re: vision


From: Alex Lancaster
Subject: Re: vision
Date: Sun, 17 Nov 1996 01:33:16 +1100

At 12:27 14/11/96 -0700, you wrote:
>
>>     Sorry, I missed the beginning of the vision discussion.  (My email
>> reader's had a firm talking-to for deleting all my mail this morning....)
>> 
>>     I use a quad tree in Gecko on Swarm.  If someone wants to look, let me
>> know.  Manor and I have discussed generalizing this into a space object, but
>> neither of us has jumped to put out the effort.  If (someone--who started
>> this today?) is considering implementing quadtrees anyway, perhaps he'd
>> consider just filching the base code and building the general space object
>> instead.
>> 
>> Ciao,        
>>     Ginger
>
>Definitely!  I'm going to need a "vision" based target selection
>in another app I'm doing.  Unfortunately, Manor doesn't always
>let the rest of the hive know what he's talking about doing.

Glen,

Thanks for your comments. I have answered your questions below, to the best
of my current knowledge:

>I also had a couple of questions to Alex about his app:
>
>  1) Does each neuron cover an arc distance?  If so, it
>     seems that 7 would give more like a 210 deg arc for
>     the entire "retina."

Yes, it would be if we assumed a 30 deg arc per neuron - but I was thinking
of a 25 deg arc per neuron, which for 7 neurons gives a 175 deg "retina",
just under a half-circle (87.5 deg to either side of straight ahead). I
don't know how realistic this is in terms of how real vision works, but
it's a first approximation...
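
To make the geometry concrete, here is a minimal sketch (plain C, not
Swarm code - the names and constants are just my assumptions) of how an
object's bearing would map onto one of the 7 neurons:

#define N_NEURONS      7
#define ARC_PER_NEURON 25.0                          /* degrees per neuron */
#define FIELD_OF_VIEW  (N_NEURONS * ARC_PER_NEURON)  /* 175 degrees total  */

/* Map a bearing relative to the bug's heading (degrees, 0 = dead ahead,
   positive = clockwise) to a neuron index 0..6 (left to right), or -1
   if the object falls outside the retina's arc. */
int
neuronForBearing (double relativeBearing)
{
  double offset = relativeBearing + FIELD_OF_VIEW / 2.0;

  if (offset < 0.0 || offset >= FIELD_OF_VIEW)
    return -1;                            /* outside the 175 deg arc */
  return (int) (offset / ARC_PER_NEURON);
}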

>  2) Is the radius of the vision arc going to be pre-defined
>     for all bugs, or bug-specific?  And if it's bug-specific,
>     will that radius be a function of ambient light in the 
>     space or directed light?  And how reflective would the
>     bugs be?
 
At this stage the answer would be: pre-defined for each bug (we are
proposing a fairly simple model of vision - the hard work in our simulation
will be in the neural network construction, rather than in the "realism" of
the world per se). If we were to make it bug-specific, we would probably go
for "directed" light, which would mean that all "objects" in the world
would be 100% absorptive. If there were an easy way to calculate ambient
light I would prefer to go that way, but I imagine it would be a fairly
computationally intensive task.
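
For what it's worth, under those assumptions the visibility test collapses
to something very cheap. A hypothetical sketch in plain C (VISION_RADIUS
and HALF_FOV are made-up values, not from any existing code):

#include <math.h>

#define VISION_RADIUS 10.0    /* pre-defined for every bug, in world units */
#define HALF_FOV      87.5    /* half of the 175 deg retina arc            */

/* Is the object at (ox, oy) visible to a bug at (bx, by) facing
   `heading` degrees?  Objects are 100% absorptive, so there is no
   reflected light to trace: visibility is just range + field of view. */
int
isVisible (double bx, double by, double heading,
           double ox, double oy)
{
  double dx = ox - bx, dy = oy - by;
  double relative;

  if (dx * dx + dy * dy > VISION_RADIUS * VISION_RADIUS)
    return 0;                                /* beyond the vision radius */

  /* Absolute bearing of the object, then bearing relative to the bug's
     heading, normalized into (-180, 180]. */
  relative = atan2 (dy, dx) * 180.0 / M_PI - heading;
  while (relative > 180.0)   relative -= 360.0;
  while (relative <= -180.0) relative += 360.0;

  return relative >= -HALF_FOV && relative <= HALF_FOV;
}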

>  3) Will the bugs both move and look at every time step?  Or
>     is there another sequencing, like move, look, look, move.

We would like the decision-to-move dynamics to emerge at the level of each
bug's neural processing. In other words, the bug would "look", process
information about the "world", and then there would be a "move" phase that
may or may not result in actual bug movement. So, in answer to your
question: the bug would look then move in each time step. Again, this is a
simplification; a more complex simulation would allow different forms of
sequencing. Ideally, the precise sequencing of actions would be another
emergent property of the neural network, not something imposed on the
simulation.
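
In schedule terms I picture something like the following (a rough plain-C
sketch; look/process/move are hypothetical stand-ins, with the neural net
stubbed out):

#define N_NEURONS 7

typedef struct {
  double x, y, heading;        /* position and facing (degrees)  */
  double retina[N_NEURONS];    /* activation of each vision cell */
} Bug;

/* "Look": fill retina[] by scanning the world.  Stubbed here. */
static void
look (Bug *bug)
{
  (void) bug;                  /* real version queries the space */
}

/* "Process": run the neural net over the retina; return nonzero if
   the net decides to move.  Stubbed with a trivial placeholder rule. */
static int
process (const Bug *bug)
{
  return bug->retina[N_NEURONS / 2] > 0.5;
}

/* "Move": a no-op unless the net fired a move decision. */
static void
move (Bug *bug, int wantsToMove)
{
  if (wantsToMove)
    bug->x += 1.0;             /* placeholder kinematics */
}

/* One time step: look, then (maybe) move.  The schedule is fixed,
   but whether movement actually happens emerges from the processing. */
void
stepBug (Bug *bug)
{
  look (bug);
  move (bug, process (bug));
}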

Hope this gives you an idea of where I'm at. Let me know if you have any
ideas or successes with your own vision projects. Have there been many
Swarm projects involving vision out there that you are aware of?
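
One more thought, on Ginger's quadtree suggestion: I haven't seen the Gecko
code, but the kind of structure I'd expect a general space object to wrap
is a point quadtree with a radius query - which is exactly the lookup that
vision-based target selection needs. A toy sketch in plain C (all names
are mine, not Gecko's):

#include <stdlib.h>

typedef struct QuadNode {
  double x, y;                 /* point stored at this node */
  void *agent;                 /* whatever lives there      */
  struct QuadNode *child[4];   /* one subtree per quadrant  */
} QuadNode;

/* Quadrant of (x, y) relative to node n: bit 0 = east, bit 1 = south. */
static int
quadrant (const QuadNode *n, double x, double y)
{
  return (x >= n->x ? 1 : 0) | (y >= n->y ? 2 : 0);
}

/* Insert a point into the (possibly empty) tree; returns the root. */
QuadNode *
qtInsert (QuadNode *n, double x, double y, void *agent)
{
  if (n == NULL)
    {
      n = calloc (1, sizeof *n);
      n->x = x;
      n->y = y;
      n->agent = agent;
      return n;
    }
  {
    int q = quadrant (n, x, y);
    n->child[q] = qtInsert (n->child[q], x, y, agent);
  }
  return n;
}

/* Visit every agent within `radius` of (x, y), pruning subtrees the
   disc cannot reach -- the query a vision routine would make. */
void
qtQuery (const QuadNode *n, double x, double y, double radius,
         void (*visit) (void *agent))
{
  double dx, dy;
  int west, east, north, south;

  if (n == NULL)
    return;
  dx = n->x - x;
  dy = n->y - y;
  if (dx * dx + dy * dy <= radius * radius)
    visit (n->agent);

  west  = (x - radius <  n->x);   /* disc reaches the x <  n->x side */
  east  = (x + radius >= n->x);   /* disc reaches the x >= n->x side */
  north = (y - radius <  n->y);
  south = (y + radius >= n->y);
  if (west && north) qtQuery (n->child[0], x, y, radius, visit);
  if (east && north) qtQuery (n->child[1], x, y, radius, visit);
  if (west && south) qtQuery (n->child[2], x, y, radius, visit);
  if (east && south) qtQuery (n->child[3], x, y, radius, visit);
}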

Alex
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
 Alex Lancaster                                   _0  
   walkabout engineer and oftentime netsurfer    / |\/
                                                 \/ \
  e-mail: mailto:address@hidden                   /
  web:    http://www.real.net.au/~alex
  tel:    +61-(0)2-9487-3933
///////////////////////////////////////////////////////////////////


