Centro di Scienza Cognitiva

Università degli Studi e Politecnico di Torino

Homing Behaviour:

an Autonomous Agent Emulates a Desert Ant

Gianluca Smeraldi

Technical Report 96-02

  

Abstract

The aim of this work has been to design an autonomous agent able to perform homing behaviour, that is, to return to a starting point from any current position. I have taken as an example the homing behaviour of the desert ant. Ethological research indicates that the basic information used by ants to return to the nest by the direct way is the azimuth of the sun. The biological agent refers only to an egocentric frame of reference, and no cognitive map is involved. Starting from this evidence I have formulated a mathematical model of homing behaviour. It is a geometrical model, called the vectorial model. I have realized a simulated autonomous agent whose controller approximates this model. In this paper I describe the vectorial model and the general architecture of the artificial agent that emulates the biological one. As a final consideration, I note that the simulation of the agent-environment interaction shows that the behaviour of the artificial agent preserves all the characteristics observed in the biological one: if I reproduce the experimental conditions of the ethological experiments on desert ants, the artificial agent shows an equivalent behaviour.

1. INTRODUCTION

This work has been developed in the perspective of autonomous agents (see Agre, 1995). In this line of research the agents are considered as adaptive systems, and the emphasis is on the interaction between the agent and its environment (Brooks, 1990; 1991; Maes, 1994; Colombetti, 1994; Colombetti & Dorigo, 1996). The focus is on those tasks which are central to the adaptiveness of the system. In this sense the dynamic system agent - environment - task is considered as a global one (Beer, 1995).

     Perception and action play a central role in performing adaptive behaviour. To build a system that can interact with an environment, it is necessary to connect it to the world via a set of sensors and actuators (Brooks, 1990). It is important that the sensors and actuators be low-level devices. In robotics, sensors are usually light or colour sensors, infrared emitters and receivers, or sonar scanners, while actuators are usually motor devices that control wheels.

     Some kind of controller is necessary that collects information through the sensors and sends information out to the actuators (Colombetti & Dorigo, 1996). The controller could be considered as the mind of the agent; the term mind is used here as a synonym for control system (Newell, 1990). The controller of the artificial autonomous agent could be considered as the set of skills that allow the agent to perform a particular kind of behaviour. The controller interfaces perception with action: it accepts data from perception and sends output to the effectors. The actions of the agent will affect perception at the subsequent instant, and the controller, in response, gives further output to the effectors. In this way the controller implements a feedback loop, and the controlled process is just the behaviour that the agent should perform.

     The design of the controller certainly has a central role in designing an autonomous agent. At present one can see three ways to develop a controller.

1.  The first, which could be called the emergent-behaviour approach (Floreano & Mondada, 1994), is based on genetic algorithm theory (Holland, 1975). This method is often used in robotics. The features of the controller are defined while the artificial autonomous agent is interacting with its environment. The selection mechanism is an artificial model of natural selection. An initial population of different genotypes, each codifying the control system of a robot, is created randomly (see Nolfi & Parisi, 1995). A fitness function is defined so as to assign a score to any desired task: the more the task corresponds to one that is useful for the survival of the robot, the higher the score assigned to that task. The robots are evaluated in the environment by the fitness function (see Colombetti, 1994). The robots that obtain higher fitness scores are allowed to reproduce by generating copies of their genotypes with the addition of random changes (“mutations”). The process is repeated for a certain number of generations until the desired performance is achieved (see Nolfi, Miglino & Parisi, 1994 and Nolfi, Floreano, Miglino & Mondada, 1994 for applications).

2.  Another way to develop the controller of an autonomous agent is to take a biological agent as an example. The developer chooses a behaviour of a biological agent that he wants to emulate. Then he tries to identify the set of strategies that the agent uses. These strategies are usually some essential skills underlying the behaviour of the agent. The results of ethological research on that kind of biological agent are usually used by the developer in order to identify such skills. The set of basic skills is called a computational model (see Gallistel, 1990 for a discussion). The computational model can be translated into a set of mathematical relations, so as to implement them in the controller of an artificial autonomous agent. The evaluation of the behaviour of the artificial autonomous agent and the subsequent comparison between the artificial and the biological agent can give us some interesting results.

3.  There is a third way to develop the controller of an artificial agent. The developer formulates some abstract hypotheses in order to isolate, in an abstract way, some skills that the controller has to perform. This approach could be defined as an engineering one.
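The evolutionary loop described in the first approach can be sketched in a few lines of Python. This is only a minimal illustration, not the method used in the works cited above: the genotype is a plain vector of real numbers, and the fitness function is a toy stand-in for evaluating a robot in its environment.

```python
import random

def evolve(fitness, genome_len=10, pop_size=20, n_generations=30,
           n_parents=5, mutation_sd=0.1):
    """Minimal evolutionary loop: evaluate, select, reproduce with mutation."""
    population = [[random.gauss(0.0, 1.0) for _ in range(genome_len)]
                  for _ in range(pop_size)]            # random initial genotypes
    for _ in range(n_generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:n_parents]                   # selection by fitness score
        # Parents reproduce: copies of their genotypes with random mutations.
        population = [[g + random.gauss(0.0, mutation_sd)
                       for g in random.choice(parents)]
                      for _ in range(pop_size)]
        population[:n_parents] = parents               # keep the best unchanged
    return max(population, key=fitness)

# Toy stand-in for evaluating a robot in its environment: fitness is
# highest when every "weight" of the controller is close to 1.
random.seed(0)
best = evolve(lambda g: -sum((x - 1.0) ** 2 for x in g))
```

After a few generations the surviving genotypes cluster near the optimum of the toy fitness function, which is all the sketch is meant to show.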

     The aim of this work has been to design an artificial system able to perform homing behaviour, that is, to return to a starting point from any current position. In particular, the agent should be able to head and move directly for the starting point by reference to a source of light. I have taken as an example the homing behaviour of the desert ant, Cataglyphis fortis. In developing the controller of this agent, therefore, I follow the second of the three ways that I have just explained. The idea of developing an artificial agent inspired by a biological one is useful for at least two reasons: on one hand, studying living beings, whose behaviour has been shaped by evolution, is very rich in information; on the other hand, assuming the point of view of a specific biological agent forces us to develop a more ecological behavioural model.

     The desert ant shows a typical behaviour: the foragers set out from their nest to search for food pursuing a tortuous path; when they find food they turn and head directly for home (Gallistel, 1990). Some experiments (Wehner & Menzel, 1990; Wehner & Srinivasan, 1981) show that the ants use the sun as a point of reference. In particular, the crucial information they use is the azimuth of the sun. I will start from this evidence to formulate the assumptions of a theory of homing behaviour in the desert ant. Subsequently I have realized a simulated autonomous agent which implements the mathematical functions of the dynamic system.

     According to the theory of autonomous agents, it is an “activity producer” which interfaces with the environment through perception and action. In particular, the perceptual system is the simulation of a visual system whose input device is a pair of retinas composed of luminance receptors. The output device is the simulation of a motor system. The input information is low-level, just like the output information, in accordance with the philosophy of autonomous agents. As a consequence of the choice to consider the interaction between the agent and its environment, I have simulated the environment too: an environment with the characteristics of the real one, in which the artificial agent can move. I have defined a fitness function in order to evaluate the adaptation of the agent.

     Simulation shows that the behaviour of the artificial agent is similar to that of the desert ant. At present the interest is not in studying how the artificial system can learn and modify itself to adapt to its environment, but in analysing the innate skills which allow a particular kind of biological agent to perform the basic tasks it needs to survive (e.g. see Brooks, 1990). So I decided to study the level of homing behaviour that could be considered the simplest one: that of the desert ant. In fact, it does not involve cognitive maps. The desert ant homes by dead reckoning (Gallistel, 1990), which generates the simplest of all spatial representations: the geometric relation between the position where the reckoning started and the current position of the animal on the earth’s surface. Homing behaviour can be performed by many animal species, and it involves more or less basic skills; homing is typical of ants, bees, pigeons, mammals, etc. There are different levels of complexity in performing homing behaviour, because in each case the skills involved differ from those involved in the others.


Figure 1. Homing path of a desert ant. S: fictive feeding station (releasing point). N*: fictive nest. O: estimated nest; point where the ant turns and begins piloting.


     From the methodological point of view, the first step will be a review of the ethological literature in order to define the basic skills that allow the ants to perform homing behaviour (Section 2); then I will formulate a set of assumptions on these skills and propose a mathematical model of homing behaviour in the desert ant (Section 3). The controller of an artificial agent will implement the dynamic model (Section 4). The artificial agent interacting with the environment will be simulated on a computer (Section 5). The simulation of the agent interacting with the environment will show that the agent is able to perform homing behaviour by referring to a fixed source of light and that, in the same experimental conditions, the artificial agent behaves just like the desert ant.

2.   THE BIOLOGICAL AGENT

This Section describes the homing behaviour of the desert ant Cataglyphis fortis as it has been observed by ethologists.

When the ants leave their nest searching for food, they pursue a tortuous path and proceed for more than 200 metres (Wehner & Menzel, 1990). When the ants find food, they move directly back toward the nest. When they are within a few metres of the entrance of the nest, they start searching for familiar landmarks (Gallistel, 1990). According to the ethological literature, two navigational systems seem to be involved in the homing of desert ants:

1.  path integration (Wehner & Menzel, 1990; McNaughton, Chen & Markus, 1991) or dead reckoning (Gallistel, 1990): during the searching path, the ant continuously monitors the angle steered and the distance travelled to obtain the vector pointing from its current position toward home. Dead reckoning generates the geometric relation between the position where the reckoning commenced and the current position of the ant (Gallistel, 1990).

2.  goal localization using some kind of landmark around the goal. Once they are near their nest, the ants start searching for familiar landmarks or snapshots, using a piloting mechanism (Gallistel, 1990). As the ants continuously compute their position relative to the starting point, the path integration mechanism is subject to cumulative errors. Piloting behaviour starts when the ants are near the nest, so as to minimize the effects of the error accumulated during the searching path.

Other ethological data I have taken into consideration are the following.

[1]   Wehner and Flatt (1972) have shown that the ants do not have a cognitive map of the territory surrounding their nest. The authors trapped the ants as they emerged from their nest and then released them at a randomly chosen location, 2-5 metres away. The ants showed no evidence of knowing where to head for: they searched for the nest in all directions. Only 57 per cent reached the nest in less than 5 minutes, and the mean time of 2.18 minutes needed by these 57 per cent exceeded by a factor of ten the time required by a homeward-bound ant to cover the same distance. This shows that the ants do not know where they are unless they themselves get there. It shows also that the nest gives off no beacon that the ants can detect at any distance (see Gallistel, 1990).

[2]   Wehner and Srinivasan (1981) have demonstrated that no cognitive map is implicated in homing behaviour. The ants were captured as they departed from the feeding station toward home and released at a location (the fictive feeding station S in figure 1) about 600 metres distant from the nest. When released, the ants pursued the direct route toward the place where the nest should have been. The linear march of the ants terminated with a sharp turn. At this point the piloting behaviour began, and the ants started searching for familiar landmarks surrounding the nest. The point where the ants turned and started searching for the nest can be considered as their estimate of the nest position (figure 1 - filled circle labelled O). The path from the fictive feeding station to the fictive nest was parallel to that from the real feeding station (where the ants were captured) to the real nest, from where the ants had started their journey.

[3]   In the same experiment [2] of Wehner and Srinivasan (1981), when the ants got close to the fictive nest and failed to find the real nest, they began searching for it. The study of the search pattern evidenced two properties: it remained centred on the original estimate of the nest location, and the place where the nest was expected to be (the fictive nest) was traversed with high frequency. In other words, the ants start searching for the nest with tortuous loops, but they repeatedly return to the site where their dead reckoning had originally indicated that the nest should be (see figure 2). If the ants, after they have searched for a while, are displaced about 10 metres away, the new search is not centred on the old starting point, but on the new releasing point. This result shows that the search for the nest is conducted by dead reckoning (Gallistel, 1990).

[4]   Santschi (1913) has shown that ants orient themselves by the sun. The author found an ant marching with the sun on its left. He obscured its direct view of the sun and placed a mirror angled so that the ant saw the reflection of the sun in the mirror. The ant turned around and marched in the opposite direction. When Santschi angled the mirror in various ways, the ant adjusted its march so as to maintain its angle relative to the azimuthal angle of the image of the sun (see Gallistel, 1990). The author demonstrated that the crucial information for the ants is the azimuth of the sun.

On the basis of these experiments, Gallistel (1990) concluded that desert ants home by dead reckoning. They compute their position relative to a starting point (the displacement vector) using the azimuth of the sun as a compass heading. As the azimuthal angle of the sun changes during the day, and the rate of this change is not constant, an endogenous mechanism is required to correct for the effect of this change.

     These studies allow me to define some assumptions about homing behaviour in order to formulate a dynamic model of such behaviour. First, homing behaviour is genetic and no learning is involved: every time the ant starts searching for food, it does not take its previous experience into account. In this sense the ants need only a local memory (Staddon, 1983), as their behaviour is affected only by present events and events in the immediate past (Form 1). Second, no absolute geocentric frame of reference is involved: the ant can refer only to an egocentric frame of reference, anchored to its point of view.

3.  THE MATHEMATICAL MODEL

To design the artificial agent, the first step has been to formulate a mathematical model of the natural agent's internal process capable of generating the homing behaviour. More precisely, the mathematical model defines the input-output function of the controller of the agent. It is an abstract model that specifies neither the machine that realizes the process nor the necessary resources. The model I propose is expressed in the form of vectorial operations.

The model must be formulated respecting the assumptions I have derived from the ethological evidence. The same behaviour, in fact, could be described in many ways from the mathematical point of view. Some authors in the ethological area (e.g. Gallistel, 1990; Wehner & Menzel, 1990) call this kind of theory a computational model, which they define as a “theory of a neurobehavioural process formulated in mathematical terms”. The dynamic system and the computational model can be considered as similar concepts.

 

Figure 2. An agent who starts from the starting point S and runs along the path P until it reaches the current point C. The vector h that connects S with C results from the vectorial sum between the vectors of the path P.


     For example, when an ant starts from the nest it seems to run in a random way, pursuing a tortuous path; when it finds food it turns and moves back to the nest by the direct way. If one observes this kind of behaviour and wants to propose a model of it, the behaviour could be described as illustrated in figure 2. Time is considered as discrete; to each instant corresponds a vector that represents direction and speed at that instant. The vectorial sum of all the vectors of the exploration path gives us exactly the vector representing the homing path. But this is a solution that does not take into account the specific biological agent. If you consider the ethological research, you can see that desert ants perform homing behaviour by referring to the sun, using the azimuth of the sun as a compass heading. The ants do not have a cognitive map of the territory surrounding their nest, and they cannot refer to a geocentric frame of reference, as I have assumed. The model of figure 2 is therefore a wrong model of the behaviour of the desert ant, because it does not respect the assumptions: it assumes a geocentric frame of reference and does not involve the azimuth of the sun as a compass heading.
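For contrast, the model of figure 2 can be written down directly: each step is a vector in a fixed external frame, and the homing vector is simply minus their sum. The sketch below (with an arbitrary sample path) makes the problem explicit: it works only because it presupposes a geocentric frame of reference that the ant does not have.

```python
# Geocentric vector summation (the model of figure 2): at each discrete
# instant the step is a vector in a fixed external frame; the homing
# vector is minus the sum of all steps. This is NOT the ant's solution,
# because it presupposes a geocentric frame of reference.
def homing_vector(path):
    """path: list of (dx, dy) steps in a fixed external frame."""
    hx = sum(dx for dx, _ in path)
    hy = sum(dy for _, dy in path)
    return (-hx, -hy)  # vector pointing from the current position back to S

# A tortuous outward path of four steps (arbitrary values)...
steps = [(1.0, 0.0), (0.5, 1.0), (-0.5, 1.0), (1.0, 0.5)]
home = homing_vector(steps)  # the direct way back to the starting point
```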

     Now I try to formulate a vectorial model of homing behaviour that respects our assumptions. Time is assumed to be discrete, so that each instant t has the same duration as the others.

3.1. The vectorial model

Figure 3. Left: simple path of two vectors. L: projection on the two-dimensional plane of the light source. SE: Egocentric frame of reference. d: direction of the light source as to the egocentric frame of reference. Right: associative law of the vectorial sum with path of more than two vectors.


The dynamic system that will now be explained is called the vectorial model, because it can be considered as a geometrical model. The basic idea is that the result of the computation is the direction that the light source would have if the agent were aligned with the path connecting the current point to the starting point. If the agent wants to return to the starting point from the current one, it has to rotate until the current direction of the light source corresponds to the direction resulting from the model. To return to the starting point, the agent then has to maintain this alignment and move by rectilinear motion.

     Consider figure 3. The direction of the light source L can be considered as a vector. The direction of the light source is the same for any position of the agent, just like the azimuth of the sun: the distance between the sun and the earth, from the point of view of an agent which runs along a few metres, can be approximated as infinite.

     The left square of figure 3 shows a simple path which connects the starting point S to the current one C with two vectors a, b. Since I am considering time as discrete, at each instant t the path can be represented as a vector whose direction is the direction of the path and whose modulus is the speed at that instant. To simplify the exposition, the speed is for now supposed to be the same at each instant. The bottom side of the left square shows the direction of the light source with respect to the egocentric frame of reference x, that is, from the agent’s point of view. SEa is the direction of the light source da with respect to the egocentric frame of reference x when the direction of the agent is a, while SEb is the direction of the light source db with respect to the egocentric frame of reference x when the direction of the agent is b. Now consider SEa+b in figure 3 - left square: the vectorial sum da+db is exactly the direction of the light source with respect to the egocentric frame of reference x when the agent is aligned with the path connecting the starting point S with the current one C (dotted lines). This is in accord with the commutative law of the vectorial sum, or parallelogram rule (see Appendix for a geometrical proof).

     The right square of figure 3 shows a path from the starting point S to the current one C composed of three vectors a, b and c. As shown, when the agent is aligned with the path connecting S to C (marked (a+b)+c), the sum (da+db)+dc is exactly the direction of the light source with respect to the egocentric frame of reference x, in virtue of the associative law of the vectorial sum. An artificial agent which starts from S and runs along the path a, b, c takes a fix of the direction of the light source at each instant t and computes the sum of the vector representing the current direction with respect to the egocentric frame of reference (e.g. dc) and the sum of the previous vectors (e.g. da+db), according to the associative law. The resultant is the vector representing the direction that the light source would have if the agent were aligned with the path connecting S to C. If the agent wants to return to the starting point S from the current point C, it has to rotate until the current direction of the light source (e.g. dc in figure 3 - right square) corresponds to the direction resulting from the associative vectorial sum (e.g. (da+db)+dc in figure 3 - right square). At this point, the agent is aligned with the path connecting S to C, and the starting point is behind it. If instead the agent aligns with the resultant (da+db)+dc rotated by 180°, it has the starting point in front of it. To return to S it has to maintain this alignment and move by rectilinear motion: it will reach the starting point.

     From now on, I call the vector resulting from the associative sum the expectation vector, the one which represents the direction of the light source at each instant the perception vector, and the one which represents the direction of the path the path vector. I call the expectation vector rotated by 180° the homing vector.

 

Figure 4. The expectation vector depends on the moduli of the vectors da and db. The thin dotted lines refer to the case in which vectors a and b have the same modulus (figure 3).


     Consider the path vector. Until now (figure 3) I have supposed that the speed was the same at each instant. Now I consider an agent whose speed can change. The speed at each instant is defined as the modulus of the vector whose direction represents the direction of the path at that instant. If you look at figure 3, you can see that the direction of the homing path (heavy dotted line) changes proportionally to the speed. As you can see in figure 4 by comparing the heavy dotted lines with the thin ones - the latter refer to the case in which vectors a and b have the same moduli (figure 3) - the direction of the expectation vector, with respect to the egocentric frame of reference x, depends on the speed too (see SEa+b). A way to make the vectorial model take this evidence into account is to assign the moduli of the path vectors a and b (remember that the modulus is the speed at that instant) to the moduli of the perception vectors da and db. As a consequence, the resultant expectation vector da+db depends directly on the speed.

Form 1. The vectorial model.

 

Assumptions

a)  homing behaviour is genetic and no learning is involved: every time the ant starts searching for food, it does not take its previous experience into account;

b)  the system can refer itself only to events at the instant t and to those at the instant t-1 (local memory);

c)  the system can refer only to an egocentric frame of reference.

 

Model

1.  At instant t=0 the expectation vector is exactly the direction of the light source which the agent perceives; the direction it has to line up with to have the starting point in front of it is this vector rotated by 180°;

2.  at instant t=1 the direction of the expectation vector is the result of the vectorial sum of the current perception vector and the previous one. The modulus of each perception vector is equal to the modulus of the corresponding path vector (the speed at the corresponding instant);

3.  at instant t=2, in virtue of the associative law of the vectorial sum, the direction of the expectation vector is the result of the vectorial sum of the current perception vector and the expectation vector at the previous instant t=1. The modulus of the current perception vector is equal to that of the path vector (the speed) at instant t=2;

4.  at instant t=n, in virtue of the associative law of the vectorial sum, the direction of the expectation vector is the result of the vectorial sum of the perception vector at the current instant t=n and the expectation vector at instant t=n-1. The modulus of the current perception vector is equal to that of the path vector (the speed) at instant t=n.

    

The distance between the current and the starting point is equal to the modulus of the expectation vector.

     The expectation vector has another important property: its modulus is equal to the distance between the current position C (or C’) and the starting position S. In fact, the expectation vector is the diagonal of the parallelogram whose sides are the perception vectors, and the modulus of this diagonal corresponds to the distance (see Appendix). Form 1 summarizes the main features of the vectorial model.

     The vectorial model is consistent with the basic assumptions. First, it does not take previous experience into account. Second, it is consistent with the local-memory assumption: at each instant, in virtue of the associative law of the vectorial sum, the only information used is the current information (the perception vector) and the information at the previous instant (the expectation vector). The vectorial model is also consistent with the assumption that the artificial system can refer only to an egocentric frame of reference anchored to its point of view: no absolute geocentric frame of reference is involved.
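The recursive update of Form 1 can be sketched in Python as follows. This is an illustrative sketch, not the controller described later in the paper: directions are represented as complex numbers, the sun azimuth, headings and speeds of the sample run are arbitrary, and the egocentric light direction at each instant is derived from an assumed world heading only in order to simulate the agent's input.

```python
import cmath
import math

def update_expectation(E, speed, light_dir):
    """One step of Form 1: add to the previous expectation vector the
    current perception vector, i.e. the egocentric direction of the light
    source with modulus equal to the current speed (the path-vector modulus)."""
    return E + speed * cmath.exp(1j * light_dir)

# Simulated outward run. The egocentric light direction at each instant is
# (sun azimuth - heading), since the source is approximated as infinitely far.
sun = math.radians(40.0)
headings = [0.0, math.radians(70.0), math.radians(-30.0)]  # world headings
speeds = [1.0, 2.0, 1.5]                                   # path-vector moduli

E = 0j
for v, phi in zip(speeds, headings):
    E = update_expectation(E, v, sun - phi)

distance_home = abs(E)                       # modulus = distance to the start
homing_light_dir = cmath.phase(E) + math.pi  # expectation rotated by 180°:
# keep the light at this egocentric direction and move straight to reach S
```

In this sketch the modulus of the expectation vector equals the distance between the current and the starting point, and the homing vector is the expectation vector rotated by 180°, matching the two properties of the model.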

4. THE ARTIFICIAL AGENT

The vectorial model can be implemented by the controller of an artificial agent. In other words, I will try to design an architecture that approximates the vectorial model. The controller processes the information coming from the visual device and sends information to the motor system. The design of the architecture involves the input-output devices as well. These characteristics are those described in the literature on the autonomous-agents approach (e.g. see Agre, 1995; Brooks, 1990; 1991; Beer, 1995; Colombetti, 1994; Colombetti & Dorigo, 1996).

     Figure 5 shows the whole architecture of the system. The significance of each component will be clarified in the next Subsections. First I will consider how information is acquired from the environment (input device), how it is represented, and how movement is generated (output device)[1].

4.1. Visual device

The input device of the agent is a visual system (figure 5 - VIS).

 

Figure 5. The architecture of the artificial system. VIS: visual system. AM: alignment module. HM: heading module. PROP: proprioceptive system. MOT: motor system.


     Figure 6 B shows the hypothetical physical structure of the agent, and C shows the schematic disposition of the eyes with respect to the egocentric frame of reference x, which coincides with the main axis of the body of the agent. In simulating the interaction between the light and the eyes of the agent, the distance between the two eyes is insignificant, because the light source is considered to be at a distance approximated as infinite (see Staddon, 1983 for proof). So in the simulation I will consider a cone of light which hits the spherical surface composed of the two hemispherical eyes (see figure 6 C).

     Figure 7 shows the light-and-shadow areas generated by a cone of light which hits a spherical surface (A, B). Square a shows the direction of the light source L in the specific example of the figure. Each eye of the agent is composed of an artificial retina, and each artificial retina has 24 receptors (figure 6 A). In the simulated agent these units react to the light as if they were placed on a hemispherical surface (figure 7 C).

4.2. Visual mapping: the Kohonen matrix

The pattern of activation of the receptors is mapped into an internally useful form. The activation of each matrix of receptors is processed by a Kohonen neural network (Kohonen, 1978). The output of this network is a square matrix of 2×2 elements (output units or neurones). The information lies in a particular zone of the matrix. That matrix, which I will call the visual matrix, maps in a topological way the information about the direction of the light source with respect to the egocentric frame of reference. The visual matrix is oriented just like the egocentric frame of reference.

 

Figure 6. A: Hypothetical physical structure of the artificial agent. B: the distance between the two eyes d is insignificant. C: the units that composed each eye of the agent.


     Figure 8 shows the Kohonen maps K corresponding to the visual patterns P (1, 2, 3, 4) which are used to train the neural network W. As you can see, for example, when the light source is in front of the agent (P1) the visual matrix Kb maps the information so as to make that direction explicit (see the activated portion of the matrix in figure 8). This kind of mapping is realized for all directions of the light source: recall tests have shown that there is a topological representation of the information on the Kohonen matrix corresponding to each visual pattern generated by a different direction of the light source. Consequently, the Kohonen map is a useful representation of the visual information.

Figure 7. A light source L (square a) hits a sphere and generates light and shadow zones (A, B). C: activation of the receptors of the eyes of the artificial agent.


     The point of processing the visual pattern with a Kohonen neural network becomes clear if you consider that the information mapped on the Kohonen matrix has the same characteristics whatever the visual device may be. In other words, if an agent has a visual device different from that of figure 6, for example a robot with its own set of sensors, the Kohonen matrix could be trained so as to give a topological representation of such a visual device. For this reason I call the information of the Kohonen matrix device-independent.
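As an illustration of the kind of mapping described in this Subsection, the following sketch trains a tiny Kohonen self-organizing map on four toy "receptor" patterns and recalls the winning output unit for each. The patterns, the sizes and the training schedule are hypothetical, and much smaller than the 24-receptor retinas of the actual system.

```python
import math
import random

def train_som(patterns, grid=2, epochs=200, lr0=0.5, radius0=1.0, seed=0):
    """Train a tiny Kohonen self-organizing map: grid*grid output units,
    each with a weight vector the size of the input pattern."""
    rng = random.Random(seed)
    n = len(patterns[0])
    units = [(i, j) for i in range(grid) for j in range(grid)]
    W = {u: [rng.random() for _ in range(n)] for u in units}
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)          # decaying learning rate
        radius = radius0 * (1 - epoch / epochs)  # shrinking neighbourhood
        for x in patterns:
            # Best-matching unit: smallest Euclidean distance to the input.
            bmu = min(units, key=lambda u: sum((w - xi) ** 2
                                               for w, xi in zip(W[u], x)))
            for u in units:
                if math.dist(u, bmu) <= radius + 1e-9:
                    # Move the winner (and its neighbours) toward the input.
                    W[u] = [w + lr * (xi - w) for w, xi in zip(W[u], x)]
    return W, units

def recall(W, units, x):
    """Map a pattern to the coordinates of its winning output unit."""
    return min(units, key=lambda u: sum((w - xi) ** 2 for w, xi in zip(W[u], x)))

# Four toy "receptor" patterns standing in for four light directions.
patterns = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1]]
W, units = train_som(patterns)
winners = [recall(W, units, p) for p in patterns]  # one zone per direction
```

After training, each input pattern activates a particular zone of the output matrix, which is the topological property the visual matrix relies on.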

4.3. The motor system

The output of the system (figure 5 - MOT) is the activation that the controller sends out to some motor-neurones that generate the movement of the agent (figure 9 - top). On each side of the body there is a unit called a motor-neurone. The mechanism is that of the caterpillar, which is propelled by its tracks: if the activation of the motor-neurones at the two sides is identical, the agent moves in a rectilinear direction; if the activation is different, that is, if one of them is active and the other has no activation, the agent turns in place. The direction of rotation depends on which side is more active.

     The speed is supposed to change proportionally to the activation of the motor-neurones. In particular, the linear speed is proportional to the amount of simultaneous activation of the two motor-neurones, while the angular speed is proportional to the activation of the active motor-neurone relative to the inactive one.
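These proportionality relations can be sketched in C (the language of the simulator, see Section 5.4). The exact mapping from activations to speeds is my own assumption: the report fixes only the proportionality, and the 4.5 degrees-per-unit constant is taken from the example in Section 5.2.

```c
#define DEG_PER_UNIT 4.5   /* degrees per activation unit, from Section 5.2 */

typedef struct { double linear; double angular; } Speeds;

/* Caterpillar-track mechanism: linear speed is the simultaneous
   (common) activation of the two sides; angular speed is the excess
   of one side over the other. A positive value means a right turn,
   since an active left neurone makes the agent turn right. */
Speeds drive(double left, double right)
{
    Speeds s;
    s.linear  = (left < right) ? left : right;
    s.angular = DEG_PER_UNIT * (left - right);
    return s;
}
```

With identical activations, e.g. drive(3, 3), the angular speed is zero and the motion is rectilinear; with drive(2, 0) the agent turns right by 9 degrees, matching the example in Section 5.2.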

Figure 8. P (1, 2, 3, 4): visual patterns used to train the Kohonen neural network W. K: Kohonen matrices.


     I call the motor system a “path generator” (Form 2) because it generates the agent's movement by sending activation to the motor-neurones. At one instant t, the same value is sent to the left and to the right motor-neurone: the agent moves in a rectilinear way. At the subsequent instant a value is sent to only one motor-neurone, while that sent to the motor-neurone on the other side is zero: the agent turns. It turns left if the active neurone is on the right side, and right if the active neurone is on the left side. The rate of turning is proportional to the value of the active motor-neurone. Instants of rectilinear motion and instants of turning alternate for as long as the agent moves (see Form 2, “path generator”). The resulting path of the artificial agent is similar to that of the ant.
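The alternation of rectilinear and turning instants can be rendered as a rough C sketch; the range of the random activations and the function name are my own assumptions, since Form 2 specifies only the alternation and the randomness.

```c
#include <stdlib.h>

/* One instant of the "path generator": even instants are rectilinear
   (same value to both motor-neurones), odd instants are turns (one
   motor-neurone active, the other at zero). */
void path_generator_step(int t, double *left, double *right)
{
    double v = (double)(rand() % 5);   /* assumed activation range 0..4 */
    if (t % 2 == 0) {
        *left = *right = v;            /* rectilinear instant */
    } else if (rand() % 2) {
        *left = v;  *right = 0.0;      /* turn right */
    } else {
        *left = 0.0; *right = v;       /* turn left */
    }
}
```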

     The proprioceptive system (figure 5 - PROP) retrieves the values of the motor-neurones. The activation of the motor pattern is processed by a Kohonen neural net (figure 9 - bottom) and is mapped on a matrix of 2×2 elements. The matrix represents the motor information in a topological way with respect to direction (with respect to the egocentric frame of reference x) and in an analogical way with respect to speed. The purpose of this matrix, which I call the proprioceptive matrix, is to give the agent information about its current state. This point will be clarified in the next Section.

4.4. The controller

The architecture of the system is modular. Modularity, in this case, must not be understood in Fodor's sense (Fodor, 1983), because no cognitive task is implicated. Simply, each basic function is encapsulated in a specific structure that I call a module. This kind of modularity has a hierarchic structure: in the limit case, the entire process involved in homing behaviour could be considered as a module containing other modules within itself. The controller of the system involves some basic functions encapsulated in modules.

     The module that implements the vectorial model, which I call the heading module, is just one of them (figure 5 - HM). The other module that is part of the controller is the alignment module (figure 5 - AM), which computes the values to send to the motor-neurones (see Section 4.4.3). The proprioceptive system (see Section 4.3) is the square labelled PROP in figure 5. It computes the instantaneous linear and angular speed of the agent from the activation values of the motor-neurones at each instant.


Figure 9. Top: the hypothetical physical body of the agent. The amount of activation of the motor-neurones Mt determines the movement of the agent. Bottom: P(1, 2, 3, 4):  motor patterns used to train the Kohonen neural network Wp (black: maximum activation; white: zero activation). K: Kohonen matrixes.

 

4.4.1. Basic features

Consider the perception vector described in Section 3.1 and the visual matrix described in Section 4.2. Both represent the direction of the light source with respect to the egocentric frame of reference: the visual representation (the visual matrix) is analogous to the mathematical concept (the perception vector). Hence, the vector resulting from the sum of vectors has the same direction as the direction represented by the activation on the matrix that results from the sum of matrices, as you can see in figure 10 - top square. The system just has to sum, in an associative way, the visual matrix at each instant to the sum of the preceding visual matrices. I call the matrix resulting from this associative sum the homing matrix. So, at each instant, the visual matrix of the current instant is summed to the homing matrix of the preceding instant. When the agent wants to return to the starting point, it has to move so as to align the current visual matrix with the homing matrix rotated by 180°. This is the basic algorithm of the artificial system. Such a process approximates the vectorial model.
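In C, the associative sum and the 180° rotation can be sketched as below; the matrix size N is my assumption, as the report does not state the dimensions of the visual matrix.

```c
#define N 8   /* assumed size of the visual/homing matrices */

/* Associative update: sum the current visual matrix into the
   running homing matrix. */
void update_homing(double hom[N][N], const double vis[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            hom[i][j] += vis[i][j];
}

/* Rotate a matrix by 180 degrees: element (i, j) moves to
   (N-1-i, N-1-j), reversing the represented direction. */
void rotate180(const double in[N][N], double out[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            out[N-1-i][N-1-j] = in[i][j];
}
```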

     Before describing it in detail, look at figure 10 - bottom square. It shows the sum of two visual matrices relative to opposite directions (A). On the resultant matrix the activation is uniform, because two opposite directions annul each other, as in a vectorial sum. The activation of the resultant matrix would increase over time, producing useless noise. This drawback can be avoided if the matrices are pre-processed by a neural network that inhibits the components opposite to the active ones. Figure 10 - bottom square B shows the same situation as A, but now the matrices are pre-processed to inhibit the components opposite to the active ones. The amount of inhibition in a zone of the matrix is just the amount of activation in the opposite zone.
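A minimal sketch of this pre-processing, under my assumption that “opposite” units are the ones diametrically across the matrix (the report does not detail the internal wiring of the inhibiting network):

```c
#define N 8   /* assumed matrix size */

/* Each unit is inhibited by the activation of the diametrically
   opposite unit and floored at zero, so that two opposite
   directions cancel instead of accumulating uniform noise. */
void inhibit_opposites(const double in[N][N], double out[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double v = in[i][j] - in[N-1-i][N-1-j];
            out[i][j] = (v > 0.0) ? v : 0.0;
        }
}
```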

Figure 10. Top square: vectorial sum and matrix sum. Bottom square. A: sum of the perception matrices corresponding to two opposite directions of the light source with respect to the egocentric frame of reference x. B: same situation, but now the matrices are pre-processed so that the units opposite to the active ones are inhibited.


4.4.2. The heading module

In figure 11 you can see the architecture of the heading module (figure 5 - HM). Its inputs are the visual matrix and the proprioceptive matrix. The dotted zone of the proprioceptive matrix represents the speed of the agent: its value w1 weights each element of the visual matrix. So the proprioceptive feed-back of speed normalizes the visual matrix to the instantaneous speed, just as the value of the instantaneous speed is assigned to the modulus of the perception vector in the vectorial model. WH is the neural network that inhibits the components of the matrix opposite to the active ones. S is the sum of the current visual matrix and the preceding homing matrix. The result of this sum (the current homing matrix) is stored in the local memory Mloc and will be summed to the next visual matrix at the next instant. The heading module also computes the homing matrix rotated by 180° (see r180).

     While the agent follows a tortuous path, at each instant the heading module refreshes the homing matrix: the current homing matrix will be summed to the visual matrix at the next instant and is available in the meantime. If the agent wants to return to the starting point, e.g. as a consequence of a particular stimulus, it has to make the visual matrix correspond to the homing matrix rotated by 180°. In this way it is lined up with the homing path and the starting point is in front of it. To do that, the agent has to activate its motor system. A particular module (the alignment module, see Section 4.4.3) encapsulates this function.

     The distance between the current position of the agent and the starting point is given by the total absolute activation of the homing matrix. Figure 12 illustrates this. Suppose that the agent moves from P (the starting point) to P’ and that L is the direction of the light source. At the starting point P the total absolute activation of the homing matrix is assumed to be zero (Hom(P)). During the path from P to P’ the total absolute value of the homing matrix increases. Conversely, during the return path, the visual matrices summed to the homing matrix (Bi(ret)) are opposite to the ones activated during the path from P to P’ (see Bi(exp)), so the total absolute activation of the homing matrix progressively decreases. When this absolute value is zero the agent is close to the starting point.
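The distance estimate can then be read off with a few lines of C (matrix size assumed as before):

```c
#include <math.h>

#define N 8   /* assumed size of the homing matrix */

/* Total absolute activation of the homing matrix: zero at the
   starting point, growing during exploration, shrinking during
   the homing path. */
double estimated_distance(const double hom[N][N])
{
    double total = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            total += fabs(hom[i][j]);
    return total;
}
```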

 

Figure 11. HM: heading module. It computes the homing matrix Hom from the visual matrix Bi and the proprioceptive speed mapped on the proprioceptive matrix Kp (dotted unit). r180 rotates the matrix resulting from the associative sum by 180°, so that if the visual matrix corresponds to the homing matrix the starting point is in front of the agent.


4.4.3. The alignment module

The alignment module (figure 5 - AM) computes the values of the motor-neurones during the homing path. When the agent departs from the starting point, the path results from randomly chosen movements. This is a first approximation of the real movements of an ant that leaves its nest for exploration: building a model of exploration paths is not an aim of this work, so I assume that the artificial agent moves in a random way during exploration. The movement during the exploration path is generated by the motor system (“path generator”, see Form 2). When the agent reaches some significant goal, just as the ant reaches the food, it has to return to the starting point. The alignment module then computes the values to send to the motor-neurones so as to align the visual matrix, which corresponds to the current perception of the light source, with the homing matrix rotated by 180°, computed by the heading module. This process continues during the homing path: on one hand the heading module updates the homing matrix taking the last movement into account; on the other hand the alignment module computes the new values to send to the motor-neurones. So the agent continuously adjusts its direction during the homing path.

     As the agent gets near the starting point, the global absolute value of the homing matrix decreases. When this value falls below some critical value (e.g. a threshold), the agent is close to its estimated starting point.

Figure 12. Example of a rectilinear path from the starting point P to P’. The global absolute value of the homing matrix is zero at the starting point (see Hom(P)). This value increases during the exploration path P-P’ and decreases during the homing path P’-P (dotted arrows).


5. SIMULATION

The architecture of the artificial agent and the environment are simulated by a program on a computer[2]. This program is called the simulator. It draws on the monitor a line corresponding to the movement of the agent, and it has some functions that simulate the interaction between a fixed light source and the visual system according to the movement of the agent. In the next subsections I will describe the characteristics of the simulator with respect to the movement and the visual system of the agent.

5.1. The environment

The artificial environment is supposed to be a two-dimensional plane on which the agent can move, with a source of light at a distance such that its direction can be considered the same at each point of the plane. As a first approximation the light source is considered fixed on the azimuthal plane, with zero elevation on the horizon.

5.2. Movement

The graphical interface between the experimenter and the behaviour of the agent is the screen of the computer monitor. The simulator draws a line on the screen using the amount of activation of the motor-neurones. For example, the simulator draws a rectilinear line when the agent moves in a rectilinear way; the length of the line is proportional to the activation of the motor-neurones (for example, if the speed of the agent is 3, the simulator draws a rectilinear line 3 pixels long). When the agent turns, the simulator registers the turning so that the next line is turned by the same amount. For example, if the left motor-neurone has activation 2 and the right one has zero activation, the agent turns right by 4.5×2 = 9 degrees (see Form 2, “path generator”). The simulator will draw the next line so that the angle between the previous line and the next one is 9 degrees to the right.

Form 3. The alignment module.

 

“behaviour generator”

 

The agent explores: the level of linear speed in rectilinear motion (even instants) and of angular speed in turns (odd instants) is under the control of the motor system (randomly chosen).

 

The agent finds the goal: switch from random movement to homing behaviour. The alignment module sends activation to the motor-neurones so as to align the visual matrix, which corresponds to the current perception of the light source, with the homing matrix computed by the heading module.

 

The agent is near the starting point: if the total absolute value of the homing matrix falls below a critical level, homing behaviour terminates and the motor system takes control of the movement, so the agent starts moving in a random way. This minimizes the effect of the cumulative error that occurs during the computation of the homing matrix: the agent is now surely near the starting point, and will find it in a short time.
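The three cases of the behaviour generator amount to a small state machine; in the C sketch below the threshold value is an arbitrary placeholder, not a figure from the report.

```c
/* Sketch of the "behaviour generator" as a two-state switch. */
typedef enum { EXPLORE, HOMING } Mode;

#define HOME_THRESHOLD 0.5   /* assumed critical level */

Mode next_mode(Mode m, int goal_found, double homing_activation)
{
    if (m == EXPLORE && goal_found)
        return HOMING;       /* goal reached: start homing behaviour */
    if (m == HOMING && homing_activation < HOME_THRESHOLD)
        return EXPLORE;      /* near home: back to random searching */
    return m;
}
```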

5.3. Visual perception

The simulator generates light and shadow areas by assigning a value to the receptors that are in the light area, and no value to the receptors that are in the shadow area. A receptor that falls on the boundary between the light and shadow areas is assigned a value proportional to the fraction of the receptor that lies in the light area. For example, in figure 7 C, the receptors that fall in the light area are active (black), those in the shadow area have zero value (white) and those on the boundary have a value proportional to the part of them that falls in the light area (grey). The simulator determines all the light and shadow areas generated by a cone of light, with zero elevation on the horizon, moving around the spherical surface. In reality the agent turns while the light source is supposed to be fixed, so when the agent turns, the simulator rotates the light source by the same amount but in the opposite sense with respect to the egocentric frame of reference of the agent.

5.4. Architecture

The simulator implements the architecture of the agent. The modules I have just described are realized by specific routines and algorithms implemented in the C programming language. These routines realize the functions that are encapsulated in the modules.

5.5 The behaviour

The artificial agent starts from a specific point placed in the middle of the screen. The user can follow its movement because it draws a line while it is moving. The searching movement is random. When the user gives the agent a particular command (e.g. by pressing the ‘c’ key on the keyboard), it turns and moves to the starting point by the direct way, from any position. The simulated homing path is very similar to that of the desert ant (see figure 13). As noted in the discussion of the ethological research (Section 2), when the ant is close to its nest, it seems to switch to another kind of behaviour that I have called piloting. Some ethological research (see experiment [3] in Section 2) suggests that the searching path remains close to the estimated position of the nest. In simulating the behaviour of the agent I was surprised to see that it behaves exactly like the ant when it is close to its nest. This was an unexpected result. In fact, as you can see in figure 13, the agent begins making some loops when it is close to the starting position. These loops are centred on the estimated position of the nest.

 

Figure 13. Output of the simulation on the monitor screen of the computer.


     The searching behaviour that one can observe when the agent is close to the estimated starting position allows the agent to reach its goal of returning to the starting position even if that estimate is not precise. This kind of mechanism is necessary because dead reckoning is subject to cumulative error (Wehner & Menzel, 1990). The surprising point is that the same modules generate both homing and piloting behaviour.

5.5.1. The fitness function

A fitness function can be defined as a function that assigns a score to the behaviour of the agent (Colombetti, 1994). Given an agent A and an environment E, the fitness function f assigns a score f(a) to each behaviour a produced by A in E. If the value of f is high, A is well adapted to E.

     The fitness function is defined as follows. The score assigned by the fitness function to the behaviour of the agent is proportional to its precision in estimating the starting position: the closer that estimate is to the nest, the higher the score assigned to the behaviour. Some concentric circles are drawn around the starting point, and to each of these circles a value is assigned, as in target shooting (see figure 13 - fitness score). The score is maximal in the centre of the circles.

     On one hand I consider the position score. In this case the estimated point is the one reached when the estimated distance is zero. Another way to evaluate the behaviour is to check whether the agent reaches the maximal score: I can suppose that, whatever the position score, the agent eventually reaches the starting point, thanks to the loops it performs when it is close to the nest, which start when the estimated distance is zero.

 

Figure 14. Results of the experiments with the simulated artificial agent.


5.5.2. Results

The evaluation of the fitness function was essentially qualitative, meaning that I did not use any statistical test to analyse the results, because the behaviour observed is just a line on the screen. Simulation can be useful to verify the hypotheses on the model and on the architecture. It can indicate the way to realize a real artificial agent grounded in the real environment, and it can show us some characteristics that I had not predicted, like the loops that the agent performs when it is close to the nest.

     The results I have observed are as follows:

1.  The artificial agent is able to return to the starting point referring to a source of light (figure 13). If you observe the homing path of the simulation and compare it with that of the desert ant (figure 1), you can see that they are very similar. The beginning of each homing path can be considered as the search for alignment between the estimated direction of the light and the real one. In both cases the homing path seems to be generated by a typical feedback behaviour: the agent turns to one side at one instant, then turns to the other side, and so on, until the rate of turning becomes very small and the path becomes more and more rectilinear. This would indicate that each agent, the biological and the artificial one, is searching for the best alignment during the homing path.

2.  The estimated position at the end of the homing path is not very precise. This could depend either on the resolution of the visual simulator (4.5 degrees) or on the error that the system accumulates during the searching path.

3.  Even if the estimated position yields a low score in the fitness evaluation, the behaviour of the agent shows a high fitness score if the whole homing behaviour is evaluated: whatever the estimated position, the agent reaches the starting point most of the time. In other words, the estimated position of the starting point yields a low fitness score, but the fitness score increases if the evaluation takes into account whether or not the agent reaches the starting point at the end of the homing behaviour.

Figure 15. S: starting point. C: point where the agent is “captured”. R: point where the agent is released. E: estimated starting point. D: direction of the light source.


4.   I have reproduced the experimental conditions of the ethological experiments on desert ants (see Section 2) and observed that, in those conditions, the artificial agent shows a behaviour similar to that of the desert ant. Look at figure 14. When one gives the agent the ‘c’ command, it turns and begins the homing path. If one “captures” it when the ‘c’ command is given (capture point CP) and “releases” it at a new position (releasing point RP, transferred by the vector V), the agent turns and begins homing, following a path that is parallel to the ideal homing path represented by the vector IHP in figure 14 (see V and V’). This result is identical to that on the desert ant (see [2] in Section 2). If one “captures” the agent at the estimated starting point (XSP) and releases it at a new position (transferred by the vector V’’), the new searching is centred on the releasing point (new searching position NSP), in accordance with the results observed on the desert ant (see [4] in Section 2).

5.5.3. Discussion

In this Subsection I discuss some of the results observed in the simulation tests. First, I can say that the architecture of the controller is a good approximation of the vectorial model: as observed, the agent is able to return directly near the starting point from any position.

     I have also observed that the artificial agent shows the same behaviour as the desert ant when I simulate the same experimental conditions. These results follow from the dynamical characteristics of the interaction and from the mathematical properties of the model. The fact that the direction of the light is the same at every point of the plane, and that the agent cannot refer to any landmarks, makes the agent unable to distinguish between a path starting from the capture point and one starting from the release point. Figure 15 shows a geometrical representation of this fact. From the point of view of the agent there is no difference between the vector h and the vector h’, because the angle between each of these vectors and the vector representing the direction of the light source d is exactly the same. On the other hand, every point of the plane is equal to every other, because none of them represents a landmark. The agent evaluates only its estimate of direction and distance. This is why the searching is centred on the releasing point in the second experiment.

6. CONCLUSIONS

The simulation of agent-environment interaction demonstrates that the abilities described by the vectorial model are sufficient for homing behaviour as observed in desert ants. I have defined a fitness function able to assign to the behaviour of the agent a score proportional to its precision in reaching the starting point. The evaluation of this function in the simulation tests shows that the agent is adapted to its environment, because its control system attempts to maximize the score.

     The homing behaviour of the artificial agent preserves all the characteristics observed in the biological one: if I reproduce the experimental conditions of the ethological experiments on desert ants, the artificial agent shows an equivalent behaviour.

Acknowledgements

This work has been developed with the collaboration of Prof. Antonella Carassa (Dipartimento di Psicologia Generale - Università di Padova) whose suggestions and proposals have been very useful to the realization of the research.

     Thanks also to Prof. Marco Colombetti (Progetto di Intelligenza Artificiale e Robotica - Dipartimento di Elettronica e Informazione - Politecnico di Milano) for valuable suggestions about the technical aspects.

References

Agree P. E. (1995). Computational research on interaction and agency, Artificial Intelligence, 72, 1-52.

Beer R. D. (1995). A dynamical perspective on agent-environment interaction, Artificial Intelligence, 72, 173-215.

Brooks R. A. (1990). Elephants Don't Play Chess, in P. Maes (ed.), Designing Autonomous Agents: Theory and Practice from Biology to Engineering and Back. North-Holland: Elsevier Science Publishers B.V.

Brooks R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139-159.

Colombetti M. (1994). Adaptive agents. Steps to an ethology of the artificial, in S. Masulli, P. G. Morasso and A. Schenone (eds.), Neural Networks in Biomedicine, Singapore: World Scientific, 391-403.

Floreano D.,  Mondada F. (1994). Autonomous and self-sufficient: emergent homing behaviours in a mobile robot, LAMI Technical Report No. R94.14I.

Gallistel C. R. (1990). The Organization of learning, Cambridge, MA: MIT Press.

Holland J. H. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor, Mich.: University of Michigan Press.

Kohonen T. (1978). Associative Memory: A System-Theoretic Approach, New York: Springer-Verlag.

Maes P. (1990). Designing Autonomous Agents: Theory and Practice from Biology to Engineering and Back, Guest Editorial. North-Holland: Elsevier Science Publishers B.V.

Newell A. (1990). Unified Theories of Cognition, Cambridge, MA: Harvard University Press.

Nolfi S., Parisi D. (1995). Evolving non-trivial behaviors on real robots: An autonomous robot that picks up objects, Institute of Psychology, C.N.R. - Rome: Technical Report 95-03.

Nolfi S., Miglino O., Parisi D. (1995). Phenotypic Plasticity in Evolving Neural Networks, Institute of Psychology, C.N.R. - Rome: Technical Report PCIA-94-05.

Nolfi S., Floreano D., Miglino O., Mondada F. (1994). How to Evolve Autonomous Robots: Different Approaches in Evolutionary Robotics. Institute of Psychology, C.N.R. - Rome: Technical Report PCIA-94-03.

Staddon R. E. J. (1983). Adaptive behavior and learning. Cambridge: Cambridge University Press.

Wehner R., Menzel R. (1990). Do insects have cognitive maps?, Annu. Rev. Neurosci., 13, 403-14.

Wehner R., Srinivasan M. V. (1981). Searching behaviour of desert ants, genus Cataglyphis (Formicidae, Hymenoptera), J. of Comparative Physiology, 142, 315-38.

 

Appendix

Figure 16.


Figure 16 A illustrates a geometrical representation of the homing behaviour as observed in the desert ant. A hypothetical agent starts from S, running along the path represented by the two vectors a and b: it runs along a, then turns and runs along b. In the specific case of the figure the agent is faster while it runs along a than while it runs along b; in fact the modulus of a is greater than that of b. While the agent runs along a, the direction of the light source is represented by the vector d; d is parallel to d’, the vector that represents the direction of the light source while the agent runs along b. If you observe figure 16 B, you can see that, from the point of view of the agent, the angle between its front direction (what I have called the egocentric frame of reference) and the direction of the light source is β while it runs along a; while it runs along b, that angle is the sum of β and the rate of the turning, represented by the angle α. Figure 16 C shows the situation with respect to the egocentric frame of reference: the two legs are now represented by the vectors a’ and b’, whose moduli are, by definition, the same as those of a and b. I want to demonstrate that the vector h’, resulting from the vectorial sum of a’ and b’ with respect to the egocentric frame of reference:

a)  has the same modulus as the vector h resulting from the sum of a and b;

b)  forms with the direction of the light source the same angle that h forms with d in the geocentric frame of reference (figure 16 B), i.e. h’ is parallel to h once the two frames are aligned on the light direction.

a)  The vector b’’ is drawn parallel and equal to b, and the vector a’’ parallel and equal to a, as you can see in figure 16 B (dotted). The straight line r is drawn as a prolongation of the vector a. The angle between a and b’’ is α by construction, because this angle and that between r and the vector b are corresponding internal angles of the two parallel vectors b and b’’ intersected by the straight line r. Figure 16 C shows the situation with respect to the egocentric frame of reference; there the directions d and d’ coincide. The angle between a and d is β, and the angle between b and d’ is α+β, as shown in figure 16 B; so the angle between b’ and a’ with respect to the egocentric frame of reference is just (α+β)−β, that is α. The modulus of a’ is equal to that of a by definition, just as the modulus of b’ is equal to that of b, and the angle between b’ and a’ is α, the same as the angle between a and b’’. I can conclude that the vector h’, resulting from the sum of a’ and b’, has the same modulus as the vector h resulting from the sum of a and b (or of a and b’’), because h and h’ are the diagonals of two equal parallelograms.

b)  Since the parallelogram formed by a’ and b’ and that formed by a and b are equal, the angle γ between a and h with respect to the geocentric frame of reference (figure 16 D) is the same as the angle between h’ and a’ with respect to the egocentric frame of reference (figure 16 E). So the angle between h’ and the direction of the light source is just β+γ with respect to the egocentric frame of reference. But if you look at figure 16 D, you can see that β+γ is also the angle between d and h (and between d’ and h). So you can conclude that h’ forms with the light direction the same angle as h, i.e. h’ is parallel to h.


[1] Remember that the artificial agent and the environment are simulated. So what I am calling a visual device, a motor device, a two-dimensional plane etc. are hypothetical concepts realized by an algorithm and implemented in a programming language. See Section 5 (“Simulation”) for more details.

[2] The programming language is C and the computer is a PC compatible.

 
