
Cognitive Processing (2009), 10, 95-99

Running head: SIMULATING BEHAVIOR TO REACH LONG-TERM GOALS

ACACIA: An agent-based program for simulating behavior to reach long-term goals

Francesc S. Beltran1, Vicenç Quera1, Elisabetta Zibetti2, Charles Tijus2 & Meritxell Miñano1

1Universitat de Barcelona, 2Université de Paris-8

Author’s Note

Francesc S. Beltran, Vicenç Quera, and Meritxell Miñano, Adaptive Behavior and Interaction Research Group, Universitat de Barcelona. Elisabetta Zibetti and Charles Tijus, Laboratoire de Cognitions Humaine et Artificielle, Université de Paris-8.

This research project was supported by grants from the Directorate General for Research of the Government of Catalonia (2001SGR-00140), the Directorate General for Higher Education and Culture of the Spanish Government (BSO 2001-2844) and the French Institut National de Recherche en Informatique et en Automatique (02N81/0011). This research project was also supported by funds from the European Union’s FEDER program. The authors would like to thank two anonymous reviewers for their comments on an earlier version of the manuscript.

Correspondence should be addressed to Francesc Salvador Beltran, Departament de Metodologia de les Ciències del Comportament, Facultat de Psicologia, Universitat de Barcelona, Campus Mundet, 08035 Barcelona, Spain. Email: francesc.salvador@ub.edu. Phone: +34-933125085. Fax: +34-934021359


Abstract

We present ACACIA, an agent-based program implemented in Java StarLogo 2.0 that simulates a two-dimensional microworld populated by agents, obstacles and goals. Our program simulates how agents can reach long-term goals by following sensorial-motor couplings (SMCs) that control how the agents interact with their environment and other agents through a process of local categorization. Thus, while acting in accordance with this set of SMCs, the agents reach their goals through the emergence of global behaviors. This agent-based simulation program would allow us to understand some psychological processes, such as planning behavior, from the point of view that the complexity of these processes is the result of agent-environment interaction.

Key words: agent-based simulation, long-term goals, local categorization


ACACIA: An agent-based program for simulating behavior to reach long-term goals

In recent years, several approaches have proposed that behavior emerges from the interaction of the organism with its environment (Bakker, 2000; Brooks, 1999; Holland, 1995; Maes, 1997; Meyer & Guillot, 1991). In fact, many complex global behavioral events emerge from decentralized, independent components that interact among themselves and with the local environment. Some examples include traffic jams (Resnick, 1994), coordinated group motion such as bird flocking (Reynolds, 1993) and herds (Werner & Dyer, 1992), pedestrian behavior (Schreckenberg & Sharma, 2002), and robots collecting objects (Maris & te Boekhorst, 1996). In all these systems, a set of local rules is organized in terms of the actions to be performed in order to respond to the circumstances of the immediate environment. These local rules, defined as sensory-motor couplings by Braitenberg (1984), guide the organism-environment interaction.

The aim of the agent-based simulation approach is to emulate the behavior of natural organisms in complex, dynamic environments. By creating an artificial agent able to perform certain behaviors in a virtual environment, it is possible to try to determine the internal mechanisms underlying these behaviors. We present ACACIA, an agent-based simulation program that simulates a multi-agent system where agents interact with their environment and other agents in order to reach long-term goals (Zibetti, Quera, Beltran & Tijus, 2001), which are defined here as places that are desirable for the agents and that may be some distance away. The program shows how the agents can reach a long-term goal based on a set of SMCs that controls each agent's local interaction with its environment and with other agents. The set of SMCs does not specify a global internal representation of the environment or a sequence of steps necessary to reach the goal; rather, it is a process of local categorization that determines how the agent relates locally with objects and other agents in its environment (Zibetti, Quera, Tijus & Beltran, 2001). Thus, through the SMCs the agent groups and differentiates the entities in its environment based both on their physical properties and on the task the agent must perform. In the next sections, we describe how the program works, summarize some previous results obtained with it, and present an example of a simulation experiment that illustrates ACACIA's abilities.

The ACACIA Program

ACACIA is implemented in StarLogo (Colella, Klopfer & Resnick, 2001; Resnick, 1994), a programmable environment designed to model multi-agent simulation systems. It was developed using the Java StarLogo 2.0 version, which runs on different operating systems, including Windows, Mac OS and Linux. The program simulates a discrete, two-dimensional microworld that can be either a torus or a closed space surrounded by walls. In both cases, the surface is divided into 50 x 50 square cells (or patches). The microworld contains three different kinds of entities: goals, obstacles, and agents (see Figure 1).
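To make the structure of the microworld concrete, the following Python sketch shows one possible representation of it: a 50 x 50 grid of patches that either wraps around as a torus or is bounded by walls, with goals and obstacles scattered at random. This is an illustrative reconstruction under stated assumptions, not the authors' StarLogo code; in particular, the single-patch obstacles here stand in for the contiguous obstacle areas described below.

```python
import random

GRID = 50  # the microworld is a 50 x 50 grid of patches


class Microworld:
    """Minimal sketch of the ACACIA microworld (illustrative, not the original StarLogo)."""

    def __init__(self, n_goals=5, n_obstacles=5, torus=True):
        self.torus = torus
        self.goals = set()       # patches occupied by goals (shown red)
        self.obstacles = set()   # patches occupied by obstacles (shown yellow)
        while len(self.goals) < n_goals:
            self.goals.add((random.randrange(GRID), random.randrange(GRID)))
        # single random patches here; the real program builds contiguous,
        # regular or irregular obstacle areas
        while len(self.obstacles) < n_obstacles:
            p = (random.randrange(GRID), random.randrange(GRID))
            if p not in self.goals:
                self.obstacles.add(p)

    def wrap(self, x, y):
        """Map coordinates back onto the surface: wrap on a torus, clamp inside walls."""
        if self.torus:
            return x % GRID, y % GRID
        return min(max(x, 0), GRID - 1), min(max(y, 0), GRID - 1)
```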

Goals

Goals are static entities that are sought by agents. Goals are shown red on the screen and each one occupies one patch, or location. When the simulation starts, goals are scattered randomly throughout the microworld. The number of goals can vary from 1 to 20 and is set by the user.

Obstacles

Obstacles are areas composed of many contiguous patches that cannot be occupied by agents and which agents cannot see through. Obstacles are shown yellow on the screen, and are randomly distributed throughout the microworld, their number (0 to 20) and shape (regular or irregular) being set by the user.

Agents

Agents have two-dimensional coordinates that specify their positions at time t, and headings that indicate the directions of their movements. An agent can move one cell or patch per time unit in any direction relative to its current position. An agent's heading is defined as the angle between the linear path that links its positions at times t-1 and t and the X, or horizontal, axis of the microworld. Initially, agents can be assigned either random headings or an identical heading for all of them. Agent coordinates are initially set at random. Agents can scan their neighborhood in order to identify different kinds of entities (goals, obstacles, and other agents) that they might encounter while exploring the environment. This mechanism has three parameters: (a) neighborhood radius, or depth of the agent's field of perception; (b) neighborhood angle, or width of the agent's field of perception; and (c) scan resolution, which specifies how precise the agent's perception is within its neighborhood. The higher the resolution, the greater the number of patches the agent can scan. The three parameters define a dynamic perceptual field in front of each agent so that only goals, other agents, and obstacles that lie in that field can be currently perceived by it.
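As a rough illustration of how the three perception parameters interact, the sketch below casts resolution scan lines of length radius patches, spread over angle degrees and centered on the agent's heading, so that higher resolution means more patches scanned. This is an assumed reading of the scanning mechanism, not the published StarLogo procedure, and it omits occlusion by obstacles.

```python
import math


def perceptual_field(x, y, heading, radius=10, angle=120, resolution=10):
    """Return the set of patches an agent could perceive (illustrative sketch).

    `resolution` scan lines are spread over `angle` degrees centered on
    `heading`; each line is walked outwards up to `radius` patches. Occlusion
    by obstacles is omitted here, although in ACACIA agents cannot see
    through obstacles.
    """
    patches = set()
    for i in range(resolution):
        # direction of the i-th scan line, in degrees relative to the heading
        offset = -angle / 2 + i * angle / max(resolution - 1, 1)
        theta = math.radians(heading + offset)
        for d in range(1, radius + 1):
            px = round(x + d * math.cos(theta))
            py = round(y + d * math.sin(theta))
            patches.add((px, py))
    return patches
```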

Depending on the entities currently detected by an agent, its "internal" status can change. Agent statuses are represented by different colors on the computer screen. Initially, at time t = 1, the status of all the agents is "explorer", as they explore the environment looking for goals, and they are shown green; eventually, explorer agents may turn magenta, brown or orange as they perceive other agents or obstacles. When an agent reaches a goal, its status changes to "rich", and it is shown blue. Rich agents do not react to other entities; instead, they keep moving straight ahead in a random direction from the patch where the previous goal was located until they encounter an obstacle or a wall, at which point they disappear from the microworld because they have already reached the goal. The reason why rich agents do not disappear as soon as they reach the goal is that we felt this new information could be used by the other agents to reach it as well. If different entities are simultaneously detected in the neighborhood, then goals have priority over agents, and agents have priority over obstacles.
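The status dynamics just described amount to a small state machine: an explorer becomes rich when it steps onto a goal, and a rich agent walks straight on until it is blocked and is then removed. The sketch below is a hypothetical rendering of that logic in Python; the argument names are assumptions, not identifiers from the ACACIA source.

```python
def update_status(status, position, goals, blocked_ahead):
    """Return the next status of an agent for one time step (illustrative sketch).

    status        -- "explorer" or "rich"
    position      -- the agent's current patch (x, y)
    goals         -- set of goal patches
    blocked_ahead -- True if the patch the agent would move onto is an obstacle or wall
    """
    if status == "explorer" and position in goals:
        return "rich"        # the agent reached a goal and is now shown blue
    if status == "rich" and blocked_ahead:
        return "removed"     # rich agents disappear when they meet an obstacle or wall
    return status            # otherwise the status is unchanged this step
```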

An explorer agent has five hierarchical SMCs that allow it to respond differently to the entities in the environment: (a) SMC-I: If an agent is exploring the environment and it detects a goal, then the agent sets the coordinates of the goal as its target location, and moves one patch forward; agents applying this SMC are shown magenta. (b) SMC-II: If an agent is exploring and it detects a rich agent in its neighborhood, then the former sets its heading opposite to that of the rich agent, then moving one patch forward; moving in a direction opposite to that of a rich agent may be a successful behavior because the rich agent is coming from a goal; agents applying SMC-II are shown orange. (c) SMC-III: If an agent is exploring and detects another explorer agent in its neighborhood, then the former first checks whether their headings are similar (within a tolerance limit defined by the user) and, if so, it sets its new heading so that it is the same as that of the latter, and then moves one patch forward; moving in the same direction as another explorer agent that is ahead may be a successful behavior because the latter might have already seen a goal and be heading toward it; agents applying SMC-III are shown brown. (d) SMC-IV: If an obstacle or a wall is detected, the agent first checks whether it had already detected an obstacle or a wall in the previous simulation step. If not, it changes its heading 90º randomly to the left or to the right; if an obstacle or a wall was detected in the previous step, then the agent changes its heading 90º to the left if it had previously changed it to the left, or 90º to the right if it had previously changed it to the right; in both cases, the agent then moves one patch forward, provided there is no obstacle or wall in the patch to be occupied. Finally, (e) SMC-V: If no entity is detected, the agent simply moves forward one patch following its heading and continues exploring; agents applying SMC-V are shown green.
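Because the five couplings form a strict hierarchy, they can be summarized as an ordered rule table. The Python sketch below encodes that ordering (goal first, then rich agent, then similar-heading explorer, then obstacle or wall, then plain exploration); the Percept structure and the function signature are illustrative assumptions, and the 90º turn of SMC-IV and the final one-patch move are left to the caller.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Percept:
    """What an explorer agent sees this step (illustrative structure)."""
    goal: Optional[tuple] = None              # coordinates of a detected goal
    rich_heading: Optional[float] = None      # heading of a detected rich agent
    explorer_heading: Optional[float] = None  # heading of a detected explorer agent
    obstacle_or_wall: bool = False


def choose_smc(heading, percept, tolerance=15.0):
    """Return (smc_name, new_heading, color) for an explorer agent.

    The order of the tests encodes the priority of the five SMCs.
    """
    if percept.goal is not None:
        return "SMC-I", heading, "magenta"            # head toward the goal's coordinates
    if percept.rich_heading is not None:
        return "SMC-II", (percept.rich_heading + 180) % 360, "orange"
    if (percept.explorer_heading is not None
            and abs(percept.explorer_heading - heading) <= tolerance):
        return "SMC-III", percept.explorer_heading, "brown"
    if percept.obstacle_or_wall:
        return "SMC-IV", heading, None                # caller turns 90º left or right
    return "SMC-V", heading, "green"
```

A caller would apply the returned heading, perform the SMC-IV turn when indicated, and then move the agent one patch forward if the destination patch is free.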

The user can selectively turn SMCs II, III and IV on or off for all explorer agents while the simulation is running. If the three SMCs are set on and an explorer agent simultaneously meets a rich agent, an explorer agent and an obstacle, SMC-II has priority over SMC-III and SMC-IV.

Running ACACIA

When the program is run, two StarLogo windows are displayed on the screen: one contains the procedures coded in the StarLogo language and the other shows the ACACIA virtual microworld. Buttons and sliders for controlling simulation parameters are also shown (see Figure 1). Other windows are: (a) an output window that shows a summary of the parameters and the values of some specific dependent variables as the simulation progresses (e.g., the percentage of agents reaching the goal, the percentage of agents acting in accordance with SMC-III, and so on); (b) a graphical window showing those dependent variables as time series; and (c) an information window that provides details of the main features of the simulator and how it works. When the simulation finishes, a dialog window pops up to save the contents of the output window as a file.

The simulation dynamics vary depending on the microworld parameters. In previous experiments, we found that when the agents' perceptual ability was limited by defining a neighborhood radius equal to 10 patches, setting SMC-III on enabled agents to reach the goal. Moreover, a collective searching behavior emerged, whereby agents followed each other. Note that no specific SMC defined such collective searching behavior for the agents (Miñano & Beltran, 2004). We also found that SMC-II (whereby agents headed in the opposite direction when they met rich agents) also increased the probability of reaching the goal. As more agents reached the goal, there were more opportunities for the other agents to see them and to act in accordance with SMC-II. Thus, the activation of SMC-II produced an emergent global migration of the agents to the goal. Nevertheless, this collective migration was not specified in an SMC (Miñano & Zibetti, 2005).

In summary, we observed that the limited perceptual features of the agents were compensated for by a collective behavior emerging from the activation of SMC-II and/or SMC-III. As in other computer models (Couzin, Krause, James, Ruxton & Franks, 2002; Epstein & Axtell, 1996; Hemelrijk, 1996, 2003), local interaction among agents results in cognitive optimization, i.e., "collaboration" among individuals enables agents to create global patterns of collective behavior that allow every individual agent to achieve its adaptive aims (Kennedy & Eberhart, 2001). Based on our previous results, and in order to illustrate ACACIA's features, we tried to find out whether setting SMC-III on also compensated for the disadvantage of having a narrower neighborhood angle.

Method

We performed a series of simulations. For each simulation, we set 1 goal and 50 randomly distributed agents, with a neighborhood radius of 10 patches and a scan resolution of 10 scan lines. We systematically varied the independent variables according to a three-factor design: 2 (SMC-III on or off) x 2 (neighborhood angle: 120º or 180º) x 2 (0 or 5 obstacles). 160 independent simulations were run for each design cell, thus there were 1280 simulations in all. We measured the percentage of agents that reached the goal after 400 simulation steps as the dependent variable.
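The design can be driven with a simple loop over the crossed factors. The sketch below assumes a hypothetical run_simulation callable that performs one ACACIA run with the given settings and returns the percentage of agents at the goal after 400 steps; everything else (file name, column names) is likewise illustrative rather than part of the program.

```python
import csv
import itertools


def run_experiment(run_simulation, out_path="acacia_results.csv"):
    """Cross SMC-III (on/off), neighborhood angle and obstacles, 160 runs per cell.

    `run_simulation` is a hypothetical callable standing in for one ACACIA run;
    it is expected to return the percentage of agents at the goal after 400 steps.
    """
    factors = {
        "smc3_on": [True, False],
        "angle": [120, 180],
        "obstacles": [0, 5],
    }
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["smc3_on", "angle", "obstacles", "replicate", "pct_reached"])
        for smc3, angle, obst in itertools.product(*factors.values()):
            for rep in range(160):                       # 160 runs per design cell
                pct = run_simulation(smc3_on=smc3, angle=angle, obstacles=obst,
                                     n_agents=50, n_goals=1, radius=10,
                                     resolution=10, steps=400)
                writer.writerow([smc3, angle, obst, rep, pct])
    # 2 x 2 x 2 cells x 160 replicates = 1280 simulations in all
```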

Results and Discussion

An analysis of variance was performed on the percentage of agents that reached the goal. The results showed statistically significant effects for the three main factors: (a) when the neighborhood angle was set to 180º, a greater percentage of agents reached the goal than when it was set to 120º (M = 40.32 and M = 34.54, respectively; F(1, 1272) = 45.45, p < .0001); (b) a higher percentage of agents reached the goal when SMC-III was on than when it was off (M = 41.84 and M = 33.02, respectively; F(1, 1272) = 105.78, p < .0001); and (c) the presence of obstacles decreased the percentage of agents reaching the goal, compared with when there were no obstacles (M = 32.01 and M = 42.85, respectively; F(1, 1272) = 159.78, p < .0001). The analysis of variance also indicated a statistically significant interaction between neighborhood angle and number of obstacles (F(1, 1272) = 5.20, p < .05), but no significant interactions between neighborhood angle and SMC-III, between SMC-III and number of obstacles, or among the three factors.
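For readers who wish to reproduce this kind of analysis on saved simulation output, a three-way ANOVA on the per-run percentages can be fitted along the following lines; the sketch uses pandas and statsmodels, and the file and column names match the hypothetical CSV of the previous sketch rather than anything produced by ACACIA itself.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# one row per simulation run: smc3_on, angle, obstacles, replicate, pct_reached
df = pd.read_csv("acacia_results.csv")

# full-factorial model: three main effects plus all two- and three-way interactions
model = smf.ols(
    "pct_reached ~ C(smc3_on) * C(angle) * C(obstacles)", data=df
).fit()
print(anova_lm(model, typ=2))   # F and p values for each effect and interaction
```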

The results show that a neighborhood angle of 180º increased the agents' chances of reaching the goal, even when the complexity of the environment was increased to 5 obstacles. Setting SMC-III on also resulted in an increase in the number of agents reaching the goal. Nevertheless, contrary to what we expected, the perceptual disadvantage of the agents with a neighborhood angle of 120º was not compensated for by setting SMC-III on. However, some results of this simulation experiment confirmed previous findings, specifically the benefit of endowing the agents with SMC-III (Miñano & Beltran, 2004).

Final Comments

We have shown that in some cases it is possible to reach a long-term goal through the collective behavior that emerges from a set of sensorial-motor couplings, and it is not necessary for the agent to generate an overall representation of its environment. Thus, self-organized cognition based on a set of sensorial-motor couplings could be a promising way to implement complex behavior and reasoning. Therefore, in a future version, and in order to improve the performance of the ACACIA agents, they should build on their knowledge through learning (as they would initially be naïve about their environment); this could be made possible, for example, by implementing a learning classifier system in each agent (Holland, 1995). Other features to be included in future versions are individual differences in the agents' learning and perception, perception errors (e.g., agents could mistake goals for obstacles) and inter-agent communication.

Availability

ACACIA can be downloaded from www.ub.es/comporta/gcai/Paginas/gcai_Downloads.htm. To run it on Windows, the Java Runtime Environment and StarLogo 2.21 must be preinstalled. StarLogo 2.21 can be downloaded from http://education.mit.edu/starlogo/.


References

Bakker, B. (2000). The adaptative behavior approach to psychology. Cognitive Processing, 1, 39-70.

Braitenberg, V. (1984). Vehicles: Experiments in synthetic psychology. Cambridge, Mass.: MIT Press.

Brooks, R. A. (1999). Cambrian intelligence: The early history of the new AI. Cambridge, Mass.: Bradford Books/MIT Press.

Colella, V., Klopfer, E. & Resnick, M. (2001). Adventures in modeling: Exploring complex, dynamic systems with StarLogo. New York: Teachers College Press.

Couzin, I. D., Krause, J., James, R., Ruxton, G. D. & Franks, N. R. (2002). Collective memory and spatial sorting in animal groups. Journal of Theoretical Biology, 218, 1-11.

Epstein, J. M. & Axtell, R. (1996). Growing artificial societies: Social science from the bottom up. Washington, DC: Brookings; Cambridge, Mass.: MIT Press.

Hemelrijk, C. K. (1996). Dominance interactions, spatial dynamics and emergent reciprocity in a virtual world. In P. Maes, M. Mataric, J. A. Meyer, J. Pollack & S. W. Wilson (Eds.), From animals to animats: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior (pp. 545-552). Cambridge, Mass.: MIT Press.

Hemelrijk, C. K. (2003). Female co-dominance in a virtual world: Ecological, cognitive, social and sexual causes. Behaviour, 140, 1247-1273.

Holland, J. H. (1995). Hidden order: How adaptation builds complexity. Reading, Mass.: Perseus Books.

Kennedy, J. & Eberhart, R. C. (2001). Swarm intelligence. San Francisco: Morgan Kaufmann.

Maes, P. (1997). Modeling adaptive autonomous agents. In C. G. Langton (Ed.), Artificial life: An overview (pp. 135-162). Cambridge, Mass.: MIT Press.

Maris, M. & te Boekhorst, R. (1996). Exploiting physical constraints: Heap formation through behavioral error in a group of robots. In M. Asada (Ed.), Proceedings of the International Conference on Intelligent Robots and Systems (pp. 1655-1660). Osaka, Japan: Senri Life Science Center.

Meyer, J. A. & Guillot, A. (1991). Simulation of adaptive behavior in animats: Review and prospect. In J. A. Meyer & S. W. Wilson (Eds.), From animals to animats 1: Proceedings of the First International Conference on Simulation of Adaptive Behavior (pp. 2-14). Cambridge, Mass.: Bradford Books/MIT Press.

Miñano, M. & Beltran, F. S. (2004). Reaching long-term goals based on local interaction between the organism and its environment: Computer simulations based on adaptive behavior. Perceptual and Motor Skills, 99, 27-33.

Miñano, M. & Zibetti, E. (2005). Reaching long-term goals based on local categorization. Unpublished manuscript, Université de Paris-8.

Resnick, M. (1994). Turtles, termites and traffic jams: Explorations in massively parallel microworlds. Cambridge, Mass.: MIT Press.

Reynolds, C. W. (1993). An evolved, vision-based behavioral model of coordinated group motion. In J. A. Meyer, H. L. Roitblat & S. W. Wilson (Eds.), From animals to animats 2: Proceedings of the Second International Conference on Simulation of Adaptive Behavior (pp. 384-392). Cambridge, Mass.: MIT Press.

Schreckenberg, M. & Sharma, S. D. (Eds.) (2002). Pedestrian and evacuation dynamics. New York: Springer.

Werner, G. M. & Dyer, M. G. (1992). Evolution of herding behaviour in artificial animals. In J. A. Meyer, H. L. Roitblat & S. W. Wilson (Eds.), From animals to animats 2: Proceedings of the Second International Conference on Simulation of Adaptive Behavior (pp. 393-399). Cambridge, Mass.: MIT Press.

Zibetti, E., Quera, V., Beltran, F. S. & Tijus, C. (2001). Contextual categorization: A mechanism linking perception and knowledge in modelling and simulating perceived events as actions. In V. Akman, P. Bouquet, R. Thomason & R. A. Young (Eds.), Modeling and using context (pp. 395-408). Berlin: Springer.

Zibetti, E., Quera, V., Tijus, C. & Beltran, F. S. (2001). Reasoning based on categorisation for interpreting and acting: A first approach. Mind & Society, 4(2), 89-106.


Figure 1. The ACACIA screen, displaying the entities in the microworld and the sliders that allow the user to manipulate the simulation parameters.

