Silvia Rossi

Academic year: 2021

(1)

Silvia Rossi

Agent Architectures

Lecture no. 3

Degree programme: Informatica
Course (Insegnamento): Sistemi multi-agente
Email: [email protected]
Academic year (A.A.) 2014-2015

(2)

3-2

Agent Architectures

• An agent is a computer system capable of flexible autonomous action…
• Issues one needs to address in order to build agent-based systems…
• Three types of agent architecture:
  – symbolic/logical
  – reactive
  – hybrid

(3)

3-3

Agent Architectures

• Originally (1956-1985), pretty much all agents designed within AI were symbolic reasoning agents
• Its purest expression proposes that agents use explicit logical reasoning in order to decide what to do
• Problems with symbolic reasoning led to a reaction against this: the so-called reactive agents movement, 1985–present
• From 1990 to the present, a number of alternatives have been proposed: hybrid architectures, which attempt to combine the best of reasoning and reactive architectures

(4)

3-4

Symbolic Reasoning Agents

• The classical approach to building agents is to view them as a particular type of knowledge-based system, and bring all the associated (discredited?!) methodologies of such systems to bear
• This paradigm is known as symbolic AI
• We define a deliberative agent or agent architecture to be one that:
  – contains an explicitly represented, symbolic model of the world
  – makes decisions (for example about what actions to perform) via symbolic reasoning

(5)

3-5

Symbolic Reasoning Agents

• If we aim to build an agent in this way, there are two key problems to be solved:
  1. The transduction problem: translating the real world into an accurate, adequate symbolic description, in time for that description to be useful… vision, speech understanding, learning
  2. The representation/reasoning problem: how to symbolically represent information about complex real-world entities and processes, and how to get agents to reason with this information in time for the results to be useful… knowledge representation, automated reasoning, automatic planning

(6)

3-6

Symbolic Reasoning Agents

• Most researchers accept that neither problem is anywhere near solved
• The underlying problem lies with the complexity of symbol manipulation algorithms in general: many (most) search-based symbol manipulation algorithms of interest are highly intractable
• Because of these problems, some researchers have looked to alternative techniques for building agents; we look at these later

(7)

3-7

Deductive Reasoning Agents

• How can an agent decide what to do using theorem proving?
• The basic idea is to use logic to encode a theory stating the best action to perform in any given situation
• Let:
  – ρ be this theory (typically a set of rules)
  – Δ be a logical database that describes the current state of the world
  – Ac be the set of actions the agent can perform
  – Δ ⊢ρ φ mean that φ can be proved from Δ using ρ

(8)

Agents as Theorem Provers II

[Figure: the agent embedded in its environment, with sensor input and action output; internal functions see, next, and action.]

(9)

3-9

Deductive Reasoning Agents

/* try to find an action explicitly prescribed */
for each a ∈ Ac do
    if Δ ⊢ρ Do(a) then
        return a
    end-if
end-for
/* try to find an action not excluded */
for each a ∈ Ac do
    if Δ ⊬ρ ¬Do(a) then
        return a
    end-if
end-for
return null /* no action found */
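The loop above can be sketched in Python. This is a minimal illustration, not a real theorem prover: the database Δ is a set of ground facts, the theory ρ is a list of (premise-set, conclusion) rules applied by naive forward chaining, and `¬Do(a)` is crudely encoded as a `("NotDo", a)` fact; all of these names are illustrative assumptions.

```python
# Sketch of the deductive action-selection loop. The "prover" is a toy
# stand-in: proves(delta, rho, phi) forward-chains the rules over the
# fact set and tests membership. Names (proves, rho, NotDo) are
# illustrative, not from a real library.

def proves(delta, rho, phi):
    """Forward-chain the rules rho over the facts delta, then test phi."""
    derived = set(delta)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rho:
            if premise <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return phi in derived

def select_action(delta, rho, Ac):
    # 1. try to find an action explicitly prescribed
    for a in Ac:
        if proves(delta, rho, ("Do", a)):
            return a
    # 2. try to find an action that is at least not excluded
    for a in Ac:
        if not proves(delta, rho, ("NotDo", a)):
            return a
    return None  # no action found

# Tiny example: if there is dirt here, the rules prescribe "suck".
rho = [(frozenset({("Dirt", 0, 0)}), ("Do", "suck"))]
delta = {("Dirt", 0, 0)}
print(select_action(delta, rho, ["turn", "forward", "suck"]))  # -> suck
```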

(10)

3-10

Deductive Reasoning Agents

• An example: The Vacuum World
• Goal is for the robot to clear up all dirt

(11)

3-11

Deductive Reasoning Agents

• Use 3 domain predicates to solve the problem:
  – In(x, y): the agent is at (x, y)
  – Dirt(x, y): there is dirt at (x, y)
  – Facing(d): the agent is facing direction d
• Possible actions:
  Ac = {turn, forward, suck}
  (P.S. turn means “turn right”)
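As a sketch of how rules over these predicates might select an action: the slides elide the actual rule set, so the rule bodies below ("suck when on dirt, otherwise move") are an illustrative guess, and the percept encoding is an assumption.

```python
# Illustrative vacuum-world rules over the domain predicates In, Dirt,
# Facing. The actual rule set is elided in the slides; this is a guess
# at its shape, not the original rules.

def vacuum_rules(percepts):
    """Map the symbolic state description to a prescribed action."""
    in_xy = percepts["In"]        # In(x, y): current position
    facing = percepts["Facing"]   # Facing(d): current heading
    dirt = percepts["Dirt"]       # set of (x, y) cells where Dirt holds

    if in_xy in dirt:
        return "suck"             # In(x, y) ∧ Dirt(x, y) → Do(suck)
    # ...further rules would steer a lawnmower-style sweep of the grid,
    # e.g. go forward along a row and turn at the edges.
    return "forward"

state = {"In": (0, 0), "Facing": "north", "Dirt": {(0, 0), (1, 2)}}
print(vacuum_rules(state))  # -> suck
```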

(12)

3-12

Deductive Reasoning Agents

• Rules ρ for determining what to do:
• …and so on!
• Using these rules (+ other obvious ones), starting at (0, 0) the robot will clear up the dirt

(13)

3-13

Deductive Reasoning Agents

• Problems:
  – How to convert video camera input to Dirt(0, 1)?
  – decision making assumes a static environment: calculative rationality
  – decision making using first-order logic is undecidable!
  – even where we use propositional logic, decision making in the worst case means solving co-NP-complete problems (PS: co-NP-complete = bad news!)
• Typical solutions:
  – weaken the logic
  – use symbolic, non-logical representations
  – shift the emphasis of reasoning from run time to design time
• We will look at some examples of these approaches

(14)

3-14

More Problems…

• The “logical approach” that was presented implies adding and removing things from a database
• That’s not pure logic
• Early attempts at creating a “planning agent” tried to use true logical deduction to solve the problem

(15)

3-15

Planning Systems (in general)

• Planning systems find a sequence of actions that transforms an initial state into a goal state

[Figure: a sequence of actions a1, a17, a142 leading from initial state I to goal state G.]

(16)

3-16

Planning

• Planning involves issues of both Search and Knowledge Representation
• Sample planning systems:
  – plan as a search problem
  – robot planning (STRIPS)
  – planning of speech acts

(17)

3-17

AGENT0 and PLACA

• Much of the interest in agents from the AI community has arisen from Shoham’s notion of agent-oriented programming (AOP)
• AOP is a ‘new programming paradigm, based on a societal view of computation’
• The key idea that informs AOP is that of directly programming agents in terms of intentional notions like belief, commitment, and intention
• The motivation behind such a proposal is that we humans use the intentional stance as an abstraction mechanism for representing the properties of complex systems. In the same way that we use the intentional stance to describe humans, it might be useful to use the intentional stance to program machines.

(18)

3-18

AGENT0

• Shoham suggested that a complete AOP system will have 3 components:
  – a logic for specifying agents and describing their mental states
  – an interpreted programming language for programming agents
  – an ‘agentification’ process, for converting ‘neutral applications’ (e.g., databases) into agents
• Results were only reported on the first two components
• The relationship between the logic and the programming language is one of semantics
• We will skip over the logic(!), and consider the first AOP language, AGENT0

(19)

3-19

AGENT0

• AGENT0 is implemented as an extension to LISP
• Each agent in AGENT0 has 4 components:
  – a set of capabilities (things the agent can do)
  – a set of initial beliefs
  – a set of initial commitments (things the agent will do)
  – a set of commitment rules
• The key component, which determines how the agent acts, is the commitment rule set

(20)

3-20

AGENT0

• Each commitment rule contains
  – a message condition
  – a mental condition
  – an action
• On each ‘agent cycle’…
  – the message condition is matched against the messages the agent has received
  – the mental condition is matched against the beliefs of the agent
  – if the rule fires, then the agent becomes committed to the action (the action gets added to the agent’s commitment set)
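The agent cycle just described can be sketched as follows. AGENT0 itself is a LISP extension; this Python rendering (the `CommitmentRule` and `Agent` classes, predicate-based conditions, a dict-shaped message) is an illustrative reconstruction of the cycle, not the real language.

```python
# A minimal sketch of the AGENT0 agent cycle: match each commitment
# rule's message condition against incoming messages and its mental
# condition against the belief set; if both match, the rule fires and
# the action joins the commitment set. All names here are illustrative.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CommitmentRule:
    message_condition: Callable[[dict], bool]  # matched against a received message
    mental_condition: Callable[[set], bool]    # matched against the agent's beliefs
    action: str                                # action committed to if the rule fires

@dataclass
class Agent:
    beliefs: set
    rules: list
    commitments: list = field(default_factory=list)

    def cycle(self, inbox):
        """One 'agent cycle' over the received messages."""
        for msg in inbox:
            for rule in self.rules:
                if rule.message_condition(msg) and rule.mental_condition(self.beliefs):
                    self.commitments.append(rule.action)  # rule fires

# Example: commit to 'send-report' when a friend requests it and we can do it.
rule = CommitmentRule(
    message_condition=lambda m: m["type"] == "REQUEST" and m["action"] == "send-report",
    mental_condition=lambda b: ("friend", "alice") in b and ("can", "send-report") in b,
    action="send-report",
)
agent = Agent(beliefs={("friend", "alice"), ("can", "send-report")}, rules=[rule])
agent.cycle([{"type": "REQUEST", "sender": "alice", "action": "send-report"}])
print(agent.commitments)  # -> ['send-report']
```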

(21)

3-21

AGENT0

• Actions may be
  – private: an internally executed computation, or
  – communicative: sending messages
• Messages are constrained to be one of three types:
  – “requests” to commit to action
  – “unrequests” to refrain from actions
  – “informs” which pass on information

(22)

3-22

AGENT0

(23)

3-23

AGENT0

• A commitment rule:

COMMIT(
  ( agent, REQUEST, DO(time, action) ),   ;;; message condition
  ( B,
    [now, Friend agent] AND
    CAN(self, action) AND
    NOT [time, CMT(self, anyaction)]
  ),                                      ;;; mental condition
  self,
  DO(time, action)
)

(24)

3-24

AGENT0

• This rule may be paraphrased as follows:
  if I receive a message from agent which requests me to do action at time, and I believe that:
  – agent is currently a friend
  – I can do the action
  – at time, I am not committed to doing any other action
  then commit to doing action at time

(25)

Planning as Problem Solving

(RN: 3.1, 3.2 and 3.6)

25

(26)

Assumptions

• The environment, or world, can at every instant be represented by a state.
• A goal is a set of world states: exactly those states of the world in which the goal is satisfied.
• Actions can be seen as realizing transitions from state to state.

(27)

Solution

• Solving a problem will mean finding sequences of actions that, when executed, transform the initial state into the final state.
• Problem: do the actions remain executable once they have been planned?

(28)

Finding the solution

• This is the task of the search process.
• Here we will examine solutions as linear sequences of actions, but bear in mind that there are problems whose solutions involve choice points and hence branching paths.
  Example: automatic program synthesis, or problems in non-deterministic environments

(29)

29 Example: Romania

• On holiday in Romania; currently in Arad.
• Formulate goal:
  – be in Bucharest
• Formulate problem:
  – states: various cities
  – actions: drive between cities
• Find solution:
  – sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest

(30)

30 Example: Romania

[Figure: road map of Romania.]

(31)

Problem-solving agents

Formulate -> Search -> Execute

While it is executing the sequence, the agent ignores its percepts. 31

(32)

32

15-Puzzle

Introduced (?) in 1878 by Sam Loyd, who dubbed himself “America’s greatest puzzle-expert”

(33)

33

15-Puzzle

Sam Loyd offered $1,000 of his own money to the first person who would solve the following problem:

[Figure: two board configurations that differ only in the positions of tiles 14 and 15; can the first be transformed into the second?]

(34)

But no one ever won the prize! 34

(35)

35

(36)

8-Queens Problem

Place 8 queens on a chessboard so that no two queens are in the same row, column, or diagonal.

36

[Figure: two boards, a solution and a non-solution.]

(37)

Formulation #1

• States: all arrangements of 0, 1, 2, ..., 8 queens on the board
• Initial state: 0 queens on the board
• Successor function: each of the successors is obtained by adding one queen in an empty square
• Arc cost: irrelevant
• Goal test: 8 queens are on the board, with no queens attacking each other

→ ~64×63×...×57 ≈ 3×10^14 states 37

(38)

Formulation #2

• States: all arrangements of k = 0, 1, 2, ..., 8 queens in the k leftmost columns with no two queens attacking each other
• Initial state: 0 queens on the board
• Successor function: each successor is obtained by adding one queen in any square that is not attacked by any queen already on the board, in the leftmost empty column
• Arc cost: irrelevant
• Goal test: 8 queens are on the board

→ 2,057 states 38
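The 2,057-state count for Formulation #2 can be checked with a short enumeration: place queens column by column in the leftmost columns, never attacking each other, and count every reachable arrangement (k = 0..8 queens). This is a quick sketch, not part of the original slides.

```python
# Count the states of Formulation #2: all non-attacking arrangements of
# k queens in the k leftmost columns, for k = 0..8.

def count_states(n=8):
    total = 0

    def extend(rows):                   # rows[c] = row of the queen in column c
        nonlocal total
        total += 1                      # the current (partial) arrangement is a state
        col = len(rows)
        if col == n:
            return
        for r in range(n):
            # reject same row or same diagonal as any placed queen
            if all(r != pr and abs(r - pr) != col - pc
                   for pc, pr in enumerate(rows)):
                extend(rows + [r])

    extend([])
    return total

print(count_states())  # -> 2057, matching the slide
```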

(39)

n-Queens Problem

• A solution is a goal node, not a path to this node (typical of a design problem) (CSP)
• Number of states in the state space:
  – 8 queens → 2,057
  – 100 queens → ~10^52
• But techniques exist to solve n-queens problems efficiently for large values of n. They exploit the fact that there are many solutions well distributed in the state space

39

(40)

Path Planning

40

What is the state space?

(41)

Formulation #1

41

Cost of one horizontal/vertical step = 1; cost of one diagonal step = √2

(42)

Optimal Solution

42

This path is the shortest in the discretized state space, but not in the original continuous space

(43)

Formulation #2

43

sweep-line

(44)

Formulation #2

44

(45)

States

45

(46)

Successor Function

46

(47)

Solution Path

47

A path-smoothing post-processing step is usually needed to shorten the path further

(48)

Essential elements of the search process

Note the difference between the search tree and the state space:

• the former is made up of “nodes”, the latter of states;
• in the former, several nodes (potentially infinitely many) may correspond to the same state in the state space;
• if the same states can be revisited, the search tree can be infinite even when the state space is finite.

48

(49)

A data structure for the node

A node is made up of 5 components:

• the state it corresponds to
• the node that generated it
• the operator that generated it
• its depth
• the cost of the solution path passing through it

49
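The 5-component node can be sketched directly as a small class. This is an illustrative rendering (the `child` and `path` helpers, and the Romania distances used in the example, are assumptions for demonstration).

```python
# The 5-component search node as a Python dataclass, plus two helpers:
# child() applies an operator, path() walks back to the root.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    state: object                     # the state this node corresponds to
    parent: Optional["Node"] = None   # the node that generated it
    action: Optional[str] = None      # the operator that generated it
    depth: int = 0                    # depth in the search tree
    path_cost: float = 0.0            # cost of the path from the root

    def child(self, state, action, step_cost):
        """Generate a successor node via the given operator."""
        return Node(state, self, action, self.depth + 1,
                    self.path_cost + step_cost)

    def path(self):
        """Recover the solution path by walking back to the root."""
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))

root = Node("Arad")
n = root.child("Sibiu", "drive", 140).child("Fagaras", "drive", 99)
print(n.path(), n.depth, n.path_cost)  # -> ['Arad', 'Sibiu', 'Fagaras'] 2 239.0
```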

(50)

Search Algorithm

50

(51)

Properties of search strategies

Completeness: guaranteed to find a solution if one exists (remember that the problem we are talking about is the particular formulation of the problem we have chosen!)

Optimality: the solution found is the “best” according to some criterion; in search strategies the criterion is usually implemented as a “cost” function.

51

(52)

Properties of search strategies

Space complexity: how much memory is required to carry out the search.
Time complexity: how long it takes to find the solution (number of nodes generated).

b: maximum branching factor (maximum number of successors of any state)
d: depth of the shallowest goal node
m: maximum depth of the state space (may be ∞)

52

(53)

Properties of search strategies

They often conflict with one another: for example, completeness and complexity.

53

(54)

Properties of search strategies

Total cost and rationality: a trade-off between the cost of finding the solution and the cost of the solution itself!

54

(55)

Types of search strategies

A distinction is drawn between “uninformed” (or “blind”) strategies and strategies “that use information”, i.e. “heuristic” strategies.

Nowadays the term “heuristic” means any element that improves the average behaviour of an AI system; it does not necessarily improve the worst cases.

55

(56)

Heuristics

In search algorithms it means an “evaluation function”, i.e. a function that gives an estimate of the “cost of a solution”.

Main distinction: general-purpose heuristics versus problem-specific heuristics.

56

(57)

Example

57

For a blind strategy, N1 and N2 are equivalent.

[Figure: the goal state and two search nodes N1 and N2 with different 8-puzzle configurations.]

(58)

Example

58

For a strategy with a heuristic (e.g. the number of misplaced tiles), N2 is more promising than N1.

[Figure: the goal state and the two search nodes N1 and N2.]

(59)

Comparison of the strategies

59

• Breadth-first is complete and optimal, but has a large space complexity;
• Depth-first is space-efficient, but is neither complete nor optimal;
• Iterative deepening is complete and optimal, with the same space complexity as depth-first and roughly the same time complexity as breadth-first.

(60)

Problem Solving: Heuristic Search

(RN: 4.1, 4.2)

60

(61)

Outline

Best-first search
• Greedy best-first search
• A* search

Heuristics

61

(62)

Best-first search

Idea: use an evaluation function f(n) for each node, an estimate of “desirability”

[Traditionally, f(N) is an estimated cost; so, the smaller f(N), the more promising N]

• Expand the most desirable unexpanded node

Implementation: order the nodes in the fringe in decreasing order of desirability (a priority queue)

[An arbitrary order is assumed among nodes with equal f]

Special cases:
• greedy best-first search
• A* search

62
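The priority-queue implementation can be sketched with `heapq`. This is a minimal illustration under assumed names (`best_first`, `successors`, the toy graph and heuristic values below are not from the slides); with f = h it behaves greedily, and with f = g + h it becomes A*.

```python
# Best-first search with a priority queue ordered by the evaluation
# function f: the most desirable (smallest-f) node is expanded first.

import heapq

def best_first(start, goal, successors, f):
    """successors(state) -> iterable of next states; f(path) -> priority."""
    fringe = [(f([start]), [start])]
    visited = set()
    while fringe:
        _, path = heapq.heappop(fringe)   # smallest f = most promising
        state = path[-1]
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in successors(state):
            if nxt not in visited:
                new_path = path + [nxt]
                heapq.heappush(fringe, (f(new_path), new_path))
    return None

# Toy graph: with f = h, search greedily follows the heuristic through C.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
print(best_first("A", "D", graph.get, lambda p: h[p[-1]]))  # -> ['A', 'C', 'D']
```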

(63)

Romania with step costs in km

63

(64)

Greedy best-first search

Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to the goal

e.g., hSLD(n) = straight-line distance from n to Bucharest

Greedy best-first search expands the node that appears to be closest to the goal

64

(65)

Greedy best-first search example

65

(66)

Greedy best-first search example

66

(67)

Greedy best-first search example

67

(68)

Greedy best-first search example

68

(69)

Properties of greedy best-first search

Complete? No – it can get stuck in loops
Time? O(b^m) in the worst case, but a good heuristic can give dramatic improvement
Space? O(b^m) – keeps all nodes in memory
Optimal? No

69

(70)

Heuristic Function

• The heuristic function h(N) ≥ 0 estimates the cost to go from STATE(N) to a goal state.
  Its value is independent of the current search tree; it depends only on STATE(N) and the goal test GOAL?
• Example:
  h1(N) = number of misplaced numbered tiles
  [Why is it an estimate of the distance to the goal?]

70

(71)

Admissible heuristics

E.g., for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile)

h1(S) = ?
h2(S) = ?

71

(72)

Admissible heuristics

E.g., for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile)

h1(S) = 8
h2(S) = 3+1+2+2+2+3+3+2 = 18

72
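Both heuristics are easy to compute. The state S itself is only shown in the slide figure, so the sketch below assumes the standard textbook example state (start 7 2 4 / 5 _ 6 / 8 3 1, goal _ 1 2 / 3 4 5 / 6 7 8), which yields the same values h1 = 8 and h2 = 18.

```python
# Computing h1 (misplaced tiles) and h2 (total Manhattan distance) for
# an 8-puzzle state, flattened row by row; 0 denotes the blank.

S    = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # assumed example state
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state):
    """Number of misplaced numbered tiles (the blank is not counted)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    """Sum of Manhattan distances of every numbered tile to its goal square."""
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        g = GOAL.index(t)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

print(h1(S), h2(S))  # -> 8 18
```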

(73)

8-Puzzle

73

f(N) = h(N) = number of misplaced numbered tiles

[Figure: a tree of 8-puzzle states, each annotated with its h value.]

(74)

8-Puzzle

74

f(N) = h(N) = Σ distances of numbered tiles to their goals

[Figure: a tree of 8-puzzle states annotated with Manhattan-distance values.]

(75)

Robot Navigation

75

h1(N) = √((xN − xg)² + (yN − yg)²)  (L2 or Euclidean distance)
h2(N) = |xN − xg| + |yN − yg|  (L1 or Manhattan distance)

(76)

Best-First Efficiency

76

f(N) = h(N) = straight-line distance to the goal

Local-minimum problem

(77)

A* search

Idea: avoid expanding paths that are already expensive

Evaluation function f(n) = g(n) + h(n)
g(n) = cost so far to reach n
h(n) = estimated cost from n to goal
f(n) = estimated total cost of the path through n to the goal

77
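A* with f = g + h can be sketched on a fragment of the Romania map (standard road distances and straight-line-distance values to Bucharest; only a subset of the map is included here, which is an assumption of the sketch).

```python
# A* search, f(n) = g(n) + h(n), on a fragment of the Romania road map
# with the straight-line-distance heuristic to Bucharest.

import heapq

roads = {
    "Arad": {"Zerind": 75, "Timisoara": 118, "Sibiu": 140},
    "Zerind": {"Arad": 75, "Oradea": 71},
    "Oradea": {"Zerind": 71, "Sibiu": 151},
    "Timisoara": {"Arad": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {},
}
h_sld = {"Arad": 366, "Zerind": 374, "Oradea": 380, "Timisoara": 329,
         "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Bucharest": 0}

def a_star(start, goal):
    fringe = [(h_sld[start], 0, [start])]   # (f, g, path)
    best_g = {}
    while fringe:
        f, g, path = heapq.heappop(fringe)  # smallest f = g + h first
        city = path[-1]
        if city == goal:
            return path, g
        if city in best_g and best_g[city] <= g:
            continue
        best_g[city] = g
        for nxt, cost in roads[city].items():
            g2 = g + cost
            heapq.heappush(fringe, (g2 + h_sld[nxt], g2, path + [nxt]))
    return None, None

path, cost = a_star("Arad", "Bucharest")
print(path, cost)  # -> the route via Sibiu, Rimnicu Vilcea, Pitesti; cost 418
```

Note that A* rejects the seemingly direct Fagaras route (cost 450) in favour of the cheaper route through Rimnicu Vilcea and Pitesti.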

(78)

8-Puzzle

78

f(N) = g(N) + h(N), with h(N) = number of misplaced numbered tiles

[Figure: an A* search tree of 8-puzzle states annotated with g+h values (0+4 at the root).]

(79)

A* search example

79

(80)

A* search example

80

(81)

A* search example

81

(82)

A* search example

82

(83)

A* search example

83

(84)

A* search example

84

(85)

Admissible heuristics

A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.

An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic.

Example: hSLD(n) (never overestimates the actual road distance)

85

(86)

8-Puzzle Heuristics

• h1(N) = number of misplaced tiles = 6 is ???

86

(87)

8-Puzzle Heuristics

• h1(N) = number of misplaced tiles = 6 is admissible
• h2(N) = sum of the (Manhattan) distances of every tile to its goal position = 3+1+2+2+2+3+3+2 = 18 is ???

87

(88)

8-Puzzle Heuristics

• h1(N) = number of misplaced tiles = 6 is admissible
• h2(N) = sum of the (Manhattan) distances of every tile to its goal position = 2 + 3 + 0 + 1 + 3 + 0 + 3 + 1 = 13 is admissible

88

(89)

Robot Navigation Heuristics

89

Cost of one horizontal/vertical step = 1; cost of one diagonal step = √2

h1(N) = √((xN − xg)² + (yN − yg)²) (the Euclidean distance) is admissible

(90)

Robot Navigation Heuristics

90

Cost of one horizontal/vertical step = 1; cost of one diagonal step = √2

h2(N) = |xN − xg| + |yN − yg| is ???

(91)

Robot Navigation Heuristics

91

Cost of one horizontal/vertical step = 1; cost of one diagonal step = √2

h2(N) = |xN − xg| + |yN − yg| is admissible if moving along diagonals is not allowed, and not admissible otherwise

h*(I) = 4√2
h2(I) = 8

(92)

Admissible heuristics

A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.

Theorem: If h(n) is admissible, A* using TREE-SEARCH is complete.
Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal.

92

(93)

Optimality of A* (proof)

Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

f(G2) = g(G2)   since h(G2) = 0
g(G2) > g(G)    since G2 is suboptimal
f(G) = g(G)     since h(G) = 0
f(G2) > f(G)    from above

93

(94)

Optimality of A* (proof)

Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

f(G2) > f(G)                from above
h(n) ≤ h*(n)                since h is admissible
g(n) + h(n) ≤ g(n) + h*(n)
f(n) ≤ f(G)

Hence f(G2) > f(n), and A* will never select G2 for expansion. 94

(95)

Properties of A*

Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
Time? Exponential
Space? Keeps all nodes in memory
Optimal? Yes

95

(96)

Robot Navigation

[Figure: a grid world with columns 1–11 and rows A–E.]

96

(97)

97

Robot Navigation

f(N) = h(N), with h(N) = Manhattan distance to the goal (not A*)

[Figure: the grid world annotated with the h value of each cell.]

(98)

98

Robot Navigation

f(N) = h(N), with h(N) = Manhattan distance to the goal (not A*)

[Figure: the grid world annotated with h values, showing the cells expanded by the search.]

(99)

99

Robot Navigation

f(N) = g(N) + h(N), with h(N) = Manhattan distance to the goal (A*)

[Figure: the grid world annotated with g+h values, showing the cells expanded by A*.]

(100)

How to create an admissible h?

• An admissible heuristic can usually be seen as the cost of an optimal solution to a relaxed problem (one obtained by removing constraints)
• In robot navigation:
  – the Manhattan distance corresponds to removing the obstacles
  – the Euclidean distance corresponds to removing both the obstacles and the constraint that the robot moves on a grid
• 8-puzzle?

100

(101)

How to create an admissible h?

Description of actions: a tile can move from square A to square B if
• A is horizontally or vertically adjacent to B
• and B is blank

101

(102)

How to create an admissible h?

Relaxed problems:
(a) A tile can move from square A to square B if A is adjacent to B. (Manhattan distance)
(b) A tile can move from square A to square B if B is blank.
(c) A tile can move from square A to square B. (misplaced tiles)

102

(103)

How to create an admissible h?

The relaxed problems generated by this technique can be solved essentially without search.

103

(104)

Dominance

If h2(n) ≥ h1(n) for all n (both admissible) then h2 dominates h1: h2 is better for search.

Example:
• h1(N) = number of misplaced tiles
• h2(N) = sum of distances of every tile to its goal position

104

(105)

Dominance

If h2(n) ≥ h1(n) for all n (both admissible) then h2 dominates h1: h2 is better for search.

Why: let C* = f(G). A* expands every node with f(n) < C*, i.e. with h(n) < C* − g(n); since h2 ≥ h1, every node expanded using h2 (h2(n) < C* − g(n)) would also be expanded using h1.

105

(106)

Dominance

If h2(n) ≥ h1(n) for all n (both admissible) then h2 dominates h1: h2 is better for search.

Typical search costs (average number of nodes expanded):

d=12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
d=24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes

106

(107)

Effective Branching Factor

• It is used as a measure of the effectiveness of a heuristic
• Let n be the total number of nodes expanded by A* for a particular problem and d the depth of the solution
• The effective branching factor b* is defined by n = 1 + b* + (b*)² + ... + (b*)^d

107
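Since the defining equation has no closed form in b*, it can be solved numerically. The sketch below (an illustrative helper, not from the slides) finds b* by bisection and checks it against one entry of the experimental table: 227 nodes at depth 12 gives b* ≈ 1.42.

```python
# Compute the effective branching factor b* by bisection: find b such
# that 1 + b + b**2 + ... + b**d equals the node count n.

def effective_branching_factor(n, d, tol=1e-9):
    def total(b):
        return sum(b**i for i in range(d + 1))
    lo, hi = 1.0, float(n)          # b* lies between 1 and n
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(effective_branching_factor(227, 12), 2))  # -> 1.42
```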

(108)

Experimental Results

• 8-puzzle with:
  – h1 = number of misplaced tiles
  – h2 = sum of distances of tiles to their goal positions
• Random generation of many problem instances
• Average effective branching factors (number of expanded nodes):

108

d    IDS               A*(h1)         A*(h2)
2    2.45              1.79           1.79
6    2.73              1.34           1.30
12   2.78 (3,644,035)  1.42 (227)     1.24 (73)
16   --                1.45           1.25
20   --                1.47           1.27
24   --                1.48 (39,135)  1.26 (1,641)
