
(1)

Lecture no. 14

Silvia Rossi

Strategic Voting

Degree programme: Informatica

Course: Sistemi multi-agente (Multi-Agent Systems)

Email: silrossi@unina.it

A.A. 2014-2015

(2)

Social Choice and AI

Voting theory (which is part of social choice theory, mostly developed in Economics) has a number of natural applications in AI:

•  Multi-agent Systems: to aggregate the beliefs + to coordinate the actions of groups of autonomous software agents

•  Search Engines: to determine the most important sites based on links ("votes") + to aggregate the output of several search engines

•  Recommender Systems: to recommend a product to a user based on earlier ratings by other users

But not all of the classical assumptions will fit these new applications.

So AI needs to develop new models and ask new questions.

(3)

Strategic Voting – Mechanism Design

MAS 10.1,10.2


(4)

Voting

• Boris, Horace and Maurice determine who can be a member of the Dead Poet Society:

•  proposal: allow Alice

•  amendment: allow Bob, rather than Alice

•  first vote over amendment, then over proposal


(5)

Voting

[Voting tree: first round Alice vs. Bob; the winner then faces Nobody in the second round. Preference orderings: Boris: Alice > Nobody > Bob; Horace: Nobody > Alice > Bob; Maurice: Bob > Alice > Nobody.]

(6)

Strategic voting

•  first vote: Alice vs. Bob

•  winner: Alice

•  then: Alice vs. Nobody

•  winner: Alice

•  strategic voting by Horace:

•  vote for Bob in the first round!

•  result: the second round is Bob vs. Nobody, and Nobody wins (which Horace prefers to Alice)


[Voting tree and preference orderings as above.]

(7)

Strategic voting

•  first vote: Alice vs. Bob

•  winner: Alice

•  then: Alice vs. Nobody

•  winner: Alice

•  strategic voting by Horace:

•  vote for Bob in the first round!

•  result: the second round is Bob vs. Nobody, and Nobody wins

•  Maurice anticipates this: vote for Alice in the first round (Alice then wins, which Maurice prefers to Nobody)


[Voting tree and preference orderings as above.]
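A minimal simulation of this agenda vote (a sketch in Python; function and variable names are illustrative, and the preference profile is the one read off the diagram above):

PREFS = {
    "Boris":   ["Alice", "Nobody", "Bob"],
    "Horace":  ["Nobody", "Alice", "Bob"],
    "Maurice": ["Bob", "Alice", "Nobody"],
}

def prefers(voter, x, y):
    """The alternative among x and y that the voter ranks higher."""
    order = PREFS[voter]
    return x if order.index(x) < order.index(y) else y

def majority(x, y, ballots):
    """Pairwise majority vote between x and y; ballots maps voter -> vote."""
    return x if sum(b == x for b in ballots.values()) * 2 > len(ballots) else y

def agenda_vote(first_round_ballots):
    """Round 1: Alice vs. Bob with the given ballots; round 2: winner vs. Nobody, voted sincerely."""
    w1 = majority("Alice", "Bob", first_round_ballots)
    round2 = {v: prefers(v, w1, "Nobody") for v in PREFS}
    return majority(w1, "Nobody", round2)

sincere = {v: prefers(v, "Alice", "Bob") for v in PREFS}
print(agenda_vote(sincere))                                            # Alice
print(agenda_vote({**sincere, "Horace": "Bob"}))                       # Nobody
print(agenda_vote({**sincere, "Horace": "Bob", "Maurice": "Alice"}))   # Alice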

(8)

Mechanism design

Mechanism design is a strategic version of social choice theory, which adds the assumption that agents will behave so as to maximize their individual payoffs.

For example, in an election agents may not vote according to their true preferences.


(9)

Mechanism design (inverse game theory)

How do we predict or prescribe the course of action of the various agents participating in the interaction?

Rather than investigate a given strategic interaction, we start with certain desired behaviors on the part of agents and ask what strategic interaction among these agents might give rise to these behaviors.


(10)

We will assume unknown individual preferences, and ask whether we can design a game such that, no matter what the secret preferences of the agents actually are, the equilibrium of the game is guaranteed to have a certain desired property or set of properties.

Mechanism design concerns itself with designing effective protocols for distributed systems; the paradigmatic application is auction theory.
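As a concrete illustration (an example added here, not on the slide): the sealed-bid second-price auction is the textbook case of such a protocol, and bidding one's true value is a dominant strategy in it.

def second_price_auction(bids):
    """bids: dict mapping bidder -> bid amount. The highest bidder wins and
    pays the second-highest bid (0 if there is only one bidder)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

print(second_price_auction({"ann": 10, "bob": 7, "cal": 4}))  # ('ann', 7)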

(11)

Bayesian Game

Extend the social choice setting to a new setting where agents can't be relied upon to disclose their preferences honestly.

Start with a set of agents in a Bayesian game setting (but no actions).


(12)

Mechanism Design

Thus, the designer gets to specify

•  the action sets for the agents (though these may be constrained by the environment)

•  the mapping from action profiles to outcomes, over which agents have utility

The designer cannot change the set of outcomes, the agents' preferences, or their type spaces.

A mechanism M is deterministic if, for every action profile a, there is a single outcome o with M(a)(o) = 1.
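Sketched formally, in roughly the notation of the MAS text (Sections 10.1-10.2) cited on slide (3), and not spelled out on this slide:

\[
\text{A mechanism for a Bayesian game setting } (N, O, \Theta, p, u) \text{ is a pair } (A, M), \qquad
A = A_1 \times \cdots \times A_n, \quad M : A \to \Pi(O),
\]

where \(\Pi(O)\) denotes the probability distributions over the outcomes O; the mechanism is deterministic if for every action profile \(a \in A\) there is an outcome \(o \in O\) with \(M(a)(o) = 1\).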

(13)

What we're up to

The problem is to pick a mechanism that will cause rational agents to behave in a desired way, specifically maximizing the mechanism designer's own "utility" or objective function

•  each agent holds private information, in the Bayesian game sense

•  often, we're interested in settings where agents' action space is identical to their type space, and an action can be interpreted as a declaration of the agent's type

Various equivalent ways of looking at this setting

•  perform an optimization problem, given that the values of (some of) the inputs are unknown

•  choose the Bayesian game out of a set of possible Bayesian games that maximizes some performance measure

•  design a game that implements a particular social choice function in equilibrium, given that the designer no longer knows agents' preferences and the agents might lie


(14)

Implementation in Dominant Strategies

In dominant-strategy implementation, the outcomes that arise when the game is played are consistent with a given social choice function.

In a strategy-proof mechanism, there is no need for agents to reason about each other's actions in order to maximize their utility.
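For a direct mechanism M that maps reported type profiles to outcomes, strategy-proofness can be sketched as follows (notation assumed here, not given on the slide): truth-telling is a dominant strategy, i.e.

\[
u_i\bigl(M(\theta_i, \hat\theta_{-i}), \theta_i\bigr) \;\ge\; u_i\bigl(M(\hat\theta_i, \hat\theta_{-i}), \theta_i\bigr)
\quad \text{for every agent } i, \text{ true type } \theta_i, \text{ report } \hat\theta_i, \text{ and reports } \hat\theta_{-i} \text{ of the others.}
\]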


(15)

Implementation in Bayes-Nash equilibrium

•  Goal is to design the rules of the game (aka mechanism) so that in Bayes-Nash equilibrium (s_1, …, s_n), the outcome of the game is f(θ_1, …, θ_n)

•  Weaker requirement than dominant strategy implementation

–  An agent's best-response strategy may depend on others' strategies

•  Agents may benefit from counterspeculating each other's

–  preferences, rationality, endowments, capabilities, …

–  Can accomplish more than under dominant strategy implementation

•  E.g., budget balance & Pareto efficiency (social welfare maximization) under quasilinear preferences
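Written out (a sketch; the outcome function g is a symbol assumed here, not on the slide): the mechanism implements f in Bayes-Nash equilibrium if, for the equilibrium strategy profile (s_1, …, s_n),

\[
g\bigl(s_1(\theta_1), \ldots, s_n(\theta_n)\bigr) = f(\theta_1, \ldots, \theta_n)
\quad \text{for all type profiles } (\theta_1, \ldots, \theta_n).
\]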

(16)

Bayes-Nash Implementation Comments

Bayes-Nash Equilibrium Problems:

•  there could be more than one equilibrium

•  which one should I expect agents to play?

•  agents could miscoordinate and play none of the equilibria


(17)

Implementation Comments

We can require that the desired outcome arises

•  in the only equilibrium

•  in every equilibrium

•  in at least one equilibrium

Forms of implementation:

•  Direct implementation: agents each simultaneously send a single message to the center

•  Indirect implementation: agents may send a sequence of messages; in between, information may be (partially) revealed about the messages that were sent previously (as in an extensive-form game)


(18)

Revelation Principle

One property that is often desired of mechanisms is called truthfulness.

This property holds when agents truthfully disclose their preferences to the mechanism in equilibrium.

•  It turns out that any social choice function that can be implemented by any mechanism can be implemented by a truthful, direct mechanism!

•  Consider an arbitrary, non-truthful mechanism (e.g., may be indirect)


(19)

Revelation Principle

•  It turns out that any social choice function that can be implemented by any mechanism can be implemented by a truthful, direct mechanism!

•  Consider an arbitrary, non-truthful mechanism (e.g., may be indirect)

•  Recall that a mechanism defines a game, and consider an equilibrium s = (s_1, …, s_n)


[Diagram: the original mechanism M. Each agent of type θ_i plays its equilibrium strategy s_i(θ_i); M maps the resulting strategy profile to an outcome.]

(20)

Revelation Principle

•  We can construct a new direct mechanism, as shown in the diagram below

•  This mechanism is truthful by exactly the same argument that s was an equilibrium in the original mechanism

•  “The agents don't have to lie, because the mechanism already lies for them.”


[Diagram: the new, direct mechanism built around the original one. Each agent of type θ_i now simply reports a type; the new mechanism applies the equilibrium strategies s_i(·) to the reports and feeds the resulting actions into the original mechanism M, which produces the outcome.]
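A small sketch of the same construction in code (names such as make_direct_mechanism are illustrative, not from the slides): the new direct mechanism takes reported types, plays the original equilibrium strategies on the agents' behalf, and runs the original mechanism.

from typing import Callable, Sequence

def make_direct_mechanism(
    original_mechanism: Callable[[Sequence[object]], object],
    equilibrium_strategies: Sequence[Callable[[object], object]],
) -> Callable[[Sequence[object]], object]:
    """Wrap an arbitrary mechanism and an equilibrium strategy profile into a
    direct mechanism: agents report types, and the wrapper 'lies for them' by
    applying their equilibrium strategies before calling the original mechanism."""
    def direct_mechanism(reported_types):
        actions = [s(t) for s, t in zip(equilibrium_strategies, reported_types)]
        return original_mechanism(actions)
    return direct_mechanism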

(21)

Discussion of the Revelation Principle

• The set of equilibria is not always the same in the original mechanism and revelation mechanism

•  of course, we've shown that the revelation mechanism does have the original equilibrium of interest

•  however, in the case of indirect mechanisms, even if the indirect mechanism had a unique equilibrium, the revelation mechanism can also have new, bad equilibria

• So what is the revelation principle good for?

•  recognition that truthfulness is not a restrictive assumption

•  for analysis purposes, we can consider only truthful mechanisms, and be assured that such a mechanism exists

•  recognition that indirect mechanisms can't do (inherently) better than direct mechanisms


(22)

Impossibility Result

Theorem (Gibbard-Satterthwaite)
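The standard statement of the theorem, for a social choice function C over an outcome set O:

\[
\text{If } |O| \ge 3,\; C \text{ is onto } O, \text{ and } C \text{ is truthfully implementable in dominant strategies, then } C \text{ is dictatorial.}
\]

Here dictatorial means that there is a single agent whose top-ranked outcome is always selected.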


(23)

What does this mean?

We should be discouraged about the possibility of implementing arbitrary social-choice functions in mechanisms.

However, in practice we can circumvent the Gibbard-Satterthwaite theorem in two ways:

•  use a weaker form of implementation

•  note: the result only holds for dominant strategy implementation, not e.g., Bayes-Nash implementation

•  relax the onto condition and the (implicit) assumption that agents are allowed to hold arbitrary preferences

