Sistemi per il Governo dei Robot

Academic year: 2021

(1)

Sistemi per il Governo dei Robot

Silvia Rossi - Lecture 4

(2)

THE HIERARCHICAL PARADIGM

[Diagram: SENSE → PLAN → ACT, with PLAN operating on a world model]

(3)

Forward Search

Operators are applied until the goals are reached.

ProgWS(state, goals, actions, path):
    if state satisfies goals, then return path
    else a ← choose(actions) s.t. preconditions(a) are satisfied in state
         if no such a, then return failure
         else return ProgWS(apply(a, state), goals, actions, concatenate(path, a))
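ProgWS is easy to make runnable: depth-first backtracking stands in for the non-deterministic choose. A minimal sketch — the visited set, the tuple encoding of actions, and the shopping actions in the usage note are illustrative assumptions, not part of the lecture:

```python
def prog_ws(state, goals, actions, path=(), visited=None):
    """Forward state-space search (ProgWS). state and goals are frozensets of
    propositions; each action is (name, preconditions, add_list, delete_list)."""
    if visited is None:
        visited = {state}
    if goals <= state:                               # state satisfies goals
        return list(path)                            # return path
    for name, pre, add, dele in actions:             # a = choose(actions)
        if pre <= state:                             # preconditions(a) hold
            new_state = (state - dele) | add         # apply(a, state)
            if new_state not in visited:             # avoid revisiting states
                result = prog_ws(new_state, goals, actions,
                                 path + (name,), visited | {new_state})
                if result is not None:
                    return result
    return None                                      # failure
```

For example, from `frozenset({"At(Home)"})` with a `Go(Super)` and a `Buy(Milk)` action, it returns the plan `["Go(Super)", "Buy(Milk)"]`.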

(4)

Need for an Accurate Heuristic

• Forward planning simply searches the space of world states from the initial to the goal state

• Imagine an agent with a large library of actions, whose goal is G, e.g., G = Have(Milk)

• In general, many actions are applicable to any given state, so the branching factor is huge

• In any given state, most applicable actions are irrelevant to reaching the goal Have(Milk)

• Fortunately, an accurate consistent heuristic can be computed

(11)

Forward planning still suffers from an excessive branching factor

• In general, there are far fewer actions relevant to achieving a goal than actions applicable to a state

• How to determine which actions are relevant? How to use them?

(12)

Relevant Action

An action is relevant to a goal if one of its effects matches a goal proposition

Stack(B,A)
P = HOLDING(B), BLOCK(A), CLEAR(A)
E = ON(B,A), ¬CLEAR(A), ¬HOLDING(B), CLEAR(B), HANDEMPTY

(16)

Regression of a Goal

The regression of a goal G through an action A is the weakest precondition R[G,A] such that, if a state S satisfies R[G,A], then:

1. the precondition of A is satisfied in S
2. applying A to S yields a state that satisfies G

(17)

Example

(18)

Example

G = ON(B,A), ON(C,B)

Stack(C,B)
P = HOLDING(C), BLOCK(C), BLOCK(B), CLEAR(B)
E = ON(C,B), ¬CLEAR(B), ¬HOLDING(C), CLEAR(C), HANDEMPTY

R[G,Stack(C,B)] = ON(B,A), HOLDING(C), BLOCK(C), BLOCK(B), CLEAR(B)

(19)

Example

(20)

Example

G = CLEAR(B), ON(C,B)

Stack(C,B)
P = HOLDING(C), BLOCK(C), BLOCK(B), CLEAR(B)
E = ON(C,B), ¬CLEAR(B), ¬HOLDING(C), CLEAR(C), HANDEMPTY

R[G,Stack(C,B)] = False

(21)

Computation of R[G,A]

1. If any element of G is deleted by A then return False
2. G' ← Precondition of A
3. For every element SG of G do
   If SG is not added by A then add SG to G'
4. Return G'
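These steps map directly onto set operations. A minimal sketch — the (preconditions, add, delete) tuple encoding of an action is an assumption — checked against the Stack(C,B) examples above:

```python
def regress(goal, action):
    """R[G, A]: regress goal set G through STRIPS action A = (pre, add, delete).
    Returns the regressed goal set, or None for the unachievable goal False."""
    pre, add, dele = action
    if goal & dele:                 # 1. A deletes an element of G -> False
        return None
    regressed = set(pre)            # 2. G' <- preconditions of A
    regressed |= goal - add         # 3. keep goal literals A does not add
    return regressed                # 4. return G'
```

With G = {ON(B,A), ON(C,B)} this reproduces the regression on the slide; with G = {CLEAR(B), ON(C,B)} it returns None, i.e. False.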

(22)

Another Example

• G = In(Key,Box) ∧ Holding(Key)

Put-Key-Into-Box
P = In(Robot,R1) ∧ Holding(Key)
D = Holding(Key), In(Key,R1)
A = In(Key,Box)

• R[G,Put-Key-Into-Box] = ??

[Figure: two rooms, R1 and R2, with robot, key and box]

Inconsistent Regression

• G = In(Key,Box) ∧ Holding(Key)

Put-Key-Into-Box
P = In(Robot,R1) ∧ Holding(Key)
D = Holding(Key), In(Key,R1)
A = In(Key,Box)

• R[G,Put-Key-Into-Box] = False, where False is the unachievable goal

• This means that In(Key,Box) ∧ Holding(Key) can't be achieved by executing Put-Key-Into-Box

(27)

Computation of R[G,A]

1. If any element of G is deleted by A then return False
2. G' ← Precondition of A
3. For every element SG of G do
   If SG is not added by A then add SG to G'
4. If an inconsistency rule applies to G' then return False
5. Return G'

(28)

Progression vs. Regression

Both algorithms are:

•Sound: the resulting plan is valid

•Complete: if a valid plan exists, they find one

Non-deterministic choice => search!

•Brute force: DFS, BFS, Iterative Deepening, …

•Heuristic: A*, IDA*, …

Complexity: O(b^n) worst-case, where b = branching factor and n = number of "choose" steps

Regression: often smaller b, focused by the goals

Progression: full state available to compute heuristics

(29)

3 Missionari - 3 Cannibali

3 missionari e 3 cannibali fanno un viaggio insieme e devono attraversare un fiume sfruttando una

zattera che può ospitare al massimo 2 persone.

Bisogna fare in maniera che sulle due sponde il numero dei cannibali non superi mai quello dei missionari.

Suggerimento: usare un albero dove ogni nodo

rappresenta la situazione sulle due sponde e ogni

arco un operatore che indica quali soggetti sono

traghettati.

(30)

STEP  ACTION                                DEPARTURE BANK  ARRIVAL BANK
1     Initial situation                     MMMCCC          /
2     2 cannibals cross                     MMMC            CC
3     1 cannibal returns                    MMMCC           C
4     2 cannibals cross                     MMM             CCC
5     1 cannibal returns                    MMMC            CC
6     2 missionaries cross                  MC              MMCC
7     1 cannibal and 1 missionary return    MMCC            MC
8     2 missionaries cross                  CC              MMMC
9     1 cannibal returns                    CCC             MMM
10    2 cannibals cross                     C               MMMCC
11    1 cannibal returns                    CC              MMMC
12    2 cannibals cross                     /               MMMCCC
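The table above can be found mechanically with a breadth-first search over states. A sketch, where the state encoding — (missionaries, cannibals, boat) counted on the departure bank — is one choice among many:

```python
from collections import deque

def solve_mc():
    """BFS for missionaries-and-cannibals. State = (m, c, boat) on the
    departure bank; (3, 3, 1) is the start, (0, 0, 0) is the goal."""
    def safe(m, c):
        # each bank must have no missionaries, or at least as many as cannibals
        return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)
    start, goal = (3, 3, 1), (0, 0, 0)
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        m, c, b = s = frontier.popleft()
        if s == goal:                      # reconstruct the path of states
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
            nm, nc = (m - dm, c - dc) if b else (m + dm, c + dc)
            t = (nm, nc, 1 - b)
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and t not in parent:
                parent[t] = s
                frontier.append(t)
    return None
```

BFS returns a shortest solution: 11 crossings, i.e. the 12 situations listed in the table.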

(31)

[Figure: the search tree for the missionaries-and-cannibals problem; each node is the pair of bank states (e.g. MMMCCC/), each arc a ferry operator (e.g. -CC), from the root MMMCCC/ down to the goal /MMMCCC]

(32)

Shakey

Shakey was the first mobile robot in AI to use a generalized algorithm to plan the execution of its tasks.

The method applied was a variant of GPS.

This program uses an approach called means-ends analysis: if the robot cannot accomplish a task or reach the goal in a single move, it considers an action that can reduce the difference between the state it is in and the state it wants to reach.

(33)

MEANS-ENDS ANALYSIS IN GPS

1 - Transform the current state into a goal state:
compare the current state with the goal and find the difference. If there is no difference: success. Otherwise, create the sub-goal "eliminate the difference"; failure if it cannot be achieved.

2 - Eliminate the difference:
look for an operator that can eliminate the difference. If one is found, compare the operator's conditions with the current state; any remaining difference becomes a new sub-goal "eliminate the difference". Success when no difference remains; failure if no operator is found.

(36)

STRIPS

•Symbolic representations of knowledge about the problem domain.

•Goal decomposition.

•Incorporates relevant operators into the plan even when they cannot be applied to the current state.

•A mix of regression and progression.

(37)

•Search:

• Nodes: current state and a stack of goals/operators
• Root node: initial state and goal configuration

•Idea:

• Push onto the stack the goals to achieve and the operators that achieve these goals
• Pop from the stack goals that are present in the current state, and operators that are executable

(38)

STRIPS planning

STRIPS maintains two additional data structures:

State List - all currently true predicates.
Goal Stack - a push-down stack of goals to be solved, with the current goal on top.

If the current goal is not satisfied by the present state, examine the add lists of operators, and push the operator and its precondition list onto the stack (subgoals).

When a current goal is satisfied, pop it from the stack.

When an operator is on top of the stack, record the application of that operator in the plan sequence and use the operator's add and delete lists to update the current state.

(39)

1. Match: if the goal on top of the stack matches the current state, pop it from the stack.

(40)

2. Decompose: if the goal on top of the stack is a conjunction, push its component literals onto the top of the stack in a certain order.

(41)

3. Solve: when the goal at the top of the stack is a single unresolved literal, find an operator whose add list contains a literal that matches the goal.

(42)

4. Apply: when the object on top of the stack is an operator, pop it from the stack and apply it to the current state.

(43)

The process is repeated until the stack is empty.
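Steps 1-4 can be sketched as a small goal-stack loop. This is a deliberate simplification — no variable bindings, no backtracking, and it can loop forever on interacting goals such as the Sussman anomaly below; the frozenset encoding is an assumption:

```python
def strips(state, goals, ops):
    """Goal-stack STRIPS sketch (no variables, no backtracking).
    state: set of facts; goals: frozenset of facts;
    ops: dict name -> (pre, add, delete), each a frozenset of facts."""
    state, plan, stack = set(state), [], [goals]
    while stack:
        top = stack.pop()
        if isinstance(top, frozenset):        # 2. Decompose a conjunction
            if not top <= state:
                stack.append(top)             # re-check it after its parts
                stack.extend(top - state)     # push the unsatisfied literals
        elif top in ops:                      # 4. Apply an operator
            pre, add, dele = ops[top]         # (its preconditions were pushed
            state = (state - dele) | add      #  before it, so they hold now)
            plan.append(top)
        elif top not in state:                # 3. Solve an unresolved literal
            for name, (pre, add, dele) in ops.items():
                if top in add:                # operator whose add list matches
                    stack.append(name)
                    stack.append(pre)         # its preconditions go on top
                    break
            else:
                return None                   # no operator achieves it
        # 1. Match: a literal already true in the state is simply popped
    return plan
```

On the sock/shoe operators used later in these slides it produces a 4-step plan in which each sock precedes its shoe.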

(44)

• Can STRIPS always find a plan if there is one?

• When STRIPS returns a plan, is it always correct? Is it efficient?

(45)
(46)
(47)
(48)
(49)
(50)
(51)
(52)
(53)
(54)
(55)
(56)
(57)
(58)
(59)
(60)
(61)
(62)
(63)
(64)

Example 2

(67)

Goal interaction

Simple planning algorithms assume that the goals to be achieved are independent

Each can be solved separately and then the solutions concatenated

This planning problem, called the "Sussman Anomaly," is the classic example of the goal interaction problem:

Solving on(A,B) first (by doing unstack(C,A), stack(A,B)) will be undone when solving the second goal on(B,C) (by doing unstack(A,B), stack(B,C)).

Solving on(B,C) first will be undone when solving on(A,B)

Classic STRIPS could not handle this, although minor modifications can get it to do simple cases

[Figure: initial state C on A, B on the table; goal state A on B on C]

(71)

Linear Planning Limitations

•Assumes goal independence; goals are dealt with linearly: it does not work on the next goal until a plan for the current one is found.

• Incompleteness: there is a solution but the planner can't find it.

• Non-optimality: it can't find the optimal solution

(72)

THE HIERARCHICAL PARADIGM

(73)

State-space planning

We initially have a space of situations (where you are, what you have, etc.)

The plan is a solution found by “searching”

through the situations to get to the goal

A progression planner searches forward from initial state to goal state

A regression planner searches backward from the goal

Ideally this leads to reduced branching –you are only considering things that are relevant to the goal

(74)

Plan-space planning

An alternative is to search through the space of plans, rather than situations.

Start from a partial plan which is expanded and refined until a complete plan that solves the problem is generated.

Refinement operators add constraints to the partial plan; modification operators make other changes.

We can still use STRIPS-style operators:

Op(ACTION: RightShoe, PRECOND: RightSockOn, EFFECT: RightShoeOn)
Op(ACTION: RightSock, EFFECT: RightSockOn)
Op(ACTION: LeftShoe, PRECOND: LeftSockOn, EFFECT: LeftShoeOn)
Op(ACTION: LeftSock, EFFECT: LeftSockOn)

could result in a partial plan of [RightShoe, LeftShoe]

(81)

Partial-order planning

A linear planner builds a plan as a totally ordered sequence of plan steps

A non-linear planner (aka partial-order planner) builds up a plan as a set of steps with some temporal constraints

constraints of the form S1 < S2 if step S1 must come before S2.

One refines a partially ordered plan (POP) by either:

adding a new plan step, or

adding a new constraint to the steps already in the plan.

A POP can be linearized (converted to a totally ordered plan) by picking any order of the steps that is consistent with its constraints

(88)

Least commitment

Non-linear planners embody the principle of least commitment

only choose actions, orderings, and variable bindings that are absolutely necessary, leaving other decisions till later

avoids early commitment to decisions that don't really matter

A linear planner always chooses to add a plan step in a particular place in the sequence

A non-linear planner chooses to add a step and possibly some temporal constraints

(89)

Non-linear plan

A non-linear plan consists of

(1) A set of steps {S1, S2, S3, S4…}

Each step has an operator description, preconditions and post-conditions

(2) A set of causal links { … (Si,C,Sj) …}

Meaning a purpose of step Si is to achieve precondition C of step Sj

(3) A set of ordering constraints { … Si<Sj … }

if step Si must come before step Sj

(90)

Causal Link:

Action Ac (consumer) has precondition Q that is established in the plan by Ap (producer).

(91)

Step At threatens link (Ap, Q, Ac) if:

1. At has ¬Q as an effect, and
2. At could come between Ap and Ac, i.e. O ∪ {Ap < At < Ac} is consistent
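The two conditions reduce to a consistency (acyclicity) check on the ordering constraints. A sketch — the string convention "not Q" for negated effects and the step names in the usage note are assumptions:

```python
from collections import defaultdict

def consistent(orderings):
    """True if the ordering constraints (before, after) contain no cycle."""
    adj = defaultdict(list)
    for before, after in orderings:
        adj[before].append(after)
    def reaches_itself(start):
        stack, seen = list(adj[start]), set()
        while stack:
            node = stack.pop()
            if node == start:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(adj[node])
        return False
    return not any(reaches_itself(n) for n in list(adj))

def threatens(a_t, link, effects, orderings):
    """a_t threatens causal link (a_p, q, a_c) if it has 'not q' as an effect
    and the orderings remain consistent once Ap < At < Ac is added."""
    a_p, q, a_c = link
    if a_t in (a_p, a_c):
        return False
    if "not " + q not in effects.get(a_t, set()):
        return False            # 1. At must have (not Q) as an effect
    return consistent(set(orderings) | {(a_p, a_t), (a_t, a_c)})   # 2.
```

For a hypothetical step GoHome with effect "not At(Super)", adding the constraint Ac < GoHome (promotion) makes the combined ordering cyclic, so the threat disappears — exactly the resolution shown in the promotion/demotion slides below.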

(92)

Solution

A solution is a complete and consistent plan.

In a complete plan, every precondition of a step is achieved by some other step. A step achieves a condition if the condition is one of the step's effects and no other step can delete the condition.

In a consistent plan there are no contradictions in the ordering or binding constraints (having both Si < Sj and Sj < Si would be a contradiction).

(93)

The initial plan

Every plan starts the same way:

S1:START (initial state) → S2:FINISH (goal state)

(94)

Initial Plan

For uniformity, represent initial state and goal with two special actions:

Start:

• no preconditions,

• initial state as effects,

• must be the first step in the plan.

Finish:

• no effects

• goals as preconditions

• must be the last step in the plan.
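The Start/Finish convention can be written down directly. A minimal sketch, where the (preconditions, effects) pair encoding of a step is an assumption:

```python
def initial_plan(initial_state, goals):
    """The empty partial plan: Start (no preconditions, initial state as
    effects) and Finish (goals as preconditions, no effects), ordered
    Start < Finish, with no causal links yet."""
    steps = {
        "Start":  (frozenset(), frozenset(initial_state)),  # (pre, effects)
        "Finish": (frozenset(goals), frozenset()),
    }
    orderings = {("Start", "Finish")}
    causal_links = set()
    return steps, orderings, causal_links
```

Planning then proceeds by supporting Finish's open preconditions with new steps and links.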

(95)

Trivial example

Operators:

Op(ACTION: RightShoe, PRECOND: RightSockOn, EFFECT: RightShoeOn)
Op(ACTION: RightSock, EFFECT: RightSockOn)
Op(ACTION: LeftShoe, PRECOND: LeftSockOn, EFFECT: LeftShoeOn)
Op(ACTION: LeftSock, EFFECT: LeftSockOn)

S1:START → S2:FINISH (precondition: RightShoeOn ∧ LeftShoeOn)

STEPS: {S1: [Op(ACTION: START)], S2: [Op(ACTION: FINISH, PRECOND: RightShoeOn ∧ LeftShoeOn)]}
LINKS: {}
ORDERINGS: {S1 < S2}

(96)

Solution

Start → {Left Sock, Right Sock} → {Left Shoe, Right Shoe} → Finish

(Left Sock must precede Left Shoe and Right Sock must precede Right Shoe; the two chains are unordered with respect to each other.)

(97)

Linearizations
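Each linearization of the shoe plan is a topological sort of the steps under the ordering constraints. A sketch — the alphabetical tie-break is only there to make the output reproducible; any ready step may be picked:

```python
def linearize(steps, orderings):
    """One total order of a partial-order plan: repeatedly emit a step all of
    whose predecessors have already been emitted (a topological sort)."""
    remaining, order = set(steps), []
    while remaining:
        ready = [s for s in remaining
                 if not any(b in remaining
                            for b, a in orderings if a == s)]
        if not ready:
            raise ValueError("ordering constraints contain a cycle")
        step = min(ready)        # alphabetical tie-break, for determinism only
        order.append(step)
        remaining.remove(step)
    return order
```

Different choices of the ready step yield the different linearizations of the partial plan; all of them keep each sock before its shoe.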

(98)

POP

POP is sound and complete.

A POP plan is a solution if:

◦ All preconditions are supported (by causal links), i.e., no open conditions
◦ No threats
◦ Consistent temporal ordering

By construction, the POP algorithm reaches a solution plan

(99)

Example

Op(ACTION: START, PRECONDITION: nil, EFFECT: AT(HOME) and SELLS(STORE,DRILL) and SELLS(SUPER,MILK) and SELLS(SUPER,BREAD))

Op(ACTION: FINISH, PRECONDITION: HAVE(DRILL) and HAVE(MILK) and HAVE(BREAD) and AT(HOME), EFFECT: nil)

Op(ACTION: GO(y), PRECONDITION: AT(x), EFFECT: AT(y) and not AT(x))

Op(ACTION: BUY(x), PRECONDITION: AT(SHOP) and SELLS(SHOP,x), EFFECT: HAVE(x))

(100)

Initial Plan

(104)

Correct?

(105)

Plans with conditions

(106)

Demotion

(107)

Promotion

(108)
(109)
(110)

Total-order plan

(111)

POP
