Moreau’s Sweeping Process and its control



Giovanni Colombo

Universit`a di Padova

June 15, 2016


Joint works with

– B. Mordukhovich (Wayne State), Nguyen D. Hoang (Valparaiso, Chile), R. Henrion (WIAS)

– Michele Palladino (Penn State)

– Chems Eddine Arroud (Jijel, Algeria)


Outline of the Talk

– The dynamics of the sweeping process
– Control problems for the sweeping process
– A Hamilton-Jacobi characterization of the minimum time function
– Necessary optimality conditions


The dynamics of the sweeping process

Consider a moving set C(t), depending on the time t ∈ [0, T], and an initial condition x0 ∈ C(0).

In several contexts one needs to model the displacement x(t) of the initial condition x0 subject to the dragging, or sweeping, due to the displacement of C(t).

It is natural to think that the point x(t) remains at rest until it is caught by the boundary of C(t), and that its velocity is then normal to ∂C(t).

It is a kind of one-sided movement.


Formally, the sweeping process (processus de rafle) is the differential inclusion with initial condition

ẋ(t) ∈ −N_{C(t)}(x(t)), x(0) = x0 ∈ C(0).

Here N_C(x) denotes the normal cone to C at x ∈ C. In particular,

N_C(x) = {0} if x ∈ int C,
N_C(x) = ∅ if x ∉ C.

The simplest example is the play operator:

ẋ(t) ∈ −N_{C+u(t)}(x(t)),

namely C(t) is a translation of a fixed set C.
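As a toy illustration (not from the talk), the one-dimensional play operator with C = [−r, r] can be simulated by projecting the previous state onto the translated interval at each time step; the input sequence and half-width r below are illustrative choices.

```python
# Hedged sketch: 1-D play operator, x'(t) in -N_{[u(t)-r, u(t)+r]}(x(t)),
# discretized by projecting the previous state onto the moving interval.
# The input u and half-width r are illustrative, not data from the talk.

def play_operator(u_values, r, x0):
    """Discrete play operator: project x onto [u - r, u + r] at each step."""
    x = x0
    out = []
    for u in u_values:
        x = min(max(x, u - r), u + r)  # projection onto the translated interval
        out.append(x)
    return out

# The state rests while it lies strictly inside C + u(t) and is dragged
# only once the boundary catches it (hysteresis).
us = [0.0, 0.5, 1.0, 1.5, 1.0, 0.5, 0.0]
xs = play_operator(us, r=1.0, x0=0.0)
```

Note the memory effect: the output does not return to 0 when the input does, which is exactly the one-sided character of the sweeping dynamics.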


Classical assumptions are:

– x ∈ H, a Hilbert space
– t ↦ C(t) is Lipschitz continuous
– C(t) is closed and convex

Classical results are (the inclusion is forward in time):

– Existence of a Lipschitz solution for the Cauchy problem
– Uniqueness and continuous dependence on the initial condition

Classical methods are:

– Moreau–Yosida approximation
– The catching-up algorithm

(both constructing a Cauchy sequence of approximate solutions). This goes back to J.-J. Moreau (early ’70s).

There are also (too) many generalizations.
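The two classical methods can be sketched numerically on an illustrative moving set (my choice, not from the talk): the unit disk with center (t, 0). The catching-up step projects the previous state onto C(t_{k+1}), while the Moreau–Yosida scheme replaces the normal cone by the penalization −(x − P_{C(t)}(x))/λ and integrates the resulting ODE.

```python
import numpy as np

# Illustrative moving set: C(t) = closed unit disk centered at (t, 0).
def proj_C(t, x):
    c = np.array([t, 0.0])
    d = x - c
    n = np.linalg.norm(d)
    return x if n <= 1.0 else c + d / n

h, T = 1e-3, 3.0
x_cu = np.array([0.0, 0.5])   # catching-up iterate
x_my = np.array([0.0, 0.5])   # Moreau–Yosida iterate
lam = 1e-2                    # Yosida parameter (h/lam <= 1 for stability)

for k in range(int(T / h)):
    t = (k + 1) * h
    x_cu = proj_C(t, x_cu)                               # x_{k+1} = P_{C(t_{k+1})}(x_k)
    x_my = x_my - (h / lam) * (x_my - proj_C(t, x_my))   # Euler step of the penalized ODE

# The point rests until the boundary catches it (around t ~ 0.87 here),
# then it is swept; both schemes approximate the same solution as h, lam -> 0.
```

The catching-up iterate sits exactly on the boundary of C(t) during the sweeping phase, while the Moreau–Yosida iterate lags slightly outside, at a distance of order λ.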


The perturbed sweeping process:

ẋ(t) ∈ −N_{C(t)}(x(t)) + f(x(t)), x(0) = x0 ∈ C(0). (1)

If f is Lipschitz, essentially the above results and methods remain valid.

The important case where C is constant, namely

ẋ(t) ∈ −N_C(x(t)) + f(x(t)), x(0) = x0 ∈ C(0), (2)

models a state-constrained evolution where the constraint appears actively in the dynamics.

It is different from weak flow invariance, a.k.a. viability, because the state constraint is built into the dynamics.

Note that the right-hand side of (1), (2) is discontinuous with respect to the state, for two different reasons.
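A projected-Euler sketch of (2) for an illustrative choice of data (unit disk C, constant field f — both my assumptions for the example): inside C the point follows f; once it reaches the boundary, the normal cone absorbs the outward component and the point stops at an equilibrium.

```python
import numpy as np

# Illustrative data: C = closed unit disk, f(x) = (1, 0) constant drift.
def proj_C(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

f = lambda x: np.array([1.0, 0.0])

h = 1e-3
x = np.array([0.0, 0.0])
for _ in range(int(3.0 / h)):
    # catching-up with perturbation: x_{k+1} = P_C(x_k + h f(x_k))
    x = proj_C(x + h * f(x))

# The trajectory moves right, hits the boundary at (1, 0) and stays there:
# f points into N_C((1, 0)), so the right-hand side of (2) contains 0.
```

This shows the "active constraint" mechanism: the dynamics themselves stop the trajectory at the boundary, with no extra state-constraint device.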


A proof of continuous dependence on the initial condition.

Two solutions:

ẋ1(t) ∈ −N_{C(t)}(x1(t)) + f(x1(t)), x1(0) = ξ1
ẋ2(t) ∈ −N_{C(t)}(x2(t)) + f(x2(t)), x2(0) = ξ2

By convexity:

⟨−ẋ1(t) + f(x1(t)), x2(t) − x1(t)⟩ ≤ 0
⟨−ẋ2(t) + f(x2(t)), x1(t) − x2(t)⟩ ≤ 0

Summing and rearranging:

⟨ẋ2(t) − ẋ1(t), x2(t) − x1(t)⟩ ≤ ⟨f(x2(t)) − f(x1(t)), x2(t) − x1(t)⟩
                                ≤ L‖x2(t) − x1(t)‖²

Conclude by Gronwall’s lemma. Also mildly nonconvex sets are allowed.
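The Gronwall estimate ‖x2(t) − x1(t)‖ ≤ e^{Lt}‖ξ2 − ξ1‖ can be checked numerically; the setup below (constant unit disk C, rotation field f with Lipschitz constant L = 1) is an illustrative assumption. Since the projection onto a convex set is nonexpansive, each projected-Euler step expands distances by at most a factor (1 + hL).

```python
import numpy as np

# Illustrative data: C = unit disk (constant), f = rotation field, L = 1.
def proj_C(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

f = lambda x: np.array([-x[1], x[0]])
L, h, T = 1.0, 1e-3, 2.0

x1 = np.array([0.3, 0.0])
x2 = np.array([0.35, 0.05])
d0 = np.linalg.norm(x2 - x1)
for _ in range(int(T / h)):
    x1 = proj_C(x1 + h * f(x1))
    x2 = proj_C(x2 + h * f(x2))

dist = np.linalg.norm(x2 - x1)  # should obey dist <= d0 * exp(L * T)
```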


Control problems

Given a dynamics, it is impossible to resist the temptation of adding some control:

ẋ(t) ∈ −N_{C(t)}(x(t)) + f(x(t), u), u ∈ U.

The control may appear in either highlighted item: in the moving set C(t) (shape optimization) or in the perturbation through u (classical control).

There are very few results on the control of the sweeping process. Results not presented today:

– Existence of optimal solutions: not difficult, under standard convexity & coercivity assumptions, thanks to the graph closedness of the normal cone and Attouch’s theorem (= normals to a converging sequence of convex sets do converge).

– Controllability (up to now no deep results: only a sufficient condition for the minimum time function to be finite and continuous in a neighborhood of the target, which is similar to a classical one).


Results presented today:

Characterization of the value function through Hamilton-Jacobi inequalities (i.e., it is the unique solution of a boundary value problem for a first order PDE to be interpreted in a suitable weak sense).

Necessary optimality conditions (i.e., PMP).


Hamilton-Jacobi characterization of the minimum time function

Consider the Optimal Control Problem

(P)  Minimize T > 0 such that

     ẋ(t) ∈ G(x(t)) a.e.,
     x(0) = x0, x(T) ∈ S.

Data: G : ℝⁿ ⇉ ℝⁿ is Lipschitz (set-valued); S ⊂ ℝⁿ is the target (a closed set).

Minimal Time Function:

T(x0) = inf{T > 0 : ∃ a solution x(·) of (P) s.t. x(0) = x0, x(T) ∈ S}
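For intuition, T(·) can be computed by dynamic programming in a toy instance (an illustrative assumption, not from the talk): n = 1, G(x) = [−1, 1], target S = {0}, for which T(x) = |x|. On a grid with spacing dx, moving to a neighboring node costs dx units of time.

```python
import numpy as np

# Toy minimum time problem: x' in [-1, 1], target S = {0}, so T(x) = |x|.
dx = 0.05
xs = np.arange(-2.0, 2.0 + dx / 2, dx)
T = np.full(len(xs), np.inf)
T[np.argmin(np.abs(xs))] = 0.0   # boundary condition: T = 0 on the target

for _ in range(200):              # iterate the dynamic programming operator
    Tn = T.copy()
    # best over velocities v = -1, +1: pay dx and continue from the neighbor
    Tn[1:-1] = np.minimum(T[1:-1], dx + np.minimum(T[:-2], T[2:]))
    T = Tn

# The fixed point of the iteration is T(x) = |x| on this grid.
```

This is exactly the dynamic programming principle 1)–2) of the next slide, applied with the step t = dx.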


Dynamic Programming for T(x).

Fix x0 ∈ ℝⁿ. Equivalent formulations:

1) for every solution x(·) and every t ∈ [0, T(x0)], T(x0) ≤ t + T(x(t));

2) there exists x̄(·) such that, for every t ∈ [0, T(x0)], T(x0) = t + T(x̄(t))

(i.e., x̄(·) is optimal for (P)).


Rephrasing 1) and 2): some definitions are needed.

K is any closed set, with (proximal) normal cone N_K^P. Denote:

h(x, p) := min_{v ∈ G(x)} v · p,   H(x, p) := max_{v ∈ G(x)} v · p.

Definition 1. (G, K) is weakly invariant if, for every x0 ∈ K, there exists a solution x such that

x(0) = x0, x(t) ∈ K ∀ t ∈ [0, T].

Proposition 1. (G, K) is weakly invariant if and only if h(x, p) ≤ 0 ∀ x ∈ K, ∀ p ∈ N_K^P(x).

Definition 2. (G, K) is strongly invariant if, for every x0 ∈ K and every solution x such that x(0) = x0, we have

x(t) ∈ K ∀ t ∈ [0, T].

Proposition 2. (G, K) is strongly invariant if and only if H(x, p) ≤ 0 ∀ x ∈ K, ∀ p ∈ N_K^P(x).

Remark: Lipschitz continuity of G is crucial for Prop. 2, but not for Prop. 1.
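A toy numerical check of Propositions 1 and 2 (illustrative assumptions: K = closed unit disk in ℝ², finite velocity set G(x) = {−x, x}): at every boundary point one velocity points inward and one outward, so h ≤ 0 (weak invariance holds) while H > 0 (strong invariance fails — the solution following v = x leaves K).

```python
import numpy as np

# Illustrative: K = unit disk; at a boundary point x the proximal normals
# are the nonnegative multiples of x itself. G(x) = {-x, x}.
def h(x, p, G):  # lower Hamiltonian: min over the velocity set
    return min(np.dot(v, p) for v in G)

def H(x, p, G):  # upper Hamiltonian: max over the velocity set
    return max(np.dot(v, p) for v in G)

for th in np.linspace(0.0, 2.0 * np.pi, 100):
    x = np.array([np.cos(th), np.sin(th)])  # boundary point of K
    p = x                                    # proximal unit normal at x
    G = [-x, x]
    assert h(x, p, G) <= 0.0   # Prop. 1: weakly invariant
    assert H(x, p, G) > 0.0    # Prop. 2 violated: not strongly invariant
```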


Remark: Assume T(·) is continuous. Denote:

epi T := {(x, α) : T(x) ≤ α} (a closed set),
hypo T := {(x, α) : T(x) ≥ α} (a closed set).

Proposition: Condition 2) is equivalent to weak invariance of (G × {−1}, epi T) (h(x, p) ≤ −1).

Proposition: Condition 1) is equivalent to strong invariance of (G × {−1}, hypo T) (h(x, p) ≥ −1).


Comparison Lemma. Suppose θ : ℝⁿ → ℝ ∪ {∞} is continuous, with θ(x) = 0 if x ∈ S (the target) and θ(x) > 0 otherwise. Then:

If (G × {−1}, epi θ) is weakly invariant, then θ(x) ≤ T(x) ∀ x.
If (G × {−1}, hypo θ) is strongly invariant, then θ(x) ≥ T(x) ∀ x.

Theorem (Wolenski-Zhuang, ’99: H-J characterization of T). There exists a unique θ : ℝⁿ → ℝ⁺ ∪ {∞} such that

(HJ): for each x ∉ S and p ∈ ∂Pθ(x), h(x, p) = −1.

(BC): each x ∈ S satisfies θ(x) = 0 and h(x, p) ≥ −1 ∀ p ∈ ∂Pθ(x).

(CV): T(x) = θ(x) for all x ∈ ℝⁿ.


We want to do the same for the controlled sweeping process (now the moving set is given, i.e., not controlled).

The main difficulty is the lack of Lipschitz continuity of the right-hand side of the dynamics. Of course dissipativity helps, but one has to figure out which is the right Hamiltonian.

(SP)  Minimize T over solutions of

      ẋ(t) ∈ −N_{C(t)}(x(t)) + G(x(t)) a.e.,
      x(t0) = x0 ∈ C(t0), x(T) ∈ S.

Assumptions: the moving set C(·) is Lipschitz, with closed convex values; G(·) is Lipschitz, with compact convex values.

Compatibility Condition: ∃ t̄ > 0 such that C(t̄) ∩ S ≠ ∅.

Method: characterization through invariance of the epigraph/hypograph.


Characterization of strong invariance for the perturbed sweeping process (C., Palladino; 2015)

Theorem: Assume (HG), (HC) and take K ⊂ Gr C closed. Then ({1} × (−N_C + G), K) is strongly invariant ⇐⇒ for every (τ, x) ∈ K,

min_{v ∈ {0} × (−N_{C(τ)}(x) ∩ (LC + MG)B)} v · p + max_{v ∈ {1} × G(x)} v · p ≤ 0

for every p ∈ N_K^P(τ, x).

Remark: Monotonicity of the normal cone plays a crucial role.


HJ inequalities for SP (C.-Palladino; ’15)

Define:

H(τ, x, λ, p) := min_{v ∈ {0} × (−N_{C(τ)}(x) ∩ (LC + MG)B) × {0}} v · p + min_{v ∈ {1} × G(x) × {−1}} v · p,

H+(τ, x, λ, p) := min_{v ∈ {0} × (−N_{C(τ)}(x) ∩ (LC + MG)B) × {0}} v · p + max_{v ∈ {1} × G(x) × {−1}} v · p.

Theorem: Assume T(·,·) is continuous. Then T(·,·) is the unique function satisfying:

T(t, x) > 0 ∀ (t, x) ∈ Gr C for which x ∉ S,
T(t, x) = 0 ∀ (t, x) ∈ Gr C for which x ∈ S,
H(t, x, T(t, x), p) ≤ 0 ∀ (t, x) ∈ Gr C with x ∉ S, ∀ p ∈ N^P_{epi T}(t, x, T(t, x)),
H+(t, x, T(t, x), p) ≤ 0 ∀ (t, x) ∈ Gr C, ∀ p ∈ N^P_{hypo T}(t, x, T(t, x)).

Remarks. 1) If x ∈ int C(t) and x ∉ S, then the classical HJ equality holds. 2) On the structure of the Hamiltonians: observe the min in the normal part.


Necessary optimality conditions 1: shape optimization (C., Hoang, Henrion, Mordukhovich)

Minimize a cost (final + integral, finite horizon) over trajectories of

ẋ(t) ∈ −N_{C(t)}(x(t)), x(0) = x0 ∈ C(0),

where C is assumed to be a polyhedron:

C(t) := {x ∈ ℝⁿ : ⟨ui(t), x⟩ ≤ bi(t), i = 1, …, m}

with ‖ui(t)‖ = 1 for all t ∈ [0, T], i = 1, …, m. The functions ui, bi are the controls: they are Lipschitz and satisfy a constraint qualification.
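Numerically, a catching-up scheme for such a polyhedral C(t) needs projections onto an intersection of half-spaces. With ‖ui‖ = 1 each single half-space projection is explicit, and Dykstra's algorithm combines them; the box example below is an illustrative choice of data, not from the talk.

```python
import numpy as np

def proj_halfspace(x, u, b):
    """Projection onto {y : <u, y> <= b}, assuming ||u|| = 1."""
    return x - max(0.0, np.dot(u, x) - b) * u

def proj_polyhedron(x, U, b, iters=1000):
    """Dykstra's algorithm for the intersection {y : <u_i, y> <= b_i for all i}."""
    y = np.array(x, dtype=float)
    corrections = [np.zeros_like(y) for _ in U]
    for _ in range(iters):
        for i, (u, bi) in enumerate(zip(U, b)):
            z = proj_halfspace(y + corrections[i], u, bi)
            corrections[i] = y + corrections[i] - z
            y = z
    return y

# Illustrative check: the unit square [0,1]^2 written as four half-spaces.
U = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
     np.array([0.0, 1.0]), np.array([0.0, -1.0])]
b = [1.0, 0.0, 1.0, 0.0]
y = proj_polyhedron(np.array([2.0, -0.5]), U, b)   # true projection is (1, 0)
```

Unlike plain cyclic projections, Dykstra's corrections guarantee convergence to the exact nearest point of the intersection, which is what the sweeping dynamics requires.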


The cost is

J[x, u, b] := φ(x(T)) + ∫₀ᵀ [ℓ1(t, z) + ℓ2(ẋ) + ℓ3(t, u̇, ḃ)] dt,

where u = (u1, …, um), b = (b1, …, bm) and the Lagrangians satisfy reasonable assumptions.

Why a polyhedron?

Because, as is usual in necessary conditions, one expects to use the generalized gradient of the dynamics with respect to the state, namely the generalized gradient of the normal cone, i.e., a second order object. Some explicit computations for that are available in the nonsmooth analysis literature.


Let z̄ := (x̄, ū, b̄) be a local W^{1,1} minimizer such that ū, b̄ are piecewise W^{2,∞} (a strong but not a crazy assumption, if we remember that u and b must be Lipschitz).

We use a method, designed by Mordukhovich, based on (time) discrete approximations:

– we construct approximations of z̄ and among them we minimize a discrete cost which involves a penalization of the squared distance from z̄;

– we prove that a sequence of minimizers of the discrete problems converges strongly to z̄;

– we write down necessary conditions for each discrete minimizer;

– we establish some a priori estimates, pass to the limit along such necessary conditions and obtain a set of necessary conditions for z̄.

(technical)

(55)

Let ¯z := (¯x , ¯u, ¯b) be a local W1,1 minimizer such that ¯u, ¯b are piecewise W2,∞ (a strong but not a crazy assumption, if we remember that u and b must be Lipschitz).

We use a method, designed by Mordukhovich, based on (time) discrete approximations:

– we construct approximations of ¯z and among them we minimize a discrete cost which involves a penalization of the squared distance from ¯z; – we prove that a sequence of minimizers of the discrete problems

converges strongly to ¯z;

– we write down necessary conditions for each discrete minimizer; – we establish some a priori estimates, pass to the limit along such necessary conditions and obtain a set of necessary conditions for ¯z.

(technical)

(56)

Let ¯z := (¯x , ¯u, ¯b) be a local W1,1 minimizer such that ¯u, ¯b are piecewise W2,∞ (a strong but not a crazy assumption, if we remember that u and b must be Lipschitz).

We use a method, designed by Mordukhovich, based on (time) discrete approximations:


Let z̄ := (x̄, ū, b̄) be a local W^{1,1} minimizer such that ū, b̄ are piecewise W^{2,∞} (a strong but not a crazy assumption, if we remember that u and b must be Lipschitz).

We use a method, designed by Mordukhovich, based on (time) discrete approximations:

– we construct approximations of z̄ and among them we minimize a discrete cost which involves a penalization of the squared distance from z̄;

– we prove that a sequence of minimizers of the discrete problems converges strongly to z̄;

– we write down necessary conditions for each discrete minimizer;

– we establish some a priori estimates, pass to the limit along such necessary conditions and obtain a set of necessary conditions for z̄.

(technical)

(61)
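The discrete approximations above rest on Moreau's classical catching-up scheme, x_{k+1} = proj_{C(t_{k+1})}(x_k). A minimal numerical sketch for a single moving halfspace C(t) = {x : ⟨u(t), x⟩ ≤ b(t)} (the polyhedral setting with m = 1; the function names are mine, not from the papers):

```python
import numpy as np

def project_halfspace(x, u, b):
    """Euclidean projection of x onto {z : <u, z> <= b}, u a unit vector."""
    viol = np.dot(u, x) - b
    return x - viol * u if viol > 0 else x

def catching_up(x0, u_of_t, b_of_t, T=1.0, N=1000):
    """Moreau's catching-up scheme: x_{k+1} = proj_{C(t_{k+1})}(x_k)."""
    dt = T / N
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for k in range(1, N + 1):
        t = k * dt
        x = project_halfspace(x, u_of_t(t), b_of_t(t))
        traj.append(x.copy())
    return np.array(traj)

# A halfspace {x : x_1 <= b(t)} whose wall moves left and sweeps the point.
u = lambda t: np.array([1.0, 0.0])
b = lambda t: 1.0 - t
traj = catching_up([1.0, 0.5], u, b)
print(traj[-1])  # swept to approximately (0, 0.5)
```

The point stays put until the wall catches it, then it is dragged with velocity normal to the boundary, exactly the "one sided movement" described at the start of the talk.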

Necessary conditions

Theorem. There exist a multiplier λ ≥ 0, an absolutely continuous adjoint arc p(·), signed vector measures γ and ξ, as well as functions w(·), v(·) ∈ L² with

w(t), v(t) ∈ co ∂ℓ(t, z̄(t), ˙z̄(t)) for a.e. t ∈ [0, T]

such that the following conditions are satisfied:

• The primal–dual dynamic relationships:

⟨ū_i(t), x̄(t)⟩ < b̄_i(t) =⇒ η_i(t) = 0 for a.e. t ∈ [0, T], i = 1, . . . , m,

η_i(t) > 0 =⇒ ⟨λv_x(t) − q_x(t), ū_i(t)⟩ = 0 for a.e. t ∈ [0, T], i = 1, . . . , m,

˙p(t) = λw(t) + (0, −η(t)(λv_x(t) − q_x(t)), 0) for a.e. t ∈ [0, T],

q_u(t) = λ (∂/∂ ˙u) ℓ₂( ˙ū(t), ˙b̄(t)) and q_b(t) = λ (∂/∂ ˙b) ℓ₂( ˙ū(t), ˙b̄(t)) for a.e. t,

where η(·) = (η₁(·), . . . , η_m(·)) with nonnegative components is a uniquely defined vector function determined by the representation

(62)

and where q is a function of bounded variation on [0, T] with its left-continuous representative given, for all t ∈ [0, T] except at most a countable subset, by

q(t) = p(t) − ( ∫_[t,T] Σ_{i=1}^m ū_i(s) dγ_i(s), ∫_[t,T] ⟨x̄(s), dγ(s)⟩ + 2 ∫_[t,T] ⟨ū(s), dξ(s)⟩, −∫_[t,T] dγ(s) ).

• The transversality conditions at the right endpoint:

−p_x(T) ∈ λ∂ϕ(x̄(T)) + Σ_{i=1}^m p_i^b(T) ū_i(T),

p_i^u(T) + p_i^b(T) x̄(T) = ⟨p_i^u(T) + p_i^b(T) x̄(T), ū_i(T)⟩ ū_i(T), i = 1, . . . , m,

p^b(T) ∈ R^m_+ and ⟨ū_i(T), x̄(T)⟩ < b̄_i(T) =⇒ p_i^b(T) = 0, i = 1, . . . , m.

(63)

• The measure nonatomicity condition:

If t ∈ [0, T) and ⟨ū_i(t), x̄(t)⟩ < b̄_i(t) for all i = 1, . . . , m, then there is a neighborhood V_t of t in [0, T] such that γ(V) = 0 for any Borel subset V of V_t.

• Nontriviality condition: We have

λ + ‖q(0)‖ + ‖p(T)‖ ≠ 0.

What is missing?

The maximization of the Hamiltonian!

However, our necessary conditions can be applied to some academic examples, and they do give information.

Generalizations (Tan Cao & Mordukhovich, 2016): allow a controlled dynamics like ˙x ∈ −N_C(x) + f(x, u), with applications to a model of the movement of a population in a crowded environment.

(64)


A maximum principle for the controlled sweeping process (Arroud–C., 2016)

Minimize h(x(T)) subject to

˙x(t) ∈ −N_{C(t)}(x(t)) + f(x(t), u(t)), x(0) = x₀ ∈ C(0),

with respect to u : [0, T] → U, u measurable (now C is given, smooth).

What should one expect?

An example: in R², minimize x(1) + y(1) subject to

( ˙x, ˙y) ∈ −N_C(x, y) + (u_x, u_y), |u_x|, |u_y| ≤ 1,

where C = {(x, y) : y ≥ 0}. (sketch)

Some degeneracy of the y-component of the adjoint vector is to be expected (it is not a pathology).

(69)


Method: Moreau–Yosida approximation (inspired by Brokate & Krejčí, DCDS-B 2013).

Given ε > 0 and a global minimizer (x̄, ū), one considers the problem of minimizing

h(x(T)) + (1/2) ∫₀ᵀ |u − ū|² dt

subject to

˙x_ε(t) = −(1/ε) ( x_ε(t) − proj_{C(t)}(x_ε(t)) ) + f(x_ε(t), u), x(0) = x₀.

Existence and uniqueness of a solution, and the estimate d(x_ε(t), C(t)) ≤ const · ε for all t ∈ [0, T]. Strong L² convergence of a sequence of minimizers u_ε to ū.

(73)
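A quick numerical illustration of the penalized dynamics and of the estimate d(x_ε(t), C(t)) ≤ const · ε, for a fixed set C = {(x, y) : y ≥ 0} and a constant field pushing into the constraint (the setup and names are illustrative, not taken from the paper):

```python
import numpy as np

def proj_C(p):
    """Projection onto C = {(x, y) : y >= 0}."""
    return np.array([p[0], max(p[1], 0.0)])

def moreau_yosida(x0, f, eps, T=1.0, dt=1e-4):
    """Explicit Euler for x' = -(1/eps)(x - proj_C(x)) + f(x).
    Stability of the stiff penalization term needs dt << eps."""
    x = np.array(x0, dtype=float)
    max_dist = 0.0
    for _ in range(int(T / dt)):
        x = x + dt * (-(x - proj_C(x)) / eps + f(x))
        max_dist = max(max_dist, np.linalg.norm(x - proj_C(x)))
    return x, max_dist

# Constant field pushing into the obstacle: the penalization balances it,
# so the distance to C settles near |f| * eps.
f = lambda x: np.array([0.0, -1.0])
for eps in (1e-1, 1e-2):
    _, dist = moreau_yosida([0.0, 0.5], f, eps)
    print(eps, dist)   # max distance shrinks like const * eps
```

For this field the penalized trajectory equilibrates at y ≈ −ε, so the maximal violation is proportional to ε, in line with the estimate on the slide.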


This is a classical control problem:

– write down necessary conditions;

– try to pass to the limit.

Remark. x_ε(t) − proj_{C(t)}(x_ε(t)) = ∇_x ½ d²(x_ε(t), C(t)) (and ∇_x d²(·, C(t)) is Lipschitz).

Then the necessary conditions read as

− ˙p_ε(t) = −(1/(2ε)) ∇²_x d²(x_ε(t), C(t)) p_ε(t) + ∇_x f(x_ε(t), u_ε(t)) p_ε(t), t ∈ [0, T],
−p_ε(T) = ∇h(x_ε(T)),

together with the maximum condition (PMP).

(75)


Computation (valid for all x ∉ ∂C(t)):

½ ∇²_x d²(x, C(t)) = d(x, C(t)) ∇²_x d(x, C(t)) + ∇_x d(x, C(t)) ⊗ ∇_x d(x, C(t)),

where the first summand is continuous and the second is discontinuous at ∂C(t). Hence

(1/ε) ½ ∇²_x d²(x, C(t)) = (1/ε) d(x, C(t)) ∇²_x d(x, C(t)) + (1/ε) ∇_x d(x, C(t)) ⊗ ∇_x d(x, C(t)),

where the first summand is bounded (recall d(x_ε(t), C(t)) ≤ const · ε), while the second is not: ??

(full bottle vs. drunk mathematician)

Brokate & Krejčí smooth out the distance. (sketch) This simplifies the estimate of the red part, while complicating the estimate of the blue part: they need to assume uniform strict convexity of C.

(77)
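The decomposition of ½ ∇²_x d² above can be checked numerically away from ∂C, e.g. for C the closed unit disk in R², where for |x| > 1 one has d(x) = |x| − 1, ∇d(x) = x/|x| and ∇²d(x) = (I − (x/|x|)(x/|x|)ᵀ)/|x| (a sketch with my own helper names):

```python
import numpy as np

def d(x):            # distance to the closed unit disk, valid for |x| >= 1
    return np.linalg.norm(x) - 1.0

def half_d2(x):      # the function whose Hessian we decompose
    return 0.5 * d(x) ** 2

def fd_hessian(f, x, h=1e-5):
    """Central finite-difference Hessian of a scalar function f at x."""
    n = len(x); H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = h
            e_j = np.zeros(n); e_j[j] = h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

x = np.array([1.3, 0.4])             # a point outside the disk
r = np.linalg.norm(x); g = x / r     # grad d = x/|x|
hess_d = (np.eye(2) - np.outer(g, g)) / r
lhs = fd_hessian(half_d2, x)
rhs = d(x) * hess_d + np.outer(g, g)       # d * Hess d  +  grad d ⊗ grad d
print(np.max(np.abs(lhs - rhs)))     # small (finite-difference error)
```

The rank-one term ∇d ⊗ ∇d is the one that jumps when x crosses ∂C, which is exactly where the (1/ε)-scaled estimate becomes delicate.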


We avoid strict convexity of C(t) by adopting another strategy: an outward/inward pointing condition, which transforms the state discontinuity into a time discontinuity. (sketch)

This way we can get an a priori estimate also on the red part.

The difficulty (namely the discontinuity of ∇_x d(·, C(t)) at points of ∂C(t)) is dealt with by brute force.

(80)


Theorem (Arroud–C.). Let (x, u) be a global minimizer satisfying the outward (or inward) pointing condition. Then there exist a BV adjoint vector p : [0, T] → Rⁿ, a finite signed Radon measure μ on [0, T], and measurable functions ξ, η : [0, T] → R, with ξ(t) ≥ 0 for μ-a.e. t and 0 ≤ η(t) ≤ β + γ for a.e. t, satisfying the following properties:

• (adjoint equation) for all continuous functions ϕ : [0, T] → Rⁿ,

−∫_[0,T] ⟨ϕ(t), dp(t)⟩ = −∫_[0,T] ⟨ϕ(t), ∇_x d(x(t), C(t))⟩ ξ(t) dμ(t)
  − ∫_[0,T] ⟨ϕ(t), ∇²_x d(x(t), C(t)) p(t)⟩ η(t) dt
  + ∫_[0,T] ⟨ϕ(t), ∇_x f(x(t), u(t)) p(t)⟩ dt,

• (transversality condition) −p(T) = ∇h(x(T)),

• (maximality condition)

(82)

When x̄(t) ∈ int C(t), the usual adjoint equation holds true.

Under the outward pointing condition, the optimal trajectory is forced to slide on the boundary if it hits it.

The adjoint vector is absolutely continuous in the open interval where x̄(t) ∈ ∂C(t).

The measure μ may admit a Dirac mass only at the final time, as happens in the example.

(83)


Minimize x(1) + y(1) subject to

( ˙x, ˙y) ∈ −N_C(x, y) + (u_x, u_y), |u_x|, |u_y| ≤ 1,

where C = {(x, y) : y ≥ 0}.

The adjoint vector (p_x, p_y) is absolutely continuous on (0, 1), with ˙p_x = 0, ˙p_y = 0 a.e. on [0, 1], p_x(1) = p_y(1) = −1; p_x is continuous at t = 1 and p_y(1−) + 1 = 1, namely p_y(1−) = 0. Thus the adjoint vector (p_x, p_y) is:

p_x(t) = −1 for all t ∈ [0, 1],

p_y(t) = 0 for all t ∈ [0, 1), p_y(1) = −1 ⇒ μ = −δ₁.

The maximum condition reads as

⟨(−1, 0), (ū_x, ū_y)⟩ = max_{|u_x|,|u_y|≤1} ⟨(−1, 0), (u_x, u_y)⟩ for 0 ≤ t < 1,

i.e., −ū_x = max_{|u_x|≤1} {−u_x} ⇒ ū_x ≡ −1, no information on ū_y.

(85)
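One can corroborate this numerically with a projection (catching-up) scheme: the control u_x ≡ −1 drives x(1) to x₀ − 1, and once the trajectory sits on the boundary any u_y ≤ 0 produces the same endpoint, consistent with the maximum condition giving no information on ū_y (scheme and names are mine, not from the paper):

```python
import numpy as np

def sweep_with_drift(z0, u, T=1.0, N=2000):
    """Catching-up scheme for z' in -N_C(z) + u, C = {(x, y) : y >= 0}:
    one explicit Euler step of the drift, then project back onto C."""
    dt = T / N
    z = np.array(z0, dtype=float)
    for _ in range(N):
        z = z + dt * np.asarray(u, dtype=float)
        z[1] = max(z[1], 0.0)   # projection onto C kills the inward push
    return z

z0 = (0.0, 0.2)
print(sweep_with_drift(z0, u=(-1.0, -1.0)))  # endpoint near (-1, 0): cost -1
print(sweep_with_drift(z0, u=(-1.0, -0.3)))  # same endpoint: u_y is not seen
```

This is the degeneracy announced earlier: the normal cone absorbs the y-component of the control on the boundary, so the adjoint system cannot determine ū_y there.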

Some information on ū_y in the interval where the optimal solution belongs to int C is lost.

Work in progress (with Michele Palladino) to find another method for obtaining necessary optimality conditions that allows a more general behavior of minimizers.

(86)

References

Colombo, Palladino,
The Minimum Time Function for the Controlled Moreau’s Sweeping Process,
SIAM J. Control Optim., to appear.

Colombo, Hoang, Henrion, Mordukhovich,
Optimal control of the sweeping process: the polyhedral case,
J. Differential Equations 260 (2016), 3397–3447.

Colombo, Hoang, Henrion, Mordukhovich,
Discrete approximations of a controlled sweeping process,
Set-Valued Var. Anal. 23 (2015), 69–86.

Arroud, Colombo,
A maximum principle for the controlled sweeping process,
preprint (2016).

(87)

THANK YOU FOR YOUR ATTENTION.
