
Doctoral School, Academic Year 2011/12

Paolo Guiotto

Contents

1 Introduction

2 Preliminaries: functional setting

3 Markov Processes

4 Feller processes

5 Strongly continuous semigroups on Banach spaces

6 Hille–Yosida theorem

7 Generators of Feller semigroups

8 Examples

8.2 Brownian motion

8.2.1 Case d = 1

8.3.1 Case d > 2

8.11 Ising model

9 Exponential Formula

1 Introduction

The main aim of these notes is to connect Markov processes to semigroups of linear operators on function spaces, an important connection that provides a very useful and natural way to define a Markov process through its associated semigroup.

There are many different definitions of Markov process in the literature. If this creates a little confusion at first sight, all of them are of course based on the same idea: a Markov process is an evolution phenomenon whose future depends on the past only through the present. Actually, in most applications we are interested in families of Markov processes living in some state space E, characterized by a parameter x ∈ E which represents the starting point of the various processes of the family. Moreover, we could define the processes as usual stochastic processes (that is, functions of time and of some random parameter) or, as we prefer here, through their laws, that is, measures on the path space that,


for a general Markov process, is the space D_E[0, +∞[ of E-valued functions which are continuous from the right and have limits from the left (so they may have jumps).

As for ordinary dynamical systems, a possibly nonlinear dynamics naturally induces a linear dynamics on observables, that is, on numerical functions defined on the state space E. We gain something in the description (linearity) but we have to move to an infinite dimensional context (function spaces). A priori this is neither better nor worse, but for some questions it may be preferable to work in a possibly infinite dimensional but linear setting. In many applications (e.g. Markov processes arising as diffusions or interacting particle systems) this approach gives a very quick way to define the process itself by defining just a linear (generally unbounded) operator on observables: the so-called infinitesimal generator.

2 Preliminaries: functional setting

Throughout this section we provide the preliminaries about the main setting in which we will work. In all that follows, (E, d) will play the role of state space and will be a locally compact metric space. We will call

• B(E) the σ−algebra of Borel sets of E;

• B(E) the set of bounded measurable real valued functions on E: in particular, we recall that a function ϕ : E −→ R is called measurable if ϕ⁻¹(A) ∈ B(E) for any Borel set A ⊂ R.

• C0(E) the set of continuous real valued functions on E vanishing at infinity. By this we mean, in particular, that

∃ x0 ∈ E : ∀ε > 0, ∃ R(ε) > 0 : |f(x)| ≤ ε, ∀x ∈ E : d(x, x0) ≥ R(ε). (2.1)

Of course C0(E) ⊂ B(E). On these spaces a natural norm is defined,

‖ϕ‖ := sup_{x∈E} |ϕ(x)|, ϕ ∈ B(E).

It is standard work to check that B(E) and C0(E) are Banach spaces with this norm. In general, a function ϕ : E −→ R will be called an observable. Moreover, if f ∈ C0(E) the sup-norm is actually a true maximum, as is easily proved by applying the Weierstrass theorem, E being locally compact. Sometimes it will be useful to recall the

Theorem 2.1 (Riesz). The topological dual of C0(E) is the space of all bounded real valued measures on B(E). In particular,

⟨µ, ϕ⟩ = ∫_E ϕ(x) µ(dx).

Moreover, C0(E)* = closure⟨δx : x ∈ E⟩ (the closure of the linear span of the Dirac masses), where ⟨δx, ϕ⟩ = ϕ(x).

The natural space for trajectories of E−valued Markov processes is the space

D_E[0, +∞[ := {γ : [0, +∞[ −→ E : γ right continuous and with left limits}.

Frenchmen call this type of trajectory càdlàg: continue à droite et avec limite à gauche. The space E is called the state space. We define also the classical coordinate mappings

π_t : D_E[0, +∞[ −→ E, π_t(γ) := γ(t), t ≥ 0.

Moreover, we will define


• F the smallest σ-algebra on D_E[0, +∞[ such that all the π_t are measurable;

• F_t the smallest σ-algebra on D_E[0, +∞[ such that all the π_s for s ≤ t are measurable.

Clearly (F_t) is an increasing family of σ-algebras.

3 Markov Processes

Definition 3.1. Let (E, d) be a metric space. A family (Px)_{x∈E} of probability measures on the path space (D_E[0, +∞[, F) is called a Markov process if

i) Px(γ(0) = x) = 1, for any x ∈ E.

ii) (Markov property) Px(γ(t + ·) ∈ F | F_t) = P_{γ(t)}(F), for any F ∈ F and t ≥ 0.

iii) the mapping x 7−→ Px(F) is measurable for any F ∈ F.

Let (Px)_{x∈E} be a Markov process. We denote by Ex the expectation w.r.t. Px, that is,

Ex[Φ] = ∫_{D_E[0,+∞[} Φ dPx, Φ ∈ L¹(D_E[0, +∞[, F, Px).

The Markov property has a more flexible and general form by means of conditional expectations:

Ex[Φ(γ(t + ·)) | F_t] = E_{γ(t)}[Φ], ∀Φ ∈ L¹. (3.1)

We now introduce the fundamental object of our investigations: let us define

S(t)ϕ(x) := Ex[ϕ(γ(t))] ≡ ∫_{D_E[0,+∞[} ϕ(γ(t)) dPx(γ), ϕ ∈ B(E). (3.2)

We will see immediately that every S(t) is well defined for t ≥ 0. The family (S(t))_{t≥0} is called the Markov semigroup associated to the process (Px)_{x∈E}. This is because of the following

Proposition 3.2. Let (Px)_{x∈E} be a Markov process on E and (S(t))_{t≥0} the associated Markov semigroup. Then:

i) S(t) : B(E) −→ B(E) is a bounded linear operator for any t ≥ 0 and ‖S(t)ϕ‖ ≤ ‖ϕ‖ for any ϕ ∈ B(E), t ≥ 0 (that is, ‖S(t)‖ ≤ 1 for any t ≥ 0).

ii) S(0) = I.

iii) S(t + r) = S(t)S(r), for any t, r ≥ 0.

iv) S(t)ϕ ≥ 0 a.e. if ϕ ≥ 0 a.e.: in particular, if ϕ ≤ ψ a.e., then S(t)ϕ ≤ S(t)ψ a.e..

v) S(t)1 = 1 a.e. (here 1 is the function constantly equal to 1).


Proof — i) It is standard (for the measurability proceed by approximation: the statement is true for ϕ = χ_A, A ∈ B(E), by iii) of the definition of Markov process, because

S(t)χ_A(x) = Px(γ(t) ∈ A) = Px(π_t⁻¹(A)),

and F := π_t⁻¹(A) ∈ F; hence it holds for ϕ a finite sum of such indicators, that is, for simple functions; for general ϕ take first ϕ ≥ 0 and approximate it by an increasing sequence of simple functions). Linearity follows from the linearity of the integral. Clearly

|S(t)ϕ(x)| ≤ Ex[|ϕ(γ(t))|] ≤ ‖ϕ‖, ∀x ∈ E, ⟹ ‖S(t)ϕ‖ ≤ ‖ϕ‖, ∀t ≥ 0.

In other words S(t) ∈ L(B(E)) and ‖S(t)‖ ≤ 1.

ii) Evident.

iii) This involves the Markov property:

S(t + r)ϕ(x) = Ex[ϕ(γ(t + r))] = Ex[Ex[ϕ(γ(t + r)) | F_t]] =(3.1)= Ex[E_{γ(t)}[ϕ(γ(r))]] = Ex[S(r)ϕ(γ(t))] = S(t)[S(r)ϕ](x).

iv), v) Evident.
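On a finite state space everything above becomes concrete matrix algebra: a conservative Q-matrix generates S(t) = exp(tQ), and properties i)–v) of Proposition 3.2 can be checked numerically. A minimal sketch, with an arbitrary illustrative generator Q; the uniformization routine is a standard trick, not taken from these notes:

```python
import numpy as np

def uniformization(Q, t, terms=200):
    """Approximate S(t) = exp(tQ) for a conservative Q-matrix via
    uniformization: exp(tQ) = e^{-qt} * sum_k (qt)^k/k! * P^k,
    with P = I + Q/q and q >= max_i |Q_ii| (so P is stochastic)."""
    n = Q.shape[0]
    q = max(np.max(-np.diag(Q)), 1e-12)
    P = np.eye(n) + Q / q
    S, term = np.zeros((n, n)), np.exp(-q * t) * np.eye(n)
    for k in range(terms):
        S += term
        term = term @ P * (q * t / (k + 1))
    return S

# rows of Q sum to 0, off-diagonal entries are rates >= 0
Q = np.array([[-2.0, 1.0, 1.0],
              [0.5, -0.5, 0.0],
              [1.0, 2.0, -3.0]])

S1, S2, S3 = (uniformization(Q, t) for t in (1.0, 2.0, 3.0))

assert np.all(S1 >= -1e-12)                     # iv) positivity
assert np.allclose(S1.sum(axis=1), 1.0)         # v) S(t)1 = 1
assert np.allclose(S1 @ S2, S3, atol=1e-10)     # iii) S(1)S(2) = S(3)
phi = np.array([1.0, -2.0, 0.5])
assert np.max(np.abs(S1 @ phi)) <= np.max(np.abs(phi)) + 1e-12  # i) contraction
```

Uniformization avoids the cancellation issues of a raw Taylor series, since all the matrices involved are entrywise nonnegative.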

4 Feller processes

Working with bounded measurable observables is in general quite difficult because of their poor regularity properties, so it is better to restrict to continuous observables:

Definition 4.1 (Feller property). Let (S(t))_{t≥0} be the Markov semigroup associated to a Markov process (Px)_{x∈E}, where (E, d) is locally compact. We say that the semigroup fulfills the Feller property if

S(t)f ∈ C0(E), ∀f ∈ C0(E), ∀t ≥ 0.

This property turns out to yield strong continuity of the semigroup:

Theorem 4.2 (strong continuity). Let (S(t))_{t≥0} be the Markov semigroup associated to a Markov process (Px)_{x∈E}, where (E, d) is locally compact. If (S(t))_{t≥0} fulfills the Feller property, then it is strongly continuous on C0(E), that is,

S(·)ϕ ∈ C([0, +∞[; C0(E)), ∀ϕ ∈ C0(E).

Proof — First we prove right weak continuity, that is,

lim_{t→t0+} S(t)ϕ(x) = S(t0)ϕ(x), ∀x ∈ E, ∀ϕ ∈ Cb(E).

This follows immediately as an application of dominated convergence and of the right continuity of the trajectories. Indeed,

lim_{t→t0+} S(t)ϕ(x) = lim_{t→t0+} ∫_{D_E[0,+∞[} ϕ(γ(t)) Px(dγ).

Now: |ϕ(γ(t))| ≤ ‖ϕ‖, which is Px-integrable, and γ(t) −→ γ(t0) as t −→ t0+ because γ ∈ D_E[0, +∞[. In a similar way we have

∃ lim_{t→t0−} S(t)ϕ(x), ∀x ∈ E, ∀ϕ ∈ C0(E), ∀t0 > 0.

So: fixing x ∈ E, the function t 7−→ S(t)ϕ(x) has a limit from the left and from the right at any point of [0, +∞[. It is a standard first year Analysis exercise to deduce that S(·)ϕ(x) has at most a countable number of discontinuities, hence is measurable.

Now, define

Rλϕ(x) := ∫_0^{+∞} e^{−λt} S(t)ϕ(x) dt, λ > 0, x ∈ E.

We will see later what the meaning of this is. The integral is well defined and convergent because

|e^{−λt} S(t)ϕ(x)| = e^{−λt} |S(t)ϕ(x)| ≤ e^{−λt} ‖ϕ‖.

We claim that Rλϕ ∈ C0(E) if ϕ ∈ C0(E). Indeed: everything follows immediately as an application of dominated convergence, because each S(t)ϕ ∈ C0(E) and because of the usual bound |S(t)ϕ(x)| ≤ ‖ϕ‖.

We will now show strong continuity for ϕ of type Rλϕ, that is, S(·)Rλϕ ∈ C([0, +∞[; C0(E)). This will be done in steps: the first is to prove strong right continuity, that is,

S(t)Rλϕ −→ S(t0)Rλϕ in C0(E), as t −→ t0+.

We start with the case t0 = 0. Notice that

S(t)Rλϕ(x) = S(t) ∫_0^{+∞} e^{−λr} S(r)ϕ(x) dr = ∫_0^{+∞} e^{−λr} S(r + t)ϕ(x) dr = e^{λt} ∫_t^{+∞} e^{−λr} S(r)ϕ(x) dr,

hence

S(t)Rλϕ(x) − Rλϕ(x) = (e^{λt} − 1) ∫_t^{+∞} e^{−λr} S(r)ϕ(x) dr − ∫_0^t e^{−λr} S(r)ϕ(x) dr,

therefore

‖S(t)Rλϕ − Rλϕ‖ ≤ (e^{λt} − 1) ∫_t^{+∞} e^{−λr} ‖ϕ‖ dr + ∫_0^t e^{−λr} ‖ϕ‖ dr ≤ ((e^{λt} − 1)/λ) ‖ϕ‖ + t‖ϕ‖ −→ 0, as t −→ 0+.

For generic t0 we have

‖S(t)Rλϕ − S(t0)Rλϕ‖ = ‖S(t0)(S(t − t0)Rλϕ − Rλϕ)‖ ≤ ‖S(t − t0)Rλϕ − Rλϕ‖ −→ 0, as t −→ t0+.

Now we can prove the left continuity at t0: assuming t < t0,

‖S(t)Rλϕ − S(t0)Rλϕ‖ = ‖S(t)(S(t0 − t)Rλϕ − Rλϕ)‖ ≤ ‖S(t0 − t)Rλϕ − Rλϕ‖ −→ 0, t −→ t0−.

We will now show that the set of the Rλϕ (λ > 0, ϕ ∈ C0(E)) is dense in C0(E). To this aim take µ ∈ C0(E)* such that

0 = ⟨µ, Rλϕ⟩ = ∫_E Rλϕ(x) dµ(x), ∀ϕ ∈ C0(E).

The aim is to prove that µ ≡ 0. Now notice that

λRλϕ(x) = ∫_0^{+∞} e^{−λt} S(t)ϕ(x) d(λt) = ∫_0^{+∞} e^{−r} S(r/λ)ϕ(x) dr.

Applying the right continuity at 0 of the semigroup and dominated convergence, it is easy to deduce that, as λ −→ +∞,

λRλϕ(x) −→ ∫_0^{+∞} e^{−r} S(0)ϕ(x) dr = S(0)ϕ(x) ∫_0^{+∞} e^{−r} dr = ϕ(x).


Moreover, always by the previous formula,

‖λRλϕ‖ ≤ ∫_0^{+∞} e^{−r} ‖ϕ‖ dr = ‖ϕ‖.

Therefore, applying dominated convergence (µ is a finite measure) we have

0 = ∫_E λRλϕ dµ −→ ∫_E ϕ dµ, ∀ϕ ∈ C0(E).

But then ⟨µ, ϕ⟩ = 0 for any ϕ ∈ C0(E), and this means µ = 0.

Conclusion: given ϕ ∈ C0(E) and ε > 0, take ψ ∈ ∪_{λ>0} Rλ(C0(E)) such that ‖ϕ − ψ‖ ≤ ε (such a ψ exists by the previous step). By the first step, S(·)ψ ∈ C([0, +∞[; C0(E)). Therefore

‖S(t)ϕ − S(t0)ϕ‖ ≤ ‖S(t)ϕ − S(t)ψ‖ + ‖S(t)ψ − S(t0)ψ‖ + ‖S(t0)ψ − S(t0)ϕ‖ ≤ 2‖ϕ − ψ‖ + ‖S(t)ψ − S(t0)ψ‖ ≤ 2ε + ‖S(t)ψ − S(t0)ψ‖,

therefore

lim sup_{t→t0} ‖S(t)ϕ − S(t0)ϕ‖ ≤ 2ε + lim sup_{t→t0} ‖S(t)ψ − S(t0)ψ‖ = 2ε,

and because ε is arbitrary the conclusion follows.

Because this will be the main subject of our investigations, we introduce the

Definition 4.3. A family of linear operators (S(t))_{t≥0} ⊂ L(C0(E)), (E, d) a locally compact metric space, is called a Feller semigroup if

i) S(0) = I;

ii) S(t + r) = S(t)S(r) for any t, r ≥ 0;

iii) S(·)ϕ ∈ C([0, +∞[; C0(E));

iv) S(t)ϕ ≥ 0 for any ϕ ∈ C0(E), ϕ ≥ 0, and for all t ≥ 0;

v) S(t)ϕ ≤ 1 for any ϕ ∈ C0(E), ϕ ≤ 1, and for all t ≥ 0.

We say that a Feller semigroup is conservative if

∀(ϕn) ⊂ C0(E) : ϕn ↗ 1 on E ⟹ S(t)ϕn ↗ 1 on E.

Remark 4.4. Notice in particular that from the previous properties it follows that ‖S(t)‖ ≤ 1. Indeed: if ‖ϕ‖ ≤ 1, then in particular |ϕ(x)| ≤ 1, that is, −1 ≤ ϕ(x) ≤ 1 for any x ∈ E. By iv) and v) we have

−1 ≤ ϕ ≤ 1 ⟹ −1 ≤ S(t)ϕ(x) ≤ 1, ∀x ∈ E ⟺ |S(t)ϕ(x)| ≤ 1, ∀x ∈ E ⟺ ‖S(t)ϕ‖ ≤ 1.

But this means exactly ‖S(t)‖ ≤ 1.

It is natural to ask whether from a Feller semigroup it is possible to construct a Markov process. This is actually true, and the first step is the construction of a transition probability function:


Proposition 4.5. Let (S(t))_{t≥0} be a conservative Feller semigroup on C0(E), (E, d) a locally compact metric space. For any t ≥ 0, x ∈ E there exists a probability measure

Pt(x, ·) : B(E) −→ [0, 1], such that

S(t)ϕ(x) = ∫_E ϕ(y) Pt(x, dy), ∀ϕ ∈ C0(E).

Moreover:

i) x 7−→ Pt(x, F) ∈ B(E) for any t ≥ 0, F ∈ B(E).

ii) (Chapman–Kolmogorov equation) for any t, r ≥ 0,

Pt+r(x, F) = ∫_E Pt(x, dy) Pr(y, F). (4.1)

Pt(x, dy) is called a transition probability.

Proof — Fix t ≥ 0 and x ∈ E and consider the functional ϕ 7−→ S(t)ϕ(x). It is clearly linear and continuous, so by the Riesz representation Theorem 2.1 there exists a finite Borel measure Pt(x, ·) on E such that

S(t)ϕ(x) = ∫_E ϕ(y) Pt(x, dy), ∀ϕ ∈ C0(E).

Moreover, because S(t)ϕ ≥ 0 if ϕ ≥ 0, Pt(x, ·) is a positive measure. Moreover, by conservativity and monotone convergence,

Pt(x, E) = ∫_E Pt(x, dy) = lim_n ∫_E ϕn(y) Pt(x, dy) = lim_n S(t)ϕn(x) = 1, ∀x ∈ E, t ≥ 0,

so Pt(x, dy) turns out to be a probability measure. By standard approximation methods i) follows. ii) follows by the semigroup property: first notice that

∫_E ϕ(z) Pt+r(x, dz) = S(t + r)ϕ(x) = S(t)S(r)ϕ(x) = ∫_E S(r)ϕ(y) Pt(x, dy) = ∫_E ( ∫_E ϕ(z) Pr(y, dz) ) Pt(x, dy).

This holds for any ϕ ∈ C0(E), hence, by approximation, for any ϕ ∈ B(E), and therefore also in the case ϕ = χ_F. In this case we obtain

Pt+r(x, F) = ∫_E ( ∫_E χ_F(z) Pr(y, dz) ) Pt(x, dy) = ∫_E Pr(y, F) Pt(x, dy),

which is just the Chapman–Kolmogorov equation.

Remark 4.6. If S(t) is not conservative, Pt(x, E) could possibly be strictly less than 1. This corresponds to the case in which the underlying process we are trying to reconstruct, leaving at time 0 from the state x, dies (or escapes to infinity) at some time t > 0. If this is the case we can construct the one point compactification of E, Ē := E ∪ {∞}, with the usual topology, and set

P̄t(x, F) := Pt(x, F), if x ∈ E, F ⊂ E,
P̄t(x, F) := 1 − Pt(x, E), if x ∈ E, F = {∞},
P̄t(x, F) := 0, if x = ∞, F ⊂ E,
P̄t(x, F) := 1, if x = ∞, F = {∞}.

It is easy to check that i) and ii) of the previous Proposition are still true.
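Remark 4.6 can be illustrated with a finite-state process killed at a positive rate: the matrix exp(tQ) is then sub-stochastic, and adjoining a cemetery state as above restores a stochastic transition matrix still satisfying the Chapman–Kolmogorov equation. A sketch, where the generator and killing rate are arbitrary illustrative choices:

```python
import numpy as np

def expm(M, t, terms=100):
    """Plain Taylor series for exp(t M); adequate for small matrices and times."""
    S, term = np.zeros_like(M), np.eye(M.shape[0])
    for k in range(terms):
        S += term
        term = term @ M * (t / (k + 1))
    return S

# Sub-stochastic example: a 2-state process killed at rate c = 0.7 from state 0,
# so the rows of Q sum to <= 0 and P_t(x, E) < 1.
c = 0.7
Q = np.array([[-1.0 - c, 1.0],
              [0.5, -0.5]])

def bar(t):
    """Extend P_t = exp(tQ) to the compactified space E ∪ {∞} as in Remark 4.6."""
    P = expm(Q, t)
    B = np.zeros((3, 3))
    B[:2, :2] = P
    B[:2, 2] = 1.0 - P.sum(axis=1)   # missing mass goes to the cemetery
    B[2, 2] = 1.0                    # ∞ is absorbing
    return B

Pt = expm(Q, 1.0)
assert np.all(Pt.sum(axis=1) < 1.0)               # sub-stochastic
assert np.allclose(bar(1.0).sum(axis=1), 1.0)     # stochastic after extension
assert np.allclose(bar(1.0) @ bar(2.0), bar(3.0)) # Chapman–Kolmogorov survives
```

The extended matrices are exactly exp(tQ̄) for the generator Q̄ obtained by adding to Q a column with the killing rates and a zero row for ∞, which is why the semigroup property is preserved.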

Transition probabilities are a good tool for constructing Markov processes. The idea is to use the Kolmogorov extension Theorem, imposing the Markov property through the transition probabilities by means of the Chapman–Kolmogorov equation. Basically we want that

Px(γ ∈ D_E[0, +∞[ : γ(0) = x, γ(t1) ∈ F1, . . . , γ(tn) ∈ Fn) = ∫_{F1} Pt1(x, dy1) ∫_{F2} Pt2−t1(y1, dy2) · · · ∫_{Fn} Ptn−tn−1(yn−1, dyn). (4.2)

We are not interested in going deeper into this here; see Revuz & Yor [4].

5 Strongly continuous semigroups on Banach spaces

In this section we will see some general facts that involve only the structure of a strongly continuous semigroup on a generic Banach space X.

Definition 5.1. A family (S(t))_{t≥0} of bounded linear operators on a Banach space X is called a strongly continuous semigroup if

i) S(0) = I;

ii) S(t + r) = S(t)S(r), for all t, r ≥ 0;

iii) t 7−→ S(t)ϕ ∈ C([0, +∞[; X) for all ϕ ∈ X.

If ‖S(t)‖ ≤ 1 for all t ≥ 0 the semigroup is called a contraction semigroup.

By what we have seen in the previous section we will be particularly interested in contraction semigroups, so we will limit the general discussion to this case, even if all the theorems have extensions to the general case. It is natural to expect that, in a suitable sense, S(t) = e^{tA} for some A. Thinking of the case A ∈ L(X),

A = lim_{t→0+} (e^{tA} − I)/t.

For this reason we introduce the following


Definition 5.2. Let (S(t))_{t≥0} be a strongly continuous semigroup on a Banach space X. The operator

Aϕ := lim_{h→0+} (S(h)ϕ − ϕ)/h, ϕ ∈ D(A) := { ϕ ∈ X : ∃ lim_{h→0+} (S(h)ϕ − ϕ)/h }

is called the infinitesimal generator of (S(t))_{t≥0}.

The reason why we call it the infinitesimal generator will become clear with the Hille–Yosida theorem: we first characterize some properties that an infinitesimal generator satisfies; we will then see that these properties are enough to construct a strongly continuous semigroup from an operator which satisfies them. A first set of properties is given by the

Theorem 5.3. Let X be a Banach space, (S(t))_{t≥0} a contraction semigroup and A its infinitesimal generator. Then

i) D(A) is dense in X;

ii) A is closed (i.e. G(A) := {(ϕ, Aϕ) ∈ X × X : ϕ ∈ D(A)} is closed in the product topology of X × X).

Proof — i) Let ϕ ∈ X and define ψε := (1/ε) ∫_0^ε S(r)ϕ dr. Clearly ψε −→ ϕ as ε ↘ 0 (mean value theorem). Let us prove that ψε ∈ D(A) for all ε > 0. We have

(1/h)(S(h)ψε − ψε) = (1/(εh))( S(h) ∫_0^ε S(r)ϕ dr − ∫_0^ε S(r)ϕ dr ) = (1/(εh))( ∫_h^{ε+h} S(r)ϕ dr − ∫_0^ε S(r)ϕ dr ) = (1/ε)( (1/h) ∫_ε^{ε+h} S(r)ϕ dr − (1/h) ∫_0^h S(r)ϕ dr ) −→ (1/ε)(S(ε)ϕ − ϕ),

always in force of the mean value theorem. This proves i).

ii) We have to prove that

if (ϕn, Aϕn) −→ (ϕ, ψ), then (ϕ, ψ) ∈ G(A), i.e. ϕ ∈ D(A) and ψ = Aϕ.

In other words, we have to prove that

∃ lim_{h→0+} (S(h)ϕ − ϕ)/h = ψ.

Now

S(h)ϕ − ϕ = lim_n (S(h)ϕn − ϕn).

Because ϕn ∈ D(A) we know that t 7−→ S(t)ϕn is differentiable at t = 0 from the right. More:

Lemma 5.4. If ϕ ∈ D(A) then

∃ d/dt S(t)ϕ = S(t)Aϕ = AS(t)ϕ, ∀t ≥ 0.

Proof of the Lemma — We have to prove that t 7−→ S(t)ϕ is differentiable for all t ≥ 0 and that S(t)ϕ ∈ D(A). We start with the first.

First step: d⁺/dt S(t)ϕ exists. Indeed,

d⁺/dt S(t)ϕ = lim_{h→0+} (S(t + h)ϕ − S(t)ϕ)/h = S(t) lim_{h→0+} (S(h)ϕ − ϕ)/h = S(t)Aϕ.


Second step: d⁻/dt S(t)ϕ exists for t > 0. Indeed,

(S(t)ϕ − S(t − h)ϕ)/h = S(t − h) (S(h)ϕ − ϕ)/h.

Now:

‖S(t − h)(S(h)ϕ − ϕ)/h − S(t)Aϕ‖ ≤ ‖S(t − h)[(S(h)ϕ − ϕ)/h − Aϕ]‖ + ‖S(t − h)Aϕ − S(t)Aϕ‖.

Clearly, by strong continuity, the second term converges to 0. For the first, by the estimate ‖S(t)‖ ≤ 1 we obtain

‖S(t − h)[(S(h)ϕ − ϕ)/h − Aϕ]‖ ≤ ‖(S(h)ϕ − ϕ)/h − Aϕ‖ −→ 0, h ↘ 0.

From this the conclusion follows.

Third step: S(t)ϕ ∈ D(A). Indeed,

(S(h)S(t)ϕ − S(t)ϕ)/h = S(t)(S(h)ϕ − ϕ)/h −→ S(t)Aϕ, ⟹ AS(t)ϕ = S(t)Aϕ.

Coming back to the proof of the Theorem,

S(h)ϕn − ϕn = ∫_0^h (d/dr) S(r)ϕn dr = ∫_0^h S(r)Aϕn dr −→ ∫_0^h S(r)ψ dr.

To justify the last passage, notice that

‖ ∫_0^h S(r)Aϕn dr − ∫_0^h S(r)ψ dr ‖ ≤ ∫_0^h ‖S(r)(Aϕn − ψ)‖ dr ≤ h‖Aϕn − ψ‖ −→ 0.

Finally:

S(h)ϕ − ϕ = ∫_0^h S(r)ψ dr.

Therefore,

(S(h)ϕ − ϕ)/h = (1/h) ∫_0^h S(r)ψ dr −→ ψ, ⟹ ϕ ∈ D(A) and Aϕ = ψ.

The previous result gives the first indications about the properties of the infinitesimal generator. Of particular interest is the weak continuity property expressed by the closedness of the operator A. This property is weaker than continuity and it can be fulfilled by unbounded operators. For instance,

Example 5.5. Let X = C([0, 1]) be endowed with the sup-norm ‖ · ‖ and A given by

D(A) := {ϕ ∈ C¹([0, 1]) : ϕ(0) = 0}, Aϕ = ϕ′;

then A is closed.

Sol. — Indeed, if (ϕn) ⊂ D(A) is such that ϕn −→ ϕ in X with Aϕn = ϕn′ −→ ψ in X (that is, uniformly on [0, 1]), by a well known result ϕ ∈ C¹([0, 1]) and ϕ′ = ψ, that is, Aϕ = ψ. To finish just notice that ϕ(0) = lim ϕn(0) = 0, therefore ϕ ∈ D(A).


Unfortunately, A being unbounded in general, it seems difficult to give a good meaning to the exponential series

S(t)ϕ ≡ e^{tA}ϕ = Σ_{n=0}^{∞} ((tA)^n/n!) ϕ,

in order to define the semigroup from a given A. There is however another possible formula defining the exponential, that is,

e^{tA}ϕ = lim_{n→+∞} (I + tA/n)^n ϕ ≡ lim_{n→+∞} (I − tA/n)^{−n} ϕ. (5.1)

While the first limit looks bad, because I + (t/n)A and its powers inherit the bad behaviour of A, the second is much more interesting, because we may expect (I − µA)^{−1} to be nicer than A.
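For a matrix A both limits in (5.1) make sense, and the second one is easy to test: (I − tA/n)^{−n} converges to e^{tA} at rate O(1/n). A sketch with an arbitrarily chosen symmetric matrix:

```python
import numpy as np

# Backward-Euler product formula, the second limit in (5.1):
# (I - tA/n)^{-n} -> e^{tA}, here for an arbitrarily chosen symmetric matrix.
A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])
t = 1.0

w, V = np.linalg.eigh(A)                 # exact e^{tA} via eigendecomposition
etA = V @ np.diag(np.exp(t * w)) @ V.T

def euler(n):
    B = np.linalg.inv(np.eye(2) - (t / n) * A)
    return np.linalg.matrix_power(B, n)

err = [np.linalg.norm(euler(n) - etA) for n in (10, 100, 1000)]
assert err[2] < err[1] < err[0]          # error decays like 1/n
assert err[2] < 1e-3
```

This is the matrix version of the backward Euler scheme for u′ = Au, which is exactly why the resolvent (I − µA)^{−1} is the object to study.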

Example 5.6. Compute (I − µA)^{−1} in the case of the operator A defined in the previous example.

Sol. —

(I − µA)ϕ = ϕ − µϕ′ = ψ ⟺ ϕ = (I − µA)^{−1}ψ.

Given ψ, we have to solve the differential equation

ϕ′ = (1/µ)ϕ − (1/µ)ψ ⟺ ϕ(x) = e^{x/µ} ( C − (1/µ) ∫_0^x e^{−y/µ} ψ(y) dy ).

Now, imposing that ϕ ∈ D(A), we get

ϕ(0) = 0 ⟺ C = 0 ⟺ ϕ(x) ≡ (I − µA)^{−1}ψ(x) = −(e^{x/µ}/µ) ∫_0^x e^{−y/µ} ψ(y) dy.
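The resulting formula, ϕ(x) = −(e^{x/µ}/µ) ∫_0^x e^{−y/µ} ψ(y) dy (note the minus sign produced by the variation of constants), can be verified numerically: computing the integral by the trapezoidal rule and differentiating on a grid, the residual of ϕ − µϕ′ = ψ should be small. A sketch with the illustrative choices µ = 1/2, ψ(x) = cos 3x:

```python
import numpy as np

# Check that phi(x) = -(e^{x/mu}/mu) * ∫_0^x e^{-y/mu} psi(y) dy
# solves phi - mu*phi' = psi with phi(0) = 0 (illustrative mu and psi).
mu = 0.5
x = np.linspace(0.0, 1.0, 20001)
h = x[1] - x[0]
psi = np.cos(3.0 * x)

integrand = np.exp(-x / mu) * psi
# cumulative trapezoidal rule for ∫_0^x
cum = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) * h / 2)))
phi = -(np.exp(x / mu) / mu) * cum

assert abs(phi[0]) < 1e-12                 # boundary condition phi(0) = 0
dphi = np.gradient(phi, x)                 # central differences in the interior
resid = phi - mu * dphi - psi
assert np.max(np.abs(resid[2:-2])) < 1e-4  # ODE residual away from the endpoints
```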

Writing

(I − µA)^{−1} = (1/µ) ((1/µ)I − A)^{−1} =: (1/µ) R_{1/µ},

we find a very familiar concept of Functional Analysis:

Definition 5.7. Let λ ∈ C. If Rλ := (λI − A)^{−1} ∈ L(X) we say that λ ∈ ρ(A) (the resolvent set) and we call Rλ the resolvent operator. The set σ(A) := C\ρ(A) is called the spectrum of A. Analytically the spectrum is divided into

• the point spectrum, denoted by σp(A), that is, the set of λ ∈ C such that λI − A is not injective (in other words: the elements of σp(A) are the eigenvalues);

• the continuous spectrum, denoted by σc(A), that is, the set of λ ∈ C such that λI − A is injective with dense but not full range, so the inverse is densely defined but not continuous;

• the residual spectrum, what remains, that is, σ(A)\{σp(A) ∪ σc(A)}.

Looking at the second limit in (5.1), we would expect that ρ(A) ⊃ [λ0, +∞[ for some λ0. To this aim we need a relationship between the resolvent operator and the semigroup. This is formally easy: writing S(t) = e^{tA} and treating λ − A as a positive number, we have

∫_0^{+∞} e^{−λr} S(r)ϕ dr = ∫_0^{+∞} e^{−r(λ−A)} ϕ dr = [ −(λ − A)^{−1} e^{−r(λ−A)} ]_{r=0}^{r=+∞} ϕ = (λI − A)^{−1}ϕ.

This formula turns out to be true. Precisely, we have the


Theorem 5.8. Let (S(t))_{t≥0} be a contraction semigroup on a Banach space X with generator A. Then:

i) ρ(A) ⊃ {Re λ > 0} and

Rλϕ = ∫_0^{+∞} e^{−λr} S(r)ϕ dr, ∀λ ∈ C : Re λ > 0, ∀ϕ ∈ X; (5.2)

ii) the following estimate holds:

‖Rλ‖ ≤ 1/Re λ, ∀λ ∈ C : Re λ > 0. (5.3)

Figure 1: the spectrum is contained in the half plane Re λ ≤ 0.

Proof — We first prove that the integral in (5.2) is well defined. This is easy because, recalling that ‖S(t)‖ ≤ 1 for any t ≥ 0, we have

‖e^{−rλ} S(r)ϕ‖ ≤ e^{−r Re λ} ‖ϕ‖ ∈ L¹([0, +∞[), ∀λ ∈ C : Re λ > 0.

Therefore, r 7−→ e^{−rλ} S(r)ϕ being in C([0, +∞[; X), the integral is well defined. By the same estimate we get

‖Rλϕ‖ ≤ ∫_0^{+∞} ‖ϕ‖ e^{−r Re λ} dr = (1/Re λ) ‖ϕ‖, ∀ϕ ∈ X,

that is, (5.3). It remains to prove that the integral operator is indeed the resolvent operator, that is:

a) Rλϕ ∈ D(A), ∀ϕ ∈ X; b) (λI − A)Rλϕ = ϕ, ∀ϕ ∈ X; c) Rλ(λI − A)ϕ = ϕ, ∀ϕ ∈ D(A).

We start from the first. Notice that

(S(h)Rλϕ − Rλϕ)/h = (1/h)( S(h) ∫_0^{+∞} e^{−λr} S(r)ϕ dr − ∫_0^{+∞} e^{−λr} S(r)ϕ dr )

= (1/h)( ∫_h^{+∞} e^{−λ(r−h)} S(r)ϕ dr − ∫_0^{+∞} e^{−λr} S(r)ϕ dr )

= ((e^{λh} − 1)/h) ∫_h^{+∞} e^{−λr} S(r)ϕ dr + (1/h)( ∫_h^{+∞} e^{−λr} S(r)ϕ dr − ∫_0^{+∞} e^{−λr} S(r)ϕ dr )

−→ λ ∫_0^{+∞} e^{−λr} S(r)ϕ dr − ϕ = λRλϕ − ϕ.


Therefore, Rλϕ ∈ D(A) and

ARλϕ = λRλϕ − ϕ ⟺ (λI − A)Rλϕ = ϕ, ∀ϕ ∈ X,

that is, (λI − A)Rλ = I, which is b). To finish, we have to prove c). Let ϕ ∈ D(A). Notice that

Rλ(λI − A)ϕ = λ ∫_0^{+∞} e^{−λr} S(r)ϕ dr − ∫_0^{+∞} e^{−λr} S(r)Aϕ dr.

Because ϕ ∈ D(A), by Lemma 5.4 we have S(r)Aϕ = (S(r)ϕ)′. Therefore, integrating by parts,

∫_0^{+∞} e^{−λr} S(r)Aϕ dr = ∫_0^{+∞} e^{−λr} (S(r)ϕ)′ dr = [ e^{−λr} S(r)ϕ ]_{r=0}^{r=+∞} + λ ∫_0^{+∞} e^{−λr} S(r)ϕ dr = −ϕ + λRλϕ,

and that is the conclusion.
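For matrices, formula (5.2) is an ordinary Laplace transform and can be verified by quadrature against the exact inverse (λI − A)^{−1}. A sketch, where A and λ are arbitrary illustrative choices and the integral is truncated where the integrand is negligible:

```python
import numpy as np

# Verify (5.2) for a matrix semigroup: R_lambda = ∫_0^∞ e^{-lambda r} e^{rA} dr.
A = np.array([[-1.0, 0.5],
              [0.2, -2.0]])
lam = 1.5
R_exact = np.linalg.inv(lam * np.eye(2) - A)

# e^{rA} via eigendecomposition (this A has real, distinct eigenvalues)
w, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

rs = np.linspace(0.0, 40.0, 40001)        # truncate: integrand decays like e^{-2.4 r}
E = np.exp(np.outer(rs, w) - lam * rs[:, None])
vals = np.einsum('ij,nj,jk->nik', V, E, Vinv).real
h = rs[1] - rs[0]
R_num = (vals[1:] + vals[:-1]).sum(axis=0) * h / 2   # trapezoidal rule

assert np.linalg.norm(R_num - R_exact) < 1e-5
```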

6 Hille–Yosida theorem

In the previous section we have seen that:

Corollary 6.1. The infinitesimal generator A : D(A) ⊂ X −→ X of a contraction semigroup fulfills the following properties:

i) A is densely defined and closed;

ii) ρ(A) ⊃ {λ ∈ C : Re λ > 0} and ‖Rλ‖ = ‖(λI − A)^{−1}‖ ≤ 1/Re λ for any λ ∈ C such that Re λ > 0.

Actually we may notice that closedness is redundant, because of the following general fact.

Proposition 6.2. Let A : D(A) ⊂ X −→ X be a linear operator such that ρ(A) ≠ ∅. Then A is closed.

Proof — Let (ϕn) ⊂ D(A) be such that

ϕn −→ ϕ, Aϕn −→ ψ.

We need to prove that ϕ ∈ D(A) and Aϕ = ψ. Now let λ ∈ ρ(A) and consider

λϕn − Aϕn = (λI − A)ϕn, ϕn = (λI − A)^{−1}(λϕn − Aϕn) −→ (λI − A)^{−1}(λϕ − ψ),

because (λI − A)^{−1} ∈ L(X). In particular,

ϕ = (λI − A)^{−1}(λϕ − ψ) ∈ D(A), and λϕ − Aϕ = λϕ − ψ ⟺ Aϕ = ψ.

In this section we will see that these conditions are sufficient to construct a unique contraction semigroup whose generator is precisely A. The idea is to construct e^{tA} by approximation,

e^{tA}ϕ = lim_{λ→+∞} e^{tAλ}ϕ,

where the Aλ ∈ L(X) are suitable approximations of A. Such approximations, which will be called Yosida regularizations, are extraordinarily intuitive, because they are given by

Aλ := A (I − (1/λ)A)^{−1} "−→" A, as λ −→ +∞.

Let us introduce these operators formally:


Definition 6.3. Let A : D(A) ⊂ X −→ X be a densely defined linear operator such that

ρ(A) ⊃ {λ ∈ C : Re λ > 0}, ‖Rλ‖ = ‖(λI − A)^{−1}‖ ≤ 1/Re λ, ∀λ ∈ C : Re λ > 0.

We call the Yosida regularization of A the family (Aλ)_{λ>0} ⊂ L(X) defined by

Aλ := λARλ = λA(λI − A)^{−1}, ∀λ > 0.

Remark 6.4. At first sight it may not be evident that Aλ ∈ L(X): indeed,

(λI − A)Rλ = I_X ⟹ Aλ = λARλ = λ(λRλ − I_X) ∈ L(X).

Morally Aλ −→ A as λ −→ +∞. Indeed we have the

Lemma 6.5. Let A : D(A) ⊂ X −→ X be an operator fulfilling the hypotheses of Definition 6.3 on a Banach space X. Then

lim_{λ→+∞} Aλϕ = Aϕ, ∀ϕ ∈ D(A).

Proof — Because, as in the remark, ARλ = λRλ − I_X, we can write

Aλ = λARλ = λ²Rλ − λI_X.

Therefore, if ϕ ∈ D(A) we have Aλϕ = λ(λRλϕ − ϕ). But Rλ(λI − A) = I_{D(A)}, so

Rλ(λϕ − Aϕ) = ϕ ⟺ λRλϕ − ϕ = RλAϕ ⟹ Aλϕ = λRλAϕ.

Set ψ := Aϕ. If we prove that

lim_{λ→+∞} λRλψ = ψ, ∀ψ ∈ X, (6.1)

we are done. Assume first that ψ ∈ D(A): by the same identity as before,

λRλψ − ψ = RλAψ ⟹ ‖λRλψ − ψ‖ = ‖RλAψ‖ ≤ ‖Rλ‖‖Aψ‖ ≤ (1/λ)‖Aψ‖ −→ 0, λ −→ +∞.

In the general case ψ ∈ X, by density of D(A) in X there exists ψε ∈ D(A) such that ‖ψ − ψε‖ ≤ ε. Therefore,

‖λRλψ − ψ‖ ≤ ‖λRλ(ψ − ψε)‖ + ‖λRλψε − ψε‖ + ‖ψε − ψ‖ ≤ λ(1/λ)ε + ‖λRλψε − ψε‖ + ε,

and from this the conclusion follows easily.
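For a matrix A, the Yosida regularization Aλ = λ²Rλ − λI can be formed explicitly, and both convergences of interest, Aλ −→ A and e^{tAλ} −→ e^{tA}, are visible numerically. A sketch with an arbitrarily chosen symmetric negative definite A:

```python
import numpy as np

# Yosida regularization A_lambda = lambda^2 R_lambda - lambda I of a matrix A:
# A_lambda -> A and e^{t A_lambda} -> e^{tA} as lambda -> +infty.
A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])          # symmetric negative definite
I = np.eye(2)

def yosida(lam):
    R = np.linalg.inv(lam * I - A)   # the resolvent R_lambda
    return lam * lam * R - lam * I   # = lambda * A * R_lambda

assert np.linalg.norm(yosida(1e4) - A) < 1e-2   # A_lambda -> A (in norm, finite dim)

def expm_sym(M, t):                  # e^{tM} for a symmetric matrix M
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(t * w)) @ V.T

err = [np.linalg.norm(expm_sym(yosida(lam), 1.0) - expm_sym(A, 1.0))
       for lam in (10.0, 100.0, 1000.0)]
assert err[2] < err[1] < err[0] and err[2] < 1e-2
```

In finite dimension Aλ −→ A even in operator norm, since Aλ − A = RλA² on the whole space; in infinite dimension only the pointwise convergence of the Lemma survives.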

We are now ready for the main result:

Theorem 6.6 (Hille–Yosida). Let X be a Banach space and A : D(A) ⊂ X −→ X fulfill the hypotheses of Definition 6.3, that is:

i) A is densely defined;

ii) ρ(A) ⊃ {λ ∈ C : Re λ > 0} and

‖Rλ‖ = ‖(λI − A)^{−1}‖ ≤ 1/Re λ, ∀λ ∈ C : Re λ > 0. (6.2)

Then there exists a unique contraction semigroup (S(t))_{t≥0} whose generator is A. This semigroup will be denoted by (e^{tA})_{t≥0}.

Proof — As announced, we want to define S(t)ϕ := lim_{λ→+∞} e^{tAλ}ϕ.

First step: construction of the semigroup. Because Aλ ∈ L(X), the uniformly continuous group e^{tAλ} is well defined. By the fundamental theorem of calculus,

e^{tAλ}ϕ − e^{tAµ}ϕ = ∫_0^1 (d/dr)[ e^{trAλ} e^{t(1−r)Aµ} ϕ ] dr = ∫_0^1 ( e^{trAλ} tAλ e^{t(1−r)Aµ} ϕ + e^{trAλ} e^{t(1−r)Aµ} (−tAµ)ϕ ) dr.

Clearly, AλAµ = AµAλ. Therefore

e^{tAλ}ϕ − e^{tAµ}ϕ = t ∫_0^1 e^{trAλ} e^{t(1−r)Aµ} (Aλ − Aµ)ϕ dr,

hence

‖e^{tAλ}ϕ − e^{tAµ}ϕ‖ ≤ t ∫_0^1 ‖e^{trAλ}‖ ‖e^{t(1−r)Aµ}‖ dr ‖(Aλ − Aµ)ϕ‖.

Let us estimate ‖e^{uAλ}‖. Recall first that Aλ = λ²Rλ − λI_X. Therefore(1)

e^{uAλ} = e^{u(λ²Rλ − λI_X)} = e^{uλ²Rλ} e^{−uλI_X} = e^{−uλ} e^{uλ²Rλ},

so

‖e^{uAλ}‖ = e^{−uλ} ‖e^{uλ²Rλ}‖ ≤ e^{−uλ} e^{uλ²‖Rλ‖} ≤ e^{−uλ} e^{uλ} = 1. (6.3)

We deduce from this that

‖e^{tAλ}ϕ − e^{tAµ}ϕ‖ ≤ t ‖(Aλ − Aµ)ϕ‖. (6.4)

Now, if ϕ ∈ D(A), by the previous Lemma Aλϕ −→ Aϕ. With this in hand it is easy to deduce that the family of functions (e^{·Aλ}ϕ)_{λ>0} is uniformly Cauchy on every interval [0, T], for all T > 0. Call S(·)ϕ the uniform limit function, defined on [0, +∞[. Passing to the limit in (6.4) we have

‖e^{tAλ}ϕ − S(t)ϕ‖ ≤ t ‖Aλϕ − Aϕ‖, ∀ϕ ∈ D(A). (6.5)

In this way, for all t ≥ 0, we have defined S(t)ϕ := lim_{λ→+∞} e^{tAλ}ϕ for all ϕ ∈ D(A). Clearly

i) S(t) : D(A) ⊂ X −→ X is linear;

ii) S(0)ϕ = ϕ, ∀ϕ ∈ D(A);

iii) S(t + r)ϕ = S(t)S(r)ϕ, ∀ϕ ∈ D(A);

iv) t 7−→ S(t)ϕ ∈ C([0, +∞[; X), ∀ϕ ∈ D(A).

Moreover,

‖S(t)ϕ‖ = lim_{λ→+∞} ‖e^{tAλ}ϕ‖ ≤ ‖ϕ‖.

X being complete and D(A) dense, it is easy to conclude that S(t) extends to all of X with ‖S(t)‖ ≤ 1. Of course, ii) and iii) remain true on all of X. It remains to prove that iv) also extends to all of X. Fix ϕ ∈ X and let ϕε ∈ D(A) be such that ‖ϕ − ϕε‖ ≤ ε. Then, if h > 0,

‖S(t + h)ϕ − S(t)ϕ‖ ≤ ‖S(t + h)(ϕ − ϕε)‖ + ‖S(t + h)ϕε − S(t)ϕε‖ + ‖S(t)ϕε − S(t)ϕ‖ ≤ 2‖ϕε − ϕ‖ + ‖S(t + h)ϕε − S(t)ϕε‖ ≤ 2ε + ‖S(t + h)ϕε − S(t)ϕε‖.

(1) Here we are using the property e^{A+B} = e^{A}e^{B}. Of course in general it is false, but if A and B commute, as in the present case, it is true.


Therefore,

lim sup_{h→0+} ‖S(t + h)ϕ − S(t)ϕ‖ ≤ 2ε.

But ε > 0 is arbitrary, so lim sup_{h→0+} ‖S(t + h)ϕ − S(t)ϕ‖ = 0, and this says that S(·)ϕ is right continuous on [0, +∞[.

For left continuity, if h > 0,

‖S(t − h)ϕ − S(t)ϕ‖ = ‖S(t − h)[ϕ − S(h)ϕ]‖ ≤ ‖ϕ − S(h)ϕ‖ −→ 0, h −→ 0+,

by right continuity.

Second step: A is the generator of (S(t))_{t≥0}. Let B : D(B) ⊂ X −→ X,

D(B) := { ϕ ∈ X : ∃ lim_{h→0+} (S(h)ϕ − ϕ)/h }, Bϕ := lim_{h→0+} (S(h)ϕ − ϕ)/h, ϕ ∈ D(B),

be the infinitesimal generator of (S(t))_{t≥0}. We will prove that D(B) = D(A) and A = B.

We start by proving that D(A) ⊂ D(B) and Aϕ = Bϕ for all ϕ ∈ D(A). Let ϕ ∈ D(A). By the definition of S(h) we have

S(h)ϕ − ϕ = lim_{λ→+∞} ( e^{hAλ}ϕ − ϕ ) = lim_{λ→+∞} ∫_0^h (e^{rAλ}ϕ)′ dr = lim_{λ→+∞} ∫_0^h e^{rAλ} Aλϕ dr.

It is natural to show that

lim_{λ→+∞} ∫_0^h e^{rAλ} Aλϕ dr = ∫_0^h S(r)Aϕ dr. (6.6)

We have

‖ ∫_0^h e^{rAλ} Aλϕ dr − ∫_0^h S(r)Aϕ dr ‖ ≤ ∫_0^h ‖e^{rAλ} Aλϕ − S(r)Aϕ‖ dr ≤ ∫_0^h ( ‖e^{rAλ}(Aλϕ − Aϕ)‖ + ‖e^{rAλ}Aϕ − S(r)Aϕ‖ ) dr ≤ h‖Aλϕ − Aϕ‖ + ∫_0^h ‖e^{rAλ}Aϕ − S(r)Aϕ‖ dr.

By Lemma 6.5 the first term goes to 0 as λ −→ +∞. For the second term notice first that e^{rAλ}ψ −→ S(r)ψ uniformly on [0, h] for all ψ ∈ X. Indeed: by construction this is true if ψ ∈ D(A). On the other hand, D(A) being dense in X, if ψε ∈ D(A) is such that ‖ψ − ψε‖ ≤ ε, by estimate (6.5),

‖e^{rAλ}ψ − S(r)ψ‖ ≤ ‖e^{rAλ}(ψ − ψε)‖ + ‖e^{rAλ}ψε − S(r)ψε‖ + ‖S(r)(ψε − ψ)‖ ≤ 2ε + r‖Aλψε − Aψε‖.

Therefore

‖e^{·Aλ}ψ − S(·)ψ‖_{∞,[0,h]} ≤ 2ε + h‖Aλψε − Aψε‖ ⟹ lim sup_{λ→+∞} ‖e^{·Aλ}ψ − S(·)ψ‖_{∞,[0,h]} ≤ 2ε.

ε > 0 being arbitrary, the conclusion follows. This completely justifies (6.6). As a consequence,

S(h)ϕ − ϕ = ∫_0^h S(r)Aϕ dr, ∀ϕ ∈ D(A). (6.7)

By the mean value theorem,

lim_{h→0+} (S(h)ϕ − ϕ)/h = lim_{h→0+} (1/h) ∫_0^h S(r)Aϕ dr = S(0)Aϕ = Aϕ,


and this means exactly that D(A) ⊂ D(B) and Bϕ = Aϕ for ϕ ∈ D(A).

The reverse inclusion is much softer. Indeed: because B is the generator of a contraction semigroup, it fulfills conditions i) and ii) of Theorem 5.8. In particular, 1 ∈ ρ(B), that is, (I − B)^{−1} : X −→ D(B) is bounded and D(B) = (I − B)^{−1}X. On the other hand, we have seen that D(A) ⊂ D(B) and B|_{D(A)} = A. In particular, (I − B)D(A) = (I − A)D(A). By our assumption, 1 ∈ ρ(A), so again D(A) = (I − A)^{−1}X, that is, X = (I − A)D(A). Hence (I − B)D(A) = X, and since I − B is injective this is possible iff D(B) ⊂ D(A). From this the conclusion follows easily.

7 Generators of Feller semigroups

What we have seen in the previous two sections are general results involving only the first three properties in Definition 4.3 of a Feller semigroup. Now the question is clearly: which other property do we need so that, in the specific case of the Banach space X = C0(E), a linear operator A is the generator of a Feller semigroup? Knowing the general connection between a semigroup and its generator we immediately have the

Proposition 7.1. Let (e^{tA})_{t≥0} be a Feller semigroup on C0(E), (E, d) a locally compact metric space. Then the positive maximum principle holds:

if ϕ ∈ D(A) has a maximum at x0 with ϕ(x0) ≥ 0, then Aϕ(x0) ≤ 0.

Proof — Let x0 be a positive maximum point for ϕ. Notice first that by definition

Aϕ = lim_{h→0+} (S(h)ϕ − ϕ)/h.

The limit is intended in C0(E), hence in the uniform convergence. This means, in particular, that

Aϕ(x0) = lim_{h→0+} (S(h)ϕ(x0) − ϕ(x0))/h.

Now, setting ϕ⁺ := max{ϕ, 0} ∈ C0(E), we have

S(h)ϕ ≤ S(h)ϕ⁺ ≤ ‖S(h)ϕ⁺‖ ≤ ‖ϕ⁺‖ = ϕ(x0).

Therefore, if h > 0,

(S(h)ϕ(x0) − ϕ(x0))/h ≤ (ϕ(x0) − ϕ(x0))/h = 0, ⟹ Aϕ(x0) ≤ 0.

The maximum principle basically yields the estimate (6.2). Actually:

Proposition 7.2. Let A : D(A) ⊂ C0(E) −→ C0(E). If A fulfills the positive maximum principle then it is dissipative, that is,

‖λϕ − Aϕ‖ ≥ λ‖ϕ‖, ∀ϕ ∈ D(A), ∀λ > 0. (7.1)

Proof — Because ϕ ∈ C0(E), by the Weierstrass theorem there exists x0 ∈ E such that

|ϕ(x0)| = max_{x∈E} |ϕ(x)| = ‖ϕ‖.


We may assume ϕ(x0) ≥ 0 (otherwise replace ϕ with −ϕ). Therefore

‖λϕ − Aϕ‖ = sup_{x∈E} |λϕ(x) − Aϕ(x)| ≥ |λϕ(x0) − Aϕ(x0)|.

By the positive maximum principle, Aϕ(x0) ≤ 0, therefore λϕ(x0) − Aϕ(x0) ≥ 0, hence

‖λϕ − Aϕ‖ ≥ |λϕ(x0) − Aϕ(x0)| = λϕ(x0) − Aϕ(x0) ≥ λϕ(x0) = λ‖ϕ‖.

Notice in particular that:

Lemma 7.3. Let A : D(A) ⊂ X −→ X, X a normed space. If A is dissipative then λI − A is injective for every λ > 0. If it is also surjective then λ ∈ ρ(A) and ‖Rλ‖ ≤ 1/λ.

Proof — Immediate: by (7.1), λϕ − Aϕ = 0 forces ϕ = 0, and if ψ = λϕ − Aϕ then ‖ϕ‖ ≤ ‖ψ‖/λ.

Dissipativity gives an immediate simplification in checking ii) of the Hille–Yosida theorem.

Theorem 7.4 (Phillips). Let X be a Banach space and A : D(A) ⊂ X −→ X a densely defined linear operator. Suppose that

i) A is dissipative;

ii) R(λ0I − A) = X for some λ0 > 0 (that is: λ0I − A is surjective).

Then A generates a strongly continuous semigroup of contractions.(2)

Proof — By the Hille–Yosida Theorem, we have to prove that

ρ(A) ⊃ ]0, +∞[ and ‖Rλ‖ ≤ 1/λ, ∀λ > 0.

By Lemma 7.3 it is clear that λ0 ∈ ρ(A). Therefore the only thing to prove is that ρ(A) ⊃ ]0, +∞[, that is: every λ > 0 belongs to the resolvent set. The idea is to prove that ρ(A)∩]0, +∞[ (which is non empty by what we have just seen) is open and closed in ]0, +∞[: being a non empty open and closed subset of the connected set ]0, +∞[, it is then equal to the whole of ]0, +∞[.

First: ρ(A) is open, therefore ρ(A)∩]0, +∞[ is open in ]0, +∞[. This is a general fact, so we state it separately:

Lemma 7.5. The resolvent set of any linear (possibly unbounded) operator is open in C and λ 7−→ Rλ ∈ C(ρ(A); L(X)). In particular: if λ ∈ ρ(A) then B(λ, 1/‖Rλ‖) ⊂ ρ(A).

Proof — The argument is based on the following formal identity: fixed λ ∈ ρ(A) and µ ∈ C, Rµ” = ” 1

µ − A= 1

µ − λ + (λ − A) = 1 λ − A

1

1 +µ−λλ−A” = ”Rλ(I + (µ − λ)Rλ)−1. (7.2) It is easy to check that if the right hand side makes sense as bounded linear operator then it is exactly Rµ. To give a meaning to the right hand side, we have to justify that B = I + (µ − λ)Rλ is invertible with continuous inverse. To this aim recall that

if kI − Bk < 1, =⇒ B−1=

X

n=0

(I − B)n∈L(X).

(2) Actually, if the space $X$ is reflexive, the Phillips theorem is an iff.

In our case
$$
\|I - B\| = \|(\mu-\lambda)R_\lambda\| = |\mu-\lambda|\,\|R_\lambda\| < 1 \;\Longleftrightarrow\; |\mu - \lambda| < \frac{1}{\|R_\lambda\|}.
$$
This means that $B\!\left(\lambda, \tfrac{1}{\|R_\lambda\|}\right) \subset \rho(A)$. The continuity of the resolvent map follows again from (7.2). Indeed
$$
R_\mu = R_\lambda \sum_{n=0}^{\infty} \big((\lambda-\mu)R_\lambda\big)^n = R_\lambda + (\lambda-\mu)R_\lambda^2 \sum_{n=0}^{\infty} \big((\lambda-\mu)R_\lambda\big)^n, \;\Longrightarrow\; \|R_\mu - R_\lambda\| \le |\lambda-\mu|\,\|R_\lambda\|^2 \sum_{n=0}^{\infty} \big\|(\lambda-\mu)R_\lambda\big\|^n.
$$
Now, if $|\lambda - \mu| \le \frac{1}{2\|R_\lambda\|}$ it follows that
$$
\|R_\mu - R_\lambda\| \le |\lambda-\mu|\,\|R_\lambda\|^2 \sum_{n=0}^{\infty} \frac{1}{2^n} = 2\|R_\lambda\|^2\,|\lambda - \mu|,
$$
and now the conclusion is evident.
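The Neumann-series identity (7.2) is concrete enough to check on matrices. A sketch of my own (the choice of a $5\times 5$ random matrix and the value $\lambda = 20$ are illustrative assumptions): for $\mu$ inside the ball $B(\lambda, 1/\|R_\lambda\|)$, the partial sums of $R_\lambda \sum_n ((\lambda-\mu)R_\lambda)^n$ should converge to $R_\mu$.

```python
import numpy as np

# Finite-dimensional check of (7.2): for lam in rho(A) and mu close to lam,
# R_mu = R_lam * sum_{n>=0} ((lam - mu) R_lam)^n.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
lam = 20.0                                   # far from the spectrum of a random 5x5 matrix
I = np.eye(5)
R_lam = np.linalg.inv(lam * I - A)
mu = lam + 0.4 / np.linalg.norm(R_lam, 2)    # inside the ball B(lam, 1/||R_lam||)

S, term = np.zeros((5, 5)), I.copy()
for _ in range(200):                         # partial sums of the Neumann series
    S += term
    term = (lam - mu) * R_lam @ term
series = R_lam @ S

R_mu = np.linalg.inv(mu * I - A)
err = np.max(np.abs(series - R_mu))
print(err)                                   # tiny: the series converges to R_mu
```

Here $\|(\lambda-\mu)R_\lambda\| = 0.4 < 1$, so the geometric tail after 200 terms is negligible, exactly as in the proof.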

Second: $\rho(A)\,\cap\,]0,+\infty[$ is closed in $]0,+\infty[$. Let $(\lambda_n) \subset \rho(A)\,\cap\,]0,+\infty[$ be such that $\lambda_n \longrightarrow \lambda \in\, ]0,+\infty[$. We want to prove that $\lambda \in \rho(A)$. By the previous arguments, this is equivalent to showing that $\mathcal{R}(\lambda I - A) = X$. So, fix $\psi \in X$. We look for $\varphi \in D(A)$ such that
$$
\lambda\varphi - A\varphi = \psi.
$$
Let $\varphi_n \in D(A)$ be such that
$$
\lambda_n\varphi_n - A\varphi_n = \psi.
$$
The idea is to pass to the limit in this equation. We first have to prove that $(\varphi_n)$ is convergent, in particular that it is Cauchy. To this aim notice that
$$
\lambda_n\varphi_n - A\varphi_n = \psi, \quad \lambda_m\varphi_m - A\varphi_m = \psi, \;\Longrightarrow\; \lambda_n(\varphi_n - \varphi_m) - A(\varphi_n - \varphi_m) = (\lambda_m - \lambda_n)\varphi_m,
$$
so, by dissipativity,
$$
|\lambda_m - \lambda_n|\,\|\varphi_m\| = \|\lambda_n(\varphi_n - \varphi_m) - A(\varphi_n - \varphi_m)\| \ge \lambda_n\|\varphi_n - \varphi_m\|.
$$
Now: because $\lambda_n \longrightarrow \lambda \in\, ]0,+\infty[$ we can say that $\lambda_n \ge \alpha$ for all $n \in \mathbb{N}$, for some $\alpha > 0$. Suppose we have proved that $(\varphi_n)$ is bounded, say $\|\varphi_n\| \le K$ for all $n$: then we are done because, in this case,
$$
\|\varphi_n - \varphi_m\| \le \frac{K}{\alpha}\,|\lambda_n - \lambda_m| \le \varepsilon, \quad \forall n, m \ge N_0(\varepsilon),
$$
i.e. $(\varphi_n)$ would be a Cauchy sequence. Boundedness of $(\varphi_n)$ follows by the same dissipativity argument: indeed
$$
\lambda_n\|\varphi_n\| \le \|\lambda_n\varphi_n - A\varphi_n\| = \|\psi\|, \;\Longrightarrow\; \|\varphi_n\| \le \frac{\|\psi\|}{\alpha}, \quad \forall n \in \mathbb{N}.
$$
Summarizing: $(\varphi_n)$ is convergent. Then, because $A\varphi_n = \lambda_n\varphi_n - \psi$, also $(A\varphi_n)$ is convergent. Now, recall that if $\rho(A) \ne \emptyset$ then $A$ is closed. Being this the case in our context, we have $\varphi_n \longrightarrow \varphi \in D(A)$, $A\varphi_n \longrightarrow A\varphi$, and passing to the limit in the equation we get $\lambda\varphi - A\varphi = \psi$, which is the conclusion. With this the proof is finished.

Definition 7.6. Let $(E, d)$ be a locally compact metric space and $A : D(A) \subset C_0(E) \longrightarrow C_0(E)$ a linear operator. We say that $A$ is a Markov generator if

i) $D(A)$ is dense in $C_0(E)$;

ii) $A$ fulfills the positive maximum principle;

iii) $\mathcal{R}(\lambda_0 I - A) = C_0(E)$ for some $\lambda_0 > 0$.

In other words, combining the Hille–Yosida theorem with the Phillips theorem we have the

Corollary 7.7. A linear operator $A$ generates a Markov semigroup on $C_0(E)$ iff $A$ is a Markov generator.

8 Examples

To check that an operator $A$ is a Markov generator is not, in general, an easy business. In particular, it is the third condition that is usually a little difficult to check, and often it is useful to have a sort of mild version of it. The problem is that we have to solve the equation
$$
\lambda\varphi - A\varphi = \psi,
$$
for a given $\psi \in C_0(E)$. This is generally not easy. Let's see a first example.

Example 8.1. The operator
$$
A : D(A) \subset C_0([0,1]) \longrightarrow C_0([0,1]), \quad A\varphi := \varphi'', \quad D(A) := \left\{\varphi \in C^2([0,1]) \cap C_0([0,1]) \,:\, \varphi'' \in C_0([0,1])\right\},
$$
is a Markov generator.

Sol. — Clearly the first two properties hold. Let's see the third. Take $\psi \in C_0([0,1])$ and consider the equation
$$
\lambda\varphi(x) - \varphi''(x) = \psi(x), \quad x \in [0,1].
$$
As is well known from ODE theory, the general solution of this equation is
$$
\varphi(x) = c_1 w_1(x) + c_2 w_2(x) + U(x),
$$
where $(w_1, w_2)$ is a fundamental system for the homogeneous equation $\lambda\varphi - \varphi'' = 0$. If $\lambda > 0$ (as in our case) we have $w_{1,2}(x) = e^{\pm\sqrt{\lambda}\,x}$. By the variation of constants formula
$$
U(x) = \left(-\frac{1}{2\sqrt{\lambda}}\int_0^x \psi(y)\,e^{-\sqrt{\lambda}\,y}\,dy\right)e^{\sqrt{\lambda}\,x} + \left(\frac{1}{2\sqrt{\lambda}}\int_0^x \psi(y)\,e^{\sqrt{\lambda}\,y}\,dy\right)e^{-\sqrt{\lambda}\,x}
$$
$$
= -\frac{1}{\sqrt{\lambda}}\int_0^x \psi(y)\,\frac{e^{\sqrt{\lambda}(x-y)} - e^{-\sqrt{\lambda}(x-y)}}{2}\,dy = -\frac{1}{\sqrt{\lambda}}\int_0^x \psi(y)\sinh\!\big(\sqrt{\lambda}(x-y)\big)\,dy.
$$
Therefore
$$
\varphi(x) = c_1 e^{\sqrt{\lambda}\,x} + c_2 e^{-\sqrt{\lambda}\,x} - \frac{1}{\sqrt{\lambda}}\int_0^x \psi(y)\sinh\!\big(\sqrt{\lambda}(x-y)\big)\,dy.
$$

Here is a first problem we meet: if $\psi \in C([0,1])$ it is not evident that $\varphi \in C^2([0,1])$. Indeed, it is easy to check that $\varphi \in C^1([0,1])$ and
$$
\varphi'(x) = c_1\sqrt{\lambda}\,e^{\sqrt{\lambda}\,x} - c_2\sqrt{\lambda}\,e^{-\sqrt{\lambda}\,x} - \frac{1}{\sqrt{\lambda}}\,\psi(x)\sinh(\sqrt{\lambda}\cdot 0) - \int_0^x \psi(y)\cosh\!\big(\sqrt{\lambda}(x-y)\big)\,dy
$$
$$
= c_1\sqrt{\lambda}\,e^{\sqrt{\lambda}\,x} - c_2\sqrt{\lambda}\,e^{-\sqrt{\lambda}\,x} - \int_0^x \psi(y)\cosh\!\big(\sqrt{\lambda}(x-y)\big)\,dy.
$$
Repeating the procedure we see that $\varphi \in C^2([0,1])$. Let's impose that $\varphi, \varphi'' \in C_0([0,1])$. This means
$$
\varphi(0) = \varphi(1) = 0, \qquad \varphi''(0) = \varphi''(1) = 0.
$$

Notice that by the equation we have
$$
\varphi''(x) = \lambda\varphi(x) - \psi(x),
$$
therefore once we know $\varphi \in C_0([0,1])$ we get $\varphi'' \in C_0([0,1])$, because $\psi \in C_0([0,1])$. So the previous conditions reduce to the first pair, $\varphi(0) = \varphi(1) = 0$, that is
$$
\begin{cases}
c_1 + c_2 = 0,\\[4pt]
c_1 e^{\sqrt{\lambda}} + c_2 e^{-\sqrt{\lambda}} - \dfrac{1}{\sqrt{\lambda}}\displaystyle\int_0^1 \psi(y)\sinh\!\big(\sqrt{\lambda}(1-y)\big)\,dy = 0.
\end{cases}
$$
This is a $2\times 2$ system in $(c_1, c_2)$ with determinant $e^{\sqrt{\lambda}} - e^{-\sqrt{\lambda}} = 2\sinh\sqrt{\lambda} \ne 0$ for $\lambda > 0$. It is clear that it admits a unique solution $(c_1, c_2)$. This means that there exists a unique $\varphi \in D(A)$ with $\lambda\varphi - A\varphi = \psi$.

As you see, most of the difficulties are due to checking that $\lambda I - A$ is surjective, because this involves the solution of a more or less complicated equation. Another problem is that in general it is not easy to describe a generator by giving its exact domain. We will meet this problem in the next examples.

Moreover, when we define processes starting from their infinitesimal generators, we need a clear understanding of the meaning of the generator for the underlying process. To this aim notice that we could write
$$
S(h)\varphi - \varphi = hA\varphi + o(h), \;\Longleftrightarrow\; \mathbb{E}^x[\varphi(\gamma(h)) - \varphi(\gamma(0))] = hA\varphi(x) + o(h),
$$
or, more generally, by the Markov property,
$$
\mathbb{E}^x[\varphi(\gamma(t+h)) - \varphi(\gamma(t)) \,|\, \mathcal{F}_t] = hA\varphi(\gamma(t)) + o(h). \tag{8.1}
$$
Therefore, we can think of $A\varphi$ as the rate of infinitesimal variation of the observable $\varphi$ along the infinitesimal displacement of the state from $\gamma(t)$ to $\gamma(t+h)$.
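The rate interpretation (8.1) can be seen directly in simulation. A hedged Monte Carlo sketch of my own for Brownian motion, whose generator $A\varphi = \frac{1}{2}\varphi''$ is discussed in the next section (the test function $\varphi(x) = e^{-x^2}$, the starting point, and the step $h$ are illustrative choices):

```python
import numpy as np

# Monte Carlo illustration of (8.1) for Brownian motion, A phi = (1/2) phi'':
# (E^x[phi(B_h)] - phi(x)) / h should approach (1/2) phi''(x) as h -> 0.
rng = np.random.default_rng(2)
phi = lambda x: np.exp(-x**2)
x0, h, n = 0.3, 1e-4, 4_000_000

B_h = x0 + np.sqrt(h) * rng.standard_normal(n)      # B_h ~ N(x0, h) under P^{x0}
rate = (phi(B_h).mean() - phi(x0)) / h              # empirical infinitesimal rate
A_phi = 0.5 * (4.0 * x0**2 - 2.0) * np.exp(-x0**2)  # (1/2) phi''(x0) in closed form
print(rate, A_phi)                                  # agree up to Monte Carlo noise
```

The two numbers differ only by the $o(h)$ bias and the sampling noise of the empirical mean.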

8.2 Brownian motion

As is well known, the case of BM on the state space $E := \mathbb{R}^d$ corresponds to
$$
A\varphi := \frac{1}{2}\Delta\varphi.
$$
Actually, there are some technical aspects that complicate the discussion, mainly due to the problem of solving the equation
$$
\lambda\varphi - A\varphi = \psi, \;\Longleftrightarrow\; \lambda\varphi - \frac{1}{2}\Delta\varphi = \psi.
$$
We discuss this equation separately for $d = 1$ and $d \ge 2$.

8.2.1 Case d = 1

Let's start by solving the equation
$$
\lambda\varphi - \frac{1}{2}\varphi'' = \psi, \;\Longleftrightarrow\; 2\lambda\varphi - \varphi'' = 2\psi, \qquad \psi \in C_0(\mathbb{R}).
$$
Because the equation is basically the same as in Example 8.1, we have, for the general solution,
$$
\varphi(x) = c_1 e^{\sqrt{2\lambda}\,x} + c_2 e^{-\sqrt{2\lambda}\,x} - \frac{2}{\sqrt{2\lambda}}\int_0^x \psi(y)\sinh\!\big(\sqrt{2\lambda}\,(x-y)\big)\,dy.
$$
By the same argument of Example 8.1 we deduce that $\varphi \in C^2(\mathbb{R})$. Let's see when $\varphi \in C_0(\mathbb{R})$, that is when $\varphi(\pm\infty) = 0$. To this aim rewrite the solution in the form
$$
\varphi(x) = \left(c_1 - \frac{1}{\sqrt{2\lambda}}\int_0^x \psi(y)\,e^{-\sqrt{2\lambda}\,y}\,dy\right)e^{\sqrt{2\lambda}\,x} + \left(c_2 + \frac{1}{\sqrt{2\lambda}}\int_0^x \psi(y)\,e^{\sqrt{2\lambda}\,y}\,dy\right)e^{-\sqrt{2\lambda}\,x}.
$$
Notice that we have a unique possible choice of $c_1$ such that $\varphi(+\infty) = 0$, namely
$$
c_1 - \frac{1}{\sqrt{2\lambda}}\int_0^{+\infty} \psi(y)\,e^{-\sqrt{2\lambda}\,y}\,dy = 0. \tag{8.2}
$$

Indeed: first notice that
$$
c_2\,e^{-\sqrt{2\lambda}\,x} \longrightarrow 0, \quad x \longrightarrow +\infty,
$$
and, by de l'Hôpital,
$$
e^{-\sqrt{2\lambda}\,x}\int_0^x \psi(y)\,e^{\sqrt{2\lambda}\,y}\,dy = \frac{\int_0^x \psi(y)\,e^{\sqrt{2\lambda}\,y}\,dy}{e^{\sqrt{2\lambda}\,x}} \stackrel{(H)}{=} \frac{\psi(x)\,e^{\sqrt{2\lambda}\,x}}{\sqrt{2\lambda}\,e^{\sqrt{2\lambda}\,x}} = \frac{1}{\sqrt{2\lambda}}\,\psi(x) \longrightarrow 0, \quad x \longrightarrow +\infty,
$$
because $\psi \in C_0(\mathbb{R})$. Moreover, the integral in (8.2) is convergent because $|\psi(y)\,e^{-\sqrt{2\lambda}\,y}| \le \|\psi\|_\infty\, e^{-\sqrt{2\lambda}\,y}$. Therefore, if the left-hand side of (8.2) were not $0$, by the previous considerations we would have
$$
\left(c_1 - \frac{1}{\sqrt{2\lambda}}\int_0^x \psi(y)\,e^{-\sqrt{2\lambda}\,y}\,dy\right)e^{\sqrt{2\lambda}\,x} \longrightarrow \pm\infty.
$$
So the unique possibility is that (8.2) holds true. In that case, applying again the Hôpital rule, we have
$$
\frac{c_1 - \frac{1}{\sqrt{2\lambda}}\int_0^x \psi(y)\,e^{-\sqrt{2\lambda}\,y}\,dy}{e^{-\sqrt{2\lambda}\,x}} \stackrel{(H)}{=} \frac{-\frac{1}{\sqrt{2\lambda}}\,\psi(x)\,e^{-\sqrt{2\lambda}\,x}}{-\sqrt{2\lambda}\,e^{-\sqrt{2\lambda}\,x}} = \frac{1}{2\lambda}\,\psi(x) \longrightarrow 0, \quad x \longrightarrow +\infty,
$$
again because $\psi \in C_0(\mathbb{R})$. The moral is: the unique possible choice of $c_1$ such that $\varphi(+\infty) = 0$ is given by (8.2). Similarly, at $-\infty$ we find
$$
c_2 - \frac{1}{\sqrt{2\lambda}}\int_{-\infty}^0 \psi(y)\,e^{\sqrt{2\lambda}\,y}\,dy = 0. \tag{8.3}
$$
This means that there is a unique $\varphi \in C^2(\mathbb{R}) \cap C_0(\mathbb{R})$ such that $\lambda\varphi - A\varphi = \psi$. We can summarize the discussion with the statement:

Theorem 8.3. The operator
$$
A := \frac{1}{2}\frac{d^2}{dx^2}, \qquad D(A) := \left\{\varphi \in C^2(\mathbb{R}) \cap C_0(\mathbb{R}) \,:\, \varphi'' \in C_0(\mathbb{R})\right\},
$$
is a Markov generator.
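Plugging the values of $c_1, c_2$ from (8.2) and (8.3) back into the general solution, one can check that the resolvent collapses into the classical kernel form $R_\lambda\psi(x) = \frac{1}{\sqrt{2\lambda}}\int_{\mathbb{R}} e^{-\sqrt{2\lambda}\,|x-y|}\,\psi(y)\,dy$. A numerical sketch of my own (the function $\psi$ below is an illustrative choice, engineered so that the exact resolvent is known in closed form):

```python
import numpy as np

lam = 1.5
s = np.sqrt(2.0 * lam)   # sqrt(2*lambda)

# Pick psi so the exact resolvent is known: with phi0(x) = exp(-x^2),
# psi := lam*phi0 - (1/2)*phi0'' = (lam + 1 - 2x^2) exp(-x^2), hence R_lam psi = phi0.
psi = lambda x: (lam + 1.0 - 2.0 * x**2) * np.exp(-x**2)

# Kernel form of the resolvent: R_lam psi(x) = (1/s) * int_R exp(-s|x-y|) psi(y) dy
y = np.linspace(-30.0, 30.0, 60001)
dy = y[1] - y[0]
def R(x):
    f = np.exp(-s * np.abs(x - y)) * psi(y)
    return dy * (f.sum() - 0.5 * (f[0] + f[-1])) / s   # trapezoid rule

err = max(abs(R(x) - np.exp(-x**2)) for x in (-1.0, 0.0, 0.5, 2.0))
print(err)   # small: the kernel reproduces phi0
```

The agreement, up to quadrature error, confirms that the kernel $\frac{1}{\sqrt{2\lambda}} e^{-\sqrt{2\lambda}|x-y|}$ is indeed the Green's function of $\lambda - \frac{1}{2}\frac{d^2}{dx^2}$ on $\mathbb{R}$.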
