
The averaging principle for stochastic differential equations and a financial application


Academic year: 2021



POLITECNICO DI MILANO



Master of Science in Mathematical Engineering

School of Industrial and Information Engineering

Candidate:

Filippo de Feo

898747

Supervisor:

Prof. Giuseppina Guatteri

The Averaging Principle

for Stochastic Differential Equations

and a Financial Application


Table of contents

Abstract
Sommario
Ringraziamenti
1 Synopsis
2 Preliminary Results
  2.1 Skorokhod Theorem
  2.2 Prohorov Theorem
  2.3 Stochastic Differential Equations
  2.4 Martingale Problem
3 The Averaging Principle for a Class of Stochastic Differential Equations
  3.1 Framework and Main Theorem
  3.2 Proof of the Main Theorem
4 Financial Application
  4.1 Local Stochastic Volatility Model
  4.2 Convergence to a Local Volatility Model
  4.3 Convergence to the Black and Scholes Model
  4.4 Example


Abstract

In this thesis we study the averaging principle for slow-fast systems of stochastic differential equations. In the first part we extend Khasminskii's result in [10] to the non-autonomous case, assuming Lipschitz continuity and sublinearity of the coefficients, an ergodic condition and some regularity of the fast component. In this setting we prove the weak convergence of the slow component to the solution of the averaged equation, making use of Khasminskii's discretization techniques.

In the second part of the thesis we propose a financial application of this result. In particular we apply the theory developed to a slow-fast local stochastic volatility model and show that the prices of derivatives converge to those given by the averaged model, which turns out to be a local volatility model. Moreover we prove that, in the case of a slow-fast stochastic volatility model, the averaged model is a Black and Scholes model.


Sommario

In this thesis we study the averaging principle for slow-fast systems of stochastic differential equations. In particular, in the first part, we extend Khasminskii's result in [10] to the non-autonomous case, assuming sublinearity and Lipschitz continuity of the coefficients, an ergodic condition and a certain regularity of the solution of the fast equation. In this framework we prove the weak convergence of the solution of the slow equation to the solution of the averaged equation, making use of Khasminskii's discretization techniques.

In the second part we propose a financial application of this result. In particular we apply it to a slow-fast local stochastic volatility model and show that derivative prices converge to those computed with the limit model, which turns out to be a local volatility model. Moreover we show that, in the case of a slow-fast stochastic volatility model, the limit model is the Black and Scholes one.


Ringraziamenti

Before proceeding, I would like to spend a few words to thank all those who have been close to me along this path of growth:

first of all my supervisor Giuseppina Guatteri, for her endless availability and competence, from the choice of the topic to her help in solving the problems that arose along the way and in the writing of this thesis;

Michele Azzone of the Department of Mathematics, who offered me his valuable advice for the financial part of this thesis;

my family, to whose support I owe this achievement, in particular my mum, my dad, my sister Rebecca, grandma Anna, grandma Licia and my grandfathers Umberto and Nino who are no longer with us, my aunts and uncles, my cousins and all the relatives who have always been close to me;

Nausica, my girlfriend, who in these last months has passed on to me her great joy and cheerfulness;

my companions in these five years of study in mathematical engineering, in particular Marco, Teo, Michele, Max, Tony, Edo, Lorenzo, Dan, Carbo and all the others;

my dearest friends, among them Deca, Bonnie, Frank, Ludovico, Virgy, Pinto, Miguel, Guerro, Massi, Chiara, Steve, Leo, Ferro and my coach;

all the professors I have had in these five years, in particular those of the Department of Mathematics, who day after day nurtured my great passion for mathematics.


1 Synopsis

In this thesis we are concerned with the study of the averaging principle for slow-fast systems of stochastic differential equations of the form
$$ \begin{cases} dX^\epsilon_t = b(t, X^\epsilon_t, Y^\epsilon_t)\,dt + \sigma(t, X^\epsilon_t, Y^\epsilon_t)\,dW_t, & X^\epsilon_0 = x_0 \in \mathbb{R}^d \\ dY^\epsilon_t = \epsilon^{-1} B(t, X^\epsilon_t, Y^\epsilon_t)\,dt + \epsilon^{-1/2} C(t, X^\epsilon_t, Y^\epsilon_t)\,d\tilde{W}_t, & Y^\epsilon_0 = y_0 \in \mathbb{R}^l \end{cases} \tag{1.1} $$
where $t \in [0,T]$, $W, \tilde{W}$ are Brownian motions, $b, \sigma, B, C$ are regular functions and $\epsilon \in (0,1]$ is a small parameter.

We note that $\epsilon$ appears explicitly only in the second equation, and as $\epsilon \to 0$ the speed of the second component $Y^\epsilon$ formally tends to explode. For this reason the first and the second components $X^\epsilon$ and $Y^\epsilon$ are called the slow and the fast component respectively.

The idea of the averaging principle is to show the convergence of the slow variable, as $\epsilon \to 0$, to the solution $\bar{X}$ of the so-called averaged equation
$$ d\bar{X}_t = \bar{b}(t, \bar{X}_t)\,dt + \bar{\sigma}(t, \bar{X}_t)\,dW_t, \qquad \bar{X}_0 = x_0, \ t \in [0,T] $$
which no longer depends on the fast component; the coefficients $\bar{b}, \bar{\sigma}$ are suitable averages of the original ones $b, \sigma$.

In this sense, from a modeling point of view, it is possible to approximate the behavior of the slow component $X^\epsilon$ with $\bar{X}$, which follows a simpler equation, and to control the error of this approximation. For this reason multiscale stochastic systems are widely used in many areas of physics, chemistry, biology, financial mathematics and many other application areas (e.g. [6], [7], [11], [17]).
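As an illustration of this approximation (not from the thesis: the coefficients below are invented for the example, and the two noises are taken independent for simplicity), one can simulate a one-dimensional slow-fast system with the Euler-Maruyama scheme and compare the slow component with the averaged dynamics. With $b(t,x,y) = y - x$, $\sigma \equiv 0.5$, $B(t,x,y) = -y$, $C \equiv 1$, the fast component is an Ornstein-Uhlenbeck process averaging to $0$, so the averaged equation is $d\bar{X}_t = -\bar{X}_t\,dt + 0.5\,dW_t$ and $\mathbb{E}[\bar{X}_T] = x_0 e^{-T}$.

```python
import numpy as np

# Euler-Maruyama simulation of an illustrative slow-fast system:
#   dX = (Y - X) dt + 0.5 dW,   dY = -(1/eps) Y dt + (1/sqrt(eps)) dWtilde.
# The fast OU component Y averages to 0, so the averaged equation is
#   dXbar = -Xbar dt + 0.5 dW,   hence E[Xbar_T] = x0 * exp(-T).
rng = np.random.default_rng(0)
eps, T, dt, n_paths = 0.01, 1.0, 5e-4, 2000
n_steps = int(T / dt)
x = np.full(n_paths, 1.0)   # slow component, x0 = 1
y = np.full(n_paths, 1.0)   # fast component, y0 = 1
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)    # increment of W
    dwt = rng.normal(0.0, np.sqrt(dt), n_paths)   # independent increment of Wtilde
    x = x + (y - x) * dt + 0.5 * dw
    y = y + (-y / eps) * dt + dwt / np.sqrt(eps)
print(x.mean())  # should be close to exp(-1) ≈ 0.368
```

The Monte Carlo mean of $X^\epsilon_T$ is close to the averaged-model mean $x_0 e^{-T}$; shrinking $\epsilon$ further also tightens the pathwise agreement.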

The averaging principle was initially proposed for the classical equations of celestial mechanics by Clairaut, Laplace and Lagrange in the 18th century. Later on, in the 20th century, rigorous results were proved by Bogoliubov, Krylov and Mitropolsky. [12, Appendices] contains an entire section devoted to the history of the averaging principle for differential equations.

As for stochastic differential equations, the first results on the averaging principle are due to Khasminskii, Vrkoc and Gikhman in the '60s.

In [10] Khasminskii considered the autonomous system of stochastic differential equations
$$ \begin{cases} dX^\epsilon_t = b(X^\epsilon_t, Y^\epsilon_t)\,dt + \sigma(X^\epsilon_t, Y^\epsilon_t)\,dW_t, & X^\epsilon_0 = x_0 \in \mathbb{R}^d \\ dY^\epsilon_t = \epsilon^{-1} B(X^\epsilon_t, Y^\epsilon_t)\,dt + \epsilon^{-1/2} C(X^\epsilon_t, Y^\epsilon_t)\,d\tilde{W}_t, & Y^\epsilon_0 = y_0 \in \mathbb{R}^l \end{cases} $$


and proved that the averaging principle holds when the coefficients are Lipschitz and have controlled growth. Moreover, a certain regularity of the fast component and the following ergodic condition are assumed:

there exist a $d$-dimensional vector $\bar{b}(x)$ and a square $d$-dimensional matrix $\bar{a}(x)$ such that for every $t_0 \le T$, $\tau \ge 0$, $x \in \mathbb{R}^d$, $y \in \mathbb{R}^l$
$$ \bigg| \mathbb{E}\,\tau^{-1}\int_{t_0}^{t_0+\tau} b\big(x, y^{(x,y)}_s\big)\,ds - \bar{b}(x) \bigg| \le \bar{K}(\tau)\big(1 + |x|^2 + |y|^2\big) $$
$$ \bigg| \mathbb{E}\,\tau^{-1}\int_{t_0}^{t_0+\tau} a\big(x, y^{(x,y)}_s\big)\,ds - \bar{a}(x) \bigg| \le \bar{K}(\tau)\big(1 + |x|^2 + |y|^2\big) $$
where $\lim_{\tau\to\infty}\bar{K}(\tau) = 0$ and $\{y^{(x,y)}_s\}_{s\ge 0}$ satisfies the so-called frozen equation
$$ dy^{(x,y)}_s = B\big(x, y^{(x,y)}_s\big)\,ds + C\big(x, y^{(x,y)}_s\big)\,d\tilde{W}_s, \qquad y^{(x,y)}_0 = y, \ s \ge 0 $$

In particular, Khasminskii showed that the process $X^\epsilon$ converges weakly in $C[0,T]^d$ to the averaged process $\bar{X}$ satisfying the averaged equation
$$ d\bar{X}_t = \bar{b}(\bar{X}_t)\,dt + \bar{\sigma}(\bar{X}_t)\,dW_t, \qquad \bar{X}_0 = x_0, \ t \in [0,T] $$

Usually the coefficients $\bar{b}, \bar{\sigma}$ satisfying the ergodic condition are averages of the initial ones $b, \sigma$ with respect to the invariant measure of the frozen equation, which must be shown to exist.

Khasminskii's proof, based on a discretization technique, was very influential and inspired much research by mathematicians such as Freidlin, Wentzell, Veretennikov, Cerrai and many others. We remark that when $\sigma$ is independent of the fast variable $Y^\epsilon_t$ one usually has the stronger convergence in probability of the slow variable $X^\epsilon_t$ (e.g. [3], [8]).

In [4] Cerrai and Lunardi in 2016 studied a non-autonomous slow-fast system in infinite dimension, where the coefficients of the fast equation depend on time and satisfy an almost-periodicity-in-time condition. They proved the convergence in probability of $X^\epsilon_t$ when $\sigma$ is independent of $Y^\epsilon_t$.

More recently, in 2019, Wei Liu, Michael Röckner, Xiaobin Sun and Yingchao Xie [16] considered the non-autonomous system (1.1) with $\sigma$ independent of $Y^\epsilon_t$ and proved the $L^p$ convergence of $X^\epsilon_t$. There the coefficients of (1.1) are assumed to depend on $t$ and $\omega \in \Omega$.

In this thesis we consider the system (1.1) and show the weak convergence of $X^\epsilon_t$, generalizing in some sense Khasminskii's result to the non-autonomous case. We make use of the discretization techniques inspired by [10].

The thesis is organized as follows:

in chapter 2 we introduce some concepts and preliminary results that we need in the following chapters.

In chapter 3 we introduce a class of stochastic differential equations and we prove that the averaging principle holds for this class.

In chapter 4 we apply the result shown in the previous chapter to a financial model, in particular a local stochastic volatility model.


2 Preliminary Results

2.1 Skorokhod Theorem

In this section we introduce the Skorokhod theorem, which allows one to pass from weak convergence to almost sure convergence on a suitable probability space. In order to do so we first give the definition of weak convergence for probabilities, random variables and stochastic processes. This can be found in [9, chapter 2].

Definition 2.1.1. Let $(S, \rho)$ be a metric space with Borel $\sigma$-field $\mathcal{B}(S)$. Let $\{P_n\}_{n\in\mathbb{N}}$ be a sequence of probability measures on $(S, \mathcal{B}(S))$ and let $P$ be another probability measure on this space. We say that $\{P_n\}_{n\in\mathbb{N}}$ converges weakly to $P$ if
$$ \lim_{n\to+\infty} \int_S f(s)\,dP_n(s) = \int_S f(s)\,dP(s) $$
for every bounded, continuous real-valued function $f$ on $S$.

Remark 2.1.2. Let $X$ be a random variable defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with values in a measurable space $(S, \mathcal{B}(S))$. Then $X$ induces a probability measure $\mathbb{P}X^{-1}$ on $(S, \mathcal{B}(S))$: for $B \in \mathcal{B}(S)$,
$$ \mathbb{P}X^{-1}(B) = \mathbb{P}(\{\omega \in \Omega : X(\omega) \in B\}) \tag{2.1} $$

Remark 2.1.3. Let $X = \{X_t\}_{t\ge 0}$ be a continuous $\mathbb{R}^d$-valued stochastic process on $(\Omega, \mathcal{F}, \mathbb{P})$. We can see $X$ as a random variable on $(\Omega, \mathcal{F}, \mathbb{P})$ with values in $(C[0,+\infty)^d, \mathcal{B}(C[0,+\infty)^d))$. The induced probability $\mathbb{P}X^{-1}$ of (2.1) is called the law of $X$. We recall that $C[0,+\infty)^d$ is a complete separable metric space with the metric
$$ \rho(y_1, y_2) = \sum_{n=0}^{+\infty} \frac{1}{2^n} \max_{0\le t\le n} \big( |y_1(t) - y_2(t)| \wedge 1 \big), \qquad y_1, y_2 \in C[0,+\infty)^d $$
Moreover, on $(C[0,+\infty)^d, \mathcal{B}(C[0,+\infty)^d), \mathbb{P}X^{-1})$ it is possible to construct the coordinate process $\tilde{X}$ defined by $\tilde{X}_t(x) = x(t)$, $x \in C[0,+\infty)^d$, which has the same law as $X$.
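As a small sketch (not from the thesis), the metric above can be approximated numerically by truncating the series and sampling each path on a fine grid; note that the metric is bounded by $\sum_n 2^{-n} = 2$ for any pair of paths:

```python
import numpy as np

def rho(y1, y2, n_terms=50, pts_per_unit=1000):
    # Truncated metric on C[0,+inf): sum_n 2^{-n} * min(max_{0<=t<=n}|y1(t)-y2(t)|, 1)
    total = 0.0
    for n in range(n_terms):
        if n == 0:
            m = abs(float(y1(0.0)) - float(y2(0.0)))  # max over the single point t = 0
        else:
            t = np.linspace(0.0, n, n * pts_per_unit + 1)
            m = float(np.max(np.abs(y1(t) - y2(t))))
        total += min(m, 1.0) / 2.0 ** n
    return total

print(rho(np.sin, np.sin))  # identical paths: distance 0
print(rho(np.sin, np.cos))  # |sin(0) - cos(0)| = 1, so every term saturates: distance ≈ 2
```

The truncation error is of order $2^{-n_{\text{terms}}}$, since every capped term is at most $2^{-n}$.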

Definition 2.1.4. Let $\{(\Omega_n, \mathcal{F}_n, \mathbb{P}_n)\}_{n\in\mathbb{N}}$ be a sequence of probability spaces, and on each of them consider a random variable $X_n$ with values in the metric space $(S, \rho)$. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be another probability space on which a random variable $X$ with values in $(S, \rho)$ is given. We say $X_n \to X$ weakly if the sequence of measures $\{\mathbb{P}_n X_n^{-1}\}_{n\in\mathbb{N}}$ converges weakly to the measure $\mathbb{P}X^{-1}$.


Theorem 2.1 (Skorokhod). Let $(S, \rho)$ be a separable metric space with Borel $\sigma$-field $\mathcal{B}(S)$, and let $X, \{X_n\}_{n\in\mathbb{N}}$ be random variables, defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with values in $S$, such that $X_n \to X$ weakly. Then we can find a new probability space $(\Omega^*, \mathcal{F}^*, \mathbb{P}^*)$ and random variables $X^*, X^*_n$ defined on it such that $X^*_n \to X^*$ a.s., $X^*_n = X_n$ in law for every $n \ge 1$, and $X^* = X$ in law.

2.2 Prohorov Theorem

In this section we introduce the Prohorov theorem, which states the equivalence between relative compactness and tightness for a family of probability measures. The following can be found in [9, chapter 2].

Let $(S, \rho)$ be a metric space and let $\Pi$ be a family of probability measures on $(S, \mathcal{B}(S))$.

Definition 2.2.1. We say that $\Pi$ is relatively compact if every sequence of elements of $\Pi$ contains a weakly convergent subsequence.

Definition 2.2.2. We say that $\Pi$ is tight if for every $\epsilon > 0$ there exists a compact set $K \subseteq S$ such that $P(K) \ge 1 - \epsilon$ for every $P \in \Pi$.

Thanks to remark 2.1.2 we can define tightness and relative compactness for random variables.

Definition 2.2.3. A family of random variables is relatively compact if the family of its induced probabilities is relatively compact.

Definition 2.2.4. A family of random variables is tight if the family of its induced probabilities is tight.

Now we can state the Prohorov theorem:

Theorem 2.2 (Prohorov). Let $\Pi$ be a family of probability measures on a complete, separable metric space $S$. This family is relatively compact if and only if it is tight.

Finally we give a sufficient condition for the tightness of a sequence of stochastic processes:

Theorem 2.3. Let $\{X^m\}_{m\in\mathbb{N}}$ be a sequence of continuous $\mathbb{R}^d$-valued stochastic processes on $(\Omega, \mathcal{F}, \mathcal{F}_t, \mathbb{P})_{t\ge 0}$ satisfying the following conditions:

(i) $\sup_{m\ge 1} \mathbb{E}\big[ |X^m_0|^{\nu} \big] = M < \infty$

(ii) $\sup_{m\ge 1} \mathbb{E}\big[ |X^m_t - X^m_s|^{\alpha} \big] \le C_T\,|t-s|^{1+\beta}$ for every $T > 0$, $0 \le s, t \le T$

for some positive (universal) constants $\alpha, \beta, \nu$ and a constant $C_T > 0$. Then the probability measures $P^m = \mathbb{P}(X^m)^{-1}$ induced by these processes on $(C[0,+\infty)^d, \mathcal{B}(C[0,+\infty)^d))$ form a tight sequence.


2.3 Stochastic Differential Equations

In this section we introduce the concept of stochastic differential equation (SDE) and a general result on existence and uniqueness of the solution. The following can be found in [1, chapters 8 and 9].

Let $T \in (0, +\infty)$ be the time horizon; we recall the definition of the space $M^p[0,T]$.

Definition 2.3.1. We denote by $M^p[0,T]$ the space of equivalence classes of $\mathbb{R}^d$-valued stochastic processes $X = (\Omega, \mathcal{F}, \mathcal{F}_t, X_t, \mathbb{P})_{t\le T}$ such that
$$ \mathbb{E} \int_0^T |X_s|^p\,ds < +\infty $$

By the Burkholder-Davis-Gundy and Hölder inequalities we obtain the following inequalities for the stochastic integral of a process in $M^2[0,T]$.

Theorem 2.4. Let $p \ge 2$ and let $\{W_t\}_{t\ge 0}$ be a Brownian motion. Then there exists a constant $C_p > 0$ such that for every $X \in M^2[0,T]$
$$ \mathbb{E}\bigg[ \sup_{t\in[0,T]} \bigg| \int_0^t X_s\,dW_s \bigg|^p \bigg] \le C_p\, \mathbb{E}\bigg[ \bigg( \int_0^T |X_s|^2\,ds \bigg)^{p/2} \bigg] \le C_p\, T^{(p-2)/2}\, \mathbb{E}\bigg[ \int_0^T |X_s|^p\,ds \bigg] $$

Now we define the concept of solution for the stochastic differential equation (SDE)
$$ \begin{cases} dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dW_t \\ X_0 = x_0 \in \mathbb{R}^d \end{cases} \tag{2.2} $$
where $b, \sigma$ are measurable functions
$$ b : [0,T] \times \mathbb{R}^d \to \mathbb{R}^d, \qquad \sigma : [0,T] \times \mathbb{R}^d \to \mathbb{R}^{d\times d} $$

Definition 2.3.2. The couple of processes $(X, W) = (\Omega, \mathcal{F}, \mathcal{F}_t, W_t, X_t, \mathbb{P})_{t\le T}$ is a solution of the SDE (2.2) if $(\Omega, \mathcal{F}, \mathcal{F}_t, W_t, \mathbb{P})_{t\ge 0}$ is a $d$-dimensional standard Brownian motion and for every $t \in [0,T]$ we have
$$ X_t = x_0 + \int_0^t b(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dW_s \qquad \mathbb{P}\text{-a.s.} $$

It is important to notice that the Brownian motion and the probability space with respect to which the SDE is defined are not given a priori. Notice also that any solution of the SDE (2.2) is necessarily a continuous process.

Definition 2.3.3. We say that the SDE (2.2) has strong solutions if for every standard Brownian motion $(\Omega, \mathcal{F}, \mathcal{F}_t, W_t, \mathbb{P})_{t\ge 0}$ there exists a process $X$ such that $(X, W)$ is a solution of (2.2).

Definition 2.3.4. We say that the SDE (2.2) has weak solutions if there exists a couple $(X, W)$ which is a solution to (2.2).

Obviously the existence of strong solutions implies the existence of weak solutions.

Definition 2.3.5. We say that pathwise uniqueness holds for the SDE (2.2) if, given two solutions $(X^i, W) = (\Omega, \mathcal{F}, \mathcal{F}_t, W_t, X^i_t)$, $i = 1, 2$, defined on the same probability space and with respect to the same Brownian motion, $X^1$ and $X^2$ are indistinguishable, i.e.
$$ \mathbb{P}\big( X^1_t = X^2_t \ \ \forall t \in [0,T] \big) = 1 $$

Thanks to remark 2.1.3 we can state the definition of uniqueness in law:

Definition 2.3.6. We say that uniqueness in law holds for the SDE (2.2) if, given two solutions $(X^i, W^i) = (\Omega^i, \mathcal{F}^i, \mathcal{F}^i_t, W^i_t, X^i_t, \mathbb{P}^i)$, $i = 1, 2$ (possibly defined on different probability spaces and with respect to different Brownian motions), $X^1$ and $X^2$ have the same law.

It is possible to show that pathwise uniqueness implies uniqueness in law.

Now we introduce the assumptions under which we have existence and uniqueness of the solution of the SDE.

Definition 2.3.7. We say that $b, \sigma$ satisfy Assumption (A) if there exist constants $L > 0$, $M > 0$ such that:
$$ |b(t,x)| \le M(1+|x|) \tag{2.3} $$
$$ |\sigma(t,x)| \le M(1+|x|) \tag{2.4} $$
$$ |b(t,x) - b(t,y)| \le L|x-y| \tag{2.5} $$
$$ |\sigma(t,x) - \sigma(t,y)| \le L|x-y| \tag{2.6} $$
for every $t \in [0,T]$, $x, y \in \mathbb{R}^d$.

Assumption (A) says that the measurable functions $b, \sigma$ are sublinear in $x$ uniformly in $t$ and Lipschitz in $x$ uniformly in $t$.

We now state that under Assumption (A) the SDE (2.2) has strong solutions and pathwise uniqueness holds [1, theorems 9.1, 9.2].

Theorem 2.5. Let $(\Omega, \mathcal{F}, \mathcal{F}_t, W_t, \mathbb{P})_{t\ge 0}$ be a $d$-dimensional standard Brownian motion and $x_0 \in \mathbb{R}^d$. Then under Assumption (A) there exists a process $X \in M^2[0,T]$ such that
$$ X_t = x_0 + \int_0^t b(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dW_s \qquad \mathbb{P}\text{-a.s.} $$
for every $t \in [0,T]$, and pathwise uniqueness holds. Moreover for every $p \ge 2$ we have the following estimate:
$$ \mathbb{E}\Big[ \sup_{0\le t\le T} |X_t|^p \Big] \le C(1 + |x_0|^p) \tag{2.7} $$
for some $C = C(p, T, M) > 0$.


2.4 Martingale Problem

Now we introduce the martingale problem and explain its connection with stochastic differential equations. The following can be found in [9, chapter 5]. Let $b(t,x) = [b_i(t,x)]$ be a $d$-dimensional vector and $\sigma(t,x) = [\sigma_{ij}(t,x)]$ a $d\times r$-dimensional matrix, where $0 \le t < +\infty$ and $x \in \mathbb{R}^d$.

We define the differential operator
$$ (\mathcal{A}_t f)(x) = \sum_{i=1}^d b_i(t,x)\,\frac{\partial f}{\partial x_i}(x) + \frac{1}{2} \sum_{i,k=1}^d a_{ik}(t,x)\,\frac{\partial^2 f}{\partial x_i \partial x_k}(x) \tag{2.8} $$
where $a_{ik}(t,x) = \sum_{j=1}^r \sigma_{ij}(t,x)\,\sigma_{kj}(t,x)$, $f \in C^2(\mathbb{R}^d)$, $t < +\infty$.

Definition 2.4.1. A probability measure $P$ on $(C[0,+\infty)^d, \mathcal{B}(C[0,+\infty)^d))$ under which the process $\{M^f_t\}_{t\ge 0}$ defined by
$$ M^f_t(y) = f(y(t)) - f(y(0)) - \int_0^t (\mathcal{A}_s f)(y(s))\,ds, \qquad 0 \le t < +\infty, \ y \in C[0,+\infty)^d \tag{2.9} $$
is a continuous local $\mathcal{F}_t$-martingale for every $f \in C^2(\mathbb{R}^d)$ is called a solution to the local martingale problem associated with $\{\mathcal{A}_t\}$. Here $\mathcal{F}_t = \mathcal{G}_{t+}$ and $\{\mathcal{G}_t\}$ is the augmentation under $P$ of the canonical filtration $\mathcal{B}(C[0,t)^d)$.

Now we want to show the relation between the martingale problem and the following stochastic differential equation:
$$ \begin{cases} dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dW_t, & 0 \le t < +\infty \\ X_0 = x_0 \in \mathbb{R}^d \end{cases} \tag{2.10} $$
having the couple $(X, W) = (\Omega, \mathcal{F}, \mathcal{F}_t, \mathbb{P}, X_t, W_t)$ as a weak solution.

Let $f \in C^2(\mathbb{R}^d)$; by Itô's lemma we have:
$$ m^f_t = f(X_t) - f(X_0) - \int_0^t (\mathcal{A}_s f)(X_s)\,ds = \int_0^t \nabla f(X_s)\,\sigma(s, X_s)\,dW_s $$
from which it follows that the process $\{m^f_t\}_{t\ge 0}$ is a local $\mathcal{F}_t$-martingale.

Moreover $\{m^f_t\}_{t\ge 0}$ is by construction $\mathcal{F}^X_t$-measurable, where $\mathcal{F}^X_t = \sigma(X_s, s \le t) \subseteq \mathcal{F}_t$. Then we have:
$$ \mathbb{E}\big[ m^f_t \mid \mathcal{F}^X_s \big] = \mathbb{E}\big[ \mathbb{E}[ m^f_t \mid \mathcal{F}_s ] \mid \mathcal{F}^X_s \big] = \mathbb{E}\big[ m^f_s \mid \mathcal{F}^X_s \big] = m^f_s $$
for every $0 \le s \le t$.

This implies that $\{m^f_t\}_{t\ge 0}$ is a local $\mathcal{F}^X_t$-martingale, and it follows that
$$ \mathbb{E}\big[ (m^f_t - m^f_s)\,\mathbf{1}_{C_s} \big] = 0 \tag{2.11} $$
for every indicator function $\mathbf{1}_{C_s}$, $C_s \in \mathcal{F}^X_s$.

Now we consider $X$ as a random variable with values in $(C[0,+\infty)^d, \mathcal{B}(C[0,+\infty)^d))$; we know it induces a probability $P$ on $(C[0,+\infty)^d, \mathcal{B}(C[0,+\infty)^d))$.

On $(C[0,+\infty)^d, \mathcal{B}(C[0,+\infty)^d), \mathcal{B}(C[0,t)^d), P)$ we consider the coordinate process $\tilde{X}$ (remark 2.1.3) defined by $\tilde{X}_t(y) = y(t)$, $y \in C[0,+\infty)^d$.

Then $\tilde{X}$ and $X$ have the same law, and this implies that the processes $m^f_t$ and
$$ M^f_t = f(\tilde{X}_t) - f(\tilde{X}_0) - \int_0^t (\mathcal{A}_s f)(\tilde{X}_s)\,ds \tag{2.12} $$
have the same law, since they are the same function of processes with the same law. From this fact and from (2.11) it follows that
$$ \mathbb{E}\big[ (M^f_t - M^f_s)\,\mathbf{1}_{C_s} \big] = 0 \tag{2.13} $$
for every $C_s \in \mathcal{F}^{\tilde{X}}_s$.

We also have $\mathcal{B}(C[0,t)^d) = \mathcal{F}^{\tilde{X}}_t = \sigma(\tilde{X}_s, s \le t)$ by definition. This implies that $\{M^f_t\}_{t\ge 0}$ is a continuous local $\mathcal{B}(C[0,t)^d)$-martingale, from which it follows that it is a continuous local $\mathcal{F}_t$-martingale.

This means that the process $\{X_t\}_{t\ge 0}$ induces on $(C[0,+\infty)^d, \mathcal{B}(C[0,+\infty)^d))$ a probability measure $P$ which solves the local martingale problem. The converse is also true, [9, proposition 4.6].

Theorem 2.6. Let $P$ be a probability measure on $(C[0,+\infty)^d, \mathcal{B}(C[0,+\infty)^d))$ under which the process $M^f_t$ of (2.9) is a continuous local martingale for the choices $f(x) = x_i$ and $f(x) = x_i x_k$, $i, k \le d$. Then there is an $r$-dimensional Brownian motion $W = \{W_t, \hat{\mathcal{F}}_t;\ t < +\infty\}$ defined on an extension $(\hat{\Omega}, \hat{\mathcal{F}}, \hat{P})$ of $(C[0,+\infty)^d, \mathcal{B}(C[0,+\infty)^d), P)$ such that $(\hat{\Omega}, \hat{\mathcal{F}}, \hat{\mathcal{F}}_t, \hat{P}, X_t = y(t), W_t)$ is a weak solution to (2.10).

Remark 2.4.2. The notion of extension of a probability space can be found in [9, chapter 3].

As a consequence we have the following corollaries, [9, corollaries 4.8, 4.9].

Corollary 2.4.3. The existence of a solution $P$ of the local martingale problem associated with $\mathcal{A}_t$ is equivalent to the existence of a weak solution $(X, W)$ to the equation (2.10). The two solutions are related by $P = \hat{\mathbb{P}}X^{-1}$, i.e. $X$ induces the measure $P$ on $(C[0,+\infty)^d, \mathcal{B}(C[0,+\infty)^d))$.

Corollary 2.4.4. The uniqueness of the solution $P$ to the martingale problem with fixed initial condition $y(0) = x_0 \in \mathbb{R}^d$ is equivalent to the uniqueness in law of the solution to (2.10).


3 The Averaging Principle for a Class of Stochastic Differential Equations

3.1 Framework and Main Theorem

Let $(\Omega, \mathcal{F}, \mathcal{F}_t, W_t, \mathbb{P})_{t\ge 0}$ be a continuous standard $\mathbb{R}^d$-Brownian motion and let $\{\tilde{W}_t\}_{t\ge 0}$ be a continuous standard $\mathbb{R}^l$-Brownian motion with respect to the same filtration, with instantaneous correlation $\rho \in (-1, 1)$:
$$ \mathbb{E}[dW^i_t\,d\tilde{W}^j_t] = \rho\,dt \qquad \forall i \le d, \ j \le l $$
Let us consider the system of stochastic differential equations:
$$ \begin{cases} dX^\epsilon_t = b(t, X^\epsilon_t, Y^\epsilon_t)\,dt + \sigma(t, X^\epsilon_t, Y^\epsilon_t)\,dW_t, & X^\epsilon_0 = x_0 \\ dY^\epsilon_t = \epsilon^{-1} B(t, X^\epsilon_t, Y^\epsilon_t)\,dt + \epsilon^{-1/2} C(t, X^\epsilon_t, Y^\epsilon_t)\,d\tilde{W}_t, & Y^\epsilon_0 = y_0 \end{cases} \tag{3.1} $$
where $t \in [0,T]$, $T$ is the time horizon, $x_0 \in \mathbb{R}^d$, $y_0 \in \mathbb{R}^l$, $\epsilon \in (0,1]$, and $b, \sigma, B, C$ are Borel functions:
$$ b : [0,T]\times\mathbb{R}^d\times\mathbb{R}^l \to \mathbb{R}^d, \qquad B : [0,T]\times\mathbb{R}^d\times\mathbb{R}^l \to \mathbb{R}^l $$
$$ \sigma : [0,T]\times\mathbb{R}^d\times\mathbb{R}^l \to \mathbb{R}^{d\times d}, \qquad C : [0,T]\times\mathbb{R}^d\times\mathbb{R}^l \to \mathbb{R}^{l\times l} $$

Now we list the hypotheses under which we will prove the main theorem of this work:

Hypothesis 1. There exist $L > 0$, $\alpha > 0$ such that:
$$ |b(t, x_2, y_2) - b(s, x_1, y_1)| \le L\big(|t-s|^\alpha + |x_2-x_1| + |y_2-y_1|\big) $$
$$ |\sigma(t, x_2, y_2) - \sigma(s, x_1, y_1)| \le L\big(|t-s|^\alpha + |x_2-x_1| + |y_2-y_1|\big) $$
$$ |B(t, x_2, y_2) - B(s, x_1, y_1)| \le L\big(|t-s|^\alpha + |x_2-x_1| + |y_2-y_1|\big) $$
$$ |C(t, x_2, y_2) - C(s, x_1, y_1)| \le L\big(|t-s|^\alpha + |x_2-x_1| + |y_2-y_1|\big) $$


Remark 3.1.1. Hypothesis 1 implies that the functions $b, \sigma, B, C$ are Lipschitz in $x, y$ uniformly with respect to $t$ and $\alpha$-Hölder continuous in $t$ uniformly with respect to $x, y$.

Hypothesis 2 (Sublinearity). There exists $M > 0$ such that:
$$ |b(t,x,y)| \le M(1+|x|), \qquad |\sigma(t,x,y)| \le M(1+|x|) $$
$$ |B(t,x,y)| \le M(1+|x|+|y|), \qquad |C(t,x,y)| \le M(1+|x|+|y|) $$
for every $0 \le t \le T$, $x \in \mathbb{R}^d$, $y \in \mathbb{R}^l$.

Remark 3.1.2. We cannot assume
$$ |b(t,x,y)| \le M(1+|x|+|y|), \qquad |\sigma(t,x,y)| \le M(1+|x|+|y|) $$
because the behaviour of $Y^\epsilon_t$ is too fast.

Proposition 3.1. Under hypotheses 1 and 2, equation (3.1) has a unique strong solution $\{(X^\epsilon_t, Y^\epsilon_t)\}_{t\in[0,T]}$ for every $\epsilon > 0$.

Proof. We use theorem 2.5, after noticing that a linear transformation of the correlated Brownian motions $W, \tilde{W}$ yields independent Brownian motions. $\square$

Now we define the square $d$-dimensional matrix $a(t,x,y) = \sigma(t,x,y)\,\sigma^T(t,x,y)/2$ and we state the ergodic condition.

Hypothesis 3 (Ergodicity). There exist a measurable $d$-dimensional vector $\bar{b}(t,x)$ and a measurable square $d$-dimensional matrix $\bar{\sigma}(t,x)$ such that for every $0 \le t \le T$, $t_0 \ge 0$, $\tau \ge 0$, $x \in \mathbb{R}^d$, $y \in \mathbb{R}^l$
$$ \bigg| \mathbb{E}\,\tau^{-1}\int_{t_0}^{t_0+\tau} b\big(t, x, y^{(t,x,y)}_s\big)\,ds - \bar{b}(t,x) \bigg| \le \bar{K}(\tau)\big(1+|x|^2+|y|^2\big) \tag{3.2} $$
$$ \bigg| \mathbb{E}\,\tau^{-1}\int_{t_0}^{t_0+\tau} a\big(t, x, y^{(t,x,y)}_s\big)\,ds - \bar{\sigma}(t,x)\,\bar{\sigma}^T(t,x)/2 \bigg| \le \bar{K}(\tau)\big(1+|x|^2+|y|^2\big) \tag{3.3} $$
where $\lim_{\tau\to\infty}\bar{K}(\tau) = 0$ and $\{y^{(t,x,y)}_s\}_{s\ge t_0}$ is the unique strong solution of the frozen equation:
$$ dy^{(t,x,y)}_s = B\big(t, x, y^{(t,x,y)}_s\big)\,ds + C\big(t, x, y^{(t,x,y)}_s\big)\,d\tilde{W}_s, \qquad y^{(t,x,y)}_{t_0} = y, \ s \ge t_0 $$
Moreover we assume that the solution of the averaged equation
$$ d\bar{X}_t = \bar{b}(t, \bar{X}_t)\,dt + \bar{\sigma}(t, \bar{X}_t)\,dW_t, \qquad \bar{X}_0 = x_0, \ t \in [0,T] \tag{3.4} $$
is unique in law.


Then we define the matrix $\bar{a}(t,x) = \bar{\sigma}(t,x)\,\bar{\sigma}^T(t,x)/2$.

Remark 3.1.3. $\{y^{(t,x,y)}_s\}_{s\ge t_0}$ is the fast process under the time change $s \to \epsilon s$, with the time component $t$ and the $x$-component frozen. Moreover the frozen equation has a unique strong solution thanks to hypotheses 1 and 2.

Remark 3.1.4. Letting $\tau \to +\infty$ in (3.2), (3.3) we get the following characterizations:
$$ \bar{b}(t,x) = \lim_{\tau\to+\infty} \mathbb{E}\,\tau^{-1}\int_{t_0}^{t_0+\tau} b\big(t,x,y^{(t,x,y)}_s\big)\,ds \qquad \forall y \in \mathbb{R}^l, \ t_0 \ge 0 \tag{3.5} $$
$$ \bar{a}(t,x) = \lim_{\tau\to+\infty} \mathbb{E}\,\tau^{-1}\int_{t_0}^{t_0+\tau} a\big(t,x,y^{(t,x,y)}_s\big)\,ds \qquad \forall y \in \mathbb{R}^l, \ t_0 \ge 0 \tag{3.6} $$

Remark 3.1.5. As for $\bar{\sigma}(t,x)$, usually one first computes the matrix $\bar{a}(t,x)$ as the ergodic limit (3.6) and proves that (3.3) holds. Then the matrix $\bar{\sigma}(t,x)$ can be chosen in the following way: since $a(t,x,y)$ is symmetric and positive semidefinite, $\bar{a}(t,x)$ is also symmetric and positive semidefinite, so $2\bar{a}(t,x)$ has a unique symmetric positive semidefinite square root and we can take $\bar{\sigma}(t,x) = \sqrt{2\bar{a}(t,x)}$, which satisfies $\bar{\sigma}\bar{\sigma}^T/2 = \bar{a}$.

Remark 3.1.6. Observe that the matrix $\bar{\sigma}(t,x)$ satisfying (3.3) need not be unique. In fact, given the symmetric positive semidefinite matrix $\bar{a}(t,x)$, the matrix $\bar{\sigma}(t,x)$ such that $\bar{a}(t,x) = \bar{\sigma}(t,x)\,\bar{\sigma}^T(t,x)/2$ is in general not unique. This means that we could end up with different averaged equations of the form (3.7). Anyway, this is not a problem, because the law of the solution of (3.7) depends on $\bar{\sigma}(t,x)$ only through $\bar{a}(t,x)$; see for example [14, chapter 5].

Let now $B_b(\mathbb{R}^l)$ be the space of bounded measurable functions on $\mathbb{R}^l$.

Remark 3.1.7. If for fixed $t, x$ the transition semigroup $\{P^{(t,x)}_s\}_{s\ge 0}$ of the frozen equation, defined by
$$ P^{(t,x)}_s \phi(y) = \mathbb{E}\big[\phi\big(y^{(t,x,y)}_s\big)\big], \qquad \phi \in B_b(\mathbb{R}^l), \ y \in \mathbb{R}^l $$
has a unique invariant measure $\mu^{(t,x)}$ with a certain rate of decay to the equilibrium, then the vector
$$ \int_{\mathbb{R}^l} b(t,x,y)\,\mu^{(t,x)}(dy) $$
and the matrix
$$ \int_{\mathbb{R}^l} a(t,x,y)\,\mu^{(t,x)}(dy) $$
are the natural candidates for $\bar{b}(t,x)$ and $\bar{a}(t,x)$ respectively. A similar argument can be found in [16].
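As a toy example of this remark (illustrative, not from the thesis), freeze $t, x$ and take the frozen equation to be the Ornstein-Uhlenbeck SDE $dy_s = -y_s\,ds + d\tilde{W}_s$, whose invariant measure is $\mathcal{N}(0, 1/2)$, with $b(t,x,y) = x\cos(y)$. Then the candidate is $\bar{b}(t,x) = x\int\cos(y)\,\mu(dy) = x\,e^{-1/4}$, and the ergodic time average along one long trajectory of the frozen equation approximates it:

```python
import numpy as np

# Frozen equation (illustrative): dy = -y ds + dWtilde, invariant law N(0, 1/2).
# Candidate averaged drift for b(t,x,y) = x*cos(y):
#   bbar(t,x) = x * E[cos(Z)], Z ~ N(0, 1/2)  =>  bbar(t,x) = x * exp(-1/4).
rng = np.random.default_rng(1)
ds, tau = 1e-2, 500.0            # Euler step and averaging horizon
n = int(tau / ds)
x = 2.0                          # frozen slow variable
y = 3.0                          # arbitrary initial condition for the fast variable
acc = 0.0
for _ in range(n):
    acc += x * np.cos(y) * ds    # accumulate b(t, x, y_s) ds along the path
    y += -y * ds + np.sqrt(ds) * rng.normal()
time_avg = acc / tau             # tau^{-1} * integral of b, as in (3.5)
print(time_avg, x * np.exp(-0.25))  # time average vs invariant-measure average
```

The agreement improves as the horizon $\tau$ grows, at the usual ergodic-average rate $O(\tau^{-1/2})$.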

Remark 3.1.8. Thanks to (3.5), (3.6), $\bar{b}, \bar{\sigma}$ satisfy
$$ |\bar{b}(t,x)| \le M(1+|x|), \qquad |\bar{\sigma}(t,x)| \le M(1+|x|) $$


Hypothesis 4. The family of processes $\{Y^\epsilon\}_{\epsilon>0}$ satisfies:
$$ \sup_{\epsilon\in(0,1]}\ \sup_{0\le t\le T} \mathbb{E}\big[ |Y^\epsilon_t|^2 \big] \le C(1+|y_0|^2) $$

We can now introduce the main theorem of this work, which states that the averaging principle holds for system (3.1).

Theorem 3.1. Assume that hypotheses 1, 2, 3, 4 hold. Then $X^\epsilon \to \bar{X}$ weakly in $C[0,T]^d$ as $\epsilon \to 0$, where the process $\bar{X}$ is the solution of the averaged equation
$$ d\bar{X}_t = \bar{b}(t,\bar{X}_t)\,dt + \bar{\sigma}(t,\bar{X}_t)\,dW_t, \qquad \bar{X}_0 = x_0, \ t \in [0,T] \tag{3.7} $$

Remark 3.1.9. We recall that $C[0,T]^d$ is a complete metric space with metric
$$ \rho(y_1, y_2) = \sup_{t\in[0,T]} |y_2(t) - y_1(t)|, \qquad y_1, y_2 \in C[0,T]^d $$

3.2 Proof of the Main Theorem

To prove the theorem we start by constructing an auxiliary process obtained by a discretization in time: fix $\delta \in (0,T]$ and let $K = 0, \dots, \lfloor T/\delta \rfloor$:
$$ \begin{cases} d\hat{X}^\epsilon_t = b(K\delta, X^\epsilon_{K\delta}, \hat{Y}^\epsilon_t)\,dt + \sigma(K\delta, X^\epsilon_{K\delta}, \hat{Y}^\epsilon_t)\,dW_t, \ t \in [K\delta, (K+1)\delta), & \hat{X}^\epsilon_0 = x_0 \\ d\hat{Y}^\epsilon_t = \epsilon^{-1} B(K\delta, X^\epsilon_{K\delta}, \hat{Y}^\epsilon_t)\,dt + \epsilon^{-1/2} C(K\delta, X^\epsilon_{K\delta}, \hat{Y}^\epsilon_t)\,d\tilde{W}_t, \ t \in [K\delta, (K+1)\delta), & \hat{Y}^\epsilon_{K\delta} = Y^\epsilon_{K\delta} \end{cases} $$
and we choose the length of the intervals $\delta = \delta(\epsilon) = \epsilon\,(\log \epsilon^{-1})^{1/4}$, so that as $\epsilon \to 0$ we have $\delta \to 0$ and $\delta/\epsilon \to +\infty$.
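The two requirements $\delta \to 0$ and $\delta/\epsilon \to +\infty$ can be checked directly for this choice, since $\delta/\epsilon = (\log\epsilon^{-1})^{1/4}$. A quick numerical sanity check (illustrative):

```python
import math

def delta(eps):
    # Khasminskii-type interval length: delta(eps) = eps * (log(1/eps))^{1/4}
    return eps * math.log(1.0 / eps) ** 0.25

for eps in (1e-2, 1e-4, 1e-8, 1e-16):
    d = delta(eps)
    # delta shrinks with eps while delta/eps = (log 1/eps)^{1/4} grows
    print(eps, d, d / eps)
```

The ratio grows very slowly (only like a power of the logarithm), which is exactly what the Gronwall estimate below can tolerate.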

We now prove the following a priori estimates:

Lemma 3.2.1. Let $p \ge 2$. Then there exists a constant $C = C(p,T) > 0$ such that for every $\epsilon > 0$
$$ \mathbb{E}\Big[ \sup_{t\in[0,T]} |X^\epsilon_t|^p \Big] \le C(1+|x_0|^p) < \infty \tag{3.8} $$
$$ \mathbb{E}\Big[ \sup_{t\in[0,T]} |\hat{X}^\epsilon_t|^p \Big] \le C(1+|x_0|^p) < \infty \tag{3.9} $$

Proof. Fix $R > 0$ and consider the stopping time $\tau_R = \inf\{t > 0 : |X^\epsilon_t| \ge R\}$. Define $X^\epsilon_R(t) = X^\epsilon_{\tau_R \wedge t}$; then:
$$ X^\epsilon_R(t) = x_0 + \int_0^t b(r, X^\epsilon_R(r), Y^\epsilon_r)\,\mathbf{1}_{r\le\tau_R}\,dr + \int_0^t \sigma(r, X^\epsilon_R(r), Y^\epsilon_r)\,\mathbf{1}_{r\le\tau_R}\,dW_r $$


Then we have that:
$$ \mathbb{E}\Big[ \sup_{s\in[0,t]} |X^\epsilon_R(s)|^p \Big] \le 3^{p-1}|x_0|^p + 3^{p-1}\,\mathbb{E}\bigg[ \sup_{s\in[0,t]} \bigg| \int_0^s b(r, X^\epsilon_R(r), Y^\epsilon_r)\,\mathbf{1}_{r\le\tau_R}\,dr \bigg|^p \bigg] + 3^{p-1}\,\mathbb{E}\bigg[ \sup_{s\in[0,t]} \bigg| \int_0^s \sigma(r, X^\epsilon_R(r), Y^\epsilon_r)\,\mathbf{1}_{r\le\tau_R}\,dW_r \bigg|^p \bigg] $$
By Hölder's inequality and sublinearity we obtain the estimate for the second term on the right hand side:
$$ \mathbb{E}\bigg[ \sup_{s\in[0,t]} \bigg| \int_0^s b(r, X^\epsilon_R(r), Y^\epsilon_r)\,\mathbf{1}_{r\le\tau_R}\,dr \bigg|^p \bigg] \le t^{p-1}\,\mathbb{E}\int_0^t \big| b(r, X^\epsilon_R(r), Y^\epsilon_r) \big|^p\,dr \le T^{p-1} M^p\,\mathbb{E}\int_0^t \big( 1 + |X^\epsilon_R(r)| \big)^p\,dr $$
By theorem 2.4 and sublinearity we get the estimate for the third term on the right hand side:
$$ \mathbb{E}\bigg[ \sup_{s\in[0,t]} \bigg| \int_0^s \sigma(r, X^\epsilon_R(r), Y^\epsilon_r)\,\mathbf{1}_{r\le\tau_R}\,dW_r \bigg|^p \bigg] \le C_p\,t^{(p-2)/2}\,\mathbb{E}\int_0^t \big| \sigma(r, X^\epsilon_R(r), Y^\epsilon_r) \big|^p\,dr \le C_p\,T^{(p-2)/2} M^p\,\mathbb{E}\int_0^t \big( 1 + |X^\epsilon_R(r)| \big)^p\,dr $$
Putting everything together we obtain:
$$ \mathbb{E}\Big[ \sup_{s\in[0,t]} |X^\epsilon_R(s)|^p \Big] \le 3^{p-1}|x_0|^p + C_1\,\mathbb{E}\int_0^t \big( 1 + |X^\epsilon_R(r)| \big)^p\,dr \le C_2(1+|x_0|^p) + C_3 \int_0^t \mathbb{E}\Big[ \sup_{s\in[0,r]} |X^\epsilon_R(s)|^p \Big]\,dr $$
where the constants $C_1, C_2, C_3$ are independent of $\epsilon$.

Defining
$$ v(t) = \mathbb{E}\Big[ \sup_{s\in[0,t]} |X^\epsilon_R(s)|^p \Big] $$
we notice that:
$$ v(t) \le C_2(1+|x_0|^p) + C_3 \int_0^t v(r)\,dr $$
We can apply Gronwall's lemma because $v(t) \le \max(R^p, |x_0|^p) < +\infty$, and we obtain:
$$ v(T) = \mathbb{E}\Big[ \sup_{s\in[0,T]} |X^\epsilon_R(s)|^p \Big] \le C_2(1+|x_0|^p)\,e^{T C_3} = C(1+|x_0|^p) \tag{3.10} $$

where the constant $C$ is independent of $\epsilon$.

Letting $R \to +\infty$ we see that $\tau_R \to +\infty$ $\mathbb{P}$-a.s., and this implies:
$$ \lim_{R\to+\infty}\ \sup_{s\in[0,T]} |X^\epsilon_R(s)|^p = \sup_{s\in[0,T]} |X^\epsilon(s)|^p \qquad \text{a.s.} $$
Applying Beppo Levi's theorem to (3.10) we get (3.8). Using (3.8) it is easy to prove (3.9). $\square$

We now prove the following lemma:

Lemma 3.2.2. Let $p \ge 2$. Then there exists a constant $C = C(p,T) > 0$ such that for every $\epsilon > 0$
$$ \mathbb{E}\big[ |\hat{X}^\epsilon_{t_2} - \hat{X}^\epsilon_{t_1}|^p \big] \le C|t_2-t_1|^{p/2} \tag{3.11} $$
$$ \mathbb{E}\big[ |X^\epsilon_{t_2} - X^\epsilon_{t_1}|^p \big] \le C|t_2-t_1|^{p/2} \tag{3.12} $$
for every $t_1, t_2 \in [0,T]$.

Proof. Let $t_1 \le t_2 \in [0,T]$; by the usual calculations we have:
$$ \begin{aligned} \mathbb{E}\big[ |\hat{X}^\epsilon_{t_2} - \hat{X}^\epsilon_{t_1}|^p \big] &\le 2^{p-1}\,\mathbb{E}\bigg[ \bigg| \int_{t_1}^{t_2} b\big( \lfloor s/\delta \rfloor\delta, X^\epsilon_{\lfloor s/\delta \rfloor\delta}, \hat{Y}^\epsilon_s \big)\,ds \bigg|^p \bigg] + 2^{p-1}\,\mathbb{E}\bigg[ \bigg| \int_{t_1}^{t_2} \sigma\big( \lfloor s/\delta \rfloor\delta, X^\epsilon_{\lfloor s/\delta \rfloor\delta}, \hat{Y}^\epsilon_s \big)\,dW_s \bigg|^p \bigg] \\ &\le 2^{p-1}|t_2-t_1|^{p-1}\,\mathbb{E}\int_{t_1}^{t_2} M^p\big( 1 + |X^\epsilon_{\lfloor s/\delta \rfloor\delta}| \big)^p\,ds + 2^{p-1} C_p\,|t_2-t_1|^{(p-2)/2}\,\mathbb{E}\int_{t_1}^{t_2} M^p\big( 1 + |X^\epsilon_{\lfloor s/\delta \rfloor\delta}| \big)^p\,ds \end{aligned} $$
where $C_p$ is independent of $\epsilon$.

Finally, using (3.8) we get (3.11); in fact:
$$ \mathbb{E}\big[ |\hat{X}^\epsilon_{t_2} - \hat{X}^\epsilon_{t_1}|^p \big] \le C_1|t_2-t_1|^{p-1}|t_2-t_1| + C_2|t_2-t_1|^{(p-2)/2}|t_2-t_1| \le C|t_2-t_1|^{p/2} $$
where the constants $C, C_1, C_2$ are independent of $\epsilon$.

In the same way it is possible to prove (3.12). $\square$


We are now ready to prove theorem 3.1.

Proof. We start by showing the following relation:
$$ \lim_{\epsilon\to 0} \mathbb{E}\Big[ \sup_{t\in[0,T]} |X^\epsilon_t - \hat{X}^\epsilon_t|^2 \Big] = 0 \tag{3.13} $$
which means we can work with $\hat{X}^\epsilon_t$ instead of $X^\epsilon_t$.

In order to do this we first prove:
$$ \lim_{\epsilon\to 0}\ \sup_{x_0, y_0}\ \sup_{t\in[0,T]} \mathbb{E}\big[ |Y^\epsilon_t - \hat{Y}^\epsilon_t|^2 \big] = 0 \tag{3.14} $$

Let’s fix t 2 [K , (K + 1) ), then: Ö[|Yt✏ ˆYt✏|2]  2Ö " ✏ 2 æt K B(s, X✏s, Ys✏) B(K , X✏K , ˆYs✏)ds 2# +2Ö " ✏ 1 æt K C(s, X✏s, Ys✏) C(K , X✏K , ˆYs✏)d ˜Ws 2# ⇤2(I✏ 1 + I✏2)

First we consider the term I✏

1. By Holder’s inequality and hypothesis 1 we have:

I✏1  ✏ 2 Ö " æt K B(s, X✏s, Ys✏) B(K , X✏K , ˆYs✏)2ds #  3✏ 2 Ö " æt K |B(s, X ✏ s, Ys✏) B(K , X✏s, Ys✏)|2ds # +3✏ 2 Ö " æt K B(K , X✏s, Ys✏) B(K , X✏K , Ys✏)2ds # +3✏ 2 Ö " æt K B(K , X✏K , Ys✏) B(K , X✏K , ˆYs✏)2ds #  3L2 ✏ 2 æt K (s K ) 2 +Öh X✏ s X✏K 2i + Öh Ys✏ ˆYs✏2ids An analogous estimate can be obtained for the term I✏

(30)

hy-� The Averaging Principle for a Class of Stochastic Differential Equations pothesis 1: I✏2 ⇤ ✏ 1Ö " æt K C(s, X✏s, Ys✏) C(K , X✏K , ˆYs✏)d ˜Ws 2# ⇤ ✏ 1Ö " æt K C(s, X✏s, Ys✏) C(K , X✏K , ˆYs✏)2ds #  3L2✏ 1 æt K (s K ) 2 +Öh X✏ s X✏K 2i + Ö h Ys✏ ˆYs✏2 i ds

Putting everything together we have: Ö[|Yt✏ ˆYt✏|2]  6L2( ✏ 2+ ✏ 1) æt K (s K ) 2 +ÖhX✏ s X✏K 2i + ÖhYs✏ ˆYs✏2 i ds

By lemma 3.2.2, and since $t \in [K\delta, (K+1)\delta)$, we have:
$$ \mathbb{E}\big[ |Y^\epsilon_t - \hat{Y}^\epsilon_t|^2 \big] \le 6L^2(\delta\epsilon^{-2} + \epsilon^{-1}) \int_{K\delta}^t \Big( \delta^{2\alpha} + C\delta + \mathbb{E}\big[ |Y^\epsilon_s - \hat{Y}^\epsilon_s|^2 \big] \Big)\,ds \le 6L^2(\delta\epsilon^{-2} + \epsilon^{-1})\big( \delta^{1+2\alpha} + C\delta^2 \big) + 6L^2(\delta\epsilon^{-2} + \epsilon^{-1}) \int_{K\delta}^t \mathbb{E}\big[ |Y^\epsilon_s - \hat{Y}^\epsilon_s|^2 \big]\,ds $$
Recalling that $\delta = \epsilon(\log\epsilon^{-1})^{1/4}$, we use Gronwall's lemma:
$$ \mathbb{E}\big[ |Y^\epsilon_t - \hat{Y}^\epsilon_t|^2 \big] \le 6L^2(\delta\epsilon^{-2} + \epsilon^{-1})\big( \delta^{1+2\alpha} + C\delta^2 \big) \exp\!\big( 6L^2(\delta^2\epsilon^{-2} + \delta\epsilon^{-1}) \big) \tag{3.15} $$
which tends to zero as $\epsilon \to 0$. This implies (3.14).

We can now prove (3.13):
$$ \mathbb{E}\Big[ \sup_{t\in[0,T]} |X^\epsilon_t - \hat{X}^\epsilon_t|^2 \Big] \le 2\,\mathbb{E}\Bigg[ \sup_{t\in[0,T]} \Bigg| \sum_{K=0}^{\lfloor t/\delta \rfloor - 1} \int_{K\delta}^{(K+1)\delta} \big( b(K\delta, X^\epsilon_{K\delta}, \hat{Y}^\epsilon_s) - b(s, X^\epsilon_s, Y^\epsilon_s) \big)\,ds \Bigg|^2 \Bigg] + 2\,\mathbb{E}\Bigg[ \sup_{t\in[0,T]} \Bigg| \sum_{K=0}^{\lfloor t/\delta \rfloor - 1} \int_{K\delta}^{(K+1)\delta} \big( \sigma(K\delta, X^\epsilon_{K\delta}, \hat{Y}^\epsilon_s) - \sigma(s, X^\epsilon_s, Y^\epsilon_s) \big)\,dW_s \Bigg|^2 \Bigg] = 2(E^\epsilon_1 + E^\epsilon_2) $$

We consider the first term on the right hand side, $E^\epsilon_1$.


By Holder’s inequality and hypothesis 1 we get: E✏1  Ö "bT/ c 1 K⇤0 æ(K+1) K b(K , X✏K , ˆYs✏) b(s, X✏s, Ys✏)2ds #  3 Ö "bT/ c 1 K⇤0 æ(K+1) K b(K , X✏K , ˆYs✏) b(s, X✏K , ˆYs✏)2ds # +3 Ö "bT/ c 1 K⇤0 æ(K+1) K b(s, X✏K , ˆYs✏) b(s, X✏s, ˆYs✏)2ds # +3 Ö "bT/ c 1 K⇤0 æ(K+1) K b(s, X✏s, ˆYs✏) b(s, X✏s, Ys✏)2ds #  3 L2 bT/ c 1’ K⇤0 æ(K+1) K (s K ) 2 +Öh X✏ s X✏K 2i + Ö h Ys✏ ˆYs✏2 i ds

By (3.15) and lemma 3.2.2 it implies E✏

1 ! 0 as ✏ ! 0.

In a similar way we treat the term E✏

2. By Doob’s inequality and Lipschitzianity we get:

E✏2  4Ö "bT/ c 1 K⇤0 æ(K+1) K (K , X ✏ K , ˆYs✏) (s, X✏s, Ys✏) 2 ds #  12L2 bT/ c 1’ K⇤0 æ(K+1) K (s K ) 2 +Öh X✏ s X✏K 2i + Ö h Ys✏ ˆYs✏2 i ds

from which it follows E✏

2 ! 0 as ✏ ! 0.

This implies (3.13).

Now we want to apply theorem 2.3 to the family of processes $\{(\widehat X^\epsilon, W, \widetilde W)\}_\epsilon$ to show that it is tight. Until now we have considered processes on a fixed time horizon $T$. To apply the theorem we extend them from $[0,T]$ to $[0,+\infty)$, for example by setting $Z_t(\omega) = Z_T(\omega)$ for $\omega\in\Omega$, $t\ge T$, where $Z$ is any of the processes considered. In this way the trajectories remain continuous, and $Z$ can be seen as a random variable with values in $(C([0,+\infty))^d, \mathcal B(C([0,+\infty))^d))$.

We show that the hypotheses of theorem 2.3 hold:

(i) $\sup_\epsilon \mathbb{E}\big[|(\widehat X^\epsilon_0, W_0, \widetilde W_0)|^2\big] = |x_0|^2 < +\infty$, since $(\widehat X^\epsilon_0, W_0, \widetilde W_0) = (x_0, 0, 0)$;

(ii) since $\mathbb{E}\big[|W_t - W_s|^4\big] \le C|t-s|^2$ for every $0\le s,t\le T$ and by (3.11), we have
\[
\sup_\epsilon \mathbb{E}\big[|(\widehat X^\epsilon_t, W_t, \widetilde W_t) - (\widehat X^\epsilon_s, W_s, \widetilde W_s)|^4\big] \le C_T |t-s|^2 \quad\text{for every } 0\le s,t\le T.
\]
This implies that the family $\{(\widehat X^\epsilon, W, \widetilde W)\}_\epsilon$ is tight and hence weakly compact. This means that any sequence $\{(\widehat X^{\epsilon_n}, W, \widetilde W)\}_{\epsilon_n}$ has a subsequence $\{(\widehat X^{\epsilon_{n'}}, W, \widetilde W)\}_{\epsilon_{n'}}$ converging weakly in $C([0,+\infty))^{2d+l}$. If we prove that the limit is unique, then the whole family $\{(\widehat X^\epsilon, W, \widetilde W)\}_\epsilon$ converges weakly in $C([0,+\infty))^{2d+l}$ by [2, theorem 2.6].

We take an arbitrary subsequence $\{(\widehat X^{\epsilon_{n'}}, W, \widetilde W)\}_{\epsilon_{n'}}$ converging weakly in $C([0,+\infty))^{2d+l}$ to a certain process $(\widehat X^0, W, \widetilde W)$. Then by (3.13) the family $\{(X^{\epsilon_{n'}}, \widehat X^{\epsilon_{n'}}, W, \widetilde W)\}_{n'}$ converges weakly in $C([0,+\infty))^{3d+l}$ to $(\widehat X^0, \widehat X^0, W, \widetilde W)$, and by Skorokhod's theorem we can find a new probability space $(\Omega^*, \mathcal F^*, \mathbb P^*)$ and random variables $\{(x^{\epsilon_{n'}}, \widehat x^{\epsilon_{n'}}, w^{\epsilon_{n'}}, \widetilde w^{\epsilon_{n'}})\}_{\epsilon_{n'}}$ defined on it, with values in $C([0,+\infty))^{3d+l}$, that have the same law as $\{(X^{\epsilon_{n'}}, \widehat X^{\epsilon_{n'}}, W, \widetilde W)\}_{\epsilon_{n'}}$ and such that $\{(x^{\epsilon_{n'}}, \widehat x^{\epsilon_{n'}}, w^{\epsilon_{n'}}, \widetilde w^{\epsilon_{n'}})\}_{\epsilon_{n'}}$ converges a.s. to a random variable $(\widehat x^0, \widehat x^0, w, \widetilde w)$ with values in $C([0,+\infty))^{3d+l}$ that has the same law as $(\widehat X^0, \widehat X^0, W, \widetilde W)$.

For simplicity we rename the a.s. converging subsequence $\{(x^{\epsilon_{n'}}, \widehat x^{\epsilon_{n'}}, w^{\epsilon_{n'}}, \widetilde w^{\epsilon_{n'}})\}_{\epsilon_{n'}}$ as $\{(x^\epsilon, \widehat x^\epsilon, w^\epsilon, \widetilde w^\epsilon)\}_\epsilon$.

Now we want to apply theorem 2.6 to obtain the thesis. We consider the probability measure $P$ induced by $\widehat x^0$ on $(C([0,+\infty))^d, \mathcal B(C([0,+\infty))^d))$ and the filtration $\mathcal F_t$ of definition 2.4.1. We then consider $\mathcal F'_t = \sigma\{w_s, \widetilde w_s,\ s\le t\}$ and we show that the process $\{m^{x_i}_t\}_{t\ge 0}$ defined by
\[
m^{x_i}_t = \widehat x^{0,i}_t - \widehat x^{0,i}_0 - \int_0^t \bar b_i(s, \widehat x^0_s)\,ds
\]
is a continuous local martingale with respect to $\mathcal F'_t$, $t\ge 0$, for every $i\le d$. We fix the component $i\le d$ and we notice that for all $t\in[K\Delta,(K+1)\Delta)$
\[
\widehat x^{\epsilon,i}_t - \widehat x^{\epsilon,i}_{K\Delta} - \int_{K\Delta}^t b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s)\,ds = \int_{K\Delta}^t \big[\sigma(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s)\,dw_s\big]_i \tag{3.16}
\]
and that $\sigma(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) \in M^2[0,T]$ (by sublinearity and (3.9)). This implies that the process
\[
\widehat x^{\epsilon,i}_t - \widehat x^{\epsilon,i}_0 - \sum_{K=0}^{\lfloor t/\Delta\rfloor-1} \int_{K\Delta}^{(K+1)\Delta} b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s)\,ds
\]
is a continuous local $\mathcal F'_t$-martingale. We need to pass this property to the limit as $\epsilon\to 0$. Now we take $t_1\le t_2$ and we show that:

\[
\lim_{\epsilon\to 0} \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K=\lceil t_1/\Delta\rceil}^{\lfloor t_2/\Delta\rfloor-1} \int_{K\Delta}^{(K+1)\Delta} b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) - \bar b_i(K\Delta, x^\epsilon_{K\Delta})\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] = 0 \tag{3.17}
\]
We define $\widetilde Y^\epsilon_t$ as the time-changed process $\widetilde Y^\epsilon_t = \widehat Y^\epsilon_{t\epsilon}$ and we observe that, conditionally on $\mathcal F'_{K\Delta}$, the process $\widetilde Y^\epsilon_t$ for $t\in[K\Delta/\epsilon,(K+1)\Delta/\epsilon)$ has the same law as the process $y^{(K\Delta,x_K,y_K)}_t$ defined in hypothesis 3, where $x_K = x^\epsilon_{K\Delta}$ and $y_K = \widehat Y^\epsilon_{K\Delta}$.

By hypotheses 3, 4 and by (3.8) we obtain (3.17); in fact:
\[
\begin{aligned}
&\mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K=\lceil t_1/\Delta\rceil}^{\lfloor t_2/\Delta\rfloor-1} \int_{K\Delta}^{(K+1)\Delta} b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) - \bar b_i(K\Delta, x^\epsilon_{K\Delta})\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] \\
&\quad= \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K} \mathbb{E}\bigg[\epsilon \int_{K\Delta/\epsilon}^{(K+1)\Delta/\epsilon} b_i(K\Delta, x^\epsilon_{K\Delta}, \widetilde Y^\epsilon_s) - \bar b_i(K\Delta, x^\epsilon_{K\Delta})\,ds \,\bigg|\, \mathcal F'_{K\Delta}\bigg] \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] \\
&\quad= \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K} \mathbb{E}\bigg[\epsilon \int_{K\Delta/\epsilon}^{(K+1)\Delta/\epsilon} b_i(K\Delta, x_K, y^{(K\Delta,x_K,y_K)}_s) - \bar b_i(K\Delta, x_K)\,ds \,\bigg|\, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_{K\Delta}\bigg] \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] \\
&\quad\le \mathbb{E}\Bigg[\mathbb{E}\bigg[\sum_{K} \Delta\,\bar K(\Delta/\epsilon)\big(1 + |x^\epsilon_{K\Delta}|^2 + |\widehat Y^\epsilon_{K\Delta}|^2\big) \,\bigg|\, \mathcal F'_{t_1}\bigg]\Bigg] = \sum_{K} \Delta\,\bar K(\Delta/\epsilon)\big(1 + \mathbb{E}\big[|x^\epsilon_{K\Delta}|^2\big] + \mathbb{E}\big[|\widehat Y^\epsilon_{K\Delta}|^2\big]\big) \le C\,\bar K(\Delta/\epsilon) \xrightarrow[\Delta/\epsilon\to+\infty]{\epsilon\to 0} 0
\end{aligned}
\]
By dominated convergence for the second term on the right-hand side (it can be applied by sublinearity and by (3.9)) it follows that
\[
\begin{aligned}
&\lim_{\epsilon\to 0} \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K=\lceil t_1/\Delta\rceil}^{\lfloor t_2/\Delta\rfloor-1} \int_{K\Delta}^{(K+1)\Delta} b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) - \bar b_i(s, \widehat x^0_s)\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] \\
&\quad\le \lim_{\epsilon\to 0} \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K} \int_{K\Delta}^{(K+1)\Delta} b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) - \bar b_i(K\Delta, x^\epsilon_{K\Delta})\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] + \lim_{\epsilon\to 0} \mathbb{E}\Bigg[\sum_{K} \int_{K\Delta}^{(K+1)\Delta} \big|\bar b_i(K\Delta, x^\epsilon_{K\Delta}) - \bar b_i(s, \widehat x^0_s)\big|\,ds\Bigg] = 0
\end{aligned}
\]

From this, and by dominated convergence for uniformly integrable functions applied to the term $\widehat x^{\epsilon,i}_{t_2} - \widehat x^{\epsilon,i}_{t_1}$, we have:
\[
\lim_{\epsilon\to 0} \mathbb{E}\bigg[\widehat x^{\epsilon,i}_{t_2} - \widehat x^{\epsilon,i}_{t_1} - \sum_{K=\lceil t_1/\Delta\rceil}^{\lfloor t_2/\Delta\rfloor-1} \int_{K\Delta}^{(K+1)\Delta} b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s)\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg] = \mathbb{E}\bigg[\widehat x^{0,i}_{t_2} - \widehat x^{0,i}_{t_1} - \int_{t_1}^{t_2} \bar b_i(s, \widehat x^0_s)\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg]
\]
where the convergence as $\epsilon\to 0$ is in $L^1(\Omega)$. By (3.16) the term on the left-hand side
\[
\widehat x^{\epsilon,i}_t - \widehat x^{\epsilon,i}_0 - \sum_{K=0}^{\lfloor t/\Delta\rfloor-1} \int_{K\Delta}^{(K+1)\Delta} b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s)\,ds
\]
is a sum of stochastic integrals. This implies, for all $t_1 \le t_2$,
\[
\mathbb{E}\bigg[\widehat x^{0,i}_{t_2} - \widehat x^{0,i}_{t_1} - \int_{t_1}^{t_2} \bar b_i(s, \widehat x^0_s)\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg] = 0 \quad \mathbb P^*\text{-a.s.}
\]
and it follows:
\[
\mathbb{E}\bigg[\widehat x^{0,i}_{t_2} - \int_0^{t_2} \bar b_i(s, \widehat x^0_s)\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg] = \widehat x^{0,i}_{t_1} - \int_0^{t_1} \bar b_i(s, \widehat x^0_s)\,ds \quad \mathbb P^*\text{-a.s.}
\]
This implies that $\{m^{x_i}_t\}_{t\ge 0}$ is a continuous local $\mathcal F'_t$-martingale.

Moreover $\mathcal F'_t \supseteq \mathcal F^{\widehat x^0}_t = \sigma\{\widehat x^0_s,\ s\le t\}$: in fact the process $\widehat x^0_t$ is $\mathcal F'_t$-measurable, since it is the a.s. limit of the family $\{\widehat x^\epsilon_t\}_\epsilon$, which is $\mathcal F'_t$-measurable.

As in the section on the Martingale Problem it follows that the process $\{m^{x_i}_t\}_{t\ge 0}$ is a continuous local $\mathcal F^{\widehat x^0}_t$-martingale, and this implies that the process $\{M^{x_i}_t\}_{t\ge 0}$, defined on $(C([0,+\infty))^d, \mathcal B(C([0,+\infty))^d))$ with values in $\mathbb R$, is a continuous local martingale with respect to $\mathcal F_t$ for every $i\le d$. Here $\{M^{x_i}_t\}_{t\ge 0}$ is obtained through the coordinate process of $\widehat x^0$ in a similar way as we did for (2.12).

Now we show that the following process is a continuous local martingale with respect to $\mathcal F'_t$, $t\ge 0$, for all $i,j\le d$:
\[
m^{x_i x_j}_t = \widehat x^{0,i}_t \widehat x^{0,j}_t - \widehat x^{0,i}_0 \widehat x^{0,j}_0 - \int_0^t \bar a_{ij}(s, \widehat x^0_s) + \widehat x^{0,i}_s \bar b_j(s, \widehat x^0_s) + \widehat x^{0,j}_s \bar b_i(s, \widehat x^0_s)\,ds
\]
By Itô's formula we have:
\[
\begin{aligned}
\widehat x^{\epsilon,i}_t \widehat x^{\epsilon,j}_t - \widehat x^{\epsilon,i}_0 \widehat x^{\epsilon,j}_0 &= \int_0^t a_{ij}(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) + \widehat x^{\epsilon,i}_s b_j(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) + \widehat x^{\epsilon,j}_s b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s)\,ds \\
&\quad + \int_0^t \widehat x^{\epsilon,i}_s \big[\sigma(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s)\,dw_s\big]_j + \int_0^t \widehat x^{\epsilon,j}_s \big[\sigma(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s)\,dw_s\big]_i
\end{aligned}
\]
where inside the integrals $K = \lfloor s/\Delta\rfloor$. As before, since $\widehat x^{\epsilon,i}_s\, \sigma(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) \in M^2[0,T]$, the following process is a continuous local $\mathcal F'_t$-martingale, $t\ge 0$:
\[
\widehat x^{\epsilon,i}_t \widehat x^{\epsilon,j}_t - \widehat x^{\epsilon,i}_0 \widehat x^{\epsilon,j}_0 - \sum_{K=0}^{\lfloor t/\Delta\rfloor-1} \int_{K\Delta}^{(K+1)\Delta} a_{ij}(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) + \widehat x^{\epsilon,i}_s b_j(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) + \widehat x^{\epsilon,j}_s b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s)\,ds
\]
We need to pass this property to the limit process as $\epsilon\to 0$. With the same calculations as before we are able to show that, for all $i,j\le d$ and $0\le t_1\le t_2$,
\[
\lim_{\epsilon\to 0} \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K=\lceil t_1/\Delta\rceil}^{\lfloor t_2/\Delta\rfloor-1} \int_{K\Delta}^{(K+1)\Delta} a_{ij}(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) - \bar a_{ij}(s, \widehat x^0_s)\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] = 0 \tag{3.18}
\]

\[
\lim_{\epsilon\to 0} \mathbb{E}\Big[\big|\mathbb{E}\big[\widehat x^{\epsilon,i}_t \widehat x^{\epsilon,j}_t - \widehat x^{0,i}_t \widehat x^{0,j}_t \,\big|\, \mathcal F'_{t_1}\big]\big|\Big] = 0 \tag{3.19}
\]
Now we treat the last remaining term and we show that, for all $i,j\le d$, $0\le t_1\le t_2$,
\[
\lim_{\epsilon\to 0} \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K=\lceil t_1/\Delta\rceil}^{\lfloor t_2/\Delta\rfloor-1} \int_{K\Delta}^{(K+1)\Delta} \widehat x^{\epsilon,j}_s b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) - \widehat x^{0,j}_s \bar b_i(s, \widehat x^0_s)\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] = 0 \tag{3.20}
\]
First we prove:
\[
\lim_{\epsilon\to 0} \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K=\lceil t_1/\Delta\rceil}^{\lfloor t_2/\Delta\rfloor-1} \int_{K\Delta}^{(K+1)\Delta} \widehat x^{\epsilon,j}_s \big(b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) - \bar b_i(K\Delta, x^\epsilon_{K\Delta})\big)\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] = 0 \tag{3.21}
\]
In fact we have:
\[
\begin{aligned}
&\mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K} \int_{K\Delta}^{(K+1)\Delta} \widehat x^{\epsilon,j}_s \big(b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) - \bar b_i(K\Delta, x^\epsilon_{K\Delta})\big)\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] \\
&\quad= \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K} \mathbb{E}\bigg[\epsilon \int_{K\Delta/\epsilon}^{(K+1)\Delta/\epsilon} \widehat x^{\epsilon,j}_{s\epsilon} \big(b_i(K\Delta, x^\epsilon_{K\Delta}, \widetilde Y^\epsilon_s) - \bar b_i(K\Delta, x^\epsilon_{K\Delta})\big)\,ds \,\bigg|\, \mathcal F'_{K\Delta}\bigg] \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] \\
&\quad= \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K} \mathbb{E}\bigg[\epsilon \int_{K\Delta/\epsilon}^{(K+1)\Delta/\epsilon} \widehat x^{\epsilon,j}_{s\epsilon} \big(b_i(K\Delta, x_K, y^{(K\Delta,x_K,y_K)}_s) - \bar b_i(K\Delta, x_K)\big)\,ds \,\bigg|\, \mathcal F'_{K\Delta}\bigg] \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] \\
&\quad\le \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K} \mathbb{E}\bigg[\epsilon \int_{K\Delta/\epsilon}^{(K+1)\Delta/\epsilon} \big(\widehat x^{\epsilon,j}_{s\epsilon} - x^j_K\big) \big(b_i(K\Delta, x_K, y^{(K\Delta,x_K,y_K)}_s) - \bar b_i(K\Delta, x_K)\big)\,ds \,\bigg|\, \mathcal F'_{K\Delta}\bigg] \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] \\
&\qquad + \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K} \mathbb{E}\bigg[\epsilon \int_{K\Delta/\epsilon}^{(K+1)\Delta/\epsilon} x^j_K \big(b_i(K\Delta, x_K, y^{(K\Delta,x_K,y_K)}_s) - \bar b_i(K\Delta, x_K)\big)\,ds \,\bigg|\, \mathcal F'_{K\Delta}\bigg] \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg]
\end{aligned}
\]
We show that the first term on the right-hand side tends to zero. By Hölder's inequality:
\[
\begin{aligned}
&\mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K} \mathbb{E}\bigg[\epsilon \int_{K\Delta/\epsilon}^{(K+1)\Delta/\epsilon} \big(\widehat x^{\epsilon,j}_{s\epsilon} - x^j_K\big) \big(b_i(K\Delta, x_K, y^{(K\Delta,x_K,y_K)}_s) - \bar b_i(K\Delta, x_K)\big)\,ds \,\bigg|\, \mathcal F'_{K\Delta}\bigg] \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] \\
&\quad\le \mathbb{E}\Bigg[\sum_{K} \epsilon \int_{K\Delta/\epsilon}^{(K+1)\Delta/\epsilon} \big|\widehat x^{\epsilon,j}_{s\epsilon} - x^{\epsilon,j}_{K\Delta}\big|\,\big|b_i(K\Delta, x^\epsilon_{K\Delta}, \widetilde Y^\epsilon_s) - \bar b_i(K\Delta, x^\epsilon_{K\Delta})\big|\,ds\Bigg] \\
&\quad\le \sum_{K} \mathbb{E}\bigg[\int_{K\Delta/\epsilon}^{(K+1)\Delta/\epsilon} 2\epsilon^2 \big|\widehat x^{\epsilon,j}_{s\epsilon} - \widehat x^{\epsilon,j}_{K\Delta}\big|^2 + 2\epsilon^2 \big|\widehat x^{\epsilon,j}_{K\Delta} - x^{\epsilon,j}_{K\Delta}\big|^2\,ds\bigg]^{1/2} \mathbb{E}\bigg[\int_{K\Delta/\epsilon}^{(K+1)\Delta/\epsilon} \big|b_i(K\Delta, x^\epsilon_{K\Delta}, \widetilde Y^\epsilon_s) - \bar b_i(K\Delta, x^\epsilon_{K\Delta})\big|^2\,ds\bigg]^{1/2} \\
&\quad\le \sum_{K} \bigg(\int_{K\Delta/\epsilon}^{(K+1)\Delta/\epsilon} \epsilon^2 C\Delta + 2\epsilon^2\,\mathbb{E}\big[|\widehat x^{\epsilon,j}_{K\Delta} - x^{\epsilon,j}_{K\Delta}|^2\big]\,ds\bigg)^{1/2} \bigg(\int_{K\Delta/\epsilon}^{(K+1)\Delta/\epsilon} C_1\,ds\bigg)^{1/2} \xrightarrow{\epsilon\to 0} 0
\end{aligned}
\]
where $C, C_1$ are independent of $\epsilon$. Now we show that the second term on the right-hand side also goes to zero, by hypotheses 3, 4 and by (3.8):
\[
\begin{aligned}
&\mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K} \mathbb{E}\bigg[\epsilon \int_{K\Delta/\epsilon}^{(K+1)\Delta/\epsilon} x^j_K \big(b_i(K\Delta, x_K, y^{(K\Delta,x_K,y_K)}_s) - \bar b_i(K\Delta, x_K)\big)\,ds \,\bigg|\, \mathcal F'_{K\Delta}\bigg] \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] \\
&\quad\le \mathbb{E}\Bigg[\mathbb{E}\bigg[\sum_{K} \Delta\,\bar K(\Delta/\epsilon)\big(|x^\epsilon_{K\Delta}| + |x^\epsilon_{K\Delta}|^3 + |x^\epsilon_{K\Delta}|\,|\widehat Y^\epsilon_{K\Delta}|^2\big) \,\bigg|\, \mathcal F'_{t_1}\bigg]\Bigg] \\
&\quad= \sum_{K} \Delta\,\bar K(\Delta/\epsilon)\big(\mathbb{E}\big[|x^\epsilon_{K\Delta}|\big] + \mathbb{E}\big[|x^\epsilon_{K\Delta}|^3\big] + \mathbb{E}\big[|x^\epsilon_{K\Delta}|\,|\widehat Y^\epsilon_{K\Delta}|^2\big]\big) \le C\,\bar K(\Delta/\epsilon) \xrightarrow[\Delta/\epsilon\to+\infty]{\epsilon\to 0} 0
\end{aligned}
\]
where by hypothesis 4
\[
\mathbb{E}\big[|x^\epsilon_{K\Delta}|\,|\widehat Y^\epsilon_{K\Delta}|^2\big] = \mathbb{E}\big[|\widehat Y^\epsilon_{K\Delta}|^2\,\mathbb{E}\big[|x^\epsilon_{K\Delta}| \,\big|\, \widehat Y^\epsilon_{K\Delta}\big]\big] \le C_3\,\mathbb{E}\big[|\widehat Y^\epsilon_{K\Delta}|^2\big] \le C_4
\]
where $C_3, C_4$ are independent of $\epsilon$. Note that $\mathbb{E}\big[|x^\epsilon_{K\Delta}| \,\big|\, \widehat Y^\epsilon_{K\Delta}\big] \le C_3$; in fact, by the tower property and Jensen's inequality,
\[
\mathbb{E}\big[|x^\epsilon_{K\Delta}| \,\big|\, \widehat Y^\epsilon_{K\Delta}\big] = \mathbb{E}\big[\mathbb{E}\big[|x^\epsilon_{K\Delta}| \,\big|\, \widehat Y^\epsilon_s,\ s\le K\Delta\big] \,\big|\, \widehat Y^\epsilon_{K\Delta}\big] \le \mathbb{E}\big[\mathbb{E}\big[|x^\epsilon_{K\Delta}|^2 \,\big|\, \widehat Y^\epsilon_s,\ s\le K\Delta\big]^{1/2} \,\big|\, \widehat Y^\epsilon_{K\Delta}\big]
\]
and $\mathbb{E}\big[|x^\epsilon_{K\Delta}|^2 \,\big|\, \widehat Y^\epsilon_s = y_s,\ s\le K\Delta\big] \le C_3^2$, because the proof of (3.8) can be repeated with any deterministic continuous function $y_s$. This implies (3.21).

As before, by (3.21) and dominated convergence we have (3.20):
\[
\begin{aligned}
&\lim_{\epsilon\to 0} \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K=\lceil t_1/\Delta\rceil}^{\lfloor t_2/\Delta\rfloor-1} \int_{K\Delta}^{(K+1)\Delta} \widehat x^{\epsilon,j}_s b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) - \widehat x^{0,j}_s \bar b_i(s, \widehat x^0_s)\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] \\
&\quad\le \lim_{\epsilon\to 0} \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K} \int_{K\Delta}^{(K+1)\Delta} \widehat x^{\epsilon,j}_s \big(b_i(K\Delta, x^\epsilon_{K\Delta}, \widehat Y^\epsilon_s) - \bar b_i(K\Delta, x^\epsilon_{K\Delta})\big)\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] \\
&\qquad + \lim_{\epsilon\to 0} \mathbb{E}\Bigg[\bigg|\mathbb{E}\bigg[\sum_{K} \int_{K\Delta}^{(K+1)\Delta} \widehat x^{\epsilon,j}_s \bar b_i(K\Delta, x^\epsilon_{K\Delta}) - \widehat x^{0,j}_s \bar b_i(s, \widehat x^0_s)\,ds \,\bigg|\, \mathcal F'_{t_1}\bigg]\bigg|\Bigg] = 0
\end{aligned}
\]
Conditions (3.18), (3.19), (3.20) imply that the process $\{m^{x_i x_j}_t\}_{t\ge 0}$ is a continuous local martingale with respect to $\mathcal F'_t$, $t\ge 0$. As before, since $\mathcal F'_t \supseteq \mathcal F^{\widehat x^0}_t$, it follows that $\{m^{x_i x_j}_t\}_{t\ge 0}$ is a continuous local $\mathcal F^{\widehat x^0}_t$-martingale, and this implies that the process $\{M^{x_i x_j}_t\}_{t\ge 0}$, defined on $(C([0,+\infty))^d, \mathcal B(C([0,+\infty))^d))$ with values in $\mathbb R$, is a local martingale with respect to $\mathcal F_t$, $t\ge 0$, for all $i,j\le d$. Here $\{M^{x_i x_j}_t\}_{t\ge 0}$ is obtained through the coordinate process of $\widehat x^0$ as we did for $M^{x_i}_t$.

At this point we can apply theorem 2.6 and conclude that there exists a $d$-dimensional Brownian motion $\{w^0_t\}_{t\ge 0}$, defined on an extension $(\widehat\Omega, \widehat{\mathcal F}, \widehat{\mathbb P})$ of $(C([0,+\infty))^d, \mathcal B(C([0,+\infty))^d), P)$, such that $(\widehat\Omega, \widehat{\mathcal F}, \widehat{\mathcal F}_t, \widehat{\mathbb P}, \widehat x^0_t, w^0_t)$ is a weak solution of (3.7) and solves the corresponding martingale problem. Moreover, the solution of (3.7) is unique in law thanks to hypothesis 3. Then by [2, theorem 2.6] the whole family $\{\widehat X^\epsilon\}_\epsilon$ converges weakly in $C([0,+\infty))^d$ to the solution of (3.7), and this implies the weak convergence in $C([0,T])^d$.
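The averaging mechanism proved above can be illustrated numerically. The following is a minimal sketch (not part of the thesis, all numerical choices illustrative) for a toy slow-fast system in which the averaged equation is explicit: the fast component is an Ornstein-Uhlenbeck process $dY^\epsilon_t = -\epsilon^{-1}Y^\epsilon_t\,dt + \epsilon^{-1/2}\sqrt2\,d\widetilde W_t$, whose invariant measure is $N(0,1)$, and the slow component is $dX^\epsilon_t = ((Y^\epsilon_t)^2 - X^\epsilon_t)\,dt$, so that the averaged drift is $\bar b(x) = 1 - x$ and $\bar X_t = 1 + (x_0-1)e^{-t}$.

```python
import math
import random

def slow_fast_path(eps, x0=0.0, y0=0.0, T=1.0, dt_factor=0.02, seed=1):
    """Euler-Maruyama for the toy slow-fast system
       dX = (Y^2 - X) dt,  dY = -(1/eps) Y dt + sqrt(2/eps) dW.
    The time step is chosen proportional to eps so that the fast scale
    is resolved; returns X at time T for a single path."""
    rng = random.Random(seed)
    dt = eps * dt_factor
    n = int(T / dt)
    x, y = x0, y0
    for _ in range(n):
        x += (y * y - x) * dt
        y += -y * (dt / eps) + math.sqrt(2.0 * dt / eps) * rng.gauss(0.0, 1.0)
    return x

def averaged_solution(t, x0=0.0):
    """Closed-form solution of the averaged ODE dXbar = (1 - Xbar) dt."""
    return 1.0 + (x0 - 1.0) * math.exp(-t)

if __name__ == "__main__":
    for eps in (0.1, 0.01, 0.001):
        print(eps, slow_fast_path(eps), averaged_solution(1.0))
```

As $\epsilon$ decreases, the single-path value $X^\epsilon_1$ concentrates around $\bar X_1$ with fluctuations of order $\sqrt\epsilon$, in agreement with the weak convergence of theorem 3.1.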

4 Financial Application

4.1 Local Stochastic Volatility Model

Under the risk-neutral measure we consider a local stochastic volatility model defined by the following system of stochastic differential equations, depending on a parameter $\epsilon\in(0,1]$:
\[
\begin{cases}
dS^\epsilon_t = rS^\epsilon_t\,dt + \mathcal F(t, S^\epsilon_t, Y^\epsilon_t)S^\epsilon_t\,dW_t & S^\epsilon_0 = s_0 > 0 \\
dY^\epsilon_t = \epsilon^{-1}\big(-Y^\epsilon_t + f(Y^\epsilon_t)\big)\,dt + \epsilon^{-1/2}\beta\,d\widetilde W_t & Y^\epsilon_0 = y_0 \in \mathbb R
\end{cases} \tag{4.1}
\]
where $t\ge 0$, $\{S^\epsilon_t\}_{t\ge 0}$ is the stock price, $\{W_t\}_{t\ge 0}$ and $\{\widetilde W_t\}_{t\ge 0}$ are Brownian motions with instantaneous correlation $\rho\in(-1,1)$, $r>0$ is the risk-free interest rate, the real function $\mathcal F(t,s,y)$ is the local stochastic volatility, $f(y)$ is a real function and $\beta\in\mathbb R$. The parameter $\epsilon$ represents the ratio of time scales between the stock price and the volatility. In [7] there is a detailed description of the evidence of a fast scale in volatility.

To apply the theory of the previous chapter it is convenient to consider the log-price $X^\epsilon_t = \log(S^\epsilon_t)$. By Itô's formula we have:
\[
\begin{cases}
dX^\epsilon_t = \big(r - F(t, X^\epsilon_t, Y^\epsilon_t)^2/2\big)\,dt + F(t, X^\epsilon_t, Y^\epsilon_t)\,dW_t & X^\epsilon_0 = x_0 \\
dY^\epsilon_t = \epsilon^{-1}\big(-Y^\epsilon_t + f(Y^\epsilon_t)\big)\,dt + \epsilon^{-1/2}\beta\,d\widetilde W_t & Y^\epsilon_0 = y_0
\end{cases} \tag{4.2}
\]
where $x_0 = \log(s_0)$ and $F(t,x,y) = \mathcal F(t, e^x, y)$.

We assume that $0 < m \le F(t,x,y) \le M$, that $F(t,x,y)$ is Lipschitz in $(x,y)$ uniformly in $t$ and Hölder continuous in $t$ for fixed $x, y\in\mathbb R$, and that $f(y)$ is bounded and Lipschitz.

4.2 Convergence to a Local Volatility Model

Writing system (4.2) in the form of (3.1) we have:
\[
b(t,x,y) = r - F(t,x,y)^2/2 \qquad \sigma(t,x,y) = F(t,x,y)
\]
\[
B(t,x,y) = -y + f(y) \qquad C(t,x,y) = \beta
\]
We then consider the process $\{y^{(t_0,x,y)}_s\}_{s\ge t_0}$, $t_0\ge 0$, introduced in hypothesis 3. We can rename it $\{y^y_s\}_{s\ge 0}$, because in our model the coefficients $B$ and $C$ are independent of $t$ and $x$. In our model it satisfies:
\[
dy^y_s = \big(-y^y_s + f(y^y_s)\big)\,ds + \beta\,d\widetilde W_s \qquad y^y_0 = y\in\mathbb R
\]
Now we consider a European option. A derivative of this kind is defined by its non-negative payoff function $\Phi(s)$, which prescribes the value of the contract at its maturity time $T$ when the stock price is $s = S_T$.

Under the risk-neutral measure we consider the fair price $P^\epsilon$ of the contract at time $t=0$ with initial conditions $s_0>0$, $y_0\in\mathbb R$. This is calculated as:
\[
P^\epsilon = \mathbb{E}\big[e^{-rT}\Phi(S^\epsilon_T) \,\big|\, S^\epsilon_0 = s_0,\ Y^\epsilon_0 = y_0\big]
\]
We denote by $B_b(\mathbb R)$ the Banach space of bounded measurable real functions endowed with the supremum norm $|\phi|_\infty = \sup_{x\in\mathbb R}|\phi(x)|$.

We will show the following theorem:

Theorem 4.1. Let $\Phi\in C_b(\mathbb R)$. Then for every $s_0>0$, $y_0\in\mathbb R$:
\[
\lim_{\epsilon\to 0} P^\epsilon = P \tag{4.3}
\]
where $P = \mathbb{E}\big[e^{-rT}\Phi(\bar S_T) \,\big|\, \bar S_0 = s_0\big]$ and $\{\bar S_t\}_{t\ge 0}$ is the unique strong solution to:
\[
d\bar S_t = r\bar S_t\,dt + \bar{\mathcal F}(t, \bar S_t)\bar S_t\,dW_t \qquad \bar S_0 = s_0,\ t\ge 0 \tag{4.4}
\]
where $\bar{\mathcal F}(t,s) = \big(\int_{\mathbb R} \mathcal F(t,s,z)^2\,\mu(dz)\big)^{1/2}$ and $\mu$ is the unique invariant measure for the semigroup $\{P_t\}_{t\ge 0}$ defined by $P_t\phi(y) = \mathbb{E}[\phi(y^y_t)]$, $y\in\mathbb R$, $\phi\in B_b(\mathbb R)$.

Remark 4.2.1. The theorem also holds for path-dependent options with bounded payoff.

Note that the limit model for the stock price $\{\bar S_t\}_{t\ge 0}$ is a local volatility model with volatility $\bar{\mathcal F}$. The theorem states that when the ratio of time scales $\epsilon$ is sufficiently small, the price $P^\epsilon$ can be well approximated by the local volatility price $P$. From a financial point of view this means that if, after the calibration of the local stochastic volatility model (4.2), we obtain an $\epsilon$ sufficiently small, then we can price derivatives using the local volatility price $P$, which is easier to calculate because it comes from a simpler model. Note that in this case the limit model is easily constructed starting from the calibrated local stochastic volatility model. Moreover, if for a market there is evidence of a fast scale in volatility, then it is appropriate to calibrate the local volatility model directly.
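The practical content of the theorem can be illustrated with a small Monte Carlo experiment. The following sketch uses made-up coefficients (not from the thesis): a digital payoff $\Phi(s) = \mathbb 1_{\{s > K\}}$ (bounded), $F(t,x,y)^2 = \sigma_0^2(1 + \tfrac12\tanh^2 y)$, $f\equiv 0$ and $\beta = \sqrt2$, so that the fast factor is an OU process with invariant measure $N(0,1)$ and the averaged volatility $\bar F^2 = \sigma_0^2(1 + \tfrac12\,\mathbb E[\tanh^2 Z])$, $Z\sim N(0,1)$, is a constant computable by quadrature; the limit price is then an explicit Black-Scholes digital price.

```python
import math
import numpy as np

def digital_price_eps(eps, s0=1.0, K=1.0, r=0.02, sigma0=0.2, rho=-0.5,
                      T=1.0, npaths=10000, seed=0):
    """Monte Carlo price of a digital call under the toy fast-vol model
       dX = (r - F^2/2) dt + F dW,  dY = -(Y/eps) dt + sqrt(2/eps) dWt,
    with F(y)^2 = sigma0^2 (1 + 0.5 tanh(y)^2) and corr(W, Wt) = rho."""
    rng = np.random.default_rng(seed)
    dt = eps / 20.0                       # resolve the fast time scale
    n = int(T / dt)
    x = np.full(npaths, math.log(s0))
    y = np.zeros(npaths)
    for _ in range(n):
        z1 = rng.standard_normal(npaths)
        z2 = rng.standard_normal(npaths)
        dw = math.sqrt(dt) * z1
        dwt = math.sqrt(dt) * (rho * z1 + math.sqrt(1.0 - rho**2) * z2)
        F2 = sigma0**2 * (1.0 + 0.5 * np.tanh(y)**2)
        x += (r - F2 / 2.0) * dt + np.sqrt(F2) * dw
        y += -y * (dt / eps) + math.sqrt(2.0 / eps) * dwt
    return math.exp(-r * T) * float(np.mean(np.exp(x) > K))

def digital_price_limit(s0=1.0, K=1.0, r=0.02, sigma0=0.2, T=1.0):
    """Limit price: Black-Scholes digital with the averaged volatility
       sigma_bar^2 = sigma0^2 (1 + 0.5 E[tanh(Z)^2]), Z ~ N(0,1)."""
    zs = np.linspace(-8.0, 8.0, 4001)
    pdf = np.exp(-zs**2 / 2.0) / math.sqrt(2.0 * math.pi)
    e_tanh2 = float(np.sum(np.tanh(zs)**2 * pdf) * (zs[1] - zs[0]))
    sbar = sigma0 * math.sqrt(1.0 + 0.5 * e_tanh2)
    d2 = (math.log(s0 / K) + (r - sbar**2 / 2.0) * T) / (sbar * math.sqrt(T))
    return math.exp(-r * T) * 0.5 * (1.0 + math.erf(d2 / math.sqrt(2.0)))
```

For small $\epsilon$ the two prices agree up to Monte Carlo noise and an $O(\sqrt\epsilon)$ averaging error, which is exactly the approximation the theorem justifies.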

To prove the theorem we first introduce the following lemmas.

Lemma 4.2.2. There exist constants $\hat C>0$, $\hat\eta>0$ such that for every $t\ge 0$, $y_1,y_2\in\mathbb R$ we have:
\[
\big|\mathbb{E}[\phi(y^{y_2}_t)] - \mathbb{E}[\phi(y^{y_1}_t)]\big| \le \hat C e^{-\hat\eta t}\big(1+|y_1|^2+|y_2|^2\big)\,|\phi|_\infty
\]
for every real function $\phi\in B_b(\mathbb R)$.

Proof. It is a direct application of [5, theorem 2.4] to the process $\{y^y_t\}_{t\ge 0}$. $\square$

We now recall the definition of the Markov transition probabilities:
\[
P_t(y,\Gamma) = P_t \mathbb 1_\Gamma(y) = \mathbb{E}\big[\mathbb 1_\Gamma(y^y_t)\big] \qquad \forall y\in\mathbb R,\ \Gamma\in\mathcal B(\mathbb R)
\]
where $\mathbb 1_\Gamma$ is the indicator function.

Lemma 4.2.3. There exists a unique invariant measure $\mu$ for the semigroup $\{P_t\}_{t\ge 0}$, and $P_t(y,\Gamma)\to\mu(\Gamma)$ as $t\to+\infty$ for every $\Gamma\in\mathcal B(\mathbb R)$, $y\in\mathbb R$.

Proof. It is shown in [5, proof of theorem 2.6]. $\square$

Lemma 4.2.4. There exist constants $C>0$, $\hat\eta>0$ such that for all $t\ge 0$, $y\in\mathbb R$ we have:
\[
\Big|\mathbb{E}[\phi(y^y_t)] - \int_{\mathbb R}\phi(z)\,\mu(dz)\Big| \le C e^{-\hat\eta t}\big(1+|y|^2\big)\,|\phi|_\infty \tag{4.5}
\]
for every real function $\phi\in C_b(\mathbb R)$.

Proof. Let $\phi\in C_b(\mathbb R)$, the space of continuous bounded real functions, let $t, h>0$ and $y\in\mathbb R$. Then:
\[
\Big|\mathbb{E}[\phi(y^y_t)] - \int_{\mathbb R}\phi(z)\,\mu(dz)\Big| \le \big|\mathbb{E}[\phi(y^y_t)] - \mathbb{E}[\phi(y^y_{t+h})]\big| + \Big|\mathbb{E}[\phi(y^y_{t+h})] - \int_{\mathbb R}\phi(z)\,\mu(dz)\Big| \tag{4.6}
\]
We consider the first term on the right-hand side. Since $\{y^y_t\}_{t\ge 0}$ is a Markov process we have:
\[
\mathbb{E}[\phi(y^y_t)] - \mathbb{E}[\phi(y^y_{t+h})] = \mathbb{E}\Big[\mathbb{E}\big[\phi(y^y_t)\big] - \mathbb{E}\big[\phi(y^y_{t+h}) \,\big|\, \mathcal F_h\big]\Big] = \mathbb{E}\Big[\mathbb{E}\big[\phi(y^y_t)\big] - \mathbb{E}\big[\phi(y^y_{t+h}) \,\big|\, y^y_h\big]\Big] \tag{4.7}
\]
By the Markov property and the fact that the system is autonomous we also have, for $y_1\in\mathbb R$:
\[
\mathbb{E}\big[\phi(y^y_{t+h}) \,\big|\, y^y_h = y_1\big] = \mathbb{E}\big[\phi(y^{h,y^y_h}_{t+h}) \,\big|\, y^y_h = y_1\big] = \mathbb{E}\big[\phi(y^{h,y_1}_{t+h})\big] = \mathbb{E}\big[\phi(y^{y_1}_t)\big]
\]
where $\{y^{h,y_h}_t\}_{t\ge h}$ is the process starting at time $h$ from $y_h$ and following the same dynamics as $\{y^y_t\}_{t\ge 0}$. Then (4.7) becomes:
\[
\mathbb{E}[\phi(y^y_t)] - \mathbb{E}[\phi(y^y_{t+h})] = \mathbb{E}\Big[\mathbb{E}\big[\phi(y^y_t)\big] - \mathbb{E}\big[\phi(y^{y_1}_t)\big]\Big|_{y_1 = y^y_h}\Big]
\]
By lemma 4.2.2 we have:
\[
\big|\mathbb{E}[\phi(y^y_t)] - \mathbb{E}[\phi(y^y_{t+h})]\big| \le \mathbb{E}\big[\hat C e^{-\hat\eta t}\big(1+|y|^2+|y^y_h|^2\big)\big]\,|\phi|_\infty
\]
By (2.7) we have $\mathbb{E}\big[|y^y_h|^2\big] \le C_1(1+|y|^2)$, where $C_1$ is independent of $h$ and $y$; then:
\[
\big|\mathbb{E}[\phi(y^y_t)] - \mathbb{E}[\phi(y^y_{t+h})]\big| \le C e^{-\hat\eta t}\big(1+|y|^2\big)\,|\phi|_\infty
\]
where $C = \hat C(1+C_1)$. Then (4.6) becomes:
\[
\Big|\mathbb{E}[\phi(y^y_t)] - \int_{\mathbb R}\phi(z)\,\mu(dz)\Big| \le C e^{-\hat\eta t}\big(1+|y|^2\big)\,|\phi|_\infty + \Big|\mathbb{E}[\phi(y^y_{t+h})] - \int_{\mathbb R}\phi(z)\,\mu(dz)\Big|
\]
Letting $h\to+\infty$ we obtain the thesis. In fact by lemma 4.2.3 the following limit holds for any indicator function $\phi(y) = \mathbb 1_\Gamma(y)$, $\Gamma\in\mathcal B(\mathbb R)$:
\[
\lim_{h\to+\infty} \Big|\mathbb{E}[\phi(y^y_{t+h})] - \int_{\mathbb R}\phi(z)\,\mu(dz)\Big| = \lim_{h\to+\infty} \big|P_{t+h}(y,\Gamma) - \mu(\Gamma)\big| = 0
\]
and the limit can then be extended to any $\phi\in C_b(\mathbb R)$. $\square$
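The exponential bound (4.5) can be checked in closed form in the special case $f\equiv 0$ (an illustrative assumption, not the general setting of the lemma): then $y^y_t$ is an Ornstein-Uhlenbeck process, $y^y_t \sim N\big(y e^{-t},\, \beta^2(1-e^{-2t})/2\big)$, the invariant measure is $\mu = N(0, \beta^2/2)$, and for the test function $\phi = \cos$ one has $\mathbb E[\cos Z] = \cos(m)e^{-v/2}$ for $Z\sim N(m,v)$, so both sides of (4.5) are explicit. A minimal sketch:

```python
import math

def ou_expectation_cos(y, t, beta=1.0):
    """E[cos(y^y_t)] for the OU process dy = -y ds + beta dW (f = 0):
       y^y_t ~ N(y e^{-t}, beta^2 (1 - e^{-2t}) / 2)."""
    mean = y * math.exp(-t)
    var = beta**2 * (1.0 - math.exp(-2.0 * t)) / 2.0
    return math.cos(mean) * math.exp(-var / 2.0)

def invariant_expectation_cos(beta=1.0):
    """Integral of cos against the invariant measure mu = N(0, beta^2/2)."""
    return math.exp(-beta**2 / 4.0)

def ergodic_gap(y, t, beta=1.0):
    """Left-hand side of the bound (4.5) for phi = cos."""
    return abs(ou_expectation_cos(y, t, beta) - invariant_expectation_cos(beta))
```

Evaluating `ergodic_gap` at increasing $t$ shows the exponential decay, with the $(1+|y|^2)$ dependence on the initial condition visible for large $|y|$.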

We can now prove theorem 4.1.

Proof. We immediately note that the coefficients $\sigma$, $B$, $C$ of our model satisfy hypotheses 1, 2. The coefficient $b$ satisfies hypotheses 1, 2 as well. In fact $F^2$ is bounded and Lipschitz in $(x,y)$ uniformly in $t$: for every $t\le T$, $x_1,x_2,y_1,y_2\in\mathbb R$
\[
\big|F(t,x_2,y_2)^2 - F(t,x_1,y_1)^2\big| = \big|F(t,x_2,y_2) - F(t,x_1,y_1)\big|\,\big|F(t,x_2,y_2) + F(t,x_1,y_1)\big| \le 2LM\big(|x_2-x_1| + |y_2-y_1|\big)
\]
where $L$ is the Lipschitz constant of $F$ and $M$ is the constant bounding $F$.

Now we prove that hypothesis 3 holds. Fix $t\ge 0$, $x\in\mathbb R$ and consider the function $\phi(z) = b(t,x,z)$; then $|\phi|_\infty = \sup_{z\in\mathbb R}|b(t,x,z)| \le M_1$, where $M_1$ is a constant independent of $t$, $x$. Applying lemma 4.2.4 we have:
\[
\Big|\mathbb{E}\big[b(t,x,y^y_s)\big] - \int_{\mathbb R} b(t,x,z)\,\mu(dz)\Big| \le M_1 C e^{-\hat\eta s}\big(1+|y|^2\big)
\]
for every $s>0$. Let $t_0>0$; we integrate with respect to $s$ and divide by $\tau>0$:
\[
\bigg|\mathbb{E}\bigg[\frac1\tau\int_{t_0}^{t_0+\tau} \Big(b(t,x,y^y_s) - \int_{\mathbb R} b(t,x,z)\,\mu(dz)\Big)\,ds\bigg]\bigg| \le \frac{M_1 C}{\tau}\int_{t_0}^{t_0+\tau} e^{-\hat\eta s}\,ds\,\big(1+|y|^2\big) \le \bar K(\tau)\big(1+|y|^2\big)
\]
where $\bar K(\tau) = \frac{M_1 C}{\hat\eta\tau}\big(1-e^{-\hat\eta\tau}\big) \to 0$ as $\tau\to+\infty$. This implies that there exists a function $\bar b$, defined by $\bar b(t,x) = \int_{\mathbb R} b(t,x,z)\,\mu(dz)$, that satisfies (3.2). Moreover it is Lipschitz uniformly in $t$, since $b$ is Lipschitz uniformly in $t$.

Now we consider the function $a(t,x,z) = \sigma(t,x,z)^2$ and we choose $\phi(z) = a(t,x,z)$ with $t$, $x$ fixed. With the same procedure we are able to show that there exists a function $\bar\sigma$, defined by $\bar\sigma(t,x) = \sqrt{\bar a(t,x)}$, which satisfies (3.3); here $\bar a(t,x) = \int_{\mathbb R} \sigma(t,x,z)^2\,\mu(dz)$. Then we have, for every $t\le T$, $x_1,x_2\in\mathbb R$:
\[
\big|\bar a(t,x_2) - \bar a(t,x_1)\big| \le \int_{\mathbb R} \big|\sigma(t,x_2,z)^2 - \sigma(t,x_1,z)^2\big|\,\mu(dz) = \int_{\mathbb R} \big|\sigma(t,x_2,z) - \sigma(t,x_1,z)\big|\,\big|\sigma(t,x_2,z) + \sigma(t,x_1,z)\big|\,\mu(dz)
\]
Since $\sigma$ is bounded and Lipschitz uniformly in $t$, this implies that $\bar a$ is Lipschitz uniformly in $t$. Then we consider the equality:
\[
\big|\bar\sigma(t,x_2) - \bar\sigma(t,x_1)\big| = \big|\bar\sigma(t,x_2)^2 - \bar\sigma(t,x_1)^2\big|\,\big|\bar\sigma(t,x_2) + \bar\sigma(t,x_1)\big|^{-1} = \big|\bar a(t,x_2) - \bar a(t,x_1)\big|\,\big|\bar\sigma(t,x_2) + \bar\sigma(t,x_1)\big|^{-1}
\]
for every $t\le T$, $x_1,x_2\in\mathbb R$. Since $0 < m \le \bar\sigma(t,x)$ and $\bar a$ is Lipschitz uniformly in $t$, this implies that $\bar\sigma$ is also Lipschitz uniformly in $t$. It follows that the averaged equation
\[
d\bar X_t = \bar b(t,\bar X_t)\,dt + \bar\sigma(t,\bar X_t)\,dW_t \qquad \bar X_0 = x_0,\ t\in[0,T]
\]
has a pathwise unique strong solution. This implies that hypothesis 3 holds.

Lastly we prove that hypothesis 4 holds. By Itô's formula and since $|f|\le C_1$ we get:
\[
d\big(e^{s/\epsilon}|Y^\epsilon_s|^2\big) = e^{s/\epsilon}\Big[-\frac1\epsilon|Y^\epsilon_s|^2 + \frac2\epsilon f(Y^\epsilon_s)Y^\epsilon_s + \frac{\beta^2}\epsilon\Big]\,ds + \frac{2\beta}{\sqrt\epsilon}\,e^{s/\epsilon}\,Y^\epsilon_s\,d\widetilde W_s \le e^{s/\epsilon}\Big[-\frac1\epsilon|Y^\epsilon_s|^2 + \frac{2C_1}\epsilon|Y^\epsilon_s| + \frac{\beta^2}\epsilon\Big]\,ds + \frac{2\beta}{\sqrt\epsilon}\,e^{s/\epsilon}\,Y^\epsilon_s\,d\widetilde W_s
\]
By Young's inequality:
\[
d\big(e^{s/\epsilon}|Y^\epsilon_s|^2\big) \le e^{s/\epsilon}\Big[\frac{C_1^2}\epsilon + \frac{\beta^2}\epsilon\Big]\,ds + \frac{2\beta}{\sqrt\epsilon}\,e^{s/\epsilon}\,Y^\epsilon_s\,d\widetilde W_s
\]
Then for $\epsilon\in(0,1]$ we have:
\[
\mathbb{E}\big[|Y^\epsilon_t|^2\big] \le e^{-t/\epsilon}y_0^2 + \int_0^t \frac{e^{(s-t)/\epsilon}}{\epsilon}\big[C_1^2 + \beta^2\big]\,ds = e^{-t/\epsilon}y_0^2 + \big(1-e^{-t/\epsilon}\big)\big[C_1^2 + \beta^2\big] \le y_0^2 + C_1^2 + \beta^2
\]
which proves hypothesis 4.

At this point we can apply theorem 3.1 and conclude that the family of processes $\{X^\epsilon\}_\epsilon$ converges weakly in $C([0,T])$ to the process $\bar X$, solution of:
\[
d\bar X_t = \bar b(t,\bar X_t)\,dt + \bar\sigma(t,\bar X_t)\,dW_t \qquad \bar X_0 = x_0,\ t\in[0,T]
\]
Computing the coefficients we have:
\[
d\bar X_t = \big(r - \bar F(t,\bar X_t)^2/2\big)\,dt + \bar F(t,\bar X_t)\,dW_t \qquad \bar X_0 = x_0,\ t\in[0,T]
\]
where $\bar F(t,x) = \big(\int_{\mathbb R} F(t,x,z)^2\,\mu(dz)\big)^{1/2}$. Then we define $\bar S_t = \exp(\bar X_t)$ and by Itô's formula we have:
\[
d\bar S_t = r\bar S_t\,dt + \bar{\mathcal F}(t,\bar S_t)\bar S_t\,dW_t \qquad \bar S_0 = s_0,\ t\in[0,T]
\]
where $s_0 = \exp(x_0)$ and $\bar{\mathcal F}(t,s) = \bar F(t,\log(s)) = \big(\int_{\mathbb R} \mathcal F(t,s,z)^2\,\mu(dz)\big)^{1/2}$.

Since the family of processes $\{X^\epsilon\}_\epsilon$ converges weakly in $C([0,T])$ to $\bar X$, the family of processes $\{S^\epsilon = \exp(X^\epsilon)\}_\epsilon$ converges weakly in $C([0,T])$ to the process $\bar S = \exp(\bar X)$, because $\exp(\cdot)$ is continuous. By definition of weak convergence it follows:
\[
\lim_{\epsilon\to 0} \int_{C([0,T])} g\,d\big(\mathbb P\circ(S^\epsilon)^{-1}\big) = \int_{C([0,T])} g\,d\big(\mathbb P\circ\bar S^{-1}\big) \quad \text{for every } g\in C_b(C([0,T]))
\]
or equivalently
\[
\lim_{\epsilon\to 0} \mathbb{E}\big[g(S^\epsilon)\big] = \mathbb{E}\big[g(\bar S)\big] \quad \text{for every } g\in C_b(C([0,T])).
\]
For $t\in[0,T]$ consider the continuous projection operator $\eta_t : C([0,T])\to\mathbb R$ defined by $\eta_t(y) = y_t$, $y\in C([0,T])$. For an arbitrary function $h\in C_b(\mathbb R)$ the composition $g = h\circ\eta_t$ belongs to $C_b(C([0,T]))$, hence:
\[
\lim_{\epsilon\to 0} \mathbb{E}\big[h(S^\epsilon_t)\big] = \mathbb{E}\big[h(\bar S_t)\big] \quad \text{for every } h\in C_b(\mathbb R),\ t\in[0,T].
\]
Since $\Phi\in C_b(\mathbb R)$, this implies (4.3). $\square$

4.3 Convergence to the Black and Scholes Model

Under the risk-neutral measure we now consider a stochastic volatility model depending on $\epsilon>0$:
\[
\begin{cases}
dS^\epsilon_t = rS^\epsilon_t\,dt + \mathcal F(Y^\epsilon_t)S^\epsilon_t\,dW_t & S^\epsilon_0 = s_0 > 0 \\
dY^\epsilon_t = \epsilon^{-1}\big(-Y^\epsilon_t + f(Y^\epsilon_t)\big)\,dt + \epsilon^{-1/2}\beta\,d\widetilde W_t & Y^\epsilon_0 = y_0 \in \mathbb R
\end{cases} \tag{4.8}
\]
where $t\ge 0$, $\{W_t\}_{t\ge 0}$ and $\{\widetilde W_t\}_{t\ge 0}$ are Brownian motions with instantaneous correlation $\rho\in(-1,1)$, $r>0$, $\mathcal F(y)$ is the stochastic volatility and $\beta\in\mathbb R$.

From a mathematical point of view, system (4.8) is a particular case of (4.1). We then assume that $0 < m \le \mathcal F(y) \le M$, that $\mathcal F(y)$ is Lipschitz and that $f(y)$ is bounded and Lipschitz. As in the case of the local stochastic volatility model, we consider a European derivative with payoff $\Phi$ and we keep the same notation for the price $P^\epsilon$.

Applying theorem 4.1 we have the following result:

Theorem 4.2. Let $\Phi\in C_b(\mathbb R)$. Then for every $s_0>0$, $y_0\in\mathbb R$:
\[
\lim_{\epsilon\to 0} P^\epsilon = P \tag{4.9}
\]
where $P = \mathbb{E}\big[e^{-rT}\Phi(\bar S_T) \,\big|\, \bar S_0 = s_0\big]$ and $\{\bar S_t\}_{t\ge 0}$ is the geometric Brownian motion:
\[
d\bar S_t = r\bar S_t\,dt + \bar{\mathcal F}\,\bar S_t\,dW_t \qquad \bar S_0 = s_0,\ t\ge 0 \tag{4.10}
\]
where $\bar{\mathcal F} = \big(\int_{\mathbb R}\mathcal F(z)^2\,\mu(dz)\big)^{1/2}$ and $\mu$ is the unique invariant measure for the semigroup $\{P_t\}_{t\ge 0}$ defined by $P_t\phi(y) = \mathbb{E}[\phi(y^y_t)]$, $y\in\mathbb R$, $\phi\in B_b(\mathbb R)$.

Note that the difference is that here the limit model for the stock price $\{\bar S_t\}_{t\ge 0}$ is not a local volatility model but a Black and Scholes model with constant volatility $\bar{\mathcal F}$.

4.4 Example

We now consider a concrete example in which we choose the coefficients for $\{Y^\epsilon_t\}_{t\ge 0}$ and we show how to calculate the invariant measure $\mu$. This allows us to compute the dynamics of the limit equation (4.4), so that we are able to compute the prices of derivatives.

Let us consider the following stochastic differential equation for $\{Y^\epsilon_t\}_{t\ge 0}$:
\[
\begin{cases}
dY^\epsilon_t = \epsilon^{-1}\big(m - Y^\epsilon_t + \sin(Y^\epsilon_t)\big)\,dt + \epsilon^{-1/2}\sqrt{2\nu}\,d\widetilde W_t \\
Y^\epsilon_0 = y_0\in\mathbb R
\end{cases}
\]