Generalized Merton Models

Academic year: 2021

Università di Pisa
Department of Mathematics
Degree Programme in Mathematics

Master's Thesis

Generalized Merton Models

25 October 2019

Candidate: Alessandro Combi
Supervisor: Prof. Maurizio Pratelli

Contents

1 Itô-Lévy Processes and Affine Processes 4
1.1 Jump Processes . . . 4
1.2 Itô-Lévy Jump Diffusions . . . 6
1.3 Characterization of Affine Processes . . . 9
1.4 Poisson Random Measures . . . 13
1.5 Series Expansion of the Characteristic Function . . . 15
1.6 Option Pricing using the Characteristic Function . . . 19
1.6.1 Carr-Madan Formula . . . 19
1.6.2 An alternative Fourier-based formula . . . 20

2 Affine Models for Option Pricing 23
2.1 Merton Model . . . 23
2.2 Heston Model . . . 24
2.3 Heston Model with State-dependent Jumps . . . 25
2.4 Bates Model . . . 27
2.5 Generalized Merton Models . . . 27

3 Numerical Experimentation 29
3.1 Existence of the series expansion . . . 29
3.2 The Jump-free Case . . . 30
3.3 The Jump Case . . . 34
3.4 Monte Carlo Simulation . . . 35
3.5 Performance of the approximated formula in the jump case . . . 37


Abstract

In this thesis we discuss numerical properties of Generalized Merton Models. Following the article by C. Bayer and J. Schoenmakers, "Option Pricing in Affine Generalized Merton Models", we model the price process of a financial underlying (e.g. an asset or a bond) as the exponential of a Heston process with state-dependent jumps. This model features stochastic volatility and jumps, which are realistic features according to financial data.

This process belongs to the class of Affine Jump Diffusions, which are Markov processes whose characteristic function is log-affine with respect to the initial state. In their seminal work "Affine processes and applications in finance", D. Duffie, D. Filipovic and W. Schachermayer provide a characterization of these processes based on the form of the infinitesimal generator. Their theorem also shows that the coefficients of the characteristic function satisfy a generalized Riccati differential equation, which cannot always be solved explicitly.

In their article "Holomorphic transforms with application to affine processes", D. Belomestny, J. Kampen and J. Schoenmakers show that a series expansion of the log-affine characteristic function may be found under some hypotheses, and provide a recursion for its calculation.

Call and put options may then be priced from the characteristic function through the well-known formula proved by P. Carr and D. Madan in "Option valuation using the fast Fourier transform".

In "Option Pricing in Affine Generalized Merton Models", Bayer and Schoenmakers compare the prices obtained by truncating the expansion of the characteristic function with the results of a Monte Carlo simulation, pricing options with a variation of the Carr-Madan formula.

In this thesis we illustrate the results of this experimentation using different parameters in order to fulfil the Feller condition, which guarantees the positivity of the variance. We then compare the results obtained with a variation of the Carr-Madan formula which reduces errors using ingredients from a Bates model, similar to the one used by Bayer and Schoenmakers. This model admits a closed formula for the characteristic function, and is better suited to the jump case than the Black-Scholes model used by Bayer and Schoenmakers. Further developments could include changes in the law of jumps of the price process, or testing the approximation with more complicated derivatives, such as American- or Asian-style options. Another interesting modification of the model would be to include jumps in the volatility, which are observed in the market according to recent articles.


Chapter 1

Itô-Lévy Processes and Affine Processes

In this chapter we present the theoretical framework of the financial models we will use. The main concept we need is that of Affine Processes, which are strong Markov processes defined by the affine form of the logarithm of their characteristic function. However, we start in slightly greater generality with Itô-Lévy processes, which are defined by a stochastic differential equation and may therefore give the reader more intuition. We end the chapter by presenting a series expansion of the characteristic function (useful when it is not known in closed form) and a formula by Carr and Madan for option pricing.

1.1 Jump Processes

The simplest kind of jump process is the Counting Process: a process which increases by unit steps at isolated random times.

Definition 1.1. Let (T_n)_{n≥0} be a sequence of random times such that

• T_0 = 0;

• T_n < T_{n+1} on {T_n < ∞}.

The process (N_t)_{t∈[0,T]} with N_0 = 0 and

\[ N_t = \sum_{n \ge 0} n\,\mathbf{1}_{\{T_n \le t < T_{n+1}\}} \tag{1.1} \]

is called a Counting Process.

Notice that N is a right-continuous process. We will write N_{t−} for the left limit of N at t, and denote by ΔN_t the difference N_t − N_{t−}.

A Counting Process is adapted to a filtration F if and only if the times (T_n)_{n≥0} are F-stopping times.

If (C_t)_{t∈[0,T]} is a bounded, measurable process, we define the integral against N pathwise as a Riemann-Stieltjes integral by

\[ \int_0^T C_t\,dN_t := \int_{(0,T]} C_t\,dN_t = \sum_{n=1}^{\infty} C_{T_n}\,\mathbf{1}_{\{T_n \le T\}}. \tag{1.2} \]

We now define a first type of Counting Process, the Poisson Point Process, whose interarrival times (T_n − T_{n−1}) are i.i.d. exponential variables of parameter λ.

Definition 1.2. An adapted process (X_t)_{t∈[0,T]} is a Poisson Point Process with intensity λ ∈ R_{>0} if:

• X_0 = 0 a.s.;

• X_t − X_s is a Poisson random variable of parameter λ(t − s) for each 0 ≤ s < t ≤ T;

• X has independent increments.

One may generalize this definition, for example by allowing the intensity λ to vary in time. For the definition and properties of these processes, called Inhomogeneous Poisson Processes, we refer the reader to the book "Mathematical Methods for Financial Markets" by M. Jeanblanc, M. Yor and M. Chesney ([13]).
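As an aside, jump times of an inhomogeneous Poisson process with bounded intensity can be simulated by Lewis-Shedler thinning. The following sketch is our own illustration (not from the thesis; the intensity and all parameter values are arbitrary examples), and checks the expected count E[N_T] = ∫_0^T λ(t) dt empirically.

```python
import numpy as np

def thinning(lmbda, lmbda_max, T, rng):
    """Simulate the jump times of an inhomogeneous Poisson process on [0, T]
    by thinning: propose candidate times from a homogeneous process of rate
    lmbda_max and accept each with probability lmbda(t) / lmbda_max."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lmbda_max)      # next candidate jump time
        if t > T:
            return np.array(times)
        if rng.uniform() < lmbda(t) / lmbda_max:
            times.append(t)

rng = np.random.default_rng(0)
lam = lambda t: 1.0 + 0.5 * np.sin(t)              # example intensity, bounded by 1.5
counts = [len(thinning(lam, 1.5, 10.0, rng)) for _ in range(4000)]
# E[N_T] = int_0^T lam(t) dt = 10 + 0.5 * (1 - cos(10))
print(np.mean(counts))
```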

In this thesis we will generalize the concept of Poisson Process further: we want the intensity to change depending on the current state of the process, and thus to be stochastic.

Definition 1.3. Let F be a given filtration, N an adapted counting process and (λ_t)_{t∈[0,T]} a positive, progressively measurable process such that, for every t > 0,

\[ \Lambda(t) = \int_0^t \lambda_s\,ds < \infty, \quad \mathbb{P}\text{-a.s.} \]

The process N is an Inhomogeneous Poisson Process with Stochastic Intensity λ if, for every positive, F-predictable process φ, the following equality is satisfied:

\[ \mathbb{E}\left[\int_0^\infty \varphi_s\,dN_s\right] = \mathbb{E}\left[\int_0^\infty \varphi_s\,\lambda_s\,ds\right]. \tag{1.3} \]

To help the reader develop some intuition about these processes, one may identify them with time-changed Poisson Processes N_t = Ñ_{Λ_t}, where Ñ is a Poisson Process with unit intensity.

Remark 1.4. Consider Equation 1.3. Let φ_u = 1_A for s < u ≤ t and φ_u = 0 elsewhere, with A ∈ F_s. Recall that an adapted, left-continuous process is predictable. Then, by 1.3, we have

\[ \mathbb{E}[\mathbf{1}_A (N_t - N_s)] = \mathbb{E}\left[\int_s^t \mathbf{1}_A\,\lambda_u\,du\right]. \tag{1.4} \]

If we now let t = s + δ and let δ tend to 0, we obtain

\[ \lim_{\delta \to 0} \mathbb{E}\left[\mathbf{1}_A\,\frac{N_{s+\delta} - N_s}{\delta}\right] = \mathbb{E}[\mathbf{1}_A\,\lambda_s]. \tag{1.5} \]

This means that, in an infinitesimal interval of time after s, the process behaves like a Poisson Process with fixed intensity λ_s.

We may also write

\[ \mathbb{E}[N_t - N_s \mid \mathcal{F}_s] = \mathbb{E}\left[\int_s^t \lambda_u\,du \,\Big|\, \mathcal{F}_s\right]. \tag{1.6} \]

Notice that, from Equation 1.6, we deduce that the process N_t − ∫_0^t λ_u du is a martingale, called the compensated martingale associated to N.

Notice also that the jumps of these processes always have size one. We now want to allow jumps distributed according to a law ν, so we start by defining the Compound Poisson Process.

Definition 1.5. Let (X_i)_{i∈N} be a collection of independent, identically distributed random variables with law ν on R^d. If N_t is a Poisson Process with intensity λ, a Compound Poisson Process is a process of the form

\[ X_t = \sum_{i=1}^{N_t} X_i. \tag{1.7} \]
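A minimal simulation sketch of Definition 1.5 (our own illustration, with an assumed normal jump law and arbitrary parameter values): sampling N_T from a Poisson law and summing i.i.d. jumps reproduces the mean E[X_T] = λ T E[X_1].

```python
import numpy as np

def compound_poisson(lmbda, jump_sampler, T, rng):
    """One sample of X_T = sum_{i=1}^{N_T} Y_i, with N_T ~ Poisson(lmbda*T)
    and i.i.d. jump sizes Y_i drawn by jump_sampler."""
    n = rng.poisson(lmbda * T)                     # number of jumps up to T
    return jump_sampler(rng, n).sum() if n > 0 else 0.0

rng = np.random.default_rng(1)
lam, T = 2.0, 3.0
normal_jumps = lambda rng, n: rng.normal(0.1, 0.3, size=n)   # example law nu = N(0.1, 0.09)
samples = np.array([compound_poisson(lam, normal_jumps, T, rng)
                    for _ in range(20000)])
# E[X_T] = lam * T * E[Y] = 2 * 3 * 0.1 = 0.6
print(samples.mean())
```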

The analogous definition may be given for general jump processes with jump law ν and stochastic intensity (λ_t)_{t∈[0,T]}, by substituting for N_t a Stochastic Intensity Poisson Process.

Now we have to redefine the compensated martingale associated to the process.

Proposition 1.6. If X is a jump process with stochastic intensity λ_t and jump law ν, then the process

\[ Z_t = X_t - \int_0^t \lambda_s\,ds \int z\,\nu(dz) \tag{1.8} \]

is a martingale, called the compensated martingale associated to X.

1.2 Itô-Lévy Jump Diffusions

We fix a probability space (Ω, F, P). On this space we consider a filtration (F_t)_{t∈[0,T]} with F_0 = {∅, Ω}, and a d-dimensional Brownian Motion (B_t)_{t∈[0,T]}. We then fix a state space E ⊆ R^d. From now on, all the processes we consider will be defined on this space.

Definition 1.7. A stochastic process X is an Itô-Lévy Jump Diffusion if it satisfies the SDE

\[ dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t + dN_t, \tag{1.9} \]

where N_t is a jump process with jump distribution ν and stochastic intensity (λ(X_t))_{t∈[0,T]}. We assume b, σ and λ are regular enough to guarantee existence and uniqueness of a strong solution of Equation 1.9; we will specify sufficient conditions later.
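The SDE (1.9) suggests a simple Euler scheme in which, over each small step of length dt, a jump occurs with probability λ(X_t) dt. The sketch below is our own illustration (not the scheme used later in the thesis); the coefficients are arbitrary examples chosen so that E[X_1] = X_0 = 0.

```python
import numpy as np

def euler_jump_diffusion(x0, b, sigma, lmbda, jump_sampler, T, n_steps, rng):
    """Euler scheme for dX = b(X)dt + sigma(X)dB + dN, where N jumps with
    state-dependent intensity lmbda(X) and jump law sampled by jump_sampler.
    On each step a jump occurs with probability lmbda(X)*dt (at most one)."""
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))
        x += b(x) * dt + sigma(x) * dB
        if rng.uniform() < lmbda(x) * dt:
            x += jump_sampler(rng)
    return x

rng = np.random.default_rng(2)
# sanity check: zero drift, constant volatility, unit intensity, centred jumps
paths = np.array([
    euler_jump_diffusion(0.0, lambda x: 0.0, lambda x: 0.2,
                         lambda x: 1.0, lambda rng: rng.normal(0.0, 0.1),
                         1.0, 200, rng)
    for _ in range(4000)])
print(paths.mean())   # E[X_1] = 0 since the drift and the jump mean vanish
```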

We will now state some properties of such processes.

Proposition 1.8. If X is an Itô-Lévy Jump Diffusion, then it is a semimartingale.

Proof. A Jump Diffusion can be decomposed as the sum of three processes. The first is an integral against a Brownian Motion which, provided that the integrand is in

\[ \Lambda^2 = \left\{ (X_t)_{t\in[0,T]} : \int_0^T X_t^2\,dt < \infty \ \text{a.s.} \right\}, \]

is a local martingale. The second is a time integral, which is a bounded variation process provided that the integrand is a.s. integrable, i.e. is in Λ¹. The third is a compensated jump process, which in our case is a true martingale by Proposition 1.6.

The semimartingale property is fundamental because, for example, it allows us to use Itô's Lemma for semimartingales.

Another crucial feature is the Markov property, which makes it possible to use a wide set of theoretical instruments, such as semigroups and infinitesimal generators. To be precise, for now we assume that X_t is a strong Markov process; later we will impose conditions on the coefficients b, σ and λ that guarantee this property.

Definition 1.9. Let (X_t)_{t∈[0,T]} be a time-homogeneous Markov Process. Its infinitesimal generator is the operator A defined by

\[ Af(x) = \lim_{t \to 0} \frac{\mathbb{E}^x[f(X_t)] - f(x)}{t}. \tag{1.10} \]

Its domain D(A) ⊂ C_0(R^n) is the set of continuous functions vanishing at infinity for which the limit exists.

We will use a Characterization of Infinitesimal Generators, whose proof may be found for example in [13].

Theorem 1.10. If f ∈ D(A), then the process

\[ M_t^f = f(X_t) - f(X_0) - \int_0^t Af(X_s)\,ds \tag{1.11} \]

is a martingale.

Conversely, if f ∈ C_0(R^n) and there exists g ∈ C_0(R^n) such that

\[ M_t^f = f(X_t) - f(X_0) - \int_0^t g(X_s)\,ds \tag{1.12} \]

is a martingale, then f ∈ D(A) and g = Af.

We now state Itô's Lemma for semimartingales.

Proposition 1.11. Suppose X is a semimartingale and f is a C² function. Then

\[
\begin{aligned}
f(X_t) ={}& f(X_0) + \int_0^t \nabla f(X_{s-})\,dX_s + \frac{1}{2}\int_0^t \sum_{i,j} \frac{\partial^2 f(X_{s-})}{\partial x_i\,\partial x_j}\,d\langle X^{i,c}, X^{j,c}\rangle_s \\
&+ \sum_{s \le t}\left[f(X_s) - f(X_{s-}) - \nabla f(X_{s-})^T \Delta X_s\right],
\end{aligned} \tag{1.13}
\]

where X^c is the continuous part of X and ΔX_s = X_s − X_{s−}.

We may now calculate the generator of an Itô-Lévy Jump Diffusion.

Proposition 1.12. The infinitesimal generator of an Itô-Lévy Jump Diffusion (X_t)_{t∈[0,T]} has the following form for f ∈ C²(R^n):

\[ Af(x) = \nabla f(x)^T b(x) + \frac{1}{2}\operatorname{tr}\!\left(H_f(x)\,\sigma\sigma^T\right) + \lambda(x)\int \left(f(x+z) - f(x)\right)\nu(dz), \tag{1.14} \]

where H_f is the Hessian matrix of f.

Proof. Let us apply Itô's Lemma 1.11. We have:

\[
\begin{aligned}
f(X_t) - f(X_0) ={}& \int_0^t \nabla f(X_{s-})^T b(X_s)\,ds + \int_0^t \nabla f(X_{s-})^T \sigma(X_s)\,dB_s + \int_0^t \nabla f(X_{s-})^T dZ_s \\
&+ \frac{1}{2}\int_0^t \sum_{i,j}\frac{\partial^2 f(X_s)}{\partial x_i\,\partial x_j}\,\left(\sigma(X_s)\sigma(X_s)^T\right)_{ij}\,ds \\
&+ \sum_{s\le t}\left[f(X_s) - f(X_{s-}) - \nabla f(X_{s-})^T \Delta X_s\right].
\end{aligned} \tag{1.15}
\]

By Theorem 1.10 it suffices to show that subtracting ∫_0^t Af(X_s) ds from f(X_t) − f(X_0) yields a martingale.

Observing Equation 1.15, notice that the second derivative term cancels when subtracting ∫_0^t Af(X_s) ds. Then we observe that

\[ \int_0^t \nabla f(X_{s-})^T dZ_s = \sum_{s\le t} \nabla f(X_{s-})^T \Delta X_s. \tag{1.16} \]

Thus, in the difference between f(X_t) − f(X_0) and ∫_0^t Af(X_s) ds, only the Brownian term survives and, under suitable hypotheses, it is a martingale.
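The limit defining the generator can be checked by hand in the simplest case. For a drifted Brownian motion dX = b dt + σ dB (no jumps) and f(x) = x², the quantity E^x[f(X_h)] = (x + bh)² + σ²h is known exactly, so the limit in Definition 1.9 can be compared with b f′(x) + ½ σ² f″(x) = 2bx + σ² without any simulation. This is a small sanity check of our own; all parameter values are illustrative.

```python
# Drifted Brownian motion, f(x) = x^2: compare the finite-difference quotient
# (P_h f(x) - f(x)) / h with the generator formula Af(x) = 2*b*x + sigma^2.
b, sigma, x, h = 0.3, 0.2, 1.5, 1e-6

semigroup = (x + b * h) ** 2 + sigma ** 2 * h      # E^x[X_h^2], exact
finite_diff = (semigroup - x ** 2) / h             # (P_h f(x) - f(x)) / h
generator = 2 * b * x + sigma ** 2                 # Af(x) from the formula

print(finite_diff - generator)                     # O(h): equals b^2 * h here
```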

1.3 Characterization of Affine Processes

In the previous section we assumed that a solution of Equation 1.9 exists and is a strong Markov process. We now define a setting where it does in fact exist, imposing conditions on σ, b and λ.

Proposition 1.13. Let b(x) = b_0 + x^T b_1, σ(x)^T σ(x) = σ_0 + x^T σ_1 and λ(x) = λ_0 + x^T λ_1. Let Z_t be the compensated martingale associated to the Stochastic Intensity Poisson Process with intensity λ(X_t) and jump law ν, and let ∫_{R^n\{0}} x ν(dx) < ∞. Then the following equation has a solution, unique in law:

\[ dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t + dZ_t. \tag{1.17} \]

The solution is a time-homogeneous Markov Process.

Proof. The proof of existence and uniqueness is classical and may be found in [11] or [12]. Let us recall the Markov property, which requires that, for all t, h and every bounded measurable φ,

\[ \mathbb{E}[\varphi(X_{t+h}) \mid \mathcal{F}_t] = \mathbb{E}[\varphi(X_{t+h}) \mid X_t]. \tag{1.18} \]

One may prove the property for φ twice differentiable using Itô's Lemma, and then for all bounded measurable φ by passing to the limit and using the measurability of the limit. The process is clearly time-homogeneous by the same proof.

The generator of a solution of Equation 1.17 has the form:

\[ Af(x) = \nabla f(x)^T b(x) + \frac{1}{2}\operatorname{tr}\!\left(H_f(x)\,\sigma\sigma^T\right) + \lambda(x)\int \left(f(x+z) - f(x) - z^T\nabla f(x)\right)\nu(dz). \tag{1.19} \]

One can also add state-independent jumps to Z_t, ending up with a generator of the form

\[ Af(x) = \nabla f(x)^T b(x) + \frac{1}{2}\operatorname{tr}\!\left(H_f(x)\,\sigma\sigma^T\right) + \int \left(f(x+z) - f(x) - z^T\nabla f(x)\right)\nu(x, dz), \tag{1.20} \]

where ν(x, dz) = λ_0 ν_0(dz) + x^T λ_1 ν_1(dz).

We will now give a general definition of Affine Processes, following [1].

Definition 1.14. Denote by f_u, u ∈ R^d, the functions f_u(x) = exp(iu^T x). A regular affine process is a strong Markov process with state space D = R^m_{≥0} × R^n ⊂ R^d such that

• the function t ↦ P_t f_u(x) has a right derivative at t = 0:

\[ Af_u(x) = \partial_t^+ P_t f_u(x)\big|_{t=0}; \tag{1.21} \]

• the characteristic function P_t f_u(x) = E^x[exp(iu^T X_t)] has exponential-affine form for each u ∈ R^d, namely

\[ P_t f_u(x) = \exp\!\left(\phi(t,u) + x^T \psi(t,u)\right), \quad \forall x \in D. \tag{1.22} \]

The rigorous study of Regular Affine Processes was carried out by D. Duffie, D. Filipovic and W. Schachermayer in their work "Affine processes and applications in finance" ([1]). They gave a complete characterization of Regular Affine Processes through the form of their infinitesimal generator. Let us denote I = {1, ..., m}, J = {m+1, ..., m+n} and, for i ∈ I, I(i) = I \ {i} and J(i) = J ∪ {i}. Notice that their theory is developed in full generality, allowing explosion of the process in finite time, while we will impose conservativeness and finite activity of the jump part.

Remark 1.15. In the following Theorem we will use the truncation function

\[ \chi_j(x_1, \ldots, x_{n+m}) = \begin{cases} (1 \wedge |x_j|)\,\dfrac{x_j}{|x_j|} & \text{if } x_j \ne 0, \\[2pt] 0 & \text{if } x_j = 0, \end{cases} \qquad \chi(x_1, \ldots, x_{n+m}) = \left(\chi_1(x_1), \ldots, \chi_{n+m}(x_{n+m})\right). \tag{1.23} \]

We also need to introduce the definition of a Feller Process.

Definition 1.16. A Markov Process (X_t)_{t∈[0,T]} with state space D is a Feller Process if its transition semigroup (P_t)_{t∈[0,T]} has the following properties for each f ∈ C_0(D):

• ‖P_t f‖ ≤ ‖f‖;

• lim_{t→0} ‖P_t f − f‖ = 0.

The following Theorem provides a characterization of regular affine processes.


Theorem 1.17. Let X_t be a regular affine process. Then it is a Feller process and there exist admissible parameters (a, α, b, β, c, γ, m, μ) such that for all f ∈ C_c²(D), x = (y, z) ∈ D = R^m_{≥0} × R^n,

\[
\begin{aligned}
Af(x) ={}& \sum_{k,l=1}^d \left(a_{kl} + y^T \alpha_{I,kl}\right) \frac{\partial^2 f(x)}{\partial x_k\,\partial x_l} + (b + \beta x)^T \nabla f(x) - (c + y^T\gamma)\,f(x) \\
&+ \int_{D\setminus\{0\}} \left(f(x+\xi) - f(x) - \chi_J(\xi)^T \nabla_J f(x)\right) m(d\xi) \\
&+ \sum_{i=1}^m y_i \int_{D\setminus\{0\}} \left(f(x+\xi) - f(x) - \chi_{J(i)}(\xi)^T \nabla_{J(i)} f(x)\right) \mu_i(d\xi).
\end{aligned} \tag{1.24}
\]

Moreover, (1.22) holds for all t ∈ R_+, u = (v, w) ∈ C^m_− × iR^n and x = (y, z) ∈ D, where φ(t,u) and ψ(t,u) = (ψ^Y(t,u), ψ^Z(t,u)) solve the following differential equations, called Generalized Riccati equations:

\[
\begin{cases}
\phi(t,u) = \int_0^t F(\psi(s,u))\,ds, \\
\partial_t \psi^Y(t,u) = R^Y\!\left(\psi^Y(t,u),\, e^{t\beta^Z} w\right), \quad \psi^Y(0,u) = v, \\
\psi^Z(t,u) = e^{t\beta^Z} w,
\end{cases} \tag{1.25}
\]

where

\[
\begin{aligned}
F(u) &= u^T a\,u + b^T u - c + \int_{D\setminus\{0\}} \left(e^{u^T \xi} - 1 - u_J^T \chi_J(\xi)\right) m(d\xi), \\
R^Y_i(u) &= u^T \alpha_i\,u + (\beta^Y_i)^T u - \gamma_i + \int_{D\setminus\{0\}} \left(e^{u^T\xi} - 1 - u_{J(i)}^T \chi_{J(i)}(\xi)\right) \mu_i(d\xi), \quad i \in I, \\
\beta^Y_i &= \left(\beta_{i,\{1,\ldots,d\}}\right)^T \in \mathbb{R}^{n+m}, \ i \in I, \qquad \beta^Z = \left(\beta_{JJ}\right)^T \in \mathbb{R}^{n\times n}.
\end{aligned} \tag{1.26}
\]

Conversely, let (a, α, b, β, c, γ, m, μ) be a set of admissible parameters. Then there exists a unique regular affine process which is a Feller process with semigroup (P_t), generator of the form 1.24 and characteristic function as in Equation 1.22.

Remark 1.18 (Connection between the objects in Theorem 1.17). The operators F and R of Theorem 1.17 are connected to the generator A by the relation

\[ \partial_t^+ P_t f_u(x)\big|_{t=0} = \left(F(u) + x^T R(u)\right) f_u(x) = A f_u(x). \tag{1.27} \]
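The generalized Riccati equations rarely admit closed-form solutions, but for the CIR process dV = κ(θ − V)dt + σ√V dB they do, with F(u) = κθu and R(u) = −κu + σ²u²/2, which makes CIR a convenient test case for a numerical integrator. The following sketch is our own (a hand-rolled RK4, illustrative parameters) and compares the numerical solution against the explicit one.

```python
import numpy as np

kappa, theta, sig = 2.0, 0.04, 0.3
R = lambda u: -kappa * u + 0.5 * sig ** 2 * u ** 2    # R(u) for the CIR process

def riccati_rk4(u, t, n=2000):
    """Integrate psi' = R(psi), psi(0) = u and phi' = kappa*theta*psi,
    phi(0) = 0 jointly with classical RK4 steps."""
    rhs = lambda y: np.array([R(y[0]), kappa * theta * y[0]])
    h, y = t / n, np.array([u, 0.0])
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[0], y[1]                                  # psi(t,u), phi(t,u)

u, t = -1.0, 1.5                                       # real u < 0: Laplace transform
psi_num, phi_num = riccati_rk4(u, t)
c = sig ** 2 * (1 - np.exp(-kappa * t)) / (2 * kappa)
psi_exact = u * np.exp(-kappa * t) / (1 - u * c)       # known closed-form solution
phi_exact = -(2 * kappa * theta / sig ** 2) * np.log(1 - u * c)
print(psi_num - psi_exact, phi_num - phi_exact)
```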

Let us give a precise definition of admissible parameters. We will denote by Sem^d the cone of symmetric positive semidefinite d × d matrices.

Definition 1.19. A set of parameters (a, α, b, β, c, γ, m, μ) is called admissible if:

1. a ∈ Sem^d, with a_{II} = 0;

2. α = (α_1, ..., α_m) with α_i ∈ Sem^d and α_{i,II} = α_{i,ii} I^{(i)} for all i ∈ I, where I^{(i)} has 1 in position (i,i) and 0 elsewhere;

3. b ∈ D;

4. β ∈ R^{d×d} such that β_{IJ} = 0 and β_{i,I(i)} ∈ R^{m−1}_+ for all i ∈ I, where I(i) = I \ {i};

5. c ∈ R_+;

6. γ ∈ R^m_+;

7. m is a Borel measure on D \ {0} satisfying

\[ M := \int_{D\setminus\{0\}} \left(\langle \chi_I(\xi), 1\rangle + \|\chi_J(\xi)\|^2\right) m(d\xi) < \infty; \tag{1.28} \]

8. μ = (μ_1, ..., μ_m), where every μ_i is a Borel measure on D \ {0} satisfying

\[ M_i := \int_{D\setminus\{0\}} \left(\langle \chi_{I(i)}(\xi), 1\rangle + \|\chi_{J(i)}(\xi)\|^2\right) \mu_i(d\xi) < \infty. \tag{1.29} \]

These conditions are too general for our purposes and, since we want a simpler formula for the generator, we now restrict Definition 1.19. In order to guarantee conservativeness of the process and existence of first moments for m and the μ_i, we strengthen the last definition, replacing conditions 5 to 8 as follows.

Definition 1.20. A set of parameters (a, α, b, β, c, γ, m, μ) is called admissible if:

1. a ∈ Sem^d, with a_{II} = 0;

2. α = (α_1, ..., α_m) with α_i ∈ Sem^d and α_{i,II} = α_{i,ii} I^{(i)} for all i ∈ I, where I^{(i)} has 1 in position (i,i) and 0 elsewhere;

3. b ∈ D, such that for each i ∈ I we have b_i − ∫ ξ_i μ_i(dξ) ≥ 0;

4. β ∈ R^{d×d} such that β_{IJ} = 0 and β_{ki} − ∫_{D\{0}} ξ_k μ_i(dξ) ≥ 0 for i ∈ I and k ∈ I \ {i};

5. c = 0;

6. γ = 0;

7. m is a Borel measure on D \ {0} satisfying

\[ \int_{D\setminus\{0\}} \left(\|\xi\| \wedge \|\xi\|^2 + \sum_{i\in I} \xi_i\right) m(d\xi) < \infty; \tag{1.30} \]

8. μ = (μ_1, ..., μ_m), where every μ_i is a Borel measure on D \ {0} satisfying

\[ \int_{D\setminus\{0\}} \left(\|\xi\| \wedge \|\xi\|^2 + \sum_{k\in I\setminus\{i\}} \xi_k\right) \mu_i(d\xi) < \infty. \tag{1.31} \]

With these new admissible parameters we may reformulate Theorem 1.17, following [3], obtaining the following form for the generator:

\[
\begin{aligned}
Af(x) ={}& \sum_{k,l=1}^d \left(a_{kl} + y^T\alpha_{I,kl}\right) \frac{\partial^2 f(x)}{\partial x_k\,\partial x_l} + (b + \beta x)^T \nabla f(x) \\
&+ \int_{D\setminus\{0\}} \left(f(x+\xi) - f(x) - \xi_J^T \nabla_J f(x)\right) m(d\xi) \\
&+ \sum_{i=1}^m x_i \int_{D\setminus\{0\}} \left(f(x+\xi) - f(x) - \xi_{J(i)}^T \nabla_{J(i)} f(x)\right) \mu_i(d\xi).
\end{aligned} \tag{1.32}
\]

This form is of the kind of Equation 1.20, with restrictions added to ensure that the process has state space R^m_{≥0} × R^n instead of R^{m+n}. Notice that we can avoid truncation functions because we assumed integrability of the measures m and μ_i. From Equation 1.32 we see that the intensity of jumps only depends on the components i = 1, ..., m, i.e. on the components in R^m_{≥0}.

1.4 Poisson Random Measures

The jump process associated to a Jump Diffusion is its discontinuous part, defined by X_t^d = Σ_{0<s≤t} ΔX_s, where ΔX_s = X_s − X_{s−}, for t ∈ [0,T]. In the setting of this thesis the process models the price of an underlying, so we may assume Σ_{0≤s≤t} |ΔX_s| < ∞ almost surely. Such processes are called finite activity jump diffusion processes, in the language of Lévy processes.

For the theory of Lévy Processes the reader may refer to [5] for a brief introduction or to the classical [13].

We now introduce a useful tool, the random measure of jumps of the process.

Definition 1.21. Let A ∈ B(R^n), with 0 ∉ Ā. The random measure of jumps of X is defined by

\[ \mu^X(\omega; t, A) = \#\left\{0 < s \le t : \Delta X_s(\omega) \in A\right\} = \sum_{0 < s \le t} \mathbf{1}_A(\Delta X_s(\omega)). \tag{1.33} \]

This measure counts, for each ω, the number of jumps performed by X up to time t with size in A.

Let us sketch the theory in the Lévy case, i.e. the case in which only state-independent jumps occur and the drift and Brownian coefficients are constant.

Remark 1.22. If X is a Lévy process, then

\[ \mu^X(t, A) - \mu^X(s, A) = \sum_{s < u \le t} \mathbf{1}_A(\Delta X_u(\omega)) \tag{1.34} \]

is a random variable independent of F_s, with stationary law. Thus, by the Watanabe characterization ([11]), (μ^X(t, A))_{t∈[0,T]} is a Poisson Process with intensity E[μ^X(1, A)].

In general we can prove the following.

Theorem 1.23. The set function A ↦ μ^X(ω; t, A) is a σ-finite measure on R^n \ {0} for each ω ∈ Ω. The set function A ↦ ν(A) := E[μ^X(1, A)] is a σ-finite measure on R^n \ {0}.

Proof. The first is a counting measure, hence the claim is trivial; consequently ν(A) := E[μ^X(1, A)] is clearly a Borel measure.

By definition of μ^X, we have

\[ \int_0^t \int_A z\,\mu(\omega; ds, dz) = \sum_{0<s\le t} \Delta X_s(\omega)\,\mathbf{1}_{\{\Delta X_s \in A\}}. \tag{1.35} \]

In the case of general Jump Diffusions, the process (μ([0,t], A))_{t∈[0,T]} is a stochastic intensity Poisson Process with intensity λ(X_t). Thus we will denote the former integral by

\[ \int_0^t \int_A z\,\mu(X_s, ds, dz). \tag{1.36} \]

This notation makes sense in a differential form. Let us consider a set of times t_0, ..., t_n and approximate the former integral by

\[ \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} \int_A z\,\mu(X_{t_i}, ds, dz). \tag{1.37} \]

When the size of a single interval t_{i+1} − t_i goes to zero, conditionally on F_{t_i} the process behaves on [t_i, t_{i+1}) like a Poisson Random Measure with intensity ν(x, A) = ν_0(A) + ⟨X_{t_i}, ν_1(A)⟩. This was observed in Remark 1.4.

To be precise, μ(X_t, dt, dz) is the measure Σ_{n∈N} δ_{T_n}(dt) × ν(dz), where δ_x(dt) is the Dirac delta measure concentrated at x and the T_n are the jump times of X.

Notice that the measure ν, called the Lévy measure in the Lévy case, gives the average number of jumps with size in a chosen set A during a unit time interval.

Denote by (Tn)n∈N the jump times of X. Then we have the following.

Proposition 1.24. Let ν be the Lévy measure of an Affine Jump Process, defined as in Theorem 1.23, and assume ν is a finite measure on R^n. Fixed a random jump time T(ω), the law of ΔX_T is the probability measure ν(dz)/ν(R^n \ {0}).

Proof. By definition we have

\[ \int_0^1 \int_A \mu(\omega; ds, dz) = \sum_{T_n \le 1} \mathbf{1}_A(\Delta X_{T_n}(\omega)) = \sum_{n\in\mathbb{N}} \mathbf{1}_{\{T_n < 1\}}\,\mathbf{1}_A(\Delta X_{T_n}). \tag{1.38} \]

Taking expectations we deduce

\[ \nu(A) = \sum_{n\in\mathbb{N}} \mathbb{P}\left[(T_n < 1) \wedge (\Delta X_{T_n} \in A)\right]. \tag{1.39} \]

However, we assumed that the jump sizes and the jump times are independent and that the jump sizes are identically distributed. Thus we have

\[ \nu(A) = \sum_{n\in\mathbb{N}} \mathbb{P}[T_n < 1]\,\mathbb{P}[\Delta X_{T_1} \in A] = \mathbb{P}[\Delta X_{T_1} \in A] \sum_{n\in\mathbb{N}} \mathbb{P}[T_n < 1] = \mathbb{P}[\Delta X_{T_1} \in A]\,\nu(\mathbb{R}^n\setminus\{0\}). \tag{1.40} \]

This calculation shows that the jump magnitude distribution is ν(dz)/ν(R^n \ {0}).

1.5 Series Expansion of the Characteristic Function

We now follow the approach used by D. Belomestny, J. Kampen and J. Schoenmakers in their article "Holomorphic transforms with application to affine processes" ([2]). The aim is to find a functional series expansion of the characteristic function, with ingredients taken from the infinitesimal generator. We will consider an open set X ⊂ R^n of possible initial states. The (generalized) Cauchy problem solved by the characteristic function p(t, x, u) = P_t f_u(x) is

\[ \begin{cases} \dfrac{\partial p}{\partial s}(s, x, u) = Ap(s, x, u), \\ p(0, x, u) = f_u(x), \end{cases} \qquad s \ge 0, \ x \in X \subset \mathbb{R}^n. \tag{1.41} \]

First we introduce the notion of analytic vectors.

Definition 1.25. A set F = {f_u}_{u∈I} is a set of analytic vectors for an operator A on an open set X if

• A^k f_u exists for all u ∈ I and k ∈ N;

• for every u ∈ I there exists R_u > 0 such that, for all x ∈ X,

\[ \lim_{k\to\infty}\, \sup_{r\ge k}\ \sqrt[r]{\frac{|A^r f_u(x)|}{r!}} \le R_u^{-1}, \tag{1.42} \]

where the limit is uniform on compact subsets of X.

Notice that if one writes the Cauchy problem 1.41 as

\[ \partial_t P_t f_u(x) = A P_t f_u(x), \tag{1.43} \]

the heuristic solution is

\[ P_t f_u(x) = \exp(tA) f_u(x) = \sum_{k=0}^{\infty} \frac{(tA)^k}{k!}\, f_u(x). \]

So, clearly, Definition 1.25 is designed to guarantee some kind of convergence of this operator series via Cauchy's root test.
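For a Lévy process (an affine process with no state dependence) one has A f_u = F(u) f_u, so the heuristic exponential series collapses to a scalar one and its convergence can be observed directly. The following is a small sketch of our own, using the Black-Scholes Lévy exponent with illustrative parameters.

```python
import cmath

# Black-Scholes log-price: A f_u = F f_u with F = -i*u*sig^2/2 - sig^2*u^2/2
# evaluated at iu, so exp(tA) f_u reduces to the scalar series sum_k (tF)^k / k!.
sig, t, u = 0.4, 2.0, 3.0
F = -1j * u * sig ** 2 / 2 - sig ** 2 * u ** 2 / 2

partial, term = 0.0 + 0.0j, 1.0 + 0.0j
errors = []
for k in range(30):
    partial += term                    # partial sum through order k
    term *= t * F / (k + 1)            # next term (tF)^(k+1) / (k+1)!
    errors.append(abs(partial - cmath.exp(t * F)))
print(errors[5], errors[25])           # the truncation error decays rapidly
```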

Since we are interested in the characteristic function P_t f_u, where f_u(x) = exp(u^T x), we will state a condition for the family {exp(u^T x) | u ∈ C^n} to be a family of analytic vectors.

Proposition 1.26. Suppose the generator A is a differential operator of the form Af(x) = Σ_α a_α(x) ∂^α f(x), where the coefficients are affine in x. If the set X of starting values is bounded and the series

\[ \sum_{\alpha} a_\alpha(x)\,u^\alpha \tag{1.44} \]

converges absolutely for all u ∈ C^n, then the set {exp(u^T x) | u ∈ C^n} is a family of analytic vectors for the operator A.

Notice that the affine generator of Equation 1.20 is of the required form, since the following holds:

\[ \int_{D\setminus\{0\}} \left(f(x+z) - f(x) - z^T\nabla f(x)\right)\nu(x, dz) = \sum_{|\alpha|\ge 2} \frac{1}{\alpha!}\,\partial_x^\alpha f(x) \int z^\alpha\,\nu(x, dz) = \sum_{|\alpha|\ge 2} \frac{m_\alpha(x)}{\alpha!}\,\partial_x^\alpha f(x), \tag{1.45} \]

where m_α(x) is the α-moment of the measure ν(x, dz).

The following Theorem provides the functional series expansion required. The proof can be found in [2].


Theorem 1.27. Let F be a set of analytic vectors and let u ∈ I be fixed. Let p be the solution of the Cauchy problem (1.41). Then the following statements are equivalent:

1. There exists a constant R_u > 0 such that, for each x ∈ X, the map s ↦ p(s, x, u) has a holomorphic extension to the domain

\[ G_{R_u} = \{z : |z| < R_u\} \cup \{z : \operatorname{Re} z > 0 \ \wedge \ |\operatorname{Im} z| < R_u\}. \tag{1.46} \]

2. There exists η_u > 0 such that for each x ∈ X the following series representation holds:

\[ p(s, x, u) = \sum_{k=0}^{\infty} q_k^{(\eta_u)}(x, u)\,\left(1 - e^{-\eta_u s}\right)^k, \quad 0 \le s < \infty, \qquad q_k^{(\eta_u)}(x, u) = e^{u^T x} \sum_{|\gamma| \le k} h_{k,\gamma}(u; \eta_u)\,x^\gamma. \tag{1.47} \]

The coefficients h_{r,γ} of the expansion 1.47 can be calculated, when the form of the affine generator is known, with the following recursion:

\[
(r+1)\,h_{r+1,\gamma} = \sum_{|\beta| \le r - |\gamma|} \eta_u^{-1}\binom{\gamma+\beta}{\beta}\,h_{r,\gamma+\beta}\,b^0_\beta + \sum_{|\kappa|=1,\ \kappa\le\gamma}\ \sum_{|\beta| \le r+1-|\gamma|} \eta_u^{-1}\binom{\gamma-\kappa+\beta}{\beta}\,h_{r,\gamma-\kappa+\beta}\,b^1_{\beta,\kappa} + r\,h_{r,\gamma},
\]
\[
b_\beta(x, u) = i^{-|\beta|}\,\partial_u^\beta\,\frac{Af_u(x)}{f_u(x)} = b^0_\beta(u) + \sum_{\kappa,\,|\kappa|=1} b^1_{\beta,\kappa}(u)\,x^\kappa. \tag{1.48}
\]

We would thus like to write down sufficient conditions for a holomorphic extension of p(s, x, u) to a strip around the positive real axis. We will use the following criterion for affine processes, proved in [2].

Theorem 1.28. Let X be a bounded domain. Assume φ(s, u), ψ(s, u) are solutions of the generalized Riccati equations of Theorem 1.17 and assume ψ(s, u) remains bounded as s → ∞. Then there exists R_u such that for any t ≥ 0 the map s ↦ p(t+s, x, u), 0 ≤ s < R_u, has a holomorphic extension to the disc {s ∈ C : |s| < R_u}. Moreover it holds:

\[ p(t+s, x, u) = \sum_{k=0}^{\infty} \frac{s^k}{k!}\,A^k p(t, \cdot, u)(x), \qquad |s| < R_u. \tag{1.49} \]

Theorem 1.28 guarantees that, under these hypotheses, one may extend p(t, x, u) holomorphically on balls of fixed radius around each point of the positive real axis; the extensions are locally coherent in view of Formula 1.49.

In order to guarantee the hypotheses of Theorem 1.28 we will use the following convergence criterion, interesting in itself, borrowed from the article by P. Jin, J. Kremer and B. Rudiger, "Existence of limiting distribution for affine processes" ([3]). In the following Theorem we denote by M_d^− the space of real d × d matrices whose eigenvalues have strictly negative real parts. Recall that M ∈ M_d^− is equivalent to ‖exp(tM)‖ → 0 as t → ∞.

Theorem 1.29. Let X be a conservative affine process in D = R^m_+ × R^n with admissible parameters (a, α, b, β, c, γ, m, μ). If

\[ \beta \in M_d^- \quad \text{and} \quad \int_{\|\xi\|>1} \log(\|\xi\|)\,m(d\xi) < \infty, \tag{1.50} \]

then X_t converges weakly to a distribution π, independent of X_0 = x, with characteristic function

\[ \int_D e^{u^T x}\,\pi(dx) = \exp\left(\int_0^\infty F(\psi(s, u))\,ds\right), \quad u \in \mathcal{U}. \tag{1.51} \]

Since under our hypotheses m has finite first moment, conditions 1.50 clearly reduce to β ∈ M_d^−. In this case, by Theorem 1.29 and the Lévy continuity theorem, we have:

\[ \lim_{t\to\infty} \phi(t, u) = \int_0^\infty F(\psi(s, u))\,ds; \qquad \lim_{t\to\infty} \psi(t, u) = 0. \tag{1.52} \]
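Theorem 1.29 can be sanity-checked on the CIR process, whose limiting law is a Gamma distribution: the quantity exp(∫_0^∞ F(ψ(s,u)) ds), computed with the closed-form ψ, must match the Gamma Laplace transform. This is a sketch of our own with illustrative parameters.

```python
import numpy as np

# CIR process dV = kappa*(theta - V)dt + sig*sqrt(V)dB: F(u) = kappa*theta*u,
# and the limiting law is Gamma with Laplace transform
# (1 - u*sig^2/(2*kappa))^(-2*kappa*theta/sig^2).
kappa, theta, sig, u = 2.0, 0.04, 0.3, -0.5      # real u < 0: Laplace-transform side

s = np.linspace(0.0, 40.0, 400001)               # exp(-kappa*s) is negligible at s = 40
c = sig ** 2 * (1 - np.exp(-kappa * s)) / (2 * kappa)
psi = u * np.exp(-kappa * s) / (1 - u * c)       # closed-form Riccati solution for CIR
y = kappa * theta * psi                          # integrand F(psi(s, u))
integral = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(s)))   # trapezoid rule

limit_lt = np.exp(integral)                      # exp(int_0^inf F(psi) ds)
gamma_lt = (1 - u * sig ** 2 / (2 * kappa)) ** (-2 * kappa * theta / sig ** 2)
print(limit_lt, gamma_lt)
```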

We end this section with some useful regularity results proved in [1], which we will use later.

Theorem 1.30. Suppose X is a conservative regular affine process, and let t ∈ R_+, k ∈ N and 1 ≤ l ≤ d. If

\[ \frac{\partial^{2k} \phi(t, i\lambda)}{\partial \lambda_l^{2k}}\bigg|_{\lambda=0} \]

exists, then E^x[(X_t^l)^{2k}] < ∞ for all x ∈ D.

Lemma 1.31. Let k ∈ N. If F and R^Y_i are in C^k(U) for all i ∈ I, then φ(t, ·) and ψ(t, ·) are in C^k(U).

1.6 Option Pricing using the Characteristic Function

1.6.1 Carr-Madan Formula

We now present a pricing formula first used by Carr and Madan in [6]. We will truncate the expansion of Theorem 1.27 and plug it into this formula in order to price options even when the characteristic function of the model is not known in explicit form. Denote by S_t the price of an underlying at time t, and by s_t = log(S_t) its logarithm. Suppose s_T has a risk-neutral density q_T(s), and denote by P* the martingale probability (with E* the associated expectation). The corresponding characteristic function is

\[ \phi_T(u) = \int_{-\infty}^{+\infty} e^{ius}\,q_T(s)\,ds. \tag{1.54} \]

Recall the no-arbitrage price formula for a call with maturity T and strike price K:

\[ C_T(K) = \exp(-rT)\,\mathbb{E}^*\big[(S_T - K)^+\big]. \tag{1.55} \]

If we set k = log(K), we may write

\[ C_T(k) = \int_k^{+\infty} \exp(-rT)\,(e^s - e^k)\,q_T(s)\,ds. \tag{1.56} \]

We would like to perform a Fourier inversion of C_T(k), in order to write it in terms of the known characteristic function φ_T. Unfortunately C_T(k) is not square-integrable: one can see that C_T(k) tends to S_0 as k approaches −∞. Thus we consider a modified call price c_T(k) = exp(αk) C_T(k); we will later choose an appropriate α > 0 so that c_T(k) ∈ L². Calculating the Fourier transform ψ_T(v) = ∫_{−∞}^{+∞} e^{ivk} c_T(k) dk, we get an analytical expression in terms of φ_T:

\[ \psi_T(v) = \frac{e^{-rT}\,\phi_T(v - (\alpha+1)i)}{\alpha^2 + \alpha - v^2 + i(2\alpha+1)v}. \tag{1.57} \]

Finally, we plug this formula into the inverse transform

\[ C_T(k) = \frac{\exp(-\alpha k)}{2\pi} \int_{-\infty}^{+\infty} e^{-ivk}\,\psi_T(v)\,dv. \tag{1.58} \]

For the modified call value c_T(k) to be square-integrable, one may observe that, c_T being a bounded function, it is sufficient that it be integrable, or equivalently that ψ_T(0) < ∞. Using Equation 1.57 we can rewrite the condition as φ_T(−(α+1)i) < ∞, hence equivalently

\[ \mathbb{E}^*\big[S_T^{\alpha+1}\big] < \infty. \tag{1.59} \]

Notice that for Affine Processes this condition can be verified using Theorem 1.30.

The complete formula for option pricing is thus:

\[ C_T(k) = \frac{\exp(-\alpha k)}{2\pi} \int_{-\infty}^{\infty} e^{-ivk}\,\frac{e^{-rT}\,\phi_T(v - (\alpha+1)i)}{\alpha^2 + \alpha - v^2 + i(2\alpha+1)v}\,dv. \tag{1.60} \]

When we perform a numerical approximation of the integral we will need to choose finite extremes of integration, so the decay properties of the integrand at infinity will be crucial. The integrand in Equation 1.60 is O(φ_T(v − (α+1)i)/v²) as v → ∞.

In the following we will calculate the characteristic function of Y_t = log(S_t/S_0) − rt; denote this function by ϕ_t. Then we have

\[ \phi_T(u) = \exp\big(iu(\log(S_0) + rT)\big)\,\varphi_T(u). \tag{1.61} \]

Thus we will compute

\[
\begin{aligned}
C_T(k) &= \frac{\exp(-\alpha k)}{2\pi} \int_{-\infty}^{\infty} e^{-ivk}\,\frac{e^{-rT}\,e^{i(v-(\alpha+1)i)(\log(S_0)+rT)}\,\varphi_T(v-(\alpha+1)i)}{\alpha^2+\alpha-v^2+i(2\alpha+1)v}\,dv \\
&= \frac{S_0 \exp(-\alpha k)}{2\pi} \int_{-\infty}^{\infty} e^{-ivk + i(v-\alpha i)(\log(S_0)+rT)}\,\frac{\varphi_T(v-(\alpha+1)i)}{\alpha^2+\alpha-v^2+i(2\alpha+1)v}\,dv.
\end{aligned} \tag{1.62}
\]
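Formula 1.60 can be tested in the Black-Scholes model, where φ_T is Gaussian and the call price is known in closed form. The sketch below is our own (all parameter values are illustrative) and integrates over the half line v > 0, using the conjugate symmetry of the integrand.

```python
import numpy as np
from math import erf, exp, log, pi, sqrt

# Carr-Madan price under Black-Scholes, checked against the closed formula.
S0, K, r, sigma, T, alpha = 100.0, 100.0, 0.05, 0.2, 1.0, 1.5

def phi(u):
    """Characteristic function of log S_T under Black-Scholes (complex u)."""
    mu = log(S0) + (r - 0.5 * sigma ** 2) * T
    return np.exp(1j * u * mu - 0.5 * sigma ** 2 * T * u ** 2)

k = log(K)
v = np.linspace(1e-8, 200.0, 200001)
psi = exp(-r * T) * phi(v - (alpha + 1) * 1j) \
    / (alpha ** 2 + alpha - v ** 2 + 1j * (2 * alpha + 1) * v)
integrand = np.real(np.exp(-1j * v * k) * psi)
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(v))
carr_madan = exp(-alpha * k) / pi * integral     # 1/pi: half-line by symmetry

N = lambda x: 0.5 * (1 + erf(x / sqrt(2)))       # standard normal cdf
d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
black_scholes = S0 * N(d1) - K * exp(-r * T) * N(d2)
print(carr_madan, black_scholes)
```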

1.6.2 An alternative Fourier-based formula

In the previous section we modified the call price formula by multiplication by an exponential in order to get an L² function; this solved the problem that C_T(k) → S_0 as k → −∞. A different modification of the formula may be obtained by subtracting a function of k which tends to S_0 as k → −∞ and vanishes at +∞. We define a new modified call value:

\[ D(k) = C_T(k) - \left(S_0 - e^{-rT+k}\right)^+. \tag{1.63} \]

We have to prove that D is square-integrable.

Proposition 1.32. The function D(k) defined in Equation 1.63 is in L²(R).

Proof. Since D vanishes at +∞ and only has a discontinuity at k = log(S_0) + rT, it is a bounded function; hence we only need to prove that it is integrable. Thus our claim is

\[ \int_{-\infty}^{+\infty} |D(k)|\,dk < \infty. \tag{1.64} \]

Let us denote by J the discontinuity point and split the integration between the intervals (−∞, J) and (J, +∞). For the integral over (J, +∞) we have

\[ \int_J^{\infty} dk \int_k^{\infty} e^{-rT}(e^s - e^k)\,q_T(s)\,ds = e^{-rT}\int_J^{\infty} ds\,q_T(s)\int_J^{s}(e^s - e^k)\,dk = e^{-rT}\int_J^{\infty} q_T(s)\left(se^s - Je^s - e^s + e^J\right)ds. \tag{1.65} \]

This quantity is finite provided that s_T = log(S_T) has a finite exponential moment and E[S_T log S_T] < ∞.

The second integral reads

\[ \int_{-\infty}^{J} dk \left( \int_k^{\infty} e^{-rT}(e^s - e^k)\,q_T(s)\,ds - S_0 + e^{k-rT} \right). \tag{1.66} \]

Consider the integrand in dk: for k < J it may be written as

\[ \mathbb{E}\big[e^{-rT}(S_T - e^k)^+\big] - \left(S_0 - e^{k-rT}\right) = \mathbb{E}\big[e^{-rT}(S_T - e^k)\mathbf{1}_{\{S_T > e^k\}}\big] - \mathbb{E}\big[S_T e^{-rT} - e^{k-rT}\big] = \mathbb{E}\big[\mathbf{1}_{\{S_T \le e^k\}}\left(e^{k-rT} - S_T e^{-rT}\right)\big] = \int_{-\infty}^{k} e^{-rT}(e^k - e^s)\,q_T(s)\,ds. \tag{1.67} \]

Let us now integrate in dk:

\[ \int_{-\infty}^{J} dk \int_{-\infty}^{k} e^{-rT}(e^k - e^s)\,q_T(s)\,ds. \tag{1.68} \]

One may now conclude by the same calculation as before.

A new option evaluation formula may be obtained by summing and subtracting (S_0 − e^{k−rT})^+ to the function C_T(k), which implies:

C_T(k) = (S_0 − Ke^{−rT})^+ + (S_0/2π) ∫_{−∞}^{+∞} [1 − ϕ_T(z − i)] / [z(z − i)] exp(−iz ln(Ke^{−rT}/S_0)) dz. (1.69)

Recall k = log(K). This formula may not guarantee the same decay at infinity as Formula 1.60: in fact we will see in many cases that its behaviour at infinity is different. However, Equation 1.69 lends itself to useful generalizations when the characteristic function of Y_t is not explicitly known, which could improve the approximation.
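Formula 1.69 can be checked directly in the Black-Scholes case; note the slow, roughly 1/z², decay of the integrand, which forces a very large truncation domain (a sketch of ours, with illustrative parameters):

```python
import numpy as np
from math import log, sqrt, exp, erf, pi

def bs_call(S0, K, r, sigma, T):
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return S0 * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

def phi_Y(u, sigma, T):
    # cf of Y_T = log(S_T/S_0) - rT = -sigma^2 T/2 + sigma W_T (Black-Scholes)
    return np.exp(-0.5j * u * sigma ** 2 * T - 0.5 * sigma ** 2 * T * u ** 2)

def call_via_169(S0, K, r, sigma, T, L=20000.0, n=1_000_000):
    # midpoint grid on (0, L]; the integrand is Hermitian, so the integral
    # over [-L, L] equals twice the real part of the integral over (0, L]
    h = L / n
    z = (np.arange(n) + 0.5) * h
    m = log(K * exp(-r * T) / S0)
    f = (1 - phi_Y(z - 1j, sigma, T)) / (z * (z - 1j)) * np.exp(-1j * z * m)
    integral = 2 * np.sum(np.real(f)) * h
    return max(S0 - K * exp(-r * T), 0.0) + S0 / (2 * pi) * integral

price = call_via_169(100, 100, 0.05, 0.2, 1.0)
print(price, bs_call(100, 100, 0.05, 0.2, 1.0))  # close, but only at 1e-2 level
```

Even with L = 20000 the truncation error is only of order 10⁻³, which motivates the subtraction strategy of the next paragraph.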

A strategy to generalize this approach goes as follows:

• select a model whose characteristic function is explicitly known; call this function ϕ^known;

• add and subtract Formula 1.69 for this model to the previous formula.

Thus the new formula is the following:

C_T(k) = (S_0 − Ke^{−rT})^+
+ (S_0/2π) ∫ [1 − ϕ_T^known(z − i)] / [z(z − i)] exp(−iz ln(Ke^{−rT}/S_0)) dz
+ (S_0/2π) ∫ [ϕ_T^known(z − i) − ϕ_T(z − i)] / [z(z − i)] exp(−iz ln(Ke^{−rT}/S_0)) dz. (1.70)

The advantage here is that the first integral can be computed with high speed at the desired accuracy on a large interval of integration, while the second one has good decay properties provided that

max( |ϕ_T^known(z − i)|, |ϕ_T(z − i)| ) → 0 (1.71)

when z → ∞. We will empirically see that this is the case. In order to increase the weight of the first integral in the formula we will need to choose a model whose characteristic function ϕ_t^known behaves similarly to that of the model at hand.
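The following sketch (ours) illustrates the idea of Formula 1.70 in a toy setting where both the target and the known model are Black-Scholes with different volatilities: since the strike term plus the first integral reproduce exactly the known model's call price (by Formula 1.69 applied to the known model), only the rapidly decaying difference integral needs to be discretized, and a small domain suffices:

```python
import numpy as np
from math import log, sqrt, exp, erf, pi

def bs_call(S0, K, r, sigma, T):
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return S0 * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

def phi_Y(u, sigma, T):
    # cf of Y_T = log(S_T/S_0) - rT in the Black-Scholes model
    return np.exp(-0.5j * u * sigma ** 2 * T - 0.5 * sigma ** 2 * T * u ** 2)

def call_control_variate(S0, K, r, T, sig_target, sig_known, L=60.0, n=120_000):
    # (S0 - K e^{-rT})^+ plus the first integral of (1.70) equals the known
    # model's call price, so only the difference integral is discretized
    h = 2 * L / n
    z = -L + (np.arange(n) + 0.5) * h          # midpoint grid avoids z = 0
    m = log(K * exp(-r * T) / S0)
    diff = phi_Y(z - 1j, sig_known, T) - phi_Y(z - 1j, sig_target, T)
    integral = np.sum(np.real(diff / (z * (z - 1j)) * np.exp(-1j * z * m))) * h
    return bs_call(S0, K, r, sig_known, T) + S0 / (2 * pi) * integral

print(call_control_variate(100, 100, 0.05, 1.0, 0.25, 0.2),
      bs_call(100, 100, 0.05, 0.25, 1.0))  # the two values should be close
```

Compared with the direct evaluation of Formula 1.69, the truncation domain shrinks from tens of thousands to L = 60, because both characteristic functions decay at a Gaussian rate.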

Chapter 2

Affine Models for Option Pricing

This chapter defines the models we will use to analyze the evolution of the price of an underlying, in order to end up with a formula for option pricing. For the Heston Model a closed formula is known for the characteristic function; thus, using the results of Section 1.5, one may price options with a brief computation. In more general models, however, we need to truncate the series expansion of Section 1.4, which introduces additional computation time and error.

2.1 Merton model

In 1976 Merton included jumps in the Black-Scholes model and derived a formula for call and put options in the following model:

S_t = exp(rt + Y_t);
Y_t = γt + σW_t + J_t. (2.1)

In this formula J_t is a compound Poisson process, which can be represented as

J_t = Σ_{k=1}^{N(t)} U_k,

where the U_k are i.i.d. random variables and N(t) is a Poisson point process. Hence in a time interval [s, t] the logarithm of S_t performs a Poisson number of jumps with independent sizes governed by the law L(U_1). In this case the logarithm of the price process is the sum of a geometric Brownian motion and a compound Poisson process. Its characteristic function has the form

φ_t(z) = e^{izγt} E[e^{izσW_t}] E[e^{izJ_t}]
       = exp( izγt − (z²σ²/2) t + λt ∫ (e^{izy} − 1) µ(dy) ), (2.2)

where µ is the jump probability measure.
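For Gaussian jump sizes U_k ∼ N(µ_J, δ²) the characteristic function is fully explicit, and the drift γ = −σ²/2 − λ(e^{µ_J+δ²/2} − 1) makes exp(Y_t) a martingale. A short sketch (ours; the parameter values are illustrative) verifying φ_t(0) = φ_t(−i) = 1:

```python
import numpy as np

def phi_merton(z, t, gamma, sigma, lam, muJ, deltaJ):
    # characteristic function of Y_t = gamma t + sigma W_t + J_t,
    # with jumps U_k ~ N(muJ, deltaJ^2) arriving with intensity lam
    jump_cf = np.exp(1j * z * muJ - 0.5 * deltaJ ** 2 * z ** 2)
    return np.exp(1j * z * gamma * t - 0.5 * sigma ** 2 * z ** 2 * t
                  + lam * t * (jump_cf - 1))

# drift making exp(Y_t) a martingale: gamma = -sigma^2/2 - lam*(E[e^U] - 1)
sigma, lam, muJ, deltaJ, t = 0.2, 3.0, -0.1, 0.15, 2.0
gamma = -0.5 * sigma ** 2 - lam * (np.exp(muJ + 0.5 * deltaJ ** 2) - 1)
print(abs(phi_merton(-1j, t, gamma, sigma, lam, muJ, deltaJ)))  # ~ 1
```

The condition φ_t(−i) = 1 is the same martingale normalization that reappears for the Heston model below.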

2.2 Heston Model

One of the most popular stochastic volatility models is the Heston Model, originally defined in "A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options" by Steven L. Heston ([8]).

Here the price process (S_t)_{t∈[0,T]} is modeled as S_t = exp(rt + Y_t), where r is a constant continuously compounded rate, while Y_t = γt + X_t^1, γ ∈ R. The process X_t is a solution of the following stochastic differential equation, where B_t = (B_t^1, B_t^2) is a Brownian motion and α, σ, κ, ρ, θ are real parameters:

dX_t^1 = −(1/2) α² X_t^2 dt + α √(X_t^2) dB_t^1,   X_0^1 = 0;
dX_t^2 = κ(θ − X_t^2) dt + σ √(X_t^2) (ρ dB_t^1 + √(1 − ρ²) dB_t^2),   X_0^2 = θ. (2.3)

Each of the parameters α, σ, κ, ρ, θ has a heuristic interpretation and is given a name.

• α is the rate of return of the asset;

• ρ: notice that ρB_t^1 + √(1 − ρ²)B_t^2 is a Brownian motion, due to Lévy's characterization, which has correlation ρt with B_t^1;

• σ is the volatility of the volatility;

• θ is the a.s. limit of X^2, the asymptotic volatility. The proof may be found in the article by Cox, Ingersoll and Ross, "A Theory of the Term Structure of Interest Rates" ([14]). It is often called the long time volatility;

• κ measures the speed of convergence of X_t^2 to θ. It is called the rate of mean-reversion.

Notice that, since X^2 appears in Equation 2.3 under a square root, one has to ensure that it is almost surely positive. This is accomplished through a condition often called the Feller condition, which is

2κθ > σ². (2.4)

Comparing the Heston Model with the classical Black-Scholes Model, the main advantage is that it models the volatility as a stochastic process, while Black-Scholes assumes it is a constant. This is more realistic, according to data from real financial markets.

Notice that this is an affine model in the sense that it satisfies an equation of the form of 1.18. Thus its characteristic function can be found by solving the Riccati Equations 1.25. Those equations may be solved explicitly in this case. The following result may be found in Lord and Kahl's article "Complex logarithms in Heston-like models" ([15]).

Proposition 2.1. The first component X_t^1 of the Heston model has the following characteristic function:

p(t, (0, x), (z, 0)) = exp( A(z, t) + B(z, t)θ ), where

A(z, t) = (θκ/σ²) [ (a − d)t − 2 log( (e^{−dt} − g) / (1 − g) ) ];
B(z, t) = ((a + d)/σ²) (1 − e^{dt}) / (1 − g e^{dt});
a = κ − izασρ;
d = √( a² + α²σ²(iz + z²) );
g = (a + d)/(a − d). (2.5)
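The parametrization above can be cross-checked numerically: setting v_t = α²X_t^2, the pair (X^1, v) is a standard Heston model with vol-of-vol ασ and long-run variance α²θ, so Proposition 2.1 must agree with the usual "little Heston trap" form of the characteristic function. A sketch of ours (parameter values borrowed from Table 3.1):

```python
import numpy as np

def cf_thesis(z, t, alpha, kappa, sigma, theta, rho):
    # Proposition 2.1, as stated above
    a = kappa - 1j * z * alpha * sigma * rho
    d = np.sqrt(a ** 2 + alpha ** 2 * sigma ** 2 * (1j * z + z ** 2))
    g = (a + d) / (a - d)
    A = theta * kappa / sigma ** 2 * ((a - d) * t
        - 2 * np.log((np.exp(-d * t) - g) / (1 - g)))
    B = (a + d) / sigma ** 2 * (1 - np.exp(d * t)) / (1 - g * np.exp(d * t))
    return np.exp(A + B * theta)

def cf_trap(u, t, alpha, kappa, sigma, theta, rho):
    # standard Heston cf of the log-price ("little Heston trap" form),
    # with variance process v_t = alpha^2 X_t^2 and v_0 = vbar = alpha^2 theta
    sig, vbar = alpha * sigma, alpha ** 2 * theta
    b = kappa - 1j * u * rho * sig
    d = np.sqrt(b ** 2 + sig ** 2 * (1j * u + u ** 2))
    g2 = (b - d) / (b + d)
    C = kappa * vbar / sig ** 2 * ((b - d) * t
        - 2 * np.log((1 - g2 * np.exp(-d * t)) / (1 - g2)))
    D = (b - d) / sig ** 2 * (1 - np.exp(-d * t)) / (1 - g2 * np.exp(-d * t))
    return np.exp(C + D * vbar)

for u in (0.5, 1.0, 3.0):
    assert abs(cf_thesis(u, 1.0, 1.0, 1.5, 0.3, 0.05, -0.3)
               - cf_trap(u, 1.0, 1.0, 1.5, 0.3, 0.05, -0.3)) < 1e-8
```

The two forms differ only in how the complex logarithm is arranged, which is exactly the branch-cut issue discussed in [15].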

In order to price options, we have to know the stochastic differential equation satisfied by the asset under the martingale-equivalent probability. Hence we have to impose that exp(−rt)S_t = exp(Y_t) is a martingale. This implies S_0 = E[e^{−rt}S_t]. The condition reads p(t, (0, θ), (−i, 0)) = 1 and is satisfied by construction, as can be easily seen substituting z = −i in Equation 2.5.

2.3 Heston Model with State-dependent Jumps

In this section we generalize the Heston Model by adding state-dependent jumps. The new equations are meant to describe a situation where the price of an underlying has more frequent jumps as the volatility increases. This model could describe a situation of crisis, where higher volatility in the market implies more frequent negative jumps, due to a fall in the demand for a certain underlying (asset, bond, etc.).

Thus we consider two measures, which for simplicity are assumed to be absolutely continuous with respect to Lebesgue measure, µ_0(y)dy and µ_1(y)dy, which respectively represent the distributions of the state-independent and state-dependent jumps. The state-dependent intensity is assumed to be a linear function of the second component X^2, the volatility. So the jump intensities are respectively λ_0 and X_t^2 λ_1.

The form of the equation is thus the following:

dX_t^1 = [ −λ_0 a_0 − (λ_1 a_1 + (1/2)α²) X_t^2 ] dt + α √(X_t^2) dB_t^1
        + ∫_R y ( N(X_{t−}^2, dt, dy) − λ_0 µ_0(y) dy dt − X_t^2 λ_1 µ_1(y) dy dt );
dX_t^2 = κ(θ − X_t^2) dt + σ √(X_t^2) (ρ dB_t^1 + √(1 − ρ²) dB_t^2). (2.6)

In this equation we denoted by a_0 and a_1 the following quantities:

a_0 = ∫ (exp(z) − z − 1) µ_0(z) dz;
a_1 = ∫ (exp(z) − z − 1) µ_1(z) dz. (2.7)

Notice that this is the same equation as 2.3, with a compensated jump term added to the first component. Another difference is the term in dt, the drift: it is modified to ensure that the process exp(X_t^1) is a martingale, which is what we now prove.

Proposition 2.2. If (X_t^1, X_t^2)_{t∈[0,T]} is a strong solution of Equation 2.6, then (exp(X_t^1))_{t∈[0,T]} is a martingale with expected value E[exp(X_t^1)] = 1.

Proof. We need to prove E[exp(X_t^1) − exp(X_s^1) | F_s] = 0 for each 0 ≤ s < t ≤ T.

Let us apply Itô's Lemma 1.11 to Equation 2.6 to evaluate exp(X_t^1) − exp(X_s^1). We obtain the following:

exp(X_t^1) − exp(X_s^1) = ∫_s^t exp(X_{u−}^1) dX_u^1 + (1/2) ∫_s^t exp(X_{u−}^1) α² X_u^2 du
+ Σ_{s≤u≤t} ( exp(X_u^1) − exp(X_{u−}^1) − exp(X_{u−}^1) ΔX_u^1 ). (2.8)

We may ignore the Brownian integrals, which are martingales. Let us take a look at the jump part of the first integral, which reads Σ_{s≤u≤t} exp(X_{u−}^1) ΔX_u^1: this term cancels with the last one in the sum above. Thus what remains is the jump process Σ_{s≤u≤t} (exp(X_u^1) − exp(X_{u−}^1)) plus a time integral. The time integral must equal the compensator of this jump part of (exp(X_u^1))_{s≤u≤t}. Since the compensator is the expected value of the jump width, it is ∫ (exp(z) − 1) µ(z) dz, which is the required term. Notice that, if X^1 jumps by z, then exp(X^1) increases by (exp(z) − 1) times its value.

In this model, the jump measure is

ν(x, dz) = ν_0(dz) + x_2 ν_1(dz) = δ_0(dz_2) ⊗ ( λ_0 µ_0(z_1) dz_1 + λ_1 x_2 µ_1(z_1) dz_1 ), (2.9)

where δ_0(dz_2) is the probability concentrated in 0, because the volatility X^2 is assumed to perform no jumps.

In order to use the Carr-Madan Formula for option pricing, we need to calculate the expansion 1.27, thus we should write the generator. This is done by means of Formula 1.20.


Proposition 2.3. Let A be the generator of a process satisfying Equation 2.6 and f ∈ C²(R²). Then for all x = (x_1, x_2) we have

Af(x) = [ −λ_0 a_0 − (λ_1 a_1 + (1/2)α²) x_2 ] ∂_{x_1} f(x) + κ(θ − x_2) ∂_{x_2} f(x)
+ (1/2) α² x_2 ∂_{x_1 x_1} f(x) + ασρ x_2 ∂_{x_1 x_2} f(x) + (1/2) σ² x_2 ∂_{x_2 x_2} f(x)
+ ∫_R ( f(x_1 + z, x_2) − f(x) − z ∂_{x_1} f(x) ) ( λ_0 µ_0(z) + λ_1 x_2 µ_1(z) ) dz. (2.10)

2.4 Bates Model

A variation of the Heston Model, which adds compensated compound Poisson jumps to the first component in 2.3, is the Bates Model ([10]). We set α = 1 in Equation 2.3. This can be seen as a merge between the Merton and Heston Models. Let S_t = exp(rt + X_t^1). The equations for X = (X^1, X^2) are the following:

dX_t^1 = ( −λk̄ − (1/2) X_t^2 ) dt + √(X_t^2) dB_t^1 + dZ_t,   X_0^1 = 0;
dX_t^2 = κ(θ − X_t^2) dt + σ √(X_t^2) (ρ dB_t^1 + √(1 − ρ²) dB_t^2),   X_0^2 = θ, (2.11)

where Z_t is a compound Poisson process with intensity λ and jump law ν with mean k̄.

This is no longer a Lévy process; nevertheless a closed form of the characteristic function can be found. In fact, X_t^1 is the sum of two independent processes for which we already know the characteristic function. This happens because the stochastic differential equation for X^1 only involves a dependence on X^2. Thus the characteristic function for the Bates Model is

exp( A(z, t) + B(z, t)θ − izλk̄t + λt ∫ (e^{izu} − 1) ν(du) ), (2.12)

where A and B are defined in Proposition 2.1 (with α = 1).

2.5 Generalized Merton Models

In this section we will follow the approach used by C. Bayer and J. Schoenmakers in "Option Pricing in Affine Generalized Merton Models" ([9]). The authors define a generalization of the Merton Model 2.1. Their definition is the following:

Y_t = γt + H_t + X_t^1, (2.13)

where H is the first component of a log-Heston Model, while X_t^1 is the first component of a jump diffusion process, with X_0^1 = H_0 = 0. One could in principle absorb H into X^1. However, the advantage of the Heston Model is the closed form of its characteristic function, which reduces the error in the calculation of the characteristic function of X^1, which may only be approximated through expansions of the form 1.27.

We will consider the case where X is a Heston State-Dependent Jump Model. In this case we choose γ = 0, since exp(H_t + X_t^1) is already a martingale: if X and Y are independent martingales with respect to their natural filtrations, then XY is a martingale with respect to its natural filtration.

This error-reducing approach may simplify calculations, but it has disadvantages in some cases: the variance of H may be high while the variance of X is low, which means the jump intensity is low while the variance is high, a situation we want to avoid. Thus one should choose H with a smaller variance, i.e. with small θ.


Chapter 3

Numerical Experimentation

This chapter is meant to extend the first guide to a numerical implementation of the method in [2] (which we presented in Section 1.5) provided by Bayer and Schoenmakers in [9].

First of all we verify that the sufficient conditions of Section 1.5 are satisfied in the case of state-dependent negative jumps with exponential law.

In the second section we compare the performances of the method in the case of a Generalized Merton Model without jumps, i.e. in the Heston case. This has already been done in [2], but we will be more careful with the sufficient conditions for the series expansion of the characteristic function and for the existence and uniqueness of the solution of SDE 2.3.

In the third section we introduce the jump case, and in the following one we show how the MonteCarlo simulation was performed. In the last section we compare the MonteCarlo results with the Series Expansion Method in the general case of the HSDJ Model (cf. Section 2.3).

3.1 Existence of the series expansion

In order to test the efficiency of the approximated method for option pricing, we must first ensure that our HSDJ process, defined by Equation 2.6, satisfies the conditions stated in Section 1.5. First of all we check the conditions of Proposition 1.26.

Proposition 3.1. If ν is an exponential probability of parameter p, and m_α = ∫ z^α ν(dz) is its α-th moment, then the series

Σ_{α=2}^{∞} (m_α / α!) u^α (3.1)

converges.


        H      X^1
α       1      1
κ       1.5    1.5
σ       0.6    0.3
θ       0.15   0.05
ρ       −0.2   −0.3

Table 3.1: Parameters of Heston Processes.

Proof. Let us fix C = 1/(1 − exp(−pM)). Integrating by parts we have:

m_α = (1/C) ∫_{−∞}^{0} z^α p e^{pz} dz = (1/C) [ z^α e^{pz} |_{−∞}^{0} − ∫_{−∞}^{0} α z^{α−1} e^{pz} dz ] = −(α/p) m_{α−1}. (3.2)

We now use the d'Alembert criterion in order to ensure convergence. Denote by c_α the quantity m_α/α!. Then

c_{α−1}/c_α = α m_{α−1}/m_α = −p. (3.3)

Thus imposing |p| > 1 we have convergence.
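The recursion in the proof can be checked numerically for the untruncated exponential law on (−∞, 0) (a sketch of ours; the constant C in the proof normalizes a truncated version and plays no role here). It also confirms the closed form m_α = (−1)^α α!/p^α implied by m_1 = −1/p; the value p = 4.48 is the one used later in Table 3.3:

```python
import math
import numpy as np

p = 4.48                                   # parameter value also used in Table 3.3
n = 1_000_000
h = 12.0 / n
z = -12.0 + (np.arange(n) + 0.5) * h       # midpoint grid on (-12, 0)
dens = p * np.exp(p * z)                   # exponential density on the negative half-line

def m(alpha):
    # alpha-th moment by quadrature; the tail below -12 is negligible
    return np.sum(z ** alpha * dens) * h

for alpha in (2, 3, 4):
    assert abs(m(alpha) + (alpha / p) * m(alpha - 1)) < 1e-6
    assert abs(m(alpha) - (-1) ** alpha * math.factorial(alpha) / p ** alpha) < 1e-6
```

With c_α = m_α/α! = (−1/p)^α, the series (3.1) is geometric with ratio −u/p, hence the radius of convergence is |p|, consistent with the condition |p| > 1.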

3.2 The Jump-free case

Throughout this section we denote by (Y_t)_{t∈[0,T]} the process

Y_t = H_t + X_t^1, (3.4)

where H and X^1 are the first components of a couple of independent Heston processes. A simulated trajectory for this process is shown in Figure 3.1. Recall that we defined S_t = S_0 exp(rt + Y_t). The choice of parameters is indicated in Table 3.1.

Notice that the Feller condition 2κθ > σ² is fulfilled.

For simplicity we assume S_0 = 1. Following [9], we compute the series expansion of E[e^{iuY_t}] up to the eighth term. In Figures 3.2 and 3.3 we compare the real and imaginary parts of the exact and approximated characteristic functions of Y_t in this case.

From the graphics we observe that the formula is satisfactory in the jump-free case, especially for short times. Its behaviour deteriorates


Figure 3.1: Simulated trajectory of the two components of process Xt.

Figure 3.2: Approximated (blue) and exact (red) real and imaginary part of the Characteristic Function of Y_t.


Figure 3.3: Approximated (blue) and exact (red) real and imaginary part of the Characteristic Function of Y_t, with t = 2.

L     Carr-Madan              Bayer-Schoenmakers
      Exact      Approx.      Exact      Approx.
5     1.9233     1.9234       1.9364     1.9344
6     1.9309     1.9319       1.9364     1.9344
7     1.9343     1.9360       1.9361     1.9339
8     1.9356     1.9380       1.9362     1.9340
9     1.9361     1.9390       1.9363     1.9341
10    1.9363     1.9395       1.9364     1.9343

Table 3.2: Exact and approximated prices calculated with the Bayer-Schoenmakers and Carr-Madan formulas on domains of integration [−L, L], with maturity fixed at t = 1.

as |u| tends to infinity; however, since both the exact and approximated characteristic functions tend to 0 at ±∞, this should not give problems when we integrate, provided that both the approximated and exact integrals converge.

We may now use the exact and approximated characteristic functions in order to calculate the prices of call options. In Table 3.2 we compare the prices obtained via the Carr-Madan Formula (1.62) and via the method by Bayer and Schoenmakers (Formula 1.70) for call options with strike price 10, initial price 10 and rate 0.05, with different domains of integration.

From Table 3.2 we observe that both methods provide very accurate approximations; hence the decay of the integrand is sharp in both cases. Nevertheless it seems clear that the Bayer-Schoenmakers method behaves better on narrower domains of integration. Since the integration step is the same for all the calculations, namely 0.1, this means that the Carr-Madan


Figure 3.4: Approximated (blue) and exact (red) implied volatility of the Heston+Heston model. Notice that significant errors are observed only for high maturities.

Formula provides correct option pricing at the cost of a longer calculation time.

We now compare option prices calculated via the Bayer-Schoenmakers Formula using the approximated and the exact characteristic functions, fixing the integration domain [−L, L] with L = 32, for different strike prices.

Strike    Maturity t = 1/2          Maturity t = 2
          Exact      Approx.        Exact      Approx.
7         3.3359     3.3360         4.3430     4.3571
8         2.5509     2.5514         3.7637     3.7714
9         1.8822     1.8826         3.2528     3.2507
10        1.3434     1.3432         2.8063     2.7929
11        0.9318     0.9310         2.4190     2.3947
12        0.6320     0.6312         2.0847     2.0521
13        0.4219     0.4215         1.7974     1.7600

An important feature of stochastic volatility models is the smile of the volatility, which means that the implied volatility is higher for strike prices distant from the starting value of the underlying. This is clearly observed in

Figure 3.5: Relative error of the implied volatility for Heston+Heston model.

          X^1
λ_1       10
µ_1(y)    p e^{py} dy
p         4.48

Table 3.3: Jump parameters of X^1 in the HSDJ model.

Figure 3.4. For a brief overview of the notion of implied volatility we refer to Appendix A. In Figure 3.5 we show the relative error in implied volatility, which is low for short maturities.

3.3 The Jump Case

We now consider the case of a Generalized Merton Model. This model consists of a price process S_t = S_0 exp(rt + Y_t), where Y_t satisfies:

Y_t = H_t + X_t^1, (3.5)

where H_t is the first component of a log-Heston process and X_t^1 is the first component of a Heston SDJ process, while γ = 0 ensures exp(Y_t) is a martingale.

For the HSDJ process X_t, we consider only negative state-dependent jumps with exponential law of parameter p. The jump parameters are indicated in Table 3.3. Recall that λ_1 X^2 represents the intensity of the state-dependent jumps.

Figure 3.6: Simulated trajectory of the two components of process X_t. Jumps are highlighted in red.

The stochastic differential equation for X_t^1 is

dX_t^1 = −( λ_1(m_1 + a_1) + (1/2)α² ) X_t^2 dt + α √(X_t^2) dB_t^1 + ∫_R y N(X_{t−}^2, dt, dy);
dX_t^2 = κ(θ − X_t^2) dt + σ √(X_t^2) (ρ dB_t^1 + √(1 − ρ²) dB_t^2), (3.6)

where m_1 = ∫ y µ_1(y) dy and a_1 = ∫ (e^y − y − 1) µ_1(y) dy. Since m_1 is the average size of the negative jumps we have m_1 = −1/p, while a_1 = 1/(p(p + 1)).
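The closed forms m_1 = −1/p and a_1 = 1/(p(p + 1)) can be confirmed by numerical quadrature against the density p e^{py} on (−∞, 0) (a sketch of ours; p = 4.48 as in Table 3.3):

```python
import numpy as np

p = 4.48                                   # value from Table 3.3
n = 1_000_000
h = 12.0 / n
y = -12.0 + (np.arange(n) + 0.5) * h       # midpoint grid on (-12, 0)
mu1 = p * np.exp(p * y)                    # density of the jump law

m1 = np.sum(y * mu1) * h
a1 = np.sum((np.exp(y) - y - 1) * mu1) * h
print(m1, -1 / p)                          # both ~ -0.2232
print(a1, 1 / (p * (p + 1)))               # both ~ 0.0407
```

The truncation at −12 is harmless, since the density there is of order e^{−53}.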

Figure 3.6 shows a simulated trajectory for price and volatility over time. This trajectory is an example of a crisis situation, with three significant jumps in price, corresponding to high volatility situations.

3.4 MonteCarlo Simulation

In order to test the approximation for the Generalized Merton Model with jumps, we need a solid benchmark. In the absence of closed formulas, we rely on a MonteCarlo simulation. This method consists in generating a large number

Exact variable                                  Approximated variable
∫_t^{t+δ} X_s dW_s                              X_t (W_{t+δ} − W_t)
∫_t^{t+δ} b_s ds                                δ b_t
∫_t^{t+δ} ∫_{R\{0}} z N(X_s, dz, ds)            Y_t

Table 3.4: Local approximation of integral quantities.

of trajectories N and calculating the empirical characteristic function ψ_t(u):

ψ_t(u) = (1/N) Σ_{i=1}^{N} exp(iu^T X_t^{(i)}), (3.7)

where (X_t^{(i)})_{i=1,...,N} is the generated statistical sample. The trajectories are discretized with a constant time step δ. From time t to t + δ we apply the approximations shown in Table 3.4. Notice that the state-dependent jump part is locally approximated by a compound Poisson variable Y with intensity λ_1 X_t^2, as shown in Remark 1.4.
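As a small illustration of estimator (3.7), the following sketch (ours) computes the empirical characteristic function of a standard Gaussian sample and compares it with the exact one, exp(−u²/2); the error is of order 1/√N:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)           # sample from a known law, N(0, 1)

def psi(u, sample):
    # empirical characteristic function (3.7), one-dimensional case
    return np.mean(np.exp(1j * u * sample))

u = 1.3
print(abs(psi(u, x) - np.exp(-0.5 * u ** 2)))  # Monte Carlo error, O(1/sqrt(N))
```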

In order to generate Y_t we observe that jumps arrive with intensity λ_1 X_t^2, thus the number of jumps between t and t + δ is a Poisson random variable with parameter δλ_1 X_t^2. Each jump can then be generated using the law ν_1.

The pseudocode of the algorithm is provided in Listing 3.1.

Listing 3.1: Pseudo Code of MonteCarlo Simulation for HSDJ process

X = [0 theta];
for count = 0:steps-1
    X(2) = max(0, X(2));
    Norm  = sqrt(delta)*normrnd(0,1);
    Norm1 = sqrt(delta)*normrnd(0,1);
    Norm2 = rho*Norm + sqrt(1-rho^2)*Norm1;
    % ma1 = m1 + a1, cf. Equation 3.6
    drift1 = -(0.5*alpha^2 + lambda1*ma1)*X(2)*delta;
    % diffusion term alpha*sqrt(X2)*dB1, cf. Equation 3.6
    diffusion1 = alpha*sqrt(X(2))*Norm;
    X(1) = X(1) + drift1 + diffusion1;
    PoissonParameter = X(2)*lambda1*delta;
    jumps = poissrnd(PoissonParameter);
    for count1 = 1:jumps
        jumpWidth = exprnd(1/p);
        X(1) = X(1) - jumpWidth;
    end
    drift2 = kappa*(theta - X(2))*delta;
    diffusion2 = sigma*sqrt(X(2))*Norm2;
    X(2) = X(2) + drift2 + diffusion2;
end

Figure 3.7: Approximated (blue) and exact (red) real and imaginary part of the Characteristic Function of Y_t, with t = 5, in the jump case.
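A vectorized Python translation of Listing 3.1 (ours; the same Euler scheme, with paths simulated in parallel and parameters from Tables 3.1 and 3.3) allows a quick sanity check: conditionally on each step the scheme preserves E[exp(X_t^1)] = 1 exactly, and E[X_t^2] stays near θ:

```python
import numpy as np

def simulate_hsdj(n_paths, T, n_steps, alpha, kappa, sigma, theta, rho,
                  lambda1, p, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    ma1 = -1.0 / (p + 1.0)        # m1 + a1 = -1/p + 1/(p(p+1)) = -1/(p+1)
    X1 = np.zeros(n_paths)
    X2 = np.full(n_paths, theta)
    for _ in range(n_steps):
        X2 = np.maximum(X2, 0.0)                 # full truncation at zero
        dW1 = np.sqrt(dt) * rng.standard_normal(n_paths)
        dWp = np.sqrt(dt) * rng.standard_normal(n_paths)
        dW2 = rho * dW1 + np.sqrt(1 - rho ** 2) * dWp
        X1 = X1 - (lambda1 * ma1 + 0.5 * alpha ** 2) * X2 * dt \
                + alpha * np.sqrt(X2) * dW1
        nj = rng.poisson(lambda1 * X2 * dt)      # number of jumps per path
        # the sum of nj i.i.d. Exp(p) jump sizes is Gamma(nj, scale 1/p)
        X1 = X1 - np.where(nj > 0, rng.gamma(np.maximum(nj, 1), 1.0 / p), 0.0)
        X2 = X2 + kappa * (theta - X2) * dt + sigma * np.sqrt(X2) * dW2
    return X1, X2

X1, X2 = simulate_hsdj(20_000, 1.0, 200, 1.0, 1.5, 0.3, 0.05, -0.3, 10.0, 4.48)
print(np.mean(np.exp(X1)), np.mean(X2))  # ~ 1 and ~ 0.05 up to Monte Carlo error
```

Drawing the total jump size as a Gamma variable avoids the inner loop of Listing 3.1 while producing the same law.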

3.5 Performance of the approximated formula in the jump case

Figure 3.7 shows the approximated characteristic function compared with the MonteCarlo estimate. The results are very similar in the real part, while the imaginary parts differ a little more; however, the different scale must be considered. Note that we chose maturity t = 5, since otherwise the differences could not be appreciated.

We can see the effect of the changed distribution in Figure 3.8, which shows the implied volatility of the two approximations. In particular the implied volatility is larger and the smile is more pronounced. Notice that the precision is even better than in the jump-free case. This happens because the characteristic function of H_t is calculated using the exact formula. This also allowed us to increase the number of terms in the expansion of the characteristic function up to K = 15. Figure 3.11 shows that this provides an important increase in precision, even for high maturities.


Figure 3.8: Implied volatility of Generalized Merton Model 2.1 calculated with the approximated method (in blue) and using a MonteCarlo simulation (in red), with K = 8 terms in the expansion.

Figure 3.9: Relative error of the approximated method calculating implied volatility, with K = 8 terms in the expansion.


Figure 3.10: Implied volatility of Generalized Merton Model 2.1 calculated with the approximated method (in blue) and using a MonteCarlo simulation (in red), with K = 15 terms in the expansion.

Figure 3.11: Relative error of the approximated method calculating implied volatility, with K = 15 terms in the expansion.


Conclusions

Our calculations show that precise option pricing may be performed through approximation of the characteristic function. The method is not as precise for high-maturity options, with errors up to 15% for options with maturity t = 5. In the paper "Option Pricing in Affine Generalized Merton Models" the results in the jump case appear to be much more biased, probably because of some error in the MonteCarlo simulation. Another important difference in our experimentation is the fulfillment of the Feller condition for the positivity of the variance.

Future developments may include positive jumps in the volatility, which have already been modeled by Duffie, Pan and Singleton in "Transform Analysis and Asset Pricing for Affine Jump-Diffusions" ([16]) in a state-independent jump scenario. A statistical inquiry by V. Todorov and G. Tauchen in "Volatility Jumps" ([17]) shows evidence that jumps in volatility are highly correlated with jumps in price in the modelization of crisis scenarios.


Appendix A

The mathematical modelling of financial asset prices started in 1973 with the Samuelson-Black-Scholes approach [7]. Let us consider a probability space (Ω, F, P), together with a filtration (F_t)_{t∈[0,T]} satisfying the usual hypotheses (right-continuity and completeness). We let F_0 = {∅, Ω} and F_T = F.

Samuelson, Black and Scholes modeled the price of an underlying S as an adapted stochastic process (S_t)_{t∈[0,T]} satisfying the following stochastic differential equation:

dS_t = S_t(µ dt + σ dB_t);
S_0 = x_0 ∈ R. (A.1)

The solution of Equation A.1 is called exponential Brownian motion, and has the following form:

S_t = S_0 exp( (µ − σ²/2)t + σB_t ). (A.2)

We assume a constant interest rate r > 0 and constant µ, σ ∈ R. Using Girsanov's Theorem one may prove the existence of a martingale-equivalent probability P*, that is, a probability under which the actualized value exp(−rt)S_t is a martingale. This ensures the impossibility of arbitrages in the market.

The no-arbitrage price of a contract paying an amount f(S_T) at time t < T is E*[exp(−r(T − t)) f(S_T) | F_t]. If we assume that it is sold at time 0, the price reads C_T = E*[exp(−rT) f(S_T)].

The authors provide a closed formula for call and put options, that is, options with f(S_T) = (S_T − K)^+ and f(S_T) = (K − S_T)^+ respectively.

However, the Samuelson-Black-Scholes model has shown some weaknesses. First of all, when one tries to estimate the parameter σ, called volatility (µ is not relevant under the martingale-equivalent probability P*), one may choose between two ways:

• Historic Volatility: observing market prices S_t, one observes that, if t_0 < t_1, the log-increment log(S_{t_1}/S_{t_0}) is a Gaussian with standard deviation σ√(t_1 − t_0). Thus, performing a Gaussian statistical test with unknown mean, an estimate is obtained;

• Implied Volatility: observing call and put prices, one may invert the Black-Scholes formula to find σ.

Unfortunately these two estimates in many cases do not correspond. This shows the need for more sophisticated models. Another issue is that the implied volatility is constant neither over time nor over strike prices. This phenomenon is known as the smile of the volatility.
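Numerically, the implied volatility is obtained by inverting the Black-Scholes formula; since the call price is strictly increasing in σ, a simple bisection suffices (a minimal sketch of ours):

```python
from math import log, sqrt, exp, erf

def bs_call(S0, K, r, sigma, T):
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return S0 * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

def implied_vol(price, S0, K, r, T, lo=1e-9, hi=5.0):
    # bisection on sigma: bs_call is strictly increasing in sigma (vega > 0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(S0, K, r, mid, T) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(implied_vol(bs_call(100, 110, 0.05, 0.3, 1.0), 100, 110, 0.05, 1.0))  # ~ 0.3
```

This is the inversion used throughout Chapter 3 when converting prices into implied volatilities.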

Let us introduce some terminology: if the price of a stock is S_0, a call or put option with strike K = S_0 is called at the money (ATM); a call option with K < S_0 is called in the money (ITM) and one with K > S_0 out of the money (OTM), with the terminology reversed for puts. A smile in the volatility properly means that the implied volatility is higher for OTM and ITM options than for ATM ones. One possible solution to this problem is given by stochastic volatility models: in these models the volatility is modeled as another stochastic process satisfying an SDE.

Another problem is that geometric Brownian motion is a continuous process, while real stock prices can have really sharp changes of value, and are thus more suitable to being modeled as stochastic processes with jumps.

Affine Jump-Diffusions provide a class of models where both those problems are dealt with.


Bibliography

[1] D. Duffie, D. Filipovic and W. Schachermayer, "Affine Processes and applications in finance", The Annals of Applied Probability 2003, Vol. 13, No. 3, 984–1053, Institute of Mathematical Statistics, 2003.

[2] D. Belomestny, J. Kampen and J. Schoenmakers, "Holomorphic trans-forms with application to affine processes", Journal of Functional Anal-ysis 257 (2009) 1222–1250.

[3] P. Jin, J. Kremer and Barbara Rudiger, "Existence of limiting distribu-tion for affine processes", arXiv:1812.05402 [math.PR], Dec 2018. [4] B. Øksendal, A. Sulem, Applied Stochastic Control of Jump Diffusions,

Springer, 2007.

[5] A. Papapantoleon, "An Introduction to Lévy Processes with Applications in finance", arXiv:0804.0482v2 [q-fin.PR] 3 Nov 2008

[6] P. Carr, D.B. Madan, "Option valuation using the fast Fourier transform" (1999), The Journal of Computational Finance, 2 (4): 61–73.

[7] F. Black, M. Scholes, "The Pricing of Options and Corporate Liabilities", Journal of political economy, 1973

[8] Heston, Steven L. (1993). "A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options". The Review of Financial Studies. 6 (2): 327–343

[9] C. Bayer, J. Schoenmakers, "Option Pricing in Affine Generalized Merton Models", arXiv:1512.03677 [q-fin.CP], 2016

[10] D. Bates, "Jumps and stochastic volatility: the exchange rate pro-cesses implicit in Deutschemark options", Rev. Fin. Studies, 9 (1996), pp. 69–107.

[11] Nobuyuki Ikeda, Shinzo Watanabe, "Stochastic Differential Equations and diffusion processes", North Holland (February 26, 1981)


[12] Protter, Philip E., "Stochastic Integration and Differential Equations", Springer; 2nd edition (May 24, 2005)

[13] M. Jeanblanc, M. Yor, M. Chesney, "Mathematical methods for finan-cial markets", Springer Finance, Springer, Nov 2009

[14] Cox, J.C., J.E. Ingersoll and S.A. Ross "A Theory of the Term Structure of Interest Rates", Econometrica Vol. 53, No. 2 (Mar., 1985), pp. 385-407 (23 pages)

[15] R. Lord, C. Kahl, "Complex logarithms in Heston-like models", Math-ematical Finance, Vol. 20, Issue 4, pp. 671-694, October 2010

[16] D. Duffie, J.Pan, K. Singleton, "Transform Analysis and asset pricing for Affine Jump-Diffusions", Econometrica Vol. 68, N.6 (Nov. 2000), pp. 1343-1376

[17] V. Todorov, G. Tauchen, "Volatility Jumps", 2011 American Statistical Association Journal of Business and Economic Statistics, July 2011, Vol. 29, No. 3
