(1)

A.Y. 2017/18 M.A. in Finance

Structural modeling of default arrival

Luca Regis

DEPS, University of Siena

(2)

Structural Modeling Approach

Idea: firms default when they are not able to meet their financial obligations.

Explicit modeling of the default-triggering event (endogenous).

Default is driven by market information, namely the value of the firm.

More apt to price products that involve equity variables as well;

Easier and more natural inclusion of correlation among defaults (also calibration).

(3)

Merton’s Model (1974)

A firm is financed by equity and a single issue of zero-coupon debt with face value F maturing at T. Markets are frictionless (no taxes, transaction costs, ...) and trading is continuous.

Discount factors: P(t, T) = e^{−r(T−t)}.

The total market (or asset) value of the firm, V(t) = D(t) + S(t), where D(t) is the debt value and S(t) is the equity value, follows

dV(t) = µ V(t) dt + σ V(t) dW_t,   V_0 > 0,

with µ = r − k under the risk-neutral measure, k being the payout ratio.

(4)

Default Time

Let τ denote the default time. We have:

τ = T if V(T) < F, and τ = ∞ otherwise.

Default is linked to the ability to repay F.

Default can happen only at the debt maturity T. We define the default indicator function

1_{τ=T} = 1 if V(T) < F (default), and 0 if V(T) ≥ F (survival).

(5)

Payoffs at maturity

With absolute priority (debtholders are reimbursed before equityholders), we have the following payoffs at maturity T:

                Bonds      Equity
V(T) ≥ F        F          V(T) − F
V(T) < F        V(T)       0

For instance, with T = 2 and F = 100:

                Bonds      Equity
V(2) ≥ 100      100        V(2) − 100
V(2) < 100      V(2)       0

⇒ P̄(T, T) = min(F, V(T)) = F − max(0, F − V(T))

⇒ S(T) = max(0, V(T) − F)

(6)

Equity as a call option

S(T) = max(0, V(T) − F) ⇒

S(t) = BSC(t, T, F, r, σ, V(t)) = V(t)Φ(d1) − F e^{−r(T−t)}Φ(d2),

d1 = [ln(V(t)/F) + (r − k + σ²/2)(T − t)] / (σ√(T − t)),   d2 = d1 − σ√(T − t).

(7)

Example

Set t = 0, r = 2%, µ = 3%, σ = 20%, T = 2, k = 0, which implies P(0, T) = exp(−0.02 × 2) = 0.96079.

d1 = [ln(V(0)/100) + (2% + ½ · (20%)²) · 2] / (20%√2),   d2 = d1 − 20%√2.

Suppose that V(0) = 80. Then

d1 = −0.506,  Φ(d1) = 0.3064
d2 = −0.789,  Φ(d2) = 0.2151
S(0) = V(0)Φ(d1) − F e^{−rT}Φ(d2) = 80 × 0.3064 − 100 × 0.96079 × 0.2151 = 3.85
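These numbers can be reproduced with a short MATLAB sketch (a minimal example; Φ is built from erf so that only base MATLAB is needed):

% Merton equity value for the example above (V(0) = 80, F = 100).
r=0.02; sigma=0.2; T=2; k=0; V0=80; F=100;
Phi=@(x) 0.5*(1+erf(x/sqrt(2)));                    % standard normal cdf
d1=(log(V0/F)+(r-k+0.5*sigma^2)*T)/(sigma*sqrt(T));
d2=d1-sigma*sqrt(T);
S0=V0*Phi(d1)-F*exp(-r*T)*Phi(d2)                   % about 3.85, as on the slide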

(8)

Evaluating the bond

We know that P̄(T, T) = F − max(0, F − V(T)), hence

i) P̄(t, T) = F e^{−r(T−t)} − BSP(t, T, F, r, σ, V(t))
           = F e^{−r(T−t)} − (F e^{−r(T−t)}Φ(−d2) − V(t)Φ(−d1))
           = F e^{−r(T−t)}Φ(d2) + V(t)Φ(−d1).

Notice indeed that P̄(t, T) can also be obtained using V(t) = S(t) + P̄(t, T) and put-call parity.

In the above example:

P̄(0, T) = F e^{−rT} − BSP(0, T, F, r, σ, V(0)) = F e^{−rT}Φ(d2) + V(0)Φ(−d1)
         = 100 × 0.96079 − 19.93 = 76.149,

where 19.93 is the Black-Scholes put value.

(9)

Evaluating the bond

Notice also that

ii) P̄(0, T) = E^Q[ e^{−rT}(F − max(0, F − V(T))) ]
            = e^{−rT}F − e^{−rT} E^Q[max(0, F − V(T))],

where rnEL := E^Q[max(0, F − V(T))] is the (risk-neutral) expected loss.

Comparing i) and ii):

F e^{−rT} − (F e^{−rT}Φ(−d2) − V(0)Φ(−d1)) = e^{−rT}F − e^{−rT} rnEL
F e^{−rT}Φ(−d2) − V(0)Φ(−d1) = e^{−rT} rnEL
F Φ(−d2) − V(0)e^{rT}Φ(−d1) = rnEL

In the example:

rnEL = 100 Φ(0.789) − 80 × e^{0.04} Φ(0.506) = 20.74,

or, equivalently, the compounded value of the put: 19.93 × e^{0.04} = 20.74.

(10)

Risk neutral and Historical default probability

Notice that the risk-neutral default probability is equal to

π̃_T = P̃[τ ≤ T] = Φ(−d2),

that is, the probability of exercising the put option!

The historical default probability, instead, is obtained from the dynamics of V under the physical measure. Since X is a standard Normal:

π_T = P[τ ≤ T] = P[V(T) < F]
    = P[ V(0) e^{(µ − σ²/2)T + σ√T X} < F ]
    = P[ X < (ln(F/V(0)) − (µ − σ²/2)T) / (σ√T) ]
    = Φ(−D2),

where D2 = [ln(V(0)/F) + (µ − σ²/2)T] / (σ√T).

(11)

Example

In the example:

d2 = −0.789,  Φ(−d2) = 1 − Φ(d2) = 1 − 0.2151 = 0.7849
D2 = [ln(80/100) + (3% − ½ · (20%)²) · 2] / (20%√2) = −0.718
Φ(−D2) = 0.764

Hence, the default probability under the historical measure is smaller, since µ > r.
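A small MATLAB check of the two probabilities for the same example (a minimal sketch; base MATLAB only):

% Risk-neutral vs historical default probability (V(0) = 80, F = 100).
r=0.02; mu=0.03; sigma=0.2; T=2; V0=80; F=100;
Phi=@(x) 0.5*(1+erf(x/sqrt(2)));
d2=(log(V0/F)+(r -0.5*sigma^2)*T)/(sigma*sqrt(T));   % about -0.789
D2=(log(V0/F)+(mu-0.5*sigma^2)*T)/(sigma*sqrt(T));   % about -0.718
pd_rn=Phi(-d2)            % risk-neutral PD, about 0.785
pd_hist=Phi(-D2)          % historical PD, about 0.764 (smaller, since mu > r)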

(12)

Loss Given Default and recovery rate

Loss Given Default and Recovery rate are endogenous in structural models:

Lgd = rnEL / (π̃_T F) = [F Φ(−d2) − V(0)e^{rT}Φ(−d1)] / [F Φ(−d2)]
    = 1 − V(0)e^{rT}Φ(−d1) / (F Φ(−d2))
    = 1 − (1/l) · Φ(−d1)/Φ(−d2),

where the subtracted term, (1/l)Φ(−d1)/Φ(−d2), is the recovery rate R, and l = F e^{−rT}/V(0) is the quasi-leverage (the value of the non-defaultable bond with face value F over the firm value).

In our example:

Lgd = 20.74 / (100 × 0.7849) = 0.264 = 26.4%  →  R = 0.736 = 73.6%.

(13)

Credit Spread

The time t = 0 spread is obtained as

s(0, T) = −(1/T) ln[ P̄(0, T) / (F e^{−rT}) ]
        = −(1/T) ln[ (F e^{−rT}Φ(d2) + V(0)Φ(−d1)) / (F e^{−rT}) ]
        = −(1/T) ln[ Φ(d2) + (1/l) Φ(−d1) ],

with l = F e^{−rT}/V(0) the quasi-leverage defined above. In our example l = 100 × 0.96079 / 80 = 1.2, and

s(0, 2) = −(1/2) ln[ Φ(−0.789) + Φ(0.506)/1.2 ]
        = −(1/2) ln[ 0.2151 + 0.6936/1.2 ]
        = 0.1162.
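The bond value, the risk-neutral expected loss, Lgd and the spread of the example can be reproduced with the following MATLAB sketch (self-contained; base MATLAB only):

% Defaultable bond, expected loss, Lgd/recovery and spread (V(0) = 80, F = 100).
r=0.02; sigma=0.2; T=2; V0=80; F=100;
Phi=@(x) 0.5*(1+erf(x/sqrt(2)));
d1=(log(V0/F)+(r+0.5*sigma^2)*T)/(sigma*sqrt(T));
d2=d1-sigma*sqrt(T);
Pbar=F*exp(-r*T)*Phi(d2)+V0*Phi(-d1)       % defaultable bond, about 76.15
rnEL=F*Phi(-d2)-V0*exp(r*T)*Phi(-d1)       % risk-neutral expected loss, about 20.74
Lgd=rnEL/(F*Phi(-d2));                     % about 0.264
R=1-Lgd                                    % recovery rate, about 0.736
s=-(1/T)*log(Pbar/(F*exp(-r*T)))           % credit spread, about 0.1162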

(14)

Merton’s model and short-term spreads

All else equal, as T → 0 the spread tends to zero, whereas in intensity models it is in general non-zero (with a constant intensity, for instance, it equals λ > 0).

Hence the model is unable to capture short-term spreads.

(15)

Firm value and equity value

Since the equity value is a call option, S(t) = Call(t, V(t)), we can derive its dynamics using Ito's formula:

dS(t) = dCall(V(t)) = (...) dt + (∂Call/∂V) σ_V V(t) dW(t).

Hence, if we want to rewrite the equity dynamics as a Black-Scholes dynamics,

dS(t) = (r − q) S(t) dt + σ_S S(t) dW(t),

we must have σ_S S(t) = σ_V V(t) ∂Call/∂V, and hence σ_S = σ_V Φ(d1) V(t)/S(t).

(16)

Calibration

Problem: given T, F, r, we still need V(0) and σ_V, which are unobservable.

For public firms, whose equity is traded, we can observe S(0) and estimate σ_S.

T, F can be recovered from balance-sheet data, r from market data;

V(0) and σ_V are unobservable, but can be obtained from S(0) and σ_S through the following relations:

S(0) = BSC(σ_V, T, F, r, V(0)) = V(0)Φ(d1) − F e^{−rT}Φ(d2)
σ_S S(0) = Φ(d1) σ_V V(0)

(17)

Example

Let r = 1%. Consider a firm with σ_S = 20%, S(0) = 5 million, F = 6 million, T = 1. From the relations

5 = V(0)Φ(d1) − 6 e^{−0.01}Φ(d2)
0.2 × 5 = Φ(d1) σ_V V(0)

we get σ_V = 9.14% and V(0) = 10.94 < S(0) + F = 5 + 6 = 11, but close to S(0) + F e^{−0.01} = 5 + 6 e^{−0.01} = 10.94.

Hence:

P̄(0, 1) = V(0) − S(0) = 5.94;
P̃[τ ≤ T] = Φ(−d2) ≈ 0.0000,  s(0, 1) ≈ 0.0000;
R = 0.987 = 98.7%,  Lgd = 0.013 = 1.3%.
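One possible MATLAB implementation of this calibration step is sketched below, solving the two equations for V(0) and σ_V with fsolve (Optimization Toolbox); the starting guess is an assumption of this sketch:

% Calibrating V(0) and sigma_V from S(0) and sigma_S (example above).
r=0.01; T=1; F=6; S0=5; sigmaS=0.2;
Phi=@(x) 0.5*(1+erf(x/sqrt(2)));
d1=@(V,sV) (log(V/F)+(r+0.5*sV^2)*T)/(sV*sqrt(T));
d2=@(V,sV) d1(V,sV)-sV*sqrt(T);
eqs=@(p) [p(1)*Phi(d1(p(1),p(2)))-F*exp(-r*T)*Phi(d2(p(1),p(2)))-S0;
          Phi(d1(p(1),p(2)))*p(2)*p(1)-sigmaS*S0];
p0=[S0+F*exp(-r*T); sigmaS*S0/(S0+F*exp(-r*T))];   % starting guess
p=fsolve(eqs,p0);     % p(1) = V(0) about 10.94, p(2) = sigma_V about 0.0914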

(18)

Extensions to Merton’s (1974) Model

Shortcomings of Merton's model:

1 Default can happen only at the fixed debt maturity T: Black and Cox (1976) model;
2 Short-term credit spreads cannot be captured: use a jump-diffusion, uncertainty on the barrier (F), ... at the cost of tractability;
3 bankruptcy is costless (can be extended easily);
4 the risk-free rate is constant (can be extended, even to a stochastic rate correlated with firm value, see Longstaff and Schwartz 1995).

Hence, extensions have been proposed to correct for such imperfections.

Also, capital structure models with endogenous default.

Idea: firms choose F (the default barrier) to maximize the market value of equity (Leland 1994, ...).

(19)

First-passage time models

To avoid the restrictive assumption of default only at T: model a (possibly time-varying) barrier level representing the safety covenants of the firm.

Default occurs as soon as the value of the firm falls below this boundary.

Tractability is lost unless assumptions are placed on dV(t) and the barrier H(t); with a constant H, survival probabilities remain analytical even with a jump-diffusive process for V (Zhou 2001).

Bond pricing amounts to pricing a barrier option.

(20)

Black and Cox (1976)

Black and Cox model a barrier representing the safety covenants of the firm.

Default can happen at any time prior to T: it occurs when the value of the firm hits the barrier; bondholders are repaid at default.

The firm value follows a GBM as in Merton's model:

dV(t) = (r − k) V(t) dt + σ_V V(t) dW(t).

The barrier H(t, T) is equal to

H(t, T) = F for t = T, and H(t, T) = K e^{−γ(T−t)} for t < T,

with K e^{−γ(T−t)} < F e^{−r(T−t)}, i.e. the covenant is lower than the discounted face value of debt.

(21)

Survival probabilities - Black and Cox

The default time is defined as

τ = inf{ t ∈ (0, T] : V(t) ≤ H(t, T) }.

If the parameters of the dynamics of V are constant, survival probabilities for t < T can be computed as

Q(τ > t) = Φ( [log(V(0)/H(0)) + (r − k − γ − ½σ_V²) t] / (σ_V √t) )
           − (H(0)/V(0))^{2a} Φ( [log(H(0)/V(0)) + (r − k − γ − ½σ_V²) t] / (σ_V √t) ),

with a = (r − k − γ − ½σ_V²) / σ_V².
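A sketch of this formula in MATLAB; all parameter values below are illustrative assumptions, not taken from the slides:

% Black-Cox survival probability Q(tau > t), constant parameters,
% barrier H(t,T) = K*exp(-gam*(T-t)).
r=0.02; k=0; gam=0.01; sigmaV=0.2;
V0=100; K=60; T=5; t=1;
H0=K*exp(-gam*T);                    % barrier level at time 0
nu=r-k-gam-0.5*sigmaV^2;             % drift appearing in the formula
a=nu/sigmaV^2;
Phi=@(x) 0.5*(1+erf(x/sqrt(2)));
Q=Phi((log(V0/H0)+nu*t)/(sigmaV*sqrt(t))) ...
  -(H0/V0)^(2*a)*Phi((log(H0/V0)+nu*t)/(sigmaV*sqrt(t)))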

(22)

Correlated Defaults

In the classical Merton (1974) model with two correlated firms:

dV_i(t) = µ_i V_i(t) dt + σ_i V_i(t) dW_i(t),   dW_1 dW_2 = ρ dt.

Joint default probabilities are given by

P[τ_1 ≤ T, τ_2 ≤ T] = P[V_T^1 < F_1, V_T^2 < F_2]
  = P[ W_T^1/√T < −D_2^1, W_T^2/√T < −D_2^2 ]
  = P[ X_1 < −D_2^1, X_2 < −D_2^2 ]
  = Φ_2(ρ, −D_2^1, −D_2^2)
  = ∫_{−∞}^{−D_2^1} ∫_{−∞}^{−D_2^2} 1/(2π√(1−ρ²)) exp[ (2ρst − s² − t²) / (2(1−ρ²)) ] ds dt
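This bivariate normal probability can be evaluated directly with mvncdf (Statistics Toolbox); a sketch, where the asset correlation ρ is an illustrative assumption and the thresholds are the −D_2^i values (0.718 in the example that follows):

% Joint default probability in the two-firm Merton model.
rho=0.3;                       % asset correlation (illustrative)
mD2=[0.718 0.718];             % thresholds -D_2^i, as in the example below
jointPD=mvncdf(mD2,[0 0],[1 rho; rho 1])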

(23)

Example

For instance, with two firms of the type used in the (non-calibrated) example above, the joint default probability is

π_T^{1,2} = Φ_2(ρ, −D_2^1, −D_2^2) = Φ_2(ρ, 0.718, 0.718).

Why do we need these probabilities?

for risk management, when analyzing the benefits of credit risk diversification;

to evaluate some credit derivatives, e.g. a first-to-default CDS and CDO tranches.

Problem: first-passage models require dealing with multiple barriers when applied to the pricing of multi-name products... often intractable!

(24)

Default correlation in intensity-based models

Consider the time-to-default correlation

ρ(τ_1, τ_2) = [E(τ_1 τ_2) − E(τ_1)E(τ_2)] / √(var(τ_1) var(τ_2)),

where the expectation E(τ_1 τ_2) requires the joint distribution F(τ_1, τ_2), which is in principle unknown even when the marginals F_1(τ_1), F_2(τ_2) are known.

How can this be modeled in intensity-based models?

(25)

Correlating Default Intensities

Consider different firms, each with a default time linked to a unit-mean exponential random variable ξ_i by Λ_i(τ_i) = ξ_i.

Correlation between default times can be induced:

1 by correlation between the default intensities λ_1, λ_2, ..., λ_n (if stochastic);
2 by correlation among the ξ_i;
3 by correlating both.

With the first choice, conditional on the n intensity processes, default times are independent, and τ_i is the first arrival time of a time-inhomogeneous Poisson process with time-varying intensity λ_i.

Tractable models are multivariate versions of the models we have seen previously (multivariate affine models).

(26)

Default-time simulation algorithms

Take a doubly stochastic model with stochastic intensities λ_1, ..., λ_n and consider the interval (0, T).

Multi-compensator method: conditional on the paths of the λ's, default times are independent.

1 Simulate n independent unit-mean exponential r.v.'s Z_1, ..., Z_n.
2 For each i, if Λ_i(T) < Z_i then τ_i > T.
3 Otherwise, τ_i = min{ t : Λ_i(t) = Z_i }.
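A possible MATLAB sketch of the multi-compensator method with two correlated square-root (CIR-type) intensities; the intensity model and all parameter values are illustrative assumptions (base MATLAB only):

% Multi-compensator simulation of default times on (0,T).
T=5; M=1000; dt=T/M; tgrid=(0:M)*dt;
n=2; rho=0.5;
kappa=0.5; theta=0.02; sig=0.1; lambda0=0.02;
C=chol([1 rho; rho 1]);                        % correlates the Brownian shocks
lambda=lambda0*ones(M+1,n);
for j=1:M
    dW=sqrt(dt)*(randn(1,n)*C);                % correlated increments
    lam=lambda(j,:);
    lambda(j+1,:)=max(lam+kappa*(theta-lam)*dt+sig*sqrt(max(lam,0)).*dW,0);
end
Lambda=cumtrapz(tgrid',lambda);                % compensators Lambda_i(t)
Z=-log(rand(1,n));                             % unit-mean exponentials
tau=inf(1,n);
for i=1:n
    if Lambda(end,i)>=Z(i)                     % otherwise tau_i > T
        tau(i)=tgrid(find(Lambda(:,i)>=Z(i),1));   % first t with Lambda_i(t) = Z_i
    end
end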

(27)

First-defaults simulation

Assume the first m defaults out of n must be simulated. Then:

1 Simulate m independent unit-mean exponential r.v.'s Z_1, ..., Z_m.
2 Order the Z_i's.
3 For each k, let E(k) denote the set of surviving entities after the k-th default. At the start, k = 0 and E(k) = {1, ..., n}.
4 Simulate, for each t and each element i of E(k), the compensator Λ_i(t), until either T is reached or Λ_i(t) reaches Z_{k+1}. At that t, entity i defaults and is removed from E(k) to form E(k + 1).
5 Set k = k + 1 and go back to step 4, unless k = m.

(28)

Copula functions

Second option: introduce dependence between the ξ's.

This and the third option (correlation in both the λ's and the ξ's) are usually implemented using copula functions.

Copula functions are a useful and powerful tool to describe joint default probabilities.

Idea: they are functions that describe the dependence structure, disentangling the estimation of single-name default probabilities from the estimation of the dependence structure between default times.

Pro: correlation even with deterministic intensities (easier to calibrate on single-name CDS).

Cons: copula estimation is subject to some technical issues.

(29)

Joint Default Probabilities and Copula Functions

Recall that, if X is distributed according to F_X, then F_X(X) is uniform. Indeed, set U = F_X(X), with F_X one-to-one and invertible:

F_U(u) = Q(U ≤ u) = Q(F_X(X) ≤ u)
       = Q(F_X(X) ≤ F_X(F_X^{−1}(u)))
       = Q(X ≤ F_X^{−1}(u)) = F_X(F_X^{−1}(u)) = u.

Then F_U(u) = u implies that U is uniform.

Transforming each X by its cdf F_X thus yields uniform variables that contain the same information as X.

(30)

Application to default

The idea is, rather than modeling the joint distribution of the τ_i's directly, to take as primitives the marginal survival probabilities (i.e. the marginal cdfs) and the joint distribution of the correlated uniforms.

Such a joint distribution is specified by the copula function C, a function such that

C(u_1, ..., u_n) = P(U_1 ≤ u_1, ..., U_n ≤ u_n).

Also, a survival copula can be defined:

C̄(v, z) = v + z − 1 + C(1 − v, 1 − z).

Hence:

C̄(1 − v, 1 − z) = P(U_1 > v, U_2 > z).

(31)

Copulas: formal definition

Definition

A two-dimensional copula C is a real function defined on I² := [0, 1] × [0, 1], with range I := [0, 1], such that:

1 for every (v, z) in I², C(v, 0) = 0 = C(0, z), C(v, 1) = v and C(1, z) = z;
2 for every rectangle [v_1, v_2] × [z_1, z_2] in I², with v_1 ≤ v_2 and z_1 ≤ z_2,
C(v_2, z_2) − C(v_2, z_1) − C(v_1, z_2) + C(v_1, z_1) ≥ 0.

(32)

Sklar’s theorem

Theorem (Sklar (1959))

Let F(x, y) be a joint distribution function with continuous marginals F_1(x) and F_2(y). Then there exists a unique copula C such that

F(x, y) = C(F_1(x), F_2(y)).   (1)

Conversely, if C is a copula and F_1(x), F_2(y) are continuous univariate distributions, then F(x, y) = C(F_1(x), F_2(y)) is a joint distribution function with marginals F_1(x), F_2(y).

The theorem suggests that it is possible to represent the multiplicity of joint distributions consistent with given marginals through copulas.

(33)

Simulation and consequences of Sklar’s Theorem

Sklar's theorem allows one to separate the description of the marginals from their aggregation via a copula.

Hence, describing a joint distribution of default times via a copula allows one to simulate it by drawing the uniforms U_i from the copula and then computing the τ_i such that S_i(τ_i) = U_i, where S_i(·) denotes the survival function, exploiting the fact that

Q(τ_1 ≤ t_1, ..., τ_n ≤ t_n) = C(F_1(t_1), ..., F_n(t_n)).

(34)

Why copulas?

Copulas allow one to go beyond linear dependence (correlation).

E.g.: X standard normal and Y = X^5 should display maximal dependence, yet linear correlation does not capture it (see the next slide).

(35)

Linear correlation may not be enough to capture dependence

Their linear correlation is

[E[X^5 · X] − E[X^5]E[X]] / [Std(X^5) Std(X)] = √(5/21) < 1/2.

Synthetic measures of dependence can capture non-linearities:

Spearman's rank correlation ρ_S:

ρ_S = Σ_i (r_i − r̄)(s_i − s̄) / [ √(Σ_i (r_i − r̄)²) √(Σ_i (s_i − s̄)²) ],

where r_i, s_i are the ranks of the two samples.

Kendall's τ:

τ = (C − NC) / (n(n − 1)/2),

where C is the number of concordant pairs, i.e. pairs (x_i, y_i), (x_j, y_j), i ≠ j, for which x_i > x_j, y_i > y_j or x_i < x_j, y_i < y_j, and NC is the number of discordant pairs.

(36)

Link with copulas

Such measures have a relationship with copula functions:

ρ_S = 12 ∫_0^1 ∫_0^1 C(u, v) du dv − 3;

τ = 4 ∫_0^1 ∫_0^1 C(u, v) dC(u, v) − 1.

Such relationships are also useful for calibration: measure the sample Spearman's ρ or Kendall's τ and calibrate the parameters of a copula to match them.

Both measures take values between −1 and 1; −1 indicates perfect negative dependence, +1 perfect positive dependence.
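For instance, for the Gaussian copula Kendall's τ and the correlation parameter are linked in closed form by τ = (2/π) arcsin ρ, so ρ can be backed out from a sample estimate of τ; a sketch (the data below are simulated for illustration; corr with the Kendall option requires the Statistics Toolbox):

% Calibrating the Gaussian-copula correlation to sample Kendall's tau.
x=randn(1000,1); y=0.7*x+sqrt(1-0.7^2)*randn(1000,1);   % illustrative data
tau_hat=corr(x,y,'type','Kendall');        % sample Kendall's tau
rho_hat=sin(pi*tau_hat/2)                  % implied copula correlation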

(37)

Examples of bivariate copulas

Some examples of copulas:

1 the minimum copula C(v, z) = max(v + z − 1, 0);
2 the product copula C(v, z) = v · z;
3 the maximum copula C(v, z) = min(v, z).

Since

max(F_1(x) + F_2(y) − 1, 0) ≤ F(x, y) ≤ min(F_1(x), F_2(y)),   (2)

the Fréchet-Hoeffding bounds hold for any copula:

max(v + z − 1, 0) ≤ C(v, z) ≤ min(v, z).

The product copula corresponds to independence, while the lower and upper bounds correspond to perfect negative and perfect positive dependence, respectively.

(38)

Gaussian copula

The Gaussian copula is

C^{Ga}(v, z) = Φ_{ρ_{XY}}( Φ^{−1}(v), Φ^{−1}(z) ),

where Φ_{ρ_{XY}} is the joint distribution function of a bivariate standard normal vector with linear correlation coefficient ρ_{XY}, and Φ is the standard normal distribution function. Therefore

Φ_{ρ_{XY}}( Φ^{−1}(v), Φ^{−1}(z) ) = ∫_{−∞}^{Φ^{−1}(v)} ∫_{−∞}^{Φ^{−1}(z)} 1/(2π√(1 − ρ_{XY}²)) exp[ (2ρ_{XY}st − s² − t²) / (2(1 − ρ_{XY}²)) ] ds dt.   (3)

(39)

Student copula

The bivariate Student's copula T_{ρ,ν} is defined as

T_{ρ,ν}(v, z) = t_{ρ,ν}( t_ν^{−1}(v), t_ν^{−1}(z) )
             = ∫_{−∞}^{t_ν^{−1}(v)} ∫_{−∞}^{t_ν^{−1}(z)} 1/(2π√(1 − ρ²)) [ 1 + (s² + t² − 2ρst) / (ν(1 − ρ²)) ]^{−(ν+2)/2} ds dt,

where

t_ν(x) = ∫_{−∞}^x Γ((ν + 1)/2) / (√(πν) Γ(ν/2)) (1 + s²/ν)^{−(ν+1)/2} ds.

(40)

Application to joint default probability modelling

We are interested in joint default probabilities:

F(t, t) = P[τ_1 ≤ t, τ_2 ≤ t] = C( P[τ_1 ≤ t], P[τ_2 ≤ t] ) = C(F_1(t), F_2(t)).

Using the:

minimum copula: C = max(v + z − 1, 0), F(t, t) = max(F_1(t) + F_2(t) − 1, 0);

maximum copula: C = min(v, z), F(t, t) = min(F_1(t), F_2(t));

product copula: C = vz, F(t, t) = F_1(t)F_2(t).

Example: suppose the marginal default probabilities of two firms at time t = 5 are both 0.023 = 2.3%. Then we have:

minimum copula: max(2.3% + 2.3% − 1, 0) = 0;

maximum copula: min(F_1(t), F_2(t)) = 2.3%;

product copula: 2.3% × 2.3% ≈ 0.05%.

(41)

Multivariate Merton’s model

Notice: the multivariate Merton model uses the Gaussian copula:

F(t, t) = C( Φ(−D_2^1), Φ(−D_2^2) ) = Φ_2( ρ, −D_2^1, −D_2^2 )
        = ∫_{−∞}^{−D_2^1} ∫_{−∞}^{−D_2^2} 1/(2π√(1 − ρ²)) exp[ (2ρst − s² − t²) / (2(1 − ρ²)) ] ds dt

(42)

Copula choice sensitivity

Let the marginal default probabilities of two firms at a certain t be 0.023, and let the correlation coefficient of the Gaussian copula be 0.8.

The joint default probability under the Gaussian copula is 0.0098; under the Student copula, for varying ν, it is given by the following table:

ν    joint def. prob.
1    0.015
2    0.014
3    0.013
4    0.012
5    0.012
6    0.012
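The two joint probabilities in this comparison can be computed as follows (a sketch; mvncdf, mvtcdf, norminv and tinv require the Statistics Toolbox):

% Joint default probability with marginals p1 = p2 = 0.023 and rho = 0.8.
p=0.023; rho=0.8; nu=3; C=[1 rho; rho 1];
jd_gauss=mvncdf(norminv([p p]),[0 0],C)    % Gaussian copula (compare with 0.0098 above)
jd_student=mvtcdf(tinv([p p],nu),C,nu)     % Student copula with nu d.o.f.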

(43)

Sensitivity to parameters

Let us change the correlation coefficient and repeat the comparative statics exercise, with 3 degrees of freedom:

ρ                               −0.2        0           0.2
joint def. prob. (Gaussian)     3 × 10⁻⁶    5 × 10⁻⁴    4 × 10⁻³
joint def. prob. (Student)      7 × 10⁻⁴    5 × 10⁻⁴    7.8 × 10⁻³

The correlation (together with the other parameters, if present, as in the case of ν) matters!

Choosing the correct copula and the correct dependence parameters is crucial but not simple: goodness-of-fit tests;

copulas are static: they are like multivariate distributions!

(44)

Example: k-th to default basket swap

k-th-to-default swaps are products that make a payment after the k-th default among a set of names.

Assume:

notional amounts are the same for all names, A;

no accrued premium;

fractional recovery R.

Premium leg, with premium W:

E[ Σ_{j=1}^m α_{j−1,j} W P(0, t_j) A 1_{N(t_j) < k} ],

where α_{j−1,j} denotes the year fraction between payment dates t_{j−1} and t_j, and N(t) is the counting process of defaults in the basket.

Default payment:

E[ (1 − R) A P(0, τ_{(k)}) 1_{τ_{(k)} ≤ T} ] = (1 − R) A ∫_0^T P(0, t) dF_k(t),

where F_k denotes the distribution of the k-th default time τ_{(k)}.

(45)

Two-names case

If τ_1 is the arrival time of the first default, consider the payoff (1 − R_1) 1_{τ_1 ≤ T} paid at T, and a fee W 1_{τ_1 > t_i} paid at dates t_i, i = 0, 1, ..., n − 1.

PV of the contingent leg:

e^{−∫_0^T r(s) ds} (1 − R_1) 1_{τ_1 ≤ T}

PV of the fee leg:

Σ_{i=0}^{n−1} e^{−∫_0^{t_i} r(s) ds} W 1_{τ_1 > t_i}

Hence, the fair premium W is such that the risk-neutral expectations of the two legs coincide: Ẽ[PV of the contingent leg] = Ẽ[PV of the fee leg].

(46)

Two-name case/2

If we are pricing a first-to-default between A and B, assuming τ_A and τ_B are independent of both r and the loss given default (1 − R)A, the premium solves

P(0, T) Ẽ[(1 − R)A] · Q[min(τ_A, τ_B) ≤ T] = W Σ_{i=0}^{n−1} P(0, t_i) Q[min(τ_A, τ_B) > t_i],

where Q[min(τ_A, τ_B) ≤ T] = 1 − Q[τ_A > T, τ_B > T] and Q[min(τ_A, τ_B) > t_i] = Q[τ_A > t_i, τ_B > t_i] are expressed through the joint survival risk-neutral probabilities.

(47)

Application of the Gaussian copula framework

To price the k-th-to-default swap via Monte Carlo simulation using a Gaussian copula:

1 Fix or estimate the marginals;
2 Fix or estimate the parameters of the copula;
3 Simulate the N correlated standard normals X_1, ..., X_N;
4 Compute the default times: τ_i = F_i^{−1}(Φ(X_i)).
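A sketch of steps 1-4 (and of the default-leg estimate of the previous slides) with constant-intensity exponential marginals and an equicorrelated Gaussian copula; all numerical inputs are illustrative assumptions (normcdf requires the Statistics Toolbox):

% Gaussian-copula Monte Carlo for the k-th default time.
lambda=[0.02 0.03 0.05];                 % marginal intensities (illustrative)
rho=0.3; N=numel(lambda);
Sigma=rho*ones(N)+(1-rho)*eye(N);        % copula correlation matrix
Rc=chol(Sigma);
T=5; k=2; Nsim=100000; r=0.02; Rrec=0.4; A=1;
tau_k=zeros(Nsim,1);
for m=1:Nsim
    X=randn(1,N)*Rc;                     % correlated standard normals
    U=normcdf(X);                        % Gaussian-copula uniforms
    tau=-log(1-U)./lambda;               % tau_i = F_i^{-1}(U_i), exponential marginals
    s=sort(tau); tau_k(m)=s(k);          % k-th default time
end
P_kth=mean(tau_k<=T)                               % P[tau_(k) <= T]
defleg=(1-Rrec)*A*mean(exp(-r*tau_k).*(tau_k<=T))  % default-leg value estimate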

(48)

Value-at-risk and Loss probability

The Value at Risk (VaR) of a portfolio is a percentile of its loss distribution over a horizon ∆t. It is evaluated at a confidence level ε, which defines the percentile of interest.

We define the loss L over an interval ∆t as −∆V, where ∆V is the change in the value of the portfolio with assets valued S:

L = V(S, t) − V(S + ∆S, t + ∆t).

We define F_L(x) = P(L < x) as the distribution of L.

The VaR is the point x that satisfies 1 − F_L(x) = P(L > x) = ε.

(49)

Simulating the distribution of losses

Estimating the distribution of losses can be achieved by generating a large number (say n) of independent replications of the following algorithm:

1 Generate the trajectories of the assets, and in particular their values at t + ∆t, by simulating either S(t + ∆t) or ∆S = S(t + ∆t) − S(t);
2 Evaluate the portfolio at t + ∆t, V(S(t + ∆t), t + ∆t), and the loss V(S(t), t) − V(S(t + ∆t), t + ∆t);
3 Estimate P(L > x) by

F̄_{L,n}(x) = (1/n) Σ_{i=1}^n 1_{L_i > x},

where L_i is the loss computed in the i-th replication.

(50)

Estimating VaR

Given the empirical distribution of portfolio losses F_{L,n}(x), it is easy to obtain a simple estimate of the VaR as the empirical quantile

x_ε = F_{L,n}^{−1}(1 − ε).

In order to find x_ε numerically:

1 Compute the empirical distribution of L, F_{L,n}(x_j), j = 1, ..., M.
2 Derive a continuous curve F̂_{L,n} by interpolating F_{L,n}.
3 Solve the equation F̂_{L,n}(x) = 1 − ε.

(51)

A very simple example

% Consider a portfolio made of three independent assets
% whose dynamics follow a GBM.
r=0.02;
S0=[12.5;18;9];
sigma=[0.2;0.3;0.08];
n=[4 8 -3];          % units held of each asset
T=1;
N_Sim=100000;

Derive the empirical cdf of the loss of the portfolio and its 99% VaR.

(52)

A very simple example/2

% Monte Carlo simulation of asset prices
mu=(r-sigma.^2/2).*T;
s=sigma.*sqrt(T);
L=zeros(N_Sim,1);
for i=1:N_Sim
    U=randn(3,1);
    ST=S0.*exp(mu+s.*U);
    L(i)=n*(S0-ST);     % portfolio loss in the i-th replication
end

(53)

A very simple example/3

% Deriving the cdf of the loss and computing the 99% VaR
eps=0.01;
x=-100:0.01:150;
F=zeros(1,length(x));               % preallocate the empirical cdf
for i=1:length(x)
    F(i)=sum(L<=x(i))/N_Sim;
end
F_hat=@(xeps) interp1(x,F,xeps);    % interpolated cdf
VaR=fsolve(@(xeps) F_hat(xeps)-(1-eps),20);   % requires the Optimization Toolbox

(54)

Credit Value at Risk (CrVaR)

Definition

Given a confidence level q and a horizon T̄, the Credit Value at Risk is the q-percentile under the physical measure P of the loss L_{τ,T̄,T}:

CrVaR_{q,T̄,T} = q-percentile under P of 1_{τ ≤ T̄} · Lgd · (E_τ[Π(τ, T)])⁺

It is the q-percentile of the loss random variable, which measures the losses up to time T̄ ≤ T on the exposure (E_τ[Π(τ, T)])⁺.

E_τ is taken under the risk-neutral measure.

Losses are non-zero only if default occurs before T̄.

(55)

References I

Black, Fischer and John C. Cox (1976). "Valuing corporate securities: Some effects of bond indenture provisions". In: The Journal of Finance 31.2, pp. 351-367.

Leland, Hayne E. (1994). "Corporate debt value, bond covenants, and optimal capital structure". In: The Journal of Finance 49.4, pp. 1213-1252.

Longstaff, Francis A. and Eduardo S. Schwartz (1995). "A simple approach to valuing risky fixed and floating rate debt". In: The Journal of Finance 50.3, pp. 789-819.

Merton, Robert C. (1974). "On the pricing of corporate debt: The risk structure of interest rates". In: The Journal of Finance 29.2, pp. 449-470.

(56)

References II

Zhou, Chunsheng (2001). "The term structure of credit spreads with jump risk". In: Journal of Banking & Finance 25.11, pp. 2015-2040.
