
Academic year 2012/2013

UNIVERSITÀ DEGLI STUDI DI PISA

Laurea Magistrale in Economics

THE MERTON DISTANCE TO DEFAULT MODEL:

AN EMPIRICAL ANALYSIS

Supervisor:

Prof. Giulio Bottazzi

Candidate:

Ettore Strapazzon


Introduction

1 The distance to default: theoretical basis, empirical studies and rating agencies
1.1 A general definition of distance to default
1.2 Distance to default: the formula
1.3 The Black-Scholes-Merton model of option pricing
1.4 Empirical studies on the distance to default
1.5 Two examples of option-based rating measures: the Expected Default Frequency (EDF) and the Equity Implied Rating (EIR)
1.6 A simpler measure of distance to default: the Naïve Distance to Default of Bharath and Shumway (2008)

2 Data description
2.1 Definition of default
2.2 The construction of the database and its composition

3 Methods, tests and statistics used in the analysis of data
3.1 Estimation method of the Merton distance to default
3.2 General considerations on the logistic model
3.3 Analysis of the errors: the confusion matrix
3.4 The CAP curve and the accuracy ratio
3.5 The robustness check

4 Empirical results
4.1 Statistical description of the distance to default measures
4.2 Results on the performance of the DD models
4.3 The distance to default and the actual default rates: an empirical mapping
4.4 A robustness check

Conclusions and final remarks

A Harmonization of industry classification for CEBI and CapitalIQ
B Distributions of the distance to default measures
C The regDD: correlation and distribution


Introduction

In the last decade, the recent financial crisis, together with the development of new corporate debt products and credit derivatives, has generated a growing interest in default prediction models. Consequently, the study of models capable of correctly evaluating the credit default risk of a company has gradually gained central importance in research, both among academics and practitioners. Indeed, the subjects involved in business activities, in particular those providing financial resources to firms, have been requiring more accurate instruments to assess firms' ability to repay their debts. Rating agencies developed and provided a number of new rating products to the market and to shareholders, while banking groups were required by the Basel Committee to develop internal rating models to quantify and manage the credit risk exposure of their portfolios. The academic literature has also shown a renewed interest in this type of model, with an increasing number of studies and publications.

In general terms, it is possible to identify many different classes of models aimed at predicting default, using diverse analytical tools to estimate the default probability and to identify the most relevant factors that affect it. This Thesis deals with a relatively recent type of model, known as the structural or "firm-value" model, and in particular with the Merton distance to default model. Based on the option pricing theory of Black and Scholes (1973) and Merton (1974), and first developed by Vasicek (1984) for the KMV Corporation, this model considers the equity of a firm as a call option on the market value of its assets, with a strike price equal to the value of its liabilities: within this framework the default event occurs when the company's asset value falls below the value of its liabilities, namely its default point.

The Merton model is at the basis of two proprietary rating models provided by Moody's and Fitch, respectively the Expected Default Frequency (EDF) and the Equity Implied Rating (EIR). In recent years it has been increasingly studied in academic research, starting from Vassalou and Xing (2004) and Hillegeist et al. (2004), and many different variants of its original formulation have been implemented and tested. One in particular, the "naïve" distance to default model put forward by Bharath and Shumway (2008), is considered in this work: this version of the original model is of particular interest since it abandons the theoretical foundation of the Black-Scholes-Merton approach while maintaining the functional form of the standard distance to default. In this way, their model provides a measure of distance to default that is easier to estimate and appears to be a significant factor in default prediction.

This Thesis reports the main empirical results obtained during an internship in a large Italian bank where I had the honor and the pleasure of working for a nine-month period. The aim of this research is to test and compare the performance of the Merton model and the naïve distance to default using a database of large international publicly traded firms provided by the Bank. It will be shown that, although both models are good default predictors, the Merton distance to default still presents the best results.

In section 1, a brief review of the main contributions to the distance to default model is presented, from its theoretical basis, to the main papers that analyzed it, to the rating agencies' models and its naïve version. Section 2 provides a description of the data, with particular attention to the construction of the database, the definition of default that is considered, and the presentation of its contents. The analytical methods are described in section 3, starting from the estimation procedure of the distance to default measure, to the main statistics used to test its ability to predict default. Section 4 presents the empirical results obtained, together with an exercise on the empirical default rate distribution and a robustness check on the main statistic. A synthesis and some further considerations on the results are reported in the conclusions.


1 The distance to default: theoretical basis, empirical studies and rating agencies

Broadly speaking, we can define the default event as the moment when a firm is no longer able to repay its debt, or a part of it. The Distance to Default (DD from here on) is a measure built on this very intuitive definition and it is basically an evaluation of the firm's default risk: it provides a quantitative assessment of how far a firm is from defaulting, looking only at its market value and current financial situation.

The basic intuition behind the Merton DD model is to consider the claims on the asset value of a firm, like its equity, as a call option on the underlying firm's market value, with a strike price equal to the value of the firm's liabilities. Under this view, the default event corresponds to the case in which the call is out of the money at the expiry date. In particular, the Distance to Default measure represents the difference between an estimate of the firm's market value and the face value of the firm's debt, scaled by the standard deviation of the same firm's value. Although both the market value and the volatility of assets are not directly observable, in the case of public companies the literature generally makes use of the Black-Scholes equation in order to obtain estimates of these variables from the value of equity and its volatility. Finally, the distance to default can be used to calculate a default probability either by mapping it onto an empirical default distribution (Moody's KMV approach), or by relying on the standard normality assumption on the firm's asset value (classical Merton DD approach).

In the next pages I start by presenting the classical Merton distance to default in formal terms, together with the iterative procedure generally used to calculate it. A brief description of some of the most relevant papers that considered this model under various aspects is then provided, followed by a review of two of the most famous rating measures based on Merton's default probability model, namely Moody's KMV Expected Default Frequency and Fitch's Equity Implied Rating. This section concludes with the presentation of a recently developed, simplified version of the distance to default.


1.1 A general definition of distance to default

The Distance to Default measure is used to quantify the default risk of a firm, seen as the uncertainty surrounding a firm's ability to service its debts and obligations. In order to understand where its general formulation comes from, it is useful to start by defining the default event and then analyze the main factors that concur to determine the risk associated with it for a single firm.

In general terms it is possible to identify the "default" as the moment when the firm is no longer able to meet its obligations, or a part of them, within the committed time. This very generic definition has, in practice, a number of possible specifications and thus it is hard to find a precise description of the firm's default: this depends on a series of conditions, ranging from the credit event concerned to the national legislation. Usually events like missed payments, bankruptcy, debt restructuring and government bailouts are considered as defaults, but this set can be widened or restricted on the basis of the needs and the objective of the analysis.

Following the analysis presented in Crosbie and Bohn (2003), I introduce a qualitative description of the firm's default risk as mainly related to the state of three variables: the asset value of the firm, the volatility of this value and the leverage.

− The market value of a firm's assets is of particular importance for the evaluation of its future profitability: this measure represents a collective assessment made by the market participants of the present value of the expected future cash flows, discounted at an appropriate rate. The greater this value, the further the firm is from default.

− The asset risk is the uncertainty related to the asset value. It embodies both the firm's business and industry risk and it is measured through the volatility of the asset value. The higher the volatility, the smaller the difference between the firm's value and its liabilities can be, and so the closer the firm can be to default.

− Last, the leverage of the firm, as the ratio between the book value of its liabilities and the market value of its assets, takes into account the firm's contractual liabilities, that is, the amount the firm must repay. The greater the leverage, i.e. the higher the amount of liabilities with respect to the market value, the closer the firm is to default.

Once the main components of default risk have been defined, it is easy to introduce a first general definition of the firm's Distance to Default that relates these elements together:

$$\text{Distance to default} = \frac{1}{\text{Asset volatility}}\,\ln\!\left(\frac{\text{Market value of assets}}{\text{Book value of liabilities}}\right)$$

This general formulation is mainly used in studies and publicity materials of Moody's KMV.
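As a purely illustrative example with assumed numbers (not taken from the data used in this Thesis), a firm with a market value of assets of 120, a book value of liabilities of 100 and an asset volatility of 25% would have

$$\text{Distance to default} = \frac{1}{0.25}\,\ln\!\left(\frac{120}{100}\right) \approx 0.73,$$

that is, the firm is less than one standard deviation away from its default point.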

Given this broad definition it is possible to provide a more precise formulation of the DD in mathematical terms, based on some widely accepted assumptions, as is usually done in the literature. Two variables needed for the calculation of the DD are not directly observable, namely the market value of the firm's assets and its volatility. For this purpose it is necessary to introduce the Black-Scholes option pricing model, which relates these values to the value of equity and its volatility.

In the next sections the standard formulation of the Merton distance to default is presented, starting from the definition of the probability of default. Then the BS formula is described, together with an iterative procedure commonly used to obtain empirical estimates of the DD from market data.

1.2 Distance to default: the formula

The main innovation of Merton's approach lies in considering the equity of a firm as a call option on its assets: the equity's feature of limited liability means that the equity holders have the right, but not the obligation, to pay off the debt holders and take over the firm's assets. The strike price of the call option on the assets is equal to the value of the liabilities. Within this framework the probability of default corresponds to the probability that the option is out of the money at maturity, i.e. the probability that the firm's asset value is lower than the debt value at its expiration date. A first mathematical description was introduced by Vasicek (1984) and it constitutes the basis of the VK model, a model of default probability developed in the late 1980s by the KMV Corporation, which was then acquired by Moody's in 2002. In the next lines the probability of default in its standard formulation is presented. It can be found described in many books and official documents, like Sobehart et al. (2000), Crosbie and Bohn (2003), Crouhy and Galai (2001) and Sun et al. (2012), to cite some.

In order to have a more proper definition of the default probability, it is necessary to start from the widely accepted assumption that the market value of assets follows a geometric Brownian motion of the form

$$dV = \mu V\,dt + \sigma_V V\,dz,$$

where V and dV are the firm's asset value and the change in asset value; µ is the drift rate of the firm's asset value, that is, the expected return on assets; σ_V is the volatility of the asset value; and dz is a Wiener process. As I am going to show in the next section, this basic assumption about the changes in asset values derives directly from the option pricing theory. The time path of the asset value, together with an insight into the practical meaning of the distance to default (DD) and the default probability (EDF), is well described in figure 1, taken from Crosbie and Bohn (2003):

Figure 1: The asset path (Crosbie and Bohn, 2003)

The figure shows the typical stochastic process followed by the asset value through time, from the observation moment at t = 0 until a time T that is supposed to be the expiration time of the liabilities. At T the asset value follows a known distribution (lognormal under the model's assumptions), and the area of this distribution lying below the default point (theoretically equal to the face value of the debt, it is usually calculated as the short-term liabilities plus half of the long-term ones) is the probability of default, which the figure labels EDF (Expected Default Frequency), the name used by Moody's KMV for this quantity. The space separating the expected asset value from the default point, measured in terms of standard deviations, is the distance to default (DD). Starting from these considerations, I can then derive in more formal terms the probability of default at time t as:

$$p_t = \Pr\left[V_t \le F_t \mid V_0 = V\right] = \Pr\left[\ln V_t \le \ln F_t \mid V_0 = V\right]$$

Specifically, this is the probability that at time t the asset value $V_t$ will be lower than the default point $F_t$, given that at time zero it is equal to V (and the probability does not change if we consider the logarithms of the variables). Given the initial assumption describing the changes in asset value, the logarithm of $V_t$ can be written as:

$$\ln V_t = \ln V + \left(\mu - \frac{\sigma_V^2}{2}\right)t + \sigma_V\sqrt{t}\,\varepsilon$$

where ε is the random component of the firm's return, which is commonly assumed to be normally distributed, i.e. $\varepsilon \sim N(0,1)$. Thus, the probability of default becomes:

$$p_t = \Pr\!\left[\ln V + \left(\mu - \frac{\sigma_V^2}{2}\right)t + \sigma_V\sqrt{t}\,\varepsilon \le \ln F_t\right]$$

and, rearranging:

$$p_t = \Pr\!\left[-\,\frac{\ln(V/F_t) + \left(\mu - \frac{\sigma_V^2}{2}\right)t}{\sigma_V\sqrt{t}} \ge \varepsilon\right]$$

and given the assumption of normal distribution:

$$p_t = N\!\left[-\,\frac{\ln(V/F_t) + \left(\mu - \frac{\sigma_V^2}{2}\right)t}{\sigma_V\sqrt{t}}\right]$$

Since the DD is the number of standard deviations separating the firm from default, the probability of default also turns out to be equal to $p_t = 1 - N(\text{DD}) = N(-\text{DD})$. It is represented graphically in figure 1 by the area of the asset value distribution lying below the default point. Therefore, the DD is defined as:

$$\text{DD} = \frac{\ln(V/F) + \left(\mu - \tfrac{1}{2}\sigma_V^2\right)T}{\sigma_V\sqrt{T}}$$

Finally, in this formula it is possible to recognize all the main elements highlighted at the beginning: the numerator contains the logarithm of the expected asset value over the liabilities, and the denominator is the volatility of the assets.
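As an illustration of the formula above, the following minimal Python sketch computes the DD and the associated default probability; the function name and the numerical inputs are purely hypothetical, and in practice V and σ_V must first be estimated as described below:

```python
from math import log, sqrt
from scipy.stats import norm

def merton_dd(V, F, mu, sigma_V, T=1.0):
    """Distance to default and default probability, given asset value V,
    default point F, expected asset return mu and asset volatility sigma_V."""
    dd = (log(V / F) + (mu - 0.5 * sigma_V**2) * T) / (sigma_V * sqrt(T))
    pd = norm.cdf(-dd)  # p_t = N(-DD)
    return dd, pd

# purely illustrative numbers
dd, pd = merton_dd(V=120.0, F=100.0, mu=0.08, sigma_V=0.25, T=1.0)
print(f"DD = {dd:.2f}, default probability = {pd:.2%}")
```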

This is the standard basic formulation of the Merton distance to default, as provided by theory. The main problem in implementing this formula for the estimation of a firm's DD is represented by two variables: the asset value V and its volatility σ_V. In fact, these are not directly observable and need to be indirectly estimated from other variables. If all liabilities were traded, the market value of assets could be obtained as the sum of the market values of the liabilities. However, typically only the equity has an observable price and thus the asset value must be inferred from the equity value alone. For this reason the Black-Scholes equation for option pricing is used: this formula, in the version proposed by Merton (1974), relates the equity value of a publicly traded firm to its asset value and volatility. Researchers use this formula, together with an iterative estimation procedure, to obtain approximations of the firm's market asset value from its equity value. In the next section I provide a brief description of the Black-Scholes equation and of the iterative procedure in its generic form, followed by some examples taken from the literature on the implementation of this procedure to obtain measures of the Merton DD.

1.3 The Black-Scholes-Merton model of option pricing

The Black-Scholes model is a milestone in the theory of option pricing. It was formulated for the first time in 1973 by Fischer Black and Myron Scholes, the two researchers who gave their names to the formula. Among the many comments and integrations provided to this model by a number of authors in the years following its publication, the most relevant ones came from the works of Merton, who contributed to bringing this model into the field of credit risk analysis (Merton, 1974). Before Black and Scholes (1973), other preliminary attempts at finding analytical solutions to the option valuation problem had been provided in the sixties: see, for instance, Samuelson (1965) or Samuelson and Merton (1969).

In their original article, Black and Scholes (1973) construct a model for the estimation of the value of an option w through its maturity time T and the price of the underlying stock X. In particular, they consider a European call option: this type of option is exercised (i.e. it is "in the money") only if at the expiry date it has a positive value, that is, if the stock price is greater than or equal to the exercise price c. Moreover, the formal BS model is built on some simplifying assumptions of an "ideal market":

− Borrowers and lenders can trade in the market at a short-term interest rate r, which is known and constant.

− The return on the common stock follows a geometric Brownian motion of the type $dX/X = \mu\,dt + \sigma_X\,dz$, where µ is the instantaneous expected return on the common stock, $\sigma_X^2$ is the instantaneous variance of the return and dz is a standard Gauss-Wiener process. This assumption implies that the logarithm of the stock price is normally distributed.

− There are neither distributions of dividends nor changes in the exercise price during the life of the contract.

− There are no transaction costs, taxes, or problems with indivisibility of assets.

− There are no penalties to short selling.

In this framework the value of the option can be derived as a function of the stock price X and time t: w(X, t). The objective of the model is to obtain an explicit formula for this value. With this aim, the authors consider a hedged position, constructed with a share of stock and a number of options on the stock: this position is obtained by going long in the first security and short in the second one (i.e. writing the options). In equilibrium, its expected return must be equal to the return on a risk-free asset. The value of the hedged position does not depend on the stock price, while the number of options that must be written to maintain the hedged position with one share of stock is $1/w_1(X,t)$, where $w_1$ is the partial derivative of w with respect to X, so that it changes with variations in X and t. The value of the equity in the hedged position is then $X - w/w_1$ and its change is equal to $\delta X - \delta w/w_1$. Therefore, expanding this last expression and setting it equal to the value of the equity multiplied by $r\,\delta t$ (since the expected return on a hedged position must be equal, by definition, to the return on a risk-free asset), Black and Scholes (1973) obtain a stochastic differential equation for the value of the option:

$$w_2 = r\,w - r\,X\,w_1 - \tfrac{1}{2}\,\sigma^2 X^2 w_{11}$$

together with the boundary conditions:

$$w(X,T) = X - c \quad \text{for } X \ge c, \qquad w(X,T) = 0 \quad \text{for } X < c,$$

where T is the expiry date of the option and the numbers in the subscripts indicate derivatives with respect to X or t (respectively 1 or 2). Applying substitutions and simplifications, the authors are able to solve this last equation and obtain the so-called Black-Scholes formula for the value of the option (a particularly clear and intuitive explanation of the options market and of the Black-Scholes-Merton model can be found in Bodie et al. (2009), chapter 9):

$$w(X,t) = X\,\Phi(d_1) - c\,e^{r(t-T)}\,\Phi(d_2)$$

where

$$d_1 = \frac{\ln(X/c) + \left(r + \tfrac{1}{2}\sigma^2\right)(T-t)}{\sigma\sqrt{T-t}}, \qquad d_2 = \frac{\ln(X/c) + \left(r - \tfrac{1}{2}\sigma^2\right)(T-t)}{\sigma\sqrt{T-t}}$$

and Φ(·) is the cumulative normal distribution function. The very same equation can also be obtained within the theoretical framework of the capital asset pricing model.
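A minimal Python sketch of this pricing formula, with purely hypothetical input values, may help fix the notation:

```python
from math import log, sqrt, exp
from scipy.stats import norm

def bs_call(X, c, r, sigma, T, t=0.0):
    """Black-Scholes value of a European call: stock price X, exercise price c,
    risk-free rate r, volatility sigma, expiry T, valuation date t."""
    tau = T - t
    d1 = (log(X / c) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return X * norm.cdf(d1) - c * exp(-r * tau) * norm.cdf(d2)

print(bs_call(X=100.0, c=95.0, r=0.03, sigma=0.2, T=1.0))  # illustrative values only
```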

Despite the complexity of the formal derivation of this option pricing formula, it is possible to give it an intuitive interpretation by analyzing its constitutive parts: the option value is equal to the difference between the expected value of the stock at time T, conditional on the option being in the money (first term on the right-hand side), and the discounted strike price adjusted by the probability that the option will be exercised (second term on the RHS).

Although this model originally refers to European call options, it can also be applied to American call options, and it can easily be modified to cover European put options as well. For American put options, on the contrary, the formulation is formally more complex (Merton, 1973).

The most important feature of this model, as far as the object of this research is concerned, is that this basic approach can be applied to develop a general pricing theory for corporate liabilities. In particular, Merton (1974) proved that the very same BS equation can be used to estimate the value of the equity of a firm. Specifically, considering a firm with a simplified structure composed of assets of value V and just two classes of liabilities, i.e. a single homogeneous class of debt (F) and a class of equity (E), the BS option pricing formula can be used to calculate E as:

$$E = V\,\Phi(d_1) - F\,e^{-rT}\,\Phi(d_2)$$

where

$$d_1 = \frac{\ln(V/F) + \left(r + \tfrac{1}{2}\sigma_V^2\right)T}{\sigma_V\sqrt{T}}, \qquad d_2 = \frac{\ln(V/F) + \left(r - \tfrac{1}{2}\sigma_V^2\right)T}{\sigma_V\sqrt{T}}$$

This is of particular interest since, as I pointed out initially, the Merton DD model rests on the very intuitive idea of considering the equity of a firm as a call option on its asset value, so that the firm defaults when its asset value is not sufficient to cover the liabilities. It is possible to notice here that the second term on the right-hand side of the equation is similar to the DD formula, and indeed their meanings are quite close: the main difference is that the Black-Scholes-Merton equation is derived under the assumption of risk neutrality, so that all assets are expected to grow at the risk-free rate, while, on the contrary, the probability of default for a firm also depends on the actual distribution of future asset values, which is, in turn, dependent on µ.

Finally, from this version of the BS formula used to compute the equity value of a firm, it is possible, by applying Ito's Lemma, to derive the volatility of the equity as a function of the volatility of the assets, i.e.:

$$\sigma_E = \frac{\partial E}{\partial V}\,\frac{V}{E}\,\sigma_V$$

These two equations then allow one to calculate the market value of the firm's assets and its volatility, which are necessary to estimate the DD and hence the probability of default. Theoretically this operation is straightforward, since we have a two-equation system in two unknowns. However, as explained, for instance, in Crosbie and Bohn (2003), the asset values found by solving these equations simultaneously tend to be unreasonable, mainly because of the excessive volatility of the market leverage. Most of the literature facing this problem, as well as the rating agencies that implement some form of equity-implied rating based on the Merton DD model, generally make use of an iterative estimation procedure that consists in starting from a realistic approximation of the asset value and then using the BS equation to obtain subsequent estimates of V and σ_V, until they converge. More specifically, this iterative process can be represented through three main steps:

1. A proxy for V is calculated: usually it is set equal to the sum of the liabilities and the equity, i.e. E + F. On the basis of this approximation, an initial value of the asset volatility is also computed as $\sigma_V = \frac{E}{E+F}\,\sigma_E$.

2. This proxy for σ_V is then used within the BS formula, together with the other available data, to infer V, which in turn is used to calculate a new series of σ_V. These are substituted again into the BS equation to obtain a new series of estimates for the asset value.

3. With these new values of V the process is repeated from step 2 and iterated until successive estimates converge, that is, until the difference between two adjacent estimates is smaller than a certain tolerance, chosen arbitrarily.

Once the firm’s market value and its volatility are calculated, µ and then the DD and the probability of default can be obtained.
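The following Python sketch illustrates one possible implementation of this scheme under simplifying assumptions: a single valuation date with equity value E, face value of debt F and equity volatility σ_E held fixed (whereas the implementations discussed below typically infer a daily series of asset values and re-estimate σ_V from it), a one-year horizon, and an arbitrary convergence tolerance. All function names, the volatility update rule and the numbers are hypothetical:

```python
from math import log, sqrt, exp
from scipy.stats import norm
from scipy.optimize import brentq

def equity_value(V, F, r, sigma_V, T=1.0):
    """Black-Scholes-Merton equity value for a given asset value and volatility."""
    d1 = (log(V / F) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * sqrt(T))
    d2 = d1 - sigma_V * sqrt(T)
    return V * norm.cdf(d1) - F * exp(-r * T) * norm.cdf(d2)

def iterate_asset_values(E, F, sigma_E, r, T=1.0, tol=1e-6, max_iter=100):
    """Step 1: start from V = E + F and sigma_V = E/(E+F) * sigma_E.
    Steps 2-3: repeatedly invert the BSM equation for V and update sigma_V
    until two adjacent estimates differ by less than tol."""
    sigma_V = E / (E + F) * sigma_E
    V = E + F
    for _ in range(max_iter):
        # invert the BSM equation: find V such that equity_value(V) = E
        V_new = brentq(lambda v: equity_value(v, F, r, sigma_V, T) - E, E, (E + F) * 10)
        sigma_V_new = E / V_new * sigma_E   # simple volatility update (an assumption)
        if abs(V_new - V) < tol and abs(sigma_V_new - sigma_V) < tol:
            return V_new, sigma_V_new
        V, sigma_V = V_new, sigma_V_new
    return V, sigma_V

# illustrative numbers only
V_hat, sigma_hat = iterate_asset_values(E=40.0, F=80.0, sigma_E=0.6, r=0.03)
print(V_hat, sigma_hat)
```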

There exists a small but growing literature analyzing Merton DD-type measures. The empirical implementations differ in practice in many secondary aspects, depending on the assumptions and the variables that the researchers decide to consider.


In the following I briefly describe some formulations of the DD that differ in some details from the standard version I just introduced, together with the empirical tests run specifically on these measures. Then, two of the most well-known commercial products in the credit rating market based on the Merton distance to default, namely the Moody’s EDF and Fitch’s EIR, are presented.

1.4 Empirical studies on the distance to default

The distance to default measure has been increasingly studied in the last decade as an alternative, structural model of default prediction. Vassalou and Xing (2004), for instance, estimate the monthly distance to default and the default probability (which they call "default likelihood", DLI, and obtain through the standard assumption of normality) for a sample of firms over the period 1971-1999, and use it to evaluate the effect of default risk on equity returns. In their computations, they substitute the volatility of assets in the BS equation with the estimated average volatility of equity σ_E over the previous year, and solve the two-equation system to find the asset value. Moreover, they calculate the expected return on assets µ as the simple average of the variations of the logarithm of V. The authors then test the ability of their DD to capture default risk, and find that it has an accuracy ratio (AR) of 0.59. (The Accuracy Ratio statistic is a measure of the accuracy of a model in predicting default: it can range from -1 for the worst possible model to 1 for a perfect model, taking the value 0 for a model that assigns a random default risk to the firms; it is equivalently called Somers' D or Area Under the Receiver Operating Characteristic Curve, AUC. A more specific explanation of this test is presented in section 3.4.) They also show that the DD contains much more information than that conveyed just by the size of the firms or the volatility of their assets, since the ARs of these measures are noticeably lower. Finally, as a further proof of the goodness of this measure, Vassalou and Xing (2004) compare the DLI of defaulted firms with that of a group of non-defaulted firms similar by size and industry, and find that it is sharply higher in the years prior to default.

Their choice to estimate µ as the average of the variations of the logarithm of V has been criticized by Hillegeist et al. (2004), since, they argue, it can easily result in a negative value of the expected growth rate, which is not consistent with asset pricing theory. To solve this problem, these authors calculate it as the maximum between the expected return on assets and the risk-free rate r, i.e.:

$$\mu_t = \max\!\left[\frac{V_t + \text{Dividends} - V_{t-1}}{V_{t-1}};\; r\right]$$

Furthermore, these authors solve the BS equation for V and σ_V using the standard proxies of these variables shown above (namely E + F and $\frac{E}{E+F}\sigma_E$), implementing an iterative process that makes use of a Newton search algorithm and ends with a pair of values that solves the system. They also use a modified version of the DD and BS formulas in order to account for the effect of dividends, introducing a dividend rate measure δ, calculated as the sum of the prior year's common and preferred dividends divided by the approximate market value of assets. The BSM and DD formulas used are, respectively:

$$\text{BSM}: \quad E = V e^{-\delta T}\,\Phi(d_1) - F e^{-rT}\,\Phi(d_2) + \left(1 - e^{-\delta T}\right)V$$

where

$$d_1 = \frac{\ln(V/F) + \left(r - \delta + \frac{\sigma_V^2}{2}\right)T}{\sigma_V\sqrt{T}} \qquad \text{and} \qquad d_2 = d_1 - \sigma_V\sqrt{T}$$

$$\text{DD}: \quad \frac{\ln(V/F) + \left(\mu - \delta - \frac{\sigma_V^2}{2}\right)T}{\sigma_V\sqrt{T}}$$

and, of course, the probability of default is π = N(−DD) (named "BSM-Prob" in their paper). This is then compared with two other traditional measures used to estimate the probability of default, based on accounting data: Altman's Z-score (Altman, 1968) and Ohlson's O-score (Ohlson, 1980). Computing these indicators over a sample of over 78,100 firm-year observations along the 1980-2000 period, Hillegeist et al. (2004) estimate a discrete hazard model to analyze the ability of each of these measures to fit the data and then compare the log-likelihood statistics of the non-nested models: they find that the BSM-Prob contains significantly more information about the probability of default, and its pseudo-R² is higher by 20% to almost 40% depending on the coefficients used in computing the Z-score and the O-score.


Campbell et al. (2008) consider a similar measure, with the difference that they do not take into account the effect of dividends, and estimate a common expected return value µ by introducing a fixed empirical proxy for it in the numerator of the DD equation. In this paper the authors originally estimate the probability of default using a standard logit model on a data-set of over 1.7 million monthly observations on firms in the period from 1963 to 2003. The DD is calculated as an alternative to this reduced-form econometric approach, and the results of the two models are then compared: the logit regression with the DD as the only variable shows a modest pseudo-R² of 16% and a highly significant negative coefficient of the DD, consistent with expectations. However, when the variables of the reduced-form model are inserted in the regression, the DD coefficient remains negative and significant only at a time horizon of 1 or 3 years, and the pseudo-R² does not really change. This happens presumably because, as noted by the authors, the constitutive elements of the DD are already present among the added variables, like the asset volatility and the leverage. On these results Campbell et al. (2008) reach the conclusion that, despite the good performance of the Merton model, the reduced-form econometric approach is a better tool for default prediction.

A similar analysis is provided by Bharath and Shumway (2008), who use the same method as Hillegeist et al. (2004) applied to the standard formulation of the Merton DD, without considering the dividend factor.

More recently, Tsai et al. (2012) propose a hybrid model that integrates a Merton-type probability of default into an accounting-based model. With the aim of predicting construction contractor default, these authors run a logit regression over a sample of 1,560 firm-year observations representing 121 individual contractors. They select four accounting variables plus the Merton probability of default, and measure the model's predictive performance by calculating the Accuracy Ratio statistic: it turns out that the hybrid model has the highest AR (0.873), although the option-based model (a logit regression with the Merton default probability as the only variable) follows closely with an AR of 0.858.

These are some of the main examples taken from the recent academic literature performing empirical tests on Merton-type models of DD and PD. Starting from the 1980s, however, this class of models has also been empirically verified, modified and implemented privately by some large corporations operating in the field of rating valuation, in order to develop reliable option-based rating tools to sell to the market. In the next section I provide a general description of two of the most well-known of these "products", namely Moody's EDF and Fitch's EIR.

1.5 Two examples of option-based rating measures: the Expected Default Frequency (EDF) and the Equity Implied Rating (EIR)

In 1989 Stephen Kealhofer, John McQuown and Oldrich Vasicek founded the KMV Corporation, which developed and implemented a model for the evaluation of debt securities based on the modern financial theory of derivative asset pricing. With this product they provided a measure of the credit risk of publicly traded firms in terms of probabilities rather than ordinal ratings. The KMV Corporation was then acquired by Moody's in 2002 for $210 million, and since then this measure of the probability of default has been offered by Moody's Analytics under the name of Expected Default Frequency (EDF). The measure is provided for all traded companies in the world and is updated with daily frequency. The EDF measure is derived directly from the Merton distance to default and it is a proprietary model, whose exact formulation is, of course, not publicly known. In fact, Moody's introduces more realistic characteristics into the theoretical model in order to construct a better approximation of real firms' capital structures and behavior. Nevertheless, thanks to the informative essays and papers published by Moody's, it is possible to have some more precise information about its main features and how it differs from the standard Merton approach.

The main difference between the EDF and the standard Merton default probability is that the former does not rely on the assumption of log-normality of the distribution of asset returns. Moody's instead uses an empirical mapping with historical default rates for classes of DD in order to estimate a fitted distribution to use in place of the normal one. To calculate this empirical distribution, Moody's relies on a large data-set composed of data on large public companies from 1972 to date. These firms have been divided into buckets on the basis of their DD, and for each bucket the historical rate of default has been computed. The non-linear function that best fits these data is used instead of the normal distribution in the calculation of the EDF. In figure 2 the two distributions are shown: this comparison is of particular interest, since it shows that, while the two curves practically overlap for intermediate values of DD, the normal curve is higher for DDs close to zero and lower for higher values of DD.

Figure 2: Merton’s normal distribution and Moody’s empirical mapping (Sun et al., 2012).

This is noteworthy since it means that Moody's, in implementing Merton's approach, breaks the deterministic relationship implied by the Merton model and the BS option pricing theory, and uses instead a relationship estimated through a non-parametric regression of the default indicator on the DD.

Moreover, this also means in practice that the standard Merton model would tend to overestimate the default probability for small DDs, while it would underestimate it for larger values. Sun et al. (2012) also provide a numerical example: when the DD is equal to 4, the normal distribution gives a probability of default of 0.003%, while according to the empirical mapping it should be approximately equal to 0.4%.
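A stylized Python sketch may clarify the mechanics of such an empirical mapping; the data below are simulated, the assumed relation between DD and default is hypothetical, and the bucket width is arbitrary — the sketch only illustrates the bucketing and the per-bucket historical default rate:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dd = rng.normal(4.0, 2.0, 10_000).clip(0.1)      # hypothetical panel of DD values
pd_true = 1.0 / (1.0 + np.exp(1.5 * dd))         # hypothetical "true" PD curve
default = rng.binomial(1, pd_true)               # simulated default flags
panel = pd.DataFrame({"dd": dd, "default": default})

# divide firm-year observations into DD buckets of width 0.5
bins = np.arange(0.0, 12.5, 0.5)
panel["bucket"] = np.digitize(panel["dd"], bins)

# historical default rate per bucket: this is the empirical mapping
mapping = panel.groupby("bucket")["default"].mean()

def empirical_pd(dd_value: float) -> float:
    """Look up the historical default rate of the bucket containing dd_value."""
    bucket = int(np.digitize(dd_value, bins))
    return float(mapping.get(bucket, np.nan))

print(empirical_pd(4.0))  # empirical PD for a DD of 4 under the simulated data
```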

Other differences between the Moody's KMV approach and Merton's one lie in the way the DD is calculated. Although, as I pointed out, the exact equation implemented by the rating agency is not publicly released, its main features are known. Following Sun et al. (2012) it is possible to classify these differences into theoretical modifications and empirical advances. The most important element of the first group is surely the slight change in the original asset value equation (the counterpart of the BS equation), due to the introduction of the possibility that default can occur at any time before the debt maturity date. This means that the EDF allows the default to happen at any time a firm's asset value falls below its default point, relaxing the original hypothesis that this can happen only at the time when the debt expires (this adds formal complexity to the asset value equation; an example is shown a few paragraphs below, where the Fitch EIR, which relaxes the same hypothesis, is described with the mathematical formulation provided in Liu et al. (2007)). Another relevant theoretical modification consists in the fact that the EDF model considers multiple classes of liabilities in a firm's capital structure, instead of the simple long-term/short-term distinction, and in particular: short-term liabilities, long-term liabilities, convertible securities, preferred stock and common stock. Finally, in the calculation of the DD, Moody's introduces various forms of cash leakages over time, like dividends on stocks, coupons on bonds and interest expenses on loans.

Turning to the empirical advances, the mapping from the DD to the EDF shown above is surely one of the most relevant. Besides that, another important empirical advance of the EDF is represented by the way the asset volatility is estimated: starting from an estimate obtained through an iterative procedure similar to the one already presented and commonly used in the literature, Moody's combines this value with a modeled asset volatility of comparable firms by geographical region and industrial sector. Last, it is worth noting that Moody's also employs estimation algorithms to define the best default point to be used in order to maximize the model's default prediction power: for non-financial firms this is set equal to the sum of the total short-term liabilities and half of the long-term ones (considering a one-year time horizon), while for financial firms it is a sector-dependent percentage of total liabilities.

The EDF model performance, in the form of the AR statistic, is provided by Moody's on their data over two different periods (Sun et al., 2012): it results in 83.89% for data between 2001 and 2007, and 81.9% for the 2008-2010 period.

Another quite famous option-based default probability measure is the Equity Implied Rating (EIR) developed by Fitch. This is a hybrid model, in the sense that it is estimated by combining a Merton-type probability of default with other relevant financial and market variables. It is calibrated on data from 1990 to 2005 for North American firms and from 2000 to 2005 for global firms, and it provides estimates of the default probability for approximately 13,000 US and Canadian firms and 17,000 firms from over 70 other countries in the world. An important feature of this model is that it allows – like Moody's EDF – firms to default before the standard time horizon of 1 year (or the theoretical assumption of the debt maturity date): in particular, default occurs whenever the asset value falls below a certain barrier value H, which is lower than or equal to the default point F of the standard model. The formal set of equations, as presented in Liu et al. (2007), is:

$$E_t = V_t\left[\Phi(x_+) - \left(\frac{H}{V_t}\right)^{\frac{2r}{\sigma_V^2}+1}\Phi(y_+)\right] - e^{-r(T-t)}\,F\left[\Phi(x_-) - \left(\frac{H}{V_t}\right)^{\frac{2r}{\sigma_V^2}-1}\Phi(y_-)\right]$$

where

$$x_\pm = \frac{\ln\frac{V_t}{F} + \left(r \pm \frac{\sigma_V^2}{2}\right)(T-t)}{\sigma_V\sqrt{T-t}} \qquad \text{and} \qquad y_\pm = \frac{\ln\frac{H^2}{V_t F} + \left(r \pm \frac{\sigma_V^2}{2}\right)(T-t)}{\sigma_V\sqrt{T-t}}.$$

In the model calibration, Fitch could not find a unique value for H that increases model performance. Therefore, in order to avoid complexity, they set it equal to the default point (0.5 × long-term liabilities + short-term liabilities). From these equations it is possible to derive the default probability, that is, the probability that the asset value falls below the barrier H at some time before T, or that at T it is lower than the default point F:

$$\pi_t = \Phi(-x_-) + \left(\frac{H}{V_t}\right)^{\frac{2r}{\sigma_V^2}-1}\Phi(y_-)$$
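A short Python sketch of this structural PD, applying the formula above to purely hypothetical inputs (with the barrier H set equal to the default point F, as Fitch does):

```python
from math import log, sqrt
from scipy.stats import norm

def structural_pd(V, F, H, r, sigma_V, tau):
    """Probability that the asset value hits the barrier H before tau
    or ends below the default point F at tau (barrier-model PD)."""
    x_minus = (log(V / F) + (r - 0.5 * sigma_V**2) * tau) / (sigma_V * sqrt(tau))
    y_minus = (log(H * H / (V * F)) + (r - 0.5 * sigma_V**2) * tau) / (sigma_V * sqrt(tau))
    return norm.cdf(-x_minus) + (H / V) ** (2 * r / sigma_V**2 - 1) * norm.cdf(y_minus)

# illustrative values only; H is set equal to the default point F
print(structural_pd(V=150.0, F=100.0, H=100.0, r=0.03, sigma_V=0.3, tau=1.0))
```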

Here again, the asset value and the asset volatility are obtained through an iterative procedure. The probability calculated from this last equation (called the structural PD in Liu et al. (2007)) is then mapped onto historical data on default rates in order to obtain a more reasonable default probability.


Beside the structural PD, other indicators that are considered relevant factors for default prediction enter the construction of the EIR: financial indexes, market performance indexes and macroeconomic indicators accounting for market risk. Moreover, the distance to default calculated for every firm is adjusted so that the median DD for each country is in line with the agency rating of the country itself. Then, a non-linear empirical mapping is implemented between each indicator's distribution and the historical default rate, with a method analogous to the one applied to the DD. The PDs obtained from this procedure are considered as independent variables ($x_i$) within an exponential logistic function, with the aim of analyzing their combined effect:

$$\widehat{PD}(t,\tau) = \frac{1}{1 + e^{\,a_0 + \sum_i a_i x_i(t,\tau)}}$$

where τ is the time horizon. The EIR can then be obtained through two different methods: a static mapping, where the firm's $\widehat{PD}$ is directly compared to the rating class of the firm, or a dynamic mapping. This last method is more complex, since it requires the creation of a score that includes, in addition to the $\widehat{PD}$, other information such as macroeconomic variables, the kind and dimension of the industrial sector, or variables related to the geographical region of the firm. I report here the formula provided for this step in Liu et al. (2007):

$$\text{Score}^k_{i,j}(t) = a_0\log\widehat{PD}(t) + b_1\,\text{size}(t) + b_2\,\text{countryDD}_i(t) + b_3\,\text{sectorDD}^k_j(t) + \sum_n c_n\,\text{IndustryDummy}_n$$

where size is expressed in terms of revenue, and countryDD_i and sectorDD_j^k are the average DD for country i and sector j. Three regional areas are identified by k: Pacific Rim, EMEA and Latin America; Western Europe, Australia and New Zealand; Japan and South Korea. The coefficients are estimated through a logit function, and the score obtained for each firm is then compared to the rating classes in order to obtain the final EIR.

The predictive performance of this model has been tested in different ways. The accuracy ratios of the DD and of the final PDs, enhanced by financial and market information, have been calculated and compared to traditional accounting-based measures like Altman's Z-score: considering the population of North American firms, the AR for the structural DD is 72.4%, outperforming the traditional default risk measures by 20%. Moreover, in all cases, even when disaggregated by region, industry or size, the final PD accuracy ratio is greater than the DD AR, being equal to 86.4% and 68.8% for the 1-year and 5-year horizons respectively when tested over the whole rated universe. The robustness of the model has been tested too, through a walk-forward test, by comparing the accuracy ratios of the in-sample and out-of-sample final PD for the period 1996-2005: the findings support the good robustness of the model. Finally, Fitch compared the implied rating with the agency rating, showing that 19.4% of issuers match the agency rating perfectly, and that 51.3% and 73.7% of them are within one notch and two notches respectively; that is, the two measures are consistent with each other, or, in other words, the market and the rating agencies evaluate the future creditworthiness of firms in a similar way.

1.6 A simpler measure of distance to default: the Naïve Distance to Default of Bharath and Shumway (2008)

The standard Merton model of distance to default, as described up to this point, is grounded on a strong theoretical basis (i.e. the Black-Scholes-Merton option pricing model) and requires complex and computationally demanding calculations when implemented over large databases using the iterative procedure to obtain its determinants. In their paper, Bharath and Shumway (2008) start from these general considerations to investigate the DD and its high predictive power, wondering whether it is due to the above-mentioned theoretical basis or whether it is mostly related to its functional form. In order to answer these questions, the authors formulate a "naïve" version of Merton's distance to default, which is easier to calculate since it does not involve solving the BSM formula for the asset value and its volatility, but which maintains its functional form. In particular, the naïve DD they propose is:

$$\widehat{DD} = \frac{\ln\!\left(\frac{E+F}{F}\right) + \left(\hat{\mu} - 0.5\,\hat{\sigma}_V^2\right)T}{\hat{\sigma}_V\sqrt{T}}$$

As a consequence, the naïve default probability is $\hat{\pi} = \Phi(-\widehat{DD})$. In this equation the value of F is calculated as the face value of the firm's debt. The authors assume that the debt volatility is directly related to the equity volatility as:

$$\hat{\sigma}_F = 0.05 + 0.25\,\sigma_E$$

where the intercept of five percent should account for term structure volatility. They approximate the total volatility of the firm with a weighted average of the equity and debt volatilities, i.e.:

$$\hat{\sigma}_V = \frac{E}{E+F}\,\sigma_E + \frac{F}{E+F}\,\hat{\sigma}_F = \frac{E}{E+F}\,\sigma_E + \frac{F}{E+F}\left(0.05 + 0.25\,\sigma_E\right)$$

Then they set the expected return on the firm's assets equal to the firm's stock return over the previous year: $\hat{\mu} = r_{t-1}$. It is worth noting that the interest of the authors is not in finding an exact empirically based equation able to substitute Merton's DD: they just want a formula emulating its functional form, built on some reasonable assumptions. In describing their model, they often underline the fact that their formulas have been chosen arbitrarily, as when they point out that "None of the numerical choices below is the result of any type of estimation or optimization", or, more specifically, when they argue that "It is fairly easy to criticize our naïve probability. Our choices for modeling firm volatility are not particularly well motivated and our decision to use past returns for µ is arbitrary at best. However, to quibble with our naïve probability is to miss the point of our exercise. We have constructed a predictor that is extremely easy to calculate, and it may have significant predictive power."
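A minimal Python sketch of the naïve measure, with hypothetical inputs (E and F in the same monetary unit, σ_E and the previous year's stock return expressed as annual figures; the function name is an assumption):

```python
from math import log, sqrt
from scipy.stats import norm

def naive_dd(E, F, sigma_E, ret_prev_year, T=1.0):
    """Naive distance to default and default probability in the spirit of
    Bharath and Shumway (2008): no iterative estimation of V and sigma_V."""
    sigma_F = 0.05 + 0.25 * sigma_E                        # assumed debt volatility
    sigma_V = E / (E + F) * sigma_E + F / (E + F) * sigma_F
    mu = ret_prev_year                                     # previous year's stock return
    dd = (log((E + F) / F) + (mu - 0.5 * sigma_V**2) * T) / (sigma_V * sqrt(T))
    return dd, norm.cdf(-dd)

print(naive_dd(E=40.0, F=80.0, sigma_E=0.6, ret_prev_year=0.05))  # illustrative only
```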

More specifically, there are three hypotheses these authors want to test. Firstly, they want to test whether the probability of default given by the Merton model is a sufficient statistic for default prediction, that is, whether it contains all the necessary information or, on the contrary, whether it is possible to build a model with better predictive properties. Secondly, they are interested in studying the role of the Z-score functional form of the distance to default: specifically, they test whether it is possible to construct a sufficient statistic for the default probability without considering this functional form. The third hypothesis they analyze concerns the importance of the standard algorithm (based on the BS equation and the iterative procedure) for default prediction: again, they test whether it is possible to build a sufficient statistic without considering it.

They begin the empirical analysis by incorporating the naïve PD into a Cox proportional hazard model, using data on firms' defaults from 1980 to 2003: they develop seven models, firstly by testing the standard Merton probability of default and the naïve one separately, then by comparing them in the same model, and finally by further adding into the hazard model several other default forecasting variables. Their findings show that both the standard π and $\hat{\pi}$ are significant predictors, with the relative distances to default sharing similar distributions and being highly correlated. Moreover, the results clearly indicate that the Merton PD is not a sufficient statistic for the default probability, while its functional form seems to be an important construct for predicting default, since it is not possible to calculate a sufficient statistic without considering the naïve $\hat{\pi}$. Finally, the same evidence also suggests that the algorithm provided in the standard Merton model for calculating the firm's asset value and its volatility is not an important factor.

Therefore, it turns out that the naïve measure of default probability that Bharath and Shumway propose shows good performance in predicting default. The same results obtained with the hazard model analysis are also evident when they look at the out-of-sample forecasting ability of these measures: despite the fact that models including the Merton π together with other variables generally perform better than the others, the authors are also able to build models that outperform the results obtained with the standard π alone, without considering it. Finally, other confirmations come from the CDS spread and bond yield spread regressions on π, $\hat{\pi}$ and other explanatory variables: in fact, the Merton default probability is not a significant variable when the others are included in the models.

A measure similar to the naïve distance to default introduced in Bharath and Shumway (2008) is considered in more recent research by Bottazzi et al. (2011). Analyzing a database of almost 20,000 limited liability firms in the period from 1998 to 2003, these authors provide a rich set of statistical comparisons between companies that defaulted in the years 2003 or 2004 and non-defaulting firms, over a number of different financial and economic variables. After providing a descriptive analysis of the empirical densities of defaulting and non-defaulting firms, they move on to run non-parametric tests of distributional equality to study the dominance, in terms of distribution, between the two groups of firms. A probit model is then used to identify the main determinants of default among the selected variables at different dates from default. A modified version of the naïve distance to default is finally introduced into the analysis: since most of the firms in the database are not publicly traded and the database itself does not contain any market information, they build an equivalent measure (called "Book DD") using data on the value of shares and debt derived from figures on leverage and total assets. In particular, the value of equity is substituted by a proxy called Book Equity (BE), calculated as the denominator of the leverage measure, i.e. BE = TotalAssets/Leverage. Consequently, the market value of debt, F, is approximated by D = TotalAssets − BE. Furthermore, they replace the standard µ with the time series average of the firm's growth rates of Book Equity (µ_BE). Finally, the volatilities of D and BE are computed as the standard deviations of the growth rates of these variables. Thus, the Book DD is equal to:

$$\text{BookDD} = \frac{\ln\!\left[\frac{BE+D}{D}\right] + \left(\mu_{BE} - 0.5\,\hat{\sigma}_V^2\right)}{\hat{\sigma}_V}$$

where:

$$\hat{\sigma}_V = \frac{BE}{BE+D}\,\sigma_{BE} + \frac{D}{BE+D}\,\sigma_D$$
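Under these definitions, a minimal Python sketch of the Book DD for a single firm; the function name, column layout and yearly figures are hypothetical, and yearly data are assumed:

```python
import numpy as np
import pandas as pd

def book_dd(total_assets: pd.Series, leverage: pd.Series) -> float:
    """Book DD in the spirit of Bottazzi et al. (2011), computed from yearly
    time series of total assets and leverage for a single firm."""
    be = total_assets / leverage               # Book Equity proxy
    d = total_assets - be                      # debt proxy
    g_be = be.pct_change().dropna()            # growth rates of Book Equity
    g_d = d.pct_change().dropna()              # growth rates of debt
    mu_be = g_be.mean()
    sigma_be, sigma_d = g_be.std(), g_d.std()
    be_t, d_t = be.iloc[-1], d.iloc[-1]
    sigma_v = be_t / (be_t + d_t) * sigma_be + d_t / (be_t + d_t) * sigma_d
    return (np.log((be_t + d_t) / d_t) + (mu_be - 0.5 * sigma_v**2)) / sigma_v

# illustrative yearly figures for one firm
ta = pd.Series([100.0, 110.0, 118.0, 130.0, 128.0])
lev = pd.Series([2.5, 2.4, 2.6, 2.5, 2.7])
print(book_dd(ta, lev))
```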

This alternative naïve measure of distance to default is then included in the probit model, both as the sole variable and together with the whole set of financial and economic covariates already analyzed in the paper. A bootstrap sampling technique is employed to obtain robust estimates of the coefficients, and a series of tests are run in order to assess the goodness of fit and the prediction accuracy of the models (specifically, the Brier score, the type I and type II errors, and the percentages of correctly predicted defaulting and non-defaulting firms). The findings show that the BookDD variable is significant in both model specifications. However, the inclusion of this variable does not much affect the results obtained in the previous analysis, neither in terms of the significance of the variables nor in terms of the goodness of fit, and the predictive power of the model increases when the financial and economic factors are considered.

2 Data description

Before starting with the description of the data, a brief premise needs to be made: the data used in this work, as anticipated in the introduction, were kindly provided by the Internal Rating Model Presidium office of the Bank where I had the honor and the pleasure of working as an intern for a nine-month period. Because of the legal terms of disclosure, I will not report here any sensitive detailed information about either the Bank itself or its counterparties. In this section a quantitative and qualitative presentation of these data is provided, limiting the description to general statistics and definitions.

All the empirical analyses presented in this Thesis – whose methodological details and results are reported in the next sections – are performed on a database containing accounting and market data of international publicly traded large corporations for a period that goes from 2007 through 2010. The whole set of data is composed of a total of 1917 firm-year entries, of which 65 are flagged as defaults. It has been obtained from an original wider database composed of a subset of the Bank's main international counterparties, consisting of both public and private corporations. The two sources providing these data are Centrale Bilanci (CEBI from here on) and Standard & Poor's CapitalIQ database. The default flags have three origins: the Bank's internal ratings, S&P ratings and CapitalIQ data.

The following subsections present the definition of default that is used for the analysis and a more specific statistical description of the database in terms of distributions across years, geographical regions and industries.

2.1 Definition of default

The term "default" does not have a clear-cut, unique interpretation in the complexity of real-world events. Of course there are some extreme cases, like bankruptcies and failures, that surely indicate a status of default. However, in many circumstances, the same financial situation can be interpreted as a default or not on the basis of factors like the credit event considered, the national legislation, an agency evaluation and many other possible aspects. This is why it is necessary to clarify what is meant by the word "default", in particular when this condition constitutes the key variable of an empirical analysis. Moreover, a clear definition of this status improves the economic interpretation of the results of the analysis. In this research work, a number of possible definitions of default are taken into consideration. Firstly, the standard set of definitions that the Bank adopts internally for the exposures it has towards its counterparties is considered. This includes three different credit situations: substandard, doubtful and restructuring. In general, a loan is defined as "substandard" when it is unlikely to be paid on time, because of a temporary situation of difficulty or crisis associated with a missed fulfilment of the debt within a reasonable time, set by the national legislation at 270 days. A more serious event is indicated with the term "doubtful", which is used when full repayment is questionable and uncertain. This condition is generally triggered by an overdue installment unpaid beyond objective limits or by some significant event that could persistently affect the creditworthiness of the counterparty. Usually it is connected to balance sheet and off-balance sheet exposure-related events, like the declaration of bankruptcy, liquidation and legal actions initiated by the bank or by third parties, but also the admission to the extraordinary administration procedure and the cessation of the business activity. Finally, the term "restructured" refers to an exposure for which, facing a deterioration of the credit quality, the bank unilaterally decides to change the conditions on the interest rate or on the total amount due, usually making them more favorable to the borrower and thus incurring a loss.

Another definition of default that is considered in the analysis consists in the ratings “D” and “SD” attributed by Standard and Poor’s, which stands respectively for “default” and “selective default”. For an exhaustive definition of these ratings I directly report the definition provided by S&P (Thompson et al., 2012):

“An obligor rated 'SD' (selective default) or 'D' is in payment default on one or more of its financial obligations (rated or unrated) unless Standard & Poor's believes that such payments will be made within five business days, irrespective of any grace period. The 'D' rating also will be used upon the filing of a bankruptcy petition or the taking of similar action if payments on a financial obligation are jeopardized. A 'D' rating is assigned when Standard & Poor's believes that the default will be a general default and that the obligor will fail to pay all or substantially all of its obligations as they come due. An 'SD' rating is assigned when Standard & Poor's believes that the obligor has selectively defaulted on a specific issue or class of obligations, but it will continue to meet its payment obligations on other issues or classes of obligations in a timely manner. A selective default includes the completion of a distressed exchange offer, whereby one or more financial obligation is either repurchased for an amount of cash or replaced by other instruments having a total value that is less than par”.

The last type of default considered in the analysis is introduced directly from the CapitalIQ data-set: in particular, from the “Capital IQ Key Developments and Future Events” database, which provides structured summaries of material news and events that may affect the market value of securities, together with details about upcoming events. Capital IQ monitors 89 Key Development types such as executive changes, M&A rumors, changes in corporate guidance, delayed filings, and SEC inquiries. Each Key Development item includes the announced date, entered date, situation summary, type, source, company role, and other identifiers. These data, updated daily, are collected from various international and regional sources, including press releases and articles from news wires and publications, company filings available with regulatory authorities and stock exchange sites, company websites and conference call transcripts. Among all the types of events reported, the “bankruptcy – filing” one has been selected for the aims of the analysis, as the last default indicator. It identifies an extreme type of default, consisting in the reported cases of failures and bankruptcies.

It is worth highlighting that this composite definition of default, including substandard, doubtful, restructured, S&P's D and SD ratings and bankruptcies, is particularly suited to an analysis on default prediction: on the one hand it is wider than the simple bankruptcy event often considered in empirical studies13, allowing the inclusion of some situations that precede an effective default, like the SD S&P ratings and the doubtful exposures of the Bank, which can be relevant in forecasting default. On the other hand this definition is not overly inclusive, as it does not classify as defaulters, for instance, the simple past-due exposures, which could lead to the insertion of basically sound companies in the default set.

13 This is probably due to the ease of recovering data on this particular class of defaults, since the data are collected from public authorities like central banks or are directly available for those firms that filed for bankruptcy in court.

The firms that are not classified as default are defined as “bonis”, a definition that comprehends all the non-problematic exposures, together with some “technical defaults”: the past-due exposures that returned to bonis after a short time period without losses, before passing to substandard, and the problematic loans, expression of a potential credit deterioration. In the data-set, each firm is classified with respect to the default event by using a flag that is valued 0 if the firm is still in bonis in the following year, and 1 if it is in default.
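As an illustration only, the following minimal Python sketch shows how such a composite flag could be assembled from the three sources; the column names and status labels are hypothetical and do not reflect the actual layout of the database.

import pandas as pd

# Hypothetical labels for the Bank's internal statuses, S&P default ratings and
# the CapitalIQ key development type used as default indicators.
INTERNAL_DEFAULT = {"substandard", "doubtful", "restructured"}

def default_flag(row):
    internal = row["bank_status"] in INTERNAL_DEFAULT
    sp = row["sp_rating"] in {"D", "SD"}
    filing = row["capitaliq_event"] == "bankruptcy - filing"
    return int(internal or sp or filing)

# Toy firm-year table: the flag is 1 as soon as any source signals a default event.
firm_years = pd.DataFrame({
    "bank_status":     ["bonis", "doubtful", "bonis"],
    "sp_rating":       ["BBB",   "BB",       "SD"],
    "capitaliq_event": [None,    None,       None],
})
firm_years["flag_default"] = firm_years.apply(default_flag, axis=1)
print(firm_years)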

2.2 The construction of the database and its composition

The starting set of data on the “large corporate” segment was built by selecting the Bank's counterparties presenting revenues greater than or equal to $500 million14. In the case of corporate groups, another criterion used in the selection consists in considering just the holding or the parent company of each group, while excluding the subsidiaries. The latter were kept in the population only if they constitute a default and, in order to maximize the number of internal defaults, the minimum-revenue threshold for them was lowered to $50 million. A further selection was made on the basis of the firm's balance sheet scheme, as it is classified in CEBI: in particular, firms categorized with schemas from 01 to 05 were kept, corresponding respectively to industry, commerce, multi-year production, services and real estate, while schemas from 06 to 09, i.e. financial, factoring, holding and leasing companies, were dropped. With the aim of further increasing the number of defaults, the bankruptcies and the firms rated D and SD by Standard & Poor's (whose data were available from CapitalIQ) were included in the population. This set was then cleaned of repeated entries and of entries referring to defaulted firms after the year of default, so that a firm is excluded from the database once it passes into default. From this database, composed of a total of 4043 firm-year items, of which 162 defaults, only the public firms whose market data were available from CapitalIQ were selected, obtaining a final set of 1852 bonis and 65 defaults. Of these 65 defaults, 10 are “D”, 14 “SD”, 29 bankruptcies and 12 from internal source. The distribution of data across years is presented in table 1, separated by source (CEBI and CapitalIQ). Table 2 and table 3 show the distribution across nations and industries.

14 This threshold has been estimated empirically, in order to distinguish firms by size on the basis

Table 1: Distribution of bonis and default for source and year

Source       Default year      Bonis   Default   Default rate
CEBI         2009                 49         3          5,77%
             2010                 48         2          4,00%
             2011                 39         1          2,50%
             2012                 35         0          0,00%
             Tot. CEBI           171         6          3,51%
CapitalIQ    2009                352        34          8,81%
             2010                408         6          1,45%
             2011                451         5          1,10%
             2012                470        14          2,89%
             Tot. CapitalIQ     1681        59          3,51%
OVERALL      2009                401        37          9,23%
             2010                456         8          1,75%
             2011                490         6          1,22%
             2012                505        14          2,77%
             Tot. overall       1852        65          3,51%

It is worth noting that the first three countries cover more than 50% of the population, with American firms particularly over-represented, while the distribution across sectors seems to be more homogeneous15. The representativeness issue is, however, a false problem in this case: the main aim of this research is indeed to test the predictive power of the distance to default and to compare it with that of a variation of the model. Therefore, the only factor that really matters is the overall size of the sample, and it seems to be large enough to allow an empirical analysis to be conducted.

Moving to the description of the contents of the final database, an initial consideration regards the variables available for each firm, which come from different sources and are selected and combined on the basis of the information required for the analysis: both CEBI and CapitalIQ provide the financial statements of the firms, and when both sources were available for the same firm the former was preferred. CEBI provides information for the majority of the Italian firms and just a few international ones, while the opposite is true for CapitalIQ.

15 The industrial sectors presented here are obtained from the conjunction of two different classifications, namely the CEBI and the CapitalIQ ones. The original sub-sectors included in each of the seven classes are reported in tables 16 and 15 in the Appendix.


Table 2: Geographical distribution

                  Frequency   Percentage   Bonis   Default   Default rate
United States           549        28,6%     504        45          8,20%
France                  185         9,7%     183         2          1,08%
Italy                   175         9,1%     169         6          3,43%
United Kingdom          129         6,7%     128         1          0,78%
Germany                  99         5,2%      97         2          2,02%
Switzerland              77         4,0%      77         .          0,00%
Spain                    67         3,5%      66         1          1,49%
Japan                    51         2,7%      50         1          1,96%
India                    49         2,6%      49         .          0,00%
Netherlands              49         2,6%      49         .          0,00%
Sweden                   45         2,3%      45         .          0,00%
Others                  442        23,1%     435         7          1,61%

Table 3: Industry distribution

                              Frequency   Percentage   Bonis   Default   Default rate
Industrials                         431        22,5%     418        13          3,02%
Commerce & Other Services           394        20,6%     381        13          3,30%
Utilities                           316        16,5%     302        14          4,43%
Energy                              250        13,0%     247         3          1,20%
Chemicals & Pharmaceuticals         212        11,1%     208         4          1,89%
Information Technologies            183         9,5%     171        12          6,56%
Constructions                       131         6,8%     125         6          4,58%

Since these two providers use proprietary balance sheet schemas that differ from one another, it was necessary to harmonize them in order to obtain a homogeneous database with respect to the main entries: considering the description of the single items, and comparing their order of magnitude empirically for each company, a simplified financial statement with compounded entries was created. From this classification it was possible to create the first variable of the data-set used in the analysis (besides the company ID code, the date of the balance sheet entries, the default flag and the date of default, which are given from the beginning), i.e. the default point in its most commonly used formulation, equal to short-term liabilities plus half of the long-term liabilities. Tables 4 and 5 provide an example of the harmonization put forward and used in the analysis for these two entries and their relative sub-items.
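As a simple illustration, a minimal Python sketch of this definition of the default point follows; the variable names are purely illustrative.

def default_point(short_term_liabilities, long_term_liabilities):
    # Default point = short-term liabilities + 0.5 * long-term liabilities
    return short_term_liabilities + 0.5 * long_term_liabilities

# e.g. 300 of current liabilities and 400 of long-term liabilities give a default point of 500
print(default_point(300.0, 400.0))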


Table 4: Long-term liabilities sub-items in CEBI and CapitalIQ

LONG-TERM LIABILITIES
CEBI                                                    CAPITALIQ
FONDI ACCANTONATI                                       PENSION & OTHER POSTRETIREMENT BENEFITS
Fondo per rischi ed oneri                               LONG-TERM DEBT
Fondo trattamento di fine rapporto                      Long-term debt
DEBITI CONSOLIDATI                                      Capital leases
Obbligazioni nette oltre es. successivo                 Finance div. debt non-curr.
Deb. fin. vs soci ed azionisti oltre es. successivo     OTHER NON-CURRENT LIABILITIES
Deb. fin. vs banche oltre es. successivo                Finance div. other non-curr. liab.
Deb. fin vs altri finanziat. oltre es. successivo       Unearned revenue, non-current
Deb. comm.li, div. e altre pass. oltre es. successivo   Other non-current liabilities
                                                        Def. tax liability, non-current

Table 5: Short-term liabilities sub-items in CEBI and CapitalIQ

SHORT-TERM LIABILITIES
CEBI                                  CAPITALIQ
TOTALE PASSIVO CORRENTE               SHORT-TERM LIABILITIES
Debiti fin. entro es. successivo      Total current liabilities
Debiti commerciali
Debiti tributari e f. imposte
Debiti diversi

Another key variable is the firm's equity value: the trivial calculation that defines the firm's equity as the number of its shares times their price is not so simple in the real world. In fact, especially for large corporations, the same firm is usually listed on many stock markets with a number of different types of securities, sold at different prices in different currencies. Fortunately it has been possible to avoid this computational issue, since CapitalIQ already provides an estimation of each firm's daily market capitalization. S&P calculates this value as the sum of two types of market cap: on the one hand the total traded market cap, that is the sum of the products of the price of each type of security and the relative number of shares outstanding; on the other hand the total non-traded market cap, obtained by multiplying the price of the primary trading item of a company by the number of non-traded securities of that company. Every single market cap has been converted into dollar terms using the historical series of daily exchange rates.
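The following minimal Python sketch illustrates, under assumed inputs, the kind of aggregation just described (traded classes priced individually, non-traded shares at the primary listing's price, conversion to dollars with the day's exchange rate); it is not CapitalIQ's actual implementation.

def market_cap_usd(traded_classes, primary_price, non_traded_shares, fx_to_usd):
    # Total traded market cap: sum over security classes of price * shares outstanding
    traded_cap = sum(price * shares for price, shares in traded_classes)
    # Total non-traded market cap: primary trading item's price * non-traded shares
    non_traded_cap = primary_price * non_traded_shares
    # Conversion of the local-currency total into dollars
    return (traded_cap + non_traded_cap) * fx_to_usd

# Two listed share classes plus a block of unlisted shares, converted at 1.10 USD per unit of local currency
print(market_cap_usd([(12.0, 1_000_000), (11.5, 250_000)], 12.0, 100_000, 1.10))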

Finally, the last variable needed in the analysis is the risk-free rate for the economy. Given that there were many possible long-term rates to consider, all equally valid in theory, the 10-year daily compounded Euroiris rate has been arbitrarily chosen.

A final important consideration, which has to be made in order to avoid biases when implementing the DD models, deals with the time each kind of data refers to. In order to build a proper model, we should indeed adopt the point of view of an individual acting in the market: while the market information about the equity and the risk-free rate is registered with a daily frequency and is instantaneously available, the balance sheets are compiled once a year, in December, and published around May or June of the following year. Therefore the information contained in them, even if referred to a certain date, has to be accounted for in the analysis with a lag of one year. Furthermore, the default flag is also reported with yearly frequency, in the sense that the exact date of default is not available (this is also due to the heterogeneity of the definitions of default). Let me give an example to clarify this delicate issue: assume we want to estimate the probability of default for a firm in the year 2008, with a one-year time horizon (i.e. the probability that the firm will be in default in 2009). This is the situation represented in figure 3.

Following the theory, I should enter into the model the value of the firm's liabilities for the year 2008, and use the market information of that same year to get the asset value and volatility, then compute the DD. But this would not be correct, since it does not correspond to the actual information available to the market one year before default: in 2008 the agents are able to observe the number of the firm's traded shares and their price in real time, but they only have information on the previous year's liabilities (and only starting from June, at that). Thus, in order to calculate the probability of default for 2009, the market data and risk-free rates of 2008 should be used, but the balance sheet information of 2007.
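A minimal Python sketch of this alignment, with purely illustrative data structures, may help fix ideas: the observation used to predict default in year t+1 combines the market data of year t with the balance sheet of year t-1.

def build_observation(t, market_data, balance_sheets, default_flags):
    return {
        "equity_value":  market_data[t]["market_cap"],          # observable in real time during year t
        "risk_free":     market_data[t]["risk_free"],            # observable in real time during year t
        "default_point": balance_sheets[t - 1]["default_point"], # latest statement available to the market
        "flag_default":  default_flags[t + 1],                   # outcome to be predicted (year t+1)
    }

# Toy version of the 2008/2009 example discussed above
market_data    = {2008: {"market_cap": 1200.0, "risk_free": 0.04}}
balance_sheets = {2007: {"default_point": 800.0}}
default_flags  = {2009: 0}
print(build_observation(2008, market_data, balance_sheets, default_flags))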


3 Methods, tests and statistics used in the analysis of data

The principal focus of this research is to test empirically the Merton distance to default model, considering both its standard theoretical formulation and the simpler version presented in the first part: the naïve alternative measure proposed by Bharath and Shumway (2008) will be analyzed, the goodness of its results verified and then compared with those of the original model. An initial consideration on the quality of the overall data set needs to be made: although our sample is not particularly large in terms of number of firms and years covered, if compared with other studies, it is worth noting that the modest quantity of available data is very reliable and precise, coming from certified sources of attested quality. This is a first element contributing to ensure the soundness of the final findings and the absence of gross biases.

In this section the main steps followed in analyzing the data are presented. I start by describing the method employed to compute the distance to default in its standard formulation. Then, some considerations on the logit model are introduced. A final part is devoted to the description of the main statistics used to test the findings.

3.1 Estimation method of the Merton distance to default

The first part of the analysis is dedicated to the computation and the study of the distance to default as formulated in the standard Merton model. In order to do that, it is necessary to implement a fairly complex set of procedures: from the definition and adaptation of some variables, to the various steps of the iterative procedure that makes use of the Black-Scholes-Merton equation to obtain the firms' asset market values and the volatilities of their returns, in order to eventually estimate a periodic measure of distance to default.

The specific procedure used to calculate the Merton DD is the same employed by Bharath and Shumway (2008) in their paper: the original program for the SAS statistical package, written and implemented by these authors to obtain the values of default probability, has been minimally readapted to be used with the Bank's data, keeping all the main passages unaltered. Their original code is freely available.
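For illustration, the following minimal Python sketch reproduces the logic of the iterative scheme described above; it is not the authors' SAS code, the function and variable names are mine, and the risk-free rate is used as the asset drift for simplicity (other choices, such as the realized asset return, are possible).

import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bsm_equity(V, F, r, T, sigma_V):
    # Black-Scholes-Merton value of equity, seen as a call on the assets with strike F
    d1 = (np.log(V / F) + (r + 0.5 * sigma_V ** 2) * T) / (sigma_V * np.sqrt(T))
    d2 = d1 - sigma_V * np.sqrt(T)
    return V * norm.cdf(d1) - F * np.exp(-r * T) * norm.cdf(d2)

def merton_dd(equity, F, r, T=1.0, tol=1e-4, max_iter=50):
    # equity: daily market capitalizations; F: default point; r: risk-free rate; T: horizon in years
    equity = np.asarray(equity, dtype=float)
    sigma_E = np.std(np.diff(np.log(equity)), ddof=1) * np.sqrt(252)  # annualized equity volatility
    sigma_V = sigma_E * equity[-1] / (equity[-1] + F)                 # starting guess for asset volatility
    for _ in range(max_iter):
        # invert the BSM equation day by day to obtain the implied asset values
        assets = np.array([brentq(lambda V: bsm_equity(V, F, r, T, sigma_V) - E, E, E + 2 * F)
                           for E in equity])
        new_sigma_V = np.std(np.diff(np.log(assets)), ddof=1) * np.sqrt(252)
        if abs(new_sigma_V - sigma_V) < tol:  # stop when the asset volatility converges
            sigma_V = new_sigma_V
            break
        sigma_V = new_sigma_V
    V = assets[-1]
    # distance to default at the end of the window
    return (np.log(V / F) + (r - 0.5 * sigma_V ** 2) * T) / (sigma_V * np.sqrt(T))

# Toy usage: one year of simulated daily market caps and a default point of 600
np.random.seed(0)
toy_equity = 1000 * np.exp(np.cumsum(np.random.normal(0.0, 0.02, 252)))
print(merton_dd(toy_equity, F=600.0, r=0.03))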
