Corsi di Dottorato Regionali Pegaso: Università di Pisa – Università di Firenze – Università di Siena.

Administrative seat: Università di Pisa, Dipartimento di Economia e Management, Scuola di Dottorato di Ricerca in Economia e Management.

“Realized GARCH model adding robust measures of skewness and kurtosis.”

Tutor: Ph.D. Guidi, Marco Enrico Luigi (Università di Pisa). Supervisor: Ph.D. Fiorentini, Gabriele (Università di Firenze).

Doctoral candidate: Mg. Cesar Germán Santamaria. Matriculation no. 500532.

Acknowledgements

I would like to express my gratitude to my supervisors, Ph.D. Gabriele Fiorentini and Ph.D. Marco Guidi, for their advice and support over the last years. Great thanks to Professor Carlo Bianchi for his recommendations.

I am grateful to Ph.D. Fulvio Corsi from the Scuola Normale Superiore for his valuable advice and the time he dedicated to my project.

I would also like to give special thanks to Professor Ann Katherine Isaacs for supporting me within the framework of the Erasmus Mundus program and for giving me the chance to come to Pisa.

Special thanks go to Dottore Mauro Mazzota and Dottore Marco Brunitto from the international relations office of the University.

I want to express my gratitude to Dottore Marcelo Paz from the University of San Martín, Argentina, for all his support when I enrolled in this program.

Infinite thanks to Iolanda, Giovanna, Gianni, Teana, Cesare and Claudio, my “Italian family” in Viareggio, for their company over these years; they always encouraged me.

To my mother, my family and my friends, for their endless love and encouragement to make my dreams come true throughout my life. Thank you all.


Contents

Abstract
Chapter 1: Introduction
1.1 Motivation
1.2 Key research questions
1.3 Organization of the rest of this thesis
Chapter 2: Theory
2.1 GARCH Volatility Models
2.1.1 Daily GARCH Volatility Models
2.1.2 Intra-Daily GARCH Volatility Models
2.1.2.1 Realized Volatility Measures
2.1.2.2 Robust Measures of Skewness and Kurtosis
2.1.2.3 Test for Jumps
2.1.2.4 Test for Microstructure Noise
2.1.2.5 Realized GARCH Models with Linear Specification
2.1.3 Skewness and Kurtosis Using GARCHSK (1,1,1,1) Model
2.1.4 The Realized GARCHSK (1,1,1,1) Model
2.1.5 Proposed Model: RGARCHRSRK (1,1,1,1)
2.2 Value at Risk
2.2.1 Mathematical Definition
2.2.2 VaR Techniques
2.2.3 Backtesting VaR
2.2.3.1 Unconditional Coverage Test
2.2.3.2 Independence Test
2.2.3.3 Joint Unconditional Coverage and Independence Test
2.2.3.4 Statistical Evaluation of VaRs
2.2.3.4.1 Mean Relative Bias
2.2.3.4.2 Average Quadratic Loss Function
2.2.3.4.3 Average Market Risk Capital
2.2.3.5 The Loss Function
2.2.3.6 The DM Test
Chapter 3: Data Analysis
3.1 Data
3.2 Tests for Jumps and Microstructure Noise
3.2.1 Test for Jumps
3.2.2 Test for Microstructure Noise
3.3 Realized Volatility Measure Robust to Jumps and Microstructure Noise
3.4 Realized and Robust Measures of Skewness and Kurtosis
3.5 Figures
Chapter 4: Empirical Results
4.1 Models and Results
4.2 Value at Risk Estimations
4.2.1 Backtesting VaR
4.3 Conclusion
4.4 Figures and Command Programs
References
Appendix 1: Gram-Charlier Distribution
Appendix 2: Nonnegative Function
Appendix 3: RGARCHSK(1,1,1,1) Score
Appendix 4: RGARCHRSRK(1,1,1,1) Score


Main Bibliography

Barndorff-Nielsen, O. E., Hansen, P. R., Lunde, A. and Shephard, N. “Realized Kernels in Practice: Trades and Quotes”. Econometrics Journal, 12(3) (2009), 1-32.

Cornish, E. A. and Fisher, R. A. “Moments and cumulants in the specification of distributions”. Review of the International Statistical Institute, 5(4) (1938), 275-307.

Gallant, A. R. and Tauchen, G. “Semi-nonparametric estimation of conditionally constrained heterogeneous processes: Asset pricing applications”. Econometrica, 57 (1989), 1091-1120.

Groeneveld, R. A. and Meeden, G. “Measuring Skewness and Kurtosis”. The Statistician, 33 (1984), 391-399.

Hansen, P. R., Huang, Z. and Lunde, A. “Realized GARCH: A Joint Model of Returns and Realized Measures of Volatility”. Journal of Applied Econometrics (2011).

Jondeau, E. and Rockinger, M. “Gram-Charlier densities”. Journal of Economic Dynamics & Control, 25. (2001), 1457-1483.

Kim, T. H. and White, H. “On more robust estimation of skewness and kurtosis”. Finance Research Letters, 1 (2004), 56-73.

León, A., Rubio, G. and Serna, G. “Autoregressive Conditional Volatility, Skewness and Kurtosis”. The Quarterly Review of Economics and Finance, 45 (2005), 599-618.


List of Tables

Table I: Descriptive statistics of daily and intra-daily log returns.
Table II: Descriptive statistics of realized volatility measures.
Table III: Descriptive statistics of skewness and excess of kurtosis.
Table IV: Model results.
Table V: Measurement equations.
Table VI: Persistence in volatility.
Table VII: Number of VaR exceptions using a reduced sample from 9/05/2013 to 9/05/2014.
Table VIII: Christoffersen unconditional coverage test.
Table IX: Statistics ratios for VaR evaluation.
Table X: Loss function results.
Table XI: Diebold and Mariano test results.

List of Figures

Figures I and II: One-minute S&P500 index intra-daily close prices and intra-daily log-returns of S&P500 index close prices from 1/05/1990 to 9/05/2014 (09:30–15:59).
Figure III: Q-plot of intra-daily log returns.
Figures IV and V: S&P500 index daily close prices and daily log-returns of S&P500 index close prices from 1/05/1990 to 9/05/2014.
Figure VI: Q-plot of daily log returns.
Figures VII and VIII: Intra-daily one-minute robust measures of skewness and excess of kurtosis from 1/05/1990 to 9/05/2014 (09:30–15:59).
Figure IX: VaR-GARCH(1,1)/VaR-GARCHSK(1,1,1,1)/VaR-RGARCH(1,1)/VaR-RGARCHSK(1,1,1,1).
Figure X: VaR-RGARCHRSRK(1,1,1,1).

List of Programs

EViews Command Program I: GARCH(1,1) Model.
EViews Command Program II: GARCHSK(1,1,1,1) Model.
EViews Command Program III: RGARCH(1,1) Model.
EViews Command Program IV: RGARCHSK(1,1,1,1) Model.
EViews Command Program V: RGARCHRSRK(1,1,1,1) Model with Groeneveld-Meeden Skewness and Moors Kurtosis.


Abstract

Past financial crises show the importance of adequate risk measurement techniques that adapt more rapidly to changing market circumstances. One traditional risk method is the conditional Value at Risk (VaR) computed with GARCH models based on low-frequency daily data. After these initial GARCH models, other models such as the Realized GARCH of Hansen, Huang and Lunde (2011) incorporated intra-day data, and this has become a rapidly growing field in financial econometrics; but these methodologies only consider the second moment of the log-return distribution. Earlier, some researchers had started to incorporate higher moments into their GARCH models to reach a more accurate measure of VaR. León, Rubio and Serna (2005) created a daily GARCH model with conditional counterparts of the sample skewness and kurtosis. In their model the standard measures of skewness and kurtosis are essentially based on averages, which can be sensitive to outliers.

The robust measures of the third and fourth moments proposed by Kim and White (2004) are instead based on quantiles rather than averages. Building on these developments, we construct the RGARCHRSRK model with robust measures of skewness and kurtosis in two steps: 1) the first step is the RGARCHSK model, a combination of the RGARCH model of Hansen et al. (2011) and the GARCHSK model of León et al. (2005); 2) the second is the RGARCHRSRK model, which uses robust measures of skewness and kurtosis in the conditional higher-moment equations. For both models we applied quasi-maximum likelihood estimation with a modified Gram-Charlier expansion of the standardized innovations, using one-minute intra-day information from log-returns of the S&P500 index.

Finally, we calculated and tested the accuracy of daily VaRs using the normal distribution for the GARCH(1,1) and RGARCH(1,1) models and the Cornish-Fisher expansion for the GARCHSK(1,1,1,1), RGARCHSK(1,1,1,1) and RGARCHRSRK(1,1,1,1) models. Based on this empirical analysis, we found that the use of the RGARCHRSRK(1,1,1,1) model improves conditional VaR accuracy.

Keywords: RGARCHRSRK model; Gram-Charlier expansion; intra-day data; realized volatility; realized robust skewness-kurtosis measures; value at risk.


Chapter 1: Introduction.

1.1 Motivation:

Volatility in finance is often defined as the dispersion of asset price movements over a period of time. Past situations in financial markets, such as the Tequila, Asian, Brazilian, Turkish and Argentine sovereign crises, the collapse of the “.com” bubble, Enron and Worldcom, the US subprime mortgage problem and Europe’s sovereign debt debacles, exemplify the importance of adequate risk measurement and management techniques that adapt more rapidly to changing market circumstances.

A common parametric approach to dealing with volatility is the Autoregressive Conditional Heteroskedasticity (ARCH) model introduced by Engle (1982) and generalized (GARCH) by Bollerslev (1986), which underlies the traditional portfolio risk models based on low-frequency data. However, with the availability of high-frequency data the estimation of volatility has moved to realized measures; this shift of risk models was needed to provide more accurate short-term risk measures. One of these models is the Realized Generalized Autoregressive Conditional Heteroskedasticity (RGARCH) model proposed by Hansen, Huang and Lunde (2011), but this model does not contemplate conditional skewness and kurtosis equations. Higher moments of the distributions of financial variables can be important to assess portfolio risk and complement traditional variance measures to improve the performance of various financial models. Responding to this recognition, some researchers have started to incorporate these higher moments into their models, mostly using the conventional measures of the sample skewness and kurtosis, as in León, Rubio and Serna (2005). On the basis of these models we calculate, as a first step (toward the RGARCHRSRK model), the RGARCHSK model as a link between the RGARCH model of Hansen et al. (2011) and the GARCHSK model of León et al. (2005). In this model, however, the skewness and kurtosis measures have some limitations: Kim and White (2004) show that the standard measures of skewness and kurtosis are essentially based on averages and can be sensitive to outliers. They propose the use of more stable and robust measures of skewness and kurtosis based on quantiles rather than averages; we then use these robust measures to calculate our RGARCHRSRK model by applying quasi-maximum likelihood estimation with the Gram-Charlier normal expansion modified according to Gallant and Tauchen (1989) for the standardized residuals.

The sample used is one-minute intra-day data of S&P500 index log-returns (390 observations per day), from which a daily log-returns series from 1/02/1990 to 9/05/2014 is constructed, obtained from http://pitrading.com/intraday_ascii_data_market_edition.htm. After that, we calculate a modified daily VaR using the Cornish-Fisher expansion.

Our aim is to provide new insights into two empirical problems: i) whether the higher moments in the RGARCHRSRK(1,1,1,1) model show time dependence; and ii) whether VaR estimation using the proposed model captures risk better than the VaR estimations from the other models tested here. As a result, we find that using the RGARCHRSRK(1,1,1,1) model in a VaR framework is a more accurate methodology for covering risks than the conditional VaRs from the GARCH(1,1), GARCHSK(1,1,1,1), RGARCH(1,1) and RGARCHSK(1,1,1,1) models.


1.2 Key research questions:

There are two key research questions:

1) Is there evidence of time-varying dependence in the conditional skewness and kurtosis equations?

The use of the GARCH framework implies time-varying volatility, meaning that periods of high volatility are followed by periods of high volatility, with a decay factor less than one (volatility clustering). We therefore test whether the equations that include the third and fourth moments in the RGARCHRSRK(1,1,1,1) model present skewness and kurtosis clustering (negative log-returns are followed by negative log-returns and outliers are followed by outliers).

2) Do we obtain a more accurate conditional Value at Risk measure with our RGARCHRSRK model than with the other models tested in this thesis?

Computing a conditional VaR using our RGARCHRSRK(1,1,1,1) model should yield a more accurate risk assessment than VaRs computed with other parametric methods such as the GARCH(1,1), GARCHSK(1,1,1,1), RGARCH(1,1) and RGARCHSK(1,1,1,1) models.

1.3 Organization of the rest of this thesis:

Chapter two presents a literature review of the topic, starting from the papers that proposed the basic models and then moving to the more recent papers that form the foundation of the current work; chapter three provides the data analysis. Finally, chapter four compares the VaR from the proposed model, using intra-daily data and robust measures, against the VaRs from the GARCH(1,1), GARCHSK(1,1,1,1), RGARCH(1,1) and RGARCHSK(1,1,1,1) models.


Chapter 2: Theory.

2.1 GARCH Volatility Models.

2.1.1 Daily GARCH Volatility Models.

ARCH and GARCH models have been widely used to model the conditional variance of financial time series since the original contributions of Engle (1982) and Bollerslev (1986). These models are typically estimated by maximum likelihood or quasi-maximum likelihood methods using observations at the daily frequency; see Bollerslev and Wooldridge (1992). The popularity of the GARCH process is partially explained by its ability to incorporate volatility clustering in an intuitive way: the expected volatility today is a linear combination of the previously observed squared returns, so periods of high (low) volatility tend to be followed by periods of high (low) volatility.

Andersen and Bollerslev (1998) observed that a weakness of these models lies in the latent (hidden) character of volatility, which evolves stochastically through time. Volatility series were traditionally formed from daily returns (computed from prices registered at the last transaction of the trading day) with the aim of approximating the underlying or “true” volatility; the failure lay in the true volatility measure against which forecasting performance was assessed. The standard practice of using ex-post daily squared returns as the measure of “true” volatility for daily forecasts was flawed, since such a measure comprises a large and noisy independent zero-mean, constant-variance error term which is unrelated to the actual volatility. They therefore suggested that cumulative squared returns from intra-day data be used as an alternative way to express such “true” volatility, called “integrated volatility”.

2.1.2 Intra-Daily GARCH Volatility Models.

In the last three decades, the wide availability of high-frequency financial data has led to substantial improvement in the study of volatility; these higher-frequency data may contain information which can be used to improve forecasts and characterization of daily conditional volatility. In particular, higher-frequency data may be used to estimate the daily variance directly; these estimates based on higher frequency data provide an auxiliary source of information for low frequency variance estimators.

2.1.2.1 Realized Volatility Measures.

Merton (1980) noted that the variance of asset returns over an extended period of time can be estimated with high precision if a sufficient number of sub-period returns is available for that period. Because the squared mean return converges to zero as the sampling frequency increases, the variance of the returns over an extended period can be calculated by summing the squared sub-period returns and ignoring the mean return. This is the concept of realized volatility, a term used interchangeably with realized variance. More specifically, an often used and very flexible model for the logarithmic prices of speculative assets is the (continuous-time) stochastic volatility model:

$$dY_t = (\mu + \beta\sigma_t^2)\,dt + \sigma_t\,dW_t$$


where dY_t is the price differential, σ_t² is the instantaneous variance, μ denotes the drift, β is the risk premium parameter, and W_t is the standard Wiener process¹. The object of interest is the amount of variation accumulated over a time interval Δ (e.g., a day, week, month, etc.). If n = 1, 2, ... denotes a counter for the time intervals of interest, then the term

$$\sigma_n^2 = \int_{(n-1)\Delta}^{n\Delta} \sigma_t^2\,dt$$

is called the actual volatility; see Barndorff-Nielsen and Shephard (2002). The actual volatility is the quantity that reflects the market risk structure (scaled in Δ) and is the key element in pricing and portfolio allocation. Actual volatility (measured on scale Δ) is of course related to the integrated volatility:

$$V(t) = \int_0^t \sigma_s^2\,ds$$

An important result is that V(t) can be estimated from r_{t,M} via the quadratic variation, which is the integral of the squared volatility over a fixed time interval:

$$r_{t,M} = \sum_{j=1}^{M} \left(\text{Close Price}_{t_j} - \text{Close Price}_{t_{j-1}}\right)^2$$

where $t_0 = 0 < t_1 < \cdots < t_M = t$ is a sequence of partition points, $|t_{j+1} - t_j| \to 0$, and the close price is the price at the end of time t. Andersen and Bollerslev (1998) have shown that:

$$r_{t,M} \to V(t), \qquad M \to \infty$$

This observation leads us to consider an interval Δ with M observations,

$$RV_n = \sum_{j=1}^{M} \left(\text{Close Price}_{t_j} - \text{Close Price}_{t_{j-1}}\right)^2 = \sum_{j=1}^{M} r_{t_j}^2, \qquad t_j = \Delta\left\{(n-1) + \frac{j}{M}\right\};$$

note that RV_n is a consistent estimator of σ_n² and is called realized volatility. They show that volatility forecasts generated by GARCH-type models perform satisfactorily after all, if the unbiased but noisy daily squared returns are replaced by realized volatility when determining the accuracy of volatility forecasts through regressing ex-post realizations on forecasts. Although the daily squared returns are accurate on average, they are too noisy, causing an underestimation of the explanatory power of potentially accurate volatility forecasts.

¹ In mathematics, the Wiener process is a continuous-time stochastic process named in honor of Norbert Wiener. It is often called standard Brownian motion, after Robert Brown. The Wiener process W_t is characterized by three properties:
1. W_0 = 0.
2. The function t → W_t is almost surely everywhere continuous.
3. W_t has independent increments with W_t − W_s ~ N(0, t − s) for 0 ≤ s < t, where N(μ, σ²) denotes the normal distribution with expected value μ and variance σ².
The last condition means that if 0 ≤ s₁ < t₁ ≤ s₂ < t₂ then W_{t₁} − W_{s₁} and W_{t₂} − W_{s₂} are independent random variables, and the similar condition holds for n increments.


One of the most important advantages of using high-frequency data for volatility estimation is the improvement in statistical efficiency that results from the reduction in variance of the realized volatility estimator relative to the variance of the daily squared return estimator.

Two other traditional and robust measures of realized volatility are the Bi-Power Variation (BPV) and the Realized Kernel (RK). The Bi-Power Variation process introduced by Barndorff-Nielsen and Shephard (2006) is a measure robust to jumps; this process separates the quadratic variation into its continuous and jump components. The BPV estimator is defined as:

$$BPV = \{y_M^*\}_i^{[1,1]} = \sum_{j=1}^{M-1} |y_{j,i}|\,|y_{j+1,i}|$$

where $y_M^*$ are the low-frequency returns, M is the number of intra-day periods and $y_{j,i}$ is the j-th intra-daily return for the i-th day.

The realized kernel was introduced by Zhou (1996), who proposed K(X_δ) with H = 1, where K(X_δ) is a kernel weight function; but this estimator is inconsistent. Hansen and Lunde (2006) used realized-kernel-type estimators with k(x) = 1 for general H to characterize the second-order properties of market microstructure noise. This method, proposed by Zhou (1996) and generalized by Hansen and Lunde (2006), is:

$$RK(X_h) = RV_n + 2\sum_{h=1}^{H}\left(\frac{n}{n-h}\right)\gamma_h, \qquad \gamma_h = \sum_{i=1}^{n-h} r_{m,i}\,r_{m,i+h}$$

where RV_n is the realized volatility or realized variance, n is the number of observations in a day and $r_{m,i}$ are the intra-day log-returns. Another estimator based on realized kernels, spelt out in Barndorff-Nielsen et al. (2008), is:

$$RK(X_h) = RV_n + 2\sum_{h=1}^{H} k\!\left(\frac{h-1}{H}\right)\gamma_h \qquad [1]$$

where k(x), for x ∈ [0,1], is a non-stochastic kernel weight function (such as the Parzen kernel) with k(0) = 1 and k(1) = 0:

$$\gamma_h = \sum_{j=|h|+1}^{n} \underbrace{\left(X_{\delta j} - X_{\delta(j-1)}\right)}_{x_j}\,\underbrace{\left(X_{\delta(j-h)} - X_{\delta(j-h-1)}\right)}_{x_{j-h}} \qquad [2]$$

$$k(x) = \begin{cases} 1 - 6x^2 + 6x^3, & 0 \le x \le \tfrac{1}{2} \\ 2(1-x)^3, & \tfrac{1}{2} \le x \le 1 \end{cases}$$

Here x_j is the j-th high-frequency return; the method by which these returns are calculated is nontrivial, for the accuracy and depth of data cleaning is important.
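To make these estimators concrete, the following minimal Python sketch (our illustration, not the thesis's EViews code) computes the realized variance, the bi-power variation exactly as written above (i.e. without the usual π/2 scaling constant), and the Parzen-weighted realized kernel from one day of one-minute log-returns; the bandwidth H = 10 and the simulated data are assumptions.

```python
import numpy as np

def realized_variance(r):
    """RV_n: sum of squared intra-day log-returns."""
    return np.sum(r ** 2)

def bipower_variation(r):
    """BPV as defined above: sum of products of adjacent absolute returns."""
    return np.sum(np.abs(r[:-1]) * np.abs(r[1:]))

def parzen(x):
    """Parzen weight function of equation [2]: k(0) = 1, k(1) = 0."""
    return 1 - 6 * x**2 + 6 * x**3 if x <= 0.5 else 2 * (1 - x) ** 3

def realized_kernel(r, H=10):
    """Equation [1]: RV_n plus Parzen-weighted realized autocovariances."""
    rk = np.sum(r ** 2)
    for h in range(1, H + 1):
        gamma_h = np.sum(r[h:] * r[:-h])     # h-th realized autocovariance
        rk += 2 * parzen((h - 1) / H) * gamma_h
    return rk

# One simulated trading day of 390 one-minute returns (an assumption).
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.05, 390)
print(realized_variance(r), bipower_variation(r), realized_kernel(r))
```

Without noise or jumps the three numbers should be close; the differences grow as noise and jumps contaminate the sampled prices.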


2.1.2.2 Robust Measures of Skewness and Kurtosis.

Robust measures of skewness and kurtosis that are less sensitive to outliers have been proposed in the statistics literature. For instance, Bowley (1920) proposed the following robust measure of skewness:

$$SK = \frac{Q_3 + Q_1 - 2Q_2}{Q_3 - Q_1}$$

where Q_i is the i-th quartile of returns, that is:

$$Q_1 = F^{-1}(0.25), \qquad Q_2 = F^{-1}(0.50), \qquad Q_3 = F^{-1}(0.75)$$

Bowley's coefficient of skewness is zero for symmetric densities; the denominator rescales the coefficient so that its maximum value is 1, representing extreme right skewness, and its minimum value is -1, representing extreme left skewness. Another, more general measure was proposed by Groeneveld and Meeden (1984), who integrate over α:

$$SK(\alpha) = \frac{\int_0^{0.5}\left[F^{-1}(1-\alpha) + F^{-1}(\alpha) - 2Q_2\right]d\alpha}{\int_0^{0.5}\left[F^{-1}(1-\alpha) - F^{-1}(\alpha)\right]d\alpha} = \frac{\mu - Q_2}{E|r_t - Q_2|} \qquad [3]$$

This measure is also 0 for any symmetric distribution and is bounded by -1 and 1. A robust kurtosis measure was proposed by Crow and Siddiqui (1967), defined as:

$$KR = \frac{F^{-1}(1-\alpha) - F^{-1}(\alpha)}{F^{-1}(1-\beta) - F^{-1}(\beta)}, \qquad \alpha, \beta \in (0,1)$$

Their choice for α and β is 0.025 and 0.25 respectively. For these values the normal distribution is such that F⁻¹(0.975) = −F⁻¹(0.025) = 1.96 and F⁻¹(0.75) = −F⁻¹(0.25) = 0.68, and the coefficient equals 2.91. Hence the centered coefficient is:

$$KR = \frac{F^{-1}(0.975) - F^{-1}(0.025)}{F^{-1}(0.75) - F^{-1}(0.25)} - 2.91$$

Hogg (1972) found that his measure of kurtosis performs better than the traditional measure in detecting heavy-tailed distributions:

$$KR = \frac{U_\alpha - L_\alpha}{U_\beta - L_\beta}$$

where U_α (L_α) is the average of the upper (lower) α quantiles, defined as:

$$U_\alpha = \frac{1}{\alpha}\int_{1-\alpha}^{1} F^{-1}(r)\,dr, \qquad L_\alpha = \frac{1}{\alpha}\int_{0}^{\alpha} F^{-1}(r)\,dr, \qquad \alpha \in (0,1).$$

According to Hogg's experiments, α = 0.05 and β = 0.5 gave the most satisfactory results. Finally, Moors (1988) showed that the conventional measure of kurtosis can be interpreted as a measure of the dispersion of a distribution around the two values μ ± σ; kurtosis can be large when probability mass is concentrated either near the mean μ or in the tails of the distribution. The formula is:

$$KR = \frac{(E_7 - E_5) + (E_3 - E_1)}{E_6 - E_2}$$

where E_i is the i-th octile of returns. It is easy to check that, for the standard normal case,

$$E_1 = -E_7 = -1.15, \qquad E_2 = -E_6 = -0.68, \qquad E_3 = -E_5 = -0.32, \qquad E_4 = 0,$$

and the coefficient equals 1.23. Hence, the centered coefficient is:

$$KR = \frac{(E_7 - E_5) + (E_3 - E_1)}{E_6 - E_2} - 1.23 \qquad [4]$$
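As a concrete illustration of equations [3] and [4], the following minimal Python sketch (ours, with a simulated heavy-tailed sample as an assumption) computes the quantile-based Groeneveld-Meeden skewness and the centered Moors kurtosis:

```python
import numpy as np

def groeneveld_meeden_skewness(r):
    """Equation [3]: (mean - median) / E|r - median|."""
    q2 = np.median(r)
    return (np.mean(r) - q2) / np.mean(np.abs(r - q2))

def moors_kurtosis(r):
    """Equation [4]: octile-based kurtosis, centered at the normal value 1.23."""
    e1, e2, e3, e5, e6, e7 = np.quantile(r, [1/8, 2/8, 3/8, 5/8, 6/8, 7/8])
    return ((e7 - e5) + (e3 - e1)) / (e6 - e2) - 1.23

rng = np.random.default_rng(1)
r = rng.standard_t(df=5, size=390)   # heavy-tailed intra-day returns (assumed)
print(groeneveld_meeden_skewness(r), moors_kurtosis(r))
```

Because both measures depend only on quantiles, replacing a few observations with extreme outliers barely changes them, which is exactly the robustness property exploited later in the RGARCHRSRK model.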

2.1.2.3 Test for Jumps:

Aït-Sahalia (2009) constructed a nonparametric method using the ratio of the realized absolute p-th power variation at two different sampling scales:

$$\begin{cases} p > 2 \Rightarrow \hat{B}(p,\Delta_n)_t \to B(p)_t, \\ p = 2 \Rightarrow \hat{B}(p,\Delta_n)_t \to [X,X]_t, \\ p < 2 \Rightarrow \dfrac{\Delta_n^{1-p/2}}{m_p}\hat{B}(p,\Delta_n)_t \to A(p)_t, \\ X \text{ continuous} \Rightarrow \dfrac{\Delta_n^{1-p/2}}{m_p}\hat{B}(p,\Delta_n)_t \to A(p)_t \end{cases}$$

The estimator is:

$$\hat{B}(p,\Delta_n)_t = \sum_{i=1}^{n=T/\Delta} |Close_i - Close_{i-1}|^p = \sum_{i=1}^{n=T/\Delta} |r_i|^p \qquad [5]$$

where p is the power coefficient and Δ_n is an interval of time. For testing the existence of jumps, they use the ratio of volatility estimates from two different time scales (Δ_n vs kΔ_n):

$$\hat{S}(p,k,\Delta_n) = Z_{jumps}(p,k,\Delta_n) = \frac{\hat{B}(p,k\Delta_n)_t}{\hat{B}(p,\Delta_n)_t} \qquad [6]$$

$$Z_{jumps}(p,k,\Delta_n) \sim N(0,1)$$

where k is a positive number, for instance 2.

$$\hat{S}(p,k,\Delta_n) \xrightarrow{p} \begin{cases} 1, & \text{under } H_0 \text{ of no jumps} \\ k^{(p/2)-1}, & \text{under } H_\alpha \text{ of jumps} \end{cases}$$

More specifically, a process X = (X_t) = Close_t on a given time interval [0, T] is observed at times iΔ_n for Δ_n = T/n. We compute the test statistic, say S_n, which converges to 1 if there are no jumps and to another deterministic and known value (such as 2) if there are jumps. Choosing k = 2 and p = 4, from the result above, the test statistic converges to 2 for the paths with jumps and to 1 for the paths without jumps.
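The ratio statistic is simple to compute from a grid of intra-day prices; below is a minimal Python sketch of equations [5] and [6] (our illustration, with simulated prices as an assumption), leaving the comparison with the limits of the decision rule above to the reader.

```python
import numpy as np

def b_hat(prices, p, step=1):
    """Equation [5] at sampling scale step·Δn: sum of |log increments|^p."""
    logp = np.log(prices[::step])
    return np.sum(np.abs(np.diff(logp)) ** p)

def s_hat(prices, p=4, k=2):
    """Equation [6]: ratio of the p-th power variations at scales kΔn and Δn."""
    return b_hat(prices, p, step=k) / b_hat(prices, p, step=1)

# One simulated day of one-minute prices (an assumption, not the thesis data).
rng = np.random.default_rng(2)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0, 0.0005, 390)))
print(s_hat(prices))   # compare with the limits in the decision rule above
```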

2.1.2.4 Test for Microstructure Noise:

This test statistic is based on the difference between two realized volatilities computed at different sampling intervals (for instance, five minutes and ten minutes). Under the null hypothesis, both estimators will converge to the true integrated volatility process, though at different speeds. The test was developed by Barndorff-Nielsen and Shephard (2006), where the null hypothesis is the absence of noise, Z(p,k,Δ_n) = 0, and the alternative is the presence of noise, Z(p,k,Δ_n) ≠ 0. Under the null the test statistic follows a normal distribution:

$$\hat{S}(p,k,\Delta_n) = Z_{noise}(p,k,\Delta_n) = \frac{\hat{B}(p,k\Delta_n)_t - \hat{B}(p,\Delta_n)_t}{\hat{B}(p,\Delta_n)_t} \qquad [7]$$

$$\hat{S}(p,k,\Delta_n) \xrightarrow{p} \begin{cases} = 0, & \text{under } H_0 \text{ of absence of noise} \\ \neq 0, & \text{under } H_\alpha \text{ of presence of noise} \end{cases} \qquad Z_{noise}(p,k,\Delta_n) \sim N(0,1)$$
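Reusing b_hat from the previous sketch, equation [7] reduces to a two-line function; the choice p = 2 and the five-minute coarse scale (k = 5) are assumptions that mirror the empirical exercise of chapter 3.

```python
def z_noise(prices, p=2, k=5):
    """Equation [7]: relative gap between coarse- and fine-scale estimates."""
    b_fine = b_hat(prices, p, step=1)
    b_coarse = b_hat(prices, p, step=k)
    return (b_coarse - b_fine) / b_fine

print(z_noise(prices))   # values near 0 are consistent with no noise
```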

2.1.2.5 Realized GARCH Models with Linear Specification.

The Realized GARCH model proposed by Hansen, Huang and Lunde (2011) combines a GARCH structure for returns with a model for realized measures of volatility. It takes advantage of the natural relationship between the realized measure and the conditional variance and proposes a single measurement equation in which the realized measure is a consistent estimator of the integrated variance. Besides its tractable mathematical structure, the Realized GARCH model is easy to estimate, captures the return-volatility dependence (leverage effect) and was empirically shown to outperform the conventional GARCH. Hansen et al. (2011) use a realized kernel for the realized measure. The advantage of including a measurement equation that links the realized measure to the conditional variance through a linear relationship within a GARCH structure, instead of regressing it against its past lagged values, is that it nests the model in a simple and tractable GARCH structure and offers a formulation of the dependence between shocks to returns and shocks to volatility, known as the leverage effect. This model is:

$$h_t = \omega + \sum_{i=1}^{p}\beta_i h_{t-i} + \sum_{j=1}^{q}\gamma_j x_{t-j} \qquad [8]$$

$$x_t = \xi + \varphi h_t + \tau(\eta_t) + u_t, \qquad \eta_t \sim N(0,1), \qquad u_t \sim i.i.d.(0,\sigma_u^2) \qquad [9]$$

$$\tau(\eta_t) = \tau_1\eta_t + \tau_2(\eta_t^2 - 1) \qquad [10]$$

where h_t has an AR(1) representation, h_t = (ω + γξ) + (β + γφ)h_{t-1} + γw_{t-1} with w_t = τ(η_t) + u_t i.i.d., and the realized measure x_t follows an ARMA(1,1) process. If x_t is computed from high-frequency data and r_t is a close-to-close return that spans 24 hours, then φ reflects how much of the daily volatility occurs during trading hours. The function τ(η_t) is the leverage function, which captures the dependence between returns and future volatility; it is constructed from the second Hermite polynomial. The persistence of volatility is given by π = β + φγ. The log-likelihood function is:

$$\ell(r,x;\theta) = -\frac{1}{2}\sum_{t=1}^{n}\left[\log(2\pi) + \log(h_t) + \frac{r_t^2}{h_t} + \log(\sigma_u^2) + \frac{u_t^2}{\sigma_u^2}\right] \qquad [11]$$
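To show how equations [8]-[11] fit together, here is a minimal Python sketch of the RGARCH(1,1) quasi log-likelihood (our illustration; the parameter ordering and the initialization of the first conditional variance are assumptions). Maximizing it, e.g. by applying scipy.optimize.minimize to its negative, would deliver the QML estimates.

```python
import numpy as np

def rgarch_loglik(theta, r, x):
    """Quasi log-likelihood [11] of the RGARCH(1,1) model [8]-[10]."""
    omega, beta, gamma, xi, phi, tau1, tau2, sig2u = theta
    h = np.var(r)                                  # start-up value (assumed)
    ll = 0.0
    for t in range(len(r)):
        eta = r[t] / np.sqrt(h)
        # measurement error from [9]-[10]
        u = x[t] - xi - phi * h - tau1 * eta - tau2 * (eta**2 - 1)
        ll += -0.5 * (np.log(2 * np.pi) + np.log(h) + r[t]**2 / h
                      + np.log(sig2u) + u**2 / sig2u)
        h = omega + beta * h + gamma * x[t]        # equation [8], next period
    return ll
```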


2.1.3 Skewness and Kurtosis Using GARCHSK (1,1,1,1) Model.

Empirical studies of time-series data show that returns are not normally distributed; rather, they are skewed and leptokurtic. The presence of negative skewness has the effect of accentuating the left-hand side of the distribution; that is, the market assigns higher probability to increases than to decreases in asset prices. According to Harvey and Siddique (1999), Peiró (1999) and Premaratne and Bera (2001), excess kurtosis makes outliers more likely than in the normal case, which means that the market gives higher probability to extreme observations than a normal distribution would.

Standard GARCH models allow for time-varying volatility but do not permit time-varying skewness and kurtosis. Harvey and Siddique (1999) presented a methodology for jointly estimating the time-varying conditional variance and skewness under a non-central Student t distribution of the error terms in the mean equation. This methodology was applied to several series of stock index returns, and they found that autoregressive conditional skewness is significant and that the inclusion of skewness affects the persistence in variance. Campbell & Siddiqui (1999) set up a GARCH with skewness (GARCHS) model to estimate dynamic kurtosis by constructing a fourth-moment equation; Brooks, Burke and Herand (2005) derived a model for autoregressive conditional heteroskedasticity and kurtosis via a time-varying degrees-of-freedom parameter in which conditional variance and kurtosis are permitted to evolve separately. This model uses a Student t distribution and consequently can be estimated by maximum likelihood. Extensions of the basic model show that conditional kurtosis appears to be positive but weakly related to returns, and that the response of kurtosis to good and bad news is not significantly asymmetric. Premaratne and Bera (2001) suggested capturing the asymmetry or skewness and the excess kurtosis with the Pearson type IV error-term distribution, which has three parameters that can be interpreted as volatility, skewness and kurtosis. In the same way, Jondeau and Rockinger (2000) employ a conditional generalized Student t distribution to capture conditional skewness and kurtosis by imposing a time-varying structure on the two parameters which control the probability mass in the assumed distribution; however, these parameters do not follow a GARCH structure for either skewness or kurtosis. León, Rubio and Serna (2005) estimate time-varying volatility, skewness and kurtosis using a Gram-Charlier series expansion of the normal distribution in which skewness and kurtosis appear as parameters. Moreover, Knight and Satchell (1997) develop an option pricing model using a Gram-Charlier expansion of the underlying asset. In a similar framework, Abken, Madan and Ramamurtie (1996) end up with a Gram-Charlier expansion to approximate risk-neutral densities (RND). Gallant and Tauchen (1989) use the Gram-Charlier expansion to describe deviations from normality of innovations in a GARCH framework. Jondeau and Rockinger (2001) show how to improve GARCH estimations when innovations are assumed to be distributed as a Gram-Charlier density rather than a normal one.

León et al. (2005) obtain the following density function from a Gram-Charlier expansion of the normal density for the standardized innovation η_t:

$$g(\eta_t|I_{t-1}) = \phi(\eta_t)\left[1 + \frac{s_t}{3!}(\eta_t^3 - 3\eta_t) + \frac{k_t - 3}{4!}(\eta_t^4 - 6\eta_t^2 + 3)\right] \qquad [12]$$

where φ(η_t) denotes the probability density function of the standard normal distribution and ψ(η_t) is the polynomial part of fourth order. Note that the function defined above is not really a density function, because for some parameter values g(η_t|I_{t-1}) might be negative due to the polynomial component ψ(η_t); similarly, the integral of g(η_t|I_{t-1}) over ℝ is not equal to one (see Appendix 2). In order to obtain a positive density function, Gallant and Tauchen (1989) describe the density in terms of the square of the same expansion term ψ(η_t) and divide it by the normalizing function Γ_t; the density is then defined as:

$$g(\eta_t|I_{t-1}) = \frac{\phi(\eta_t)\left[1 + \frac{s_t}{3!}(\eta_t^3 - 3\eta_t) + \frac{k_t - 3}{4!}(\eta_t^4 - 6\eta_t^2 + 3)\right]^2}{1 + \frac{s_t^2}{3!} + \frac{(k_t - 3)^2}{4!}} = \frac{\phi(\eta_t)\,\psi^2(\eta_t)}{\Gamma_t} \qquad [13]$$

where

$$\psi(\eta_t) = 1 + \frac{s_t}{3!}(\eta_t^3 - 3\eta_t) + \frac{k_t - 3}{4!}(\eta_t^4 - 6\eta_t^2 + 3) \qquad [14]$$

$$\Gamma_t = 1 + \frac{s_t^2}{3!} + \frac{(k_t - 3)^2}{4!} \qquad [15]$$

This distribution is used to model the innovations in a GARCH model, maintaining the parameters s and k as the skewness and excess kurtosis of the density. The expansion assumes that the innovations η_t are distributed as a Gram-Charlier density:

$$\eta_t \sim \mathcal{GC}(0,1,s,k), \qquad (s,k) = f(\tilde{s},\tilde{k})$$

where f(·) is the mapping from ℝ² into the domain 𝒟 = ℝ × ℝ₊. If the Gram-Charlier density is negative, the log-likelihood function is no longer defined and the parameters cannot be estimated; therefore, when (s,k) is not in the domain 𝒟, the log-likelihood is not defined for some values of η. The set of mean, variance, skewness and kurtosis equations follows a GARCHSK(1,1,1,1) process:

$$r_t = \mu + \epsilon_t, \qquad \epsilon_t = \sqrt{h_t}\,\eta_t, \qquad \eta_t \sim GC(0,1,s_t,k_t), \qquad (\epsilon_t|I_{t-1}) \sim GC(0,h_t)$$

$$h_t = \omega_1 + \alpha_1\epsilon_{t-1}^2 + \beta_1 h_{t-1} \qquad [16]$$

where I_{t-1} is the information set at time t-1 and η_t is the standardized residual of the mean equation.


$$s_t = \omega_2 + \alpha_2\eta_{t-1}^3 + \beta_2 s_{t-1} \qquad [17]$$

$$k_t = \omega_3 + \alpha_3\eta_{t-1}^4 + \beta_3 k_{t-1} \qquad [18]$$

The log-likelihood function is:

$$\ell_t = -\frac{1}{2}\sum_{t=1}^{n}\left\{\log(2\pi) + \log(h_t) + \eta_t^2\right\} + \sum_{t=1}^{n}\left[\log\!\big(\psi^2(\eta_t)\big) - \log(\Gamma_t)\right] \qquad [19]$$

with ψ(η_t) and Γ_t as defined in equations [14] and [15].
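A minimal Python sketch of the GARCHSK recursion [16]-[18] and the Gram-Charlier quasi log-likelihood [19] follows (our illustration; the start-up values and the parameter grouping are assumptions, and a practical implementation would also guard against ψ(η_t) = 0):

```python
import numpy as np

def garchsk_loglik(par, r):
    """Quasi log-likelihood [19] of the GARCHSK(1,1,1,1) model [16]-[18]."""
    mu, w1, a1, b1, w2, a2, b2, w3, a3, b3 = par
    eps = r - mu
    h, s, k = np.var(eps), 0.0, 3.0          # start-up values (assumptions)
    ll = 0.0
    for t in range(len(r)):
        eta = eps[t] / np.sqrt(h)
        psi = (1 + s / 6 * (eta**3 - 3 * eta)
                 + (k - 3) / 24 * (eta**4 - 6 * eta**2 + 3))   # equation [14]
        gam = 1 + s**2 / 6 + (k - 3) ** 2 / 24                 # equation [15]
        ll += (-0.5 * (np.log(2 * np.pi) + np.log(h) + eta**2)
               + np.log(psi**2) - np.log(gam))
        h = w1 + a1 * eps[t] ** 2 + b1 * h                     # equation [16]
        s = w2 + a2 * eta**3 + b2 * s                          # equation [17]
        k = w3 + a3 * eta**4 + b3 * k                          # equation [18]
    return ll
```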

2.1.4 The Realized GARCHSK (1,1,1,1) Model.

As mentioned, León et al. (2005) applied a Gram-Charlier series expansion of the normal distribution (see Appendix 1) to model the conditional skewness and kurtosis simultaneously using daily log-returns, while Hansen et al. (2011) created the RGARCH model using a realized measure of volatility. Linking these two models, we obtain the RGARCHSK(1,1,1,1) model, estimated by a quasi-maximum likelihood approach with the Gauss-Newton optimization method, the Marquardt algorithm and HAC covariance estimators.

This model combines the variance equation of the Hansen et al. (2011) model, based on high-frequency data, with the conditional skewness and kurtosis equations of León et al., using the Gram-Charlier expansion modified according to Gallant and Tauchen (1989). Finally, as in the Hansen et al. (2011) paper, the realized kernel is the realized measure x_t.

$$r_t = \mu + \epsilon_t, \qquad \epsilon_t = \sqrt{h_t}\,\eta_t, \qquad \eta_t \sim GC(0,1,s_t,k_t), \qquad (\epsilon_t|I_{t-1}) \sim GC(0,h_t)$$

$$h_t = \omega_1 + \gamma_1 x_{t-1} + \beta_1 h_{t-1} \qquad [20]$$

$$\text{Realized Variance} = RV_t = x_t = \xi_1 + \varphi_1 h_t + \tau_1(\eta_t) + u_t, \qquad u_t \sim i.i.d.(0,\sigma_u^2) \qquad [21]$$

$$\tau_1(\eta_t) = \tau_{11}\eta_t + \tau_{12}(\eta_t^2 - 1) \qquad [22]$$

where I_{t-1} is the information set at time t-1, η_t is the standardized residual, x_t is the kernel measure of realized volatility and τ_1(η_t) is the leverage function.

$$s_t = \omega_2 + \alpha_2\eta_{t-1}^3 + \beta_2 s_{t-1}, \qquad \epsilon_t^3 = h_t^{3/2}\,\eta_t^3 \qquad [23]$$


$$k_t = \omega_3 + \alpha_3\eta_{t-1}^4 + \beta_3 k_{t-1}, \qquad \epsilon_t^4 = h_t^2\,\eta_t^4 \qquad [24]$$

The log-likelihood function is given by:

$$\ell_t = -\frac{1}{2}\sum_{t=1}^{n}\left\{\left[\log(2\pi) + \log(h_t) + \frac{\epsilon_t^2}{h_t}\right] + \left[\log(2\pi) + \log(\sigma_u^2) + \frac{u_t^2}{\sigma_u^2}\right]\right\} + \sum_{t=1}^{n}\left[\ln\!\big(\psi^2(\eta_t)\big) - \ln(\Gamma_t)\right] \qquad [25]$$

with ψ(η_t) and Γ_t as in equations [14] and [15]. Since the GARCHSK model does not model the realized measure x_t, its log-likelihood cannot be compared to that of the RGARCHSK model.

2.1.5 Proposed Model: RGARCHRSRK (1,1,1,1).

Taking the RGARCHSK model as a baseline, we apply the Groeneveld and Meeden (1984) robust measure of skewness and the Moors (1988) robust measure of kurtosis inside the equations of the conditional higher moments, obtaining the RGARCHRSRK(1,1,1,1) model. This model is also estimated by quasi-maximum likelihood with the Gauss-Newton optimization method, the Marquardt algorithm and HAC covariance estimators.

$$r_t = \mu + \epsilon_t, \qquad \epsilon_t = \sqrt{h_t}\,\eta_t, \qquad \eta_t \sim GC(0,1,s_t,k_t), \qquad (\epsilon_t|I_{t-1}) \sim GC(0,h_t,s_t,k_t)$$

$$h_t = \omega_1 + \gamma_1 x_{1,t-1} + \beta_1 h_{t-1} \qquad [26]$$

$$\text{Realized Variance} = RV_t = x_{1,t} = \xi_1 + \varphi_1 h_t + \tau_1(\eta_t) + u_{1,t}, \qquad u_{1,t} \sim i.i.d.(0,\sigma_{u_1}^2) \qquad [27]$$

$$\tau_1(\eta_t) = \tau_{11}\eta_t + \tau_{12}(\eta_t^2 - 1) \qquad [28]$$

$$s_t = \omega_2 + \gamma_2 x_{2,t-1} + \beta_2 s_{t-1} \qquad [29]$$

$$\text{Realized Robust Groeneveld-Meeden Skewness} = RS_t = x_{2,t} = \xi_2 + \varphi_2 s_t + \tau_2(\eta_t) + u_{2,t}, \qquad u_{2,t} \sim i.i.d.(0,\sigma_{u_2}^2) \qquad [30]$$

$$\tau_2(\eta_t) = \tau_{21}\eta_t + \tau_{22}(\eta_t^2 - 1) + \tau_{23}(\eta_t^3 - 3\eta_t) \qquad [31]$$

$$k_t = \omega_3 + \gamma_3 x_{3,t-1} + \beta_3 k_{t-1} \qquad [32]$$

$$\text{Realized Robust Moors Kurtosis} = RK_t = x_{3,t} = \xi_3 + \varphi_3(k_t - 3) + \tau_3(\eta_t) + u_{3,t}, \qquad u_{3,t} \sim i.i.d.(0,\sigma_{u_3}^2) \qquad [33]$$

$$\tau_3(\eta_t) = \tau_{31}\eta_t + \tau_{32}(\eta_t^2 - 1) + \tau_{33}(\eta_t^3 - 3\eta_t) + \tau_{34}(\eta_t^4 - 6\eta_t^2 + 3) \qquad [34]$$

The functions τ₁(η_t), τ₂(η_t) and τ₃(η_t) are the leverage functions that capture the leverage effect of volatility, skewness and kurtosis on returns; for the RGARCHRSRK model these functions are built from the second, third and fourth Hermite polynomials.

The log-likelihood function is given by:

$$\ell_t = -\frac{1}{2}\sum_{t=1}^{n}\left\{\left[\log(2\pi) + \log(h_t) + \frac{\epsilon_t^2}{h_t}\right] + \left[\log(2\pi) + \log(\sigma_{u_1}^2) + \frac{u_{1,t}^2}{\sigma_{u_1}^2}\right] + \left[\log(2\pi) + \log(\sigma_{u_2}^2) + \frac{u_{2,t}^2}{\sigma_{u_2}^2}\right] + \left[\log(2\pi) + \log(\sigma_{u_3}^2) + \frac{u_{3,t}^2}{\sigma_{u_3}^2}\right]\right\} + \sum_{t=1}^{n}\left[\ln\!\big(\psi^2(\eta_t)\big) - \ln(\Gamma_t)\right] \qquad [35]$$

with ψ(η_t) and Γ_t as in equations [14] and [15].

It is important to note that, relative to the RGARCHSK likelihood, two measurement parts were added to the log-likelihood function: one for skewness and one for kurtosis.
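The filtering step of the proposed model is summarized in the sketch below (our illustration; the coefficient names, the dict packaging and the start-up values are assumptions). The filtered h_t, s_t and k_t feed the Gram-Charlier likelihood [35] and, later, the Cornish-Fisher VaR of section 2.2.2.

```python
import numpy as np

def rgarchrsrk_filter(r, x1, x2, x3, par):
    """One pass of equations [26], [29] and [32]: conditional variance,
    skewness and kurtosis driven by the realized kernel variance (x1),
    the realized Groeneveld-Meeden skewness (x2) and the realized Moors
    kurtosis (x3)."""
    n = len(r)
    h, s, k = np.empty(n), np.empty(n), np.empty(n)
    h[0], s[0], k[0] = np.var(r), 0.0, 3.0     # start-up values (assumed)
    for t in range(1, n):
        h[t] = par["w1"] + par["g1"] * x1[t-1] + par["b1"] * h[t-1]  # [26]
        s[t] = par["w2"] + par["g2"] * x2[t-1] + par["b2"] * s[t-1]  # [29]
        k[t] = par["w3"] + par["g3"] * x3[t-1] + par["b3"] * k[t-1]  # [32]
    return h, s, k
```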

2.2 Value at Risk.

2.2.1 Mathematical definition.

Given some confidence level α ∈ (0,1) the Value at Risk (VaR) of a portfolio at the confidence level α is given by the smallest number “l” such that the probability that the loss “L” exceeds “l” is no larger than (1 − α). Formally,

$$VaR_\alpha = \inf\{l \in \mathbb{R} : P(L > l) \le 1 - \alpha\} = \inf\{l \in \mathbb{R} : F_L(l) \ge \alpha\}$$

The first equation is the definition of VaR and the second expresses it in terms of the probability distribution of profit and loss, where F_L(l) is its cumulative distribution function. Given some increasing function T: ℝ → ℝ, the generalized inverse of T is defined by T^←(y) := inf{x ∈ ℝ : T(x) ≥ y}, where we use the convention that the infimum of an empty set is ∞. Then, given some distribution function F, the generalized inverse F^← is called the quantile function of F. For α ∈ (0,1) the α-quantile of F is given by

$$q_\alpha(F) = F^{\leftarrow}(\alpha) = \inf\{x \in \mathbb{R} : F(x) \ge \alpha\}.$$

If F is continuous and strictly increasing, then q_α(F) = F⁻¹(α), where F⁻¹ is the ordinary inverse of F.

2.2.2 VaR Techniques.

The VaR calculated under the normality assumption underestimates the actual risk when the distribution of the observed financial return series has tails fatter than those implied by the conditional normal distribution. We therefore incorporate these observations into the VaR calculation by employing quantiles of the standard normal distribution, and the Cornish-Fisher expansion for the models with conditional skewness and kurtosis equations. For the GARCH(1,1) and RGARCH(1,1) models the VaR formula is:

$$VaR_t = -\text{mean}(r) - h_{t-1}\,F^{-1}_{Normal}(\alpha) \qquad [36]$$

where, for the normal case at the 99% level, $F^{-1}_{Normal}(\alpha) = 2.33$. The Cornish-Fisher (1938) approximation is a Taylor-series expansion of the α-VaR around the α-VaR of a normal distribution and is given by:

$$VaR_t = -\text{mean}(r) - h_{t-1}\,F^{-1}_{CF}(\alpha) \qquad [37]$$

$$F^{-1}_{CF}(\alpha) = F^{-1}_{N}(\alpha) + \frac{s_t}{6}\left[F^{-1}_{N}(\alpha)^2 - 1\right] + \frac{k_t - 3}{24}\left[F^{-1}_{N}(\alpha)^3 - 3F^{-1}_{N}(\alpha)\right] - \frac{s_t^2}{36}\left[2F^{-1}_{N}(\alpha)^3 - 5F^{-1}_{N}(\alpha)\right]$$

where s_t and k_t are the skewness and kurtosis of the standardized residual series obtained from the GARCHSK(1,1,1,1), RGARCHSK(1,1,1,1) and RGARCHRSRK(1,1,1,1) models. Negative skewness and excess kurtosis tend to decrease the estimated quantile F⁻¹_CF(α) and therefore increase the VaR.
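A minimal Python sketch of equations [36]-[37] (our illustration; the sample inputs are assumptions), with scipy.stats.norm supplying the normal quantile:

```python
from scipy.stats import norm

def cornish_fisher_var(mean_r, vol, s, k, alpha=0.99):
    """Daily VaR from the Cornish-Fisher quantile of equation [37]."""
    z = norm.ppf(alpha)                        # 2.33 at the 99% level
    z_cf = (z + s / 6 * (z**2 - 1)
              + (k - 3) / 24 * (z**3 - 3 * z)
              - s**2 / 36 * (2 * z**3 - 5 * z))
    return -mean_r - vol * z_cf                # negative return threshold

# Assumed inputs: daily mean return, conditional volatility, s_t and k_t.
print(cornish_fisher_var(0.03, 1.1, s=-0.2, k=4.5))
```

Setting s = 0 and k = 3 collapses z_cf to the normal quantile, recovering equation [36].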

2.2.3 Backtesting VaR:

One perspective on the evaluation of VaR methods is through the number of exceptions (Basel Committee, 1996); a VaR exception occurs when the actual loss exceeds the anticipated VaR. Depending on the VaR probability level, a specific number of VaR exceptions is allowed per year: for example, for the 99% VaR the expected number of exceptions is 1% of the total number of periods in a year, so if a year has 250 trading days, about two or three exceptions are allowed per year.

2.2.3.1 Unconditional Coverage Test.

Kupiec (1995) proposed VaR backtests focused exclusively on the property of unconditional coverage. In short, these tests are concerned with whether or not the reported VaR is violated more (or less) than α × 100% of the time. The proposed proportion-of-failures (POF) test examines how many times a financial institution's VaR is violated over a given span of time. If the number of violations differs considerably from α × 100% of the sample, then the accuracy of the underlying risk model is called into question. Using a sample of T observations, Kupiec's test statistic takes the form:

$$POF = -2\log\left[\frac{(1-\alpha)^{T-I(\alpha)}\,\alpha^{I(\alpha)}}{(1-\hat{\alpha})^{T-I(\alpha)}\,\hat{\alpha}^{I(\alpha)}}\right], \qquad \hat{\alpha} = \frac{1}{T}I(\alpha), \qquad I(\alpha) = \sum_{t=1}^{T} I_t(\alpha) \qquad [38]$$

A close inspection of the test statistic reveals that if the proportion of VaR violations, α̂ × 100%, is exactly equal to α × 100%, then the POF test takes the value zero, indicating no evidence of any inadequacy in the underlying VaR measure. As the proportion of VaR violations differs from α × 100%, the POF test statistic grows indicating mounting evidence that the proposed VaR measure either systematically understates or overstates the portfolio's underlying level of risk. A shortcoming of this test is that it focuses exclusively on the unconditional coverage property of an adequate VaR measure and does not examine the extent to which the independence property is satisfied.
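A minimal Python sketch of the POF statistic [38] (our illustration; the helper _xlogy and the three assumed exceptions are ours), with the asymptotic χ²(1) p-value from scipy:

```python
import numpy as np
from scipy.stats import chi2

def _xlogy(x, y):
    """x * log(y) with the convention 0 * log(0) = 0."""
    return 0.0 if x == 0 else x * np.log(y)

def kupiec_pof(hits, p):
    """Kupiec POF likelihood-ratio statistic [38] from a 0/1 hit series."""
    T, I = len(hits), int(np.sum(hits))
    p_hat = I / T
    lr = -2 * (_xlogy(T - I, (1 - p) / (1 - p_hat)) + _xlogy(I, p / p_hat))
    return lr, chi2.sf(lr, df=1)               # statistic and p-value

hits = np.zeros(250, dtype=int)
hits[[17, 141, 230]] = 1                        # three assumed exceptions
print(kupiec_pof(hits, p=0.01))
```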

2.2.3.2 Independence Test.

Christoffersen's (1998) test examines the independence property of the VaR hit series I_t(α). This test asks whether the likelihood of a VaR violation depends on whether a violation occurred on the previous day. If the VaR measure accurately reflects the underlying portfolio risk, then the chance of violating today's VaR should be independent of whether yesterday's VaR was violated. If, for example, the likelihood of a 1% VaR violation increases on the day following a previous 1% VaR violation, this would indicate that the 1% VaR following a violation should in fact be increased. The test is carried out by creating a 2x2 contingency table that records the violations of the institution's VaR on adjacent days. If the VaR measure accurately reflects the portfolio's risk, then the proportion of violations that occur after a previous violation, I_{t-1} = 1, should be the same as the proportion of violations that occur after a day in which no violation occurred, I_{t-1} = 0.

Violations of the independence property which are not related to the set of anomalies defined by the test will not be systematically detected. As a result, independence tests are only likely to be effective at detecting inaccurate VaR measures to the extent that the tests are designed to identify violations of the independence property in ways that are likely to arise when internal risk models fail to provide accurate VaR measures.

2.2.3.3 Joint Unconditional Coverage and Independence Tests.

The advantage of this method is the simplicity of its implementation. This procedure starts with the hit function of Christoffersen (1998):

$$I_t = \begin{cases} 1, & \text{if } r_t < VaR_t \\ 0, & \text{if } r_t \ge VaR_t \end{cases} \qquad [39]$$

I_t takes the value 1 if the negative log-return at time t exceeds the VaR at time t, and the value 0 otherwise. The null hypothesis is that I_t ~ i.i.d. Bernoulli(p) and the alternative hypothesis is that I_t ~ i.i.d. Bernoulli(π). To form a complete test, Christoffersen uses an LR test of correct unconditional coverage, an LR test of independence and an LR test that combines the two. Under the null of correct unconditional coverage, the statistic has a chi-squared distribution with one degree of freedom:

$$LR_{UC} = -2\log\left[\frac{(1-p)^{\eta_0}\,p^{\eta_1}}{(1-\hat{p})^{\eta_0}\,\hat{p}^{\eta_1}}\right] \sim \chi^2(1) \qquad [40]$$

where η₀ is the number of non-violations, η₁ is the number of violations, and p̂ is the observed coverage probability, with maximum likelihood estimator p̂ = η₁/(η₀ + η₁).

$$LR_{ind} = -2\log\left[\frac{(1-\pi_2)^{\eta_{00}+\eta_{10}}\,\pi_2^{\eta_{01}+\eta_{11}}}{(1-\pi_{01})^{\eta_{00}}\,\pi_{01}^{\eta_{01}}\,(1-\pi_{11})^{\eta_{10}}\,\pi_{11}^{\eta_{11}}}\right] \sim \chi^2(1) \qquad [41]$$

$$\pi_{01} = \frac{\eta_{01}}{\eta_{00}+\eta_{01}}, \qquad \pi_{11} = \frac{\eta_{11}}{\eta_{10}+\eta_{11}}, \qquad \pi_2 = \frac{\eta_{01}+\eta_{11}}{\eta_{00}+\eta_{01}+\eta_{10}+\eta_{11}}$$

where η_ij is the number of observations with value i at time t−1 followed by value j at time t (the hit takes value 1 if an exceedance occurs and 0 if it does not). The LR test of conditional coverage is asymptotically chi-squared with 2 degrees of freedom:

$$LR_{CC} = -2\log\left[\frac{(1-p)^{\eta_0}\,p^{\eta_1}}{(1-\pi_{01})^{\eta_{00}}\,\pi_{01}^{\eta_{01}}\,(1-\pi_{11})^{\eta_{10}}\,\pi_{11}^{\eta_{11}}}\right] \sim \chi^2(2) \qquad [42]$$

$$LR_{CC} = LR_{UC} + LR_{ind} \sim \chi^2(2) \qquad [43]$$

This statistic tests jointly for independence and for correctness of the probability parameter p. The null H₀ is not rejected at the significance level p = 1 − a if

$$\chi^2_{inv}\!\left(\frac{p}{2};\,2\right) \le LR_{CC} \le \chi^2_{inv}\!\left(1 - \frac{p}{2};\,2\right)$$

The advantage of this method is that it is easy to implement and can identify the source of failure, while testing both the coverage and the independence properties at the same time. However, while these joint tests can detect a VaR measure that violates either property, they have less power against a VaR measure that violates only one of the two properties than the corresponding individual test. Another issue is that this test does not include information about the magnitude of the VaR errors.
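Reusing kupiec_pof and _xlogy from the sketch above, the independence and joint statistics [41]-[43] can be computed as follows (our illustration; the handling of empty transition counts is an assumption):

```python
def christoffersen_cc(hits, p):
    """LR_ind [41] and LR_cc [42]-[43] from a 0/1 hit series."""
    hits = np.asarray(hits, dtype=int)
    lr_uc, _ = kupiec_pof(hits, p)
    prev, curr = hits[:-1], hits[1:]            # transition counts eta_ij
    n00 = int(np.sum((prev == 0) & (curr == 0)))
    n01 = int(np.sum((prev == 0) & (curr == 1)))
    n10 = int(np.sum((prev == 1) & (curr == 0)))
    n11 = int(np.sum((prev == 1) & (curr == 1)))
    pi01 = n01 / (n00 + n01)
    pi11 = n11 / (n10 + n11) if (n10 + n11) else 0.0
    pi2 = (n01 + n11) / (n00 + n01 + n10 + n11)
    ll_alt = (_xlogy(n00, 1 - pi01) + _xlogy(n01, pi01)
              + _xlogy(n10, 1 - pi11) + _xlogy(n11, pi11))
    ll_null = _xlogy(n00 + n10, 1 - pi2) + _xlogy(n01 + n11, pi2)
    lr_ind = -2 * (ll_null - ll_alt)
    lr_cc = lr_uc + lr_ind                      # equation [43]
    return lr_ind, lr_cc, chi2.sf(lr_cc, df=2)

print(christoffersen_cc(hits, p=0.01))
```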

2.2.3.4 Statistical Evaluation of VaRs.

According to Engel and Gizycki (1999), an appropriate method should possess three features: (1) conservatism, indicating that it generally gives a relatively higher VaR compared to other methods; (2) accuracy, meaning the method is able to identify the level of loss with minimum error in magnitude; and (3) efficiency, that is, the capability to compute an adequate level of risk capital such that risk is fully accounted for, yet not so high as to imply an opportunity loss.


2.2.3.4.1 Mean Relative Bias (MRB):

To assess the conservatism and the relative size of VaR estimation we apply the mean relative bias developed by Hendricks (1996). This statistic captures the extent to which different VaR-GARCH models produce risk estimations of similar average size. The mean relative bias is computed as:

$$MRB_i = \frac{1}{T}\sum_{t=1}^{T}\frac{VaR_{it} - \overline{VaR}_t}{\overline{VaR}_t}, \qquad \overline{VaR}_t = \frac{1}{N}\sum_{i=1}^{N} VaR_{it} \qquad [43]$$

where VaR_{it} is the VaR at time t based on method i, T is the length of the return series over the evaluation period and N is the number of VaR methods being compared.

The higher the MRB of a VaR method, the more conservative it is relative to the other models.

2.2.3.4.2 Average Quadratic Loss Function (AQLF):

While the measure of conservatism takes account of the relative size of risk estimates, when assessing the accuracy of the VaR estimations, we are concerned with the number of times that losses larger than VaR estimates are observed and the size of those losses.

$$AQLF = \frac{1}{T}\sum_{t=1}^{T} L(VaR_t, r_t), \qquad L(VaR_t, r_t) = \begin{cases} 1 + (VaR_t - r_t)^2, & \text{if } VaR_t > r_t \\ 0, & \text{otherwise} \end{cases} \qquad [44]$$

The lower (closer to zero) the AQLF, the more accurate the VaR method is in forecasting and accounting for possible losses.

2.2.3.4.3 Average Market Risk Capital (AMRC):

A risk measure needs to be more than conservative and accurate; efficiency is also important, providing more precise resource-allocation signals about the capacity of a model to deliver adequate risk coverage with a minimum average capital.

$$AMRC = \frac{1}{T}\sum_{t=1}^{T} MRC_t, \qquad MRC_t = \max\left[\left(-\frac{k}{60}\sum_{i=1}^{60} VaR_{t-i}\right), -VaR_{t-1}\right] \qquad [45]$$

where k is the penalty multiplier (3, 3.4, 3.5, 3.65, 3.75, 3.85 or 4) based on the number of VaR exceptions.


The lower the AMRC, the lower the risk capital to be allocated on average. The AMRC is analyzed jointly with the results of the other statistics.
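The three evaluation statistics are straightforward once the competing VaR series are stacked in an array; a minimal Python sketch follows (ours; the array shapes and the default multiplier k = 3 are assumptions):

```python
import numpy as np

def mrb(var_matrix):
    """Mean relative bias [43] per model; var_matrix is (N models, T days)."""
    var_bar = var_matrix.mean(axis=0)           # cross-model average VaR_t
    return ((var_matrix - var_bar) / var_bar).mean(axis=1)

def aqlf(var, r):
    """Average quadratic loss function [44] for one model's VaR series."""
    return np.mean(np.where(var > r, 1 + (var - r) ** 2, 0.0))

def amrc(var, k=3.0):
    """Average market risk capital [45]; k is the Basel penalty multiplier."""
    mrc = [max(-k / 60 * np.sum(var[t-60:t]), -var[t-1])
           for t in range(60, len(var))]
    return np.mean(mrc)
```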

2.2.3.5 The Loss Function.

The loss function is based on the hit function defined by Christoffersen (1998):

$$L(r_t, VaR_t) = \sum_{t=1}^{T}\left[\alpha\,(r_t - VaR_t)\,(1 - I_{r_t < VaR_t}) + (1-\alpha)\,(VaR_t - r_t)\,I_{r_t < VaR_t}\right] \qquad [46]$$

where r_t is the return in period t and VaR_t is the VaR value for the α quantile of the return distribution in period t. The generalized error, computed by differentiating with respect to the VaR, is:

$$ge_t = HIT_t = I_{r_t < VaR_t} - \alpha$$

When there is a VaR exceedance HIT_t = 1 − α, and with no exceedance HIT_t = −α. If the model is correct, then a fraction α of the hits should equal (1 − α) and a fraction (1 − α) should equal −α, so the mean of HIT_t should be 0:

$$\alpha(1-\alpha) - \alpha(1-\alpha) = 0$$

Moreover, when the VaR is conditional on time-t information, E_t(HIT_{t+1}) = 0, which follows from the properties of optimal forecasts.

2.2.3.6 The DM Test.

Another approach is the relative comparison test proposed by Diebold and Mariano (1995), used to rank VaR forecasts. If L(r_t, VaR_t) is a loss function defined over the VaR, then the test statistic can be computed as:

$$DM = \frac{\bar{d}}{\sqrt{\widehat{V(\bar{d})}}} \qquad [47]$$

where

$$d_t = L(r_t, VaR_t^A) - L(r_t, VaR_t^B) \qquad [48]$$

VaR_t^A and VaR_t^B are the Value-at-Risks from models A and B respectively, $\bar{d} = \frac{1}{N}\sum_{t=1}^{N} d_t$, N is the number of observations used in the model, and $\sqrt{\widehat{V(\bar{d})}}$ is the long-run variance of d_t, which requires the use of a HAC covariance estimator (e.g. Newey-West). Recall that the DM test has an asymptotic normal distribution, the null is H₀: E(d_t) = 0 and the alternative is H_A: E(d_t) ≠ 0. Large negative values (less than -2) indicate that model B is superior, while large positive values indicate the opposite; values close to zero indicate that neither forecast outperforms the other.
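A minimal Python sketch of the DM statistic [47]-[48] (our illustration; the Bartlett-weighted Newey-West long-run variance and the truncation lag are assumptions):

```python
import numpy as np

def dm_statistic(loss_a, loss_b, max_lag=5):
    """Diebold-Mariano statistic [47]: d_bar over its long-run std. error."""
    d = np.asarray(loss_a) - np.asarray(loss_b)     # equation [48]
    n = len(d)
    d_bar = d.mean()
    d_c = d - d_bar
    lrv = np.sum(d_c ** 2) / n                      # lag-0 autocovariance
    for lag in range(1, max_lag + 1):
        w = 1 - lag / (max_lag + 1)                 # Bartlett weight
        lrv += 2 * w * np.sum(d_c[lag:] * d_c[:-lag]) / n
    return d_bar / np.sqrt(lrv / n)                 # compare with N(0,1)
```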


Chapter 3: Data Analysis.

3.1 Data.

Our data set consists of one-minute closing prices of the S&P 500 from 1/02/1990 to 9/05/2014. These closing prices contain 390 observations per day, from 09:30 to 15:59. Figures I and II present the 2,394,100 intra-day closing prices of the S&P 500 index; the intra-day continuously compounded log-returns are r_{t,i} = [Log(Close_{t,i}) − Log(Close_{t,i−1})] ∗ 100, where r_{t,i} are the intra-day log-returns and Close_{t,i} the intra-day close prices of the S&P 500 index. The descriptive statistics show negative skewness (-0.01369) and very high kurtosis (499.7555). Figure III is a Q-Q plot showing the non-normality of the log-return distribution. The daily data comprise 6169 observations of the S&P 500 closing price index, as shown in Figure IV, with returns calculated as r_t = [Log(Close_t) − Log(Close_{t−1})] ∗ 100. Figure V displays the daily log-returns and Figure VI the corresponding Q-Q plot. The descriptive statistics of the daily log-returns show negative skewness (-0.22452) and excess kurtosis (11.50159). The next table shows the comparative descriptive statistics of both log-return series.

Table I
Descriptive statistics of daily and intra-daily log returns.

Log-return            Daily       Intra-daily
Mean                  0.02788     0.00007
Median                0.05706     0.00000
Maximum               10.95720    4.76824
Minimum               -9.46951    -4.74758
Standard deviation    1.14613     0.04356
Skewness              -0.22452    -0.01369
Kurtosis              11.50159    499.75550
Observations          6168        2,394,100

3.2 Tests for Jumps and Microstructure Noise.

3.2.1 Test for Jumps.

This test, proposed by Aït-Sahalia (2009), is computed as the daily sum of differences between intra-day close prices with one lag, raised to the power p. We use k = 2 and p = 4, as suggested by Aït-Sahalia. The test value is 2.05025; since this value is higher than the 5% normal critical value of 1.64, the null hypothesis is rejected, indicating the presence of jumps.

$$\hat{S}(p,k,\Delta_n) = Z_{jumps}(p,k,\Delta_n) = \frac{\hat{B}(p,k\Delta_n)_t}{\hat{B}(p,\Delta_n)_t} = \frac{77651856}{37874312} = 2.05025 > 1.64 \;\rightarrow\; \text{Reject } H_0$$

3.2.2 Test for Microstructure Noise.

This test, developed by Barndorff-Nielsen and Shephard (2006), is based on the difference between two realized volatility estimators. We have chosen the bi-power variation as the measure of realized volatility, computed at different sampling intervals: one minute versus five, ten and fifteen minutes. The null hypothesis is that the difference between the realized volatilities is zero, i.e. that there is no microstructure noise. We reject the null hypothesis: the results confirm the presence of microstructure noise. The test values are -0.11309 for one minute against five minutes, -0.12816 for one minute against ten minutes and -0.13455 for one minute against fifteen minutes. If we standardize these values we obtain -10.26, -11.63 and -12.21 respectively, all larger in absolute value than the 5% critical value of -1.64, rejecting the null hypothesis of no microstructure noise.

$$\hat{S}(p,k,\Delta_n) = Z_{noise}(p,k,\Delta_n) = \frac{\hat{B}(p,k\Delta_n)_t - \hat{B}(p,\Delta_n)_t}{\hat{B}(p,\Delta_n)_t}$$

$$Test_{01\,vs\,05\,min} = \frac{\hat{B}(05\,min)_t - \hat{B}(01\,min)_t}{\hat{B}(01\,min)_t} = \frac{429.63 - 484.42}{484.42} = -0.11309 \;\rightarrow\; -10.26 < -1.64 \;\rightarrow\; \text{Reject } H_0$$

$$Test_{01\,vs\,10\,min} = \frac{\hat{B}(10\,min)_t - \hat{B}(01\,min)_t}{\hat{B}(01\,min)_t} = \frac{422.33 - 484.42}{484.42} = -0.12816 \;\rightarrow\; -11.63 < -1.64 \;\rightarrow\; \text{Reject } H_0$$

$$Test_{01\,vs\,15\,min} = \frac{\hat{B}(15\,min)_t - \hat{B}(01\,min)_t}{\hat{B}(01\,min)_t} = \frac{419.23 - 484.42}{484.42} = -0.13455 \;\rightarrow\; -12.21 < -1.64 \;\rightarrow\; \text{Reject } H_0$$

3.3 Realized Volatility Measure Robust to Jumps and Microstructure Noise.

The results of the previous subsections point to the presence of jumps and microstructure noise. Therefore, we calculate three different measures of realized volatility: the first is the realized variance, the second is the bi-power variation and the third is the realized univariate kernel. As we have seen in chapter two, the traditional realized variance measure is influenced by jumps and microstructure noise; the bi-power variation is robust to jumps but not to microstructure noise; and the univariate realized kernel measure is robust to both jumps and microstructure noise, see Barndorff-Nielsen et al. (2009). The next table shows descriptive statistics of these measures. We adopt the realized kernel with the Parzen kernel function because this estimator is similar to the realized variance but is robust to jumps and market microstructure noise, and it is the most accurate and robust estimator of the quadratic variation.

Table II
Descriptive statistics of realized volatility measures.

Volatility measure     Univariate Kernel   Bi-Power    Realized Variance
Mean                   0.18336             0.07852     0.13462
Median                 0.08204             0.03321     0.05671
Maximum                16.60168            9.42310     14.22302
Minimum                0.00846             0.00014     0.00453
Standard deviation     0.42952             0.21972     0.35195
Skewness               14.56094            18.42684    16.08817
Kurtosis               405.10810           602.21830   479.74660
Observations           6168                6168        6168

The realized univariate kernel is $RK(X_h) = RV_n + 2\sum_{h=1}^{H} k\!\left(\frac{h-1}{H}\right)\gamma_h$; the bi-power variation is $BPV = \sum_{j=1}^{M-1}|y_{j,i}|\,|y_{j+1,i}|$; the realized variance is $RV_n = \sum_{j=1}^{M} r_{t_j}^2$.


3.4 Realized and Robust Measures of Skewness and Kurtosis.

Finally, we calculated the robust measures of skewness and kurtosis from the intra-day S&P 500 index log-returns. Table III shows descriptive statistics of these higher-moment measures. The main feature of this table is that the mean of the skewness is positive and that the mean of the kurtosis measure is lower than 3, the normal value.

Table III
Descriptive statistics of skewness and excess of kurtosis.

Measure                Groeneveld & Meeden Skewness   Moors excess of Kurtosis
Mean                   0.00361                        1.74085
Median                 0.00078                        1.60204
Maximum                0.53581                        7.80709
Minimum                -0.75486                       0.00000
Standard deviation     0.11553                        0.86744
Skewness               -0.03034                       1.31081
Kurtosis               3.42952                        6.11168
Observations           6168                           6168

The Groeneveld and Meeden skewness is $SK(\alpha) = \frac{\int_0^{0.5}\left[F^{-1}(1-\alpha)+F^{-1}(\alpha)-2Q_2\right]d\alpha}{\int_0^{0.5}\left[F^{-1}(1-\alpha)-F^{-1}(\alpha)\right]d\alpha} = \frac{\mu - Q_2}{E|r_t - Q_2|}$; the Moors kurtosis is $KR = \frac{(E_7-E_5)+(E_3-E_1)}{E_6-E_2}$.

3.5 Figures.

Figures I and II: One-minute S&P500 index intra-daily close prices and intra-daily log-returns of S&P500 index close prices from 1/05/1990 to 9/05/2014 (09:30–15:59).


Figures IV and V: S&P500 index daily close prices and daily log-returns of S&P500 index close prices from 1/05/1990 to 9/05/2014.


Figures VII and VIII: Intra-daily one-minute robust measures of skewness and excess of kurtosis from 1/05/1990 to 9/05/2014 (09:30–15:59).


Chapter 4: Empirical Results.

In this last chapter we analyze the results for all the volatility models using a set of tests applied to a reduced sample of the last 250 days of log-returns, from 9/05/2013 to 9/05/2014, in accordance with the VaR recommendations under the new Basel III framework.

4.1 Models and Results.

Before presenting the estimation results obtained, we summarize the five nested models estimated as follows:

1. Daily VaR-GARCH(1,1) model.

$$r_t = \mu + \epsilon_t, \qquad \epsilon_t = \sqrt{h_t}\,\eta_t, \qquad \eta_t \sim N(0,1)$$

$$h_t = \omega_0 + \alpha_0\epsilon_{t-1}^2 + \beta_0 h_{t-1}$$

2. Daily VaR-GARCHSK(1,1,1,1) model.

$$r_t = \mu + \epsilon_t, \qquad \epsilon_t = \sqrt{h_t}\,\eta_t, \qquad \eta_t \sim GC(0,1,s_t,k_t)$$

$$h_t = \omega_0 + \alpha_0\epsilon_{t-1}^2 + \beta_0 h_{t-1}$$

$$s_t = \omega_1 + \alpha_1\eta_{t-1}^3 + \beta_1 s_{t-1}$$

$$k_t = \omega_2 + \alpha_2\eta_{t-1}^4 + \beta_2 k_{t-1}$$

3. “Augmented” daily VaR-RGARCH(1,1) model with the univariate kernel measure as the realized variance.

$$r_t = \mu + \epsilon_t, \qquad \epsilon_t = \sqrt{h_t}\,\eta_t, \qquad \eta_t \sim N(0,1)$$

$$h_t = \omega_1 + \gamma_1 x_{t-1} + \beta_1 h_{t-1}$$

$$x_t = \xi_1 + \varphi_1 h_t + \tau_1(\eta_t) + u_t, \qquad u_t \sim i.i.d.(0,\sigma_u^2)$$

$$\tau_1(\eta_t) = \tau_{11}\eta_t + \tau_{12}(\eta_t^2 - 1)$$

4. “Augmented” daily VaR-RGARCHSK(1,1,1,1) model (a mix between the León et al. and Hansen et al. models) with the univariate kernel measure as the realized variance.
