
Ph.D. Program in Mathematics for Economic Decisions
Leonardo Fibonacci School

Ph.D. Thesis

Robust asset allocation problems:

a new class of risk measure based models

SSD: MAT/09

Raffaella Recchia

Ph.D. Supervisor: Maria Grazia Scutellà


Acknowledgments

This dissertation would not have been possible without the help and encouragement of several people. First of all, I thank my supervisor Maria Grazia Scutellà. She has been a source of support, ideas and encouragement. I am very grateful to her because she has been an example for me.

I am greatly indebted to Professor Riccardo Cambini; his availability has been providential every time I needed it.

I wish to thank the referees, Professors Reha Tütüncü and David B. Brown, for their helpful comments and suggestions. I also thank Professor Tütüncü and Professor Byrne, who kindly provided me with the data used for the computational analysis.

I heartily thank my colleague and friend Francesca Salvi; we spent a lot of time together, facing troubles and sharing joys. She has often been my source of inspiration and my model. Affectionately, I want to thank Fausto and Lorenzo for their advice, their help and their friendliness, as well as Roberta, Rossana and Mauro; finally, all my colleagues of the Department of Computer Science where I worked during these years, and those of the Department of Statistics and Applied Mathematics.

A sincere thanks to my family: they have never once failed to support me when I needed it. I am grateful to my mother, whose love has been my greatest strength and has helped me go on in difficult situations; I thank my father for believing in me at all times, and my brother for his ever-present good humor and for his technical (and not only technical) support. I am grateful to Aldo, who supported me unconditionally throughout these years with patience and affection; and I thank Anna, who has always cheered me up even from afar. Finally, I thank all the friends and people who have touched my life in special ways.


Contents

1 Introduction
  1.1 Optimization under uncertainty
  1.2 Motivation and overview

2 Robust asset allocation problems: a literature review
  2.1 Asset Allocation Problems
    2.1.1 Formulations of asset allocation problems
  2.2 Robust Optimization
    2.2.1 Absolute robustness
    2.2.2 Relative robustness
    2.2.3 Adjustable robust optimization
    2.2.4 Robust mean-variance models
    2.2.5 The Sharpe ratio problem and its robust counterparts
  2.3 Robustness and risk measures
    2.3.1 Risk measures
    2.3.2 Robust VaR and CVaR models
    2.3.3 Alternative robust models
    2.3.4 Risk measures theory and robust optimization

3 A new class of robust asset allocation problems
  3.1 Risk measure based models
  3.2 The norm-portfolio models
    3.2.1 The || · ||∞ norm case
    3.2.2 The || · ||1 norm case
    3.2.3 The D-norm case
    3.2.4 The Euclidean norm case
  3.3 Coherent variant of the norm-portfolio models
    3.3.1 Some considerations about the bound π

4 Computational analysis
  4.1 Plan of the experiments
    4.1.1 Input Data
    4.1.2 In-sample analysis
    4.1.3 Out-of-sample analysis
  4.2 The first computational test
  4.3 The second computational test
  4.4 The third computational test

List of Figures

4.1 Evolution of realized return related to the norm-portfolio models for different λ values in the first subperiod
4.2 Evolution of realized return related to the norm-portfolio models for different λ values in the second subperiod
4.3 Evolution of realized return related to the norm-portfolio models for different λ values in the third subperiod
4.4 Evolution of realized return related to the norm-portfolio models for different λ values in the fourth subperiod
4.5 Evolution of realized return related to the coherent variant of the norm-portfolio models for different π values in the first subperiod
4.6 Evolution of realized return related to the coherent variant of the norm-portfolio models for different π values in the second subperiod
4.7 Evolution of realized return related to the coherent variant of the norm-portfolio models for different π values in the third subperiod
4.8 Evolution of realized return related to the coherent variant of the norm-portfolio models for different π values in the fourth subperiod
4.9 Out-of-sample performance for the CVaR model
4.10 Performance of the Tütüncü-Koenig model
4.11 Performance of the entropic model
4.12 Second data set: evolution of realized return related to the norm-portfolio models for different values of λ in the first subperiod
4.13 Second data set: evolution of realized return related to the norm-portfolio models for different values of λ in the second subperiod
4.14 Second data set: evolution of realized return related to the norm-portfolio models for different values of λ in the third subperiod
4.15 Evolution of realized return related to the coherent variant of the norm-portfolio models for different π values in the first subperiod
4.16 Evolution of realized return related to the coherent variant of the norm-portfolio models for different π values in the second subperiod
4.17 Evolution of realized return related to the coherent variant of the norm-portfolio models for different π values in the third subperiod
4.18 Out-of-sample behaviour of the CVaR model
4.19 Out-of-sample behaviour of the Tütüncü-Koenig model
4.20 Out-of-sample behaviour of the entropic model
4.21 Behaviour of the norm-portfolio model for different values of λ in the first subperiod
4.22 Behaviour of the norm-portfolio model for different values of λ in the second subperiod
4.23 Behaviour of the coherent models for each value of π and the CVaR model in the first subperiod
4.24 Behaviour of the coherent models for each value of π and the CVaR model in the second subperiod
4.25 Performances of the Tütüncü-Koenig and entropic models
4.26 Comparison among the chosen models

List of Tables

4.1 Out-of-sample mean, variance, Sharpe Ratio and portfolio turnover
4.2 Out-of-sample mean, variance, Sharpe Ratio and portfolio turnover
4.3 Out-of-sample mean, variance, Sharpe Ratio, portfolio turnover and computational time for all models chosen
4.4 Out-of-sample mean, variance, Sharpe Ratio and portfolio turnover
4.5 Out-of-sample mean, variance, Sharpe Ratio and portfolio turnover
4.6 Out-of-sample mean, variance, Sharpe Ratio, portfolio turnover and computational time
4.7 Out-of-sample mean, variance, Sharpe Ratio and portfolio turnover
4.8 Out-of-sample mean, variance, Sharpe Ratio and portfolio turnover
4.9 Out-of-sample mean, variance, Sharpe Ratio, portfolio turnover and computational time

Chapter 1

Introduction

1.1 Optimization under uncertainty

Optimization problems very often involve input data and parameters that are uncertain, due to measurement or modeling errors, or simply because they correspond to quantities that will only be realized in the future or cannot be known exactly at the time the problem must be formulated and solved. Over the years, interest in this kind of problem has increased, and a number of methodologies have been developed to solve such problems.

The oldest such method, “sensitivity analysis”, deals with uncertainty after an optimal solution has been obtained. The aim is to determine the parameter ranges over which the current solution remains optimal, assuming that one or more parameters can deviate from their nominal values.

Other methods incorporate the uncertainty directly into the computation of the optimal solution.

The Stochastic Programming approach, for example, utilizes an underlying probability model to handle the uncertainty. This approach has been successfully applied in a number of different areas, but it remains challenging to implement, both because probability distributions are often unknown in practice and because the models become intractable as the problem size grows.

Robust Optimization is a relatively recent framework in which the uncertainty is treated as deterministic. Unlike the traditional approach, robust optimization incorporates the notion that inputs have been estimated with errors. In this case, the inputs are not the traditional point forecasts, but rather uncertainty sets that include these point estimates. The optimal solution to the optimization problem is required to remain feasible for any values of the parameters that fall within these specified uncertainty sets.


In this work we will focus on the Robust Optimization approach in the context of asset allocation problems, considering, however, an extension of the standard notion of robustness that also involves some probabilistic aspects.

1.2 Motivation and overview

Many problems, particularly in the financial field, necessarily involve uncertain parameters, such as future values of security prices, interest rates and exchange rates. This is true, for example, in asset allocation problems. Our focus is to handle the uncertainty in this kind of problem using a flexible robust approach [5], in which robustness incorporates some probabilistic aspects.

The remainder is organized as follows. Chapter 2 provides an overview of the theory and the applications of robust optimization in the field of robust asset allocation problems. Starting from the classical mean-variance optimization problems introduced by Markowitz, we review several robust models proposed in the literature, from the most traditional ones to the most recent developments. We also give an overview of some of the algorithmic approaches and the related computational results.

By emphasizing the role played by convex risk measures in this setting, we then describe an innovative approach to robustness which relaxes the traditional notion of robustness by specifying not only the values of the uncertain parameters, but also their degree of feasibility.

Based on the relaxed robustness related to convex risk measures, Chapter 3 proposes a new family of models that we call norm-portfolio models, which include as special cases linear programming (LP) and second order cone programming (SOCP) problems, i.e., computationally tractable models. To define these models we focus on the penalty function appearing in the characterization of convex risk measures, and we propose models in which this function is defined in terms of general norms, in order to obtain models which are computationally tractable.

We then study a variant of the proposed family, i.e., the case in which the risk measure used is a coherent one (a sub-case of the convex one), and we conclude the chapter with considerations about some of the parameters used, which reveal an interesting link between this coherent variant of the norm-portfolio models and one of the best-known coherent risk measures studied in the literature, i.e., the Conditional Value at Risk (CVaR).

Chapter 4 provides the implementation of some of the described models on real market data. The aim of the computational analysis is to observe how different risk measures utilize the scenario information based on past history in producing a successful portfolio. We work on three different data sets, and for each of them we conduct a twofold analysis: an in-sample analysis, through which we determine suitable values of the parameters in the models, and an out-of-sample analysis, in which we present, in a certain sense, the “actual” performance of the models.

Finally, in order to conduct a preliminary comparison among several robustness types, we compare some norm-portfolio models and their coherent variant with the following models described in the literature: the classical CVaR, some standard robust models and a soft robust model. Our aim is to understand how the greater flexibility influences the optimal portfolio values. To compare the models we use the following statistics: the out-of-sample mean, variance and Sharpe Ratio of the realized return, the portfolio turnover, and the computational cost (i.e., the time needed to solve each model).

In general, the following key observations are drawn from the computational analysis. In the first experiment, comparing traditional robustness with a more relaxed robust approach, the second type always outperforms the first: by relaxing the robustness constraints in a flexible way, one can potentially gain out-of-sample performance at a not too high price. Among the flexible robustness types, the soft one (incorporated by a so-called entropic model) is without doubt the best approach in terms of almost all the criteria adopted to compare the models, even though it turns out to be the most expensive in terms of transaction costs, i.e., in terms of the variability of the assets in the portfolio, and above all in terms of computational cost.

In the second computational test, the standard robustness shows a better performance than the relaxed one in terms of variability, risk-return relation and portfolio turnover. The standard approach proves more robust than the relaxed one, although the soft robustness incorporated by the entropic model produces by far the best performance.

In the last computational test, the relaxed robustness incorporated by the norm-portfolio approach and the soft one incorporated by the entropic model provide a better performance with respect to the traditional approach. In particular, traditional robustness produces the worst performance at the highest cost in terms of portfolio turnover.

Finally, Chapter 5 presents some final comments and directions for future research.


Chapter 2

Robust asset allocation problems: a literature review

2.1 Asset Allocation Problems

Portfolio selection problems were formulated for the first time by Markowitz in 1952. They consist in allocating capital over a number of available assets in order to maximize the “return” on the investment while minimizing the “risk” using mathematical techniques. In the proposed models, the return is measured by the expected value of the random portfolio return, while the risk is quantified by the variance of the portfolio (mean-variance models).

Despite the strong theoretical support behind mean-variance models, their elegance, and the availability of efficient computer codes for solving them, these models present various practical pitfalls: optimal portfolios are not well diversified (in fact, they tend to concentrate on a small subset of the available securities) and, above all, they are often very sensitive to changes in the input parameters.

2.1.1 Formulations of asset allocation problems

Let $n$ be the number of available assets and $X$ be the set of feasible portfolios, defined as
\[
X = \left\{ x \in \mathbb{R}^n \;\middle|\; \sum_{j=1}^{n} x_j = 1,\; x_j \ge 0,\; j = 1, \dots, n \right\},
\tag{2.1}
\]

i.e., a non-empty and bounded set where no short sales are allowed.

The set $X$ in (2.1) is only one example of a feasible portfolio set; additional constraints could in fact be added to describe it.

Then, the classical Markowitz mean-variance optimization (MVO) models can be formulated as follows:

1) maximize the expected return subject to an upper limit on the variance:
\[
\max\; \mu^T x \quad \text{s.t.}\;\; x^T Q x \le \sigma,\;\; x \in X;
\tag{2.2}
\]

2) minimize the variance subject to a lower limit on the expected return:
\[
\min\; x^T Q x \quad \text{s.t.}\;\; \mu^T x \ge R,\;\; x \in X;
\tag{2.3}
\]

3) maximize the risk-adjusted expected return:
\[
\max\; \mu^T x - \lambda x^T Q x \quad \text{s.t.}\;\; x \in X,
\tag{2.4}
\]

where $\mu$ and $Q$ denote the estimated expected return vector and the covariance matrix of the given assets, respectively, and $\lambda \in \mathbb{R}$ indicates a risk-aversion parameter.

These three models are parametrized by the variance limit, the expected return limit and the risk-aversion parameter, respectively. Let us observe that while the first formulation cannot be classified as a convex QP problem, because of the nonlinear variance constraint, the latter two are convex QP problems [53, 79].
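To make the formulations concrete, here is a minimal sketch of the risk-adjusted model (2.4) using the cvxpy modeling library; the estimates `mu` and `Q` and the risk-aversion value `lam` below are illustrative placeholders, not data from this thesis.

```python
import numpy as np
import cvxpy as cp

# Illustrative inputs for n = 4 assets (placeholder values).
mu = np.array([0.08, 0.10, 0.05, 0.07])            # estimated expected returns
returns = np.random.default_rng(0).normal(size=(60, 4))
Q = np.cov(returns, rowvar=False)                  # a positive semidefinite covariance
lam = 2.0                                          # risk-aversion parameter

x = cp.Variable(4)
X = [cp.sum(x) == 1, x >= 0]                       # the feasible set (2.1): no short sales

# Model (2.4): maximize the risk-adjusted expected return.
prob = cp.Problem(cp.Maximize(mu @ x - lam * cp.quad_form(x, Q)), X)
prob.solve()
print("optimal portfolio:", np.round(x.value, 4))
```

Models (2.2) and (2.3) are obtained from the same ingredients by moving the variance or the expected return into the constraints.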

Mean-variance portfolios generated using the sample expected return and covariance matrix of the asset returns perform poorly out of sample due to estimation errors [17, 20, 25, 56].

A study by Black and Litterman [17], for example, demonstrated that small changes in the expected returns, in particular, have a substantial impact on the portfolio composition. It is indeed commonly accepted that estimation errors in the sample expected returns are much larger than those in the sample covariance matrix [24, 45]. For this reason, researchers have recently focused on the so-called minimum-variance optimization models:

\[
\min\; x^T Q x \quad \text{s.t.}\;\; x \in X,
\]
which rely solely on estimates of the covariance matrix.

However, these portfolios too are quite sensitive to estimation errors, and have unstable weights that fluctuate substantially over time. The main reason is that the sample covariance matrices on which minimum-variance portfolios are based are the maximum likelihood estimators (MLE) for normally distributed returns, and the efficiency of these estimators is highly sensitive to deviations of the asset-return distribution from the implicitly assumed normal distribution. This is particularly relevant for portfolio asset allocation, where extensive evidence shows that the empirical distribution of asset returns usually deviates from the normal distribution [28].

For this reason, incorporating the uncertainty about the accuracy of the estimates into the portfolio optimization process becomes crucial for practical applications.

A way to handle the uncertainty in minimum-variance models is to use suitable robust estimators of the portfolio return characteristics, which allow one to generate portfolios with better stability properties. A robust estimator is one which should have good properties not only for the assumed (normal) distribution, but also for distributions in a neighborhood of the assumed one (see [44]). The class of models based on robust estimators will not be reviewed in this dissertation; the interested reader is referred to [22, 62, 55] for two-step approaches and to [28, 49] for innovative one-step approaches.

2.2 Robust Optimization

Robust Optimization refers to the modeling of optimization problems with uncertain data so as to obtain a solution that is guaranteed to be “good” for all or most possible realizations of the uncertain parameters. Uncertainty in the parameters is described through uncertainty sets that contain the many possible values that may be realized for the uncertain parameters. The size of the uncertainty set is determined by the level of desired robustness.

2.2.1 Absolute robustness

Let us consider a general formulation of an optimization problem:
\[
\min\; \{ c(x) \mid x \in X \},
\tag{2.5}
\]
where $c(x)$ is a generic objective function, $x \in \mathbb{R}^n$ is a decision vector and $X$ represents the set of feasible solutions.

In general, uncertainty could affect either the constraints (constraint robustness) or the objective function (objective robustness). In the first case we seek solutions that are feasible for all possible values of the uncertain inputs; in the second we seek solutions that minimize the maximum value of the objective function over all possible realizations of the uncertain parameters inside the uncertainty set.

Let us consider an optimization problem in which the uncertainty appears in the constraints, i.e.:
\[
\min_x\; c(x) \quad \text{s.t.}\;\; G(x, p) \in K,
\tag{2.6}
\]
where, as before, $x$ and $c(\cdot)$ are the decision vector and the objective function, respectively, and $G$ and $K$ are the structural elements of the constraints, which are assumed to be certain. Let us consider an uncertainty set $U$ that contains all possible values of the uncertain parameters $p$; then a constraint-robust optimal solution can be found by solving the following problem:
\[
\min_x\; c(x) \quad \text{s.t.}\;\; G(x, p) \in K \quad \forall p \in U.
\tag{2.7}
\]

The vector $x^*$ that solves (2.7) is called a robust solution of the uncertain problem. A robust feasible solution to the robust counterpart (2.7) must, by definition, satisfy all realizations of the constraints from the uncertainty set $U$, and a robust optimal solution to (2.7) is a robust feasible solution with the best possible value of the objective function.

Now, let us observe that an optimization problem in which uncertain parameters appear in both the objective function and the constraints can easily be reformulated like (2.7), simply by introducing an auxiliary variable. Indeed, let us consider the following problem
\[
\min_x\; c(x, p) \quad \text{s.t.}\;\; G(x, p) \in K,
\tag{2.8}
\]
in which the uncertain parameters $p$ appear both in the objective function and in the constraints. Problem (2.8) is equivalent to the following one:
\[
\min_{t, x}\; t \quad \text{s.t.}\;\; t - c(x, p) \ge 0,\;\; G(x, p) \in K,
\tag{2.9}
\]
i.e., a problem in which all uncertainties are in the constraints.

There is an alternative procedure to handle uncertainty in the case in which it affects the objective function; this procedure consists in finding solutions whose worst-case behaviour is optimized. The worst-case behaviour of a solution corresponds to the value of the objective function for the worst possible realization of the uncertain data for that particular solution. Let us consider the following optimization problem:

\[
\min_x\; c(x, p) \quad \text{s.t.}\;\; x \in X.
\tag{2.10}
\]

As before, let us denote by U the uncertainty set that contains all possible values of the uncertain parameter p. Then an objective robust solution can be obtained by solving:

\[
\min_{x \in X}\; \max_{p \in U}\; c(x, p).
\tag{2.11}
\]
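When the uncertainty set U is a finite collection of scenarios, the inner maximum in (2.11) can be handled with an epigraph variable t, exactly as in the reformulation (2.9). The sketch below, a hypothetical instance with a linear cost c(x, p) = pᵀx over a simplex, is meant only to illustrate the mechanics:

```python
import numpy as np
import cvxpy as cp

# Hypothetical finite uncertainty set: three possible cost vectors p.
U = [np.array([1.0, 2.0]), np.array([2.0, 1.0]), np.array([1.5, 1.5])]

x = cp.Variable(2)
t = cp.Variable()  # epigraph variable bounding the worst-case objective

# t >= c(x, p) for every p in U is equivalent to t >= max_{p in U} c(x, p).
constraints = [t >= p @ x for p in U] + [cp.sum(x) == 1, x >= 0]
cp.Problem(cp.Minimize(t), constraints).solve()
print("robust x:", np.round(x.value, 4), "worst-case cost:", round(t.value, 4))
```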

2.2.2 Relative robustness

Measuring the worst case in an absolute way, as in the previous subsection, is quite conservative and is not consistent with the risk tolerances of many decision-makers. This is the starting observation of the relative robustness approach. The idea of relative robustness is to measure the worst case in a relative manner, i.e., relative to the best possible solution under each scenario.

In other words, the relative robustness criterion, also called minimax regret, consists in finding a solution $x^* \in X$ such that the maximum deviation between the objective function $c(x, p)$ and the optimal value function $z^*(p)$ over the scenarios is minimized, i.e., a relative-robust solution is a vector $x$ that minimizes the maximum regret:
\[
\min_{x \in X}\; \max_{p \in U}\; \left[ c(x, p) - z^*(p) \right],
\tag{2.12}
\]
where $z^*(p) = \min_{x \in X} c(x, p)$.
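For a finite scenario set, (2.12) can be solved directly: first compute z*(p) for each scenario, then minimize the maximum regret with an epigraph variable. A minimal sketch with a linear cost and made-up data:

```python
import numpy as np
import cvxpy as cp

scenarios = [np.array([1.0, 3.0]), np.array([3.0, 1.0])]  # hypothetical p vectors

def feasible(x):
    return [cp.sum(x) == 1, x >= 0]  # the set X

# Inner problems: z*(p) = min_{x in X} p^T x, one per scenario.
z_star = []
for p in scenarios:
    x = cp.Variable(2)
    inner = cp.Problem(cp.Minimize(p @ x), feasible(x))
    inner.solve()
    z_star.append(inner.value)

# Outer problem: minimize the maximum regret c(x, p) - z*(p).
x, t = cp.Variable(2), cp.Variable()
cons = [t >= p @ x - z for p, z in zip(scenarios, z_star)] + feasible(x)
cp.Problem(cp.Minimize(t), cons).solve()
print("relative-robust x:", np.round(x.value, 4), "max regret:", round(t.value, 4))
```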

While intuitively attractive, relative robust formulations can also be more difficult to solve than the standard absolute-robust formulations. Indeed, since $z^*(p)$ is the optimal-value function and involves an optimization problem itself, problem (2.12) is a three-level optimization problem, as opposed to the two-level problems in absolute-robust formulations. Furthermore, the optimal value function $z^*(p)$ is rarely available in analytic form and is typically hard to compute.

2.2.3 Adjustable robust optimization

The robust optimization formulations described above assume that uncertain parameters will not be observed until all decision variables are determined, and therefore do not allow for recourse actions that may be based on realized values of some of these parameters. However, multi-period decision models, for example, involve uncertain parameters some of which are revealed during the decision process. Adjustable robust optimization (ARO) formulations model these decision environments and allow recourse actions. These models are closely related to stochastic programming formulations with recourse. Let us consider a two-stage linear optimization problem whose first-stage decision variables $x_1$ need to be determined now, while the second-stage decision variables $x_2$ can be chosen after the uncertain parameters of the problem, $A_1$, $A_2$ and $b$, are realized:
\[
\min_{x_1, x_2}\; \left\{ c^T x_1 \;:\; A_1 x_1 + A_2 x_2 \le b \right\}.
\tag{2.13}
\]
Let $U$ denote the uncertainty set for the parameters $A_1$, $A_2$ and $b$.

In contrast to the standard constraint-robust optimization formulation, in which both sets of variables must be chosen before the uncertain parameters can be observed and therefore cannot depend on those parameters, the adjustable robust optimization formulation allows the second-stage variables $x_2$ to be chosen on the basis of the realized values of the uncertain parameters.

As a result, the adjustable robust counterpart problem is given as follows:
\[
\min_{x_1}\; \left\{ c^T x_1 \;:\; \forall (A_1, A_2, b) \in U,\; \exists x_2 \equiv x_2(A_1, A_2, b) \;:\; A_1 x_1 + A_2 x_2 \le b \right\}.
\]

Soyster [75] was one of the first researchers to investigate explicit approaches to robust optimization.

Then, Ben-Tal and Nemirovski in [7, 8], as well as, separately, El Ghaoui, Oustry and Lebret in [32, 33], continued to investigate this innovative approach, focusing also on computational issues.

Robust optimization techniques have been applied in several fields: telecommunications, control theory, network flow problems, engineering design, supply chain problems and finance.

Throughout this work we will focus on models characterized by robustness of the absolute type, applied to the field of financial optimization problems, in which future values of security prices, interest rates and exchange rates are not known in advance, but can only be forecast or estimated.

2.2.4 Robust mean-variance models

Recalling the portfolio asset allocation problems reviewed in Section 2.1.1, let us assume now that the uncertain expected return vector $\mu$ and the uncertain covariance matrix $Q$ of the asset returns belong to uncertainty sets having the following interval form:
\[
U_\mu = \left\{ \mu : \mu^L \le \mu \le \mu^U \right\}
\quad \text{and} \quad
U_Q = \left\{ Q : Q \succeq 0,\; Q^L \le Q \le Q^U \right\},
\tag{2.14}
\]
where the relation $\le$ is intended to hold componentwise (both for vectors and for matrices), while the restriction $Q \succeq 0$ is a necessary condition for $Q$ to be a covariance matrix, i.e., we assume that $Q$ is a positive semidefinite matrix.

Based on the uncertainty sets introduced above, in [79] Tütüncü and Koenig formulated robust counterparts of problems (2.4) and (2.3) by exploiting formulations previously introduced in [41, 43]. In order to generate the extreme values of the intervals (2.14), the authors use percentile techniques, but several other methods could be used for this purpose, both using the available time series and by generating estimates of the needed parameters.

The first robust model looks for a feasible portfolio $x$ whose minimum risk-adjusted expected return, as both parameters vary in the given uncertainty sets, is the maximum one among the feasible portfolios. The second robust model looks for a feasible portfolio which guarantees the lower limit $R$ on the expected return also in the worst case, i.e., for the worst realization of the parameter $\mu$ in $U_\mu$, and which minimizes the variance under the worst realization of the parameter $Q$ in the uncertainty set $U_Q$. The resulting robust counterparts are therefore:
\[
\max_{x \in X}\; \left( \min_{\mu \in U_\mu,\, Q \in U_Q}\; \mu^T x - \lambda x^T Q x \right)
\tag{2.15}
\]
and
\[
\min\; \max_{Q \in U_Q}\; x^T Q x
\quad \text{s.t.}\;\; \min_{\mu \in U_\mu}\; \mu^T x \ge R,\;\; x \in X.
\tag{2.16}
\]

Under certain simplifying assumptions, namely when $Q^U$ is a positive semidefinite matrix, these robust problems can be reduced to pure MVO problems. In such a special case, the best asset allocation can in fact be determined by first fixing the worst-case input data in the considered uncertainty sets, that is $\mu^L$ for the uncertain mean return vector $\mu$ and $Q^U$ for the uncertain covariance matrix $Q$, and then solving the resulting QP problems [79]. Without these assumptions, it is not possible to solve the robust asset allocation problems as standard QPs. In the general case, the robust counterparts (2.15) and (2.16) can be solved using a nonlinear saddle-point formulation that involves semidefinite constraints [43].
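A minimal sketch of this reduction, with all inputs hypothetical: under the stated assumption (and with no short sales, so that x ≥ 0 makes Q^U the worst case for the variance term), the robust problem (2.15) collapses to a standard QP with μ^L and Q^U fixed in advance.

```python
import numpy as np
import cvxpy as cp

n, lam = 3, 1.5
mu_L = np.array([0.04, 0.06, 0.05])        # lower interval bounds on expected returns
rng = np.random.default_rng(1)
Q_U = np.cov(rng.normal(size=(50, n)), rowvar=False) * 1.2  # upper-bound covariance (PSD here)

x = cp.Variable(n)
# Worst-case data fixed in advance: the robust counterpart (2.15) as a plain QP.
prob = cp.Problem(cp.Maximize(mu_L @ x - lam * cp.quad_form(x, Q_U)),
                  [cp.sum(x) == 1, x >= 0])
prob.solve()
print("robust portfolio:", np.round(x.value, 4))
```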

Let $\Psi_\lambda(x, \mu, Q) = \mu^T x - \lambda x^T Q x$ be the objective function of problem (2.15), with $x \in X$ and $(\mu, Q) \in U = U_\mu \times U_Q$. For fixed $(\mu, Q) \in U$ and given $\lambda \ge 0$, $\Psi_\lambda$ is a concave quadratic function of $x$. On the other hand, for fixed $x$ and $\lambda$, the function $\Psi_\lambda$ is a linear function of $\mu$ and $Q$. Moreover, $X$ and $U$ are non-empty and bounded sets. Then, from [43] (see Lemma 2.3), the optimal values of the following pair of primal and dual problems
\[
\max_{x \in X}\; \left( \min_{(\mu, Q) \in U}\; \Psi_\lambda(x, \mu, Q) \right)
\qquad
\min_{(\mu, Q) \in U}\; \left( \max_{x \in X}\; \Psi_\lambda(x, \mu, Q) \right)
\tag{2.17}
\]
are equal, and they are attained at a saddle-point of the function $\Psi_\lambda(x, \mu, Q)$. That is, there exist a vector $\bar{x} \in X$ and a vector-matrix pair $(\bar{\mu}, \bar{Q}) \in U$ such that
\[
\Psi_\lambda(x, \bar{\mu}, \bar{Q}) \le \Psi_\lambda(\bar{x}, \bar{\mu}, \bar{Q}) \le \Psi_\lambda(\bar{x}, \mu, Q)
\quad \forall x \in X,\; (\mu, Q) \in U.
\tag{2.18}
\]
Moreover, $\bar{x} \in X$ and $(\bar{\mu}, \bar{Q}) \in U$ solve both problems in (2.17). Therefore, the robust counterpart (2.15) can be solved using the literature on saddle-point problems and the related algorithmic approaches, such as the one in [43]. The robust counterpart (2.16) can be solved in a similar way.

From a computational point of view, the authors describe an algorithm to generate the so-called robust efficient frontier. Given $\mu$, $Q$ and $X$, the efficient frontier is the collection of portfolios that are Pareto-optimal solutions to problem (2.16) as $R$ varies (or to problem (2.15) as $\lambda$ varies) [53]. Since the methods proposed in the literature to generate the efficient frontier, such as the method of critical lines in [53], are either not available or not implemented for the robust case, the authors propose and implement the following approach, which generates a discrete approximation of the robust efficient frontier. First they determine the robust efficient portfolios with the lowest and highest expected returns, discretize the range between these two extremes to obtain a finite number of levels of the expected return, and then solve problem (2.16) for each such level. To obtain the robust efficient portfolios with the lowest and highest expected returns, as well as to solve problem (2.16) for each intermediate value, they use the saddle-point algorithm developed by Halldórsson and Tütüncü in [43], which is an interior-point path-following method with computationally attractive polynomial-time convergence guarantees.

The authors apply the robust asset allocation methods to two market data sets, and compare the robust efficient frontier to the classical efficient frontier generated by solving model (2.3). The first data set is composed of 5 asset classes, spanning the period January 1979 to July 2002, for a total of 283 months. The second data set addresses 8 asset classes over the period July 1983 to July 2002, for a total of 229 months. The computational analysis demonstrates that the portfolios generated using the proposed robust techniques have a significantly better worst-case behaviour than the classical MVO portfolios, i.e., the robust approaches guarantee a risk reduction for worst-case scenarios. For example, in one of the considered data sets, the robust efficient portfolio achieving a 7.5% worst-case annualized expected return has an 8% standard deviation, while the classical efficient portfolio with a 7.5% worst-case annualized expected return has approximately a 12% standard deviation, indicating that it is significantly riskier. Moreover, the generated robust portfolios show more stability over time, that is, they remain relatively unchanged over long periods of time. The overall conclusion of the authors is that, “by directly addressing some of the weaknesses of classical MVO models, the proposed robust models provide a valuable asset allocation vehicle to conservative investors”.

An alternative method for modeling the uncertainty was proposed in [41] using the factor model.

Let us consider a standard factor model for representing the return vector $r \in \mathbb{R}^n$, that is
\[
r = \mu + V^T f + \varepsilon,
\]
where, according to the previous notation, $\mu$ is the expected return vector; $f \in \mathbb{R}^m$ is the vector of random returns of the $m$ ($< n$) factors that drive the market, $V \in \mathbb{R}^{m \times n}$ denotes the matrix of factor loadings, and $\varepsilon$ is the vector of residual returns. The parameters $(\mu, V)$ are estimated by linear regression: given market data consisting of samples of asset return vectors and the corresponding factor returns, the linear regression procedure computes the least squares estimates $(\mu_0, V_0)$ of $(\mu, V)$.

The covariance matrix of the returns, $Q$, can be expressed as
\[
Q = V^T F V + D,
\]
where $F$ is the covariance matrix of the factor returns and $D$ is the diagonal matrix of the error term variances.

The individual elements $d_i$ of the diagonal matrix $D$ are assumed to belong to intervals $[\underline{d}_i, \bar{d}_i]$, i.e., the uncertainty set $S_d$ for the error matrix $D$ is given by:
\[
S_d = \left\{ D : D = \mathrm{diag}(d),\; \underline{d}_i \le d_i \le \bar{d}_i,\; i = 1, \dots, n \right\}.
\]

On the other hand, the uncertainty set for the matrix of factor loadings $V$ is the following ellipsoidal set:
\[
S_v = \left\{ V : V = V_0 + W,\; \|W_i\|_G \le \rho_i,\; i = 1, \dots, n \right\},
\]
where $V_0$ is the least squares estimate of $V$, the $\rho_i$ are given bounds, and $W_i$ denotes the $i$-th column of $W$, for $i = 1, \dots, n$, while $\|w\|_G = \sqrt{w^T G w}$ denotes the Euclidean (elliptic) norm of $w$ with respect to a symmetric positive definite matrix $G$. (A way to define $G$ is related to probabilistic guarantees on the likelihood that the actual realization of the uncertain coefficients will lie in the ellipsoidal uncertainty set $S_v$; specifically, the definition of the matrix $G$ can be based on the data used to produce the least squares estimates.) Finally, the expected return vector $\mu$ is assumed to lie in the uncertainty set $S_m$ given by:
\[
S_m = \left\{ \mu : \mu = \mu_0 + \xi,\; |\xi_i| \le \gamma_i,\; i = 1, \dots, n \right\},
\]
where the $\gamma_i$ are given bounds (i.e., each component of $\mu$ is assumed to lie within a certain interval).

According to the factor model, the return of a portfolio $x$ is
\[
r_x = r^T x = \mu^T x + f^T V x + \varepsilon^T x,
\]
where $\mu$ and $V$ are the uncertain parameters. Similarly, its variance is
\[
x^T Q x = x^T V^T F V x + x^T D x,
\]
where $V$ and $D$ are assumed to be uncertain.

By considering the uncertainty sets $S_d$, $S_v$ and $S_m$ defined above, Goldfarb and Iyengar derive the following robust analog of Markowitz's mean-variance optimization problem (2.3):
\[
\min\; \max_{V \in S_v,\, D \in S_d}\; x^T Q x
\quad \text{s.t.}\;\; \min_{\mu \in S_m}\; E(r_x) \ge R,\;\; x \in X,
\tag{2.19}
\]
where $E(r_x)$ represents the expected value of the portfolio return.
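For concreteness, a short numpy sketch of how the factor structure assembles the covariance matrix Q = VᵀFV + D and the resulting portfolio variance; all dimensions and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 6                                   # m factors driving n assets, m < n
V = rng.normal(size=(m, n))                   # factor loading matrix
F = np.cov(rng.normal(size=(100, m)), rowvar=False)  # factor covariance (m x m)
D = np.diag(rng.uniform(0.01, 0.05, size=n))  # diagonal matrix of residual variances

Q = V.T @ F @ V + D                           # asset covariance under the factor model

x = np.full(n, 1.0 / n)                       # an equally weighted portfolio
print("portfolio variance:", x @ Q @ x)       # x^T V^T F V x + x^T D x
```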

In the same way, the authors derive the following robust counterpart of problem (2.2):
\[
\max\; \min_{\mu \in S_m}\; E(r_x)
\quad \text{s.t.}\;\; \max_{V \in S_v,\, D \in S_d}\; x^T Q x \le \sigma,\;\; x \in X.
\tag{2.20}
\]

Under the hypothesis of normality of $r_x$ (it is in fact assumed that $r_x$ follows a normal distribution $N(\mu^T x,\, x^T (V^T F V + D) x)$), and considering the uncertainty sets defined above, the authors prove that the robust optimization problem (2.19) can be reduced to a second order cone programming problem (SOCP) that can be solved very efficiently using interior point algorithms [1, 51, 60, 76].

In fact, both the worst-case and the practical computational effort required to solve a second order cone programming problem are comparable to those required to solve a convex quadratic program of similar size and structure; in practice, the computational effort required to solve these robust portfolio selection problems is comparable to that required to solve the classical Markowitz mean-variance portfolio selection problems. The reduction to a second order cone programming problem is subsequently shown for the robust counterpart (2.20) as well.

In [41], the authors further improve the factor model by addressing uncertainty also in the factor covariance matrix, and show that, for natural classes of uncertainty sets, all robust counterparts continue to be second order cone programming problems. They also show that these natural classes of uncertainty sets correspond to the confidence regions associated with the maximum likelihood estimation of the covariance matrix. Finally, computational experiments are performed using the Sharpe ratio variant of the proposed robust framework. This variant and the related computational results will be reviewed in Section 2.2.5.

2.2.5 The Sharpe ratio problem and its robust counterparts

A way to also incorporate assets considered essentially riskless into asset allocation models is to study the following optimization problem, called the Sharpe Ratio Problem:
\[
\max\; \frac{\mu^T x - r_f}{\sqrt{x^T Q x}}
\quad \text{s.t.}\;\; x \in X,
\]
where $r_f$ represents the known return on a riskless asset. The Sharpe ratio, i.e.,
\[
h(x) = \frac{\mu^T x - r_f}{\sqrt{x^T Q x}},
\]
is a measure that evaluates the excess return per unit of risk.

Since this maximization problem has a nonlinear and nonconcave objective function, and may therefore be difficult to solve directly, in [41] the authors proposed an elegant argument to reformulate it as a convex minimization problem. Everything rests on the observation that $e^T x = 1$ whenever $x \in X$ ($e$ represents an $n$-dimensional vector of ones), since the proportions invested in all securities must sum to 1. Therefore the Sharpe ratio $h(x)$ can be rewritten as a homogeneous function of $x$, say $g(x)$, as follows:
\[
h(x) = \frac{\mu^T x - r_f}{\sqrt{x^T Q x}} = \frac{(\mu - r_f e)^T x}{\sqrt{x^T Q x}} =: g(x) = g\!\left(\frac{x}{k}\right), \quad k > 0.
\tag{2.21}
\]
The term $\mu - r_f e$ is the excess return over the risk-free rate. In [41], the authors proved that when $X$ has the form (2.1), the optimal solution is not affected if the normalization constraint $e^T x = 1$ is replaced with the constraint $(\mu - r_f e)^T x = 1$.

In [79] the authors proved that a similar reduction can be achieved even when $X$ is not of the form (2.1), as long as $x \in X$ implies $e^T x = 1$. Under this assumption, and using the observation that $h(x) = g(x)$ and $g(x)$ is homogeneous, a portfolio $x^*$ with the maximum Sharpe ratio can be found by solving the following problem:
\[
\min\; x^T Q x
\quad \text{s.t.}\;\; (\mu - r_f e)^T x = 1,\;\; (x, k) \in X^+,
\tag{2.22}
\]
where $X^+$ is a cone living in a space of one dimension higher than $X$, defined as follows:
\[
X^+ = \left\{ (x, k) \in \mathbb{R}^n \times \mathbb{R} \;\middle|\; k > 0,\; \frac{x}{k} \in X \right\} \cup \{(0, 0)\}.
\tag{2.23}
\]
Moreover, the normalizing constraint can be relaxed to $(\mu - r_f e)^T x \ge 1$ by recognizing that this constraint will always be tight at an optimal solution. Based on this observation, in [79] the following robust counterpart of the relaxed maximum Sharpe ratio problem is proposed, where the sets $U_\mu = \{\mu : \mu^L \le \mu \le \mu^U\}$ and $U_Q = \{Q : Q \succeq 0,\; Q^L \le Q \le Q^U\}$ are those previously defined:
\[
\min\; \max_{Q \in U_Q}\; x^T Q x
\quad \text{s.t.}\;\; (x, k) \in X^+,\;\; \min_{\mu \in U_\mu}\; (\mu - r_f e)^T x \ge 1.
\tag{2.24}
\]

This robust counterpart, too, can be solved by reducing it to a second order cone programming problem [41], [79].
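A minimal sketch of the non-robust reformulation (2.22) for X as in (2.1), using cvxpy; the inputs are invented for illustration. For this X, (x, k) ∈ X⁺ amounts to x ≥ 0 with k = eᵀx > 0, so the optimal x is rescaled by eᵀx at the end to recover a portfolio whose weights sum to one.

```python
import numpy as np
import cvxpy as cp

n, rf = 4, 0.01
mu = np.array([0.08, 0.12, 0.06, 0.09])
Q = np.cov(np.random.default_rng(3).normal(size=(80, n)), rowvar=False)
e = np.ones(n)

x = cp.Variable(n)
# (x, k) in X+ for the no-short-sales set (2.1): x >= 0 and k = e^T x > 0.
prob = cp.Problem(cp.Minimize(cp.quad_form(x, Q)),
                  [(mu - rf * e) @ x >= 1,   # relaxed normalization, tight at the optimum
                   x >= 0])
prob.solve()
w = x.value / np.sum(x.value)                # rescale: w = x / k with k = e^T x
print("max Sharpe portfolio:", np.round(w, 4))
print("Sharpe ratio:", (mu @ w - rf) / np.sqrt(w @ Q @ w))
```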

Let us consider the computational results concerning the robust counterpart of problem (2.22), as reported in [41]. As in the computational experiments performed in [79] for problems (2.15) and (2.16), the aim is to contrast the performance of the classical portfolio selection strategies with that of the robust portfolio selection strategies. Two types of computational tests are conducted, on simulated data and on real market data, selecting the classical and robust portfolios via the corresponding maximum Sharpe ratio problems.

Consider first the simulated data. The authors perform several independent runs, varying a confidence threshold ω, used to generate the uncertainty sets, from 0.01 to 0.95; if ω is chosen very high, the uncertainty sets will be very large, leading to very conservative portfolios, while if ω is chosen too low, the portfolio choice will not be robust enough. They then compare the mean Sharpe ratio of the robust portfolios to that of the classical portfolios, as well as the worst-case Sharpe ratio of the robust portfolios to that of the classical portfolios, where the worst-case Sharpe ratio of a portfolio is the minimum Sharpe ratio as the parameters vary in the uncertainty sets $S_d$, $S_v$ and $S_m$ introduced in Section 2.2.4.

The main result is that the mean performance of the robust portfolios does not significantly degrade as ω increases. On the other hand, the worst-case performance of the robust portfolios is about 200% better than that of the classical portfolios. Moreover, robust portfolios are able to withstand noisy data considerably better than classical portfolios.

Concerning the real data, the authors perform an out-of-sample analysis in order to compare the classical and the robust strategies on a universe of 43 assets and 5 factors, spanning the period January 1997 to December 2000. Specifically, they divide the whole time series into investment periods of length 90 days; for each investment period they estimate the covariance matrix and the factor matrix and then, after setting all the other parameters required by their model, they compute the robust and the classical portfolios by solving the robust and the classical maximum Sharpe ratio problems, respectively. Finally, they compute the “actual” return of the generated portfolios by evaluating the return of the portfolios generated at period t with the return data available at t+1, and plot the relative performance of the robust strategy with respect to the classical strategy. For the particular data sequences used, the robust strategy appears to be clearly superior, i.e., it generates a larger return at a smaller computational cost when ω is sufficiently large. On the other hand, for small values of ω, i.e., when the portfolios generated through the robust strategy are not robust enough, no discernible trend is observed by the authors.

2.3 Robustness and risk measures

2.3.1 Risk measures

Recently, there has been an increasing interest in defining quantitative methods for assessing the risk of financial positions. Roughly speaking, it is possible to distinguish between two types of risk measures: dispersion and downside measures [35]. Dispersion measures consider both positive and negative deviations from the expected return, and treat these deviations as equally risky. A well-known dispersion measure is the portfolio standard deviation (and the portfolio variance), used in the previously reviewed minimum-variance and mean-variance optimization models. Downside risk measures, on the other hand, traditionally address the probability that the portfolio return falls below a minimal acceptable level.

The theory of risk measures has made significant progress in recent years. Here we introduce some advanced concepts of risk measures, together with related mean-risk optimization models.

Let Y be a real-valued function on a set Ω of possible scenarios that represents the return from an investment portfolio over a fixed period of time. A negative value for Y indicates loss. Then, a quantitative measure of risk can be modelled as a mapping ρ from the space of these return functions into the real line. This is the classical definition of measure of risk, as provided in [2] in their seminal contribution. In that paper, the authors have initiated a systematic analysis of the concept of risk measure, by formulating certain axioms which should be satisfied by any reasonable measure of risk. See also [30] for a review of risk measures in the framework of setting solvency capital requirements for a risky business.

Here we introduce the definition of “monetary measure of risk” as in [38], as well as the corresponding definitions of convex and coherent risk measures. Starting from these definitions, Föllmer and Schied in [38] provided some risk measure characterizations that have been used, quite recently, to model interesting robust optimization models for asset allocation problems.

Let us denote by Φ the set of all bounded measurable functions on a set of scenarios Ω, and by P the set of all probability measures on Ω. For any $Y, Z \in \Phi$, the shorthand notation $Y \le Z$ means $Y(\omega) \le Z(\omega)$ for all $\omega \in \Omega$. We give the following definitions [38]:

Definition 2.1. A mapping ρ : Φ → R is called a monetary measure of risk if it satisfies the following conditions for all Y, Z ∈ Φ:

Monotonicity: if $Y \le Z$, then $\rho(Y) \ge \rho(Z)$.

Cash invariance: if $m \in \mathbb{R}$, then $\rho(Y + m) = \rho(Y) - m$.

The financial meaning of monotonicity is that the downside risk of a position is reduced if the payoff profile is increased. Cash invariance, also called translation invariance, is motivated by the interpretation of $\rho(Y)$ as a capital requirement: if the amount $m$ is added to the position and invested in a risk-free manner, the capital requirement is reduced by the same amount.

Definition 2.2. A monetary measure of risk ρ : Φ → R is called a convex measure of risk if it satisfies

Convexity: $\rho(\lambda Y + (1 - \lambda) Z) \le \lambda \rho(Y) + (1 - \lambda) \rho(Z)$ for all $0 \le \lambda \le 1$ and all $Y, Z \in \Phi$.

The axiom of convexity states that diversification should not increase the risk.

Definition 2.3. A convex measure of risk ρ is called a coherent risk measure if it satisfies²

Positive homogeneity: if $\lambda \ge 0$, then $\rho(\lambda Y) = \lambda \rho(Y)$.

If a measure of risk ρ is positively homogeneous, then it is normalized, i.e., ρ(0) = 0.

In [2], Artzner et al. provided the following characterization of coherent risk measures (also recalled in [38]):

Theorem 2.3.1. A functional $\rho : \Phi \to \mathbb{R}$ is a coherent measure of risk if and only if there exists a subset $Q \subseteq P$ such that
\[
\rho(Y) = \sup_{q \in Q}\; E_q[-Y], \quad Y \in \Phi,
\tag{2.25}
\]
where $E_q[-Y]$ denotes the mean value of $-Y$ (i.e., the expected loss) with respect to the probability $q$.

Such a characterization can be generalized to convex measures of risk in the following way, [36]:

² In [2], a mapping $\rho : \Phi \to \mathbb{R}$ is called a coherent measure of risk if it satisfies the four axioms of translation invariance, positive homogeneity, monotonicity and subadditivity, where subadditivity states that $\rho(Y + Z) \le \rho(Y) + \rho(Z)$ for all $Y, Z \in \Phi$; convexity is a consequence of these axioms.


Theorem 2.3.2. Suppose that Ω is a finite set.³ Then $\rho : \Phi \to \mathbb{R}$ is a convex measure of risk if and only if there exists a penalty function $\alpha : P \to (-\infty, +\infty]$ such that
\[
\rho(Y) = \sup_{q \in P}\; \left( E_q[-Y] - \alpha(q) \right).
\tag{2.26}
\]
The function $\alpha$ satisfies $\alpha(q) \ge -\rho(0)$ for any $q \in P$, and it can be taken to be convex and lower semicontinuous on $P$.

³ There also exists a generalization of this theorem to the case where Ω is an infinite set [36].

In the above characterization, the function α serves to assign a possibly different weight to the probabilities in P, by suitably penalizing some of them. In particular, by choosing α(q) = 0 for all q ∈ Q, and +∞ otherwise, the characterization of coherent measures of risk stated by Theorem 2.3.1 is obtained as a special case. The characterization expressed by Theorem 2.3.2 has recently been exploited to propose more flexible robust models for asset allocation problems; this will be the subject of later sections. Here let us review some well-known measures of risk together with the related mean-risk optimization models.

In the literature, a well-known measure of risk is the Value at Risk (VaR), developed by engineers at J.P. Morgan. VaR represents the predicted maximum loss with a specified probability level over a certain period of time. Let $f_x(\omega)$ denote the loss function of a portfolio $x \in X$ when $\omega$ is the realization of some random events (so, $f_x(\omega) = -Y$ according to our previous notation). Then, the α-VaR risk measure of $x$ is defined as follows:
\[
VaR_\alpha(x) = \min\; \left\{ \gamma : \mathrm{Prob}\left( f_x(\omega) \ge \gamma \right) \le 1 - \alpha \right\},
\]
where α is a given probability level, and Prob denotes the probability with respect to a given reference probability measure on the set of scenarios Ω, say $p \in P$. In other words, VaR is defined as the minimum level γ such that the probability that the portfolio loss exceeds γ is less than or equal to $1 - \alpha$.
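With a finite set of equally likely loss scenarios, this definition essentially reads off as an empirical quantile of the loss distribution. A minimal historical-simulation sketch (the loss numbers are made up):

```python
import numpy as np

alpha = 0.95
# Scenario losses f_x(w) for a fixed portfolio x, one value per scenario.
losses = np.array([-0.02, 0.01, 0.03, -0.01, 0.08, 0.05, -0.03, 0.12])

# Smallest scenario loss gamma with Prob(loss > gamma) <= 1 - alpha:
# the empirical alpha-quantile, taken at an actual data point.
var_alpha = np.quantile(losses, alpha, method="higher")
print("historical 95%-VaR:", var_alpha)
```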

Some practical and computational issues related to VaR are discussed in [39], where the authors describe a method for calculating the portfolio giving the smallest VaR among those which yield at least a specified expected return. The method consists in approximating the historic VaR by a “smoothed” VaR which filters out local irregularities; it is therefore based on historic simulation. Several other approaches to VaR optimization are used in practice, and in some contexts VaR proves to be a suitable measure of risk, as in [59].



However, VaR has also some undesirable properties as a risk measure. First, if studied in the framework of coherent risk measures [2], it lacks subadditivity (and therefore convexity). An additional difficulty with VaR may be in its computation and optimization. In fact, when VaR is calculated from generated scenarios, it turns out to be a nonsmooth and nonconvex function of the positions in the investment portfolio.

Another criticism of VaR is that it pays no attention to the magnitude of losses beyond the VaR value. This and other undesirable features of VaR led to the development of alternative risk measures. One well-known modification of VaR is the Conditional Value at Risk (CVaR), which measures the expected loss exceeding VaR. With respect to the classification of risk measures previously reviewed, CVaR qualifies as a coherent risk measure (it is also a concave distortion risk measure), whereas VaR is a distortion risk measure but is not coherent (and therefore is not a concave distortion risk measure).

Specifically, given a probability level α, the α-CVaR associated with a portfolio $x$ is defined as follows:
\[
CVaR_\alpha(x) = \frac{1}{1 - \alpha} \int_{f_x(\omega) \ge VaR_\alpha(x)} f_x(\omega)\, p(\omega)\, d\omega,
\]
where, as before, $f_x(\omega)$ denotes the loss function when the portfolio $x$ is chosen from the set $X$ of feasible portfolios and $\omega$ is the realization of the random events, while $p(\omega)$ denotes the reference probability of $\omega$.

In [70], the authors showed that minimizing CVaR can be achieved by minimizing a more tractable auxiliary function, without predetermining the corresponding VaR first. They introduced the following simpler auxiliary function:
\[
F_\alpha(x, \gamma) = \gamma + \frac{1}{1 - \alpha} \int_{f_x(\omega) \ge \gamma} \left( f_x(\omega) - \gamma \right) p(\omega)\, d\omega.
\tag{2.27}
\]

This formulation can be written in the following equivalent way:
\[
F_\alpha(x, \gamma) = \gamma + \frac{1}{1 - \alpha} \int_\Omega \left( f_x(\omega) - \gamma \right)^+ p(\omega)\, d\omega,
\tag{2.28}
\]

where $a^+ = \max\{a, 0\}$. The authors showed that $F_\alpha(x, \gamma)$ satisfies some interesting properties, such that minimizing CVaR is equivalent to minimizing the auxiliary function $F_\alpha(x, \gamma)$, i.e.:
\[
\min_{x \in X}\; CVaR_\alpha(x) = \min_{x \in X,\, \gamma}\; F_\alpha(x, \gamma).
\]

Moreover, if $f_x(\omega)$ is a convex (linear) function of the portfolio variables $x$, then $F_\alpha(x, \gamma)$ is also a convex (linear) function of $x$. In this case, the minimization problem is a smooth convex optimization problem that can be solved using well-known optimization techniques. In particular, the authors formulated the problem in the discrete case, obtaining a tractable formulation. Assume that the set of scenarios Ω comprises $N$ scenarios $\omega_1, \dots, \omega_N$, and that all scenarios have the same probability (so, $p(\omega) = \frac{1}{N}$ for all ω). In this case the auxiliary function $F_\alpha(x, \gamma)$ can be approximated by the following function:
\[
\tilde{F}_\alpha(x, \gamma) = \gamma + \frac{1}{(1 - \alpha) N} \sum_{k=1}^{N} \left( f_x(\omega_k) - \gamma \right)^+.
\tag{2.29}
\]

Hence, the problem $\min_{x \in X} CVaR_\alpha(x)$ can be approximated by replacing $F_\alpha(x, \gamma)$ with $\tilde{F}_\alpha(x, \gamma)$, obtaining the following formulation:
\[
\begin{aligned}
\min_{x, z, \gamma}\;\; & \gamma + \frac{1}{(1 - \alpha) N} \sum_{k=1}^{N} z_k \\
\text{s.t.}\;\; & z_k \ge 0, \quad k = 1, \dots, N \\
& z_k \ge f_x(\omega_k) - \gamma, \quad k = 1, \dots, N \\
& x \in X,
\end{aligned}
\]
where the $z_k$ are artificial variables used to model $(f_x(\omega_k) - \gamma)^+$, $k = 1, \dots, N$.

This formulation of CVaR usually results in convex programs, and even linear programs when $f_x(\omega)$ is a linear loss function. Thus, Rockafellar and Uryasev's work opened the door to the application of CVaR to financial optimization and risk management in practice.

In the following section we review some robust models that are based on VaR and CVaR.

2.3.2 Robust VaR and CVaR models

According to the previous definition, the portfolio α-VaR optimization problem consists in determining a portfolio $x$ which minimizes the VaR risk measure with respect to a given confidence level α, i.e.:
\[
\min\; \gamma
\quad \text{s.t.}\;\; \mathrm{Prob}\left( f_x(\omega) \ge \gamma \right) \le 1 - \alpha,\;\; x \in X.
\tag{2.30}
\]

Assuming that the asset returns are multivariate normally distributed (Normal VaR), in [41] the authors derived a robust counterpart of this formulation using the factor model to describe the future returns, and assuming that there are errors in the estimation of the expected return vector and of the covariance matrix. No computational results concerning this robust counterpart have been provided by the authors.

However, in some contexts it may not be realistic to assume that the asset returns follow a multivariate normal distribution, as observed before. For this reason, El Ghaoui et al. in [34] proposed an alternative formulation. The authors assume that the distribution of the returns is partially known, in the sense that only bounds on the first two moments (mean and covariance matrix) are available. In particular, let Q denote the set of allowable distributions. For example, Q could represent the set of Gaussian distributions with mean $\hat{x}$ and covariance matrix Γ, where $\hat{x}$ and Γ are only known up to given componentwise bounds.

Then, given a probability level α ∈ (0, 1) and a portfolio x ∈ X, the authors introduce the notion of Worst-case VaR with respect to Q as follows:

\[
\min\; \gamma
\quad \text{s.t.}\;\; \sup_{q \in Q}\; \mathrm{Prob}_q\left( f_x(\omega) \ge \gamma \right) \le 1 - \alpha,
\tag{2.31}
\]
where $\mathrm{Prob}_q$ denotes here the probability with respect to $q$. Calling $V_Q(x)$ the Worst-case VaR, i.e., the optimal value of problem (2.31), the corresponding robust portfolio optimization problem is defined as follows:
\[
\min\; V_Q(x) \quad \text{s.t.}\;\; x \in X.
\tag{2.32}
\]

In [34], El Ghaoui et al. prove that, for a large class of allowable probability distribution sets Q, problem (2.32) can be solved via a semidefinite programming (SDP) reformulation. Specifically, the authors examine two kinds of bounds, polytopic and componentwise, as well as uncertainty structures arising from factor models. A numerical example is then provided which compares the worst-case VaR of a nominal portfolio with that of its robust counterpart, computed via SDP. The results show that, if one chooses the nominal portfolio, data errors can have a dramatic impact on the VaR; taking the uncertainty into account via problem (2.32), on the other hand, dampens such a negative effect. For a review of SDP the interested reader is referred to [61, 71, 78, 80].

An attempt at robust optimization of the CVaR was proposed in [65], where the authors implemented, in a robust way, the bicriteria model⁴ proposed by Rockafellar and Uryasev in [70], obtaining a robust linear reformulation of the problem. Three different versions are implemented and tested using market data. Also in this case, the experiments show that the robust counterpart of each model is able to select qualitatively good portfolios in terms of realized return.

⁴ The goal is to form a portfolio in which the expected return is maximized, while some index of risk is minimized.

Another variant of the CVaR optimization problem was formulated in [85], where the concept of Worst-case CVaR (WCVaR) is introduced. Given a probability level $\alpha \in (0,1)$, the Worst-case CVaR of a portfolio $x \in X$, with respect to a certain set $Q$ of probability distributions, is defined as:
\[
WCVaR_\alpha(x) = \sup_{p \in Q} CVaR_\alpha(x),
\]
where $CVaR_\alpha(x)$ is evaluated under the distribution $p$.

In other words, as in the Worst-case VaR definition, the α-CVaR is computed for each probability distribution $p \in Q$, and the supremum value is then returned. Like CVaR, Worst-case CVaR remains a coherent risk measure.

In [85] the authors investigated the problem of minimizing WCVaR for several structures of $Q$, reformulating the problem in a tractable form that can be efficiently solved. First, the authors consider the Worst-case CVaR in the case in which only partial information on the underlying probability distribution is given, i.e., the distribution $p$ is known to belong to a set of distributions which consists of all convex combinations of some predetermined likelihood distributions $p^h$, $h = 1, \ldots, H$:
\[
p \in Q_M = \left\{ \sum_{h=1}^{H} \lambda_h p^h \ : \ \sum_{h=1}^{H} \lambda_h = 1, \ \lambda_h \geq 0, \ h = 1, \ldots, H \right\}. \qquad (2.33)
\]

Following Rockafellar and Uryasev's approach in [69], Zhu and Fukushima in [85] introduce the auxiliary functions
\[
F_\alpha^h(x, \gamma) = \gamma + \frac{1}{1-\alpha} \int_\omega \left[ f_x(\omega) - \gamma \right]^+ p^h(\omega) \, d\omega \qquad (2.34)
\]

where, as before, $\omega$ denotes the random events. Then, for each portfolio $x$, they prove that
\[
WCVaR_\alpha(x) = \min_{\gamma} \max_{h \in \mathcal{H}} F_\alpha^h(x, \gamma),
\]
where $\mathcal{H} = \{1, 2, \ldots, H\}$. Denoting $F_\alpha^{\mathcal{H}}(x, \gamma) = \max_{h \in \mathcal{H}} F_\alpha^h(x, \gamma)$, minimizing the Worst-case CVaR over $X$ can then be achieved by minimizing $F_\alpha^{\mathcal{H}}(x, \gamma)$ over $X \times \mathbb{R}$, i.e.:
\[
\min_{x \in X} WCVaR_\alpha(x) = \min_{(x,\gamma) \in X \times \mathbb{R}} F_\alpha^{\mathcal{H}}(x, \gamma). \qquad (2.35)
\]
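For a single discrete likelihood distribution $p^h$ over scenarios $\omega^h_{[k]}$, the integral in (2.34) reduces to a finite sum, and $CVaR_\alpha$ under $p^h$ equals $\min_\gamma F_\alpha^h(x,\gamma)$; since $F_\alpha^h$ is piecewise linear and convex in $\gamma$, this minimum is attained at one of the scenario losses. A small numpy sketch of this discretization, with hypothetical data of our own:

\begin{verbatim}
import numpy as np

def F_h(gamma, losses, p, alpha=0.95):
    """Discrete version of the auxiliary function (2.34):
    gamma + (1/(1-alpha)) * sum_k p_k * [loss_k - gamma]^+,
    where losses[k] = f_x(omega^h_[k]) for a fixed portfolio x."""
    return gamma + np.sum(p * np.maximum(losses - gamma, 0.0)) / (1 - alpha)

def cvar(losses, p, alpha=0.95):
    """CVaR under a discrete p^h: F_h is piecewise linear and convex
    in gamma, so its minimum is attained at a kink, i.e. at one of
    the scenario losses."""
    return min(F_h(g, losses, p, alpha) for g in losses)

# Hypothetical scenario losses and probabilities for one distribution.
losses = np.array([-0.02, 0.01, 0.03, 0.08, 0.15])
p = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
print(cvar(losses, p))
\end{verbatim}

Note that, by the characterization above, the Worst-case CVaR over $Q_M$ is $\min_\gamma \max_h F_\alpha^h(x,\gamma)$, which may strictly exceed $\max_h CVaR_\alpha(p^h)$: (2.35) does not reduce to simply taking the largest of the individual CVaR values.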

Under the hypothesis that each likelihood distribution is characterized by a finite set of possible scenarios, and denoting by $N_h$ the number of samples of the likelihood distribution $p^h$, the authors obtain the following formulation, which is linear when the loss function $f_x(\omega)$ is linear, too, and the set $X$ is a polyhedron:
\[
\begin{array}{llll}
\min & \theta \\
\text{s.t.} & x \in X \\
& \gamma + \frac{1}{1-\alpha} (p^h)^T u^h \leq \theta & & h = 1, \ldots, H \\
& u_k^h \geq f_x(\omega_{[k]}^h) - \gamma & k = 1, \ldots, N_h, & h = 1, \ldots, H \\
& u_k^h \geq 0, & k = 1, \ldots, N_h, & h = 1, \ldots, H,
\end{array} \qquad (2.36)
\]
where $u_k^h$ is the $k$-th component of $u^h$ and $\omega_{[k]}^h$ denotes the $k$-th sample of the likelihood distribution $p^h$.
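As a concrete illustration, formulation (2.36) can be transcribed almost verbatim with cvxpy, assuming for simplicity the linear loss $f_x(\omega) = -\omega^T x$ (with $\omega$ a sampled return vector) and the unit simplex as the polyhedron $X$; all data below are hypothetical placeholders:

\begin{verbatim}
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n, H, N = 4, 3, 500           # assets, likelihood distributions, samples each
alpha = 0.95

# Hypothetical mixture components: H likelihood distributions, each given
# by N equally weighted return samples (scenario omega^h_[k] = samples[h][k]).
samples = [rng.normal(0.001 * (h + 1), 0.02, size=(N, n)) for h in range(H)]
p = [np.full(N, 1.0 / N) for _ in range(H)]

x = cp.Variable(n)
gamma = cp.Variable()
theta = cp.Variable()
u = [cp.Variable(N) for _ in range(H)]

constraints = [cp.sum(x) == 1, x >= 0]                 # X: a polyhedron
for h in range(H):
    losses_h = -samples[h] @ x                         # f_x(omega^h_[k]), linear in x
    constraints += [
        gamma + (p[h] @ u[h]) / (1 - alpha) <= theta,  # one constraint per h
        u[h] >= losses_h - gamma,
        u[h] >= 0,
    ]

cp.Problem(cp.Minimize(theta), constraints).solve()
print("WCVaR:", theta.value, "portfolio:", np.round(x.value, 3))
\end{verbatim}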

In the second part of their work, the authors generalize the above model and, among various uncertainty structures, they focus on the minimization of WCVaR under box and ellipsoidal uncertainty sets associated with the distribution p. The considered box uncertainty set is the following:

\[
p \in Q_B = \left\{ p \ : \ p = p^0 + \eta, \ e^T \eta = 0, \ \underline{\eta} \leq \eta \leq \overline{\eta} \right\} \qquad (2.37)
\]
in which $p^0$ is a nominal distribution that represents the most likely distribution of the random component $\omega$, $e$ denotes the vector of ones, and $\underline{\eta}$ and $\overline{\eta}$ are given lower and upper bound vectors; in this case the problem is linear when $f_x(\omega)$ is a linear function, and it is convex when the function is convex.

The ellipsoidal set is defined as follows:

\[
p \in Q_E = \left\{ p = p^0 + A\eta \ : \ e^T A \eta = 0, \ p^0 + A\eta \geq 0, \ \|\eta\| \leq 1 \right\} \qquad (2.38)
\]
in which $p^0$ is a nominal distribution that is the center of the ellipsoid, $A$ is the scaling matrix of the ellipsoid, and $\|\eta\| = \sqrt{\eta^T \eta}$. In this case the problem is convex when the loss function $f_x(\omega)$ is convex, and it is a second-order cone program when the function is linear.
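To see how these sets enter the optimization, observe that for fixed $(x, \gamma)$ the auxiliary objective depends on $p$ only through the linear term $p^T u$, with $u_k = [f_x(\omega_{[k]}) - \gamma]^+$; the adversary's inner problem is therefore a linear program over $Q_B$ and a second-order cone program over $Q_E$. A minimal cvxpy sketch, with hypothetical nominal distribution, bounds and scaling matrix:

\begin{verbatim}
import cvxpy as cp
import numpy as np

u = np.array([0.0, 0.0, 0.01, 0.06, 0.13])   # [loss_k - gamma]^+ for fixed (x, gamma)
p0 = np.array([0.4, 0.3, 0.15, 0.1, 0.05])   # nominal distribution
N = len(p0)

# Box set (2.37): p = p0 + eta, e'eta = 0, bounds on eta.
eta = cp.Variable(N)
box = [cp.sum(eta) == 0, eta >= -0.05, eta <= 0.05]
cp.Problem(cp.Maximize(u @ (p0 + eta)), box).solve()
print("worst-case p over Q_B:", np.round(p0 + eta.value, 3))

# Ellipsoidal set (2.38) with A = rho*I, as in the Monte Carlo experiments
# discussed below; rho is a hypothetical uncertainty level.
rho = 0.02
A = rho * np.eye(N)
eta_e = cp.Variable(N)
ell = [cp.sum(A @ eta_e) == 0, p0 + A @ eta_e >= 0, cp.norm(eta_e, 2) <= 1]
cp.Problem(cp.Maximize(u @ (p0 + A @ eta_e)), ell).solve()
print("worst-case p over Q_E:", np.round(p0 + A @ eta_e.value, 3))
\end{verbatim}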

The authors discuss the robust portfolio selection problems corresponding to the types of uncertainty just described through two numerical examples: a market data simulation analysis and a Monte Carlo simulation analysis. Concerning the market data simulation, four sectoral sub-indices of the Hang Seng Index of the Hong Kong Stock Exchange (SEHK) are chosen, for which it is reasonable to assume a mixture distribution of the random returns, and therefore it makes sense to perform a Worst-case CVaR minimization. Numerical experiments for the nominal and the robust portfolio optimization problems are performed via the linear programming model (2.36), where it is assumed that the investor has an initial wealth, bound constraints are imposed on the portfolio, and a minimum expected return R is required. The former model employs the original CVaR as the risk measure to minimize, while the latter minimizes the Worst-case CVaR. Various nominal portfolio strategies and robust portfolio strategies are computed by setting different values of R. The authors show that the robust optimal portfolio almost always outperforms the nominal optimal portfolio in terms of portfolio value. Concerning the Monte Carlo simulation analysis, the authors investigate the robust portfolio optimization model under the ellipsoidal uncertainty set (2.38). Considering the example given in [70], where the portfolio is to be constructed from three assets, the samples are generated via the Monte Carlo simulation approach by assuming a joint normal distribution. The scaling matrix of the ellipsoid, i.e., $A$, is assumed to be a diagonal matrix $\rho I$, where $\rho$ is a non-negative parameter which models uncertainty aspects. In particular, the nominal optimal portfolio is obtained by setting $A = 0$, i.e., $\rho = 0$. The main result is that the gap between the two curves (nominal versus robust portfolio value) becomes larger as $\rho$ increases, which demonstrates the advantage of the robust optimization formulation in situations where the uncertainty grows.

The numerical experiments thus imply that the portfolio selection models using the Worst-case CVaR as the risk measure to minimize perform robustly in practice and provide flexibility in portfolio decision analysis; moreover, the experiments confirm that the specification of the uncertainty set is the key issue for successful practical applications.

2.3.3 Alternative robust models

Some critical aspects of traditional robust optimization led some authors to propose alternative robust models. The traditional robust approach is sometimes criticized for being overly conservative; moreover, this approach only guards against data realizations that are allowed by the given uncertainty set, while potentially becoming very vulnerable to realizations outside of the uncertainty sets considered. Finally, it tends to give the same weight to all possible data realizations (within the considered uncertainty sets), which may be unrealistic in practice.

A major critique that Bienstock raises in [16], and that he wants to overcome with his work, is tied to the notion of tractability of the models. Tractability is without doubt a positive attribute, and certainly, when a model is relevant, ease of solution is a worthwhile goal. However (and this is the critique raised by Bienstock), the solution methodology is often chosen at the expense of the richness and flexibility of the uncertainty model. Further, the underlying assumptions that are often used to justify convex uncertainty sets (such as normally distributed asset returns) potentially expose the user to a structural lack of robustness. Bienstock's view is that it is preferable to rank risk modeling flexibility higher than theoretical algorithmic performance.

Based on these observations, the author presents two non-standard robust models for shortfalls in asset returns, which achieve enhanced risk modeling flexibility by incorporating probability distribution and risk measure elements into the models. In constructing the models, the author assumes that a time series is available from which expected returns (and variances) are computed. This time series is used to construct rough distributions of the return shortfalls. In particular, Bienstock does not assume that the returns are normally distributed.

In the first of these models, called the histogram model, returns are segmented into a fixed number of categories, or bands, according to the magnitude of each shortfall; the distribution is obtained by employing an approximate count of the number of assets falling within each band. In the second model, the ambiguous chance constrained model, suitable probability distributions for the shortfalls are generated. Both models are solved using a cutting plane algorithm that runs a sequence of two-step iterations: the first step solves an implementor problem, which picks values for the decision variables of the model; the second solves an adversarial problem, which finds the worst-case data corresponding to the decision variables just selected by the implementor problem. The adversarial problem is, in both cases, a mixed-integer linear program. On the contrary, the implementor problem is, in the first case, a convex quadratic problem, while in the second case it is a quadratically constrained program solvable using SOCP techniques.
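The overall flow of this two-step scheme can be sketched abstractly as follows; solve_implementor and solve_adversarial are hypothetical placeholders for the two problems just described, and the stopping test follows the usual cutting-plane pattern of comparing the implementor's optimistic value with the adversary's worst case:

\begin{verbatim}
def cutting_plane(solve_implementor, solve_adversarial, tol=1e-6, max_iters=100):
    """Generic skeleton of the two-step scheme (a sketch, not Bienstock's
    exact procedure).  The implementor optimizes against the cuts collected
    so far, producing an optimistic (lower-bound) value; the adversary then
    computes the true worst-case value at the proposed decisions and, if
    the gap is still large, contributes a new cut."""
    cuts = []
    for _ in range(max_iters):
        x, optimistic_value = solve_implementor(cuts)
        new_cut, worst_case_value = solve_adversarial(x)
        if worst_case_value <= optimistic_value + tol:  # cuts capture the worst case
            return x
        cuts.append(new_cut)
    return x  # best decisions found within the iteration limit
\end{verbatim}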

Starting from the risk-adjusted expected return model of Markowitz, Bienstock proposes the following formulation of the histogram problem:
\[
\min_x \max_{\mu \in U} \ \lambda x^T Q x - \mu^T x
\]
where $U$ is the uncertainty set of allowable return vectors. This problem is equivalent to
\[
\min_x \left( \lambda x^T Q x - \min_{\mu \in U} \mu^T x \right) = \min_x \ \lambda x^T Q x - A(x)
\]
where $A(x)$ denotes the worst-case return achieved by the asset vector $x$ in the uncertainty set $U$. The implementor problem consists of approximately solving the problem
\[
\min_x \left( \lambda x^T Q x - \min_{\mu \in U} \mu^T x \right), \qquad (2.39)
\]


generating a portfolio $x^*$. Then, the corresponding adversarial problem yields a vector of asset returns corresponding to $A(x^*)$, which is incorporated into the implementor problem. Specifically, let us consider iteration $h$ of the basic cutting plane algorithm, and let $\mu^{(j)}$, $1 \leq j \leq h-1$, be the return vectors produced by the prior runs of the adversarial problem. Then the implementor model at iteration $h$ is the following:
\[
\begin{array}{lll}
\min_x & \lambda x^T Q x - r \\
\text{s.t.} & Ax \geq b \\
& r - (\mu^{(j)})^T x \leq 0 & \quad 1 \leq j \leq h-1
\end{array}
\]
where the constraints $Ax \geq b$ reflect a variety of restrictions on decisions. As said before, this is a convex quadratic program. After the solution of this model, the adversarial problem generates a new return vector, say $\mu^{(h+1)}$, which is the worst-case return achieved by the current optimal portfolio in the considered uncertainty set $U$. This return vector determines an additional constraint, $r - (\mu^{(h+1)})^T x \leq 0$, which is incorporated into the implementor model at the next iteration.
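In the quadratic case, the implementor model is directly expressible with cvxpy. The sketch below is one possible concrete instance of the solve_implementor placeholder used in the earlier skeleton (with $Q$ and $\lambda$ fixed, and $Ax \geq b$ specialized, hypothetically, to a budget constraint with no short selling):

\begin{verbatim}
import cvxpy as cp
import numpy as np

def implementor_step(Q, lam, cuts):
    """One implementor iteration: minimize lam * x'Qx - r subject to x in a
    polyhedron and one cut r - mu_j' x <= 0 per adversarial return vector
    produced so far, so that at the optimum r = min_j mu_j' x."""
    n = Q.shape[0]
    x = cp.Variable(n)
    r = cp.Variable()
    cons = [cp.sum(x) == 1, x >= 0]            # stands in for Ax >= b
    cons += [r - mu @ x <= 0 for mu in cuts]   # cutting planes
    prob = cp.Problem(cp.Minimize(lam * cp.quad_form(x, Q) - r), cons)
    prob.solve()
    return x.value, prob.value                 # decisions and optimistic value

# Hypothetical data: 3 assets and two cuts already returned by the adversary.
Q = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])
cuts = [np.array([0.05, 0.06, 0.04]), np.array([0.03, 0.02, 0.05])]
x_star, val = implementor_step(Q, lam=2.0, cuts=cuts)
print(np.round(x_star, 3), round(val, 4))
\end{verbatim}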

The nature of the uncertainty set in the histogram model is tied to the concept of discretization of the risk, which is obtained by constructing suitable bands around an estimate of the expected value of the return shortfalls. Specifically, the return shortfalls are classified into a certain number of bands, whose width depends on the modeler's target of risk protection. Then, given the specification of such an uncertainty set $U$, and given the current optimal portfolio $x^*$, the adversary behavior is modeled via a mixed-integer linear programming formulation, which generates the worst-case return achieved by $x^*$ in $U$, i.e., $A(x^*)$. For more details and formal proofs, the interested reader is referred to [16].

In the ambiguous chance constrained model, risk measures such as VaR and CVaR are used to model the risk. In this model, at each iteration the adversary produces a probability distribution according to the so-called random model and, on the basis of such a distribution, generates an adversarial return vector; this is obtained via a mixed-integer linear programming formulation [16]. Then, the implementor chooses a portfolio $x^*$ that minimizes the Worst-case VaR, say $VaR^{max}(x)$ (or the Worst-case CVaR, say $CVaR^{max}(x)$), which is the maximum VaR (CVaR) that can be incurred by the portfolio under the random model, that is, under the choices made by the adversary. The implementor problem in the VaR (or CVaR) case has the following form:

