
Leverage and the financial accelerator: An Agent-Based perspective


Academic year: 2021



Sant’Anna School of Advanced Studies

Master of Science in Economics

Curriculum in General Economics

Leverage and the financial accelerator

An agent-based perspective

Candidate

Davide M. Coluccia

Supervisor Prof. Andrea Roventini


website:

http://www.dmcoluccia.weebly.com/

e-mail:

d.coluccia@santannapisa.it

This thesis is submitted in partial fulfillment of the requirements for the degree of Master of Science in Economics jointly offered by the University of Pisa and Scuola Superiore Sant’Anna.


The purpose of this work is to understand the role of financial institutions in modern economies. Specifically, we try to show that if banks follow mark-to-market accounting and manage their balance sheets according to Value-at-Risk, then one can write and calibrate an agent-based model that provides a sound analytical basis for the financial instability hypothesis.

Upon reviewing the most relevant branches of the mainstream literature dealing with financial frictions in macroeconomics and agent-based models, we show that the leverage cycles that stem from procyclical leverage targeting, itself the result of Value-at-Risk and mark-to-market balance sheet accounting, are consistent with recent empirical evidence on the relationship between the financial sector and business cycles. We thus develop an agent-based model that accounts for these and more traditional stylized facts on business cycles and firm dynamics. We consider a four-sector economy in which banks lend to firms and issue equity that is held by a set of unleveraged investors. These mutual funds trade in equity, thus determining the market value upon which the banks base their lending decisions. Firms in turn use the external financing they obtain to hire workers and carry out their investment decisions. We show that, besides yielding series that are consistent with the main business cycle stylized facts, the model successfully replicates the role of credit in modern economies, and that the widespread use of Value-at-Risk leverage management implies that credit enhances the volatility of fluctuations. We further show that the financial sector has an active role, in that it can be understood as the source of business cycle fluctuations rather than as a passive amplifier.

Keywords: Agent-based models, Financial frictions, Financial instability, Leverage, Mark-to-market accounting, Value-at-Risk

JEL Classification: E32, E37, E44, G01, O16


Introduction 1

1 some theoretical foundations 7

1.1 The Financial Accelerator Hypothesis 7

1.1.1 Microfoundations of financial frictions 9

1.1.2 Shock persistence 11

1.1.3 Shock amplification 13

1.1.4 The continuous-time methodology 18

1.1.5 Some concluding remarks 23

1.2 Procyclical Leverage 25

1.2.1 Understanding the credit crunch 26

1.2.2 The leverage cycle 28

1.2.3 Theoretical underpinnings of procyclical leverage 33

1.2.4 A microfoundation for procyclical leverage 36

1.2.5 Conclusion 40

1.3 Credit in Agent-Based Models 41

1.3.1 A baseline credit-ABM 42

1.3.2 The financial accelerator in a credit network 46

1.3.3 Leverage and the credit network in an ABM 50

1.3.4 Markets 51

1.3.5 Simulations Results 54

1.3.6 Some concluding remarks 55

1.4 Topics in Agent-Based Computational Finance 57

1.4.1 An outlook on market design 58

1.4.2 Heterogeneity and evolutionary selection 61

1.4.3 Artificial stock market simulations 63

1.4.4 Conclusion 67

2 empirical stylized facts 69

2.1 Firm Dynamics 69

2.1.1 Size distribution 70

2.1.2 Size growth 72

2.1.3 Profitability and Innovation 75

2.1.4 Firms and finance 77

2.1.5 Taking stock 78

2.2 Business Cycle Facts 79

2.2.1 The Methodology of Business Cycle Analysis 80

2.2.2 Business Cycle Fluctuations 83

2.2.3 Towards Non-Gaussian Macroeconomics 86

2.2.4 Conclusion 89

2.3 Credit and the Business Cycle 90

2.3.1 The very long run 92


2.3.2 Credit and business cycles 95

2.3.3 The anatomy of financial crises 101

2.3.4 What causal structure? 106

2.3.5 Conclusion 108

3 a simple mark-to-market abm 111

3.1 Introduction 111

3.2 The Theoretical Model 115

3.2.1 The Equity Market 115

3.2.2 The Debt Market 118

3.2.3 Equilibrium Dynamics 121

3.3 A look at the Simulation Results 122

3.3.1 Business Cycle dynamics 123

3.3.2 Exogenous Shocks and Amplification 128

3.4 Concluding Remarks 132

4 a mark-to-market abm with procyclical leverage 137

4.1 Introduction 137

4.2 The theoretical framework 139

4.2.1 The financial sector 143

4.2.2 The real sector 149

4.2.3 Market dynamics 154

4.2.4 Evolutionary selection 157

4.3 Simulations 159

4.3.1 A sample realization 162

4.3.2 Firm dynamics 170

4.3.3 Monte-Carlo experiments 172

4.4 Some conclusions 189

Conclusion 193

References 197

Acknowledgments 213


Figure 1 Leverage and asset growth (i). 34

Figure 2 Leverage and asset growth (ii). 34

Figure 3 The leverage cycle. 35

Figure 4 Baseline ABM with credit: output. 46

Figure 5 Baseline ABM with credit: firms. 46

Figure 6 The credit network in ABMs: plots. 55

Figure 7 Firm size distributions. 70

Figure 8 Sector-level firm size distributions. 71

Figure 9 Growth rate distributions. 73

Figure 10 Firm growth volatility and size. 74

Figure 11 Firm profitability distributions. 75

Figure 12 Financial constraints and size. 77

Figure 13 Financial constraints and growth. 78

Figure 14 US GDP growth distributions. 87

Figure 15 Credit and money in the long run. 93

Figure 16 Moments of the business cycle and credit. 98

Figure 17 Correlation between credit and the business cycle. 99

Figure 18 Excess credit and recessions: evidence. 102

Figure 19 Evidence on the effects of financial and standard recessions. 106

Figure 20 Sketch of the baseline mark-to-market ABM. 116

Figure 21 Sample realization: levels and growth rates. 124

Figure 22 Bandpass filtered variables: cyclical component isolated. 126

Figure 23 Credit and GDP rates of change from US data. 127

Figure 24 Sample realization of a negative credit shock. 128

Figure 25 Credit shock: bandpass filtered variables. 129

Figure 26 Sample realization of a negative technology shock. 130

Figure 27 Technology shock: bandpass filtered variables. 131

Figure 28 Sketch of the mark-to-market ABM 2.0. 141

Figure 29 Sample realization: GDP decomposition. 162

Figure 30 GDP growth rate distribution. 163

Figure 31 Sample realization: prices and wages. 164

Figure 32 Sample realization: Phillips curves. 164

Figure 33 Sample realization: unemployment and the business cycle. 165

Figure 34 Sample realization: the equity market. 167

Figure 35 Sample realization: credit and leverage. 168

Figure 36 Sample realization: credit and investment. 169

Figure 37 Sample realization: Banks’ net worth and growth. 169

Figure 38 Sample realization: average firm net worth and growth. 170

Figure 39 Sample realization: firm size distributions for selected time periods. 171


Figure 40 Sample realization: firm growth rate distributions for selected time periods. 172

Figure 41 GDP decomposition and cycle. 173

Figure 42 GDP and investment cycles distribution. 176

Figure 43 Credit, Investment and GDP: cycle. 180

Figure 44 Central moments of the business cycle and credit. 182

Figure 45 FSD for selected time periods. 186


Table 1 Comparative evidence on small firms. 71

Table 2 Cross-correlations of the business cycle. 85

Table 3 Timing and correlation for selected macroeconomic time series. 89

Table 4 Expansions and recessions of output and credit for the four sub-sample periods. 94

Table 5 Real GDP per capita and Excess Credit in Booms. 95

Table 6 Cross correlations between real variables and real money growth and real credit growth. 97

Table 7 Leverage and business cycles in XX century recessions. 100

Table 8 Duration and rate of Real GDP Cycles stratified by Credit Growth in Current Expansion/Recession. 103

Table 9 Normal and financial recessions and excess credit. 105

Table 10 Baseline model: parameters settings. 123

Table 11 Baseline model: sample stationarity tests. 125

Table 12 Baseline model: Monte-Carlo stationarity tests. 125

Table 13 Model 2.0: parameters settings. 166

Table 14 Sample statistics and stationarity tests. 174

Table 15 Normality tests on GDP and investment. 175

Table 16 Correlation structure for selected variables and the business cycle. 178

Table 17 Recessions, expansions and credit: descriptive statistics. 180

Table 18 Likelihood of recessions and excess lending. 184

Table 19 Descriptive statistics and normality tests for FSD. 186

Table 20 Statistics and normality tests for FGD. 188

Table 21 Gibrat’s regressions. 189


People who don’t like dynamic stochastic general equilibrium models are dilettantes.

Christiano, Eichenbaum & Trabandt (2017)

This work is the result of a composite effort whose aim has been to study the financial sector and its growing relevance in modern economies with alternative modeling instruments. In this introduction we thus pursue a twofold purpose: on the one hand, we shall present the reasons underlying our modeling choice, that is, agent-based modeling as opposed to more traditional DSGEs; on the other, we will introduce the topics of interest, credit, mark-to-market balance sheet management and procyclical leverage targeting, as plausible sources of financial instability and business cycle fluctuations driven by the financial side of the economy.

the methodology

In 2003 a distinguished scholar famously commented:

“[. . .] macroeconomics in this original sense has succeeded: its central problem of depression prevention has been solved, for all practical purposes, and has in fact been solved for many decades.” (Lucas, 2003, p. 1)

That same year, Woodford (2003) published a leading graduate textbook on New Keynesian macroeconomics, a stream which by that time, and today as well, had established itself as the common consensus among economists: a “mainstream”, so to speak. Few would doubt that Lucas and Woodford belong to different economic schools. However, probably fewer would doubt the influence that the former had on New Keynesian macroeconomics, the tradition the latter belongs to. Among other things, and simplistically speaking, Lucas stood as the influential proponent of the Critique, which in turn prompted the wide adoption of the rational expectations paradigm and the quest for microfoundations. Rational expectations made dynamic optimization models analytically tractable, and it did not take long before Kydland & Prescott (1982) developed the first Real Business Cycle (RBC) model. The RBC model, though empirically inconsistent and realistically opaque, was later adopted as the core of today's (and 2003's) workhorse of macroeconomic modeling, the dynamic stochastic general equilibrium (DSGE) model. The Lucasian tenet unfortunately turned out to be overly optimistic. It took four, maybe five years for history to refute it, and it did so in style. The crisis of 2008 was the heaviest since the Great Depression of 1929: one might have hoped for a radical change in the economic discipline, given such a manifest error of appraisal. And yet, in 2008 Galí (2008) published his guide to New Keynesian macroeconomics, just as Woodford had done five years earlier.

Today, DSGE models maintain their solid lead within the mainstream macroeconomic literature, though this is not to say that new features and more complex specifications have not been proposed. Among these, with respect to an ideal benchmark such as the model by Clarida, Galí & Gertler (1999), Curdia & Woodford (2010) introduce and justify the presence of financial intermediaries and show that the optimal monetary policy remains unchanged, Gertler & Karadi (2011) exploit the DSGE toolbox to evaluate the impact of unconventional monetary policy, and Gertler et al. (2008) introduce staggered nominal wage contracting. Some efforts have been spent developing DSGEs allowing for a mild degree of heterogeneity: see for instance Eggertsson & Krugman (2012) and the lively recent literature on HANK models (Kaplan, Moll & Violante, 2018).

What is more, recent DSGE models are now trying to take deep downturns and recessions into account. Curdia, Del Negro & Greenwald (2014) estimate the model by Smets & Wouters (2007) assuming t-distributed shocks and find that downturns are rare but of non-negligible likelihood. Fernández-Villaverde & Levintal (2016) and Canzoneri et al. (2016) also model rare events that occur with small but non-zero probability, the former allowing the exogenous risk of disaster to vary over time, the latter allowing fiscal policy to change depending on economic conditions.

Nevertheless, it would be difficult to maintain that any paradigm shift occurred in the economic discipline following the 2008 downturn. Such a change was not, we believe, invoked only by a fringe of heterodox economists relegated to the margins of the major economic debate arenas. Caballero (2010), Kirman (2010) and Stiglitz (2011) are only three voices out of a crowd of important economists calling for a major change in the wake of the Crisis. What are, however, the difficulties arising from DSGE models? With no presumption of comprehensiveness, we now try to answer this rather complex question.

As Fagiolo & Roventini (2016) point out, we may distinguish three orders of issues concerning DSGEs. Simplifying their taxonomy, these are inherently theoretical issues, issues concerning the econometric viability of estimation, and issues regarding the features of the economic agents these models imply.

As for the first, eminently microeconomic, critique, we consider a twofold argument. First, since DSGEs are a version of general equilibrium model, any non-Walrasian feature of the market implies that the equilibrium may fail to exist or, if it exists, to be unique (Kirman, 1989). In order not to deal with the issues raised, inter alia, by Saari & Simon (1978), DSGEs assume a representative agent (henceforth RA) to obtain a stable and unique equilibrium. Nevertheless, as argued by Kirman (1992), such an approach fails to take into account that (i) individual rationality does not imply aggregate rationality, (ii) RA-based models are not robust, for one cannot assume that the aggregate response is the same as the individual one, (iii) preferences are not aggregable (i.e., the Weak Axiom of Revealed Preferences may not hold in the aggregate), and (iv) the RA is oddly unverifiable, for testing a model that relies on it implies testing that same assumption.

A more econometric perspective suggests that further issues regard the estimation of DSGEs. Following Fukac & Pagan (2006), these may concern the identification of the model, its estimation and its evaluation. From the identification point of view, Canova (2008) points out that DSGEs are hard to identify because of the high number of nonlinearities they encompass. These identification problems cause maximum likelihood estimates to be biased, thus resulting in severe estimation drawbacks. Such issues are tackled by employing limited-information ML estimation, but even this approach may be of little help in the case of non-informative priors and flat likelihood functions (Canova, 2009). As for the evaluation of DSGE performance, Fukac & Pagan (2006) show that co-trending behavior cannot be assessed because the data are typically demeaned. Furthermore, since output is the only trending variable, there can be no other series that is co-integrated with it. Finally, DSGEs tend to misrepresent the means, standard deviations and autocorrelations of observed data.

Moreover, DSGEs, as well as all those models whose core relies upon the Real Business Cycle models developed in the spirit of Kydland & Prescott (1982), are grounded upon the Rational Expectations Hypothesis (henceforth REH). In principle, the REH can be interpreted in three ways (Lucas, 1972): either (i) subjective expectations of economic variables coincide with the objective conditional expectations of those variables, or (ii) agents know the true underlying data generating process of the economy, or (iii) agents are able to form expectations that are on average correct. In practice, the REH entails that agents are able to engage in dynamic programming and determine their behavior accordingly, their expectations being on average unflawed. While we will come back to this point, it is nonetheless important to note that the REH has been quite uncontroversially rejected in empirical testing, inter alia by Coibion & Gorodnichenko (2011) and Gennaioli, Ma & Shleifer (2015).

As said, however, there have been several developments within the DSGE literature. First, it would be incorrect to affirm that the RA assumption is strictly necessary to DSGEs, as shown by Krusell & Smith (1998), whose recent development by Eggertsson & Krugman (2012) allows for a moderate, if limited, degree of heterogeneity across agents. Furthermore, bounded rationality models, as surveyed by Dilaver et al. (2016), allow agents to learn, following Evans & Honkapohja (2001), usually through least squares analysis, though other learning patterns have been explored too, in the spirit of Brock & Hommes (1997).

Nevertheless, Fagiolo & Roventini (2016) argue that such steps, however impressive, do not solve the fundamental idiosyncrasy implicit in the DSGE modeling approach, i.e., the idea of introducing ad-hoc exogenous shocks into a system which would otherwise be, at least locally, stable around its own equilibrium. Furthermore, they claim that even modern DSGEs cannot allow for more than a mild degree of heterogeneity, whereas it should allegedly be desirable to be able to model as many strategies and agents as “nature” requires.

Farmer & Foley (2009) appear to be of much the same opinion, insofar as they claim that DSGEs and other mainstream macroeconomic models do not just deliver wrong policy indications: they fail to deliver any at all. The authors instead claim that Agent-Based Models (henceforth ABMs) allow for a more realistic representation of reality, mostly focusing on financial markets, whose complexity is best captured by ABMs, allegedly in the original Keynesian spirit. More interestingly, the authors argue that the learning patterns exploited, inter alia, in LeBaron & Tesfatsion (2008) and Ashraf, Gershman & Howitt (2011) owe much to the Lucas critique, whose influence ultimately led to the hegemony of dynamic programming and the REH.

Still, a rather important array of problems arises once we consider ABMs as a viable alternative to DSGEs. While it is true that ABMs do not impose strong theory-driven assumptions on the behavior of agents, this nonetheless translates into an instrumentalist approach which would lead the scholar to trust a model provided that it “behaves” well in reproducing real-world data. As argued by Maki (2009) in the long-standing critique of Friedman's approach to economic methodology, which we echo, this cannot be considered an epistemically correct validation technique. Also, the lack of closed-form solutions to the models, something they share with DSGEs, makes it hard to disentangle causality and perform policy evaluations with true confidence.

It is difficult, therefore, to cogently argue that ABMs can truly represent a viable alternative to DSGEs, given the importance such an undertaking would imply; nor is such a task well-suited to a work of this kind. Nevertheless, given the already mentioned calls for change among economists following the Great Recession, we are convinced that this relatively new research paradigm may prove itself useful, and we thus adopt this class of models in sketching our original contribution, which builds upon these and further reflections in an attempt to motivate each modeling choice we put forward.

the research question

That said, this is not a methodological work: we exploit ABMs as a modeling tool, but our focus is to study something beyond the means by which the study is carried out. The main and most far-reaching ambition of this work is to provide an analytical framework for the financial instability hypothesis, an attempt that its proponent Minsky (1977) never undertook. After decades of seeming oblivion, the Great Recession once more fostered interest in the author, whose contribution is valuable for understanding the dynamics of business and credit cycles, as well as the interactions between the two. Therefore, we shall seek to develop an environment in which the financial sector can serve as the source of business cycle fluctuations, stemming from the dynamics of the credit it fuels the economy with. This perspective is radically different from the way mainstream macroeconomic models embed financial frictions: the dominant position, stretching back to Bernanke, Gertler & Gilchrist (1999), has long regarded the financial side of the economy as an amplification mechanism of real shocks. While influential and interesting, this perspective nonetheless does not quite capture the mechanism underlying the inherent instability of modern economies as stemming from their financial sector. Indeed, we shall seek to understand business cycle fluctuations as endogenous phenomena, and thus we do not rely upon exogenous shocks such as those widely employed in common DSGEs. Business cycle fluctuations, we may borrow from Schumpeter, are not exogenous facts: their raison d'être is implicit in economic activity per se.

We try to fulfill this rather ambitious endeavor in a twofold manner. First, we introduce mark-to-market accounting: financial institutions do not behave according to the book value of their assets but care instead about their market value. Increasingly complex financial environments prompt mark-to-market accounting, which in turn makes economies more vulnerable to balance sheet recessions and financial accelerator spirals. Second, following the seminal work by Adrian & Shin (2010), we explicitly model how Value-at-Risk, which has been shown to represent the analytical solution of a standard contracting model between optimizing agents, can result in procyclical leverage targeting. These two features are the fundamental tools we employ to argue in favor of financial instability. In a system in which financial institutions hold mark-to-market balance sheets and undertake procyclical leverage targeting, leverage cycles naturally emerge: increasing asset prices prompt increasing leverage, which in turn boosts the demand for assets, thus further raising their price. Increasing leverage comes, however, as no free lunch.
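The feedback loop just described can be sketched in a few lines of Python. Everything below is an illustrative assumption of ours, not the thesis's calibration: the VaR multiple `c`, the linear price-impact rule, the volatility window and the partial-adjustment speed are made up purely for exposition.

```python
import numpy as np

rng = np.random.default_rng(7)

T, window = 300, 20
c = 0.04        # VaR multiple: leverage target = c / estimated volatility (assumed)
impact = 0.01   # price impact of the bank's net asset demand (assumed)
adj = 0.2       # partial balance-sheet adjustment per period (assumed)
lev_cap = 25.0  # sanity bound on the leverage target

price = np.ones(T)
equity = np.ones(T)
assets = np.full(T, 5.0)   # the bank starts at leverage 5
rets = np.zeros(T)

for t in range(1, T):
    # asset return: last period's price change plus noise standing in for frictions
    rets[t] = price[t - 1] / price[t - 2] - 1.0 if t > 1 else 0.0
    total_ret = float(np.clip(rets[t] + rng.normal(0.0, 0.01), -0.1, 0.1))

    # mark-to-market: equity absorbs the gain or loss on the whole asset book
    equity[t] = max(equity[t - 1] + assets[t - 1] * total_ret, 1e-6)

    # VaR rule: measured risk is the realised volatility of recent returns, so
    # calm (often rising) markets lower measured risk and raise target leverage
    sigma = max(float(np.std(rets[max(1, t - window):t + 1])), 1e-3)
    target_assets = min(c / sigma, lev_cap) * equity[t]

    # move the balance sheet toward the target; the bank's own net demand
    # moves the mark-to-market price, closing the feedback loop
    demand = adj * (target_assets - assets[t - 1])
    assets[t] = assets[t - 1] + demand
    price[t] = price[t - 1] * (1.0 + float(np.clip(impact * demand / assets[t - 1], -0.05, 0.05)))
```

In such a sketch, quiet stretches lower measured volatility and push the leverage target, hence asset demand, hence the price, upward, while any burst of volatility forces simultaneous deleveraging and price declines: the procyclical comovement of leverage and asset growth that Adrian & Shin document.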

The more leveraged financial institutions are, the more fragile they are in the face of relatively mild economic downturns. It is important to notice, however, that we do not postulate the existence of exogenous shocks: market frictions and less-than-optimal matching among the agents can result in unexpected losses. In the upward phase of the leverage cycle, credit has been observed to behave as a stabilizing variable. Recent evidence by Jordá, Schularick & Taylor (2017) and others has interestingly noted that the correlation between the credit-to-GDP ratio and the volatility of the business cycle is slightly negative. When, however, leverage is too high, tail events occur, and the outcomes of recessions are further worsened by balance sheet shrinkage due to mark-to-market accounting. In this respect we find our work to be well in tune with Minsky's: current expansions sow the seeds of future recessions in the form of excessive balance sheet enlargements and increasing leverage. The resulting crisis is worse the more prolonged the expansion period was; hence we try to provide a portrait of the financial sector as the source of both long-term growth and short-term volatility and boom-and-bust dynamics.

We come to the main theoretical point of inquiry sideways. First, we need to evaluate the theoretical contributions that have been put forward in the literature. Chapter 1 is devoted to this crucial necessity. We shall discuss the way the mainstream literature has dealt with the topic of financial frictions, most notably with respect to the way they influence the persistence and magnitude of real shocks. We then present evidence and reflections on leverage procyclicality, and document the dynamics of the leverage cycle, as well as some key facts about balance sheet recessions as contextualized in the 2007-2008 credit crunch. That said, we undertake a brief albeit comprehensive review of the literature on ABMs dealing with credit and leverage, and most importantly shed light on the differences between this class of models and the one we develop thereupon.

In chapter 2 we present the main stylized facts about firm dynamics, business cycles and credit cycles. The motivation of the chapter is twofold. On the one hand, since we develop an ABM whose validation is attained through its capability to replicate real-world facts, it is crucial to take such facts into account. On the other, recent evidence on credit-fueled expansions and recessions, as well as on the correlation between credit and the main business cycle indicators, sheds further light on the role of financial institutions in the wider macro-economy. Credit is understood to yield, on average, milder and less volatile growth, although it is also correlated with an increasing likelihood of recessions, which are more dramatic the more reliant on credit the previous expansion had been.

In chapter 3 we develop a simple many-types heterogeneous agents model that is useful for explanatory purposes. With no presumption of realism, we consider a mark-to-market economy in which banks maintain a fixed leverage targeting policy. The analytical framework is simple enough to allow for a closed-form solution, since the credit and labor markets clear in equilibrium. We show that even in such a simple environment, business cycle fluctuations emerge as the outcome of the interactions between banks and firms. We further show that the financial sector reacts differently to different real shocks: technology shocks are both persistent and amplified, whereas interest rate shocks are readily absorbed.

Building on that framework, in chapter 4 we develop a more realistic and complex framework that corrects most of the shortcomings that the previous model encompassed on purpose. The new model is a proper agent-based one, inasmuch as its performance can only be evaluated by means of numerical simulations. Also, we get rid of the residual “Walrasian” features of the market clearing mechanism, and allow for a more complex market structure in both the goods and credit markets. We introduce an evolutionary selection component and make banks follow a stylized VaR rule. We evaluate the performance of the model by means of Monte Carlo simulations, as suggested in many studies dealing with the validation of ABMs, and find the model capable of replicating many stylized facts concerning both business cycles and credit.

The model robustly generates macroeconomic time series whose cross correlations with the business cycle are consistent with the real data. Furthermore, we find that the GDP growth rate distribution is non-Gaussian, for it features heavy tails: our model, unlike standard DSGEs, thus allows tail events, such as deep recessions, to occur with non-negligible probability. Most importantly, we find that the likelihood of experiencing a recession, in our model and in the real world alike, is positively correlated with the excess credit the financial sector fuels the economy with before the recession actually takes place, thus supporting the view put forward by recent studies that too much finance can harm growth and enhance business cycle volatility. Moreover, endogenous growth co-exists with business fluctuations, and both are driven by the financial sector. The business cycle is thus in turn driven by the credit cycle in an asynchronous fashion: whenever credit growth exceeds that of output, the economy gets overly financialized, credit risk increases and consequently recessions become more likely. That is, in our view, a way to microfound the financial instability hypothesis from a bottom-up perspective.
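As a flavor of how such a heavy-tail diagnostic might be run, the snippet below applies a Jarque-Bera normality test to a synthetic fat-tailed sample standing in for simulated growth rates. The data and parameters are placeholders of ours, not the model's actual output.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# stand-in "growth rates": Student-t draws are heavy-tailed, Gaussian draws are not
fat_tailed = 0.01 * rng.standard_t(df=3, size=2000)
gaussian = rng.normal(0.0, 0.01, size=2000)

jb_fat = stats.jarque_bera(fat_tailed)    # result carries .statistic and .pvalue
jb_gauss = stats.jarque_bera(gaussian)

# excess kurtosis is the fingerprint of heavy tails (well above 0 for t(3) draws)
excess_kurtosis = stats.kurtosis(fat_tailed)
```

A tiny p-value for the fat-tailed sample means normality is firmly rejected, which is the statistical signature of tail events, such as deep recessions, occurring with non-negligible probability.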


1

some theoretical foundations

Salomon saith: “There is no new thing upon earth”. So that as Plato had an imagination, that all knowledge was but remembrance; so Salomon giveth his sentence: “that all novelty is but oblivion”.

F. Bacon, Essays in J. L. Borges, Aleph

summary

In this first chapter we discuss the main ideas that have inspired our work from a theoretical perspective. Our intellectual debts are not limited to those authors whose ideas are briefly presented here, and yet these are the ones whose influence is the clearest. We first study the development and main features of the financial accelerator, which is the mechanism most mainstream macroeconomic models employ to embed financial frictions within an otherwise frictionless general equilibrium context. This notwithstanding, since our work is not concerned with standard DSGE models, we will merely provide the underlying intuition and outline how we intend to translate it into our model. Second, we take into account the relatively recent and lively literature on procyclical leverage stemming from banks employing Value-at-Risk to manage their mark-to-market balance sheets. We discuss the evidence on the phenomenon and its relevance for the 2007 credit crunch. Furthermore, we show that a seemingly heuristic rule such as the VaR can be derived in a standard contracting model as the outcome of optimizing decision rules. Third, we briefly review the outstanding literature dealing with credit in Agent-Based models. With no pretense of comprehensiveness, we discuss three models which, we hope, are relevant to grasp the main features of these models and the differences with ours. We conclude the chapter by examining some relevant topics in agent-based behavioral finance. This last section is important, for it introduces some modeling techniques which we shall adopt in developing our models, presented in the third and fourth chapters, and thus introduces and motivates our choices.

1.1

the financial accelerator hypothesis

The idea of this section is to study how modern scholars in macroeconomics have dealt with frictions stemming from the financial sector of the economy. We name the section with reference to the notion of the financial accelerator for we will see that this very simple idea comprises a variety of contributions put forward in the literature and is intuitively easy to grasp. We refrain from merely providing a list of references, but aim at sketching a thematically meaningful review of the most relevant topics that have emerged.

That financial frictions are a fundamental aspect of the macroeconomic discipline is in itself not a novel idea. Classic economic scholars emphasized their importance from as early as the Great Depression: among these we cite the pioneering contributions by Fisher (1933) and Keynes (1936), not to mention Minsky (1957) and Kindleberger (1978). Also, authors such as Patinkin (1956) and Tobin (1969) linked the instability these frictions entail for the economy to monetary policy. Nevertheless, following the influential contribution by Kydland & Prescott (1982), the subsequent Real Business Cycle literature largely neglected financial frictions and made the widespread assumption of efficient capital markets (Fama, 1970). Even though in the '90s an array of impressive papers devoted attention to the study of the financial accelerator hypothesis, financial frictions began to be embedded in New Keynesian DSGE models only following Bernanke, Gertler & Gilchrist (1999), who document how credit constraints can result in feedback and amplification dynamics stemming from the balance sheets of both lenders and borrowers.

Theoretically, a frictionless economy would allow funds to be liquid and flow to the most profitable projects or to those agents who value them the most. Hence, in a setting without financial frictions the distribution of financial wealth across agents is not relevant; accordingly, those who embraced this view commonly employed a representative-agent framework to study the economy.

By contrast, financial frictions make liquidity considerations important and the distribution of wealth across agents matter. Information asymmetries and costly monitoring imply that more productive agents shall issue claims mainly in the form of debt, for it ensures that the borrower exerts sufficient effort. Still, debt comes with noteworthy drawbacks, since it implies that an adverse shock wipes out a large fraction of the levered borrower's net worth, thus limiting his risk-bearing capacity in the future. Hence, we say that a temporary adverse shock is persistent, since it can take a long time until borrowers can rebuild their net worth via internal funding alone, that is, retained earnings.

Still, temporary shocks tend to feature amplification as well, whenever productive agents are forced to fire-sell their capital. The idea is that fire sales depress the price of capital, hence the net worth of productive agents is affected by this price decrease to an extent beyond that of the original shock. While this effect is usually referred to as static amplification, a dynamic effect may stem from the fact that persistence negatively affects future asset prices, which in turn feed back to current net worth, which is thus further affected.

It has been argued that amplification effects lead to rich volatility dynamics which in turn can explain the inherent instability of the financial system. Indeed, even when exogenous risk is not high, endogenous risk stemming from these amplification dynamics can be noticeable. Credit risk is further affected by liquidity risk, for liquidity is intrinsically fragile, as we shall see in section 1.1.2. It is nonetheless useful to distinguish between the three notions into which liquidity risk has been disentangled: technological, market and funding liquidity. Technological liquidity is due to the fact that physical capital can be liquid because so is investment. On the other hand, we speak of market liquidity if physical capital can be sold off easily, with limited price impact, notwithstanding the possible irreversibility of investment. Last, funding liquidity is primarily determined by the maturity structure of debt and by margins/haircuts. But since margins depend on the volatility of the collateral assets, all three concepts are indeed deeply related.

The remainder of the review proceeds as follows. Section 1.1.1 deals with the theoretical microfoundations of financial frictions that have been put forward in either economic theory or applied works. In section 1.1.2 we analyze two cornerstones which introduced financial frictions in modern macroeconomics and formalized the idea of persistence of financial shocks. Amplification of shocks is analyzed in section 1.1.3, where we discuss how leverage can entail highly nonlinear dynamics. Section 1.1.4 introduces continuous-time models that address some potentially substantial critiques of standard models. Lastly, in section 1.1.5 we present some ancillary topics and take stock of the discussion.

1.1.1 Microfoundations of financial frictions

The literature offers several microfoundations providing sound theoretical justification for the many different financial frictions applied works have dealt with. We present them in chronological order.

One first contribution is the costly state verification approach originally due to Townsend (1979). The basic friction here relates to asymmetric information about the future payoff of a given investment project. The asymmetry is substantiated by the fact that ex post the investor learns the true payoff, whereas the lender never does unless he undertakes a costly monitoring action. In such an environment, one primary finding of Townsend (1979) is that debt is the optimal contract because it minimizes the socially wasteful monitoring expenditure. Indeed, as long as debt is paid in full, there is no need to pay such costs.

In case of default the lender needs to verify the state: in theory he would thus bear monitoring costs; in practice these are charged upon the borrower in the form of interest. Therefore costly state verification implicitly explains one further phenomenon, that is, the empirically observed cost wedge between internal and external funding, which contrasted with the theoretical prediction of Modigliani & Miller (1958). Importantly, the interest rate the lender charges upon the borrower increases with the borrowed amount, for the higher the amount borrowed, the more likely become default and thus costly monitoring.
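The comparative static can be illustrated numerically. The following is my own sketch, not from Townsend (1979): a lognormal project payoff, a monitoring cost paid only in default, a risk-free rate normalized to zero, and all parameter values made up for illustration.

```python
# Numerical sketch of costly state verification (illustrative, not from
# Townsend 1979): lognormal project payoff, monitoring cost MU burnt by the
# lender only when auditing a defaulted project, risk-free rate of zero.
import math, random

random.seed(0)
DRAWS = [math.exp(random.gauss(0.0, 0.4)) for _ in range(20_000)]  # payoffs R
MU = 0.05  # resources spent on auditing a defaulted borrower

def lender_payoff(r, B):
    """Expected lender revenue when the borrower owes (1 + r) * B."""
    face = (1.0 + r) * B
    total = 0.0
    for R in DRAWS:
        # Full repayment if R covers the face value; otherwise audit and seize.
        total += face if R >= face else max(R - MU, 0.0)
    return total / len(DRAWS)

def break_even_rate(B, lo=0.0, hi=2.0):
    """Loan rate at which expected lender revenue just covers the loan B."""
    for _ in range(50):  # bisection on r
        mid = 0.5 * (lo + hi)
        if lender_payoff(mid, B) >= B:
            hi = mid
        else:
            lo = mid
    return hi

rates = [break_even_rate(B) for B in (0.3, 0.5, 0.7)]
print(rates)  # the implied interest rate rises with the amount borrowed
```

Because a larger loan enlarges the default region and hence expected auditing costs, the break-even rate is increasing in the amount borrowed, reproducing the internal/external finance wedge discussed above.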

This latter proposition contrasts with quantity rationing as in Stiglitz & Weiss (1981). Here, asymmetric information arises ex ante, that is, before contracting takes place. We briefly sketch their model to point out its wider implications.

The payoff of entrepreneur $i$ is given by $R$ with distribution $G(R|\sigma_i)$. All entrepreneurs' projects yield the same mean return, that is $\int R \, dG(R|\sigma_i) = \mu_i = \mu$ for all $i$. Hence $i$ has a riskier project than $j$ if $\sigma_i > \sigma_j$, in the sense that $G(R|\sigma_i)$ is a mean-preserving spread of $G(R|\sigma_j)$. If an entrepreneur borrows $B$ at interest rate $r$, then

$$\pi_e(R, r) = \max\{R - (1+r)B,\, 0\} \tag{1a}$$

$$\pi_\ell(R, r) = \min\{R,\, (1+r)B\} \tag{1b}$$

where $\pi_e$ and $\pi_\ell$ are the payoffs for the entrepreneur and the lender respectively.

The entrepreneur's return $\pi_e$ is convex in the realization $R$, whereas $\pi_\ell$ is concave with respect to the same variable. Hence, the expected payoff for the entrepreneur, $\mathbb{E}[\pi_e(R, r)] = \int \pi_e(R, r)\, dG(R|\sigma_i)$, is increasing in the volatility of the investment $\sigma_i$, whereas the opposite holds for the expected payoff for the lender, i.e. $\mathbb{E}[\pi_\ell(R, r)] = \int \pi_\ell(R, r)\, dG(R|\sigma_i)$. At a given $r$ only those investors featuring a sufficiently high risk level $\sigma_i \geq \sigma^*$ will apply for loans. Since entrepreneurs are assumed to maximize their expected payoffs, and since these have been shown to be increasing with $\sigma_i$, the threshold level is just the one solving the zero-profit condition, that is, $\sigma^*$ solves

$$\int \pi_e(R, r)\, dG(R|\sigma^*) = 0 \tag{2}$$

which in turn implies that the threshold $\sigma^*$ is itself increasing in the market interest rate $r$. Hence, the higher the interest rate lenders charge on borrowers, the more willing these will be to carry out riskier projects. Essentially, this is a classic lemons problem as that studied by Akerlof (1970).

Assume now that the asymmetry in information substantiates in lenders being unable to evaluate a given applicant's project riskiness. Then, the lender's ex-ante payoff is just the expected value of (1b) conditional on the borrower's participation constraint:

$$\bar{\pi}_\ell(r) = \mathbb{E}\left[\int \pi_\ell(R, r)\, dG(R|\sigma_i) \,\Big|\, \sigma_i \geq \sigma^*\right] \tag{3}$$

A higher interest rate has a positive effect on $\bar{\pi}_\ell(r)$ because $\pi_\ell(R, r)$ is increasing in $r$. However, a higher $r$ also has a negative impact on $\bar{\pi}_\ell(r)$ because it increases $\sigma^*$ and thus the riskiness of the average borrower. The overall effect is thus ambiguous and $\bar{\pi}_\ell(r)$ can be non-monotonic with respect to $r$. Lenders solve

$$\max_r \; \bar{\pi}_\ell(r) = \mathbb{E}\left[\int \pi_\ell(R, r)\, dG(R|\sigma_i) \,\Big|\, \sigma_i \geq \sigma^*\right] \tag{4}$$

and, because of such non-monotonicity, it may be that in equilibrium demand does not match supply, i.e. shortages of supply obtain in equilibrium. Hence, the model provides a theoretical foundation for credit rationing. To see this, note that the demand for funds is always decreasing in $r$, whereas the supply is not monotonically increasing: hence there may be cases in which the competitive equilibrium entails credit rationing.
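The mechanism can be made concrete with a minimal numerical sketch. The two-point return distribution, the pool weights and the unit loan size below are my own illustrative assumptions, not taken from Stiglitz & Weiss (1981).

```python
# Illustrative adverse-selection sketch (parameters are my own): projects pay
# R in {1 - sigma, 1 + sigma} with equal probability, loan size B = 1, and the
# pool is mostly safe (sigma = 0.1) with a minority of risky types (sigma = 0.9).

POOL = [(0.1, 0.8), (0.9, 0.2)]  # (sigma, population share)

def applies(sigma, r):
    # Entrepreneur payoff is max(1 + sigma - (1 + r), 0) / 2: positive iff sigma > r.
    return sigma > r

def expected_lender_payoff(r):
    """Average repayment per unit lent, over the borrowers who self-select in."""
    applicants = [(s, w) for s, w in POOL if applies(s, r)]
    if not applicants:
        return 0.0
    mass = sum(w for _, w in applicants)
    # Good state: full repayment 1 + r; bad state: lender seizes 1 - sigma.
    return sum(w * 0.5 * ((1 + r) + (1 - s)) for s, w in applicants) / mass

print(expected_lender_payoff(0.09))  # safe types still in the pool
print(expected_lender_payoff(0.20))  # only risky types remain: payoff drops
```

The discrete drop just above $r = 0.1$, where the safe types self-select out, is what makes the lender's expected payoff non-monotonic: a lender may prefer to hold the rate below that point and ration the excess demand for loans rather than raise it.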

More recently, Hart & Moore (1994) introduced the idea that debt may be optimal in the presence of incomplete contracts. The basic idea is that if payments in some states of the world are not exactly specified, then debtors and lenders will try to renegotiate their obligations in the future. Once such future behavior is anticipated, certain payoff realizations become non-pledgeable, and thus ex-ante funding is limited; as a consequence, a 'skin in the game' constraint has to be imposed. Therefore, the way out of such limited pledgeability is to collateralize the initial contract: this is the fundamental reason why the literature dealing with collateral/margin/haircut constraints typically relies upon the incomplete contracting approach as its microfoundation (Brunnermeier, Eisenbach & Sannikov, 2012).


Thus, theoretical microfoundations of financial frictions in macroeconomics can be broadly distinguished into three categories: ex-post asymmetric information as in Townsend (1979), ex-ante asymmetric information as in Stiglitz & Weiss (1981), and incomplete contracts as put forward by Hart & Moore (1994). Empirically, there is compelling evidence suggesting the existence and relevance of financial constraints. The empirical macro literature on credit channels focused on two distinct topics. On the one hand, among others Bernanke (1983) studied the bank lending channel, thus suggesting the financial friction to be supply-driven. On the other, Lamont (1997) and other corporate finance works have documented how financial constraints can stem from the so-called balance sheet channel, i.e. they might be demand-driven. Both approaches were successful in documenting the pervasiveness and depth of financial constraints.

1.1.2 Shock persistence

Initially, the macroeconomic literature focused on the fact that a shock, despite being temporary, could have persistent, i.e. long-lasting, effects. Even though standard RBC models feature a mild persistence of shocks, the main idea of the models we discuss in this section is that temporary shocks have their persistence enhanced by means of feedback effects which tighten financial frictions. More specifically, negative shocks to entrepreneurial net worth increase financial frictions and force entrepreneurs to invest less. This in turn results in a lower level of capital and entrepreneurial net worth, which leads to yet lower investment and net worth.

Typically these models are set in a framework similar to Solow (1956), thereby assuming a single aggregate production function $Y_t = f(K_t, L_t)$, where $Y_t$ is aggregate output, and $K_t$ and $L_t$ are capital and labor respectively. Still, agents are not homogeneous, for a fraction $\eta \in (0,1)$ of the population are entrepreneurs, whereas $1-\eta$ are households. They differ inasmuch as only the former can create new capital from the consumption good by means of an investment out of their own wealth and borrowing from households. The key friction these models assume is costly state verification, first introduced by Townsend (1979).

We first present the work by Carlstrom & Fuerst (1997) and then move to the original contribution by Bernanke & Gertler (1989). Entrepreneurs can convert consumption goods into capital at a one-for-one rate, but an entrepreneur's investment $i_t$ yields $\omega i_t$, where $i_t$ is the input in terms of the consumption good and $\omega$ is an i.i.d. shock with c.d.f. $G$ and such that $\mathbb{E}[\omega] = 1$. Costly state verification implies that the outcome $\omega i_t$ is observable to the financier only at a cost $\mu i_t$, hence the optimal contract is standard risky debt with an auditing threshold $\bar{\omega}$.

Let $n_t$ be the entrepreneur's net worth. He borrows $i_t - n_t$ and promises to repay $\bar{\omega} i_t$ for all realizations $\omega \geq \bar{\omega}$, whereas if $\omega < \bar{\omega}$ he is audited and the creditors receive $(\omega - \mu)i_t$, that is, the return of the project net of the auditing expenses. Thus, the auditing threshold is set by the lenders to break even:

$$\left[\int_0^{\bar{\omega}} (\omega - \mu)\, dG(\omega) + (1 - G(\bar{\omega}))\,\bar{\omega}\right] q_t i_t = i_t - n_t \tag{5}$$

where $q_t$ is the price of capital.

Hence, the entrepreneur with net worth $n_t$ is assumed to set $i_t$ to solve

$$\max_{i_t} \;\int_{\bar{\omega}_t}^{\infty} (\omega - \bar{\omega}_t)\, dG(\omega)\; i_t q_t \quad \text{s.t.} \quad i_t q_t = (i_t - n_t) \cdot \left[\int_0^{\bar{\omega}} (\omega - \mu)\, dG(\omega) + (1 - G(\bar{\omega}))\,\bar{\omega}\right]^{-1} \tag{6}$$

that is, he maximizes his payoff subject to (5). Carlstrom & Fuerst (1997) show that solving (6) yields a linear investment rule that is increasing in the price of capital:

$$i_t = \Psi(q_t)\, n_t \tag{7}$$

with $\Psi'(\cdot) > 0$, so that investment is increasing in both $q_t$ and $n_t$. The entrepreneur's return on internal funds is in turn given by the argument of the maximization in (6), divided by $n_t$ to yield a rate of return, plugging in the optimal rule (7):

$$\rho(q_t) = \int_{\bar{\omega}_t}^{\infty} (\omega - \bar{\omega}_t)\, dG(\omega)\; \Psi(q_t)\, q_t \;\; (> 1) \tag{8}$$

Since investment rules are linear, they can be aggregated into a supply-of-capital curve which is increasing in $q_t$ and in the aggregate net worth of entrepreneurs $N_t$. The model is closed as a general equilibrium one, hence we need to evaluate the demand for capital of households and entrepreneurs. The competitive rate of return on capital from period $t$ to $t+1$ is given by

$$R^k_{t+1} = \frac{A_{t+1}\, f'(K_{t+1}) + q_{t+1}(1-\delta)}{q_t} \tag{9}$$

where $A_{t+1} f'(K_{t+1})$ is the competitive rent and $\delta$ is the depreciation rate. Households and entrepreneurs are infinitely lived and solve their intertemporal utility maximization problems given the discount factor $\beta$ for the former and $\gamma\beta$ for the latter, where both $\beta \in (0,1)$ and $\gamma \in (0,1)$. Hence, entrepreneurs are assumed to be more impatient than households. Households value both leisure and consumption, while entrepreneurs inelastically supply their labor endowment to finance investment; thence the two first-order Euler equations read

$$u'(c_t) = \beta\, \mathbb{E}_t\!\left[R_{t+1}\, u'(c_{t+1})\right] \tag{10a}$$

$$1 = \gamma\beta\, \mathbb{E}_t\!\left[R^k_{t+1}\, \rho(q_{t+1})\right] \tag{10b}$$

Combining the two yields the aggregate capital demand schedule, which is decreasing in $q_t$.

In this model, shocks are persistent. A negative shock hitting entrepreneurial wealth $N_t$ at time $t$ forces smaller investment via (7). In turn, the supply of capital shifts to the left, leading to a lower level of $K_{t+1}$, lower output $Y_{t+1}$ and hence lower $N_{t+1}$. This last decrease in net worth further dampens investment in a self-fulfilling, albeit decreasing, fashion. Notice, however, that the leftward shift in the supply of capital, given the demand fixed by (10a) and (10b), increases its price. This has a dampening effect on the propagation dynamics of the shock because investment is positively related to the price of capital through (7): thus, the model does not feature any amplification dynamics such as those we are to discuss in the following section.
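The persistence mechanism can be sketched with a deliberately stylized toy. This is my own reduction, not the Carlstrom-Fuerst model itself: investment is linear in net worth as in (7), capital accumulates, and next-period net worth is rebuilt as a fixed share of output. All parameter values are illustrative.

```python
# Toy persistence sketch (my own stylized reduction, not Carlstrom & Fuerst
# 1997): i_t = PSI * n_t as in equation (7); a one-off destruction of net
# worth depresses capital for many periods afterwards.
ALPHA, DELTA, PSI, THETA, A = 0.33, 0.1, 0.5, 0.3, 1.0

def simulate(T=40, shock=0.0):
    k, n = 1.0, 0.3                      # arbitrary common starting point
    path = []
    for t in range(T):
        if t == 1:
            n *= (1.0 - shock)           # one-off destruction of net worth
        i = PSI * n                      # linear investment rule, cf. (7)
        k = (1 - DELTA) * k + i          # capital accumulation
        y = A * k ** ALPHA               # aggregate output
        n = THETA * y                    # net worth rebuilt out of output
        path.append(k)
    return path

base = simulate()
hit = simulate(shock=0.3)
gap = [b - h for b, h in zip(base, hit)]  # capital lost relative to baseline
print(round(gap[1], 4), round(gap[2], 4), round(gap[10], 4))
```

Even though the shock lasts one period, the capital gap stays positive for many periods and only decays gradually, since depressed output keeps net worth -and hence investment- below the baseline.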

The first noteworthy contribution in this literature is nonetheless that by Bernanke & Gertler (1989), who discuss an overlapping generations model in which agents live for two periods. Entrepreneurs earn labor income in their first period and invest this internal funding, together with wealth borrowed from households, to create future capital. After production there is full capital depreciation, so the return to creating capital equals its rent, that is, $R^k_t = A_t f'(K_t)$.

In period $t$, the previous period's capital stock $K_t$ and technology $A_t$ determine wage income and hence young entrepreneurs' net worth $N_t$, since entrepreneurs only invest at the end of their youth. Due to costly state verification, the supply curve of future capital depends on the expectation of its return:

$$K_{t+1} = S\!\left(\mathbb{E}[R^k_{t+1}],\, N_t\right) \tag{11}$$

and is increasing in both arguments. The demand only depends on the expected rent and is defined by

$$\mathbb{E}[A_{t+1}]\, f'(K_{t+1}) = \mathbb{E}[R^k_{t+1}] \tag{12}$$

so that the capital stock demanded is decreasing in the expected return, since $f(\cdot)$ is concave.

Here, shocks are persistent too. A negative productivity shock decreases the wage $w_t$ and thus $N_t$ via the labor earnings channel. This implies that investment, and thus future capital, falls via (11). The lower capital reduces output and thus wages in $t+1$, so that $N_{t+1}$ is similarly negatively affected. Analogously to Carlstrom & Fuerst (1997), Bernanke & Gertler (1989) thus built a model in which shocks tend to show persistence. However, since $f(\cdot)$ is concave, a contraction of the supply of capital results in an increase of its price, and thus no amplification arises either.

To conclude, it is interesting to point out that both papers do not solve for the full dynamics of their models, but log-linearize them around the steady state and study the impulse-response functions of the endogenous variables to exogenous shocks. This methodology is pursued also in the models we discuss in the following section.

1.1.3 Shock amplification

We now turn to study the works which, building on those showing that temporary shocks may have persistent effects, underlined the possibility for them to be amplified beyond their original magnitude. In this subsection we deal with two studies which focused on dynamic amplification; the latter of the two also comprises findings already identified in static models, as we will show.

We first discuss the contribution underlying modern DSGEs encompassing financial frictions, i.e. the model proposed by Bernanke, Gertler & Gilchrist (1999). The basic assumption leading to amplification dynamics is that of nonlinear costs in the adjustment of capital, which in turn lead to variations in the marginal $q$. Clearly, shocks feature persistence as in the former model by Bernanke & Gertler (1989): here, however, once the shock to agents' net worth negatively affects capital, the decrease does not lead to an upward shift in the price, precisely because of convex adjustment costs.

The law of motion of capital is

$$K_{t+1} = K_t\left[1 - \delta + \Phi\!\left(\frac{I_t}{K_t}\right)\right] \tag{13}$$

where $I_t$ is investment and $\Phi(\cdot)$ is increasing and concave, with $\Phi(0) = 0$. The function $\Phi(\cdot)$ represents convex adjustment costs, hence we may label the term $\Phi(\cdot) - \delta$ as technological illiquidity due to the presence of adjustment costs. Tobin's marginal $q$ is derived through the firms' first-order conditions, yielding

$$q_t = \Phi'\!\left(\frac{I_t}{K_t}\right)^{-1} \tag{14}$$

In Bernanke, Gertler & Gilchrist (1999) the productive sector is split between firms producing the investment good and firms producing the consumption good. This ensures that adjustment costs do not impact entrepreneurs' decision on how much capital to hold. Any entrepreneur purchasing an amount $k_{t+1}$ of capital at time $t$ borrows $q_t k_{t+1} - n_t$, where $n_t$ is his net worth. Again, the future gross return on capital for each entrepreneur is assumed to be of the form $\omega R^k_{t+1}$, where $R^k_{t+1}$ is the endogenous aggregate return and $\omega$ is random and i.i.d. with c.d.f. $G(\omega)$, such that $\mathbb{E}[\omega] = 1$.

As in Carlstrom & Fuerst (1997), the original imperfection is costly state verification: lenders do not know the realized outcome of a given project unless they spend a fraction $\mu \in (0,1)$ of the amount actually extracted from the entrepreneur. If $R^k_{t+1}$ is deterministic³, then verification occurs if and only if $\omega < \bar{\omega}$, where $\bar{\omega}$ is the value of the return satisfying the lenders' break-even condition:

$$\left[(1-\mu)\int_0^{\bar{\omega}} \omega\, dG(\omega) + (1 - G(\bar{\omega}))\,\bar{\omega}\right] R^k_{t+1}\, q_t k_{t+1} = R_{t+1}\,(q_t k_{t+1} - n_t) \tag{15}$$

where $R_{t+1}$ is the risk-free interest rate.

Entrepreneurs are assumed to maximize next-period net worth, thus they solve

$$\max_{k_{t+1}} \;\mathbb{E}\!\left[\int_{\bar{\omega}}^{\infty} (\omega - \bar{\omega})\, dG(\omega)\; R^k_{t+1}\, q_t k_{t+1}\right] \quad \text{s.t.} \quad \left[(1-\mu)\int_0^{\bar{\omega}} \omega\, dG(\omega) + (1 - G(\bar{\omega}))\,\bar{\omega}\right] R^k_{t+1}\, q_t k_{t+1} = R_{t+1}\,(q_t k_{t+1} - n_t) \tag{16}$$

The resulting optimal policy is again that optimal investment -and thus leverage- is given by a linear rule:

$$q_t k_{t+1} = \Psi\!\left(\frac{\mathbb{E}[R^k_{t+1}]}{R_{t+1}}\right) n_t \tag{17}$$

3 If this were not the case, as noted by Brunnermeier, Eisenbach & Sannikov (2012), the authors appeal to the assumption of entrepreneurs being risk-neutral and households being risk-averse, so that entrepreneurs insure households against aggregate risk.


Therefore, in equilibrium an entrepreneur's investment is linear in his net worth, the coefficient of this relationship being a function of the ratio between the expected future return and the risk-free interest rate. Similarly to their previous model, Bernanke, Gertler & Gilchrist (1999) determine the aggregate return in a general equilibrium environment, that is, they show that

$$\mathbb{E}[R^k_{t+1}] = \mathbb{E}\!\left[\frac{A_{t+1}\, f'(K_{t+1}) + q_{t+1}(1-\delta) + q_{t+1}\Phi\!\left(\frac{I_{t+1}}{K_{t+1}}\right) - \frac{I_{t+1}}{K_{t+1}}}{q_t}\right] \tag{18}$$

which is fundamentally a demand schedule for capital: since $f(\cdot)$ is concave, the capital stock demanded is decreasing in the expected return it is required to yield.

Albeit technically more complex, this model is fundamentally similar to that developed by Bernanke & Gertler (1989). To see that shocks are persistent, simply take the aggregate version of (17): a shock to net worth dampens investment, hence from (13) capital holdings decrease; this yields lower output, and thus lower net worth, feeding back into the persistence mechanism. The difference here lies solely in (14): a fall in investment is now associated with a decrease in the price of capital, which further depresses entrepreneurs' net worth. Therefore, an amplification mechanism stems from the assumption of nonlinear adjustment costs.
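The role of the price channel can be isolated in a toy extension of the stylized persistence loop above. Again, this is my own sketch, not the BGG model: the relative price of capital moves with the investment rate and enters the accumulation of net worth with an elasticity `phi`, and all parameter values are illustrative.

```python
# Toy contrast (my own stylized sketch, not BGG 1999): a procyclical price of
# capital feeding back into net worth amplifies the same one-off shock
# relative to the pure-persistence case (phi = 0).
ALPHA, DELTA, PSI, THETA = 0.33, 0.1, 0.5, 0.3
K_SS, N_SS = 1.831566, 0.366314         # steady state of the toy economy

def simulate(phi, T=30, shock=0.3):
    k, n = K_SS, N_SS
    path = []
    for t in range(T):
        if t == 1:
            n *= (1.0 - shock)          # one-off destruction of net worth
        i = PSI * n                     # investment linear in net worth
        q_ratio = (i / k) / DELTA       # price of capital relative to normal
        k = (1 - DELTA) * k + i
        n = THETA * k ** ALPHA * q_ratio ** phi  # revaluation channel if phi > 0
        path.append(k)
    return path

baseline = simulate(0.0, shock=0.0)     # stays (essentially) at steady state
gap_persist = [a - b for a, b in zip(baseline, simulate(0.0))]
gap_amplify = [a - b for a, b in zip(baseline, simulate(0.5))]
print(round(gap_persist[6], 4), round(gap_amplify[6], 4))
```

With `phi = 0` the shock is only persistent; with `phi > 0` the fall in investment drags down the price of capital and, through it, net worth again, so the capital loss at every horizon is strictly larger -a crude analogue of the amplification delivered by (14) and (17).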

Despite their influence, Bernanke, Gertler & Gilchrist (1999) were not the first to set out a model encompassing a dynamic amplification mechanism. Kiyotaki & Moore (1997) set out an intuitive framework in which they were able to disentangle static amplification dynamics, which had already been noted by Shleifer & Vishny (1992) in the corporate finance literature, from dynamic, more relevant ones.

Kiyotaki & Moore (1997) depart from a single aggregate production function in that they allow a mild degree of heterogeneity: the economy is populated by farmers and gatherers. Both enjoy consumption of the same good and are risk-neutral, but the former are more impatient than the latter. They also differ with respect to the technology they are endowed with, since farmers produce according to a constant-returns technology, whereas gatherers face decreasing returns.

Beginning with farmers, their production function is

$$y_{t+1} = F(k_t) = (a + c)\, k_t \tag{19}$$

where $a k_t$ is tradable output, whereas $c k_t$ cannot be invested but can only be consumed by the farmer. The model assumes $c > (1/\beta - 1)\, a$, where $\beta$ is the farmers' discount factor, to ensure that farmers always use up their tradable output for investment purposes.

Kiyotaki & Moore (1997) assume that each farmer's technology is idiosyncratic, in the sense that only the farmer who initiates production at time $t$ has the skill to produce the tradable output at $t+1$. Also, following Hart & Moore (1994), it is assumed that farmers can withdraw their labor and cannot precommit to work, hence we work in a world of incomplete contracts. These two assumptions provide the rationale for imposing a constraint upon farmers' borrowing, that is, they set the stage for moral hazard requiring debt to be collateralized. Therefore, gatherers never allow the size of the debt to exceed the value of the land serving as collateral to back the loan, that is

$$R\, b_t \leq q_{t+1}\, k_t \tag{20}$$

where $b_t$ is the size of the loan and $R$ is the risk-free interest rate. Hence, farmers solve the following optimization problem:

$$\max_{c_t, b_t} \;\mathbb{E}_t\!\left[\sum_{s=0}^{\infty} \beta^s c_{t+s}\right] \quad \text{s.t.} \quad q_t(k_t - k_{t-1}) + R b_{t-1} + c_t - c k_{t-1} = a k_{t-1} + b_t \tag{21}$$

where the left-hand side of the constraint collects the outflows -land purchases, repayment of maturing debt and consumption (net of the non-tradable output $c k_{t-1}$)- while the right-hand side collects the inflows due to past production and current borrowing.

Turning to gatherers, they are endowed with a technology given by

$$y'_{t+1} = G(k'_t) \tag{22}$$

where $G'(\cdot) > 0$ and $G''(\cdot) < 0$. Gatherers do not feature any idiosyncratic skill in production, and they produce tradable output only. Hence, gatherers are not credit constrained, and each of them solves

$$\max_{c'_t, b'_t} \;\mathbb{E}_t\!\left[\sum_{s=0}^{\infty} \beta'^s c'_{t+s}\right] \quad \text{s.t.} \quad q_t(k'_t - k'_{t-1}) + R b'_{t-1} + c'_t = G(k'_{t-1}) + b'_t \tag{23}$$

where typically one will have $b'_t$ and $b'_{t-1}$ negative, reflecting the fact that gatherers lend to farmers.

An equilibrium is an allocation of land and prices $\{q_t, c_t, c'_t, b_t, b'_t, k_t, k'_t\}$ such that farmers and gatherers solve their constrained utility maximization problems (21) and (23), and the markets for land, the tradable good and debt clear. The authors study the behavior of the model around its steady state and show that, for each $t$, farmers borrow up to their constraint and invest all tradable and borrowed output, while limiting consumption to $c_t = c k_{t-1}$. The demand for land is thus given by

$$k_t = \frac{1}{q_t - \frac{1}{R}\, q_{t+1}}\left[(a + q_t)\, k_{t-1} - R\, b_{t-1}\right] \tag{24}$$

where the term in square brackets is the farmer's net worth, given by his tradable output and the market value of his land, net of the face value of the maturing debt. On the other hand, gatherers are not credit constrained, and thus the equilibrium interest rate equals the inverse of their discount factor, i.e. $R = 1/\beta'$. Gatherers' demand for land is basically a first-order condition: they wish to hold land as long as its return is weakly greater than the risk-free interest rate:

$$R = \frac{G'(k'_t) + q_{t+1}}{q_t} \quad\Longrightarrow\quad q_t - \frac{1}{R}\, q_{t+1} = \frac{1}{R}\, G'(k'_t) \tag{25}$$

Now let $\alpha \in (0,1)$ be the fraction of farmers, so that $1-\alpha$ is that of gatherers. Clearing of the land market requires $\alpha k_t + (1-\alpha) k'_t = \bar{k}$, where $\bar{k}$ is the (fixed) amount of land. Thus, exploiting the gatherers' first-order condition (25), one can link the variation in the price of land to its demand:

$$q_t - \frac{1}{R}\, q_{t+1} = \frac{1}{R}\, G'(k'_t) = \frac{1}{R}\, G'\!\left(\frac{\bar{k} - \alpha k_t}{1-\alpha}\right) \equiv M(k_t) \tag{26}$$


so that, iterating forward and assuming the transversality condition $\lim_{s\to\infty} \mathbb{E}_t[R^{-s} q_{t+s}] = 0$, one has

$$q_t = \sum_{s=0}^{\infty} \frac{1}{R^s}\, M(k_{t+s}) \tag{27}$$

Let $q^*$ be the steady-state price of land. In the steady state farmers always borrow up to the limit and roll over their debt, using output $a$ to pay the interest due, implying $q^* - \frac{1}{R}\, q^* = a$. Hence, from (26) one has

$$\frac{1}{R}\, G'\!\left(\frac{\bar{k} - \alpha k^*}{1-\alpha}\right) = a \tag{28}$$

Hence the equilibrium is inefficient: at the margin, land earns a return equal to the tradable output $a$, which falls short of the farmers' full marginal product $a + c$. We can now derive the effect of an unanticipated shock. Assume that until $t-1$ the economy is at its steady state, and at the beginning of period $t$ a shock reduces the production of all agents by a factor $1-\Delta$, $\Delta \in (0,1)$. Let $\xi$ be such that

$$\frac{1}{\xi} = \left.\frac{d \log M(k_t)}{d \log k_t}\right|_{k_t = k^*} = \frac{M'(k^*)\, k^*}{M(k^*)} \tag{29}$$

that is, $\xi$ is the gatherers' elasticity of land supply with respect to the opportunity cost, at the steady state. Then, denoting by $\hat{x}_t$ the deviation of the generic variable $x_t$ from its steady-state value, that is $\hat{x}_t = x_t - x^*$, combining aggregate demand from (24) with (25) and log-linearizing around the steady state one has

$$\hat{k}_t = -\frac{\xi}{1+\xi}\left(\Delta + \frac{R}{R-1}\,\hat{q}_t\right) \tag{30}$$

from which we see that the reduction in land holdings is not solely ascribed to the shock $\Delta$. Indeed, farmers experience capital losses due to the fall in the price of the land they are holding, an effect further enhanced by the fact that farmers are leveraged. From (30) we can further argue that forward changes in land holdings are given by $\hat{k}_{t+s} = \frac{\xi}{1+\xi}\,\hat{k}_{t+s-1}$, so that once more the shock is persistent,

so as to underline that the literature on amplification encompasses and enriches that on persistence. Log-linearizing the equilibrium price expression (27) around $(k^*, q^*)$, we can further show that a dynamic amplification mechanism arises, for the present change in the price is fed by future changes:

$$\hat{q}_t = \frac{1}{\xi}\,\frac{R-1}{R} \sum_{s=0}^{\infty} \frac{1}{R^s}\, \hat{k}_{t+s} \tag{31}$$

Hence it is clear that the present variation in the price of land encompasses future changes and thus amplifies the shock. Considering both (30) and (31), one can express -solving the summation- the changes in capital holdings and in the price of land as

$$\hat{k}_t = -\left[1 + \frac{1}{(\xi+1)(R-1)}\right]\Delta, \qquad \hat{q}_t = -\frac{1}{\xi}\,\Delta \tag{32}$$


so that it is clear that one can disentangle the static effect the shock has on the variables at hand from the dynamic amplification embodied by the forward changes encompassed in present equilibrium pricing. Also, Kiyotaki & Moore (1997) show the dynamic effect to outweigh the static one.
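Plugging illustrative numbers into the closed-form expressions in (32) shows how large the overall effect can be relative to the shock itself. The parameter values below are mine, chosen only for illustration.

```python
# Worked numbers for equation (32) (illustrative parameter values of my own):
# the total fall in land holdings is the direct shock Delta times a multiplier
# that blows up as R approaches 1 and as xi falls.
R, XI, DELTA = 1.05, 1.0, 0.01

multiplier = 1 + 1 / ((XI + 1) * (R - 1))  # dynamic amplification, eq. (32)
k_hat = -multiplier * DELTA                # total change in land holdings
q_hat = -(1 / XI) * DELTA                  # change in the price of land

print(round(multiplier, 6))  # a 1% shock cuts land holdings by ~11%
```

The multiplier grows without bound as the interest rate approaches 1, which is why even small, purely transitory shocks can produce large equilibrium movements in land holdings and prices in this framework.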

Kocherlakota (2000) pinpoints one difficulty faced by both Bernanke, Gertler & Gilchrist (1999) and Kiyotaki & Moore (1997): the effects of shocks are completely symmetric, for a positive shock is just as amplified and persistent as a negative one, albeit with opposite consequences. Kocherlakota (2000) proposes a model in which entrepreneurs have an optimal scale of production, thus entailing that shocks have a different impact depending on their sign: a negative shock is indeed amplified, whereas a positive shock does not trigger such dynamics because entrepreneurs benefit from the loosened credit constraints by means of increased consumption. Still, the model further suggests that financial frictions are unlikely to be the cause of business cycles, because agents insure themselves and hold liquidity buffers that allow them to face and absorb even large shocks. In the following section we discuss one recent contribution which addresses this concern.

1.1.4 The continuous-time methodology

From an ABM perspective, the original literature dealing with financial frictions in macroeconomics is more satisfactory than contemporary DSGEs, inasmuch as one fundamental feature of most of the models hereby discussed is that they are founded upon the heterogeneity agents are characterized by. Both Carlstrom & Fuerst (1997) and Kiyotaki & Moore (1997) feature a mild degree of heterogeneity across agents, but what is perhaps more relevant is that this very same heterogeneity underlies the fundamental dynamics of the models.

This notwithstanding, one shortcoming is that all these contributions do not attempt to push the analysis beyond a log-linearization of the models around their -possibly not unique- steady state. In this section we discuss an array of recent contributions aimed at overcoming this limitation.

If one is willing to buy the assumption of continuous-time modeling, then the work by Brunnermeier & Sannikov (2014) puts forward an approach, which the authors label the 'continuous-time methodology', that allows them to study the full dynamics of the model, instead of restricting the study to a log-linearization around the steady state. In this section we discuss a slightly extended version of the aforementioned paper, as sketched by Brunnermeier & Sannikov (2016), to show one possibly viable alternative to the ABM approach we champion. Indeed, the model is based upon a solid -albeit mild- heterogeneity across agents, and explicitly attempts to study the full dynamics. As we will show, the model, unlike ABMs, operates in equilibrium, but studies the transitions between different equilibria as well as the dynamics stemming from such transitions. According to the authors, the model delivers the following takeaway messages:

• The critique of Kocherlakota (2000) is avoided, for the model features endogenous risk; hence financial frictions can per se be the explanatory variable of business cycles;

• Equilibrium dynamics are characterized by a stable steady state, since experts -one of the two categories of agents to be discussed- are able to absorb exogenous shocks by holding liquidity buffers. However, a sequence of negative shocks erodes this capability and drives the economy into a crisis regime in which loss spirals and amplification effects are likely to wipe out the experts' positions;

• High volatility during crisis times may put the system in an extremely depressed region which is likely to persist;

• Crises feature a surge in endogenous risk which makes assets more correlated;

• Since risk-taking is endogenous, a volatility paradox arises: if aggregate volatility decreases, experts increase their leverage, thus the economy is more prone to crises, i.e. is less stable;

• Financial innovations, such as derivatives hedging, make the economy less stable, as in Brock, Hommes & Wagener (2009).

We now turn to a more quantitative discussion of the model proposed by Brunnermeier & Sannikov (2014) and extended in section 3 of Brunnermeier & Sannikov (2016). There are two categories of agents: experts and households, the variables of the latter being denoted by an underline. Both can hold physical capital, but the former are more productive than the latter, so that $a > \underline{a}$, where $a$ is the experts' productivity of capital. Experts can finance themselves by issuing equity or through borrowing, which is provided by households at the risk-free rate $r_t$. Still, experts cannot be fully equity funded, hence they must retain at least a fraction $\chi \in (0,1]$ of their equity. Also, experts have a higher discount rate than households, hence $\rho > \underline{\rho}$.

Net of investment, physical capital $k_t$ generates consumption output at rate⁴

$$(a - \iota_t)\,k_t\,dt \qquad (33)$$

where $\iota_t$ is the reinvestment rate per unit of capital; the production technology thus exhibits constant returns. Physical capital evolves according to

$$\frac{dk_t}{k_t} = \left(\Phi(\iota_t) - \delta\right)dt + \sigma\,dZ_t \qquad (34)$$

where $\Phi(\cdot)$ is an investment function with adjustment costs such that $\Phi(0)=0$, $\Phi'(\cdot)>0$ and $\Phi''(\cdot)\le 0$, $\delta$ is the depreciation rate of capital, $Z_t$ is a standard Brownian motion, and $\sigma$ is the volatility of capital. Notice that the concavity of $\Phi(\cdot)$ can be interpreted as technological illiquidity.
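For intuition, the capital dynamics (34) can be simulated with a simple Euler-Maruyama scheme. The concave investment function below, $\Phi(\iota) = \log(1+\kappa\iota)/\kappa$, is only an illustrative assumption satisfying the stated properties ($\Phi(0)=0$, $\Phi'>0$, $\Phi''\le 0$), and all parameter values are hypothetical:

```python
import math
import random

def phi(iota, kappa=10.0):
    """Illustrative concave investment function: Phi(0)=0, Phi'>0, Phi''<=0."""
    return math.log(1.0 + kappa * iota) / kappa

def simulate_capital(k0, iota, delta, sigma, T=1.0, steps=1_000, seed=0):
    """Euler-Maruyama discretization of dk/k = (Phi(iota) - delta) dt + sigma dZ."""
    rng = random.Random(seed)
    dt = T / steps
    k = k0
    for _ in range(steps):
        dz = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment over dt
        k *= 1.0 + (phi(iota) - delta) * dt + sigma * dz
    return k

# With sigma = 0 the path is deterministic and should approach
# k0 * exp((Phi(iota) - delta) * T) as the step size shrinks.
k_T = simulate_capital(k0=1.0, iota=0.05, delta=0.03, sigma=0.0)
print(k_T, math.exp(phi(0.05) - 0.03))
```

Setting $\sigma > 0$ and varying the seed produces the stochastic paths over which the equilibrium objects below are defined.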

The equilibrium in this setting is defined as a map from histories of macro shocks $\{Z_s,\ s \le t\}$ to the price of capital $q_t$, the risk-free rate $r_t$, and the consumption and asset holdings of all agents such that (i) agents maximize their utility functions and (ii) markets clear. The authors put forward a four-step methodology to reach such equilibrium, which we now discuss.

Postulate Equilibrium Processes

As a first step, we postulate the following three equilibrium processes for the price of physical capital $q_t$ and the stochastic discount factors $\xi_t$ and $\underline{\xi}_t$ of experts and households, respectively:

$$\frac{dq_t}{q_t} = \mu^q_t\,dt + \sigma^q_t\,dZ_t \qquad (35a)$$

$$\frac{d\xi_t}{\xi_t} = -r_t\,dt - \varsigma_t\,dZ_t \qquad (35b)$$

$$\frac{d\underline{\xi}_t}{\underline{\xi}_t} = -\underline{r}_t\,dt - \underline{\varsigma}_t\,dZ_t \qquad (35c)$$

where $r_t$ is the shadow risk-free rate and $\varsigma_t$ is the price of the risk $dZ_t$.

We now sketch two preliminary results which will be useful below. First, one can express the return on the risky asset by noting that an investment in capital generates output $(a-\iota_t)k_t\,dt$, i.e. a dividend yield of $\frac{a-\iota_t}{q_t}\,dt$, and a capital gains rate of $\frac{d(k_t q_t)}{k_t q_t}$. This latter term can be evaluated by means of Itô's product rule given (35a) and (34):⁵

$$\frac{d(k_t q_t)}{k_t q_t} = \left(\Phi(\iota_t) - \delta + \mu^q_t + \sigma\sigma^q_t\right)dt + \left(\sigma + \sigma^q_t\right)dZ_t \qquad (36)$$

Thus, we can express the rate of return on capital for experts as

$$dr^k_t = \frac{a-\iota_t}{q_t}\,dt + \frac{d(k_t q_t)}{k_t q_t} = \left(\frac{a-\iota_t}{q_t} + \Phi(\iota_t) - \delta + \mu^q_t + \sigma\sigma^q_t\right)dt + \left(\sigma + \sigma^q_t\right)dZ_t \qquad (37)$$

and symmetrically for households

$$d\underline{r}^k_t = \frac{\underline{a}-\iota_t}{q_t}\,dt + \frac{d(k_t q_t)}{k_t q_t} = \left(\frac{\underline{a}-\iota_t}{q_t} + \Phi(\iota_t) - \delta + \mu^q_t + \sigma\sigma^q_t\right)dt + \left(\sigma + \sigma^q_t\right)dZ_t \qquad (38)$$

The second result we hereby prove concerns the stochastic discount factors (SDFs). Let $u(c_t)$ be the utility of an agent with consumption $c_t$; for optimal consumption it must be that $\theta_t = u'(c_t)$, where $\theta_t$ is the agent's marginal utility of wealth. Further, let $\rho$ be the agent's discount rate; then his SDF is given by $\xi_t = e^{-\rho t}\theta_t$. Thus one can write

$$\frac{d\xi_t}{\xi_t} = -r_t\,dt - \varsigma_t\,dZ_t \qquad (39)$$

⁵ Itô's product rule states that given two processes $X_t$ and $Y_t$ such that $\frac{dX_t}{X_t} = \mu^X_t\,dt + \sigma^X_t\,dZ_t$ and $\frac{dY_t}{Y_t} = \mu^Y_t\,dt + \sigma^Y_t\,dZ_t$, the product $X_t Y_t$ follows the law of motion $$\frac{d(X_t Y_t)}{X_t Y_t} = \left(\mu^X_t + \mu^Y_t + \sigma^X_t \sigma^Y_t\right)dt + \left(\sigma^X_t + \sigma^Y_t\right)dZ_t$$

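As a sanity check on the product rule used to obtain (36), one can verify by Monte Carlo that when $X$ and $Y$ are driven by the same Brownian motion, $\mathbb{E}[X_T Y_T]$ grows at rate $\mu^X + \mu^Y + \sigma^X\sigma^Y$. The parameter values below are arbitrary illustrations:

```python
import math
import random

def mc_product_mean(mu_x, mu_y, s_x, s_y, T=1.0, n_paths=200_000, seed=42):
    """Monte Carlo estimate of E[X_T * Y_T] for two geometric processes
    driven by the SAME Brownian shock, drawn from their exact lognormal laws."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        w = rng.gauss(0.0, math.sqrt(T))  # common Brownian increment Z_T
        x = math.exp((mu_x - 0.5 * s_x ** 2) * T + s_x * w)
        y = math.exp((mu_y - 0.5 * s_y ** 2) * T + s_y * w)
        total += x * y
    return total / n_paths

estimate = mc_product_mean(mu_x=0.05, mu_y=0.03, s_x=0.20, s_y=0.15)
# The product rule predicts a drift of mu_x + mu_y + s_x * s_y for X * Y,
# hence E[X_1 * Y_1] = exp(mu_x + mu_y + s_x * s_y) here.
predicted = math.exp(0.05 + 0.03 + 0.20 * 0.15)
print(estimate, predicted)
```

The cross term $\sigma^X\sigma^Y$ is exactly the $\sigma\sigma^q_t$ correction appearing in the drift of (36).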

Furthermore, for any asset $A$ whose return follows an Itô process,

$$dr^A_t = \mu^A_t\,dt + \sigma^A_t\,dZ_t \qquad (40)$$

the following must hold:

$$\mu^A_t = r_t + \varsigma_t\,\sigma^A_t \qquad (41)$$

We shall show that both (39) and (41) directly follow from the portfolio optimization problem of the agents, which reads

$$\max_{c_t,\,\{x^A_t\}} \;\mathbb{E}\left[\int_0^\infty e^{-\rho t} u(c_t)\,dt\right] \quad \text{s.t.} \quad dn_t = n_t\left(r_t\,dt + \sum_A x^A_t\left((\mu^A_t - r_t)\,dt + \sigma^A_t\,dZ_t\right)\right) - c_t\,dt \qquad (42)$$

where $x^A_t$ is the portfolio weight on asset $A$. By means of the (stochastic) maximum principle, we can derive the first-order conditions of (42) from the Hamiltonian associated to the problem, which reads

$$H(c, x^A, \xi_t) = e^{-\rho t} u(c) + \xi_t\left[\left(r_t + \sum_A x^A_t(\mu^A_t - r_t)\right)n_t - c\right] - \varsigma_t\,\xi_t \sum_A x^A_t\,\sigma^A_t\,n_t \qquad (43)$$

where $\xi_t$ is the multiplier on $n_t$, which equals the marginal utility of wealth and whose volatility is denoted by $-\varsigma_t\,\xi_t$. First-order conditions on (43) are straightforward and

yield:

$$\frac{\partial H}{\partial c} = 0 \;\Leftrightarrow\; e^{-\rho t} u'(c) = \xi_t \qquad (44a)$$

$$\frac{\partial H}{\partial x^A} = 0 \;\Leftrightarrow\; \xi_t(\mu^A_t - r_t) - \varsigma_t\,\xi_t\,\sigma^A_t = 0 \qquad (44b)$$

$$-\frac{\partial H}{\partial n} = -\xi_t\left(r_t + \sum_A x^A(\mu^A_t - r_t)\right) + \varsigma_t\,\xi_t\sum_A x^A \sigma^A_t = -\xi_t\left(r_t + \sum_A x^A\,\varsigma_t\,\sigma^A_t\right) + \varsigma_t\,\xi_t\sum_A x^A \sigma^A_t = -\xi_t\,r_t \qquad (44c)$$

where (44a) is a restatement of the definition of the SDF, (44b) yields (41), and finally from (44c) it follows that the drift of $\xi_t$ is $-\xi_t r_t$, so that the SDF process follows the law of motion (39).
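The pricing condition (41) is equivalent to saying that the SDF-deflated value of any asset has zero drift, i.e. is a martingale. This can be checked with one line of arithmetic via the product rule of footnote 5; the numerical values below are purely illustrative:

```python
def deflated_drift(mu_a, sigma_a, r, varsigma):
    """Drift of xi_t * V_t by Ito's product rule, where dV/V has drift mu_a
    and volatility sigma_a, and dxi/xi = -r dt - varsigma dZ as in eq. (39)."""
    return (-r) + mu_a + (-varsigma) * sigma_a

r, varsigma, sigma_a = 0.02, 0.30, 0.25
mu_a = r + varsigma * sigma_a          # eq. (41): required expected return
drift = deflated_drift(mu_a, sigma_a, r, varsigma)
print(mu_a, drift)                     # the deflated drift vanishes when (41) holds
```

Any other value of `mu_a` leaves a nonzero deflated drift, i.e. an arbitrage against the agent's own first-order conditions.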

Equilibrium conditions

Since both experts and households can trade the risk-free asset, the drift of both SDFs has to be the same, hence $r_t = \underline{r}_t$. Also, rewriting (37) and recalling that $\chi_t$ is defined as the fraction of equity held by experts, one has

$$\frac{\frac{a-\iota_t}{q_t} + \Phi(\iota_t) - \delta + \mu^q_t + \sigma\sigma^q_t - r_t}{\sigma + \sigma^q_t} = \chi_t\,\varsigma_t + (1-\chi_t)\,\underline{\varsigma}_t \qquad (45)$$
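Given the other equilibrium objects, (45) pins down the price of capital $q_t$: the left-hand side is decreasing in $q_t$ (a higher price lowers the dividend yield), so a bisection solver suffices. A minimal sketch, with hypothetical, uncalibrated parameter values:

```python
def sharpe_capital(q, a=0.11, iota=0.03, phi_iota=0.03, delta=0.05,
                   mu_q=0.0, sigma=0.10, sigma_q=0.05, r=0.02):
    """Left-hand side of (45): excess return on capital per unit of risk."""
    return ((a - iota) / q + phi_iota - delta + mu_q
            + sigma * sigma_q - r) / (sigma + sigma_q)

def solve_q(rhs, lo=0.1, hi=10.0, tol=1e-10):
    """Bisection on q, exploiting that sharpe_capital is decreasing in q."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if sharpe_capital(mid) > rhs:
            lo = mid   # Sharpe ratio too high: price must rise
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

chi, varsigma, varsigma_h = 0.5, 0.40, 0.20    # varsigma_h: households' price of risk
rhs = chi * varsigma + (1 - chi) * varsigma_h  # right-hand side of (45)
q_star = solve_q(rhs)
print(q_star, sharpe_capital(q_star) - rhs)
```

In the full model this fixed point is of course joint with $\mu^q_t$, $\sigma^q_t$ and $\chi_t$; the sketch only isolates the role of (45) as a pricing condition.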
