
Aggregate Fluctuations in a Sectorial Economy: a network analysis

Academic year: 2021


University of Pisa

Sant’Anna School of Advanced Studies

Department of Economics and Management

MSc in Economics

Aggregate Fluctuations in a Sectorial Economy

A Network Approach

Author:

Matteo Iacopini

Supervisor: Prof. Davide Fiaschi


Abstract

Business cycle analysis has always been at the core of macroeconomic thinking and empirics; nevertheless, the recent crisis has highlighted the lack of economic tools able both to describe and to forecast such widespread phenomena. It has revealed the need to understand the degree of interconnection between agents within and across countries and its implications for national economies. In light of this necessity many scholars have tried to address the issue of interconnection, but their focus was limited to the financial sector. In this thesis I face the more general topic of the determination of aggregate output volatility: drawing from the seminal work of Acemoglu et al. (2012), who stated an important mathematical result in terms of first and higher order effects (namely, cascades) of idiosyncratic shocks, and from the theory of networks, I develop a microfounded model of a sectorial economy with heterogeneous sectors and consumers, with the aim of deriving the aggregate effects, in terms of output volatility, of small idiosyncratic shocks. The analysis is undertaken at the meso level, where connections between sectors are defined by means of an Input/Output structure, which in turn allows for the characterisation of the network of the economy. The model is then estimated by means of parametric and nonparametric methods applied to both European and U.S. Input/Output tables, firstly to capture the implications of different economic structures for shock transmission; then the hypothesis of an idiosyncratic origin of aggregate fluctuations is tested by means of comparative statics analysis. Finally, the model is estimated and simulated in order to examine an important issue: the medium run aggregate effects of idiosyncratic shocks on the topology of the network.


Contents

1 Introduction

2 Literature review
2.1 The business cycle
2.2 Theories of aggregate fluctuations
2.2.1 Keynesian
2.2.2 New Classical
2.2.3 New Keynesian
2.3 Theories of idiosyncratic and sectoral origins of fluctuations
2.3.1 Lucas' islands model
2.3.2 Sectoral and granular theories
2.3.3 Evolutionary theory
2.4 Network theories of fluctuations
2.5 Summary and remarks

3 Theoretical framework
3.1 Mesoeconomics
3.2 Network Theory
3.2.1 Basic concepts and metrics
3.2.2 Networks in economic literature
3.2.3 Limits and areas of research

4 Theoretical Model of Sectorial Economy
4.1 The Model
4.1.1 Households
4.1.2 Firms
4.1.3 The Equilibrium of the Model
4.1.4 Fluctuations in Income and Prices
4.1.5 Extensions of the Basic Model

5 I/O Network Analysis
5.1 Network Statistics
5.1.1 Average vertex degree
5.1.2 Network clustering coefficient
5.1.3 Network assortativity by degree
5.1.4 Average vertex Katz centrality
5.1.5 'Total' network centrality
5.2 Model's α
5.2.1 Vertex degree countercumulative distribution
5.2.2 Vertex clustering countercumulative distribution
5.3 Summary of the Results

6 Price Analysis
6.1 Data and Procedure
6.2 Estimation of Sectorial Shock Standard Error
6.3 Possible extensions

7 Results for Output Standard Error
7.1 Sectorial Structural Component
7.2 Total Standard Error

8 Conclusions

Appendices
A Statistical appendix
A.1 General overview
A.2 United States
A.3 EURO Area E17
A.4 Italy
A.5 Final data
A.6 Prices
A.6.1 PPI
A.6.2 CPI
B Input/Output tables


List of Figures

2.1 Complete network
2.2 Star network

3.1 Directed, weighted network
3.2 Undirected, binary network

5.1 US Economy network in 2007. Threshold: 0.05. Source: author's elaborations of US Symmetric Input/Output Tables (SIOTs).
5.2 US Economy network in 2008. Threshold: 0.05. Source: author's elaborations of US SIOTs.
5.3 EU Economy network in 2007. Threshold: 0.05. Source: author's elaborations of EU SIOTs.
5.4 EU Economy network in 2008. Threshold: 0.05. Source: author's elaborations of EU SIOTs.
5.5 IT Economy network in 2007. Threshold: 0.05. Source: author's elaborations of IT SIOTs.
5.6 IT Economy network in 2008. Threshold: 0.05. Source: author's elaborations of IT SIOTs.
5.7 First figure
5.8 Second figure
5.9 First figure
5.10 Second figure
5.11 First figure
5.12 Second figure
5.13 First figure
5.14 Second figure
5.15 First figure
5.16 Second figure

6.1 Time series chart of sectorial real price indices in log terms in the US. Start date: December 2003. Reference date: December 2003. Source: author's elaborations of price data described in the Statistical appendix.
6.2 Time series chart of sectorial real price indices in log terms in the Euro Area extended to 17 countries (E17). Start date: January 2002. Reference date: December 2003. Source: author's elaborations of price data described in the Statistical appendix.
6.3 Estimated sectorial shock standard errors for the US in 2004. Source: author's elaborations of I/O and price data described in the Statistical appendix.
6.4 Estimated sectorial shock standard errors for E17 in 2004. Source: author's elaborations of I/O and price data described in the Statistical appendix.
6.5 Estimated sectorial shock standard errors for the US in 2004, with standard error bands from bootstrap. Source: author's elaborations of I/O and price data described in the Statistical appendix.
6.6 Estimated sectorial shock standard errors for E17 in 2004, with standard error bands from bootstrap. Source: author's elaborations of I/O and price data described in the Statistical appendix.
6.7 Estimated sectorial shock variances for the US, 2004-2007. Source: author's estimates.
6.8 Estimated sectorial shock variances for the US, 2008-2010. Source: author's estimates.
6.9 Estimated sectorial shock standard errors for E17, 2002-2007. Source: author's estimates.
6.10 Estimated sectorial shock standard errors for E17, 2008-2009. Source: author's estimates.

7.1 Time series, 1997-2007, of the (square root of the) US structural component of sectorial output variance. Source: author's computations.
7.2 Time series, 2008-2010, of the (square root of the) US structural component of sectorial output variance. Source: author's computations.
7.3 Time series, 2000-2007, of the (square root of the) EU structural component of sectorial output variance. Source: author's computations.
7.4 Time series, 2008-2009, of the (square root of the) EU structural component of sectorial output variance. Source: author's computations.
7.5 Time series, 1997-2007, of the (square root of the) US random shock component of sectorial output variance. Source: author's computations.
7.6 Time series, 2008-2010, of the (square root of the) US random shock component of sectorial output variance. Source: author's computations.
7.7 Time series, 2000-2007, of the (square root of the) EU random shock component of sectorial output variance. Source: author's computations.
7.8 Time series, 2008-2009, of the (square root of the) EU random shock component of sectorial output variance. Source: author's computations.
7.9 Time series, 1997-2010, of US and EU total output standard error. Source: author's computations.
7.10 Time series, 1997-2010, of US output standard error and network average statistics. Source: author's computations.


List of Tables

2.1 Principal streams of research about aggregate fluctuations in the period 1940-2013. Source: author's elaborations and evaluations.

3.1 Taxonomy and brief characterization of the main areas of economic analysis. Source: author's elaborations.
3.2 Selection of network statistics for directed graphs. Source: cited sources.

5.1 Average vertex degree, by country and year
5.2 Network clustering coefficient, by country and year
5.3 Network assortativity by degree, by country and year
5.4 Average vertex Katz centrality, by country and year
5.5 'Total' network centrality, by country and year
5.6 Estimated alpha of power law fit for the total vertex degree countercumulative distribution, by country and year
5.7 Estimated alpha of power law fit for the vertex clustering coefficient countercumulative distribution, by country and year

7.1 US 1997-2010 regressions of standard error against network statistics, at the sectorial level. Values correspond to coefficients for the linear regression and to edf for the GAM. Significance: *** p < 0.001, ** p < 0.01, * p < 0.05. Source: author's computations.
7.2 E17 2000-2009 regressions of standard error against network statistics, at the sectorial level. Values correspond to coefficients for the linear regression and to edf for the GAM. Significance: *** p < 0.001, ** p < 0.01, * p < 0.05. Source: author's computations.

A.1 Concordance tables between all the standards cited: Standard Industrial Classification (SIC), North American Industry Classification System (NAICS) and Nomenclature statistique des activités économiques dans la Communauté européenne (NACE). The asterisk marks cases for which the conversion table in the opposite direction is also available (at the same source). Source: the websites indicated in the table.
A.2 Exchange rates used to convert US dollars into euro, classified by year and rate. The asterisk identifies tables converted using an exchange rate at a different date (years before the introduction of the European Currency Unit (ECU)). Source: author's elaborations on the basis of data from http://research.stlouisfed.org/fred2/series/EXUSEC?cid=280 for $/ECU and http://www.ecb.europa.eu/stats/exchange/eurofxref/html/index.en.html for $/€.
A.3 US basic data for the construction of I/O tables, classified by year. Bold years identify years in which the US classification change has had a significant impact on the number of sectors. Source: author's elaboration of cited sources.
A.4 EURO Area E17 basic data for the construction of I/O tables, classified by year. Source: author's elaboration of the cited sources.
A.5 Italian basic data for the construction of I/O tables, classified by year. Source: author's elaboration of the cited sources.
A.6 Summary of the number of 2-digit sectors used in the I/O analysis, grouped per year and country. Max sectors stands for the dimensions of the Supply and Use Tables (SUT) as provided by the original data sources. Bold years identify years in which the US classification change has had a significant impact on the number of sectors. Source: author's elaboration of data from the previously cited sources: http://epp.eurostat.ec.europa.eu/portal/page/portal/esa95_supply_use_input_tables/introduction for Italy and the EU, http://www.bea.gov/industry/index.htm for the US.
A.7 Principal problems in the comparison of time series and cross-country data in the sample. Source: author's conclusions from the analysis of national and cross-country I/O data illustrated above.
A.8 Sectorial price data availability, in number of sectors, classified by country and reference period. Share identifies the ratio of available sectors over the total of the corresponding period. Source: author's elaborations of Thomson Reuters Datastream data.
A.9 List of the most relevant difficulties faced in the Producer Price Index (PPI) preliminary data manipulation phase. Source: author's elaborations of Thomson Reuters Datastream data.
A.10 Price indices per sector (total = 112). For 1997-2007 the PPIs that match the I/O sectors are reported, while for the other two periods the PPIs that match those periods but are not present in 1997-2007 are reported. Underlined entries identify those codes whose time series has been added to 1997-2007 since an original, precise correspondence was missing, but which were present in the other two periods. Source: author's elaborations of Thomson Reuters Datastream data.
A.11 Correspondence between SIOT industries and sectorial PPI for the US, classified by period. Notes: p stands for proxy, a for aggregation, d for choices made on the basis of available data; [part I] and [part II] identify industries whose corresponding prices have been split into two complementary prices. Source: author's elaborations of Thomson Reuters Datastream data.
A.12 US price sectors corresponding to the I/O sectors used for the estimation purposes in Chapter 5. Source: author's elaborations.
A.13 E17 price sectors corresponding to the I/O sectors used for the estimation purposes in Chapter 5. Source: author's elaborations.
A.14 Correspondence table between PPI and CPI for the computation of the "Real PPI" for the US. The code has been made by the author and is not related to any classification of the supplied data. Source: author's elaborations of Thomson Reuters Datastream data.
A.15 Correspondence table between PPI and CPI for the computation of the "Real PPI". CPI indices refer to EURO Area 18, while PPI indices refer to EURO Area 17. The code has been made by the author and is not related to any classification of the supplied data. Source: author's elaborations of Thomson Reuters Datastream data.

B.1 Supply table for an economy subdivided into three industries (A, B, C) and products classified in three categories (α, β, γ)
B.2 Use table for an economy subdivided into three industries (A, B, C) and products classified in three categories (α, β, γ)
B.3 Supply, Use and Input/Output tables in vector notation
B.4 Relation between basic, producers' and purchasers' prices

C.2 Classification of industries at the 2-digit level of disaggregation according to NACE Rev. 1 (1990). Source: http://ec.europa.eu/eurostat/ramon/nomenclatures/index.cfm?TargetUrl=LST_CLS_DLD&StrNom=NACE_1_1
C.3 Broad structure of industry classification in NACE Rev. 2 (2008). Source: Eurostat [2008]
C.4 Classification of industries at the 2-digit level of disaggregation according to NACE Rev. 2 (2008). Source: Eurostat [2008]
C.5 Classification of industries at the 2-digit level of disaggregation according to NAICS III (2007). Source: United States Census Bureau, available at: http://www.census.gov/cgi-bin/sssd/naics/naicsrch?chart=2007
C.6 Members of the E17. Source: http://www.eurozone.europa.eu/euro-area/euro-area-member-states/


Chapter 1

Introduction


Chapter 2

Literature review

2.1 The business cycle

In the last century, the study of business cycles has always been at the centre of the debate among economists, so much so that it is not possible to find even a single decade in which this topic has not grasped the attention of researchers all over the world. Far from being an exhausted stream of research, or a sterile debate about the same old-fashioned theories, the literature on business cycles, often also referred to as "fluctuations"¹, resembles a never-ending spring continuously fuelled by the work of economists. To this point, it is worthwhile to stress that many economic theories have experienced a proper life cycle, in a sense close to that used for describing the marketing status of a specific product, with some corrections, of course. In particular, as is natural for anything that is the object of scientific research, economic theories too are born from the intuition of some researcher; the new idea then spreads out and incentivises more research in that field, until the interest in continuing the research falls. This stylised fact, however, did not occur in the stream of literature that concerns the study of business cycles. Of course, behind this label many specific flows of scientific thinking have been developed, discussed, then abandoned in favour of more correct ones. The purpose of this chapter is exactly this: to give an insight into the main theoretical underpinnings of business cycles of the last thirty years. Since the purpose of the present work is not to give a detailed review of the literature on this theme, the following subsections will describe the main features and implications of the most developed categories of models.

First of all, it is necessary to clearly identify the object of the analysis. Unfortunately, the literature does not offer a unique, rigorous, universally accepted definition of the business cycle; however, the real meaning of the phenomenon is captured by all existing definitions. Here is the one given by [Hoover, 2012, p. 143]: "The business cycle is the alternation in the state of the economy of a roughly consistent periodicity and with rough coherence between different measures of the economy."

Roughly speaking, a cycle in the economy is defined as a period (of unspecified length, at least in principle) in which the main economic variables (typically, but not exclusively, aggregate output and the unemployment rate) persistently change in their values; then this tendency is inverted and the cycle comes to an end.

It appears evident that the economic history of any modern country in the world is characterised by a particular sequence of periods of expansion and recession, economic booms and slumps, instead of a steady behaviour, i.e. the following of a given trend without significant deviations from it. Of course, it is clear that economic fluctuations do not represent a mere table of numbers on the desk

¹ Though the two terms have quite different meanings (the second being more vague), for the sake of the current exposition they will be used interchangeably as synonyms.


of an analyst; rather, they have significant implications for the underlying country, both in economic terms, as regards employment, growth and individual welfare, and in sociological and political terms, for what concerns wealth distribution, social disorder, strikes and many other matters.

Before proceeding with the review of the literature, a brief discussion is owed of some basic descriptive characteristics of fluctuations, as well as of the goals of research in this field. In the analysis of these phenomena the starting point is necessarily the identification of the effects that they exert on the economy and their classification. Although history provides a wide range of different fluctuations, all can be reconciled and classified according to specific criteria that make the analysis quite simple. In addition, this preliminary classification is useful in order to understand the point of view of researchers, that is, what particular aspects the different theories try to explain. Finally, consider that the following classification is not exhaustive (many others can be performed according to the specific needs of the research), but it highlights the main points that will be used later in the dissertation.

• duration of the effects. From a temporal point of view, the repercussions that business cycles have on the socio-economic structure of a country can be transitory, in the sense that their duration is limited to approximately one year, in which case their effects also tend to be somewhat limited. In these cases the occurrence of a structural break, either in the social or in the economic structure, is improbable and the recovery of the original path of the economy is rapid, by definition. On the other hand, long-lasting crises, as well as durable periods of stable growth, are liable to bring radical changes to the socio-economic pattern of the country, so that, even at the end of the cycle, it is impossible for the economy to revert to its previous (pre-cycle) path. These structural changes, by definition, modify the economy permanently: on one side, they alter consumption habits, implying modifications in the demand for existing goods as well as the rise of a significant demand for new ones; on the other, the production structure of the economy, hit both directly, by the shock, and indirectly, through the changes in demand patterns, can change radically. As a consequence, the effects of a long-lasting cycle are not limited to the short run dynamics of economic variables, but also influence the long run growth path. Indeed, these consequences cannot be overcome without a hard and costly (in terms of money, of course, but also of time and, probably, of social conflict) intervention by the Government. In general terms, cycles, particularly long-lasting ones, are not driven by a unique force that pushes the economy for the whole duration of the cycle itself; rather, they can be characterised by a massive "first round" shock that functions as the injector of the cycle, which is subsequently driven by multiple, smaller "second round" shocks, consequences of the first one.
Metaphorically, this concept can be represented by the effect of the transit of a hurricane over a city: the destruction it causes starts the (negative) cycle, and although its duration is relatively short, its devastating effects do not cease when the wind stops blowing. As a consequence of its passage, gradually over time, unstable damaged houses fall down and destroyed crops result in no harvest. The process of reconstruction requires a long time, and it may also imply the construction of stronger buildings, the relocation of the whole city to a more protected place and the sowing of different, more resistant crops. All this surely deeply influences the long run structure and growth of the city.

• origin. Any fluctuation, regardless of the effects that it exerts on the economy, has a specific origin, understood as the nature of the initial shock that has given rise to the cycle. According to this point of view, it is possible to distinguish between real and monetary (financial) origins. The first category groups all fluctuations that influence primarily real variables, such as real output, employment and productivity. On the other side, shocks stemming from monetary institutions are labelled as monetary ones. It is worth stressing that origin does not mean effect: a cycle originating from a monetary shock can be characterised by high repercussions on real variables.


• aggregate origin. Another, slightly different characterisation based on the origin considers, instead of the raw nature of the underlying shock, its primary influence, that is, the aggregate (demand or supply) which is directly affected by it. As in the previous case, this refers only to the initial effect.

• overall effect. By taking a look at the economy as a whole, particularly at the variables that efficiently describe its behaviour, it is possible to evaluate the global effect of a cycle. More precisely, this issue can be addressed by looking at the behaviour of the economy in relation to its past and its long run trend: in case of substantive positive growth above the "normal" expected trend the economy is experiencing a positive phase, called a boom, or expansion, whereas the opposite case is known as a contraction or recession, according to its gravity.

• research focus. This is meant as the point of view of the researcher, so it is an improper classification of business cycles, though a cornerstone for the correct understanding of the literature. For the purposes of the following analysis, it is possible to distinguish mainly two broad approaches: the aggregate approach, which focuses on the study of cycles as phenomena which originate (or at least are treated as if they originated) at the macro level; conversely, the micro-macro approach takes on the issue of understanding the micro level origins of fluctuations together with the mechanisms that drive them to hit the economy as a whole.

Once the effects that business cycles can exert on the economy have been taken into account, the reasons for which this topic has always been at the top of researchers' agendas may appear clearer.

Primarily, the subjects most concerned with the analysis of aggregate fluctuations are national governments and international economic institutions, because of their role in the national and international context, respectively. The understanding of the causes, together with the origins, of cycles is fundamental to these institutions in order to exert their proper functions in the economic as well as in the normative field. In fact, on one hand they have the power to intervene in the economy of the respective country (or countries), deeply affecting it. The debate is still open concerning the intervention of the State in the economy as opposed to increasing liberalisation: the first position relies on the existence of market imperfections that can be at least loosened by public intervention, in order to achieve a better (i.e. more efficient) allocation; the second strongly defends the principle of deregulation of markets, in the belief that market imperfections are worsened by Government influence. It is not surprising that, starting from Keynes' work during the 1930s, which strongly proposed Government intervention for overcoming demand-side negative shocks such as the Great Depression, the scientific (and political) debate about this point strengthened; but it is worthwhile to pay attention also to the theoretical developments regarding the origin, the nature and the effects of fluctuations which closely accompanied it. Abstracting from any judgment of merit, it is however straightforward to conclude that the study of business cycles is crucial from this point of view.

To go into detail, assume that national and international economic policies are able to positively affect the economy: then the understanding of the determinants of business cycles would allow institutions both to develop measures with the scope of counteracting negative cycles (conversely, of fostering expansion periods) and to implement procedures apt to forecast them. Notwithstanding all past and present findings, both are very hard tasks, therefore encouraging further research in this area.

On the other side of the economy, the study of this topic is also relevant for private institutions, such as banks and financial companies (whose importance has grown over recent decades, with no sign of decrease), primarily for what concerns risk management, which requires accurate everyday dealing with the issue of fluctuations, at least to the extent that macroeconomic shocks affect financial markets. Moreover, the recent occurrence of a financial crisis of global proportions may have raised their attention towards the causes of large cycles and the need for better forecasting tools.

After the problem of business cycles has been brought into focus in its main characteristics and purposes, the next subsections will discuss the most relevant theories whose primary purpose is explicitly


that of explaining and modelling aggregate fluctuations, with a brief description of their goals, assumptions and conclusions.

2.2 Theories of aggregate fluctuations

The study of business cycles can be performed in different ways; drawing from economic history, this section discusses those models which deal with the phenomenon by considering fluctuations that occur at the aggregate level. These models form the baseline theory for the analysis performed in subsequent years, mostly because of their economic insights.

2.2.1 Keynesian

One of the pillars of modern economic thought is surely Keynes [1936]. His work, written in response to the Great Depression, has been explored in order to formalise and transpose his ideas into analytical form, ideas which have been found to be valid and insightful. Among them, three of the most relevant are the concept of demand-driven shocks, the presence of nominal rigidities and the real effect of monetary policy. These conditions represent the building blocks of any model which assumes the label "Keynesian". There are many important insights about investments and savings, but they will not be considered here because they go beyond the scope of this dissertation.

The main assumption made by Keynesian theory regards the existence of market imperfections, which in turn allow the economy to experience significant and non-transient periods of positive involuntary unemployment or, more generally, permit employment and output fluctuations around the full employment level. This happens by assuming fixed nominal wages, so that in case of a negative demand shock the system can fall into a situation of stable unemployment (as Keynes argued for the Great Depression). Therefore the imperfection in the labour market (for the monetary wage) rules out the standard assumption of market clearing made by classical theory and, in addition, creates a clear division between the short run and long run dynamics of the economy. In fact, as long as market clearing holds, any fluctuation is avoided by the quick convergence of quantities and prices to their (long run) equilibrium values; instead, when a force reacts in the opposite way, slowing down or stopping this mechanism, a clear distinction arises between short run and long run equilibria, allowing for different behaviours of the economy. In this sense it is possible, both theoretically and analytically, to define aggregate fluctuations of the economy in a more correct and coherent form and to study them. Hence the citation of Keynes' theory as the first one that allows a systematic, theoretically grounded exploration of business cycles.
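The fixed-nominal-wage mechanism described above can be sketched in standard textbook form (the notation here is illustrative and not taken from the thesis):

```latex
% Stylised fixed-nominal-wage mechanism (illustrative textbook notation):
% a competitive firm hires labour until the marginal product of labour
% equals the real wage, with the nominal wage fixed at \bar{W}.
\[
  P\,F'(L^{d}) = \bar{W}
  \quad\Longrightarrow\quad
  L^{d} = (F')^{-1}\!\left(\frac{\bar{W}}{P}\right),
  \qquad
  \frac{\partial L^{d}}{\partial P} > 0 ,
\]
% since F' is decreasing. A negative demand shock lowers the price level P,
% so the real wage \bar{W}/P rises and labour demand L^{d} falls below the
% full-employment level; with \bar{W} unable to adjust downward, nothing
% restores market clearing, and involuntary unemployment persists as a
% short-run equilibrium outcome.
```

This makes explicit why the single rigidity is enough to break the classical market-clearing adjustment in the short run.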

Coming to the theory's implications, the prominent macroeconomic insight concerns the leading role of aggregate demand as the source (or, according to the previous subsection's characterisation, origin) of cyclical fluctuations in the economy. Exogenous shocks on the supply side, albeit not neglected, play a minor role in comparison to the fluctuations of aggregate demand, particularly investments, which are among the most destabilising factors for the economy (recall the well known argument concerning the so-called "animal spirits"). Starting from this, notwithstanding the lack of a formal micro-founded model, it can be inferred from the theory that the (free) interaction between individuals does not ensure the attainment of an efficient equilibrium allocation in the short run. This constitutes the justification for State intervention via fiscal and monetary policies, in order to keep the economy closer to a Pareto first-best allocation.

The core of the short run normative analysis resides in aggregate demand, precisely as an implication of the nominal rigidity: monetary wage stickiness and the imperfect adjustment of real wages, which exhibit a countercyclical behaviour, allow fiscal and monetary policies to affect the short run equilibrium. This sheds light on the central role played by the Government and the Central Bank in manipulating the level of aggregate demand, so as to counteract negative shocks (and exploit positive ones):


2.2. THEORIES OF AGGREGATE FLUCTUATIONS 7

fiscal policy boosts demand, with a crowding-out effect on private investments; furthermore, monetary policy too has real effects (except in the case of a liquidity trap), in contrast with the quantity theory. Summarising, Keynesian theory opposes to the classical spontaneous adjustment mechanism a bundle of policy measures, such as public expenditure, taxation and wealth transfers, together with the manoeuvring of the monetary base, to be undertaken by the State in reaction to fluctuations.

The pioneering and simple model developed by Keynes [1936], reformulated by Hicks [1937] as the IS-LM framework and subsequently extended to the open economy by Fleming [1962] and Mundell [1963], rests on the reasonable assumption of nominal rigidities in the labour market to deliver its conclusions: short run fluctuations driven by exogenous demand and supply shocks, and stabilisation (fiscal and monetary) policy. Thus State intervention, on one hand, and monetary supply variations, on the other, are not only permitted but required in order to move the (otherwise stuck) economy in the short run.

Of course, a number of drawbacks limit its conclusions: first of all, it is a purely static and deterministic model, which permits only comparative statics exercises, with scarce information gain; second, the aggregate formulation of the model totally ignores the underlying microeconomic dynamics; third, it does not take expectations into account at any level (even though later rational-expectations-augmented versions have been developed in the literature). Another non-negligible drawback of this theory rests in the counter-cyclicality of real wages, to the extent that empirical evidence has firmly rejected it in favour of a-cyclicality (or weak pro-cyclicality). Hence, this model should receive attention more for the theoretical insights it suggests, namely nominal rigidities, the possibility of durable cycles and the stabilisation role of national institutions, than for its analytical formulation and solution.

2.2.2 New Classical

The advent of Keynesianism upset existing economic theory and started three decades of intensive research on business cycles, which poured into the literature a vast number of macroeconomic and macroeconometric models for the analysis of policy effects and the forecasting of economic time series conditional on the realisation of specific monetary and fiscal measures. On the other hand, almost no attempt was made to address one of their main weaknesses by establishing an analytical microeconomic base able to reproduce the aggregate behaviour of the economy.

It is in this scientific framework that Robert Lucas wrote his famous article pointing out the inconsistency of these macroeconometric models. His argument centred on the assumption of perfectly rational economic agents, endowed with the capacity of efficiently exploiting all available information, which in turn allows them to make better forecasts about future states of the economy and to evaluate (conditional on their knowledge) the degree of persistence of changes in economic variables. In this sense they were assumed to be able to distinguish between transitory and permanent policies and, as a consequence, to react even before a shock hits the economy, if it is expected.

In the very next few years the scientific community reacted to Lucas' Critique, as well as to the theoretical work of other economists (such as Modigliani) who stressed the need to pay attention to the development of the micro structure of macroeconomic models; the result was the reinterpretation of classical and Keynesian theories in light of these academic contributions.

The new stream of models that came to light during the 1980s can be traced to the class of Dynamic Stochastic General Equilibrium (DSGE) models, though this identifies a very wide and quite heterogeneous group. The common features of these models, which represented the true novelty and break with the previous ones, are the explicit modelling of individuals' decision-making processes, the assumption of agents' rational expectations, the focus on dynamic rather than static models and the general equilibrium point of view. The first two



elements constitute the answer to the principal critiques of economic modelling, whereas the third is a fundamental feature required to deliver the main results of the models. Within this wide class of models, rather than of theories, find their place the analytical frameworks of both New Classical and New Keynesian theory. This section is concerned with the study of the first, while New Keynesian theory is the core of the next one.

The very first outcome of this renewal process of economic thought is New Classical economics, whose analytical counterpart is Real Business Cycle (RBC) models. From a theoretical viewpoint, this strand of literature approaches the study of aggregate fluctuations taking as a reference point the neoclassical growth model of Solow [1957] and modifying it in order to account for short run cycles, besides the long run growth path. This point is made particularly clear in King et al. [1988] and in King and Rebelo [1999]: both offer a review of the principal characteristics of the standard growth model as well as the structural changes that should be applied to deliver the desired result.

The backbone of this literature is the recognition of real factors as the only origin of business cycles. In more detail, monetary shocks (and all nominal factors in general) are explicitly avoided or relegated to an almost useless role in the vast majority of cases, because of the belief that nominal shocks are unable to activate real effects in the economy: in this respect, New Classical theory draws from the original Classical thought (its undisputed predecessor). With reference to real factors, a prominent role is conferred on technological shocks to production, which in practice are the unique recognised source of fluctuations. For the sake of completeness, it should be stressed that other kinds of shocks have also been considered in the literature; however, since technological shocks represent the lion's share of the scientific debate, it is without loss of generality that this section focuses only on this first class of models. The other building blocks that form the core of this stream of literature are the rational expectations assumption, on one side, and the existence of perfectly competitive markets, on the other. As a consequence, economic fluctuations are explained as the optimal reply of economic agents to the uncertainty brought by stochastic productivity innovations (Prescott [1986]).

Given these ideological underpinnings, from a normative point of view it follows as a natural conclusion that monetary policy is completely ruled out as an efficient way to affect the real economy; the same conclusion applies, for different reasons, to fiscal policies: more specifically, the absence of market imperfections implies that state interference in the economy, besides being costly, turns out to be outright counterproductive.

For a better understanding of the theoretical issues described above, the following analyses both the assumptions and the implications of a basic RBC model. For this purpose, reference is made to the seminal paper by Kydland and Prescott [1982] and to the successive work of King and Rebelo [1999] for a deeper discussion.

As a starting point, consider the optimisation problem of a representative, perfectly rational household:

max E_t [ Σ_{j=0}^{∞} β^j U(c_j, l_j) ],   β ∈ (0, 1)        (2.1)

and the aggregate Cobb-Douglas production function:

Y_t = A_t K_t^{1−α} (N_t X_t)^α        (2.2)

where A_t is stochastic productivity, K_t stands for capital, N_t is labour and X_t is a deterministic component of productivity, evolving according to X_t = γ X_{t−1}. Output is either consumed or invested. In addition, assuming a depreciation rate δ ∈ (0, 1), capital is accumulated according to:

K_{t+1} = (1 − δ) K_t + I_t        (2.3)



Then, a crucial assumption regarding the productivity shock follows. First of all, its logarithm is assumed to follow an AR(1) stochastic process:

log A_t = ρ log A_{t−1} + ε_t,   |ρ| < 1,   ε_t ∼ N(0, σ_ε²)        (2.4)

then, deriving the conventional formula for Solow residuals in log terms (log SR_t) and exploiting (2.2), one gets:

log SR_t = log Y_t − α log N_t − (1 − α) log K_t = log A_t + α log X_t        (2.5)

In words, two assumptions deserve particular attention: first, in (2.4) technological shocks are assumed to follow a stationary AR(1) process, implying autocorrelation in innovations; second, it is implicitly assumed that this term can be obtained, after the necessary calculations, from the Solow residuals, as results from (2.5). The implications of these hypotheses will be discussed later.
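To make the technology block concrete, the sketch below simulates (2.2) and (2.4) and verifies numerically the Solow-residual identity (2.5). The parameter values are my own illustrative assumptions, not a calibration, and capital and labour are held fixed for simplicity:

```python
import numpy as np

# Illustrative sketch (parameters assumed, not calibrated): simulate the
# technology block of a basic RBC model, eqs. (2.2)-(2.5), and check that the
# Solow residual recovers log(A_t) + alpha*log(X_t) exactly.
rng = np.random.default_rng(0)
T, alpha, gamma, rho, sigma_eps = 200, 0.64, 1.004, 0.95, 0.007

log_A = np.zeros(T)                      # log-productivity, AR(1) as in (2.4)
for t in range(1, T):
    log_A[t] = rho * log_A[t - 1] + rng.normal(0.0, sigma_eps)

X = gamma ** np.arange(T)                # deterministic trend: X_t = gamma * X_{t-1}, X_0 = 1
K = np.full(T, 10.0)                     # capital path held fixed for simplicity
N = np.full(T, 0.3)                      # labour input held fixed for simplicity

Y = np.exp(log_A) * K ** (1 - alpha) * (N * X) ** alpha          # eq. (2.2)

# Solow residual, eq. (2.5): log SR_t = log Y_t - alpha*log N_t - (1-alpha)*log K_t
log_SR = np.log(Y) - alpha * np.log(N) - (1 - alpha) * np.log(K)

# The identity log SR_t = log A_t + alpha*log X_t holds exactly here
print("max identity error:", np.abs(log_SR - (log_A + alpha * np.log(X))).max())
```

The exercise illustrates why, in this framework, the Solow residual can be read as the sum of the stochastic and the deterministic productivity components.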

The majority of RBC models, far from this very simplistic description, have equations that do not admit an analytical closed-form solution; therefore two different procedures have been proposed to tackle this problem: Long and Plosser [1983] explicitly assumed simple functional forms, giving up a significant fraction of the complexity and of the economic interpretation of the model in order to obtain a tractable and interpretable analytical result; on the contrary, Kydland and Prescott [1982] preferred to perform a log-linearization of the model around its steady state.3 The latter, most employed, method allows one to explore the dynamic path of the economy in a neighbourhood of the (unknown) steady state. Finally, in order to evaluate a model's performance, wide use has been made of calibration and simulation procedures. In particular, the original paper by Kydland and Prescott [1982], and many other RBC models from that moment on, turned out to represent correctly the post-war U.S. aggregate output fluctuations.
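The quality of the first-order log-linear approximation can be checked numerically; the function f and the steady-state value below are arbitrary assumptions chosen purely for the example:

```python
import math

# Illustrative check (my own example, not from the thesis) of the first-order
# log-linearization used in Kydland-Prescott style models:
#   ln f(x) ~= ln f(x_ss) + f'(x_ss) * (x - x_ss) / f(x_ss)
# Here f(x) = x**0.3 (an assumed Cobb-Douglas-style function) and x_ss = 1.
f = lambda x: x ** 0.3
fprime = lambda x: 0.3 * x ** (-0.7)
x_ss = 1.0

def loglin(x):
    """First-order approximation of ln f(x) around the steady state x_ss."""
    return math.log(f(x_ss)) + fprime(x_ss) * (x - x_ss) / f(x_ss)

for dev in (0.01, 0.05, 0.10):           # percent deviations from steady state
    x = x_ss * (1 + dev)
    exact, approx = math.log(f(x)), loglin(x)
    print(f"dev={dev:>5.0%}  exact={exact:.6f}  approx={approx:.6f}  err={abs(exact - approx):.2e}")
```

As expected, the approximation error is negligible for small deviations and grows with the distance from the steady state, which is precisely why the method describes the economy only in a neighbourhood of it.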

Coming back one step, the crucial assumption of RBC models is the existence of a high-valued and strongly autocorrelated technological stochastic shock and, in second instance, the presence of a mechanism of capital accumulation. The importance of these factors essentially corresponds to their necessity for delivering a good fit with the data. It can be proved, for example, that i.i.d. productivity shocks do not allow the economy to display the desired behaviour. There are several limitations and critiques expressed by researchers of this class of models, the main of which can be synthesised as follows:

• use of a representative agent. It rules out a great deal of interactions among a heterogeneous set of economic actors; the resulting model is therefore highly simplified, and the more unreliable, the greater the magnitude (and economic meaningfulness) of real people's relations. By the same token, the phenomenon of aggregation has most of the time been almost ignored, as if it were a minor, almost insignificant problem, clearly in opposition to real evidence.

• high elasticity of the labour supply curve, in contrast with empirical results, which have provided estimates significantly lower than those emerging from the model.

3 It is a three-step procedure: first take logs of the variables of interest, then perform a first-order Taylor expansion around the steady-state value, that is:

ln f(x) ≈ ln f(x_SS) + [f′(x_SS)/f(x_SS)] (x − x_SS).

Finally, express it as a function of the percent deviation of the variable from its steady-state value:

ln f(x) ≈ ln f(x_SS) + [x_SS f′(x_SS)/f(x_SS)] · [(x − x_SS)/x_SS].


• partial “blindness”, to be intended as the incapacity to account for sources of cycles other than technological shocks. This is compounded by the difficult economic interpretation of slumps and contraction periods as fuelled by adverse technological events.

• required high magnitude of these shocks. To this end, King and Rebelo [1999] developed a new model able to reproduce aggregate fluctuations starting from small productivity disturbances, but at the cost of a higher elasticity of the labour supply curve.

• linkage with Solow residuals. It has been found that Solow residuals are not a homogeneous class that can be thought of as representing technological innovation; instead they are like a cauldron where both demand and supply residuals coexist without a clear distinction.

• calibration method. In particular, Hartley et al. [1997] criticised the calibration method and re-calibrated (using the same methodology) a basic RBC model taking as reference data generated by a Keynesian model.5 They found a tendency to mimic the results obtained for the U.S. economy, thereby undermining, in the authors' view, the capacity of the model to correctly represent the behaviour of economies substantially different from the U.S., on one hand, and the capacity of capturing changes in the drivers of the dynamics, on the other.

Apart from technical criticisms, New Classical theory presents some important deficiencies, mainly in the reduced and very stylised set of explanations it provides for cycles: a massive, persistent series of shocks (formally defined as technological) is clearly unable to provide a solid ground for the study of aggregate fluctuations, neither at national nor at international level, since the two mechanisms suggested as drivers for the propagation (i.e. the empirically unsound high elasticity of labour supply, and capital accumulation) seem unable to account for the numerous national specificities observed. Nonetheless this theory has the virtue of being the first in powerfully proposing perfect rationality of agents and micro-foundation as a general framework for economics. Beyond its drawback of overstating the importance of technical change, it has provided an interesting insight: the latter is not only a driver of economic growth, but also plays a role in short run output fluctuations.

2.2.3 New Keynesian

During the 1980s, as a response to both Lucas' Critique and early research on RBC, a stream of researchers started to re-elaborate standard Keynesian theory in order to merge it with the new theoretical and technical developments, giving birth to the New Keynesian school. From a theoretical point of view, the basic framework is essentially the original Keynesian thought, with two differences: on one hand, the analytical discussion of the microeconomic origin of nominal and real rigidities; on the other, the shift of focus from fiscal to monetary policy.

With reference to the first, the literature has evolved in a tree-like manner: many different explanations, all traceable to either nominal rigidity or real stickiness, have been provided, without reaching a uniform theory, nor a real and uniform solution to the original problem. In effect, many causes of micro-rigidity, even if suitable (and economically meaningful) in the specific case, seem to lack the crucial capacity of delivering good results also at the aggregate level. This is in fact one of the criticisms advanced by New Classical theorists, who point to the "aggregative weakness" of these explanations. However, in recent years, if not on theoretical grounds, New Keynesian economic modelling seems to be characterised by a greater homogeneity in rigidity assumptions, suggesting an underlying process of convergence towards two different cases. The first is known as the "menu costs" hypothesis, due to a seminal work by Mankiw [1985], because it highlights the role of small costs sustained by

5 In particular, they performed two calibrations, the first on generated data similar to U.S. aggregate data, the second after modification of some parameters affecting the dynamics - not the steady state - of the data generating model.




firms in re-printing price tags and other material supports7; successive authors have suggested a more extensive definition to account for a larger burden on a firm's profits. The intuition behind the assumption is that firms may change their price in consequence of an external shock only when it grants them economic advantages greater than the total amount of menu costs, giving rise to stickiness in the price level. The hypothesis of staggered prices (practical applications in Yun [1996], Smets and Wouters [2003]) tries to tackle the same problem from a different viewpoint: conversely, firms are willing to modify their prices, but they are prevented from doing so by the existence of norms and contracts that can be updated only at fixed times8.
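A minimal numerical sketch of the staggered-price mechanism follows; the reset probability and the shock are illustrative assumptions of mine, not taken from the cited papers:

```python
import numpy as np

# Illustrative sketch (assumptions mine, not the cited models): Calvo-style
# staggered pricing. Each period every firm resets its (log) price to the new
# desired level only with probability 1 - theta; otherwise it keeps the old one.
# After a one-off shock to the desired price, the aggregate price level
# converges only gradually -- the nominal stickiness described in the text.
rng = np.random.default_rng(1)
n_firms, T, theta = 10_000, 20, 0.75     # theta = prob. of keeping the old price
p_star = 1.0                             # new desired log-price after the shock
prices = np.zeros(n_firms)               # all firms start at the old price 0

avg_price = []
for t in range(T):
    resetters = rng.random(n_firms) > theta      # firms allowed to adjust
    prices[resetters] = p_star
    avg_price.append(prices.mean())

# The aggregate price moves toward p_star roughly as 1 - theta**t, not in one jump
print([round(p, 3) for p in avg_price[:5]])
```

The sluggish adjustment of the average price, despite every individual firm adjusting fully whenever it can, is exactly the aggregate stickiness that the staggered-prices hypothesis is meant to generate.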

New Keynesian theory differs from its origins (namely, Keynesianism) in the recognition of real rigidities as an additional source of market imperfection. Although clear in theory, in the practice of (at least early) modelling this possibility sometimes substitutes for, and at other times complements, nominal rigidity as defined above. The most common real rigidity takes the form of monopolistic competition in the goods or labour market, where firms and trade unions fix prices and wages according to their desired mark-up: a complete framework, with imperfections in both markets, is adopted by Soskice and Iversen [2000]. A wider description of the nominal and real rigidities applied in the early literature on New Keynesian models is provided by Ball et al. [1988], Gordon [1990] and Romer [1993].

In the same way as New Classical theory rests on the Classical one, so does New Keynesian with respect to early Keynesian thought. In effect, the economic insights regarding market imperfections, demand-driven cycles and state intervention for stabilisation purposes still characterise all recent New Keynesian models. The main points of departure from the original theory, though important, are quite peripheral: the concentration on monetary policy and on the real effects of money supply, the existence of real rigidities and the explicit consideration of technological shocks as possible drivers of fluctuations are the key elements of all the most recent models. However, New Keynesian modelling has its origins in RBC frameworks, of which it represents an improved extension. One of the first well-structured models in this sense is that of Yun [1996], where a standard RBC is enriched by the addition of monopolistic competition, nominal price rigidity and endogenous money supply. Here is a brief survey of the basic features of this class of models:

• micro-foundation. On the demand side, this issue has been addressed mainly by assuming the existence of a representative agent. However, recent developments (Smets and Wouters [2003, 2007]) have turned to a multi-agent framework.

• monopolistic competition. First introduced into Keynesian modelling by Blanchard and Kiyotaki [1987], its successful adoption has a twofold motive: theoretically, it provides a microeconomic base for real rigidity, whereas from a modelling point of view, the mark-up mechanisms function as a means for the propagation of a shock.

• shocks. A relevant part of the most recent models (Clarida et al. [1999], Galí [2010], Ireland [2004], Smets and Wouters [2003, 2007]) assumes that stochastic disturbances follow AR(1) processes9. This assumption is not dissimilar to that of RBC models; indeed it has the same purpose, namely to ensure that the shock propagates through the economy and generates persistent fluctuations. With reference to the nature of the shock, as previously discussed, both demand and supply are considered, even though the first has a prominent role.

• three-equation model. This is not a proper feature; rather, it can be interpreted as the backbone of recent models, which have a minimum structure given by the three equations expressing,

7 More precisely, he proved that after an exogenous demand shock, a monopoly firm experiences a variation of profits of the second order, whereas the total welfare change is of the first order.

8 Nonetheless, staggered prices are generally modelled in a stochastic way, considering the (uniformly distributed) probability that at time t firm i varies its price.

9 In most cases the autoregressive coefficient ρ is implicitly assumed to satisfy |ρ| < 1; in some cases it equals 1, so that the process becomes a random walk.


respectively, aggregate demand, the relation between aggregate supply and inflation, corrected with expectations (the New Keynesian Phillips Curve), and the Central Bank's monetary rule (a reaction function to inflation and output deviations from a desired level).

• monetary policy10. This too is not a proper technical feature of these models, but it can be considered the common thread that joins early research (Yun [1996], Goodfriend and King [1997], Clarida et al. [1999]) and new findings (Ireland [2004], Galí [2010], Inoue and Rossi [2011]). Going into the details, an important novelty with respect to standard Keynesian theory is due to the assumption of rational agents: given their forward-looking attitude and the efficient exploitation of available information, policy measures in these models are constrained by credibility, that is, their effectiveness depends crucially on the way in which they are performed (whether anticipated by agents or totally unexpected).
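In standard textbook notation (not reproduced from this thesis), the three-equation backbone described in the list above can be written in log-linearised form as:

```latex
\begin{align}
  x_t   &= \mathbb{E}_t[x_{t+1}] - \frac{1}{\sigma}\big(i_t - \mathbb{E}_t[\pi_{t+1}] - r^n_t\big)
           && \text{(dynamic IS curve: aggregate demand)} \\
  \pi_t &= \beta\,\mathbb{E}_t[\pi_{t+1}] + \kappa\,x_t
           && \text{(New Keynesian Phillips curve)} \\
  i_t   &= \phi_\pi\,\pi_t + \phi_x\,x_t
           && \text{(Central Bank's reaction function)}
\end{align}
```

where x_t denotes the output gap, π_t inflation, i_t the nominal interest rate and r^n_t the natural rate; the forward-looking expectation terms are what makes credibility matter for the effectiveness of policy.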

This theory, together with its analytical modelling representation, though comprehensive, still has many limitations as far as the analysis of fluctuations is concerned: above all, the micro dynamics of individual interactions are not much elaborated in the theory, and this, of course, also affects the implied analytical models, which in fact tend to focus on aggregate variables. Conversely, some theoretical insights have remained at the periphery of (if not ignored by) modelling practice: fiscal expenditure and the role of the Government are just one example11.

Compared to New Classical theory (and RBC models), New Keynesian theory (and the corresponding analytical tools) has overcome some of its drawbacks; in particular, as to the possible aggregate origin of fluctuations, it has developed the idea whereby more plausible pure technological shocks coexist with other real demand and monetary shocks and all together, fostered by market imperfections, drive economic fluctuations. At the opposite end, the deficiencies associated with a rather stylised aggregation of micro dynamics (namely the often recurring use of representative agents) have been maintained to a large extent. The focus of these models, notwithstanding the efforts made to overcome Lucas' Critique, has remained limited to the aggregate behaviour of the economy: the result is an incomplete picture, whose micro-foundations are at a preliminary level. The argument stressed here is that for years economic research strived to find a micro-foundation for mainstream macro models but, judging by the two mainstream theories, the result is closer to a micro-justification of macro behaviour than to a micro-foundation of aggregate dynamics.

By way of conclusion, note that the time evolution of the two schools of thought has followed different paths: over the past decade, research on RBC models seemed to reach an endpoint, perhaps owing to the academic (both theoretical and empirical) findings that undermined its simplistic structure; instead, New Keynesian models have met with greater success at the institutional level: this is due principally to the model by Smets and Wouters [2003]12, which is nowadays employed by the ECB for forecasting and policy evaluation. Apart from this, the heritage of the theoretical debate described so far is twofold: first, the necessity to consider various sources of economic disturbances (both demand and supply shocks) for a better understanding of the business cycle; second, without diminishing the importance of the theoretical results achieved, it has highlighted the need for a better understanding of the role of microeconomic dynamics in all that concerns economic fluctuations. At a different level, both ideas have been developed by theories of sectoral fluctuations and, in a more peculiar way, also by evolutionary economic theory.

10 In particular, a survey on this topic can be found in Goodfriend and King [1997].

11 Perhaps, as regards this point, it can be asserted that the historical framework exerted an important, though not predominant, influence: in the second half of the 1980s, the oil crises and high inflation, together with the non-negligible public debt (accrued in the previous decades) of many western economies, moved the attention from fiscal to monetary instruments.

12 Reference is made to the original model by Smets and Wouters [2003] and to its successive developments, namely Smets and Wouters [2007].




2.3 Theories of idiosyncratic and sectoral origins of fluctuations

Aggregate fluctuations are phenomena of undoubted relevance, for a number of issues previously discussed; however, they are not the only kind of disturbance that might affect the economy: expansions, slumps and steady growth are features shared by the various agents14 operating at different levels of disaggregation in an economy, that is: industries, sectors, small clusters and individuals. These phenomena, especially when they occur at the micro level, are likely to affect a very limited number of agents, in such a way that they remain unnoticed from an aggregate viewpoint: this is indeed the true reason why standard mainstream theories (namely Classical and Keynesian, with their respective extensions) do not deal directly with them, relegating the problem to the margins, if not ignoring it altogether. This principle, however, is not always true: in some cases a sectoral shock may transmit to other sectors and result in aggregate variations or fluctuations. The primary aim of this section is to provide an overview of some models that have tried to tackle the issue. Note that in this respect it is not possible to talk about a unified theory, since the basic requirements of coherence, a well defined goal and boundaries and a common analytical approach cannot be contemporaneously identified.

2.3.1 Lucas’ islands model

One famous contribution, perhaps one of the earliest, on this theme is the "islands model" by Lucas [1972]. The original purpose of this article was not the analysis of aggregate or sectoral output fluctuations; rather, it was the establishment of an output-inflation relationship (Phillips curve) as an endogenous result of a simple economic model, rather than an almost inexplicable empirical finding.

The original model built upon the problem of imperfect information faced by firms in the goods market: since they can neither correctly observe the general price level, nor distinguish between local (relative) price changes and general ones, they are forced to form subjective expectations about these variables. In addition, the assumption of perfect rationality implies that the firms' decision-making process (profit maximisation) also involves the evaluation of the unobserved general price level. Finally, considering the possibility of two kinds of shock, namely an aggregate (general) price shock, common to everyone, and a local, idiosyncratic disturbance, firms can misinterpret market signals and take a particular shock for an aggregate one. The result is a wrong production decision, which is essentially the basis of output fluctuations.
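The signal-extraction logic behind this misinterpretation can be sketched in a few lines; the variances below are assumed values for illustration, not parameters from Lucas [1972]:

```python
import numpy as np

# Illustrative sketch (parameters assumed): the signal extraction behind the
# islands model. A firm observes only its local log-price
#   p_i = p + z_i,  p ~ N(0, s_p^2) aggregate,  z_i ~ N(0, s_z^2) idiosyncratic,
# and optimally attributes to the relative (local) component the share
#   lambda = s_z^2 / (s_z^2 + s_p^2)
# of any observed price movement. When s_p^2 > 0 the firm partly misreads an
# aggregate price shock as a local one -- the source of output fluctuations.
rng = np.random.default_rng(2)
s_p, s_z, n = 1.0, 2.0, 200_000

p = rng.normal(0, s_p, n)                # aggregate price shocks
z = rng.normal(0, s_z, n)                # idiosyncratic (relative) shocks
p_local = p + z                          # what each firm actually observes

lam = s_z**2 / (s_z**2 + s_p**2)         # optimal signal-extraction weight
z_hat = lam * p_local                    # best linear estimate of z given p_i

# The regression of z on p_local recovers lambda (up to sampling error)
lam_empirical = np.cov(z, p_local)[0, 1] / np.var(p_local)
print(f"theoretical lambda = {lam:.3f}, empirical = {lam_empirical:.3f}")
```

Because λ < 1, even a purely aggregate price movement induces each firm to infer a positive local component, and hence a (mistaken) change in production.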

Business cycles in this highly stylised framework emerge from idiosyncratic shocks, only to the extent that they have been misunderstood. Though extremely simple, this model delivers a fundamental result for all subsequent analysis. It is an irrelevance result concerning the aggregate effect of sector-specific shocks, which is a consequence of the weak law of large numbers. It can be stated in the following way:

Lemma 2.3.1 (Weak Law for shock aggregation). In a multi-sectoral or sufficiently disaggregated economy, if the conditions for the weak law of large numbers are satisfied, then idiosyncratic shocks do not have aggregate effects.

Essentially, it states that idiosyncratic shocks, as the number of agents increases, tend to offset each other at the aggregate level, implying the removal of an influential link between the micro and macro levels of the economy. Hence it is straightforward to recognise this result as the main obstacle for the whole class of theories, as well as models, on this topic.
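The diversification mechanism behind Lemma 2.3.1 can be verified by simulation; all parameter values below are assumptions made purely for illustration:

```python
import numpy as np

# Illustrative sketch of the irrelevance result (Lemma 2.3.1): with N equally
# sized sectors hit by i.i.d. shocks, the volatility of the equal-weighted
# aggregate decays like 1/sqrt(N), so idiosyncratic disturbances wash out
# under disaggregation.
rng = np.random.default_rng(3)
sigma, n_draws = 0.05, 10_000            # sectoral shock std; Monte Carlo draws

results = {}
for N in (10, 100, 1000):
    shocks = rng.normal(0.0, sigma, size=(n_draws, N))
    agg = shocks.mean(axis=1)            # equal-weighted aggregate of N sectors
    results[N] = agg.std()
    print(f"N={N:>5}: aggregate std = {results[N]:.5f}  (theory: {sigma / np.sqrt(N):.5f})")
```

The simulated aggregate volatility tracks the theoretical 1/√N decay closely, which is exactly why some mechanism breaking the conditions of the lemma is needed before idiosyncratic shocks can matter in the aggregate.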

14 In this context the word "agent" abstracts from its standard meaning and can be used to identify either individuals or a homogeneous group of them (according to a specific criterion, such as geographical proximity). However, for the sake of clarity, in the remainder of the exposition this word will be used to identify individual agents, that is, members of the category at the maximum possible level of disaggregation, if not differently specified. According to the same convention, the prefix micro- will be used to identify features or phenomena occurring at the maximum disaggregate level.


2.3.2 Sectoral and granular theories

At the end of the 1990s a renewed interest was experienced in the study of the micro-foundations of aggregate fluctuations. This stream of literature cannot be completely reconciled with one of the mainstream theories described above, although it remains grounded in the so-called wide group of orthodox theories. The theoretical pillars that can be inferred from the literature, next to the high relevance conferred on idiosyncratic shocks as explanatory variables for aggregate fluctuations, are the existence of a specific mechanism able to invalidate the necessary conditions of Lemma 2.3.1 and the statement of an adequate mechanism through which micro shocks propagate through the economy. In any case, this cannot properly be defined a theory, but rather an established common way-of-doing, since it lacks the consistent theoretical underpinnings, economic insights and procedural ideas that characterise an economic theory.

Among the first to contribute to this literature, Bak et al. [1992] developed a very insightful model: under the assumptions of nonconvexities in production (due to indivisibilities) and nonlinearities stemming from the interconnections of the economy, they demonstrated by means of simulations the possibility of upward shock transmission (that is, transmission from the consumer to the supplier; the opposite case is usually called downward transmission) in an infinite lattice, with a fat-tailed limiting aggregate production distribution, which implies a high probability of large avalanches. Its relevance rests in being the first attempt to introduce nonlinearities in the form of sectorial interconnections for studying aggregate fluctuations, a structure of the economy that resembled to some extent that of a network (as will be discussed later), which appeared in economics only more than twenty years later. Notwithstanding the relevance of this contribution, the article did not receive much attention from the scientific community, and the theory of idiosyncratic origin remained at the border of academic research until the end of the decade. Similarly, Quah [1996] attempted to tackle the same issue: he criticised the point of view according to which the effects of shocks run from the aggregate to the micro level (i.e. a situation such that any disturbance other than aggregate was neglected) and proposed an organic theoretical and statistical procedure for the analysis of the evolution of the distribution of regional output in response to a series of idiosyncratic (i.e. regional) and aggregate disturbances. To this aim, he considered both spatial and temporal profiles as relevant variables. Anyhow, as for the previous article, this contribution too has received scarce attention from the academic environment.

Seminal research in this area (Dupor [1999] and Horvath [1998]) focused on the study of economically grounded assumptions that jeopardise the perennial validity of Lemma 2.3.1, which represents the main mathematical and logical obstacle to the aggregate relevance of idiosyncratic shocks. First, Dupor [1999]15 built a stripped-down DSGE model with different firms' market shares and Input/Output relationships, then combined them with persistent technological shocks as amplification mechanisms. His empirical test on the 1987 US SIOT rejected the hypothesis of a significant relevance of idiosyncratic shocks; however, a qualification is strongly required: in his model, Dupor focused on first-order connections among sectors, while ignoring further indirect linkages; moreover, his dataset was limited to just one year, which resulted in a remarkable limitation of the empirical findings. Nonetheless, the main contribution of his work concerned the identification of a peculiar case such that, in an I/O framework, the occurrence of the Weak Law of Large Numbers is delayed, allowing (theoretically) small independent low-level shocks to determine large output fluctuations; it was based on the increasing row heterogeneity of the intermediate goods matrix within the I/O table, that is, a situation in which increasing disaggregation implies increasing sparseness in the distribution of sectorial input exchange linkages16. This result was taken up by Horvath [1998], who approached the same problem by studying (in a DSGE framework) a sectorial economy in a different model, with a unique kind of shock, namely a productivity disturbance,

15 Horvath was aware of the main results of Dupor's analysis prior to the final publication of his article, whose first draft was dated 1996.

16 In terms of network analysis, this simply means a progressive reduction of vertex in-degree as a consequence of further disaggregation.



and assumed a different AR(1) stochastic process for each sector of the economy, together with an independent shock at the aggregate level; then he concentrated on the shape and characteristics of the US I/O tables for different years, finding substantial confirmation of the hypothesized disaggregation-increasing sparseness. In a later work, Horvath [2000] extended the previous results to a multisector DSGE model with sector-specific productivity shocks, where shock percolation occurs as a consequence of limited but intense trade interactions. Simulations showed no significant difference, in terms of aggregate volatility, between the baseline model and a one-sector version of it.
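The disaggregation–sparseness point made by Dupor and Horvath can be illustrated numerically with a toy example (a randomly generated binary matrix of input-supply links, not actual I/O data): aggregating fine sectors into coarse ones mechanically raises the density of the linkage matrix, which is the mirror image of the falling vertex in-degree under disaggregation noted in footnote 16.

```python
import numpy as np

rng = np.random.default_rng(42)
# Disaggregated economy: 100 sectors, sparse input-supply links (entry (i, j) = sector i supplies j)
fine = rng.random((100, 100)) < 0.05

# Aggregate consecutive blocks of 10 fine sectors into one coarse sector:
# a coarse link exists whenever any constituent pair of fine sectors is linked.
coarse = fine.reshape(10, 10, 10, 10).any(axis=(1, 3))

print(f"fine density:   {fine.mean():.3f}")
print(f"coarse density: {coarse.mean():.3f}")   # much denser after aggregation
```

With a 5% link probability at the fine level, the probability that a 10x10 block contains no link at all is 0.95^100, roughly 0.6%, so the coarse matrix is almost complete: aggregation hides the sparseness that drives the delayed Law of Large Numbers.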

Another stream of research in the same years devoted attention to the distribution of firm sizes at the national level (Axtell [2001] for the US and Fujiwara et al. [2004] for the European Union) or at the sectoral level (Pagano and Schiavardi [2003]), finding two main results: non-normal distribution and sectoral heterogeneity. As far as the first result is concerned, power-law family distributions (in particular, Zipf's law for US data) are found to fit empirical data on firm size better than the Normal: this has strong implications in terms of tail event probability; in other terms, the number of very big firms was found to be significantly higher, and the whole sample of firm sizes more sparse, than what the Normal density predicts. On the other side, this distribution was proved to be sector-specific, deepening the degree of heterogeneity that economists should take into account when building their models.
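The tail-probability implication can be made concrete with a small Monte Carlo sketch (illustrative parameters, not the estimates of Axtell [2001]): for power-law "firm sizes", the empirical frequency of observations more than five standard deviations above the mean is orders of magnitude larger than a Normal fitted to the same mean and variance would predict.

```python
import math
import numpy as np

rng = np.random.default_rng(7)
# Classical Pareto sizes with minimum 1 and tail exponent ~1.1
sizes = rng.pareto(1.1, size=100_000) + 1.0

mu, sd = sizes.mean(), sizes.std()
threshold = mu + 5 * sd
empirical_tail = (sizes > threshold).mean()            # observed frequency beyond 5 sd
normal_tail = 0.5 * (1 - math.erf(5 / math.sqrt(2)))   # P(Z > 5) under a fitted Normal

print(empirical_tail, normal_tail)
```

Under the Normal, a five-sigma firm is a once-in-millions event (about 2.9e-7); under the power law, such "very big firms" appear routinely in a sample of this size.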

These results are particularly important when they are read in light of the theoretical insights provided by the research on idiosyncratic shocks. The fundamental work of Gabaix [2011] brings these two elements together to formulate and test the “granular hypothesis”, which states the relevance of idiosyncratic shocks in the determination of aggregate fluctuations in the presence of a fat-tailed firm size distribution. This research was based on the mathematical proof of the failure of the Weak Law of Large Numbers in the presence of fat-tailed distributions, with aggregate volatility decaying at rate 1/log N when firm sizes follow Zipf's law, instead of the canonical 1/√N. In this model industrial supply linkages are responsible for the propagation of productivity shocks, whose firm-level value is assumed to depend on aggregate, industrial and idiosyncratic components. The granular residual is defined as the best proxy for the estimation of the idiosyncratic component, and the empirical implementation of the model with respect to the top 100 US firms confirmed the hypothesis that the study of the biggest firms in an economy can yield useful insights into the understanding of macro variables. A similar approach is used by Carvalho and Gabaix [2013] to test the significance of the fundamental volatility (conceptually the equivalent of the granular volatility in Gabaix [2011]) as a representative statistic of the volatility of an economy.
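The decay-rate contrast can be checked numerically. With independent firm-level shocks of common volatility σ, aggregate volatility equals σ times the square root of the sales Herfindahl, which shrinks like 1/√N for equal-sized firms but far more slowly when sizes are Zipf-distributed (an illustrative simulation, not Gabaix's calibration):

```python
import numpy as np

def aggregate_volatility(sizes, sigma=1.0):
    """sigma * sqrt(Herfindahl of sales shares): the 'granular' aggregate volatility."""
    shares = sizes / sizes.sum()
    return sigma * np.sqrt(np.sum(shares ** 2))

rng = np.random.default_rng(0)
for N in (10**2, 10**4, 10**6):
    equal = np.ones(N)
    zipf = rng.pareto(1.0, size=N) + 1.0   # Zipf's law: unit tail exponent
    print(N, aggregate_volatility(equal), aggregate_volatility(zipf))
```

For equal sizes the function returns exactly N^(-1/2) (0.1, 0.01, 0.001), while under Zipf sizes the diversification of idiosyncratic shocks is dramatically slower: even at one million firms, a handful of giants keep aggregate volatility alive.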

Starting from this finding, many recent researches have attempted the empirical validation of models built on structurally similar assumptions, with contrasting results; among them, Foerster et al. [2011] and di Giovanni and Levchenko [2012] deserve particular attention. The importance of the first stems from its focus on industry intercorrelation for explaining the relevance of sectoral shocks at the aggregate level on the Industrial Production index, in a context where Gabaix's argument failed to deliver this conclusion17; the second adopted a granular approach to the study of open economies, finding a positive correlation between aggregate volatility and the degree of openness of an economy as a direct implication of the role of big firms in the national context. Other interesting economic insights have been delivered by the research of Stella [2013], who merged an improved version of the factor model used by Foerster et al. [2011] and Gabaix's granular residual idea to decompose the aggregate and firm-level variance of the growth rate of sales into the effects of aggregate, sectoral and idiosyncratic shocks: the outcome of the dynamic factor model estimation showed that aggregate shocks accounted for about 30% of total volatility (on average over 1972-2007), and idiosyncratic shocks for almost the same; in line with this, the granular residual estimation proved the reduced importance
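The decomposition logic behind this strand of work can be sketched with simulated data: if firm-level growth is the sum of independent aggregate, sectoral and idiosyncratic components, the variance of aggregate growth splits (up to sampling noise) into three corresponding shares. This is only a stylized sketch with arbitrary parameters: in the actual estimations the factors are latent and must be extracted from the data, whereas here they are observed by construction.

```python
import numpy as np

rng = np.random.default_rng(3)
n_firms, n_sectors, T = 200, 10, 400
sector_of = rng.integers(0, n_sectors, n_firms)   # random sector membership

f_agg = rng.normal(0.0, 0.6, T)                   # common aggregate factor
f_sec = rng.normal(0.0, 0.8, (n_sectors, T))      # sector-specific factors
eps   = rng.normal(0.0, 1.5, (n_firms, T))        # idiosyncratic shocks

growth = f_agg + f_sec[sector_of] + eps           # firm-level growth rates
agg = growth.mean(axis=0)                         # equal-weighted aggregate growth

# Variance shares of the aggregate attributable to each layer of shocks
shares = np.array([np.var(f_agg),
                   np.var(f_sec[sector_of].mean(axis=0)),
                   np.var(eps.mean(axis=0))]) / np.var(agg)
print(shares, shares.sum())   # shares approximately sum to 1
```

Sectoral factors average out only across the 10 sectors and idiosyncratic shocks across all 200 firms, so with these illustrative parameters the aggregate factor dominates the aggregate series; the empirical question addressed by this literature is precisely how the shares split in real data.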

17 Another (independent) contribution which stressed the role of interconnections is Shourideh and Zetlin-Jones [2012]: they approached the problem of financial constraints, finding that idiosyncratic shocks in a context of heterogeneous financial constraints could prevent the optimal re-allocation of resources, determining demand fluctuations which in turn are reflected at the aggregate level due to the connections between intermediate and final goods producers.
