A method to reduce the number of parameters to be estimated in a distribution lag-model


Single Cycle Degree programme in [QEM] Models and Methods in Economics and Management "Economics"

Final Thesis

Gamma Distributed Lagged Model

Supervisor: Ch. Prof. Domenico Sartore
Assistant supervisor
Graduand: Dario Mander, Matriculation Number 816681
Academic Year 2016 / 2017


Contents

1 Introduction
2 Distributed lagged model
  2.1 Koyck (1954) distributed lagged model
  2.2 Almon (1962) distributed lagged model
3 Distributed Lag model: Gamma distribution relation between coefficients
  3.1 Why a Gamma Distribution
  3.2 Dynamic model with two variables and one explanatory variable change
    3.2.1 Estimation
  3.3 Dynamic model with two variables and multiple explanatory variable changes
    3.3.1 Estimation
    3.3.2 Forecast
    3.3.3 An example: the Consumption model
4 Conclusions
Bibliography
Figure Index
Tables Index
Appendix A – Consumption model: Empiric data

1 Introduction

It is not the first time that the study of economic models has taken inspiration from models of complex processes in other sciences, such as physics, thermodynamics, and the neural sciences. All these processes share a large number of elements that jointly determine the system properties under analysis, which makes it difficult, or even impossible, to find a deterministic model. We are therefore forced to use statistical methods, built on the concepts of random variables and distribution functions.

In physics, for example, it is a common idea that matter is composed of discrete elements and, as a consequence, that the properties of matter we detect are the effect of the interaction of these items. This concept was acknowledged only recently, but it has its origin in ancient Greece in the 5th century B.C., when Leucippus of Miletus and Democritus (460 B.C.) founded the philosophical school of thought named "atomism". They theorized that nature consists of two fundamental principles: atoms, elements our senses cannot detect, and the void. The combined effect of these elements produces the events we observe in nature.

In 1738 Bernoulli introduced a gas model which represents a gas as a set of molecules in continuous movement, following the laws of kinetics. In this model, what our senses detect as temperature and pressure are the effects of the molecules' kinetic activity. Pressure, for example, derives from the collisions of the molecules against the walls of the container in which the gas is enclosed.

Starting from the kinetic theory, statistical mechanics provides a set of concepts and methods to analyze the behavior of complex events as the result of the action of simple elements which follow specific laws. The application of these concepts to thermodynamics starts from the idea that an isolated system, made up of a certain quantity of matter, is composed of N particles which follow the laws of dynamics. The dynamics of this system is represented by Hamilton's function H({q},{p}), where {q} is the set of all the coordinates and {p} is the set of momenta of all the N particles.

It is clear that the vast number of particles (around Avogadro's number, N ≈ 10^23) makes it impossible to solve Hamilton's function analytically to obtain the system state. Moreover, such a solution would be difficult to interpret.

For these reasons, the system state is represented through a few other parameters, such as temperature, pressure and volume, which are named thermodynamic parameters. Statistical mechanics defines as "micro-state" the state of a single particle, and as "macro-state" the system properties we can measure.

If we relate this to economics, we can highlight the following properties of an economic system that we may consider similar to the properties of a thermodynamic system:

- The economic system is made up of discrete elements (consumers, entrepreneurs, etc.) which we will call agents.
- These agents follow specific laws which characterize their behavior. These laws have as their target the maximization of utility.
- The number of agents is vast, though not at the level of Avogadro's number.
- Every agent has a particular state based on his wealth, consumption level and free time.
- Every agent has a different impulse to change his state in order to maximize utility. This impulse could depend on the inclination to consume, the inclination to work, etc.
- We can define the last two properties as micro-states, which differ between agents.
- The economic system has some properties that we can detect, like GDP, inflation and the unemployment level, so we can define them as macro-states.
- Such macro-states depend on the state and the behavior of all the agents.
- If we applied Hamilton's function to the economic system, an analytic solution would be just as difficult.

Going back to thermodynamics, we will have a function f whose domain is the set of micro-states and whose co-domain is the set of macro-states. This function has time as a variable, through the dependence of coordinates and momenta on time.

To simplify this calculus, W. Gibbs (1914) replaced the temporal average of this function with a statistical average computed over a so-called statistical ensemble. We have to imagine a set of identical systems, each of which represents a particular dynamic state of the original system at a specific instant of time; taken together, this set represents, at a single moment, the progressive evolution of the original system. The statistical ensemble has a specific distribution function, which stands for the density of the identical systems over the interval we consider, so the average of a macro-state is calculated as the average over this distribution.

We can consider the statistical average equal to the temporal average if we assume the ergodic hypothesis to be correct. This property will be assumed true in this analysis, and we will use the Gamma distribution function.

It should therefore be clear that this analysis does not aim to find a better method as an alternative to the OLS econometric method. It aims only to obtain a relation (the function f) between the agents' states and impulses and an economic system property that we want to analyze through a model. Something in this direction has already been done.

2 Distributed lagged model

To analyze relations between macro-economic variables, econometric models often use lagged (past-period) values of the explanatory variables, with the idea that a variation of one of these variables propagates its effects on the dependent variable over a time interval:

Time period: t, t+1, t+2, t+3
Explanatory variable: changes at time t
Effect on dependent variable: spread over t, t+1, t+2, t+3

This relation can assume the structure of a linear form:

Y_t = δ_0 X_t + δ_1 X_{t-1} + ... + δ_s X_{t-s} + u_t    (Eq. 2.1)

where Y_t, the value at period t of the dependent variable, is affected by the sum of the effects of the current and lagged values of an explanatory variable X.

Starting from the empiric data, we have to follow these steps:

- determine which explanatory variables influence the dependent variable Y;
- determine how long the maximum time interval of this influence (s) is;
- estimate the parameters δ_i.

These parameters are estimated from the empiric data, and we do not assume any particular "a priori" relation between them. In the literature, however, there exist models which define an "a priori" relation between the δ_i. These models describe the way a variation of the independent variable affects the dependent variable values.
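As a concrete sketch of Eq. 2.1, the model can be estimated by stacking the current and lagged values of X into a regressor matrix and running OLS on it. The data below are simulated and all names are illustrative, not part of the thesis:

```python
import numpy as np

# Build the regressor matrix [X_t, X_{t-1}, ..., X_{t-s}] for Eq. 2.1
def lagged_design(x, s):
    T = len(x)
    return np.column_stack([x[s - i : T - i] for i in range(s + 1)])

rng = np.random.default_rng(0)
x = rng.normal(size=200)
delta = np.array([0.5, 0.3, 0.2])               # "true" lag coefficients, s = 2
y = lagged_design(x, 2) @ delta + 0.1 * rng.normal(size=198)

# OLS estimate of the s + 1 parameters delta_i
d_hat, *_ = np.linalg.lstsq(lagged_design(x, 2), y, rcond=None)
```

With s lags, the first s observations are lost to alignment, which is also what happens in the empiric example of chapter 3 (494 observations "after adjustments").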

2.1 Koyck (1954) distributed lagged model

In many cases the number of parameters to estimate in Eq. 2.1 is very high; it depends on the number of explanatory variables and on the number of lagged values we use for each one. To avoid this, one can postulate a scheme with distributed lags of both Y and X, where the sum of the lag orders of the two variables is less than s:

A(L) Y_t = B(L) X_t + u_t    (Eq. 2.2)

where A(L) = 1 - α_1 L - ... - α_p L^p and B(L) = β_0 + β_1 L + ... + β_q L^q, with p + q < s.

For Eq. 2.2 to be stationary, all the roots of A(L) must lie outside the unit circle.

Eq. 2.2 can be transformed into Y_t = A(L)^{-1} B(L) X_t + A(L)^{-1} u_t, where the lag weights δ_i of Eq. 2.1 are the coefficients of A(L)^{-1} B(L). If the polynomial A(L) is of the first degree, A(L) = 1 - λL, we obtain a Koyck distributed lagged model.

Y_t = λ Y_{t-1} + β_0 X_t + β_1 X_{t-1} + u_t    (Eq. 2.3)

If we compare the coefficients with those of Eq. 2.1:

δ_0 = β_0, δ_1 = β_1 + λ β_0, and δ_i = λ δ_{i-1} for i ≥ 2.

It is evident that the first two coefficients have no fixed relation between them, whereas the following coefficients decline exponentially, because the stationarity condition requires |λ| < 1.

If we increase the order of the polynomial B(L) while keeping A(L) of the first degree, we increase the number of "free" parameters before the remaining coefficients start to decline. If B(L) has order q, the recursive formula becomes δ_i = λ δ_{i-1} for i > q, and the number of "free" parameters will be q + 1.

The parameter estimates of the regressors could be imprecise if there are multicollinearities.
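The Koyck recursion above can be sketched in a few lines: once the q + 1 free coefficients are fixed, every further lag weight is λ times the previous one, so the tail decays exponentially. Names and values below are illustrative:

```python
# Koyck-type lag weights: after the q+1 "free" coefficients, each further
# weight is lambda times the previous one (delta_i = lam * delta_{i-1}).
def koyck_weights(free, lam, s):
    """free: the q+1 unconstrained delta_i; lam: AR root, |lam| < 1."""
    w = list(free)
    while len(w) < s + 1:
        w.append(lam * w[-1])
    return w

w = koyck_weights([0.4, 0.25], lam=0.5, s=5)
```

With |λ| < 1 the weights are summable, which is the stationarity condition stated for Eq. 2.2.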

2.2 Almon (1962) distributed lagged model

S. Almon (1962) developed a way to avoid the coefficient multicollinearity problem and, at the same time, to reduce the number of parameters to estimate. It consists in approximating the coefficients δ_i with a function f(i). This methodology is based on the Weierstrass theorem, which establishes that a continuous function on a closed interval can be approximated, with small errors, by a polynomial of suitable degree.

For example, if we consider a polynomial of the third degree, f(i) = a_0 + a_1 i + a_2 i^2 + a_3 i^3, the coefficients will be:

δ_0 = a_0
δ_1 = a_0 + a_1 + a_2 + a_3
...
δ_s = a_0 + a_1 s + a_2 s^2 + a_3 s^3

Substituting these coefficients in Eq. 2.1, we obtain a regression whose unknowns are a_0, a_1, a_2, a_3. In this way, we have only four parameters to estimate, against the s + 1 of the original model.
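The reduction above can be sketched by constructing the Almon regressors Z_{j,t} = Σ_i i^j X_{t-i} and regressing Y on them instead of on the individual lags; the data and names below are illustrative:

```python
import numpy as np

# Almon reduction: delta_i = a0 + a1*i + a2*i^2 + a3*i^3 turns Eq. 2.1 into a
# regression on degree+1 constructed variables Z_{j,t} = sum_i i^j * X_{t-i}.
def almon_regressors(x, s, degree=3):
    T = len(x)
    lags = np.column_stack([x[s - i : T - i] for i in range(s + 1)])   # X_{t-i}
    powers = np.vander(np.arange(s + 1), degree + 1, increasing=True)  # i^j
    return lags @ powers   # shape (T - s, degree + 1)

rng = np.random.default_rng(1)
x = rng.normal(size=300)
Z = almon_regressors(x, s=8)   # only a0..a3 need to be estimated now
```

Regressing on Z estimates the four a_j, from which all s + 1 lag coefficients δ_i can be reconstructed with the polynomial.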

We have to identify the right degree of the polynomial; this can be done in two ways:

- We can apply the statistical significance tests of the OLS estimates to the a_j coefficients, starting from a given number of regressors and decrementing it until we find the last significant term.
- We can apply the statistical significance tests of the OLS estimates to the δ_i coefficients of Eq. 2.1 for a given value of s.

Once a suitable degree has been obtained, the estimator can be calculated with the constrained OLS estimator. Almon introduced this methodology in 1962, before the inception of concepts like integration and co-integration between regressors: we have multicollinearity when there are unit roots and/or a trend.

In this analysis, we assume a more restrictive relation than the previous ones: we consider a connection between the coefficients which is represented by the Gamma distribution function.

3 Distributed Lag model: Gamma distribution relation between coefficients

3.1 Why a Gamma Distribution

In this analysis, we try to relate the empiric timeline of a dependent variable to a possible response function of the agents directly concerned by the changes in the explanatory variable value.

Coming back to the example of chapter 2, we suppose that at a certain instant t0 the explanatory variable value changes by ∆Xt0 and that the new value persists during the following periods, with ∆Xt = 0 for t < t0. The process will then be the following:

- a hypothetical gradual reaction of the agents over time (Fig. 2);
- the induced variations over time on the dependent variable Y (Fig. 3, Fig. 4).

Fig. 1 The explanatory variable value change ∆Xt0

Fig. 2 Agents response to the value change ∆Xt0

Fig. 3 Dependent variable change ∆Yt due to ∆Xt0

Fig. 4 Dependent variable cumulative change ∆Yt,t0

We model the hypothetical agents' response function as a Gamma distribution function, which depends on two parameters: a shape parameter and a scale parameter. Using different values of these two parameters, we can obtain various function forms which, in turn, represent different response behaviors of the agents (Appendix C – Gamma Distribution).

We will consider the agents' response regardless of whether the explanatory variable is deterministic or not. So, for simplicity, we will treat the explanatory variables as deterministic.

(14)

12

3.2 Dynamic model with two variables and one explanatory variable change

We can start with the simple model without intercept, ∆Y = c ∆X, where ∆X is the variation of the explanatory variable, ∆Y is the variation of the dependent variable, and c is the slope, which represents the marginal change of Y due to a variation of X. Time does not appear in this relation because the variation is considered in a long-term stationary state.

Now we suppose we want to make inference on the dependent variable change through a time series with lagged explanatory variable changes. We start with the following assumptions:

- The marginal variation coefficient c is constant through time.
- The increment of the dependent variable is equal across all the agents for the same increase of the explanatory variable.
- The value changes of the explanatory variable are independent of each other.

Relating the value changes of the dependent variable to the value changes of the explanatory variable within the interval s, we get the theoretical model

∆Y_t = c_0 ∆X_t + c_1 ∆X_{t-1} + ... + c_s ∆X_{t-s},

or, in general,

∆Y_t = Σ_{i=0}^{s} c_i ∆X_{t-i}.    (Eq. 3.1)

where Σ_{i=0}^{s} c_i = c. In this case, after an explanatory variable value change ∆X, the full marginal variation c ∆X of the dependent variable is reached after the period t + s.

So, if the variation occurs only at the instant t - s and the new value remains constant through the following periods, Eq. 3.1 becomes

∆Y_t = c_s ∆X_{t-s},    (Eq. 3.2)

and, applying it recursively over the periods from t - s to t, the cumulative variation of the dependent variable within the time interval s can be summarized as

∆Y_{t,t-s} = Σ_{i=0}^{s} c_i ∆X_{t-s} = c ∆X_{t-s}.    (Eq. 3.3)

We can also express the single-period change in terms of the reacting agents:

∆Y_t = c (N_t / N) ∆X_{t-s},    (Eq. 3.4)

with the two new variables:

- N: the number of agents that will react to the variation;
- N_t: the number of agents that react to the variation at the instant t.

Now we assume that the ratios N_t/N are distributed over the interval s following the Gamma distribution function g(t), with shape parameter k and scale parameter θ.

It is possible to determine the interval value s by considering the time span beyond which the explanatory variable value changes no longer affect the dependent variable value at the instant t in any meaningful way. Given the shape and scale parameters, we can set s to the point beyond which the cumulative Gamma distribution exceeds 95% (Fig. 5). In this way, we obtain the first instant t - s beyond which we no longer have to consider the explanatory variable value changes.
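The 95% rule above can be sketched with the Gamma quantile function; scipy's parametrization (`a` = shape k, `scale` = θ) is assumed, and the parameter values are those estimated later for the consumption example:

```python
import math
from scipy.stats import gamma

# s = first whole period by which the cumulative Gamma response passes 95%.
k, theta = 0.68, 3.56    # parameter values estimated in the thesis example
s = math.ceil(gamma.ppf(0.95, a=k, scale=theta))
```

With these values s lands in the neighborhood of the s = 8 interval length used for the consumption model.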


Fig. 5 Interval s with the value of 95% of the cumulative variation ∆Yt,t0

Moreover, we assume the following property:

- The Gamma function is independent of the time instant t at which the explanatory variable variation occurs. For this reason, the function will always have the same shape and scale parameter values.

From Eq. 3.4, substituting the ratio N_t/N with the Gamma density, we obtain ∆Y_t = c g(t) ∆X_{t-s}.

3.2.1 Estimation

To estimate the shape parameter k and the scale parameter θ, we can use the following properties of the Gamma distribution function:

E(g(t)) = k θ

and

V(g(t)) = k θ²

If we consider the observed ratios N_t/N over the interval s, their mean and variance give, from the two properties above,

k θ = E(N_t/N) and k θ² = V(N_t/N) for i = 0, ..., s,

so that

θ = V(N_t/N) / E(N_t/N) for i = 0, ..., s

and

k = E(N_t/N)² / V(N_t/N) for i = 0, ..., s.
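The two moment conditions yield a simple method-of-moments estimator: θ = V/E and k = E²/V. A sketch on simulated Gamma draws (a stand-in for the observed ∆Y-based ratios in the text; names are illustrative):

```python
import numpy as np

# Method of moments for the Gamma parameters: with E = k*theta and
# V = k*theta**2, theta = V/E and k = E**2/V.
def gamma_moments(sample):
    m, v = np.mean(sample), np.var(sample)
    return m * m / v, v / m    # (k, theta)

rng = np.random.default_rng(2)
draws = rng.gamma(shape=0.68, scale=3.56, size=100_000)
k_hat, theta_hat = gamma_moments(draws)
```

On a large simulated sample the estimator recovers the true shape and scale closely; on the short empiric intervals of section 3.3 the same formulas apply, with the normalization discussed there.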


3.3 Dynamic model with two variables and multiple explanatory variable changes

What happens if the explanatory variable changes many times within the interval s (Fig. 6)?

Fig. 6 Multiple explanatory variable changes into the interval s

Starting from Eq. 3.1 and summing the responses generated by each change, we obtain

∆Y_t = c Σ_{i=0}^{s} (N_{t,i}/N) ∆X_{t-i},    (Eq. 3.5)

where N_{t,i} represents the number of agents which react at the time t to the explanatory variable change occurred at the time t - i. We assume this variable is distinct for every explanatory variable change.

If we suppose that the agents' reaction is not instantaneous but is distributed through time as a Gamma distribution function g(t), each change generates its own response path g_i(t), where g_i(t) is the function generated by the explanatory variable change at the instant t - i. We obtain

∆Y_t = c Σ_{i=0}^{s} g_i(t) ∆X_{t-i},    (Eq. 3.6)

with every g_i sharing the same shape and scale parameters. We will call this the "Gamma model".

3.3.1 Estimation

The model in Eq. 3.5 is a discrete model, not a continuous one, so the estimation process will be as precise as the time unit of measurement is small, e.g. one day rather than a quarter.

(22)

20

To estimate the shape parameter k and the scale parameter θ, we have to consider an interval p=tmax-tmin where:

- tmax is the instant time when the last explanatory variable change runs out of effect without new changes

- tmin is the start of the interval p so that there will not be present any explanatory variable changes until the instant time tmin-s.

Fig. 7 “Independent” Interval

We call this interval “independent” (Fig. 7) in the sense that any explanatory variable change outside the interval doesn’t affect the dependent variable values inside this interval.

From Eq. 3.5 we can extract (Eq. 3.7) the decomposition of each ∆Y_{t-i} into the contributions of the individual changes.

Now the shape parameter k and the scale parameter θ can be estimated assuming that the Gamma distribution function does not depend on the time t, so that it is the same for every change within the interval p.

Tab. 1 ∆Yt-i decomposition

Since the distribution is the same for every change, if we sum the terms of Tab. 1 by row, we obtain, for i = 1, ..., p,

E(∆Y_i) = k θ    (Eq. 3.8)

and

V(∆Y_i) = k θ²,

so, by replacing with Eq. 3.8, we obtain

θ = V(∆Y_i) / E(∆Y_i) for i = 1, ..., p

and

k = E(∆Y_i)² / V(∆Y_i) for i = 1, ..., p.

As we can see, the θ expression depends on the number of explanatory variable changes within the interval p: the larger the interval, the closer the result is to 0. So we need to modify the expression so that it can be considered independent of the number of changes, for example by rescaling it with the interval dimension:

θ = (V(∆Y_i) / E(∆Y_i)) · p for i = 1, ..., p.

If the estimation interval contains several independent intervals p_i, the estimate is obtained as the average of the parameters (E(k_i) and E(θ_i)) calculated for each interval p_i. The "normalization" of the scale parameter θ is obtained through multiplication by the maximum independent interval dimension.

3.3.2 Forecast

If we assume that the effect is not simultaneous with the cause, the forecast process can be considered a conditional forecast because, at the instant t+1, the effect on the dependent variable of an explanatory variable change occurring at t+1 is null. Indeed the dependent variable at the instant t+1 depends only on the explanatory variable changes at the previous instants t, t-1, etc. For these reasons, the one-step-ahead forecast is quite reliable, because we already know the last explanatory variable values.

At the same time, we can use a "what if" analysis: we can verify what will happen to the dependent variable at the instant t+h due to an explanatory variable change at the instant t (∆X_t).

Using the Gamma model, ∆Y_{t+h} depends on ∆X_t through the Gamma weights g(h).

To verify the forecast reliability, or to choose the best among two or more different models, we can use a loss function like the Mean of Squared Forecast Errors (MSFE):

MSFE = (1/H) Σ_{h=1}^{H} (Y_{T+h} - Ŷ_{T+h|T})²,

where the optimal forecasts Ŷ_{T+h|T} are the expectations conditioned on the information set Ω_T that we have at the instant T: Ŷ_{T+h|T} = E(Y_{T+h} | Ω_T).

If the dynamic model includes the contemporaneous term ∆X_t, we have the problem of the instantaneous effect of the variable X on the variable Y at the time t.

If we consider the one-step-ahead forecast, we do not know the value ∆X_{t+1}, so we have to follow these steps:

- define a stationary AR(n) model for the X variable, where u_t is a white noise variable;
- forecast the explanatory variable at the instant t+1 from this model;
- finally, plug the forecasted value into the dynamic model to forecast the dependent variable at t+1.
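The three steps above can be sketched as follows, assuming an AR(1) for the explanatory change and illustrative lag weights (all names and values are hypothetical, not the thesis estimates):

```python
import numpy as np

# Simulate an AR(1) explanatory change dx_t = phi*dx_{t-1} + u_t
rng = np.random.default_rng(3)
phi = 0.4
dx = np.zeros(500)
for t in range(1, 500):
    dx[t] = phi * dx[t - 1] + rng.normal()

# Step 1: fit the AR(1) coefficient by OLS
phi_hat = (dx[1:] @ dx[:-1]) / (dx[:-1] @ dx[:-1])

# Step 2: forecast the explanatory change one step ahead
dx_next = phi_hat * dx[-1]

# Step 3: one-step-ahead forecast of the dependent change using the weights
delta = np.array([0.11, 0.06, 0.05])
lags_next = np.concatenate(([dx_next], dx[-1:-3:-1]))  # [dx_{t+1}, dx_t, dx_{t-1}]
dy_forecast = float(delta @ lags_next)
```

Only the contemporaneous term needs the auxiliary AR forecast; the lagged terms are already known at time t, which is why the one-step-ahead forecast is comparatively reliable.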

3.3.3 An example: the Consumption model

We now consider C_t as the real consumption at the instant t and I_t as the real income at the same instant.

In the literature, in the IS/LM model, the consumption component is represented as a linear function of disposable income. In particular, this equation can be written as C = c Y_d, where c is the marginal propensity to consume and Y_d is the disposable income, given by the income Y minus the taxes T net of transfers.

If we consider the relation between these two variables not in the stationary state but in dynamic form with time series, without the constant term and in first differences, we obtain a distributed lag model in ∆C_t and ∆I_t.

In this case, we have considered:

- the seasonally adjusted time series of real monthly consumption in the United States from January 1959 to December 2016;
- the seasonally adjusted time series of real monthly income for the same period.

Using the empiric data, we have obtained a stationary dynamic model like

∆C_t = δ_0 ∆I_t + δ_1 ∆I_{t-1} + ... + δ_8 ∆I_{t-8} + u_t, with u_t = ρ u_{t-1} + ε_t.

With the OLS statistical regression, we obtained the parameter values in Tab. 2:

Dependent Variable: DREALCONS
Method: Least Squares
Date: 12/28/17  Time: 16:53
Sample (adjusted): 1959M11 2000M12
Included observations: 494 after adjustments
Convergence achieved after 5 iterations

Variable    Coefficient  Std. Error  t-Statistic  Prob.
DINC        0.112368     0.016722    6.719912     0.0000
DINC(-1)    0.062307     0.016691    3.732908     0.0002
DINC(-2)    0.049185     0.016650    2.954085     0.0033
DINC(-3)    0.031966     0.016692    1.915034     0.0561
DINC(-4)    0.024628     0.016524    1.490423     0.1368
DINC(-5)    0.019716     0.016720    1.179160     0.2389
DINC(-6)    0.009127     0.016698    0.546611     0.5849
DINC(-7)    0.040381     0.016747    2.411211     0.0163
DINC(-8)    0.037657     0.016781    2.243946     0.0253
AR(1)       -0.217655    0.044411    -4.900978    0.0000


R-squared 0.137572 Mean dependent var 5.863040

Adjusted R-squared 0.121536 S.D. dependent var 13.43602

S.E. of regression 12.59311 Akaike info criterion 7.924211

Sum squared resid 76755.78 Schwarz criterion 8.009283

Log likelihood -1947.280 Hannan-Quinn criter. 7.957611

Durbin-Watson stat 2.002841

Inverted AR Roots -.22

Tab. 2 OLS statistical regression results for the consumption model

The Gamma model will be

∆C_t = c Σ_{i=0}^{8} g(i) ∆I_{t-i},

where g(i) is the Gamma distribution weight of lag i. With the empiric data, using the method of paragraph 3.3.1, we have obtained the following parameter values:

- s (interval length) = 8
- k (shape parameter) = 0.68
- θ (scale parameter) = 3.56
- c (propensity to consume) = 0.27

To explore other possibilities, we have also forecasted using other shape and scale parameter values:

- s = 8, k = 3.5, θ = 1, c = 0.27
- s = 8, k = 0.68, θ = 1.78, c = 0.27
- s = 8, k = 0.68, θ = 0.2, c = 0.27


The distribution functions and the cumulative distribution functions for each set of parameter values are shown in Fig. 8:

(a) (b)

(c) (d)

Fig. 8 Distribution function and cumulative distribution function using parameters (a) K=0.68,θ =3.56, (b) K=3.5,θ =1, (c) K=0.68,θ =1.78, (d) K=0.68,θ =0.2

The relation between the OLS regression parameters and the Gamma model (DG) parameters is represented in Tab. 3:

Lag   OLS       DG K=0.68,θ=3.56  DG K=3.5,θ=1  DG K=0.68,θ=1.78  DG K=0.68,θ=0.2  g(t) K=0.68,θ=3.56  g(t) K=3.5,θ=1  g(t) K=0.68,θ=1.78  g(t) K=0.68,θ=0.2
t     0.112368  0.112971          0.010843      0.163054          0.269112         0.4184              0.0402          0.6039              0.9967
t-1   0.062307  0.050083          0.048616      0.053391          0.000882         0.1855              0.1801          0.1977              0.0033
t-2   0.049185  0.031853          0.064808      0.025593          0                0.1180              0.2400          0.0948              0
t-3   0.031966  0.021538          0.055931      0.013058          0                0.0798              0.2072          0.0484              0
t-4   0.024628  0.014987          0.038886      0.006860          0                0.0555              0.1440          0.0254              0
t-5   0.019716  0.010605          0.023764      0.003665          0                0.0393              0.0880          0.0136              0
t-6   0.009127  0.007588          0.013332      0.001980          0                0.0237              0.0494          0.0073              0
t-7   0.040381  0.005471          0.007038      0.001078          0                0.0203              0.0261          0.0040              0
t-8   0.037657  0.003968          0.003550      0.000591          0                0.0147              0.0131          0.0022              0

Tab. 3 δi Coefficients Estimations
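The DG columns of Tab. 3 appear to be c times the Gamma CDF increment over each lag, δ_i = c·(F(i+1) − F(i)); a sketch with the estimated parameters reproduces the first column to good accuracy (scipy's parametrization, `a` = shape and `scale` = θ, is assumed):

```python
from scipy.stats import gamma

# delta_i = c * (F(i+1) - F(i)), with F the Gamma CDF
c, k, theta = 0.27, 0.68, 3.56   # values estimated in paragraph 3.3.1
F = lambda x: gamma.cdf(x, a=k, scale=theta)
delta = [c * (F(i + 1) - F(i)) for i in range(9)]
# delta[0] lands near the 0.112971 reported for lag t in Tab. 3
```

The sum of the nine weights telescopes to c·F(9), i.e. roughly 95% of the long-run propensity c, consistent with the choice of s in paragraph 3.2.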

The loss function of the one-step-ahead forecast for each model is represented in Tab. 4:

Period           Model            Loss Gamma  Loss OLS  Perc. (%)
2001M1-2002M12   K=0.68, θ=3.56   151.1618    151.7484  0.387
                 K=3.5,  θ=1      149.4732              1.499
                 K=0.68, θ=1.78   151.6267              0.080
                 K=0.68, θ=0.2    152.5176              -0.507
2003M1-2004M12   K=0.68, θ=3.56   90.3947     87.4551   -3.361
                 K=3.5,  θ=1      88.9344               -1.691
                 K=0.68, θ=1.78   91.4145               -4.527
                 K=0.68, θ=0.2    94.5267               -8.086
2005M1-2006M12   K=0.68, θ=3.56   106.1513    101.6906  -4.386
                 K=3.5,  θ=1      104.9423              -3.198
                 K=0.68, θ=1.78   109.3819              -7.563
                 K=0.68, θ=0.2    121.7992              -19.774
2007M1-2008M12   K=0.68, θ=3.56   80.5149     93.2988   13.702
                 K=3.5,  θ=1      88.5242               5.118
                 K=0.68, θ=1.78   80.5568               13.657
                 K=0.68, θ=0.2    89.5937               3.971
2009M1-2010M12   K=0.68, θ=3.56   95.4859     92.3256   -3.423
                 K=3.5,  θ=1      91.9515               0.405
                 K=0.68, θ=1.78   98.5381               -6.729
                 K=0.68, θ=0.2    107.3029              -16.222
2011M1-2012M12   K=0.68, θ=3.56   50.0624     54.1514   7.551
                 K=3.5,  θ=1      52.5068               3.037
                 K=0.68, θ=1.78   50.4698               6.799
                 K=0.68, θ=0.2    59.5996               -10.061
2013M1-2014M12   K=0.68, θ=3.56   76.6187     86.4012   11.322
                 K=3.5,  θ=1      84.0227               2.753
                 K=0.68, θ=1.78   79.8709               7.558
                 K=0.68, θ=0.2    109.0184              -26.177
2015M1-2016M12   K=0.68, θ=3.56   58.5391     57.1666   -2.401
                 K=3.5,  θ=1      59.4349               -3.968
                 K=0.68, θ=1.78   58.3887               -2.138
                 K=0.68, θ=0.2    58.1044               -1.641
2001M1-2016M12   K=0.68, θ=3.56   217.4286    221.8944  2.013
                 K=3.5,  θ=1      221.2808              0.277
                 K=0.68, θ=1.78   222.0372              -0.064
                 K=0.68, θ=0.2    250.3126              -12.807

Tab. 4 Loss function MSFE results

As we can see, over the whole sample we obtain a better forecast than OLS using the parameter values K=0.68, θ=3.56 obtained with the Gamma model estimation process (217 vs 221). In some intervals, however, other settings forecast better.

For example, in the periods 2005-2006 and 2009-2010 the values K=3.5, θ=1, with the Gamma distribution shown in Fig. 8(b), behave better than both the OLS regression estimation and the Gamma model estimation with the parameters K=0.68, θ=3.56. If we compare the two graphs (a) and (b) in Fig. 8, we could suppose that in those periods the agents do not react as soon as the explanatory variable changes, but are more cautious.

The shape of the forecasting graph, as we might expect, turns out to be very similar between the two methods (Fig. 9 – Fig. 13).

Fig. 9 One step ahead forecasting. Period 200501-200612

Fig. 10 One step ahead forecasting. Period 200701-200812

Fig. 11 One step ahead forecasting. Period 200901-201012

Fig. 12 One step ahead forecasting. Period 201101-201212

Fig. 13 One step ahead forecasting. Period 201301-201412

4 Conclusions

Although we have chosen a model whose explanatory variable, income, is surely not deterministic, we have been able to identify some independent intervals with which to estimate the shape and scale parameters. With these parameters, the one-step-ahead forecasts of the Gamma model are very similar to the corresponding forecasts obtained with the OLS regression method. Furthermore, with the Gamma model we can obtain the propensity-to-consume parameter and the interval length of the time series independently of the OLS method.

Nevertheless, the model we have considered has some limits with respect to the claim that we can find a function describing the relationship between the agents' behavior and the state and dynamics of an economic process:

- the model has the same variables, consumption and income, as both micro-states and macro-states;
- the model is limited to one explanatory variable.

The analysis should be extended by considering a model whose dependent variable type (macro-state) differs from the explanatory variable type (micro-state): for example, an investment model with explanatory variables like personal income (salary and profit) and willingness to work. If the model has more than one explanatory variable, we surely have to deal with the problem of correlation between them. In this case, we could use a VAR model, or better a Gamma VAR model, where the regressor coefficients would be given by different Gamma distribution values, with a different Gamma distribution for each relation between variables.

Another limit of the consumption model is that income is not a deterministic variable. For this reason it is difficult to make two-step-ahead or three-step-ahead forecasts, and it is difficult to say whether the Gamma model forecasts better than the OLS model in a what-if analysis.

Bibliography

L.M. Koyck, Distributed Lags and Investment Analysis, North-Holland, Amsterdam, 1954.

S. Almon, "The Distributed Lag between Capital Appropriations and Expenditures", Econometrica, 30, 1962, pp. 407-423.

D. Bernoulli, Hydrodynamica, 1738.

W. Gibbs, Elementary Principles in Statistical Mechanics, Developed with Especial Reference to the Rational Foundation of Thermodynamics, New Haven, Yale University Press, 1914.

Figure Index

Fig. 1 The explanatory variable value change ∆Xt0
Fig. 2 Agents response to the value change ∆Xt0
Fig. 3 Dependent variable change ∆Yt due to ∆Xt0
Fig. 4 Dependent variable cumulative change ∆Yt,t0
Fig. 5 Interval s with the value of 95% of the cumulative variation ∆Yt,t0
Fig. 6 Multiple explanatory variable changes into the interval s
Fig. 7 "Independent" Interval
Fig. 8 Distribution function and cumulative distribution function using parameters (a) K=0.68, θ=3.56, (b) K=3.5, θ=1, (c) K=0.68, θ=1.78, (d) K=0.68, θ=0.2
Fig. 9 One step ahead forecasting. Period 200501-200612
Fig. 10 One step ahead forecasting. Period 200701-200812
Fig. 11 One step ahead forecasting. Period 200901-201012
Fig. 12 One step ahead forecasting. Period 201101-201212
Fig. 13 One step ahead forecasting. Period 201301-201412
Fig. 14 One step ahead forecasting. Period 200101-200212
Fig. 15 One step ahead forecasting. Period 200301-200412
Fig. 16 One step ahead forecasting. Period 200501-200612
Fig. 17 One step ahead forecasting. Period 200701-200812
Fig. 18 One step ahead forecasting. Period 200901-201012
Fig. 19 One step ahead forecasting. Period 201101-201212
Fig. 20 One step ahead forecasting. Period 201301-201412
Fig. 21 One step ahead forecasting. Period 201501-201612
Fig. 22 One step ahead forecasting. Period 200101-201612
Fig. 23 Gamma Distribution Properties (Wikipedia - https://it.wikipedia.org/wiki/Distribuzione_Gamma)

Tables Index

Tab. 1 ∆Yt-i decomposition
Tab. 2 OLS statistical regression results for the consumption model
Tab. 3 δi Coefficients Estimations
Tab. 4 Loss function MSFE results

Appendix A – Consumption model: Empiric data

Real Personal Consumption Expenditures, Billions of Chained 2009 Dollars, Monthly, Seasonally Adjusted Annual Rate
Real Disposable Personal Income, Billions of Chained 2009 Dollars, Monthly, Seasonally Adjusted Annual Rate

Estimation Command:

=========================

LS DREALCONS DINC DINC(-1) DINC(-2) DINC(-3) DINC(-4) DINC(-5) DINC(-6) DINC(-7) DINC(-8) AR(1)

Estimation Equation:

=========================

DREALCONS = C(1)*DINC + C(2)*DINC(-1) + C(3)*DINC(-2) + C(4)*DINC(-3) + C(5)*DINC(-4) + C(6)*DINC(-5) + C(7)*DINC(-6) + C(8)*DINC(-7) + C(9)*DINC(-8) + [AR(1)=C(10)]

Substituted Coefficients:

=========================

DREALCONS = 0.112367758776*DINC + 0.0623065858687*DINC(-1) + 0.0491854193544*DINC(-2) + 0.0319655584903*DINC(-3) + 0.0246276816538*DINC(-4) + 0.0197155627758*DINC(-5) + 0.00912716610803*DINC(-6) + 0.0403813300198*DINC(-7) + 0.0376565014229*DINC(-8) + [AR(1)=-0.217654964705]
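The substituted-coefficients equation regresses DREALCONS on the current value and eight lags of DINC. As a minimal sketch of how such a lag-regressor matrix is assembled before estimation (pure Python; `lag_matrix` is an illustrative helper, not part of the thesis code):

```python
def lag_matrix(x, max_lag):
    """Build one row [x[t], x[t-1], ..., x[t-max_lag]] per usable period.

    The first max_lag observations are dropped, which mirrors the
    'Sample (adjusted)' behaviour of regression packages such as EViews.
    """
    return [[x[t - i] for i in range(max_lag + 1)]
            for t in range(max_lag, len(x))]

# toy series 0, 1, ..., 11 with 8 lags, as in the consumption model
x = list(range(12))
X = lag_matrix(x, 8)
print(len(X))   # 4 usable observations (12 - 8)
print(X[0])     # [8, 7, 6, 5, 4, 3, 2, 1, 0]
```

Each row of X then lines up with one observation of the dependent variable, so least squares can be run on the adjusted sample.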

Dependent Variable: DREALCONS
Method: Least Squares

Date: 12/28/17 Time: 16:53

Sample (adjusted): 1959M11 2000M12
Included observations: 494 after adjustments
Convergence achieved after 5 iterations

Variable Coefficient Std. Error t-Statistic Prob.

DINC       0.112368   0.016722   6.719912   0.0000
DINC(-1)   0.062307   0.016691   3.732908   0.0002
DINC(-2)   0.049185   0.016650   2.954085   0.0033
DINC(-3)   0.031966   0.016692   1.915034   0.0561
DINC(-4)   0.024628   0.016524   1.490423   0.1368
DINC(-5)   0.019716   0.016720   1.179160   0.2389
DINC(-6)   0.009127   0.016698   0.546611   0.5849
DINC(-7)   0.040381   0.016747   2.411211   0.0163
DINC(-8)   0.037657   0.016781   2.243946   0.0253
AR(1)     -0.217655   0.044411  -4.900978   0.0000

R-squared 0.137572 Mean dependent var 5.863040

Adjusted R-squared 0.121536 S.D. dependent var 13.43602

S.E. of regression 12.59311 Akaike info criterion 7.924211

Sum squared resid 76755.78 Schwarz criterion 8.009283

Log likelihood -1947.280 Hannan-Quinn criter. 7.957611

Durbin-Watson stat 2.002841

(43)

41

[Figure omitted: time-series plot over 1960-2000, vertical axes from -80 to 80]

(44)

42

Date: 01/14/18 Time: 19:09

Sample: 1959M01 2000M12
Included observations: 494

Q-statistic probabilities adjusted for 1 ARMA term

Lag      AC      PAC     Q-Stat
  1   -0.004  -0.004    0.0085
  2   -0.006  -0.006    0.0276
  3    0.014   0.014    0.1317
  4   -0.007  -0.007    0.1557
  5   -0.003  -0.003    0.1603
  6    0.041   0.041    1.0088
  7    0.040   0.040    1.7977
  8    0.083   0.084    5.2815
  9    0.043   0.044    6.2118
 10   -0.041  -0.040    7.0699
 11    0.012   0.011    7.1491
 12   -0.004  -0.006    7.1591
 13   -0.071  -0.073    9.7221
 14   -0.079  -0.091   12.874
 15    0.064   0.052   14.979
 16   -0.036  -0.043   15.650
 17   -0.037  -0.042   16.370
 18    0.003   0.004   16.375
 19    0.034   0.045   16.971
 20    0.072   0.088   19.668
 21   -0.080  -0.065   22.971
 22   -0.035  -0.018   23.609
 23   -0.066  -0.072   25.844
 24   -0.062  -0.065   27.864
 25    0.047   0.058   29.027
 26   -0.007  -0.025   29.055
 27    0.032   0.014   29.586
 28    0.073   0.071   32.391
 29    0.048   0.079   33.627
 30   -0.041  -0.028   34.512
 31   -0.096  -0.100   39.404
 32   -0.011   0.008   39.473
 33    0.031   0.037   39.991
 34   -0.055  -0.079   41.619
 35    0.015  -0.024   41.732
 36   -0.048  -0.066   42.944
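The Q-Stat column is the Ljung-Box statistic, Q(m) = T(T+2) Σ_{j=1}^m ρ̂_j² / (T−j), with T = 494 observations. A small Python illustration recomputing the first six entries from the rounded AC values above (`ljung_box` is an illustrative name; agreement is only approximate because the autocorrelations are printed to three decimals):

```python
def ljung_box(acs, T):
    """Cumulative Ljung-Box statistics Q(m) = T*(T+2)*sum_{j<=m} ac_j**2/(T-j)."""
    q, out = 0.0, []
    for j, ac in enumerate(acs, start=1):
        q += T * (T + 2) * ac ** 2 / (T - j)
        out.append(q)
    return out

# first six autocorrelations from the correlogram, rounded to 3 decimals
acs = [-0.004, -0.006, 0.014, -0.007, -0.003, 0.041]
qs = ljung_box(acs, 494)
print([round(q, 3) for q in qs])   # [0.008, 0.026, 0.124, 0.148, 0.153, 0.997]
```

These track the printed column (0.0085, 0.0276, ..., 1.0088); the small gaps come from the rounding of the AC values.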

(45)

43

Breusch-Godfrey Serial Correlation LM Test:

F-statistic 0.042690 Prob. F(1,483) 0.8364

Obs*R-squared 0.000000 Prob. Chi-Square(1) 1.0000

Test Equation:

Dependent Variable: RESID
Method: Least Squares
Date: 01/14/18 Time: 19:10
Sample: 1959M11 2000M12
Included observations: 494

Presample missing value lagged residuals set to zero.

Variable Coefficient Std. Error t-Statistic Prob.

DINC       9.79E-05   0.016745   0.005849   0.9953
DINC(-1)   0.000135   0.016721   0.008098   0.9935
DINC(-2)  -4.02E-05   0.016668  -0.002411   0.9981
DINC(-3)  -3.90E-06   0.016708  -0.000234   0.9998
DINC(-4)  -3.94E-05   0.016541  -0.002383   0.9981
DINC(-5)  -3.19E-05   0.016737  -0.001904   0.9985
DINC(-6)  -3.24E-05   0.016715  -0.001941   0.9985
DINC(-7)  -9.74E-06   0.016764  -0.000581   0.9995
DINC(-8)  -1.39E-05   0.016798  -0.000826   0.9993
AR(1)      0.040668   0.201788   0.201540   0.8404
RESID(-1) -0.042747   0.206889  -0.206616   0.8364

R-squared -0.001996 Mean dependent var 0.568551

Adjusted R-squared -0.022741 S.D. dependent var 12.46464

S.E. of regression 12.60558 Akaike info criterion 7.928172

Sum squared resid 76749.00 Schwarz criterion 8.021750

Log likelihood -1947.258 Hannan-Quinn criter. 7.964911

Durbin-Watson stat 1.998552

(46)

44

% Plot the Gamma density and cumulative distribution functions
x = 0:0.1:10; k = 0.68; fi = 3.56;
dgamma2(x, k, fi);

function yc = dgamma2(t, k, lambda)
y  = gampdf(t, k, lambda);    % density function
yc = gamcdf(t, k, lambda);    % cumulative distribution function
subplot(1,2,1)
plot(t, y), title('Gamma Distribution Density Function')
axis([0,10,0,1])
subplot(1,2,2)
plot(t, yc), title('Gamma Distribution Cumulative Function')
axis([0,10,0,1])

t = 1:10; k = 0.68; fi = 3.56;
prob = gamcdf(t,k,fi) - gamcdf(t-1,k,fi);

0.417360414319381 0.185664770377692 0.118176641418597
0.0799440083683454 0.0556427460465789
0.0393794478020935 0.0281771485941107
0.0203191133960160 0.0147379187397848
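Each value listed above is the probability mass the Gamma distribution assigns to a unit interval, prob(i) = F(i; k, θ) − F(i−1; k, θ). The same numbers can be cross-checked without the MATLAB Statistics Toolbox; the sketch below (Python standard library only; `gamma_cdf` and `lag_weights` are illustrative names) evaluates the CDF through the series expansion of the lower incomplete gamma function:

```python
import math

def gamma_cdf(x, k, theta):
    """CDF of Gamma(shape k, scale theta) via the series
    gamma(k, z) = z**k * exp(-z) * sum_{n>=0} z**n / (k*(k+1)*...*(k+n))."""
    if x <= 0:
        return 0.0
    z = x / theta
    term = 1.0 / k          # n = 0 term of the series
    total = term
    n = 0
    while term > 1e-16 * total:
        n += 1
        term *= z / (k + n)
        total += term
    return z ** k * math.exp(-z) * total / math.gamma(k)

def lag_weights(k, theta, n_lags):
    """Discretised gamma lag weights, as in prob = gamcdf(t,k,fi) - gamcdf(t-1,k,fi)."""
    return [gamma_cdf(i, k, theta) - gamma_cdf(i - 1, k, theta)
            for i in range(1, n_lags + 1)]

w = lag_weights(0.68, 3.56, 9)
print(round(w[0], 5))   # 0.41736, matching the first value above
```

With shape k < 1 the density is strictly decreasing, which is why the weights above fall off monotonically.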

(47)

45

x = 0:0.1:10; k = 3.5; fi = 1;
dgamma2(x, k, fi);

t = 1:10; k = 3.5; fi = 1;
prob = gamcdf(t,k,fi) - gamcdf(t-1,k,fi);

0.0401596312698984 0.180062960254386 0.240028058080158
0.207155447796250 0.144020435085858 0.0880145990050912
0.0493775150952934 0.0260649926662127

(48)

46

x = 0:0.1:10; k = 0.69; fi = 1.78;
dgamma2(x, k, fi);

t = 1:10; k = 0.69; fi = 1.78;
prob = gamcdf(t,k,fi) - gamcdf(t-1,k,fi);

0.597195834565554 0.200168950253805 0.0965288134622205
0.0494366247482407 0.0260366370494816
0.0139381822444925 0.00754195376904010
0.00411206178330603 0.00225473407654730

(49)

47

x = 0:0.1:10; k = 0.69; fi = 0.2;
dgamma2(x, k, fi);

t = 1:10; k = 0.69; fi = 0.2;
prob = gamcdf(t,k,fi) - gamcdf(t-1,k,fi);

0.997044012658503 0.00293953523656532 1.63534839697777e-05
9.80103633807516e-08 6.06748984388617e-10
3.82549547595090e-12 2.44249065417534e-14

(50)

48

Fig. 14 One step ahead Forecasting. Period 200101- 200212

Fig. 15 One step ahead Forecasting. Period 200301- 200412

Fig. 16 One step ahead Forecasting. Period 200501- 200612

[Figure panels omitted: each plots Mod.Gamma, DatiEmp (empirical data) and Mod.OLS over 24 months]

(51)

49

Fig. 17 One step ahead Forecasting. Period 200701- 200812

Fig. 18 One step ahead Forecasting. Period 200901- 201012

Fig. 19 One step ahead Forecasting. Period 201101- 201212

[Figure panels omitted: each plots Mod.Gamma, DatiEmp (empirical data) and Mod.OLS over 24 months]

(52)

50

Fig. 20 One step ahead Forecasting. Period 201301- 201412

Fig. 21 One step ahead Forecasting. Period 201501- 201612

Fig. 22 One step ahead Forecasting. Period 200101- 201612

[Figure panels omitted: each plots Mod.Gamma, DatiEmp (empirical data) and Mod.OLS]

(53)

51

(54)

52

Fig. 23 Gamma Distribution Properties (Wikipedia - https://it.wikipedia.org/wiki/Distribuzione_Gamma)

The cumulative distribution function is

$$F(x; k, \theta) = \frac{\gamma(k,\, x/\theta)}{\Gamma(k)}$$

where $\gamma$ is the lower incomplete gamma function and $\Gamma$ is the gamma function.

(55)

53

If $X \sim \Gamma(k, \theta)$ and $c > 0$, then $cX \sim \Gamma(k, c\theta)$,

and in general the moments are $\mathbb{E}[X^n] = \theta^n \, \Gamma(k+n)/\Gamma(k)$.

If $k = 1$, the distribution reduces to the exponential distribution with mean $\theta$.

If $X \sim \Gamma(k, \theta)$, then the expected value and the variance are $\mathbb{E}[X] = k\theta$ and $\mathrm{Var}(X) = k\theta^2$.

If $X_1, \dots, X_n$ are independent random variables with distribution $X_i \sim \Gamma(k_i, \theta)$, then the sum of the variables is a random variable with distribution $\sum_{i=1}^{n} X_i \sim \Gamma\!\left(\sum_{i=1}^{n} k_i, \theta\right)$.

The random variable chi-square with $v$ degrees of freedom coincides with the random variable $\Gamma(v/2, 2)$.

For integer values of $k$ (the Erlang distribution) the cumulative distribution function takes the closed form
$$F(x; k, \theta) = 1 - e^{-x/\theta} \sum_{i=0}^{k-1} \frac{(x/\theta)^i}{i!}.$$
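The integer-k closed form, F(x; k, θ) = 1 − e^{−x/θ} Σ_{i=0}^{k−1} (x/θ)^i / i!, can be checked numerically against direct trapezoidal integration of the Gamma density (a Python sketch; `erlang_cdf` and the other function names are illustrative):

```python
import math

def erlang_cdf(x, k, theta):
    """Closed-form Gamma CDF for integer shape k:
    F(x) = 1 - exp(-x/theta) * sum_{i=0}^{k-1} (x/theta)**i / i!"""
    z = x / theta
    return 1.0 - math.exp(-z) * sum(z ** i / math.factorial(i) for i in range(k))

def gamma_pdf(x, k, theta):
    """Gamma density f(x) = x**(k-1) * exp(-x/theta) / (Gamma(k) * theta**k)."""
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

def cdf_by_integration(x, k, theta, steps=20000):
    """Trapezoidal integral of the density on [0, x] (fine for integer k >= 2)."""
    h = x / steps
    total = 0.5 * (gamma_pdf(0.0, k, theta) + gamma_pdf(x, k, theta))
    total += sum(gamma_pdf(i * h, k, theta) for i in range(1, steps))
    return total * h

k, theta, x = 3, 2.0, 4.0
print(round(erlang_cdf(x, k, theta), 5),
      round(cdf_by_integration(x, k, theta), 5))   # both 0.32332
```

For k = 3, θ = 2 the closed form gives 1 − 5e⁻² ≈ 0.3233 at x = 4, and the numerical integral agrees to several decimal places.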
