Bayesian inference and forecasts with full range autoregressive time series models

Venkatesan D.
Department of Statistics, Annamalai University, Tamil Nadu, India, sit_san@hotmail.com

Sita Devi K.
Department of Agricultural Economics, Annamalai University, Tamil Nadu, India, sit_san@hotmail.com

Gallo M.
Department of Social Science, University of Naples – L'Orientale, Italy, mgallo@unior.it

Abstract: This paper describes Bayesian inference and forecasting as applied to the full range autoregressive (FRAR) model. The FRAR model provides an acceptable alternative to the existing methodology. The main advantage of the new method is that it completely avoids the problem of determining the order of the model, which arises in the existing methods.

Keywords: Full range autoregressive model, Posterior distribution, Bayesian analysis, Bayesian predictive distribution.

1. Introduction

The Bayesian approach to the analysis of the FRAR model consists in determining the posterior distribution of the parameters of the FRAR model and the predictive distribution of future observations. From the former, one makes posterior inferences about the parameters of the FRAR model, including the variance of the white noise. From the latter, one may forecast future observations. All these techniques are illustrated by Broemeling (1985) for autoregressive models. This paper develops the posterior and predictive distributions of the FRAR model, introduced by Venkatesan et al. (2008).

An outline of this paper is as follows. In section 2 the FRAR model is described and the stationarity conditions are given. In sections 3 and 4 the posterior analysis of the FRAR model is discussed and the predictive density of a single future observation is derived. In section 5 a summary and conclusions are given.

2. The Full Range Autoregressive Model

The FRAR model, introduced by Venkatesan et al. (2008) and defined by a discrete-time stochastic process $(X_t)$, is given by

$$X_t = \sum_{r=1}^{\infty} a_r X_{t-r} + e_t, \qquad t = 0, \pm 1, \pm 2, \ldots \tag{1}$$

where $a_r = \dfrac{k}{\alpha^r}\sin(r\theta)\cos(r\varphi)$; $k$, $\alpha$, $\theta$ and $\varphi$ are parameters, and $e_1, e_2, e_3, \ldots$ are independent and identically distributed normal random variables with mean zero and variance $\sigma^2 > 0$. That is, $(X_t \mid X_{t-1}, X_{t-2}, \ldots)$ has the normal distribution with mean $\sum_{r=1}^{\infty} a_r X_{t-r}$ and variance $\sigma^2$. It is assumed that the domain of the parameters is $S$, given by

$$S = \left\{ (k,\alpha,\theta,\varphi) : k \in \mathbb{R},\ \alpha \in (1,\infty),\ \theta \in [0,\pi),\ \varphi \in [0,\pi/2) \right\},$$

which ensures that the model is identifiable and that the FRAR model is asymptotically stationary up to order 2 provided $1-\alpha < k < \alpha-1$.

MTISD 2008 – Methods, Models and Information Technologies for Decision Support Systems

Università del Salento, Lecce, 18­20 September 2008   
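As an illustration of the model just defined, the following sketch simulates an FRAR series. The parameter values are hypothetical, the infinite sum is truncated at a finite number of lags, and pre-sample values are set to zero (the same convention the paper adopts in section 3):

```python
import numpy as np

def frar_coefficients(k, alpha, theta, phi, n_terms):
    """a_r = (k / alpha**r) * sin(r*theta) * cos(r*phi), for r = 1..n_terms."""
    r = np.arange(1, n_terms + 1)
    return k * alpha ** (-r) * np.sin(r * theta) * np.cos(r * phi)

def simulate_frar(n, k, alpha, theta, phi, sigma=1.0, n_terms=50, seed=0):
    """Simulate X_t = sum_r a_r X_{t-r} + e_t, truncating the infinite sum
    at n_terms lags and taking pre-sample values to be zero."""
    rng = np.random.default_rng(seed)
    a = frar_coefficients(k, alpha, theta, phi, n_terms)
    x = np.zeros(n)
    for t in range(n):
        past = x[max(0, t - n_terms):t][::-1]   # X_{t-1}, X_{t-2}, ...
        x[t] = a[:len(past)] @ past + rng.normal(0.0, sigma)
    return x

# Hypothetical parameters inside the stationarity region: alpha > 1, 1 - alpha < k < alpha - 1
x = simulate_frar(500, k=0.8, alpha=2.0, theta=1.0, phi=0.5)
```

Because $|a_r| \le k\,\alpha^{-r}$, the coefficients decay geometrically, which is what makes the truncation harmless for moderate `n_terms`.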

   


The problem is to estimate the unknown parameters $k$, $\alpha$, $\theta$, $\varphi$ and $\sigma^2$, using the Bayesian methodology, on the basis of a past random realization of $\{X_t\}$, say $x = (x_1, x_2, \ldots, x_N)$.

3. The Posterior Analysis

The joint probability density of $X_1, X_2, \ldots, X_N$ is given by

$$P(x \mid \Theta) \propto \left(\frac{1}{\sigma^2}\right)^{N/2} \exp\left[-\frac{1}{2\sigma^2}\sum_{t=1}^{N}\left(x_t - k\sum_{r} a_r x_{t-r}\right)^2\right] \tag{2}$$

where $x = (x_1, x_2, \ldots, x_N)$, $\Theta = (k,\alpha,\theta,\varphi,\sigma^2)$ and $a_r = \dfrac{1}{\alpha^r}\sin(r\theta)\cos(r\varphi)$.

Here $P$ is used as a general notation for the probability density function of the random variables given within the parentheses following $P$, and $X_0, X_{-1}, X_{-2}, \ldots$ are the past realizations of $X_t$, which are unknown. Following Priestley (1981) and Broemeling (1985), these are assumed to be zero for the purpose of deriving the posterior distribution of $\Theta$. Therefore, the range of the index $r$, viz. 1 through $\infty$, reduces to 1 through $N$, and so, in the joint probability density function of the observations given by (2), the summation range 1 through $\infty$ can be replaced by 1 through $N$. By expanding the square in the exponent and simplifying, one gets

$$P(x \mid \Theta) \propto (\sigma^2)^{-N/2} \exp\left(-\frac{Q}{2\sigma^2}\right) \tag{3}$$

where $\Theta \in S$,

$$Q = T_{00} - 2k\sum_{r=1}^{N} a_r T_{0r} + k^2\left[\sum_{r=1}^{N} a_r^2 T_{rr} + 2\sum_{\substack{r<s \\ r,s=1}}^{N} a_r a_s T_{rs}\right], \qquad T_{rs} = \sum_{t=1}^{N} x_{t-r}\,x_{t-s}, \quad r,s = 0,1,\ldots,N.$$
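As a sanity check on the expansion of $Q$, the following sketch (hypothetical data and coefficients, with pre-sample values set to zero so that $x_{t-r} = 0$ whenever $t-r < 1$) verifies numerically that the expanded form equals the direct sum of squared residuals:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30
x = rng.normal(size=N)                  # x_1, ..., x_N stored as x[0..N-1]
a = 0.9 ** np.arange(1, N + 1)          # hypothetical coefficients a_1, ..., a_N
k = 0.4

def xv(t):
    """x_t with pre-sample values taken as zero (t < 1)."""
    return x[t - 1] if t >= 1 else 0.0

# Cross-products T_rs = sum_{t=1}^N x_{t-r} x_{t-s}, r,s = 0..N
T = np.array([[sum(xv(t - r) * xv(t - s) for t in range(1, N + 1))
               for s in range(N + 1)] for r in range(N + 1)])

# Q via the expansion: T_00 - 2k*B + k^2*A
B = sum(a[r - 1] * T[0, r] for r in range(1, N + 1))
A = (sum(a[r - 1] ** 2 * T[r, r] for r in range(1, N + 1))
     + 2 * sum(a[r - 1] * a[s - 1] * T[r, s]
               for r in range(1, N + 1) for s in range(r + 1, N + 1)))
Q_expanded = T[0, 0] - 2 * k * B + k ** 2 * A

# Q directly as the sum of squared residuals
Q_direct = sum((xv(t) - k * sum(a[r - 1] * xv(t - r) for r in range(1, N + 1))) ** 2
               for t in range(1, N + 1))
```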

The prior distributions for the parameters are assigned as follows:

1. $\alpha$ has the displaced exponential distribution with parameter $\beta$, that is, $P(\alpha) = \beta\exp[-\beta(\alpha-1)]$; $\alpha > 1$, $\beta > 0$.

2. $\sigma^2$ has the inverted gamma distribution with parameters $\nu$ and $\delta$, that is, $P(\sigma^2) \propto (\sigma^2)^{-(\delta+1)}\,e^{-\nu/\sigma^2}$; $\sigma^2 > 0$, $\nu > 0$, $\delta > 0$.

3. $k$, $\theta$ and $\varphi$ are uniformly distributed over their domain, that is, $P(k,\theta,\varphi) = C$, a constant; $0 \le \theta < \pi$, $0 \le \varphi < \pi/2$.

So, the joint prior density function of $\Theta$ $(\Theta \in S)$ is given by

$$P(\Theta) \propto \exp[-\beta(\alpha-1)]\,(\sigma^2)^{-(\delta+1)}\exp\left(-\frac{\nu}{\sigma^2}\right) \tag{4}$$

Using (3), (4) and Bayes' theorem, the joint posterior density of $k$, $\alpha$, $\theta$, $\varphi$ and $\sigma^2$ is obtained as

$$P(\Theta \mid x) \propto (\sigma^2)^{-N/2}\exp\left(-\frac{Q}{2\sigma^2}\right)\exp[-\beta(\alpha-1)]\,(\sigma^2)^{-(\delta+1)}\exp\left(-\frac{\nu}{\sigma^2}\right) \propto \exp[-\beta(\alpha-1)]\exp\left[-\frac{Q+2\nu}{2\sigma^2}\right](\sigma^2)^{-[(N/2)+\delta+1]} \tag{5}$$

Integrating (5) with respect to $\sigma^2$, the joint posterior density of $k$, $\alpha$, $\theta$ and $\varphi$ is obtained as

$$P(k,\alpha,\theta,\varphi \mid x) \propto \exp[-\beta(\alpha-1)]\,(Q+2\nu)^{-[(N/2)+\delta]} \tag{6}$$

where $\Theta \in S$,

$$Q + 2\nu = Ak^2 - 2Bk + T_{00} + 2\nu = C\left[1 + A_1(k-B_1)^2\right], \qquad C = T_{00} - \frac{B^2}{A} + 2\nu,$$

$$B = \sum_{r=1}^{N} a_r T_{0r}, \qquad A = \sum_{r=1}^{N} a_r^2 T_{rr} + 2\sum_{\substack{r<s \\ r,s=1}}^{N} a_r a_s T_{rs}, \qquad A_1 = \frac{A}{C}, \qquad B_1 = \frac{B}{A}.$$
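The step from (5) to (6) is the standard inverted-gamma integral; writing $a = (N/2)+\delta$ and $b = (Q+2\nu)/2$,

```latex
\int_0^{\infty} (\sigma^2)^{-(a+1)} \exp\!\left(-\frac{b}{\sigma^2}\right)\, d\sigma^2
  \;=\; \Gamma(a)\, b^{-a} \;\propto\; (Q+2\nu)^{-[(N/2)+\delta]},
```

which is obtained by substituting $u = b/\sigma^2$ and recognizing the gamma integral $\int_0^\infty u^{a-1}e^{-u}\,du = \Gamma(a)$.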

Thus, the above joint posterior density of $k$, $\alpha$, $\theta$, $\varphi$ can be rewritten as

$$P(k,\alpha,\theta,\varphi \mid x) \propto \exp[-\beta(\alpha-1)]\left\{C\left[1 + A_1(k-B_1)^2\right]\right\}^{-d} \tag{7}$$

where $\Theta \in S$ and $d = (N/2) + \delta$.

This shows that, given $\alpha$, $\theta$ and $\varphi$, the conditional posterior distribution of $k$ is a $t$ distribution located at $B_1$ with $2d-1$ degrees of freedom.
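The completed-square identity underlying (7) can be checked numerically; a minimal sketch with hypothetical values of $A$, $B$, $T_{00}$ and $\nu$:

```python
import numpy as np

# Hypothetical values for the ingredients of the quadratic form in k
A, B, T00, nu = 2.5, 1.2, 4.0, 0.5

C = T00 - B ** 2 / A + 2 * nu       # C = T_00 - B^2/A + 2*nu
A1, B1 = A / C, B / A

for k in np.linspace(-3.0, 3.0, 13):
    lhs = A * k ** 2 - 2 * B * k + T00 + 2 * nu   # Q + 2*nu as a quadratic in k
    rhs = C * (1 + A1 * (k - B1) ** 2)            # completed-square form used in (7)
    assert abs(lhs - rhs) < 1e-10
```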

For proper Bayesian inference on $k$, $\alpha$, $\theta$ and $\varphi$ one needs their marginal distributions. The joint posterior density of $\alpha$, $\theta$ and $\varphi$, namely $P(\alpha,\theta,\varphi \mid x)$, can be obtained by integrating (7) with respect to $k$. Thus, the joint posterior density function of $\alpha$, $\theta$ and $\varphi$ is obtained as

$$P(\alpha,\theta,\varphi \mid x) \propto \exp[-\beta(\alpha-1)]\,A_1^{-1/2}\,C^{-d} \tag{8}$$

with $\alpha > 1$, $0 \le \theta < \pi$ and $0 \le \varphi < \pi/2$.

The joint posterior density of $\alpha$, $\theta$ and $\varphi$ in (8) is a very complicated expression and is analytically intractable. For instance, it seems impossible to integrate it with respect to $\alpha$ and $\varphi$ in order to obtain the marginal posterior density of $\theta$, and similarly for the other parameters, which are essential for the purposes of posterior inference. One way of solving the problem is to find the marginal posterior densities of $\alpha$, $\theta$ and $\varphi$ from the joint density (8) by numerical integration.
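One way to carry out that numerical integration is a simple grid (Riemann-sum) scheme. The sketch below is illustrative only: a hypothetical, separable surrogate stands in for the unnormalized density (8), and the unbounded $\alpha$ range is truncated for the quadrature:

```python
import numpy as np

def joint_post(alpha, theta, phi):
    """Hypothetical stand-in for the unnormalized joint posterior (8).
    In practice this would evaluate exp(-beta*(alpha-1)) * A1**(-1/2) * C**(-d)
    from the data; a simple separable surrogate keeps the sketch runnable."""
    return np.exp(-(alpha - 1.0)) * (1.0 + np.sin(theta) ** 2) * np.cos(phi)

# Grids over the parameter domain (alpha truncated at 10 for the quadrature)
alpha = np.linspace(1.0, 10.0, 200)
theta = np.linspace(0.0, np.pi, 100)
phi = np.linspace(0.0, np.pi / 2, 100)
Al, Th, Ph = np.meshgrid(alpha, theta, phi, indexing="ij")

g = joint_post(Al, Th, Ph)

# Marginal posterior of theta: sum out the alpha and phi axes, then normalize
d_alpha, d_theta, d_phi = alpha[1] - alpha[0], theta[1] - theta[0], phi[1] - phi[0]
m_theta = g.sum(axis=(0, 2)) * d_alpha * d_phi
m_theta /= m_theta.sum() * d_theta      # now m_theta integrates to ~1 over theta
```

The same pattern, applied axis by axis, yields the marginals of $\alpha$ and $\varphi$.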

4. One-step-ahead prediction

In order to forecast $x_{N+1}$ using the random realization $x_1, x_2, \ldots, x_N$ of $(X_1, X_2, \ldots, X_N)$, one must find the conditional distribution of $X_{N+1}$ given the past observations. This is the predictive distribution of $X_{N+1}$; it is derived by multiplying the conditional density of $X_{N+1}$ given $X_1, X_2, \ldots, X_N$ and $\Theta$ by the posterior density of $\Theta$ given $X_1, X_2, \ldots, X_N$, and then integrating with respect to $\Theta$. That is,

$$P(X_{N+1} \mid X_1, \ldots, X_N) = \int_{\Theta} P(X_{N+1} \mid X_1, \ldots, X_N, \Theta)\, P(\Theta \mid X_1, \ldots, X_N)\, d\Theta.$$

In the present context, from (1) and (7), with $x_{N+1} \in \mathbb{R}$, we get

$$P(x_{N+1} \mid x_1, \ldots, x_N, \Theta) \propto \frac{1}{\sigma}\exp\left[-\frac{1}{2\sigma^2}\left(x_{N+1} - k\sum_{i=1}^{N} a_i x_{N+1-i}\right)^2\right] \tag{9}$$

The square in the exponent of the above expression, say $Q_1$, can be rewritten, after expanding the square, as

$$Q_1 = x_{N+1}^2 - 2k\,x_{N+1}\sum_{i=1}^{N} a_i P_i + k^2\left[\sum_{i=1}^{N} a_i^2 P_{ii} + 2\sum_{\substack{i<j \\ i,j=1}}^{N} a_i a_j P_{ij}\right]$$

where $P_i = x_{N+1-i}$ and $P_{ij} = x_{N+1-i}\,x_{N+1-j}$. Now, multiplying (9) by the joint posterior density of $\Theta$ and integrating over the parameter space, we obtain

$$P(x_{N+1} \mid x_1, \ldots, x_N) \propto \int_{\Theta} \exp[-\beta(\alpha-1)]\,(\sigma^2)^{-[(N+1)/2+\delta+1]}\exp\left[-\frac{1}{2\sigma^2}\left(Q + Q_1 + 2\nu\right)\right] d\Theta \tag{10}$$

First, integrating out $\sigma^2$ in (10), one gets the joint distribution of $x_{N+1}$, $k$, $\alpha$, $\theta$ and $\varphi$.


Further, integrating out $k$ from (12), we get

$$P(x_{N+1}, \alpha, \theta, \varphi \mid x_1, \ldots, x_N) \propto \exp[-\beta(\alpha-1)]\,E_1^{-1/2}\,C_1^{-d_1} \tag{13}$$

with $d_1 = \dfrac{N+1}{2} + \delta$, which is the conditional predictive distribution of $x_{N+1}$ given $\alpha$, $\theta$ and $\varphi$.

Further elimination of the parameters $\alpha$, $\theta$ and $\varphi$ from (13) is not possible analytically, so the marginal posterior density of $x_{N+1}$ cannot be expressed in closed form. Since the distribution in (13) is analytically intractable, a complete Bayesian analysis is possible only by numerical integration, or by viewing (13) as the conditional distribution of $x_{N+1}$ given $\alpha$, $\theta$ and $\varphi$.

If one wants a point estimate of $x_{N+1}$, one should compute the marginal posterior density of $x_{N+1}$ from (13) and use it to calculate the marginal posterior mean of $x_{N+1}$. Thus four-dimensional numerical integration is necessary in order to estimate $x_{N+1}$, which is a very difficult problem.

In practice, to reduce the dimension of the numerical integration, one may substitute the estimators $\hat{\alpha}$, $\hat{\theta}$ and $\hat{\varphi}$ in place of $\alpha$, $\theta$ and $\varphi$, and then perform one-dimensional numerical integration to find the conditional mean of $X_{N+1}$. That is, one may eliminate as many parameters as possible by analytical methods and then use the conditional estimates of the remaining parameters to compute the marginal posterior mean of the future observation.
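The plug-in strategy just described can be sketched as follows. For illustration $k$ is also fixed at an estimate (the paper instead integrates over $k$ numerically), and all parameter values are hypothetical:

```python
import numpy as np

def frar_one_step_forecast(x, k_hat, alpha_hat, theta_hat, phi_hat):
    """Plug-in one-step-ahead forecast for the FRAR model:
    x_hat_{N+1} = k_hat * sum_{i=1}^{N} a_i * x_{N+1-i},
    with a_i = alpha_hat**(-i) * sin(i*theta_hat) * cos(i*phi_hat)
    and pre-sample values taken as zero."""
    N = len(x)
    i = np.arange(1, N + 1)
    a = alpha_hat ** (-i) * np.sin(i * theta_hat) * np.cos(i * phi_hat)
    past = x[::-1]                  # x_N, x_{N-1}, ..., x_1
    return k_hat * float(np.dot(a, past))

# Hypothetical observed series and plug-in estimates
x = np.array([0.2, -0.1, 0.4, 0.3, -0.2])
forecast = frar_one_step_forecast(x, k_hat=0.8, alpha_hat=2.0, theta_hat=1.0, phi_hat=0.5)
```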

5. Summary and conclusion

The Full Range Autoregressive model provides an acceptable alternative to the existing methodology. The main advantage of the new method is that it completely avoids the problem of determining the order of the model, which arises in the existing methods.

Thus, it is not unreasonable to claim that the FRAR model proposed by Venkatesan et al. (2008) and its Bayesian analysis presented above provide a viable alternative to the existing time series methodology, completely avoiding the problem of order determination.

Bibliography

Brockwell P.J., Davis R.A. (1987), Time Series: Theory and Methods, Springer-Verlag, New York.

Broemeling L.D. (1985), Bayesian Analysis of Linear Models, Marcel Dekker Inc., New York.

Nicholls D.F., Quinn B.G. (1982), Random Coefficient Autoregressive Models: An Introduction, Springer-Verlag, New York.

Priestley M.B. (1981), Spectral Analysis and Time Series, Academic Press, London and New York.

Venkatesan D., Sita Devi K., Simonetti B. (2008), Full Range Autoregressive Time Series Models.
