Limiting behaviour of moving average processes under ρ-mixing assumption

Pingyan Chen (i)

Department of Mathematics, Jinan University, Guangzhou, 510630, P.R. China tchenpy@jnu.edu.cn

Rita Giuliano Antonini

Department of Mathematics, University of Pisa, Largo B. Pontecorvo, 5, 56100 Pisa, Italy giuliano@dm.unipi.it

Tien-Chung Hu

Department of Mathematics, National Tsing Hua University, Hsinchu 300, Taiwan tchu@math.nthu.edu.tw

Andrei Volodin (ii)

School of Mathematics and Statistics, University of Western Australia, 35 Stirling HWY, Crawley, WA 6009, Australia

and Department of Mathematics and Statistics, University of Regina, Regina, SK, S4S 0A2, Canada

andrei@maths.uwa.edu.au, andrei@math.uregina.ca

Received: 9.5.2008; accepted: 10.11.2009.

Abstract. Let $\{Y_i, -\infty < i < \infty\}$ be a doubly infinite sequence of identically distributed $\rho$-mixing random variables and let $\{a_i, -\infty < i < \infty\}$ be an absolutely summable sequence of real numbers. In this paper, we prove the complete convergence and the Marcinkiewicz-Zygmund strong law of large numbers for the partial sums of the moving average processes $\{\sum_{i=-\infty}^{\infty} a_i Y_{i+n}, \; n \ge 1\}$.

Keywords: moving average, ρ-mixing, complete convergence, Marcinkiewicz-Zygmund strong laws of large numbers.

MSC 2000 classification: primary 60F15

(i) This work is supported by the National Natural Science Foundation of China.

(ii) This work is supported by the Natural Sciences and Engineering Research Council of Canada.

http://siba-ese.unisalento.it/ © 2010 Università del Salento

Introduction and Main Results.

Let $\{Y_i, -\infty < i < +\infty\}$ be a doubly infinite sequence of identically distributed random variables and let $\{a_i, -\infty < i < +\infty\}$ be an absolutely summable sequence of real numbers. Next, let
\[
X_n = \sum_{i=-\infty}^{\infty} a_i Y_{i+n}, \qquad n \ge 1,
\]
be the moving average process based on the sequence $\{Y_i, -\infty < i < +\infty\}$. As usual, we denote by $S_n = \sum_{k=1}^{n} X_k$, $n \ge 1$, the sequence of partial sums.
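A simple orienting example (added here for illustration; it is not part of the original text): taking $a_0 = 1$ and $a_i = 0$ for $i \ne 0$ gives
\[
X_n = Y_n \quad \text{and} \quad S_n = \sum_{k=1}^{n} Y_k,
\]
so the classical partial-sum setting is a special case of the moving average framework. A genuinely smoothed example is $a_i = 2^{-|i|}$, for which $\sum_{i=-\infty}^{\infty} |a_i| = 3 < \infty$.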

Under the assumption that $\{Y_i, -\infty < i < +\infty\}$ is a sequence of independent identically distributed random variables, many limiting results have been obtained for the moving average process $\{X_n, n \ge 1\}$. For example, Ibragimov [3] established the central limit theorem, Burton and Dehling [4] obtained a large deviation principle assuming $E \exp\{tY_1\} < \infty$ for all $t$, and Li et al. [5] obtained the complete convergence result for $\{X_n, n \ge 1\}$.

Certainly, even if $\{Y_i, -\infty < i < +\infty\}$ is a sequence of independent identically distributed random variables, the moving average random variables $\{X_n, n \ge 1\}$ are dependent. This kind of dependence is called weak dependence. The partial sums of the weakly dependent random variables $\{X_n, n \ge 1\}$ exhibit limiting behaviour similar to that of partial sums of independent identically distributed random variables.

For example, we recall some of the previous results connected with complete convergence. The following was proved in Hsu and Robbins [1].

Theorem A. Suppose $\{X_n, n \ge 1\}$ is a sequence of independent identically distributed random variables. If $EX_1 = 0$ and $E|X_1|^2 < \infty$, then $\sum_{n=1}^{\infty} P\{|S_n| \ge \varepsilon n\} < \infty$ for all $\varepsilon > 0$.

The Hsu-Robbins result was extended to moving average processes by Li et al. [5].

Theorem B. Suppose $\{X_n, n \ge 1\}$ is the moving average process based on a sequence $\{Y_i, -\infty < i < \infty\}$ of independent identically distributed random variables with $EY_1 = 0$ and $E|Y_1|^2 < \infty$. Then $\sum_{n=1}^{\infty} P\{|S_n| \ge \varepsilon n\} < \infty$ for all $\varepsilon > 0$.

Very few results for a moving average process based on a dependent sequence are known.

In this paper, we provide a result on the limiting behavior of a moving average process based on a ρ-mixing sequence.

Let $\{Z_i, -\infty < i < \infty\}$ be a sequence of random variables defined on a probability space $(\Omega, \mathcal{F}, P)$ and denote the $\sigma$-algebras
\[
\mathcal{F}_{n}^{m} = \sigma(Z_i,\; n \le i \le m), \qquad -\infty \le n \le m \le +\infty.
\]

As usual, for a $\sigma$-algebra $\mathcal{F}$ we denote by $L_2(\mathcal{F})$ the class of all $\mathcal{F}$-measurable random variables with finite second moment.

A sequence of random variables $\{Z_i, -\infty < i < \infty\}$ is called $\rho$-mixing if the maximal correlation coefficient
\[
\rho(m) = \sup_{k \ge 1} \sup\left\{ \frac{\operatorname{cov}(X, Y)}{\sqrt{\operatorname{Var}(X)\,\operatorname{Var}(Y)}} \;:\; X \in L_2(\mathcal{F}_{-\infty}^{k}),\; Y \in L_2(\mathcal{F}_{m+k}^{+\infty}) \right\} \longrightarrow 0
\]
as $m \to \infty$.
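As a quick check of the definition (an added illustration, not from the original text): if the $Z_i$ are independent, then any $X \in L_2(\mathcal{F}_{-\infty}^{k})$ and $Y \in L_2(\mathcal{F}_{m+k}^{+\infty})$ are functions of disjoint blocks of the $Z_i$ and hence uncorrelated, so
\[
\rho(m) = 0 \quad \text{for every } m \ge 1.
\]
In particular, independent (and hence i.i.d.) sequences are $\rho$-mixing, so the setting of Theorems A and B is covered by the $\rho$-mixing framework.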

The following maximal inequality can be found in Shao ([7], Theorem 1.1) and plays a crucial role in the proof of our main result. As usual, the notation [·] is used for the integer part function.

Maximal Inequality. Assume that $\{Z_n, n \ge 1\}$ is a sequence of $\rho$-mixing random variables with $EZ_n = 0$ and $E|Z_n|^q < \infty$ for all $n \ge 1$ and some $q \ge 2$. Then there is a positive constant $K = K(q, \rho(\cdot))$ depending only on $q$ and $\rho(\cdot)$ such that for any $n \ge 1$,
\[
E \max_{1 \le k \le n} \Big| \sum_{i=1}^{k} Z_i \Big|^q
\le K n^{q/2} \exp\Big\{ K \sum_{i=0}^{[\log n]} \rho(2^i) \Big\} \max_{1 \le k \le n} \big(E|Z_k|^2\big)^{q/2}
+ n \exp\Big\{ K \sum_{i=0}^{[\log n]} \rho^{2/q}(2^i) \Big\} \max_{1 \le k \le n} E|Z_k|^q.
\]
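For later reference (an added observation, not part of Shao's statement): when $q = 2$ the two terms on the right-hand side take the same form, and the inequality reduces to
\[
E \max_{1 \le k \le n} \Big| \sum_{i=1}^{k} Z_i \Big|^2
\le (K+1)\, n \exp\Big\{ K \sum_{i=0}^{[\log n]} \rho(2^i) \Big\} \max_{1 \le k \le n} E|Z_k|^2,
\]
which is the form used below to bound the term $J$ in the case $rp < 2$ (where $q = 2$).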

Recall that a measurable function $h$ is said to be slowly varying if for each $\lambda > 0$
\[
\lim_{x \to \infty} \frac{h(\lambda x)}{h(x)} = 1.
\]

We refer to Seneta [6] for other equivalent definitions and for a detailed and comprehensive study of the properties of such functions.
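A standard example (added for illustration): $h(x) = \log x$ is slowly varying, since for each fixed $\lambda > 0$
\[
\frac{h(\lambda x)}{h(x)} = \frac{\log \lambda + \log x}{\log x} \to 1 \quad \text{as } x \to \infty,
\]
while any power $h(x) = x^{\delta}$ with $\delta \ne 0$ is not, because $h(\lambda x)/h(x) = \lambda^{\delta}$.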

Now we can present the main result of the paper.

Theorem 1. Let $\{Y_i, -\infty < i < \infty\}$ be a sequence of identically distributed $\rho$-mixing random variables, and let $\{X_n, n \ge 1\}$ be the moving average process based on the sequence $\{Y_i, -\infty < i < \infty\}$.

Let $h(x)$ be a positive slowly varying function and let $1 \le p < 2$, $r \ge 1$. If $rp = 1$, additionally assume that $\sum_{i=-\infty}^{\infty} |a_i|^{\theta} < \infty$ for some $\theta \in (0, 1)$. If $rp < 2$ take $q = 2$, and if $rp \ge 2$ take any $q > \frac{2p(r-1)}{2-p}$. Set
\[
\phi(t) = \sum_{i=0}^{[\log t]} \rho^{2/q}(2^i).
\]
Let $K = K(q, \rho(\cdot))$ be the constant defined in the Maximal Inequality presented above.

If $EY_1 = 0$ and $E|Y_1|^{rp} h(|Y_1|^p) \exp\{K\phi(|Y_1|^p)\} < \infty$, then for all $\varepsilon > 0$
\[
\sum_{n=1}^{\infty} n^{r-2} h(n) P\Big\{ \max_{k \le n} |S_k| \ge \varepsilon n^{1/p} \Big\} < \infty.
\]
In particular, if $EY_1 = 0$ and $E|Y_1|^{p} \exp\{K\phi(|Y_1|^p)\} < \infty$, then the Marcinkiewicz-Zygmund strong law of large numbers
\[
n^{-1/p} S_n \to 0 \quad \text{a.s.}
\]
holds.
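The strong law of large numbers in Theorem 1 can be checked numerically in a simple special case. The sketch below is an illustration added to this text, not part of the original paper; it assumes i.i.d. innovations (so that $\rho(m) = 0$ and $\exp\{K\phi(t)\}$ is bounded) and truncates the coefficient sequence to $|i| \le M$. It prints $n^{-1/p}|S_n|$ for increasing $n$, which should tend to $0$.

```python
# Numerical illustration of the Marcinkiewicz-Zygmund strong law in Theorem 1.
# Added sketch (not from the paper). Simplifying assumptions:
#   * the innovations Y_i are i.i.d., hence rho-mixing with rho(m) = 0, so the
#     moment condition reduces to E|Y_1|^p < infinity;
#   * the doubly infinite coefficient sequence is truncated to |i| <= M.
import numpy as np

rng = np.random.default_rng(0)
p = 1.5                                   # 1 <= p < 2
M = 50                                    # truncation level for the a_i
a = 0.5 ** np.abs(np.arange(-M, M + 1))   # a_i = 2^{-|i|}, absolutely summable

N = 200_000
Y = rng.standard_t(df=4, size=N + 2 * M)  # mean 0, E|Y_1|^p < infinity for p < 4

# X_n = sum_{|i| <= M} a_i Y_{i+n}; since the kernel a is symmetric, the
# convolution below equals this sliding weighted sum for n = 1, ..., N.
X = np.convolve(Y, a, mode="valid")[:N]
S = np.cumsum(X)                          # partial sums S_n

for n in (10**3, 10**4, 10**5, N):
    print(n, abs(S[n - 1]) / n ** (1 / p))  # n^{-1/p} |S_n|, should shrink toward 0
```

The Student $t$ distribution with 4 degrees of freedom is chosen only to have a reasonably heavy tail while keeping $E|Y_1|^p$ finite; any centred distribution with $E|Y_1|^p < \infty$ would serve equally well in this sketch.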

Remark 1. If $\sum_{n=0}^{\infty} \rho^{2/q}(2^n) < \infty$, then the conditions
\[
E|Y_1|^{rp} h(|Y_1|^p) \exp\{K\phi(|Y_1|^p)\} < \infty \quad \text{and} \quad E|Y_1|^{rp} h(|Y_1|^p) < \infty
\]
are equivalent, since in this case $\phi$ is bounded and therefore $\exp\{K\phi(t)\}$ is bounded between two positive constants. Hence the first statement of Theorem 1 extends and generalizes Theorem 3.1 of Shao [7]. The Marcinkiewicz-Zygmund strong law of large numbers presented in Theorem 1 generalizes Theorem 5.1 of Fazekas and Klesov [2] and Corollary 3.1 of Shao [7].

By the following Proposition, the function $\exp\{K\phi(t)\}$ is a slowly varying function. Hence the assumption on $\exp\{K\phi(t)\}$ in Theorem 5.1 of Fazekas and Klesov [2] is superfluous.

Proposition 1. Let $\{b_n, n \ge 0\}$ be a sequence of real numbers with $\lim_{n \to \infty} b_n = 0$ and set $\Phi(t) = \sum_{n=0}^{[\log t]} b_n$. Then for any constant $K > 0$ the function $\exp\{K\Phi(t)\}$ is a slowly varying function.


Proofs

Proof of Theorem 1. Note that
\[
\sum_{k=1}^{n} X_k = \sum_{k=1}^{n} \sum_{i=-\infty}^{\infty} a_i Y_{i+k} = \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} Y_j
\]
and, since $\sum_{i=-\infty}^{\infty} |a_i| < \infty$ and $EY_j = 0$ (so that $|EY_j I\{|Y_j| \le n^{1/p}\}| = |EY_j I\{|Y_j| > n^{1/p}\}| \le E|Y_j| I\{|Y_j| > n^{1/p}\}$),
\begin{align*}
n^{-1/p} \Big| E \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} Y_j I\{|Y_j| \le n^{1/p}\} \Big|
&\le n^{-1/p} \sum_{i=-\infty}^{\infty} |a_i| \sum_{j=i+1}^{i+n} E|Y_j| I\{|Y_j| > n^{1/p}\} \\
&\le n^{1-1/p} \Big( \sum_{i=-\infty}^{\infty} |a_i| \Big) E|Y_1| I\{|Y_1| > n^{1/p}\} \\
&\le C E (n^{1/p})^{p-1} |Y_1| I\{|Y_1| > n^{1/p}\} \\
&\le C E|Y_1|^{p} I\{|Y_1| > n^{1/p}\} \to 0, \quad \text{as } n \to \infty.
\end{align*}

Hence for $n$ large enough we have
\[
n^{-1/p} \Big| E \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} Y_j I\{|Y_j| \le n^{1/p}\} \Big| < \varepsilon/4.
\]

Let $Y_{nj} = Y_j I\{|Y_j| \le n^{1/p}\} - EY_j I\{|Y_j| \le n^{1/p}\}$. Then
\begin{align*}
& \sum_{n=1}^{\infty} n^{r-2} h(n) P\Big\{ \max_{1 \le k \le n} |S_k| \ge \varepsilon n^{1/p} \Big\} \\
&\le C \sum_{n=1}^{\infty} n^{r-2} h(n) P\Big\{ \max_{1 \le k \le n} \Big| \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_j I\{|Y_j| > n^{1/p}\} \Big| \ge \varepsilon n^{1/p}/2 \Big\} \\
&\quad + C \sum_{n=1}^{\infty} n^{r-2} h(n) P\Big\{ \max_{1 \le k \le n} \Big| \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_{nj} \Big| \ge \varepsilon n^{1/p}/4 \Big\} \\
&=: I + J, \text{ say.}
\end{align*}

For $I$, if $rp > 1$, by the Markov inequality we have
\begin{align*}
I &\le C \sum_{n=1}^{\infty} n^{r-2} h(n) n^{-1/p} E \max_{1 \le k \le n} \Big| \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_j I\{|Y_j| > n^{1/p}\} \Big| \\
&\le C \sum_{n=1}^{\infty} n^{r-1-1/p} h(n) E|Y_1| I\{|Y_1| > n^{1/p}\} \\
&= C \sum_{n=1}^{\infty} n^{r-1-1/p} h(n) \sum_{m=n}^{\infty} E|Y_1| I\{m < |Y_1|^p \le m+1\} \\
&= C \sum_{m=1}^{\infty} E|Y_1| I\{m < |Y_1|^p \le m+1\} \sum_{n=1}^{m} n^{r-1-1/p} h(n) \\
&\le C \sum_{m=1}^{\infty} m^{r-1/p} h(m) E|Y_1| I\{m < |Y_1|^p \le m+1\} \\
&\le C E|Y_1|^{rp} h(|Y_1|^p) \le C E|Y_1|^{rp} h(|Y_1|^p) \exp\{K\phi(|Y_1|^p)\} < \infty.
\end{align*}

If $rp = 1$, by the Markov inequality and the same argument as in the case $rp > 1$ we have
\begin{align*}
I &\le C \sum_{n=1}^{\infty} n^{r-2} h(n) n^{-\theta/p} E \max_{1 \le k \le n} \Big| \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_j I\{|Y_j| > n^{1/p}\} \Big|^{\theta} \\
&\le C \sum_{n=1}^{\infty} n^{r-1-\theta/p} h(n) E|Y_1|^{\theta} I\{|Y_1| > n^{1/p}\} < \infty.
\end{align*}

For $J$, if $rp < 2$, by the Markov, Hölder, and Maximal inequalities we have
\begin{align*}
J &\le C \sum_{n=1}^{\infty} n^{r-2} h(n) n^{-2/p} E \max_{1 \le k \le n} \Big| \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_{nj} \Big|^2 \\
&\le C \sum_{n=1}^{\infty} n^{r-2} h(n) n^{-2/p} E \Big( \sum_{i=-\infty}^{\infty} |a_i|^{1-1/2} \Big( |a_i|^{1/2} \max_{1 \le k \le n} \Big| \sum_{j=i+1}^{i+k} Y_{nj} \Big| \Big) \Big)^2 \\
&\le C \sum_{n=1}^{\infty} n^{r-2-2/p} h(n) \Big( \sum_{i=-\infty}^{\infty} |a_i| \Big) \sum_{i=-\infty}^{\infty} |a_i| \, E \max_{1 \le k \le n} \Big| \sum_{j=i+1}^{i+k} Y_{nj} \Big|^2 \\
&\le C \sum_{n=1}^{\infty} n^{r-2-2/p} h(n) \, n \exp\Big\{ K \sum_{i=0}^{[\log n]} \rho(2^i) \Big\} E|Y_1|^2 I\{|Y_1| \le n^{1/p}\} \\
&= C \sum_{n=1}^{\infty} n^{r-1-2/p} h(n) \exp\{K\phi(n)\} E|Y_1|^2 I\{|Y_1| \le n^{1/p}\} \\
&\le C \sum_{n=1}^{\infty} n^{r-1-2/p} h(n) \exp\{K\phi(n)\} \sum_{k=1}^{n} E|Y_1|^2 I\{(k-1)^{1/p} < |Y_1| \le k^{1/p}\} \\
&\le C \sum_{k=1}^{\infty} E|Y_1|^2 I\{(k-1)^{1/p} < |Y_1| \le k^{1/p}\} \sum_{n=k}^{\infty} n^{r-1-2/p} h(n) \exp\{K\phi(n)\} \\
&\le C \sum_{k=1}^{\infty} k^{r-2/p} h(k) \exp\{K\phi(k)\} E|Y_1|^2 I\{(k-1)^{1/p} < |Y_1| \le k^{1/p}\} \\
&\le C E|Y_1|^{rp} h(|Y_1|^p) \exp\{K\phi(|Y_1|^p)\} < \infty.
\end{align*}

If $rp \ge 2$, by the Markov, Hölder, and Maximal inequalities we have
\begin{align*}
J &\le C \sum_{n=1}^{\infty} n^{r-2} h(n) n^{-q/p} E \max_{1 \le k \le n} \Big| \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_{nj} \Big|^q \\
&\le C \sum_{n=1}^{\infty} n^{r-2} h(n) n^{-q/p} E \Big( \sum_{i=-\infty}^{\infty} |a_i|^{1-1/q} \Big( |a_i|^{1/q} \max_{1 \le k \le n} \Big| \sum_{j=i+1}^{i+k} Y_{nj} \Big| \Big) \Big)^q \\
&\le C \sum_{n=1}^{\infty} n^{r-2-q/p} h(n) \Big( \sum_{i=-\infty}^{\infty} |a_i| \Big)^{q-1} \sum_{i=-\infty}^{\infty} |a_i| \, E \max_{1 \le k \le n} \Big| \sum_{j=i+1}^{i+k} Y_{nj} \Big|^q \\
&\le C \sum_{n=1}^{\infty} n^{r-2-q/p+q/2} h(n) \exp\Big\{ K \sum_{i=0}^{[\log n]} \rho(2^i) \Big\} \big( E|Y_1|^2 I\{|Y_1| \le n^{1/p}\} \big)^{q/2} \\
&\quad + C \sum_{n=1}^{\infty} n^{r-1-q/p} h(n) \exp\{K\phi(n)\} E|Y_1|^q I\{|Y_1| \le n^{1/p}\} \\
&=: J_1 + J_2, \text{ say.}
\end{align*}

For $J_1$, since $r - 2 - q/p + q/2 < -1$, we can take $t > 0$ small enough such that $r - 2 - q/p + q/2 + tq/(2p) < -1$. Then
\begin{align*}
J_1 &= C \sum_{n=1}^{\infty} n^{r-2-q/p+q/2} h(n) \exp\Big\{ K \sum_{i=0}^{[\log n]} \rho(2^i) \Big\} \big( E|Y_1|^{2-t+t} I\{|Y_1| \le n^{1/p}\} \big)^{q/2} \\
&\le C \sum_{n=1}^{\infty} n^{r-2-q/p+q/2+tq/(2p)} h(n) \exp\Big\{ K \sum_{i=0}^{[\log n]} \rho(2^i) \Big\} \big( E|Y_1|^{2-t} I\{|Y_1| \le n^{1/p}\} \big)^{q/2} < \infty.
\end{align*}

For $J_2$, we have
\begin{align*}
J_2 &= C \sum_{n=1}^{\infty} n^{r-1-q/p} h(n) \exp\{K\phi(n)\} E|Y_1|^q I\{|Y_1| \le n^{1/p}\} \\
&\le C \sum_{n=1}^{\infty} n^{r-1-q/p} h(n) \exp\{K\phi(n)\} \sum_{k=1}^{n} E|Y_1|^q I\{k-1 < |Y_1|^p \le k\} \\
&= C \sum_{k=1}^{\infty} E|Y_1|^q I\{k-1 < |Y_1|^p \le k\} \sum_{n=k}^{\infty} n^{r-1-q/p} h(n) \exp\{K\phi(n)\} \\
&\le C \sum_{k=1}^{\infty} k^{r-q/p} h(k) \exp\{K\phi(k)\} E|Y_1|^q I\{k-1 < |Y_1|^p \le k\} \\
&\le C E|Y_1|^{rp} h(|Y_1|^p) \exp\{K\phi(|Y_1|^p)\} < \infty.
\end{align*}

Now we show the almost sure convergence. By the first part of the theorem, $EY_1 = 0$ and $E|Y_1|^{p} \exp\{K\phi(|Y_1|^p)\} < \infty$ imply
\[
\sum_{n=1}^{\infty} n^{-1} P\Big\{ \max_{1 \le k \le n} |S_k| \ge \varepsilon n^{1/p} \Big\} < \infty, \quad \text{for all } \varepsilon > 0.
\]
Hence
\begin{align*}
\infty &> \sum_{n=1}^{\infty} n^{-1} P\Big\{ \max_{1 \le m \le n} |S_m| > \varepsilon n^{1/p} \Big\} \\
&\ge \sum_{k=1}^{\infty} \sum_{n=2^{k-1}+1}^{2^k} n^{-1} P\Big\{ \max_{1 \le m \le n} |S_m| > \varepsilon n^{1/p} \Big\} \\
&\ge \frac{1}{2} \sum_{k=1}^{\infty} P\Big\{ \max_{1 \le m \le 2^{k-1}} |S_m| > \varepsilon 2^{k/p} \Big\}.
\end{align*}
By the Borel-Cantelli lemma,
\[
2^{-k/p} \max_{1 \le m \le 2^k} |S_m| \to 0 \quad \text{almost surely},
\]
which implies that $S_n / n^{1/p} \to 0$ almost surely (indeed, for $2^{k-1} < n \le 2^k$ we have $n^{-1/p} |S_n| \le 2^{1/p} \cdot 2^{-k/p} \max_{1 \le m \le 2^k} |S_m|$).

Proof of Proposition 1. Since $\lim_{n \to \infty} b_n = 0$, for all $\varepsilon > 0$ there exists a positive integer $N = N(\varepsilon)$ such that
\[
-\varepsilon < b_n < \varepsilon \quad \text{for all } n > N.
\]
For any $\lambda > 1$,
\[
\exp\{K\Phi(\lambda t)\} / \exp\{K\Phi(t)\} = \exp\Big\{ K \sum_{n=[\log t]+1}^{[\log(\lambda t)]} b_n \Big\}.
\]
Note that for $t > 1$
\[
[\log(\lambda t)] - [\log t] \le \log(\lambda t) - (\log t - 1) = \log \lambda + 1.
\]
Hence for $t$ large enough
\[
\exp\{-K(\log \lambda + 1)\varepsilon\} \le \exp\{K\Phi(\lambda t)\} / \exp\{K\Phi(t)\} \le \exp\{K(\log \lambda + 1)\varepsilon\},
\]
and by the arbitrariness of $\varepsilon > 0$,
\[
\lim_{t \to \infty} \exp\{K\Phi(\lambda t)\} / \exp\{K\Phi(t)\} = 1.
\]
The case $0 < \lambda \le 1$ is handled analogously, so $\exp\{K\Phi(t)\}$ is slowly varying.

References

[1] P.L. Hsu, H. Robbins: Complete convergence and the law of large numbers, Proc. Nat. Acad. Sci. U.S.A. 33 (1947), 25–31.

[2] I. Fazekas, O. Klesov: A general approach to the strong law of large numbers, Theory Probab. Appl. 45 (2000), 436–449.

[3] I.A. Ibragimov: Some limit theorems for stationary processes, Theory Probab. Appl. 7 (1962), 349–382.

[4] R.M. Burton, H. Dehling: Large deviations for some weakly dependent random processes, Statist. Probab. Lett. 9 (1990), 397–401.

[5] D. Li, M.B. Rao, X.C. Wang: Complete convergence of moving average processes, Statist. Probab. Lett. 14 (1992), 111–114.

[6] E. Seneta: Regularly varying functions, Lecture Notes in Math. 508, Springer, Berlin, 1976.

[7] Q.M. Shao: Maximal inequalities for partial sums of ρ-mixing sequences, Ann. Probab. 23 (1995), 948–965.
