ASYMPTOTICS FOR LINEAR APPROXIMATIONS OF SMOOTH FUNCTIONS OF MEANS

Andrea Pallini

1. INTRODUCTION

The class of smooth functions of means contains several population characteristics and estimates, providing an important theoretical framework for statistical inference. The class of smooth functions is known to include the univariate variance, the difference of means, the ratio of means, the ratio of variances, the correlation coefficient, and the corresponding asymptotically pivotal versions. See Fuller (1976), chapter 5, Bhattacharya and Ghosh (1978), Hall (1992), chapter 2, Sen and Singer (1993), chapter 3, and Pallini (2000, 2002).

An accurate linear approximation for smooth functions of means is studied in Pallini (2002), with an error of order $O_p(n^{-2})$ in probability, as the sample size $n$ diverges. Here, we study a higher-order version of the basic linear approximation studied in Pallini (2002), yielding an error in approximation of smaller order $O_p(n^{-3})$, as $n$ diverges. We show that the asymptotic distribution of the basic and higher-order linear approximations is normal, as $n$ diverges. We also study the corresponding sample versions of the basic and higher-order linear approximations, which can be used for approximating the sample estimate of a smooth function of means, with errors of order $O_p(n^{-5/2})$ and $O_p(n^{-7/2})$, respectively, as $n$ diverges.

In particular, in section 2, we propose a higher-order version of the linear approximation of smooth functions of means proposed in Pallini (2002), by using a specific definition of difference of derivatives. We study a common asymptotics for the basic linear approximation and its higher-order version, as $n$ diverges. We show that these linear approximations are asymptotically normal, as $n$ diverges. Finally, in section 3, we discuss the details of a simulation study on the ratio of means example, in order to examine the computational issues and show the effectiveness of these linear approximations.


2. LINEAR APPROXIMATIONS

2.1. Notation

Let  { X ,1 !, X }n be a random sample of n independent and identically

distributed (i.i.d.) observations drawn from a finite-dimensional q -variate ran-dom variable X , with distribution F and finite vector of means P E[ X ]1 .

We denote by T a real-valued population characteristic of interest. More precisely, T is defined as T g( )P , where g U: oƒ1 is a smooth function,

q

U Žƒ , fulfilling the conditions of Appendix 4.1. A natural estimator of T is

ˆ g( x )

T , where 1 i

1

x n X

i

n

¦

is the vector of q sample means. 2.2. The basic linear approximation

In Pallini (2002), it is shown that
$$g(\bar{y} + \mu) - g(\mu) = \sum_{i=1}^{n} \{g(n^{-1}Y_i + \mu) - g(\mu)\} + O_p(n^{-2}), \qquad (1)$$
as $n \to \infty$, where $\bar{y} = n^{-1} \sum_{i=1}^{n} Y_i$, and $Y_i = X_i - \mu$, for every $i = 1, \ldots, n$. See also Appendix 4.2. The proof is detailed in Appendix 4.4.
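The decomposition in (1) can be checked numerically. The sketch below is not part of the paper: it uses the ratio of means $g(u) = u^{(1)}/u^{(2)}$ from the example in section 3, and the sample size, mean vector, and noise scale are illustrative assumptions.

```python
import numpy as np

def g(u):
    # Smooth function of means: the ratio of means g(u) = u1 / u2.
    return u[0] / u[1]

rng = np.random.default_rng(0)
n = 50
mu = np.array([1.0, 2.0])                       # population mean vector (assumed)
X = rng.normal(loc=mu, scale=0.2, size=(n, 2))  # i.i.d. bivariate observations
Y = X - mu                                      # centred observations Y_i = X_i - mu
ybar = Y.mean(axis=0)                           # \bar{y}

# Left-hand side of (1).
lhs = g(ybar + mu) - g(mu)

# Basic linear approximation: the sum of n i.i.d. smooth functions.
approx = sum(g(Y[i] / n + mu) - g(mu) for i in range(n))

# The approximation error is O_p(n^{-2}); for n = 50 it is very small.
err = lhs - approx
print(err)
```

The point of the sketch is that each summand depends on a single $Y_i$, so the right-hand side of (1) is a sum of i.i.d. random variables.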

Result (1) is obtained by Taylor expanding the smooth function $g(\bar{y} + \mu) - g(\mu)$ around the origin $0$ of $\mathbb{R}^q$, as the sum of the $n$ Taylor expansions of the functions $g(n^{-1}Y_i + \mu) - g(\mu)$ around $0$, where $i = 1, \ldots, n$.

The set of smooth functions $g(n^{-1}Y_i + \mu) - g(\mu)$, $i = 1, \ldots, n$, can be viewed as a sample of $n$ well-defined i.i.d. random variables.

2.3. The higher-order linear approximation

For every $t_1 = 1, \ldots, q$, we denote by $g^{(t_1)}(u)$ the derivative of the smooth function $g(u)$, $g^{(t_1)}(u) = \partial g(u)/\partial u^{(t_1)}$, where $u \in U$. We define the quantity $\Gamma$ as
$$\Gamma = \frac{1}{4} \left\{ \sum_{t_1=1}^{q} g^{(t_1)}(\bar{y} + \mu)\, \bar{y}^{(t_1)} - \sum_{i=1}^{n} \sum_{t_1=1}^{q} g^{(t_1)}(n^{-1}Y_i + \mu)\, n^{-1} Y_i^{(t_1)} \right\}, \qquad (2)$$
where the derivatives are in the directions $\bar{y}$ and $n^{-1}Y_i$, at the points $\bar{y} + \mu$ and $n^{-1}Y_i + \mu$, respectively. The coefficient $1/4$ in (2) adjusts the difference of the derivatives for an exact correction of the term of order $O_p(n^{-2})$ in (1), as $n \to \infty$.


Adding $\Gamma$ (given by (2)) to the basic linear approximation (1), we obtain the higher-order linear approximation given by
$$g(\bar{y} + \mu) - g(\mu) = \sum_{i=1}^{n} \{g(n^{-1}Y_i + \mu) - g(\mu)\} + \Gamma + O_p(n^{-3}), \qquad (3)$$
as $n \to \infty$. See Appendix 4.3 for the Taylor expansion of the derivative.

The order $O_p(n^{-3})$ in (3) improves on $O_p(n^{-2})$ in (1) by a factor $n^{-1}$, as $n \to \infty$. See Appendix 4.5 for the proof. In Appendix 4.6, it is also seen that $\Gamma = O_p(n^{-1/2})$, as $n \to \infty$.
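The correction $\Gamma$ in (2), and the smaller error it yields in (3), can be illustrated with the same ratio-of-means example; the analytic gradient of the ratio and all numerical settings below are assumptions of the sketch, not quantities from the paper.

```python
import numpy as np

def g(u):
    # Ratio of means g(u) = u1 / u2.
    return u[0] / u[1]

def grad_g(u):
    # Gradient of the ratio: (1/u2, -u1/u2**2).
    return np.array([1.0 / u[1], -u[0] / u[1] ** 2])

rng = np.random.default_rng(1)
n = 40
mu = np.array([1.0, 2.0])
Y = rng.normal(0.0, 0.2, size=(n, 2))   # stand-in for Y_i = X_i - mu

ybar = Y.mean(axis=0)

# Gamma as in (2): a quarter of the difference of the directional derivatives.
gamma = 0.25 * (grad_g(ybar + mu) @ ybar
                - sum(grad_g(Y[i] / n + mu) @ (Y[i] / n) for i in range(n)))

lhs = g(ybar + mu) - g(mu)
basic = sum(g(Y[i] / n + mu) - g(mu) for i in range(n))

err_basic = lhs - basic           # O_p(n^{-2}) error in (1)
err_higher = lhs - basic - gamma  # O_p(n^{-3}) error in (3)
print(err_basic, err_higher)
```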

2.4. Linear approximations of sample estimates

The sample versions of the linear approximations (1) and (3) can naturally be obtained by substituting $\mu$ with $\bar{x}$. These sample versions can be adapted to yield linear approximations of the sample estimate $g(\bar{x})$ of the smooth function $g(\mu)$.

From (1) and (3), it follows that linear approximations of the estimate $g(\bar{x})$ can be defined by
$$g(\bar{x}) = n^{-1} \sum_{i=1}^{n} g(n^{-1}(X_i - \bar{x}) + \bar{x}) + O_p(n^{-5/2}), \qquad (4)$$
as $n \to \infty$, and
$$g(\bar{x}) = n^{-1} \sum_{i=1}^{n} g(n^{-1}(X_i - \bar{x}) + \bar{x}) + 2 n^{-1} \hat{\Gamma} + O_p(n^{-7/2}), \qquad (5)$$
as $n \to \infty$, where
$$\hat{\Gamma} = -\frac{1}{4} \sum_{t_1=1}^{q} \sum_{i=1}^{n} g^{(t_1)}(n^{-1}(X_i - \bar{x}) + \bar{x})\, n^{-1}(X_i^{(t_1)} - \bar{x}^{(t_1)}). \qquad (6)$$
The coefficient $2$ in (5) adjusts $\hat{\Gamma}$ (given by (6)) for an exact correction of the term of order $O_p(n^{-5/2})$ in (4), as $n \to \infty$.

See Appendixes 4.7 and 4.8 for the proofs of the orders $O_p(n^{-5/2})$ and $O_p(n^{-7/2})$, as $n \to \infty$, in (4) and (5), respectively. In Appendix 4.9, it is also seen that $\hat{\Gamma} = O_p(n^{-1/2})$ and $n^{-1}\hat{\Gamma} = O_p(n^{-3/2})$, as $n \to \infty$.
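A sketch of the sample versions (4) and (5), again for the ratio of means. The positive bivariate sample and the sign convention read off the reconstruction of (6) are assumptions of the sketch.

```python
import numpy as np

def g(u):
    # Ratio of means g(u) = u1 / u2.
    return u[0] / u[1]

def grad_g(u):
    return np.array([1.0 / u[1], -u[0] / u[1] ** 2])

rng = np.random.default_rng(2)
n = 30
X = np.abs(rng.normal(1.0, 0.3, size=(n, 2)))  # positive bivariate observations
xbar = X.mean(axis=0)

# Per-observation terms g(n^{-1}(X_i - xbar) + xbar).
terms = np.array([g((X[i] - xbar) / n + xbar) for i in range(n)])

# Basic sample linear approximation (4): the mean of the n terms.
approx4 = terms.mean()

# Gamma-hat as in (6), evaluated with the analytic gradient (sign assumed).
gamma_hat = -0.25 * sum(grad_g((X[i] - xbar) / n + xbar) @ ((X[i] - xbar) / n)
                        for i in range(n))

# Higher-order sample linear approximation (5).
approx5 = approx4 + 2.0 * gamma_hat / n

print(g(xbar), approx4, approx5)
```

Both approximations reproduce $g(\bar{x})$ up to errors far smaller than the sampling variability of $g(\bar{x})$ itself.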


The linear approximations (4) and (5) of the estimate $g(\bar{x})$ are asymptotically normal, as $n \to \infty$. In particular, it is seen that
$$n^{1/2} \left\{ \sum_{i=1}^{n} g(n^{-1}(X_i - \bar{x}) + \bar{x}) - n\, g(\bar{x}) \right\} \to_d N(0, \sigma^2), \qquad (7)$$
$$n^{1/2} \left\{ \sum_{i=1}^{n} g(n^{-1}(X_i - \bar{x}) + \bar{x}) + 2\hat{\Gamma} - n\, g(\bar{x}) \right\} \to_d N(0, \sigma^2), \qquad (8)$$
as $n \to \infty$, where $\sigma^2$ is the asymptotic variance.

The asymptotic variance $n^{-1}\sigma^2$ in (7) and (8) can be estimated by
$$\hat{\sigma}^2 = \sum_{i=1}^{n} \{g(n^{-1}(X_i - \bar{x}) + \bar{x}) - g(\bar{x})\}^2. \qquad (9)$$
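The estimator (9) can be compared with the usual delta-method variance $n^{-1}\,\nabla g(\bar{x})' S\, \nabla g(\bar{x})$, where $S$ is the sample covariance matrix. This benchmark and all numerical settings below are assumptions of the sketch, not quantities from the paper.

```python
import numpy as np

def g(u):
    # Ratio of means g(u) = u1 / u2.
    return u[0] / u[1]

rng = np.random.default_rng(3)
n = 200
X = np.abs(rng.normal(1.0, 0.3, size=(n, 2)))
xbar = X.mean(axis=0)

# Variance estimate (9): sum of squared deviations of the n smooth-function terms.
sigma2_hat = sum((g((X[i] - xbar) / n + xbar) - g(xbar)) ** 2 for i in range(n))

# Delta-method benchmark n^{-1} grad' S grad for the ratio of means.
grad = np.array([1.0 / xbar[1], -xbar[0] / xbar[1] ** 2])
S = np.cov(X, rowvar=False, bias=True)
benchmark = grad @ S @ grad / n

print(sigma2_hat, benchmark)
```

To first order the two estimates agree, since each term in (9) is approximately $\nabla g(\bar{x})' \{n^{-1}(X_i - \bar{x})\}$ squared.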

In Appendix 4.12, it is shown that $\hat{\sigma}^2 = n^{-1}\sigma^2 + O_p(n^{-3/2})$, as $n \to \infty$.

3. A SIMULATION STUDY

3.1. The ratio of means example

The random sample  consists of n i.i.d. bivariate observations

(1) (2)

i i i

X ( V , V )T , where q 2, and i ! . The random variable 1, ,n V1( 2 )

ranges in a set of positive values. The population ratio of means is defined as

1 (1) ( 2 )

( ) ( )

g P P P  .

The sample ratio of means g( x ) x ( x(1) ( 2 ))1 is given by

1 (1) (1) ( 2 ) ( 2 )

( y ) ( y )( y )

g P P P  .

The basic linear approximation (1) is defined by
$$\sum_{i=1}^{n} \{g(n^{-1}Y_i + \mu) - g(\mu)\} = \sum_{i=1}^{n} \left\{ \frac{n^{-1}Y_i^{(1)} + \mu^{(1)}}{n^{-1}Y_i^{(2)} + \mu^{(2)}} - \frac{\mu^{(1)}}{\mu^{(2)}} \right\}.$$
The quantity $\Gamma$ (given by (2)) in the higher-order linear approximation (3) is defined by the derivative
$$\sum_{t_1=1}^{2} g^{(t_1)}(\bar{y} + \mu)\, \bar{y}^{(t_1)} = \frac{\bar{y}^{(1)}}{\bar{y}^{(2)} + \mu^{(2)}} - \frac{(\bar{y}^{(1)} + \mu^{(1)})\, \bar{y}^{(2)}}{(\bar{y}^{(2)} + \mu^{(2)})^{2}},$$
in the direction $\bar{y}$ at the point $\bar{y} + \mu$, and, for every $i = 1, \ldots, n$, by the derivative
$$\sum_{t_1=1}^{2} g^{(t_1)}(n^{-1}Y_i + \mu)\, n^{-1}Y_i^{(t_1)} = \frac{n^{-1}Y_i^{(1)}}{n^{-1}Y_i^{(2)} + \mu^{(2)}} - \frac{(n^{-1}Y_i^{(1)} + \mu^{(1)})\, n^{-1}Y_i^{(2)}}{(n^{-1}Y_i^{(2)} + \mu^{(2)})^{2}},$$
in the direction $n^{-1}Y_i$ at the point $n^{-1}Y_i + \mu$.

Sample quantities in (4), (5) and (6) are given by the function
$$g(n^{-1}(X_i - \bar{x}) + \bar{x}) = \frac{n^{-1}(X_i^{(1)} - \bar{x}^{(1)}) + \bar{x}^{(1)}}{n^{-1}(X_i^{(2)} - \bar{x}^{(2)}) + \bar{x}^{(2)}},$$
and, for every $i = 1, \ldots, n$, by the derivative
$$\sum_{t_1=1}^{2} g^{(t_1)}(n^{-1}(X_i - \bar{x}) + \bar{x})\, n^{-1}(X_i^{(t_1)} - \bar{x}^{(t_1)}) = \frac{n^{-1}(X_i^{(1)} - \bar{x}^{(1)})}{n^{-1}(X_i^{(2)} - \bar{x}^{(2)}) + \bar{x}^{(2)}} - \frac{\{n^{-1}(X_i^{(1)} - \bar{x}^{(1)}) + \bar{x}^{(1)}\}\, n^{-1}(X_i^{(2)} - \bar{x}^{(2)})}{\{n^{-1}(X_i^{(2)} - \bar{x}^{(2)}) + \bar{x}^{(2)}\}^{2}}.$$

3.2. Computational details

Figures 1 and 2 plot the differences between the sample ratio of means $g(\bar{x})$ and its linear approximations (4) and (5). Figures 3 and 4 compare the sample values of its linear approximations (4) and (5) with the corresponding quantiles of the standard normal distribution.

Simulated data were generated from a bivariate folded-normal distribution and a bivariate lognormal distribution. In particular, let $W_1 \sim N(0,1)$, $W_2 \sim N(0,1)$ and $W_3 \sim N(0,1)$ be independent random variables, where $N(0,1)$ is the standard normal distribution. The folded-normal variable $X_1 = (V_1^{(1)}, V_1^{(2)})^T$ in Figure 1 is defined by $V_1^{(1)} = |W_1 + W_3|$ and $V_1^{(2)} = |W_2 + W_3|$, where the correlation coefficient of $V_1^{(1)}$ and $V_1^{(2)}$ is $\rho = 0.5$. The lognormal variable $X_1 = (V_1^{(1)}, V_1^{(2)})^T$ in Figure 2 is defined by $V_1^{(1)} = \exp((W_1 + W_3)/\sqrt{2})$ and $V_1^{(2)} = \exp((W_2 + W_3)/\sqrt{2})$, where the correlation coefficient of $V_1^{(1)}$ and $V_1^{(2)}$ is $\rho = 0.377541$.
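The two simulated designs can be generated as follows. The absolute value in the folded-normal construction is an assumption (the operation does not survive the text extraction), chosen so that both marginals are positive as the ratio of means requires; the number of draws is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 1000  # Monte Carlo draws (illustrative)

W1, W2, W3 = rng.normal(size=(3, m))

# Folded-normal pair: absolute values of the correlated sums W1+W3 and W2+W3.
V_folded = np.abs(np.column_stack([W1 + W3, W2 + W3]))

# Lognormal pair: exponentials of the correlated sums, scaled by sqrt(2).
V_log = np.exp(np.column_stack([W1 + W3, W2 + W3]) / np.sqrt(2))

# The underlying normal sums have correlation 0.5 by construction.
r = np.corrcoef(W1 + W3, W2 + W3)[0, 1]
print(r, V_folded.mean(axis=0), V_log.mean(axis=0))
```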


3.3. Empirical results

Figures 1 and 2 show the performance of the sample linear approximations (4) and (5) for the ratio of means example. The simulated samples were generated with 40 different sizes $n$, ranging from $n = 2$ to $n = 41$. Linear approximations (4) and (5) are nearly equivalent in performance. It is important to see that the linear approximations (4) and (5) in Figure 1 can be regarded as nearly exact, for all sizes $n \geq 18$.

Figures 3 and 4 show the speed of convergence to normality of the linear approximations (4) and (5) for the ratio of means example, with small sample sizes $n$, $n = 4$ and $n = 7$. Results are preferable with data from the folded-normal distribution.

3.4. Conclusions

Sample linear approximations (4) and (5) are effective and very accurate. In any case, (5) may be more difficult than (4) to apply in some examples of smooth functions of means, since (5) demands the derivative of the smooth function.

Sample linear approximations (4) and (5) can be used for simplifying asymptotics in statistical inference for population smooth functions of means. Stochastic convergence of a sequence of smooth functions of means can be studied through stochastic convergence of a sequence of sums of well-defined i.i.d. smooth functions that define the sample linear approximations (4) or (5).

Dipartimento di Scienze Statistiche “Paolo Fortunati” ANDREA PALLINI

Alma Mater Studiorum Università di Bologna

4. APPENDIX

4.1. Assumptions

We denote by $g^{(t_1 \cdots t_k)}(\mu)$ the derivative of order $k$ of the smooth function $g(u + \mu)$, $u \in U$, $g^{(t_1 \cdots t_k)}(\mu) = \partial^k g(u + \mu)/\partial u^{(t_1)} \cdots \partial u^{(t_k)}|_{u=0}$, where $\mu = E[Y_1 + \mu]$. We let $\mu^{(t_1 \cdots t_k)} = E[Y_1^{(t_1)} \cdots Y_1^{(t_k)}]$, and $\bar{y} = n^{-1} \sum_{i=1}^{n} Y_i$.

Let $M_1 > 0$, $M_2 > 0$ and $M_3 > 0$ be finite constants. For every $k \leq s$, where $s \leq 5$, $g^{(t_1 \cdots t_k)}(\mu)$ exists and is bounded, $|g^{(t_1 \cdots t_k)}(\mu)| \leq M_1$. The sample smooth function $g(\bar{y} + \mu)$ and the basic linear approximation $\sum_{i=1}^{n} \{g(n^{-1}Y_i + \mu) - g(\mu)\}$ are bounded,

Figure 1 – Difference between the sample ratio of means $g(\bar{x})$ and its basic linear approximation given by (4) (panel (a)), and difference between $g(\bar{x})$ and its higher-order sample linear approximation given by (5) (panel (b)), for 40 different sample sizes $n$ (horizontal axes) ranging from $n = 2$ to $n = 41$. Bivariate folded-normal observations from correlated ($\rho = 0.5$) marginals.

Figure 2 – Difference between the sample ratio of means $g(\bar{x})$ and its basic linear approximation given by (4) (panel (a)), and difference between $g(\bar{x})$ and its higher-order sample linear approximation given by (5) (panel (b)), for 40 different sample sizes $n$ (horizontal axes) ranging from $n = 2$ to $n = 41$. Bivariate lognormal observations from correlated ($\rho = 0.377541$) marginals.

Figure 3 – Plot $(x, y)$ of the quantiles of the standard normal distribution ($x$-axis) and the corresponding sample values of the linear approximation (4) ($y$-axis), sample size $n = 4$ (panel (a)) and $n = 7$ (panel (c)). Plot $(x, y)$ of the quantiles of the standard normal distribution ($x$-axis) and the sample values of the higher-order linear approximation (5) ($y$-axis), sample size $n = 4$ (panel (b)) and $n = 7$ (panel (d)). The ratio of means example, with bivariate folded-normal observations from correlated ($\rho = 0.5$) marginals.

Figure 4 – Plot $(x, y)$ of the quantiles of the standard normal distribution ($x$-axis) and the corresponding sample values of the linear approximation (4) ($y$-axis), sample size $n = 4$ (panel (a)) and $n = 7$ (panel (c)). Plot $(x, y)$ of the quantiles of the standard normal distribution ($x$-axis) and the sample values of the higher-order linear approximation (5) ($y$-axis), sample size $n = 4$ (panel (b)) and $n = 7$ (panel (d)). The ratio of means example, with bivariate lognormal observations from correlated ($\rho = 0.377541$) marginals.

$|g(\bar{y} + \mu)| \leq M_2$, $\left|\sum_{i=1}^{n} \{g(n^{-1}Y_i + \mu) - g(\mu)\}\right| \leq M_3$.

4.2. Exact linear approximations

Following Appendix 4.1, if the derivative $g^{(t_1 \cdots t_s)}(\mu) = 0$, for all $s \geq 4$, then the basic linear approximation (1) is exact. If the derivative $g^{(t_1 \cdots t_s)}(\mu) = 0$, for all $s \geq 5$, then the higher-order linear approximation (3) is exact.

4.3. Taylor expansion of the derivative

For every $u \in U$, the Taylor expansion around $u = 0$ of the derivative of the function $g(u + \mu)$ is
$$\sum_{t_1=1}^{q} g^{(t_1)}(u + \mu)\, u^{(t_1)} = \sum_{t_1=1}^{q} g^{(t_1)}(\mu)\, u^{(t_1)} + \sum_{k=2}^{\infty} \frac{1}{(k-1)!} \sum_{t_1=1}^{q} \cdots \sum_{t_k=1}^{q} g^{(t_1 \cdots t_k)}(\mu)\, u^{(t_1)} \cdots u^{(t_k)}.$$
In particular, for every $u \in U$, we have that
$$\sum_{t_1=1}^{q} g^{(t_1)}(u + \mu)\, u^{(t_1)} = \sum_{t_1=1}^{q} g^{(t_1)}(\mu)\, u^{(t_1)} + \sum_{t_1=1}^{q} \sum_{t_2=1}^{q} g^{(t_1 t_2)}(\mu)\, u^{(t_1)} u^{(t_2)} + \frac{1}{2} \sum_{t_1=1}^{q} \sum_{t_2=1}^{q} \sum_{t_3=1}^{q} g^{(t_1 t_2 t_3)}(\mu)\, u^{(t_1)} u^{(t_2)} u^{(t_3)} + \frac{1}{6} \sum_{t_1=1}^{q} \cdots \sum_{t_4=1}^{q} g^{(t_1 t_2 t_3 t_4)}(\mu)\, u^{(t_1)} \cdots u^{(t_4)} + \cdots.$$

4.4. Proof of $O_p(n^{-2})$ in the basic linear approximation (1), as $n \to \infty$

From the definition (1), by Taylor expanding the function $g(u + \mu)$, where $u \in U$, around $u = 0$, it follows that
$$g(\bar{y} + \mu) - g(\mu) - \sum_{i=1}^{n} \{g(n^{-1}Y_i + \mu) - g(\mu)\} = \sum_{k=1}^{4} \Delta_k + R,$$
where the quantities $\Delta_1$, $\Delta_2$, $\Delta_3$ and $\Delta_4$ are obtained as
$$\Delta_1 = \sum_{t_1=1}^{q} g^{(t_1)}(\mu)\, \bar{y}^{(t_1)} - \sum_{i=1}^{n} \sum_{t_1=1}^{q} g^{(t_1)}(\mu)\, n^{-1} Y_i^{(t_1)} = 0,$$
$$\Delta_2 = \frac{1}{2} \sum_{t_1=1}^{q} \sum_{t_2=1}^{q} g^{(t_1 t_2)}(\mu)\, \bar{y}^{(t_1)} \bar{y}^{(t_2)} - \frac{1}{2} \sum_{i=1}^{n} \sum_{t_1=1}^{q} \sum_{t_2=1}^{q} g^{(t_1 t_2)}(\mu)\, n^{-2} Y_i^{(t_1)} Y_i^{(t_2)},$$
$$\Delta_3 = \frac{1}{6} \sum_{t_1=1}^{q} \sum_{t_2=1}^{q} \sum_{t_3=1}^{q} g^{(t_1 t_2 t_3)}(\mu)\, \bar{y}^{(t_1)} \bar{y}^{(t_2)} \bar{y}^{(t_3)} - \frac{1}{6} \sum_{i=1}^{n} \sum_{t_1=1}^{q} \sum_{t_2=1}^{q} \sum_{t_3=1}^{q} g^{(t_1 t_2 t_3)}(\mu)\, n^{-3} Y_i^{(t_1)} Y_i^{(t_2)} Y_i^{(t_3)},$$
and
$$\Delta_4 = \frac{1}{24} \sum_{t_1=1}^{q} \cdots \sum_{t_4=1}^{q} g^{(t_1 t_2 t_3 t_4)}(\mu)\, \bar{y}^{(t_1)} \cdots \bar{y}^{(t_4)} - \frac{1}{24} \sum_{i=1}^{n} \sum_{t_1=1}^{q} \cdots \sum_{t_4=1}^{q} g^{(t_1 t_2 t_3 t_4)}(\mu)\, n^{-4} Y_i^{(t_1)} \cdots Y_i^{(t_4)},$$
respectively, and $R$ is the remainder. Note that $E[n^{-1} Y_i^{(t_1)}] = 0$, where $i = 1, \ldots, n$,
$$E[\bar{y}^{(t_1)}] = E\left[\sum_{i=1}^{n} n^{-1} Y_i^{(t_1)}\right] = 0,$$
$$E[\bar{y}^{(t_1)} \bar{y}^{(t_2)}] = E\left[\sum_{i=1}^{n} n^{-2} Y_i^{(t_1)} Y_i^{(t_2)}\right] = n^{-1} \mu^{(t_1 t_2)},$$
$$E[\bar{y}^{(t_1)} \bar{y}^{(t_2)} \bar{y}^{(t_3)}] = E\left[\sum_{i=1}^{n} n^{-3} Y_i^{(t_1)} Y_i^{(t_2)} Y_i^{(t_3)}\right] = n^{-2} \mu^{(t_1 t_2 t_3)},$$
$$E[\bar{y}^{(t_1)} \bar{y}^{(t_2)} \bar{y}^{(t_3)} \bar{y}^{(t_4)}] = O(n^{-2}), \qquad E\left[\sum_{i=1}^{n} n^{-4} Y_i^{(t_1)} Y_i^{(t_2)} Y_i^{(t_3)} Y_i^{(t_4)}\right] = n^{-3} \mu^{(t_1 t_2 t_3 t_4)},$$
as $n \to \infty$. These results on multivariate moments show that $\Delta_1 = 0$ and $E[\Delta_2] = E[\Delta_3] = 0$. Then, $E[|\Delta_4 + R|] = \lambda$, where $\lambda > 0$. There exists a finite constant $M_4 > 0$ such that
$$E\left[\left|\Delta_1 + \Delta_2 + \Delta_3 + \Delta_4 + R\right|\right] \leq M_4 \lambda.$$
For every $\varepsilon > 0$, by the Markov inequality,
$$P\left(\left|\sum_{k=1}^{4} \Delta_k + R\right| > \varepsilon\right) \leq \varepsilon^{-1} M_4 \lambda.$$
It finally follows that $\Delta_1 + \Delta_2 + \Delta_3 + \Delta_4 + R = O_p(n^{-2})$, as $n \to \infty$, because $M_4 = O(1)$ and $\lambda = O(n^{-2})$, as $n \to \infty$. Accordingly, it holds that $R = O_p(n^{-3})$, as $n \to \infty$.

4.5. Proof of $O_p(n^{-3})$ in the higher-order linear approximation (3), as $n \to \infty$

Following Appendix 4.3, the quantity $\Delta_5$ is defined as
$$\Delta_5 = \frac{1}{120} \sum_{t_1=1}^{q} \cdots \sum_{t_5=1}^{q} g^{(t_1 \cdots t_5)}(\mu)\, \bar{y}^{(t_1)} \cdots \bar{y}^{(t_5)} - \frac{1}{120} \sum_{i=1}^{n} \sum_{t_1=1}^{q} \cdots \sum_{t_5=1}^{q} g^{(t_1 \cdots t_5)}(\mu)\, n^{-5} Y_i^{(t_1)} \cdots Y_i^{(t_5)}.$$
From (3), it follows that
$$g(\bar{y} + \mu) - g(\mu) - \sum_{i=1}^{n} \{g(n^{-1}Y_i + \mu) - g(\mu)\} - \Gamma = \sum_{k=1}^{5} \Delta_k^* + R^*,$$
where $\Gamma$ is given by (2), the quantities $\Delta_k^*$ are obtained as $\Delta_k^* = \Delta_k - k\Delta_k/4$, $k = 1, 2, 3, 4, 5$, and $R^*$ is the remainder. For every $t_k = 1, \ldots, q$, $k = 1, 2, 3, 4, 5$, we have that
$$E[\bar{y}^{(t_1)} \cdots \bar{y}^{(t_5)}] = O(n^{-3}), \qquad E\left[\sum_{i=1}^{n} n^{-5} Y_i^{(t_1)} \cdots Y_i^{(t_5)}\right] = n^{-4} \mu^{(t_1 \cdots t_5)},$$
as $n \to \infty$. Following Appendix 4.3, and these results on multivariate moments, we have that $\Delta_1^* = \Delta_4^* = 0$ and $E[\Delta_2^*] = E[\Delta_3^*] = 0$. We also have that $E[|\Delta_5^* + R^*|] = \lambda^* > 0$, where $\lambda^* = O(n^{-3})$, as $n \to \infty$. Finally, it can be shown that $\Delta_1^* + \Delta_2^* + \Delta_3^* + \Delta_4^* + \Delta_5^* + R^* = O_p(n^{-3})$, as $n \to \infty$, and (accordingly) $R^* = O_p(n^{-4})$, as $n \to \infty$.


4.6. Proof of $\Gamma = O_p(n^{-1/2})$, as $n \to \infty$, where $\Gamma$ is given by (2)

Following Appendix 4.3, we have that
$$\sum_{t_1=1}^{q} g^{(t_1)}(\bar{y} + \mu)\, \bar{y}^{(t_1)} = \sum_{t_1=1}^{q} g^{(t_1)}(\mu)\, \bar{y}^{(t_1)} + O_p(n^{-1}),$$
as $n \to \infty$, and
$$\sum_{t_1=1}^{q} g^{(t_1)}(n^{-1}Y_i + \mu)\, n^{-1} Y_i^{(t_1)} = \sum_{t_1=1}^{q} g^{(t_1)}(\mu)\, n^{-1} Y_i^{(t_1)} + O_p(n^{-2}),$$
where $i = 1, \ldots, n$, as $n \to \infty$, for every $t_1 = 1, \ldots, q$. It follows that
$$\Gamma = \frac{1}{4} \sum_{t_1=1}^{q} g^{(t_1)}(\mu)\, \bar{y}^{(t_1)} - \frac{1}{4} \sum_{i=1}^{n} \sum_{t_1=1}^{q} g^{(t_1)}(\mu)\, n^{-1} Y_i^{(t_1)} + O_p(n^{-1}),$$
as $n \to \infty$. Finally, $\Gamma = O_p(n^{-1/2})$, as $n \to \infty$.

4.7. Proof of $O_p(n^{-5/2})$ in the linear approximation (4), as $n \to \infty$

For every $t_k = 1, \ldots, q$, $k = 1, 2, 3$, we have that
$$\sum_{i=1}^{n} n^{-1}(X_i^{(t_1)} - \bar{x}^{(t_1)}) = O_p(n^{-1/2}),$$
$$\sum_{i=1}^{n} n^{-1}(X_i^{(t_1)} - \bar{x}^{(t_1)})\, n^{-1}(X_i^{(t_2)} - \bar{x}^{(t_2)}) = O_p(n^{-3/2}),$$
$$\sum_{i=1}^{n} n^{-1}(X_i^{(t_1)} - \bar{x}^{(t_1)})\, n^{-1}(X_i^{(t_2)} - \bar{x}^{(t_2)})\, n^{-1}(X_i^{(t_3)} - \bar{x}^{(t_3)}) = O_p(n^{-5/2}),$$
as $n \to \infty$. Following Appendix 4.4, and substituting $\mu$ with $\bar{x}$, we have that
$$\hat{\Delta}_1 = \sum_{t_1=1}^{q} \sum_{i=1}^{n} g^{(t_1)}(n^{-1}(X_i - \bar{x}) + \bar{x})\, n^{-1}(X_i^{(t_1)} - \bar{x}^{(t_1)}) = \sum_{t_1=1}^{q} \sum_{i=1}^{n} \{g^{(t_1)}(\mu) + O_p(n^{-1/2})\}\, n^{-1}(X_i^{(t_1)} - \bar{x}^{(t_1)}),$$
$$\hat{\Delta}_2 = \frac{1}{2} \sum_{t_1=1}^{q} \sum_{t_2=1}^{q} \sum_{i=1}^{n} g^{(t_1 t_2)}(n^{-1}(X_i - \bar{x}) + \bar{x})\, n^{-1}(X_i^{(t_1)} - \bar{x}^{(t_1)})\, n^{-1}(X_i^{(t_2)} - \bar{x}^{(t_2)}) = \frac{1}{2} \sum_{t_1=1}^{q} \sum_{t_2=1}^{q} \sum_{i=1}^{n} \{g^{(t_1 t_2)}(\mu) + O_p(n^{-1/2})\}\, n^{-1}(X_i^{(t_1)} - \bar{x}^{(t_1)})\, n^{-1}(X_i^{(t_2)} - \bar{x}^{(t_2)}),$$
$$\hat{\Delta}_3 = \frac{1}{6} \sum_{t_1=1}^{q} \sum_{t_2=1}^{q} \sum_{t_3=1}^{q} \sum_{i=1}^{n} g^{(t_1 t_2 t_3)}(n^{-1}(X_i - \bar{x}) + \bar{x})\, n^{-1}(X_i^{(t_1)} - \bar{x}^{(t_1)})\, n^{-1}(X_i^{(t_2)} - \bar{x}^{(t_2)})\, n^{-1}(X_i^{(t_3)} - \bar{x}^{(t_3)}) = \frac{1}{6} \sum_{t_1=1}^{q} \sum_{t_2=1}^{q} \sum_{t_3=1}^{q} \sum_{i=1}^{n} \{g^{(t_1 t_2 t_3)}(\mu) + O_p(n^{-1/2})\}\, n^{-1}(X_i^{(t_1)} - \bar{x}^{(t_1)})\, n^{-1}(X_i^{(t_2)} - \bar{x}^{(t_2)})\, n^{-1}(X_i^{(t_3)} - \bar{x}^{(t_3)}),$$
as $n \to \infty$. We also have that $\hat{\Delta}_1 = O_p(n^{-1})$, $\hat{\Delta}_2 = O_p(n^{-3/2})$ and $\hat{\Delta}_3 = O_p(n^{-5/2})$, as $n \to \infty$. It can be shown that $\hat{\Delta}_1 + \hat{R} = O_p(n^{-1})$, where $\hat{R}$ is the remainder, as $n \to \infty$, and (accordingly) $\hat{R} = O_p(n^{-3/2})$, as $n \to \infty$. Then,
$$0 = \sum_{i=1}^{n} \{g(n^{-1}(X_i - \bar{x}) + \bar{x}) - g(\bar{x})\} + O_p(n^{-3/2}),$$
as $n \to \infty$, and (4) is finally proved.

4.8. Proof of $O_p(n^{-7/2})$ in the higher-order linear approximation (5), as $n \to \infty$

Following Appendix 4.7, with $\hat{\Gamma}$ given by (6), we have that $\hat{\Delta}_1^* = \hat{\Delta}_2^* = 0$. It can be shown that $\hat{R}^* = O_p(n^{-5/2})$, where $\hat{R}^*$ is the remainder, as $n \to \infty$. Then, (5) is finally proved.

4.9. Proof of $\hat{\Gamma} = O_p(n^{-1/2})$, as $n \to \infty$, where $\hat{\Gamma}$ is given by (6)

Following Appendix 4.7, we have that
$$\hat{\Gamma} = -\frac{1}{4} \sum_{t_1=1}^{q} \sum_{i=1}^{n} \{g^{(t_1)}(\mu) + O_p(n^{-1/2})\}\, n^{-1}(X_i^{(t_1)} - \bar{x}^{(t_1)}),$$
as $n \to \infty$. Finally, we have that $\hat{\Gamma} = O_p(n^{-1/2})$ and $n^{-1}\hat{\Gamma} = O_p(n^{-3/2})$, as $n \to \infty$.


4.10. Asymptotic normality of the linear approximation (4), as $n \to \infty$

The variance $\mathrm{VAR}\left[\sum_{i=1}^{n} g(n^{-1}Y_i + \mu) - n\, g(\mu)\right]$ of the basic linear approximation (1) is
$$\mathrm{VAR}\left[\sum_{i=1}^{n} \sum_{t_1=1}^{q} g^{(t_1)}(\mu)\, n^{-1} Y_i^{(t_1)} + O_p(n^{-2})\right] = n^{-1} \sigma^2 + O(n^{-2}),$$
as $n \to \infty$, where
$$\sigma^2 = \sum_{t_1=1}^{q} \sum_{t_2=1}^{q} g^{(t_1)}(\mu)\, g^{(t_2)}(\mu)\, \mu^{(t_1 t_2)}.$$
The characteristic function $\phi_n(u)$ of $\sigma^{-1} n^{1/2} \left\{\sum_{i=1}^{n} g(n^{-1}Y_i + \mu) - n\, g(\mu)\right\}$, where $u \in \mathbb{R}^1$, is
$$\phi_n(u) = E\left[\exp\left(iu\, \sigma^{-1} n^{1/2} \left\{\sum_{i=1}^{n} g(n^{-1}Y_i + \mu) - n\, g(\mu)\right\}\right)\right] = E\left[1 + iu\, \sigma^{-1} n^{1/2} \left\{\sum_{i=1}^{n} g(n^{-1}Y_i + \mu) - n\, g(\mu)\right\} - \frac{u^2}{2}\, \sigma^{-2} n \left\{\sum_{i=1}^{n} g(n^{-1}Y_i + \mu) - n\, g(\mu)\right\}^2 + \cdots\right],$$
as $n \to \infty$. Actually, it holds that
$$\phi_n(u) = 1 - \frac{u^2}{2} + o(n^{-3/2}),$$
$$\phi_n(u) \to \exp(-u^2/2),$$
as $n \to \infty$, which is the characteristic function of the normal $N(0,1)$ distribution. Following Appendixes 4.4 and 4.7, we finally have the asymptotic result (7).

4.11. Asymptotic normality of the higher-order linear approximation (5), as $n \to \infty$

In Appendix 4.4, it is seen that $\Delta_1 = 0$. The variance $\mathrm{VAR}\left[\sum_{i=1}^{n} g(n^{-1}Y_i + \mu) - n\, g(\mu) + \Gamma\right]$ of the higher-order linear approximation (3) then is
$$\mathrm{VAR}\left[\sum_{i=1}^{n} \sum_{t_1=1}^{q} g^{(t_1)}(\mu)\, n^{-1} Y_i^{(t_1)} + O_p(n^{-1})\right] = n^{-1} \sigma^2 + O(n^{-2}),$$
as $n \to \infty$, where $\sigma^2$ is defined in Appendix 4.10. The characteristic function $\phi_n^*(u)$ of $\sigma^{-1} n^{1/2} \left\{\sum_{i=1}^{n} g(n^{-1}Y_i + \mu) - n\, g(\mu) + \Gamma\right\}$, where $u \in \mathbb{R}^1$, is
$$\phi_n^*(u) = 1 - \frac{u^2}{2} + o(n^{-5/2}),$$
as $n \to \infty$. Then, we have that
$$\phi_n^*(u) \to \exp(-u^2/2),$$
as $n \to \infty$, which is the characteristic function of the normal $N(0,1)$ distribution. Following Appendixes 4.5, 4.8 and 4.9, we finally have the asymptotic result (8).

4.12. Proof of $\hat{\sigma}^2 = n^{-1}\sigma^2 + O_p(n^{-3/2})$, as $n \to \infty$, where $\hat{\sigma}^2$ is given by (9)

From the definition (9), we have that
$$\hat{\sigma}^2 = \sum_{i=1}^{n} \left\{ \sum_{t_1=1}^{q} g^{(t_1)}(\mu)\, n^{-1}(X_i^{(t_1)} - \bar{x}^{(t_1)}) + O_p(n^{-3/2}) \right\}^2,$$
as $n \to \infty$. The asymptotic variance $\sigma^2$ is defined in Appendix 4.10. Then, it finally follows that $\hat{\sigma}^2 = n^{-1}\sigma^2 + O_p(n^{-3/2})$, as $n \to \infty$.

REFERENCES

R.A. BECKER, J.M. CHAMBERS, A.R. WILKS (1988), The New S Language, Wadsworth and Brooks/Cole, Pacific Grove, California.

R.N. BHATTACHARYA, J.K. GHOSH (1978), On the validity of the formal Edgeworth expansion, "Annals of Statistics", 6, 434-451.

W.A. FULLER (1976), Introduction to Statistical Time Series, Wiley and Sons, New York.

P. HALL (1992), The Bootstrap and Edgeworth Expansion, Springer-Verlag, New York.

A. PALLINI (2000), Efficient bootstrap estimation of distribution functions, "Metron", 58 (1-2), 81-95.

A. PALLINI (2002), On a linear method in bootstrap confidence intervals, "Statistica", 62, 5-25.

P.K. SEN, J.M. SINGER (1993), Large Sample Methods in Statistics, Chapman and Hall, New York.

RIASSUNTO

Analisi asintotica per approssimazioni lineari di medie

A higher-order version of the linearization of smooth functions of means proposed in Pallini (2002) is defined and studied. This version is shown to improve on the approximation error of order $O_p(n^{-2})$ in probability, as the sample size $n$ diverges, yielding a smaller error of order $O_p(n^{-3})$, as $n$ diverges. The asymptotic normality of both linearizations is proved, as $n$ diverges. Empirical results of a simulation study on the ratio of two means example are presented.

SUMMARY

Asymptotics for linear approximations of smooth functions of means

A higher-order version of the linear approximation of smooth functions of means proposed in Pallini (2002) is defined and studied. This version is shown to improve over the error of order $O_p(n^{-2})$ in probability, as the sample size $n$ diverges, yielding a smaller error of order $O_p(n^{-3})$, as $n$ diverges. Both linear approximations are shown to have a normal distribution, as $n$ diverges. Empirical results of a simulation study on the ratio of means example are presented.
