
e-ISSN 2070-5948, DOI 10.1285/i20705948v6n1p1

© 2013 Università del Salento –  http://siba-ese.unile.it/index.php/ejasa/index

SOME MODIFIED EXPONENTIAL RATIO-TYPE ESTIMATORS IN THE PRESENCE OF NON-RESPONSE UNDER TWO-PHASE SAMPLING SCHEME

Javid Shabbir*, Nasir Saeed Khan

Department of Statistics, Quaid-i-Azam University, Islamabad, Pakistan.

Received 11 February 2011; Accepted 02 April 2012; Available online 26 April 2013

Abstract: This paper addresses the problem of estimating the population mean using information on an auxiliary variable in the presence of non-response under two-phase sampling. On the lines of Bahl and Tuteja [1] and Upadhyaya et al. [22], a class of modified exponential ratio-type estimators using a single auxiliary variable is proposed under two different non-response situations for the study variable. The expressions for the bias and mean square error (MSE) of the proposed class of estimators are derived. Efficiency comparisons of the proposed class of estimators with the usual unbiased estimator of Hansen and Hurwitz [3] and other existing estimators are made. An empirical study has been carried out to judge the performance of the proposed estimators.

Keywords: Auxiliary variable, bias, mean square error, non-response, two-phase sampling, exponential ratio-type estimator.

1. Introduction

Consider a finite population of size $N$. We draw a sample of size $n$ from the population using the simple random sampling without replacement (SRSWOR) scheme. Let $y_i$ and $x_i$ be the observations on the study variable $y$ and the auxiliary variable $x$ respectively. Let $\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$ and $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ be the sample means corresponding to the population means $\bar{Y} = \frac{1}{N}\sum_{i=1}^{N} y_i$ and $\bar{X} = \frac{1}{N}\sum_{i=1}^{N} x_i$ respectively. When information on $\bar{X}$ is unknown, double

* Email: [email protected]


sampling or two-phase sampling is suitable for estimating the population mean. In the first phase we select a sample of size $n'$ from the population by SRSWOR to observe $x$. In the second phase, we select a sample of size $n$ ($n < n'$) from the $n'$ units, also by SRSWOR. Non-response occurs at the second phase, in which $n_1$ units respond and $n_2$ do not. From the $n_2$ non-respondents, a subsample of $r = n_2/k$, $k > 1$, units is selected, where $k$ is the inverse sampling rate at the second-phase sample of size $n$.

Sometimes it may not be possible to collect complete information for all the units selected in the sample, owing to non-response. For estimating the population mean in sample surveys when some observations are missing due to non-response, Hansen and Hurwitz [3] considered the estimator $\bar{y}^* = w_1\bar{y}_1 + w_2\bar{y}_{2r}$, where $\bar{y}_1 = \frac{1}{n_1}\sum_{i=1}^{n_1} y_i$, $\bar{y}_{2r} = \frac{1}{r}\sum_{i=1}^{r} y_i$, $w_1 = n_1/n$ and $w_2 = n_2/n$.
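As a hedged illustration of the Hansen and Hurwitz [3] sub-sampling estimator just described, the sketch below computes $\bar{y}^* = w_1\bar{y}_1 + w_2\bar{y}_{2r}$; the data values, sample sizes and the inverse sampling rate $k$ are all hypothetical.

```python
# Sketch of the Hansen-Hurwitz estimator y* = w1*ybar1 + w2*ybar_2r;
# all data values below are hypothetical.
y_resp = [12.0, 15.0, 11.0, 14.0, 13.0]   # the n1 responding units
y_sub  = [20.0, 18.0]                     # subsample of r = n2/k non-respondents
k = 2                                     # inverse sampling rate (k > 1)
n1, r = len(y_resp), len(y_sub)
n2 = r * k
n = n1 + n2                               # second-phase sample size

w1, w2 = n1 / n, n2 / n
ybar1  = sum(y_resp) / n1
ybar2r = sum(y_sub) / r
y_star = w1 * ybar1 + w2 * ybar2r
print(y_star)
```

The subsample mean is weighted by $w_2 = n_2/n$, not $r/n$, which is what makes $\bar{y}^*$ unbiased under the sub-sampling scheme.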

The variance of $\bar{y}^*$ is given by:

$$Var(\bar{y}^*) = \left(\frac{1-f}{n}\right)S_y^2 + W_2\left(\frac{k-1}{n}\right)S_{y(2)}^2, \qquad (1)$$

where $f = n/N$, $W_2 = N_2/N$, $S_y^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(y_i - \bar{Y}\right)^2$ and $S_{y(2)}^2 = \frac{1}{N_2-1}\sum_{i=1}^{N_2}\left(y_i - \bar{Y}_2\right)^2$.
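The two terms of (1) can be evaluated numerically; in this minimal sketch the population variances, the weight $W_2$ and the sampling rate are assumed values chosen only for illustration.

```python
# Numerical sketch of Var(y*) in (1); all inputs are assumed for illustration.
N, n, k = 1000, 100, 2     # population size, sample size, inverse sampling rate
W2 = 0.25                  # N2/N, non-response stratum weight (assumed)
S_y2 = 50.0                # S_y^2, population mean square of y (assumed)
S_y2_2 = 30.0              # S_y(2)^2, mean square in the non-response stratum
f = n / N
var_y_star = ((1 - f) / n) * S_y2 + W2 * ((k - 1) / n) * S_y2_2
print(var_y_star)          # SRSWOR term 0.45 plus non-response penalty 0.075
```

The second term vanishes when $k = 1$ (every non-respondent is revisited), recovering the usual SRSWOR variance.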

It is well known that in estimating the population mean, sample survey experts use the auxiliary information to improve the precision of the estimates.

Similar to $\bar{y}^*$, one can write $\bar{x}^* = w_1\bar{x}_1 + w_2\bar{x}_{2r}$, where $\bar{x}_1 = \frac{1}{n_1}\sum_{i=1}^{n_1} x_i$ and $\bar{x}_{2r} = \frac{1}{r}\sum_{i=1}^{r} x_i$.

The variance of $\bar{x}^*$ is given by:

$$Var(\bar{x}^*) = \left(\frac{1-f}{n}\right)S_x^2 + W_2\left(\frac{k-1}{n}\right)S_{x(2)}^2, \qquad (2)$$

where $S_x^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \bar{X}\right)^2$ and $S_{x(2)}^2 = \frac{1}{N_2-1}\sum_{i=1}^{N_2}\left(x_i - \bar{X}_2\right)^2$.

The auxiliary information can be used at both the design and estimation stages to compensate for units selected for the sample that fail to provide adequate responses, and for population units missing from the sampling frame. Rao ([10], [11]), Khare and Srivastava ([4], [5], [6]), Okafor and Lee [9], Sarndal and Lundstrom [12], Tabasum and Khan ([20], [21]), Singh and Kumar ([13], [14], [15], [16], [17], [18]) and Singh et al. [19] have suggested estimators for the population mean $\bar{Y}$ of the study variable $y$ that use the auxiliary information in the presence of non-response, and have studied their properties.


When there is non-response on the study variable $y$ as well as on the auxiliary variable $x$, the conventional two-phase ratio and regression estimators of the population mean $\bar{Y}$, suggested by Cochran [2], are defined as:

$$\hat{Y}_{R(1)} = \bar{y}^*\,\frac{\bar{x}'}{\bar{x}^*}, \qquad (3)$$

and

$$\hat{Y}_{Reg(1)} = \bar{y}^* + b^*_{yx}\left(\bar{x}' - \bar{x}^*\right), \qquad (4)$$

where $b^*_{yx} = s^*_{xy}/s^{*2}_x$ is the sample regression coefficient, whose population counterpart is $\beta_{yx} = S_{xy}/S_x^2$, at the first-phase sampling. Here $s^*_{xy} = \frac{1}{n-1}\left(\sum_{i=1}^{n_1} x_i y_i + k\sum_{i=1}^{r} x_i y_i - n\bar{x}^*\bar{y}^*\right)$ and $s^{*2}_x = \frac{1}{n-1}\left(\sum_{i=1}^{n_1} x_i^2 + k\sum_{i=1}^{r} x_i^2 - n\bar{x}^{*2}\right)$ are the sample covariance and sample variance respectively.

Recently, Singh and Kumar [17] suggested the following estimator on the lines of Bahl and Tuteja [1]:

$$\hat{Y}_{Exp(1)} = \bar{y}^*\exp\left(\frac{\bar{x}' - \bar{x}^*}{\bar{x}' + \bar{x}^*}\right). \qquad (5)$$

To the first degree of approximation, the expressions for the bias and mean square error of $\hat{Y}_{R(1)}$, $\hat{Y}_{Reg(1)}$ and $\hat{Y}_{Exp(1)}$ are given by:

$$B(\hat{Y}_{R(1)}) \cong \bar{Y}\left[\lambda''\left(1 - K_{yx}\right)C_x^2 + \lambda^*\left(1 - K_{yx(2)}\right)C_{x(2)}^2\right], \qquad (6)$$

$$B(\hat{Y}_{Reg(1)}) \cong \beta_{yx}\,\frac{N}{(N-1)(N-2)}\left[\lambda''\left(\frac{\mu_{21}}{\mu_{11}} - \frac{\mu_{30}}{\mu_{20}}\right) + \lambda^*\left(\frac{\mu_{21(2)}}{\mu_{11}} - \frac{\mu_{30(2)}}{\mu_{20}}\right)\right], \qquad (7)$$

$$B(\hat{Y}_{Exp(1)}) \cong \bar{Y}\left[\lambda''\,\frac{1}{2}\left(\frac{3}{4} - K_{yx}\right)C_x^2 + \lambda^*\,\frac{1}{2}\left(\frac{3}{4} - K_{yx(2)}\right)C_{x(2)}^2\right], \qquad (8)$$

$$MSE(\hat{Y}_{R(1)}) \cong \bar{Y}^2\left[\lambda' C_y^2 + \lambda''\left\{C_y^2 + \left(1 - 2K_{yx}\right)C_x^2\right\} + \lambda^*\left\{C_{y(2)}^2 + \left(1 - 2K_{yx(2)}\right)C_{x(2)}^2\right\}\right], \qquad (9)$$

$$MSE(\hat{Y}_{Reg(1)}) \cong \bar{Y}^2\left[\lambda' C_y^2 + \lambda''\left(1 - \rho_{yx}^2\right)C_y^2 + \lambda^*\left\{C_{y(2)}^2 + K_{yx}\left(K_{yx} - 2K_{yx(2)}\right)C_{x(2)}^2\right\}\right], \qquad (10)$$


$$MSE(\hat{Y}_{Exp(1)}) \cong \bar{Y}^2\left[\lambda' C_y^2 + \lambda''\left\{C_y^2 + \frac{1}{2}\left(\frac{1}{2} - 2K_{yx}\right)C_x^2\right\} + \lambda^*\left\{C_{y(2)}^2 + \frac{1}{2}\left(\frac{1}{2} - 2K_{yx(2)}\right)C_{x(2)}^2\right\}\right], \qquad (11)$$

where $K_{yx} = \frac{\beta_{yx}}{R} = \rho_{yx}\frac{C_y}{C_x}$, $K_{yx(2)} = \frac{\beta_{yx(2)}}{R} = \rho_{yx(2)}\frac{C_{y(2)}}{C_{x(2)}}$, $\beta_{yx} = \frac{S_{xy}}{S_x^2}$, $\beta_{yx(2)} = \frac{S_{xy(2)}}{S_{x(2)}^2}$, $S_{xy} = \frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \bar{X}\right)\left(y_i - \bar{Y}\right)$, $S_{xy(2)} = \frac{1}{N_2-1}\sum_{i=1}^{N_2}\left(x_i - \bar{X}_2\right)\left(y_i - \bar{Y}_2\right)$, $C_y = \frac{S_y}{\bar{Y}}$, $C_{y(2)} = \frac{S_{y(2)}}{\bar{Y}}$, $C_x = \frac{S_x}{\bar{X}}$, $C_{x(2)} = \frac{S_{x(2)}}{\bar{X}}$, $\rho_{yx(2)} = \frac{S_{xy(2)}}{S_{x(2)}S_{y(2)}}$, $\lambda = \frac{1-f}{n}$, $\lambda' = \frac{1-f'}{n'}$, $\lambda'' = \lambda - \lambda'$, $\lambda^* = \frac{W_2(k-1)}{n}$, $R = \frac{\bar{Y}}{\bar{X}}$, $f = \frac{n}{N}$, $f' = \frac{n'}{N}$, $\mu_{vs} = \frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \bar{X}\right)^v\left(y_i - \bar{Y}\right)^s$ and $\mu_{vs(2)} = \frac{1}{N_2-1}\sum_{i=1}^{N_2}\left(x_i - \bar{X}_2\right)^v\left(y_i - \bar{Y}_2\right)^s$, with $(v, s)$ being non-negative integers.

When there is incomplete information on the study variable y and complete information on the auxiliary variable x , the conventional two-phase ratio, regression and exponential-ratio type estimators are respectively defined by:

$$\hat{Y}_{R(2)} = \bar{y}^*\,\frac{\bar{x}'}{\bar{x}}, \qquad (12)$$

and

$$\hat{Y}_{Reg(2)} = \bar{y}^* + b^{**}_{yx}\left(\bar{x}' - \bar{x}\right), \qquad (13)$$

where $b^{**}_{yx} = s^*_{xy}/s_x^2$ is the sample regression coefficient, whose population counterpart is $\beta_{yx} = S_{xy}/S_x^2$, at the second-phase sampling, and $s_x^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2$.

Singh and Kumar [14] defined the following exponential ratio type estimator:

$$\hat{Y}_{Exp(2)} = \bar{y}^*\exp\left(\frac{\bar{x}' - \bar{x}}{\bar{x}' + \bar{x}}\right). \qquad (14)$$


To the first degree of approximation, the bias and mean square error of $\hat{Y}_{R(2)}$, $\hat{Y}_{Reg(2)}$ and $\hat{Y}_{Exp(2)}$ are given by:

$$B(\hat{Y}_{R(2)}) \cong \bar{Y}\lambda''\left(1 - K_{yx}\right)C_x^2, \qquad (15)$$

$$B(\hat{Y}_{Reg(2)}) \cong \lambda''\,\beta_{yx}\,\frac{N}{(N-1)(N-2)}\left(\frac{\mu_{21}}{\mu_{11}} - \frac{\mu_{30}}{\mu_{20}}\right), \qquad (16)$$

$$B(\hat{Y}_{Exp(2)}) \cong \lambda''\,\bar{Y}\,\frac{1}{2}\left(\frac{3}{4} - K_{yx}\right)C_x^2, \qquad (17)$$

$$MSE(\hat{Y}_{R(2)}) \cong \bar{Y}^2\left[\lambda' C_y^2 + \lambda''\left\{C_y^2 + \left(1 - 2K_{yx}\right)C_x^2\right\} + \lambda^* C_{y(2)}^2\right], \qquad (18)$$

$$MSE(\hat{Y}_{Reg(2)}) \cong \bar{Y}^2\left[\lambda' C_y^2 + \lambda''\left(1 - \rho_{yx}^2\right)C_y^2 + \lambda^* C_{y(2)}^2\right], \qquad (19)$$

$$MSE(\hat{Y}_{Exp(2)}) \cong \bar{Y}^2\left[\lambda' C_y^2 + \lambda''\left\{C_y^2 + \frac{1}{2}\left(\frac{1}{2} - 2K_{yx}\right)C_x^2\right\} + \lambda^* C_{y(2)}^2\right]. \qquad (20)$$

2. Proposed exponential-ratio type estimator

We propose the following modified exponential-ratio type estimator for estimating the population mean $\bar{Y}$ under the two-phase sampling scheme in two different situations.

2.1 Situation I

The population mean $\bar{X}$ is unknown, and non-response occurs on both the study variable $y$ and the auxiliary variable $x$. On the lines of Bahl and Tuteja [1] and Upadhyaya et al. [22], we propose the following estimator:

$$\hat{Y}^{(h)}_{P(1)} = \bar{y}^*\exp\left(\frac{c\left(\bar{x}' - \bar{x}^*\right)}{\left(c\bar{x}' + d\right) + (h-1)\left(c\bar{x}^* + d\right)}\right), \qquad (21)$$

where $h\,(>0)$, $c\,(\neq 0)$ and $d$ are constants, which can be the coefficient of variation $C_x$, the correlation coefficient $\rho_{yx}$, or the standard deviation $S_x$.

Remarks:

(i) When $h = 0$, the estimator $\hat{Y}^{(h)}_{P(1)}$ reduces to

$$\hat{Y}^{(0)}_{P(1)} = \bar{y}^*\exp(1), \qquad (22)$$

which is a biased estimator with larger MSE than the usual estimator $\bar{y}^*$, because the positive factor $\exp(1)$ acts multiplicatively on $\hat{Y}^{(0)}_{P(1)}$.

(ii) When $h = 1$, the estimator $\hat{Y}^{(h)}_{P(1)}$ reduces to

$$\hat{Y}^{(1)}_{P(1)} = \bar{y}^*\exp\left(\frac{c\left(\bar{x}' - \bar{x}^*\right)}{c\bar{x}' + d}\right). \qquad (23)$$

(iii) When $h = 2$, the estimator $\hat{Y}^{(h)}_{P(1)}$ reduces to the estimator:

$$\hat{Y}^{(2)}_{P(1)} = \bar{y}^*\exp\left(\frac{c\left(\bar{x}' - \bar{x}^*\right)}{\left(c\bar{x}' + d\right) + \left(c\bar{x}^* + d\right)}\right). \qquad (24)$$
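The structure of (21) and its special cases can be sketched in code. The function below is a hedged illustration, not the paper's computation: it evaluates the exponential factor of (21) for general $(h, c, d)$, and all numeric inputs are hypothetical. Setting $h = 1$ gives (23) and $h = 2$ gives (24).

```python
import math

# Sketch of the proposed estimator (21); all numeric inputs are hypothetical.
def y_hat_p1(y_star, x_star, xbar_prime, h, c, d):
    """Modified exponential ratio-type estimator Yhat_P(1)^(h) of (21)."""
    num = c * (xbar_prime - x_star)
    den = (c * xbar_prime + d) + (h - 1) * (c * x_star + d)
    return y_star * math.exp(num / den)

est_h1 = y_hat_p1(15.0, 50.0, 52.0, h=1, c=1.0, d=2.0)   # reduces to (23)
est_h2 = y_hat_p1(15.0, 50.0, 52.0, h=2, c=1.0, d=2.0)   # reduces to (24)
# A larger h enlarges the denominator, so the exponential adjustment shrinks.
print(est_h1 > est_h2 > 15.0)
```

This makes visible the role of $h$ as a damping constant on the exponential ratio adjustment.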

To obtain the bias and mean square error of the estimator $\hat{Y}^{(h)}_{P(1)}$, we define:

$\bar{y}^* = \bar{Y}\left(1 + \varepsilon_0\right)$, $\bar{x}^* = \bar{X}\left(1 + \varepsilon_1\right)$, $\bar{x}' = \bar{X}\left(1 + \varepsilon_1'\right)$, $\bar{x} = \bar{X}\left(1 + \varepsilon_2\right)$, such that $E(\varepsilon_i) = 0$, $(i = 0, 1, 2)$ and $E(\varepsilon_1') = 0$, $E(\varepsilon_0^2) = \lambda C_y^2 + \lambda^* C_{y(2)}^2$, $E(\varepsilon_1^2) = \lambda C_x^2 + \lambda^* C_{x(2)}^2$, $E(\varepsilon_1'^2) = \lambda' C_x^2$, $E(\varepsilon_2^2) = \lambda C_x^2$, $E(\varepsilon_0\varepsilon_1) = \lambda\rho_{yx}C_yC_x + \lambda^*\rho_{yx(2)}C_{y(2)}C_{x(2)}$, $E(\varepsilon_0\varepsilon_1') = \lambda'\rho_{yx}C_yC_x$, $E(\varepsilon_0\varepsilon_2) = \lambda\rho_{yx}C_yC_x$, $E(\varepsilon_1\varepsilon_1') = \lambda' C_x^2$, $E(\varepsilon_1\varepsilon_2) = \lambda C_x^2$ and $E(\varepsilon_1'\varepsilon_2) = \lambda' C_x^2$. Expressing the estimator $\hat{Y}^{(h)}_{P(1)}$ given in (21) in terms of the $\varepsilon$'s, we have:

$$\hat{Y}^{(h)}_{P(1)} = \bar{Y}\left(1 + \varepsilon_0\right)\exp\left(\frac{\delta\left(\varepsilon_1' - \varepsilon_1\right)}{h\left\{1 + \frac{\delta}{h}\left(\varepsilon_1' + (h-1)\varepsilon_1\right)\right\}}\right), \qquad (25)$$

where $\delta = \frac{c\bar{X}}{c\bar{X} + d}$.

Solving (25), neglecting terms of ε 's having power greater than two, we have:

$$\hat{Y}^{(h)}_{P(1)} - \bar{Y} \cong \bar{Y}\left[\varepsilon_0 + \frac{\delta}{h}\left(\varepsilon_1' - \varepsilon_1\right) + \frac{\delta}{h}\left(\varepsilon_0\varepsilon_1' - \varepsilon_0\varepsilon_1\right) + \frac{\delta^2}{2h^2}\left(\varepsilon_1' - \varepsilon_1\right)^2 - \frac{\delta^2}{h^2}\left(\varepsilon_1'^2 + (h-2)\varepsilon_1'\varepsilon_1 - (h-1)\varepsilon_1^2\right)\right]. \qquad (26)$$

Taking expectations on both sides of (26), we get the bias of $\hat{Y}^{(h)}_{P(1)}$, which is given by:

$$B(\hat{Y}^{(h)}_{P(1)}) \cong \bar{Y}\left[\frac{\lambda''\delta}{h}\left\{\frac{\delta}{h}\left(h - \frac{1}{2}\right) - K_{yx}\right\}C_x^2 + \frac{\lambda^*\delta}{h}\left\{\frac{\delta}{h}\left(h - \frac{1}{2}\right) - K_{yx(2)}\right\}C_{x(2)}^2\right]. \qquad (27)$$

Squaring both sides of (26) and neglecting terms of ε 's involving power greater than two, we have:

$$\left(\hat{Y}^{(h)}_{P(1)} - \bar{Y}\right)^2 \cong \bar{Y}^2\left[\varepsilon_0^2 + \frac{\delta^2}{h^2}\left(\varepsilon_1'^2 + \varepsilon_1^2 - 2\varepsilon_1'\varepsilon_1\right) + \frac{2\delta}{h}\left(\varepsilon_0\varepsilon_1' - \varepsilon_0\varepsilon_1\right)\right]. \qquad (28)$$

Using (28), the MSE of $\hat{Y}^{(h)}_{P(1)}$ to the first degree of approximation is given by:

$$MSE(\hat{Y}^{(h)}_{P(1)}) \cong \bar{Y}^2\left[\lambda' C_y^2 + \lambda''\left\{C_y^2 + A_1 C_x^2\right\} + \lambda^*\left\{C_{y(2)}^2 + A_2 C_{x(2)}^2\right\}\right], \qquad (29)$$

where $A_1 = \frac{\delta}{h}\left(\frac{\delta}{h} - 2K_{yx}\right)$ and $A_2 = \frac{\delta}{h}\left(\frac{\delta}{h} - 2K_{yx(2)}\right)$.

The $MSE(\hat{Y}^{(h)}_{P(1)})$ is minimum when

$$h = \frac{\delta\left(\lambda'' C_x^2 + \lambda^* C_{x(2)}^2\right)}{\lambda'' K_{yx} C_x^2 + \lambda^* K_{yx(2)} C_{x(2)}^2} = h_0 \ \text{(say)}.$$

Thus the resulting minimum MSE of $\hat{Y}^{(h)}_{P(1)}$ is given by:

$$MSE(\hat{Y}^{(h)}_{P(1)})_{\min} \cong \bar{Y}^2\left[\lambda' C_y^2 + \lambda'' C_y^2 + \lambda^* C_{y(2)}^2 - \frac{\left(\lambda'' K_{yx} C_x^2 + \lambda^* K_{yx(2)} C_{x(2)}^2\right)^2}{\lambda'' C_x^2 + \lambda^* C_{x(2)}^2}\right]. \qquad (30)$$
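The optimising value $h_0$ can be checked numerically: the $h$-dependent part of (29) is quadratic in $t = \delta/h$, so $t = \delta/h_0$ should not be beaten by nearby values. A hedged sketch with all moment inputs assumed for illustration:

```python
# Numeric check of the optimum in (29)-(30); all inputs are assumed values.
lam_dd, lam_st = 0.004, 0.002          # lambda'', lambda*
Cx2, Cx2_2 = 0.09, 0.12                # C_x^2, C_x(2)^2
Kyx, Kyx2 = 0.8, 0.6                   # K_yx, K_yx(2)
delta = 0.9

def mse_part(t):
    """h-dependent part of MSE (29) with t = delta/h (up to the factor Y^2)."""
    return lam_dd * t * (t - 2 * Kyx) * Cx2 + lam_st * t * (t - 2 * Kyx2) * Cx2_2

t_opt = (lam_dd * Kyx * Cx2 + lam_st * Kyx2 * Cx2_2) / (lam_dd * Cx2 + lam_st * Cx2_2)
h0 = delta / t_opt
print(mse_part(t_opt) <= min(mse_part(0.9 * t_opt), mse_part(1.1 * t_opt)))
```

Because the quadratic's leading coefficient is positive, the stationary point $t_{opt}$ is indeed a minimum, which is what the check confirms.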

Table 1 shows some members of the proposed class of estimators $\hat{Y}^{(h)}_{P(1)}$ of the population mean $\bar{Y}$, obtained by taking $h = 1$ and $h = 2$, each at different values of $c$ and $d$. Many more estimators can be generated from the proposed estimator in (21) simply by taking different values of $h$, $c$ and $d$.


Table 1. Some members of the family of estimators $\hat{Y}^{(h)}_{P(1)}$ under Situation I.

| Estimator | $h$ | $c$ | $d$ |
|---|---|---|---|
| $\hat{Y}^{(1)(1)}_{P(1)} = \bar{y}^*\exp\left(\frac{\bar{x}' - \bar{x}^*}{\bar{x}' + S_x}\right)$ | 1 | 1 | $S_x$ |
| $\hat{Y}^{(1)(2)}_{P(1)} = \bar{y}^*\exp\left(\frac{\bar{x}' - \bar{x}^*}{\bar{x}' + C_x}\right)$ | 1 | 1 | $C_x$ |
| $\hat{Y}^{(1)(3)}_{P(1)} = \bar{y}^*\exp\left(\frac{\bar{x}' - \bar{x}^*}{\bar{x}' + \rho_{yx}}\right)$ | 1 | 1 | $\rho_{yx}$ |
| $\hat{Y}^{(1)(4)}_{P(1)} = \bar{y}^*\exp\left(\frac{C_x\left(\bar{x}' - \bar{x}^*\right)}{C_x\bar{x}' + S_x}\right)$ | 1 | $C_x$ | $S_x$ |
| $\hat{Y}^{(2)(1)}_{P(1)} = \bar{y}^*\exp\left(\frac{\bar{x}' - \bar{x}^*}{\left(\bar{x}' + S_x\right) + \left(\bar{x}^* + S_x\right)}\right)$ | 2 | 1 | $S_x$ |
| $\hat{Y}^{(2)(2)}_{P(1)} = \bar{y}^*\exp\left(\frac{\bar{x}' - \bar{x}^*}{\left(\bar{x}' + C_x\right) + \left(\bar{x}^* + C_x\right)}\right)$ | 2 | 1 | $C_x$ |
| $\hat{Y}^{(2)(3)}_{P(1)} = \bar{y}^*\exp\left(\frac{\bar{x}' - \bar{x}^*}{\left(\bar{x}' + \rho_{yx}\right) + \left(\bar{x}^* + \rho_{yx}\right)}\right)$ | 2 | 1 | $\rho_{yx}$ |
| $\hat{Y}^{(2)(4)}_{P(1)} = \bar{y}^*\exp\left(\frac{C_x\left(\bar{x}' - \bar{x}^*\right)}{\left(C_x\bar{x}' + S_x\right) + \left(C_x\bar{x}^* + S_x\right)}\right)$ | 2 | $C_x$ | $S_x$ |
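All eight members of Table 1 follow from (21) by substituting the $(h, c, d)$ choices. The sketch below generates them in one pass; the sample means and the values of $S_x$, $C_x$ and $\rho_{yx}$ are hypothetical, chosen only to exercise the formula.

```python
import math

# Generating the Table 1 members from (21); every numeric input is hypothetical.
def y_hat_p1(y_star, x_star, xbar_prime, h, c, d):
    den = (c * xbar_prime + d) + (h - 1) * (c * x_star + d)
    return y_star * math.exp(c * (xbar_prime - x_star) / den)

Sx, Cx, rho = 4.0, 0.2, 0.7                       # assumed S_x, C_x, rho_yx
choices = [(1, 1, Sx), (1, 1, Cx), (1, 1, rho), (1, Cx, Sx),
           (2, 1, Sx), (2, 1, Cx), (2, 1, rho), (2, Cx, Sx)]
ests = [y_hat_p1(15.0, 50.0, 52.0, h, c, d) for h, c, d in choices]
# With xbar' > x* every member adjusts y* = 15 upward.
print(len(ests) == 8 and all(e > 15.0 for e in ests))
```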

The mean square error expressions of the estimators in Table 1 are given by:

$$MSE(\hat{Y}^{(1)(i)}_{P(1)}) \cong \bar{Y}^2\left[\lambda' C_y^2 + \lambda''\left\{C_y^2 + A_3 C_x^2\right\} + \lambda^*\left\{C_{y(2)}^2 + A_4 C_{x(2)}^2\right\}\right], \qquad (31)$$

where $A_3 = \delta_i\left(\delta_i - 2K_{yx}\right)$ and $A_4 = \delta_i\left(\delta_i - 2K_{yx(2)}\right)$, $(i = 1, 2, 3, 4)$, and

$$MSE(\hat{Y}^{(2)(i)}_{P(1)}) \cong \bar{Y}^2\left[\lambda' C_y^2 + \lambda''\left\{C_y^2 + A_5 C_x^2\right\} + \lambda^*\left\{C_{y(2)}^2 + A_6 C_{x(2)}^2\right\}\right], \qquad (32)$$

where $A_5 = \frac{\delta_i}{2}\left(\frac{\delta_i}{2} - 2K_{yx}\right)$, $A_6 = \frac{\delta_i}{2}\left(\frac{\delta_i}{2} - 2K_{yx(2)}\right)$, $(i = 1, 2, 3, 4)$, $\delta_1 = \frac{\bar{X}}{\bar{X} + S_x}$, $\delta_2 = \frac{\bar{X}}{\bar{X} + C_x}$, $\delta_3 = \frac{\bar{X}}{\bar{X} + \rho_{yx}}$ and $\delta_4 = \frac{C_x\bar{X}}{C_x\bar{X} + S_x}$.

2.2 Situation II

The population mean $\bar{X}$ is unknown; non-response occurs on the study variable $y$, while there is complete response on the auxiliary variable $x$. The estimator is given by:

$$\hat{Y}^{(g)}_{P(2)} = \bar{y}^*\exp\left(\frac{c\left(\bar{x}' - \bar{x}\right)}{\left(c\bar{x}' + d\right) + (g-1)\left(c\bar{x} + d\right)}\right), \qquad (33)$$

where $g > 0$.

Remarks:

(i) When $g = 0$, the estimator $\hat{Y}^{(g)}_{P(2)}$ reduces to

$$\hat{Y}^{(0)}_{P(2)} = \bar{y}^*\exp(1), \qquad (34)$$

which is a biased estimator with larger MSE than the usual estimator $\bar{y}^*$.

(ii) When $g = 1$, the estimator $\hat{Y}^{(g)}_{P(2)}$ reduces to

$$\hat{Y}^{(1)}_{P(2)} = \bar{y}^*\exp\left(\frac{c\left(\bar{x}' - \bar{x}\right)}{c\bar{x}' + d}\right). \qquad (35)$$

(iii) When $g = 2$, the estimator $\hat{Y}^{(g)}_{P(2)}$ reduces to the estimator

$$\hat{Y}^{(2)}_{P(2)} = \bar{y}^*\exp\left(\frac{c\left(\bar{x}' - \bar{x}\right)}{\left(c\bar{x}' + d\right) + \left(c\bar{x} + d\right)}\right). \qquad (36)$$

To obtain the bias and mean square error of $\hat{Y}^{(g)}_{P(2)}$, expressing it in terms of the $\varepsilon$'s, we have:

$$\hat{Y}^{(g)}_{P(2)} = \bar{Y}\left(1 + \varepsilon_0\right)\exp\left(\frac{\delta\left(\varepsilon_1' - \varepsilon_2\right)}{g\left\{1 + \frac{\delta}{g}\left(\varepsilon_1' + (g-1)\varepsilon_2\right)\right\}}\right). \qquad (37)$$

Solving (37), neglecting terms of $\varepsilon$'s having power greater than two, we have:

$$\hat{Y}^{(g)}_{P(2)} - \bar{Y} \cong \bar{Y}\left[\varepsilon_0 + \frac{\delta}{g}\left(\varepsilon_1' - \varepsilon_2\right) + \frac{\delta}{g}\left(\varepsilon_0\varepsilon_1' - \varepsilon_0\varepsilon_2\right) + \frac{\delta^2}{2g^2}\left(\varepsilon_1' - \varepsilon_2\right)^2 - \frac{\delta^2}{g^2}\left(\varepsilon_1'^2 + (g-2)\varepsilon_1'\varepsilon_2 - (g-1)\varepsilon_2^2\right)\right]. \qquad (38)$$


The bias of $\hat{Y}^{(g)}_{P(2)}$, to the first order of approximation, is given by:

$$B(\hat{Y}^{(g)}_{P(2)}) \cong \bar{Y}\left[\frac{\lambda''\delta}{g}\left\{\frac{\delta}{g}\left(g - \frac{1}{2}\right) - K_{yx}\right\}C_x^2\right]. \qquad (39)$$

Squaring both sides of (38) and neglecting terms of ε 's involving power greater than two, we have:

$$\left(\hat{Y}^{(g)}_{P(2)} - \bar{Y}\right)^2 \cong \bar{Y}^2\left[\varepsilon_0^2 + \frac{\delta^2}{g^2}\left(\varepsilon_1'^2 + \varepsilon_2^2 - 2\varepsilon_1'\varepsilon_2\right) + \frac{2\delta}{g}\left(\varepsilon_0\varepsilon_1' - \varepsilon_0\varepsilon_2\right)\right]. \qquad (40)$$

Using (40), the mean square error of $\hat{Y}^{(g)}_{P(2)}$ to the first degree of approximation is given by:

$$MSE(\hat{Y}^{(g)}_{P(2)}) \cong \bar{Y}^2\left[\lambda' C_y^2 + \lambda''\left\{C_y^2 + \frac{\delta}{g}\left(\frac{\delta}{g} - 2K_{yx}\right)C_x^2\right\} + \lambda^* C_{y(2)}^2\right]. \qquad (41)$$

The $MSE(\hat{Y}^{(g)}_{P(2)})$ is minimum when $\frac{\delta}{g} = K_{yx}$, i.e. $g = \frac{\delta}{K_{yx}} = g_0$ (say).

Thus the resulting minimum MSE of $\hat{Y}^{(g)}_{P(2)}$ is given by:

$$MSE(\hat{Y}^{(g)}_{P(2)})_{\min} \cong \bar{Y}^2\left[\lambda' C_y^2 + \lambda^* C_{y(2)}^2 + \lambda'' C_y^2\left(1 - \rho_{yx}^2\right)\right]. \qquad (42)$$

In Table 2, for $g = 1$ and $g = 2$, we propose a family of estimators $\hat{Y}^{(g)}_{P(2)}$ of the population mean $\bar{Y}$ obtained at different choices of $c$ and $d$. Many more estimators can be generated from the proposed estimator in (33) simply by putting different values of $g$, $c$ and $d$. Using Table 2, the MSEs of $\hat{Y}^{(1)(i)}_{P(2)}$ and $\hat{Y}^{(2)(i)}_{P(2)}$, $(i = 1, 2, 3, 4)$, to the first degree of approximation, are given by:

$$MSE(\hat{Y}^{(1)(i)}_{P(2)}) \cong \bar{Y}^2\left[\lambda' C_y^2 + \lambda''\left\{C_y^2 + A_3 C_x^2\right\} + \lambda^* C_{y(2)}^2\right], \qquad (43)$$

and

$$MSE(\hat{Y}^{(2)(i)}_{P(2)}) \cong \bar{Y}^2\left[\lambda' C_y^2 + \lambda''\left\{C_y^2 + A_5 C_x^2\right\} + \lambda^* C_{y(2)}^2\right]. \qquad (44)$$
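At $g = g_0 = \delta/K_{yx}$ the bracketed adjustment in (41) collapses to the regression-type bound in (42). This identity can be verified numerically; all moment values in the sketch are assumed for illustration.

```python
import math

# Check that substituting g0 = delta/K_yx into (41) reproduces (42).
# All moment values are assumed for illustration.
lam_d, lam_dd, lam_st = 0.009, 0.004, 0.002   # lambda', lambda'', lambda*
Cy2, Cx2, Cy2_2 = 0.16, 0.09, 0.10            # C_y^2, C_x^2, C_y(2)^2
rho = 0.8                                      # rho_yx
Kyx = rho * math.sqrt(Cy2 / Cx2)               # K_yx = rho_yx * C_y / C_x
delta = 0.9

g0 = delta / Kyx
t = delta / g0                                 # equals K_yx at the optimum
mse_41 = lam_d * Cy2 + lam_dd * (Cy2 + t * (t - 2 * Kyx) * Cx2) + lam_st * Cy2_2
mse_42 = lam_d * Cy2 + lam_st * Cy2_2 + lam_dd * Cy2 * (1 - rho ** 2)
print(abs(mse_41 - mse_42) < 1e-12)
```

The algebra behind the check: at $t = K_{yx}$ the term $t(t - 2K_{yx})C_x^2$ equals $-K_{yx}^2 C_x^2 = -\rho_{yx}^2 C_y^2$, which turns $\lambda'' C_y^2$ into $\lambda'' C_y^2(1 - \rho_{yx}^2)$.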


Table 2. Some members of a family of estimators $\hat{Y}^{(g)}_{P(2)}$ under Situation II.

| Estimator | $g$ | $c$ | $d$ |
|---|---|---|---|
| $\hat{Y}^{(1)(1)}_{P(2)} = \bar{y}^*\exp\left(\frac{\bar{x}' - \bar{x}}{\bar{x}' + S_x}\right)$ | 1 | 1 | $S_x$ |
| $\hat{Y}^{(1)(2)}_{P(2)} = \bar{y}^*\exp\left(\frac{\bar{x}' - \bar{x}}{\bar{x}' + C_x}\right)$ | 1 | 1 | $C_x$ |
| $\hat{Y}^{(1)(3)}_{P(2)} = \bar{y}^*\exp\left(\frac{\bar{x}' - \bar{x}}{\bar{x}' + \rho_{yx}}\right)$ | 1 | 1 | $\rho_{yx}$ |
| $\hat{Y}^{(1)(4)}_{P(2)} = \bar{y}^*\exp\left(\frac{C_x\left(\bar{x}' - \bar{x}\right)}{C_x\bar{x}' + S_x}\right)$ | 1 | $C_x$ | $S_x$ |
| $\hat{Y}^{(2)(1)}_{P(2)} = \bar{y}^*\exp\left(\frac{\bar{x}' - \bar{x}}{\left(\bar{x}' + S_x\right) + \left(\bar{x} + S_x\right)}\right)$ | 2 | 1 | $S_x$ |
| $\hat{Y}^{(2)(2)}_{P(2)} = \bar{y}^*\exp\left(\frac{\bar{x}' - \bar{x}}{\left(\bar{x}' + C_x\right) + \left(\bar{x} + C_x\right)}\right)$ | 2 | 1 | $C_x$ |
| $\hat{Y}^{(2)(3)}_{P(2)} = \bar{y}^*\exp\left(\frac{\bar{x}' - \bar{x}}{\left(\bar{x}' + \rho_{yx}\right) + \left(\bar{x} + \rho_{yx}\right)}\right)$ | 2 | 1 | $\rho_{yx}$ |
| $\hat{Y}^{(2)(4)}_{P(2)} = \bar{y}^*\exp\left(\frac{C_x\left(\bar{x}' - \bar{x}\right)}{\left(C_x\bar{x}' + S_x\right) + \left(C_x\bar{x} + S_x\right)}\right)$ | 2 | $C_x$ | $S_x$ |

3. Efficiency comparisons

3.1 Situation I

(a) When the constant $h$ is unknown:

To compare the estimator $\hat{Y}^{(h)}_{P(1)}$ with the usual estimators $\bar{y}^*$, $\hat{Y}_{R(1)}$ and $\hat{Y}_{Exp(1)}$ when the value of the constant $h$ does not coincide with its optimum value $h_0$, we have:

(i) $Var(\bar{y}^*) - MSE(\hat{Y}^{(h)}_{P(1)}) > 0$ if

$$h > \max\left\{\frac{\delta}{2K_{yx}}, \frac{\delta}{2K_{yx(2)}}\right\}.$$

(ii) $MSE(\hat{Y}_{R(1)}) - MSE(\hat{Y}^{(h)}_{P(1)}) > 0$ if

$$\min\left\{\delta, \frac{\delta}{2K_{yx} - 1}, \frac{\delta}{2K_{yx(2)} - 1}\right\} < h < \max\left\{\delta, \frac{\delta}{2K_{yx} - 1}, \frac{\delta}{2K_{yx(2)} - 1}\right\}.$$

(iii) $MSE(\hat{Y}_{Exp(1)}) - MSE(\hat{Y}^{(h)}_{P(1)}) > 0$ if

$$\min\left\{2\delta, \frac{2\delta}{4K_{yx} - 1}, \frac{2\delta}{4K_{yx(2)} - 1}\right\} < h < \max\left\{2\delta, \frac{2\delta}{4K_{yx} - 1}, \frac{2\delta}{4K_{yx(2)} - 1}\right\}.$$

(b) When the constant $h$ is known:

(i) $Var(\bar{y}^*) - MSE(\hat{Y}^{(h)}_{P(1)})_{\min} > 0$ if

$$\frac{\left(\lambda'' K_{yx} C_x^2 + \lambda^* K_{yx(2)} C_{x(2)}^2\right)^2}{\lambda'' C_x^2 + \lambda^* C_{x(2)}^2} > 0.$$

(ii) $MSE(\hat{Y}_{R(1)}) - MSE(\hat{Y}^{(h)}_{P(1)})_{\min} > 0$ if

$$\frac{\left(\lambda'' K_{yx} C_x^2 + \lambda^* K_{yx(2)} C_{x(2)}^2\right)^2}{\lambda'' C_x^2 + \lambda^* C_{x(2)}^2} + \lambda''\left(1 - 2K_{yx}\right)C_x^2 + \lambda^*\left(1 - 2K_{yx(2)}\right)C_{x(2)}^2 > 0 \quad \text{and} \quad K_{yx(2)} < \frac{1}{2}.$$

(iii) $MSE(\hat{Y}_{Exp(1)}) - MSE(\hat{Y}^{(h)}_{P(1)})_{\min} > 0$ if

$$\frac{\left(\lambda'' K_{yx} C_x^2 + \lambda^* K_{yx(2)} C_{x(2)}^2\right)^2}{\lambda'' C_x^2 + \lambda^* C_{x(2)}^2} + \lambda''\left(\frac{1}{4} - K_{yx}\right)C_x^2 + \lambda^*\left(\frac{1}{4} - K_{yx(2)}\right)C_{x(2)}^2 > 0 \quad \text{and} \quad K_{yx(2)} < \frac{1}{4}.$$

(iv) $MSE(\hat{Y}_{Reg(1)}) - MSE(\hat{Y}^{(h)}_{P(1)})_{\min} > 0$ if

$$\frac{\left(\lambda'' K_{yx} C_x^2 + \lambda^* K_{yx(2)} C_{x(2)}^2\right)^2}{\lambda'' C_x^2 + \lambda^* C_{x(2)}^2} - \lambda''\rho_{yx}^2 C_y^2 > 0 \quad \text{and} \quad K_{yx} > 2K_{yx(2)}.$$

3.2 Situation II

(a) When the constant $g$ is unknown:

To compare the estimator $\hat{Y}^{(g)}_{P(2)}$ with the usual estimators $\bar{y}^*$, $\hat{Y}_{R(2)}$ and $\hat{Y}_{Exp(2)}$ when the value of the constant $g$ does not coincide with its optimum value $g_0$, we have:

(i) $Var(\bar{y}^*) - MSE(\hat{Y}^{(g)}_{P(2)}) > 0$ if $g > \frac{\delta}{2K_{yx}}$.

(ii) $MSE(\hat{Y}_{R(2)}) - MSE(\hat{Y}^{(g)}_{P(2)}) > 0$ if
