e-ISSN 2070-5948, DOI 10.1285/i20705948v6n1p1
© 2013 Università del Salento – http://siba-ese.unile.it/index.php/ejasa/index
SOME MODIFIED EXPONENTIAL RATIO-TYPE ESTIMATORS IN THE PRESENCE OF NON-RESPONSE UNDER TWO-PHASE SAMPLING SCHEME
Javid Shabbir*, Nasir Saeed Khan
Department of Statistics, Quaid-i-Azam University, Islamabad, Pakistan.
Received 11 February 2011; Accepted 02 April 2012; Available online 26 April 2013
Abstract: This paper addresses the problem of estimating the population mean using information on an auxiliary variable in the presence of non-response under two-phase sampling. On the lines of Bahl and Tuteja [1] and Upadhyaya et al. [22], a class of modified exponential ratio-type estimators using a single auxiliary variable is proposed under two different situations of non-response on the study variable. The expressions for the bias and mean square error (MSE) of the proposed class of estimators are derived. Efficiency comparisons of the proposed class of estimators with the usual unbiased estimator of Hansen and Hurwitz [3] and other existing estimators are made. An empirical study has been carried out to judge the performance of the proposed estimators.
Keywords: Auxiliary variable, bias, mean square error, non-response, two-phase sampling, exponential-ratio type estimator.
1. Introduction
Consider a finite population of size $N$. We draw a sample of size $n$ from the population using the simple random sampling without replacement (SRSWOR) sampling scheme. Let $y_i$ and $x_i$ be the observations on the study variable $y$ and the auxiliary variable $x$ respectively. Let $\bar{y}=\frac{1}{n}\sum_{i=1}^{n}y_i$ and $\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_i$ be the sample means corresponding to the population means $\bar{Y}=\frac{1}{N}\sum_{i=1}^{N}y_i$ and $\bar{X}=\frac{1}{N}\sum_{i=1}^{N}x_i$ respectively. When information on $\bar{X}$ is unknown, double sampling (two-phase sampling) is suitable for estimating the population mean. In the first phase we select a sample of size $n'$ from the population by SRSWOR to observe $x$. In the second phase, we select a sample of size $n$ ($n<n'$) from the $n'$ first-phase units, again by SRSWOR. Non-response occurs in the second phase, in which $n_1$ units respond and $n_2$ do not. From the $n_2$ non-respondents, a subsample of $r=n_2/k$ ($k>1$) units is selected, where $k$ is the inverse sampling rate at the second-phase sample of size $n$.

* Email: [email protected]
Sometimes it may not be possible to collect complete information on all the units selected in the sample, due to non-response. For estimating the population mean in sample surveys when some observations are missing due to non-response, Hansen and Hurwitz [3] suggested the estimator
$$\bar{y}^{*}=w_1\bar{y}_1+w_2\bar{y}_{2r},$$
where $\bar{y}_1=\frac{1}{n_1}\sum_{i=1}^{n_1}y_i$, $\bar{y}_{2r}=\frac{1}{r}\sum_{i=1}^{r}y_i$, $w_1=\frac{n_1}{n}$ and $w_2=\frac{n_2}{n}$.
The variance of $\bar{y}^{*}$ is given by:
$$Var(\bar{y}^{*})=\left(\frac{1-f}{n}\right)S_y^{2}+\frac{W_2(k-1)}{n}S_{y(2)}^{2}, \qquad (1)$$
where $f=\frac{n}{N}$, $W_2=\frac{N_2}{N}$, $S_y^{2}=\frac{1}{N-1}\sum_{i=1}^{N}\left(y_i-\bar{Y}\right)^{2}$ and $S_{y(2)}^{2}=\frac{1}{N_2-1}\sum_{i=1}^{N_2}\left(y_i-\bar{Y}_2\right)^{2}$.
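As a quick numerical illustration of the Hansen and Hurwitz estimator and of variance formula (1), the following is a minimal Python sketch; the function names and the example inputs are ours, not from the paper:

```python
def hansen_hurwitz_mean(y_resp, y_sub, n):
    """Hansen-Hurwitz estimator ybar* = w1*ybar1 + w2*ybar2r.

    y_resp: observations on the n1 responding units;
    y_sub:  observations on the r units subsampled from the n2 non-respondents;
    n:      total second-phase sample size, n = n1 + n2.
    """
    n1 = len(y_resp)
    n2 = n - n1
    ybar1 = sum(y_resp) / n1
    ybar2r = sum(y_sub) / len(y_sub)
    return (n1 / n) * ybar1 + (n2 / n) * ybar2r


def hh_variance(S2_y, S2_y2, N, n, W2, k):
    """Variance (1): ((1 - f)/n) S_y^2 + (W2 (k - 1)/n) S_{y(2)}^2, with f = n/N."""
    f = n / N
    return (1 - f) / n * S2_y + W2 * (k - 1) / n * S2_y2
```

The estimator weights the respondent mean and the subsample mean by $n_1/n$ and $n_2/n$, so it is design-unbiased even though only $r$ of the $n_2$ non-respondents are re-contacted.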
It is well known that in estimating the population mean, sample survey experts use the auxiliary information to improve the precision of the estimates.
Similar to $\bar{y}^{*}$, one can write $\bar{x}^{*}=w_1\bar{x}_1+w_2\bar{x}_{2r}$, where $\bar{x}_1=\frac{1}{n_1}\sum_{i=1}^{n_1}x_i$ and $\bar{x}_{2r}=\frac{1}{r}\sum_{i=1}^{r}x_i$.
The variance of $\bar{x}^{*}$ is given by:
$$Var(\bar{x}^{*})=\left(\frac{1-f}{n}\right)S_x^{2}+\frac{W_2(k-1)}{n}S_{x(2)}^{2}, \qquad (2)$$
where $S_x^{2}=\frac{1}{N-1}\sum_{i=1}^{N}\left(x_i-\bar{X}\right)^{2}$ and $S_{x(2)}^{2}=\frac{1}{N_2-1}\sum_{i=1}^{N_2}\left(x_i-\bar{X}_2\right)^{2}$.
The auxiliary information can be used both at the design and estimation stages to compensate for units selected for the sample that fail to provide adequate responses and for population units missing from the sampling frame. Rao ([10], [11]), Khare and Srivastava ([4], [5], [6]), Okafor and Lee [9], Särndal and Lundström [12], Tabasum and Khan ([20], [21]), Singh and Kumar ([13], [14], [15], [16], [17], [18]) and Singh et al. [19] have suggested estimators for the population mean $\bar{Y}$ of the study variable $y$ using auxiliary information in the presence of non-response, and have studied their properties.
When there is non-response on the study variable $y$ as well as on the auxiliary variable $x$, Cochran [2] suggested the conventional two-phase ratio and regression estimators of the population mean $\bar{Y}$, defined as:
$$\hat{\bar{Y}}_{R(1)}=\bar{y}^{*}\frac{\bar{x}'}{\bar{x}^{*}}, \qquad (3)$$
and
$$\hat{\bar{Y}}_{Reg(1)}=\bar{y}^{*}+b_{yx}^{*}\left(\bar{x}'-\bar{x}^{*}\right), \qquad (4)$$
where $b_{yx}^{*}=s_{xy}^{*}/s_x^{*2}$ is the sample regression coefficient, estimating the population regression coefficient $\beta_{yx}=S_{xy}/S_x^{2}$. Here
$$s_{xy}^{*}=\frac{1}{n-1}\left(\sum_{i=1}^{n_1}x_iy_i+k\sum_{i=1}^{r}x_iy_i-n\bar{x}^{*}\bar{y}^{*}\right)$$
and
$$s_x^{*2}=\frac{1}{n-1}\left(\sum_{i=1}^{n_1}x_i^{2}+k\sum_{i=1}^{r}x_i^{2}-n\bar{x}^{*2}\right)$$
are the sample covariance and sample variance respectively.
Recently Singh and Kumar [17] suggested the following estimator on the lines of Bahl and Tuteja [1]:
$$\hat{\bar{Y}}_{Exp(1)}=\bar{y}^{*}\exp\left(\frac{\bar{x}'-\bar{x}^{*}}{\bar{x}'+\bar{x}^{*}}\right). \qquad (5)$$
To the first degree of approximation, the expressions for the bias and mean square error of $\hat{\bar{Y}}_{R(1)}$, $\hat{\bar{Y}}_{Reg(1)}$ and $\hat{\bar{Y}}_{Exp(1)}$ are given by:
$$B(\hat{\bar{Y}}_{R(1)})\cong\bar{Y}\left[\lambda''\left(1-K_{yx}\right)C_x^{2}+\lambda^{*}\left(1-K_{yx(2)}\right)C_{x(2)}^{2}\right], \qquad (6)$$
$$B(\hat{\bar{Y}}_{Reg(1)})\cong\frac{N^{2}\beta_{yx}}{(N-1)(N-2)}\left[\lambda''\left(\frac{\mu_{21}}{\mu_{11}}-\frac{\mu_{30}}{\mu_{20}}\right)+\lambda^{*}\left(\frac{\mu_{21(2)}}{\mu_{11(2)}}-\frac{\mu_{30(2)}}{\mu_{20(2)}}\right)\right], \qquad (7)$$
$$B(\hat{\bar{Y}}_{Exp(1)})\cong\bar{Y}\left[\frac{\lambda''}{2}\left(\frac{3}{4}-K_{yx}\right)C_x^{2}+\frac{\lambda^{*}}{2}\left(\frac{3}{4}-K_{yx(2)}\right)C_{x(2)}^{2}\right], \qquad (8)$$
$$MSE(\hat{\bar{Y}}_{R(1)})\cong\bar{Y}^{2}\left[\lambda' C_y^{2}+\lambda''\left\{C_y^{2}+\left(1-2K_{yx}\right)C_x^{2}\right\}+\lambda^{*}\left\{C_{y(2)}^{2}+\left(1-2K_{yx(2)}\right)C_{x(2)}^{2}\right\}\right], \qquad (9)$$
$$MSE(\hat{\bar{Y}}_{Reg(1)})\cong\bar{Y}^{2}\left[\lambda' C_y^{2}+\lambda''\left(1-\rho_{yx}^{2}\right)C_y^{2}+\lambda^{*}\left\{C_{y(2)}^{2}+K_{yx}\left(K_{yx}-2K_{yx(2)}\right)C_{x(2)}^{2}\right\}\right], \qquad (10)$$
$$MSE(\hat{\bar{Y}}_{Exp(1)})\cong\bar{Y}^{2}\left[\lambda' C_y^{2}+\lambda''\left\{C_y^{2}+\left(\frac{1}{4}-K_{yx}\right)C_x^{2}\right\}+\lambda^{*}\left\{C_{y(2)}^{2}+\left(\frac{1}{4}-K_{yx(2)}\right)C_{x(2)}^{2}\right\}\right], \qquad (11)$$
where $K_{yx}=\frac{\beta_{yx}}{R}=\rho_{yx}\frac{C_y}{C_x}$, $K_{yx(2)}=\frac{\beta_{yx(2)}}{R}=\rho_{yx(2)}\frac{C_{y(2)}}{C_{x(2)}}$, $\beta_{yx}=\frac{S_{xy}}{S_x^{2}}$, $\beta_{yx(2)}=\frac{S_{xy(2)}}{S_{x(2)}^{2}}$, $S_{xy}=\frac{1}{N-1}\sum_{i=1}^{N}\left(x_i-\bar{X}\right)\left(y_i-\bar{Y}\right)$, $S_{xy(2)}=\frac{1}{N_2-1}\sum_{i=1}^{N_2}\left(x_i-\bar{X}_2\right)\left(y_i-\bar{Y}_2\right)$, $C_y=\frac{S_y}{\bar{Y}}$, $C_{y(2)}=\frac{S_{y(2)}}{\bar{Y}}$, $C_x=\frac{S_x}{\bar{X}}$, $C_{x(2)}=\frac{S_{x(2)}}{\bar{X}}$, $\rho_{yx}=\frac{S_{xy}}{S_x S_y}$, $\rho_{yx(2)}=\frac{S_{xy(2)}}{S_{x(2)}S_{y(2)}}$, $\lambda=\left(\frac{1-f}{n}\right)$, $\lambda'=\left(\frac{1-f'}{n'}\right)$, $\lambda''=\left(\lambda-\lambda'\right)$, $\lambda^{*}=\frac{W_2(k-1)}{n}$, $R=\frac{\bar{Y}}{\bar{X}}$, $f=\frac{n}{N}$, $f'=\frac{n'}{N}$, $\mu_{vs}=\frac{1}{N-1}\sum_{i=1}^{N}\left(x_i-\bar{X}\right)^{v}\left(y_i-\bar{Y}\right)^{s}$ and $\mu_{vs(2)}=\frac{1}{N_2-1}\sum_{i=1}^{N_2}\left(x_i-\bar{X}_2\right)^{v}\left(y_i-\bar{Y}_2\right)^{s}$, $(v,s)$ being non-negative integers.
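The MSE expressions (9)-(11) are straightforward to evaluate once the design constants and population parameters are fixed. The following is a hedged Python sketch; all function names and the illustrative inputs in the comments are ours, not from the paper:

```python
def design_constants(N, n_prime, n, W2, k):
    """Return (lam, lam', lam'', lam*) of the text:
    lam = (1-f)/n, lam' = (1-f')/n', lam'' = lam - lam', lam* = W2 (k-1)/n."""
    lam = (1 - n / N) / n
    lam_p = (1 - n_prime / N) / n_prime
    return lam, lam_p, lam - lam_p, W2 * (k - 1) / n


def mse_ratio1(Ybar, lam_p, lam_pp, lam_s, Cy, Cy2, Cx, Cx2, Kyx, Kyx2):
    """MSE (9) of the two-phase ratio estimator, Situation I."""
    return Ybar**2 * (lam_p * Cy**2
                      + lam_pp * (Cy**2 + (1 - 2 * Kyx) * Cx**2)
                      + lam_s * (Cy2**2 + (1 - 2 * Kyx2) * Cx2**2))


def mse_exp1(Ybar, lam_p, lam_pp, lam_s, Cy, Cy2, Cx, Cx2, Kyx, Kyx2):
    """MSE (11) of the exponential-ratio estimator, Situation I."""
    return Ybar**2 * (lam_p * Cy**2
                      + lam_pp * (Cy**2 + (0.25 - Kyx) * Cx**2)
                      + lam_s * (Cy2**2 + (0.25 - Kyx2) * Cx2**2))
```

Comparing the two functions over a grid of $K_{yx}$ values makes the usual trade-off visible: the ratio form is preferable when $K_{yx}$ is close to 1, the exponential form when $K_{yx}$ is moderate.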
When there is incomplete information on the study variable $y$ and complete information on the auxiliary variable $x$, the conventional two-phase ratio, regression and exponential-ratio type estimators are respectively defined by:
$$\hat{\bar{Y}}_{R(2)}=\bar{y}^{*}\frac{\bar{x}'}{\bar{x}}, \qquad (12)$$
and
$$\hat{\bar{Y}}_{Reg(2)}=\bar{y}^{*}+b_{yx}^{**}\left(\bar{x}'-\bar{x}\right), \qquad (13)$$
where $b_{yx}^{**}=s_{xy}^{*}/s_x^{2}$ is the sample regression coefficient, estimating the population regression coefficient $\beta_{yx}=S_{xy}/S_x^{2}$, and $s_x^{2}=\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^{2}$.
Singh and Kumar [14] defined the following exponential ratio-type estimator:
$$\hat{\bar{Y}}_{Exp(2)}=\bar{y}^{*}\exp\left(\frac{\bar{x}'-\bar{x}}{\bar{x}'+\bar{x}}\right). \qquad (14)$$
To the first degree of approximation, the bias and mean square error of $\hat{\bar{Y}}_{R(2)}$, $\hat{\bar{Y}}_{Reg(2)}$ and $\hat{\bar{Y}}_{Exp(2)}$ are given by:
$$B(\hat{\bar{Y}}_{R(2)})\cong\bar{Y}\lambda''\left(1-K_{yx}\right)C_x^{2}, \qquad (15)$$
$$B(\hat{\bar{Y}}_{Reg(2)})\cong\frac{N^{2}\beta_{yx}}{(N-1)(N-2)}\lambda''\left(\frac{\mu_{21}}{\mu_{11}}-\frac{\mu_{30}}{\mu_{20}}\right), \qquad (16)$$
$$B(\hat{\bar{Y}}_{Exp(2)})\cong\frac{\lambda''\bar{Y}}{2}\left(\frac{3}{4}-K_{yx}\right)C_x^{2}, \qquad (17)$$
$$MSE(\hat{\bar{Y}}_{R(2)})\cong\bar{Y}^{2}\left[\lambda' C_y^{2}+\lambda''\left\{C_y^{2}+\left(1-2K_{yx}\right)C_x^{2}\right\}+\lambda^{*}C_{y(2)}^{2}\right], \qquad (18)$$
$$MSE(\hat{\bar{Y}}_{Reg(2)})\cong\bar{Y}^{2}\left[\lambda' C_y^{2}+\lambda''\left(1-\rho_{yx}^{2}\right)C_y^{2}+\lambda^{*}C_{y(2)}^{2}\right], \qquad (19)$$
$$MSE(\hat{\bar{Y}}_{Exp(2)})\cong\bar{Y}^{2}\left[\lambda' C_y^{2}+\lambda''\left\{C_y^{2}+\left(\frac{1}{4}-K_{yx}\right)C_x^{2}\right\}+\lambda^{*}C_{y(2)}^{2}\right]. \qquad (20)$$
2. Proposed exponential-ratio type estimator
We propose the following modified exponential-ratio type estimator for estimating the population mean $\bar{Y}$ under the two-phase sampling scheme, in two different situations.
2.1 Situation I
The population mean $\bar{X}$ is unknown, and non-response occurs on the study variable $y$ and the auxiliary variable $x$. On the lines of Bahl and Tuteja [1] and Upadhyaya et al. [22], we propose the following estimator:
$$\hat{\bar{Y}}_{P(1)}^{(h)}=\bar{y}^{*}\exp\left(\frac{c\left(\bar{x}'-\bar{x}^{*}\right)}{\left(c\bar{x}'+d\right)+\left(h-1\right)\left(c\bar{x}^{*}+d\right)}\right), \qquad (21)$$
where $h\,(>0)$, $c\,(\neq 0)$ and $d$ are constants, which can be the coefficient of variation $C_x$, the correlation coefficient $\rho_{yx}$ or the standard deviation $S_x$.
Remarks:
(i) When $h=0$, the estimator $\hat{\bar{Y}}_{P(1)}^{(h)}$ reduces to
$$\hat{\bar{Y}}_{P(1)}^{(0)}=\bar{y}^{*}\exp(1), \qquad (22)$$
which is a biased estimator with a larger MSE than the usual estimator $\bar{y}^{*}$, since the constant factor $\exp(1)>1$ acts multiplicatively on $\bar{y}^{*}$.
(ii) When $h=1$, the estimator $\hat{\bar{Y}}_{P(1)}^{(h)}$ reduces to
$$\hat{\bar{Y}}_{P(1)}^{(1)}=\bar{y}^{*}\exp\left(\frac{c\left(\bar{x}'-\bar{x}^{*}\right)}{c\bar{x}'+d}\right). \qquad (23)$$
(iii) When $h=2$, the estimator $\hat{\bar{Y}}_{P(1)}^{(h)}$ reduces to the estimator
$$\hat{\bar{Y}}_{P(1)}^{(2)}=\bar{y}^{*}\exp\left(\frac{c\left(\bar{x}'-\bar{x}^{*}\right)}{\left(c\bar{x}'+d\right)+\left(c\bar{x}^{*}+d\right)}\right). \qquad (24)$$
To obtain the bias and mean square error of the estimator $\hat{\bar{Y}}_{P(1)}^{(h)}$, we define
$\bar{y}^{*}=\bar{Y}\left(1+\varepsilon_0\right)$, $\bar{x}^{*}=\bar{X}\left(1+\varepsilon_1\right)$, $\bar{x}'=\bar{X}\left(1+\varepsilon_1'\right)$, $\bar{x}=\bar{X}\left(1+\varepsilon_2\right)$, such that $E(\varepsilon_i)=0$ $(i=0,1,2)$ and $E(\varepsilon_1')=0$,
$$E(\varepsilon_0^{2})=\lambda C_y^{2}+\lambda^{*}C_{y(2)}^{2},\quad E(\varepsilon_1^{2})=\lambda C_x^{2}+\lambda^{*}C_{x(2)}^{2},\quad E(\varepsilon_1'^{2})=\lambda' C_x^{2},\quad E(\varepsilon_2^{2})=\lambda C_x^{2},$$
$$E(\varepsilon_0\varepsilon_1)=\lambda\rho_{yx}C_yC_x+\lambda^{*}\rho_{yx(2)}C_{y(2)}C_{x(2)},\quad E(\varepsilon_0\varepsilon_1')=\lambda'\rho_{yx}C_yC_x,\quad E(\varepsilon_0\varepsilon_2)=\lambda\rho_{yx}C_yC_x,$$
$$E(\varepsilon_1\varepsilon_1')=\lambda' C_x^{2},\quad E(\varepsilon_1\varepsilon_2)=\lambda C_x^{2}\quad\text{and}\quad E(\varepsilon_1'\varepsilon_2)=\lambda' C_x^{2}.$$
Expressing the estimator $\hat{\bar{Y}}_{P(1)}^{(h)}$ given in (21) in terms of the $\varepsilon$'s, we have:
$$\hat{\bar{Y}}_{P(1)}^{(h)}=\bar{Y}\left(1+\varepsilon_0\right)\exp\left(\frac{\varepsilon_1'-\varepsilon_1}{\frac{h}{\delta}+\varepsilon_1'+\left(h-1\right)\varepsilon_1}\right), \qquad (25)$$
where $\delta=\frac{c\bar{X}}{c\bar{X}+d}$.
Solving (25) and neglecting terms of the $\varepsilon$'s having power greater than two, we have:
$$\hat{\bar{Y}}_{P(1)}^{(h)}-\bar{Y}\cong\bar{Y}\left[\varepsilon_0+\frac{\delta}{h}\left(\varepsilon_1'-\varepsilon_1\right)+\frac{\delta}{h}\left(\varepsilon_0\varepsilon_1'-\varepsilon_0\varepsilon_1\right)+\frac{\delta^{2}}{2h^{2}}\left(\varepsilon_1'-\varepsilon_1\right)^{2}-\frac{\delta^{2}}{h^{2}}\left(\varepsilon_1'^{2}+\left(h-2\right)\varepsilon_1\varepsilon_1'-\left(h-1\right)\varepsilon_1^{2}\right)\right]. \qquad (26)$$
Taking expectations on both sides of (26), we get the bias of $\hat{\bar{Y}}_{P(1)}^{(h)}$:
$$B(\hat{\bar{Y}}_{P(1)}^{(h)})\cong\bar{Y}\left[\frac{\lambda''\delta}{h}\left\{\left(1-\frac{1}{2h}\right)\delta-K_{yx}\right\}C_x^{2}+\frac{\lambda^{*}\delta}{h}\left\{\left(1-\frac{1}{2h}\right)\delta-K_{yx(2)}\right\}C_{x(2)}^{2}\right]. \qquad (27)$$
Squaring both sides of (26) and neglecting terms of the $\varepsilon$'s involving power greater than two, we have:
$$\left(\hat{\bar{Y}}_{P(1)}^{(h)}-\bar{Y}\right)^{2}\cong\bar{Y}^{2}\left[\varepsilon_0^{2}+\frac{\delta^{2}}{h^{2}}\left(\varepsilon_1'-\varepsilon_1\right)^{2}+\frac{2\delta}{h}\left(\varepsilon_0\varepsilon_1'-\varepsilon_0\varepsilon_1\right)\right]. \qquad (28)$$
Using (28), the MSE of $\hat{\bar{Y}}_{P(1)}^{(h)}$ to the first degree of approximation is given by:
$$MSE(\hat{\bar{Y}}_{P(1)}^{(h)})\cong\bar{Y}^{2}\left[\lambda' C_y^{2}+\lambda''\left\{C_y^{2}+A_1C_x^{2}\right\}+\lambda^{*}\left\{C_{y(2)}^{2}+A_2C_{x(2)}^{2}\right\}\right], \qquad (29)$$
where $A_1=\frac{\delta}{h}\left(\frac{\delta}{h}-2K_{yx}\right)$ and $A_2=\frac{\delta}{h}\left(\frac{\delta}{h}-2K_{yx(2)}\right)$.
The $MSE(\hat{\bar{Y}}_{P(1)}^{(h)})$ is minimum when
$$h=\frac{\delta\left(\lambda'' C_x^{2}+\lambda^{*}C_{x(2)}^{2}\right)}{\lambda'' K_{yx}C_x^{2}+\lambda^{*}K_{yx(2)}C_{x(2)}^{2}}=h_0 \text{ (say)}.$$
Thus the resulting minimum MSE of $\hat{\bar{Y}}_{P(1)}^{(h)}$ is given by:
$$MSE(\hat{\bar{Y}}_{P(1)}^{(h)})_{\min}\cong\bar{Y}^{2}\left[\lambda' C_y^{2}+\lambda'' C_y^{2}+\lambda^{*}C_{y(2)}^{2}-\frac{\left(\lambda'' K_{yx}C_x^{2}+\lambda^{*}K_{yx(2)}C_{x(2)}^{2}\right)^{2}}{\lambda'' C_x^{2}+\lambda^{*}C_{x(2)}^{2}}\right]. \qquad (30)$$
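The optimum $h_0$ and the minimum MSE in (30) can be checked numerically against the general expression (29). The following Python sketch does this; all names and the parameter values used for checking are ours, chosen only for illustration:

```python
def h_opt(delta, lam_pp, lam_s, Cx, Cx2, Kyx, Kyx2):
    """Optimum h0 = delta (lam'' Cx^2 + lam* Cx(2)^2) /
    (lam'' Kyx Cx^2 + lam* Kyx(2) Cx(2)^2)."""
    return delta * (lam_pp * Cx**2 + lam_s * Cx2**2) / (
        lam_pp * Kyx * Cx**2 + lam_s * Kyx2 * Cx2**2)


def mse_h(Ybar, h, delta, lam_p, lam_pp, lam_s, Cy, Cy2, Cx, Cx2, Kyx, Kyx2):
    """MSE (29) as a function of h, with A1 and A2 as defined in the text."""
    t = delta / h
    A1 = t * (t - 2 * Kyx)
    A2 = t * (t - 2 * Kyx2)
    return Ybar**2 * (lam_p * Cy**2
                      + lam_pp * (Cy**2 + A1 * Cx**2)
                      + lam_s * (Cy2**2 + A2 * Cx2**2))


def min_mse1(Ybar, lam_p, lam_pp, lam_s, Cy, Cy2, Cx, Cx2, Kyx, Kyx2):
    """Minimum MSE (30) under Situation I."""
    num = (lam_pp * Kyx * Cx**2 + lam_s * Kyx2 * Cx2**2) ** 2
    den = lam_pp * Cx**2 + lam_s * Cx2**2
    return Ybar**2 * (lam_p * Cy**2 + lam_pp * Cy**2 + lam_s * Cy2**2 - num / den)
```

Evaluating `mse_h` at `h_opt(...)` reproduces (30), and any other $h$ gives a larger value, which is a useful sanity check on the algebra.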
Table 1 shows some members of the proposed class of estimators $\hat{\bar{Y}}_{P(1)}^{(h)}$ of the population mean $\bar{Y}$, obtained by taking $h=1$ and $h=2$, each at different values of $c$ and $d$. Many more estimators can be generated from the proposed estimator in (21) simply by taking different values of $h$, $c$ and $d$.

Table 1. Some members of the family of estimators $\hat{\bar{Y}}_{P(1)}^{(h)}$ under Situation I.

$\hat{\bar{Y}}_{P(1)}^{(1)(1)}=\bar{y}^{*}\exp\left(\frac{\bar{x}'-\bar{x}^{*}}{\bar{x}'+S_x}\right)$   ($h=1$, $c=1$, $d=S_x$)
$\hat{\bar{Y}}_{P(1)}^{(1)(2)}=\bar{y}^{*}\exp\left(\frac{\bar{x}'-\bar{x}^{*}}{\bar{x}'+C_x}\right)$   ($h=1$, $c=1$, $d=C_x$)
$\hat{\bar{Y}}_{P(1)}^{(1)(3)}=\bar{y}^{*}\exp\left(\frac{\bar{x}'-\bar{x}^{*}}{\bar{x}'+\rho_{yx}}\right)$   ($h=1$, $c=1$, $d=\rho_{yx}$)
$\hat{\bar{Y}}_{P(1)}^{(1)(4)}=\bar{y}^{*}\exp\left(\frac{C_x\left(\bar{x}'-\bar{x}^{*}\right)}{C_x\bar{x}'+S_x}\right)$   ($h=1$, $c=C_x$, $d=S_x$)
$\hat{\bar{Y}}_{P(1)}^{(2)(1)}=\bar{y}^{*}\exp\left(\frac{\bar{x}'-\bar{x}^{*}}{\left(\bar{x}'+S_x\right)+\left(\bar{x}^{*}+S_x\right)}\right)$   ($h=2$, $c=1$, $d=S_x$)
$\hat{\bar{Y}}_{P(1)}^{(2)(2)}=\bar{y}^{*}\exp\left(\frac{\bar{x}'-\bar{x}^{*}}{\left(\bar{x}'+C_x\right)+\left(\bar{x}^{*}+C_x\right)}\right)$   ($h=2$, $c=1$, $d=C_x$)
$\hat{\bar{Y}}_{P(1)}^{(2)(3)}=\bar{y}^{*}\exp\left(\frac{\bar{x}'-\bar{x}^{*}}{\left(\bar{x}'+\rho_{yx}\right)+\left(\bar{x}^{*}+\rho_{yx}\right)}\right)$   ($h=2$, $c=1$, $d=\rho_{yx}$)
$\hat{\bar{Y}}_{P(1)}^{(2)(4)}=\bar{y}^{*}\exp\left(\frac{C_x\left(\bar{x}'-\bar{x}^{*}\right)}{\left(C_x\bar{x}'+S_x\right)+\left(C_x\bar{x}^{*}+S_x\right)}\right)$   ($h=2$, $c=C_x$, $d=S_x$)
The expressions for the mean square error of the above estimators (Table 1) are given by:
$$MSE(\hat{\bar{Y}}_{P(1)}^{(1)(i)})\cong\bar{Y}^{2}\left[\lambda' C_y^{2}+\lambda''\left\{C_y^{2}+A_3C_x^{2}\right\}+\lambda^{*}\left\{C_{y(2)}^{2}+A_4C_{x(2)}^{2}\right\}\right], \qquad (31)$$
where $A_3=\delta_i\left(\delta_i-2K_{yx}\right)$ and $A_4=\delta_i\left(\delta_i-2K_{yx(2)}\right)$ $(i=1,2,3,4)$, and
$$MSE(\hat{\bar{Y}}_{P(1)}^{(2)(i)})\cong\bar{Y}^{2}\left[\lambda' C_y^{2}+\lambda''\left\{C_y^{2}+A_5C_x^{2}\right\}+\lambda^{*}\left\{C_{y(2)}^{2}+A_6C_{x(2)}^{2}\right\}\right], \qquad (32)$$
where $A_5=\frac{\delta_i}{2}\left(\frac{\delta_i}{2}-2K_{yx}\right)$, $A_6=\frac{\delta_i}{2}\left(\frac{\delta_i}{2}-2K_{yx(2)}\right)$ $(i=1,2,3,4)$, $\delta_1=\frac{\bar{X}}{\bar{X}+S_x}$, $\delta_2=\frac{\bar{X}}{\bar{X}+C_x}$, $\delta_3=\frac{\bar{X}}{\bar{X}+\rho_{yx}}$ and $\delta_4=\frac{C_x\bar{X}}{C_x\bar{X}+S_x}$.
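The table-member MSEs in (31) and (32) differ only through $\delta_i$ and the factor $h$. A compact Python sketch evaluating them follows; the function names and the inputs in the check are ours, for illustration only:

```python
def deltas(Xbar, Sx, Cx, rho_yx):
    """delta_1..delta_4 for the four (c, d) choices of Table 1."""
    return (Xbar / (Xbar + Sx),
            Xbar / (Xbar + Cx),
            Xbar / (Xbar + rho_yx),
            Cx * Xbar / (Cx * Xbar + Sx))


def mse_member(Ybar, delta_i, h, lam_p, lam_pp, lam_s, Cy, Cy2, Cx, Cx2, Kyx, Kyx2):
    """MSE (31) for h = 1 and MSE (32) for h = 2: both are (29) with delta = delta_i."""
    t = delta_i / h
    return Ybar**2 * (lam_p * Cy**2
                      + lam_pp * (Cy**2 + t * (t - 2 * Kyx) * Cx**2)
                      + lam_s * (Cy2**2 + t * (t - 2 * Kyx2) * Cx2**2))
```

Looping `mse_member` over the four $\delta_i$ at both $h=1$ and $h=2$ reproduces the eight table members in one pass.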
2.2 Situation II
The population mean $\bar{X}$ is unknown; non-response occurs on the study variable $y$, with complete response on the auxiliary variable $x$. The proposed estimator is given by:
$$\hat{\bar{Y}}_{P(2)}^{(g)}=\bar{y}^{*}\exp\left(\frac{c\left(\bar{x}'-\bar{x}\right)}{\left(c\bar{x}'+d\right)+\left(g-1\right)\left(c\bar{x}+d\right)}\right), \qquad (33)$$
where $g\,(>0)$.
Remarks:
(i) When $g=0$, the estimator $\hat{\bar{Y}}_{P(2)}^{(g)}$ reduces to
$$\hat{\bar{Y}}_{P(2)}^{(0)}=\bar{y}^{*}\exp(1), \qquad (34)$$
which is a biased estimator with a larger MSE than the usual estimator $\bar{y}^{*}$.
(ii) When $g=1$, the estimator $\hat{\bar{Y}}_{P(2)}^{(g)}$ reduces to
$$\hat{\bar{Y}}_{P(2)}^{(1)}=\bar{y}^{*}\exp\left(\frac{c\left(\bar{x}'-\bar{x}\right)}{c\bar{x}'+d}\right). \qquad (35)$$
(iii) When $g=2$, the estimator $\hat{\bar{Y}}_{P(2)}^{(g)}$ reduces to the estimator
$$\hat{\bar{Y}}_{P(2)}^{(2)}=\bar{y}^{*}\exp\left(\frac{c\left(\bar{x}'-\bar{x}\right)}{\left(c\bar{x}'+d\right)+\left(c\bar{x}+d\right)}\right). \qquad (36)$$
To obtain the bias and mean square error of $\hat{\bar{Y}}_{P(2)}^{(g)}$, we express it in terms of the $\varepsilon$'s:
$$\hat{\bar{Y}}_{P(2)}^{(g)}=\bar{Y}\left(1+\varepsilon_0\right)\exp\left(\frac{\varepsilon_1'-\varepsilon_2}{\frac{g}{\delta}+\varepsilon_1'+\left(g-1\right)\varepsilon_2}\right). \qquad (37)$$
Solving (37) and neglecting terms of the $\varepsilon$'s having power greater than two, we have:
$$\hat{\bar{Y}}_{P(2)}^{(g)}\cong\bar{Y}\left[1+\varepsilon_0+\frac{\delta}{g}\left(\varepsilon_1'-\varepsilon_2\right)+\frac{\delta}{g}\left(\varepsilon_0\varepsilon_1'-\varepsilon_0\varepsilon_2\right)+\frac{\delta^{2}}{2g^{2}}\left(\varepsilon_1'-\varepsilon_2\right)^{2}-\frac{\delta^{2}}{g^{2}}\left(\varepsilon_1'^{2}+\left(g-2\right)\varepsilon_1'\varepsilon_2-\left(g-1\right)\varepsilon_2^{2}\right)\right]. \qquad (38)$$
The bias of $\hat{\bar{Y}}_{P(2)}^{(g)}$, to the first order of approximation, is given by:
$$B(\hat{\bar{Y}}_{P(2)}^{(g)})\cong\bar{Y}\frac{\lambda''\delta}{g}\left\{\left(1-\frac{1}{2g}\right)\delta-K_{yx}\right\}C_x^{2}. \qquad (39)$$
Squaring both sides of (38) and neglecting terms of the $\varepsilon$'s involving power greater than two, we have:
$$\left(\hat{\bar{Y}}_{P(2)}^{(g)}-\bar{Y}\right)^{2}\cong\bar{Y}^{2}\left[\varepsilon_0^{2}+\frac{\delta^{2}}{g^{2}}\left(\varepsilon_1'-\varepsilon_2\right)^{2}+\frac{2\delta}{g}\left(\varepsilon_0\varepsilon_1'-\varepsilon_0\varepsilon_2\right)\right]. \qquad (40)$$
Using (40), the mean square error of $\hat{\bar{Y}}_{P(2)}^{(g)}$ to the first degree of approximation is given by:
$$MSE(\hat{\bar{Y}}_{P(2)}^{(g)})\cong\bar{Y}^{2}\left[\lambda' C_y^{2}+\lambda''\left\{C_y^{2}+\frac{\delta}{g}\left(\frac{\delta}{g}-2K_{yx}\right)C_x^{2}\right\}+\lambda^{*}C_{y(2)}^{2}\right]. \qquad (41)$$
The $MSE(\hat{\bar{Y}}_{P(2)}^{(g)})$ is minimum when $g=\frac{\delta}{K_{yx}}=g_0$ (say).
Thus the resulting minimum MSE of $\hat{\bar{Y}}_{P(2)}^{(g)}$ is given by:
$$MSE(\hat{\bar{Y}}_{P(2)}^{(g)})_{\min}\cong\bar{Y}^{2}\left[\lambda' C_y^{2}+\lambda''\left(1-\rho_{yx}^{2}\right)C_y^{2}+\lambda^{*}C_{y(2)}^{2}\right]. \qquad (42)$$
In Table 2, for $g=1$ and $g=2$, we give a family of estimators $\hat{\bar{Y}}_{P(2)}^{(g)}$ of the population mean $\bar{Y}$, obtained at different choices of $c$ and $d$. Many more estimators can be generated from the proposed estimator in (33) simply by putting different values of $g$, $c$ and $d$. Using Table 2, the MSEs of $\hat{\bar{Y}}_{P(2)}^{(1)(i)}$ and $\hat{\bar{Y}}_{P(2)}^{(2)(i)}$ $(i=1,2,3,4)$ to the first degree of approximation are given by:
$$MSE(\hat{\bar{Y}}_{P(2)}^{(1)(i)})\cong\bar{Y}^{2}\left[\lambda' C_y^{2}+\lambda''\left\{C_y^{2}+A_3C_x^{2}\right\}+\lambda^{*}C_{y(2)}^{2}\right], \qquad (43)$$
and
$$MSE(\hat{\bar{Y}}_{P(2)}^{(2)(i)})\cong\bar{Y}^{2}\left[\lambda' C_y^{2}+\lambda''\left\{C_y^{2}+A_5C_x^{2}\right\}+\lambda^{*}C_{y(2)}^{2}\right]. \qquad (44)$$
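Under Situation II the minimum in (42) can likewise be verified by evaluating (41) at $g_0=\delta/K_{yx}$. A brief Python sketch follows; the parameter values are illustrative only, and we set $K_{yx}=\rho_{yx}C_y/C_x$ so that the inputs are mutually consistent:

```python
def mse_g(Ybar, g, delta, lam_p, lam_pp, lam_s, Cy, Cy2, Cx, Kyx):
    """MSE (41) of the proposed estimator as a function of g."""
    t = delta / g
    return Ybar**2 * (lam_p * Cy**2
                      + lam_pp * (Cy**2 + t * (t - 2 * Kyx) * Cx**2)
                      + lam_s * Cy2**2)


def min_mse2(Ybar, lam_p, lam_pp, lam_s, Cy, Cy2, rho_yx):
    """Minimum MSE (42), attained at g0 = delta / Kyx."""
    return Ybar**2 * (lam_p * Cy**2
                      + lam_pp * (1 - rho_yx**2) * Cy**2
                      + lam_s * Cy2**2)
```

At $g=g_0$ the middle term of (41) collapses to $\lambda''(1-\rho_{yx}^2)C_y^2$, which is exactly the regression-type bound in (42).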
Table 2. Some members of the family of estimators $\hat{\bar{Y}}_{P(2)}^{(g)}$ under Situation II.

$\hat{\bar{Y}}_{P(2)}^{(1)(1)}=\bar{y}^{*}\exp\left(\frac{\bar{x}'-\bar{x}}{\bar{x}'+S_x}\right)$   ($g=1$, $c=1$, $d=S_x$)
$\hat{\bar{Y}}_{P(2)}^{(1)(2)}=\bar{y}^{*}\exp\left(\frac{\bar{x}'-\bar{x}}{\bar{x}'+C_x}\right)$   ($g=1$, $c=1$, $d=C_x$)
$\hat{\bar{Y}}_{P(2)}^{(1)(3)}=\bar{y}^{*}\exp\left(\frac{\bar{x}'-\bar{x}}{\bar{x}'+\rho_{yx}}\right)$   ($g=1$, $c=1$, $d=\rho_{yx}$)
$\hat{\bar{Y}}_{P(2)}^{(1)(4)}=\bar{y}^{*}\exp\left(\frac{C_x\left(\bar{x}'-\bar{x}\right)}{C_x\bar{x}'+S_x}\right)$   ($g=1$, $c=C_x$, $d=S_x$)
$\hat{\bar{Y}}_{P(2)}^{(2)(1)}=\bar{y}^{*}\exp\left(\frac{\bar{x}'-\bar{x}}{\left(\bar{x}'+S_x\right)+\left(\bar{x}+S_x\right)}\right)$   ($g=2$, $c=1$, $d=S_x$)
$\hat{\bar{Y}}_{P(2)}^{(2)(2)}=\bar{y}^{*}\exp\left(\frac{\bar{x}'-\bar{x}}{\left(\bar{x}'+C_x\right)+\left(\bar{x}+C_x\right)}\right)$   ($g=2$, $c=1$, $d=C_x$)
$\hat{\bar{Y}}_{P(2)}^{(2)(3)}=\bar{y}^{*}\exp\left(\frac{\bar{x}'-\bar{x}}{\left(\bar{x}'+\rho_{yx}\right)+\left(\bar{x}+\rho_{yx}\right)}\right)$   ($g=2$, $c=1$, $d=\rho_{yx}$)
$\hat{\bar{Y}}_{P(2)}^{(2)(4)}=\bar{y}^{*}\exp\left(\frac{C_x\left(\bar{x}'-\bar{x}\right)}{\left(C_x\bar{x}'+S_x\right)+\left(C_x\bar{x}+S_x\right)}\right)$   ($g=2$, $c=C_x$, $d=S_x$)

3. Efficiency comparisons

3.1 Situation I
(a) When the constant $h$ is unknown:
To compare the estimator $\hat{\bar{Y}}_{P(1)}^{(h)}$ with the usual estimators $\bar{y}^{*}$, $\hat{\bar{Y}}_{R(1)}$ and $\hat{\bar{Y}}_{Exp(1)}$ when the value of the constant $h$ does not coincide with its optimum value $h_0$, we have:
(i) $Var(\bar{y}^{*})-MSE(\hat{\bar{Y}}_{P(1)}^{(h)})>0$ if
$$h>\max\left\{\frac{\delta}{2K_{yx}},\frac{\delta}{2K_{yx(2)}}\right\}.$$
(ii) $MSE(\hat{\bar{Y}}_{R(1)})-MSE(\hat{\bar{Y}}_{P(1)}^{(h)})>0$ if
$$\min\left\{\frac{\delta}{2K_{yx}-1},\frac{\delta}{2K_{yx(2)}-1},\delta\right\}<h<\max\left\{\frac{\delta}{2K_{yx}-1},\frac{\delta}{2K_{yx(2)}-1},\delta\right\}.$$
(iii) $MSE(\hat{\bar{Y}}_{Exp(1)})-MSE(\hat{\bar{Y}}_{P(1)}^{(h)})>0$ if
$$\min\left\{\frac{2\delta}{4K_{yx}-1},\frac{2\delta}{4K_{yx(2)}-1},2\delta\right\}<h<\max\left\{\frac{2\delta}{4K_{yx}-1},\frac{2\delta}{4K_{yx(2)}-1},2\delta\right\}.$$
(b) When the constant $h$ is known:
(i) $Var(\bar{y}^{*})-MSE(\hat{\bar{Y}}_{P(1)}^{(h)})_{\min}>0$ if
$$\frac{\left(\lambda'' K_{yx}C_x^{2}+\lambda^{*}K_{yx(2)}C_{x(2)}^{2}\right)^{2}}{\lambda'' C_x^{2}+\lambda^{*}C_{x(2)}^{2}}>0,$$
which always holds, being the ratio of a square to a positive quantity.
(ii) $MSE(\hat{\bar{Y}}_{R(1)})-MSE(\hat{\bar{Y}}_{P(1)}^{(h)})_{\min}>0$ if
$$\left\{\frac{\left(\lambda'' K_{yx}C_x^{2}+\lambda^{*}K_{yx(2)}C_{x(2)}^{2}\right)^{2}}{\lambda'' C_x^{2}+\lambda^{*}C_{x(2)}^{2}}+\lambda''\left(1-2K_{yx}\right)C_x^{2}\right\}>0$$
and
$$K_{yx(2)}<\frac{1}{2}.$$
(iii) $MSE(\hat{\bar{Y}}_{Exp(1)})-MSE(\hat{\bar{Y}}_{P(1)}^{(h)})_{\min}>0$ if
$$\left\{\frac{\left(\lambda'' K_{yx}C_x^{2}+\lambda^{*}K_{yx(2)}C_{x(2)}^{2}\right)^{2}}{\lambda'' C_x^{2}+\lambda^{*}C_{x(2)}^{2}}+\lambda''\left(\frac{1}{4}-K_{yx}\right)C_x^{2}\right\}>0$$
and
$$K_{yx(2)}<\frac{1}{4}.$$
(iv) $MSE(\hat{\bar{Y}}_{Reg(1)})-MSE(\hat{\bar{Y}}_{P(1)}^{(h)})_{\min}>0$ if
$$\frac{\left\{\lambda^{*}\left(K_{yx}-K_{yx(2)}\right)C_{x(2)}^{2}\right\}^{2}}{\lambda'' C_x^{2}+\lambda^{*}C_{x(2)}^{2}}>0.$$
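These comparisons are easy to check numerically: condition (b)(i), for instance, reduces to a square divided by a positive quantity. The following Python sketch evaluates it together with the lower bound on $h$ from condition (a)(i); all names and inputs are ours, for illustration only:

```python
def gain_over_ybar_star(lam_pp, lam_s, Cx, Cx2, Kyx, Kyx2):
    """(Var(ybar*) - min MSE (30)) / Ybar^2 =
    (lam'' Kyx Cx^2 + lam* Kyx(2) Cx(2)^2)^2 / (lam'' Cx^2 + lam* Cx(2)^2),
    which is non-negative for any parameter values (condition (b)(i))."""
    num = (lam_pp * Kyx * Cx**2 + lam_s * Kyx2 * Cx2**2) ** 2
    den = lam_pp * Cx**2 + lam_s * Cx2**2
    return num / den


def h_lower_bound(delta, Kyx, Kyx2):
    """Condition (a)(i): the proposed estimator beats ybar* for any
    h > max(delta/(2 Kyx), delta/(2 Kyx(2)))."""
    return max(delta / (2 * Kyx), delta / (2 * Kyx2))
```

Because `gain_over_ybar_star` is a square over a positive denominator, the proposed estimator at its optimum $h_0$ never loses to $\bar{y}^{*}$, with equality only when both $K_{yx}$ and $K_{yx(2)}$ vanish.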