ESTIMATION OF POPULATION RATIO, PRODUCT, AND MEAN USING MULTIAUXILIARY INFORMATION WITH RANDOM NON-RESPONSE

H.P. Singh, P. Chandra, I.S. Grewal, S. Singh, C.C. Chen, S.A. Sedory, J.-M. Kim

1. INTRODUCTION

It is well known that in the theory of sampling the precision of an estimator is usually increased by the use of auxiliary variables correlated with the variables under investigation. Ratio, product, and regression estimators and several of their generalizations have been discussed in the literature. These estimators use information in the form of known population means of the auxiliary variables. Srivastava and Jhajj (1981) suggested a family of estimators which uses not only the known population mean of the auxiliary variable but also its known population variance. The family of estimators of the population mean $\bar{Y}_0 = \frac{1}{N}\sum_{j=1}^{N} Y_{0j}$ of a finite population of size $N$ suggested by Srivastava and Jhajj (1981) is defined by:

$$t = \bar{y}_0\, t(a, b) \qquad (1.1)$$

where $\bar{y}_0 = \frac{1}{n}\sum_{j=1}^{n} y_{0j}$ and $\bar{x}_1 = \frac{1}{n}\sum_{j=1}^{n} x_{1j}$ are the means of a sample of size $n$ drawn by simple random sampling without replacement, $s_{x_1}^2 = \frac{1}{n-1}\sum_{j=1}^{n}(x_{1j}-\bar{x}_1)^2$ is an unbiased estimator of the population variance $S_{x_1}^2$ of the auxiliary character $x_1$, $\bar{X}_1$ denotes its known population mean, $a = \bar{x}_1/\bar{X}_1$, $b = s_{x_1}^2/S_{x_1}^2$, and $t(a,b)$ is a function of $(a,b)$ such that $t(1,1) = 1$ and satisfies certain regularity conditions as given in Srivastava and Jhajj (1981).
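As a concrete illustration (ours, not part of the original paper), the sketch below evaluates one admissible member of the family (1.1) on simulated data, taking $t(a,b) = a^{\alpha_1} b^{\alpha_2}$ so that $t(1,1) = 1$; all data, parameter values, and the particular choice of $t(\cdot)$ are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical finite population with study variable y0 and auxiliary variable x1.
N, n = 5000, 200
x1_pop = rng.gamma(shape=4.0, scale=10.0, size=N)
y0_pop = 50.0 + 1.5 * x1_pop + rng.normal(0.0, 10.0, size=N)

X1_bar = x1_pop.mean()              # known population mean of x1
S2_x1 = x1_pop.var(ddof=1)          # known population variance of x1

# Simple random sample drawn without replacement.
idx = rng.choice(N, size=n, replace=False)
y0, x1 = y0_pop[idx], x1_pop[idx]

a = x1.mean() / X1_bar              # a = xbar_1 / Xbar_1
b = x1.var(ddof=1) / S2_x1          # b = s^2_{x1} / S^2_{x1}

def t_ab(a, b, alpha1=-1.0, alpha2=-0.5):
    """One admissible choice t(a, b) = a**alpha1 * b**alpha2, satisfying t(1, 1) = 1."""
    return a ** alpha1 * b ** alpha2

estimate = y0.mean() * t_ab(a, b)   # family (1.1): t = ybar_0 * t(a, b)
print(estimate, y0_pop.mean())
```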

A generalized version of the family (1.1) is given by:

$$t_T = T(\bar{y}_0, a, b) \qquad (1.2)$$

where $T(\cdot)$ is a function of $(\bar{y}_0, a, b)$ such that $T(\bar{Y}_0, 1, 1) = \bar{Y}_0$, $\left.\frac{\partial T(\cdot)}{\partial \bar{y}_0}\right|_{(\bar{Y}_0, 1, 1)} = 1$, and satisfies certain regularity conditions. It has been shown that the asymptotic minimum MSEs of $t$ and $t_T$ are the same. Using information on $\bar{X}_1$ and $S_{x_1}^2$, Upadhyaya and Singh (1985) suggested a family of estimators of the population ratio $R = \bar{Y}_0/\bar{Y}_1$, $\bar{Y}_1 \neq 0$, as:

$$\hat{R}_h = \hat{R}\, h(a, b) \qquad (1.3)$$

where $\hat{R} = \bar{y}_0/\bar{y}_1$ $(\bar{y}_1 \neq 0)$ is the conventional estimator of $R$, $\bar{y}_1$ is the sample mean of the study variable $y_1$, and $h(a,b)$ is a parametric function of $(a,b)$ such that $h(1,1) = 1$ and satisfies certain regularity conditions. A family of estimators wider than $\hat{R}_h$ is defined as:

$$\hat{R}_H = H(\hat{R}, a, b) \qquad (1.4)$$

where $H(\cdot)$ is a parametric function of $(\hat{R}, a, b)$ such that $H(R, 1, 1) = R$ and satisfies certain regularity conditions. It has been shown that the asymptotic minimum MSEs of $\hat{R}_h$ and $\hat{R}_H$ are the same. Quite often information on many supplementary variables is available in the survey, and it can be utilized to increase the precision of the estimate. Olkin (1958) considered the use of multiauxiliary variables, positively correlated with the variables under study, to build a multivariate ratio estimator of the population mean $\bar{Y}_0$. Following Olkin's method of estimation, several estimators using multiauxiliary variables have been proposed by various authors; for instance, see Raj (1965), Rao and Mudholkar (1967), Srivastava (1971), Tuteja and Bahl (1991), Agarwal and Panda (1994), Singh and Rani (2005-2006), Tailor and Tailor (2008), etc. In this paper we suggest a family of estimators for the population ratio, product, and mean when information about the population means and variances of $m\,(>1)$ auxiliary variables is available. The properties of the suggested family are also discussed in the presence of random non-response. In this context we refer to Tracy and Osahan (1994), Singh and Joarder (1998), Singh et al. (2000, 2007), Dubey and Uprety (2008), Gamrot (2008), and Harel (2008).

2. NOTATIONS

We assume that information on $m$ auxiliary variables $X_1, X_2, \ldots, X_m$ is available for all the units in the population. Let $U = \{U_1, U_2, \ldots, U_N\}$ denote the population of $N$ units from which a simple random sample of size $n$ is drawn without replacement. Let $Y_{0j}$, $Y_{1j}$ and $X_{ij}$ denote the values of the variables $Y_0$, $Y_1$ and $X_i$ on the $j$-th unit of the population, $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, N$. Further, let $y_{0j}$, $y_{1j}$ and $x_{ij}$ denote the corresponding values in the sample, $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$. We denote:

$$\bar{y}_0 = \frac{1}{n}\sum_{j=1}^{n} y_{0j}, \quad \bar{y}_1 = \frac{1}{n}\sum_{j=1}^{n} y_{1j}, \quad \bar{x}_i = \frac{1}{n}\sum_{j=1}^{n} x_{ij}, \quad \bar{Y}_0 = \frac{1}{N}\sum_{j=1}^{N} Y_{0j}, \quad \bar{Y}_1 = \frac{1}{N}\sum_{j=1}^{N} Y_{1j}, \quad \bar{X}_i = \frac{1}{N}\sum_{j=1}^{N} X_{ij},$$

$$s_{y_0}^2 = \frac{1}{n-1}\sum_{j=1}^{n}(y_{0j}-\bar{y}_0)^2, \quad s_{y_1}^2 = \frac{1}{n-1}\sum_{j=1}^{n}(y_{1j}-\bar{y}_1)^2, \quad s_{x_i}^2 = \frac{1}{n-1}\sum_{j=1}^{n}(x_{ij}-\bar{x}_i)^2,$$

$$s_{y_0 y_1} = \frac{1}{n-1}\sum_{j=1}^{n}(y_{0j}-\bar{y}_0)(y_{1j}-\bar{y}_1), \quad s_{y_0 x_i} = \frac{1}{n-1}\sum_{j=1}^{n}(y_{0j}-\bar{y}_0)(x_{ij}-\bar{x}_i), \quad s_{y_1 x_i} = \frac{1}{n-1}\sum_{j=1}^{n}(y_{1j}-\bar{y}_1)(x_{ij}-\bar{x}_i),$$

$$S_{y_0}^2 = \frac{1}{N-1}\sum_{j=1}^{N}(Y_{0j}-\bar{Y}_0)^2, \quad S_{y_1}^2 = \frac{1}{N-1}\sum_{j=1}^{N}(Y_{1j}-\bar{Y}_1)^2, \quad S_{x_i}^2 = \frac{1}{N-1}\sum_{j=1}^{N}(X_{ij}-\bar{X}_i)^2,$$

$$S_{y_0 y_1} = \frac{1}{N-1}\sum_{j=1}^{N}(Y_{0j}-\bar{Y}_0)(Y_{1j}-\bar{Y}_1), \quad S_{y_0 x_i} = \frac{1}{N-1}\sum_{j=1}^{N}(Y_{0j}-\bar{Y}_0)(X_{ij}-\bar{X}_i), \quad S_{y_1 x_i} = \frac{1}{N-1}\sum_{j=1}^{N}(Y_{1j}-\bar{Y}_1)(X_{ij}-\bar{X}_i),$$

$$C_{y_0}^2 = S_{y_0}^2/\bar{Y}_0^2, \quad C_{y_1}^2 = S_{y_1}^2/\bar{Y}_1^2, \quad C_{x_i}^2 = S_{x_i}^2/\bar{X}_i^2, \quad i = 1, 2, \ldots, m.$$

Further, let $\rho_{y_0 y_1}$, $\rho_{y_0 x_i}$, $\rho_{y_1 x_i}$ and $\rho_{x_i x_k}$ denote the correlation coefficients between $(y_0, y_1)$, $(y_0, x_i)$, $(y_1, x_i)$ and between the variables $(x_i, x_k)$ respectively, $i \neq k = 1, 2, \ldots, m$. Define:

$$u_i = \bar{x}_i/\bar{X}_i, \quad i = 1, 2, \ldots, m, \qquad u_{m+i} = s_{x_i}^2/S_{x_i}^2, \quad i = 1, 2, \ldots, m.$$

Let $u$ denote the column vector of the $2m$ elements $u_1, u_2, \ldots, u_{2m}$. Superscript $T$ over a column vector denotes the corresponding row vector.

Defining

$$\epsilon_0 = \frac{\bar{y}_0}{\bar{Y}_0} - 1, \quad \epsilon_1 = \frac{\bar{y}_1}{\bar{Y}_1} - 1, \quad \epsilon_i = u_i - 1, \ i = 1, 2, \ldots, 2m, \qquad \epsilon^T = (\epsilon_1, \epsilon_2, \ldots, \epsilon_{2m}),$$

we have:

$$E(\epsilon_0) = E(\epsilon_1) = 0, \quad E(\epsilon_i) = 0, \ i = 1, 2, \ldots, 2m;$$

$$E(\epsilon_0^2) = \lambda C_{y_0}^2, \quad E(\epsilon_1^2) = \lambda C_{y_1}^2, \quad E(\epsilon_0\epsilon_1) = \lambda\rho_{y_0 y_1}C_{y_0}C_{y_1};$$

$$E(\epsilon_0\epsilon_i) = \lambda a_{y_0 x_i}, \quad E(\epsilon_1\epsilon_i) = \lambda a_{y_1 x_i}, \quad E(\epsilon_i^2) = \lambda C_{x_i}^2, \quad i = 1, 2, \ldots, m;$$

$$E(\epsilon_i^2) = \lambda\{\beta_2(x_i) - 1\}, \quad i = m+1, \ldots, 2m;$$

$$E\{(\epsilon_0 - \alpha\epsilon_1)\epsilon_i\} = \lambda(a_{y_0 x_i} - \alpha a_{y_1 x_i}) = \lambda a_i, \quad i = 1, 2, \ldots, m;$$

$$E(\epsilon_i\epsilon_k) = \lambda\rho_{x_i x_k}C_{x_i}C_{x_k} = \lambda a_{x_i x_k}, \quad i \neq k = 1, 2, \ldots, m;$$

and, up to terms of order $n^{-1}$, we have:

$$E(\epsilon_0\epsilon_i) = \lambda\delta_{y_0 x_i}, \quad E(\epsilon_1\epsilon_i) = \lambda\delta_{y_1 x_i}, \quad i = m+1, \ldots, 2m;$$

$$E\{(\epsilon_0 - \alpha\epsilon_1)\epsilon_{m+i}\} = \lambda(\delta_{y_0 x_i} - \alpha\delta_{y_1 x_i}) = \lambda\delta_i, \quad i = 1, 2, \ldots, m;$$

$$E(\epsilon_i\epsilon_k) = \lambda f_{x_i x_k}, \quad i = 1, \ldots, m, \ k = m+1, \ldots, 2m; \qquad E(\epsilon_i\epsilon_k) = \lambda\theta_{x_i x_k}, \quad i, k = m+1, \ldots, 2m;$$

where $a_{y_0 x_i} = \rho_{y_0 x_i}C_{y_0}C_{x_i}$, $a_{y_1 x_i} = \rho_{y_1 x_i}C_{y_1}C_{x_i}$, $\lambda = (1/n - 1/N)$,

$$\delta_{y_0 x_i} = \frac{\sum_{j=1}^{N}(Y_{0j}-\bar{Y}_0)(X_{ij}-\bar{X}_i)^2}{(N-1)\bar{Y}_0 S_{x_i}^2}, \quad \delta_{y_1 x_i} = \frac{\sum_{j=1}^{N}(Y_{1j}-\bar{Y}_1)(X_{ij}-\bar{X}_i)^2}{(N-1)\bar{Y}_1 S_{x_i}^2},$$

$$f_{x_i x_k} = \frac{\sum_{j=1}^{N}(X_{ij}-\bar{X}_i)(X_{kj}-\bar{X}_k)^2}{(N-1)\bar{X}_i S_{x_k}^2}, \quad \theta_{x_i x_k} = \frac{\sum_{j=1}^{N}(X_{ij}-\bar{X}_i)^2(X_{kj}-\bar{X}_k)^2}{(N-1)S_{x_i}^2 S_{x_k}^2} - 1,$$

and $\beta_2(x_i)$ denotes the coefficient of kurtosis of $x_i$.

Putting the above results in matrix notation, we have:

$$E(\epsilon) = 0, \qquad E(\epsilon\epsilon^T) = \lambda D, \qquad E\{(\epsilon_0 - \alpha\epsilon_1)\epsilon\} = \lambda d,$$

where the vector $d^T = (a^T : \delta^T) = (a_1, a_2, \ldots, a_m, \delta_1, \delta_2, \ldots, \delta_m)$ and the $2m \times 2m$ matrix

$$D = \begin{pmatrix} A & F \\ F^T & \Theta \end{pmatrix}$$

is assumed to be positive definite. $A$, $F$ and $\Theta$ are the $m \times m$ matrices $A = [a_{x_i x_k}]$, $F = [f_{x_i x_k}]$ and $\Theta = [\theta_{x_i x_k}]$, with diagonal elements $a_{x_i x_i} = C_{x_i}^2$ and $\theta_{x_i x_i} = \beta_2(x_i) - 1$.

3. THE SUGGESTED FAMILY OF ESTIMATORS OF $R_{(\alpha)}$

The parameter under investigation is $R_{(\alpha)} = \bar{Y}_0\bar{Y}_1^{-\alpha}$, where $\alpha$ takes the values 0, +1, or -1. It reduces to:

(i) $R_{(1)} = \bar{Y}_0/\bar{Y}_1 = R$, for $\alpha = 1$ (ratio of two population means);
(ii) $R_{(-1)} = \bar{Y}_0\bar{Y}_1 = P$, for $\alpha = -1$ (product of two population means);
(iii) $R_{(0)} = \bar{Y}_0$, for $\alpha = 0$ (mean of a single population).

The conventional estimator of $R_{(\alpha)} = \bar{Y}_0\bar{Y}_1^{-\alpha}$ is defined by:

$$\hat{R}_{(\alpha)} = \bar{y}_0\,\bar{y}_1^{-\alpha}, \quad \bar{y}_1 \neq 0, \qquad (3.1)$$

which reduces to the following set of estimators:

(i) $\hat{R}_{(1)} = \bar{y}_0/\bar{y}_1 = \hat{R}$, for $\alpha = 1$ (estimator of the ratio of two population means);
(ii) $\hat{R}_{(-1)} = \bar{y}_0\bar{y}_1 = \hat{P}$, for $\alpha = -1$ (estimator of the product of two population means);
(iii) $\hat{R}_{(0)} = \bar{y}_0$, for $\alpha = 0$ (estimator of the mean of a single population).

Let $e^T$ denote the row vector of $2m$ unit elements. Whatever be the sample chosen, let $(\hat{R}_{(\alpha)}, u^T)$ assume values in a closed convex subset $Q$ of the $(2m+1)$-dimensional real space containing the point $(R_{(\alpha)}, e^T)$.

We define a family of estimators of $R_{(\alpha)}$ as:

$$\hat{R}_{mg(\alpha)} = G(\hat{R}_{(\alpha)}, u_1, u_2, \ldots, u_{2m}) = G(\hat{R}_{(\alpha)}, u^T) \qquad (3.2)$$

where $G(\hat{R}_{(\alpha)}, u^T)$ is a function of $(\hat{R}_{(\alpha)}, u^T)$ such that:

$$G(R_{(\alpha)}, e^T) = R_{(\alpha)} \quad \text{for all } \hat{R}_{(\alpha)}, \qquad (3.3)$$

and such that it satisfies the following conditions:

(1) The function $G(\hat{R}_{(\alpha)}, u^T)$ is continuous and bounded in $Q$.

(2) The first and second order partial derivatives of the function $G(\hat{R}_{(\alpha)}, u^T)$ exist and are continuous and bounded in $Q$.

To obtain the minimum mean squared error (MSE) of $\hat{R}_{mg(\alpha)}$, we expand the function $G(\hat{R}_{(\alpha)}, u^T)$ about the point $(R_{(\alpha)}, e^T)$ in a second order Taylor series. We obtain

$$\hat{R}_{mg(\alpha)} = G(R_{(\alpha)}, e^T) + (\hat{R}_{(\alpha)} - R_{(\alpha)})\left.\frac{\partial G(\cdot)}{\partial \hat{R}_{(\alpha)}}\right|_{(R_{(\alpha)}, e^T)} + (u - e)^T G_1(R_{(\alpha)}, e^T)$$
$$+ \frac{1}{2}\left[(\hat{R}_{(\alpha)} - R_{(\alpha)})^2\left.\frac{\partial^2 G(\cdot)}{\partial \hat{R}_{(\alpha)}^2}\right|_{(\hat{R}^*_{(\alpha)}, u^{*T})} + 2(\hat{R}_{(\alpha)} - R_{(\alpha)})(u - e)^T\left.\frac{\partial G_1(\cdot)}{\partial \hat{R}_{(\alpha)}}\right|_{(\hat{R}^*_{(\alpha)}, u^{*T})} + (u - e)^T G_2(\hat{R}^*_{(\alpha)}, u^{*T})(u - e)\right] \qquad (3.4)$$

where $\hat{R}^*_{(\alpha)} = R_{(\alpha)} + \psi(\hat{R}_{(\alpha)} - R_{(\alpha)})$, $u^* = e + \psi(u - e)$, $0 < \psi < 1$; $G_1(\cdot)$ denotes the $2m$-element column vector of first partial derivatives of $G(\cdot)$ with respect to $u$, and $G_2(\cdot)$ denotes the $2m \times 2m$ matrix of second order partial derivatives of $G(\cdot)$ with respect to $u$. Substituting for $\bar{y}_0$, $\bar{y}_1$ and $u$ in terms of $\epsilon_0$, $\epsilon_1$ and $\epsilon$, and using (3.3), we have:

$$\hat{R}_{mg(\alpha)} = R_{(\alpha)} + R_{(\alpha)}\{(1+\epsilon_0)(1+\epsilon_1)^{-\alpha} - 1\}\left.\frac{\partial G(\cdot)}{\partial \hat{R}_{(\alpha)}}\right|_{(R_{(\alpha)}, e^T)} + \epsilon^T G_1(R_{(\alpha)}, e^T)$$
$$+ \frac{1}{2}\left[R_{(\alpha)}^2\{(1+\epsilon_0)(1+\epsilon_1)^{-\alpha} - 1\}^2\left.\frac{\partial^2 G(\cdot)}{\partial \hat{R}_{(\alpha)}^2}\right|_{(\hat{R}^*_{(\alpha)}, u^{*T})} + 2R_{(\alpha)}\{(1+\epsilon_0)(1+\epsilon_1)^{-\alpha} - 1\}\epsilon^T\left.\frac{\partial G_1(\cdot)}{\partial \hat{R}_{(\alpha)}}\right|_{(\hat{R}^*_{(\alpha)}, u^{*T})} + \epsilon^T G_2(\hat{R}^*_{(\alpha)}, u^{*T})\epsilon\right] \qquad (3.5)$$

Taking expectation in (3.5) and noting that the second partial derivatives are bounded, we have:

Theorem 3.1. $E(\hat{R}_{mg(\alpha)}) = R_{(\alpha)} + O(n^{-1})$.

From Theorem 3.1 it follows that the bias of the suggested family is of order $n^{-1}$ and hence its contribution to the MSE of $\hat{R}_{mg(\alpha)}$ will be of order $n^{-2}$.

We now prove the following theorem:

Theorem 3.2. Up to terms of order $n^{-1}$, the MSE of $\hat{R}_{mg(\alpha)}$ is minimized for

$$G_1(R_{(\alpha)}, e^T) = -R_{(\alpha)} D^{-1} d \qquad (3.6)$$

and the minimum MSE is given by

$$\min MSE(\hat{R}_{mg(\alpha)}) = MSE(\hat{R}_{(\alpha)}) - \lambda R_{(\alpha)}^2\, d^T D^{-1} d, \qquad (3.7)$$

where

$$MSE(\hat{R}_{(\alpha)}) = \lambda R_{(\alpha)}^2\,[C_{y_0}^2 + \alpha\{\alpha C_{y_1}^2 - 2\rho_{y_0 y_1}C_{y_0}C_{y_1}\}] \qquad (3.8)$$

is the MSE of $\hat{R}_{(\alpha)}$ to the first degree of approximation.

Proof. From (3.5) we have, up to terms of order $n^{-1}$,

$$MSE(\hat{R}_{mg(\alpha)}) = E[\hat{R}_{mg(\alpha)} - R_{(\alpha)}]^2 = MSE(\hat{R}_{(\alpha)}) + \lambda[2R_{(\alpha)}\, d^T G_1(R_{(\alpha)}, e^T) + (G_1(R_{(\alpha)}, e^T))^T D\,(G_1(R_{(\alpha)}, e^T))] \qquad (3.9)$$

which is minimized for

$$G_1(R_{(\alpha)}, e^T) = -R_{(\alpha)} D^{-1} d. \qquad (3.10)$$

Substitution of (3.10) in (3.9) yields the minimum MSE of $\hat{R}_{mg(\alpha)}$ as given in (3.7). Hence the theorem.

Theorem 3.3. Up to terms of order $n^{-1}$,

$$MSE(\hat{R}_{mg(\alpha)}) \geq MSE(\hat{R}_{(\alpha)}) - \lambda R_{(\alpha)}^2\, d^T D^{-1} d,$$

with equality holding if

$$G_1(R_{(\alpha)}, e^T) = -R_{(\alpha)} D^{-1} d.$$

Any parametric function $G(\hat{R}_{(\alpha)}, u^T)$ satisfying conditions (1) and (2) can generate an asymptotically acceptable estimator. The family of estimators is large. Some examples are:

(1) $\hat{R}_{mg(1)} = \hat{R}_{(\alpha)} \exp(\gamma^T \log u)$
(2) $\hat{R}_{mg(2)} = \hat{R}_{(\alpha)}\,(1 + \gamma^T(u - e))$
(3) $\hat{R}_{mg(3)} = \hat{R}_{(\alpha)} \exp(\gamma^T(u - e))$
(4) $\hat{R}_{mg(4)} = \hat{R}_{(\alpha)} + \gamma^T(u - e)$
(5) $\hat{R}_{mg(5)} = \hat{R}_{(\alpha)} \sum_{i=1}^{2m} w_i \exp(\gamma_i \log u_i)$, with weights $w_i$ summing to unity
(6) $\hat{R}_{mg(6)} = \hat{R}_{(\alpha)} \prod_{i=1}^{2m} u_i^{\gamma_i}$
(7) $\hat{R}_{mg(7)} = \hat{R}_{(\alpha)}^2 / \{\hat{R}_{(\alpha)} + \gamma^T(u - e)\}$

where $\gamma^T = (\gamma_1, \gamma_2, \ldots, \gamma_{2m})$ is a vector of $2m$ constants. The optimum values of these constants, which minimize the MSE of $\hat{R}_{mg(\alpha)}$, are obtained from condition (3.6). The MSE of any estimator of the family (3.2) is obtained from (3.9). From (3.7), the minimum MSE is not larger than the MSE of the conventional estimator $\hat{R}_{(\alpha)}$, since $d^T D^{-1} d \geq 0$.
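As a hedged numerical illustration (ours; all inputs below are invented), the sketch evaluates (3.8) and the minimum MSE (3.7) for the ratio case, and also forms the optimum constants $\gamma = -R_{(\alpha)} D^{-1} d$ for the linear member $\hat{R}_{mg(4)}$.

```python
import numpy as np

def mse_conventional(lam, R_alpha, alpha, Cy0, Cy1, rho01):
    """MSE(R_hat_(alpha)) from (3.8), first degree of approximation."""
    return lam * R_alpha**2 * (Cy0**2 + alpha * (alpha * Cy1**2 - 2 * rho01 * Cy0 * Cy1))

def min_mse_family(lam, R_alpha, alpha, Cy0, Cy1, rho01, d, D):
    """Minimum MSE of R_hat_mg(alpha) from (3.7): MSE(R_hat) - lam * R^2 * d' D^-1 d."""
    gain = lam * R_alpha**2 * (d @ np.linalg.solve(D, d))
    return mse_conventional(lam, R_alpha, alpha, Cy0, Cy1, rho01) - gain

# Illustrative (made-up) quantities for m = 1 auxiliary variable, so 2m = 2.
n, N = 200, 5000
lam = 1.0 / n - 1.0 / N
R_alpha, alpha = 1.8, 1            # ratio case
Cy0, Cy1, rho01 = 0.30, 0.25, 0.6
d = np.array([0.05, 0.02])         # (a_1, delta_1)
D = np.array([[0.09, 0.03],
              [0.03, 0.20]])       # [[C_x1^2, f_11], [f_11, theta_11]]

print(mse_conventional(lam, R_alpha, alpha, Cy0, Cy1, rho01))
print(min_mse_family(lam, R_alpha, alpha, Cy0, Cy1, rho01, d, D))

# Optimum constants for the linear member R_mg(4) = R_hat + gamma'(u - e):
gamma_opt = -R_alpha * np.linalg.solve(D, d)
print(gamma_opt)
```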

Remark 3.1. The suggested family of estimators $\hat{R}_{mg(\alpha)}$ reduces to:

(1) the Srivastava and Jhajj (1983) estimator of $\bar{Y}_0$: $\hat{R}_{s1} = G(\bar{y}_0, u^T)$, for $\alpha = 0$;
(2) the generalized version of the Upadhyaya and Singh (1985) estimator of the population ratio $R$: $\hat{R}_{s2} = G(\hat{R}, u^T)$, for $\alpha = 1$;
(3) the product estimator: $\hat{R}_{s3} = G(\hat{P}, u^T)$, for $\alpha = -1$.

4. ESTIMATORS BASED ON ESTIMATED OPTIMUM VALUES OF CONSTANTS

Following the same approach as adopted by Srivastava and Jhajj (1983), we suggest a family of estimators of $R_{(\alpha)}$ based on estimated optimum values of the constants:

$$\hat{R}^*_{mg(\alpha)} = G^*(\hat{R}_{(\alpha)}, u^T, \hat{\Lambda}^T) \qquad (4.1)$$

where

$$\hat{\Lambda} = \hat{D}^{-1}\hat{d} \qquad (4.2)$$

is a consistent estimator of $\Lambda = D^{-1}d$, with

$$\hat{D} = \begin{pmatrix}\hat{A} & \hat{F}\\ \hat{F}^T & \hat{\Theta}\end{pmatrix}, \qquad \hat{d}^T = (\hat{a}^T : \hat{\delta}^T) = (\hat{a}_1, \hat{a}_2, \ldots, \hat{a}_m, \hat{\delta}_1, \hat{\delta}_2, \ldots, \hat{\delta}_m),$$

$\hat{A} = [\hat{a}_{x_i x_k}]_{m\times m}$, $\hat{F} = [\hat{f}_{x_i x_k}]_{m\times m}$, $\hat{\Theta} = [\hat{\theta}_{x_i x_k}]_{m\times m}$, $\hat{a}_i = (\hat{a}_{y_0 x_i} - \alpha\hat{a}_{y_1 x_i})$, $\hat{\delta}_i = (\hat{\delta}_{y_0 x_i} - \alpha\hat{\delta}_{y_1 x_i})$, $\hat{a}_{y_0 x_i} = \hat{\rho}_{y_0 x_i}\hat{C}_{y_0}\hat{C}_{x_i}$, $\hat{a}_{y_1 x_i} = \hat{\rho}_{y_1 x_i}\hat{C}_{y_1}\hat{C}_{x_i}$, $i = 1, 2, \ldots, m$; $\hat{\rho}_{y_0 x_i} = s_{y_0 x_i}/(s_{y_0}s_{x_i})$, $\hat{\rho}_{y_1 x_i} = s_{y_1 x_i}/(s_{y_1}s_{x_i})$, $\hat{C}_{y_0} = s_{y_0}/\bar{y}_0$, $\hat{C}_{y_1} = s_{y_1}/\bar{y}_1$, $\hat{C}_{x_i} = s_{x_i}/\bar{x}_i$, with the sample variances and covariances $s_{y_0}^2$, $s_{y_1}^2$, $s_{x_i}^2$, $s_{y_0 x_i}$, $s_{y_1 x_i}$ as defined in Section 2,

$$\hat{\delta}_{y_0 x_i} = \frac{\sum_{j=1}^{n}(y_{0j}-\bar{y}_0)(x_{ij}-\bar{x}_i)^2}{(n-1)\,\bar{y}_0\, s_{x_i}^2}, \quad \hat{\delta}_{y_1 x_i} = \frac{\sum_{j=1}^{n}(y_{1j}-\bar{y}_1)(x_{ij}-\bar{x}_i)^2}{(n-1)\,\bar{y}_1\, s_{x_i}^2}, \quad i = 1, 2, \ldots, m,$$

$\hat{a}_{x_i x_k} = \hat{\rho}_{x_i x_k}\hat{C}_{x_i}\hat{C}_{x_k}$, $\hat{\rho}_{x_i x_k} = s_{x_i x_k}/(s_{x_i}s_{x_k})$, $s_{x_i x_k} = \frac{1}{n-1}\sum_{j=1}^{n}(x_{ij}-\bar{x}_i)(x_{kj}-\bar{x}_k)$,

$$\hat{f}_{x_i x_k} = \frac{\sum_{j=1}^{n}(x_{ij}-\bar{x}_i)(x_{kj}-\bar{x}_k)^2}{(n-1)\,\bar{x}_i\, s_{x_k}^2}, \quad \hat{\theta}_{x_i x_k} = \frac{\sum_{j=1}^{n}(x_{ij}-\bar{x}_i)^2(x_{kj}-\bar{x}_k)^2}{(n-1)\, s_{x_i}^2 s_{x_k}^2} - 1, \quad i, k = 1, 2, \ldots, m.$$

Here $G^*(\hat{R}_{(\alpha)}, u^T, \hat{\Lambda}^T)$ is a parametric function of $(\hat{R}_{(\alpha)}, u^T, \hat{\Lambda}^T)$ such that

$$G^*(R_{(\alpha)}, e^T, \Lambda^T) = R_{(\alpha)} \quad \text{for all } R_{(\alpha)}, \qquad (4.3)$$

which implies that

$$\left.\frac{\partial G^*(\cdot)}{\partial \hat{R}_{(\alpha)}}\right|_{(R_{(\alpha)}, e^T, \Lambda^T)} = 1, \qquad (4.4)$$

$$\left.\frac{\partial G^*(\cdot)}{\partial u}\right|_{(R_{(\alpha)}, e^T, \Lambda^T)} = -R_{(\alpha)}\Lambda, \qquad (4.5)$$

and

$$\left.\frac{\partial G^*(\cdot)}{\partial \hat{\Lambda}}\right|_{(R_{(\alpha)}, e^T, \Lambda^T)} = 0. \qquad (4.6)$$

Expanding the function $G^*(\hat{R}_{(\alpha)}, u^T, \hat{\Lambda}^T)$ about the point $(R_{(\alpha)}, e^T, \Lambda^T)$ in a Taylor series and using (4.3) to (4.6), we have

$$\hat{R}^*_{mg(\alpha)} = G^*(R_{(\alpha)}, e^T, \Lambda^T) + (\hat{R}_{(\alpha)} - R_{(\alpha)})\left.\frac{\partial G^*(\cdot)}{\partial \hat{R}_{(\alpha)}}\right| + (u - e)^T\left.\frac{\partial G^*(\cdot)}{\partial u}\right| + (\hat{\Lambda} - \Lambda)^T\left.\frac{\partial G^*(\cdot)}{\partial \hat{\Lambda}}\right| + \text{second order terms}$$
$$= R_{(\alpha)} + R_{(\alpha)}(\epsilon_0 - \alpha\epsilon_1) + \epsilon^T(-R_{(\alpha)}\Lambda) + \text{second order terms}, \qquad (4.7)$$

where all derivatives are evaluated at $(R_{(\alpha)}, e^T, \Lambda^T)$. Since $\hat{\Lambda}$ is a consistent estimator of $\Lambda$, the expectation of the second order terms in (4.7) will be $O(n^{-1})$ and hence

$$E(\hat{R}^*_{mg(\alpha)}) = R_{(\alpha)} + O(n^{-1}).$$

The mean squared error of $\hat{R}^*_{mg(\alpha)}$, up to terms of order $n^{-1}$, from (4.7) is

$$MSE(\hat{R}^*_{mg(\alpha)}) = E[\hat{R}^*_{mg(\alpha)} - R_{(\alpha)}]^2 = MSE(\hat{R}_{(\alpha)}) - \lambda R_{(\alpha)}^2\, d^T D^{-1} d,$$

which is the same as (3.7). We thus have proved the following theorem.

Theorem 4.1. If the optimum values of the constants in (3.6) are replaced by their consistent estimators and conditions (4.5) and (4.6) hold, the resulting estimator $\hat{R}^*_{mg(\alpha)}$ has the same mean squared error, up to terms of order $n^{-1}$, as that of the optimum $\hat{R}_{mg(\alpha)}$.
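A short sketch (ours, with simulated data and a single auxiliary variable) of the plug-in idea behind Theorem 4.1: estimate $\Lambda = D^{-1}d$ by $\hat{\Lambda} = \hat{D}^{-1}\hat{d}$ from sample moments and insert it into the linear member of the family for the ratio case $\alpha = 1$; every data value and parameter here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical population and SRSWOR sample; one auxiliary variable (2m = 2).
N, n = 5000, 200
x1_pop = rng.gamma(4.0, 10.0, size=N)
y0_pop = 30 + 1.2 * x1_pop + rng.normal(0, 8, N)
y1_pop = 20 + 0.9 * x1_pop + rng.normal(0, 6, N)
X1bar, S2x1 = x1_pop.mean(), x1_pop.var(ddof=1)          # known auxiliary parameters

idx = rng.choice(N, n, replace=False)
y0, y1, x1 = y0_pop[idx], y1_pop[idx], x1_pop[idx]
yb0, yb1, xb, sx2 = y0.mean(), y1.mean(), x1.mean(), x1.var(ddof=1)
xc = x1 - xb

# Sample analogues of a_1, delta_1, C^2_{x1}, f_{x1 x1}, theta_{x1 x1} (see (4.2)).
a1 = np.cov(y0, x1)[0, 1] / (yb0 * xb) - np.cov(y1, x1)[0, 1] / (yb1 * xb)
de1 = ((y0 - yb0) * xc**2).sum() / ((n - 1) * yb0 * sx2) \
    - ((y1 - yb1) * xc**2).sum() / ((n - 1) * yb1 * sx2)
f11 = (xc**3).sum() / ((n - 1) * xb * sx2)
t11 = (xc**4).sum() / ((n - 1) * sx2**2) - 1.0
d_hat = np.array([a1, de1])
D_hat = np.array([[sx2 / xb**2, f11], [f11, t11]])
Lambda_hat = np.linalg.solve(D_hat, d_hat)               # estimates D^{-1} d

# Estimated-optimum linear member for the ratio case (alpha = 1):
R_hat = yb0 / yb1
u = np.array([xb / X1bar, sx2 / S2x1])
R_mg_star = R_hat - (R_hat * Lambda_hat) @ (u - 1.0)
print(R_hat, R_mg_star, y0_pop.mean() / y1_pop.mean())
```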


5. RANDOM NON-RESPONSE AND SOME EXPECTED VALUES

If (z z0,1, 2,...,(n2)) denotes the number of sampling units on which in-formation could not be obtained due to random non-response, then the remain-ing (n z units in the sample can be treated as SRSWOR sample from U . Since ) we are considering the problem of unbiased estimation of variance of the estima-tors of general parameter R  , therefore, we are assuming that z should be less than (n  , that is, 01)    . We assume that if p denotes the probability z (n 2) of non-response among the (n 2) possible values of non-responses, then z has the following discrete distribution given by

2 2 ( ) ( ) 2 n z n z z n z P z C p q nq p       (5.1) where (1q p), 0,1, 2,...,(zn and 2) n 2 z C

denote the total number of ways of obtaining z non-responses out of the (n 2) total possible responses, for instance, see Singh and Joarder (1998).
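A quick numerical check (ours, not from the paper) that (5.1) defines a proper probability distribution, together with a way to simulate the non-response count $z$; the binomial coefficient comes from Python's math.comb and the parameter values are arbitrary.

```python
import math
import numpy as np

def pmf_z(n, p):
    """P(z) from (5.1): (n - z) * C(n-2, z) * p^z * q^(n-2-z) / (n*q + 2*p), z = 0..n-2."""
    q = 1.0 - p
    z = np.arange(n - 1)                        # z = 0, 1, ..., n-2
    binom = np.array([math.comb(n - 2, int(k)) for k in z])
    probs = (n - z) * binom * p**z * q**(n - 2 - z)
    return z, probs / (n * q + 2 * p)

n, p = 30, 0.15
z, probs = pmf_z(n, p)
print(probs.sum())                              # ~1.0, so (5.1) is a valid pmf

rng = np.random.default_rng(0)
draws = rng.choice(z, size=10000, p=probs)      # simulated random non-response counts
print(draws.mean(), (n - 2) * p)                # the mean of z is close to, but below, (n-2)p
```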

Let us define:

$$\epsilon_0^* = \frac{\bar{y}_0^*}{\bar{Y}_0} - 1, \quad \epsilon_1^* = \frac{\bar{y}_1^*}{\bar{Y}_1} - 1, \quad \epsilon_i^* = u_i^* - 1, \quad \epsilon_i = u_i - 1,$$

where

$$u_i^* = \frac{\bar{x}_i^*}{\bar{X}_i}, \quad u_{m+i}^* = \frac{s_{x_i}^{*2}}{S_{x_i}^2}, \qquad u_i = \frac{\bar{x}_i}{\bar{X}_i}, \quad u_{m+i} = \frac{s_{x_i}^2}{S_{x_i}^2}, \qquad i = 1, 2, \ldots, m,$$

$$\epsilon^T = (\epsilon_1, \epsilon_2, \ldots, \epsilon_{2m}), \qquad \epsilon^{*T} = (\epsilon_1^*, \epsilon_2^*, \ldots, \epsilon_{2m}^*),$$

where

$$s_{x_i}^{*2} = \frac{1}{n - z - 1}\sum_{j=1}^{n-z}(x_{ij} - \bar{x}_i^*)^2$$

is a conditionally unbiased estimator of $S_{x_i}^2$ $(i = 1, 2, \ldots, m)$ and

$$\bar{y}_0^* = \frac{1}{n-z}\sum_{j=1}^{n-z} y_{0j}, \quad \bar{y}_1^* = \frac{1}{n-z}\sum_{j=1}^{n-z} y_{1j}, \quad \bar{x}_i^* = \frac{1}{n-z}\sum_{j=1}^{n-z} x_{ij}, \quad i = 1, 2, \ldots, m.$$

Thus under the probability model (5.1) we have the following results:

$$E(\epsilon_0^*) = E(\epsilon_1^*) = 0, \quad E(\epsilon_i^*) = E(\epsilon_i) = 0 \ \text{for all} \ i = 1, 2, \ldots, 2m;$$

$$E(\epsilon_0^{*2}) = \lambda^* C_{y_0}^2, \quad E(\epsilon_1^{*2}) = \lambda^* C_{y_1}^2, \quad E(\epsilon_0^*\epsilon_1^*) = \lambda^* \rho_{y_0 y_1}C_{y_0}C_{y_1};$$

$$E(\epsilon_i^{*2}) = \begin{cases}\lambda^* C_{x_i}^2, & i = 1, 2, \ldots, m\\ \lambda^*\{\beta_2(x_i) - 1\}, & i = m+1, \ldots, 2m\end{cases}$$

$$E(\epsilon_0^*\epsilon_i^*) = \begin{cases}\lambda^* a_{y_0 x_i}, & i = 1, 2, \ldots, m\\ \lambda^* \delta_{y_0 x_i}, & i = m+1, \ldots, 2m\end{cases} \qquad E(\epsilon_1^*\epsilon_i^*) = \begin{cases}\lambda^* a_{y_1 x_i}, & i = 1, 2, \ldots, m\\ \lambda^* \delta_{y_1 x_i}, & i = m+1, \ldots, 2m\end{cases}$$

$$E(\epsilon_i^*\epsilon_k^*) = \begin{cases}\lambda^* a_{x_i x_k}, & i \neq k = 1, 2, \ldots, m\\ \lambda^* f_{x_i x_k}, & i = 1, \ldots, m,\ k = m+1, \ldots, 2m\\ \lambda^* \theta_{x_i x_k}, & i, k = m+1, \ldots, 2m\end{cases}$$

$$E\{(\epsilon_0^* - \alpha\epsilon_1^*)\epsilon_i^*\} = \begin{cases}\lambda^*(a_{y_0 x_i} - \alpha a_{y_1 x_i}) = \lambda^* a_i, & i = 1, 2, \ldots, m\\ \lambda^*(\delta_{y_0 x_i} - \alpha\delta_{y_1 x_i}) = \lambda^* \delta_i, & i = m+1, \ldots, 2m\end{cases}$$

$$E\{(\epsilon_0^* - \alpha\epsilon_1^*)\epsilon_i\} = \begin{cases}\lambda a_i, & i = 1, 2, \ldots, m\\ \lambda \delta_i, & i = m+1, \ldots, 2m\end{cases}$$

$$E(\epsilon_0^*\epsilon_i) = \begin{cases}\lambda a_{y_0 x_i}, & i = 1, 2, \ldots, m\\ \lambda \delta_{y_0 x_i}, & i = m+1, \ldots, 2m\end{cases} \qquad E(\epsilon_1^*\epsilon_i) = \begin{cases}\lambda a_{y_1 x_i}, & i = 1, 2, \ldots, m\\ \lambda \delta_{y_1 x_i}, & i = m+1, \ldots, 2m\end{cases}$$

$$E(\epsilon_i^2) = \begin{cases}\lambda C_{x_i}^2, & i = 1, 2, \ldots, m\\ \lambda\{\beta_2(x_i) - 1\}, & i = m+1, \ldots, 2m\end{cases} \qquad E(\epsilon_i\epsilon_k) = \begin{cases}\lambda f_{x_i x_k}, & i = 1, \ldots, m,\ k = m+1, \ldots, 2m\\ \lambda\theta_{x_i x_k}, & i, k = m+1, \ldots, 2m\end{cases}$$

and $E(\epsilon_i\epsilon_k) = \lambda a_{x_i x_k}$, $i \neq k = 1, 2, \ldots, m$, where

$$\lambda^* = \frac{1}{nq + 2p} - \frac{1}{N}.$$

Putting these results in matrix notation, we have:

$$E(\epsilon^*) = 0, \quad E(\epsilon^*\epsilon^{*T}) = \lambda^* D, \quad E\{(\epsilon_0^* - \alpha\epsilon_1^*)\epsilon^*\} = \lambda^* d, \quad E(\epsilon\epsilon^T) = \lambda D, \quad E\{(\epsilon_0^* - \alpha\epsilon_1^*)\epsilon\} = \lambda d.$$
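The factor $\lambda^*$ enters because, under model (5.1), $E\{1/(n-z)\} = 1/(nq + 2p)$; the short check below (ours, with arbitrary parameter values) verifies this numerically and contrasts $\lambda^*$ with $\lambda = 1/n - 1/N$.

```python
import math

def expected_inverse_responding(n, p):
    """E[1/(n - z)] under the pmf (5.1)."""
    q = 1.0 - p
    norm = n * q + 2 * p
    return sum((n - z) * math.comb(n - 2, z) * p**z * q**(n - 2 - z) / norm / (n - z)
               for z in range(n - 1))

n, p, N = 30, 0.15, 5000
print(expected_inverse_responding(n, p), 1.0 / (n * (1 - p) + 2 * p))   # identical
lam_star = 1.0 / (n * (1 - p) + 2 * p) - 1.0 / N
lam = 1.0 / n - 1.0 / N
print(lam_star, lam)    # lambda* exceeds lambda whenever p > 0
```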

It may be noted that if $p = 0$, that is, if there is no non-response, the above expected values coincide with the usual results. In the following sections we consider three different strategies.

6. SUGGESTED STRATEGY-I

We consider the situation when random non-response exists on the study variables $Y_0$, $Y_1$ and on the auxiliary variables $X_1, X_2, \ldots, X_m$. It is assumed that the population means $\bar{X}_i$ and variances $S_{x_i}^2$ $(i = 1, 2, \ldots, m)$ of the auxiliary variables $X_1, X_2, \ldots, X_m$ are known. Thus we define a family of estimators of $R_{(\alpha)}$ as:

$$d_1 = J(\hat{R}^*_{(\alpha)}, u^{*T}), \qquad (6.1)$$

where $\hat{R}^*_{(\alpha)} = \bar{y}_0^*\,\bar{y}_1^{*-\alpha}$ is the conventional estimator of $R_{(\alpha)}$ and $J(\hat{R}^*_{(\alpha)}, u^{*T})$ is a parametric function of $(\hat{R}^*_{(\alpha)}, u^{*T})$ such that

$$J(R_{(\alpha)}, e^T) = R_{(\alpha)} \quad \text{for all } \hat{R}^*_{(\alpha)}, \qquad (6.2)$$

which implies that

$$\left.\frac{\partial J(\cdot)}{\partial \hat{R}^*_{(\alpha)}}\right|_{(R_{(\alpha)}, e^T)} = 1, \qquad (6.3)$$

and satisfies certain conditions as given for $\hat{R}_{mg(\alpha)}$. To terms of order $n^{-1}$, it can easily be shown that:

$$E(d_1) = R_{(\alpha)} + O(n^{-1}) \qquad (6.4)$$

and

$$MSE(d_1) = MSE(\hat{R}^*_{(\alpha)}) + \lambda^*[2R_{(\alpha)}\, d^T J_1(R_{(\alpha)}, e^T) + (J_1(R_{(\alpha)}, e^T))^T D\,(J_1(R_{(\alpha)}, e^T))] \qquad (6.5)$$

where

$$J_1(R_{(\alpha)}, e^T) = \left.\frac{\partial J(\cdot)}{\partial u^*}\right|_{(R_{(\alpha)}, e^T)}$$

and

$$MSE(\hat{R}^*_{(\alpha)}) = \lambda^* R_{(\alpha)}^2\,[C_{y_0}^2 + \alpha\{\alpha C_{y_1}^2 - 2\rho_{y_0 y_1}C_{y_0}C_{y_1}\}] \qquad (6.6)$$

is the MSE of $\hat{R}^*_{(\alpha)}$ to terms of order $n^{-1}$. The $MSE(d_1)$ in (6.5) is minimized for

$$J_1(R_{(\alpha)}, e^T) = -R_{(\alpha)} D^{-1} d. \qquad (6.7)$$

Thus the resulting (minimum) MSE of $d_1$ is given by

$$\min MSE(d_1) = MSE(\hat{R}^*_{(\alpha)}) - \lambda^* R_{(\alpha)}^2\, d^T D^{-1} d, \qquad (6.8)$$

which clearly indicates that the proposed estimator is more efficient than the conventional estimator $\hat{R}^*_{(\alpha)}$. Thus we have proved the following theorem:

Theorem 6.1. To the first degree of approximation,

$$MSE(d_1) \geq MSE(\hat{R}^*_{(\alpha)}) - \lambda^* R_{(\alpha)}^2\, d^T D^{-1} d,$$

with equality holding if

$$J_1(R_{(\alpha)}, e^T) = -R_{(\alpha)} D^{-1} d.$$

The following estimators: (1) $d_1^{(1)} = \hat{R}^*_{(\alpha)}\exp(\gamma^T\log u^*)$, (2) $d_1^{(2)} = \hat{R}^*_{(\alpha)}(1 + \gamma^T(u^* - e))$, etc., may be identified as particular members of the suggested family $d_1$. The MSEs of these estimators can easily be obtained from (6.5). Proceeding in a similar manner as in Section 4, we state the following theorem.

Theorem 6.2. The family of estimators

$$d_1^* = J^*(\hat{R}^*_{(\alpha)}, u^{*T}, \hat{\Lambda}^{*T})$$

of $R_{(\alpha)}$, based on estimated optimum values of the constants, has MSE to the first degree of approximation equal to the minimum MSE of $d_1$, that is,

$$MSE(d_1^*) = \min MSE(d_1), \qquad (6.9)$$

where $\min MSE(d_1)$ is cited in (6.8), $J^*(\hat{R}^*_{(\alpha)}, u^{*T}, \hat{\Lambda}^{*T})$ is a function of $(\hat{R}^*_{(\alpha)}, u^{*T}, \hat{\Lambda}^{*T})$ such that

$$J^*(R_{(\alpha)}, e^T, \Lambda^T) = R_{(\alpha)} \quad \text{for all } \hat{R}^*_{(\alpha)},$$

which implies that

$$\left.\frac{\partial J^*(\cdot)}{\partial \hat{R}^*_{(\alpha)}}\right|_{(R_{(\alpha)}, e^T, \Lambda^T)} = 1, \qquad \left.\frac{\partial J^*(\cdot)}{\partial u^*}\right|_{(R_{(\alpha)}, e^T, \Lambda^T)} = -R_{(\alpha)}\Lambda, \qquad \left.\frac{\partial J^*(\cdot)}{\partial \hat{\Lambda}^*}\right|_{(R_{(\alpha)}, e^T, \Lambda^T)} = 0,$$

and $\hat{\Lambda}^* = \hat{D}^{*-1}\hat{d}^*$ is the estimator of $\Lambda = D^{-1}d$, with

$$\hat{D}^* = \begin{pmatrix}\hat{A}^* & \hat{F}^*\\ \hat{F}^{*T} & \hat{\Theta}^*\end{pmatrix}, \qquad \hat{d}^{*T} = (\hat{a}^{*T} : \hat{\delta}^{*T}) = (\hat{a}_1^*, \ldots, \hat{a}_m^*, \hat{\delta}_1^*, \ldots, \hat{\delta}_m^*),$$

$\hat{A}^* = [\hat{a}^*_{x_i x_k}]_{m\times m}$, $\hat{F}^* = [\hat{f}^*_{x_i x_k}]_{m\times m}$, $\hat{\Theta}^* = [\hat{\theta}^*_{x_i x_k}]_{m\times m}$, $\hat{a}_i^* = \hat{a}^*_{y_0 x_i} - \alpha\hat{a}^*_{y_1 x_i}$, $\hat{\delta}_i^* = \hat{\delta}^*_{y_0 x_i} - \alpha\hat{\delta}^*_{y_1 x_i}$, $\hat{a}^*_{y_0 x_i} = \hat{\rho}^*_{y_0 x_i}\hat{C}^*_{y_0}\hat{C}^*_{x_i}$, $\hat{a}^*_{y_1 x_i} = \hat{\rho}^*_{y_1 x_i}\hat{C}^*_{y_1}\hat{C}^*_{x_i}$, $\hat{a}^*_{x_i x_k} = \hat{\rho}^*_{x_i x_k}\hat{C}^*_{x_i}\hat{C}^*_{x_k}$, $\hat{C}^*_{y_0} = s^*_{y_0}/\bar{y}_0^*$, $\hat{C}^*_{y_1} = s^*_{y_1}/\bar{y}_1^*$, $\hat{C}^*_{x_i} = s^*_{x_i}/\bar{x}_i^*$, $\hat{\rho}^*_{y_0 x_i} = s^*_{y_0 x_i}/(s^*_{y_0}s^*_{x_i})$, $\hat{\rho}^*_{y_1 x_i} = s^*_{y_1 x_i}/(s^*_{y_1}s^*_{x_i})$, $\hat{\rho}^*_{x_i x_k} = s^*_{x_i x_k}/(s^*_{x_i}s^*_{x_k})$, where the starred sample moments are computed over the $(n - z)$ responding units:

$$s^*_{y_0 y_1} = \frac{1}{n-z-1}\sum_{j=1}^{n-z}(y_{0j}-\bar{y}_0^*)(y_{1j}-\bar{y}_1^*), \quad s^{*2}_{y_0} = \frac{1}{n-z-1}\sum_{j=1}^{n-z}(y_{0j}-\bar{y}_0^*)^2, \quad s^{*2}_{y_1} = \frac{1}{n-z-1}\sum_{j=1}^{n-z}(y_{1j}-\bar{y}_1^*)^2,$$

$$s^*_{y_0 x_i} = \frac{1}{n-z-1}\sum_{j=1}^{n-z}(y_{0j}-\bar{y}_0^*)(x_{ij}-\bar{x}_i^*), \quad s^*_{y_1 x_i} = \frac{1}{n-z-1}\sum_{j=1}^{n-z}(y_{1j}-\bar{y}_1^*)(x_{ij}-\bar{x}_i^*), \quad s^*_{x_i x_k} = \frac{1}{n-z-1}\sum_{j=1}^{n-z}(x_{ij}-\bar{x}_i^*)(x_{kj}-\bar{x}_k^*),$$

$$\hat{\delta}^*_{y_0 x_i} = \frac{\sum_{j=1}^{n-z}(y_{0j}-\bar{y}_0^*)(x_{ij}-\bar{x}_i^*)^2}{(n-z-1)\,\bar{y}_0^*\, s^{*2}_{x_i}}, \quad \hat{\delta}^*_{y_1 x_i} = \frac{\sum_{j=1}^{n-z}(y_{1j}-\bar{y}_1^*)(x_{ij}-\bar{x}_i^*)^2}{(n-z-1)\,\bar{y}_1^*\, s^{*2}_{x_i}},$$

$$\hat{f}^*_{x_i x_k} = \frac{\sum_{j=1}^{n-z}(x_{ij}-\bar{x}_i^*)(x_{kj}-\bar{x}_k^*)^2}{(n-z-1)\,\bar{x}_i^*\, s^{*2}_{x_k}}, \quad \hat{\theta}^*_{x_i x_k} = \frac{\sum_{j=1}^{n-z}(x_{ij}-\bar{x}_i^*)^2(x_{kj}-\bar{x}_k^*)^2}{(n-z-1)\, s^{*2}_{x_i} s^{*2}_{x_k}} - 1.$$

6.1. Special cases

For the numerical comparison of the various estimators, we consider the situation where only two auxiliary variables are available $(m = 2)$, so that $d^T = (a_1, a_2, \delta_1, \delta_2)$ and

$$D = \begin{pmatrix} C_{x_1}^2 & a_{x_1 x_2} & f_{x_1 x_1} & f_{x_1 x_2}\\ a_{x_1 x_2} & C_{x_2}^2 & f_{x_2 x_1} & f_{x_2 x_2}\\ f_{x_1 x_1} & f_{x_2 x_1} & \theta_{x_1 x_1} & \theta_{x_1 x_2}\\ f_{x_1 x_2} & f_{x_2 x_2} & \theta_{x_1 x_2} & \theta_{x_2 x_2} \end{pmatrix}.$$

Let $d_1$ be the estimator making use of the population means $\bar{X}_1$ and $\bar{X}_2$ as well as the known population variances $S_{x_1}^2$ and $S_{x_2}^2$. The percent relative efficiency (RE) of $d_1$ with respect to the control estimator $\hat{R}^*_{(\alpha)}$ is

$$RE(d_1, \hat{R}^*_{(\alpha)}) = \frac{A}{A - d^T D^{-1} d} \times 100\%. \qquad (6.1.1)$$

Let $d_1^{(1)}$ be the estimator making use of the population means $\bar{X}_1$ and $\bar{X}_2$ only; then

$$RE(d_1^{(1)}, \hat{R}^*_{(\alpha)}) = \frac{A}{A - a^T A_{(2)}^{-1} a} \times 100\%, \qquad a^T = (a_1, a_2), \quad A_{(2)} = \begin{pmatrix} C_{x_1}^2 & a_{x_1 x_2}\\ a_{x_1 x_2} & C_{x_2}^2\end{pmatrix}. \qquad (6.1.2)$$

Let $d_1^{(2)}$ be the estimator making use of the population variances $S_{x_1}^2$ and $S_{x_2}^2$ only; then

$$RE(d_1^{(2)}, \hat{R}^*_{(\alpha)}) = \frac{A}{A - \delta^T \Theta_{(2)}^{-1}\delta} \times 100\%, \qquad \delta^T = (\delta_1, \delta_2), \quad \Theta_{(2)} = \begin{pmatrix}\theta_{x_1 x_1} & \theta_{x_1 x_2}\\ \theta_{x_1 x_2} & \theta_{x_2 x_2}\end{pmatrix}. \qquad (6.1.3)$$

Let $d_1^{(3)}$ be the estimator making use of the population mean $\bar{X}_1$ and the population variance $S_{x_1}^2$ of only the first auxiliary variable; then

$$RE(d_1^{(3)}, \hat{R}^*_{(\alpha)}) = \frac{A}{A + (a_1 D_1 + \delta_1 D_2)/D_{(1)}} \times 100\%. \qquad (6.1.4)$$

Let $d_1^{(4)}$ be the estimator making use of only the population mean $\bar{X}_1$ of one auxiliary variable; then

$$RE(d_1^{(4)}, \hat{R}^*_{(\alpha)}) = \frac{A}{A - a_1^2/C_{x_1}^2} \times 100\%. \qquad (6.1.5)$$

Let $d_1^{(5)}$ be the estimator making use of only the population variance $S_{x_1}^2$ of the first auxiliary variable; then

$$RE(d_1^{(5)}, \hat{R}^*_{(\alpha)}) = \frac{A}{A - \delta_1^2/\lambda_2^*(x_1)} \times 100\%. \qquad (6.1.6)$$

Let $d_1^{(6)}$ be the estimator making use of the population mean $\bar{X}_2$ and the population variance $S_{x_2}^2$ of only the second auxiliary variable; then

$$RE(d_1^{(6)}, \hat{R}^*_{(\alpha)}) = \frac{A}{A + (a_2 D_3 + \delta_2 D_4)/D_{(2)}} \times 100\%. \qquad (6.1.7)$$

Let $d_1^{(7)}$ be the estimator making use of only the population mean $\bar{X}_2$ of the second auxiliary variable; then

$$RE(d_1^{(7)}, \hat{R}^*_{(\alpha)}) = \frac{A}{A - a_2^2/C_{x_2}^2} \times 100\%. \qquad (6.1.8)$$

Let $d_1^{(8)}$ be the estimator making use of only the population variance $S_{x_2}^2$ of the second auxiliary variable; then

$$RE(d_1^{(8)}, \hat{R}^*_{(\alpha)}) = \frac{A}{A - \delta_2^2/\lambda_2^*(x_2)} \times 100\%, \qquad (6.1.9)$$

where

$$A = C_{y_0}^2 + \alpha(\alpha C_{y_1}^2 - 2\rho_{y_0 y_1}C_{y_0}C_{y_1}), \quad a_1 = a_{y_0 x_1} - \alpha a_{y_1 x_1}, \quad a_2 = a_{y_0 x_2} - \alpha a_{y_1 x_2}, \quad \delta_1 = \delta_{y_0 x_1} - \alpha\delta_{y_1 x_1}, \quad \delta_2 = \delta_{y_0 x_2} - \alpha\delta_{y_1 x_2},$$

$$a_{y_0 x_i} = \rho_{y_0 x_i}C_{y_0}C_{x_i}, \quad a_{y_1 x_i} = \rho_{y_1 x_i}C_{y_1}C_{x_i}, \quad a_{x_1 x_2} = \rho_{x_1 x_2}C_{x_1}C_{x_2}, \quad C_{x_i}^2 = S_{x_i}^2/\bar{X}_i^2, \quad i = 1, 2,$$

$$\delta_{y_0 x_i} = \frac{\sum_{j=1}^{N}(Y_{0j}-\bar{Y}_0)(X_{ij}-\bar{X}_i)^2}{(N-1)\bar{Y}_0 S_{x_i}^2}, \quad \delta_{y_1 x_i} = \frac{\sum_{j=1}^{N}(Y_{1j}-\bar{Y}_1)(X_{ij}-\bar{X}_i)^2}{(N-1)\bar{Y}_1 S_{x_i}^2}, \quad i = 1, 2,$$

$$f_{x_i x_k} = \frac{\sum_{j=1}^{N}(X_{ij}-\bar{X}_i)(X_{kj}-\bar{X}_k)^2}{(N-1)\bar{X}_i S_{x_k}^2}, \quad \theta_{x_i x_k} = \frac{\sum_{j=1}^{N}(X_{ij}-\bar{X}_i)^2(X_{kj}-\bar{X}_k)^2}{(N-1)S_{x_i}^2 S_{x_k}^2} - 1, \quad i, k = 1, 2,$$

$$\lambda_2^*(x_1) = \beta_2(x_1) - 1 = \theta_{x_1 x_1}, \quad \lambda_2^*(x_2) = \beta_2(x_2) - 1 = \theta_{x_2 x_2},$$

$$D_1 = \delta_1 f_{x_1 x_1} - a_1\lambda_2^*(x_1), \quad D_2 = a_1 f_{x_1 x_1} - \delta_1 C_{x_1}^2, \quad D_{(1)} = C_{x_1}^2\lambda_2^*(x_1) - f_{x_1 x_1}^2,$$

$$D_3 = \delta_2 f_{x_2 x_2} - a_2\lambda_2^*(x_2), \quad D_4 = a_2 f_{x_2 x_2} - \delta_2 C_{x_2}^2, \quad D_{(2)} = C_{x_2}^2\lambda_2^*(x_2) - f_{x_2 x_2}^2.$$
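A small computational sketch (ours, with invented inputs) of the comparisons above: each relative efficiency is obtained by inverting the sub-block of $D$ corresponding to the auxiliary information actually used, which is exactly what the closed-form expressions (6.1.1)-(6.1.9) expand element-wise.

```python
import numpy as np

def re_percent(A, d, D, idx):
    """RE vs. R_hat* for the estimator using the components of d and D indexed by idx."""
    if not idx:
        return 100.0
    d_s = d[idx]
    D_s = D[np.ix_(idx, idx)]
    return 100.0 * A / (A - d_s @ np.linalg.solve(D_s, d_s))

# Illustrative (made-up) population quantities for m = 2; component order: (a1, a2, delta1, delta2).
A = 0.055                                  # C_y0^2 + alpha*(alpha*C_y1^2 - 2*rho*C_y0*C_y1)
d = np.array([0.040, 0.030, 0.015, 0.010])
D = np.array([[0.090, 0.040, 0.030, 0.010],
              [0.040, 0.080, 0.012, 0.025],
              [0.030, 0.012, 0.210, 0.060],
              [0.010, 0.025, 0.060, 0.180]])

print(re_percent(A, d, D, [0, 1, 2, 3]))   # d1    : both means and both variances (6.1.1)
print(re_percent(A, d, D, [0, 1]))         # d1^(1): means only                    (6.1.2)
print(re_percent(A, d, D, [2, 3]))         # d1^(2): variances only                (6.1.3)
print(re_percent(A, d, D, [0, 2]))         # d1^(3): mean and variance of X1       (6.1.4)
print(re_percent(A, d, D, [0]))            # d1^(4): mean of X1 only               (6.1.5)
```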

7. SUGGESTED STRATEGY-II

Here we consider the situation when information on the study variables $y_0$, $y_1$ could not be obtained for $z$ units, while information on the $m\,(>1)$ auxiliary variables $X_1, X_2, \ldots, X_m$ is available for all the sample units. The population means $\bar{X}_i$ and variances $S_{x_i}^2$ $(i = 1, 2, \ldots, m)$ of the auxiliary variables $X_1, X_2, \ldots, X_m$ are known. Thus we define a family of estimators of $R_{(\alpha)}$ as

$$d_2 = S(\hat{R}^*_{(\alpha)}, u^T), \qquad (7.1)$$

where $S(\hat{R}^*_{(\alpha)}, u^T)$ is a parametric function of $(\hat{R}^*_{(\alpha)}, u^T)$ such that

$$S(R_{(\alpha)}, e^T) = R_{(\alpha)} \quad \text{for all } \hat{R}^*_{(\alpha)},$$

which implies that

$$\left.\frac{\partial S(\cdot)}{\partial \hat{R}^*_{(\alpha)}}\right|_{(R_{(\alpha)}, e^T)} = 1, \qquad (7.2)$$

and satisfies certain conditions as given for $\hat{R}_{mg(\alpha)}$. To terms of order $n^{-1}$, it can easily be shown that:

$$E(d_2) = R_{(\alpha)} + O(n^{-1}) \qquad (7.3)$$

and

$$MSE(d_2) = MSE(\hat{R}^*_{(\alpha)}) + \lambda[2R_{(\alpha)}\, d^T S_1(R_{(\alpha)}, e^T) + (S_1(R_{(\alpha)}, e^T))^T D\,(S_1(R_{(\alpha)}, e^T))] \qquad (7.4)$$

where

$$S_1(R_{(\alpha)}, e^T) = \left.\frac{\partial S(\cdot)}{\partial u}\right|_{(R_{(\alpha)}, e^T)}.$$

The $MSE(d_2)$ in (7.4) is minimized for

$$S_1(R_{(\alpha)}, e^T) = -R_{(\alpha)} D^{-1} d, \qquad (7.5)$$

and hence the resulting (minimum) MSE of $d_2$ is given by

$$\min MSE(d_2) = MSE(\hat{R}^*_{(\alpha)}) - \lambda R_{(\alpha)}^2\, d^T D^{-1} d = \min MSE(d_1) + (\lambda^* - \lambda) R_{(\alpha)}^2\, d^T D^{-1} d, \qquad (7.6)$$

where $\min MSE(d_1)$ is given in (6.8). Now we state the following theorem:

Theorem 7.1. To the first degree of approximation,

$$MSE(d_2) \geq MSE(\hat{R}^*_{(\alpha)}) - \lambda R_{(\alpha)}^2\, d^T D^{-1} d,$$

with equality holding if

$$S_1(R_{(\alpha)}, e^T) = -R_{(\alpha)} D^{-1} d.$$

The following estimators: 1. $d_2^{(1)} = \hat{R}^*_{(\alpha)}\exp(\gamma^T\log u)$, and 2. $d_2^{(2)} = \hat{R}^*_{(\alpha)}(1 + \gamma^T(u - e))$, etc., are members of the suggested class of estimators $d_2$. Proceeding in a similar manner as before, we state the following theorem.
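To clarify how Strategy II differs from Strategy I in practice, here is a small sketch (ours): the study-variable means use only the $n - z$ responding units, while the $u$-statistics for the auxiliary variable use the full sample of $n$ units; all names, data, and the chosen constants are illustrative, and the non-response mechanism is a simple stand-in for model (5.1).

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical population, known auxiliary parameters, and an SRSWOR sample of size n.
N, n, p = 5000, 200, 0.15
x1_pop = rng.gamma(4.0, 10.0, size=N)
y0_pop = 30 + 1.2 * x1_pop + rng.normal(0, 8, N)
y1_pop = 20 + 0.9 * x1_pop + rng.normal(0, 6, N)
X1bar, S2x1 = x1_pop.mean(), x1_pop.var(ddof=1)

idx = rng.choice(N, n, replace=False)
y0, y1, x1 = y0_pop[idx], y1_pop[idx], x1_pop[idx]

# Random non-response on the study variables only: z of the n units do not report (y0, y1).
z = rng.binomial(n - 2, p)                       # simple stand-in for the model (5.1)
respond = rng.permutation(n) >= z                # boolean mask with n - z responding units

R_star = y0[respond].mean() / y1[respond].mean() # R_hat* from the n - z respondents (alpha = 1)

# Strategy II: u is computed from the FULL sample of n units (auxiliary data never missing).
u = np.array([x1.mean() / X1bar, x1.var(ddof=1) / S2x1])

# Strategy I would instead use u* computed from the responding units only.
u_star = np.array([x1[respond].mean() / X1bar, x1[respond].var(ddof=1) / S2x1])

gamma = np.array([-0.5, -0.1])                   # illustrative constants (optimum needs D^-1 d)
d2 = R_star * (1.0 + gamma @ (u - 1.0))          # a member of the Strategy II class (7.1)
d1 = R_star * (1.0 + gamma @ (u_star - 1.0))     # the corresponding Strategy I member (6.1)
print(R_star, d2, d1)
```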

Theorem 7.2. The family of estimators

$$d_2^* = S^*(\hat{R}^*_{(\alpha)}, u^T, \hat{\Lambda}_1^T)$$

of $R_{(\alpha)}$, based on estimated optimum values of the constants, has MSE to the first degree of approximation equal to the minimum MSE of $d_2$, that is,

$$MSE(d_2^*) = \min MSE(d_2), \qquad (7.7)$$

where $\min MSE(d_2)$ is cited in (7.6), $S^*(\hat{R}^*_{(\alpha)}, u^T, \hat{\Lambda}_1^T)$ is a function of $(\hat{R}^*_{(\alpha)}, u^T, \hat{\Lambda}_1^T)$ such that

$$S^*(R_{(\alpha)}, e^T, \Lambda^T) = R_{(\alpha)} \quad \text{for all } \hat{R}^*_{(\alpha)},$$

which implies that

$$\left.\frac{\partial S^*(\cdot)}{\partial \hat{R}^*_{(\alpha)}}\right|_{(R_{(\alpha)}, e^T, \Lambda^T)} = 1, \qquad \left.\frac{\partial S^*(\cdot)}{\partial u}\right|_{(R_{(\alpha)}, e^T, \Lambda^T)} = -R_{(\alpha)}\Lambda, \qquad \left.\frac{\partial S^*(\cdot)}{\partial \hat{\Lambda}_1}\right|_{(R_{(\alpha)}, e^T, \Lambda^T)} = 0,$$

and $\hat{\Lambda}_1 = \hat{D}^{-1}\hat{d}^*$ is the estimator of $D^{-1}d$, where $\hat{D}$ and $\hat{d}^*$ are the same as defined earlier.

7.1. Special cases

Let $d_2$ be the estimator making use of the population means $\bar{X}_1$ and $\bar{X}_2$ as well as the known population variances $S_{x_1}^2$ and $S_{x_2}^2$. The percent relative efficiency of $d_2$ with respect to the control estimator $\hat{R}^*_{(\alpha)}$ is

$$RE(d_2, \hat{R}^*_{(\alpha)}) = \left\{1 - \frac{\lambda}{\lambda^*}\cdot\frac{d^T D^{-1} d}{A}\right\}^{-1} \times 100\%. \qquad (7.1.1)$$

Let $d_2^{(1)}$ be the estimator making use of the population means $\bar{X}_1$ and $\bar{X}_2$ only; then the percent relative efficiency (PRE) of $d_2^{(1)}$ with respect to the control estimator $\hat{R}^*_{(\alpha)}$ is

$$RE(d_2^{(1)}, \hat{R}^*_{(\alpha)}) = \left\{1 - \frac{\lambda}{\lambda^*}\cdot\frac{a^T A_{(2)}^{-1} a}{A}\right\}^{-1} \times 100\%. \qquad (7.1.2)$$

Let $d_2^{(2)}$ be the estimator making use of the population variances $S_{x_1}^2$ and $S_{x_2}^2$ only; then

$$RE(d_2^{(2)}, \hat{R}^*_{(\alpha)}) = \left\{1 - \frac{\lambda}{\lambda^*}\cdot\frac{\delta^T \Theta_{(2)}^{-1}\delta}{A}\right\}^{-1} \times 100\%. \qquad (7.1.3)$$

Let $d_2^{(3)}$ be the estimator making use of the population mean $\bar{X}_1$ and the population variance $S_{x_1}^2$ of only one auxiliary variable; then

$$RE(d_2^{(3)}, \hat{R}^*_{(\alpha)}) = \left\{1 + \frac{\lambda}{\lambda^*}\cdot\frac{a_1 D_1 + \delta_1 D_2}{A\, D_{(1)}}\right\}^{-1} \times 100\%. \qquad (7.1.4)$$

Let $d_2^{(4)}$ be the estimator making use of only the population mean $\bar{X}_1$ of one auxiliary variable; then

$$RE(d_2^{(4)}, \hat{R}^*_{(\alpha)}) = \left\{1 - \frac{\lambda}{\lambda^*}\cdot\frac{a_1^2}{A\, C_{x_1}^2}\right\}^{-1} \times 100\%. \qquad (7.1.5)$$

Let $d_2^{(5)}$ be the estimator making use of only the population variance $S_{x_1}^2$ of the first auxiliary variable; then

$$RE(d_2^{(5)}, \hat{R}^*_{(\alpha)}) = \left\{1 - \frac{\lambda}{\lambda^*}\cdot\frac{\delta_1^2}{A\,\lambda_2^*(x_1)}\right\}^{-1} \times 100\%. \qquad (7.1.6)$$

Let $d_2^{(6)}$ be the estimator making use of the population mean $\bar{X}_2$ and the population variance $S_{x_2}^2$ of only the second auxiliary variable; then

$$RE(d_2^{(6)}, \hat{R}^*_{(\alpha)}) = \left\{1 + \frac{\lambda}{\lambda^*}\cdot\frac{a_2 D_3 + \delta_2 D_4}{A\, D_{(2)}}\right\}^{-1} \times 100\%. \qquad (7.1.7)$$

Let $d_2^{(7)}$ be the estimator making use of only the population mean $\bar{X}_2$ of the second auxiliary variable; then

$$RE(d_2^{(7)}, \hat{R}^*_{(\alpha)}) = \left\{1 - \frac{\lambda}{\lambda^*}\cdot\frac{a_2^2}{A\, C_{x_2}^2}\right\}^{-1} \times 100\%. \qquad (7.1.8)$$

Let $d_2^{(8)}$ be the estimator making use of only the population variance $S_{x_2}^2$ of the second auxiliary variable; then

$$RE(d_2^{(8)}, \hat{R}^*_{(\alpha)}) = \left\{1 - \frac{\lambda}{\lambda^*}\cdot\frac{\delta_2^2}{A\,\lambda_2^*(x_2)}\right\}^{-1} \times 100\%, \qquad (7.1.9)$$

with $A$, $a_1$, $a_2$, $\delta_1$, $\delta_2$, $A_{(2)}$, $\Theta_{(2)}$, $D_1$, $D_2$, $D_3$, $D_4$, $D_{(1)}$, $D_{(2)}$, $\lambda_2^*(x_1)$ and $\lambda_2^*(x_2)$ as defined in Section 6.1.

8. SUGGESTED STRATEGY-III

We again consider the situation when information on the study variables $y_0$, $y_1$ could not be obtained for $z$ units, while information on the auxiliary variables $X_1, X_2, \ldots, X_m$ is available for all the sample units. The difference is that the population means $\bar{X}_i$ and variances $S_{x_i}^2$ $(i = 1, 2, \ldots, m)$ of the auxiliary variables $X_1, X_2, \ldots, X_m$ are not known. With this background, we define a family of estimators for $R_{(\alpha)}$ as:

$$d_3 = V(\hat{R}^*_{(\alpha)}, w^T), \qquad (8.1)$$

where $w_i = \bar{x}_i^*/\bar{x}_i$, $i = 1, 2, \ldots, m$; $w_{m+i} = s_{x_i}^{*2}/s_{x_i}^2$, $i = 1, 2, \ldots, m$; $w^T = (w_1, w_2, \ldots, w_{2m})$; and $V(\hat{R}^*_{(\alpha)}, w^T)$ is a parametric function of $(\hat{R}^*_{(\alpha)}, w^T)$ such that

$$V(R_{(\alpha)}, e^T) = R_{(\alpha)} \quad \text{for all } \hat{R}^*_{(\alpha)}, \qquad (8.2)$$

which implies that

$$\left.\frac{\partial V(\cdot)}{\partial \hat{R}^*_{(\alpha)}}\right|_{(R_{(\alpha)}, e^T)} = 1, \qquad (8.3)$$

and satisfies certain conditions. To terms of order $n^{-1}$, it can easily be shown that

$$E(d_3) = R_{(\alpha)} + O(n^{-1}) \qquad (8.4)$$

and

$$MSE(d_3) = MSE(\hat{R}^*_{(\alpha)}) + (\lambda^* - \lambda)[2R_{(\alpha)}\, d^T V_1(R_{(\alpha)}, e^T) + (V_1(R_{(\alpha)}, e^T))^T D\,(V_1(R_{(\alpha)}, e^T))] \qquad (8.5)$$

where

$$V_1(R_{(\alpha)}, e^T) = \left.\frac{\partial V(\cdot)}{\partial w}\right|_{(R_{(\alpha)}, e^T)}. \qquad (8.6)$$

The $MSE(d_3)$ in (8.5) is minimized for

$$V_1(R_{(\alpha)}, e^T) = -R_{(\alpha)} D^{-1} d, \qquad (8.7)$$

and hence the resulting (minimum) MSE of $d_3$ is given by

$$\min MSE(d_3) = MSE(\hat{R}^*_{(\alpha)}) - (\lambda^* - \lambda) R_{(\alpha)}^2\, d^T D^{-1} d, \qquad (8.8)$$

which shows that the proposed class of estimators is more efficient than $\hat{R}^*_{(\alpha)}$. Now we state the following theorem:

Theorem 8.1. To the first degree of approximation,

$$MSE(d_3) \geq MSE(\hat{R}^*_{(\alpha)}) - (\lambda^* - \lambda) R_{(\alpha)}^2\, d^T D^{-1} d,$$

with equality holding if

$$V_1(R_{(\alpha)}, e^T) = -R_{(\alpha)} D^{-1} d.$$

The following estimators: 1. $d_3^{(1)} = \hat{R}^*_{(\alpha)}\exp(\gamma^T\log w)$, and 2. $d_3^{(2)} = \hat{R}^*_{(\alpha)}(1 + \gamma^T(w - e))$, etc., may be identified as members of the suggested class of estimators $d_3$.
