NOTE ON CONDITIONAL MODE ESTIMATION FOR FUNCTIONAL DEPENDENT DATA
S. Dabo-Niang, A. Laksaci
1. INTRODUCTION
Let us introduce $n$ pairs of random variables $(X_i, Y_i)_{i=1,\dots,n}$ that we suppose drawn from the pair $(X, Y)$, valued in $\mathcal{F}\times\mathbb{R}$, where $\mathcal{F}$ is a semi-metric space. Let $d$ denote the semi-metric. Assume that there exists a regular version of the conditional probability distribution of $Y$ given $X$, which is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}$ and has a bounded density. Assume that, for a given $x$, the conditional density $f^x$ of $Y$ given $X = x$ is unimodal; the conditional mode, denoted by $\theta(x)$, is then defined by

$$\theta(x) = \arg\max_{y\in\mathbb{R}} f^x(y).$$
In the remainder of the paper, $x$ is fixed in $\mathcal{F}$ and $N_x$ denotes a neighborhood of $x$. We define the kernel estimator $\hat f^x$ of $f^x$ as follows:

$$\hat f^x(y) = \frac{h_H^{-1}\sum_{i=1}^n K\big(h_K^{-1}\,d(x,X_i)\big)\,H\big(h_H^{-1}(y-Y_i)\big)}{\sum_{i=1}^n K\big(h_K^{-1}\,d(x,X_i)\big)}, \qquad y\in\mathbb{R},$$

where the numerator is set equal to zero when the denominator is zero. $K$ and $H$ are kernel functions, and $h_K = h_{K,n}$ (resp. $h_H = h_{H,n}$) is a sequence of positive real numbers. Note that a similar estimate was already introduced, in the special case where $X$ is a real random variable, by many authors, Rosenblatt (1969) and Youndjé (1993) among others. For the functional case, see Ferraty et al. (2006). A natural and usual estimator of $\theta(x)$, denoted by $\hat\theta(x)$, is given by:
$$\hat\theta(x) = \arg\max_{y\in\mathbb{R}} \hat f^x(y). \qquad (1)$$
Note that this estimate $\hat\theta(x)$ is not necessarily unique, so the remainder of the paper concerns any value $\hat\theta(x)$ satisfying (1).
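To make the estimator concrete, here is a self-contained numerical sketch of (1); it is our own illustration, not part of the paper. The box kernel for $K$, the Gaussian kernel for $H$, the discrete $L^2$-type semi-metric, the bandwidths and the toy sine curves are all illustrative assumptions.

```python
import numpy as np

def cond_mode(x, X, Y, d, h_K, h_H, grid):
    """Kernel conditional mode estimate: argmax over `grid` of f_hat^x(y),
    with f_hat^x the ratio of kernel sums defined in the text."""
    # K: box kernel supported on (0, 1), a simple choice satisfying (H5)
    u = np.array([d(x, Xi) for Xi in X]) / h_K
    K = ((u > 0) & (u < 1)).astype(float)
    if K.sum() == 0.0:
        return np.nan  # numerator set to zero when the denominator vanishes
    # H: Gaussian kernel (smooth, integrates to one), evaluated on a y-grid
    t = (grid[:, None] - Y[None, :]) / h_H
    H = np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)
    f_hat = (H @ K) / (h_H * K.sum())  # f_hat^x evaluated on the grid
    return grid[np.argmax(f_hat)]

# Toy functional data: curves X_i(t) = a_i sin(2*pi*t), responses linked to a_i.
rng = np.random.default_rng(0)
T = np.linspace(0.0, 1.0, 50)
amps = rng.uniform(0.0, 1.0, 300)
X = [a * np.sin(2.0 * np.pi * T) for a in amps]
Y = 0.5 + 0.3 * (amps - 0.5) + 0.05 * rng.standard_normal(300)
d = lambda f, g: np.sqrt(np.mean((f - g) ** 2))  # discrete L2-type semi-metric
x0 = 0.5 * np.sin(2.0 * np.pi * T)  # target curve; conditional mode near 0.5
theta_hat = cond_mode(x0, X, Y, d, h_K=0.15, h_H=0.1,
                      grid=np.linspace(-1.0, 2.0, 301))
print(theta_hat)  # close to 0.5
```

The grid search over candidate values of $y$ reflects the fact that, in practice, the argmax in (1) is computed on a finite grid; any maximizer is acceptable, consistently with the non-uniqueness remark above.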
The main goal of this paper is to study the nonparametric estimate $\hat\theta(x)$ of $\theta(x)$ when the explanatory variable $X$ is valued in the space $\mathcal{F}$, of possibly infinite dimension, and when the observations $(X_i, Y_i)_{i=1,\dots,n}$ are strongly mixing.
The interest of the nonparametric estimation of the conditional mode for strong mixing processes comes mainly from the fact that mode regression provides better predictions than classical mean regression in some situations (see, for instance, Collomb et al. (1987), Quintela del Rio and Vieu (1997), Berlinet et al. (1998) or Louani and Ould-Saïd (1999) for the multivariate case).
Currently, the progress of computing tools permits the recovery of increasingly bulky data. These large data sets are essentially obtained by real-time monitoring, and computers can manage such databases. The object of statistical study can then be curves (consecutive discrete recordings are aggregated and viewed as sampled values of a random curve), not numbers or vectors. Functional data analysis (FDA) (see Bosq (2000), Ferraty and Vieu (2006), Ramsay and Silverman (2002)) can help to analyze such high-dimensional data sets. The statistical problems involved in the modelling of functional random variables have received increasing interest in the recent literature (see, for example, Dabo-Niang (2002), Dabo-Niang and Rhomari (2003, 2009), Masry (2005) for the nonparametric context). In this functional area, the first results concerning conditional mode estimation were obtained by Ferraty et al. (2006), who established the almost complete convergence of the kernel estimator in the i.i.d. case. This last result has been extended to the dependent case by Ferraty et al. (2005). Ezzahrioui and Ould-Saïd (2006, 2008) have studied the asymptotic normality of the kernel estimator of the conditional mode for both the i.i.d. and strong mixing cases. The monograph of Ferraty and Vieu (2006) presents an important collection of statistical tools for nonparametric prediction of functional variables. Recently, Dabo-Niang and Laksaci (2007) stated the convergence in $L^p$ norm of the conditional mode estimator in the independent case.
In this paper, we consider the case where the data are both dependent and of functional nature. We prove the $p$-integrated consistency of the estimator and give upper bounds for the estimation error. These results can be applied to predict time series, by cutting the past of the series into continuous paths.
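The time-series application mentioned above (cutting the past of the series into continuous paths) can be sketched as follows. This is our own illustration: the function name, the non-overlapping cutting scheme and the one-step prediction horizon are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np

def series_to_pairs(z, tau, horizon=1):
    """Cut a scalar series into functional covariates (paths of length `tau`)
    and scalar responses observed `horizon` steps after each path ends.
    Non-overlapping cuts keep the dependence between pairs weak."""
    X, Y = [], []
    for start in range(0, len(z) - tau - horizon + 1, tau):
        X.append(z[start:start + tau])          # one continuous path
        Y.append(z[start + tau + horizon - 1])  # future value to predict
    return np.array(X), np.array(Y)

z = np.arange(20.0)  # toy series z_1, ..., z_20
X, Y = series_to_pairs(z, tau=5)
print(X.shape, Y)  # (3, 5) [ 5. 10. 15.]
```

The pairs $(X_i, Y_i)$ produced this way can then be fed to a conditional mode estimator, with the last observed path playing the role of the fixed curve $x$.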
The paper is organized as follows: the following Section is devoted to fixing the notation and hypotheses. We state our results in Section 3. All proofs are given in the Appendix.
2. NOTATION AND ASSUMPTIONS
We begin by recalling the definition of the strong mixing property. For this we introduce the following notation. Let $\mathcal{F}_i^k(Z)$ denote the $\sigma$-algebra generated by $\{Z_j,\ i\le j\le k\}$.
Definition: Let $\{Z_i,\ i=1,2,\dots\}$ denote a sequence of random variables. Given a positive integer $n$, set

$$\alpha(n) = \sup\big\{|P(A\cap B) - P(A)P(B)| :\ A\in\mathcal{F}_1^k(Z),\ B\in\mathcal{F}_{k+n}^{\infty}(Z),\ k\in\mathbb{N}^*\big\}.$$

The sequence is said to be $\alpha$-mixing (strong mixing) if the mixing coefficient $\alpha(n)\to 0$ as $n\to\infty$.
There exist many processes fulfilling the strong mixing property. We quote here the usual ARMA processes, which are geometrically strongly mixing, i.e., there exist $\rho\in(0,1)$ and $a>0$ such that, for any $n\ge 1$, $\alpha(n)\le a\rho^n$ (see, e.g., Jones (1978)). The threshold models, the EXPAR models (see Ozaki (1979)), the simple ARCH models (see Engle (1982)), their GARCH extension (see Bollerslev (1986)) and the bilinear Markovian models are geometrically strongly mixing under some general ergodicity conditions.
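As a concrete illustration (ours, not the paper's), the following short simulation generates a stationary AR(1) process with Gaussian innovations, one of the geometrically strongly mixing examples quoted above; its empirical autocorrelations decay like $\rho^k$, mirroring the geometric decay of the mixing coefficient.

```python
import numpy as np

# AR(1): Z_t = rho * Z_{t-1} + eps_t with |rho| < 1 and Gaussian innovations;
# such a process is geometrically strongly mixing (alpha(n) <= a * rho_0^n).
rng = np.random.default_rng(1)
rho, n = 0.6, 10_000
z = np.empty(n)
z[0] = rng.standard_normal() / np.sqrt(1.0 - rho**2)  # start in stationarity
for t in range(1, n):
    z[t] = rho * z[t - 1] + rng.standard_normal()

# Empirical autocorrelations at lags 1, 2, 5 decay roughly like rho**k.
for k in (1, 2, 5):
    print(k, np.corrcoef(z[:-k], z[k:])[0, 1])
```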
Throughout the paper, when no confusion is possible, we will denote by $C$ or $C'$ some strictly positive generic constants and by $g^{(j)}$ the $j$th order derivative of the function $g$, and we suppose that the response variable $Y$ is $p$-integrable. Our nonparametric model will be quite general in the sense that we will just need the following assumptions:
(H1) $P(X\in B(x,r)) =: \varphi_x(r) > 0$, where $B(x,r) = \{x'\in\mathcal{F} :\ d(x,x') < r\}$.

(H2) For all $k\ge 2$, we suppose that:

i) for all $i_1,\dots,i_k$ with $1\le i_1<\dots<i_k\le n$, the conditional density of $(Y_{i_1},\dots,Y_{i_k})$ given $(X_{i_1},\dots,X_{i_k})$ exists and is uniformly bounded;

ii) there exists $v>0$ such that

$$\max_{1\le i_1<\dots<i_k\le n} P\big(d(X_{i_j},x)\le r,\ 1\le j\le k\big) = O\Big(\max\big((\varphi_x(r))^{k/v},\ \varphi_x(r)\big)\Big).$$
(H3) $(X_i, Y_i)_{i\in\mathbb{N}}$ is an $\alpha$-mixing sequence whose mixing coefficient satisfies

$$\exists\, a > \max_{2\le k\le p}\max\Big(\frac{k(v+1)}{v},\ k-1\Big)\ \text{such that}\quad \forall n\in\mathbb{N},\quad \alpha(n)\le C\,n^{-a}.$$
(H4) $f^x$ is twice continuously differentiable with respect to $y$ on $\mathbb{R}$ and such that: $\forall (y_1,y_2)\in\mathbb{R}^2$, $\forall (x_1,x_2)\in N_x\times N_x$,

$$\big|f^{x_1(j)}(y_1) - f^{x_2(j)}(y_2)\big| \le C_x\big(d^{\,b_1}(x_1,x_2) + |y_1-y_2|^{b_2}\big), \qquad b_1>0,\ b_2>0,$$

for $j=0,1,2$, with the convention $f^{x(0)} = f^x$.
(H5) $K$ is a function with support $(0,1)$ such that $0 < C \le K(t) \le C' < \infty$.

(H6) $H$ is of class $C^2$, of compact support, and satisfies

$$\int H(t)\,dt = 1, \quad\text{and}\quad \forall (y_1,y_2)\in\mathbb{R}^2,\ \big|H^{(j)}(y_1) - H^{(j)}(y_2)\big| \le C|y_1-y_2|,\ j=0,1,2,\ \text{with } H^{(0)} = H.$$
(H7) There exist $\gamma_1, \gamma_2 > 0$ such that

$$C\,n^{\frac{3-a}{a+1}+\gamma_1} \le h_H\,\varphi_x(h_K) \quad\text{and}\quad \lim_{n\to\infty} n^{\gamma_2} h_H < \infty, \qquad\text{with } a > (5+\sqrt{17})/2.$$
The concentration property (H1) is less restrictive than the fractal condition introduced by Gasser et al. (1998) and is known to hold for several continuous time processes (see, for instance, Bogachev (1999) for a Gaussian measure, Li and Shao (2001) for a general Gaussian process and Ferraty et al. (2006) for more discussion). In order to establish the same convergence rate as in the i.i.d. case (see Dabo-Niang and Laksaci (2007)), we reinforce the mixing structure by introducing (H2) and (H3). Note that we could establish the convergence results without these mixing assumptions; however, the convergence rate expression would be perturbed: it would contain the covariance term of the observations. Assumption (H4) is the regularity condition which characterizes the functional space of our model and is needed to evaluate the bias term in our asymptotic developments. Assumptions (H6) and (H7) are standard technical conditions in nonparametric estimation. They are imposed for the sake of simplicity and brevity of the proofs.
3. MAIN RESULTS
We establish the $p$-mean rate of convergence of the estimate $\hat\theta(x)$ to $\theta(x)$.
Theorem 1: Under hypotheses (H1)-(H7), and if $n h_H^3\,\varphi_x(h_K)\to\infty$, we have, for all $p\in[1,\infty)$,

$$\big\|\hat\theta(x) - \theta(x)\big\|_p = O\big(h_K^{b_1} + h_H^{b_2}\big) + O\Big(\big(n h_H^3\,\varphi_x(h_K)\big)^{-1/2}\Big),$$

where $\|\cdot\|_p = (E|\cdot|^p)^{1/p}$.
Proof of Theorem 1: For all $i=1,\dots,n$, let $K_i = K(h_K^{-1}\,d(x,X_i))$, $H_i(y) = H(h_H^{-1}(y-Y_i))$, and

$$\hat f_N^x(y) = \frac{1}{n h_H\,EK_1}\sum_{i=1}^n K_i H_i(y), \qquad \hat f_D^x = \frac{1}{n\,EK_1}\sum_{i=1}^n K_i.$$
We consider a Taylor expansion of the function $\hat f_N^{x(1)}$ in the vicinity of $\hat\theta(x)$, in particular at the point $\theta(x)$:

$$\hat f_N^{x(1)}(\hat\theta(x)) = \hat f_N^{x(1)}(\theta(x)) + \big(\hat\theta(x)-\theta(x)\big)\,\hat f_N^{x(2)}(\theta^*(x)),$$

where $\theta^*(x)$ is between $\hat\theta(x)$ and $\theta(x)$. By the unimodality of $f^x$ and assumption (H6), we have $f^{x(1)}(\theta(x)) = \hat f_N^{x(1)}(\hat\theta(x)) = 0$ and $f^{x(2)}(\theta(x)) < 0$. Thus, if $\hat f_D^x \ne 0$, we have

$$\hat\theta(x) - \theta(x) = -\frac{1}{\hat f_N^{x(2)}(\theta^*(x))}\Big(\hat f_N^{x(1)}(\theta(x)) - f^{x(1)}(\theta(x))\Big). \qquad (2)$$
It is shown in Theorem 11.15 of Ferraty and Vieu (2006, p. 179) that, under (H1)-(H7),

$$\hat\theta(x) - \theta(x) \to 0 \quad\text{almost completely (a.co.).}$$

So, by combining this consistency and the result of Lemma 11.17 in Ferraty and Vieu (2006, p. 181), together with the fact that $\theta^*(x)$ lies between $\hat\theta(x)$ and $\theta(x)$, it follows that

$$\hat f_N^{x(2)}(\theta^*(x)) - f^{x(2)}(\theta(x)) \to 0 \quad\text{a.co.}$$

Since $|f^{x(2)}(\theta(x))| > 0$, we can write

$$\exists\, C>0 \ \text{such that}\quad \frac{1}{\big|\hat f_N^{x(2)}(\theta^*(x))\big|} \le C \quad\text{a.s.}$$

It follows that

$$\big\|\hat\theta(x)-\theta(x)\big\|_p \le C\,\big\|\hat f_N^{x(1)}(\theta(x)) - f^{x(1)}(\theta(x))\big\|_p + C'\big(P(\hat f_D^x = 0)\big)^{1/p}.$$
Theorem 1 is then a consequence of the following lemmas.
Lemma 1: Under hypotheses (H1)-(H3) and (H5)-(H6), we have

$$\big\|\hat f_N^{x(1)}(\theta(x)) - E[\hat f_N^{x(1)}(\theta(x))]\big\|_p = O\Big(\big(n h_H^3\,\varphi_x(h_K)\big)^{-1/2}\Big). \qquad (3)$$
Lemma 2: If the hypotheses (H1) and (H4)-(H6) are satisfied, we get

$$E\big[\hat f_N^{x(1)}(\theta(x))\big] - f^{x(1)}(\theta(x)) = O\big(h_K^{b_1} + h_H^{b_2}\big).$$
Lemma 3: Under the conditions of Lemma 1, we get

$$\big(P(\hat f_D^x = 0)\big)^{1/p} = O\Big(\big(n\,\varphi_x(h_K)\big)^{-1/2}\Big).$$
Labo. EQUIPPE, Maison de Recherche, SOPHIE DABO-NIANG
Univ. Lille3,
BP60149, 59653 Villeneuve d’Ascq cedex Lille, France
Laboratoire de Mathématiques ALI LAKSACI
Univ. Djillali Liabès,
BP 89, 22000 Sidi Bel Abbès, Algérie
ACKNOWLEDGEMENTS
The authors would like to thank the anonymous referee, whose remarks permitted us to substantially improve the quality of the paper and to clarify some points.
APPENDIX
Proof of Lemma 1: Let

$$\Delta_i = K_i\,H^{(1)}\big(h_H^{-1}(\theta(x)-Y_i)\big) - E\big[K_i\,H^{(1)}\big(h_H^{-1}(\theta(x)-Y_i)\big)\big].$$

Then

$$\big\|\hat f_N^{x(1)}(\theta(x)) - E[\hat f_N^{x(1)}(\theta(x))]\big\|_p = \frac{1}{n h_H^2\,EK_1}\Big\|\sum_{i=1}^n \Delta_i\Big\|_p.$$

Because of (H1) and (H5) we can write $EK_1 = O(\varphi_x(h_K))$. So, it remains to show that

$$\Big\|\sum_{i=1}^n \Delta_i\Big\|_p = O\Big(\sqrt{n h_H\,\varphi_x(h_K)}\Big).$$
The evaluation of this quantity is based on ideas similar to those used by Yokoyama (1980). More precisely, we prove the case where $p = 2m$ (for all $m\in\mathbb{N}^*$) and we use the Hölder inequality for lower values of $p$. Indeed, if $p = 2m$, we have

$$E\Big|\sum_{i=1}^n \Delta_i\Big|^{2m} \le \sum_{k=1}^{2m}\ \sum_{\substack{p_1+\dots+p_k=2m\\ p_j\ge 1}} C^{2m}_{p_1,\dots,p_k}\sum_{1\le i_1<\dots<i_k\le n} E\big|\Delta_{i_1}^{p_1}\cdots\Delta_{i_k}^{p_k}\big|, \qquad (4)$$

where $C^{2m}_{p_1,\dots,p_k} = \frac{(2m)!}{p_1!\,p_2!\cdots p_k!}$. The stationarity of the couples $(X_i, Y_i)_{i=1,\dots,n}$ allows us to write

$$\sum_{1\le i_1<\dots<i_k\le n} E\big|\Delta_{i_1}^{p_1}\cdots\Delta_{i_k}^{p_k}\big| \le n\sum_{1\le e_1\le\dots\le e_{k-1}\le n} E\big|\Delta_1^{p_1}\,\Delta_{e_1}^{p_2}\cdots\Delta_{e_{k-1}}^{p_k}\big|.$$
Now, we show by induction on $m$ that the last term above is bounded, for all $k$ and $p_j$, $1\le j\le k$, as follows:

$$(A(m)):\quad \forall\, k\le 2m,\ \forall\,(p_1,\dots,p_k)\in(\mathbb{N}^*)^k\ \text{with}\ q=\sum_{j=1}^k p_j\le 2m:\quad n\sum_{1\le e_1\le\dots\le e_{k-1}\le n} E\big|\Delta_1^{p_1}\cdots\Delta_{e_{k-1}}^{p_k}\big| \le C\big(n h_H\,\varphi_x(h_K)\big)^{q/2}.$$
Firstly, in the case where $m=1$, we have $q=1$ or $q=2$. If $q=1$, then $(A(1))$ is necessarily satisfied since the $\Delta_i$ are centered: indeed, $q=1$ implies that $k=1$, and then $n\,E[\Delta_1] = 0 \le C\big(n h_H\,\varphi_x(h_K)\big)^{1/2}$. However, if $q=2$, we distinguish two cases: $k=2$ or $k=1$. For the last case, we have under (H1) that

$$\sum_{i=1}^n E[\Delta_i^2] \le C\, n h_H\,\varphi_x(h_K),$$

while for the first case we use the same ideas as those used in the proof of Lemma 11.13 of Ferraty and Vieu (2006, p. 175) and obtain

$$\sum_{i\ne j}\big|E[\Delta_i\Delta_j]\big| \le C\, n h_H\,\varphi_x(h_K). \qquad (5)$$

Thus, $(A(m))$ is true for $m=1$. The next step is to show that $(A(m))$ is hereditary. In other words, we suppose that $(A(m))$ is true and we prove $(A(m+1))$. Obviously, we can suppose that $q\ge 2$ (because the upper bound to prove is always true if $q=1$). Thus, for all $k\le 2(m+1)$ and $(p_1,\dots,p_k)\in(\mathbb{N}^*)^k$ with $\sum_{j=1}^k p_j = q \le 2(m+1)$, we have, with the convention $e_0 = 1$,

$$n\sum_{1\le e_1\le\dots\le e_{k-1}\le n} E\big|\Delta_1^{p_1}\cdots\Delta_{e_{k-1}}^{p_k}\big| \le n\sum_{s=1}^{k-1}\ \sum_{\substack{1\le e_1\le\dots\le e_{k-1}\le n\\ e_s-e_{s-1}=\max_j(e_j-e_{j-1})}} E\big|\Delta_1^{p_1}\cdots\Delta_{e_{s-1}}^{p_s}\,\Delta_{e_s}^{p_{s+1}}\cdots\Delta_{e_{k-1}}^{p_k}\big|. \qquad (6)$$
We split the above sum according to whether the maximal gap exceeds a sequence $u_n$ (to be chosen later):

$$\sum_{s=1}^{k-1}\ \sum_{\substack{1\le e_1\le\dots\le e_{k-1}\le n\\ e_s-e_{s-1}\le u_n}} E\big|\Delta_1^{p_1}\cdots\Delta_{e_{k-1}}^{p_k}\big| \ +\ \sum_{s=1}^{k-1}\ \sum_{\substack{1\le e_1\le\dots\le e_{k-1}\le n\\ e_s-e_{s-1}> u_n}} E\big|\Delta_1^{p_1}\cdots\Delta_{e_{k-1}}^{p_k}\big|. \qquad (7)$$
The first term of (7) is such that

$$A := n\sum_{s=1}^{k-1}\ \sum_{\substack{1\le e_1\le\dots\le e_{k-1}\le n\\ e_s-e_{s-1}\le u_n}} E\big|\Delta_1^{p_1}\cdots\Delta_{e_{k-1}}^{p_k}\big| \le C\, n\, u_n^{k-1}\big(h_H\,\varphi_x(h_K)\big)^{k/v}, \qquad (8)$$

where the joint moments are bounded by means of (H2). For the second term, the two blocks $\Delta_1^{p_1}\cdots\Delta_{e_{s-1}}^{p_s}$ and $\Delta_{e_s}^{p_{s+1}}\cdots\Delta_{e_{k-1}}^{p_k}$ are separated by a gap larger than $u_n$, and we write

$$B := n\sum_{s=1}^{k-1}\ \sum_{\substack{1\le e_1\le\dots\le e_{k-1}\le n\\ e_s-e_{s-1}> u_n}} E\big|\Delta_1^{p_1}\cdots\Delta_{e_{k-1}}^{p_k}\big| \le B_1 + B_2, \qquad (9)$$

where

$$B_1 := n\sum_{s=1}^{k-1}\ \sum_{\substack{1\le e_1\le\dots\le e_{k-1}\le n\\ e_s-e_{s-1}> u_n}} E\big|\Delta_1^{p_1}\cdots\Delta_{e_{s-1}}^{p_s}\big|\ E\big|\Delta_{e_s}^{p_{s+1}}\cdots\Delta_{e_{k-1}}^{p_k}\big|,$$

and $B_2$ collects the corresponding covariance terms, which are controlled by the mixing coefficient $\alpha$.
Let us first consider the term $B_1$ and observe that, if one of the two blocks reduces to a single factor $\Delta^{p_j}$ with $p_j = 1$, the corresponding expectation is null (because the $\Delta_i$ are centered). So, in the case where none of the expectations is null, we have $\sum_{j\le s} p_j \ge 2$, which implies $\sum_{j>s} p_j \le 2m$; moreover, we can prove in the same way that $\sum_{j\le s} p_j \le 2m$. Notice also that either $B_1 = 0$ or $q\ge 4$ (since $\sum_{j\le s} p_j \ge 2$ and $\sum_{j>s} p_j \ge 2$). So, we apply $(A(m))$ to each block and get

$$B_1 \le C\, n^{-1}\big(n h_H\,\varphi_x(h_K)\big)^{q/2}. \qquad (10)$$

For the $B_2$ term, we use a covariance inequality for $\alpha$-mixing sequences (see Yokoyama (1980)): for some $\eta > 0$,

$$B_2 \le C\, n\sum_{i\ge u_n} i^{\,k-2}\,\alpha(i)^{1-\eta} \le C\, n\, u_n^{\,k-1-a(1-\eta)}, \qquad (11)$$

the last bound coming from (H3).
It is easy to see that the upper bounds (8) and (11) are respectively increasing and decreasing with respect to $u_n$. This gives the idea to choose $u_n$ in order to balance these two terms. We choose

$$u_n = \big(h_H\,\varphi_x(h_K)\big)^{-\frac{k}{v\,a(1-\eta)}}.$$

Then

$$A + B_2 \le C\, n\,\big(h_H\,\varphi_x(h_K)\big)^{\frac{k}{v}\left(1-\frac{k-1}{a(1-\eta)}\right)}.$$

Furthermore, since $a$ satisfies the lower bound of (H3), we can choose some small $\eta>0$ such that $\frac{k}{v}\big(1-\frac{k-1}{a(1-\eta)}\big)\ge 1$. Consequently,

$$A + B_2 \le C\, n h_H\,\varphi_x(h_K). \qquad (12)$$
We easily deduce from (6), (7), (10) and (12) the following upper bound:

$$n\sum_{1\le e_1\le\dots\le e_{k-1}\le n} E\big|\Delta_1^{p_1}\cdots\Delta_{e_{k-1}}^{p_k}\big| \le C\, n h_H\,\varphi_x(h_K) + C'\, n^{-1}\big(n h_H\,\varphi_x(h_K)\big)^{q/2} \le C''\big(n h_H\,\varphi_x(h_K)\big)^{q/2}. \qquad (13)$$

The last line comes from the fact that $q\ge 2$ and $n h_H\,\varphi_x(h_K)\to\infty$. So, we conclude that $(A(m))$ is hereditary, hence it is true for all $m\in\mathbb{N}^*$. We directly conclude from (13) and (4) that

$$E\Big|\sum_{i=1}^n\Delta_i\Big|^{2m} \le C\big(n h_H\,\varphi_x(h_K)\big)^{m}.$$

Finally, it suffices to use the Hölder inequality to show that, for all $p\le 2m$,

$$\Big\|\sum_{i=1}^n\Delta_i\Big\|_p \le \Big\|\sum_{i=1}^n\Delta_i\Big\|_{2m} \le C\big(n h_H\,\varphi_x(h_K)\big)^{1/2}.$$

This yields the proof of this lemma. ∎
Proof of Lemma 2: It is easy to see that

$$E\big[\hat f_N^{x(1)}(\theta(x))\big] - f^{x(1)}(\theta(x)) = \frac{1}{h_H^2\,EK_1}\,E\Big[K_1\,E\big[H^{(1)}\big(h_H^{-1}(\theta(x)-Y_1)\big)\,\big|\,X_1\big]\Big] - f^{x(1)}(\theta(x)).$$

By an integration by parts and the change of variables $t = h_H^{-1}(\theta(x)-y)$, we have

$$E\big[\hat f_N^{x(1)}(\theta(x))\big] - f^{x(1)}(\theta(x)) = \frac{1}{EK_1}\,E\Big[K_1\int H(t)\Big(f^{X_1(1)}\big(\theta(x)-h_H t\big) - f^{x(1)}(\theta(x))\Big)\,dt\Big].$$

Hypotheses (H4) and (H6) allow us to get the desired result. ∎
Proof of Lemma 3: It is clear that, for all $\varepsilon\le 1$, we have

$$P\big(\hat f_D^x = 0\big) \le P\big(\hat f_D^x \le 1-\varepsilon\big) \le P\big(\big|\hat f_D^x - E[\hat f_D^x]\big| \ge \varepsilon\big),$$

since $E[\hat f_D^x] = 1$. Markov's inequality allows us to get, for any $p>0$,

$$P\big(\big|\hat f_D^x - E[\hat f_D^x]\big| \ge \varepsilon\big) \le \frac{E\big|\hat f_D^x - E[\hat f_D^x]\big|^p}{\varepsilon^p}.$$

So

$$\big(P(\hat f_D^x = 0)\big)^{1/p} = O\Big(\big\|\hat f_D^x - E[\hat f_D^x]\big\|_p\Big).$$

The computation of $\big\|\hat f_D^x - E[\hat f_D^x]\big\|_p$ can be done by following the same arguments as those invoked to get (3). This yields the proof. ∎
REFERENCES
A. BERLINET, A. GANNOUN and E. MATZNER-LOBER, (1998), Normalité asymptotique d'estimateurs convergents du mode conditionnel. "Canad. J. Statist.", 26, pp. 365-380.
V. I. BOGACHEV, (1999), Gaussian measures. Math surveys and monographs, 62, Amer. Math.
Soc.
T. BOLLERSLEV, (1986), Generalized autoregressive conditional heteroskedasticity. "J. Econom.", 31, pp. 307-327.
D. BOSQ, (2000), Linear Processes in Function Spaces: Theory and applications. Lecture Notes in
Statistics, 149, Springer.
G. COLLOMB, W. HÄRDLE and S. HASSANI, (1987), A note on prediction via conditional mode estimation. "J. Statist. Plann. Inference", 15, pp. 227-236.
S. DABO-NIANG, (2002), Estimation de la densité dans un espace de dimension infinie: application aux
diffusions. “C. R., Math., Acad. Sci. Paris”, 334, pp. 213-216.
S. DABO-NIANG and N. RHOMARI, (2003), Estimation non paramétrique de la régression avec variable explicative dans un espace métrique. "C. R., Math., Acad. Sci. Paris", 336, pp. 75-80.
S. DABO-NIANG and N. RHOMARI, (2009), Kernel regression estimate in a Banach space. "J. Statist. Plann. Inference", 139, pp. 1421-1434.
S. DABO-NIANG and A. LAKSACI, (2007), Estimation non paramétrique du mode conditionnel pour variable explicative fonctionnelle. "C. R., Math., Acad. Sci. Paris", 344, pp. 49-52.
R. F. ENGLE, (1982), Autoregressive conditional heteroskedasticity with estimates of the variance of U.K.
inflation. “Econometrica”, 50, pp. 987-1007.
M. EZZAHRIOUI and E. OULD-SAÏD, (2008), Asymptotic normality of nonparametric estimators of the conditional mode function for functional data. "J. Nonparametr. Stat.", 20, pp. 3-18.
M. EZZAHRIOUI and E. OULD-SAÏD, (2006), On the asymptotic properties of a nonparametric estimator of the conditional mode function for functional dependent data. LMPA, janvier 2006 (Preprint).
F. FERRATY, A. LAKSACI and PH. VIEU, (2005), Functional time series prediction via conditional mode.
F. FERRATY, A. LAKSACI and PH. VIEU, (2006), Estimating some characteristics of the conditional distribution in nonparametric functional models. "Stat. Inference Stoch. Process.", 9, pp. 47-76.
F. FERRATY and PH. VIEU, (2006), Nonparametric functional data analysis. Springer-Verlag, New York.
T. GASSER, P. HALL and B. PRESNELL, (1998), Nonparametric estimation of the mode of a distribution of random curves. "J. R. Statist. Soc., B", 60, pp. 681-691.
D. A. JONES, (1978), Nonlinear autoregressive processes. “Proc. Roy. Soc. London A”, 360, pp.
71-95.
W. V. LI and Q. M. SHAO, (2001), Gaussian processes: inequalities, small ball probabilities and applications. In Stochastic Processes: Theory and Methods, Ed. C. R. Rao and D. Shanbhag. Handbook of Statistics, 19, North-Holland, Amsterdam.
D. LOUANI and E. OULD-SAÏD, (1999), Asymptotic normality of kernel estimators of the conditional mode under strong mixing hypothesis. "J. Nonparam. Stat.", 11, pp. 413-442.
E. MASRY, (2005), Nonparametric regression estimation for dependent functional data: asymptotic normality. "Stochastic Process. Appl.", 115, pp. 155-177.
T. OZAKI, (1979), Nonlinear time series models for nonlinear random vibrations. Technical report.
Univ. of Manchester.
J. O. RAMSAY and B. W. SILVERMAN, (2002), Applied functional data analysis. Methods and case studies. Springer-Verlag, New York.
A. QUINTELA DEL RIO and PH. VIEU, (1997), A nonparametric conditional mode estimate. "J. Nonparam. Stat.", 8, pp. 253-266.
M. ROSENBLATT, (1969), Conditional probability density and regression estimators. Multivariate
Analysis II, Ed. P.R. Krishnaiah. Academic Press, New York and London.
E. YOUNDJÉ, (1993), Estimation non paramétrique de la densité conditionnelle par la méthode du noyau.
Thèse de Doctorat, Université de Rouen.
R. YOKOYAMA, (1980), Moment bounds for stationary mixing sequences. "Z. Wahrsch. verw. Gebiete", 52, pp. 45-57.
SUMMARY
Note on conditional mode estimation for functional dependent data
We consider $\alpha$-mixing observations and deal with the estimation of the conditional mode of a scalar response variable $Y$, given a random variable $X$ taking values in a semi-metric space. We provide the convergence rate, in $L^p$ norm, of the estimator.