
MULTICOLOR RANDOMLY REINFORCED URNS

PATRIZIA BERTI, IRENE CRIMALDI, LUCA PRATELLI, AND PIETRO RIGO

Abstract. Let (X_n) be a sequence of integrable real random variables, adapted to a filtration (G_n). Define

C_n = √n ((1/n) Σ_{k=1}^n X_k − E(X_{n+1} | G_n)) and D_n = √n (E(X_{n+1} | G_n) − Z),

where Z is the a.s. limit of E(X_{n+1} | G_n) (assumed to exist). Conditions for (C_n, D_n) → N(0, U) × N(0, V) stably are given, where U, V are certain random variables. In particular, under such conditions, one obtains

√n ((1/n) Σ_{k=1}^n X_k − Z) = C_n + D_n → N(0, U + V) stably.

This CLT has natural applications to Bayesian statistics and urn problems.

The latter are investigated, by paying special attention to multicolor randomly reinforced urns.

1. Introduction and motivations

As regards asymptotics in urn models, there is not a unique reference framework. Rather, there are many (ingenious) disjoint ideas, one for each class of problems. Well known examples are martingale methods, exchangeability, branching processes, stochastic approximation, dynamical systems and so on; see [16].

Thus, limit theorems which unify various urn problems look of some interest.

In this paper, we focus on the CLT. While conceived for urn problems, our CLT is stated for an arbitrary sequence of real random variables. Thus, it potentially applies to every urn situation, even if its main application (known to us) is an important special case of randomly reinforced urns (RRU).

Let (X_n) be a sequence of real random variables such that E|X_n| < ∞. Define Z_n = E(X_{n+1} | G_n), where G = (G_n) is some filtration which makes (X_n) adapted. Under various assumptions, one obtains Z_n → Z a.s. and in L¹, for some random variable Z. Define further

X̄_n = (1/n) Σ_{k=1}^n X_k and

C_n = √n (X̄_n − Z_n), D_n = √n (Z_n − Z), W_n = C_n + D_n = √n (X̄_n − Z).

Date: February 8, 2011.

2000 Mathematics Subject Classification. 60F05, 60G57, 60B10, 62F15.

Key words and phrases. Bayesian statistics – Central limit theorem – Empirical distribution – Poisson-Dirichlet process – Predictive distribution – Random probability measure – Stable convergence – Urn model.


The limit distribution of C_n, D_n or W_n is a main goal in various fields, including Bayesian statistics, discrete time filtering, gambling and urn problems. See [2], [4], [5], [6], [7], [8], [10] and references therein. In fact, suppose the next observation X_{n+1} is to be predicted conditionally on the available information G_n. If the predictor Z_n cannot be evaluated in closed form, one needs some estimate Ẑ_n, and C_n reduces to the scaled error when Ẑ_n = X̄_n. And X̄_n is a sound estimate of Z_n under some distributional assumptions on (X_n), for instance when (X_n) is exchangeable, as is usual in Bayesian statistics. Similarly, D_n and W_n are of interest provided Z is regarded as a random parameter. In this case, Z_n is the Bayesian estimate (of Z) under quadratic loss and X̄_n can often be viewed as the maximum likelihood estimate. Note also that, in the trivial case where (X_n) is i.i.d. and G_n = σ(X_1, ..., X_n), one obtains C_n = W_n = √n (X̄_n − E X_1) and D_n = 0. As to urn problems, X_n could be the indicator of {black ball at time n} in a multicolor urn. Then, Z_n becomes the proportion of black balls in the urn at time n and X̄_n the observed frequency of black balls at time n.

In Theorem 2, we give conditions for

(C_n, D_n) → N(0, U) × N(0, V) stably,   (1)

where U, V are certain random variables and N(0, L) denotes the Gaussian kernel with mean 0 and variance L. A nice consequence is that

W_n = C_n + D_n → N(0, U + V) stably.

Stable convergence, in the sense of Aldous and Rényi, is a strong form of convergence in distribution; the definition is recalled in Section 2.

To check the conditions for (1), it is fundamental to know something about the convergence rate of Z_{n+1} − Z_n and E(Z_{n+1} − Z_n | G_n). Hence, such conditions become simpler when (Z_n) is a G-martingale. Since

E(Z_{n+1} | G_n) = E(E(X_{n+2} | G_{n+1}) | G_n) = E(X_{n+2} | G_n) a.s.,

(Z_n) is trivially a G-martingale in case

P(X_k ∈ · | G_n) = P(X_{n+1} ∈ · | G_n) a.s. for all 0 ≤ n < k.   (2)

Those (G-adapted) sequences (X_n) satisfying (2) are investigated in [5] and are called conditionally identically distributed with respect to G. Note that (2) holds if (X_n) is exchangeable and G_n = σ(X_1, ..., X_n).

Together with Theorem 2, the main contribution of this paper is one of its applications, that is, an important special case of RRU. Two other applications are r-step predictions and Poisson-Dirichlet sequences. We refer to Subsections 4.1 and 4.2 for the latter and we next describe this type of urn.

An urn contains black and red balls. At each time n ≥ 1, a ball is drawn and then replaced together with a random number of balls of the same color. Say that B_n black balls or R_n red balls are added to the urn according to whether X_n = 1 or X_n = 0, where X_n is the indicator of {black ball at time n}. Define

G_n = σ(X_1, B_1, R_1, ..., X_n, B_n, R_n)

and suppose that

B_n ≥ 0, R_n ≥ 0, E B_n = E R_n, sup_n E(B_n + R_n)^u < ∞ for some u > 2,
m := lim_n E B_n > 0, q := lim_n E B_n², s := lim_n E R_n²,
(B_n, R_n) independent of (X_1, B_1, R_1, ..., X_{n−1}, B_{n−1}, R_{n−1}, X_n).

Then, as shown in Corollary 7, condition (1) holds with

U = Z(1 − Z) (((1 − Z)q + Zs)/m² − 1) and V = Z(1 − Z) ((1 − Z)q + Zs)/m².

A remark on the assumption E B_n = E R_n is in order. Such an assumption is technically fundamental for Corollary 7, but it is not required by RRU, as defined in [9]. Indeed, E B_n ≠ E R_n is closer to the spirit of RRU and of the real problems motivating them. However, E B_n = E R_n is an important special case of RRU. For instance, it might be the null hypothesis in an application.

Corollary 7 improves the existing result on this type of urn, obtained in [2], in two respects. First, Corollary 7 implies convergence of the pairs (C_n, D_n) and not only of D_n. Hence, one also gets W_n → N(0, U + V) stably. Second, unlike [2], the sequence ((B_n, R_n)) need not be identically distributed, nor need the random variables B_n + R_n have compact support.

By just the same argument used for two color urns, multicolor versions of Corollary 7 are easily manufactured. To our knowledge, results of this type were not available so far. Briefly, for a d-color urn, let X_{n,j} be the indicator of {ball of color j at time n}, where n ≥ 1 and 1 ≤ j ≤ d. Suppose A_{n,j} balls of color j are added in case X_{n,j} = 1. The random variables A_{n,j} are requested to satisfy the same type of conditions asked above of B_n and R_n; see Subsection 4.4 for details. Then,

(C_n, D_n) → N_d(0, U) × N_d(0, V) stably,

where C_n and D_n are the vector versions of C_n and D_n, while U, V are certain random covariance matrices; see Corollary 10.

A last note is the following. In the previous urn, the n-th reinforcement matrix is A_n = diag(A_{n,1}, ..., A_{n,d}). Since E A_{n,1} = ... = E A_{n,d}, the leading eigenvalue of the mean matrix E A_n has multiplicity greater than 1. Even if significant for applications, this particular case (the leading eigenvalue of E A_n is not simple) is typically neglected; see [3], [12], [13], and page 20 of [16]. Our result, and indeed the result in [2], contribute to (partially) fill this gap.
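Before moving to the formal treatment, the two-color RRU dynamics described above can be simulated in a few lines. The sketch below is purely illustrative: the function name `rru_path`, the initial weights and the Uniform(0.5, 1.5) reinforcement law are our own choices, not taken from the paper (a common reinforcement law gives E B_n = E R_n, as assumed above).

```python
import random

def rru_path(n_steps, b=1.0, r=1.0, seed=0):
    """Simulate a two-color randomly reinforced urn and return the path of
    Z_n, the conditional probability of drawing a black ball."""
    rng = random.Random(seed)
    black, total = b, b + r
    zs = [black / total]
    for _ in range(n_steps):
        is_black = rng.random() < black / total   # draw a ball
        add = rng.uniform(0.5, 1.5)               # reinforcement B_n or R_n
        if is_black:                              # replace + add same color
            black += add
        total += add
        zs.append(black / total)
    return zs

zs = rru_path(20000)
print(zs[-1])  # Z_n settles near a random limit Z
```

On a single run, the path of Z_n fluctuates early on and then stabilizes, consistent with the a.s. convergence Z_n → Z; the limit differs from run to run, which is precisely why U and V above are random variables.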

2. Stable convergence

Stable convergence has been introduced by Rényi in [18] and subsequently investigated by various authors. In a sense, it is intermediate between convergence in distribution and convergence in probability. We recall here the basic definitions. For more information, we refer to [1], [7], [11] and references therein.

Let (Ω, A, P) be a probability space and S a metric space. A kernel on S, or a random probability measure on S, is a measurable collection N = {N(ω) : ω ∈ Ω} of probability measures on the Borel σ-field on S. Measurability means that

N(ω)(f) = ∫ f(x) N(ω)(dx)

is A-measurable, as a function of ω ∈ Ω, for each bounded Borel map f : S → R.

Let (Y_n) be a sequence of S-valued random variables and N a kernel on S. Both (Y_n) and N are defined on (Ω, A, P). Say that Y_n converges stably to N in case

P(Y_n ∈ · | H) → E(N(·) | H) weakly, for all H ∈ A such that P(H) > 0.

Clearly, if Y_n → N stably, then Y_n converges in distribution to the probability law E(N(·)) (just let H = Ω). Moreover, when S is separable, it is not hard to see that Y_n → Y in probability if and only if Y_n converges stably to the kernel N = δ_Y.

We next mention a strong form of stable convergence, introduced in [7], to be used later on. Let F_n ⊂ A be a sub-σ-field, n ≥ 1. Say that Y_n converges to N stably in strong sense, with respect to the sequence (F_n), in case

E(f(Y_n) | F_n) → N(f) in probability, for each f ∈ C_b(S),

where C_b(S) denotes the set of real bounded continuous functions on S.

Finally, we state a simple but useful fact as a lemma.

Lemma 1. Suppose that S is a separable metric space and C_n and D_n are S-valued random variables on (Ω, A, P), n ≥ 1; M and N are kernels on S defined on (Ω, A, P); G = (G_n : n ≥ 1) is an (increasing) filtration satisfying

σ(C_n) ⊂ G_n and σ(D_n) ⊂ G_∞ for all n, where G_∞ = σ(∪_n G_n).

If C_n → M stably and D_n → N stably in strong sense, with respect to G, then (C_n, D_n) → M × N stably.

(Here, M × N is the kernel on S × S such that (M × N)(ω) = M(ω) × N(ω) for all ω.)

Proof. By standard arguments, since S is separable and σ(C_n, D_n) ⊂ σ(∪_n G_n), it suffices to prove that E{I_H f_1(C_n) f_2(D_n)} → E{I_H M(f_1) N(f_2)} whenever H ∈ ∪_n G_n and f_1, f_2 ∈ C_b(S). Let L_n = E(f_2(D_n) | G_n) − N(f_2). Since H ∈ ∪_n G_n, there is k such that H ∈ G_n for n ≥ k. Thus,

E{I_H f_1(C_n) f_2(D_n)} = E{I_H f_1(C_n) E(f_2(D_n) | G_n)}
= E{I_H f_1(C_n) N(f_2)} + E{I_H f_1(C_n) L_n} for all n ≥ k.

Finally, |E{I_H f_1(C_n) L_n}| ≤ sup|f_1| E|L_n| → 0, since D_n → N stably in strong sense, and E{I_H f_1(C_n) N(f_2)} → E{I_H M(f_1) N(f_2)} as C_n → M stably. □

3. A central limit theorem

In the sequel, (X_n : n ≥ 1) is a sequence of real random variables on the probability space (Ω, A, P) and G = (G_n : n ≥ 0) an (increasing) filtration. We assume E|X_n| < ∞ and we let

X̄_n = (1/n) Σ_{k=1}^n X_k, Z_n = E(X_{n+1} | G_n) and C_n = √n (X̄_n − Z_n).

In case Z_n → Z a.s., for some real random variable Z, we also define

D_n = √n (Z_n − Z).

Sufficient conditions for Z_n → Z a.s. and in L¹ are

sup_n E X_n² < ∞ and E[(E(Z_{n+1} | G_n) − Z_n)²] = o(n^{−3}).   (3)

In this case, in fact, (Z_n) is a uniformly integrable quasi-martingale.

We recall that a sequence (Y_n) of real integrable random variables is a quasi-martingale (with respect to the filtration G) if it is G-adapted and

Σ_n E|E(Y_{n+1} | G_n) − Y_n| < ∞.

If (Y_n) is a quasi-martingale and sup_n E|Y_n| < ∞, then Y_n converges a.s.

Let N (a, b) denote the one-dimensional Gaussian law with mean a and variance b ≥ 0 (where N (a, 0) = δ a ). Note that N (0, L) is a kernel on R for each real non negative random variable L. We are now in a position to state our CLT.

Theorem 2. Suppose σ(X_n) ⊂ G_n for each n ≥ 1, (X_n²) is uniformly integrable and condition (3) holds. Let us consider the following conditions:

(a) (1/√n) E[max_{1≤k≤n} k |Z_{k−1} − Z_k|] → 0,
(b) (1/n) Σ_{k=1}^n (X_k − Z_{k−1} + k(Z_{k−1} − Z_k))² → U in probability,
(c) √n E[sup_{k≥n} |Z_{k−1} − Z_k|] → 0,
(d) n Σ_{k≥n} (Z_{k−1} − Z_k)² → V in probability,

where U and V are real non negative random variables. Then, C_n → N(0, U) stably under (a)-(b), and D_n → N(0, V) stably in strong sense, with respect to G, under (c)-(d). In particular,

(C_n, D_n) → N(0, U) × N(0, V) stably under (a)-(b)-(c)-(d).

Proof. Since σ(C_n) ⊂ G_n and Z can be taken σ(∪_n G_n)-measurable, Lemma 1 applies. Thus, it suffices to prove that C_n → N(0, U) stably and D_n → N(0, V) stably in strong sense.

"C_n → N(0, U) stably". Suppose conditions (a)-(b) hold. First note that

√n C_n = n X̄_n − n Z_n = Σ_{k=1}^n X_k + Σ_{k=1}^n ((k−1) Z_{k−1} − k Z_k) = Σ_{k=1}^n (X_k − Z_{k−1} + k(Z_{k−1} − Z_k)),

where Z_0 = E(X_1 | G_0). Letting

Y_{n,k} = (X_k − Z_{k−1} + k(E(Z_k | G_{k−1}) − Z_k))/√n and Q_n = (1/√n) Σ_{k=1}^n k (Z_{k−1} − E(Z_k | G_{k−1})),

it follows that C_n = Σ_{k=1}^n Y_{n,k} + Q_n. By condition (3),

E|Q_n| ≤ (1/√n) Σ_{k=1}^n k √(E[(Z_{k−1} − E(Z_k | G_{k−1}))²]) = (1/√n) Σ_{k=1}^n o(k^{−1/2}) → 0.

Hence, it suffices to prove that Σ_{k=1}^n Y_{n,k} → N(0, U) stably. Letting F_{n,k} = G_k, k = 1, ..., n, one obtains E(Y_{n,k} | F_{n,k−1}) = 0 a.s. Thus, by Corollary 7 of [7], Σ_{k=1}^n Y_{n,k} → N(0, U) stably whenever

(i) E[max_{1≤k≤n} |Y_{n,k}|] → 0; (ii) Σ_{k=1}^n Y_{n,k}² → U in probability.

As to (i), first note that

√n max_{1≤k≤n} |Y_{n,k}| ≤ max_{1≤k≤n} |X_k − Z_{k−1}| + Σ_{k=1}^n k |E(Z_k | G_{k−1}) − Z_{k−1}| + max_{1≤k≤n} k |Z_{k−1} − Z_k|.

Since (X_n²) is uniformly integrable, ((X_n − Z_{n−1})²) is uniformly integrable as well, and this implies (1/n) E[max_{1≤k≤n} (X_k − Z_{k−1})²] → 0. By condition (3),

(1/√n) Σ_{k=1}^n k E|E(Z_k | G_{k−1}) − Z_{k−1}| = (1/√n) Σ_{k=1}^n o(k^{−1/2}) → 0.

Thus, (i) follows from condition (a).

As to (ii), write

Σ_{k=1}^n Y_{n,k}² = (1/n) Σ_{k=1}^n (X_k − Z_{k−1} + k(Z_{k−1} − Z_k))²
+ (1/n) Σ_{k=1}^n k² (E(Z_k | G_{k−1}) − Z_{k−1})²
+ (2/n) Σ_{k=1}^n (X_k − Z_{k−1} + k(Z_{k−1} − Z_k)) k (E(Z_k | G_{k−1}) − Z_{k−1})
= R_n + S_n + T_n, say.

Then, R_n → U in probability by (b), and E|S_n| = E S_n → 0 by (3). Further, T_n → 0 in probability, since

T_n²/4 ≤ (1/n) Σ_{k=1}^n (X_k − Z_{k−1} + k(Z_{k−1} − Z_k))² · (1/n) Σ_{k=1}^n k² (E(Z_k | G_{k−1}) − Z_{k−1})² = R_n S_n.

Hence, (ii) holds, and this concludes the proof of C_n → N(0, U) stably.

"D_n → N(0, V) stably in strong sense". Suppose conditions (c)-(d) hold.

We first recall a known result; see Example 6 of [7]. Let (L_n) be a G-martingale such that L_n → L a.s. and in L¹, for some real random variable L. Then, √n (L_n − L) → N(0, V) stably in strong sense with respect to G, provided

(c*) √n E[sup_{k≥n} |L_{k−1} − L_k|] → 0; (d*) n Σ_{k≥n} (L_{k−1} − L_k)² → V in probability.

Next, define L_0 = Z_0 and

L_n = Z_n − Σ_{k=0}^{n−1} (E(Z_{k+1} | G_k) − Z_k).

Then, (L_n) is a G-martingale. Also, L_n → L a.s. and in L¹ for some L, as (Z_n) is a uniformly integrable quasi-martingale. In particular, L_n − L can be written as L_n − L = Σ_{k≥n} (L_k − L_{k+1}) a.s. Similarly, Z_n − Z = Σ_{k≥n} (Z_k − Z_{k+1}) a.s. It follows that

E|D_n − √n (L_n − L)| = √n E|(Z_n − Z) − (L_n − L)|
= √n E|Σ_{k≥n} ((Z_k − L_k) − (Z_{k+1} − L_{k+1}))|
≤ √n Σ_{k≥n} E|Z_k − E(Z_{k+1} | G_k)| = √n Σ_{k≥n} o(k^{−3/2}) → 0.

Thus, D_n → N(0, V) stably in strong sense if and only if √n (L_n − L) → N(0, V) stably in strong sense, and to conclude the proof it suffices to check conditions (c*)-(d*). In turn, (c*)-(d*) are a straightforward consequence of conditions (3), (c), (d) and

L_{k−1} − L_k = (Z_{k−1} − Z_k) + (E(Z_k | G_{k−1}) − Z_{k−1}). □

Some remarks on Theorem 2 are in order.

In real problems, one of the quantities of main interest is W_n = √n (X̄_n − Z). And, under the assumptions of Theorem 2, one obtains W_n = C_n + D_n → N(0, U + V) stably.

Condition (3) trivially holds when (X_n) is conditionally identically distributed with respect to G; see [5] and Section 1. In particular, (3) holds if (X_n) is exchangeable and G_n = σ(X_1, ..., X_n).

Under (c), condition (a) can be replaced by

(a*) sup_n (1/n) Σ_{k=1}^n k² E[(Z_{k−1} − Z_k)²] < ∞.

Indeed, (a*) and (c) imply (a) (we omit calculations). Note that, for proving C_n → N(0, U) stably under (a*)-(b)-(c), one can rely on more classical versions of the martingale CLT, such as Theorem 3.2 of [11].

To check conditions (b) and (d), the following simple lemma can help.

Lemma 3. Let (Y_n) be a G-adapted sequence of real random variables. If Σ_{n=1}^∞ n^{−2} E Y_n² < ∞ and E(Y_{n+1} | G_n) → Y a.s., for some random variable Y, then

n Σ_{k≥n} Y_k/k² → Y a.s. and (1/n) Σ_{k=1}^n Y_k → Y a.s.

Proof. Let L_n = Σ_{k=1}^n (Y_k − E(Y_k | G_{k−1}))/k. Then, (L_n) is a G-martingale such that

sup_n E L_n² ≤ 4 Σ_k E Y_k²/k² < ∞.

Thus, L_n converges a.s. and the Abel summation formula yields

n Σ_{k≥n} (Y_k − E(Y_k | G_{k−1}))/k² → 0 a.s.

Since E(Y_{n+1} | G_n) → Y a.s. and n Σ_{k≥n} 1/k² → 1, it follows that

n Σ_{k≥n} Y_k/k² = n Σ_{k≥n} (Y_k − E(Y_k | G_{k−1}))/k² + n Σ_{k≥n} E(Y_k | G_{k−1})/k² → Y a.s.

Similarly, the Kronecker lemma and E(Y_{n+1} | G_n) → Y a.s. yield

(1/n) Σ_{k=1}^n Y_k = (1/n) Σ_{k=1}^n E(Y_k | G_{k−1}) + (1/n) Σ_{k=1}^n k · (Y_k − E(Y_k | G_{k−1}))/k → Y a.s. □
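As a quick (and deliberately degenerate) numeric sanity check of Lemma 3, take Y_k deterministic, so that E(Y_{k+1} | G_k) = Y_{k+1} trivially converges; both averages in the conclusion can then be computed directly. The sequence Y_k = 2 + 1/k and the truncation point for the tail sum are arbitrary choices of ours.

```python
# Degenerate check of Lemma 3 with deterministic Y_k = Y + 1/k, so that
# E(Y_{k+1} | G_k) = Y_{k+1} -> Y trivially.  Both
#   n * sum_{k>=n} Y_k / k^2   and   (1/n) * sum_{k<=n} Y_k
# should approach Y.
Y = 2.0
n = 1000
K = 2_000_000  # truncation point for the (infinite) tail sum

tail_avg = n * sum((Y + 1.0 / k) / k**2 for k in range(n, K))
cesaro_avg = sum(Y + 1.0 / k for k in range(1, n + 1)) / n

print(tail_avg, cesaro_avg)  # both close to Y = 2.0
```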

Finally, as regards D_n, a natural question is whether

E(f(D_n) | G_n) → N(0, V)(f) a.s. for each f ∈ C_b(R).   (4)

This is a strengthening of D_n → N(0, V) stably in strong sense, as E(f(D_n) | G_n) is requested to converge a.s. and not only in probability. Conditions for (4) are given by the next proposition.

Proposition 4. Let (X_n) be a (not necessarily G-adapted) sequence of integrable random variables. Condition (4) holds whenever (Z_n) is uniformly integrable and

Σ_{k≥1} √k E|E(Z_k | G_{k−1}) − Z_{k−1}| < ∞, E[sup_{k≥1} k |Z_{k−1} − Z_k|] < ∞, n Σ_{k≥n} (Z_{k−1} − Z_k)² → V a.s.

Proof. Just repeat (the second part of) the proof of Theorem 2, but use Theorem 2.2 of [8] instead of Example 6 of [7]. □

4. Applications

4.1. r-step predictions. Suppose we are requested to make conditional forecasts on a sequence of events A_n ∈ G_n. To fix ideas, for each n, we aim to predict the event

H_n = (∩_{j∈J} A_{n+j}) ∩ (∩_{j∈J^c} A_{n+j}^c)

conditionally on G_n, where J is a given subset of {1, ..., r} and J^c = {1, ..., r} \ J. Letting X_n = I_{A_n}, the r-step predictor can be written as

Z_n^* = P(H_n | G_n) = E(Π_{j∈J} X_{n+j} · Π_{j∈J^c} (1 − X_{n+j}) | G_n).

In the spirit of Section 1, when Z_n^* cannot be evaluated in closed form, one needs to estimate it. Under some assumptions, in particular when (X_n) is exchangeable and G_n = σ(X_1, ..., X_n), a reasonable estimate of Z_n^* is X̄_n^h (1 − X̄_n)^{r−h}, where h = card(J). Usually, under such assumptions, one also has X̄_n → Z a.s. and Z_n^* → Z^h (1 − Z)^{r−h} a.s. for some random variable Z. So, it makes sense to define

C_n^* = √n (X̄_n^h (1 − X̄_n)^{r−h} − Z_n^*) and D_n^* = √n (Z_n^* − Z^h (1 − Z)^{r−h}).

The next result is a straightforward consequence of Theorem 2.

Corollary 5. Let (X_n) be a G-adapted sequence of indicators satisfying (3). If conditions (a)-(b)-(c)-(d) of Theorem 2 hold, then

(C_n^*, D_n^*) → N(0, σ² U) × N(0, σ² V) stably, where σ² = (h Z^{h−1} (1 − Z)^{r−h} − (r − h) Z^h (1 − Z)^{r−h−1})².

Proof. We just give a sketch of the proof. Let f(x) = x^h (1 − x)^{r−h} and Z_n = E(X_{n+1} | G_n). Based on (c), it can be shown that √n E|Z_n^* − f(Z_n)| → 0. Thus, C_n^* can be replaced by √n (f(X̄_n) − f(Z_n)) and D_n^* by √n (f(Z_n) − f(Z)). By the mean value theorem,

√n (f(X̄_n) − f(Z_n)) = √n f′(M_n) (X̄_n − Z_n) = f′(M_n) C_n,

where M_n is between X̄_n and Z_n. By (3), Z_n → Z a.s. and X̄_n → Z a.s. Hence, f′(M_n) → f′(Z) a.s., as f′ is continuous. By Theorem 2, C_n → N(0, U) stably. Thus,

√n (f(X̄_n) − f(Z_n)) → f′(Z) N(0, U) = N(0, σ² U) stably.

By a similar argument, it can be seen that √n (f(Z_n) − f(Z)) → N(0, σ² V) stably in strong sense. An application of Lemma 1 concludes the proof. □

Roughly speaking, Corollary 5 states that, if 1-step predictions behave nicely, then r-step predictions behave nicely as well. In fact, (C_n^*, D_n^*) converges stably under the same conditions which imply convergence of (C_n, D_n), and the respective limits are connected in a simple way. The forthcoming Subsections 4.2 and 4.3 provide examples of indicators satisfying the assumptions of Corollary 5.
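For a concrete instance of the objects in this subsection, consider an exchangeable Bernoulli sequence directed by a Beta prior, a standard Bayesian example of our own choosing (not discussed in the paper). There the r-step predictor E(θ^h (1 − θ)^{r−h} | data) is available in closed form and can be compared with the plug-in estimate X̄_n^h (1 − X̄_n)^{r−h}; the counts below are illustrative.

```python
from math import lgamma, exp

def lbeta(x, y):
    """Log of the Beta function."""
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def r_step_predictor(a, b, s, f, h, r):
    """Exact E[theta^h (1 - theta)^(r - h) | s successes, f failures]
    under a Beta(a, b) prior on theta."""
    return exp(lbeta(a + s + h, b + f + r - h) - lbeta(a + s, b + f))

a = b = 1.0
s, f = 612, 388            # illustrative counts after n = 1000 observations
h, r = 2, 3                # predict 2 successes, 1 failure among the next 3
xbar = s / (s + f)

exact = r_step_predictor(a, b, s, f, h, r)
plug_in = xbar**h * (1 - xbar)**(r - h)
print(exact, plug_in)      # the two predictors nearly agree for large n
```

The near-agreement of the two numbers for large n is exactly what makes the plug-in estimate a reasonable stand-in for the exact predictor in the corollary above.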

4.2. Poisson-Dirichlet sequences. Let Y be a finite set and (Y_n) a sequence of Y-valued random variables satisfying

P(Y_{n+1} ∈ A | Y_1, ..., Y_n) = [Σ_{y∈A} (S_{n,y} − α) I_{{S_{n,y} ≠ 0}} + (θ + α Σ_{y∈Y} I_{{S_{n,y} ≠ 0}}) ν(A)] / (θ + n)

a.s. for all A ⊂ Y and n ≥ 1. Here, 0 ≤ α < 1 and θ > −α are constants, ν is the probability distribution of Y_1 and S_{n,y} = Σ_{k=1}^n I_{{Y_k = y}}.

Sequences (Y_n) of this type play a role in various frameworks, mainly in population genetics. They can be regarded as a generalization of those exchangeable sequences directed by a two-parameter Poisson-Dirichlet process; see [17]. For α = 0, (Y_n) reduces to a classical Dirichlet sequence (i.e., an exchangeable sequence directed by a Dirichlet process). But, for α ≠ 0, (Y_n) may even fail to be exchangeable.
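The predictive rule above is easy to sample from directly. The sketch below does so for a uniform ν on a small support, both illustrative choices of ours; note the predictive weights sum to θ + n by construction.

```python
import random

def pd_sequence(n, theta, alpha, support, seed=0):
    """Sample Y_1, ..., Y_n from the predictive rule of Subsection 4.2,
    with nu = uniform distribution on `support`."""
    rng = random.Random(seed)
    nu = 1.0 / len(support)
    counts = dict.fromkeys(support, 0)
    ys = [support[rng.randrange(len(support))]]        # Y_1 ~ nu
    counts[ys[0]] += 1
    for n_seen in range(1, n):
        k = sum(1 for c in counts.values() if c > 0)   # occupied values
        weights = [
            (counts[y] - alpha if counts[y] > 0 else 0.0)
            + (theta + alpha * k) * nu
            for y in support
        ]                                  # weights sum to theta + n_seen
        y = rng.choices(support, weights=weights)[0]
        ys.append(y)
        counts[y] += 1
    return ys

ys = pd_sequence(2000, theta=1.0, alpha=0.3, support=[0, 1, 2])
```

Setting `alpha=0` recovers the classical Dirichlet (Pólya-type) predictive rule mentioned above.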

From the point of view of Theorem 2, however, the only important thing is that P(Y_{n+1} ∈ · | Y_1, ..., Y_n) can be written down explicitly. Indeed, the following result is available.

Corollary 6. Let G_n = σ(Y_1, ..., Y_n) and X_n = I_A(Y_n), where A ⊂ Y. Then, condition (3) holds (so that Z_n → Z a.s.) and

(C_n, D_n) → δ_0 × N(0, Z(1 − Z)) stably.

Proof. Let Q_n = −α Σ_{y∈A} I_{{S_{n,y} ≠ 0}} + (θ + α Σ_{y∈Y} I_{{S_{n,y} ≠ 0}}) ν(A). Since

Z_n = P(Y_{n+1} ∈ A | Y_1, ..., Y_n) = (n X̄_n + Q_n)/(θ + n) and |Q_n| ≤ c

for some constant c, then C_n → 0 a.s. By Lemma 1 and Theorem 2, thus, it suffices to check conditions (3), (c) and (d) with V = Z(1 − Z). On noting that

Z_{n+1} − Z_n = (X_{n+1} − Z_n)/(θ + n + 1) + (Q_{n+1} − Q_n)/(θ + n + 1),

condition (c) trivially holds. Since S_{n+1,y} = S_{n,y} + I_{{Y_{n+1} = y}}, one obtains

Q_{n+1} − Q_n = −α ν(A^c) Σ_{y∈A} I_{{S_{n,y} = 0}} I_{{Y_{n+1} = y}} + α ν(A) Σ_{y∈A^c} I_{{S_{n,y} = 0}} I_{{Y_{n+1} = y}}.

It follows that

E(|Q_{n+1} − Q_n| | G_n) ≤ 2 Σ_{y∈Y} I_{{S_{n,y} = 0}} P(Y_{n+1} = y | G_n) ≤ d/(θ + n) a.s.

for some constant d, and this implies

|E(Z_{n+1} | G_n) − Z_n| = |E(Q_{n+1} − Q_n | G_n)|/(θ + n + 1) ≤ d/(θ + n)² a.s.

Hence, condition (3) holds. To check (d), note that Σ_k k² E[(Z_{k−1} − Z_k)⁴] < ∞. Since Z_k → Z a.s. (by (3)), one also obtains

E((X_k − Z_{k−1})² | G_{k−1}) = Z_{k−1} − Z_{k−1}² → Z(1 − Z) a.s. and
E((Q_k − Q_{k−1})² | G_{k−1}) + 2 E((X_k − Z_{k−1})(Q_k − Q_{k−1}) | G_{k−1}) → 0 a.s.

Thus, k² E((Z_{k−1} − Z_k)² | G_{k−1}) → Z(1 − Z) a.s. Letting Y_k = k² (Z_{k−1} − Z_k)² and Y = Z(1 − Z), Lemma 3 implies

n Σ_{k≥n} (Z_{k−1} − Z_k)² = n Σ_{k≥n} Y_k/k² → Z(1 − Z) a.s. □

As is clear from the previous proof, all assumptions of Proposition 4 are satisfied. Therefore, D_n meets condition (4) with V = Z(1 − Z).

A result analogous to Corollary 6 is Theorem 4.1 of [4]. The main tool for proving the latter, indeed, is Theorem 2.

4.3. Two color randomly reinforced urns. An urn contains b > 0 black balls and r > 0 red balls. At each time n ≥ 1, a ball is drawn and then replaced together with a random number of balls of the same color. Say that B_n black balls or R_n red balls are added to the urn according to whether X_n = 1 or X_n = 0, where X_n is the indicator of {black ball at time n}.

Urns of this type have some history starting with [9]. See also [2], [4], [5], [8], [15], [16] and references therein.

To model such urns, we assume that X_n, B_n, R_n are random variables on the probability space (Ω, A, P) such that

(∗) X_n ∈ {0, 1}, B_n ≥ 0, R_n ≥ 0,
(B_n, R_n) independent of (X_1, B_1, R_1, ..., X_{n−1}, B_{n−1}, R_{n−1}, X_n),
Z_n = P(X_{n+1} = 1 | G_n) = (b + Σ_{k=1}^n B_k X_k)/(b + r + Σ_{k=1}^n (B_k X_k + R_k (1 − X_k))) a.s.,

for each n ≥ 1, where

G_0 = {∅, Ω}, G_n = σ(X_1, B_1, R_1, ..., X_n, B_n, R_n).

In the particular case B_n = R_n, it is shown in Example 3.5 of [5] that C_n converges stably to a Gaussian kernel whenever E B_1² < ∞ and the sequence (B_n : n ≥ 1) is identically distributed. Further, in Corollary 4.1 of [8], D_n is shown to satisfy condition (4). The latter result on D_n is extended to B_n ≠ R_n in [2], under the assumptions that B_1 + R_1 has compact support, E B_1 = E R_1, and ((B_n, R_n) : n ≥ 1) is identically distributed.

Based on the results in Section 3, condition (4) can be shown to hold more generally. Indeed, to get condition (4), it is fundamental that E B_n = E R_n for all n and that the three sequences (E B_n), (E B_n²), (E R_n²) approach a limit. But the assumption that the (B_n, R_n) are identically distributed can be dropped, and compact support of B_n + R_n can be replaced by a moment condition such as

sup_n E(B_n + R_n)^u < ∞ for some u > 2.   (5)

Under these conditions, not only does D_n meet (4), but the pairs (C_n, D_n) converge stably as well. In particular, one obtains stable convergence of W_n = C_n + D_n, which is of potential interest in urn problems.

Corollary 7. In addition to (∗) and (5), suppose E B_n = E R_n for all n and

m := lim_n E B_n > 0, q := lim_n E B_n², s := lim_n E R_n².

Then, condition (3) holds (so that Z_n → Z a.s.) and

(C_n, D_n) → N(0, U) × N(0, V) stably, where

U = Z(1 − Z) (((1 − Z)q + Zs)/m² − 1) and V = Z(1 − Z) ((1 − Z)q + Zs)/m².

In particular, W_n = C_n + D_n → N(0, U + V) stably. Moreover, D_n meets condition (4), that is, E(f(D_n) | G_n) → N(0, V)(f) a.s. for each f ∈ C_b(R).

It is worth noting that, arguing as in [2] and [15], one obtains P(Z = z) = 0 for all z. Thus, N(0, V) is a non-degenerate kernel. In turn, N(0, U) is non-degenerate unless q = s = m², and this happens if and only if both B_n and R_n converge in probability (necessarily to m). In the latter case (q = s = m²), C_n → 0 in probability and condition (4) holds with V = Z(1 − Z). Thus, in a sense, RRU behave as classical Pólya urns (i.e., those urns with B_n = R_n = m) whenever the reinforcements converge in probability.

The proof of Corollary 7 is deferred to the Appendix, as it needs some work. Here, to point out the underlying argument, we sketch such a proof under the superfluous but simplifying assumption that B_n ∨ R_n ≤ c for all n and some constant c. Let

S_n = b + r + Σ_{k=1}^n (B_k X_k + R_k (1 − X_k)).

After some algebra, Z_{n+1} − Z_n can be written as

Z_{n+1} − Z_n = ((1 − Z_n) X_{n+1} B_{n+1} − Z_n (1 − X_{n+1}) R_{n+1})/S_{n+1}
= (1 − Z_n) X_{n+1} B_{n+1}/(S_n + B_{n+1}) − Z_n (1 − X_{n+1}) R_{n+1}/(S_n + R_{n+1}).

By (∗) and E B_{n+1} = E R_{n+1},

E(Z_{n+1} − Z_n | G_n) = Z_n (1 − Z_n) E(B_{n+1}/(S_n + B_{n+1}) − R_{n+1}/(S_n + R_{n+1}) | G_n)
= Z_n (1 − Z_n) E(B_{n+1}/(S_n + B_{n+1}) − B_{n+1}/S_n − R_{n+1}/(S_n + R_{n+1}) + R_{n+1}/S_n | G_n)
= Z_n (1 − Z_n) E(−B_{n+1}²/(S_n (S_n + B_{n+1})) + R_{n+1}²/(S_n (S_n + R_{n+1})) | G_n) a.s.

Thus,

|E(Z_{n+1} | G_n) − Z_n| ≤ (E B_{n+1}² + E R_{n+1}²)/S_n² a.s.

Since sup_n (E B_n² + E R_n²) < ∞ and E(S_n^{−p}) = O(n^{−p}) for all p > 0 (as shown in Lemma 11), then

E|E(Z_{n+1} | G_n) − Z_n|^p = O(n^{−2p}) for all p > 0.

In particular, condition (3) holds and Σ_k √k E|E(Z_k | G_{k−1}) − Z_{k−1}| < ∞.

To conclude the proof, in view of Lemma 1, Theorem 2 and Proposition 4, it suffices to check conditions (a), (b) and

(i) E[sup_{k≥1} k |Z_{k−1} − Z_k|] < ∞; (ii) n Σ_{k≥n} (Z_{k−1} − Z_k)² → V a.s.

Conditions (a) and (i) are straightforward consequences of |Z_{n+1} − Z_n| ≤ c/S_n and E(S_n^{−p}) = O(n^{−p}) for all p > 0. Condition (b) follows from the same argument as (ii). And to prove (ii), it suffices to show that E(Y_{n+1} | G_n) → V a.s., where Y_n = n² (Z_{n−1} − Z_n)²; see Lemma 3. Write (n+1)^{−2} E(Y_{n+1} | G_n) as

Z_n (1 − Z_n)² E(B_{n+1}²/(S_n + B_{n+1})² | G_n) + Z_n² (1 − Z_n) E(R_{n+1}²/(S_n + R_{n+1})² | G_n).

Since S_n/n → m a.s. (by Lemma 11) and B_{n+1} ≤ c, then

n² E(B_{n+1}²/(S_n + B_{n+1})² | G_n) ≤ n² E(B_{n+1}²/S_n² | G_n) = n² E B_{n+1}²/S_n² → q/m² a.s.

and

n² E(B_{n+1}²/(S_n + B_{n+1})² | G_n) ≥ n² E(B_{n+1}²/(S_n + c)² | G_n) = n² E B_{n+1}²/(S_n + c)² → q/m² a.s.

Similarly, n² E(R_{n+1}²/(S_n + R_{n+1})² | G_n) → s/m² a.s. Since Z_n → Z a.s., it follows that

E(Y_{n+1} | G_n) → Z (1 − Z)² q/m² + Z² (1 − Z) s/m² = V a.s.

This concludes the (sketch of the) proof.

Remark 8. In order for (C_n, D_n) → N(0, U) × N(0, V) stably, some of the assumptions of Corollary 7 can be stated in a different form. We mention two (independent) facts.

First, condition (5) can be weakened into uniform integrability of ((B_n + R_n)²).


Second, the assumption that (B_n, R_n) is independent of (X_1, B_1, R_1, ..., X_{n−1}, B_{n−1}, R_{n−1}, X_n) can be replaced by the following four conditions:

(i) (B_n, R_n) conditionally independent of X_n given G_{n−1};
(ii) condition (5) holds for some u > 4;
(iii) there are an integer n_0 and a constant l > 0 such that E(B_n ∧ n^{1/4} | G_{n−1}) ≥ l and E(R_n ∧ n^{1/4} | G_{n−1}) ≥ l a.s. whenever n ≥ n_0;
(iv) there are random variables m, q, s such that E(B_n | G_{n−1}) = E(R_n | G_{n−1}) → m, E(B_n² | G_{n−1}) → q, E(R_n² | G_{n−1}) → s, all in probability.

Even if in a different framework, conditions similar to (i)-(iv) are in [3].

4.4. The multicolor case. To avoid technicalities, we first investigated two color urns, but Theorem 2 applies to the multicolor case as well.

An urn contains a_j > 0 balls of color j ∈ {1, ..., d}, where d ≥ 2. Let X_{n,j} denote the indicator of {ball of color j at time n}. In case X_{n,j} = 1, the ball which has been drawn is replaced together with A_{n,j} more balls of color j. Formally, we assume that (X_{n,j}, A_{n,j} : n ≥ 1, 1 ≤ j ≤ d) are random variables on the probability space (Ω, A, P) satisfying

(∗∗) X_{n,j} ∈ {0, 1}, Σ_{j=1}^d X_{n,j} = 1, A_{n,j} ≥ 0,
(A_{n,1}, ..., A_{n,d}) independent of (A_{k,j}, X_{k,j}, X_{n,j} : 1 ≤ k < n, 1 ≤ j ≤ d),
Z_{n,j} = P(X_{n+1,j} = 1 | G_n) = (a_j + Σ_{k=1}^n A_{k,j} X_{k,j})/(Σ_{i=1}^d a_i + Σ_{k=1}^n Σ_{i=1}^d A_{k,i} X_{k,i}) a.s.,

where G_0 = {∅, Ω}, G_n = σ(A_{k,j}, X_{k,j} : 1 ≤ k ≤ n, 1 ≤ j ≤ d).

Note that

Z_{n+1,j} − Z_{n,j} = (1 − Z_{n,j}) A_{n+1,j} X_{n+1,j}/(S_n + A_{n+1,j}) − Z_{n,j} Σ_{i≠j} A_{n+1,i} X_{n+1,i}/(S_n + A_{n+1,i}),

where S_n = Σ_{i=1}^d a_i + Σ_{k=1}^n Σ_{i=1}^d A_{k,i} X_{k,i}.

In addition to (∗∗), as in Subsection 4.3, we ask the moment condition

sup_n E[(Σ_{j=1}^d A_{n,j})^u] < ∞ for some u > 2.   (6)

Further, it is assumed that

E A_{n,j} = E A_{n,1} for each n ≥ 1 and 1 ≤ j ≤ d, and   (7)

m := lim_n E A_{n,1} > 0, q_j := lim_n E A_{n,j}² for each 1 ≤ j ≤ d.

Fix 1 ≤ j ≤ d. Since E A_{n,i} = E A_{n,1} for all n and i, the same calculation as in Subsection 4.3 yields

|E(Z_{n+1,j} | G_n) − Z_{n,j}| ≤ Σ_{i=1}^d E A_{n+1,i}²/S_n² a.s.

Also, E(S_n^{−p}) = O(n^{−p}) for all p > 0; see Remark 12. Thus,

E|E(Z_{n+1,j} | G_n) − Z_{n,j}|^p = O(n^{−2p}) for all p > 0.   (8)

In particular, Z_{n,j} meets condition (3), so that Z_{n,j} → Z^{(j)} a.s. for some random variable Z^{(j)}. Define

C_{n,j} = √n ((1/n) Σ_{k=1}^n X_{k,j} − Z_{n,j}) and D_{n,j} = √n (Z_{n,j} − Z^{(j)}).

The next result is quite expected at this point.

Corollary 9. Suppose conditions (∗∗), (6), (7) hold and fix 1 ≤ j ≤ d. Then,

(C_{n,j}, D_{n,j}) → N(0, U_j) × N(0, V_j) stably, where

U_j = V_j − Z^{(j)} (1 − Z^{(j)}) and V_j = (Z^{(j)}/m²) (q_j (1 − Z^{(j)})² + Z^{(j)} Σ_{i≠j} q_i Z^{(i)}).

Moreover, E(f(D_{n,j}) | G_n) → N(0, V_j)(f) a.s. for each f ∈ C_b(R), that is, D_{n,j} meets condition (4).

Proof. Just repeat the proof of Corollary 7 with X_{n,j} in the place of X_n. □

A vectorial version of Corollary 9 can be obtained with slight effort. Let N_d(0, Σ) denote the d-dimensional Gaussian law with mean vector 0 and covariance matrix Σ, and let

C_n = (C_{n,1}, ..., C_{n,d}), D_n = (D_{n,1}, ..., D_{n,d}).

Corollary 10. Suppose conditions (∗∗), (6), (7) hold. Then,

(C_n, D_n) → N_d(0, U) × N_d(0, V) stably,

where U, V are the d × d matrices with entries U_{j,j} = U_j, V_{j,j} = V_j, and

U_{i,j} = V_{i,j} + Z^{(i)} Z^{(j)}, V_{i,j} = (Z^{(i)} Z^{(j)}/m²) (Σ_{h=1}^d q_h Z^{(h)} − q_i − q_j) for i ≠ j.

Moreover, E(f(D_n) | G_n) → N_d(0, V)(f) a.s. for each f ∈ C_b(R^d).

Proof. Given a linear functional $\phi : \mathbb{R}^d \to \mathbb{R}$, it suffices to see that $\phi(C_n) \longrightarrow N_d(0, U) \circ \phi^{-1}$ stably, and $E\bigl( g \circ \phi(D_n) \mid G_n \bigr) \stackrel{a.s.}{\longrightarrow} N_d(0, V)(g \circ \phi)$ for each $g \in C_b(\mathbb{R})$. To this purpose, note that
\[
\phi(C_n) = \sqrt{n}\,\Bigl( \frac{1}{n} \sum_{k=1}^n \phi(X_{k,1}, \ldots, X_{k,d}) - E\bigl( \phi(X_{n+1,1}, \ldots, X_{n+1,d}) \mid G_n \bigr) \Bigr),
\]
\[
\phi(D_n) = \sqrt{n}\,\Bigl( E\bigl( \phi(X_{n+1,1}, \ldots, X_{n+1,d}) \mid G_n \bigr) - \phi\bigl( Z^{(1)}, \ldots, Z^{(d)} \bigr) \Bigr),
\]
and repeat again the proof of Corollary 7 with $\phi(X_{n,1}, \ldots, X_{n,d})$ in the place of $X_n$. □


A nice consequence of Corollary 10 is that
\[
W_n = C_n + D_n \longrightarrow N_d(0, U + V) \quad\text{stably}
\]
provided conditions (∗∗), (6), (7) hold, where $W_n = (W_{n,1}, \ldots, W_{n,d})$ and
\[
W_{n,j} = \sqrt{n}\,\Bigl( \frac{1}{n} \sum_{k=1}^n X_{k,j} - Z^{(j)} \Bigr).
\]
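As a purely illustrative aside (not part of the original argument), the limit behaviour above can be observed by simulating a multicolor randomly reinforced urn. The specific choices below are assumptions made for this sketch only: initial composition $a_1 = \cdots = a_d = 1$ and reinforcements drawn from Uniform$(0, 2)$, so that $m = 1$ and $q_j = 4/3$ for every color.

```python
import random

def simulate_rru(d=3, n=20000, seed=1):
    """Simulate a d-color randomly reinforced urn.

    At each step a color j is drawn with probability Z_{n,j} (its current
    proportion in the urn) and A balls of that color are added, where
    A ~ Uniform(0, 2) is the random reinforcement (illustrative choice:
    m = E[A] = 1, q_j = E[A^2] = 4/3 for each color).
    Returns the final counts and the proportion vectors at n/2 and n.
    """
    rng = random.Random(seed)
    counts = [1.0] * d          # initial composition a_1 = ... = a_d = 1
    snapshots = []              # proportions Z_{n,j} at two checkpoints
    for step in range(1, n + 1):
        s = sum(counts)
        # draw a color according to the current proportions
        u, acc, j = rng.random() * s, 0.0, d - 1
        for i, c in enumerate(counts):
            acc += c
            if u <= acc:
                j = i
                break
        counts[j] += rng.uniform(0.0, 2.0)   # random reinforcement A
        if step in (n // 2, n):
            snapshots.append([c / sum(counts) for c in counts])
    return counts, snapshots

counts, (z_half, z_full) = simulate_rru()
# S_n / n should be close to m = 1, and the proportions Z_{n,j}
# should have nearly settled between n/2 and n:
print(sum(counts) / 20000, max(abs(a - b) for a, b in zip(z_half, z_full)))
```

With these choices the proportions $Z_{n,j}$ visibly settle down, in line with $Z_{n,j} \to Z^{(j)}$ a.s., while $S_n/n$ approaches $m$; the random limit $Z^{(j)}$ changes from seed to seed, as the theory predicts.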

APPENDIX

In the notation of Subsection 4.3, let
\[
S_n = b + r + \sum_{k=1}^n \bigl( B_k X_k + R_k (1 - X_k) \bigr).
\]

Lemma 11. Under the assumptions of Corollary 7,
\[
\frac{n}{S_n} \longrightarrow \frac{1}{m} \quad\text{a.s. and in } L_p \text{ for all } p > 0.
\]

Proof. Let $Y_n = B_n X_n + R_n (1 - X_n)$. By (∗) and $EB_{n+1} = ER_{n+1}$,
\[
E(Y_{n+1} \mid G_n) = EB_{n+1}\, E(X_{n+1} \mid G_n) + ER_{n+1}\, E(1 - X_{n+1} \mid G_n)
= Z_n\, EB_{n+1} + (1 - Z_n)\, EB_{n+1} = EB_{n+1} \longrightarrow m \quad\text{a.s.}
\]
Since $m > 0$, Lemma 3 implies $S_n / n \stackrel{a.s.}{\longrightarrow} m$, and hence $n / S_n \stackrel{a.s.}{\longrightarrow} 1/m$. To conclude the proof, it suffices to see that $E(S_n^{-p}) = O(n^{-p})$ for all $p > 0$. Given $c > 0$, define

\[
S_n^{(c)} = \sum_{k=1}^n \Bigl[ X_k \bigl( B_k \wedge c - E(B_k \wedge c) \bigr) + (1 - X_k) \bigl( R_k \wedge c - E(R_k \wedge c) \bigr) \Bigr].
\]
By a classical martingale inequality (see e.g. Lemma 1.5 of [14]),
\[
P\bigl( |S_n^{(c)}| > x \bigr) \leq 2 \exp\bigl( -x^2 / (2 c^2 n) \bigr) \quad\text{for all } x > 0.
\]

Since $EB_n = ER_n \longrightarrow m$ and both $(B_n)$, $(R_n)$ are uniformly integrable (as $\sup_n E(B_n^2 + R_n^2) < \infty$), there are $c > 0$ and an integer $n_0$ such that
\[
m_n := \sum_{k=1}^n \min\bigl( E(B_k \wedge c),\, E(R_k \wedge c) \bigr) > \frac{n\,m}{2} \quad\text{for all } n \geq n_0.
\]
Fix one such $c > 0$ and let $l = m/4 > 0$. For every $p > 0$, one can write

\[
E(S_n^{-p}) = p \int_{b+r}^{\infty} t^{-p-1}\, P(S_n < t)\, dt
\leq \frac{p}{(b+r)^{p+1}} \int_{b+r}^{b+r+nl} P(S_n < t)\, dt + p \int_{b+r+nl}^{\infty} t^{-p-1}\, dt.
\]
Clearly, $p \int_{b+r+nl}^{\infty} t^{-p-1}\, dt = (b + r + nl)^{-p} = O(n^{-p})$. Further, for each $n \geq n_0$ and $t < b + r + nl$, since $m_n > 2nl$ one obtains
\[
P(S_n < t) \leq P\bigl( S_n^{(c)} < t - b - r - m_n \bigr) \leq P\bigl( S_n^{(c)} < t - b - r - 2nl \bigr)
\leq P\bigl( |S_n^{(c)}| > b + r + 2nl - t \bigr) \leq 2 \exp\bigl( -(b + r + 2nl - t)^2 / (2 c^2 n) \bigr).
\]
Hence, $\int_{b+r}^{b+r+nl} P(S_n < t)\, dt \leq 2nl \exp\bigl( -n l^2 / (2 c^2) \bigr)$ for every $n \geq n_0$, so that $E(S_n^{-p}) = O(n^{-p})$. □
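As a numerical sanity check of Lemma 11 (illustrative only: the initial composition $b = r = 1$ and the reinforcement law Uniform$(0.5, 1.5)$, which gives $m = 1$, are assumptions of this sketch, not part of the paper), one can estimate $n/S_n$ and $n^2\, E(S_n^{-2})$ by Monte Carlo:

```python
import random

def two_color_sn(n, rng):
    """One trajectory of S_n = b + r + sum_k (B_k X_k + R_k (1 - X_k))
    for a two-color randomly reinforced urn with b = r = 1 and
    B_k, R_k ~ Uniform(0.5, 1.5), so that m = E[B_k] = E[R_k] = 1."""
    black, red = 1.0, 1.0
    for _ in range(n):
        if rng.random() < black / (black + red):   # X_k = 1: black drawn
            black += rng.uniform(0.5, 1.5)         # reinforcement B_k
        else:                                      # X_k = 0: red drawn
            red += rng.uniform(0.5, 1.5)           # reinforcement R_k
    return black + red

rng = random.Random(7)
n = 2000
ratios = [n / two_color_sn(n, rng) for _ in range(50)]   # samples of n / S_n
mean_ratio = sum(ratios) / len(ratios)                   # should be near 1/m = 1
# E[(n/S_n)^2] = n^2 E(S_n^{-2}); it should stay bounded,
# consistent with E(S_n^{-p}) = O(n^{-p}):
scaled_second_moment = sum(r * r for r in ratios) / len(ratios)
```

Both quantities hover near $1/m = 1$, in line with $n/S_n \to 1/m$ in $L_p$ and with the bound $E(S_n^{-p}) = O(n^{-p})$ just proved.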


Remark 12. As in Subsection 4.4, let $S_n = \sum_{i=1}^d a_i + \sum_{k=1}^n \sum_{i=1}^d A_{k,i} X_{k,i}$. Under conditions (∗∗), (6), (7), the previous proof still applies to such $S_n$. Thus, $n / S_n \longrightarrow 1/m$ a.s. and in $L_p$ for all $p > 0$.

Proof of Corollary 7. By Lemma 1, it is enough to prove that $C_n \longrightarrow N(0, U)$ stably and that $D_n$ meets condition (4). Recall from Subsection 4.3 that
\[
Z_{n+1} - Z_n = \frac{(1 - Z_n)\, X_{n+1} B_{n+1} - Z_n (1 - X_{n+1}) R_{n+1}}{S_{n+1}}
\]
and $E\bigl| E(Z_{n+1} \mid G_n) - Z_n \bigr|^p = O(n^{-2p})$ for all $p > 0$. In particular, condition (3) holds and
\[
\sum_k \sqrt{k}\; E\bigl| E(Z_k \mid G_{k-1}) - Z_{k-1} \bigr| < \infty.
\]

"$D_n$ meets condition (4)". By (5) and Lemma 11,
\[
E|Z_{k-1} - Z_k|^u \leq E\,\frac{(B_k + R_k)^u}{S_{k-1}^u} = E(B_k + R_k)^u\; E(S_{k-1}^{-u}) = O(k^{-u}).
\]
Thus,
\[
E\Bigl( \sup_k \bigl( \sqrt{k}\, |Z_{k-1} - Z_k| \bigr)^u \Bigr) \leq \sum_k k^{u/2}\, E|Z_{k-1} - Z_k|^u < \infty \quad\text{as } u > 2.
\]
In view of Proposition 4, it remains only to prove that

\[
n \sum_{k \geq n} (Z_{k-1} - Z_k)^2
= n \sum_{k \geq n} \Bigl( \frac{(1 - Z_{k-1}) X_k B_k}{S_k} - \frac{Z_{k-1} (1 - X_k) R_k}{S_k} \Bigr)^2
\]
\[
= n \sum_{k \geq n} \frac{(1 - Z_{k-1})^2 X_k B_k^2}{(S_{k-1} + B_k)^2} + n \sum_{k \geq n} \frac{Z_{k-1}^2 (1 - X_k) R_k^2}{(S_{k-1} + R_k)^2}
\]
converges a.s. to $V = Z(1 - Z)\,\dfrac{(1 - Z)\,q + Z\,s}{m^2}$; here the cross terms vanish since $X_k(1 - X_k) = 0$, and $S_k = S_{k-1} + B_k$ on $\{X_k = 1\}$ while $S_k = S_{k-1} + R_k$ on $\{X_k = 0\}$. It is enough to show that
\[
n \sum_{k \geq n} \frac{(1 - Z_{k-1})^2 X_k B_k^2}{(S_{k-1} + B_k)^2} \stackrel{a.s.}{\longrightarrow} Z (1 - Z)^2\, \frac{q}{m^2}
\quad\text{and}\quad
n \sum_{k \geq n} \frac{Z_{k-1}^2 (1 - X_k) R_k^2}{(S_{k-1} + R_k)^2} \stackrel{a.s.}{\longrightarrow} Z^2 (1 - Z)\, \frac{s}{m^2}.
\]
These two limit relations can be proved by exactly the same argument, and thus we just prove the first one. Let $U_n = B_n\, I_{\{B_n \leq \sqrt{n}\}}$. Since $P(B_n > \sqrt{n}) \leq n^{-u/2}\, EB_n^u$, condition (5) yields $P(B_n \neq U_n \text{ i.o.}) = 0$. Hence, it suffices to show that

\[
n \sum_{k \geq n} \frac{(1 - Z_{k-1})^2 X_k U_k^2}{(S_{k-1} + U_k)^2} \stackrel{a.s.}{\longrightarrow} Z (1 - Z)^2\, \frac{q}{m^2}. \tag{9}
\]

Let $Y_n = \dfrac{n^2 (1 - Z_{n-1})^2 X_n U_n^2}{(S_{n-1} + U_n)^2}$. Since $(B_n^2)$ is uniformly integrable, $EU_n^2 \longrightarrow q$. Furthermore, $S_n / n \stackrel{a.s.}{\longrightarrow} m$ and $Z_n \stackrel{a.s.}{\longrightarrow} Z$. Thus,
\[
E(Y_{n+1} \mid G_n) \leq (1 - Z_n)^2 (n+1)^2\, E\Bigl( \frac{X_{n+1} U_{n+1}^2}{S_n^2} \,\Big|\, G_n \Bigr)
= \frac{Z_n (1 - Z_n)^2 (n+1)^2}{S_n^2}\, EU_{n+1}^2 \stackrel{a.s.}{\longrightarrow} Z (1 - Z)^2\, \frac{q}{m^2}
\]
and
\[
E(Y_{n+1} \mid G_n) \geq (1 - Z_n)^2 (n+1)^2\, E\Bigl( \frac{X_{n+1} U_{n+1}^2}{(S_n + \sqrt{n+1})^2} \,\Big|\, G_n \Bigr)
= \frac{Z_n (1 - Z_n)^2 (n+1)^2}{(S_n + \sqrt{n+1})^2}\, EU_{n+1}^2 \stackrel{a.s.}{\longrightarrow} Z (1 - Z)^2\, \frac{q}{m^2}.
\]


By Lemma 3, for getting relation (9), it suffices that $\sum_n EY_n^2 / n^2 < \infty$. Since
\[
\frac{EU_n^4}{n^2} \leq \frac{E\bigl( B_n^2\, I_{\{B_n^2 \leq \sqrt{n}\}} \bigr)}{n^{3/2}} + \frac{E\bigl( B_n^2\, I_{\{B_n^2 > \sqrt{n}\}} \bigr)}{n}
\leq \frac{EB_n^2}{n^{3/2}} + \frac{EB_n^u}{n^{1 + (u-2)/4}},
\]
condition (5) implies $\sum_n EU_n^4 / n^2 < \infty$. By Lemma 11, $E(S_{n-1}^{-4}) = O(n^{-4})$. Then,
\[
\sum_n \frac{EY_n^2}{n^2} \leq \sum_n n^2\, E\Bigl( \frac{U_n^4}{S_{n-1}^4} \Bigr) = \sum_n n^2\, E(S_{n-1}^{-4})\, EU_n^4 \leq c \sum_n \frac{EU_n^4}{n^2} < \infty
\]
for some constant $c$. Hence, condition (9) holds.

"$C_n \longrightarrow N(0, U)$ stably". By Theorem 2, it suffices to check conditions (a) and (b) with $U = Z(1 - Z)\,\Bigl( \dfrac{(1 - Z)\,q + Z\,s}{m^2} - 1 \Bigr)$. As to (a), since $E|Z_{k-1} - Z_k|^u = O(k^{-u})$,
\[
n^{-u/2}\, E\Bigl( \max_{1 \leq k \leq n} k\, |Z_{k-1} - Z_k| \Bigr)^u \leq n^{-u/2} \sum_{k=1}^n k^u\, E|Z_{k-1} - Z_k|^u \longrightarrow 0.
\]

We next prove condition (b). After some algebra, one obtains
\[
E\bigl[ (X_n - Z_{n-1})(Z_{n-1} - Z_n) \mid G_{n-1} \bigr]
= -\,Z_{n-1} (1 - Z_{n-1})\, E\Bigl( \frac{B_n}{S_{n-1} + B_n} \,\Big|\, G_{n-1} \Bigr)
+ Z_{n-1}^2 (1 - Z_{n-1})\, E\Bigl( \frac{B_n}{S_{n-1} + B_n} - \frac{R_n}{S_{n-1} + R_n} \,\Big|\, G_{n-1} \Bigr) \quad\text{a.s.}
\]
Arguing as in the first part of this proof ("$D_n$ meets condition (4)"),
\[
n\, E\Bigl( \frac{B_n}{S_{n-1} + B_n} \,\Big|\, G_{n-1} \Bigr) \stackrel{a.s.}{\longrightarrow} 1
\quad\text{and}\quad
n\, E\Bigl( \frac{R_n}{S_{n-1} + R_n} \,\Big|\, G_{n-1} \Bigr) \stackrel{a.s.}{\longrightarrow} 1.
\]
Thus, $n\, E\bigl[ (X_n - Z_{n-1})(Z_{n-1} - Z_n) \mid G_{n-1} \bigr] \stackrel{a.s.}{\longrightarrow} -\,Z(1 - Z)$. Further,
\[
E\bigl[ (X_n - Z_{n-1})^2 \mid G_{n-1} \bigr] = Z_{n-1} - Z_{n-1}^2 \stackrel{a.s.}{\longrightarrow} Z(1 - Z).
\]

Thus, Lemma 3 implies
\[
\frac{1}{n} \sum_{k=1}^n (X_k - Z_{k-1})^2 + \frac{2}{n} \sum_{k=1}^n k\, (X_k - Z_{k-1})(Z_{k-1} - Z_k) \stackrel{a.s.}{\longrightarrow} -\,Z(1 - Z).
\]

Finally, write
\[
\frac{1}{n} \sum_{k=1}^n k^2 (Z_{k-1} - Z_k)^2
= \frac{1}{n} \sum_{k=1}^n k^2 \Bigl[ \frac{(1 - Z_{k-1})^2 X_k B_k^2}{(S_{k-1} + B_k)^2} + \frac{Z_{k-1}^2 (1 - X_k) R_k^2}{(S_{k-1} + R_k)^2} \Bigr].
\]
By Lemma 3 and the same truncation technique used in the first part of this proof, $\frac{1}{n} \sum_{k=1}^n k^2 (Z_{k-1} - Z_k)^2 \stackrel{a.s.}{\longrightarrow} V$. Squaring,
\[
\frac{1}{n} \sum_{k=1}^n \bigl( X_k - Z_{k-1} + k (Z_{k-1} - Z_k) \bigr)^2 \stackrel{a.s.}{\longrightarrow} V - Z(1 - Z) = U,
\]
that is, condition (b) holds. This concludes the proof. □

References

[1] Aldous D.J. and Eagleson G.K. (1978) On mixing and stability of limit theorems, Ann. Probab., 6, 325-331.
[2] Aletti G., May C. and Secchi P. (2009) A central limit theorem, and related results, for a two-color randomly reinforced urn, Adv. Appl. Probab., 41, 829-844.
[3] Bai Z.D. and Hu F. (2005) Asymptotics in randomized urn models, Ann. Appl. Probab., 15, 914-940.
[4] Bassetti F., Crimaldi I. and Leisen F. (2010) Conditionally identically distributed species sampling sequences, Adv. Appl. Probab., 42, 433-459.
[5] Berti P., Pratelli L. and Rigo P. (2004) Limit theorems for a class of identically distributed random variables, Ann. Probab., 32, 2029-2052.
[6] Berti P., Crimaldi I., Pratelli L. and Rigo P. (2009) Rate of convergence of predictive distributions for dependent data, Bernoulli, 15, 1351-1367.
[7] Crimaldi I., Letta G. and Pratelli L. (2007) A strong form of stable convergence, Sem. de Probab. XL, LNM, 1899, 203-225.
[8] Crimaldi I. (2009) An almost sure conditional convergence result and an application to a generalized Polya urn, Internat. Math. Forum, 23, 1139-1156.
[9] Durham S.D., Flournoy N. and Li W. (1998) A sequential design for maximizing the probability of a favourable response, Canad. J. Statist., 26, 479-495.
[10] Ghosh J.K. and Ramamoorthi R.V. (2003) Bayesian Nonparametrics, Springer.
[11] Hall P. and Heyde C.C. (1980) Martingale Limit Theory and Its Applications, Academic Press.
[12] Janson S. (2004) Functional limit theorems for multitype branching processes and generalized Polya urns, Stoch. Proc. Appl., 110, 177-245.
[13] Janson S. (2005) Limit theorems for triangular urn schemes, Probab. Theory Relat. Fields, 134, 417-452.
[14] Ledoux M. and Talagrand M. (1991) Probability in Banach Spaces, Springer-Verlag.
[15] May C. and Flournoy N. (2009) Asymptotics in response-adaptive designs generated by a two-color, randomly reinforced urn, Ann. Statist., 37, 1058-1078.
[16] Pemantle R. (2007) A survey of random processes with reinforcement, Probab. Surveys, 4, 1-79.
[17] Pitman J. and Yor M. (1997) The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator, Ann. Probab., 25, 855-900.
[18] Renyi A. (1963) On stable sequences of events, Sankhya A, 25, 293-302.

Patrizia Berti, Dipartimento di Matematica Pura ed Applicata "G. Vitali", Università di Modena e Reggio-Emilia, via Campi 213/B, 41100 Modena, Italy
E-mail address: patrizia.berti@unimore.it

Irene Crimaldi, Dipartimento di Matematica, Università di Bologna, Piazza di Porta San Donato 5, 40126 Bologna, Italy
E-mail address: crimaldi@dm.unibo.it

Luca Pratelli, Accademia Navale, viale Italia 72, 57100 Livorno, Italy
E-mail address: pratel@mail.dm.unipi.it

Pietro Rigo (corresponding author), Dipartimento di Economia Politica e Metodi Quantitativi, Università di Pavia, via S. Felice 5, 27100 Pavia, Italy
E-mail address: prigo@eco.unipv.it
