
Minimal cyclic random motion in $\mathbb{R}^n$ and hyper-Bessel functions

A. Lachal$^{a*}$, S. Leorato$^{b}$, E. Orsingher$^{b}$

$^a$ INSA de Lyon, Centre de Mathématiques, Bât. Léonard de Vinci, 20, avenue A. Einstein, 69621 VILLEURBANNE CEDEX, FRANCE
$^b$ Università di Roma "La Sapienza", Piazzale A. Moro, 5, 00185 ROMA, ITALY

Abstract

We obtain the explicit distribution of the position of a particle performing a cyclic, minimal, random motion with constant velocity $c$ in $\mathbb{R}^n$. The $n+1$ possible directions of motion as well as the support of the distribution form regular hyperpolyhedra (the first one having constant sides and the other expanding with time $t$), the geometrical features of which are investigated here.

The distribution is obtained by using order statistics and is expressed in terms of hyper-Bessel functions of order $n+1$. These distributions are proved to be connected with $(n+1)$th order p.d.e.'s which can be reduced to Bessel equations of higher order.

Some properties of the distributions obtained are examined. This research has been inspired by a conjecture formulated in Orsingher and Sommella (2004) which is here proved to be false.

Keywords. Cyclic random motions, hyper-Bessel functions, $(n+1)$th order partial differential equations, order statistics, hyperpolyhedron.

Résumé

In this work, we study the evolution in the space $\mathbb{R}^n$ of a particle performing a cyclic random motion with constant velocity. The motion is minimal in the sense that the directions taken are $n+1$ in number; moreover, these directions form a fixed regular hyper-polyhedron. The support of the distribution of the position of the particle is also a regular hyper-polyhedron (whose size grows with time).

By appealing to order statistics, we obtain explicitly the probability law of the position of the particle at a given instant. The result is expressed by means of generalized Bessel functions of order $n+1$ and shows that this study is linked to hyperbolic partial differential equations of order $n+1$.

This work was inspired by a conjecture formulated by Orsingher and Sommella (2004), which finally turns out to be false.

Mots-clefs. Cyclic random motions, Bessel functions of order $n+1$, partial differential equations of order $n+1$, order statistics, hyper-polyhedron.

AMS Classification. Primary: 60K99. Secondary: 62G30; 33E99


1 Introduction

We study here the cyclic motion in $\mathbb{R}^n$ of a particle (with position $(X(t),\ t\ge 0)$) which can take $n+1$ directions $\vec{v}_j$, $j = 0,\dots,n$, and moves with speed $c < +\infty$. The extremities of the vectors representing the directions form a regular $n$-dimensional polyhedron with $n+1$ vertices (for short, we say $(n+1)$-hedron) inscribed in the unit sphere.

The particle can initially choose one of the possible directions with probability $\frac{1}{n+1}$. The directions are taken successively at Poisson paced times; that is, the particle moving with direction $\vec{v}_j$, after a Poisson event, takes the direction $\vec{v}_{j+1}$, $j = 0,\dots,n$ (with $\vec{v}_{n+1} = \vec{v}_0$).

The minimality of the number of directions, together with the cyclicity, makes the derivation of the explicit distributions of motion possible.

Motions of particles in a turbulent medium, for example in the presence of a vortex, can be adequately described by the models studied below.

Usually, the analysis of random motions with finite velocity is either performed by means of analytic arguments, essentially based on partial differential equations, or by a more probabilistic approach, based on order statistics.

The analytic method has been implemented in the study of cyclic motions on the plane (with three directions) and in the space $\mathbb{R}^3$ (with four directions associated with a regular tetrahedron) by Orsingher (2002) and Orsingher and Sommella (2004). Random motions in the plane with three directions and Erlang-distributed interarrival times are considered in Di Crescenzo (2002).

The approach based on order statistics has proved to be more suitable for generalizations to higher-order spaces (as will be applied here) as well as to non-cyclic motions; the case of planar motions, symmetrically deviating and with uniform choice of directions, is examined in Leorato and Orsingher (2004). The present work proves that the conjecture formulated in the paper by Orsingher and Sommella (2004) is false; we are now able to obtain the exact distributions of the position of a randomly moving particle in $\mathbb{R}^n$ and to show that they match the necessary requirements, including the connection with the partial differential equations governing the probability laws. These equations, derived by different authors (Kolesnik, Samoilenko and others), are related to the hyper-Bessel functions analyzed by Kiryakova (1994) and Turbin and Plotkin (1991).

Our main result is the derivation of the distribution of $X(t)$:

$$\tilde{p}_r(x,t)\,dx = \Pr\left\{X(t)\in dx,\ \text{complete cycle} + r \text{ directions}\right\}$$
$$= dx\,e^{-\lambda t}\,\frac{(n+1)^r}{n!\,V_n}\left(\frac{\lambda}{c}\right)^{n+r} H_{r,n+1}(x,t)\; I_{r,n+1}\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n} h_j(x,t)}\right) \quad (1.1)$$

where $x = (x_0,\dots,x_{n-1})$, $dx = dx_0\cdots dx_{n-1}$, and $h_j(x,t) = ct + n\sum_{i=0}^{n-1} v_{i,j}x_i = 0$ are the equations of the hyperfaces of a $(n+1)$-hedron $T_{ct}$ with volume $V_n(ct)^n$, and $(v_{0,j},\dots,v_{n-1,j})$ are the coordinates of the vectors $\vec{v}_j$ for $j = 0,\dots,n$.

The functions $H_{r,n+1}$ are defined as
$$H_{r,n+1}(x,t) = \frac{1}{n+1}\sum_{j=0}^{n}\;\prod_{l=j}^{j+r-1} h_l(x,t)$$


while the hyper-Bessel functions $I_{r,n}(x)$ are
$$I_{r,n}(x) = \sum_{q=0}^{\infty}\frac{1}{(q!)^{n-r}\,((q+1)!)^r}\left(\frac{x}{n}\right)^{nq}, \quad 0\le r\le n.$$
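This series can be checked numerically by truncation (a Python sketch; the function name `hyper_bessel` and the truncation length are ours, not from the paper): for $r = 0$, $n = 1$ the series reduces to $\sum_q x^q/q! = e^x$, and every $I_{r,n}$ equals 1 at $x = 0$.

```python
import math

def hyper_bessel(r, n, x, terms=60):
    """Truncated series I_{r,n}(x) = sum_{q>=0} (x/n)^(n*q) / ((q!)^(n-r) * ((q+1)!)^r)."""
    return sum((x / n) ** (n * q)
               / (math.factorial(q) ** (n - r) * math.factorial(q + 1) ** r)
               for q in range(terms))

# Sanity checks: I_{0,1}(x) = e^x, and I_{r,n}(0) = 1 for every 0 <= r <= n.
assert abs(hyper_bessel(0, 1, 1.5) - math.exp(1.5)) < 1e-12
assert hyper_bessel(2, 3, 0.0) == 1.0
```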

Our paper is organized as follows. The second section is devoted to the geometrical description of the directions of motion together with that of the support of the distributions.

We then turn our attention to the probabilistic analysis of the cyclic motion, as well as to the analysis of the related governing equations (Section 3). In particular, the functions $q_j(u)$, $0\le j\le n$, defined by
$$p_j(x,t) = \mathrm{const}\cdot q_j(u) = \Pr\{X(t)\in dx,\ \text{the current direction is } \vec{v}_j\}/dx,$$
where $u = (u_0,\dots,u_n) = (h_0(x,t),\dots,h_n(x,t))$, are governed by the $(n+1)$th order partial differential equations
$$\frac{\partial^{n+1}q_j}{\partial u_0\cdots\partial u_n} = \left(\frac{\lambda}{(n+1)c}\right)^{n+1}q_j. \quad (1.2)$$
We shall see that the p.d.e. (1.2) is related to the $(n+1)$th order Bessel equation
$$\left(w\frac{\partial}{\partial w}\right)^{n+1}q = \left(\frac{\lambda w}{c}\right)^{n+1}q.$$

The fourth part of the paper is concerned with the explicit derivation, by means of order statistics, of the conditional probabilities
$$\Pr\{X(t)\in dx \mid N(t) = (n+1)q + r - 1\}, \quad r = 0,\dots,n, \quad q\ge 1, \quad (1.3)$$
where $N(t)$ denotes the number of Poisson events up to time $t$.

Our analysis shows that the conditional distributions (1.3) coincide with the corresponding components of (1.12) of Orsingher and Sommella (2004) for $r = 0, n$ and arbitrary values of $q = 1, 2, \dots$, while for $r = 1,\dots,n-1$ the conditional distributions and, a fortiori, the absolutely continuous component differ from those conjectured. By summing (1.1) we get a complete expression for the absolutely continuous part of the distribution of motion in $\mathbb{R}^n$. Let us point out that the singular component of the distribution of $X(t)$ is spread on the $\sum_{k=1}^{n}\binom{n+1}{k}$ subspaces of dimension $d = k-1$ composing the hull of $T_{ct}$, with $0\le d\le n-1$.

The concluding section is devoted to checking that
$$\int_{T_{ct}}\sum_{q=1}^{\infty}\sum_{r=0}^{n}\Pr\{X(t)\in dx,\ N(t)=(n+1)q+r-1\} = 1 - e^{-\lambda t} - \cdots - e^{-\lambda t}\frac{(\lambda t)^{n-1}}{(n-1)!} = \Pr\{N(t)\ge n\}$$
and this formula confirms that the support of $(X(t)\mid N(t)\ge n)$ is the interior $\mathrm{int}(T_{ct})$ of the $(n+1)$-hedron $T_{ct}$. We have also proved that
$$\frac{d}{dt}\left[\int_{T_{ct}} I_{0,n+1}\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right)dx\right] = \frac{d}{dt}\,\mathrm{Vol}(T_{ct}) + \int_{T_{ct}}\frac{\partial}{\partial t}\left[I_{0,n+1}\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right)\right]dx.$$

For higher-order derivatives, a similar formula does not hold, and this is the reason why the conjecture formulated in Orsingher and Sommella (2004) is not true. In Subsection 5.3, we show that the distributions obtained (suitably simplified) satisfy an $(n+1)$th order p.d.e. related to the hyper-Bessel equation.

2 The directional vectors and the geometry of $T_{ct}$

2.1 The vectors of directions

Let $\vec{v}_j$, $j = 0,\dots,n$, be the vectors representing the possible directions of the cyclic motion. We arbitrarily choose $\vec{v}_0 = (1, 0, \dots, 0)$ and we determine $\vec{v}_j$, $1\le j\le n-1$, in the form $\vec{v}_j = (v_{0,j}, v_{1,j}, \dots, v_{j,j}, 0, \dots, 0)$ by means of a recursive procedure, and finally $\vec{v}_n = (v_{0,n}, v_{1,n}, \dots, v_{n-1,n})$. For the evaluation of the numbers $v_{i,j}$, $0\le i\le n-1$, $0\le j\le n$, we use the following symmetry conditions:
$$\vec{v}_j\cdot\vec{v}_k = \mathrm{const} \quad \text{if } j\ne k, \quad (2.1)$$
$$|\vec{v}_j|^2 = 1 \quad \text{for all } j, \quad (2.2)$$
$$\sum_{j=0}^{n}\vec{v}_j = \vec{0}. \quad (2.3)$$

In order to evaluate the constant in (2.1) we write
$$0 = \left(\sum_{j=0}^{n}\vec{v}_j\right)\cdot\vec{v}_k = |\vec{v}_k|^2+\sum_{\substack{j=0\\ j\ne k}}^{n}\vec{v}_j\cdot\vec{v}_k = 1+n\,\mathrm{const},$$
which implies that $\mathrm{const} = -\frac{1}{n}$. So we can rewrite conditions (2.1) and (2.2) as
$$\vec{v}_j\cdot\vec{v}_k = \begin{cases}-\frac{1}{n} & \text{if } j\ne k,\\ 1 & \text{if } j = k.\end{cases} \quad (2.4)$$
By applying condition (2.4) with $(j,k) = (0,1)$ and $(j,k) = (1,1)$, we get
$$v_{0,1} = -\frac{1}{n} \quad \text{and} \quad v_{1,1}^2+\frac{1}{n^2} = 1,$$
so that $v_{1,1} = \pm\frac{\sqrt{n^2-1}}{n}$. By convention we take, throughout the paper, the positive signs. Thus the second directional vector has the following components: $\vec{v}_1 = \left(-\frac{1}{n},\ \frac{\sqrt{n^2-1}}{n},\ 0,\dots,0\right)$. We also get, from (2.4), that $v_{0,j} = -\frac{1}{n}$ for $1\le j\le n$; that is, all the first components of the vectors $\vec{v}_j$, $j\ge 1$, are identical.

Next, condition (2.4) applied to the pair $(\vec{v}_1,\vec{v}_j)$ yields, for $j\ge 2$,
$$-\frac{1}{n} = v_{0,1}v_{0,j}+v_{1,1}v_{1,j} = \frac{1}{n^2}+\frac{\sqrt{n^2-1}}{n}\,v_{1,j}$$
and therefore
$$v_{1,j} = -\frac{1}{n}\sqrt{\frac{n+1}{n-1}} \quad \text{for } 2\le j\le n.$$
By considering that $\frac{1}{n^2}+v_{1,2}^2+v_{2,2}^2 = 1$ and that $v_{1,2} = -\frac{1}{n}\sqrt{\frac{n+1}{n-1}}$, we obtain $v_{2,2} = \sqrt{\frac{n-2}{n-1}}\sqrt{\frac{n+1}{n}}$. The third directional vector thus writes
$$\vec{v}_2 = \left(-\frac{1}{n},\ -\frac{1}{n}\sqrt{\frac{n+1}{n-1}},\ \sqrt{\frac{n+1}{n}}\sqrt{\frac{n-2}{n-1}},\ 0,\dots,0\right).$$

These preliminary computations suggest we determine the vectors $\vec{v}_j$ in the form $\vec{v}_j = (a_0, a_1, \dots, a_{j-1}, d_j, 0, \dots, 0)$ for $0\le j\le n-1$, that is, in terms of their components,
$$v_{i,j} = \begin{cases}a_i & \text{if } i < j,\\ d_i & \text{if } i = j,\\ 0 & \text{if } i > j,\end{cases} \quad (2.5)$$
or, alternatively, in terms of the matrix of their components (whose columns are $\vec{v}_0, \vec{v}_1, \dots, \vec{v}_{n-1}$),
$$\begin{pmatrix}d_0 & a_0 & a_0 & \cdots & a_0 & \cdots & a_0\\ 0 & d_1 & a_1 & \cdots & a_1 & \cdots & a_1\\ 0 & 0 & d_2 & \cdots & a_2 & \cdots & a_2\\ \vdots & \vdots & \vdots & \ddots & \vdots & & \vdots\\ 0 & 0 & 0 & \cdots & d_j & \cdots & a_j\\ \vdots & \vdots & \vdots & & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 0 & \cdots & d_{n-1}\end{pmatrix} \quad (2.6)$$
In words, $a_i$ is the common $(i+1)$th component of all the vectors $\vec{v}_{i+1},\dots,\vec{v}_{n-1}$ and the $d_i$'s are the diagonal terms of the above matrix, which are assumed to be positive. This is shown by a recursive procedure. We have already seen that
$$a_0 = -\frac{1}{n}, \quad a_1 = -\frac{1}{n}\sqrt{\frac{n+1}{n-1}} \quad \text{and} \quad d_0 = 1, \quad d_1 = \frac{\sqrt{n^2-1}}{n}, \quad d_2 = \sqrt{\frac{(n+1)(n-2)}{n(n-1)}}.$$
Let us assume that
$$v_{l,j} = \begin{cases}a_l & \text{if } j > l,\\ d_l & \text{if } j = l,\\ 0 & \text{if } j < l,\end{cases} \quad (2.7)$$
for $l = 0, 1, \dots, i$. We then prove that this assertion holds for $l = i+1$. For $j\ge i+1$, by (2.4), we have that
$$\vec{v}_{i+1}\cdot\vec{v}_j = \sum_{k=0}^{n-1}v_{k,i+1}v_{k,j} = \sum_{k=0}^{i}a_k^2+v_{i+1,i+1}v_{i+1,j} = \begin{cases}1 & \text{if } j = i+1,\\ -\frac{1}{n} & \text{if } j > i+1,\end{cases}$$

from which we get
$$v_{i+1,i+1} = \sqrt{1-\sum_{k=0}^{i}a_k^2} =: d_{i+1} \quad (2.8)$$
and
$$v_{i+1,j} = -\frac{\frac{1}{n}+\sum_{k=0}^{i}a_k^2}{d_{i+1}} =: a_{i+1} \quad \text{for } j > i+1. \quad (2.9)$$
It is easy to check that $d_{i+1}\ne 0$, since $\vec{v}_{i+1}\cdot\vec{v}_{i+2} = \sum_{k=0}^{i}a_k^2+a_{i+1}d_{i+1} = -\frac{1}{n}$ implies $a_{i+1}d_{i+1} = -\frac{1}{n}-\sum_{k=0}^{i}a_k^2 < 0$.

Finally, by setting $v_{i+1,j} = 0$ for $j < i+1$, we see that assertion (2.7) holds for $l = i+1$, and therefore the validity of (2.5) is confirmed.

Now we derive the explicit values of the $a_i$'s and $d_i$'s by using some recursive relationships between them. Indeed, in view of (2.8) and (2.9), we have that
$$d_{i+1}^2 = 1-\sum_{k=0}^{i}a_k^2 = 1-\sum_{k=0}^{i-1}a_k^2-a_i^2 = d_i^2-a_i^2 = (d_i-a_i)(d_i+a_i) \quad (2.10)$$
and
$$a_{i+1}d_{i+1} = -\frac{1}{n}-\sum_{k=0}^{i}a_k^2 = -\frac{1}{n}-\sum_{k=0}^{i-1}a_k^2-a_i^2 = a_id_i-a_i^2 = a_i(d_i-a_i), \quad (2.11)$$
from which, by dividing (2.10) by (2.11), we extract
$$\frac{d_{i+1}}{a_{i+1}} = \frac{d_i+a_i}{a_i} = \frac{d_i}{a_i}+1.$$
Therefore, it is a simple matter to obtain that $\frac{d_i}{a_i} = \frac{d_0}{a_0}+i = i-n$ or, equivalently,
$$d_i = -(n-i)\,a_i. \quad (2.12)$$
Plugging (2.12) into (2.10) yields
$$d_{i+1} = \sqrt{(d_i-a_i)(d_i+a_i)} = d_i\sqrt{\frac{(n-i+1)(n-i-1)}{(n-i)^2}},$$
which we rewrite as
$$d_{i+1}\sqrt{\frac{n-i}{n-i-1}} = d_i\sqrt{\frac{n-i+1}{n-i}} = \mathrm{const} = d_0\sqrt{\frac{n+1}{n}} = \sqrt{\frac{n+1}{n}}$$
for $i\le n-2$. Therefore
$$d_i = \sqrt{\frac{n+1}{n}\,\frac{n-i}{n-i+1}} \quad \text{for } 0\le i\le n-1. \quad (2.13)$$
Next, by (2.12) and (2.13), it swiftly follows that
$$a_i = -\frac{d_i}{n-i} = -\sqrt{\frac{n+1}{n}}\,\frac{1}{\sqrt{(n-i)(n-i+1)}} \quad \text{for } 0\le i\le n-2.$$
Finally, the last vector $\vec{v}_n$, thanks to (2.3) and (2.6), can be represented as

$$\vec{v}_n = -\sum_{k=0}^{n-1}\vec{v}_k = -\big((n-1)a_0+d_0,\ (n-2)a_1+d_1,\ \dots,\ (n-i-1)a_i+d_i,\ \dots,\ a_{n-2}+d_{n-2},\ d_{n-1}\big).$$
Since, by (2.12), $(n-i-1)a_i+d_i = -a_i$, we simply have that
$$\vec{v}_n = (a_0, a_1, \dots, a_{n-1}).$$
The matrix $V$ of the components of the directional vectors $\vec{v}_j$, $j = 0,\dots,n$, is given in Table 1. The reader can check that conditions (2.3) are fulfilled.

$$V = \begin{pmatrix}1 & -\frac{1}{n} & -\frac{1}{n} & \cdots & -\frac{1}{n} & -\frac{1}{n}\\ 0 & \sqrt{\frac{n+1}{n}\,\frac{n-1}{n}} & -\frac{1}{n}\sqrt{\frac{n+1}{n-1}} & \cdots & -\frac{1}{n}\sqrt{\frac{n+1}{n-1}} & -\frac{1}{n}\sqrt{\frac{n+1}{n-1}}\\ 0 & 0 & \sqrt{\frac{(n+1)(n-2)}{n(n-1)}} & \cdots & -\sqrt{\frac{n+1}{n(n-1)(n-2)}} & -\sqrt{\frac{n+1}{n(n-1)(n-2)}}\\ \vdots & \vdots & \vdots & & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & -\sqrt{\frac{n+1}{n}\,\frac{1}{2\cdot 3}} & -\sqrt{\frac{n+1}{n}\,\frac{1}{2\cdot 3}}\\ 0 & 0 & 0 & \cdots & \sqrt{\frac{n+1}{n}\,\frac{1}{2}} & -\sqrt{\frac{n+1}{n}\,\frac{1}{2}}\end{pmatrix}$$

Table 1: The matrix $V$ of directions; the columns are $\vec{v}_0, \vec{v}_1, \dots, \vec{v}_n$ and the entry in row $i$, column $j$ is $v_{i,j}$.
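The symmetry conditions (2.2)-(2.4) and the closure relation (2.3) can be verified numerically from the closed forms of $d_i$ and $a_i$ (a Python sketch; the function and variable names are ours):

```python
import math

def direction_vectors(n):
    """Build the n+1 unit vectors of the cyclic motion in R^n from the closed
    forms d_i = sqrt((n+1)/n * (n-i)/(n-i+1)) and
    a_i = -sqrt((n+1)/n) / sqrt((n-i)*(n-i+1))."""
    d = [math.sqrt((n + 1) / n * (n - i) / (n - i + 1)) for i in range(n)]
    a = [-math.sqrt((n + 1) / n) / math.sqrt((n - i) * (n - i + 1)) for i in range(n)]
    # v_j = (a_0, ..., a_{j-1}, d_j, 0, ..., 0) for j < n, cf. (2.5); v_n = (a_0, ..., a_{n-1})
    vecs = [[a[i] if i < j else (d[j] if i == j else 0.0) for i in range(n)]
            for j in range(n)]
    vecs.append(a[:])
    return vecs

dot = lambda u, w: sum(x * y for x, y in zip(u, w))
n = 5
v = direction_vectors(n)
assert all(abs(dot(v[j], v[j]) - 1.0) < 1e-12 for j in range(n + 1))           # (2.2)
assert all(abs(dot(v[j], v[k]) + 1.0 / n) < 1e-12
           for j in range(n + 1) for k in range(j))                            # (2.4)
assert all(abs(sum(v[j][i] for j in range(n + 1))) < 1e-12 for i in range(n))  # (2.3)
```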

2.2 The $(n+1)$-hedron $T_a$

Let us introduce the points $A_j$, $j = 0,\dots,n$, defined by $\overrightarrow{OA_j} = a\,\vec{v}_j$ for a fixed $a > 0$. The points $A_j$, $j = 0,\dots,n$, are the vertices of a regular $(n+1)$-hedron $T_a$ with center $O$. The length of the edges of $T_a$ can clearly be obtained by observing that, for the edge $\overrightarrow{A_jA_k}$, with endpoints $A_j$ and $A_k$, we have $\overrightarrow{A_jA_k} = a\,(\vec{v}_k-\vec{v}_j)$ and thus, by (2.4),
$$\left|\overrightarrow{A_jA_k}\right|^2 = a^2\left(|\vec{v}_k|^2+|\vec{v}_j|^2-2\,\vec{v}_j\cdot\vec{v}_k\right) = 2\,\frac{n+1}{n}\,a^2.$$

2.2.1 Analytic representation of $T_a$

We now derive the equations of the hyperfaces of $T_a$, which play an important role in what follows.

Let $A_k$ be an arbitrary vertex of the hyperface $F_j$ orthogonal to the vector $\vec{v}_j$, $j\ne k$. It is clear that
$$\overrightarrow{OM} = \overrightarrow{OA_k}+\overrightarrow{A_kM} = a\,\vec{v}_k+\overrightarrow{A_kM}, \quad (2.14)$$
where $M$ is an arbitrary point of $F_j$ (and thus $\overrightarrow{A_kM}$ is orthogonal to $\vec{v}_j$) with coordinates $(x_0,\dots,x_{n-1})$. By taking the scalar product of (2.14) with $\vec{v}_j$ we get
$$\sum_{i=0}^{n-1}v_{i,j}x_i = \vec{v}_j\cdot\overrightarrow{OM} = \vec{v}_j\cdot\left(a\,\vec{v}_k+\overrightarrow{A_kM}\right) = -\frac{a}{n}.$$
The equation of the hyperface $F_j$, for $j = 0,\dots,n$, is thus $\sum_{i=0}^{n-1}v_{i,j}x_i+\frac{a}{n} = 0$.

The $(n+1)$-hedron $T_a$ is then analytically defined as
$$T_a = \left\{(x_0,\dots,x_{n-1})\in\mathbb{R}^n : \sum_{i=0}^{n-1}v_{i,j}x_i+\frac{a}{n}\ge 0 \text{ for } j = 0,\dots,n\right\} \quad (2.15)$$
and we shall see that $T_{ct}$ is the set of all possible positions of the moving particle at time $t$.

Remark 2.1 Observe that the inequality $\sum_{i=0}^{n-1}v_{i,j}x_i+\frac{a}{n} > 0$ represents the half-space containing the vertex $A_j = (a\,v_{0,j},\dots,a\,v_{n-1,j})$. Indeed, in $A_j$ we have that
$$a\sum_{i=0}^{n-1}v_{i,j}^2+\frac{a}{n} = a\left(1+\frac{1}{n}\right) > 0.$$

2.2.2 Volume of $T_a$

For our further analysis, it is useful to evaluate the volume of the $(n+1)$-hedron $T_a$. It can be split up into $n+1$ non-regular $(n+1)$-hedrons of equal volume, obtained from $T_a$ with each vertex successively being replaced by the origin $O$. In this way we have that
$$\mathrm{Vol}(T_a) = (n+1)\,\mathrm{Vol}(T_a^O), \quad (2.16)$$
where $T_a^O$ is a $(n+1)$-hedron with one vertex in $O$. By means of a well-known formula and using Table 1, we have that
$$\mathrm{Vol}(T_a^O) = \frac{a^n}{n!}\,\det(\vec{v}_0,\dots,\vec{v}_{n-1}) = \frac{a^n}{n!}\begin{vmatrix}1 & -\frac{1}{n} & \cdots & -\frac{1}{n}\\ 0 & \sqrt{\frac{n+1}{n}\,\frac{n-1}{n}} & \cdots & -\frac{1}{n}\sqrt{\frac{n+1}{n-1}}\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & \sqrt{\frac{n+1}{2n}}\end{vmatrix} = \frac{a^n}{n!}\left(\sqrt{\frac{n+1}{n}}\right)^{n-1}\frac{1}{\sqrt{n}} = \frac{(n+1)^{(n-1)/2}}{n^{n/2}\,n!}\,a^n.$$
Therefore, from (2.16) we have that
$$\mathrm{Vol}(T_a) = \frac{(n+1)^{(n+1)/2}}{n^{n/2}\,n!}\,a^n. \quad (2.17)$$

Remark 2.2 Formula (2.17) can also be obtained by the direct calculation of an $n$-fold integral. By means of the transformation $(x_0,\dots,x_{n-1})\mapsto(y_0,\dots,y_{n-1})$ with
$$y_j = a+n\sum_{i=0}^{n-1}v_{i,j}x_i, \quad 0\le j\le n-1, \quad (2.18)$$
and by considering that (using $\sum_{j=0}^{n}v_{i,j} = 0$)
$$0\le n\sum_{i=0}^{n-1}v_{i,n}x_i+a = n\sum_{i=0}^{n-1}\left(-\sum_{j=0}^{n-1}v_{i,j}\right)x_i+a \overset{\text{by (2.18)}}{=} (n+1)a-\sum_{j=0}^{n-1}y_j,$$
the set (2.15) is transformed into
$$T_a^0 = \left\{(y_0,\dots,y_{n-1})\in\mathbb{R}^n : y_j\ge 0 \text{ for } 0\le j\le n-1, \text{ and } \sum_{j=0}^{n-1}y_j\le(n+1)a\right\}. \quad (2.19)$$
On the other hand, the Jacobian $|J|$ of the transformation (2.18), $(x_0,\dots,x_{n-1})\mapsto(y_0,\dots,y_{n-1})$, is
$$|J| = \begin{vmatrix}n\,v_{0,0} & n\,v_{0,1} & \cdots & n\,v_{0,n-1}\\ n\,v_{1,0} & n\,v_{1,1} & \cdots & n\,v_{1,n-1}\\ \vdots & & & \vdots\\ n\,v_{n-1,0} & n\,v_{n-1,1} & \cdots & n\,v_{n-1,n-1}\end{vmatrix} = n^n\left(\sqrt{\frac{n+1}{n}}\right)^{n-1}\frac{1}{\sqrt{n}}. \quad (2.20)$$
Therefore, the volume of the $(n+1)$-hedron $T_a$ is given by
$$\mathrm{Vol}(T_a) = \int_{T_a}dx = \int_{T_a^0}|J|^{-1}dy = \int_0^{(n+1)a}dy_0\int_0^{(n+1)a-y_0}dy_1\cdots\int_0^{(n+1)a-\sum_{j=0}^{n-3}y_j}dy_{n-2}\int_0^{(n+1)a-\sum_{j=0}^{n-2}y_j}|J|^{-1}dy_{n-1} = \cdots = \frac{1}{n!}(n+1)^n a^n|J|^{-1}$$
and we reobtain (2.17).

In the rest of the paper we shall use the results of this section applied to the case where $a = ct$. We also put $V_n = \mathrm{Vol}(T_1) = \frac{(n+1)^{(n+1)/2}}{n^{n/2}\,n!}$.
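Formulas (2.16)-(2.17) admit a quick numerical cross-check (a Python sketch; the function name is ours): the matrix $(\vec{v}_0,\dots,\vec{v}_{n-1})$ is upper triangular, so its determinant is the product $d_0\cdots d_{n-1}$; moreover, for $n = 2$, $T_1$ is the equilateral triangle inscribed in the unit circle, with area $3\sqrt{3}/4$.

```python
import math

def V_n(n):
    """V_n = Vol(T_1) = (n+1)^((n+1)/2) / (n^(n/2) * n!), formula (2.17) with a = 1."""
    return (n + 1) ** ((n + 1) / 2) / (n ** (n / 2) * math.factorial(n))

n = 6
# (2.16): Vol(T_1) = (n+1) * Vol(T_1^O) = (n+1)/n! * det(v_0,...,v_{n-1}),
# and the determinant of the triangular matrix is the product of the d_i's.
det = math.prod(math.sqrt((n + 1) / n * (n - i) / (n - i + 1)) for i in range(n))
assert abs(V_n(n) - (n + 1) * det / math.factorial(n)) < 1e-12
# For n = 2, T_1 is the equilateral triangle inscribed in the unit circle.
assert abs(V_n(2) - 3 * math.sqrt(3) / 4) < 1e-12
```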

3 Analytic description of random motion

3.1 The support of the random position

We consider the randomly moving point $X(t) = (X_0(t), X_1(t), \dots, X_{n-1}(t))$ which, at time $t = 0$, was at the origin $O = (0,\dots,0)$ and initially took one of the directions $\vec{v}_j$, $j = 0,\dots,n$, with probability $\frac{1}{n+1}$.

The motion is cyclic in the sense that at each Poisson event the particle switches from direction $\vec{v}_j$ to $\vec{v}_{j+1}$ (we set $\vec{v}_{n+1} = \vec{v}_0$ and, in general, $\vec{v}_{j+n+1} = \vec{v}_j$).

Let $N(t)$ be the number of Poisson events up to time $t$ and let $T_1,\dots,T_n,\dots$ denote the instants at which they occur (with $T_0 = 0$). Therefore, the position $X(t)$ of the moving particle starting with direction $\vec{v}_0$ is
$$X(t) = c\left[I_{\{N(t)\ge 1\}}\sum_{k=0}^{N(t)-1}(T_{k+1}-T_k)\,\vec{v}_k+\left(t-T_{N(t)}\right)\vec{v}_{N(t)}\right]. \quad (3.1)$$
The first sum in (3.1) refers to the case where at least one change of direction has occurred, while the second term is the displacement along the current direction at time $t$. If $N(t) = 0$, then $X(t) = c\left(t-T_{N(t)}\right)\vec{v}_0 = ct\,\vec{v}_0$ and the particle is located in $A_0$ (with $a = ct$) at time $t$. If we put $T_{N(t)+1} = t$, the displacement (3.1) takes the form $X(t) = c\sum_{k=0}^{N(t)}(T_{k+1}-T_k)\,\vec{v}_k$ and, by (2.4), for all $j = 0,\dots,n$,
$$X(t)\cdot\vec{v}_j+\frac{ct}{n} = c\sum_{\substack{k=0\\ \vec{v}_k\ne\vec{v}_j}}^{N(t)}\left(-\frac{1}{n}\right)(T_{k+1}-T_k)+c\sum_{\substack{k=0\\ \vec{v}_k=\vec{v}_j}}^{N(t)}(T_{k+1}-T_k)+\frac{ct}{n}$$
$$= -\frac{c}{n}\sum_{k=0}^{N(t)}(T_{k+1}-T_k)+c\left(1+\frac{1}{n}\right)\sum_{\substack{k=0\\ \vec{v}_k=\vec{v}_j}}^{N(t)}(T_{k+1}-T_k)+\frac{ct}{n} = c\left(1+\frac{1}{n}\right)\sum_{\substack{k=0\\ \vec{v}_k=\vec{v}_j}}^{N(t)}(T_{k+1}-T_k)\ \ge\ 0.$$

This permits us to conclude that, in view of (2.15), the moving particle at time $t$ is always inside or on the hull of $T_{ct}$.
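This containment can also be observed by direct simulation of the motion (a Python sketch for $n = 2$; the parameter values $\lambda = 2$, $c = 1$, $t = 3$ and the sample size are arbitrary choices of ours):

```python
import math, random

random.seed(11)
lam, c, t, n = 2.0, 1.0, 3.0, 2          # hypothetical parameters
v = [(1.0, 0.0), (-0.5, math.sqrt(3) / 2), (-0.5, -math.sqrt(3) / 2)]
for _ in range(10_000):
    x = y = 0.0
    s, j = 0.0, random.randrange(n + 1)   # uniform initial direction
    while True:
        step = min(random.expovariate(lam), t - s)  # exponential holding time
        x += c * step * v[j][0]
        y += c * step * v[j][1]
        s += step
        if s >= t - 1e-15:
            break
        j = (j + 1) % (n + 1)             # cyclic switch v_j -> v_{j+1}
    # membership in T_ct, cf. (2.15) with a = ct: h_j = ct + n * (v_j . X) >= 0
    assert all(c * t + n * (vj[0] * x + vj[1] * y) >= -1e-9 for vj in v)
```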

In order that the moving point $X(t)$ be located on the hull $\partial T_{ct}$, we must have
$$X(t)\cdot\vec{v}_j+\frac{ct}{n} = 0 \quad \text{for some } 0\le j\le n.$$
This equality is realized if $N(t)$ is such that the identity $\vec{v}_k = \vec{v}_j$ does not hold for any value of $k$, so that the set over which the sum is performed is empty. If the initial direction is $\vec{v}_0$, then
$$X(t)\cdot\vec{v}_j+\frac{ct}{n} = 0 \quad \text{for } N(t) < j\le n, \quad (3.2)$$
which means that the moving particle lies on an $N(t)$-face (face of dimension $N(t)$) of $\partial T_{ct}$ at time $t$.

Remark 3.1 For example, for $n = 2$ and $N(t) = 1$ (which implies $j = 2$ in (3.2)), we have
$$X(t)\cdot\vec{v}_2+\frac{ct}{2} = 0,$$
which represents the side of $\partial T_{ct}$ opposite to $\vec{v}_2$. If $N(t) = 0$, (3.2) holds for $j = 1, 2$ and therefore the moving point is located simultaneously on the two lines $X(t)\cdot\vec{v}_1+\frac{ct}{2} = 0$ and $X(t)\cdot\vec{v}_2+\frac{ct}{2} = 0$, that is, on the vertex $(ct, 0)$.


3.2 About the governing equations

Let $D(t)$ be the current direction of motion at time $t$. Clearly $D(t)$ takes the values $\vec{v}_j$, $j = 0,\dots,n$. For the densities $p_j(x,t)\,dx = \Pr\{X(t)\in dx,\ D(t) = \vec{v}_j\}$, $x\in\mathrm{int}(T_{ct})$, we have the following theorem.

Theorem 3.2 The densities $p_j$ satisfy the differential system
$$\frac{\partial p_j}{\partial t}(x,t) = -c\,\frac{\partial p_j}{\partial\vec{v}_j}(x,t)-\lambda\big(p_j(x,t)-p_{j-1}(x,t)\big), \quad 0\le j\le n, \quad (3.3)$$
where $p_{-1} = p_n$ and $\frac{\partial p_j}{\partial\vec{v}_j}(x,t) = \overrightarrow{\mathrm{grad}}\,p_j\cdot\vec{v}_j$.

Proof. Suppose that the particle is located at $x$ at time $t+\Delta t$. If no Poisson event has occurred during $[t, t+\Delta t]$ (which happens with probability $1-\lambda\,\Delta t+o(\Delta t)$), then the particle must have been at $x-c\,\Delta t\,\vec{v}_j$ at time $t$. If exactly one Poisson event has occurred during $[t, t+\Delta t]$ (with probability $\lambda\,\Delta t+o(\Delta t)$), the direction of the particle at time $t$ was $\vec{v}_{j-1}$. Finally, the probability that more than one Poisson event has occurred is $o(\Delta t)$. This short discussion leads to the following equality:
$$p_j(x, t+\Delta t) = (1-\lambda\,\Delta t)\,p_j(x-c\,\Delta t\,\vec{v}_j, t)+\lambda\,\Delta t\,p_{j-1}(x, t)+o(\Delta t).$$
We next expand
$$p_j(x-c\,\Delta t\,\vec{v}_j, t) = p_j(x,t)-c\,\Delta t\sum_{i=0}^{n-1}\frac{\partial p_j}{\partial x_i}(x,t)\,v_{i,j}+o(\Delta t)$$
and
$$p_j(x, t+\Delta t) = p_j(x,t)+\frac{\partial p_j}{\partial t}(x,t)\,\Delta t+o(\Delta t).$$
Some obvious simplifications and the limit $\Delta t\to 0$ yield (3.3).

The system (3.3) can be substantially reduced, as shown in the next theorem.

Theorem 3.3 Let $p_j(x,t) = e^{-\lambda t}q_j(u)$ where $u$ is the vector with components
$$u_j = ct+n\sum_{i=0}^{n-1}v_{i,j}x_i, \quad j = 0,\dots,n. \quad (3.4)$$
Then the functions $q_j$ satisfy the differential system
$$(n+1)c\,\frac{\partial q_j}{\partial u_j} = \lambda q_{j-1}, \quad j = 0,\dots,n. \quad (3.5)$$

Proof. The exponential transformation applied to (3.3) readily yields the system
$$\frac{\partial q_j}{\partial t} = -c\,\frac{\partial q_j}{\partial\vec{v}_j}+\lambda q_{j-1}, \quad j = 0,\dots,n. \quad (3.6)$$

By (3.4) we have
$$\frac{\partial q_j}{\partial t} = c\sum_{k=0}^{n}\frac{\partial q_j}{\partial u_k} \quad \text{and} \quad \frac{\partial q_j}{\partial x_i} = n\sum_{k=0}^{n}\frac{\partial q_j}{\partial u_k}\,v_{i,k}. \quad (3.7)$$
Thus, by (2.4),
$$\frac{\partial q_j}{\partial\vec{v}_j} = \overrightarrow{\mathrm{grad}}\,q_j\cdot\vec{v}_j = \sum_{i=0}^{n-1}\frac{\partial q_j}{\partial x_i}\,v_{i,j} = n\sum_{k=0}^{n}\frac{\partial q_j}{\partial u_k}\left(\sum_{i=0}^{n-1}v_{i,j}v_{i,k}\right) = n\sum_{\substack{k=0\\ k\ne j}}^{n}\left(-\frac{1}{n}\,\frac{\partial q_j}{\partial u_k}\right)+n\,\frac{\partial q_j}{\partial u_j} = -\sum_{k=0}^{n}\frac{\partial q_j}{\partial u_k}+(n+1)\frac{\partial q_j}{\partial u_j}. \quad (3.8)$$
By plugging (3.7) and (3.8) into (3.6) we obtain (3.5).

We remark that equations (3.5) have substantially the same form as those of systems (3.4) of Orsingher and Sommella (2004) and (2.8) of Orsingher (2002), except for some constants, because of a different definition of the transformation (3.4).

Corollary 3.4 Each function $q_j$ satisfies the following $(n+1)$th order partial differential equation:
$$\frac{\partial^{n+1}q_j}{\partial u_0\cdots\partial u_n} = \left(\frac{\lambda}{(n+1)c}\right)^{n+1}q_j, \quad j = 0,\dots,n. \quad (3.9)$$

Proof. By differentiating the first equation of (3.5) with respect to $u_n$ we get
$$\frac{\partial^2 q_0}{\partial u_0\,\partial u_n} = \frac{\lambda}{(n+1)c}\,\frac{\partial q_n}{\partial u_n} = \left(\frac{\lambda}{(n+1)c}\right)^2 q_{n-1}. \quad (3.10)$$
By differentiating (3.10) we obtain
$$\frac{\partial^3 q_0}{\partial u_0\,\partial u_n\,\partial u_{n-1}} = \left(\frac{\lambda}{(n+1)c}\right)^3 q_{n-2}$$
and by iterating this procedure we finally get equation (3.9) for $j = 0$. The other cases are quite similar.

Proposition 3.5 The solutions $q$ of the p.d.e. (3.9) depending only on the variable $w = \sqrt[n+1]{u_0\cdots u_n}$ satisfy the $(n+1)$th order hyper-Bessel equation
$$\left(w\frac{\partial}{\partial w}\right)^{n+1}q = \left(\frac{\lambda w}{c}\right)^{n+1}q. \quad (3.11)$$

Proof. Let us now consider the transformation
$$\begin{cases} v_j = u_j & \text{for } j = 0,\dots,n-1,\\ w = \sqrt[n+1]{u_0\cdots u_n}. \end{cases} \quad (3.12)$$
In view of (3.12), for $i = 0,\dots,n-1$,

$$\frac{\partial q_j}{\partial u_i} = \sum_{k=0}^{n-1}\frac{\partial q_j}{\partial v_k}\frac{\partial v_k}{\partial u_i}+\frac{\partial q_j}{\partial w}\frac{\partial w}{\partial u_i} = \frac{\partial q_j}{\partial v_i}+\frac{w}{(n+1)v_i}\,\frac{\partial q_j}{\partial w}, \quad (3.13)$$
and
$$\frac{\partial q_j}{\partial u_n} = \sum_{k=0}^{n-1}\frac{\partial q_j}{\partial v_k}\frac{\partial v_k}{\partial u_n}+\frac{\partial q_j}{\partial w}\frac{\partial w}{\partial u_n} = \frac{v_0\cdots v_{n-1}}{(n+1)w^n}\,\frac{\partial q_j}{\partial w}. \quad (3.14)$$
In light of (3.13) we have that
$$\frac{\partial^n q_j}{\partial u_0\cdots\partial u_{n-1}} = \left[\prod_{i=0}^{n-1}\frac{1}{v_i}\left(v_i\frac{\partial}{\partial v_i}+\frac{w}{n+1}\frac{\partial}{\partial w}\right)\right]q_j \quad (3.15)$$
and, by using the well-known formula
$$\prod_{i=0}^{n-1}(\alpha_i+x) = \sum_{m=0}^{n}x^m\sum_{0\le l_1<\cdots<l_{n-m}\le n-1}\alpha_{l_1}\cdots\alpha_{l_{n-m}} =: \sum_{m=0}^{n}x^m\,\sigma_{n-m}(\alpha_0,\dots,\alpha_{n-1}),$$
the differential operator in (3.15) can be written as
$$\frac{\partial^n}{\partial u_0\cdots\partial u_{n-1}} = \frac{1}{v_0\cdots v_{n-1}}\sum_{m=0}^{n}\left(\frac{w}{n+1}\frac{\partial}{\partial w}\right)^m\sigma_{n-m}\left(v_0\frac{\partial}{\partial v_0},\dots,v_{n-1}\frac{\partial}{\partial v_{n-1}}\right).$$
Formula (3.14), applied to the above expression, enables us to write the differential operator in (3.9) as
$$\frac{\partial^{n+1}}{\partial u_0\cdots\partial u_n} = \frac{1}{(n+1)w^n}\frac{\partial}{\partial w}\left[\sum_{m=0}^{n}\left(\frac{w}{n+1}\frac{\partial}{\partial w}\right)^m\sigma_{n-m}\left(v_0\frac{\partial}{\partial v_0},\dots,v_{n-1}\frac{\partial}{\partial v_{n-1}}\right)\right]$$
$$= \frac{1}{w^{n+1}}\sum_{m=0}^{n-1}\left(\frac{w}{n+1}\frac{\partial}{\partial w}\right)^{m+1}\sigma_{n-m}\left(v_0\frac{\partial}{\partial v_0},\dots,v_{n-1}\frac{\partial}{\partial v_{n-1}}\right)+\frac{1}{((n+1)w)^{n+1}}\left(w\frac{\partial}{\partial w}\right)^{n+1}.$$
Therefore, the solutions of (3.9) depending only on $w$ satisfy the ordinary equation (3.11).

Remark 3.6 We check here that the hyper-Bessel function
$$I_{0,n}(w) = \sum_{k=0}^{\infty}\frac{1}{(k!)^n}\left(\frac{w}{n}\right)^{nk}$$
is a solution to the hyper-Bessel equation $\left(w\frac{\partial}{\partial w}\right)^n q = w^n q$. Since $w\frac{\partial}{\partial w}\,w^{nk} = nk\,w^{nk}$ and thus $\left(w\frac{\partial}{\partial w}\right)^n w^{nk} = (nk)^n w^{nk}$, we have that
$$\left(w\frac{\partial}{\partial w}\right)^n I_{0,n}(w) = \sum_{k=0}^{\infty}\frac{1}{(k!)^n}\,\frac{(nk)^n w^{nk}}{n^{nk}} = \sum_{k=1}^{\infty}\frac{1}{n^{n(k-1)}}\,\frac{w^{nk}}{((k-1)!)^n} = w^n\sum_{k=0}^{\infty}\frac{1}{(k!)^n}\left(\frac{w}{n}\right)^{nk} = w^n I_{0,n}(w).$$


4 Order statistics applied to cyclic motions

In this section, we introduce the number $N_j(t)$ of times the direction $\vec{v}_j$ is taken up to time $t$. Of course, the random numbers $N_j(t)$ and $N(t)$ are linked by
$$\sum_{j=0}^{n}N_j(t) = N(t)+1.$$

We can immediately write down the following relationship:
$$\Pr\{X(t)\in dx\} = \sum_{k_0,\dots,k_n\ge 0}\Pr\{N_0(t)=k_0,\dots,N_n(t)=k_n\}\;\Pr\{X(t)\in dx\mid N_0(t)=k_0,\dots,N_n(t)=k_n\}. \quad (4.1)$$

For the cyclic motion, the explicit form of $\Pr\{N_0(t)=k_0,\dots,N_n(t)=k_n\}$ is almost straightforward, and this makes the derivation of (4.1) in a closed form possible.

For the planar motion with three directions, the probabilities $\Pr\{N_0(t)=k_0,\ N_1(t)=k_1,\ N_2(t)=k_2\}$, where $N_0(t)+N_1(t)+N_2(t) = N(t)+1$, have been explicitly evaluated for the symmetrically deviating motion by means of an extension of Bose-Einstein statistics (Leorato and Orsingher (2004)).

For a number of directions greater than or equal to 4, the evaluation of the distribution of $(N_0(t),\dots,N_n(t))$ becomes extremely difficult, except in the uniform case, where $(N_0(t),\dots,N_n(t))$ is a multinomial random vector.

The aim of this section is the evaluation of the conditional probabilities in (4.1) by means of order statistics. This method has been successfully applied in the case of planar, cyclic motions with orthogonal directions (Leorato et al. (2003)) and also for motions with three directions (Leorato and Orsingher (2004)).

We observe that the minimality of the number of directions is very important in order to obtain the conditional distributions in (4.1) in a relatively simple way, as will become clear below.

4.1 Some preliminary results about the position of the particle

We start by writing the vector (3.1) in a new convenient form:
$$X(t) = c\left[I_{\{N(t)\ge 1\}}\sum_{k=0}^{N(t)-1}(T_{k+1}-T_k)\,\vec{v}_k+\left(t-T_{N(t)}\right)\vec{v}_{N(t)}\right]$$
$$= c\left[I_{\{N(t)\ge 1\}}\sum_{j=0}^{n}\ \sum_{\substack{0\le l\le N(t)-1:\\ \vec{v}_j\text{ is taken in }[T_l,T_{l+1})}}(T_{l+1}-T_l)\,\vec{v}_j+\sum_{j=0}^{n}\left(t-T_{N(t)}\right)\vec{v}_j\,I_{\{\vec{v}_j\text{ is taken in }[T_{N(t)},t]\}}\right] = ct\sum_{j=0}^{n}L_j(t)\,\vec{v}_j \quad (4.2)$$
where
$$L_j(t) = \frac{1}{t}\left[I_{\{N(t)\ge 1\}}\sum_{\substack{0\le l\le N(t)-1:\\ \vec{v}_j\text{ is taken in }[T_l,T_{l+1})}}(T_{l+1}-T_l)+\left(t-T_{N(t)}\right)I_{\{\vec{v}_j\text{ is taken in }[T_{N(t)},t]\}}\right]$$
is the proportion of time spent travelling with direction $\vec{v}_j$. The r.v.'s $L_j(t)$ can also be written as
$$L_j(t) = \frac{1}{t}\sum_{m=1}^{N_j(t)}T_m^{(j)} \quad (4.3)$$

where $T_m^{(j)}$ indicates how long the particle has travelled the $m$th time that $\vec{v}_j$ has been taken. We remark that
$$\sum_{j=0}^{n}\sum_{m=1}^{N_j(t)}T_m^{(j)} = t\sum_{j=0}^{n}L_j(t) = t.$$
The r.v.'s $T_m^{(j)}$, $1\le m\le N_j(t)$, $0\le j\le n$, are independent and exponentially distributed with parameter $\lambda$.

Theorem 4.1 Fix some positive integers $k_0,\dots,k_n$ such that $\sum_{j=0}^{n}k_j = k+1$. The joint conditional distribution of $(L_0(t),\dots,L_{n-1}(t))$ is given by
$$\Pr\{L_0(t)\in dl_0,\dots,L_{n-1}(t)\in dl_{n-1}\mid N_0(t)=k_0,\dots,N_n(t)=k_n\} = f(l_0,\dots,l_{n-1})\,dl_0\cdots dl_{n-1} \quad (4.4)$$
where
$$f(l_0,\dots,l_{n-1}) = \frac{k!}{\prod_{j=0}^{n}(k_j-1)!}\,\prod_{j=0}^{n}l_j^{k_j-1}\;I_{\{l_0,\dots,l_{n-1}\ge 0,\ \sum_{j=0}^{n-1}l_j\le 1\}} \quad (4.5)$$
and $l_n = 1-\sum_{j=0}^{n-1}l_j$.

Proof. For the instants $T_1,\dots,T_k$ of occurrence of the Poisson events we have the following well-known conditional distribution:
$$\Pr\{T_1\in dt_1,\dots,T_k\in dt_k\mid N(t)=k\} = \frac{k!}{t^k}\,I_{\{0\le t_1\le\cdots\le t_k\le t\}}\,dt_1\cdots dt_k. \quad (4.6)$$
For the r.v.'s $S_j = T_j-T_{j-1}$, $j = 1,\dots,k$, after some obvious transformations we have that
$$\Pr\{S_1\in ds_1,\dots,S_k\in ds_k\mid N(t)=k\} = \frac{k!}{t^k}\,I_{\{s_1\ge 0,\dots,s_k\ge 0,\ s_1+\cdots+s_k\le t\}}\,ds_1\cdots ds_k.$$
From (4.6) we can extract, by direct integration, the distribution of $(T_{j_1},\dots,T_{j_n})$, under the condition that $N(t) = k$, where $0 < j_1 < j_2 < \cdots < j_n < k$. Indeed,
$$\Pr\{T_{j_1}\in dt_{j_1},\dots,T_{j_n}\in dt_{j_n}\mid N(t)=k\}$$
$$= \frac{k!}{t^k}\int\cdots\int_{\{0\le t_1<\cdots<t_{j_1}<\cdots<t_{j_2}<\cdots<t_{j_n}<\cdots<t_k\le t\}}dt_1\cdots dt_{j_1-1}\,dt_{j_1+1}\cdots dt_{j_2-1}\,dt_{j_2+1}\cdots dt_{j_n-1}\,dt_{j_n+1}\cdots dt_k$$
$$= \frac{k!}{t^k}\int_{\{0\le t_1<\cdots<t_{j_1-1}<t_{j_1}\}}dt_1\cdots dt_{j_1-1}\int_{\{t_{j_1}<t_{j_1+1}<\cdots<t_{j_2-1}<t_{j_2}\}}dt_{j_1+1}\cdots dt_{j_2-1}\cdots\int_{\{t_{j_n}<t_{j_n+1}<\cdots<t_k\le t\}}dt_{j_n+1}\cdots dt_k$$
$$= \frac{k!}{t^k}\,\frac{t_{j_1}^{j_1-1}}{(j_1-1)!}\,\frac{(t_{j_2}-t_{j_1})^{j_2-j_1-1}}{(j_2-j_1-1)!}\cdots\frac{(t-t_{j_n})^{k-j_n}}{(k-j_n)!}\,I_{\{0\le t_{j_1}<\cdots<t_{j_n}\le t\}}\,dt_{j_1}\cdots dt_{j_n}.$$


Let us now assume that the intervals of time spent travelling with the same direction are put together and let us choose
$$j_m = k_0+\cdots+k_{m-1}, \quad 1\le m\le n.$$
The time $T_{j_m}$ can now be regarded as the instant at which the particle ends its travelling with the directions $\vec{v}_j$, $0\le j\le m-1$, and switches to direction $\vec{v}_m$ (see Figure 1).

Figure 1: How the interval $[0,t]$ is split up into subintervals: the particle travels with $\vec{v}_0$ up to $T_{j_1} = T_{k_0}$, with $\vec{v}_1$ up to $T_{j_2} = T_{k_0+k_1}$, ..., and with $\vec{v}_n$ on $[T_{j_n}, t]$, where $T_{j_n} = T_{k_0+\cdots+k_{n-1}}$.

In light of the exchangeability of the r.v.'s representing the lengths of time between successive changes of direction, we can write
$$L_0(t)\overset{d}{=}\frac{T_{j_1}}{t}, \quad L_1(t)\overset{d}{=}\frac{T_{j_2}-T_{j_1}}{t}, \quad \dots, \quad L_{n-1}(t)\overset{d}{=}\frac{T_{j_n}-T_{j_{n-1}}}{t}. \quad (4.7)$$
By means of the transformation
$$t_{j_1} = t\,l_0, \quad t_{j_2} = t(l_0+l_1), \quad \dots, \quad t_{j_n} = t(l_0+\cdots+l_{n-1})$$
emerging from (4.7), we can extract the distribution (4.4) of $(L_0(t),\dots,L_{n-1}(t))$ as follows:
$$\Pr\{L_0(t)\in dl_0,\dots,L_{n-1}(t)\in dl_{n-1}\mid N(t)=k,\ N_0(t)=k_0,\dots,N_{n-1}(t)=k_{n-1}\}$$
$$= \Pr\{T_{j_1}\in t\,dl_0,\ T_{j_2}\in t\,d(l_0+l_1),\ \dots,\ T_{j_n}\in t\,d(l_0+\cdots+l_{n-1})\mid N(t)=k\}$$
$$= \frac{k!\,(tl_0)^{k_0-1}\cdots(tl_{n-1})^{k_{n-1}-1}\,\big(t-t(l_0+\cdots+l_{n-1})\big)^{k-\sum_{j=0}^{n-1}k_j}\,t^n}{t^k\,(k_0-1)!\,(k_1-1)!\cdots(k-k_0-\cdots-k_{n-1})!}\,dl_0\cdots dl_{n-1}\;I_{\{l_0,\dots,l_{n-1}\ge 0:\ \sum_{j=0}^{n-1}l_j\le 1\}}$$
$$= \frac{k!}{\prod_{j=0}^{n}(k_j-1)!}\,\prod_{j=0}^{n}l_j^{k_j-1}\,\prod_{j=0}^{n-1}dl_j\;I_{\{l_0,\dots,l_{n-1}\ge 0:\ \sum_{j=0}^{n-1}l_j\le 1\}}$$
where $k_n = 1+k-k_0-\cdots-k_{n-1}$ and $l_n = 1-l_0-\cdots-l_{n-1}$.

4.2 Deriving the conditional law

The connection between the position vector $X(t)$, $t\ge 0$, and the random times $L_j(t)$, $0\le j\le n$, spent moving with direction $\vec{v}_j$, is given by formula (4.2), which we rewrite, for our convenience, in the following manner:
$$X(t) = ct\left[\sum_{j=0}^{n-1}L_j(t)\,\vec{v}_j+\left(1-\sum_{j=0}^{n-1}L_j(t)\right)\vec{v}_n\right] = ct\left[\vec{v}_n+\sum_{j=0}^{n-1}L_j(t)\left(\vec{v}_j-\vec{v}_n\right)\right]. \quad (4.8)$$


An alternative representation of (4.8) is therefore
$$\begin{pmatrix}X_0(t)\\ X_1(t)\\ \vdots\\ X_{n-1}(t)\end{pmatrix} = ct\left[\begin{pmatrix}v_{0,n}\\ v_{1,n}\\ \vdots\\ v_{n-1,n}\end{pmatrix}+\tilde{V}\begin{pmatrix}L_0(t)\\ L_1(t)\\ \vdots\\ L_{n-1}(t)\end{pmatrix}\right]$$
where $\tilde{V}$ is the matrix of the coordinates of the vectors $\vec{v}_0-\vec{v}_n,\ \vec{v}_1-\vec{v}_n,\ \dots,\ \vec{v}_{n-1}-\vec{v}_n$ with respect to the canonical basis:
$$\tilde{V} = (v_{i,j}-v_{i,n})_{0\le i,j\le n-1}. \quad (4.9)$$
This means that the vectors $X(t) = (X_0(t),\dots,X_{n-1}(t))$ and $L(t) = (L_0(t),\dots,L_{n-1}(t))$ are linked by the relationship
$$X(t) = \phi(L(t)) \quad (4.10)$$
where $\phi:\mathbb{R}^n\to\mathbb{R}^n$ is defined by
$$\phi(l_0,\dots,l_{n-1}) = \left(ct\left(v_{0,n}+\sum_{j=0}^{n-1}(v_{0,j}-v_{0,n})\,l_j\right),\ \dots,\ ct\left(v_{n-1,n}+\sum_{j=0}^{n-1}(v_{n-1,j}-v_{n-1,n})\,l_j\right)\right).$$

4.2.1 Inversion of the matrix $\tilde{V}$

We need to invert the affine transformation $\phi$ in order to determine the distribution of $X(t)$ from (4.5). To this aim, we must evaluate the inverse $\tilde{V}^{-1}$ of the matrix (4.9).

In light of (2.4), for all $0\le k\le n-1$, we can write
$$\vec{v}_k\cdot(\vec{v}_j-\vec{v}_n) = \begin{cases}0 & \text{if } k\ne j,\\ 1+\frac{1}{n} & \text{if } k = j,\end{cases}$$
or, equivalently,
$$\sum_{i=0}^{n-1}v_{i,k}\,(v_{i,j}-v_{i,n}) = \frac{n+1}{n}\,\delta_{k,j} \quad (4.11)$$
where $\delta_{k,j}$ is the Kronecker delta. The relationship (4.11) can be rewritten in terms of matrices as
$$\frac{n}{n+1}\begin{pmatrix}v_{0,0} & \cdots & v_{0,n-1}\\ \vdots & & \vdots\\ v_{n-1,0} & \cdots & v_{n-1,n-1}\end{pmatrix}^{\!T}\begin{pmatrix}v_{0,0}-v_{0,n} & \cdots & v_{0,n-1}-v_{0,n}\\ \vdots & & \vdots\\ v_{n-1,0}-v_{n-1,n} & \cdots & v_{n-1,n-1}-v_{n-1,n}\end{pmatrix} = \frac{n}{n+1}\,W\tilde{V} = I.$$
Thus
$$\tilde{V}^{-1} = \frac{n}{n+1}\,W = \frac{n}{n+1}\begin{pmatrix}v_{0,0} & \cdots & v_{0,n-1}\\ \vdots & & \vdots\\ v_{n-1,0} & \cdots & v_{n-1,n-1}\end{pmatrix}^{\!T}. \quad (4.12)$$


For the evaluation of the Jacobian of the transformation $\phi$, we need the determinant $\det(\tilde{V})$, which equals $\big[\det(\tilde{V}^{-1})\big]^{-1}$, where
$$\det(\tilde{V}^{-1}) = \left(\frac{n}{n+1}\right)^{n}\det(\vec{v}_0,\dots,\vec{v}_{n-1}) \overset{\text{by (2.20)}}{=} \frac{n^{n/2}}{(n+1)^{(n+1)/2}} = \frac{1}{n!\,V_n}.$$
The Jacobian of $\phi$ is clearly given by $J_\phi = ct\,\tilde{V}$ and its determinant is
$$|J_\phi| = n!\,V_n\,(ct)^n. \quad (4.13)$$

4.2.2 Inversion of the transformation φ

We are now able to invert the transformation φ. To do this, we write the equivalences φ(l0, . . . , ln−1) = (x0, . . . , xn−1) ⇐⇒    x0 .. . xn−1   = ct       v0,n .. . vn−1,n   + ˜V    l0 .. . ln−1       ⇐⇒    l0 .. . ln−1   = 1 ct ˜ V−1    x0− ctv0,n .. . xn−1− ctvn−1,n   .

Referring to (4.12), we obtain, for 0 ≤ k ≤ n − 1,

lk = n (n + 1)ct n−1 X i=0 vi,k(xi− ctvi,n) = 1 (n + 1)ct n n−1 X i=0 vi,kxi+ ct !

where we used, in the last step, $\sum_{i=0}^{n-1}v_{i,k}v_{i,n}=-\frac1n$. As a result, putting $x=(x_0,\dots,x_{n-1})$, we get
$$\varphi^{-1}(x)=\frac{1}{(n+1)ct}\,\big(h_0(x,t),\dots,h_{n-1}(x,t)\big). \tag{4.14}$$

4.2.3 The conditional law

We are now in a position to state the theorem concerning the conditional distribution of $X(t)$.

Theorem 4.2 Fix $k_0,\dots,k_n\ge 1$ such that $k_0+\cdots+k_n=k+1$. The conditional distribution of $X(t)$ is given by
$$\Pr\{X(t)\in dx\mid N_0(t)=k_0,\dots,N_n(t)=k_n\}=\frac{(n+1)^n}{n!\,V_n}\,\frac{k!}{\prod_{j=0}^{n}(k_j-1)!}\,\frac{1}{((n+1)ct)^{k}}\prod_{j=0}^{n}h_j(x,t)^{k_j-1}\,dx \tag{4.15}$$
for $x\in\mathrm{int}(T_{ct})$, where
$$h_j(x,t)=ct+n\sum_{i=0}^{n-1}v_{i,j}x_i. \tag{4.16}$$

Proof. From (4.5), (4.10) and (4.14) we have, for $x\in\mathrm{int}(T_{ct})$,
$$\begin{aligned}
\Pr\{X(t)\in dx\mid N_0(t)=k_0,\dots,N_n(t)=k_n\}&=f\big(\varphi^{-1}(x)\big)\,|J_{\varphi^{-1}}|\,dx\\
&=\frac{k!}{\prod_{j=0}^{n}(k_j-1)!}\,|J_{\varphi^{-1}}|\,dx\prod_{j=0}^{n-1}\left(\frac{h_j(x,t)}{(n+1)ct}\right)^{k_j-1}\left(1-\frac{1}{(n+1)ct}\sum_{j=0}^{n-1}h_j(x,t)\right)^{k_n-1}\\
&=\frac{k!}{\prod_{j=0}^{n}(k_j-1)!}\,\frac{|J_{\varphi^{-1}}|\,dx}{((n+1)ct)^{k-n}}\prod_{j=0}^{n}h_j(x,t)^{k_j-1}
\end{aligned}$$
where in the last equality above we used
$$1-\frac{1}{(n+1)ct}\sum_{j=0}^{n-1}h_j(x,t)=1-\frac{1}{(n+1)ct}\sum_{j=0}^{n-1}\left(ct+n\sum_{i=0}^{n-1}v_{i,j}x_i\right)=\frac{ct+n\sum_{i=0}^{n-1}v_{i,n}x_i}{(n+1)ct}=\frac{h_n(x,t)}{(n+1)ct},$$
and $J_{\varphi^{-1}}$ is the Jacobian of the transformation $\varphi^{-1}$, whose determinant, in view of (4.13), is given by $|J_{\varphi^{-1}}|=\frac{1}{n!\,V_n(ct)^n}$. This concludes the proof of (4.15).

Remark 4.3 The careful reader can ascertain that (4.15) coincides with (2.6) of Leorato and Orsingher (2004) in the case $n=2$, for which $x=(x,y)$, $h_0(x,y,t)=ct+2x$, $h_1(x,y,t)=ct-x+\sqrt3\,y$ and $h_2(x,y,t)=ct-x-\sqrt3\,y$.

4.3 Deriving the unconditional law

Formula (4.1) makes the derivation of the unconditional distribution of $X(t)$ possible once the joint distribution of $(N_0(t),\dots,N_n(t))$ is known.

For cyclic motions this can easily be done. Notice that any integer $k\ge 0$ can be written as $(n+1)q+r-1$ with $q\ge 0$ and $0\le r\le n$. The equality $N(t)=k$, rewritten as $N(t)=(n+1)q+r-1$, means that, at time $t$, the particle has run $q$ complete cycles and has then taken $r$ further consecutive directions.

If the initial direction is $\vec v_0$ and $N(t)=(n+1)q+r-1$, then the current direction is $\vec v_{r-1}$. The first cycle is complete as soon as the direction $\vec v_n$ is taken, although $\vec v_n$ remains in force also after that instant.

If the initial direction is $\vec v_j$ and $N(t)=(n+1)q+r-1$, then
$$N_j(t)=\dots=N_{j+r-1}(t)=q+1\quad\text{and}\quad N_{j+r}(t)=\dots=N_{j+n}(t)=q,$$
where we set $N_l(t)=N_{l-n-1}(t)$ for $l>n$. This means that, in a cyclic motion, the numbers of times the different directions are taken differ at most by one unit.

In order for the particle to move inside $T_{ct}$ it is necessary and sufficient that at least one cycle be completed (i.e. $q\ge 1$).
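The counting rule above is easy to confirm by enumerating a cyclic run of directions; the values of n, k and the initial direction below are arbitrary illustrative choices.

```python
n = 4                       # directions v_0, ..., v_n; a cycle has n + 1 steps
k = 13                      # N(t) = k, i.e. k + 1 directions taken in [0, t]
q, r = divmod(k + 1, n + 1)         # so that k = (n+1)q + r - 1
start = 2                   # index of the initial direction (arbitrary)

counts = [0] * (n + 1)
for step in range(k + 1):
    counts[(start + step) % (n + 1)] += 1       # directions taken in cyclic order

# the r directions v_start, ..., v_{start+r-1} are taken q+1 times, the rest q times
assert counts[start % (n + 1)] == q + 1
assert sorted(counts) == [q] * (n + 1 - r) + [q + 1] * r
```

With these choices $q=2$ and $r=4$, so four directions are taken three times and one is taken twice, in agreement with the "at most one unit" statement.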

Theorem 4.4 We have the following explicit distribution: for $0\le r\le n$,
$$\tilde p_r(x,t)\,dx=\Pr\Big\{X(t)\in dx,\ \bigcup_{q=1}^{\infty}\{q\text{ complete cycles}+r\text{ directions}\}\Big\}=\frac{e^{-\lambda t}\,dx}{(n+1)^{r}\,n!\,V_n}\left(\frac{\lambda}{c}\right)^{n+r}H_{r,n+1}(x,t)\,I_{r,n+1}\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right) \tag{4.17}$$
for $x\in\mathrm{int}(T_{ct})$, where
$$H_{r,n+1}(x,t)=\frac{1}{n+1}\sum_{j=0}^{n}h_j(x,t)\cdots h_{j+r-1}(x,t) \tag{4.18}$$
with $H_{0,n+1}(x,t)=1$, and
$$I_{r,n+1}(\xi)=\sum_{q=0}^{\infty}\frac{1}{(q!)^{n+1-r}((q+1)!)^{r}}\left(\frac{\xi}{n+1}\right)^{(n+1)q}.$$
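The hyper-Bessel series $I_{r,n+1}$ is straightforward to evaluate by truncation; the following sketch (with an arbitrary truncation level) also checks that for $n=1$, $r=0$ the series reduces to the classical modified Bessel function $I_0$.

```python
import math
import numpy as np

def hyper_bessel(r, m, xi, terms=60):
    """Truncated series I_{r,m}(xi) = sum_q (xi/m)^{mq} / [(q!)^{m-r} ((q+1)!)^r],
    with m = n + 1 in the notation of the paper."""
    return sum((xi / m) ** (m * q) / (math.factorial(q) ** (m - r)
                                      * math.factorial(q + 1) ** r)
               for q in range(terms))

# sanity check: for n = 1 (m = 2, r = 0) the series is the classical
# modified Bessel function I_0
assert np.isclose(hyper_bessel(0, 2, 1.3), np.i0(1.3))
```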

Proof. The probability (4.17) can be written as
$$\begin{aligned}
&\Pr\Big\{X(t)\in dx,\ \bigcup_{q=1}^{\infty}\{N(t)=(n+1)q+r-1\}\Big\}\\
&=\sum_{q=1}^{\infty}\sum_{j=0}^{n}\Pr\{X(t)\in dx,\ D(0)=\vec v_j,\ N(t)=(n+1)q+r-1\}\\
&=\sum_{q=1}^{\infty}\sum_{j=0}^{n}\Pr\{X(t)\in dx,\ N_j(t)=\dots=N_{j+r-1}(t)=q+1,\ N_{j+r}(t)=\dots=N_{j+n}(t)=q\}\\
&=\sum_{q=1}^{\infty}\sum_{j=0}^{n}\Pr\{N(t)=(n+1)q+r-1,\ D(0)=\vec v_j\}\times\Pr\{X(t)\in dx\mid N_j(t)=\dots=N_{j+r-1}(t)=q+1,\ N_{j+r}(t)=\dots=N_{j+n}(t)=q\}\\
&\stackrel{(4.15)}{=}\frac{e^{-\lambda t}}{n+1}\,\frac{(n+1)^n}{n!\,V_n}\sum_{q=1}^{\infty}\sum_{j=0}^{n}\frac{(\lambda t)^{(n+1)q+r-1}}{(q!)^{r}((q-1)!)^{n-r+1}}\,\frac{[h_j(x,t)\cdots h_{j+r-1}(x,t)]^{q}\,[h_{j+r}(x,t)\cdots h_{j+n}(x,t)]^{q-1}}{((n+1)ct)^{(n+1)q+r-1}}\,dx\\
&=\frac{e^{-\lambda t}}{n+1}\,\frac{(n+1)^n}{n!\,V_n}\left(\frac{\lambda}{(n+1)c}\right)^{n+r}\sum_{j=0}^{n}\sum_{q=0}^{\infty}\frac{\big(\frac{\lambda}{(n+1)c}\big)^{(n+1)q}}{((q+1)!)^{r}(q!)^{n-r+1}}\left(\prod_{i=0}^{n}h_i(x,t)\right)^{\!q}h_j(x,t)\cdots h_{j+r-1}(x,t)\,dx\\
&=\frac{e^{-\lambda t}}{(n+1)^{r}\,n!\,V_n}\left(\frac{\lambda}{c}\right)^{n+r}\frac{1}{n+1}\sum_{j=0}^{n}h_j(x,t)\cdots h_{j+r-1}(x,t)\;\sum_{q=0}^{\infty}\frac{\Big(\big(\frac{\lambda}{(n+1)c}\big)^{n+1}\prod_{j=0}^{n}h_j(x,t)\Big)^{q}}{(q!)^{n-r+1}((q+1)!)^{r}}\,dx. \qquad(4.19)
\end{aligned}$$

This concludes the proof of the theorem.

By summing formula (4.17) with respect to $r$, we explicitly obtain the absolutely continuous component of the distribution of $X(t)$, stated below.

Corollary 4.5 We have, for $x\in\mathrm{int}(T_{ct})$,
$$\Pr\{X(t)\in dx,\ N(t)\ge n\}=dx\,\frac{e^{-\lambda t}}{n!\,V_n}\left(\frac{\lambda}{c}\right)^{n}\sum_{r=0}^{n}\left(\frac{\lambda}{(n+1)c}\right)^{r}H_{r,n+1}(x,t)\,I_{r,n+1}\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right). \tag{4.20}$$

Remark 4.6 We now consider some special cases of (4.17).

(i) If $r=0$, that is if $N_0(t)=\dots=N_n(t)=q$, we have that
$$\tilde p_0(x,t)\,dx=\Pr\{X(t)\in dx,\ \text{the current cycle is complete}\}=\sum_{q=1}^{\infty}\Pr\{X(t)\in dx,\ N_0(t)=\dots=N_n(t)=q\}=\frac{e^{-\lambda t}}{n!\,V_n}\left(\frac{\lambda}{c}\right)^{n}I_{0,n+1}\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right)dx.$$

For $n=2$ this corresponds to formula (2.8) of Leorato and Orsingher (2004).

(ii) When $r=1$, a new cycle has begun and its first direction has been taken. In this case we have
$$H_{1,n+1}(x,t)=\frac{1}{n+1}\sum_{j=0}^{n}h_j(x,t)=\frac{1}{n+1}\sum_{j=0}^{n}\left(ct+n\sum_{i=0}^{n-1}v_{i,j}x_i\right)=ct.$$
The hyper-Bessel function $I_{1,n+1}(\xi)$ can be written as
$$\begin{aligned}
I_{1,n+1}(\xi)&=\sum_{q=0}^{\infty}\frac{1}{(q!)^{n}(q+1)!}\left(\frac{\xi}{n+1}\right)^{(n+1)q}=\left(\frac{\xi}{n+1}\right)^{-n-1}\sum_{q=0}^{\infty}\frac{1}{(q!)^{n+1}(q+1)}\left(\frac{\xi}{n+1}\right)^{(n+1)(q+1)}\\
&=\left(\frac{\xi}{n+1}\right)^{-n-1}\int_0^{\xi}\sum_{q=0}^{\infty}\frac{1}{(q!)^{n+1}}\left(\frac{u}{n+1}\right)^{(n+1)q+n}du=\left(\frac{\xi}{n+1}\right)^{-n-1}\int_0^{\xi}\left(\frac{u}{n+1}\right)^{n}I_{0,n+1}(u)\,du=(n+1)\int_0^1 w^{n}\,I_{0,n+1}(w\xi)\,dw.
\end{aligned}$$
In conclusion we have that
$$\tilde p_1(x,t)\,dx=\Pr\{X(t)\in dx,\ \text{the last cycle has run only 1 direction}\}=dx\,\frac{e^{-\lambda t}\lambda t}{(n+1)!\,V_n}\left(\frac{\lambda}{c}\right)^{n}I_{1,n+1}\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right)=dx\,\frac{e^{-\lambda t}\lambda t}{n!\,V_n}\left(\frac{\lambda}{c}\right)^{n}\int_0^1 w^{n}\,I_{0,n+1}\!\left(\frac{\lambda w}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right)dw.$$
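The integral representation of $I_{1,n+1}$ just obtained can be verified numerically; in this sketch the values $n=3$, $\xi=2$ and the midpoint-rule discretization are arbitrary choices.

```python
import math

def I(r, m, xi, terms=40):
    # truncated hyper-Bessel series, m = n + 1
    return sum((xi / m) ** (m * q) / (math.factorial(q) ** (m - r)
                                      * math.factorial(q + 1) ** r)
               for q in range(terms))

n, xi = 3, 2.0
m = n + 1
# midpoint rule for (n+1) * int_0^1 w^n I_{0,n+1}(w xi) dw
N = 2000
lhs = (n + 1) * sum(((j + 0.5) / N) ** n * I(0, m, (j + 0.5) / N * xi)
                    for j in range(N)) / N
assert abs(lhs - I(1, m, xi)) < 1e-6
```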

(iii) If $r=n$, only the last direction must be taken to complete the cycle. In this case, formula (4.18) yields
$$H_{n,n+1}(x,t)=\frac{1}{n+1}\sum_{j=0}^{n}\prod_{l=j}^{j+n-1}h_l(x,t)=\frac{1}{n+1}\sum_{j=0}^{n}\frac{1}{h_{j+n}(x,t)}\prod_{l=j}^{j+n}h_l(x,t)=\frac{1}{n+1}\prod_{l=0}^{n}h_l(x,t)\sum_{j=0}^{n}\frac{1}{h_j(x,t)}. \tag{4.21}$$
In the last step we applied the periodicity of $h_l$ with respect to $l$. For the hyper-Bessel function $I_{n,n+1}(\xi)$ we have that
$$I_{n,n+1}(\xi)=\sum_{q=0}^{\infty}\frac{1}{q!\,((q+1)!)^{n}}\left(\frac{\xi}{n+1}\right)^{(n+1)q}=\left(\frac{\xi}{n+1}\right)^{-n}\sum_{q=0}^{\infty}\frac{q+1}{((q+1)!)^{n+1}}\left(\frac{\xi}{n+1}\right)^{(n+1)(q+1)-1}=\left(\frac{\xi}{n+1}\right)^{-n}\frac{d}{d\xi}\left[\sum_{q=0}^{\infty}\frac{1}{((q+1)!)^{n+1}}\left(\frac{\xi}{n+1}\right)^{(n+1)(q+1)}\right]=\left(\frac{\xi}{n+1}\right)^{-n}\frac{d}{d\xi}\big(I_{0,n+1}(\xi)-1\big)=\left(\frac{\xi}{n+1}\right)^{-n}I_{0,n+1}'(\xi). \tag{4.22}$$
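Identity (4.22) can be checked term by term against a truncated series; the values $n=2$ and $\xi=1.7$ below are arbitrary.

```python
import math

def I(r, m, xi, terms=40):
    # truncated hyper-Bessel series, m = n + 1
    return sum((xi / m) ** (m * q) / (math.factorial(q) ** (m - r)
                                      * math.factorial(q + 1) ** r)
               for q in range(terms))

def dI0(m, xi, terms=40):
    # term-by-term derivative of I_{0,m}
    return sum(q * (xi / m) ** (m * q - 1) / math.factorial(q) ** m
               for q in range(1, terms))

n, xi = 2, 1.7
m = n + 1
# (4.22): I_{n,n+1}(xi) = (xi/(n+1))^{-n} I'_{0,n+1}(xi)
assert abs(I(n, m, xi) - (xi / m) ** (-n) * dI0(m, xi)) < 1e-12
```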

In the light of (4.21) and (4.22), formula (4.17) reads
$$\begin{aligned}
&\Pr\{X(t)\in dx,\ \text{the last cycle has run }n\text{ directions}\}\\
&=\frac{e^{-\lambda t}}{(n+1)^{n}\,n!\,V_n}\left(\frac{\lambda}{c}\right)^{2n}\frac{1}{n+1}\prod_{l=0}^{n}h_l(x,t)\sum_{j=0}^{n}\frac{1}{h_j(x,t)}\left(\frac{(n+1)c}{\lambda}\right)^{n}\left(\prod_{j=0}^{n}h_j(x,t)\right)^{-n/(n+1)}I_{0,n+1}'\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right)dx\\
&=\frac{e^{-\lambda t}}{n!\,(n+1)\,V_n}\left(\frac{\lambda}{c}\right)^{n}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\;\sum_{j=0}^{n}\frac{1}{h_j(x,t)}\;I_{0,n+1}'\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right)dx. \qquad(4.23)
\end{aligned}$$
On the other hand, since $\partial h_j/\partial t=c$, we have
$$\begin{aligned}
\frac{\partial}{\partial t}\left[I_{0,n+1}\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right)\right]&=\frac{\lambda}{c}\left[\frac{\partial}{\partial t}\left(\prod_{j=0}^{n}h_j(x,t)\right)^{\!1/(n+1)}\right]I_{0,n+1}'\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right)\\
&=\frac{\lambda}{(n+1)c}\left(\prod_{j=0}^{n}h_j(x,t)\right)^{\!1/(n+1)-1}\sum_{j=0}^{n}\Big(\prod_{\substack{l=0\\ l\ne j}}^{n}h_l(x,t)\Big)\frac{\partial h_j}{\partial t}(x,t)\;I_{0,n+1}'\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right)\\
&=\frac{\lambda}{n+1}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\left(\sum_{j=0}^{n}\frac{1}{h_j(x,t)}\right)I_{0,n+1}'\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right).
\end{aligned}$$

Therefore, probability (4.23) can be rewritten in the following simple way:
$$\tilde p_n(x,t)\,dx=\Pr\{X(t)\in dx,\ \text{the last cycle has run }n\text{ directions}\}=e^{-\lambda t}\,dx\,\frac{1}{\lambda\,n!\,V_n}\left(\frac{\lambda}{c}\right)^{n}\frac{\partial}{\partial t}\left[I_{0,n+1}\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right)\right]. \tag{4.24}$$

Formula (4.24), for $n=1,2,3$, coincides exactly with the second terms in the distributions (1.11), (1.9) and (1.10) of Orsingher and Sommella (2004), respectively. The other terms containing higher-order derivatives in (1.9) and (1.10) therein do not correspond to what emerges from (4.17), and thus the conjecture (1.12) formulated in that paper fails.

5 Connection with generalized hyperbolic functions and with hyper-Bessel equations

5.1 Integration of the distribution in $T_{ct}$

Proposition 5.1 The following equality holds:
$$\int_{T_{ct}}\tilde p_r(x,t)\,dx=e^{-\lambda t}\sum_{q=0}^{\infty}\frac{(\lambda t)^{(n+1)q+n+r}}{((n+1)q+n+r)!}. \tag{5.1}$$
Although formula (5.1) can be directly derived from
$$\int_{T_{ct}}\tilde p_r(x,t)\,dx=\Pr\Big\{\bigcup_{q=1}^{\infty}\{N(t)=(n+1)q+r-1\}\Big\},$$
we find it interesting to deduce it from the main result (4.17), thus providing an indirect confirmation. For this, we need the next lemma.

Lemma 5.2 For the functions $h_j$ defined in (4.16) we have that
$$\int_{T_{ct}}\prod_{j=0}^{n}h_j(x,t)^{k_j}\,dx=\frac{((n+1)ct)^{\sum_{j=0}^{n}k_j+n}}{(n+1)^{(n-1)/2}\,n^{n/2}}\,\frac{\prod_{j=0}^{n}\Gamma(k_j+1)}{\Gamma\big(\sum_{j=0}^{n}k_j+n+1\big)}. \tag{5.2}$$

Proof. With the change of variables $x=ct\,y$, we have
$$\int_{T_{ct}}\prod_{j=0}^{n}h_j(x,t)^{k_j}\,dx=(ct)^{\sum_{j=0}^{n}k_j+n}\int_{T_1}\prod_{j=0}^{n}\left(1+n\sum_{i=0}^{n-1}v_{i,j}y_i\right)^{k_j}dy. \tag{5.3}$$
We now perform the affine transformation $z_j=1+n\sum_{i=0}^{n-1}v_{i,j}y_i$, $j=0,\dots,n-1$, coinciding with (2.18) with $a=1$ and with Jacobian equal to $n^{-n/2}(n+1)^{-(n-1)/2}$ (see (2.20)). By repeating the same arguments of Remark 2.2, we have $1+n\sum_{i=0}^{n-1}v_{i,n}y_i=n+1-\sum_{j=0}^{n-1}z_j$. In view of this, the integral (5.3) becomes
$$\int_{T_{ct}}\prod_{j=0}^{n}h_j(x,t)^{k_j}\,dx=\frac{(ct)^{\sum_{j=0}^{n}k_j+n}}{(n+1)^{\frac{n-1}{2}}\,n^{\frac n2}}\int_{T_1'}\prod_{j=0}^{n-1}z_j^{k_j}\left(n+1-\sum_{j=0}^{n-1}z_j\right)^{k_n}dz, \tag{5.4}$$
where the set $T_1'$ coincides with (2.19) with $a=1$. It is now convenient to perform the further transformation $w=(w_0,\dots,w_{n-1})\mapsto z=(z_0,\dots,z_{n-1})$ defined as $z_0=w_0$, $z_1=w_1-w_0$, ..., $z_{n-1}=w_{n-1}-w_{n-2}$, which converts the integral (5.4) into the following one:
$$\frac{(ct)^{\sum_{j=0}^{n}k_j+n}}{(n+1)^{\frac{n-1}{2}}\,n^{\frac n2}}\int_{T_1''}w_0^{k_0}(w_1-w_0)^{k_1}\cdots(w_{n-1}-w_{n-2})^{k_{n-1}}(n+1-w_{n-1})^{k_n}\,dw,$$
where $T_1''=\{(w_0,\dots,w_{n-1})\in\mathbb R^n:0<w_0<w_1<\dots<w_{n-1}<n+1\}$. The foregoing integral, after some straightforward calculations, becomes
$$\int_0^{n+1}(n+1-w_{n-1})^{k_n}\,dw_{n-1}\int_0^{w_{n-1}}(w_{n-1}-w_{n-2})^{k_{n-1}}\,dw_{n-2}\cdots\int_0^{w_1}(w_1-w_0)^{k_1}w_0^{k_0}\,dw_0=(n+1)^{\sum_{j=0}^{n}k_j+n}\,\frac{\prod_{j=0}^{n}\Gamma(k_j+1)}{\Gamma\big(\sum_{j=0}^{n}k_j+n+1\big)}$$
and thus the lemma is proved.
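The iterated Beta-type integral closing the proof can be confirmed numerically for a small case; $n=2$ and the exponents below are illustrative, and the Gauss–Legendre rule is exact here because the integrands are polynomials.

```python
import math
import numpy as np

n = 2
k = (2, 1, 3)                                  # exponents k_0, k_1, k_2 (arbitrary)
A = n + 1
nodes, weights = np.polynomial.legendre.leggauss(20)   # exact for degree <= 39

def gauss(f, a, b):
    # Gauss-Legendre quadrature of f over [a, b]
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.dot(weights, f(x))

def inner(v):
    # int_0^v (v - w0)^{k_1} w0^{k_0} dw0
    return gauss(lambda w0: (v - w0) ** k[1] * w0 ** k[0], 0.0, v)

value = gauss(lambda w1: (A - w1) ** k[2] * np.array([inner(v) for v in w1]),
              0.0, A)
closed = A ** (sum(k) + n) * math.prod(math.factorial(kj) for kj in k) \
         / math.factorial(sum(k) + n)
assert abs(value - closed) < 1e-9 * closed
```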

Proof of Proposition 5.1. If we assume that $r$ of the exponents $k_j$ are equal to $q+1$ and $n-r+1$ are equal to $q$, then $\sum_{j=0}^{n}k_j=(n+1)q+r$ and formula (5.2) produces
$$\int_{T_{ct}}\prod_{j=0}^{n}h_j(x,t)^{k_j}\,dx=\frac{((n+1)ct)^{(n+1)q+n+r}}{(n+1)^{(n-1)/2}\,n^{n/2}}\,\frac{(q!)^{n-r+1}((q+1)!)^{r}}{((n+1)q+n+r)!}.$$
By inserting this result into the integral over $T_{ct}$ of probability (4.19), we get that
$$\begin{aligned}
&\int_{T_{ct}}\Pr\Big\{X(t)\in dx,\ \bigcup_{q=1}^{\infty}\{q\text{ complete cycles}+r\text{ directions}\}\Big\}\\
&=\frac{e^{-\lambda t}}{(n+1)^{r+1}\,n!\,V_n}\left(\frac{\lambda}{c}\right)^{n+r}\sum_{j=0}^{n}\sum_{q=0}^{\infty}\left(\frac{\lambda}{(n+1)c}\right)^{(n+1)q}\frac{1}{(q!)^{n-r+1}((q+1)!)^{r}}\times\frac{((n+1)ct)^{(n+1)q+n+r}\,(q!)^{n-r+1}((q+1)!)^{r}}{(n+1)^{(n-1)/2}\,n^{n/2}\,((n+1)q+n+r)!}\\
&=\frac{(n+1)^{(n+1)/2}\,e^{-\lambda t}}{n!\,V_n\,n^{n/2}}\sum_{q=0}^{\infty}\frac{(\lambda t)^{(n+1)q+n+r}}{((n+1)q+n+r)!}\stackrel{(2.17)}{=}e^{-\lambda t}\sum_{q=0}^{\infty}\frac{(\lambda t)^{(n+1)q+n+r}}{((n+1)q+n+r)!}.
\end{aligned}$$

Remark 5.3 (i) By integrating (4.20) on $T_{ct}$ we get, from (5.1), that
$$\int_{T_{ct}}\Pr\{X(t)\in dx,\ N(t)\ge n\}=e^{-\lambda t}\sum_{r=0}^{n}\sum_{q=0}^{\infty}\frac{(\lambda t)^{(n+1)q+n+r}}{((n+1)q+n+r)!}=e^{-\lambda t}\sum_{l=n}^{\infty}\frac{(\lambda t)^{l}}{l!}=\Pr\{N(t)\ge n\}=\Pr\{X(t)\in\mathrm{int}(T_{ct})\},$$
and then $\int_{T_{ct}}\Pr\{X(t)\in dx\mid N(t)\ge n\}=1$. Hence, we have verified that the support of $(X(t)\mid N(t)\ge n)$ is $\mathrm{int}(T_{ct})$.

(ii) In (5.1) appear the so-called generalized hyperbolic functions of order $n+1$. The $n$th order hyperbolic functions (hyperbolic cosines) are defined as
$$\mathrm{ch}_{n,j}(x)=\sum_{k=0}^{\infty}\frac{x^{nk+j}}{(nk+j)!}=\frac1n\left[e^{x}+\sum_{k=1}^{n-1}e^{(\cos\frac{2k\pi}{n})x}\cos\left(\Big(\sin\frac{2k\pi}{n}\Big)x-\frac{2jk\pi}{n}\right)\right],\qquad 0\le j\le n-1.$$
With this notation, the rhs of (5.1) reads $e^{-\lambda t}\,\mathrm{ch}_{n+1,n+r}(\lambda t)$.

5.2 About the derivatives of the integrated distributions

The following theorem generalizes the relationships (3.25) of Orsingher and Sommella (2004) and (3.8) of Orsingher (2002).

Theorem 5.4 The following identity holds:
$$\frac{d}{dt}\left[\int_{T_{ct}}I_{0,n+1}\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right)dx\right]=\frac{d}{dt}\mathrm{Vol}(T_{ct})+\int_{T_{ct}}\frac{\partial}{\partial t}\left[I_{0,n+1}\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right)\right]dx \tag{5.5}$$
where $\mathrm{Vol}(T_{ct})=V_n(ct)^n=\frac{(n+1)^{(n+1)/2}}{n!\,n^{n/2}}(ct)^n$.

Remark 5.5 We point out that formula (5.5) cannot be extended to the case of higher-order derivatives (i.e. by replacing $d/dt$ and $\partial/\partial t$ respectively by $d^k/dt^k$ and $\partial^k/\partial t^k$ for $k\ge 2$), and this is the reason why the conjecture formulated in Orsingher and Sommella (2004) fails.

Proof. Our first task is to represent the integration set $T_{ct}$ in a suitable form. For the convenience of the reader we write $v_{i,j}$ as follows:
$$v_{i,j}=\begin{cases}a_i=-\sqrt{\dfrac{n+1}{n}}\,\dfrac{1}{\sqrt{(n-i+1)(n-i)}} & \text{if } 0\le i\le j-1,\\[6pt] -(n-j)\,a_j & \text{if } i=j,\\[2pt] 0 & \text{if } j+1\le i\le n-1.\end{cases} \tag{5.6}$$
The inequalities appearing in the definition of $T_{ct}$ (see (2.15)), because of (5.6), can be rewritten as $\sum_{i=0}^{j-1}a_ix_i-(n-j)a_jx_j+\frac{ct}{n}\ge 0$ for $0\le j\le n$. From this, and keeping in mind that $a_j<0$, we can extract the following bounds for $x_j$:
$$\begin{cases}x_j\ge\dfrac{1}{(n-j)a_j}\left(\displaystyle\sum_{i=0}^{j-1}a_ix_i+\frac{ct}{n}\right) & \text{for } 0\le j\le n-1,\\[8pt] -\displaystyle\sum_{i=0}^{n-1}a_ix_i\le\dfrac{ct}{n}.\end{cases} \tag{5.7}$$

From the second inequality of (5.7) and by taking into account the first one for $j=n-1$, we have that
$$\frac{ct}{n}\ge\sum_{i=0}^{n-2}(-a_i)x_i+(-a_{n-1})\,\frac{1}{a_{n-1}}\left(\sum_{i=0}^{n-2}a_ix_i+\frac{ct}{n}\right)=2\sum_{i=0}^{n-2}(-a_i)x_i-\frac{ct}{n}$$
from which we obtain that
$$\sum_{i=0}^{n-2}(-a_i)x_i\le\frac{ct}{n}. \tag{5.8}$$

We now show by induction that inequalities of the form (5.8) actually hold for all $0\le j\le n-2$. For this, we assume that, for a fixed value of $0\le j\le n-1$,
$$\frac{ct}{n}\ge\sum_{i=0}^{n-1-j}(-a_i)x_i$$
and prove that
$$\frac{ct}{n}\ge\sum_{i=0}^{n-2-j}(-a_i)x_i. \tag{5.9}$$
By using this assumption and (5.7) we get that
$$\frac{ct}{n}\ge\sum_{i=0}^{n-2-j}(-a_i)x_i-a_{n-1-j}\,\frac{1}{(n-(n-1-j))\,a_{n-1-j}}\left(\sum_{i=0}^{n-2-j}a_ix_i+\frac{ct}{n}\right)=\left(1+\frac{1}{j+1}\right)\sum_{i=0}^{n-2-j}(-a_i)x_i-\frac{1}{j+1}\,\frac{ct}{n}$$
from which (5.9) immediately follows.

We can rewrite the extension of (5.8) as $\frac{ct}{n}\ge\sum_{i=0}^{j}(-a_i)x_i$ for $0\le j\le n-1$, and subsequently, by putting $\sum_{i=0}^{-1}=0$, we have that
$$x_j\le-\frac{1}{a_j}\left(\sum_{i=0}^{j-1}a_ix_i+\frac{ct}{n}\right),\qquad 0\le j\le n-1. \tag{5.10}$$

If we denote by $\varphi_0(t)=-\frac{ct}{n}$, $\psi_0(t)=ct$ and, for $1\le j\le n-1$,
$$\varphi_j(x_0,\dots,x_{j-1},t)=\frac{1}{(n-j)a_j}\left(\sum_{i=0}^{j-1}a_ix_i+\frac{ct}{n}\right), \tag{5.11}$$
$$\psi_j(x_0,\dots,x_{j-1},t)=-\frac{1}{a_j}\left(\sum_{i=0}^{j-1}a_ix_i+\frac{ct}{n}\right), \tag{5.12}$$
we can replace the definition of the set $T_{ct}$, by applying the lower bounds (5.7) and the upper bounds (5.10), in the following manner:
$$T_{ct}=\{x\in\mathbb R^n:\varphi_j(x_0,\dots,x_{j-1},t)\le x_j\le\psi_j(x_0,\dots,x_{j-1},t)\ \text{for }0\le j\le n-1\}.$$

Let $I(x,t)=I(x_0,\dots,x_{n-1},t)=I_{0,n+1}\!\left(\frac{\lambda}{c}\sqrt[n+1]{\prod_{j=0}^{n}h_j(x,t)}\right)$ and
$$F(t)=\int_{T_{ct}}I(x,t)\,dx=\int_{\varphi_0(t)}^{\psi_0(t)}dx_0\int_{\varphi_1(x_0,t)}^{\psi_1(x_0,t)}dx_1\cdots\int_{\varphi_{n-1}(x_0,\dots,x_{n-2},t)}^{\psi_{n-1}(x_0,\dots,x_{n-2},t)}I(x_0,\dots,x_{n-1},t)\,dx_{n-1}. \tag{5.13}$$
In order to lighten the notation we rewrite (5.13) in the simpler form
$$F(t)=\int_{\varphi_0}^{\psi_0}dx_0\int_{\varphi_1}^{\psi_1}dx_1\cdots\int_{\varphi_{n-1}}^{\psi_{n-1}}I(x,t)\,dx_{n-1}.$$

By applying the formula for the derivative of an integral function we have that
$$F'(t)=\sum_{j=0}^{n-1}\int_{\varphi_0}^{\psi_0}\!dx_0\cdots\int_{\varphi_{j-1}}^{\psi_{j-1}}\!dx_{j-1}\left[\frac{\partial\psi_j}{\partial t}\left\{\int_{\varphi_{j+1}}^{\psi_{j+1}}\!dx_{j+1}\cdots\int_{\varphi_{n-1}}^{\psi_{n-1}}I(x,t)\,dx_{n-1}\right\}_{x_j=\psi_j}-\frac{\partial\varphi_j}{\partial t}\left\{\int_{\varphi_{j+1}}^{\psi_{j+1}}\!dx_{j+1}\cdots\int_{\varphi_{n-1}}^{\psi_{n-1}}I(x,t)\,dx_{n-1}\right\}_{x_j=\varphi_j}\right]+\int_{\varphi_0}^{\psi_0}\!dx_0\cdots\int_{\varphi_{n-1}}^{\psi_{n-1}}\!dx_{n-1}\,\frac{\partial I}{\partial t}(x,t) \tag{5.14}$$
where the index notation $x_j=\varphi_j$ stands for $x_j=\varphi_j(x_0,\dots,x_{j-1},t)$ (and similarly for $x_j=\psi_j$). By substituting $x_j=\varphi_j(x_0,\dots,x_{j-1},t)=\frac{1}{(n-j)a_j}\big(\sum_{i=0}^{j-1}a_ix_i+\frac{ct}{n}\big)$ into $h_j(x,t)$, we get by (4.16)
$$\big\{h_j(x,t)\big\}_{x_j=\varphi_j(x_0,\dots,x_{j-1},t)}=0$$
and therefore
$$I(x_0,\dots,x_{j-1},\varphi_j(x_0,\dots,x_{j-1},t),x_{j+1},\dots,x_{n-1},t)=I_{0,n+1}(0)=1. \tag{5.15}$$

Next, by substituting $x_j=\psi_j(x_0,\dots,x_{j-1},t)=-\frac{1}{a_j}\big(\sum_{i=0}^{j-1}a_ix_i+\frac{ct}{n}\big)$ into $\psi_{j+1}$ and $\varphi_{j+1}$, and by remarking that $\psi_{j+1}=-(n-j-1)\varphi_{j+1}$, we find, for all $0\le j\le n-2$,
$$\psi_{j+1}(x_0,\dots,x_{j-1},\psi_j(x_0,\dots,x_{j-1},t),t)=0,\qquad\varphi_{j+1}(x_0,\dots,x_{j-1},\psi_j(x_0,\dots,x_{j-1},t),t)=0, \tag{5.16}$$
and this implies that, for $0\le j\le n-2$,
$$\left\{\int_{\varphi_{j+1}}^{\psi_{j+1}}\!dx_{j+1}\cdots\int_{\varphi_{n-1}}^{\psi_{n-1}}I(x,t)\,dx_{n-1}\right\}_{x_j=\psi_j}=0. \tag{5.17}$$

The time derivatives of the affine functions $\varphi_j$ and $\psi_j$ are clearly
$$\frac{\partial\varphi_j}{\partial t}(x_0,\dots,x_{j-1},t)=\frac{c}{n(n-j)a_j}\quad\text{and}\quad\frac{\partial\psi_j}{\partial t}(x_0,\dots,x_{j-1},t)=-\frac{c}{n\,a_j}. \tag{5.18}$$
On the other hand, since $h_{n-1}$ vanishes at $x_{n-1}=\varphi_{n-1}$ and $h_n$ vanishes at $x_{n-1}=\psi_{n-1}$, we have
$$\big\{I(x,t)\big\}_{x_{n-1}=\psi_{n-1}}=\big\{I(x,t)\big\}_{x_{n-1}=\varphi_{n-1}}=1. \tag{5.19}$$
By considering (5.15), (5.17), (5.18) and (5.19), the derivative (5.14) becomes
$$F'(t)=-\frac{c}{n}\left[\sum_{j=0}^{n-1}\frac{1}{(n-j)a_j}\int_{\varphi_0}^{\psi_0}\!dx_0\cdots\int_{\varphi_{j-1}}^{\psi_{j-1}}\!dx_{j-1}\left\{\int_{\varphi_{j+1}}^{\psi_{j+1}}\!dx_{j+1}\cdots\int_{\varphi_{n-1}}^{\psi_{n-1}}\!dx_{n-1}\right\}_{x_j=\varphi_j}+\frac{1}{a_{n-1}}\int_{\varphi_0}^{\psi_0}\!dx_0\cdots\int_{\varphi_{n-2}}^{\psi_{n-2}}\!dx_{n-2}\right]+\int_{\varphi_0}^{\psi_0}\!dx_0\cdots\int_{\varphi_{n-1}}^{\psi_{n-1}}\!dx_{n-1}\,\frac{\partial I}{\partial t}(x,t). \tag{5.20}$$

Let us now evaluate, for $0\le j\le n-2$, the $j$th integral
$$\int_{\varphi_0}^{\psi_0}\!dx_0\cdots\int_{\varphi_{j-1}}^{\psi_{j-1}}\!dx_{j-1}\left\{\int_{\varphi_{j+1}}^{\psi_{j+1}}\!dx_{j+1}\cdots\int_{\varphi_{n-1}}^{\psi_{n-1}}\!dx_{n-1}\right\}_{x_j=\varphi_j}. \tag{5.21}$$
We note that, for $j\le k\le n-2$,
$$\big\{\varphi_{k+1}(x_0,\dots,x_k,t)\big\}_{x_k=\varphi_k}=\frac{1}{(n-k-1)a_{k+1}}\left[\sum_{i=0}^{k-1}a_ix_i+a_k\varphi_k(x_0,\dots,x_{k-1},t)+\frac{ct}{n}\right]=\frac{1}{(n-k-1)a_{k+1}}\left(1+\frac{1}{n-k}\right)\left(\sum_{i=0}^{k-1}a_ix_i+\frac{ct}{n}\right)\stackrel{(5.11)}{=}\frac{n-k+1}{n-k-1}\,\frac{a_k}{a_{k+1}}\,\varphi_k(x_0,\dots,x_{k-1},t)=\frac{a_{k+1}}{a_k}\,\varphi_k(x_0,\dots,x_{k-1},t) \tag{5.22}$$
where we used in the last equality, by (5.6), $\frac{a_{k+1}}{a_k}=\sqrt{\frac{n-k+1}{n-k-1}}$. From (5.11) we readily have
$$\frac{\partial\varphi_{k+1}}{\partial x_k}(x_0,\dots,x_k,t)=\frac{1}{n-k-1}\,\frac{a_k}{a_{k+1}} \tag{5.23}$$
and by multiplying both members of (5.23) by $\varphi_{k+1}^{n-k-1}$ we extract the following useful relationship:
$$\frac{1}{n-k-1}\,\frac{a_k}{a_{k+1}}\int_{\varphi_k}^{\psi_k}\varphi_{k+1}^{n-(k+1)}(x_0,\dots,x_k,t)\,dx_k=\int_{\varphi_k}^{\psi_k}\varphi_{k+1}^{n-(k+1)}(x_0,\dots,x_k,t)\,\frac{\partial\varphi_{k+1}}{\partial x_k}(x_0,\dots,x_k,t)\,dx_k=\frac{1}{n-k}\Big[\varphi_{k+1}^{n-k}(x_0,\dots,x_k,t)\Big]_{x_k=\varphi_k}^{x_k=\psi_k}=-\frac{1}{n-k}\left(\frac{a_{k+1}}{a_k}\,\varphi_k(x_0,\dots,x_{k-1},t)\right)^{n-k}$$
where the last step is a consequence of (5.16) and (5.22). The foregoing formula then yields the following result:
$$\int_{\varphi_k}^{\psi_k}\varphi_{k+1}^{n-k-1}(x_0,\dots,x_k,t)\,dx_k=-\frac{n-k-1}{n-k}\left(\frac{a_{k+1}}{a_k}\right)^{n-k+1}\varphi_k^{n-k}(x_0,\dots,x_{k-1},t). \tag{5.24}$$

With all this at hand, we are able to perform the integrations in (5.21). By taking into account (5.11) and (5.12), we see that $\psi_{n-1}=-\varphi_{n-1}$ and the first integral of (5.21) becomes
$$\int_{\varphi_{n-1}}^{\psi_{n-1}}dx_{n-1}=-2\,\varphi_{n-1}(x_0,\dots,x_{n-2},t). \tag{5.25}$$
Formula (5.24) for $k=n-2$ permits us to carry out the second integration in (5.21):
$$-2\int_{\varphi_{n-2}}^{\psi_{n-2}}\varphi_{n-1}(x_0,\dots,x_{n-2},t)\,dx_{n-2}=\left(\frac{a_{n-1}}{a_{n-2}}\right)^{3}\varphi_{n-2}^{2}(x_0,\dots,x_{n-3},t). \tag{5.26}$$
By applying the same reasoning we obtain, by (5.24), (5.25) and (5.26),
$$\int_{\varphi_{n-3}}^{\psi_{n-3}}\!dx_{n-3}\int_{\varphi_{n-2}}^{\psi_{n-2}}\!dx_{n-2}\int_{\varphi_{n-1}}^{\psi_{n-1}}\!dx_{n-1}=-\frac23\left(\frac{a_{n-1}}{a_{n-2}}\right)^{3}\left(\frac{a_{n-2}}{a_{n-3}}\right)^{4}\varphi_{n-3}^{3}(x_0,\dots,x_{n-4},t).$$
By continuing this process we arrive, as can easily be checked by recurrence, at the general result
$$\int_{\varphi_{n-k}}^{\psi_{n-k}}\!dx_{n-k}\int_{\varphi_{n-k+1}}^{\psi_{n-k+1}}\!dx_{n-k+1}\cdots\int_{\varphi_{n-1}}^{\psi_{n-1}}\!dx_{n-1}=\frac{(-1)^{k}\,2\,a_{n-1}^{2}}{k}\,\frac{a_{n-1}a_{n-2}\cdots a_{n-k+1}}{a_{n-k}^{k+1}}\,\varphi_{n-k}^{k}(x_0,\dots,x_{n-k-1},t). \tag{5.27}$$

In view of (5.27), the integral in curly brackets of (5.21) becomes, for $0\le j\le n-2$ (take $k=n-j-1$),
$$\begin{aligned}
\left\{\int_{\varphi_{j+1}}^{\psi_{j+1}}\!dx_{j+1}\cdots\int_{\varphi_{n-1}}^{\psi_{n-1}}\!dx_{n-1}\right\}_{x_j=\varphi_j}&=\frac{2(-1)^{n-j-1}a_{n-1}^{2}}{n-j-1}\,\frac{a_{n-1}a_{n-2}\cdots a_{j+2}}{a_{j+1}^{n-j}}\,\big\{\varphi_{j+1}^{n-j-1}(x_0,\dots,x_j,t)\big\}_{x_j=\varphi_j}\\
&\stackrel{(5.22)}{=}\frac{2(-1)^{n-j-1}a_{n-1}^{2}}{n-j-1}\,\frac{a_{n-1}a_{n-2}\cdots a_{j+2}}{a_{j+1}^{n-j}}\left(\frac{a_{j+1}}{a_j}\,\varphi_j(x_0,\dots,x_{j-1},t)\right)^{n-j-1}\\
&=\frac{2(-1)^{n-j-1}a_{n-1}^{2}}{n-j-1}\,\frac{a_{n-1}\cdots a_{j+2}a_{j+1}}{a_{j+1}^{2}\,a_j^{n-j-1}}\,\varphi_j^{n-j-1}(x_0,\dots,x_{j-1},t)\\
&\stackrel{(5.6)}{=}(-1)^{n-j-1}(n-j)\,\frac{a_{n-1}\cdots a_{j+2}a_{j+1}}{a_j^{n-j-1}}\,\varphi_j^{n-j-1}(x_0,\dots,x_{j-1},t). \qquad(5.28)
\end{aligned}$$
Let us pursue the evaluation of the integral (5.21). By applying (5.24) successively, the integral (5.21) takes, in view of (5.28), the following form:
$$\begin{aligned}
&(-1)^{n-j-1}(n-j)\,\frac{a_{n-1}a_{n-2}\cdots a_{j+1}}{a_j^{n-j-1}}\int_{\varphi_0}^{\psi_0}\!dx_0\cdots\int_{\varphi_{j-1}}^{\psi_{j-1}}\varphi_j^{n-j-1}(x_0,\dots,x_{j-1},t)\,dx_{j-1}\\
&=(-1)^{n-j-1}(n-j)\,\frac{a_{n-1}a_{n-2}\cdots a_{j+1}}{a_j^{n-j-1}}\times(-1)^{j}\left(\frac{a_j}{a_{j-1}}\right)^{n-j+1}\left(\frac{a_{j-1}}{a_{j-2}}\right)^{n-j+2}\cdots\left(\frac{a_1}{a_0}\right)^{n}\varphi_0^{n-1}(t)\\
&=(-1)^{n-1}(n-j)\,\frac{a_{n-1}\cdots a_{j+1}\,a_j^{2}\,a_{j-1}\cdots a_1}{a_0^{n}}\left(-\frac{ct}{n}\right)^{n-1}=\frac{(n-j)\,a_j}{a_0^{n+1}}\left(\prod_{j=0}^{n-1}a_j\right)\left(\frac{ct}{n}\right)^{n-1}=-\frac{n^{2}(n-j)\,a_j}{(n+1)ct}\,\mathrm{Vol}(T_{ct}), \qquad(5.29)
\end{aligned}$$
where we used in the last equality, by (5.6), $\prod_{j=0}^{n-1}a_j=(-1)^{n}\,\frac{(n+1)^{(n-1)/2}}{n!\,n^{n/2}}=\frac{(-1)^{n}}{n+1}\,V_n$. This result holds for all $0\le j\le n-2$. By inserting (5.29) into (5.20) we finally get that
$$F'(t)=\frac{n}{t}\,\mathrm{Vol}(T_{ct})+\int_{T_{ct}}\frac{\partial I}{\partial t}(x,t)\,dx=\frac{d}{dt}\mathrm{Vol}(T_{ct})+\int_{T_{ct}}\frac{\partial I}{\partial t}(x,t)\,dx.$$

Remark 5.6 The intimate structure of the result (5.5) can be grasped through an alternative, geometrical approach. Indeed, the function $F$ can be written as
$$F(t)=\int_{T_{ct}}I(x,t)\,dx=\int_0^{ct}ds\int_{\partial T_s}I(x,t)\,d\sigma(x) \tag{5.30}$$
where $d\sigma(x)$ stands for the infinitesimal element of the hyper-surface $\partial T_s$, which is the hull of the $(n+1)$-hedron $T_s$. By performing the time-derivative of (5.30) we have that
$$F'(t)=c\int_{\partial T_{ct}}I(x,t)\,d\sigma(x)+\int_0^{ct}ds\int_{\partial T_s}\frac{\partial I}{\partial t}(x,t)\,d\sigma(x)=c\int_{\partial T_{ct}}I(x,t)\,d\sigma(x)+\int_{T_{ct}}\frac{\partial I}{\partial t}(x,t)\,dx.$$
Since, for $x\in\partial T_{ct}$, we have that $\prod_{j=0}^{n}h_j(x,t)=0$, we deduce $I(x,t)=1$ and thus
$$F'(t)=c\int_{\partial T_{ct}}d\sigma(x)+\int_{T_{ct}}\frac{\partial I}{\partial t}(x,t)\,dx=c\,\mathrm{Vol}_{n-1}(\partial T_{ct})+\int_{T_{ct}}\frac{\partial I}{\partial t}(x,t)\,dx=c\,(ct)^{n-1}\,\mathrm{Vol}_{n-1}(\partial T_1)+\int_{T_{ct}}\frac{\partial I}{\partial t}(x,t)\,dx.$$

Finally, in order to evaluate the $(n-1)$-dimensional volume $\mathrm{Vol}_{n-1}(\partial T_1)$, we argue that
$$n\,V_n\,c^{n}t^{n-1}=\frac{d}{dt}\big(V_n(ct)^{n}\big)\approx\frac{c^{n}\,[V_n(t+dt)^{n}-V_n t^{n}]}{dt}=\frac{c^{n}\,[\mathrm{Vol}_n(T_{t+dt})-\mathrm{Vol}_n(T_t)]}{dt}. \tag{5.31}$$
The numerator in the last member of (5.31) can be viewed as the volume of the hull of $T_t$ with infinitesimal depth $dt$. Thus
$$\frac{c^{n}\,[\mathrm{Vol}_n(T_{t+dt})-\mathrm{Vol}_n(T_t)]}{dt}\approx c^{n}\,\mathrm{Vol}_{n-1}(\partial T_t)=c^{n}t^{n-1}\,\mathrm{Vol}_{n-1}(\partial T_1)$$
and this heuristically confirms (5.5).

5.3 The distributions $\tilde p_r$ and the related hyper-Bessel equations

We now show that the probabilities (4.17), purged of the exponential factor and reduced to a simplified form by means of the geometrical transformation (3.4), are solutions to the partial differential equation
$$\frac{\partial^{n+1}q}{\partial u_0\cdots\partial u_n}=\left(\frac{\lambda}{c(n+1)}\right)^{n+1}q. \tag{5.32}$$
It should be noted that in Section 3 we proved that the joint distributions $p_j(x,t)$, after the exponential and geometrical reductions sketched above, also solve the same p.d.e. (see Theorem 3.4).

Let us write the probability (4.17) as $\tilde p_r(x,t)=\mathrm{const}\cdot\tilde q_r(u)$ where $u=(u_0,\dots,u_n)=(h_0(x,t),\dots,h_n(x,t))$ and
$$\tilde q_r(u)=\sum_{j=0}^{n}(u_ju_{j+1}\cdots u_{j+r-1})\,I_{r,n+1}\!\left(\frac{\lambda}{c}\sqrt[n+1]{u_0\cdots u_n}\right)=\sum_{q=0}^{\infty}\frac{(\lambda/((n+1)c))^{(n+1)q}}{(q!)^{n+1-r}((q+1)!)^{r}}\sum_{j=0}^{n}(u_0\cdots u_n)^{q}\,(u_j\cdots u_{j+r-1}).$$

Since $\frac{\partial^{n+1}}{\partial u_0\cdots\partial u_n}\big(u_0^{k_0}\cdots u_n^{k_n}\big)=(k_0\cdots k_n)\,u_0^{k_0-1}\cdots u_n^{k_n-1}$, we have that
$$\frac{\partial^{n+1}}{\partial u_0\cdots\partial u_n}\big[(u_0\cdots u_n)^{q}(u_j\cdots u_{j+r-1})\big]=q^{n+1-r}(q+1)^{r}\,(u_0\cdots u_n)^{q-1}(u_j\cdots u_{j+r-1}).$$
Therefore,
$$\begin{aligned}
\frac{\partial^{n+1}\tilde q_r}{\partial u_0\cdots\partial u_n}(u)&=\sum_{q=0}^{\infty}\left(\frac{\lambda}{(n+1)c}\right)^{(n+1)q}\frac{q^{n+1-r}(q+1)^{r}}{(q!)^{n+1-r}((q+1)!)^{r}}\sum_{j=0}^{n}(u_0\cdots u_n)^{q-1}(u_j\cdots u_{j+r-1})\\
&=\sum_{q=1}^{\infty}\frac{(\lambda/((n+1)c))^{(n+1)q}}{((q-1)!)^{n+1-r}(q!)^{r}}\sum_{j=0}^{n}(u_0\cdots u_n)^{q-1}(u_j\cdots u_{j+r-1})\\
&=\left(\frac{\lambda}{(n+1)c}\right)^{n+1}\sum_{q=0}^{\infty}\frac{(\lambda/((n+1)c))^{(n+1)q}}{(q!)^{n+1-r}((q+1)!)^{r}}\sum_{j=0}^{n}(u_0\cdots u_n)^{q}(u_j\cdots u_{j+r-1})=\left(\frac{\lambda}{(n+1)c}\right)^{n+1}\tilde q_r(u)
\end{aligned}$$
and thus $\tilde q_r$ solves (5.32).
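The computation above can be probed numerically: for a truncated version of $\tilde q_r$, a central-difference approximation of the mixed derivative reproduces the right-hand side of (5.32). This sketch fixes $r=1$, $n=2$ and $\lambda=c=1$ as arbitrary test values.

```python
import math
from itertools import product

lam, c, n, r = 1.0, 1.0, 2, 1
m = n + 1

def q_tilde(u, terms=30):
    # series form of q~_r for r = 1: sum_j u_j times I_{1,m}((lam/c)(u_0...u_n)^{1/m})
    p = 1.0
    for x in u:
        p *= x
    return sum((lam / (m * c)) ** (m * q) * p ** q * sum(u)
               / (math.factorial(q) ** (m - r) * math.factorial(q + 1) ** r)
               for q in range(terms))

u0 = (0.7, 1.1, 0.9)
h = 1e-2
# central-difference approximation of the mixed derivative d^3/(du0 du1 du2)
mixed = sum(s0 * s1 * s2 * q_tilde((u0[0] + s0 * h, u0[1] + s1 * h, u0[2] + s2 * h))
            for s0, s1, s2 in product((-1, 1), repeat=3)) / (8 * h ** 3)
rhs = (lam / (m * c)) ** m * q_tilde(u0)
assert abs(mixed - rhs) < 1e-3 * abs(rhs)
```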

References

[1] A. Di Crescenzo (2002). Exact transient analysis of a planar motion with three directions. Stochastics Stochastics Rep., 72 (3-4), 175–189.

[2] V. Kiryakova (1994). Generalized fractional calculus and applications. Pitman Research Notes in Mathematics Series, 301. Longman Scientific & Technical, Harlow; copublished in the United States with John Wiley & Sons, Inc., New York.

[3] A.D. Kolesnik (1991). Equations of Markov random evolutions. Doctoral thesis, University of Kiev (in Russian).

[4] S. Leorato and E. Orsingher (2004). Bose-Einstein-type statistics, order statistics and planar random motions with three directions. Adv. Appl. Probab. 36 (3), 937–970.

[5] S. Leorato, E. Orsingher and M. Scavino (2003). An alternating motion with stops and the related planar, cyclic motion with four directions. Adv. Appl. Probab., 35 (4), 1153–1168.

[6] E. Orsingher (2002). Bessel functions of third order and the distribution of cyclic planar motions with three directions. Stochastics Stochastics Rep., 74 (3-4), 617–631.

[7] E. Orsingher and A.M. Sommella (2004). A cyclic random motion in R^3 with four directions and finite velocity. Stochastics Stochastics Rep., 76 (2), 113–133.

[8] M.A. Pinsky (1991). Lectures on random evolution. World Scientific Publishing Co., Inc., River Edge, NJ.

[9] I.V. Samoilenko (2001). Markovian random evolutions in R^n. Random Oper. Stochastic Equations, 9 (2), 139–160.

[10] I.V. Samoilenko (2001). Analytical theory of Markov random evolutions in Rn. Doctoral thesis, University of Kiev (in Russian).

[11] A.F. Turbin and D.Ya. Plotkin (1991). Higher-order Bessel equation and functions. Asymptotic methods in problems in the theory of random evolutions, 112–121, Akad. Nauk Ukrain. SSR, Inst. Mat., Kiev (in Russian).

Table 1: The matrix V of directions.
Figure 1: How the interval [0, t] is split up into subintervals.
