
Academic year: 2021



Università degli Studi di Padova

DIPARTIMENTO DI FISICA E ASTRONOMIA GALILEO GALILEI
Corso di Laurea Magistrale in Fisica

MASTER'S THESIS

Order statistics of random walks

A test of universality

Candidate:

Matteo Battilana

Student ID 1082129

Internal Supervisor:

Prof. Amos Maritan

External Supervisor (Université Paris Sud):

Prof. Grégory Schehr

Co-supervisor:

Dott. Samir Suweis

Examiner:

Prof. Fulvio Baldovin


Acknowledgements

This thesis was carried out at the LPTMS lab of Université de Paris Sud, Paris. I wish to thank Prof. Grégory Schehr for guiding me through the project, for his patience and for the wise advice he gave me. Many thanks to Prof. Amos Maritan for supervising my project and to Dott. Samir Suweis for his helpfulness. I also wish to thank all the people working at the LPTMS lab for the discussions and the nice time we spent together. Last but not least, heartfelt thanks to my family and friends, who instilled in me the will to pursue this path.


Dedicated to Paola, Gianfranco, Giordano and Marco.


Contents

Introduction

1 Extreme value statistics of i.i.d. random variables
  1.1 Gumbel distribution
  1.2 Fréchet distribution
  1.3 Weibull distribution
  1.4 Fisher-Tippett-Gnedenko theorem
  1.5 Order statistics
    1.5.1 Maxima
    1.5.2 Gaps

2 Extreme value statistics of correlated random variables
  2.1 Wiener processes
  2.2 Ornstein-Uhlenbeck processes

3 Order statistics of random walks
  3.1 Relevant quantities
    3.1.1 Mean value of the k-th gap
    3.1.2 Probability distribution of the k-th gap
  3.2 Mathematical tools
    3.2.1 Generating function method
    3.2.2 Symmetry
  3.3 Final remarks

4 Exponential distribution
  4.1 Mean value of the k-th gap
  4.2 Probability distribution of the k-th gap
  4.3 Alternative approaches
    4.3.1 Generating function of the k-th gap mean value
    4.3.2 Asymptotic linear system

5 First order Gamma distribution
  5.1 Mean value of the k-th gap
  5.2 Probability distribution of the k-th gap

6 General p-th order Gamma distribution
  6.1 Probability distribution of the k-th gap
    6.1.1 Large n limit
    6.1.2 Typical fluctuations
    6.1.3 Large fluctuations
  6.2 Large p limiting distribution

7 Moments of the k-th gap distribution
  7.1 Non-universal coefficients
    7.1.1 Exponential distribution
    7.1.2 First order Gamma distribution

Conclusions

A Noteworthy integrals

B Sum identities


Introduction

When dealing with complex systems we are required to use a statistical approach. In this context we are usually interested in the expectation values and the variances of some relevant observables, since they provide information about the average behavior of the system. However, in some circumstances the knowledge of rare-event statistics is much more important: for example, tsunamis, tornadoes and financial crises happen rarely, but they have remarkable consequences on our lives.

For this reason Extreme Value Statistics (EVS), the branch of statistics studying the distributions of the extremes of a set of random variables, plays a crucial role in many fields. To cite just a few:

Environmental science with applications to ecology [1], the study of hydrology and water resources [2], large wildfires [3] and so on.

Finance where one may be interested in forecasting extreme events [4] or in finding the optimal time to sell a stock [5].

Random matrix theory where EVS is exploited to shed light on the behavior of the largest eigenvalue [6, 7].

Statistical Physics where significant results have been found for the ground state distribution of disordered systems [8, 9].

Miscellaneous Other applications can be found in engineering [10] and in the evolution of athletics records [11].

These are just some of the reasons that drove many researchers to investigate the statistics of extreme events. So far, thanks to the contributions made by Gumbel [10], Tippett [12], Fréchet [13], Gnedenko [14] and others, we have an exact theory of EVS for i.i.d. random variables. In particular, as we will show in the next chapter, in the thermodynamic limit three universality classes can be recognized for the maximum distribution. This theory is very accurate, and it can also be extended to the case of weakly correlated random variables, i.e., when the correlation length is much smaller than the system size.

Nevertheless, several interesting systems involve a set of strongly correlated random variables. Although a few particular problems can be solved, a general background theory still doesn't exist (even for the mean value). In this context, good "laboratories" for studying EVS are random walks: indeed, the positions taken all together represent a set of strongly correlated random variables $\{X_0, \ldots, X_n\}$.



Figure 1: Random walk realization.

The analysis of a random walk requires the definition of two sets of random variables:

Maxima $\{M_{1,n}, \ldots, M_{n+1,n}\}$, obtained by rearranging the positions in decreasing order of magnitude, so that $M_{1,n} \equiv X_{\max}$ is the first maximum and $M_{n+1,n} \equiv X_{\min}$ is the first minimum.

Gaps $\{d_{1,n}, \ldots, d_{n,n}\}$, which are the gaps between successive maxima: $d_{k,n} := M_{k,n} - M_{k+1,n}$.

Figure 2: Random variables sets for order statistics. (a) Maxima; (b) Gaps.

The branch of EVS whose aim is to compute the maxima and gaps distributions is order statistics. Some progress in this field has recently been made by Schehr and Majumdar in [15]. Using an exponential jump distribution for the random walk, they showed that in the thermodynamic limit the gap statistics

exhibits a rich universal behavior. They analytically demonstrated that the asymptotic behavior of the k-th gap mean value is stationary (independent of $n$) and universal:

$$\frac{\langle d_{k,\infty}\rangle}{\sigma} \underset{k\gg1}{\approx} \frac{1}{\sqrt{2\pi k}} \qquad (1)$$

Furthermore they proved that for typical fluctuations, i.e. fluctuations around the expectation value $\delta \sim k^{-1/2}$, the full k-th gap distribution scales as:

$$\Pr\left(d_{k,\infty} = \delta\right) \overset{\text{t.f.}}{\approx} \frac{\sqrt{k}}{\sigma}\, P\!\left(\frac{\delta\sqrt{k}}{\sigma}\right) \qquad (2)$$

where the scaling function appearing in (2) is:

$$P(t) = 4\left[\sqrt{\frac{2}{\pi}}\left(1 + 2t^2\right) - e^{2t^2}\, t\left(3 + 4t^2\right)\operatorname{erfc}\!\left(\sqrt{2}\, t\right)\right]$$

and it shows a surprising power-law tail decaying as $t^{-4}$. Moreover, they provided numerical evidence for the universality of (2) starting from different jump distributions.

Thesis outline

In this work we will extend the study of the gap statistics of a random walk made in [15] to the entire class of Gamma distributions. In particular we will try to retrieve the universality of the k-th gap mean value (1), and then prove or disprove the claim of universality made for the full k-th gap distribution in the scaling limit of typical fluctuations (2). Furthermore we will shed light on the relation between typical/large fluctuations and moments: indeed, if the typical fluctuations turn out to be universal, some moments might be universal as a consequence.



Chapter 1

Extreme value statistics of i.i.d. random variables

Let’s consider a physical system described by a set of i.i.d. random variables {X1. . . Xn}. The variables share the same parent distribution p(x) with both the

mean µ and the variance σ2 finite. We are interested on studying the mean and the

extreme statistics of the system.

For studying the average behavior we consider the observable mean value: X = 1 n n X i=1 Xi (1.1)

We want to investigate the asymptotic behavior of this quantity in the large n limit. For proceeding it is necessary to know the joint probability distribution of the set. By definition of i.i.d. variables, each variable is independent from the others, hence:

Pn(x1, . . . , xn) = n

Y

i=1

p(xi) (1.2)

Straightforwardly the probability distribution of the mean value is given by: PX(x) = Z dx1· · · Z dxnδ 1 n n X i=1 xi− x ! Pn(x1, . . . , xn)

Let’s evaluate now the characteristic function of (1.1): fX(k) = D eikX E = Z dxeikxPX(x) = Z dxeikx Z dx1· · · Z dxnδ 1 n n X i=1 xi− x ! Pn(x1, . . . , xn) = Z dx1· · · Z dxnei k n Pn i=1xi n Y i=1 p(xi) = Z dxeiknxp(x) n = hD eiknx Ein 5


In the large $n$ limit:

$$f_{\bar{X}}(k) \overset{n\gg1}{\approx} \left[1 + \frac{ik}{n}\langle x\rangle - \frac{k^2}{2n^2}\langle x^2\rangle\right]^n \xrightarrow{n\to\infty} e^{ik\mu}\, e^{-\frac{k^2\sigma^2}{2n}}$$

but this is the characteristic function of a Gaussian distribution with mean $\mu$ and variance $\sigma^2/n$, therefore:

$$P_{\bar{X}}(x) \xrightarrow{n\to\infty} \mathcal{N}\!\left(\mu, \sigma^2/n\right) = \frac{1}{\sqrt{2\pi\sigma^2/n}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma/\sqrt{n}}\right)^2}$$

So it is evident that in the large $n$ limit, whatever the parent distribution $p(x)$, the probability distribution of the mean value approaches a Gaussian. This result is known as the central limit theorem.
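A minimal numerical illustration of the central limit theorem (the uniform parent distribution and the sample sizes below are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 200, 50000

# Uniform parent on [0, 1]: mu = 1/2, sigma^2 = 1/12
samples = rng.uniform(0.0, 1.0, size=(trials, n))
means = samples.mean(axis=1)

mu, sigma = 0.5, np.sqrt(1.0 / 12.0)
print(means.mean(), means.std())   # ~ mu and ~ sigma / sqrt(n)
```

Whatever the parent, the empirical mean and standard deviation of the sample means approach $\mu$ and $\sigma/\sqrt{n}$.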

Let’s concern now on the extreme behaviour of the system. In order to do so, we define the observable maximum:

ˆ

X = max ({X1, . . . , Xn})

Let X? be the superior limit of the domain of the probability distribution p(x). One intuitively expects that in the large n limit, the maximum approaches to this value: ˆX → X?. Therefore a rough way for estimating the mean value µn of the

maximum distribution is setting the constraint that only one random variable (i.e. the maximum) stays in the interval [µn, X?] [16]:

P>(µn) =

Z X?

µn

p(x)dx = 1

n (1.3) We want now to evaluate the asymptotic behavior of the maximum distribution. In order to do so it is convenient to work with the cumulative distribution Fn(m)

which is the probability that the maximum stays below m. Clearly this is also the probability that all the random variables do not exceed m, hence:

Fn(m) = Prh ˆX ≤ m i = [P<(m)]n= Z m −∞ p(x)dx n

In order to avoid trivial results, it is necessary to define a rescaled variable which remains constant in the thermodynamic limit:

Y = m − bn an

(1.4) In this fashion one finds the limiting distribution to be:

lim

n,m→∞Fn(anY + bn) = Gi(Y ) i = 1, 2, 3 (1.5)

As suggested in (1.5), there are three different limiting distributions which define three universality classes. They are selected on the basis of the asymptotic behavior of p(x) close to the superior limit X?.


1.1 Gumbel distribution

This class is selected when the parent distribution decreases faster than any power law and no bounds are set to the domain, that is, $X^\star$ can be infinity.

The limiting distribution is the so-called Gumbel law [10]:

$$G_1(Y) = e^{-e^{-Y}} \qquad (1.6)$$

The coefficient $b_n = \mu_n$ can be evaluated using (1.3), while $a_n$ is given by

$$a_n = \frac{\int_{b_n}^{X^\star} (x - b_n)\, p(x)\,dx}{\int_{b_n}^{\infty} p(x)\,dx}$$

and it can be interpreted as the average distance between $\hat{X}$ and $b_n$.

Example 1.1.1 (Exponential distribution).
Let the parent distribution be $p(x) = e^{-x}$, whose domain is the positive real axis. The cumulative distribution reads:

$$F_n(x) = \left[\int_0^x e^{-x'}\,dx'\right]^n = \left(1 - e^{-x}\right)^n = \left[1 - \frac{1}{n}\, e^{-(x - \log n)}\right]^n$$

hence in the large $n, x$ limit:

$$G_1(a_n x + b_n) = e^{-e^{-(x - \log n)}}$$

and comparing with (1.4) and (1.6), it turns out that:

$$b_n = \log n \qquad a_n = 1$$
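This is easy to probe numerically: the maximum of $n$ standard exponentials can be sampled directly by inverting its cumulative distribution, and after centering by $b_n = \log n$ it should follow Gumbel's law, whose mean is the Euler-Mascheroni constant and whose variance is $\pi^2/6$ (an illustrative sketch; sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 10_000, 50_000

# F_n(x) = (1 - e^{-x})^n, so the maximum can be sampled by inversion
u = rng.uniform(size=trials)
x_max = -np.log(1.0 - u ** (1.0 / n))

# Centered maximum: mean ~ 0.5772 (Euler-Mascheroni), variance ~ pi^2/6
y = x_max - np.log(n)
print(y.mean(), y.var())
```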

Example 1.1.2 (Normal distribution).
Another relevant example is that of a Gaussian distribution. For instance, let's consider a bunch of $n$ non-interacting Brownian particles starting from $x = 0$ at $t = 0$. Their positions are drawn from the parent distribution:

$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{x^2}{2\sigma^2}} \qquad \sigma = \sqrt{2Dt}$$

Using Gumbel theory, the coefficients are found to be [16]:

$$a_n = \frac{\sigma}{\sqrt{2\log n}} \qquad b_n = \sigma\sqrt{2\log n} - \sigma\,\frac{\log(\log n)}{2\sqrt{2\log n}} + O\!\left(\frac{1}{\sqrt{\log n}}\right)$$

so the average position of the maximum, that is, the farthest Brownian particle from the origin, is:

$$x_{\max}(t) \sim \sqrt{4Dt \log n}$$

1.2 Fréchet distribution

This class includes all the parent distributions decreasing as a power law, $p(x) \overset{x\to\infty}{\sim} x^{-\alpha-1}$ with $\alpha > 0$, and unbounded domain ($X^\star = \infty$).

The limiting distribution is the Fréchet one [13]:

$$G_2(Y) = \begin{cases} e^{-Y^{-\alpha}} & Y > 0 \\ 0 & Y \leq 0 \end{cases}$$

In this case the coefficient $b_n$ is always zero, while $a_n = \mu_n$ can be computed from (1.3) and it grows as $n^{1/\alpha}$.

Example 1.2.1 (Cauchy's law).
The Cauchy distribution

$$p(x) = \frac{1}{\pi\left(1 + x^2\right)}$$

is part of the Fréchet universality class, with coefficients:

$$\alpha = 1 \qquad a_n \sim \frac{n}{\pi}$$
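A quick Monte Carlo sanity check of this scaling: with $a_n = n/\pi$ the Fréchet limit gives $\Pr[\hat X \le a_n] \to G_2(1) = e^{-1}$. The sizes below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 1000, 20000

# Maximum of n standard Cauchy variables, compared with a_n = n / pi
x = rng.standard_cauchy(size=(trials, n))
frac = np.mean(x.max(axis=1) <= n / np.pi)

print(frac, np.exp(-1.0))   # the two numbers should be close
```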

1.3 Weibull distribution

This class includes all the distributions with bounded domain ($X^\star < \infty$) and cumulative distribution scaling as $x^\alpha$ when $x \to X^\star$. The limiting distribution is the Weibull one [13, 17]:

$$G_3(Y) = \begin{cases} 1 & Y > 0 \\ e^{-|Y|^\alpha} & Y \leq 0 \end{cases}$$

The probability distribution is clearly centered on the (finite) superior limit, hence $b_n = X^\star$, whereas $a_n$ is provided by the relation:

$$\int_{X^\star - a_n}^{X^\star} p(t)\,dt = \frac{1}{n} \qquad (1.7)$$

Example 1.3.1 (Uniform distribution).
The easiest example is the uniform distribution:

$$p(x) = \begin{cases} 1 & x \in [0, 1] \\ 0 & \text{elsewhere} \end{cases}$$

The limiting distribution is a Weibull function with $\alpha = 1$, $b_n = 1$ and, using (1.7), $a_n = 1/n$.
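Numerically, with $b_n = 1$ and $a_n = 1/n$ the $\alpha = 1$ Weibull law means that $n(1 - \hat X)$ for $n$ uniform variables becomes a standard exponential, with unit mean and variance (a sketch; sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 500, 50000

# The maximum of n uniforms has CDF m^n, so it can be sampled as U^{1/n}
x_max = rng.uniform(size=trials) ** (1.0 / n)

# n (1 - x_max) should be approximately Exp(1): mean ~ 1, variance ~ 1
z = n * (1.0 - x_max)
print(z.mean(), z.var())
```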

1.4 Fisher-Tippett-Gnedenko theorem

This theorem unifies the three universality classes by adding a parameter $\gamma \in \mathbb{R}$ [14]:

$$G_\gamma(Y) = e^{-(1 + \gamma Y)^{-\frac{1}{\gamma}}} \qquad (1 + \gamma Y > 0) \qquad (1.8)$$

According to the value of $\gamma$, the three limiting distributions are recovered:

$\gamma \to 0$ Gumbel distribution.

$\gamma > 0$ Fréchet distribution with $\alpha = \gamma^{-1}$.

$\gamma < 0$ Weibull distribution with $\alpha = -\gamma^{-1}$.

Remark 1.
It can be shown that if a random variable $X$ is distributed according to a Fréchet distribution, then two new random variables distributed according to the Gumbel and Weibull distributions are given by:

$$\alpha \log X \longrightarrow \text{Gumbel} \qquad -\frac{1}{X} \longrightarrow \text{Weibull}$$

1.5 Order statistics

So far we have focused on the first maximum of the set. However, one might also be interested in the statistics of a general order maximum and in the gaps between successive maxima, i.e., the order statistics. This requires the definition of the sets of maxima and gaps.

1.5.1 Maxima

Rearranging the set $\{X_1, \ldots, X_n\}$ in decreasing order of magnitude:

$$\{M_{1,n}, \ldots, M_{n,n}\} \qquad X_{\max} \equiv M_{1,n} > M_{2,n} > \cdots > M_{n,n} \equiv X_{\min}$$

The full joint probability reads [18]:

$$P_M(m_1, \ldots, m_n) = n! \prod_{i=1}^{n} p(m_i) \prod_{i=1}^{n-1} \Theta(m_i - m_{i+1})$$

where the differences from (1.2) are the $n!$ factor, counting all the possible permutations, and the $\Theta$ functions, ensuring the ordering of the variables. The cumulative distribution is given by:

$$F_{k,n}(m) = \prod_{i=1}^{k-1} \int_{-\infty}^{+\infty} dm_i \int_{-\infty}^{m} dm_k \prod_{j=k+1}^{n} \int_{-\infty}^{+\infty} dm_j\, P_M(m_1, \ldots, m_n) = n! \prod_{i=1}^{k-1} \int_{m_{i+1}}^{\infty} p(m_i)\,dm_i \int_{-\infty}^{m} p(m_k)\,dm_k \prod_{j=k+1}^{n} \int_{-\infty}^{m_{j-1}} p(m_j)\,dm_j$$


Differentiating with respect to $m$, one obtains the k-th maximum distribution:

$$f_{k,n}(m) = n!\, p(m) \int_{m}^{\infty} p(m_{k-1})\,dm_{k-1} \prod_{i=1}^{k-2} \int_{m_{i+1}}^{\infty} p(m_i)\,dm_i \times \int_{-\infty}^{m} p(m_{k+1})\,dm_{k+1} \prod_{j=k+2}^{n} \int_{-\infty}^{m_{j-1}} p(m_j)\,dm_j \qquad (1.9)$$

On defining:

$$P(m_j) = \int_{-\infty}^{m_j} p(m_{j+1})\,dm_{j+1} \quad\Longrightarrow\quad 1 - P(m_j) = \int_{m_j}^{\infty} p(m_{j+1})\,dm_{j+1}$$

relation (1.9) becomes:

$$f_{k,n}(m) = \frac{n!}{(k-1)!\,(n-k)!}\, p(m)\left[P(m)\right]^{n-k}\left[1 - P(m)\right]^{k-1}$$

and this is the probability distribution of a general k-th order maximum.
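For a concrete check: with a uniform parent, $P(m) = m$ and the formula above reduces to a Beta density with mean $(n-k+1)/(n+1)$, which is easy to compare with a sorted-sample experiment (an illustrative sketch; sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, trials = 20, 3, 50000

# Empirical k-th maximum of n uniform variables
samples = np.sort(rng.uniform(size=(trials, n)), axis=1)
m_k = samples[:, n - k]          # k-th largest value

# f_{k,n}(m) = n!/((k-1)!(n-k)!) m^{n-k} (1-m)^{k-1} is a Beta(n-k+1, k)
# density, whose mean is (n-k+1)/(n+1)
print(m_k.mean(), (n - k + 1) / (n + 1))
```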

We already know that, setting $k = 1$ and taking the large $n$ limit, the asymptotic behavior is well described by the Fisher-Tippett-Gnedenko theorem (1.8). For general $k$ it can be shown that the asymptotic behavior of the cumulative distribution is given by:

$$F_{k,n}(m) \xrightarrow[\substack{n,m\to\infty \\ (m - b_n)/a_n \text{ fixed}}]{} G_i^{(k)}\!\left(\frac{m - b_n}{a_n}\right) = G_i^{(k)}(Y)$$

where the scaling function is now defined as:

$$G_i^{(k)}(Y) = G_i(Y) \sum_{j=0}^{k-1} \frac{\left[-\log G_i(Y)\right]^j}{j!} = \frac{1}{\Gamma(k)} \int_{-\log G_i(Y)}^{\infty} e^{-t}\, t^{k-1}\,dt$$

and the $G_i$ are the well-known scaling functions of the leading maximum.

1.5.2 Gaps

The gaps are random variables useful for studying near-extreme crowding phenomena:

$$\{d_{1,n}, \ldots, d_{n-1,n}\} \qquad d_{k,n} := M_{k,n} - M_{k+1,n}$$

In the large $n$ limit, the asymptotic distribution of the k-th gap for typical fluctuations is found to be [19, 20]:

$$p_{k,n}(\delta) \overset{n\gg1}{\approx} \frac{1}{a_n}\, q_i\!\left(\frac{\delta}{a_n}\right) \qquad i = \{1, 2, 3\}$$

where the $q_i$ depend on the universality classes.


Gumbel

In the Gumbel class the scaling function $q_1$ is:

$$q_1(\delta) = k\, e^{-k\delta}$$

Recalling example (1.1.1), we have $a_n = 1$, so the k-th gap distribution is:

$$p_{k,n}(\delta) \overset{n\gg1}{\approx} q_1(\delta)$$

and the first moment is given by:

$$\langle d_{k,n} \rangle \overset{n\gg1}{\approx} k \int_0^\infty d\delta\, \delta\, e^{-k\delta} = \frac{1}{k}$$

Fréchet

In the Fréchet class the scaling function $q_2$ is:

$$q_2(\delta) = \frac{\alpha^2}{\Gamma(k)} \int_0^\infty e^{-x^{-\alpha}}\, x^{-\alpha-1} (x + \delta)^{-\alpha k - 1}\,dx \overset{\delta\gg0}{\approx} \frac{\alpha}{\Gamma(k)}\, \delta^{-\alpha k - 1}$$

Recalling example (1.2.1), we have $\alpha = 1$ and $a_n \sim n/\pi$, so the k-th gap distribution reads:

$$p_{k,n}(\delta) \overset{n\gg1}{\approx} \frac{\pi}{n\,\Gamma(k)} \int_0^\infty e^{-x^{-1}}\, x^{-2} \left(x + \frac{\pi}{n}\delta\right)^{-k-1} dx = \left(\frac{n}{\pi}\right)^k k(k+1)\, \delta^{-k-1}\, U\!\left(k+1, 0, \frac{n}{\pi\delta}\right)$$

and the first moment is given by:

$$\langle d_{k,n} \rangle \overset{n\gg1}{\approx} \left(\frac{n}{\pi}\right)^k k(k+1) \int_0^\infty d\delta\, \delta^{-k}\, U\!\left(k+1, 0, \frac{n}{\pi\delta}\right) = \frac{n}{\pi k (k-1)}$$

Weibull

In the Weibull class the scaling function $q_3$ is:

$$q_3(\delta) = \frac{\alpha^2}{\Gamma(k)} \int_0^\infty e^{-(x+\delta)^\alpha}\, x^{\alpha k - 1} (x + \delta)^{\alpha - 1}\,dx \overset{\delta\gg0}{\approx} \frac{\Gamma(k\alpha)}{\Gamma(k)}\, \alpha^{2 - k\alpha}\, \delta^{(\alpha-1)(1-\alpha k)}\, e^{-\delta^\alpha}$$

Recalling example (1.3.1), we have $\alpha = 1$ and $a_n = 1/n$, so the k-th gap distribution reads:

$$p_{k,n}(\delta) \overset{n\gg1}{\approx} \frac{n}{\Gamma(k)} \int_0^\infty e^{-x - n\delta}\, x^{k-1}\,dx = n\, e^{-n\delta}$$

and the first moment is given by:

$$\langle d_{k,n} \rangle \overset{n\gg1}{\approx} n \int_0^\infty d\delta\, \delta\, e^{-n\delta} = \frac{1}{n}$$
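The Gumbel-class gap law is in fact exact for exponential variables at any $n$: by the Rényi representation of exponential order statistics, the k-th gap of $n$ i.i.d. Exp(1) variables is itself exponential with rate $k$. This makes a direct numerical check straightforward (a sketch; sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n, k, trials = 50, 3, 50000

# k-th gap d_{k,n} = M_k - M_{k+1} of n i.i.d. Exp(1) samples
samples = np.sort(rng.exponential(size=(trials, n)), axis=1)[:, ::-1]
d_k = samples[:, k - 1] - samples[:, k]

print(d_k.mean())   # close to 1/k
```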


Chapter 2

Extreme value statistics of correlated random variables

We study now the EVS of a set of correlated random variables. If the correlation is weak, then we can consider the coarse-grained system and so retrieve a new set of i.i.d. variables. Indeed, let $\{X_1, \ldots, X_n\}$ be a set of weakly correlated random variables; by definition, the correlation function decays fast with the distance:

$$C_{ij} = \langle X_i X_j \rangle - \langle X_i \rangle \langle X_j \rangle \sim e^{-\frac{|i-j|}{\zeta}}$$

where $\zeta \ll n$. So, dividing the system into blocks of size $\zeta$, the random variables inside each block are still correlated but the blocks are not; hence the new set of $n' = n/\zeta$ block variables is i.i.d. and its statistics is well known.

On the other hand, if the correlation between variables is strong, coarse-graining is no longer feasible since $\zeta \gtrsim n$. This framework lacks a general theory, but a few particular problems can be solved.

2.1 Wiener processes

Let's consider a free Brownian particle starting from the origin. Its dynamics is described by a Wiener process, so the Langevin equation reads:

$$\frac{dx}{d\tau} = \eta(\tau) \quad\Longrightarrow\quad x(\tau) = \int_0^\tau \eta(s)\,ds \qquad (2.1)$$

where $\eta(\tau)$ is a white noise:

$$\langle \eta(\tau) \rangle = 0 \qquad \langle \eta(\tau)\eta(\tau') \rangle = 2D\,\delta(\tau - \tau')$$

We are interested in the first maximum distribution over a time interval $\tau \in [0, t]$:

$$M(t) := \max_{0 \leq \tau \leq t}\left[x(\tau)\right]$$

Mean value and correlation can be easily computed from (2.1):

$$\langle x(t) \rangle = 0 \qquad \langle x(t)x(t') \rangle = 2D \min(t, t')$$


Thus the positions are strongly correlated in time.

As usual, it is convenient to compute the cumulative distribution, i.e. the probability that the Brownian particle stays below $z$:

$$F(z, t) = \text{Prob}\left[M(t) \leq z\right] = \text{Prob}\left[x(\tau) \leq z,\; 0 \leq \tau \leq t\right]$$

To proceed we need the probability distribution $P(x, t|z)$ of the random variable $x$. This is provided by the Fokker-Planck equation, which in this case is a diffusion equation. Adding the initial condition that the particle starts at the origin and the absorbing condition preventing the particle from reaching $z$, the Cauchy problem

$$\begin{cases} \dfrac{\partial}{\partial t} P(x, t|z) = D\, \dfrac{\partial^2}{\partial x^2} P(x, t|z) \\ P(x, 0|z) = \delta(x) \\ P(-\infty, t|z) = P(z, t|z) = 0 \end{cases}$$

can be solved using the method of images:

$$P(x, t|z) = \frac{1}{\sqrt{4\pi Dt}}\left(e^{-\frac{x^2}{4Dt}} - e^{-\frac{(x - 2z)^2}{4Dt}}\right) \qquad (2.2)$$

Integrating (2.2), the cumulative distribution is:

$$F(z, t) = \int_{-\infty}^{z} P(x, t|z)\,dx = \frac{1}{\sqrt{4\pi Dt}} \int_{-\infty}^{z} \left(e^{-\frac{x^2}{4Dt}} - e^{-\frac{(x - 2z)^2}{4Dt}}\right) dx = \frac{2}{\sqrt{\pi}} \int_0^{\frac{z}{\sqrt{4Dt}}} e^{-u^2}\,du = \operatorname{erf}\!\left(\frac{z}{\sqrt{4Dt}}\right)$$

Differentiating with respect to $z$, we get the maximum distribution:

$$P_M(z, t) := \frac{d}{dz} F(z, t) = \frac{1}{\sqrt{\pi Dt}}\, e^{-\frac{z^2}{4Dt}}\, \Theta(z)$$

The mean value of the maximum

$$\langle M(t) \rangle = \int_{-\infty}^{+\infty} z\, P_M(z)\,dz = 2\sqrt{\frac{Dt}{\pi}}$$

grows as the square root of time. Similar calculations produce:

$$\text{Var}\left[M(t)\right] = \frac{2(\pi - 2)}{\pi}\, Dt$$
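The mean of the maximum can be checked with a crude discretized simulation: Gaussian increments of variance $2D\,\Delta t$ approximate the Wiener process, and the discrete maximum slightly underestimates the continuous one, so only loose agreement should be expected (a sketch with arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(7)
D, t, steps, walks = 0.5, 1.0, 1000, 20000
dt = t / steps

# Discretized Wiener process: increments ~ N(0, 2 D dt)
increments = rng.normal(0.0, np.sqrt(2 * D * dt), size=(walks, steps))
paths = np.cumsum(increments, axis=1)
M = np.maximum(paths.max(axis=1), 0.0)   # include the starting point x(0) = 0

print(M.mean(), 2 * np.sqrt(D * t / np.pi))
```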

2.2 Ornstein-Uhlenbeck processes

Let's now add an elastic force to the previous Brownian particle (2.1). The Langevin equation reads:

$$\frac{dx}{d\tau} = -\mu x + \eta(\tau) \qquad (2.3)$$

Equation (2.3) can be integrated by multiplying by $e^{\mu\tau}$:

$$\frac{dx}{d\tau}\, e^{\mu\tau} + \mu x\, e^{\mu\tau} = \frac{d}{d\tau}\left(x\, e^{\mu\tau}\right) = \eta(\tau)\, e^{\mu\tau} \quad\Longrightarrow\quad x(t) = e^{-\mu t} \int_0^t \eta(\tau)\, e^{\mu\tau}\,d\tau$$

Quick calculations provide the mean value and the correlation of the position:

$$\langle x(t) \rangle = 0 \qquad \langle x(t_1)x(t_2) \rangle = \frac{D}{\mu}\left[e^{-\mu|t_1 - t_2|} - e^{-\mu(t_1 + t_2)}\right]$$

As a test of consistency, taking the limit $\mu \to 0$ we should retrieve the Wiener process correlation; indeed:

$$\langle x(t_1)x(t_2) \rangle \overset{\mu\to0}{\sim} D\,\frac{1 - \mu|t_1 - t_2| - 1 + \mu(t_1 + t_2)}{\mu} = D\left[(t_1 + t_2) - |t_1 - t_2|\right] = 2D \min(t_1, t_2)$$
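The correlation formula above is easy to reproduce with an Euler-Maruyama discretization of the Langevin equation; here only the stationary variance $D/\mu$ is checked, and $\mu$, $D$ and the grid are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(8)
mu, D, dt, steps, walks = 2.0, 1.0, 1e-3, 3000, 20000

# Euler-Maruyama for dx = -mu x dt + sqrt(2 D dt) N(0, 1)
x = np.zeros(walks)
for _ in range(steps):
    x += -mu * x * dt + np.sqrt(2 * D * dt) * rng.normal(size=walks)

# After t = 3 >> tau_p = 1/mu, the variance approaches D / mu
print(x.var(), D / mu)
```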

The characteristic time of the elastic force is $\tau_p = \mu^{-1}$. Assuming it to be finite ($\mu > 0$), one easily notices that for times much larger than $\tau_p$ the correlation decays exponentially:

$$\langle x(t_1)x(t_2) \rangle \sim e^{-\mu|t_1 - t_2|}$$

This means that after a time $t \gg \tau_p$ the system ends up being weakly correlated, and so it can be analyzed using the EVS theory for i.i.d. variables. Since the parent distribution of the random variables is Gaussian, we expect a Gumbel limiting distribution. To retrieve this result, let's start from the Cauchy problem:

$$\begin{cases} \dfrac{\partial}{\partial t} P(x, t|z) = \mu\, \dfrac{\partial}{\partial x}\left[x\, P(x, t|z)\right] + D\, \dfrac{\partial^2}{\partial x^2} P(x, t|z) \\ P(x, 0|z) = \delta(x) \\ P(-\infty, t|z) = P(z, t|z) = 0 \end{cases}$$

The harmonic potential prevents us from solving it with the method of images; however, we can proceed by expanding the solution in its eigenstates [21]:

$$P(x, t|z) = \sum_\lambda a_\lambda\, e^{-\lambda t}\, D_{\lambda/\mu}\!\left(-\sqrt{2\mu}\, x\right) e^{-\frac{\mu x^2}{2}}$$

where $D_p(x)$ is the parabolic cylinder function satisfying:

$$\frac{d^2}{dx^2} D_p(x) + \left(p + \frac{1}{2} - \frac{x^2}{4}\right) D_p(x) = 0$$

The absorbing condition implies:

$$D_{\lambda/\mu}\!\left(-\sqrt{2\mu}\, z\right) = 0 \qquad \forall\lambda$$

and this fixes the eigenvalues. In the large $t$ limit, the leading term of the expansion is the one related to the smallest eigenvalue $\lambda_0(z)$:


16 2. Extreme value statistics of correlated random variables it can be shown that for large z the eigenvalue behaves as [22]:

λ0(z) z→∞ −−−→ √2 πµ 3 2ze−µz 2

therefore the cumulative probability reads:

F (z, t) ∼ e−λ0(z)t= e−e −µz2+log  2tµ 3 2 z √ π   = G1 " p 4µ log t z − s log t µ !#

So we recover a Gumbel distribution with coefficients: at= 1 √ 4µ log t bt= s log t µ = hM (t)i

As result the mean value of the maximum distribution increases very slowly with time. A deeper analysis shows that the Brownian particle doesn’t feel the potential for times way smaller than τp, hence it freely diffuses:

hM (t)i ∼  √ t t  τp √ log t t  τp Final remarks

In this chapter we presented two systems of strongly correlated variables where one can get some information about the first maximum distribution. Other solvable cases can be found in 1D fluctuating surfaces [23] and in the evaluation of the largest eigenvalue distribution in Random Matrix Theory [6, 7].


Chapter 3

Order statistics of random walks

In this chapter I will present the techniques used for studying the order statistics of a set of strongly correlated random variables. The set is provided by the positions of a one-dimensional random walker.

Definition 3.0.1 (Random Walk).
Let $\{\eta_0, \ldots, \eta_n\}$ be a set of i.i.d. random variables. The partial sum $X_j$ up to step $j$ is defined by:

$$X_j = \sum_{i=0}^{j} \eta_i \qquad j \leq n$$

The set of partial sums $\{X_0, \ldots, X_n\}$ is called a random walk, and it represents a set of strongly correlated random variables.

We can visualize it as a particle moving through the positions $X_j$ at the times $j$. The i.i.d. variables $\eta_j$ then represent the jumps from one position to another:

$$X_j - X_{j-1} = \eta_j$$

By definition the jumps $\eta_j$ share the same parent distribution $f(\eta)$, which is intuitively called the jump distribution. In the following analysis we will assume the jump distribution to be continuous and symmetric, with zero mean and finite variance $\sigma^2$.
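In code, generating such a walk and extracting its maxima and gaps is a one-liner each (a sketch; the Gaussian jump distribution is just one admissible choice):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 14

# Random walk: positions are partial sums of i.i.d. symmetric jumps
jumps = rng.normal(0.0, 1.0, size=n + 1)
positions = np.cumsum(jumps)             # X_0, ..., X_n

# Maxima M_{1,n} >= ... >= M_{n+1,n}: positions sorted in decreasing order
M = np.sort(positions)[::-1]

# Gaps d_{k,n} = M_{k,n} - M_{k+1,n}
d = M[:-1] - M[1:]

# Basic identities: gaps are non-negative and telescope to max - min
print(d.min() >= 0, np.isclose(d.sum(), M[0] - M[-1]))
```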

3.1 Relevant quantities

The study of the order statistics of the random walk requires the sets of maxima $\{M_{1,n}, \ldots, M_{n+1,n}\}$ and gaps $\{d_{1,n}, \ldots, d_{n,n}\}$. Given that in [15] it has been found that the gap statistics shows a rich universal behavior, our analysis will revolve around the two following quantities:

1. the mean value of the k-th gap;

2. the probability distribution of the k-th gap.

The first quantity can be computed from the second one, but it can also be extracted from results on the k-th maximum statistics. The strategy we followed is rather


simple: find a way of computing the cumulative distribution $F_{k,n}(x)$ of the k-th maximum/gap and then differentiate to obtain the probability distribution. Thus the challenging step is computing the cumulative distribution. The procedures followed for the two quantities are almost the same, and they require the introduction of some well-defined auxiliary variables.

3.1.1 Mean value of the k-th gap

Let $q_{k,n}(x)$ and $r_{k,n}(x)$ be two auxiliary quantities related by:

$$r_{k,n}(x) = q_{n-k,n}(x) \qquad (3.1)$$

with:

$$q_{0,0}(x) = r_{0,0}(x) = 1 \qquad q_{k,n}(x) = r_{k,n}(x) = 0 \quad \text{(if } k > n\text{)}$$

The quantity $q_{k,n}(x)$ can be defined in two different ways:

1. the probability that, starting at $x_0 = 0$, there are $k$ points above $x$ and so $n - k$ points below it;

2. the probability that, starting at $x_0 = x$, there are $k$ points below $0$ and so $n - k$ points above it.

The connection between the two definitions is illustrated in figure (3.1).

Figure 3.1: Visual representation of the equivalence of the two $q_{k,n}(x)$ definitions. (a) Starting configuration; (b) inversion of the vertical axis; (c) vertical shift of magnitude $+x$.
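The equivalence of the two definitions, which holds for any symmetric jump distribution (flip the sign of every jump, then shift), can also be checked by brute force. This is a sketch with Gaussian jumps and arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(10)
n, k, x, walks = 10, 3, 0.5, 40000

def count_prob(start, level, above):
    """MC estimate of Pr[exactly k of the n positions lie above/below level]."""
    steps = rng.normal(0.0, 1.0, size=(walks, n))
    pos = start + np.cumsum(steps, axis=1)
    counts = (pos > level).sum(axis=1) if above else (pos < level).sum(axis=1)
    return np.mean(counts == k)

p1 = count_prob(start=0.0, level=x, above=True)    # definition 1
p2 = count_prob(start=x, level=0.0, above=False)   # definition 2
print(p1, p2)
```

The two independent estimates agree to within Monte Carlo noise.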

The cumulative distribution of the k-th maximum can be expressed as a combination of these auxiliary quantities. Indeed it sets the bound for the k-th maximum

(and so for the higher orders too) to stay below the value $x$. As a consequence, it can be expressed as the sum over all the possible configurations of the lower order maxima $(1, \ldots, k-1)$, which are free to be smaller or greater than $x$. Let's treat the positive and negative cases separately.

$x > 0$: The cumulative distribution is nothing but the sum of the probabilities $q_{m,n}(x)$, where $m$ varies from $0$ (all the maxima are smaller than $x$) to $k - 1$ (the first $k - 1$ maxima are above $x$):

$$F_{k,n}(x) = \sum_{m=0}^{k-1} q_{m,n}(x)$$

$x < 0$: This case is analogous to the previous one, though there is at least the starting point $X_0$ above $x$. The cumulative distribution reads:

$$F_{k,n} = \sum_{m=0}^{k-2} r_{m,n}(-x)$$

All together:

$$F_{k,n}(x) = \Pr\left[M_{k,n} \leq x\right] = \begin{cases} \sum_{m=0}^{k-1} q_{m,n}(x) & x > 0 \\ \sum_{m=0}^{k-2} r_{m,n}(-x) & x < 0 \end{cases}$$

Taking the derivative with respect to $x$, the k-th maximum distribution is obtained:

$$P_M(M_{k,n} = x) = \frac{\partial}{\partial x} F_{k,n}(x) = \sum_{m=0}^{k-1} \frac{\partial}{\partial x} q_{m,n}(x) + \sum_{m=0}^{k-2} \frac{\partial}{\partial x} r_{m,n}(-x)$$

Hence the mean value of the k-th maximum is:

$$\langle M_{k,n} \rangle = \sum_{m=0}^{k-1} \int_0^\infty x\, \frac{\partial}{\partial x} q_{m,n}(x)\,dx + \sum_{m=0}^{k-2} \int_0^\infty x\, \frac{\partial}{\partial x} r_{m,n}(-x)\,dx \qquad (3.2)$$

Exploiting the linearity of the first moment, the mean value of the k-th gap is simply:

$$\langle d_{k,n} \rangle = \langle M_{k,n} \rangle - \langle M_{k+1,n} \rangle = -\int_0^\infty x\, \frac{\partial}{\partial x}\left[q_{k,n}(x) + r_{k-1,n}(-x)\right] dx \qquad (3.3)$$

The crucial role played by the auxiliary variables $q_{k,n}$ and $r_{k,n}$ is now clear. An equation for them is provided by the backward Markov chain equation, obtained by conditioning on the first step of the random walk:

$$q_{k,n}(x) = \int_0^\infty q_{k,n-1}(x')\, f(x' - x)\,dx' + \int_{-\infty}^0 r_{k-1,n-1}(-x')\, f(x' - x)\,dx' = \int_0^\infty q_{k,n-1}(x')\, f(x' - x)\,dx' + \int_0^\infty r_{k-1,n-1}(x')\, f(x' + x)\,dx' \qquad (3.4)$$

where the first (second) term represents the probability of jumping to $x' > 0$ ($x' < 0$) from $x > 0$. For $r_{k,n}$, using (3.1):

$$r_{k,n}(x) = \int_0^\infty r_{k-1,n-1}(x')\, f(x' - x)\,dx' + \int_0^\infty q_{k,n-1}(x')\, f(x' + x)\,dx' \qquad (3.5)$$


3.1.2 Probability distribution of the k-th gap

Almost the same procedure is pursued to evaluate the probability distribution of the k-th gap. Let's start by considering the joint probability that the k-th maximum has value $y$ and the $(k+1)$-th one has value $x < y$:

$$P(M_{k,n} = y;\, M_{k+1,n} = x) = P_{k,n}(x, y)$$

The cumulative distribution $S_{k,n}(x, y)$ can be defined as:

$$S_{k,n}(x, y) = \Pr\left[M_{k,n} > y,\; M_{k+1,n} < x\right] = \int_y^\infty dt \int_{-\infty}^x du\, P_{k,n}(t, u)$$

hence:

$$P_{k,n}(x, y) = -\frac{\partial^2}{\partial x\, \partial y} S_{k,n}(x, y) \qquad (3.6)$$

The probability that the gap between the two maxima is $\delta = y - x$ will be:

$$P(d_{k,n} = \delta) = P_{k,n}(\delta) = \int_{\mathbb{R}^2} dx\,dy\, P_{k,n}(x, y)\, \Theta(y - x)\, \delta(y - x - \delta)$$

Using (3.6):

$$P_{k,n}(d_{k,n} = \delta) = -\int_{\mathbb{R}^2} \frac{\partial^2 S_{k,n}(x, y)}{\partial x\, \partial y}\, \Theta(y - x)\, \delta(y - x - \delta)\,dx\,dy \qquad (3.7)$$

As we have done before, to compute the cumulative distribution $S_{k,n}(x, y)$ we introduce two auxiliary variables $Q_{k,n}(x, \delta)$ and $R_{k,n}(x, \delta)$, related by:

$$R_{k,n}(x, \delta) = Q_{n-k,n}(x, \delta) \qquad (3.8)$$

with:

$$Q_{0,0}(x, \delta) = R_{0,0}(x, \delta) = 1 \qquad Q_{k,n}(x, \delta) = R_{k,n}(x, \delta) = 0 \quad \text{(if } k > n\text{)}$$

Analogously, the quantity $Q_{k,n}(x, \delta)$ is defined as the probability that the random walk, starting at $x_0 = x$, has $k$ points in $]-\infty, -\delta]$ and $n - k$ in $[0, \infty[$. The cumulative distribution is related to these quantities by:

$$S_{k,n}(x, y) = \begin{cases} Q_{k,n}(x, y - x) & x > 0 \\ 0 & x < 0 \wedge y > 0 \\ R_{k-1,n}(-y, y - x) & x < 0 \wedge y < 0 \end{cases}$$

Then (3.7) becomes:

$$P_{k,n}(\delta) = -\int_0^\infty dx \int_x^\infty dy\, \frac{\partial^2}{\partial x\, \partial y} Q_{k,n}(x, y - x)\, \delta(y - x - \delta) - \int_{-\infty}^0 dx \int_x^0 dy\, \frac{\partial^2}{\partial x\, \partial y} R_{k-1,n}(-y, y - x)\, \delta(y - x - \delta)$$

In the same way, a backward equation can be written for each auxiliary variable:

$$Q_{k,n}(x, \delta) = \int_0^\infty Q_{k,n-1}(x', \delta)\, f(x - x')\,dx' + \int_0^\infty R_{k-1,n-1}(x', \delta)\, f(x' + x + \delta)\,dx'$$

$$R_{k,n}(x, \delta) = \int_0^\infty R_{k-1,n-1}(x', \delta)\, f(x - x')\,dx' + \int_0^\infty Q_{k,n-1}(x', \delta)\, f(x' + x + \delta)\,dx' \qquad (3.10)$$

Remark 2.
In the study of the k-th gap distribution we will often use the rescaled gap $\gamma$, defined by:

$$\gamma = \frac{\delta}{b}$$

Therefore it is equivalent to use the variable $\delta$ or $\gamma$, since they only differ by a positive multiplicative constant.

3.2 Mathematical tools

During the analysis we encountered several mathematical issues, like the recursiveness of the equations and heavy computations. The two following tools allowed us to mitigate these problems.

3.2.1 Generating function method

In order to remove the recursiveness of the equations, we made large use of the probability generating function method.

Definition 3.2.1 (Probability generating function).
Given a sequence with many indexes $f_{n_1,\ldots,n_r}$, the associated probability generating function is:

$$G(z_1, \ldots, z_r) = \sum_{n_1=1}^\infty \cdots \sum_{n_r=1}^\infty z_1^{n_1} \cdots z_r^{n_r}\, f_{n_1,\ldots,n_r}$$

where the $\{z_1, \ldots, z_r\}$ are complex variables whose modulus must be confined to the interval $[0, 1]$.

Although this method lets us proceed with the calculations, eventually one is required to invert the generating function to retrieve the needed result. For this purpose a general recipe doesn't exist; anyway, some techniques such as the Maclaurin expansion, guessing the scaling function, and the Bromwich formula (C) are helpful for the scope.
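As a toy illustration of the method (not taken from the thesis): the recursion $f_{k,n} = f_{k,n-1} + f_{k-1,n-1}$ for binomial coefficients is turned by a double generating function into the closed algebraic relation $G(s,z) = 1 + s(1+z)\,G(s,z)$, and the coefficients can then be recovered by a truncated Maclaurin expansion:

```python
from math import comb

N = 8   # truncation order in s

# Represent G(s, z) as a list of dicts: coeff[n][k] = coefficient of s^n z^k.
# Iterate the fixed point G = 1 + s (1 + z) G, which converges order by order.
coeff = [{0: 1}] + [{} for _ in range(N)]
for _ in range(N):
    new = [{0: 1}] + [{} for _ in range(N)]
    for n in range(N):
        for k, c in coeff[n].items():
            new[n + 1][k] = new[n + 1].get(k, 0) + c          # s * G term
            new[n + 1][k + 1] = new[n + 1].get(k + 1, 0) + c  # s * z * G term
    coeff = new

print(coeff[5])   # matches the binomial row comb(5, k), k = 0..5
```

The same order-by-order expansion is what a Maclaurin inversion of $\tilde q(s,z;x)$ or $\tilde Q(s,z;x,\delta)$ performs in practice.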


The double generating functions of the k-th maximum auxiliary variables are:

$$\tilde{q}(s, z; x) = \sum_{n=0}^\infty \sum_{k=0}^n s^n z^k\, q_{k,n}(x) \qquad \tilde{r}(s, z; x) = \sum_{n=0}^\infty \sum_{k=0}^n s^n z^k\, r_{k,n}(x)$$

In the following these quantities will be defined by linear differential equations. Boundary conditions for the coefficients are provided by the generating function representations of (3.4) and (3.5):

$$\tilde{q}(s, z; x) = 1 + s \int_0^\infty \tilde{q}(s, z; x')\, f(x' - x)\,dx' + sz \int_0^\infty \tilde{r}(s, z; x')\, f(x' + x)\,dx'$$

$$\tilde{r}(s, z; x) = 1 + sz \int_0^\infty \tilde{r}(s, z; x')\, f(x' - x)\,dx' + s \int_0^\infty \tilde{q}(s, z; x')\, f(x' + x)\,dx' \qquad (3.11)$$

The same arguments and equations are valid for the gaps' auxiliary variables, as long as one makes the following substitutions:

$$\tilde{q}(s, z; x) \mapsto \tilde{Q}(s, z; x, \delta) \qquad \tilde{r}(s, z; x) \mapsto \tilde{R}(s, z; x, \delta) \qquad f(x' - x) \mapsto f(x' - x) \qquad f(x' + x) \mapsto f(x' + x + \delta)$$

so:

$$\tilde{Q}(s, z; x, \delta) = 1 + s \int_0^\infty \tilde{Q}(s, z; x', \delta)\, f(x - x')\,dx' + sz \int_0^\infty \tilde{R}(s, z; x', \delta)\, f(x' + x + \delta)\,dx'$$

$$\tilde{R}(s, z; x, \delta) = 1 + sz \int_0^\infty \tilde{R}(s, z; x', \delta)\, f(x - x')\,dx' + s \int_0^\infty \tilde{Q}(s, z; x', \delta)\, f(x' + x + \delta)\,dx' \qquad (3.12)$$

Remark 3.
In the generating function framework, the limit $s \to 1$ corresponds to $n \to \infty$, while the limit $z \to 1$ corresponds to $k \to \infty$.

3.2.2 Symmetry

The relations between the auxiliary variables (3.1) and (3.8) translate, in the generating function representation, into a global symmetry encoded in the involution \(\phi\):
\[
\phi : (s,z) \mapsto \left(sz,\frac{1}{z}\right), \qquad \phi^2 = I
\]
Let \(\tilde q(s,z;x)\) and \(\tilde r(s,z;x)\) be the generating functions of the auxiliary variables; then
\[
\tilde r(s,z;x) = (\tilde q \circ \phi)(s,z;x) = \tilde q\!\left(sz,\frac{1}{z};x\right)
\]
and, exploiting the involution property \(\phi \circ \phi = I\):
\[
\tilde q(s,z;x) = (\tilde q \circ I)(s,z;x) = (\tilde q \circ \phi \circ \phi)(s,z;x) = (\tilde r \circ \phi)(s,z;x) = \tilde r\!\left(sz,\frac{1}{z};x\right)
\]


As a consequence, we can work with only one of the auxiliary variables and automatically extract the other by symmetry.

Remark 4.

The same symmetry holds for the gaps’ auxiliary variables ˜Q, ˜R.

3.3 Final remarks

The analysis clearly hinges on the auxiliary variables, which are defined by the Wiener-Hopf integrals (3.4), (3.5) and (3.10). These integral equations are hard to solve for arbitrary jump distributions \(f(x)\); however, for the whole class of Gamma distributions they can be reduced to recurrent differential equations.

In the next chapter we report the analysis of [15] for an exponential jump distribution. In the fifth and sixth chapters we present the original results we obtained for a first-order and a general-order Gamma distribution.


Chapter 4

Exponential distribution

We consider now an exponential (or zeroth-order Gamma) jump distribution:
\[
f_0(x) = \frac{1}{2b}\,e^{-|x|/b}
\tag{4.1}
\]
with
\[
\langle x\rangle = 0, \qquad \sigma^2 = \langle x^2\rangle = 2b^2
\]

Figure 4.1: Exponential distribution (4.1) with b = 1.

Computing the second derivative, a useful relation can be isolated for this distribution [15]:
\[
b^2 f_0''(x) - f_0(x) = -\delta(x)
\tag{4.2}
\]


4.1 Mean value of the k-th gap

Relation (4.2) is very helpful for reducing the Wiener-Hopf integrals (3.4), (3.5) to two differential equations. Indeed, differentiating (3.4) twice with respect to x and using relation (4.2), one finds:
\[
q_{k,n}''(x) = \int_0^{\infty} q_{k,n-1}(x')\,f_0''(x'-x)\,dx' + \int_0^{\infty} r_{k-1,n-1}(x')\,f_0''(x'+x)\,dx'
= \frac{1}{b^2}\left[q_{k,n}(x) - q_{k,n-1}(x)\right]
\]
so we are left with the recurrent differential equation:
\[
b^2\,\frac{d^2}{dx^2}\,q_{k,n}(x) = q_{k,n}(x) - q_{k,n-1}(x)
\tag{4.3}
\]
The recursion can be removed using the double generating function \(\tilde q(s,z;x) = \sum_{n=0}^{\infty}\sum_{k=0}^{n} s^n z^k q_{k,n}(x)\), hence:
\[
b^2\,\frac{\partial^2}{\partial x^2}\,\tilde q(s,z;x) = (1-s)\,\tilde q(s,z;x) - 1
\]

which is a linear differential equation with solution:
\[
\tilde q(s,z;x) = a(s,z)\,e^{-\sqrt{1-s}\,x/b} + \frac{1}{1-s}
\tag{4.4}
\]
By symmetry:
\[
\tilde r(s,z;x) = a'(s,z)\,e^{-\sqrt{1-sz}\,x/b} + \frac{1}{1-sz}, \qquad a'(s,z) = a\!\left(sz,\frac{1}{z}\right)
\tag{4.5}
\]
To evaluate the amplitude \(a(s,z)\), we combine the first equation of (3.11) with (4.1) and (4.4) and compute the resulting integrals; the x-dependent terms then cancel,


where we made use of results from (A). Finally we recover the equation:

\[
\frac{a}{\sqrt{1-s}-1} + \frac{a'z}{\sqrt{1-sz}+1} + \frac{z}{1-sz} - \frac{1}{1-s} = 0
\tag{4.6}
\]
Applying the symmetry to (4.6) we obtain a second equation, and hence a closed system:
\[
\begin{cases}
\dfrac{a}{\sqrt{1-s}-1} + \dfrac{a'z}{\sqrt{1-sz}+1} + \dfrac{z}{1-sz} - \dfrac{1}{1-s} = 0\\[2mm]
\dfrac{a}{\sqrt{1-s}+1} + \dfrac{a'z}{\sqrt{1-sz}-1} - \dfrac{z}{1-sz} + \dfrac{1}{1-s} = 0
\end{cases}
\]
which has solution:

\[
a(s,z) = \frac{1}{\sqrt{(1-s)(1-sz)}} - \frac{1}{1-s}, \qquad
a'(s,z) = \frac{1}{\sqrt{(1-s)(1-sz)}} - \frac{1}{1-sz} \equiv a\!\left(sz,\frac{1}{z}\right)
\]
so finally:
\[
\tilde q(s,z;x) = \frac{1}{1-s} + \left(\frac{1}{\sqrt{(1-s)(1-sz)}} - \frac{1}{1-s}\right)e^{-\sqrt{1-s}\,x/b}, \qquad
\tilde r(s,z;x) = \tilde q\!\left(sz,\frac{1}{z};x\right)
\]
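As a sanity check of my own (not part of the original derivation), the amplitudes can be verified to solve the system obtained from (4.6) and its symmetric partner, by high-precision evaluation at a generic interior point:

```python
import sympy as sp

s, z = sp.symbols('s z', positive=True)
# Amplitudes claimed to solve the closed system from (4.6).
a = 1 / sp.sqrt((1 - s) * (1 - s * z)) - 1 / (1 - s)
ap = 1 / sp.sqrt((1 - s) * (1 - s * z)) - 1 / (1 - s * z)

eq1 = (a / (sp.sqrt(1 - s) - 1) + ap * z / (sp.sqrt(1 - s * z) + 1)
       + z / (1 - s * z) - 1 / (1 - s))
eq2 = (a / (sp.sqrt(1 - s) + 1) + ap * z / (sp.sqrt(1 - s * z) - 1)
       - z / (1 - s * z) + 1 / (1 - s))

# Evaluate the residues at a generic point 0 < s, z < 1 with 40 digits.
vals = {s: sp.Rational(1, 3), z: sp.Rational(2, 5)}
assert abs(sp.N(eq1.subs(vals), 40)) < 1e-30
assert abs(sp.N(eq2.subs(vals), 40)) < 1e-30
```

The numerical residues vanish to working precision, consistent with the amplitudes being an exact solution.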

Now that the generating functions of the auxiliary quantities are known, we can compute the first moment of the k-th maximum distribution (3.2). It is convenient to evaluate the integrals in the generating function representation:
\[
\int_0^{\infty} x\,\frac{\partial}{\partial x}\tilde q(s,z;x)\,dx
= \frac{\sigma}{\sqrt{2}}\left(\frac{1}{(1-s)^{3/2}} - \frac{1}{1-s}\,\frac{1}{\sqrt{1-sz}}\right)
= \sum_{n=0}^{\infty}\sum_{m=0}^{n} s^n z^m\,\frac{\sigma}{\sqrt{2\pi}}\left(2\delta_{m,0}\,\frac{\Gamma\!\left(n+\frac32\right)}{\Gamma(n+1)} - \frac{\Gamma\!\left(m+\frac12\right)}{\Gamma(m+1)}\right)
\]
where the Taylor expansion around s, z = 0 has been used to return to the original form. Comparing term by term:
\[
\int_0^{\infty} x\,\frac{\partial}{\partial x} q_{m,n}(x)\,dx =
\begin{cases}
\dfrac{\sigma}{\sqrt{2}}\left(\dfrac{2}{\sqrt{\pi}}\,\dfrac{\Gamma\left(n+\frac32\right)}{\Gamma(n+1)} - 1\right) & (m = 0)\\[3mm]
-\dfrac{\sigma}{\sqrt{2\pi}}\,\dfrac{\Gamma\left(m+\frac12\right)}{\Gamma(m+1)} & (m > 0)
\end{cases}
\]
Similarly, for \(r_{m,n}\), using (3.1):
\[
\int_0^{\infty} x\,\frac{\partial}{\partial x} r_{m,n}(x)\,dx =
\begin{cases}
\dfrac{\sigma}{\sqrt{2}}\left(\dfrac{2}{\sqrt{\pi}}\,\dfrac{\Gamma\left(n+\frac32\right)}{\Gamma(n+1)} - 1\right) & (m = n)\\[3mm]
-\dfrac{\sigma}{\sqrt{2\pi}}\,\dfrac{\Gamma\left(n-m+\frac12\right)}{\Gamma(n-m+1)} & (m < n)
\end{cases}
\]


On substituting into (3.2) we obtain:

\[
\langle M_{k,n}\rangle = \frac{\sigma}{\sqrt{2}}\left[\frac{2}{\sqrt{\pi}}\,\frac{\Gamma\!\left(n+\frac32\right)}{\Gamma(n+1)} - 1
- \frac{1}{\sqrt{\pi}}\sum_{m=1}^{k-1}\frac{\Gamma\!\left(m+\frac12\right)}{\Gamma(m+1)}
- \frac{1}{\sqrt{\pi}}\sum_{m=0}^{k-2}\frac{\Gamma\!\left(n-m+\frac12\right)}{\Gamma(n-m+1)}\right]
\tag{4.7}
\]

By exploiting the noteworthy relation
\[
\sum_{m=0}^{n}\frac{\Gamma\!\left(m+\frac12\right)}{\Gamma(m+1)} = 2\,\frac{\Gamma\!\left(n+\frac32\right)}{\Gamma(n+1)}
\]
we can rewrite (4.7) as:
\[
\langle M_{k,n}\rangle = \frac{\sigma}{\sqrt{2\pi}}\left[\sum_{m=0}^{n}\frac{\Gamma\!\left(m+\frac12\right)}{\Gamma(m+1)} - \sum_{m=0}^{k-1}\frac{\Gamma\!\left(m+\frac12\right)}{\Gamma(m+1)} - \sum_{m=0}^{k-2}\frac{\Gamma\!\left(n-m+\frac12\right)}{\Gamma(n-m+1)}\right]
= \frac{\sigma}{\sqrt{2\pi}}\left[\sum_{m=k}^{n}\frac{\Gamma\!\left(m+\frac12\right)}{\Gamma(m+1)} - \sum_{m=n-k+2}^{n}\frac{\Gamma\!\left(m+\frac12\right)}{\Gamma(m+1)}\right]
\tag{4.8}
\]
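The noteworthy Gamma-function identity used in this step can be checked numerically; the following standalone sketch (mine, not from the thesis) uses Python's `math.gamma`:

```python
from math import gamma, isclose

def lhs(n):
    # Partial sum of Gamma(m + 1/2) / Gamma(m + 1) for m = 0..n.
    return sum(gamma(m + 0.5) / gamma(m + 1) for m in range(n + 1))

def rhs(n):
    # Claimed closed form: 2 * Gamma(n + 3/2) / Gamma(n + 1).
    return 2 * gamma(n + 1.5) / gamma(n + 1)

for n in range(0, 40):
    assert isclose(lhs(n), rhs(n), rel_tol=1e-12)
```

The identity holds exactly; it follows by induction on n, since adding the (n+1)-th term to the closed form reproduces the closed form at n+1.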

Assuming now k < n − k + 2, i.e. k < 1 + n/2 (that is, we consider the first half of the maxima; by symmetry the second half would produce the same results), the two sums in (4.8) can be combined, so:

\[
\langle M_{k,n}\rangle = \frac{\sigma}{\sqrt{2\pi}}\sum_{m=k}^{n-k+1}\frac{\Gamma\!\left(m+\frac12\right)}{\Gamma(m+1)}
\tag{4.9}
\]
Equation (4.9) is the exact first moment of the k-th maximum distribution. In the large-n limit one finds the behaviour
\[
\langle M_{k,n}\rangle \underset{n\gg 0}{\approx} \frac{\sigma}{\sqrt{2\pi}}\,2\sqrt{n} = \sigma\sqrt{\frac{2n}{\pi}}
\quad\Rightarrow\quad \frac{\langle M_{k,n}\rangle}{\sigma} \underset{n\gg 0}{\approx} \sqrt{\frac{2n}{\pi}}
\]
and we notice that the dependence on k is lost.

From (4.9) it is straightforward to extract the mean value of the k-th gap using linearity:
\[
\langle d_{k,n}\rangle = \langle M_{k,n}\rangle - \langle M_{k+1,n}\rangle
= \frac{\sigma}{\sqrt{2\pi}}\left[\sum_{m=k}^{n-k+1}\frac{\Gamma\!\left(m+\frac12\right)}{\Gamma(m+1)} - \sum_{m=k+1}^{n-k}\frac{\Gamma\!\left(m+\frac12\right)}{\Gamma(m+1)}\right]
= \frac{\sigma}{\sqrt{2\pi}}\left[\frac{\Gamma\!\left(k+\frac12\right)}{\Gamma(k+1)} + \frac{\Gamma\!\left(n-k+\frac32\right)}{\Gamma(n-k+2)}\right]
\]

In the large-n limit, given that
\[
\frac{\Gamma(n+a)}{\Gamma(n+b)} \underset{n\to\infty}{\approx} n^{a-b}
\tag{4.10}
\]

we have:
\[
\lim_{n\to\infty}\frac{\langle d_{k,n}\rangle}{\sigma} = \lim_{n\to\infty}\frac{1}{\sqrt{2\pi}}\left[\frac{\Gamma\!\left(k+\frac12\right)}{\Gamma(k+1)} + \frac{1}{\sqrt{n}}\right] = \frac{1}{\sqrt{2\pi}}\,\frac{\Gamma\!\left(k+\frac12\right)}{\Gamma(k+1)}
\tag{4.11}
\]
In the large-k limit, using (4.10):

\[
\frac{\langle d_{k,\infty}\rangle}{\sigma} \underset{k\gg 0}{\approx} \frac{1}{\sqrt{2\pi k}} + O\!\left(k^{-1}\right)
\tag{4.12}
\]
and we notice that the mean value becomes stationary, i.e. it no longer depends on n. Using the Pollaczek-Wendel identity, this result can be shown to be universal, i.e. to hold for arbitrary symmetric and continuous jump distributions \(f(x)\) [15]:
\[
\lim_{n\to\infty}\langle d_{k,n}\rangle = \frac{\sigma}{\sqrt{2\pi}}\,\frac{\Gamma\!\left(k+\frac12\right)}{\Gamma(k+1)} - \frac{1}{\pi k}\int_0^{\infty}\frac{dq}{q^2}\left[\hat f_0(q)^k - \frac{1}{\left(1+\frac{\sigma^2}{2}q^2\right)^k}\right]
\tag{4.13}
\]
where \(\hat f_0(q)\) is the Fourier transform of the jump distribution. For large values of k one finds again (4.12).

4.2 Probability distribution of the k-th gap

We are now interested in evaluating the asymptotic behavior of the full k-th gap distribution. As already stated, the gap auxiliary variables \((Q_{k,n}, R_{k,n})\) share the same properties as the maxima ones \((q_{k,n}, r_{k,n})\), so as before we find the recurrent differential equation:
\[
b^2\,\frac{\partial^2}{\partial x^2}Q_{k,n}(x,\delta) = Q_{k,n}(x,\delta) - Q_{k,n-1}(x,\delta)
\]

Switching to the generating function representation:
\[
\tilde Q(s,z;x,\delta) = A(s,z;\delta)\,e^{-\sqrt{1-s}\,x/b} + \frac{1}{1-s}
\]
Injecting this expression into (3.12) and using the symmetry, we get a linear system for the coefficients A and \(B \equiv A\!\left(sz,\frac{1}{z};\delta\right)\):
\[
\begin{cases}
\dfrac{1}{1-s} + \dfrac{A}{1-\sqrt{1-s}} - \dfrac{z e^{-\delta/b}}{1-sz} - \dfrac{B z e^{-\delta/b}}{1+\sqrt{1-sz}} = 0\\[3mm]
\dfrac{z}{1-sz} + \dfrac{zB}{1-\sqrt{1-sz}} - \dfrac{e^{-\delta/b}}{1-s} - \dfrac{A e^{-\delta/b}}{1+\sqrt{1-s}} = 0
\end{cases}
\tag{4.14}
\]

which has solutions (\(\gamma \equiv \delta/b\)):
\[
A(s,z;\gamma) = \frac{\dfrac{sz}{\sqrt{1-sz}} - \dfrac{s\sqrt{1-sz}}{1-s}\cosh\gamma - \dfrac{s}{1-s}\sinh\gamma}{\sqrt{1-s}+\sqrt{1-sz}\,\cosh\gamma + \left(1+\sqrt{(1-sz)(1-s)}\right)\sinh\gamma}, \qquad
B(s,z;\gamma) = A\!\left(sz,\frac{1}{z};\gamma\right)
\tag{4.15}
\]


Now that the generating functions of the auxiliary variables are known, we can compute the generating function of the gap distribution using (3.9):

\[
\tilde P(s,z;\gamma) = -\frac{1}{b}\int_0^{+\infty}\!dx\int_x^{+\infty}\!dy\;\delta\!\left(\gamma-\frac{y-x}{b}\right)\frac{\partial^2}{\partial x\,\partial y}\tilde Q(s,z;x,\gamma)
- \frac{z}{b}\int_{-\infty}^{0}\!dx\int_x^{0}\!dy\;\delta\!\left(\gamma-\frac{y-x}{b}\right)\frac{\partial^2}{\partial x\,\partial y}\tilde R(s,z;-y,\gamma)
\tag{4.16}
\]

To this end, let us start by evaluating the double derivatives of the auxiliary variables (with \(\gamma = (y-x)/b\)):
\[
\frac{\partial^2}{\partial x\,\partial y}\tilde Q(s,z;x,\gamma) = -\frac{e^{-\lambda x}}{b^2}\left(\frac{\partial^2}{\partial\gamma^2} + \lambda b\,\frac{\partial}{\partial\gamma}\right)A(s,z;\gamma)
\]
\[
\frac{\partial^2}{\partial x\,\partial y}\tilde R(s,z;-y,\gamma) = -\frac{e^{\eta y}}{b^2}\left(\frac{\partial^2}{\partial\gamma^2} + \eta b\,\frac{\partial}{\partial\gamma}\right)B(s,z;\gamma)
\tag{4.17}
\]
where for brevity:
\[
\lambda = \frac{\sqrt{1-s}}{b}, \qquad \eta = \frac{\sqrt{1-sz}}{b}
\]

Injecting (4.17) into (4.16) and computing the integrals, one finally obtains:
\[
\tilde P(s,z;\gamma) = \frac{1}{\lambda b^2}\frac{\partial^2 A}{\partial\gamma^2}(s,z;\gamma) + \frac{1}{b}\frac{\partial A}{\partial\gamma}(s,z;\gamma)
+ \frac{z}{\eta b^2}\frac{\partial^2 B}{\partial\gamma^2}(s,z;\gamma) + \frac{z}{b}\frac{\partial B}{\partial\gamma}(s,z;\gamma)
\tag{4.18}
\]

On substituting the values obtained in (4.15) for the coefficients and taking the limit s → 1, we have:
\[
\tilde p(z;\gamma) = \sum_{k=1}^{\infty} z^k p_k(\gamma) = \frac{8z}{b}\,e^{-2\gamma}\,\frac{u(z) - v(z)\,e^{-2\gamma}}{\left[u(z) + v(z)\,e^{-2\gamma}\right]^3}
\tag{4.19}
\]

where we introduced the functions \(u(z) = \sqrt{1-z}+1\) and \(v(z) = \sqrt{1-z}-1\). The next step is to extract the \(p_k(\gamma)\) from the generating function. This is not trivial, but using scaling arguments we can extract the asymptotic behavior in the large-k limit (z → 1) for typical and large fluctuations.

1. Typical fluctuations δ, γ ∼ 1/√k

Here we consider fluctuations around the mean value. To do so, we guess a scaling form for \(p_k(\delta)\) in which \(\sqrt{k}\,\delta\) is held fixed:
\[
p_k(\delta) = \frac{\sqrt{k}}{\sigma}\,P\!\left(\frac{\sqrt{k}\,\delta}{\sigma}\right)
\tag{4.20}
\]
Looking at its generating function:
\[
\tilde p(z;\delta) = \frac{1}{\sigma}\sum_{k=1}^{\infty} z^k\,\sqrt{k}\,P\!\left(\frac{\sqrt{k}\,\delta}{\sigma}\right)
\]


Defining \(z = e^{-t}\), the limit z → 1 corresponds to t → 0, hence:
\[
\tilde p(z;\delta) = \frac{1}{\sigma}\sum_{k=1}^{\infty} e^{-kt}\sqrt{k}\,P\!\left(\frac{\sqrt{k}\,\delta}{\sigma}\right)
\approx \frac{1}{\sigma}\int_1^{\infty} e^{-kt}\sqrt{k}\,P\!\left(\frac{\sqrt{k}\,\delta}{\sigma}\right)dk
\]
Making the change of variable \(x = k\,\delta^2/\sigma^2\) and using \(\delta \to 0\) (so that the lower limit goes to zero):
\[
\tilde p(z;\delta) \approx \frac{\sigma^2}{\delta^3}\int_0^{\infty} e^{-\frac{\sigma^2 t}{\delta^2}x}\,\sqrt{x}\,P(\sqrt{x})\,dx
= \frac{\sigma^2}{\delta^3}\,F\!\left(\frac{\sigma^2 t}{\delta^2}\right)
\tag{4.21}
\]
We therefore expect this scaling form for the generating function of (4.20). To prove or disprove it, we look at the asymptotic behaviour of (4.19) for δ, t → 0 (z → 1) with fixed ratio t/δ²:

\[
\frac{8z}{b}\,e^{-2\gamma}\,\frac{u(z)-v(z)e^{-2\gamma}}{\left[u(z)+v(z)e^{-2\gamma}\right]^3}
\approx \frac{16}{b}\,\frac{1}{\left(2\sqrt{1-z}+\frac{2\delta}{b}\right)^3}
\]
Since \(e^{-t}\approx 1-t\) for t → 0, we have t ≈ 1 − z, so:
\[
\frac{16}{b}\,\frac{1}{\left(2\sqrt{t}+\frac{2\delta}{b}\right)^3}
= \frac{2b^2}{\delta^3}\,\frac{1}{\left(1+\frac{b\sqrt{t}}{\delta}\right)^3}
= \frac{\sigma^2}{\delta^3}\,\frac{1}{\left(1+\sqrt{\frac{\sigma^2 t}{2\delta^2}}\right)^3}
\]
where the substitution b = σ/√2 has been used. Comparing with (4.21), the scaling function turns out to be:

\[
F(\lambda) = \left(1+\sqrt{\frac{\lambda}{2}}\right)^{-3}, \qquad \lambda = \frac{\sigma^2 t}{\delta^2}
\]
and from (4.21) it can be interpreted as the Laplace transform of \(\sqrt{x}\,P(\sqrt{x})\):
\[
\mathcal{L}_{x\to\lambda}\!\left[\sqrt{x}\,P(\sqrt{x})\right] := \int_0^{\infty} e^{-x\lambda}\,\sqrt{x}\,P(\sqrt{x})\,dx = \left(1+\sqrt{\frac{\lambda}{2}}\right)^{-3}
\]
The inversion of this Laplace transform is not trivial; however, expressing it as
\[
\left(1+\sqrt{\frac{\lambda}{2}}\right)^{-3} = \frac{1}{2}\int_0^{\infty} y^2\,e^{-y}\,e^{-y\sqrt{\lambda/2}}\,dy
\]
one has:
\[
\sqrt{x}\,P(\sqrt{x}) = \mathcal{L}^{-1}_{\lambda\to x}\!\left[\frac{1}{2}\int_0^{\infty} y^2 e^{-y} e^{-y\sqrt{\lambda/2}}\,dy\right]
= \frac{1}{2}\int_0^{\infty} y^2\,e^{-y}\;\mathcal{L}^{-1}_{\lambda\to x}\!\left[e^{-y\sqrt{\lambda/2}}\right]dy
\tag{4.22}
\]


The problem is thus reduced to evaluating the inverse Laplace transform of the function \(e^{-y\sqrt{\lambda/2}}\). Using the Bromwich formula (C):
\[
\mathcal{L}^{-1}_{\lambda\to x}\!\left[e^{-y\sqrt{\lambda/2}}\right] = \frac{y}{2\sqrt{2\pi}\,x^{3/2}}\,e^{-\frac{y^2}{8x}}
\tag{4.23}
\]

Hence, matching (4.22) with (4.23):
\[
P(\sqrt{x}) = \frac{1}{4\sqrt{2\pi}\,x^2}\int_0^{\infty} y^3\,e^{-\left(y+\frac{y^2}{8x}\right)}dy
= \frac{4e^{2x}}{\sqrt{2\pi}\,x^2}\int_{2x}^{\infty}(\tilde y-2x)^3\,e^{-\frac{\tilde y^2}{2x}}\,d\tilde y
= 4\left[\frac{2}{\sqrt{2\pi}}(1+2x) - e^{2x}\sqrt{x}\,(3+4x)\,\mathrm{erfc}\sqrt{2x}\right]
\]
Setting \(t := \sqrt{x}\):
\[
P(t) = 4\left[\frac{2}{\sqrt{2\pi}}\left(1+2t^2\right) - e^{2t^2}\,t\left(3+4t^2\right)\mathrm{erfc}\!\left(\sqrt{2}\,t\right)\right]
\tag{4.24}
\]

Figure 4.2: Scaling function for typical fluctuations (4.24).

This scaling function has the asymptotic behaviour:
\[
P(t) \approx
\begin{cases}
\sqrt{\dfrac{32}{\pi}} & t \to 0\\[3mm]
\dfrac{3}{\sqrt{8\pi}}\,t^{-4} & t \to \infty
\end{cases}
\tag{4.25}
\]
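Both limits in (4.25) can be cross-checked against the closed form (4.24) with Python's `math.erfc`; this is an illustrative sketch of mine, not part of the thesis:

```python
from math import erfc, exp, pi, sqrt

def P(t):
    # Scaling function (4.24) for typical gap fluctuations.
    return 4 * (2 / sqrt(2 * pi) * (1 + 2 * t**2)
                - exp(2 * t**2) * t * (3 + 4 * t**2) * erfc(sqrt(2) * t))

# Small-t limit: P(0) = sqrt(32 / pi)   (first line of (4.25))
assert abs(P(0.0) - sqrt(32 / pi)) < 1e-12

# Large-t tail: P(t) ~ 3 / sqrt(8*pi) * t**-4   (second line of (4.25));
# corrections to the tail decay like O(1/t**2), hence the loose tolerance.
assert abs(P(10.0) * 10.0**4 / (3 / sqrt(8 * pi)) - 1) < 0.05
```

Note the large-t evaluation involves a delicate cancellation between two large terms, so very large t would eventually lose the tail to floating-point round-off.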


2. Large fluctuations δ, γ ≫ 0

Numerical results suggest that, in the large-k, large-δ limit, \(p_k(\delta)\) scales as:
\[
p_k(\delta) \approx \varphi_0(\delta)\,k^{-3/2}
\tag{4.26}
\]

Switching to the generating function representation, the large-k limit corresponds to the z → 1 limit. Hence, expanding \(\tilde p(z;\delta)\) around z = 1:
\[
\tilde p(z;\delta) = \sum_{k=1}^{\infty} z^k p_k(\delta) \underset{z\to 1}{\approx} \tilde p_1(\delta) + \tilde p_2(\delta)\sqrt{1-z}
\tag{4.27}
\]
In this limit, \(p_k(\delta)\) is related to the coefficient of \(\sqrt{1-z}\) in the expansion; indeed
\[
\tilde p_2(\delta)\sqrt{1-z} = -\frac{\tilde p_2(\delta)}{2\sqrt{\pi}}\sum_{k=0}^{\infty}\frac{\Gamma\!\left(k-\frac12\right)}{\Gamma(k+1)}z^k
= \tilde p_2(\delta) - \frac{\tilde p_2(\delta)}{2\sqrt{\pi}}\sum_{k=1}^{\infty}\frac{\Gamma\!\left(k-\frac12\right)}{\Gamma(k+1)}z^k
\]
so:
\[
p_k(\delta) \approx -\frac{\tilde p_2(\delta)}{2\sqrt{\pi}}\,\frac{\Gamma\!\left(k-\frac12\right)}{\Gamma(k+1)} \approx -\frac{\tilde p_2(\delta)}{2\sqrt{\pi}}\,k^{-3/2}
\]

Comparing with (4.26), \(\varphi_0(\delta)\) turns out to be:
\[
\varphi_0(\delta) = -\frac{\tilde p_2(\delta)}{2\sqrt{\pi}}
\tag{4.28}
\]
Finally, expanding (4.19) we find:
\[
\tilde p_2(\delta) = -\frac{16}{b}\,e^{2\delta/b}\,\frac{e^{4\delta/b}+4e^{2\delta/b}+1}{\left(1-e^{2\delta/b}\right)^4}
\]
hence, by virtue of (4.28):
\[
\varphi_0(\gamma) = \frac{8}{b\sqrt{\pi}}\,e^{2\gamma}\,\frac{e^{4\gamma}+4e^{2\gamma}+1}{\left(1-e^{2\gamma}\right)^4}, \qquad \gamma = \frac{\delta}{b}
\]
with asymptotic behaviour:
\[
\varphi_0(\gamma) \approx
\begin{cases}
\dfrac{3}{b\sqrt{\pi}}\,\gamma^{-4} & \gamma \to 0\\[3mm]
\dfrac{8}{b\sqrt{\pi}}\,e^{-2\gamma} & \gamma \to \infty
\end{cases}
\tag{4.29}
\]
Collecting (4.20) and (4.26):
\[
p_k(\gamma) \underset{k\gg 0}{\approx}
\begin{cases}
\dfrac{1}{b}\sqrt{\dfrac{k}{2}}\,P\!\left(\sqrt{\dfrac{k}{2}}\,\gamma\right) & \gamma \sim 1/\sqrt{k}\\[3mm]
\varphi_0(\gamma)\,k^{-3/2} & \gamma \gg 0
\end{cases}
\]
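A quick numerical sketch of mine confirming both limits of (4.29) for the large-fluctuation scaling function, here with b = 1:

```python
from math import exp, pi, sqrt

def phi0(g, b=1.0):
    # Large-fluctuation scaling function for the exponential jump distribution.
    e2 = exp(2 * g)
    return 8 / (b * sqrt(pi)) * e2 * (e2**2 + 4 * e2 + 1) / (1 - e2) ** 4

# gamma -> 0 limit: phi0 ~ 3 / (b * sqrt(pi)) * gamma**-4
g = 1e-3
assert abs(phi0(g) * g**4 / (3 / sqrt(pi)) - 1) < 1e-3

# gamma -> infinity limit: phi0 ~ 8 / (b * sqrt(pi)) * exp(-2 * gamma)
g = 15.0
assert abs(phi0(g) / (8 / sqrt(pi) * exp(-2 * g)) - 1) < 1e-6
```

The small-γ limit follows from \(e^{2\gamma}(e^{4\gamma}+4e^{2\gamma}+1)\to 6\) and \((1-e^{2\gamma})^4\approx 16\gamma^4\).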


The results are consistent, since the asymptotic behaviours of the two regimes match as they approach each other:
\[
\frac{1}{b}\sqrt{\frac{k}{2}}\,P\!\left(\sqrt{\frac{k}{2}}\,\gamma\right) \xrightarrow{\;\gamma\gg 0\;}
\frac{1}{b}\sqrt{\frac{k}{2}}\,\frac{3}{\sqrt{8\pi}}\,\frac{4}{k^2\gamma^4} = \frac{3}{b\sqrt{\pi}}\,\gamma^{-4}k^{-3/2}, \qquad
\varphi_0(\gamma)\,k^{-3/2} \xrightarrow{\;\gamma\sim 0\;} \frac{3}{b\sqrt{\pi}}\,\gamma^{-4}k^{-3/2}
\]

This proves that the chosen scaling forms are correct.

4.3 Alternative approaches

The analysis we used to study the k-th gap statistics for an exponential jump distribution is no longer feasible for higher-order Gamma distributions, since it becomes computationally prohibitive. In those cases we must follow alternative routes to extract the asymptotic behavior of the needed quantities. These alternative routes must first be tested for consistency; in this section we check them against the exponential jump distribution. If the final results match the ones already obtained, then these approaches are trustworthy and can be used in the next chapters.

4.3.1 Generating function of the k-th gap mean value

Instead of passing through the computation of the k-th maximum mean value (3.2), we start from the generating function of the k-th gap (3.3):
\[
\left\langle \tilde d(s,z)\right\rangle = -\int_0^{\infty} x\,\frac{\partial}{\partial x}\sum_{n=0}^{\infty}\sum_{k=1}^{n} s^n z^k\left[q_{k,n}(x) + r_{k-1,n}(x)\right]dx
\tag{4.30}
\]

To obtain an expression in terms of generating functions, we need to work on the two addends on the right-hand side.

1. Starting with the first addend:
\[
\sum_{n=0}^{\infty}\sum_{k=1}^{n} s^n z^k q_{k,n}(x) = \sum_{n=0}^{\infty}\left[\sum_{k=0}^{n} s^n z^k q_{k,n}(x) - q_{0,n}(x)\,s^n\right] = \tilde q(s,z;x) - \sum_{n=0}^{\infty} q_{0,n}(x)\,s^n
\]
The last term can be rewritten as:
\[
\sum_{n=0}^{\infty} q_{0,n}\,s^n = \sum_{n=0}^{\infty} r_{n,n}\,s^n = \sum_{n=0}^{\infty} r_n\,s^n = \hat r(s;x)
\tag{4.31}
\]

A differential equation for \(\hat r\) is provided by (4.3), setting k = n and summing over n:
\[
b^2\frac{d^2}{dx^2}\sum_{n=1}^{\infty} s^n r_n(x) = \sum_{n=1}^{\infty} s^n r_n(x) - \sum_{n=1}^{\infty} s^n r_{n-1}(x)
\]


This produces the familiar differential equation
\[
b^2\,\frac{\partial^2}{\partial x^2}\hat r(s;x) = (1-s)\,\hat r(s;x) - 1
\]
with solution:
\[
\hat r(s;x) = c(s)\,e^{-\sqrt{1-s}\,x/b} + \frac{1}{1-s}
\tag{4.32}
\]
To evaluate the coefficient, we use (3.5) with k = n,
\[
r_n(x) = \int_0^{\infty} r_{n-1}(x')\,f(x'-x)\,dx'
\]
and sum over n:
\[
\hat r(s;x) = 1 + s\int_0^{\infty}\hat r(s;x')\,f(x'-x)\,dx'
\tag{4.33}
\]
Hence, injecting (4.32) and (4.1) into (4.33):
\[
c(s) = \frac{\sqrt{1-s}-1}{1-s} = \frac{1}{\sqrt{1-s}} - \frac{1}{1-s}
\]
The first addend on the R.H.S. of (4.30) is now evaluated.

2. The second addend reads:
\[
\sum_{n=0}^{\infty}\sum_{k=1}^{n} s^n z^k r_{k-1,n} = z\sum_{n=0}^{\infty}\sum_{k=0}^{n-1} s^n z^k r_{k,n}
= z\sum_{n=0}^{\infty} s^n\left[\sum_{k=0}^{n} z^k r_{k,n} - z^n r_n\right]
= z\,\tilde r(s,z;x) - z\sum_{n=0}^{\infty}(sz)^n r_n
\]
The last sum,
\[
\sum_{n=0}^{\infty}(sz)^n r_n = \hat r(sz;x),
\]
coincides with (4.31) with s ↦ sz, so:
\[
\hat r(sz;x) = c(sz)\,e^{-\sqrt{1-sz}\,x/b} + \frac{1}{1-sz}, \qquad c(sz) = \frac{1}{\sqrt{1-sz}} - \frac{1}{1-sz}
\]
Finally, putting everything together into (4.30):

\[
\left\langle\tilde d(s,z)\right\rangle = \int_0^{\infty} x\,\frac{\partial}{\partial x}\left[\hat r(s;x) - \tilde q(s,z;x) + z\,\hat r(sz;x) - z\,\tilde r(s,z;x)\right]dx
\tag{4.34}
\]
The integral reads:
\[
\left\langle\tilde d(s,z)\right\rangle = \frac{\sigma}{\sqrt{2}}\left[\frac{a-c(s)}{\sqrt{1-s}} + z\,\frac{a'-c(sz)}{\sqrt{1-sz}}\right]
= \frac{\sigma}{\sqrt{2}}\left[\frac{1}{(1-s)\sqrt{1-sz}} + \frac{z}{\sqrt{1-s}\,(1-sz)} - \frac{1}{1-s} - \frac{z}{1-sz}\right]
\]


In the s → 1 limit:
\[
\left\langle\tilde d(s,z)\right\rangle \underset{s\to 1}{\approx} \frac{\sigma}{\sqrt{2}}\left(\frac{1}{\sqrt{1-z}}-1\right)\frac{1}{1-s} = \frac{1}{1-s}\sum_{k=1}^{\infty} z^k\,\langle d_{k,\infty}\rangle
\tag{4.35}
\]
where \(\langle d_{k,\infty}\rangle = \lim_{n\to\infty}\langle d_{k,n}\rangle\). Using a Taylor expansion we can easily extract \(\langle d_{k,\infty}\rangle\) from (4.35):
\[
\frac{\sigma}{\sqrt{2}}\left(\frac{1}{\sqrt{1-z}}-1\right) = \frac{\sigma}{\sqrt{2}}\left(\frac{1}{\sqrt{\pi}}\sum_{k=0}^{\infty}\frac{\Gamma\!\left(k+\frac12\right)}{\Gamma(k+1)}z^k - 1\right) = \frac{\sigma}{\sqrt{2\pi}}\sum_{k=1}^{\infty}\frac{\Gamma\!\left(k+\frac12\right)}{\Gamma(k+1)}z^k
\]
hence, comparing:
\[
\langle d_{k,\infty}\rangle = \frac{\sigma}{\sqrt{2\pi}}\,\frac{\Gamma\!\left(k+\frac12\right)}{\Gamma(k+1)}
\]
This is the same result obtained in (4.11), so taking the large-k limit we recover (4.12). This approach is therefore consistent, and it is simpler since it does not require computing the k-th maximum mean value.
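This generating-function extraction can be replayed with a computer algebra system; the following sketch (mine, not from the thesis) expands (4.35) with sympy and compares with the closed form:

```python
import sympy as sp

z = sp.symbols('z')
sigma = sp.Symbol('sigma', positive=True)

# Generating function of the stationary gap means, eq. (4.35):
# sum_k z**k <d_{k,inf}> = sigma/sqrt(2) * (1/sqrt(1-z) - 1)
gen = sigma / sp.sqrt(2) * (1 / sp.sqrt(1 - z) - 1)

# Maclaurin coefficients should reproduce
# <d_{k,inf}> = sigma / sqrt(2*pi) * Gamma(k + 1/2) / Gamma(k + 1).
expansion = sp.series(gen, z, 0, 5).removeO()
for k in range(1, 5):
    coeff = expansion.coeff(z, k)
    exact = (sigma / sp.sqrt(2 * sp.pi)
             * sp.gamma(k + sp.Rational(1, 2)) / sp.gamma(k + 1))
    assert sp.simplify(coeff - exact) == 0
```

For k = 1, for instance, both sides equal \(\sigma/(2\sqrt{2})\).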

4.3.2 Asymptotic linear system

To compute the full k-th gap distribution, we here take the s → 1 limit before solving the linear systems. The easiest way to proceed is to expand the coefficients A, B up to second order:
\[
A \underset{s\to 1}{\approx} \frac{A^{(1)}}{1-s} + \frac{A^{(2)}}{\sqrt{1-s}}, \qquad
B \underset{s\to 1}{\approx} \frac{B^{(1)}}{1-s} + \frac{B^{(2)}}{\sqrt{1-s}}
\]
Substituting into (4.14) (with, as usual, γ = δ/b) and expanding around s = 1 up to second order:
\[
\begin{cases}
\left[A^{(1)} + 1 - B^{(1)}e^{-\gamma}\!\left(1-\sqrt{1-z}\right)\right]\dfrac{1}{1-s} - \left[A^{(1)} + A^{(2)} - B^{(2)}e^{-\gamma}\!\left(1-\sqrt{1-z}\right)\right]\dfrac{1}{\sqrt{1-s}} = 0\\[3mm]
\left[B^{(1)}\!\left(1+\sqrt{1-z}\right) - \left(A^{(1)}+1\right)e^{-\gamma}\right]\dfrac{1}{1-s} - \left[e^{-\gamma}\!\left(A^{(1)}-A^{(2)}\right) + B^{(2)}\!\left(1+\sqrt{1-z}\right)\right]\dfrac{1}{\sqrt{1-s}} = 0
\end{cases}
\]


Since all the terms are independent, we finally get a linear system of four equations in the four variables \(A^{(1)}, A^{(2)}, B^{(1)}, B^{(2)}\):
\[
\begin{cases}
A^{(1)} + 1 - B^{(1)}e^{-\gamma}\left(1-\sqrt{1-z}\right) = 0\\
A^{(1)} + A^{(2)} - B^{(2)}e^{-\gamma}\left(1-\sqrt{1-z}\right) = 0\\
B^{(1)}\left(1+\sqrt{1-z}\right) - \left(A^{(1)}+1\right)e^{-\gamma} = 0\\
e^{-\gamma}\left(A^{(1)}-A^{(2)}\right) + B^{(2)}\left(1+\sqrt{1-z}\right) = 0
\end{cases}
\]
which has solution:

\[
A^{(1)} = -1, \qquad A^{(2)} = \frac{\cosh\gamma + \sqrt{1-z}\,\sinh\gamma}{\sqrt{1-z}\,\cosh\gamma + \sinh\gamma}, \qquad
B^{(1)} = 0, \qquad B^{(2)} = \frac{1}{\sqrt{1-z}\,\cosh\gamma + \sinh\gamma}
\tag{4.37}
\]
On substituting (4.37) into (4.18) we get:

after simplification,
\[
\frac{8z}{b}\,e^{-2\gamma}\,\frac{u(z)-v(z)\,e^{-2\gamma}}{\left[u(z)+v(z)\,e^{-2\gamma}\right]^3}\,\frac{1}{1-s}
\]
i.e. the generating function (4.19) multiplied by 1/(1 − s), confirming the consistency of this asymptotic approach.


Chapter 5

First order Gamma distribution

We consider now a first order Gamma distribution:
\[
f_1(x) = \frac{|x|}{2b^2}\,e^{-|x|/b}
\tag{5.1}
\]
with
\[
\langle x\rangle = 0, \qquad \sigma^2 = \langle x^2\rangle = 6b^2
\]

Figure 5.1: First order Gamma distribution (5.1) with b = 1.

It can be shown that a relation analogous to (4.2) holds:
\[
b^4\frac{d^4}{dx^4}f_1(x) - 2b^2\frac{d^2}{dx^2}f_1(x) + f_1(x) = \left(1+b^2\frac{d^2}{dx^2}\right)\delta(x)
\tag{5.2}
\]


5.1 Mean value of the k-th gap

Exploiting relation (5.2), the Wiener-Hopf integral (3.4) can be reduced to:
\[
b^4\frac{\partial^4}{\partial x^4}q_{k,n}(x) - 2b^2\frac{\partial^2}{\partial x^2}q_{k,n}(x) + q_{k,n}(x) = b^2\frac{\partial^2}{\partial x^2}q_{k,n-1}(x) + q_{k,n-1}(x)
\]

Switching to the generating function representation:
\[
b^4\frac{\partial^4}{\partial x^4}\tilde q(s,z;x) - b^2(s+2)\frac{\partial^2}{\partial x^2}\tilde q(s,z;x) + (1-s)\,\tilde q(s,z;x) = 1
\]

This linear differential equation is easily solvable. Performing the change of variable
\[
\tilde q = g + \frac{1}{1-s}
\]
we recover a homogeneous equation for g:
\[
b^4\frac{\partial^4 g}{\partial x^4} - b^2(s+2)\frac{\partial^2 g}{\partial x^2} + (1-s)\,g = 0
\]

The solution is a superposition of exponentials, so setting \(g(s,z;x) = A(s,z)\,e^{-\lambda x}\) we obtain the characteristic equation:
\[
b^4\lambda^4 - b^2(s+2)\lambda^2 + 1 - s = 0
\]
which has the four real solutions (0 < s < 1):
\[
\lambda_{1,3} = \pm\sqrt{\frac{s+2-\sqrt{s(s+8)}}{2b^2}}, \qquad \lambda_{2,4} = \pm\sqrt{\frac{s+2+\sqrt{s(s+8)}}{2b^2}}
\]
Remark 5.

In the range 0 < s < 1 both the radicands are always positive, so all the solutions are real.
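A small symbolic check of mine (using sympy) that the squared roots indeed satisfy the characteristic equation:

```python
import sympy as sp

s, b = sp.symbols('s b', positive=True)
lam1_sq = (s + 2 - sp.sqrt(s * (s + 8))) / (2 * b**2)
lam2_sq = (s + 2 + sp.sqrt(s * (s + 8))) / (2 * b**2)

# Each squared root L = lambda**2 must satisfy
# b**4 * L**2 - b**2 * (s + 2) * L + (1 - s) = 0.
for L in (lam1_sq, lam2_sq):
    residue = b**4 * L**2 - b**2 * (s + 2) * L + (1 - s)
    assert sp.simplify(sp.expand(residue)) == 0
```

Expanding, the discriminant \((s+2)^2 - 4(1-s) = s(s+8)\) is positive on 0 < s < 1, which is why all four roots are real, as the remark states.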

The general solution is a linear combination of them:
\[
g(s,z;x) = A_1(s,z)\,e^{-\lambda_1 x} + A_2(s,z)\,e^{-\lambda_2 x} + A_3(s,z)\,e^{-\lambda_3 x} + A_4(s,z)\,e^{-\lambda_4 x}
\]
To avoid divergences as x → ∞ we must set \(A_3 = A_4 = 0\). Returning to \(\tilde q\):
\[
\tilde q(s,z;x) = A_1(s,z)\,e^{-\lambda_1 x} + A_2(s,z)\,e^{-\lambda_2 x} + \frac{1}{1-s}
\]
By symmetry:
\[
\tilde r(s,z;x) = B_1(s,z)\,e^{-\eta_1 x} + B_2(s,z)\,e^{-\eta_2 x} + \frac{1}{1-sz}
\]
where:
\[
\eta_1 = \sqrt{\frac{sz+2-\sqrt{sz(sz+8)}}{2b^2}}, \qquad \eta_2 = \sqrt{\frac{sz+2+\sqrt{sz(sz+8)}}{2b^2}}
\]


and \(B_i \equiv A_i\!\left(sz,\frac{1}{z}\right)\). To evaluate the amplitudes \((A_1, A_2, B_1, B_2)\), we use relations (3.11) together with the results from (A): substituting the expressions above for \(\tilde q\) and \(\tilde r\) into the first of (3.11), computing the integrals with the jump distribution (5.1), and exploiting the non-trivial relation
\[
\frac{K\left(\Lambda^2 b^2+1\right)}{\left(\Lambda^2 b^2-1\right)^2} = 1, \qquad K = \{s, sz\}, \quad \Lambda = \{\lambda, \eta\},
\]
which follows from the characteristic equation, one gets the equation:

\[
-A_1 s\,\frac{b + x(1-\lambda_1 b)}{(\lambda_1 b - 1)^2} - A_2 s\,\frac{b + x(1-\lambda_2 b)}{(\lambda_2 b - 1)^2}
+ B_1 sz\,\frac{b + x(1+\eta_1 b)}{(\eta_1 b + 1)^2} + B_2 sz\,\frac{b + x(1+\eta_2 b)}{(\eta_2 b + 1)^2}
+ (x+b)\left(\frac{sz}{1-sz} - \frac{s}{1-s}\right) = 0
\]
which can conveniently be rewritten by collecting the constant and the linear terms in x:
\[
\left[-\frac{A_1 s b}{(1-\lambda_1 b)^2} - \frac{A_2 s b}{(1-\lambda_2 b)^2} + \frac{B_1 s z b}{(\eta_1 b+1)^2} + \frac{B_2 s z b}{(\eta_2 b+1)^2} + \frac{b s(z-1)}{(1-s)(1-sz)}\right]
+ x\left[-\frac{A_1 s}{1-\lambda_1 b} - \frac{A_2 s}{1-\lambda_2 b} + \frac{B_1 s z}{\eta_1 b+1} + \frac{B_2 s z}{\eta_2 b+1} + \frac{s(z-1)}{(1-s)(1-sz)}\right] = 0
\]

Since the condition must hold for every value of x, both square brackets must vanish:
\[
\begin{cases}
\dfrac{A_1}{(1-\lambda_1 b)^2} + \dfrac{A_2}{(1-\lambda_2 b)^2} - \dfrac{B_1 z}{(\eta_1 b+1)^2} - \dfrac{B_2 z}{(\eta_2 b+1)^2} + \dfrac{1-z}{(1-s)(1-sz)} = 0\\[3mm]
\dfrac{A_1}{1-\lambda_1 b} + \dfrac{A_2}{1-\lambda_2 b} - \dfrac{B_1 z}{\eta_1 b+1} - \dfrac{B_2 z}{\eta_2 b+1} + \dfrac{1-z}{(1-s)(1-sz)} = 0
\end{cases}
\]


Applying the symmetry we obtain two more equations, and hence the closed system:
\[
\begin{cases}
\dfrac{A_1}{1-\lambda_1 b} + \dfrac{A_2}{1-\lambda_2 b} - \dfrac{B_1 z}{\eta_1 b+1} - \dfrac{B_2 z}{\eta_2 b+1} + \dfrac{1-z}{(1-s)(1-sz)} = 0\\[2mm]
\dfrac{A_1}{(1-\lambda_1 b)^2} + \dfrac{A_2}{(1-\lambda_2 b)^2} - \dfrac{B_1 z}{(\eta_1 b+1)^2} - \dfrac{B_2 z}{(\eta_2 b+1)^2} + \dfrac{1-z}{(1-s)(1-sz)} = 0\\[2mm]
\dfrac{B_1 z}{1-\eta_1 b} + \dfrac{B_2 z}{1-\eta_2 b} - \dfrac{A_1}{\lambda_1 b+1} - \dfrac{A_2}{\lambda_2 b+1} - \dfrac{1-z}{(1-s)(1-sz)} = 0\\[2mm]
\dfrac{B_1 z}{(\eta_1 b-1)^2} + \dfrac{B_2 z}{(\eta_2 b-1)^2} - \dfrac{A_1}{(\lambda_1 b+1)^2} - \dfrac{A_2}{(\lambda_2 b+1)^2} - \dfrac{1-z}{(1-s)(1-sz)} = 0
\end{cases}
\]
which has solution:

\[
\begin{aligned}
A_1 &= \frac{\lambda_2\,\eta_1\eta_2\,\left(\lambda_1^2 b^2-1\right)^2(1-z)}{(\lambda_1-\lambda_2)(\lambda_1+\eta_1)(\lambda_1+\eta_2)\,(1-s)(1-sz)}\\[1mm]
A_2 &= -\frac{\lambda_1\,\eta_1\eta_2\,\left(\lambda_2^2 b^2-1\right)^2(1-z)}{(\lambda_1-\lambda_2)(\lambda_2+\eta_1)(\lambda_2+\eta_2)\,(1-s)(1-sz)}\\[1mm]
B_1 &= -\frac{\lambda_1\lambda_2\,\eta_2\,\left(\eta_1^2 b^2-1\right)^2(1-z)}{z\,(\eta_1-\eta_2)(\eta_1+\lambda_1)(\eta_1+\lambda_2)\,(1-s)(1-sz)}\\[1mm]
B_2 &= \frac{\lambda_1\lambda_2\,\eta_1\,\left(\eta_2^2 b^2-1\right)^2(1-z)}{z\,(\eta_1-\eta_2)(\eta_2+\lambda_1)(\eta_2+\lambda_2)\,(1-s)(1-sz)}
\end{aligned}
\]

Now that the auxiliary variables are known, we can compute the first moment of the k-th gap starting from its generating function (4.34). We first need to evaluate the quantity \(\hat r(s;x)\):
\[
\hat r(s;x) = C_1(s)\,e^{-\lambda_1 x} + C_2(s)\,e^{-\lambda_2 x} + \frac{1}{1-s}
\]
Using (4.33) we get an equation for the coefficients:

\[
C_1\,\frac{b + x(1-\lambda_1 b)}{(1-\lambda_1 b)^2} + C_2\,\frac{b + x(1-\lambda_2 b)}{(1-\lambda_2 b)^2} + \frac{x+b}{1-s} = 0
\tag{5.3}
\]
Equation (5.3) must hold for every value of x, hence:
\[
\begin{cases}
\dfrac{C_1}{(\lambda_1 b-1)^2} + \dfrac{C_2}{(\lambda_2 b-1)^2} + \dfrac{1}{1-s} = 0\\[2mm]
\dfrac{C_1}{1-\lambda_1 b} + \dfrac{C_2}{1-\lambda_2 b} + \dfrac{1}{1-s} = 0
\end{cases}
\tag{5.4}
\]
The system (5.4) has solution:
\[
C_1(s) = -\frac{(\lambda_1 b-1)^2\,\lambda_2}{(1-s)(\lambda_2-\lambda_1)}, \qquad
C_2(s) = -\frac{(\lambda_2 b-1)^2\,\lambda_1}{(1-s)(\lambda_1-\lambda_2)}
\]


Therefore, substituting into (4.34):
\[
\left\langle\tilde d(s,z)\right\rangle = \frac{A_1 - C_1(s)}{\lambda_1} + \frac{A_2 - C_2(s)}{\lambda_2} + z\,\frac{B_1 - C_1(sz)}{\eta_1} + z\,\frac{B_2 - C_2(sz)}{\eta_2}
\]

Taking the s → 1 limit:
\[
\left\langle\tilde d(s,z)\right\rangle \underset{s\to 1}{\approx} \sigma\left[\frac{\sqrt{z+2+\sqrt{z(z+8)}}+\sqrt{z+2-\sqrt{z(z+8)}}}{2\sqrt{3}\,\sqrt{1-z}} - \sqrt{\frac{2}{3}}\right]\frac{1}{1-s}
\]
hence:
\[
\sum_{k=1}^{\infty} z^k\,\langle d_{k,\infty}\rangle = \sigma\left[\frac{\sqrt{z+2+\sqrt{z(z+8)}}+\sqrt{z+2-\sqrt{z(z+8)}}}{2\sqrt{3}\,\sqrt{1-z}} - \sqrt{\frac{2}{3}}\right]
\underset{z\to 1}{\approx} \frac{\sigma\sqrt{6}}{2\sqrt{3}\,\sqrt{1-z}} = \frac{\sigma}{\sqrt{2}\,\sqrt{1-z}}
\tag{5.5}
\]
We expect the scaling \(\langle d_{k,\infty}\rangle \approx A/\sqrt{k}\) for large k, so:
\[
A\sum_{k=1}^{\infty}\frac{z^k}{\sqrt{k}} \underset{z\to 1}{\approx} \frac{A\sqrt{\pi}}{\sqrt{1-z}}
\tag{5.6}
\]
Comparing (5.6) with (5.5) gives \(A = \frac{\sigma}{\sqrt{2\pi}}\), so:
\[
\frac{\langle d_{k,\infty}\rangle}{\sigma} \underset{k\gg 0}{\approx} \frac{1}{\sqrt{2\pi k}}
\]

The asymptotic behavior of the k-th gap mean value coincides with the one computed for an exponential jump distribution. This confirms the reliability of the Pollaczek-Wendel identity (4.13).

5.2 Probability distribution of the k-th gap

We now want to extract the full k-th gap distribution and check whether the claim of universality for typical fluctuations holds for a first-order Gamma distribution. From the previous results, the generating function of the auxiliary variable \(Q_{k,n}(x,\delta)\) reads:
\[
\tilde Q(s,z;x,\delta) = A_1(s,z;\delta)\,e^{-\lambda_1 x} + A_2(s,z;\delta)\,e^{-\lambda_2 x} + \frac{1}{1-s}
\]

Injecting this expression into the integral equation (3.12), we obtain an equation for the amplitudes:
\[
-\frac{x+b}{1-s} - A_1\,\frac{b+x(1-\lambda_1 b)}{(\lambda_1 b-1)^2} - A_2\,\frac{b+x(1-\lambda_2 b)}{(\lambda_2 b-1)^2}
+ \frac{z e^{-\delta/b}}{1-sz}\,(b+x+\delta)
+ B_1 z e^{-\delta/b}\,\frac{b+(x+\delta)(\eta_1 b+1)}{(\eta_1 b+1)^2}
+ B_2 z e^{-\delta/b}\,\frac{b+(x+\delta)(\eta_2 b+1)}{(\eta_2 b+1)^2} = 0
\tag{5.7}
\]


Since (5.7) must vanish for every value of x, it splits into two independent equations. Moreover, the symmetry provides two further equations, hence we get a linear system of four equations in the four variables \(A_1, A_2, B_1, B_2\) (in the following \(\gamma := \delta/b\)):
\[
\begin{cases}
\dfrac{A_1}{1-\lambda_1 b} + \dfrac{A_2}{1-\lambda_2 b} + \dfrac{B_1 z e^{-\gamma}}{\eta_1 b+1} + \dfrac{B_2 z e^{-\gamma}}{\eta_2 b+1} + \dfrac{1}{1-s} - \dfrac{z e^{-\gamma}}{1-sz} = 0\\[3mm]
\dfrac{B_1 z}{1-\eta_1 b} + \dfrac{B_2 z}{1-\eta_2 b} - \dfrac{A_1 e^{-\gamma}}{\lambda_1 b+1} - \dfrac{A_2 e^{-\gamma}}{\lambda_2 b+1} + \dfrac{z}{1-sz} - \dfrac{e^{-\gamma}}{1-s} = 0\\[3mm]
\dfrac{A_1}{(\lambda_1 b-1)^2} + \dfrac{A_2}{(\lambda_2 b-1)^2} - \dfrac{B_1 z e^{-\gamma}}{\eta_1 b+1}\left(\dfrac{1}{\eta_1 b+1}+\gamma\right) - \dfrac{B_2 z e^{-\gamma}}{\eta_2 b+1}\left(\dfrac{1}{\eta_2 b+1}+\gamma\right) + \dfrac{1}{1-s} - \dfrac{z e^{-\gamma}}{1-sz}(1+\gamma) = 0\\[3mm]
\dfrac{B_1 z}{(\eta_1 b-1)^2} + \dfrac{B_2 z}{(\eta_2 b-1)^2} - \dfrac{A_1 e^{-\gamma}}{\lambda_1 b+1}\left(\dfrac{1}{\lambda_1 b+1}+\gamma\right) - \dfrac{A_2 e^{-\gamma}}{\lambda_2 b+1}\left(\dfrac{1}{\lambda_2 b+1}+\gamma\right) + \dfrac{z}{1-sz} - \dfrac{e^{-\gamma}}{1-s}(1+\gamma) = 0
\end{cases}
\tag{5.8}
\]

The solutions of (5.8) are too cumbersome to be handled directly, so we first consider the asymptotic behavior of the system, as explained in (4.3.2):
\[
A_i(s,z;\gamma) \approx \frac{A_i^{(1)}(z;\gamma)}{1-s} + \frac{A_i^{(2)}(z;\gamma)}{\sqrt{1-s}}, \qquad
B_i(s,z;\gamma) \approx \frac{B_i^{(1)}(z;\gamma)}{1-s} + \frac{B_i^{(2)}(z;\gamma)}{\sqrt{1-s}} \qquad (i=1,2)
\tag{5.9}
\]

On substituting relations (5.9) into (5.8) and expanding around s = 1, we obtain four equations of the general form
\[
\frac{P\!\left(A_i^{(j)}, B_k^{(l)}\right)}{1-s} + \frac{Q\!\left(A_i^{(j)}, B_k^{(l)}\right)}{\sqrt{1-s}} + O(1) = 0 \qquad (i,j,k,l = 1,2)
\]
where P, Q are polynomials linear in the amplitudes \(A_i^{(j)}, B_k^{(l)}\). Since each term of the expansion is independent, the system of four equations splits into a linear system of eight equations in the eight amplitudes defined in (5.9). This system has solution:

\[
A_1^{(1)} = -1, \qquad A_2^{(1)} = 0, \qquad B_1^{(1)} = 0, \qquad B_2^{(1)} = 0
\tag{5.10}
\]
while the second-order amplitudes \(A_1^{(2)}, A_2^{(2)}, B_1^{(2)}, B_2^{(2)}\) are nonzero functions of \((z;\gamma)\) too unwieldy to be displayed here.

Now that the amplitudes are known, we evaluate the generating function of the gap distribution in the s → 1 limit, \(\tilde p(z;\gamma)\). To do so, we start from relation (4.16). The double derivatives of the auxiliary variables are:

\[
\frac{\partial^2}{\partial x\,\partial y}\tilde Q(s,z;x,\gamma) = -\frac{e^{-\lambda_1 x}}{b^2}\left(b\lambda_1\frac{\partial}{\partial\gamma} + \frac{\partial^2}{\partial\gamma^2}\right)A_1(s,z;\gamma)
- \frac{e^{-\lambda_2 x}}{b^2}\left(b\lambda_2\frac{\partial}{\partial\gamma} + \frac{\partial^2}{\partial\gamma^2}\right)A_2(s,z;\gamma)
\]
\[
\frac{\partial^2}{\partial x\,\partial y}\tilde R(s,z;-y,\gamma) = -\frac{e^{\eta_1 y}}{b^2}\left(b\eta_1\frac{\partial}{\partial\gamma} + \frac{\partial^2}{\partial\gamma^2}\right)B_1(s,z;\gamma)
- \frac{e^{\eta_2 y}}{b^2}\left(b\eta_2\frac{\partial}{\partial\gamma} + \frac{\partial^2}{\partial\gamma^2}\right)B_2(s,z;\gamma)
\tag{5.11}
\]

Injecting (5.11) into (4.16) we obtain:
\[
\tilde P(s,z;\gamma) = \frac{1}{b}\left[\left(\frac{\partial}{\partial\gamma} + \frac{1}{\lambda_1 b}\frac{\partial^2}{\partial\gamma^2}\right)A_1(s,z;\gamma)
+ \left(\frac{\partial}{\partial\gamma} + \frac{1}{\lambda_2 b}\frac{\partial^2}{\partial\gamma^2}\right)A_2(s,z;\gamma)
+ z e^{-\eta_1 b\gamma}\left(\frac{\partial}{\partial\gamma} + \frac{1}{\eta_1 b}\frac{\partial^2}{\partial\gamma^2}\right)B_1(s,z;\gamma)
+ z e^{-\eta_2 b\gamma}\left(\frac{\partial}{\partial\gamma} + \frac{1}{\eta_2 b}\frac{\partial^2}{\partial\gamma^2}\right)B_2(s,z;\gamma)\right]
\tag{5.12}
\]

Since we know the amplitudes in the s → 1 limit (5.10), we need to study the asymptotic behavior of (5.12). To do so, we inject (5.10) into (5.12) and substitute the limiting behavior of the roots, which is found to be:
\[
\lambda_1 b \underset{s\to 1}{\approx} \frac{\sqrt{1-s}}{\sqrt{3}}, \qquad \lambda_2 b \underset{s\to 1}{\approx} \sqrt{3}, \qquad
\eta_1 b \underset{s\to 1}{\approx} \frac{\sqrt{2+z-\sqrt{z(z+8)}}}{\sqrt{2}}, \qquad \eta_2 b \underset{s\to 1}{\approx} \frac{\sqrt{2+z+\sqrt{z(z+8)}}}{\sqrt{2}}
\]

All the γ-derivatives in (5.12), acting on the amplitudes (5.10), share for s → 1 the same asymptotic behaviour \(1/\sqrt{1-s}\). As a consequence, all the terms of (5.12) scale as \(1/\sqrt{1-s}\), except the one containing the inverse of the root \(\lambda_1\), which scales as \(1/(1-s)\). This is the leading term and the only one relevant in the limit s → 1:
\[
\tilde P(s,z;\gamma) \underset{s\to 1}{\approx} \frac{1}{\lambda_1 b^2}\frac{\partial^2}{\partial\gamma^2}A_1(s,z;\gamma) = \tilde p(z;\gamma)\,\frac{1}{1-s}
\]

where \(\tilde p(z;\gamma)\) is the generating function of the gap distribution in the s → 1 limit:
\[
\tilde p(z;\gamma) = \sum_{k=1}^{\infty} z^k\,p_{k,\infty}(\gamma)
\tag{5.13}
\]
Remark 6.

The quantity \(\tilde p(z;\gamma)\) is still too unwieldy to be transcribed here. We now investigate the two scaling regimes of typical and large fluctuations.


1. Typical fluctuations δ, γ ∼ 1/√k

To test the claim of universality, consider the limit z → 1 keeping the ratio \((1-z)/\gamma^2\) constant, which selects typical fluctuations. Equation (5.13) approaches:
\[
\tilde p(z;\gamma) \approx \frac{1}{b}\,\frac{18\sqrt{3}}{\left(\sqrt{3} + 3\frac{\sqrt{t}}{\gamma}\right)^3}\,\frac{1}{\gamma^3}
\]
where t = 1 − z. Substituting now
\[
b = \frac{\sigma}{\sqrt{6}}, \qquad \gamma = \frac{\delta}{b} = \frac{\sqrt{6}\,\delta}{\sigma}
\]

we obtain exactly the same asymptotic behaviour as for the exponential distribution,
\[
\tilde p(z;\gamma) \approx \frac{\sigma^2}{\delta^3}\left(1+\sqrt{\frac{\lambda}{2}}\right)^{-3}
\]
and hence the same scaling function P.

2. Large fluctuations δ, γ ≫ 0

Expanding \(\tilde p(z;\gamma)\) around z = 1 as in (4.27), we can extract the scaling function \(\varphi_1(\gamma)\). It turns out to be rather complicated; nevertheless, its asymptotic behavior is:
\[
\varphi_1(\gamma) \approx
\begin{cases}
\dfrac{9}{b}\sqrt{\dfrac{3}{\pi}}\,\gamma^{-4} & \gamma \to 0\\[3mm]
\dfrac{C}{b}\,\gamma^2 e^{-2\gamma} & \gamma \to \infty
\end{cases}
\tag{5.14}
\]
where C is a large numerical coefficient. As before, combining the two fluctuation regimes:
\[
p_k(\gamma) \underset{k\gg 0}{\approx}
\begin{cases}
\dfrac{1}{b}\sqrt{\dfrac{k}{6}}\,P\!\left(\sqrt{\dfrac{k}{6}}\,\gamma\right) & \gamma \sim 1/\sqrt{k}\\[3mm]
\varphi_1(\gamma)\,k^{-3/2} & \gamma \gg 0
\end{cases}
\]
their asymptotic behaviours match as they approach each other:
\[
\frac{1}{b}\sqrt{\frac{k}{6}}\,P\!\left(\sqrt{\frac{k}{6}}\,\gamma\right) \xrightarrow{\;\gamma\gg 0\;}
\frac{1}{b}\sqrt{\frac{k}{6}}\,\frac{3}{\sqrt{8\pi}}\,\frac{36}{k^2\gamma^4} = \frac{9}{b}\sqrt{\frac{3}{\pi}}\,\gamma^{-4}k^{-3/2}, \qquad
\varphi_1(\gamma)\,k^{-3/2} \xrightarrow{\;\gamma\to 0\;} \frac{9}{b}\sqrt{\frac{3}{\pi}}\,\gamma^{-4}k^{-3/2}
\]


Chapter 6

General p-th order Gamma distribution

We consider now a general p-th order Gamma jump distribution:
\[
f_p(x) = \frac{|x|^p}{2\,p!\,b^{p+1}}\,e^{-|x|/b}, \qquad p \in \mathbb{N}
\tag{6.1}
\]
By symmetry the mean value is zero, while the variance is:
\[
\sigma^2 = \frac{1}{2\,p!\,b^{p+1}}\int_{-\infty}^{+\infty} x^2\,|x|^p\,e^{-|x|/b}\,dx
= \frac{1}{p!\,b^{p+1}}\int_0^{\infty} x^{p+2}\,e^{-x/b}\,dx
= \frac{b^2}{p!}\int_0^{\infty} y^{p+2}\,e^{-y}\,dy
= \frac{b^2}{p!}\,(p+2)! = b^2\,(p+1)(p+2)
\]
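The variance formula can be verified symbolically for any fixed order; a minimal sketch of mine (not from the thesis) for p = 5:

```python
import sympy as sp

x, b = sp.symbols('x b', positive=True)
p = 5  # any non-negative integer order works the same way

# p-th order Gamma jump distribution (6.1), restricted to x >= 0 (|x| = x).
fp = x**p / (2 * sp.factorial(p) * b**(p + 1)) * sp.exp(-x / b)

# By symmetry the variance is twice the integral over the positive half-line.
var = 2 * sp.integrate(x**2 * fp, (x, 0, sp.oo))
assert sp.simplify(var - b**2 * (p + 1) * (p + 2)) == 0
```

For p = 5 this gives \(\sigma^2 = 42\,b^2\), consistent with \(b^2(p+1)(p+2)\); p = 0 and p = 1 reproduce the earlier values \(2b^2\) and \(6b^2\).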

Figure 6.1: Graphs of some general order Gamma distributions with b = 1 (p = 5 and p = 50).

In the previous cases, to reduce the Wiener-Hopf integrals (3.4) and (3.10) to recurrent differential equations, we exploited identities satisfied by the jump distributions, see (4.2) and (5.2). The aim is now to find an analogous identity for the general p-th order Gamma distribution. We start by evaluating the derivative of (6.1):

\[
\frac{d}{dx}f_p = \frac{\sigma(x)}{b}\left[f_{p-1} - f_p\right]
\tag{6.2}
\]


where σ(x) is the sign function
\[
\sigma(x) = \Theta(x) - \Theta(-x) =
\begin{cases}
1 & x > 0\\
0 & x = 0\\
-1 & x < 0
\end{cases}
\]
whose derivative is:
\[
\frac{d}{dx}\sigma(x) = 2\delta(x)
\]
The second derivative of (6.1) reads:

\[
\frac{d^2}{dx^2}f_p = \frac{2\delta(x)}{b}\left[f_{p-1} - f_p\right] + \frac{1}{b^2}\left[f_p - 2f_{p-1} + f_{p-2}\right]
= \frac{\delta(x)}{b^2}\left[\delta_{p,1} - \delta_{p,0}\right] + \frac{1}{b^2}\left[f_p - 2f_{p-1} + f_{p-2}\right]
\tag{6.3}
\]
or, case by case:
\[
b^2 f_p'' =
\begin{cases}
f_0 - \delta(x) & p = 0\\
f_1 - 2f_0 + \delta(x) & p = 1\\
f_p - 2f_{p-1} + f_{p-2} & p \geq 2
\end{cases}
\]

Differentiating (6.1) an even number of times, we obtain a combination of lower-order distributions and derivatives of delta functions. Defining the rescaled linear differential operator
\[
D^2 = b^2\frac{d^2}{dx^2}
\tag{6.4}
\]
we can rewrite (6.3) as:
\[
D^2 f_p = (-1)^{p+1}\binom{1}{p}\delta(x) + \sum_{l=0}^{2}\binom{2}{l}(-1)^l f_{p-l}
\]
Applying (6.4) a second time, it can be shown that (B):
\[
D^4 f_p = (-1)^{p+1}\left[\binom{1}{p}D^2 + \binom{3}{p}\right]\delta(x) + \sum_{l=0}^{4}\binom{4}{l}(-1)^l f_{p-l}
\]
and similarly:
\[
D^6 f_p = (-1)^{p+1}\left[\binom{1}{p}D^4 + \binom{3}{p}D^2 + \binom{5}{p}\right]\delta(x) + \sum_{l=0}^{6}\binom{6}{l}(-1)^l f_{p-l}
\]
Thus we can generalize to k applications:
\[
D^{2k} f_p = (-1)^{p+1}\sum_{m=0}^{k-1}\binom{2k-2m-1}{p}D^{2m}\delta(x) + \sum_{l=0}^{2k}\binom{2k}{l}(-1)^l f_{p-l}
\tag{6.5}
\]
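The p ≥ 2 case of (6.3) can be confirmed symbolically away from the origin (where |x| = x); a short sketch of mine:

```python
import sympy as sp

x, b = sp.symbols('x b', positive=True)

def f(p):
    # p-th order Gamma distribution (6.1) on x > 0, where |x| = x.
    return x**p / (2 * sp.factorial(p) * b**(p + 1)) * sp.exp(-x / b)

# For p >= 2 and x > 0:  b**2 * f_p'' = f_p - 2 f_{p-1} + f_{p-2}.
for p in (2, 3, 7):
    lhs = b**2 * sp.diff(f(p), x, 2)
    rhs = f(p) - 2 * f(p - 1) + f(p - 2)
    assert sp.simplify(lhs - rhs) == 0
```

The delta-function terms in (6.3) only appear at x = 0, so this check away from the origin isolates the smooth part of the identity.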
