
Some stochastic particle systems models of neuronal networks


Academic year: 2021


Università di Pisa
Dipartimento di Matematica

Master's Thesis

SOME STOCHASTIC PARTICLE SYSTEMS MODELS OF NEURONAL NETWORKS

Candidate: Paolo Grazieschi
Advisor: Prof. Marco Romito


Contents

Introduction

I Particle System and Limit Equations

1 A particle system model of the human brain
  1.1 Particle system: definition of solution, existence and uniqueness
  1.2 Mean-field equation with delayed interaction

2 Mean-field limit equation and convergence of the particle system
  2.1 Convergence
    2.1.1 The M1-topology
    2.1.2 Convergence of the particle system

3 Mean-field limit equation: existence and uniqueness
  3.1 Limit Equation
    3.1.1 Main results
    3.1.2 Existence and uniqueness in small time
    3.1.3 Long time estimates
  3.2 Blow-up phenomenon

II New models of neuronal networks

4 Properties of neuronal networks and introduction of new models
  4.1 Random Graphs
    4.1.1 The Erdős–Rényi Random Graph
    4.1.2 Circle model
    4.1.3 Small World models and the Watts-Strogatz model
  4.2 Observed properties of neuronal networks
    4.2.1 The M1 model
    4.2.2 The M2 model
    4.2.3 The M3 model
    4.2.4 The M4 model

Appendices


Introduction

A neuron is the fundamental component of the human brain. It consists of a central part, called the soma; input connections, called dendrites; and one output connection, called the axon. Neurons are linked together to form a complex network of interacting particles, the human brain: in particular, every neuron accumulates a membrane potential in the soma and, when this potential reaches a "threshold" value, an electrochemical signal is produced and sent through the axon: we say that the neuron "spikes". All the other neurons connected to the axon of the spiking neuron feel an increase in their membrane potential, called a "kick", and may in turn produce a new electrochemical signal. We here pursue a better understanding, from a mathematical point of view, of the structure of the human brain network and of the nature of the interactions among neurons. Two works by Delarue, Inglis, Rubenthaler and Tanré already address this problem. There, neurons are represented by nodes in a mathematical graph and edges are connections among them: the graph is complete, meaning that every neuron is connected to every other neuron of the network. To each neuron i is then associated a stochastic process X^i representing its potential; X^i solves a stochastic differential equation with a Lipschitz drift term b, a Brownian diffusion (W_t)_t and an interaction term depending on

the distribution of the potentials themselves:

$$X_t^i = X_0^i + \int_0^t b\big(X_s^i\big)\,ds + \frac{\alpha}{N} \sum_{j=1}^N M_t^j - M_t^i + W_t,$$

where the process (M_t^i)_t keeps track of the number of times that the process X^i reaches the threshold value. The interaction term (α/N) Σ_{j=1}^N M_t^j causes the potential of one neuron to be instantaneously increased whenever some other particle of the network spikes; moreover, whenever the spike comes from particle i itself, the term M_t^i is increased and this causes the potential X^i to be instantaneously reset to a resting value. This model for neurons in the brain is called the "stochastic integrate and fire model" and it is only one of the possible descriptions of the behaviour of neurons. The computation of a solution of the resulting McKean-Vlasov type equations is not trivial, because the equations depend not only on the distribution of the solution, but also on the distribution of the first hitting times of the threshold by the solution. Another difficulty arises because we want to find solutions which are "physical": as such, we want to exclude those behaviours of the potentials of the particle system which would not correspond to real behaviours of neurons in the brain. Finally, we are able to show that


there is one and only one "physical solution" for the particle system.

Chapter 1 presents a generalization of this theory, applied to arbitrary mathematical graphs instead of only complete ones. There is a price to be paid for this level of generality: a bound on the potentials which is not independent of the number of particles of the network, a fact which makes it impossible to proceed at a later stage. We therefore discuss possible conditions on the graph that avoid these problems. Chapter 1 then concludes by introducing another possible model of the human brain in which the average behaviour of neurons is described by a single stochastic differential equation with a "delayed" interaction term. Chapters 2 and 3 are again centred on results found in works by Delarue, Inglis, Rubenthaler and Tanré. Chapter 2 deals with the convergence of the solution of the particle system, as the number of particles diverges, to a mean-field stochastic differential equation:

$$X_t = X_0 + \int_0^t b(X_s)\,ds + \alpha\,\mathbb{E}[M_t] - M_t + W_t,$$

where, again, we have a Lipschitz continuous function b, a standard Brownian motion (W_t)_t, a stochastic process (M_t)_t which keeps track of the number of times that X reaches the threshold, and an interaction term now represented by αE[M_t]. The central point

in proving this convergence is the following: we work in a functional space in which all the possible trajectories of the solutions of the particle system lie; we consider the empirical measures giving uniform weight to the realizations of the trajectories of the particle system; and we show that the laws of these empirical measures form a tight family of measures. In particular, the convergence of the solution of the particle system to this mean-field stochastic differential equation makes it possible to prove existence of a solution of the SDE; we are however not able to prove uniqueness. Chapter 3 studies the same SDE considered in Chapter 2, but with a different notion of solution which implies better regularity conditions on the terms appearing in the equation. In this way, it is possible to prove existence and uniqueness of the solution. Both existence and uniqueness follow, at least in small time, from a fixed-point argument: we first solve an SDE in which the interaction term is replaced by an arbitrary function belonging to a suitable complete metric functional space, and we then transform that arbitrary function by a contraction map; the unique fixed point of this contraction yields the results we are looking for. We finally extend existence and uniqueness from small times to an arbitrary time interval by showing that the local solution can be defined on a time interval of length at least equal to some fixed value. Chapter 4 presents some empirical observations on the nature of the human brain network, which are summarized by requiring the small-world property (that is, low distances between nodes and a cluster structure, meaning that there is some kind of local concentration of nodes) and an approximate power-law degree distribution for the nodes of the network.
The effort of the chapter is then to present entirely new models which overcome the complete-network hypothesis and which adhere better to the previously described empirical properties. Four models are presented. The first one shows both small-worldness and a power-law degree distribution but is not suited to a discussion


in terms of limit behaviour as done in Chapter 2. The same problem appears for the second and third models, which are small-world models in which the power-law degree distribution is seen only for certain nodes. Finally, the fourth model is still a small-world one even if it completely loses the power-law degree distribution; this last model, however, may be the starting point of a future investigation of limit convergence to some SDE.


Part I

Particle System and Limit Equations


Chapter 1

A particle system model of the human brain

1.1 Particle system: definition of solution, existence and uniqueness

The human brain network is here represented as a stochastic particle system in which each neuron is assigned a potential and the interaction is given by the propagation of spikes. It can be shown that this particle system has a unique solution, as described below.

This section follows the work by Delarue, Inglis, Rubenthaler and Tanré in [6], applying it to arbitrary mathematical graphs.

Our effort here is to model, from a mathematical point of view, a neuronal network composed of N interacting neurons.

Every neuron is represented as a node in a graph, which in turn represents the entire network: as such, all neuronal connections are represented by edges between nodes. Every neuron is then assigned a potential and, when the potential of one of the neurons in the network reaches a threshold value, the neuron is said to emit a spike and its potential is immediately reset to a resting value. The spike is an electrochemical wave which reaches all the neurons connected to the one that has spiked, causing their potentials to increase. This is one of the most classical models of a neuronal network and it is called the "noisy leaky integrate and fire" (NLIF) model.

We first consider a very general model of the human brain: let N be the total number of particles in the network and, given two particles i, j ∈ {1, ..., N}, let a_{ij} ∈ {0, 1} represent the existence of an edge between vertices i and j. In this way, by taking different combinations of (a_{ij})_{i,j}, we take into account every possible graph as a model of the brain. Moreover, the potential of a particle i at time t (when we consider a network of N particles) is represented by a càdlàg stochastic process X_t^{i,N}. Without loss of generality,


we assume that the potential threshold at which a neuron spikes is 1 and that the resting value at which the potential of a neuron is reset after a spike is 0. We also assume that the spike of a particle j increases the potential of every particle i to which j is connected by an amount α/d(i), with α a real number in (0, 1) and d(i) = Σ_{j=1}^N a_{ij} giving the degree of node i; when this happens, we say that particle i receives a kick from particle j. We now define:

$$\tau_0^{i,N} = 0, \qquad \tau_{k+1}^{i,N} = \inf\Big\{ t > \tau_k^{i,N} \;:\; X_{t-}^{i,N} + \frac{\alpha}{d(i)} \sum_{j=1}^N a_{ij}\big(M_t^{j,N} - M_{t-}^{j,N}\big) \ge 1 \Big\} \quad \forall k \in \mathbb{N} \tag{1.1}$$

and

$$M_t^{i,N} = \sum_{k \in \mathbb{N}} \mathbf{1}_{[0,t]}\big(\tau_k^{i,N}\big). \tag{1.2}$$

Here M_t^{i,N} is an integer-valued stochastic process which counts the number of times the process X_t^{i,N} reaches the threshold 1 in the time interval [0, t], which is the same as the number of times that particle i spikes in that time interval. Similarly, ΔM_t^{j,N} := M_t^{j,N} − M_{t-}^{j,N} gives the number of times particle j spikes at time t. The definition of M^{i,N} uses the stopping times τ_k^{i,N}: these are the times at which the potential of particle i, plus the kicks it receives from the spiking of other particles, reaches a value equal to or greater than the threshold.

We can now describe the dynamics of X_t^{i,N} by the following stochastic differential equation:

$$X_t^{i,N} = X_0^{i,N} + \int_0^t b\big(X_s^{i,N}\big)\,ds + \frac{\alpha}{d(i)} \sum_{j=1}^N a_{ij} M_t^{j,N} + W_t^i - M_t^{i,N}, \qquad X_0^{i,N} \stackrel{d}{=} X_0 \;\text{i.i.d.} \tag{1.3}$$

where (W_t^i)_{t≥0} are independent standard Brownian motions, for each i = 1, ..., N. We also make some standard assumptions on the function b and on the random variable X_0.

Standard Assumptions on the function b and on the random variable X_0:

1. b : (−∞, 1] → R is a Lipschitz-continuous function, i.e. there exist two constants K and Λ such that

$$|b(x) - b(y)| \le K\,|x - y|, \qquad |b(x)| \le \Lambda\,(1 + |x|)$$

for all x, y ∈ (−∞, 1];

2. the initial condition X_0 is such that X_0 ∈ (−∞, 1 − ε_0] almost surely for some ε_0 > 0, and X_0 ∈ L^p(Ω) for every p ≥ 1.
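As an illustration of the dynamics (1.3), here is a minimal Euler-scheme sketch in Python. It is not part of the thesis: the function name `simulate_nlif`, the time step and the spike-resolution loop are our own choices. Spikes within a step are resolved by a cascade loop in which each particle may spike at most once, anticipating the physical-solution condition ΔM ≤ 1 discussed below.

```python
import numpy as np

def simulate_nlif(A, b, alpha, X0, T, dt, rng):
    """Euler sketch of the X-system (1.3) on a graph with adjacency A.

    Each step advances the diffusion part, then resolves spikes: every
    particle at or above the threshold 1 spikes, contributes -1 through its
    counter M, and sends a kick alpha/d(i) to each neighbour i; this is
    repeated until no not-yet-spiked particle is above threshold.
    """
    N = len(X0)
    d = A.sum(axis=1)                  # degrees d(i)
    X = np.array(X0, dtype=float)
    M = np.zeros(N, dtype=int)         # spike counters M^{i,N}
    for _ in range(int(T / dt)):
        X += b(X) * dt + np.sqrt(dt) * rng.standard_normal(N)
        spiked = np.zeros(N, dtype=bool)
        while True:
            new = (X >= 1.0) & ~spiked  # each particle spikes at most once
            if not new.any():
                break
            spiked |= new
            M[new] += 1
            X[new] -= 1.0               # reset contribution -Delta M^i
            # particle i receives (alpha/d(i)) per spiking neighbour
            X += A[:, new].sum(axis=1) * (alpha / np.maximum(d, 1))
    return X, M
```

On a complete graph with a constant positive drift, the scheme produces a positive number of spikes over a moderate horizon, as expected.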


Reformulation

It can be convenient to write the system above in a different way, that is in terms of the new process:

$$Z_t^i := X_t^i + M_t^i, \qquad t \ge 0. \tag{1.4}$$

To this aim, it is clear that the system above is equivalent to the following one:

$$Z_t^i = Z_0^i + \int_0^t b\big(Z_s^i - M_s^i\big)\,ds + \frac{\alpha}{d(i)} \sum_{j=1}^N a_{ij} M_t^j + W_t^i, \qquad Z_0^i \stackrel{d}{=} X_0 \;\text{i.i.d.} \tag{1.5}$$

The only problem is to write the process M_t^i in terms of Z_t^i, and this can be accomplished by writing:

$$M_t^i = \Big\lfloor \Big( \sup_{s \in [0,t]} Z_s^i \Big)_+ \Big\rfloor,$$

where ⌊·⌋ stands for the integer part and (·)_+ for the positive part.

Proof. We first rewrite the stopping times τ_k^i in the following way:

$$\tau_k^i = \inf\Big\{ t > \tau_{k-1}^i \;:\; Z_{t-}^i - M_{t-}^i + \frac{\alpha}{d(i)} \sum_{j=1}^N a_{ij}\,\Delta M_t^j \ge 1 \Big\} = \inf\big\{ t > \tau_{k-1}^i \;:\; Z_t^i \ge 1 + M_{t-}^i \big\} = \inf\big\{ t > \tau_{k-1}^i \;:\; Z_t^i \ge k \big\}.$$

Now, if M_t^i = h then τ_1^i, ..., τ_h^i ∈ [0, t] while τ_{h+1}^i ∉ [0, t]; from the way we have rewritten the stopping times it follows that ⌊(sup_{s∈[0,t]} Z_s^i)_+⌋ = h. On the other hand, if ⌊(sup_{s∈[0,t]} Z_s^i)_+⌋ = h then there exists s ∈ [0, t] such that Z_s^i ≥ h, while Z_r^i < h + 1 for all r ∈ [0, t]. This implies that M_t^i = h, which concludes the proof. □

From now on, we refer to this reformulation of the problem as the Z-system, while we refer to the previous formulation as the X-system.
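The identity M_t^i = ⌊(sup_{s∈[0,t]} Z_s^i)_+⌋ is easy to exploit numerically. A small sketch on a discretized trajectory (the helper name is our own):

```python
import numpy as np

def spikes_from_z(z_path):
    """Recover the spike counter M_t = floor((sup_{s<=t} Z_s)_+) along a
    discretized trajectory of the reformulated process Z of Section 1.1."""
    running_sup = np.maximum.accumulate(np.asarray(z_path, dtype=float))
    return np.floor(np.maximum(running_sup, 0.0)).astype(int)
```

For instance, the path (−0.2, 0.4, 1.3, 0.9, 2.1) yields the counter values (0, 0, 1, 1, 2): the counter increases exactly when the running supremum of Z crosses a new integer level.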

Non-physical solutions

We need to define a concept of solution of the problem we have just introduced. However, it can be shown that the X-system (1.3), or equivalently the Z-system (1.5), could have multiple solutions. The problem is that some of these solutions are not "physical", in the sense that they cannot be realized concretely: they are mere mathematical constructions which do not happen in practice. Our goal is then to define a concept of solution in such a way that non-physical solutions are excluded; a desirable result would also be uniqueness of the solution.

Pursuing this goal, it is useful to first consider situations in which there is no uniqueness of the solution.


• A first obstruction to uniqueness comes from the fact that, until now, particles are allowed to have several spikes at the same time: this may happen because there are no conditions on the term M_t^{i,N} − M_{t-}^{i,N}.

Consider this example: take the Z-system (1.5) with Z_0^i = 0, b ≡ 0 and a_{ij} = 1 for every i, j; we can suppose that the coefficient α has the form α = 1 − 1/(2m), for some integer m ≥ 1. We now suppose that the potentials of our particles have evolved in such a way that

$$Z_{t-}^i = 1 - \delta_i, \qquad \text{where } \delta_1 = 0,\; \delta_i \in \Big( \frac{i-2}{4N},\, \frac{i-1}{4N} \Big) \;\forall i \in \{2, ..., N\},$$

and that M_{t-}^i = 0 for every particle i. We note that this happens with positive probability. In particular, particle 1 is going to spike at time t since it has reached 1. Now, if we set M_t^i = l for all i, where l is an arbitrary integer such that l ≤ m, we obtain

$$Z_t^i = 1 - \delta_i + \alpha l = 1 - \delta_i + \Big( 1 - \frac{1}{2m} \Big) l,$$

which implies that

$$Z_t^i \in \Big( 1 - \frac{i-1}{4N} + \Big( l - \frac{1}{2} \Big),\; 1 - \frac{i-2}{4N} + \Big( l - \frac{1}{2m} \Big) \Big) \subseteq [l, l+1).$$

We conclude that the processes Z_t^i and M_t^i satisfy the equations of the Z-system (1.5). But if the coefficient α is sufficiently close to 1 then the integer m must be large, and so we have a different solution for each l ≤ m.

We will later call the solutions with l > 1 non-physical: to have a physical solution, we will in fact require that ΔM_t^i = M_t^i − M_{t-}^i ≤ 1. This corresponds to imposing that each particle of the system may spike only once at any fixed time t.

• Even if we limit ourselves to processes Z_t^i and M_t^i which satisfy the Z-system (1.5) and the condition P[ΔM_t^i ≤ 1] = 1 for all i ∈ {1, ..., N}, we may in fact still not have uniqueness of the solution.

Here is another example in which uniqueness fails. In the Z-system (1.5) with N interacting particles and a_{ij} = 1 for every pair of vertices (i, j), suppose we have:

$$Z_{t-}^1 = 1, \qquad Z_{t-}^i \in \Big( 1 - \frac{i\alpha}{N},\; 1 - \frac{(i-1)\alpha}{N} \Big) \;\forall i \in \{2, ..., N\}, \qquad M_{t-}^i = 0 \;\forall i.$$

Again, this configuration happens with positive probability. There are two possible solutions: in the first one only the first particle spikes, in the second one every particle of the system spikes. In particular, in the first solution we have

$$M_t^1 = 1, \qquad M_t^i = 0 \;\forall i \ne 1,$$

and the processes Z_t^i are such that the Z-system (1.5) is satisfied; regarding the second solution, we have

$$M_t^i = 1 \;\forall i$$

and, again, Z_t^i such that the Z-system (1.5) is satisfied.

The two solutions we have described are conceptually different. In the first solution the first particle reaches the threshold potential without receiving kicks from other particles, that is, by diffusion only; the spike of the first particle is propagated to all the other particles and generates a kick which is, however, not sufficient to make the other particles spike. So we have only one particle spiking. In the second solution, every particle spikes and this is possible because the kicks generated by all the spikes accumulate and make the potential of every particle exceed 1. We will call this behaviour non-physical.

Physical solutions

As shown in the previous section, there are two issues to be addressed to arrive at a reasonable concept of physical solution: first, we need the condition P[ΔM_t^i ≤ 1] = 1 for all i, so that no particle spikes more than once at a fixed time t; secondly, we need to determine a natural ordering of the spiking sequence.

Consider the X-particle system and fix a time instant t. Define:

$$\Gamma_{t,0} := \big\{ i \in \{1, ..., N\} \;:\; X_{t-}^i = 1 \big\}.$$

If Γ_{t,0} is not empty, then we say that t is a spiking time, and Γ_{t,0} represents all the particles that spike at time t by diffusion only. If Γ_{t,k} ≠ ∅, we define the neighbourhood of node i as the set V(i) := {j ∈ {1, ..., N} : a_{ij} = 1} (so that |V(i)| = d(i)) and we define:

$$\Gamma_{t,k+1} := \Big\{ i \in \{1, ..., N\} \setminus \big( \Gamma_{t,0} \cup ... \cup \Gamma_{t,k} \big) \;:\; X_{t-}^i + \frac{\alpha}{d(i)} \big| \big( \Gamma_{t,0} \cup ... \cup \Gamma_{t,k} \big) \cap V(i) \big| \ge 1 \Big\},$$

which is the set of particles whose potential reaches the threshold only after having received all the kicks of the particles in Γ_{t,0} ∪ ... ∪ Γ_{t,k}. For sure, there is an integer k̄ such that Γ_{t,k̄} = ∅: in fact, there are at most N particles and the sets Γ_{t,k} form a disjoint sequence of subsets of {1, ..., N}.

The sets Γ_{t,k}, k = 0, 1, ..., k̄, are called the time axis cascade at time t and they give the ordering of spiking at time t: first, the particles in Γ_{t,0} spike by diffusion, then there is the spiking of the particles in Γ_{t,1}, then of those in Γ_{t,2}, and so on until there are no more particles spiking. The set of all the particles spiking at time t is

$$\Gamma_t := \bigcup_{k = 0, ..., \bar{k}} \Gamma_{t,k}.$$
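The cascade construction translates directly into an algorithm. The following sketch (the function name is our own, and it uses the numerically robust test X ≥ 1 in place of X = 1 for Γ_{t,0}) returns the layers Γ_{t,0}, Γ_{t,1}, ... given the left limits X_{t-}, the adjacency matrix and α:

```python
import numpy as np

def spiking_cascade(X_minus, A, alpha):
    """Compute the time axis cascade Gamma_{t,0}, Gamma_{t,1}, ...

    X_minus : left limits X_{t-}^i of the potentials at time t
    A       : 0/1 adjacency matrix (a_ij)
    Returns the list of layers; their union is the set Gamma_t of all the
    particles spiking at time t.
    """
    X = np.asarray(X_minus, dtype=float)
    d = np.maximum(A.sum(axis=1), 1)            # degrees d(i)
    layers = []
    spiked = np.zeros(len(X), dtype=bool)
    current = X >= 1.0                           # Gamma_{t,0}: spike by diffusion
    while current.any():
        layers.append(set(np.flatnonzero(current)))
        spiked |= current
        # kicks accumulated so far: (alpha/d(i)) * |(union of layers) cap V(i)|
        kicks = (alpha / d) * A[:, spiked].sum(axis=1)
        current = (X + kicks >= 1.0) & ~spiked   # next layer Gamma_{t,k+1}
    return layers
```

On a complete graph of 3 nodes with X_{t-} = (1.0, 0.8, 0.5) and α = 0.5, the cascade has three layers: particle 0 spikes by diffusion, its kick pushes particle 1 over the threshold, and the two accumulated kicks then push particle 2 over as well.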

The following theorem gives the desired result of existence and uniqueness of a physical solution for the X-particle system:


Theorem. There exists a unique solution (X^{i,N}, M^{i,N})_{i=1,...,N} of the X-system (1.3) such that:

• for all i ∈ {1, ..., N} and all t ≥ 0, P[ΔM_t^i ≤ 1] = 1;

• for every particle i and for each spiking time t ≥ 0, the X-system (1.3) evolves according to the rule:

$$X_t^i = X_{t-}^i + \frac{\alpha}{d(i)} \big| \Gamma_t \cap V(i) \big| \;\text{ and }\; M_t^i = M_{t-}^i \qquad \text{if } i \notin \Gamma_t;$$

$$X_t^i = X_{t-}^i + \frac{\alpha}{d(i)} \big| \Gamma_t \cap V(i) \big| - 1 \;\text{ and }\; M_t^i = M_{t-}^i + 1 \qquad \text{if } i \in \Gamma_t.$$

This solution will be called physical solution.

Proof. We first prove uniqueness. If Γ_t = ∅ then, whichever particle i we consider, t is not a spiking time for particle i; so the interaction terms M_t^i are locally constant in t and the stochastic differential equation

$$X_t^i = X_0^i + \int_0^t b\big(X_s^i\big)\,ds + \frac{\alpha}{d(i)} \sum_{j=1}^N a_{ij} M_t^j + W_t^i - M_t^i$$

reduces to a diffusion SDE with a Lipschitz-continuous drift term and a standard Brownian motion as diffusion term. We know that the solution of such an SDE is unique, so that between spiking times we have no uniqueness problem. When t is a spiking time for some particle, that is Γ_t ≠ ∅, we have specified a fixed procedure to update the potentials of the particles. So if a solution exists, that solution must be unique.

To prove existence it is convenient to consider the Z-system (1.5), which is equivalent to the X-system (1.3). For every particle i, define the new process (Y_t^{1,i})_{t≥0} which satisfies:

$$Y_t^{1,i} = Z_0^i + \int_0^t b\big(Y_s^{1,i}\big)\,ds + W_t^i, \qquad t \ge 0.$$

If, for every particle i, we define

$$\tau^{1,i} := \inf\big\{ t \ge 0 \;:\; Y_t^{1,i} \ge 1 \big\},$$

then we have τ^{1,i} ∈ (0, +∞) and so

$$\tau^1 := \inf_{i=1,...,N} \tau^{1,i} \in (0, +\infty).$$

Then we set

$$Z_t^i = Y_t^{1,i}, \qquad M_t^i = 0, \qquad \text{for } 0 \le t < \tau^1 \text{ and for every } i \in \{1, ..., N\}.$$

Considering how τ^1 has been defined, there is a particle i_1 such that τ^{1,i_1} = τ^1; so the set Γ_{τ^1} of all the particles spiking at time τ^1, both by diffusion and


not, is not empty. Then, according to the way the X-system (1.3) must evolve, we also set:

$$Z_{\tau^1}^i = Y_{\tau^1}^{1,i} + \frac{\alpha}{d(i)} \big| \Gamma_{\tau^1} \cap V(i) \big|, \qquad M_{\tau^1}^i = \begin{cases} 1, & \text{if } i \in \Gamma_{\tau^1} \\ 0, & \text{if } i \notin \Gamma_{\tau^1} \end{cases} \qquad \forall i \in \{1, ..., N\}.$$

In particular, if i ∈ Γ_{τ^1} then Z_{τ^1}^i = Y_{τ^1}^{1,i} + (α/d(i))|Γ_{τ^1} ∩ V(i)| ≥ 1 and Z_{τ^1}^i ≤ Y_{τ^1}^{1,i} + α < 2, so that ⌊(sup_{s∈[0,τ^1]} Z_s^i)_+⌋ = 1. Conversely, if i ∉ Γ_{τ^1} then Z_{τ^1}^i < 1, so that ⌊(sup_{s∈[0,τ^1]} Z_s^i)_+⌋ = 0. We conclude that, up to time τ^1, we have a solution of the Z-system (1.5) which satisfies the conditions expressed by the theorem.

We can carry this procedure forward by the induction principle: assuming we have defined the stopping time τ^{k-1} and the processes Z^i and M^i (for each particle i) up to time τ^{k-1}, we define the process Y_t^{k,i} as the solution of the SDE:

$$Y_t^{k,i} = Z_{\tau^{k-1}}^i + \int_{\tau^{k-1}}^t b\big(Y_s^{k,i} - M_{\tau^{k-1}}^i\big)\,ds + \big(W_t^i - W_{\tau^{k-1}}^i\big).$$

Then we set:

• τ^{k,i} = inf{t ≥ τ^{k-1} : Y_t^{k,i} ≥ k} and τ^k = inf_{i=1,...,N} τ^{k,i}; we again observe that the stopping times τ^{k,i} are finite and that τ^{k,i} > τ^{k-1} (almost surely);

• Γ_{τ^k} = the set of all particles spiking at time τ^k, both by diffusion and not;

• Z_t^i = Y_t^{k,i} and M_t^i = M_{τ^{k-1}}^i when t ∈ (τ^{k-1}, τ^k);

• Z_{τ^k}^i = Y_{τ^k}^{k,i} + (α/d(i))|Γ_{τ^k} ∩ V(i)|, with M_{τ^k}^i = M_{τ^{k-1}}^i + 1 if i ∈ Γ_{τ^k} and M_{τ^k}^i = M_{τ^{k-1}}^i if i ∉ Γ_{τ^k}.

As a result we get an entire sequence of stopping times (τ^k)_{k≥0} and of sets (Γ_{τ^k})_{k≥0} (setting also τ^0 = 0 and Γ_{τ^0} = ∅ for convenience).

It holds that, for every particle i,

$$Z_t^i = Z_0^i + \int_{\tau^0}^{\tau^1} b\big(Z_s^i - M_{\tau^0}^i\big)\,ds + ... + \int_{\tau^{k-1}}^{\tau^k} b\big(Z_s^i - M_{\tau^{k-1}}^i\big)\,ds + \int_{\tau^k}^{t} b\big(Z_s^i - M_{\tau^k}^i\big)\,ds + \frac{\alpha}{d(i)} \sum_{h=1}^{k} \big| \Gamma_{\tau^h} \cap V(i) \big| + W_t^i, \qquad \text{when } \tau^k < t < \tau^{k+1}, \text{ for every } k,$$

so that in general, for every particle,

$$Z_t^i = Z_0^i + \int_0^t b\big(Z_s^i - M_s^i\big)\,ds + \frac{\alpha}{d(i)} \sum_{j=1}^N a_{ij} M_t^j + W_t^i, \qquad \text{if } t < \sup_{k \in \mathbb{N}} (\tau^k), \tag{1.6}$$


as desired. With such a construction we have that

$$M_t^i = \Big\lfloor \Big( \sup_{s \in [0,t]} Z_s^i \Big)_+ \Big\rfloor \qquad \forall i \in \{1, ..., N\},\; \forall t < \sup_{k \in \mathbb{N}} (\tau^k),$$

and we conclude that we have a solution of the Z-system (1.5) (or equivalently of the X-system (1.3)) which satisfies the conditions required above on the time interval [0, sup_{k∈N}(τ^k)).

The last part of the proof shows that τ^k → +∞ as k → +∞. Observing that M_t^i ≤ sup_{s∈[0,t]} |Z_s^i| and that |b(Z_s^i − M_s^i)| ≤ C(1 + sup_{r∈[0,s]} |Z_r^i|) for some constant C, we get:

$$d(i) \sup_{s \in [0,t]} |Z_s^i| \le d(i)\,|Z_0^i| + C\,d(i) \int_0^t \Big( 1 + \sup_{r \in [0,s]} |Z_r^i| \Big)\,ds + \alpha \sum_{j=1}^N a_{ij} \Big( \sup_{s \in [0,t]} |Z_s^j| \Big) + d(i) \sup_{s \in [0,t]} |W_s^i|, \tag{1.7}$$

and then, dividing by N and taking the empirical mean over i ∈ {1, ..., N}, we have that

$$(1 - \alpha)\, \frac{1}{N} \sum_{i=1}^N \frac{d(i)}{N} \sup_{s \in [0,t]} |Z_s^i| \le \frac{1}{N} \sum_{i=1}^N \frac{d(i)}{N} \Big( |Z_0^i| + \sup_{s \in [0,t]} |W_s^i| \Big) + C \int_0^t \Big( 1 + \frac{1}{N} \sum_{i=1}^N \frac{d(i)}{N} \sup_{r \in [0,s]} |Z_r^i| \Big)\,ds,$$

so that, by Gronwall's lemma:

$$\frac{1}{N} \sum_{i=1}^N \frac{d(i)}{N} \sup_{s \in [0,t]} |Z_s^i| \le C \exp(Ct) \Big[ t + \frac{1}{N} \sum_{i=1}^N \frac{d(i)}{N} \Big( |Z_0^i| + \sup_{s \in [0,t]} |W_s^i| \Big) \Big], \tag{1.8}$$
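For reference, the form of Gronwall's lemma invoked here, and again later for (1.11), is the standard integral one (not restated in the thesis):

```latex
% Gronwall's lemma, integral form.
% If u \ge 0 satisfies, for a nondecreasing function a and a constant C \ge 0,
%   u(t) \le a(t) + C \int_0^t u(s)\,ds \qquad \forall t \in [0, T],
% then
%   u(t) \le a(t)\, e^{C t} \qquad \forall t \in [0, T].
```

Here it is applied with u(t) the weighted empirical mean of the running suprema and a(t) the sum of the initial-data, noise and Ct terms; enlarging the constant C then yields the stated form of the bound.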

possibly increasing the value of the constant C. We already know that M_t^i = ⌊(sup_{s∈[0,t]} Z_s^i)_+⌋, so that the same bound also holds for (1/N) Σ_{i=1}^N (d(i)/N) M_t^i. It follows that

$$M_t^i \le N^2\, C \exp(Ct) \Big[ t + \frac{1}{N} \sum_{j=1}^N \frac{d(j)}{N} \Big( |Z_0^j| + \sup_{s \in [0,t]} |W_s^j| \Big) \Big].$$

When N is fixed, we then deduce that the (M_t^i)_i are all bounded (with a bound dependent on N itself), which implies that there cannot exist a time t > 0 such that the sequence (τ^k)_k accumulates in [0, t).


Condition (1.8) makes it possible, as just shown, to prove existence and uniqueness of a physical solution of the particle system; however, the choice of specific graph structures can bring us further. In particular, a desirable property for us (which will be used in later chapters) would be a bound on each term sup_{s∈[0,t]} |Z_s^i| which is not explicitly dependent on the number of particles N. A condition which leads to that result is

$$\frac{\alpha}{d(i)} \sum_{j=1}^N a_{ij} M_t^j \le \frac{\beta}{N} \sum_{j=1}^N M_t^j, \tag{1.9}$$

for some constant β ≥ α and for every possible configuration of the processes (M^i)_i, which means without dependence on the randomness of the Brownian motions (W^i)_i. A necessary and sufficient condition for this inequality (independently of the Brownian motions) is that

$$d(i) \ge \frac{\alpha}{\beta}\,N.$$

In fact, M_t^i = 1 ∀i ∈ {1, ..., N} with positive probability (where the probability is the one linked with the randomness of the Brownian motions), so that taking M^i = 1 ∀i in (1.9) gives α ≤ β; moreover, having fixed some i ∈ {1, ..., N}, the configuration with M_t^i = 1, M_t^j = 0 ∀j ≠ i has again positive probability and we get d(i) ≥ (α/β)N. Conversely, if d(i) ≥ (α/β)N for some β ≥ α, then (1.9) holds.
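The degree condition can be checked mechanically on a given graph. A minimal sketch (the helper name is our own):

```python
import numpy as np

def kick_domination_holds(A, alpha, beta):
    """Check the condition d(i) >= (alpha/beta) * N of Section 1.1, which is
    necessary and sufficient for inequality (1.9) to hold for every spike
    configuration, independently of the Brownian motions."""
    N = A.shape[0]
    d = A.sum(axis=1)
    return bool((d >= (alpha / beta) * N).all())
```

For instance, a graph where every node has degree N satisfies the condition with β = α, while a sparse ring (degree 2) fails it for α close to β once N is moderately large.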

Assume now d(i) ≥ (α/β)N for every i ∈ {1, ..., N}, for some constant β ≥ α. The following inequality holds:

$$\frac{\alpha}{d(i)} \sum_{j=1}^N a_{ij} \Big( \sup_{s \in [0,t]} |Z_s^j| \Big) \le \frac{\beta^2}{\alpha N} \sum_{j=1}^N \frac{d(j)}{N} \Big( \sup_{s \in [0,t]} |Z_s^j| \Big),$$

so that we may rewrite (1.7) as:

$$\sup_{s \in [0,t]} |Z_s^i| \le |Z_0^i| + C \int_0^t \Big( 1 + \sup_{r \in [0,s]} |Z_r^i| \Big)\,ds + \frac{\beta^2}{\alpha N} \sum_{j=1}^N \frac{d(j)}{N} \Big( \sup_{s \in [0,t]} |Z_s^j| \Big) + \sup_{s \in [0,t]} |W_s^i|.$$

Using (1.8), we now have that:

$$\sup_{s \in [0,t]} |Z_s^i| \le |Z_0^i| + C \int_0^t \sup_{r \in [0,s]} |Z_r^i|\,ds + \frac{\beta^2}{\alpha}\, C e^{Ct} \Big[ t + \frac{1}{N} \sum_{j=1}^N \Big( |Z_0^j| + \sup_{s \in [0,t]} |W_s^j| \Big) \Big] + \sup_{s \in [0,t]} |W_s^i|, \tag{1.10}$$

possibly increasing the value of the constant C. Using Gronwall's lemma again:

$$\sup_{s \in [0,t]} |Z_s^i| \le e^{Ct} \Big[ |Z_0^i| + \sup_{s \in [0,t]} |W_s^i| + \frac{\beta^2}{\alpha}\, C e^{Ct} \Big( t + \frac{1}{N} \sum_{j=1}^N \Big( |Z_0^j| + \sup_{s \in [0,t]} |W_s^j| \Big) \Big) \Big]. \tag{1.11}$$


Exactly as before, the same bound holds for M_t^i and so there exists a constant C_T > 0 such that:

$$\sup_{i \in \{1,...,N\}} M_t^i \le C_T \Big( 1 + \sup_{i \in \{1,...,N\}} |Z_0^i| + \sup_{i \in \{1,...,N\}} \sup_{s \in [0,T]} |W_s^i| \Big).$$

We observe that the condition min_{i=1,...,N} (d(i)/N) ≥ α/β is satisfied when we take a complete graph: in that case, in fact, d(i) = N for every node i.

1.2 Mean-field equation with delayed interaction

There is another possible description of the behaviour of the neurons in the brain: we here introduce a mean-field equation whose solution describes the 'mean' behaviour of the particles in the brain. In this section, we suppose that the spike of a neuron is felt by the other neurons with a certain time delay. This follows what has been done by Delarue, Inglis, Rubenthaler and Tanré in [6].

Preliminaries

Let T > 0 and take a function e ∈ C^1[0, T]. Consider a function b and a random variable X_0 satisfying the standard assumptions of Section 1.1. Then the stochastic differential equation

$$X_t^e = X_0 + \int_0^t b\big(X_s^e\big)\,ds + \alpha\,e(t) + W_t - M_t^e, \qquad t \in [0, T], \tag{1.12}$$

where

$$M_t^e = \sum_{k \ge 1} \mathbf{1}_{[0,t]}\big(\tau_k^e\big), \qquad \tau_k^e = \begin{cases} 0, & \text{if } k = 0 \\ \inf\big\{ t > \tau_{k-1}^e \;:\; X_{t-}^e \ge 1 \big\}, & \text{if } k \ge 1, \end{cases}$$

has a unique strong solution defined on the entire interval [0, T]: in fact, this solution may be constructed iteratively by joining the solutions obtained on the intervals [τ_k^e, τ_{k+1}^e); on these intervals a strong solution is guaranteed to exist and, to have a solution defined on the entire [0, T], all that remains is to show that the stopping times (τ_k^e)_{k≥0} never accumulate in finite time. This is true: if (τ_{k+1}^e − τ_k^e) → 0 as k → +∞, then

$$1 \le X_{\tau_{k+1}^e -}^e - X_{\tau_k^e}^e = \int_{\tau_k^e}^{\tau_{k+1}^e} b\big(X_s^e\big)\,ds + \alpha\big( e(\tau_{k+1}^e) - e(\tau_k^e) \big) + \big( W_{\tau_{k+1}^e} - W_{\tau_k^e} \big) \xrightarrow[k \to +\infty]{} 0,$$

and this is a contradiction.

Let T, A > 0 be real numbers. We now define the space

$$\mathcal{L}(T, A) := \Big\{ e \in C^1[0, T] \;:\; e(0) = 0,\; e(t) \le e(s)\; \forall t \le s,\; \sup_{0 \le t \le T} e'(t) \le A \Big\}$$


and the map

$$\Gamma(e)(t) := \mathbb{E}\big[ M_t^e \big].$$

Finally, we use the symbol (X_t^{e \# s})_t to refer to the solution of the SDE (1.12) started at time s, that is, for t ∈ [0, T − s]:

$$X_t^{e \# s} = X_0^{e \# s} + \int_0^t b\big(X_r^{e \# s}\big)\,dr + \alpha\big( e(t+s) - e(s) \big) + \big( W_{t+s} - W_s \big) - \big( M_{t+s}^e - M_s^e \big).$$

The following results will be proved later in greater generality.

Lemma 1.1. If the function b and the random variable X_0 satisfy the standard assumptions of Section 1.1, it holds that:

(1) the law of the diffusion X^e killed at the threshold is absolutely continuous with respect to the Lebesgue measure;

(2) denoting the density of the process X^e killed at the threshold by

$$p^e(t, y) := \frac{1}{dy}\, \mathbb{P}\big[ X_t^e \in dy,\; t < \tau_1^e \big], \qquad t \in [0, T],\; y \le 1,$$

p^e(t, y) is continuous in (t, y) and continuously differentiable in y on the set (0, T] × (−∞, 1];

(3) almost everywhere on (0, T] × (−∞, 1), p^e satisfies the Fokker-Planck equation

$$\partial_t p^e(t, y) + \partial_y \Big[ \big( b(y) + \alpha e'(t) \big) p^e(t, y) \Big] - \frac{1}{2} \partial_{yy}^2 p^e(t, y) = 0$$

with the Dirichlet boundary condition p^e(t, 1) = 0 and the measure-valued initial condition p^e(0, y)\,dy = P[X_0 ∈ dy];

(4) the first hitting time τ_1^e has a density on [0, T] given by

$$\frac{d}{dt}\, \mathbb{P}\big[ \tau_1^e \le t \big] = -\frac{1}{2} \partial_y p^e(t, 1), \qquad t \in [0, T].$$

The assertions of Lemma 1.1 make the operation of differentiating p^e in the variable y meaningful, so that we can write the following:

Lemma 1.2. If X_0 ∈ (−∞, 1 − ε_0] for some ε_0 > 0, then for each e ∈ L(T, A) the map [0, T] ∋ t ↦ Γ(e)(t) is continuously differentiable and

$$\frac{d}{dt}\, \Gamma(e)(t) = -\int_0^t \frac{1}{2} \partial_y p_\delta^{(0,s)}(t - s, 1)\, \frac{d}{ds}\big[ \Gamma(e)(s) \big]\,ds - \frac{1}{2} \partial_y p_\delta(t, 1),$$

where p_δ is the density of the process X^δ killed at 1 with initial condition X_0, and p_δ^{(0,s)} is the density of the killed process restarted from the resting value 0 at time s.


System with delays

We consider the following stochastic differential equation:

$$X_t^\delta = X_0 + \int_0^t b\big(X_s^\delta\big)\,ds + \alpha\,e_\delta(t) + W_t - M_t^\delta, \qquad t \ge 0, \tag{1.13}$$

where, as before, (W_t)_{t≥0} is a standard Brownian motion and the function b and the random variable X_0 satisfy the standard assumptions. The stochastic process (X_t^δ)_{t≥0} represents the potential of a generic neuron and, as before, (M_t^δ)_{t≥0} counts the number of times that X^δ reaches the threshold 1, so that we write:

$$M_t^\delta = \sum_{k=1}^{+\infty} \mathbf{1}_{[0,t]}\big(\tau_k^\delta\big), \tag{1.14}$$

where the stopping times (τ_k^δ)_{k∈N} are defined by

$$\tau_0^\delta = 0, \qquad \tau_{k+1}^\delta = \inf\big\{ t > \tau_k^\delta \;:\; X_{t-}^\delta \ge 1 \big\}, \tag{1.15}$$

with

$$e_\delta(t) := \begin{cases} 0, & \text{if } t \le \delta \\ \mathbb{E}\big[ M_{t-\delta}^\delta \big], & \text{if } t > \delta. \end{cases}$$

In this context, each particle spikes after having reached the threshold exactly as before; however, the spike takes some time δ > 0 to reach the other particles. This makes the situation mathematically more tractable.
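Because the interaction e_δ(t) = E[M_{t−δ}^δ] only involves the law of the solution δ units of time in the past, the delayed system lends itself to a straightforward Monte Carlo scheme that steps forward in time, estimating the expectation by an empirical mean over simulated paths. The following sketch is our own (names and discretization choices included); it assumes the time step is much smaller than δ and lets each path spike at most once per step:

```python
import numpy as np

def simulate_delayed(b, alpha, delta, T, dt, n_paths, X0, rng):
    """Monte Carlo sketch of the delayed mean-field equation (1.13):
    the delayed input e_delta(t) = E[M_{t-delta}] is read off the empirical
    mean of the already-simulated past of the paths."""
    n_steps = int(T / dt)
    lag = int(delta / dt)
    X = np.full(n_paths, float(X0))
    M = np.zeros(n_paths)
    mean_M = np.zeros(n_steps + 1)     # empirical E[M_t] on the time grid
    e_prev = 0.0
    for n in range(n_steps):
        # e_delta at the current time: delayed empirical mean (0 before delta)
        e_now = mean_M[n - lag] if n >= lag else 0.0
        X += b(X) * dt + alpha * (e_now - e_prev) \
             + np.sqrt(dt) * rng.standard_normal(n_paths)
        e_prev = e_now
        spikes = X >= 1.0              # spike and reset: -Delta M^delta
        M[spikes] += 1
        X[spikes] -= 1.0
        mean_M[n + 1] = M.mean()
    return mean_M
```

Since M is nondecreasing along every path, the empirical mean returned by the scheme is nondecreasing as well, consistent with the monotonicity of e_δ.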

We will sometimes refer to equations (1.13) and (1.14) as the delayed system. The main result here is the following:

Theorem. For each T > 0 and α ∈ (0, 1), there exists a unique pair of càdlàg stochastic processes (X_t^δ, M_t^δ)_{t∈[0,T]} such that:

• Mδ has integrable marginal distributions;

• Xδ and Mδ satisfy (1.13) and (1.14);

• the resulting map eδ is continuously differentiable.

Proof. Step 1: solution on [0, δ]. For t ≤ δ, equation (1.13) becomes

$$X_t^\delta = X_0 + \int_0^t b\big(X_s^\delta\big)\,ds + W_t - M_t^\delta,$$

which has a unique strong solution. By Lemma 1.2, the function [0, δ] ∋ t ↦ E[M_t^δ] is continuously differentiable.

Step 2: solution on [0, 2δ]. In this case, equation (1.13) becomes

$$X_t^\delta = X_0 + \int_0^t b\big(X_s^\delta\big)\,ds + \alpha\,e_\delta(t) + W_t - M_t^\delta,$$

with e_δ(t) = 0 on [0, δ] and e_δ(t) = E[M_{t−δ}^δ] on [δ, 2δ]. We want to show that the function

e_δ is continuously differentiable on [0, 2δ]. This is obviously the case on [0, δ], and it is also the case on [δ, 2δ] by Step 1. It only remains to show differentiability at t = δ. With this in mind, we note that Lemma 1.2 implies, for each s ∈ [0, δ]:

$$\frac{d}{ds}\, \mathbb{E}\big[ M_s^\delta \big] = -\frac{1}{2} \int_0^s \partial_y p_0^\delta(s - r, 1)\, \frac{d}{dr}\, \mathbb{E}\big[ M_r^\delta \big]\,dr - \frac{1}{2} \partial_y p_{X_0}^\delta(s, 1).$$

At this point,

$$\lim_{t \to \delta^+} e_\delta'(t) = \lim_{s \to 0^+} \frac{d}{ds}\, \mathbb{E}\big[ M_s^\delta \big] = -\frac{1}{2} \Big( \lim_{s \to 0^+} \partial_y p_{X_0}^\delta(s, 1) \Big) = 0,$$

because of the assumptions on the random variable X_0.

We conclude that, also on [0, 2δ], there is a unique strong solution. Again from Lemma 1.2, the map [0, 2δ] ∋ t ↦ E[M_t^δ] is continuously differentiable.

Step 3: solution on [0, 3δ]. We can prove that the map [0, 3δ] 3 t 7→ eδ(t) is continuously

differentiable. This is true in the interval [δ, 3δ] by using the results of Step 2, and

d dt

t=δ+ eδ(t) = 0. This completes Step 3.

Global solution. Let T > 0. By iterating the procedure outlined above for a finite number of steps, we can extend the solution to the entire interval [0, T ] and also have that

[0, T ] 3 t 7→ E[Mtδ] is continuously differentiable. 
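The interval-by-interval construction in the proof translates directly into a numerical scheme: since the interaction $e_\delta(t) = \mathbb{E}[M^\delta_{t-\delta}]$ only involves the past, a Monte Carlo Euler discretization can march forward in time, approximating the expectation by an empirical mean over many paths. A minimal sketch, with drift $b(x) = -x$ and values of $\alpha$, $\delta$ and the step size chosen purely for illustration (none of these choices come from the thesis):

```python
import numpy as np

def simulate_delayed_system(n_paths=5000, T=2.0, delta=0.5, dt=0.002,
                            alpha=0.3, b=lambda x: -x, x0=0.0, seed=0):
    """Monte Carlo Euler scheme for the delayed system (1.13)-(1.14):
    at each step the interaction kick is alpha times the increment of
    e_delta(t) = E[M_{t-delta}], which only involves the (known) past."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    lag = int(round(delta / dt))
    x = np.full(n_paths, x0)
    m = np.zeros(n_paths)              # spike counters M^delta per path
    e = np.zeros(n_steps + 1)          # e[k] ~ E[M_{t_k}]
    for k in range(n_steps):
        # delayed mean-field kick over (t_k, t_{k+1}]
        kick = alpha * (e[k - lag + 1] - e[k - lag]) if k >= lag else 0.0
        x = x + b(x) * dt + kick + np.sqrt(dt) * rng.standard_normal(n_paths)
        spiked = x >= 1.0
        m[spiked] += 1.0
        x[spiked] = 0.0                # reset to zero after a spike
        e[k + 1] = m.mean()
    return e

e = simulate_delayed_system()
```

The resulting map $t \mapsto e(t)$ is non-decreasing by construction, mirroring the monotonicity of $\mathbb{E}[M_t^\delta]$.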


Chapter 2

Mean-field limit equation and convergence of the particle system

We here introduce a stochastic differential equation which is the limit, in a sense to be made precise, of the particle system previously introduced. Delarue, Inglis, Rubenthaler and Tanré have already addressed this problem for complete graphs in [6]; we generalize their results to a wider class of graphs.

In the previous chapter, we introduced a stochastic particle system describing the evolution of the potentials of N interacting neurons in a neuronal network. It can be shown that this particle system converges, in a suitable sense, to the solution of a certain stochastic differential equation, which we now introduce.

Let us consider the following mean-field equation:
\[ X_t = X_0 + \int_0^t b(X_s)\,ds + \alpha\,\mathbb{E}[M_t] + W_t - M_t, \qquad t \ge 0, \tag{2.1} \]
where
\[ M_t = \sum_{k\ge 1} \mathbf{1}_{[0,t]}(\tau_k), \qquad \tau_k = \begin{cases} 0, & k = 0,\\[2pt] \inf\big\{ t > \tau_{k-1} : X_{t-} + \alpha\,\Delta\mathbb{E}[M_t] \ge 1 \big\}, & k \ge 1, \end{cases} \tag{2.2} \]
and $\Delta\mathbb{E}[M_t] = \mathbb{E}[M_t] - \mathbb{E}[M_{t-}]$. We also assume that the function $b$ and the random variable $X_0$ satisfy the standard assumptions.

This equation is similar to the one considered before, except for the term $\mathbb{E}[M_t]$; this term, which depends on the distribution of $M_t$, makes the computation of a solution much more difficult. In particular, the equation is of McKean–Vlasov type, with a dependence not only on the distribution of the solution but also on the distribution of the hitting times of the threshold by the solution.


Observe that the function $e(t) := \mathbb{E}[M_t]$ is càdlàg, since the process $M$ is.

The first step will be to define an appropriate concept of solution, as was done for the particle system in Chapter 1. Note that, if $t$ is a spiking time (meaning that some stopping time $\tau_k$ equals $t$ with positive probability), then there are two possible behaviours:

• the function $e$ may be continuous at $t$, in which case $X$ is also continuous at $t$, $X_{t-} = 1$ and $X_t$ is reset to 0;

• the function $e$ may be discontinuous at $t$, so that $\Delta e(t) > 0$.

In the second case, $X_{t-}$ might not yet have reached the threshold: at time $t$, however, a kick of magnitude $\alpha\,\Delta e(t) > 0$ is felt, which is sufficient to make the potential exceed 1. We need to be careful about the allowed intensity of this kick: if the kick is too big, then two or more spiking times may coincide, which would imply the existence of a time $t$ with more than one spike. As in the particle system, we want to avoid these situations, which we call non-physical. It holds that:

Proposition 2.1. If the pair of càdlàg adapted processes $(X_t, M_t)_{t\ge 0}$ is such that:

• $(M_t)_{t\ge 0}$ has integrable marginal distributions;

• for all $t \ge 0$, $\mathbb{P}[\Delta M_t \le 1] = 1$;

• $(X_t)_t$ and $(M_t)_t$ satisfy (2.1) and (2.2);

then, for any time $t \ge 0$:
\[ \Delta e(t) = \mathbb{P}\big[ X_{t-} + \alpha\,\Delta e(t) \ge 1 \big]. \tag{2.3} \]

Proof. The probability of a spike at time $t$ is $\mathbb{P}[\Delta M_t = 1] = \mathbb{E}[\Delta M_t] = \Delta e(t)$, since $\mathbb{P}[\Delta M_t \le 1] = 1$ by hypothesis. It is also clear that the probability of observing a spike is $\mathbb{P}[\Delta M_t = 1] = \mathbb{P}[X_{t-} + \alpha\,\Delta e(t) \ge 1]$. □

The conditions expressed in Proposition 2.1 give a concept of solution which does not admit multiple spikes at the same time. We nevertheless observe that equation (2.3) cannot uniquely characterize the jumps of the function $e$: if, for example, $X_{t-}$ has uniform distribution over $[1-\alpha, 1]$, then every value of $\Delta e(t) \in [0,1]$ satisfies equation (2.3). However, in an effort to give suitable conditions yielding a 'physical solution' behaviour similar to that of the particle system, the following is a reasonable characterization of $\Delta e(t)$:
\[ \Delta e(t) = \sup\big\{ \eta \ge 0 : \forall \eta' \le \eta,\ \mathbb{P}[X_{t-} + \alpha\eta' \ge 1] \ge \eta' \big\} = \inf\big\{ \eta \ge 0 : \mathbb{P}[X_{t-} + \alpha\eta \ge 1] < \eta \big\}. \]

Definition 2.1. We say that a pair of càdlàg adapted processes $(X_t, M_t)_{t\ge 0}$ is a physical solution of (2.1) if:

1. $(M_t)_{t\ge 0}$ has integrable marginal distributions;

2. for all $t \ge 0$, $\mathbb{P}[\Delta M_t \le 1] = 1$;

3. $(X_t)_t$ and $(M_t)_t$ satisfy (2.1) and (2.2);

4. the càdlàg function $e : [0, +\infty) \ni t \mapsto \mathbb{E}[M_t]$ satisfies:
\[ \Delta e(t) = \inf\big\{ \eta \ge 0 : \mathbb{P}[X_{t-} + \beta\eta \ge 1] \le \eta \big\}. \]
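For an empirical (atomic) law of $X_{t-}$, the map $\eta \mapsto \mathbb{P}[X_{t-} + \beta\eta \ge 1]$ is a right-continuous step function, so the infimum in the last condition can be computed exactly by scanning its jump points. A small numerical sketch (the helper name and the sample data are ours, not from the thesis):

```python
def physical_jump(samples, beta):
    """Smallest eta >= 0 with P[X + beta*eta >= 1] <= eta, for the
    empirical law of `samples`: the physical cascade size Delta e(t)."""
    n = len(samples)
    # levels[i] = smallest eta at which sample i joins the cascade
    levels = sorted(max((1.0 - x) / beta, 0.0) for x in samples)
    for k in range(n + 1):
        lo = 0.0 if k == 0 else levels[k - 1]    # P = k/n on [lo, hi)
        hi = levels[k] if k < n else float("inf")
        eta = max(lo, k / n)                     # k/n <= eta holds at eta
        if eta < hi:
            return eta
    return 1.0  # full cascade

# 30% of particles exactly at the threshold, 30% within reach, 40% far below:
x = [1.0] * 3 + [0.8] * 3 + [0.0] * 4
print(physical_jump(x, beta=1.0))  # prints 0.6
```

In the example, the initial jump of size 0.3 lowers the effective threshold enough to drag in the particles at 0.8, and the cascade stabilizes at 0.6, the smallest self-consistent value.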

To get an intuition of why we impose the last condition in Definition 2.1, observe that a similar result holds for the particle system. In fact:

Proposition. Consider a physical solution of the X-system (1.3) with N particles, as given by Theorem 1.1. Defining
\[ \bar e_N(t) := \frac1N \sum_{i=1}^N M_t^{i,N}, \qquad t \ge 0, \tag{2.4} \]
then, for all $t \ge 0$,
\[ N\,\Delta\bar e_N(t) = \sum_{i=1}^N \big( M_t^{i,N} - M_{t-}^{i,N} \big) = \inf\Big\{ k \in \{0,\dots,N\} : \sum_{i=1}^N \mathbf{1}_{\{X_{t-}^{i,N} \ge 1 - \frac{\alpha k}{d(i)}\}} \le k \Big\}. \tag{2.5} \]

Proof. The amount $N\,\Delta\bar e_N(t) = \sum_{i=1}^N (M_t^i - M_{t-}^i)$ measures the number of spikes at time $t$, which is equal to $|\Gamma_t|$, in the notation of Chapter 1. Recall that $|\Gamma_t| = \sum_{k=0}^{\bar k} |\Gamma_{t,k}|$, where $\bar k$ is the largest integer such that $\Gamma_{t,\bar k} \neq \emptyset$.

Fix $j \in \{1,\dots,\bar k\}$ and take $k \in \{|\Gamma_{t,0}\cup\dots\cup\Gamma_{t,j-1}|, \dots, |\Gamma_{t,0}\cup\dots\cup\Gamma_{t,j}| - 1\}$; otherwise take $j = 0$ and $k \in \{0,\dots,|\Gamma_{t,0}|-1\}$. It holds that:
\[ \sum_{i=1}^N \mathbf{1}_{\{X_{t-}^i \ge 1 - \frac{\alpha k}{d(i)}\}} \ge \sum_{i=1}^N \mathbf{1}_{\big\{X_{t-}^i \ge 1 - \alpha \frac{|(\Gamma_{t,0}\cup\dots\cup\Gamma_{t,j-1})\cap V(i)|}{d(i)}\big\}} = |\Gamma_{t,0}\cup\dots\cup\Gamma_{t,j}| > k, \]
so that:
\[ \inf\Big\{ k \in \{0,\dots,N\} : \sum_{i=1}^N \mathbf{1}_{\{X_{t-}^i \ge 1 - \frac{\alpha k}{d(i)}\}} \le k \Big\} \ge |\Gamma_t|. \]
On the other hand,
\[ \sum_{i=1}^N \mathbf{1}_{\big\{X_{t-}^i \ge 1 - \alpha\frac{|\Gamma_t\cap V(i)|}{d(i)}\big\}} \le |\Gamma_t| + \sum_{i\notin\Gamma_t} \mathbf{1}_{\big\{X_{t-}^i \ge 1 - \alpha\frac{|\Gamma_t\cap V(i)|}{d(i)}\big\}} = |\Gamma_t|. \]
□
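Formula (2.5) can be evaluated directly: scan $k = 0, 1, 2, \dots$ and stop at the first $k$ for which at most $k$ particles sit above the lowered threshold $1 - \alpha k / d(i)$. A minimal sketch (function name and data are illustrative, not from the thesis):

```python
def cascade_size(x_pre, degrees, alpha):
    """Number of simultaneous spikes at one instant, as in (2.5):
    inf{ k in {0,...,N} : #{ i : x_i >= 1 - alpha*k/d(i) } <= k }."""
    n = len(x_pre)
    for k in range(n + 1):
        hit = sum(1 for x, d in zip(x_pre, degrees)
                  if x >= 1.0 - alpha * k / d)
        if hit <= k:
            return k
    return n

# complete graph on 3 nodes (degree 2): particle 0 spikes, and its kick
# alpha/d(i) = 0.2 pushes particle 1 (at potential 0.9) over the threshold
print(cascade_size([1.0, 0.9, 0.5], degrees=[2, 2, 2], alpha=0.4))  # -> 2
```

The scan terminates at the smallest self-consistent cascade, exactly the 'physical' choice among the possible jump sizes.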


Finally, the following sufficient condition for a physical solution makes it easier to prove that pairs of processes $(X^i, M^i)$ form a physical solution in the sense of Definition 2.1.

Proposition 2.2 (Sufficient condition for a physical solution). Given a pair of càdlàg stochastic processes $(X_t, M_t)_{t\ge 0}$ satisfying conditions (1), (2) and (3) of Definition 2.1, if at any point of discontinuity of the function $e : [0,+\infty) \ni s \mapsto \mathbb{E}[M_s]$ we have
\[ \forall \eta \le \Delta e(t),\quad \mathbb{P}[X_{t-} \ge 1 - \beta\eta] \ge \eta, \quad \text{for some } \beta \ge \alpha, \tag{2.6} \]
then the pair $(X_t, M_t)_{t\ge 0}$ is a physical solution of (2.1) with coefficient $\beta$.

Proof. Consider a pair of processes $(X_t, M_t)_{t\ge 0}$ satisfying conditions 1, 2 and 3 of Definition 2.1. We get a physical solution if, for each $t \ge 0$, there exists a decreasing sequence $(\eta_n)_{n\ge 1}$ converging to $\Delta e(t)$ such that
\[ \mathbb{P}[X_{t-} + \beta\eta_n \ge 1] < \eta_n, \qquad \forall n \ge 1. \]
This fact, together with assumption (2.6), would imply condition 4 of Definition 2.1. By contradiction, let $t \ge 0$ be a fixed time such that there exists $\eta_0 > \Delta e(t)$ with the property that
\[ \forall \eta \in (\Delta e(t), \eta_0],\quad \mathbb{P}[X_{t-} \ge 1 - \beta\eta] \ge \eta. \]
By Proposition 2.1, $\Delta e(t) = \mathbb{P}[X_{t-} + \alpha\,\Delta e(t) \ge 1]$, so that
\[ \forall \eta \in (\Delta e(t), \eta_0],\quad \mathbb{P}[1 - \beta\eta \le X_{t-} < 1 - \alpha\,\Delta e(t)] = \mathbb{P}[X_{t-} \ge 1 - \beta\eta] - \mathbb{P}[X_{t-} + \alpha\,\Delta e(t) \ge 1] \ge \eta - \Delta e(t). \]
On the event $\{1 - \beta\eta \le X_{t-} < 1 - \alpha\,\Delta e(t)\}$ we have $\Delta M_t = 0$, so that $X_t = X_{t-} + \alpha\,\Delta e(t)$. Therefore, defining $\eta' := \eta - \Delta e(t)$ and $\eta_0'' := \eta_0 - \Delta e(t) > 0$, we can write, for all $\eta' \in (0, \eta_0'']$:
\[ \begin{aligned} \mathbb{P}[X_t \ge 1 - \beta\eta'] &\ge \mathbb{P}\big[ 1 > X_t \ge 1 - \beta\eta',\ X_{t-} + \alpha\,\Delta e(t) = X_t \big] \\ &= \mathbb{P}\big[ 1 - \beta\eta \le X_{t-} < 1 - \alpha\,\Delta e(t),\ X_{t-} + \alpha\,\Delta e(t) = X_t \big] \\ &= \mathbb{P}\big[ 1 - \beta\eta \le X_{t-} < 1 - \alpha\,\Delta e(t) \big] \ge \eta'. \end{aligned} \tag{2.7} \]

We will follow this strategy: show that $\liminf_{h\to 0}[e(t+h) - e(t)] > 0$, which contradicts the right-continuity of $e$. Take $h \in (0,1)$; then
\[ e(t+h) - e(t) = \mathbb{E}[M_{t+h} - M_t] \ge \mathbb{P}\big[ \exists s \in (t, t+h] : Y_{s-} \ge 1 \big], \]
where $(Y_s)_{s\in[t,t+h]}$ is the stochastic process such that
\[ Y_s = X_t + \int_t^s b(Y_r)\,dr + \alpha\big( e(s) - e(t) \big) + W_s - W_t. \]

Indeed, as long as $(X_s)_{s\in[t,t+h]}$ does not spike, it coincides with $(Y_s)_{s\in[t,t+h]}$, and so $\{\exists s \in (t,t+h] : Y_{s-} \ge 1\} \subseteq \{M_{t+h} - M_t \ge 1\}$. For some constant $C > 0$ we then have:
\[ e(t+h) - e(t) \ge \mathbb{P}\Big[ X_t - Ch\Big( 1 + \sup_{s\in[t,t+h]} |Y_s| \Big) + \sup_{s\in[t,t+h]} \big[ \alpha(e(s)-e(t)) + W_s - W_t \big] \ge 1 \Big]. \tag{2.8} \]
Applying Gronwall's lemma and choosing $h$ small enough that $e(s) - e(t) \le 1$:
\[ |Y_s| \le C\Big( 1 + |X_t| + \sup_{s\in[t,t+h]} |W_s - W_t| \Big), \]
which implies that $\mathbb{P}\big[ \sup_{s\in[t,t+h]} |Y_s| \ge 3C,\ |X_t| \le 1 \big] \le Ch$, possibly incrementing the constant $C$ but keeping it independent of $h$, $\alpha$ and $\beta$. Note that $X_t \ge 0$ implies $|X_t| \le 1$ here, so that $\mathbb{P}\big[ \sup_{s\in[t,t+h]} |Y_s| \ge 3C,\ X_t \ge 0 \big] \le Ch$. Using (2.8), and allowing the constant $C$ to be incremented from line to line:
\[ \begin{aligned} e(t+h) - e(t) &\ge \mathbb{P}\Big[ X_t - Ch(1+3C) + \sup_{s\in[t,t+h]} \big[ \alpha(e(s)-e(t)) + W_s - W_t \big] \ge 1,\ \sup_{s\in[t,t+h]} |Y_s| \le 3C \Big] \\ &\ge \mathbb{P}\Big[ X_t - Ch + \sup_{s\in[t,t+h]} \big[ \alpha(e(s)-e(t)) + W_s - W_t \big] \ge 1,\ X_t \ge 0,\ \sup_{s\in[t,t+h]} |Y_s| \le 3C \Big] \\ &\ge \mathbb{P}\Big[ X_t - Ch + \sup_{s\in[t,t+h]} \big[ \alpha(e(s)-e(t)) + W_s - W_t \big] \ge 1,\ X_t \ge 0 \Big] - \mathbb{P}\Big[ \sup_{s\in[t,t+h]} |Y_s| \ge 3C,\ X_t \ge 0 \Big] \\ &\ge \mathbb{P}\Big[ X_t - Ch + \sup_{s\in[t,t+h]} \big[ \alpha(e(s)-e(t)) + W_s - W_t \big] \ge 1,\ X_t \ge 0 \Big] - Ch. \end{aligned} \]

Assume now that there exists a constant $c \ge 0$ such that $\beta(e(r)-e(t)) \ge \alpha(e(r)-e(t)) \ge c\sqrt{r-t}$ for all $r \in [t, t+h]$; note that this is true for at least one constant, namely $c = 0$. Using the above bound, we get:
\[ e(t+h) - e(t) \ge \int_0^{+\infty} \mathbb{P}\big[ X_t - Ch + u \ge 1,\ X_t \ge 0 \big]\,d\nu(u) - Ch, \]
with $\nu$ denoting the law of the supremum of $c\sqrt{s} + W_s$ over $s \in [0,h]$. It is also worth pointing out that $u \le 1$ and $X_t - Ch + u \ge 1$ imply $X_t \ge 0$. Without loss of generality, let $\beta\eta_0'' = \beta(\eta_0 - \Delta e(t)) \le 1$; this assumption and inequality (2.7) make it possible to deduce that:
\[ e(t+h) - e(t) \ge \int_{Ch}^{\beta\eta_0''} \frac{u - Ch}{\beta}\,d\nu(u) - Ch \ge \frac1\beta \int_{Ch}^{\beta\eta_0''} u\,d\nu(u) - \frac{Ch}{\beta} - Ch, \tag{2.9} \]


with $C$ independent of $c$. Recall that $c\sqrt{h} \le \beta(e(t+h) - e(t)) \le \beta\eta_0''/2$ for $h$ small enough, $e$ being right-continuous. Therefore, since $\sup_{s\in[0,h]} W_s$ is Gaussian by the reflection principle, we obtain
\[ \int_{\beta\eta_0''}^{+\infty} u\,d\nu(u) = \mathbb{E}\Big[ \sup_{s\in[0,h]} (c\sqrt{s} + W_s)\,\mathbf{1}_{\{\sup_{s\in[0,h]}(c\sqrt{s}+W_s) \ge \beta\eta_0''\}} \Big] \le \mathbb{E}\Big[ \sup_{s\in[0,h]} \Big( \frac{\beta\eta_0''}{2} + W_s \Big)\,\mathbf{1}_{\{\sup_{s\in[0,h]} W_s \ge \beta\eta_0''/2\}} \Big] \le Ch. \]
Obviously, it also holds that $\int_0^{Ch} u\,d\nu(u) \le Ch$; moreover, by (2.9), possibly incrementing $C$ but keeping it independent of $c$:
\[ \beta\big( e(t+h) - e(t) \big) \ge \int_0^{+\infty} u\,d\nu(u) - C(1+\beta)h = \mathbb{E}\Big[ \sup_{s\in[0,h]} (c\sqrt{s} + W_s) \Big] - C(1+\beta)h = h^{1/2}\Big( \mathbb{E}\Big[ \sup_{s\in[0,1]} (c\sqrt{s} + W_s) \Big] - C(1+\beta)h^{1/2} \Big), \]
using Brownian scaling for the last equality. The same argument applies to any $r \in [t, t+h]$: under the hypothesis that $\alpha(e(r) - e(t)) \ge c\sqrt{r-t}$ for some $c \ge 0$, we can show that $\beta(e(r) - e(t)) \ge f(c)\sqrt{r-t}$, where $f(c)$ is the function
\[ f(c) := \mathbb{E}\Big[ \sup_{s\in[0,1]} (c\sqrt{s} + W_s) \Big] - C(1+\beta)\sqrt{h}. \]
Proceeding by induction, letting $c_0 = 0$ and $c_{n+1} = f(c_n)$ for all $n \ge 0$, we deduce that
\[ \beta\big( e(r) - e(t) \big) \ge c_n \sqrt{r-t} \qquad \text{for all } r \in [t, t+h] \text{ and all } n \ge 0. \]
Choosing $h$ small enough that $c_1 > 0 = c_0$, and $f$ being non-decreasing, the sequence $(c_n)_{n\ge 0}$ is non-decreasing. Since the function $e$ is locally bounded, $(c_n)_{n\ge 0}$ must have a finite limit $c^*$. Furthermore, the function $f$ is Lipschitz-continuous, so $c^* = f(c^*)$, that is, since $c^* + W_1$ has mean $c^*$,
\[ c^* = \mathbb{E}\Big[ \sup_{s\in[0,1]} (c^*\sqrt{s} + W_s) \Big] - C(1+\beta)\sqrt{h} = \mathbb{E}\Big[ \sup_{s\in[0,1]} (c^*\sqrt{s} + W_s) - (c^* + W_1) \Big] + c^* - C(1+\beta)\sqrt{h}. \]
By time reversal, since $(W_1 - W_{1-s})_{s\in[0,1]}$ is a standard Brownian motion:
\[ C(1+\beta)\sqrt{h} = \mathbb{E}\Big[ \sup_{s\in[0,1]} (c^*\sqrt{s} + W_s) - (c^* + W_1) \Big] \ge \mathbb{E}\Big[ \sup_{s\in[0,1]} \big( c^*(s-1) + W_s - W_1 \big) \Big] \ge \mathbb{E}\Big[ \sup_{s\in[0,1]} \big( -c^* s + W_s \big) \Big], \]

which says that $c^*$ must be large when $h$ is small. In particular, we can assume $h$ small enough that $c^* \ge 1$. Then:
\[ C(1+\beta)\sqrt{h} \ge \mathbb{E}\Big[ \sup_{s\in[0,(c^*)^{-2}]} (-c^* s + W_s) \Big] = \frac{1}{c^*}\,\mathbb{E}\Big[ \sup_{s\in[0,1]} (-s + W_s) \Big], \]
which lets us assert that, for $h$ small enough, $c^*\sqrt{h} \ge \delta$ for some constant $\delta > 0$. In conclusion, we get that $\liminf_{h\downarrow 0}[e(t+h) - e(t)] \ge \delta/\beta$, which is a contradiction. □
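The induction at the heart of the proof, $c_0 = 0$, $c_{n+1} = f(c_n)$, can be explored numerically by estimating $f(c) = \mathbb{E}[\sup_{s\in[0,1]}(c\sqrt{s} + W_s)] - C(1+\beta)\sqrt{h}$ with Monte Carlo over one fixed batch of Brownian paths; reusing the same paths at every iteration makes the estimated $f$ exactly monotone and 1-Lipschitz, so the iterates are non-decreasing, as in the proof. A sketch with arbitrary illustrative values of $C$, $\beta$ and $h$ (not tied to any constants in the thesis):

```python
import numpy as np

def fixed_point_iterates(C=1.0, beta=0.5, h=0.01, n_iter=30,
                         n_paths=4000, n_steps=400, seed=0):
    """Iterate c_{n+1} = f(c_n), where
    f(c) = E[sup_{s in [0,1]}(c*sqrt(s) + W_s)] - C*(1+beta)*sqrt(h),
    the expectation estimated over one fixed batch of Brownian paths."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    # discrete Brownian paths W on the grid dt, 2dt, ..., 1
    w = np.cumsum(np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)),
                  axis=1)
    sqrt_s = np.sqrt(np.arange(1, n_steps + 1) * dt)
    def f(c):
        return np.max(c * sqrt_s + w, axis=1).mean() - C * (1 + beta) * np.sqrt(h)
    cs = [0.0]
    for _ in range(n_iter):
        cs.append(f(cs[-1]))
    return cs

cs = fixed_point_iterates()
```

With common random paths, the iterates increase monotonically and stabilize near an approximate fixed point $c^*$ of the estimated $f$.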

2.1 Convergence

We here describe in what sense the particle system converges to the solution of equation (2.1). We first introduce the M1-topology, which will be one of our main tools for defining

this convergence.

2.1.1 The M1-topology

Let $T > 0$ be a fixed real number. We define the space $D([0,T],\mathbb{R})$ as the set of all càdlàg functions on the interval $[0,T]$ which are left-continuous at $T$. We recall that the number of discontinuities of a càdlàg function in a bounded interval is at most countable [15].

We also define the completed graph of a function $f \in D([0,T],\mathbb{R})$ as the set obtained from the graph of $f$ by adding segments filling the gaps at discontinuity points, that is:
\[ G_f := \big\{ (x,t) \in \mathbb{R}\times[0,T] : x = \lambda f(t-) + (1-\lambda) f(t) \text{ for some } \lambda \in [0,1] \big\}. \]
We introduce an order on $G_f$ by saying that, for every $(x_1,t_1), (x_2,t_2) \in G_f$:
\[ (x_1,t_1) \le (x_2,t_2) \iff t_1 < t_2 \ \text{ or } \ \big( t_1 = t_2 \text{ and } |f(t_1-) - x_1| \le |f(t_1-) - x_2| \big), \]
which means that either $(x_1,t_1)$ 'happens' before $(x_2,t_2)$ with respect to time, or $t_1 = t_2$ and $x_1$ is nearer to $f(t_1-)$ than $x_2$ is: intuitively, this is the natural order obtained by tracing out $G_f$ from left to right.

A parametric representation of $f \in D([0,T],\mathbb{R})$ is a continuous and surjective function $(u,r) : [0,1] \to G_f$ which is non-decreasing with respect to the order on $G_f$. We denote by $R_f$ the set of all parametric representations of $f$. We can finally define the $M_1$-distance as:
\[ d_{M_1}(f_1, f_2) := \inf_{(u_j, r_j) \in R_{f_j},\, j=1,2} \big\{ \|u_1 - u_2\| \vee \|r_1 - r_2\| \big\}. \]


It is now useful to characterize convergence in the space $D([0,T],\mathbb{R})$ with the topology induced by the $M_1$-distance just defined. Let us start with the following result, a proof of which may be found in [Whitt, Stochastic-Process Limits, Theorem 12.5.1]:

Theorem 2.1. Let $(f_n)_n \subseteq D([0,T],\mathbb{R})$ be a sequence of càdlàg functions. Then $f_n \to f$ in $D([0,T],\mathbb{R})$ as $n \to +\infty$ if and only if there exist parametric representations $(u_n, r_n)$ of $f_n$, for each $n \in \mathbb{N}$, and $(u,r)$ of $f$, such that:
\[ \|u_n - u\| \vee \|r_n - r\| \xrightarrow[n\to+\infty]{} 0. \]

The theorem states that convergence in the space $D([0,T],\mathbb{R})$ is equivalent to the existence of parametrizations which converge uniformly.

To give other characterizations of convergence in $D([0,T],\mathbb{R})$, we first need some new definitions. For each $f \in D([0,T],\mathbb{R})$, $t \in [0,T]$ and $\delta > 0$, we define:
\[ w_T(f, t, \delta) := \sup_{0\vee(t-\delta) \le t_1 < t_2 < t_3 \le T\wedge(t+\delta)} \big\| f(t_2) - [f(t_1), f(t_3)] \big\|, \tag{2.10} \]
where
\[ \big\| f(t_2) - [f(t_1), f(t_3)] \big\| := \inf_{\theta\in[0,1]} \big| \theta f(t_1) + (1-\theta) f(t_3) - f(t_2) \big| \]
is the distance between the point $f(t_2)$ and the interval $[f(t_1), f(t_3)]$. It is clear that, if $f \in D([0,T],\mathbb{R})$ is monotone, either non-increasing or non-decreasing, then $w_T(f,t,\delta) = 0$ for every $\delta > 0$. We also define:
\[ v_T(f, t, \delta) := \sup_{0\vee(t-\delta) \le t_1 \le t_2 \le T\wedge(t+\delta)} \big| f(t_1) - f(t_2) \big|. \tag{2.11} \]
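Both moduli (2.10) and (2.11) are easy to evaluate for a function sampled on a finite grid, and one can check numerically that $w_T$ vanishes for monotone functions while an up-then-down excursion makes it positive. A brute-force sketch, cubic in the window size but fine for illustration (the helper names and test functions are ours):

```python
def w_osc(f, t, delta, grid):
    """Discrete analogue of (2.10): sup over t1 < t2 < t3 in the window
    of the distance from f(t2) to the interval [f(t1), f(t3)]."""
    pts = [f(s) for s in grid
           if max(0.0, t - delta) <= s <= min(grid[-1], t + delta)]
    best = 0.0
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                a, b, c = pts[i], pts[j], pts[k]
                lo, hi = min(a, c), max(a, c)
                dist = max(lo - b, b - hi, 0.0)  # distance of b to [lo, hi]
                best = max(best, dist)
    return best

def v_osc(f, t, delta, grid):
    """Discrete analogue of (2.11): local uniform oscillation near t."""
    pts = [f(s) for s in grid
           if max(0.0, t - delta) <= s <= min(grid[-1], t + delta)]
    return max(pts) - min(pts) if pts else 0.0

grid = [i / 100 for i in range(101)]
step = lambda s: 0.0 if s < 0.5 else 1.0             # monotone: w = 0
spike = lambda s: 1.0 if 0.45 <= s < 0.55 else 0.0   # up-down: w = 1
```

The `step` function, being monotone, has zero $w$-oscillation even across its jump, while the `spike` excursion does not: this is exactly why $M_1$ is the natural topology for the monotone spike-counting processes considered here.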

We are now ready to state the following results about convergence in the space $D([0,T],\mathbb{R})$:

Theorem 2.2. A sequence $(f_n)_{n\in\mathbb{N}} \subseteq D([0,T],\mathbb{R})$ converges to $f \in D([0,T],\mathbb{R})$ in the $M_1$-topology if and only if:

• $f_n(t) \to f(t)$ for each $t$ belonging to a subset of $[0,T]$ of Lebesgue measure $T$ that contains 0 and $T$;

• for every $t$ in the same subset,
\[ \lim_{\delta\to 0} \Big( \limsup_{n\to+\infty} w_T(f_n, t, \delta) \Big) = 0. \]

Theorem 2.3. If a sequence $(f_n)_{n\in\mathbb{N}} \subseteq D([0,T],\mathbb{R})$ converges to $f \in D([0,T],\mathbb{R})$ in the $M_1$-topology, then, at every point $t \in [0,T]$ at which $f$ is continuous:
\[ \lim_{\delta\to 0} \Big( \limsup_{n\to+\infty} \sup_{s\in[0\vee(t-\delta),\, T\wedge(t+\delta)]} |f_n(s) - f(s)| \Big) = 0. \]


From Theorem 2.2 it is easy to derive a necessary and sufficient condition for $M_1$-convergence of a sequence of monotone càdlàg functions:

Lemma 2.1. If $(f_n)_{n\in\mathbb{N}} \subseteq D([0,T],\mathbb{R})$ with $f_n$ monotone for every $n \in \mathbb{N}$, then, given $f \in D([0,T],\mathbb{R})$:
\[ f_n \xrightarrow{M_1} f \iff f_n(t) \to f(t) \text{ for every } t \text{ in a subset of } [0,T] \text{ of Lebesgue measure } T \text{ containing } 0 \text{ and } T. \]

The following theorem gives conditions for compact closure in the space $D([0,T],\mathbb{R})$:

Theorem 2.4. A subset $A \subseteq D([0,T],\mathbb{R})$ has compact closure with respect to the $M_1$-topology if and only if:

• $\sup_{f\in A} \|f\| < +\infty$;

• $\lim_{\delta\to 0} \sup_{f\in A} \Big[ \big( \sup_{t\in[0,T]} w_T(f,t,\delta) \big) \vee v_T(f,0,\delta) \vee v_T(f,T,\delta) \Big] = 0$.

Another important fact is that the Borel σ-field coincides with the σ-field generated by the evaluation mappings [16], so that the law of a process defined on $D([0,T],\mathbb{R})$ is characterized by its finite-dimensional distributions. Moreover, the metric space $(D([0,T],\mathbb{R}), d_{M_1})$ is Polish, which allows us to apply the Skorohod Representation Theorem and the Prohorov Theorem.

Finally, we will make extensive use of the space $\mathcal{P}(D([0,T],\mathbb{R}))$ of all probability measures on $D([0,T],\mathbb{R})$, endowed with the weak topology; since $D([0,T],\mathbb{R})$ is Polish, $\mathcal{P}(D([0,T],\mathbb{R}))$ is Polish too [8, Chapter 3, Thm 1.7], so the Skorohod Representation Theorem applies to it as well.

2.1.2 Convergence of the particle system

Let $(Z^{i,N})_{i\in\{1,\dots,N\}}$ be a physical solution of the Z-system (1.5) as given by Theorem 1.1. Consider the extended system:
\[ \bar Z_t^{i,N} := \begin{cases} Z_t^{i,N}, & t \le T,\\[2pt] Z_T^{i,N} + \big( W_t^{i,N} - W_T^{i,N} \big), & t \in (T, T+1]. \end{cases} \]
Observe that each process $\bar Z^{i,N}$ is certainly left-continuous at $T+1$, so that its trajectories belong to $D([0,T+1],\mathbb{R})$; on the other hand, $Z^{i,N}$ is not always left-continuous at $T$.

From now on, we will consider only a certain class of graphs.

Definition. $K$ is the set of all vertex-transitive graphs such that, for every $i$, $d(i) \ge \frac{\alpha}{\beta} N$ for some $\beta \ge \alpha$.


Recall that a vertex-transitive graph is one such that, for every pair of vertices $i$ and $j$, there exists an automorphism $f$ of the graph with $f(i) = j$. This implies that the graph is regular (every node has the same degree) and that its structure is completely uniform: it is always possible to relabel the vertices so that two given nodes are exchanged while the graph remains the same.

For graphs in $K$, the processes $\bar Z^i$ are identically distributed for $i = 1, \dots, N$. Define now the empirical measures:
\[ \bar\mu_N^i := \frac{1}{d(i)} \sum_{j=1}^N a_{ij}\, \delta_{\bar Z^{j,N}}, \qquad \text{for every node } i, \tag{2.12} \]
which are random variables taking values in $\mathcal{P}(D([0,T+1],\mathbb{R}))$. We can then define $\Pi_N$ as the law of $\bar\mu_N^i$, so that $\Pi_N \in \mathcal{P}(\mathcal{P}(D([0,T+1],\mathbb{R})))$, the space of probability measures on $\mathcal{P}(D([0,T+1],\mathbb{R}))$, again endowed with the weak topology inherited from the (weak) topology of $\mathcal{P}(D([0,T+1],\mathbb{R}))$. Note that $\Pi_N$ does not depend on $i$, thanks to the assumptions made.
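As a concrete instance, for a circulant graph (each node linked to its $k$ nearest neighbours on both sides of a cycle, a standard vertex-transitive example), the empirical measure $\bar\mu_N^i$ is the uniform distribution over the $d(i)$ neighbouring trajectories. A toy sketch of the construction (helper names are ours, and the "trajectories" are replaced by scalar values for brevity):

```python
import numpy as np

def circulant_adjacency(n, k):
    """0/1 adjacency of the cycle graph on n nodes where i ~ j iff their
    circular distance is at most k; vertex-transitive, degree 2k."""
    a = np.zeros((n, n), dtype=int)
    for i in range(n):
        for off in range(1, k + 1):
            a[i, (i + off) % n] = 1
            a[i, (i - off) % n] = 1
    return a

def neighbor_empirical_mean(a, values, i):
    """<mu_N^i, z> = (1/d(i)) * sum_j a_ij * values[j]: the mean of the
    empirical measure over node i's neighbourhood."""
    d = a[i].sum()
    return (a[i] * values).sum() / d

a = circulant_adjacency(10, 2)       # every node has degree 4
vals = np.arange(10, dtype=float)    # stand-in for trajectory data
```

By vertex transitivity, the law of this neighbourhood average does not depend on the node $i$, which is exactly why $\Pi_N$ is well defined independently of $i$.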

Preliminary results

We here collect some results that will be needed to prove the main result of this section. These results will be proved afterwards, at the end of the chapter.

Lemma 2.2. Given a particle system with a graph structure belonging to the class $K$, for any $T > 0$, the family $(\Pi_N)_{N\ge 1}$ is tight in $\mathcal{P}(\mathcal{P}(D([0,T],\mathbb{R})))$.

The tightness property of (ΠN)N implies the existence of a subsequence of (ΠN)N

which converges to some limit Π∞.

We will need the following result, which we do not prove; a proof may be found in [8, Chapter 3, Lemma 7.7]:

Lemma 2.3. Given a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and a stochastic process $X$ on it with càdlàg trajectories, the set
\[ D(X) = \{ t \ge 0 : \mathbb{P}[X_{t-} = X_t] = 1 \} \]
has at most countable complement.

Lemma 2.3 has some interesting consequences. For any $T > 0$, consider the canonical process $z$ on $D([0,T],\mathbb{R})$, that is, the process $z : [0,T] \times D([0,T],\mathbb{R}) \ni (t,f) \mapsto f(t)$; obviously $z$ has càdlàg paths. Define the function $[0,T] \ni t \mapsto \int \mu\{z_{t-} = z_t\}\, d\Pi_\infty(\mu)$ and observe that the application
\[ \mathcal{B}\big( D([0,T],\mathbb{R}) \big) \ni A \mapsto \int \mu(A)\, d\Pi_\infty(\mu) \]
is a probability measure on $D([0,T],\mathbb{R})$. Applying Lemma 2.3 to $z$, defined on the probability space made up of $D([0,T],\mathbb{R})$, its Borel σ-algebra and the probability just defined, we deduce that
\[ D(z) = \Big\{ t \ge 0 : \int \mu[z_{t-} = z_t]\, d\Pi_\infty(\mu) = 1 \Big\} \]
has at most countable complement. This means that there exists a countable set $J$ such that, if $t \notin J$, then $\int \mu[z_{t-} = z_t]\, d\Pi_\infty(\mu) = 1$; hence, again for $t \notin J$, we also have $\mu[z_{t-} = z_t] = 1$ for $\Pi_\infty$-almost every measure $\mu$.

Let us now consider the process $m : [0,T] \times D([0,T],\mathbb{R}) \ni (t,f) \mapsto \big\lfloor \big( \sup_{s\in[0,t]} f(s) \big)_+ \big\rfloor$ and define the function $[0,T] \ni t \mapsto \int \langle\mu, m_t\rangle\, d\Pi_\infty(\mu)$, which is easily seen to be non-decreasing. Applying Lemma 2.3 again, this time to the process $(m_t)_{t\in[0,T]}$, we get that, for $\Pi_\infty$-almost every measure $\mu$ and for every $t$ outside a countable subset of $[0,T]$, $\mu[m_{t-} = m_t] = 1$. As a consequence, the function $[0,T] \ni t \mapsto \int \langle\mu, m_t\rangle\, d\Pi_\infty(\mu)$ has at most countably many jumps.

Without loss of generality, we can take $J \subseteq [0,T]$ countable, containing all the discontinuities of $[0,T] \ni t \mapsto \int \langle\mu, m_t\rangle\, d\Pi_\infty(\mu)$, and such that, for every $t \notin J$ and $\Pi_\infty$-almost every measure $\mu$, $\mu[z_{t-} = z_t] = 1$.

Lemma 2.4. Fix $T > 0$. For any integer $p \ge 1$, let $S_1, \dots, S_p \notin J$ and $S_0 = 0$, with $0 = S_0 \le S_1 < \dots < S_p < T$, and let $f_0, \dots, f_p : \mathbb{R} \to \mathbb{R}$ be bounded and uniformly continuous functions. Set
\[ F(z) := \prod_{i=0}^p f_i(z_{S_i}), \]
where $z$ is the canonical process on $D([0,T],\mathbb{R})$. Then the functional
\[ \mathcal{P}\big( D([0,T],\mathbb{R}) \big) \ni \mu \mapsto \Big\langle \mu,\, F\Big( z - z_0 - \int_0^{\bullet} b(z_s - m_s)\,ds - \alpha\langle\mu, m_\bullet\rangle \Big) \Big\rangle \]
is almost everywhere continuous under $\Pi_\infty$.

The following two results hold true:

Lemma 2.5. For every graph in $K$ with constant $\beta \ge \alpha$ and any $T > 0$, consider $0 \le t < t+h \le T$ with $h \in (0,1)$. Then there exists a constant $C$, independent of $h$, such that, for $N$ big enough and any particle $i$:
\[ \mathbb{P}\big[ \bar e^{i,N}(t+h) - \bar e^{i,N}(t-) > 1 + Ch^{1/16} \big] \le Ch \]
and
\[ \mathbb{P}\bigg[ \forall \lambda \le \big( \bar e^{i,N}(t+h) - \bar e^{i,N}(t-) - Ch^{1/16} \big)_+,\ \frac1N \sum_{i=1}^N \mathbf{1}_{\{ X_{t-}^i \ge 1 - \beta\lambda - Ch^{1/16} \}} \ge \lambda \bigg] \ge 1 - Ch, \]
where the $\bar e^{i,N}$ are defined as:
\[ \bar e^{i,N}(t) := \frac{1}{d(i)} \sum_{j=1}^N a_{ij} M_t^{j,N}. \]

Lemma 2.6. For $\Pi_\infty$-almost every $\mu$, given a sequence $(\mu^n)_{n\ge 1}$ of probability measures on $D([0,T],\mathbb{R})$ converging to $\mu$ in the space $\mathcal{P}(D([0,T],\mathbb{R}))$, at any point of continuity of the map $(0,T) \ni t \mapsto \langle\mu, m_t\rangle$ we have:
\[ \langle\mu^n, m_t\rangle \xrightarrow[n\to+\infty]{} \langle\mu, m_t\rangle. \]

Main Result

Theorem. Given a particle system with a graph structure belonging to $K$, the family $(\Pi_N)_{N\ge 1}$ is tight in $\mathcal{P}(\mathcal{P}(D([0,T+1],\mathbb{R})))$. Furthermore, if $\Pi_\infty$ is a weak limit of a convergent subsequence, then, for $\Pi_\infty$-almost every measure $\mu \in \mathcal{P}(D([0,T+1],\mathbb{R}))$, the canonical process $(z_t)_{t\in[0,T+1]}$ on $D([0,T+1],\mathbb{R})$ satisfies:

1. under the measure $\mu$, the variable $z_0$ has the same distribution as $X_0$;

2. under $\mu$, if $m_t = \lfloor (\sup_{0\le s\le t} z_s)_+ \rfloor$ and $\langle\mu, m_t\rangle$ is the expectation of $m_t$ under the measure $\mu$, the process $(w_t)_{t\in[0,T]} := \big( z_t - z_0 - \int_0^t b(z_s - m_s)\,ds - \alpha\langle\mu, m_t\rangle \big)_{t\in[0,T]}$ is a Brownian motion;

3. under $\mu$, the pair $(z_t, m_t)_{t\in[0,T]}$ is a physical solution of equation (2.1) in the sense of Definition 2.1, with Brownian motion $(w_t)_{t\in[0,T]}$.

Proof. Step 1: $(\Pi_N)_{N\ge 1}$ is tight. This is the content of Lemma 2.2.

Step 2. Given the tightness of $(\Pi_N)_{N\ge 1}$, it is possible to extract a convergent subsequence: denote by $\Pi_\infty$ its limit. As before, without loss of generality we can take $J \subseteq [0,T+1]$ countable, containing all the discontinuities of $[0,T+1] \ni t \mapsto \int \langle\mu, m_t\rangle\, d\Pi_\infty(\mu)$, and such that, for $\Pi_\infty$-almost every measure $\mu$, $\mu[z_{t-} = z_t] = 1$.

Replicating the construction of Lemma 2.4, we define the function
\[ F(z) := \prod_{i=0}^p f_i(z_{S_i}), \]
where the $f_i$ (for $i = 0, \dots, p$) are bounded and uniformly continuous functions, $z$ is the canonical process and the $S_i$ are points not in $J$. Let $G$ be another bounded and uniformly continuous function from $\mathbb{R}$ to $\mathbb{R}$ and define
\[ Q_N := \mathbb{E}\bigg[ G\Big( \Big\langle \bar\mu_N^i,\, F\Big( z - z_0 - \int_0^\bullet b(z_s - m_s)\,ds - \alpha\langle\bar\mu_N^i, m_\bullet\rangle \Big) \Big\rangle \Big) \bigg], \]
where $\bar\mu_N^i$ was defined in (2.12). It is possible to rewrite
\[ \bar Z_t^{i,N} = Z_0^{i,N} + \int_0^{t\wedge T} b\big( \bar Z_s^{i,N} - M_s^{i,N} \big)\,ds + \alpha\,\bar e^{i,N}(t\wedge T) + W_t^{i,N} \]
for all $t \in [0, T+1]$ and $i \in \{1,\dots,N\}$. As a consequence:
\[ Q_N = \mathbb{E}\bigg[ G\Big( \frac{1}{d(i)} \sum_{j=1}^N a_{ij}\, F(W^j) \Big) \bigg] \xrightarrow[N\to+\infty]{} G\big( \mathbb{E}[F(W)] \big) \tag{2.13} \]
by the law of large numbers, since $d(i) \to +\infty$ as $N$ diverges. By Lemma 2.4, the function
\[ \mathcal{P}\big( D([0,T+1],\mathbb{R}) \big) \ni \mu \mapsto \Big\langle \mu,\, F\Big( z - z_0 - \int_0^\bullet b(z_s - m_s)\,ds - \alpha\langle\mu, m_\bullet\rangle \Big) \Big\rangle \]
is almost everywhere continuous with respect to $\Pi_\infty$, so that, since $\bar\mu_N^i$ converges in law to $\Pi_\infty$, we can apply the continuous mapping theorem to $Q_N$ as $N \to +\infty$ and, from (2.13):
\[ G\big( \mathbb{E}[F(W)] \big) = \int G\Big( \Big\langle \mu,\, F\Big( z - z_0 - \int_0^\bullet b(z_s - m_s)\,ds - \alpha\langle\mu, m_\bullet\rangle \Big) \Big\rangle \Big)\, d\Pi_\infty(\mu). \]
We can apply the same equality with the function $\tilde G(\cdot) = \big( G(\cdot) - G(\mathbb{E}[F(W)]) \big)^2$ in place of $G$ to obtain that, $\Pi_\infty$-almost everywhere:
\[ G\Big( \Big\langle \mu,\, F\Big( z - z_0 - \int_0^\bullet b(z_s - m_s)\,ds - \alpha\langle\mu, m_\bullet\rangle \Big) \Big\rangle \Big) = G\big( \mathbb{E}[F(W)] \big). \]
We can finally deduce that the process
\[ \Big( \Upsilon_t = z_t - z_0 - \int_0^t b(z_s - m_s)\,ds - \alpha\langle\mu, m_t\rangle \Big)_{t\in[0,T+1]} \]
has the same finite-dimensional distributions as a Brownian motion at points $0 \le S_1 < \dots < S_p < T$ not belonging to $J$. Since the process $\Upsilon$ has right-continuous paths and $[0,T) \cap J^c$ is dense in $[0,T)$, this is sufficient to conclude that $(\Upsilon_t)_{t\in[0,T]}$ has the same finite-dimensional distributions as Brownian motion, under $\Pi_\infty$-almost every $\mu$. In conclusion, the process $(\check\Upsilon_t)_{t\in[0,T]}$, where $\check\Upsilon_t = \Upsilon_t$ if $t < T$ and $\check\Upsilon_T = \Upsilon_{T-}$, has Wiener distribution thanks to the Kolmogorov Extension Theorem. (It is useful to point out that the Kolmogorov Extension Theorem gives $(\check\Upsilon_t)_{t\in[0,T]}$ the Wiener distribution on $D([0,T],\mathbb{R})$ with the σ-algebra generated by the evaluation mappings; this σ-algebra, however, coincides with the Borel σ-algebra, which is the one we are considering on $D([0,T],\mathbb{R})$.) This shows that point (2) of the Theorem is true.

Step 3: the law of $z_0$ under $\mu$ (for $\Pi_\infty$-almost every measure) is the same as the law of $X_0$ under $\mathbb{P}$. Let $\pi_0$ be the evaluation mapping at 0, that is, $\pi_0(z) = z_0$ for every $z \in D([0,T],\mathbb{R})$, and let $\pi_0^\#\mu$ be the push-forward of the measure $\mu$ by the map $\pi_0$. The map
\[ \mathcal{P}\big( D([0,T+1],\mathbb{R}) \big) \ni \mu \mapsto \pi_0^\#\mu \in \mathcal{P}(\mathbb{R}) \]

(38)

is continuous, because of the characterization of convergence in the space $D([0,T+1],\mathbb{R})$ given in Theorem 2.2. Since $\bar\mu_N^i$ converges in law to $\Pi_\infty$ (the law of $\bar\mu_N^i$ is $\Pi_N$, and the sequence $(\Pi_N)_N$ converges weakly to $\Pi_\infty$), the Skorohod Representation Theorem allows us to assume that $\bar\mu_N^i \to X^i$ almost everywhere as $N \to +\infty$, where $X^i$ is a random variable with law $\Pi_\infty$. The continuity of $\pi_0^\#$ tells us that $\pi_0^\#\bar\mu_N^i \to \pi_0^\# X^i$ almost everywhere, so that:
\[ \forall f : \mathbb{R}\to\mathbb{R} \text{ continuous and bounded}, \qquad \langle \bar\mu_N^i, f\circ\pi_0 \rangle \xrightarrow[N]{} \langle X^i, f\circ\pi_0 \rangle. \]
By the law of large numbers, we conclude that
\[ \langle X^i, f\circ\pi_0 \rangle = \mathbb{E}[f(X_0)] \quad \text{almost everywhere}, \]
which means that $\pi_0^\#\mu = \mathbb{P}_{X_0}$ for $\Pi_\infty$-almost every $\mu$. This completes the proof of point (1) of the theorem.

Step 4: physical solution. We need to show that conditions (2) and (4) of Definition 2.1 hold for $(z_t, m_t)_{t\in[0,T]}$, under $\Pi_\infty$-almost every $\mu$.

Applying Lemma 2.5, for some $0 \le t < t+h < T$ and for $N$ large enough, we obtain:
\[ \Pi_N\big[ \{ \mu : \langle\mu, m_{t+h} - m_{t-}\rangle > 1 + Ch^{1/16} \} \big] \le Ch, \]
\[ \Pi_N\big[ \{ \mu : \forall\lambda \le \langle\mu, m_{t+h} - m_t\rangle - Ch^{1/16},\ \mu[ z_{t-} - m_{t-} \ge 1 - \beta\lambda - Ch^{1/16} ] \ge \lambda \} \big] \ge 1 - Ch. \]
Lemma 2.6 implies that the map $\mu \mapsto \langle\mu, m_{t+h} - m_t\rangle$ is continuous whenever $t, t+h \notin J$. Assuming precisely that $t, t+h \notin J$, the set $\{ \mu : \langle\mu, m_{t+h} - m_t\rangle > 1 + Ch^{1/16} \}$ is open and, by the Portmanteau Theorem (without loss of generality we can assume that, for any $N \ge 1$, $\mu[z_t = z_{t-}] = 1$ and $\mu[m_t = m_{t-}] = 1$ for $\Pi_N$-almost every $\mu$, so that, when $t \notin J$, the Portmanteau Theorem can be applied without further difficulties), we get:
\[ \Pi_\infty\big[ \{ \mu : \langle\mu, m_{t+h} - m_t\rangle > 1 + Ch^{1/16} \} \big] \le Ch. \tag{2.14} \]
Moreover, the set $\{ \mu : \forall\lambda \le \langle\mu, m_{t+h} - m_t\rangle - Ch^{1/16},\ \mu[ z_{t-} - m_{t-} \ge 1 - \beta\lambda - Ch^{1/16} ] \ge \lambda \}$ is closed. Indeed, since $t, t+h \notin J$, if $\mu^n \to \mu$ in the weak topology, then $\langle\mu^n, m_{t+h} - m_t\rangle \to \langle\mu, m_{t+h} - m_t\rangle$; so, when $\lambda < \langle\mu, m_{t+h} - m_t\rangle - Ch^{1/16}$, then $\lambda \le \langle\mu^n, m_{t+h} - m_t\rangle - Ch^{1/16}$ eventually in $n$, and this implies
\[ \mu\big[ z_{t-} - m_{t-} \ge 1 - \beta\lambda - Ch^{1/16} \big] \ge \limsup_{n\to+\infty} \mu^n\big[ z_{t-} - m_{t-} \ge 1 - \beta\lambda - Ch^{1/16} \big] \ge \lambda; \]
to conclude, when $\lambda = \langle\mu, m_{t+h} - m_t\rangle - Ch^{1/16}$, we can apply the same argument with $\lambda - \epsilon$, for some $\epsilon > 0$. Applying the Portmanteau Theorem again, we conclude that:
\[ \Pi_\infty\big[ \{ \mu : \forall\lambda \le \langle\mu, m_{t+h} - m_t\rangle - Ch^{1/16},\ \mu[ z_{t-} - m_{t-} \ge 1 - \beta\lambda - Ch^{1/16} ] \ge \lambda \} \big] \ge 1 - Ch. \tag{2.15} \]
Inequalities (2.14) and (2.15) hold only if $t$ and $t+h$ do not belong to $J$. For a point $t \in [0,T) \cap J$, we can find sequences $(t_p)_{p\in\mathbb{N}}$ and $(h_p)_{p\in\mathbb{N}}$ such that $0 \le t_p < t < t_p + h_p < T$, with $t_p \uparrow t$, $h_p \downarrow 0$ and $t_p, t_p + h_p \notin J$. Applying inequalities (2.14) and (2.15) at the points $(t_p, t_p + h_p)$ and letting $p \to +\infty$:
\[ \Pi_\infty\big[ \langle\mu, m_t - m_{t-}\rangle > 1 \big] = 0, \]
\[ \Pi_\infty\big[ \forall\lambda < \langle\mu, m_t - m_{t-}\rangle,\ \mu[ z_{t-} - m_{t-} \ge 1 - \beta\lambda ] \ge \lambda \big] = 1. \]
From the first equality we get condition (2) of Definition 2.1, for $\Pi_\infty$-almost every $\mu$. From the second, since $J$ is countable:
\[ \Pi_\infty\big[ \forall t \in [0,T)\cap J,\ \forall\lambda < \langle\mu, m_t - m_{t-}\rangle,\ \mu[ z_{t-} - m_{t-} \ge 1 - \beta\lambda ] \ge \lambda \big] = 1, \]
so that we can use Proposition 2.2 to conclude that condition (4) of Definition 2.1 holds true, too. □

Proofs

Proof of Lemma 2.2

The proof of Lemma 2.2 needs the following auxiliary results, which we state and prove below.

Lemma 2.7. If the particle system is defined on a graph belonging to $K$, then, for any $p \ge 1$ and $T \ge 0$, there exists a constant $C_T(p) \ge 0$, independent of $N$, such that
\[ \forall i \in \{1,\dots,N\}, \qquad \mathbb{E}\Big[ \sup_{t\in[0,T]} \big| Z_t^i \big|^p + (M_T^i)^p \Big] \le C_T(p). \]

Proof. We observe that
\[ \mathbb{E}\Big[ \sup_{t\in[0,T]} \big| Z_t^i \big|^p + (M_T^i)^p \Big] \le 2\,\mathbb{E}\Big[ \sup_{t\in[0,T]} \big| Z_t^i \big|^p \Big], \]
and then, by (1.11) and the standard assumptions, we get the thesis. □

and then, by considering (1.11) and the standard assumptions (?), we get the thesis.  Lemma 2.8. Assume that the particle system is defined on a graph belonging to k. Then, for all η > 0, there exists a constant λ(η) > 0 independent of i and N such that:

P h

∃t ∈ [0, λ(η)] : ¯ei,N(t) ≥ (λ(η))−1t14

i ≤ η, where ¯ei,N(t) is given by

¯ ei,N(t) := 1 d(i) N X j=1 aijMtj.

(40)

Proof. For some $T \in (0,1)$, we can define:
\[ \tau := \inf\Big\{ t \ge 0 : \frac1N \sum_{i=1}^N \mathbf{1}_{\{M_t^i \ge 1\}} \ge T \Big\}, \]
which is the first time at which the fraction of particles which have spiked at least once is at least $T$. So, if $t < \tau\wedge T$, we have, for every node $i$:
\[ \frac{1}{d(i)} \sum_{j=1}^N a_{ij} M_t^j \le \frac{\beta}{\alpha N} \sum_{i=1}^N M_t^i\, \mathbf{1}_{\{M_t^i \ge 1\}} \le \frac{\beta}{\alpha}\, T^{1/2} \Big( \frac1N \sum_{i=1}^N (M_t^i)^2 \Big)^{1/2}, \tag{2.16} \]
thanks to the Cauchy–Schwarz inequality.

It is now useful to investigate $\mathbb{P}[\tau \le T]$. Consider the following events:
\[ A := \Big\{ \frac1N \sum_{i=1}^N (M_T^i)^2 \le T^{-1/2} \Big\}, \]
\[ A_i := \Big\{ C \int_0^T \Big( 1 + \sup_{s\in[0,t]} |Z_s^i| \Big)\,dt + \beta T^{1/4} + \sup_{t\in[0,T]} |W_t^i| \ge \frac{\epsilon_0}{2} \Big\}, \]
\[ A_0 := \Big\{ \frac1N \sum_{i=1}^N \mathbf{1}_{A_i} \le T^2 \Big\}, \]
where $\epsilon_0 > 0$ comes from the standard assumptions and $C \ge 0$ is a constant, chosen such that, for any $i \in \{1,\dots,N\}$, on the set $A$ and for $t < \tau\wedge T$:
\[ \sup_{s\in[0,t]} (Z_s^i)_+ \le (Z_0^i)_+ + C \int_0^t \Big( 1 + \sup_{r\in[0,s]} |Z_r^i| \Big)\,ds + \beta T^{1/4} + \sup_{s\in[0,t]} |W_s^i|. \]
Since $(Z_0^i)_+ \le 1 - \epsilon_0$ by the standard assumptions, on the set $A$, for $t < \tau\wedge T$, the following implication holds for every particle $i$:
\[ \sup_{s\in[0,t]} (Z_s^i)_+ \ge 1 - \frac{\epsilon_0}{2} \implies C \int_0^t \Big( 1 + \sup_{r\in[0,s]} |Z_r^i| \Big)\,ds + \beta T^{1/4} + \sup_{s\in[0,t]} |W_s^i| \ge \frac{\epsilon_0}{2}. \]
In particular, if we assume that $\tau \le T$, then the previous implication holds for all $t < \tau\wedge T = \tau$. So, on the set $A \cap \{\tau \le T\}$:
\[ \frac1N \sum_{i=1}^N \mathbf{1}_{\{ \sup_{s\in[0,\tau)} (Z_s^i)_+ \ge 1 - \epsilon_0/2 \}} \le \frac1N \sum_{i=1}^N \mathbf{1}_{A_i}. \]
Furthermore, on $A \cap A_0 \cap \{\tau \le T\}$:
\[ \frac1N \sum_{i=1}^N \mathbf{1}_{\{ \sup_{s\in[0,\tau)} (Z_s^i)_+ \ge 1 - \epsilon_0/2 \}} \le T^2. \tag{2.17} \]
