Individual based models for COVID-19: numerical simulations and a macroscopic limit

N/A
N/A
Protected

Academic year: 2021

Department of Mathematics

Master's Degree in Mathematics

Master's Thesis

Individual based models for COVID-19: numerical simulations and a macroscopic limit

Candidate: Martina Riva

Supervisor: Prof. Franco Flandoli

Co-examiner: Prof. Marco Romito


Contents

Introduction v

1 Preliminaries 1

1.1 Markov chains . . . 1

1.1.1 Discrete time Markov chains . . . 1

1.1.2 Continuous time Markov chains . . . 2

1.2 Weak convergence in M+1 . . . 12

2 Individual based model behind SIR 15

2.1 Model definition . . . 15

2.2 Introduction to the macroscopic limit and idea of the proof . . . 17

2.3 Topology and compactness . . . 21

2.4 Macroscopic limit . . . 25

2.5 Link with the classical SIR model . . . 37

3 Cellular Automaton for COVID-19 39

3.1 Cellular Automaton . . . 39

3.2 Cellular Automaton for COVID-19 . . . 41

3.2.1 Base model . . . 44

3.2.2 Model with family units . . . 46

3.2.3 Model with pseudo-bubbles . . . 47

3.2.4 Model with layers . . . 48

3.3 Links between model’s parameters and experimental quantities . . . 49

3.3.1 Incubation period . . . 49

3.3.2 Serial interval . . . 49

3.3.3 Basic reproduction number R0 . . . 53

3.4 Numerical simulations . . . 54

3.4.1 Choice of parameters . . . 54

3.4.2 Comparison between different versions of the model . . . 56

3.4.3 Experimental quantities . . . 58

3.4.4 A real case . . . 66

3.5 Conclusions and further developments . . . 68


A Numerical code 71

A.1 Base model . . . 71

A.2 Model with family units . . . 73

A.3 Model with pseudo-bubbles . . . 75

A.4 Model with layers . . . 76

A.5 Incubation period . . . 77

A.6 Serial interval . . . 79

A.7 Basic reproduction number R0 . . . 80


Introduction

A novel human coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was identified in China in December 2019. Since then, the epidemic has rapidly grown, and cases have been reported worldwide. The first identified cases in Italy date back to February 2020, when clusters were detected in Lombardy and Veneto. The first deaths were recorded shortly after and, starting on 8 March 2020, the region of Lombardy, together with 14 additional northern and central provinces in Piedmont, Emilia-Romagna, Veneto, and Marche, was put under lockdown. Two days later, the government extended the lockdown measures to the whole country. On 11 March 2020, the World Health Organization, after evaluating the levels of severity and diffusion of the infection, declared the COVID-19 outbreak a pandemic. Up to now, there have been 38,789,204 confirmed cases and 1,095,097 confirmed deaths worldwide [1].

The goal of this work is to develop a mathematical model to analyze the spread of COVID-19 infection.

The two most important aims of mathematical modeling of an epidemic are prediction and understanding. Indeed, one of the primary reasons for studying the spread of an infectious disease is to improve control and ultimately eradicate the infection from a population. Models provide an important tool to predict, although not completely, how different control measures, such as vaccination, quarantine and contact tracing, impact the spreading of the infection. Moreover, models can be used to better understand some characteristics of the illness, as they can provide estimates of important quantities such as $R_0$ or the incubation period.

There are different approaches to the mathematical modeling of infectious diseases. The most common one is to use compartmental models based on ordinary differential equations: these models are deterministic and can explain the average behavior, but are inadequate to account for stochastic fluctuations. An alternative approach is to use individual based models, which track every individual and their features as they change over time: these models try to describe the mechanisms that regulate the interactions and highlight the randomness in the spreading of the infection. This second class of models has limitations too: when the number of individuals is large, they become very complicated and difficult to use. Nevertheless, we use individual based models to describe the spreading of the infection, as they allow us to consider social structures that are fundamental to understanding the behavior of the COVID-19 outbreak.

[1] https://covid19.who.int

The first part of this thesis concerns the relation between these two kinds of models. In particular, we will see how an individual based model leads to the SIR model in a suitable macroscopic limit. We describe the model by a continuous time Markov chain on the discrete $d$-dimensional torus $\mathbb{T}^d_N = \mathbb{Z}^d / N\mathbb{Z}^d$, we suppose that an individual can only be Susceptible, Infected or Recovered, and we allow the following transitions:

S → I → R.

We specify the infinitesimal generator $(L_N f)(\eta^N)$, where $\eta^N \in \{S,I,R\}^{\mathbb{T}^d_N}$ and $f:\{S,I,R\}^{\mathbb{T}^d_N} \to \mathbb{R}$, and we consider the time evolution of the empirical measures
\[
\pi_t^{N,A}(du) = \pi^{N,A}(\eta_t^N, du) := \frac{1}{N^d}\sum_{x\in\mathbb{T}^d_N}\delta(\eta_t^N(x),A)\,\delta_{x/N}(du), \qquad A\in\{S,I,R\}.
\]
Finally, we prove that, as the number of individuals $N\to\infty$, for each fixed time $t$ the empirical measure $\pi_t^{N,A}(du)$ converges weakly in probability to $\rho_A(t,u)\,du$, where $\rho_A(t,u)$, $A\in\{S,I,R\}$, are solutions of the system
\[
\begin{cases}
\partial_t \rho_S(t,u) = -p_I \left(\int_{\mathbb{T}^d}\rho_I(t,u)\,du\right)\rho_S(t,u)\\
\partial_t \rho_I(t,u) = p_I \left(\int_{\mathbb{T}^d}\rho_I(t,u)\,du\right)\rho_S(t,u) - p_R\,\rho_I(t,u)\\
\partial_t \rho_R(t,u) = p_R\,\rho_I(t,u)\\
\rho_S(0,u) = \rho_S^0(u)\\
\rho_I(0,u) = \rho_I^0(u)\\
\rho_R(0,u) = \rho_R^0(u) = 1 - \rho_S^0(u) - \rho_I^0(u).
\end{cases}
\]

In this part we follow the steps of Chapter 4 of the book "Scaling limits of interacting particle systems" by C. Kipnis and C. Landim.

In the second part we concentrate on the development of a model for the description of the spreading of COVID-19, which is the result of a collaboration with Franco Flandoli, Eleonora La Fauci, and Patrizia Ferrante.

We use a cellular automaton consisting of $N$ individuals to model the outbreak. We suppose that each of them can be Susceptible ($S$), Infected but Not infectious ($IN$), Infected and Infectious ($II$), Unknown ($U$, not tested), Known ($K$, tested) or Exit ($E$), as these are recognised as the phases of a COVID-19 infection, and we allow the transitions
\[
S \to IN \to II, \qquad II \to U \to E, \qquad II \to K \to E.
\]

We present different versions of the model, from the simplest to the most complicated one, in order to take into account the different social structures that characterize our society, such as families and groups of friends or coworkers.
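The allowed transitions can be encoded, for instance, as a small adjacency structure. The Python fragment below is only an illustration of the transition graph above, not the thesis's implementation (which is written in R); in particular, whether an Unknown case can become Known before exiting is not specified by the diagram alone, so only the arrows shown are encoded.

```python
# Illustrative encoding of the allowed transitions of the cellular automaton.
# State names follow the text; this dict is a sketch, not the thesis code.
TRANSITIONS = {
    "S":  {"IN"},      # susceptible -> infected, not yet infectious
    "IN": {"II"},      # incubation ends -> infectious
    "II": {"U", "K"},  # infectious, then either untested (U) or tested (K)
    "U":  {"E"},       # unknown cases eventually exit
    "K":  {"E"},       # known cases eventually exit
    "E":  set(),       # absorbing state
}

def is_allowed(a, b):
    """Return True when the transition a -> b is permitted by the model."""
    return b in TRANSITIONS[a]

assert is_allowed("S", "IN") and is_allowed("II", "K")
assert not is_allowed("S", "II")   # incubation cannot be skipped
assert not is_allowed("E", "S")    # no reinfection in this model
```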

We then relate the parameters of the model to experimental quantities found in the specific medical-statistical literature, in order to determine some constraints on the parameters' values and obtain a model that describes well what happened in real life.

Finally, to test how our model behaves, we concentrate on numerical simulations. We begin with the choice of the parameters and a comparison between the different versions of the model. We then concentrate on three important experimental quantities, the incubation period, the serial interval and $R_0$, and we see how their values and their probability distributions, extrapolated from our model, compare to those found in the specific literature. In the end, we focus on the simulation of a real case, the spreading of the infection in the city of Pisa, and we explain why we think that randomness is a really important feature of the model.

The thesis is divided into three chapters and an appendix.

In Chapter 1 we give some preliminary notions that will be used throughout the work. In Chapter 2 we introduce a simplified individual based model for the spreading of an infection and we study its macroscopic limit as the number of individuals goes to infinity. In Chapter 3 we present different versions of a Cellular Automaton model for COVID-19, we describe the relations between the parameters of our model and experimental quantities found in the specific literature, and we test the model with numerical simulations. In Appendix A we report the majority of the code used for numerical simulations. The software used is R.


Chapter 1

Preliminaries

In this chapter we present some preliminary notions that will be used throughout the work.

1.1 Markov chains

1.1.1 Discrete time Markov chains

Throughout this section $E$ stands for a countable state space. Let $p: E\times E \to \mathbb{R}_+$ be a transition probability, i.e. such that
\[
p(i,j)\ge 0 \ \text{ for all } (i,j)\in E\times E, \qquad \sum_{j\in E} p(i,j) = 1 \ \text{ for all } i\in E.
\]

Definition 1.1.1. Let $p:\mathbb{N}\times E\times E \to [0,1]$ be a collection of transition probabilities. A sequence of random variables $(X_n)_{n\in\mathbb{N}}$ defined on a probability space $(\Omega,\mathcal{F},P)$ and taking values in a countable space $E$ is a Markov chain with transition probability $p$ if, for every $n\ge 0$,
\[
P(X_{n+1}=j \mid X_0=i_0,\dots,X_n=i_n) = P(X_{n+1}=j \mid X_n=i_n) = p(n,i_n,j)
\]
for every $(i_0,\dots,i_n,j)\in E^{n+2}$. The Markov chain is said to be homogeneous if the transition probability $p$ does not depend on $n$, i.e., if there exists a transition probability $p:E\times E\to[0,1]$ such that
\[
P(X_{n+1}=j\mid X_n=i) = P(X_1=j\mid X_0=i) = p(i,j)
\]
for every $(i,j)\in E\times E$ and every $n\ge 0$.


The first identity of the definition establishes that the behavior of the Markov chain in the future depends on the past only through the present or, equivalently, that, conditioned on the present, the past and the future are independent. The second property requires the process to be time translation invariant in the following sense: the probability, for a process starting from $x$ at time $0$, of being at state $y$ at time $n$ is equal to the probability, for a process that is at $x$ at time $m$, of being at $y$ at time $m+n$.
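As a concrete illustration (not part of the thesis, whose numerical code is written in R), the following Python sketch simulates a homogeneous chain with an arbitrary illustrative transition matrix, checks the discrete-time Chapman-Kolmogorov identity $P^{(n+m)} = P^{(n)}P^{(m)}$, and compares $n$-step empirical frequencies to the corresponding row of $P^{(n)}$.

```python
import random

# Hypothetical 3-state homogeneous chain; the matrix is illustrative.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

def step(i, rng):
    """One transition of the homogeneous chain from state i."""
    u, acc = rng.random(), 0.0
    for j, pij in enumerate(P[i]):
        acc += pij
        if u < acc:
            return j
    return len(P[i]) - 1  # guard against floating-point round-off

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Chapman-Kolmogorov in discrete time: P^(n+m) = P^(n) P^(m).
P2 = matmul(P, P)
P3 = matmul(P2, P)
assert all(abs(P3[i][j] - matmul(P, P2)[i][j]) < 1e-12
           for i in range(3) for j in range(3))

# Empirical check that 3-step frequencies approach row 0 of P^(3).
rng = random.Random(0)
n_paths, hits = 20000, [0, 0, 0]
for _ in range(n_paths):
    x = 0
    for _ in range(3):
        x = step(x, rng)
    hits[x] += 1
freqs = [h / n_paths for h in hits]
assert all(abs(freqs[j] - P3[0][j]) < 0.02 for j in range(3))
```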

1.1.2 Continuous time Markov chains

We begin with a definition:

Definition 1.1.2. A stochastic process $(X_t)_{t\ge 0}$ defined on a probability space $(\Omega,\mathcal{F},P)$ and taking values in a countable space $E$ is a homogeneous, continuous time Markov chain if:

(i) (Markov property). For every $s,t\ge 0$,
\[ P(X_{s+t}=j \mid \sigma(X_r,\ r\le t)) = P(X_{s+t}=j\mid X_t) \quad \text{for every } j\in E. \]

(ii) (Homogeneity). For every $s,t\ge 0$ and $i,j\in E$,
\[ P(X_{s+t}=j\mid X_t=i) = P(X_s=j\mid X_0=i). \]

(iii) (Jump property). There exists a sequence of strictly increasing stopping times $(T_n)_{n\ge 0}$ such that $T_0=0$, $X_t$ is constant on the interval $[T_n,T_{n+1})$ and $X_{T_n^-}\neq X_{T_n}$ for every $n\ge 0$.

We now introduce the matrices $P_t = (p_t(i,j))_{(i,j)\in E^2}$, whose entries represent the probability that the Markov chain started at $i$ is at site $j$ at time $t$:
\[ p_t(i,j) = P(X_t=j\mid X_0=i). \]
It follows from the Markov property of the process that these matrices form a semigroup, i.e.
\[ P_{t+s} = P_t P_s \ \text{ for every } s,t\ge 0, \qquad P_0 = I. \]
Indeed, the first condition is equivalent to
\[ p_{t+s}(i,j) = \sum_{k\in E} p_t(i,k)\,p_s(k,j), \tag{1.1} \]
but, decomposing according to the state at time $s$, we have
\[ P(X_{t+s}=j\mid X_0=i) = \sum_{k\in E} P(X_{t+s}=j,\ X_s=k\mid X_0=i). \]
Using the definition of conditional probability and the Markov property, the above is equal to
\[ \sum_{k\in E} P(X_{t+s}=j\mid X_0=i,\ X_s=k)\,P(X_s=k\mid X_0=i) = \sum_{k\in E} p_s(k,j)\,p_t(i,k). \]

Differentiability properties

In addition to the usual assumptions on the transition matrix function $P_t$, i.e.

(a) $p_t(i,j)\ge 0$ for all $t>0$ and $i,j\in E$,

(b) $\sum_{j\in E} p_t(i,j) = 1$ for all $t>0$ and $i\in E$,

(c) $\sum_{j\in E} p_t(i,j)\,p_h(j,k) = p_{t+h}(i,k)$ for all $t,h>0$ and $i,k\in E$,

we also assume that, for every $i,j\in E$, $p_t(i,j)$ is continuous for every $t>0$ and that

(d) $\displaystyle\lim_{t\to 0} p_t(i,j) = \begin{cases} 1 & \text{if } i=j,\\ 0 & \text{if } i\neq j. \end{cases}$

It turns out that conditions (a)-(d) have many useful consequences. One of these is that the $p_t(i,j)$ are differentiable for every $t\ge 0$. We will prove only that they are differentiable at $t=0$. We begin with $p_t(i,i)$.

Theorem 1.1.1. For every $i$,
\[ -p_0'(i,i) = \lim_{t\to 0}\frac{1-p_t(i,i)}{t} \tag{1.2} \]
exists, but might be infinite.

Proof. First we show that $p_t(i,i)>0$ for all $t>0$. In fact, (d) asserts that for each $i$ there is $\epsilon>0$ such that $0\le t\le\epsilon$ implies $p_t(i,i)>0$. Now (c) can easily be iterated to give
\[ p_{t_1+\dots+t_n}(i,j) = \sum_{k_1,\dots,k_{n-1}} p_{t_1}(i,k_1)\,p_{t_2}(k_1,k_2)\cdots p_{t_n}(k_{n-1},j). \tag{1.3} \]
Letting $t_1=\dots=t_n = t/n$, $i=j$, and keeping only the term on the right corresponding to $k_1=\dots=k_{n-1}=i$, we obtain
\[ p_t(i,i) \ge [p_{t/n}(i,i)]^n. \tag{1.4} \]
For $n$ sufficiently large $t/n\le\epsilon$; hence $p_{t/n}(i,i)>0$ and so $p_t(i,i)>0$. The inequality
\[ p_{t+s}(i,i) \ge p_t(i,i)\,p_s(i,i) \tag{1.5} \]
holds, as can be proved in the same way. Taking logarithms on both sides and letting $\varphi(t) = -\log p_t(i,i)$ yields the subadditivity inequality for $\varphi$:
\[ \varphi(t+s) \le \varphi(t)+\varphi(s). \]
Also $\varphi(t)\ge 0$, since $0<p_t(i,i)\le 1$. We put
\[ q_i = \sup_{t>0}\frac{\varphi(t)}{t}; \]
then $0\le q_i\le\infty$ since $\varphi(t)\ge 0$ for $t>0$. If $q_i<\infty$, for every $\epsilon>0$ there exists $t_0>0$ such that $\varphi(t_0)/t_0 \ge q_i-\epsilon$. For each $t$, we write $t_0 = nt+\delta$, where $0\le\delta<t$. Then
\[ \varphi(t_0)\le\varphi(nt)+\varphi(\delta)\le\varphi((n-1)t)+\varphi(t)+\varphi(\delta)\le\dots\le n\varphi(t)+\varphi(\delta), \]
and so
\[ q_i-\epsilon \le \frac{\varphi(t_0)}{t_0} \le \frac{n\varphi(t)+\varphi(\delta)}{t_0} = \frac{nt}{t_0}\,\frac{\varphi(t)}{t} + \frac{\varphi(\delta)}{t_0}. \]
Hence
\[ q_i-\epsilon \le \liminf_{t\to 0}\left(\frac{nt}{t_0}\,\frac{\varphi(t)}{t}+\frac{\varphi(\delta)}{t_0}\right). \]
But as $t\to 0$, $nt/t_0\to 1$ and $\varphi(\delta)\to 0$ (since $\delta<t$ and $p_\delta(i,i)\to 1$ as $\delta\to 0$); hence
\[ \liminf_{t\to 0}\left(\frac{nt}{t_0}\,\frac{\varphi(t)}{t}+\frac{\varphi(\delta)}{t_0}\right) = \liminf_{t\to 0}\frac{\varphi(t)}{t}. \]
Now, by definition of $q_i$,
\[ \limsup_{t\to 0}\frac{\varphi(t)}{t} \le q_i. \]
Combining the last three inequalities, we have
\[ q_i-\epsilon \le \liminf_{t\to 0}\frac{\varphi(t)}{t} \le \limsup_{t\to 0}\frac{\varphi(t)}{t} \le q_i. \]
Since $\epsilon$ was arbitrary,
\[ \liminf_{t\to 0}\frac{\varphi(t)}{t} = \limsup_{t\to 0}\frac{\varphi(t)}{t} = q_i. \]
If $q_i=\infty$, we can replace $q_i-\epsilon$ by an arbitrarily large constant $M$ and obtain
\[ M \le \liminf_{t\to 0}\frac{\varphi(t)}{t}, \quad\text{thus}\quad \liminf_{t\to 0}\frac{\varphi(t)}{t} = \infty. \]
In either case we have
\[ \lim_{t\to 0}\frac{\varphi(t)}{t} = q_i. \tag{1.7} \]
Now
\[ \lim_{t\to 0}\frac{1-p_t(i,i)}{t} = \lim_{t\to 0}\frac{1-e^{-\varphi(t)}}{\varphi(t)}\,\frac{\varphi(t)}{t} = q_i. \]

Theorem 1.1.2. For every $i$ and $j$, $i\neq j$,
\[ p_0'(i,j) = \lim_{t\to 0}\frac{p_t(i,j)}{t} \tag{1.8} \]
exists and is finite.

Proof. For each fixed $h>0$, $P_h$ is the transition probability matrix of a Markov chain $(X_{nh})_n$; clearly the $n$-step transition probability $p_h(i,j)^{(n)}$ equals $p_{nh}(i,j)$. We now define ${}_j p_h(i,i)_0 = 1$ and
\[ {}_j p_h(i,i)_n = P(X_{nh}=i,\ X_{vh}\neq j,\ 1\le v<n \mid X_0=i), \]
\[ f_h(i,j)_n = P(X_{nh}=j,\ X_{vh}\neq j,\ 1\le v<n \mid X_0=i). \]
Then
\[ p_{nh}(i,j) \ge \sum_{v=0}^{n-1} {}_j p_h(i,i)_v\; p_h(i,j)\; p_{(n-v-1)h}(j,j), \tag{1.9} \]
since each term on the right corresponds to a possible way of going from $i$ to $j$ in $n$ steps (relative to $P_h$), and these paths are mutually exclusive but not necessarily exhaustive. The term ${}_j p_h(i,i)_v\,p_h(i,j)$ is the probability of the event that the last visit to $i$ before visiting $j$ occurs at trial $v$. Furthermore,
\[ p_{vh}(i,i) = {}_j p_h(i,i)_v + \sum_{m=1}^{v-1} f_h(i,j)_m\; p_{(v-m)h}(j,i) \]
for similar reasons: the first term is the probability of visiting $i$ at trial $v$ without entering state $j$ in the intervening trials, while the sum accounts for the cases in which state $j$ is entered at some intermediate trial. Since
\[ \sum_{m=1}^{v-1} f_h(i,j)_m \le 1, \]
we have
\[ {}_j p_h(i,i)_v \ge p_{vh}(i,i) - \max_{1\le m<v} p_{(v-m)h}(j,i). \tag{1.10} \]
Now, by property (d), for every $\epsilon>0$ and preassigned $i,j$ ($i\neq j$) there exists $t_0$ such that
\[ \max_{0\le t\le t_0} p_t(j,i) < \epsilon, \qquad \max_{0\le t\le t_0} p_t(i,j) < \epsilon, \qquad \min_{0\le t\le t_0} p_t(i,i) > 1-\epsilon, \qquad \min_{0\le t\le t_0} p_t(j,j) > 1-\epsilon. \]
Hence if $nh<t_0$ and $v\le n$, it follows from (1.10) that
\[ {}_j p_h(i,i)_v > 1-2\epsilon. \]
Using this estimate in (1.9), together with $p_{(n-v-1)h}(j,j) > 1-\epsilon$, we obtain
\[ p_{nh}(i,j) \ge (1-2\epsilon)(1-\epsilon)\,n\,p_h(i,j) \ge (1-3\epsilon)\,n\,p_h(i,j), \]
or
\[ \frac{p_{nh}(i,j)}{nh} \ge (1-3\epsilon)\,\frac{p_h(i,j)}{h}, \qquad \text{if } nh<t_0. \tag{1.11} \]
Put
\[ q_{ij} = \liminf_{t\to 0}\frac{p_t(i,j)}{t}. \]
Then (1.11) shows that $q_{ij}<\infty$. In fact, if $q_{ij}=\infty$, we could find $h$ arbitrarily small for which $p_h(i,j)/h$ is arbitrarily large; choosing $n_0$ so that $t_0/2 < n_0 h < t_0$, we would conclude on the basis of (1.11) that $p_{n_0h}(i,j)/(n_0h)$ is arbitrarily large, but at the same time
\[ \frac{p_{n_0h}(i,j)}{n_0h} < \frac{\epsilon}{n_0h} < \frac{2\epsilon}{t_0}. \]
This contradiction implies that $q_{ij}<\infty$. The remainder of the proof is a consequence of (1.11). From the definition of $q_{ij}$ there exists $t_1<t_0$ such that
\[ \frac{p_{t_1}(i,j)}{t_1} < q_{ij}+\epsilon. \]
Since $p_t(i,j)$ is continuous, we can find $h_0$ so small that $t_1+h_0<t_0$ and
\[ \frac{p_t(i,j)}{t} < q_{ij}+\epsilon \quad \text{for } t\in I = [t_1-h_0,\ t_1+h_0]. \tag{1.12} \]
Now, for any $h<h_0$ we determine an integer $n_h$ such that $n_h h\in I$; thus, using (1.11) and (1.12), we find
\[ (1-3\epsilon)\,\frac{p_h(i,j)}{h} \le \frac{p_{n_h h}(i,j)}{n_h h} < q_{ij}+\epsilon, \qquad h<h_0, \]
from which we conclude that
\[ (1-3\epsilon)\limsup_{h\to 0}\frac{p_h(i,j)}{h} \le q_{ij}+\epsilon. \]
Since $\epsilon$ is arbitrary, it follows that
\[ \limsup_{h\to 0}\frac{p_h(i,j)}{h} \le q_{ij}, \]
and the theorem follows from the definition of $q_{ij}$ as the corresponding liminf.

Remark 1. We have that, in general,

\[ \sum_{j\neq i} q_{ij} \le q_i \quad \text{for all } i. \]
Indeed, since
\[ \sum_{j\neq i} p_h(i,j) = 1 - p_h(i,i), \]
we have, for a finite $N$,
\[ \sum_{j=1,\ j\neq i}^{N} p_h(i,j) \le 1 - p_h(i,i). \]
Dividing by $h$ and letting $h\to 0$ leads to the inequality
\[ \sum_{j=1,\ j\neq i}^{N} q_{ij} \le q_i. \]
Since $N$ is arbitrary and all the terms are positive, the assertion follows.

Forward and Backward Kolmogorov equations for conservative processes

A continuous time Markov chain is said to be conservative if
\[ \sum_{j\neq i} q_{ij} = q_i < \infty \quad \text{for all } i. \]

From now on we consider only conservative Markov chains. We now prove that, for a conservative Markov chain, not only are all the $p_t(i,j)$ differentiable, but they satisfy a set of differential equations known as the backward Kolmogorov equations. The differentiability follows directly from (a)-(d), but the proof under the assumption of conservativeness is quite simple. Indeed,
\[ p_{s+t}(i,j) - p_t(i,j) = \sum_k p_s(i,k)\,p_t(k,j) - p_t(i,j) = \sum_{k\neq i} p_s(i,k)\,p_t(k,j) + [p_s(i,i)-1]\,p_t(i,j). \]
Dividing by $s$ and letting $s\to 0^+$, we obtain formally the backward equations
\[ p_t'(i,j) = \sum_{k\neq i} q_{ik}\,p_t(k,j) - q_i\,p_t(i,j) \quad \text{for all } i,j. \tag{1.13} \]
To derive these equations rigorously, we must show that
\[ \lim_{s\to 0^+}\frac{1}{s}\sum_{k\neq i} p_s(i,k)\,p_t(k,j) = \sum_{k\neq i} q_{ik}\,p_t(k,j). \]
Now
\[ \liminf_{s\to 0}\frac{1}{s}\sum_{k\neq i} p_s(i,k)\,p_t(k,j) \ge \liminf_{s\to 0}\frac{1}{s}\sum_{k=1,\ k\neq i}^{N} p_s(i,k)\,p_t(k,j) = \sum_{k=1,\ k\neq i}^{N} q_{ik}\,p_t(k,j) \]
for any $N>0$, and so
\[ \liminf_{s\to 0}\frac{1}{s}\sum_{k\neq i} p_s(i,k)\,p_t(k,j) \ge \sum_{k\neq i} q_{ik}\,p_t(k,j). \tag{1.14} \]

On the other hand, for $N>i$,
\[ \sum_{k\neq i} p_s(i,k)\,p_t(k,j) \le \sum_{k=1,\ k\neq i}^{N} p_s(i,k)\,p_t(k,j) + \sum_{k=N+1}^{\infty} p_s(i,k) = \sum_{k=1,\ k\neq i}^{N} p_s(i,k)\,p_t(k,j) + 1 - p_s(i,i) - \sum_{k=1,\ k\neq i}^{N} p_s(i,k). \]
Dividing by $s$ and taking $\limsup_{s\to 0^+}$ of both sides, we obtain
\[ \limsup_{s\to 0}\frac{1}{s}\sum_{k\neq i} p_s(i,k)\,p_t(k,j) \le \sum_{k=1,\ k\neq i}^{N} q_{ik}\,p_t(k,j) + q_i - \sum_{k=1,\ k\neq i}^{N} q_{ik}. \]
Letting $N\to\infty$ and using the conservative nature of the system, we see that
\[ \limsup_{s\to 0^+}\frac{1}{s}\sum_{k\neq i} p_s(i,k)\,p_t(k,j) \le \sum_{k\neq i} q_{ik}\,p_t(k,j). \]
Comparing this inequality with (1.14), we conclude that
\[ \lim_{s\to 0}\frac{1}{s}\sum_{k\neq i} p_s(i,k)\,p_t(k,j) \]
exists and equals $\sum_{k\neq i} q_{ik}\,p_t(k,j)$.

In a similar way we can formally derive a set of equations called the forward equations. We write
\[ p_{s+t}(i,j) - p_s(i,j) = \sum_k p_s(i,k)\,p_t(k,j) - p_s(i,j) = \sum_k p_s(i,k)\,[p_t(k,j) - \delta_{kj}]. \]
Dividing by $t$ and letting $t\to 0$, we obtain formally
\[ p_s'(i,j) = \sum_{k\neq j} p_s(i,k)\,q_{kj} - p_s(i,j)\,q_j \quad \text{for all } i,j, \tag{1.15} \]
the forward equations. Both sets of equations assume a very simple form in matrix notation. Indeed, consider the infinite matrix $L = (l_{ij})_{(i,j)\in E^2}$ defined by
\[ l_{ij} = \begin{cases} q_{ij}, & i\neq j,\\ -q_i, & i=j, \end{cases} \]

called the infinitesimal generator of the process. Recall that we have
\[ l_{ij} \ge 0 \ \text{ for } i\neq j, \qquad l_{ii} = -\sum_{j\neq i} l_{ij}. \]
The backward equations may be compactly expressed by the matrix differential equation
\[ \frac{d}{dt}P_t = LP_t, \tag{1.16} \]
and the forward equations take the form
\[ \frac{d}{dt}P_t = P_t L. \tag{1.17} \]
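As a quick numerical sanity check of (1.16) and (1.17) (an illustration, not part of the thesis), for a finite-state generator we have $P_t = e^{tL}$, so a finite-difference derivative of $P_t$ can be compared to both $LP_t$ and $P_tL$. The Python sketch below uses an arbitrary 2-state generator and a truncated Taylor series for the matrix exponential.

```python
# Illustrative 2-state generator; the rates a, b are arbitrary.
a, b = 1.5, 0.7
L = [[-a, a], [b, -b]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(M, t, terms=40):
    """exp(tM) via a truncated Taylor series; adequate for small 2x2 matrices."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity = (tM)^0 / 0!
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        power = matmul(power, M)        # M^n
        fact *= n
        for i in range(2):
            for j in range(2):
                result[i][j] += (t ** n / fact) * power[i][j]
    return result

t, h = 0.8, 1e-6
Pt = expm(L, t)
Pp, Pm = expm(L, t + h), expm(L, t - h)
dPdt = [[(Pp[i][j] - Pm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]
back = matmul(L, Pt)   # backward equation (1.16): dP_t/dt = L P_t
fwd = matmul(Pt, L)    # forward equation (1.17):  dP_t/dt = P_t L
for i in range(2):
    for j in range(2):
        assert abs(dPdt[i][j] - back[i][j]) < 1e-4
        assert abs(dPdt[i][j] - fwd[i][j]) < 1e-4
# Rows of P_t stay probability vectors, consistently with (a)-(b).
assert abs(sum(Pt[0]) - 1.0) < 1e-9 and abs(sum(Pt[1]) - 1.0) < 1e-9
```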

Remark 2. In the case of a finite number of states, it follows from the equation
\[ 1 - p_t(i,i) = \sum_{j\neq i} p_t(i,j) \]
and Theorem 1.1.2 that the derivative $p_0'(i,i) = -q_i$ exists and is finite, and that
\[ q_i = \sum_{j\neq i} q_{ij}. \tag{1.18} \]
If the number of states is infinite, Theorem 1.1.2 does not imply finiteness of $q_i$; moreover, finiteness of $q_i$ does not imply equation (1.18).

In the space $C_b(E)$ of bounded measurable functions $f:E\to\mathbb{R}$ we introduce the operators $(P_t)_{t\ge 0}$ and $L$ defined by
\[ (P_t f)(i) = \sum_{j\in E} p_t(i,j)\,f(j), \]
\[ (Lf)(i) = \sum_{j\in E} l_{ij}\,f(j) = \sum_{j\neq i} l_{ij}\,f(j) + l_{ii}\,f(i) = \sum_{j\neq i} l_{ij}\,f(j) - \sum_{j\neq i} l_{ij}\,f(i) = \sum_{j\neq i} l_{ij}\,[f(j)-f(i)]. \]
By duality we can extend the operators to the space of probability measures on $E$:
\[ (\mu P_t)(i) = \sum_{j\in E}\mu(j)\,p_t(j,i), \qquad (\mu L)(i) = \sum_{j\in E}\mu(j)\,l_{ji}. \]
Notice that $\langle\mu L, 1\rangle = \langle\mu, L1\rangle = 0$ for every probability measure $\mu$, since $L1 = 0$, where $1$ stands for the function on $E$ constantly equal to $1$. In particular, $\mu L$ is not a probability measure.

We also have the following property, which will be useful later on:
\[ E[f(X_t)\mid X_s] = \sum_{j\in E} P(X_t=j\mid X_s)\,f(j) = \sum_{j\in E} p_{t-s}(X_s,j)\,f(j) = (P_{t-s}f)(X_s). \tag{1.19} \]


Martingales in the context of continuous time Markov chains

We now introduce a class of martingales in the context of Markov processes. Let $(X_t)_t$ be a Markov process and denote by $L$ its infinitesimal generator. Consider a bounded function $F:\mathbb{R}_+\times E\to\mathbb{R}$, smooth in the first coordinate uniformly over the second: for each $x\in E$, $F(\cdot,x)$ is twice continuously differentiable and there exists a finite constant $C\in\mathbb{R}$ such that
\[ \sup_{(s,x)} \left|(\partial_s^j F)(s,x)\right| \le C \tag{1.20} \]
for $j=1,2$, where $\partial_s^j F$ stands for the $j$-th time derivative of $F(\cdot,x)$. We define
\[ M^F(t) = F(t,X_t) - F(0,X_0) - \int_0^t (\partial_s + L)F(s,X_s)\,ds, \]
\[ N^F(t) = \left(M^F(t)\right)^2 - \int_0^t \left[ \big(L F(s,\cdot)^2\big)(X_s) - 2F(s,X_s)\,\big(LF(s,\cdot)\big)(X_s) \right] ds, \]
for every function $F$ satisfying assumption (1.20).

Lemma 1.1.3. Denote by $\{\mathcal{F}_t,\ t\ge 0\}$ the filtration induced by the Markov process: $\mathcal{F}_t = \sigma(X_s,\ s\le t)$. The processes $M^F(t)$ and $N^F(t)$ are $\mathcal{F}_t$-martingales.

Proof. We start by showing that $M^F(t)$ is a martingale. Fix $0\le s<t$. We need to check that $E[M^F(t)-M^F(s)\mid\mathcal{F}_s]=0$, which is equivalent to
\[ E[F(t,X_t)\mid\mathcal{F}_s] = F(s,X_s) + \int_s^t E[(\partial_r+L)F(r,X_r)\mid\mathcal{F}_s]\,dr. \]
For each $r\ge 0$, denote by $F_r:E\to\mathbb{R}$, $F_r':E\to\mathbb{R}$ the functions that at $x$ take the values $F(r,x)$, $\partial_r F(r,x)$ respectively. By the Markov property and a change of variables in the integral, the previous identity becomes
\[ E[F(t,X_t)\mid X_s] = F(s,X_s) + \int_0^{t-s} E[(\partial_r+L)F(r+s,X_{r+s})\mid X_s]\,dr. \tag{1.21} \]
We now use (1.19) and obtain that (1.21) is equivalent to
\[ (P_{t-s}F_t)(X_s) = F_s(X_s) + \int_0^{t-s} \left[(P_r F_{r+s}')(X_s) + (P_r L F_{r+s})(X_s)\right] dr. \]
Since for $t=s$ this identity is trivially satisfied, we just need to check that the time derivatives of both expressions are equal, i.e., that
\[ \partial_t (P_{t-s}F_t)(x) = (P_{t-s}F_t')(x) + (P_{t-s}LF_t)(x) \]
for every $x\in E$ and $0\le s<t$.

To prove this identity we compute the left hand side. Fix $h>0$ and rewrite the difference $(P_{t+h-s}F_{t+h})(x) - (P_{t-s}F_t)(x)$, divided by $h$, as
\[ \frac{E_x[F_{t+h}(X_{t+h-s}) - F_t(X_{t-s})]}{h}, \]
where $E_x$ denotes the expected value with respect to the probability $P(\cdot\mid X_0=x)$. The above is equal to
\[ \frac{E_x[F_{t+h}(X_{t+h-s}) - F_t(X_{t+h-s})]}{h} + \frac{E_x[F_t(X_{t+h-s}) - F_t(X_{t-s})]}{h}. \tag{1.22} \]
The first expression in (1.22) is equal to
\[ \frac{1}{h}\int_t^{t+h} E_x[F_r'(X_{t+h-s})]\,dr = \frac{1}{h}\int_t^{t+h} E_x[F_r'(X_{t+h-s}) - F_t'(X_{t+h-s})]\,dr + E_x[F_t'(X_{t+h-s}) - F_t'(X_{t-s})] + E_x[F_t'(X_{t-s})]. \]
Since by assumption (1.20) $(\partial_r F)(\cdot,x)$ is Lipschitz continuous uniformly in $x$, the first term vanishes as $h\to 0$. The second term, which is equal to $(P_{t+h-s}F_t')(x) - (P_{t-s}F_t')(x)$, also vanishes as $h\to 0$ because the semigroup $P_t$ is continuous. The third coincides with $(P_{t-s}F_t')(x)$. Therefore, as $h\to 0$, the first expression in (1.22) converges to $(P_{t-s}F_t')(x)$.

The second expression in (1.22) is equal to
\[ \frac{1}{h}\left[(P_{t+h-s}F_t)(x) - (P_{t-s}F_t)(x)\right] = \frac{1}{h}\int_{t-s}^{t+h-s}\frac{d}{dr}(P_r F_t)(x)\,dr = \frac{1}{h}\int_{t-s}^{t+h-s}(P_r L F_t)(x)\,dr = \frac{1}{h}\int_{t-s}^{t+h-s} E_x[LF_t(X_r)]\,dr, \]
which converges, as $h\to 0$, to $(P_{t-s}LF_t)(x)$. This proves that $M^F(t)$ is a martingale.

We now show that $N^F(t)$ is a martingale. We have that

\[
\begin{aligned}
M^F(t)^2 = {} & F(t,X_t)^2 - 2F(t,X_t)\int_0^t(\partial_s+L)F(s,X_s)\,ds + \left(\int_0^t(\partial_s+L)F(s,X_s)\,ds\right)^2 \\
& + F(0,X_0)^2 + 2F(0,X_0)\int_0^t(\partial_s+L)F(s,X_s)\,ds - 2F(t,X_t)F(0,X_0).
\end{aligned}
\]
But the second line of the expression is equal to
\[ F(0,X_0)\left[F(0,X_0) - 2F(t,X_t) + 2\int_0^t(\partial_s+L)F(s,X_s)\,ds\right] = -F(0,X_0)^2 - 2F(0,X_0)\,M^F(t), \]
which is a martingale (plus an $\mathcal{F}_0$-measurable constant) by the first part of the lemma. Therefore $M^F(t)^2$ is equal to
\[ F(t,X_t)^2 - 2F(t,X_t)\int_0^t(\partial_s+L)F(s,X_s)\,ds + \left(\int_0^t(\partial_s+L)F(s,X_s)\,ds\right)^2 \tag{1.23} \]
plus a martingale term. Since $F^2(t,X_t) - \int_0^t(\partial_s+L)F^2(s,X_s)\,ds$ is a martingale (again by the first part, applied to $F^2$), $F^2(t,X_t)$ is equal to a martingale added to
\[ \int_0^t(\partial_s+L)F^2(s,X_s)\,ds. \tag{1.24} \]
The second and third expressions in (1.23) can be rewritten as
\[ -2M_0^F(t)\int_0^t(\partial_s+L)F(s,X_s)\,ds - \left(\int_0^t(\partial_s+L)F(s,X_s)\,ds\right)^2, \tag{1.25} \]
where $M_0^F(t) = M^F(t) + F(0,X_0)$. By It\^o's formula, the first term in this expression is equal to a martingale added to
\[ -2\int_0^t F(s,X_s)(\partial_s+L)F(s,X_s)\,ds + 2\int_0^t\left(\int_0^s(\partial_r+L)F(r,X_r)\,dr\right)(\partial_s+L)F(s,X_s)\,ds. \tag{1.26} \]
Integrating by parts,
\[ \int_0^t\left(\int_0^s(\partial_r+L)F(r,X_r)\,dr\right)(\partial_s+L)F(s,X_s)\,ds = \left(\int_0^t(\partial_s+L)F(s,X_s)\,ds\right)^2 - \int_0^t\left(\int_0^s(\partial_r+L)F(r,X_r)\,dr\right)(\partial_s+L)F(s,X_s)\,ds, \]
so the second term of (1.26) equals $\left(\int_0^t(\partial_s+L)F(s,X_s)\,ds\right)^2$ and cancels with the second term of (1.25). The remaining expression, added to (1.24), gives
\[ \int_0^t\left[(\partial_s+L)F^2 - 2F(\partial_s+L)F\right](s,X_s)\,ds = \int_0^t\left[\big(LF(s,\cdot)^2\big)(X_s) - 2F(s,X_s)\big(LF(s,\cdot)\big)(X_s)\right]ds, \]
since $\partial_s F^2 = 2F\,\partial_s F$; this is just the integral term that we need to subtract in order to turn $M^F(t)^2$ into a martingale. This concludes the proof of the lemma.

1.2 Weak convergence in $\mathcal{M}_1^+$

We denote by $S$ a general metric space and let $\mathcal{S}$ be its Borel $\sigma$-field. We work on $\mathcal{M}_1^+ := \mathcal{M}_1^+(S)$, the space of finite positive measures on $S$ with mass bounded by $1$. Given a measure $\pi\in\mathcal{M}_1^+$ on $S$ and a real function $f$ on $S$, we write
\[ \langle\pi,f\rangle := \int_S f\,d\pi. \tag{1.27} \]

Definition 1.2.1. If a sequence of measures $(\pi_n)_{n\in\mathbb{N}}\subset\mathcal{M}_1^+$ and a measure $\pi\in\mathcal{M}_1^+$ on $(S,\mathcal{S})$ satisfy
\[ \langle\pi_n,f\rangle \to \langle\pi,f\rangle \quad \text{as } n\to\infty \tag{1.28} \]
for every bounded, continuous real function $f$ on $S$, we say that $(\pi_n)_{n\in\mathbb{N}}$ converges weakly to $\pi$.


Theorem 1.2.1 (Portmanteau Theorem). Let $(\pi_n)_{n\in\mathbb{N}}$, $\pi\in\mathcal{M}_1^+$. The following conditions are equivalent:

(i) $(\pi_n)_{n\in\mathbb{N}}$ converges weakly to $\pi$;

(ii) $\langle\pi_n,f\rangle\to\langle\pi,f\rangle$ for all bounded, uniformly continuous $f$;

(iii) $\limsup_n \pi_n(F)\le\pi(F)$ for all closed $F$;

(iv) $\liminf_n \pi_n(G)\ge\pi(G)$ for all open $G$.

In the next chapter we will need to work with $\mathcal{M}_1^+(\mathbb{T}^d)$, where $\mathbb{T}^d$ denotes the $d$-dimensional torus, endowed with the weak topology. It is possible to topologize this space in such a way that weak convergence is convergence in this topology: we can define a metric on $\mathcal{M}_1^+(\mathbb{T}^d)$ by introducing a dense countable family $\{f_k;\ k\ge 1\}$ of continuous functions on $\mathbb{T}^d$ and by defining the distance $\delta(\cdot,\cdot)$ by
\[ \delta(\mu,\nu) = \sum_{k=1}^{\infty}\frac{1}{2^k}\,\frac{|\langle\mu,f_k\rangle - \langle\nu,f_k\rangle|}{1 + |\langle\mu,f_k\rangle - \langle\nu,f_k\rangle|}. \tag{1.29} \]
We assume hereafter that $f_1 = 1$. It is easy to see that, if a sequence $(\pi_n)_{n\in\mathbb{N}}$ converges weakly to $\pi$, then $\delta(\pi_n,\pi)\to 0$.

Conversely, suppose that $\delta(\pi_n,\pi)\to 0$; we fix a bounded continuous real function $g$ and show that
\[ \langle\pi_n,g\rangle \to \langle\pi,g\rangle. \]
Since $\{f_k;\ k\ge 1\}$ is dense in the space of bounded continuous real functions, for every $\epsilon>0$ we can find a function $f_{k_0}$ such that $\|g-f_{k_0}\| < \epsilon$. We have
\[ |\langle\pi_n,g\rangle - \langle\pi,g\rangle| \le |\langle\pi_n,g\rangle - \langle\pi_n,f_{k_0}\rangle| + |\langle\pi_n,f_{k_0}\rangle - \langle\pi,f_{k_0}\rangle| + |\langle\pi,f_{k_0}\rangle - \langle\pi,g\rangle|. \]
Since $\pi_n$, $\pi$ have mass bounded by $1$, we obtain
\[ |\langle\pi_n,g\rangle - \langle\pi_n,f_{k_0}\rangle| \le \|g-f_{k_0}\| < \epsilon, \qquad |\langle\pi,g\rangle - \langle\pi,f_{k_0}\rangle| \le \|g-f_{k_0}\| < \epsilon. \]
Therefore, we only need to show that
\[ |\langle\pi_n,f_{k_0}\rangle - \langle\pi,f_{k_0}\rangle| \to 0. \tag{1.30} \]
Since $\delta(\pi_n,\pi)\to 0$, there exists an $n_0\in\mathbb{N}$ such that $\delta(\pi_n,\pi)\le\epsilon$ for every $n\ge n_0$. Thus, for every $n\ge n_0$,
\[ \sum_{k=1}^{\infty}\frac{1}{2^k}\,\frac{|\langle\pi_n,f_k\rangle - \langle\pi,f_k\rangle|}{1+|\langle\pi_n,f_k\rangle - \langle\pi,f_k\rangle|} \le \epsilon. \]
Hence, in particular,
\[ \frac{1}{2^{k_0}}\,\frac{|\langle\pi_n,f_{k_0}\rangle - \langle\pi,f_{k_0}\rangle|}{1+|\langle\pi_n,f_{k_0}\rangle - \langle\pi,f_{k_0}\rangle|} \le \epsilon, \]
from which
\[ |\langle\pi_n,f_{k_0}\rangle - \langle\pi,f_{k_0}\rangle| \le \frac{2^{k_0}\epsilon}{1-2^{k_0}\epsilon} \le 2^{k_0+1}\epsilon \]
for $\epsilon$ sufficiently small.
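The metric (1.29) is easy to evaluate for discrete measures once the family $\{f_k\}$ is truncated. The sketch below is illustrative (not thesis code): it takes $S = \mathbb{T}^1$, uses a trigonometric family $f_1 = 1$, $f_{2m}(u) = \cos(2\pi m u)$, $f_{2m+1}(u) = \sin(2\pi m u)$, truncates at $K = 50$ terms, and checks that point masses that are close on the torus are close in $\delta$.

```python
import math

K = 50  # truncation level for the countable family {f_k}

def f(k, u):
    """Trigonometric family on the torus [0, 1); f_1 is the constant 1."""
    if k == 1:
        return 1.0
    m = k // 2
    return math.cos(2 * math.pi * m * u) if k % 2 == 0 else math.sin(2 * math.pi * m * u)

def pair(measure, k):
    """<mu, f_k> for a discrete measure given as [(weight, point), ...]."""
    return sum(w * f(k, u) for w, u in measure)

def delta(mu, nu):
    """Truncated version of the distance (1.29)."""
    s = 0.0
    for k in range(1, K + 1):
        d = abs(pair(mu, k) - pair(nu, k))
        s += (d / (1.0 + d)) / 2 ** k
    return s

target = [(1.0, 0.5)]                    # Dirac mass at 1/2
d_far = delta([(1.0, 0.9)], target)      # point mass far from 1/2
d_near = delta([(1.0, 0.51)], target)    # point mass close to 1/2
assert d_near < d_far                    # closer point mass => smaller distance
assert delta(target, target) == 0.0
```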


Chapter 2

Individual based model behind SIR

In this chapter we introduce an individual based model for epidemics that leads to the SIR model in a suitable macroscopic limit. The consequence is that the deterministic model, which is frequently fairly easy to analyze, behaves like the particle system, provided that the population is sufficiently large. However, it only gives macroscopic information on the state of the system: the details of what happens at the single sites are lost, and only the collective behavior is recorded. We follow the steps of Chapter 4 of the book "Scaling limits of interacting particle systems" by C. Kipnis and C. Landim.

2.1 Model definition

Given a positive integer $N$, we consider the discrete $d$-dimensional torus $\mathbb{T}^d_N = \mathbb{Z}^d/N\mathbb{Z}^d$. Each site $x\in\mathbb{T}^d_N$ represents an individual; hence there are $N^d$ individuals. Every subject can only be in one of the following states:

• S=Susceptible, consisting of healthy people that might be infected;

• I=Infected, consisting of infected individuals that can spread the disease to susceptible ones;

• R=Recovered, consisting of individuals that had been infected but can no longer spread or catch the disease.

The whole system can then be in any of the states in $S_N = \{S,I,R\}^{\mathbb{T}^d_N}$. An element $\eta_t^N\in S_N$ is called the configuration at time $t$. The value $\eta_t^N(x)\in\{S,I,R\}$ denotes, at time $t$, the state of subject $x\in\mathbb{T}^d_N$. We suppose that only the following transitions are permitted:
\[ S \to I \to R; \]

this means that a susceptible individual can be infected and an infected individual can recover, but a recovered individual can no longer catch the disease. Moreover, $\eta_t^N(x)$ can switch from $S$ to $I$ with rate
\[ \sum_{y\in\mathbb{T}^d_N}\delta(\eta_t^N(y),I)\,p_N(x,y) = \frac{p_I}{N^d}\sum_{y\in\mathbb{T}^d_N}\delta(\eta_t^N(y),I), \]
where $\delta(A,B)$ denotes the number $1$ if $A,B\in\{S,I,R\}$ are equal and $0$ if they are different, and $p_N(x,y)$ denotes the rate of infection from $y$ to $x$; $\eta_t^N(x)$ can switch from $I$ to $R$ with rate $p_R$. That is, we are considering the uniform scenario, where
\[ p_N(x,y) = \frac{p_I}{N^d} \quad \text{for all } x,y\in\mathbb{T}^d_N. \tag{2.1} \]
This assumption means that in the unit of time each infectious subject has rate $p_I$ of infecting another subject among the $N^d$ that form the system; $p_I/N^d$ is the rate of infecting precisely one given other subject $x$.

To describe the dynamics, we use a continuous time Markov chain. For every positive integer $N$ and every initial configuration $\eta_0^N\in S_N$, $(\eta_t^N)_{t\ge 0}$ is a Markov process defined on a probability space $(\Omega,\mathcal{F},P)$. The dynamics is given once we specify the infinitesimal generator $L_N$, defined as an operator on the family of functions $f:S_N\to\mathbb{R}$. In our case we choose
\[ (L_N f)(\eta^N) = \sum_{x\in\mathbb{T}^d_N}\left[\left(\frac{p_I}{N^d}\sum_{y\in\mathbb{T}^d_N}\delta(\eta^N(y),I)\right)\delta(\eta^N(x),S) + p_R\,\delta(\eta^N(x),I)\right]\left[f(\eta^{N,x}) - f(\eta^N)\right], \]
where $\eta^{N,x}$ denotes the configuration in $S_N$
\[ \eta^{N,x}(y) = \begin{cases} I & \text{if } y=x \text{ and } \eta^N(x)=S,\\ R & \text{if } y=x \text{ and } \eta^N(x)=I,\\ \eta^N(y) & \text{otherwise.} \end{cases} \]
This prescription of the generator means that, during a unit of time, each pair of subjects $(x,y)\in\mathbb{T}^d_N\times\mathbb{T}^d_N$ may undergo an infection event, which happens only if $y$ is infectious and $x$ is susceptible, and it happens with rate $p_N(x,y) = p_I/N^d$. During the same unit of time, any subject $x\in\mathbb{T}^d_N$ which is infected may recover, with rate $p_R$.
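In the uniform scenario every rate depends on the configuration only through the total counts, so the pair of counts $(S_N(t), I_N(t))$ is itself a continuous time Markov chain and can be simulated directly with a Gillespie-type algorithm: each susceptible is infected at rate $(p_I/N^d)\,I_N(t)$ and each infected recovers at rate $p_R$. The Python sketch below is purely illustrative (the thesis code is in R, and the parameter values are arbitrary); it writes $N$ for the total number of individuals $N^d$.

```python
import random

def simulate_sir(n_individuals, p_i, p_r, i0, t_max, rng):
    """Gillespie simulation of the uniform-scenario individual based model.

    Total infection rate: p_i * S * I / N; total recovery rate: p_r * I.
    """
    s, i, t = n_individuals - i0, i0, 0.0
    path = [(t, s, i)]
    while i > 0 and t < t_max:
        rate_inf = p_i * s * i / n_individuals
        rate_rec = p_r * i
        total = rate_inf + rate_rec
        t += rng.expovariate(total)          # exponential waiting time
        if rng.random() < rate_inf / total:  # next event is an infection
            s, i = s - 1, i + 1
        else:                                # next event is a recovery
            i -= 1
        path.append((t, s, i))
    return path

rng = random.Random(1)
path = simulate_sir(n_individuals=1000, p_i=2.0, p_r=1.0, i0=10,
                    t_max=30.0, rng=rng)
_, s_end, i_end = path[-1]
r_end = 1000 - s_end - i_end
assert s_end + i_end + r_end == 1000        # population is conserved
assert i_end == 0 or path[-1][0] >= 30.0    # epidemic ended or time ran out
```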

2.2 Introduction to the macroscopic limit and idea of the proof

In this section we want to introduce the macroscopic limit as $N\to\infty$ of the individual based process presented in the first section. In particular, we will see that the hydrodynamic equations, in a suitable macroscopic limit, are linked to the classical SIR model, given by the following system of differential equations for the quantities $S(t)$ = number of susceptible, $I(t)$ = number of infectious, $R(t)$ = number of recovered individuals:
\[ \begin{cases} \dfrac{dS}{dt} = -\dfrac{\beta}{N^d}\,IS\\[4pt] \dfrac{dI}{dt} = \dfrac{\beta}{N^d}\,IS - \gamma I\\[4pt] \dfrac{dR}{dt} = \gamma I \end{cases} \tag{2.2} \]
where $N^d$ is the total number of subjects, which remains constant in time, and $\beta$ and $\gamma$ are positive constants that represent the contagion rate and the removal rate respectively. Since the number of individuals $N^d$ does not change over time, we have
\[ N^d = S(t) + I(t) + R(t) \quad \text{for every } t; \tag{2.3} \]
therefore, it is sufficient to consider the system
\[ \begin{cases} \dfrac{dS}{dt} = -\dfrac{\beta}{N^d}\,IS\\[4pt] \dfrac{dI}{dt} = \dfrac{\beta}{N^d}\,IS - \gamma I \end{cases} \tag{2.4} \]

and then derive R(t) from (2.3).
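For reference, system (2.2) is straightforward to integrate numerically. The sketch below (illustrative only; parameter values are arbitrary and this is not the thesis code) uses a fixed-step RK4 scheme and checks the conservation law (2.3).

```python
def sir_ode(beta, gamma, n, s0, i0, t_max, dt=0.01):
    """Integrate the classical SIR system (2.2) with a fixed-step RK4 scheme."""
    def deriv(state):
        s, i, r = state
        ds = -beta * i * s / n
        di = beta * i * s / n - gamma * i
        return (ds, di, gamma * i)

    state, t = (s0, i0, 0.0), 0.0
    while t < t_max:
        k1 = deriv(state)
        k2 = deriv(tuple(x + dt / 2 * k for x, k in zip(state, k1)))
        k3 = deriv(tuple(x + dt / 2 * k for x, k in zip(state, k2)))
        k4 = deriv(tuple(x + dt * k for x, k in zip(state, k3)))
        state = tuple(x + dt / 6 * (a + 2 * b + 2 * c + d)
                      for x, a, b, c, d in zip(state, k1, k2, k3, k4))
        t += dt
    return state

n = 1000.0
s, i, r = sir_ode(beta=2.0, gamma=1.0, n=n, s0=990.0, i0=10.0, t_max=30.0)
assert abs((s + i + r) - n) < 1e-6  # conservation law (2.3)
assert i < 1.0                       # the outbreak has essentially died out
assert s < n / 2                     # with beta/gamma = 2, most were infected
```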

We now briefly present some concepts necessary to give the exact statement of the theorem and the idea of the proof.

Given a positive measure $\pi$ on $\mathbb{T}^d$, the $d$-dimensional continuous torus, of finite total mass, and a continuous function $G:\mathbb{T}^d\to\mathbb{R}$, we denote by $\langle\pi,G\rangle$ the integral of $G$ with respect to $\pi$:
\[ \langle\pi,G\rangle := \int_{\mathbb{T}^d} G(u)\,\pi(du). \]
Recall that we denoted by $\mathcal{M}_1^+ = \mathcal{M}_1^+(\mathbb{T}^d)$ the space of finite positive measures on $\mathbb{T}^d$ with mass bounded by $1$, endowed with the weak topology.

In order to study the hydrodynamic limit of the system, we consider the time evolution of the empirical measures $\pi_t^{N,A}$, $A\in\{S,I,R\}$, associated to the particle system:
\[ \pi_t^{N,A}(du) = \pi^{N,A}(\eta_t^N, du) := \frac{1}{N^d}\sum_{x\in\mathbb{T}^d_N}\delta(\eta_t^N(x),A)\,\delta_{x/N}(du), \qquad A\in\{S,I,R\}, \tag{2.5} \]
where $\delta_{x/N}(du)$ is the Dirac measure on $\mathbb{T}^d$ centered at $x/N$; and we try to find simplified equations satisfied by them.


Remark 4. Notice that these are positive measures, not of mass one in general, on the continuous unitary torus $\mathbb{T}^d$: if $x \in \mathbb{T}^d_N$, then $x/N \in \mathbb{T}^d$, and $\delta_{x/N}$, $\pi_t^{N,A}$ are measures on $\mathbb{T}^d$. The reason to introduce these quantities is to look at the problem zoomed out to large distances, as a continuum.

Remark 5. We can also notice that there is a one to one correspondence between configurations $\eta$ and empirical measures $\pi^{N,A}(\eta, du)$. In particular, $\pi_t^{N,A}$ inherits the Markov property from $\eta_t$.

Moreover, since $\pi_t^{N,S}(du) + \pi_t^{N,I}(du) + \pi_t^{N,R}(du)$ is a fixed measure of total mass 1 (it does not depend on the configuration), it is sufficient to analyze the evolution of $\Pi^N = \left(\pi_t^{N,S}(du), \pi_t^{N,I}(du)\right)_{t \in [0,T]}$. For every $G \in C(\mathbb{T}^d, \mathbb{R})$, we have
\[
\langle \pi_t^{N,A}, G\rangle = \frac{1}{N^d} \sum_{x \in \mathbb{T}^d_N} \delta(\eta_t^N(x), A)\, G\!\left(\frac{x}{N}\right), \qquad A \in \{S, I, R\}.
\]

The total numbers of susceptible, infectious and recovered subjects are
\[
S_N(t) := \sum_{x \in \mathbb{T}^d_N} \delta(\eta_t^N(x), S), \qquad
I_N(t) := \sum_{x \in \mathbb{T}^d_N} \delta(\eta_t^N(x), I), \qquad
R_N(t) := \sum_{x \in \mathbb{T}^d_N} \delta(\eta_t^N(x), R),
\]
thus they can be expressed as
\[
A_N(t) = N^d \langle \pi_t^{N,A}, 1\rangle, \qquad A \in \{S, I, R\}. \tag{2.6}
\]
We consider the distributions of the empirical measures as a sequence of probability measures on a fixed space. Since there are jumps, this space must be $\mathcal{D} = D([0,T], \mathcal{M}_1^+)$, the space of right continuous functions with left limits taking values in $\mathcal{M}_1^+$. We denote by $(Q^N)_{N>0}$ the sequence of probability measures on $\mathcal{D}^2$ corresponding to the Markov process $\Pi^N$:
\[
Q^N(B) = P\left(\Pi^N \in B\right).
\]
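The pairings $\langle \pi_t^{N,A}, G\rangle$ and the identity (2.6) are easy to check numerically; the sketch below does so in dimension $d = 1$ for an arbitrary random configuration (the configuration and the test function are illustrative, not prescribed by the thesis).

```python
import random

def pairing(eta, A, G, N):
    """<pi^{N,A}, G> = N^{-1} * sum_x 1{eta(x) = A} * G(x / N)  (case d = 1)."""
    return sum(G(x / N) for x in range(N) if eta[x] == A) / N

N = 100
rng = random.Random(0)
# A random configuration eta : {0, ..., N-1} -> {S, I, R}
eta = [rng.choices("SIR", weights=[0.8, 0.15, 0.05])[0] for _ in range(N)]

# The three empirical measures sum to a measure of total mass 1,
mass = sum(pairing(eta, A, lambda u: 1.0, N) for A in "SIR")
# and (2.6) recovers the raw count of susceptible sites: S_N = N <pi^{N,S}, 1>.
S_count = N * pairing(eta, "S", lambda u: 1.0, N)
```

Here `pairing(eta, A, G, N)` plays the role of $\langle \pi^{N,A}(\eta, \cdot), G\rangle$; the names are hypothetical helpers, not code from the thesis appendix.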

Our goal is to prove that, for each fixed time $t$, the empirical measure $\pi_t^{N,A}$, $A \in \{S, I\}$, converges in probability to $\rho_A(t,u)\,du$, where $\rho_A(t,u)$, $A \in \{S, I\}$, are solutions of an appropriate system. We'll first prove that the process $\Pi^N$ converges in distribution to the probability measure concentrated on the deterministic path $(\rho_S(t,u)\,du, \rho_I(t,u)\,du)_{t \in [0,T]}$, and then argue that convergence in distribution to a deterministic weakly continuous trajectory implies convergence in probability at any fixed time $t \in [0,T]$.

A deterministic trajectory can be interpreted as the support of a Dirac probability measure on $\mathcal{D}^2$ concentrated on this trajectory. The proof is therefore reduced to showing the convergence of the sequence of probability measures $(Q^N)_{N>0}$ to the Dirac measure concentrated on the solution of the system.

The method we use to prove the convergence of this sequence is to show that the sequence is relatively compact and then to show that all converging subsequences converge to the same limit. To show the relative compactness we'll use Prohorov's criterion. It then remains to identify all limit points of subsequences.

To characterize the limit points of the sequence $(Q^N)_N$, we need to use the random evolution of the empirical measures to make a system of equations appear. Let $G \in C(\mathbb{T}^d, \mathbb{R})$ and $A \in \{S, I\}$; by Lemma 1.1.3, we know that
\[
M_t^{N,A,G} = \langle \pi_t^{N,A}, G\rangle - \langle \pi_0^{N,A}, G\rangle - \int_0^t L_N \langle \pi_s^{N,A}, G\rangle\, ds \tag{2.7}
\]
and
\[
N_t^{N,A,G} = \left(M_t^{N,A,G}\right)^2 - \int_0^t \left[ L_N \langle \pi_s^{N,A}, G\rangle^2 - 2 \langle \pi_s^{N,A}, G\rangle\, L_N \langle \pi_s^{N,A}, G\rangle \right] ds \tag{2.8}
\]
are martingales with respect to the natural filtration generated by the process. We want to compute
\[
L_N \langle \pi_s^{N,A}, G\rangle = L_N \left( \frac{1}{N^d} \sum_{x \in \mathbb{T}^d_N} \delta(\eta_s^N(x), A)\, G\!\left(\frac{x}{N}\right) \right). \tag{2.9}
\]

We denote by $f_{z,A}^N : S_N \to \mathbb{R}$ the function $f_{z,A}^N(\eta^N) = \delta(\eta^N(z), A)$. We have that
\[
(L_N f_{z,A}^N)(\eta^N) = \sum_{x \in \mathbb{T}^d_N} \left[ \frac{p_I}{N^d} \sum_{y \in \mathbb{T}^d_N} \delta(\eta^N(y), I)\, \delta(\eta^N(x), S) + p_R\, \delta(\eta^N(x), I) \right] \left[ \delta(\eta^{N,x}(z), A) - \delta(\eta^N(z), A) \right].
\]
Recalling that
\[
\eta^{N,x}(z) =
\begin{cases}
I & \text{if } z = x \text{ and } \eta^N(x) = S \\
R & \text{if } z = x \text{ and } \eta^N(x) = I \\
\eta^N(z) & \text{otherwise},
\end{cases}
\]
it follows that $\delta(\eta^{N,x}(z), A) - \delta(\eta^N(z), A) = 0$ if $x \neq z$. Thus, the expression above is equal to
\[
\left[ \frac{p_I}{N^d} \sum_{y \in \mathbb{T}^d_N} \delta(\eta^N(y), I)\, \delta(\eta^N(z), S) \right] \left[ \delta(I, A) - \delta(\eta^N(z), A) \right] \tag{2.10}
\]
\[
+\, p_R\, \delta(\eta^N(z), I) \left[ \delta(R, A) - \delta(\eta^N(z), A) \right].
\]


Now, if $A = S$, (2.10) becomes
\[
- \frac{p_I}{N^d}\, \delta(\eta^N(z), S) \sum_{y \in \mathbb{T}^d_N} \delta(\eta^N(y), I), \tag{2.11}
\]
thus
\[
L_N \langle \pi_s^{N,S}, G\rangle = - \frac{p_I}{N^d} \sum_{x \in \mathbb{T}^d_N} \delta(\eta_s^N(x), S) \left[ \frac{1}{N^d} \sum_{y \in \mathbb{T}^d_N} \delta(\eta_s^N(y), I) \right] G\!\left(\frac{x}{N}\right). \tag{2.12}
\]
But, using that $\frac{1}{N^d} \sum_{y \in \mathbb{T}^d_N} \delta(\eta_s^N(y), I) = \langle \pi_s^{N,I}, 1\rangle$, (2.12) is equal to
\[
-\frac{p_I}{N^d} \langle \pi_s^{N,I}, 1\rangle \sum_{x \in \mathbb{T}^d_N} \delta(\eta_s^N(x), S)\, G\!\left(\frac{x}{N}\right),
\]
and
\[
L_N \langle \pi_s^{N,S}, G\rangle = -p_I \langle \pi_s^{N,I}, 1\rangle \langle \pi_s^{N,S}, G\rangle = -p_I \langle \pi_s^{N,S}, \langle \pi_s^{N,I}, 1\rangle G\rangle. \tag{2.13}
\]
Then, (2.7) becomes
\[
M_t^{N,S,G} = \langle \pi_t^{N,S}, G\rangle - \langle \pi_0^{N,S}, G\rangle + \int_0^t p_I \langle \pi_s^{N,S}, \langle \pi_s^{N,I}, 1\rangle G\rangle\, ds. \tag{2.14}
\]
Similarly, if $A = I$, (2.10) becomes
\[
\frac{p_I}{N^d}\, \delta(\eta^N(z), S) \sum_{y \in \mathbb{T}^d_N} \delta(\eta^N(y), I) - p_R\, \delta(\eta^N(z), I) \tag{2.15}
\]
and
\[
L_N \langle \pi_s^{N,I}, G\rangle = p_I \langle \pi_s^{N,S}, \langle \pi_s^{N,I}, 1\rangle G\rangle - p_R \langle \pi_s^{N,I}, G\rangle. \tag{2.16}
\]
Thus, (2.7) becomes
\[
M_t^{N,I,G} = \langle \pi_t^{N,I}, G\rangle - \langle \pi_0^{N,I}, G\rangle - \int_0^t \left[ p_I \langle \pi_s^{N,S}, \langle \pi_s^{N,I}, 1\rangle G\rangle - p_R \langle \pi_s^{N,I}, G\rangle \right] ds. \tag{2.17}
\]

To conclude the proof of the hydrodynamic behavior, we'll need to show a uniqueness theorem for solutions of the system, and that the martingales $M_t^{N,A,G}$ vanish in the limit as $N \to \infty$. From this it follows that $(Q^N)_N$ has a unique limit point $Q^*$, which is the probability measure concentrated on the unique solution of the system.


2.3 Topology and compactness

As we said, the natural space in which to consider the evolution of the empirical measures is $D([0,T], \mathcal{M}_1^+)$. We need to endow this space with a reasonable topology. We state the results in a general setting: we consider a complete separable metric space $E$ with a metric $\delta(\cdot,\cdot)$ and we study the general space $\mathcal{D} := D([0,T], E)$, the space of right continuous functions with left limits taking values in $E$. We denote by $(P^N)_N$ a sequence of probability measures on $D([0,T], E)$.

We endow $\mathcal{D}$ with the modified Skorohod metric: let $\Lambda$ be the set of strictly increasing continuous functions $\lambda : [0,T] \to [0,T]$. We then define
\[
\|\lambda\| = \sup_{s \neq t} \left| \log \frac{\lambda(t) - \lambda(s)}{t - s} \right|
\]
and
\[
d(\mu, \nu) = \inf_{\lambda \in \Lambda} \max \left\{ \|\lambda\|,\ \sup_{0 \le t \le T} \delta(\mu_t, \nu_{\lambda(t)}) \right\}.
\]
The intuition is that we measure the distance between $\mu$ and small local time-shifts of $\nu$. It is possible to show that:

Proposition 2.3.1. $D([0,T], E)$ endowed with the metric $d$ is a complete separable metric space.

We consider the space $\mathcal{D}^2$ endowed with the product topology and the distance
\[
d_{SK}\left(\Pi, \tilde{\Pi}\right) = d(\pi^0, \tilde{\pi}^0) + d(\pi^1, \tilde{\pi}^1), \tag{2.18}
\]
for every $\Pi = (\pi^0, \pi^1)$, $\tilde{\Pi} = (\tilde{\pi}^0, \tilde{\pi}^1) \in \mathcal{D}^2$.

Unfortunately, this definition is not very useful in practice because it takes into account all functions $\lambda \in \Lambda$. Our main tool will be a modified uniform modulus of continuity. Recall that a function $f$ belongs to $C([0,T], E)$ if and only if $\lim_{\gamma \to 0} \omega_f(\gamma) = 0$, where $\omega_f(\gamma)$ is the classical modulus of continuity:
\[
\omega_f(\gamma) = \sup_{s,t \in [0,T],\ |s-t| \le \gamma} \delta(f(t), f(s)).
\]

A similar fact holds for the space $D([0,T], E)$. We introduce the modified modulus of continuity:
\[
\omega'_\mu(\gamma) := \inf_{(t_i)_{0 \le i \le r}}\ \max_{0 \le i < r}\ \sup_{t_i \le s < t < t_{i+1}} \delta(\mu_s, \mu_t),
\]
where the infimum is taken over all partitions $\{t_i,\ 0 \le i \le r\}$ of the interval $[0,T]$ such that
\[
\begin{cases}
0 = t_0 < t_1 < \dots < t_r = T \\
t_i - t_{i-1} > \gamma, \quad i = 1, \dots, r.
\end{cases}
\]


It is possible to verify that the trajectory $(\mu_t)_{t \in [0,T]}$ belongs to $\mathcal{D}$ if and only if the modified uniform modulus of continuity satisfies $\omega'_\mu(\gamma) \to 0$ as $\gamma \to 0$.
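The difference between $\omega$ and $\omega'$ is easy to see on a path with a single jump: the classical modulus never goes below the jump size, while a partition placed exactly at the jump time gives zero oscillation inside each cell, so $\omega'_\mu(\gamma) = 0$ for every $\gamma$ smaller than the cell lengths. A minimal numerical illustration (on a sampled grid, hence only approximate):

```python
def classical_modulus(ts, xs, gamma):
    """omega(gamma) = sup |x(t) - x(s)| over |t - s| <= gamma, on a grid."""
    w, n = 0.0, len(ts)
    for i in range(n):
        j = i
        while j < n and ts[j] - ts[i] <= gamma:
            w = max(w, abs(xs[j] - xs[i]))
            j += 1
    return w

T, n = 1.0, 1000
ts = [k * T / n for k in range(n + 1)]
xs = [0.0 if t < 0.5 else 1.0 for t in ts]   # one unit jump at t = 0.5

w = classical_modulus(ts, xs, 0.01)   # stays equal to the jump size, 1.0
# For omega': the partition {0, 0.5, 1} has cells of length 0.5 > gamma and
# zero internal oscillation, so omega'(gamma) = 0 for every gamma < 0.5.
```

This is exactly why the modified modulus, and not the classical one, characterizes membership in $\mathcal{D}$.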

In the space $C([0,T], E)$ we know, by the Ascoli–Arzelà theorem, that certain sets are relatively compact. We have:

Theorem 2.3.2. A set $A \subset C([0,T], E)$ is relatively compact in $C([0,T], E)$ if and only if it satisfies:
(i) the set $\{f(t) \mid f \in A,\ t \in [0,T]\}$ is relatively compact in $E$;
(ii) $\lim_{\gamma \to 0} \sup_{f \in A} \omega_f(\gamma) = 0$.

For the space $D([0,T], E)$ there is a similar criterion, in terms of the modified modulus of continuity.

Proposition 2.3.3. A set $A \subset D([0,T], E)$ is relatively compact in $D([0,T], E)$ if and only if it satisfies:
(i) the set $\{\mu_t \mid \mu \in A,\ t \in [0,T]\}$ is relatively compact in $E$;
(ii) $\lim_{\gamma \to 0} \sup_{\mu \in A} \omega'_\mu(\gamma) = 0$.

Remark 6. The functions $\gamma \mapsto \omega_f(\gamma)$ and $\gamma \mapsto \omega'_\mu(\gamma)$ are nondecreasing, hence in place of condition (ii) we can require: for every $\varepsilon > 0$ there is $\gamma > 0$ such that $\sup_{f \in A} \omega_f(\gamma) < \varepsilon$ (resp. $\sup_{\mu \in A} \omega'_\mu(\gamma) < \varepsilon$).

With this result we obtain the following version of Prohorov's theorem.

Theorem 2.3.4. Let $(P^N)_N$ be a sequence of probability measures on $D([0,T], E)$. The sequence is relatively compact if and only if:
(i) for every $t \in [0,T]$ and every $\varepsilon > 0$ there exists a compact set $K(t,\varepsilon) \subset E$ such that $\sup_N P^N(\mu \mid \mu_t \notin K(t,\varepsilon)) \le \varepsilon$;
(ii) for every $\varepsilon > 0$, $\lim_{\gamma \to 0} \limsup_{N \to \infty} P^N(\mu \mid \omega'_\mu(\gamma) > \varepsilon) = 0$.

Notice that condition (ii) depends on the whole path $(\mu_t)_{0 \le t \le T}$ and not only on the behaviour at a fixed time $t$; it is thus the most difficult to verify. However, for any positive $\gamma$, the modified uniform modulus of continuity $\omega'_\mu(\gamma)$ is smaller than the usual modulus of continuity $\omega_\mu$ evaluated at $2\gamma$:
\[
\omega'_\mu(\gamma) \le \omega_\mu(2\gamma).
\]
Therefore, in the next chapters, when we'll need to prove that a sequence $(P^N)_N$ is relatively compact, instead of condition (ii) we can verify:

(ii)' for every $\varepsilon > 0$, $\lim_{\gamma \to 0} \limsup_{N \to \infty} P^N(\mu \mid \omega_\mu(\gamma) > \varepsilon) = 0$.

Remark 7. As above, we may replace condition (ii)' by: for every $\varepsilon > 0$ and $\delta > 0$ there is $\gamma > 0$ such that
\[
\limsup_{N \to \infty} P^N\left(\mu \mid \omega_\mu(\gamma) > \varepsilon\right) \le \delta.
\]


Remark 8. Notice that all limit points of a sequence $(P^N)_N$ satisfying (ii)' are concentrated on continuous paths.

However, this condition is still quite difficult to verify. We have a very useful sufficient condition due to Aldous:

Proposition 2.3.5. A sequence of probability measures $(P^N)_N$ on $D([0,T], E)$ satisfies condition (ii) of Theorem 2.3.4 provided
\[
\lim_{\gamma \to 0} \limsup_{N \to \infty}\ \sup_{\tau \in \mathcal{I}_T,\ \theta \le \gamma} P^N\left(\mu \mid \delta(\mu_\tau, \mu_{\tau+\theta}) > \varepsilon\right) = 0 \tag{2.19}
\]
for every $\varepsilon > 0$, where $\mathcal{I}_T$ denotes the family of all stopping times bounded by $T$.

In our case, we'll need to prove the relative compactness of a sequence of measures $(Q^{N,A})_N$ on $D([0,T], \mathcal{M}_1^+)$. We now show that it is enough to check the conditions of Theorem 2.3.4, or condition (ii)', for each real-valued process obtained by projecting the empirical measure $\pi_t^{N,A}$ onto the functions of a dense countable subset of $C(\mathbb{T}^d)$. More precisely:

Proposition 2.3.6. Let $\{g_k \mid k \ge 1\}$ be a dense subfamily of $C(\mathbb{T}^d)$, with $g_1 = 1$. A family of probability measures $(Q^N)_{N \ge 1}$ on $D([0,T], \mathcal{M}_1^+)$ is relatively compact if, for every positive integer $k$, the family $(Q^N g_k^{-1})_{N \ge 1}$ of probability measures on $D([0,T], \mathbb{R})$ has this property. Here, $(Q^N g_k^{-1})_{N \ge 1}$ is the sequence of measures obtained by projecting $(Q^N)_N$ with the function $g_k$, defined by
\[
Q^N g_k^{-1}(B) = Q^N\left(\pi \in D([0,T], \mathcal{M}_1^+) \mid \langle \pi, g_k \rangle \in B\right).
\]

Proof. We want to prove that $(Q^N)_N$ is relatively compact by showing that it verifies hypotheses (i) and (ii) of Theorem 2.3.4 with $P^N = Q^N$ and $E = \mathcal{M}_1^+$. We know that for every $k \ge 1$, $(Q^N g_k^{-1})_{N \ge 1}$ is relatively compact; therefore, by Theorem 2.3.4 with $P^N = Q^N g_k^{-1}$ and $E = \mathbb{R}$:

1. For every $t \in [0,T]$ and every $\varepsilon > 0$ there exists a compact set $K(t,\varepsilon) \subset \mathbb{R}$ such that
\[
\sup_N Q^N g_k^{-1}\left(\mu \in D([0,T], \mathbb{R}) \mid \mu_t \notin K(t,\varepsilon)\right) \le \varepsilon. \tag{2.20}
\]
2. For every $\varepsilon > 0$,
\[
\lim_{\gamma \to 0} \limsup_{N \to \infty} Q^N g_k^{-1}\left(\mu \in D([0,T], \mathbb{R}) \mid \omega'_\mu(\gamma) > \varepsilon\right)
= \lim_{\gamma \to 0} \limsup_{N \to \infty} Q^N\left(\mu \in D([0,T], \mathcal{M}_1^+) \mid \omega'_{\langle \mu, g_k \rangle}(\gamma) > \varepsilon\right) = 0.
\]
Fix $\varepsilon > 0$ and $0 \le t \le T$. By 1., there exists $K > 0$ such that
\[
Q^N g_1^{-1}\left(\mu \in D([0,T], \mathbb{R}) \mid \mu_t \notin [-K, K]\right)
= Q^N\left(\mu \in D([0,T], \mathcal{M}_1^+) \mid \langle \mu_t, g_1 \rangle \notin [-K, K]\right) \le \varepsilon
\]


for every $N \ge 1$. In particular, since $g_1 = 1$ and since the set $\{\pi \in \mathcal{M}_1^+ \mid |\langle \pi, g_1 \rangle| \le K\}$ is weakly relatively compact, the first condition of Theorem 2.3.4 is proved.

To prove the second condition, fix $\varepsilon$ and $\beta$. Let $\bar{k}$ be such that $2^{1-\bar{k}} \le \varepsilon$. We have that
\[
\delta(\mu_s, \mu_t) = \sum_{k=1}^{\infty} \frac{1}{2^k} \frac{|\langle \mu_s, g_k \rangle - \langle \mu_t, g_k \rangle|}{1 + |\langle \mu_s, g_k \rangle - \langle \mu_t, g_k \rangle|}
= \sum_{k=1}^{\bar{k}} \frac{1}{2^k} \frac{|\langle \mu_s, g_k \rangle - \langle \mu_t, g_k \rangle|}{1 + |\langle \mu_s, g_k \rangle - \langle \mu_t, g_k \rangle|}
+ \sum_{k=\bar{k}+1}^{\infty} \frac{1}{2^k} \frac{|\langle \mu_s, g_k \rangle - \langle \mu_t, g_k \rangle|}{1 + |\langle \mu_s, g_k \rangle - \langle \mu_t, g_k \rangle|}
\]
\[
\le \sum_{k=1}^{\bar{k}} \frac{1}{2^k} \frac{|\langle \mu_s, g_k \rangle - \langle \mu_t, g_k \rangle|}{1 + |\langle \mu_s, g_k \rangle - \langle \mu_t, g_k \rangle|} + \sum_{k=\bar{k}+1}^{\infty} \frac{1}{2^k}
\le \sum_{k=1}^{\bar{k}} \frac{1}{2^k} |\langle \mu_s, g_k \rangle - \langle \mu_t, g_k \rangle| + \frac{\varepsilon}{2}.
\]
Therefore it follows that, for each $\gamma > 0$,
\[
\omega'_\mu(\gamma) \le \sum_{k=1}^{\bar{k}} \frac{1}{2^k}\, \omega'_{\langle \mu, g_k \rangle}(\gamma) + \frac{\varepsilon}{2}. \tag{2.21}
\]

By 2., there exists $\gamma_0$ such that
\[
Q^N\left(\mu \in D([0,T], \mathcal{M}_1^+) \mid \omega'_{\langle \mu, g_k \rangle}(\gamma) > \frac{\varepsilon}{2}\right) \le \frac{\beta}{2^{\bar{k}}} \tag{2.22}
\]
for each $k \le \bar{k}$, $\gamma \le \gamma_0$ and $N \ge 1$. Since $\sum_{k \ge 1} 2^{-k} \le 1$, if $\omega'_{\langle \mu, g_k \rangle}(\gamma) \le \varepsilon/2$ for every $k \le \bar{k}$ then $\sum_{k=1}^{\bar{k}} 2^{-k}\, \omega'_{\langle \mu, g_k \rangle}(\gamma) \le \varepsilon/2$; therefore
\[
Q^N\left(\mu \in D([0,T], \mathcal{M}_1^+) \,\Big|\, \sum_{k=1}^{\bar{k}} \frac{1}{2^k}\, \omega'_{\langle \mu, g_k \rangle}(\gamma) > \frac{\varepsilon}{2}\right)
\le \sum_{k=1}^{\bar{k}} Q^N\left(\mu \,\Big|\, \omega'_{\langle \mu, g_k \rangle}(\gamma) > \frac{\varepsilon}{2}\right)
\le \sum_{k=1}^{\bar{k}} \frac{\beta}{2^{\bar{k}}} \le \beta
\]
for each $\gamma \le \gamma_0$ and $N \ge 1$. Together with (2.21), this shows that
\[
Q^N\left(\mu \in D([0,T], \mathcal{M}_1^+) \mid \omega'_\mu(\gamma) \ge \varepsilon\right) \le \beta \tag{2.23}
\]
for each $\gamma \le \gamma_0$ and $N \ge 1$.


2.4 Macroscopic limit

We now have all the elements to examine the hydrodynamic behaviour of the process.

Theorem 2.4.1. Let $\rho^0_S \in C(\mathbb{T}^d, [0,1])$ and $\rho^0_I \in C(\mathbb{T}^d, [0,1])$ be such that $\sup_{u \in \mathbb{T}^d} \{\rho^0_S(u) + \rho^0_I(u)\} \le 1$. Suppose that the initial conditions $\pi_0^{N,S}$, $\pi_0^{N,I}$ converge weakly, in probability, to measures $\pi^S_0$, $\pi^I_0$ with densities $\rho^0_S$, $\rho^0_I$ with respect to Lebesgue measure; i.e.
\[
\lim_{N \to \infty} P\left( \left| \langle \pi_0^{N,A}, G \rangle - \int_{\mathbb{T}^d} \rho^0_A(u)\, G(u)\, du \right| > \varepsilon \right) = 0 \tag{2.24}
\]
for every $\varepsilon > 0$, $A \in \{S, I\}$, $G \in C(\mathbb{T}^d)$.

Then, for every $t > 0$, $\pi_t^{N,S}$, $\pi_t^{N,I}$ converge weakly, in probability, to deterministic measure-valued functions $\pi^S_t$, $\pi^I_t$, having densities $\rho_S(t,u)$, $\rho_I(t,u)$ with respect to Lebesgue measure; i.e.
\[
\lim_{N \to \infty} P\left( \left| \langle \pi_t^{N,A}, G \rangle - \int_{\mathbb{T}^d} \rho_A(t,u)\, G(u)\, du \right| > \varepsilon \right) = 0 \tag{2.25}
\]
for every $\varepsilon > 0$, $A \in \{S, I\}$, $G \in C(\mathbb{T}^d)$, where $\rho_S(t,u)$, $\rho_I(t,u)$ are solutions of
\[
\begin{cases}
\partial_t \rho_S(t,u) = -p_I \left( \int_{\mathbb{T}^d} \rho_I(t,u')\, du' \right) \rho_S(t,u) \\[4pt]
\partial_t \rho_I(t,u) = p_I \left( \int_{\mathbb{T}^d} \rho_I(t,u')\, du' \right) \rho_S(t,u) - p_R\, \rho_I(t,u) \\[4pt]
\rho_S(0,u) = \rho^0_S(u) \\
\rho_I(0,u) = \rho^0_I(u).
\end{cases}
\tag{2.26}
\]

Proof. We start by fixing a time $T > 0$ and recalling that we denoted by $\mathcal{D}$ the space $D([0,T], \mathcal{M}_1^+)$. We then consider the Markov process $\Pi^N = \left(\pi_t^{N,S}, \pi_t^{N,I}\right)_{t \in [0,T]}$ and the sequence of probability measures $(Q^N)_N$ on $\mathcal{D}^2$ corresponding to $\Pi^N$.

Step 1 (Relative compactness). We have seen in the previous section that the first step in the proof of the hydrodynamic behaviour consists in showing that the sequence $(Q^N)_N$ is relatively compact. Notice that this follows once we prove the relative compactness of $(Q^{N,S})_N$ and $(Q^{N,I})_N$, where $(Q^{N,A})_N$ is the sequence of probability measures on $\mathcal{D}$ corresponding to $\pi^{N,A}$, $A \in \{S, I\}$. By Proposition 2.3.6 it suffices to check that the sequence of measures corresponding to the real processes $\langle \pi^{N,A}, G \rangle$ is relatively compact for all $G \in C(\mathbb{T}^d)$.

Fix a function $G \in C(\mathbb{T}^d)$ and denote by $Q^{N,A,G}$ the probability measure $Q^{N,A} G^{-1}$, $A \in \{S,I\}$, on $D([0,T], \mathbb{R})$. We will apply Theorem 2.3.4 and Proposition 2.3.5 with $E = \mathbb{R}$ and $\delta$ the usual distance on $\mathbb{R}$.

To verify condition (i) of Theorem 2.3.4 we need to show that for every $t \in [0,T]$ and every $\varepsilon > 0$ there exists a compact set $[-K, K] \subset \mathbb{R}$ such that
\[
\sup_N Q^{N,A} G^{-1}\left(\mu \in D([0,T], \mathbb{R}) \mid \mu_t \notin [-K, K]\right) \le \varepsilon. \tag{2.27}
\]


Fix $N$; we have that
\[
Q^{N,A} G^{-1}\left(\mu \in D([0,T], \mathbb{R}) \mid \mu_t \notin [-K, K]\right)
= Q^{N,A}\left(\mu \in D([0,T], \mathcal{M}_1^+) \mid |\langle \mu_t, G \rangle| > K\right)
= P\left( |\langle \pi_t^{N,A}, G \rangle| > K \right).
\]
Now, notice that
\[
|\langle \pi_t^{N,A}, G \rangle| = \left| \int_{\mathbb{T}^d} G(u)\, \pi_t^{N,A}(du) \right| \le \sup_{u \in \mathbb{T}^d} |G(u)| \int_{\mathbb{T}^d} \pi_t^{N,A}(du) \le K_G,
\]
where $K_G$ is a constant depending on $G$; here we have used that $G \in C(\mathbb{T}^d)$ and that the empirical measure $\pi^{N,A}$ has total mass bounded by 1. Therefore, if we choose any $K > K_G$, it follows that, for every $N \ge 1$,
\[
P\left( |\langle \pi_t^{N,A}, G \rangle| > K \right) = 0
\]
and (i) is satisfied.

It remains to prove (ii), or the condition of Proposition 2.3.5; we choose to prove the latter. We only prove it for $(Q^{N,S})_N$, as the other case is very similar. Thus, we want to show that
\[
\lim_{\gamma \to 0} \limsup_{N \to \infty}\ \sup_{\tau \in \mathcal{I}_T,\ \theta \le \gamma} Q^{N,S} G^{-1}\left(\mu \in D([0,T], \mathbb{R}) \mid |\mu_\tau - \mu_{\tau+\theta}| > \varepsilon\right) = 0, \tag{2.28}
\]
which is equivalent to
\[
\lim_{\gamma \to 0} \limsup_{N \to \infty}\ \sup_{\tau \in \mathcal{I}_T,\ \theta \le \gamma} P\left( |\langle \pi_\tau^{N,S}, G \rangle - \langle \pi_{\tau+\theta}^{N,S}, G \rangle| > \varepsilon \right) = 0. \tag{2.29}
\]
Now, recalling (2.14),
\[
|\langle \pi_\tau^{N,S}, G \rangle - \langle \pi_{\tau+\theta}^{N,S}, G \rangle|
= \left| \int_\tau^{\tau+\theta} p_I \langle \pi_s^{N,S}, \langle \pi_s^{N,I}, 1 \rangle G \rangle\, ds + M_{\tau+\theta}^{N,S,G} - M_\tau^{N,S,G} \right|
\le \left| \int_\tau^{\tau+\theta} p_I \langle \pi_s^{N,S}, \langle \pi_s^{N,I}, 1 \rangle G \rangle\, ds \right| + \left| M_{\tau+\theta}^{N,S,G} - M_\tau^{N,S,G} \right|.
\]
Since the empirical measures have total mass bounded by 1 and $G \in C(\mathbb{T}^d)$,
\[
\left| \int_\tau^{\tau+\theta} p_I \langle \pi_s^{N,S}, \langle \pi_s^{N,I}, 1 \rangle G \rangle\, ds \right|
\le \int_\tau^{\tau+\theta} p_I\, |\langle \pi_s^{N,I}, 1 \rangle|\, |\langle \pi_s^{N,S}, G \rangle|\, ds
\le \theta\, p_I \|G\|_\infty = \theta\, C(G), \tag{2.30}
\]


where $C(G)$ is a finite constant depending only on $G$; this bound holds whether $\tau$ is a stopping time or not.

We denote by $B_t^{N,S,G}$ the process given by
\[
B_t^{N,S,G} = L_N \langle \pi_t^{N,S}, G \rangle^2 - 2 \langle \pi_t^{N,S}, G \rangle\, L_N \langle \pi_t^{N,S}, G \rangle. \tag{2.31}
\]
By Lemma 1.1.3, the process $N_t^{N,S,G}$ defined by
\[
N_t^{N,S,G} = \left(M_t^{N,S,G}\right)^2 - \int_0^t B_s^{N,S,G}\, ds \tag{2.32}
\]
is a martingale. Some computations show that
\[
B_s^{N,S,G} = \frac{p_I}{N^{3d}} \sum_{x \in \mathbb{T}^d_N} \sum_{y \in \mathbb{T}^d_N} \delta(\eta_s^N(y), I)\, \delta(\eta_s^N(x), S)\, G\!\left(\frac{x}{N}\right)^2. \tag{2.33}
\]

Since $M_t^{N,S,G}$ is a martingale, for every $\tau \in \mathcal{I}_T$,
\[
E_P\left[ \left( M_{\tau+\theta}^{N,S,G} - M_\tau^{N,S,G} \right)^2 \right]
= E_P\left[ \left( M_{\tau+\theta}^{N,S,G} \right)^2 \right] - E_P\left[ \left( M_\tau^{N,S,G} \right)^2 \right]
= E_P\left[ N_{\tau+\theta}^{N,S,G} + \int_0^{\tau+\theta} B_s^{N,S,G}\, ds - N_\tau^{N,S,G} - \int_0^\tau B_s^{N,S,G}\, ds \right]
= E_P\left[ \int_\tau^{\tau+\theta} B_s^{N,S,G}\, ds \right] + E_P\left[ N_{\tau+\theta}^{N,S,G} \right] - E_P\left[ N_\tau^{N,S,G} \right].
\]
By the Optional Stopping Theorem,
\[
E_P\left[ N_{\tau+\theta}^{N,S,G} \right] = E_P\left[ N_\tau^{N,S,G} \right] = E_P\left[ N_0^{N,S,G} \right],
\]
therefore
\[
E_P\left[ \left( M_{\tau+\theta}^{N,S,G} - M_\tau^{N,S,G} \right)^2 \right] = E_P\left[ \int_\tau^{\tau+\theta} B_s^{N,S,G}\, ds \right] \le \frac{C'(G)\, \theta}{N^d}, \tag{2.34}
\]
where the last inequality follows from (2.33). To recap, we have
\[
P\left( |\langle \pi_\tau^{N,S}, G \rangle - \langle \pi_{\tau+\theta}^{N,S}, G \rangle| > \varepsilon \right)
\le P\left( \left| \int_\tau^{\tau+\theta} p_I \langle \pi_s^{N,S}, \langle \pi_s^{N,I}, 1 \rangle G \rangle\, ds \right| > \frac{\varepsilon}{2} \right)
+ P\left( \left| M_{\tau+\theta}^{N,S,G} - M_\tau^{N,S,G} \right| > \frac{\varepsilon}{2} \right).
\]
Putting together what we proved, by Markov's inequality and (2.30),
\[
P\left( \left| \int_\tau^{\tau+\theta} p_I \langle \pi_s^{N,S}, \langle \pi_s^{N,I}, 1 \rangle G \rangle\, ds \right| > \frac{\varepsilon}{2} \right) \le \frac{2\theta\, C(G)}{\varepsilon}, \tag{2.35}
\]


and, by Chebyshev's inequality and (2.34),
\[
P\left( \left| M_{\tau+\theta}^{N,S,G} - M_\tau^{N,S,G} \right| > \frac{\varepsilon}{2} \right) \le \frac{4\theta\, C'(G)}{N^d \varepsilon^2}. \tag{2.36}
\]
Since both these bounds go to 0 as $\gamma \to 0$, uniformly over $\tau \in \mathcal{I}_T$ and $\theta \le \gamma$, this concludes the proof that the sequence $(Q^N)_N$ is relatively compact.

Step 2 (Characterization of limit points). The relative compactness of $(Q^N)_N$ implies that any subsequence has a convergent sub-subsequence. It remains to characterize all the limit points of the sequence $(Q^N)_N$. Let $Q^*$ be a limit point and let $(Q^{N_k})_k$ be a subsequence converging to $Q^*$.

The first thing we want to prove (see [1]) is that
\[
Q^*\left( C([0,T], \mathcal{M}_1^+)^2 \right) = 1, \tag{2.37}
\]
where $C([0,T], \mathcal{M}_1^+)$ is the subspace of continuous trajectories in $D([0,T], \mathcal{M}_1^+)$.

If we let $(Q^{N_k,A})_{N_k}$, $Q^{*,A}$, $A \in \{S, I\}$, be the marginals of $(Q^{N_k})_{N_k}$ and $Q^*$ respectively, we can observe that $(Q^{N_k,A})_{N_k}$ converges to $Q^{*,A}$ as $N_k \to \infty$. Therefore, it suffices to prove that $Q^{*,A}$ is concentrated on continuous trajectories, $A \in \{S, I\}$. We will prove it only for $A = S$, as the other case is analogous.

We define the function $\Delta$ on $D([0,T], \mathcal{M}_1^+)$ as
\[
\Delta(\pi) = \sup_{t \in [0,T]} \delta(\pi_t, \pi_{t-}), \tag{2.38}
\]
where $\pi = (\pi_t)_{t \in [0,T]}$; namely, $\Delta(\pi)$ is the supremum of the jumps of $\pi$. It is enough to show that
\[
Q^{*,S}\left( \pi \in D([0,T], \mathcal{M}_1^+) \mid \Delta(\pi) = 0 \right) = 1;
\]
but $\Delta(\pi) \ge 0$, thus it suffices to prove that
\[
E_{Q^{*,S}}[\Delta(\pi)] = 0.
\]
On the other hand,
\[
E_{Q^{N_k,S}}[\Delta(\pi)]
= E_P\left[ \sup_{t \in [0,T]} \delta(\pi_t^{N_k,S}, \pi_{t-}^{N_k,S}) \right]
= E_P\left[ \sup_{t \in [0,T]} \sum_{j=1}^\infty \frac{1}{2^j} \frac{|\langle \pi_t^{N_k,S}, f_j \rangle - \langle \pi_{t-}^{N_k,S}, f_j \rangle|}{1 + |\langle \pi_t^{N_k,S}, f_j \rangle - \langle \pi_{t-}^{N_k,S}, f_j \rangle|} \right]. \tag{2.39}
\]
But, since at each jump time at most one site changes state,
\[
|\langle \pi_t^{N_k,S}, f_j \rangle - \langle \pi_{t-}^{N_k,S}, f_j \rangle|
= \left| \frac{1}{N_k^d} \sum_{x \in \mathbb{T}^d_{N_k}} \left( \delta(\eta_t^{N_k}(x), S) - \delta(\eta_{t-}^{N_k}(x), S) \right) f_j\!\left(\frac{x}{N_k}\right) \right|
\le \frac{1}{N_k^d} \|f_j\|_\infty.
\]


For every $J > 0$, (2.39) is bounded by
\[
E_P\left[ \sup_{t \in [0,T]} \left( \sum_{j=1}^{J} \frac{1}{2^j} \left| \langle \pi_t^{N_k,S}, f_j \rangle - \langle \pi_{t-}^{N_k,S}, f_j \rangle \right| + \sum_{j=J+1}^{\infty} \frac{1}{2^j} \right) \right]
\le \sum_{j=1}^{J} \frac{1}{2^j} \frac{\|f_j\|_\infty}{N_k^d} + \sum_{j=J+1}^{\infty} \frac{1}{2^j}
\le \frac{J\, C_J}{N_k^d} + \sum_{j=J+1}^{\infty} \frac{1}{2^j},
\]
where $C_J = \max_{j=1,\dots,J} \|f_j\|_\infty$. Now, for every $\varepsilon > 0$ there exists $J_0$ such that
\[
\sum_{j=J_0+1}^{\infty} \frac{1}{2^j} \le \frac{\varepsilon}{2}.
\]
Since $\frac{J_0 C_{J_0}}{N_k^d} \to 0$ as $N_k \to \infty$, there exists $N_{k_0}$ such that $\frac{J_0 C_{J_0}}{N_k^d} \le \frac{\varepsilon}{2}$ for every $N_k \ge N_{k_0}$. Therefore,
\[
E_{Q^{N_k,S}}[\Delta(\pi)] \le \varepsilon
\]
for every $N_k \ge N_{k_0}$, i.e. $E_{Q^{N_k,S}}[\Delta(\pi)] \to 0$ as $N_k \to \infty$. The proof is then completed if we show that

\[
\lim_{N_k \to \infty} E_{Q^{N_k,S}}[\Delta(\pi)] = E_{Q^{*,S}}[\Delta(\pi)]. \tag{2.40}
\]
This is a consequence of the definition of $Q^{*,S}$ as a weak limit of $(Q^{N_k,S})_{N_k}$, once we prove that $\Delta$ is a continuous function on $D([0,T], \mathcal{M}_1^+)$. This is so because
\[
\Delta(\pi) = \Delta(\pi \circ \lambda) := \sup_{t \in [0,T]} \delta(\pi_{\lambda(t)}, \pi_{\lambda(t)-}) \quad \text{for every } \lambda \in \Lambda,
\]
and, therefore, $d(\pi, \tilde{\pi}) < \varepsilon/2$ implies, for an appropriate $\lambda$, that $|\Delta(\pi) - \Delta(\tilde{\pi})| = |\Delta(\pi) - \Delta(\tilde{\pi} \circ \lambda)| < \varepsilon$.


We continue by noticing that the maps from $\mathcal{D}^2$ to $\mathbb{R}$ that associate to a trajectory $\Pi = (\pi^0_t, \pi^1_t)_{t \in [0,T]}$ the numbers
\[
\sup_{t \in [0,T]} \left| \langle \pi^0_t, G \rangle - \langle \pi^0_0, G \rangle + \int_0^t p_I \langle \pi^0_s, \langle \pi^1_s, 1 \rangle G \rangle\, ds \right| \tag{2.41}
\]
\[
\sup_{t \in [0,T]} \left| \langle \pi^1_t, G \rangle - \langle \pi^1_0, G \rangle - \int_0^t \left[ p_I \langle \pi^0_s, \langle \pi^1_s, 1 \rangle G \rangle - p_R \langle \pi^1_s, G \rangle \right] ds \right|, \tag{2.42}
\]
where $G \in C(\mathbb{T}^d)$, are continuous. We therefore have that, for every $\varepsilon > 0$, the sets
\[
\left\{ \Pi \in \mathcal{D}^2 \,\Big|\, \sup_{t \in [0,T]} \left| \langle \pi^0_t, G \rangle - \langle \pi^0_0, G \rangle + \int_0^t p_I \langle \pi^0_s, \langle \pi^1_s, 1 \rangle G \rangle\, ds \right| > \varepsilon \right\} \tag{2.43}
\]
\[
\left\{ \Pi \in \mathcal{D}^2 \,\Big|\, \sup_{t \in [0,T]} \left| \langle \pi^1_t, G \rangle - \langle \pi^1_0, G \rangle - \int_0^t \left[ p_I \langle \pi^0_s, \langle \pi^1_s, 1 \rangle G \rangle - p_R \langle \pi^1_s, G \rangle \right] ds \right| > \varepsilon \right\} \tag{2.44}
\]
are open in $\mathcal{D}^2$. Since $(Q^{N_k})_k$ converges weakly to $Q^*$, by the Portmanteau Theorem 1.2.1,

\[
\liminf_{k \to \infty} Q^{N_k}\left( \Pi \,\Big|\, \sup_{t \in [0,T]} \left| \langle \pi^0_t, G \rangle - \langle \pi^0_0, G \rangle + \int_0^t p_I \langle \pi^0_s, \langle \pi^1_s, 1 \rangle G \rangle\, ds \right| > \varepsilon \right)
\ge Q^*\left( \Pi \,\Big|\, \sup_{t \in [0,T]} \left| \langle \pi^0_t, G \rangle - \langle \pi^0_0, G \rangle + \int_0^t p_I \langle \pi^0_s, \langle \pi^1_s, 1 \rangle G \rangle\, ds \right| > \varepsilon \right)
\]
and
\[
\liminf_{k \to \infty} Q^{N_k}\left( \Pi \,\Big|\, \sup_{t \in [0,T]} \left| \langle \pi^1_t, G \rangle - \langle \pi^1_0, G \rangle - \int_0^t \left[ p_I \langle \pi^0_s, \langle \pi^1_s, 1 \rangle G \rangle - p_R \langle \pi^1_s, G \rangle \right] ds \right| > \varepsilon \right)
\ge Q^*\left( \Pi \,\Big|\, \sup_{t \in [0,T]} \left| \langle \pi^1_t, G \rangle - \langle \pi^1_0, G \rangle - \int_0^t \left[ p_I \langle \pi^0_s, \langle \pi^1_s, 1 \rangle G \rangle - p_R \langle \pi^1_s, G \rangle \right] ds \right| > \varepsilon \right).
\]

We want now to show that, for every $G \in C(\mathbb{T}^d)$, every limit point $Q^*$ is concentrated on trajectories $\Pi = (\pi^0_t, \pi^1_t)_{t \in [0,T]}$ such that
\[
\langle \pi^0_t, G \rangle = \langle \pi^0_0, G \rangle - \int_0^t p_I \langle \pi^0_s, \langle \pi^1_s, 1 \rangle G \rangle\, ds, \tag{2.45}
\]
\[
\langle \pi^1_t, G \rangle = \langle \pi^1_0, G \rangle + \int_0^t \left[ p_I \langle \pi^0_s, \langle \pi^1_s, 1 \rangle G \rangle - p_R \langle \pi^1_s, G \rangle \right] ds, \tag{2.46}
\]
for every $t \in [0,T]$.


We prove this for (2.45); the argument is analogous in the other case. Notice that

\[
Q^{N_k}\left( \Pi \,\Big|\, \sup_{t \in [0,T]} \left| \langle \pi^0_t, G \rangle - \langle \pi^0_0, G \rangle + \int_0^t p_I \langle \pi^0_s, \langle \pi^1_s, 1 \rangle G \rangle\, ds \right| > \varepsilon \right)
= P\left( \sup_{t \in [0,T]} \left| \langle \pi_t^{N_k,S}, G \rangle - \langle \pi_0^{N_k,S}, G \rangle + \int_0^t p_I \langle \pi_s^{N_k,S}, \langle \pi_s^{N_k,I}, 1 \rangle G \rangle\, ds \right| > \varepsilon \right),
\]
and, by (2.14), the last expression is equal to
\[
P\left( \sup_{t \in [0,T]} \left| M_t^{N_k,S,G} \right| > \varepsilon \right).
\]
By Doob's inequality,
\[
P\left( \sup_{t \in [0,T]} \left| M_t^{N_k,S,G} \right| > \varepsilon \right)
\le \frac{1}{\varepsilon^2}\, E_P\left[ \left( M_T^{N_k,S,G} \right)^2 \right]
= \frac{1}{\varepsilon^2}\, E_P\left[ N_T^{N_k,S,G} + \int_0^T B_s^{N_k,S,G}\, ds \right]
= \frac{1}{\varepsilon^2}\, E_P\left[ \int_0^T B_s^{N_k,S,G}\, ds \right]
\le \frac{C'(G)\, T}{\varepsilon^2 N_k^d},
\]
where we used the definition of $B_t^{N_k,S,G}$ from Step 1 of the proof, and the fact that, since $N_t^{N_k,S,G}$ is a martingale,
\[
E_P\left[ N_T^{N_k,S,G} \right] = E_P\left[ N_0^{N_k,S,G} \right] = E_P[0] = 0.
\]
Thus,
\[
Q^*\left( \Pi \,\Big|\, \sup_{t \in [0,T]} \left| \langle \pi^0_t, G \rangle - \langle \pi^0_0, G \rangle + \int_0^t p_I \langle \pi^0_s, \langle \pi^1_s, 1 \rangle G \rangle\, ds \right| > \varepsilon \right)
\le \liminf_{k \to \infty} P\left( \sup_{t \in [0,T]} \left| \langle \pi_t^{N_k,S}, G \rangle - \langle \pi_0^{N_k,S}, G \rangle + \int_0^t p_I \langle \pi_s^{N_k,S}, \langle \pi_s^{N_k,I}, 1 \rangle G \rangle\, ds \right| > \varepsilon \right)
\le \liminf_{k \to \infty} \frac{C'(G)\, T}{\varepsilon^2 N_k^d} = 0,
\]
which is what we wanted to show.

We now prove that all limit points $Q^*$ are concentrated on measures that are absolutely continuous with respect to Lebesgue measure. Notice that
\[
\sup_{t \in [0,T]} \left| \langle \pi_t^{N,A}, G \rangle \right| \le \frac{1}{N^d} \sum_{x \in \mathbb{T}^d_N} \left| G\!\left(\frac{x}{N}\right) \right|, \tag{2.47}
\]


where $G \in C(\mathbb{T}^d)$ and $A \in \{S, I\}$. For a fixed continuous function $G$, the maps that associate to a trajectory $\Pi = (\pi^0_t, \pi^1_t)_{t \in [0,T]}$ the quantities $\sup_{t \in [0,T]} |\langle \pi^i_t, G \rangle|$, $i \in \{0,1\}$, are continuous. Moreover,
\[
\lim_{N \to \infty} \frac{1}{N^d} \sum_{x \in \mathbb{T}^d_N} \left| G\!\left(\frac{x}{N}\right) \right| = \int_{\mathbb{T}^d} |G(u)|\, du.
\]
Therefore, by weak convergence and (2.47), we obtain that, for all $\varepsilon > 0$,
\[
Q^*\left( \Pi \,\Big|\, \sup_{t \in [0,T]} |\langle \pi^i_t, G \rangle| - \int_{\mathbb{T}^d} |G(u)|\, du > \varepsilon \right)
\le \liminf_{k \to \infty} Q^{N_k}\left( \Pi \,\Big|\, \sup_{t \in [0,T]} |\langle \pi^i_t, G \rangle| - \int_{\mathbb{T}^d} |G(u)|\, du > \varepsilon \right)
= \liminf_{k \to \infty} P\left( \sup_{t \in [0,T]} |\langle \pi_t^{N_k,A}, G \rangle| - \int_{\mathbb{T}^d} |G(u)|\, du > \varepsilon \right) = 0,
\]

where $i \in \{0,1\}$, $A \in \{S, I\}$. This implies that all limit points are concentrated on trajectories $\Pi = (\pi^0_t, \pi^1_t)_{t \in [0,T]}$ such that
\[
\langle \pi^i_t, G \rangle \le \int_{\mathbb{T}^d} |G(u)|\, du, \tag{2.48}
\]
for $i \in \{0,1\}$, for every continuous function $G$ and every $t \in [0,T]$. Thus, all limit points are concentrated on trajectories that are absolutely continuous with respect to the Lebesgue measure:
\[
Q^*\left( \Pi \mid (\pi^0_t(du), \pi^1_t(du)) = (\rho_0(t,u)\, du,\ \rho_1(t,u)\, du) \right) = 1.
\]

Moreover, all limit points of the sequence $(Q^N)_N$ are concentrated on trajectories that at time 0 are equal to $(\rho^0_S(u)\, du,\ \rho^0_I(u)\, du)$. Indeed, by weak convergence, for every $\varepsilon > 0$,
\[
Q^*\left( \Pi \,\Big|\, \left| \langle \pi^0_0, G \rangle - \int_{\mathbb{T}^d} \rho^0_S(u)\, G(u)\, du \right| > \varepsilon \right)
\le \liminf_{k \to \infty} Q^{N_k}\left( \Pi \,\Big|\, \left| \langle \pi^0_0, G \rangle - \int_{\mathbb{T}^d} \rho^0_S(u)\, G(u)\, du \right| > \varepsilon \right)
= \liminf_{k \to \infty} P\left( \left| \langle \pi_0^{N_k,S}, G \rangle - \int_{\mathbb{T}^d} \rho^0_S(u)\, G(u)\, du \right| > \varepsilon \right) = 0.
\]
Analogously, we can prove that
\[
Q^*\left( \Pi \,\Big|\, \left| \langle \pi^1_0, G \rangle - \int_{\mathbb{T}^d} \rho^0_I(u)\, G(u)\, du \right| > \varepsilon \right) = 0.
\]
The previous results show that every limit point is concentrated on absolutely continuous trajectories $(\pi^0_t(du), \pi^1_t(du))_{t \in [0,T]} = (\rho_0(t,u)\, du,\ \rho_1(t,u)\, du)_{t \in [0,T]}$ whose densities are a


weak solution, in the sense of (2.45) and (2.46), of (2.26). However, to prove a uniqueness result for weak solutions of (2.26), we need to prove relations (2.45) and (2.46) for time-dependent $G$. Denote by $C^{m,n}([0,T] \times \mathbb{T}^d)$ the space of continuous functions with $m$ continuous derivatives in time and $n$ continuous derivatives in space, where $m$ and $n$ are positive integers. For $G \in C^{1,0}([0,T] \times \mathbb{T}^d)$, consider the processes, for $A \in \{S, I\}$, given by
\[
M_t^{N,A,G} = \langle \pi_t^{N,A}, G_t \rangle - \langle \pi_0^{N,A}, G_0 \rangle - \int_0^t (\partial_s + L_N) \langle \pi_s^{N,A}, G_s \rangle\, ds,
\]
\[
N_t^{N,A,G} = \left( M_t^{N,A,G} \right)^2 - \int_0^t D^{N,A,G}(s)\, ds,
\]
with $D^{N,A,G}(s) = L_N \langle \pi_s^{N,A}, G_s \rangle^2 - 2 \langle \pi_s^{N,A}, G_s \rangle\, L_N \langle \pi_s^{N,A}, G_s \rangle$. By Lemma 1.1.3, these are martingales with respect to the natural filtration. In particular, as before,
\[
\lim_{k \to \infty} P\left( \sup_{t \in [0,T]} \left| M_t^{N_k,A,G} \right| > \varepsilon \right) = 0
\]
for every $\varepsilon > 0$. Since
\[
M_t^{N,S,G} = \langle \pi_t^{N,S}, G_t \rangle - \langle \pi_0^{N,S}, G_0 \rangle - \int_0^t \langle \pi_s^{N,S}, \partial_s G_s - p_I \langle \pi_s^{N,I}, 1 \rangle G_s \rangle\, ds,
\]
\[
M_t^{N,I,G} = \langle \pi_t^{N,I}, G_t \rangle - \langle \pi_0^{N,I}, G_0 \rangle - \int_0^t \left[ \langle \pi_s^{N,I}, \partial_s G_s - p_R G_s \rangle + \langle \pi_s^{N,S}, p_I \langle \pi_s^{N,I}, 1 \rangle G_s \rangle \right] ds,
\]
all limit points $Q^*$ are concentrated on paths $(\pi^0_t, \pi^1_t)_{t \in [0,T]}$ such that
\[
\langle \pi^0_t, G_t \rangle = \langle \pi^0_0, G_0 \rangle + \int_0^t \langle \pi^0_s, \partial_s G_s - p_I \langle \pi^1_s, 1 \rangle G_s \rangle\, ds, \tag{2.49}
\]
\[
\langle \pi^1_t, G_t \rangle = \langle \pi^1_0, G_0 \rangle + \int_0^t \left[ \langle \pi^1_s, \partial_s G_s - p_R G_s \rangle + \langle \pi^0_s, p_I \langle \pi^1_s, 1 \rangle G_s \rangle \right] ds; \tag{2.50}
\]
namely
\[
\int_{\mathbb{T}^d} \rho_0(t,u)\, G_t(u)\, du = \int_{\mathbb{T}^d} \rho^0_S(u)\, G_0(u)\, du
+ \int_0^t \int_{\mathbb{T}^d} \left[ \partial_s G_s(u)\, \rho_0(s,u) - G_s(u)\, \rho_0(s,u)\, p_I \int_{\mathbb{T}^d} \rho_1(s,x)\, dx \right] du\, ds,
\]
\[
\int_{\mathbb{T}^d} \rho_1(t,u)\, G_t(u)\, du = \int_{\mathbb{T}^d} \rho^0_I(u)\, G_0(u)\, du
+ \int_0^t \int_{\mathbb{T}^d} \left[ \partial_s G_s(u)\, \rho_1(s,u) + G_s(u) \left( p_I\, \rho_0(s,u) \int_{\mathbb{T}^d} \rho_1(s,x)\, dx - p_R\, \rho_1(s,u) \right) \right] du\, ds,
\]


for all $G \in C^{1,0}([0,T] \times \mathbb{T}^d)$. In conclusion, we have proved that all limit points are concentrated on absolutely continuous trajectories $(\pi^0_t(du), \pi^1_t(du)) = (\rho_0(t,u)\, du,\ \rho_1(t,u)\, du)$ that are weak solutions of (2.26) in the sense of (2.49) and (2.50) and whose densities at time 0 are $\rho^0_S(\cdot)$, $\rho^0_I(\cdot)$.

Step 3 (Uniqueness of weak solutions of system (2.26)). It remains to prove the uniqueness of limit points. We have seen that all the limit points are concentrated on trajectories whose densities are weak solutions of (2.26); if we prove the uniqueness of this solution, then we'll have the uniqueness of limit points. We first fix the terminology.

Definition 2.4.1. Given initial conditions $\rho^0_S \in C(\mathbb{T}^d, [0,1])$ and $\rho^0_I \in C(\mathbb{T}^d, [0,1])$ such that $\sup_{u \in \mathbb{T}^d} \{\rho^0_S(u) + \rho^0_I(u)\} \le 1$, we say that $(\pi^0_t(du), \pi^1_t(du))_{t \in [0,T]}$, with $\pi^i_t \in \mathcal{M}_1^+(\mathbb{T}^d)$, $i \in \{0,1\}$, is a measure-valued solution of the system
\[
\begin{cases}
\partial_t \pi^0_t = -p_I \langle \pi^1_t, 1 \rangle\, \pi^0_t \\
\partial_t \pi^1_t = p_I \langle \pi^1_t, 1 \rangle\, \pi^0_t - p_R\, \pi^1_t \\
\pi^0_0(du) = \rho^0_S(u)\, du \\
\pi^1_0(du) = \rho^0_I(u)\, du
\end{cases}
\tag{2.51}
\]

if for every bounded function $G \in C_b^{1,0}([0,T] \times \mathbb{T}^d)$ and every $t \in [0,T]$,
\[
\langle \pi^0_t, G_t \rangle = \langle \pi^0_0, G_0 \rangle + \int_0^t \langle \pi^0_s, \partial_s G_s - p_I \langle \pi^1_s, 1 \rangle G_s \rangle\, ds \tag{2.52}
\]
and
\[
\langle \pi^1_t, G_t \rangle = \langle \pi^1_0, G_0 \rangle + \int_0^t \left[ \langle \pi^1_s, \partial_s G_s - p_R G_s \rangle + \langle \pi^0_s, p_I \langle \pi^1_s, 1 \rangle G_s \rangle \right] ds. \tag{2.53}
\]
Given a solution $\Pi = (\pi^0_t, \pi^1_t)_{t \in [0,T]}$, we introduce the notation
\[
r^i(t) := \langle \pi^i_t, 1 \rangle, \qquad i \in \{0,1\}.
\]

The functions $r^i : [0,T] \to \mathbb{R}$, $i \in \{0,1\}$, are almost surely continuous under the limiting probabilities. Indeed, if $t_n \to t$, since
\[
Q^*\left( C([0,T], \mathcal{M}_1^+)^2 \right) = 1,
\]
we have that $\delta(\pi^i_{t_n}, \pi^i_t) \to 0$. Therefore $\pi^i_{t_n}$ converges weakly to $\pi^i_t$ and, consequently, $r^i(t_n) \to r^i(t)$.

Using the function $G_t(u) \equiv 1$, we obtain
\[
r^0(t) = r^0(0) - \int_0^t p_I\, r^1(s)\, r^0(s)\, ds,
\]
\[
r^1(t) = r^1(0) + \int_0^t \left[ p_I\, r^1(s)\, r^0(s) - p_R\, r^1(s) \right] ds;
\]


i.e.
\[
\begin{cases}
\dfrac{d}{dt} r^0(t) = -p_I\, r^1(t)\, r^0(t) \\[4pt]
\dfrac{d}{dt} r^1(t) = p_I\, r^1(t)\, r^0(t) - p_R\, r^1(t) \\[4pt]
r^0(0) = \langle \pi^0_0, 1 \rangle = \int_{\mathbb{T}^d} \rho^0_S(u)\, du = s_0 > 0 \\[4pt]
r^1(0) = \langle \pi^1_0, 1 \rangle = \int_{\mathbb{T}^d} \rho^0_I(u)\, du = i_0 > 0.
\end{cases}
\tag{2.54}
\]
This system has a unique local solution. Moreover,
\[
r^0(t) = s_0\, e^{-\int_0^t p_I\, r^1(s)\, ds}, \qquad
r^1(t) = i_0\, e^{-\int_0^t \left( p_R - p_I\, r^0(s) \right) ds},
\]
from which $r^0(t) > 0$ and $r^1(t) > 0$. Adding up the equations, we have
\[
\frac{d}{dt}\left( r^0(t) + r^1(t) \right) = -p_R\, r^1(t) < 0
\]
and, therefore, $r^0(t) + r^1(t) \le s_0 + i_0$, which ensures that the solution is global.
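The identities used above are easy to check numerically: integrating (2.54) by forward Euler while accumulating $\int_0^t r^1(s)\,ds$ recovers the exponential representation of $r^0$ up to discretization error, and $r^0 + r^1$ is indeed decreasing. A sketch with arbitrary, illustrative values of $p_I$, $p_R$, $s_0$, $i_0$:

```python
import math

pI, pR = 0.3, 0.1          # illustrative rates
s0, i0 = 0.8, 0.05         # illustrative initial masses
dt, steps = 1e-3, 30000    # forward Euler on [0, 30]

r0, r1, int_r1 = s0, i0, 0.0
totals = [r0 + r1]
for _ in range(steps):
    int_r1 += dt * r1      # accumulates the integral of r1(s) ds
    r0, r1 = r0 + dt * (-pI * r1 * r0), r1 + dt * (pI * r1 * r0 - pR * r1)
    totals.append(r0 + r1)

# r0(t) = s0 * exp(-pI * integral of r1), up to the Euler truncation error
closed_form = s0 * math.exp(-pI * int_r1)
```

Positivity of $r^0$ and $r^1$ and the monotone decrease of the total mass are preserved by the discrete scheme as well, since the per-step multiplicative factors stay positive for this step size.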

Summarizing, given any solution $\Pi = (\pi^0_t, \pi^1_t)_{t \in [0,T]}$, we have that $\langle \pi^0_t, 1 \rangle = r^0(t)$ and $\langle \pi^1_t, 1 \rangle = r^1(t)$, where $(r^0(t), r^1(t))$ is the unique solution of (2.54).

Suppose then that we have two distinct solutions $\Pi = (\pi^0_t, \pi^1_t)_{t \in [0,T]}$ and $\tilde{\Pi} = (\tilde{\pi}^0_t, \tilde{\pi}^1_t)_{t \in [0,T]}$, and denote by $\mu$ their difference: $\mu = \Pi - \tilde{\Pi} = (\pi^0_t - \tilde{\pi}^0_t,\ \pi^1_t - \tilde{\pi}^1_t)_{t \in [0,T]} = (\mu^0_t, \mu^1_t)_{t \in [0,T]}$. Then, from (2.52) and (2.53) it follows that

\[
\langle \mu^0_t, G_t \rangle = \int_0^t \left[ \langle \pi^0_s, \partial_s G_s - p_I \langle \pi^1_s, 1 \rangle G_s \rangle - \langle \tilde{\pi}^0_s, \partial_s G_s - p_I \langle \tilde{\pi}^1_s, 1 \rangle G_s \rangle \right] ds
\]
and
\[
\langle \mu^1_t, G_t \rangle = \int_0^t \left[ \langle \mu^1_s, \partial_s G_s - p_R G_s \rangle + \langle \pi^0_s, p_I \langle \pi^1_s, 1 \rangle G_s \rangle - \langle \tilde{\pi}^0_s, p_I \langle \tilde{\pi}^1_s, 1 \rangle G_s \rangle \right] ds.
\]
But, using that $\langle \pi^1_s, 1 \rangle = \langle \tilde{\pi}^1_s, 1 \rangle = r^1(s)$, we obtain
\[
\langle \mu^0_t, G_t \rangle = \int_0^t \langle \mu^0_s, \partial_s G_s - p_I\, r^1(s)\, G_s \rangle\, ds \tag{2.55}
\]
and
\[
\langle \mu^1_t, G_t \rangle = \int_0^t \left[ \langle \mu^1_s, \partial_s G_s - p_R G_s \rangle + \langle \mu^0_s, p_I\, r^1(s)\, G_s \rangle \right] ds, \tag{2.56}
\]


for every $G \in C_b^{1,0}([0,T] \times \mathbb{T}^d)$. From the first relation above, we have
\[
|\langle \mu^0_t, G_t \rangle| = \left| \int_0^t \langle \mu^0_s, \partial_s G_s - p_I\, r^1(s)\, G_s \rangle\, ds \right|
\le \int_0^t p_I\, r^1(s)\, |\langle \mu^0_s, G_s \rangle|\, ds + \int_0^t |\langle \mu^0_s, \partial_s G_s \rangle|\, ds
\le p_I (s_0 + i_0) \int_0^t |\langle \mu^0_s, G_s \rangle|\, ds + \int_0^t |\langle \mu^0_s, \partial_s G_s \rangle|\, ds,
\]
for every $G \in C_b^{1,0}([0,T] \times \mathbb{T}^d)$. Choosing a function $G$ that does not depend on time, i.e. $G_t(u) = F(u)$ with $F \in C_b(\mathbb{T}^d)$, it follows that
\[
|\langle \mu^0_t, F \rangle| \le p_I (s_0 + i_0) \int_0^t |\langle \mu^0_s, F \rangle|\, ds,
\]
and, by Gronwall's lemma, $|\langle \mu^0_t, F \rangle| \le 0$, hence
\[
\langle \mu^0_t, F \rangle = 0
\]

for every $F \in C_b(\mathbb{T}^d)$. It follows that $\mu^0_t = 0$ for every $t \in [0,T]$. Thus, (2.56) becomes
\[
\langle \mu^1_t, G_t \rangle = \int_0^t \langle \mu^1_s, \partial_s G_s - p_R G_s \rangle\, ds,
\]
and we can show analogously that $\mu^1_t = 0$ for every $t \in [0,T]$. This proves the uniqueness of solutions of the system.

Step 4 (Convergence in probability at fixed time). In conclusion, thanks to the uniqueness result, we have proved that the sequence $(Q^N)_N$ converges to the Dirac measure concentrated on the trajectory $(\rho_0(t,u)\, du,\ \rho_1(t,u)\, du)_{t \in [0,T]}$, where $(\rho_0(t,u), \rho_1(t,u))$ is the unique solution of (2.26). This means that the process $(\pi_t^{N,S}, \pi_t^{N,I})_{t \in [0,T]}$ converges in distribution, as $N \to \infty$, to the deterministic trajectory $(\rho_0(t,u)\, du,\ \rho_1(t,u)\, du)_{t \in [0,T]}$. We now consider the map obtained by evaluating the process at a time $t \in [0,T]$. In general this map is not continuous, but it is continuous at every path that is continuous at time $t$; and the limiting probability measure is concentrated on continuous trajectories, as we proved in (2.37). Thus, since composition with a continuous function preserves convergence in distribution, for every fixed $t$, $(\pi_t^{N,S}, \pi_t^{N,I})$ converges in distribution to the deterministic measure $(\rho_0(t,u)\, du,\ \rho_1(t,u)\, du)$. Since convergence in distribution to a deterministic limit implies convergence in probability, the theorem is proved.
