
Corso di Laurea in Matematica (Degree Course in Mathematics)

Università degli Studi di Pisa

Master's Thesis

Modeling virus spread through interacting systems

23 October 2020

Candidate: Eleonora La Fauci

Advisor: Prof. Franco Flandoli

Co-examiner: Prof. Dario Trevisan


Contents

Introduction

1 Preliminaries
  1.1 Introduction to mean field theory
  1.2 Compactness in C([0, T]; R^d)
  1.3 Compact sets in the space of measure-valued functions
  1.4 Compactness of random measure-valued sequences

2 Viral load and epidemics
  2.1 Viruses
  2.2 Immune system
  2.3 Models for viral load and immune system response
    2.3.1 Function b
    2.3.2 Function σ
    2.3.3 Stochastic simulations

3 Interacting systems
  3.1 Equation for empirical measures
  3.2 Laws of empirical measures
  3.3 The PDE limit
  3.4 Uniqueness of measure-valued solutions for the PDE limit

4 Modeling virus spread with Cellular Automata
  4.1 Cellular Automaton
    4.1.1 Covid-19 Cellular Automaton
  4.2 Adjacency matrices
    4.2.1 Family units or bubbles
    4.2.2 Random groups or pseudo-bubbles
    4.2.3 Layers
  4.3 Probability to get infected
  4.4 Relation between parameters and some characteristics of the virus
    4.4.1 Choice of parameters
  4.5 Modeling of the Covid-19 cellular automaton
    4.5.1 Scenario 1
    4.5.2 Scenario 2
    4.5.3 Scenario 3
    4.5.4 Scenario 4
    4.5.5 Comparison between scenarios
  4.6 Modeling a real case
  4.7 Modeling virus properties
    4.7.1 Incubation period
    4.7.2 Serial interval
    4.7.3 Reproductive number
  4.8 Conclusions

Appendix
  5.1 Code for Scenario 1
  5.2 Code for Scenario 2
  5.3 Code for Scenario 3
  5.4 Code for Scenario 4
  5.5 Code for Incubation period
  5.6 Code for Serial interval
  5.7 Code for Reproductive number


Introduction

With the recent outbreak of the Covid-19 pandemic, interest in mathematical models describing epidemics has been growing. In this work we present two different approaches to modeling virus spread in a large population of individuals: the first is theoretical, whereas the second is numerical and relies on the theoretical background of Markov chains. When we have to study a high number of interacting individuals we are dealing with a very high analytical and computational complexity, and a possible strategy to reduce it is to derive a description of the collective dynamics. This means that, starting from a setting of N individuals with characteristics summarized in X_t^1, ..., X_t^N ∈ R^d, we want to find

a mean field limit for the empirical measures

S_t^N = \frac{1}{N} \sum_{i=1}^N \delta_{X_t^i}

when N → ∞. In our case the individuals' characteristics are: position, viral load (namely the cells infected by the virus), and the immune system cells that fight them; they evolve in time according to the following system, for every individual k:

\begin{cases}
dY_t^k = 0 \\
dV_t^k = b_1(V_t^k, I_t^k)\,dt + \frac{1}{N}\sum_{j \neq k} g(Y^k, Y^j)\,h(V_t^j)\,dt + \sigma(V_t^k)\,dW_t^k \\
dI_t^k = b_2(V_t^k, I_t^k)\,dt
\end{cases}

In the second equation the second term makes the viral loads of the individuals interact; hence, we are describing a dynamics made up of interacting systems of SDEs. The interaction kernel is built in such a way that pairwise interactions lose importance, because every individual's viral load interacts only with an average input of the others. Hence, finding the mean field limit of this large system means describing this dynamics as N goes to infinity. In the limit the information is encoded in a weak measure-valued solution of a non-linear PDE. Under suitable assumptions, we prove that

S_t^N converges in probability to µ_t with respect to the topology of C([0, T]; Pr_1(R^d)), where µ_t is a measure-valued solution of the PDE limit

\frac{\partial \mu_t}{\partial t} = \frac{1}{2}\frac{\partial^2(\sigma^2 \mu_t)}{\partial v^2} - \frac{\partial(b_2 \mu_t)}{\partial i} - \frac{\partial(b(\mu_t)\mu_t)}{\partial v},
\qquad
b(\mu_t)(y, v, i) = b_1(v, i) + \int_{\mathbb{R}^d} g(y, w)\,h(z)\,\mu_t(dw, dz, di).

The second part of the thesis presents the other approach to modeling virus spread; it is joint work with Franco Flandoli, Martina Riva and Patrizia Ferrante. We choose the individual states S_ind = {S, IN, II, U, K, E}, which stand


respectively for susceptible, infected not infectious, infected and infectious, unknown to the system, known to the system and exited, and we consider a cellular automaton in which the allowed transitions are the following:

S → IN → II → U → E
          II → K → E

The diagram of transitions is presented in this way because in some cases the state E will be split into two different states E1 and E2. In this way we try to model the evolution of infections of the novel coronavirus, COVID-19. To construct the cellular automaton a considerable number of parameters were chosen with the help of the literature. The transition probability from the state S to IN depends on the other infected individuals in the population. Hence, by prescribing different rules of interaction among the population we can build cellular automata of different complexity and realism. With the most realistic version we model the real case of infections in Pisa over a period of time that includes the lockdown. We use this real case to argue that the randomness of our model is capable of describing reality. Moreover, we also use our model to extract information about the incubation period, the serial interval and R0. In particular we try to fit densities for the incubation period and the serial interval, and compare them with those found in the medical and statistical literature.
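The transition rules above can be sketched as a toy single-individual automaton. The states and allowed transitions are the ones listed in the text; all numerical rates (infection, incubation, detection, removal probabilities) are placeholder values for illustration only, not the parameters chosen in Chapter 4, and in the full model the infection probability depends on the infected individuals in the population rather than being constant.

```python
import random

# States: susceptible, infected not infectious, infected and infectious,
# unknown to the system, known to the system, exited.
S, IN, II, U, K, E = "S", "IN", "II", "U", "K", "E"

# Allowed transitions (E may later be split into E1/E2; here it is absorbing).
ALLOWED = {S: {IN}, IN: {II}, II: {U, K}, U: {E}, K: {E}, E: set()}

def step(state, p_infection, rng):
    """Advance one individual by one time unit; all rates are illustrative."""
    if state == S:
        return IN if rng.random() < p_infection else S
    if state == IN:
        return II if rng.random() < 0.3 else IN      # placeholder incubation rate
    if state == II:
        if rng.random() < 0.2:                       # placeholder exit-from-II rate
            return K if rng.random() < 0.5 else U    # detected or not
        return II
    if state in (U, K):
        return E if rng.random() < 0.1 else state    # placeholder removal rate
    return E                                         # E is absorbing

rng = random.Random(0)
trajectory = [S]
for _ in range(200):
    trajectory.append(step(trajectory[-1], p_infection=0.05, rng=rng))
```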

In Chapter 1 we provide the tools to prove, in the sequel, that the laws of the empirical measures are relatively compact in the space Pr(C([0, T]; Pr_1(R^d))). In Chapter 2 we give a very brief biological background for the third chapter and we show some deterministic and stochastic models for the mutual dynamics of viral load and immune system cells. In Chapter 3 we study the problem of a mean field interaction between a large number of individuals, each of which is described by a system of SDEs. Finally, Chapter 4 contains the result of our working group. This final chapter is the most important one, because we develop several versions of a cellular automaton to describe and understand the COVID-19 epidemic and the properties of this novel virus. The Appendix contains the code we wrote to run the simulations. In many cases the code is only partially reported, due to its length and because many lines or blocks would be repetitive.


Chapter 1

Preliminaries

In this first chapter we present the theoretical background of the thesis. It concerns mean field theory and states the theorems that will be used in the third chapter.

1.1 Introduction to mean field theory

Large systems of interacting particles are a topic of great interest in several scientific fields, such as physics, bioscience, epidemiology, economics, social science, etc. In some of these fields it is better to speak of individuals instead of particles, and this will be our case. In the next chapter we are going to consider interacting individuals, each of which obeys a system of SDEs in which the interaction kernel is of mean field type. This is the case when pairwise interactions lose importance and every individual is subject to a mean force: what counts is the interaction of each particle with some average input of the others. When we have to deal with a high number of individuals we face very high analytical and computational complexity, which we want to reduce.

A possible strategy is to derive a continuous description of the dynamics, in which the information is embedded in densities solving non-linear PDEs. This means that, starting with a setting of N interacting individuals, we find the collective behaviour as N → ∞. If we start with N individuals with characteristics summarized in X_t^1, ..., X_t^N ∈ R^d, we call empirical measure the quantity

S_t^N = \frac{1}{N} \sum_{i=1}^N \delta_{X_t^i}.

For any given t it is a random variable with values in Pr_1(R^d). Its role is to give a discretization of the individuals' density distribution, because for a fixed realization it returns the fraction of the N individuals at a certain position x ∈ R^d. Intuitively, we do not know how a large system behaves, and one natural question is in which regions of R^d it is more likely to find agglomerates of particles, i.e. in which region at time t it is likely to find an individual. In a situation of still individuals, given the law of the initial conditions, we can say how likely it is to find an individual with a certain level of viral load, time by time. Hence,

Definition 1 (Mean field limit). Consider a sequence of initial random conditions X_0^1, ..., X_0^N such that S_0^N converges to µ_0 as N → ∞. Then the mean field limit holds for this particular initial data if

S_\cdot^N \to \mu_\cdot


with µ_· a measure-valued solution of a specific PDE with initial data µ_0.

In the next chapters we will define the meaning of a measure-valued solution. In the definition, the convergence S_0^N → µ_0 has to be intended as weak convergence of measures, i.e. for every φ ∈ C_b(R^d) and every δ > 0

P\left( \left| \langle S_0^N, \varphi \rangle - \langle \mu_0, \varphi \rangle \right| > \delta \right) \to 0,

whereas the convergence of the whole process S_t^N has to be intended as convergence in probability with respect to the topology of C([0, T]; Pr_1(R^d)). Because the limit is deterministic, this hints at a possible connection with some sort of law of large numbers. Let us briefly explain how the theoretical part is organized. After having defined the empirical measures, we find their evolution equation and prove that these equations converge to the evolution equation of the corresponding deterministic limit measure. This requires various steps, including tightness of the sequence of the laws of {S_·^N} and existence and uniqueness of the solution of the evolution equation of the limit measure. In the body of this thesis the PDE for the measure µ_t will always be interpreted as a PDE for measure-valued functions.
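The weak convergence ⟨S_0^N, φ⟩ → ⟨µ_0, φ⟩ can be illustrated numerically for independent initial conditions. The choices µ_0 = N(0, 1) and φ = cos below are arbitrary, made only because ⟨µ_0, φ⟩ = E[cos X] = e^{−1/2} is known in closed form:

```python
import math
import random

def empirical_pairing(sample, phi):
    """<S^N, phi> = (1/N) * sum_i phi(X_i) for the empirical measure S^N."""
    return sum(phi(x) for x in sample) / len(sample)

rng = random.Random(0)
exact = math.exp(-0.5)   # <mu_0, cos> for mu_0 = N(0, 1)

errors = []
for N in (10, 100, 10_000):
    sample = [rng.gauss(0.0, 1.0) for _ in range(N)]
    errors.append(abs(empirical_pairing(sample, phi=math.cos) - exact))
```

For large N the error is of order N^{-1/2}, the law-of-large-numbers behaviour the text hints at.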

In the next sections we present the essential theorems that will be useful to prove the tightness of the laws of the empirical measures. We only give statements, and refer to [5] for the proofs.

1.2 Compactness in C([0, T]; R^d)

Let us recall some definitions and well-known results.

Definition 2. A set A is called relatively compact if its closure is compact.

Proposition 1. Every subset of a compact set is relatively compact.

Theorem 1 (Ascoli-Arzelà, classic version). A family of functions F ⊂ C([0, T]; R^d) is relatively compact in the uniform topology if

(1) for every t ∈ [0, T] the set {f(t) : f ∈ F} is bounded;

(2) for every ε > 0 there is δ > 0 such that |f(t) − f(s)| ≤ ε for every f ∈ F and every s, t ∈ [0, T] with |t − s| ≤ δ.

We recall that the Hölder seminorm of f : [0, T] → R^d is

[f]_{C^\alpha} = \sup_{t \neq s} \frac{|f(t) - f(s)|}{|t - s|^\alpha}

where 0 < α ≤ 1. It is only a seminorm because it does not satisfy all the axioms of a norm: it is not positive definite; indeed every constant function c has [c]_{C^\alpha} = 0 even though c ≠ 0.

Using the Hölder seminorm, the conditions in the Ascoli-Arzelà theorem can be replaced with the following:


(1') there is M > 0 such that ‖f‖_∞ = sup_{t∈[0,T]} |f(t)| ≤ M for all f ∈ F;

(2') for some α ∈ (0, 1), there is R > 0 such that [f]_{C^α} ≤ R for all f ∈ F.

The first is only a rewriting, while the second is a sufficient condition for (2): indeed a family of equi-Lipschitz or equi-Hölder functions is a family of equi-continuous functions. Hence, the sets

K_{M,R} = \left\{ f \in C([0, T]; \mathbb{R}^d) : \|f\|_\infty \leq M, \ [f]_{C^\alpha} \leq R \right\}

are relatively compact in C([0, T]; R^d) by Proposition 1, since each is a subset of a relatively compact set. We need to further restrict the requirements on the sets of functions that make up these relatively compact sets. For this reason we introduce

Definition 3 (Sobolev space with fractional index). The Sobolev space W^{α,p}(0, T; R^d), with α ∈ (0, 1) and p > 1, is defined as the set of f ∈ L^p([0, T]; R^d) such that

[f]_{W^{\alpha,p}} := \int_0^T \int_0^T \frac{|f(t) - f(s)|^p}{|t - s|^{1 + \alpha p}} \, dt \, ds < \infty.

This space is endowed with the norm ‖f‖_{W^{α,p}} = ‖f‖_{L^p} + [f]_{W^{α,p}}.

Theorem 2 (Sobolev embedding). If (α − ε)p > 1, then

W^{\alpha,p}(0, T; \mathbb{R}^d) \subset C^\varepsilon([0, T]; \mathbb{R}^d) \quad \text{and} \quad [f]_{C^\varepsilon} \leq C_{\varepsilon,\alpha,p} \|f\|_{W^{\alpha,p}}.

Thanks to the introduction of Sobolev spaces with fractional index, instead of conditions (1) and (2) we can use the sufficient conditions (1') and

(2'') for some α ∈ (0, 1) and p > 1 with αp > 1, there is R > 0 such that [f]_{W^{α,p}} ≤ R for all f ∈ F.

Indeed (2'') implies (2'): if (2'') holds there exists ε > 0 such that [f]_{C^ε} ≤ C_{ε,α,p} ‖f‖_{W^{α,p}}; besides, ‖f‖_{L^p} ≤ T^{1/p} ‖f‖_∞ ≤ T^{1/p} M, hence

[f]_{C^\varepsilon} \leq C_{\varepsilon,\alpha,p} \left( T^{1/p} M + R \right)

for all f ∈ F. Therefore, if αp > 1, the sets

K_{M,R} = \left\{ f \in C([0, T]; \mathbb{R}^d) : \|f\|_\infty \leq M, \ [f]_{W^{\alpha,p}} \leq R \right\}

are relatively compact in C([0, T]; R^d), again by Proposition 1.

1.3 Compact sets in the space of measure-valued functions

Let Pr_1(R^d) be the set of all probability measures µ on the Borel sets of R^d with finite first moment ∫_{R^d} |x| µ(dx) < ∞. We can endow this space with the 1-Wasserstein metric, namely

W_1(\mu, \nu) = \sup \left\{ \langle \mu, \varphi \rangle - \langle \nu, \varphi \rangle : [\varphi]_{Lip} \leq 1 \right\},


where [φ]_{Lip} is the Lipschitz seminorm

[\varphi]_{Lip} = \sup_{x \neq y} \frac{|\varphi(x) - \varphi(y)|}{|x - y|}.

Actually this is not the usual definition but an equivalent one; the equivalence is the content of the Kantorovich-Rubinstein characterization. In any case, Pr_1(R^d) with the Wasserstein distance is a complete and separable metric space, and convergence in this distance is equivalent to weak convergence of probability measures plus convergence of first moments, which guarantees that the limit measure is in the Wasserstein space. As a consequence of the general version of the Ascoli-Arzelà theorem in metric spaces, we have the following:
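For empirical measures on the real line with the same number of atoms, W_1 can be computed explicitly: the optimal coupling matches order statistics, so W_1 is the mean absolute difference of the sorted samples. This standard fact gives a cheap way to experiment with the Wasserstein distance:

```python
def w1_empirical(xs, ys):
    """W1 between two empirical measures on R with equally many atoms:
    the optimal transport plan pairs order statistics."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Translating every atom by c gives W1 = |c|.
d = w1_empirical([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])
```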

Proposition 2. Let Pr_1(R^d) be endowed with the Wasserstein metric W_1. A family of measure-valued functions F ⊂ C([0, T]; Pr_1(R^d)) is relatively compact if

(a) for every t ∈ [0, T] and every ε > 0 there exists r_{ε,t} > 0 such that µ_t(B(0, r_{ε,t})) > 1 − ε for every µ_· ∈ F;

(b) for every ε > 0 there exists δ > 0 such that W_1(µ_t, µ_s) < ε for every µ_· ∈ F and all t, s ∈ [0, T] with |t − s| < δ.

The first condition is called tightness of the family F and it can be proved using the following

Lemma 1. Given t ∈ [0, T], if there exists a constant C_t > 0 such that

\int_{\mathbb{R}^d} |x| \, \mu_t(dx) \leq C_t

for every µ_· ∈ F, then condition (a) holds for that value of t.

Condition (b) can be proved using the following

Lemma 2. If for some α ∈ (0, 1) and p ≥ 1 with αp > 1 there exists a constant C > 0 such that

\int_0^T \int_0^T \frac{W_1(\mu_t, \mu_s)^p}{|t - s|^{1 + \alpha p}} \, dt \, ds \leq C

for every µ_· ∈ F, then (b) holds.

Using Lemma 1 and Lemma 2 we finally obtain another family of relatively compact sets, given in the next corollary.

Corollary 1. Given α ∈ (0, 1) and p > 1 with αp > 1, for any M, R > 0 the sets K_{M,R} ⊂ C([0, T]; Pr_1(R^d)) defined as

K_{M,R} = \left\{ \mu_\cdot : \sup_{t \in [0,T]} \int_{\mathbb{R}^d} |x| \, \mu_t(dx) \leq M, \ \int_0^T \int_0^T \frac{W_1(\mu_t, \mu_s)^p}{|t - s|^{1 + \alpha p}} \, dt \, ds \leq R \right\}

are relatively compact.


1.4 Compactness of random measure-valued sequences

Consider a probability space (Ω, F, P) with expectation E, and random measure-valued functions

\mu_\cdot : (\Omega, \mathcal{F}, P) \longrightarrow C([0, T]; Pr_1(\mathbb{R}^d)),

denoted by µ_{t,ω}(dx), with continuous paths. Recall that the space (Pr_1(R^d), W_1) is a complete and separable metric space. Having exhibited families of relatively compact sets in the space C([0, T]; Pr_1(R^d)), using Prohorov's theorem we give a criterion to prove that a sequence of probability measures on C([0, T]; Pr_1(R^d)) is relatively compact with respect to weak convergence. It is well known that Prohorov's theorem holds for complete and separable metric spaces, i.e. Polish spaces. Assume we have a sequence µ^N_{t,ω}(dx); then we have

Proposition 3. If for some p, β, C > 0 and for every N ∈ N

E\left[ \sup_{t \in [0,T]} \int_{\mathbb{R}^d} |x| \, \mu_t^N(dx) \right] \leq C, \qquad E\left[ W_1(\mu_t^N, \mu_s^N)^p \right] \leq C |t - s|^{1+\beta},

then the laws of {µ_·^N} are relatively compact in Pr(C([0, T]; Pr_1(R^d))).


Chapter 2

Viral load and epidemics

In this chapter we briefly introduce viruses and the immune system from an interdisciplinary point of view. We do not aim to be exhaustive, but only to describe the biological background of the next chapter. After that we show some models that describe the mutual dynamics of viruses and immune system cells. This chapter mostly follows [11], especially for the biological explanations and some deterministic models. The interaction between pathogens, such as viruses, and the immune system can be viewed as a predator-prey dynamics, as in Lotka-Volterra models. We can imagine the viruses as the prey and the immune system cells as the predators. When the virus replicates within the host cells and grows its population size, the immune system cells replicate to keep the viral load low. In the absence of prey, predators die out; in reality, however, the immune cell population undergoes a contraction and then remains at a memory level. Hence, we want to explain both why this happens and how to model it. One part of the immune system that is very important in the fight against viral infections consists of killer T cells, or cytotoxic T lymphocytes (CTL); they fight pathogens that live inside cells. Let us show a few details about viruses and the immune system.

2.1 Viruses

A virus consists of genetic material wrapped in a protein coat. It does not have the ability to reproduce on its own, and this obliges the virus to enter a cell and use the cell's metabolic machinery. In order to enter a cell, the virus usually attaches to some receptor on the cell and is taken inside. The virus then loses its protein coat and the viral genome is exposed. At that point the infected cell produces proteins that will build up new virus particles. The new virus particles then leave the infected cell to find new susceptible cells. Different viruses, however, have different genomes, and the exact mechanism by which reproduction occurs depends on the form of the genome. Hence, when we talk about infected cells we are referring to human cells that are able to produce viral particles. We will see that there exist specific immune system cells that fight infected cells or free virus particles. From now on we will refer to the viral load as the number of infected cells.

2.2 Immune system

Immune responses can be subdivided into two categories: innate or nonspecific responses, and specific or adaptive responses. Nonspecific means that the immune system cannot specifically recognize the exact structure of the pathogen, so it only provides a first line of defense, such as fever, that slows down the initial growth of a pathogen but is insufficient


Figure 2.1: CTL expand in order to fight the virus and, after a phase of contraction, remain at a memory level.

to extinguish an infection. At this point a specific, adaptive immune response is required. Specific means that cells bear receptors that can recognize the physical structure of a pathogen; in other words, they recognize proteins from which the pathogen is built. Upon recognition, these immune cells start to divide and expand. They rapidly increase in number, and this enables them to fight the pathogen, resulting in the resolution of the infection. A substance that induces the generation of a specific immune response is called an antigen. The site of the antigen that is actually recognized by the receptor of the immune cell is called an epitope. The same pathogen can have a variety of epitopes, each of which is recognized by a separate specific immune cell. Therefore, multiple immune cell clones can respond against the same pathogen.

From now on, when we talk about immune system cells we mean cells responsible for specific immune responses, because those are the ones that are generally necessary for the resolution of infections. The specific immune system has two categories of agents that directly fight the pathogen: antibodies and CTL. Antibodies, which are produced by B cells, can neutralize free virus particles. CTL, on the other hand, attack infected cells. CTL carry the T cell receptor on their surface, which can recognize particles of the pathogen displayed on the surface of infected cells.

When this recognition occurs, the CTL can release substances that result in the death of the infected cell. CTL can also secrete substances that prevent viral genomes from being expressed and, in some cases, this reaction also removes the viral genome from infected cells. As the infection is resolved, the population of CTL declines; this is often referred to as the contraction phase.

However, it does not decline to the same low level from which the response started. Instead it settles around an elevated level, and the CTL persist at this elevated level for long periods of time in the absence of any further exposure to the virus. This is called immunological memory, and because an elevated number of immune cells remains after the resolution of the infection, it is thought that the host can react more efficiently if it is reinfected with the same pathogen. Such a secondary infection will not result in much virus growth, and the host is protected from symptoms and disease. Immunological memory is also the basis of the protective function of vaccines.

As explained above, specific immune responses can recognize a defined part of a viral antigen, i.e. a viral epitope. The T cell receptor is specific for a given epitope. If the virus mutates this epitope, the T cell will no longer be able to recognize and fight it.

2.3 Models for viral load and immune system response

In this section we want to show some stochastic models describing the dynamics of viral load and immune system cells. In [11] there are plenty of deterministic models; we are going to show only one of these, afterwards suggesting some variations. The reason why we introduce a stochastic term is simple: one individual infects another because of his viral load, and adding noise lets us see fluctuations in that viral load, which helps us model infections more realistically in a population whose members share the same behaviour. A simple stochastic model for the viral load that does not take into account immune system cells is the following:

dV_t = \left[ r V_t \left( 1 - \frac{V_t}{m} \right) - a V_t \right] dt + \sigma \, dW_t. \tag{2.1}

In this model, when r < a the virus fails to establish an infection; on the other hand, if r > a, the infection establishes itself. The equilibria of the deterministic part are v_1 = 0 and v_2 = m(r − a)/r. With the first choice of parameters the solutions tend to zero, while in the second setting they have a logistic shape and tend to v_2. This last model could be suitable for use within a more complex model; however, its viral load does not decrease but tends to a maximum.
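Equation (2.1) is easy to explore numerically with the Euler-Maruyama scheme described later in Section 2.3.3. The sketch below is illustrative; setting σ = 0 recovers the deterministic skeleton, whose equilibria v_1 = 0 and v_2 = m(r − a)/r can be checked directly.

```python
import random

def simulate_viral_load(r, a, m, sigma, v0, h=0.01, steps=5000, seed=0):
    """Euler-Maruyama for dV = [r V (1 - V/m) - a V] dt + sigma dW."""
    rng = random.Random(seed)
    v = v0
    for _ in range(steps):
        drift = r * v * (1.0 - v / m) - a * v
        v += h * drift + sigma * rng.gauss(0.0, h ** 0.5)
    return v

# Deterministic skeleton (sigma = 0):
v_ext = simulate_viral_load(r=0.5, a=1.0, m=10.0, sigma=0.0, v0=1.0)  # r < a: extinction
v_end = simulate_viral_load(r=1.0, a=0.5, m=10.0, sigma=0.0, v0=1.0)  # r > a: tends to v2 = 5
```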

Biologically, the virus population first increases and then decreases if the immune system cells manage to clear the virus; this means that realistic virus growth has to be modeled using the CTL dynamics too. Even mathematically, if we want to describe a positive quantity with that trend, we have to use other auxiliary quantities. If it were possible to describe V_t without other variables, we would have a dynamics like

dV_t = b(V_t) \, dt + \sigma(V_t) \, dW_t,

where V_t is the viral load. For simplicity's sake, imagine that the noise is absent, that is,

\frac{dV_t}{dt} = b(V_t).

This means that if the value of V_t for a given t is known, we can also determine the value of b(V_t), hence dV_t/dt. This is absurd, because it would mean that the value of V_t alone tells whether the function is increasing or decreasing; but the fact that V_t initially grows and then decreases means it can take the same value in two different phases, or in more than two if there are oscillations. This is in accordance with our knowledge of how things behave biologically, so even mathematically we need at least one other quantity to describe V_t. For this reason we look for functions b and σ such that

dV_t = b(I_t, V_t) \, dt + \sigma(I_t, V_t) \, dW_t.

2.3.1 Function b

For a while the role of σ(I_t, V_t) will be auxiliary, because it is better to discuss the deterministic part first. An example of a realistic model, known in the literature, for the dynamics of susceptible cells, infected cells and CTL is the following:

\begin{cases}
\dot{C}_t = \lambda - \delta C_t - \beta C_t V_t \\
\dot{V}_t = \beta C_t V_t - a V_t - p V_t I_t \\
\dot{I}_t = c V_t I_t - b I_t
\end{cases} \tag{2.2}


where C_t are the uninfected cells, V_t the infected cells and I_t the CTL. The parameters have the following meanings: uninfected cells are produced at rate λ and die at rate δ; infection occurs at rate β; infected cells die at rate a and are neutralized by CTL at rate p; finally, CTL replicate, in response to antigenic stimulation, at rate c and die at rate b. Figure 2.2 shows a plot of this model, in which we omit the number of susceptible cells. As we said in the previous section, CTL can also have nonlytic activity, but for simplicity we do not include this here. Adding noise to the second equation we obtain

dV_t = (\beta C_t V_t - a V_t - p V_t I_t) \, dt + \sigma \, dW_t \tag{2.3}

It is noteworthy that if the system starts with I_0 = 0, the solution I stays at zero for every t; hence we cannot allow the value zero for I_0. Given a value I_0 > 0, as long as V_t is zero, I_t decays to zero exponentially. This has the consequence that when V_t > 0 the growth of the CTL is slow, but this is realistic: when an individual gets infected, the immune system reacts first with the nonspecific response, whereas the specific one appears later, as the virus grows. In addition to the first deterministic model (2.2), we can consider other possibilities. The first is to simplify the model using only the immune system cells, approximating their trend with a logistic function, that is,

\frac{dI_t}{dt} = I_t (I_{Max} - I_t)

where I_{Max} is its limit value. As in the previous case we will have an equation of the type

dV_t = b(I_t, V_t) \, dt + \sigma \, dW_t,

hence we need a function b such that V_t at first grows exponentially when I_t is very small, then saturates while I_t is growing, and finally decreases as I_t approaches its limit value.

What we have just said leads us to take as stochastic differential equation for V_t

dV_t = (\tilde{I}_{max} - I_t) V_t \, dt + \sigma \, dW_t \tag{2.4}

where \tilde{I}_{max} = I_{max} − ε and ε is a small number.

The second possibility is more realistic and uses what we said in the introduction: after their growth, the immune system cells do not remain at a high level but decrease to a memory level. To model this behaviour we have to use V_t in the dynamics of I_t too. Hence, this alternative model tries to summarize the information contained in (2.2) while avoiding the use of too many variables. One attempt in this direction is the following:

\begin{cases}
\dot{V}_t = r V_t \left( 1 - \frac{V_t}{m} \right) - a V_t - p V_t I_t \\
\dot{I}_t = c V_t I_t - b I_t
\end{cases} \tag{2.5}

The first equation models the growth of the virus with a logistic function with growth rate r and capacity limited by m; the virus dies at rate a and is killed by the immune system cells at rate p. The parameters in the second equation have the same meaning as in (2.2). The outcome of this different way of modeling their interaction is shown in Figure 2.3. The difference with respect to the first model is a less oscillating behaviour.

With our choices of parameters and initial conditions, the viral load ranges over small numbers. This is only a way to simplify the range of numbers we have to deal with; indeed, we do not want to use the real numbers of particles.


Besides, we can decide that, below a certain threshold ε < 0.5, the condition V_t ≤ ε means the virus has been cleared by the immune system cells. Obviously, with different choices of parameters we can model different steady states, or situations in which the CTL do not manage to clear the virus.

Figure 2.2: A simulation with the Euler method applied to system (2.2), with time step h = 0.1 and parameters λ = 1, δ = 0.1, β = 0.1, a = 0.2, p = 0.3, c = 0.2, b = 0.1. As initial condition we choose x_0 = 5, y_0 = 0.3, z_0 = 0.1.

Figure 2.3: A simulation with the Euler method applied to system (2.5), with time step h = 0.1 and parameters β = 0.1, a = 0.2, p = 0.5, c = 0.3, b = 0.1. As initial condition we choose y_0 = 0.3, z_0 = 0.1.
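The simulation behind Figure 2.2 can be sketched with a plain Euler scheme, using the parameters and initial conditions given in its caption (here C, V, I play the roles of x, y, z; the number of steps is an arbitrary horizon, not taken from the thesis):

```python
def simulate_ctl_model(lam=1.0, delta=0.1, beta=0.1, a=0.2, p=0.3,
                       c=0.2, b=0.1, h=0.1, steps=600,
                       c0=5.0, v0=0.3, i0=0.1):
    """Euler scheme for system (2.2): uninfected cells C, infected cells V, CTL I."""
    C, V, I = c0, v0, i0
    path = [(C, V, I)]
    for _ in range(steps):
        dC = lam - delta * C - beta * C * V
        dV = beta * C * V - a * V - p * V * I
        dI = c * V * I - b * I
        C, V, I = C + h * dC, V + h * dV, I + h * dI
        path.append((C, V, I))
    return path

path = simulate_ctl_model()
```

Plotting V and I from `path` reproduces the qualitative predator-prey behaviour described in the text.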


2.3.2 Function σ

Now we want to make some considerations regarding the function σ(V_t, I_t). For instance, if we decide to add some noise to the viral load in (2.5), we obtain the system of stochastic differential equations

\begin{cases}
dV_t = \left[ r V_t \left( 1 - \frac{V_t}{m} \right) - a V_t - p V_t I_t \right] dt + \sigma \, dW_t \\
dI_t = (c V_t I_t - b I_t) \, dt
\end{cases} \tag{2.6}

where we take a constant σ. Could this be suitable? The answer is no: it is a wrong choice, and now we are going to show why. Indeed, if we consider the equation

dV_t = \left[ r V_t \left( 1 - \frac{V_t}{m} \right) - a V_t - p V_t I_t \right] dt + \sigma \, dW_t \tag{2.7}

it is not true that if an individual starts with V_0^i = 0, I_0^i = 0, then he keeps those values until he gets infected. Indeed, if at time t = 0 we have I_0^i = 0, V_0^i = 0, the Brownian motion moves the process away from zero for t > 0; hence this would produce a positive, or even negative, viral load in a healthy individual. Basically, we have to modify (2.7) slightly, using a function σ that keeps the viral load at zero if we start from the initial conditions I_0^i = 0, V_0^i = 0; one suggestion is to choose

\sigma(V_t, I_t) = \sigma_1 V_t,

so that as long as V_t^i = 0 even the contribution of the Brownian motion is zero. Hence the equation becomes

dV_t = \left[ r V_t \left( 1 - \frac{V_t}{m} \right) - a V_t - p V_t I_t \right] dt + \sigma_1 V_t \, dW_t \tag{2.8}

with a suitable constant σ_1. Another possibility is to use a bounded function like

\sigma(V_t) = \sigma_1 \arctan(V_t).
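The absorbing role of the choice σ(V_t) = σ_1 V_t can be checked directly: with multiplicative noise, a healthy individual starting from V_0 = I_0 = 0 stays exactly at zero, since every term of the Euler-Maruyama update is proportional to V_t or I_t. The parameter values below are illustrative, not the thesis's.

```python
import random

def em_step(v, i, rng, r=0.1, m=10.0, a=0.2, p=0.5, c=0.3, b=0.1,
            sigma1=0.5, h=0.1):
    """One Euler-Maruyama step of (2.8) with sigma(V) = sigma1 * V."""
    dw = rng.gauss(0.0, h ** 0.5)
    v_new = v + h * (r * v * (1 - v / m) - a * v - p * v * i) + sigma1 * v * dw
    i_new = i + h * (c * v * i - b * i)
    return v_new, i_new

rng = random.Random(1)
v, i = 0.0, 0.0            # a healthy individual
for _ in range(1000):
    v, i = em_step(v, i, rng)
```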

In the next chapter we will consider interacting individuals that carry their own viral load and intensity of immune system response. For every individual we will add a term that models the infection of a susceptible individual by an infected one. As a result, the viral load starts to grow, and so does the number of CTL.

Moreover, this SDE leads to non-negative values of V_t and I_t. Under Lipschitz assumptions and linear growth of the coefficients we have continuous trajectories; hence, to become negative, V_t would have to reach the value zero, but when V_t = 0 both drift and diffusion vanish and V_t remains at zero. When V_t is zero and, by what we have just said, remains at that value, the equation for I_t reduces to dI_t = −b I_t dt; in particular, an individual starting from I_0 = 0 keeps I_t = 0.

2.3.3 Stochastic simulations

In this section we show a plot of some trajectories of the stochastic process solving one of the models for the viral load shown above. The viral load of one individual solves an equation of the form

dV_t = b_t \, dt + \sigma_t \, dW_t,


or equivalently

V_{t+h} - V_t = \int_t^{t+h} b_s \, ds + \int_t^{t+h} \sigma_s \, dW_s.

Thus, if h is small, we can approximate b_s and σ_s with their values at t and obtain

V_{t+h} - V_t \approx h b_t + \sigma_t (W_{t+h} - W_t),

because \int_t^{t+h} dW_s = W_{t+h} - W_t, and the random variable W_{t+h} - W_t is distributed as N(0, h). This method is known as the Euler-Maruyama method for the approximate numerical solution of a stochastic differential equation. It is a simple generalization of the Euler method for ordinary differential equations that we used in the previous section.
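The update rule just described can be sketched as follows; as a sanity check, with σ ≡ 0 the scheme reduces to the Euler method, so for dV = −V dt starting from V_0 = 1 it must approximate e^{−t}:

```python
import math
import random

def euler_maruyama(b, sigma, v0, h, steps, rng):
    """V_{t+h} = V_t + h*b(V_t) + sigma(V_t)*(W_{t+h} - W_t), increments ~ N(0, h)."""
    v = v0
    for _ in range(steps):
        v += h * b(v) + sigma(v) * rng.gauss(0.0, math.sqrt(h))
    return v

rng = random.Random(0)
# sigma = 0: plain Euler on dV = -V dt, so V(1) should be close to e^{-1}.
v1 = euler_maruyama(lambda v: -v, lambda v: 0.0, v0=1.0, h=0.001, steps=1000, rng=rng)
```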


Chapter 3

Interacting systems

Given a probability space Ω, F , (Ft)t∈[0,T ], P



we have N individuals still in positions

Y1, . . . , YN ∈ R2with viral load and intensity of immune reaction (V1

t, It1), . . . , (VtN, ItN).

Until the k-th individual is healthy it holds that (Vtk, Itk) = 0. In the previous chapter we showed various models for virus spread for a single individual. We decide to use only the mutual competition between viral load and immune system cells. Suppose that for an isolated individual the couple (Vtk, Itk) satisfies the system

\[
\begin{cases}
dV_t^k = b_1(V_t^k, I_t^k)\,dt + \sigma(V_t^k)\,dW_t^k \\
dI_t^k = b_2(V_t^k, I_t^k)\,dt
\end{cases}
\]

with $W_t^k$ a Brownian motion with respect to the filtration $\mathcal{F}_t$. The function $\sigma$ is supposed to depend on $V_t^k$ because we want the Brownian contribution only when the viral load is positive, as noted in the previous chapter. If the individuals interact with each other we have to prescribe a different stochastic equation for the viral load: the viral load of a healthy $k$-th individual changes because of the interaction with individuals with positive viral load, and the equation becomes

\[
dV_t^k = b_1(V_t^k, I_t^k)\,dt + \frac{1}{N}\sum_{j\neq k} g(Y^k, Y^j)\,h(V_t^j)\,dt + \sigma(V_t^k)\,dW_t^k,
\]

with $W_t^1, \dots, W_t^N$ independent Brownian motions. In this way we are supposing that the influence of the $j$-th individual on the $k$-th depends on their relative position, through the function $g$, and on the viral load of the $j$-th individual, through the function $h$. Thus we are considering the situation in which an individual who gets sick is not isolated, so infectious people close to each other inevitably continue to contribute to each other's viral load.
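The interacting system can be simulated directly with an Euler-Maruyama step per particle. This is a sketch under assumed coefficients chosen to respect the constraints stated in this chapter ($b_1(0,i)=0$, $\sigma(0)=0$, $h$ vanishing below a threshold $\varepsilon_0$, $g$ a compactly supported distance cutoff); none of these specific choices come from the thesis.

```python
import numpy as np

def simulate_interacting(N, T, n_steps, seed=0):
    """Euler-Maruyama sketch of the interacting system: still positions Y^k,
    viral loads V^k coupled through (1/N) sum_j g(Y^k, Y^j) h(V^j), and
    immune responses I^k driven by b2."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Y = rng.uniform(0.0, 1.0, size=(N, 2))        # fixed positions in the unit square
    V = np.zeros(N)
    V[0] = 1.0                                    # one initially infected individual
    I = np.zeros(N)

    # Illustrative coefficients respecting the stated constraints:
    b1 = lambda v, i: v * (1.0 - v) - v * i       # b1(0, i) = 0: no spontaneous infection
    b2 = lambda v, i: 0.5 * v - 0.1 * i           # CTL response driven by the viral load
    sigma = lambda v: 0.1 * v                     # sigma(0) = 0: noise only when infected
    h_fun = lambda v: np.where(v > 0.05, v, 0.0)  # h = 0 below the threshold eps0 = 0.05

    # g(y, y'): indicator of |y - y'| below a cutoff (compactly supported kernel)
    dist = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    G = np.where(dist < 0.3, 1.0, 0.0)
    np.fill_diagonal(G, 0.0)                      # g(Y^i, Y^i) = 0

    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=N)
        mean_field = G @ h_fun(V) / N
        # Clipping at zero mimics the positivity of the continuous dynamics.
        V = np.maximum(V + (b1(V, I) + mean_field) * dt + sigma(V) * dW, 0.0)
        I = np.maximum(I + b2(V, I) * dt, 0.0)
    return V, I

V, I = simulate_interacting(N=200, T=20.0, n_steps=2000)
```

Starting from a single infected individual, the mean-field term seeds the viral load of nearby individuals, which then grows by the internal dynamics: the infection spreads through the population.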

Remark 1. The functions involved in this model are thought of in full generality. However, some reasonable assumptions are
\[
b_1(0, I_t^k) = 0, \qquad b_1 \in C_b(\mathbb{R}^2), \qquad b_2 \in C_b(\mathbb{R}^2), \qquad \sigma(0) = 0, \qquad \sigma \in C_b(\mathbb{R}).
\]

The first and the last mean that if $V_t^k = 0$, the viral load of the $k$-th individual will remain equal to zero if no interactions occur. Besides, asking $b_1, b_2$ to be bounded means we do not want the derivatives of $V_t$ and $I_t$ to be infinite, whereas $\sigma$ bounded means we do not want to see huge oscillations. Furthermore, we assume $h \in C_b(\mathbb{R})$ and
\[
h(0) = 0, \qquad h(x) \le h(y) \text{ if } x \le y, \qquad h(x) = 0 \text{ if } x \le \varepsilon_0.
\]

In this way a healthy individual does not change the viral load of the individuals with which he is interacting, and the change in the viral load is more considerable if the viral load is high. Moreover, individuals with too low a viral load cannot be infectious. The function $g$ inevitably depends on $\lvert Y^k - Y^j\rvert$, and one can think that an interaction occurs only if the distance is lower than a certain threshold. Hence the function $g \in C_c(\mathbb{R}^2\times\mathbb{R}^2)$ has compact support.

So we are considering the following system with $3N$ variables:
\[
\begin{cases}
dY_t^k = 0 \\
dV_t^k = b_1(V_t^k, I_t^k)\,dt + \frac{1}{N}\sum_{j\neq k} g(Y^k, Y^j)\,h(V_t^j)\,dt + \sigma(V_t^k)\,dW_t^k \\
dI_t^k = b_2(V_t^k, I_t^k)\,dt
\end{cases}
\tag{3.1}
\]

From now on we will refer to $d$ with a little abuse of notation; the exact value of $d$ will be clear by looking at (3.1) and recalling the previous assumptions. The vectorial expression of the system can be written as
\[
dZ_t = \tilde{b}(Z_t)\,dt + \tilde{\sigma}(Z_t)\,dW_t,
\]

where $\tilde{b}$ is a vector field on $\mathbb{R}^{4N}$, $\tilde{\sigma}$ is a matrix-valued field with values in $\mathbb{R}^{4N\times 4N}$, and $W_t$ is a Brownian motion with independent components. By the stochastic version of the Cauchy-Lipschitz theorem, under the assumptions
\[
(1)\ \lvert\tilde{b}(z_1) - \tilde{b}(z_2)\rvert \le L\,\lvert z_1 - z_2\rvert, \qquad
(2)\ \lvert\tilde{\sigma}(z_1) - \tilde{\sigma}(z_2)\rvert \le L\,\lvert z_1 - z_2\rvert,
\]
\[
(3)\ \lvert\tilde{b}(z)\rvert \le C(1 + \lvert z\rvert), \qquad
(4)\ \lvert\tilde{\sigma}(z)\rvert \le C(1 + \lvert z\rvert),
\]

for every initial condition $Z_0$ that is $\mathcal{F}_0$-measurable, there is existence and uniqueness of a strong solution $Z_t$, adapted and with continuous trajectories. Thus, for every $i$ the process $(Y^i, V_t^i, I_t^i)$ is the $i$-th component of that solution; it is adapted, and the same holds for the process $V_t^i$. Moreover, $(Y^i, V_t^i, I_t^i)$ and its components have continuous trajectories. To satisfy the assumptions of the Cauchy-Lipschitz theorem it is sufficient to assume the following properties:

1. $b_1, b_2, g \in C_b^1(\mathbb{R}^d)$, $h \in C_b(\mathbb{R})$ and $\sigma \in C_b^1(\mathbb{R})$, to have (1) and (2);

2. $\lvert b_1(v)\rvert \le C(1+\lvert v\rvert)$, $\lvert b_2(v)\rvert \le C(1+\lvert v\rvert)$, $\lvert g(v)\rvert \le C(1+\lvert v\rvert)$ and $\lvert\sigma(x)\rvert \le C(1+\lvert x\rvert)$, to have (3) and (4).


The easiest way to prove it is by considering the infinity norm both on matrices and on vectors, and noticing that
\[
\tilde{b}(Z) =
\begin{pmatrix}
0 \\ 0 \\
b_1(V^1, I^1) + \frac{1}{N}\sum_{j\neq 1} g(Y^1, Y^j)\,h(V^j) \\
b_2(V^1, I^1) \\
0 \\ 0 \\
b_1(V^2, I^2) + \frac{1}{N}\sum_{j\neq 2} g(Y^2, Y^j)\,h(V^j) \\
b_2(V^2, I^2) \\
\vdots
\end{pmatrix}
\]
and $\tilde{\sigma}$ is block diagonal, where the $i$-th block has only one nonzero component, $\sigma(V^i)$, in the entry corresponding to $V^i$:
\[
\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & \sigma(V^i) & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}.
\]
Notice that if $C_b^1(\mathbb{R}^d)$ denotes the space of bounded functions with bounded first derivative, assuming that every function belongs to that space makes each of the conditions (1), (2), (3), (4) satisfied. As said before, for modeling purposes we have to assume $g \in C_c^1(\mathbb{R}^d)$.

3.1 Equation for empirical measures

From now on, $C_b^{1,2}(\mathbb{R}^d)$ denotes the space of bounded functions with $\partial_i\varphi$, $\partial_v\varphi$, $\partial_{vv}^2\varphi$ bounded. Given $\varphi \in C_b^2(\mathbb{R}^2\times\mathbb{R}\times\mathbb{R}; \mathbb{R})$ we write its gradient as $\nabla\varphi = (\nabla_y\varphi, \partial_v\varphi, \partial_i\varphi)$.

We want to determine $d\varphi(Y^{i,N}, V_t^{i,N}, I_t^{i,N})$ using the Itô formula. Let us denote $X_t^k = (Y^{k,N}, V_t^{k,N}, I_t^{k,N})$ for a lighter reading; since $dY^{i,N} = 0$ we have
\[
d\varphi(X_t^{i,N}) = \partial_v\varphi(X_t^{i,N})\,dV_t^{i,N} + \partial_i\varphi(X_t^{i,N})\,dI_t^{i,N} + \frac{\sigma(V_t^{i,N})^2}{2}\,\partial_{vv}^2\varphi(X_t^{i,N})\,dt.
\]
Substituting the expressions of $dV_t^{i,N}$, $dI_t^{i,N}$ we obtain
\[
\begin{aligned}
d\varphi(X_t^{i,N}) = {} & \partial_v\varphi(X_t^{i,N})\Big(b_1(V_t^{i,N}, I_t^{i,N}) + \frac{1}{N}\sum_{j\neq i} g(Y^i, Y^j)\,h(V_t^{j,N})\Big)\,dt + \partial_v\varphi(X_t^{i,N})\,\sigma(V_t^{i,N})\,dW_t^i \\
& + \partial_i\varphi(X_t^{i,N})\,b_2(V_t^{i,N}, I_t^{i,N})\,dt + \frac{\sigma(V_t^{i,N})^2}{2}\,\partial_{vv}^2\varphi(X_t^{i,N})\,dt.
\end{aligned}
\]

We have already introduced the empirical measures $S_t^N = \frac{1}{N}\sum_{i=1}^N \delta_{X_t^{i,N}}$; thus let us consider the equality
\[
\big\langle S_t^N, \varphi \big\rangle = \frac{1}{N}\sum_{i=1}^N \varphi(X_t^{i,N}).
\]


Differentiating it we obtain
\[
\begin{aligned}
d\big\langle S_t^N, \varphi\big\rangle = \frac{1}{N}\sum_{i=1}^N d\varphi(X_t^{i,N})
= {} & \frac{1}{N}\sum_{i=1}^N \partial_v\varphi(X_t^{i,N})\Big(b_1(V_t^{i,N}, I_t^{i,N}) + \frac{1}{N}\sum_{j\neq i} g(Y^i, Y^j)\,h(V_t^{j,N})\Big)\,dt \\
& + \frac{1}{N}\sum_{i=1}^N \partial_v\varphi(X_t^{i,N})\,\sigma(V_t^{i,N})\,dW_t^i \\
& + \frac{1}{N}\sum_{i=1}^N \partial_i\varphi(X_t^{i,N})\,b_2(V_t^{i,N}, I_t^{i,N})\,dt + \frac{1}{N}\sum_{i=1}^N \frac{\sigma(V_t^{i,N})^2}{2}\,\partial_{vv}^2\varphi(X_t^{i,N})\,dt.
\end{aligned}
\]
Supposing $g(Y^{i,N}, Y^{i,N}) = 0$, the previous equation can be rewritten as
\[
d\big\langle S_t^N, \varphi\big\rangle = \big\langle \partial_v\varphi\, b_1, S_t^N\big\rangle\,dt + \big\langle \partial_i\varphi\, b_2, S_t^N\big\rangle\,dt + \big\langle S_t^N, \partial_v\varphi\,\big\langle S_t^N, g_1(y,\cdot,\cdot)\big\rangle\big\rangle\,dt + \Big\langle S_t^N, \frac{\sigma^2}{2}\,\partial_{vv}^2\varphi\Big\rangle\,dt + dM_t^{\varphi,N},
\]
where $g_1(y, w, z) = g(y, w)\,h(z)$,
\[
\big\langle S_t^N, \partial_v\varphi\,\big\langle S_t^N, g_1(y,\cdot,\cdot)\big\rangle\big\rangle = \int_{\mathbb{R}^d} \partial_v\varphi(y, v, i)\,\Big(\int_{\mathbb{R}^d} g_1(y, w, z)\,S_t^N(dw, dz, di')\Big)\,S_t^N(dy, dv, di)
\]
and
\[
M_t^{\varphi,N} = \frac{1}{N}\sum_{i=1}^N \int_0^t \partial_v\varphi(X_s^{i,N})\,\sigma(V_s^{i,N})\,dW_s^i. \tag{3.2}
\]

Remark 2. The stochastic Itô integrals $\int_0^t \partial_v\varphi(X_s^{i,N})\,\sigma(V_s^{i,N})\,dW_s^i$ and $\int_0^t \sigma(V_s^{i,N})\,dW_s^i$ are martingales because the integrands are in $M^2([0,T])$, by the boundedness assumptions and the adaptedness of $X_t^{i,N}$ and $V_t^{i,N}$; indeed an adapted process with continuous trajectories is progressively measurable.

3.2 Laws of empirical measures

The empirical measure is
\[
S_t^N(dy, dv, di) = \frac{1}{N}\sum_k \delta_{(Y^{k,N},\, V_t^{k,N},\, I_t^{k,N})}(dy, dv, di).
\]
For any fixed $t$ and $\omega$, $S_t^N \in Pr_1(\mathbb{R}^d)$, that is
\[
\int_{\mathbb{R}^d} \lvert x\rvert\,S_t^N(dx) = \frac{1}{N}\sum_{i=1}^N \lvert X_t^{i,N}\rvert < \infty,
\]
because the right-hand side is a finite sum of vectors in $\mathbb{R}^d$. Moreover, for given $\omega$ the function $S_\cdot^N$ belongs to $C([0,T]; Pr_1(\mathbb{R}^d))$, because $W_1(S_t^N, S_{t_0}^N) \to 0$ as $t \to t_0$, using that $X_t^{i,N}$ has continuous trajectories for every $i$.

In this section we are going to prove that the laws $Q^N$ of $S^N$ are relatively compact in $Pr\big(C([0,T], Pr_1(\mathbb{R}^d))\big)$ with respect to the weak convergence. We have to make the following assumptions:


(a) $X_0^{i,N}$ are $\mathcal{F}_0$-measurable;

(b) $\sup_{i,N} E\big[\lvert Y^{i,N}\rvert\big] < \infty$, $\sup_{i,N} E\big[\lvert V_0^{i,N}\rvert\big] < \infty$, $\sup_{i,N} E\big[\lvert I_0^{i,N}\rvert\big] < \infty$;

(c) $X_0^{i,N} = X_0^i$ i.i.d. with law $\mu_0 \in Pr_1(\mathbb{R}^d)$.

The last one guarantees that $\langle S_0^N, \varphi\rangle \to \langle\mu_0, \varphi\rangle$ in probability for every $\varphi \in C_b(\mathbb{R}^d)$. Indeed, by the weak law of large numbers, the sample average of a sequence of i.i.d. random variables converges in probability to their mean. For this reason
\[
\frac{1}{N}\sum_{i=1}^N \varphi(X_0^i) = \big\langle S_0^N, \varphi\big\rangle \xrightarrow{P} E\big[\varphi(X_0^i)\big] = \int_{\mathbb{R}^d} \varphi\,d\mu_0 = \langle\mu_0, \varphi\rangle.
\]
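This law-of-large-numbers step is easy to check numerically. A sketch with an assumed $\mu_0$ (standard Gaussian) and $\varphi = \cos$, for which $\langle\mu_0, \varphi\rangle = e^{-1/2}$ is known in closed form; these specific choices are for illustration only.

```python
import numpy as np

# <S_0^N, phi> = (1/N) sum_i phi(X_0^i) against <mu_0, phi>,
# with mu_0 = N(0, 1) and phi = cos, so <mu_0, phi> = E[cos X] = exp(-1/2).
rng = np.random.default_rng(42)
phi = np.cos
target = np.exp(-0.5)

errors = []
for N in (10**2, 10**4, 10**6):
    sample = rng.standard_normal(N)
    errors.append(abs(phi(sample).mean() - target))
```

The error typically shrinks like $N^{-1/2}$, consistent with the central limit theorem refinement of the law of large numbers.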

Let us recall a remarkable result that relates the maximum of a local martingale to its quadratic variation.

Proposition 4 (Burkholder-Davis-Gundy). For any $1 \le p < \infty$ there exist positive constants $c_p, C_p$ such that for every local martingale $X$ with $X_0 = 0$ and every stopping time $\tau$ it holds that
\[
c_p\,E\big[[X]_\tau^{p/2}\big] \le E\Big[\sup_{t\le\tau} \lvert X_t\rvert^p\Big] \le C_p\,E\big[[X]_\tau^{p/2}\big].
\]

Theorem 3. $\{Q^N\}_{N\in\mathbb{N}}$ is relatively compact in $Pr\big(C([0,T], Pr_1(\mathbb{R}^d))\big)$.

Proof. We want to apply Proposition 3 to the empirical measures $S_t^N$. To do this we have to verify its assumptions; with a little abuse of notation, we will not rename new constants.

Step 1. To prove that $E\big[\sup_{t\in[0,T]} \int_{\mathbb{R}^d} \lvert x\rvert\,S_t^N(dx)\big] \le C$, notice that
\[
E\Big[\sup_{t\in[0,T]} \int_{\mathbb{R}^d} \lvert x\rvert\,S_t^N(dx)\Big] = E\Big[\sup_{t\in[0,T]} \frac{1}{N}\sum_{i=1}^N \lvert X_t^{i,N}\rvert\Big] \le \frac{1}{N}\sum_{i=1}^N E\Big[\sup_{t\in[0,T]} \lvert X_t^{i,N}\rvert\Big] \le C.
\]
To make the last inequality hold we need to show that $E\big[\sup_{t\in[0,T]} \lvert X_t^{i,N}\rvert\big] < C$.

Step 2. Using the inequality $\lvert x\rvert \le \lvert y\rvert + \lvert v\rvert + \lvert i\rvert$, we have to show that
\[
E\big[\lvert Y^{i,N}\rvert\big] + E\Big[\sup_{t\in[0,T]} \lvert V_t^{i,N}\rvert\Big] + E\Big[\sup_{t\in[0,T]} \lvert I_t^{i,N}\rvert\Big] \le C.
\]
For the position we have assumption (b). For the viral loads we have
\[
\lvert V_t^{k,N}\rvert \le \lvert V_0^{k,N}\rvert + \int_0^t \lvert b_1(V_s^{k,N}, I_s^{k,N})\rvert\,ds + \frac{1}{N}\sum_{j\neq k} \int_0^t \lvert g(Y^k, Y^j)\rvert\,\lvert h(V_s^{j,N})\rvert\,ds + \Big\lvert \int_0^t \sigma(V_s^{k,N})\,dW_s^k \Big\rvert \le \lvert V_0^{k,N}\rvert + \lVert b_1\rVert_\infty T + \lVert g\rVert_\infty \lVert h\rVert_\infty T + \Big\lvert \int_0^t \sigma(V_s^{k,N})\,dW_s^k \Big\rvert,
\]


hence
\[
E\Big[\sup_{t\in[0,T]} \lvert V_t^{i,N}\rvert\Big] \le C + E\Big[\sup_{t\in[0,T]} \Big\lvert \int_0^t \sigma(V_s^{k,N})\,dW_s^k \Big\rvert\Big] \le C.
\]
We have to estimate the second addendum to make the previous inequality hold true; using Proposition 4 with $\tau \equiv T$,
\[
E\Big[\sup_{t\in[0,T]} \Big\lvert \int_0^t \sigma(V_s^{k,N})\,dW_s^k \Big\rvert\Big] \le C\,E\Big[\Big(\int_0^T \sigma(V_s^{k,N})^2\,ds\Big)^{1/2}\Big] \le C \qquad (\lVert\sigma\rVert_\infty < \infty).
\]
Using assumption (b) and the boundedness of $b_2$ we similarly get
\[
E\Big[\sup_{t\in[0,T]} \lvert I_t^{i,N}\rvert\Big] \le C + \lVert b_2\rVert_\infty T \le C.
\]

Step 3. Moreover, we have to prove that $E\big[W_1(S_t^N, S_s^N)^p\big] \le C\,\lvert t-s\rvert^{1+\beta}$ for some $p > 1$ and $\beta > 0$. We first estimate $\lvert\langle S_t^N, \varphi\rangle - \langle S_s^N, \varphi\rangle\rvert$ for $\varphi \in C(\mathbb{R}^d)$ with Lipschitz constant $\operatorname{Lip}(\varphi) \le 1$. We have
\[
\big\lvert \langle S_t^N, \varphi\rangle - \langle S_s^N, \varphi\rangle \big\rvert \le \frac{1}{N}\sum_{i=1}^N \big\lvert \varphi(X_t^{i,N}) - \varphi(X_s^{i,N}) \big\rvert \le \frac{1}{N}\sum_{i=1}^N \lvert Y_t^{i,N} - Y_s^{i,N}\rvert + \frac{1}{N}\sum_{i=1}^N \lvert V_t^{i,N} - V_s^{i,N}\rvert + \frac{1}{N}\sum_{i=1}^N \lvert I_t^{i,N} - I_s^{i,N}\rvert.
\]
The first sum is equal to zero because the individuals are still, so $Y_t^{i,N} = Y_s^{i,N}$; hence, recalling the definition of the Wasserstein metric, we have
\[
W_1(S_t^N, S_s^N) \le \frac{1}{N}\sum_{i=1}^N \lvert V_t^{i,N} - V_s^{i,N}\rvert + \frac{1}{N}\sum_{i=1}^N \lvert I_t^{i,N} - I_s^{i,N}\rvert. \tag{3.3}
\]
As before we estimate the two terms separately. For the first, from the SDE we have
\[
\lvert V_t^{i,N} - V_s^{i,N}\rvert \le \int_s^t \lvert b_1(V_r^{i,N}, I_r^{i,N})\rvert\,dr + \frac{1}{N}\sum_{j=1}^N \int_s^t \lvert g(Y^{i,N}, Y^{j,N})\rvert\,\lvert h(V_r^{j,N})\rvert\,dr + \Big\lvert \int_s^t \sigma(V_r^{i,N})\,dW_r^i \Big\rvert.
\]
Hence, raising to the power $p$ and taking expectations, we have
\[
E\big[\lvert V_t^{i,N} - V_s^{i,N}\rvert^p\big] \le C\,\lvert t-s\rvert^p + C\,E\Big[\Big\lvert \int_s^t \sigma(V_r^{i,N})\,dW_r^i \Big\rvert^p\Big] \le C\,\lvert t-s\rvert^{p/2},
\]


in which we use the BDG inequality, noticing that $\int_s^t \sigma(V_r^{i,N})\,dW_r^i$, for fixed $s$ and varying $t \ge s$, is again a martingale, so that
\[
E\Big[\Big\lvert \int_s^t \sigma(V_r^{i,N})\,dW_r^i \Big\rvert^p\Big] \le C\,E\Big[\Big(\int_s^t \sigma(V_r^{i,N})^2\,dr\Big)^{p/2}\Big] \le C\,\lVert\sigma\rVert_\infty^p\,\lvert t-s\rvert^{p/2}.
\]
For the other term it is easy to see that
\[
E\big[\lvert I_t^{i,N} - I_s^{i,N}\rvert^p\big] \le C\,\lvert t-s\rvert^{p/2}.
\]
Hence, raising (3.3) to the power $p$, by Hölder's inequality we have
\[
W_1(S_t^N, S_s^N)^p \le \frac{C}{N}\sum_{i=1}^N \lvert V_t^{i,N} - V_s^{i,N}\rvert^p + \frac{C}{N}\sum_{i=1}^N \lvert I_t^{i,N} - I_s^{i,N}\rvert^p,
\]
and finally, taking expectations, we obtain
\[
E\big[W_1(S_t^N, S_s^N)^p\big] \le \frac{C}{N}\sum_{i=1}^N E\big[\lvert V_t^{i,N} - V_s^{i,N}\rvert^p\big] + \frac{C}{N}\sum_{i=1}^N E\big[\lvert I_t^{i,N} - I_s^{i,N}\rvert^p\big] \le C\,\lvert t-s\rvert^{p/2}.
\]
Choosing $p > 2$ and setting $\beta = \frac{p}{2} - 1 > 0$ completes the proof.
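The same-index coupling used in (3.3) is just one admissible coupling, so it dominates the true Wasserstein-1 distance. In one dimension the exact $W_1$ between two empirical measures with the same number of atoms is computable by sorting, which makes the domination easy to observe numerically. A sketch (the Gaussian perturbation model below is an assumption for illustration):

```python
import numpy as np

def w1_empirical_1d(xs, ys):
    """Exact W1 distance between (1/N) sum delta_{x_i} and (1/N) sum delta_{y_i}
    on the real line: the optimal coupling matches sorted atoms."""
    return np.abs(np.sort(xs) - np.sort(ys)).mean()

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=500)
y = x + rng.normal(0.0, 0.1, size=500)   # a small perturbation of the same atoms

exact = w1_empirical_1d(x, y)            # optimal coupling
same_index = np.abs(x - y).mean()        # the same-index coupling, as in (3.3)
```

One always has `exact <= same_index`, which is exactly the mechanism behind the increment estimate in Step 3.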

3.3 The PDE limit

In the previous section we proved that $\{Q^N\}_{N\in\mathbb{N}}$ is relatively compact, hence every subsequence has a sub-subsequence $Q^{N_k}$ that converges weakly to a probability measure on $C([0,T], Pr_1(\mathbb{R}^d))$. Notice that we avoid double-indexing a sub-subsequence and its limit for a lighter reading. A priori, the limit in law of $S_t^{N_k}$ is a limit measure that can depend both on randomness and on time.

Suppose for a while that $S_t^{N_k}$ converges weakly to $\mu_t \in C([0,T]; Pr_1(\mathbb{R}^d))$, namely $\langle S_t^{N_k}, \varphi\rangle \to \langle\mu_t, \varphi\rangle$ for every $\varphi \in C_b(\mathbb{R}^d)$, and that the martingale term goes to zero as $N \to \infty$. Taking $\varphi \in C_b^{1,2}(\mathbb{R}^d)$, a passage to the limit suggests that $\mu_t$ satisfies the limit equation of the one satisfied by $S_t^N$. We call the equation
\[
\frac{\partial \mu_t}{\partial t} = \frac{1}{2}\frac{\partial^2(\sigma^2\mu_t)}{\partial v^2} - \frac{\partial(b_2\mu_t)}{\partial i} - \frac{\partial(b(\mu_t)\mu_t)}{\partial v} \tag{3.4}
\]
the PDE limit, and we give the following:

Definition 4. A measure-valued solution of (3.4) is a family $\{\mu_t\}_{t\in[0,T]}$ in $C([0,T], Pr_1(\mathbb{R}^d))$ with initial condition $\mu_0 \in Pr_1(\mathbb{R}^d)$ that satisfies
\[
\langle\mu_t, \varphi\rangle = \langle\mu_0, \varphi\rangle + \int_0^t \langle\mu_s, b(\mu_s)\,\partial_v\varphi\rangle\,ds + \int_0^t \langle\mu_s, b_2\,\partial_i\varphi\rangle\,ds + \int_0^t \Big\langle\mu_s, \frac{\sigma^2}{2}\,\partial_{vv}^2\varphi\Big\rangle\,ds \tag{3.5}
\]
for every $\varphi \in C_b^{1,2}(\mathbb{R}^d)$, where $b(\mu_t)(y, v, i) = b_1(v, i) + \int_{\mathbb{R}^d} g_1(y, w, z)\,\mu_t(dw, dz, di')$.

Thus, at least intuitively, if $S_t^N$ converges weakly to a limit measure, that measure is a measure-valued solution of the PDE limit.

The empirical measure is a discrete random measure, hence it cannot have a density with respect to Lebesgue measure, whereas $\mu_t$ may admit a density. If a measure-valued solution has a density $\rho_t$, the PDE limit can be understood in a classical way: $\rho_t(x)$ would be a weak solution of that partial differential equation, with test functions in $C_c^{1,2}(\mathbb{R}^d)$. However, in this work we will not study under which assumptions $\mu_t$ has a density.

Now we are going to prove that equation (3.5) is the equation satisfied by the limit in law of sub-subsequences of $S_t^N$.


Lemma 3. The martingale (3.2) satisfies
\[
E\Big[\sup_{t\in[0,T]} \lvert M_t^{\varphi,N}\rvert^2\Big] \xrightarrow{N\to\infty} 0.
\]
In particular it converges to zero in probability, uniformly in $t$.

Proof. The process $M_t^{\varphi,N}$ is a sum of martingales multiplied by $1/N$, hence it is a martingale itself. By Doob's inequality we have
\[
E\Big[\sup_{t\in[0,T]} \lvert M_t^{\varphi,N}\rvert^2\Big] \le C\,E\big[\lvert M_T^{\varphi,N}\rvert^2\big].
\]
Notice that we can apply Doob's inequality because of the boundedness of $\sigma$. The Brownian motions $W^i, W^j$ are independent for every $i \neq j$, hence the addenda in the expression of the martingale are uncorrelated:
\[
E\Big[\Big(\int_0^t \partial_v\varphi(X_s^{i,N})\,\sigma(V_s^{i,N})\,dW_s^i\Big)\Big(\int_0^t \partial_v\varphi(X_s^{j,N})\,\sigma(V_s^{j,N})\,dW_s^j\Big)\Big] = 0.
\]
Thus,
\[
E\big[\lvert M_T^{\varphi,N}\rvert^2\big] = \frac{1}{N^2}\sum_{i=1}^N E\Big[\Big(\int_0^T \partial_v\varphi(X_s^{i,N})\,\sigma(V_s^{i,N})\,dW_s^i\Big)^2\Big] = \frac{1}{N^2}\sum_{i=1}^N E\Big[\int_0^T \partial_v\varphi(X_s^{i,N})^2\,\sigma(V_s^{i,N})^2\,ds\Big] \le \frac{CT}{N} \to 0,
\]
where we use the Itô isometry and the boundedness of $\partial_v\varphi$ and $\sigma$. To prove the second statement it is sufficient to use the Chebyshev inequality.

Theorem 4. The laws $Q^N$ of the empirical processes $S_\cdot^N$ have the following property: every weakly convergent sub-subsequence $Q^{N_k}$ has limit concentrated on the set of measure-valued solutions of the PDE limit.

Proof. Given $\mu_0$ and a test function $\varphi$, let us consider the functional
\[
\Psi_\varphi(\mu_\cdot) = \sup_{t\in[0,T]} \Big\lvert \langle\mu_t, \varphi\rangle - \langle\mu_0, \varphi\rangle - \int_0^t \Big\langle\mu_s,\; b(\mu_s)\,\partial_v\varphi + b_2\,\partial_i\varphi + \frac{\sigma^2}{2}\,\partial_{vv}^2\varphi \Big\rangle\,ds \Big\rvert.
\]
It is continuous with respect to the topology of $C([0,T]; Pr_1(\mathbb{R}^d))$. To prove it we take a sequence $\mu_t^n \to \mu_t$ in Wasserstein distance uniformly in $t$, hence it converges also weakly, uniformly in $t$. This means that $\langle\mu_t^n, \varphi\rangle \to \langle\mu_t, \varphi\rangle$. For the linear terms we have that
\[
\Big\langle\mu_t^n, b_2\,\partial_i\varphi + \frac{\sigma^2}{2}\,\partial_{vv}^2\varphi\Big\rangle \to \Big\langle\mu_t, b_2\,\partial_i\varphi + \frac{\sigma^2}{2}\,\partial_{vv}^2\varphi\Big\rangle
\]
because $\varphi \in C_b^{1,2}(\mathbb{R}^d)$ and $b_2, \sigma$ are bounded and continuous. By dominated convergence also the time integral converges, because the integrand is bounded and we integrate on $[0,T]$. For the same reason we have the convergence of the linear part of the term in which $b(\mu_t)$ appears. Finally, we consider
\[
\langle\mu_s^n, \langle\mu_s^n, g_1(y,\cdot,\cdot)\rangle\,\partial_v\varphi\rangle - \langle\mu_s, \langle\mu_s, g_1(y,\cdot,\cdot)\rangle\,\partial_v\varphi\rangle,
\]
which we split by adding and subtracting $\langle\mu_s^n, \langle\mu_s, g_1(y,\cdot,\cdot)\rangle\,\partial_v\varphi\rangle$, obtaining a first term $\langle\mu_s^n, \langle\mu_s^n - \mu_s, g_1(y,\cdot,\cdot)\rangle\,\partial_v\varphi\rangle$ and a second, linear term.


The second term goes to zero for reasons similar to the ones given before. For the first, the sequence $\mu_s^n$ is tight, hence for every $\varepsilon > 0$ there exists a compact set $K$ such that $\mu_s^n(K^c) < \varepsilon$ and
\[
\lvert \langle\mu_s^n, \langle\mu_s^n - \mu_s, g_1\rangle\,\partial_v\varphi\rangle \rvert \le C\varepsilon + \lVert\partial_v\varphi\rVert_\infty \int_K \lvert g_n(s, y) - g(s, y)\rvert\,\mu_s^n(dx),
\]
where $g_n(s, y) = \langle\mu_s^n, g_1(y,\cdot,\cdot)\rangle$ and $g(s, y) = \langle\mu_s, g_1(y,\cdot,\cdot)\rangle$. The sequence $g_n$ converges uniformly on $K$: it converges pointwise and it is equi-bounded and equi-Lipschitz, because $g_1$ is Lipschitz; hence, by the Ascoli-Arzelà theorem, $g(s, \cdot)$ is a uniform limit. Thus the whole term converges to zero, because $\mu_t^n$ converges weakly and $g_n$ uniformly.

If $Q^{N_k}$ converges weakly, by the Portmanteau theorem we have
\[
Q(\Psi_\varphi(\mu_\cdot) > \delta) \le \liminf_{k\to\infty} Q^{N_k}(\Psi_\varphi(\mu_\cdot) > \delta).
\]
But
\[
Q^{N_k}(\Psi_\varphi(\mu_\cdot) > \delta) = P(\Psi_\varphi(S_\cdot^{N_k}) > \delta) = P\Big(\sup_{t\in[0,T]} \Big\lvert \langle S_t^{N_k}, \varphi\rangle - \langle\mu_0, \varphi\rangle - \int_0^t \Big\langle S_s^{N_k},\; b(S_s^{N_k})\,\partial_v\varphi + b_2\,\partial_i\varphi + \frac{\sigma^2}{2}\,\partial_{vv}^2\varphi \Big\rangle\,ds \Big\rvert > \delta\Big).
\]
From the equation of the empirical measure we can substitute the first two terms with $M_t^{\varphi,N_k} + \langle S_0^{N_k}, \varphi\rangle$, obtaining
\[
Q^{N_k}(\Psi_\varphi(\mu_\cdot) > \delta) = P\Big(\sup_{t\in[0,T]} \big\lvert \langle S_0^{N_k}, \varphi\rangle - \langle\mu_0, \varphi\rangle + M_t^{\varphi,N_k} \big\rvert > \delta\Big) \le P\big(\lvert\langle S_0^{N_k} - \mu_0, \varphi\rangle\rvert > \delta/2\big) + P\Big(\sup_{t\in[0,T]} \lvert M_t^{\varphi,N_k}\rvert > \delta/2\Big).
\]
Hence, by the assumption on $S_0^N$ and by Lemma 3, we obtain $Q(\Psi_\varphi(\mu_\cdot) > \delta) = 0$ for every $\delta > 0$. This means
\[
Q(\Psi_\varphi(\mu_\cdot) = 0) = 1,
\]
which implies that $Q$ is supported on the set of measure-valued solutions defined in (3.5), by an argument of countable density of test functions.

3.4 Uniqueness of measure-valued solutions for the PDE limit

Some long and technical proofs in this final section are omitted or given as sketches, because giving the full proofs would take us beyond the central aim of this work. The first theorem concerns existence and uniqueness for a McKean-Vlasov equation, with the same assumptions as usual on the functions $b_1, b_2, g, h, \sigma$.

With a classical fixed-point argument on the space $C([0,T]; Pr_1(\mathbb{R}^d))$, which we omit, the following result can be proved:


Theorem 5. If the coefficients are Lipschitz for every $t$, there is a unique solution $(X, \mu)$ of the following problem:
\[
\begin{cases}
dY_t = 0 \\
dV_t = \big(b_1(V_t, I_t) + \int g(Y_t, x)\,h(v)\,\mu_t(dx, dv, di)\big)\,dt + \sigma(V_t)\,dB_t \\
dI_t = b_2(V_t, I_t)\,dt \\
\mu_t = \text{Law of } X_t
\end{cases}
\tag{3.6}
\]
where $X_t = (Y, V_t, I_t)$ and $X_0$ is $\mathcal{F}_0$-measurable and integrable.

We call the previous SDE the SDE limit, because we will prove that its marginals satisfy the PDE limit.

Lemma 4. Given $Z_t$ in the form of the SDE limit, with a fixed law $\nu_t$ in its expression, its marginals are a measure-valued solution of the PDE
\[
\frac{\partial \eta_t}{\partial t} = \frac{1}{2}\frac{\partial^2(\sigma^2\eta_t)}{\partial v^2} - \frac{\partial(b_2\eta_t)}{\partial i} - \frac{\partial(b_t\eta_t)}{\partial v},
\]
where $b_t(x, v, i) = b_1(v, i) + \int g(x, x')\,h(v)\,\nu_t(dx', dv', di')$.

Proof. After applying the Itô formula to $\varphi(Z_t)$, the thesis follows from the equation satisfied by $E[\varphi(Z_t)] = \int_{\mathbb{R}^d} \varphi(z)\,d\gamma_t(z)$, with $\gamma_t$ the marginals of $Z_t$.

Theorem 6. The PDE limit has a unique measure-valued solution with test functions $\varphi \in C_b^{1,2}([0,T]\times\mathbb{R}^d)$, with bounded first derivative in time.

Proof. We have proved that limits of sub-subsequences of $Q^N$ are concentrated on measures $\mu_t$ solving
\[
\langle\mu_t, \varphi\rangle = \langle\mu_0, \varphi\rangle + \int_0^t \langle\mu_s, b(\mu_s)\,\partial_v\varphi\rangle\,ds + \int_0^t \langle\mu_s, b_2\,\partial_i\varphi\rangle\,ds + \int_0^t \Big\langle\mu_s, \frac{\sigma^2}{2}\,\partial_{vv}^2\varphi\Big\rangle\,ds \tag{3.7}
\]
for $\varphi \in C_b^{1,2}(\mathbb{R}^d)$, the space of bounded functions with $\partial_i\varphi$, $\partial_v\varphi$, $\partial_{vv}^2\varphi$ bounded.

The first step is to deal with measure-valued solutions of the linear case of the above equation. This means we treat the nonlinear term $b(\mu_t)$ as a function depending only on time and not on $\mu_t$. This allows us to study
\[
\langle\mu_t, \varphi\rangle = \langle\mu_0, \varphi\rangle + \int_0^t \langle\mu_s, b_s\,\partial_v\varphi\rangle\,ds + \int_0^t \langle\mu_s, b_2\,\partial_i\varphi\rangle\,ds + \int_0^t \Big\langle\mu_s, \frac{\sigma^2}{2}\,\partial_{vv}^2\varphi\Big\rangle\,ds. \tag{3.8}
\]
Consider an arbitrary $\psi \in C_b^2(\mathbb{R}^d)$, a given $T_0 \in [0,T]$, and coefficients with the following properties: for every $t$ they belong to $C_b^3(\mathbb{R}^d)$ with all derivatives bounded, and they are continuous jointly in $(t, x)$. Consider the backward Kolmogorov equation
\[
\frac{\partial u_t}{\partial t} + \frac{\sigma^2}{2}\frac{\partial^2 u_t}{\partial v^2} + b_t\frac{\partial u_t}{\partial v} + b_2\frac{\partial u_t}{\partial i} = 0, \qquad u_{T_0} = \psi.
\]
Following chapter 3 of [10] or chapter 9 of [9], it can be proved that the backward Kolmogorov equation has a solution $u \in C_b^2(\mathbb{R}^d)$ with bounded first derivative in time. Hence we have to strengthen the assumptions on our coefficients to guarantee that the BKE has a sufficiently regular solution, so as to apply the rules of calculus required in the next steps.

The second step is to consider, in the definition of measure-valued solution, instead of


test functions in $C_b^{1,2}(\mathbb{R}^d)$, every time-dependent function with the same regularity as the solutions of the previous equation, that is at least $C_b^{1,2}(\mathbb{R}^d)$. To obtain this rigorously, we should have applied the Itô formula to the quantity $\langle S_t^N, \varphi_t\rangle$ from the beginning. This has as a consequence another term in the equation satisfied by $\mu_t$; in particular we have to add the term $\langle\mu_t, \frac{\partial u_t}{\partial t}\rangle\,dt$. At this point we know that for every $0 \le t_0 \le t \le T_0$
\[
\langle\mu_t, u_t\rangle - \langle\mu_{t_0}, u_{t_0}\rangle = \int_{t_0}^t \Big\langle\mu_s,\; \partial_s u_s + b_s\,\partial_v u_s + b_2\,\partial_i u_s + \frac{\sigma^2}{2}\,\partial_{vv}^2 u_s \Big\rangle\,ds. \tag{3.9}
\]
Moreover, with the same argument we prove that limits of sub-subsequences of $Q^N$ are concentrated on measures $\mu_t$ satisfying (3.9). By the equation satisfied by $u$, the right-hand side is equal to zero; hence, choosing $t = T_0$ and $t_0 = 0$,
\[
\langle\mu_{T_0}, \psi\rangle = \langle\mu_0, u_0\rangle.
\]

Since $\psi$ is arbitrary, this identifies $\mu_{T_0}$ for every $T_0$; to prove it, it is sufficient to approximate indicator functions by decreasing sequences of functions with compact support and use the monotone convergence theorem. This means that in the linear case the equation has a unique solution. Now, let $\nu_t$ be a measure-valued solution of (3.5), again with test functions $\varphi \in C_b^{1,2}([0,T]\times\mathbb{R}^d)$, and consider the equation
\[
\frac{\partial \eta_t}{\partial t} = \frac{1}{2}\frac{\partial^2(\sigma^2\eta_t)}{\partial v^2} - \frac{\partial(b_2\eta_t)}{\partial i} - \frac{\partial(b_t\eta_t)}{\partial v} \tag{3.10}
\]

where $b_t = b(\nu_t)$. The function $b_t$ satisfies the previous assumptions of boundedness and joint continuity in $(t, x)$ for every derivative. Indeed, joint continuity in $(t, x)$ holds because for every $y_n \to y$ and $t_n \to t$, $g(y_n, w)h(v)$ converges uniformly to $g(y, w)h(v)$, since $g$ is Lipschitz, and $\nu_{t_n}$ converges weakly to $\nu_t$. Having fixed $\nu$, equation (3.10) is a linear case, hence it has a unique solution. However, we want to prove the uniqueness in the nonlinear case. Given a process $Z_t$ that satisfies the equation in (3.6), substituting $\mu_t$ with $\nu_t$ in the equation for $V_t$, we have that the marginal laws of $Z_t$ also satisfy (3.10), see Lemma 4. Because of the uniqueness of the solution of (3.10), we obtain that the marginals of $Z_t$ are equal to $\nu_t$. This means that $(Z, \nu)$ is another solution of the system (3.6), but that system has a unique solution. We conclude that $\nu_t$ is unique.

With the uniqueness of the measure-valued solution we have indirectly proved that the marginals of the SDE limit satisfy the PDE limit.

Remark 3. With the previous theorem we have found that, to prove uniqueness of the measure-valued solution of the PDE limit, we need more regularity on the functions $b_1, b_2, g$: namely, we want them in $C_b^3(\mathbb{R}^d)$.

Finally, we give the last result of this chapter.

Theorem 7. If $\mu_\cdot$ is the unique measure-valued solution of the PDE limit, the following results hold:

1. $\{Q^N\}_{N\in\mathbb{N}}$ converges weakly to $\delta_{\mu_\cdot}$;

2. $S_\cdot^N$ converges in probability to $\mu_\cdot$ in $C([0,T]; Pr_1(\mathbb{R}^d))$.

Proof. From Theorem 4 and Theorem 6 we have that every convergent subsequence of $\{Q^N\}$ converges to $\delta_{\mu_\cdot}$, and this means that the whole sequence $\{Q^N\}$ converges to the same limit; this proves the first statement. The first statement says that $S_\cdot^N$ converges in law to $\mu_\cdot$, which is a constant, and convergence in law to a constant implies convergence in probability with respect to the topology of $C([0,T]; Pr_1(\mathbb{R}^d))$, that is, for every $\varepsilon > 0$,
\[
\lim_{N\to\infty} P\Big(\sup_{t\in[0,T]} W_1(S_t^N, \mu_t) > \varepsilon\Big) = 0.
\]
