
Università degli Studi di Padova
Dipartimento di Matematica "Tullio Levi-Civita"
Corso di Laurea Magistrale in Matematica

Percolazione per insiemi di livello del Gaussian free field

Supervisors:
Prof. Giambattista Giacomin (Université Paris Diderot)
Prof. Paolo Dai Pra

Candidate:
Jacopo Borga (1130557)

23 February 2018


Introduction

The existence of a phase transition for level-set percolation of the discrete Gaussian free field (DGFF) on Z^d is a problem that has received much attention over the years; in particular, it was studied in the 1980s by J. Bricmont, J.L. Lebowitz and C. Maes (see [3]). They showed that in three dimensions the DGFF has a nontrivial percolation behaviour: the sites on which ϕ_x ≥ h percolate if and only if h < h∗, with 0 ≤ h∗ < ∞. Moreover, they generalized the lower bound for h∗ to any dimension d ≥ 3, i.e. h∗(d) ≥ 0, but they were not able to extend the proof of the existence of a nontrivial transition to any d ≥ 4. Recently, P.-F. Rodriguez and A.-S. Sznitman (see [11]) proved that h∗(d) is finite for all d ≥ 3, as a corollary of a more general result concerning the stretched-exponential decay of the connectivity function when h > h∗∗, where h∗∗ is a second critical parameter satisfying h∗∗ ≥ h∗. In this thesis we try to become acquainted with some of the techniques developed in this area, notably those used to control the large excursions of these fields and to understand the entropic repulsion phenomenon, and to comprehend the results on level-set percolation in dimension three and higher. In particular, the main goal is to present the two works of Bricmont, Lebowitz and Maes and of Rodriguez and Sznitman. Finally, in the last two chapters we also present a new and original (but incomplete) generalization of the proof, due to J. Bricmont, J.L. Lebowitz and C. Maes, of the existence of a nontrivial phase transition to any d ≥ 3.


Introduzione

The existence of a phase transition for the level sets of the discrete Gaussian free field on Z^d (DGFF) is a problem that has received much attention in past years; in particular, it was studied around the 1980s by J. Bricmont, J.L. Lebowitz and C. Maes (see [3]). In that article they proved that the DGFF in dimension 3 exhibits a nontrivial phase transition: the sites at which ϕ_x ≥ h percolate if and only if h < h∗, with 0 ≤ h∗ < ∞. Moreover, they generalized the lower bound for h∗ to every dimension d ≥ 3, i.e. h∗(d) ≥ 0, but they were not able to extend the proof of the existence of a nontrivial phase transition to every d ≥ 4. Recently, P.-F. Rodriguez and A.-S. Sznitman (see [11]) proved that h∗(d) is finite for every dimension d ≥ 3, as a corollary of a more general result concerning the stretched-exponential decay of the connectivity function when h > h∗∗, where h∗∗ is a second critical parameter satisfying h∗∗ ≥ h∗. In this thesis the author has tried to become familiar with some of the techniques developed in this field, in particular with controlling the large excursions of these fields, understanding the entropic repulsion phenomenon, and comprehending the results concerning level-set percolation in dimension three or higher. In particular, the main objective is to present the two works of Bricmont, Lebowitz and Maes and of Rodriguez and Sznitman. Finally, in the last two chapters, we present a new and original (but incomplete) generalization of the proof (due to Bricmont, Lebowitz and Maes) of the existence of a phase transition to every d ≥ 3.

Acknowledgements

First of all, I would like to thank Prof. Giambattista Giacomin for allowing me to carry out this work during my stay in Paris, and above all for passing on to me his passion for Mathematics. Another huge thank-you goes to Prof. Paolo Dai Pra for following and supporting me throughout all these years of study.


Contents

Notation

1 The DGFF
 1.1 The construction of the model
 1.2 Some heuristic interpretations
 1.3 The random walk representation for the massless model
  1.3.1 Harmonic functions and the Discrete Green Identities
  1.3.2 The random walk representation
 1.4 The infinite volume extension
 1.5 The Gibbs-Markov property

2 Some useful general tools
 2.1 Extended version of Markov's inequality for non-negative increasing functions
 2.2 BTIS-inequality

3 Some useful tools for the DGFF
 3.1 Maximum for the Lattice Gaussian Free Field
 3.2 Asymptotics for the Green function
 3.3 Potential theory
 3.4 Recurrent and transient sets: Wiener's Test
 3.5 Notation for the DGFF
 3.6 Density and uniqueness for the infinite cluster

4 The two main results
 4.1 Purpose of the thesis
 4.2 The proof of J. Bricmont, J.L. Lebowitz and C. Maes
  4.2.1 Definitions and notation
  4.2.2 The technical lemmas
  4.2.3 Conclusion of the proof of the Theorem
 4.3 The proof of P.-F. Rodriguez and A.-S. Sznitman
  4.3.1 Renormalization scheme
  4.3.2 Crossing events
  4.3.3 The structure of the proof
  4.3.4 The main Theorem

5 A generalization of the BLM proof
 5.1 Finite energy property for the DGFF
 5.2 The setup
 5.3 The technical lemmas
 5.4 The conclusion of the proof

6 Open problems


Notation

We introduce some notation to be used in the sequel. First of all, we explain our convention regarding constants: we denote by c, c0, . . . positive constants (different constants may share the same name). Numbered constants c0, c1, . . . are defined at the place they first occur within the text and remain fixed from then on until the end of the section. In Chapters 1, 3 and 4, constants will implicitly depend on the dimension d. Throughout the entire text, dependence of constants on additional parameters will appear in the notation.

On Z^d, we respectively denote by |·| and |·|_∞ the Euclidean and ℓ∞-norms. We write i ∼ j for a pair of vertices i and j such that |i − j| = 1. Moreover, for any x ∈ Z^d and r ≥ 0, we let B(x, r) = {y ∈ Z^d : |y − x|_∞ ≤ r} and S(x, r) = {y ∈ Z^d : |y − x|_∞ = r} stand for the ℓ∞-ball and the ℓ∞-sphere of radius r centered at x. Given subsets K and U of Z^d, K^c = Z^d \ K stands for the complement of K in Z^d, |K| for the cardinality of K, K ⊂⊂ Z^d means that |K| < ∞, and d(K, U) = inf{|x − y|_∞ : x ∈ K, y ∈ U} denotes the ℓ∞-distance between K and U. If K = {x}, we simply write d(x, U). Moreover, we define the inner boundary of K as the set ∂_i K = {x ∈ K : ∃ y ∈ K^c, |x − y| = 1}, and the outer boundary of K as ∂K = ∂_i(K^c). We also introduce the diameter of any subset K ⊂ Z^d, diam(K), as its ℓ∞-diameter, i.e. diam(K) = sup{|x − y|_∞ : x, y ∈ K}. Throughout, vectors are taken to be row vectors, and a superscript t indicates transposition. The inner product between x and y in R^d is usually denoted by x · y, and for a vector (t_i)_{i∈Λ}, Λ ⊂ Z^d, we will sometimes simply write t_Λ.

For the symmetric simple random walk X = (X_k)_{k∈N} on Z^d, which at each time step jumps to any one of its 2d nearest neighbours with probability 1/2d, we denote by P_i the distribution of the walk starting at i ∈ Z^d, and by E_i the corresponding expectation. That is, we have P_i(X_0 = i) = 1, and P_i(X_{n+1} = k | X_n = j) = (1/2d)·1{k ∼ j} =: P(j, k), for all k, j ∈ Z^d. Given U ⊂ Z^d, we further denote the entrance time in U by τ_U = inf{n ≥ 0 : X_n ∈ U} and the hitting time of U by τ̃_U = inf{n ≥ 1 : X_n ∈ U}.

Given two functions f, g : Z^d → R, we write f(x) ∼ g(x), as |x| → ∞, if they are asymptotic, i.e. if lim_{|x|→∞} f(x)/g(x) = 1.


Chapter 1

The DGFF

In this chapter we introduce the model studied in this thesis, that is, the Lattice Gaussian Free Field or Discrete Gaussian Free Field (DGFF), also known as the Harmonic Crystal.

1.1 The construction of the model

We begin by defining the configuration space in finite and infinite volume as

Ω_Λ := R^Λ and Ω := R^(Z^d),

where Λ ⊂⊂ Z^d. The measurable structure on Ω_Λ (resp. Ω) is the σ-algebra F_Λ (resp. F) generated by the cylinder sets, that is, the sets of the form {ω ∈ Ω_Λ : ω_i ∈ A_i for every i ∈ I}, with I a finite subset of Λ (resp. Z^d) and A_i an open subset of R.

Given a configuration ω ∈ Ω, we call the random variable ϕ_i(ω) = ω_i, i ∈ Z^d, the spin (or height) at i. We consider the Hamiltonian (i.e. the energy associated to a given configuration ω ∈ Ω_Λ) defined by¹

H_{Λ,β,m}(ω) := (β/4d) Σ_{{i,j}∈E^b_Λ} (ϕ_i(ω) − ϕ_j(ω))² + (m²/2) Σ_{i∈Λ} ϕ_i(ω)²,  ω ∈ Ω_Λ,  (1.1)

where β ≥ 0 is the inverse temperature, m ≥ 0 is the mass² and E^b_Λ = {{i, j} : i ∼ j, {i, j} ∩ Λ ≠ ∅}. Once we have a Hamiltonian and a configuration η ∈ Ω, we define the corresponding Gibbs distribution for the DGFF in Λ with boundary condition η, at inverse temperature β ≥ 0 and mass m ≥ 0,

¹ The constants β/4d and m²/2 will be very convenient later on.
² The terminology "mass" is inherited from quantum field theory, where the corresponding quadratic term in the Lagrangian indeed gives rise to the mass of the associated particles.


as the probability measure µ^η_{Λ,β,m} on (Ω, F) defined by

µ^η_{Λ,β,m}(dω) := exp(−H^η_{Λ,β,m}(ω)) / Z^η_{Λ,β,m} · λ^η_Λ(dω),  (1.2)

where³

λ^η_Λ(dω) := Π_{i∈Λ} dω_i · Π_{i∈Λ^c} δ_{η_i}(dω_i),  (1.3)

and Z^η_{Λ,β,m} is a normalization constant called the partition function; an easy computation shows that it is finite:

Z^η_{Λ,β,m} = ∫ exp(−H^η_{Λ,β,m}(ω)) λ^η_Λ(dω) < ∞.  (1.4)

Remark 1.1.1. We immediately observe that the scaling properties of the Gibbs measure imply that one of the parameters, β or m, plays an irrelevant role when studying the DGFF. Indeed, the change of variables ω′_i = β^{1/2} ω_i, i ∈ Λ, leads to

Z^η_{Λ,β,m} = β^{−|Λ|/2} Z^{η′}_{Λ,1,m′},  (1.5)

where m′ = β^{−1/2} m and η′ = β^{1/2} η, and, similarly,

µ^η_{Λ,β,m}(·) = µ^{η′}_{Λ,1,m′}(·).  (1.6)

This shows that there is no loss of generality in assuming β = 1. Our interest is in particular in the massless model, that is, the case m = 0.

1.2 Some heuristic interpretations

We now give some heuristic interpretations of the DGFF.

First of all, note that by the definition of the energy in (1.1) only spins located at nearest-neighbour vertices of Z^d interact. A second important remark is that, by definition (1.2), our measure gives higher weight to configurations with low energy. To have low energy we want both terms in (1.1) to be small; in particular:

• for the first term to be small we need all the (ϕ_i − ϕ_j)² (which can be viewed as a sort of discrete gradient) to be small, so that every vertex has a value similar to those of its neighbouring vertices; that is, the interaction favours agreement of neighbouring spins;

³ Obviously dω_i denotes the Lebesgue measure. Note that the term Π_{i∈Λ^c} δ_{η_i}(dω_i) fixes the configuration to coincide with the boundary condition η outside Λ.


• for the second term to be small we need all the (ϕ_i)² to be small, so that every vertex has a value close to zero; that is, the mass term favours localization of the spins near zero.

One possible interpretation of this model is as follows. In d = 1, the spin ω_i ∈ R at vertex i ∈ Λ can be interpreted as the height of a random line above the x-axis; the behaviour of the model in large volumes is therefore intimately related to the fluctuations of this line away from the x-axis. Similarly, in d = 2, ω_i can be interpreted as the height of a surface above the (x, y)-plane (see, for an example, the figure on the first page).

The model comes from quantum field theory, where it is the basic model on top of which more interesting field theories are constructed: many other models are built as perturbations of the DGFF, so it serves as a sort of building block.

More recently, a reason to study this model is that the continuum GFF arises as a scaling limit of the DGFF as the mesh of the lattice goes to zero. The GFF plays a very important role in the study of critical systems, especially in dimension 2 (we recall, for example, the remarkable works of the Fields medallists Wendelin Werner and Stanislav Smirnov). Finally, the DGFF can also be interpreted as a model describing the small fluctuations of the positions of the atoms of a crystal; this is why the DGFF is also called the Harmonic Crystal.

1.3 The random walk representation for the massless model

When we look at the density in (1.2) we immediately note an affinity with the Gaussian distribution. The goal of this section is to rewrite the measure µ^η_{Λ,1,0}(dω) =: µ^η_Λ(dω) in the canonical form

(2π)^{−|Λ|/2} (det G_Λ)^{−1/2} exp( −(1/2) (x − u) · G_Λ^{−1} (x − u) ),  (1.7)

where u = (u_i)_{i∈Λ}, with u_i = E^η_Λ[ϕ_i], is the |Λ|-dimensional mean vector and G_Λ(i, j) = Cov^η_Λ(ϕ_i, ϕ_j) is the |Λ| × |Λ| covariance matrix.

Before looking at this representation, we need some preliminary notions on harmonic functions.

1.3.1 Harmonic functions and the Discrete Green Identities

Given a collection f = (f_i)_{i∈Z^d} of real numbers, we define, for each pair {i, j} ∈ E_{Z^d}, the discrete gradient

(∇f)_{ij} := f_j − f_i,  (1.8)

and, for all i ∈ Z^d, the discrete Laplacian

(1/2d)(∆f)_i := [ (1/2d) Σ_{j∼i} f_j ] − f_i = (1/2d) Σ_{j∼i} (∇f)_{ij} = (1/2d) Σ_{j=1}^{d} (∇²f)_{i,i+e_j},  (1.9)

where (∇²f)_{i,i+e_j} = (f_{i+e_j} − f_i) + (f_{i−e_j} − f_i) = f_{i+e_j} − 2f_i + f_{i−e_j}. The last term resembles the usual definition of the Laplacian of a function on R^d, but the first expression is a more natural way to think of the discrete Laplacian: the difference between the mean value of f over the neighbours of i and the value of f at i.

We have the following discrete analogues of the classical Green identities.

Lemma 1.3.1 (Discrete Green Identities). Let Λ ⊂⊂ Z^d. Then, for all collections of real numbers f = (f_i)_{i∈Z^d}, g = (g_i)_{i∈Z^d},

Σ_{{i,j}∈E^b_Λ} (∇f)_{ij} (∇g)_{ij} = − Σ_{i∈Λ} g_i (∆f)_i + Σ_{i∈Λ, j∉Λ, i∼j} g_j (∇f)_{ij},  (1.10)

and

Σ_{i∈Λ} { f_i (∆g)_i − g_i (∆f)_i } = Σ_{i∈Λ, j∉Λ, i∼j} { f_j (∇g)_{ij} − g_j (∇f)_{ij} }.  (1.11)

Proof. See, for example, [7], Lemma 8.7.

We can write the action of the Laplacian on f = (f_i)_{i∈Z^d} as

(∆f)_i = Σ_{j∼i} (∇f)_{ij} = Σ_{j∈Z^d} ∆_{ij} f_j,  (1.12)

where

∆_{ij} = −2d if i = j;  1 if i ∼ j;  0 otherwise.  (1.13)

Moreover, we introduce the restriction of ∆ to Λ, defined by

∆_Λ = (∆_{ij})_{i,j∈Λ}.  (1.14)

Note that f · ∆_Λ g = Σ_{i∈Λ} f_i (∆_Λ g)_i = Σ_{i,j∈Λ} f_i ∆_{ij} g_j = g · ∆_Λ f and (∆_Λ f)_i = Σ_{j∈Λ} ∆_{ij} f_j.

Returning to the density of the DGFF, and remembering that Λ ⊂⊂ Z^d and that f = (f_i)_{i∈Z^d} satisfies f_i = η_i for all i ∉ Λ, we have, applying (1.10) with f = g,

Σ_{{i,j}∈E^b_Λ} (f_j − f_i)² = Σ_{{i,j}∈E^b_Λ} (∇f)²_{ij}
  = − Σ_{i∈Λ} f_i (∆f)_i + Σ_{i∈Λ, j∉Λ, i∼j} f_j (∇f)_{ij}
  = − Σ_{i∈Λ} f_i (∆_Λ f)_i − 2 Σ_{i∈Λ, j∉Λ, i∼j} f_i f_j + B_Λ
  = − f · ∆_Λ f − 2 Σ_{i∈Λ, j∉Λ, i∼j} f_i f_j + B_Λ,  (1.15)


where in the last equalities we used that (∆f)_i = (∆_Λ f)_i + Σ_{j∉Λ, j∼i} f_j for all i ∈ Λ, that f_j (∇f)_{ij} = f_j² − f_i f_j = η_j² − f_i f_j for all i ∈ Λ, j ∉ Λ, i ∼ j, and B_Λ is a boundary term. One can then introduce u = (u_i)_{i∈Z^d}, to be determined later, depending on η and Λ, and playing the role of the mean of f.

Our aim is to rewrite (1.15) in the form −(f − u) · ∆_Λ(f − u), up to boundary terms. We can, in particular, include in B_Λ any expression that depends only on the values of u. We have

(f − u) · ∆_Λ(f − u) = f · ∆_Λ f − 2 f · ∆_Λ u + u · ∆_Λ u
  = f · ∆_Λ f − 2 Σ_{i∈Λ} f_i (∆_Λ u)_i + B_Λ
  = f · ∆_Λ f − 2 Σ_{i∈Λ} f_i (∆u)_i + 2 Σ_{i∈Λ} Σ_{j∉Λ, j∼i} f_i u_j + B_Λ.  (1.16)

Comparing the two expressions for f · ∆_Λ f in (1.16) and (1.15), we deduce that

Σ_{{i,j}∈E^b_Λ} (f_j − f_i)² = −(f − u) · ∆_Λ(f − u) − 2 Σ_{i∈Λ} f_i (∆u)_i + 2 Σ_{i∈Λ, j∉Λ, j∼i} f_i (u_j − f_j) + B_Λ.  (1.17)

A look at the second term in this last display indicates exactly the restrictions one should impose on u in order for −(f − u) · ∆_Λ(f − u) to be the one and only contribution to the Hamiltonian (up to boundary terms). To cancel the non-trivial terms that depend on the values of f inside Λ, we need to ensure that:

• u is harmonic in Λ, that is, (∆u)_i = 0 for all i ∈ Λ;  (1.18)

• u coincides with f (hence with η) outside Λ, that is, u_i = η_i for all i ∉ Λ.  (1.19)

We have thus proved:

Lemma 1.3.2. Assume that u = (u_i)_{i∈Z^d} solves the Dirichlet problem in Λ with boundary condition η:

(∆u)_i = 0, i ∈ Λ;  u_i = η_i, i ∉ Λ.  (1.20)

Then

Σ_{{i,j}∈E^b_Λ} (f_j − f_i)² = −(f − u) · ∆_Λ(f − u) + B_Λ.  (1.21)


The existence of a solution to the Dirichlet problem will be proved later; uniqueness can be verified easily.

Let us consider the massless Hamiltonian H^η_{Λ,1,0} =: H^η_Λ, expressed in terms of the variables ϕ = (ϕ_i)_{i∈Z^d}, which are assumed to satisfy ϕ_i = η_i for all i ∉ Λ. We apply Lemma 1.3.2 with f = ϕ, assuming for the moment that one can find a solution u to the Dirichlet problem (in Λ, with boundary condition η). Since it does not alter the Gibbs distribution, the constant B_Λ in (1.21) can always be subtracted from the Hamiltonian. We get

H^η_Λ = (1/2) (ϕ − u) · (−(1/2d) ∆_Λ) (ϕ − u).  (1.22)

Our next tasks are, first, to invert the matrix −(1/2d)∆_Λ, in order to obtain an explicit expression for the covariance matrix, and, second, to find an explicit expression for the solution u to the Dirichlet problem.

1.3.2 The random walk representation

We begin by writing

−(1/2d) ∆_Λ = I_Λ − P_Λ,  (1.23)

where I_Λ = (δ_{ij})_{i,j∈Λ} and P_Λ = (P(i, j))_{i,j∈Λ}, with elements

P(i, j) = 1/2d if i ∼ j;  0 otherwise.  (1.24)

We immediately recognise that the numbers (P(i, j))_{i,j∈Z^d} are the transition probabilities of the symmetric simple random walk X = (X_k)_{k∈N} on Z^d, which at each time step jumps to any one of its 2d nearest neighbours with probability 1/2d, as explained in the introduction. We denote by P_i the distribution of the walk starting at i ∈ Z^d; that is, we have P_i(X_0 = i) = 1, and P_i(X_{n+1} = k | X_n = j) = P(j, k) for all k, j ∈ Z^d.

The next lemma shows that the matrix I_Λ − P_Λ is invertible, and provides a probabilistic interpretation for its inverse.

Lemma 1.3.3. The matrix I_Λ − P_Λ is invertible. Moreover, its inverse G_Λ = (I_Λ − P_Λ)^{−1} is given by G_Λ = (G_Λ(i, j))_{i,j∈Λ}, the Green function in Λ of the simple random walk on Z^d, defined by

G_Λ(i, j) := E_i[ Σ_{n=0}^{τ_{Λ^c} − 1} 1{X_n = j} ].  (1.25)

The Green function G_Λ(i, j) represents the average number of visits to j made by the walk started at i before it first exits Λ.


Proof. First of all, observe that (below, P^n denotes the n-th power of the matrix P)

(I_Λ − P_Λ)(I_Λ + P_Λ + P_Λ² + · · · + P_Λ^n) = I_Λ − P_Λ^{n+1}.  (1.26)

Rewriting

P_Λ^k(i, j) = Σ_{i_1,...,i_{k−1}∈Λ} P_Λ(i, i_1) P_Λ(i_1, i_2) · · · P_Λ(i_{k−1}, j) = P_i(X_k = j, τ_{Λ^c} > k) ≤ P_i(τ_{Λ^c} > k),  (1.27)

and using the classical bound P_i(τ_{Λ^c} > k) ≤ e^{−ck} on the probability that the walk stays inside a finite region for a long time, we can take the limit n → ∞ in (1.26), obtaining

(I_Λ − P_Λ) Σ_{k≥0} P_Λ^k = I_Λ,  (1.28)

that is,

G_Λ = (I_Λ − P_Λ)^{−1} = Σ_{k≥0} P_Λ^k,  (1.29)

since by symmetry we also have (Σ_{k≥0} P_Λ^k)(I_Λ − P_Λ) = I_Λ. Finally,

Σ_{k≥0} P_Λ^k(i, j) = Σ_{k≥0} P_i(X_k = j, τ_{Λ^c} > k) = E_i[ Σ_{n=0}^{τ_{Λ^c} − 1} 1{X_n = j} ],  (1.30)

gives the desired expression for G_Λ(i, j).
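Lemma 1.3.3 lends itself to a numerical sanity check. The sketch below (our code, assuming NumPy is available) builds P_Λ for the segment Λ = {1, ..., n} ⊂ Z, where the walk jumps left or right with probability 1/2, and compares the matrix inverse with the truncated Neumann series of (1.29) and with the classical closed-form Green function of the killed walk on an interval:

```python
import numpy as np

# Check Lemma 1.3.3 on Λ = {1, ..., n} ⊂ Z (d = 1, jump probability 1/2):
# G_Λ = (I_Λ - P_Λ)^(-1) = Σ_{k≥0} P_Λ^k, equation (1.29).
n = 6
P = np.zeros((n, n))
for i in range(n - 1):          # transitions inside Λ; jumps out of Λ lose mass
    P[i, i + 1] = P[i + 1, i] = 0.5

G_inverse = np.linalg.inv(np.eye(n) - P)

# Truncated Neumann series Σ_{k=0}^{399} P_Λ^k; converges geometrically by (1.27).
G_series = sum(np.linalg.matrix_power(P, k) for k in range(400))
assert np.allclose(G_inverse, G_series, atol=1e-8)

# Closed form for this segment (vertices labelled 1..n, killed at 0 and n+1):
# G_Λ(i, j) = 2 (i∧j) (n+1 - (i∨j)) / (n+1).
i, j = 2, 5
exact = 2 * min(i, j) * (n + 1 - max(i, j)) / (n + 1)
assert abs(G_inverse[i - 1, j - 1] - exact) < 1e-10
```

The geometric decay of P_Λ^k is exactly the bound (1.27), which is why a few hundred terms of the series already agree with the inverse to high precision.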

Let us now prove the existence of a solution to the Dirichlet problem, also expressed in terms of the simple random walk. Let X_{τ_{Λ^c}} denote the position of the walk at the time of first exit from Λ.

Lemma 1.3.4. The solution to the Dirichlet problem in (1.20) is given by the function u = (u_i)_{i∈Z^d} defined by

u_i = E_i[ η_{X_{τ_{Λ^c}}} ], for all i ∈ Z^d.  (1.31)

Proof. When j ∉ Λ, P_j(τ_{Λ^c} = 0) = 1 and, thus, u_j = E_j[η_{X_{τ_{Λ^c}}}] = E_j[η_{X_0}] = η_j. When i ∈ Λ, by conditioning on the first step of the walk and using the Markov property,

u_i = E_i[ η_{X_{τ_{Λ^c}}} ] = Σ_{j∼i} E_i[ η_{X_{τ_{Λ^c}}} | X_1 = j ] P_i(X_1 = j) = (1/2d) Σ_{j∼i} u_j,  (1.32)

so that (∆u)_i = 0 for all i ∈ Λ.
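Lemma 1.3.4 also suggests a concrete computation: harmonicity inside Λ is a linear system, (I_Λ − P_Λ) u_Λ = b with b_i = (1/2d) Σ_{j∼i, j∉Λ} η_j, so u_Λ = G_Λ b. A small sketch (ours, assuming NumPy) on a segment of Z, where u_i = E_i[η_{X_τ}] is the classical gambler's-ruin probability:

```python
import numpy as np

# Solve the Dirichlet problem (1.20) on Λ = {1, ..., N-1} ⊂ Z with boundary
# condition η(0) = 0, η(N) = 1.  Harmonicity (Δu)_i = 0 inside Λ reads
# (I_Λ - P_Λ) u_Λ = b, with b_i = (1/2d) Σ_{j∼i, j∉Λ} η_j.
# The answer must match the gambler's-ruin formula u_i = E_i[η_{X_τ}] = i/N.
N = 10
n = N - 1                       # interior sites 1..N-1
P = np.zeros((n, n))
for i in range(n - 1):
    P[i, i + 1] = P[i + 1, i] = 0.5

b = np.zeros(n)
b[0] = 0.5 * 0.0                # outside neighbour 0 carries η(0) = 0
b[-1] = 0.5 * 1.0               # outside neighbour N carries η(N) = 1

u = np.linalg.solve(np.eye(n) - P, b)

expected = np.arange(1, N) / N  # u_i = i/N: probability of exiting at N
assert np.allclose(u, expected)
print(np.round(u, 3))           # → [0.1 0.2 ... 0.9]
```

The linear-system view and the probabilistic view (1.31) give the same u, which is the content of the lemma.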


We finally have the desired representation.

Theorem 1.3.5. Under µ^η_Λ, ϕ = (ϕ_i)_{i∈Λ} is Gaussian, with mean u = (u_i)_{i∈Λ} defined by

u_i = E_i[ η_{X_{τ_{Λ^c}}} ], for all i ∈ Λ,  (1.33)

and positive definite covariance matrix G_Λ = (G_Λ(i, j))_{i,j∈Λ}, given by the Green function

G_Λ(i, j) := E_i[ Σ_{n=0}^{τ_{Λ^c} − 1} 1{X_n = j} ].  (1.34)

The reader should note the remarkable fact that the distribution of ϕ = (ϕ_i)_{i∈Λ} under µ^η_Λ depends on the boundary condition η only through its mean; the covariance matrix is only sensitive to the choice of Λ.
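Theorem 1.3.5 gives a direct recipe for sampling the DGFF on a finite Λ: compute u and G_Λ, factor G_Λ = L Lᵗ, and set ϕ_Λ = u + Lξ with ξ a standard Gaussian vector. A sketch of this recipe (our code, assuming NumPy), checked against the empirical mean and covariance on a segment of Z:

```python
import numpy as np

# Sampling the DGFF on Λ per Theorem 1.3.5: ϕ_Λ ~ N(u, G_Λ), with u the
# harmonic extension of η and G_Λ = (I_Λ - P_Λ)^(-1) (Lemma 1.3.3).
# Illustration on the segment Λ = {1, ..., n} ⊂ Z.
rng = np.random.default_rng(0)
n, N = 4, 200_000
P = np.zeros((n, n))
for i in range(n - 1):
    P[i, i + 1] = P[i + 1, i] = 0.5
G = np.linalg.inv(np.eye(n) - P)        # covariance matrix

eta0, eta1 = -1.0, 2.0                  # boundary values at sites 0 and n+1
b = np.zeros(n); b[0], b[-1] = 0.5 * eta0, 0.5 * eta1
u = G @ b                               # mean = solution of Dirichlet problem

L = np.linalg.cholesky(G)               # G = L L^t
samples = u + rng.standard_normal((N, n)) @ L.T   # N draws of ϕ_Λ

emp_mean = samples.mean(axis=0)
emp_cov = np.cov(samples, rowvar=False)
assert np.max(np.abs(emp_mean - u)) < 0.05
assert np.max(np.abs(emp_cov - G)) < 0.05
```

Changing η shifts only `u`, never `G`, which is precisely the "remarkable fact" noted above.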

1.4 The infinite volume extension

In this section, we will present the problem of existence of infinite-volume Gibbs measures for the massless DGFF.

In order to do so we need to introduce the notion of a Gibbs state, which we will do in the particular case of the DGFF measure.⁴

Definition 1.4.1. Let f : Ω = R^(Z^d) → R be a function. We say that f is local if there exists A ⊂⊂ Z^d such that f(ω) = f(ω′) as soon as ω_i = ω′_i for all i ∈ A. The smallest such set A is called the support of f and is denoted by supp(f).

Definition 1.4.2. A state is a map f ↦ ⟨f⟩, acting on local functions f : Ω → R, satisfying the following three properties:

1. ⟨1⟩ = 1;

2. if f ≥ 0 then ⟨f⟩ ≥ 0;

3. for all λ ∈ R, ⟨f + λg⟩ = ⟨f⟩ + λ⟨g⟩.

Definition 1.4.3. Let (Λ_n)_{n≥1} be a sequence of finite sets increasing to Z^d. The sequence µ^η_{Λ_n} is said to converge to the state ⟨·⟩ if E^η_{Λ_n}[f] → ⟨f⟩ for every local function f. The state ⟨·⟩ is then called a Gibbs state.

⁴ The notion of Gibbs state can be developed in a much more general framework, but we prefer a simpler presentation, since a full discussion of this topic is not the goal of this thesis.


Remark 1.4.4. This notion is very natural from a mathematical point of view: as soon as one has a functional with the properties of Definition 1.4.2, one can prove that there exists an underlying probability measure µ such that ⟨f⟩ = ∫ f dµ, and the notion of convergence given in Definition 1.4.3 is then just the weak convergence of the sequence µ^η_{Λ_n} to µ.

Definition 1.4.5. We characterize the space of infinite-volume Gibbs measures by

G = { µ ∈ M_1(Ω) : µ(A | F_{Λ^c})(ω) = µ^ω_Λ(A) for all Λ ⊂⊂ Z^d and all A ∈ F },

where we denote by M_1(Ω) the set of probability measures on Ω.

Of course, in the particular case of the DGFF, since we are dealing with sequences of Gaussian measures, any limit point of µ^η_{Λ_n} is itself a Gaussian measure, and the convergence takes place if and only if both covariance and mean converge (to finite limits). By a standard monotonicity argument, and remembering that the random walk on Z^d is transient if and only if d ≥ 3, we note that

lim_{n→∞} G_{Λ_n}(i, j) = E_i[ Σ_{n≥0} 1{X_n = j} ]  { < +∞ if d ≥ 3;  = +∞ if d = 1 or d = 2 }.  (1.35)

This has the following consequence:

Theorem 1.4.6. When d = 1 or d = 2, the massless Gaussian Free Field has no infinite-volume Gibbs measures.

When d ≥ 3, the transience of the symmetric simple random walk implies that the limit in (1.35) is finite. This will allow us to construct infinite-volume Gibbs measures. We say that η = (η_i)_{i∈Z^d} is harmonic (in Z^d) if (∆η)_i = 0 for all i ∈ Z^d.

Theorem 1.4.7. In dimensions d ≥ 3, the massless Gaussian Free Field possesses infinitely many infinite-volume Gibbs measures. More precisely, given any harmonic function η on Z^d, there exists a Gaussian Gibbs measure µ^η with mean η and covariance matrix given by the Green function

G(i, j) = E_i[ Σ_{n≥0} 1{X_n = j} ].  (1.36)


1.5 The Gibbs-Markov property

From now on we tacitly assume that d ≥ 3. In this section we want to show that every Gaussian Gibbs measure µ^η given by Theorem 1.4.7 satisfies the Gibbs-Markov property, that is, for all Λ ⊂⊂ Z^d and all A ∈ F,

µ^η(A | F_{Λ^c})(ω) = µ^ω_Λ(A), for µ^η-almost all ω.  (1.37)

For that, we will verify that the field ϕ = (ϕ_i)_{i∈Z^d} with mean E^η[ϕ_i] = η_i and covariance

Cov^η(ϕ_i, ϕ_j) = G(i, j),  (1.38)

when conditioned on F_{Λ^c}, remains Gaussian (Lemma 1.5.2 below) and that, for all t_Λ,

E^η[ e^{i t_Λ · ϕ_Λ} | F_{Λ^c} ](ω) = e^{i t_Λ · a_Λ(ω) − (1/2) t_Λ · G_Λ t_Λ},  (1.39)

where a_i(ω) = E_i[ω_{X_{τ_{Λ^c}}}] is the solution of the Dirichlet problem in Λ with boundary condition ω.

Remark 1.5.1. We want to stress the difference between the probability measure P (with associated expectation E), which acts on the random field ϕ just defined, and the measures P_i (with expectations E_i), which act on the simple random walk on Z^d, as defined in the initial section on notation.

Lemma 1.5.2. Let ϕ be the Gaussian field constructed above. For all i ∈ Λ, let

a_i(ω) := E^η[ϕ_i | F_{Λ^c}](ω).  (1.40)

Then, µ^η-a.s., a_i(ω) = E_i[ω_{X_{τ_{Λ^c}}}]. In particular, each a_i(ω) is a finite linear combination of the variables ϕ_j, and (a_i)_{i∈Z^d} is a Gaussian field.

Proof. When i ∈ Λ, we use the following characterization of the conditional expectation: up to equivalence, E^η[ϕ_i | F_{Λ^c}] is the unique F_{Λ^c}-measurable random variable ψ for which

E^η[(ϕ_i − ψ) ϕ_j] = 0, for all j ∈ Λ^c.  (1.41)

We verify that this condition is indeed satisfied when ψ = E_i[ω_{X_{τ_{Λ^c}}}]. By (1.38),

E^η[ (ϕ_i − E_i[ϕ_{X_{τ_{Λ^c}}}]) ϕ_j ] = E^η[ϕ_i ϕ_j] − E^η[ E_i[ϕ_{X_{τ_{Λ^c}}}] ϕ_j ] = G(i, j) + η_i η_j − E^η[ E_i[ϕ_{X_{τ_{Λ^c}}}] ϕ_j ].

Using again (1.38),

E^η[ E_i[ϕ_{X_{τ_{Λ^c}}}] ϕ_j ] = Σ_{k∈∂Λ} E^η[ϕ_k ϕ_j] P_i(X_{τ_{Λ^c}} = k) = E_i[ E^η[ϕ_{X_{τ_{Λ^c}}} ϕ_j] ] = E_i[G(X_{τ_{Λ^c}}, j)] + E_i[ E^η[ϕ_{X_{τ_{Λ^c}}}] E^η[ϕ_j] ].  (1.42)


On the one hand, since i ∈ Λ and j ∈ Λ^c, any trajectory of the random walk that contributes to G(i, j) must intersect ∂Λ at least once, so the Markov property gives

G(i, j) = E_i[ Σ_{h≥0} 1{X_h = j} ] = Σ_{k∈∂Λ} P_i(X_{τ_{Λ^c}} = k) G(k, j) = E_i[G(X_{τ_{Λ^c}}, j)].  (1.43)

On the other hand, since ϕ has mean η and η is the solution of the Dirichlet problem in Λ with boundary condition η, we have

E_i[ E^η[ϕ_{X_{τ_{Λ^c}}}] E^η[ϕ_j] ] = E_i[η_{X_{τ_{Λ^c}}} η_j] = E_i[η_{X_{τ_{Λ^c}}}] η_j = η_i η_j.  (1.44)

This shows that a_i(ω) = E_i[ω_{X_{τ_{Λ^c}}}]. In particular, the latter is a finite linear combination of the ω_j's:

a_i(ω) = Σ_{k∈∂Λ} ω_k P_i(X_{τ_{Λ^c}} = k),  (1.45)

which implies that (a_i)_{i∈Z^d} is also a Gaussian field.

Corollary 1.5.3. Under µ^η, the random vector (ϕ_i − a_i)_{i∈Λ} is independent of F_{Λ^c}.

Proof. We know that the variables ϕ_i − a_i, i ∈ Λ, and ϕ_j, j ∈ Λ^c, form a Gaussian field. Therefore, a classical result implies that (ϕ_i − a_i)_{i∈Λ}, which is centered, is independent of F_{Λ^c} if and only if each pair ϕ_i − a_i (i ∈ Λ) and ϕ_j (j ∈ Λ^c) is uncorrelated. But this follows from (1.41).

Let a_Λ = (a_i)_{i∈Λ}. By Corollary 1.5.3, and since a_Λ is F_{Λ^c}-measurable,

E^η[e^{i t_Λ·ϕ_Λ} | F_{Λ^c}] = e^{i t_Λ·a_Λ} E^η[e^{i t_Λ·(ϕ_Λ−a_Λ)} | F_{Λ^c}] = e^{i t_Λ·a_Λ} E^η[e^{i t_Λ·(ϕ_Λ−a_Λ)}].  (1.46)

We know that the variables ϕ_i − a_i, i ∈ Λ, form a Gaussian vector under µ^η. Since it is centered, we only need to compute its covariance. For i, j ∈ Λ, write

(ϕ_i − a_i)(ϕ_j − a_j) = ϕ_i ϕ_j − (ϕ_i − a_i) a_j − (ϕ_j − a_j) a_i − a_i a_j.  (1.47)

Using Corollary 1.5.3 again, we see that E^η[(ϕ_i − a_i) a_j] = 0 and E^η[(ϕ_j − a_j) a_i] = 0 (since a_i and a_j are F_{Λ^c}-measurable). Therefore

Cov^η((ϕ_i − a_i), (ϕ_j − a_j)) = E^η[ϕ_i ϕ_j] − E^η[a_i a_j] = G(i, j) + η_i η_j − E^η[a_i a_j].  (1.48)

Proceeding as in (1.42),

E^η[a_i a_j] = E_{i,j}[ G(X_{τ_{Λ^c}}, X′_{τ′_{Λ^c}}) ] + E_{i,j}[ E^η[ϕ_{X_{τ_{Λ^c}}}] E^η[ϕ_{X′_{τ′_{Λ^c}}}] ],  (1.49)


where X and X′ are two independent symmetric simple random walks, starting respectively at i and j, P_{i,j} denotes their joint distribution, and τ′_{Λ^c} is the first exit time of X′ from Λ. As was done earlier,

E_{i,j}[ E^η[ϕ_{X_{τ_{Λ^c}}}] E^η[ϕ_{X′_{τ′_{Λ^c}}}] ] = E_{i,j}[ η_{X_{τ_{Λ^c}}} η_{X′_{τ′_{Λ^c}}} ] = E_i[η_{X_{τ_{Λ^c}}}] E_j[η_{X_{τ_{Λ^c}}}] = η_i η_j.  (1.50)

Let us then define the modified Green function

K_Λ(i, j) := E_i[ Σ_{n≥τ_{Λ^c}} 1{X_n = j} ] = G(i, j) − G_Λ(i, j).  (1.51)

Observe that K_Λ(i, j) = K_Λ(j, i), since G and G_Λ are both symmetric; moreover, K_Λ(i, j) = G(i, j) if i ∈ Λ^c. We can thus write

E_{i,j}[ G(X_{τ_{Λ^c}}, X′_{τ′_{Λ^c}}) ] = Σ_{k,l∈∂Λ} P_i(X_{τ_{Λ^c}} = k) P_j(X_{τ_{Λ^c}} = l) G(k, l)
  = Σ_{l∈∂Λ} P_j(X_{τ_{Λ^c}} = l) K_Λ(i, l)
  = Σ_{l∈∂Λ} P_j(X_{τ_{Λ^c}} = l) K_Λ(l, i)
  = Σ_{l∈∂Λ} P_j(X_{τ_{Λ^c}} = l) G(l, i)
  = K_Λ(j, i) = G(i, j) − G_Λ(i, j).  (1.52)

We have thus shown that Cov^η((ϕ_i − a_i), (ϕ_j − a_j)) = G_Λ(i, j), which implies that

E^η[e^{i t_Λ·ϕ_Λ} | F_{Λ^c}] = e^{i t_Λ·a_Λ} e^{−(1/2) t_Λ · G_Λ t_Λ}.  (1.53)

This shows that, under µ^η(· | F_{Λ^c}), ϕ_Λ is Gaussian with distribution given by µ^η_Λ(·). We have therefore proved (1.39).

Remark 1.5.4. All the computations done in this section can be generalized by conditioning on the σ-algebra F_Λ, for ∅ ≠ Λ ⊂⊂ Z^d. In this case we obtain the following version of Lemma 1.5.2.

Lemma 1.5.5. Let ϕ be the Gaussian field constructed at the beginning of this section. For all i ∈ Λ^c, let

u_i(ω) := E^η[ϕ_i | F_Λ](ω).  (1.54)

Then, µ^η-a.s., u_i(ω) = E_i[ω_{X_{τ_Λ}}, τ_Λ < ∞]. In particular, each u_i(ω) is a finite linear combination of the variables ϕ_j, and (u_i)_{i∈Z^d} is a Gaussian field.


We now define, for U ⊂ Z^d, the probability measure µ^η_U(·) on R^(Z^d) of the Gaussian field with mean E^η_U[ϕ_i] = η_i and covariance equal to the Green function G_U(·, ·) of the walk killed outside U, that is,

Cov^η_U[ϕ_i, ϕ_j] = G_U(i, j) := Σ_{n≥0} P_i(X_n = j, n < T_U),  (1.55)

where T_U := inf{n ≥ 0 : X_n ∉ U} is the exit time from U. We have the following.

Lemma 1.5.6. Let ∅ ≠ Λ ⊂⊂ Z^d and U = Λ^c. Every Gaussian Gibbs measure µ^η given by Theorem 1.4.7 satisfies the "exterior Gibbs-Markov property", that is, for all Λ ⊂⊂ Z^d and all A ∈ F,

µ^η(A | F_Λ)(ω) = µ^ω_U(A), for µ^η-almost all ω.  (1.56)


Chapter 2

Some useful general tools

In this chapter and the following one we present some important general tools that we are going to use in the sequel; this first chapter collects some useful general probability tools.

2.1 Extended version of Markov's inequality for non-negative increasing functions

We state an easy but important generalization of the classical Markov inequality.

Theorem 2.1.1. Let X be any random variable and f a non-negative increasing function. Then, supposing that E[f(X)] < ∞, for every ε with f(ε) > 0,

P(X ≥ ε) ≤ E[f(X)] / f(ε).  (2.1)

Proof. Since X ≥ ε implies f(X) ≥ f(ε), the basic Markov inequality gives the result.
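As an illustration (our own toy example, not from the thesis), taking f(x) = e^{λx} with λ > 0 in Theorem 2.1.1 yields the familiar Chernoff bound P(X ≥ ε) ≤ E[e^{λX}]/e^{λε}, which we can check exactly on a small discrete distribution:

```python
import math

# Theorem 2.1.1 with f(x) = e^{λx} (λ > 0) is the Chernoff bound.
# Exact check on a four-point distribution.
values = [0, 1, 2, 3]                     # X uniform on {0, 1, 2, 3}
probs = [0.25] * 4

def tail(eps):
    """Exact P(X ≥ eps)."""
    return sum(p for v, p in zip(values, probs) if v >= eps)

def chernoff(eps, lam):
    """Bound E[e^{λX}] / e^{λ eps} from Theorem 2.1.1."""
    mgf = sum(p * math.exp(lam * v) for v, p in zip(values, probs))
    return mgf / math.exp(lam * eps)

for eps in (1, 2, 3):
    for lam in (0.5, 1.0, 2.0):
        assert tail(eps) <= chernoff(eps, lam)
```

Since λ is free, one usually optimizes the bound over λ; this optimized form is exactly what lies behind Gaussian tail estimates such as (2.5) below.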

2.2 BTIS-inequality

The BTIS-inequality is a result bounding the probability of a deviation of the uniform norm of a centered Gaussian stochastic process above its expected value. The inequality has been described (in [1]) as "the single most important tool in the study of Gaussian processes." We now present this important tool.

Consider a Gaussian random variable X ∼ N(0, σ²). The following two important bounds hold for every u > 0 and become sharp very quickly as u grows:

( σ/(√(2π) u) − σ³/(√(2π) u³) ) e^{−u²/2σ²} ≤ P(X > u) ≤ ( σ/(√(2π) u) ) e^{−u²/2σ²}.  (2.2)


In particular, the upper bound follows from the observation that

P(X > u) = ∫_u^{+∞} (1/√(2πσ²)) e^{−x²/2σ²} dx ≤ ∫_u^{+∞} (1/√(2πσ²)) (x/u) e^{−x²/2σ²} dx = ( σ/(√(2π) u) ) e^{−u²/2σ²}.  (2.3)

For the lower bound, make the substitution x ↦ u + y/u to note that

P(X > u) = ∫_u^{+∞} (1/√(2πσ²)) e^{−x²/2σ²} dx = (1/√(2πσ²)) ∫_0^{+∞} e^{−(u+y/u)²/2σ²} (dy/u)
  = ( e^{−u²/2σ²} / (u√(2πσ²)) ) ∫_0^{+∞} e^{−(y²/u² + 2y)/2σ²} dy
  ≥ ( e^{−u²/2σ²} / (u√(2πσ²)) ) ∫_0^{+∞} e^{−y/σ²} ( 1 − y²/(2u²σ²) ) dy
  = ( e^{−u²/2σ²} / (u√(2πσ²)) ) ( σ² − σ⁴/u² ),  (2.4)

where the inequality follows from the fact that e^{−z} ≥ 1 − z for all z ≥ 0. One immediate consequence of (2.2) is that

lim_{u→∞} u^{−2} ln P(X > u) = −1/(2σ²).  (2.5)
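The bounds (2.2) are easy to check numerically against the exact Gaussian tail, which the Python standard library expresses through erfc; the following sanity check is ours, not part of the thesis:

```python
import math

# Numerical check of the Gaussian tail bounds (2.2):
# (σ/(√(2π)u) - σ³/(√(2π)u³)) e^{-u²/2σ²} ≤ P(X > u) ≤ (σ/(√(2π)u)) e^{-u²/2σ²},
# using the exact tail P(X > u) = erfc(u/(σ√2))/2.
def gaussian_tail(u, sigma):
    return 0.5 * math.erfc(u / (sigma * math.sqrt(2)))

def bounds(u, sigma):
    common = math.exp(-u * u / (2 * sigma * sigma)) / math.sqrt(2 * math.pi)
    upper = common * sigma / u
    lower = common * (sigma / u - sigma ** 3 / u ** 3)
    return lower, upper

for sigma in (0.5, 1.0, 2.0):
    for u in (0.5, 1.0, 2.0, 4.0, 8.0):
        lo, hi = bounds(u, sigma)
        p = gaussian_tail(u, sigma)
        assert lo <= p <= hi   # the two bounds squeeze together as u/σ grows
```

For small u/σ the lower bound is negative (hence trivially true); the bounds become sharp precisely in the regime u/σ → ∞ relevant for (2.5).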

Now we state a classical result related to (2.5), but for the supremum of a general centered Gaussian process (X_t)_{t∈T}. Assume that (X_t)_{t∈T} is a.s. bounded; then

lim_{u→∞} u^{−2} ln P( sup_{t∈T} X_t > u ) = −1/(2σ_T²),  (2.6)

where

σ_T² := sup_{t∈T} E[X_t²].  (2.7)

An immediate consequence of (2.6) is that for all ε > 0 and u large enough,

P( sup_{t∈T} X_t > u ) ≤ e^{εu² − u²/2σ_T²}.  (2.8)

Since ε > 0 is arbitrary, comparing (2.8) with (2.2) we reach the rather surprising conclusion that the supremum of a centered, bounded Gaussian process behaves much like a single Gaussian variable with a suitably chosen variance.

Now we want to see where (2.8) comes from. In fact, (2.8) and its consequences are all special cases of a "nonasymptotic" result due independently, and with very different proofs, to Borell (B) and to Tsirelson, Ibragimov and Sudakov (TIS).


Theorem 2.2.1 (BTIS-inequality). Let (X_t)_{t∈T} be a centered Gaussian process, a.s. bounded on T. Write ‖X‖ = ‖X‖_T = sup_{t∈T} X_t. Then E[‖X‖] < ∞ and, for all u > 0,

P( ‖X‖ − E[‖X‖] > u ) ≤ e^{−u²/2σ_T²}.  (2.9)

Before looking at the proof, we take a moment to note an immediate and trivial consequence of (2.9): for all u > E[‖X‖],

P(‖X‖ > u) ≤ e^{−(u − E[‖X‖])²/2σ_T²},  (2.10)

so that (2.6) and (2.8) follow from the BTIS-inequality.

We now turn to the proof of the BTIS-inequality. There are essentially three quite different ways to tackle it:

• Borell's original proof [2] relied on isoperimetric inequalities;

• the proof of Tsirelson, Ibragimov and Sudakov [15] relied on Itô's formula;

• the proof reported in the collection of exercises [5] (although its root is much older).

We choose, as in [1], the third and more direct route. The first step in this route involves the following two lemmas.

Lemma 2.2.2. Let X and Y be independent k-dimensional vectors of centered, unit-variance, independent Gaussian variables. If f, g : R^k → R are bounded C² functions, then

\[
\mathrm{Cov}\big(f(X),g(X)\big)=\int_{0}^{1}E\Big[\nabla f(X)\cdot\nabla g\big(\alpha X+\sqrt{1-\alpha^{2}}\,Y\big)\Big]\,d\alpha,
\tag{2.11}
\]

where ∇f(x) := (∂f(x)/∂x_i)_{i=1,…,k}.

Proof. It suffices to prove the lemma for f(x) = e^{i(t·x)} and g(x) = e^{i(s·x)} with s, t, x ∈ R^k; standard approximation arguments (which is where the requirement that f is C² appears) will do the rest. Write

\[
\varphi(t):=E\big[e^{i(t\cdot X)}\big]=\exp\Big\{i\,t\cdot 0-\tfrac12\,t^{t}It\Big\}=e^{-|t|^{2}/2},
\tag{2.12}
\]

since X is a k-dimensional vector of centered, unit-variance, independent Gaussian variables. It is then trivial that

\[
\mathrm{Cov}\big(f(X),g(X)\big)=E\big[e^{i(t\cdot X)}e^{i(s\cdot X)}\big]-E\big[e^{i(t\cdot X)}\big]E\big[e^{i(s\cdot Y)}\big]
=E\big[e^{i((t+s)\cdot X)}\big]-E\big[e^{i(t\cdot X)}\big]E\big[e^{i(s\cdot X)}\big]
=\varphi(t+s)-\varphi(t)\varphi(s),
\tag{2.13}
\]

where the second equality follows from the fact that X and Y are independent with the same distribution. On the other hand, computing the integral in (2.11), using ∂f(x)/∂x_j = i t_j e^{i(t·x)}, gives

\[
\int_{0}^{1}E\Big[\nabla f(X)\cdot\nabla g\big(\alpha X+\sqrt{1-\alpha^{2}}\,Y\big)\Big]\,d\alpha
=\int_{0}^{1}E\Big[\sum_{j=1}^{k}i\,t_{j}e^{i(t\cdot X)}\,i\,s_{j}e^{i(s\cdot(\alpha X+\sqrt{1-\alpha^{2}}\,Y))}\Big]\,d\alpha
\]
\[
=-\int_{0}^{1}(s\cdot t)\,E\big[e^{i((t+\alpha s)\cdot X)}\big]E\big[e^{i\sqrt{1-\alpha^{2}}\,(s\cdot Y)}\big]\,d\alpha
=-\int_{0}^{1}(s\cdot t)\,e^{-(|t|^{2}+2\alpha(s\cdot t)+|s|^{2})/2}\,d\alpha
\]
\[
=\varphi(s)\varphi(t)\big(e^{-s\cdot t}-1\big)=\varphi(s+t)-\varphi(s)\varphi(t),
\tag{2.14}
\]

which is all that we need.

Lemma 2.2.3. Let X be a k-dimensional vector of centered, unit-variance, independent Gaussian variables. If h : R^k → R is C² with Lipschitz constant 1 and E[h(X)] = 0, then for all t > 0,

\[
E\big[e^{th(X)}\big]\le e^{t^{2}/2}.
\tag{2.15}
\]

Proof. Let Y be an independent copy of X and α a uniform random variable on [0, 1]. Define the pair (X, Z_α) via

\[
(X,Z_{\alpha}):=\big(X,\ \alpha X+\sqrt{1-\alpha^{2}}\,Y\big).
\tag{2.16}
\]

Take h as in the statement of the lemma, fix t ≥ 0 and define g = e^{th}. Applying (2.11) (note that E[h(X)] = 0, so E[h(X)g(X)] = Cov(h(X), g(X))) and using ∇e^{th} = t e^{th}∇h gives

\[
E\big[h(X)e^{th(X)}\big]=E\big[h(X)g(X)\big]
=t\int_{0}^{1}E\big[\nabla h(X)\cdot\nabla h(Z_{\alpha})\,e^{th(Z_{\alpha})}\big]\,d\alpha
\le t\,E\big[e^{th(X)}\big],
\tag{2.17}
\]

where we used the Cauchy–Schwarz inequality, the Lipschitz property of h (so that |∇h(X)·∇h(Z_α)| ≤ 1) and the fact that Z_α has the same distribution as X. Let u be the function defined by

\[
e^{u(t)}=E\big[e^{th(X)}\big];
\tag{2.18}
\]

differentiating both sides then gives

\[
u'(t)\,e^{u(t)}=E\big[h(X)e^{th(X)}\big],
\tag{2.19}
\]

so that, by the preceding inequality, u'(t) ≤ t. Since u(0) = 0 it follows that u(t) ≤ t²/2 and we are done.
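The moment bound (2.15) can be illustrated by a hedged Monte Carlo sketch. We take h(x) = max(x₁, x₂) − 1/√π, which is 1-Lipschitz and centered (E[max(X₁, X₂)] = 1/√π for two independent standard Gaussians); the sample size and the value of t are arbitrary choices:

```python
import math
import random

random.seed(4)

t, trials = 1.0, 100_000
mu = 1.0 / math.sqrt(math.pi)   # E[max(X1, X2)] for independent standard Gaussians

# h(x) = max(x1, x2) - mu is 1-Lipschitz and centered, so (2.15) gives
#   E[exp(t h(X))] <= exp(t^2 / 2).
acc = 0.0
for _ in range(trials):
    h = max(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) - mu
    acc += math.exp(t * h)
est = acc / trials
bound = math.exp(t * t / 2.0)
print(f"E[e^(t h)] ~ {est:.3f} <= e^(t^2/2) = {bound:.3f}")
assert est <= bound
```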

The following lemma gives the crucial step toward proving the BTIS inequality.

Lemma 2.2.4. Let X be a k-dimensional vector of centered, unit-variance, independent Gaussian variables. If h : R^k → R has Lipschitz constant σ, then for all u > 0,

\[
P\big(h(X)-E[h(X)]>u\big)\le e^{-u^{2}/2\sigma^{2}}.
\tag{2.20}
\]

Proof. By considering h̃(x) = h(x)/σ it suffices to prove the result for σ = 1. Assume for the moment that h ∈ C². Then, for every t, u > 0,

\[
P\big(h(X)-E[h(X)]>u\big)\le\int_{\{h(X)-E[h(X)]>u\}}e^{t(h(X)-E[h(X)]-u)}\,dP
\le e^{-tu}\,E\big[e^{t(h(X)-E[h(X)])}\big]\le e^{\frac12 t^{2}-tu},
\tag{2.21}
\]

the last inequality following from (2.15). Taking the optimal choice t = u gives (2.20) for h ∈ C².

To remove the C² assumption, take a sequence of C² approximations of h, each with Lipschitz constant no greater than σ (we recall that if a function h has Lipschitz constant σ, the regularized function Φ(h) has Lipschitz constant at most σ), and apply Fatou's lemma. This completes the proof.
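As an illustration of Lemma 2.2.4, take h = max_i, which has Lipschitz constant 1 with respect to the Euclidean norm. The following hedged Monte Carlo sketch (dimension, sample size and deviation level are arbitrary) compares the empirical tail of the maximum of k i.i.d. standard Gaussians with the bound e^{−u²/2}:

```python
import math
import random

random.seed(0)

k, trials, u = 50, 2000, 1.0

# h(x) = max_i x_i is 1-Lipschitz, so Lemma 2.2.4 gives
#   P(h(X) - E[h(X)] > u) <= exp(-u^2 / 2).
samples = [max(random.gauss(0.0, 1.0) for _ in range(k)) for _ in range(trials)]
mean_max = sum(samples) / trials

empirical = sum(1 for m in samples if m - mean_max > u) / trials
bound = math.exp(-u * u / 2.0)
print(f"empirical tail {empirical:.3f} <= BTIS bound {bound:.3f}")
assert empirical <= bound
```

The bound is far from tight here (a few percent against e^{−1/2} ≈ 0.61), which reflects that (2.20) is a worst-case, dimension-free inequality.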

We now have all we need to prove Theorem 2.2.1.

Proof of Theorem 2.2.1. There will be two stages to the proof. Firstly, we shall establish Theorem 2.2.1 for finite T. We then lift the result from finite to general T.

Thus, let T be finite, so that we can write it as {1, 2, …, k}. In this case we can replace sup by max, which has Lipschitz constant 1. Let C be the k × k covariance matrix of X on the finite set T, with components c_{ij} = E[X_i X_j], so that

\[
\sigma_{T}^{2}=\max_{1\le i\le k}c_{ii}=\max_{1\le i\le k}E[X_{i}^{2}].
\tag{2.22}
\]

Let W be a vector of independent, standard Gaussian variables, and let A be a matrix such that AᵗA = C. Then X and AW are equal in distribution, and so are max_i X_i and max_i (AW)_i. Consider the function h(x) = max_i (Ax)_i, which is Lipschitz. Indeed,

\[
\Big|\max_{i}(Ax)_{i}-\max_{i}(Ay)_{i}\Big|=\Big|\max_{i}(e_{i}Ax)-\max_{i}(e_{i}Ay)\Big|
\le\max_{i}\big|e_{i}A(x-y)\big|\le\max_{i}|e_{i}A|\cdot|x-y|,
\tag{2.23}
\]

where, as usual, e_i is the vector with 1 in position i and zeros elsewhere. The first inequality above is elementary, and the second is Cauchy–Schwarz. But

\[
|e_{i}A|^{2}=e_{i}^{t}A^{t}Ae_{i}=e_{i}^{t}Ce_{i}=c_{ii},
\tag{2.24}
\]

so that

\[
\Big|\max_{i}(Ax)_{i}-\max_{i}(Ay)_{i}\Big|\le\sigma_{T}\,|x-y|.
\tag{2.25}
\]

In view of the equality in distribution of max_i X_i and max_i (AW)_i and Lemma 2.2.4, this establishes the theorem for finite T.

We now turn to lifting the result from finite to general T. For each n > 0, let T_n be a finite subset of T such that T_n ⊂ T_{n+1} and T_n increases to a dense subset of T. By separability,

\[
\sup_{t\in T_{n}}X_{t}\xrightarrow{\ \mathrm{a.s.}\ }\sup_{t\in T}X_{t},
\tag{2.26}
\]

and since the convergence is monotone, we also have that

\[
P\Big(\sup_{t\in T_{n}}X_{t}>u\Big)\to P\Big(\sup_{t\in T}X_{t}>u\Big)
\quad\text{and}\quad
E\Big[\sup_{t\in T_{n}}X_{t}\Big]\to E\Big[\sup_{t\in T}X_{t}\Big].
\tag{2.27}
\]

Since σ²_{T_n} → σ²_T < ∞ (again monotonically), it would be enough to deduce the general version of the BTIS-inequality from the finite-T version if only we knew that the quantity E[sup_{t∈T} X_t] were finite, as claimed in the statement of the theorem. Thus, if we show that the assumed a.s. finiteness of ‖X‖ implies the finiteness of its mean, we shall have a complete proof of both parts of the theorem.

We proceed by contradiction. Thus, assume E[‖X‖] = ∞, and choose u₀ > 0 such that

\[
e^{-u_{0}^{2}/2\sigma_{T}^{2}}\le\frac14
\quad\text{and}\quad
P\Big(\sup_{t\in T}X_{t}<u_{0}\Big)\ge\frac34.
\tag{2.28}
\]

Now choose n ≥ 1 such that E[‖X‖_{T_n}] > 2u₀, which is possible since E[‖X‖_{T_n}] → E[‖X‖_T] = ∞. The BTIS-inequality on the finite space T_n (applied to both ±(‖X‖_{T_n} − E[‖X‖_{T_n}])) then gives

\[
\frac12\ge 2e^{-u_{0}^{2}/2\sigma_{T}^{2}}\ge 2e^{-u_{0}^{2}/2\sigma_{T_{n}}^{2}}
\ge P\Big(\big|\|X\|_{T_{n}}-E[\|X\|_{T_{n}}]\big|>u_{0}\Big)
\ge P\Big(E[\|X\|_{T_{n}}]-\|X\|_{T_{n}}>u_{0}\Big)
\ge P\big(\|X\|_{T_{n}}<u_{0}\big)\ge P\big(\|X\|_{T}<u_{0}\big)\ge\frac34.
\tag{2.29}
\]

This provides the required contradiction, and so we are done.

Chapter 3

Some useful tools for the DGFF

From now until the end of this thesis our object of study is the Discrete Gaussian Free Field (DGFF) on Z^d, with the canonical law P on Ω = R^{Z^d} such that, under P, the canonical field ϕ = (ϕ_x)_{x∈Z^d} is a centered Gaussian field with covariance E[ϕ_x ϕ_y] = G(x, y), for all x, y ∈ Z^d, where G(·, ·) denotes the Green function of the simple random walk on Z^d as defined in (1.36). Again, we will use the same notation for the probabilities P and P as explained in Remark 1.5.1.

3.1 Maximum for the Lattice Gaussian Free Field

We now state a very useful bound for the expectation of the maximum of the DGFF in a fixed bounded subset of Zd.

Proposition 3.1.1. Let ∅ ≠ K ⊂⊂ Z^d. Then there exists a constant c > 0 such that

\[
E\Big[\max_{K}\varphi\Big]\le c\,\sqrt{\log|K|}.
\tag{3.1}
\]

Proof. In order to bound E[max_K ϕ], we write, using Fubini's theorem in the third relation,

\[
E\Big[\max_{K}\varphi\Big]\le E\Big[\max_{K}\varphi^{+}\Big]
=E\Big[\int_{0}^{+\infty}\mathbf{1}_{\{y\le\max_{K}\varphi^{+}\}}\,dy\Big]
=\int_{0}^{+\infty}E\big[\mathbf{1}_{\{y\le\max_{K}\varphi^{+}\}}\big]\,dy
=\int_{0}^{+\infty}P\Big(y\le\max_{K}\varphi^{+}\Big)\,dy
\le A+\int_{A}^{+\infty}P\Big(\max_{K}\varphi^{+}>y\Big)\,dy,
\tag{3.2}
\]

for arbitrary A ≥ 0. Now using the following claim (that we will prove at the end)

\[
P\Big(\max_{K}\varphi^{+}>y\Big)\le|K|\,e^{-y^{2}/2G(0)}
\tag{3.3}
\]

and inserting it into (3.2) yields, for arbitrary A > 0,

\[
E\Big[\max_{K}\varphi\Big]\le A+\int_{A}^{+\infty}|K|\,e^{-y^{2}/2G(0)}\,dy\le A+c\,|K|\,e^{-A^{2}/2G(0)}.
\tag{3.4}
\]

We select A such that e^{−A²/2G(0)} = |K|^{−1} (i.e. A = (2G(0) log |K|)^{1/2}), by which means (3.4) readily implies that

\[
E\Big[\max_{K}\varphi\Big]\le c\,\sqrt{\log|K|},\qquad\text{for all }\varnothing\ne K\subset\subset\mathbb{Z}^{d}.
\tag{3.5}
\]

We now prove the claim (3.3). Recalling that E[ϕ_x²] = G(x, x) = G(0) for all x ∈ Z^d, and using (in the third relation) the translation invariance of the probability P, we easily obtain the bound

\[
P\Big(\max_{K}\varphi^{+}>y\Big)=P\Big(\bigcup_{x\in K}\{\varphi_{x}^{+}>y\}\Big)
\le\sum_{x\in K}P\big(\varphi_{x}^{+}>y\big)=|K|\,P\big(\varphi_{0}>y\big).
\tag{3.6}
\]

Introducing an auxiliary variable ψ ∼ N(0, 1) and recalling that the distribution function satisfies F_{N(μ,σ²)}(x) = F_{N(0,1)}((x−μ)/σ), we have

\[
|K|\,P(\varphi_{0}>y)=|K|\,P\big(\psi>G(0)^{-1/2}y\big).
\tag{3.7}
\]

Using the extended version of Markov's inequality for non-negative increasing functions (see Section 2.1) with f(a) = e^{λa}, we have

\[
P(\psi>a)\le\min_{\lambda>0}e^{-\lambda a}\,E\big[e^{\lambda\psi}\big]=\min_{\lambda>0}e^{-\lambda a+\lambda^{2}/2}=e^{-a^{2}/2},
\tag{3.8}
\]

since the minimum is attained at λ = a. Applying (3.8) to the last term of (3.7) we obtain

\[
|K|\,P\big(\psi>G(0)^{-1/2}y\big)\le|K|\,e^{-y^{2}/2G(0)}.
\tag{3.9}
\]

Combining (3.6), (3.7) and (3.9) we finally obtain

\[
P\Big(\max_{K}\varphi^{+}>y\Big)\le|K|\,e^{-y^{2}/2G(0)}.
\tag{3.10}
\]
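The √(log|K|) growth in (3.1) is easy to observe numerically. The sketch below uses i.i.d. standard Gaussians rather than the DGFF (so no correlations, and unit variance instead of G(0)); it only illustrates that the expected maximum of k Gaussians grows like √(2 log k):

```python
import math
import random

random.seed(1)

def mean_max(k, trials=400):
    # Monte Carlo estimate of E[max of k i.i.d. N(0,1) variables].
    return sum(max(random.gauss(0.0, 1.0) for _ in range(k))
               for _ in range(trials)) / trials

for k in (10, 100, 1000):
    est = mean_max(k)
    ref = math.sqrt(2.0 * math.log(k))
    print(f"k={k}: E[max] ~ {est:.2f}, sqrt(2 log k) = {ref:.2f}, ratio {est / ref:.2f}")
```

The ratio slowly approaches 1 as k grows, matching the order of magnitude given by the proposition.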


3.2 Asymptotics for the Green function

We state here one important and very well-known result regarding the behaviour of the Green function defined in (1.36).

First of all we recall that, due to translation invariance, G(x, y) = G(x − y, 0) =: G(x − y).

Lemma 3.2.1. If d ≥ 3, there exists a constant c(d) > 0 such that, as |x| → ∞,

\[
G(x)\sim c(d)\,|x|^{2-d}.
\tag{3.11}
\]

Proof. See [9], Theorem 1.5.4.

3.3 Potential theory

In this section we will introduce some very useful aspects of potential theory. The main reference for this topic is Chapter 2 §2 of [9].

Given K ⊂⊂ Z^d, we define the escape probability e_K : K → [0, 1] by

\[
e_{K}(x)=P_{x}[\tilde{\tau}_{K}=\infty],\qquad x\in K.
\tag{3.12}
\]

We also define the capacity of K as

\[
\mathrm{cap}(K)=\sum_{x\in K}e_{K}(x).
\tag{3.13}
\]

It immediately follows from the two definitions above that the capacity is a monotone and subadditive set function (see [9], Prop. 2.2.1 for a proof); in particular,

\[
\mathrm{cap}(K)\le\mathrm{cap}(K'),\qquad\text{for all }K\subset K'\subset\subset\mathbb{Z}^{d};
\tag{3.14}
\]

\[
\mathrm{cap}(K)+\mathrm{cap}(K')\ge\mathrm{cap}(K\cup K')+\mathrm{cap}(K\cap K'),\qquad\text{for all }K,K'\subset\subset\mathbb{Z}^{d}.
\tag{3.15}
\]

Further, the probability of entering K may be expressed in terms of e_K(x) (see [13], Theorem 25.1, for a derivation) as

\[
P_{x}(\tau_{K}<\infty)=\sum_{y\in K}G(x,y)\,e_{K}(y),
\tag{3.16}
\]

where G(·, ·) denotes the Green function defined in (1.36). Moreover, the following bounds on P_x(τ_K < ∞), x ∈ Z^d, hold (cf. (1.9) of [14]):

\[
\frac{\sum_{y\in K}G(x,y)}{\sup_{z\in K}\sum_{y\in K}G(z,y)}\le P_{x}(\tau_{K}<\infty)\le\frac{\sum_{y\in K}G(x,y)}{\inf_{z\in K}\sum_{y\in K}G(z,y)}.
\tag{3.17}
\]

Finally, from (3.16) and (3.17), together with classical bounds on the Green function (see Lemma 3.2.1) as |x| → ∞, we obtain

\[
\sup_{z\in K}\Big(\sum_{y\in K}G(z,y)\Big)\ge\frac{|K|}{\mathrm{cap}(K)}\ge\inf_{z\in K}\Big(\sum_{y\in K}G(z,y)\Big).
\tag{3.18}
\]

A trivial consequence of the last inequalities, together with classical bounds on the Green function (cf. Lemma 3.2.1), is the following useful bound for the capacity of a box:

\[
\mathrm{cap}\big(B(0,L)\big)\le cL^{d-2},\qquad\text{for all }L\ge 1.
\tag{3.19}
\]
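The escape probability can be estimated by simulation. For the single point K = {0} in Z³, the definitions (3.12)–(3.13) give cap({0}) = e_{{0}}(0) = P₀(no return to 0), whose value is 1 − q ≈ 0.66, with q ≈ 0.34 the Pólya return probability. The truncated walks in this illustrative sketch (not used anywhere in the text) slightly overestimate it:

```python
import random

random.seed(2)

def escaped(max_steps):
    # One SRW on Z^3 started at the origin; True if it has not returned
    # to the origin within max_steps steps (a proxy for escape).
    x = y = z = 0
    for _ in range(max_steps):
        axis = random.randrange(3)
        step = random.choice((-1, 1))
        if axis == 0:
            x += step
        elif axis == 1:
            y += step
        else:
            z += step
        if x == 0 and y == 0 and z == 0:
            return False
    return True

walks, max_steps = 1500, 2000
est = sum(escaped(max_steps) for _ in range(walks)) / walks
print("estimated escape probability of the origin:", est)
```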

3.4 Recurrent and transient sets: Wiener's Test

We say that a set A ⊂ Z^d is recurrent if P₀(X_k ∈ A for infinitely many k) = 1, and transient otherwise. We now state a very important criterion to determine whether a subset A ⊂ Z^d is recurrent or transient for the symmetric simple random walk on Z^d.

Theorem 3.4.1 (Wiener’s Test). Suppose A ⊂ Zd, (d ≥ 3) and let

An= {z ∈ A; 2n≤ |z|∞< 2n+1}. (3.20)

Then A is a recurrent set if and only if T [A] = ∞ X n=0 cap(An) 2n(d−2) = ∞. (3.21)

Proof. See [9], Theorem 2.2.5.

We finally remark (see [9], p. 57) that the function

\[
f_{A}(x):=P_{x}(\tau_{A}<\infty)
\tag{3.22}
\]

is harmonic for x ∈ A^c, and f_A(x) = 1 for all x ∈ A. Moreover, if A is finite, then

\[
\lim_{|x|\to\infty}f_{A}(x)=0.
\tag{3.23}
\]

3.5 Notation for the DGFF

We now turn to the Gaussian free field on Z^d, using the notation defined at the beginning of this chapter.

Given any subset K ⊂ Z^d, we frequently write ϕ_K to denote the family (ϕ_x)_{x∈K}. We write {ϕ|_K > a} for the event {min{ϕ_x; x ∈ K} > a} and similarly {ϕ|_K < a} instead of {max{ϕ_x; x ∈ K} < a}. Next, we introduce certain crossing events for the Gaussian free field. To this end, we first consider the space Ω̃ = {0, 1}^{Z^d} endowed with its canonical σ-algebra and define, for arbitrary disjoint subsets K, K′ ⊂ Z^d, the event (subset of Ω̃)

\[
\{K\longleftrightarrow K'\}=\{\text{there exists an open path (i.e. along which the configuration has value 1) connecting }K\text{ and }K'\}.
\tag{3.24}
\]

For any level h ∈ R, we introduce the (random) subset of Z^d

\[
E_{\varphi}^{\ge h}=\{x\in\mathbb{Z}^{d};\ \varphi_{x}\ge h\},
\tag{3.25}
\]

and we write φ_h for the measurable map from Ω = R^{Z^d} into Ω̃ = {0, 1}^{Z^d} which sends

\[
\omega\in\Omega\longmapsto\big(\mathbf{1}_{\{\varphi_{x}(\omega)\ge h\}}\big)_{x\in\mathbb{Z}^{d}}\in\tilde{\Omega},
\tag{3.26}
\]

and define

\[
\{K\overset{\ge h}{\longleftrightarrow}K'\}=(\phi_{h})^{-1}\big(\{K\longleftrightarrow K'\}\big)
\tag{3.27}
\]

(a measurable subset of R^{Z^d} endowed with its canonical σ-algebra F), which is the event that K and K′ are connected by a (nearest-neighbour) path in E_ϕ^{≥h}. Note that {K ↔^{≥h} K′} is an increasing event upon introducing on R^{Z^d} the natural partial order (i.e. f ≤ f′ when f_x ≤ f′_x for all x ∈ Z^d).

3.6 Density and uniqueness for the infinite cluster

In this section we investigate the properties of the infinite cluster. In particular we state two very important results, due to C.M. Newman and L.S. Schulman (see [10]) and to R.M. Burton and M. Keane (see [4]), that can be summarized in the following statement:

"If an infinite cluster exists, then it is unique and has positive density with probability one."

All the results stated in this section hold for site percolation models on the d-dimensional cubic lattice with nearest-neighbour bonds, so in this section our notation does not refer to the notation for the DGFF. The models we consider may be defined by a lattice of (site percolation) random variables {X_k; k ∈ Z^d}, where each X_k either takes the value 1 (corresponding to site k occupied) or the value 0 (corresponding to site k not occupied). Such a model may equivalently be defined by the joint distribution P of {X_k}, which is a probability measure on the configuration space Ω = {0, 1}^{Z^d} = {ω = (ω_k)_{k∈Z^d}; ω_k ∈ {0, 1}} (with the standard definition of measurable sets); we assume (without loss of generality) that (Ω, P) is the underlying space, with X_k(ω) = ω_k.

For any j ∈ Z^d we consider the shift operator T_j, which acts either on configurations ω ∈ Ω, on events (i.e., measurable sets) W ⊂ Ω, on measures P on Ω, or on random variables X on Ω, according to the following rules:

\[
(T_{j}\omega)_{k}=\omega_{k-j},\qquad T_{j}W=\{T_{j}\omega:\omega\in W\},\qquad
(T_{j}P)(W)=P(T_{-j}W),\qquad(T_{j}X)(\omega)=X(T_{-j}\omega).
\tag{3.28}
\]

For each k ∈ Z^d and η = 0 or 1 we consider the measure P^η_k on Ω_k = {0, 1}^{Z^d∖{k}}, defined so that for U ⊂ Ω_k,

\[
P_{k}^{\eta}(U)=\frac{P\big(U\times\{\omega_{k}=\eta\}\big)}{P\big(\{\omega_{k}=\eta\}\big)};
\tag{3.29}
\]

P^η_k is the conditional distribution of {X_j; j ≠ k} conditioned on X_k = η.

Finally, throughout this section we assume the following two hypotheses on P, or equivalently on {X_k}:

1. P is translation invariant; i.e., for any j ∈ Z^d, T_j P = P.

2. P has the finite energy property; i.e., for any k ∈ Z^d, P⁰_k and P¹_k are equivalent measures; that is, if U ⊂ Ω_k, η = 0 or 1, and P^η_k(U) ≠ 0, then P^{1−η}_k(U) ≠ 0.

Remark 3.6.1. In the original article of C.M. Newman and L.S. Schulman a third hypothesis is required, namely that P is translation ergodic; i.e., if j ≠ 0 and W is an event such that T_j W = W, then P(W) = 0 or 1. Thanks to the ergodic decomposition theorem we can omit this third hypothesis, as done in the article of R.M. Burton and M. Keane.

Similarly to the definition given in (3.24) for the DGFF, we say that i is connected to j if {i ↔ j} occurs. Moreover, we define C(j), the cluster of j, as C(j) = {i : i is connected to j}; note that C(j) = ∅ if j is not occupied. A set C ⊂ Z^d is called a cluster if C = C(j) for some j, and an infinite cluster if in addition |C| = ∞. The percolation probability is ρ = P(|C(j)| = ∞) (which is independent of j). We define for any F ⊂ Z^d its lower density D̲(F) and upper density D̄(F) as

\[
\underline{D}(F)=\liminf_{n\to\infty}\frac{|F\cap V_{n}|}{n^{d}},\qquad
\overline{D}(F)=\limsup_{n\to\infty}\frac{|F\cap V_{n}|}{n^{d}},
\tag{3.30}
\]

where

\[
V_{n}=\big\{x\in\mathbb{Z}^{d};\ -n/2\le x_{i}<n/2,\ i\in\{1,\dots,d\}\big\}
\tag{3.31}
\]

is the cubic box of side length n in Z^d. F is said to have a density D(F) if D̲(F) = D̄(F). We denote by H the set of all clusters C and define

\[
H_{0}=\{C\in H;\ |C|=\infty\},\qquad N_{0}=|H_{0}|,
\tag{3.32}
\]

\[
F_{0}=\{j\in\mathbb{Z}^{d};\ j\in C\text{ for some }C\in H_{0}\};
\tag{3.33}
\]

H₀ is the set of infinite clusters, N₀ the number of infinite clusters, and F₀ the union of the infinite clusters.

We are now ready to state the two main results of this section.

Theorem 3.6.2 (Newman and Schulman). Exactly one of the following three events has probability 1:

1. N₀ = 0;

2. N₀ = 1;

3. N₀ = ∞.

If N₀ ≠ 0 (P-a.s.), then ρ > 0 and D(F₀) = ρ (P-a.s.).


Chapter 4

The two main results

4.1 Purpose of the thesis

We are interested in the event that the origin lies in an infinite cluster of E_ϕ^{≥h}, which we denote by {0 ↔^{≥h} ∞} (we also denote by C_h(0) = {i : i ↔^{≥h} 0} the cluster containing the origin), and ask for which values of h this event occurs with positive probability. Since

\[
\eta(h):=P\big(0\overset{\ge h}{\longleftrightarrow}\infty\big)
\tag{4.1}
\]

is decreasing in h, it is sensible to define the critical point for level-set percolation as

\[
h_{*}(d)=\inf\{h\in\mathbb{R};\ \eta(h)=0\}\in[-\infty,\infty]
\tag{4.2}
\]

(with the convention inf ∅ = ∞). A non-trivial phase transition is then said to occur if h_* is finite. In the next two sections we will present the proofs of the following two results:

• h_*(d) ≥ 0 for all d ≥ 3 and h_*(3) < ∞, proved by J. Bricmont, J.L. Lebowitz and C. Maes (BLM), see [3];

• h_*(d) < ∞ for all d ≥ 3, proved by P.-F. Rodriguez and A.-S. Sznitman (RS), see [11].

Finally, in Chapter 5 we present a new and original (but incomplete) generalization of the BLM proof of the existence of a phase transition from d = 3 to any d ≥ 3, which gives the same result as Rodriguez and Sznitman with completely different arguments.
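Although the DGFF's long-range correlations are essential to the results above, the basic monotonicity of level-set percolation in h is easy to visualize on a toy field. The sketch below is illustrative only: it uses an i.i.d. Gaussian field on a 2D grid (not the DGFF, and not d ≥ 3) and computes the largest cluster of E^{≥h} for one fixed realization; since E^{≥h} loses sites as h increases, the cluster sizes are non-increasing, mirroring the monotonicity of η(h):

```python
import random
from collections import deque

random.seed(3)
N = 60
# An i.i.d. Gaussian field on an N x N grid -- a stand-in for the DGFF.
field = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(N)]

def largest_cluster(h):
    # Size of the largest nearest-neighbour cluster of {x : field[x] >= h},
    # found by breadth-first search.
    seen = [[False] * N for _ in range(N)]
    best = 0
    for i in range(N):
        for j in range(N):
            if field[i][j] >= h and not seen[i][j]:
                size, q = 0, deque([(i, j)])
                seen[i][j] = True
                while q:
                    a, b = q.popleft()
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if 0 <= x < N and 0 <= y < N and not seen[x][y] and field[x][y] >= h:
                            seen[x][y] = True
                            q.append((x, y))
                best = max(best, size)
    return best

sizes = [largest_cluster(h) for h in (-1.0, 0.0, 1.0)]
print(sizes)
# For a fixed realization, E^{>=h} only loses sites as h grows,
# so the largest cluster size is non-increasing in h.
assert sizes[0] >= sizes[1] >= sizes[2]
```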

4.2 The proof of J. Bricmont, J.L. Lebowitz and C. Maes

Throughout this section we fix d = 3. See Remark 4.2.4 to understand where the proof fails for d ≥ 4.


4.2.1 Definitions and notation

We start this section by introducing some definitions and notation. Let V and Λ be cubes centered at the origin, that is, sets of the type [−α, α]^d for some α ∈ N. In particular we suppose that |V| ≫ |Λ|. Recall that C_h(0) denotes the (random) cluster containing the origin, and define the event

\[
C_{V}^{h}=\big\{\varphi\in\Omega:\ C_{h}(0)\cap\partial_{i}V\ne\varnothing\big\}.
\tag{4.3}
\]

Let S_V be the following collection of subsets of V:

\[
S_{V}=\big\{K\subseteq V:\ 0\in K,\ K\text{ is connected},\ K\cap\partial_{i}V\ne\varnothing\big\}.
\tag{4.4}
\]

Define for a particular K ∈ S_V the event

\[
E_{K}^{h}=\big\{\varphi\in\Omega:\ \varphi_{x}\ge h,\ \forall x\in K\ \text{and}\ \varphi_{x}<h,\ \forall x\in\partial_{V}K\big\},
\tag{4.5}
\]

where ∂_V K = ∂K ∩ V.

Note that the family S_V is a collection of the possible shapes for the infinite cluster C_h(0) inside V (see Figure 4.1).

Figure 4.1: In this example we fix a level h > 0 and we paint in black the sites over the level h. Moreover we highlight the sets V , Λ, K ∈ SV and


4.2.2 The technical lemmas

We are now ready to state the four technical lemmas that we need to prove our statement.

Lemma 4.2.1. Let C_V^h be the event defined in (4.3). Then:

1. C_V^h is the disjoint union of the events E_K^h, i.e.,

\[
C_{V}^{h}=\bigsqcup_{K\in S_{V}}E_{K}^{h},
\tag{4.6}
\]

and if K, K′ ∈ S_V with K ≠ K′, then E_K^h ∩ E_{K′}^h = ∅.

2. P(C_V^h) ≥ η(h) for all V, and P(E_K^h) > 0 for all K ∈ S_V and all finite V.

Proof. We start by proving that C_V^h = ⊔_{K∈S_V} E_K^h. If C_h(0) intersects the inner boundary of V, that is, C_h(0) ∩ ∂_i V ≠ ∅, then the intersection of C_h(0) with V contains a set K ∈ S_V and E_K^h occurs (see Figure 4.1). Note that it is not always true that there exists a set K ∈ S_V such that C_h(0) ∩ V = K, since C_h(0) ∩ V need not be connected. On the other hand, if E_K^h occurs for some K ∈ S_V, then K is a subset of C_h(0) and C_V^h occurs. The events E_K^h are disjoint by definition. Part (2) is obvious.

Lemma 4.2.2. The function x ↦ E[ϕ_x|E_K^h] is harmonic outside K̄ = K ∪ ∂K.

Proof. For all K ⊂⊂ Z^d, applying Lemma 1.5.5 we have, for all x ∈ Z^d ∖ K̄,

\[
E[\varphi_{x}|\mathcal{F}_{\bar K}](\omega)=E_{x}\big[\omega_{X_{\tau_{\bar K}}},\ \tau_{\bar K}<\infty\big],\qquad P\text{-a.s.},
\tag{4.7}
\]

where F_{K̄} = σ(ϕ_x; x ∈ K̄). Obviously E_K^h ∈ F_{K̄}, and in particular we have

\[
E[\varphi_{x}|E_{K}^{h}]=E\big[E_{x}[\varphi_{X_{\tau_{\bar K}}}]\,\big|\,E_{K}^{h}\big].
\tag{4.8}
\]

Using the Markov property and the fact that, if x ∉ K̄ and y ∼ x, then y ∉ K, we have

\[
\frac{1}{2d}\sum_{y\sim x}E_{y}[\varphi_{X_{\tau_{\bar K}}}]
=\sum_{y\sim x}\sum_{k\in\partial K}\frac{1}{2d}\,P_{y}(X_{\tau_{\bar K}}=k)\,\varphi_{k}
=\sum_{k\in\partial K}\varphi_{k}\sum_{y\sim x}P_{x}(X_{1}=y)\,P_{y}(X_{\tau_{\bar K}}=k)
=\sum_{k\in\partial K}\varphi_{k}\,P_{x}(X_{\tau_{\bar K}}=k)
=E_{x}[\varphi_{X_{\tau_{\bar K}}}].
\tag{4.9}
\]

This implies that the function x ↦ E[ϕ_x|E_K^h] is harmonic outside K̄.


Lemma 4.2.3. For some 0 < u ≤ 1 and for all Λ, we can take V = V(Λ) large enough so that

\[
\frac{1}{|\Lambda|}\sum_{x\in\Lambda}f_{K}(x)\ge u,\qquad\text{for all }K\in S_{V},
\tag{4.10}
\]

where f_K(x) is defined in (3.22).

Proof. In this proof we use the same notation as for Wiener's test (see Theorem 3.4.1). The proof of that test implies that, as a set A grows (in the sense of inclusion) in such a way that T[A] → ∞, the function f_A(x) → 1. Therefore, since Λ is a fixed and bounded region in Z³, it is sufficient to show that T[K] can be made arbitrarily large for all K ∈ S_V, with V large enough. One thus has to verify condition (3.21) for an arbitrary infinite connected set A in d = 3. We will do this in two steps: first we reduce the volume of A, and then we show that no set A is worse than a straight line.

By the monotonicity property (3.14), T[A] ≥ T[a], where a ⊂ A is obtained by keeping (in a non-unique but arbitrary fashion), for each i = 0, 1, 2, …, only one point in the intersection of A with the i-th shell = {y : y is on the boundary of the cube of size 2i}. The volume is |a_n| = 2^n, and by (3.18)

\[
\mathrm{cap}(a_{n})\ge 2^{n}/M_{n},
\tag{4.11}
\]

where

\[
M_{n}=\max_{x\in a_{n}}\Big(\sum_{y\in a_{n}}G(x,y)\Big).
\tag{4.12}
\]

Fix x ∈ a_n. Then order the points y ∈ a_n, y ≠ x, according to their distance from x. The k-th point in that order is at a distance at least k from x. Thus, using Lemma 3.2.1, we get

\[
M_{n}\le c\sum_{k=1}^{2^{n}}\frac{1}{k}\le c'\,n.
\tag{4.13}
\]

Combining the inequalities (4.11)–(4.13) we thus get that T[A] ≥ c·∑ 1/n, hence the desired divergence.
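The arithmetic behind this lemma can be made concrete. For a set with one point per shell in d = 3, (4.11)–(4.13) make the Wiener-test terms of order 1/n (a divergent series), while for a lattice axis in d = 4 the Green function is summable along the line, so cap(A_n) is of order |A_n| = 2^n against a denominator 2^{2n}. The sketch below keeps only these orders of magnitude (constants suppressed):

```python
def wiener_term_d3(n):
    # d = 3: cap(a_n) >= 2^n / (c n) by (4.11)-(4.13); denominator 2^{n(d-2)} = 2^n.
    return (2**n / max(n, 1)) / 2**n        # of order 1/n: the series diverges

def wiener_term_d4_axis(n):
    # d = 4, A = a lattice axis: sum_y G(x, y) ~ sum_k k^{-2} stays bounded,
    # so cap(A_n) is of order 2^n, while the denominator is 2^{2n}.
    return 2**n / 2**(2 * n)                # of order 2^{-n}: the series converges

s3 = sum(wiener_term_d3(n) for n in range(1, 40))
s4 = sum(wiener_term_d4_axis(n) for n in range(1, 40))
print(f"d=3 partial sum: {s3:.2f} (keeps growing), d=4 axis: {s4:.6f} (stays < 1)")
```

This is exactly the dichotomy behind the restriction to d = 3 discussed in Remark 4.2.4.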

Remark 4.2.4. We underline that the last proof contains the key point where the reasoning fails for d ≥ 4. The main reason the result is restricted to d = 3 is the use, in the previous lemma, of the fact that T[A] = ∞ for any infinite connected set A. This fails in d ≥ 4, as can be seen explicitly by taking A equal to a lattice axis.

Lemma 4.2.5. For h < ∞ large enough there is a constant c > 0 such that, for all V_n large enough,

\[
E[\varphi_{x}|E_{K}^{h}]\ge c,\qquad\text{for all }x\in K\text{ and all }K\in S_{V_{n}}.
\tag{4.14}
\]

Proof. This lemma is proved in [3], Lemma 3, p. 1264. We are not able to follow the last part of the proof, where Ruelle's superstability estimate is applied.

4.2.3 Conclusion of the proof of the Theorem

Lemma 4.2.2 says that E[ϕ_x|E_K^h] is a harmonic function in Z^d ∖ K̄. Lemma 4.2.5 says that this function is larger than a strictly positive constant c for all x ∈ K, for h large enough, and it vanishes at infinity. Hence, by the domination principle for harmonic functions,

\[
E[\varphi_{x}|E_{K}^{h}]\ge c\cdot f_{K}(x),\qquad\text{for all }x\in\mathbb{Z}^{d}.
\tag{4.15}
\]

For d = 3 we can apply Lemma 4.2.3 and combine it with (4.15): there is a constant ũ > 0 such that for all Λ, we can choose V = V(Λ) large enough that

\[
\frac{1}{|\Lambda|}\sum_{x\in\Lambda}E[\varphi_{x}|E_{K}^{h}]\ge\tilde{u},\qquad\text{for all }K\in S_{V}.
\tag{4.16}
\]

By Lemma 4.2.1, denoting S_Λ := ∑_{x∈Λ} ϕ_x,

\[
E\big[(S_{\Lambda})^{2}\big]\ge E\big[(S_{\Lambda})^{2}\mathbf{1}_{C_{V}^{h}}\big]
=\sum_{K\in S_{V}}E\big[(S_{\Lambda})^{2}\mathbf{1}_{E_{K}}\big]
=\sum_{K\in S_{V}}E\big[(S_{\Lambda})^{2}\,\big|\,E_{K}\big]\,P[E_{K}],
\tag{4.17}
\]

and by the Cauchy–Schwarz inequality,

\[
\ge\sum_{K\in S_{V}}E\big[S_{\Lambda}\,\big|\,E_{K}\big]^{2}\,P[E_{K}]
\ge\sum_{K\in S_{V}}\tilde{u}^{2}|\Lambda|^{2}\,P[E_{K}],
\tag{4.18}
\]

where we used (4.16) for the last inequality. Now, by Lemma 4.2.1 again,

\[
=\tilde{u}^{2}|\Lambda|^{2}\,P[C_{V}^{h}]\ge\tilde{u}^{2}|\Lambda|^{2}\,\eta(h).
\tag{4.19}
\]

Since this chain of inequalities holds for all Λ and E[((1/|Λ|)S_Λ)²] → 0 as Λ → Z³, we obtain that η(h) = 0. This completes the proof.

4.3 The proof of P.-F. Rodriguez and A.-S. Sznitman

The RS proof is based on two key ingredients:

• the "renormalization scheme" (see Section 4.3.1);

• "recursive bounds" for the probabilities of some specific events (see Proposition 4.3.7).


Before analysing the two key ingredients, we state two technical results on the conditional distribution of the DGFF. Lemma 1.5.6 yields a choice of regular conditional distributions for (ϕ_x)_{x∈Z^d} conditioned on the variables (ϕ_x)_{x∈Λ}. Namely, P-almost surely,

\[
P\big((\varphi_{x})_{x\in\mathbb{Z}^{d}}\in\cdot\ \big|\ \mathcal{F}_{\Lambda}\big)
=\tilde{P}\big((\tilde{\varphi}_{x}+u_{x})_{x\in\mathbb{Z}^{d}}\in\cdot\,\big),
\tag{4.20}
\]

where u_x = E_x[ω_{X_{τ_Λ}}, τ_Λ < ∞], x ∈ Z^d, P̃ does not act on (u_x)_{x∈Z^d}, and (ϕ̃_x)_{x∈Z^d} is a centered Gaussian field under P̃, with ϕ̃_x = 0, P̃-almost surely, for x ∈ Λ.

The explicit form of the conditional distribution in (4.20) readily yields the following result, which can be viewed as a consequence of the FKG-inequality for the free field (see for example [8], Appendix B.1.).

Lemma 4.3.1. Let α ∈ R, ∅ ≠ K ⊂⊂ Z^d, and assume A ∈ F (the canonical σ-algebra on R^{Z^d}) is an increasing event. Then

\[
P[A\,|\,\varphi_{K}=\alpha]\le P[A\,|\,\varphi|_{K}\ge\alpha],
\tag{4.21}
\]

where the left-hand side is defined by the version of the conditional expectation in (4.20). Intuitively, augmenting the field can only favour the occurrence of A, an increasing event.

Proof. See [11], Lemma 1.4.

4.3.1 Renormalization scheme

The main goal of this section is to present the renormalization scheme. This technique is one of the main ingredients of the proof of Theorem 4.3.4, and is similar to the one developed by Sznitman and Sidoravicius in the context of random interlacements, see [14] and [12]. This scheme will be used to derive recursive estimates for the probability of certain crossing events, and the resulting bounds constitute the main tool of the proof. We begin by defining on the lattice Z^d a sequence of length scales

\[
L_{n}=l_{0}^{n}L_{0},\qquad\text{for }n\ge 0,
\tag{4.22}
\]

where L₀ ≥ 1 and l₀ ≥ 100 are both assumed to be integers and will be specified below. Hence, L₀ represents the finest scale and L₁ < L₂ < … correspond to increasingly coarse scales. We further introduce the renormalized lattices

\[
\mathbb{L}_{n}=L_{n}\mathbb{Z}^{d}\subset\mathbb{Z}^{d},\qquad n\ge 0,
\tag{4.23}
\]

and note that L_k ⊇ L_n for all 0 ≤ k ≤ n. To each x ∈ L_n, we attach the boxes

\[
B_{n,x}=B_{x}(L_{n}),\qquad n\ge 0,\ x\in\mathbb{L}_{n},
\tag{4.24}
\]

where we define B_x(L) = x + ([0, L) ∩ Z)^d, the box of side length L attached to x, for any x ∈ Z^d and L ≥ 1 (not to be confused with B(x, L)); cf. Figure 4.2 below (note that B_{n,x} is closed only on the left-hand side in every

direction). Moreover, we let

\[
\tilde{B}_{n,x}=\bigcup_{y\in\mathbb{L}_{n}:\,d(B_{n,y},B_{n,x})\le 1}B_{n,y},\qquad n\ge 0,\ x\in\mathbb{L}_{n},
\tag{4.25}
\]

so that {B_{n,x}; x ∈ L_n} defines a partition of Z^d into boxes of side length L_n for all n ≥ 0, and B̃_{n,x}, x ∈ L_n, is simply the union of B_{n,x} and its ∗-neighbouring boxes at level n. Furthermore, for n ≥ 1 and x ∈ L_n, B_{n,x} is the disjoint union of the l₀^d boxes {B_{n−1,y}; y ∈ B_{n,x} ∩ L_{n−1}} at level n − 1 it contains. We also introduce the indexing sets

\[
I_{n}=\{n\}\times\mathbb{L}_{n},\qquad n\ge 0,
\tag{4.26}
\]

and given (n, x) ∈ I_n, n ≥ 1, we consider the sets of labels

\[
H_{1}(n,x)=\{(n-1,y)\in I_{n-1};\ B_{n-1,y}\subset B_{n,x}\ \text{and}\ B_{n-1,y}\cap\partial_{i}B_{n,x}\ne\varnothing\},
\]
\[
H_{2}(n,x)=\{(n-1,y)\in I_{n-1};\ B_{n-1,y}\cap\{z\in\mathbb{Z}^{d};\ d(z,B_{n,x})=\lfloor L_{n}/2\rfloor\}\ne\varnothing\}.
\tag{4.27}
\]

Note that for any two indices (n−1, y_i) ∈ H_i(n, x), i = 1, 2, we have B̃_{n−1,y₁} ∩ B̃_{n−1,y₂} = ∅ and B̃_{n−1,y₁} ∪ B̃_{n−1,y₂} ⊂ B̃_{n,x}. Finally, given x ∈ L_n, n ≥ 0, we introduce Λ_{n,x}, a family of subsets T of ⋃_{0≤k≤n} I_k (soon to be thought of as binary trees), defined as

\[
\Lambda_{n,x}=\Big\{T\subset\bigcup_{k=0}^{n}I_{k};\ T\cap I_{n}=\{(n,x)\}\ \text{and every}\ (k,y)\in T\cap I_{k},\ 0<k\le n,\ \text{has two "descendants"}\ (k-1,y_{i}(k,y))\in H_{i}(k,y),\ i=1,2,\ \text{such that}
\]
\[
T\cap I_{k-1}=\bigcup_{(k,y)\in T\cap I_{k}}\big\{(k-1,y_{1}(k,y)),(k-1,y_{2}(k,y))\big\}\Big\}.
\tag{4.28}
\]

Hence, any T ∈ Λ_{n,x} can naturally be identified with a binary tree having root (n, x) ∈ I_n and depth n. Since |H₁(n, x)| ≤ c₁ l₀^{d−1} and |H₂(n, x)| ≤ c₂ l₀^{d−1}, the following bound on the cardinality of Λ_{n,x} is easily obtained:

\[
|\Lambda_{n,x}|\le\big(c\,l_{0}^{2(d-1)}\big)^{2^{n}-1}\le\big(c_{0}\,l_{0}^{2(d-1)}\big)^{2^{n}},
\tag{4.29}
\]

where c₀ ≥ 1 is a suitable constant.
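The counting behind (4.29) is a simple recursion: each node of the tree chooses its pair of descendants out of at most m² = (c l₀^{d−1})² combinations, so the number of trees satisfies N(n) ≤ m² N(n−1)², whose solution is (m²)^{2^n−1}. The constants below (c = 4, l₀ = 100, d = 3) are illustrative only:

```python
# Hedged numerical check of the tree-counting bound (4.29): with at most
# m = c * l0^(d-1) candidates per descendant, the number of trees obeys
#     N(n) <= m^2 * N(n-1)^2,  N(0) = 1,
# whose solution is N(n) <= (m^2)^(2^n - 1) <= (m^2)^(2^n).
c, l0, d = 4, 100, 3          # illustrative constants, not those of [11]
m = c * l0 ** (d - 1)

N = 1
for n in range(1, 6):
    N = m * m * N * N                      # the recursion, taken with equality
    assert N == (m * m) ** (2 ** n - 1)    # closed form
    assert N <= (m * m) ** (2 ** n)        # the cruder bound of (4.29)
print("bound at depth 5 has", len(str(N)), "decimal digits")
```

The doubly exponential growth in n is harmless because it is later beaten by the doubly exponential decay of the per-tree probability estimates.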


Figure 4.2: Renormalization scheme with n = 2, L0 = 1, l0 = 10.

4.3.2 Crossing events

We now consider the lattice Gaussian free field ϕ = (ϕ_x)_{x∈Z^d} defined in Chapter 1 and introduce the crossing events

\[
A_{n,x}^{h}=\{B_{n,x}\overset{\ge h}{\longleftrightarrow}\partial_{i}\tilde{B}_{n,x}\},\qquad\text{for }n\ge 0,\ x\in\mathbb{L}_{n}.
\tag{4.30}
\]

Three properties of the events A^h_{n,x} will play a crucial role in what follows. Denoting by σ(ϕ_y; y ∈ B̃_{n,x}) the σ-algebra on R^{Z^d} generated by the random variables ϕ_y, y ∈ B̃_{n,x}, we have

\[
A_{n,x}^{h}\in\sigma(\varphi_{y};\ y\in\tilde{B}_{n,x}),
\tag{4.31a}
\]
\[
A_{n,x}^{h}\ \text{is increasing (in }\varphi\text{) (see the discussion below (3.27))},
\tag{4.31b}
\]
\[
A_{n,x}^{h}\supseteq A_{n,x}^{h'},\qquad\text{for all }h,h'\in\mathbb{R}\ \text{with}\ h\le h'.
\tag{4.31c}
\]

Indeed, property (4.31c), that A^h_{n,x} decreases with h, follows since E_ϕ^{≥h} ⊇ E_ϕ^{≥h′} for all h ≤ h′ by definition, cf. (3.25).


4.3.3 The structure of the proof

In this section we present the ideas of P.-F. Rodriguez and A.-S. Sznitman for showing that

h∗(d) < ∞, for all d ≥ 3. (4.32)

To prove (4.32) it is enough to construct an explicit level h with 0 < h < ∞ such that

\[
P\big[B(0,L)\overset{\ge h}{\longleftrightarrow}S(0,L)\big]\ \text{decays in }L,\ \text{as }L\to\infty.
\tag{4.33}
\]

Actually, the proof of P.-F. Rodriguez and A.-S. Sznitman will even show that P[B(0, L) ↔^{≥h} S(0, L)] has stretched exponential decay, which implies a (seemingly) stronger result. A second critical parameter is defined as

\[
h_{**}(d)=\inf\Big\{h\in\mathbb{R};\ \text{for some }\alpha>0,\ \lim_{L\to\infty}L^{\alpha}\,P\big[B(0,L)\overset{\ge h}{\longleftrightarrow}S(0,L)\big]=0\Big\},
\tag{4.34}
\]

and the following stronger statement is proved:

h∗∗(d) < ∞, for all d ≥ 3. (4.35)

For the sake of clarity we investigate the relevance of this second critical parameter later (see Remark 4.3.3), and we now directly prove that h_*(d) < ∞ for all d ≥ 3.

As stated before, it is enough to prove (4.33) and to understand why it implies that h_*(d) < ∞ for all d ≥ 3. To this second end we note that for every h ∈ R, given L ≥ 2L₀, there exists n ≥ 0 such that 2L_n ≤ L ≤ 2L_{n+1} and

\[
\eta(h)=P\big[0\overset{\ge h}{\longleftrightarrow}\infty\big]
\overset{(1)}{\le}P\big[B(0,L)\overset{\ge h}{\longleftrightarrow}S(0,2L)\big]
\overset{(2)}{\le}P\Big[\bigcup_{x\in\mathbb{L}_{n}:\,B_{n,x}\cap S(0,L)\ne\varnothing}\{B_{n,x}\overset{\ge h}{\longleftrightarrow}\partial_{i}\tilde{B}_{n,x}\}\Big].
\tag{4.36}
\]

We now comment on the two inequalities, with the help of some pictures extracted from a simulation of the renormalization scheme realized during this master thesis.

1. Consider a realization of the event {0 ↔^{≥h} ∞}, i.e. a path in E_ϕ^{≥h} connecting 0 to ∞; this path must also connect the box B(0, L) to the sphere S(0, 2L), cf. Figure 4.3 (Step 1) below.

2. Consider a realization of the event {B(0, L) ↔^{≥h} S(0, 2L)}, i.e. a path in E_ϕ^{≥h} connecting B(0, L) to S(0, 2L); this path must also cross the box B_{n,x} for some x ∈ L_n with B_{n,x} ∩ S(0, L) ≠ ∅, and so must connect the box B_{n,x} to the boundary ∂_i B̃_{n,x} of its L_n-neighbourhood, cf. Figure 4.3 (Step 2) below.
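The renormalization scheme turns union bounds like (4.36) into a recursive inequality roughly of the form p_{n+1} ≤ K p_n², where p_n bounds the probability of a crossing at scale L_n and K counts the pairs of well-separated sub-boxes from (4.27) (of order l₀^{2(d−1)}, cf. (4.29)). Once K p₀ < 1, iteration gives doubly exponential decay in n. The sketch below uses hypothetical values of K and p₀ to illustrate this mechanism only:

```python
# Illustrative only: iterate the quadratic recursion p_{n+1} <= K * p_n^2.
# If q_n := K * p_n, then q_{n+1} = q_n^2, so q_n = (K * p_0)^(2^n):
# doubly exponential decay in n as soon as K * p_0 < 1.
K, p0 = 1000.0, 1e-4       # hypothetical constants with K * p0 = 0.1 < 1

p = p0
for n in range(1, 6):
    p = K * p * p
    # closed form check: K * p_n = (K * p_0)^(2^n), up to float round-off
    assert abs(K * p - (K * p0) ** (2 ** n)) < 1e-9 * (K * p) + 1e-300
print("p_5 =", p)
```

Since L_n grows only geometrically, decay doubly exponential in n translates into stretched exponential decay in the spatial scale L_n, which is exactly the form of decay mentioned below (4.33).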

