
Department of Mathematics

Master's Degree Course in Mathematics

Academic Year 2015/2016

Master's Thesis

Random Walks in the Quarter-Plane: a Numerical Approach

Candidate                    Supervisor


Contents

1 Introduction and preliminaries
  1.1 Short summary on Markov chains and processes
    1.1.1 Irreducible classes and classification of states
    1.1.2 Continuous-time Markov processes
  1.2 Random walk model and problem formulation
    1.2.1 Generator matrix
    1.2.2 Functional equations for the invariant measure
  1.3 Geometric measure and induced measure
  1.4 Balance equations
  1.5 Algebraic curves and additional notation
  1.6 Conditions for ergodicity
2 General analysis of the balance equations
  2.1 Re-writing of the balance equations
  2.2 Fundamental linear systems
  2.3 Maximal uncoupled partitions
  2.4 General sketch of the detection algorithm
3 Geometric study of algebraic curves
  3.1 Algebraic functions and branch points
    3.1.1 Location of the branch points
    3.1.2 Analysis of the branches
  3.2 Singular points
  3.3 The curve in the quarter-plane
  3.4 Properties of the other algebraic curves
    3.4.1 Intersection of the algebraic curves
4 The case of non-singular random walks
  4.1 Shape of the set of geometric terms
    4.1.1 Properties of maximal uncoupled partitions
    4.1.2 Starting points of the set of geometric terms
  4.2 Detection algorithm in the non-singular case
    4.2.2 Intersection of algebraic curves
    4.2.3 Embedded computation of the coefficients
    4.2.4 Solving the corner linear system
    4.2.5 Normalizing the induced invariant measure
  4.3 Convergence conditions for the infinite case
  4.4 Interpretation of the conditions for ergodicity
  4.5 Chen's model
5 The case of singular random walks
  5.1 First singular case
  5.2 Second singular case
  5.3 Third singular case
  5.4 Fourth singular case
  5.5 Fifth singular case
6 Approximations with error bounds
  6.1 Markov reward approach
    6.1.1 Error bound result
  6.2 Approximation scheme
    6.2.1 Narrowing the class of functions
    6.2.2 Bounding the bias terms
    6.2.3 A finite linear program
  6.3 Choice of the perturbed random walk
    6.3.1 Choosing the geometric terms
    6.3.2 Choosing the boundary transitions
    6.3.3 Choosing the corner transitions
7 The join the shortest queue model
  7.1 The join the shortest queue model
    7.1.1 Conditions for ergodicity
    7.1.2 Convergence of the series
    7.1.3 Intersection of algebraic curves
    7.1.4 The corner linear system
    7.1.5 Application of the detection algorithm
  7.2 Numerical illustrations
    7.2.1 First JSQ example
    7.2.2 Second JSQ example
Acknowledgements


Introduction and preliminaries

In this thesis we study steady-state performance measures for random walks in the quarter-plane. Random walks in the quarter-plane are frequently used to model queueing problems. At present, several techniques are available to find performance measures for random walks in the quarter-plane. Performance measures can be readily computed once the invariant measure of a random walk is known.

Various approaches to finding the invariant measure of a random walk in the quarter-plane exist. Most notably, methods from complex analysis have been used to obtain the generating function of the invariant measure. The studies of Fayolle et al. [9, 10] and of Cohen and Boxma [8] show that the generating functions of random walks in the quarter-plane can be reduced to solutions of Riemann boundary value problems. Another approach is given in [2], which develops the arithmetic of a special class of quasi-Toeplitz matrices to solve certain semi-infinite quadratic matrix equations efficiently by means of cyclic reduction. However, neither of these approaches leads to a closed-form invariant measure.

In this thesis we focus on obtaining performance measures of two types: exact results and approximations. Random walks whose invariant measure has a product form are a typical example for which exact performance measures can be obtained: such measures can be readily evaluated through tractable closed-form expressions, as for Jackson networks, see, e.g., [19, Chapter 6]. Moving beyond random walks in the quarter-plane with product-form invariant measures, Adan et al. [1] developed the so-called compensation approach to find the stationary distribution of a random walk in the quarter-plane as a sum of infinitely many geometric terms. Boxma et al. [3] applied the compensation approach to a 2 × 2 switch to find the closed-form invariant measure. Clearly, the performance measures of such two-dimensional Markov processes can be obtained exactly as well. However, neither of the two approaches mentioned above characterizes the type of random walks in the quarter-plane whose invariant measure is a sum of finitely many geometric terms. Chen, Boucherie and Goseling [6, 5, 7] characterized the random walks whose invariant measure is a linear combination of geometric measures, under some restrictive hypotheses on the boundary transition probabilities. The performance measures of a random walk whose invariant measure is a sum of geometric terms can be readily computed. For other random walks, a general approximation scheme is developed, in terms of a random walk with a sum-of-geometric-terms invariant measure, obtained by perturbing the transition probabilities along the boundaries of the state space. A Markov reward approach is used to bound the approximation errors.

We further enlarge the class of random walks for which the performance measures can be obtained exactly, by removing the additional hypotheses that Chen et al. placed on the transition probabilities and extending the classification to this more general case. The algorithms that compute the invariant measure and approximate the performance measures of generic random walks are also generalized.

Structure of the thesis

• In Chapter 1 we present the model and the problem in detail, give some technical lemmas and cite previous work on conditions for the ergodicity of the random walk.

• In Chapter 2 we analyse the balance equations that arise when looking for an invariant probability measure that is a sum of geometric terms, under the most general hypotheses possible. Then we present a first general sketch of the detection algorithm, which determines whether the random walk has an invariant probability measure of the requested form and, in that case, computes the coefficients that give its closed form.

• In Chapter 3 we study the algebraic curves that arise when we devise the detection algorithm. We present necessary and sufficient conditions on the transition probabilities for one of those algebraic curves to be irreducible and of maximum degree in both variables (we will then say that the random walk is non-singular). Then we study the algebraic curves, in the non-singular case, from a geometric point of view.

• In Chapter 4 we complete the details of the balance equation analysis and of the detection algorithm for non-singular random walks, with the aid of the geometric properties of the algebraic curves studied in Chapter 3. We find necessary and sufficient conditions, depending on the transition probabilities of the random walk, for the absolute convergence of certain series that arise in the detection algorithm. We give a geometric interpretation of the ergodicity conditions reported in Chapter 1. Finally, we study how the additional hypotheses of the model of Chen et al. simplify our detection algorithm.

• In Chapter 5 we classify all singular random walks, and we develop the application of the detection algorithm to those cases.

• In Chapter 6 we provide a scheme to approximate generic random walks with perturbed random walks whose invariant probability measure is a sum of geometric terms. In particular, this scheme makes it possible to approximate performance measures while computing explicit error bounds.


• In Chapter 7 we present the join the shortest queue model as an example to which we can apply our algorithms. We analyse the application of the detection algorithm to this specific model, and we present some numerical illustrations where we compute its invariant probability measure and some performance measures with arbitrary precision.

1.1 Short summary on Markov chains and processes

Readers who are already familiar with Markov chains and processes can skip this whole section. For all results stated without proof, we refer to the existing literature, for example [14].

Definition 1.1. A stochastic process is a collection of random variables $\{X(t) : t \in T\}$, indexed by some set $T$, defined on a common probability space and taking values in the same mathematical space $S$.

Often the index set T is some subset of the real line, such as the natural numbers or an interval, giving it the interpretation of time.

Definition 1.2. A discrete-time Markov chain is a stochastic process $\{X_t : t \in T\}$ in which the index set is $T = \mathbb{N}$, the set $S$ of the possible values assumed by $X_t$ is countable, and the following Markov property is satisfied:

$$\mathbb{P}\{X_{n+1} = x \mid X_0 = x_0, \dots, X_n = x_n\} = \mathbb{P}\{X_{n+1} = x \mid X_n = x_n\},$$

for any $(n+2)$-tuple of states $x, x_0, \dots, x_n \in S$ and for all times $n \in \mathbb{N}$ such that both conditional probabilities are well-defined, i.e. if $\mathbb{P}\{X_0 = x_0, \dots, X_n = x_n\} > 0$. The set $S$ is the state space of the Markov chain and $\mathbb{P}\{X_{n+1} = x \mid X_n = y\}$ is its transition probability from state $y$ to state $x$, from time $n$ to time $n+1$. If $|S| < \infty$, then the Markov chain is said to be finite, else it is infinite.

In fact, the Markov chains that we will consider satisfy a stronger condition:

Definition 1.3. A Markov chain is time-homogeneous if it satisfies

$$\mathbb{P}\{X_{n+1} = x \mid X_n = y\} = \mathbb{P}\{X_n = x \mid X_{n-1} = y\}$$

for all $x, y \in S$ and all $n \ge 1$, i.e. if the transition probabilities are independent of $n$.

Definition 1.4. Given a time-homogeneous Markov chain with state space $S$, we define its transition matrix as $P := (p_{i,j})_{i,j\in S}$, where $p_{i,j} := \mathbb{P}\{X_{n+1} = j \mid X_n = i\}$.

Depending on the state space $S$, the transition matrix $P$ may not have the form of the matrices one is usually accustomed to. If $|S| = k < \infty$, then it is straightforward to choose a bijection from $S$ to $\{1, \dots, k\}$ and give $P$ the shape of a finite matrix in $\mathbb{R}^{k\times k}$. If $S$ is a countably infinite set, then one possibility is to choose a bijection from $S$ to $\mathbb{N}$ and give $P$ the shape of an infinite matrix in $\mathbb{R}^{\mathbb{N}\times\mathbb{N}}$.



Sometimes, and that will be the case for random walks in the quarter-plane, it is more useful to choose a bijection from $S$ to $\mathbb{N}^2$ and represent $P$ as an infinite $\mathbb{N}\times\mathbb{N}$ block matrix, where the blocks in turn are infinite matrices. In this case, referring to the notation of Figure 1.1, the transition probability from state $(i,j)$ to state $(i',j')$ in $S \cong \mathbb{N}\times\mathbb{N}$ is the $(j,j')$ entry of the $(i,i')$ block. The first and second coordinates are usually called level and phase, respectively.

$$P = \begin{pmatrix} P_{0,0} & P_{0,1} & \cdots \\ P_{1,0} & P_{1,1} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}, \qquad P_{i,i'} = \begin{pmatrix} p_{(i,0),(i',0)} & p_{(i,0),(i',1)} & \cdots \\ p_{(i,1),(i',0)} & p_{(i,1),(i',1)} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} \in \mathbb{R}^{\mathbb{N}\times\mathbb{N}}$$

Figure 1.1: Infinite block matrix with infinite blocks

In any case, transition matrices are always stochastic matrices, i.e. they satisfy $P \ge 0$ and $P\mathbf{1} = \mathbf{1}$, where $\mathbf{0}$ and $\mathbf{1}$ denote the vector (or matrix, depending on the context) made of all zeroes or all ones, respectively, and the inequality is component-wise.

Definition 1.5. Given a time-homogeneous Markov chain with transition matrix $P$, a vector $\pi = (\pi_i)_{i\in S}$ is said to be invariant if $\pi^T P = \pi^T$. If the invariant vector $\pi$ is such that $\pi \ge 0$ and $\pi^T\mathbf{1} = 1$, then it is said to be an invariant probability measure.

Not all Markov chains admit an invariant probability measure: for that, additional hypotheses have to be assumed.

1.1.1 Irreducible classes and classification of states

Definition 1.6. The transition graph of a Markov chain with transition matrix $P$ is the directed graph $(S, E)$, where the vertices are all possible states, and there is an edge from state $x$ to state $y$ if and only if $p_{x,y} > 0$. We say that state $x$ leads to $y$ if there is a walk from $x$ to $y$ in the transition graph. We say that states $x$ and $y$ communicate if each leads to the other. Communication forms an equivalence relation on the states, so we call its equivalence classes irreducible classes. We say that the Markov chain is irreducible if all its states communicate, else it is reducible.

Definition 1.7. Given a Markov chain, we say that a state $x \in S$ is periodic with period $\delta \ge 2$ if all closed walks through $x$ in the transition graph have a length that is a multiple of $\delta$. A state $x \in S$ is aperiodic if it is not periodic.

Definition 1.8. Given a Markov chain, we denote by $T_x$ the time of the first visit to state $x \in S$, without taking into account the state at time 0, i.e. $T_x := \min\{n \ge 1 : X_n = x\}$. Define $f_x := \mathbb{P}\{T_x < \infty \mid X_0 = x\}$, that is, $f_x$ is the probability that, starting from $x$, the Markov chain returns to $x$ in finite time. We then classify state $x$ of the Markov chain as:

• transient if $f_x < 1$;

• recurrent if $f_x = 1$. In this case we also distinguish between:

  – positive recurrent, if the expected return time $\mathbb{E}[T_x \mid X_0 = x]$ is finite;

  – null recurrent, if the expected return time $\mathbb{E}[T_x \mid X_0 = x]$ is infinite.

It can be shown that all states of a single irreducible class of a Markov chain share the same classification, so irreducible classes in turn can be transient, positive recurrent or null recurrent. In particular, if a Markov chain is irreducible, then it is transient, positive recurrent or null recurrent, depending on the classification of its states. Similarly, it can be shown that periodicity is also a class property, i.e. all states of a single irreducible class are aperiodic, or are all periodic with the same period $\delta$. We can then refer to the periodicity of each irreducible class, or even to the periodicity of an irreducible Markov chain.

Theorem 1.9 ([14, Theorem 1.7.7]). An irreducible Markov chain has an invariant probability measure π if and only if it is positive recurrent. If the invariant probability measure exists, then it is unique.

Theorem 1.10 ([14, Theorem 1.8.3]). Consider an irreducible and aperiodic Markov chain $\{X_n\}$ with state space $S$, and suppose that it has an invariant probability measure $\pi$. Then

$$\lim_{n\to\infty} \mathbb{P}\{X_n = x \mid X_0 = y\} = \pi_x \qquad \forall x, y \in S.$$

In other words, for any initial distribution $\lambda \in \mathbb{R}^S$ such that $\lambda \ge 0$ and $\lambda^T\mathbf{1} = 1$, we have $\lim_{n\to\infty} \lambda^T P^n = \pi^T$, where $P$ is the transition matrix.

1.1.2 Continuous-time Markov processes

Definition 1.11. A continuous-time Markov chain, or Markov process, is a stochastic process $\{X(t) : t \in T\}$ in which the index set is $T = [0, +\infty) \subset \mathbb{R}$, the set $S$ of the possible values assumed by $X(t)$ is countable, and the following Markov property is satisfied:

$$\mathbb{P}\{X(t_{n+1}) = x \mid X(t_0) = x_0, \dots, X(t_n) = x_n\} = \mathbb{P}\{X(t_{n+1}) = x \mid X(t_n) = x_n\},$$

for any $(n+2)$-tuple of states $x, x_0, \dots, x_n \in S$ and for all $(n+2)$-tuples of times $0 \le t_0 < \dots < t_{n+1}$ such that both conditional probabilities are well-defined. The state space $S$ is defined as before, and we call transition probabilities the quantities

$$p_{x,y}(s,t) := \mathbb{P}\{X(t) = y \mid X(s) = x\}, \qquad x, y \in S,\ 0 \le s \le t.$$



Definition 1.12. A Markov process is time-homogeneous if it satisfies

$$p_{i,j}(s, t+s) = p_{i,j}(0, t)$$

for all $i, j \in S$ and all $s, t \ge 0$, that is, if the transition probabilities depend only on the states and on the time difference. We will then denote $p_{i,j}(0,t)$ simply by $p_{i,j}(t)$.

From now on, all Markov processes we consider are assumed to be time-homogeneous.

Definition 1.13. A Q-matrix on $S$ is a matrix $Q = (q_{i,j})_{i,j\in S}$ satisfying the following conditions:

1. $-\infty < q_{i,i} \le 0$ for all $i \in S$;
2. $q_{i,j} \ge 0$ for all $i, j \in S$ such that $i \neq j$;
3. $Q\mathbf{1} = 0$.

Definition 1.14. The transition matrix of a Markov process is $P(t) := (p_{i,j}(t))_{i,j\in S}$. The transition rate matrix or generator matrix of a Markov process is

$$Q_{i,j} := \begin{cases} \displaystyle\lim_{h\to0} \frac{p_{i,j}(h)}{h} & \text{for } i \neq j, \\[6pt] -\displaystyle\sum_{j\neq i} Q_{i,j} & \text{for } i = j. \end{cases}$$

Under appropriate regularity hypotheses, it can be shown that, given a Markov process with transition matrix $P(t)$ and generator matrix $Q$, they satisfy the differential equation $P'(t) = QP(t)$, and that $Q$ is a Q-matrix. In particular, the generator matrix is sufficient to uniquely define the behaviour of the Markov process. This suggests the following definition of invariant vector in the continuous-time case:

Definition 1.15. Given a Markov process with generator matrix $Q$, a vector $\pi = (\pi_i)_{i\in S}$ is said to be invariant if $\pi^T Q = 0$. If the invariant vector $\pi$ is such that $\pi \ge 0$ and $\pi^T\mathbf{1} = 1$, then it is said to be an invariant probability measure.

We can define the transition graph, the irreducible classes, transience and positive/null recurrence of Markov processes similarly to what we did for Markov chains. It can then be shown that most results about discrete-time Markov chains remain true in the continuous-time case. Most notably, the following theorems are valid:

Theorem 1.16 ([14, Theorem 3.5.3]). An irreducible Markov process has an invariant probability measure π if and only if it is positive recurrent. If the invariant probability measure exists, then it is unique.

Theorem 1.17 ([14, Theorem 3.6.2]). Consider an irreducible and positive recurrent Markov process $\{X(t)\}$, with state space $S$ and invariant probability measure $\pi$. Then

$$\lim_{t\to\infty} \mathbb{P}\{X(t) = x \mid X(0) = y\} = \pi_x \qquad \forall x, y \in S.$$

Remark 1.18. When we are interested in studying the invariant probability measure, it is immediate to pass from a discrete-time Markov chain to a continuous-time Markov process: since $\pi$ satisfies $\pi^T P = \pi^T$, which is equivalent to $\pi^T(P - I) = 0$, the Markov process with generator matrix $Q = P - I$ (which is trivially a Q-matrix) has the same invariant probability measure.

It is also possible to pass from a continuous-time Markov process to a discrete-time Markov chain. Suppose that the generator matrix $Q$ is such that

$$\exists c > 0 : \quad \sum_{i \neq j} q_{j,i} < c \qquad \forall j \in S.$$

A continuous-time Markov process that satisfies this property is said to be uniformizable. We then define $P_c := c^{-1}Q + I$, and we notice that it is a stochastic matrix, and that $\pi^T Q = 0$ is equivalent to $\pi^T P_c = \pi^T$. In particular, the Markov chain with transition matrix $P_c$ has the same invariant probability measure as the Markov process with generator matrix $Q$. This technique is known as uniformization, and can always be applied when the additional hypothesis $\|Q\| < \infty$ is satisfied. That hypothesis will always hold for the Markov processes that we consider.
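To make Remark 1.18 and uniformization concrete, here is a minimal Python sketch with a hypothetical 3-state generator matrix (purely illustrative, not taken from the thesis); it checks numerically that the uniformized chain has the same invariant probability measure:

```python
import numpy as np

# Hypothetical 3-state generator matrix Q (a Q-matrix: rows sum to zero).
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

c = np.max(-np.diag(Q)) + 1.0      # any c bounding the diagonal entries works
P_c = Q / c + np.eye(3)            # uniformized transition matrix P_c

assert np.allclose(P_c.sum(axis=1), 1.0) and (P_c >= 0).all()  # stochastic

# The invariant measure of P_c is the left eigenvector for eigenvalue 1;
# it satisfies pi^T P_c = pi^T, which is equivalent to pi^T Q = 0.
w, V = np.linalg.eig(P_c.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()
assert np.allclose(pi @ Q, 0.0, atol=1e-10)
```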

1.2 Random walk model and problem formulation

In general, a random walk is a stochastic process that describes a path consisting of a succession of random steps on some mathematical space. In particular, the random walks in the quarter-plane that we will consider are time-homogeneous Markov chains, or Markov processes, whose state space is the lattice of pairs of non-negative integers $S = \mathbb{N}^2$. Since we can always switch between discrete-time Markov chains and continuous-time Markov processes through uniformization (see Remark 1.18), without loss of generality we consider only the latter.

We refer to $S_+ := \mathbb{N}_{>0}^2$, $S_h := \mathbb{N}_{>0}\times\{0\}$, $S_v := \{0\}\times\mathbb{N}_{>0}$ and $S_0 := \{(0,0)\}$ as the interior, the horizontal axis, the vertical axis and the origin of the state space, respectively. The transition rate from state $(i,j) \in S$ to state $(i+s, j+t) \in S$ for $(s,t) \neq 0$ is denoted by $q_{s,t}(i,j)$. Transitions are restricted to the adjoining points (horizontally, vertically and diagonally) and the process is space homogeneous, in the sense that

$$q_{s,t}(i,j) = 0 \quad \text{when } |s| > 1 \text{ or } |t| > 1,$$
$$q_{s,t}(i,j) = q_{s,t} \quad \text{for } (i,j) \in S_+,$$
$$q_{s,t}(i,j) = h_{s,t} \quad \text{for } (i,j) \in S_h,$$
$$q_{s,t}(i,j) = v_{s,t} \quad \text{for } (i,j) \in S_v,$$
$$q_{s,t}(0,0) = r_{s,t},$$

where $q_{s,t}$, $h_{s,t}$, $v_{s,t}$ and $r_{s,t}$ are fixed non-negative numbers for $(s,t) \neq 0$. When we sum these coefficients over the indices $s$ and $t$, we consider $q_{0,0}$, $h_{0,0}$, $v_{0,0}$ and $r_{0,0}$ to be defined as zero. We denote the opposites of the diagonal entries of the generator matrix as

$$\tilde q := \sum_{s,t} q_{s,t}, \qquad \tilde h := \sum_{s,t} h_{s,t}, \qquad \tilde v := \sum_{s,t} v_{s,t}, \qquad \tilde r := \sum_{s,t} r_{s,t}.$$

Since these values are bounded, we can apply the uniformization technique, as said before. The transition rates can be reported in a diagram, as illustrated in Figure 1.2.

Figure 1.2: Transition rates of the random walk

Finally, we assume the Markov process to be irreducible, aperiodic and positive recurrent.

1.2.1 Generator matrix

Since this Markov process is two-dimensional, we can represent its generator matrix as we did for the transition matrix in Figure 1.1, and we can choose which coordinate is the level and which is the phase. In practice, when we write the generator matrix, we can end up with two different matrices. The structure of the model makes both matrices block quasi-Toeplitz tridiagonal, with quasi-Toeplitz tridiagonal blocks.

Second coordinate as level, first coordinate as phase. In this case the infinitesimal generator matrix is

$$G = \begin{pmatrix} B_0 & B_1 & & \\ A_{-1} & A_0 & A_1 & \\ & A_{-1} & A_0 & A_1 \\ & & \ddots & \ddots & \ddots \end{pmatrix},$$

where the blocks are defined as

$$B_0 = \begin{pmatrix} -\tilde r & r_{1,0} & & \\ h_{-1,0} & -\tilde h & h_{1,0} & \\ & h_{-1,0} & -\tilde h & h_{1,0} \\ & & \ddots & \ddots & \ddots \end{pmatrix}, \qquad B_1 = \begin{pmatrix} r_{0,1} & r_{1,1} & & \\ h_{-1,1} & h_{0,1} & h_{1,1} & \\ & h_{-1,1} & h_{0,1} & h_{1,1} \\ & & \ddots & \ddots & \ddots \end{pmatrix},$$

$$A_{-1} = \begin{pmatrix} v_{0,-1} & v_{1,-1} & & \\ q_{-1,-1} & q_{0,-1} & q_{1,-1} & \\ & q_{-1,-1} & q_{0,-1} & q_{1,-1} \\ & & \ddots & \ddots & \ddots \end{pmatrix}, \qquad A_0 = \begin{pmatrix} -\tilde v & v_{1,0} & & \\ q_{-1,0} & -\tilde q & q_{1,0} & \\ & q_{-1,0} & -\tilde q & q_{1,0} \\ & & \ddots & \ddots & \ddots \end{pmatrix},$$

$$A_1 = \begin{pmatrix} v_{0,1} & v_{1,1} & & \\ q_{-1,1} & q_{0,1} & q_{1,1} & \\ & q_{-1,1} & q_{0,1} & q_{1,1} \\ & & \ddots & \ddots & \ddots \end{pmatrix}.$$

First coordinate as level, second coordinate as phase. In this case the infinitesimal generator matrix is

$$G' = \begin{pmatrix} B'_0 & B'_1 & & \\ A'_{-1} & A'_0 & A'_1 & \\ & A'_{-1} & A'_0 & A'_1 \\ & & \ddots & \ddots & \ddots \end{pmatrix},$$

where the blocks are defined as

$$B'_0 = \begin{pmatrix} -\tilde r & r_{0,1} & & \\ v_{0,-1} & -\tilde v & v_{0,1} & \\ & v_{0,-1} & -\tilde v & v_{0,1} \\ & & \ddots & \ddots & \ddots \end{pmatrix}, \qquad B'_1 = \begin{pmatrix} r_{1,0} & r_{1,1} & & \\ v_{1,-1} & v_{1,0} & v_{1,1} & \\ & v_{1,-1} & v_{1,0} & v_{1,1} \\ & & \ddots & \ddots & \ddots \end{pmatrix},$$

$$A'_{-1} = \begin{pmatrix} h_{-1,0} & h_{-1,1} & & \\ q_{-1,-1} & q_{-1,0} & q_{-1,1} & \\ & q_{-1,-1} & q_{-1,0} & q_{-1,1} \\ & & \ddots & \ddots & \ddots \end{pmatrix}, \qquad A'_0 = \begin{pmatrix} -\tilde h & h_{0,1} & & \\ q_{0,-1} & -\tilde q & q_{0,1} & \\ & q_{0,-1} & -\tilde q & q_{0,1} \\ & & \ddots & \ddots & \ddots \end{pmatrix},$$

$$A'_1 = \begin{pmatrix} h_{1,0} & h_{1,1} & & \\ q_{1,-1} & q_{1,0} & q_{1,1} & \\ & q_{1,-1} & q_{1,0} & q_{1,1} \\ & & \ddots & \ddots & \ddots \end{pmatrix}.$$



1.2.2 Functional equations for the invariant measure

In this subsection only, we consider the random walk as a discrete-time Markov chain, rather than a Markov process. Let $\{\mathbf{X}_n = (X_n, Y_n) : n \ge 0\}$ be that Markov chain, and let $\mathbf{x} = (x, y)$ be a vector of complex variables. Let us define the jump generating functions

$$P_r(\mathbf{x}) := \mathbb{E}\big[\mathbf{x}^{\mathbf{X}_{n+1}-\mathbf{X}_n} \mid \mathbf{X}_n = z\big], \qquad z \in S_r,$$

for $r \in \{+, h, v, 0\}$, with the standard notation $\mathbf{x}^{\mathbf{X}} = x^X y^Y$. Since by our assumptions $P_r(\mathbf{x})$ does not depend on $z$, Kolmogorov's equations take the form

$$\mathbb{E}\big[\mathbf{x}^{\mathbf{X}_{n+1}}\big] = \mathbb{E}\big[\mathbf{x}^{\mathbf{X}_n}\mathbf{x}^{\mathbf{X}_{n+1}-\mathbf{X}_n}\big] = \sum_r \mathbb{E}\big[\mathbf{x}^{\mathbf{X}_n}\mathbf{1}\{\mathbf{X}_n \in S_r\}\big]\,P_r(\mathbf{x}). \tag{1.1}$$

Let $\mathbf{X} = (X, Y)$ be a random variable distributed as the invariant probability measure $\pi$, and let us define the generating functions

$$\Phi_r(\mathbf{x}) := \mathbb{E}\big[\mathbf{x}^{\mathbf{X}}\mathbf{1}\{\mathbf{X} \in S_r\}\big] = \sum_{z\in S_r} \pi_z \mathbf{x}^z,$$

for $r \in \{+, h, v, 0\}$. Taking the limit $n \to \infty$ in (1.1), we get the basic equation

$$\sum_r \big(1 - P_r(\mathbf{x})\big)\Phi_r(\mathbf{x}) = 0. \tag{1.2}$$

Suppose now that the discrete-time Markov chain was obtained from a continuous-time Markov process through uniformization (see Remark 1.18), with a constant $c > 0$. Then we can write the jump generating functions as

$$P_+(x,y) = \frac{1}{c}\left(\sum_{s=-1}^{1}\sum_{t=-1}^{1} q_{s,t}x^sy^t - \tilde q\right) + 1, \qquad P_h(x,y) = \frac{1}{c}\left(\sum_{s=-1}^{1}\sum_{t=0}^{1} h_{s,t}x^sy^t - \tilde h\right) + 1,$$

$$P_v(x,y) = \frac{1}{c}\left(\sum_{s=0}^{1}\sum_{t=-1}^{1} v_{s,t}x^sy^t - \tilde v\right) + 1, \qquad P_0(x,y) = \frac{1}{c}\left(\sum_{s=0}^{1}\sum_{t=0}^{1} r_{s,t}x^sy^t - \tilde r\right) + 1.$$


In particular, if we define the polynomials

$$Q(x,y) := \sum_{s=-1}^{1}\sum_{t=-1}^{1} q_{s,t}x^{1+s}y^{1+t} - \tilde q\,xy, \qquad H(x,y) := \sum_{s=-1}^{1}\sum_{t=0}^{1} h_{s,t}x^{1+s}y^{t} - \tilde h\,x,$$

$$V(x,y) := \sum_{s=0}^{1}\sum_{t=-1}^{1} v_{s,t}x^{s}y^{1+t} - \tilde v\,y, \qquad R(x,y) := \sum_{s=0}^{1}\sum_{t=0}^{1} r_{s,t}x^{s}y^{t} - \tilde r,$$

then equation (1.2) is equivalent to

$$Q(x,y)\Pi_+(x,y) + H(x,y)\Pi_h(x) + V(x,y)\Pi_v(y) + R(x,y)\pi_{0,0} = 0, \tag{1.3}$$

where

$$\Pi_+(x,y) := \frac{\Phi_+(x,y)}{xy} = \sum_{i,j\ge0} \pi_{i+1,j+1}x^iy^j, \quad \Pi_h(x) := \frac{\Phi_h(x)}{x} = \sum_{i\ge0} \pi_{i+1,0}x^i, \quad \Pi_v(y) := \frac{\Phi_v(y)}{y} = \sum_{j\ge0} \pi_{0,j+1}y^j. \tag{1.4}$$

It is immediate to check that $\Pi_+(x,y)$, $\Pi_h(x)$ and $\Pi_v(y)$ are well-defined and holomorphic in the region $|x|, |y| < 1$. The following theorem also holds:

Theorem 1.19 ([10, Theorem 1.3.1]). For the irreducible aperiodic random walk to be ergodic, it is necessary and sufficient that there exist $\Pi_+(x,y)$, $\Pi_h(x)$ and $\Pi_v(y)$ holomorphic in $|x|, |y| < 1$, and a constant $\pi_{0,0} \in \mathbb{C}\setminus\{0\}$, satisfying the fundamental equation (1.3), together with the $\ell^1$-condition

$$\sum_{i,j\ge0} |\pi_{i,j}| < \infty,$$

where the $\pi_{i,j}$ for $(i,j) \neq 0$ are the coefficients of the Taylor series of the holomorphic functions at 0, as in (1.4). In this case $\Pi_+(x,y)$, $\Pi_h(x)$, $\Pi_v(y)$ and $\pi_{0,0}$ are unique up to a common complex scalar, and there exists $c \in \mathbb{C}$ such that $c\pi = |\pi| = (|\pi_{i,j}|)_{i,j\in\mathbb{N}}$ is the invariant probability measure of the random walk.

1.3 Geometric measure and induced measure



Definition 1.20. Any two non-negative real numbers $\rho, \sigma$ define a geometric measure over $S$ of the form $m(i,j) = \rho^i\sigma^j$. We represent the geometric measure $\rho^i\sigma^j$ by its coordinates $(\rho, \sigma)$ in $[0, +\infty)^2$.

Definition 1.21. Let $\Gamma$ be a countable subset of $[0,+\infty)^2$, and let the following sets be defined:

$$\Gamma_+ := \Gamma \cap (0,+\infty)^2, \qquad \Gamma_x := \{\rho > 0 : (\rho, 0) \in \Gamma\}, \qquad \Gamma_y := \{\sigma > 0 : (0, \sigma) \in \Gamma\}.$$

If $\alpha : \Gamma_+ \to \mathbb{R}$, $\beta : \Gamma_y \to \mathbb{R}$ and $\gamma : \Gamma_x \to \mathbb{R}$ are real functions, and $\delta \in \mathbb{R}$ is a real number, then the measure induced by $(\Gamma, \alpha, \beta, \gamma, \delta)$ is

$$m(i,j) := \begin{cases} \displaystyle\sum_{(\rho,\sigma)\in\Gamma_+} \alpha(\rho,\sigma)\rho^i\sigma^j & \text{for } i, j \ge 1, \\[6pt] \displaystyle\sum_{\rho\in\Gamma_x} \gamma(\rho)\rho^i & \text{for } i \ge 1 \text{ and } j = 0, \\[6pt] \displaystyle\sum_{\sigma\in\Gamma_y} \beta(\sigma)\sigma^j & \text{for } j \ge 1 \text{ and } i = 0, \\[6pt] \delta & \text{for } i = j = 0. \end{cases}$$

Given $\Gamma \subseteq [0,+\infty)^2$, we also define the sets

$$\Gamma^\sigma_x := \{\rho \in \Gamma_x : (\rho,\sigma) \in \Gamma_+\} \quad \forall\sigma\in\Gamma_y, \qquad \Gamma^\rho_y := \{\sigma \in \Gamma_y : (\rho,\sigma) \in \Gamma_+\} \quad \forall\rho\in\Gamma_x.$$

Assumption 1.22. We make the following assumptions on $\Gamma$:

(i) $(0,0) \notin \Gamma$;
(ii) $\forall(\rho,\sigma)\in\Gamma_+$: $(\rho,0), (0,\sigma) \in \Gamma$;
(iii) $\forall(\rho,\sigma)\in\Gamma_+$: $\alpha(\rho,\sigma) \neq 0$;
(iv) $\forall\rho\in\Gamma_x$, if there is no $(\rho,\sigma)\in\Gamma_+$, then $\gamma(\rho) \neq 0$;
(v) likewise, $\forall\sigma\in\Gamma_y$, if there is no $(\rho,\sigma)\in\Gamma_+$, then $\beta(\sigma) \neq 0$.

Remark 1.23. We justify these assumptions by noticing that the following actions never change the induced measure $m(i,j)$:

(i) since $(0,0)$ is never part of $\Gamma_+$, $\Gamma_x$ or $\Gamma_y$, we can remove it from $\Gamma$;
(ii) we can always add points of the form $(\rho,0)$ to $\Gamma$ and set $\gamma(\rho) = 0$; likewise for points of the form $(0,\sigma)$;
(iii) if $(\rho,\sigma)\in\Gamma_+$ and $\alpha(\rho,\sigma) = 0$, we can remove $(\rho,\sigma)$ from $\Gamma$;
(iv) if $\rho\in\Gamma_x$, $\Gamma^\rho_y = \emptyset$ and $\gamma(\rho) = 0$, we can remove $(\rho,0)$ from $\Gamma$ without violating point (ii);
(v) as in the previous point.

We are interested in studying the cases where the Markov process has an invariant probability measure of the previous form. When $\Gamma$ is a finite set, it can easily be shown that $\Gamma \subseteq [0,1)^2$ is a necessary condition for the induced measure to be a probability measure. When $\Gamma$ is countably infinite, we make some additional assumptions:

Assumption 1.24. We assume that $\Gamma \subseteq [0,1)^2$ and

$$\sum_{(\rho,\sigma)\in\Gamma_+} |\alpha(\rho,\sigma)|\,\frac{\rho}{1-\rho}\,\frac{\sigma}{1-\sigma} < \infty, \tag{1.5a}$$

$$\sum_{\rho\in\Gamma_x} |\gamma(\rho)|\,\frac{\rho}{1-\rho} < \infty, \qquad \sum_{\sigma\in\Gamma_y} |\beta(\sigma)|\,\frac{\sigma}{1-\sigma} < \infty. \tag{1.5b}$$

Remark 1.25. It can easily be verified that Assumption 1.24 is equivalent to the absolute convergence of the series

$$\sum_{i\ge1}\sum_{j\ge1}\sum_{(\rho,\sigma)\in\Gamma_+} \alpha(\rho,\sigma)\rho^i\sigma^j, \qquad \sum_{i\ge1}\sum_{\rho\in\Gamma_x} \gamma(\rho)\rho^i, \qquad \sum_{j\ge1}\sum_{\sigma\in\Gamma_y} \beta(\sigma)\sigma^j,$$

which is stronger than what is necessary for the measure induced by $(\Gamma, \alpha, \beta, \gamma, \delta)$ to be the invariant probability measure. This condition implies in particular that

$$\sum_{(\rho,\sigma)\in\Gamma_+} |\alpha(\rho,\sigma)|\rho^i\sigma^j < \infty, \qquad \sum_{\rho\in\Gamma_x} |\gamma(\rho)|\rho^i < \infty, \qquad \sum_{\sigma\in\Gamma_y} |\beta(\sigma)|\sigma^j < \infty,$$

for all $i, j \ge 1$, which will be needed to swap some summation symbols.

We disclose in advance that, if $\Gamma$ is a countably infinite set, then we can order

$$\Gamma_+ = \{(\rho_n, \sigma_n) : n \in \mathbb{N}\}, \qquad \Gamma_x = \{\rho'_n : n \in \mathbb{N}\}, \qquad \Gamma_y = \{\sigma'_n : n \in \mathbb{N}\},$$

in such a way that $(\rho_n, \sigma_n) \downarrow 0$, $\rho'_n \downarrow 0$ and $\sigma'_n \downarrow 0$. In particular, there exists $N \in \mathbb{N}$ such that $\rho_n, \sigma_n, \rho'_n, \sigma'_n < 1/2$ for all $n \ge N$, so we can write

$$\sum_{n\ge N} |\alpha(\rho_n,\sigma_n)|\,\frac{\rho_n}{1-\rho_n}\,\frac{\sigma_n}{1-\sigma_n} \le 4\sum_{n\ge N} |\alpha(\rho_n,\sigma_n)|\rho_n\sigma_n,$$

$$\sum_{n\ge N} |\gamma(\rho'_n)|\,\frac{\rho'_n}{1-\rho'_n} \le 4\sum_{n\ge N} |\gamma(\rho'_n)|\rho'_n, \qquad \sum_{n\ge N} |\beta(\sigma'_n)|\,\frac{\sigma'_n}{1-\sigma'_n} \le 4\sum_{n\ge N} |\beta(\sigma'_n)|\sigma'_n,$$

which means that the absolute convergence of $\sum \alpha(\rho,\sigma)\rho\sigma$, $\sum \gamma(\rho)\rho$ and $\sum \beta(\sigma)\sigma$, plus $\Gamma \subseteq [0,1)^2$, is in fact sufficient to satisfy Assumption 1.24.



1.4 Balance equations

A vector $\pi \in \mathbb{R}^S$ is the invariant probability measure of the random walk if and only if $\pi \ge 0$, $\pi^T\mathbf{1} = 1$ and $\pi^T G = 0$, where $G$ is the generator matrix of the random walk. The equation involving the generator matrix is equivalent to the following balance equations, for all $i, j \ge 2$:

$$\tilde q\,\pi_{i,j} = \sum_{s=-1}^{1}\sum_{t=-1}^{1} q_{s,t}\pi_{i-s,j-t}, \tag{1.6a}$$

$$\tilde q\,\pi_{i,1} = \sum_{s=-1}^{1}\sum_{t=-1}^{0} q_{s,t}\pi_{i-s,1-t} + \sum_{s=-1}^{1} h_{s,1}\pi_{i-s,0}, \tag{1.6b}$$

$$\tilde q\,\pi_{1,j} = \sum_{s=-1}^{0}\sum_{t=-1}^{1} q_{s,t}\pi_{1-s,j-t} + \sum_{t=-1}^{1} v_{1,t}\pi_{0,j-t}, \tag{1.6c}$$

$$\tilde h\,\pi_{i,0} = \sum_{s=-1}^{1} q_{s,-1}\pi_{i-s,1} + \sum_{s=-1}^{1} h_{s,0}\pi_{i-s,0}, \tag{1.6d}$$

$$\tilde v\,\pi_{0,j} = \sum_{t=-1}^{1} q_{-1,t}\pi_{1,j-t} + \sum_{t=-1}^{1} v_{0,t}\pi_{0,j-t}, \tag{1.6e}$$

$$\tilde q\,\pi_{1,1} = \sum_{s=-1}^{0}\sum_{t=-1}^{0} q_{s,t}\pi_{1-s,1-t} + h_{-1,1}\pi_{2,0} + h_{0,1}\pi_{1,0} + v_{1,-1}\pi_{0,2} + v_{1,0}\pi_{0,1} + r_{1,1}\pi_{0,0}, \tag{1.6f}$$

$$\tilde h\,\pi_{1,0} = q_{0,-1}\pi_{1,1} + q_{-1,-1}\pi_{2,1} + h_{-1,0}\pi_{2,0} + v_{1,-1}\pi_{0,1} + r_{1,0}\pi_{0,0}, \tag{1.6g}$$

$$\tilde v\,\pi_{0,1} = q_{-1,0}\pi_{1,1} + q_{-1,-1}\pi_{1,2} + v_{0,-1}\pi_{0,2} + h_{-1,1}\pi_{1,0} + r_{0,1}\pi_{0,0}, \tag{1.6h}$$

$$\tilde r\,\pi_{0,0} = q_{-1,-1}\pi_{1,1} + h_{-1,0}\pi_{1,0} + v_{0,-1}\pi_{0,1}. \tag{1.6i}$$

Another way to obtain these conditions is through the functional equation (1.3): if we define

$$\mathcal{F}(x,y) = \sum_{i,j\ge0} f_{i,j}(\pi)x^iy^j := Q(x,y)\Pi_+(x,y) + H(x,y)\Pi_h(x) + V(x,y)\Pi_v(y) + R(x,y)\pi_{0,0},$$

then each balance equation is equivalent to $f_{i,j}(\pi) = 0$ for a certain $(i,j) \in \mathbb{N}^2$.

Remark 1.26. If $\pi \in \mathbb{R}^S$ satisfies $\|\pi\|_1 < \infty$ and all balance equations but one, then the missing one is satisfied too. This is easy to show with the functional equation interpretation: if all balance equations but one are satisfied, then $\mathcal{F}(x,y) = f_{i,j}(\pi)x^iy^j$ for a certain $(i,j) \in \mathbb{N}^2$. Since $\|\pi\|_1 < \infty$, $\mathcal{F}(1,1)$ is well-defined, and $\mathcal{F}(1,1) = 0$, because $Q(1,1) = H(1,1) = V(1,1) = R(1,1) = 0$. But also $\mathcal{F}(1,1) = f_{i,j}(\pi)$, so the last balance equation is verified too.


We will now provide some tools useful for manipulating the balance equations.

Lemma 1.27. If $|\Gamma_+| < \infty$, then for all $N \in \mathbb{N}$ there exist $m, n \ge N$ such that the values $\rho^m\sigma^n$, for $(\rho,\sigma) \in \Gamma_+$, are all distinct.

Proof. Consider all distinct pairs $((\rho,\sigma), (\rho_1,\sigma_1)) \in \Gamma_+\times\Gamma_+$, and choose wlog $N \ge 1$: the condition $\rho^i\sigma^j = \rho_1^i\sigma_1^j$ would imply

$$\left(\frac{\rho}{\rho_1}\right)^i = \left(\frac{\sigma_1}{\sigma}\right)^j. \tag{1.7}$$

If we show that, for any fixed distinct pair and for each $i \ge N$, there are only finitely many $j \ge N$ such that condition (1.7) is satisfied, then we can choose any $i \ge N$, and one $j \ge N$ such that condition (1.7) is not satisfied by any pair in $\Gamma_+$. For fixed $i \ge N$, we separate two cases:

• if $\sigma = \sigma_1$, then condition (1.7) would imply $\rho = \rho_1$, which contradicts the distinctness of the pair, so there is no such $j$;

• if $\sigma \neq \sigma_1$, then there is at most one choice of $j$ that satisfies condition (1.7).

In any case, for every distinct pair and any choice of $i \ge N$ there is at most one $j \ge N$ such that $(i,j)$ satisfies condition (1.7). This concludes the proof.

Actually, we proved more than what was requested by the statement of the lemma: for any choice of $m \ge 1$ and all but finitely many choices of $n \ge 1$, the terms $\rho^m\sigma^n$, for $(\rho,\sigma) \in \Gamma_+$, are all distinct. Swapping the roles of $i$ and $j$ in the proof, the same can be said for any choice of $n \ge 1$ and all but finitely many choices of $m \ge 1$.

We will use the previous lemma together with the following well-known fact to manipulate the balance equations in the finite case:

Remark 1.28. Let $\lambda_1, \dots, \lambda_k$ be distinct real numbers, and suppose $x_1, \dots, x_k$ satisfy

$$\lambda_1^d x_1 + \dots + \lambda_k^d x_k = 0, \qquad \text{for } d = 0, \dots, k-1.$$

This is a linear system with a Vandermonde matrix. Since the $\lambda_i$ are all distinct, the Vandermonde matrix is non-singular, so the only solution of the system is $x_1 = \dots = x_k = 0$.
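Remark 1.28 is easy to verify numerically; the short sketch below (with arbitrarily chosen distinct nodes) checks that the Vandermonde matrix has full rank, so the homogeneous system only admits the trivial solution:

```python
import numpy as np

lam = np.array([0.2, 0.5, 0.9])              # distinct real numbers
V = np.vander(lam, increasing=True).T        # V[d, i] = lam_i ** d
assert np.linalg.matrix_rank(V) == len(lam)  # nonsingular: only x = 0 works
```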

We will also need a similar result for the infinite case, but it requires some more work. For that, we cite the following theorem from [4], without providing its proof:

Theorem 1.29. Consider a finite, real-valued Borel measure $\mu$ over $\mathbb{R}$ with compact support. If

$$\int_{\mathbb{R}} P(x)\,d\mu(x) = 0$$

for all polynomials $P$, then $\mu = 0$.

With this theorem, we can prove the following lemma, which can be interpreted as a generalization of Remark 1.28 to the countably infinite case.



Lemma 1.30. Let $(\lambda_k)_{k\in\mathbb{N}}$ be a bounded sequence of distinct real numbers, and let $(x_k)_{k\in\mathbb{N}}$ be a sequence of real numbers such that $\sum_{k\ge0}|x_k| < \infty$. If $\sum_{k\ge0}\lambda_k^d x_k = 0$ for all $d \ge 0$, then $x_k = 0$ for all $k \in \mathbb{N}$.

Proof. Consider the atomic measure $\mu : \mathcal{P}(\mathbb{R}) \to \mathbb{R}$ such that

$$\mu(A) := \sum_{k\ge0} x_k\,\mathbf{1}\{\lambda_k \in A\}.$$

It can easily be seen that $\mu$ is a finite real-valued Borel measure with compact support, and that

$$\int_{\mathbb{R}} y^d\,d\mu(y) = \sum_{k\ge0} \lambda_k^d x_k = 0 \qquad \forall d \ge 0.$$

In particular $\int P(y)\,d\mu(y) = 0$ for all polynomials $P$, so we can apply Theorem 1.29 and notice that $\mu = 0$ implies $x_k = 0$ for all $k$.

In the infinite case, Lemma 1.27 is not necessarily satisfied, so we need to make the following assumption:

Assumption 1.31. We assume that for all $(\rho_0, \sigma_0) \in \Gamma_+$ there exist $m, n \ge 1$ such that $\rho^m\sigma^n \neq \rho_0^m\sigma_0^n$ for all $(\rho,\sigma) \in \Gamma_+\setminus\{(\rho_0,\sigma_0)\}$.

Lemma 1.32. Suppose $\eta : \Gamma_+ \to \mathbb{R}$ is a real function such that $\sum_{\Gamma_+}|\eta(\rho,\sigma)| < \infty$ and $\sum_{\Gamma_+}\rho^i\sigma^j\eta(\rho,\sigma) = 0$ for all $i, j \ge N$, for some $N \in \mathbb{N}$: then $\eta \equiv 0$.

Proof. Let $(\rho_0,\sigma_0) \in \Gamma_+$, let $m, n \ge 1$ be as in Assumption 1.31 and let $\lambda_0 := \rho_0^m\sigma_0^n$. Define for all $\lambda \in \mathbb{R}$ the set $\Gamma_\lambda := \{(\rho,\sigma) \in \Gamma_+ : \rho^m\sigma^n = \lambda\}$: we can then write

$$\lambda_0^d\,\eta(\rho_0,\sigma_0)(\rho_0\sigma_0)^N + \sum_{\lambda\neq\lambda_0} \lambda^d \sum_{\Gamma_\lambda} \eta(\rho,\sigma)(\rho\sigma)^N = 0 \qquad \forall d \ge 0,$$

and apply Lemma 1.30 (or simply Remark 1.28, if $|\Gamma_+| < \infty$), obtaining $\eta(\rho_0,\sigma_0) = 0$. We conclude that $\eta \equiv 0$.

1.5 Algebraic curves and additional notation

It will be useful to define the following algebraic curves, associated with the random walk:

$$Q(x,y) := \sum_{s=-1}^{1}\sum_{t=-1}^{1} q_{s,t}x^{1-s}y^{1-t} - \tilde q\,xy, \qquad H(x,y) := \sum_{s=-1}^{1}\sum_{t=0}^{1} h_{s,t}x^{1-s}y^{1-t} - \tilde h\,xy,$$

$$V(x,y) := \sum_{s=0}^{1}\sum_{t=-1}^{1} v_{s,t}x^{1-s}y^{1-t} - \tilde v\,xy.$$

These algebraic curves are strongly related to the functional equation (1.3), since $Q(x,y) = x^2y^2\,Q(\tfrac1x,\tfrac1y)$, $H(x,y) = x^2y\,H(\tfrac1x,\tfrac1y)$ and $V(x,y) = xy^2\,V(\tfrac1x,\tfrac1y)$, where the right-hand sides involve the polynomials defined in Subsection 1.2.2. It can also be helpful to define the following polynomials:

$$q^h_{-1}(x) := q_{-1,-1}x^2 + q_{0,-1}x + q_{1,-1}, \qquad q^v_{-1}(y) := q_{-1,-1}y^2 + q_{-1,0}y + q_{-1,1},$$
$$q^h_0(x) := q_{-1,0}x^2 - \tilde q\,x + q_{1,0}, \qquad q^v_0(y) := q_{0,-1}y^2 - \tilde q\,y + q_{0,1},$$
$$q^h_1(x) := q_{-1,1}x^2 + q_{0,1}x + q_{1,1}, \qquad q^v_1(y) := q_{1,-1}y^2 + q_{1,0}y + q_{1,1},$$
$$h_0(x) := h_{-1,0}x^2 - \tilde h\,x + h_{1,0}, \qquad v_0(y) := v_{0,-1}y^2 - \tilde v\,y + v_{0,1},$$
$$h_1(x) := h_{-1,1}x^2 + h_{0,1}x + h_{1,1}, \qquad v_1(y) := v_{1,-1}y^2 + v_{1,0}y + v_{1,1}.$$

This way, we can write the algebraic curves in different ways:

$$Q(x,y) = q^h_{-1}(x)y^2 + q^h_0(x)y + q^h_1(x) = q^v_{-1}(y)x^2 + q^v_0(y)x + q^v_1(y),$$
$$H(x,y) = h_0(x)y + h_1(x), \qquad V(x,y) = v_0(y)x + v_1(y).$$

We also add another couple of algebraic curves, now easier to define thanks to the previous polynomials:

$$H_1(x,y) := q^h_1(x)h_0(x) + y\,q^h_{-1}(x)h_1(x), \qquad V_1(x,y) := q^v_1(y)v_0(y) + x\,q^v_{-1}(y)v_1(y).$$

Sometimes we will improperly denote the zero locus $\{(x,y) \in \mathbb{R}^2 : Q(x,y) = 0\}$ simply by $Q$, and likewise for the other algebraic curves.

It can easily be noticed that, for geometric measures of the form $m(i,j) = \lambda\rho^i\sigma^j$, balance equation (1.6a) is equivalent to $(\rho,\sigma) \in Q$.

1.6 Conditions for ergodicity

Definition 1.33. We define the mean jump vectors

$$M = (M_x, M_y) := \left(\sum_{s,t} s\,q_{s,t},\ \sum_{s,t} t\,q_{s,t}\right),$$
$$M^h = (M^h_x, M^h_y) := \left(\sum_{s,t} s\,h_{s,t},\ \sum_{s,t} t\,h_{s,t}\right),$$
$$M^v = (M^v_x, M^v_y) := \left(\sum_{s,t} s\,v_{s,t},\ \sum_{s,t} t\,v_{s,t}\right).$$



The following ergodicity conditions hold. We do not provide a proof of this result, but a probabilistic proof can be found in [11], and a purely analytic one is presented in [10].

Theorem 1.34. Suppose the random walk is irreducible and aperiodic, and that $M \neq 0$. The random walk is positive recurrent if and only if one of the following three conditions holds:

(i) $M_x < 0$, $M_y < 0$, $M_xM^h_y - M_yM^h_x < 0$ and $M_yM^v_x - M_xM^v_y < 0$;
(ii) $M_x < 0$, $M_y \ge 0$ and $M_yM^v_x - M_xM^v_y < 0$;
(iii) $M_x \ge 0$, $M_y < 0$ and $M_xM^h_y - M_yM^h_x < 0$.

Fayolle et al. in [10] also provide the following result:

Theorem 1.35. Suppose the random walk is irreducible and aperiodic, and that $M = 0$. The random walk is positive recurrent if and only if $M^h_x < 0$, $M^v_y < 0$ and

$$q^v_1(1)\,\frac{M^h_y}{M^h_x} + q^h_1(1)\,\frac{M^v_x}{M^v_y} > q_{1,1} + q_{-1,-1} - q_{-1,1} - q_{1,-1}.$$
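The drift conditions of Theorem 1.34 are straightforward to check mechanically. Here is a minimal Python sketch, assuming the rates are stored as dictionaries mapping steps $(s,t)$ to non-negative rates (an encoding chosen here for illustration):

```python
def drift(d):
    """Mean jump vector (sum_s s*rate, sum_t t*rate) of a rate dictionary."""
    Mx = sum(s * rate for (s, t), rate in d.items())
    My = sum(t * rate for (s, t), rate in d.items())
    return Mx, My

def is_positive_recurrent(q, h, v):
    """Check the conditions of Theorem 1.34 (case M != 0); assumes the walk
    is irreducible and aperiodic."""
    Mx, My = drift(q)
    Mhx, Mhy = drift(h)
    Mvx, Mvy = drift(v)
    assert (Mx, My) != (0, 0), "Theorem 1.34 requires M != 0"
    if Mx < 0 and My < 0:
        return Mx * Mhy - My * Mhx < 0 and My * Mvx - Mx * Mvy < 0
    if Mx < 0 and My >= 0:
        return My * Mvx - Mx * Mvy < 0
    if Mx >= 0 and My < 0:
        return Mx * Mhy - My * Mhx < 0
    return False  # Mx >= 0 and My >= 0: none of (i)-(iii) can hold
```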


General analysis of the balance equations

2.1 Re-writing of the balance equations

Given a random walk in the quarter-plane, we are looking for an invariant probability measure π, induced by a certain 5-tuple (Γ, α, β, γ, δ). In this case, the balance equations can be written in simpler forms.

Lemma 2.1. The induced measure π satisfies balance equation (1.6a) if and only if Q(ρ, σ) = 0 for all (ρ, σ) ∈ Γ+, that is Γ+ ⊆ Q.

Proof. Balance equation (1.6a) can be written as

$$\sum_{s,t} q_{s,t} \sum_{\Gamma_+} \alpha(\rho,\sigma)\rho^{i-s}\sigma^{j-t} - \tilde q \sum_{\Gamma_+} \alpha(\rho,\sigma)\rho^i\sigma^j = 0, \qquad \forall i, j \ge 2.$$

Thanks to Assumption 1.24, this is equivalent to

$$\sum_{\Gamma_+} \alpha(\rho,\sigma)Q(\rho,\sigma)\rho^{i+1}\sigma^{j+1} = 0, \qquad \forall i, j \ge 0. \tag{2.1}$$

By applying Lemma 1.32 with $\eta(\rho,\sigma) := \alpha(\rho,\sigma)Q(\rho,\sigma)\rho\sigma$, we deduce that

$$\alpha(\rho,\sigma)Q(\rho,\sigma)\rho\sigma = 0 \qquad \forall(\rho,\sigma) \in \Gamma_+.$$

Since we assumed $\alpha(\rho,\sigma) \neq 0$, then $Q(\rho,\sigma) = 0$ for all $(\rho,\sigma) \in \Gamma_+$.

Lemma 2.2. If the induced measure $\pi$ satisfies balance equation (1.6a), then balance equations (1.6b) and (1.6d) are equivalent to

$$h_1(\rho)\gamma(\rho) - q^h_1(\rho)\sum_{\sigma\in\Gamma^\rho_y} \alpha(\rho,\sigma) = 0 \qquad \forall\rho\in\Gamma_x \tag{2.2}$$

and

$$h_0(\rho)\gamma(\rho) + q^h_{-1}(\rho)\sum_{\sigma\in\Gamma^\rho_y} \sigma\,\alpha(\rho,\sigma) = 0 \qquad \forall\rho\in\Gamma_x, \tag{2.3}$$

respectively. Likewise, balance equations (1.6c) and (1.6e) are equivalent to

$$v_1(\sigma)\beta(\sigma) - q^v_1(\sigma)\sum_{\rho\in\Gamma^\sigma_x} \alpha(\rho,\sigma) = 0 \qquad \forall\sigma\in\Gamma_y \tag{2.4}$$

and

$$v_0(\sigma)\beta(\sigma) + q^v_{-1}(\sigma)\sum_{\rho\in\Gamma^\sigma_x} \rho\,\alpha(\rho,\sigma) = 0 \qquad \forall\sigma\in\Gamma_y, \tag{2.5}$$

respectively.

Proof. Thanks to Assumption 1.24, balance equation (1.6b) can be written as

$$\sum_{\rho\in\Gamma_x} \rho^{i+1}\left(\gamma(\rho)h_1(\rho) + \sum_{\sigma\in\Gamma^\rho_y} \alpha(\rho,\sigma)\big(q^h_{-1}(\rho)\sigma^2 + q^h_0(\rho)\sigma\big)\right) = 0 \qquad \forall i \ge 0.$$

By applying Lemma 1.30 or Remark 1.28, we get

$$\gamma(\rho)h_1(\rho) + \sum_{\sigma\in\Gamma^\rho_y} \alpha(\rho,\sigma)\big(q^h_{-1}(\rho)\sigma^2 + q^h_0(\rho)\sigma\big) = 0 \qquad \forall\rho\in\Gamma_x,$$

and since $q^h_{-1}(\rho)\sigma^2 + q^h_0(\rho)\sigma = Q(\rho,\sigma) - q^h_1(\rho) = -q^h_1(\rho)$ for all $(\rho,\sigma) \in \Gamma_+$, we obtain equation (2.2). Balance equation (1.6d) can likewise be written as

$$\sum_{\rho\in\Gamma_x} \rho^{i+1}\left(h_0(\rho)\gamma(\rho) + q^h_{-1}(\rho)\sum_{\sigma\in\Gamma^\rho_y} \alpha(\rho,\sigma)\sigma\right) = 0 \qquad \forall i \ge 0,$$

which is equivalent to (2.3). It can be shown in the same way that balance equations (1.6c) and (1.6e) are equivalent to (2.4) and (2.5), respectively.

Before continuing, we spend a moment to find out how the previous lemmas affect Assumption 1.24. First of all, we notice that for all $\rho \in \Gamma_x$ we have

$$\sum_{\Gamma^\rho_y} |\alpha(\rho,\sigma)|\sigma = \frac{1}{\rho}\sum_{\Gamma^\rho_y} |\alpha(\rho,\sigma)|\rho\sigma < \infty,$$

and likewise $\sum_{\Gamma^\sigma_x} |\alpha(\rho,\sigma)|\rho < \infty$ for all $\sigma \in \Gamma_y$. Since $\Gamma_+ \subseteq (0,1)^2$, necessarily $\sum_{\Gamma_+}|\alpha(\rho,\sigma)|\rho^i\sigma^j < \infty$ for all $i, j \ge 1$, but what about the cases where $ij = 0$? We notice that

$$\sum_{\Gamma_+} |\alpha(\rho,\sigma)|\big(q_{-1,-1}\rho^2\sigma^2 + q_{-1,0}\rho^2\sigma + q_{0,-1}\rho\sigma^2 - \tilde q\,\rho\sigma\big) =$$
$$= \sum_{\Gamma_+} |\alpha(\rho,\sigma)|\big(Q(\rho,\sigma) - q_{1,-1}\sigma^2 - q_{1,0}\sigma - q_{1,1} - q_{0,1}\rho - q_{-1,1}\rho^2\big) =$$
$$= -\sum_{\Gamma_+} |\alpha(\rho,\sigma)|\big(q_{1,-1}\sigma^2 + q_{1,0}\sigma + q_{1,1} + q_{0,1}\rho + q_{-1,1}\rho^2\big)$$

converges, so $\sum_{\Gamma_+} q^h_1(\rho)|\alpha(\rho,\sigma)|$ and $\sum_{\Gamma_+} q^v_1(\sigma)|\alpha(\rho,\sigma)|$ converge as well. Then we also have $q^h_1(\rho)\sum_{\Gamma^\rho_y}|\alpha(\rho,\sigma)| < \infty$ for all $\rho \in \Gamma_x$ and $q^v_1(\sigma)\sum_{\Gamma^\sigma_x}|\alpha(\rho,\sigma)| < \infty$ for all $\sigma \in \Gamma_y$.

If we apply Lemma 2.2 in a similar way, we find out that

$$\sum_{\rho\in\Gamma_x} \left(|\gamma(\rho)|\big(h_{-1,1}\rho^2 + h_{0,1}\rho\big) - \sum_{\sigma\in\Gamma^\rho_y} \alpha(\rho,\sigma)\big(q_{-1,1}\rho^2 + q_{0,1}\rho\big)\right) =$$
$$= \sum_{\rho\in\Gamma_x} \left(|\gamma(\rho)|\big(h_1(\rho) - h_{1,1}\big) - \sum_{\sigma\in\Gamma^\rho_y} \alpha(\rho,\sigma)\big(q^h_1(\rho) - q_{1,1}\big)\right) =$$
$$= \sum_{\rho\in\Gamma_x} \left(-h_{1,1}|\gamma(\rho)| + q_{1,1}\sum_{\sigma\in\Gamma^\rho_y} \alpha(\rho,\sigma)\right)$$

converges. Since $q_{1,1}\sum_{\Gamma_+}|\alpha(\rho,\sigma)|$ converges, so does $h_{1,1}\sum_{\Gamma_x}|\gamma(\rho)|$. The same is true for $v_{1,1}\sum_{\Gamma_y}|\beta(\sigma)|$. In conclusion, this can be summarized with the following lemma:

Lemma 2.3. Suppose that Assumption 1.24 and balance equations (1.6a), (1.6b) and (1.6c) are satisfied. Then:

(i) $q_{1,1} = 0$ or $\sum_{\Gamma_+}|\alpha(\rho,\sigma)| < \infty$;
(ii) $q_{0,1} = 0$ or $\sum_{\Gamma_+}|\alpha(\rho,\sigma)|\rho < \infty$;
(iii) $q_{1,0} = 0$ or $\sum_{\Gamma_+}|\alpha(\rho,\sigma)|\sigma < \infty$;
(iv) $q_{-1,1} = 0$ or $\sum_{\Gamma_+}|\alpha(\rho,\sigma)|\rho^2 < \infty$;
(v) $q_{1,-1} = 0$ or $\sum_{\Gamma_+}|\alpha(\rho,\sigma)|\sigma^2 < \infty$;
(vi) $h_{1,1} = 0$ or $\sum_{\Gamma_x}|\gamma(\rho)| < \infty$;
(vii) $v_{1,1} = 0$ or $\sum_{\Gamma_y}|\beta(\sigma)| < \infty$;
(viii) $\sum_{\Gamma^\rho_y}|\alpha(\rho,\sigma)|\sigma < \infty$ and $q^h_1(\rho)\sum_{\Gamma^\rho_y}|\alpha(\rho,\sigma)| < \infty$ for all $\rho \in \Gamma_x$.

In fact, we will show that $|\Gamma_+| = \infty$ can happen only if $q_{1,1} = q_{0,1} = q_{1,0} = 0$.

Lemma 2.4. If the induced measure $\pi$ satisfies balance equations (1.6a), (1.6b) and (1.6c), then balance equation (1.6f) is equivalent to

$$q_{1,1}\sum_{\Gamma_+} \alpha(\rho,\sigma) - h_{1,1}\sum_{\Gamma_x} \gamma(\rho) - v_{1,1}\sum_{\Gamma_y} \beta(\sigma) + r_{1,1}\delta = 0. \tag{2.6}$$

Proof. Balance equation (1.6f) can be written as

$$\sum_{\Gamma_+} \alpha(\rho,\sigma)\big(Q(\rho,\sigma) - q^h_1(\rho) - q^v_1(\sigma) + q_{1,1}\big) + \sum_{\Gamma_x} \gamma(\rho)\big(h_1(\rho) - h_{1,1}\big) + \sum_{\Gamma_y} \beta(\sigma)\big(v_1(\sigma) - v_{1,1}\big) + r_{1,1}\delta = 0.$$

Because of Lemma 2.3, we can split and recombine the summations, getting

$$\sum_{\Gamma_+} \alpha(\rho,\sigma)\big(Q(\rho,\sigma) + q_{1,1}\big) - \sum_{\Gamma_x} \left(q^h_1(\rho)\sum_{\Gamma^\rho_y} \alpha(\rho,\sigma) - \big(h_1(\rho) - h_{1,1}\big)\gamma(\rho)\right) - \sum_{\Gamma_y} \left(q^v_1(\sigma)\sum_{\Gamma^\sigma_x} \alpha(\rho,\sigma) - \big(v_1(\sigma) - v_{1,1}\big)\beta(\sigma)\right) + r_{1,1}\delta = 0.$$

By applying Lemma 2.1 and Lemma 2.2, we then obtain equation (2.6).

Lemma 2.5. If the induced measure $\pi$ satisfies balance equation (1.6d), then balance equation (1.6g) is equivalent to

$$v_{1,-1}\sum_{\sigma\in\Gamma_y} \beta(\sigma)\sigma - \sum_{\rho\in\Gamma_x} \left(q_{1,-1}\sum_{\sigma\in\Gamma^\rho_y} \alpha(\rho,\sigma)\sigma + h_{1,0}\gamma(\rho)\right) + r_{1,0}\delta = 0. \tag{2.7}$$

Likewise, if balance equation (1.6e) is satisfied, then balance equation (1.6h) becomes

$$h_{-1,1}\sum_{\rho\in\Gamma_x} \gamma(\rho)\rho - \sum_{\sigma\in\Gamma_y} \left(q_{-1,1}\sum_{\rho\in\Gamma^\sigma_x} \alpha(\rho,\sigma)\rho + v_{0,1}\beta(\sigma)\right) + r_{0,1}\delta = 0. \tag{2.8}$$

Proof. Balance equation (1.6g) can be written as

$$\sum_{\rho\in\Gamma_x} \left(\sum_{\sigma\in\Gamma^\rho_y} \sigma\,\alpha(\rho,\sigma)\big(q_{-1,-1}\rho^2 + q_{0,-1}\rho\big) + \gamma(\rho)\big(h_{-1,0}\rho^2 - \tilde h\,\rho\big)\right) + v_{1,-1}\sum_{\sigma\in\Gamma_y} \beta(\sigma)\sigma + r_{1,0}\delta = 0.$$

By applying (2.3), this equation reduces to equation (2.7). The same can be done for balance equation (1.6h), to get equation (2.8).

In the previous lemma, one could be tempted to write

$$q_{1,-1}\sum_{\Gamma_+} \alpha(\rho,\sigma)\sigma + h_{1,0}\sum_{\Gamma_x} \gamma(\rho)$$

instead of

$$\sum_{\Gamma_x} \left(q_{1,-1}\sum_{\Gamma^\rho_y} \alpha(\rho,\sigma)\sigma + h_{1,0}\gamma(\rho)\right)$$

in equation (2.7) (and likewise for equation (2.8)). This can always be done in the finite case, but in the infinite case the convergence of $\sum_{\Gamma_+} q_{1,-1}\alpha(\rho,\sigma)\sigma$ and $\sum_{\Gamma_x} h_{1,0}\gamma(\rho)$ does not hold in general.

We prove the following uniqueness result, which further justifies Assumption 1.22:

Theorem 2.6. Let $\pi$ be a measure induced by $(\Gamma, \alpha, \beta, \gamma, \delta)$. The representation is unique in the sense that, if $\pi$ is also induced by $(\tilde\Gamma, \tilde\alpha, \tilde\beta, \tilde\gamma, \tilde\delta)$, and both $\Gamma$ and $\tilde\Gamma$ satisfy Assumption 1.22, then $(\Gamma, \alpha, \beta, \gamma, \delta) = (\tilde\Gamma, \tilde\alpha, \tilde\beta, \tilde\gamma, \tilde\delta)$.

Proof. Since

$$\pi_{i,j} = \sum_{\Gamma_+} \alpha(\rho,\sigma)\rho^i\sigma^j = \sum_{\tilde\Gamma_+} \tilde\alpha(\rho,\sigma)\rho^i\sigma^j \qquad \forall i, j \ge 1,$$

we must have

$$\sum_{\Gamma_+\cap\tilde\Gamma_+} \big(\alpha(\rho,\sigma) - \tilde\alpha(\rho,\sigma)\big)\rho^i\sigma^j + \sum_{\Gamma_+\setminus\tilde\Gamma_+} \alpha(\rho,\sigma)\rho^i\sigma^j - \sum_{\tilde\Gamma_+\setminus\Gamma_+} \tilde\alpha(\rho,\sigma)\rho^i\sigma^j = 0$$

for all $i, j \ge 1$. By applying Lemma 1.32 as done in the proof of Lemma 2.1, we deduce that:

• $\alpha(\rho,\sigma) = 0$ for all $(\rho,\sigma) \in \Gamma_+\setminus\tilde\Gamma_+$, meaning that $\Gamma_+\setminus\tilde\Gamma_+ = \emptyset$;
• $\tilde\alpha(\rho,\sigma) = 0$ for all $(\rho,\sigma) \in \tilde\Gamma_+\setminus\Gamma_+$, meaning that also $\tilde\Gamma_+\setminus\Gamma_+ = \emptyset$, so $\Gamma_+ = \tilde\Gamma_+$;
• $\alpha(\rho,\sigma) = \tilde\alpha(\rho,\sigma)$ for all $(\rho,\sigma) \in \Gamma_+\cap\tilde\Gamma_+$. Since $\Gamma_+ = \tilde\Gamma_+$, we have $\alpha = \tilde\alpha$.

In a similar manner, from the identity

$$\pi_{i,0} = \sum_{\Gamma_x} \gamma(\rho)\rho^i = \sum_{\tilde\Gamma_x} \tilde\gamma(\rho)\rho^i \qquad \forall i \ge 1,$$

we obtain

$$\sum_{\Gamma_x\cap\tilde\Gamma_x} \big(\gamma(\rho) - \tilde\gamma(\rho)\big)\rho^i + \sum_{\Gamma_x\setminus\tilde\Gamma_x} \gamma(\rho)\rho^i - \sum_{\tilde\Gamma_x\setminus\Gamma_x} \tilde\gamma(\rho)\rho^i = 0 \qquad \forall i \ge 1,$$

and then:

• $\gamma(\rho) = 0$ for all $\rho \in \Gamma_x\setminus\tilde\Gamma_x$. In particular, if $\rho \in \Gamma_x\setminus\tilde\Gamma_x$, then there exists $(\rho,\sigma) \in \Gamma_+$, because of the assumptions made on $\Gamma$. Since $\Gamma_+ = \tilde\Gamma_+$, that would imply also $\rho \in \tilde\Gamma_x$, which is impossible, so we deduce $\Gamma_x\setminus\tilde\Gamma_x = \emptyset$;
• $\tilde\gamma(\rho) = 0$ for all $\rho \in \tilde\Gamma_x\setminus\Gamma_x$. As above, we deduce $\tilde\Gamma_x\setminus\Gamma_x = \emptyset$, so $\Gamma_x = \tilde\Gamma_x$;
• $\gamma(\rho) = \tilde\gamma(\rho)$ for all $\rho \in \Gamma_x\cap\tilde\Gamma_x$. Since $\Gamma_x = \tilde\Gamma_x$, we have $\gamma = \tilde\gamma$.

In the same way, we can prove $\Gamma_y = \tilde\Gamma_y$ and $\beta = \tilde\beta$. Since $\delta = \tilde\delta$ is trivial, we can conclude that the two 5-tuples coincide.

2.2 Fundamental linear systems

Consider a random walk, and suppose that its invariant probability measure $\pi$ is induced by $(\Gamma, \alpha, \beta, \gamma, \delta)$. Moreover, suppose that $\Gamma$ is known: we want to determine $\alpha$, $\beta$, $\gamma$ and $\delta$. By combining balance equations (2.2) and (2.3), for all $\rho \in \Gamma_x$ we can set up the following linear system:

$$\begin{cases} h_1(\rho)\gamma(\rho) - q^h_1(\rho)\displaystyle\sum_{\sigma\in\Gamma^\rho_y} \alpha(\rho,\sigma) = 0, \\[4pt] h_0(\rho)\gamma(\rho) + q^h_{-1}(\rho)\displaystyle\sum_{\sigma\in\Gamma^\rho_y} \sigma\,\alpha(\rho,\sigma) = 0. \end{cases} \tag{2.9}$$

It will be useful to note that, by subtracting the first equation multiplied by $h_0(\rho)$ from the second equation multiplied by $h_1(\rho)$, we obtain the necessary condition

$$\sum_{\sigma\in\Gamma^\rho_y} H_1(\rho,\sigma)\,\alpha(\rho,\sigma) = 0 \qquad \forall\rho\in\Gamma_x. \tag{2.10}$$

The cardinality of $\Gamma^\rho_y$ determines how linear system (2.9) should be solved:

• If $|\Gamma^\rho_y| = 0$, then $h_1(\rho)\gamma(\rho) = h_0(\rho)\gamma(\rho) = 0$, which reduces to $h_1(\rho) = h_0(\rho) = 0$ because of the fourth point of Assumption 1.22. Since $\rho \in (0,1)$, this can happen only if $h_1 \equiv 0$, from which $h_0(x) = (x-1)(h_{-1,0}x - h_{1,0})$, so we must have $\rho = h_{1,0}/h_{-1,0}$.

• If $|\Gamma^\rho_y| = 1$, then the system consists of two equations in two unknowns, so it can have a non-trivial solution if and only if it is singular. Its determinant is equal to $H_1(\rho,\sigma)$, so we must have $(\rho,\sigma) \in Q \cap H_1$. This could also have been deduced from equation (2.10).

• If $|\Gamma^\rho_y| = 2$, then the system is of the form

$$\begin{pmatrix} h_1(\rho) & -q^h_1(\rho) & -q^h_1(\rho) \\ h_0(\rho) & \sigma_1 q^h_{-1}(\rho) & \sigma_2 q^h_{-1}(\rho) \end{pmatrix} \begin{pmatrix} \gamma(\rho) \\ \alpha(\rho,\sigma_1) \\ \alpha(\rho,\sigma_2) \end{pmatrix} = 0.$$

We must look for non-trivial solutions, that is, solutions that satisfy $\alpha(\rho,\sigma_1) \neq 0$ and $\alpha(\rho,\sigma_2) \neq 0$.

• The case $|\Gamma^\rho_y| \ge 3$ is possible only if the algebraic curve $Q(x,y)$ is reducible. Indeed, the equation

$$Q(\rho,y) = q^h_{-1}(\rho)y^2 + q^h_0(\rho)y + q^h_1(\rho) = 0$$

has more than two solutions in $y$ if and only if $q^h_{-1}(\rho) = q^h_0(\rho) = q^h_1(\rho) = 0$. This is a very specific case, since it would imply $q^h_{-1} \equiv 0$ and $q^h_1 \equiv 0$, and we will consider it later.

Likewise, for all $\sigma \in \Gamma_y$ we can set up the linear system

$$\begin{cases} v_1(\sigma)\beta(\sigma) - q^v_1(\sigma)\displaystyle\sum_{\rho\in\Gamma^\sigma_x} \alpha(\rho,\sigma) = 0, \\[4pt] v_0(\sigma)\beta(\sigma) + q^v_{-1}(\sigma)\displaystyle\sum_{\rho\in\Gamma^\sigma_x} \rho\,\alpha(\rho,\sigma) = 0, \end{cases} \tag{2.11}$$

and make similar considerations, switching $(\rho, \sigma, h_i, q^h_i, \Gamma^\rho_y, H_1)$ and $(\sigma, \rho, v_i, q^v_i, \Gamma^\sigma_x, V_1)$. We call (2.9) and (2.11) the fundamental linear systems.

Our remarks concerning the case $|\Gamma^\rho_y| = 0$ are important enough to be extracted into the following lemma:

Lemma 2.7. If there exists $\rho \in \Gamma_x$ such that $\Gamma^\rho_y = \emptyset$, then $h_1 \equiv 0$, $0 < h_{1,0} < h_{-1,0}$ and $\rho = h_{1,0}/h_{-1,0}$. In particular, there is at most one $\rho \in \Gamma_x$ satisfying that condition. The same is true for $\sigma \in \Gamma_y$ such that $\Gamma^\sigma_x = \emptyset$, with the obvious modifications.

Theorem 2.8. Consider a random walk and a 5-tuple $(\Gamma, \alpha, \beta, \gamma, \delta)$, and define the sets $\Gamma_+$, $\Gamma_x$, $\Gamma_y$, $\Gamma^\rho_y$ and $\Gamma^\sigma_x$ as before. Suppose that the following statements are true:

(i) $\Gamma \subseteq [0,1)^2$ and $\Gamma_+ \subseteq Q$;
(ii) linear system (2.9) is satisfied for all $\rho \in \Gamma_x$;
(iii) linear system (2.11) is satisfied for all $\sigma \in \Gamma_y$;
(iv) balance equations (2.6), (2.7) and (2.8) are satisfied;
(v) Assumption 1.24 and Assumption 1.31 are satisfied.

Then the measure $m(i,j)$ induced by $(\Gamma, \alpha, \beta, \gamma, \delta)$ is proportional to the invariant probability measure of the random walk. This theorem holds even if the 5-tuple does not satisfy Assumption 1.22.

Proof. Suppose first that Assumption 1.22 is satisfied: because of the lemmas of Section 2.1, all balance equations but one are verified, so the last one is as well (see Remark 1.26). Because of Assumption 1.24 and $\Gamma \subseteq [0,1)^2$, $m(i,j)$ is well-defined and finite for all $i, j \ge 0$, $\sum_{i,j\ge0}|m(i,j)| < \infty$ and, by defining $\pi_{0,0} = \delta$ and

$$\Pi_+(x,y) := \sum_{i,j\ge0} m(i+1,j+1)x^iy^j, \quad \Pi_h(x) := \sum_{i\ge0} m(i+1,0)x^i, \quad \Pi_v(y) := \sum_{j\ge0} m(0,j+1)y^j,$$

we can apply Theorem 1.19 and conclude that $m(i,j)$ is proportional to the invariant probability measure.

Suppose now that Assumption 1.22 is not satisfied: by manipulating $(\Gamma, \alpha, \beta, \gamma, \delta)$ as in Remark 1.23, we obtain another 5-tuple $(\tilde\Gamma, \tilde\alpha, \tilde\beta, \tilde\gamma, \tilde\delta)$ that satisfies Assumption 1.22 and induces the same measure $m(i,j)$. Since those modifications do not invalidate (i)-(v), we conclude by applying the previous argument to the new 5-tuple.

2.3 Maximal uncoupled partitions

Because of linear systems (2.9) and (2.11), the values of $\alpha$, $\beta$ and $\gamma$ for common coordinates are strongly interconnected. This calls for the following definitions:

Definition 2.9. Let $A$ be any subset of $\mathbb{R}^2$. If $(\rho,\sigma), (\rho',\sigma') \in A$, we say that there exists a finite path through $A$ that connects them if there exist $\{(\rho_i,\sigma_i) : i = 0, \dots, k\} \subseteq A$ such that $(\rho_0,\sigma_0) = (\rho,\sigma)$, $(\rho_k,\sigma_k) = (\rho',\sigma')$, and each point has a coordinate in common with the previous one, i.e.

$$\forall i = 1, \dots, k \qquad \rho_{i-1} = \rho_i \quad \text{or} \quad \sigma_{i-1} = \sigma_i.$$

From a path we can always extract a minimal subpath, such that the same $\rho$ or $\sigma$ never appears in more than two terms.

These finite paths are important, since linear systems (2.9) and (2.11) can be connected to one another following paths through $\Gamma_+$.

Definition 2.10. Let $A$ be any subset of $\mathbb{R}^2$. A partition $\{A_i : i \in I\}$ of $A$ is horizontally uncoupled if entries of different parts have different horizontal coordinates, i.e.

$$p, q \in I,\ p \neq q,\ (\rho,\sigma) \in A_p,\ (\rho',\sigma') \in A_q \implies \rho \neq \rho'.$$

Similarly, a partition is vertically uncoupled if entries of different parts have different vertical coordinates, and a partition is uncoupled if it is both horizontally and vertically uncoupled.

We say that a partition $\mathcal{P}'$ of $A$ is a refinement of another partition $\mathcal{P}$ of $A$ if every element of $\mathcal{P}'$ is a subset of some element of $\mathcal{P}$. In this case, we say that $\mathcal{P}'$ is finer than $\mathcal{P}$ and that $\mathcal{P}$ is coarser than $\mathcal{P}'$. Informally, this means that $\mathcal{P}'$ is a further fragmentation of $\mathcal{P}$. This finer-than relation on the set of partitions of $A$ is clearly a partial order, so it makes sense to define maximal partitions. The existence of maximal horizontally uncoupled partitions, maximal vertically uncoupled partitions and maximal uncoupled partitions could easily follow from Zorn's lemma, but we prefer a constructive proof, which also establishes their uniqueness.

Proposition 2.11. Let $A$ be any subset of $\mathbb{R}^2$. The maximal horizontally uncoupled partition, the maximal vertically uncoupled partition and the maximal uncoupled partition of $A$ exist and are unique. In particular, the maximal horizontally uncoupled partition and the maximal vertically uncoupled partition can be written as

$$\mathcal{P}^H(A) = \{A_{\rho,\cdot} : \rho \in \mathbb{R}\}, \qquad A_{\rho,\cdot} := \{(x,y) \in A : x = \rho\},$$
$$\mathcal{P}^V(A) = \{A_{\cdot,\sigma} : \sigma \in \mathbb{R}\}, \qquad A_{\cdot,\sigma} := \{(x,y) \in A : y = \sigma\},$$

respectively. Moreover, two pairs $(\rho,\sigma)$ and $(\rho',\sigma')$ belong to the same part of the maximal uncoupled partition if and only if there is a finite path through $A$ that connects them.

Proof. Consider the partition $\mathcal{P}^H(A)$, defined as in the statement of the proposition: it is trivially horizontally uncoupled. If $\mathcal{P}'$ is another horizontally uncoupled partition of $A$, then each $A_{\rho,\cdot}$ cannot be split across different parts of $\mathcal{P}'$, which means that each part of $\mathcal{P}'$ is a union of some $A_{\rho,\cdot}$ sets, i.e. $\mathcal{P}^H(A) \ge \mathcal{P}'$. We conclude that $\mathcal{P}^H(A)$ is the maximal horizontally uncoupled partition, and that it is unique. We can prove in the same way that $\mathcal{P}^V(A)$ is the maximal vertically uncoupled partition, and that it is unique.

Now let us consider uncoupled partitions. Consider the binary relation on the elements of $A$ such that $(\rho,\sigma) \sim (\rho',\sigma')$ if there exists a finite path through $A$ connecting $(\rho,\sigma)$ to $(\rho',\sigma')$: this is an equivalence relation, so it defines a partition $\mathcal{P}^U(A) = \{A_i : i \in I\}$ of $A$. This partition is uncoupled: if $(\rho,\sigma) \in A_p$ and $(\rho',\sigma') \in A_q$ are such that $\rho = \rho'$ or $\sigma = \sigma'$, then there is a trivial path through $A$ that connects them, so $(\rho,\sigma) \sim (\rho',\sigma')$ and $p = q$. On the other hand, if $\mathcal{P}'$ is an uncoupled partition of $A$, and $(\rho,\sigma), (\rho',\sigma') \in A$ are connected through a finite path $(\rho_i,\sigma_i)_{i=0,\dots,k}$, then all terms of the path must be in the same part of $\mathcal{P}'$, since each term has one coordinate in common with the previous one. In particular, each equivalence class of $\sim$ must be wholly contained in one part of $\mathcal{P}'$, and $\mathcal{P}' \le \mathcal{P}^U(A)$. We conclude that $\mathcal{P}^U(A)$ is the maximal uncoupled partition, and that it is unique.

As in the previous proposition, we denote the maximal horizontally uncoupled partition, the maximal vertically uncoupled partition and the maximal uncoupled partition of a set $A \subseteq \mathbb{R}^2$ by $\mathcal{P}^H(A)$, $\mathcal{P}^V(A)$ and $\mathcal{P}^U(A)$, respectively. If the set $A$ is clear from the context, we will omit it from the notation.

Definition 2.12. A set $A \subseteq \mathbb{R}^2$ is pairwise-coupled if $|\mathcal{P}^U(A)| = 1$.

Each part of the maximal uncoupled partition is then a pairwise-coupled set; equivalently, a set $A$ is pairwise-coupled if and only if each pair of points $x, x' \in A$ is connected by a finite path through $A$.

If we take $A = \Gamma_+$, then there is a one-to-one correspondence between the maximal horizontally uncoupled partition $\mathcal{P}^H$ and the linear systems (2.9), and likewise between the maximal vertically uncoupled partition $\mathcal{P}^V$ and the linear systems (2.11). We can then partition those linear systems through the maximal uncoupled partition $\mathcal{P}^U$, since it is coarser than both $\mathcal{P}^H$ and $\mathcal{P}^V$. Linear systems in different parts of $\mathcal{P}^U$ can be solved independently, since their unknowns never appear in the same linear systems.
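Computationally, the maximal uncoupled partition of a finite set is just the set of connected components of the graph that joins points sharing a coordinate, by Proposition 2.11. A minimal Python sketch (the tuple encoding of points is an assumption made for illustration):

```python
from collections import defaultdict

def maximal_uncoupled_partition(points):
    """Connected components of the graph joining two points of `points`
    whenever they share a horizontal or a vertical coordinate."""
    by_rho, by_sigma = defaultdict(list), defaultdict(list)
    for p in points:
        by_rho[p[0]].append(p)
        by_sigma[p[1]].append(p)
    seen, parts = set(), []
    for p in points:
        if p in seen:
            continue
        part, stack = [], [p]
        seen.add(p)
        while stack:                      # depth-first search of one part
            cur = stack.pop()
            part.append(cur)
            for nb in by_rho[cur[0]] + by_sigma[cur[1]]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        parts.append(part)
    return parts

# Example: the first three points are pairwise-coupled, the fourth is not.
pts = [(0.5, 0.2), (0.5, 0.4), (0.3, 0.4), (0.1, 0.7)]
assert len(maximal_uncoupled_partition(pts)) == 2
```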



2.4 General sketch of the detection algorithm

While we present a first sketch of the algorithm that computes $(\Gamma, \alpha, \beta, \gamma, \delta)$, we also disclose some additional facts that we will prove later.

Fact 2.13. For all $\Gamma_1 \in \mathcal{P}^U(\Gamma_+)$, there is at least one point $(\rho,\sigma) \in \Gamma_1$ such that one of the following statements is true:

• $\Gamma^\rho_y = \{\sigma\}$ and $|Q \cap H_1 \cap (0,1)^2| < \infty$;
• $\Gamma^\sigma_x = \{\rho\}$ and $|Q \cap V_1 \cap (0,1)^2| < \infty$.

Because of this fact, we can find starting points from which to compute each $\Gamma_1 \in \mathcal{P}^U(\Gamma_+)$, by intersecting $Q$ with $H_1$ and $V_1$.

We suppose that $Q$ does not contain horizontal or vertical lines: we will consider the other case later. Instead of computing $\Gamma$ directly, we will find a bigger set $\tilde\Gamma \supseteq \Gamma$, and solve all linear systems for $\tilde\Gamma$ in place of $\Gamma$: if we find a 5-tuple $(\tilde\Gamma, \tilde\alpha, \tilde\beta, \tilde\gamma, \tilde\delta)$ that satisfies the hypotheses of Theorem 2.8, then we can compute the invariant probability measure of the random walk. Even if $\tilde\Gamma$ is too big, if the invariant probability measure of the random walk is induced by the 5-tuple $(\Gamma, \alpha, \beta, \gamma, \delta)$, then we will find a solution such that

• $\tilde\alpha(\rho,\sigma) = 0$ for all $(\rho,\sigma) \in \tilde\Gamma_+\setminus\Gamma_+$,
• $\tilde\gamma(\rho) = 0$ for all $\rho \in \tilde\Gamma_x\setminus\Gamma_x$,
• $\tilde\beta(\sigma) = 0$ for all $\sigma \in \tilde\Gamma_y\setminus\Gamma_y$.

If there is no $(\tilde\Gamma, \tilde\alpha, \tilde\beta, \tilde\gamma, \tilde\delta)$ that satisfies the hypotheses of Theorem 2.8, we can then deduce that the random walk does not have an invariant probability measure of the form we were looking for.

To build $\tilde\Gamma$ in such a way that, if the random walk has an invariant probability measure induced by $(\Gamma, \alpha, \beta, \gamma, \delta)$, then we are certain that $\tilde\Gamma \supseteq \Gamma$, we can take the following steps:

Procedure 2.14 (Building $\tilde\Gamma$).

1. Start with $\tilde\Gamma = \emptyset$.
2. Compute $Q \cap H_1$: if $|Q \cap H_1 \cap (0,1)^2| < \infty$, then add $Q \cap H_1 \cap (0,1)^2$ to $\tilde\Gamma$; do the same for $Q \cap V_1$.
3. Add to $\tilde\Gamma$ all points of $Q \cap (0,1)^2$ that are connected to another point of $\tilde\Gamma$ by a finite path through $Q \cap (0,1)^2$ (see Definition 2.9).
4. For all $(\rho,\sigma) \in \tilde\Gamma$, add $(\rho,0)$ and $(0,\sigma)$ to $\tilde\Gamma$.
5. If $h_1(x) \equiv 0$ and $0 < h_{1,0} < h_{-1,0}$, then add $(h_{1,0}/h_{-1,0}, 0)$ to $\tilde\Gamma$; likewise, if $v_1(y) \equiv 0$ and $0 < v_{0,1} < v_{0,-1}$, then add $(0, v_{0,1}/v_{0,-1})$ to $\tilde\Gamma$.
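Step 2 of the procedure amounts to solving the polynomial system $\{Q = 0, H_1 = 0\}$ and keeping the real solutions in $(0,1)^2$. The following Python sketch uses sympy on a hypothetical toy set of rates; both the rates and the assumption that the intersection is finite are illustrative only:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
R = sp.Rational

# Hypothetical rates of a toy random walk (everything not listed is zero).
q = {(1, 1): R(1, 5), (-1, 0): R(1, 2), (0, -1): R(3, 10)}
h = {(1, 0): R(1, 4), (-1, 0): R(1, 2), (0, 1): R(1, 4)}
qt, ht = sum(q.values()), sum(h.values())
g = lambda d, s, t: d.get((s, t), R(0))

# Section 1.5: the curve Q(x,y), the q^h_i, h_0, h_1 polynomials, and H1.
Q   = sum(r * x**(1-s) * y**(1-t) for (s, t), r in q.items()) - qt*x*y
qm1 = g(q,-1,-1)*x**2 + g(q,0,-1)*x + g(q,1,-1)    # q^h_{-1}(x)
q1  = g(q,-1,1)*x**2 + g(q,0,1)*x + g(q,1,1)       # q^h_1(x)
h0  = g(h,-1,0)*x**2 - ht*x + g(h,1,0)
h1  = g(h,-1,1)*x**2 + g(h,0,1)*x + g(h,1,1)
H1  = q1*h0 + y*qm1*h1

# Keep the real intersection points lying in the open unit square.
candidates = []
for s in sp.solve([sp.Eq(Q, 0), sp.Eq(H1, 0)], [x, y], dict=True):
    rx, ry = s[x].evalf(), s[y].evalf()
    if rx.is_real and ry.is_real and 0 < rx < 1 and 0 < ry < 1:
        candidates.append((float(rx), float(ry)))
```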

Notice that, since we supposed that $Q$ does not contain horizontal or vertical lines, $\tilde\Gamma$ is necessarily countable.

Lemma 2.15. If $\tilde\Gamma$ is built following Procedure 2.14, and the random walk has an invariant probability measure induced by $(\Gamma, \alpha, \beta, \gamma, \delta)$, then $\tilde\Gamma \supseteq \Gamma$.

Proof. Because of Fact 2.13, point 2 of the procedure adds to $\tilde\Gamma$ at least one point of each part of the maximal uncoupled partition $\mathcal{P}^U$ of $\Gamma_+$. Because of Proposition 2.11, point 3 of the procedure adds to $\tilde\Gamma$, for each $(\rho,\sigma) \in \tilde\Gamma \cap \Gamma_+$, the whole part of $\mathcal{P}^U$ containing $(\rho,\sigma)$, so now $\tilde\Gamma \supseteq \Gamma_+$. Point 4 of the procedure adds to $\tilde\Gamma$ all $(\rho,0)$ such that $\Gamma^\rho_y \neq \emptyset$, and all $(0,\sigma)$ such that $\Gamma^\sigma_x \neq \emptyset$. Finally, because of Lemma 2.7, if there exists $\rho \in \Gamma_x$ such that $\Gamma^\rho_y = \emptyset$, then point 5 of the procedure adds $(\rho,0)$ to $\tilde\Gamma$. Likewise, if there exists $\sigma \in \Gamma_y$ such that $\Gamma^\sigma_x = \emptyset$, then point 5 adds $(0,\sigma)$ to $\tilde\Gamma$. We conclude that $\tilde\Gamma \supseteq \Gamma$.

For now, we take for granted that we are always able to compute $Q \cap H_1$ and $Q \cap V_1$ efficiently. If $\tilde\Gamma$ is a finite set, we can easily compute all its points by intersecting $Q$ with appropriate horizontal and vertical lines. Then, since the linear systems (2.9) and (2.11) are finite in number, with finitely many unknowns and equations, we can easily solve them. If there is no non-trivial solution, the random walk cannot have an induced invariant probability measure.

If $\tilde\Gamma$ is countably infinite, there are some issues: since we cannot possibly compute the whole $\tilde\Gamma$, we must decide when to stop, and whether in that way we can obtain a viable approximation of the invariant probability measure. Moreover, we also need some way to distinguish the infinite case from the finite case, so that in the latter case we do not truncate $\tilde\Gamma$ ahead of time.

Lemma 2.16. If $\tilde\Gamma$ is built following Procedure 2.14, then $|\mathcal{P}^U(\tilde\Gamma_+)| < \infty$.

Proof. $\mathcal{P}^U$ has at most one part for each point added to $\tilde\Gamma$ in point 2 of the procedure. Since those points are finitely many, we conclude that $\mathcal{P}^U$ is also finite.

Fact 2.17. Suppose that A ⊆ Q ∩ (0, 1)² is an infinite pairwise-coupled set. Then 0 ∈ Q, and we can write A = {(ρi, σi) : i ≥ 0}, where lim_{i→∞}(ρi, σi) = 0, and one of the following two properties is satisfied:

• ρ_{2i+1} = ρ_{2i}, σ_{2i+1} < σ_{2i}, ρ_{2i+2} < ρ_{2i+1}, and σ_{2i+2} = σ_{2i+1} for all i ≥ 0;

• ρ_{2i+1} < ρ_{2i}, σ_{2i+1} = σ_{2i}, ρ_{2i+2} = ρ_{2i+1}, and σ_{2i+2} < σ_{2i+1} for all i ≥ 0.

An example of the effects of Fact 2.17 is reported in Figure 2.1.

If Γ1 ∈ PU(˜Γ+) and |Γ1| = ∞, its infinite linear systems (2.9) and (2.11) have a (2 × 2)-block bidiagonal structure, as in Figure 2.2. We will see that this linear system can then be solved iteratively, obtaining ˜α(ρi, σi), ˜γ(ρi) and ˜β(σi) for i = 0, . . . , k, with k arbitrary, depending on a finite number of free parameters. Later we will characterize the cases in which the three series

\[
\sum_{i\ge 0} \big|\tilde\alpha(\rho_i,\sigma_i)\big|\,\rho_i\sigma_i,\qquad
\sum_{i\ge 0} \big|\tilde\gamma(\rho_i)\big|\,\rho_i,\qquad
\sum_{i\ge 0} \big|\tilde\beta(\sigma_i)\big|\,\sigma_i
\]


[Plot for Figure 2.1: the curves Q, H1 and V1 in the square [0, 1.2] × [0, 1.2].]

Figure 2.1: Example of infinite ˜Γ. The red dots and the green dots represent ˜Γ+, the color depending on the part of PU(˜Γ+) they belong to. The double dots correspond to the starting points added in point 2 of Procedure 2.14.

converge, and their rate of convergence. In particular, we can exclude the cases in which any of the previous series diverges by setting some free parameters to zero. This way, we can force Assumption 1.24 to be satisfied by (˜Γ, ˜α, ˜β, ˜γ). Then, we can stop computing ρi, σi, ˜α, ˜β, ˜γ as soon as (ρk, σk) and the terms ˜α(ρk, σk)ρkσk, ˜γ(ρk)ρk and ˜β(σk)σk are all close enough to zero.
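To make the iterative solution concrete, the following sketch (an illustration of this write-up, not prescribed by the thesis) performs the forward substitution on a truncated (2 × 2)-block bidiagonal chain. It assumes that the block rows of systems (2.9) and (2.11) have been assembled as pairs C[i] u_i + D[i] u_{i+1} = 0, with each D[i] invertible.

\begin{verbatim}
import numpy as np

def forward_substitute(C, D, u0):
    """Forward substitution on a truncated (2x2)-block bidiagonal chain
    C[i] u_i + D[i] u_{i+1} = 0 (the sparsity pattern of Figure 2.2).

    C, D: lists of 2x2 arrays (the possibly non-zero blocks), each D[i]
          assumed invertible;
    u0:   the initial pair of unknowns, e.g. (gamma(rho_0),
          alpha(rho_0, sigma_0)), carrying the free parameters.
    """
    us = [np.asarray(u0, dtype=float)]
    for Ci, Di in zip(C, D):
        us.append(np.linalg.solve(Di, -(Ci @ us[-1])))  # next pair
    return us
\end{verbatim}

In line with the stopping rule above, one would grow the lists C, D and stop as soon as the resulting terms ˜α(ρk, σk)ρkσk, ˜γ(ρk)ρk and ˜β(σk)σk fall below a prescribed tolerance.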

Lemma 2.18. If ˜Γ is built following Procedure 2.14, then ˜Γ+ satisfies Assumption 1.31.

Proof. Let (ρ0, σ0) ∈ ˜Γ+. We partition R² in two sets:

\[
A_1 := \{(\rho, \sigma) \in \mathbb{R}^2 : (\rho - \rho_0)(\sigma - \sigma_0) \ge 0\},\qquad
A_2 := \{(\rho, \sigma) \in \mathbb{R}^2 : (\rho - \rho_0)(\sigma - \sigma_0) < 0\},
\]

and we partition ˜Γ+ in Γ1 := ˜Γ ∩ A1 and Γ2 := ˜Γ ∩ A2. Since 0 does not belong to the […]. We apply Lemma 1.27 to Γ2 ∪ {(ρ0, σ0)}, obtaining m, n ≥ 1 such that $\rho_0^m \sigma_0^n \ne \rho^m \sigma^n$ for all (ρ, σ) ∈ Γ2. Suppose for the sake of contradiction that there exists (ρ, σ) ∈ Γ1 \ {(ρ0, σ0)} such that $\rho_0^m \sigma_0^n = \rho^m \sigma^n$: then we would have

\[
\left(\frac{\rho_0}{\rho}\right)^{m} = \left(\frac{\sigma}{\sigma_0}\right)^{n},
\]

but since m, n > 0, this would imply that ρ0/ρ < 1 and σ/σ0 < 1, or ρ0/ρ > 1 and σ/σ0 > 1, or ρ0/ρ = 1 and σ/σ0 = 1. The first and the second conditions are impossible, since they imply that (ρ, σ) ∈ Γ2; the third condition is impossible, since it would imply (ρ, σ) = (ρ0, σ0). We conclude that such a (ρ, σ) ∈ Γ1 \ {(ρ0, σ0)} cannot exist, so ˜Γ+ satisfies Assumption 1.31, with exponents (m, n).

                   ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ... ...                                       γ(ρ0) α(ρ0,σ0) β(σ1) α(ρ1,σ1) γ(ρ2) α(ρ2,σ2) .. .                    =                    0 0 0 0 0 0 .. .                   

Figure 2.2: Example of infinite (2 × 2)-block bidiagonal structure. The asterisks denote values that could be different from zero. The other possible case switches γ(ρi) with β(σi).

Fact 2.19. Suppose that A ⊆ Q ∩ (0, 1)² is a finite pairwise-coupled set. Then we can write A = {(ρi, σi) : 0 ≤ i ≤ k − 1}, and one of the following properties is satisfied:

• ρ_{2i+1} = ρ_{2i}, σ_{2i+1} ≠ σ_{2i}, ρ_{2i+2} ≠ ρ_{2i+1}, and σ_{2i+2} = σ_{2i+1} for all i ≥ 0;

• ρ_{2i+1} ≠ ρ_{2i}, σ_{2i+1} = σ_{2i}, ρ_{2i+2} = ρ_{2i+1}, and σ_{2i+2} ≠ σ_{2i+1} for all i ≥ 0.

Moreover, for all i, j with |i − j| > 1 we have ρi ≠ ρj and σi ≠ σj.

The fundamental linear system of each finite pairwise-coupled subset of ˜Γ has a structure similar to that of the infinite case, but cut short at size (2k + 2) × (2k + 1), as in Figure 2.3. We can then solve this linear system, obtaining ˜α(ρi, σi), ˜γ(ρi) and ˜β(σi) for i = 0, . . . , k − 1, depending on a finite number of free parameters. For such a system to have a non-trivial solution, its rank must be less than or equal to 2k.


\[
\begin{pmatrix}
\ast & \ast & \ast & \ast & & & \\
\ast & \ast & \ast & \ast & & & \\
 & & \ast & \ast & \ast & \ast & \\
 & & \ast & \ast & \ast & \ast & \\
 & & & & \ast & \ast & \ast \\
 & & & & \ast & \ast & \ast \\
 & & & & & \ast & \ast \\
 & & & & & \ast & \ast
\end{pmatrix}
\begin{pmatrix}
\gamma(\rho_0)\\ \alpha(\rho_0,\sigma_0)\\ \beta(\sigma_1)\\ \alpha(\rho_1,\sigma_1)\\ \gamma(\rho_2)\\ \alpha(\rho_2,\sigma_2)\\ \beta(\sigma_2)
\end{pmatrix}
=
\begin{pmatrix}
0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0
\end{pmatrix}
\]

Figure 2.3: Example of fundamental linear system for a part Γ1 ∈ PU with cardinality k = 3. The asterisks denote values that could be different from zero. The other possible case switches γ(ρi) with β(σi).
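Numerically, the rank condition can be checked, and a basis of non-trivial solutions extracted, from a singular value decomposition of the (2k + 2) × (2k + 1) matrix. The sketch below (an illustration of this write-up, using NumPy, not prescribed by the thesis) does both at once.

\begin{verbatim}
import numpy as np

def nontrivial_solutions(A, tol=1e-10):
    """Null space of the finite fundamental system A x = 0 of Figure 2.3,
    where A has size (2k+2) x (2k+1): a non-trivial solution exists iff
    rank(A) <= 2k, i.e. iff some singular value vanishes numerically."""
    U, S, Vt = np.linalg.svd(A)
    rank = int((S > tol * S[0]).sum()) if S.size else 0
    return rank, Vt[rank:].T        # columns form a basis of ker(A)
\end{verbatim}

The columns of the returned basis correspond to the free parameters λ1, . . . , λk used below.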

Suppose now that we have computed ˜α, ˜β and ˜γ up to some free parameters λ1, . . . , λk, i.e.

\[
\tilde\alpha(\rho, \sigma) = \langle \mu_{\rho,\sigma}, \lambda \rangle,\qquad
\tilde\gamma(\rho) = \langle \nu_{\rho}, \lambda \rangle,\qquad
\tilde\beta(\sigma) = \langle \eta_{\sigma}, \lambda \rangle,
\]

where λ = (λ1, . . . , λk), and µρ,σ, νρ, ησ ∈ R^k are appropriate vectors that have already been calculated for (ρ, σ) ∈ ˜Γ+, ρ ∈ ˜Γx and σ ∈ ˜Γy, respectively. We can then set up one last linear system, which we call the corner linear system, using balance equations (2.6), (2.7) and (2.8), with 3 equations and the k + 1 unknowns λ1, . . . , λk and ˜δ:

\[
\begin{cases}
\displaystyle \Big\langle\, q_{1,1}\!\!\sum_{(\rho,\sigma)\in\tilde\Gamma^+}\!\!\mu_{\rho,\sigma} \;+\; h_{1,1}\!\sum_{\rho\in\tilde\Gamma_x}\!\nu_\rho \;+\; v_{1,1}\!\sum_{\sigma\in\tilde\Gamma_y}\!\eta_\sigma,\; \lambda \Big\rangle + r_{1,1}\,\tilde\delta = 0,\\[2ex]
\displaystyle \Big\langle\, v_{1,-1}\!\sum_{\sigma\in\tilde\Gamma_y}\!\eta_\sigma\sigma \;-\; \sum_{\rho\in\tilde\Gamma_x}\Big( h_{1,0}\,\nu_\rho + q_{1,-1}\!\sum_{\sigma\in\tilde\Gamma_y^\rho}\!\mu_{\rho,\sigma}\,\sigma \Big),\; \lambda \Big\rangle + r_{1,0}\,\tilde\delta = 0,\\[2ex]
\displaystyle \Big\langle\, h_{-1,1}\!\sum_{\rho\in\tilde\Gamma_x}\!\nu_\rho\rho \;-\; \sum_{\sigma\in\tilde\Gamma_y}\Big( v_{0,1}\,\eta_\sigma + q_{-1,1}\!\sum_{\rho\in\tilde\Gamma_x^\sigma}\!\mu_{\rho,\sigma}\,\rho \Big),\; \lambda \Big\rangle + r_{0,1}\,\tilde\delta = 0.
\end{cases}
\]

We write the linear system in a more compact form:

\[
\begin{pmatrix}
K_{1,1} & \cdots & K_{1,k} & r_{1,1}\\
K_{2,1} & \cdots & K_{2,k} & r_{1,0}\\
K_{3,1} & \cdots & K_{3,k} & r_{0,1}
\end{pmatrix}
\begin{pmatrix}
\lambda_1\\ \vdots\\ \lambda_k\\ \tilde\delta
\end{pmatrix}
=
\begin{pmatrix}
0\\ 0\\ 0
\end{pmatrix}.
\]


If we define K = (K_{i,j})_{i≤3, j≤k} and r = (r_{1,1}, r_{1,0}, r_{0,1})^T, the linear system can be written as Kλ + r˜δ = 0, or equivalently Kλ = −r˜δ. For the linear system to have a non-trivial solution, it is necessary and sufficient that rk(K|r) ≤ k.
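In code, the solvability test and the solve can be done together. The following sketch (again an illustration with NumPy, not the thesis's prescription) fixes ˜δ = 1, which is harmless since the 5-tuple is rescaled by M afterwards.

\begin{verbatim}
import numpy as np

def solve_corner_system(K, r, tol=1e-10):
    """Find (lambda, delta) with K lambda = -r delta and delta > 0.

    Such a solution exists iff r lies in the column span of K, i.e. iff
    appending r to K does not increase the rank.
    """
    if (np.linalg.matrix_rank(np.column_stack([K, r]), tol)
            > np.linalg.matrix_rank(K, tol)):
        return None                  # no induced invariant measure
    delta = 1.0                      # any positive value: M rescales it later
    lam, *_ = np.linalg.lstsq(K, -delta * r, rcond=None)
    return lam, delta
\end{verbatim}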

If r belongs to the linear span of the column vectors of K, then the linear system admits a non-trivial solution (λ, ˜δ) such that ˜δ > 0. In particular, all the hypotheses of Theorem 2.8 are satisfied, so the measure m(i, j) induced by (˜Γ, ˜α, ˜β, ˜γ, ˜δ) is proportional to the invariant probability measure π of the random walk, and we can compute π through the relation π_{i,j} = m(i, j)/M, where

\[
M := \sum_{i,j\ge 0} m(i,j) \;=\; \sum_{\tilde\Gamma^+} \tilde\alpha(\rho,\sigma)\,\frac{\rho}{1-\rho}\,\frac{\sigma}{1-\sigma} \;+\; \sum_{\tilde\Gamma_x} \tilde\gamma(\rho)\,\frac{\rho}{1-\rho} \;+\; \sum_{\tilde\Gamma_y} \tilde\beta(\sigma)\,\frac{\sigma}{1-\sigma}.
\]

In particular, π is induced by (˜Γ, M^{-1}˜α, M^{-1}˜β, M^{-1}˜γ, M^{-1}˜δ). We can then reduce the 5-tuple to (Γ, α, β, γ, δ), so that it satisfies Assumption 1.22, by following the procedure of Remark 1.23 (recall Theorem 2.6), although this is not strictly necessary.
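For completeness, here is a direct transcription of the normalization into Python; the data layout (coefficients stored in dictionaries) is a choice of this sketch, not of the thesis.

\begin{verbatim}
def normalizing_constant(alpha, gamma, beta):
    """Compute M as in the formula above.

    alpha: dict {(rho, sigma): alpha-coefficient} over the interior terms,
    gamma: dict {rho: gamma-coefficient} over the horizontal terms,
    beta:  dict {sigma: beta-coefficient} over the vertical terms.
    """
    M = sum(a * (r / (1 - r)) * (s / (1 - s)) for (r, s), a in alpha.items())
    M += sum(g * (r / (1 - r)) for r, g in gamma.items())
    M += sum(b * (s / (1 - s)) for s, b in beta.items())
    return M
\end{verbatim}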

If r does not belong to the linear span of the column vectors of K, then the linear system does not admit a non-trivial solution with ˜δ > 0, and we conclude that the random walk does not have an induced invariant probability measure.

We summarize the algorithm presented in this section as follows:

Algorithm 2.1 Detection algorithm
    Compute Q ∩ H1 and Q ∩ V1;
    Build ˜Γ;
    Solve all fundamental linear systems, obtaining µρ,σ, νρ and ησ;
    Solve the corner linear system, obtaining all solutions (λ, ˜δ);
    if there is a solution such that ˜δ > 0 then
        Compute ˜α, ˜β and ˜γ;
        Calculate M := Σ_{i,j≥0} m(i, j);
        Reduce (˜Γ, M^{-1}˜α, M^{-1}˜β, M^{-1}˜γ, M^{-1}˜δ) to (Γ, α, β, γ, δ);
        return (Γ, α, β, γ, δ);
    else
        return failure;
    end if
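Tying the previous sketches together, a hypothetical top-level driver for Algorithm 2.1 could look as follows; every helper name not defined above is an assumed stand-in for the corresponding step of this section, not an implementation provided by the thesis.

\begin{verbatim}
def detection_algorithm(walk):
    """High-level sketch of Algorithm 2.1.

    intersect_with_H1_V1, solve_fundamental_systems,
    assemble_corner_system, evaluate_coefficients and reduce_tuple are
    hypothetical stand-ins for the steps described in this section.
    """
    start = intersect_with_H1_V1(walk)              # Q cap H1 and Q cap V1
    interior, xs, ys = build_gamma_tilde(start, walk.points_on_line)
    mu, nu, eta = solve_fundamental_systems(walk, interior, xs, ys)
    sol = solve_corner_system(*assemble_corner_system(walk, mu, nu, eta))
    if sol is None:
        return None                                 # failure
    lam, delta = sol
    alpha, gamma, beta = evaluate_coefficients(mu, nu, eta, lam)
    M = normalizing_constant(alpha, gamma, beta)
    return reduce_tuple(interior, alpha, gamma, beta, delta, M)
\end{verbatim}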

