# HAMILTONIAN MECHANICS


## Contents

Preface

1 Introduction to Hamiltonian systems
  1.1 Poisson structures
  1.2 Canonical transformations
  1.3 Integrability: Liouville theorem

2 Probabilistic approach
  2.1 Probability measures and integration
  2.2 Invariant measures
  2.3 Equilibrium statistical mechanics
    2.3.1 Micro-canonical measure
    2.3.2 Canonical measure
  2.4 Approach to equilibrium I: noise & dissipation
    2.4.1 The Langevin equation
    2.4.2 The Fokker-Planck equation
    2.4.3 Hamiltonian case
  2.5 The Clausius virial theorem
    2.5.1 Homogeneous case
    2.5.2 Theory of gases

3 Ergodic theory
  3.1 Equilibrium: ergodicity
    3.1.1 Characterization of ergodic systems
  3.2 Approach to equilibrium II: mixing
    3.2.1 Characterization of mixing systems
    3.2.2 Correlations
  3.3 Recurrence

4 Hamiltonian perturbation theory
  4.1 Quasi-integrable systems
  4.2 Elimination of the angles: first step
  4.3 The Poincaré theorem on the nonexistence of first integrals

A Hierarchy of structures in topology and their utility

B Fourier series expansion of functions on the torus

Bibliography

## Preface

The present notes are devoted to the study of those mathematical aspects relevant to the physics of Hamiltonian systems, and in particular to statistical mechanics and its dynamical foundations. Equilibrium statistical mechanics, whose present formulation is due to Gibbs [17], is the physical theory that explains the laws of equilibrium thermodynamics of a macroscopic body, described by a large Hamiltonian system, in terms of a privileged probability measure on its phase space, named the Gibbs measure. In referring to "large" systems one has in mind numbers of "particles" that can be as large as the Avogadro number $N_A$, i.e. the inverse of the atomic mass unit $m_u$ expressed in grams: $N_A = 1/m_u = 6.02\cdots\times 10^{23}$.

The problem that naturally arises concerns the role played by the dynamics. Indeed, at a microscopic level, particles move subject both to their mutual interaction forces and to forces due to external sources (e.g. gravity, confining forces, electromagnetic fields and so on). Thus it has to be explained how and why such an unsolvable microscopic dynamics may give rise to, or be compatible with, the stationary Gibbs measure describing the collective properties of the whole system. In short, one should be able to explain thermodynamics starting from Newton's law.

In real experiments one usually measures physical quantities by taking the arithmetic mean of their values detected at different times. Such a procedure is used both in measuring the value of a quantity in stationary conditions and in measuring the evolution of its value in the course of time. Thus, it would be desirable to have at one's disposal a theory concerning time averages along single orbits. The Clausius virial theorem represents a very interesting example in this direction. In that context, a second fundamental question arises, namely whether the "experimentally" computed time averages coincide or not with the theoretically computable expectation values with respect to the Gibbs measure. Such a property, specific to the system at hand, to the chosen probability measure, and to the class of interesting physical quantities, or "observables", is called ergodicity. It must be stressed that, given a Hamiltonian system of some interest to matter physics, nobody is presently able to decide whether it is ergodic or not, not even allowing for restrictions on the class of observables.

However, even ergodicity would not be sufficient to build up thermodynamics. Indeed, not only the existence of an equilibrium, but also the approach to equilibrium must be justified.

This requires a property stronger than (i.e. implying) ergodicity to hold, namely mixing. Such a property, which consists in the asymptotic de-correlation of observables in time, ensures that even starting with a measure that is not the Gibbs one, the latter will be approached in the course of time, in a weak sense. Establishing the validity of the mixing property for a given Hamiltonian system of physical interest is also still out of reach. The only existing theory of approach to equilibrium rests on the addition of mutually balanced noise and dissipation to the


conservative forces acting on the particles of the system. Such a theory is both mathematically elegant and conceptually plausible from a physical point of view, but in this way stochasticity is artificially inserted from outside instead of being obtained as a “collective” feature of the conservative system.

The approach to equilibrium of a given system may well take place notwithstanding the time-reversal symmetry of the Hamilton equations that rule its dynamics at a microscopic level. More than this, the microscopic dynamics is characterized by the recurrence of almost every initial condition. The solution of such paradoxes helps one understand how an effective arrow of time occurs, in a statistical sense, when passing from the microscopic to the macroscopic level of description.

While deciding whether the dynamics of a given system displays some precise degree of stochasticity (ergodicity, mixing or more) is extremely difficult, it is usually easier to show whether some integrable behavior has to be expected. Such a perspective is meaningful since many systems in matter physics are close to integrable: two fundamental examples are crystals at low temperatures and gases at high temperatures. In practice, by means of canonical perturbation theory, one can try to build up one or more approximate constants of motion, or quasi-integrals, of the system. This procedure leads to absolutely nontrivial results, such as the KAM theorem, which ensures the preservation of most invariant tori of an integrable system under small perturbations. The latter result, together with the classical Poincaré theorem on the nonexistence of smooth first integrals (besides the Hamiltonian) for quasi-integrable systems, is used to state that generic Hamiltonian systems are neither integrable nor ergodic [26]. However, one must never forget the role played by the energy-size relation characterizing the system: in the thermodynamics of ordinary matter the latter two quantities are strictly proportional. As a consequence, the validity of the above mentioned conclusion is doubtful in matter physics, and ergodicity or mixing might well hold in the so-called thermodynamic limit, when the number of particles is ideally pushed to infinity at a constant (and usually small) value of the energy per particle.


## 1 Introduction to Hamiltonian systems

### 1.1 Poisson structures

The evolution in time of physical systems is described by differential equations of the form

$$\dot x = u(x)\,, \tag{1.1}$$

where the vector field $u : \Gamma \ni x \mapsto u(x) \in T_x\Gamma$ is defined on some phase space $\Gamma$ (i.e. the space of all the possible states of the system) and takes values in its tangent bundle $T\Gamma := \bigcup_{x\in\Gamma} T_x\Gamma$ (i.e. the union of all the tangent spaces $T_x\Gamma$). Notice that if $\Gamma = \mathbb{R}^n$ then $T\Gamma = \Gamma = \mathbb{R}^n$ as well.

Remark 1.1. From a topological point of view, the phase space $\Gamma$ of the system has to be at least a Banach space. This is due to the necessity of guaranteeing the existence and uniqueness of the solution $x(t) = \Phi^t(x_0)$ of the differential equation (1.1), for any initial condition $x(0) = x_0 \in \Gamma$ and any $t \in I_{x_0} \subseteq \mathbb{R}$; see Appendix A and reference [38].

Very often $\Gamma$ is a Hilbert space (something more than Banach). This happens if $\Gamma$ is finite dimensional or, for example, in the theory of the classical linear PDEs, such as the wave and the heat equations. The most notable case of an infinite dimensional Hilbert phase space is perhaps that of quantum mechanics (see below).

Hamiltonian systems are those particular dynamical systems whose phase space $\Gamma$ is endowed with a Poisson structure, according to the following definition.

Definition 1.1 (Poisson bracket). Let $\mathcal{A}(\Gamma)$ be the algebra (i.e. vector space with a bilinear product) of real smooth functions defined on $\Gamma$. A map $\{\,,\} : \mathcal{A}(\Gamma)\times\mathcal{A}(\Gamma) \to \mathcal{A}(\Gamma)$ is called a Poisson bracket on $\Gamma$ if it satisfies the following properties:

1. $\{F,G\} = -\{G,F\}$ for all $F,G \in \mathcal{A}(\Gamma)$ (skew-symmetry);

2. $\{\alpha F + \beta G, H\} = \alpha\{F,H\} + \beta\{G,H\}$ for all $\alpha,\beta \in \mathbb{R}$ and all $F,G,H \in \mathcal{A}(\Gamma)$ (left linearity);

3. $\{F,\{G,H\}\} + \{G,\{H,F\}\} + \{H,\{F,G\}\} = 0$ for all $F,G,H \in \mathcal{A}(\Gamma)$ (Jacobi identity);

4. $\{FG,H\} = F\{G,H\} + \{F,H\}G$ for all $F,G,H \in \mathcal{A}(\Gamma)$ (left Leibniz rule).


The pair $(\mathcal{A}(\Gamma), \{\,,\})$ is a Poisson algebra, i.e. a Lie algebra (a vector space with a skew-symmetric, left-linear product satisfying the Jacobi identity) that in addition satisfies the Leibniz rule. Observe that properties 1 and 2 in the above definition imply right linearity, so that the Poisson bracket is actually bilinear. Observe also that 1 and 4 imply the right Leibniz rule.

Definition 1.2 (Hamiltonian system). Given a Poisson algebra $(\mathcal{A}(\Gamma), \{\,,\})$, a Hamiltonian system on $\Gamma$ is a dynamical system described by a differential equation of the form (1.1) whose vector field has the form

$$u(x) = X_H(x) := \{x, H\}\,,$$

where $H \in \mathcal{A}(\Gamma)$ is called the Hamiltonian of the system.

In the above definition the bracket $\{x, H\}$ is meant by components with respect to the first entry: $u_k(x) = [X_H(x)]_k := \{x_k, H\}$, where $x_k$ denotes the $k$-th component of $x$. Notice that $k$ is not necessarily a discrete index. The following proposition holds:

Proposition 1.1. A skew-symmetric, bilinear Leibniz bracket $\{\,,\}$ on $\Gamma$ is such that

$$\{F,G\} = \nabla F \cdot J \nabla G := \sum_{j,k} \frac{\partial F}{\partial x_j}\, J_{jk}(x)\, \frac{\partial G}{\partial x_k}\,, \tag{1.2}$$

for all $F, G \in \mathcal{A}(\Gamma)$, where

$$J_{jk}(x) := \{x_j, x_k\}\,. \tag{1.3}$$

The bracket at hand satisfies the Jacobi identity, i.e. it is a Poisson bracket, iff the operator function $J(x)$ satisfies the relation

$$\sum_s \left( J_{is}\frac{\partial J_{jk}}{\partial x_s} + J_{js}\frac{\partial J_{ki}}{\partial x_s} + J_{ks}\frac{\partial J_{ij}}{\partial x_s} \right) = 0\,. \tag{1.4}$$

PROOF. Let us prove the proposition in the simple case of the algebra $\mathcal{A}(\Gamma)$ of analytic functions on $\Gamma \subseteq \mathbb{R}^n$. As a preliminary remark, we observe that the Leibniz rule and the linearity property imply $\{F, c\} = 0$ for any $F \in \mathcal{A}(\Gamma)$ and any $c \in \mathbb{R}$. Indeed, one has $\{F, c\} = \{F, c\cdot 1\} = c\{F,1\} + \{F,c\}\cdot 1$, so that $\{F,1\} = 0$ and $c\{F,1\} = \{F,c\} = 0$. The following facts are easily checked.

1. $\{F, (x_j - a_j)^{k_j}\} = \{F, x_j\}\,k_j (x_j - a_j)^{k_j - 1} = \{F, x_j\}\,\partial_{x_j}(x_j - a_j)^{k_j}$ for any $F(x) \in \mathcal{A}(\Gamma)$, any coordinate $x_j$, any $a_j \in \mathbb{R}$ and any $k_j \in \mathbb{N}$ (by induction).

2. $\left\{F, \prod_{j=1}^n (x_j - a_j)^{k_j}\right\} = \sum_{j=1}^n \{F, x_j\}\,\partial_{x_j} \prod_{s=1}^n (x_s - a_s)^{k_s}$ (by Leibniz).

3. If $G(x) = \sum_{k \in \mathbb{N}^n} g_k \prod_{j=1}^n (x_j - a_j)^{k_j}$, then $\{F, G\} = \sum_{j=1}^n \{F, x_j\}\,\dfrac{\partial G}{\partial x_j}$ (by linearity).

4. $\{F, x_j\} = -\{x_j, F\} = -\sum_{i=1}^n \{x_j, x_i\}\,\dfrac{\partial F}{\partial x_i} = \sum_{i=1}^n \dfrac{\partial F}{\partial x_i}\{x_i, x_j\}$ (by skew-symmetry).


From the last two points (1.2) easily follows. The condition (1.4) is checked by a direct computation. One has

$$\{F,\{G,H\}\} = \nabla F \cdot J \nabla(\nabla G \cdot J \nabla H) = \sum_{i,s,j,k} \frac{\partial F}{\partial x_i}\, J_{is}\, \frac{\partial}{\partial x_s}\!\left( \frac{\partial G}{\partial x_j}\, J_{jk}\, \frac{\partial H}{\partial x_k} \right) =$$

$$= \sum_{i,s,j,k} \left[ \frac{\partial F}{\partial x_i}\, J_{is}\, \frac{\partial^2 G}{\partial x_s \partial x_j}\, J_{jk}\, \frac{\partial H}{\partial x_k} + \frac{\partial F}{\partial x_i}\, J_{is}\, \frac{\partial^2 H}{\partial x_s \partial x_k}\, J_{jk}\, \frac{\partial G}{\partial x_j} + \frac{\partial F}{\partial x_i}\frac{\partial G}{\partial x_j}\frac{\partial H}{\partial x_k}\, J_{is}\, \frac{\partial J_{jk}}{\partial x_s} \right] =$$

$$= \nabla F \cdot \left( J\,\frac{\partial^2 G}{\partial x^2}\, J \right)\nabla H - \nabla F \cdot \left( J\,\frac{\partial^2 H}{\partial x^2}\, J \right)\nabla G + \sum_{i,s,j,k} \frac{\partial F}{\partial x_i}\frac{\partial G}{\partial x_j}\frac{\partial H}{\partial x_k}\, J_{is}\, \frac{\partial J_{jk}}{\partial x_s}\,.$$

Now, exploiting the skew-symmetry of $J$ and the consequent symmetry of any matrix of the form $J(\mathrm{hessian})J$, and suitably cycling over the functions $F, G, H$ and over the indices $i, j, k$, one gets

$$\{F,\{G,H\}\} + \{G,\{H,F\}\} + \{H,\{F,G\}\} = \sum_{i,j,k} \frac{\partial F}{\partial x_i}\frac{\partial G}{\partial x_j}\frac{\partial H}{\partial x_k} \left[ \sum_s \left( J_{is}\frac{\partial J_{jk}}{\partial x_s} + J_{js}\frac{\partial J_{ki}}{\partial x_s} + J_{ks}\frac{\partial J_{ij}}{\partial x_s} \right) \right].$$

Obviously, such an expression is identically zero for all $F, G, H \in \mathcal{A}(\Gamma)$ iff (1.4) holds (show that). ∎
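Condition (1.4) can also be checked numerically at sample points. The short Python sketch below (not part of the original notes; the test point is an arbitrary choice) verifies (1.4) for the linear tensor $J_{ij}(x) = -\sum_k \varepsilon_{ijk}\,x_k$, which reappears in example 1.3 below:

```python
import itertools

# Levi-Civita symbol with three indices (0-based here)
def eps(i, j, k):
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((i, j, k), 0)

# Linear Poisson tensor J_ij(x) = -sum_k eps_ijk x_k; its derivative is
# dJ_jk/dx_s = -eps_jks, a constant.
def J(x, i, j):
    return -sum(eps(i, j, k) * x[k] for k in range(3))

def dJ(j, k, s):
    return -eps(j, k, s)

def jacobi_lhs(x, i, j, k):
    """Left-hand side of condition (1.4) evaluated at the point x."""
    return sum(J(x, i, s) * dJ(j, k, s)
             + J(x, j, s) * dJ(k, i, s)
             + J(x, k, s) * dJ(i, j, s) for s in range(3))

x = (0.3, -1.2, 2.5)  # arbitrary test point
ok = all(abs(jacobi_lhs(x, i, j, k)) < 1e-12
         for i, j, k in itertools.product(range(3), repeat=3))
print(ok)  # True: (1.4) holds, so the bracket is Poisson
```

Since (1.4) is polynomial in $x$, checking it at a generic point is good numerical evidence, though of course not a proof.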

As a consequence of the above proposition, the Hamiltonian vector fields are of the form (set $F = x_k$ and $G = H$ in (1.2))

$$X_H(x) := \{x, H\} = J(x)\,\nabla_x H(x)\,, \tag{1.5}$$

i.e. they are proportional to the gradient of the Hamiltonian function through the operator function $J(x)$. The latter operator, when (1.4) is satisfied, takes the name of Poisson tensor. A Poisson tensor $J(x)$ is singular at $x$ if there exists a vector field $u(x) \not\equiv 0$ such that $J(x)u(x) = 0$, i.e. if $\ker J(x)$ is nontrivial. The functions $C(x)$ such that $\nabla C(x) \in \ker J(x)$ have a vanishing Poisson bracket with any other function $F$ defined on $\Gamma$, since $\{F, C\} = \nabla F \cdot J \nabla C \equiv 0$ independently of $F$. Such special functions are called Casimir invariants associated to the given Poisson tensor $J$, and are constants of motion of any Hamiltonian system with vector field $X_H$ associated to $J$, i.e.:

$$\dot C = \sum_j \frac{\partial C}{\partial x_j}\,\dot x_j = \sum_{j,k} \frac{\partial C}{\partial x_j}\, J_{jk}\, \frac{\partial H}{\partial x_k} = \{C, H\} \equiv 0 \quad \text{independently of } H.$$

Example 1.1. In the "standard case" $x = (q, p)$, where $q \in \mathbb{R}^n$ is the vector of generalized coordinates and $p \in \mathbb{R}^n$ is the vector of the conjugate momenta, the Hamilton equations read $\dot q = \partial H/\partial p$, $\dot p = -\partial H/\partial q$, or, in compact form, $\dot x = J_{2n}\nabla_x H$, where

$$J_{2n} := \begin{pmatrix} O_n & I_n \\ -I_n & O_n \end{pmatrix} \tag{1.6}$$

is the $2n\times 2n$ standard symplectic matrix, independent of $x$. The Poisson bracket then reads

$$\{F,G\} = \sum_{j=1}^n \left[ (\partial_{q_j}F)(\partial_{p_j}G) - (\partial_{q_j}G)(\partial_{p_j}F) \right] = \frac{\partial F}{\partial q}\cdot\frac{\partial G}{\partial p} - \frac{\partial F}{\partial p}\cdot\frac{\partial G}{\partial q}\,.$$

Everything extends to the case $n \to +\infty$, paying attention to matters of convergence.
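As a numerical illustration (not from the notes; step size and test point are arbitrary choices), the standard bracket can be evaluated with centered finite differences, recovering $\{q,p\} = 1$ and the components of $X_H$ for a harmonic oscillator with $\omega = 1$:

```python
# Centered-difference evaluation of the standard bracket
# {F,G} = dF/dq dG/dp - dF/dp dG/dq (one degree of freedom).
# Step size h and the test point are arbitrary choices of this sketch.
h = 1e-5

def grad(F, q, p):
    dFdq = (F(q + h, p) - F(q - h, p)) / (2 * h)
    dFdp = (F(q, p + h) - F(q, p - h)) / (2 * h)
    return dFdq, dFdp

def bracket(F, G, q, p):
    Fq, Fp = grad(F, q, p)
    Gq, Gp = grad(G, q, p)
    return Fq * Gp - Fp * Gq

Q = lambda q, p: q
P = lambda q, p: p
H = lambda q, p: 0.5 * (p * p + q * q)  # harmonic oscillator, omega = 1

q0, p0 = 0.7, -0.3
print(round(bracket(Q, P, q0, p0), 6))  # 1.0   ({q, p} = 1)
print(round(bracket(Q, H, q0, p0), 6))  # -0.3  ({q, H} = p)
print(round(bracket(P, H, q0, p0), 6))  # -0.7  ({p, H} = -q)
```

The last two values are exactly the components of the Hamiltonian vector field $X_H = (\dot q, \dot p) = (p, -q)$ at the test point.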


We recall that the standard form of the Hamilton equations just mentioned is implied by that of the Euler-Lagrange equations when passing from the Lagrangian to the Hamiltonian formalism through a Legendre transformation, if this is possible.

Example 1.2. Let us consider a single harmonic oscillator, with Hamiltonian $H = \frac{1}{2}(p^2 + \omega^2 q^2)$, and introduce the complex variables $z = (\omega q + i p)/\sqrt{2\omega}$ and $z^* = (\omega q - i p)/\sqrt{2\omega}$, where $i$ is the imaginary unit. In terms of such complex coordinates, known as complex Birkhoff coordinates, the Hamiltonian reads $\tilde H = \omega |z|^2$, and the Hamilton equations become $\dot z = -i\omega z = -i\,\partial\tilde H/\partial z^*$ and its complex conjugate $\dot z^* = i\omega z^* = i\,\partial\tilde H/\partial z$. The new Poisson tensor is the second Pauli matrix,

$$\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}.$$

Thus, with respect to the Birkhoff vector $\zeta = (z, z^*)^T$, the equations of motion of the harmonic oscillator take on the form $\dot\zeta = \sigma_2 \nabla_\zeta \tilde H$.
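A quick numerical confirmation (an illustrative sketch, not from the notes; $\omega$ and the phase-space point are arbitrary test values) that the Birkhoff change of variables preserves the Hamiltonian and turns the equations of motion into $\dot z = -i\omega z$:

```python
import math

# Birkhoff coordinates z = (omega*q + i*p)/sqrt(2*omega) for the harmonic
# oscillator; omega and the phase-space point are arbitrary test values.
omega, q, p = 2.0, 0.4, -1.1
z = (omega * q + 1j * p) / math.sqrt(2 * omega)

# The transformed Hamiltonian equals the original one
H_orig = 0.5 * (p ** 2 + omega ** 2 * q ** 2)
H_birk = omega * abs(z) ** 2
print(abs(H_orig - H_birk) < 1e-12)  # True

# Hamilton's equations qdot = p, pdot = -omega^2 q give zdot = -i omega z
qdot, pdot = p, -omega ** 2 * q
zdot = (omega * qdot + 1j * pdot) / math.sqrt(2 * omega)
print(abs(zdot - (-1j * omega * z)) < 1e-12)  # True
```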

Example 1.3. The Euler equations, describing the evolution of the angular momentum $L$ of a rigid body in a co-moving frame and in the absence of external torque (moment of external forces), read $\dot L = L \wedge I^{-1}L$, where $I$ is the inertia tensor of the body (a $3\times 3$ symmetric, positive definite matrix) and $\wedge$ denotes the standard vector product in $\mathbb{R}^3$. This is a Hamiltonian system with Hamiltonian function $H(L) = \frac{1}{2}\,L\cdot I^{-1}L$ and Poisson tensor

$$J(L) := \begin{pmatrix} 0 & -L_3 & L_2 \\ L_3 & 0 & -L_1 \\ -L_2 & L_1 & 0 \end{pmatrix} = L\,\wedge\ .$$

In this way, the Euler equations have the standard form $\dot L = J(L)\nabla_L H(L)$, and the Poisson bracket of two functions $F(L)$ and $G(L)$ is $\{F,G\} = L\cdot(\nabla G \wedge \nabla F)$. In order to check that relation (1.4) holds, observe that one can write $J_{ij}(L) = -\sum_{k=1}^3 \varepsilon_{ijk} L_k$, where $\varepsilon_{ijk}$ is the Levi-Civita symbol with three indices $i,j,k = 1,2,3$, so defined: $\varepsilon_{ijk} = +1$ if $(i,j,k)$ is an even permutation of $(1,2,3)$; $\varepsilon_{ijk} = -1$ if $(i,j,k)$ is an odd permutation of $(1,2,3)$; and $\varepsilon_{ijk} = 0$ if any two indices are equal (recall that a permutation is even or odd when it is composed of an even or odd number of pair exchanges, respectively). The following identity is also useful: $\sum_i \varepsilon_{ijk}\varepsilon_{ilm} = \delta_{jl}\delta_{km} - \delta_{kl}\delta_{jm}$, where $\delta_{ij}$ is the Kronecker delta, whose value is 1 if $i = j$ and zero otherwise.

We finally notice that the Casimir invariants of $J(L)$ are all the functions $C(L) := f(|L|^2)$, since $\nabla C = 2 f'(|L|^2)\,L$, so that $J(L)\nabla C = L \wedge (2f'L) = 0$.
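Both conserved quantities of this example can be observed numerically. The following sketch (not part of the notes; inertia moments, initial datum and step sizes are arbitrary test choices) integrates the Euler equations with a standard Runge-Kutta step and checks that $H$ and the Casimir $|L|^2$ stay constant to within the integration error:

```python
# RK4 integration of the Euler equations Ldot = L ^ I^{-1}L, checking numerical
# conservation of the Hamiltonian H = L.I^{-1}L/2 and of the Casimir |L|^2.
# Inertia moments, initial datum and step sizes are arbitrary test choices.
I = (1.0, 2.0, 3.0)  # principal moments of inertia

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def f(L):
    w = tuple(L[k] / I[k] for k in range(3))  # angular velocity I^{-1}L
    return cross(L, w)

def rk4_step(L, dt):
    k1 = f(L)
    k2 = f(tuple(L[i] + 0.5 * dt * k1[i] for i in range(3)))
    k3 = f(tuple(L[i] + 0.5 * dt * k2[i] for i in range(3)))
    k4 = f(tuple(L[i] + dt * k3[i] for i in range(3)))
    return tuple(L[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
                 for i in range(3))

def H(L): return 0.5 * sum(L[k] ** 2 / I[k] for k in range(3))
def casimir(L): return sum(Lk ** 2 for Lk in L)

L = (1.0, 0.5, -0.8)
H0, C0 = H(L), casimir(L)
for _ in range(500):
    L = rk4_step(L, 0.01)
print(abs(H(L) - H0) < 1e-6, abs(casimir(L) - C0) < 1e-6)  # True True
```

Note that $H$ is conserved because it is the Hamiltonian, while $|L|^2$ is conserved for any Hamiltonian with this Poisson tensor, being a Casimir invariant.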

Example 1.4. Consider the wave equation $u_{tt} = u_{xx}$, where the unknown function $u(t,x)$ is defined on $\mathbb{R}\times[0,\ell]$ with periodic boundary conditions, i.e. $u$ is actually defined on $\mathbb{R}\times\mathbb{R}/(\ell\mathbb{Z})$. The wave equation is Hamiltonian, with Hamiltonian function given by the energy integral, namely $H(u,p) = \frac{1}{2}\int_0^\ell (p^2 + u_x^2)\,dx$, and Poisson tensor $J_2$, the standard $2\times 2$ symplectic matrix. In this way, the Hamilton equations read $u_t = \delta H/\delta p = p$, $p_t = -\delta H/\delta u = u_{xx}$. Indeed the gradient $\nabla H = (\delta H/\delta u, \delta H/\delta p)$ is meant in the $L^2$ sense, i.e. $\nabla H$ is the object that multiplies the increment in the differential of $H$:

$$dH(u,p;h,k) := \frac{d}{d\epsilon}\,H(u + \epsilon h,\, p + \epsilon k)\Big|_{\epsilon = 0} = \int_0^\ell \left( \frac{\delta H}{\delta u}\,h + \frac{\delta H}{\delta p}\,k \right) dx = \langle \nabla H, (h,k)\rangle_{L^2}\,.$$

In this case the phase space $\Gamma$ of the system is the set of pairs of space-periodic functions $(u,p)(t,x)$, with some specified regularity. Observing that $u$ enters the Hamiltonian $H$ only through its space derivative $u_x$, one can introduce the new variable $v := u_x$. In terms of the variables $(v,p)$ the wave equation reads $v_t = p_x$, $p_t = v_x$, which are the Hamilton equations associated to $H'(v,p) = \frac{1}{2}\int_0^\ell (p^2 + v^2)\,dx$, the Poisson tensor being $J' = \sigma_1\,\partial/\partial x$, where $\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ is the first Pauli matrix. A third Poisson structure is obtained by introducing the new variables $w^\pm := (v \pm p)/\sqrt{2}$, in terms of which the wave equation reads $w^\pm_t = \pm w^\pm_x$. These are the Hamilton equations associated to $H''(w^+, w^-) = \frac{1}{2}\int_0^\ell \left[(w^+)^2 + (w^-)^2\right] dx$, with Poisson tensor $J'' = \sigma_3\,\partial/\partial x$, where $\sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$. Such a Poisson structure of the wave equation is also useful to solve it, since the two first-order equations are decoupled in this case, and one has $w^\pm(t,x) = w^\pm(0, x \pm t)$, i.e. a left ($+$ sign) and right ($-$ sign) translation of the initial profiles, respectively. As a final remark, notice that in all the three mentioned Poisson structures the Poisson tensor is constant (i.e. it does not depend on the "point" in the phase space); as a consequence, relation (1.4) is trivially satisfied.

Example 1.5. The Korteweg-de Vries (KdV) equation $v_t = -2\gamma v_{xxx} + 6 v v_x$ is a one-dimensional model for the propagation of shallow-water surface waves and, as such, is also the most basic model for the dynamics of tsunamis. The unknown function $v(t,x)$ is defined on $\mathbb{R}\times[0,\ell]$ with periodic boundary conditions. The KdV equation is a Hamiltonian system with Hamiltonian function $H(v) = \int_0^\ell (\gamma v_x^2 + v^3)\,dx$ and Poisson tensor $J := \partial/\partial x$, the derivation operator, so that $v_t = J\nabla H = \partial_x(\delta H/\delta v)$. Notice that $J$ is skew-symmetric and does not depend on $v$, so that relation (1.4) is trivially satisfied. Finally, notice that $\partial_x(\delta C/\delta v) = 0$ iff $\delta C/\delta v$ is independent of $x$, which implies that the Casimir invariant of the system is $C(v) = \int_0^\ell v\,dx$, as well as any function of it.
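The two structural facts just used, skew-symmetry of $J = \partial/\partial x$ and the Casimir $\int v\,dx$, survive a simple discretization. The sketch below (not from the notes; grid size and test profiles are arbitrary choices) replaces $\partial/\partial x$ with a centered difference on a periodic grid:

```python
import math

# Discrete counterpart of the KdV Poisson tensor J = d/dx: a centered
# difference D on a periodic grid.  Grid size and the test profiles are
# arbitrary choices of this sketch.
N = 32
dx = 1.0 / N

def D(w):
    """Centered periodic difference: (D w)_j = (w_{j+1} - w_{j-1})/(2 dx)."""
    return [(w[(j + 1) % N] - w[(j - 1) % N]) / (2 * dx) for j in range(N)]

v = [math.sin(2 * math.pi * j / N) + 0.3 * math.cos(4 * math.pi * j / N)
     for j in range(N)]
w = [math.cos(6 * math.pi * j / N) for j in range(N)]

# Skew-symmetry of J: <v, D w> = -<D v, w>
lhs = sum(a * b for a, b in zip(v, D(w)))
rhs = -sum(a * b for a, b in zip(D(v), w))
print(abs(lhs - rhs) < 1e-10)  # True

# Discrete Casimir: the sum of D applied to anything vanishes (telescoping),
# so sum(v) dx is conserved by any flow of the form v_t = D(dH/dv)
print(abs(sum(D(v))) < 1e-10)  # True
```

Both identities hold exactly for the discrete operator (up to floating-point rounding), by index shifting in the sums.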

### 1.2 Canonical transformations

Given the system $\dot x = J(x)\nabla_x H(x)$, under any change of coordinates $x \mapsto y = f(x)$ with inverse $y \mapsto x = g(y)$, it transforms to $\dot y = \tilde J(y)\nabla_y \tilde H(y)$, where $\tilde H(y) := H(g(y))$ and

$$\tilde J(y) := \left( \frac{\partial g}{\partial y} \right)^{-1} J(g(y)) \left[ \left( \frac{\partial g}{\partial y} \right)^{-1} \right]^T, \tag{1.7}$$

the superscript $T$ denoting transposition. The above formula turns out to be useful also when expressed in terms of $x$, which yields

$$\hat J(x) := \tilde J(f(x)) = \left( \frac{\partial f}{\partial x} \right) J(x) \left( \frac{\partial f}{\partial x} \right)^T. \tag{1.8}$$
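For a linear change of variables the Jacobian in (1.8) is constant, and the rule can be evaluated directly. The sketch below (an illustration, not from the notes; the matrices are arbitrary test values) applies (1.8) to a pure scaling $(Q,P) = (aq, bp)$ in one degree of freedom, for which $\hat J = ab\,J_2$: the scaling preserves $J_2$ iff $ab = 1$.

```python
# Transformation rule (1.8) for a linear change of variables in one degree of
# freedom: a scaling (Q,P) = (a q, b p) has constant Jacobian diag(a, b), and
# (1.8) gives J^ = a*b*J_2, so the scaling is canonical iff a*b = 1.
# The numbers are arbitrary test values.
def transform(Jac, J):
    """Jac @ J @ Jac^T for 2x2 matrices given as nested lists."""
    n = 2
    JJt = [[sum(J[i][k] * Jac[j][k] for k in range(n)) for j in range(n)]
           for i in range(n)]
    return [[sum(Jac[i][k] * JJt[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

J2 = [[0.0, 1.0], [-1.0, 0.0]]

bad = transform([[2.0, 0.0], [0.0, 1.0]], J2)   # scaling with a*b = 2
good = transform([[2.0, 0.0], [0.0, 0.5]], J2)  # scaling with a*b = 1
print(bad)   # [[0.0, 2.0], [-2.0, 0.0]] -- not the symplectic matrix
print(good)  # [[0.0, 1.0], [-1.0, 0.0]] -- canonical
```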

It can be shown that $\tilde J(y)$ is a Poisson tensor iff $J(x)$ is a Poisson tensor, so that, starting from a Hamiltonian system and performing any change of variables, one still gets a Hamiltonian system. Among all the possible changes of variables, a privileged role is played by those leaving the Poisson tensor invariant in form, namely $\tilde J(y) = J(y)$ in (1.7), or $\hat J(x) = J(x)$ in (1.8). Such particular transformations are called canonical. Noncanonical transformations appear in examples 1.2 and 1.4 above. Canonical transformations are particularly useful in the standard Hamiltonian case, where the reference Poisson tensor is the standard symplectic matrix $J_{2n}$ defined in (1.6). Making use of formula (1.8) with $J = \hat J = J_{2n}$, $x = (q,p)$ and $y = (Q,P) = f(q,p)$, one gets

$$\begin{pmatrix} O_n & I_n \\ -I_n & O_n \end{pmatrix} = \left( \frac{\partial(Q,P)}{\partial(q,p)} \right) \begin{pmatrix} O_n & I_n \\ -I_n & O_n \end{pmatrix} \left( \frac{\partial(Q,P)}{\partial(q,p)} \right)^T,$$

which is equivalent to the relations

$$\left[ \frac{\partial Q}{\partial q} \left( \frac{\partial Q}{\partial p} \right)^T - \frac{\partial Q}{\partial p} \left( \frac{\partial Q}{\partial q} \right)^T \right]_{ij} = \sum_{s=1}^n \left( \frac{\partial Q_i}{\partial q_s}\frac{\partial Q_j}{\partial p_s} - \frac{\partial Q_i}{\partial p_s}\frac{\partial Q_j}{\partial q_s} \right) =: \{Q_i, Q_j\}_{q,p} = 0\,; \tag{1.9}$$

$$\left[ \frac{\partial P}{\partial q} \left( \frac{\partial P}{\partial p} \right)^T - \frac{\partial P}{\partial p} \left( \frac{\partial P}{\partial q} \right)^T \right]_{ij} = \sum_{s=1}^n \left( \frac{\partial P_i}{\partial q_s}\frac{\partial P_j}{\partial p_s} - \frac{\partial P_i}{\partial p_s}\frac{\partial P_j}{\partial q_s} \right) =: \{P_i, P_j\}_{q,p} = 0\,; \tag{1.10}$$

$$\left[ \frac{\partial Q}{\partial q} \left( \frac{\partial P}{\partial p} \right)^T - \frac{\partial Q}{\partial p} \left( \frac{\partial P}{\partial q} \right)^T \right]_{ij} = \sum_{s=1}^n \left( \frac{\partial Q_i}{\partial q_s}\frac{\partial P_j}{\partial p_s} - \frac{\partial Q_i}{\partial p_s}\frac{\partial P_j}{\partial q_s} \right) =: \{Q_i, P_j\}_{q,p} = \delta_{ij}\,, \tag{1.11}$$

for all $i,j = 1,\dots,n$. Such relations are the necessary and sufficient conditions for a change of variables to be canonical in standard Hamiltonian mechanics.

Example 1.6. Consider again the harmonic oscillator of example 1.2. The transformation $(q,p) \mapsto (\varphi, I)$ defined by $q = \sqrt{2I/\omega}\,\sin\varphi$, $p = \sqrt{2\omega I}\,\cos\varphi$ is canonical, since $\{\varphi, I\}_{q,p} = 1$. The new Hamiltonian reads $\tilde H(\varphi, I) = \omega I$, and the corresponding equations read $\dot\varphi = \omega$, $\dot I = 0$.
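The canonicity condition $\{\varphi, I\}_{q,p} = 1$ can be verified numerically from the inverse map $I = (p^2 + \omega^2 q^2)/(2\omega)$, $\varphi = \operatorname{atan2}(\omega q, p)$. A minimal sketch (not part of the notes; $\omega$, the step size and the test point are arbitrary choices):

```python
import math

# Inverse of the action-angle map of example 1.6: I = (p^2 + omega^2 q^2)/(2 omega),
# phi = atan2(omega*q, p).  A centered-difference evaluation of {phi, I}_{q,p}
# at a test point should give 1 (canonicity).  Step h is an arbitrary choice.
omega, h = 1.5, 1e-6

def I_of(q, p): return (p * p + omega * omega * q * q) / (2 * omega)
def phi_of(q, p): return math.atan2(omega * q, p)

def bracket(F, G, q, p):
    Fq = (F(q + h, p) - F(q - h, p)) / (2 * h)
    Fp = (F(q, p + h) - F(q, p - h)) / (2 * h)
    Gq = (G(q + h, p) - G(q - h, p)) / (2 * h)
    Gp = (G(q, p + h) - G(q, p - h)) / (2 * h)
    return Fq * Gp - Fp * Gq

val = bracket(phi_of, I_of, 0.6, 0.9)
print(abs(val - 1.0) < 1e-6)  # True: {phi, I}_{q,p} = 1
```

Analytically, $\{\varphi, I\}_{q,p} = (p^2 + \omega^2 q^2)/(\omega^2 q^2 + p^2) = 1$, which is what the finite-difference value approximates.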

Sometimes the requirement of canonicity in the sense just stated turns out to be too restrictive. For example, the simple rescaling

$$(q, p, H, t) \mapsto (Q, P, K, T) = (\alpha q,\, \beta p,\, \gamma H,\, \delta t)\,, \tag{1.12}$$

depending on four real parameters $\alpha, \beta, \gamma, \delta$, preserves the form of the Hamilton equations, namely $dQ/dT = \partial K/\partial P$, $dP/dT = -\partial K/\partial Q$, under the unique condition $\alpha\beta = \gamma\delta$. On the other hand, in order to satisfy the relations (1.11) one needs the further condition $\alpha\beta = 1$, which appears a bit superfluous. In order to extend the concept of canonical transformation (in the standard case) one starts from the following Hamilton variational principle.

Proposition 1.2. The solutions of the Hamilton equations $\dot q = \partial H/\partial p$, $\dot p = -\partial H/\partial q$ are the critical points (i.e. curves) of the action functional

$$A(q,p) := \int_{t_1}^{t_2} \left[ p\cdot\dot q - H(q,p,t) \right] dt$$

in the space of the smooth curves $t \mapsto (q(t), p(t))$ such that $q(t_1)$ and $q(t_2)$ are fixed.

PROOF. The differential of $A$ at $(q,p)$ along any direction $(h,k)$ such that $h(t_1) = h(t_2) = 0$ is

$$dA(q,p;h,k) = \int_{t_1}^{t_2} \left[ \left( \dot q - \frac{\partial H}{\partial p} \right)\cdot k + \left( -\dot p - \frac{\partial H}{\partial q} \right)\cdot h \right] dt + p\cdot h\,\Big|_{t_1}^{t_2}\,. \tag{1.13}$$

The boundary term above vanishes, and $dA = 0$ for all the directions $(h,k)$ iff the Hamilton equations hold. ∎


Consider now a transformation $(q,p,H,t) \mapsto (Q,P,K,T)$, such that $T$ depends only on $t$, satisfying

$$p\cdot dq - H\,dt = c\,(P\cdot dQ - K\,dT) + dF(q,Q,t)\,, \tag{1.14}$$

where $c$ is a constant; $p\cdot dq - H\,dt$ is the so-called Poincaré-Cartan 1-form (differential form of degree one). Integrating the relation (1.14) over $[t_1, t_2]$ yields

$$\underbrace{\int_{t_1}^{t_2} \left[ p\cdot\dot q - H(q,p,t) \right] dt}_{A_{\mathrm{old}}} = c \underbrace{\int_{T(t_1)}^{T(t_2)} \left[ P\cdot\frac{dQ}{dT} - K(Q,P,T) \right] dT}_{A_{\mathrm{new}}} + \underbrace{F(q(t), Q(T(t)), t)\,\Big|_{t_1}^{t_2}}_{\Delta F}\,,$$

which we rewrite in short, with the notation indicated above, as $A_{\mathrm{old}} = c\,A_{\mathrm{new}} + \Delta F$. Supposing now that the transformation at hand is such that to fixed values of $q(t_1)$ and $q(t_2)$ there correspond fixed values of $Q(t_1)$ and $Q(t_2)$, one has

Proposition 1.3. The transformation $(q,p,H,t) \mapsto (Q,P,K,T)$, satisfying (1.14), maps Hamilton equations into Hamilton equations.

PROOF. $dA_{\mathrm{old}} = c\,dA_{\mathrm{new}} + d\Delta F$, and the last term vanishes, so that $dA_{\mathrm{old}} = 0$ iff $dA_{\mathrm{new}} = 0$. According to (1.13), the condition $dA_{\mathrm{new/old}} = 0$ is equivalent to the Hamilton equations. ∎

Very often the canonical transformations are defined as those satisfying (1.14). The constant $c$ and the function $F(q,Q,t)$ appearing on the right hand side of (1.14) are called the valence and the generating function of the transformation, respectively.

Example 1.7. The rescaling (1.12) is canonical in the sense just stated, with valence $c = 1/(\alpha\beta)$ and generating function $F$ equal to any constant in (1.14), under the condition $\alpha\beta = \gamma\delta$.

Let us now restrict our attention to the case of transformations that are canonical in the sense of (1.14), having unitary valence, i.e. $c = 1$, and not involving any re-parametrization of time, i.e. $T(t) \equiv t$. Notice that if the generating function $F$ is not constant, then (1.14) implies

$$\frac{\partial F}{\partial q} = p\,; \qquad \frac{\partial F}{\partial Q} = -P\,; \qquad \frac{\partial F}{\partial t} = K - H\,.$$

Starting from (1.14), one can introduce another generating function $S(q,P,t) := F(q,Q,t) + Q\cdot P$, satisfying

$$dS = p\cdot dq + Q\cdot dP + (K - H)\,dt\,, \tag{1.15}$$

which implies

$$\frac{\partial S}{\partial q} = p\,; \qquad \frac{\partial S}{\partial P} = Q\,; \qquad \frac{\partial S}{\partial t} = K - H\,. \tag{1.16}$$

Suppose that one looks for a change of variables $(q,p,H,t) \mapsto (Q,P,K,t)$ such that $K = \varphi(t)$ depends (at most) on time only, being independent of $Q$ and $P$. This amounts to looking for a canonical transformation such that the new Hamiltonian variables do not evolve in time.

Notice that one can set $\varphi(t) \equiv 0$ without any loss of generality. Indeed, if $S$ satisfies (1.16) with $K = \varphi(t)$, then $\tilde S := S - \int \varphi(t)\,dt$ satisfies (1.16) with $K \equiv 0$. With this in mind, the first and the third of relations (1.16) yield the Hamilton-Jacobi equation

$$\frac{\partial S}{\partial t} + H\!\left( q, \frac{\partial S}{\partial q}, t \right) = 0\,, \tag{1.17}$$


a first-order PDE in the unknown function $S(q,t)$. Notice that, among the possible solutions of equation (1.17), we are interested in the so-called complete integrals, namely those solutions depending on $n$ parameters $P_1, \dots, P_n$ and such that

$$\det\left( \frac{\partial^2 S}{\partial q\,\partial P} \right) \neq 0\,. \tag{1.18}$$

A complete integral of the Hamilton-Jacobi equation generates a canonical transformation $(q,p,H,t) \mapsto (Q,P,0,t)$, since condition (1.18), together with the first two equations of (1.16), allows one to express $(q,p)$ in terms of $(Q,P)$ and $t$, and/or vice versa.

Very often, especially in problems of perturbation theory, $H$ is independent of time, and one looks for time-independent canonical transformations such that the new Hamiltonian $K$ depends on the momenta $P$ only, since in the latter case the Hamilton equations are immediately solved: $Q(t) = Q(0) + t\,(\partial K/\partial P)$, at constant $P$ (such canonical transformations rectify the flow of the given Hamiltonian system). In this case, the following time-independent version of the Hamilton-Jacobi equation holds:

$$H\!\left( q, \frac{\partial S}{\partial q}(q,P) \right) = K(P)\,. \tag{1.19}$$

Here again, a complete integral of the above equation is required. Notice that the new Hamiltonian $K$ is an unknown of the problem.
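Equation (1.19) can be made concrete on the harmonic oscillator: taking the action as the new momentum $P$, one finds $\partial S/\partial q = \sqrt{2\omega P - \omega^2 q^2}$ and $K(P) = \omega P$. The sketch below (an illustration, not from the notes; $\omega$ and the test point are arbitrary values) checks the defining identity pointwise, which requires no quadrature:

```python
import math

# Time-independent Hamilton-Jacobi equation (1.19) for the harmonic oscillator
# H = (p^2 + omega^2 q^2)/2: with the action P as new momentum, a complete
# integral has dS/dq = sqrt(2 omega P - omega^2 q^2) and K(P) = omega P.
# The identity H(q, dS/dq) = K(P) is checked pointwise (omega, q, P arbitrary).
omega = 1.3

def S_q(q, P):
    """dS/dq for the oscillator complete integral (valid where 2*omega*P > omega^2 q^2)."""
    return math.sqrt(2 * omega * P - omega ** 2 * q ** 2)

def H(q, p):
    return 0.5 * (p * p + omega ** 2 * q * q)

def K(P):
    return omega * P

q, P = 0.4, 1.0  # test point inside the allowed region
print(abs(H(q, S_q(q, P)) - K(P)) < 1e-12)  # True
```

Indeed $H(q, \partial S/\partial q) = \frac{1}{2}(2\omega P - \omega^2 q^2 + \omega^2 q^2) = \omega P$ identically in $q$, as (1.19) demands.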

Another very convenient way of performing canonical transformations is to do it through Hamiltonian flows. More precisely, let us consider a Hamiltonian $G(Q,P)$ and its associated Hamilton equations $\dot Q = \partial G/\partial P$, $\dot P = -\partial G/\partial Q$. Let $\Phi_G^s$ denote the flow of $G$, so that $(Q,P) = \Phi_G^s(q,p)$ is the solution of the Hamilton equations at time $s$, corresponding to the initial condition $(q,p)$ at $s = 0$. We also denote by

$$L_G := \{\cdot\,, G\} = (J\nabla G)\cdot\nabla = X_G\cdot\nabla \tag{1.20}$$

the Lie derivative along the Hamiltonian vector field $X_G$; notice that $L_G F = \{F, G\}$.

Proposition 1.4. The change of variables $\Phi_G^s : (q,p) \mapsto (Q,P)$ is canonical.

PROOF. We want to show that (1.9)-(1.11) are satisfied. Due to a repeated use of the Leibniz rule, one has

$$\frac{d}{ds}\{Q_i, P_j\}_{q,p} = \{\dot Q_i, P_j\}_{q,p} + \{Q_i, \dot P_j\}_{q,p} = \left\{ \frac{\partial G}{\partial P_i}, P_j \right\}_{q,p} - \left\{ Q_i, \frac{\partial G}{\partial Q_j} \right\}_{q,p} =$$

$$= \frac{\partial}{\partial P_i}\{G, P_j\}_{q,p} - \frac{\partial}{\partial Q_j}\{Q_i, G\}_{q,p} = -\frac{\partial \dot P_j}{\partial P_i} - \frac{\partial \dot Q_i}{\partial Q_j} = \frac{\partial^2 G}{\partial P_i\,\partial Q_j} - \frac{\partial^2 G}{\partial Q_j\,\partial P_i} \equiv 0\,.$$

Thus $\{Q_i, P_j\}_{q,p}(s) = \{Q_i, P_j\}_{q,p}(0) = \{q_i, p_j\}_{q,p} = \delta_{ij}$. Analogously, one gets $\{Q_i, Q_j\}_{q,p} = 0$ and $\{P_i, P_j\}_{q,p} = 0$. ∎
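Proposition 1.4 can be illustrated numerically on a nonlinear example. The sketch below (not from the notes; Hamiltonian, integration time, step sizes and base point are arbitrary test choices) integrates the flow of a pendulum Hamiltonian and checks that the Jacobian of the flow map has determinant 1, which in one degree of freedom is equivalent to conditions (1.9)-(1.11):

```python
import math

# Numerical illustration of proposition 1.4: the time-s flow of the pendulum
# Hamiltonian G(q,p) = p^2/2 - cos q is computed with RK4, and the determinant
# of its Jacobian (centered differences) is checked to be 1, which in one
# degree of freedom is the canonicity condition.
def flow(q, p, s=0.5, dt=1e-3):
    def rhs(q, p):
        return p, -math.sin(q)  # Hamilton equations for G
    for _ in range(round(s / dt)):
        k1 = rhs(q, p)
        k2 = rhs(q + 0.5 * dt * k1[0], p + 0.5 * dt * k1[1])
        k3 = rhs(q + 0.5 * dt * k2[0], p + 0.5 * dt * k2[1])
        k4 = rhs(q + dt * k3[0], p + dt * k3[1])
        q += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        p += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return q, p

h, q0, p0 = 1e-6, 0.7, 0.2
dQdq, dPdq = [(a - b) / (2 * h) for a, b in zip(flow(q0 + h, p0), flow(q0 - h, p0))]
dQdp, dPdp = [(a - b) / (2 * h) for a, b in zip(flow(q0, p0 + h), flow(q0, p0 - h))]
detM = dQdq * dPdp - dPdq * dQdp
print(abs(detM - 1.0) < 1e-6)  # True: the flow map is canonical
```

The RK4 integrator is not exactly symplectic, so the determinant is 1 only up to the integration and differencing errors, which is why the check uses a tolerance.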


Proposition 1.5. For any function $F$ one has

$$F \circ \Phi_G^s = e^{s L_G} F\,. \tag{1.21}$$

PROOF. Set $\hat F(s) := F \circ \Phi_G^s$, and observe that $G \circ \Phi_G^s = G$ (why?). Then $\dot{\hat F} = \{\hat F, G\} = L_G \hat F$, $\ddot{\hat F} = \{\dot{\hat F}, G\} = \{\{\hat F, G\}, G\} = L_G^2 \hat F$, and so on. Thus, the Taylor expansion of $\hat F(s)$ centered at $s = 0$ reads

$$F \circ \Phi_G^s = \hat F(s) = F + s L_G F + \frac{s^2}{2} L_G^2 F + \cdots = e^{s L_G} F\,. \qquad ∎$$
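For $G = (q^2 + p^2)/2$ and $F = q$ the Lie series can be summed by hand, since the iterated brackets cycle with period 4: $L_G q = p$, $L_G p = -q$, so $e^{sL_G} q = q\cos s + p\sin s$, which is exactly $q$ evaluated along the flow of $G$ (a rotation in phase space). A short check (not from the notes; the test point, time and truncation order are arbitrary choices):

```python
import math

# Lie series (1.21) for G = (q^2 + p^2)/2 and F = q: since L_G q = p and
# L_G p = -q, the iterated brackets cycle with period 4 and the series sums
# to q cos s + p sin s.
def lie_series_q(q, p, s, terms=30):
    total = 0.0
    cq, cp = q, p          # value of L_G^k q (and its companion) at (q, p)
    coeff = 1.0            # s^k / k!
    for k in range(terms):
        total += coeff * cq
        cq, cp = cp, -cq   # one more application of L_G
        coeff *= s / (k + 1)
    return total

q, p, s = 0.8, -0.5, 1.2
exact = q * math.cos(s) + p * math.sin(s)  # F composed with the flow of G
print(abs(lie_series_q(q, p, s) - exact) < 1e-12)  # True
```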

### 1.3 Integrability: Liouville theorem

A dynamical system is integrable if it possesses a number of first integrals (i.e. functions defined on the phase space not evolving in time along the flow of the system) high enough to geometrically constrain the motion, a priori, to a curve. For a generic system of the form $\dot x = u(x)$ in $\mathbb{R}^n$, integrability would require, a priori, $n-1$ first integrals (the intersection of the level sets of $m$ first integrals has co-dimension $m$ and dimension $n - m$). However, it turns out that the Hamiltonian structure reduces such a number to half the (even) dimension of the phase space.

Definition 1.3. The system defined by the Hamiltonian $H(q,p,t)$ is said to be integrable in $\Gamma \subseteq \mathbb{R}^{2n}$, in the sense of Liouville, if it admits $n$ independent first integrals $f_1(q,p,t), \dots, f_n(q,p,t)$ in involution, i.e., for any $(q,p) \in \Gamma$ and $t \in \mathbb{R}$:

1. $\partial f_j/\partial t + \{f_j, H\} = 0$ for any $j = 1,\dots,n$;

2. $\sum_{j=1}^n c_j \nabla f_j(q,p,t) = 0 \;\Rightarrow\; c_1 = \cdots = c_n = 0$ (equivalently: the rectangular matrix of the gradients of the integrals has maximal rank $n$) for any $(q,p,t)$;

3. $\{f_j, f_k\} = 0$ for any $j,k = 1,\dots,n$.
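Conditions 1 and 3 can be checked numerically on the simplest integrable example, two uncoupled oscillators. A minimal sketch (not part of the notes; step size and test point are arbitrary choices):

```python
# Two uncoupled oscillators: f1 = (p1^2 + q1^2)/2 and f2 = (p2^2 + q2^2)/2 are
# independent first integrals in involution for H = f1 + f2.  A centered-
# difference check at an arbitrary point x = (q1, q2, p1, p2):
h = 1e-5

def partial(F, x, i):
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (F(xp) - F(xm)) / (2 * h)

def bracket(F, G, x):
    """Standard bracket in R^4, coordinates ordered as (q1, q2, p1, p2)."""
    return sum(partial(F, x, j) * partial(G, x, 2 + j)
             - partial(F, x, 2 + j) * partial(G, x, j) for j in range(2))

f1 = lambda x: 0.5 * (x[2] ** 2 + x[0] ** 2)
f2 = lambda x: 0.5 * (x[3] ** 2 + x[1] ** 2)
H = lambda x: f1(x) + f2(x)

x0 = [0.3, -0.7, 1.1, 0.2]
print(abs(bracket(f1, f2, x0)) < 1e-8)  # True: involution (condition 3)
print(abs(bracket(f1, H, x0)) < 1e-8)   # True: f1 is a first integral (condition 1)
```

Involution holds here for the trivial reason that each integral depends on its own pair of conjugate variables only; nontrivial integrable systems require genuinely more work.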

Notice that often H coincides with one of the first integrals. The introduction of the above definition is motivated by the following theorem.

Theorem 1.1 (Liouville). Let the Hamiltonian system defined by $H$ be Liouville-integrable in $\Gamma \subseteq \mathbb{R}^{2n}$, and let $a \in \mathbb{R}^n$ be such that the level set

$$M_a := \{(q,p) \in \Gamma \;:\; f_1(q,p,t) = a_1,\, \dots,\, f_n(q,p,t) = a_n\}$$

is nonempty; let also $M_a^0$ denote a (nonempty) connected component of $M_a$. Then a function $S(q,t;a)$ exists, such that $p\cdot dq\,\big|_{M_a^0} = d_q S(q,a)$ and $S$ is a complete integral of the time-dependent Hamilton-Jacobi equation, i.e. the generating function of a canonical transformation $\mathcal{C} : (q,p,H,t) \mapsto (b,a,0,t)$, where $b := \partial S/\partial a$.

Remark 1.2. If $H(q,p)$ does not depend explicitly on time, then in the above definition of integrable system all the $f_j$ are independent of time as well, and condition 1 is replaced by $\{f_j, H\} = 0$. In such a case, the generating function $S(q;a)$ appearing in the Liouville theorem is a complete integral of the time-independent Hamilton-Jacobi equation $H(q, \nabla_q S) = K(a)$, thus generating a canonical transformation $\mathcal{C} : (q,p) \mapsto (b,a)$ such that $H(\mathcal{C}^{-1}(b,a)) = K(a)$.


In order to better understand the meaning of definition 1.3 and to prove theorem 1.1, we start by supposing that $H(q,p,t)$ admits $n$ independent first integrals $f_1(q,p,t), \dots, f_n(q,p,t)$, but we do not suppose, for the moment, that such first integrals are in involution. Without any loss of generality, as a condition of independence of the first integrals one can assume

$$\det\left( \frac{\partial f}{\partial p} \right) = \det\left( \frac{\partial(f_1,\dots,f_n)}{\partial(p_1,\dots,p_n)} \right) \neq 0\,,$$

in such a way that the level set $M_a = \{(q,p) : f(q,p,t) = a\}$ of the first integrals can be represented, by means of the implicit function theorem, as

$$p_1 = u_1(q,t;a)\,;\ \ \dots\ \ p_n = u_n(q,t;a)\,. \tag{1.22}$$

The above relations must hold at any time if they hold at $t = 0$. Differentiating the relation $p_i(t) = u_i(q(t),t;a)$ ($i = 1,\dots,n$) with respect to time and using the Hamilton equations one gets

$$\frac{\partial u_i}{\partial t} + \sum_{j=1}^n \left( \frac{\partial u_i}{\partial q_j} - \frac{\partial u_j}{\partial q_i} \right)\frac{\partial H}{\partial p_j} = -\left( \frac{\partial H}{\partial q_i} + \sum_{j=1}^n \frac{\partial u_j}{\partial q_i}\,\frac{\partial H}{\partial p_j} \right)\Bigg|_{p = u(q,t;a)}\,. \tag{1.23}$$

Notice that, for the sake of convenience, the same sum of terms is artificially added on both sides of the equation. By introducing the quantities

$$\mathrm{rot}(u) := \frac{\partial u}{\partial q} - \left( \frac{\partial u}{\partial q} \right)^T\,, \tag{1.24}$$

$$v(q,t) := \frac{\partial H}{\partial p}\Bigg|_{p = u(q,t;a)}\,, \tag{1.25}$$

and

$$\tilde H(q,t) := H(q, u(q,t;a), t)\,, \tag{1.26}$$

the equations (1.23) can be rewritten in compact, vector form as

$$\frac{\partial u}{\partial t} + \mathrm{rot}(u)\,v = -\nabla_q \tilde H\,. \tag{1.27}$$

Notice the similarity of the latter equation with the (unit density) Euler equation of hydrodynamics, namely

$$\frac{\partial u}{\partial t} + \mathrm{rot}(u)\,u = -\nabla\left( \frac{|u|^2}{2} + p \right)\,, \tag{1.28}$$

where $u$ is the velocity field, $p$ is the pressure and $\mathrm{rot}(u)\,u = \omega \wedge u$, $\omega := \nabla\wedge u$ being the vorticity of the fluid. The similarity of (1.27) and (1.28) is completely evident in the case of natural mechanical systems, whose Hamiltonian has the form

$$H(q,p,t) = \frac{p\cdot G(q,t)\,p}{2} + V(q,t)\,,$$

where $G(q,t)$ is an $n\times n$ positive definite matrix. In such a case $v = Gu$ and equation (1.27) takes the rather simple form

$$\frac{\partial u}{\partial t} + \mathrm{rot}(u)\,Gu = -\nabla_q\left( \frac{u\cdot Gu}{2} + V \right)\,. \tag{1.29}$$

In particular, in those cases such that $G = I_n$ the latter equation is the Euler equation in space dimension $n$, with the potential energy $V$ playing the role of the pressure.


Remark 1.3. Attention has to be paid to the fact that for the Euler equation (1.28) the pressure $p$ is determined by the divergence-free condition $\nabla\cdot u = 0$, while nothing similar holds, in general, for the equations (1.27) or (1.29).

Now, by analogy with the case of fluids, we look for curl-free, i.e. irrotational solutions of the Euler-like equation (1.27) (we recall that in fluid dynamics, looking for a solution of the Euler equation (1.28) of the form u = ∇φ leads to the Bernoulli equation for the velocity potential φ, namely ∂φ/∂t + |∇φ|2/2 + p = constant). In simply connected domains (of the n-dimensional configuration space), one has

rot(u) = 0 iff u = ∇S ,

where S = S(q, t; a). Upon substitution of u = ∇S into equation (1.27) and removing an overall gradient, one gets

$$\frac{\partial S}{\partial t} + H(q, \nabla_q S, t) = \varphi(t;a)\,. \qquad (1.30)$$

One can set ϕ(t; a) ≡ 0 without any loss of generality, and the latter equation becomes the time-dependent Hamilton-Jacobi equation (if ϕ ≢ 0, then S̃ := S − ∫ϕ dt satisfies equation (1.30) with zero right hand side). Thus the Hamilton-Jacobi equation is the analogue of the Bernoulli equation for the hydrodynamics of Hamiltonian systems. The interesting point is that the curl-free condition rot(u) = 0 is equivalent to the condition of involution of the first integrals

f₁, …, f_n. Indeed, starting from the identity

$$f_i(q, u(q,t;a), t) \equiv a_i\,, \qquad (1.31)$$

and taking its derivative with respect to q_s one gets

$$\frac{\partial f_i}{\partial q_s} + \sum_{r=1}^{n}\frac{\partial f_i}{\partial p_r}\,\frac{\partial u_r}{\partial q_s} = 0 \quad \text{for any } i = 1,\dots,n\,.$$

Thus

$$\{f_i, f_j\} = \sum_{s=1}^{n}\left(\frac{\partial f_i}{\partial q_s}\frac{\partial f_j}{\partial p_s} - \frac{\partial f_i}{\partial p_s}\frac{\partial f_j}{\partial q_s}\right) = \sum_{r,s=1}^{n}\left(\frac{\partial f_i}{\partial p_s}\frac{\partial f_j}{\partial p_r} - \frac{\partial f_i}{\partial p_r}\frac{\partial f_j}{\partial p_s}\right)\frac{\partial u_r}{\partial q_s} =$$

$$= \sum_{r,s=1}^{n}\frac{\partial f_j}{\partial p_r}\left(\frac{\partial u_r}{\partial q_s} - \frac{\partial u_s}{\partial q_r}\right)\frac{\partial f_i}{\partial p_s} = \left[\left(\frac{\partial f}{\partial p}\right)\mathrm{rot}(u)\left(\frac{\partial f}{\partial p}\right)^{T}\right]_{ji}\,,$$

which implies rot(u) = 0 iff {f_i, f_j} = 0 for any i, j = 1,…,n (the direct implication is obvious, while the reverse one requires the independence condition det(∂f/∂p) ≠ 0). This is the key point: the condition of involution of the first integrals is equivalent to that of an irrotational, i.e. gradient, velocity field of the hydrodynamic equation (1.27). The velocity potential S(q, t; a) satisfies the Hamilton-Jacobi equation and is actually a complete integral of the latter. In order to see this, one can start again from identity (1.31), setting there u = ∇S and taking the derivative with respect to a_j, getting the i, j component of the matrix identity

$$\left(\frac{\partial f}{\partial p}\right)\left(\frac{\partial^2 S}{\partial q\,\partial a}\right) = I_n\,,$$


which, by the independence condition of the first integrals, yields det(∂²S/∂q∂a) ≠ 0. We finally notice that if the first integrals, and thus the velocity field u, are known, then the potential S can be obtained by a simple integration, based on the identity d_q S = u · dq, such as

$$S(q,t;a) - S(0,t;a) = \int_{0 \to q} u(q',t;a)\cdot dq' = \int_0^1 u(\lambda q, t; a)\cdot q \, d\lambda\,,$$

where S(0, t; a) may be set to zero. The function S(q, t; a), satisfying the Hamilton-Jacobi equation, generates a canonical transformation (q, p, H, t) ↦ (b, a, 0, t) to a zero Hamiltonian, the transformation being defined by the implicit equations p = ∇_q S(q, t; a), b := ∇_a S(q, t; a). What was just reported above is actually the proof of Theorem 1.1. The restriction to the case where H, f₁, …, f_n are independent of time is left as an exercise.
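The radial formula above can be checked numerically. The sketch below (a hypothetical potential S0 with S0(0) = 0, n = 2, not from the notes) reconstructs the potential from u = ∇S0 by a midpoint-rule quadrature of the line integral and compares the result with S0:

```python
import math

def S0(q):
    # hypothetical potential with S0(0) = 0
    return q[0] ** 2 * q[1] + math.sin(q[1])

def u(q, h=1e-6):
    # u = grad S0 by central differences
    g = []
    for i in range(2):
        qp, qm = list(q), list(q)
        qp[i] += h
        qm[i] -= h
        g.append((S0(qp) - S0(qm)) / (2 * h))
    return g

def S_reconstructed(q, m=2000):
    # midpoint rule for  \int_0^1 u(lambda q) . q  dlambda
    total = 0.0
    for k in range(m):
        lam = (k + 0.5) / m
        uk = u([lam * q[0], lam * q[1]])
        total += (uk[0] * q[0] + uk[1] * q[1]) / m
    return total

q = [0.8, -0.4]
err = abs(S_reconstructed(q) - S0(q))
```

The agreement is limited only by the quadrature and finite-difference errors.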

Example 1.8. The Hamiltonian system of central motions is Liouville-integrable. Indeed, if H = |p|²/(2m) + V(|r|) is the Hamiltonian of the system, then it is easily proven that the angular momentum L = r ∧ p is a vector constant of motion (the Hamiltonian is invariant with respect to the "canonical rotations" (r, p) ↦ (r′, p′) = (Rr, Rp), where R is any orthogonal matrix; the conservation of the angular momentum is a consequence of the Noether theorem). The phase space of the system has dimension 2n = 6, and three independent first integrals in involution are, for example, f₁ := H, f₂ := |L|² and f₃ := L_z (show that).
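The involution claims of Example 1.8 can be tested numerically with finite-difference Poisson brackets; the potential V(|r|) = −1/|r|, the mass m = 1 and the phase-space point below are illustrative choices, not prescribed by the text:

```python
import math

m = 1.0

def H(z):
    x, y, zz, px, py, pz = z
    r = math.sqrt(x*x + y*y + zz*zz)
    return (px*px + py*py + pz*pz) / (2*m) - 1.0 / r   # illustrative V = -1/r

def L2(z):
    x, y, zz, px, py, pz = z
    Lx = y*pz - zz*py
    Ly = zz*px - x*pz
    Lz = x*py - y*px
    return Lx*Lx + Ly*Ly + Lz*Lz

def Lz(z):
    x, y, zz, px, py, pz = z
    return x*py - y*px

def pbracket(f, g, z, h=1e-5):
    # {f, g} = sum_i (df/dq_i dg/dp_i - df/dp_i dg/dq_i); q = z[0:3], p = z[3:6]
    def d(fun, i):
        zp, zm = list(z), list(z)
        zp[i] += h
        zm[i] -= h
        return (fun(zp) - fun(zm)) / (2*h)
    return sum(d(f, i) * d(g, i+3) - d(f, i+3) * d(g, i) for i in range(3))

z0 = [1.0, 0.5, -0.3, 0.2, -0.4, 0.7]
b1 = abs(pbracket(H, L2, z0))
b2 = abs(pbracket(H, Lz, z0))
b3 = abs(pbracket(L2, Lz, z0))
```

All three brackets vanish up to finite-difference noise, consistent with f₁, f₂, f₃ being in involution.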

Example 1.9. The Hamiltonian of n noninteracting systems, H = Σ_{j=1}^n h_j(q_j, p_j), is obviously Liouville integrable, with the choice f_j := h_j(q_j, p_j), j = 1,…,n. As an example, consider the case of harmonic oscillators, where h_j(q_j, p_j) = (p_j² + ω_j² q_j²)/2.
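A minimal numerical illustration of Example 1.9 (frequencies and initial data below are arbitrary illustrative values): under the exact flow of each oscillator, every h_j is constant in time.

```python
import math

omegas = [1.0, math.sqrt(2.0), 3.5]   # illustrative frequencies
q0 = [1.0, -0.5, 0.2]
p0 = [0.0, 0.7, -1.1]

def flow(t):
    # exact solution of q_j' = p_j, p_j' = -omega_j^2 q_j
    q, p = [], []
    for w, qj, pj in zip(omegas, q0, p0):
        c, s = math.cos(w*t), math.sin(w*t)
        q.append(qj*c + (pj/w)*s)
        p.append(-qj*w*s + pj*c)
    return q, p

def h(q, p):
    # the n first integrals f_j = h_j = (p_j^2 + omega_j^2 q_j^2)/2
    return [(pj*pj + w*w*qj*qj)/2 for w, qj, pj in zip(omegas, q, p)]

h_start = h(q0, p0)
qt, pt = flow(2.7)
h_end = h(qt, pt)
drift = max(abs(a - b) for a, b in zip(h_start, h_end))
```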

Example 1.10. The wave equation u_tt = u_xx is an infinite dimensional Liouville-integrable Hamiltonian system. For the sake of simplicity we consider the case of fixed ends: u(0, t) = 0 = u(ℓ, t), u_t(0, t) = 0 = u_t(ℓ, t). As previously shown, the equation has the obvious Hamiltonian form u_t = p = δH/δp, p_t = u_xx = −δH/δu, where

$$H(u,p) = \int_0^{\ell} \frac{p^2 + (u_x)^2}{2}\, dx\,.$$

Since the set of functions ϕ_k(x) := √(2/ℓ) sin(πkx/ℓ), k ≥ 1, is an orthonormal basis of the Hilbert space L²([0, ℓ]) of square integrable functions on [0, ℓ] with fixed ends, one can expand both u(x, t) and p(x, t) in Fourier series: u(x, t) = Σ_{k≥1} û_k(t) ϕ_k(x), p(x, t) = Σ_{k≥1} p̂_k(t) ϕ_k(x), with Fourier coefficients given by û_k(t) = ∫_0^ℓ u(x, t) ϕ_k(x) dx and p̂_k(t) = ∫_0^ℓ p(x, t) ϕ_k(x) dx, respectively. Upon substitution of the latter Fourier expansions into the Hamiltonian wave equation one easily gets

$$\dot{\hat u}_k = \hat p_k\,, \qquad \dot{\hat p}_k = -\omega_k^2\, \hat u_k\,, \qquad \omega_k := \frac{\pi k}{\ell}\,.$$

These are the Hamilton equations of a system of infinitely many harmonic oscillators, with Hamiltonian K = Σ_{k≥1} (p̂_k² + ω_k² û_k²)/2, which is obviously Liouville integrable with the choice f_k = (p̂_k² + ω_k² û_k²)/2, k ≥ 1. One easily finds that the substitution of the Fourier expansions of u and p into the wave equation Hamiltonian H yields H = K (to such a purpose, notice that ∫_0^ℓ (u_x)² dx = u u_x|₀^ℓ − ∫_0^ℓ u u_xx dx).
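The identity H = K of Example 1.10 can be verified numerically for a truncated expansion. The sketch below builds u and p from a few modes (the coefficients are illustrative), computes H by quadrature and compares it with the mode sum K, with ω_k = πk/ℓ:

```python
import math

l = 2.0
uhat = {1: 0.7, 2: -0.3, 5: 0.1}   # illustrative Fourier coefficients
phat = {1: 0.2, 2: 0.4, 5: -0.5}

def phi(k, x):
    # orthonormal basis phi_k(x) = sqrt(2/l) sin(pi k x / l)
    return math.sqrt(2.0/l) * math.sin(math.pi*k*x/l)

def dphi(k, x):
    return math.sqrt(2.0/l) * (math.pi*k/l) * math.cos(math.pi*k*x/l)

def H_quadrature(m=20000):
    # midpoint rule for H = \int_0^l (p^2 + u_x^2)/2 dx
    total = 0.0
    for i in range(m):
        x = (i + 0.5) * l / m
        u_x = sum(c * dphi(k, x) for k, c in uhat.items())
        p = sum(c * phi(k, x) for k, c in phat.items())
        total += (p*p + u_x*u_x) / 2 * (l / m)
    return total

K = sum(phat[k]**2/2 + (math.pi*k/l)**2 * uhat[k]**2/2 for k in uhat)
gap = abs(H_quadrature() - K)
```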

Useful reference books for the present chapter are [6] and [27]. The hydrodynamics of Hamiltonian systems is originally discussed, e.g., in the paper [24].


## Probabilistic approach

In many cases, instead of trying to control the details of the dynamics of a given system, it is convenient to approach the problem from the point of view of probability theory, trying to characterize the statistical aspects of the dynamics itself. To such a purpose, the phase space of the system has to be endowed with a probability measure that does not evolve along the flow, so that mean values of observables are independent of time. One of the most important results of such an approach is the deduction of the macroscopic laws of thermodynamics for mechanical systems with many degrees of freedom.

### 2.1 Probability measures and integration

Given a set Ω (think, e.g., of a differentiable manifold), let us denote by 2^Ω the power set of Ω, i.e. the set of all, proper and improper, subsets of Ω (recall that the notation is due to the fact that for a finite set of s elements the cardinality of its power set is 2^s).

Definition 2.1. A set σ ⊆ 2^Ω is called a σ-algebra on Ω if

1. it contains Ω;
2. it is closed with respect to complementation, i.e. A ∈ σ ⇒ A^c ∈ σ;
3. it is closed with respect to countable union, i.e. {A_j}_{j∈N} ⊂ σ ⇒ ∪_{j∈N} A_j ∈ σ.

Notice that the complement of a countable union of sets is the countable intersection of the complements of those sets, which means that closure with respect to complementation and countable union implies closure with respect to countable intersection.

Due to the fact that 2^Ω is a σ-algebra and the intersection of σ-algebras is still a σ-algebra, if F ⊂ 2^Ω denotes a set of subsets of Ω, the smallest σ-algebra containing F always exists and is usually denoted by σ(F), which is also referred to as the σ-algebra generated by F. In this respect, if Ω is endowed with a topology, a σ-algebra particularly relevant to applications is the one generated by the open sets of Ω, which is called the Borel σ-algebra of Ω.

Definition 2.2. Given a set Ω and a σ-algebra σ on it, a probability measure on Ω is a nonnegative function µ : σ → [0, 1] which is


• normalized, i.e. µ(Ω) = 1;

• countably additive, i.e. additive with respect to countable unions of pairwise disjoint sets: {A_j}_{j∈N} ⊂ σ and A_i ∩ A_j = ∅ ∀i ≠ j ⇒ µ(∪_{j∈N} A_j) = Σ_{j∈N} µ(A_j).

A set A ⊂ Ω is then said to be µ-measurable if A ∈ σ. Moreover, if A is measurable and µ(A) = 0, then any set B ⊂ A is assumed to have measure zero. The general additivity law is readily proven by observing that A \ (A ∩ B), B \ (A ∩ B) and A ∩ B are pairwise disjoint sets whose union yields A ∪ B. Moreover, A \ B and A ∩ B are disjoint sets, their union being A, so that µ(A \ B) = µ(A) − µ(A ∩ B). Thus, one gets

$$\mu(A \cup B) = \mu(A) + \mu(B) - \mu(A \cap B) \le \mu(A) + \mu(B)\,, \qquad (2.1)$$

the equality sign holding iff A ∩ B has measure zero, which is true, in particular, when the intersection is empty.
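The inclusion-exclusion identity (2.1) is easy to check on a finite probability space; the uniform measure on a six-point set below is purely illustrative:

```python
Omega = {1, 2, 3, 4, 5, 6}
p = {w: 1.0 / 6.0 for w in Omega}   # uniform singleton weights

def mu(A):
    # measure of A as the sum of its singleton weights
    return sum(p[w] for w in A)

A = {1, 2, 3}
B = {3, 4}
lhs = mu(A | B)
rhs = mu(A) + mu(B) - mu(A & B)
```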

Definition 2.3. A property relative to the elements ω ∈ Ω is said to hold µ-almost everywhere (in short µ-a.e.) in the measurable set A ⊆ Ω if it holds ∀ω ∈ A \ B, where B has measure zero.

If Ω is finite or countably infinite, one can always build up a probability measure on the largest σ-algebra 2^Ω, in a natural way, by assigning a function p : Ω → [0, 1] : ω ↦ p_ω such that Σ_{ω∈Ω} p_ω = 1. Indeed, given A ∈ 2^Ω, since A = ∪_{ω∈A} {ω}, then, by the countable additivity of the measure µ one has

$$\mu(A) = \mu\Big(\bigcup_{\omega\in A}\{\omega\}\Big) = \sum_{\omega\in A}\mu(\{\omega\})\,.$$

Thus, the measure µ(A) of any measurable set A is completely determined by the values of the measure on all its singletons (i.e. subsets consisting of a single element), and one has to assign p_ω := µ({ω}). The normalization of the sum of the p_ω's follows taking A = Ω in the above displayed equation and using µ(Ω) = 1. If Ω is uncountable, the latter procedure does not work, in general.
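The singleton construction above translates directly into code; the four-point set and the weights below are illustrative:

```python
Omega = ["a", "b", "c", "d"]
p = {"a": 0.1, "b": 0.2, "c": 0.3, "d": 0.4}   # p_w = mu({w})

def mu(A):
    # mu(A) = sum of the singleton weights of A
    return sum(p[w] for w in A)

total = mu(Omega)                        # normalization: mu(Omega) = 1
partition = [{"a"}, {"b", "c"}, {"d"}]   # pairwise disjoint, union = Omega
additive = sum(mu(A) for A in partition) # countable (here finite) additivity
```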

Example 2.1. Consider the case Ω := [0, 1]. A natural (probability) measure µ on Ω should be such that if 0 ≤ a ≤ b ≤ 1, then µ([a, b]) = b − a. Observe that the singletons are the sets consisting of a single point ω of Ω, and that by shrinking any interval to a single point one gets µ({ω}) = 0 ∀ω ∈ Ω. Thus, one cannot define such a natural measure through singletons. Moreover, if one tries to define the candidate measure at hand on the uncountable power set 2^Ω, it can be proven that no such measure exists: the power set is too large.

With a (probability) measure µ on Ω, one defines an integration over Ω as follows. First of all, if χ_A denotes the characteristic (or indicator) function of the measurable set A (χ_A(x) = 1 if x ∈ A and zero otherwise), one defines

$$\int_B \chi_A \, d\mu := \int_{A \cap B} d\mu = \int \chi_{A\cap B}\, d\mu = \mu(A \cap B)\,.$$

In this way one can define the integration of the so-called simple functions, namely functions that are (finite) linear combinations of characteristic functions of given sets. Thus, if S = Σ_j s_j χ_{A_j}, one has

$$\int_B S\, d\mu := \sum_j s_j \int_B \chi_{A_j}\, d\mu = \sum_j s_j\, \mu(A_j \cap B)\,.$$
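The formula for integrating a simple function can be checked against direct pointwise summation on a discrete measure space (sets, coefficients and weights below are illustrative):

```python
p = {w: 1.0 / 8.0 for w in range(8)}   # uniform measure on {0,...,7}

def mu(A):
    return sum(p[w] for w in A)

# S = 2*chi_{A1} + (-1)*chi_{A2}
coeffs = [(2.0, {0, 1, 2, 3}), (-1.0, {2, 3, 4})]
B = {1, 2, 3, 4, 5}

# integral over B via  sum_j s_j mu(A_j n B)
integral = sum(s * mu(A & B) for s, A in coeffs)

def S_of(w):
    # pointwise value of the simple function
    return sum(s for s, A in coeffs if w in A)

direct = sum(S_of(w) * p[w] for w in B)
```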

More general functions are then approximated through sequences of simple functions. More precisely, if F ≥ 0, one sets

$$\int_B F\, d\mu := \sup_{\substack{\text{simple } S:\\ 0 \le S \le F}} \int_B S\, d\mu\,.$$

For a function F with non-constant sign one then introduces the positive part F₊ := max{0, F} and the negative part F₋ := max{0, −F} = −min{0, F} of F (notice that both F₊ and F₋ are nonnegative by definition).

Definition 2.4. A function F is said to be integrable over B ⊆ Ω with respect to the measure µ if

$$\int_B |F|\, d\mu = \int_B F_+\, d\mu + \int_B F_-\, d\mu$$

exists finite.

Notice that the latter definition of integrability is equivalent to requiring that both ∫_B F₊ dµ and ∫_B F₋ dµ exist finite, so that

$$\int_B F\, d\mu := \int_B F_+\, d\mu - \int_B F_-\, d\mu$$

exists finite.

Definition 2.5. The space of integrable functions over Ω with respect to µ is denoted by L¹(Ω, µ).

In general, for any p ≥ 1 one defines

$$L^p(\Omega,\mu) := \left\{ F : \|F\|_p := \left(\int_\Omega |F|^p\, d\mu\right)^{1/p} < +\infty \right\}.$$
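On a probability space the norms ‖F‖_p are nondecreasing in p (a standard consequence of Jensen's inequality, not proven in the notes); a finite-space sketch with illustrative weights and values:

```python
p = {0: 0.2, 1: 0.5, 2: 0.3}       # illustrative probability weights
F = {0: 1.0, 1: -2.0, 2: 0.5}      # illustrative random variable

def norm(Fvals, q):
    # ||F||_q = ( sum_w |F(w)|^q p_w )^(1/q) on a finite probability space
    return sum(abs(Fvals[w]) ** q * p[w] for w in p) ** (1.0 / q)

n1, n2, n4 = norm(F, 1), norm(F, 2), norm(F, 4)
```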

Definition 2.6. Given two probability measures µ and ν on Ω (i.e. defined on the same σ-algebra σ), µ is said to be absolutely continuous with respect to ν if for any set A such that ν(A) = 0 it follows that µ(A) = 0.

By the Radon-Nikodym theorem, if µ is absolutely continuous with respect to ν, then µ has a density, namely there exists a nonnegative ν-integrable function ϱ : Ω → R₊ such that

$$\mu(A) = \int_A d\mu = \int_A \varrho\, d\nu$$

for any measurable set A. One writes the above condition in short as dµ = ϱ dν, or ϱ = dµ/dν, referring to the latter as the Radon-Nikodym derivative of µ with respect to ν.
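On a finite Ω the Radon-Nikodym density is simply the ratio of singleton weights whenever ν charges every point; the two measures below are illustrative:

```python
nu = {0: 0.25, 1: 0.25, 2: 0.5}    # reference measure (positive everywhere)
mu = {0: 0.1, 1: 0.6, 2: 0.3}      # absolutely continuous w.r.t. nu

# Radon-Nikodym derivative rho = dmu/dnu on singletons
rho = {w: mu[w] / nu[w] for w in nu}

A = {0, 2}
lhs = sum(mu[w] for w in A)            # mu(A)
rhs = sum(rho[w] * nu[w] for w in A)   # \int_A rho dnu
```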

The most relevant case in applications is that of measures absolutely continuous with re- spect to the Lebesgue measure (the unique countably additive measure defined on the Borel

(22)

σ-algebra ofRnand such that the measure of a multi-rectangle is the product of the lengths of the sides), in which case one writes dµ = % dV , where dV denotes the Lebesgue volume element inRn.

In probability theory, the integral of F with respect to a probability measure µ over Ω is referred to as the expectation or mean value of the random "variable" F, and is denoted by

$$\langle F \rangle_\mu = \mathbb{E}_\mu(F) := \int_\Omega F\, d\mu\,.$$

The above formula implies, for example, that ⟨χ_A⟩_µ = µ(A).

Exercise 2.1. Let A = [0, 1] ∩ Q be the set of rationals in [0, 1]; then the Lebesgue measure V(A) of A is zero. Moreover, the Dirichlet function D(x) (defined on [0, 1] as D(x) = 1 if x is irrational and D(x) = 0 otherwise) is not Riemann integrable but is integrable with respect to the Lebesgue measure over [0, 1], the value of the integral being exactly one. Indeed, since A is countable, it can be covered by a sequence of intervals {I_j}_{j∈N} such that I_j is centered at x_j ∈ A and V(I_j) = ε/2^{j+1}, where ε is arbitrarily small. Then, since A ⊂ ∪_j I_j, it follows V(A) ≤ V(∪_j I_j) ≤ Σ_{j≥0} V(I_j) = ε, and the arbitrariness of ε implies V(A) = 0. As for the Dirichlet function, observe that D(x) = χ_{A^c}, so that ∫ D(x) dV = ∫ χ_{A^c} dV = V(A^c) = 1 − V(A) = 1.

A good reference for probability theory is [20]. Abstract measure and integration theory is extensively treated in the analysis monograph [34].

### 2.2 Invariant measures

Given a set Ω, suppose that a one-parameter group {Φ^t}_t of transformations of Ω into itself is given. In the typical case, Ω is at least a topological space and, for any fixed t, Φ^t is at least a homeomorphism, i.e. a continuous bijection with continuous inverse. The group may be continuous, i.e. t ∈ R, or discrete, i.e. t ∈ Z. We will always have in mind the physical case where Ω is a smooth manifold, Φ^t being the flow of a given vector field on it.

Definition 2.7. A measure µ on Ω is said to be invariant with respect to Φ^t if

$$\mu(\Phi^t(A)) = \mu(A) \qquad (2.2)$$

for any measurable A ⊆ Ω and any t.

The reason why one is particularly interested in invariant measures is that the mean value of a function F is the same as the mean value of F ∘ Φ^t if the expectation is taken with respect to a measure µ invariant with respect to Φ^t. Indeed, (2.2) means that

$$\mu(\Phi^t(A)) = \int_\Omega \chi_{\Phi^t(A)}(x)\, d\mu(x) = \int_\Omega \chi_A(\Phi^{-t}(x))\, d\mu(x) = \int_\Omega \chi_A(y)\, d\mu(\Phi^t(y))$$
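A discrete toy version of this invariance (a sketch, with an arbitrary illustrative permutation and observable): a bijection Φ of a finite Ω preserves the uniform measure, so the mean of F ∘ Φ equals the mean of F.

```python
Omega = [0, 1, 2, 3, 4]
Phi = {0: 2, 1: 0, 2: 4, 3: 1, 4: 3}   # a bijection (permutation) of Omega

def mean(F):
    # expectation with respect to the uniform (invariant) measure
    return sum(F(w) for w in Omega) / len(Omega)

F = lambda w: w * w + 1.0
m1 = mean(F)                        # <F>
m2 = mean(lambda w: F(Phi[w]))      # <F o Phi>
```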
