
F ◦ Φ_G^s = e^{s L_G} F . (1.21)

PROOF. Set F̂(s) := F ◦ Φ_G^s, and observe that G ◦ Φ_G^s = G (why?). Then dF̂/ds = {F̂, G} = L_G F̂, d²F̂/ds² = {dF̂/ds, G} = {{F̂, G}, G} = L_G² F̂, and so on. Thus, the Taylor expansion of F̂(s) centered at s = 0 reads

F ◦ Φ_G^s = F̂(s) = F + s L_G F + (s²/2) L_G² F + ··· = e^{s L_G} F . □
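As a quick illustration of formula (1.21) (a minimal sketch, not part of the original text, using the sign convention {F, G} = Σ_j (∂F/∂q_j ∂G/∂p_j − ∂F/∂p_j ∂G/∂q_j) implicit in the proof above), one can check the Lie series symbolically for the simple choice G = p²/2, whose flow is Φ_G^s(q, p) = (q + sp, p), and F = q²; the series terminates after two steps.

```python
import sympy as sp

q, p, s = sp.symbols('q p s')

def pb(F, G):
    # canonical Poisson bracket {F, G} in one degree of freedom
    return sp.diff(F, q)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, q)

G = p**2/2        # generates the flow q -> q + s*p, p -> p
F = q**2          # observable transported along the flow of G

# Lie series e^{s L_G} F = sum_k (s^k / k!) L_G^k F; here it terminates after a few terms
term, lie_series = F, F
for k in range(1, 6):
    term = pb(term, G)                          # L_G^k F
    lie_series += s**k / sp.factorial(k) * term

flow_pullback = F.subs(q, q + s*p)              # F composed with the flow of G
print(sp.simplify(lie_series - flow_pullback))  # expected output: 0
```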

1.3 Integrability: Liouville theorem

A dynamical system is integrable if it possesses a number of first integrals (i.e. functions defined on the phase space that do not evolve in time along the flow of the system) high enough to constrain the motion, a priori, to a curve. For a generic system of the form ẋ = u(x) in R^n, integrability would require, a priori, n − 1 first integrals (the intersection of the level sets of m first integrals has co-dimension m and dimension n − m). However, it turns out that the Hamiltonian structure reduces this number to half the (even) dimension of the phase space.

Definition 1.3. The system defined by the Hamiltonian H(q, p, t) is said to be integrable in Γ ⊆ R^{2n}, in the sense of Liouville, if it admits n independent first integrals f_1(q, p, t), …, f_n(q, p, t) in involution, i.e., for any (q, p) ∈ Γ and t ∈ R:

1. ∂f_j/∂t + {f_j, H} = 0 for any j = 1, …, n;

2. Σ_{j=1}^n c_j ∇f_j(q, p, t) = 0 ⇒ c_1 = ··· = c_n = 0 (equivalently: the rectangular matrix of the gradients of the integrals has maximal rank n) for any (q, p, t);

3. {f_j, f_k} = 0 for any j, k = 1, …, n.
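For concrete finite-dimensional examples, conditions 2 and 3 above can be checked symbolically. The following sketch (purely illustrative, anticipating the uncoupled oscillators of Example 1.9 below; the helper pb is not part of the text) verifies involution and the generic maximal rank of the matrix of gradients.

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
w1, w2 = sp.symbols('omega1 omega2', positive=True)
Q, P = [q1, q2], [p1, p2]

def pb(F, G):
    # canonical Poisson bracket {F, G} = sum_j (dF/dq_j dG/dp_j - dF/dp_j dG/dq_j)
    return sum(sp.diff(F, q)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, q)
               for q, p in zip(Q, P))

# two first integrals of a pair of uncoupled harmonic oscillators
f1 = (p1**2 + w1**2*q1**2)/2
f2 = (p2**2 + w2**2*q2**2)/2

print(sp.simplify(pb(f1, f2)))   # condition 3 (involution): expected 0

# condition 2 (independence): the 2 x 4 matrix of gradients has rank 2 at generic points
grad = sp.Matrix([[sp.diff(f, z) for z in Q + P] for f in (f1, f2)])
print(grad.rank())               # expected 2
```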

Notice that often H coincides with one of the first integrals. The introduction of the above definition is motivated by the following theorem.

Theorem 1.1 (Liouville). Let the Hamiltonian system defined by H be Liouville-integrable in Γ ⊆ R^{2n}, and let a ∈ R^n be such that the level set

M_a := {(q, p) ∈ Γ : f_1(q, p, t) = a_1, …, f_n(q, p, t) = a_n}

is non empty; let also M_a^0 denote a (nonempty) connected component of M_a. Then a function S(q, t; a) exists, such that p · dq|_{M_a^0} = d_q S(q, t; a), and S is a complete integral of the time-dependent Hamilton-Jacobi equation, i.e. the generating function of a canonical transformation C : (q, p, H, t) ↦ (b, a, 0, t), where b := ∂S/∂a.

Remark 1.2. If H(q, p) does not depend explicitly on time, then in the above definition of integrable system all the f_j are independent of time as well, and condition 1. is replaced by {f_j, H} = 0. In such a case, the generating function S(q; a) appearing in the Liouville theorem is a complete integral of the time-independent Hamilton-Jacobi equation H(q, ∇_q S) = K(a), thus generating a canonical transformation C : (q, p) ↦ (b, a) such that H(C^{-1}(b, a)) = K(a).

In order to better understand the meaning of Definition 1.3 and to prove Theorem 1.1, we start by supposing that H(q, p, t) admits n independent first integrals f_1(q, p, t), …, f_n(q, p, t), but we do not suppose, for the moment, that such first integrals are in involution. Without any loss of generality, as a condition of independence of the first integrals one can assume

det(∂f/∂p) ≠ 0, so that, on the level set f_j(q, p, t) = a_j (j = 1, …, n), the momenta can be locally solved for and represented, by means of the implicit function theorem, as

p_1 = u_1(q, t; a) , … , p_n = u_n(q, t; a) . (1.22)

The above relations must hold at any time if they hold at t = 0. Differentiating the relation p_i(t) = u_i(q(t), t; a) (i = 1, …, n) with respect to time and using the Hamilton equations one gets

∂u_i/∂t + Σ_j (∂u_i/∂q_j − ∂u_j/∂q_i) ∂H/∂p_j = −∂H/∂q_i − Σ_j (∂u_j/∂q_i) ∂H/∂p_j , i = 1, …, n , (1.23)

where all the derivatives of H are evaluated at (q, u(q, t; a), t), so that the right hand side equals −∂[H(q, u(q, t; a), t)]/∂q_i.

Notice that, for the sake of convenience, the same sum of terms is artificially added on both sides of the equation. By introducing the quantities

rot(u) := the n × n matrix with entries (rot(u))_{ij} := ∂u_i/∂q_j − ∂u_j/∂q_i, v := ∇_p H(q, u(q, t; a), t) and H̃(q, t; a) := H(q, u(q, t; a), t), the equations (1.23) can be rewritten in compact, vector form as

∂u/∂t + rot(u) v = −∇_q H̃ . (1.27)

Notice the similarity of the latter equation with the (unit density) Euler equation of hydrodynamics, namely

∂u/∂t + rot(u) u = −∇(p + |u|²/2) , (1.28)

where u is the velocity field, p is the pressure and rot(u)u = ω ∧ u, ω := ∇ ∧ u being the vorticity of the fluid. The similarity of (1.27) and (1.28) is completely evident in the case of natural mechanical systems, whose Hamiltonian has the form

H(q, p, t) = (1/2) p · G(q, t) p + V(q, t) ,

where G(q, t) is an n × n positive definite matrix. In such a case v = Gu and equation (1.27) takes the rather simple form

∂u/∂t + rot(u) G u = −∇_q (u · G u / 2 + V) . (1.29)

In particular, in those cases such that G = I_n the latter equation is the Euler equation in space dimension n, with the potential energy V playing the role of pressure.

Remark 1.3. Attention has to be paid to the fact that for the Euler equation (1.28) the pressure p is determined by the divergence-free condition ∇ · u = 0, while nothing similar holds, in general, for the equations (1.27) or (1.29).

Now, by analogy with the case of fluids, we look for curl-free, i.e. irrotational, solutions of the Euler-like equation (1.27) (we recall that in fluid dynamics, looking for a solution of the Euler equation (1.28) of the form u = ∇φ leads to the Bernoulli equation for the velocity potential φ, namely ∂φ/∂t + |∇φ|²/2 + p = constant). In simply connected domains (of the n-dimensional configuration space), one has

rot(u) = 0 iff u = ∇S ,

where S = S(q, t; a). Upon substitution of u = ∇_q S into equation (1.27) and removing an overall q-gradient, one gets

∂S/∂t + H(q, ∇_q S, t) = ϕ(t; a) . (1.30)

One can set ϕ(t; a) ≡ 0 without any loss of generality, and the latter equation becomes the time-dependent Hamilton-Jacobi equation (if ϕ ≢ 0, then S̃ := S − ∫ ϕ dt satisfies equation (1.30) with zero right hand side). Thus, the Hamilton-Jacobi equation is the analogue of the Bernoulli equation for the hydrodynamics of Hamiltonian systems. The interesting point is that the curl-free condition rot(u) = 0 is equivalent to the condition of involution of the first integrals f_1, …, f_n. Indeed, starting from the identity

f_i(q, u(q, t; a), t) ≡ a_i , (1.31)

and taking its derivative with respect to q_j one gets

∂f_i/∂q_j + Σ_k (∂f_i/∂p_k) ∂u_k/∂q_j = 0 . A direct computation based on this relation shows that the involution conditions {f_i, f_j} = 0 are equivalent to the curl-free condition rot(u) = 0 (the direct implication is immediate; the reverse one requires the independence condition det(∂f/∂p) ≠ 0). This is the key point: the condition of involution of the first integrals is equivalent to that of irrotational, i.e. gradient, velocity fields of the hydrodynamic equation (1.27). The velocity potential S(q, t; a) satisfies the Hamilton-Jacobi equation and is actually a complete integral of the latter. In order to see this, one can start again from identity (1.31), setting there u = ∇_q S and taking the derivative with respect to a_j, getting the (i, j) component of the matrix identity

(∂f/∂p) (∂²S/∂q ∂a) = I_n ,

which, by the independence condition of the first integrals, yields det(∂²S/∂q∂a) ≠ 0. We finally notice that if the first integrals, and thus the velocity field u, are known, then the potential S can be obtained by a simple integration, based on the identity d_q S = u · dq, such as

S(q, t; a) − S(0, t; a) = ∫_{0→q} u(q′, t; a) · dq′ = ∫_0^1 u(λq, t; a) · q dλ ,

where S(0, t; a) may be set to zero. The function S(q, t; a), satisfying the Hamilton-Jacobi equation, generates a canonical transformation (q, p, H, t) ↦ (b, a, 0, t) to a zero Hamiltonian, a transformation defined by the implicit equations p = ∇_q S(q, t; a), b := ∇_a S(q, t; a). What has just been reported above is actually the proof of Theorem 1.1. The restriction to the case where H, f_1, …, f_n are independent of time is left as an exercise.
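As a concrete illustration of the last formula (a numerical sketch with illustrative values, not taken from the text), consider a single harmonic oscillator H = (p² + ω²q²)/2: its only first integral is f_1 = H, the velocity field is u(q; a) = √(2a − ω²q²), and the S obtained by the line integral above satisfies the time-independent Hamilton-Jacobi equation H(q, dS/dq) = a.

```python
import numpy as np
from scipy.integrate import quad

omega, a = 1.3, 2.0                        # illustrative frequency and energy level

def u(q):
    # velocity field obtained by solving H(q, p) = (p**2 + omega**2 q**2)/2 = a for p > 0
    return np.sqrt(2*a - (omega*q)**2)

def S(q):
    # S(q; a) - S(0; a) = int_0^1 u(lambda q; a) * q dlambda  (line integral from 0 to q)
    val, _ = quad(lambda lam: u(lam*q)*q, 0.0, 1.0)
    return val

# check the time-independent Hamilton-Jacobi equation H(q, dS/dq) = a
for q in (0.1, 0.5, 1.0):
    h = 1e-6
    p = (S(q + h) - S(q - h)) / (2*h)      # dS/dq by central differences
    print(q, (p**2 + (omega*q)**2)/2 - a)  # expected: close to 0
```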

Example 1.8. The Hamiltonian system of central motions is Liouville-integrable. Indeed, if H = |p|²/(2m) + V(|r|) is the Hamiltonian of the system, then it is easily proven that the angular momentum L = r ∧ p is a vector constant of motion (the Hamiltonian is invariant with respect to the "canonical rotations" (r, p) ↦ (r′, p′) = (Rr, Rp), where R is any orthogonal matrix; the conservation of the angular momentum is a consequence of the Noether theorem). The phase space of the system has dimension 2n = 6, and three independent first integrals in involution are f_1 := H, f_2 := |L|² and f_3 := L_z, for example (show this).
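The involution relations claimed in the example (and left to the reader) can also be checked symbolically; a minimal sketch, with a generic, unspecified central potential V:

```python
import sympy as sp

x, y, z, px, py, pz, m = sp.symbols('x y z p_x p_y p_z m', positive=True)
Q, P = [x, y, z], [px, py, pz]

def pb(F, G):
    # canonical Poisson bracket on R^3 x R^3
    return sum(sp.diff(F, q)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, q)
               for q, p in zip(Q, P))

r = sp.sqrt(x**2 + y**2 + z**2)
H = (px**2 + py**2 + pz**2)/(2*m) + sp.Function('V')(r)   # central-force Hamiltonian
Lx, Ly, Lz = y*pz - z*py, z*px - x*pz, x*py - y*px        # angular momentum L = r ^ p
L2 = Lx**2 + Ly**2 + Lz**2

for f, g in [(H, L2), (H, Lz), (L2, Lz)]:
    print(sp.simplify(pb(f, g)))   # expected: 0, 0, 0
```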

Example 1.9. The Hamiltonian of n noninteracting systems, H = Σ_{j=1}^n h_j(q_j, p_j), is obviously Liouville-integrable, with the choice f_j := h_j(q_j, p_j), j = 1, …, n. As an example, consider the case of harmonic oscillators, where h_j(q_j, p_j) = (p_j² + ω_j² q_j²)/2.

Example 1.10. The wave equation u_tt = u_xx is an infinite dimensional Liouville-integrable Hamiltonian system. For the sake of simplicity we consider the case of fixed ends: u(0, t) = 0 = u(ℓ, t), u_t(0, t) = 0 = u_t(ℓ, t). As previously shown, the equation has the obvious Hamiltonian form u_t = p = δH/δp, p_t = u_xx = −δH/δu, where

H(u, p) = ∫_0^ℓ [p² + (u_x)²]/2 dx .

Since the set of functions φ_k(x) := √(2/ℓ) sin(πkx/ℓ), k ≥ 1, is an orthonormal basis in the Hilbert space L²([0, ℓ]) of square integrable functions on [0, ℓ] with fixed ends, one can expand both u(x, t) and p(x, t) in Fourier series: u(x, t) = Σ_{k≥1} û_k(t) φ_k(x), p(x, t) = Σ_{k≥1} p̂_k(t) φ_k(x), with Fourier coefficients given by û_k(t) = ∫_0^ℓ u(x, t) φ_k(x) dx and p̂_k(t) = ∫_0^ℓ p(x, t) φ_k(x) dx, respectively. Upon substitution of the latter Fourier expansions into the Hamiltonian wave equation one easily gets dû_k/dt = p̂_k, dp̂_k/dt = −ω_k² û_k, where ω_k := πk/ℓ. These are the Hamilton equations of a system of infinitely many harmonic oscillators, with Hamiltonian K = Σ_{k≥1} (p̂_k² + ω_k² û_k²)/2, which is obviously Liouville-integrable with the choice f_k = (p̂_k² + ω_k² û_k²)/2, k ≥ 1. One easily finds that the substitution of the Fourier expansions of u and p into the wave equation Hamiltonian H yields H = K (to such a purpose, notice that ∫_0^ℓ (u_x)² dx = u u_x|_0^ℓ − ∫_0^ℓ u u_xx dx).
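A quick numerical cross-check of the identity H = K (a sketch with arbitrary test data compatible with the fixed ends; the truncation order kmax and the grid size are illustrative choices): discretize (u, p), project on the basis φ_k and compare the two energies.

```python
import numpy as np

ell, N, kmax = 1.0, 2000, 40
x = np.linspace(0.0, ell, N + 1)
dx = ell / N

# arbitrary smooth test data vanishing at the ends
u = x*(ell - x)*np.exp(x)
p = np.sin(3*np.pi*x/ell)

# Fourier coefficients on the basis phi_k(x) = sqrt(2/ell) sin(pi k x / ell)
k = np.arange(1, kmax + 1)
phi = np.sqrt(2.0/ell)*np.sin(np.pi*np.outer(k, x)/ell)   # shape (kmax, N+1)
u_hat = (phi*u).sum(axis=1)*dx
p_hat = (phi*p).sum(axis=1)*dx

# H = int (p^2 + u_x^2)/2 dx   versus   K = sum_k (p_hat_k^2 + omega_k^2 u_hat_k^2)/2
ux = np.gradient(u, x)
H = ((p**2 + ux**2)/2).sum()*dx
omega = np.pi*k/ell
K = ((p_hat**2 + omega**2*u_hat**2)/2).sum()
print(H, K)   # the two values agree up to truncation and discretization error
```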

Useful reference books for the present chapter are [6] and [27]. The hydrodynamics of Hamiltonian systems is originally discussed, e.g., in the paper [24].

Chapter 2

Probabilistic approach

In many cases, instead of trying to control the details of the dynamics of a given system, it is convenient to approach the problem from the point of view of probability theory, trying to characterize the statistical aspects of the dynamics itself. To such a purpose, the phase space of the system has to be endowed with a probability measure that does not evolve along the flow, so that mean values of observables are independent of time. One of the most important results of such an approach is the deduction of the macroscopic laws of thermodynamics for mechanical systems with many degrees of freedom.

2.1 Probability measures and integration

Given a set Ω (think e.g. of a differentiable manifold), let us denote by 2^Ω the power set of Ω, i.e. the set of all, proper and improper, subsets of Ω (recall that the notation is due to the fact that for a finite set of s elements the cardinality of its power set is 2^s).

Definition 2.1. A set σ ⊆ 2^Ω is called a σ-algebra on Ω if

1. it contains Ω;

2. it is closed with respect to complementation, i.e. A ∈ σ ⇒ A^c ∈ σ;

3. it is closed with respect to countable union, i.e. {A_j}_{j∈N} ⊆ σ ⇒ ∪_{j∈N} A_j ∈ σ.

Notice that the complement of a countable union of sets is the countable intersection of the complements of those sets, which means that closure with respect to complementation and countable union implies closure with respect to countable intersection.

Due to the fact that 2^Ω is a σ-algebra and the intersection of σ-algebras is still a σ-algebra, if F ⊂ 2^Ω denotes a set of subsets of Ω, the smallest σ-algebra containing F always exists and is usually denoted by σ(F), which is also referred to as the σ-algebra generated by F. In this respect, if Ω is endowed with a topology, a σ-algebra particularly relevant to applications is the one generated by the open sets of Ω, which is called the Borel σ-algebra of Ω.

Definition 2.2. Given a set Ω and a σ-algebra σ on it, a probability measure on Ω is a nonnegative function μ : σ → [0, 1] which is


• normalized, i.e. μ(Ω) = 1;

• countably additive, i.e. additive with respect to countable unions of pairwise disjoint sets: {A_j}_{j∈N} ⊆ σ and A_i ∩ A_j = ∅ ∀ i ≠ j ⇒ μ(∪_{j∈N} A_j) = Σ_{j∈N} μ(A_j).

A set A ⊂ Ω is then said to be μ-measurable if A ∈ σ. Moreover, if A is measurable and μ(A) = 0, then any set B ⊂ A is assumed to have measure zero. The general additivity law is readily proven by observing that A \ (A ∩ B), B \ (A ∩ B) and A ∩ B are pairwise disjoint sets whose union yields A ∪ B. Moreover, A \ B and A ∩ B are disjoint sets, their union being A, so that μ(A \ B) = μ(A) − μ(A ∩ B). Thus, one gets

μ(A ∪ B) = μ(A) + μ(B) − μ(A ∩ B) ≤ μ(A) + μ(B) , (2.1)

the equality sign holding iff A ∩ B has measure zero, which is true, in particular, when the intersection is empty.

Definition 2.3. A property relative to the elements ω ∈ Ω is said to hold μ-almost everywhere (in short μ-a.e.) in the measurable set A ⊆ Ω if it holds ∀ ω ∈ A \ B, where B has measure zero.

If Ω is finite or countably infinite, one can always build up a probability measure on the largest σ-algebra 2^Ω, in the natural way, by assigning a function p : Ω → [0, 1] : ω ↦ p_ω such that Σ_{ω∈Ω} p_ω = 1. Indeed, given A ∈ 2^Ω, since A = ∪_{ω∈A} {ω}, then, by the countable additivity of the measure μ, one has

μ(A) = μ(∪_{ω∈A} {ω}) = Σ_{ω∈A} μ({ω}) .

Thus, the measure μ(A) of any measurable set A is completely determined by the value of the measure of all its singletons (i.e. subsets consisting of a single element), and one has to assign p_ω := μ({ω}). The normalization of the sum of the p_ω's follows taking A = Ω in the above displayed equation and using μ(Ω) = 1. If Ω is uncountable, the latter procedure does not work, in general.
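In the finite (or countable) case the construction just described is straightforward to implement; a toy sketch (the uniform weights on a six-element Ω are an arbitrary illustrative choice):

```python
from fractions import Fraction

Omega = range(1, 7)                          # a six-element sample space (a die)
p = {w: Fraction(1, 6) for w in Omega}       # singleton weights p_omega, summing to 1

def mu(A):
    # measure of a subset A of Omega, determined by its singletons
    return sum(p[w] for w in A)

print(mu({2, 4, 6}))      # 1/2
print(mu(set(Omega)))     # 1  (normalization)
```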

Example 2.1. Consider the case Ω := [0, 1]. A natural (probability) measure μ on Ω should be such that if 0 ≤ a ≤ b ≤ 1, then μ([a, b]) = b − a. Observe that the singletons are the sets consisting of a single point ω of Ω, and that by shrinking any interval to a single point one gets μ({ω}) = 0 ∀ ω ∈ Ω. Thus, one cannot build such a natural measure starting from the singletons. Moreover, if one tries to define the candidate measure at hand on the uncountable power set 2^Ω, it can be proven that no such measure exists: the power set is too large.

With a (probability) measure μ on Ω, one defines an integration over Ω as follows. First of all, if χ_A denotes the characteristic (or indicator) function of the measurable set A (χ_A(x) = 1 if x ∈ A and zero otherwise), one defines

∫_A dμ = ∫ χ_A dμ := μ(A) ;  ∫_B χ_A dμ := ∫_{A∩B} dμ = ∫ χ_{A∩B} dμ = μ(A ∩ B) .

In this way one can define the integration of the so-called simple functions, namely functions that are (finite) linear combinations of characteristic functions of given sets. Thus, if S = Σ_j c_j χ_{A_j} is a simple function, one sets ∫_B S dμ := Σ_j c_j μ(A_j ∩ B).

More general functions are then approximated through sequences of simple functions. More precisely, if F ≥ 0, one sets

∫_B F dμ := sup { ∫_B S dμ : S simple, 0 ≤ S ≤ F } .
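In the discrete setting of the previous section, the integral of a simple function reduces to a finite sum; a toy sketch (the weights and the sets A_j are arbitrary illustrative choices):

```python
from fractions import Fraction

Omega = set(range(1, 7))
p = {w: Fraction(1, 6) for w in Omega}                 # singleton weights

def mu(A):
    return sum(p[w] for w in A)

def integral_simple(S, B):
    # int_B S dmu for the simple function S = sum_j c_j * chi_{A_j}, given as pairs (c_j, A_j)
    return sum(c*mu(A & B) for c, A in S)

S = [(2, {1, 2, 3}), (5, {3, 4})]                      # S = 2*chi_{1,2,3} + 5*chi_{3,4}
print(integral_simple(S, Omega))                       # 2*(1/2) + 5*(1/3) = 8/3
print(integral_simple(S, {3, 4, 5, 6}))                # 2*(1/6) + 5*(1/3) = 2
```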

For a function F with non constant sign one then introduces the positive part F_+ := max{0, F} and the negative part F_− := max{0, −F} = −min{0, F} of F (notice that both F_+ and F_− are nonnegative by definition).

Definition 2.4. A function F is said to be integrable over B ⊆ Ω with respect to the measure μ if ∫_B |F| dμ < +∞, in which case one sets ∫_B F dμ := ∫_B F_+ dμ − ∫_B F_− dμ.

Notice that the latter definition of integrability is equivalent to requiring that both ∫_B F_± dμ be finite.

Definition 2.5. The space of integrable functions over Ω with respect to μ is denoted by L¹(Ω, μ).

In general, for any p ≥ 1 one defines

L^p(Ω, μ) := { F : ∫_Ω |F|^p dμ < +∞ } .

Definition 2.6. Given two probability measures μ and ν on Ω (i.e. defined on the same σ), μ is said to be absolutely continuous with respect to ν if for any set A such that ν(A) = 0 it follows that μ(A) = 0.

By the Radon-Nikodym theorem, if μ is absolutely continuous with respect to ν, then μ has a density, namely there exists a nonnegative ν-integrable function ϱ : Ω → R_+ such that

μ(A) = ∫_A ϱ dν

for any measurable set A. One usually writes ϱ = dμ/dν, referring to the latter as the Radon-Nikodym derivative of μ with respect to ν.

The most relevant case in applications is that of measures absolutely continuous with respect to the Lebesgue measure (the unique countably additive measure defined on the Borel σ-algebra of R^n and such that the measure of a multi-rectangle is the product of the lengths of its sides), in which case one writes dμ = ϱ dV, where dV denotes the Lebesgue volume element in R^n.

In probability theory, the integral of F with respect to a probability measure μ over Ω is referred to as the expectation or mean value of the random "variable" F, and is denoted by

⟨F⟩_μ = E_μ(F) := ∫_Ω F dμ .

The above formula implies, for example, that ⟨χ_A⟩_μ = μ(A).
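When μ has a density ϱ with respect to the Lebesgue measure, dμ = ϱ dV, expectations become ordinary integrals; a numerical sketch (the standard Gaussian density is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.integrate import quad

rho = lambda x: np.exp(-x**2/2)/np.sqrt(2*np.pi)   # density of mu w.r.t. the Lebesgue measure

def expectation(F, lo=-np.inf, hi=np.inf):
    # <F>_mu = E_mu(F) = int F rho dV
    val, _ = quad(lambda x: F(x)*rho(x), lo, hi)
    return val

print(expectation(lambda x: 1.0))           # mu(R) = 1: normalization
print(expectation(lambda x: x**2))          # second moment of the standard Gaussian: 1
# <chi_A>_mu = mu(A); since chi_A vanishes outside A = [0, 1], the integral restricts to A
print(expectation(lambda x: 1.0, 0.0, 1.0))
```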

Exercise 2.1. Let A = [0, 1] ∩ Q be the set of rationals in [0, 1]; then the Lebesgue measure V(A) of A is zero. Moreover, the Dirichlet function D(x), defined on [0, 1] as D(x) = 1 if x is irrational and D(x) = 0 otherwise, is not Riemann integrable but is integrable with respect to the Lebesgue measure over [0, 1], the value of the integral being exactly one. Indeed, since A is countable, it can be covered by a sequence of intervals {I_j}_{j∈N} such that I_j is centered at x_j ∈ A and V(I_j) = ε/2^{j+1}, where ε is arbitrarily small. Then, since A ⊂ ∪_j I_j, it follows that V(A) ≤ V(∪_j I_j) ≤ Σ_{j≥0} V(I_j) = ε, and the arbitrariness of ε implies V(A) = 0. For what concerns the Dirichlet function, observe that D(x) = χ_{A^c}, so that ∫ D(x) dV = ∫ χ_{A^c} dV = V(A^c) = 1 − V(A) = 1.

A good reference for probability theory is [20]. Abstract measure and integration theory is extensively treated in the analysis monograph [34].
