
2.4 Equivalence of the two measures

2.4.1 The Lebowitz-Percus-Verlet theory of fluctuations

The equivalence of the two measures is expressed by the asymptotic correspondence of the expected values, relations (2.28) or (2.29). However, when considering the mean square fluctuations, or variances, such a correspondence no longer holds. Indeed, equation (2.28) for F² and the square of equation (2.28) itself read

\[
\langle F^2\rangle_c \;=\; \underbrace{\langle F^2\rangle_{mc}}_{O(N^2)} \;+\; \underbrace{\tfrac{1}{2}\,T^2\bar C\,\langle F^2\rangle''_{mc}}_{O(N)} \;+\; \dots \tag{2.30}
\]
\[
\langle F\rangle_c^2 \;=\; \underbrace{\langle F\rangle_{mc}^2}_{O(N^2)} \;+\; \underbrace{T^2\bar C\,\langle F\rangle_{mc}\langle F\rangle''_{mc}}_{O(N)} \;+\; \dots \tag{2.31}
\]

where the order of magnitude of the various terms is pointed out beneath them, the dots standing for O(1) in both equations. Upon subtracting equation (2.31) from (2.30), and taking into account that ⟨(δF)²⟩ = ⟨F²⟩ − ⟨F⟩², one gets

\[
\langle(\delta F)^2\rangle_c \;=\; \langle(\delta F)^2\rangle_{mc} \;+\; \tfrac{1}{2}\,T^2\bar C\left[\langle F^2\rangle''_{mc} - 2\,\langle F\rangle_{mc}\langle F\rangle''_{mc}\right] + \dots\ , \tag{2.32}
\]

where all the displayed terms are O(N), dots standing for O(1). Now, taking into account that ⟨F²⟩_mc − ⟨F⟩²_mc = O(N) by assumption, and taking the second derivative of the latter relation with respect to E, one can substitute

\[
\langle F^2\rangle''_{mc} \;=\; 2\left(\langle F\rangle'_{mc}\right)^2 + 2\,\langle F\rangle_{mc}\langle F\rangle''_{mc} + O(1/N)
\]

into the right hand side of (2.32), thus getting

\[
\langle(\delta F)^2\rangle_c \;=\; \langle(\delta F)^2\rangle_{mc} \;+\; T^2\bar C\left(\langle F\rangle'_{mc}(\bar E)\right)^2 + O(1)\ . \tag{2.33}
\]

The latter relation yields the canonical fluctuation of an extensive local observable in terms of the micro-canonical fluctuation and other related micro-canonical averages.

Example 2.3. Taking F = H, the Hamiltonian, equation (2.33) gives the known relation linking the heat capacity to the canonical energy fluctuation, namely ⟨(δH)²⟩_c = T²C̄, up to a small remainder.
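The relation of Example 2.3 can be checked numerically. The sketch below is our own construction, not part of the text: it assumes a system of N classical one-dimensional harmonic oscillators, H = Σ_k (p_k² + q_k²)/2, for which (with k_B = 1) each p_k, q_k is canonically distributed as an independent Gaussian of variance T, so that ⟨H⟩_c = NT and C̄ = N.

```python
import numpy as np

# Sanity check of <(dH)^2>_c = T^2 * Cbar (our construction, not from the
# text) for N classical 1-D harmonic oscillators, H = sum_k (p_k^2 + q_k^2)/2.
# In the canonical ensemble at temperature T (k_B = 1), each p_k and q_k is
# an independent Gaussian of variance T, so <H>_c = N*T and Cbar = N.
rng = np.random.default_rng(0)

N, T = 50, 2.0          # number of oscillators and temperature (assumed units)
n_samples = 100_000

p = rng.normal(0.0, np.sqrt(T), size=(n_samples, N))
q = rng.normal(0.0, np.sqrt(T), size=(n_samples, N))
H = 0.5 * (p**2 + q**2).sum(axis=1)

C = N                   # exact heat capacity of this model
var_H = H.var()
print(var_H / (T**2 * C))  # should be close to 1
```

With the parameters above the ratio comes out within a few percent of 1, the residual discrepancy being Monte Carlo noise.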

Equation (2.33) is not so useful, since in practice canonical averages are easier to compute. In order to reverse it and obtain micro-canonical fluctuations in terms of canonical averages, one can differentiate equation (2.29) with respect to T, which yields

\[
\frac{\partial \langle F\rangle_c}{\partial T} \;=\; \langle F\rangle'_{mc}\,\bar C + O(1) \;=\; \langle F\rangle'_{mc}(\bar E)\,\frac{\langle(\delta H)^2\rangle_c}{T^2} + O(1)\ .
\]

Thus, inserting ⟨F⟩'_mc = T²(∂⟨F⟩_c/∂T)/⟨(δH)²⟩_c into the right hand side of (2.33), one gets

\[
\langle(\delta F)^2\rangle_{mc} \;=\; \langle(\delta F)^2\rangle_c \;-\; \frac{T^4}{\langle(\delta H)^2\rangle_c}\left(\frac{\partial \langle F\rangle_c}{\partial T}\right)^{2} + O(1)\ . \tag{2.34}
\]

The latter relation allows one to obtain the micro-canonical fluctuation of an extensive local variable in terms of the canonical average and the canonical fluctuation of the same quantity.

Exercise 2.8. Prove the following canonical formula:

\[
\frac{\partial \langle F\rangle_c}{\partial T} \;=\; \frac{\langle \delta F\,\delta H\rangle_c}{T^2} \;=\; \frac{\langle F H\rangle_c - \langle F\rangle_c \langle H\rangle_c}{T^2}\ . \tag{2.35}
\]
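Formula (2.35) can also be tested numerically. The example below is our own (not from the text): it assumes the harmonic-oscillator model in which the canonical energy is Gamma-distributed, H ~ Gamma(shape = N, scale = T), and chooses F = H², for which ⟨H²⟩_c = N(N+1)T² exactly, so the temperature derivative on the left hand side is known in closed form.

```python
import numpy as np

# Numerical check of (2.35) for an assumed model (ours, not the text's):
# canonical energy H ~ Gamma(shape=N, scale=T), observable F = H^2.
# Exactly, <H^2>_c = N(N+1) T^2, hence d<F>_c/dT = 2 N (N+1) T.
rng = np.random.default_rng(1)

N, T = 20, 1.5
n = 400_000
H = rng.gamma(shape=N, scale=T, size=n)
F = H**2

# Right-hand side of (2.35): <dF dH>_c / T^2, estimated from samples.
rhs = np.cov(F, H)[0, 1] / T**2

# Left-hand side: d<F>_c/dT, known exactly for this model.
lhs = 2 * N * (N + 1) * T

print(rhs / lhs)  # should be close to 1
```

Both sides equal 2N(N+1)T = 1260 for these parameters, and the Monte Carlo estimate reproduces this within sampling error.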

Taking into account the identity (2.35), one can rewrite equation (2.34) in the form

\[
\langle(\delta F)^2\rangle_{mc} \;=\; \langle(\delta F)^2\rangle_c \left[\,1 - \mathrm{corr}_c^{2}(F, H)\,\right]\ , \tag{2.36}
\]

where

\[
\mathrm{corr}_c(F, H) \;:=\; \frac{\langle F H\rangle_c - \langle F\rangle_c \langle H\rangle_c}{\sqrt{\langle(\delta H)^2\rangle_c}\,\sqrt{\langle(\delta F)^2\rangle_c}} \tag{2.37}
\]

is the canonical correlation of F and H. Observe that equation (2.36) obviously implies that ⟨(δF)²⟩_mc ≤ ⟨(δF)²⟩_c.

Exercise 2.9. By making repeated use of relation (2.28), prove that corr_c(g(H), H) → 1 as N → ∞. Notice that this is in agreement with what is predicted by equation (2.36), since ⟨(δg(H))²⟩_mc = 0.
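The statement of Exercise 2.9 can be illustrated concretely. The sketch below is our own construction: it assumes g(H) = H² and the harmonic-oscillator model H ~ Gamma(shape = N, scale = T) used above, for which the Gamma moments give the closed form corr_c(H², H) = √((4N+4)/(4N+6)), independent of T, which indeed tends to 1 as N → ∞.

```python
import numpy as np

# Closed-form illustration (our construction, not from the text): with
# g(H) = H^2 and canonical energy H ~ Gamma(shape=N, scale=T), the Gamma
# moments give corr_c(H^2, H) = sqrt((4N+4)/(4N+6)), independent of T.
def corr_g2(N):
    return np.sqrt((4 * N + 4) / (4 * N + 6))

for N in (1, 10, 100, 1000):
    print(N, corr_g2(N))   # increases towards 1 as N grows

# Monte Carlo cross-check of the closed form at N = 10.
rng = np.random.default_rng(5)
H = rng.gamma(shape=10, scale=1.0, size=200_000)
emp = np.corrcoef(H**2, H)[0, 1]
print(emp, corr_g2(10))
```

The sampled correlation at N = 10 agrees with the closed-form value, and the latter approaches 1 as N grows, as the exercise predicts.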

Chapter 3

Stochastically thermostated systems

The use of the canonical Gibbs measure (2.18) for systems that are in contact with a thermal bath at a fixed temperature is justified, for example, as follows. In order to mimic the action of the thermal bath on the system, particular stochastic forces are added to the right hand side of its Hamilton equations, such that, at any given instant of time, their mean is zero and their mean quadratic fluctuation is proportional to the temperature. At the same time, a particular dissipative force-term has to be added as well, in such a way that the energy of the system does not grow unboundedly with time. Once this is done, the Hamilton equations become stochastic Langevin equations, and the corresponding Liouville equation becomes the so-called Fokker-Planck equation, characterized by diffusion in the space of momenta. In a sense to be specified below, the solution of the Fokker-Planck equation approaches, asymptotically in time, the canonical density, for any initial condition. This is shown by means of a Lyapunov-like method.

3.1 The Langevin equation

We now follow a general approach, starting with a generic differential equation ẋ = u(t, x) in ℝⁿ, and adding to its vector field u a vector valued stochastic process, or random function, ξ(t), defined as follows:

\[
\xi : T \times \Omega \to \mathbb{R}^n : (t, \omega) \mapsto \xi(t, \omega)\ , \tag{3.1}
\]

where T = ℝ or T = ℤ, and (Ω, σ, μ) is a given probability space. Any vector valued function of time ξ(t, ω̄) corresponding to a fixed ω̄ ∈ Ω is a sample (function, or trajectory) of the given process. On the other hand, for any fixed t̄, and for any i = 1, …, n, ξ_i(t̄, ω) is a random variable, i.e. a function defined on Ω. Moreover, given a finite discrete set of N instants t₁ < t₂ < ⋯ < t_N, it is natural to consider the joint probability distribution of the vector variables ξ(1) = ξ(t₁, ω), ξ(2) = ξ(t₂, ω), …, ξ(N) = ξ(t_N, ω), possibly characterized by a density in n·N real variables: assigning a stochastic process is equivalent to specifying such a joint density for any finite set of instants. Of course this is possible only when some restrictions are posed,


in such a way that the whole distribution of the process is specified in terms of a finite number of parameters.

Example 3.1. The sequence of random variables {ξ_t}, t = 0, 1, 2, …, where, for any fixed t̄, ξ_t̄ = ω is a standard Gaussian variable with measure dμ(ω) = (e^(−ω²/2)/√(2π)) dω, is a discrete random process (or sequence). Here T = ℕ and Ω = ℝ. The samples are constant sequences: ξ₀ = ω̄, ξ₁ = ω̄, …, where ω̄ is a given real number.

Example 3.2. Consider the random function u(t; A, θ) := Σ_{k=1}^{N} A_k cos(t + θ_k), where each A_k is a standard Gaussian variable and each θ_k is distributed on the interval [0, 2π] with constant density, independently of all the others. This is a continuous stochastic process. Here (A, θ) = ω ∈ Ω = ℝ^N × 𝕋^N and T = ℝ. A sample is a 2π-periodic function, obtained by fixing the N values of the amplitudes A_k and the N values of the phases θ_k.
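A short sketch (ours, not from the text) of a single sample of the process of Example 3.2: fixing ω̄ = (A, θ) once, by drawing the amplitudes and phases a single time, yields a deterministic function of t, and its 2π-periodicity can be checked directly.

```python
import numpy as np

# One sample of the process of Example 3.2: draw omega = (A, theta) once,
# then evaluate the resulting deterministic function of time.
rng = np.random.default_rng(2)

N = 5
A = rng.normal(size=N)                     # standard Gaussian amplitudes A_k
theta = rng.uniform(0, 2 * np.pi, size=N)  # uniform phases theta_k on [0, 2pi]

def u(t):
    # u(t; A, theta) = sum_k A_k cos(t + theta_k)
    return np.sum(A * np.cos(t + theta))

print(u(1.0), u(1.0 + 2 * np.pi))  # equal up to rounding: the sample is 2pi-periodic
```

Different draws of (A, θ) give different periodic functions; the randomness lies entirely in the choice of the sample, not in the time evolution.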

Perhaps the simplest and most used stochastic process in physics is the so-called white noise, namely a zero mean, delta-correlated Gaussian process characterized by the relations

\[
\langle \xi_i(s) \rangle = 0\ , \quad \forall i = 1, \dots, n\ ,\ \forall s \in \mathbb{R}^+\ ; \tag{3.2}
\]
\[
\langle \xi_i(s)\,\xi_j(s') \rangle = g_{ij}\,\delta(s - s')\ , \quad \forall i, j = 1, \dots, n\ ,\ \forall s, s' \in \mathbb{R}^+\ , \tag{3.3}
\]

where g_ij is the generic element of a constant, symmetric, n × n matrix G. The expectations in the relations (3.2) and (3.3) above are meant with respect to the distribution of the single component ξ_i(s) and to the joint distribution of any two components (ξ_i(s), ξ_j(s')), respectively.

Notice that (3.2) and (3.3) are n + n² relations for a generic G, whereas their number reduces to (n² + 3n)/2 if G is symmetric. The Gaussian character of the process then allows one to compute the expectation of any product of a finite number of ξ-components at given instants. One has

\[
\langle \xi_{i_1}(s_1) \cdots \xi_{i_{2k+1}}(s_{2k+1}) \rangle = 0\ ; \tag{3.4}
\]
\[
\langle \xi_{i_1}(s_1) \cdots \xi_{i_{2k}}(s_{2k}) \rangle = \sum_{P} \left\langle \xi_{i_{P(1)}}(s_{P(1)})\,\xi_{i_{P(2)}}(s_{P(2)}) \right\rangle \cdots \left\langle \xi_{i_{P(2k-1)}}(s_{P(2k-1)})\,\xi_{i_{P(2k)}}(s_{P(2k)}) \right\rangle\ ,
\]

where P denotes one of the (2k)!/(2^k k!) partitions of the set of 2k indices {i₁, …, i_{2k}} into k distinct pairs. For example, if k = 2 one has

\[
\langle \xi_{i_1}(s_1) \cdots \xi_{i_4}(s_4) \rangle = g_{i_1 i_2} g_{i_3 i_4}\,\delta(s_1 - s_2)\,\delta(s_3 - s_4) + g_{i_1 i_3} g_{i_2 i_4}\,\delta(s_1 - s_3)\,\delta(s_2 - s_4) + g_{i_1 i_4} g_{i_2 i_3}\,\delta(s_1 - s_4)\,\delta(s_2 - s_3)\ .
\]
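The k = 2 pairing rule can be verified numerically in a discrete analogue (our sketch, not from the text): replacing the Dirac deltas by equal-time evaluation, the components become a zero-mean Gaussian vector with covariance matrix G, and the fourth moment should equal the three-term pairing sum.

```python
import numpy as np

# Discrete-time check of the k = 2 Wick rule (our analogue: Dirac deltas
# replaced by equal-time evaluation, so <xi_i xi_j> = g_ij for a zero-mean
# Gaussian vector with covariance matrix G).
rng = np.random.default_rng(3)

G = np.array([[2.0, 0.5, 0.3],
              [0.5, 1.0, 0.2],
              [0.3, 0.2, 1.5]])   # symmetric positive definite covariance
xi = rng.multivariate_normal(np.zeros(3), G, size=1_000_000)

i1, i2, i3, i4 = 0, 1, 2, 1
fourth = np.mean(xi[:, i1] * xi[:, i2] * xi[:, i3] * xi[:, i4])
wick = (G[i1, i2] * G[i3, i4]
        + G[i1, i3] * G[i2, i4]
        + G[i1, i4] * G[i2, i3])

print(fourth, wick)  # the two numbers should be close
```

For this choice of indices the pairing sum evaluates to 0.5, and the sampled fourth moment reproduces it within Monte Carlo error.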

Now, we consider a particular stochastic differential equation, namely the Langevin equation

\[
\dot{x} = u(t, x) + \xi(t)\ , \tag{3.5}
\]

where ξ(t) is a vector valued, zero mean, delta-correlated Gaussian process, or white noise, specified by the relations (3.2), (3.3) and (3.4). Strictly speaking, the Langevin equation (3.5) is meaningless, since one can prove that, under the hypotheses made, x(t) is a continuous but nowhere differentiable random function of time.
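Despite this, equation (3.5) can be integrated numerically once the noise increment over a time step dt is interpreted as a Gaussian of variance g·dt; the √dt scaling of the increments is precisely the signature of the non-differentiability just mentioned. A minimal Euler-Maruyama sketch (our choice of scheme and of the drift u(t, x) = −γx, not from the text), whose stationary variance should approach the Ornstein-Uhlenbeck value g/(2γ):

```python
import numpy as np

# Euler-Maruyama sketch (our construction) of the scalar Langevin equation
# (3.5) with drift u(t, x) = -gamma * x and white noise of strength g,
# i.e. <xi(s) xi(s')> = g * delta(s - s').  Over a step dt, the integrated
# noise is Gaussian with variance g * dt, hence the sqrt(g * dt) factor.
rng = np.random.default_rng(4)

gamma, g = 1.0, 2.0
dt, n_steps, n_paths = 1e-2, 5_000, 2_000

x = np.zeros(n_paths)
for _ in range(n_steps):
    x += -gamma * x * dt + np.sqrt(g * dt) * rng.normal(size=n_paths)

# The Ornstein-Uhlenbeck process has stationary variance g / (2 * gamma).
print(x.var(), g / (2 * gamma))
```

After many relaxation times the empirical variance across paths settles near g/(2γ) = 1, up to O(dt) discretization bias and sampling noise, anticipating the approach to a stationary density discussed for the Fokker-Planck equation below.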

