**Appendix A**

**Hierarchy of structures in topology and their utility**

**Definition A.1.** A metric space is a pair (X, d) where X is a set and d : X × X → R_{+} is a distance,
i.e. a nonnegative function which satisfies

1. d(x, y) = 0 iff x = y (nondegeneracy);

2. d(x, y) = d(y, x) ∀x, y ∈ X (symmetry);

3. d(x, z) ≤ d(x, y) + d(y, z) ∀x, y, z ∈ X (triangle inequality).
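As a concrete sketch (in Python; the taxicab distance on R^{2} and the sampled points are just an example), one can verify the three axioms numerically:

```python
import itertools
import random

def d_taxi(x, y):
    """Taxicab (l^1) distance on R^2: a concrete example of a metric."""
    return abs(x[0] - y[0]) + abs(x[1] - y[1])

# Sample a few points of R^2 and check the three axioms of Definition A.1.
random.seed(0)
pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]

for x, y, z in itertools.product(pts, repeat=3):
    assert d_taxi(x, x) == 0                        # nondegeneracy: d(x, x) = 0
    assert (d_taxi(x, y) == 0) == (x == y)          # nondegeneracy: d = 0 iff equal
    assert d_taxi(x, y) == d_taxi(y, x)             # symmetry
    # triangle inequality (small slack for floating-point rounding)
    assert d_taxi(x, z) <= d_taxi(x, y) + d_taxi(y, z) + 1e-12
```

Any other metric (the Euclidean one, the sup-metric, etc.) would pass the same checks.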

In metric spaces one can define limits of functions and thus continuity. Indeed, given two
metric spaces (X, d) and (X′, d′), one says that a function f : X → X′ has (finite) limit ℓ ∈ X′ as
x tends to x_{0}, and writes lim_{x→x_{0}} f(x) = ℓ, if for any ε > 0 there exists a δ_{ε} > 0 such that for all
x ∈ X \ {x_{0}} satisfying d(x, x_{0}) < δ_{ε} one has d′(f(x), ℓ) < ε. The definition of continuous function
is obtained by setting ℓ = f(x_{0}).
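A hedged numerical illustration (Python; the function f(x) = x², the point x_{0} = 1 and the choice of δ_{ε} are just an example): since |x − 1| < 1 forces |x + 1| < 3, the choice δ_{ε} = min(1, ε/3) witnesses continuity at x_{0} = 1:

```python
def f(x):
    return x * x

x0, limit = 1.0, 1.0  # f is continuous at x0 = 1, so the limit is f(x0) = 1

for eps in (1.0, 0.1, 0.01, 1e-4):
    delta = min(1.0, eps / 3.0)  # works: |x - 1| < 1 implies |x + 1| < 3
    # probe many x with 0 < |x - x0| < delta and check |f(x) - limit| < eps
    for k in range(1, 1000):
        assert abs(f(x0 + delta * k / 1000.0) - limit) < eps
        assert abs(f(x0 - delta * k / 1000.0) - limit) < eps
```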

Notice that a metric space is not necessarily a linear (or vector) space. On the other hand,
if one wants to build up differential calculus, one needs the linear structure. Indeed, the
following definition of (weak, or Gateaux) differential of a function f : X → X′ at x_{0} ∈ X with
increment h ∈ X is quite natural:

d f(x_{0}; h) := lim_{t→0} (1/t)[ f(x_{0} + th) − f(x_{0})] .

Observe that one needs the linear combination x_{0} + th ∈ X and the differential d f(x_{0}; h) ∈ X′,
the convergence of the above limit being meant in the metric d′ of X′. Actually, one needs
to measure the size of the objects in X and X′, in order to have some control on the above
formula (for example, one would like d f to be “small” when h is “small”). This is
achieved when the distance d on the linear space X is induced by a norm ‖·‖, such that
d(x, y) = ‖x − y‖.
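The limit above can be approximated by a finite-difference quotient. As an illustrative sketch (Python; the function F(x) = Σ x_{i}^{2} on R^{3} and the chosen points are invented for the example), whose Gateaux differential is d F(x_{0}; h) = 2〈x_{0}, h〉:

```python
def F(x):
    """F(x) = sum of squares of the components, a smooth function on R^3."""
    return sum(xi * xi for xi in x)

def gateaux(F, x0, h, t=1e-6):
    """Finite-difference approximation of dF(x0; h) = lim_{t->0} [F(x0 + t h) - F(x0)] / t."""
    xt = [a + t * b for a, b in zip(x0, h)]
    return (F(xt) - F(x0)) / t

x0 = [1.0, -2.0, 0.5]
h = [0.3, 0.1, -1.0]
exact = 2 * sum(a * b for a, b in zip(x0, h))  # analytic value 2<x0, h>
assert abs(gateaux(F, x0, h) - exact) < 1e-4
```

The error of the quotient is of order t‖h‖², consistent with the definition of the differential as a limit.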

**Definition A.2.** A normed space is a pair (X, ‖·‖) where X is a linear space and ‖·‖ : X → R_{+}
is a norm, i.e. a nonnegative function satisfying

1. ‖x‖ = 0 iff x = 0 (nondegeneracy);


2. ‖λx‖ = |λ|‖x‖ ∀λ ∈ R and ∀x ∈ X (homogeneity of degree one);

3. ‖x − z‖ ≤ ‖x − y‖ + ‖y − z‖ ∀x, y, z ∈ X (triangle inequality).

One easily checks that a normed linear space is a metric space with the distance

d(x, y) := ‖x − y‖ . (A.1)
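Indeed — a short check, using only the axioms of Definition A.2: nondegeneracy of d follows from property 1.; symmetry follows from homogeneity with λ = −1, since d(y, x) = ‖y − x‖ = ‖(−1)(x − y)‖ = |−1| ‖x − y‖ = d(x, y); and the triangle inequality for d is exactly property 3. of the norm.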

Differential calculus is performed in normed spaces, but in order to set up all the machinery
of differential equations one needs a stronger property. More precisely, in order to have
the uniqueness of the solution to a given differential equation one needs the completeness of
the space, namely that every Cauchy sequence converges (recall that a sequence {x_{n}}_{n∈N} ⊂ X
is a Cauchy sequence if ∀ε > 0 there exists N_{ε} ∈ N such that for any pair n, m ∈ N satisfying
n ≥ m > N_{ε} one has d(x_{n}, x_{m}) < ε).
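A numerical sketch of the Cauchy property (Python; the sequence of partial sums x_{n} = Σ_{k=0}^{n} 1/k!, converging to e, is just an example — its tail is bounded by 2/(m + 1)!):

```python
from math import factorial

def x(n):
    """Partial sum sum_{k=0}^{n} 1/k!, converging to e as n -> infinity."""
    return sum(1.0 / factorial(k) for k in range(n + 1))

# Cauchy property: for each eps pick an N_eps and verify |x_n - x_m| < eps
# for sampled pairs with n >= m > N_eps.
for eps, N in ((1e-3, 7), (1e-6, 10), (1e-9, 13)):
    for m in range(N + 1, N + 20):
        for n in range(m, N + 20):
            assert abs(x(n) - x(m)) < eps
```

Of course this only samples finitely many pairs; the actual bound 2/(m + 1)! < ε is what the assertion illustrates.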

**Definition A.3.** A complete normed space is called a Banach space.

Almost all the classical results in the theory of differential equations (existence, uniqueness and regularity) hold in Banach spaces. Completeness enters the game in the proof of the Picard existence and uniqueness theorem, as follows. We recall that a differential equation

ẋ(t) = u(x(t)), with initial condition x(0) = x_{0}, is naturally solved by iterating the map P defined by

x′(t) = P(x) := x_{0} + ∫_{0}^{t} u(x(s)) ds ,

with initial point x(s) ≡ x_{0}. Now, P acts on the space B(I, X) of bounded functions t ↦ x(t),
defined on a suitable real interval I, with values in the Banach space X. Such a space, endowed
with the metric d(x, y) := sup_{t∈I} ‖x(t) − y(t)‖, ‖·‖ being the norm on X, turns out to be complete.

A local Lipschitz condition on u guarantees (and is optimal) that P : B → B is a contraction,
i.e. that there exists ρ ∈ ]0, 1[ such that d(P(x), P(y)) ≤ ρ d(x, y) for any x, y close enough to x_{0}.
Since a contraction in a complete metric space admits a unique fixed point, there exists a
unique x̂(t) ∈ B(I, X) such that x̂(t) = P(x̂(t)), i.e. x̂(t) is the local solution of the given differential
equation.
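A minimal sketch of the iteration (Python; the choice u(x) = x, x_{0} = 1, the interval [0, 1/2] and the trapezoidal rule for the integral are assumptions of the example). For this u the iterates P^{n}(x_{0}) are the Taylor partial sums of e^{t}, and they converge to the true solution:

```python
# Picard iteration for x'(t) = u(x(t)), x(0) = x0, on a grid over I = [0, T].
# Here u(x) = x and x0 = 1, so the exact solution is x(t) = exp(t).
from math import exp

def picard_step(xs, ts, x0, u):
    """Apply P: x -> x0 + integral_0^t u(x(s)) ds (trapezoidal rule on the grid)."""
    new = [x0]
    integral = 0.0
    for i in range(1, len(ts)):
        dt = ts[i] - ts[i - 1]
        integral += 0.5 * dt * (u(xs[i - 1]) + u(xs[i]))
        new.append(x0 + integral)
    return new

T, n = 0.5, 1000
ts = [T * i / n for i in range(n + 1)]
x0 = 1.0
xs = [x0] * (n + 1)          # initial point: the constant function x(s) = x0

for _ in range(25):          # iterate the map P
    xs = picard_step(xs, ts, x0, lambda x: x)

# Compare with the exact solution exp(t) at the right endpoint of I.
assert abs(xs[-1] - exp(T)) < 1e-4
```

The interval must be small enough for P to be a contraction; on a larger interval the local solutions are glued together in the usual way.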

It may happen that the linear space where the problem at hand is set up possesses a Euclidean structure, i.e. a scalar (or inner) product between its elements. When this happens it is very useful, since concepts like angle, direction and projection then become meaningful.

**Definition A.4.** A Euclidean space is a pair (X, 〈,〉) where X is a linear space and 〈,〉 : X × X → R
is a scalar product, i.e. a real function satisfying

1. 〈x, x〉 > 0 ∀x ∈ X \ {0}, 〈0,0〉 = 0 (nondegeneracy);

2. 〈x, y〉 = 〈y, x〉 ∀x, y ∈ X (symmetry);

3. 〈λx + µy, z〉 = λ〈x, z〉 + µ〈y, z〉 ∀λ, µ ∈ R and ∀x, y, z ∈ X (linearity in the first entry).

One easily checks that a Euclidean space is a normed (and thus a metric, see (A.1)) space with the norm

‖x‖ := √〈x, x〉 . (A.2)

Two vectors x, y of (X, 〈,〉) are said to be mutually orthogonal if 〈x, y〉 = 0. In a Euclidean space,
for example, the Pythagorean theorem holds, namely ‖x + y‖^{2} = ‖x‖^{2} + ‖y‖^{2} for all pairs x, y of
mutually orthogonal vectors.
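A quick numerical check of this identity (Python; the standard scalar product on R^{3} and the two orthogonal vectors are chosen for the example):

```python
def dot(x, y):
    """Standard scalar product on R^n."""
    return sum(a * b for a, b in zip(x, y))

def norm_sq(x):
    return dot(x, x)   # ||x||^2 = <x, x>, cf. (A.2)

x = [1.0, 2.0, 0.0]
y = [-2.0, 1.0, 3.0]
assert dot(x, y) == 0.0                               # x and y are mutually orthogonal
assert abs(norm_sq([a + b for a, b in zip(x, y)])
           - (norm_sq(x) + norm_sq(y))) < 1e-12       # ||x+y||^2 = ||x||^2 + ||y||^2
```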

Of course, the Euclidean structure is very useful in dealing with differential equations, whose appropriate environment, as explained above, is the Banach space. This justifies the introduction of a stronger topological structure, namely that of a Euclidean space which is complete with respect to the norm (A.2) naturally induced by the scalar product, and which one may refer to, for short, as a Euclidean-Banach space.

**Definition A.5.** A Euclidean-Banach space is called a Hilbert space.

**Remark A.1.** Wherever needed, the linear structure of the space X can be considered over the
complex field C, which only requires the symmetry property 2. of the scalar product to change
as follows: 〈x, y〉 = 〈y, x〉^{∗}, the star denoting complex conjugation.
