4 Ordinary differential equations: elements of general theory

In this Section we study some elements of the general theory of ordinary differential equations. We assume knowledge of scalar linear equations, and of the procedures to solve those of first order and those of any order with real constant coefficients.

4.1 Generalities: definitions, Cauchy problem, a priori analysis

An ordinary differential equation (ODE for short) is a problem where one is required to determine a function $y(t) : I \to \mathbb{C}$, defined on an open interval $I \subset \mathbb{R}$, starting from a relation involving the derivatives of $y(t)$ (including the function $y(t)$ itself, viewed as the zeroth order derivative) and the independent variable $t$. The problem of solving ODEs is a natural generalization of the problem of integrating a given continuous function $f(t)$, which could be expressed in these terms by the equation $y' = f(t)$. The adjective "ordinary" refers to the fact that the unknown function $y(t)$ depends only on the variable $t$ (or, at least, that only derivatives with respect to $t$ appear in the equation). To stress the fact that $y(t)$ takes values in $\mathbb{C}$ (i.e. that $y(t)$ is a scalar function) one sometimes says that the ODE is scalar; more generally, one could look for an unknown vector-valued function $y(t) = (y_1(t), \dots, y_n(t)) : I \to \mathbb{C}^n$ with $n \ge 2$, and in this case the ODE is said to be a differential system. The most common symbol for the independent variable is $t$;(35) but, depending on the case, other symbols (e.g. $x$) may be used.

The family of all solutions of a given differential equation is sometimes called the general integral of the equation: usually this family has infinitely many elements, as happens in the integration problem.(36) The order of a differential equation is the highest order of derivation appearing in it. A differential equation of order $n$ is said to be:

(a) in normal form if the top order derivative $y^{(n)}$ is explicitly expressed in terms of those of lower orders (i.e. $y = y^{(0)}, y', \dots, y^{(n-1)}$) and of the variable $t$;

(b) linear if it has the form of a polynomial of degree one in the derivatives $y = y^{(0)}, y', \dots, y^{(n-1)}, y^{(n)}$ of the unknown function $y(t)$;

(c) autonomous if the independent variable $t$ does not appear explicitly in the equation.

A Cauchy problem of order $n$ consists in assigning a differential equation of order $n$ together with an initial condition, i.e. the $n$ values $y(t_0) = \alpha_0$, $y'(t_0) = \alpha_1$, ..., $y^{(n-1)}(t_0) = \alpha_{n-1}$ that the desired solution $y(t)$ and its derivatives up to order $n-1$ should attain at a certain $t_0 \in I$. Very often (actually, almost all the time), the solution of a given Cauchy problem exists and is unique, at least locally in a neighborhood of $t_0$: we shall come back to this soon, when discussing the Cauchy-Lipschitz Theorems.

(35) viewed as time, so that $y(t)$ is a function of time, such as the position of a point particle moving along a line.

(36) namely, as we know well, any continuous function $f : I \to \mathbb{C}$ admits a primitive $F(x)$, and the general integral of the equation $y' = f(x)$ is the infinite set $\{F(x) + k : k \in \mathbb{C}\}$.

In general, finding elementary expressions for the solutions of a given ODE is impossible (this should not be surprising, since we already know that this happens for the integration problem), and only very few, although important, particular cases can be solved. Nevertheless, from the mere form of a differential equation it is already possible to obtain remarkable "a priori" information about the behaviour of its solutions (e.g. about growth, convexity, domain, asymptotes, parity, constant solutions), without finding the solutions explicitly.

Examples. (1) A solution $y(t)$ of the equation $y' = \frac{1}{2} + \cos t$ (of the first order, in normal form, linear, non autonomous) will be increasing if and only if $y'(t) = \frac{1}{2} + \cos t \ge 0$, i.e. for $\cos t \ge -\frac{1}{2}$, i.e. for $-\frac{2\pi}{3} + 2k\pi \le t \le \frac{2\pi}{3} + 2k\pi$. In this case the general integral can easily be found by integration, and is $\{y(t) = \frac{t}{2} + \sin t + k : k \in \mathbb{C}\}$. But if we add the Cauchy datum $y(\pi) = 1$ the solution becomes unique, i.e. $y(t) = \frac{t}{2} + \sin t + 1 - \frac{\pi}{2}$. (2) A solution $y(x)$ of the second order equation $y'' = 6x$ (here we use the notation $x$ for the independent variable) will be convex if and only if $y''(x) = 6x \ge 0$, i.e. for $x \ge 0$. The general integral can easily be found by double integration, and is $\{y(x) = x^3 + ax + b : a, b \in \mathbb{C}\}$; also in this case, if we add the Cauchy data $y(-1) = 0$ and $y'(-1) = 2$ the solution becomes unique, i.e. $y(x) = x^3 - x$. (3) The autonomous first order equation $y' = y$ has $y \equiv 0$ as its unique constant solution. Any other solution $y(x)$ will be increasing if and only if $y'(x) = y \ge 0$, i.e. where it is positive. We shall soon see that the general integral is $\{k e^x : k \in \mathbb{C}\}$. (4) The systems $\begin{cases} \dot{x} = t - 1 \\ \dot{y} = t^2 \end{cases}$ and $\begin{cases} \dot{x} = y \\ \dot{y} = -x \end{cases}$ are two examples of differential systems: here we mean that we look for unknown functions $(x(t), y(t))$, i.e. for a vector-valued function $Y(t) = (x(t), y(t)) : I \to \mathbb{C}^2$. The first system is easy to solve (it is enough to make two separate integrations), and gives $x(t) = \frac{1}{2}t^2 - t + h$, $y(t) = \frac{1}{3}t^3 + k$ for $h, k \in \mathbb{C}$; the second is not easy (for the moment), but one of its solutions is clearly $(x(t), y(t)) = (\sin t, \cos t)$. (5) Consider the equation $y' = 2x(y-1)^2$ in the unknown function $y(x)$ (of the first order, in normal form, non linear, non autonomous). A constant function $y \equiv k$ is a solution when $0 = 2x(k-1)^2$ for any $x$, hence the unique constant solution is $y(x) \equiv 1$. Now let $\varphi(x)$ be a solution defined in a neighborhood of $x_0 = 0$, and let us check whether we can determine a priori something about its parity. Setting $\psi(x) = \varphi(-x)$, we have $\psi'(x) = -\varphi'(-x)$: since $-\varphi'(-x) = -2(-x)(\varphi(-x) - 1)^2 = 2x(\psi(x) - 1)^2$, also $\psi(x)$ is a solution. Now, since $\psi(0) = \varphi(0)$, both $\psi(x)$ and $\varphi(x)$ are solutions of the same Cauchy problem: assuming that the hypotheses of existence and uniqueness are satisfied (as we said, this is almost always true, and it is also true in this case), this implies that $\psi(x) = \varphi(x)$ in a neighborhood of $x_0 = 0$. But this simply means that the solution $\varphi(x)$ is even. Now let us come to growth and convexity. Since a solution $y(x)$ is increasing if and only if $y'(x) = 2x(y(x) - 1)^2 \ge 0$, we have that $y(x)$ (if different from $1$) is strictly decreasing for $x < 0$ and strictly increasing for $x > 0$: hence at $x_0 = 0$ it has a minimum point. Moreover, by differentiating (using Leibniz' rule) we find $y'' = 2(y-1)^2 + 2x \cdot 2y'(y-1) = 2(y-1)\big(y - 1 + 2x \cdot 2x(y-1)^2\big) = 2(y-1)^2\big(1 + 4x^2(y-1)\big)$, hence $y(x)$ is convex where $y \ge \frac{4x^2 - 1}{4x^2}$. This equation will be solved very soon, and the solution will confirm these "predictions".

Although we do not know the Cauchy Theorem yet, we already said that existence and uniqueness of the solution of an ODE hold in most of the standard cases. Hence it is worth discussing their consequences in some basic cases:


(1) if a first order scalar ODE has local existence and uniqueness, the graphs of two distinct solutions cannot intersect;

(2) if a second order scalar ODE has local existence and uniqueness, the graphs of two distinct solutions can intersect, but only with different slopes (without tangency).

Figure 4.1: Some solutions of the first order ODEs (a) $y' = \frac{1}{2} + \cos x$ and (b) $y' = y$, and of the second order ODE (c) $y'' = 6x$.

4.2 Equations of first order with separable variables

We start with a concrete example: assume we want to find the solutions $y(x)$ of the first order differential equation $2(x+1)\, y' = x y^3$, respectively with Cauchy datum $y(0) = -1$ or $y(0) = 0$. Once we divide by $x+1$ (hence from now on we mean that $x \in\, ]-1, +\infty[$), the given problem has the following form with separable variables:

(4.1) $y' = f(x)\, g(y), \qquad y(x_0) = y_0$

(here $x_0 = 0$, and $y_0$ is respectively $-1$ or $0$), where $f(x) = \frac{x}{2(x+1)}$ and $g(y) = y^3$ are continuous functions defined in a neighborhood of $x_0$ and $y_0$ respectively.

• To obtain the solution $y(x)$ such that $y(0) = -1$ we "separate the variables" by dividing both members by $y^3$ (this is possible, since $y^3$ is nonzero: in fact the solution $y(x)$ will be defined in principle only in a neighborhood of $x_0 = 0$ and, since it is a continuous function and its value at $x_0 = 0$ is $-1 \ne 0$, the function $y(x)$ will keep being nonzero in a sufficiently small neighborhood of $x_0 = 0$ by permanence of sign), obtaining $\frac{2}{y^3}\, y' = \frac{x}{x+1}$. Now, both members are functions of $x$ (remember that we are looking for a $y(x)$), so we may integrate between $x_0 = 0$ and a generic $x$, obtaining $\int_0^x \frac{2}{y^3(t)}\, y'(t)\, dt = \int_0^x \frac{t}{t+1}\, dt$. On the left-hand side we change the variable of integration from $t$ to $\eta := y(t)$, obtaining $\int_{-1}^{y(x)} \frac{2}{\eta^3}\, d\eta = \big[-\eta^{-2}\big]_{-1}^{y(x)} = -\frac{1}{y(x)^2} + 1$, while on the right-hand side we get $\int_0^x \frac{t}{t+1}\, dt = \big[t - \log|t+1|\big]_0^x = x - \log|x+1|$. Therefore we have $-\frac{1}{y(x)^2} + 1 = x - \log|x+1|$, and solving with respect to $y(x)$ (and recalling that $y(x)$ must be negative at $x_0 = 0$) we finally get our desired solution $y(x) = -\frac{1}{\sqrt{1 + \log|x+1| - x}}$, defined for $x$ in a neighborhood of $x_0 = 0$ as large as possible provided $x \ne -1$ and the denominator does not vanish.

• On the other hand, to obtain the solution $y(x)$ such that $y(0) = 0$ the above argument does not apply (since when dividing by $y^3$ we would in fact divide by zero, and this is not permitted). But in a case like this (where the Cauchy datum of $y(x)$ annihilates $y^3$) the answer is even easier, because the desired solution is just the constant function $y(x) \equiv 0$ (note that both members of $2(x+1)\, y' = x y^3$ vanish identically for any $x \in \mathbb{R}$).

The procedure used in the example above remains the same for any problem with separable variables, i.e. in the form (4.1).

(i) If $g(y_0) = 0$, the constant function $y(x) \equiv y_0$ is a solution of the Cauchy problem.(37)

(ii) Assume instead that $g(y_0) \ne 0$. By dividing both members by $g(y)$ we obtain the equation $\frac{1}{g(y)}\, y' = f(x)$, to be intended as valid only in a neighborhood $I_0$ of $x_0$.

(iii) Integrating both members between $x_0$ and a generic $x \in I_0$ we obtain $\int_{x_0}^x \frac{1}{g(y(t))}\, y'(t)\, dt = \int_{x_0}^x f(t)\, dt$; by operating in the integral on the left-hand side the change of variables $\eta = y(t)$ and recalling that $y(x_0) = y_0$, we finally obtain $\int_{y_0}^{y(x)} \frac{1}{g(\eta)}\, d\eta = \int_{x_0}^x f(t)\, dt$.

(iv) Let $G(\eta)$ be a primitive of $\frac{1}{g(\eta)}$ and $F(x)$ a primitive of $f(x)$: we then get $G(y(x)) - G(y_0) = F(x) - F(x_0)$.

(v) Making $y(x)$ explicit from the last equality we obtain the desired solution $y : I_0 \to \mathbb{R}$. The maximal interval on which this solution is defined is the largest interval $I_0$ contained in $I$, containing $x_0$ and such that $g(y(x)) \ne 0$ for any $x \in I_0$.

A formal rereading of the procedure just described, useful in practice, is the following.

(i) If $g(y_0) = 0$, then $y \equiv y_0$ (constant) is a solution. If instead $g(y_0) \ne 0$, proceed as follows.

(ii) Thinking of $y' = \frac{dy}{dx}$ as a formal ratio of the differentials $dy$ and $dx$, we get $\frac{1}{g(y)}\, dy = f(x)\, dx$.

(iii) Taking the integrals of both members in the respective variables, and denoting by $G(y)$ a primitive of $\frac{1}{g(y)}$ and by $F(x)$ a primitive of $f(x)$, we obtain $G(y) = F(x) + k$ where $k$ is to be determined.

(iv) Imposing the initial condition $y(x_0) = y_0$ we get $k = G(y_0) - F(x_0)$: hence $G(y) = F(x) + G(y_0) - F(x_0)$. [Point (iv) could be swapped with the following one (v), or could be omitted by leaving $k$ undetermined in order to exhibit the general integral of the equation.]

(v) Making $y$ explicit from the last equality we get the desired solution $y(x)$.

(37) If the hypotheses of the Cauchy Theorem are fulfilled, this constant solution is locally the unique solution. We shall discuss this in the next paragraph.
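To make the procedure concrete, here is a minimal SymPy sketch (SymPy is not part of these notes and is assumed available) checking that the closed form found in the opening example really solves $2(x+1)\, y' = x y^3$ with $y(0) = -1$:

```python
# Check that y(x) = -1/sqrt(1 + log(x+1) - x) solves the Cauchy problem
# 2(x+1) y' = x y^3, y(0) = -1, near x0 = 0 (where x + 1 > 0).
import sympy as sp

x = sp.symbols('x')
y = -1 / sp.sqrt(1 + sp.log(x + 1) - x)   # candidate solution from the text

ode_residual = 2*(x + 1)*sp.diff(y, x) - x*y**3
print(sp.simplify(ode_residual))          # 0: the ODE is satisfied
print(y.subs(x, 0))                       # -1: the Cauchy datum is satisfied
```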

Examples. (1) The autonomous equation $y' = y^2$ has $y \equiv 0$ as a constant solution; if on the other hand the Cauchy datum is $y_0 := y(x_0) \ne 0$, by separating the variables and integrating we get $-\frac{1}{y} = x + k$, where $k = -(x_0 + \frac{1}{y_0})$, and therefore $y(x) = \frac{1}{(x_0 + \frac{1}{y_0}) - x}$ (note that such a solution, of homographic type, is defined for $x \ne x_0 + \frac{1}{y_0}$). (2) Consider the equation $x y' = y + 2$ with Cauchy condition $y(1) = y_0$: here we have $f(x) = \frac{1}{x}$, $g(y) = y + 2$ and $x_0 = 1$. If $y_0 = -2$ the solution is $y(x) \equiv -2$. If instead $y_0 \ne -2$, by separating the variables we obtain $\frac{y'}{y+2} = \frac{1}{x}$, hence $\log|y+2| = \log|x| + c$: imposing that $y(1) = y_0$ we obtain $c = \log|y_0 + 2|$, so $\log|y+2| = \log|x(y_0+2)|$, which yields $|y+2| = |x(y_0+2)|$. Now there are two possibilities: $y + 2 = x(y_0+2)$ or $y + 2 = -x(y_0+2)$, but from $y(1) = y_0$ the admissible one is the former. We hence obtain the line $y = (y_0+2)x - 2$. (3) Now let us solve the previously seen equation $y' = 2x(y-1)^2$, with the three different initial conditions (a) $y(0) = 2$, (b) $y(0) = 1$ or (c) $y(0) = -1$. The case (b) is immediately solved by the constant $y(x) \equiv 1$. In the other two cases, by separating the variables we obtain $\frac{y'}{(y-1)^2} = 2x$, hence $-\frac{1}{y-1} = x^2 + c$. In case (a) we obtain $-1 = c$, and so $y(x) = \frac{x^2 - 2}{x^2 - 1}$; in case (c) we have $\frac{1}{2} = c$, hence $y(x) = \frac{2x^2 - 1}{2x^2 + 1}$. Both of these solutions, defined in a neighborhood of $x_0 = 0$ (in $I = \,]-1, 1[$ for (a), and in $I = \mathbb{R}$ for (c)), satisfy the "predictions" previously deduced by a priori analysis from the mere form of the equation.
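As a check of example (3), here is a short SymPy sketch (assumed available) verifying that the two closed forms solve the equation and match the a priori predictions (even functions with a minimum at $x = 0$):

```python
# Verify the solutions of y' = 2x(y-1)^2 for the data y(0) = 2 and y(0) = -1.
import sympy as sp

x = sp.symbols('x')
for sol, datum in [((x**2 - 2)/(x**2 - 1), 2), ((2*x**2 - 1)/(2*x**2 + 1), -1)]:
    residual = sp.diff(sol, x) - 2*x*(sol - 1)**2
    assert sp.simplify(residual) == 0                  # it solves the ODE
    assert sol.subs(x, 0) == datum                     # it satisfies the Cauchy datum
    assert sp.simplify(sol - sol.subs(x, -x)) == 0     # it is even
    print(sp.diff(sol, x).subs(x, 0))                  # 0: critical (minimum) point at x = 0
```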

Now let us provide a first example of a concrete application of ODEs to the study of demographic problems.

The Malthusian and logistic growth models. The autonomous equation $y' = \nu y$ with Cauchy condition $y(x_0) = y_0$, if $y_0 = 0$, has the constant solution $y \equiv 0$; otherwise, by separating the variables one obtains $\frac{y'}{y} = \nu$, hence $\log|y| = \nu x + c$: imposing that $y(x_0) = y_0$ we have $c = \log|y_0| - \nu x_0$, hence $\log\left|\frac{y}{y_0}\right| = \nu(x - x_0)$, hence $\left|\frac{y}{y_0}\right| = e^{\nu(x - x_0)}$, hence $\frac{y}{y_0} = \pm e^{\nu(x - x_0)}$: from $y(x_0) = y_0$ it is necessary to choose "+", and therefore $y(x) = y_0\, e^{\nu(x - x_0)}$. This equation formalizes the classical Malthusian growth model (from T. R. Malthus, 1766–1834), where one assumes that a population $p(t)$ grows in time proportionally to the population itself: namely, in this case there will be an equation of type $p' = (N - M)p$, where $N > 0$ and $M > 0$ denote respectively the birth and death rates of the population. As we have seen, denoting by $p_0$ the initial population, one obtains $p(t) = p_0\, e^{(N-M)t}$: note that, according to this model, for $N < M$ the population dies out, for $N = M$ it remains stable, while for $N > M$ it grows exponentially.

Now, if on one side the Malthusian model gives reasonable results when $N \le M$, on the contrary it appears unrealistic for $N > M$: in fact the growth of the population must sooner or later be influenced by a saturation effect, due to the overexploitation of the region where this population is confined (for example, the progressive lack of food). A more refined model for the case $N > M$ is the so-called logistic model, where one assumes that the growth of the population is proportional to the population itself as long as the number of individuals is low, but then, in parallel with the increase of this number, the growth must slow down, and should become a decrease as soon as the number of individuals exceeds a critical level $S$, which is expected to depend on various factors (type of population, available resources...). The new model, which refines the Malthusian one, becomes $p' = (N - M)\left(1 - \frac{p}{S}\right)p$ (note that the Malthusian model is recovered by eliminating the critical level, i.e. by passing to the limit as $S \to +\infty$): assuming that $0 < p_0 < S$, by separating the variables one obtains $p(t) = \frac{p_0\, S\, e^{(N-M)t}}{S + p_0\left(e^{(N-M)t} - 1\right)}$, defined for $t \ge 0$. The curve $p(t)$ is usually called a sigmoid (see the figure): note that the population grows asymptotically from $p_0$ to the critical value $S$.
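As a quick sanity check of the formula just obtained, here is a small numerical sketch (NumPy and SciPy are assumed available; the parameter values are illustrative, not from the text), comparing the closed-form logistic solution with a direct integration of the equation:

```python
# Compare the closed-form logistic solution with a numerical integration of
# p' = (N - M)(1 - p/S) p; N, M, S, p0 are the quantities named in the text.
import numpy as np
from scipy.integrate import solve_ivp

N, M, S, p0 = 0.5, 0.1, 1000.0, 10.0   # illustrative values
r = N - M

def closed_form(t):
    return p0 * S * np.exp(r*t) / (S + p0*(np.exp(r*t) - 1))

t = np.linspace(0, 40, 200)
num = solve_ivp(lambda t, p: r*(1 - p/S)*p, (0, 40), [p0], t_eval=t, rtol=1e-9)
print(np.max(np.abs(num.y[0] - closed_form(t))))   # small: the two agree
print(closed_form(40))                             # close to the critical level S
```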

4.3 Existence and uniqueness: the Cauchy-Lipschitz theorems

Let us discuss the solutions of the first order equation with separable variables $y' = 2\sqrt{|y|}$ with the Cauchy datum $y(0) = y_0$, for $y_0$ running in $\mathbb{R}$. When $y_0 \ne 0$ the solution exists and is unique (at least in a neighborhood of $x_0 = 0$), and more precisely it is $y(x) = \epsilon_0\left(x + \epsilon_0\sqrt{|y_0|}\right)^2$, where $\epsilon_0 = \operatorname{sign} y_0$. On the other hand, when $y_0 = 0$ there is of course the constant solution $y \equiv 0$, but $y(x) = (\operatorname{sign} x)\, x^2$ (i.e. the function with values $x^2$ when $x \ge 0$ and $-x^2$ when $x < 0$) is a solution as well: these two functions do not coincide in any neighborhood of $x_0 = 0$, hence the solution of this Cauchy problem is not unique, even locally. More generally, given any $a \le 0 \le b$, the function defined as $-(x-a)^2$ for $x \le a$, as $0$ for $a < x < b$ and as $(x-b)^2$ for $x \ge b$ is a solution as well.

We said that, at least locally, existence and uniqueness of the solution of a Cauchy problem hold almost all the time; but here is a (not too difficult) example where this does not happen. What is wrong with this Cauchy problem? The problem, as we shall see in a moment, is that the function $f(t, y) = 2\sqrt{|y|}$ is continuous at $y_0 = 0$ but has an uncontrolled slope there (in particular, it is not differentiable).

Assume we have a first order differential equation $y' = f(t, y)$ in the unknown function $y(t)$, where $f$ is a function defined in some open subset $\Omega \subset \mathbb{R}^2$ with values in $\mathbb{R}$. Taking a point $(t_0, y_0) \in \Omega$, we want to know when the Cauchy problem given by

(4.2) $\begin{cases} y' = f(t, y) \\ y(t_0) = y_0 \end{cases}$

has locally one and only one solution: in other words, when we can find a neighborhood $I \subset \mathbb{R}$ of $t_0$ and a unique solution $\varphi : I \to \mathbb{R}$ of the above Cauchy problem (4.2), i.e. a $C^1$ function $\varphi(t)$ such that $\varphi'(t) = f(t, \varphi(t))$ for any $t \in I$ and $\varphi(t_0) = y_0$.

Theorem 4.3.1. (Cauchy-Lipschitz, local existence and uniqueness) If $f$ is of class $C^1$ with respect to $y$ in some neighborhood of $(t_0, y_0)$, then the Cauchy problem (4.2) has locally one and only one solution.(38)

(38) Actually, the hypothesis on $f$ can be somewhat weakened: it is enough that there exists a neighborhood of $(t_0, y_0)$ where $f$ satisfies a Lipschitz condition with respect to $y$, i.e. such that there exists $L > 0$ with $|f(t, y_1) - f(t, y_2)| \le L\, |y_1 - y_2|$ for any points $(t, y_1)$ and $(t, y_2)$ in the neighborhood (roughly speaking, this means that close to $(t_0, y_0)$ the function is not too sloped with respect to $y$). For example, in the Cauchy problem given by $y' = 2|y|$ and $y(0) = 0$, the function $f(t, y) = 2|y|$ is not $C^1$ in a neighborhood of $(t_0, y_0) = (0, 0)$ but it does satisfy a Lipschitz condition in a neighborhood of it (with $L = 2$), hence local existence and uniqueness applies (in fact the only solution of this Cauchy problem is the zero function $y(x) \equiv 0$).

Theorem 4.3.1 provides a (merely sufficient) condition which ensures the existence and uniqueness of the solution $\varphi(t)$ of the Cauchy problem (4.2) in some neighborhood of $t_0$: hence, given two solutions of (4.2) both defined on a same interval $I \subset \mathbb{R}$ containing $t_0$, there surely exists $\delta > 0$ such that they coincide on $I_\delta := I\, \cap\, ]t_0 - \delta, t_0 + \delta[\, \subset I$, but in principle it could happen that they do not coincide on all of $I$. However, if local existence and uniqueness holds for any Cauchy datum $(t_0, y_0) \in \Omega$, then there are important consequences:

Proposition 4.3.2. Assume that the equation $y' = f(t, y)$ verifies local existence and uniqueness at any point of $\Omega$ (for example, this happens if $f$ is of class $C^1$ in $\Omega$). Then:

(1) (Local uniqueness at all points implies global uniqueness) Two solutions defined on a same interval $I$ which coincide at some point of $I$ must coincide on all of $I$.

(2) (Maximal solutions) Given a Cauchy datum $(t_0, y_0) \in \Omega$, the notion of a uniquely defined "maximal solution" of (4.2) (meaning that the domain of this solution is the largest possible) makes sense, and its domain is necessarily an open interval of $\mathbb{R}$.

Proof. Omitted.

Here are a couple of remarks.

• It is important to note that Proposition 4.3.2(1) says that local existence and uniqueness at every point implies only global uniqueness, not global existence. For example (see Figure 4.2(a)) the equation $y' = y^2$ obviously verifies local existence and uniqueness at every point $(t_0, y_0)$ of $\mathbb{R}^2$ (namely $f(t, y) = y^2$ is clearly of class $C^1$), so we can be sure that there is global uniqueness (i.e. two solutions of the equation which coincide for some $t_0$ must coincide everywhere), but global existence on all of $\mathbb{R}$ fails: as we already know, apart from the zero solution $y(x) \equiv 0$, all other solutions are $\varphi(t) = \frac{1}{k - t}$ for $k \in \mathbb{R}$, whose domain is either $]-\infty, k[$ or $]k, +\infty[$.
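This can also be seen numerically: here is a small sketch (assuming NumPy/SciPy) comparing the numerical solution of the Cauchy problem with datum $y(0) = 1$, whose maximal solution is $\varphi(t) = \frac{1}{1 - t}$ on $]-\infty, 1[$, with the closed form as $t$ approaches the blow-up time $t = 1$:

```python
# The maximal solution of y' = y^2 with y(0) = 1 is 1/(1 - t), defined only
# for t < 1: the numerical solution tracks it as it blows up near t = 1.
import numpy as np
from scipy.integrate import solve_ivp

t = np.linspace(0, 0.99, 100)
sol = solve_ivp(lambda t, y: y**2, (0, 0.99), [1.0], t_eval=t, rtol=1e-10, atol=1e-12)
print(np.max(np.abs(sol.y[0] - 1/(1 - t))))   # small: y(t) = 1/(1-t) indeed
print(sol.y[0, -1])                           # about 100 already at t = 0.99
```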

• The domain of the maximal solution of a Cauchy problem of type (4.2) depends of course not only on the equation $y' = f(t, y)$, but also on the Cauchy datum $y(t_0) = y_0$. For example (see Figure 4.2(b)) the equation $y' = -2ty^2$ has local existence and uniqueness at all points (namely $f(t, y) = -2ty^2$ is of class $C^1$ on all of $\mathbb{R}^2$), hence it also has global uniqueness. Since it has separable variables, the solutions are easy to find: there is the zero function $y \equiv 0$, and all others are of type $y = \frac{1}{t^2 + k}$ for $k \in \mathbb{R}$. Hence the domains of the maximal solutions depend on $k$ (hence on the Cauchy datum): more precisely they are $\mathbb{R}$ (for $k > 0$), $]-\infty, 0[$ or $]0, +\infty[$ (for $k = 0$), and $]-\infty, -\sqrt{|k|}\,[$, $]-\sqrt{|k|}, \sqrt{|k|}\,[$ or $]\sqrt{|k|}, +\infty[$ (for $k < 0$).

Figure 4.2: (a) Some solutions of $y' = y^2$: note that the graphs of different solutions do not intersect (by global uniqueness), and that there is no global existence, since all nonzero solutions are defined only on a half-line. For example, the solution $y(x) = \frac{1}{2 - x}$ (whose graph is purple) is defined either on $]-\infty, 2[$ or on $]2, +\infty[$. (b) Some solutions of the ODE $y' = -2ty^2$: the red (resp. green) one satisfies the Cauchy condition $y(1) = 1$ (resp. $y(1) = \frac{1}{2}$).

Now let us switch from local to global considerations. The question is: given a differential equation $y' = f(t, y)$ and a Cauchy datum $(t_0, y_0)$, what is the domain of the maximal solution of the corresponding Cauchy problem? The case of $y' = y^2$ taught us that, even when the function $f(t, y)$ is very regular (as $y^2$ is, on all of $\mathbb{R}^2$), one should not expect the maximal solutions to be defined on the largest possible interval (which in the case of $y' = y^2$ would be $\mathbb{R}$). Hence it would be useful to find a criterion giving at least a sufficient condition for this to happen.

Theorem 4.3.3. (Cauchy-Lipschitz, global existence and uniqueness) Assume $f$ is of class $C^1$ with respect to $y$ in a domain containing a "vertical stripe" $I \times \mathbb{R}$, where $I$ is an interval of $\mathbb{R}$; assume also that the partial derivative $\frac{\partial f}{\partial y}$ is bounded on any subset of the domain of type $K \times \mathbb{R}$, where $K$ is a compact subset of $I$. Then for any $(t_0, y_0) \in I \times \mathbb{R}$ there is one and only one solution of the Cauchy problem (4.2) defined on all of $I$.(39)

(39) Also in this case the hypothesis on $f$ can be weakened, by assuming that $f$ satisfies a Lipschitz condition with respect to $y$ on any subset of type $K \times \mathbb{R}$, i.e. that there exists $L_K > 0$ such that $|f(t, y_1) - f(t, y_2)| \le L_K |y_1 - y_2|$ for any points $(t, y_1)$ and $(t, y_2)$ in $K \times \mathbb{R}$ (roughly speaking, this means that in the "vertical stripe" $K \times \mathbb{R}$ the function has bounded slope with respect to $y$).


Also here we make a couple of remarks.

• The equations $y' = 2\sqrt{|y|}$ and $y' = y^2$ do not have global existence and uniqueness: for the former we have already seen that there is not even local existence and uniqueness, while the second has solutions $y = \frac{1}{k - t}$ which are not defined on all of $\mathbb{R}$. In fact, Theorem 4.3.3 cannot be applied to them.

• A case of great importance where Theorem 4.3.3 applies is that of linear equations, i.e. those of the type

$y' + p(t)\, y = q(t)$

where $p$ and $q$ are continuous functions defined on some interval $I \subset \mathbb{R}$. Namely, in this case we have $y' = f(t, y)$ with $f(t, y) = -p(t)\, y + q(t)$, and it is enough to note that $f$ is of class $C^1$ with respect to $y$ and that on any compact subset $K \subset I$ it holds $\left|\frac{\partial f}{\partial y}\right| = |-p(t)| \le \max_{t \in K} |p(t)|$.

Figure 4.3: Solutions of the linear equation $t(y' - \log t) + y = 0$, defined on $]0, +\infty[$.

Example. The differential equation $t(y' - \log t) + y = 0$ is linear, as it can be rewritten as $y' + p(t)\, y = q(t)$ with $p(t) = \frac{1}{t}$ and $q(t) = \log t$, both defined on $]0, +\infty[$: hence we already know that its solutions will all be defined on $]0, +\infty[$. Actually we can also compute them (Figure 4.3): a primitive of $p(t)$ is $P(t) = \log t$, and $\int e^{P(t)} q(t)\, dt = \int t \log t\, dt = \frac{1}{4}t^2(2\log t - 1)$, hence $y(t) = e^{-P(t)}\left(\int e^{P(t)} q(t)\, dt + k\right) = \frac{1}{t}\left(\frac{1}{4}t^2(2\log t - 1) + k\right) = \frac{k}{t} + \frac{1}{4}t\,(2\log t - 1)$ for $k \in \mathbb{R}$. The solution for $k = 0$ (that is, $y(t) = \frac{1}{4}t\,(2\log t - 1)$) is the only one having a finite limit ($0$) as $t \to 0^+$; note also that this special solution is an asymptote at $+\infty$ for all the others (namely the difference $\frac{k}{t}$ is infinitesimal).
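A quick symbolic check (with SymPy, assumed available) that the family just computed solves the equation for every $k$:

```python
# The family y(t) = k/t + t(2 log t - 1)/4 solves t(y' - log t) + y = 0 on ]0, +oo[.
import sympy as sp

t = sp.symbols('t', positive=True)    # we work on ]0, +oo[
k = sp.symbols('k')
y = k/t + sp.Rational(1, 4)*t*(2*sp.log(t) - 1)
print(sp.simplify(t*(sp.diff(y, t) - sp.log(t)) + y))   # 0 for every k
```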

4.4 Differential systems in the plane

We now deal with the following more general problem, where the purpose is to determine a pair of unknown functions $(x(t), y(t))$, i.e. a parametric curve in the plane:

(4.3) $\begin{cases} \dot{x} = a(t, x, y) \\ \dot{y} = b(t, x, y) \end{cases}$

Here the dot $\dot{x}$ denotes, as is commonplace in Physics, the derivative of the function $x(t)$ with respect to the parameter $t$ (which usually represents time); $a$ and $b$ are real-valued continuous functions defined in an open subset $U$ of $\mathbb{R}^3$. In this framework, a Cauchy datum for (4.3) is given by assigning a position $(x(t_0), y(t_0)) = (x_0, y_0)$. Then the local Cauchy-Lipschitz Theorem 4.3.1 and the subsequent Proposition 4.3.2 (which could be stated directly for systems, although we restricted ourselves to the case of scalar equations for the sake of simplicity) say that:

• if the functions $a$ and $b$ are of class $C^1$ with respect to $x$ and $y$ in a neighborhood of $(t_0, x_0, y_0)$, then the solution of (4.3) satisfying the Cauchy datum $(x(t_0), y(t_0)) = (x_0, y_0)$ exists and is locally unique;

• if local existence and uniqueness is satisfied at any point of $U$ (for example, this happens when $(a, b)$ is of class $C^1$ with respect to $x$ and $y$ in $U$), then there is also global uniqueness, i.e. two solutions $(x_1(t), y_1(t))$ and $(x_2(t), y_2(t))$ of (4.3) defined on a same interval $I \subset \mathbb{R}$ which coincide for some $t \in I$ must coincide on all of $I$;

• there is a notion of "maximal solution" of the Cauchy problem, defined on the largest possible interval of time, and this maximal interval is open.

In the following we shall treat two different particular cases of (4.3):

1. the autonomous systems, where the functions $a$ and $b$ do not depend on $t$;

2. the linear systems, where the functions $a$ and $b$ are linear in $x$ and $y$.

4.4.1 Autonomous differential systems

Let us start with the autonomous system

(4.4) $\begin{cases} \dot{x} = a(x, y) \\ \dot{y} = b(x, y) \end{cases}$

Here the function $(a, b) : \Omega \to \mathbb{R}^2$ is nothing but a vector field on some open set $\Omega$: so the idea is that we are looking for a curve $\varphi(t) = (x(t), y(t))$ in the plane $\mathbb{R}^2$ by prescribing at any point its tangent vector (the velocity) $\dot\varphi(t) = (\dot{x}(t), \dot{y}(t))$ (see Figure 4.4(a)). It is then clear that, by drawing the vector field $(a, b)$, one obtains an immediate visualization of the images of such curves, also called trajectories or flow curves of the system, or integral/field curves of the vector field.

A fundamental feature of autonomous systems is the following, which says in particular that the Cauchy datum $(x(t_0), y(t_0)) = (x_0, y_0)$ can be assigned directly at the initial time $t_0 = 0$, without losing any information about the family of solutions:


Figure 4.4: (a) Some trajectories of the system $(\dot{x}, \dot{y}) = (2y, x)$. (b) Some graphs of the solutions of $y' = t^2 - y$, seen as trajectories of the autonomous system $(\dot{x}, \dot{y}) = (1, x^2 - y)$.

Proposition 4.4.1. The space of solutions of an autonomous system is invariant under time translations: in other words, if $(x(t), y(t))$ is a solution, then for any $\alpha \in \mathbb{R}$ the translated function $(x(t - \alpha), y(t - \alpha))$ is also a solution.

Proof. Setting $(\tilde{x}(t), \tilde{y}(t)) := (x(t - \alpha), y(t - \alpha))$ we have $(\dot{\tilde{x}}, \dot{\tilde{y}}) = (\dot{x}(t - \alpha), \dot{y}(t - \alpha)) = \big(a(x(t - \alpha), y(t - \alpha)),\, b(x(t - \alpha), y(t - \alpha))\big) = \big(a(\tilde{x}, \tilde{y}), b(\tilde{x}, \tilde{y})\big)$.

To justify the interest in studying such autonomous differential systems, it is important to observe that the following remarkable scalar equations can also be expressed in this form:

• the first order scalar equation $y' = f(t, y)$ in the unknown function $y(t)$: setting $x = t$, this equation is equivalent to the system $\begin{cases} \dot{x} = 1 \\ \dot{y} = f(x, y) \end{cases}$ (for an example see Figure 4.4(b));

• the second order autonomous scalar equation $z'' = f(z, z')$ in the unknown function $z(t)$: setting $x = z$ and $y = z'$, this equation is equivalent to the system $\begin{cases} \dot{x} = y \\ \dot{y} = f(x, y) \end{cases}$.
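The second reduction is also how such equations are integrated numerically in practice. Here is a minimal sketch (assuming SciPy; the choice $f(z, z') = -z$, i.e. the harmonic oscillator, is an illustrative example, not from the text):

```python
# Reduce z'' = f(z, z') to the plane system (x, y) = (z, z') and integrate it.
import numpy as np
from scipy.integrate import solve_ivp

def field(t, u):
    x, y = u            # x = z, y = z'
    return [y, -x]      # (x', y') = (y, f(x, y)) with f(x, y) = -x

sol = solve_ivp(field, (0, 2*np.pi), [1.0, 0.0], rtol=1e-9)
print(sol.y[0, -1], sol.y[1, -1])   # back near (1, 0): the motion is periodic
```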

As we already said, a trajectory of the autonomous system (also called a flow line or integral curve of the vector field $(a, b)$) is the support (the image) of some maximal solution $\varphi(t) = (x(t), y(t))$ of the equation. Sometimes the term "integral curve" refers to an irreducible curve that is not necessarily a single trajectory, but possibly a union of trajectories of the system (see the statement of Proposition 4.4.3 for more details).

A point $(\tilde{x}, \tilde{y}) \in \Omega$ such that $a(\tilde{x}, \tilde{y}) = b(\tilde{x}, \tilde{y}) = 0$ is said to be an equilibrium of the vector field $(a, b)$. If the field is of class $C^1$, then there exists one and only one solution of the Cauchy problem with $(x(0), y(0)) = (\tilde{x}, \tilde{y})$, and it is the constant function $(x(t), y(t)) \equiv (\tilde{x}, \tilde{y})$, which is obviously maximal because it is defined on all of $\mathbb{R}$, and whose trajectory is the single point. Other features of the trajectories of fields of class $C^1$ are as follows.

Proposition 4.4.2. Assume the field $(a, b)$ is of class $C^1$. Then:

(1) Different trajectories never intersect.

(2) Two maximal solutions $\varphi_1(t)$ and $\varphi_2(t)$ with the same trajectory are time translates of each other, i.e. there exists $t_0 \in \mathbb{R}$ such that $\varphi_2(t) = \varphi_1(t + t_0)$ for any $t$.

(3) A non constant solution $\varphi(t)$ covers its trajectory with a never vanishing velocity $\varphi'(t)$.

Proof. The statements follow from the property of global uniqueness (which in turn follows from local existence and uniqueness for any datum), as we now show. Let $\varphi_1(t) = (x_1(t), y_1(t)) : I_1 \to \Omega$ and $\varphi_2(t) = (x_2(t), y_2(t)) : I_2 \to \Omega$ be two maximal solutions of the system, and assume that $\varphi_1(t_1) = \varphi_2(t_2) =: (x_0, y_0)$ for some $t_1 \in I_1$ and $t_2 \in I_2$. Then both of the maximal solutions $\varphi_2(t)$ and $\varphi_1(t - (t_2 - t_1))$ (defined respectively on the intervals $I_2$ and $I_1 + t_2 - t_1$) satisfy the Cauchy problem with datum $(x(t_2), y(t_2)) = (x_0, y_0)$, hence by global uniqueness they are equal: in other words, $\varphi_2$ is a time translate of $\varphi_1$, and obviously both describe the same trajectory. This proves (1) and (2) at the same time. As for (3), if $\varphi'(t_0) = 0$ then $a(\varphi(t_0)) = b(\varphi(t_0)) = 0$, hence (once more by global uniqueness) the solution $\varphi(t)$ must be the constant $\varphi(t_0)$ (because both $\varphi(t)$ and the constant $\varphi(t_0)$ satisfy the Cauchy problem with datum $(x(t_0), y(t_0)) = \varphi(t_0)$).

Now it is time to turn from abstract considerations to concrete methods for solving an autonomous differential system in the plane (4.4), which we shall assume to be of class $C^1$.(40)

(40) Actually what we really need is that the system satisfies local existence and uniqueness, so if necessary we could slightly weaken the hypothesis on the system, as explained above.

The strategy will be twofold.

(1) Try to decouple the equations. Usually the main problem in solving (4.4) is that the two equations of the system are "coupled", i.e. they interlace the differential information on $x(t)$ and $y(t)$. Things become easier when one equation (or better, both of them) is "decoupled", since in this case it can be solved separately from the other one.

Examples. (1) The problem $\begin{cases} \dot{x} = -y \\ \dot{y} = x \end{cases}$ is coupled, and its only equilibrium (given by $y = x = 0$) is the origin $(0, 0)$. In this case a bit of a smart glance at the problem leads us to the evident solution $\varphi(t) = (x(t), y(t)) = (\cos t, \sin t)$; then, starting from there, a little further endeavour and the help of Proposition 4.4.2(2) provide the general solution $\varphi(t) = (x(t), y(t)) = (h \cos(t + k), h \sin(t + k))$ for any $h \ge 0$ and $k \in \mathbb{R}$, or equivalently $(a \cos t + b \sin t,\, -b \cos t + a \sin t)$ for any $a, b \in \mathbb{R}$ (so for example, if we assign the datum $\varphi(0) = (x(0), y(0)) = (-3, 3)$ we get $(h, k) = (3\sqrt{2}, \frac{3\pi}{4})$, or $(a, b) = (-3, -3)$). But it is clear that in general we cannot rely on this kind of occasional intuition. (2) The problem $\begin{cases} \dot{x} = 2x - 1 \\ \dot{y} = y^2 + 1 \end{cases}$ is totally decoupled, and it has no equilibria (namely $2x - 1 = y^2 + 1 = 0$ has no real solutions). Both equations have separable variables, respectively in $x(t)$ and in $y(t)$ (the first one is also linear), hence we know how to solve each of them: it follows that the general solution of the system is $(x(t), y(t)) = \left(h\, e^{2t} + \frac{1}{2},\, \tan(t + k)\right)$ for $h, k \in \mathbb{R}$ (for example, if we assign the datum $\varphi(0) = (x(0), y(0)) = (-2, 1)$ we get $(h, k) = (-\frac{5}{2}, \frac{\pi}{4})$). (3) In the problem $\begin{cases} \dot{x} = x + 2 \\ \dot{y} = xy \end{cases}$ only the first equation is decoupled, and it has solutions $x(t) = h\, e^t - 2$; writing this information into the second equation we get $\dot{y} = (h\, e^t - 2)\, y$, again with separable variables, having solutions $y(t) = k\, e^{h e^t - 2t}$. The general solution is then $(x(t), y(t)) = \left(h\, e^t - 2,\, k\, e^{h e^t - 2t}\right)$ for $h, k \in \mathbb{R}$ (for example, if we assign the datum $\varphi(0) = (x(0), y(0)) = (-1, 1)$ we get $(h, k) = (1, \frac{1}{e})$).
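As a check of example (3), a short SymPy sketch (library assumed available) verifies that the stated two-parameter family solves the system identically in $h$ and $k$:

```python
# The family (x, y) = (h e^t - 2, k e^{h e^t - 2t}) solves x' = x + 2, y' = x y.
import sympy as sp

t, h, k = sp.symbols('t h k')
x = h*sp.exp(t) - 2
y = k*sp.exp(h*sp.exp(t) - 2*t)
print(sp.simplify(sp.diff(x, t) - (x + 2)))   # 0
print(sp.simplify(sp.diff(y, t) - x*y))       # 0
```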

(2) Determine a priori the trajectories of the solutions. Before finding the solutions $\varphi(t) = (x(t), y(t))$ of the system it would be important (in many concrete situations it would even be enough) to determine the trajectories of these solutions, i.e. their images in the plane.

Example. The solutions of the problems $\begin{cases} \dot{x} = -y \\ \dot{y} = x \end{cases}$ and $\begin{cases} \dot{x} = 2y \\ \dot{y} = -2x \end{cases}$ are respectively $(h \cos(t + k), h \sin(t + k))$ (we realized it just above) and $(h \cos(k - 2t), h \sin(k - 2t))$ (an easy adaptation of the previous one), for any $h \ge 0$ and $k \in \mathbb{R}$: these families of solutions are different, but it is clear that they cover the same trajectories, which are the circles centered at the origin. In other words, what changes in the two problems is not the geometry of the trajectories they describe, but just the motion laws they originate (the first one goes counterclockwise, while the second one goes clockwise with double speed), and maybe for our purposes it could have been enough to know the shape of these trajectories without also knowing how they are covered.

A useful tool for both aspects of the above strategy is to look for a prime integral of the system, i.e. a scalar function $E : \Omega \to \mathbb{R}$, at least of class $C^1$, which is constant on the solutions of the system: this means that if $\varphi(t) = (x(t), y(t))$ is a solution of the system (4.4), then the value $E(x(t), y(t))$ does not depend on $t$, i.e. $E$ maintains the same value all along the solution $\varphi$. Finding a prime integral of an autonomous system in the plane is extremely important, because it is substantially equivalent to determining its trajectories:

Proposition 4.4.3. The scalar function $E$ is a prime integral for the system (4.4) if and only if the trajectories of the system are contained in the level curves of $E$. More precisely, the trajectories of (4.4) are either the equilibria (single-point trajectories) or the entire connected components of the level curves of $E$ not containing any equilibrium.

Proof. The first part of the statement is just an obvious reformulation of the definition: to say that $E$ must be constant on any solution of the system means precisely that the images of the solutions (in particular the images of the maximal ones, i.e. the trajectories) are contained in the level curves of $E$. As for the rest, if a maximal solution has a one-point trajectory then that point must be an equilibrium. Otherwise, let $C$ be a connected component of some level curve of $E$ not containing any equilibrium, take any point $(x_0, y_0) \in C$ and consider the (unique) maximal solution $\tilde\varphi : I \to \Omega$ such that $\tilde\varphi(0) = (\tilde{x}(0), \tilde{y}(0)) = (x_0, y_0)$: we already know that the image $C'$ of $\tilde\varphi$ (i.e. the trajectory covered by $\tilde\varphi$) is contained in $C$, and we want to prove that $C' = C$. If by contradiction this were not the case, $C'$ would have a boundary point $(\bar{x}, \bar{y}) \in C$, i.e. a point adherent both to $C'$ and to $C \setminus C'$. But the Cauchy problem of (4.4) with datum $(x(0), y(0)) = (\bar{x}, \bar{y}) \in C$ also has a locally unique solution, which by monotonicity (since the speed of the solution never vanishes) would cover a complete neighborhood of $(\bar{x}, \bar{y})$ in $C$, and hence by uniqueness this local solution would glue together with the solution $\tilde\varphi$ to form a solution with a strictly larger trajectory, a contradiction (since $\tilde\varphi$ was assumed to be maximal). Therefore $C' = C$.

Having understood the importance of knowing a prime integral, we are left with the problem of how to determine it. The following result gives essential information.

Proposition 4.4.4. If the vector field $(b, -a)$ is conservative, then any primitive of $(b, -a)$ is a prime integral for the system (4.4).

Proof. If the vector field $(b, -a)$ is conservative, a primitive $E : \Omega \to \mathbb{R}$ of $(b, -a)$ satisfies $\frac{\partial E}{\partial x} = b$ and $\frac{\partial E}{\partial y} = -a$: hence, given a solution $\varphi(t) = (x(t), y(t))$ of the system (4.4), we have $\frac{d}{dt}\big(E(x(t), y(t))\big) = \frac{\partial E}{\partial x}\, \dot{x} + \frac{\partial E}{\partial y}\, \dot{y} = b(x, y)\, a(x, y) + (-a(x, y))\, b(x, y) \equiv 0$; in other words $E$ is constant along the solution, i.e. by definition $E$ is a prime integral.

Now we can illustrate our strategy:

1. given an autonomous differential system in the plane (4.4) (in particular a first order scalar equation $y' = f(t, y)$ or a second order autonomous scalar equation $z'' = f(z, z')$) with initial condition $(x(t_0), y(t_0)) = (x_0, y_0)$, look for a primitive $E(x, y)$ of the vector field $(b, -a)$, if necessary with the help of an "integrating factor" (see Proposition 4.4.5 below);

2. once the integral curve $E(x, y) = k_0$ passing through $(x_0, y_0)$ is determined (where obviously $k_0 = E(x_0, y_0)$), come back to the system (4.4) and try to use the information provided by the integral curve to solve the system (possibly by making at least one of its equations decoupled), thereby determining the motion law $\varphi(t)$ which describes how the integral curve $E(x, y) = k_0$ is covered by the solution of the system.

Examples. (1) Let us consider the autonomous system $\begin{cases} \dot{x} = -2xy \\ \dot{y} = x + y^2 \end{cases}$. The only equilibrium of the system (given by $-2xy = x + y^2 = 0$) is the origin $O(0, 0)$. For the other trajectories, the vector field $(x + y^2, 2xy)$ is conservative (since it is irrotational on the simply connected set $\mathbb{R}^2$), and (twice a primitive of it, hence still a prime integral) we may take $E(x, y) = x^2 + 2xy^2$. Among the level curves $E(x, y) = k$, the one with $k = 0$ (the union of the $y$-axis and of the parabola $x = -2y^2$) contains also the equilibrium $O$, which is a single-point trajectory. From Proposition 4.4.3 we deduce that the trajectories of the system are the equilibrium $\{O\}$, then the two half-axes $\{(0, y) : y < 0\}$ and $\{(0, y) : y > 0\}$, the two half-parabolas $\{(x, y) : x = -2y^2, y < 0\}$ and $\{(x, y) : x = -2y^2, y > 0\}$, and all the other curves $x^2 + 2xy^2 = k$ with $k \ne 0$. Moreover, since we know that a non constant solution has never vanishing speed (Proposition 4.4.2(3)), such trajectories will be covered monotonically from one side to the other. • Now let us focus our attention on the Cauchy problem given by the initial conditions $(x(0), y(0)) = (-2, 1)$: by what we have just said, the trajectory covered by the maximal solution will be the half parabola $x = -2y^2$ with $y > 0$. If we were interested only in the geometry of the trajectory of our solution, we would be done; if on the other hand we are also interested in knowing the motion law, by replacing $x = -2y^2$ in the second equation of the system we can decouple the latter, obtaining $\dot{y} = -y^2$, which can be easily solved (it has separable variables) and, recalling that $y(0) = 1$, gives $y(t) = \frac{1}{t+1}$; from the first equation we then obtain $\dot{x} = -2xy = -\frac{2x}{t+1}$, also with separable variables, which, recalling that $x(0) = -2$, gives $x(t) = -\frac{2}{(t+1)^2}$. The solution of our Cauchy problem is then $(x(t), y(t)) = \left(-\frac{2}{(t+1)^2}, \frac{1}{t+1}\right)$, defined for $t \in\, ]-1, +\infty[$, an open neighborhood of $t = 0$ (which, as expected, provides a parametrization of the upper part of the parabola $x = -2y^2$, covered in the direction of decreasing $y$). (2) Let us consider the autonomous system $\begin{cases} \dot{x} = -2y \\ \dot{y} = \frac{1}{2}x \end{cases}$. The unique equilibrium of the system (given by $-2y = \frac{1}{2}x = 0$) is the origin $O(0, 0)$. The vector field $(\frac{1}{2}x, 2y)$ is clearly conservative (it has separate variables), and a primitive is $E(x, y) = \frac{1}{4}x^2 + y^2$. The level curves of $E$ are (besides the single point $O$) the ellipses of type $\frac{1}{4}x^2 + y^2 = k^2$ with $k > 0$, with semi-axes $2k$ and $k$. We then get (e.g. for $y > 0$) that $y = \frac{1}{2}\sqrt{4k^2 - x^2}$, and replacing this in the first equation $\dot{x} = -2y$ we obtain $\dot{x} = -\sqrt{4k^2 - x^2}$: setting $x(t) = 2k \cos\theta(t)$, this is equivalent to $\theta' = 1$, i.e. $\theta(t) = t + \alpha$ with $\alpha \in \mathbb{R}$, hence $x(t) = 2k \cos(t + \alpha)$. Replacing the latter in the second equation $\dot{y} = \frac{1}{2}x$ we obtain $\dot{y} = k \cos(t + \alpha)$ which, recalling that $\frac{1}{4}x^2 + y^2 = k^2$, can be integrated and yields $y(t) = k \sin(t + \alpha)$. Therefore the general solution of our system is $(x(t), y(t)) = (2k \cos(t + \alpha), k \sin(t + \alpha))$ for $k, \alpha \in \mathbb{R}$ (and this expression clearly works on all of the plane, not only for $y > 0$).
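For example (1), a quick SymPy sketch (assumed available) confirms both the motion law and the fact that the solution stays on the level curve $E = 0$:

```python
# The solution (x, y) = (-2/(t+1)^2, 1/(t+1)) solves x' = -2xy, y' = x + y^2,
# and E = x^2 + 2 x y^2 is constant (equal to 0) along it.
import sympy as sp

t = sp.symbols('t')
x = -2/(t + 1)**2
y = 1/(t + 1)
print(sp.simplify(sp.diff(x, t) - (-2*x*y)))      # 0
print(sp.simplify(sp.diff(y, t) - (x + y**2)))    # 0
print(sp.simplify(x**2 + 2*x*y**2))               # 0: the level curve E = 0
```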

The most classical and important example of a prime integral, with well-known applications in Physics, is the integral of total energy of a second order autonomous scalar equation of type $\ddot{z} = h(z)$. Setting $(x, y) = (z, \dot{z})$ and $V(z) = -\int h(z)\, dz$, we know that the equation $\ddot{z} = h(z)$ is equivalent to the system $\begin{cases} \dot{x} = y \\ \dot{y} = h(x) \end{cases}$, for which a prime integral is $E(x, y) = \frac{1}{2}y^2 + V(x)$ (a primitive of the conservative field $(h(x), -y)$, or rather of its opposite): i.e. the well-known total energy (sum of kinetic energy and potential energy)

$E(z, \dot{z}) = \frac{1}{2}\dot{z}^2 + V(z).$

The plane $\mathbb{R}^2$ of the $(z, \dot{z})$'s, which contains the level curves of $E(z, \dot{z})$, is called the phase space, and should not be confused with the plane $\mathbb{R}^2$ of the $(t, z)$'s, which contains the graphs of the solutions $z(t)$.

Figure 4.5: Level curves of the total energy of the simple pendulum $\ddot\theta = -\frac{g}{\ell}\sin\theta$ in the phase space.

Examples. (1) The equation which governs the dynamics of a simple pendulum of length $\ell$ subject only to gravity is $\ddot\theta = -\frac{g}{\ell}\sin\theta$, where $\theta(t)$ is the motion law describing the evolution of the angle $\theta$ with respect to the vertical, and $g$ is the gravitational acceleration: the integral of total energy is then $E(\theta, \omega) = \frac{1}{2}\omega^2 - \frac{g}{\ell}\cos\theta$, where $\omega = \dot\theta$ is the angular speed. The level curves of $E(\theta, \omega)$ in the phase space $(\theta, \omega)$ are shown in Figure 4.5. (2) The integral of total energy can actually be discussed in the more general framework of second order autonomous systems of type $(\ddot{z}_1, \dots, \ddot{z}_n) = h(z_1, \dots, z_n)$, where $h : \Omega \to \mathbb{R}^n$ is a conservative vector field on an open subset $\Omega \subset \mathbb{R}^n$: in this case $V : \Omega \to \mathbb{R}$ is the potential energy associated with the field (of course, as we said, in one variable this is always possible with $V(z) = -\int h(z)\, dz$). The most important example is of course the dynamics of a point of mass $m$ in $\mathbb{R}^3$ subject to conservative forces of energy $U(x, y, z)$, governed by Newton's second law $(m\ddot{x}, m\ddot{y}, m\ddot{z}) = \left(-\frac{\partial U}{\partial x}, -\frac{\partial U}{\partial y}, -\frac{\partial U}{\partial z}\right)$, with integral of total energy $E(x, y, z; \dot{x}, \dot{y}, \dot{z}) = \frac{1}{2}(\dot{x}^2 + \dot{y}^2 + \dot{z}^2) + \frac{1}{m}U(x, y, z)$, where $(\dot{x}, \dot{y}, \dot{z})$ is the velocity (in Physics it is customary to take as total energy the previous one multiplied by $m$).
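Conservation of the total energy can also be observed numerically. Here is a sketch (assuming NumPy/SciPy; the value of $g/\ell$ is illustrative) integrating the pendulum as a plane system and monitoring $E$:

```python
# Integrate (theta', omega') = (omega, -(g/l) sin theta) and check that the
# total energy E = omega^2/2 - (g/l) cos theta stays (nearly) constant.
import numpy as np
from scipy.integrate import solve_ivp

g_over_l = 9.81 / 1.0                       # illustrative value of g/l

def field(t, u):
    theta, omega = u
    return [omega, -g_over_l*np.sin(theta)]

sol = solve_ivp(field, (0, 10), [1.0, 0.0], rtol=1e-10, atol=1e-12)
E = 0.5*sol.y[1]**2 - g_over_l*np.cos(sol.y[0])
print(E.max() - E.min())                    # tiny: E is a prime integral
```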

Exercise. Given the differential equation $\ddot{x} = 2e^{2x} + 2e^x$ in the unknown $x(t)$, determine the integral of total energy, and use it to solve the Cauchy problem with data $x(0) = 0$ and $\dot{x}(0) = 2\sqrt{2}$.

Solution. In this framework we have $\ddot{x} = h(x) = 2e^{2x} + 2e^x$, and hence $V(x) = -\int h(x)\, dx = -(e^{2x} + 2e^x)$, so the integral of total energy is $E(x, \dot{x}) = \frac{1}{2}\dot{x}^2 - (e^{2x} + 2e^x)$. This integral will be constant all along the desired solution, and its value will be $E(x(0), \dot{x}(0)) = E(0, 2\sqrt{2}) = 1$: therefore the solution $x(t)$ satisfies $\frac{1}{2}\dot{x}^2 - (e^{2x} + 2e^x) = 1$, i.e. $\dot{x}^2 = 2(e^{2x} + 2e^x + 1) = 2(e^x + 1)^2$, which (since $\dot{x}$ never vanishes because the right-hand side never does, and hence remains positive as it is at $t = 0$, being $\dot{x}(0) = 2\sqrt{2} > 0$) yields $\dot{x} = \sqrt{2}\,(e^x + 1)$, and this is a first order equation with separable variables that we know how to solve. Separating the variables we get $\frac{1}{e^x + 1}\, dx = \sqrt{2}\, dt$, whose integration between $0$ and $t$ gives $\left[\log\left(\frac{e^x}{e^x + 1}\right)\right]_{x(0)}^{x(t)} = \sqrt{2}\, [\tau]_0^t$, i.e. $\log\left(\frac{e^x}{e^x + 1}\right) - \log\left(\frac{1}{2}\right) = \log\left(\frac{2e^x}{e^x + 1}\right) = \sqrt{2}\, t$, hence $e^x = \frac{e^{\sqrt{2}\, t}}{2 - e^{\sqrt{2}\, t}}$, which finally yields the desired solution $x(t) = \log\left(\frac{e^{\sqrt{2}\, t}}{2 - e^{\sqrt{2}\, t}}\right) = \sqrt{2}\, t - \log\left(2 - e^{\sqrt{2}\, t}\right)$, defined as maximal solution on the interval where $2 - e^{\sqrt{2}\, t} > 0$, i.e. on $\left]-\infty, \frac{1}{\sqrt{2}}\log 2\right[$.
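A symbolic check of this solution (a SymPy sketch, library assumed available):

```python
# Check that x(t) = sqrt(2) t - log(2 - e^{sqrt(2) t}) solves
# x'' = 2 e^{2x} + 2 e^x with x(0) = 0 and x'(0) = 2 sqrt(2).
import sympy as sp

t = sp.symbols('t')
xs = sp.sqrt(2)*t - sp.log(2 - sp.exp(sp.sqrt(2)*t))
residual = sp.diff(xs, t, 2) - (2*sp.exp(2*xs) + 2*sp.exp(xs))
print(sp.simplify(residual))                        # 0
print(xs.subs(t, 0), sp.diff(xs, t).subs(t, 0))     # 0 and 2*sqrt(2)
```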

Of course it could happen that the vector field $(b, -a)$ is not conservative, so that Proposition 4.4.4 cannot be applied to find a prime integral of the system (4.4). Should we give up on the search? Fortunately not, because we can try to fix the lack of conservativity of $(b, -a)$ by means of a suitable "integrating factor":

Proposition 4.4.5. If there exists a never vanishing $C^1$ function $\rho : \Omega \to \mathbb{R}$ (called an integrating factor) such that the vector field $(\rho\, b, -\rho\, a)$ is conservative, then any primitive of $(\rho\, b, -\rho\, a)$ is still a prime integral for the system (4.4).

Proof. If $\rho : \Omega \to \mathbb{R}$ is an integrating factor for the vector field $(b, -a)$, then a primitive $E : \Omega \to \mathbb{R}$ of $(\rho\, b, -\rho\, a)$ satisfies $\frac{\partial E}{\partial x} = \rho\, b$ and $\frac{\partial E}{\partial y} = -\rho\, a$: hence, given a solution $\varphi(t) = (x(t), y(t))$ of the system (4.4), we have $\frac{d}{dt}\big(E(x(t), y(t))\big) = \frac{\partial E}{\partial x}\, \dot{x} + \frac{\partial E}{\partial y}\, \dot{y} = \rho(x, y)\big(b(x, y)\, a(x, y) + (-a(x, y))\, b(x, y)\big) \equiv 0$; in other words $E$ is constant along the solution, i.e. by definition $E$ is still a prime integral.

Note that we required the integrating factor $\rho$ to be never vanishing on $\Omega$, although this does not seem to be necessary for the proof. However, this requirement is important in order that the multiplication by $\rho$ does not affect any essential feature of the original system (4.4), which is equivalent to the system with $(a, b)$ replaced by $(\rho a, \rho b)$ in the sense that the two systems have the same equilibria and their nonconstant solutions are the same up to an invertible change of parameter; in particular, they have exactly the same trajectories.(41)

(41) After all, it is not difficult to understand that the multiplication by $\rho$ does not affect any essential feature of the original system (4.4), since multiplying the vector field $(a, b)$ by $\rho$ means multiplying all its vectors by a real factor with constant sign, thus modifying only their length while keeping (or reversing) all their orientations.

We already know that conservativity is equivalent to irrotationality on simply connected domains (so, in particular, on balls): thus, if the vector field $(b, -a)$ is not conservative, to determine an integrating factor $\rho$ for it we should impose the irrotationality condition $\frac{\partial(\rho b)}{\partial y} = \frac{\partial(-\rho a)}{\partial x}$, i.e. the partial differential equation $\rho\, \frac{\partial b}{\partial y} + b\, \frac{\partial \rho}{\partial y} = -\rho\, \frac{\partial a}{\partial x} - a\, \frac{\partial \rho}{\partial x}$. Now, it is possible to prove(42) that such an integrating factor $\rho(x, y)$ always exists; however, it can be computed explicitly only in some particular cases, the most classical ones being as follows.

Proposition 4.4.6. In determining an integrating factor for a vector field $(p, q)$:

(a) If $\frac{1}{q}\left(\frac{\partial p}{\partial y} - \frac{\partial q}{\partial x}\right)$ does not depend on $y$, then $\rho(x) = e^{\int \frac{1}{q}\left(\frac{\partial p}{\partial y} - \frac{\partial q}{\partial x}\right)\, dx}$ is an integrating factor.

(b) If $\frac{1}{p}\left(\frac{\partial p}{\partial y} - \frac{\partial q}{\partial x}\right)$ does not depend on $x$, then $\rho(y) = e^{-\int \frac{1}{p}\left(\frac{\partial p}{\partial y} - \frac{\partial q}{\partial x}\right)\, dy}$ is an integrating factor.

(c) More generally, if there exist $u(x)$ and $v(y)$ of class $C^1$ such that $\frac{\partial p}{\partial y} - \frac{\partial q}{\partial x} = u(x)\, q - v(y)\, p$, then $\rho(x, y) = e^{\left(\int u(x)\, dx + \int v(y)\, dy\right)}$ is an integrating factor.

Proof. It is enough to prove (c), because (a) and (b) are particular cases of it. Under these hypotheses the above recalled condition $\rho\left(\frac{\partial p}{\partial y} - \frac{\partial q}{\partial x}\right) = q\, \frac{\partial \rho}{\partial x} - p\, \frac{\partial \rho}{\partial y}$ becomes $\rho\,\big(u(x)\, q - v(y)\, p\big) = q\, \frac{\partial \rho}{\partial x} - p\, \frac{\partial \rho}{\partial y}$, i.e. $q\,\big(\rho\, u(x)\big) - p\,\big(\rho\, v(y)\big) = q\, \frac{\partial \rho}{\partial x} - p\, \frac{\partial \rho}{\partial y}$, which is satisfied if we can find a function $\rho(x, y)$ such that $\left(\frac{\partial \rho}{\partial x}, \frac{\partial \rho}{\partial y}\right) = \big(\rho\, u(x), \rho\, v(y)\big)$: and the suggested $\rho(x, y) = e^{\left(\int u(x)\, dx + \int v(y)\, dy\right)}$ satisfies this request.

Figure 4.6: Integral curves of the system $(\dot{x}, \dot{y}) = (-3y^2,\, x^2 + 2x + y^3)$; the drawing highlights the level curves containing the equilibria $O(0, 0)$ and $A(-2, 0)$, as well as the Cauchy datum $(1, 1)$.

Exercise. Find the trajectories of the autonomous differential system $\begin{cases} \dot{x} = -3y^2 \\ \dot{y} = x^2 + 2x + y^3 \end{cases}$, and find the solution with Cauchy datum $(x(0), y(0)) = (1, 1)$; then provide some examples of systems equivalent to the previous one.

Solution. (Figure 4.6) The equilibria of the system are the solutions of $-3y^2 = x^2 + 2x + y^3 = 0$, i.e. $O(0, 0)$ and $A(-2, 0)$, which correspond to constant solutions. For the other solutions, note that the vector field $(b, -a) = (x^2 + 2x + y^3,\, 3y^2)$ is not conservative, so we look for an integrating factor as in Proposition 4.4.6.

(42) by using the method of characteristics for the solution of partial differential equations.
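Here is a hedged SymPy sketch (library assumed available) carrying out the computation suggested by Proposition 4.4.6(a) for this system; the integrating factor $\rho = e^x$ and the prime integral $E = e^x(x^2 + y^3)$ are the outcomes of this calculation, shown as a sketch rather than as the text's own worked solution:

```python
# For the field (b, -a) = (x^2 + 2x + y^3, 3y^2): it is not irrotational, but
# (1/q)(p_y - q_x) = 1 is independent of y, so Proposition 4.4.6(a) applies.
import sympy as sp

x, y = sp.symbols('x y')
p, q = x**2 + 2*x + y**3, 3*y**2                      # the field (b, -a)
print(sp.simplify(sp.diff(p, y) - sp.diff(q, x)))     # 3*y**2, not 0: not conservative
u = sp.simplify((sp.diff(p, y) - sp.diff(q, x))/q)    # = 1, independent of y
rho = sp.exp(sp.integrate(u, x))                      # = e^x, the integrating factor
E = sp.exp(x)*(x**2 + y**3)                           # candidate prime integral
print(sp.simplify(sp.diff(E, x) - rho*p), sp.simplify(sp.diff(E, y) - rho*q))  # 0 0
```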
