Università degli Studi di Pisa
Corso di Laurea Triennale in Matematica

"Symmetry Break" in a Minimum Problem Related to the Generalized Wirtinger Inequality

Tesi di Laurea Triennale

Candidato: Giulio Rovellini
Relatore: Prof. Massimo Gobbino
Contents

Introduction
1 Statement of the problem
  1.1 Dirichlet boundary conditions
  1.2 The Euler equation
  1.3 The null-integral condition
  1.4 Equivalent problems
2 Symmetry or not?
  2.1 Some limit cases
  2.2 Easy asymmetry results
  2.3 The case p = 1
3 An auxiliary function
  3.1 Introducing the function
  3.2 Finding the derivatives
  3.3 The asymmetric case: q > 3p
  3.4 The symmetric case: q ≤ 3p
A Useful facts in analysis
  A.1 Classical results
  A.2 Rearrangements
Bibliography
Introduction
Among the numerous Sobolev-type inequalities that allow one to estimate the $L^q$-norm of a function once the norm of its (partial) derivative(s) is known, one of the neatest (and easiest to prove) states that if $f$ is a $2\pi$-periodic $C^1$ function with null integral, then $\|f\|_{L^2} \le \|f'\|_{L^2}$, where equality holds if and only if $f$ is a linear combination of sine and cosine: this is the so-called Wirtinger inequality for functions. This relatively simple inequality can be generalized in several ways, the main one being perhaps the Poincaré-Wirtinger inequality. In the present work we will be interested in one of these generalizations, namely in the following fact: if we fix two indexes $p \ge 1$ and $q \ge 1$, every function $f \in W^{1,p}(-1,1)$ such that $\int f = 0$ satisfies
\[ \|f\|_{L^q} \le \frac{\|f'\|_{L^p}}{c_{p,q}} \]
for an adequate constant $c_{p,q} > 0$. Assuming that $c_{p,q}$ is the best (i.e. the greatest possible) constant, it is quite natural to wonder for which functions $f$ equality holds, i.e. when $\|f'\|_p = c_{p,q}\|f\|_q$. In particular, an interesting question was raised in [1]: is it true that $f$ must be an odd function? In that paper, the authors realized that the answer depends on the chosen $p$ and $q$: for every fixed $p$, a small $q$ (for example $q \le 2p$, as proved in Proposition 3.26 of the present work) gives an odd $f$, while $f$ cannot be symmetrical when $q$ is too big (see for instance Proposition 2.5).
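Before moving on, here is a quick numerical sanity check of the classical statement above (our own illustration, not part of the thesis; the sample function and grid size are arbitrary choices):

```python
import math

# Check ||f||_{L^2} <= ||f'||_{L^2} for a zero-mean 2*pi-periodic sample function.
# With f(x) = sin(x) + 0.3*cos(2x) we have f'(x) = cos(x) - 0.6*sin(2x); exactly,
# ||f||^2 = pi*(1 + 0.09) while ||f'||^2 = pi*(1 + 0.36), so the bound is strict.

def f(x):
    return math.sin(x) + 0.3 * math.cos(2 * x)

def df(x):
    return math.cos(x) - 0.6 * math.sin(2 * x)

N = 100_000
h = 2 * math.pi / N
xs = [i * h for i in range(N)]

norm_f = math.sqrt(sum(f(x) ** 2 for x in xs) * h)
norm_df = math.sqrt(sum(df(x) ** 2 for x in xs) * h)
print(norm_f < norm_df)  # prints True: f is not a pure first harmonic
```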
In the next pages we will try to establish for which $q$ this "symmetry break" occurs; the means by which we conduct this research are largely based on the classical tools of the calculus of variations. In Chapter 1 we state the problem precisely, and we start studying it by using (among other things) the Euler equation for stationary points of the functional
\[ F(u) = \frac{\|u'\|_{L^p}}{\|u\|_{L^q}}. \]
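The quotient $F$ can also be explored numerically; the following sketch (our own illustration; grid resolution and the sample profile are arbitrary choices) evaluates a discretized version of $F$ on zero-mean samples, and on the odd profile $\sin(\pi x/2)$ it returns a value close to $\pi/2$, the optimal constant of the classical case $p = q = 2$:

```python
import math

# Sketch: discretized evaluation of F(u) = ||u'||_{L^p} / ||u||_{L^q} on (-1, 1),
# after enforcing the null-integral condition on the samples.  Grid resolution
# and the sample profile are arbitrary illustrative choices.

def quotient(samples, p, q):
    n = len(samples) - 1
    h = 2.0 / n
    mean = sum(samples) / len(samples)
    u = [s - mean for s in samples]                   # null integral
    du = [(u[i + 1] - u[i]) / h for i in range(n)]    # forward differences
    norm_du = (sum(abs(d) ** p for d in du) * h) ** (1.0 / p)
    norm_u = (sum(abs(s) ** q for s in u) * h) ** (1.0 / q)
    return norm_du / norm_u

n = 4000
grid = [-1 + 2.0 * i / n for i in range(n + 1)]
odd_profile = [math.sin(math.pi * x / 2) for x in grid]
print(quotient(odd_profile, 2, 2))  # close to pi/2 ~ 1.5708 when p = q = 2
```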
In Chapter 2 a somewhat more "heuristic" approach is taken: by solving some specific cases, we get an idea of how the general solution to the problem might work. Finally, in Chapter 3 we come back to the Euler equation, and by manipulating it we are able to give a complete proof of the following fact (see Theorem 3.1, which is also the main result of the present work):

Take a function $f$ for which the generalized Wirtinger inequality is not strict; then $f$ is symmetrical with respect to the midpoint of its domain if and only if $q \le 3p$.
In many ways, our approach retraces the "historic" development of the solution to the problem. In [2], Egorov noted that the symmetry break cannot occur after $q = 4p - 1$ (see Proposition 2.6). In [4] and [5], Belloni and Kawohl raised the limit for an odd solution to $q \le 2p + 1$ (see Proposition 3.27). Finally, [3] introduced a fundamental auxiliary function (the one that gives Chapter 3 its title), which made it possible to prove that the asymmetry extends to the case $q > 3p$ (see Proposition 3.16), and suggested that precisely $q = 3p$ is the critical value we are looking for (a fact that was subsequently proved by Nazarov in [6], and of which we will give a much simpler proof than the original in Proposition 3.30).
Chapter 1
Statement of the problem
We are interested in functional inequalities of the form
\[ \|u\|_{L^q} \le \frac{\|u'\|_{L^p}}{c_W} \qquad \forall\, u \in W, \tag{1.1} \]
where $W$ is a subspace of $W^{1,p}(-1,1)$ (see Definition A.14 in the Appendix), $\|\cdot\|_{L^q}$ is the standard norm of the space $L^q$ (see Definition A.2 and Proposition A.3), and $c_W$ is a constant (which of course depends on $p$ and $q$). The optimal constant in (1.1) is the solution to the following variational minimum problem:
\[ c_W = \inf\left\{ \frac{\|u'\|_p}{\|u\|_q} : u \in W \setminus \{0\} \right\}; \tag{1.2} \]
observe that the functions minimizing (1.2) are also the functions for which inequality (1.1) is not strict.
In this first chapter we take an overall look at some of these problems: what happens when $W$ contains all functions whose integral over $[-1,1]$ is zero? What about the case $W = W_0^{1,p}$? What relations exist between the resulting constants $c_W$? Can we say anything significant about their respective minimizers?
1.1 Dirichlet boundary conditions

To begin with, we study problem (1.2) with
\[ W = W_0^{1,p}(-1,1) = \left\{ u \in W^{1,p} : u(-1) = u(1) = 0 \right\}. \]
In other words, we are investigating a minimum problem with Dirichlet boundary conditions:
\[ c_d(p,q) = \inf\left\{ \frac{\|u'\|_p}{\|u\|_q} : u \in W^{1,p}(-1,1),\ u(-1) = u(1) = 0 \right\}. \tag{1.3} \]
Proving the following statement is the main purpose of the present and the next section:
Theorem 1.1. Consider problem (1.3); then, for every choice of $1 < p < +\infty$ and $1 < q < +\infty$:

1. (existence of minimizer) the constant $c_d(p,q)$ is greater than zero, and there exists a $u_d \in W_0^{1,p}(-1,1)$ such that $\|u_d\|_q = 1$ and $\|u_d'\|_p = c_d$;

2. (uniqueness of minimizer) the function $u_d$ described in the previous point is essentially unique, meaning that if $\tilde u_d \ne u_d$ is a second function satisfying $\|\tilde u_d\|_q = 1$ and $\|\tilde u_d'\|_p = c_d$, then $\tilde u_d = -u_d$;

3. (symmetry) the minimizer $u_d$ is even, and (strictly) monotone on both $[-1,0]$ and $[0,1]$;

4. (Euler equation) the function $u_d'|u_d'|^{p-2}$ is weakly differentiable, and $u_d$ satisfies on $[-1,1]$ the differential equation
\[ (u'|u'|^{p-2})' = -\lambda_d\, u|u|^{q-2}, \tag{1.4} \]
where $\lambda_d = c_d^p$;

5. (Erdmann equation) the function $u_d$ is of class $C^1$, and solves
\[ \frac{|\dot u|^p}{p^*} + \lambda_d\,\frac{|u|^q}{q} = \lambda_d\,\frac{R}{2}, \tag{1.5} \]
where $\frac1p + \frac1{p^*} = 1$ and $R = \frac1{p^*} + \frac1q$.
Proof (of point 1). A standard proof of the existence of minimizers for a problem like (1.3) is based on some classical results of functional analysis. Call $W$ the subset of $W_0^{1,p}(-1,1)$ containing all functions such that $\|u\|_q = 1$, and for every $u \in W$ set
\[ F(u) = \int_{-1}^1 |u'|^p. \]
Let us equip $W^{1,p}$ with a suitable notion of convergence: we say that a sequence $(u_n)_{n\in\mathbb N}$ converges to $u_\infty$ if and only if
\[ u_n' \rightharpoonup u_\infty' \ \text{in } L^p \qquad\text{and}\qquad u_n \to u_\infty \ \text{in } L^\infty, \]
where the first is weak convergence in $L^p$ (see Definition A.5 in the Appendix), while the second is uniform convergence. This convergence has some "good properties" with respect to the functional $F$:
• (Closed domain) $F$'s domain $W$ is closed in $W^{1,p}$ when the latter is equipped with the described convergence (i.e. if $u_n \in W$ for all $n$, and $u_n \to u_\infty$, then $u_\infty \in W$ too). This is because $W$ is already closed with respect to uniform convergence.

• (Lower semi-continuity) Because the function $g(x) = |x|^p$ is convex for every $p > 1$, with derivative $g'(x) = p|x|^{p-2}x$ (hence $|x_0 + h|^p \ge |x_0|^p + p\,h\,x_0|x_0|^{p-2}$ for every possible choice of $x_0$ and $h$), we can write for every converging sequence $u_n \to u_\infty$
\[ \liminf_n F(u_n) = \liminf_n \int |u_\infty' + (u_n' - u_\infty')|^p \ge \int |u_\infty'|^p + \liminf_n \int p\,(u_n' - u_\infty')\,u_\infty'|u_\infty'|^{p-2} = F(u_\infty), \]
where the last liminf (actually a limit) is zero because $u_n' \rightharpoonup u_\infty'$ in $L^p$ and the function $u_\infty'|u_\infty'|^{p-2}$ belongs to $L^{p^*}(-1,1)$ (where $p^* = \frac{p}{p-1}$ stands for the conjugate of the index $p$); this means that $F$ is lower semi-continuous.
• (Compact sublevels) For every $M > 0$, the sublevel $F^{-1}([0,M])$ is sequentially compact, i.e. if for every $n$ we take a $u_n \in F^{-1}([0,M])$, then we can find a subsequence $(u_{n_k})_{k\in\mathbb N}$ and a $u_\infty \in W^{1,p}$ such that $u_{n_k} \to u_\infty$. Indeed, suppose $F(u_n) \le M$ for all $n$. Then all the $u_n'$ belong to the closed ball
\[ B_{M^{1/p}} = \left\{ v \in L^p(-1,1) : \|v\|_{L^p} \le M^{1/p} \right\}, \]
which, thanks to the Banach-Alaoglu theorem (see Proposition A.6), we know to be weakly sequentially compact: consequently, there exist a $v \in L^p(-1,1)$ and a subsequence $(u_{n_k})_{k\in\mathbb N}$ such that $u_{n_k}' \rightharpoonup v$ in $L^p$. Now, the sequence $(u_{n_k})_{k\in\mathbb N}$ also satisfies the hypotheses of the Arzelà-Ascoli theorem (Proposition A.27): this is because the $u_n$ are equi-$\frac{1}{p^*}$-Hölder (with common constant $M^{1/p}$: see Proposition A.15), and $u_n(1) = 0$ for every $n$. Therefore $(u_{n_k})_{k\in\mathbb N}$ has a further subsequence which converges uniformly to an adequate function $u_\infty \in C^{0,\frac{1}{p^*}}([-1,1])$, i.e. we can assume $u_{n_k} \to u_\infty$ in $L^\infty$. We are left to prove that $v$ is the weak derivative of $u_\infty$: this, however, is the content of Remark A.11.
These three facts allow us to conclude that $\frac{\|u'\|_p}{\|u\|_q}$ attains a minimum on $W_0^{1,p}(-1,1)$. Indeed, let $(u_n)_{n\in\mathbb N}$ be a sequence of functions such that
\[ \frac{\|u_n'\|_p}{\|u_n\|_q} \to c_d. \]
Of course, we can assume $\|u_n\|_q = 1$ for every $n$ (if that is not the case, we can substitute each $u_n$ with its multiple $u_n/\|u_n\|_q$); the sequence $F(u_n) = \|u_n'\|_p^p$ then converges to $c_d^p$; in particular it is bounded, hence we can find a subsequence $(u_{n_k})_{k\in\mathbb N}$ and a $u_\infty$ such that $u_{n_k} \to u_\infty$. Well, $u_\infty$ is precisely the minimizer $u_d$ we were looking for: indeed, $u_\infty \in W$ (because $W$ is closed), and $F$'s lower semi-continuity allows us to write
\[ \|u_\infty'\|_p^p = F(u_\infty) \le \liminf F(u_n) = c_d^p, \]
which along with $\|u_\infty'\|_p \ge c_d$ means $\|u_\infty'\|_p = c_d$.
Proofs of points 2-5 will be provided in Section 1.2.
Remark 1.2. Using some theory of rearrangements (see Appendix A.2), we can easily prove that problem (1.3) admits at least one minimizer $u_d$ satisfying point 3 of Theorem 1.1. Indeed, suppose a minimizer $u_d$ were not even. Take its symmetric rearrangement $\widehat{u_d}$: Proposition A.32 then assures that $\int |\widehat{u_d}'|^p \le \int |u_d'|^p$, while $\int |\widehat{u_d}|^q = \int |u_d|^q$, hence
\[ \frac{\|\widehat{u_d}'\|_p}{\|\widehat{u_d}\|_q} \le \frac{\|u_d'\|_p}{\|u_d\|_q}. \]
The even minimizer is then given by the symmetric rearrangement of $|u_d|$.
Remark 1.3. We can use the $L^q$-norm's homogeneity to express (1.3) in a form that only involves "classic" integral functionals; indeed, the equality
\[ \lambda_d = \min\left\{ \int_{-1}^1 |u'|^p : u \in W^{1,p}(-1,1),\ u(-1) = u(1) = 0,\ \int |u|^q = 1 \right\} \tag{1.6} \]
holds (where $\lambda_d$ is the same constant as in point 4 of Theorem 1.1), and every $u_d$ minimizing (1.6) also minimizes (1.3). The constant $\lambda_d$ is sometimes called the first Dirichlet eigenvalue of problem (1.2), because of the case $p = q = 2$. Indeed, in that case an interesting approach to the problem is that of spectral theory in Hilbert spaces. Consider the linear operator $T : W \to L^2(-1,1)$ defined by $Tu = -u''$, where
\[ W = \left\{ u \in W^{2,2}(-1,1) : u(-1) = u(1) = 0 \right\} \subseteq L^2(-1,1). \]
The operator $T$ is actually symmetrical on $W$: the equality
\[ \langle Tu, v\rangle = \int -u''v = \int u'v' = \int -uv'' = \langle u, Tv\rangle \]
holds for every $u, v \in W$. It is easy to find $T$'s eigenvectors in $W$: they are the functions $u_n(x) = \cos\left(\frac{n\pi}{2}x\right)$ (for $n$ odd) and $u_n(x) = \sin\left(\frac{n\pi}{2}x\right)$ (for $n$ even); since $T$ is symmetrical, it does not surprise us that $(u_n)_{n\in\mathbb N}$ is actually a Hilbert basis for $L^2(-1,1)$. We now want to show that the "first" eigenvalue $\lambda_1$ actually equals $\lambda_d$. We can achieve this by proving that the first eigenvector $u_1$ minimizes the expression $\langle Tu, u\rangle = \int u'^2$ among all unitary $u$, i.e. that
\[ \lambda_1 = \langle Tu_1, u_1\rangle \le \langle Tu, u\rangle \]
is true for every $u$ such that $\|u\| = 1$. Well, if $u$ is a finite linear combination $u = \sum_{i=1}^k a_iu_i$, then the proof is simple enough:
\[ \langle Tu, u\rangle = \Big\langle T\sum_i a_iu_i,\ \sum_i a_iu_i\Big\rangle = \Big\langle \sum_i a_i\lambda_iu_i,\ \sum_i a_iu_i\Big\rangle = \sum_i \lambda_i a_i^2 \ge \lambda_1. \]
Take now a generic $u$, and let $v_n \in \mathrm{Span}(u_1, \dots, u_k, \dots)$ be the $n$-th term of a sequence that approximates $Tu$, i.e. $v_n \to -u''$ in $L^2$. Then take $w_n \in W$ the function identified by
\[ \begin{cases} v_n = -w_n'' \\ w_n(-1) = w_n(1) = 0 \end{cases}; \]
applying the Arzelà-Ascoli theorem twice, we can assume $w_n \to u$ in $L^\infty$. The function $w_n$ is a linear combination of $u_1, \dots, u_k, \dots$; this implies
\[ \lambda_1 \le \Big\langle T\frac{w_n}{\|w_n\|},\ \frac{w_n}{\|w_n\|}\Big\rangle \]
for every $n \ge 1$. Since $\|w_n\| \to \|u\| = 1$, the limit of the right-hand side equals $\langle Tu, u\rangle$: this concludes the argument.
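The eigenvector claim can be sanity-checked numerically; this little sketch (an illustration, not part of the thesis; tolerances are arbitrary) verifies that $u_1(x) = \cos(\pi x/2)$ satisfies the Dirichlet boundary conditions and the eigenvalue equation with eigenvalue $(\pi/2)^2$, which is the value of $\lambda_d$ when $p = q = 2$:

```python
import math

# Sketch: check that u1(x) = cos(pi*x/2) is a Dirichlet eigenvector of T(u) = -u''
# on [-1, 1] with eigenvalue (pi/2)^2, i.e. the value of lambda_d when p = q = 2.
# Second differences with step h approximate u''; tolerances are illustrative.

lam = (math.pi / 2) ** 2
u = lambda x: math.cos(math.pi * x / 2)

h = 1e-4
def second_diff(f, x):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

assert abs(u(-1)) < 1e-12 and abs(u(1)) < 1e-12          # Dirichlet boundary values
for x in [-0.7, -0.2, 0.0, 0.4, 0.9]:
    assert abs(-second_diff(u, x) - lam * u(x)) < 1e-4   # -u'' = lam * u
print("cos(pi*x/2) is the first Dirichlet eigenvector")
```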
1.2 The Euler equation

In Theorem 1.1 we introduced two fundamental differential equations, (1.4) and (1.5). In the present section we prove that minimizers $u_d$ of problem (1.6) satisfy these equations, along with some other important properties.
Proof (of points 4 and 5 in Theorem 1.1). Let $F : W_0^{1,p}(-1,1) \to \mathbb R$ be the functional
\[ F(u) = \frac{\|u'\|_p}{\|u\|_q}, \]
and for every $v \in C^\infty_{\rm cpt}(-1,1)$ consider $\varphi_v(t) = F(u_d + tv)$. If we call $f$ and $g$ the functions defined on a neighborhood of zero as
\[ f(t) = \int_{-1}^1 |u_d' + tv'|^p, \qquad g(t) = \int_{-1}^1 |u_d + tv|^q, \]
then we can apply to both $f$ and $g$ Proposition A.13 in the Appendix: we obtain that $f$ and $g$ are weakly differentiable with derivatives given by
\[ f'(t) = p\int |u_d' + tv'|^{p-2}(u_d' + tv')\,v', \qquad g'(t) = q\int |u_d + tv|^{q-2}(u_d + tv)\,v. \]
Now apply Proposition A.1 to $f'$ and $g'$: we conclude that $f$ and $g$ are of class $C^1$. But then $\varphi_v$ is also of class $C^1$: its derivative at zero is
\[ \varphi_v'(0) = \left(\frac{f^{1/p}}{g^{1/q}}\right)'(0) = \frac{\frac1p f^{\frac1p-1}(0)f'(0)\,g^{\frac1q}(0) - \frac1q g^{\frac1q-1}(0)g'(0)\,f^{\frac1p}(0)}{g^{2/q}(0)} = \lambda_d^{\frac1p}\int \lambda_d^{-1}|u_d'|^{p-2}u_d'\,v' - |u_d|^{q-2}u_d\,v. \]
In other words, since $\varphi_v'(0) = 0$ (indeed, zero is a minimum point of $\varphi_v$), we have proved that the equality
\[ \int |u_d'|^{p-2}u_d'\,v' = \lambda_d\int |u_d|^{q-2}u_d\,v \]
must hold for every $v \in C^\infty_{\rm cpt}(-1,1)$: by the definition of weak derivative (see A.7), this means that $|u_d'|^{p-2}u_d'$ is weakly differentiable on its domain, and that its derivative equals
\[ (|u_d'|^{p-2}u_d')' = -\lambda_d|u_d|^{q-2}u_d, \]
i.e. the thesis of point 4.
We must now prove point 5. First of all, we want to show that any $u_d$ satisfying (1.4) is of class $C^1$, i.e. that $u_d' \in C^0([-1,1])$. Well, $u_d$ is continuous, hence the right-hand side of (1.4) is continuous on $[-1,1]$: this implies $u_d'|u_d'|^{p-2} \in C^1([-1,1])$. Now call $h$ the function defined by $h(u) = u|u|^{p-2}$, so as to have $u_d' = h^{-1}(u_d'|u_d'|^{p-2})$: since $h^{-1}$ is a continuous function, we conclude that $u_d'$ is indeed continuous. Now, in order to obtain (1.5), we can multiply both sides of equation (1.4) by $\dot u$: this gives us
\[ \dot u\,(\dot u|\dot u|^{p-2})' = -\lambda_d\,\dot u\,u|u|^{q-2}. \tag{1.7} \]
The function $q\dot u\,u|u|^{q-2}$ is the derivative of $|u|^q$, while $p^*\dot u\,(\dot u|\dot u|^{p-2})'$ is the (weak) derivative of $|\dot u|^p$:
\[ (|\dot u|^p)' = \left(\big|\dot u|\dot u|^{p-2}\big|^{\frac{p}{p-1}}\right)' = \frac{p}{p-1}\,\big|\dot u|\dot u|^{p-2}\big|^{\frac{p}{p-1}-2}\cdot \dot u|\dot u|^{p-2}\cdot(\dot u|\dot u|^{p-2})' = p^*\dot u\,(\dot u|\dot u|^{p-2})'. \]
This means that we are able to take both sides of equation (1.7) and replace them with their antiderivatives: we obtain
\[ \frac{|\dot u|^p}{p^*} + \lambda_d\,\frac{|u|^q}{q} = c \]
for an adequate constant $c$. This is the required equation once we observe that $c$ can be explicitly determined by integrating both sides between $-1$ and $1$:
\[ 2c = \int_{-1}^1 \frac{|\dot u|^p}{p^*} + \lambda_d\,\frac{|u|^q}{q} = \lambda_d\left(\frac{1}{p^*} + \frac{1}{q}\right) = \lambda_d R. \]
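For $p = q = 2$ everything is explicit, and the identity just obtained can be verified directly (a sketch for illustration; $u$, $\lambda_d$ and $R$ below are the explicit values of this special case):

```python
import math

# Sketch (p = q = 2): the Dirichlet minimizer is u(x) = cos(pi*x/2) (normalized so
# that ||u||_2 = 1), with lambda_d = (pi/2)^2, p* = 2 and R = 1/p* + 1/q = 1.
# We check the Erdmann identity |u'|^p/p* + lambda_d |u|^q/q = lambda_d * R / 2.

lam = (math.pi / 2) ** 2
u = lambda x: math.cos(math.pi * x / 2)
du = lambda x: -math.pi / 2 * math.sin(math.pi * x / 2)

R = 1.0
for x in [-1.0, -0.5, 0.0, 0.3, 1.0]:
    lhs = du(x) ** 2 / 2 + lam * u(x) ** 2 / 2
    assert abs(lhs - lam * R / 2) < 1e-12
print("Erdmann identity holds at all sample points")
```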
Remark 1.4 (regularity of minimizers). We can be slightly more precise about $u_d$'s regularity: if $u_d \in W^{1,p}(-1,1)$ satisfies (1.4), and $1 < p \le 2$, then $u_d \in C^2([-1,1])$; if $2 < p < \infty$, then $u_d \in C^{1,\frac{1}{p-1}}([-1,1])$. The reason for this is the regularity of the function $h(u) = u|u|^{p-2}$ used in the proof above. If $p \le 2$, then $h^{-1} \in C^1$: this means $u_d' \in C^1([-1,1])$, i.e. $u_d \in C^2([-1,1])$. If $p > 2$, on the other hand, we can apply Proposition A.24: indeed, $u_d'$ is continuous, and $|u_d'|^{p-1} = \big|u_d'|u_d'|^{p-2}\big|$ is Lipschitz, hence $u_d'$ is $\frac{1}{p-1}$-Hölder.
Remark 1.5. The minimizer $u_d$ is non-constant on every subinterval of $[-1,1]$. This is, of course, because no constant function can satisfy both (1.4) and (1.5) at the same time.
Proof (of points 2 and 3 in Theorem 1.1). Take $u_d$ a minimizer for the problem: we assume that $M = \max u_d > 0$. Call $x_0$ a point such that $u_d(x_0) = M$: then $u_d$ is increasing on $[-1, x_0]$ (if it were not so, the norm of the derivative would be strictly improved by substituting $u_d$ with the symmetric rearrangement of $|u_d|$, see Propositions A.32 and A.34 in the Appendix), and decreasing on $[x_0, 1]$. Observe that $\dot u_d(x_0) = 0$, which implies $M = \left(\frac{cq}{\lambda_d}\right)^{1/q}$ because of (1.5); besides, $x_0$ is the only point at which $u_d$ reaches $M$: this is because $u_d$ is never constant (see Remark 1.5). Now, $u_d$ solves on $[-1, x_0)$ the Cauchy problem
\[ \begin{cases} u' = \left(p^*\left(c - \frac{\lambda_d}{q}|u|^q\right)\right)^{1/p} \\ u(-1) = 0 \end{cases}; \tag{1.8} \]
so does, however, $u_d(-x)$ on the interval $[-1, -x_0)$. The Cauchy-Lipschitz uniqueness theorem (see Proposition A.26) then assures that $u_d(x)$ and $u_d(-x)$ must coincide on $[-1, -|x_0|)$; in particular, $u_d(x_0) = u_d(-x_0) = M$, which (because of the observation made above about $x_0$ being the only maximum point) implies $x_0 = 0$. We have therefore obtained $u_d(x) = u_d(-x)$ for every $0 < x < 1$, i.e. $u_d$ is even.

The same proof works for uniqueness. If two positive minimizers $u_d$ and $\tilde u_d$ are given, then they must both solve (1.8) on $[-1, 0)$, so they are actually the same function (Cauchy-Lipschitz); an analogous reasoning applies to $(0, 1]$. Problem (1.6) therefore has only one positive minimizer; the second minimizer is given by $-u_d$.
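The Cauchy-problem argument can be illustrated numerically for $p = q = 2$, where (1.8) reduces to $u' = \frac{\pi}{2}\sqrt{1 - u^2}$, $u(-1) = 0$, and the solution is the explicit even minimizer $\cos(\pi x/2)$. Forward Euler, the step size and the checkpoints below are illustrative choices:

```python
import math

# Sketch (p = q = 2): integrate the Cauchy problem (1.8), which here reads
#   u' = (pi/2) * sqrt(1 - u^2),  u(-1) = 0,
# by forward Euler, and compare with the explicit even minimizer cos(pi*x/2)
# (equal to sin(pi*(x+1)/2)).  Step size and checkpoints are illustrative.

step = 1e-5
u, x = 0.0, -1.0
targets = [-0.8, -0.6, -0.4, -0.2]
samples = {}
while x < -0.2 + step:
    for t in targets:
        if abs(x - t) < step / 2:
            samples[t] = u
    u += step * (math.pi / 2) * math.sqrt(max(0.0, 1 - u * u))
    x += step

for t, val in samples.items():
    assert abs(val - math.cos(math.pi * t / 2)) < 1e-2
print("the shooting solution matches the even minimizer")
```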
1.3 The null-integral condition

In this section we replace the Dirichlet boundary condition of Section 1.1 with an integral condition. Set
\[ c_i(p,q) = \min\left\{ \frac{\|u'\|_p}{\|u\|_q} : u \in W^{1,p}(-1,1),\ \int u = 0 \right\}; \tag{1.9} \]
then Theorem 1.1 can be adapted as follows:

Theorem 1.6. Problem (1.9) admits at least one minimizer. In addition, if $u_i$ is a minimizer such that $\|u_i\|_q = 1$, then the following facts are true:

1. (Euler equation) the function $u_i'|u_i'|^{p-2}$ is weakly differentiable, and $u_i$ satisfies on $[-1,1]$ the differential equation
\[ (u'|u'|^{p-2})' = -\lambda_i\, u|u|^{q-2} + \beta, \tag{1.10} \]
where $\lambda_i = c_i^p$ and $\beta$ is an adequate constant;

2. (Erdmann equation) the function $u_i$ is of class $C^1$, and solves
\[ \frac{|\dot u|^p}{p^*} + \lambda_i\,\frac{|u|^q}{q} = \beta u + \lambda_i\,\frac{R}{2}, \tag{1.11} \]
where $\frac1p + \frac1{p^*} = 1$ and $R = \frac1{p^*} + \frac1q$;

3. (Neumann boundary condition) the derivative $\dot u_i$ equals zero at the endpoints of its domain;

4. (monotonicity) the minimizer $u_i$ is strictly monotone on $[-1,1]$.
Proof (of points 1-3). We can retrace the proofs of points 1, 4 and 5 in Theorem 1.1, with only a few little adjustments; the only step at which we ought to proceed with a certain care is the introduction of the constant $\beta$. When adapting the proof of point 4, the function $v$ must be taken so as to have a null integral over $[-1,1]$; in other words, we discover that the equality
\[ \int |u_i'|^{p-2}u_i'\,v' = \lambda_i\int |u_i|^{q-2}u_i\,v \]
holds for every $v \in C^\infty_{\rm cpt}(-1,1)$ such that $\int v = 0$. If we apply the content of Remark A.10, we conclude that $u_i'|u_i'|^{p-2}$ is still weakly differentiable, and that the expression
\[ (u_i'|u_i'|^{p-2})' + \lambda_i\, u_i|u_i|^{q-2} \]
is constant: we call $\beta$ this constant.
After finding that $u_i$ is of class $C^1$, we still have to prove that $u_i$ satisfies the Neumann boundary conditions $\dot u_i(-1) = \dot u_i(1) = 0$. To see this, we must once again retrace the proof of point 4 in Theorem 1.1, this time taking a $v \in C^\infty([-1,1])$ such that $\int v = 0$, $v(-1) = 0$, and $v(1) = 1$. Once we get to
\[ \int \dot v\,\dot u_i|\dot u_i|^{p-2} = \lambda_i\int v\,u_i|u_i|^{q-2}, \]
we integrate the left-hand side by parts, and conclude that
\[ v(1)\,\dot u_i(1)|\dot u_i(1)|^{p-2} - \int v\,(\dot u_i|\dot u_i|^{p-2})' = \lambda_i\int v\,u_i|u_i|^{q-2}, \]
hence, because $u_i$ solves (1.10),
\[ \dot u_i(1)|\dot u_i(1)|^{p-2} = \int \beta v = 0, \]
i.e. $\dot u_i(1) = 0$. The same reasoning applies to $\dot u_i(-1)$.
Proof (of point 4). Assume $u_i(-1) < 0$ (if this is not the case, substitute $u_i$ with its opposite), and consider the Cauchy problem
\[ \begin{cases} u' = \left(p^*\left(\beta u + c - \frac{\lambda_i}{q}|u|^q\right)\right)^{1/p} \\ u(0) = 0 \end{cases}; \tag{1.12} \]
we intend to prove that, if we call $u_0 : (a,b) \to (u_1, u_2)$ its maximal solution ($u_1 < 0 < u_2$ being the two roots of the equation $\beta u + c = \frac{\lambda_i}{q}|u|^q$), then $u_i$ actually equals $u_0\left(x + \frac{a+b}{2}\right)$ (which is of course an increasing function).
First of all, observe this: between two zeros of $u_i$ there must always be a stationary point; vice versa, between two stationary points of $u_i$ there must always be at least one zero (this is because $u_i$ is never constant, and if $x$ is stationary then either $u_i(x) = u_1$ or $u_i(x) = u_2$). Now, the zeros of $u_i$ are isolated: let us call them $x_1 < x_2 < \dots < x_n$, while $y_0 = -1 < y_1 < \dots < y_{n-1} < y_n = 1$ will be the stationary points of the minimizer. The sequence $u_i(y_k)$ then alternates between two values; since we assumed $u_i(-1) < 0$, we will have
\[ u_i(y_0) = u_1, \quad u_i(y_1) = u_2, \quad u_i(y_2) = u_1 \]
and so on according to the parity of the index $k$. Note that $u_i$ is monotone on every interval $[y_{k-1}, y_k]$.
Now, if $k$ is odd, the function $u_i(x + x_k)$ solves (1.12) on the interval $(y_{k-1} - x_k,\ y_k - x_k)$; it is actually the maximal solution to the problem. This means that for every odd $k$ the equalities
\[ y_{k-1} - x_k = a, \qquad y_k - x_k = b, \qquad u_i(x + x_k) = u_0(x) \]
are true; in particular,
\[ y_k - y_{k-1} = b - a, \quad \int_{y_{k-1}}^{y_k} u_i = \int_a^b u_0, \quad \int_{y_{k-1}}^{y_k} |u_i|^q = \int_a^b |u_0|^q, \quad \int_{y_{k-1}}^{y_k} |u_i'|^p = \int_a^b |u_0'|^p. \tag{1.13} \]
If $k$ is even, we can reason similarly, since the function $u_i(x_k - x)$ solves (1.12) on the interval $(x_k - y_k,\ x_k - y_{k-1})$: consequently, relations (1.13) still hold.

Since equalities (1.13) are true for every $k$, by adding them together we get
\[ n(b-a) = y_n - y_0 = 2, \quad n\int_a^b u_0 = \int_{-1}^1 u_i = 0, \quad n\int_a^b |u_0|^q = \int_{-1}^1 |u_i|^q = 1, \quad n\int_a^b |u_0'|^p = \int_{-1}^1 |u_i'|^p = \lambda_i. \tag{1.14} \]
Consider the function
\[ \tilde u(x) = u_0\left(\frac{x}{n} + \frac{a+b}{2}\right); \]
then (1.14) means that
\[ \int_{-1}^1 \tilde u = 0, \qquad \int |\tilde u|^q = 1, \qquad \int |\tilde u'|^p = \frac{\lambda_i}{n^p}. \]
Since $\lambda_i$ is the minimum, however, one can only conclude $n = 1$, i.e. $u_i = \tilde u$ is monotone.

Remark 1.7. An alternate proof of point 4 is based on rearrangements. Indeed, suppose $u$ is not monotone: then its increasing rearrangement $\tilde u$ is such that $\|\tilde u\|_q = \|u\|_q$ and $\|\tilde u'\|_p < \|u'\|_p$ (see Proposition A.34), hence $u$ cannot be a minimizer for (1.9).
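The scaling step behind this conclusion can be written out explicitly; with the change of variables $y = \frac{x}{n} + \frac{a+b}{2}$ and $n(b-a) = 2$ from (1.14), one gets
\[ \int_{-1}^1 |\tilde u'(x)|^p\,dx = \frac{1}{n^p}\int_{-1}^1 \left|u_0'\Big(\frac{x}{n} + \frac{a+b}{2}\Big)\right|^p dx = \frac{1}{n^p}\cdot n\int_a^b |u_0'(y)|^p\,dy = \frac{\lambda_i}{n^p}, \]
where the last step uses $n\int_a^b |u_0'|^p = \lambda_i$ from (1.14); the computations for $\int\tilde u$ and $\int|\tilde u|^q$ are analogous (without the factor $n^{-p}$, so those constraints are preserved exactly).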
Remark 1.8. As in Remark 1.3, observe that the constant $\lambda_i$ that appears in the Euler equation (1.10) solves a minimum problem which is equivalent to (1.9):
\[ \lambda_i = \min\left\{ \int_{-1}^1 |u'|^p : u \in W^{1,p}(-1,1),\ \int u = 0,\ \int |u|^q = 1 \right\}. \tag{1.15} \]
If $p = q = 2$, note $\lambda_i$'s connection to the operator $Tu = -u''$. Indeed, consider $T$ on the space
\[ W = \left\{ u \in W^{2,2}(-1,1) : u'(-1) = u'(1) = 0 \right\}. \]
As in Remark 1.3, $T$ is linear and symmetrical, and its eigenvectors $u_n(x) = \cos\left(\frac{n\pi}{2}x\right)$ (for $n$ even) and $u_n(x) = \sin\left(\frac{n\pi}{2}x\right)$ (for $n$ odd) form a Hilbert basis of $L^2$. The first eigenvector $u_0$ of $T$ is a constant function, which corresponds to the eigenvalue $\lambda_0 = 0$; the second eigenvector $u_1$, on the other hand, minimizes the expression $\langle Tu, u\rangle$ among all unitary vectors of the subspace $u_0^\perp \cap W$: the proof is the same as in Remark 1.3, although this time $w_n$ must be taken so as to solve
\[ \begin{cases} v_n = -w_n'' \\ w_n'(-1) = w_n'(1) = 0 \\ \int w_n = 0 \end{cases}. \]
In other words,
\[ \lambda_1 = \langle Tu_1, u_1\rangle = \min\left\{ \langle Tu, u\rangle : u \in W,\ \|u\| = 1,\ \langle u, u_0\rangle = 0 \right\} = \min\left\{ \int_{-1}^1 u'^2 : u \in W^{2,2}(-1,1),\ \int u^2 = 1,\ \int u = 0 \right\} = \lambda_i, \]
i.e. $\lambda_i$ is the second Neumann eigenvalue of the operator $T$.
Remark 1.8 above means that the function $u_1(x) = \sin\left(\frac{\pi}{2}x\right)$ is a "prototype" for minimizers $u_i$. A natural question subsequently arises: is $u_i$ always an odd function? (I.e., is it true that $u_i(-x) = -u_i(x)$ for all $x$?) Well, unlike the case with the Dirichlet boundary condition (in which point 3 of Theorem 1.1 assures us of each minimizer's symmetry), this time the answer is not quite as simple as one might expect; indeed, $u_i$'s symmetry with respect to the origin actually depends on the $p, q$ we have fixed in the first place. We will return to this interesting problem in Chapter 2; in the meantime, observe this:

Proposition 1.9. The minimizer $u_i$ of (1.9) is odd if and only if the constant $\beta$ in the Euler equation (1.10) is null.
Proof. Retrace the proof of Theorem 1.6, point 4. If $\beta \ne 0$, then $u_i(-1) = u_1 \ne -u_2 = -u_i(1)$, hence $u_i$ is not odd. Vice versa, suppose $\beta = 0$: then $u_0$ is odd (both $u_0(x)$ and $-u_0(-x)$ are maximal solutions to (1.12), so they must coincide), and $u_i = u_0$.
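For $p = q = 2$ this is consistent with the explicit minimizer: the sketch below (an illustration; step size and tolerances are arbitrary) checks that $u_i(x) = \sin(\pi x/2)$ leaves a vanishing residual $\beta = u'' + \lambda_i u$ in the Euler equation (1.10):

```python
import math

# Sketch (p = q = 2): the odd minimizer u_i(x) = sin(pi*x/2) should give beta = 0
# in the Euler equation (1.10), i.e. u'' + lambda_i * u = 0 with lambda_i = (pi/2)^2.
# We evaluate the residual by second differences at a few sample points.

lam = (math.pi / 2) ** 2
u = lambda x: math.sin(math.pi * x / 2)

h = 1e-4
for x in [-0.8, -0.3, 0.0, 0.5, 0.9]:
    upp = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2
    beta = upp + lam * u(x)  # residual of (1.10); vanishes for the odd minimizer
    assert abs(beta) < 1e-4
print("beta = 0 (numerically) for p = q = 2")
```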
1.4 Equivalent problems

Before moving on to the question of $u_i$'s symmetry, let us stop a minute to briefly recap how the various problems we have considered so far relate to each other. We have already observed that (1.2) is equivalent to
\[ \lambda_W = \inf\left\{ \int_{-1}^1 |u'|^p : u \in W,\ \int |u|^q = 1 \right\}: \tag{1.16} \]
indeed, $c_W = \lambda_W^{1/p}$, and the minimizers are essentially the same for the two problems, meaning that $u_0$ minimizes (1.2) if and only if its "normalized" version $\frac{u_0}{\|u_0\|_q}$ minimizes (1.16). Note that in (1.16) we can substitute the condition $\int |u|^q = 1$ with $\int |u|^q \ge 1$, since a minimizer $u_0$ of this alternate minimum problem cannot exist unless $\int |u_0|^q = 1$ (if $\int |u_0|^q > 1$, the $L^p$-norm of the derivative would decrease as soon as $u_0$ was rescaled so as to have a unitary $L^q$-norm).
The constants $c_d$ and $c_i$ we obtain from (1.2) for $W = W_0^{1,p}(-1,1)$ and
\[ W = \left\{ u \in W^{1,p}(-1,1) : \int u = 0 \right\} \]
respectively are somewhat related, as stated in the following proposition:

Proposition 1.10. For every choice of $1 < p < \infty$ and $1 \le q \le \infty$, the constant $c_i$ of (1.9) is not greater than the constant $c_d$ of (1.3). Besides, $c_i = c_d$ holds if and only if (1.9) admits at least one odd minimizer.
Proof. Since any $u_d$ minimizing (1.3) is even, the function
\[ \tilde u_d(x) = \begin{cases} u_d(1-x) & \text{if } x \ge 0 \\ -u_d(1+x) & \text{if } x < 0 \end{cases} \]
is odd, and is a "good" (or rather "natural") candidate for minimizing (1.9). Now observe that $\|\tilde u_d'\|_p = \|u_d'\|_p$ and $\|\tilde u_d\|_q = \|u_d\|_q$: this means
\[ c_i \le \frac{\|\tilde u_d'\|_p}{\|\tilde u_d\|_q} = \frac{\|u_d'\|_p}{\|u_d\|_q} = c_d, \]
where of course equality holds as long as $\tilde u_d$ minimizes (1.9).

Vice versa, assume that the $u_i$ minimizing (1.9) is odd: then we easily find an (even) function $\tilde u_i$ which is zero at $\pm1$ and respects both $\|\tilde u_i'\|_p = \|u_i'\|_p$ and $\|\tilde u_i\|_q = \|u_i\|_q$. In particular, therefore, $c_d \le c_i$, which together with $c_i \le c_d$ implies $c_i = c_d$.
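For $p = q = 2$ the construction in the proof is completely explicit: starting from $u_d(x) = \cos(\pi x/2)$, the odd candidate $\tilde u_d$ collapses to $\sin(\pi x/2)$, i.e. precisely the minimizer of the null-integral problem (a quick check, for illustration only):

```python
import math

# Sketch (p = q = 2): the even Dirichlet minimizer is u_d(x) = cos(pi*x/2);
# the odd candidate of the proof, u_d(1-x) for x >= 0 and -u_d(1+x) for x < 0,
# collapses to sin(pi*x/2), which is the minimizer of the null-integral problem.

u_d = lambda x: math.cos(math.pi * x / 2)
tilde = lambda x: u_d(1 - x) if x >= 0 else -u_d(1 + x)

for x in [-0.9, -0.5, -0.1, 0.0, 0.3, 0.8]:
    assert abs(tilde(x) - math.sin(math.pi * x / 2)) < 1e-12
print("the odd candidate is sin(pi*x/2): hence c_i = c_d when p = q = 2")
```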
Let us consider some other possible "variations" of (1.16). In [1], the authors Dacorogna, Gangbo and Subía studied the problem when a periodic boundary condition is added. Well, this minimum problem is actually equivalent to ours, meaning that the two can be reduced one to the other, as stated in the proposition that follows:

Proposition 1.11. The following equality holds:
\[ 2^p\lambda_i = \min\left\{ \int_{-1}^1 |u'|^p : \int u = 0,\ \int |u|^q = 1,\ u(-1) = u(1) \right\}, \tag{1.17} \]
where $\lambda_i$ is the same as in (1.15). Besides, a minimizer for (1.17) is given by $u_i(2|x|-1)$, where $u_i$ is a function minimizing (1.15).
Proof. Proving the inequality $\ge$ of (1.17) is quite simple. Call $\tilde u_i : [-1,1] \to \mathbb R$ the function $\tilde u_i(x) = u_i(2|x|-1)$: then $\tilde u_i$ respects both $\int_{-1}^1 |\tilde u_i|^q = 1$ and $\int_{-1}^1 \tilde u_i = 0$ (because it is a rearrangement of $u_i$), as well as the periodic condition $\tilde u_i(-1) = \tilde u_i(1)$. This means that $\tilde u_i$ is indeed a "candidate" for minimizing (1.17); since
\[ \int_{-1}^1 |\tilde u_i'|^p = 2^p\int_{-1}^1 |u_i'|^p = 2^p\lambda_i, \]
we conclude that the minimum equals $2^p\lambda_i$ at most.

The proof of the inequality $\le$ is analogous. Let $u_0$ be a minimizer of (1.17) (which exists because of an argument similar to the one used for point 1 in Theorem 1.6), and call $\tilde u_0 : [-1,1] \to \mathbb R$ the function
\[ \tilde u_0(x) = \widehat{u_0}\left(\frac{x+1}{2}\right), \]
where $\widehat{u_0}$ is the symmetric rearrangement of $u_0$. Again, we will have $\int_{-1}^1 |\tilde u_0|^q = 1$ and $\int_{-1}^1 \tilde u_0 = 0$, hence
\[ \lambda_i \le \int_{-1}^1 |\tilde u_0'|^p = 2^{-p}\int_{-1}^1 |u_0'|^p, \]
which is of course incompatible with a strict inequality $>$ in (1.17).
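Again for $p = q = 2$, where $u_i(x) = \sin(\pi x/2)$ and $\lambda_i = (\pi/2)^2$, the candidate $u_i(2|x|-1)$ equals $-\cos(\pi x)$, and its energy can be checked to be $2^p\lambda_i = \pi^2$ (a numerical illustration; the grid size is arbitrary):

```python
import math

# Sketch (p = q = 2): u_i(x) = sin(pi*x/2), lambda_i = (pi/2)^2; the periodic
# candidate u_i(2|x|-1) equals -cos(pi*x), and Proposition 1.11 predicts that
# its energy is 2^p * lambda_i = pi^2.  Midpoint-rule quadrature below.

N = 100_000
h = 2.0 / N
xs = [-1 + (i + 0.5) * h for i in range(N)]

tilde = lambda x: math.sin(math.pi * (2 * abs(x) - 1) / 2)
d_tilde = lambda x: math.pi * math.sin(math.pi * x)  # derivative of -cos(pi*x)

energy = sum(d_tilde(x) ** 2 for x in xs) * h
mean = sum(tilde(x) for x in xs) * h
assert abs(energy - math.pi ** 2) < 1e-3
assert abs(mean) < 1e-6
print("energy =", energy)
```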
Remark 1.12. Another popular variation of (1.15) (which we find in [2] and [3]) consists of adding a Dirichlet boundary condition to the problem. Again, we can prove that in this case the solution can be deduced from the minimizer $u_i$; in particular, an equality similar to (1.17) holds, namely
\[ 2^p\lambda_i = \min\left\{ \int_{-1}^1 |u'|^p : \int u = 0,\ \int |u|^q = 1,\ u(-1) = u(1) = 0 \right\}. \tag{1.18} \]
Remembering Proposition 1.11, the reason for (1.18) is quite apparent. Indeed, on the one hand the inequality $\le$ must be true because the condition $u(-1) = u(1) = 0$ is stricter than the periodic condition of (1.17); on the other hand, $\ge$ holds as well because problem (1.17) admits at least one minimizer respecting the Dirichlet condition: let $u_0$ be a minimizer for (1.17); because of the condition $\int u_0 = 0$, the function $u_0$ must be zero at a certain $x_0$, thus enabling us to take the function $\tilde u_0 : [-1,1] \to \mathbb R$ defined as
\[ \tilde u_0(x) = \begin{cases} u_0(x + 1 + x_0) & \text{if } -1 \le x \le -x_0 \\ u_0(x - 1 + x_0) & \text{if } -x_0 \le x \le 1 \end{cases}, \]
which still minimizes (1.17) and is zero at $\pm1$.
Remark 1.13. Problem (1.18) is equivalent to (1.15) in another respect as well: namely, (1.15) admits an odd minimizer $u_i$ if and only if problem (1.18) does.
Remark 1.14. Here are a few other minimum problems which are equivalent to (1.9):
\[ 2c_i = \min\left\{ \frac{\|u'\|_p}{\|u - \bar u\|_q} : u \in W^{1,p}(-1,1),\ u(-1) = u(1) = 0 \right\} = \min\left\{ \frac{\|u''\|_p}{\|u'\|_q} : u \in C^2_{\rm per}(-1,1) \right\} = \min\left\{ \frac{\|u^{(k+1)}\|_p}{\|u^{(k)}\|_q} : u \in C^{k+1}(-1,1),\ u^{(k)}(\pm1) = u^{(k-1)}(\pm1) = 0 \right\} \]
(where $\bar u$ is the mean value of the function $u$).
Remark 1.15. In the previous cases, the equivalences are all in all rather unsurprising. In Chapter 3 (Theorem 3.2, to be precise), however, we will find a much less apparent problem equivalent to the one we are studying: namely, we will prove that the minimizer $u_i$ of (1.9) is symmetrical with respect to the origin if and only if $\mu = 0$ is the minimum point of $\mu \mapsto A(R_\mu)$, where $R_\mu \subseteq \mathbb R^2$ is the region of the plane containing the solutions to the inequality
\[ |x|^{p^*} + |y|^q \le 1 + \mu y \]
(here, as usual, $p^* = \frac{p}{p-1}$ is the conjugate of $p$), and $A(R)$ is the area of $R$: it is precisely this characterization that will lead us on the road to a general solution.
Chapter 2

Symmetry or not?

In this and the following chapter we concentrate on problem (1.9). As we have already mentioned, it is only natural to wonder whether the function minimizing this problem is symmetrical with respect to the origin or not. If $p = q = 2$, we know that the answer is positive (see Remark 1.8); however, in the following sections we will discover that the answer is negative for some other choices of $p$ and $q$. In many ways, the approach taken in this chapter is a "naive" one: by studying some specific cases, we will become convinced that the minimizer $u_i$ cannot be symmetrical when $q$ is too big; then, for every odd function $u$, we will try to exhibit a modified version of $u$ that makes (we hope) $\frac{\|u'\|_p}{\|u\|_q}$ decrease, an attempt that will prove successful whenever $q > 4p - 1$.
2.1 Some limit cases

We are focusing on problem (1.9); before tackling the general case, it is quite instructive to solve some specific limit scenarios. The easiest one is arguably when $p = \infty$. In this case, the solution is very simple:

Proposition 2.1. Suppose that $u_\infty : [-1,1] \to \mathbb R$ minimizes the expression $\frac{\|u'\|_\infty}{\|u\|_q}$ among all $u \in W^{1,\infty}(-1,1)$ such that $\int u = 0$. Then $u_\infty$ is a straight line through the origin.

Proof. First of all, let us prove that problem (1.9) does attain a minimum when $p$ is infinite. Let $c_i(\infty, q)$ be the constant defined in (1.9), and take a sequence of functions $u_n \in W^{1,\infty}(-1,1)$ respecting
\[ \|u_n'\|_\infty = 1, \qquad \int_{-1}^1 u_n = 0, \qquad \|u_n\|_q \to \frac{1}{c_i}. \]
Because $\|u_n'\|_\infty = 1$ identically, for every $n$ the function $u_n$ is Lipschitz continuous with constant 1; in particular, $(u_n)_{n\in\mathbb N}$ is equicontinuous. We can therefore apply the Arzelà-Ascoli theorem (see Proposition A.27), and obtain that $u_{n_k} \to u_\infty$ in $L^\infty$ for an adequate sequence of indexes $(n_k)_{k\in\mathbb N}$. The limit function $u_\infty$ is the minimizer we were looking for: indeed, the fact that $u_{n_k}$ converges uniformly implies that
\[ \int u_\infty = \lim_k \int_{-1}^1 u_{n_k} = 0 \qquad\text{and}\qquad \|u_\infty\|_q = \lim_k \|u_{n_k}\|_q = \frac{1}{c_i}; \]
also, $u_\infty$ must be Lipschitz continuous with constant 1, hence $\|u_\infty'\|_\infty$ (which is $u_\infty$'s smallest Lipschitz constant, see Remark A.16) cannot be greater than 1; in other words,
\[ \frac{\|u_\infty'\|_\infty}{\|u_\infty\|_q} \le c_i(\infty, q). \]
We must now prove that u∞ is a straight line through the origin: u∞(x) = x. In order to do that, we will prove that, for every function u : [−1, 1] → R with Lipschitz constant ku0k∞ = 1 and R u = 0 (with the exception of u∞(x) = x) one can find a ˜u with the same Lipschitz constant and integral, but k˜ukq > kukq. Without loss of generality, u
is increasing: if it is not, substitute it with (a multiple of) its increasing rearrangement (which, observe, cannot be u∞(x) = x, since any rearrangement of u∞ has a Lipschitz constant greater than 1). Now, if u(0) = 0, then we can quite simply take ˜u(x) = x: indeed, because 1 is the Lipschitz constant for u we must have |u(x)| ≤ |x|, hence kukq < k˜ukq. If u(0) 6= 0, we will still have u(x0) = 0 for some −1 < x0 < 1; to fix ideas,
let us assume x0 > 0. Note that the request R u = 0, together with ku0k = 1 and u’s
monotonicity, assures that u(2x0− 1) > x0− 1; take then a 0 < ε < u(2x0− 1) − x0+ 1,
and set ˜ u = x − x0+ ε if x0− ε − l ≤ x ≤ 1 −l if x1 ≤ x ≤ x0 − ε − l u(x) if − 1 ≤ x < x1 ,
where l > 0 and x1 ≤ x0 are chosen so that ˜u is continuous and R ˜u = 0. Then:
• functions y = u(x) and y = x − x0+ ε must always have an intersection on interval
(−1, x0− ε];
• if we call x2 such an intersection (u(x2) = x2 − x0 + ε), then l > −u(x2), and
x1 ≤ x2;
• inequality x2 > 2x0 − 1 also holds, since u(x) > x − x0 + ε for all x ≤ 2x0 − 1,
which means that it is impossible that x2 ≤ 2x0 − 1;
2.1. SOME LIMIT CASES 23 All these facts allow us to write
$$\int_{-1}^{1}|u|^q \le \int_{-1}^{x_2}|u|^q + \int_{x_2}^{x_0+u(x_2)}|u(x_2)|^q\,dx + \int_{x_0+u(x_2)}^{1}|x-x_0|^q\,dx <$$
$$< \int_{-1}^{x_2}|\tilde u|^q + \int_{1-\varepsilon}^{1}|1-x_0|^q\,dx + \int_{x_0+u(x_2)-\varepsilon}^{1-\varepsilon}|x+\varepsilon-x_0|^q\,dx < \int_{-1}^{1}|\tilde u|^q:$$
this concludes, because we have indeed found a $\tilde u$ with the required properties.
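The comparison step used above (a 1-Lipschitz, zero-mean $u$ with $u(0)=0$ satisfies $|u(x)| \le |x|$, hence $\|u\|_q < \|x\|_q$) is easy to test numerically. The following sketch is illustrative only and not part of the thesis: the competitor function and the choice $q = 3$ are arbitrary.

```python
import math

def lq_norm(f, q, n=20001):
    # midpoint-rule approximation of (integral of |f(x)|^q over [-1,1])^(1/q)
    h = 2.0 / n
    return (sum(abs(f(-1 + (i + 0.5) * h)) ** q for i in range(n)) * h) ** (1.0 / q)

q = 3.0
# hypothetical competitor: |u'| = |sin x| <= sin 1 < 1 (so 1-Lipschitz),
# increasing, odd (hence zero mean), u(0) = 0
u = lambda x: math.copysign(1.0 - math.cos(x), x)

# |u(x)| <= |x| on a sample grid, and the identity has the larger L^q norm
domination = max(abs(u(k / 100.0)) - abs(k / 100.0) for k in range(-100, 101))
gap = lq_norm(lambda x: x, q) - lq_norm(u, q)
```

The quadrature is crude but more than sufficient to see that the gap is strictly positive.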
When $q$ equals 1 or $\infty$, the minimizer $u_i$ can be found via the Euler equation. Take $q = \infty$: we can prove that

Proposition 2.2. Suppose $1 < p < \infty$. If $u_i$ minimizes problem (1.15) with $q = \infty$, then $u_i$ is a multiple of the function
$$u(x) = (p^*+1)|x \pm 1|^{p^*} - 2^{p^*},$$
where $p^*$ stands for $\frac{p}{p-1}$.
Proof. Theorem 1.6 assures that $u_i$ exists; Proposition A.34, on the other hand, tells us that it must be a monotone function. To fix ideas, let us assume that $u_i$ increases, and that $\|u_i\|_\infty = u_i(1) = 1$. Consider then the minimum problem
$$\tilde\lambda = \min\left\{\int_{-1}^{1}|u'|^p : u \in W^{1,p}(-1,1),\ \int u = 0,\ u(1) = 1\right\} \le \lambda_i: \tag{2.1}$$
we can compute its minimizers using the Euler equation. Indeed, if $\tilde u$ minimizes (2.1), then it must solve a differential equation of the form
$$(u'|u'|^{p-2})' = \beta$$
(retrace the proof of point 4 in Theorem 1.1), which becomes
$$|u'|^p = p^*\beta u + \frac{\tilde\lambda}{2} \tag{2.2}$$
when integrated. Also, $\tilde u$ is of class $C^1$, and satisfies the Neumann boundary condition $\tilde u'(-1) = 0$ (see point 3 in Theorem 1.6). Now, we can actually explicitly solve an equation like (2.2): all its solutions are of the form $u(x) = a|x+b|^{p^*} + c$, where $a, b, c$ are three adequate constants; we can then find these constants by solving the system given by the various boundary conditions:
$$\begin{cases}\int_{-1}^{1}\tilde u = 0\\ \tilde u(1) = 1\\ \tilde u'(-1) = 0\end{cases} \implies \begin{cases}\frac{a}{p^*+1}\left(|1+b|^{p^*+1} - |b-1|^{p^*+1}\right) + 2c = 0\\ a|1+b|^{p^*} + c = 1\\ b = 1\end{cases} \implies \begin{cases}c = -\frac{1}{p^*}\\ 2^{p^*}a + c = 1\\ b = 1\end{cases},$$
hence $a = \frac{p^*+1}{p^*}\,2^{-p^*}$, i.e. $\tilde u$ is a multiple of $(p^*+1)|x+1|^{p^*} - 2^{p^*}$. Observe that $\tilde u(-1) = c > -1$: since $\tilde u$ is increasing, this means $\|\tilde u\|_\infty = 1$, which in turn implies $\lambda_i \le \tilde\lambda$, i.e. $\lambda_i = \tilde\lambda$, and $u_i = \tilde u$. It is worth noticing that $u_i$ is not symmetrical with respect to the origin.
The case $q = 1$ can be similarly analyzed, since its Euler equation is more or less the same as in the previous scenario:

Proposition 2.3. If $q = 1$ and $1 < p < \infty$, problem (1.15) is minimized by an adequate multiple of the function $f$ defined as
$$f(x) = \begin{cases}1 - |x-1|^{p^*} & \text{if } x \ge 0\\ |x+1|^{p^*} - 1 & \text{if } x \le 0\end{cases}.$$
Proof. As usual, let $u_i$ be the minimizer; $u_i$ is monotone: increasing, to fix ideas. Let $x_0$ be a point such that $u_i(x_0) = 0$. Of course, $\int_{x_0}^{1} u_i = \frac12$ (we are normalizing $\|u_i\|_1 = 1$); let us then consider the minimum problem
$$\min\left\{\int_{x_0}^{1}|u'|^p : u \in W^{1,p}(x_0,1),\ \int_{x_0}^{1}u = \frac12,\ u(x_0) = 0\right\}.$$
Again, the minimizer is given by $\tilde u(x) = a|x+b|^{p^*} + c$; the boundary condition $u'(1) = 0$ implies $b = -1$, which in turn means that the minimizer is nonnegative. But then, $u_i$ and $\tilde u$ have the same integral and the same $L^1$-norm over $[x_0,1]$; therefore, $u_i$ is guaranteed to equal $\tilde u$ on $[x_0,1]$, i.e. $u_i(x) = \tilde u(x) = a|x-1|^{p^*} + c$ for all $x \ge x_0$, where $a$ and $c$ are uniquely determined via the linear system
$$\begin{cases}\int_{x_0}^{1}\left(a|x-1|^{p^*} + c\right)dx = \frac12\\ a|x_0-1|^{p^*} + c = 0\end{cases}.$$
Now, let $k$ be the positive constant such that $k\|f\|_1 = 1$, and call $f_{x_0}$ the function
$$f_{x_0}(x) = \begin{cases}\dfrac{1}{1+x_0}\,f\!\left(\dfrac{x+1}{1+x_0}-1\right) & \text{if } x \le x_0\\[2ex] \dfrac{1}{1-x_0}\,f\!\left(\dfrac{x-1}{1-x_0}+1\right) & \text{if } x \ge x_0\end{cases}.$$
The restriction of $kf_{x_0}$ to the interval $[x_0,1]$ is a function of the form $\tilde c + \tilde a|x-1|^{p^*}$; and, since $f_{x_0}(x_0) = 0$, and $\int_{x_0}^{1}kf_{x_0} = \int_0^1 kf = \frac12$, the constants $\tilde a$ and $\tilde c$ must solve the exact same linear system solved by $a$ and $c$: this means $\tilde a = a$ and $\tilde c = c$, i.e. $u_i(x) = kf_{x_0}(x)$ for all $x \ge x_0$. If we proceed similarly on the interval $[-1,x_0]$, we will find that $u_i = kf_{x_0}$ here as well, i.e. $u_i$ is a multiple of $f_{x_0}$.
We still have to prove that $x_0 = 0$; this, however, simply descends from the fact that
$$\int_{-1}^{1}|f_{x_0}'|^p = \left((1+x_0)^{1-2p} + (1-x_0)^{1-2p}\right)\int_0^1|f'|^p,$$
while $\|f_{x_0}\|_1$ does not depend on $x_0$: since the factor $(1+x_0)^{1-2p} + (1-x_0)^{1-2p}$ attains its minimum at $x_0 = 0$, the minimizer must correspond to $x_0 = 0$.
2.2 Easy asymmetry results
The results of section 2.1 have shown that the minimizers to problem (1.9) need not be symmetrical with respect to the origin (we are of course referring to the case q = ∞). In this section, we will begin looking into the general case, and find out that in fact ui can only be odd when index q is “small enough”. Let us start with a quick investigation of Γ-limits (see Definition A.28):
Proposition 2.4. Consider
$$X = \left\{u \in L^1(-1,1) : u \ne 0,\ \int u = 0\right\}$$
equipped with the usual norm, and let $R_{p,q}$ be defined on this space as
$$R_{p,q}(u) = \frac{\|u'\|_p}{\|u\|_q}$$
(or $R_{p,q}(u) = \infty$ when the previous ratio is not well set). Then, if $p_n \to p$ and $q_n \to q$ are two converging sequences (with $p > 1$), we will have $R_{p_n,q_n} \xrightarrow{\Gamma} R_{p,q}$.

Proof. For the liminf inequality, we must show that, if $u_n \xrightarrow{L^1} u$, then
$$R_{p,q}(u) \le \liminf R_{p_n,q_n}(u_n).$$
We can assume without loss of generality that the sequence of $R_{p_n,q_n}(u_n)$ converges to a number $L < +\infty$, and is bounded by $2L$: we must show that $R_{p,q}(u) \le L$. Let us first of all prove that $\|u_n'\|_{p-\varepsilon}$ is bounded for every $0 < \varepsilon < p-1$. Assume $\|u_n'\|_{p-\varepsilon} \to \infty$, and consider the sequence of functions
$$v_n = \frac{u_n}{\|u_n'\|_{p-\varepsilon}};$$
since the Hölder inequality (see Proposition A.3) implies
$$2^{\frac{1}{p_2}}\|f\|_{p_1} \le 2^{\frac{1}{p_1}}\|f\|_{p_2}$$
for every measurable $f : [-1,1] \to \mathbb{R}$ and every choice of $p_1 \le p_2$ (which in the worst case becomes $\|f\|_{p_1} \le 2\|f\|_{p_2}$), we can assure that for every $n$
$$\|v_n\|_\infty = \frac{\|u_n\|_\infty}{\|u_n'\|_{p-\varepsilon}} \ge \frac{\|u_n\|_{q_n}}{4\|u_n'\|_{p_n}} \ge \frac{1}{8L}.$$
At the same time, $(v_n')_{n\in\mathbb{N}}$ is bounded in $L^{p-\varepsilon}$, so there must exist a subsequence such that $v_{n_k} \xrightarrow{L^\infty} v$ and $v_{n_k}' \rightharpoonup v'$ in $L^{p-\varepsilon}$; but since $v_n \xrightarrow{L^1} 0$, the uniform limit $v$ must vanish identically, against $\|v\|_\infty \ge \frac{1}{8L}$: we have reached the absurd, and the only explanation is that $\|u_n'\|_{p-\varepsilon}$ is indeed bounded. This boundedness means that we can extract a weakly converging subsequence: $u_{n_k}' \rightharpoonup u'$ in $L^{p-\varepsilon}$, where of course $u$ is the uniform limit of the $u_{n_k}$; in particular, $u$ is bounded. Now,
$$L = \lim_k \frac{\|u_{n_k}'\|_{p_{n_k}}}{\|u_{n_k}\|_{q_{n_k}}} \ge \liminf_k \frac{2^{\frac{1}{p_{n_k}}-\frac{1}{p-\varepsilon}}\|u_{n_k}'\|_{p-\varepsilon}}{2^{\frac{1}{q_{n_k}}-\frac{1}{q+\varepsilon}}\|u_{n_k}\|_{q+\varepsilon}} = \lim \frac{2^{\frac{1}{p_{n_k}}-\frac{1}{p-\varepsilon}}}{2^{\frac{1}{q_{n_k}}-\frac{1}{q+\varepsilon}}} \cdot \frac{\liminf \|u_{n_k}'\|_{p-\varepsilon}}{\lim \|u_{n_k}\|_{q+\varepsilon}} \ge \frac{2^{\frac1p-\frac{1}{p-\varepsilon}}}{2^{\frac1q-\frac{1}{q+\varepsilon}}} \cdot \frac{\|u'\|_{p-\varepsilon}}{\|u\|_{q+\varepsilon}};$$
since this is true for every $\varepsilon$, and
$$\|u'\|_{p-\varepsilon} \to \|u'\|_p, \qquad \|u\|_{q+\varepsilon} \to \|u\|_q$$
as $\varepsilon$ approaches zero, we can conclude that $R_{p,q}(u) \le L$.
In order to prove the limsup inequality, we must find for every $u \in X$ a sequence of functions such that $u_n \to u$, and
$$\limsup R_{p_n,q_n}(u_n) \le R_{p,q}(u).$$
If $u \notin W^{1,p}$, then $R_{p,q}(u) = \infty$, hence the inequality is trivially true. If $u$ is a piecewise affine function, then $u$ and $u'$ are both bounded: we can therefore simply take $u_n = u$, and use the fact that $\|u'\|_{p_n} \to \|u'\|_p$ and $\|u\|_{q_n} \to \|u\|_q$. Finally, if $u$ is a generic function of $W^{1,p}$, observe that for every $k \in \mathbb{N}$ we can find a piecewise affine function $u_k$ such that
$$\int_{-1}^{1}u_k = 0, \qquad \|u - u_k\|_\infty \le \frac1k, \qquad |R_{p,q}(u) - R_{p,q}(u_k)| \le \frac1k.$$
Because we know that $R_{p_n,q_n}(u_k) \xrightarrow{n\to\infty} R_{p,q}(u_k)$, for every $k$ we can find a $N(k) \in \mathbb{N}$ such that
$$|R_{p,q}(u_k) - R_{p_n,q_n}(u_k)| \le \frac1k \qquad \forall\, n \ge N(k);$$
we can of course also ask $N(k+1) > N(k)$. Now, if for every $n$ we call $k_n$ the number
$$k_n = \max\{k \in \mathbb{N} : N(k) \le n\},$$
then $u_{k_n}$ is a sequence that respects the limsup inequality: indeed, $k_n \xrightarrow{n\to\infty} \infty$, hence $u_{k_n} \xrightarrow{n\to\infty} u$, and $N(k_n) \le n$, hence
$$|R_{p,q}(u) - R_{p_n,q_n}(u_{k_n})| \le |R_{p,q}(u) - R_{p,q}(u_{k_n})| + |R_{p,q}(u_{k_n}) - R_{p_n,q_n}(u_{k_n})| \le \frac{2}{k_n} \xrightarrow{n\to\infty} 0.$$
A consequence of Proposition 2.4 is the following:
Proposition 2.5. For every $p > 1$ there exists a $\bar q$ such that if $q > \bar q$ then the minimizer to (1.9) is not odd.

Proof. Suppose $\bar q$ did not exist, i.e. we could find a $p > 1$ and a $q_n \to \infty$ such that every minimizer $u_n$ of (1.9) (with $q = q_n$) was odd. We can assume $\|u_n'\|_p = 1$: by extracting a converging subsequence such that $u_{n_k} \xrightarrow{L^\infty} u_\infty$ and $u_{n_k}' \rightharpoonup u_\infty'$ in $L^p$, then, we would have identified an odd minimizer $u_\infty$ of (1.9) (with $q = \infty$). This is absurd: in Proposition 2.2, we have found $u_\infty$ explicitly, and discovered that it cannot be symmetrical with respect to the origin.
The content of Proposition 2.5 can also be proved through a more direct calculation, which we are going to carry out in the following statement and which will give us a first estimate for $\bar q(p)$:
Proposition 2.6. Suppose $p > 1$: for every $q > 4p-1$, the function $u_i$ minimizing (1.9) is asymmetrical with respect to the origin.
Proof. Let us admit that $u_i$ is symmetrical: we shall prove that this leads to a contradiction. For every $-1 < x_0 < 1$, let $u_{x_0}$ be the function defined as
$$u_{x_0}(x) = \begin{cases}\dfrac{1}{1+x_0}\,u_i\!\left(\dfrac{x+1}{1+x_0}-1\right) & \text{if } x \le x_0\\[2ex] \dfrac{1}{1-x_0}\,u_i\!\left(\dfrac{x-1}{1-x_0}+1\right) & \text{if } x \ge x_0\end{cases}.$$
Then $\int u_{x_0} = 0$, and $u_0 = u_i$; besides,
$$\frac{\|u_{x_0}'\|_p}{\|u_{x_0}\|_q} = \frac{\left((1+x_0)^{1-2p} + (1-x_0)^{1-2p}\right)^{\frac1p}}{\left((1+x_0)^{1-q} + (1-x_0)^{1-q}\right)^{\frac1q}}\,\frac{\|u_0'\|_p}{\|u_0\|_q}.$$
If $u_i$ is indeed the minimizer, then, $\frac{\|u_0'\|_p}{\|u_0\|_q} \le \frac{\|u_{x_0}'\|_p}{\|u_{x_0}\|_q}$ for every $x_0$, i.e. the function
$$h(x) = \frac{\left((1+x)^{1-2p} + (1-x)^{1-2p}\right)^{\frac1p}}{\left((1+x)^{1-q} + (1-x)^{1-q}\right)^{\frac1q}}$$
attains its minimum value when x = 0. Now,
$$h(x) = \frac{\left(1+(1-2p)x - p(1-2p)x^2 + 1-(1-2p)x - p(1-2p)x^2 + o(x^2)\right)^{\frac1p}}{\left(1+(1-q)x - \frac q2(1-q)x^2 + 1-(1-q)x - \frac q2(1-q)x^2 + o(x^2)\right)^{\frac1q}} = 2^{\frac1p-\frac1q}\,\frac{1-(1-2p)x^2+o(x^2)}{1-\frac{1-q}{2}x^2+o(x^2)} = 2^{\frac1p-\frac1q}\left(1 + \frac{1-q}{2}x^2 - (1-2p)x^2 + o(x^2)\right):$$
in order that h’s minimum is reached in zero, therefore, we must have 1 − q ≥ 2(1 − 2p), i.e. q ≤ 4p − 1, which however is incompatible with the hypothesis.
Proposition 2.6 above proved that in many cases problem (1.9) does not have an odd minimizer by explicitly writing, for every odd function $u$, a modification of $u$ that makes $\frac{\|u'\|_p}{\|u\|_q}$ decrease. In reality, we already know that there is essentially only one odd function $u_0$ which could be the minimizer $u_i$ we are looking for; of course, that's the function
$$u_0(x) = \begin{cases}u_d(1-x) & \text{if } x \ge 0\\ -u_d(1+x) & \text{if } x < 0\end{cases}, \tag{2.3}$$
where $u_d$ minimizes (1.6) (see Proposition 1.10). This means that, hypothetically, we could prove the asymmetry of (1.9)'s minimizers by showing that the second variation of $\frac{\|u'\|_p}{\|u\|_q}$ is not always positive in $u_0$ (the second variation of $F : W \to \mathbb{R}$ in $u_0 \in W$ is the functional $\delta^2 F(u_0, v) = \varphi_v''(0)$, where $\varphi_v(t) = F(u_0+tv)$). Out of curiosity, let us find out what the second variation of $\frac{\|u'\|_p}{\|u\|_q}$ in $u_0$ is:

Proposition 2.7. If
$$F(u) = \frac{\|u'\|_p}{\|u\|_q},$$
then the second variation of $F$ in $u_0$ (which is the odd function defined in (2.3)) is given (up to a positive multiplying constant) by the quadratic functional
$$\int (p-1)|u_0'|^{p-2}v'^2 - \lambda_0(q-1)|u_0|^{q-2}v^2 + \lambda_0(q-p)\left(\int|u_0|^{q-2}u_0v\right)^2, \tag{2.4}$$
where $\lambda_0 = \int|u_0'|^p$.
Proof. Pick a $v \in C^1([-1,1])$ such that $\int v = 0$, and call
$$f(t) = \int |u_0'+tv'|^p, \qquad g(t) = \int |u_0+tv|^q:$$
then
$$f'(t) = p\int |u_0'+tv'|^{p-2}(u_0'+tv')v',$$
which means
$$f'(0) = p\int |u_0'|^{p-2}u_0'v' = p\lambda_0 \int |u_0|^{q-2}u_0 v$$
(because $u_0$ solves equation (1.4), and $\lambda_0 = \lambda_d$), while
$$f''(0) = p(p-1)\int |u_0'|^{p-2}v'^2.$$
Similarly,
$$g'(0) = q\int |u_0|^{q-2}u_0 v, \qquad g''(0) = q(q-1)\int |u_0|^{q-2}v^2.$$
Then
$$F(u_0+tv) = \frac{\|u_0'+tv'\|_p}{\|u_0+tv\|_q} = \frac{\left(\lambda_0 + tp\lambda_0\int|u_0|^{q-2}u_0v + \frac{t^2}{2}p(p-1)\int|u_0'|^{p-2}v'^2 + o(t^2)\right)^{\frac1p}}{\left(1 + tq\int|u_0|^{q-2}u_0v + \frac{t^2}{2}q(q-1)\int|u_0|^{q-2}v^2 + o(t^2)\right)^{\frac1q}} =$$
$$= \lambda_0^{\frac1p}\left(1 + t\int|u_0|^{q-2}u_0v + \frac{t^2}{2}\,\frac{p-1}{\lambda_0}\int|u_0'|^{p-2}v'^2 + \frac{1}{2p}\left(\frac1p-1\right)\left(tp\int|u_0|^{q-2}u_0v\right)^2 + o(t^2)\right)\cdot$$
$$\cdot\left(1 - t\int|u_0|^{q-2}u_0v - \frac{t^2}{2}(q-1)\int|u_0|^{q-2}v^2 + \frac{1}{2q}\left(\frac1q+1\right)\left(tq\int|u_0|^{q-2}u_0v\right)^2 + o(t^2)\right) =$$
$$= \lambda_0^{\frac1p} + \lambda_0^{\frac1p}\,\frac{t^2}{2}\left(\int \frac{p-1}{\lambda_0}|u_0'|^{p-2}v'^2 - (q-1)|u_0|^{q-2}v^2 + \left(\int|u_0|^{q-2}u_0v\right)^2(q-p)\right) + o(t^2):$$
since the coefficient of the second degree term is indeed a (positive) multiple of (2.4), the theorem is proved.
Unfortunately, determining for which $p, q$ there does (or does not) exist a $v$ (with $\int v = 0$) that makes (2.4) negative is no easy task: this is why in Chapter 3 we will take a different approach to solve the problem, an approach based on a more accurate study of the Euler equation.
2.3 The case p = 1
In the previous sections, we have more or less always assumed that $p > 1$. The reason for this is (among others) the fact that point 1 in Theorem 1.1 does not necessarily apply in the case $p = 1$; in fact, the functional $\frac{\|u'\|_1}{\|u\|_q}$ does not attain a minimum on
$$W = \left\{u \in W^{1,1}(-1,1) : \int u = 0\right\}.$$
We can, however, extend the functional’s domain to a more general class of functions: the space BV (see Definition A.20).
In particular, we want to show that $\frac{\|u'\|}{\|u\|_q}$ does have a minimum on the space
$$W = \left\{u \in BV(-1,1) : \int u = 0\right\}$$
(where $\|u'\|$ is defined as in Proposition A.18), and that the minimizer is always a piecewise constant function of the form
$$u_{x_0}(x) = \begin{cases}-\frac{1}{1+x_0} & \text{if } x < x_0\\ \frac{1}{1-x_0} & \text{if } x > x_0\end{cases}. \tag{2.5}$$
In order to achieve this, we begin by proving the two following propositions:
Proposition 2.8. Take a function $f \in L^q(a,b)$, and let $g : [a,b] \to \mathbb{R}$ be the constant function such that $\int_a^b f = \int_a^b g$. Then $\|f\|_q \ge \|g\|_q$.

Proof. The proof is simply an application of the Hölder inequality:
$$\|g\|_q = \left(\int|g|^q\right)^{\frac1q} = \left((b-a)\left(\frac{|\int f|}{b-a}\right)^q\right)^{\frac1q} \le (b-a)^{\frac1q-1}\int|f| \le (b-a)^{\frac1q-1}(b-a)^{\frac{1}{q^*}}\|f\|_q = \|f\|_q.$$
Proposition 2.9. For every $-1 < x_0 < 1$, consider the function $u_{x_0}$ defined in (2.5), and let $u \in BV(-1,1)$ be a second function such that
$$\operatorname{ess\,sup} u = \frac{1}{1-x_0}, \qquad \operatorname{ess\,inf} u = -\frac{1}{1+x_0}, \qquad \int u = 0.$$
Then, $\|u_{x_0}\|_q \ge \|u\|_q$ for every choice of $1 \le q < \infty$.
Proof. Suppose that $u : [-1,1] \to \mathbb{R}$ is a piecewise constant function, and call $-1 = a_0 < a_1 < a_2 < \dots < a_n = 1$ the endpoints of the intervals on which it is constant. Now define $f : [-1,1] \to \mathbb{R}$ as the function
$$f(x) = \begin{cases}-\frac{1}{1+x_0} & \text{if } a_{k-1} < x < x_k\\ \frac{1}{1-x_0} & \text{if } x_k < x < a_k\end{cases},$$
where for every $k = 1, \dots, n$ the point $x_k \in [a_{k-1}, a_k]$ is chosen so that
$$\int_{a_{k-1}}^{a_k} f = \int_{a_{k-1}}^{a_k} u.$$
Since $u$ is constant on $[a_{k-1}, a_k]$, we can apply Proposition 2.8 on every interval $[a_{k-1}, a_k]$: we will discover that $\int_{a_{k-1}}^{a_k}|f|^q \ge \int_{a_{k-1}}^{a_k}|u|^q$ holds for every $k$, hence
$$\int_{-1}^{1}|f|^q \ge \int_{-1}^{1}|u|^q.$$
To conclude, observe that $\int|f|^q = \int|u_{x_0}|^q$; indeed, both $f$ and $u_{x_0}$ are piecewise constant functions which only assume the values $-\frac{1}{1+x_0}$ and $\frac{1}{1-x_0}$, and which have a null integral over $[-1,1]$: it is then easy to see that they assume each of the two values on sets of the same total measure, so that their $L^q$-norms coincide.
Take now a generic $u$: of course, we can consider a sequence of piecewise constant functions $(u_n)_{n\in\mathbb{N}}$ such that $u_n \xrightarrow{L^q} u$. Moreover, we can ask that for every $n$ the function $u_n$ has $-\frac{1}{1+x_0}$ as minimum and $\frac{1}{1-x_0}$ as maximum, and $\int u_n = 0$: therefore, $\|u_{x_0}\|_q \ge \|u_n\|_q$ will be true for every $n$, and when taking the limit for $n \to \infty$ we will obtain the required inequality.
Proposition 2.9 allows us to prove that

Proposition 2.10. For every $1 \le q < \infty$ there exists a value $-1 < x_0 < 1$ such that $u_{x_0}$ minimizes $\frac{\|u'\|}{\|u\|_q}$ among all $u \in BV(-1,1)$ with $\int u = 0$.
Proof. Let $u : [-1,1] \to \mathbb{R}$ be a generic function of bounded variation such that $\int u = 0$: the first thing is to show that there exists an $x_0$ such that $\frac{\|u_{x_0}'\|}{\|u_{x_0}\|_q} \le \frac{\|u'\|}{\|u\|_q}$. Well, call $M = \operatorname{ess\,sup} u$ and $m = -\operatorname{ess\,inf} u$: the $x_0$ we are looking for is given by
$$x_0 = \frac{M-m}{M+m}.$$
Indeed, we can suppose $M = \frac{1}{1-x_0}$ and $m = \frac{1}{1+x_0}$ (if that is not the case, substitute $u$ with its multiple $\frac{M+m}{2Mm}u$): then we can apply Proposition 2.9 and Remark A.23 to $u$ so as to write
$$\frac{\|u'\|}{\|u\|_q} \ge \frac{\sup u - \inf u}{\|u\|_q} \ge \frac{M+m}{\|u_{x_0}\|_q},$$
which is precisely the required inequality once we observe that the distributional derivative of $u_{x_0}$ is $(M+m)\delta_{x_0}$ (where $\delta_{x_0}$ is the unitary measure concentrated in $x_0$), hence $\|u_{x_0}'\| = M+m$.
The reasoning above is enough to prove that the inf of the functional $\frac{\|u'\|}{\|u\|_q}$ actually equals the inf of the expression $\frac{\|u_{x_0}'\|}{\|u_{x_0}\|_q}$ for $-1 < x_0 < 1$. Now,
$$\frac{\|u_{x_0}'\|}{\|u_{x_0}\|_q} = 2^{1-\frac1q}\,\frac{(1+x_0)^{-1}+(1-x_0)^{-1}}{\left((1+x_0)^{1-q}+(1-x_0)^{1-q}\right)^{\frac1q}} = \frac{2^{2-\frac1q}}{\left((1+x_0)(1-x_0)^q+(1-x_0)(1+x_0)^q\right)^{\frac1q}};$$
if we call $h(x_0)$ such an expression, then $h \to +\infty$ as $x_0 \to \pm1$: this means that $h$ attains a minimum on $(-1,1)$, which will also be the minimum of $\frac{\|u'\|}{\|u\|_q}$.
In conclusion, Proposition 2.10 shows that, although (1.9) does not have a minimum when $p = 1$, the problem can be extended to the space $BV$, in which case the minimizer is given by $u_{x_0}$, where $x_0$ maximizes on $[-1,1]$ the function
$$g(x) = (1-x^2)\left((1-x)^{q-1}+(1+x)^{q-1}\right). \tag{2.6}$$
Observe that $u_{x_0}$ is odd if and only if $x_0 = 0$; we therefore ask: is $x_0 = 0$ the maximum point of $g$? Well, keeping in mind what we have found out in section 2.2, it will not surprise us to discover that the answer actually depends on $q$. If $q \le 3$, then $g$ doesn't have any stationary points except for $x_0 = 0$ (see Proposition 2.11 below), which means that zero is indeed the maximum point (i.e. the minimizer is odd); if $q > 3$, however,
$$g(x) = (1-x^2)\left((1-x)^{q-1}+(1+x)^{q-1}\right) = 2(1-x^2)\left(1+\frac{(q-1)(q-2)}{2}x^2+o(x^2)\right) = 2 + q(q-3)x^2 + o(x^2)$$
has a positive second derivative in zero, which means that the origin is actually a local minimum for $g$, hence the minimizer cannot be odd.
Proposition 2.11. If $q \le 3$, the function $g$ defined in (2.6) doesn't have any stationary points in $[-1,1]$ except for $x_0 = 0$.
Proof. $g$ is even: we can work on $[0,1]$, and then mirror the reasoning for the other half of the domain. Now,
$$g'(x) = (1-x)^q - q(1+x)(1-x)^{q-1} - (1+x)^q + q(1-x)(1+x)^{q-1} = (1-x)^{q-1}\bigl(1-x-(1+x)q\bigr) - (1+x)^{q-1}\bigl(1+x-(1-x)q\bigr):$$
keeping in mind $0 < x < 1$, we want to show that $g'(x) < 0$. Observing that
$$1-x-(1+x)q = 1-q-(1+q)x < 0$$
certainly holds for every $0 < x < 1$, we have that $g'(x) < 0$ reduces to
$$\left(\frac{1-x}{1+x}\right)^{q-1} > \frac{1+x-(1-x)q}{1-x-(1+x)q}. \tag{2.7}$$
For every fixed $0 < x < 1$, the quantity $\frac{1-x}{1+x}$ is positive but smaller than 1: this means that $\left(\frac{1-x}{1+x}\right)^{q-1}$ is decreasing as a function of $q$. On the other hand, the expression
$$\frac{1+x-(1-x)q}{1-x-(1+x)q}$$
is increasing (as a function of $q$) on the interval $\left(\frac{1-x}{1+x}, +\infty\right) \supseteq [1,3]$: this means that inequality (2.7) must essentially be proved with $q = 3$. In this case, the following inequalities are equivalent:
$$\left(\frac{1-x}{1+x}\right)^2 > \frac{4x-2}{-4x-2}$$
$$(1-x)^2(1+2x) > (1+x)^2(1-2x)$$
$$2x^3 > -2x^3;$$
and the last one clearly holds for every $0 < x < 1$, which concludes the proof.
Remark 2.12. If, for every $q \ge 1$, we call $x_0(q)$ the (positive) maximum point of $g$ (we still mean function (2.6)), then $x_0(q) \xrightarrow{q\to\infty} 1$. To see this, observe that $x_0$ must satisfy (2.7) (with $=$ instead of $>$). Now,
$$h(x) = \left(\frac{1+x}{1-x}\right)^{q-1}$$
is convex on $[-1,1)$, hence (if $q > 3$) we have
$$h(x_0) > h(0) + x_0 h'(0) = 1 + 2(q-1)x_0.$$
On the other hand, $q > 3$ also implies
$$1 + 2(q-1)x > \frac{1-x-(1+x)q}{1+x-(1-x)q} \qquad \forall\, x \in \left(0, \frac{q(q-3)}{(q+1)(q-1)}\right);$$
this means
$$x_0(q) \ge \frac{q(q-3)}{(q+1)(q-1)} \xrightarrow{q\to\infty} 1.$$
Remark 2.13. Observe that if $p = 1$ and $q = \infty$ then problem (1.9) does not have a minimum ($c_i(1,\infty) = 1$, but it is never attained); however, $(u_{x_0(q)})_{q\ge1}$ (where $u_{x_0(q)}$ minimizes $\frac{\|u'\|}{\|u\|_q}$) is a "minimizing sequence", meaning that
$$\frac{\|u_{x_0(q)}'\|}{\|u_{x_0(q)}\|_\infty} \xrightarrow{q\to\infty} 1 = c_i(1,\infty).$$
Chapter 3
An auxiliary function
As in Chapter 2, we are focusing on problem (1.9), specifically on whether the minimizer $u_i$ is symmetrical; throughout the following sections, we will develop the results necessary to give a definitive answer to the question, which is the following:
Theorem 3.1. Pick two indexes $p \ge 1$ and $q \ge 1$: the minimizer $u_i$ to problem (1.9) is symmetrical with respect to the origin if and only if $q \le 3p$.
Proof. The “if” part is taken care of in section 3.4, Proposition 3.32. The “only if” part is the object of Proposition 3.25 in section 3.3.
3.1 Introducing the function
In this and in the following sections, $G_{p,q}$ and $J_{p,q}$ will denote the functions
$$G_{p,q}(\mu) = \int_{y_1}^{y_2} \frac{y}{(1+\mu y-|y|^q)^{\frac1p}}\,dy \tag{3.1}$$
and
$$J_{p,q}(\mu) = \int_{y_1}^{y_2} (1+\mu y-|y|^q)^{\frac{1}{p^*}}\,dy \tag{3.2}$$
respectively, where $y_1(\mu) < 0 < y_2(\mu)$ are for every $\mu \in \mathbb{R}$ the two roots of the equation $1+\mu y = |y|^q$.
The fundamental role which the function $J_{p,q}$ plays in the solution of our problem is stated in the following theorem:
Theorem 3.2. Pick two indexes $1 < p < \infty$ and $1 < q < \infty$: then the constant $c_i$ defined in (1.9) equals
$$c_i = 2^{-R}\,(p^*)^{\frac{1}{p^*}}\,q^{\frac1q}\,R^R\,\min J_{p,q},$$
where $p^* = \frac{p}{p-1}$ and $p$ are conjugate exponents, $R = \frac{1}{p^*}+\frac1q$, and $J_{p,q}$ is the function defined in (3.2). Besides, problem (1.9) has a symmetrical minimizer if and only if $\mu = 0$ is the global minimum point for $J_{p,q}$.
Remark 3.3. The importance of Theorem 3.2 lies in the fact that it reduces a variational minimum problem to the study of a function $J_{p,q} : \mathbb{R} \to \mathbb{R}$. Observe, moreover, that it contains a statement of the "surprising" equivalence we mentioned in Remark 1.15. In order to prove Theorem 3.2, we need to study a little more closely equation (1.11). Observe that the equation can be re-written in a somewhat simpler form, once we have rescaled $u_i$:
Proposition 3.4. There exist two positive constants $a$ and $b$ such that the function $y_i(x) = a\,u_i\!\left(\frac xb\right)$ solves on its domain
$$|y|^q + |\dot y|^p = 1 + \mu_i y, \tag{3.3}$$
where $\mu_i = \frac{\beta}{ac}$.

Proof. For every choice of $a, b > 0$, the function $y_i(x) = a\,u_i\!\left(\frac xb\right)$ solves on its domain $[-b,b]$ the differential equation
$$\frac{b^p|\dot y|^p}{a^p p^*} + \frac{\lambda_i|y|^q}{a^q q} = \frac{\beta}{a}y + c$$
(where we have called $c = \frac{\lambda_i}{2}\left(1-\frac1p+\frac1q\right)$); we simply have to take $a^q = \frac{\lambda_i}{cq}$ and $b^p = a^p c\,p^*$ in order to obtain (3.3).
Let us study differential equations of the form (3.3). For every $\mu \in \mathbb{R}$, consider the Cauchy problem
$$\begin{cases}\dot y = (1+\mu y-|y|^q)^{\frac1p}\\ y(0) = 0\end{cases}, \tag{3.4}$$
and call $y_{(\mu)} : (x_1,x_2) \to (y_1,y_2)$ the maximal solution.
Proposition 3.5. The function $y_{(\mu)}$ is bounded and strictly increasing. Its domain $(x_1,x_2)$ is a bounded interval; we can extend $y_{(\mu)}$ to a function of $C^1([x_1,x_2])$ by imposing $y_{(\mu)}(x_1) = y_1$ and $y_{(\mu)}(x_2) = y_2$ (where, as above, $y_1 < 0 < y_2$ are the roots of $1+\mu y = |y|^q$). Moreover, the following equalities are true:
$$\int_{x_1}^{x_2} y_{(\mu)} = \int_{y_1}^{y_2}\frac{y}{(1+\mu y-|y|^q)^{\frac1p}}\,dy = G_{p,q}(\mu), \tag{3.5}$$
$$\|\dot y_{(\mu)}\|^p_{L^p(x_1,x_2)} = \int_{y_1}^{y_2}(1+\mu y-|y|^q)^{\frac{1}{p^*}}\,dy = J_{p,q}(\mu), \tag{3.6}$$
$$x_2-x_1 = \int_{y_1}^{y_2}\frac{1}{(1+\mu y-|y|^q)^{\frac1p}}\,dy. \tag{3.7}$$
Proof. The first thing to do is realizing that $y_{(\mu)}$'s limit as $x$ approaches $x_1$ must equal $y_1$: indeed, if $x_1 = -\infty$, then $\dot y_{(\mu)}(x) \xrightarrow{x\to x_1} 0$, which of course implies $y_{(\mu)}(x) \xrightarrow{x\to x_1} y_1$; if, on the other hand, $x_1$ is finite, then Proposition A.26 assures that the same conclusion must be reached. An analogous reasoning applies to $x_2$, hence $y_{(\mu)}(x) \xrightarrow{x\to x_2} y_2$. This implies that $y_{(\mu)}$ is actually a diffeomorphism between $(x_1,x_2)$ and $(y_1,y_2)$; we can therefore carry out the change of variables $y = y_{(\mu)}(x)$ in order to more explicitly express as functions of $\mu$ all the integrals containing $y_{(\mu)}$. For instance:
$$\int_{x_1}^{x_2} y_{(\mu)} = \int_{x_1}^{x_2} y_{(\mu)}(x)\,\frac{\dot y_{(\mu)}(x)}{\left(1+\mu y_{(\mu)}(x)-|y_{(\mu)}(x)|^q\right)^{\frac1p}}\,dx = \int_{y_1}^{y_2}\frac{y}{(1+\mu y-|y|^q)^{\frac1p}}\,dy,$$
$$\|\dot y_{(\mu)}\|^p_{L^p(x_1,x_2)} = \int_{x_1}^{x_2}|\dot y_{(\mu)}|^p = \int_{x_1}^{x_2}\frac{1+\mu y_{(\mu)}-|y_{(\mu)}|^q}{\left(1+\mu y_{(\mu)}-|y_{(\mu)}|^q\right)^{\frac1p}}\,\dot y_{(\mu)} = \int_{y_1}^{y_2}(1+\mu y-|y|^q)^{\frac{1}{p^*}}\,dy,$$
$$x_2-x_1 = \int_{x_1}^{x_2}1\,dx = \int_{x_1}^{x_2}\frac{\dot y_{(\mu)}(x)}{\left(1+\mu y_{(\mu)}(x)-|y_{(\mu)}(x)|^q\right)^{\frac1p}}\,dx = \int_{y_1}^{y_2}\frac{1}{(1+\mu y-|y|^q)^{\frac1p}}\,dy,$$
which are the required equalities. Observe that, since the last integral is finite for every $\mu$, the interval $(x_1,x_2)$ must indeed be bounded. If we extend $y_{(\mu)}$ to all of $[x_1,x_2]$ by imposing $y_{(\mu)}(x_1) = y_1$ and $y_{(\mu)}(x_2) = y_2$, the resulting function will undoubtedly be continuous; as a matter of fact, it is of class $C^1$, since
$$\lim_{x\to x_{1,2}} \dot y_{(\mu)}(x) = \lim_{y\to y_{1,2}} (1+\mu y-|y|^q)^{\frac1p} = 0.$$
Proposition 3.6. Pick $u_i$ an increasing minimizer for (1.9), and let the function $y_i$ and the constant $\mu_i$ be defined as in Proposition 3.4. Then, $y_{(\mu_i)}$ (the maximal solution to (3.4) when $\mu = \mu_i$) is a translate of $y_i$ (along the $x$ axis).

Proof. Let $x_0$ be the point such that $y_i(x_0) = 0$: then $y_i(x+x_0)$ (which, remember, is a strictly increasing function) solves problem (3.4) (with $\mu = \mu_i$) on the interior of its domain, and equals $y_{1,2}$ at the endpoints; because $y_{(\mu_i)}$ is the maximal solution, this implies $y_i(x+x_0) = y_{(\mu_i)}(x)$ for every $x \in [x_1,x_2]$.
Proposition 3.7. For every $\mu$, the function $y_{(\mu)}$ solves on $[x_1(\mu), x_2(\mu)]$ the differential equation
$$p^*\left(\dot y^{p-1}\right)' + q\,y|y|^{q-2} = \mu.$$
Besides, the equalities
$$\|y_{(\mu)}\|_q^q = \frac{p^*}{q}J_{p,q}(\mu) + \frac{\mu}{q}G_{p,q}(\mu) \tag{3.8}$$
and
$$x_2-x_1 = \left(\frac{p^*}{q}+1\right)J_{p,q}(\mu) + \mu\left(\frac1q-1\right)G_{p,q}(\mu) \tag{3.9}$$
hold.
Proof. The function $y_{(\mu)}$ is of class $C^\infty$ on $(x_1(\mu), x_2(\mu))$, and satisfies $\dot y^p + |y|^q = 1 + \mu y$; by differentiating both sides of this equation we obtain
$$\mu\dot y = p\,\dot y^{p-1}\ddot y + q\,y|y|^{q-2}\dot y = \dot y\left(\frac{p}{p-1}(p-1)\dot y^{p-2}\ddot y + q\,y|y|^{q-2}\right) = \dot y\left(p^*(\dot y^{p-1})' + q\,y|y|^{q-2}\right),$$
i.e. $\mu - q\,y_{(\mu)}|y_{(\mu)}|^{q-2}$ is the derivative on $(x_1,x_2)$ of the function $p^*\dot y_{(\mu)}^{p-1}$ (because $\dot y_{(\mu)} > 0$ on this interval, so we can divide the first and last member by $\dot y_{(\mu)}$). Since, however, this derivative is continuous on $[x_1,x_2]$, the same conclusion holds on the closed interval.

In order to obtain (3.8), we now simply have to multiply both sides of the equation $q\,y_{(\mu)}|y_{(\mu)}|^{q-2} = \mu - p^*(\dot y_{(\mu)}^{p-1})'$ by $y_{(\mu)}$, and then integrate, remembering that in (3.5) and (3.6) we found out that $\int_{x_1}^{x_2} y_{(\mu)} = G_{p,q}(\mu)$ and $\|\dot y_{(\mu)}\|_p^p = J_{p,q}(\mu)$:
$$q\|y_{(\mu)}\|_q^q = \int_{x_1}^{x_2} q\,y_{(\mu)}|y_{(\mu)}|^{q-2}\,y_{(\mu)} = \int_{x_1}^{x_2}\left(\mu y_{(\mu)} - p^*(\dot y_{(\mu)}^{p-1})'\,y_{(\mu)}\right) = \mu\int y_{(\mu)} + p^*\int \dot y_{(\mu)}^p = \mu\,G_{p,q}(\mu) + p^*J_{p,q}(\mu).$$
As for (3.9), we simply have to integrate $\dot y_{(\mu)}^p + |y_{(\mu)}|^q = 1 + \mu y_{(\mu)}$ between $x_1$ and $x_2$: we will obtain
$$J_{p,q}(\mu) + \|y_{(\mu)}\|_q^q = x_2 - x_1 + \mu\,G_{p,q}(\mu),$$
which gives us the required equality as soon as we substitute $\|y_{(\mu)}\|_q^q$ with the expression we wrote in (3.8).
Remark 3.8. If $\mu$ is such that $G_{p,q}(\mu) = 0$, then relations (3.8) and (3.9) reduce to
$$\|y_{(\mu)}\|_q^q = \frac{p^*}{q}J_{p,q}(\mu), \qquad x_2-x_1 = p^*R\,J_{p,q}(\mu)$$
(where $R = \frac{1}{p^*}+\frac1q$).
Proposition 3.9. For every $\mu \in \mathbb{R}$, let $u_{(\mu)} : [-1,1] \to \mathbb{R}$ be the function
$$u_{(\mu)}(x) = y_{(\mu)}(\ell x + m), \tag{3.10}$$
where $\ell = \frac{x_2-x_1}{2}$ and $m = \frac{x_2+x_1}{2}$. Suppose $G_{p,q}(\mu) = 0$: then
$$\frac{\|u_{(\mu)}'\|_{L^p(-1,1)}}{\|u_{(\mu)}\|_{L^q(-1,1)}} = 2^{-R}\,(p^*)^{\frac{1}{p^*}}\,q^{\frac1q}\,R^R\,J_{p,q}(\mu). \tag{3.11}$$
Consequently, for every fixed $1 < p < \infty$ and $1 < q < \infty$, the value $\mu_i$ minimizes $J_{p,q}(\mu)$