
The Inverse Power Method for the p(x)-Laplacian Problem


Academic year: 2021



The inverse power method for the p(x)-Laplacian problem

Manuscript Number: JOMP-D-14-00262R1
Article Type: Original Research
Keywords: p(x)-Laplacian; eigenpairs; inverse power method
Corresponding Author: Marco Caliari, Ph.D., University of Verona, Verona, Italy
First Author: Marco Caliari, Ph.D.
Order of Authors: Marco Caliari, Ph.D.; Simone Zuccher, Ph.D.


Link to official published version: https://www.sciencedirect.com/science/article/pii/S0377042716302989?via%3Dihub


Abstract We present an inverse power method for the computation of the first homogeneous eigenpair of the p(x)-Laplacian problem. The operators are discretized by the finite element method. The inner minimization problems are solved by a globally convergent inexact Newton method. Numerical comparisons are made, in one- and two-dimensional domains, with other results present in the literature for the constant case p(x) ≡ p and with other minimization techniques (namely, the nonlinear conjugate gradient) for the variable p(x) case.

Keywords p(x)-Laplacian · eigenpairs · inverse power method




The inverse power method for the p(x)-Laplacian problem

M. Caliari · S. Zuccher

November 20, 2014

1 Introduction

In this paper we consider the generalization of the p-Laplacian eigenvalue problem with homogeneous Dirichlet boundary conditions

−∆_p u(x) = η_p |u(x)|^{p−2} u(x)    (1)

where ∆_p u(x) = div(|∇u(x)|^{p−2} ∇u(x)), Ω is a bounded domain in R^d, u ∈ W_0^{1,p}(Ω) and p > 1, to the case of a variable exponent p(x).

The p-Laplacian operator appears in several mathematical models that describe nonlinear problems in physics and mechanics [7, 9]. Examples include fluid dynamics [11], modeling of non-Newtonian fluids and glaciology [4, 10, 12, 20, 27, 28], turbulent flows [15], climatology [16], nonlinear diffusion (where the equation is called the N-diffusion equation, see Ref. [29] for the original article and Ref. [21] for some current developments), flows through porous media [30], power law materials [5] and torsional creep [24].

The p-Laplacian operator where p is replaced by p(x), 1 < p⁻ ≤ p(x) ≤ p⁺ < +∞, is deeply related to generalized Lebesgue and Sobolev spaces, which have been vigorously studied and whose theory is ripe for applications to PDEs. The common assumption in the literature is that the exponent p(x) is a measurable function and that 1/p(x) is globally log-Hölder continuous [17]. There have been many contributions to nonlinear elliptic problems associated with the p(x)-Laplacian from various viewpoints (see Ref. [1, 2, 14] and Ref. [22] for a survey), whereas there are far fewer contributions on parabolic problems [3] and eigenvalue problems [18].

If p is constant, it is well known that the smallest eigenvalue satisfies

η_p = inf_{u ∈ W_0^{1,p}(Ω), u ≠ 0} ∫_Ω |∇u|^p dx / ∫_Ω |u|^p dx.    (2)

M. Caliari · S. Zuccher
Department of Computer Science, University of Verona, Strada Le Grazie 15, 37134 Verona, Italy
M. Caliari, E-mail: marco.caliari@univr.it
S. Zuccher, E-mail: simone.zuccher@univr.it


The trivial generalization of quotient (2) to the case p = p(x) yields a problem in which the homogeneity is lost, that is, if u is a minimizer then ωu, ω ≠ 0, is not. In Ref. [19], the homogeneity is restored by considering

λ = inf_{u ∈ W_0^{1,p(x)}(Ω), u ≠ 0} ‖∇u‖_{p(x)} / ‖u‖_{p(x)},

where the norm ‖·‖_{p(x)} is the so-called Luxemburg norm

‖u‖_{p(x)} = inf { γ > 0 : ∫_Ω |u(x)/γ|^{p(x)} dx/p(x) ≤ 1 }.    (3)

The use of p(x)^{−1} dx (rather than the classical dx) just simplifies the equations a little (see Ref. [19]). The corresponding Euler–Lagrange equation is

−div( |∇u(x)/‖∇u‖_{p(x)}|^{p(x)−2} ∇u(x)/‖∇u‖_{p(x)} ) = λ T |u(x)/‖u‖_{p(x)}|^{p(x)−2} u(x)/‖u‖_{p(x)},    (4)

where

T = ∫_Ω |∇u(x)/‖∇u‖_{p(x)}|^{p(x)} dx / ∫_Ω |u(x)/‖u‖_{p(x)}|^{p(x)} dx.

We notice that in the constant case p(x) ≡ p, since p‖u‖_{p(x)}^p = ‖u‖_{L^p}^p,

(λ, u) eigenpair for (4) ⇔ (λ^p, u/p^{1/p}) eigenpair for (1)    (5)

with ‖u‖_{p(x)} = ‖u/p^{1/p}‖_{L^p} = 1.

Although several numerical methods for the eigenvalue problem with constant p are available, see Ref. [7, 9, 13], to the knowledge of the authors the first numerical method, based on the nonlinear conjugate gradient method, for computing

λ² = Λ = inf_{u ∈ W_0^{1,p(x)}(Ω), u ≠ 0} ‖∇u‖²_{p(x)} / ‖u‖²_{p(x)}    (6)

and the corresponding minimizer was proposed by the first author in Ref. [6], where only the case p(x) ≥ 2 was considered. Here we use a preconditioned quadratic model for the minimization problem, either with the exact Hessian, if p(x) ≥ 2, or with a modified Hessian, otherwise. The new method turns out to be more general, more robust and faster (see the numerical experiments in Table 3, Section 3) than the previous one.



2 The inverse power method for a nonlinear eigenproblem

Given R: R^n → R₊ and S: R^n → R₊ convex, differentiable, even and positively 2-homogeneous functionals, with the further assumption that S(u) = 0 only if u = 0, the critical points u* of F(u) = R(u)/S(u) satisfy

∇_u F(u*) = ∇_u R(u*) − (R(u*)/S(u*)) ∇_u S(u*) = 0.

If we define r(u) = ∇_u R(u), s(u) = ∇_u S(u) and Λ* = R(u*)/S(u*), then the critical point u* satisfies the nonlinear eigenproblem

r(u*) − Λ* s(u*) = 0.    (7)

If R(u) = ⟨u, Au⟩, where A: R^n → R^n is linear, and S(u) = ⟨u, u⟩, ⟨·,·⟩ being the scalar product in R^n × R^n, the standard linear eigenproblem is retrieved. The generalization of the inverse power method for the computation of the smallest eigenvalue is the following algorithm:

1. take ũ⁰ random and compute u⁰ = ũ⁰/S(ũ⁰)^{1/2} and Λ⁰ = F(u⁰) = R(u⁰)
2. repeat
   – ũ^k = arg min_u { R(u) − ⟨u, s(u^{k−1})⟩ }
   – u^k = ũ^k/S(ũ^k)^{1/2}
   – Λ^k = F(u^k) = R(u^k)
   until convergence

It is proven in Ref. [23] that the sequence {(u^k, Λ^k)}_k produced by the algorithm satisfies F(u^k) < F(u^{k−1}) and converges to an eigenpair (u*, Λ*) of problem (7), with u* a critical point of F.
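In the linear case R(u) = ⟨u, Au⟩, S(u) = ⟨u, u⟩, the inner minimization arg min_u {⟨u, Au⟩ − 2⟨u, u^{k−1}⟩} is solved exactly by Au = u^{k−1}, and the algorithm above reduces to classical inverse iteration. A minimal sketch in Python (an illustrative stand-in: the codes used in the paper are written in GNU Octave and FreeFem++):

```python
import numpy as np

def inverse_power(A, tol=1e-10, max_it=200, seed=0):
    """Inverse power method of Ref. [23] specialized to R(u) = <u, A u>,
    S(u) = <u, u>: the inner problem argmin { R(u) - <u, s(u_prev)> }
    has the closed-form solution A u = u_prev (classical inverse
    iteration), so no iterative inner solver is needed here."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(A.shape[0])
    u /= np.linalg.norm(u)             # u^0 = u~^0 / S(u~^0)^{1/2}
    lam = u @ A @ u                    # Lambda^0 = R(u^0)
    for _ in range(max_it):
        u_new = np.linalg.solve(A, u)  # exact inner minimization
        u_new /= np.linalg.norm(u_new)
        lam_new = u_new @ A @ u_new
        if abs(lam_new - lam) <= tol * abs(lam_new):  # exit criterion (9)
            return lam_new, u_new
        u, lam = u_new, lam_new
    return lam, u

A = np.diag([3.0, 1.0, 5.0])
lam, u = inverse_power(A)   # lam approximates the smallest eigenvalue of A
```

For the genuinely nonlinear functionals used below, only the inner solve changes: the closed-form step is replaced by the iterative minimization of Section 2.3.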

Let now u(x) be a linear combination of basis functions,

u(x) = ∑_{j=1}^n u_j φ_j(x),

with u = (u₁, ..., u_n)^T the vector of coefficients, and let us apply the algorithm described above to the case

R(u) = ‖∑_{j=1}^n u_j ∇φ_j(·)‖²_{p(x)},  S(u) = ‖∑_{j=1}^n u_j φ_j(·)‖²_{p(x)}.

In our numerical experiments, the set of functions {φ_j(x)}_{j=1}^n will be a proper set of finite element basis functions. With abuse of notation, we will write for short R(u) = ‖∇u‖²_{p(x)} and S(u) = ‖u‖²_{p(x)}.

We recall that the algorithm does not guarantee convergence to the smallest eigenvalue. On the other hand, it is possible to apply it to different initial guesses and then take the smallest computed eigenvalue.



2.1 Initial values and exit criteria

Although ũ⁰ can be chosen randomly, it is clear that the closer it is to u*, the faster the algorithm converges. In order to speed up the convergence, we use a continuation technique over p(x), starting from p(x) ≡ 2, for which the solution is known at least for common domains Ω. By employing the convex combination

p_m(x) = 2·(1 − θ_m) + p(x)·θ_m,  θ_m = √(m/M),  m = 0, 1, ..., M,    (8)

the m-th problem is solved by providing as initial condition the solution of the (m−1)-th problem.
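A sketch of the continuation sequence (8), with p(x) represented by its nodal values (the array representation is an assumption for illustration):

```python
import numpy as np

def continuation_exponents(p, M):
    """Convex combinations (8): p_m(x) = 2*(1 - theta_m) + p(x)*theta_m
    with theta_m = sqrt(m/M), so that the early problems stay close to
    the easy case p(x) = 2."""
    return [2.0 * (1.0 - np.sqrt(m / M)) + p * np.sqrt(m / M)
            for m in range(M + 1)]

p = np.array([1.5, 4.0, 10.0])   # nodal values of p(x)
seq = continuation_exponents(p, 4)
# seq[0] is identically 2, seq[-1] coincides with p(x)
```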

For the minimization problem (called the inner problem)

ũ^k = arg min_u { R(u) − ⟨u, s(u^{k−1})⟩ }, given u^{k−1},

we use an iterative method as well. Since the inverse power method, at convergence, ensures

r(u^{k−1}) − Λ^{k−1} s(u^{k−1}) ≈ 0  ⇒  r(u^{k−1}/Λ^{k−1}) − s(u^{k−1}) ≈ 0,

and (at each iteration)

r(ũ^k) − s(u^{k−1}) = 0,

a good choice for the initial value of ũ^k is u^{k−1}/Λ^{k−1}. By the proof of Lemma 3.1 in Ref. [23], we know that the descent in F is guaranteed not only for the exact solution of the inner problem but also for any vector ū^k which satisfies the condition

R(ū^k) − ⟨ū^k, s(u^{k−1})⟩ < R(u^{k−1}/F(u^{k−1})) − ⟨u^{k−1}/F(u^{k−1}), s(u^{k−1})⟩.

Since F(u^{k−1}) = Λ^{k−1}, u^{k−1}/F(u^{k−1}) is precisely the initial value for ũ^k suggested above. Therefore, in principle, starting from this value, a single step of a descent method for the inner problem already guarantees the descent in F. More realistically, this means that far from convergence (for instance during the continuation technique) it makes no sense to solve the inner problem too accurately. For this reason, an exit criterion could combine a relatively large tolerance with a relatively small maximum number of iterations.

A classical exit criterion for the inverse power method is

|Λ^k − Λ^{k−1}| ≤ tol·Λ^k    (9)

for a given tolerance tol. As soon as it is satisfied, we assume that Λ^k ≈ Λ* and thus

r(u^k) − Λ^k s(u^k) ≈ 0.    (10)

We call the left-hand side the residual; clearly, its infinity norm could be used as a different exit criterion.


2.2 Approximation of the Luxemburg norm

If u(x) is not zero almost everywhere, then its Luxemburg norm γ is implicitly defined by

F(u, γ) = ∫_Ω |u(x)/γ|^{p(x)} dx/p(x) − 1 = 0.    (11)

Hereafter we remove the integration domain Ω from the notation. The derivative with respect to γ of F(u, γ) is

∂_γ F(u, γ) = −(1/γ) ∫ |u(x)/γ|^{p(x)} dx    (12)

and thus Newton's method can be easily applied. For a given u(x), F(u, γ) is a monotonically decreasing convex C² function of γ. This guarantees the convergence of Newton's method whenever the initial guess γ₀ > 0 is chosen such that F(u, γ₀) ≥ 0. For the initial guess γ₀, we use the following estimates:

γ₀ = ( ∫_Ω |u(x)|^{p(x)}/p(x) dx )^{1/p_max}  if ∫_Ω |u(x)|^{p(x)}/p(x) dx ≥ 1,
γ₀ = ( ∫_Ω |u(x)|^{p(x)}/p(x) dx )^{1/p_min}  if ∫_Ω |u(x)|^{p(x)}/p(x) dx < 1,

where p_max = max p(x) and p_min = min p(x). In fact, if we consider the first case,

F(u, γ₀) = ∫_Ω |u(x)/γ₀|^{p(x)} dx/p(x) − 1 ≥ ∫_Ω |u(x)|^{p(x)}/(γ₀^{p_max} p(x)) dx − 1
         = (1/∫_Ω |u(x)|^{p(x)}/p(x) dx) ∫_Ω |u(x)|^{p(x)}/p(x) dx − 1 = 0.
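On a uniform one-dimensional grid with midpoint quadrature (the discretization used later in Section 3.2; the discrete setup here is an illustrative assumption), Newton's method (11)–(12) with the initial guess above can be sketched as:

```python
import numpy as np

def luxemburg_norm(u, p, h, tol=1e-14, max_it=100):
    """Luxemburg norm (3) by Newton's method on (11)-(12); u and p hold
    values at the quadrature nodes, h is the (uniform) cell width."""
    I = np.sum(h * np.abs(u) ** p / p)
    # initial guess gamma0 guaranteeing F(u, gamma0) >= 0
    gamma = I ** (1.0 / p.max()) if I >= 1.0 else I ** (1.0 / p.min())
    for _ in range(max_it):
        w = np.abs(u / gamma) ** p
        F = np.sum(h * w / p) - 1.0
        dF = -np.sum(h * w) / gamma        # derivative (12)
        gamma_new = gamma - F / dF
        if abs(gamma_new - gamma) <= tol * gamma:
            return gamma_new
        gamma = gamma_new
    return gamma

# check against constant p(x) = 2, where ||u||_{p(x)} = ||u||_{L^2}/sqrt(2)
x = np.linspace(0.5 / 100, 1.0 - 0.5 / 100, 100)   # midpoints of (0, 1)
u = np.sin(np.pi * x)
g = luxemburg_norm(u, 2.0 * np.ones_like(x), 1.0 / 100)
```

For p(x) ≡ 2 the result can be checked analytically: ∫(u/γ)²/2 dx = 1 gives γ = ‖u‖_{L²}/√2, which is 1/2 for u = sin(πx) on (0, 1).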

2.3 The inner problem

In the inverse power method, we need to solve the inner problem

ũ^k = arg min_u f(u) = arg min_u { R(u) − ⟨u, s(u^{k−1})⟩ }, given u^{k−1}.

We first compute ⟨v, s(u)⟩ = ⟨v, ∇_u S(u)⟩ for given vectors u and v. Since γ(u) = ‖u‖_{p(x)} is implicitly defined by F(u, γ(u)) = 0, the differentiation of implicit functions leads to

⟨v, ∇_u F(u, γ(u))⟩ = ⟨v, ∂_u F(u, γ(u))⟩ + ∂_γ F(u, γ(u)) ⟨v, ∇_u γ(u)⟩ = 0,

from which

⟨v, ∇_u ‖u‖_{p(x)}⟩ = ⟨v, ∇_u γ(u)⟩ = −⟨v, ∂_u F(u, γ(u))⟩ / ∂_γ F(u, γ(u)).

By recalling that, in general,

∂_{u_i} ( ∫ g(u(x)) dx ) = ∫ g′(u(x)) φ_i(x) dx,

(8)

for v(x) = ∑_{j=1}^n v_j φ_j(x) we have

⟨v, ∂_u ∫ g(u(x)) dx⟩ = ∫ g′(u(x)) v(x) dx

and thus

⟨v, ∂_u F(u, γ)⟩ = ∫ |u(x)/γ|^{p(x)−2} (u(x)/γ)(v(x)/γ) dx.

By employing (12) we get

⟨v, ∇_u ‖u‖_{p(x)}⟩ = ∫ |u(x)/‖u‖_{p(x)}|^{p(x)−2} (u(x)/‖u‖_{p(x)}) v(x) dx / ∫ |u(x)/‖u‖_{p(x)}|^{p(x)} dx,

from which

⟨v, s(u)⟩ = ⟨v, 2‖u‖_{p(x)} ∇_u ‖u‖_{p(x)}⟩ = 2 ∫ |u(x)/‖u‖_{p(x)}|^{p(x)−2} u(x) v(x) dx / ∫ |u(x)/‖u‖_{p(x)}|^{p(x)} dx.    (13)

Given u^{k−1} of unitary Luxemburg norm, the problem reduces to finding the minimizer of

f(u) = R(u) − ⟨u, s(u^{k−1})⟩ = ‖∇u‖²_{p(x)} − 2 ∫ |u^{k−1}(x)|^{p(x)−2} u^{k−1}(x) u(x) dx / ∫ |u^{k−1}(x)|^{p(x)} dx.    (14)

We use a line search method based on the sufficient decrease condition. If u_c is the solution at the current iteration, its update is

u₊ = u_c + δd,    (15)

where d is a descent direction, i.e. ⟨∇_u f(u_c), d⟩ < 0, and δ is such that

f(u_c + δd) < f(u_c) + αδ⟨∇_u f(u_c), d⟩    (16)

with α = 10⁻⁴. Condition (16) is usually called Armijo's rule.
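A generic backtracking sketch of (15)–(16); plain step halving is used here instead of the cubic polynomial model adopted in the paper (see Section 2.3.1):

```python
import numpy as np

def armijo_line_search(f, grad_f, u_c, d, alpha=1e-4, delta=1.0,
                       rho=0.5, max_back=30):
    """Reduce the step delta until the sufficient decrease condition
    (16) holds; d must be a descent direction for f at u_c."""
    f_c = f(u_c)
    slope = grad_f(u_c) @ d            # <grad f(u_c), d>, must be < 0
    assert slope < 0, "d is not a descent direction"
    for _ in range(max_back):
        if f(u_c + delta * d) < f_c + alpha * delta * slope:
            return u_c + delta * d, delta
        delta *= rho                   # backtracking
    raise RuntimeError("line search failed")

# toy quadratic f(u) = 0.5*<u, u> with steepest descent direction
f = lambda u: 0.5 * (u @ u)
grad = lambda u: u
u0 = np.array([1.0, -2.0])
u1, step = armijo_line_search(f, grad, u0, -grad(u0))
```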

2.3.1 The quadratic model for the inner problem, p(x) ≥ 2

We first consider the case p(x) ≥ 2 and approximate f(u) in (14) with the quadratic model

f(u) ≈ f(u_c) + ⟨∇_u f(u_c), u − u_c⟩ + (1/2)⟨u − u_c, H(u_c)(u − u_c)⟩,

where H(u_c) is the exact Hessian of f at the current iterate u_c. For p(x) ≥ 2, R(u) is twice continuously differentiable and convex, i.e. the Hessian of f is symmetric positive definite. Therefore

∇_u f(u₊) ≈ ∇_u f(u_c) + H(u_c)(u₊ − u_c) = 0  ⇔  u₊ − u_c = −H(u_c)⁻¹ ∇_u f(u_c)


and hence the descent direction is d = −H(u_c)⁻¹ ∇_u f(u_c). If condition (16) is not satisfied for a given δ, then δ is repeatedly reduced (backtracking) until the condition is satisfied, using a quite standard cubic polynomial model for p(δ) = f(u_c + δd) (see Ref. [26]).

Let us compute ∇_u f(u) and H(u)v, for a given v ∈ R^n. With the arguments used to obtain (13), we immediately have

r_i(u) = ∂_{u_i} R(u) = 2‖∇u‖_{p(x)} ∫ |∇u(x)/K|^{p(x)−2} (∇u(x)/K)·∇φ_i(x) dx / ∫ |∇u(x)/K|^{p(x)} dx,    (17)

where here · denotes the scalar product in R^d × R^d. Since u^{k−1} has unitary Luxemburg norm,

∂_{u_i} f(u) = 2K ∫ |∇u/K|^{p−2} (∇u/K)·∇φ_i / ∫ |∇u/K|^p − 2 ∫ |u^{k−1}|^{p−2} u^{k−1} φ_i / ∫ |u^{k−1}|^p,    (18)

where we set K = ‖∇u‖_{p(x)} and omitted the dependency on x in the integrals in order to reduce the notation. By introducing

N_i(u) = ∫ |∇u/K|^{p−2} ∇u·∇φ_i,  D(u) = ∫ |∇u/K|^p,

the i-th component of H(u)v is

(H(u)v)_i = 2 [ D(u)⟨v, ∇_u N_i(u)⟩ − N_i(u)⟨v, ∇_u D(u)⟩ ] / D(u)²,

with

⟨v, ∇_u N_i(u)⟩ = ∫ (p−2)|∇u/K|^{p−4} (∇u/K)·[(∇v K − ∇u ⟨v, ∇_u K⟩)/K²] ∇u·∇φ_i + ∫ |∇u/K|^{p−2} ∇v·∇φ_i,
⟨v, ∇_u D(u)⟩ = ∫ p|∇u/K|^{p−2} (∇u/K)·[(∇v K − ∇u ⟨v, ∇_u K⟩)/K²].

By using again (13), the inner product ⟨v, ∇_u K⟩ can be recast as

⟨v, ∇_u K⟩ = ∫ |∇u/K|^{p−2} (∇u/K)·∇v / D(u)


and the i-th component of H(u)v becomes

(H(u)v)_i = 2 ∫ (p−2)|∇u/K|^{p−4} ((∇u/K)·∇v)((∇u/K)·∇φ_i) / D(u)
  − 2 [∫ |∇u/K|^{p−2} (∇u/K)·∇v][∫ (p−2)|∇u/K|^{p−2} (∇u/K)·∇φ_i] / D(u)²
  + 2 ∫ |∇u/K|^{p−2} ∇v·∇φ_i / D(u)
  − 2 [∫ |∇u/K|^{p−2} (∇u/K)·∇φ_i][∫ p|∇u/K|^{p−2} (∇u/K)·∇v] / D(u)²
  + 2 [∫ |∇u/K|^{p−2} (∇u/K)·∇φ_i][∫ |∇u/K|^{p−2} (∇u/K)·∇v][∫ p|∇u/K|^p] / D(u)³.    (19)

The reason for computing the action of H(u) on a vector is that H(u) is not a sparse matrix, even if finite elements are used. Therefore, the descent direction −H(u_c)⁻¹∇_u f(u_c) is computed through an iterative method such as the preconditioned conjugate gradient method, which only requires the action of H(u_c) on a vector v. A good choice for the preconditioner in computing ũ^k can be suggested by the modified gradient

∂_{u_i} f_c(u) = 2 ∫ |∇u_c/K_c|^{p−2} ∇u·∇φ_i / ∫ |∇u_c/K_c|^p − 2 ∫ |u^{k−1}|^{p−2} u^{k−1} φ_i / ∫ |u^{k−1}|^p

and the corresponding modified Hessian H_c, whose entries are

2 ∫ |∇u_c/K_c|^{p−2} ∇φ_j·∇φ_i / ∫ |∇u_c/K_c|^p.    (20)

In fact, this preconditioner turns out to be sparse when the basis functions {φ_j(x)} correspond to a finite element method. It is possible to use a fixed preconditioner for each iteration of the inverse power method (instead of one for each iteration of the inner problem), starting from the modified gradient

∂_{u_i} f^{k−1}(u) = 2 ∫ |∇u^{k−1}/K_{k−1}|^{p−2} ∇u·∇φ_i / ∫ |∇u^{k−1}/K_{k−1}|^p − 2 ∫ |u^{k−1}|^{p−2} u^{k−1} φ_i / ∫ |u^{k−1}|^p

and the corresponding modified Hessian H_{k−1}, whose entries are

2 ∫ |∇u^{k−1}/K_{k−1}|^{p−2} ∇φ_j·∇φ_i / ∫ |∇u^{k−1}/K_{k−1}|^p.    (21)


Once the modified Hessian is computed, it is factorized by Cholesky's method and the triangular factors are inverted in order to apply the preconditioner within the conjugate gradient iterations.
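A matrix-free preconditioned conjugate gradient sketch: the operator is passed only through its action v ↦ Hv, as required when H is dense, while the preconditioner solve stands in for applying the inverted Cholesky factors of the sparse modified Hessian (20)–(21). The toy operator and the Jacobi-type preconditioner below are illustrative assumptions:

```python
import numpy as np

def pcg(matvec, b, prec_solve, tol=1e-10, max_it=200):
    """Preconditioned conjugate gradient for H x = b, with H given only
    through matvec (its action on a vector) and the preconditioner given
    through prec_solve (application of an approximate inverse)."""
    x = np.zeros_like(b)
    r = b.copy()                  # residual b - H x for x = 0
    z = prec_solve(r)
    d = z.copy()
    rz = r @ z
    for _ in range(max_it):
        Hd = matvec(d)
        alpha = rz / (d @ Hd)
        x += alpha * d
        r -= alpha * Hd
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = prec_solve(r)
        rz_new = r @ z
        d = z + (rz_new / rz) * d
        rz = rz_new
    return x

H = np.array([[4.0, 1.0], [1.0, 3.0]])          # toy SPD operator
b = np.array([1.0, 2.0])
x = pcg(lambda v: H @ v, b, lambda r: r / np.diag(H))
```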

2.3.2 The quadratic model for the inner problem, p(x) ≥ p⁻ > 1

If, for some values of x, p(x) is less than two and ∇u(x) is zero, then the first and the third terms in the Hessian (19) are not defined, while the gradient (18) is still well defined, in the sense that

|∇u(x)|^{p(x)−2} ∇u(x)·∇φ_i(x) = 0  if ∇u(x) = 0.

We can consider (see Ref. [9]) the modified Hessian arising from

N_i^{ε_n}(u) = ∫ (ε_n² + |∇u/K|²)^{(p−2)/2} ∇u·∇φ_i,  D(u) = ∫ |∇u/K|^p,

where ε_n is a small quantity, possibly depending on the number n of basis functions φ_j(x):

(H(u)v)_i = 2 ∫ (p−2)(ε_n² + |∇u/K|²)^{(p−4)/2} ((∇u/K)·∇v)((∇u/K)·∇φ_i) / D(u)
  − 2 [∫ |∇u/K|^{p−2} (∇u/K)·∇v][∫ (p−2)(ε_n² + |∇u/K|²)^{(p−2)/2} (∇u/K)·∇φ_i] / D(u)²
  + 2 ∫ (ε_n² + |∇u/K|²)^{(p−2)/2} ∇v·∇φ_i / D(u)
  − 2 [∫ (ε_n² + |∇u/K|²)^{(p−2)/2} (∇u/K)·∇φ_i][∫ p|∇u/K|^{p−2} (∇u/K)·∇v] / D(u)²
  + 2 [∫ (ε_n² + |∇u/K|²)^{(p−2)/2} (∇u/K)·∇φ_i][∫ |∇u/K|^{p−2} (∇u/K)·∇v][∫ p|∇u/K|^p] / D(u)³.

We notice that, in this way, the integrand of the first term above is equal to zero where ∇u(x) = 0, and well defined otherwise, even without ε_n². Therefore we replace

ε_n² + |∇u/K|²  with  (1 − sign|∇u|²) + |∇u/K|²

in that term. Indeed, such a modification is useful in the numerical implementation of f, its gradient and its exact Hessian, even in the case p(x) ≥ 2, for the evaluation of terms (see, for instance, the second, the fourth and the fifth above) like

|z|^{p−2} z,

which can be numerically computed as

((1 − sign|z|²) + |z|²)^{(p−2)/2} z.

On the other hand, if ∇u(x) = 0, the third term in the Hessian above is not defined without ε_n², but it is well defined if ∇u(x) ≠ 0. Therefore, we replace

ε_n² + |∇u/K|²

in the third term of the Hessian above and in the corresponding modified Hessian (see (20)) with

ε_n²·(1 − sign|∇u|²) + |∇u/K|².
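The sign trick above translates directly into array code; a short sketch (the function name is ours):

```python
import numpy as np

def signed_power(z, p):
    """Evaluate |z|**(p-2) * z as ((1 - sign|z|^2) + |z|^2)**((p-2)/2) * z:
    where z = 0 the base becomes 1, so the product is exactly 0 and no
    0**negative is ever formed, even for p < 2."""
    base = (1.0 - np.sign(np.abs(z) ** 2)) + np.abs(z) ** 2
    return base ** ((p - 2.0) / 2.0) * z

out = signed_power(np.array([-2.0, 0.0, 0.5]), 1.5)
# the middle entry is exactly 0 instead of nan
```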

When p(x) → 1⁺ for some values of x, the eigenvalue problem is numerically quite difficult. This is due not only to the fact that the exact Hessian does not exist, but also to the large derivatives of the solution at the boundaries. In fact, as reported in Figure 1 for the case of constant p, when p → 1⁺ the first eigenfunction on the ball converges to its characteristic function (see Ref. [25]).

3 Numerical results

3.1 Details of the algorithm

For the computation of the Luxemburg norm, Newton's method is stopped when the relative difference of two successive solutions is smaller than 10·ε, where ε is the machine precision. From our numerical experience, we found that a number of continuation steps in (8) chosen as

M = ⌈max p(x)/2⌉  if min p(x) ≥ 2,
M = ⌈−3 log(min p(x) − 1)⌉  if min p(x) < 2,

provides a satisfactory behavior. Moreover, the m-th problem in (8) is solved with tolerance

tol_m = tol₀·(1 − θ_m) + tol·θ_m,

where tol₀ = 100·tol and tol is the tolerance in the exit criterion (9) for the inverse power method. It is chosen proportional to h^{r+1}, where r is the degree of the finite element piecewise polynomial basis. The same tolerance is used for the inner problem (14). Four exit criteria are used. Since we use the (modified) Newton's method, if the infinity norm of the descent direction δd in (15) is smaller than tol·‖u_c‖ already with δ = 1, we stop the iterations. Then we consider the ratio between the relative change in f(u₊) and the relative change in u₊: if

‖∇_u f(u₊) ∘ u₊ / f(u₊)‖_∞ ≤ tol,

where ∘ denotes Hadamard's product, we stop the iterations. We stop the iterations also if the method stagnates, that is, if ‖δd‖_∞ ≤ 0.1·tol·‖u_c‖. Finally, we fix the maximum number of iterations to 10. The value of ε_n for the modified Hessian is 10⁻⁵.
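The bookkeeping of this section can be sketched as follows; note that the ceiling in the first branch of M and the natural logarithm in the second are our reading of the extraction-damaged formula, so both are assumptions:

```python
import math

def continuation_steps(p_min, p_max):
    """Number of continuation steps M (Section 3.1)."""
    if p_min >= 2:
        return math.ceil(p_max / 2)
    return math.ceil(-3 * math.log(p_min - 1))

def step_tolerances(tol, M):
    """tol_m = tol0*(1 - theta_m) + tol*theta_m with tol0 = 100*tol and
    theta_m = sqrt(m/M): loose at the start, tight at the end."""
    tol0 = 100 * tol
    return [tol0 * (1 - math.sqrt(m / M)) + tol * math.sqrt(m / M)
            for m in range(M + 1)]

M = continuation_steps(1.1, 10.0)   # exponent range [1.1, 10]
tols = step_tolerances(1e-6, M)
```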



3.2 One-dimensional case

We implemented and ran the one-dimensional case in GNU Octave 3.8, using piecewise linear basis functions φ_j(x) and approximating the arising integrals by midpoint quadrature formulas.

The initial solution u⁰(x) corresponding to p(x) ≡ p = 2 is the (piecewise linear interpolation of the) first eigenfunction of

−u″(x) = η₂ u(x),  x₀ < x < x₁,

normalized in order to have ‖u⁰‖₂ = 1, that is

u⁰(x) = √(2/(x₁ − x₀)) sin((x − x₀)/(x₁ − x₀) π),  η₂ = (π/(x₁ − x₀))².

p     λ_p    |η_p − λ_p|/η_p
1.1   1.28   8.6e-05
1.2   1.46   1.9e-05
1.3   1.61   4.8e-05
1.4   1.75   5.9e-05
1.5   1.88   6.5e-05
1.6   2.00   6.8e-05
1.7   2.12   7.1e-05
1.8   2.24   7.5e-05
1.9   2.35   7.8e-05
2.0   2.47   8.2e-05
3.0   3.54   1.6e-04
4.0   4.56   3.0e-04
5.0   5.58   5.1e-04
6.0   6.59   8.0e-04
7.0   7.59   1.2e-03
8.0   8.59   1.6e-03
9.0   9.59   2.1e-03
10.0  10.59  2.6e-03

Table 1 Values of λ_p and relative error with respect to η_p for some values of constant p(x) ≡ p, in the interval (−1, 1).

The first test aims at comparing our results with the analytical solution, which is known for constant values of p(x) ≡ p in the interval (−1, 1) (see, for instance, Ref. [8]):

η_p = (p − 1) ( (π/p) / sin(π/p) )^p.
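This closed form is easy to evaluate; a one-line sketch (the helper name is ours):

```python
import numpy as np

def eta_p(p):
    """Analytical first eigenvalue of the p-Laplacian on (-1, 1) for
    constant p: eta_p = (p - 1) * ((pi/p) / sin(pi/p))**p."""
    return (p - 1.0) * ((np.pi / p) / np.sin(np.pi / p)) ** p

# eta_2 = (pi/2)**2, and eta_p(1.5), eta_p(10) match Table 1 closely
vals = [eta_p(p) for p in (1.5, 2.0, 10.0)]
```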

Table 1 reports the eigenvalue λ_p, computed with 101 basis functions, and the relative difference with the corresponding analytical value η_p (see (5)). Figure 1 shows the eigenfunction u(x) for different values of constant p. In the limit p → 1⁺ it tends to an almost constant function with steep gradients at the boundaries due to the homogeneous boundary conditions, whereas in the limit p → +∞ the eigenfunction resembles the absolute value of a linear function (not differentiable at the origin).



Fig. 1 First eigenfunctions for some values of constant p(x) ≡ p.

Fig. 2 p(x) = 28 + 26 cos(2πx) (left) and p(x) = 1.1 if x < 0 and p(x) = 10 if x ≥ 0 (right), and the corresponding eigenfunctions (bottom).



Figure 2 reports the first eigenfunction for two cases. In the first one (left), p(x) = 28 + 26 cos(2πx); in the second one (right), p(x) is discontinuous: p(x) = 1.1 if x < 0 and p(x) = 10 otherwise. The second case, although it does not satisfy the log-Hölder continuity usually required of the exponent, was chosen as a limit case to check the robustness of the method. The values of λ for the two cases are about 0.94548 and 1.25999, respectively; the corresponding eigenfunctions are reported in the second row of plots. The values of the residuals (10) in infinity norm are 0.0018 and 0.64, respectively. The second residual is not so small. In fact, the first part of the numerical residual contains terms like |∇u(x)|^{p(x)−1} (see (17)), with ∇u(x) close to zero where p(x) is close to one. Those terms are approximated by |∇u(x) + O(h)|^{p−1}, where h is the mesh size, which can be quite far from their exact values. A possible confirmation of this explanation is the fact that the maximum of the residual is attained at x̄ = 0.04, where ∇u(x̄) ≈ 0 and p(x̄) = 1.1. Similar values of the residuals were observed for the smaller values of p in the constant case.

3.3 Two-dimensional cases

We implemented and ran the algorithm with FreeFem++ 3.23, using piecewise quadratic elements for u(x, y) and piecewise constant elements for p(x, y), in order to better handle discontinuous functions p(x, y).

3.3.1 The rectangle

The initial solution u⁰(x, y) corresponding to p(x, y) ≡ p = 2 is the first eigenfunction of

−∇²u(x, y) = η₂ u(x, y),  x₀ < x < x₁, y₀ < y < y₁,

normalized to have ‖u⁰‖₂ = 1, that is

u⁰(x, y) = √(8/((x₁ − x₀)(y₁ − y₀))) sin((x − x₀)/(x₁ − x₀) π) sin((y − y₀)/(y₁ − y₀) π),
η₂ = (π/(x₁ − x₀))² + (π/(y₁ − y₀))².

p    λ_p      Ref. [13]  Ref. [9]
1.5  10.0722  10.0722    10.0720
2.0  19.7393  19.7392    19.7392
2.5  35.9490  35.9493    35.9487
4.0  176.618  176.693    176.598

Table 2 Values of λ_p for some values of constant p(x) ≡ p.

For some constant values of p(x) ≡ p, Ref. [13, Table 4, p-version] reports the values of the eigenvalues λ_p with six significant digits for the unit square, and Ref. [9, Table 2] reports the values for a wider range of p. In Table 2 we compare our results for p = 1.5, 2, 2.5 and 4.0, computed on a regular mesh with 26 × 26 vertices and 1250 triangles. We find that our results are closer to those obtained by the p-version of the FEM algorithm used in Ref. [13] than to those obtained in Ref. [9].

         precNewton(10)              precNLCG(10)
         λ       residual  CPU s.    λ       residual  CPU s.
p₁(x)    4.0571  2.16e-4   38.46     4.0571  2.19e-4   84.42
p₂(x)    4.1727  2.51e-4   49.32     4.1727  2.52e-4   77.14

Table 3 Comparison for the cases p₁(x, y) = 5 + 3 sin(3πx) and p₂(x, y) = 4.5 + 3 sin(3πx).

In order to test the code performance, we compared our new minimization method (called precNewton(10) in Table 3) with the FreeFem++ function NLCG, which implements the nonlinear conjugate gradient minimization method (see Table 3). In a first example, taken from Ref. [6, Section 2.2], p₁(x, y) = 5 + 3 sin(3πx). Here, differently from Ref. [6], we implemented a preconditioner for the NLCG method based on the modified Hessian (21) and set the maximum number of iterations to 10. Therefore, the method is denoted precNLCG(10) in Table 3. In a second example we checked the case p₂(x, y) = 4.5 + 3 sin(3πx), in which p₂(x, y) assumes values less than two. As shown in Table 3, our method outperforms the preconditioned NLCG in terms of CPU time (with comparable residuals).

Fig. 3 Eigenfunction for p₃(x, y); λ ≈ 4.3491, residual 0.16.

As a final test, we considered a discontinuous case with

p₃(x, y) = 1.1 if x² + y² ≤ 1, 10 otherwise.

The corresponding eigenfunction u(x, y) is shown in Figure 3. Its shape resembles that of the one-dimensional case (see Figure 2, right).



3.3.2 The disk

We computed the eigenvalues for the disk for some constant values of p(x) ≡ p, namely p = 1.5, 2, 2.5 and 4. The initial solution u⁰(x, y) corresponding to p(x, y) ≡ p = 2 is the normalized first eigenfunction of

−∇²u(x, y) = η₂ u(x, y),  x² + y² < r²,

that is

u⁰(x, y) = ũ⁰(x, y)/‖ũ⁰‖₂,  ũ⁰(x, y) = J₀(j₀,₁ √(x² + y²)/r),  η₂ = (j₀,₁/r)²,

where J₀ is the Bessel function of the first kind of order zero and j₀,₁ ≈ 2.40482555769577 is its first zero, computed by the commands

f = @(z) besselj(0,z);
fzero(f,[2,3],optimset("TolX",eps))

in GNU Octave 3.8. In Table 4 we compare our results for the unit disk (r = 1), on a mesh with 2032 vertices and 3912 triangles, with those provided in Ref. [9, Table 1] (there reported with five or six significant digits). We notice that for p(x) ≡ 2 the analytical solution is j₀,₁² ≈ 5.7832 and that our value is more accurate.

p    λ_p      Ref. [9]
1.5  4.0167   4.0053
2.0  5.7815   5.7616
2.5  7.7081   7.6815
4.0  14.6785  14.6369

Table 4 Values of λ_p for some values of constant p(x) ≡ p.

3.3.3 The annulus

The initial solution u⁰(x, y) corresponding to p(x, y) ≡ p = 2 for the annulus is the normalized first eigenfunction of

−∇²u(x, y) = η₂ u(x, y),  a² < x² + y² < b²,

that is

u⁰(x, y) = ũ⁰(x, y)/‖ũ⁰‖₂,  η₂ = (k₀,₁/a)²,

where

ũ⁰(x, y) = J₀(k₀,₁) Y₀(k₀,₁ √(x² + y²)/a) − Y₀(k₀,₁) J₀(k₀,₁ √(x² + y²)/a),

Y₀ being the Bessel function of the second kind of order zero. If we choose a = 0.5 and b = 1, then k₀,₁ ≈ 3.12303091959569 is the first zero of

J₀(z) Y₀(zb/a) − Y₀(z) J₀(zb/a),

computed by the commands

a=0.5; b=1;
f = @(z) besselj(0,z).*bessely(0,z*b/a)-bessely(0,z).*besselj(0,z*b/a);
fzero(f,[2,4],optimset("TolX",eps))

The eigenvalues in Table 5 were obtained on a mesh with 1502 vertices and 2780 triangles. The exact value of η₂ for the analytically known case p(x) ≡ 2 is 4k₀,₁² ≈ 39.0133.

p    λ_p      exact
1.5  14.8932
2.0  38.9932  39.0133
2.5  95.0417
4.0  1154.08

Table 5 Values of λ_p for some values of constant p(x) ≡ p.

4 Conclusions

We described an effective method for the computation of the first eigenpair of the p(x)-Laplacian problem. The numerical results, in one- and two-dimensional domains, agree well with the analytical or numerical results available in the literature for the constant case p(x) ≡ p. The main contribution of the present work is a fast and reliable method for the general case of a variable exponent p(x) ≥ p⁻ > 1, applied for the first time to some test problems in the critical range where p(x) < 2 somewhere in the domain.

Acknowledgements The authors cordially thank Dr. Stefan Rainer for the helpful discussions about numerical methods for unconstrained minimization.

References

1. Acerbi E, Mingione G (2001) Regularity Results for a Class of Functionals with Non-Standard Growth. Arch Rational Mech Anal 156(2):121–140

2. Acerbi E, Mingione G (2005) Gradient estimates for the p(x)-Laplacean system. J reine angew Math 584:117–148

3. Akagi G, Matsuura K (2013) Nonlinear diffusion equations driven by the p(·)-Laplacian. Nonlinear Differ Equ Appl 20:37–64

4. Astarita G, Marrucci G (1974) Principles of non-Newtonian fluid mechanics. McGraw-Hill

5. Atkinson C, Champion CR (1984) Some boundary value problems for the equation ∇·(|∇ϕ|^N ∇ϕ) = 0. Q J Mech Appl Math 37:401–419

6. Bellomi M, Caliari M, Squassina M (2013) Computing the first eigenpair for problems with variable exponents. J Fix Point Theory Appl 13(2):561–570

7. Biezuner RJ, Ercole G, Martins EM (2009) Computing the first eigenvalue of the p-Laplacian via the inverse power method. J Func Anal 257:243–270

8. Biezuner RJ, Ercole G, Martins EM (2011) Computing the sin_p function via the inverse power method. Comput Methods Appl Math 11(2):129–140



9. Biezuner RJ, Brown J, Ercole G, Martins EM (2012) Computing the First Eigenpair of the p-Laplacian via Inverse Iteration of Sublinear Supersolutions. J Sci Comput 52(1):180–201

10. Bird R, Armstrong R, Hassager O (1987) Dynamics of polymeric liquids, vol 1: Fluid mechanics, 2nd edn. John Wiley

11. Bognár G (2008) Numerical and analytic investigation of some nonlinear problems in fluid mechanics. In: Computers and Simulation in Modern Science, Vol. II, World Scientific and Engineering Academy and Society (WSEAS), pp 172–180

12. Bognár G, Rontó M (2005) Numerical-analytic investigation of the radially symmetric solutions for some nonlinear PDEs. Comput Math Appl 50:983–991

13. Bognár G, Szabó T (2003) Solving nonlinear eigenvalue problems by using p-version of FEM. Comput Math Appl 46(1):57–68

14. Breit D, Diening L, Schwarzacher S (2014) Finite element methods for the p(x)-Laplacian, arXiv:1311.5121v2 [math.NA]

15. Diaz JI, de Thelin F (1994) On a nonlinear parabolic problem arising in some models related to turbulent flows. SIAM J Math Anal 5(4):1085–1111

16. Diaz JI, Hernandez J (1997) On the multiplicity of equilibrium solutions to a nonlinear diffusion equation on a manifold arising in climatology. J Math Anal Appl 216:593–613
17. Diening L, Harjulehto P, Hästö P, Ruzicka M (2011) Lebesgue and Sobolev Spaces with Variable Exponents, Lect. Notes Math., vol 2017. Springer

18. Fan X, Zhang Q, Zhao D (2005) Eigenvalues of p(x)-Laplacian Dirichlet problem. J Math Anal Appl 302(2):306–317

19. Franzina G, Lindqvist P (2013) An eigenvalue problem with variable exponents. Nonlinear Anal 85:1–16

20. Glowinski R, Rappaz J (2003) Approximation of a nonlinear elliptic problem arising in a non-Newtonian fluid model in glaciology. Modél Math Anal Numér 37(1):175–186
21. Guan M, Zheng L (2006) The similarity solution to a generalized diffusion equation with convection. Adv Dyn Syst Appl 1(2):183–189

22. Harjulehto P, Hästö P, Lê UV, Nuortio M (2010) Overview of differential equations with non-standard growth. Nonlinear Anal 72:4551–4574

23. Hein M, Bühler T (2011) An inverse power method for nonlinear eigenproblems with applications in 1-spectral clustering and sparse PCA. In: Lafferty J, Williams CKI, Shawe-Taylor J, Zemel RS, Culotta A (eds) Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010, Curran Associates, Inc., pp 847–855

24. Kawohl B (1990) On a family of torsional creep problems. J reine angew Math 410:1–22
25. Kawohl B, Fridman V (2003) Isoperimetric estimates for the first eigenvalue of the p-Laplace operator and the Cheeger constant. Comment Math Univ Carolin 44:659–667
26. Kelley CT (1999) Iterative Methods for Optimization, Frontiers in Applied Mathematics, vol 18. SIAM, Philadelphia

27. Mastorakis NE, Fathabadi H (2009) On the Solution of p-Laplacian for non-Newtonian fluid flow. WSEAS Trans Math 8(6):238–245

28. Pélissier MC, Reynaud ML (1974) Étude d'un modèle mathématique d'écoulement de glacier. C R Acad Sci Sér I Math 279:531–534

29. Philip JR (1961) N-diffusion. Aust J Phys 14:1–13

30. Showalter RE, Walkington NJ (1991) Diffusion of fluid in a fissured medium with microstructure. SIAM J Math Anal 22:1702–1722

