

From the document Numerical methods for sparse recovery (pages 93-107)

3.4 Domain Decomposition Methods for Total Variation Minimization

3.4.3 Convergence of the Sequential Alternating Subspace Minimization

u₁^(ℓ+1) = (u₁^(ℓ) + K*(g − Ku₂ − Ku₁^(ℓ))) − P_{αK}(u₁^(ℓ) + K*(g − Ku₂ − Ku₁^(ℓ)) + u₂ − η).   (3.174)

The computation of η can be implemented by the algorithm (3.173).

Proposition 3.31 Assume u₂ ∈ V₂ and ‖K‖ < 1. Then the iteration (3.174) converges to a solution u₁ ∈ V₁ of (3.163) for any initial choice of u₁^(0) ∈ V₁.

The proof of this proposition is similar to that of Theorem 3.7 and is omitted.
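The fixed-point iteration (3.174) can be sanity-checked numerically in a simplified ℓ₁ analogue, where (I − P_{αK}) reduces to componentwise soft-thresholding, P_{αK} to clipping onto the ℓ∞-ball of radius α, and V₁ is taken to be the whole space so that the trace correction η vanishes. All names below are hypothetical; this is a structural sketch, not the algorithm of these notes.

```python
import numpy as np

def soft_threshold(w, alpha):
    """l1 analogue of S_alpha = (I - P_{alpha K}): P is clipping to [-alpha, alpha]."""
    return np.sign(w) * np.maximum(np.abs(w) - alpha, 0.0)

rng = np.random.default_rng(0)
n = 20
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
K = 0.5 * Q                      # ||K|| = 0.5 < 1, as Proposition 3.31 requires
g = rng.standard_normal(n)
u2 = rng.standard_normal(n)      # the fixed second component
eta = np.zeros(n)                # trace correction; zero here since V1 is the whole space
alpha = 0.1

u1 = np.zeros(n)                 # arbitrary initial choice u1^(0)
for _ in range(300):
    w = u1 + K.T @ (g - K @ u2 - K @ u1) + u2 - eta
    u1 = soft_threshold(w, alpha) - (u2 - eta)

# convergence check: u1 is (numerically) a fixed point of the iteration map
w = u1 + K.T @ (g - K @ u2 - K @ u1) + u2 - eta
residual = np.linalg.norm(soft_threshold(w, alpha) - (u2 - eta) - u1)
print(residual < 1e-8)
```

Because ‖I − K*K‖ < 1 for this choice of K, the iteration map is a contraction and the residual drops to machine precision.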

Let us conclude this section by mentioning that all the results presented here hold symmetrically for the minimization on V₂; the notations should just be adjusted accordingly.

In this section we prove the convergence of the algorithm (3.162) to minimizers of J. To do so, we need a characterization of solutions of the minimization problem (2.80) like the one provided in [77, Proposition 4.1] for the continuous setting. We specialize the arguments of [77, Proposition 4.1] to our discrete setting and highlight the significant differences with respect to the continuous one.

Characterization of solutions. We make the following assumptions:

(Aϕ) ϕ : R → R is a convex function, nondecreasing on R⁺, such that

(i) ϕ(0) = 0;

(ii) there exist c > 0 and b ≥ 0 such that cz − b ≤ ϕ(z) ≤ cz + b for all z ∈ R⁺.

The particular example we have in mind is simply ϕ(s) = s, but we keep a more general notation for uniformity with respect to the continuous version in [77, Proposition 4.1]. In this section we are concerned with the following more general minimization problem

argmin_{u∈H} { J_ϕ(u) := ‖Ku − g‖₂² + 2α ϕ(|∇u|)(Ω) },   (3.175)

where g ∈ H is a datum and α > 0 is a fixed constant (in particular for ϕ(s) = s).

To characterize the solutions of the minimization problem (3.175) we use duality results from [36]. Therefore we recall the definition of the conjugate (or Legendre transform) of a function (see, e.g., [36, Def. 4.1, p. 17]):

Definition 3.32 Let V and V* be two vector spaces placed in duality by a bilinear pairing denoted by ⟨·, ·⟩, and let φ : V → R be a convex function. The conjugate function (or Legendre transform) φ* : V* → R is defined by

φ*(u*) = sup_{u∈V} {⟨u, u*⟩ − φ(u)}.

Proposition 3.33 Let ζ, u ∈ H. If the assumption (Aϕ) is fulfilled, then ζ ∈ ∂J_ϕ(u) if and only if there exists M = (M₀, M̄) ∈ H × H^d, with |M̄(x)| ≤ c₁ ∈ [0, +∞) for all x ∈ Ω, such that

⟨M̄(x), (∇u)(x)⟩_{R^d} + 2αϕ(|(∇u)(x)|) + 2αϕ₁*(|M̄(x)|/(2α)) = 0 for all x ∈ Ω,   (3.176)

K*M₀ − div M̄ + ζ = 0,   (3.177)

−M₀ = 2(Ku − g),   (3.178)

where ϕ₁* is the conjugate function of ϕ₁ defined by ϕ₁(s) = ϕ(|s|), for s ∈ R.

If additionally ϕ is differentiable and |(∇u)(x)| ≠ 0 for x ∈ Ω, then we can compute M̄ as

M̄(x) = −2α (ϕ′(|(∇u)(x)|)/|(∇u)(x)|) (∇u)(x).   (3.179)

Remark 3.34 (i) For ϕ(s) = s the function ϕ₁ from Proposition 3.33 turns out to be ϕ₁(s) = |s|. Its conjugate function ϕ₁* is then given by

ϕ₁*(s*) = sup_{s∈R} {⟨s, s*⟩ − |s|} = 0 for |s*| ≤ 1, and ∞ else.

Hence condition (3.176) specifies as follows:

⟨M̄(x), (∇u)(x)⟩_{R^d} + 2α|(∇u)(x)| = 0,

and, directly from the proof of Proposition 3.33 in the Appendix, |M̄(x)| ≤ 2α for all x ∈ Ω.
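The conjugate ϕ₁*(s*) = sup_s {s s* − |s|} can be checked on a finite grid: for |s*| ≤ 1 the supremum is 0, attained at s = 0, while for |s*| > 1 it grows with the grid width, reflecting the value +∞. A small sketch with hypothetical names:

```python
import numpy as np

def conjugate_of_abs(s_star, s_max=100.0, num=200001):
    """Grid approximation of phi_1*(s*) = sup_s { s*s_star - |s| } on [-s_max, s_max]."""
    s = np.linspace(-s_max, s_max, num)
    return np.max(s * s_star - np.abs(s))

print(conjugate_of_abs(0.5))   # 0.0: for |s*| <= 1 the supremum is attained at s = 0
print(conjugate_of_abs(1.0))   # 0.0
print(conjugate_of_abs(2.0))   # 100.0: grows like (|s*| - 1)*s_max, i.e. truly +inf
```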

(ii) We want to highlight a few important differences with respect to the continuous case. Due to our definition of the gradient and its relationship with the divergence operator, ∇* = −div, no boundary conditions are needed. Therefore condition (10) of [77, Proposition 4.1] has no discrete correspondent in our setting. Moreover, the continuous total variation of a function can be decomposed into an absolutely continuous part with respect to the Lebesgue measure and a singular part, whereas no singular part appears in the discrete setting. Therefore conditions (6) and (7) of [77, Proposition 4.1] have no discrete correspondent either.

(iii) An interesting consequence of Proposition 3.33 is that the map S_α = (I − P_{αK}) is bounded, i.e., ‖S_α(z_k)‖₂ → ∞ if and only if ‖z_k‖₂ → ∞, for k → ∞. In fact, since

S_α(z) = argmin_{u∈H} ‖u − z‖₂² + 2α|∇u|(Ω),

from (3.177) and (3.178) we immediately obtain

S_α(z) = z − (1/2) div M̄,

and M̄ is uniformly bounded.
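A concrete way to evaluate S_α(z) numerically is a dual projection iteration of Chambolle type for the total-variation proximity operator; the sketch below assumes a forward-difference gradient with Neumann boundary and its negative-adjoint divergence. The function names, the step size τ = 1/8, and the iteration count are our own choices, not prescribed by the notes.

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary (zero on last row/column)."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad: <grad u, p> = -<u, div p>."""
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def S_alpha(z, alpha, iters=500, tau=0.125):
    """S_alpha(z) = argmin_u ||u - z||_2^2 + 2*alpha*|grad u|(Omega),
    computed via a dual fixed-point iteration on the field p, |p(x)| <= 1."""
    px = np.zeros_like(z)
    py = np.zeros_like(z)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - z / alpha)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return z - alpha * div(px, py)   # P_{alpha K}(z) = alpha * div(p) at convergence

def tv(u):
    gx, gy = grad(u)
    return np.sum(np.sqrt(gx ** 2 + gy ** 2))

# Constant images are fixed points of S_alpha (the TV term vanishes).
z_const = np.full((16, 16), 3.0)
out_const = S_alpha(z_const, alpha=0.5)

# On a random image the output must beat the trivial candidate u = z:
rng = np.random.default_rng(1)
z = rng.random((16, 16))
u = S_alpha(z, alpha=0.5)
obj_u = np.sum((u - z) ** 2) + 2 * 0.5 * tv(u)
obj_z = 2 * 0.5 * tv(z)
print(np.allclose(out_const, z_const), obj_u <= obj_z)
```

At convergence the dual field satisfies M̄ = 2αp, so the bound |M̄(x)| ≤ 2α of Remark 3.34 corresponds to the constraint |p(x)| ≤ 1 enforced by the iteration.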

Proof. (Proposition 3.33.) It is clear that ζ ∈ ∂J_ϕ(u) if and only if u = argmin_{v∈H} {J_ϕ(v) − ⟨ζ, v⟩_H}, so let us consider the following variational problem:

inf_{v∈H} {J_ϕ(v) − ⟨ζ, v⟩_H} = inf_{v∈H} {‖Kv − g‖₂² + 2αϕ(|∇v|)(Ω) − ⟨ζ, v⟩_H}.   (P)

We denote this infimum by inf(P). Now we compute (P*), the dual of (P). Let F : H → R, G : H × H^d → R, G₁ : H → R, G₂ : H^d → R be such that

F(v) = −⟨ζ, v⟩_H,

G₁(w₀) = ‖w₀ − g‖₂²,

G₂(w̄) = 2αϕ(|w̄|)(Ω),

G(w) = G₁(w₀) + G₂(w̄),

with w = (w₀, w̄) ∈ H × H^d. Then the dual problem of (P) is given by (cf. [36, p. 60])

sup_{p∈H×H^d} {−F*(M*p) − G*(−p)},   (P*)

where M : H → H × H^d is defined by

Mv = (Kv, (∇v)₁, …, (∇v)_d)

and M* is its adjoint. We denote the supremum in (P*) by sup(P*). Using the definition of the conjugate function we compute F* and G*. In particular,

F*(M*p) = sup_{v∈H} {⟨M*p, v⟩_H − F(v)} = sup_{v∈H} ⟨M*p + ζ, v⟩_H = 0 if M*p + ζ = 0, and ∞ otherwise,

where p = (p₀, p̄), and

G*(p) = sup_{w∈H×H^d} {⟨p, w⟩_{H×H^d} − G(w)}

= sup_{w=(w₀,w̄)∈H×H^d} {⟨p₀, w₀⟩_H + ⟨p̄, w̄⟩_{H^d} − G₁(w₀) − G₂(w̄)}

= sup_{w₀∈H} {⟨p₀, w₀⟩_H − G₁(w₀)} + sup_{w̄∈H^d} {⟨p̄, w̄⟩_{H^d} − G₂(w̄)}

= G₁*(p₀) + G₂*(p̄).

We have that G₁*(p₀) = ⟨g, p₀⟩_H + (1/4)‖p₀‖₂² and that G₂* can be expressed in terms of ϕ₁*. For ease we include below the explicit computation of these conjugate functions.

So we can write (P*) in the following way:

sup_{p=(p₀,p̄)∈K} {−G₁*(−p₀) − G₂*(−p̄)},

where K := {p = (p₀, p̄) ∈ H × H^d : K*p₀ − div p̄ + ζ = 0}. We now apply the duality results from [36, Theorem III.4.1]: since the functional in (P) is convex, continuous with respect to Mv in H × H^d, and inf(P) is finite, we have inf(P) = sup(P*) ∈ R, and (P*) has a solution M = (M₀, M̄) ∈ K.

Let us assume that u is a solution of (P) and M is a solution of (P*). Then M ∈ K verifies the direct implication of (3.177). In particular,

−⟨ζ, u⟩_H = ⟨K*M₀, u⟩_H − ⟨div M̄, u⟩_H = ⟨M₀, Ku⟩_H + ⟨M̄, ∇u⟩_{H^d}.

Let us write (4.183) again in the following form:

Σ_{x∈Ω} [⟨M̄(x), (∇u)(x)⟩_{R^d} + 2αϕ(|(∇u)(x)|) + 2αϕ₁*(|M̄(x)|/(2α))] = 0.

By the Fenchel inequality each summand is nonnegative, hence each of them must vanish, which gives (3.176).

Conversely, if such an M = (M₀, M̄) ∈ H × H^d with |M̄(x)| ≤ c₁ exists which fulfills conditions (3.176)-(3.178), it is clear from the previous considerations that equation (4.182) holds. Let us denote the functional on the left side of (4.182) by

P(u) := ‖Ku − g‖₂² + 2αϕ(|∇u|)(Ω) − ⟨ζ, u⟩_H,

and the functional on the right side of (4.182) by

P*(M) := −G₁*(−M₀) − G₂*(−M̄).

We know that the functional P is the functional of (P) and P* is the functional of (P*). Hence inf P = inf(P) and sup P* = sup(P*). Since P(u) = P*(M) ≤ sup(P*) = inf(P), it follows that u is a solution of (P), i.e., ζ ∈ ∂J_ϕ(u).

If additionally ϕ is differentiable and |(∇u)(x)| ≠ 0 for x ∈ Ω, we show that we can compute M̄(x) explicitly. From equation (3.176) (resp. (4.185)) we have

2αϕ₁*(|−M̄(x)|/(2α)) = −⟨M̄(x), (∇u)(x)⟩_{R^d} − 2αϕ(|(∇u)(x)|).   (4.187)

From the definition of the conjugate function we have

2αϕ₁*(|−M̄(x)|/(2α)) = sup_{w∈R^d} {⟨−M̄(x), w⟩_{R^d} − 2αϕ(|w|)} ≥ ⟨−M̄(x), w⟩_{R^d} − 2αϕ(|w|) for all w ∈ R^d,

with equality for w = (∇u)(x) by (4.187). Hence (∇u)(x) realizes the supremum, and the associated first-order condition implies

M̄_j(x) = −(2αϕ′(|(∇u)(x)|)/|(∇u)(x)|) (∇u)_j(x), j = 1, …, d,

which verifies (3.179). This finishes the proof.

Computation of conjugate functions. Let us compute the conjugate function of the convex function G₁(w₀) = ‖w₀ − g‖₂². From Definition 3.32 we have

G₁*(p₀) = sup_{w₀∈H} {⟨w₀, p₀⟩_H − G₁(w₀)} = sup_{w₀∈H} {⟨w₀, p₀⟩_H − ⟨w₀ − g, w₀ − g⟩_H}.

We set H(w₀) := ⟨w₀, p₀⟩_H − ⟨w₀ − g, w₀ − g⟩_H. To get the maximum of H we compute the Gâteaux differential of H at w₀ and set it to zero,

H′(w₀) = p₀ − 2(w₀ − g) = 0,

which gives w₀ = g + p₀/2 and hence

G₁*(p₀) = ⟨g + p₀/2, p₀⟩_H − ‖p₀/2‖₂² = ⟨g, p₀⟩_H + (1/4)‖p₀‖₂².
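The closed form G₁*(p₀) = ⟨g, p₀⟩ + ¼‖p₀‖₂², attained at w₀ = g + p₀/2, can be sanity-checked numerically; the names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
g = rng.standard_normal(n)
p0 = rng.standard_normal(n)

def H(w0):
    """H(w0) = <w0, p0> - ||w0 - g||^2, whose supremum over w0 is G1*(p0)."""
    return w0 @ p0 - np.sum((w0 - g) ** 2)

closed_form = g @ p0 + 0.25 * np.sum(p0 ** 2)   # G1*(p0) = <g, p0> + ||p0||^2 / 4
maximizer = g + p0 / 2                           # zero of the Gateaux differential

print(np.isclose(H(maximizer), closed_form))                              # True
print(all(H(rng.standard_normal(n)) <= closed_form + 1e-12
          for _ in range(1000)))                                          # True
```

Since H is strictly concave, random candidates never exceed the value at the stationary point, which matches the closed form exactly.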

Let us now turn to G₂. If ϕ were an even function, then the supremum defining G₂* could be expressed directly in terms of ϕ*, the conjugate function of ϕ.

Unfortunately ϕ is not even in general. To overcome this difficulty we have to choose a function which is equal to ϕ(s) for s ≥ 0 and does not change the supremum for s < 0. For instance, one can choose ϕ₁(s) = ϕ(|s|) for s ∈ R, and express the supremum in terms of ϕ₁* instead.

We return to the sequential algorithm (3.162). Let us explicitly express the algorithm as follows: pick an initial u^(0) := ũ₁^(0) + ũ₂^(0), with ũ_i^(0) ∈ V_i, for example ũ_i^(0) = 0, i = 1, 2, and iterate

u₁^(n+1,0) = ũ₁^(n),  u₁^(n+1,ℓ+1) = argmin_{v₁∈V₁} J₁^s(v₁ + ũ₂^(n), u₁^(n+1,ℓ)),  ℓ = 0, …, L − 1,

u₂^(n+1,0) = ũ₂^(n),  u₂^(n+1,m+1) = argmin_{v₂∈V₂} J₂^s(u₁^(n+1,L) + v₂, u₂^(n+1,m)),  m = 0, …, M − 1,

u^(n+1) := u₁^(n+1,L) + u₂^(n+1,M),

ũ₁^(n+1) := χ₁ u^(n+1),  ũ₂^(n+1) := χ₂ u^(n+1).   (3.210)

Note that we do prescribe a finite number L and M of inner iterations for each subspace respectively, and that u^(n+1) = ũ₁^(n+1) + ũ₂^(n+1), with u_i^(n+1) ≠ ũ_i^(n+1), i = 1, 2, in general. In this section we want to prove its convergence for any choice of L and M.
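The structure of the sequential scheme can be illustrated on a toy ℓ₁ analogue, in which the TV term is replaced by ‖·‖₁ so that the local surrogate minimizers are explicit soft-thresholding operations and no interface correction η is needed. The overlapping index blocks, the inner loops of lengths L and M, and the partition-of-unity recombination mirror the scheme above; all names are hypothetical.

```python
import numpy as np

def soft(w, alpha):
    return np.sign(w) * np.maximum(np.abs(w) - alpha, 0.0)

rng = np.random.default_rng(2)
n = 18
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
K = 0.5 * Q                        # ||K|| < 1, as required
g = rng.standard_normal(n)
alpha = 0.2

omega1 = np.arange(n) < 12         # overlapping index blocks standing in for
omega2 = np.arange(n) >= 6         # the subdomains Omega_1, Omega_2
chi1 = np.where(omega1 & ~omega2, 1.0, np.where(omega1 & omega2, 0.5, 0.0))
chi2 = 1.0 - chi1                  # partition of unity: chi1 + chi2 = 1

def J(u):
    return np.sum((K @ u - g) ** 2) + 2 * alpha * np.sum(np.abs(u))

def local_min(a, other, mask):
    """Exact minimizer over {v : supp v in mask} of the surrogate
    ||K(v+other) - g||^2 + 2*alpha*||v+other||_1 + ||v-a||^2 - ||K(v-a)||^2."""
    z = a + K.T @ (g - K @ other - K @ a)
    v = soft(z + other, alpha) - other
    return np.where(mask, v, 0.0)

u1 = np.zeros(n)                   # tilde-u_1^(0)
u2 = np.zeros(n)                   # tilde-u_2^(0)
L = M = 3
values = []
for _ in range(30):
    a = u1
    for _ in range(L):             # L inner iterations on the first block
        a = local_min(a, u2, omega1)
    b = u2
    for _ in range(M):             # M inner iterations on the second block
        b = local_min(b, a, omega2)
    u = a + b
    u1, u2 = chi1 * u, chi2 * u    # recombination with the partition of unity
    values.append(J(u))

# monotone decrease of J along the outer iterations
print(all(x >= y - 1e-10 for x, y in zip(values, values[1:])))
```

The printed check exhibits the monotone decrease of J that the surrogate argument below guarantees whenever ‖K‖ < 1.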

Observe that, for a ∈ V_i and ‖K‖ < 1,

‖u_i − a‖₂² − ‖Ku_i − Ka‖₂² ≥ C‖u_i − a‖₂²,   (3.190)

for C = (1 − ‖K‖²) > 0. Hence

J(u) = J_i^s(u, u_i) ≤ J_i^s(u, a),   (3.191)

and

J_i^s(u, a) − J_i^s(u, u_i) ≥ C‖u_i − a‖₂².   (3.192)

Proposition 3.35 (Convergence properties) Let us assume that ‖K‖ < 1. The algorithm in (3.210) produces a sequence (u^(n))_{n∈N} in H with the following properties:

(i) J(u^(n)) > J(u^(n+1)) for all n ∈ N (unless u^(n) = u^(n+1));

(ii) lim_{n→∞} ‖u^(n+1) − u^(n)‖₂ = 0;

(iii) the sequence (u^(n))_{n∈N} has subsequences which converge in H.

Proof. Let us first observe that

J(u^(n)) = J₁^s(ũ₁^(n) + ũ₂^(n), ũ₁^(n)) = J₁^s(ũ₁^(n) + ũ₂^(n), u₁^(n+1,0)).

By the definition of u₁^(n+1,1) and its minimal properties in (3.210) we have

J₁^s(ũ₁^(n) + ũ₂^(n), u₁^(n+1,0)) ≥ J₁^s(u₁^(n+1,1) + ũ₂^(n), u₁^(n+1,0)).

From (3.191) we have

J₁^s(u₁^(n+1,1) + ũ₂^(n), u₁^(n+1,0)) ≥ J₁^s(u₁^(n+1,1) + ũ₂^(n), u₁^(n+1,1)) = J(u₁^(n+1,1) + ũ₂^(n)).

Putting these inequalities in line we obtain

J(u^(n)) ≥ J(u₁^(n+1,1) + ũ₂^(n)).

In particular, from (3.192) we have

J(u^(n)) − J(u₁^(n+1,1) + ũ₂^(n)) ≥ C‖u₁^(n+1,1) − u₁^(n+1,0)‖₂².

After L steps we conclude the estimates

J(u^(n)) ≥ J(u₁^(n+1,L) + ũ₂^(n))

and

J(u^(n)) − J(u₁^(n+1,L) + ũ₂^(n)) ≥ C Σ_{ℓ=0}^{L−1} ‖u₁^(n+1,ℓ+1) − u₁^(n+1,ℓ)‖₂².

By the definition of u₂^(n+1,1) and its minimal properties we have

J(u₁^(n+1,L) + ũ₂^(n)) ≥ J₂^s(u₁^(n+1,L) + u₂^(n+1,1), u₂^(n+1,0)).

By similar arguments as above we finally find the decreasing estimate

J(u^(n)) ≥ J(u₁^(n+1,L) + u₂^(n+1,M)) = J(u^(n+1)) = J(ũ₁^(n+1) + ũ₂^(n+1)),   (3.193)

and

J(u^(n)) − J(u^(n+1)) ≥ C ( Σ_{ℓ=0}^{L−1} ‖u₁^(n+1,ℓ+1) − u₁^(n+1,ℓ)‖₂² + Σ_{m=0}^{M−1} ‖u₂^(n+1,m+1) − u₂^(n+1,m)‖₂² ),   (3.194)

which verifies (i).

From (3.193) we have J(u^(0)) ≥ J(u^(n)). By the coerciveness condition (C), (u^(n))_{n∈N} is uniformly bounded in H; hence there exists a convergent subsequence (u^(n_k))_{k∈N}, and (iii) holds. Let us denote by u^(∞) the limit of the subsequence. For simplicity, we rename such a subsequence by (u^(n))_{n∈N}. Moreover, since the sequence (J(u^(n)))_{n∈N} is monotonically decreasing and bounded from below by 0, it is also convergent. From (3.194) and the latter convergence we deduce

Σ_{ℓ=0}^{L−1} ‖u₁^(n+1,ℓ+1) − u₁^(n+1,ℓ)‖₂² + Σ_{m=0}^{M−1} ‖u₂^(n+1,m+1) − u₂^(n+1,m)‖₂² → 0, n → ∞.   (3.195)

In particular, by the standard inequality a² + b² ≥ (1/2)(a + b)² for a, b > 0 and the triangle inequality, we also have

‖u^(n) − u^(n+1)‖₂ → 0, n → ∞,   (3.196)

which gives (ii) and completes the proof.

The use of the partition of unity {χ₁, χ₂} allows us to guarantee not only the boundedness of (u^(n))_{n∈N}, but also that of the sequences (ũ₁^(n))_{n∈N} and (ũ₂^(n))_{n∈N}.

Lemma 3.36 The sequences (ũ₁^(n))_{n∈N} and (ũ₂^(n))_{n∈N} produced by the algorithm (3.210) are bounded, i.e., there exists a constant C̃ > 0 such that ‖ũ_i^(n)‖₂ ≤ C̃ for i = 1, 2.

Proof. From the boundedness of (u^(n))_{n∈N} we have

‖ũ_i^(n)‖₂ = ‖χ_i u^(n)‖₂ ≤ κ‖u^(n)‖₂ ≤ C̃ for i = 1, 2.

From Remark 3.34 (iii) we can also show the following auxiliary lemma.

Lemma 3.37 The sequences (η₁^(n,L))_n and (η₂^(n,M))_n are bounded.

Proof. From the previous considerations we know that

u₁^(n,L) = S_α(z₁^(n,L−1) + ũ₂^(n−1) − η₁^(n,L)) − ũ₂^(n−1),

u₂^(n,M) = S_α(z₂^(n,M−1) + u₁^(n,L) − η₂^(n,M)) − u₁^(n,L).

Assume (η₁^(n,L))_n were unbounded; then, by Remark 3.34 (iii), also S_α(z₁^(n,L−1) + ũ₂^(n−1) − η₁^(n,L)) would be unbounded. Since (ũ₂^(n))_n and (u₁^(n,L))_n are bounded by Lemma 3.36 and formula (3.195), we have a contradiction. Thus (η₁^(n,L))_n has to be bounded. With the same argument we can show that (η₂^(n,M))_n is bounded.

We can eventually show the convergence of the algorithm to minimizers of J.

Theorem 3.38 (Convergence to minimizers) Assume ‖K‖ < 1. Then accumulation points of the sequence (u^(n))_{n∈N} produced by algorithm (3.210) are minimizers of J. If J has a unique minimizer, then the sequence (u^(n))_{n∈N} converges to it.

Proof. Let us denote by u^(∞) the limit of a subsequence. For simplicity, we rename such a subsequence by (u^(n))_{n∈N}. From Lemma 3.36 we know that (ũ₁^(n))_{n∈N}, (ũ₂^(n))_{n∈N}, and consequently (u₁^(n,L))_{n∈N}, (u₂^(n,M))_{n∈N}, are bounded. So the limit u^(∞) can be written as

u^(∞) = u₁^(∞) + u₂^(∞) = ũ₁^(∞) + ũ₂^(∞),   (3.197)

where u₁^(∞) is the limit of (u₁^(n,L))_{n∈N}, u₂^(∞) is the limit of (u₂^(n,M))_{n∈N}, and ũ_i^(∞) is the limit of (ũ_i^(n))_{n∈N} for i = 1, 2. Now we show that ũ₂^(∞) = u₂^(∞). By using the triangle inequality, from (3.195) it directly follows that

ku(n+1,M )2 − ˜u(n)2 k2 → 0, n → ∞. (3.198) Moreover, sinceχ2 ∈ V2is a fixed vector which is independent ofn, we obtain from Proposition 3.35(ii) that

2(u(n)− u(n+1))k2→ 0, n → ∞, and hence

k˜u(n)2 − ˜u(n+1)2 k2→ 0, n → ∞. (3.199) Putting (3.198) and (3.199) together and noting that

ku(n+1,M )2 − ˜u(n)2 k2+k˜u(n)2 − ˜u(n+1)2 k2≥ ku(n+1,M )2 − ˜u(n+1)2 k2

we have

ku(n+1,M )2 − ˜u(n+1)2 k2 → 0, n → ∞, (3.200)

which means that the sequences(u(n,M )2 )n∈Nand(˜u(n)2 )n∈Nhave the same limit, i.e.,

˜

u(2∞) = u(2∞), which we denote by u(2∞). Then from (3.200) and (3.197) it directly follows thatu˜(1∞)= u(1∞).

As in the proof of the oblique thresholding theorem we set

F₁(u₁^(n+1,L)) := ‖u₁^(n+1,L) − z₁^(n+1,L)‖₂² + 2α|∇(u₁^(n+1,L) + ũ₂^(n)|_{Ω₁})|(Ω₁)

where

z₁^(n+1,L) := u₁^(n+1,L−1) + (K*(g − Kũ₂^(n) − Ku₁^(n+1,L−1)))|_{Ω₁}.

The optimality condition for u₁^(n+1,L) is

0 ∈ ∂_{V₁}F₁(u₁^(n+1,L)) + 2η₁^(n+1,L),

where

η₁^(n+1,L) = (Tr|_{Γ₁})* Tr|_{Γ₁} ( z₁^(n+1,L) + P_{αK}(η₁^(n+1,L) − z₁^(n+1,L) − ũ₂^(n)) ).

In order to use the characterization of elements in the subdifferential of |∇u|(Ω), i.e., Proposition 3.33, we have to rewrite the minimization problem for F₁. More precisely, we define

F̂₁(ξ₁^(n+1,L)) := ‖ξ₁^(n+1,L) − ũ₂^(n)|_{Ω₁} − z₁^(n+1,L)‖₂² + 2α|∇ξ₁^(n+1,L)|(Ω₁)

for ξ₁^(n+1,L) ∈ V₁ with Tr|_{Γ₁} ξ₁^(n+1,L) = ũ₂^(n). Then the optimality condition for ξ₁^(n+1,L) is

0 ∈ ∂F̂₁(ξ₁^(n+1,L)) + 2η₁^(n+1,L).   (3.201)

Note that indeed ξ₁^(n+1,L) is optimal if and only if u₁^(n+1,L) = ξ₁^(n+1,L) − ũ₂^(n)|_{Ω₁} is optimal.

Analogously we define

F̂₂(ξ₂^(n+1,M)) := ‖ξ₂^(n+1,M) − u₁^(n+1,L)|_{Ω₂} − z₂^(n+1,M)‖₂² + 2α|∇ξ₂^(n+1,M)|(Ω₂)

for ξ₂^(n+1,M) ∈ V₂ with Tr|_{Γ₂} ξ₂^(n+1,M) = u₁^(n+1,L), and the optimality condition for ξ₂^(n+1,M) is

0 ∈ ∂F̂₂(ξ₂^(n+1,M)) + 2η₂^(n+1,M),   (3.202)

where

η₂^(n+1,M) = (Tr|_{Γ₂})* Tr|_{Γ₂} ( z₂^(n+1,M) + P_{αK}(η₂^(n+1,M) − z₂^(n+1,M) − u₁^(n+1,L)) ).

Let us recall that now we are considering functionals as in Proposition 3.33 with ϕ(s) = s, K = I, and Ω = Ω_i, i = 1, 2. From Proposition 3.33 and Remark 3.34 we get that ξ₁^(n+1,L), and consequently u₁^(n+1,L), is optimal, i.e., −2η₁^(n+1,L) ∈ ∂F̂₁(ξ₁^(n+1,L)), if and only if there exists an M₁^(n+1) = (M₀,₁^(n+1), M̄₁^(n+1)) ∈ V₁ × V₁^d with |M̄₁^(n+1)(x)| ≤ 2α for all x ∈ Ω₁ such that

⟨M̄₁^(n+1)(x), (∇(u₁^(n+1,L) + ũ₂^(n)))(x)⟩_{R^d} + 2αϕ(|(∇(u₁^(n+1,L) + ũ₂^(n)))(x)|) = 0,   (3.203)

−2(u₁^(n+1,L)(x) − z₁^(n+1,L)(x)) − div M̄₁^(n+1)(x) − 2η₁^(n+1,L)(x) = 0,   (3.204)

for all x ∈ Ω₁. Analogously we get that ξ₂^(n+1,M), and consequently u₂^(n+1,M), is optimal, i.e., −2η₂^(n+1,M) ∈ ∂F̂₂(ξ₂^(n+1,M)), if and only if there exists an M₂^(n+1) = (M₀,₂^(n+1), M̄₂^(n+1)) ∈ V₂ × V₂^d with |M̄₂^(n+1)(x)| ≤ 2α for all x ∈ Ω₂ such that

⟨M̄₂^(n+1)(x), (∇(u₁^(n+1,L) + u₂^(n+1,M)))(x)⟩_{R^d} + 2αϕ(|(∇(u₁^(n+1,L) + u₂^(n+1,M)))(x)|) = 0,   (3.205)

−2(u₂^(n+1,M)(x) − z₂^(n+1,M)(x)) − div M̄₂^(n+1)(x) − 2η₂^(n+1,M)(x) = 0,   (3.206)

for all x ∈ Ω₂. Since (M̄₁^(n)(x))_{n∈N} is bounded for all x ∈ Ω₁ and (M̄₂^(n)(x))_{n∈N} is bounded for all x ∈ Ω₂, there exist convergent subsequences (M̄₁^(n_k)(x))_{k∈N} and (M̄₂^(n_k)(x))_{k∈N}. Let us denote by M̄₁^(∞)(x) and M̄₂^(∞)(x) the respective limits of the sequences. For simplicity we rename such sequences by (M̄₁^(n)(x))_{n∈N} and (M̄₂^(n)(x))_{n∈N}.

Note that, by Lemma 3.37 (or simply from (3.204) and (3.206)), the sequences (η₁^(n,L))_{n∈N} and (η₂^(n,M))_{n∈N} are also bounded. Hence there exist convergent subsequences which we denote, for simplicity, again by (η₁^(n,L))_{n∈N} and (η₂^(n,M))_{n∈N}, with limits η_i^(∞), i = 1, 2. By taking in (3.203)-(3.206) the limits for n → ∞ we obtain

⟨M̄₁^(∞)(x), (∇(u₁^(∞) + u₂^(∞)))(x)⟩_{R^d} + 2αϕ(|(∇(u₁^(∞) + u₂^(∞)))(x)|) = 0 for all x ∈ Ω₁,

−2(u₁^(∞)(x) − z₁^(∞)(x)) − div M̄₁^(∞)(x) − 2η₁^(∞)(x) = 0 for all x ∈ Ω₁,

⟨M̄₂^(∞)(x), (∇(u₁^(∞) + u₂^(∞)))(x)⟩_{R^d} + 2αϕ(|(∇(u₁^(∞) + u₂^(∞)))(x)|) = 0 for all x ∈ Ω₂,

−2(u₂^(∞)(x) − z₂^(∞)(x)) − div M̄₂^(∞)(x) − 2η₂^(∞)(x) = 0 for all x ∈ Ω₂.

Since supp η₁^(∞) = Γ₁ and supp η₂^(∞) = Γ₂ we have

⟨M̄₁^(∞)(x), (∇u^(∞))(x)⟩_{R^d} + 2αϕ(|(∇u^(∞))(x)|) = 0 for all x ∈ Ω₁,

−2K*((Ku^(∞))(x) − g(x)) − div M̄₁^(∞)(x) = 0 for all x ∈ Ω₁ \ Γ₁,   (3.207)

⟨M̄₂^(∞)(x), (∇u^(∞))(x)⟩_{R^d} + 2αϕ(|(∇u^(∞))(x)|) = 0 for all x ∈ Ω₂,

−2K*((Ku^(∞))(x) − g(x)) − div M̄₂^(∞)(x) = 0 for all x ∈ Ω₂ \ Γ₂.   (3.208)

Observe now that from Proposition 3.33 we also have that 0 ∈ ∂J(u^(∞)) if and only if there exists M^(∞) = (M₀^(∞), M̄^(∞)) with |M̄^(∞)(x)| ≤ 2α for all x ∈ Ω such that

⟨M̄^(∞)(x), (∇u^(∞))(x)⟩_{R^d} + 2αϕ(|(∇u^(∞))(x)|) = 0 for all x ∈ Ω,

−2K*((Ku^(∞))(x) − g(x)) − div M̄^(∞)(x) = 0 for all x ∈ Ω.   (3.209)

Note that M̄_j^(∞)(x), j = 1, 2, for x ∈ Ω₁ ∩ Ω₂ satisfies both (3.207) and (3.208).

Hence let us choose

M̄^(∞)(x) = M̄₁^(∞)(x) if x ∈ Ω₁ \ Γ₁, and M̄^(∞)(x) = M̄₂^(∞)(x) if x ∈ (Ω₂ \ Ω₁) ∪ Γ₁.

With this choice of M^(∞), equations (3.207)-(3.209) are valid and hence u^(∞) is optimal in Ω.

Remark 3.39 (i) If ∇u^(∞)(x) ≠ 0 for x ∈ Ω_j, j = 1, 2, then M̄_j^(∞) is given as in equation (3.179) by

M̄_j^(∞)(x) = −2α (∇u^(∞)|_{Ω_j})(x) / |(∇u^(∞)|_{Ω_j})(x)|.

(ii) The boundedness of the sequences (ũ₁^(n))_{n∈N} and (ũ₂^(n))_{n∈N} has been technically used for showing the existence of an optimal decomposition u^(∞) = u₁^(∞) + u₂^(∞) in the proof of Theorem 3.38. Their boundedness is guaranteed, as in Lemma 3.36, by the use of the partition of unity {χ₁, χ₂}. Let us emphasize that there is no other way of obtaining the boundedness of the local sequences (u₁^(n,L))_{n∈N} and (u₂^(n,M))_{n∈N}. In Figure 3.12 we show that the local sequences can become unbounded if we do not modify them by means of the partition of unity.

(iii) Note that for deriving the optimality condition (3.209) for u^(∞) we combined the respective conditions (3.207) and (3.208) for u₁^(∞) and u₂^(∞). In doing that, we took strong advantage of the overlapping property of the subdomains, thus avoiding a fine analysis of η₁^(∞) and η₂^(∞) on the interfaces Γ₁ and Γ₂. This is the major advantage of this analysis with respect to the one provided in [44] for nonoverlapping domain decompositions.
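The role of the partition of unity in (ii) can be seen in a toy computation: any vector supported on the overlap can be shifted between the two components without changing their sum, so an energy depending on the sum alone gives no control on the components individually, while the χ-recombination always yields components dominated by ‖u‖. The names and the trivial energy below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 12
in_omega1 = np.arange(n) < 8          # Omega_1 = {0,...,7}
in_omega2 = np.arange(n) >= 4         # Omega_2 = {4,...,11}, overlap {4,...,7}
u1 = rng.standard_normal(n) * in_omega1
u2 = rng.standard_normal(n) * in_omega2
w = (in_omega1 & in_omega2).astype(float)   # any vector supported on the overlap

def energy(u):                        # stand-in for J: depends on the sum only
    return np.sum(u ** 2)

u = u1 + u2
t = 1e6                               # arbitrarily large drift between components
drift_invisible = np.isclose(energy((u1 + t * w) + (u2 - t * w)), energy(u))

# chi-recombination: the recombined components obey ||chi_i u|| <= ||u||
chi1 = np.clip((8 - np.arange(n)) / 4.0, 0.0, 1.0)   # supp chi1 in Omega_1
chi2 = 1.0 - chi1                                    # supp chi2 in Omega_2
v1, v2 = chi1 * u, chi2 * u
print(drift_invisible, np.linalg.norm(v1) <= np.linalg.norm(u))
```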
