Academic year: 2021
Preconditioned finite element method

Let $V$ be a Hilbert space, $(\cdot,\cdot)_V$ an inner product on $V$, and $\|\cdot\|_V$ the corresponding induced norm. Let $a$ be a coercive, continuous, bilinear form on $V$; that is, $a:V\times V\to\mathbb{R}$ and there exist $m,M$, $0<m\le M$, such that for all $u,v,w\in V$, $\alpha,\beta\in\mathbb{R}$,

$$a(\alpha u+\beta v,w)=\alpha a(u,w)+\beta a(v,w),\quad a(u,\alpha v+\beta w)=\alpha a(u,v)+\beta a(u,w)\quad\text{(bilinearity)},$$

$$|a(u,v)|\le M\|u\|_V\|v\|_V\quad\text{(continuity)},\qquad a(v,v)\ge m\|v\|_V^2\quad\text{(coercivity)}.$$

Let $V'$ be the set of all continuous, linear forms on $V$.

Then the following Lax-Milgram results hold:

(LM1) For any $F\in V'$, there exists a unique element $u=u_F\in V$ such that $a(u,v)=F(v)$, $\forall\, v\in V$.

(LM2) If, moreover, $V_h$ is a finite-dimensional subspace of $V$, then to $F$ we can also associate an element $u_h\in V_h$ such that $a(u_h,v_h)=F(v_h)$, $\forall\, v_h\in V_h$, which is uniquely defined too.

Approximating $u$ by $u_h$

Intuitively, such an element $u_h$ can be used as an approximation of $u$ if $V_h$ belongs to a family of subspaces $\{V_h\}_{h\to 0}$ of increasing dimension, such that the closure of $\cup_{h\to 0}V_h$ coincides with $V$. In fact, it can be shown that a consistency hypothesis on $\{V_h\}_{h\to 0}$ (implying the latter property) yields the result:

$$h\to 0\ \Rightarrow\ \|u-u_h\|_V\to 0. \qquad\text{(conv)}$$

Consistency of $\{V_h\}_{h\to 0}$ in $V$. $\{V_h\}_{h\to 0}$, $V_h\subset V$, is said to be consistent in $V$ if there exist a subspace $\mathcal{V}\subset V$ dense in $V$ (with respect to $\|\cdot\|_V$) and operators $R_h:\mathcal{V}\to V_h$ such that, for any $v\in\mathcal{V}$, $\|R_h(v)-v\|_V\to 0$ as $h\to 0$ ($R_h$ might be an interpolation operator).

Let us show that the consistency of $\{V_h\}_{h\to 0}$ implies (conv). First we prove that the error $\|u-u_h\|_V$ is proportional to the minimal error we can have with $V_h$. Note that

$$a(u,v_h)=F(v_h),\quad a(u_h,v_h)=F(v_h)\ \Rightarrow\ a(u-u_h,v_h)=0\quad\forall\, v_h\in V_h,$$

and this remark implies

$$m\|u-u_h\|_V^2\le a(u-u_h,u-u_h)=a(u-u_h,u-v_h)\le M\|u-u_h\|_V\,\|u-v_h\|_V,$$

$$\|u-u_h\|_V\le \frac{M}{m}\,\inf_{v_h\in V_h}\|u-v_h\|_V. \qquad\text{(cea)}$$

Now we can prove (conv). Let $v\in\mathcal{V}$ be such that $\|v-u\|_V<\varepsilon$. By (cea) and the consistency hypothesis, if $h<h_\varepsilon$ ($h$ is suitably small), then

$$\|u-u_h\|_V\le \frac{M}{m}\|u-R_h(v)\|_V\le \frac{M}{m}\big(\|u-v\|_V+\|v-R_h(v)\|_V\big)<\frac{M}{m}\,2\varepsilon.$$


How to compute $u_h$

Let $N_h$ be the dimension of $V_h$, and $\varphi_i$, $i=1,\dots,N_h$, a basis for $V_h$. Then $u_h=\sum_{j=1}^{N_h}(u_h)_j\varphi_j$ and the condition $a(u_h,v_h)=F(v_h)$, $v_h\in V_h$, can be rewritten as follows:

$$\sum_{j=1}^{N_h}(u_h)_j\,a(\varphi_j,\varphi_i)=F(\varphi_i),\quad i=1,\dots,N_h.$$

So the $(u_h)_j$ defining $u_h$ can be obtained by solving a linear system $A\mathbf{x}=\mathbf{b}$, where $a_{ij}=a(\varphi_j,\varphi_i)$, $b_i=F(\varphi_i)$, $1\le i,j\le N_h$. It is important to notice that the symmetric part of the coefficient matrix $A$ is positive definite, that is, $\mathbf{z}^TA\mathbf{z}>0$ for all $\mathbf{z}\in\mathbb{R}^{N_h}$, $\mathbf{z}\ne\mathbf{0}$. In fact, by the coercivity of $a$ we have

$$\mathbf{z}^TA\mathbf{z}=\sum_{ij}z_i\,a(\varphi_j,\varphi_i)\,z_j=a\Big(\sum_j z_j\varphi_j,\ \sum_i z_i\varphi_i\Big)\ge m\Big\|\sum_i z_i\varphi_i\Big\|_V^2>0$$

unless the $z_i$ are all null (the $\varphi_i$ are assumed linearly independent).
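As a concrete illustration of assembling and checking such a system, here is a minimal 1D sketch (an assumed model problem, not the text's 2D setting): P1 elements on $(0,1)$ for the form $a(u,v)=\int u'v'+\int u'v+\int uv$, whose symmetric part is positive definite by coercivity:

```python
import numpy as np

def assemble_1d(n):
    """A_ij = a(phi_j, phi_i) for a(u,v) = int u'v' + int u'v + int uv on (0,1),
    P1 hat functions on a uniform grid, zero Dirichlet boundary conditions."""
    h = 1.0 / n
    K = (1 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # local int phi_j' phi_i'
    C = np.array([[-0.5, 0.5], [-0.5, 0.5]])            # local int phi_j' phi_i
    M = (h / 6) * np.array([[2.0, 1.0], [1.0, 2.0]])    # local int phi_j phi_i
    A = np.zeros((n + 1, n + 1))
    for e in range(n):                  # sum the element contributions
        A[e:e + 2, e:e + 2] += K + C + M
    return A[1:-1, 1:-1]                # keep only the interior (free) nodes

A = assemble_1d(16)
lam_min = np.linalg.eigvalsh(0.5 * (A + A.T)).min()
print(lam_min > 0)  # the symmetric part of A is positive definite
```

Note that the convection term contributes only a skew-symmetric part here, so the symmetric part is the (positive definite) stiffness-plus-mass matrix.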

Example: a differential problem solved by the finite element method. Assume that $u:\Omega\to\mathbb{R}$ is the unique solution of the differential problem

$$-\nabla\cdot(\alpha\nabla u)+\beta\cdot\nabla u+\gamma u=f,\ \ x\in\Omega,\qquad u=\varphi,\ \ x\in\Gamma_D,\qquad \frac{\partial u}{\partial n_c}=\psi,\ \ x\in\Gamma_N.$$

Here $\Omega$ is an open set in $\mathbb{R}^d$, $\Gamma_D$ and $\Gamma_N$ are open subsets of $\partial\Omega$ such that $\partial\Omega=\overline{\Gamma}_D\cup\overline{\Gamma}_N$, $\alpha:\Omega\to\mathbb{R}^{d\times d}$, $\beta:\Omega\to\mathbb{R}^d$, $\gamma,f:\Omega\to\mathbb{R}$, and $\partial u/\partial n_c$ denotes the conormal derivative of $u$.

Then, for all $v$ with $v|_{\Gamma_D}=0$,

$$a(u,v):=\int_\Omega \alpha\nabla u\cdot\nabla v+\int_\Omega(\beta\cdot\nabla u)\,v+\int_\Omega\gamma uv=\int_\Omega fv+\int_{\Gamma_N}\psi v\,d\sigma.$$

If we set $u=u_\varphi+w$ with $u_\varphi,w:\Omega\to\mathbb{R}$, $u_\varphi|_{\Gamma_D}=\varphi$ and $w|_{\Gamma_D}=0$, then the latter equation becomes:

$$a(w,v)=\int_\Omega fv+\int_{\Gamma_N}\psi v\,d\sigma-a(u_\varphi,v)=:F(v).$$

So, we have the following

Problem. Find $w$, $w|_{\Gamma_D}=0$, such that $a(w,v)=F(v)$, $\forall\, v$, $v|_{\Gamma_D}=0$.

Moreover, the functions $w,v$ must also be such that $a(w,v)$ and $F(v)$ are well defined; that is, we also require $w,v\in H^1(\Omega)$, where

$$H^1(\Omega)=\{v\in L^2(\Omega):\ D_iv\in L^2(\Omega),\ i=1,\dots,d\}$$

($D_iv\in L^2(\Omega)$ iff there exists $g_i\in L^2(\Omega)$ such that $\int_\Omega g_i\phi=-\int_\Omega v\,D_i\phi$ for all $\phi\in C_0^\infty(\Omega)$; one then sets $D_iv:=g_i$).

Briefly: find $w\in H^1_{0,\Gamma_D}(\Omega)$ such that $a(w,v)=F(v)$, $\forall\, v\in H^1_{0,\Gamma_D}(\Omega)$.
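To see where the boundary integral over $\Gamma_N$ comes from, it may help to write out the simplest one-dimensional instance (an illustrative assumption: $\Omega=(0,1)$, $\Gamma_D=\{0\}$, $\Gamma_N=\{1\}$, scalar $\alpha$). Multiplying the equation by a test function $v$ with $v(0)=0$ and integrating by parts gives

```latex
\int_0^1\bigl(-(\alpha u')'+\beta u'+\gamma u\bigr)v\,dx
  =\int_0^1\alpha u'v'\,dx-\bigl[\alpha u'v\bigr]_0^1
   +\int_0^1\beta u'v\,dx+\int_0^1\gamma uv\,dx .
```

Since $v(0)=0$ while $\alpha u'(1)=\psi$ on $\Gamma_N$, the boundary term contributes exactly $-\psi v(1)$; moving it to the right-hand side yields $a(u,v)=\int_0^1 fv\,dx+\psi v(1)$, the 1D analogue of the weak formulation above.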

Under suitable conditions on the data $\partial\Omega,\alpha,\beta,\gamma,\varphi,\psi$, the space $V=H^1_{0,\Gamma_D}(\Omega)$ is a Hilbert space with respect to the inner product $(u,v)_V=(u,v)_{1,\Omega}=(u,v)_{L^2(\Omega)}+\sum_{i=1}^d(D_iu,D_iv)_{L^2(\Omega)}$, and the forms $a$ and $F$ are well defined and satisfy the conditions required for the Lax-Milgram results (LM1) and (LM2) to hold. So the $w$ of the problem is well defined and, for any finite-dimensional subspace $V_h$ of $V=H^1_{0,\Gamma_D}(\Omega)$, a function $w_h\in V_h$ is well defined such that $a(w_h,v_h)=F(v_h)$, $\forall\, v_h\in V_h$.

Definition of $w_h$ convergent to $w$

In order to obtain functions $w_h$ convergent to $w$ as $h\to 0$, we only have to define $V_h$ such that $\{V_h\}_{h\to 0}$ is consistent in $H^1_{0,\Gamma_D}(\Omega)$. Let us do this in the case $d=2$, $\Omega$ a polygon, by using the finite element method.

Let $\tau_h$ be a triangulation of $\Omega$ of diameter $h$, that is, a set of triangles $T$ such that

• $T\in\tau_h\ \Rightarrow\ T=\overline{T}\subset\overline{\Omega}$ and $\operatorname{diam}(T)=:h_T\le h:=\max_{T\in\tau_h}h_T$,

• $\cup_{T\in\tau_h}T=\overline{\Omega}$,

• $T_1,T_2\in\tau_h\ \Rightarrow\ T_1\cap T_2$ is a common vertex, a common side, the whole triangle $T_1=T_2$, or the empty set.

To each $T$ in the following we also need to associate the number $\rho_T$, the diameter of the circle inscribed in $T$. Let $S_h$ be the set of all functions $p:\overline{\Omega}\to\mathbb{R}$ such that $p|_T$ is a degree-1 polynomial (in $x_1$ and $x_2$), and set $\overline{V}_h=S_h\cap C^0(\overline{\Omega})$.

Let $i=1,2,\dots,\overline{N}_h$ be the nodes of the triangulation $\tau_h$ (the vertices of the triangles of $\tau_h$) and denote by $\varphi_i$ the elements of $\overline{V}_h$ satisfying the identities $\varphi_i(j)=\delta_{ij}$, $i,j=1,2,\dots,\overline{N}_h$. Obviously any element $v$ of $\overline{V}_h$ can be expressed as $v=\sum_{i=1}^{\overline{N}_h}v(i)\varphi_i$.

Choose $V_h=\overline{V}_h\cap H^1_{0,\Gamma_D}(\Omega)=\operatorname{Span}\{\varphi_1,\dots,\varphi_{N_h}\}$, where $1,\dots,N_h$ are the nodes of the triangulation $\tau_h$ which are not on $\Gamma_D$. We want to show that $\{V_h\}_{h\to 0}$ is consistent in $H^1_{0,\Gamma_D}(\Omega)$, so that the well-defined functions $w_h=\sum_{j=1}^{N_h}(w_h)_j\varphi_j\in V_h$ such that $a(w_h,v_h)=F(v_h)$, $\forall\, v_h\in V_h$, strongly converge to $w$ as $h\to 0$, i.e. $\|w-w_h\|_V\to 0$.

First we introduce the space $\mathcal{V}=H^1_{0,\Gamma_D}(\Omega)\cap H^2(\Omega)$, contained and dense in $V=H^1_{0,\Gamma_D}(\Omega)$ (here $H^2(\Omega)=\{v\in H^1(\Omega):\ D_{ij}v\in L^2(\Omega)\}$, $(u,v)_{2,\Omega}=(u,v)_{1,\Omega}+\sum_{ij}(D_{ij}u,D_{ij}v)_{0,\Omega}$, $|v|^2_{2,\Omega}=\sum_{ij}\|D_{ij}v\|^2_{0,\Omega}$). Now let $v$ be an element of such a $\mathcal{V}$. Notice that $v\in C^0(\overline{\Omega})$ (...), thus the function $\Pi_hv=\sum_i v(i)\varphi_i$, interpolating $v$ at all the nodes of the triangulation, is well defined. Moreover, $\Pi_hv$ is a function of $H^1(\Omega)$, and one can measure the interpolation error using the norm of $V$:

$$\|v-\Pi_hv\|_{1,\Omega}\le c_e\,h\,|v|_{2,\Omega},\qquad c_e\ \text{constant}.$$

The latter inequality holds if the family of triangulations $\{\tau_h\}_{h\to 0}$ is chosen regular, that is, if there exists a constant $c_r$ such that $h_T/\rho_T\le c_r$ for all $T\in\tau_h$ and all $h$. Thus we have the operator $R_h:\mathcal{V}\to V_h$ required by the consistency hypothesis: it is the interpolation operator $\Pi_h$.

Observe that, in case the function $w$ is in $H^2(\Omega)$, we can say more: $\|w-w_h\|_V\to 0$ at least at the same rate as $h$. In fact,

$$\|w-w_h\|_V\le \frac{M}{m}\|w-\Pi_hw\|_V\le \frac{M}{m}\,c_e\,h\,|w|_{2,\Omega}.$$

Computation of $w_h$
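This $O(h)$ behaviour is easy to observe numerically. Below is a 1D sketch (with assumed data: $-u''=\pi^2\sin(\pi x)$, $u(0)=u(1)=0$, exact solution $u=\sin(\pi x)$), where halving $h$ should roughly halve the $H^1$-seminorm error:

```python
import numpy as np

GAUSS2 = [(0.5 - 0.5 / np.sqrt(3.0), 0.5), (0.5 + 0.5 / np.sqrt(3.0), 0.5)]

def h1_error(n):
    """Solve -u'' = pi^2 sin(pi x) with P1 elements (h = 1/n) and return
    the H1-seminorm error |u - u_h|_{1,(0,1)} (exact u = sin(pi x))."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    K = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for e in range(n):
        K[e:e + 2, e:e + 2] += (1 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        for g, w in GAUSS2:  # 2-point Gauss quadrature of the load term
            f = np.pi ** 2 * np.sin(np.pi * (x[e] + g * h))
            b[e] += w * h * f * (1 - g)
            b[e + 1] += w * h * f * g
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], b[1:-1])
    err2 = 0.0
    for e in range(n):
        du_h = (u[e + 1] - u[e]) / h          # u_h' is constant on each element
        for g, w in GAUSS2:
            err2 += w * h * (np.pi * np.cos(np.pi * (x[e] + g * h)) - du_h) ** 2
    return np.sqrt(err2)

e1, e2 = h1_error(8), h1_error(16)
print(e1 / e2)  # should be close to 2, i.e. the error is O(h)
```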

In order to compute $w_h=\sum_{j=1}^{N_h}(w_h)_j\varphi_j$ one has to solve the linear system

$$A\mathbf{x}=\mathbf{b},\qquad a_{ij}=a(\varphi_j,\varphi_i),\quad b_i=F(\varphi_i).$$


In fact, $(w_h)_j=(A^{-1}\mathbf{b})_j$. More specifically, in our example, if $s(g)$ denotes the set $\operatorname{supp}(g)$, then the entries of $A$ and $\mathbf{b}$ are:

$$a_{ij}=\int_{s(\varphi_j)\cap s(\varphi_i)}\alpha\nabla\varphi_j\cdot\nabla\varphi_i+\int_{s(\varphi_j)\cap s(\varphi_i)}(\beta\cdot\nabla\varphi_j)\,\varphi_i+\int_{s(\varphi_j)\cap s(\varphi_i)}\gamma\varphi_j\varphi_i,$$

$$b_i=\int_{s(\varphi_i)}f\varphi_i+\int_{\Gamma_N\cap s(\varphi_i)}\psi\varphi_i\,d\sigma-\int_{s(\varphi_i)\cap s(u_\varphi)}\alpha\nabla u_\varphi\cdot\nabla\varphi_i-\int_{s(\varphi_i)\cap s(u_\varphi)}(\beta\cdot\nabla u_\varphi)\,\varphi_i-\int_{s(\varphi_i)\cap s(u_\varphi)}\gamma u_\varphi\varphi_i,$$

$1\le i,j\le N_h$. Here the $\varphi_i$ are the Lagrange basis of $V_h$ ($\varphi_i(j)=\delta_{ij}$). So the $(w_h)_j$ are the values of $w_h$ at the nodes $j$ ($(w_h)_j=w_h(j)$), and the matrix $A$ is sparse: in fact, for any fixed $i$, the number of $j$ such that the measure of $s(\varphi_j)\cap s(\varphi_i)$ is not zero is smaller than a constant (with respect to $h$) depending upon the regularity parameter $c_r$ of the triangulations (such a constant is a bound for the number of nodes $j$ linked directly to $i$). But these properties are far from essential: it would be more important to know that the matrix $A$ is well conditioned. Unfortunately, even in the case where the differential problem is simply the Poisson problem ($\alpha=I$, $\beta=0$, $\gamma=0$, $\Gamma_N=\emptyset$), the matrix $A$ has a condition number growing as $(1/h)^2$ if the Lagrange functions are used to represent $w_h$. (This estimate of the condition number holds more generally for the convection-diffusion problem, if the triangulations are quasi-uniform (i.e. $h_T\ge c_uh$, $\forall\, T\in\tau_h$, $\forall\, h$) and regular.)
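This growth can be reproduced in a quick 1D analogue (an illustration, assuming the 1D Poisson problem on $(0,1)$), showing $\mu_2(A)$ growing like $(1/h)^2$:

```python
import numpy as np

def nodal_stiffness(n):
    """1D Poisson stiffness matrix in the Lagrange (nodal) basis, h = 1/n."""
    h = 1.0 / n
    A = np.zeros((n + 1, n + 1))
    for e in range(n):
        A[e:e + 2, e:e + 2] += (1 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return A[1:-1, 1:-1]   # interior nodes only

conds = {n: np.linalg.cond(nodal_stiffness(n)) for n in (8, 16, 32, 64)}
for n, c in conds.items():
    print(n, round(c, 1))
# each halving of h multiplies the condition number by about 4
```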

So, consider an arbitrary basis $\{\tilde\varphi_i\}$ of $V_h$, and represent $w_h$ in terms of this basis: $w_h=\sum_{j=1}^{N_h}(\tilde w_h)_j\tilde\varphi_j$. Then $(\tilde w_h)_j=(\tilde A^{-1}\tilde{\mathbf{b}})_j$, where

$$\tilde a_{ij}=a(\tilde\varphi_j,\tilde\varphi_i),\qquad \tilde b_i=F(\tilde\varphi_i),$$

$$\tilde a_{ij}=\int_{s(\tilde\varphi_j)\cap s(\tilde\varphi_i)}\alpha\nabla\tilde\varphi_j\cdot\nabla\tilde\varphi_i+\int_{s(\tilde\varphi_j)\cap s(\tilde\varphi_i)}(\beta\cdot\nabla\tilde\varphi_j)\,\tilde\varphi_i+\int_{s(\tilde\varphi_j)\cap s(\tilde\varphi_i)}\gamma\tilde\varphi_j\tilde\varphi_i,$$

$$\tilde b_i=\int_{s(\tilde\varphi_i)}f\tilde\varphi_i+\int_{\Gamma_N\cap s(\tilde\varphi_i)}\psi\tilde\varphi_i\,d\sigma-\int_{s(\tilde\varphi_i)\cap s(u_\varphi)}\alpha\nabla u_\varphi\cdot\nabla\tilde\varphi_i-\int_{s(\tilde\varphi_i)\cap s(u_\varphi)}(\beta\cdot\nabla u_\varphi)\,\tilde\varphi_i-\int_{s(\tilde\varphi_i)\cap s(u_\varphi)}\gamma u_\varphi\tilde\varphi_i.$$

If the $\tilde\varphi_i$ are such that $\mu_2(\tilde A)<\mu_2(A)$ (...), then we can solve the system $\tilde A\mathbf{x}=\tilde{\mathbf{b}}$, better conditioned than $A\mathbf{x}=\mathbf{b}$, and then, if needed, recover $\mathbf{w}_h=(w_h(j))_{j=1}^{N_h}=A^{-1}\mathbf{b}$ by solving the system $S\mathbf{w}_h=\tilde{\mathbf{w}}_h$, where $\tilde{\mathbf{w}}_h=((\tilde w_h)_j)_{j=1}^{N_h}=\tilde A^{-1}\tilde{\mathbf{b}}$. (Of course, all this is convenient only if $S$ is a matrix of much lower complexity than $A$.)

Let us prove this assertion in detail. Let $v_h$ be a generic element of $V_h$ and let $S$ be the matrix such that $\tilde{\mathbf{v}}_h=S\mathbf{v}_h$, where $\mathbf{v}_h=[(v_h)_1\cdots(v_h)_{N_h}]^T$ and $\tilde{\mathbf{v}}_h=[(\tilde v_h)_1\cdots(\tilde v_h)_{N_h}]^T$ are such that

$$v_h\in V_h,\qquad \sum_{j=1}^{N_h}(v_h)_j\varphi_j=v_h=\sum_{j=1}^{N_h}(\tilde v_h)_j\tilde\varphi_j.$$

Then we have

$$\varphi_s=\sum_{j=1}^{N_h}[S^T]_{sj}\tilde\varphi_j,\quad s=1,\dots,N_h,$$

and therefore

$$a_{ij}=a\Big(\sum_{r=1}^{N_h}[S^T]_{jr}\tilde\varphi_r,\ \sum_{m=1}^{N_h}[S^T]_{im}\tilde\varphi_m\Big)=\sum_{r,m}[S^T]_{im}\,a(\tilde\varphi_r,\tilde\varphi_m)\,[S]_{rj}=\sum_{r,m}[S^T]_{im}\,\tilde a_{mr}\,[S]_{rj}=[S^T\tilde AS]_{ij},$$


$$b_i=F\Big(\sum_{j=1}^{N_h}[S^T]_{ij}\tilde\varphi_j\Big)=\sum_{j=1}^{N_h}[S^T]_{ij}F(\tilde\varphi_j)=[S^T\tilde{\mathbf{b}}]_i.$$

Thus the equalities $A=S^T\tilde AS$ and $\mathbf{b}=S^T\tilde{\mathbf{b}}$ must hold, and the thesis follows.
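These identities are straightforward to sanity-check numerically. Here is a small sketch with a random invertible $S$ and random stand-ins for $\tilde A$ and $\tilde{\mathbf{b}}$ (all hypothetical data, just exercising the algebra):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
S = rng.normal(size=(n, n)) + n * np.eye(n)  # a (generically invertible) change of basis
A_t = rng.normal(size=(n, n))                # stands in for tilde A
b_t = rng.normal(size=n)                     # stands in for tilde b

A = S.T @ A_t @ S                            # A = S^T tilde A S
b = S.T @ b_t                                # b = S^T tilde b

x = np.linalg.solve(A, b)                    # "nodal" coefficients  w_h
x_t = np.linalg.solve(A_t, b_t)              # "hierarchical" coefficients  tilde w_h
assert np.allclose(S @ x, x_t)               # indeed  S w_h = tilde w_h
print("ok")
```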

In the Poisson case, $-\Delta u=f$, $x\in\Omega$, $u=\varphi$, $x\in\partial\Omega$, a basis $\{\tilde\varphi_i\}$ for $V_h$ can be introduced yielding a matrix $\tilde A$ whose condition number $\mu_2(\tilde A)$ grows like $(\log_2(1/h))^2$. (An analogous result in the convection-diffusion case ($a$ not symmetric) was not known in 1995!) We now see (not in all details) that this is possible by using a particular family of triangulations $\tau_h$.

Let $\tau_0$ be a coarse triangulation of $\Omega$. Let us define $\tau_1$: for each triangle $T$ of $\tau_0$, draw the triangle whose vertices are the midpoints of the sides of $T$. The four triangles thus obtained (all similar to $T$) are the triangles of $\tau_1$. Note that if $h_0$ is the diameter of $\tau_0$ ($h_0=\max\{h_T:\ T\in\tau_0\}$), then $h_1$, the diameter of $\tau_1$, is equal to $2^{-1}h_0$. Note also that the nodes of $\tau_0$ are nodes of $\tau_1$; the new nodes of $\tau_1$ are the midpoints of the sides of $\tau_0$. We can continue in this way, and define the triangulations $\tau_2,\tau_3,\dots,\tau_j,\dots$ (obviously, $\tau_j$ is an abbreviation for $\tau_{h_j}$). The diameter of the generic $\tau_j$ is $h_j=2^{-j}h_0$, and the family of triangulations $\{\tau_j\}_{j=0}^{+\infty}$ is regular and quasi-uniform.

To each $\tau_j$ we can associate the space $V_j$ of the functions which are continuous, null on $\partial\Omega$, and degree-1 polynomials in each $T\in\tau_j$. Note that $V_j\subset V_{j+1}$.

Let $x_{jk}$, $k\in I_j=\{1,\dots,N_j\}$, denote the generic inner node of the triangulation $\tau_j$, and $\{\varphi_{jk}:\ k\in I_j\}$ the Lagrange basis of $V_j$, $\varphi_{jk}(x_{jl})=\delta_{kl}$, $k,l\in I_j$. Obviously, any $v\in V_j$ can be represented as $v=\sum_{k\in I_j}v(x_{jk})\varphi_{jk}$, and, if $\Pi_j$ is the interpolation operator, then

$$v\in C^0(\overline\Omega)\ \to\ \Pi_j(v)=\sum_{k\in I_j}v(x_{jk})\varphi_{jk}.$$

Instead of $v(x_{jk})$ we will write, shortly, $v_{jk}$.

Now that all is defined, consider a function $v\in V_{j+1}$ and observe that

$$\sum_{k\in I_{j+1}}v_{j+1,k}\varphi_{j+1,k}=v=\Pi_{j+1}v=\Pi_jv+(v-\Pi_jv)=\sum_{k\in I_j}v_{jk}\varphi_{jk}+(v-\Pi_jv).$$

Now the question is: what must we add to $V_j$ in order to obtain $V_{j+1}$? This question can be reduced to: which elements of $\{\varphi_{j+1,k}:\ k\in I_{j+1}\}$ are needed to represent $v-\Pi_jv$?

Let $x$ be a point of $\Omega$ and let $T$ be a triangle of $\tau_j$ containing $x$. Let us observe the above quantities and, in particular, the function $v-\Pi_jv$ on $T$. Call $x_{jk_1},x_{jk_2},x_{jk_3}$ ($k_1,k_2,k_3\in I_j$) the vertices of $T$. Note that they are also nodes of $\tau_{j+1}$, thus $x_{jk_i}=x_{j+1,\rho(k_i)}$ for some $\rho(k_i)\in I^o_{j+1}=\{k\in I_{j+1}:\ x_{j+1,k}\ \text{is a node of}\ \tau_j\}$. Call $x_{j+1,\sigma(k_1k_2)}$, $\sigma(k_1k_2)\in I^n_{j+1}=I_{j+1}\setminus I^o_{j+1}$, the midpoint of the side $x_{jk_1}x_{jk_2}$ of $T$, which is a new node: a node of $\tau_{j+1}$ but not of $\tau_j$. Draw the restrictions to $T$ of the functions $v$ and $\Pi_jv$. Then it is clear that, on $T$,

$$\begin{aligned}
v-\Pi_jv&=\big[v_{j+1,\sigma(k_1k_2)}-\tfrac12(v_{j,k_1}+v_{j,k_2})\big]\varphi_{j+1,\sigma(k_1k_2)}\\
&\quad+\big[v_{j+1,\sigma(k_2k_3)}-\tfrac12(v_{j,k_2}+v_{j,k_3})\big]\varphi_{j+1,\sigma(k_2k_3)}\\
&\quad+\big[v_{j+1,\sigma(k_3k_1)}-\tfrac12(v_{j,k_3}+v_{j,k_1})\big]\varphi_{j+1,\sigma(k_3k_1)}\\
&=\tilde v_{j,\sigma(k_1k_2)}\varphi_{j+1,\sigma(k_1k_2)}+\tilde v_{j,\sigma(k_2k_3)}\varphi_{j+1,\sigma(k_2k_3)}+\tilde v_{j,\sigma(k_3k_1)}\varphi_{j+1,\sigma(k_3k_1)}\\
&=\sum_{k\in I^n_{j+1}}\tilde v_{jk}\varphi_{j+1,k}
\end{aligned}$$


where

$$\tilde v_{jk}=v_{j+1,k}-\tfrac12(v_{j,k'}+v_{j,k''}),\quad k\in I^n_{j+1},$$

and $k',k''\in I_j$ are such that $x_{jk'}=x_{j+1,\rho(k')}$, $x_{jk''}=x_{j+1,\rho(k'')}$ are the endpoints of the side of $\tau_j$ having $x_{j+1,k}$ as midpoint. Thus, if $\psi_{jk}:=\varphi_{j+1,k}=\varphi_{j+1,\sigma(k'k'')}$, $k\in I^n_{j+1}$, then

$$v\in V_{j+1}\ \Rightarrow\ v=\sum_{k\in I_{j+1}}v_{j+1,k}\varphi_{j+1,k}=\sum_{k\in I_j}v_{jk}\varphi_{jk}+\sum_{k\in I^n_{j+1}}\tilde v_{jk}\psi_{jk}.$$

It follows that $V_{j+1}=V_j+W_j$, $W_j:=\operatorname{Span}\{\psi_{jk}:\ k\in I^n_{j+1}\}$, and the set $\{\varphi_{jk}:\ k\in I_j\}\cup\{\psi_{jk}:\ k\in I^n_{j+1}\}$ is an alternative basis of $V_{j+1}$. So, the answer to the question is: the Lagrangian functions of $V_{j+1}$ corresponding to the new nodes.

Observe that the $v_{jk}$, $k\in I_j$, and the $\tilde v_{jk}$, $k\in I^n_{j+1}$, can be computed from the $v_{j+1,k}$, $k\in I_{j+1}$, via the formulas

$$v_{jk}=v_{j+1,\rho(k)},\quad k\in I_j,$$

$$\tilde v_{jk}=v_{j+1,k}-\tfrac12\big[v_{j+1,\rho(k')}+v_{j+1,\rho(k'')}\big],\quad k\in I^n_{j+1}.$$

Vice versa, the $v_{j+1,k}$, $k\in I_{j+1}$, can be computed from the $v_{jk}$, $k\in I_j$, and from the $\tilde v_{jk}$, $k\in I^n_{j+1}$, via the formulas

$$v_{j+1,\rho(k)}=v_{jk},\quad k\in I_j,$$

$$v_{j+1,k}=\tilde v_{jk}+\tfrac12\big[v_{j,k'}+v_{j,k''}\big],\quad k\in I^n_{j+1}.$$

These formulas can be written in matrix form:

$$\begin{pmatrix}(v_{jk})_{k\in I_j}\\ (\tilde v_{jk})_{k\in I^n_{j+1}}\end{pmatrix}
=\begin{pmatrix}I_{|I_j|}&0\\ B&I_{|I^n_{j+1}|}\end{pmatrix}
\begin{pmatrix}(v_{j+1,\rho(k)})_{k\in I_j}\\ (v_{j+1,k})_{k\in I^n_{j+1}}\end{pmatrix},$$

$$\begin{pmatrix}(v_{j+1,\rho(k)})_{k\in I_j}\\ (v_{j+1,k})_{k\in I^n_{j+1}}\end{pmatrix}
=\begin{pmatrix}I_{|I_j|}&0\\ -B&I_{|I^n_{j+1}|}\end{pmatrix}
\begin{pmatrix}(v_{jk})_{k\in I_j}\\ (\tilde v_{jk})_{k\in I^n_{j+1}}\end{pmatrix},$$

where each row of $B$ has only two nonzero elements, both equal to $-\tfrac12$. Now let $J$ be a positive integer. We are ready to introduce a basis $\tilde\varphi_{J,k}$, $k\in I_J$, of the space $V_J$ yielding a matrix $\tilde A$, $\tilde a_{r,s}=a(\tilde\varphi_{J,s},\tilde\varphi_{J,r})$, $r,s\in I_J$, whose condition number in the Poisson case grows like $O((\log_2(1/h_J))^2)=O((J+\log_2(1/h_0))^2)$, and thus is smaller than the condition number of the matrix $A$, $a_{r,s}=a(\varphi_{J,s},\varphi_{J,r})$, $r,s\in I_J$, yielded by the Lagrange basis $\varphi_{J,k}$, $k\in I_J$, of $V_J$ (recall that $\mu_2(A)=O((1/h_J)^2)=O((2^J/h_0)^2)$).

Observe that if $v\in V_J$ then

$$\sum_{k\in I_J}v_{Jk}\varphi_{Jk}=v=\Pi_Jv=\Pi_0v+\sum_{j=0}^{J-1}(\Pi_{j+1}v-\Pi_jv)=\sum_{k\in I_0}v_{0k}\varphi_{0k}+\sum_{j=0}^{J-1}\sum_{k\in I^n_{j+1}}\tilde v_{jk}\psi_{jk},$$

with $\psi_{jk}=\varphi_{j+1,k}$, $k\in I^n_{j+1}$, $j=0,\dots,J-1$.

It follows that $V_J$ admits the representation

$$V_J=V_{J-1}+W_{J-1}=V_{J-2}+W_{J-2}+W_{J-1}=\dots=V_0+W_0+\dots+W_{J-1}$$

and the set $\{\tilde\varphi_{J,k}:\ k\in I_J\}:=\{\varphi_{0k}:\ k\in I_0\}\cup\{\psi_{0,k}:\ k\in I^n_1\}\cup\dots\cup\{\psi_{J-1,k}:\ k\in I^n_J\}$ is an alternative basis of $V_J$.

Remark. Observe that if $\mathbf{v}_J=(v_{J,k})_{k\in I_J}$ and

$$\tilde{\mathbf{v}}_J=(\tilde v_{J,k})_{k\in I_J}=\big((v_{0,k})_{k\in I_0}\ \ (\tilde v_{0,k})_{k\in I^n_1}\ \cdots\ (\tilde v_{J-1,k})_{k\in I^n_J}\big),$$

then $\tilde{\mathbf{v}}_J=S\mathbf{v}_J=E_0P_0E_1P_1\cdots E_{J-1}P_{J-1}\mathbf{v}_J$, where the $P_k$ are permutation matrices and the $E_k$ are matrices of the form

$$E_0=\begin{pmatrix}I_{|I_0|}&0&\\ B_0&I_{|I^n_1|}&\\ &&I\end{pmatrix},\quad
E_1=\begin{pmatrix}I_{|I_1|}&0&\\ B_1&I_{|I^n_2|}&\\ &&I\end{pmatrix},\quad\dots,\quad
E_{J-1}=\begin{pmatrix}I_{|I_{J-1}|}&0\\ B_{J-1}&I_{|I^n_J|}\end{pmatrix},$$

and the $B_k$, in the definition of the $E_k$, have only two nonzero elements for each row, both equal to $-\tfrac12$. So, $\mathbf{v}_J$ can be computed from $\tilde{\mathbf{v}}_J$ (as well as $\tilde{\mathbf{v}}_J$ from $\mathbf{v}_J$) with $2(|I_J|-|I_0|)$ divisions by 2.
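The whole product $E_0P_0\cdots E_{J-1}P_{J-1}$ can be applied without ever forming $S$ explicitly. Here is a 1D sketch on a dyadic grid (an illustrative analogue of the 2D construction: grid points $k/2^J$ with boundary values included, surpluses computed level by level with the midpoint rule):

```python
import numpy as np

def to_hier(v):
    """Nodal values on a grid of 2**J + 1 points -> list of blocks:
    [coarsest nodal values, surpluses level 1, ..., surpluses level J]."""
    parts = []
    while len(v) > 3:
        # surplus at each new (odd) node: value minus mean of its two neighbours
        parts.append(v[1::2] - 0.5 * (v[:-2:2] + v[2::2]))
        v = v[::2]                      # restrict to the coarser grid
    return [v] + parts[::-1]

def from_hier(blocks):
    """Inverse transform: rebuild the fine-grid nodal values."""
    v = blocks[0]
    for s in blocks[1:]:
        f = np.empty(2 * len(v) - 1)
        f[::2] = v                      # keep the coarse values
        f[1::2] = s + 0.5 * (v[:-1] + v[1:])   # surplus + midpoint average
        v = f
    return v

x = np.linspace(0.0, 1.0, 2 ** 4 + 1)
v = x * (1 - x) * np.exp(x)             # sample nodal values
blocks = to_hier(v)
assert np.allclose(from_hier(blocks), v)    # the transform is invertible
print([len(b) for b in blocks])             # block sizes: [3, 2, 4, 8]
```

For a function that is already piecewise linear on the coarsest grid, all the surplus blocks vanish, which is exactly why the hierarchical coefficients separate the scales.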

The transform of $\mathbf{v}_J$ into $\tilde{\mathbf{v}}_J$ is described in detail here below:

$$\mathbf{v}_J=\big((v_{J,k})_{k\in I_J}\big)\ \to\ P_{J-1}\mathbf{v}_J=\begin{pmatrix}(v_{J,\rho(k)})_{k\in I_{J-1}}\\ \hline (v_{J,k})_{k\in I^n_J}\end{pmatrix}\ \to\ E_{J-1}P_{J-1}\mathbf{v}_J=\begin{pmatrix}(v_{J-1,k})_{k\in I_{J-1}}\\ \hline (\tilde v_{J-1,k})_{k\in I^n_J}\end{pmatrix}$$

$$\to\ P_{J-2}E_{J-1}P_{J-1}\mathbf{v}_J=\begin{pmatrix}(v_{J-1,\rho(k)})_{k\in I_{J-2}}\\ \hline (v_{J-1,k})_{k\in I^n_{J-1}}\\ \hline (\tilde v_{J-1,k})_{k\in I^n_J}\end{pmatrix}\ \to\ E_{J-2}P_{J-2}E_{J-1}P_{J-1}\mathbf{v}_J=\begin{pmatrix}(v_{J-2,k})_{k\in I_{J-2}}\\ \hline (\tilde v_{J-2,k})_{k\in I^n_{J-1}}\\ \hline (\tilde v_{J-1,k})_{k\in I^n_J}\end{pmatrix}$$

$$\to\ \cdots\ \to\ E_0P_0\cdots E_{J-1}P_{J-1}\mathbf{v}_J=\begin{pmatrix}(v_{0,k})_{k\in I_0}\\ \hline (\tilde v_{0,k})_{k\in I^n_1}\\ \hline \vdots\\ \hline (\tilde v_{J-1,k})_{k\in I^n_J}\end{pmatrix}.$$

Theorem. Let $\tau_j$, $V_j$, $\Pi_j$, $j=0,1,\dots,J$, be the triangulations of $\Omega$, the subspaces of $V=H^1_0(\Omega)$ and the interpolation operators $C^0(\overline\Omega)\to V_j$ defined above. For any $v\in V_J$ set

$$\widehat{\|v\|}^2=|\Pi_0v|^2_{1,\Omega}+\sum_{j=0}^{J-1}\sum_{k\in I^n_{j+1}}|(\Pi_{j+1}v-\Pi_jv)(x_{j+1,k})|^2=|\Pi_0v|^2_{1,\Omega}+\sum_{j=0}^{J-1}\sum_{k\in I^n_{j+1}}|\tilde v_{j,k}|^2.$$


Then there exist two positive constants $c_1,c_2$ (depending only on the angles of $\tau_0$) such that

$$c_1\,\frac{\widehat{\|v\|}^2}{J^2}\ \le\ |v|^2_{1,\Omega}\ \le\ c_2\,\widehat{\|v\|}^2.$$

This inequality, involving the coefficients $\tilde v_{J,k}$ of $v\in V_J$ with respect to the hierarchical basis $\tilde\varphi_{J,k}$, $k\in I_J$, is due to Yserentant. It allows us to evaluate the condition number of $\tilde A$, $\tilde a_{r,s}=a(\tilde\varphi_{J,s},\tilde\varphi_{J,r})$, $r,s\in I_J$, in the Poisson case, where $a(u,v)=\int_\Omega\nabla u\cdot\nabla v$.

First note that

$$|\Pi_0v|^2_{1,\Omega}=\Big|\sum_{k\in I_0}v_{0,k}\varphi_{0,k}\Big|^2_{1,\Omega}=\int_\Omega\nabla\Big(\sum_{k\in I_0}v_{0,k}\varphi_{0,k}\Big)\cdot\nabla\Big(\sum_{k\in I_0}v_{0,k}\varphi_{0,k}\Big)=\sum_{k,s\in I_0}v_{0k}v_{0s}\int_\Omega\nabla\varphi_{0k}\cdot\nabla\varphi_{0s},$$

thus the Yserentant inequality can be rewritten as follows:

$$c_1\,\frac{\tilde{\mathbf{v}}_J^T\begin{pmatrix}N&0\\ 0&I\end{pmatrix}\tilde{\mathbf{v}}_J}{J^2}\ \le\ \tilde{\mathbf{v}}_J^TM\tilde{\mathbf{v}}_J\ \le\ c_2\,\tilde{\mathbf{v}}_J^T\begin{pmatrix}N&0\\ 0&I\end{pmatrix}\tilde{\mathbf{v}}_J \qquad\text{(Y)}$$

where $N=\big(\int_\Omega\nabla\varphi_{0r}\cdot\nabla\varphi_{0s}\big)_{r,s\in I_0}$ and $M=\big(\int_\Omega\nabla\tilde\varphi_{J,r}\cdot\nabla\tilde\varphi_{J,s}\big)_{r,s\in I_J}$. Note that $N$ and $M$ are positive definite matrices. Note also that in the case of the Poisson differential problem $-\Delta u=f$, $x\in\Omega$, $u=\varphi$, $x\in\partial\Omega$, the form $a$ is simply $a(u,v)=\int_\Omega\nabla u\cdot\nabla v$, i.e. we have the continuous problem: find $w\in V=H^1_{0,\Gamma_D}(\Omega)$ such that

$$\int_\Omega\nabla w\cdot\nabla v=\int_\Omega fv-\int_\Omega\nabla u_\varphi\cdot\nabla v,\quad\forall\, v\in V=H^1_{0,\Gamma_D}(\Omega),$$

which is reduced first to the discrete problem: find $w_J\in V_J$ such that

$$\int_\Omega\nabla w_J\cdot\nabla v_J=\int_\Omega fv_J-\int_\Omega\nabla u_\varphi\cdot\nabla v_J,\quad\forall\, v_J\in V_J,$$

and then, via the representation $w_J=\sum_{k\in I_J}(\tilde w_J)_k\tilde\varphi_{J,k}$, to the linear system $\tilde A\mathbf{x}=\tilde{\mathbf{b}}$, $\tilde a_{r,s}=\int_\Omega\nabla\tilde\varphi_{J,r}\cdot\nabla\tilde\varphi_{J,s}$, $\tilde b_r=\int_\Omega f\tilde\varphi_{J,r}-\int_\Omega\nabla u_\varphi\cdot\nabla\tilde\varphi_{J,r}$, $(\tilde w_J)_k=(\tilde A^{-1}\tilde{\mathbf{b}})_k$.

Observe that the coefficient matrix $\tilde A$ of this system is exactly the matrix $M$ in (Y). Now we prove that $\mu_2(M)=O\big((\log_2\tfrac{1}{h_J})^2\big)$.

Consider the Cholesky factorization of $N$, $N=L_NL_N^T$, and note that

$$L:=\begin{pmatrix}L_N&\\ &I\end{pmatrix}\ \Rightarrow\ LL^T=\begin{pmatrix}N&\\ &I\end{pmatrix}.$$

Set $\mathbf{z}=L^T\tilde{\mathbf{v}}_J$, $\tilde{\mathbf{v}}_J=L^{-T}\mathbf{z}$. By (Y), for all vectors $\mathbf{z}\ne\mathbf{0}$ we have

$$c_1\,\frac{1}{J^2}\ \le\ \frac{\mathbf{z}^TL^{-1}ML^{-T}\mathbf{z}}{\mathbf{z}^T\mathbf{z}}\ \le\ c_2.$$

Thus, if $\lambda$ is any eigenvalue of the positive definite matrix $L^{-1}ML^{-T}$, then

$$\frac{c_1}{J^2}\ \le\ \lambda\ \le\ c_2,$$

and this result implies that the condition number of $L^{-1}ML^{-T}$ is bounded by $\frac{c_2}{c_1}J^2$. Since $L_N$ is a small matrix whose dimension does not depend on $J$, it follows that $\mu_2(M)\le cJ^2=c\big(\log_2\tfrac{h_0}{h_J}\big)^2$ for some constant $c$.
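The effect can be reproduced in a 1D toy computation (an illustration only: the $O((\log_2(1/h_J))^2)$ bound above is the 2D result, and in 1D the improvement one observes is even stronger). We build $S$ column by column from the level-by-level transform and compare $\mu_2(A)$ with $\mu_2(\tilde A)=\mu_2(S^{-T}AS^{-1})$:

```python
import numpy as np

def nodal_stiffness(n):
    """1D Poisson stiffness matrix in the Lagrange basis, h = 1/n."""
    h = 1.0 / n
    A = np.zeros((n + 1, n + 1))
    for e in range(n):
        A[e:e + 2, e:e + 2] += (1 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return A[1:-1, 1:-1]

def hier_transform(v_int):
    """Interior nodal values -> hierarchical coefficients (coarse to fine)."""
    v = np.concatenate(([0.0], v_int, [0.0]))   # add homogeneous boundary values
    parts = []
    while len(v) > 3:
        parts.append(v[1::2] - 0.5 * (v[:-2:2] + v[2::2]))  # level surpluses
        v = v[::2]
    return np.concatenate([v[1:2]] + parts[::-1])           # drop boundary entries

J = 5
A = nodal_stiffness(2 ** J)
S = np.array([hier_transform(e) for e in np.eye(2 ** J - 1)]).T  # tilde v = S v
Sinv = np.linalg.inv(S)
A_hier = Sinv.T @ A @ Sinv                  # tilde A = S^{-T} A S^{-1}
print(np.linalg.cond(A), np.linalg.cond(A_hier))  # the second is far smaller
```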
