On a new orthonormal basis for RBF native spaces

(1)

On a new orthonormal basis for RBF native spaces

Stefano De Marchi and Gabriele Santin, San Diego, July 8, 2013

(2)

RBF Approximation

1 Data:

Ω ⊂ R^n, X = {x_1, ..., x_N} ⊂ Ω, test function f
values f_1, ..., f_N, where f_i = f(x_i)

2 Approximation setting:

kernel K = K_ε, positive definite and radial
native space NK(Ω) (where K is the reproducing kernel)
finite subspace NK(X) = span{K(·, x) : x ∈ X} ⊂ NK(Ω)

Aim

Find s_f ∈ NK(X) s.t. s_f ≈ f.

(3)

Problem setting

Problem: the standard (data-dependent) basis of NK(X) is unstable and not flexible.

Question 1

Is it possible to find a "better" basis U of NK(X)?

Question 2

How to embed information about K and Ω in U?

Question 3

Can we extract U_0 ⊂ U s.t. s0_f is as good as s_f?

(4)

Q1: Is it possible to find a "better" basis?

Change of basis ([Pazouki-Schaback 2011]):

Let A_ij = K(x_i, x_j) ∈ R^{N×N}. Any basis U arises from a factorization A = V_U · C_U^{-1}, where C_U is the change-of-basis matrix and V_U = (u_j(x_i))_{1≤i,j≤N}.

Each NK(Ω)-orthonormal basis U arises from a decomposition A = B^T · B with B = C_U^{-1}, V_U = B^T = (C_U^{-1})^T.

Each ℓ²(X)-orthonormal basis U arises from a decomposition A = Q · B with Q = V_U, Q^T Q = I, B = C_U^{-1} = Q^T A.

The best bases in terms of stability are the NK(Ω)-orthonormal ones.
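As a minimal numerical sketch of these two factorizations (our own illustration, not code from the talk; the Gaussian kernel, the nodes and the shape parameter are arbitrary choices), both bases can be recovered with NumPy:

```python
import numpy as np

# Hypothetical setup: Gaussian kernel K_eps on a few 1D nodes (illustrative only).
eps = 2.0
x = np.linspace(-1.0, 1.0, 8)
A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)  # A_ij = K(x_i, x_j)

# NK(Omega)-orthonormal basis: A = B^T B with B = C_U^{-1} (Cholesky factor),
# and V_nat = B^T holds the values of this basis at the nodes.
B = np.linalg.cholesky(A).T      # upper triangular, so A = B^T B
V_nat = B.T

# l2(X)-orthonormal basis: A = Q B with Q^T Q = I (QR factorization);
# Q = V_U are the nodal values, and B = C_U^{-1} = Q^T A = R.
Q, R = np.linalg.qr(A)
```

Here `V_nat` contains the nodal values of an NK(Ω)-orthonormal basis and `Q` those of an ℓ²(X)-orthonormal one; both satisfy A = V_U · C_U^{-1} by construction.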

(5)

Motivation

The "natural" (data-independent) basis for Hilbert spaces (Mercer's theorem, 1909)

Let K be a continuous, positive definite kernel on a bounded Ω ⊂ R^n. Then K has an eigenfunction expansion

K(x, y) = Σ_{j≥0} λ_j φ_j(x) φ_j(y), ∀ x, y ∈ Ω.

Moreover,

λ_j φ_j(x) = ∫_Ω K(x, y) φ_j(y) dy =: T[φ_j](x), ∀ x ∈ Ω, j ≥ 0.

{φ_j}_{j≥0} orthonormal in NK(Ω)
{φ_j}_{j≥0} orthogonal in L2(Ω), with ‖φ_j‖²_{L2(Ω)} = λ_j → 0 and Σ_{j≥0} λ_j = K(0, 0) |Ω|.

(6)

Definition

Q2: How to embed information on K and Ω in U?

Idea: approximate the integral equation λ_j φ_j(x) = T[φ_j](x) with the symmetric Nyström method, using a cubature formula (X, W): the {λ_j, φ_j}_{j≥0} are approximated by the eigenvalues/eigenvectors of A_W := √W · A · √W, with W = diag(w_j), i.e. by the solutions of the scaled eigenvalue problem

λ (√W · v) = A_W (√W · v).

Definition:

A weighted SVD basis U is a basis for NK(X) s.t.

V_U = (√W)^{-1} · Q · Σ,   C_U = √W · Q · Σ^{-1};

since A = V_U C_U^{-1}, then A_W = Q · Σ² · Q^T is the SVD (and unitary diagonalization) of A_W.
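A small numerical sketch of this construction (our own illustration, not the authors' code; the cubature (X, W) here is a simple trapezoidal rule on Ω = [-1, 1], an arbitrary choice):

```python
import numpy as np

# Illustrative setup: Gaussian kernel, trapezoidal cubature (X, W) on Omega = [-1, 1].
N, eps = 30, 1.5
x = np.linspace(-1.0, 1.0, N)
w = np.full(N, 2.0 / (N - 1))
w[[0, -1]] *= 0.5                      # weights sum to |Omega| = 2
A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)

# Scaled matrix A_W = sqrt(W) A sqrt(W) and its diagonalization Q Sigma^2 Q^T.
sqw = np.sqrt(w)
AW = sqw[:, None] * A * sqw[None, :]
lam, Q = np.linalg.eigh(AW)            # eigenvalues in ascending order
lam, Q = lam[::-1], Q[:, ::-1]         # reorder so sigma_1^2 >= sigma_2^2 >= ...

# Weighted SVD basis values at the nodes: V_U = sqrt(W)^{-1} Q Sigma.
V_U = Q / sqw[:, None] * np.sqrt(np.maximum(lam, 0.0))
```

One can check numerically that here Σ_j σ_j² = trace(A_W) = Σ_i w_i K(x_i, x_i) = K(0,0)|Ω| = 2, and that the Gram matrix of `V_U` in the w-weighted discrete inner product is diag(σ_j²).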

(7)

Properties

This basis is in fact an approximation of the "natural" one (provided w_i > 0, Σ_{i=1}^N w_i = |Ω|):

Properties (cf. [De Marchi-Santin 2013])

σ_j² u_j(x) = T_{K,N}[u_j](x), where T_{K,N}[u_j](x) := (u_j, K(·, x))_{ℓ²_w(X)}, ∀ x ∈ Ω

{u_j}_j is NK(Ω)-orthonormal

{u_j}_j is ℓ²_w(X)-orthogonal, with ‖u_j‖²_{ℓ²_w(X)} = σ_j²

Σ_{j=1}^N σ_j² = K(0, 0) |Ω|

(8)

Approximation

Interpolant: s_f(x) = Σ_{j=1}^N (f, u_j)_K u_j(x), ∀ x ∈ Ω

WDLS: s_f^M := argmin { ‖f − g‖_{ℓ²_w(X)} : g ∈ span{u_1, ..., u_M} }

WDLS as truncation:

Let f ∈ NK(Ω), 1 ≤ M ≤ N. Then, ∀ x ∈ Ω,

s_f^M(x) = Σ_{j=1}^M [ (f, u_j)_{ℓ²_w(X)} / (u_j, u_j)_{ℓ²_w(X)} ] u_j(x) = Σ_{j=1}^M (f, u_j)_K u_j(x).

Q3: Can we extract U_0 ⊂ U s.t. s0_f is as good as s_f? We can take U_0 = {u_1, ..., u_M}.
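The truncation above can be sketched numerically (our own illustration, reusing a trapezoidal-rule weighted SVD basis on [-1, 1]; the test function f, ε and M are arbitrary choices):

```python
import numpy as np

# Illustrative weighted SVD basis on [-1, 1] (trapezoidal cubature).
N, eps, M = 40, 2.0, 15
x = np.linspace(-1.0, 1.0, N)
w = np.full(N, 2.0 / (N - 1))
w[[0, -1]] *= 0.5
A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
sqw = np.sqrt(w)
lam, Q = np.linalg.eigh(sqw[:, None] * A * sqw[None, :])
lam, Q = lam[::-1], Q[:, ::-1]
U = Q / sqw[:, None] * np.sqrt(np.maximum(lam, 0.0))   # u_j(x_i), j = 1..N

# Truncated WDLS approximant at the nodes:
# s_f^M = sum_{j<=M} (f, u_j)_{l2w} / sigma_j^2 * u_j.
f = np.cos(3.0 * x)
coef = (U[:, :M].T @ (w * f)) / lam[:M]
sM = U[:, :M] @ coef
```

By construction the residual f - s_f^M is ℓ²_w-orthogonal to span{u_1, ..., u_M}, which is exactly the weighted-least-squares characterization above.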

(9)

Approximation II

If we define the pseudo-cardinal functions as ℓ̃_i = s^M_{ℓ_i}, we get

ℓ̃_i(x) = Σ_{j=1}^M [u_j(x_i) / σ_j²] u_j(x),   s_f^M(x) = Σ_{i=1}^N f(x_i) ℓ̃_i(x).

Generalized Power Function and Lebesgue constant:

If f ∈ NK(Ω),

|f(x) − s_f^M(x)| ≤ P_{K,X}^{(M)}(x) ‖f‖_{NK(Ω)}, ∀ x ∈ Ω,

where

[P_{K,X}^{(M)}(x)]² = K(0, 0) − Σ_{j=1}^M [u_j(x)]².

Moreover, ‖s_f^M‖ ≤ Λ̃_X ‖f‖_X.
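Since u_j(x) = Σ_i (C_U)_{ij} K(x, x_i), the truncated power function can be evaluated off the nodes through C_U. A sketch (our own illustration; same hypothetical trapezoidal setup on [-1, 1], with arbitrary ε and M):

```python
import numpy as np

# Illustrative weighted SVD basis on [-1, 1] (trapezoidal cubature).
N, eps, M = 40, 2.0, 10
x = np.linspace(-1.0, 1.0, N)
w = np.full(N, 2.0 / (N - 1))
w[[0, -1]] *= 0.5
A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
sqw = np.sqrt(w)
lam, Q = np.linalg.eigh(sqw[:, None] * A * sqw[None, :])
lam, Q = lam[::-1], Q[:, ::-1]

# C_U = sqrt(W) Q Sigma^{-1}: coefficients of u_j in the kernel translates K(., x_i).
C = sqw[:, None] * Q[:, :M] / np.sqrt(lam[:M])[None, :]

# Evaluate the u_j on a fine grid and form [P^(M)_{K,X}(x)]^2 = K(0,0) - sum_j u_j(x)^2.
xe = np.linspace(-1.0, 1.0, 201)
Ue = np.exp(-(eps * (xe[:, None] - x[None, :])) ** 2) @ C
P2 = 1.0 - (Ue ** 2).sum(axis=1)     # K(0,0) = 1 for the Gaussian
```

By Bessel's inequality in NK(Ω), Σ_{j≤M} u_j(x)² ≤ K(x, x) = K(0, 0), so P2 stays in [0, 1] up to rounding.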

(10)

Sub-basis

Q3: Can we extract U_0 ⊂ U s.t. s0_f is as good as s_f? We can take U_0 = {u_1, ..., u_M}.

Figure: Gaussian kernel, σ_j² on equispaced points of the square (one decay curve for each of ε = 1, ε = 4, ε = 9).

Recall that ‖u_j‖²_{ℓ²_w(X)} = σ_j² → 0, so we can choose M s.t. σ²_{M+1} < tol: we don't need the u_j with j > M.

(11)

An Example: I

Figure: The domains used in the numerical experiments, with an example of the corresponding sample points. From left to right: the lens Ω_1 (trigonometric-gaussian points), the disk Ω_2 (trigonometric-gaussian points) and the square Ω_3 (product Gauss-Legendre points).

(12)

An Example: II

           ε = 1   ε = 4   ε = 9
Gaussian     100     340     500
IMQ          180     580     580
Matern3      460     560     580

Table: Optimal M for different kernels and shape parameters, i.e. the indexes such that the weighted least-squares approximant s_f^M provides the best approximation of the function f(x, y) = cos(20(x + y)) on the disk with center C = (1/2, 1/2) and radius R = 1/2.

(13)

An Example: III

Figure: Franke's function, lens, IMQ kernel, ε = 1, RMSE vs. N. Left: complete basis. Right: truncated basis with σ²_{M+1} < 10⁻¹⁷.

Problem: we have to compute the whole basis before truncation!

Solution: Krylov methods.

(14)

Arnoldi iteration

Consider A_ij = K(x_i, x_j), b_i = f(x_i), 1 ≤ i, j ≤ N:

define the Krylov subspace K_M(A, b) = span{b, Ab, ..., A^{M−1}b}, M ≪ N

compute an o.n. basis {φ_1, ..., φ_M} of K_M(A, b)

define the (tridiagonal) matrix H_M which represents the projection of A onto K_M(A, b) (since Φ_M^T A Φ_M = H_M, with Φ_M = [φ_1, ..., φ_M], N × M)

Arnoldi's algorithm gives A Φ_M = Φ_{M+1} H̄_M, where

H̄_M = [ H_M ; h_{M+1,M} e_M^T ]

is (M + 1) × M. In practice h_{M+1,M} ≈ 0, so that K_{M+1}(A, b) = K_M(A, b).
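The steps above can be sketched compactly (our own illustration; for symmetric A the Arnoldi process reduces to the Lanczos three-term recurrence, so H_M comes out numerically tridiagonal; kernel, nodes and f are arbitrary choices):

```python
import numpy as np

def arnoldi(A, b, M):
    """Arnoldi iteration: returns Phi (N x (M+1)) and Hbar ((M+1) x M)
    satisfying A @ Phi[:, :M] = Phi @ Hbar."""
    N = len(b)
    Phi = np.zeros((N, M + 1))
    Hbar = np.zeros((M + 1, M))
    Phi[:, 0] = b / np.linalg.norm(b)
    for m in range(M):
        v = A @ Phi[:, m]
        for k in range(m + 1):            # modified Gram-Schmidt; for symmetric A
            Hbar[k, m] = Phi[:, k] @ v    # only k = m-1, m are (numerically) nonzero
            v -= Hbar[k, m] * Phi[:, k]
        Hbar[m + 1, m] = np.linalg.norm(v)
        if Hbar[m + 1, m] > 1e-14:        # h_{m+1,m} ~ 0 signals an invariant subspace
            Phi[:, m + 1] = v / Hbar[m + 1, m]
    return Phi, Hbar

# Illustrative data: Gaussian kernel matrix and f values on 1D nodes.
eps, M = 1.0, 8
x = np.linspace(-1.0, 1.0, 60)
A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
b = np.exp(x)
Phi, Hbar = arnoldi(A, b, M)
```

The two defining relations, Φ_{M+1}^T Φ_{M+1} = I and A Φ_M = Φ_{M+1} H̄_M, hold to machine precision.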

(15)

Approximation of the SVD

Consider an SVD H̄_M = U_M Σ̄²_M V_M^T, where U_M ∈ R^{(M+1)×(M+1)}, V_M ∈ R^{M×M}, Σ̄²_M = [Σ̃²_M, 0]^T and Σ̃²_M = diag(σ_{M,1}², ..., σ_{M,M}²).

Approximate SVD ([Novati-Russo 2013]):

Let Ū_M = Φ_{M+1} U_M, V̄_M = Φ_M V_M; then

A V̄_M = Ū_M Σ̄²_M,   A Ū_M = V̄_M (Σ̄²_M)^T.

the first M singular values of A are well approximated by the σ_{M,j}²

if M = N, in exact arithmetic the triplet (Φ_{M+1} Ũ_M, Σ̃²_M, Φ_M V_M) is an SVD of A, where Ũ_M is U_M without the last column.
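A self-contained sketch of this approximation (our own illustration; the short Lanczos-type recurrence below rebuilds Φ_{M+1} and H̄_M, and `s` plays the role of the diagonal of Σ̄²_M; all setup values are arbitrary):

```python
import numpy as np

# Illustrative Gaussian kernel matrix and data vector.
eps, M = 1.0, 10
x = np.linspace(-1.0, 1.0, 60)
A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
b = np.exp(x)

# Short Arnoldi/Lanczos recurrence: A Phi_M = Phi_{M+1} Hbar_M.
Phi = np.zeros((60, M + 1))
Hbar = np.zeros((M + 1, M))
Phi[:, 0] = b / np.linalg.norm(b)
for m in range(M):
    v = A @ Phi[:, m]
    for k in range(m + 1):
        Hbar[k, m] = Phi[:, k] @ v
        v -= Hbar[k, m] * Phi[:, k]
    Hbar[m + 1, m] = np.linalg.norm(v)
    Phi[:, m + 1] = v / Hbar[m + 1, m]

# The SVD of the small (M+1) x M matrix Hbar_M approximates the top singular
# values of A, at the cost of M matrix-vector products instead of a full SVD.
UM, s, VMt = np.linalg.svd(Hbar)
s_true = np.linalg.svd(A, compute_uv=False)
```

For a kernel matrix with rapidly decaying spectrum, the leading few values of `s` already match those of `s_true` closely after only M iterations.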

(16)

Definition

Recall:

A Φ_M = Φ_{M+1} H̄_M,   H̄_M = U_M Σ̄²_M V_M^T,   Σ̄²_M = [Σ̃²_M, 0]^T,

and Ũ_M is U_M without the last column.

Definition:

The sub-basis U_M is a set {u_1, ..., u_M} ⊂ NK(X) defined by

V_U = Φ_{M+1} Ũ_M Σ̃_M,   C_U = Φ_M V_M Σ̃_M^{-1}.

(17)

Properties

Properties:

The sub-basis U_M has the following properties for each 1 ≤ M ≤ N:

it is NK(Ω)-orthonormal

it is ℓ²(X)-orthogonal, with ‖u_j‖²_{ℓ²(X)} = σ_{M,j}², 1 ≤ j ≤ M

if M = N it is the SVD basis U (Φ_M = I)

Using this basis we get, for f ∈ NK(Ω),

s0_f(x) = Σ_{j=1}^M [ (f, u_j)_{ℓ²(X)} / σ_{M,j}² ] u_j(x) = Σ_{j=1}^M (f, u_j)_K u_j(x), ∀ x ∈ Ω

(and P_{K,X}^{(M)}(x), Λ̃_X as before).

(18)

Stopping rule



Stop the iteration when

(H̄_M)_{M+1,M} = h_{M+1,M} ≈ σ_{M,M}² < 10⁻¹³.

Figure: h_{M+1,M}, σ_{M,M}² and svd(A) (left) and RMSE (right), as functions of M. Gaussian kernel, ε = 1, square [−1, 1]², N = 200 equispaced points, f ∈ NK(Ω), with f(x) = K(x, y_1) + 2K(x, y_2) − 2K(x, y_3) + 3K(x, y_4), y_1 = (0, −1.2), y_2 = (−0.4, 0.5), y_3 = (−0.4, 1.1), y_4 = (1.2, 1.3).

(19)

Comparison with the SVD basis

Figure: Gaussian kernel, ε = 1, disk, trig.-gauss. points, f ∈ NK(Ω). Left: computation time. Right: RMSE (Lanczos-based sub-basis vs. WSVD basis), as functions of N.

(20)

An example

Figure: IMQ kernel, ε = 1, cut disk, N = 1600 random points, M = 260, f(x, y) = exp(|x − y|) − 1.

(21)

Lebesgue Constant and Power Function

Figure: IMQ kernel, ε = 1, cut disk, N = 1600 random points, M = 260, f(x, y) = exp(|x − y|) − 1. Left: Lebesgue function (Λ_X = 160.2017). Right: power function.

(22)

Further investigation is needed:

a better stopping rule

understand the decay rate of P_{K,X}^{(M)}

understand the growth rate of Λ̃_X

understand how X and ε influence s0_f

Thank you for your attention

(23)

R. K. Beatson, J. B. Cherrie, C. T. Mouat.
Fast fitting of radial basis functions: methods based on preconditioned GMRES iteration.
Adv. Comput. Math., 11: 253–270, 1999.

S. De Marchi, G. Santin.
A new stable basis for radial basis function interpolation.
J. Comput. Appl. Math., 253: 1–13, 2013.

G. Fasshauer, M. J. McCourt.
Stable evaluation of Gaussian radial basis function interpolants.
SIAM J. Sci. Comput., 34: A737–A762, 2012.

A. C. Faul, M. J. D. Powell.
Krylov subspace methods for radial basis function interpolation.
Chapman & Hall/CRC Res. Notes Math., 420: 115–141, 1999.

P. Novati, M. R. Russo.
A GCV based Arnoldi-Tikhonov regularization method.
Preprint, 2013.

M. Pazouki, R. Schaback.
Bases for kernel-based spaces.
J. Comput. Appl. Math., 236: 575–588, 2011.

M. Vianello, G. Da Fies.
Algebraic cubature on planar lenses and bubbles.
Dolomites Res. Notes Approx. (DRNA), 5: 7–12, 2012.
