On a new orthonormal basis for RBF native spaces and its fast computation

Stefano De Marchi and Gabriele Santin
Torino, June 11, 2014

Outline

1. Introduction
2. Change of basis
3. WSVD Basis
4. The new basis
5. Numerical Results
6. Further work
7. References

Introduction

RBF Approximation

1. Data: Ω ⊂ ℝ^n, X = {x_1, ..., x_N} ⊂ Ω, and a test function f sampled as f_1, ..., f_N, where f_i = f(x_i).
2. Approximation setting:
   - kernel K = K_ε, positive definite and radial;
   - native space N_K(Ω), of which K is the reproducing kernel;
   - finite subspace N_K(X) = span{K(·, x) : x ∈ X} ⊂ N_K(Ω).

Aim: find s_f ∈ N_K(X) s.t. s_f ≈ f.

Introduction

Problem setting and questions

Problem: the standard (data-dependent) basis of translates of N_K(X) is unstable and not flexible.

Question 1: Is it possible to find a "better" basis U of N_K(X)?

Question 2: How can we embed information about K and Ω in the basis U?

Question 3: Can we extract U′ ⊂ U s.t. s′_f is as good as s_f?

The "natural" basis

The "natural" (data-independent) basis for Hilbert spaces (Mercer's theorem, 1909):

Let K be a continuous, positive definite kernel on a bounded Ω ⊂ ℝ^n. Then K has an eigenfunction expansion with non-negative coefficients, the eigenvalues, s.t.

  K(x, y) = Σ_{j≥0} λ_j ϕ_j(x) ϕ_j(y),  ∀ x, y ∈ Ω.

Moreover,

  λ_j ϕ_j(x) = ∫_Ω K(x, y) ϕ_j(y) dy =: T[ϕ_j](x),  ∀ x ∈ Ω, j ≥ 0.

- {ϕ_j}_{j>0} orthonormal in N_K(Ω);
- {ϕ_j}_{j>0} orthogonal in L²(Ω), with ‖ϕ_j‖²_{L²(Ω)} = λ_j → 0;
- Σ_{j>0} λ_j = K(0, 0) |Ω| (the operator is of trace class).

Change of basis

Notation

Letting Ω ⊂ ℝ^n and X = {x_1, ..., x_N} ⊂ Ω:

- T_X = {K(·, x_i), x_i ∈ X}: the standard basis of translates;
- U = {u_i ∈ N_K(Ω), i = 1, ..., N}: another basis s.t. span(U) = span(T_X) =: N_K(X).

At x ∈ Ω, T_X and U can be expressed as the row vectors

  T(x) = [K(x, x_1), ..., K(x, x_N)] ∈ ℝ^N,
  U(x) = [u_1(x), ..., u_N(x)] ∈ ℝ^N.

We also need the scalar products

  (f, g)_{L²(Ω)} := ∫_Ω f(x) g(x) dx ≈ Σ_{j=1}^N w_j f(x_j) g(x_j) =: (f, g)_{ℓ²_w(X)}.
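
As a quick illustration of this discrete product, here is a minimal NumPy sketch (the function and node names are ours, not from the talk): a cubature rule with positive weights turns the L²(Ω) integral into a weighted sum over X.

```python
import numpy as np

def l2w_inner(fX, gX, w):
    """(f, g)_{l2_w(X)} = sum_j w_j f(x_j) g(x_j), the cubature
    discretization of the L2(Omega) scalar product."""
    return np.sum(w * fX * gX)

# Midpoint-rule nodes/weights on Omega = [0, 1]: w_j > 0, sum w_j = |Omega|
N = 200
x = (np.arange(N) + 0.5) / N
w = np.full(N, 1.0 / N)
print(l2w_inner(x, x, w))   # ~1/3 = integral of x^2 over [0, 1]
```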

Change of basis

General idea

Some useful results [Pazouki, Schaback 2011].

Change of basis:
Let A_ij = K(x_i, x_j) ∈ ℝ^{N×N}. Any basis U arises from a factorization A = V_U · C_U^{-1}, where V_U = (u_j(x_i))_{1≤i,j≤N} and C_U is the change-of-basis matrix s.t. U(x) = T(x) · C_U.

Some consequences of this factorization:

1. The interpolant P_{f,X} at x can be written as

  P_{f,X}(x) = Σ_{j=1}^N Λ_j(f) u_j(x) = U(x) Λ(f),  ∀ x ∈ Ω,

where Λ(f) = [Λ_1(f), ..., Λ_N(f)]^T ∈ ℝ^N is a column vector of values of linear functionals defined by

  Λ(f) = C_U^{-1} · A^{-1} · f_X = V_U^{-1} · f_X,

where f_X is the column vector of the evaluations of f at X.
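
A sketch of how these identities can be used in practice (the factory function and kernel are illustrative assumptions): given any invertible C_U, the basis values follow from V_U = A · C_U, and the interpolant is evaluated through Λ(f) = V_U^{-1} f_X.

```python
import numpy as np

def make_interpolant(kernel, X, C_U, fX):
    """P_{f,X}(x) = U(x) Lambda(f), with U(x) = T(x) C_U and
    Lambda(f) = V_U^{-1} f_X, where V_U = A C_U (from A = V_U C_U^{-1})."""
    A = kernel(X[:, None], X[None, :])       # A_ij = K(x_i, x_j)
    Lam = np.linalg.solve(A @ C_U, fX)       # Lambda(f) = V_U^{-1} f_X
    def P(x):
        T = kernel(np.asarray(x)[:, None], X[None, :])   # row vectors T(x)
        return (T @ C_U) @ Lam               # U(x) Lambda(f)
    return P

# Gaussian kernel; C_U = I recovers the standard basis of translates T_X
kern = lambda x, y: np.exp(-(x - y) ** 2)
X = np.linspace(-1, 1, 20)
P = make_interpolant(kern, X, np.eye(20), np.sin(3 * X))
print(np.max(np.abs(P(X) - np.sin(3 * X))))  # ~0: interpolation at the nodes
```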

Change of basis

Consequences

2. If U is an N_K(Ω)-orthonormal basis, we get the stability estimate

  |P_{f,X}(x)| ≤ √(K(0, 0)) ‖f‖_K,  ∀ x ∈ Ω.  (1)

In particular, for fixed x ∈ Ω and f ∈ N_K, the values ‖U(x)‖_2 and ‖Λ(f)‖_2 are the same for all N_K(Ω)-orthonormal bases, independently of X:

  ‖U(x)‖_2 ≤ √(K(0, 0)),  ‖Λ(f)‖_2 ≤ ‖f‖_K.  (2)

Change of basis

Other results [Pazouki, Schaback 2011]

Change of basis:

- Each N_K(Ω)-orthonormal basis U arises from an orthonormal decomposition A = B^T · B with B = C_U^{-1}, V_U = B^T = (C_U^{-1})^T.
- Each ℓ²(X)-orthonormal basis U arises from a decomposition A = Q · B with Q = V_U, Q^T Q = I, B = C_U^{-1} = Q^T A.

Notice: the best bases in terms of stability are the N_K(Ω)-orthonormal ones!

Q1: Is it possible to find a "better" basis? Yes, we can!

WSVD Basis

Main idea: I

Q2: How can we embed information on K and Ω in U?

Symmetric Nyström method [Atkinson, Han 2001]:
The main idea for the construction of our basis is to discretize the "natural" basis introduced in Mercer's theorem.

To this aim, consider on Ω a cubature rule (X, W), that is, a set of distinct points X = {x_j}_{j=1}^N ⊂ Ω and a set of positive weights W = {w_j}_{j=1}^N, N ∈ ℕ, such that

  ∫_Ω f(y) dy ≈ Σ_{j=1}^N f(x_j) w_j,  ∀ f ∈ N_K(Ω).  (3)

WSVD Basis

Main idea: II

Thus, the operator T_K can be evaluated on X as

  λ_j ϕ_j(x_i) = ∫_Ω K(x_i, y) ϕ_j(y) dy,  i = 1, ..., N, ∀ j > 0,

and then discretized using the cubature rule by

  λ_j ϕ_j(x_i) ≈ Σ_{h=1}^N K(x_i, x_h) ϕ_j(x_h) w_h,  i, j = 1, ..., N.  (4)

Letting W = diag(w_j), it suffices to solve the following discrete eigenvalue problem in order to find the approximation of the eigenvalues and eigenfunctions (evaluated on X) of T_K[f]:

  λ v = (A · W) v.  (5)

WSVD Basis

Main idea: III

A solution is to rewrite (4), using the fact that the weights are positive, as

  λ_j (√w_i ϕ_j(x_i)) = Σ_{h=1}^N (√w_i K(x_i, x_h) √w_h)(√w_h ϕ_j(x_h)),  ∀ i, j = 1, ..., N,  (6)

and then to consider the corresponding scaled eigenvalue problem

  λ (√W · v) = (√W · A · √W)(√W · v),

which is equivalent to the previous one, but now involves the symmetric and positive definite matrix A_W := √W · A · √W.
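
In NumPy, this symmetrization is a one-liner around eigh; the following sketch (our naming) solves (5) through the scaled problem and maps the eigenvectors back by undoing the √W scaling.

```python
import numpy as np

def nystrom_eig(A, w):
    """Solve lambda v = (A W) v via the symmetric problem on
    A_W = sqrt(W) A sqrt(W); eigenvalues returned in decreasing order."""
    sw = np.sqrt(w)
    A_W = sw[:, None] * A * sw[None, :]   # symmetric, positive definite
    lam, Q = np.linalg.eigh(A_W)          # unitary diagonalization of A_W
    lam, Q = lam[::-1], Q[:, ::-1]        # eigh sorts ascending; flip
    V = Q / sw[:, None]                   # undo scaling: v = sqrt(W)^{-1} q
    return lam, V                         # columns satisfy (A diag(w)) v = lam v
```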

WSVD Basis

Definition

{λ_j, ϕ_j}_{j>0} are then approximated by the eigenvalues/eigenvectors of A_W := √W · A · √W. This matrix is normal, so a singular value decomposition of A_W is a unitary diagonalization.

Definition:
A weighted SVD basis U is a basis for N_K(X) s.t.

  V_U = √W^{-1} · Q · Σ,  C_U = √W · Q · Σ^{-1},

since A = V_U C_U^{-1}; then A_W = Q · Σ² · Q^T is the SVD.

Here Σ_jj = σ_j, j = 1, ..., N, and σ²_1 ≥ ... ≥ σ²_N > 0 are the singular values of A_W.
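
Under the √W convention above, the basis matrices can be assembled directly from the unitary diagonalization of A_W. A minimal sketch (ours); one can check numerically that A = V_U C_U^{-1}:

```python
import numpy as np

def wsvd_basis(A, w):
    """WSVD basis: V_U = sqrt(W)^{-1} Q Sigma, C_U = sqrt(W) Q Sigma^{-1},
    where A_W = Q Sigma^2 Q^T is the (unitary) SVD of the scaled matrix."""
    sw = np.sqrt(w)
    A_W = sw[:, None] * A * sw[None, :]
    s2, Q = np.linalg.eigh(A_W)
    s2, Q = s2[::-1], Q[:, ::-1]          # sigma_1^2 >= ... >= sigma_N^2 > 0
    sigma = np.sqrt(s2)
    V_U = (Q / sw[:, None]) * sigma       # V_U[i, j] = u_j(x_i)
    C_U = (Q * sw[:, None]) / sigma       # U(x) = T(x) C_U
    return V_U, C_U, s2
```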

WSVD Basis

Properties

This basis is in fact an approximation of the "natural" one (provided w_i > 0, Σ_{i=1}^N w_i = |Ω|).

Properties of the new basis U (cf. [De Marchi, Santin 2013]):

- u_j(x) = (1/σ²_j) Σ_{i=1}^N w_i u_j(x_i) K(x, x_i) ≈ (1/σ²_j) T_K[u_j](x),  1 ≤ j ≤ N, ∀ x ∈ Ω;
- N_K(Ω)-orthonormal;
- ℓ²_w(X)-orthogonal, with ‖u_j‖²_{ℓ²_w(X)} = σ²_j for u_j ∈ U;
- Σ_{j=1}^N σ²_j = K(0, 0) |Ω|.

WSVD Basis

Approximation

Interpolant: s_f(x) = Σ_{j=1}^N (f, u_j)_K u_j(x), ∀ x ∈ Ω.

WDLS: s_f^M := argmin{ ‖f − g‖_{ℓ²_w(X)} : g ∈ span{u_1, ..., u_M} }.

Weighted Discrete Least Squares as truncation:
Let f ∈ N_K(Ω), 1 ≤ M ≤ N. Then, ∀ x ∈ Ω,

  s_f^M(x) = Σ_{j=1}^M [(f, u_j)_{ℓ²_w(X)} / (u_j, u_j)_{ℓ²_w(X)}] u_j(x) = Σ_{j=1}^M [(f, u_j)_{ℓ²_w(X)} / σ²_j] u_j(x) = Σ_{j=1}^M (f, u_j)_K u_j(x).

Q3: Can we extract U′ ⊂ U s.t. s′_f is as good as s_f? Yes we can: take U′ = {u_1, ..., u_M}.
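
A sketch of the truncated approximant built on the wsvd_basis output above (names are ours): the coefficients are the discrete projections (f, u_j)_{ℓ²_w(X)} / σ²_j, and M is chosen here by a tolerance on the σ²_j, as discussed on the next slide.

```python
import numpy as np

def wdls_coeffs(V_U, s2, w, fX, tol=1e-14):
    """Coefficients c_j = (f, u_j)_{l2_w(X)} / sigma_j^2 of s_f^M,
    truncated at the largest M with sigma_M^2 >= tol."""
    M = int(np.sum(s2 >= tol))
    proj = V_U[:, :M].T @ (w * fX)   # (f, u_j)_{l2_w} = sum_i w_i f(x_i) u_j(x_i)
    return proj / s2[:M], M

# Evaluation at points x: s_f^M(x) = (kernel(x, X) @ C_U[:, :M]) @ c
```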

WSVD Basis

Approximation II

If we define the pseudo-cardinal functions as ℓ̃_i = s_{ℓ_i}^M, we get

  s_f^M(x) = Σ_{i=1}^N f(x_i) ℓ̃_i(x),  ℓ̃_i(x) = Σ_{j=1}^M [u_j(x_i) / σ²_j] u_j(x).

Generalized Power Function and Lebesgue constant:
If f ∈ N_K(Ω), then

  |f(x) − s_f^M(x)| ≤ P_{K,X}^{(M)}(x) ‖f‖_{N_K(Ω)},  ∀ x ∈ Ω,

where

  [P_{K,X}^{(M)}(x)]² = K(0, 0) − Σ_{j=1}^M [u_j(x)]².

Moreover, ‖s_f^M‖ ≤ Λ̃_X ‖f‖.

WSVD Basis

Sub-basis

How can we extract U′ ⊂ U s.t. s′_f is as good as s_f? Idea:

[Figure: Gaussian kernel, decay of the σ²_j on equispaced points of the square, for ε = 1, 4, 9.]

- Recall that ‖u_j‖²_{ℓ²_w(X)} = σ²_j → 0.
- We can choose M s.t. σ²_{M+1} < tol.
- We don't need the u_j with j > M.

WSVD Basis

An Example: I

[Figure: The domains used in the numerical experiments, with an example of the corresponding sample points. From left to right: the lens Ω1 (trigonometric-Gaussian points), the disk Ω2 (trigonometric-Gaussian points) and the square Ω3 (product Gauss-Legendre points).]

WSVD Basis

An Example: II

            ε = 1   ε = 4   ε = 9
Gaussian      100     340     500
IMQ           180     580     580
Matern3       460     560     580

Table: Optimal M for different kernels and shape parameters, i.e. the indexes such that the weighted least-squares approximant s_f^M provides the best approximation of the function f(x, y) = cos(20(x + y)) on the disk with center C = (1/2, 1/2) and radius R = 1/2.

WSVD Basis

An Example: III

[Figure: Franke's test function, lens, IMQ kernel, ε = 1; RMS error of the WSVD vs. standard basis. Left: complete basis. Right: truncation at σ²_{M+1} < 10⁻¹⁷.]

Problem: we have to compute the whole basis before truncation!
Solution: Krylov methods.

The new basis

Arnoldi iteration

Consider A_ij = K(x_i, x_j) (which is symmetric) and b_i = f(x_i), 1 ≤ i, j ≤ N:

- define the Krylov subspace K_M(A, b) = span{b, Ab, ..., A^{M−1} b}, M ≪ N;
- compute an o.n. basis {φ_1, ..., φ_M} of K_M(A, b) and form the N × M matrix Φ_M = [φ_1, ..., φ_M];
- define the (tridiagonal) matrix H_M = Φ_M^T A Φ_M, which represents the projection of A into K_M(A, b).

The Arnoldi iteration gives A Φ_M = Φ_{M+1} H̄_M, where

  H̄_M = [ H_M ; h_{M+1,M} e_M^T ]

is (M + 1) × M. In practice h_{M+1,M} ≈ 0, so that K_{M+1}(A, b) = K_M(A, b).
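
A compact sketch of the iteration (ours; since A is symmetric this is really the Lanczos process, but it is written with full orthogonalization for clarity):

```python
import numpy as np

def arnoldi(A, b, M):
    """M Arnoldi steps: A @ Phi[:, :M] = Phi @ Hbar, with Phi of size
    N x (M+1) and Hbar of size (M+1) x M (tridiagonal for symmetric A)."""
    N = len(b)
    Phi = np.zeros((N, M + 1))
    Hbar = np.zeros((M + 1, M))
    Phi[:, 0] = b / np.linalg.norm(b)
    for m in range(M):
        v = A @ Phi[:, m]
        for i in range(m + 1):              # Gram-Schmidt against Phi
            Hbar[i, m] = Phi[:, i] @ v
            v -= Hbar[i, m] * Phi[:, i]
        Hbar[m + 1, m] = np.linalg.norm(v)  # h_{m+1,m}: ~0 at invariance
        Phi[:, m + 1] = v / Hbar[m + 1, m]  # (stop earlier in practice)
    return Phi, Hbar
```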

The new basis

Approximation of the SVD

Consider an SVD H̄_M = U_M Σ²_M V_M^T, where U_M ∈ ℝ^{(M+1)×(M+1)}, V_M ∈ ℝ^{M×M}, Σ²_M = [Σ̃²_M, 0]^T and Σ̃²_M = diag(σ²_{M,1}, ..., σ²_{M,M}).

Approximate SVD [Novati, Russo 2013]:

- Let Û_M = Φ_{M+1} U_M and V̂_M = Φ_M V_M; then A V̂_M = Û_M Σ²_M, A Û_M = V̂_M (Σ²_M)^T.
- The first M singular values of A are well approximated by the σ²_{M,j}.
- If M = N, in exact arithmetic the triplet (Φ_{M+1} Ũ_M, Σ̃_M, Φ_M V_M) is an SVD of A, where Ũ_M is U_M without its last column.
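
A sketch of the assembly (our names; np.linalg.svd of the (M+1) × M matrix H̄_M returns U_M of size (M+1) × (M+1) and V_M of size M × M, matching the statement above):

```python
import numpy as np

def approx_svd(Phi, Hbar):
    """Approximate singular triplets of A from A Phi_M = Phi_{M+1} Hbar_M:
    returns (Phi_{M+1} U_M, sigma^2_{M,j}, Phi_M V_M)."""
    U, s2, Vt = np.linalg.svd(Hbar)   # Hbar = U [diag(s2); 0] Vt
    UM = Phi @ U                      # Phi = Phi_{M+1}, N x (M+1)
    VM = Phi[:, :-1] @ Vt.T           # Phi_M, N x M
    return UM, s2, VM                 # A @ VM[:, j] ~ s2[j] * UM[:, j]
```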

The new basis

Definition

Recall:

  A Φ_M = Φ_{M+1} H̄_M,  H̄_M = U_M Σ²_M V_M^T,  Σ²_M = [Σ̃²_M, 0]^T,

and Ũ_M is U_M without its last column.

Definition:
The sub-basis U^M is the set {u_1, ..., u_M} ⊂ N_K(X) defined by

  V_U = Φ_{M+1} Ũ_M Σ̃_M,  C_U = Φ_M V_M Σ̃_M^{-1}.
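
Combining the two sketches above, the sub-basis matrices follow in a few lines (Ũ_M drops the last column of U_M, and Σ̃_M = diag(σ_{M,j})):

```python
import numpy as np

def krylov_basis(Phi, Hbar):
    """Sub-basis U^M: V_U = Phi_{M+1} Utilde_M Sigma_M,
    C_U = Phi_M V_M Sigma_M^{-1}."""
    U, s2, Vt = np.linalg.svd(Hbar)
    sigma = np.sqrt(s2)
    V_U = (Phi @ U[:, :-1]) * sigma       # drop the last column of U_M
    C_U = (Phi[:, :-1] @ Vt.T) / sigma
    return V_U, C_U, s2
```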

The new basis

Properties

Properties [De Marchi, Santin 2014]:
The sub-basis U^M has the following properties for each 1 ≤ M ≤ N:

1. it is ℓ²(X)-orthogonal, with ‖u_j‖²_{ℓ²(X)} = σ²_{M,j}, 1 ≤ j ≤ M;
2. it is near-orthonormal in N_K(Ω);
3. if M = N, it is the SVD basis U (Φ_M = I).

About point 2: it means that (u_i, u_j) = δ_ij + r_ij^{(M)}, where (R^{(M)})_ij := r_ij^{(M)} is a rank-one matrix for 1 ≤ M ≤ N, and r_ij^{(M)} = 0 if M = N.

The new basis

Properties

Using this basis we get, for f ∈ N_K(Ω),

  s′_f(x) = Σ_{j=1}^M [(f, u_j)_{ℓ²(X)} / σ²_{M,j}] u_j(x) = Σ_{j=1}^M (f, u_j)_K u_j(x),  ∀ x ∈ Ω

(with P_{K,X}^{(M)}(x) and Λ̃_X defined as before).

Numerical Results

Stopping rule: I

Fix τ > 0. One can stop when

  h_{M+1,M} ≈ σ²_{M,M} < τ,

or a better choice is the following:

  Σ_{j=1}^M (H_M)_jj / N < τ,  (7)

which works for functions lying in the native space of the kernel.
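
A sketch of the first criterion wired into the iteration (ours; the trace-based test (7) would replace the if-condition): the Krylov space is grown one step at a time, and the loop exits as soon as h_{m+1,m}, and hence the smallest computed σ²_{m,j}, falls below τ.

```python
import numpy as np

def arnoldi_stopped(A, b, tau=1e-15):
    """Arnoldi with the stopping test h_{m+1,m} (~ sigma^2_{m,m}) < tau."""
    N = len(b)
    Phi = np.zeros((N, N + 1))
    Hbar = np.zeros((N + 1, N))
    Phi[:, 0] = b / np.linalg.norm(b)
    for m in range(N):
        v = A @ Phi[:, m]
        for i in range(m + 1):
            Hbar[i, m] = Phi[:, i] @ v
            v -= Hbar[i, m] * Phi[:, i]
        Hbar[m + 1, m] = np.linalg.norm(v)
        if Hbar[m + 1, m] < tau:                 # stopping rule reached
            return Phi[:, :m + 2], Hbar[:m + 2, :m + 1]
        Phi[:, m + 1] = v / Hbar[m + 1, m]
    return Phi, Hbar
```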

Numerical Results

Stopping rule: II

The decay of the residual described in (7), compared to the corresponding RMSE, goes as in the figure below, with τ = 1.e−15.

[Figure: Gaussian kernel, ε = 1, square [−1, 1]², N = 200 e.s. points. Left: h_{m+1,m}, σ²_{m,m} and svd(A) versus m. Right: RMSE versus m.]

Numerical Results

CPU time comparison

For each N we compute the optimal M using the new algorithm with tolerance τ = 1e−14.

N             121      225      361      529      729      961      1225     1521
optimal M     100      110      113      114      114      115      115      116
RMSE (new)    5.0e-08  3.4e-10  1.0e-10  6.7e-11  6.4e-11  5.5e-11  4.7e-11  3.4e-11
RMSE (WSVD)   5.0e-08  3.3e-09  1.3e-09  1.1e-09  1.1e-09  8.3e-10  7.8e-10  7.9e-10
Time (new)    1.7e-01  3.4e-01  6.1e-01  1.0e+00  1.6e+00  2.6e+00  3.7e+00  6.5e+00
Time (WSVD)   3.3e-01  7.2e-01  1.5e+00  4.2e+00  1.0e+01  2.5e+01  5.5e+01  1.1e+02

Table: Comparison between the WSVD basis and the new basis: computational time in seconds and corresponding RMSE, restricted to n = 49, ..., 1600 equally spaced points.

Approximated Power function and Lebesgue function

[Figure: Approximated power function (left, logarithmic plot) and approximated Lebesgue function (right) on Ω.]

Numerical Results

Another example

[Figure: IMQ kernel, ε = 1, cut disk, N = 1600 random points, M = 260, f(x, y) = exp(|x − y|) − 1.]

Numerical Results

Lebesgue Constant and Power Function

[Figure: IMQ kernel, ε = 1, cut disk, N = 1600 random points, M = 260, f(x, y) = exp(|x − y|) − 1. Left: Lebesgue function, with Λ_X = 160.2017. Right: power function.]

Further work

More analysis has to be done:

- a better stopping rule;
- understand the decay rate of P_{K,X}^{(M)};
- understand the growth rate of Λ̃_X;
- understand how X and ε influence s′_f.

Thank you for your attention!

References

R. K. Beatson, J. B. Cherrie, C. T. Mouat. Fast fitting of radial basis functions: methods based on preconditioned GMRES iteration. Adv. Comput. Math., 11: 253–270, 1999.

S. De Marchi, G. Santin. A new stable basis for radial basis function interpolation. J. Comput. Appl. Math., 253: 1–13, 2013.

S. De Marchi, G. Santin. Fast computation of orthonormal bases for RBF spaces through Krylov space methods. Preprint, 2014.

G. Fasshauer, M. J. McCourt. Stable evaluation of Gaussian radial basis function interpolants. SIAM J. Sci. Comput., 34: A737–A762, 2012.

A. C. Faul, M. J. D. Powell. Krylov subspace methods for radial basis function interpolation. Chapman & Hall/CRC Res. Notes Math., 420: 115–141, 1999.

P. Novati, M. R. Russo. A GCV based Arnoldi-Tikhonov regularization method. BIT Numerical Mathematics, doi:10.1007/s10543-013-0447-z, 2013.

M. Pazouki, R. Schaback. Bases for kernel-based spaces. J. Comput. Appl. Math., 236: 575–588, 2011.

M. Vianello, G. Da Fies. Algebraic cubature on planar lenses and bubbles. Dolomites Res. Notes Approx. DRNA, 5: 7–12, 2012.
