Radial basis functions topics in 40+1 slides

Stefano De Marchi
Department of Mathematics "Tullio Levi-Civita", University of Padova

Napoli, 22nd March 2018

Outline

1 From splines to RBF
2 Error estimates, conditioning and stability
  - Strategies for "controlling" errors
3 Stable basis approaches
  - WSVD Basis
4 Rescaled, Rational RBF and some applications

From cubic splines to RBF

Cubic splines

Let $S_3(X) = \{s \in C^2[a,b] : s|_{[x_i,x_{i+1}]} \in \mathcal{P}_3,\ 1 \le i \le N-1,\ s|_{[a,x_1]}, s|_{[x_N,b]} \in \mathcal{P}_1\}$ be the space of natural cubic splines.

$S_3(X)$ has the basis of truncated powers $(\cdot - x_j)_+^3$, $1 \le j \le N$, plus an arbitrary basis for $\mathcal{P}_3(\mathbb{R})$:

$$s(x) = \sum_{j=1}^{N} a_j (x - x_j)_+^3 + \sum_{j=0}^{3} b_j x^j, \quad x \in [a,b]. \qquad (1)$$

Using the identity $x_+^3 = \tfrac{1}{2}(|x|^3 + x^3)$ and the fact that $s|_{[a,x_1]}, s|_{[x_N,b]} \in \mathcal{P}_1$:

Every natural cubic spline $s$ has the representation

$$s(x) = \sum_{j=1}^{N} a_j \phi(|x - x_j|) + p(x), \quad x \in \mathbb{R},$$

where $\phi(r) = r^3$, $r \ge 0$, and $p \in \mathcal{P}_1(\mathbb{R})$, with

$$\sum_{j=1}^{N} a_j = \sum_{j=1}^{N} a_j x_j = 0. \qquad (2)$$
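Not from the slides: a minimal NumPy sketch of the representation (1)-(2). The coefficients are obtained from an augmented ("saddle point") linear system that enforces the two side conditions; all function and variable names are ours.

```python
# Natural cubic spline interpolation in the form (1)-(2): phi(r) = r^3 plus a
# linear polynomial, with the side conditions (2) enforced via an augmented system.
import numpy as np

def natural_cubic_spline(x_nodes, y, x_eval):
    """Interpolate (x_nodes, y) with s(x) = sum_j a_j |x - x_j|^3 + b_0 + b_1 x."""
    N = len(x_nodes)
    A = np.abs(x_nodes[:, None] - x_nodes[None, :]) ** 3    # A_ij = phi(|x_i - x_j|)
    P = np.column_stack([np.ones(N), x_nodes])              # basis of P_1
    M = np.block([[A, P], [P.T, np.zeros((2, 2))]])         # enforces sum a_j = sum a_j x_j = 0
    coef = np.linalg.solve(M, np.concatenate([y, np.zeros(2)]))
    a, b = coef[:N], coef[N:]
    return np.abs(x_eval[:, None] - x_nodes[None, :]) ** 3 @ a + b[0] + b[1] * x_eval

x = np.linspace(0.0, 1.0, 8)
print(natural_cubic_spline(x, np.sin(2 * np.pi * x), np.array([0.05, 0.5, 0.95])))
```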


Extension to $\mathbb{R}^d$

Going to $\mathbb{R}^d$ is straightforward: "radiality" becomes more evident:

$$s(x) = \sum_{j=1}^{N} a_j \phi(\|x - x_j\|_2) + p(x), \quad x \in \mathbb{R}^d, \qquad (3)$$

where $\phi : [0,\infty) \to \mathbb{R}$ is a fixed univariate function and $p \in \mathcal{P}_{\ell-1}(\mathbb{R}^d)$ is a low-degree $d$-variate polynomial. The additional conditions in (2) become

$$\sum_{j=1}^{N} a_j q(x_j) = 0, \quad \forall\, q \in \mathcal{P}_{\ell-1}(\mathbb{R}^d). \qquad (4)$$

Obs: We can omit the side conditions (4) on the coefficients (as when using "B-splines") and consider approximants of the form

$$s(x) = \sum_{j=1}^{N} a_j \phi(\|x - x_j\|_2),$$

and also omit the 2-norm symbol.


Scattered data fitting: I

Setting: Given data $X = \{x_j,\ j = 1,\dots,N\}$, with $x_j \in \Omega \subset \mathbb{R}^d$, and values $Y = \{y_j \in \mathbb{R},\ j = 1,\dots,N\}$ ($y_j = f(x_j)$), find a (continuous) function $P_f \in \mathrm{span}\{\phi(\|\cdot - x_i\|),\ x_i \in X\}$ s.t.

$$P_f = \sum_{j=1}^{N} \alpha_j \phi(\|\cdot - x_j\|), \quad \text{s.t. } (P_f)|_X = Y \quad \text{(interpolation)}.$$

Figure: Data points, data values and data function.

Obs: $P_f = \sum_{j=1}^{N} u_j f_j$, with $u_j(x_i) = \delta_{ji}$.
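A hedged sketch of the interpolation problem above (NumPy, Gaussian $\phi$ with $\varepsilon = 2$): build $A_{ij} = \phi(\|x_i - x_j\|_2)$, solve for the coefficients $\alpha$, evaluate $P_f$. Function names and test data are ours.

```python
import numpy as np

def fit_rbf(X, y, eps=2.0):
    """Return a callable P_f with P_f(x_j) = y_j."""
    phi = lambda r: np.exp(-(eps * r) ** 2)
    dist = lambda P, Q: np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    alpha = np.linalg.solve(phi(dist(X, X)), y)     # interpolation conditions (P_f)|_X = Y
    return lambda x_eval: phi(dist(x_eval, X)) @ alpha

rng = np.random.default_rng(0)
X = rng.random((50, 2))                             # scattered nodes in Omega = [0,1]^2
f = lambda P: np.cos(P[:, 0] + P[:, 1])             # data function
P_f = fit_rbf(X, f(X))
print(np.max(np.abs(P_f(X) - f(X))))                # ~ 0 at the data sites
```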


Scattered data fitting: II

Problem: For which functions $\phi : [0,\infty) \to \mathbb{R}$ is it true that, for all $d, N \in \mathbb{N}$ and all pairwise distinct $x_1,\dots,x_N \in \mathbb{R}^d$, $\det(A_{\phi,X}) \neq 0$? That is, when is the matrix $A_{\phi,X} := (\phi(\|x_i - x_j\|_2))_{1 \le i,j \le N}$ invertible?

Answer: The functions $\phi$ should be positive definite or conditionally positive definite of some order.
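A quick numerical illustration of the question above (NumPy, our own test data): for the Gaussian $\phi$ the matrix $A_{\phi,X}$ is symmetric positive definite for pairwise distinct points, so all its eigenvalues are positive and $\det(A_{\phi,X}) \neq 0$, even though the matrix can still be very ill-conditioned (see the trade-off principle later on).

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((30, 3))                                     # pairwise distinct points in R^3
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
A = np.exp(-(2.0 * D) ** 2)                                 # Gaussian with eps = 2
lam = np.linalg.eigvalsh(A)
print(lam.min() > 0, np.linalg.cond(A))                     # PD, but possibly ill-conditioned
```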


General kernel-based approximation

$\phi$: Conditionally Positive Definite (CPD) of order $\ell$, or Strictly Positive Definite (SPD), and radial.

Globally supported:

  Gaussian $C^\infty$ (GA):                   $\phi(r) = e^{-\varepsilon^2 r^2}$,  $\ell = 0$
  Generalized Multiquadrics $C^\infty$ (GM):  $\phi(r) = (1 + r^2\varepsilon^2)^{3/2}$,  $\ell = 2$

Locally supported:

  Wendland $C^2$ (W2):  $\phi(r) = (1 - \varepsilon r)_+^4\,(4\varepsilon r + 1)$,  $\ell = 0$
  Buhmann $C^2$ (B2):   $\phi(r) = 2r^4\log r - \tfrac{7}{2}r^4 + \tfrac{16}{3}r^3 - 2r^2 + \tfrac{1}{6}$,  $\ell = 0$

We often consider $\phi(\varepsilon\,\cdot)$, with $\varepsilon$ called the shape parameter.

Kernel notation: $K(x,y)\ (= K_\varepsilon(x,y)) = \Phi_\varepsilon(x-y) = \phi(\varepsilon\|x-y\|_2)$.
Native space $\mathcal{N}_K(\Omega)$ (where $K$ is the reproducing kernel); finite subspace $\mathcal{N}_K(X) = \mathrm{span}\{K(\cdot,x) : x \in X\} \subset \mathcal{N}_K(\Omega)$.
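The radial functions from the table written as small NumPy helpers (a sketch; vectorized in $r$, with the shape parameter passed explicitly where it appears; the helper names are ours).

```python
import numpy as np

gaussian   = lambda r, eps: np.exp(-(eps * r) ** 2)                                  # GA
gen_multiq = lambda r, eps: (1.0 + (eps * r) ** 2) ** 1.5                            # GM (CPD of order 2)
wendland2  = lambda r, eps: np.maximum(1.0 - eps * r, 0.0) ** 4 * (4 * eps * r + 1)  # W2, compact support

def buhmann2(r):                                                                     # B2, support [0, 1]
    r = np.asarray(r, dtype=float)
    core = (2 * r**4 * np.log(np.maximum(r, np.finfo(float).tiny))
            - 3.5 * r**4 + 16.0 / 3.0 * r**3 - 2 * r**2 + 1.0 / 6.0)
    return np.where(r < 1.0, core, 0.0)

r = np.linspace(0.0, 2.0, 5)
print(gaussian(r, 2.0), wendland2(r, 2.0), buhmann2(r))
```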


Error estimates, conditioning and stability

Separation distance, fill-distance and power function

$$q_X := \frac{1}{2}\min_{i \neq j}\|x_i - x_j\|_2 \quad \text{(separation distance)}$$

$$h_{X,\Omega} := \sup_{x \in \Omega}\ \min_{x_j \in X}\|x - x_j\|_2 \quad \text{(fill distance)}$$

$$P_{\Phi_\varepsilon,X}(x) := \sqrt{\Phi_\varepsilon(0) - (u(x))^T A\, u(x)} \quad \text{(power function)},$$

with $u$ the vector of cardinal functions.

Figure: The fill-distance of 25 Halton points, $h \approx 0.2667$.
Figure: Power function for the Gaussian kernel with $\varepsilon = 6$ on a grid of 81 uniform, Chebyshev and Halton points, respectively.
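A NumPy sketch (names ours) of the three quantities above: the separation distance $q_X$, the fill distance $h_{X,\Omega}$ estimated on a dense evaluation grid, and the power function $P(x) = \sqrt{\Phi(0) - u(x)^T A\,u(x)}$, where $u(x) = A^{-1}b(x)$ collects the cardinal function values and $b(x)_j = \Phi(x - x_j)$.

```python
import numpy as np

def dist(P, Q):
    return np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)

rng = np.random.default_rng(2)
X = rng.random((25, 2))                                       # nodes in Omega = [0,1]^2
g = np.linspace(0.0, 1.0, 60)
G = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)       # dense grid over Omega

DXX = dist(X, X)
q_X = 0.5 * DXX[~np.eye(len(X), dtype=bool)].min()            # separation distance
h_X = dist(G, X).min(axis=1).max()                            # fill distance (grid estimate)

phi = lambda r: np.exp(-(6.0 * r) ** 2)                       # Gaussian, eps = 6
A = phi(DXX)
U = np.linalg.solve(A, phi(dist(G, X)).T).T                   # cardinal values u(x) on the grid
P2 = phi(0.0) - np.einsum('ij,jk,ik->i', U, A, U)             # Phi(0) - u(x)^T A u(x)
print(q_X, h_X, np.sqrt(np.maximum(P2, 0.0)).max())
```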

Pointwise error estimates

Theorem. Let $\Omega \subset \mathbb{R}^d$ and $K \in C(\Omega\times\Omega)$ be PD on $\mathbb{R}^d$. Let $X = \{x_1,\dots,x_N\}$ be a set of distinct points. Take a function $f \in \mathcal{N}_K(\Omega)$ and denote by $P_f$ its interpolant on $X$. Then, for every $x \in \Omega$,

$$|f(x) - P_f(x)| \le P_{\Phi_\varepsilon,X}(x)\, \|f\|_{\mathcal{N}_K(\Omega)}. \qquad (5)$$

Theorem. Let $\Omega \subset \mathbb{R}^d$ and $K \in C^{2\kappa}(\Omega\times\Omega)$ be symmetric and positive definite, and $X = \{x_1,\dots,x_N\}$ a set of distinct points. Consider $f \in \mathcal{N}_K(\Omega)$ and its interpolant $P_f$ on $X$. Then there exist positive constants $h_0$ and $C$ (independent of $x$, $f$ and $\Phi$) such that, provided $h_{X,\Omega} \le h_0$,

$$|f(x) - P_f(x)| \le C\, h_{X,\Omega}^{\kappa}\, \sqrt{C_K(x)}\, \|f\|_{\mathcal{N}_K(\Omega)}, \qquad (6)$$

where $C_K(x) = \max_{|\beta| = 2\kappa}\ \max_{w,z \in \Omega \cap B(x,\,c_2 h_{X,\Omega})} |D_2^{\beta}\Phi(w,z)|$.

Reducing the interpolation error

Obs: The choice of the shape parameter $\varepsilon$ that yields the smallest (possible) interpolation error is crucial. Strategies:

- Trial and error
- Power function minimization
- Leave One Out Cross Validation (LOOCV)

Trial and error strategy: interpolation of the 1-d sinc function with the Gaussian for $\varepsilon \in [0, 20]$, taking 100 values of $\varepsilon$ and different sets of equispaced data points.
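A sketch of the LOOCV strategy listed above (NumPy): sweep over $\varepsilon$ and score each value with Rippa's closed-form leave-one-out residuals $e_k = \alpha_k / (A^{-1})_{kk}$, a standard shortcut that is not spelled out on the slide; kernel, data and names are ours.

```python
import numpy as np

def loocv_cost(X, y, eps):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A_inv = np.linalg.inv(np.exp(-(eps * D) ** 2))            # Gaussian kernel matrix, inverted
    return np.linalg.norm((A_inv @ y) / np.diag(A_inv))       # leave-one-out residual norm

rng = np.random.default_rng(3)
X = rng.random((40, 2))
y = np.sinc(4 * X[:, 0]) * np.sinc(4 * X[:, 1])               # sinc-like test data
eps_grid = np.linspace(0.5, 20.0, 100)                        # 100 trial values of eps
best = min(eps_grid, key=lambda e: loocv_cost(X, y, e))
print("eps selected by LOOCV:", best)
```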


Trade-off principle

Consider the errors (RMSE and MAXERR) and the condition number $\mu_2$ of $A = (\Phi_\varepsilon(\|x_i - x_j\|))_{i,j=1}^{N}$.

Figure: RMSE, MAXERR and condition number $\mu_2$, for 30 values of $\varepsilon \in [0.1, 20]$, for interpolation of the Franke function on a grid of 40 × 40 Chebyshev points.

Trade-off or uncertainty principle [Schaback 1995]:
- Accuracy vs stability
- Accuracy vs efficiency
- Accuracy and stability vs problem size

Accuracy vs stability

The error bounds for non-stationary interpolation using infinitely smooth basis functions show that the error decreases (exponentially) as the fill distance decreases.

For well-distributed data, a decrease in the fill distance also implies a decrease of the separation distance.

But then the condition estimates above imply that the condition number of $A$ grows exponentially.

This leads to numerical instabilities which make it virtually impossible to obtain the highly accurate results promised by the theoretical error bounds.

Stable basis approaches

Problem setting and questions

Obs: the standard basis of translates (data-dependent) of $\mathcal{N}_K(X)$ is unstable and not flexible.

Question 1: Is it possible to find a "better" basis $\mathcal{U}$ of $\mathcal{N}_K(X)$?

Question 2: How to embed information about $K$ and $\Omega$ in the basis $\mathcal{U}$?

Question 3: Can we extract $\mathcal{U}_0 \subset \mathcal{U}$ s.t. $s_f^0$ is as good as $s_f$?


The "natural" basis

The "natural" (data-independent) basis for Hilbert spaces (Mercer's theorem, 1909).

Let $K$ be a continuous, positive definite kernel on a bounded $\Omega \subset \mathbb{R}^d$. Then $K$ has an eigenfunction expansion with non-negative coefficients (the eigenvalues) s.t.

$$K(x,y) = \sum_{j \ge 0} \lambda_j \varphi_j(x)\,\varphi_j(y), \quad \forall\, x,y \in \Omega.$$

Moreover,

$$\lambda_j \varphi_j(x) = \int_\Omega K(x,y)\,\varphi_j(y)\,dy := T[\varphi_j](x), \quad \forall x \in \Omega,\ j \ge 0.$$

- $\{\varphi_j\}_{j>0}$ orthonormal in $\mathcal{N}_K(\Omega)$;
- $\{\varphi_j\}_{j>0}$ orthogonal in $L_2(\Omega)$, with $\|\varphi_j\|^2_{L_2(\Omega)} = \lambda_j \to 0$;
- $\sum_{j>0} \lambda_j = K(0,0)\,|\Omega|$ (the operator is of trace class).

Notice: the functions $\varphi_j$ are known explicitly only in very few cases [Fasshauer, McCourt 2012, for GRBF].

Notation

$\mathcal{T}_X = \{K(\cdot,x_i),\ x_i \in X\}$: the standard basis of translates;
$\mathcal{U} = \{u_i \in \mathcal{N}_K(\Omega),\ i = 1,\dots,N\}$: another basis s.t. $\mathrm{span}(\mathcal{U}) = \mathrm{span}(\mathcal{T}_X)$.

Change of basis [Pazouki, Schaback 2011]

Let $A_{ij} = K(x_i,x_j) \in \mathbb{R}^{N\times N}$. Any other basis $\mathcal{U}$ arises from a factorization $A \cdot C_U = V_U$, or $A = V_U \cdot C_U^{-1}$, where $V_U = (u_j(x_i))_{1\le i,j\le N}$ and $C_U$ is the matrix of the change of basis.

- Each $\mathcal{N}_K(\Omega)$-orthonormal basis $\mathcal{U}$ arises from an orthonormal decomposition $A = B^T \cdot B$ with $B = C_U^{-1}$, $V_U = B^T = (C_U^{-1})^T$.
- Each $\ell_2(X)$-orthonormal basis $\mathcal{U}$ arises from a decomposition $A = Q \cdot B$ with $Q = V_U$, $Q^T Q = I$, $B = C_U^{-1} = Q^T A$.

Notice: the best bases in terms of stability are the $\mathcal{N}_K(\Omega)$-orthonormal ones!

Q1: Yes, we can!
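A NumPy sketch of the two factorizations above. A Cholesky factorization $A = L L^T$ gives an $\mathcal{N}_K(\Omega)$-orthonormal basis ($B = L^T = C_U^{-1}$, $V_U = L$); the symmetric eigendecomposition $A = Q D Q^T$ gives an $\ell_2(X)$-orthonormal one ($Q = V_U$, $B = Q^T A$). Kernel, nodes and names are ours.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((12, 2))
A = np.exp(-(5.0 * np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)) ** 2)

L = np.linalg.cholesky(A)            # A = L @ L.T
V_nat, B_nat = L, L.T                # native-space-orthonormal basis: values V_U and C_U^{-1}

_, Q = np.linalg.eigh(A)             # A = Q @ diag(w) @ Q.T with Q orthogonal
V_l2, B_l2 = Q, Q.T @ A              # l2(X)-orthonormal basis: values V_U and C_U^{-1}

print(np.allclose(A, V_nat @ B_nat), np.allclose(A, V_l2 @ B_l2))
```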


WSVD Basis: construction

Q2: How to embed information on $K$ and $\Omega$ in $\mathcal{U}$?

Symmetric Nyström method [Atkinson, Han 2001]

Idea: discretize the "natural" basis (Mercer's theorem) by a convergent cubature rule $(X, \mathcal{W})$, with $X = \{x_j\}_{j=1}^{N} \subset \Omega$ and positive weights $\mathcal{W} = \{w_j\}_{j=1}^{N}$:

$$\lambda_j \varphi_j(x_i) = \int_\Omega K(x_i,y)\,\varphi_j(y)\,dy, \quad i = 1,\dots,N,\ \forall j > 0;$$

applying the cubature rule,

$$\lambda_j \varphi_j(x_i) \approx \sum_{h=1}^{N} K(x_i,x_h)\,\varphi_j(x_h)\,w_h, \quad i,j = 1,\dots,N. \qquad (7)$$

Letting $W = \mathrm{diag}(w_j)$, we reduce to solving the eigenproblem

$$\lambda v = (A \cdot W)\,v. \qquad (8)$$


WSVD Basis: construction (cont'd)

Rewrite (7) (using the fact that the weights are positive) as

$$\lambda_j\left(\sqrt{w_i}\,\varphi_j(x_i)\right) = \sum_{h=1}^{N}\left(\sqrt{w_i}\,K(x_i,x_h)\,\sqrt{w_h}\right)\left(\sqrt{w_h}\,\varphi_j(x_h)\right), \quad \forall\, i,j = 1,\dots,N, \qquad (9)$$

and then consider the corresponding scaled eigenvalue problem

$$\lambda\left(\sqrt{W}\cdot v\right) = \left(\sqrt{W}\cdot A\cdot\sqrt{W}\right)\left(\sqrt{W}\cdot v\right),$$

which is equivalent to the previous one but now involves the symmetric and positive definite matrix

$$A_W := \sqrt{W}\cdot A\cdot\sqrt{W}. \qquad (10)$$

WSVD Basis: definition

$\{\lambda_j, \varphi_j\}_{j>0}$ are then approximated by the eigenvalues/eigenvectors of $A_W$. This matrix is normal, so a singular value decomposition of $A_W$ is a unitary diagonalization.

Definition. A weighted SVD basis $\mathcal{U}$ is a basis for $\mathcal{N}_K(X)$ s.t.

$$V_U = \sqrt{W}^{-1}\cdot Q\cdot\Sigma, \qquad C_U = \sqrt{W}\cdot Q\cdot\Sigma^{-1};$$

since $A = V_U C_U^{-1}$, $Q\cdot\Sigma^2\cdot Q^T$ is the SVD of $A_W$.

Here $\Sigma_{jj} = \sigma_j$, $j = 1,\dots,N$, and $\sigma_1^2 \ge \cdots \ge \sigma_N^2 > 0$ are the singular values of $A_W$.
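A NumPy sketch of the construction above, with equal cubature weights $w_j = |\Omega|/N$ on $\Omega = [0,1]^2$ (so $|\Omega| = 1$); kernel, nodes and names are ours. It checks the $\ell_2^w(X)$-orthogonality (norms $\sigma_j^2$) and the trace property stated on the next slide.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.random((80, 2))
N = len(X)
A = np.exp(-(3.0 * np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)) ** 2)

w = np.full(N, 1.0 / N)                                  # positive weights, sum w_j = |Omega| = 1
sqw = np.sqrt(w)
AW = (sqw[:, None] * A) * sqw[None, :]                   # A_W = sqrt(W) A sqrt(W), eq. (10)
Q, s2, _ = np.linalg.svd(AW)                             # A_W = Q diag(sigma_j^2) Q^T
V_U = (Q / sqw[:, None]) @ np.diag(np.sqrt(s2))          # V_U = sqrt(W)^{-1} Q Sigma
C_U = (sqw[:, None] * Q) @ np.diag(1.0 / np.sqrt(s2))    # C_U = sqrt(W) Q Sigma^{-1}

print(np.allclose(V_U.T @ (w[:, None] * V_U), np.diag(s2)))   # ||u_j||^2_{l2^w} = sigma_j^2
print(s2.sum())                                               # ~ K(0,0) |Omega| = 1
```

Equal weights are only the simplest admissible cubature rule; any convergent rule with positive weights fits the same construction.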

WSVD Basis: properties

This basis is in fact an approximation of the "natural" one (provided $w_i > 0$, $\sum_{i=1}^{N} w_i = |\Omega|$).

Properties of the new basis $\mathcal{U}$ (cf. [De Marchi-Santin 2013]):

- $T_N[u_j](x) = \sigma_j^2\, u_j(x)$, $\forall\, 1 \le j \le N$, $\forall x \in \Omega$;
- $\mathcal{N}_K(\Omega)$-orthonormal;
- $\ell_2^w(X)$-orthogonal, with $\|u_j\|^2_{\ell_2^w(X)} = \sigma_j^2$, $\forall u_j \in \mathcal{U}$;
- $\sum_{j=1}^{N}\sigma_j^2 = K(0,0)\,|\Omega|$.


WSVD Basis: approximation

Interpolant: $s_f(x) = \sum_{j=1}^{N}(f,u_j)_K\, u_j(x)$, $\forall x \in \Omega$.
WDLS: $s_f^M := \mathrm{argmin}\left\{\|f-g\|_{\ell_2^w(X)} : g \in \mathrm{span}\{u_1,\dots,u_M\}\right\}$.

Weighted Discrete Least Squares as truncation

Let $f \in \mathcal{N}_K(\Omega)$, $1 \le M \le N$. Then, $\forall x \in \Omega$,

$$s_f^M(x) = \sum_{j=1}^{M}\frac{(f,u_j)_{\ell_2^w(X)}}{(u_j,u_j)_{\ell_2^w(X)}}\,u_j(x) = \sum_{j=1}^{M}\frac{(f,u_j)_{\ell_2^w(X)}}{\sigma_j^2}\,u_j(x) = \sum_{j=1}^{M}(f,u_j)_K\,u_j(x).$$

Q3: Can we extract $\mathcal{U}_0 \subset \mathcal{U}$ s.t. $s_f^0$ is as good as $s_f$? Yes we can! Take $\mathcal{U}_0 = \{u_1,\dots,u_M\}$.
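A sketch of the truncated WDLS approximant $s_f^M$ (NumPy; the same equal-weight WSVD construction as in the previous sketch is repeated here so the snippet is self-contained; test function and names are ours). The weighted discrete inner product is $(f, u_j)_{\ell_2^w(X)} = \sum_k w_k f(x_k) u_j(x_k)$.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.random((80, 2))
kernel = lambda P, Q: np.exp(-(3.0 * np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)) ** 2)
w = np.full(len(X), 1.0 / len(X))
sqw = np.sqrt(w)
Q, s2, _ = np.linalg.svd((sqw[:, None] * kernel(X, X)) * sqw[None, :])
V_U = (Q / sqw[:, None]) @ np.diag(np.sqrt(s2))           # u_j at the nodes
C_U = (sqw[:, None] * Q) @ np.diag(1.0 / np.sqrt(s2))     # u_j = sum_i (C_U)_{ij} K(., x_i)

f = lambda P: np.cos(P[:, 0] + P[:, 1])
coeff = (V_U.T @ (w * f(X))) / s2                          # (f, u_j)_{l2^w} / sigma_j^2
M = int(np.sum(s2 > 1e-16))                                # keep only non-negligible sigma_j^2
x_eval = rng.random((200, 2))
s_fM = (kernel(x_eval, X) @ C_U)[:, :M] @ coeff[:M]        # s_f^M(x) = sum_{j<=M} (f,u_j)_K u_j(x)
print(M, np.max(np.abs(s_fM - f(x_eval))))
```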


WSVD Basis: approximation II

If we define the pseudo-cardinal functions as $\tilde\ell_i = s^M_{\ell_i}$, we get

$$s_f^M(x) = \sum_{i=1}^{N} f(x_i)\,\tilde\ell_i(x), \qquad \tilde\ell_i(x) = \sum_{j=1}^{M}\frac{u_j(x_i)}{\sigma_j^2}\,u_j(x).$$

Generalized Power Function and Lebesgue constant

If $f \in \mathcal{N}_K(\Omega)$,

$$\left|f(x) - s_f^M(x)\right| \le P_{K,X}^{(M)}(x)\,\|f\|_{\mathcal{N}_K(\Omega)}, \quad \forall x \in \Omega,$$

where

$$\left[P_{K,X}^{(M)}(x)\right]^2 = K(0,0) - \sum_{j=1}^{M}[u_j(x)]^2 \quad \text{(generalized power function)},$$

$$\|s_f^M\| \le \tilde\Lambda_X\,\|f\|_X, \quad \text{where } \tilde\Lambda_X = \max_{x\in\Omega}\sum_{i=1}^{N}|\tilde\ell_i(x)| \text{ is the "pseudo-Lebesgue constant".}$$

WSVD Basis: sub-basis

How can we extract $\mathcal{U}_0 \subset \mathcal{U}$ s.t. $s_f^0$ is as good as $s_f$?

Figure: Gaussian kernel, $\sigma_j^2$ on equispaced points of the square, for $\varepsilon = 1, 4, 9$.

Recall that $\|u_j\|^2_{\ell_2^w(X)} = \sigma_j^2 \to 0$: we can choose $M$ s.t. $\sigma_{M+1}^2$ falls below a prescribed tolerance, so we do not need the $u_j$ with $j > M$.

WSVD Basis: example

              ε = 1   ε = 4   ε = 9
  Gaussian      100     340     500
  IMQ           180     580     580
  Matern3       460     560     580

Table: Optimal M for different kernels and shape parameters, i.e. the indices such that the weighted least-squares approximant $s_f^M$ provides the best approximation of the function $f(x,y) = \cos(20(x+y))$ on the disk with center $C = (1/2, 1/2)$ and radius $R = 1/2$.

WSVD Basis: example (cont'd)

Figure: Franke's test function on a lens-shaped domain, IMQ kernel, $\varepsilon = 1$: RMS error vs. number of basis elements for the WSVD and the standard basis. Left: complete basis. Right: truncation with $\sigma_{M+1}^2 < 10^{-17}$.

Problem: we have to compute the whole basis before truncation!
Solution: Krylov methods, approximate SVD [Novati, Russo 2013].


New fast basis [De Marchi-Santin, BIT 2015]: a comparison

  N             225         529         961         1521
  M             110         114         115         116
  RMSE (new)    3.4·10^-10  6.7·10^-11  5.5·10^-11  3.4·10^-11
  RMSE (WSVD)   3.3·10^-9   1.1·10^-9   8.3·10^-10  7.9·10^-10
  Time (new)    3.4·10^-1   1.0·10^0    2.6·10^0    6.5·10^0
  Time (WSVD)   7.2·10^-1   4.2·10^0    2.5·10^1    1.1·10^2

Table: Comparison of the WSVD basis and the new basis: computational time in seconds and corresponding RMSE for the interpolation of a function (a sum of Gaussians) by GRBF on N = 15^2, 23^2, 31^2, 39^2 equally spaced points on [-1,1]^2.

Notice: for each N we computed the optimal M using the new algorithm with tolerance $\tau = 10^{-4}$.

Rescaled, Rational RBF and some applications

Rescaled RBF Interpolation

Classical interpolant:

$$P_f(x) = \sum_{k=1}^{N}\alpha_k K(x,x_k), \quad x \in \Omega,\ x_k \in X.$$

[Hardy and Göpfert 1975] used multiquadrics $K(x,y) = \sqrt{1 + \varepsilon^2\|x-y\|^2}$.

Rescaled interpolant:

$$\hat P_f(x) = \frac{P_f(x)}{P_g(x)} = \frac{\sum_{k=1}^{N}\alpha_k K(x,x_k)}{\sum_{k=1}^{N}\beta_k K(x,x_k)},$$

where $P_g$ is the kernel interpolant of $g(x) = 1$, $\forall x \in \Omega$.

Obs:
- Rescaled Localized RBF (RL-RBF), based on CSRBF, introduced in [Deparis et al, SISC 2014]: the interpolants are smoother even for small radii of the support.
- In [DeM et al 2017] it is shown that it is a Shepard's PU method: exactness on constants.
- Linear convergence of localized rescaled interpolants is still an open problem [DeM and Wendland, draft 2017].
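A sketch of the rescaled interpolant $\hat P_f = P_f / P_g$ above (NumPy; with a Gaussian kernel rather than the W2 of the example that follows; names are ours).

```python
import numpy as np

def rescaled_interpolant(X, y, eps=2.0):
    kernel = lambda P, Q: np.exp(-(eps * np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)) ** 2)
    A = kernel(X, X)
    alpha = np.linalg.solve(A, y)                    # interpolant of the data f
    beta = np.linalg.solve(A, np.ones(len(X)))       # interpolant of g(x) = 1
    def P_hat(x_eval):
        E = kernel(x_eval, X)
        return (E @ alpha) / (E @ beta)              # P_f(x) / P_g(x)
    return P_hat

X = np.linspace(0.0, 1.0, 7)[:, None]                # the 7 nodes of the example below
P_hat = rescaled_interpolant(X, X[:, 0])             # data from f(x) = x
print(P_hat(np.linspace(0.0, 1.0, 5)[:, None]))
```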


Example

Take $f(x) = x$ on $[0,1]$, using W2 at the point set $X = \{0, 1/6, 1/3, 1/2, 2/3, 5/6, 1\}$, with $\varepsilon = 5$ ($= 1/r$).

Figure: (left) the interpolants and (right) the absolute errors.

For more results see [Deparis et al 14, Idda's master thesis 2015].

RBF-PU interpolation

$\Omega = \bigcup_{j=1}^{s}\Omega_j$ (the subdomains can overlap), such that each $x \in \Omega$ belongs to a limited number of subdomains, say $s_0 < s$.

Then we consider non-negative functions $W_j$ on $\Omega_j$ s.t. $\sum_{j=1}^{s} W_j = 1$, a partition of unity. A possible choice is

$$W_j(x) = \frac{\tilde W_j(x)}{\sum_{k=1}^{s}\tilde W_k(x)}, \quad j = 1,\dots,s, \qquad (11)$$

where the $\tilde W_j$ are compactly supported functions on $\Omega_j$.

RBF-PU interpolant:

$$\mathcal{I}(x) = \sum_{j=1}^{s} R_j(x)\,W_j(x), \quad \forall x \in \Omega, \qquad (12)$$

where $R_j$ is an RBF interpolant on $\Omega_j$, i.e.

$$R_j(x) = \sum_{i=1}^{N_j}\alpha_i^j K(x, x_i^j), \quad x_k^j \in X_{N_j} := X \cap \Omega_j,\ k = 1,\dots,N_j,\ N_j = |X_{N_j}|.$$
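A sketch of the RBF-PU interpolant (12) (NumPy): local Gaussian interpolants on overlapping circular subdomains, blended with Shepard weights (11) built from Wendland's W2 as the compactly supported $\tilde W_j$. The subdomain centres, the radius and all names are our own choices.

```python
import numpy as np

dist = lambda P, Q: np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
gauss = lambda r: np.exp(-(6.0 * r) ** 2)                       # local kernel
w2 = lambda r: np.maximum(1.0 - r, 0.0) ** 4 * (4.0 * r + 1.0)  # compactly supported weight

rng = np.random.default_rng(6)
X = rng.random((250, 2))
y = np.cos(4.0 * (X[:, 0] + X[:, 1]))
g = np.linspace(0.0, 1.0, 4)
centres = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)   # 4 x 4 patch centres
rho = 0.45                                                      # patch radius (patches overlap)

def pu_interpolant(x_eval):
    Wt = w2(dist(x_eval, centres) / rho)                        # \tilde{W}_j(x)
    W = Wt / Wt.sum(axis=1, keepdims=True)                      # Shepard weights, sum_j W_j = 1
    out = np.zeros(len(x_eval))
    for j, c in enumerate(centres):
        idx = np.linalg.norm(X - c, axis=1) <= rho              # nodes in Omega_j
        alpha = np.linalg.solve(gauss(dist(X[idx], X[idx])), y[idx])
        out += W[:, j] * (gauss(dist(x_eval, X[idx])) @ alpha)  # R_j(x) W_j(x)
    return out

G = rng.random((500, 2))
print(np.max(np.abs(pu_interpolant(G) - np.cos(4.0 * (G[:, 0] + G[:, 1])))))
```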

Rescaling gives a Shepard method

We start by writing the interpolant of a function $f \in \mathcal{N}_K$ using the cardinal functions $u_j(x_i) = \delta_{ij}$:

$$P_f = \sum_{j=1}^{N} f(x_j)\,u_j,$$

so that for $g \equiv 1$ we get

$$P_g = \sum_{j=1}^{N} u_j.$$

The rescaled interpolant is then

$$\hat P_f = \frac{\sum_{j=1}^{N} f(x_j)\,u_j}{\sum_{k=1}^{N} u_k} = \sum_{j=1}^{N} f(x_j)\,\frac{u_j}{\sum_{k=1}^{N} u_k} =: \sum_{j=1}^{N} f(x_j)\,\hat u_j,$$

where we introduced the (new) cardinal functions $\hat u_j := u_j / \sum_{k=1}^{N} u_k$.

Corollary. The rescaled interpolation method is a Shepard-PU method, where the weight functions are defined as $\hat u_j = u_j / \sum_{k=1}^{N} u_k$, $\{u_j\}_j$ being the cardinal basis of $\mathrm{span}\{K(\cdot,x),\ x \in X\}$.


Variably Scaled Kernels (VSK)

Definition. Let $\psi : \mathbb{R}^d \to (0,\infty)$ be a given scale function. A Variably Scaled Kernel (VSK) $K_\psi$ on $\mathbb{R}^d$ is

$$K_\psi(x,y) := K\big((x,\psi(x)),\,(y,\psi(y))\big), \quad \forall\, x,y \in \mathbb{R}^d,$$

where $K$ is a kernel on $\mathbb{R}^{d+1}$.

Given $X$ and scale functions $\psi_j : \mathbb{R}^d \to (0,\infty)$, $j = 1,\dots,s$, the rescaled VSK partition-of-unity interpolant (RVSK-PU) is

$$\mathcal{I}_\psi(x) = \sum_{j=1}^{s} R_{\psi_j}(x)\,W_j(x), \quad x \in \Omega, \qquad (13)$$

with $R_{\psi_j}(x)$ the rescaled VSK interpolant on $\Omega_j$.

Obs (if $\Phi$ is radial):

$$(A_{\psi_j})_{ik} = \Phi\!\left(\sqrt{\|x_i^j - x_k^j\|_2^2 + \big(\psi_j(x_i^j) - \psi_j(x_k^j)\big)^2}\right), \quad i,k = 1,\dots,N_j.$$
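A sketch of VSK interpolation (NumPy): the nodes are augmented with the value of the scale function $\psi$ and a standard radial kernel is applied in $\mathbb{R}^{d+1}$; the particular $\psi$ used here is just an arbitrary smooth example of ours.

```python
import numpy as np

phi = lambda r: np.exp(-(2.0 * r) ** 2)                  # radial kernel on R^{d+1}
psi = lambda P: 1.0 + 0.5 * np.sin(6.0 * P[:, 0])        # scale function psi: R^d -> (0, inf)

def vsk_matrix(P, Q):
    Pa = np.column_stack([P, psi(P)])                    # augmented points (x, psi(x))
    Qa = np.column_stack([Q, psi(Q)])
    return phi(np.linalg.norm(Pa[:, None, :] - Qa[None, :, :], axis=-1))

rng = np.random.default_rng(7)
X = rng.random((60, 2))
y = np.cos(3.0 * X[:, 0]) * X[:, 1]
alpha = np.linalg.solve(vsk_matrix(X, X), y)             # K_psi(x_i, x_k) alpha = y
x_eval = rng.random((5, 2))
print(vsk_matrix(x_eval, X) @ alpha)
```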


Rational RBF (RRBF)

Definition:

$$R(x) = \frac{R^{(1)}(x)}{R^{(2)}(x)} = \frac{\sum_{k=1}^{N}\alpha_k K(x,x_k)}{\sum_{k=1}^{N}\beta_k K(x,x_k)} \qquad \text{[Jakobsson et al. 2009, Sarra and Bai 2017]}$$

$\Longrightarrow$ RRBFs approximate well data with steep gradients or discontinuities [rational RBF with PU + VSK in DeM et al. 2017].
