On a new orthonormal basis for RBF native spaces
Stefano De Marchi and Gabriele Santin
San Diego, July 8, 2013
RBF Approximation
1. Data: Ω ⊂ R^n, a set X = {x_1, ..., x_N} ⊂ Ω, and a test function f with values f_1, ..., f_N, where f_i = f(x_i).

2. Approximation setting:
- kernel K = K_ε, positive definite and radial
- native space N_K(Ω) (the Hilbert space in which K is the reproducing kernel)
- finite subspace N_K(X) = span{K(·, x) : x ∈ X} ⊂ N_K(Ω)

Aim: find s_f ∈ N_K(X) such that s_f ≈ f.

Problem setting
Problem: the standard (data-dependent) basis of N_K(X) is unstable and not flexible.

Question 1: Is it possible to find a “better” basis U of N_K(X)?

Question 2: How can we embed information about K and Ω in U?

Question 3: Can we extract U_0 ⊂ U such that s_f^0 is as good as s_f?
Q1: Is it possible to find a “better” basis?

Change of basis ([Pazouki-Schaback 2011]):
Let A_ij = K(x_i, x_j) ∈ R^{N×N}. Any basis U arises from a factorization A = V_U · C_U^{-1}, where C_U is the change-of-basis matrix and V_U = (u_j(x_i))_{1≤i,j≤N}.

- Each N_K(Ω)-orthonormal basis U arises from a decomposition A = B^T · B, with B = C_U^{-1} and V_U = B^T = (C_U^{-1})^T.
- Each ℓ²(X)-orthonormal basis U arises from a decomposition A = Q · B, with Q = V_U, Q^T Q = I and B = C_U^{-1} = Q^T A.

The best bases in terms of stability are the N_K(Ω)-orthonormal ones.
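These two factorizations can be sketched in a few lines of NumPy. A minimal sketch, not from the talk: the Gaussian kernel, the grid of nodes, and all variable names are our assumptions. Cholesky of A yields the N_K(Ω)-orthonormal case, QR the ℓ²(X)-orthonormal one.

```python
import numpy as np

# Sketch of the two factorizations (kernel, nodes and names are our assumptions).
t = np.linspace(-1.0, 1.0, 5)
X = np.stack(np.meshgrid(t, t), axis=-1).reshape(-1, 2)   # N = 25 grid nodes in Omega
eps = 3.0
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
A = np.exp(-eps**2 * d2)          # A_ij = K(x_i, x_j), symmetric positive definite

# N_K(Omega)-orthonormal basis: A = B^T B with B = C_U^{-1} (Cholesky), V_U = B^T.
B = np.linalg.cholesky(A).T       # upper triangular factor of A
V_nat = B.T                       # values u_j(x_i) of the N_K(Omega)-o.n. basis

# l2(X)-orthonormal basis: A = Q B with Q^T Q = I, Q = V_U, B = C_U^{-1} = Q^T A.
Q, _ = np.linalg.qr(A)
B_l2 = Q.T @ A

assert np.allclose(V_nat @ B, A)              # A = V_U C_U^{-1} (Cholesky case)
assert np.allclose(Q @ B_l2, A)               # A = Q B (QR case)
assert np.allclose(Q.T @ Q, np.eye(len(X)))   # Q columns are l2(X)-orthonormal
```

The grid and ε = 3 are chosen so that A is well conditioned and the Cholesky factorization succeeds numerically.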
Motivation
The “natural” (data-independent) basis for Hilbert spaces (Mercer’s theorem, 1909):
Let K be a continuous, positive definite kernel on a bounded Ω ⊂ R^n. Then K has the eigenfunction expansion

  K(x, y) = Σ_{j≥0} λ_j φ_j(x) φ_j(y), ∀ x, y ∈ Ω.

Moreover,

  λ_j φ_j(x) = ∫_Ω K(x, y) φ_j(y) dy =: T[φ_j](x), ∀ x ∈ Ω, j ≥ 0.

- {φ_j}_{j≥0} is orthonormal in N_K(Ω)
- {φ_j}_{j≥0} is orthogonal in L²(Ω), with ‖φ_j‖²_{L²(Ω)} = λ_j → 0 and Σ_{j≥0} λ_j = K(0, 0) |Ω|
Q2: How can we embed information about K and Ω in U?

Idea: approximate the integral equation

  λ_j φ_j(x) = T[φ_j](x)

with the symmetric Nyström method, using a cubature formula (X, W): the {λ_j, φ_j}_{j≥0} are approximated by the eigenvalues/eigenvectors of A_W := √W · A · √W, with W = diag(w_j), i.e. by the solutions of the scaled eigenvalue problem

  λ (√W · v) = A_W (√W · v).

Definition:
A weighted SVD basis U is a basis for N_K(X) such that

  V_U = √W^{-1} · Q · Σ,   C_U = √W · Q · Σ^{-1};

since A = V_U · C_U^{-1}, then A_W = Q · Σ² · Q^T is the SVD (and unitary diagonalization) of A_W.
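A minimal numerical sketch of this construction (kernel, nodes, cubature weights and variable names are our assumptions): build A_W, diagonalize it, and check that V_U and C_U defined as above reproduce A and make the basis ℓ²_w(X)-orthogonal.

```python
import numpy as np

# Sketch of the weighted SVD basis (kernel, nodes and weights are our assumptions).
t = np.linspace(-1.0, 1.0, 5)
X = np.stack(np.meshgrid(t, t), axis=-1).reshape(-1, 2)
N = len(X)
w = np.full(N, 4.0 / N)                  # cubature weights with sum(w) = |Omega| = 4
eps = 3.0
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
A = np.exp(-eps**2 * d2)                 # Gaussian kernel matrix

sqW = np.sqrt(w)
AW = sqW[:, None] * A * sqW[None, :]     # A_W = sqrt(W) A sqrt(W)
lam, Q = np.linalg.eigh(AW)              # A_W = Q Sigma^2 Q^T
lam, Q = lam[::-1], Q[:, ::-1]           # sigma_1^2 >= ... >= sigma_N^2
Sigma = np.sqrt(lam)

V_U = (Q / sqW[:, None]) * Sigma[None, :]   # V_U = sqrt(W)^{-1} Q Sigma
C_U = (Q * sqW[:, None]) / Sigma[None, :]   # C_U = sqrt(W) Q Sigma^{-1}

assert np.allclose(A @ C_U, V_U)            # equivalent to A = V_U C_U^{-1}
gram = V_U.T @ (w[:, None] * V_U)           # l_w^2(X) Gram matrix of the basis
assert np.allclose(gram, np.diag(lam), atol=1e-10)   # ||u_j||^2_w = sigma_j^2
```

The identity A @ C_U = V_U is an inversion-free restatement of A = V_U C_U^{-1}, which is the numerically safer check.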
Properties

This basis is in fact an approximation of the “natural” one, provided w_i > 0 and Σ_{i=1}^N w_i = |Ω| (cf. [De Marchi-Santin 2013]):

- σ_j² u_j(x) = T_K^N[u_j](x) for all j and all x ∈ Ω, where T_K^N[u_j](·) := (u_j, K(·, x))_{ℓ²_w(X)}
- the basis is N_K(Ω)-orthonormal
- the basis is ℓ²_w(X)-orthogonal, with ‖u_j‖²_{ℓ²_w(X)} = σ_j² for all j
- Σ_{j=1}^N σ_j² = K(0, 0) |Ω|
Approximation

Interpolant:

  s_f(x) = Σ_{j=1}^N (f, u_j)_K u_j(x), ∀ x ∈ Ω.

WDLS (weighted discrete least squares):

  s_f^M := argmin { ‖f − g‖_{ℓ²_w(X)} : g ∈ span{u_1, ..., u_M} }.

WDLS as truncation:
Let f ∈ N_K(Ω) and 1 ≤ M ≤ N. Then, for all x ∈ Ω,

  s_f^M(x) = Σ_{j=1}^M [(f, u_j)_{ℓ²_w(X)} / (u_j, u_j)_{ℓ²_w(X)}] u_j(x) = Σ_{j=1}^M (f, u_j)_K u_j(x).
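The truncation identity can be checked numerically: the coefficients (f, u_j)_w / σ_j² coincide with the solution of the weighted least-squares problem. A sketch, with the kernel, nodes, weights and truncation index chosen by us (the test function is the one used later in the talk):

```python
import numpy as np

# Sketch: WDLS on span{u_1..u_M} equals truncation of the expansion of s_f.
t = np.linspace(-1.0, 1.0, 5)
X = np.stack(np.meshgrid(t, t), axis=-1).reshape(-1, 2)
N = len(X)
w = np.full(N, 4.0 / N)
eps = 3.0
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
A = np.exp(-eps**2 * d2)

sqW = np.sqrt(w)
lam, Q = np.linalg.eigh(sqW[:, None] * A * sqW[None, :])
lam, Q = lam[::-1], Q[:, ::-1]
V_U = (Q / sqW[:, None]) * np.sqrt(lam)[None, :]   # weighted SVD basis values

f = np.cos(20.0 * (X[:, 0] + X[:, 1]))             # the talk's test function
M = 10
VM = V_U[:, :M]

# truncated expansion: c_j = (f, u_j)_w / (u_j, u_j)_w = (f, u_j)_w / sigma_j^2
c_trunc = (VM.T @ (w * f)) / lam[:M]
# weighted discrete least squares: argmin || sqrt(W) (f - V_M c) ||_2
c_wdls = np.linalg.lstsq(sqW[:, None] * VM, sqW * f, rcond=None)[0]

assert np.allclose(c_trunc, c_wdls, atol=1e-8)
```

The agreement follows from the ℓ²_w(X)-orthogonality of the u_j: the normal equations become diagonal.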
Q3: Can we extract U_0 ⊂ U such that s_f^0 is as good as s_f?

We can take U_0 = {u_1, ..., u_M}.

Approximation II
If we define the pseudo-cardinal functions as ℓ̃_i = s^M_{ℓ_i}, we get

  ℓ̃_i(x) = Σ_{j=1}^M [u_j(x_i) / σ_j²] u_j(x),   s_f^M(x) = Σ_{i=1}^N f(x_i) ℓ̃_i(x).
Generalized Power Function and Lebesgue constant:
If f ∈ N_K(Ω), then

  |f(x) − s_f^M(x)| ≤ P_{K,X}^{(M)}(x) ‖f‖_{N_K(Ω)}, ∀ x ∈ Ω,

where

  (P_{K,X}^{(M)}(x))² = K(0, 0) − Σ_{j=1}^M [u_j(x)]².

Moreover,

  ‖s_f^M‖_∞ ≤ Λ̃_X ‖f‖_X.

Sub-basis
Q3: Can we extract U_0 ⊂ U such that s_f^0 is as good as s_f? We can take U_0 = {u_1, ..., u_M}.

Figure: Gaussian kernel, σ_j² on equispaced points of the square, for ε = 1, 4, 9.

- Recall that ‖u_j‖²_{ℓ²_w(X)} = σ_j² → 0.
- We can choose M such that σ²_{M+1} < tol.
- We do not need u_j for j > M.

An Example: I
Figure: The domains used in the numerical experiments, with an example of the corresponding sample points. From left to right: the lens Ω_1 (trigonometric-Gaussian points), the disk Ω_2 (trigonometric-Gaussian points) and the square Ω_3 (product Gauss-Legendre points).
An Example: II
            ε = 1   ε = 4   ε = 9
Gaussian      100     340     500
IMQ           180     580     580
Matern3       460     560     580

Table: Optimal M for different kernels and shape parameters, i.e. the indexes for which the weighted least-squares approximant s_f^M provides the best approximation of the function f(x, y) = cos(20(x + y)) on the disk with center C = (1/2, 1/2) and radius R = 1/2.
An Example: III
Figure: Franke’s function, lens, IMQ kernel, ε = 1, RMSE vs. N for the WSVD and standard bases. Left: complete basis. Right: truncation with σ²_{M+1} < 10^{−17}.
Problem: we have to compute the whole basis before truncation!
Solution: Krylov methods.
Arnoldi iteration
Consider A_ij = K(x_i, x_j) and b_i = f(x_i), 1 ≤ i, j ≤ N:
- define the Krylov subspace K_M(A, b) = span{b, Ab, ..., A^{M−1} b}, with M ≪ N
- compute an orthonormal basis {φ_1, ..., φ_M} of K_M(A, b)
- define the (tridiagonal) matrix H_M which represents the projection of A onto K_M(A, b) (since Φ_M^T A Φ_M = H_M, with Φ_M = [φ_1, ..., φ_M] of size N × M)

Arnoldi’s algorithm gives A Φ_M = Φ_{M+1} H̄_M, where

  H̄_M = [ H_M ; h_{M+1,M} e_M^T ]

is (M + 1) × M. In practice h_{M+1,M} ≈ 0, so that K_{M+1}(A, b) = K_M(A, b).
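A compact sketch of the iteration (the random s.p.d. matrix stands in for a kernel matrix, and the routine is a generic Arnoldi with full orthogonalization; all names are ours). For symmetric A the matrix H̄_M comes out numerically tridiagonal, which is the Lanczos case:

```python
import numpy as np

def arnoldi(A, b, M):
    """Arnoldi iteration: orthonormal Phi (N x (M+1)) spanning K_{M+1}(A, b)
    and Hbar ((M+1) x M) with A @ Phi[:, :M] = Phi @ Hbar."""
    N = len(b)
    Phi = np.zeros((N, M + 1))
    Hbar = np.zeros((M + 1, M))
    Phi[:, 0] = b / np.linalg.norm(b)
    for j in range(M):
        v = A @ Phi[:, j]
        for i in range(j + 1):           # Gram-Schmidt against previous vectors
            Hbar[i, j] = Phi[:, i] @ v
            v = v - Hbar[i, j] * Phi[:, i]
        Hbar[j + 1, j] = np.linalg.norm(v)
        Phi[:, j + 1] = v / Hbar[j + 1, j]
    return Phi, Hbar

rng = np.random.default_rng(0)
N, M = 40, 8
G = rng.standard_normal((N, N))
A = G @ G.T                              # s.p.d. stand-in for the kernel matrix
b = rng.standard_normal(N)

Phi, Hbar = arnoldi(A, b, M)
assert np.allclose(A @ Phi[:, :M], Phi @ Hbar, atol=1e-8)   # Krylov relation
assert np.allclose(Phi.T @ Phi, np.eye(M + 1), atol=1e-8)   # orthonormal basis
assert np.allclose(np.triu(Hbar, 2), 0.0, atol=1e-8)        # tridiagonal (A symmetric)
```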
Approximation of the SVD

Consider an SVD H̄_M = U_M Σ²_M V_M^T, where U_M ∈ R^{(M+1)×(M+1)}, V_M ∈ R^{M×M}, Σ²_M = [Σ̃²_M, 0]^T and Σ̃²_M = diag(σ²_{M,1}, ..., σ²_{M,M}).

Approximate SVD ([Novati-Russo 2013]):
Let 𝒰_M = Φ_{M+1} U_M and 𝒱_M = Φ_M V_M. Then

  A 𝒱_M = 𝒰_M Σ²_M,   A 𝒰_M = 𝒱_M (Σ²_M)^T,

and the first M singular values of A are well approximated by the σ²_{M,j}. If M = N, in exact arithmetic the triplet (Φ_{M+1} Ũ_M, Σ̃²_M, Φ_M V_M) is an SVD of A, where Ũ_M is U_M without its last column.

Definition

Recall:

  A Φ_M = Φ_{M+1} H̄_M,   H̄_M = U_M Σ²_M V_M^T,   Σ²_M = [Σ̃²_M, 0]^T,

and Ũ_M is U_M without its last column.

Definition:
The sub-basis U^M is the set {u_1, ..., u_M} ⊂ N_K(X) defined by

  V_U = Φ_{M+1} Ũ_M Σ̃_M,   C_U = Φ_M V_M Σ̃_M^{-1}.
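The relation A 𝒱_M = 𝒰_M Σ²_M behind this definition can be verified numerically. A self-contained sketch (the generic Arnoldi routine and the random s.p.d. stand-in for A are our assumptions):

```python
import numpy as np

def arnoldi(A, b, M):
    """Arnoldi iteration returning Phi (N x (M+1), orthonormal columns) and
    Hbar ((M+1) x M) with A @ Phi[:, :M] = Phi @ Hbar."""
    N = len(b)
    Phi = np.zeros((N, M + 1))
    Hbar = np.zeros((M + 1, M))
    Phi[:, 0] = b / np.linalg.norm(b)
    for j in range(M):
        v = A @ Phi[:, j]
        for i in range(j + 1):
            Hbar[i, j] = Phi[:, i] @ v
            v = v - Hbar[i, j] * Phi[:, i]
        Hbar[j + 1, j] = np.linalg.norm(v)
        Phi[:, j + 1] = v / Hbar[j + 1, j]
    return Phi, Hbar

rng = np.random.default_rng(1)
N, M = 50, 10
G = rng.standard_normal((N, N))
A = G @ G.T                                   # s.p.d. stand-in for the kernel matrix
b = rng.standard_normal(N)

Phi, Hbar = arnoldi(A, b, M)
U, s, Vt = np.linalg.svd(Hbar)                # Hbar = U_M Sigma_M^2 V_M^T
Sbar = np.vstack([np.diag(s), np.zeros((1, M))])   # Sigma_M^2, shape (M+1) x M

calU = Phi @ U                                # approximate left singular vectors of A
calV = Phi[:, :M] @ Vt.T                      # approximate right singular vectors of A

assert np.allclose(A @ calV, calU @ Sbar, atol=1e-8)   # A V_M = U_M Sigma_M^2
sv_A = np.linalg.svd(A, compute_uv=False)
assert s[0] <= sv_A[0] * (1 + 1e-8)           # sigma^2_{M,1} cannot exceed ||A||_2
```

The first assertion is exact up to rounding, since A Φ_M = Φ_{M+1} H̄_M; the approximation of the leading singular values of A improves as M grows.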
Properties:
The sub-basis U^M has the following properties for each 1 ≤ M ≤ N:
- it is N_K(Ω)-orthonormal
- it is ℓ²(X)-orthogonal, with ‖u_j‖²_{ℓ²(X)} = σ²_{M,j}, 1 ≤ j ≤ M
- if M = N, it is the SVD basis U (Φ_M = I)

Using this basis we get, for all f ∈ N_K(Ω),

  s_f^0(x) = Σ_{j=1}^M [(f, u_j)_{ℓ²(X)} / σ²_{M,j}] u_j(x) = Σ_{j=1}^M (f, u_j)_K u_j(x), ∀ x ∈ Ω

(with P_{K,X}^{(M)}(x) and Λ̃_X as before).

Stopping rule
Stop when h_{M+1,M} ≈ σ²_{M,M} < 10^{−13}.

Figure: Gaussian kernel, ε = 1, square [−1, 1]², N = 200 equispaced points, f ∈ N_K(Ω) with f(x) = K(x, y_1) + 2K(x, y_2) − 2K(x, y_3) + 3K(x, y_4), y_1 = (0, −1.2), y_2 = (−0.4, 0.5), y_3 = (−0.4, 1.1), y_4 = (1.2, 1.3). Left: h_{M+1,M} and σ²_{M,M} vs. the singular values of A. Right: RMSE vs. M.
Comparison with the SVD basis
Figure: Gaussian kernel, ε = 1, disk, trigonometric-Gaussian points, f ∈ N_K(Ω). Left: computation time vs. N for the Lanczos and WSVD bases. Right: RMSE vs. N.
An example
Figure: IMQ kernel, ε = 1, cut disk, N = 1600 random points, M = 260, f(x, y) = exp(|x − y|) − 1.
Lebesgue Constant and Power Function
Figure: IMQ kernel, ε = 1, cut disk, N = 1600 random points, M = 260, f(x, y) = exp(|x − y|) − 1. Left: Lebesgue function, with Λ_X = 160.2017. Right: power function.
Further investigation is needed:
- a better stopping rule
- understanding the decay rate of P_{K,X}^{(M)}
- understanding the growth rate of Λ̃_X
- understanding how X and ε influence s_f^0
Thank you for your attention
R. K. Beatson, J. B. Cherrie, C. T. Mouat.
Fast fitting of radial basis functions: methods based on preconditioned GMRES iteration.
Adv. Comput. Math., 11: 253–270, 1999.

S. De Marchi, G. Santin.
A new stable basis for radial basis function interpolation.
J. Comput. Appl. Math., 253: 1–13, 2013.

G. Fasshauer, M. J. McCourt.
Stable evaluation of Gaussian radial basis function interpolants.
SIAM J. Sci. Comput., 34: A737–A762, 2012.

A. C. Faul, M. J. D. Powell.
Krylov subspace methods for radial basis function interpolation.
Chapman & Hall/CRC Res. Notes Math., 420: 115–141, 1999.

P. Novati, M. R. Russo.
A GCV based Arnoldi-Tikhonov regularization method.
Preprint, 2013.

M. Pazouki, R. Schaback.
Bases for kernel-based spaces.
J. Comput. Appl. Math., 236: 575–588, 2011.

M. Vianello, G. Da Fies.
Algebraic cubature on planar lenses and bubbles.
Dolomites Res. Notes Approx. DRNA 15, 7–12, 2012.