Caratheodory-Tchakaloff Least Squares
Federico Piazzon, Alvise Sommariva and Marco Vianello
Dip. Matematica "Tullio Levi-Civita", Univ. Padova, fpiazzon@math.unipd.it
DRWA17, Dolomites Research Week on Approximation 2017, Alba di Canazei September 3-8th 2017
Abstract
We discuss the Caratheodory-Tchakaloff (CATCH) subsampling method, implemented by Linear or Quadratic Programming, for the compression of multivariate discrete measures and polynomial Least Squares.
Discrete Tchakaloff Theorem
Theorem: Let $\mu$ be a positive finite measure with compact support in $\mathbb{R}^d$. Denote by $\mathbb{P}_n$ the space of (the traces on the support of $\mu$ of) the algebraic polynomials of total degree not larger than $n$. There exist $m \le N := \dim \mathbb{P}_n$, points $x_1, \dots, x_m \in \operatorname{supp}\mu$ and weights $(w_1, \dots, w_m) \in (\mathbb{R}^+)^m$ such that
\[
\int p \, d\mu = \sum_{i=1}^{m} p(x_i) \, w_i, \quad \forall p \in \mathbb{P}_n.
\]
Note that $\mu$ may itself be a finite-support measure, e.g., a quadrature rule. The proof relies on Caratheodory's theorem in convex geometry.
Approx. Tchakaloff Points
We can find an approximate solution to $V^t u - b = 0$ (where $V$ is the Vandermonde matrix at $X$ and $b$ is the moment vector) up to a moment residual $\varepsilon \ll 1$.
Then, for any datum $f$,
\[
\left| \sum_{i=1}^{M} f(x_i)\lambda_i - \sum_{i=1}^{m} f(x_i)w_i \right| \le C_n(\varepsilon)\, E_n(f) + \varepsilon \|f\|_{\ell^2_\lambda}.
\]
Here
\[
C_n(\varepsilon) := 2\bigl(\mu(X) + \varepsilon\sqrt{\mu(X)}\bigr), \qquad E_n(f) = \min_{p \in \mathbb{P}_n} \|f - p\|_{\ell^\infty(X)}.
\]
Compressed Quadrature
Assume $X := \{x_1, \dots, x_M\}$, $M > N$, and that $\mu := \sum_{i=1}^{M} \lambda_i \delta_{x_i}$ is positive.
Then, by Tchakaloff's theorem, we can find $m < M$ points of $X$ and $m$ positive weights $w_i$ such that (up to re-ordering the sum)
\[
\sum_{i=1}^{M} p(x_i)\lambda_i = \sum_{i=1}^{m} p(x_i)w_i, \quad \forall p \in \mathbb{P}_n.
\]
The compression ratio $M/m$ may be large! However, the proof of the theorem is not constructive.
Compression of LS
Consider the case $X = \{x_1, \dots, x_M\}$, $M > \dim \mathbb{P}_{2n}$, and $\lambda = (1, 1, \dots, 1)$.
Let $x_1, \dots, x_m$, $w_1, \dots, w_m$ be Tchakaloff points and weights of degree $2n$ extracted from $X$; then
\[
\|p\|^2_{\ell^2(X)} = \sum_{i=1}^{M} p(x_i)^2 = \sum_{i=1}^{m} p(x_i)^2 w_i =: \|p\|^2_{\ell^2_w}, \quad \forall p \in \mathbb{P}_n.
\]
Denote by $L_n$ the standard LS operator of degree $n$ and by $L^c_n$ the $w$-weighted least squares on the $\varepsilon$-approximated Tchakaloff points.
Error estimates:
\[
\|f - L_n f\|_{\ell^2(X)} \le \sqrt{M}\, E_n(f), \qquad
\|f - L^c_n f\|_{\ell^2(X)} \le \left( 1 + \sqrt{\frac{1 + \varepsilon\sqrt{M}}{1 - \varepsilon\sqrt{M}}} \right) \sqrt{M}\, E_n(f).
\]
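In practice, the $w$-weighted fit can be computed by scaling the rows of the Vandermonde system by $\sqrt{w_i}$ and solving an ordinary least squares problem. A minimal sketch in Python/NumPy (the univariate monomial basis and sample data are illustrative assumptions, not from the poster):

```python
import numpy as np

def weighted_ls_coeffs(x, f, w, n):
    """Degree-n polynomial least squares minimizing sum_i w_i (p(x_i) - f_i)^2,
    computed by scaling each row of the Vandermonde system by sqrt(w_i)."""
    V = np.vander(x, n + 1, increasing=True)   # columns: 1, x, ..., x^n
    s = np.sqrt(w)
    c, *_ = np.linalg.lstsq(V * s[:, None], f * s, rcond=None)
    return c

# With unit weights this reduces to the standard (unweighted) LS fit.
x = np.linspace(-1.0, 1.0, 50)
f = np.cos(3.0 * x)
c_full = weighted_ls_coeffs(x, f, np.ones_like(x), 5)
```

Running the same routine with the compressed weights $w_i$ on the extracted points then gives $L^c_n$.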
Linear Programming
Let $V_{i,j} = q_j(x_i)$ be the Vandermonde matrix at $X$ of a basis $\{q_1, \dots, q_N\}$ of $\mathbb{P}_n$. Consider the linear programming problem
\[
\min c^t u \quad \text{subject to} \quad V^t u = b, \; u \in (\mathbb{R}^+)^M. \qquad \text{(LP)}
\]
Here $c$ is chosen to be linearly independent of the rows of $V^t$; the feasible region is a polytope, and its vertices are sparse candidate solutions.
Standard approach → simplex method.
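A sketch of (LP) with SciPy's `linprog` (the point set, basis, and objective vector below are illustrative assumptions); a vertex solution has at most $N$ nonzero entries, which form the compressed support:

```python
import numpy as np
from scipy.optimize import linprog

n, M = 4, 60                                   # degree and starting cardinality
x = np.linspace(-1.0, 1.0, M)                  # illustrative point set X
V = np.vander(x, n + 1, increasing=True)       # M x N Vandermonde, basis 1, ..., x^n
b = V.T @ np.ones(M)                           # moments of the discrete measure
c = x ** (n + 1)                               # objective independent of the rows of V^t

res = linprog(c, A_eq=V.T, b_eq=b, bounds=(0, None), method="highs")
u = res.x                                      # sparse non-negative solution
support = np.flatnonzero(u > 1e-10)            # compressed nodes: indices into X
```

The simplex solver returns a basic feasible solution, so `support` has at most $N = n+1$ entries while matching all moments up to degree $n$.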
Quadratic Programming
As an alternative, we may seek a sparse, non-negative solution to the moment problem by minimizing the $\ell^2$ norm of the residual:
\[
\min \|V^t u - b\|_2 \quad \text{subject to} \quad u \in (\mathbb{R}^+)^M. \qquad \text{(QP)}
\]
Such a problem can be solved by the native Matlab function lsqnonneg, which implements a variant of the Lawson-Hanson active-set method.
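In Python, `scipy.optimize.nnls` plays the role of lsqnonneg (it is also of the Lawson-Hanson family). A minimal sketch with an illustrative univariate setup (an assumption, not the poster's experiment):

```python
import numpy as np
from scipy.optimize import nnls

n, M = 4, 60
x = np.linspace(-1.0, 1.0, M)                  # illustrative point set X, unit weights
V = np.vander(x, n + 1, increasing=True)       # M x N Vandermonde matrix
b = V.T @ np.ones(M)                           # moment vector

u, rnorm = nnls(V.T, b)                        # min ||V^t u - b||_2 with u >= 0
idx = np.flatnonzero(u)                        # Lawson-Hanson support: at most N points
nodes, weights = x[idx], u[idx]                # compressed rule (nodes, positive weights)
```

The compressed rule reproduces all moments up to degree $n$ with $m \le N = n+1 \ll M$ nodes, since the active-set method keeps at most $N$ linearly independent columns in the passive set.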
Compression of Polynomial Meshes
Admissible polynomial meshes are sequences $\{X_n\}$ of finite subsets of a given polynomial-determining compact set $K \subset \mathbb{R}^d$ satisfying
\[
\|p\|_K \le C_n \|p\|_{X_n}, \quad \forall p \in \mathbb{P}_n,
\]
where $C_n = O(n^\alpha)$ and $M_n := \operatorname{card}(X_n) = O(n^\beta)$. Least squares approximation of $f \in C(K)$ has the nearly optimal property
\[
\|L_{X_n}\| := \sup_{f \in C(K)} \|L_{X_n} f\|_K / \|f\|_K \le C_n \sqrt{M_n}.
\]
If $M_n > \dim \mathbb{P}_{2n}$, we can use Tchakaloff point extraction $T_{2n}$ as a thinning of $X_n$, and we get the norm estimate
\[
\|L_{T_{2n}}\| := \sup_{f \in C(K)} \|L_{T_{2n}} f\|_K / \|f\|_K \le C_n \left( \frac{M_n}{1 - \varepsilon\sqrt{M_n}} \right)^{1/2}.
\]
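As a concrete univariate instance (chosen here for illustration, not the poster's example): on $K = [-1,1]$ the Chebyshev-Lobatto grid of $2n+1$ points is an admissible mesh with $C_n \le \sqrt{2}$ by the Ehlich-Zeller inequality. The sketch below checks the inequality numerically on random polynomials:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
Xn = np.cos(np.pi * np.arange(2 * n + 1) / (2 * n))  # Chebyshev-Lobatto, 2n+1 points
K = np.linspace(-1.0, 1.0, 5000)                     # fine surrogate of K = [-1,1]

polyval = np.polynomial.polynomial.polyval
ratios = [
    np.max(np.abs(polyval(K, c))) / np.max(np.abs(polyval(Xn, c)))
    for c in rng.standard_normal((200, n + 1))       # random degree-n polynomials
]
max_ratio = max(ratios)                              # bounded by C_n = sqrt(2)
```

Here $M_n = 2n+1$ grows only linearly in $n$, so the thinning step matters mainly for multivariate meshes where $M_n = O(n^\beta)$ with $\beta \ge d$.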
Experiment: Pipe-Mania Domain
[Figures: the Pipe-Mania domain in $[-1,1]^2$; log-scale plots of starting vs. compressed cardinality and of the error amplification factor.]