

4. Learning Model Predictive Control for quadrotor autonomous and

4.6. Sampled safe set and terminal cost relaxation

4.6.5. Convex piecewise-linear fitting of the terminal cost function

At iteration ๐‘—, the sampled safe set ๐‘†๐‘†๐‘— contains the following states:

๐‘†๐‘†๐‘— = {๐‘ฟ0, ..., ๐‘ฟ๐‘—} = {๐’™00, ๐’™01, ..., ๐’™๐‘—๐‘‡

๐‘—} (4.51)

The terminal cost function Q^j(x), assuming (as in Note 4.5) that no state is shared by multiple trajectories (except x_S), gives, when applied to every state in SS^j:

๐‘„๐‘—(๐‘†๐‘†๐‘—) = {๐‘„๐‘—(๐’™00), ๐‘„๐‘—(๐’™01), ..., ๐‘„๐‘—(๐’™๐‘—๐‘‡

๐‘—)} =

= {๐ฝ[0,๐‘‡0

0](๐’™00), ๐ฝ[1,๐‘‡0

0](๐’™01), ..., ๐ฝ[๐‘‡๐‘—

๐‘—,๐‘‡๐‘—](๐’™๐‘—๐‘‡

๐‘—)} โ‰ก

โ‰ก {๐ฝ00, ๐ฝ10, ..., ๐ฝ๐‘‡๐‘—

๐‘—} โ‰ก

โ‰ก {๐‘„00, ๐‘„10, ..., ๐‘„๐‘—๐‘‡

๐‘—} (4.52)

We then consider the problem of fitting the obtained data:

(๐’™00, ๐‘„00), ..., (๐’™๐‘—๐‘‡

๐‘—, ๐‘„๐‘—๐‘‡

๐‘—) โ‰ก (๐’™1, ๐‘„1), ..., (๐’™๐‘š, ๐‘„๐‘š) โˆˆR๐‘›ร—R, ๐‘š = |๐‘†๐‘†๐‘—| (4.53) with a convex piecewise-linear function ๐‘“ โˆถ R๐‘› โ†’ R from some set โ„ฑ of candidate functions. With a least-squares fitting criterion, we obtain the problem:

min๐‘“โˆˆโ„ฑ

โˆš๐ฝ (๐‘“)

๐‘š , ๐ฝ (๐‘“) =

๐‘š

โˆ‘

๐‘–=1

(๐‘“(๐’™๐‘–) โˆ’ ๐‘„๐‘–)2 (4.54)

We call √(J(f)/m) the RMSE index (root-mean-square error) of the function f fitting the data. The convex piecewise-linear fitting problem (4.54) is to find the function f, from the given family F of convex piecewise-linear functions, that gives the best (smallest) RMSE value on the given data.

Our main interest is in the case when n (i.e. the state dimension) is relatively small, while m (i.e. the number of data points) can be relatively large (e.g. 10^4 or more). The methods described in the following, however, work for any values of n and m.

Several special cases of the convex piecewise-linear fitting problem (4.54) can be solved exactly.

When F is the set of affine functions, i.e. f(x) = a^T x + b, problem (4.54) reduces to an ordinary linear least-squares problem in the function parameters a ∈ R^n and b ∈ R, and can be solved analytically. This approach is called parametric convex piecewise-linear fitting.
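The affine special case can be sketched in a few lines of NumPy; the data below are synthetic and for illustration only, generated from a known affine function so that the fit can be checked:

```python
import numpy as np

# Synthetic data for illustration: m samples (x_i, Q_i) with x_i in R^n,
# generated from a known affine function.
rng = np.random.default_rng(0)
m, n = 100, 3
X = rng.normal(size=(m, n))
Q = X @ np.array([1.0, -2.0, 0.5]) + 3.0

# f(x) = a^T x + b: stack rows [x_i, 1] and solve ordinary least squares.
A = np.hstack([X, np.ones((m, 1))])
coef, *_ = np.linalg.lstsq(A, Q, rcond=None)
a, b = coef[:n], coef[n]

# RMSE index of the fitted function, as in (4.54)
rmse = np.sqrt(np.mean((A @ coef - Q) ** 2))
```

Since the data are exactly affine here, the fit recovers the generating parameters and the RMSE is numerically zero.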

A less trivial case is, conversely, non-parametric convex piecewise-linear fitting, in which F is the set of all convex piecewise-linear functions from R^n to R, with no other constraint on the form of f.

Of course, not all data can be fit well (i.e. with small RMSE) with a convex piecewise-linear function. For example, if the data are samples from a function that has a strong concave curvature, then no convex function can fit it well.

Max-affine functions

For our purposes, we will consider a parametric fitting problem, in which the candidate functions are parametrized by a finite-dimensional vector of coefficients α.

We choose as F the set of functions on R^n of the form:

$$ f(\mathbf{x}) = \max\left( \mathbf{a}_1^T \mathbf{x} + b_1, \dots, \mathbf{a}_k^T \mathbf{x} + b_k \right) \qquad (4.55) $$

We refer to a function of this form as a max-affine function with k terms. Such functions are parameterized by the coefficient vector:

$$ \boldsymbol{\alpha} = (\mathbf{a}_1, \dots, \mathbf{a}_k, b_1, \dots, b_k) \in \mathbb{R}^{k(n+1)} \qquad (4.56) $$

A max-affine function can be visualized as k hyperplanes in the R^n × R space, of which we take the upper envelope (which corresponds to the max operation).


Indeed, any convex piecewise-linear function on R^n can be expressed as a max-affine function for some k, so this form is in a sense universal.
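A max-affine function is straightforward to evaluate. As a minimal illustration (the helper name and data are ours, not from the thesis), the absolute value |x| = max(x, −x) is the simplest non-trivial max-affine function on R, with k = 2 terms:

```python
import numpy as np

def max_affine(x, a, b):
    """Evaluate f(x) = max_j (a_j^T x + b_j); a has shape (k, n), b has shape (k,)."""
    return float(np.max(a @ x + b))

# |x| = max(x, -x): two terms, a_1 = 1, a_2 = -1, b_1 = b_2 = 0
a = np.array([[1.0], [-1.0]])
b = np.zeros(2)
```

Any convex piecewise-linear function can be written this way for a suitable number of terms k.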

Our interest, however, is in the case when the number of terms k is relatively small. In this case the max-affine representation (4.55) is compact, in the sense that the number of parameters needed to describe f is much smaller than the number of parameters in the original data set (i.e. m(n + 1)). The methods described in the following, however, do not require k to be small.

With max-affine functions, the fitting problem (4.54) reduces to the nonlinear least-squares problem:

$$ \min_{\boldsymbol{\alpha}} \sqrt{\frac{J(\boldsymbol{\alpha})}{m}}, \qquad J(\boldsymbol{\alpha}) = \sum_{i=1}^{m} \left( \max_{j=1,\dots,k} \left( \mathbf{a}_j^T \mathbf{x}_i + b_j \right) - Q_i \right)^2 \qquad (4.57) $$

Fitting procedure

In this section, we present an algorithm that approximately solves the k-term max-affine fitting problem (4.57) [11].

The heuristic idea behind this algorithm is the following:

โ€ข create ๐‘˜ partitions of the data points {(๐’™1, ๐‘„1), ..., (๐’™๐‘š, ๐‘„๐‘š)};

โ€ข on each partition ๐‘— = 1, ..., ๐‘˜, solve analytically the linear least-square fitting problem associated to the ๐‘—-th affine functions composing ๐‘“, obtaining a first estimate of the coefficients ๐’‚๐‘— and ๐‘๐‘—;

โ€ข update each partition on the base of the current coefficients value;

โ€ข iterate the algorithm until the coefficients converge to a certain value.

The algorithm, therefore, alternates between partitioning the data and carrying out least-squares fits to update the coefficients.

Let ๐‘ƒ๐‘—(๐‘™) for ๐‘— = 1, ..., ๐‘˜, be a partition of the data indices {1, ..., ๐‘š} at the ๐‘™-th iteration of the algorithm:

๐‘ƒ๐‘—(๐‘™) โŠ† {1, ..., ๐‘š}

โ‹ƒ

๐‘—

๐‘ƒ๐‘—(๐‘™) = {1, ..., ๐‘š}, ๐‘ƒ๐‘–(๐‘™)โˆฉ ๐‘ƒ๐‘—(๐‘™) = โˆ… for ๐‘– โ‰  ๐‘— (4.58)

The generation of the initial partitions P_j^(0) will be analyzed in the next section.

Denoting with ๐’‚(๐‘™)๐‘— and ๐‘(๐‘™)๐‘— the values of the parameters at the ๐‘™-th iteration of the algorithm, we generate the next values, ๐’‚(๐‘™+1)๐‘— and ๐‘(๐‘™+1)๐‘— , from the current partition ๐‘ƒ๐‘—(๐‘™), as explained in the following paragraphs.

For each ๐‘— = 1, ..., ๐‘˜, we carry out a least-squares fit of ๐’‚๐‘‡๐‘—๐’™๐‘–+ ๐‘๐‘— to ๐‘„๐‘–, using only the data points with ๐‘– โˆˆ ๐‘ƒ๐‘—(๐‘™). In other words, we take ๐’‚(๐‘™+1)๐‘— and ๐‘(๐‘™+1)๐‘— as values of ๐’‚ and ๐‘ that minimize

$$ \sum_{i \in P_j^{(l)}} \left( \mathbf{a}^T \mathbf{x}_i + b - Q_i \right)^2 \qquad (4.59) $$

which is indeed the linear least-squares problem associated with the j-th affine function composing f, restricted to the data in the j-th partition.

This problem can be solved analytically, and the pair (a, b) that minimizes (4.59) is given by:

$$ \begin{pmatrix} \mathbf{a}_j^{(l+1)} \\ b_j^{(l+1)} \end{pmatrix} = \begin{pmatrix} \sum \mathbf{x}_i \mathbf{x}_i^T & \sum \mathbf{x}_i \\ \sum \mathbf{x}_i^T & |P_j^{(l)}| \end{pmatrix}^{-1} \begin{pmatrix} \sum Q_i \mathbf{x}_i \\ \sum Q_i \end{pmatrix} \qquad (4.60) $$

where the sums are over i ∈ P_j^(l).
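The update (4.60) can be sketched as follows; the helper name ls_update is ours, and the block matrix assembled below is exactly the normal-equations matrix of (4.59):

```python
import numpy as np

def ls_update(X, Q, idx):
    """Solve (4.60) for one partition: X is (m, n), Q is (m,),
    idx holds the data indices in P_j^(l)."""
    Xp, Qp = X[idx], Q[idx]
    n = X.shape[1]
    # Normal-equations block matrix [[sum x x^T, sum x], [sum x^T, |P_j|]]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = Xp.T @ Xp
    M[:n, n] = M[n, :n] = Xp.sum(axis=0)
    M[n, n] = len(Qp)
    rhs = np.concatenate([Xp.T @ Qp, [Qp.sum()]])
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n]  # a_j^(l+1), b_j^(l+1)
```

On data drawn exactly from an affine function, the update recovers that function's coefficients.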

Using the new values of the coefficients, we update the partition to obtain P_j^(l+1), by assigning i to P_j^(l+1) if:

$$ f^{(l+1)}(\mathbf{x}_i) = \max_{s=1,\dots,k} \left( \mathbf{a}_s^{(l+1)T} \mathbf{x}_i + b_s^{(l+1)} \right) = \mathbf{a}_j^{(l+1)T} \mathbf{x}_i + b_j^{(l+1)} \qquad (4.61) $$

This means that P_j^(l+1) is the set of indices i for which the affine function a_j^(l+1)T x_i + b_j^(l+1) attains the maximum at the data point x_i.

During this step of the algorithm, one or more of the sets P_j^(l+1) may become empty. The simplest approach is to drop these empty sets from the partition and continue with a smaller value of k.
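The reassignment step (4.61), together with the removal of empty sets, can be sketched as follows (update_partitions is our illustrative name; ties in the max are broken toward the lowest index):

```python
import numpy as np

def update_partitions(X, a, b):
    """Assign each index i to the j whose affine term attains the max in (4.61);
    empty partitions are dropped, implicitly reducing k."""
    vals = X @ a.T + b                      # (m, k) matrix of a_j^T x_i + b_j
    labels = np.argmax(vals, axis=1)        # ties go to the lowest j
    parts = [np.flatnonzero(labels == j) for j in range(a.shape[0])]
    keep = [j for j, p in enumerate(parts) if len(p) > 0]
    return [parts[j] for j in keep], a[keep], b[keep]
```

In the small example below, the third affine term (with b = −10) is never the maximum, so its partition comes back empty and k shrinks from 3 to 2.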

This algorithm is iterated until one of the following conditions is met:

• Convergence has been reached, which occurs when, equivalently:

∘ the coefficient vector α^(l) has converged to a steady-state value: α^(l+1) = α^(l), ∀l ≥ l′;

∘ the partitions P_j^(l) no longer change after a certain iteration: P_j^(l+1) = P_j^(l), ∀l ≥ l′, ∀j ∈ {1, ..., k}.

• The maximum number of iterations has been reached: l > N_iter.

Generation of the initial partitions

The choice of the initial partitions P_j^(0) is a crucial step, since it directly affects the quality of the fit given by the max-affine interpolating function generated by the algorithm. A bad selection of the initial partitions may cause the algorithm not to converge, or to converge to a piecewise-linear approximation that fits the data poorly (even when the data exhibit clear convex curvature).

We provide in the following a general approach to generate initial partitions that ensure, in most cases, a good interpolation of the data points [11].

Each partition ๐‘ƒ๐‘—(0), with ๐‘— = 1, ..., ๐‘˜, is generated starting from ๐‘˜ randomly chosen points ๐’‘1, ..., ๐’‘๐‘˜ โˆˆR๐‘›, which are called seed points.

Then, ๐‘ƒ๐‘—(0) is defined as the set of indices ๐‘– associated to the data points ๐’™๐‘– that are closest to ๐’‘๐‘—:

๐‘ƒ๐‘—0 = {๐‘– โˆถ |๐’™๐‘–โˆ’ ๐’‘๐‘—| < |๐’™๐‘–โˆ’ ๐’‘๐‘ |, โˆ€๐‘  โ‰  ๐‘—}, ๐‘— = 1, ..., ๐‘˜ (4.62)

Specifically, the indices in the partitions P_j^(0) define the Voronoi sets of the data points x_i associated with the seed points p_j.

With this method, we generate initial partitions that tessellate the whole subset of R^n containing the data points {x_i}, i = 1, ..., m. Such partitions are very well suited for max-affine interpolation, since max-affine functions are the envelope of k intersecting hyperplanes in R^n × R and the fitting procedure locally interpolates the data in each partition with one of the planes composing the function.

The Voronoi sets associated to the initial partitions, which tessellate the set of data points, are depicted in Figure 4.6, showing the case of 20 randomly generated partitions.

The seed points ๐’‘๐‘— should be randomly generated according to some probability distri-bution that matches the shape of the data points ๐’™๐‘–. A possible choice is to generate them from a normal distribution with the sample mean ๐๐‘ฅ and the sample covariance

๐œฎ๐‘ฅ of the data points:

๐๐‘ฅ = 1 ๐‘š

๐‘š

โˆ‘

๐‘–=1

๐’™๐‘–, ๐œฎ๐‘ฅ = 1 ๐‘š

๐‘š

โˆ‘

๐‘–=1

๐’™๐‘–๐’™๐‘‡๐‘– (4.63)

Note 4.7 Since the seed points are generated from a normal distribution, it may happen that some of them fall outside the set of data points, far enough from it that the related partition is empty. As in the fitting algorithm, the simplest approach is to drop these empty partitions and continue with a smaller value of k.
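Putting (4.62), (4.63) and Note 4.7 together, the initialization can be sketched as follows (the function name is ours; note that Σ_x, as written in (4.63), is the second-moment matrix of the data):

```python
import numpy as np

def initial_partitions(X, k, rng):
    """Voronoi-style initial partitions (4.62) from Gaussian seed points."""
    m = X.shape[0]
    mu = X.mean(axis=0)                      # sample mean, as in (4.63)
    Sigma = (X.T @ X) / m                    # (1/m) sum x_i x_i^T, as in (4.63)
    seeds = rng.multivariate_normal(mu, Sigma, size=k)
    dist = np.linalg.norm(X[:, None, :] - seeds[None, :, :], axis=2)  # (m, k)
    labels = np.argmin(dist, axis=1)         # nearest seed for each data point
    parts = [np.flatnonzero(labels == j) for j in range(k)]
    return [p for p in parts if len(p) > 0]  # drop empty partitions (Note 4.7)
```

Whatever the seed placement, the returned partitions always cover all data indices and are pairwise disjoint, as required by (4.58).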

This method for generating initial partitions can be specialized into a procedure that applies only to 1D data (i.e. n = 1) and in most cases provides better results.

Instead of randomly generating the initial partitions P_j^(0) from seed points, the partitions are chosen as equally sized intervals on the ordered set (i.e. from lowest to highest value) of data points {x_1, ..., x_m} ⊂ R:

๐‘ƒ๐‘—(0) = {1 + (๐‘— โˆ’ 1) โŒŠ๐‘š

๐‘˜ โŒ‹ , ..., ๐‘— โŒŠ๐‘š

๐‘˜โŒ‹} , for ๐‘— = 1, ..., ๐‘˜ โˆ’ 1 ๐‘ƒ๐‘˜(0) = {1 + ๐‘˜ โŒŠ๐‘š

๐‘˜ โŒ‹ , ..., ๐‘š} (4.64)

With this approach we generate, as in the general method, initial partitions that cover the whole subset of R containing the data points {x_i}, i = 1, ..., m. This method applies only to 1D data for the trivial reason that R is much easier to subdivide into bounded and connected subsets than R^n, for which randomly generating the partitions is more convenient.

Fitting algorithm

The full algorithm for convex piecewise-linear fitting can be written as follows:

Algorithm 4.2 Convex piecewise-linear fitting

Inputs: {(x_1, Q_1), ..., (x_m, Q_m)}; k; N_iter (max n. of iterations)
Outputs: α

(1) Compute the sample mean μ_x and sample covariance Σ_x of the data points {x_i}, i = 1, ..., m, as in (4.63)
(2) Generate the seed points p_j, j = 1, ..., k, from the distribution N(μ_x, Σ_x)
(3) Compute the initial partitions P_j^(0), j = 1, ..., k, as in (4.62)
(4) Remove empty partitions from P_j^(0), j = 1, ..., k, and update the value of k
(5) For l = 0, ..., N_iter − 1:
    (5.1) Compute a_j^(l+1) and b_j^(l+1), j = 1, ..., k, as in (4.60)
    (5.2) Update the partitions to P_j^(l+1), j = 1, ..., k, as in (4.61)
    (5.3) Remove empty partitions from P_j^(l+1), j = 1, ..., k, and update the value of k
    (5.4) If P_j^(l+1) = P_j^(l), return:
          α = (a_1^(l+1), ..., a_k^(l+1), b_1^(l+1), ..., b_k^(l+1))
          otherwise, continue
(6) Return:
    α = (a_1^(N_iter), ..., a_k^(N_iter), b_1^(N_iter), ..., b_k^(N_iter))

Examples

In this section, we will show two examples of application of the convex piecewise-linear fitting algorithm.

We will use it to fit a set of 1D and 2D data points, obtained from two convex functions, specifically:

$$ f_1 : \mathbb{R} \to \mathbb{R}, \quad f_1(x) = y = 0.1 x^2 \qquad (4.65a) $$
$$ f_2 : \mathbb{R}^2 \to \mathbb{R}, \quad f_2(\mathbf{x}) = y = 0.1 (x_1^2 + x_2^2) \qquad (4.65b) $$

For function 1, the data points are {(x_i, y_i)} = {(x_i, f_1(x_i))}, with {x_i} = [−5 : 0.5 : 5].

The algorithm is set up with k = 10, N_iter = 10; the initial partitions are generated by seeding the MATLAB random number generator with the 'default' seed.

The algorithm returns α after only 1 iteration:

α = ((a_j)_j, (b_j)_j)
(a_j)_j^T = (0.35, 0.95, 0.65, 0.1, −0.8, −0.3)
(b_j)_j^T = (−0.3, −2.25, −1.02, −0.00833, −1.55, −0.175)

(a_j)_j and (b_j)_j have 6 < k elements, so 4 empty partitions have been generated and removed.

Figure 4.4 depicts the max-affine interpolating function with respect to data points:

Figure 4.4. Convex piecewise-linear fit of a 1D convex function. Data points are depicted in red, while the max-affine interpolating function is depicted in cyan.

We see that the interpolating function is the envelope of 6 lines, with slope a_j and intercept b_j, for j = 1, ..., 6.

For function 2, the data points are {(x_i, y_i)} = {(x_i, f_2(x_i))}, with {x_i} = [−5 : 0.5 : 5] × [−5 : 0.5 : 5].

The algorithm is set up with k = 20, N_iter = 50; the initial partitions are generated by seeding the MATLAB random number generator with the 'default' seed.

The algorithm returns α after 16 iterations:

α = ((a_j)_j, (b_j)_j)

with the coefficients of the 16 planes given by:

 j     a_j^T                b_j
 1   ( 0.688,  0.246)    −1.21
 2   ( 0.75,  −0.578)    −2.15
 3   (−0.702, −0.299)    −1.31
 4   ( 0.439,  0.827)    −2.08
 5   ( 0.196,  0.342)    −0.29
 6   (−0.757,  0.79)     −2.87
 7   (−0.351,  0.354)    −0.506
 8   ( 0.31,  −0.246)    −0.295
 9   (−0.863,  0.208)    −1.86
10   ( 0.0964, −0.777)   −1.37
11   (−0.65,  −0.8)      −2.47
12   (−0.147,  0.829)    −1.66
13   (−0.161, −0.18)     −0.0165
14   ( 0.837, −0.159)    −1.72
15   ( 0.908,  0.724)    −3.27
16   ( 0.75,  −0.9)      −3.34

(a_j)_j and (b_j)_j have 16 < k elements, so 4 empty partitions have been generated and removed.

We see that the interpolating function is the envelope of 16 planes, with coefficients (a_j^T, b_j), for j = 1, ..., 16.

Figure 4.5 depicts the interpolating max-affine function with respect to data points:

Figure 4.5. Convex piecewise-linear fit of a 2D convex function. Data points are depicted in red, while the max-affine interpolating function is depicted in cyan.

Figure 4.6 depicts instead the Voronoi sets associated to the initial partitions:

Figure 4.6. Voronoi sets of the initial partitions. Seed points are depicted in red, while the boundaries of each set are marked with black lines.

As pointed out in Note 4.7, in Figure 4.6 we see only 16 of the 20 generated partitions; this means that 4 seed points fell outside the subset of data points in R^2 (i.e. [−5, 5] × [−5, 5]), far enough to leave their partitions empty. Indeed, we have seen that (a_j)_j and (b_j)_j have only 16 elements, meaning that the 4 empty partitions have been removed from P_j^(0), j = 1, ..., 20.
