Academic year: 2021

Master's Degree in Computer Science

Master's Thesis

Reducing fragmentation in

parameterized surfaces

Candidate:

Andrea Maggiordomo

Advisors:

Dr. Paolo Cignoni

Dr. Matteo Dellepiane


Abstract

This thesis faces the problem of improving the existing texture atlas of a polygonal model. We target textures generated by 3D reconstruction pipelines, which usually exhibit high fragmentation in the texture data, and try to reduce the number of discontinuities without losing existing information. We do so by computing a new mesh parameterization that is less fragmented, has shorter boundaries, and exhibits minimal distortion compared to the original texture map. Our approach relies on available parameterization algorithms, and adapts them in an original way to minimize deformation of the newly computed flattening with respect to the shape of the existing one. We report experimental results obtained by applying our method to a collection of publicly available models generated by 3D reconstruction software.


Acknowledgments

First of all, I want to thank my advisors Paolo Cignoni and Matteo Dellepiane for supervising the development of this thesis. I thank them for their availability, for their patience in steering my work when I was making the wrong choices despite my stubbornness, and for always having welcomed my contributions nonetheless.

I thank my family for their affection, and for their support and encouragement; if I decided to pursue university studies, it is largely thanks to them.


Contents

List of Figures
List of Tables
1 Introduction
2 Texture mapping
3 Overview of parameterization methods
  3.1 Preliminary concepts
  3.2 Parameterization of triangle meshes
  3.3 Barycentric mappings
    3.3.1 Tutte's embedding
    3.3.2 General barycentric mappings
    3.3.3 Drawbacks of fixed boundary methods
  3.4 Methods based on energy minimization
    3.4.1 Conformal maps
    3.4.2 Area preservation
    3.4.3 Optimization strategies
    3.4.4 The local-global approach
    3.4.5 Scalable Locally Injective Mappings
4 Problem description
  4.1 Possible approaches
  4.2 Challenges
5 Method description and insights
  5.1 Overview
  5.2 Aggregation phase
    5.2.1 Procedure specification
    5.2.2 The weight function
  5.3 Parameterization phase
    5.3.1 Energy formulation
    5.3.2 Parameterizing using linear methods
  5.4 Parameterizing with iterative optimizers
    5.4.1 Energy choice
    5.4.2 Quasi-Newton iterations
    5.4.3 Performance concerns
    5.4.4 Backtracking
    5.4.5 Summary
  5.5 Generation of the new texture data
6 Experimental results
  6.1 Metrics
  6.2 Test configuration
  6.3 Texture map improvement
  6.4 Distortion metrics
  6.5 Image quality
  6.6 Comparison with standard energy minimization
7 Conclusions
  7.1 Future work

List of Figures

3.1 Notation for angle coefficients
5.1 Texture map optimization applied to the Nefertiti model
5.2 Charts parameterization with conformal mappings
5.3 Charts parameterization with signal preservation
6.1 Detail preservation in the AAS model
6.2 Area distortion comparison
6.3 Angle distortion comparison
6.4 Reducing texture fragmentation on the Vase model
6.5 Reducing texture fragmentation on the Miniature model
6.6 Reducing texture fragmentation on the Ortli model
6.7 Reducing texture fragmentation on the Kokura model

List of Tables

6.1 Models used in the experiments
6.2 Texture optimization with wUV weighting
6.3 Texture optimization with w3D weighting
6.4 Distortion values for wUV weighting
6.5 Distortion values for w3D weighting


Introduction

3D reconstruction techniques generate digitizations of real-world objects by capturing information about their geometry and color, and have many practical applications such as design, 3D printing, and cultural heritage preservation.

For a long time, reconstruction has been a challenging problem, and accurate results were obtained only in controlled environments with dedicated hardware such as laser scanners to generate volumetric data; such reconstruction techniques are active, in the sense that they interact directly with the object.

In recent years, passive techniques that only require image data to reconstruct objects have become more and more accurate, and have been the enabling factor in the development of free and commercial 3D reconstruction software. Multi-view stereo reconstruction algorithms only require a collection of images and the camera parameters of each image to reconstruct objects; camera parameters can be estimated using structure-from-motion techniques (which also only require a sequence of scene images), leading to full software solutions.


To accurately reproduce an object, it is important to capture not only its geometric details but also its surface appearance. The standard technique to map color information to a polygonal model in computer graphics is texture mapping, which requires defining a correspondence between polygons and image areas; when a surface point is rendered on screen, the matching image data is mapped to the surface and determines its color. The correspondence between surface points (which are three dimensional) and image points (which are two dimensional) provides a flattening of the polygonal mesh, and computing it is known as mesh parameterization.

Inside a reconstruction pipeline, texture maps are usually generated by projecting registered images to surface patches while trying to match each area with the image that provides the most detail. This approach can yield results of outstanding quality when the model is displayed at high resolutions, because the texture data is optimized to maximize the level of detail, but it typically generates a highly fragmented texture map with lots of small image fragments projected independently to the model surface. Fragmented textures have drawbacks: they are difficult for artists to work with, and discontinuities in the texture map can lead to visible artifacts when the model is rendered on screen if not carefully handled.

Project goals

What we set out to do in this thesis is define a method that is able to improve the fragmented texture data of reconstructed 3D models by reducing the number of discontinuities without losing color detail. We do so by computing a new mesh parameterization that is less fragmented, has shorter boundaries, and exhibits minimal distortion compared to the original texture map. Our approach relies on available parameterization algorithms, and adapts them in an original way to minimize deformation of the newly computed flattening with respect to the shape of the existing one.

Thesis overview

This thesis is structured as follows. Chapter 2 briefly reviews texture mapping and some of the challenges that emerge when textured polygonal models are rendered on the screen. Chapter 3 presents an overview of mesh parameterization methods, with particular focus on what ended up being relevant in the context of this thesis. Chapter 4 describes in more detail the aims, goals, challenges and possible shortcomings of the project. Chapter 5 presents our approach, and experimental results are reported in Chapter 6, while Chapter 7 concludes this thesis and proposes possible future improvements.


Texture mapping

Texture mapping is a rendering technique that improves the appearance of a polygonal model by mapping detail onto its surface. The most common application of texture mapping, and the motivation behind its introduction, is to map color information to a model in order to approximate the material properties of its surface and how it reflects light. For the remainder of this thesis we will focus on two-dimensional texture mapping, where the texture data is a matrix of (color) values.

To project information onto the model, it is necessary to define a mapping between the model surface and the texture data. This correspondence between two-dimensional image points and three-dimensional surface points induces a flattening of the model where each triangle in model space is mapped to a triangle in texture (or parameter) space. Texture coordinates are assigned to the vertices of the model, and values in the interior of the triangles are obtained by interpolation. The process of computing the flattening of a mesh model is known as mesh parameterization, and is only applicable to models that are topological disks. If that is not the case, then the model must be segmented (Sander et al. [23], Lévy et al. [17]) or cut (Gu et al. [9], Sheffer and Hart [26]) before parameterizing it.


Texture coordinates identify points in texture space, and for convenience they are assumed to lie in the range [0, 1]², so that they do not depend on the size of the texture data. (This is not always true, and applications can define rules to handle coordinates that fall outside this range, but this is not relevant to the current discussion.) Because of this, the resolution of the data mapped to a surface triangle depends both on the size of the texture data on which the map is defined, and on the area that the parameter space triangle occupies in texture space.

Since the texture data is discrete and the parameter space is continuous, rules must be defined to determine how texels are indexed by the texture coordinates (texture filtering). Nearest-neighbor filtering simply maps to the closest texture element (texel), but typically results in jagged and discontinuous appearance; for this reason bilinear filtering, where the color value is the linear interpolation of the four closest texels, is the preferred choice.
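As a concrete illustration, bilinear filtering can be sketched in a few lines. This is a generic sketch, not tied to any specific API; the half-texel center convention and clamp-to-edge border handling are assumptions, since real texturing systems expose several addressing modes:

```python
import numpy as np

def bilinear_sample(texture, u, v):
    """Sample a texture (H x W x C array) at normalized coordinates
    (u, v) in [0, 1]^2 with bilinear filtering: the result is the linear
    interpolation of the four texels closest to the sample point."""
    h, w = texture.shape[:2]
    # Map normalized coordinates to continuous texel space, assuming
    # texel centers at half-integer positions (a common convention).
    x = u * w - 0.5
    y = v * h - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    # Clamp indices to the valid range (one way to handle the borders).
    def texel(i, j):
        return texture[np.clip(j, 0, h - 1), np.clip(i, 0, w - 1)]
    top = (1 - fx) * texel(x0, y0) + fx * texel(x0 + 1, y0)
    bottom = (1 - fx) * texel(x0, y0 + 1) + fx * texel(x0 + 1, y0 + 1)
    return (1 - fy) * top + fy * bottom

# Sampling halfway between a black and a white column gives mid-gray.
tex = np.array([[[0.0], [1.0]], [[0.0], [1.0]]])
mid = bilinear_sample(tex, 0.5, 0.5)
```

Nearest-neighbor filtering would instead round to the single closest texel, which is what produces the jagged appearance mentioned above.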

Texture mapping is a powerful technique that greatly improves the rendering quality of polygonal models, but poor texture maps can produce noticeable artifacts.

Seams happen when two adjacent portions of the surface get mapped to different regions in texture space; this results in a "jump" in the surface color because the texture data does not perfectly align across the discontinuity. Seam artifacts are usually quite noticeable and, especially when bilinear filtering is used, can cause the background color to bleed onto the surface. This second issue can be resolved by padding the texture boundaries.

Overview of parameterization methods

Mesh parameterization is the problem of finding a piecewise linear map from the surface of a three dimensional triangle mesh to a planar domain. Parameterizations are an important tool in computer graphics since they enable common and powerful techniques such as texture mapping and remeshing. Despite being a well-studied problem, significant advancements are still being made even in very recent times: this chapter presents an overview of parameterization techniques, from early works to some of the more significant recent advancements.

3.1 Preliminary concepts

A continuous injective function φ that maps values from a domain Ω ⊆ R² to values in R³ is called a parameterization, and its image S describes a surface

S = \operatorname{Im} \varphi = \{\varphi(u, v) : (u, v) \in \Omega\}. \tag{3.1}


The set Ω is usually referred to as parameter space, and surfaces that can be described in this manner are called parametric surfaces.

Since a parameterization is a function that maps a planar surface to a curved one, it is interesting to study how much distortion it introduces around a point – that is, how much areas and angles change around a parameter point (u, v) when it is mapped onto the surface point φ(u, v). Consider the explicit expression of φ as a vector-valued function:

\varphi(u, v) = \begin{pmatrix} x_\varphi(u, v) & y_\varphi(u, v) & z_\varphi(u, v) \end{pmatrix}^T. \tag{3.2}

Given a displacement (Δu, Δv), the function φ can be linearly approximated with the first-order Taylor expansion

\varphi(u + \Delta u, v + \Delta v) \approx \varphi(u, v) + J_\varphi(u, v) \begin{pmatrix} \Delta u \\ \Delta v \end{pmatrix} \tag{3.3}

where J_φ(u, v) is the 3×2 Jacobian matrix of φ whose i-th row is the transposed gradient of the i-th component of φ:

J_\varphi(u, v) = \begin{pmatrix} \partial x_\varphi / \partial u & \partial x_\varphi / \partial v \\ \partial y_\varphi / \partial u & \partial y_\varphi / \partial v \\ \partial z_\varphi / \partial u & \partial z_\varphi / \partial v \end{pmatrix}. \tag{3.4}

The linear approximation given by Taylor's formula is the tangent plane at the point u = (u, v), spanned by the columns of J_φ(u). The columns of the Jacobian matrix contain the component-wise partial derivatives of φ with respect to the u and v variables, and are commonly denoted φ_u and φ_v. These vectors are extremely important because they provide a frame of reference for the tangent space at the point φ(u): in particular, we can use these vectors to move in the neighborhood of φ(u) when we take an infinitesimal step du = (du, dv)^T in parameter space

\varphi(u + du) \approx \varphi(u) + J_\varphi(u)\, du = \varphi(u) + du\, \varphi_u + dv\, \varphi_v. \tag{3.5}

When we use this linear approximation to map a circle in the neighborhood of u to the tangent space, it gets distorted into an ellipse according to the magnitudes of the tangent vectors φ_u and φ_v and the angle they form. Note that this distortion is a property of the function φ, since it is induced by the values of the Jacobian matrix at the point; this means that a surface can admit several parameterizations that exhibit different local distortion around the same point, and mesh parameterization methods aim at finding parameterizations with very low distortion, as this allows values from the parameter space to be mapped uniformly across the surface.

The distortion of a parameterization in the neighborhood of a point can be characterized by the singular values of its Jacobian matrix at the point. The singular value decomposition (SVD) of J_φ is

J_\varphi = U \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \\ 0 & 0 \end{pmatrix} V^T \tag{3.6}

where σ₁, σ₂ are the singular values of the Jacobian and U and V are 3×3 and 2×2 unitary matrices.

This decomposition explains what happens to a vector such as the displacement du in the linear approximation mentioned earlier when it is mapped to the tangent space. The vector du first gets rotated by V^T, then it gets scaled, and finally it is rotated again by the multiplication by U; in the process the vector is also projected to R³. Therefore the local distortion of a parameterization is characterized only by the singular values of its Jacobian matrix at the point.

A parameterization is said to be locally conformal if σ₁ = σ₂. In this case the map scales vectors uniformly across both dimensions of the parameter space, leaving angles unchanged.

A parameterization is said to be locally equiareal if σ₁σ₂ = 1. An equiareal mapping preserves areas locally, but not the ratios between distances (for example, it could map a square to a rectangle of the same area).

A parameterization that is both conformal and equiareal is said to be isometric, or an isometry. Isometric mappings introduce no deformation when the parameter domain is mapped to R³, but they only exist for the restricted class of developable surfaces.
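These definitions can be checked numerically for the linear map between two planar triangles. The sketch below is illustrative (not from the thesis); each triangle is given as a 3×2 array of vertex positions:

```python
import numpy as np

def map_singular_values(T, t):
    """Singular values (sigma_1 >= sigma_2) of the linear map sending
    planar triangle T to planar triangle t (each a 3x2 vertex array).
    The Jacobian J of a linear map is constant and satisfies J @ E_T = E_t,
    where the columns of E_T, E_t are two edge vectors of each triangle."""
    E_T = np.column_stack([T[1] - T[0], T[2] - T[0]])
    E_t = np.column_stack([t[1] - t[0], t[2] - t[0]])
    J = E_t @ np.linalg.inv(E_T)
    return np.linalg.svd(J, compute_uv=False)

T = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
# Uniform scaling by 2: sigma_1 == sigma_2 == 2, so the map is conformal
# but not equiareal (sigma_1 * sigma_2 == 4, not 1).
s1, s2 = map_singular_values(T, 2.0 * T)
```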

3.2 Parameterization of triangle meshes

A triangle mesh is a collection of faces, each described by three vertices; each pair of vertices that forms the side of a face is an edge of the mesh. Therefore a mesh M = (V, E, F) is defined by the set of its vertices V, the set of its edges E, and the set of its faces F. The set of vertices is further partitioned into the set of boundary vertices V_B and internal vertices V_I, where a boundary vertex is a vertex incident on an edge that belongs to only one face. Given a vertex v_i, the set of its neighbors is denoted by N_i, and δ_i = |N_i| is its degree. Each vertex occupies a position in space, and the set of vertex positions determines the overall shape of the mesh, as opposed to the connectivity of the mesh, which is determined by the adjacency relationships between the vertices, edges, and faces. I will use p = (p_x, p_y, p_z) to denote points in R³ and u = (u, v) to denote parameter space coordinates (points in R²).

Parameterizing a mesh requires assigning to each vertex v_i ∈ V a position u_i in parameter space such that those coordinates define a flattening of the original surface, which I will call the parameter-space surface. A flattening only exists if the surface has disk-like topology, and I will assume this to be the case. If a mesh does not have disk-like topology, techniques such as mesh segmentation, where the surface is partitioned into disk-like patches, or mesh cutting can be employed to tackle the issue.

The parameterization of a mesh is therefore computed as a piecewise linear function, where each piece maps a face to its corresponding parameter triangle. To be suitable for applications like texture mapping, it is usually desirable that the function be injective, because otherwise a single texel can end up being mapped to two different surface points. A distinction that is commonly made is whether the parameterization is locally or globally injective. Local injectivity can be lost when a parameter triangle is inverted and overlaps with its neighbors; however, even if a map is locally injective, global overlaps can happen when the boundary in parameter space self-intersects. Unfortunately, while many parameterization methods can guarantee local injectivity, almost none can deal with global overlaps unless significant trade-offs are made, either in running time or in the quality of the result.
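Detecting local injectivity violations amounts to testing the orientation of each parameter triangle: a flipped (clockwise) triangle signals an inversion. A minimal sketch, with a small hypothetical mesh as input:

```python
import numpy as np

def signed_area(a, b, c):
    """Twice the signed area of the 2D triangle (a, b, c); negative when
    the triangle has clockwise orientation, i.e. it is inverted."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def inverted_faces(uv, faces):
    """Indices of faces whose parameter triangle is flipped or degenerate.
    A map can be locally injective only if this list is empty."""
    return [i for i, (a, b, c) in enumerate(faces)
            if signed_area(uv[a], uv[b], uv[c]) <= 0.0]

# Two triangles sharing edge (0, 1); both are counter-clockwise here.
uv = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0], [0.5, -1.0]])
faces = [(0, 1, 2), (1, 0, 3)]
```

Moving vertex 3 across the shared edge (for example to (0.5, 0.5)) flips the second triangle, which `inverted_faces` then reports.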

3.3 Barycentric mappings

The parameterization methods presented in this section share the feature of computing parameter coordinates by solving a linear system of equations. The linear systems that emerge in these methods are sparse and can be solved efficiently, but they have the drawback of producing parameterizations with significant distortion.

3.3.1 Tutte's embedding

A simple parameterization method comes from the work of Tutte [33, 32] and emerged in the context of graph drawing, specifically to draw planar graphs without edge intersections. Tutte's approach was to map the boundary of the graph (or, in this case, the boundary vertices of the mesh) to the outline of a convex shape, for example a circle, and to constrain the position of each internal vertex to be the average of the positions of its neighbors. The procedure is as follows: first, the boundary vertices are traversed sequentially and counter-clockwise according to the orientation of the mesh surface, and each vertex v_k ∈ V_B is assigned a corresponding point ū_k on the chosen convex outline, thus fixing the positions of the boundary vertices:

u_k = \bar{u}_k, \quad v_k \in V_B. \tag{3.7}

Then the positions of the internal vertices are dictated by the following relations:

u_i = \sum_{v_j \in N_i} \frac{1}{\delta_i} u_j, \quad v_i \in V_I \tag{3.8}

which can be rearranged as

u_i - \sum_{v_j \in N_i} \frac{1}{\delta_i} u_j = 0, \quad v_i \in V_I. \tag{3.9}

These equations can be arranged in a sparse linear system where each row i has only δ_i + 1 non-zero coefficients for internal vertices, and a single coefficient (the diagonal entry) for boundary vertices. The system is solved independently for the u_i and v_i coordinates of the parameter space points u_i = (u_i, v_i).

The parameterization produced by this method is provably injective (Tutte [33], Floater [5]), both globally and locally. The weights 1/δ_i in equation (3.9) are a particular case of barycentric coordinates (coordinates that identify a point in the convex hull of a set of boundary points as a combination of their positions), and therefore Tutte called the mappings produced by this method barycentric mappings.
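Tutte's construction translates almost directly into a linear solve. The sketch below uses a dense solver for clarity (a real implementation would use sparse linear algebra), with a tiny hypothetical mesh as input:

```python
import numpy as np

def tutte_embedding(n, edges, boundary):
    """Tutte's barycentric mapping with uniform weights.
    n: number of vertices; edges: list of (i, j) index pairs;
    boundary: ordered boundary vertex indices, mapped to the unit circle."""
    neighbors = [[] for _ in range(n)]
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    A = np.zeros((n, n))
    b = np.zeros((n, 2))
    on_boundary = set(boundary)
    # Fix boundary vertices on the unit circle, in order (eq. 3.7).
    for k, v in enumerate(boundary):
        theta = 2.0 * np.pi * k / len(boundary)
        A[v, v] = 1.0
        b[v] = (np.cos(theta), np.sin(theta))
    # Internal vertices: average of their neighbors (eq. 3.9).
    for v in range(n):
        if v in on_boundary:
            continue
        A[v, v] = 1.0
        for w in neighbors[v]:
            A[v, w] = -1.0 / len(neighbors[v])
    return np.linalg.solve(A, b)  # row i holds (u_i, v_i)

# Four boundary vertices and one internal vertex connected to all of them.
uv = tutte_embedding(5, [(4, 0), (4, 1), (4, 2), (4, 3),
                         (0, 1), (1, 2), (2, 3), (3, 0)], [0, 1, 2, 3])
```

Here the internal vertex lands at the average of the four circle points, i.e. the origin.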

3.3.2 General barycentric mappings

Floater [5] generalized Tutte's approach by observing that different barycentric mappings can be obtained as the minimizers of the following quadratic function (where {u_1, …, u_n} are the parameter positions of all the internal vertices and the w_ij are weight coefficients):

F(u_1, \dots, u_n) = \sum_{(v_i, v_j) \in E} w_{ij} \|u_i - u_j\|^2. \tag{3.10}

The function F is quadratic, and the global minimum is attained under the condition that ∂F/∂u_i = 0 for all i such that v_i ∈ V_I, that is

\frac{\partial F}{\partial u_i} = 2 \sum_{v_j \in N_i} w_{ij} (u_i - u_j) = 0, \quad v_i \in V_I \tag{3.11}

which is equivalent to (3.9) if w_ij = 1 (which explains why Tutte's embedding is also referred to as uniform barycentric mappings).

Assuming a coordinate assignment to boundary vertices, Floater proved that a solution to the linear system always exists. Furthermore, if the weights wij are all positive and the boundary is mapped to a convex outline then the parameterization is guaranteed to be locally injective and no triangle flips can occur.

Choosing edge weights that take into account the actual geometry of the surface leads to less distorted parameterizations. Using the notation of figure 3.1, some of the weight coefficients that have been proposed are:

Wachspress coordinates Introduced by Wachspress [34], and later used by Desbrun et al. [3] in the computation of their discrete authalic mappings:

w_{ij} = \frac{\cot \gamma_{ij} + \cot \delta_{ij}}{\|p_i - p_j\|^2}. \tag{3.12}

[Figure 3.1: Notation for angle coefficients.]

Discrete harmonic coordinates Emerged in the work of Pinkall and Polthier [21], and were later used by Eck et al. [4] and Desbrun et al. [3]. These coefficients produce a harmonic mapping, a map that minimizes the Dirichlet energy (the next section will expand on this):

w_{ij} = \cot \alpha_{ij} + \cot \beta_{ij}. \tag{3.13}

Mean value coordinates Proposed by Floater [6]:

w_{ij} = \frac{\tan(\delta_{ji}/2) + \tan(\gamma_{ji}/2)}{\|p_i - p_j\|}. \tag{3.14}

Note that since Wachspress and discrete harmonic coordinates can assume negative weights, they no longer guarantee that the produced parameterization is injective.

3.3.3 Drawbacks of fixed boundary methods

The biggest drawback of this family of methods is that they require the boundary to be fixed in advance to a convex shape, which typically causes extreme distortion in the parameterization of the faces close to it.

Even though the boundary can be mapped to any arbitrary shape, there is no way to know a priori what would be a good outline for the parameterization. Early methods involved either choosing the positions by hand and then moving the vertices around until a satisfactory result was obtained, or using a "virtual boundary" (Lee et al. [15]), which consists in padding the mesh geometry with a strip of extra faces around the boundary in order to let them absorb part of the distortion. The former approach obviously does not scale to large and complex boundaries, while the latter partially mitigates the issue but does not completely solve it.

The methods described in the next sections do not require the boundary to be fixed in advance and compute it naturally during the parameterization process.

3.4 Methods based on energy minimization

A very large portion of the parameterization methods proposed in the literature compute maps that minimize an energy function, which is a function of the local distortion the map induces.

Recall from section 3.1 that the singular values of a mapping locally characterize its distortion. Energy functions are defined in terms of the singular values of the parameterization in such a way that their minima satisfy some desired property (for example, a combination of conformality and area preservation). The energy of the map φ is then the integral of the local energy over the parameter domain:

E(\varphi) = \int_\Omega E(\sigma_1(\omega), \sigma_2(\omega)) \, d\omega. \tag{3.15}

For triangle meshes, it is common to minimize the energy of the inverse function φ⁻¹ instead (that is, the map from the surface to the parameter space); this is justified by the fact that the parameter space coordinates are the actual unknowns of the problem. The two formulations are related by the fact that the singular values of the inverse map are the inverses of the singular values of the original map.

Since a mesh parameterization G is piecewise linear, the distortion is constant per face, and the energy can be computed as the sum of the energies E(f) of the linear functions that map each face f to its parameter triangle t_f, integrated over the face area. I already mentioned that distortion is not impacted by rigid transformations such as rotations and translations. This suggests isometrically projecting the surface triangle of a face f into R² and studying the deformation of the map from the projected triangle T_f to the parameter triangle t_f. The Jacobian of this map, which has the same singular values as the original map, is a square 2×2 matrix.

The overall energy formulation on triangle meshes becomes

E(G) = \sum_{f \in F} E(f) = \sum_{f \in F} A(T_f) \, E(\sigma_1(f), \sigma_2(f)). \tag{3.16}

Methods based on energy minimization all follow these common steps:

1. Define an energy function as outlined above.

2. Find a formulation of the energy that depends on the coordinates of the mesh vertices.

3. Minimize the energy by finding a suitable assignment of parameter coordinates to each vertex v.
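These steps can be sketched end-to-end for a generic per-face energy E(σ₁, σ₂). The `flatten` helper (isometric projection of a surface triangle into the plane) and the example energy are illustrative assumptions, not the thesis's own implementation:

```python
import numpy as np

def flatten(p0, p1, p2):
    """Isometrically embed a 3D triangle in the plane: p0 at the origin,
    the edge p0-p1 along the positive x-axis."""
    e1, e2 = p1 - p0, p2 - p0
    x = np.linalg.norm(e1)
    u = e1 / x
    proj = np.dot(e2, u)
    h = np.linalg.norm(e2 - proj * u)
    return np.array([[0.0, 0.0], [x, 0.0], [proj, h]])

def total_energy(verts, uv, faces, E):
    """Eq. (3.16): sum over faces of A(T_f) * E(sigma_1, sigma_2), where
    sigma_1, sigma_2 are the singular values of the 2x2 Jacobian mapping
    the flattened surface triangle T_f to the parameter triangle t_f."""
    total = 0.0
    for i, j, k in faces:
        T = flatten(verts[i], verts[j], verts[k])
        t = np.array([uv[i], uv[j], uv[k]])
        E_T = np.column_stack([T[1] - T[0], T[2] - T[0]])
        E_t = np.column_stack([t[1] - t[0], t[2] - t[0]])
        s1, s2 = np.linalg.svd(E_t @ np.linalg.inv(E_T), compute_uv=False)
        total += 0.5 * abs(np.linalg.det(E_T)) * E(s1, s2)
    return total

# A flat triangle mapped to itself is an isometry (sigma_1 = sigma_2 = 1),
# so an energy measuring deviation from 1 evaluates to zero.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
uv = verts[:, :2]
e = total_energy(verts, uv, [(0, 1, 2)],
                 lambda s1, s2: (s1 - 1) ** 2 + (s2 - 1) ** 2)
```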

The following sections describe some methods proposed in the literature which have either been particularly relevant in the field or which I ended up using during my work.

3.4.1 Conformal maps

An early adoption of the energy minimization approach is due to Pinkall and Polthier [21] in the context of computing discrete minimal surfaces; it was also used later by Eck et al. [4], and employs a quadratic energy that can be easily minimized. The idea is to minimize the Dirichlet energy, which for a continuous parameterization φ is

E_D(\varphi) = \frac{1}{2} \int_\Omega \|\varphi_u\|^2 + \|\varphi_v\|^2 \, du \, dv \tag{3.17}

because it provides an upper bound on the surface area, and it is minimized when φ is conformal. As shown in Desbrun et al. [3], the following holds:

A(S) = \int_\Omega \|\varphi_u \times \varphi_v\| \, du \, dv \le \int_\Omega \|\varphi_u\| \|\varphi_v\| \, du \, dv \le \frac{1}{2} \int_\Omega \|\varphi_u\|^2 + \|\varphi_v\|^2 \, du \, dv \tag{3.18}

where the first inequality becomes an equality if and only if the tangent vectors φ_u and φ_v are orthogonal, and the second becomes an equality if and only if ‖φ_u‖² = ‖φ_v‖². Note that (using the SVD of J_φ and the cyclic property of the trace operator)

\|\varphi_u\|^2 + \|\varphi_v\|^2 = \operatorname{tr}\left(J_\varphi^T J_\varphi\right) = \sigma_1^2 + \sigma_2^2, \tag{3.19}

therefore the energy is minimized if the map is conformal. A map that minimizes the Dirichlet energy is called a harmonic map.

A very convenient formulation of the Dirichlet energy exists for a linear map between two triangles [21]. Denoting by α_i the angle at vertex i of f in model space, the energy can be expressed as

E_D(f) = \frac{1}{2 A(T_f)} \sum_{i=0}^{2} \cot \alpha_i \, \|u_{i+1} - u_i\|^2 \tag{3.20}

where the indices are taken modulo 3. This formulation yields a quadratic function that can be rearranged to match the structure of (3.10) with w_ij = ½(cot α_ij + cot β_ij) and minimized to compute a mesh parameterization, but the boundary still needs to be fixed in advance.

The method of Desbrun et al. [3] computes conformal parameterizations with natural boundaries; they constrain boundary vertices based on the following conditions:

\nabla E_D(f) - \nabla A(t_f) = 0, \tag{3.21}

which are explained by observing that, according to (3.18), these are the conditions under which the gradient of the conformal energy E_C(f) = E_D(f) − A(t_f) vanishes. With these additional constraints, the resulting linear system can be solved by simply fixing the positions of two arbitrary vertices in parameter space to avoid degenerate solutions.

The LSCM method of Lévy et al. [17] computes conformal maps by explicitly minimizing the conformal energy, formulated as the distance of the tangent vectors of the map from orthogonality (the ⊥ operator rotates a vector 90 degrees counter-clockwise):

E_C(\varphi) = \frac{1}{2} \int_\Omega \|\varphi_u^\perp - \varphi_v\|^2 \, du \, dv. \tag{3.22}

Despite the different formulations, conformal parameterizations with natural boundaries and LSCM are equivalent and produce the same results ([14, 19]).

3.4.2 Area preservation

Conformal maps have the attractive characteristic of being computed as the solution of a simple sparse linear system, but their energy only depends on the angles of the surface triangles, and minimizing it makes no attempt at area preservation. As already mentioned, this becomes an issue especially when the parameterization is used in the context of texture mapping, because it means that the texture data is not sampled uniformly, leading to areas where the detail is very low.

It is important to mention that area distortion in the context of texture mapping is evaluated up to the scale factor of the model because, as explained earlier, once the parameterization is packed all the texture coordinates lie in the unit square [0, 1]², while the area of the surface depends on the size of the model being parameterized. Nonetheless, energies that penalize area distortion force the relative areas of the parameter triangles to match those of the surface triangles; in a parameterization that also exhibits low angle distortion, this ensures that the texture is sampled uniformly across the surface.

The energies described in the following sections take area distortion into account and tend to produce parameterizations better suited to texture mapping; however, they are no longer quadratic, and iterative optimization schemes need to be employed in order to minimize them (I review those in section 3.4.3).

Most Isometric Parameterizations

Hormann and Greiner [12] introduced the MIPS energy, defined as

E_{MIPS}(f) = \frac{\sigma_1^2 + \sigma_2^2}{\sigma_1 \sigma_2}. \tag{3.23}

Observing that the numerator is simply twice the Dirichlet energy already mentioned, while the denominator is the ratio A(t_f)/A(T_f), the MIPS energy can be interpreted as the Dirichlet energy per parameter area [12].

To see that σ₁σ₂ = A(t_f)/A(T_f), let g_f be the linear function that maps T_f to t_f and decompose it as g_f = g_t ∘ g_T⁻¹, where g_T maps the "standard" triangle T_e, whose sides are the standard basis vectors e₁ and e₂, to T_f, and g_t maps T_e to t_f. Then

\sigma_1 \sigma_2 = |\det J_f| = |\det g_t \, \det g_T^{-1}| = \frac{A(t_f)}{A(T_f)}, \tag{3.24}

where I used J_f to denote the Jacobian matrix of g_f.

The MIPS energy does not directly optimize for area distortion, but only penalizes degenerate triangles. In fact, this energy was defined precisely to avoid the shrinking that is typical of conformal methods. An interesting property of the MIPS energy is that it does not change when the inverse map from parameter space to model space is considered [14].

Combined energy

Degener et al. [2] combined the MIPS energy with a term that penalizes actual area distortion. Area distortion penalization is enforced by the term

E_A(f) = \sigma_1 \sigma_2 + \frac{1}{\sigma_1 \sigma_2}, \tag{3.25}

which is minimized when the ratio between the parameter and surface triangle areas is one. The combined energy they use is

E_{Comb}(f) = E_{MIPS}(f) \, E_A(f)^{\theta}, \tag{3.26}

where the parameter θ balances angle and area preservation. (The authors reported numerical issues when optimizing with values of θ larger than two.)

Stretch-based distortion

Sander et al. [23] defined stretch-based metrics that penalize parameterizations that alter the aspect ratios of the triangles. The terminology here can be confusing because they evaluated distortion from parameter space to model space, and therefore they call stretch what I referred to as the shrinking of the parameter triangles. Their L² stretch is similar to the MIPS energy and penalizes deformations across all directions:

L^2(f) = \sqrt{\frac{1}{2} \, \frac{\sigma_1^2 + \sigma_2^2}{\sigma_1^2 \sigma_2^2}}. \tag{3.27}

The L^∞ stretch instead measures the maximum deformation across the principal directions:

L^\infty(f) = \frac{1}{\sigma_2}. \tag{3.28}

Since the L^∞ stretch only penalizes shrinking, but not triangles that become too large, Sorkine et al. [30] use the following version instead:

D(f) = \max\left(\sigma_1, \frac{1}{\sigma_2}\right). \tag{3.29}

Another interesting energy formulation comes from Smith and Schaefer [28], and has been referred to as the symmetric Dirichlet energy [22]:

E_{SD}(f) = \left(1 + \sigma_1^{-2} \sigma_2^{-2}\right) \left(\sigma_1^2 + \sigma_2^2\right). \tag{3.30}

A similar version of this energy was derived by Schreiner et al. [24] in order to obtain a version of the L² stretch that is symmetric with respect to the inverse map. They used it to compute inter-surface mappings, weighting each term according to triangle areas.
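A per-face evaluation of the symmetric Dirichlet energy, given the 2×2 Jacobian of the face map, can be sketched as follows (illustrative; note the minimum value 4 is attained exactly by isometries, σ₁ = σ₂ = 1):

```python
import numpy as np

def symmetric_dirichlet(J):
    """Symmetric Dirichlet energy of a 2x2 per-face Jacobian, eq. (3.30):
    (1 + sigma_1^-2 sigma_2^-2)(sigma_1^2 + sigma_2^2). It penalizes both
    stretching and shrinking, and blows up as the triangle degenerates."""
    s1, s2 = np.linalg.svd(J, compute_uv=False)
    return (1.0 + 1.0 / (s1 * s2) ** 2) * (s1 ** 2 + s2 ** 2)

e_iso = symmetric_dirichlet(np.eye(2))               # isometry: minimum, 4
e_squash = symmetric_dirichlet(np.diag([1.0, 0.1]))  # near-degenerate: large
```

The growth toward infinity as σ₂ → 0 is what makes this energy a natural flip barrier for the injectivity-preserving line search discussed below in section 3.4.3.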

3.4.3 Optimization strategies

The minimization of area preserving energies is a challenging optimization problem. None of the presented energies has a quadratic formulation, and iterative methods from nonlinear optimization theory must be used in order to minimize them.


Unfortunately, this implies that area preserving parameterization methods are much less efficient than quadratic methods, and performance becomes a critical aspect in the feasibility of their application. For this reason, many early methods minimized nonlinear energies using hierarchical solvers. Hormann et al. [13] introduced the concept, based on the progressive mesh data structure of Hoppe [11]; the approach was later adopted by other authors (Sander et al. [23], Desbrun et al. [3], Sheffer et al. [27]).

Hierarchical solvers compute parameterizations with a multi-resolution framework where the mesh is processed at different levels of detail to speed up the overall process. In conjunction with this, these methods often adopt a descent strategy where a single vertex at a time is optimized. This approach yields fast iterations but is very slow to converge. Fu et al. [7] advocate the use of inexact block descent optimization and report faster convergence times as opposed to performing exact line search. More recently, several works have emerged that use conventional optimization methods applied to the full set of variables (Schüller et al. [25], Aigerman et al. [1], Smith and Schaefer [28], Rabinovich et al. [22]).

Injectivity preserving line search

Something to be aware of when minimizing area preserving energies using line search is that, since they asymptotically approach infinity as the parameter triangle gets smaller, the step size computed during the line search should be small enough to prevent accidental triangle inversions. Given a descent direction and a face f, let d0, d1 and d2 be the two-dimensional vectors along which displacing the parameter space coordinates u0, u1, u2 of f causes a decrease in the energy value. Smith and Schaefer [28] observe that the maximum flip-preventing step size can be computed using as upper bound the smallest positive root of the following quadratic equation in the unknown step size t

    det( (u1 + t d1) − (u0 + t d0) , (u2 + t d2) − (u0 + t d0) ) = 0,    (3.31)

where the left hand side (the determinant of the 2×2 matrix whose columns are the displaced edge vectors) is twice the signed area of the displaced triangle. Performing iterative descent with this flip-preventing line search is guaranteed to keep the parameterization locally injective during the minimization process.
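The smallest positive root of equation (3.31) can be found by expanding the determinant into a quadratic in t. A minimal sketch of this computation follows; the function name and the handling of the degenerate (linear) case are my own choices, not taken from the paper.

```python
import math

def max_flip_free_step(u, d):
    """Largest step t along directions d = (d0, d1, d2) that keeps the
    parameter triangle u = (u0, u1, u2) from inverting (equation 3.31).
    u, d: sequences of three (x, y) pairs. Returns math.inf if no positive root."""
    # Edge vectors of the triangle and of the displacement.
    e1 = (u[1][0] - u[0][0], u[1][1] - u[0][1])
    e2 = (u[2][0] - u[0][0], u[2][1] - u[0][1])
    f1 = (d[1][0] - d[0][0], d[1][1] - d[0][1])
    f2 = (d[2][0] - d[0][0], d[2][1] - d[0][1])
    # det(e1 + t*f1, e2 + t*f2) expands to a*t^2 + b*t + c.
    a = f1[0] * f2[1] - f1[1] * f2[0]
    b = e1[0] * f2[1] + f1[0] * e2[1] - e1[1] * f2[0] - f1[1] * e2[0]
    c = e1[0] * e2[1] - e1[1] * e2[0]
    roots = []
    if abs(a) < 1e-15:                    # degenerate: the equation is linear
        if abs(b) > 1e-15:
            roots.append(-c / b)
    else:
        disc = b * b - 4.0 * a * c
        if disc >= 0.0:
            s = math.sqrt(disc)
            roots.extend([(-b - s) / (2 * a), (-b + s) / (2 * a)])
    positive = [t for t in roots if t > 0.0]
    return min(positive) if positive else math.inf
```

In a line search one would cap the tentative step at some fraction (e.g. 0.9) of this bound for every face and take the minimum over the whole mesh.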

3.4.4 The local-global approach

A different approach, proposed by Liu et al. [18] and inspired by a paper from Sorkine and Alexa [29], computes mesh parameterizations using the so-called local-global approach. The local-global approach leverages the invariance of isometric and conformal maps under specific classes of linear transformations. Specifically, isometric mappings are rigid transformations from the surface triangles to the parameter triangles and are therefore invariant under rotations; conformal mappings on the other hand are invariant not only under rotations but also under uniform scalings (similarity transformations).

Let L = {Lf}_{f∈F} be a set of linear transformations under which the targeted energy is invariant; the idea is then to minimize the following quantity

    ELG(F, L) = Σ_{f∈F} A(Tf) ELG(f, Lf) = Σ_{f∈F} A(Tf) ‖Jf − Lf‖_F^2,    (3.32)

where ‖·‖_F is the Frobenius norm.

Since the Jacobian matrix Jf encodes the linear transformation of Tf to the parameter triangle tf, the expression above is a measure of the distance of that transformation to the matrix Lf assigned to the face. The matrix Lf


was chosen from the class of matrices that map Tf without distortion, so the idea is to minimize the energy in order to “push” the parameterization of each face closer to the relative transformation. However, the set of transformations L can also be freely chosen in order to match the Jacobians, therefore the minimization involves two distinct sets of variables: the parameter space coordinates, which determine the values of each Jacobian matrix Jf, and the corresponding linear transformations Lf.

Starting from an initial configuration of the vertices in parameter space, the local-global approach minimizes the energy ELG using an iterative scheme that involves a local step and a global step. At each iteration the local step keeps the Jacobian matrices fixed, and finds for each face f the corresponding Lf in the chosen class that best matches Jf; then, a global step minimizes the energy by computing a new parameter space configuration while the local transformations are kept fixed.

The Lf matrix that is closest to the corresponding Jacobian is computed using the SVD decomposition of Jf. In general, it can be shown that given a matrix Jf = UΣV^T, the orthogonal matrix Q that is closest to Jf with respect to the Frobenius norm is Q = UV^T [10]. However, Q cannot be used directly as Lf if it is a reflection. This is easily detected by checking if the determinant of UV^T is negative, in which case the sign of the last column of U is changed to make sure that Lf = UV^T is indeed a rotation [31]. If scalings are also allowed, then the authors choose Lf = U ((σ1 + σ2)/2 · I) V^T.
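The local step described above can be sketched in a few lines with NumPy. This is a generic illustration of the construction, not the authors' code; the similarity variant assumes det(J) > 0.

```python
import numpy as np

def closest_rotation(J):
    """Rotation closest to the 2x2 Jacobian J in the Frobenius norm,
    with the reflection fix applied to the last column of U."""
    U, sigma, Vt = np.linalg.svd(J)
    if np.linalg.det(U @ Vt) < 0.0:   # U V^T is a reflection:
        U[:, -1] *= -1.0              # flip the sign of the last column of U
    return U @ Vt

def closest_similarity(J):
    """Best-fitting similarity: the closest rotation scaled by the
    average singular value (assumes det(J) > 0)."""
    sigma = np.linalg.svd(J, compute_uv=False)
    return 0.5 * (sigma[0] + sigma[1]) * closest_rotation(J)
```

Even for a reflection input such as diag(1, −1) the sign fix guarantees that the returned matrix is a proper rotation (determinant +1).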

After the linear transformations L are computed, the global step minimizes the energy (3.32) solving for the unknown parameter space coordinates. The authors reformulate the local energy term as

    ELG(f, Lf) = (1/2) Σ_{i=0..2} cot(αi) ‖(u_{i+1} − u_i) − Lf(p_{i+1} − p_i)‖^2,    (3.33)

which is a quadratic expression, and the overall energy can again be minimized by solving a sparse linear system with two fixed vertex positions to ensure the matrix is full rank.

If similarity transformations are allowed, it turns out that the energy is equivalent to the conformal energies of section 3.4.1 and can be minimized by solving directly for the parameter coordinate vector u; the authors called these parameterizations As-Similar-As-Possible (ASAP).

When only rigid transformations are allowed, the formulation for both sets of variables is no longer quadratic and the method must be applied iteratively. This is necessary because once the parameter coordinates change after the global step, the previously computed local transformations might not be optimal anymore. The parameterizations computed in this manner were labeled As-Rigid-As-Possible (ARAP).

The ARAP method computes in reasonable time parameterizations that have much lower area distortion compared to simple conformal methods, but can still generate inverted triangles during the global step.

3.4.5 Scalable Locally Injective Mappings

While the minimization of area-preserving energies produces parameterizations of the highest quality, it involves solving optimization problems that are extremely challenging. The number of iterations required to optimize those energies is usually extremely large, leading to very long running times; this is one of the reasons for which hierarchical schemes were first introduced. More recent works started to tackle those problems with conventional methods from nonlinear optimization theory, but performance remains a huge issue.

A very recent work, “Scalable Locally Injective Mappings” by Rabinovich et al. [22], is the first to explicitly tackle the issue of optimizing nonlinear energies on large scale models. This is a descent method that is able to


converge in a very small number of iterations thanks to the quality of the search directions it produces, and can be adapted to minimize several different area-preserving energies.

The SLIM descent direction is computed by solving a quadratic minimization subproblem inspired by the local-global method described earlier. Consider a single mesh face f and a generic distortion energy defined as a function of the singular values of Jf (as in equation (3.16))

    E(f) = A(Tf) E(Jf) = A(Tf) E(σ1, σ2).    (3.34)

The idea behind the SLIM method is to define a quadratic proxy function that matches the gradient of E(Jf). Let Jf^k be the Jacobian matrix of f at iteration k, let Rf^k be the closest rotation matrix to the Jacobian as computed in the local step of the ARAP method, and assume that for each f there exists a weight matrix Wf^k such that the proxy function

    Pf^k(Jf) = ‖Wf^k (Jf − Rf^k)‖_F^2    (3.35)

matches the gradient of E at the current iteration, that is

    ∇E(Jf^k) = ∇Pf^k(Jf^k).    (3.36)

Furthermore, let u^(k−1) be the vector of variables that represents the mesh configuration in parameter space at iteration k, and let ũ^k be the minimizer of the quadratic energy obtained by replacing each local term in equation (3.16) with the corresponding quadratic proxy. Then the search direction d_SLIM^k of the SLIM method is computed as

    d_SLIM^k = ũ^k − u^(k−1).    (3.37)

Assuming the proxy energy is well defined, this is guaranteed to be a descent direction for E at the current parameter coordinates thanks to the matching gradients property (3.36).


It turns out that it is always possible to find a suitable set of weight matrices {Wf^k}_{f∈F} for many area-preserving energies, although there are some technicalities involved. The authors were able to provide a general construction for the set of separably strictly convex energies, that is, energies that are strictly convex when considered as one-dimensional functions of each singular value independently.

Once the descent direction has been determined, the new solution u^k is computed with a common line search procedure like any other descent method, computing an initial step size that guarantees local injectivity with the technique described in section 3.4.3.

Descent performed with the SLIM search direction converges much faster than with other available methods. The iterations are generally slow because they involve solving a large (albeit sparse) linear system, but on average a few hundred iterations are enough to parameterize even complex meshes. The authors of the original paper report running times in the order of seconds even for meshes with millions of faces, although their implementation exploits a state-of-the-art parallel linear solver for the proxy energy minimization.


Problem description

Models obtained from 3D reconstruction pipelines accurately reproduce their real world counterpart by relying on texture mapping, and incredible detail can be achieved thanks to the high resolution pictures provided by modern digital cameras even when objects are reconstructed using passive image-based methods [8].

When texture maps are generated from registered images (that is, images paired with information about the position within the scene of the camera that took them), the parameterization process is usually driven by metrics different from the ones used by the methods reviewed in chapter 3. In particular, these methods typically try to match surface patches (or charts) according to how much detail their projection to the image planes can capture (Lempitsky and Ivanov [16], Waechter et al. [35]), while making little to no effort to avoid discontinuities in the resulting parameterization; in fact, they favor splitting charts if this leads to more accurate results. Although this makes sense for reconstruction algorithms, as their goal is to produce models that are as accurate as possible, the generated textures are usually highly fragmented, with hundreds or thousands of small charts in the final texture atlas.


Fragmentation in textures can be an issue for a number of reasons. The most obvious one is that manipulating the texture using standard image editing software becomes next to impossible, because patches that are close together on the surface end up being spread apart in image space; however, this is less of an issue with 3D painting systems that allow drawing directly on the model. Furthermore, as already mentioned in chapter 2, texture seams lead to rendering artifacts when filtering is used, and the usual technique of padding the texture boundaries mitigates the issue but wastes space that could be used to accommodate additional signal information. Finally, fragmented textures can sometimes cause issues when mipmapping is used to reduce the aliasing that occurs when distant models are rendered using high resolution textures. Mipmapping involves resizing the texture data, and makes discontinuities at the seams more apparent.

The aim of this thesis was to investigate strategies to reduce the fragmentation in the texture data of models obtained from reconstruction without impairing the level of detail.

An explicit goal was trying to make the texture suitable for editing with image editing software. This would be useful because it would allow artists to correct visual artifacts coming from the source images. For example, different lighting in the source images can cause the surface color to vary when it should instead be even, or shadows in the original scene can end up being textured and mapped to the model, creating confusing results when the model is later rendered with virtual lighting.

A secondary goal was to evaluate whether the result would improve the original texture by either allowing it to be compressed more to save space (and bandwidth in web-based applications), or by improving the texture occupancy and increasing the resolution of the data mapped to the surface. Our method computes a new parameterization optimized for the preservation of the initial signal allocation. This is achieved by merging together


adjacent charts, and computing for each of them an individual parameterization that minimizes deformation with respect to the shape of the original charts in parameter space.

4.1 Possible approaches

It is immediately clear that this problem can be approached at two different stages of the reconstruction pipeline: either when the texture is generated from the registered images, or after the object has been reconstructed and textured, as a post-processing step.

In the first case there is the considerable advantage of having the source images available, and a method working at this level could be aimed at finding a trade-off between image quality and texture fragmentation. At one end of the spectrum it would parameterize the mesh optimizing for geometric distortion, disregarding how the parameterized triangles align with the registered images, while at the other end it would try to maximize the detail stored in the texture. However, such an approach inevitably lacks flexibility: once a model is generated the parameterization is fixed, and if a different trade-off is desired the model needs to be reconstructed again; furthermore, nothing can be done if the source data is not available (as is often the case when models are downloaded from online repositories).

The second strategy yields a more flexible solution not only because it can be used when the reconstruction data is no longer available, but also because it makes no assumption on the nature of the mesh parameterization. If the input of the method only consists of the parameterized mesh and the texture files, then such a method can be applied to improve fragmented parameterizations from models of any kind, regardless of how they are generated.


The strategy developed in this thesis follows the second approach, and tries to define a procedure that improves the parameterization of reconstructed models in an automatic way, working only with the geometry and texture data. Note that although user feedback can be desirable (especially if the optimization is driven by image-editing needs), having an automatic procedure that can process models in batches can be extremely convenient if it needs to be applied to large collections.

4.2 Challenges

In the ideal case, the output should be indistinguishable from the input model when rendered, with a significant reduction in the number of surface patches (possibly not more than a few tens). The surface patches should also have simple outlines in texture space, as this would allow them to be better packed together in the final image. Moreover, regular regions with simple boundaries are easier to work with for artists.

These features are obviously desirable, but pose significant challenges. First of all, when the model is processed the mesh parameterization changes and so does its distortion (regardless of the metric used to measure it). This is inevitable, and it means that the resolution of the texture data mapped to the model surface will change on a per-triangle basis. Loss of detail can happen not only when the new parameter triangle of a face is smaller than the original, but also when its angles change. Therefore, the method must take into account how the original data was allocated and cannot simply produce a new and completely unrelated parameterization. Another issue is that it is very common for reconstructed textures to contain regions with complex and jagged boundaries. It is easy to see how this can happen when the parameterization optimizes for image plane projection over the whole mesh: the discontinuities in the texture map are


the tipping points at which changing the projection of faces that lie on the boundaries increases the cost function. Since the geometry of the objects is generally noisy, this results in irregular borders in the parameterization and consequently in the texture image. Irregularities also arise due to occlusion for objects that have creases and complex shapes. As the existing seams can already be quite noticeable, because often the color across two patches doesn’t perfectly blend, the procedure should try to reduce original discontinuities without introducing new ones.


Method description and insights

This chapter describes our strategy to reduce the fragmentation of a parameterized mesh. The working assumption is that although the existing parameterization is fragmented, the way it maps data to the surface should be preserved. In particular, we want to preserve the resolution of each patch relative to the original texture in order not to lose information. Increasing the resolution of the whole texture map (by mapping more texels while keeping the same texture size) can be acceptable (or even desirable) only if this does not come at the expense of losing some of the initial data, which can happen if a region is upscaled while another is shrunk.

5.1 Overview

Our method takes as input a parameterized mesh with the corresponding texture data and generates a new parameterization by working in two phases. In the first phase distinct charts from the original parameterization are grouped together. In the second phase, the new charts are parameterized by minimizing distortion relative to the shape of the original


parameter-space surface. Doing so “stitches” together the original charts while keeping their shape as close as possible to the one they had in the initial texture; the resulting parameterization can then be used to generate a new texture file. Throughout the rest of the document I will refer to the first phase as the aggregation phase and to the second phase as the parameterization phase.

The final parameterization should not contain overlaps (see section 3.2) as this implies losing data and, more importantly, renders the new texture useless. A common strategy when overlaps are detected is to cut a chart into smaller ones and parameterize those independently [17]. In our case we employ a backtracking procedure that splits the aggregate chart into its initial components and then creates two new aggregates that are parameterized independently. This backtracking strategy is guaranteed to end (potentially splitting charts until they revert to the original configuration) and only generates seams that were already present in the initial parameterization.

5.2 Aggregation phase

The aggregation phase is formulated as a graph problem, and before detailing how it works I must introduce the definition of the parameterization graph of a mesh. The parameterization graph of a mesh is the graph that represents the mesh segmentation (the partition of the mesh faces into connected components) induced by the existing charts, together with their adjacency relation with respect to the surface topology.

Let GM = (C, E) be the parameterization graph of the mesh M. The set C is the set of existing charts that partition the face set F; the set E is the set of adjacencies among charts, such that the unordered pair {C1, C2} belongs to E if and only if there exists at least one face f1 ∈ C1 that shares a mesh edge with some face f2 ∈ C2.

5.2.1 Procedure specification

The aggregation phase generates a new mesh segmentation by growing existing charts while it performs a sequence of edge contractions on GM.

An edge contraction is a graph operation which removes an edge by merging together its two endpoints; in our case the result of a contraction is that the faces belonging to the two charts that are the endpoints of the graph edge get grouped together into a single chart. Assuming the resulting charts can then be parameterized correctly, an edge contraction will produce the effect of removing the border between the two merged charts in the new texture.

It is fairly obvious that the order in which edge contractions are performed has a large impact on the final output produced by the method, because it dictates how the original charts are merged together and passed along to the parameterization phase. Therefore, we would like to guide the sequence of edge collapses towards generating reasonable charts with respect to our goals. In particular, we do not want small charts in the final segmentation, and we possibly want to create charts with “regular” boundaries, as this will limit the potential for global overlaps in the resulting parameterization; we also want to avoid “islands”, that is, charts that are surrounded by a single other chart.

The aggregation process is centered around augmenting the parameterization graph with a weight function w : E → R that evaluates the estimated cost of performing an edge collapse. The procedure performs a greedy sequence of edge contractions (by increasing cost) on the parameterization graph until a termination criterion is met.


The greedy aggregation procedure is defined in the following manner:

1. Given a parameterization graph GM = (C, E) and a cost function w : E → R, insert each edge e ∈ E in a priority queue Q with priority value w(e) (where a smaller value implies higher priority).

2. Repeat until a termination criterion is met or no more collapses can be performed:

   2.1. Extract the next edge e = {C1, C2} from Q.

   2.2. Collapse the edge e in GM by merging C1 and C2, and update the edge set E to reflect the new adjacencies.

   2.3. Update Q by re-evaluating the priorities after the update to the adjacency set.

There can be several choices for the termination criterion. The most obvious one is to use the number of charts left in the parameterization graph or a fixed number of iterations. Another possibility could be to stop once all the charts have reasonable size (either in the number of faces or the surface portion they cover), if the sequence of edge contractions is guaranteed to merge small regions first.

Note that at step 2 the procedure interrupts the iterations not only if the termination condition is satisfied, but also if no more collapses are allowed. This can happen when there are edges in the graph whose collapse would create charts whose topology is no longer disk-like, in which case the chart cannot be parameterized (unless cuts are introduced). When we detect such an edge, we reinsert it in the queue Q with infinite cost, which is interpreted as the merge being unfeasible. Because of this, the procedure can stop not only when the termination condition is met, but also when no feasible edge collapse exists.
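The greedy loop, including the re-evaluation of priorities after a contraction and the infinite-cost handling of unfeasible merges, can be sketched as follows. This is a simplified stand-in, not our implementation: charts are plain face sets, stale queue entries are detected with version counters, and the disk-topology check is assumed to be folded into the user-supplied weight function (returning infinity for unfeasible merges).

```python
import heapq

def greedy_aggregate(charts, edges, weight, target_count):
    """Greedy edge contraction on the parameterization graph (sketch).
    charts: dict chart_id -> set of face ids (modified in place)
    edges:  list of (c1, c2) chart adjacencies
    weight: function (charts, c1, c2) -> cost; float('inf') marks an
            unfeasible merge (e.g. a non disk-like result)."""
    version = {c: 0 for c in charts}   # bumped whenever a chart absorbs another
    adj = {c: set() for c in charts}
    heap = []
    for c1, c2 in edges:
        adj[c1].add(c2)
        adj[c2].add(c1)
        heapq.heappush(heap, (weight(charts, c1, c2), version[c1], version[c2], c1, c2))
    while len(charts) > target_count and heap:
        w, v1, v2, c1, c2 = heapq.heappop(heap)
        if c1 not in charts or c2 not in charts:     # an endpoint was merged away
            continue
        if (v1, v2) != (version[c1], version[c2]):   # stale priority entry
            continue
        if w == float('inf'):                        # only unfeasible merges remain
            break
        charts[c1] |= charts.pop(c2)                 # contract: merge c2 into c1
        adj[c1] |= adj.pop(c2)
        adj[c1] -= {c1, c2}
        for c in adj[c1]:
            adj[c].discard(c2)
            adj[c].add(c1)
        version[c1] += 1
        for c in adj[c1]:                            # re-evaluate affected edges
            heapq.heappush(heap, (weight(charts, c1, c), version[c1], version[c], c1, c))
    return charts
```

The termination criterion here is a target chart count; the same skeleton accommodates the alternative criteria discussed above (iteration budget, minimum chart size).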


5.2.2 The weight function

The choice of the weight function is important because it drives the greedy procedure and impacts how the initial charts get grouped together into the final ones. I already mentioned some desirable properties for the weight function in section 5.2, namely that it should create large charts with simple boundaries; however, it is extremely difficult to predict whether a chart can be parameterized effectively (that is, with low distortion) unless it has very simple geometry. Furthermore, the parameterization phase will stitch together distinct pieces of texture, so there is also the issue of taking into account how well the existing data will fit together. This could either be left for the parameterization method to take care of, or it could be considered in the greedy aggregation.

Before discussing possible choices for the weight function and the reasoning behind them, I need to introduce some additional notation. Let e = {C1, C2} be an edge of the parameterization graph. The 3D boundary length operator ∂3D(C1 ↔ C2) returns the length of the boundary the two charts share on the mesh surface. The UV boundary length operator ∂uv(C1 → C2) returns the length of the boundary C1 shares with C2 projected to parameter space. Note that the UV boundary length is not symmetric among adjacent charts, since the two charts map the shared seam to different regions of texture space, generally at different scales; therefore ∂uv(C1 → C2) ≠ ∂uv(C2 → C1). Furthermore, let ∂3D(C) be the boundary length of chart C over the mesh surface, and ∂uv(C) the projected length of its boundary in parameter space.

Assume that C1 is the chart with smaller parameter space area incident to edge e, and denote its area in parameter space by Auv(C1). The first weight function I propose is

    wuv(e) = Auv(C1) · (1 − ∂uv(C1 → C2)/∂uv(C1)) · (∂uv(C1 → C2) − ∂uv(C2 → C1))^2.    (5.1)

This function favors the collapse of edges connecting charts where the border shared by the smaller one in texture space is a large fraction of


the total border (this is determined by the central term, that becomes zero if the smaller region is an “island” inside the larger one); it also favors the merge of charts whose shared border is of similar length in texture space: this means that the original data has similar resolution along the boundary and so it should blend easily. Finally, everything is weighted by the parameter space area of the smaller region; this term ensures that small fragments get quickly merged into larger charts.

Another proposition is the following one, which takes into account the 3D shape of the charts rather than the parameter space features. This time C1 is assumed to be the chart with smaller surface area:

    w3D(e) = A3D(C1) · (∂3D(C1 ↔ C2)/∂3D(C1))^-1.    (5.2)

Disregarding the features of the original parameter space surface might seem surprising, but the reasoning behind this is that by merging charts in a way that seemingly makes sense according to the mesh geometry, we can obtain charts that will have more regular outlines. Of course this could be countered by the fact that the parameterization phase will have to fit the existing data together, so if charts with significantly different resolution in the original texture got mixed together, this could lead to a parameterization with high distortion along the old boundaries. Note that this function does not explicitly favor merging islands, but such regions can be easily detected and merged in a preprocess step (in fact, this is what we did in our experiments).
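Both weight functions are straightforward to express in code once the area and boundary-length operators are available. In the small sketch below the lengths and areas are passed in as precomputed scalars; the parameter names are hypothetical helpers, not part of the method's actual interface.

```python
def w_uv(area_uv_c1, b_uv_c1, b_uv_c1_c2, b_uv_c2_c1):
    """Equation (5.1). area_uv_c1: parameter-space area of the smaller chart C1;
    b_uv_c1: total UV boundary length of C1; b_uv_c1_c2, b_uv_c2_c1: the two
    directed UV lengths of the shared boundary."""
    shared_fraction = b_uv_c1_c2 / b_uv_c1
    mismatch = (b_uv_c1_c2 - b_uv_c2_c1) ** 2
    return area_uv_c1 * (1.0 - shared_fraction) * mismatch

def w_3d(area_3d_c1, b_3d_c1, b_3d_shared):
    """Equation (5.2). area_3d_c1: surface area of the smaller chart C1;
    b_3d_c1: its total boundary length; b_3d_shared: shared boundary length."""
    return area_3d_c1 * (b_3d_shared / b_3d_c1) ** -1
```

Note how w_uv vanishes (highest priority) when C1 is an island, since the shared fraction is one, and also when the two projected boundary lengths match, making such merges the cheapest.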

5.3 Parameterization phase

Regardless of the choice for the weight function, after the sequence of edge contractions the aggregation phase produces a new mesh segmentation, encoded in the resulting parameterization graph. This segmentation is the


starting point for the second phase of the method, which tries to generate a parameterization of each aggregate chart that is optimized towards preserving the original data.

As I already anticipated, our approach computes a new flattening that minimizes deformation relative to the original parameter-space surface, which we then use to generate a new texture file for the input model. (In the following, I will refer to the original flattened surface as the UV surface, as opposed to the 3D surface of the model.)

To achieve this we employ a parameterization method based on energy minimization, and we formulate the energy function using the shape features (areas and angles) of the UV surface. This ensures that the newly computed parameter triangles are very similar to their counterparts in the initial flattening, preserving the resolution and distortion features of the original texture map.

Note that the UV surface of an aggregate chart is disconnected, as the charts that are merged together were initially mapped to distinct parts of texture space. This is not a limiting factor in computing a parameterization because the underlying geometry still has disk-like topology, but it can make the problem more difficult from a numerical standpoint.

5.3.1 Energy formulation

In our case, a parameterization already exists and we want to compute a new parameterization that matches the features of the original one as much as possible. Assume that the existing parameterization locally maps a triangle Tf to t_f^uv, and consider a different parameter triangle tf. We observe that a natural measure of the difference in how t_f^uv and tf map data to the surface triangle Tf is the distortion of the map from t_f^uv to tf.


Figure 5.1: Texture map optimization applied to the Nefertiti model. (a) The initial map is heavily fragmented and presents large discrepancies in the sampling frequency. (b) The map optimized using the symmetric Dirichlet energy ESD maps the texture uniformly, potentially losing detail on the original high resolution areas. (c) Our energy formulation ESD^uv is aware of the differences in sampling frequency and is able to preserve them in the new parameterization.

Given a local deformation energy

    E*(f) = A(Tf) E*(σ1, σ2),    (5.3)

we reformulate it in terms of the singular values of the Jacobian J_f^uv of the map from t_f^uv to tf

    E*uv(f) = A(t_f^uv) E*(σ1^uv, σ2^uv).    (5.4)

To see an example of how this approach makes a significant difference in preserving the features of the original texture map, see figure 5.1, where charts that have different resolution in the original texture are joined


together. Parameterizing using the 3D surface features yields a map that does not account for the difference in sampling frequency of the original signal, while minimizing distortion towards the UV surface accounts for different resolutions by scaling the various parameter triangles non-uniformly.
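The singular values σ1^uv, σ2^uv can be computed per face from the Jacobian of the affine map between the two parameter triangles. A minimal NumPy sketch of this construction (an illustration, not our actual implementation):

```python
import numpy as np

def uv_map_singular_values(t_uv, t_new):
    """Singular values of the linear map taking the original parameter
    triangle t_uv to the new parameter triangle t_new (3x2 arrays of 2D points)."""
    # Edge matrices: columns are the two edge vectors emanating from vertex 0.
    S = np.column_stack([t_uv[1] - t_uv[0], t_uv[2] - t_uv[0]])
    D = np.column_stack([t_new[1] - t_new[0], t_new[2] - t_new[0]])
    J = D @ np.linalg.inv(S)                   # Jacobian J_f^uv of the map
    return np.linalg.svd(J, compute_uv=False)  # (sigma1, sigma2), sigma1 >= sigma2
```

Plugging these singular values, together with the area A(t_f^uv), into equation (5.4) gives the per-face contribution of our energy; both singular values equal to one means the face keeps exactly the shape and resolution it had in the original atlas.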

The choice of the parameterization algorithm to use in this phase has the biggest impact on the feasibility and effectiveness of our method, because it influences the efficiency of the overall procedure and the quality of the final result. Although the primary concern is the detail preservation of the texture map, and not the method efficiency, a procedure that computes parameterizations of excellent quality but needs hours to parameterize a model of average size is impractical to use.

We tested several parameterization algorithms before finding one that would yield acceptable results both in terms of quality and efficiency. The next sections present a review of the parameterization strategies we tried, and which one ended up best suited to our needs.

5.3.2 Parameterizing using linear methods

Initially, we tried to parameterize the aggregate charts using conformal methods that generate angle preserving parameterizations ([3, 17]). These methods compute the parameterization by solving a simple sparse linear system, and have two features that make them particularly attractive:

• They are extremely fast. Thanks to the efficiency of modern sparse linear solvers they can be used to parameterize large charts (even with hundreds of thousands of faces) in a matter of seconds.

• They allow pinning vertices to arbitrary positions in parameter space.


[Figure 5.2 panels: Model | Initial texture | Optimized with DCP]

Figure 5.2: Parameterizing the aggregate charts of the Griffon model with Discrete Conformal Parameterizations (section 3.4.1). The new mapping exhibits significant area distortion compared to the original: notice for example the wing in the top left corner, which gets noticeably smaller, and the chart at the center of the new texture, where the base of the platform is upscaled without justification.

The second feature in particular seemed promising because the initial idea was to parameterize aggregate charts by keeping large charts fixed in place, and merging adjacent charts by only altering the existing parameterization near the boundary segments connecting them together.

However, in practice pinning vertices along a boundary segment often resulted in degenerate parameterizations that collapsed around the fixed positions, with lots of triangle inversions and boundary overlaps. Such parameterizations are inadequate for our purposes so we decided to parameterize the whole aggregate charts instead.
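The triangle inversions mentioned above can be detected mechanically: a parameter triangle is flipped exactly when its signed area becomes non-positive with respect to the chart's orientation. A minimal sketch (function names are illustrative):

```python
def signed_area(t):
    """Signed area of a 2D parameter triangle (positive = counter-clockwise)."""
    (x0, y0), (x1, y1), (x2, y2) = t
    return ((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2

def inverted_faces(triangles):
    """Indices of parameter triangles that are flipped or degenerate."""
    return [i for i, t in enumerate(triangles) if signed_area(t) <= 0]
```

A parameterization is locally injective only when this list is empty, which is the check that ruled out the collapsed results described above.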

Unfortunately, even parameterizing whole charts yielded unsatisfactory results. The issue with this kind of method is that it often produces parameterizations that exhibit significant area distortion, which causes arbitrary changes in the sampling frequency completely unrelated to the way the original signal was mapped to the surface. An example of this can be seen in figure 5.2, where the UV shape of the newly parameterized chart disregards the relative proportions of the initial parameter triangles; using this parameterization to generate a new texture file loses data in the areas that are shrunk and needlessly increases the resolution of data mapped to the areas that are upscaled.

This behavior occurred regardless of the shape features used to produce the parameterization, so it is not caused by minimizing the conformal energy of the map towards the UV surface rather than towards the 3D surface. Further experiments also showed this behavior to be erratic and unpredictable: sometimes just adding a small fragment to a large patch was enough to yield a completely different shape that either triggered or fixed the issue.

In any case, solving this by trying to piece charts together differently, when we already had a procedure to do so in the aggregation phase, did not seem reasonable, and fixing vertices in order to keep at least parts of the original shapes was not feasible because of the issue mentioned earlier. Given all these issues, we decided to adopt a parameterization algorithm based on the minimization of a state-of-the-art nonlinear energy.

5.4 Parameterizing with iterative optimizers

Area-preserving energies are minimized using methods from nonlinear optimization theory. In general, a valid parameterization is required as a starting point, and the energy is then iteratively minimized using descent methods that move the vertex positions in parameter space towards a configuration with smaller energy.

Because these energies approach infinity as parameter triangles degenerate, an initial configuration that is locally injective must be provided.


[Figure 5.3 panels: Model | Initial texture | Optimized with $E^{uv}_{SD}$]

Figure 5.3: Parameterizing the aggregate charts of the Griffon model by minimizing our adapted version of the symmetric Dirichlet energy $E^{uv}_{SD}$. The resulting charts respect the original proportions of the texture map resolution while also being uniformly up-sampled.

A common choice among area-preserving methods is to use Tutte's embedding, and that is what we also used.
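Tutte's embedding pins the chart boundary to a convex polygon and places each interior vertex at a convex combination of its neighbours, which guarantees an injective starting map for disk-like charts. As a reference, a minimal sketch with uniform weights follows (production code would use mean-value or cotangent weights and a sparse direct solver; here a plain Gauss-Seidel iteration stands in, and all names are illustrative):

```python
import math

def tutte_embedding(n_verts, edges, boundary, iters=1000):
    """Tutte-style embedding sketch with uniform weights: the boundary loop is
    pinned to the unit circle, and each interior vertex is repeatedly moved to
    the average of its neighbours (Gauss-Seidel on the uniform Laplacian)."""
    # Build vertex adjacency from the undirected edge list.
    nbr = [[] for _ in range(n_verts)]
    for a, b in edges:
        nbr[a].append(b)
        nbr[b].append(a)
    uv = [[0.0, 0.0] for _ in range(n_verts)]
    for k, v in enumerate(boundary):  # pin the boundary loop on the unit circle
        ang = 2 * math.pi * k / len(boundary)
        uv[v] = [math.cos(ang), math.sin(ang)]
    fixed = set(boundary)
    for _ in range(iters):
        for v in range(n_verts):
            if v in fixed or not nbr[v]:
                continue
            uv[v] = [sum(uv[w][0] for w in nbr[v]) / len(nbr[v]),
                     sum(uv[w][1] for w in nbr[v]) / len(nbr[v])]
    return uv
```

On a square boundary with a single interior vertex connected to all four corners, the interior vertex converges to the barycenter of the pinned positions, as expected of a convex-combination map.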

5.4.1 Energy choice

Among the many available energies, we chose to minimize the symmetric Dirichlet energy $E^{uv}_{SD}$. Using the angle notation from equation (3.20), the formulation of this energy for a face f is

$$E^{uv}_{SD}(f) = \left(1 + \frac{A(t^{uv}_f)^2}{A(t_f)^2}\right) \left(\frac{\sum_{i=0}^{2} \cot \alpha^{uv}_i \, \lVert u_{i+1} - u_i \rVert^2}{2\,A(t^{uv}_f)}\right). \quad (5.5)$$

For the line search, we used a simple bisection-based procedure, starting from the injectivity-preserving step size computed as described in section 3.4.3.
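A common way to obtain an injectivity-preserving step bound is to note that, along a descent direction, the signed area of each parameter triangle is a quadratic polynomial in the step size; the smallest positive root over all faces bounds the step before any triangle degenerates. The sketch below pairs this bound with a simple step-halving search on the energy; it is an illustration of the general technique, with our own function names, and the exact procedure referenced in section 3.4.3 may differ in its details.

```python
import math

def cross(u, v):
    return u[0]*v[1] - u[1]*v[0]

def smallest_positive_root(a, b, c):
    """Smallest positive root of a*t^2 + b*t + c = 0, or +inf if none."""
    roots = []
    if abs(a) < 1e-15:
        if abs(b) > 1e-15:
            roots = [-c / b]
    else:
        disc = b*b - 4*a*c
        if disc >= 0:
            s = math.sqrt(disc)
            roots = [(-b - s) / (2*a), (-b + s) / (2*a)]
    pos = [r for r in roots if r > 0]
    return min(pos) if pos else float('inf')

def max_injective_step(faces, uv, d):
    """Largest t such that no face degenerates along uv + t*d."""
    t_max = float('inf')
    for i, j, k in faces:
        u  = (uv[j][0]-uv[i][0], uv[j][1]-uv[i][1])
        v  = (uv[k][0]-uv[i][0], uv[k][1]-uv[i][1])
        du = (d[j][0]-d[i][0],   d[j][1]-d[i][1])
        dv = (d[k][0]-d[i][0],   d[k][1]-d[i][1])
        # Signed area along the step: cross(u + t*du, v + t*dv) = a*t^2 + b*t + c.
        a, b, c = cross(du, dv), cross(u, dv) + cross(du, v), cross(u, v)
        t_max = min(t_max, smallest_positive_root(a, b, c))
    return t_max

def halving_line_search(energy, uv, d, t0, shrink=0.5, tries=30):
    """Start just below the injectivity bound and halve the step until the
    energy decreases; return the accepted positions and step size."""
    e0, t = energy(uv), 0.8 * t0
    for _ in range(tries):
        cand = [(p[0] + t*q[0], p[1] + t*q[1]) for p, q in zip(uv, d)]
        if energy(cand) < e0:
            return cand, t
        t *= shrink
    return uv, 0.0
```

For example, moving the apex of the triangle (0,0), (1,0), (0,1) straight down degenerates the face exactly at t = 1, which is the bound the quadratic root recovers.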
