
Low Rank Decompositions for Tensors and Matrices

Eugene Tyrtyshnikov

Institute of Numerical Mathematics of the Russian Academy of Sciences and Lomonosov Moscow State University

eugene.tyrtyshnikov@gmail.com

Lectures in the lecture hall “Dal Passo” of the Department of Mathematics of the University of Rome “Tor Vergata”, 10:00–13:00 on February 5, 6, and 12, 2018

1 Skeleton (dyadic) decomposition for matrices and Tucker decomposition for tensors

As everybody knows, a matrix of rank r can be expressed as a sum of r rank-one matrices and can be approximated by a sum of k rank-one matrices with k < r. The best possible approximation of rank k is obtained by a suitable truncation of the Singular Value Decomposition (SVD). We assume that the students are aware of the basic properties of the SVD and the corresponding best-approximation results. Recommended references are Matrix Computations by Gene Golub and Charles Van Loan and A Brief Introduction to Numerical Analysis by Eugene Tyrtyshnikov.
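As a hedged illustration (not part of the lecture notes), the following NumPy sketch computes the best rank-k approximation by truncating the SVD; the function name best_rank_k and the test matrix are ours.

```python
import numpy as np

def best_rank_k(A, k):
    """Best rank-k approximation of A in the 2-norm and Frobenius norm
    (Eckart-Young), obtained by truncating the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep the k largest singular triplets; the 2-norm error of the
    # truncation equals the (k+1)-st singular value.
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Usage: a random rank-5 matrix approximated by a rank-3 one.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5)) @ rng.standard_normal((5, 6))
A3 = best_rank_k(A, 3)
print(np.linalg.matrix_rank(A3))  # 3
```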

If instead of a matrix we are given a d-tensor (a d-dimensional matrix), then it is natural to investigate possible generalizations of skeleton decompositions to tensors. First of all, we introduce a convenient formalism for multiplication of tensors and consider the so-called general Tucker decomposition, the minimal Tucker decomposition, and the orthogonal Tucker decomposition. Many important properties of the SVD will be lost. However, the study of tensors can be reduced to the study of matrices associated with a given tensor. Using the SVDs of these matrices, we will be able to produce best-approximation estimates for tensors. Note that the Tucker decomposition is widely used in statistics and various branches of data analysis.
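The reduction to associated matrices can be made concrete with a small sketch. The following assumes the orthogonal Tucker factors are taken from SVDs of the mode unfoldings, as in the higher-order SVD of De Lathauwer, De Moor, and Vandewalle [1]; the code and its names are illustrative, not taken from the lectures.

```python
import numpy as np

def hosvd(T, ranks):
    """Orthogonal Tucker (HOSVD) sketch: each factor matrix comes from
    the SVD of the corresponding mode unfolding of T."""
    factors = []
    for mode, r in enumerate(ranks):
        # Unfold T along `mode`: that mode indexes the rows.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    # Core tensor: contract each mode of T with the transposed factor.
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Usage: compress a 6x7x8 tensor to a 2x3x4 core.
rng = np.random.default_rng(0)
T = rng.standard_normal((6, 7, 8))
core, factors = hosvd(T, (2, 3, 4))
print(core.shape)  # (2, 3, 4)
```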

2 Tensor trains and structured decompositions of associated matrices

A tensor train is a special decomposition of a d-tensor using d other tensors of smaller dimensionality: two of them are 2-dimensional and the others are 3-dimensional. If there are n points in each dimension, then the total number of entries of such a tensor equals n^d, whereas the number of representation parameters in the tensor-train decomposition is bounded by dnr^2, where r is the so-called tensor-train rank. Multi-dimensional problems abound with cases where the value of r is reasonably small, and fortunately this value is small in typical applications. Tensor trains implement the general idea of reduction of dimensionality and have everything to do with a sequence of matrices associated with a given tensor. These matrices are wonderfully structured, and classical matrix decompositions, e.g. the SVD, can be computed for them simultaneously in a very fast way. The complexity of the basic tensor-train algorithms depends on d only linearly or, in the worst case, polynomially. The idea of tensorization allows us to use tensor-train algorithms in the work with large-scale vectors and matrices.
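One standard way to compute a tensor train is by sequential SVDs of reshaped unfoldings; the following minimal sketch assumes that variant (it is not the cross approximation of [3]), and the names and tolerance are ours.

```python
import numpy as np

def tt_svd(T, eps=1e-12):
    """Tensor-train sketch (sequential-SVD variant): peel off one
    dimension at a time, truncating singular values below eps."""
    shape, d = T.shape, T.ndim
    cores, r = [], 1
    M = T.reshape(shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int(np.sum(s > eps)))          # truncated TT rank
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        # Push the remaining factor to the right and fold in next mode.
        M = (s[:rk, None] * Vt[:rk, :]).reshape(rk * shape[k + 1], -1)
        r = rk
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

# Usage: decompose a 4x5x6 tensor and reconstruct it exactly.
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
G1, G2, G3 = tt_svd(T)
R = np.einsum('aib,bjc,ckd->ijk', G1, G2, G3)
print(np.linalg.norm(R - T))  # close to 0
```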


3 Tensor ranks and their relation to algebraic geometry

By definition, a rank-one tensor in three indices is a nonzero tensor of the form a(i, j, k) = u(i)v(j)w(k). Any tensor can be represented as a sum of rank-one tensors (the so-called trilinear decomposition or canonical polyadic decomposition), and the minimal possible number of summands is called the tensor rank of that tensor. If we fix the number of summands, then the possibility for a tensor to have rank bounded by this number amounts to the solvability of a certain system of algebraic equations. Here we step onto the grounds of algebraic geometry.
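To make the definition concrete, here is a minimal sketch (ours, not from the notes) that builds a 3-tensor as a sum of two rank-one terms, so its tensor rank is at most 2 by construction.

```python
import numpy as np

# Two rank-one terms a(i,j,k) = u(i)v(j)w(k), summed into one tensor.
rng = np.random.default_rng(1)
u1, v1, w1 = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
u2, v2, w2 = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
T = (np.einsum('i,j,k->ijk', u1, v1, w1)
     + np.einsum('i,j,k->ijk', u2, v2, w2))

# Every unfolding of a rank-2 tensor has matrix rank at most 2.
print(np.linalg.matrix_rank(T.reshape(4, 30)))  # 2
```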

Due to Hilbert’s Nullstellensatz, the tensor rank can be computed in finitely many arithmetic operations. However, the number of those operations may be, and usually is, prohibitively high.

Thus, the case of three and more indices differs crucially from the case of two indices. One of the most important discrepancies is the so-called non-closedness: the set of tensors of rank at most k need not be closed, so a sequence of rank-k tensors can converge to a tensor of strictly larger rank. This phenomenon still needs a better theoretical understanding. The same applies to estimates of the maximal possible ranks depending on the sizes, and to Friedland’s conjecture about generic ranks, which is still open.
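The non-closedness admits a classical explicit witness. The following sketch (our illustration, not from the notes) checks numerically that the rank-3 tensor T = u⊗u⊗v + u⊗v⊗u + v⊗u⊗u is approximated arbitrarily well by rank-2 tensors.

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
outer3 = lambda a, b, c: np.einsum('i,j,k->ijk', a, b, c)

# T has tensor rank 3.
T = outer3(u, u, v) + outer3(u, v, u) + outer3(v, u, u)

# T_eps is a difference of two rank-one tensors, hence of rank 2,
# yet T_eps -> T as eps -> 0 (the error behaves like sqrt(3)*eps).
for eps in [1e-1, 1e-3, 1e-5]:
    w = u + eps * v
    T_eps = (outer3(w, w, w) - outer3(u, u, u)) / eps
    print(eps, np.linalg.norm((T_eps - T).ravel()))
```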

References

[1] L. de Lathauwer, B. de Moor, J. Vandewalle, A multilinear singular value decomposition, SIAM J. Matrix Anal. Appl. 21 (2000) 1253–1278.

[2] L. de Lathauwer, B. de Moor, J. Vandewalle, On the best rank-1 and rank-(R_1, R_2, ..., R_N) approximation of higher-order tensors, SIAM J. Matrix Anal. Appl. 21 (2000) 1324–1342.

[3] I. Oseledets, E. Tyrtyshnikov, TT-cross approximation for multidimensional arrays, Linear Algebra Appl. 432 (2010) 70–88.
