
Università degli studi di Padova
Scuola di Ingegneria

Corso di laurea magistrale in Ingegneria dell'Automazione

Iterated projection methods with classical and quantum applications

Supervisor: Dott. Francesco Ticozzi
Candidate: Luca Zuccato

Contents

1 Introduction
  1.1 History
  1.2 Mathematical background

2 Basic Theory
  2.1 Iterated Projection Theorem
    2.1.1 Rate of Convergence
  2.2 Extensions
    2.2.1 Row-Action Methods
    2.2.2 Dykstra's Algorithm
  2.3 Typical applications
    2.3.1 Solving Constrained L-S Matrix Problems
    2.3.2 MMUP: Matrix Model Updating Problem
    2.3.3 Projection Methods in Quantum Information Science
    2.3.4 Fixed-Order Control Design for LMI Control Problems using MAP

3 Iterated Quantum Maps
  3.1 Statistical Description of Finite-Dimensional Quantum Systems
  3.2 Open System Dynamics
  3.3 CPTP Projections
  3.4 Iterated CPTP Map Theorem
  3.5 Quantum Application
    3.5.1 Example

4 Bregman's Theory
  4.1 Bregman's Divergences and their Properties
  4.2 Generalized Pythagorean Theorem
  4.3 Iterated Convergence Results
  4.4 Quantum Bregman Divergences

A Tensor Product and Partial Trace

B MATLAB Code

Chapter 1

Introduction

1.1 History

In various scientific fields it is often required to solve the following problem: find a point in the intersection of closed subspaces (or sets) that minimizes the distance from a given point of the whole space. In 1933 von Neumann [18] found an iterative approach to this problem: the projection onto the intersection of two subspaces can be obtained by alternating projections onto the single subspaces. Starting from this discovery, Halperin [14] extended von Neumann's theorem to the case of N subspaces with nonempty intersection. Using this alternating-projections approach, different algorithms were invented. Among these we recall the Kaczmarz method of 1937 [16], for solving linear systems; MAMS of 1954 [1][17], useful to find a feasible solution of a system of inequalities; and Dykstra's alternating projections algorithm for general convex sets, of 1986 [11]. These algorithms are used to solve different problems, such as constrained least-squares matrix minimization problems, the matrix model updating problem (adapting a given model to measured data), control design, and so on.

In 1967 Bregman [5] extended the concept of distance, defining what are now called Bregman divergences. With this extended concept he defined the corresponding generalized projections and gave an iterated Bregman projection theorem. Similar results were developed by Csiszár [8] in the field of information theory, using relative entropy as a pseudo-distance.

In this work some classical results on alternating projections and Bregman's theory will be presented (Chapters 2 and 4). In Chapter 3 we will propose an application of the von Neumann–Halperin theory to quantum maps.

1.2 Mathematical background

In this section we present some definitions that will be essential to this work. Details can be found in standard algebra textbooks.

Definition 1.1 A vector space V over a field of scalars is a set, closed under vector addition and scalar multiplication, satisfying the following properties:

• u + v = v + u, ∀ u, v ∈ V;
• u + (v + w) = (u + v) + w, ∀ u, v, w ∈ V;
• there exists a null vector e such that e + v = v, ∀ v ∈ V;
• λ(µv) = (λµ)v, ∀ v ∈ V and scalars λ, µ;
• (λ + µ)v = λv + µv, ∀ v ∈ V and scalars λ, µ;
• λ(u + v) = λu + λv, ∀ u, v ∈ V and scalar λ;
• 0v = e, ∀ v ∈ V;
• 1v = v, ∀ v ∈ V.

Definition 1.2 A set C in a vector space is said to be convex if (1 − α)x + αy ∈ C for all x, y ∈ C and 0 ≤ α ≤ 1.

Definition 1.3 A metric space X is a set on which a distance function d : X × X → R₊ is defined, with the following properties for all x, y, z ∈ X:

• d(x, y) = 0 ⟺ x = y;
• d(x, y) = d(y, x);
• d(x, z) ≤ d(x, y) + d(y, z).

Definition 1.4 Let V be a vector space over F (R or C). An inner product is a map ⟨·, ·⟩ : V × V → F that satisfies:

1. ⟨x, x⟩ ≥ 0, with equality if and only if x = 0;
2. ⟨x, y⟩ = ⟨y, x⟩*;
3. ⟨α₁x + α₂y, z⟩ = α₁⟨x, z⟩ + α₂⟨y, z⟩, where α₁, α₂ ∈ F.

Examples:

• in the Euclidean space Rⁿ the inner product is given by ⟨x, y⟩ = x₁y₁ + ... + x_n y_n;
• in Cⁿ the inner product is given by ⟨x, y⟩ = ∑_{j=1}^{n} x_j y_j*;
• in L², the set of square-integrable functions, the inner product is given by ⟨g, f⟩ = ∫ g(t) f(t)* dt;
• in ℓ² = {{x_j} ⊂ C : ∑_{j=1}^{∞} |x_j|² < ∞}, the inner product is given by ⟨x, y⟩ = ∑_{j=1}^{∞} x_j y_j*.

Definition 1.5 An inner product space is a linear space on which an inner product between pairs of elements is defined.

Definition 1.6 A sequence of points {x_i} in a metric space with metric d is called a Cauchy sequence if for every ε > 0 there exists an integer N such that d(x_i, x_j) < ε whenever i > N and j > N.

Definition 1.7 A metric space is called complete if every Cauchy sequence of points in it converges to a point of the space.

Definition 1.8 An inner product space which is complete in the norm induced by its inner product is called a Hilbert space.

Definition 1.9 A projection is a linear transformation P from a vector space to itself such that P² = P.

Let V be an inner product space and consider a subspace W ⊂ V. Every vector v ∈ V can be uniquely written as v = w₁ + w₂, with w₁ ∈ W and w₂ ∈ W⊥. The orthogonal projection of v onto W is defined as P_W(v) = w₁.

It satisfies the following properties:

1. it is linear;
2. it is idempotent: P_W² = P_W;
3. it is self-adjoint: ⟨P_W(v₁), v₂⟩ = ⟨v₁, P_W(v₂)⟩ for all v₁, v₂ ∈ V.

If ω₁, ..., ω_n is an orthonormal basis of W, the orthogonal projection can be written as:

P_W(v) = ∑_{i=1}^{n} ⟨v, ω_i⟩ ω_i.

Before presenting the algorithms that are built upon iterated methods, we recall an important theorem on Hilbert spaces and projections. In the next sections H denotes a general Hilbert space.

Theorem 1.1 (Kolmogorov's criterion) Let x be a vector in H and C a closed convex subset of H. Then there exists a unique c₀ ∈ C such that ‖x − c₀‖ ≤ ‖x − c‖ for all c ∈ C. Moreover, c₀ is the unique minimizing vector if and only if ⟨x − c₀, c − c₀⟩ ≤ 0 for all c ∈ C.
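The criterion is easy to verify numerically when the projection has a closed form. The following MATLAB fragment is a minimal sketch (not part of the thesis code): it projects a point onto a closed half-space and checks the Kolmogorov inequality at random feasible points; the data a, b, x are arbitrary choices for illustration.

a = [1; 2]; b = 1; x = [3; 4];
c0 = x - max(0, (a'*x - b)/(a'*a))*a;    % projection of x onto C = {c : <a,c> <= b}
for trial = 1:5
    c = randn(2,1);
    c = c - max(0, (a'*c - b)/(a'*a))*a; % an arbitrary point of C
    disp((x - c0)'*(c - c0))             % always <= 0, as the criterion states
end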

Chapter 2

Basic Theory

2.1 Iterated Projection Theorem

In this section we are going to see the original theorems that led to the development of the more advanced algorithms discussed in Section 2.3. These algorithms are used to solve linear systems (Ax = b), linear feasibility problems (i.e. find x ∈ Rⁿ s.t. Ax ≤ b) or, more generally, convex feasibility problems (find x ∈ ∩_i C_i, where each C_i is closed and convex, 1 ≤ i ≤ m). In Section 2.3 we will also show some applications of these alternating projection methods. As said in the history section, von Neumann was interested in finding the projection of a given point of H onto the intersection of two closed subspaces. His result is the following.

Theorem 2.1 (Von Neumann's alternating projections theorem) Let N, M be closed subspaces of H. Then for each x ∈ H:

lim_{n→∞} (P_N P_M)ⁿ x = P_{M∩N} x.

Proof. Let us consider the sequences Σ₁ and Σ₂ of operators P_M, P_N P_M, P_M P_N P_M, ... and P_N, P_M P_N, P_N P_M P_N, ..., respectively. We have to show that both sequences have the same limit T and that T = P_{M∩N}.

Let T_n be the n-th operator of Σ₁. It holds:

⟨T_m x, T_n y⟩ = ⟨T_{m+n−δ} x, y⟩,

where δ = 1 if m and n have the same parity, and δ = 0 otherwise. We need to show that, for x ∈ H, lim_{n→∞} T_n x exists. It holds:

‖T_m x − T_n x‖² = ⟨T_m x − T_n x, T_m x − T_n x⟩
= ⟨T_m x, T_m x⟩ − ⟨T_m x, T_n x⟩ − ⟨T_n x, T_m x⟩ + ⟨T_n x, T_n x⟩
= ⟨T_{2m−1} x, x⟩ + ⟨T_{2n−1} x, x⟩ − 2⟨T_{m+n−δ} x, x⟩
= ⟨T_{2m−1} x, x⟩ + ⟨T_{2n−1} x, x⟩ − 2⟨T_{2k−1} x, x⟩,

with 2k − 1 = m + n − δ.

Moreover, for every i we have that

‖T_{i+1} x‖² = ⟨T_{2i+1} x, x⟩.

T_{i+1} x is either P_M T_i x or P_N T_i x, so it holds that

‖T_{i+1} x‖² ≤ ‖T_i x‖².

Hence, for all i:

⟨T_{2i−1} x, x⟩ ≥ ⟨T_{2i+1} x, x⟩,

therefore lim_{i→∞} ⟨T_{2i−1} x, x⟩ exists, and this implies

lim_{m,n→∞} ‖T_m x − T_n x‖ = 0.

Let us denote by x* the limit of T_n x. If T is defined by the condition Tx = x*, then dom(T) = H, T is single-valued and linear, and from

lim_{m,n→∞} ⟨T_m x, T_n y⟩ = lim_{m,n→∞} ⟨T_{m+n−δ} x, y⟩

it follows:

⟨Tx, Ty⟩ = ⟨Tx, y⟩.

So T is a projection P_L. Now, if x ∈ M ∩ N, then P_M x = P_N x = x, T_n x = x and Tx = x. So x ∈ L and, by that, M ∩ N ⊆ L. Moreover, it holds that P_M T = P_N T = T; let y ∈ H and Ty = x ∈ L. Then P_M x = P_M Ty = Ty = x ∈ M and P_N x = P_N Ty = Ty = x ∈ N, which implies L ⊆ M ∩ N.

Repeating the same argument for Σ₂, it is clear that its limit is T′ = P_{M∩N}, so T = T′ and the proof is complete. □

The generalization to the intersection of multiple subspaces was given by Halperin:

Theorem 2.2 (Halperin) If M₁, ..., M_r are closed subspaces of H, then for all x ∈ H:

lim_{n→∞} (P_{M₁} ⋯ P_{M_r})ⁿ x = P_{∩_{i=1}^{r} M_i} x.

A proof of this theorem can be found in Halperin's original work [14].
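As an illustration, the following MATLAB sketch (not from the thesis; the two subspaces are arbitrary choices) alternates the two projectors and compares the limit with the direct projection onto the intersection:

M = orth([1 0; 0 1; 0 0; 1 1]); N = orth([1 0; 0 0; 0 1; 1 1]); % bases of M, N
PM = M*M'; PN = N*N';                    % orthogonal projectors onto M and N
x = randn(4,1); y = x;
for k = 1:200
    y = PM*(PN*y);                       % one sweep of P_M P_N
end
Z = null([M, -N]);                       % solutions of M*p = N*q span M ∩ N
B = orth(M*Z(1:size(M,2),:));            % orthonormal basis of the intersection
disp(norm(y - B*(B'*x)))                 % -> 0: the sweeps converge to P_{M∩N} x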

2.1.1 Rate of Convergence

These two theorems gave the basis for the development of different algorithms. For this reason we are interested in the rate of convergence of alternating projections. The rate is linked to the angle between the subspaces; we recall its definitions and properties.

Consider the function arccos : [−1, 1] → [0, π]; we will only use its restriction to [0, 1]. The angle θ(M, N) between two closed subspaces M and N of H is then an element of [0, π/2], defined as follows.

Definition 2.1 (Friedrichs) Define the cosine c(M, N) between the closed subspaces M and N of H as:

c(M, N) = sup{|⟨x, y⟩| : x ∈ M ∩ (M ∩ N)⊥, ‖x‖ ≤ 1, y ∈ N ∩ (M ∩ N)⊥, ‖y‖ ≤ 1}.

Then the angle is given by:

θ(M, N) = arccos(c(M, N)).

Definition 2.2 (Dixmier) Define the cosine c₀(M, N) as:

c₀(M, N) = sup{|⟨x, y⟩| : x ∈ M, ‖x‖ ≤ 1, y ∈ N, ‖y‖ ≤ 1}.

Then the minimal angle is given by:

θ₀(M, N) = arccos(c₀(M, N)).

Properties:

1. if M ∩ N = {0}, then c₀(M, N) = c(M, N);
2. some consequences of the definitions are:
   i) 0 ≤ c(M, N) ≤ c₀(M, N) ≤ 1;
   ii) c(M, N) = c(N, M) and c₀(M, N) = c₀(N, M);
   iii) c(M, N) = c₀(M ∩ (M ∩ N)⊥, N ∩ (M ∩ N)⊥);
   iv) |⟨x, y⟩| ≤ c₀(M, N) ‖x‖ ‖y‖ for all x ∈ M, y ∈ N.

Lemma 2.1 The following relations hold:

1. c(M, N) = c₀(M, N ∩ (M ∩ N)⊥) = c₀(M ∩ (M ∩ N)⊥, N);
2. c₀(N, M) = ‖P_M P_N‖ = ‖P_M P_N P_M‖^{1/2};
3. c(M, N) = ‖P_M P_N − P_{M∩N}‖ = ‖P_M P_N P_{(M∩N)⊥}‖.

Having the definition of the angle, we next state the theorem that gives the exact rate in the case of projections onto two subspaces.

Theorem 2.3 ‖(P_{M₂} P_{M₁})ⁿ − P_{M₁∩M₂}‖ = c(M₁, M₂)^{2n−1}, for n = 1, 2, ....

In the case of projections onto more subspaces we cannot give an exact expression, but we can give an upper bound:

Theorem 2.4 For each i = 1, 2, ..., r, let M_i be a closed subspace of H. Then, for each x ∈ H and any integer n ≥ 1:

‖(P_{M_r} ⋯ P_{M₁})ⁿ x − P_{∩_{i=1}^{r} M_i} x‖ ≤ c^{n/2} ‖x − P_{∩_{i=1}^{r} M_i} x‖,

where

c = 1 − ∏_{i=1}^{r−1} sin² θ_i,

and θ_i is the angle between M_i and ∩_{j=i+1}^{r} M_j.

Remark: By Theorem 2.4, a condition for finite-time convergence of the iterated projections is c = 0, which is satisfied if c(M_i, M_j) = 0 for all 1 ≤ i, j ≤ r; in other words, if

M_i ∩ (∩_{t=1}^{r} M_t)⊥  ⊥  M_j ∩ (∩_{t=1}^{r} M_t)⊥

for every i and j = i + 1, ..., r.

2.2 Extensions

2.2.1 Row-Action Methods

The following are iterative methods developed to solve large and sparse systems, linear and non-linear, of equalities (i.e. Ax = b) and inequalities (i.e. find x ∈ Rⁿ s.t. Ax ≤ b) in a finite-dimensional space, as said in Section 2.1.

Typically, row-action methods involve alternating projections onto hyperplanes, linear varieties or closed convex sets, and have the following properties:

1. no changes or operations are made on the original matrix A;
2. they only use one row per iteration;
3. at every iteration, the computation of x^{k+1} requires only the value of x^k;
4. for finite-dimensional problems, they only require vector arithmetic such as inner products and vector sums.

Definition 2.3 A sequence of indices {i_k} is called a control sequence of a row-action method if at the k-th iteration the convex set C_{i_k} is used.

Here are some types of control:

• Cyclic control: i_k = k mod m + 1, where m is the number of convex sets involved in the problem;
• Almost cyclic control: i_k ∈ M = {1, 2, ..., m} for all k ≥ 0, and there exists an integer M̄ such that, for every k, M ⊂ {i_{k+1}, ..., i_{k+M̄}};
• Remotest set control: i_k is chosen so that d(x^k, C_{i_k}) = max_{i∈M} d(x^k, C_i), where x^k is the k-th iterate of the row-action method and d(x^k, C_i) is the distance from x^k to the set C_i;
• Random control: i_k is chosen from the set {1, 2, ..., m} at random, with a probability distribution that guarantees that every set is chosen, with non-zero probability, in every sweep of projections.

The following are some of the most used row-action methods.

The relaxation method of Agmon, Motzkin, and Schoenberg (MAMS). The problem to solve is the following:

Ax ≤ b,  A ∈ R^{m×n}, x ∈ Rⁿ, b ∈ R^m.

The problem can be generalized to any Hilbert space H: find x in the intersection of the m closed half-spaces S_i = {x ∈ H : ⟨a_i, x⟩ ≤ b_i}, i ∈ M. This is called the linear feasibility problem.

Given an arbitrary x₀ ∈ H, a typical step of this method can be described by:

x_{k+1} = x_k + δ_k a_{i_k},  where  δ_k = min(0, ω_k (b_{i_k} − ⟨a_{i_k}, x_k⟩)/⟨a_{i_k}, a_{i_k}⟩);

here 0 < ε ≤ ω_k ≤ 2 − ε < 2 for all k, for a given small ε, and i_k is chosen by one of the controls seen before.

This method does not guarantee convergence to the vector of the feasible set nearest to x₀.
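A minimal MATLAB sketch of the method as reconstructed above (cyclic control, a small arbitrary instance; not thesis code):

A = [1 1; -1 2; 0 -1]; b = [2; 2; 0];  % feasible: x = 0 satisfies Ax <= b
x = [5; 5]; omega = 1.5; m = size(A,1);
for k = 0:300
    i = mod(k, m) + 1;                 % cyclic control
    delta = min(0, omega*(b(i) - A(i,:)*x)/(A(i,:)*A(i,:)'));
    x = x + delta*A(i,:)';             % moves only if the i-th constraint is violated
end
disp(max(A*x - b))                     % <= 0 up to tolerance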

Hildreth's Method. Let us consider the following problem:

minimize ‖x‖²  s.t.  ⟨a_i, x⟩ ≤ b_i for all i ∈ M.

Let S_i indicate the half-space given by ⟨a_i, x⟩ ≤ b_i. Starting from an x₀ ∉ S_i for all i (and z₀ = 0), a typical step is given by:

x_{k+1} = x_k + δ_k a_{i_k};
z_{k+1} = z_k − δ_k e_{i_k};

where

δ_k = min( z_{k,i_k}, ω_k (b_{i_k} − ⟨a_{i_k}, x_k⟩)/⟨a_{i_k}, a_{i_k}⟩ ),

and e_{i_k} has all components zero except the i_k-th, which is one; 0 < ε ≤ ω_k ≤ 2 − ε < 2 for all k, for a given small positive ε, and i_k again follows one of the controls introduced before.

This algorithm converges to the minimal-norm solution. Other examples can be found in [12].

Now we are going to show an important algorithm that finds the closest vector in an intersection of closed convex sets (the convex feasibility problem).

2.2.2 Dykstra's Algorithm

Let H be a Hilbert space. For a given nonempty, closed, convex set C of H and x₀ ∈ H, there exists a unique x* that solves:

min_{x∈C} ‖x₀ − x‖,   (2.2.1)

and it satisfies the Kolmogorov criterion:

x* ∈ C,  ⟨x₀ − x*, x − x*⟩ ≤ 0,  ∀x ∈ C.

Let us consider the case C = ∩_{i=1}^{r} C_i, where each C_i is a closed, convex set in H. Moreover, we will assume that P_C(y) is not easy to compute directly, while each P_{C_i}(y) is easy to calculate.

In order to solve problem (2.2.1), the algorithm generates two sequences, the iterates {x_i^n} and the increments {I_i^n}, with n ∈ N and i = 1, ..., r:

x_i^n = P_{C_i}(x_{i−1}^n − I_i^{n−1}),
I_i^n = x_i^n − (x_{i−1}^n − I_i^{n−1}),

with x_0^n = x_r^{n−1}, and initial values x_r^0 = x₀, I_i^0 = 0.

Note:

• the increment I_i^{n−1} associated with C_i in the previous cycle is always subtracted before projecting onto C_i;
• if C_i is a subspace, then P_{C_i} is linear and it is not required, in the n-th cycle, to subtract the increment I_i^{n−1} before projecting onto C_i; in this case Dykstra's algorithm reduces to the MAP procedure;
• the following relations hold:

x_r^{n−1} − x_1^n = I_1^{n−1} − I_1^n;
x_{i−1}^n − x_i^n = I_i^{n−1} − I_i^n;
x_i^n = x₀ + I_1^n + ... + I_i^n + I_{i+1}^{n−1} + ... + I_r^{n−1}.

The next lemma states the convergence of Dykstra's algorithm.

Lemma 2.2 Let C₁, ..., C_r be closed, convex subsets of H with C = ∩_{i=1}^{r} C_i ≠ ∅. The sequence {x_i^n} generated by the algorithm above converges strongly to x* = P_C(x₀), for every x₀ ∈ H.

More details on Dykstra’s and other alternating projections algorithms can be found in [12].
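As a concrete sketch (assuming the recursion reconstructed above; not thesis code), here is Dykstra's algorithm for two convex sets of R², the unit ball and a half-space:

a  = [1; 1];
P1 = @(y) y / max(1, norm(y));              % projection onto the unit ball
P2 = @(y) y - max(0, (a'*y)/(a'*a))*a;      % projection onto {x : <a,x> <= 0}
x0 = [2; 0.5]; x = x0;
I1 = zeros(2,1); I2 = zeros(2,1);
for n = 1:500
    y = x - I1;  x1 = P1(y);  I1 = x1 - y;  % project onto C1, update its increment
    y = x1 - I2; x  = P2(y);  I2 = x - y;   % project onto C2, update its increment
end
disp(x)      % approximates the point of C1 ∩ C2 closest to x0

Replacing the increment bookkeeping with plain alternation (MAP) would still land in C1 ∩ C2, but not, in general, at the nearest point.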

2.3 Typical applications

2.3.1 Solving Constrained L-S Matrix Problems

The task is to solve, using Dykstra's algorithm, the following problem:

min ‖X − A‖²_F
s.t. X^T = X,  L ≤ X ≤ U,  λ_min(X) ≥ ε > 0,  X ∈ P,

where ‖M‖_F = √(tr(MM†)) is the Frobenius norm, A, L, U ∈ R^{n×n}, and A ≤ B means A_{ij} ≤ B_{ij} for 1 ≤ i, j ≤ n.

The constraints define sets whose intersection identifies a feasibility problem. Those sets are:

B = {X ∈ R^{n×n} : L ≤ X ≤ U};
pd = {X ∈ R^{n×n} : X^T = X, λ_min(X) ≥ ε > 0};
P = {X ∈ R^{n×n} : X = ∑_{k=1}^{m} α_k G_k},

with 1 ≤ m ≤ n(n+1)/2.

Property: in the definition of P, the G₁, ..., G_m are given n × n non-zero symmetric matrices whose entries are either 0 or 1, with the following property: for every entry position (s, t), 1 ≤ s, t ≤ n, there exists one and only one k (1 ≤ k ≤ m) such that (G_k)_{st} = 1.

Now the problem can be stated as:

min{‖X − A‖²_F : X ∈ B ∩ pd ∩ P}.

Let us see how the projections onto the single sets can be found.

Theorem 2.5 If A ∈ R^{n×n}, then the unique solution to min_{X∈B} ‖X − A‖_F is given by P_B(A), defined entrywise as:

[P_B(A)]_{ij} = A_{ij} if L_{ij} ≤ A_{ij} ≤ U_{ij};  U_{ij} if A_{ij} > U_{ij};  L_{ij} if A_{ij} < L_{ij}.

Theorem 2.6 If A ∈ R^{n×n}, then the unique solution to min_{X∈P} ‖X − A‖_F is given by P_P(A) = ∑_k ᾱ_k G_k, where

ᾱ_k = (∑_{i,j=1}^{n} A_{ij} [G_k]_{ij}) / (∑_{i,j=1}^{n} [G_k]_{ij}),  for 1 ≤ k ≤ m.

Theorem 2.7 Define B = (A + A^T)/2; then the unique solution to min_{X∈pd} ‖X − A‖_F is given by P_pd(A) = Z diag(d_i) Z^T, where

d_i = λ_i(B) if λ_i(B) ≥ ε,  d_i = ε if λ_i(B) < ε,

and Z is such that B = Z∆Z^T is a spectral decomposition.
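The three projections have direct implementations; the following MATLAB sketches are written from the statements above (assumed implementations, not the thesis code):

function X = PB(A, L, U)      % Theorem 2.5: entrywise clipping onto the box
    X = min(max(A, L), U);
end

function X = Ppd(A, epsilon)  % Theorem 2.7: symmetrize, then floor the eigenvalues
    B = (A + A')/2;
    [Z, Lam] = eig(B);
    X = Z*diag(max(diag(Lam), epsilon))*Z';
end

function X = PP(A, G)         % Theorem 2.6: G is n-by-n-by-m holding the 0/1 patterns
    X = zeros(size(A));
    for k = 1:size(G,3)
        alpha = sum(sum(A.*G(:,:,k))) / sum(sum(G(:,:,k)));
        X = X + alpha*G(:,:,k);
    end
end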

We can now apply Dykstra's algorithm, which, in this particular case, becomes:

Set A₀ = A and I_pd^0 = I_B^0 = 0. For i = 0, 1, ...:

Ã_i = P_P(A_i) − I_pd^i;
I_pd^{i+1} = P_pd(Ã_i) − Ã_i;
Â_i = P_pd(Ã_i) − I_B^i;
I_B^{i+1} = P_B(Â_i) − Â_i;
A_{i+1} = P_B(Â_i).

(No increment is needed for P, since it is a subspace.)

Theorem 2.8 If the closed convex set B ∩ pd ∩ P is not empty, then for any A ∈ R^{n×n} the sequences {P_P(A_i)}, {P_pd(A_i)} and {P_B(A_i)} generated by the previous algorithm converge to the unique solution of the stated problem.

2.3.2 MMUP: Matrix Model Updating Problem

Consider the following finite element model of a vibrating structure:

M ẍ(t) + D ẋ(t) + K x(t) = 0.

M, D, K are n × n matrices denoting the mass, damping and stiffness of the structure; M is symmetric and positive definite, D and K are symmetric.

For several reasons (modeling errors, etc.) the finite element data do not agree with the measured data, and the required structure of the matrices is lost. It is then required to update an analytic finite element model so that the updated model reproduces the measured data while preserving the structure of the matrices. This problem is called the MMUP (Matrix Model Updating Problem).

By FEM modal analysis, the solutions are of the form x(t) = v e^{λt}, where λ and v solve the quadratic eigenvalue problem (QEP):

(λ²M + λD + K)v = 0.

P(λ) = λ²M + λD + K is called the quadratic pencil, and the eigenvalues are given by the roots of det(P(λ)) = 0. Eigenvalues and eigenvectors describe the dynamics of the system, linking the natural frequencies and mode shapes of the structure.

The solutions lead to the following inverse eigenvalue problem for P(λ). Given:

• real n × n matrices M, K, D (M = M^T > 0, D = D^T, K = K^T), with Λ(P) = {λ₁, ..., λ_{2n}} and eigenvectors {x₁, ..., x_{2n}};
• a set of p self-conjugate numbers {µ₁, ..., µ_p} and p vectors {y₁, ..., y_p}, with p < 2n.

Find: symmetric K̃, D̃ ∈ R^{n×n} such that Λ(P̃(λ) = λ²M + λD̃ + K̃) = {µ₁, ..., µ_p, λ_{p+1}, ..., λ_{2n}}, with eigenvectors {y₁, ..., y_p, x_{p+1}, ..., x_{2n}}.

The problem can be reformulated as follows:

min ‖K − K̃‖²_F + ‖D − D̃‖²_F

such that:

K̃ = K̃^T;  D̃ = D̃^T;  M(Λ₁*)² Y₁ + D̃(Λ₁*) Y₁ + K̃ Y₁ = 0,

where Λ₁* = diag(µ₁, ..., µ_p) and Y₁ = [y₁, ..., y_p] collect the desired data.

In order to simplify the problem, let us define the following matrices:

A = M(Λ₁*)² Y₁;  B = (Λ₁*) Y₁;  C = Y₁;
X = [K 0; 0 D];  X̃ = [K̃ 0; 0 D̃].

The problem then becomes:

min ‖X − X̃‖²_F  s.t.  X̃ = X̃^T,  A + X̃₂₂ B + X̃₁₁ C = 0.

Now, defining Î = [I_{n×n}; I_{n×n}] and W = [C; B], it holds:

A + Î^T X̃ W = A + K̃C + D̃B = A + X̃₁₁ C + X̃₂₂ B.

So the constraints become:

X̃ = X̃^T,   (2.3.1)
A + Î^T X̃ W = 0.   (2.3.2)

We will project onto the subspace S of symmetric matrices, defined by constraint (2.3.1), whose projection is given by

P_S(X) = (X + X^T)/2,

and onto the linear variety V = {X ∈ R^{2n×2n} : A + Z^T X W = 0} (here Z = Î), whose projection is given by the following result.

Theorem 2.9 If X ∈ R^{2n×2n} is any given matrix, then the projection onto the linear variety V is given by P_V(X) = X + ZΣW^T, where Σ satisfies:

W^T W Σ^T = −(1/2)(A^T + W^T X^T Z).

The solution can now be found using MAP on S and V.

2.3.3 Projection Methods in Quantum Information Science

A natural problem in quantum information science is to construct, if it exists, a quantum operation sending a given set of quantum states {ρ₁, ..., ρ_k} to another set of quantum states {ρ̄₁, ..., ρ̄_k}. Quantum states are mathematically represented as density matrices (positive semi-definite Hermitian matrices with unit trace), while quantum operations are represented by trace-preserving, completely positive maps (CPTP maps) T mapping n × n density matrices to m × m density matrices, having the form:

T(X) = ∑_{j=1}^{r} F_j X F_j†,

where ∑_{j=1}^{r} F_j† F_j = I_n. More details on the quantum formalism will be given in Chapter 3.

Given density matrices A₁, ..., A_k and B₁, ..., B_k, our task is to find a CPTP map satisfying T(A_i) = B_i. If we denote by E₁₁, E₁₂, ..., E_nn the standard basis of matrix units, T is a CPTP map if and only if the Choi matrix C(T), with blocks P_{ij} = T(E_{ij}),

is positive semi-definite and tr(P_{ij}) = δ_{ij}.

Our problem is equivalent to the positive semi-definite feasibility problem for P = (P_{ij}):

∑_{ij} (A_l)_{ij} P_{ij} = B_l,  l = 1, ..., k;
tr(P_{ij}) = δ_{ij},  1 ≤ i ≤ j ≤ n;
P ∈ H₊^{nm}.

It is easy to see that the first condition holds: we can write A_l = ∑_{i,j} (A_l)_{ij} E_{ij}; then, by linearity of T, T(A_l) = B_l is equivalent to B_l = T(A_l) = T(∑_{i,j} (A_l)_{ij} E_{ij}) = ∑_{i,j} (A_l)_{ij} T(E_{ij}) = ∑_{i,j} (A_l)_{ij} P_{ij}. Let us define:

L_A(P) = (∑_{ij} (A_l)_{ij} P_{ij})_l;
L_T(P) = (tr(P_{ij}))_{i,j};
L(P) = (L_A(P), L_T(P));
B = [B₁ ... B_k];
∆ = (δ_{ij})_{i,j}.

We want to find a matrix P in the intersection of H₊^{nm} with the affine subspace A = {P : L(P) = (B, ∆)}.

If P = U† diag(λ₁, ..., λ_{mn}) U, then the projection onto H₊^{nm} is given by P_{H₊^{nm}}(P) = U† diag(λ₁⁺, ..., λ_{mn}⁺) U, where r⁺ = max{0, r}.

The projection onto A is given by P_A(P) = P + L†R, where L† is the Moore–Penrose generalized inverse and R = (B, ∆) − L(P) is the residual. Using MAP we can then find the solution to our problem. More details can be found in [7].

2.3.4 Fixed-Order Control Design for LMI Control Problems using MAP

Consider a linear, time-invariant, continuous-time dynamic system with the following state-space representation:

ẋ = Ax + B₁w + B₂u
z = C₁x + D₁₁w + D₁₂u
y = C₂x + D₂₁w

where x(t) ∈ R^{n_p} is the state, u(t) ∈ R^{n_u} is the control, w(t) ∈ R^{n_w} is an external input (e.g. noise), z(t) ∈ R^{n_z} is the regulated output and y(t) ∈ R^{n_y} is the measured output.

We seek a linear, time-invariant, continuous-time controller of order n_c with state-space representation:

ẋ_c = A_c x_c + B_c y
u = C_c x_c + D_c y

We will consider w(t) = 0: the noise-free stabilization problem.

Theorem 2.10 The following statements are equivalent:

a) there exists a stabilizing controller of order n_c;

b) there exist matrices X > 0, Y > 0 such that:

(B₂)⊥ [AX + XA^T] (B₂)⊥T < 0;
(C₂^T)⊥ [YA + A^T Y] (C₂^T)⊥T < 0;
[X I; I Y] ≥ 0;
rank [X I; I Y] ≤ n_p + n_c.

Theorem 2.11 The following statements are equivalent:

a) there exists an H∞ sub-optimal controller of order n_c;

b) there exist matrices X > 0, Y > 0 such that:

[B₂; D₁₂]⊥ [AX + XA^T + B₁B₁^T, XC₁^T + B₁D₁₁^T; C₁X + D₁₁B₁^T, D₁₁D₁₁^T − I] [B₂; D₁₂]⊥T < 0;
[C₂^T; D₂₁^T]⊥ [YA + A^T Y + C₁^T C₁, YB₁ + C₁^T D₁₁; B₁^T Y + D₁₁^T C₁, D₁₁^T D₁₁ − I] [C₂^T; D₂₁^T]⊥T < 0;
[X I; I Y] ≥ 0;
rank [X I; I Y] ≤ n_p + n_c.

For n_c = n_p the conditions of Theorems 2.10 and 2.11 become convex: the problem is then a convex feasibility problem. Now we need to find a projection formulation for it.

Let S_n be the set of real symmetric n × n matrices, equipped with the Frobenius norm and the inner product ⟨x, y⟩ = tr(xy).

Consider the set L = {X ∈ S_n : EXF + F^T X E^T + Q < 0}, where E, F, Q (Q symmetric) are given matrices of compatible dimensions. L is a convex subset of S_n, and every LMI constraint seen in the previous two theorems can be written as in L (an example can be seen in [13]).

We will consider the closed ε-approximation of L:

L_ε = {X ∈ S_n : EXF + F^T X E^T + Q ≤ −εI},  with ε > 0.

Proposition 2.1 Let us define the following sets in S_{2n}:

J := {W ∈ S_{2n} : [E F^T] W [E F^T]^T ≤ −Q_ε},
T := {W ∈ S_{2n} : W = [W₁₁ W₁₂; W₁₂^T W₂₂], W₁₁ = W₂₂ = 0, W₁₂ ∈ S_n},

where Q_ε = Q + εI. Then the following statements are equivalent:

a) X ∈ L_ε;
b) X = W₁₂ for some W ∈ J ∩ T.

Proposition 2.2 Let W ∈ S_{2n}. Consider the SVD [E F^T] = U [Σ 0] V^T and define

W̄ := V^T W V = [W̄₁₁ W̄₁₂; W̄₁₂^T W̄₂₂],  with W̄₁₁ ∈ S_n.

Consider the eigenvalue–eigenvector decomposition:

W̄₁₁ + Σ⁻¹ U^T Q_ε U Σ⁻¹ = LΛL^T.

The projection W* = P_J(W) of W onto J is

W* = V [W̄₁₁* W̄₁₂; W̄₁₂^T W̄₂₂] V^T,

where W̄₁₁* = LΛ⁻L^T − Σ⁻¹ U^T Q_ε U Σ⁻¹ and Λ⁻ is the diagonal matrix obtained by replacing the positive eigenvalues of Λ by 0.

Proposition 2.3 Let W ∈ S_{2n}. The orthogonal projection W* = P_T(W) of W onto T is

W* = [0 X*; X* 0],  where X* = (1/2)(W₁₂ + W₁₂^T).

In addition to the previous constraints, we need to derive the expression of the orthogonal projections onto the positivity and rank constraint sets. In order to achieve that, we define the following sets:

Definition 2.4 Let us define:

D := {Z ∈ S_{2n} : Z = [X 0; 0 Y], X, Y ∈ S_n};
P := {Z ∈ S_{2n} : Z ≥ −J};
R := {Z ∈ S_{2n} : rank(Z + J) ≤ k},  n ≤ k ≤ 2n;
J := [0 I_n; I_n 0] ∈ S_{2n}.

We next provide the expression of the orthogonal projection onto each of the previous sets.

Proposition 2.4 Let Z = [Z₁₁ Z₁₂; Z₁₂^T Z₂₂] ∈ S_{2n}. The projection Z* = P_D(Z) of Z onto D is

Z* = [Z₁₁ 0; 0 Z₂₂].

Proposition 2.5 Let Z ∈ S_{2n} and Z + J = LΛL^T. Then Z* = P_P(Z), the projection of Z onto P, is given by

Z* = LΛ⁺L^T − J,

where Λ⁺ is obtained from Λ by replacing the negative eigenvalues by 0.

Proposition 2.6 Let Z ∈ S_{2n} and Z + J = UΣV^T. The projection Z* = P_R(Z) of Z onto R is given by:

Z* = UΣ_k V^T − J,

where Σ_k is the diagonal matrix obtained by replacing the 2n − k smallest singular values of Z + J by zero.
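For concreteness, the last two projections can be sketched in MATLAB as follows (an assumed implementation written from Propositions 2.5 and 2.6, not thesis code):

n = 3; J = [zeros(n) eye(n); eye(n) zeros(n)];
Z = randn(2*n); Z = (Z + Z')/2;        % an arbitrary symmetric test matrix
[L, Lam] = eig(Z + J);
Zp = L*max(Lam, 0)*L' - J;             % P_P(Z): discard negative eigenvalues
k = 4;
[U, S, V] = svd(Z + J);
s = diag(S); s(k+1:end) = 0;           % zero the 2n-k smallest singular values
Zr = U*diag(s)*V' - J;                 % P_R(Z)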

Chapter 3

Iterated Quantum Maps

3.1 Statistical Description of Finite-Dimensional Quantum Systems

To every quantum physical system Q is associated a complex Hilbert space H, whose dimension depends on the observable quantities we want to describe. If the system is finite-dimensional, namely the quantities of interest admit a finite set of outcomes, the Hilbert space is isomorphic to C^N. H is endowed with the inner product:

⟨x, y⟩ = ∑_j x_j* y_j = x†y,

where the x_j are the components of x ∈ C^N.

Postulate 3.1 A physical quantity relative to the system of interest that can (in principle) be measured is called an observable.

In quantum mechanics any observable is associated to a Hermitian operator A ∈ h(H), where h(H) denotes the set of all Hermitian operators on H. By the spectral theorem, A can be written as A = ∑_j a_j Π_j, where the a_j are the eigenvalues of A and the Π_j the corresponding orthogonal projectors (Π_jΠ_k = δ_{jk}Π_j, ∑_j Π_j = I). The eigenvalues represent the possible outcomes of A, and the projectors the quantum events.

Postulate 3.2 A state of maximal information for the system is associated to a state vector |ψ⟩, which is a norm-1 vector in H.

It is very difficult to know the state of the system exactly; more often there is some uncertainty. Let us suppose that the system is in one of the states {|ψ_j⟩}, and let p_j be the corresponding probability of being in that state.

Definition 3.1 The density operator for the system is defined by

ρ = ∑_j p_j |ψ_j⟩⟨ψ_j|,

where p_j ≥ 0 and ∑_j p_j = 1. The weights p_j play, in quantum mechanics, the role of a probability distribution.

Hereafter we will consider only density operators that correspond to complex N × N matrices such that

ρ = ρ†,  tr(ρ) = 1,  tr(ρ²) ≤ 1.

If p₁ = 1, then ρ = |ψ₁⟩⟨ψ₁|. In this case ρ is called a pure state and carries the same information as the state vector |ψ⟩ introduced before. In case of two or more p_j > 0, ρ is called a mixed state and cannot be described by a single state vector.

The probability P_{a_j} of observing the value a_j as an outcome of the observable A can be calculated as:

P_{a_j} = tr(ρΠ_j),

while the conditional density operator after the measurement of a_j is:

ρ_{A=a_j} = Π_j ρ Π_j / tr(Π_j ρ Π_j).

If we want to compute the expectation of A assuming that the state is ρ:

E_ρ(A) = tr(Aρ) = ⟨A, ρ⟩_{H.S.},

where ⟨·, ·⟩_{H.S.} is an inner product on the space of operators on H, the so-called Hilbert–Schmidt inner product.

Systems composed of different subsystems are described on tensor products of the subsystems' spaces, i.e. H = H_A ⊗ H_B. The tensor product is a way to assemble vector spaces.

Definition 3.2 Let V and W be Hilbert spaces of dimensions n and m respectively. Then the tensor product V ⊗ W is a Hilbert space of dimension nm whose elements are linear combinations of vectors |v⟩ ⊗ |w⟩, with |v⟩ ∈ V and |w⟩ ∈ W.

A state ρ on H_A ⊗ H_B that can be decomposed as ρ = ρ_A ⊗ ρ_B, where ρ_A and ρ_B are states of the subsystems, is called uncorrelated. A state that can be decomposed as ρ = ∑_k λ_k ρ_A^k ⊗ ρ_B^k is called classically correlated. A state that cannot be decomposed as before is called entangled.

Proposition 3.1 (Schmidt decomposition) For every |ψ⟩ ∈ H = H_A ⊗ H_B there exist orthonormal bases {e_j ∈ H_A} and {f_j ∈ H_B} such that

|ψ⟩ = ∑_{j=1}^{d} √λ_j |e_j⟩ ⊗ |f_j⟩,

with λ_j ≥ 0, ∑_j λ_j = ‖|ψ⟩‖² and d = min{dim(H_A), dim(H_B)}.

Proposition 3.2 (Operator Schmidt decomposition) For every operator ρ on H = H_A ⊗ H_B there exist orthonormal operator bases {A_j} on H_A and {B_j} on H_B such that

ρ = ∑_{j=1}^{d} √λ_j A_j ⊗ B_j.

If a composite system is described by the density operator ρ on H_A ⊗ H_B, the reduced density operator for subsystem A can be computed as:

ρ_A = tr_B(ρ),

where tr_B is the partial trace over the system B.

Definition 3.3 Let ρ_AB = ρ ⊗ σ be an operator on H = H_A ⊗ H_B; then the partial trace over B is defined as:

tr_B(ρ_AB) = tr_B(ρ ⊗ σ) = ρ tr(σ).

By linearity, for a general ρ = ∑_{ij} c_{ij} X_i ⊗ Y_j we define:

tr_B(ρ) = ∑_{ij} c_{ij} tr(Y_j) X_i.

More details about the tensor product and the partial trace can be found in Appendix A; more details about the statistical description of quantum systems can be found in [2],[19],[23].
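For a bipartite system the partial trace over the second factor can be computed directly from the block structure of the Kronecker product. The following MATLAB helper is a minimal sketch under that index convention (an assumed implementation; the thesis code in Appendix B uses a toolbox function TrX for the same purpose):

function rhoA = ptraceB(rho, dA, dB)
    % tr_B of a (dA*dB)x(dA*dB) matrix, with the kron(A,B) index ordering
    rhoA = zeros(dA);
    for i = 1:dA
        for j = 1:dA
            rows = (i-1)*dB + (1:dB);
            cols = (j-1)*dB + (1:dB);
            rhoA(i,j) = trace(rho(rows, cols));  % sum over the B index
        end
    end
end

For instance, ptraceB(kron(rhoA, rhoB), dA, dB) returns rhoA*trace(rhoB), matching Definition 3.3.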

3.2 Open System Dynamics

Quantum dynamics are typically studied in two different scenarios: closed systems and open systems. A system is called open if it has non-trivial interaction with the environment to which it belongs, while closed systems are isolated.

Postulate 3.3 The state vector |ψ⟩ of a closed quantum system obeys the Schrödinger equation:

ħ |ψ̇⟩ = −iH₀|ψ⟩,  |ψ(0)⟩ = |ψ₀⟩,

where H₀ is the Hamiltonian of the system which, as in classical mechanics, depends on the energy of the system, and ħ is the (reduced) Planck constant (we will set ħ = 1).

Quantum state vectors belong to the complex sphere S^{2N−1}, and the dynamics can be lifted to the Lie group

SU(N) = {U ∈ C^{N×N} : U†U = UU† = I, det(U) = 1}.

So the Schrödinger equation for the unitary propagator can be obtained:

U̇ = −iH₀U,  U(0) = I,

where U ∈ SU(N). The solution for the state is given by:

|ψ(t)⟩ = U(t)|ψ₀⟩,  with U(t) = e^{−iH₀t}.

Therefore for any pure state we obtain ρ(t) = |ψ(t)⟩⟨ψ(t)| = U(t)|ψ₀⟩⟨ψ₀|U†(t). This implies, by linearity, that for a general state ρ(t) we have, in terms of the unitary propagator:

ρ(t) = U(t)ρ(0)U†(t).

A real physical quantum system, in general, interacts with the environment to which it belongs. Let us consider a finite-dimensional quantum system S coupled to an environment E, chosen so that S and E together can be considered isolated, and let H_S and H_E be the system and environment Hilbert spaces, with dim(H_S) = N < ∞. The total Hamiltonian for the composite system is given by:

H_tot = H_S ⊗ I_E + I_S ⊗ H_E + H_SE,   (3.2.1)

where H_S, H_E, H_SE are the system, environment and interaction Hamiltonians, respectively. On the joint space H_S ⊗ H_E the dynamics is unitary by Postulate 3.3, since S ⊗ E is isolated by construction. Let us assume that the initial state is ρ_{SE,0} = ρ₀ ⊗ ρ_E. By the previous considerations we have:

ρ_{SE,t} = U_{SE,t} (ρ₀ ⊗ ρ_E) U_{SE,t}†,

where U_{SE,t} = e^{−iH_tot t}. In order to obtain the state of the system we take the partial trace, which gives:

ρ_{S,t} = T(ρ₀) = tr_E(U_{SE,t} (ρ₀ ⊗ ρ_E) U_{SE,t}†).   (3.2.2)

Definition 3.4 A map E(·) is a completely positive (CP) map if, for every auxiliary system R of arbitrary finite dimension, (I ⊗ E)(A) ≥ 0 for every operator A ≥ 0 on the combined system R ⊗ H, where I denotes the identity map on h(R).

Clearly CP implies positivity (it suffices to take R trivial).

Definition 3.5 A map E(·) is said to be trace preserving (TP) if tr(E(A)) = tr(A).

The map (3.2.2) is a CPTP map [2],[19],[21].

Note. CPTP maps are also called quantum channels or Kraus maps.

Properties:

• a CPTP map can be written in the Kraus form

E(ρ) = ∑_k M_k ρ M_k†,

where the operators M_k satisfy ∑_k M_k† M_k = I;

• any CPTP map is non-expansive: if ρ and σ are two states, then

‖E(ρ) − E(σ)‖ ≤ ‖ρ − σ‖,

where ‖A‖ = tr(√(A†A)) is the trace norm.
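As a concrete check of the Kraus form (a sketch, not thesis code), take the standard amplitude-damping channel on a qubit:

g = 0.3;                               % damping parameter
M1 = [1 0; 0 sqrt(1-g)];
M2 = [0 sqrt(g); 0 0];                 % Kraus operators of amplitude damping
disp(M1'*M1 + M2'*M2)                  % = I: the trace-preservation condition
rho = [0.2 0.3; 0.3 0.8];              % a qubit state
Erho = M1*rho*M1' + M2*rho*M2';        % E(rho) = sum_k M_k rho M_k'
disp(trace(Erho))                      % = 1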

3.3 CPTP Projections

Let us consider a CPTP map E such that E² = E, which we shall call a CPTP projection. Such a map sends any operator X onto the set of its fixed points Fix(E) = {X : E(X) = X}. It has been proved [4] that the fixed points of a CPTP map form an algebra with respect to a weighted product, what has been called a distorted algebra [10]. More precisely, for any CPTP map there exists a decomposition of the Hilbert space H such that:

H = (⊕_i H_{Si} ⊗ H_{Fi}) ⊕ H_R,

and correspondingly:

Fix(E) = (⊕_i B(H_{Si}) ⊗ τ_i) ⊕ 0_R,

where 0_R is the null operator on H_R.

We will consider, for the sake of simplicity, only maps that admit at least one full-rank invariant state, so that the decomposition is given by:

H = ⊕_i H_{Si} ⊗ H_{Fi},   (3.3.1)

and

Fix(E) = ⊕_i B(H_{Si}) ⊗ τ_i.   (3.3.2)

A CPTP projection is then a CPTP map E that projects a state ρ onto the set Fix(E). We next provide its structure with respect to Fix(E) as in (3.3.2). In general:

Proposition 3.3 If Ê² = Ê and there exists a full-rank ρ̄ such that Ê(ρ̄) = ρ̄, then

Ê(ρ) = ⊕_i tr_{Fi}(Π_{SFi} ρ Π_{SFi}) ⊗ τ_i,   (3.3.3)

where Π_{SFi} is the projection onto H_{Si} ⊗ H_{Fi} as in (3.3.1).

It is easy to see that Ê(ρ) ∈ Fix(Ê). Let us call A = ⊕_i A_i = ⊕_i B(H_{Si}) ⊗ τ_i. The orthogonal projection of ρ onto A is given, by Definition 1.9, by:

ρ_A = ∑_{l,j} ⟨σ_l ⊗ τ_j, ρ⟩_{H.S.} σ_l ⊗ τ_j,

where {σ_l ⊗ τ_j} is an orthonormal basis of A_i. Decomposing ρ = ∑_k A_k ⊗ B_k, the resulting components are of the form ∑_k A_k tr(τ_j B_k).

Note: ∑_k A_k tr(τ_j B_k) = tr_{Fi}(ρ) ⟺ τ_j = I, so, in general, Ê(ρ) of (3.3.3) is not an orthogonal projection with respect to the Hilbert–Schmidt inner product.

In order to obtain an orthogonal projection we can define a new inner product.

Definition 3.6 Let ξ be a positive definite operator. We define the modified ξ-inner product as:

⟨X, Y⟩_ξ = tr(XξY),   (3.3.4)

and the modified symmetric ξ-inner product as:

⟨X, Y⟩_{ξ,s} = tr(X ξ^{1/2} Y ξ^{1/2}).   (3.3.5)

Proposition 3.4 (3.3.4) is a valid inner product.

Proof. We have to show that ⟨·, ·⟩_ξ satisfies the properties of Definition 1.4:

1. ⟨X, X⟩_ξ = tr(XξX) = tr(ξX²) ≥ 0, with equality if and only if X = 0;
2. ⟨X, Y⟩_ξ = tr(XξY) = tr(Y†ξ†X†)* = tr(YξX)* = ⟨Y, X⟩*_ξ;
3. ⟨α₁X + α₂Y, Z⟩_ξ = tr((α₁X + α₂Y)ξZ) = α₁ tr(XξZ) + α₂ tr(YξZ) = α₁⟨X, Z⟩_ξ + α₂⟨Y, Z⟩_ξ.

Similarly for (3.3.5). □
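A quick numerical check of the proposition (a sketch, not thesis code), on Hermitian matrices with a random positive definite ξ:

n = 3; A = randn(n) + 1i*randn(n);
xi = A*A' + eye(n);                    % a positive definite xi
X = randn(n); X = (X + X')/2;
Y = randn(n); Y = (Y + Y')/2;
ipxi = @(U,V) trace(U*xi*V);
disp(ipxi(X,X))                        % real and nonnegative (up to rounding)
disp(abs(ipxi(X,Y) - conj(ipxi(Y,X)))) % ~ 0: conjugate symmetry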

Note. Changing the inner product on the Hilbert space (⟨·, ·⟩_{H.S.} → ⟨·, ·⟩_{ξ,s}) is equivalent to a change of measure in a classical probability space. In fact it holds:

E_ρ(X) = ⟨ρ, X⟩_{H.S.},  E_ρ̃(X) = ⟨ρ, X⟩_{ξ,s},

where ρ̃ = ξ^{−1/2} ρ ξ^{−1/2} is the "new" unnormalized state.

In order to show that E is an orthogonal projection with respect to (3.3.4), we will need the following lemma. By W = ⊕_i W_i we denote an operator that acts as W_i on H_i, for a decomposition H = ⊕_i H_i.

Lemma 3.1 Let W = ⊕_i W_i and let Y be an operator. Then tr(WY) = ∑_i tr(W_i Y_i), where Y_i = Π_i Y Π_i.

Proof. Let Π_i be the projector onto H_i. Recalling that ∑_i Π_i = I and Π_i = Π_i², we obtain:

tr(WY) = tr(∑_j W_j Y) = tr((∑_i Π_i)(∑_j W_j) Y) = ∑_i tr(Π_i ∑_j W_j Y)
       = ∑_i tr(Π_i W_i Y) = ∑_i tr(Π_i W_i Π_i Y) = ∑_i tr(W_i Π_i Y Π_i) = ∑_i tr(W_i Y_i). □

Proposition 3.5 Let ξ = ρ^{−1}, where ρ is a full-rank fixed state on H = ⊕_i H_{Si} ⊗ H_{Fi}. Then (3.3.3) is an orthogonal projection with respect to the modified inner product (3.3.4).

Proof. We already know that E is linear and idempotent. In order to show that E is an orthogonal projection we need to show that it is self-adjoint (Definition 1.9). Let us decompose X, Y and ρ as:

X_i = ∑_k A_{k,i} ⊗ B_{k,i},  Y_i = ∑_l C_{l,i} ⊗ D_{l,i},  ρ = ⊕_i ρ_i ⊗ τ_i,

and, by Lemma 3.1:

⟨E(X), Y⟩_ξ = tr(E(X) ρ^{−1} Y)
= ∑_i tr([tr_{Fi}(X_i) ⊗ τ_i][ρ_i^{−1} ⊗ τ_i^{−1}] Y_i)
= ∑_{i,k} tr([A_{k,i} tr(B_{k,i}) ⊗ τ_i][ρ_i^{−1} ⊗ τ_i^{−1}] Y_i)
= ∑_{i,k} tr([A_{k,i} tr(B_{k,i}) ρ_i^{−1} ⊗ I] Y_i)
= ∑_{i,k,l} tr([A_{k,i} tr(B_{k,i}) ρ_i^{−1} ⊗ I][C_{l,i} ⊗ D_{l,i}])
= ∑_{i,k,l} tr(A_{k,i} tr(B_{k,i}) ρ_i^{−1} C_{l,i} ⊗ D_{l,i})
= ∑_{i,k,l} tr(B_{k,i}) tr(A_{k,i} ρ_i^{−1} C_{l,i}) tr(D_{l,i}).

By a similar calculation:

⟨X, E(Y)⟩_ξ = tr(X ρ^{−1} E(Y))
= ∑_i tr(X_i [ρ_i^{−1} ⊗ τ_i^{−1}][tr_{Fi}(Y_i) ⊗ τ_i])
= ∑_{i,l} tr(X_i [ρ_i^{−1} C_{l,i} tr(D_{l,i}) ⊗ I])
= ∑_{i,k,l} tr([A_{k,i} ⊗ B_{k,i}][ρ_i^{−1} C_{l,i} tr(D_{l,i}) ⊗ I])
= ∑_{i,k,l} tr(B_{k,i}) tr(A_{k,i} ρ_i^{−1} C_{l,i}) tr(D_{l,i}).

So ⟨E(X), Y⟩_ξ = ⟨X, E(Y)⟩_ξ, which proves self-adjointness. □

The same properties still hold if we use the modified symmetric ξ-inner product (3.3.5).

3.4 Iterated CPTP Map Theorem

Thanks to the previous results we can now apply the alternating projection theorem to iterated CPTP maps.

Theorem 3.1 Let H be a Hilbert space and Ê₁, ..., Ê_r CPTP maps that project onto Fix(Ê_i), i = 1, ..., r, and let Fix(Ê) = ∩_{i=1}^{r} Fix(Ê_i) ≠ ∅. If there exists a full-rank state ρ₀ ∈ Fix(Ê), then for every X:

lim_{n→∞} (Ê_r ⋯ Ê₁)ⁿ X = Ê X,

where Ê is the projection onto Fix(Ê) = ∩_{i=1}^{r} Fix(Ê_i).

Proof. Let us consider ξ = ρ₀^{−1}; then ρ₀ ∈ Fix(Ê) implies that the maps Ê_i are all orthogonal projections with respect to the modified inner product (Propositions 3.3, 3.5). By Halperin's classical theorem (Theorem 2.2), cyclic orthogonal projections onto subspaces converge to the projection onto their intersection; since each Ê_i is the orthogonal projection onto Fix(Ê_i), the cyclic projections converge to the projection onto Fix(Ê). □

3.5 Quantum Application

In this section we will focus on open quantum systems composed of a finite number n of distinguishable subsystems, defined on

H = ⊗_{a=1}^{n} H_a,  dim(H_a) = d_a,  dim(H) = D.

We are interested in studying dynamics subject to quasi-locality constraints, which specify a list of neighborhoods: groups of subsystems on which operators can act simultaneously. Our goal is to understand whether a given full-rank state ρ ∈ D(H) can be prepared or, equivalently, whether there exists a dynamics for which ρ is globally asymptotically stable (GAS).

We will denote by N_j the list of indices of the subsystems that compose the j-th neighborhood, and by N̄_j its complement. By definition, a neighborhood induces a bipartite structure on H:

H = H_{N_j} ⊗ H_{N̄_j}.

By this structure we can define the reduced neighborhood state ρ_j:

ρ_j = tr_{N̄_j}(ρ).

Definition 3.7 An operator M_j is a neighborhood operator if its action is non-trivial only on N_j. It can be decomposed as:

M_j = M_{N_j} ⊗ I_{N̄_j},

where I_{N̄_j} is the identity operator on the complement of N_j.

Definition 3.8 A CPTP map E(·) is said to be quasi-local (QL) if it can be written as:

E(ρ) = ∑_k M_k ρ M_k†,

where each M_k is a neighborhood operator for the same N_j. These maps are called neighborhood maps.

By this definition E(·) can be decomposed as:

E(·) = E_{N_k}(·) ⊗ Id_{N̄_k}(·).

Now we can describe our dynamics of interest:

i) there must exist a sequence of CPTP maps {E_t} such that E_p ∘ ... ∘ E₁(ρ) → ρ_d as p → ∞, where ρ_d is the state to be prepared and ρ is any state in H;
ii) for every t = 1, ..., p it must hold E_t(ρ_d) = ρ_d;
iii) for every t = 1, ..., p there exists a neighborhood N_{j(t)} such that E_t(·) = E_{N_{j(t)}} ⊗ Id_{N̄_{j(t)}}.

A dynamics that satisfies conditions i), ii), iii) is a quasi-local stabilizing dynamics.

Let us recall that, given X ∈ B(H_A ⊗ H_B), we can write its operator Schmidt decomposition (Proposition 3.2) as:

X = ∑_j A_j ⊗ B_j.

Definition 3.9 Given X ∈ B(H_A ⊗ H_B), we define the Schmidt span as:

Σ_A(X) = span({A_j}).

The Schmidt span is important because requiring that a neighborhood map leaves an operator invariant also imposes the invariance of its Schmidt span.

Lemma 3.2 Given ρ ∈ D(H = H_{N_j} ⊗ H_{N̄_j}) and E = E_{N_j} ⊗ I_{N̄_j} ∈ B(H), it holds:

span(ρ) ⊆ Fix(E_{N_j} ⊗ I_{N̄_j}) ⇒ Σ_{N_j}(ρ) ⊗ B(H_{N̄_j}) ⊆ Fix(E_{N_j} ⊗ I_{N̄_j}).

Given a full-rank ρ ∈ Fix(E), the following properties hold:

I. Fix(E) is a ∗-algebra w.r.t. the modified product A ×_{ρ⁻¹} B = Aρ⁻¹B;
II. Fix(E) is invariant for the modular map ∆_ρ(X) = ρ^{1/2} X ρ^{−1/2}.

In general Σ_{N_k} ⊆ Fix(E_{N_k}); we need to enlarge Σ_{N_k} in order to satisfy properties I and II.

Definition 3.10 Let ρ ∈ D(H) and W ∈ B(H). The minimal modular-invariant distorted algebra generated by W is the smallest ρ-distorted algebra generated by W which is invariant w.r.t. ∆_ρ(·). In our case W = Σ_{N_k}(ρ), and we will call F_{ρ_{N_k}} the minimal ∆_{ρ_{N_k}}-invariant distorted algebra w.r.t. the ρ_{N_k}-modified product.

Note: F_{ρ_{N_k}} can be constructed iteratively, starting from F₀ = alg_{ρ_{N_k}}(Σ_{N_k}(ρ)), with k-th step given by:

F_{k+1} = alg_{ρ_{N_k}}(∆_{ρ_{N_k}}(F_k)).

The iteration runs until F_{k+1} = F_k = F_{ρ_{N_k}}.

We next present a key result by Takesaki (in the finite-dimensional version given by Petz [21]).

Theorem 3.2 Let A be a †-closed subalgebra of B(H), and ρ a full-rank state. Then the following are equivalent:

(i) there exists a unital CP map E† such that Fix(E†) = A, (E†)² = E† and E(ρ) = ρ;
(ii) A is invariant w.r.t. ∆_ρ(·).

The previous theorem provides a characterization of the distorted algebras that contain a given full-rank fixed state and are fixed-point sets of some CPTP map.

Theorem 3.3 Let ρ be a full-rank state. A distorted algebra A_ρ admits a CPTP map E(·) such that Fix(E) = A_ρ if and only if it is invariant for ∆_ρ.

In our case, by the two previous theorems, there exists a CPTP map E_{N_k} such that Fix(E_{N_k}) = F_{ρ_{N_k}} and E_{N_k}² = E_{N_k}. For E_k = E_{N_k} ⊗ Id_{N̄_k} it holds that:

Fix(E_k) = F_{ρ_{N_k}} ⊗ B(H_{N̄_k}) = F_k.

Proposition 3.6 Let ρ_d be a full-rank state. There exists a quasi-local stabilizing dynamics satisfying i), ii), iii) if and only if

span(ρ_d) = ∩_k F_k.

Proof. (⇒) By contradiction, suppose there exists ρ₂ ∈ ∩_k F_k with ρ₂ ≠ ρ_d. This implies that ρ_d cannot be GAS, because there exists another state invariant for all the maps, i.e. not all the maps "drive" towards ρ_d.

(⇐) By Theorems 3.2 and 3.3 and what we have seen in Section 3.3, each E_k is a CPTP projection. By Theorem 3.1 we know that, for every ρ, (E_q ⋯ E₁)^k(ρ) converges, as k → ∞, to the projection onto ∩_k F_k. Now, by hypothesis, ∩_k F_k = span(ρ_d) and, ρ_d being the only state in its span, we conclude that ρ_d is GAS. □

3.5.1 Example

A Gibbs state is defined as:

ρ_β = e^{−βH} / tr(e^{−βH}),  H = ∑_j H_j,  β ∈ R₊,

where each H_j is a neighborhood operator relative to N_j. ρ_β is called a commuting Gibbs state if [H_j, H_k] = 0 for all j, k.

Every 1D lattice can be associated with indexed subsystems (i.e. i = 1, 2, 3, ...). On those systems we define the nearest-neighbor (NN) neighborhood structure as the one where N_i = {i, i+1}, and the next-nearest-neighbor (NNN) structure as the one given by N_i = {i, i+1, i+2}.

We will solve our examples thanks to the following proposition [15].

Proposition 3.7 Gibbs states of 1D NN commuting Hamiltonians can be prepared with quasi-local stabilizing dynamics acting on the NNN neighborhood structure (see Figure 3.1).

We will solve our example following the proof of the previous proposition, which can be found in [15].

Let us see our first example, in the commuting case. Consider the following Hamiltonian:

H = ∑_i σ_z^{(i)} ⊗ σ_z^{(i+1)},

where σ_z is the Pauli matrix [1 0; 0 −1]. The Gibbs state is given (up to normalization) by:

ρ = e^{−βH} = ∏_i σ_{i,i+1},  with σ_{i,i+1} = e^{β_{i,i+1} σ_z^{(i)} ⊗ σ_z^{(i+1)}},

where the coefficient β_{i,i+1} absorbs the sign of the coupling. For the sake of simplicity, we will consider only the case of 4 subsystems. We need to find the minimal modular-invariant algebras for the neighborhoods {1, 2, 3} and {2, 3, 4}. First we need to find Σ₁₂₃(ρ), i.e. the span of the operators tr₄([I₁₂₃ ⊗ B]ρ), B ∈ B(H₄).

Figure 3.1: A visual example of a 1D lattice system with NN and NNN neighborhood structure.

Now, by definition, it holds:

Σ₃(σ₃₄) = span{tr₄([I₃ ⊗ B] σ₃₄) : B ∈ B(H₄)}
        = span{tr₄(diag(e^β, e^{−β}, e^{−β}, e^β) [B 0; 0 B])}
        = span{tr₄([B e^{βσ_z} 0; 0 B e^{−βσ_z}])}
        = span{[tr(B e^{βσ_z}) 0; 0 tr(B e^{−βσ_z})]}
        = span{I, σ_z}.

By that, we obtain Σ₁₂₃(ρ) = σ₁₂σ₂₃ [I₁₂ ⊗ span{I, σ_z}]. By symmetry, the same result can be obtained for Σ₂₃₄(ρ) = σ₂₃σ₃₄ [span{I, σ_z} ⊗ I₃₄]. Now, it is evident that Σ₁₂₃ and Σ₂₃₄ are invariant for the distorted and the modular product, so we can immediately write:

F₁₂₃ = σ₁₂σ₂₃ [I₁₂ ⊗ span{I, σ_z}] ⊗ B(H₄),
F₂₃₄ = B(H₁) ⊗ σ₂₃σ₃₄ [span{I, σ_z} ⊗ I₃₄].

We need to find F₁₂₃ ∩ F₂₃₄. First of all, notice that σ₁₂ = σ̂₁₂ ⊗ I₃₄ (and analogously for σ₃₄). Consider operators τ₁₂₃ ∈ F₁₂₃ and τ₂₃₄ ∈ F₂₃₄. They can be expressed as follows:

τ₁₂₃ = σ₂₃ ∑_i σ̂₁₂ ⊗ D₃ᵢ ⊗ W₄ᵢ,
τ₂₃₄ = σ₂₃ ∑_j V₁ⱼ ⊗ C₂ⱼ ⊗ σ̂₃₄.

To find the intersection we impose τ₁₂₃ = τ₂₃₄, which is possible (up to scalar factors) if and only if

∑_i D₃ᵢ ⊗ W₄ᵢ = σ̂₃₄  and  ∑_j V₁ⱼ ⊗ C₂ⱼ = σ̂₁₂.

So, finally, we get that F₁₂₃ ∩ F₂₃₄ = span{σ₁₂σ₂₃σ₃₄}.

As a second example, we consider a non-commuting Gibbs state. In this case the Hamiltonian is given by:

H = σ_x ⊗ σ_x ⊗ I₃₄ + I ⊗ σ_z ⊗ σ_z ⊗ I + I₁₂ ⊗ σ_x ⊗ σ_x,

where σ_x = [0 1; 1 0] and σ_z = [1 0; 0 −1] are Pauli matrices.

In this case manual calculations are not easy, so we used mathematical software (MATLAB) to determine whether the Gibbs state can be prepared with NNN operators. We implemented the following algorithm:

Algorithm 1 Pseudo-code for the non-commuting case:

1: find Σ₁₂₃(ρ);
2: F₁₂₃ ⇐ Σ₁₂₃;
3: while rank(F₁₂₃) is not stable do
4:   find alg(Σ₁₂₃), closing Σ₁₂₃ w.r.t. the distorted product;
5:   find ∆(alg(Σ₁₂₃)), closing alg(Σ₁₂₃) w.r.t. the modular product;
6:   F₁₂₃ ⇐ ∆(alg(Σ₁₂₃));
7: end while
8: F₁₂₃ ⇐ F₁₂₃ ⊗ B(H₄);
9: repeat for neighborhood {2, 3, 4};
10: calculate a basis for F = F₁₂₃ ∩ F₂₃₄;
11: if rank(F) == 1 then
12:   ρ can be prepared;
13: else
14:   ρ cannot be prepared;
15: end if

Chapter 4

Bregman's Theory

4.1 Bregman's Divergences and their Properties

In this chapter an extension of the Halperin (von Neumann) theorem will be shown, using Bregman divergences and projections.

Definition 4.1 Let {A_i}_{i∈I} be a family of closed convex sets in X, let R = ∩_{i∈I} A_i ≠ ∅, and let S ⊂ X be a convex set such that S ∩ R ≠ ∅. The function D(x, y) defined over S × S is a Bregman divergence if it satisfies the following conditions:

I. D(x, y) ≥ 0, and D(x, y) = 0 ⟺ x = y;
II. for every y ∈ S and i ∈ I there exists x ∈ A_i ∩ S such that x = arg min_{z ∈ A_i ∩ S} D(z, y); x is called the Bregman projection of y onto A_i and will be denoted by P_i(y);
III. for every index i ∈ I and y ∈ S the function G(z) = D(z, y) − D(z, P_i(y)) is convex over A_i ∩ S;
IV. the derivative D'_x(x, y) of D(x, y) exists when x = y, and D'_x(y, y) = 0 (i.e. lim_{t→0} D(y + tz, y)/t = 0 for all z ∈ X);
V. for every z ∈ R ∩ S and every real number L, the set T = {x ∈ S : D(z, x) ≤ L} is compact;
VI. if D(xⁿ, yⁿ) → 0, yⁿ → y* ∈ S̄ and the set of elements of {xⁿ} is compact, then xⁿ → y*.

The following proposition gives a useful tool to verify whether a function is a Bregman divergence.

Proposition 4.1 If f(x) is a strictly convex and differentiable function, then

D_f(x, y) = f(x) − f(y) − ⟨∇f(y), x − y⟩   (4.1.1)

satisfies conditions I-IV.

Proof.

I. It derives from the defining property of convex functions:

f(x) ≥ f(y) + ⟨∇f(y), x − y⟩,

from which we obtain D_f(x, y) ≥ 0; by strict convexity it is clear that D_f(x, y) = 0 ⟺ x = y.

II. This condition is satisfied because min_{x∈S} D(x, y) = 0 is attained, hence arg min_{x∈A_i∩S} D(x, y) exists for every closed convex set A_i.

III. It results that

G(z) = −f(y) + f(P_i(y)) + ⟨∇f(y), y⟩ − ⟨∇f(P_i(y)), P_i(y)⟩ − ⟨∇f(y) − ∇f(P_i(y)), z⟩
     = const − ⟨∇f(y) − ∇f(P_i(y)), z⟩,

which is affine, hence convex, in z.

IV. D'_x(y, y) = ∇f(y) − ∇f(y) = 0. □

Note. A "Bregman distance" as in (4.1.1) does not guarantee conditions V and VI: further assumptions are needed.

Here are two classical examples used by Bregman himself in his work.

Proposition 4.2 Let X be a Hilbert space, S = X and

D(x, y) = ⟨x − y, x − y⟩.

Then D is a Bregman divergence.

Proof. Let us verify that D(x, y) satisfies the conditions of Definition 4.1.

I. It is obvious by Definition 1.4;

II. in this case the Bregman projection is the classical orthogonal projection, so the condition is satisfied;

III. it holds:

G(z) = D(z, y) − D(z, P_i(y)) = ⟨z − y, z − y⟩ − ⟨z − P_i(y), z − P_i(y)⟩ = 2⟨z, P_i(y) − y⟩ + ⟨y, y⟩ − ⟨P_i(y), P_i(y)⟩,

which is affine in z, so the condition is satisfied;

IV. D'_x(y, y) = lim_{t→0} ⟨y + tz − y, y + tz − y⟩/t = lim_{t→0} t⟨z, z⟩ = 0;

V. T = {y ∈ S : ⟨x − y, x − y⟩ ≤ L} is not compact in general, but it is bounded. A weak topology can be introduced in order to achieve compactness (in Rⁿ and Cⁿ, for instance, closed bounded sets are compact, so this property is often satisfied);

VI. let ⟨xⁿ − yⁿ, xⁿ − yⁿ⟩ → 0, yⁿ → y*, and let {xⁿ} be compact. Let x^{n_k} → x*. Then for every u ∈ X:

|⟨u, x^{n_k} − y^{n_k}⟩| ≤ ‖u‖ ‖x^{n_k} − y^{n_k}‖ → 0,

so x* = y*, and therefore xⁿ → y*. □

Proposition 4.3 Let f(x) = ∑_{i=1}^{n} x_i ln x_i be the (negative) entropy function, where x is a probability vector. Then

D_f(x, y) = ∑_{i=1}^{n} x_i ln(x_i / y_i)

is a Bregman divergence (the Kullback–Leibler divergence). A proof can be found in [5].
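Both examples are instances of construction (4.1.1); the following MATLAB sketch (not thesis code) builds D_f from f and ∇f and recovers them:

bregman = @(f, gradf, x, y) f(x) - f(y) - gradf(y)'*(x - y);
f1 = @(x) x'*x;            g1 = @(y) 2*y;        % f = <x,x>: Proposition 4.2
f2 = @(x) sum(x.*log(x));  g2 = @(y) log(y) + 1; % entropy: Proposition 4.3
x = [0.2; 0.5; 0.3]; y = [0.4; 0.4; 0.2];        % probability vectors
disp(bregman(f1, g1, x, y) - norm(x - y)^2)      % ~ 0
disp(bregman(f2, g2, x, y) - sum(x.*log(x./y)))  % ~ 0 (the sum(y - x) term vanishes)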

4.2 Generalized Pythagorean Theorem

In this section we will give a generalization of the Pythagorean theorem for Bregman divergences.

Figure 4.1: A visual representation of the generalized Pythagorean theorem.

Lemma 4.1 Let z ∈ A_i ∩ S. Then for any y ∈ S the inequality

D(z, y) ≥ D(z, P_i(y)) + D(P_i(y), y)

is valid.

Proof. By condition III, for λ ∈ [0, 1]:

D(λz + (1 − λ)P_i(y), y) − D(λz + (1 − λ)P_i(y), P_i(y)) ≤ λ(D(z, y) − D(z, P_i(y))) + (1 − λ)D(P_i(y), y).

When λ > 0:

D(z, y) − D(z, P_i(y)) − D(P_i(y), y) ≥ [D(λz + (1 − λ)P_i(y), y) − D(P_i(y), y)]/λ − D(λz + (1 − λ)P_i(y), P_i(y))/λ.

Now λz + (1 − λ)P_i(y) ∈ A_i ∩ S, so the first term on the right side of the previous inequality is non-negative (condition II), while the second term tends to 0 when λ → 0 (condition IV). From this,

D(z, y) − D(z, P_i(y)) − D(P_i(y), y) ≥ 0. □

Figure 4.1 gives an intuitive view of the lemma: Bregman projections behave "like" classical projections, yielding a property similar to the classical Pythagorean theorem, an essential ingredient in the proof of convergence of the iterated method given in the next section.

4.3 Iterated Convergence Results

Let us consider the following iterative process:

• choose x⁰ ∈ S;
• if xⁿ ∈ S is known, select an index (in some way) i_n(xⁿ) ∈ I and let x^{n+1} be the Bregman projection of xⁿ onto A_{i_n(xⁿ)}.

The sequence {xⁿ} is called a relaxation sequence.

Lemma 4.2 For any relaxation sequence it holds:

1. the set of elements {xⁿ} is compact;
2. lim_{n→∞} D(z, xⁿ) exists for all z ∈ R;
3. D(x^{n+1}, xⁿ) → 0 when n → ∞.

Proof. Let z ∈ R ∩ S. By Lemma 4.1,

D(x^{n+1}, xⁿ) ≤ D(z, xⁿ) − D(z, x^{n+1}).

Since D(x^{n+1}, xⁿ) ≥ 0 we have D(z, xⁿ) ≥ D(z, x^{n+1}), so the limit of D(z, xⁿ) exists, which proves 2.; the displayed inequality then forces D(x^{n+1}, xⁿ) → 0, which proves 3. Finally, the set of elements of {xⁿ} is contained in T = {x ∈ S : D(z, x) ≤ D(z, x⁰)}, which is compact by condition V, so 1. is proven. □

Lemma 4.2 ensures the convergence behavior of the relaxation sequence and gives the basis to prove the main Bregman result.

Theorem 4.1 (Bregman's iterative method) Let I = {1, 2, ..., m} and let the indices be chosen in cyclic order. Then any limit point x* of the relaxation sequence {xⁿ} is a common point of the sets A_i.

Proof. Let x* be a limit point of {xⁿ} and x^{n_k} → x*. Let us extract from {x^{n_k}} a subsequence fully contained in one of the A_i (say {x^{n_k}} ⊂ A₁) and extract from the {x^{n_k+i−1}} convergent subsequences (we assume the {x^{n_k+i−1}} themselves convergent). We have:

x^{n_k} → x* = x*₁,
x^{n_k+1} → x*₂,
...
x^{n_k+m−1} → x*_m,

and {x^{n_k+i−1}} ⊂ A_i ⇒ x*_i ∈ A_i.

From Lemma 4.2 (D(x^{n_k+1}, x^{n_k}) → 0) and condition VI it follows that lim x^{n_k+1} = lim x^{n_k}, i.e. x*₁ = x*₂, which implies x* ∈ A₂. In the same way x* ∈ A₃, ... Concluding, we have:

x* ∈ ∩_{i∈I} A_i. □

Theorem 4.2 If for every y ∈ S the quantity max_{i∈I} min_{x∈A_i} D(x, y) exists, let i_n(xⁿ) be the index which realizes

max_{i∈I} min_{x∈A_i} D(x, xⁿ).

Then any limit point of the relaxation sequence is a common point of the sets A_i.

In other words, Bregman provides a generalization of von Neumann's and Halperin's methods, using Bregman projections to extend the iterative schemes. Bregman divergences have been studied in various settings; one of the most important is the case of Bregman divergences generated by particular functions, the Legendre functions. Details can be found in [6].

4.4 Quantum Bregman Divergences

In this section we will show how some quantum functions can be seen as Bregman divergences.

Proposition 4.4 Let x and y be quantum states. The quadratic distance ‖x − y‖²_ξ induced by the modified ξ-inner product is a Bregman divergence.

In the next proposition we will show that the quantum relative entropy is a Bregman divergence.

Proposition 4.5 Let x, y be strictly positive quantum states and let f(x) = tr(x log x). Then

D_f(x, y) = tr(x log x) − tr(x log y)   (4.4.1)

is a Bregman distance which also satisfies conditions V and VI; in other words, it is a Bregman divergence.

Proof. First of all let us recall the following identity:

D_f(x, y) = f(x) − f(y) − ⟨∇f(y), x − y⟩ = f(x) − f(y) − lim_{t→0⁺} t⁻¹[f(y + t(x − y)) − f(y)].

Now f(x) = tr[x log x], from which:

D_f(x, y) = tr[x log x] − tr[y log y] − lim_{t→0⁺} t⁻¹ tr[(y + t(x − y)) log(y + t(x − y)) − y log y].

For the limit it holds:

lim_{t→0⁺} t⁻¹ tr[(y + t(x − y)) log(y + t(x − y)) − y log y]
= lim_{t→0⁺} t⁻¹ tr[y log(y + t(x − y)) − y log y] + tr[(x − y) log y],

and, expanding the logarithm to first order in t,

lim_{t→0⁺} t⁻¹ tr[y log(y + t(x − y)) − y log y] = lim_{t→0⁺} t⁻¹ tr[y(log y + t log'_y(x − y) + o(t)) − y log y] = tr[x − y] = 0.

Finally:

D_f(x, y) = tr[x log x] − tr[y log y] − tr[(x − y) log y] = tr[x log x] − tr[x log y]. □
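Numerically, (4.4.1) can be evaluated with the matrix logarithm; a minimal sketch (not thesis code):

qre = @(x, y) trace(x*logm(x)) - trace(x*logm(y));
psi = [1; 1]/sqrt(2);
x = 0.9*(psi*psi') + 0.1*eye(2)/2;     % a full-rank state
y = eye(2)/2;                          % the maximally mixed state
disp(real(qre(x, y)))                  % nonnegative; zero iff x == y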

Appendix A

Tensor Product and Partial Trace

As said in the relative chapter, the tensor product is a way to assemble vector spaces in order to obtain a larger vector space. Consider two Hilbert spaces H₁ and H₂ of dimensions n and m respectively; then H₁ ⊗ H₂ is an nm-dimensional vector space. If we consider elements v ∈ H₁ and w ∈ H₂, then the elements of H₁ ⊗ H₂ are linear combinations of vectors of the kind v ⊗ w; in addition, if {i} and {j} are orthonormal bases for H₁ and H₂, then {i ⊗ j} is a basis for H₁ ⊗ H₂.

By definition, the tensor product satisfies the following properties:

• let t be a scalar, v ∈ H₁ and w ∈ H₂; then t(v ⊗ w) = tv ⊗ w = v ⊗ tw;
• let v₁, v₂ ∈ H₁ and w ∈ H₂; then (v₁ + v₂) ⊗ w = v₁ ⊗ w + v₂ ⊗ w;
• let v ∈ H₁ and w₁, w₂ ∈ H₂; then v ⊗ (w₁ + w₂) = v ⊗ w₁ + v ⊗ w₂.

Now let A be a linear operator on H₁ and B a linear operator acting on H₂. We can produce an operator A ⊗ B acting on H₁ ⊗ H₂, for which the following relation is valid:

(A ⊗ B)(v ⊗ w) = Av ⊗ Bw.

In matrix representation the tensor product is known as the Kronecker product, and the previous relation derives from the distributive property of multiplication. Given two matrices A of dimensions m × n and B of dimensions p × q, the Kronecker product is given by the block matrix

A ⊗ B = [a_{11}B ⋯ a_{1n}B; ⋮ ⋱ ⋮; a_{m1}B ⋯ a_{mn}B],

which is an mp × nq matrix.

Now consider a composite system H_AB = H_A ⊗ H_B. The partial trace over the subsystem B is defined as the map:

tr_B(·) : B(H_AB) → B(H_A).

Consider, for example, the state ρ = ρ_A ⊗ ρ_B; we obtain tr_B(ρ) = ρ_A tr(ρ_B). The partial trace can be thought of as a way to extract the information relative to a particular subsystem. For example, let us consider a bipartite qubit system H_A ⊗ H_B; writing

ρ_AB = [a b c d; e f g h; l m n o; p q r s],

we have

ρ_A = [tr([a b; e f]) tr([c d; g h]); tr([l m; p q]) tr([n o; r s])] = [a+f c+h; l+q n+s],

while

ρ_B = [a+n b+o; e+r f+s].

Appendix B

MATLAB Code

%% Thesis code for the thesis example.
%% Beware: not refined code!
close all; clear all;

% hamiltonian matrices
x=[0 1; 1 0];   % sigma_x
z=[1 0; 0 -1];  % sigma_z
e=[1 0; 0 1];   % identity
% elementary matrices (unused alternative basis)
% e_1=[1 0; 0 0]; e_2=[0 1; 0 0]; e_3=[0 0; 1 0]; e_4=[0 0; 0 1];
% pauli matrices
e_1=[1 0; 0 1];
e_2=[0 1; 1 0];
e_3=[1 0; 0 -1];
e_4=[0 -1i; 1i 0];

% subsystem dimensions vector
dim=[2 2 2 2];

% hamiltonian and normalized Gibbs state
H=kron(kron(kron(x,x),e),e)+kron(kron(kron(e,z),z),e)+kron(kron(kron(e,e),x),x);
rho=expm(-H)/trace(expm(-H));

% reduced states on the neighborhoods {1,2,3} and {2,3,4}
rho_123=TrX(rho,[4],dim);
rho_234=TrX(rho,[1],dim);

% Schmidt span generators of rho on neighborhood {1,2,3}
Sigma1_123=TrX(kron(eye(8),e_1)*rho,[4],dim);
Sigma2_123=TrX(kron(eye(8),e_2)*rho,[4],dim);
Sigma3_123=TrX(kron(eye(8),e_3)*rho,[4],dim);
Sigma4_123=TrX(kron(eye(8),e_4)*rho,[4],dim);
Sigma1_123v=reshape(Sigma1_123,[],1);
Sigma2_123v=reshape(Sigma2_123,[],1);
Sigma3_123v=reshape(Sigma3_123,[],1);
Sigma4_123v=reshape(Sigma4_123,[],1);

Sigma_123v=[Sigma1_123v Sigma2_123v Sigma3_123v Sigma4_123v];
Sigma_123=[Sigma1_123 Sigma2_123 Sigma3_123 Sigma4_123];

% iteration in order to find F_123
oldrank=0; oldrank2=1;
while oldrank2>oldrank
    % algebra cycle: close w.r.t. the distorted product X*pinv(rho)*Y
    while rank(Sigma_123v)>oldrank
        oldrank=rank(Sigma_123v);
        for i=1:8:length(Sigma_123)
            for j=1:8:length(Sigma_123)
                p_j=Sigma_123(:,i:i+7)*pinv(rho_123)*Sigma_123(:,j:j+7);
                if rank([Sigma_123v reshape(p_j,[],1)])>rank(Sigma_123v)
                    Sigma_123v=[Sigma_123v reshape(p_j,[],1)];
                    Sigma_123=[Sigma_123 p_j];
                end
            end
        end
    end
    % modular invariance cycle: close w.r.t. the modular map
    while rank(Sigma_123v)>oldrank2
        oldrank2=rank(Sigma_123v);
        for i=1:8:length(Sigma_123)
            p_j=(rho_123)^(1/2)*Sigma_123(:,i:i+7)*pinv(rho_123)^(1/2);
            if rank([Sigma_123v reshape(p_j,[],1)])>rank(Sigma_123v)
                Sigma_123v=[Sigma_123v reshape(p_j,[],1)];
                Sigma_123=[Sigma_123 p_j];
            end
        end
    end
end
% extending with B(H4)
F1_123=kron(Sigma_123,e_1);
F2_123=kron(Sigma_123,e_2);
F3_123=kron(Sigma_123,e_3);
F4_123=kron(Sigma_123,e_4);
F_123=[F1_123 F2_123 F3_123 F4_123];

F_123v=[reshape(F1_123,256,[]) reshape(F2_123,256,[]) reshape(F3_123,256,[]) reshape(F4_123,256,[])];

%% Same thing for subsystem 234
Sigma1_234=TrX(kron(e_1,eye(8))*rho,[1],dim);
Sigma2_234=TrX(kron(e_2,eye(8))*rho,[1],dim);
Sigma3_234=TrX(kron(e_3,eye(8))*rho,[1],dim);
Sigma4_234=TrX(kron(e_4,eye(8))*rho,[1],dim);
Sigma1_234v=reshape(Sigma1_234,[],1);
Sigma2_234v=reshape(Sigma2_234,[],1);
Sigma3_234v=reshape(Sigma3_234,[],1);
Sigma4_234v=reshape(Sigma4_234,[],1);

Sigma_234v=[Sigma1_234v Sigma2_234v Sigma3_234v Sigma4_234v];
Sigma_234=[Sigma1_234 Sigma2_234 Sigma3_234 Sigma4_234];

oldrank_0=0; oldrank_2=1;
while oldrank_2>oldrank_0
    % algebra cycle
    while rank(Sigma_234v)>oldrank_0
        oldrank_0=rank(Sigma_234v);
        for i=1:8:length(Sigma_234)
            for j=1:8:length(Sigma_234)
                p_j=Sigma_234(:,i:i+7)*pinv(rho_234)*Sigma_234(:,j:j+7);
                if rank([Sigma_234v reshape(p_j,[],1)])>rank(Sigma_234v)
                    Sigma_234v=[Sigma_234v reshape(p_j,[],1)];
                    Sigma_234=[Sigma_234 p_j];
                end
            end
        end
    end
    % modular invariance cycle
    while rank(Sigma_234v)>oldrank_2
        oldrank_2=rank(Sigma_234v);
        for i=1:8:length(Sigma_234)
            p_j=(rho_234)^(1/2)*Sigma_234(:,i:i+7)*pinv(rho_234)^(1/2);
            if rank([Sigma_234v reshape(p_j,[],1)])>rank(Sigma_234v)
                Sigma_234v=[Sigma_234v reshape(p_j,[],1)];
                Sigma_234=[Sigma_234 p_j];
            end
        end
    end
end
% extending with B(H1)
F1_234=kron(e_1,Sigma_234);
F2_234=kron(e_2,Sigma_234);
F3_234=kron(e_3,Sigma_234);
F4_234=kron(e_4,Sigma_234);
F_234=[F1_234 F2_234 F3_234 F4_234];

F_234v=[reshape(F1_234,256,[]) reshape(F2_234,256,[]) reshape(F3_234,256,[]) reshape(F4_234,256,[])];

%% finding intersection
% ints returns a basis of the intersection of the two column spans;
% rho can be prepared iff the rank below equals rank(ints(F_123v,F_234v))
rank([ints(F_123v,F_234v) reshape(rho,[],1)])


Bibliography

[1] Shmuel Agmon. The relaxation method for linear inequalities. Canadian Journal of Mathematics, 6(3):382–392, 1954.

[2] Claudio Altafini and Francesco Ticozzi. Modeling and control of quantum systems: an introduction. IEEE Transactions on Automatic Control, 57(8):1898–1917, 2012.

[3] Heinz H. Bauschke and Adrian S. Lewis. Dykstra's algorithm with Bregman projections: a convergence proof. Optimization, 48(4):409–427, 2000.

[4] Robin Blume-Kohout, Hui Khoon Ng, David Poulin, and Lorenza Viola. Information-preserving structures: A general framework for quantum zero-error information. Physical Review A, 82(6):062306, 2010.

[5] Lev M. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7(3):200–217, 1967.

[6] Yair Censor and Arnold Lent. An iterative row-action method for interval convex programming. Journal of Optimization Theory and Applications, 34(3):321–353, 1981.

[7] Yuen-Lam Cheung, Dmitriy Drusvyatskiy, Chi-Kwong Li, Diane Pelejo, and Henry Wolkowicz. Projection methods in quantum information science. arXiv preprint arXiv:1407.6604, 2014.

[8] Imre Csiszár. I-divergence geometry of probability distributions and minimization problems. The Annals of Probability, pages 146–158, 1975.

[9] Domenico d'Alessandro. Introduction to Quantum Control and Dynamics. CRC Press, 2007.

[10] Kenneth R. Davidson. C*-Algebras by Example, volume 6. American Mathematical Society, 1996.

[11] Richard L. Dykstra. An algorithm for restricted least squares regression. Journal of the American Statistical Association, 78(384):837–842, 1983.

[12] René Escalante and Marcos Raydan. Alternating Projection Methods, volume 8. SIAM, 2011.

[14] Israel Halperin. The product of projection operators. Acta Sci. Math. (Szeged), 23:96–99, 1962.

[15] Peter D. Johnson, Francesco Ticozzi, and Lorenza Viola. General fixed points of quasi-local frustration-free quantum semigroups: from invariance to stabilization. arXiv preprint arXiv:1506.07756, 2015.

[16] Stefan Kaczmarz. Angenäherte Auflösung von Systemen linearer Gleichungen. Bulletin International de l'Académie Polonaise des Sciences et des Lettres, 35:355–357, 1937.

[17] T. S. Motzkin and I. J. Schoenberg. The relaxation method for linear inequalities. Canadian Journal of Mathematics, 6(3):393–404, 1954.

[18] J. von Neumann. Functional Operators, volume 2. Princeton University Press, Princeton, NJ, 1950.

[19] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2010.

[20] Dénes Petz. Bregman divergence as relative operator entropy. Acta Mathematica Hungarica, 116(1-2):127–131, 2007.

[21] Dénes Petz. Quantum Information Theory and Quantum Statistics. Springer Science & Business Media, 2007.

[22] Francesco Ticozzi. Symmetrizing quantum dynamics beyond gossip-type algorithms. arXiv preprint arXiv:1509.01621, 2015.

Acknowledgments

I wish to immensely thank my supervisor, professor Francesco Ticozzi, for the patience, time and passion he devoted to guiding me through this work.

I deeply thank my family, always ready to put up with my university meltdowns.

I thank my friends, fellow survivors: Gabriele, companion of adventure through these long years, Fabio and Alessandro, idlers always too busy, and all the others who manage to bear with me (Damiano, Giulia, Lucrezia, "Quei del 90", etc.).
