### Department of Physics “E. Fermi”

Master Degree in Physics, Academic Year 2018/2019

**Multipartite entanglement properties in 1D quantum many-body systems**

Supervisor Candidate

**Contents**

**Introduction**

**1 Correlations**
1.1 Classical information theory
1.2 Quantum composite systems
 1.2.1 The postulates of quantum mechanics
 1.2.2 Composite systems
 1.2.3 Density matrix formalism
1.3 Quantum correlations
 1.3.1 Correlations taxonomy
 1.3.2 Measurement of quantum correlations
 1.3.3 von Neumann entropy
1.4 EPR and Bell’s inequalities
1.5 Summary

**2 QFI and multipartite entanglement**
2.1 Multipartite entanglement
 2.1.1 Entanglement of three qubits
 2.1.2 Entanglement classes for the general case
2.2 Fisher information
 2.2.1 Classical Fisher information
 2.2.2 Quantum Fisher information
 2.2.3 Properties of the QFI
2.3 QFI and entanglement
2.4 Summary

**3 The spin-1/2 quantum Heisenberg chain**
3.1 Model and its Hamiltonian
 3.1.1 Classical limit (J⊥ = 0)
 3.1.2 Jordan-Wigner transformation and Jz = 0
 3.1.3 Symmetries of the Hamiltonian
3.2 Quantum phases and entanglement entropy
3.3 Phases of the XXZ Heisenberg chain
 3.3.1 Ferromagnetic phase
 3.3.2 Antiferromagnetic phase
 3.3.3 XY phase
3.4 Entanglement entropy
3.5 Summary

**4 Ground state results**
4.1 Ground state
 4.1.1 FM ground state (∆ > 1)
 4.1.2 AF Ising ground state (∆ → −∞)
 4.1.3 XY/FM critical point (∆ = 1)
 4.1.4 XY and AF phases (∆ < 1)
4.2 QFI density of the ground state
4.3 MPE of the ground state
4.4 Bloch vectors configuration
 4.4.1 Antiferromagnetic phase configuration
 4.4.2 XY phase configuration
 4.4.3 FM phase configuration
4.5 Finite-size scaling of the QFI density
4.6 Robustness of the ground state
4.7 Summary

**5 Excited states results**
5.1 Full spectrum quantum Fisher information
 5.1.1 AF and FM phases
 5.1.2 XY phase and transition points
5.2 Nonzero temperature
5.3 Summary

**Conclusions**

**Appendices**

**A QFI in the ferromagnetic phase**

**B MATLAB Code**

**Abstract**

The characterization of many-body quantum systems through their entanglement properties is an intriguing problem at the verge of quantum information and many-body physics. The study of quantum systems from the perspective of information science allows a deeper understanding of many phenomena in which these systems are involved and allows one to explore possible applications for quantum technologies. Current studies have mainly focused on bipartite or pairwise entanglement in the ground state of critical Hamiltonians: these studies have emphasized a growth of entanglement in the vicinity of quantum critical points. However, many-body quantum states show a richer structure of multiparticle correlations. Despite this, much less attention has been devoted to witnessing multipartite entanglement in quantum many-body systems. Yet multipartite entanglement has been detected experimentally in atomic ensembles, and a vivid line of research is currently focused on the study of possible witnesses that allow the detection of multipartite entanglement in quantum states. Among these witnesses, the quantum Fisher information (QFI) has proved to be especially powerful: it extends the class of entangled states detectable by popular methods such as spin squeezing, it can be extracted from experimental data, and it has a physical interpretation in terms of the distinguishability of quantum states after an external parametric transformation. In this thesis we investigate the behaviour of multipartite entanglement, as detected by the quantum Fisher information, of a spin-1/2 XXZ Heisenberg chain. The analysis is performed both at zero and finite temperature and in the presence of quenched disorder.

**Introduction**

One of the most surprising features of quantum mechanics is its non-local nature. Nonlocality allows correlations between outcomes of measurements performed at spatially separated locations that could never occur according to classical physics. These correlations are the manifestation of entanglement, the feature Schrödinger referred to as the characteristic trait of quantum mechanics [1, 2]. More than eighty years later, we can say that he was quite far-sighted, since entanglement appears to be a key ingredient in the explanation of a large variety of quantum phenomena, especially in the context of quantum many-body systems. Milestone examples are the entangled ground states used to explain superconductivity (the BCS ansatz), the fractional quantum Hall effect (the Laughlin ansatz) and the superfluid transition of ⁴He [3].

A currently vivid line of research in the field of many-body physics is focused on the study of entanglement properties of one-dimensional (1D) systems such as spin chains or 1D Bose gases. Up to now, a wide variety of results are known for the case of bipartite entanglement, namely entanglement between two partitions or two sites of a chain. Much less is known about multipartite entanglement, i.e. entanglement between more than two subsystems. The main reason is that the quantification and classification of multipartite entanglement is considerably more complex and full of open problems compared to the bipartite case [4]. In this thesis, we present a numerical study, through the exact diagonalization method, of the multipartite entanglement properties of a quantum spin-1/2 XXZ Heisenberg chain. This model is a prototypical quantum many-body system characterized by a rich phenomenology [5, 6].

The interest in 1D systems is raised by several reasons: from a technical point of view, lowering the number of space dimensions may result in a simpler and more tractable mathematical problem, and in some cases one can hope to find an exact solution. Nevertheless, a lower dimension usually means stronger interactions between the constituents of the system. This is a double-edged sword: on one hand it is reasonable to expect that these systems exhibit more interesting physics than weakly interacting ones, but on the other hand one often has to deal with the impossibility of resorting to perturbative methods. So 1D systems sit in a particular position where the interplay between strength of interaction and mathematical simplification makes them perfect candidates for applying and testing non-perturbative or numerical methods. Moreover, very often the physical insight one obtains by studying them can provide guidance for the direction to follow when working with higher dimensional systems.

From the practical point of view, several systems where the degrees of freedom are effectively confined to move along a line/chain can be experimentally realized. Suitable platforms are crystalline compounds, or even more controllable setups such as ultra-cold atoms in optical lattices [7].

As regards the current state of the art in the study of quantum correlations in many-body systems, an important class of works considers a bipartition of the system. If the total system is in a pure state, then a measure of the entanglement between two partitions A and B is given by the von Neumann entropy S associated with the reduced density matrix of one of the two blocks (ρ_{A/B}). A well studied feature of entanglement entropy is its scaling properties [8]. An important result can be cast under the name of area law [9]. When it holds, the reduced entropy S depends only on the surface of separation between the two regions A and B. In d-dimensional systems this means that S ∼ l^{d−1}, where l is of the order of the size of one of the blocks. For 1D spin systems the entanglement entropy in the ground state obeys an area law for gapped phases and acquires logarithmic corrections at criticality. Entanglement entropy has also been studied in the context of nonequilibrium dynamics, where it has been found that S obeys a volume law after a quench [9]. The increase with time is slower in the presence of disorder, with a distinct logarithmic behaviour characterizing the many-body localised states [10].

As regards the multipartite case, much less is known. It has been recently shown that a convenient way to quantify the amount of multipartite entanglement is the quantum Fisher information (QFI) [11]. This tool can be expressed as the variance, over a given quantum state, of an operator which depends on the specific problem. For a spin system in a thermal state, the QFI is related to a dynamical susceptibility [12]. Since susceptibilities can be experimentally measured also for quite large systems, this suggests an easy way to experimentally probe multipartite entanglement in many-body systems. Finally, in the context of non-equilibrium dynamics, the multipartite entanglement has been studied for a quantum Ising chain after a quench [13]. The authors found that if the system starts from a ferromagnetic state, there is no limitation on the degree of multipartiteness achievable. In the paramagnetic case, the degree of multipartiteness is limited (while it tends to diverge if the system starts from the critical point) and attains a maximum only close to the equilibrium critical point.

We propose a systematic analysis of the QFI and of the multipartite entan-glement properties of a one-dimensional XXZ spin-1/2 Heisenberg model with periodic boundary conditions and the possibility of quenched disorder.

H(∆, h) = −(J/2) ∑_{i=1}^{L} (S^x_i S^x_{i+1} + S^y_i S^y_{i+1} + ∆ S^z_i S^z_{i+1}) + ∑_{i=1}^{L} h_i S^z_i. (1)

The static random fields h_i are independent random variables uniformly distributed in [−h, h]. In the case of a clean system (h_i = 0, ∀i ∈ {1, ..., L}) the two isotropic points ∆ = −1 and ∆ = +1 describe the antiferromagnetic and ferromagnetic chains respectively.
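For small chains, the Hamiltonian of Eq. (1) can be built explicitly and diagonalized exactly. The following is a minimal dense-matrix sketch in Python/NumPy (the thesis's own code, Appendix B, is in MATLAB and is not reproduced here); the helper names `site_op` and `xxz_hamiltonian` are illustrative choices, and for larger L one would exploit sparse matrices and symmetries.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1).
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
id2 = np.eye(2)

def site_op(op, i, L):
    """Embed a single-site operator at site i of an L-site chain."""
    ops = [id2] * L
    ops[i] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def xxz_hamiltonian(L, delta, fields, J=1.0):
    """H of Eq. (1) with periodic boundary conditions (site L+1 = site 1)."""
    dim = 2 ** L
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(L):
        j = (i + 1) % L  # periodic neighbour
        H -= (J / 2) * (site_op(sx, i, L) @ site_op(sx, j, L)
                        + site_op(sy, i, L) @ site_op(sy, j, L)
                        + delta * site_op(sz, i, L) @ site_op(sz, j, L))
        H += fields[i] * site_op(sz, i, L)
    return H

# Clean chain (h_i = 0) at the isotropic ferromagnetic point delta = 1:
# the fully polarized state is a ground state with energy -J*L/8.
L = 6
H = xxz_hamiltonian(L, delta=1.0, fields=np.zeros(L))
energies, states = np.linalg.eigh(H)
print("ground-state energy:", energies[0])
```

At ∆ = 1 the minimal energy equals −L/8 (each of the L bonds contributes −(J/2)·(1/4) on the fully polarized state), which gives a quick sanity check of the construction.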

The zero temperature phase diagram of the XXZ model in zero magnetic field shows a gapless phase in the interval −1 ≤ ∆ < 1. Outside this interval the excitations are gapped. The system exhibits three phases: the antiferromagnetic (AF) phase for ∆ ≤ −1, the axial phase (XY) for −1 < ∆ < +1 and the ferromagnetic (FM) phase for ∆ > 1. The point ∆ = 1, where the system reduces to the isotropic Heisenberg model, is a first order critical point. At ∆ = −1 a continuous transition of the Berezinskii-Kosterlitz-Thouless type occurs [3]. We study the quantum Fisher information (QFI) and the multipartite entanglement (MPE) of the ground state, of the excited states and at nonzero temperature, and finally we perform a stability analysis of the MPE in the presence of disorder. The analysis has been made for different values of ∆.

The work is organised as follows. Chapter 1 is devoted to the concept of correlation. We start with a brief overview of classical information theory that will lead to the definition of mutual information as a classical measure of correlations. Then we move to the quantum domain and discuss composite quantum systems and the mathematical formalism to describe them. We then provide a broader outlook on the hierarchy of correlations in states of composite quantum systems and conclude the Chapter with a review of Bell's inequalities and the EPR paradox.

In Chapter 2 we describe the link between the quantum Fisher information and the multipartite entanglement. The first part of the chapter is devoted to the presentation of multiparticle entanglement and in the second one we introduce the quantum Fisher information. We conclude the Chapter by providing the link between QFI and MPE.

In Chapter 3 we introduce our model: the quantum spin-1/2 XXZ Heisenberg chain. We describe the Hamiltonian, its symmetries, limiting cases and phase diagram. Finally we present the state of the art about the bipartite entanglement for this system.

The original contribution of this thesis is contained in Chapters 4 and 5. Chapter 4 is devoted to the analysis of the multipartite entanglement properties of the ground state. We start by discussing the QFI and the MPE in the various phases of the system and compare these results with the bipartite entanglement. Then we perform a finite-size scaling of the QFI and conclude the chapter by studying the robustness of the ground state in the presence of quenched disorder.

In Chapter 5 we study the multipartite entanglement properties of the excited states. We first analyse the QFI/MPE for the entire spectrum at some representative values of ∆ in the AF, XY and FM phases. We conclude the chapter by analysing the QFI/MPE in a thermal state and the robustness of the multipartite entanglement under thermal fluctuations.

**Correlations**

The concept of correlation is widespread over all modern statistical sciences. The term correlation refers to a broad class of mutual relationships or associations between quantities. Given two (or more) measurable quantities, it tells how much we can predict about one of them given some knowledge of the other, in comparison to how much we can predict without that knowledge. Intuitively, when two random variables are correlated, we can learn something about one variable when we are told the value of the other.

The usefulness of these statistical dependencies lies in the fact that they can often indicate the presence of an underlying causal relationship. In many-body physics, for example, correlation functions provide a bridge between theory and experiment. The connection between correlation functions and experimental data can be established by the so called linear response formalism: assuming an experimentally imposed perturbation of a many-body system to be weak, the response of the system can be analysed within a controlled expansion scheme whose leading order term is identified as a retarded correlation function [3]. In the context of quantum information, correlations are viewed as an essential resource to perform a wide range of quantum protocols. Teleportation, super-dense coding and quantum-key distribution, just to mention a few important examples, can be realized exclusively with the help of entangled (i.e. quantum correlated) states [14]. Furthermore, from a conceptual point of view, any measurement can be modeled as the establishment of correlations between two random variables: one represents the values of the quantity pertaining to the system to be measured, while the other represents the states of the apparatus used to measure the system. It is by looking at the states of the apparatus, and discriminating them, that we infer the states of the system [15].

We can fairly say that the problem of quantifying correlation between variables is as old as statistics itself. In 1885, Sir Francis Galton first defined the term regression and completed the theory of bivariate correlation. A decade later, Karl Pearson developed the index that we still use to measure correlation, Pearson's r. The Pearson correlation measures the linear relationship between two variables. It has the advantage that it can be calculated directly from a data sample without the need to actually know the probability distributions involved (since it is an expression involving moments of the distribution). The downside is that it fails to detect non-linear relationships and it can be zero even if there is a relationship between the variables. Rank correlation coefficients, such as Spearman's rho and Kendall's tau, measure the extent to which, as one variable increases, the other variable tends to increase, without requiring that increase to be represented by a linear relationship. Other measures of dependence among random variables have been created over the years and, depending on the specific problem, some are more suitable than others [16].

Our purpose is ultimately to explore von Neumann entropy based measures of quantum correlations. These measures derive from the application of information theory, a theory originally proposed by Claude Shannon in 1948 to explore fundamental limits on signal processing and communication operations, to quantum mechanics [17]. We will start by giving a mathematical shape to the intuitive idea of correlations within the framework of classical information theory, and then we will extend these concepts to a more general and fundamental quantum context.

**1.1 Classical information theory**

The term information in colloquial speech is predominantly used as an abstract noun to denote any amount of data, text, and in a broad sense any content that can be stored, sent, received or manipulated in any medium. Historically, the study of the concept of information can be understood as an effort to make the properties of human knowledge measurable. Nowadays the term information has a precise mathematical meaning [18].

The science of information theory begins with the observation that there is a fundamental link between probabilities and information. Whether something is informative or not depends on the context in which the information is to be used. Our experiences shape our expectations about phenomena around us. A snowfall in the Arctic is quite a boring and uninformative event, because our experience tells us that snow in the Arctic is an everyday occurrence. On the contrary, a snowfall in the Sahara desert is unexpected and potentially contains a great deal of information (a climate catastrophe maybe?). We assign prior probability distributions to uncertain quantities based on our pre-existing knowledge. These probabilities express our beliefs about these quantities before some evidence is taken into account. Based on these a priori probabilities we make predictions about phenomena. Additional information, acquired for example through measurements, can modify our prior probabilities.

Let us consider, for example, the drosophila of probability theory: a six-sided die. What we do not know about the system (our die) is which side is going to be up when the die is thrown. What we do know is that it has six sides. Is there a way to quantify our knowledge (or the lack thereof) about this die? In this simple case, we can use the fact that our knowledge is limited by the number of states of the system. Intuitively, the outcome of a twenty-sided die is more unpredictable than the outcome of a six-sided die. Thus we expect our measure to increase with the number of possible states. Another useful property is additivity. Suppose we have two independent systems, say system A and system B. The first one can be in n_A states and the second in n_B states. The total number of states of the joint system must be n_A n_B. We expect our uncertainty to be additive, that is to say, denoting by I_A and I_B the uncertainty pertaining to the two systems, we expect I_AB = I_A + I_B.

The logarithm satisfies both requirements, so we can quantify the uncertainty for our six-sided die as I = log 6 ≈ 2.58. We are considering the logarithm in base two; the unit of measure in this case is the bit. So the uncertainty about our system is log 6 bits. Let us assume now that some malevolent gambler loads our die so much that one of the six sides (say, the six) can never be up.

Now, we start throwing this weighted die, and after a long series of throws without a six turning up, we start to become suspicious. After realising that this outcome never shows up, but the other five occur with roughly equal frequency, we adjust our expectation by hypothesizing (a posteriori) that it is a weighted die with five equally likely outcomes, and one outcome that never actually occurs. Now the expected uncertainty is log 5. But we have learned something through all these measurements. We gained information. How much? It is the difference between our uncertainty before we started to be suspicious and the uncertainty after we changed our expectations. The information we gained is thus I = log 6 − log 5 ≈ 0.26 bits of information. This extra information reduces our uncertainty and allows us to make better predictions than we would make without it. Thus, simply put, information is that which allows us to make predictions with accuracy better than chance.
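The bookkeeping for the loaded die can be checked numerically. This small Python sketch (illustrative, not from the thesis) computes the uncertainty before and after we learn that one face never occurs:

```python
import math

def uncertainty(n_outcomes):
    """Uncertainty (in bits) of a uniform distribution over n equally
    likely outcomes: I = log2(n)."""
    return math.log2(n_outcomes)

before = uncertainty(6)   # fair six-sided die: log2(6) ≈ 2.58 bits
after = uncertainty(5)    # loaded die: five equally likely outcomes
gained = before - after   # information gained = log2(6) - log2(5)
print(f"information gained: {gained:.2f} bits")  # prints 0.26
```

The gain log2(6/5) ≈ 0.26 bits is small precisely because ruling out one of six roughly equal possibilities removes little uncertainty.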

The ideas introduced so far can be extended to the case of a more general random variable X with corresponding probability distribution P(X). The sample space 𝒳 is the set of all possible realizations of X. An event x_i ∈ 𝒳 can occur with probability p_i ≡ P(X = x_i). After performing a measurement on X, we acquire some information, thus we reduce the uncertainty about the state of the system.

A suitable way to quantify this uncertainty was established for the first time by Claude Shannon in 1948 in the context of sending a message over a channel such as a telephone wire [17]. The main motivation in his case was to investigate the maximum communication capacity of a classical communication channel in order to maximize telephone communication profits for the Bell Corporation. He introduced the notion of self-information I as a measure of the uncertainty associated with an event x_i with probability p_i. This measure must possess the following properties:

Property 1. The amount of information of an event x_i must depend only upon its probability p_i. The more surprised we are by an event happening (i.e. the lower the probability of that event), the more information this event carries.

Property 2. I is a continuous function of p. When the content of a message is known a priori with certainty, there is no actual information conveyed in the message.

Property 3. Given two independent events x_i and x_j, with probabilities p_i and p_j respectively, the information content of the two symbols x_i and x_j is additive: I(x_i; x_j) = I(x_i) + I(x_j).

Shannon proved that there is a unique measure which satisfies these requirements. If X is a random variable over a sample space 𝒳 such that event x_i occurs with probability p(x_i), then the information content of the event x_i is

I(x_i) = −log p(x_i). (1.1)

This measure of surprise has the desirable property that it is higher for lower probability events and lower for higher probability events. The expected self-information content of the random variable X is

H(X) ≡ H(p(x)) = −∑_i p_i log(p_i). (1.2)

Its unit depends on the base of the logarithm: bits in base two, nats in base e, dits in base ten.

This quantity is called the Shannon entropy, and it is essentially a measure of the sharpness of a probability distribution, that is, a measure of the uncertainty associated with a random variable. Any change in the distribution P(X) which levels out the probabilities increases the uncertainty, thus the entropy. For a perfectly sharp distribution, in which the probability is p_i = P(X = x_i) = 1 for one x_i and zero for all other possible outcomes, we have H(X) = 0. For a distribution over N equally likely elements, p_i = 1/N ∀x_i ∈ 𝒳, we obtain the maximal entropy H_max(X) = log N.

This definition can be straightforwardly generalized to the multivariate case. Given two random variables X and Y, with sets of possible outcomes 𝒳 and 𝒴 respectively, we can define the joint probability distribution p(x_i, y_j) ≡ P(X = x_i; Y = y_j) with corresponding marginal probability distributions p(x_i) ≡ P(X = x_i) = ∑_j p(x_i, y_j) and p(y_j) ≡ P(Y = y_j) = ∑_i p(x_i, y_j). The joint entropy for two variables with a joint distribution p(x, y) is

H(X, Y) ≡ H(p(x, y)) = −∑_{i,j} p(x_i, y_j) log p(x_i, y_j). (1.3)

Suppose that two random variables X and Y have joint entropy H(X, Y). It means that on average we need H(X, Y) bits of information to describe the state of the two random variables. If we are told the value of Y, we have gained H(Y) bits of information. Once Y is known, we only need

H(X|Y) = H(X, Y) − H(Y) (1.4)

bits to describe the state of the whole system. The quantity H(X|Y) is called the conditional entropy and quantifies the amount of information needed to describe the outcome of a random variable X given that the value of another random variable Y is known. If X and Y are completely independent, then H(X|Y) = H(X), since the information needed to describe X is not reduced by knowledge of Y [19].

If we are not informed of the value of Y, then our information concerning X, H(X), is calculated from the marginal distribution P(X). However, if we are now told that Y has the value y_j, then our information about X changes to the information of the conditional distribution P(X|Y = y_j). According to what we have said, we wish the degree of correlation to measure how much we learn about X by being informed of the value of Y. Thus:

I(X : Y) = H(Y) − H(Y|X) = H(X) − H(X|Y) = H(X) + H(Y) − H(X, Y) = ∑_{i,j} p(x_i, y_j) log [p(x_i, y_j) / (p(x_i) p(y_j))]. (1.5)

The quantity I(X : Y) is called the mutual information, and it measures the correlation between two random variables in terms of how much knowledge of one random variable reduces the uncertainty about the other. The Venn diagram below gives a visual idea of the relation among mutual information, Shannon entropy and conditional entropy.
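Eqs. (1.3)-(1.5) can be verified on a concrete case. The joint distribution below is a hypothetical choice for two correlated binary variables; the Python/NumPy sketch (illustrative, not from the thesis) computes the entropies and checks that the different expressions for the mutual information agree:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; the convention 0*log(0) = 0 is used."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical joint distribution p(x_i, y_j) of two correlated bits.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)  # marginal distribution of X
p_y = p_xy.sum(axis=0)  # marginal distribution of Y

H_X, H_Y = entropy(p_x), entropy(p_y)
H_XY = entropy(p_xy.flatten())   # joint entropy, Eq. (1.3)
H_X_given_Y = H_XY - H_Y         # conditional entropy, Eq. (1.4)
I = H_X + H_Y - H_XY             # mutual information, Eq. (1.5)

print(f"I(X:Y) = {I:.3f} bits")
```

Here both marginals are uniform (H(X) = H(Y) = 1 bit), yet I(X:Y) > 0: learning Y removes part of the uncertainty about X, exactly as the diagram below illustrates.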

**Figure 1.1.** Venn diagram showing additive and subtractive relationships among the various information measures associated with correlated variables X and Y. The area contained by both circles is the joint entropy H(X, Y). The circle on the left (red and violet) is the individual entropy H(X), with the red being the conditional entropy H(X|Y). The circle on the right (blue and violet) is H(Y), with the blue being H(Y|X). The violet is the mutual information I(X : Y).

**1.2 Quantum composite systems**

The aim of our work is to investigate correlations in a quantum context. Quantum mechanics is an axiomatic theory grounded on a few principles called the postulates of quantum mechanics [20]. This section is devoted to providing a brief overview of quantum mechanics with a particular focus on the mathematical formalism used to describe composite systems. We start by stating the postulates of quantum mechanics, then we provide the mathematical description of composite systems and finally we introduce the density matrix formalism.

**1.2.1 The postulates of quantum mechanics**

**Postulate 1.** To any isolated physical system one can associate a complex Hilbert space ℋ known as the state space of the system. At a fixed time t₀, the system is completely described by its state vector |ψ(t₀)⟩, which is a unit vector belonging to the state space ℋ.

Since ℋ is a vector space, this postulate implies the superposition principle: a linear combination of state vectors is itself a state vector. This principle is a hallmark of quantum theory and responsible for many features of quantum mechanics [14, 20].

**Postulate 2.** Every measurable physical quantity 𝒜 is described by a Hermitian operator A acting in ℋ.

**Postulate 3.** The only possible result of the measurement of a physical quantity 𝒜 is one of the eigenvalues of the corresponding observable A.

**Postulate 4.** When the physical quantity 𝒜 is measured on a system in the normalized state |ψ⟩, the probability P(a_n) of obtaining the eigenvalue a_n of the corresponding observable A is

P(a_n) = ∑_{i=1}^{g_n} |⟨u_n^i|ψ⟩|², (1.6)

where g_n is the degree of degeneracy of a_n and {|u_n^i⟩}_{i=1,...,g_n} is an orthonormal set of vectors which forms a basis in the eigensubspace ℋ_n associated with the eigenvalue a_n of A.

**Postulate 5.** If the measurement of the physical quantity 𝒜 on the system in the state |ψ⟩ gives the result a_n, the state of the system immediately after the measurement is the normalized projection P_n|ψ⟩/√(⟨ψ|P_n|ψ⟩) of |ψ⟩ onto the eigensubspace associated with a_n.

The state of the system immediately after the measurement is therefore always an eigenvector of A with the eigenvalue a_n, precisely the projection of |ψ⟩ on ℋ_n (suitably normalized, for convenience). If a_n is non-degenerate, the post-measurement state will be (c_n/|c_n|)|u_n⟩ = e^{i arg c_n}|u_n⟩, which describes the same physical state as |u_n⟩.

**Postulate 6.** The time evolution of |ψ(t)⟩ is governed by the Schrödinger equation

iħ (d/dt)|ψ(t)⟩ = H(t)|ψ(t)⟩, (1.7)

where H(t) is the Hamiltonian of the system.

The Schrödinger equation is linear and homogeneous, thus the correspondence between the states |ψ(t₀)⟩ and |ψ(t)⟩ is linear. Hence there is a linear operator U(t, t₀) such that

|ψ(t)⟩ = U(t, t₀)|ψ(t₀)⟩. (1.8)

It can be shown that if the Hamiltonian of the system is time-independent, the evolution operator U takes the form

U(t, t₀) = e^{−iH(t−t₀)/ħ}. (1.9)
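For a time-independent Hamiltonian, Eq. (1.9) can be evaluated numerically through the eigendecomposition H = V E V†, so that U = V e^{−iEt} V†. The sketch below (Python/NumPy, with ħ = 1 and a hypothetical 2×2 Hamiltonian; illustrative, not from the thesis) verifies that the resulting evolution operator is unitary:

```python
import numpy as np

# Hypothetical 2x2 Hamiltonian (Pauli-x form), units with hbar = 1.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def evolution_operator(H, t):
    """U(t) = exp(-i H t), built from the eigendecomposition of H."""
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

U = evolution_operator(H, t=0.7)
# Unitarity U†U = 1 guarantees that norms (probabilities) are preserved.
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True
```

Because H is Hermitian, e^{−iHt} built this way is automatically unitary, which is the numerical counterpart of the statement that Schrödinger evolution preserves the norm of |ψ(t)⟩.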

**1.2.2 Composite systems**

We are accustomed from classical physics to the fact that composite systems can be decomposed into their subsystems and that, conversely, individual systems can be combined to give overall composite systems. The classical total system is completely describable in terms of the states of its subsystems and their mutual dynamic interactions. In the quantum context, however, it is found that composite systems can in addition have completely different global properties, impossible to describe in terms of the individual subsystems. These come to light when the composite quantum systems are in entangled states.

Let ℋ_A and ℋ_B be two Hilbert spaces of dimension n_A and n_B respectively. By definition, the vector space ℋ_AB is called the tensor product of ℋ_A and ℋ_B:

ℋ_AB = ℋ_A ⊗ ℋ_B (1.10)

if to each pair of vectors |φ⟩_A belonging to ℋ_A and |χ⟩_B belonging to ℋ_B is associated a vector of ℋ_AB, denoted by

|φ⟩_A ⊗ |χ⟩_B, (1.11)

which is called the tensor product of |φ⟩_A and |χ⟩_B, such that the following conditions hold:

· It is linear with respect to multiplication by complex numbers:
[λ|φ⟩_A] ⊗ |χ⟩_B = λ[|φ⟩_A ⊗ |χ⟩_B],
|φ⟩_A ⊗ [µ|χ⟩_B] = µ[|φ⟩_A ⊗ |χ⟩_B].

· It is distributive with respect to vector addition:
|φ⟩_A ⊗ [|χ⟩_B + |ζ⟩_B] = |φ⟩_A ⊗ |χ⟩_B + |φ⟩_A ⊗ |ζ⟩_B,
[|φ⟩_A + |χ⟩_A] ⊗ |ζ⟩_B = |φ⟩_A ⊗ |ζ⟩_B + |χ⟩_A ⊗ |ζ⟩_B.

· When a basis has been chosen in each of the spaces ℋ_A and ℋ_B, {|u_i⟩_A} for ℋ_A and {|v_j⟩_B} for ℋ_B, the set of vectors {|u_i⟩_A ⊗ |v_j⟩_B} constitutes a basis in ℋ_AB. If n_A and n_B are finite, the dimension of ℋ_AB is consequently n_A n_B.

Let us consider a tensor product vector |φ⟩_A ⊗ |χ⟩_B. Whatever |φ⟩_A and |χ⟩_B may be, they can be expanded in the {|u_i⟩_A} and {|v_j⟩_B} bases respectively:

|φ⟩_A = ∑_i a_i |u_i⟩_A,  |χ⟩_B = ∑_j b_j |v_j⟩_B. (1.12)

Using the properties of the tensor product, the expansion of the vector |φ⟩_A ⊗ |χ⟩_B in the {|u_i⟩_A ⊗ |v_j⟩_B} basis can be written

|φ⟩_A ⊗ |χ⟩_B = ∑_{ij} a_i b_j |u_i⟩_A ⊗ |v_j⟩_B. (1.13)

Therefore the components of a tensor product vector are the products of the components of the two vectors of the product.

There exist in ℋ_AB vectors which are not tensor products of a vector of ℋ_A by a vector of ℋ_B. Since {|u_i⟩_A ⊗ |v_j⟩_B} constitutes by hypothesis a basis in ℋ_AB, the most general vector of ℋ_AB is

|ψ⟩ = ∑_{ij} c_{ij} |u_i⟩_A ⊗ |v_j⟩_B. (1.14)

Given n_A n_B arbitrary complex numbers c_{ij}, it is not always possible to put them in the form of products a_i b_j of n_A numbers a_i and n_B numbers b_j. Therefore, in general, vectors |φ⟩_A and |χ⟩_B of which |ψ⟩ is the tensor product may not exist. However, an arbitrary vector of ℋ_AB can always be decomposed into a linear combination of tensor product vectors, as shown in Eq. (1.14).
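Whether a given |ψ⟩ of the form of Eq. (1.14) factorizes can be tested directly: |ψ⟩ is a tensor product exactly when the coefficient matrix c_{ij} has rank one (c_{ij} = a_i b_j). A small Python/NumPy sketch of this test (illustrative, not from the thesis; the function name `is_product_state` is a made-up helper):

```python
import numpy as np

def is_product_state(c, tol=1e-10):
    """A vector |psi> = sum_ij c_ij |u_i>|v_j> is a tensor product
    iff its coefficient matrix c_ij has rank one (c_ij = a_i b_j)."""
    s = np.linalg.svd(c, compute_uv=False)
    return int(np.sum(s > tol)) == 1

# |00>: c_ij = a_i b_j with a = b = (1, 0) -> a product state.
product = np.array([[1.0, 0.0],
                    [0.0, 0.0]])
# Bell state (|00> + |11>)/sqrt(2): no factorization a_i b_j exists.
bell = np.array([[1.0, 0.0],
                 [0.0, 1.0]]) / np.sqrt(2)

print(is_product_state(product), is_product_state(bell))  # True False
```

The rank criterion is exactly the statement in the text: a rank-one c_{ij} factorizes into n_A numbers a_i and n_B numbers b_j, while any higher rank signals an entangled vector.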

**Schmidt decomposition.** A tool of great usefulness in quantum information is the so called Schmidt decomposition.

Suppose |ψ⟩ is a pure state of a composite system AB. Then there exist orthonormal states {|u_i^A⟩} for system A and orthonormal states {|v_i^B⟩} for system B such that

|ψ⟩ = ∑_i λ_i |u_i^A⟩|v_i^B⟩, (1.15)

where the λ_i are non-negative real numbers satisfying ∑_i λ_i² = 1, known as Schmidt coefficients.

One of the consequences of the Schmidt decomposition is that system A or B can be traced out immediately to obtain

ρ_A = tr_B ρ = ∑_i λ_i² |u_i^A⟩⟨u_i^A|, (1.16)

ρ_B = tr_A ρ = ∑_i λ_i² |v_i^B⟩⟨v_i^B|. (1.17)

So the eigenvalues of ρ_A and ρ_B are identical, namely λ_i² for both density operators. Many important properties of quantum systems are completely determined by the eigenvalues of the reduced density operator of the system, so for a pure state of a composite system such properties will be the same for both subsystems.

The bases $\{|u_i^A\rangle\}$ and $\{|v_i^B\rangle\}$ are called the Schmidt bases for $A$ and $B$, respectively, and the number of non-zero values $\lambda_i$ is called the Schmidt number of the state $|\psi\rangle$. The Schmidt number is an important property of a composite quantum system, which in some sense quantifies the "amount" of entanglement between systems $A$ and $B$. To get some idea of why this is the case, consider the following obvious but important property: the Schmidt number is preserved under unitary transformations on system $A$ or system $B$ alone. To see this, notice that if $\sum_i \lambda_i |u_i^A\rangle|v_i^B\rangle$ is the Schmidt decomposition of $|\psi\rangle$, then $\sum_i \lambda_i \big(U|u_i^A\rangle\big)|v_i^B\rangle$ is the Schmidt decomposition of $U|\psi\rangle$, where $U$ is a unitary operator acting on system $A$ alone. Algebraic invariance properties of this type make the Schmidt number a very useful tool.
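Numerically, the Schmidt decomposition of a bipartite pure state is nothing but the singular value decomposition of its coefficient matrix $c_{ij}$. The sketch below, using the state $(|00\rangle + |11\rangle)/\sqrt{2}$ as an illustrative example, checks that the singular values are the Schmidt coefficients and that the reduced density operator has spectrum $\{\lambda_i^2\}$, as in Eqs. (1.16)-(1.17):

```python
import numpy as np

# Coefficient matrix c_ij of (|00> + |11>)/sqrt(2)
c = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)

# The SVD of c_ij is exactly the Schmidt decomposition: the singular
# values are the Schmidt coefficients lambda_i (non-negative, squares sum to 1)
u, lam, vh = np.linalg.svd(c)
assert np.isclose(np.sum(lam**2), 1.0)

# Reduced density operator of A has eigenvalues lambda_i^2  (Eq. 1.16)
psi = c.reshape(-1)
rho = np.outer(psi, psi.conj())
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)  # trace out B
assert np.allclose(np.sort(np.linalg.eigvalsh(rho_A)), np.sort(lam**2))
```

For this state both Schmidt coefficients equal $1/\sqrt{2}$, so the Schmidt number is 2, confirming that the state is entangled.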

**1.2.3 Density matrix formalism**

The case of a physical state represented by a vector on a Hilbert space is an ideal one, in which the full information content of the system is accessible. In practice, however, the state of the system is often not perfectly determined. This is true, for example, for the polarization state of photons coming from a source of natural unpolarized light, and also for the atoms of a beam emitted by a furnace at temperature $T$, where the kinetic energy of the atoms is known only statistically. We need a tool to incorporate into the formalism the incomplete information we possess about the state of the system, so that our predictions make maximum use of this partial information. It is possible to introduce a more general formalism to take into account statistical ensembles of states, called mixed states. This is the so-called density matrix formalism.

When one has incomplete information about a system, one typically appeals to the concept of probability. For example, we know that a photon emitted by a source of natural light can have any polarization state with equal probability. Similarly, a system in thermodynamic equilibrium at a temperature $T$ has a probability proportional to $e^{-E_n/kT}$ of being in a state of energy $E_n$. In general, what we can say when we do not have complete information about the state of the system is that it may be in the state $|\psi_1\rangle$ with probability $p_1$, or in the state $|\psi_2\rangle$ with probability $p_2$, etc. Obviously $\sum_i p_i = 1$.

We then say that we are dealing with a statistical mixture of states.

**Density operator for pure states** Before studying this general case, we shall begin by again examining the simple case where the state of the system is perfectly known (all the probabilities $p_k$ are zero, except one). The system is then said to be in a pure state. We shall show that characterizing the system by its state vector is completely equivalent to characterizing it by a certain operator acting in the state space, the density operator. The usefulness of this operator will become apparent below, where we shall show that nearly all the formulas involving this operator, derived for the pure case, remain valid for the description of a statistical mixture of states.

Consider a system whose state vector at the instant $t$ is

$$|\psi(t)\rangle = \sum_n c_n(t)\, |u_n\rangle, \tag{1.18}$$

where the $\{|u_n\rangle\}$ form an orthonormal basis of the state space, assumed to be discrete. The coefficients $c_n(t)$ satisfy the relation

$$\sum_n |c_n(t)|^2 = 1, \tag{1.19}$$

which expresses the fact that $|\psi(t)\rangle$ is normalized. If $A$ is an observable with matrix elements $\langle u_n|A|u_p\rangle = A_{np}$, the expectation value of $A$ at time $t$ is

$$\langle A(t)\rangle = \langle\psi(t)|A|\psi(t)\rangle = \sum_{n,p} c_n^*(t)\, c_p(t)\, A_{np}. \tag{1.20}$$
Finally, the evolution of $|\psi(t)\rangle$ is described by the Schrödinger equation

$$i\hbar\,\frac{d}{dt}|\psi(t)\rangle = H(t)\,|\psi(t)\rangle, \tag{1.21}$$

where $H(t)$ is the Hamiltonian of the system. Eq. (1.20) shows that the coefficients $c_n(t)$ enter into the mean values through quadratic expressions of the type $c_n^*(t)c_p(t)$. These are simply the matrix elements of the projector $|\psi(t)\rangle\langle\psi(t)|$ onto the ket $|\psi(t)\rangle$:

$$c_p(t)\,c_n^*(t) = \langle u_p|\psi(t)\rangle\langle\psi(t)|u_n\rangle. \tag{1.22}$$

It is therefore natural to introduce the density operator $\rho(t)$, defined by

$$\rho(t) = |\psi(t)\rangle\langle\psi(t)|. \tag{1.23}$$

The density operator is represented in the $\{|u_n\rangle\}$ basis by a matrix called the density matrix, whose elements are

$$\rho_{pn}(t) = \langle u_p|\rho(t)|u_n\rangle = c_p(t)\,c_n^*(t). \tag{1.24}$$

The specification of $\rho(t)$ suffices to characterize the quantum state of the system; that is, it enables us to obtain all the physical predictions that can be calculated from $|\psi(t)\rangle$. We can indeed rewrite Eq. (1.19), Eq. (1.20) and Eq. (1.21) in terms of the density matrix. From Eq. (1.24) we obtain that the sum of the diagonal elements of the density matrix is equal to 1:

$$\sum_n |c_n(t)|^2 = \sum_n \rho_{nn}(t) = \mathrm{tr}\,\rho(t) = 1. \tag{1.25}$$
In addition, by means of the closure relation, Eq. (1.20) can be written as

$$\langle A(t)\rangle = \sum_{n,p} \langle u_p|\rho(t)|u_n\rangle\, \langle u_n|A|u_p\rangle = \mathrm{tr}\{\rho(t)A\}. \tag{1.26}$$

The time evolution of $\rho(t)$ can be deduced from the Schrödinger equation:

$$\frac{d}{dt}\rho(t) = \left(\frac{d}{dt}|\psi(t)\rangle\right)\langle\psi(t)| + |\psi(t)\rangle\left(\frac{d}{dt}\langle\psi(t)|\right) = \frac{1}{i\hbar}\,[H(t),\rho(t)]. \tag{1.27}$$
Finally, we can calculate from $\rho(t)$ the probabilities $\mathcal{P}(a_n)$ of the various results $a_n$ which can be obtained in the measurement of an observable $A$ at time $t$. We know that $\mathcal{P}(a_n)$ can be written as a mean value, namely that of the projector $P_n$ onto the eigensubspace associated with $a_n$. Using the closure relation we obtain

$$\mathcal{P}(a_n) = \langle\psi(t)|P_n|\psi(t)\rangle = \sum_p \langle u_p|P_n|\psi(t)\rangle\langle\psi(t)|u_p\rangle = \mathrm{tr}\{\rho(t)P_n\}. \tag{1.28}$$

To sum up, we found that for the density operator of a pure state the following properties hold:

$$\mathrm{tr}\,\rho(t) = 1, \tag{1.29}$$

$$\langle A(t)\rangle = \mathrm{tr}\{\rho(t)A\}, \tag{1.30}$$

$$\frac{d}{dt}\rho(t) = \frac{1}{i\hbar}\,[H(t),\rho(t)]. \tag{1.31}$$

Finally, directly from its definition Eq. (1.23), we have $\rho(t)^\dagger = \rho(t)$, and for a pure state

$$\rho(t)^2 = \rho(t), \tag{1.32}$$

$$\mathrm{tr}\,\rho^2(t) = 1. \tag{1.33}$$
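As a quick numerical illustration of the pure-state properties (1.29), (1.30), (1.32) and (1.33) — not part of the derivation — the following NumPy sketch uses the illustrative state $(|0\rangle + |1\rangle)/\sqrt{2}$ and the Pauli-$x$ operator as observable:

```python
import numpy as np

# Pure qubit state |psi> = (|0> + |1>)/sqrt(2) and its density operator
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())                 # Eq. (1.23)

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])    # observable A

assert np.isclose(np.trace(rho), 1.0)           # Eq. (1.29): unit trace
assert np.allclose(rho @ rho, rho)              # Eq. (1.32): rho is a projector
assert np.isclose(np.trace(rho @ rho), 1.0)     # Eq. (1.33): purity equals 1
# Eq. (1.30): tr(rho A) reproduces <psi|A|psi>
assert np.isclose(np.trace(rho @ sigma_x), psi.conj() @ sigma_x @ psi)
```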

**Density operator for mixed states** Let us now consider the mixed state case. Suppose the system is described by an ensemble $\{p_k, |\psi_k\rangle\}$. We want to calculate the probability $\mathcal{P}(a_n)$ that a measurement of the observable $A$ will yield the result $a_n$. If the state vector of the system is $|\psi_k\rangle$, the probability to observe $a_n$ is

$$\mathcal{P}_k(a_n) = \langle\psi_k|P_n|\psi_k\rangle. \tag{1.34}$$

To obtain $\mathcal{P}(a_n)$ one must weight $\mathcal{P}_k(a_n)$ by $p_k$ and then sum over $k$:

$$\mathcal{P}(a_n) = \sum_k p_k\,\mathcal{P}_k(a_n). \tag{1.35}$$

Now, by Eq. (1.28) we have

$$\mathcal{P}_k(a_n) = \mathrm{tr}\{\rho_k P_n\}, \tag{1.36}$$

where $\rho_k = |\psi_k\rangle\langle\psi_k|$ is the density operator corresponding to the state $|\psi_k\rangle$. Substituting this expression into Eq. (1.35), and defining the density operator of the mixture as $\rho = \sum_k p_k \rho_k$, we have

$$\mathcal{P}(a_n) = \mathrm{tr}\{\rho P_n\}. \tag{1.37}$$

We can generalize to a statistical mixture of states the equations Eqs. (1.29)-(1.31) valid for a pure state. Since $\rho$ is no longer a projector, instead of Eq. (1.32) and Eq. (1.33) we have

$$\rho^2 \neq \rho, \tag{1.38}$$

$$\mathrm{tr}\,\rho^2 \leq 1. \tag{1.39}$$
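These two properties are easy to verify numerically. The sketch below builds the equal-weight mixture of $|0\rangle$ and $|1\rangle$ as an illustrative example and checks Eqs. (1.38)-(1.39):

```python
import numpy as np

# Equal-weight statistical mixture: rho = (|0><0| + |1><1|)/2
p = [0.5, 0.5]
states = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
rho = sum(pk * np.outer(s, s) for pk, s in zip(p, states))

assert np.isclose(np.trace(rho), 1.0)       # normalization (1.29) still holds
assert not np.allclose(rho @ rho, rho)      # Eq. (1.38): no longer a projector
assert np.trace(rho @ rho) < 1.0            # Eq. (1.39): purity below 1
```

Here $\mathrm{tr}\,\rho^2 = 1/2$, the minimum possible value for a qubit.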

**Partial trace and reduced density operator** Perhaps the deepest application of the density operator is as a descriptive tool for subsystems of a composite quantum system. Such a description is provided by the reduced density operator. Suppose Alice and Bob are two parties who share a quantum state

$$|\psi\rangle = \sum_{i,j} c_{ij}\, |u_i^A\rangle \otimes |u_j^B\rangle, \tag{1.40}$$

where $\{|u_i^A\rangle\}$ and $\{|u_j^B\rangle\}$ are orthonormal bases for $A$ and $B$ respectively. By means of the Schmidt decomposition, $|\psi\rangle$ can be written as

$$|\psi\rangle = \sum_k \lambda_k\, |\chi_k\rangle \otimes |\zeta_k\rangle. \tag{1.41}$$

The density operator associated with this state is $\rho = |\psi\rangle\langle\psi|$. The expectation value of any operator $A$ acting only on the space of the first system, spanned by the states $\{|\chi_k\rangle\}$, is then

$$\langle\psi|A\otimes\mathbf{I}|\psi\rangle = \sum_{m,n} \lambda_m\lambda_n\, \langle\zeta_m|\zeta_n\rangle\, \langle\chi_m|A|\chi_n\rangle = \sum_n \lambda_n^2\, \langle\chi_n|A|\chi_n\rangle = \mathrm{tr}\{\rho_a A\}. \tag{1.42}$$

Here $A\otimes\mathbf{I}$ denotes $A$ acting on the first system and the identity operator $\mathbf{I}$ acting on the second, and

$$\rho_a = \sum_n \lambda_n^2\, |\chi_n\rangle\langle\chi_n| \tag{1.43}$$

is the reduced density operator for the first system. The reduced density operator provides a complete description of the statistical properties of the first system alone, but contains no information on the other system: if we restrict our attention to observables associated with only one of the two entangled systems, we take no account of any correlations between the two systems. We can write the expectation value given in Eq. (1.42) in the form

$$\langle A\rangle = \mathrm{tr}\big(A\otimes\mathbf{I}\,|\psi\rangle\langle\psi|\big) = \mathrm{tr}_a\big[A\;\mathrm{tr}_b(|\psi\rangle\langle\psi|)\big]. \tag{1.44}$$
The subscripts $a$ and $b$ denote traces over the spaces spanned by $\{|\chi_n\rangle\}$ and $\{|\zeta_m\rangle\}$ respectively. The operator $\mathrm{tr}_b(|\psi\rangle\langle\psi|)$ is the reduced density operator for the first system. More generally, if the two systems are described by the density operator $\rho_{AB}$, then the reduced density operator for system $A$ is

$$\rho_A = \sum_m \langle\zeta_m|\rho_{AB}|\zeta_m\rangle = \mathrm{tr}_b\,\rho_{AB}. \tag{1.45}$$
The reduced density operator for system $B$ is obtained by evaluating the trace of $\rho_{AB}$ over the states of system $A$. Let us consider, as an example, the Bell state

$$|\Phi^+\rangle = \frac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big). \tag{1.46}$$

This has density operator

$$\rho = |\Phi^+\rangle\langle\Phi^+| = \frac{|00\rangle\langle 00| + |11\rangle\langle 00| + |00\rangle\langle 11| + |11\rangle\langle 11|}{2}. \tag{1.47}$$

Tracing out the second qubit, we find the reduced density operator of the first qubit:

$$\rho_A = \mathrm{tr}_B(\rho) = \frac{\mathrm{tr}_B|00\rangle\langle 00| + \mathrm{tr}_B|11\rangle\langle 00| + \mathrm{tr}_B|00\rangle\langle 11| + \mathrm{tr}_B|11\rangle\langle 11|}{2} = \frac{|0\rangle\langle 0| + |1\rangle\langle 1|}{2} = \frac{\hat{\mathbf{I}}}{2}. \tag{1.48}$$

Notice that this state is a mixed state, since $\mathrm{tr}\big[(\hat{\mathbf{I}}/2)^2\big] = 1/2 < 1$. This is quite a remarkable result. The state of the joint system of two qubits is a pure state, that is, it is known exactly; however, the first qubit is in a mixed state, that is, a state about which we apparently do not have maximal knowledge. This strange property, that the joint state of a system can be completely known yet a subsystem be in a mixed state, is another hallmark of quantum entanglement.

**Postulates of QM revisited** It turns out that the postulates of quantum mechanics can be reformulated in the density operator language.

More generally, at a fixed time $t_0$ the system is completely described by a density operator $\rho$, which is a non-negative, self-adjoint operator normalized so that $\mathrm{tr}\,\rho = 1$. When the physical quantity $\mathcal{A}$ is measured on a system in the state $\rho$, the probability $\mathcal{P}(a_n)$ of obtaining the eigenvalue $a_n$ of the corresponding observable $A$ is

$$\mathcal{P}(a_n) = \mathrm{tr}\{\rho\,P_n\}, \tag{1.49}$$

where $P_n$ is the projector on the eigensubspace $\mathcal{H}_n$ associated with the eigenvalue $a_n$ of $A$.

If the measurement of the physical quantity $\mathcal{A}$ on the system in the state $\rho$ gives the result $a_n$, the state of the system immediately after the measurement is

$$\rho_n = \frac{P_n\,\rho\,P_n}{\mathrm{tr}\{P_n\rho\}}. \tag{1.50}$$

Finally, suppose that the evolution of the system is described by the unitary operator $U$. If the system was initially in the state $|\psi_i\rangle$ with probability $p_i$, then after the evolution has occurred the system will be in the state $U|\psi_i\rangle$ with probability $p_i$. Thus, the evolution of the density operator is described by the equation

$$\rho = \sum_i p_i\, |\psi_i\rangle\langle\psi_i| \;\xrightarrow{U}\; \sum_i p_i\, U|\psi_i\rangle\langle\psi_i|U^\dagger. \tag{1.51}$$

**1.3 Quantum correlations**

Now that the formalism of quantum mechanics has been briefly introduced, it is natural to ask what the consequences of this formalism are and how it affects our predictions. Asking what features make the quantum world depart from the classical one is anything but a trivial question; nevertheless, for our purposes, we can identify one important fingerprint in the superposition principle and in its consequences on composite systems. Let us consider for example a classical experiment: a coin toss. The possible outcomes for a single coin are head (H) with probability $p$ or tail (T) with probability $q = 1-p$. The sample space for a single coin is then the set $\mathcal{C}_1 = \{H, T\}$. Even if we are working in a classical context, it can be helpful to adopt the Dirac notation and assign a ket $|H\rangle \equiv |1\rangle = \begin{pmatrix}1\\0\end{pmatrix}$ to our "head" state and $|T\rangle \equiv |0\rangle = \begin{pmatrix}0\\1\end{pmatrix}$ to our "tail" state. It is important to stress that this notation is just a handy device borrowed from the quantum formalism to describe this classical experiment: the sample space $\mathcal{C}_1$ is not endowed with any algebraic structure. That being said, what we can say after a coin toss and before we observe the result is that the state of the coin is head with probability $p$ or tail with probability $q$. This uncertainty is epistemic, in the sense that it derives from our ignorance about the state of the system. Once the observation is performed, we "extract" information from the system that allows us to resolve this uncertainty. If we add a second coin whose sample space is $\mathcal{C}_2 = \{H, T\}$ to our single-coin system, the sample space of the composite system is the Cartesian product of $\mathcal{C}_1$ and $\mathcal{C}_2$, that is, the set of all ordered pairs $(c_1, c_2)$ with $c_1\in\mathcal{C}_1$ and $c_2\in\mathcal{C}_2$: $\mathcal{C}_{12} \equiv \mathcal{C}_1\times\mathcal{C}_2 = \{(c_1,c_2)\,|\,c_1\in\mathcal{C}_1,\ c_2\in\mathcal{C}_2\}$. Explicitly, $\mathcal{C}_{12} = \{HH, HT, TH, TT\}$. Again, we can represent our four possible states as a collection of vectors

$$|HH\rangle = \begin{pmatrix}1\\0\\0\\0\end{pmatrix},\quad |HT\rangle = \begin{pmatrix}0\\1\\0\\0\end{pmatrix},\quad |TH\rangle = \begin{pmatrix}0\\0\\1\\0\end{pmatrix},\quad |TT\rangle = \begin{pmatrix}0\\0\\0\\1\end{pmatrix}. \tag{1.52}$$

These four states cover all the possibilities for a classical two-coin system. Whatever the global state of the composite system, it can always be written as a product of the states of the individual subsystems. This classical two-coin composite system can be represented by a classical probability distribution

$$\rho_{C_2} = \begin{pmatrix} p(h,h) & 0 & 0 & 0\\ 0 & p(h,t) & 0 & 0\\ 0 & 0 & p(t,h) & 0\\ 0 & 0 & 0 & p(t,t) \end{pmatrix}, \tag{1.53}$$

where $p(h,t)$ is the probability of obtaining head for the first coin and tail for the second in the toss of the two coins.

Let us now consider a quantum coin, namely a two-level system (also called a qubit). A very well-known example of a two-state system is the spin of a spin-1/2 particle such as an electron, whose spin projection can take the values $\pm 1/2$. The state of a quantum coin is described by a vector in a two-dimensional complex Hilbert space $\mathcal{H}_1$, whose basis vectors are represented as in the previous case: $|H\rangle \equiv |1\rangle = \begin{pmatrix}1\\0\end{pmatrix}$ and $|T\rangle \equiv |0\rangle = \begin{pmatrix}0\\1\end{pmatrix}$. A crucial difference between the sample space of the classical coin and the Hilbert space of the quantum coin is that the latter is a vector space, thus endowed with the operations of addition and scalar multiplication. That means that the information on the state of the quantum coin can be encoded not only in a classical state of the form $\{|1\rangle, |0\rangle\}$, but also in a more general state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ with $|\alpha|^2 + |\beta|^2 = 1$. Quantum mechanics tells us that when we measure a qubit we get either the result 0, with probability $|\alpha|^2$, or the result 1, with probability $|\beta|^2$. A classical coin has two possible states: either heads or tails up. By contrast, a quantum coin can exist in a continuum of states between $|0\rangle$ and $|1\rangle$. One picture useful in thinking about qubits is the following geometric representation. Because $|\alpha|^2 + |\beta|^2 = 1$, we may rewrite $|\psi\rangle$ as

$$|\psi\rangle = \cos\frac{\theta}{2}\,|0\rangle + e^{i\varphi}\sin\frac{\theta}{2}\,|1\rangle. \tag{1.54}$$

The numbers $\theta$ and $\varphi$ define a point on the three-dimensional unit sphere. This sphere is often called the Bloch sphere, and it provides a useful means of visualizing the state of a single qubit (see Fig. 1.2).
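The Bloch-sphere picture can be checked numerically: the Pauli expectation values of the state (1.54) are exactly the Cartesian coordinates $(\sin\theta\cos\varphi, \sin\theta\sin\varphi, \cos\theta)$ of the corresponding point on the sphere. A short NumPy sketch, with arbitrary illustrative angles:

```python
import numpy as np

# Illustrative angles; any theta in [0, pi], phi in [0, 2*pi) works
theta, phi = 1.1, 0.7
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

# Pauli expectation values reproduce the point on the Bloch sphere
bloch = [np.real(psi.conj() @ s @ psi) for s in (sx, sy, sz)]
expected = [np.sin(theta) * np.cos(phi),
            np.sin(theta) * np.sin(phi),
            np.cos(theta)]
assert np.allclose(bloch, expected)
```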

Suppose we add a second quantum coin described by a state vector in a second Hilbert space $\mathcal{H}_2$. The state space of the composite system is the tensor product of the two Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$: $\mathcal{H}_{12} = \mathcal{H}_1\otimes\mathcal{H}_2$.

We can construct a basis for the total Hilbert space $\mathcal{H}_{12}$ as a tensor product of the two bases of the factor spaces, $|ij\rangle_{12} \equiv |i\rangle_1\otimes|j\rangle_2$:

$$|11\rangle = \begin{pmatrix}1\\0\\0\\0\end{pmatrix},\quad |10\rangle = \begin{pmatrix}0\\1\\0\\0\end{pmatrix},\quad |01\rangle = \begin{pmatrix}0\\0\\1\\0\end{pmatrix},\quad |00\rangle = \begin{pmatrix}0\\0\\0\\1\end{pmatrix}. \tag{1.55}$$

$\mathcal{H}_{12}$ is therefore spanned by $\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}$, and every vector $|\psi\rangle_{12}$ in $\mathcal{H}_{12}$ can be expanded in terms of this basis:

$$|\psi\rangle_{12} = \sum_{i,j} c_{ij}\,|ij\rangle_{12}. \tag{1.56}$$

**Figure 1.2.** Bloch sphere representation of a qubit.

What are the possible outcomes of our quantum coin toss? The simplest case is the analogue of the classical coins we already considered: if the two coins are each in one of the states $\{|0\rangle,|1\rangle\}$, the composite state will be one of the four possible tensor product states, so the possible outcomes in this case are $\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}$. But, as we said, the single coins can be in a superposition state, $|\phi\rangle_1 = \alpha_1|0\rangle + \beta_1|1\rangle$ and $|\chi\rangle_2 = \alpha_2|0\rangle + \beta_2|1\rangle$, so that the global state will be of the form

$$|\psi\rangle_{12} = |\phi\rangle_1\otimes|\chi\rangle_2 = \alpha_1\alpha_2|00\rangle + \alpha_1\beta_2|01\rangle + \beta_1\alpha_2|10\rangle + \beta_1\beta_2|11\rangle, \tag{1.57}$$

so basically an object of the form of Eq. (1.56). Once again, we can express the global state in terms of the two factor states. The final, and possibly most interesting, case is that of a global state $|\psi\rangle_{12}\in\mathcal{H}_{12}$ of the composite system which cannot be written as a product of the states of the individual subsystems. Such is for example the state $|\Psi^+\rangle = \frac{1}{\sqrt{2}}\big(|01\rangle + |10\rangle\big)$.


**1.3.1 Correlations taxonomy**

**Pure states** Suppose we have a physical system consisting of two parts, each associated with a finite-dimensional Hilbert space, $\mathcal{H}_A$ and $\mathcal{H}_B$ respectively. The Hilbert space of the composite system is defined as usual as the tensor product of the two factor spaces, $\mathcal{H}_{AB} = \mathcal{H}_A\otimes\mathcal{H}_B$. If the system is prepared in a pure state $\rho_{AB} = |\psi\rangle\langle\psi|$ with $|\psi\rangle\in\mathcal{H}_{AB}$, then essentially two possibilities can occur:

· Uncorrelated state: the two subsystems are completely independent. In this case the state takes the form of a tensor product state

$$\rho_{AB} = \rho_A\otimes\rho_B, \tag{1.58}$$

with $\rho_A$ a state on $\mathcal{H}_A$ and $\rho_B$ a state on $\mathcal{H}_B$. In this case there are no correlations of any form (classical or quantum) between the two parts of the composite system. We denote by $\mathcal{P}_{AB}$ the set of product states:

$$\mathcal{P}_{AB} := \{\rho_{AB}\,|\,\rho_{AB} = \rho_A\otimes\rho_B\}. \tag{1.59}$$

· Correlated state: the second possibility is that, instead, there exist no local states $\rho_A$ and $\rho_B$ such that the global state can be written in the tensor product form $\rho_{AB} = \rho_A\otimes\rho_B$.

In this case, $\rho_{AB}$ describes an entangled state of the two subsystems. Entanglement encompasses any possible form of correlations in pure bipartite states, and can manifest in different yet equivalent ways. For instance, every pure entangled state is nonlocal, meaning that it can violate a Bell inequality. Similarly, every pure entangled state is necessarily disturbed by the action of any possible local measurement. Therefore, entanglement, nonlocality, and QCs are generally synonymous for pure bipartite states [21, 22]. The situation is subtler and richer in case $A$ and $B$ are globally prepared in a mixed state, described by a density matrix $\rho_{AB}\in\mathcal{D}_{AB}$, where $\mathcal{D}_{AB}$ denotes the convex set of all density operators acting on $\mathcal{H}_{AB}$. The state $\rho_{AB}$ is separable, or unentangled, if it can be prepared by means of local operations and classical communication (LOCC), i.e., if it takes the form

$$\rho_{AB} = \sum_i p_i\, \rho_A^{(i)}\otimes\rho_B^{(i)}, \tag{1.60}$$

with $\{p_i\}$ a probability distribution, and $\{\rho_A^{(i)}\}$ quantum states of $A$ and $\{\rho_B^{(i)}\}$ quantum states of $B$. The set $\mathcal{S}_{AB}$ of separable states is therefore constituted by all states $\rho_{AB}$ of the form given by Eq. (1.60):

$$\mathcal{S}_{AB} = \Big\{\rho_{AB}\,\Big|\,\rho_{AB} = \sum_i p_i\, \rho_A^{(i)}\otimes\rho_B^{(i)}\Big\}. \tag{1.61}$$

Any other state $\rho_{AB}\notin\mathcal{S}_{AB}$ is entangled. Mixed entangled states are hence defined as those which cannot be decomposed as a convex mixture of product states. Notice that, unlike the special case of pure states, the set of separable states is in general strictly larger than the set of product states, $\mathcal{S}_{AB}\supset\mathcal{P}_{AB}$. Entanglement, one of the most fundamental resources of quantum information theory, can then be recognized as a direct consequence of two key ingredients of quantum mechanics: the superposition principle and the tensorial structure of the Hilbert space. Within the set of entangled states, one can further distinguish some layers of more stringent forms of non-classicality. In particular, some but not all entangled states are steerable, and some but not all steerable states are nonlocal. Steering, i.e. the possibility of manipulating the state of one subsystem by making measurements on the other, captures the original essence of inseparability noted by Einstein, Podolsky and Rosen (EPR) and appreciated by Schrödinger, and has recently been formalised in the modern language of quantum information theory. It is an asymmetric form of correlations, which means that some states can be steered from $A$ to $B$ but not the other way around. On the other hand, nonlocality, intended as a violation of EPR local realism, represents the most radical departure from a classical description of the world, and has received considerable attention in the half century since Bell's 1964 theorem [23].

Nonlocality, like entanglement, is a symmetric type of correlations, invariant under the swap of parties $A$ and $B$. An important question we should consider is: are the correlations in separable states completely classical? In general, this is not the case. The only states which may be regarded as classically correlated form a negligible subset of separable states, and will be formally defined in the next subsection.

**Figure 1.3.** Hierarchy of correlations in states of composite quantum systems. Pure states can be either uncorrelated or entangled. For mixed states, several layers of non-classical correlations have been identified. In order of decreasing strength, these can be classified as: nonlocality → steering → entanglement → general quantum correlations. All of these forms of non-classical correlations can enable classically impossible tasks. Picture taken from [21].

**Classically correlated states** Let us begin by introducing the popular characters of quantum information: Alice and Bob. Suppose Alice is in Pisa and Bob is in Kraków, in their respective laboratories, with only a telephone line between their labs. Alice has a classical bit (it could be, for example, a coin whose possible outcomes are head with probability $p_0$ and tail with probability $p_1 = 1-p_0$) and Bob a quantum system that can exist in one of two independent quantum states $\rho_B^0$ or $\rho_B^1$. For convenience, we shall adopt the same formalism for both, so if the outcome of the coin flip is head we denote the state of the classical system by $|0\rangle$, and if the outcome is tail by $|1\rangle$. Bob prepares his quantum system in the state $\rho_B^0$. Now Alice flips the coin: if the outcome is head, Bob leaves his state unchanged; if the outcome is tail, he changes his state from $\rho_B^0$ to $\rho_B^1$. The state after the coin toss is

$$\text{head}: |0\rangle\langle 0|\otimes\rho_B^0, \qquad \text{tail}: |1\rangle\langle 1|\otimes\rho_B^1. \tag{1.62}$$

Since Bob's action depends on the outcome of the coin, Alice and Bob need to communicate via the telephone line; this is the classical communication part of LOCC. After the communication, Bob can make the appropriate unitary transformation on his system, which is the local operation part of LOCC. Now suppose Alice and Bob go to Brussels to attend a quantum information conference and, after several days out and a busy schedule, they forget the outcome of the coin toss. Because of their incomplete knowledge, all they can say about the state of the system is that it is $\rho_B^0$ with probability $p_0$ and $\rho_B^1$ with probability $p_1 = 1-p_0$. The resulting state can be written as a density operator

$$\rho_{AB} = p_0\,|0\rangle\langle 0|_A\otimes\rho_B^0 + p_1\,|1\rangle\langle 1|_A\otimes\rho_B^1. \tag{1.63}$$
This is an example of what we call a classical-quantum state, or classical on $A$, since subsystem $A$ is classical. Equivalently, we can say that $\rho_{AB}$ is classically correlated with respect to subsystem $A$. Going beyond just bits, they can construct a computer program that has $K$ outcomes. Each outcome $k$ occurs with probability $p_k$, and Bob prepares the corresponding quantum state $\rho_B^{(k)}$. Any classical-quantum state can then be written as a statistical mixture in the following way:

$$\rho_{AB} = \sum_{i=1}^{K} p_i\, |i\rangle\langle i|_A\otimes\rho_B^{(i)}, \tag{1.64}$$

where $\rho_B^{(i)}$ is the quantum state of subsystem $B$ when $A$ is in the state $|i\rangle_A$, and $\{p_i\}$ is a probability distribution. The set $\mathcal{C}_A$ of classical-quantum states is then formed by any state that can be written as

$$\mathcal{C}_A := \Big\{\rho_{AB}\,\Big|\,\rho_{AB} = \sum_i p_i\, |i\rangle\langle i|_A\otimes\rho_B^{(i)}\Big\}, \tag{1.65}$$
where $\{|i\rangle_A\}$ is any orthonormal basis of subsystem $A$ and $\{\rho_B^{(i)}\}$ are quantum states of subsystem $B$. We stress that the orthonormal basis $\{|i\rangle_A\}$ appearing in Eq. (1.65) is not fixed, but rather can be chosen from all the orthonormal bases of subsystem $A$. The set $\mathcal{C}_A$ may look similar to the set $\mathcal{S}_{AB}$ of separable states defined in Eq. (1.61), but there is an important difference: in $\mathcal{S}_{AB}$, any state of subsystem $A$ is allowed in the ensemble, while in $\mathcal{C}_A$ only projectors corresponding to an orthonormal basis can be considered. This reveals that classical-quantum states form a significantly smaller subset of the set of separable states, $\mathcal{C}_A\subset\mathcal{S}_{AB}$. Swapping the roles of $A$ and $B$, one can define the quantum-classical states along analogous lines,

$$\rho_{AB} = \sum_{j=1}^{K} p_j\, \rho_A^{(j)}\otimes|j\rangle\langle j|_B, \tag{1.66}$$
and the corresponding set $\mathcal{C}_B$:

$$\mathcal{C}_B := \Big\{\rho_{AB}\,\Big|\,\rho_{AB} = \sum_j p_j\, \rho_A^{(j)}\otimes|j\rangle\langle j|_B\Big\}, \tag{1.67}$$

where $\{|j\rangle_B\}$ is any orthonormal basis of subsystem $B$ and $\{\rho_A^{(j)}\}$ are any quantum states of subsystem $A$. These states can be equivalently described as classically correlated with respect to subsystem $B$. Finally, if we consider the composition of two classical objects, we can introduce the set of classical-classical states, or classical on $A$ and $B$, which are classically correlated with respect to both subsystems. A state is classical-classical if it can be written as

$$\rho_{AB} = \sum_{ij} p_{ij}\, |i\rangle\langle i|_A\otimes|j\rangle\langle j|_B, \tag{1.68}$$

where we now have a joint probability distribution $\{p_{ij}\}$ and orthonormal bases for both subsystems $A$ and $B$. The set $\mathcal{C}_{AB}$ of classical-classical states is then formed by any state that can be written as in Eq. (1.68),

$$\mathcal{C}_{AB} := \Big\{\rho_{AB}\,\Big|\,\rho_{AB} = \sum_{ij} p_{ij}\, |i\rangle\langle i|_A\otimes|j\rangle\langle j|_B\Big\}, \tag{1.69}$$
where $\{|i\rangle_A\}$ and $\{|j\rangle_B\}$ are any orthonormal bases of subsystems $A$ and $B$, respectively. These states can be thought of as the embedding of a joint classical probability distribution $p_{ij}$ into the density matrix formalism, labelled by orthonormal index vectors on each subsystem. It holds by definition that classical-classical states amount to those which are both classical-quantum and quantum-classical, that is, $\mathcal{C}_{AB} = \mathcal{C}_A\cap\mathcal{C}_B$. More generally, we have

$$\mathcal{P}_{AB}\subset\mathcal{C}_{AB}\subset\{\mathcal{C}_A,\mathcal{C}_B\}\subset\mathcal{S}_{AB}\subset\mathcal{D}_{AB}. \tag{1.70}$$

In this hierarchy, only the two rightmost sets (containing the separable states and all states, respectively) are convex, while all the remaining ones are not; that is, by mixing two classically correlated states one may obtain a state which is not classically correlated anymore. Another interesting fact is that, while separable states span a finite volume in the space of all quantum states, the sets $\mathcal{C}_A$, $\mathcal{C}_B$, and consequently $\mathcal{C}_{AB}$, are of null measure and nowhere dense within $\mathcal{S}_{AB}$.

**1.3.2 Measurement of quantum correlations**

**Von Neumann entropy** The Shannon entropy measures the uncertainty associated with a classical probability distribution. Quantum states are described in a similar fashion, with density operators replacing probability distributions. In this section we generalize the definition of the Shannon entropy to quantum states. Von Neumann defined the entropy of a quantum state $\rho$ by the formula [14]

$$S(\rho) = -\mathrm{tr}(\rho\log\rho). \tag{1.71}$$

In this formula logarithms are taken to base two, as usual. If $\lambda_i$ are the eigenvalues of $\rho$, then von Neumann's definition can be re-expressed as

$$S(\rho) = -\sum_i \lambda_i\log\lambda_i, \tag{1.72}$$

where we define $0\log 0 \equiv 0$, as for the Shannon entropy. For calculations it is usually this last formula which is most useful. For instance, the completely mixed density operator in a $d$-dimensional space, $\hat{\mathbf{I}}/d$, has entropy $\log d$. From now on, when we refer to entropy, it will usually be clear from context whether we mean the Shannon or the von Neumann entropy.
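Eq. (1.72) translates directly into a few lines of NumPy: diagonalize $\rho$ and sum over its non-zero eigenvalues. The sketch below (the function name and the choice $d = 4$ are ours, for illustration) checks the two extreme cases just mentioned:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum_i lambda_i * log2(lambda_i), with 0 log 0 := 0 (Eq. 1.72)."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]          # zero eigenvalues contribute nothing
    return float(-np.sum(lam * np.log2(lam)))

d = 4
pure = np.zeros((d, d)); pure[0, 0] = 1.0   # a projector |psi><psi|
mixed = np.eye(d) / d                       # completely mixed state I/d

assert np.isclose(von_neumann_entropy(pure), 0.0)          # pure: S = 0
assert np.isclose(von_neumann_entropy(mixed), np.log2(d))  # maximal: log d
```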

**Relative entropy** As for the Shannon entropy, it is extremely useful to define a quantum version of the relative entropy. Suppose $\rho$ and $\sigma$ are density operators. The relative entropy of $\rho$ to $\sigma$ is defined by

$$S(\rho\|\sigma) \equiv \mathrm{tr}(\rho\log\rho) - \mathrm{tr}(\rho\log\sigma). \tag{1.73}$$

As with the classical relative entropy, the quantum relative entropy can sometimes be infinite. In particular, the relative entropy is defined to be $+\infty$ if the kernel of $\sigma$ (the vector space spanned by the eigenvectors of $\sigma$ with eigenvalue 0) has non-trivial intersection with the support of $\rho$ (the vector space spanned by the eigenvectors of $\rho$ with non-zero eigenvalue), and is finite otherwise. It can be shown that the quantum relative entropy is non-negative, a result sometimes known as Klein's inequality:

**Theorem 1** (Klein's inequality). The quantum relative entropy is non-negative,

$$S(\rho\|\sigma) \geq 0, \tag{1.74}$$

with equality if and only if $\rho = \sigma$.
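Klein's inequality can be probed numerically. The following sketch implements Eq. (1.73) through the spectral decomposition of both operators (the helper names are ours, and $\sigma$ is taken full rank so the result is finite); the example states are illustrative qubit density matrices:

```python
import numpy as np

def relative_entropy(rho, sigma):
    """S(rho||sigma) = tr(rho log rho) - tr(rho log sigma), base-2 logs (Eq. 1.73).
    Assumes sigma has full rank, so the result is finite."""
    def log2m(m):
        lam, v = np.linalg.eigh(m)
        # clip avoids log(0); zero eigenvalues of rho drop out of tr(rho log rho)
        return v @ np.diag(np.log2(np.clip(lam, 1e-300, None))) @ v.conj().T
    return float(np.real(np.trace(rho @ (log2m(rho) - log2m(sigma)))))

rho = np.diag([0.75, 0.25])
sigma = np.eye(2) / 2

assert abs(relative_entropy(rho, rho)) < 1e-9   # equality iff rho = sigma
assert relative_entropy(rho, sigma) > 0.0       # Klein's inequality, Eq. (1.74)
```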

**Properties of the von Neumann entropy** The von Neumann entropy has many interesting and useful properties:

· Non-negativity: from the definition it is clear that the entropy is non-negative. It is zero if and only if the state is pure:

$$S(\rho) = 0 \iff \rho = |\psi\rangle\langle\psi|. \tag{1.75}$$

· In a $d$-dimensional Hilbert space the entropy is at most $\log d$. The entropy is equal to $\log d$ if and only if the system is in the completely mixed state:

$$S(\rho) = \log d \iff \rho = \frac{1}{d}\,\hat{\mathbf{I}}. \tag{1.76}$$

· The von Neumann entropy is invariant under a unitary transformation:

$$S\big(U\rho U^\dagger\big) = S(\rho). \tag{1.77}$$

This is a consequence of the fact that the entropy is a function only of the eigenvalues of $\rho$, and that these, unlike the associated eigenvectors, are unchanged by a unitary transformation. It follows that the von Neumann entropy of an isolated quantum system is unchanged by its natural evolution.

· Concavity: the entropy is a concave function of its inputs. That is, it satisfies the inequality

$$S\Big(\sum_i p_i\rho_i\Big) \geq \sum_i p_i\, S(\rho_i). \tag{1.78}$$

· It can be shown that the following inequalities hold:

$$\sum_i p_i\, S(\rho_i) \leq S\Big(\sum_i p_i\rho_i\Big) \leq \sum_i p_i\, S(\rho_i) - \sum_i p_i\log p_i. \tag{1.79}$$

**1.3.3 von Neumann Entropy**

Our study of classical information demonstrated the significance of the joint probability distribution $P(x_i, y_j)$ for two events $X$ and $Y$, and of the associated entropy $H(X, Y)$. We also encountered the mutual information $H(X : Y)$ as a measure of correlation between the events $X$ and $Y$, and the conditional entropy $H(Y|X)$. It is natural, in quantum information theory, to define analogous properties based on the von Neumann entropy for the state of two quantum systems, $A$ and $B$, which we denote $S(A, B)$:

$$S(A,B) = S(\rho_{AB}) = -\mathrm{tr}(\rho_{AB}\log\rho_{AB}), \tag{1.80}$$

where $\rho_{AB}$ is the density operator for the two systems. We can also define von Neumann entropies for the $A$ and $B$ systems alone in terms of their reduced density operators:

$$S(A) = -\mathrm{tr}_A(\rho_A\log\rho_A), \tag{1.81}$$

$$S(B) = -\mathrm{tr}_B(\rho_B\log\rho_B). \tag{1.82}$$

If the two systems are statistically independent, so that $\rho_{AB} = \rho_A\otimes\rho_B$, then $S(A,B) = S(A) + S(B)$. This property is sometimes referred to as additivity. More generally, we find that the entropy is subadditive, in that

$$S(A,B) \leq S(A) + S(B). \tag{1.83}$$

This inequality follows directly from the positivity of the relative entropy:

$$S(\rho_{AB}\|\rho_A\otimes\rho_B) = \mathrm{tr}\big[\rho_{AB}\big(\log\rho_{AB} - \log\rho_A\otimes\rho_B\big)\big] = \mathrm{tr}(\rho_{AB}\log\rho_{AB}) - \mathrm{tr}\big[\rho_{AB}\,(\log\rho_A)\otimes\hat{\mathbf{I}}\big] - \mathrm{tr}\big[\rho_{AB}\,\hat{\mathbf{I}}\otimes(\log\rho_B)\big] = -S(A,B) + S(A) + S(B). \tag{1.84}$$

In classical information theory, $H(X,Y)$ is bounded from below. In particular, it must be greater than or equal to the larger of $H(X)$ and $H(Y)$:

$$H(X,Y) \geq \sup\big(H(X), H(Y)\big). \tag{1.85}$$

This inequality does not hold, however, for the von Neumann entropy. Consider, in particular, a pure entangled state of the two systems written as a Schmidt decomposition

|ψi_{AB}=

### ∑

n

*λ*n|χi_{A}|ζi_{B}. (1.86)

The von Neumann entropy for the whole system is zero, that is, S(A, B) =0.
*Nevertheless, because the state is entangled, each of the states ρ*A *and ρ*B is
mixed and the associated entropies will not be zero. The von Neumann entropy
for each of these is the same:

S(A) =

### ∑

n

This must be true for all pure states of A and B. Here S(A)and S(B) are posi-tive but S(A, B)is zero and it is clear, therefore, that the classical inequality in Eq.(1.85) does not apply to the von Neumann entropy. In its place we have the Araki-Lieb inequality

S(A, B) ≥ |S(A) − S(B)|. (1.88)

This is clearly satisfied for entangled pure states, for which both sides of the inequality are zero. We can combine this inequality with the subadditivity condition to place both lower and upper bounds on S(A, B):

|S(A) − S(B)| ≤ S(A, B) ≤ S(A) + S(B). (1.89)

It is helpful to introduce von Neumann analogues of the mutual information H(A : B) and of the conditional entropy H(B|A). The von Neumann, or quantum, mutual information is defined as

S(A : B) = S(A) + S(B) − S(A, B). (1.90)

This quantity, like its classical counterpart, is clearly symmetrical in A and B, and it is also greater than or equal to zero, by virtue of subadditivity. The classical mutual information is restricted to be less than or equal to the lesser of H(X) and H(Y). The quantum mutual information is instead restricted, by virtue of the Araki-Lieb inequality, to the range

0 ≤ S(A : B) ≤ 2 inf(S(A), S(B)). (1.91)

The mutual information is, as we have seen in the first section, a measure of correlation, and the same can be said of the von Neumann mutual information for quantum information. Indeed, this quantity is sometimes referred to as the index of correlation.
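A maximally entangled pure state makes these bounds concrete. The sketch below (a numerical illustration assuming NumPy; helper names are ours) uses a Bell state to show that the classical bound (1.85) fails for the von Neumann entropy, while the Araki-Lieb inequality (1.88) and the quantum mutual information range (1.91) hold, with S(A : B) saturating the upper bound 2 inf(S(A), S(B)) = 2.

```python
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)) + 0.0)

# Bell state |psi> = (|00> + |11>)/sqrt(2): pure, hence S(A,B) = 0.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_ab = np.outer(psi, psi)

r = rho_ab.reshape(2, 2, 2, 2)
rho_a = np.einsum('ikjk->ij', r)   # trace over B: maximally mixed
rho_b = np.einsum('kikj->ij', r)   # trace over A: maximally mixed

S_ab, S_a, S_b = entropy(rho_ab), entropy(rho_a), entropy(rho_b)
assert S_ab < max(S_a, S_b)              # classical bound (1.85) violated
assert S_ab >= abs(S_a - S_b) - 1e-9     # Araki-Lieb (1.88) satisfied

S_mutual = S_a + S_b - S_ab              # Eq. (1.90): equals 2 here
assert S_mutual <= 2 * min(S_a, S_b) + 1e-9   # range (1.91)
```

Here S(A) = S(B) = 1 while S(A, B) = 0, exactly the situation described above for entangled pure states.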

The von Neumann conditional entropy is defined by direct analogy with its classical counterpart, H(Y|X), to be

S(B|A) = S(A, B) − S(A). (1.92)

This quantity, in contrast to its classical counterpart, is not restricted to be greater than or equal to zero. Indeed, it can take any value between S(B) and −S(A). The classical conditional entropy H(Y|X) has the simple and physically appealing interpretation as the information about X and Y not already contained in X alone. In other words, the conditional entropy H(X|Y) (or H(Y|X)) represents the average uncertainty (ignorance) we have on X (Y) given the value of Y (X). To put it another way, if we write

H(X, Y) = H(X) + H(Y|X), (1.93)

then the information associated with X and Y is simply that associated with X plus that for Y when X is known.

H(X|Y) = H(X, Y) − H(Y) = −∑_{i,j} p_ij log(p_ij / r_j) = −∑_{i,j} r_j (p_ij / r_j) log(p_ij / r_j) = ∑_j r_j H(X|y_j), (1.94)

H(Y|X) = H(X, Y) − H(X) = −∑_{i,j} p_ij log(p_ij / q_i) = −∑_{i,j} q_i (p_ij / q_i) log(p_ij / q_i) = ∑_i q_i H(Y|x_i). (1.95)

It is tempting to apply the same interpretation to S(B|A) and to write

S(A, B) = S(A) + S(B|A). (1.96)

However, in general this quantity does not represent the information content of the post-measurement state, which depends on the observable one has measured on the subsystem A.
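The classical identities (1.93)–(1.95) are easy to verify on a small example. The following sketch (a numerical illustration assuming NumPy; the distribution is invented for the purpose) checks that H(Y|X) computed as an average of conditional entropies agrees with the chain rule H(X, Y) − H(X), and that it lies in the classical range [0, H(Y)].

```python
import numpy as np

# Joint distribution p_ij = P(x_i, y_j) for two correlated binary variables.
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])
q = p.sum(axis=1)   # marginals q_i = P(x_i)
r = p.sum(axis=0)   # marginals r_j = P(y_j)

def H(dist):
    dist = dist[dist > 0]
    return float(-np.sum(dist * np.log2(dist)))

H_xy = H(p.ravel())
# Eq. (1.95): H(Y|X) as the average entropy of the conditionals P(y|x_i).
H_y_given_x = sum(q[i] * H(p[i] / q[i]) for i in range(2))

assert abs(H_y_given_x - (H_xy - H(q))) < 1e-12   # chain rule, Eq. (1.93)
assert 0 <= H_y_given_x <= H(r) + 1e-12           # classical range
```

For this distribution H(Y|X) ≈ 0.722 bits: knowing X removes part, but not all, of the uncertainty on Y.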

Intuitively, if we measure an observable A = A ⊗ Î_B, the state is projected onto one of the eigenstates of A, while a measurement of an observable A′ = A′ ⊗ Î_B not commuting with A projects the state onto an eigenstate of A′ that is different from those of A. We recall that a local measurement {M_n} on A whose result is n maps the system into the state ρ_n = M_n ρ M_n† / tr(M_n† M_n ρ). Thus, the quantum version of Eq. (1.95) is

S(B|{M_A}) = ∑_k p_k S(ρ_k), (1.97)

and the quantity

J(B : A) = S(B) − S(B|{M_A}) (1.98)

evaluates the average information gained on the subsystem B after the local measurement {M_A} has been performed. The key point is that, in contrast to the classical case, this quantity is always smaller than or equal to the quantum mutual information in the initial state:

I(ρ) = S(A : B) ≥ J(B : A)_{M_A}, ∀ {M_A}. (1.99)

Indeed, it is easy to verify that S(B|{M_A}) ≥ S(B|A). Note the contrast between the classical and quantum ranges of the conditional entropy:

0 ≤ H(B|A) ≤ H(B), (1.100)

−S(A) ≤ S(B|A) ≤ S(B). (1.101)
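The inequalities (1.99) and (1.101) can be exhibited concretely on a Bell state, for which S(B|A) = −1 is negative. The sketch below (a numerical illustration assuming NumPy; a projective measurement in the computational basis stands in for a generic {M_A}, and all helper names are ours) computes S(B|{M_A}) via Eq. (1.97) and compares I and J.

```python
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)) + 0.0)

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # Bell state
rho = np.outer(psi, psi)
r4 = rho.reshape(2, 2, 2, 2)
rho_a = np.einsum('ikjk->ij', r4)
rho_b = np.einsum('kikj->ij', r4)

# Projective measurement {M_k = |k><k| x I} on A, computational basis.
S_meas = 0.0                      # S(B|{M_A}), Eq. (1.97)
for k in range(2):
    P = np.zeros((2, 2)); P[k, k] = 1.0
    M = np.kron(P, np.eye(2))
    p_k = float(np.trace(M @ rho @ M))
    if p_k > 1e-12:
        rho_k = (M @ rho @ M) / p_k
        rho_k_B = np.einsum('kikj->ij', rho_k.reshape(2, 2, 2, 2))
        S_meas += p_k * entropy(rho_k_B)

S_cond = entropy(rho) - entropy(rho_a)               # S(B|A) = -1 here
I = entropy(rho_a) + entropy(rho_b) - entropy(rho)   # Eq. (1.90) = 2
J = entropy(rho_b) - S_meas                          # Eq. (1.98) = 1

assert S_meas >= S_cond        # S(B|{M_A}) >= S(B|A)
assert I >= J - 1e-9           # Eq. (1.99)
```

For the Bell state the gap I − J = 1 is maximal: the measurement destroys precisely the genuinely quantum part of the correlations.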
Therefore, the mutual information, which measures the total (classical and quantum) correlations, decreases after the measurement. Since classical correlations are certainly kept invariant, the correlations which have been lost appear to be genuinely quantum. Also, they seem to evaluate how much the state is disturbed by the quantum measurement. The state is only classically correlated if, and only if, there exists at least one measurement {M_A} on A that does not affect the mutual information and achieves the equality in Eq. (1.99), I = J_{M_A}. We are able to quantify the quantum correlations of a state by finding the minimum of the conditional entropy S(B|{M_A}) over all the possible measurements on A, i.e. determining the maximum of J(B : A)_{M_A}, which is the largest possible amount of correlations preserved after a measurement on A, and then subtracting it from the total correlations the system had before the