
Optimal Methods for Quantum Batteries

University of Pisa

Department of Physics “E. Fermi”

Master’s Degree in Physics

Curriculum of Theoretical Physics

Optimal Methods for Quantum Batteries

Graduation Session October 16th, 2019

Academic Year 2018/2019

Candidate:

Francesco Mazzoncini

Supervisor:

Prof. Vittorio Giovannetti

Internal Supervisor:

Dr. Davide Rossini


Contents

1 Introduction
1.1 Outline of the work
2 Key Concepts of Quantum Information Theory
2.1 Quantum States
2.1.1 Qubit
2.1.2 Composite Systems
2.2 Quantum Unitary Evolution
2.2.1 Pure States
2.2.2 Mixed States
2.2.3 Open Quantum Systems
3 Quantum Batteries
3.1 Ergotropy
3.1.1 Passive States
3.1.2 Ergotropy calculation
3.2 Physical Implementations
3.2.1 Spin-chain Battery
3.2.2 Cavity Assisted Batteries
4 Optimal Control Theory
4.1 Optimal Control Problem
4.2 Variational Approach
4.2.1 Introduction to calculus of variations
4.2.2 Necessary Conditions for Optimal Control without Control Boundaries
4.2.3 Boundary Conditions
4.2.4 Pontryagin's Minimum Principle
4.2.5 Summary of the necessary conditions for optimality
4.2.6 Example of a Bang-Bang Protocol
5 Continuous Variable Systems
5.1 Fock Space
5.2 Coherent and Squeezed States
5.2.1 Coherent States
5.2.2 Squeezed States
5.3 Gaussian States
5.3.1 Canonical Commutation Relations and Definition of a Gaussian State
5.3.2 Displacement Operator
5.3.3 Symplectic transformation
5.3.4 Single-mode Gaussian States
6 Unique System Optimization
6.1 Commutative case
6.2 H0 vs H1
6.2.1 Qubit Case
6.3 Non-commutative case
6.3.1 Qubit
6.3.2 Numerical analysis
6.3.3 Non-commutative case with a quantum harmonic oscillator
6.4 Conclusions
7 Mediated Charging Process
7.1 Two Qubits
7.1.1 Commutative case
7.1.2 Non-commutative case
7.2 Quantum Harmonic Oscillator vs Qubit
7.3 Quantum Harmonic Oscillator vs Quantum Harmonic Oscillator
7.4 Conclusions
8 Summary
A Proofs of Non-Existence of Singular Intervals
A.1 Single Qubit
A.2 Two Qubits with mixed initial state
B Ergotropy Calculation for Gaussian States


Chapter 1

Introduction

In recent years, with the rapid development of new quantum technologies [1,2], there has been worldwide interest in exploiting quantum phenomena that arise at the microscopic level. The main topic of this thesis pertains to the emerging field of quantum thermodynamics [3–5], which attempts to extend the laws of thermodynamics to microscopic systems dominated by quantum effects. In particular we will focus on the study of so-called "Quantum Batteries" [6–13], i.e. quantum mechanical systems for storing energy, in which quantum effects can be used to obtain more efficient and faster charging processes with respect to classical systems.

This interesting area of research is based on the well-motivated idea that microscopic quantum features can be exploited to boost performance at the macroscopic scale. Moreover, the study of Quantum Batteries is further justified by the increasing miniaturization of everyday electrical devices, which calls for the implementation of smaller and smaller batteries: as their dimensions approach the size of molecules and atoms, their description has to account for quantum mechanical effects. This blossoming research field has to address many different questions, such as the stabilization of stored energy, the practical implementation of Quantum Batteries and the study of optimal charging processes, offering a vast research panorama on both the theoretical and experimental ends. Within this framework, in order to find the best charging processes for our Quantum Batteries, we will use some features of Quantum Control Theory [14–16], a powerful mathematical tool with applications in many fields of physics, such as quantum optics [17] and physical chemistry [18–20]. It has also contributed to the understanding of interesting aspects of quantum mechanics, such as the quantum speed limit [21–24], and to the generation of efficient quantum gates in open quantum systems [25,26].

Quantum Control Theory studies how a quantum state can be guided into a new chosen state of the Hilbert space of the system (the so-called "controllability" problem [27]) and how this ability to control the system can be used to reach a given objective (for us, the objective is to charge the Quantum Battery).

In this thesis, we will study how a two-level system (TLS) or a quantum harmonic oscillator (QHO) can be optimally charged by modulating an external Hamiltonian. In order to find the best charging protocol we will use Pontryagin's Minimum Principle [28,29], a very useful theorem of classical Optimal Control Theory, which is frequently used also in Quantum Control Theory [30,31]. We will show that our Quantum Batteries can be optimally charged through a so-called Bang-Bang-off modulation of the intensity of the external Hamiltonian (such that the intensity λ(t) can assume only its boundary values and the value 0, i.e. external Hamiltonian turned off).

1.1 Outline of the work

In what follows we explain the structure of this thesis, chapter by chapter.

Chapter 2 In this chapter, we give an overview of the fundamental concepts of quantum information theory, in order to fix some useful notation that will be used in the main chapters.

Chapter 3 In this chapter, we give a brief explanation of what a Quantum Battery is and which features it may have, outlining the possible physical im-plementations. We also define a fundamental quantity for our charging problem: the maximal amount of work that can be extracted from a quantum system via a unitary channel, called ergotropy.

Chapter 4 This chapter introduces classical Optimal Control Theory, providing a simple mathematical description of an optimization problem. In addition we describe a variational approach to that problem, providing the necessary conditions for optimality of a given process: Pontryagin's Minimum Principle, utilized in the main chapters in its quantum version.

Chapter 5 In this chapter, we analyze continuous variable systems and their features. We focus on a particular subclass of states of continuous variable systems, called Gaussian states, for which we can analytically calculate their ergotropy. Indeed, in the following chapters we assume our Quantum Battery to be either a two-level system or a quantum harmonic oscillator (i.e. a continuous variable system).

Chapter 6 This chapter is the core of this thesis and is devoted to the analysis of a single Quantum Battery charging problem through the modulation of the intensity λ(t) of an external Hamiltonian. We utilize Pontryagin's Minimum Principle to show that the optimal charging protocols for our Quantum Batteries are the ones that we call Bang-Bang-off protocols. Moreover, we perform a numerical analysis for the two-level system case, verifying the validity of our theoretical results.

Chapter 7 In this chapter, we exploit the main results of the preceding one, extending our analysis to a charging process involving two different systems: the first acts as a quantum charger and the second is the actual Quantum Battery.

Chapter 8 In chapter 8 we summarize our main results and explain how we can extend our analysis.


Chapter 2

Key Concepts of Quantum Information Theory

In this chapter we will introduce some relevant concepts from quantum mechanics and quantum information theory, in order to make this master thesis accessible to any physics graduate and to fix some useful notation that will be used in the main chapters. The following topics can be found in any quantum mechanics book; however, we recommend [32], since it presents them from the perspective of quantum information theory.

2.1 Quantum States

It is well known that the state space of quantum physics is the Hilbert space, a complete, complex vector space endowed with an inner product.

A pure state is a unit-length vector in the quantum system's Hilbert space, represented in Dirac notation by a ket vector $|\psi\rangle$. It is called pure because it contains all the information that we can know about the system. However, it may occur that we do not know exactly in which state the system is. Suppose a quantum system is in one of a number of states $|\psi_i\rangle$ with respective probabilities $p_i$; we then have an ensemble of pure states, and the system is said to be in a mixed state. In order to describe such a system it is better to use an alternative, but mathematically equivalent, formulation of quantum mechanics based on a tool known as the density matrix.


Density Matrix

A density matrix for a quantum system is defined through the following convex sum of projectors:

$$\rho \equiv \sum_i p_i \,|\psi_i\rangle\langle\psi_i| \,, \qquad (2.1)$$

where $p_i$ is the probability that the system is in the pure state $|\psi_i\rangle$.

Properties:

• Tr[ρ] = 1
• ρ is a positive operator
• ρ is Hermitian: ρ = ρ†
• ρ represents a pure state if and only if ρ² = ρ

In quantum mechanics observables are described as operators in the Hilbert space of the system. The expectation value E[O] of the observable O can be calculated in the density matrix representation as:

E[O] = Tr[ρO]. (2.2)

If we analyze the particular case of a pure state ρ = |ψ⟩⟨ψ|, the equivalence with the usual formulation of quantum mechanics is evident:

$$\mathrm{E}[O] = \mathrm{Tr}[\rho O] = \mathrm{Tr}\big[\,|\psi\rangle\langle\psi|\, O\big] = \langle\psi|O|\psi\rangle \,. \qquad (2.3)$$
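As a minimal numerical check of Eqs. (2.2)–(2.3), the two ways of computing an expectation value can be compared directly; the observable and states below are arbitrary illustrative choices:

```python
import numpy as np

# Observable sigma_z and the pure state |psi> = (|0> + |1>)/sqrt(2)
O = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

rho = np.outer(psi, psi.conj())               # rho = |psi><psi|

ev_density = np.trace(rho @ O).real           # Eq. (2.2): Tr[rho O]
ev_braket = (psi.conj() @ O @ psi).real       # Eq. (2.3): <psi|O|psi>
print(ev_density, ev_braket)                  # both vanish for this state

# A mixed state: rho = 0.75 |0><0| + 0.25 |1><1|
rho_mix = np.diag([0.75, 0.25]).astype(complex)
print(np.trace(rho_mix @ O).real)             # 0.5
```

For the mixed state the bra-ket form is no longer available, which is precisely why the density-matrix rule (2.2) is the more general one.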

2.1.1 Qubit

A quantum system described by a two-dimensional Hilbert space is called a qubit. It is important to describe this system in more depth, because its states are the basic unit of quantum information, i.e. the quantum version of the classical binary bit. Defining the orthonormal basis {|0⟩, |1⟩}, we can describe its state in terms of the Pauli operators

$$\sigma_x = |1\rangle\langle 0| + |0\rangle\langle 1| \,, \quad \sigma_y = i\,|1\rangle\langle 0| - i\,|0\rangle\langle 1| \,, \quad \sigma_z = |0\rangle\langle 0| - |1\rangle\langle 1| \qquad (2.4)$$


and the identity operator I = |0⟩⟨0| + |1⟩⟨1| as

$$\rho = \frac{\mathbb{I} + \vec{r}\cdot\vec{\sigma}}{2} \,, \qquad (2.5)$$

where $\vec{r}$ is a vector of real numbers, called the Bloch vector. This parametrization allows one to describe a qubit state on a 3D sphere called the Bloch sphere, as shown in Figure 2.1.

Figure 2.1: Bloch sphere representation.

Furthermore, the components of the Bloch vector correspond to the mean values of the Pauli operators:

$$r_x = \mathrm{Tr}[\sigma_x \rho] \,, \quad r_y = \mathrm{Tr}[\sigma_y \rho] \,, \quad r_z = \mathrm{Tr}[\sigma_z \rho] \,. \qquad (2.6)$$
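The correspondence between Eqs. (2.5) and (2.6) is easy to verify numerically; the sketch below uses an arbitrarily chosen single-qubit density matrix:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def bloch_vector(rho):
    """Components of the Bloch vector, Eq. (2.6): r_k = Tr[sigma_k rho]."""
    return np.array([np.trace(s @ rho).real for s in (sx, sy, sz)])

def density_from_bloch(r):
    """Reconstruct rho from the Bloch vector, Eq. (2.5)."""
    return (I2 + r[0] * sx + r[1] * sy + r[2] * sz) / 2

rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)
r = bloch_vector(rho)
print(r)                                      # [0.5, 0.0, 0.5]
print(np.allclose(density_from_bloch(r), rho))  # round trip recovers rho
```

Note that $|\vec{r}| \le 1$ for any valid state, with equality exactly for pure states.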

2.1.2 Composite Systems

The state space of a composite physical system is the tensor product of the state spaces of the component physical systems:

$$\mathcal{H} = \bigotimes_{i=1}^{n} \mathcal{H}_i \,, \qquad (2.7)$$

where n labels each individual subsystem. When the subsystems are completely independent from each other, the global state can be written as

$$\rho = \bigotimes_{i=1}^{n} \rho_i \qquad (2.8)$$

and it is called a product state. When a state possesses only classical correlations it is called a separable state, and it can be described by a density matrix of the form

$$\rho = \sum_k p_k \bigotimes_{i=1}^{n} \rho_i^{(k)} \,. \qquad (2.9)$$

The physical meaning of a separable state is more evident in the case of a pure state: a pure state is separable if it can be represented as

$$|\psi\rangle = \bigotimes_{i=1}^{n} |\psi_i\rangle \,. \qquad (2.10)$$

When a state is not separable, quantum correlations are present, and it is called an entangled state. When we are interested in the description of one of the subsystems of a composite system, a useful tool is the reduced density matrix. Given two physical systems A and B, described by a state space $\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_B$ and a density operator $\rho_{AB}$, the reduced density operator for system A is defined by

$$\rho_A \equiv \mathrm{Tr}_B[\rho_{AB}] \,, \qquad (2.11)$$

where $\mathrm{Tr}_B[\,\cdot\,]$ is known as the partial trace over system B. Let $\{|e_i\rangle\}$ be an orthonormal basis for subsystem B; then the partial trace of $\rho_{AB}$ over system B is defined by

$$\mathrm{Tr}_B[\rho_{AB}] = \sum_i \langle e_i|\rho_{AB}|e_i\rangle \,. \qquad (2.12)$$

The reduced density matrix is a useful tool, because it gives rise to the correct description of observable quantities for subsystems of a composite system. Indeed, given an observable $M_{AB} = M_A \otimes \mathbb{I}_B$, its expectation value for a system in the state $\rho_{AB}$ can be written as

$$\mathrm{E}[M_{AB}] = \mathrm{Tr}[\rho_{AB} M_{AB}] = \mathrm{Tr}_A[\rho_A M_A] \,. \qquad (2.13)$$
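A compact way to implement the partial trace of Eq. (2.12) for finite dimensions is to reshape the density matrix into a four-index tensor and trace over the B indices; the Bell state used below is a standard illustrative choice:

```python
import numpy as np

def partial_trace_B(rho_AB, dA, dB):
    """Eq. (2.12): trace out subsystem B from a (dA*dB) x (dA*dB) matrix."""
    return rho_AB.reshape(dA, dB, dA, dB).trace(axis1=1, axis2=3)

# Bell state |phi+> = (|00> + |11>)/sqrt(2): its reduced state is maximally mixed
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(phi, phi.conj())

rho_A = partial_trace_B(rho_AB, 2, 2)
print(rho_A.real)                             # I/2

# Expectation of M_A ⊗ I_B agrees with Tr_A[rho_A M_A]
M_A = np.array([[1, 0], [0, -1]], dtype=complex)   # sigma_z
M_AB = np.kron(M_A, np.eye(2))
print(np.isclose(np.trace(rho_AB @ M_AB), np.trace(rho_A @ M_A)))
```

The maximally mixed reduced state of a pure entangled state is itself a useful diagnostic: for pure global states, any mixedness of $\rho_A$ signals entanglement between A and B.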


2.2 Quantum Unitary Evolution

2.2.1 Pure States

The time evolution of a state vector in the quantum mechanical Hilbert space is governed by the Schrödinger equation

$$i\hbar \frac{d}{dt}|\psi(t)\rangle = H(t)\,|\psi(t)\rangle \,, \qquad (2.14)$$

where H(t) is the Hamiltonian operator associated with the system.

The solution of this equation defines the time evolution operator U(t, t₀):

$$|\psi(t)\rangle = U(t, t_0)\,|\psi(t_0)\rangle \,, \qquad (2.15)$$

where U(t, t₀) obeys

$$i\hbar \frac{dU(t, t_0)}{dt} = H\,U(t, t_0) \,. \qquad (2.16)$$

It is worth mentioning, since it will be useful in the next chapters, that there are three particular cases to analyze:

• The Hamiltonian does not depend on time: the solution of Eq. (2.16) is

$$U(t, t_0) = \exp\left[-\frac{i}{\hbar}\, H\, (t - t_0)\right] . \qquad (2.17)$$

• The Hamiltonian H depends on time, but the operators H corresponding to different moments of time commute, [H(t_i), H(t_j)] = 0:

$$U(t, t_0) = \exp\left[-\frac{i}{\hbar} \int_{t_0}^{t} dt'\, H(t')\right] . \qquad (2.18)$$

• The operators H evaluated at different moments of time do not commute:

$$U(t, t_0) = T\exp\left[-\frac{i}{\hbar} \int_{t_0}^{t} dt'\, H(t')\right] , \qquad (2.19)$$

where we have introduced the time-ordered exponential, T exp, defined by the following expression:

$$T\exp\left[-\frac{i}{\hbar} \int_{t_0}^{t} H(t')\,dt'\right] \equiv \mathbb{I} + \sum_{n=1}^{\infty} \frac{1}{n!}\left(-\frac{i}{\hbar}\right)^{n} \int_{t_0}^{t} dt_1 \cdots \int_{t_0}^{t} dt_n\, T\big[H(t_1)\cdots H(t_n)\big] \,, \qquad (2.20)$$

where $T[\,\cdot\,]$ denotes time ordering of the operators.
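The distinction between Eqs. (2.18) and (2.19) can be illustrated numerically by approximating the time-ordered exponential as an ordered product of short-time propagators; the Hamiltonian H(t) below is an arbitrary non-commuting example (with ħ = 1):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    """Toy time-dependent Hamiltonian with [H(t1), H(t2)] != 0."""
    return sz + t * sx

def expm_hermitian(A, dt):
    """exp(-i A dt) for Hermitian A, via the eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

def time_ordered_U(t0, t, steps):
    """Approximate the T-exp of Eq. (2.19) as an ordered product of
    short-time propagators; later times act on the left."""
    dt = (t - t0) / steps
    U = np.eye(2, dtype=complex)
    for k in range(steps):
        tk = t0 + (k + 0.5) * dt
        U = expm_hermitian(H(tk), dt) @ U   # left multiplication = time ordering
    return U

U = time_ordered_U(0.0, 1.0, 2000)
# The naive exponential of the integrated Hamiltonian (Eq. (2.18)) differs,
# because H at different times do not commute here.
U_naive = expm_hermitian(sz + 0.5 * sx, 1.0)
print(np.max(np.abs(U - U_naive)))          # noticeably nonzero
```

The left-multiplication in the loop is exactly what enforces time ordering: each new short-time propagator acts after all earlier ones.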

2.2.2 Mixed States

In order to extend this formalism to the case of mixed states, the postulate of quantum evolution can be reformulated as follows.

Postulate on closed evolution

The evolution of a closed quantum system is described by a unitary transformation. That is, the state ρ of the system at time t₁ is related to the state ρ′ of the system at time t₂ by a unitary operator U which depends only on the times t₁ and t₂:

$$\rho' = U \rho\, U^{\dagger} \,. \qquad (2.21)$$

By differentiation one obtains the von Neumann equation

$$\frac{d\rho(t)}{dt} = -\frac{i}{\hbar}\,[H, \rho] \,. \qquad (2.22)$$

The evolution of the state ρ, described by the von Neumann equation, will be the constraint that we fix in our optimization problem.
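Equation (2.22) is straightforward to integrate numerically; the sketch below uses a plain RK4 stepper (with ħ = 1) and checks the result against the exact unitary evolution for a simple, arbitrarily chosen example:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)

def von_neumann_rk4(rho0, H, t, steps):
    """Integrate Eq. (2.22), d(rho)/dt = -i [H, rho] (hbar = 1), with RK4."""
    dt = t / steps
    rho = rho0.astype(complex)
    f = lambda r: -1j * (H @ r - r @ H)
    for _ in range(steps):
        k1 = f(rho)
        k2 = f(rho + 0.5 * dt * k1)
        k3 = f(rho + 0.5 * dt * k2)
        k4 = f(rho + dt * k3)
        rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

# Qubit starting in |0><0|, evolving under H = sigma_x: a Rabi flip.
# Exact: U = exp(-i sx pi/2) = -i sx, so rho(pi/2) = |1><1|.
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
rho_t = von_neumann_rk4(rho0, sx, t=np.pi / 2, steps=2000)
print(np.round(rho_t.real, 6))
```

In the optimization chapters the same constraint appears with H replaced by the controlled Hamiltonian, so an integrator of this kind is the basic building block of any numerical verification.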

2.2.3 Open Quantum Systems

Up to now only closed systems have been treated. When the system of interest S interacts with an environment E (for example a thermal bath), the evolution of the density matrix of S can no longer be described by Eq. (2.22). It is then necessary to study the so-called CPT maps (Completely Positive Trace-Preserving maps) [32]. We know that the whole system S + E can be described by a closed evolution, represented by a particular unitary operator U_SE. The evolution of the subsystem S can be obtained by calculating the evolution of the whole system and tracing out the environment's degrees of freedom:

$$\rho_S(t) = \mathrm{Tr}_E\!\left[U_{SE}(t,0)\,\rho_{SE}(0)\,U_{SE}^{\dagger}(t,0)\right] . \qquad (2.23)$$

Considering as the initial state of S and E a factorized state of the form ρ_S(0) ⊗ ρ_E(0), it is always possible to define a mapping Φ_t which links ρ_S(0) to ρ_S(t):

$$\rho_S(t) = \Phi_t\big[\rho_S(0)\big] \,. \qquad (2.24)$$

We will not analyze the features of this kind of map (see [32] or [33] for further details), since in the next chapters we will focus only on closed systems. However, it is worth mentioning them, because our optimization problem could also be extended to open quantum systems, in particular to Markovian systems (i.e. systems whose future evolution does not depend on their previous states).


Chapter 3

Quantum Batteries

Quantum Batteries were first proposed by Alicki and Fannes [6] in 2013 as part of a small quantum machine to transfer energy. From a qualitative point of view, Quantum Batteries perform the same task as traditional batteries: storing energy. We will consider as a Quantum Battery a two-level system or a quantum harmonic oscillator, following the structure of [7], [8] and [9]. The quantity that characterizes a Quantum Battery is the maximal amount of work that can be extracted from a quantum system via a unitary channel [34], called ergotropy.

3.1 Ergotropy

The maximal work-extraction problem is described in the following way. Consider a system S which can exchange work with external macroscopic sources. S is thermally isolated but allows work transfer with its sources. Its density matrix ρ evolves according to a Hamiltonian H(t) = H₀ + V(t), where V(t) is responsible for the energy exchanges. We call a process cyclic if V(t) vanishes before t = 0 and after t = τ. We denote the initial state, before the cyclic process, by ρ(0) = ρ₀ and the final state by ρ(τ) = ρ_fin.

We are looking for the maximum work E that may be extracted from S for arbitrary V(t). The work done by the system can be written as

$$W = \mathrm{Tr}[\rho_{fin} H_0] - \mathrm{Tr}[\rho_0 H_0] \,; \qquad (3.1)$$

therefore, the ergotropy is

$$\mathcal{E} \equiv \max_{V(t)} \left(-W\right) . \qquad (3.2)$$


It is worth underlining that it is possible to extract more energy than the amount resulting from Eq. (3.2) by allowing an open-system evolution (for example, an interaction with a thermal bath at zero temperature). However, the energy extracted in this way is not completely available as work, due to the second law of thermodynamics. The optimal potential V(t) has to bring the initial state ρ₀ to a final state ρ_fin from which no work can be extracted. Indeed, if a further extraction of work were possible, then Eq. (3.2) would not represent the maximum extractable work. States of this type are called passive states.

3.1.1 Passive States

It is possible to state criteria under which a state is passive with respect to a given reference Hamiltonian [34,35]:

Passive State Criteria

Given a state ρ and a reference Hamiltonian H, with

$$H \equiv \sum_n \epsilon_n |\epsilon_n\rangle\langle \epsilon_n| \,, \quad \epsilon_{n+1} \geq \epsilon_n \ \forall n \,, \qquad (3.3)$$

$$\rho \equiv \sum_n r_n |r_n\rangle\langle r_n| \,, \quad r_{n+1} \leq r_n \ \forall n \,, \qquad (3.4)$$

ρ is passive with respect to H if and only if:

1) [ρ, H] = 0,
2) j < k ⟹ r_j ≥ r_k.
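The two criteria can be checked numerically; the sketch below assumes a non-degenerate Hamiltonian (with degenerate energies the energy eigenbasis, and hence criterion 2, would need more care), and the test states are illustrative choices:

```python
import numpy as np

def is_passive(rho, H, tol=1e-10):
    """Check the passivity criteria: [rho, H] = 0 and populations
    non-increasing with energy (H assumed non-degenerate)."""
    if np.max(np.abs(rho @ H - H @ rho)) > tol:          # criterion 1
        return False
    energies, V = np.linalg.eigh(H)                      # increasing energies
    pops = np.diag(V.conj().T @ rho @ V).real            # populations in energy basis
    return all(pops[j] >= pops[j + 1] - tol for j in range(len(pops) - 1))

H = np.diag([0.0, 1.0, 2.0])
gibbs = np.diag(np.exp(-np.array([0.0, 1.0, 2.0])))      # thermal occupations
gibbs /= np.trace(gibbs)
inverted = np.diag([0.2, 0.3, 0.5])                      # population inversion

print(is_passive(gibbs, H), is_passive(inverted, H))     # True False
```

The inverted state commutes with H but violates criterion 2, so work can still be extracted from it unitarily.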

It is legitimate to ask what happens when we combine two or more identical passive states σ: the resulting multipartite state $\bigotimes_k \sigma^{(k)}$ is not necessarily passive. More precisely, a population inversion may occur in the multipartite system (the second criterion for passivity might not be fulfilled). Therefore, we can define N-passivity as an additional property of a system:


N-Passivity

Given a passive state σ with respect to H, consider N identical passive systems with states σ^{(k)} and reference Hamiltonians H^{(k)} (with k = 1, ..., N), and define:

1) $\rho_N \equiv \bigotimes_{k=1}^{N} \sigma^{(k)}$,
2) $H_N \equiv \sum_{k=1}^{N} H^{(k)}$,

with H^{(k)} = H and σ^{(k)} = σ ∀k. If ρ_N is still passive with respect to H_N, then σ is N-passive with respect to H.ᵃ

ᵃ In this N-copies scenario, the optimal global unitary may be an entangling operation. However, this does not automatically mean that entanglement will be created between the N copies (i.e. an entangling operation need not create entanglement for every possible input state).

Complete passivity

In the asymptotic limit (N → ∞) an N-passive state is called completely passive. It has been proven by Pusz and Woronowicz in [36], and afterwards by Lenard in [35], that a completely passive state must either have a Gibbs form,

$$\rho = \frac{e^{-\beta H}}{Z} \quad \text{with} \quad Z = \mathrm{Tr}\big[e^{-\beta H}\big] \,, \qquad (3.5)$$

or be a ground state. This can be seen as a manifestation of the second law: no transformation can be performed whose only result is to extract energy from a single thermal bath.

Entanglement Boost

Alicki and Fannes, in their article [6], also showed that for many-body systems the ergotropy of an N-body system can be optimized by considering entangling interactions to enhance performance. They therefore asserted that quantum effects (entanglement) offer operational advantages in the extraction of work. It has since been shown that these advantages are related to the power of the charging process and not directly to the maximum extractable work. Allowing entanglement between N identical Quantum Batteries could, therefore, optimize the charging time, as shown in [10], [11] and [12].


3.1.2 Ergotropy calculation

We now have the notions needed to compute the ergotropy functional, in particular for a single qubit. Let ρ be the density matrix of a system characterized by a Hamiltonian H, which we present in terms of the spectral decompositions

$$\rho = \sum_{n=0}^{d-1} r_n |r_n\rangle\langle r_n| \,, \qquad H = \sum_{n=0}^{d-1} \epsilon_n |\epsilon_n\rangle\langle \epsilon_n| \,. \qquad (3.6)$$

Here $\{|r_n\rangle\}_n$ and $\{|\epsilon_n\rangle\}_n$ represent the eigenvectors of ρ and H respectively, and $r_0 \geq r_1 \geq \dots \geq r_{d-1}$ and $\epsilon_0 \leq \epsilon_1 \leq \dots \leq \epsilon_{d-1}$ are the associated eigenvalues, which have been suitably ordered.

We want to find which is the optimal unitary U that brings ρ to a passive state σ:

Ergotropy in finite dimension

Theorem 1. The ergotropy E for the state ρ and Hamiltonian H is

$$\mathcal{E} = \mathrm{Tr}[\rho H] - \sum_{n=0}^{d-1} r_n \epsilon_n \,, \qquad (3.7)$$

and the optimal unitary is

$$U = \sum_{n=0}^{d-1} |\epsilon_n\rangle\langle r_n| \,. \qquad (3.8)$$

Proof. The theorem directly follows from the trace inequality of John von Neumann [37].

The passive counterpart of ρ is defined as the density matrix

$$\sigma \equiv \sum_n r_n |\epsilon_n\rangle\langle \epsilon_n| \,. \qquad (3.9)$$

By construction, its mean energy is given by

$$E^{(p)} \equiv \mathrm{Tr}[H\sigma] = \sum_n r_n \epsilon_n \,. \qquad (3.10)$$

The ergotropy E of the state ρ can then be expressed as

$$\mathcal{E} = E - E^{(p)} \,, \qquad (3.11)$$

where $E = \mathrm{Tr}[\rho H]$ is the mean energy of ρ.
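Theorem 1 translates directly into a few lines of code: sort the populations of ρ in decreasing order, the energies of H in increasing order, and take the difference (3.7). The sketch below also checks the result against the closed qubit formula derived next, Eq. (3.14), for an arbitrarily chosen Bloch vector:

```python
import numpy as np

def ergotropy(rho, H):
    """Ergotropy via Eq. (3.7): Tr[rho H] - sum_n r_n eps_n, with the
    eigenvalues of rho sorted decreasing and those of H sorted increasing."""
    r = np.sort(np.linalg.eigvalsh(rho))[::-1]     # r_0 >= r_1 >= ...
    eps = np.sort(np.linalg.eigvalsh(H))           # eps_0 <= eps_1 <= ...
    return np.trace(rho @ H).real - np.dot(r, eps)

# Qubit check against Eq. (3.14): E = (w0/2) (r - r_z)
w0 = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
H = (w0 / 2) * (np.eye(2) - sz)                    # eps_0 = 0, eps_1 = w0

rvec = np.array([0.3, 0.2, -0.4])                  # arbitrary Bloch vector
rho = (np.eye(2) + rvec[0] * sx + rvec[1] * sy + rvec[2] * sz) / 2

r = np.linalg.norm(rvec)
print(ergotropy(rho, H), (w0 / 2) * (r - rvec[2]))   # the two values agree
```

The same routine works in any finite dimension, which makes it convenient for the two-system charging problems of chapter 7 as well.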

Ergotropy for a qubit

Now we consider the qubit case, where $H = \frac{\omega_0}{2}(\mathbb{I} - \sigma_z)$ and $\rho = \frac{\mathbb{I} + \vec{r}(t)\cdot\vec{\sigma}}{2}$. Diagonalizing the density matrix we obtain the eigenvalues

$$r_0(t) = \frac{1 + r(t)}{2} \,, \qquad r_1(t) = \frac{1 - r(t)}{2} \,, \qquad (3.12)$$

with $r(t) = |\vec{r}(t)|$. The Hamiltonian is already diagonal, with $\epsilon_0 = 0$ and $\epsilon_1 = \omega_0$; thus, computing Eqs. (3.10) and (3.11), we obtain

$$E^{(p)}(t) = \frac{1 - r(t)}{2}\,\omega_0 \,, \qquad E(t) = \mathrm{Tr}[\rho(t) H] = \frac{\omega_0}{2}\big(1 - r_z(t)\big) \,, \qquad (3.13)$$

and the ergotropy

$$\mathcal{E}(t) = \frac{\omega_0}{2}\big(r(t) - r_z(t)\big) \,. \qquad (3.14)$$

Note that for a pure state (r(t) = 1) energy and ergotropy coincide. We have now explicitly calculated the ergotropy for one qubit; Eq. (3.14) will be useful in the analysis of our problem. In Figure 3.1 we give a pictorial representation of the behaviour of the ergotropy for a qubit.


Ergotropy for a qubit with diagonal density matrix

We are now interested in the ergotropy for a particular configuration of the qubit system, where

$$\rho(t) = \begin{pmatrix} \alpha^2(t) & 0 \\ 0 & \beta^2(t) \end{pmatrix} = \frac{\mathbb{I} + \big(1 - 2\beta^2(t)\big)\sigma_z}{2} \,, \qquad (3.15)$$

with the trace condition on ρ expressed as $\alpha^2(t) + \beta^2(t) = 1$. Under these conditions we have

$$r_{x,y}(t) = 0 \,, \qquad r_z(t) = 1 - 2\beta^2(t) \,, \qquad r(t) = |r_z(t)| \,. \qquad (3.16)$$

In this case, we can write the ergotropy of our system as a function of $r_z$ only:

$$\mathcal{E}(t) = \frac{\omega_0}{2}\big(|r_z(t)| - r_z(t)\big) \,. \qquad (3.17)$$

From Eq. (3.13) and Eq. (3.17) it is evident that maximizing the energy and maximizing the ergotropy both amount to minimizing $r_z(t)$. This means that, for diagonal density matrices, processes that are optimal for maximizing the energy of the system are also optimal for maximizing the ergotropy. This result will be very useful in chapter 7, where on more than one occasion we will work with 2 × 2 diagonal density matrices.

3.2 Physical Implementations

In chapters 6 and 7 we shall consider TLSs and QHOs as Quantum Batteries. Indeed, TLSs and QHOs are ubiquitous in atomic and condensed matter physics. They are elementary building blocks of cavity QED architectures [38] and of systems of trapped ions [39,40], ultracold atoms [41], superconducting circuits [42] and semiconductor quantum dots [43]. Before carrying out the analysis of our rather general charging process, we summarize some of the possible experimental scenarios for Quantum Batteries that have recently been studied, such as spin-chain batteries and cavity-assisted batteries, in order to give the reader a general idea of a realistic implementation.


3.2.1 Spin-chain Battery

In [44], the authors consider as a Quantum Battery a system composed of a chain of n TLSs, i.e. a Heisenberg spin chain with two-body interactions, pictorially represented in Figure 3.2.

Figure 3.2: Many-body spin-chain as Quantum Battery for different interaction strengths. Figure from [45].

The global system can be described by an internal Hamiltonian $H_0 = H_B + H_g$, with

$$H_B = B \sum_{i=1}^{n} \sigma_z^{(i)} \,, \qquad H_g = \sum_{i<j} g_{ij}\left[\sigma_z^{(i)} \otimes \sigma_z^{(j)} + \alpha\left(\sigma_x^{(i)} \otimes \sigma_x^{(j)} + \sigma_y^{(i)} \otimes \sigma_y^{(j)}\right)\right] \,, \qquad (3.18)$$

where $g_{ij}$ is the interaction strength between different spins, while α can be chosen to recover the Ising (α = 0), XXZ (0 < α < 1) and XXX (α = 1) Hamiltonians. The charging process consists of simultaneously turning off $H_B$ and turning on an external field

$$V = \omega \sum_i \sigma_x^{(i)} \,, \qquad (3.19)$$

for a fixed time interval τ. Recently, there have been several proposals for spin-chain batteries, followed by extensive numerical analysis [46,47]. Regarding practical implementation, spin chains can be engineered using various crystals [48,49], where chains of Cu²⁺ ions along one crystal axis act as spin chains. Alternatively, they can be realized with ultracold atoms [50] or trapped ions [51].
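For small n, the Hamiltonian (3.18) can be built explicitly with tensor products; the sketch below uses a uniform nearest-neighbour coupling $g_{ij} = g$ for $j = i+1$, which is an illustrative choice rather than the general couplings of [44]:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(site_ops, n):
    """Embed single-site operators {site: op} into an n-spin Hilbert space."""
    out = np.array([[1.0 + 0j]])
    for i in range(n):
        out = np.kron(out, site_ops.get(i, np.eye(2)))
    return out

def H0_spin_chain(n, B, g, alpha):
    """Internal Hamiltonian H0 = H_B + H_g of Eq. (3.18), with uniform
    nearest-neighbour couplings (an illustrative simplification)."""
    HB = B * sum(op_on({i: sz}, n) for i in range(n))
    Hg = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        j = i + 1
        Hg += g * (op_on({i: sz, j: sz}, n)
                   + alpha * (op_on({i: sx, j: sx}, n) + op_on({i: sy, j: sy}, n)))
    return HB + Hg

H0 = H0_spin_chain(n=3, B=1.0, g=0.5, alpha=1.0)   # XXX chain of 3 spins
print(H0.shape)                                    # (8, 8)
```

The exponential growth of the matrix dimension (2ⁿ) is exactly why the numerical studies cited above rely on dedicated many-body techniques for longer chains.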


3.2.2 Cavity Assisted Batteries

Ferraro et al. proposed in [52] a Dicke model [53] for a cavity-assisted Quantum Battery. The Quantum Battery consists of an array of n TLSs coupled to a quantized single-mode electromagnetic field, as represented in Figure 3.3.

Figure 3.3: Dicke model for a cavity-assisted quantum battery that can be globally charged. Figure from [9].

The reference time-dependent Hamiltonian is

$$H = \omega_c\, a^{\dagger} a + \frac{\omega_0}{2} \sum_{l=1}^{n} \sigma_z^{(l)} + \omega_c\, \lambda(t) \sum_{l=1}^{n} \sigma_x^{(l)} \left(a + a^{\dagger}\right) , \qquad (3.20)$$

with a, a† the annihilation and creation operators of the single-mode cavity with frequency $\omega_c$. The charging process consists of a modulation of the coupling λ(t), starting from the global initial state

$$|\psi_{in}\rangle = |n\rangle \otimes |G\rangle \,, \qquad (3.21)$$

where $|G\rangle = |g\rangle^{\otimes n}$, with |g⟩ the ground state of the TLS, and |n⟩ the Fock state with n photons. Practical implementations of the Dicke model are a challenging research topic: there are several possible realizations, such as ion-trap cavity QED or circuit QED. An exhaustive review of recent developments of the Dicke model is presented in [54].
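The Hamiltonian (3.20) can be represented numerically by truncating the cavity mode to a finite number of Fock states; the sketch below builds it at a fixed coupling value (in any real calculation the truncation dimension must be checked for convergence):

```python
import numpy as np

def spin_sum(op, n):
    """sum_l op^(l): the single-qubit operator op acting on qubit l."""
    total = np.zeros((2**n, 2**n), dtype=complex)
    for l in range(n):
        term = np.array([[1.0 + 0j]])
        for m in range(n):
            term = np.kron(term, op if m == l else np.eye(2))
        total += term
    return total

def dicke_hamiltonian(n, n_fock, wc, w0, lam):
    """Hamiltonian (3.20) at a fixed coupling lambda, with the cavity
    truncated to n_fock Fock states (cavity factor first in the kron)."""
    a = np.diag(np.sqrt(np.arange(1, n_fock)), k=1)    # annihilation operator
    sz = np.diag([1.0, -1.0]).astype(complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    Iq = np.eye(2**n)
    return (wc * np.kron(a.T @ a, Iq)
            + (w0 / 2) * np.kron(np.eye(n_fock), spin_sum(sz, n))
            + wc * lam * np.kron(a + a.T, spin_sum(sx, n)))

H = dicke_hamiltonian(n=2, n_fock=6, wc=1.0, w0=1.0, lam=0.1)
print(H.shape)                                         # (24, 24)
```

The charging protocol then amounts to evolving the initial state (3.21) under this matrix while λ(t) is switched on and off.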


Chapter 4

Optimal Control Theory

In this chapter we provide the mathematical tools necessary to properly describe the optimization problem for the ergotropy of a Quantum Battery. The objective of optimal control theory is to determine the control signals that cause a process to satisfy its physical constraints while minimizing (or maximizing) some performance criterion [29]. In our particular case the control signal will be the intensity of the external Hamiltonian.

4.1 Optimal Control Problem

We now focus on modeling a general control problem, in order to obtain a simple mathematical description that predicts the behaviour of a physical system in response to known inputs.

The state of the system at a particular time t can be described by a vector

$$x(t) \equiv \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix},$$

where the $x_i(t)$ are the state variables (or simply states) of the process at time t. Moreover, the physical inputs can be described by a control vector

$$u(t) \equiv \begin{pmatrix} u_1(t) \\ u_2(t) \\ \vdots \\ u_m(t) \end{pmatrix},$$

where the $u_i(t)$ are the control inputs to the process at time t. We model the evolution of the state, depending on the control inputs, as n first-order differential equations:

$$\dot{x}(t) = a\big(x(t), u(t), t\big) \,. \qquad (4.1)$$

We now give some definitions that will be useful later:

Definition 1. Control history

A history of control input values during the interval [t0, tf] is denoted by u and is called a control history, or simply a control.

Definition 2. State trajectory

A history of state values in the interval [t0, tf] is called a state trajectory and is denoted by x.

Definition 3. Admissible control

A control history which satisfies the control constraints during the entire time interval [t0, tf] is called an admissible control.

The main problem is to find an admissible control u* which causes the system to follow an admissible trajectory x* that minimizes the performance measure

$$J = h\big(x(t_f), t_f\big) + \int_{t_0}^{t_f} g\big(x(t), u(t), t\big)\,dt \,, \qquad (4.2)$$

where t₀ and t_f are the initial and final times, and h and g are scalar functions; u* is called an optimal control and x* an optimal trajectory. It is important to underline that we may not know in advance whether an optimal control exists and, if it exists, it may not be unique.

4.2 Variational Approach

4.2.1 Introduction to calculus of variations

We now describe some fundamental concepts of the calculus of variations that will be very useful for describing optimal control theory, in particular Pontryagin's Minimum Principle.

Definition 4. Functional
A functional J is a rule of correspondence that assigns to each function x in a certain class Ω a unique real number. Ω is called the domain of the functional, and the set of real numbers associated with the functions in Ω is called the range of the functional.

We are now interested in the extreme values of a functional: in analogy with the increment of a function, we can define the increment of a functional.

Definition 5. Increment of a functional
If x and x + δx are functions for which the functional J is defined, then the increment of J, denoted by ∆J, is

$$\Delta J \equiv J(x + \delta x) - J(x) \,. \qquad (4.3)$$

We write ∆J(x, δx) to emphasize that the increment depends on the functions x and δx; δx is called the variation of the function x. The increment of a functional can also be written as

$$\Delta J(x, \delta x) = \delta J(x, \delta x) + g(x, \delta x) \cdot \|\delta x\| \,, \qquad (4.4)$$

where δJ is linear in δx. If

$$\lim_{\|\delta x\| \to 0} g(x, \delta x) = 0 \,, \qquad (4.5)$$

J is said to be differentiable at x, and δJ is the variation of J evaluated for the function x.

It is important to keep in mind that δJ is the linear approximation to the difference in the functional J caused by two comparison curves. This means that the variation is a good approximation to the increment only when ‖δx‖ is small. In analogy with the extreme value of a function, we can define the extremal function of a functional:


Definition 6. Extremal function
A functional J with domain Ω has a relative extremum at x* if there is an ε > 0 such that for all functions x in Ω which satisfy ‖x − x*‖ < ε the increment of J has the same sign. If

$$\Delta J = J(x) - J(x^*) \geq 0 \,, \qquad (4.6)$$

J(x*) is a relative minimum; if

$$\Delta J = J(x) - J(x^*) \leq 0 \,, \qquad (4.7)$$

J(x*) is a relative maximum.

If (4.6) is satisfied for arbitrarily large ε, then J(x*) is a global minimum. Similarly, if (4.7) holds for arbitrarily large ε, then J(x*) is a global maximum. x* is called an extremal, and J(x*) is referred to as an extremum.

We now need a fundamental theorem that fixes the necessary condition for x∗ to be extremal:

Theorem 2. Let x be a vector function of t in the class Ω, and J(x) a differentiable functional of x. Assume that the functions in Ω are not constrained by any boundaries. If x* is an extremal, the variation of J must vanish at x*; that is,

$$\delta J(x^*, \delta x) = 0 \quad \text{for all } \delta x \in \Omega \,. \qquad (4.8)$$

We have now described all the basic concepts of the calculus of variations that we will need for our purposes.

4.2.2

Necessary Conditions for Optimal Control

with-out Control Boundaries

Now we focus on our main problem: finding an admissible control u∗ that causes the system

ẋ(t) = a(x(t), u(t), t) (4.9)

to follow an admissible trajectory x∗ that minimizes the performance measure (4.2). Our first assumption is that the admissible state and control regions are


not bounded (we will analyze the other cases too). Our initial state, at a fixed time t0, is x(t0) = x0 and we will consider x as the n × 1 state vector and u as the m × 1 vector of control inputs.

Assuming that h is a differentiable function, we can write

h(x(tf), tf) = ∫_{t0}^{tf} (d/dt)[h(x(t), t)] dt + h(x(t0), t0), (4.10)

so that the performance measure can be expressed as

J(u) = ∫_{t0}^{tf} { g(x(t), u(t), t) + (d/dt)[h(x(t), t)] } dt + h(x(t0), t0). (4.11)

Since x(t0) and t0 are fixed, the minimization does not affect the h(x(t0), t0) term, so we need to consider only the functional

J(u) = ∫_{t0}^{tf} { g(x(t), u(t), t) + (d/dt)[h(x(t), t)] } dt. (4.12)

We now want to include the equation constraints (4.9), considering the augmented functional

Ja(u) = ∫_{t0}^{tf} { g(x(t), u(t), t) + [∂h/∂x(x(t), t)]ᵀ ẋ(t) + ∂h/∂t(x(t), t) + pᵀ(t)[a(x(t), u(t), t) − ẋ(t)] } dt, (4.13)

with p1(t), ..., pn(t) the Lagrange multipliers, called costates.

It is possible to show (see [55]), using Theorem 2, that the necessary conditions for optimality without boundary conditions are:

Necessary conditions

ẋ∗(t) = ∂H/∂p (x∗(t), u∗(t), p∗(t), t)
ṗ∗(t) = −∂H/∂x (x∗(t), u∗(t), p∗(t), t)
0 = ∂H/∂u (x∗(t), u∗(t), p∗(t), t)

for all t ∈ [t0, tf], (4.14)

and:

[∂h/∂x(x∗(tf), tf) − p∗(tf)]ᵀ δxf + [H(x∗(tf), u∗(tf), p∗(tf), tf) + ∂h/∂t(x∗(tf), tf)] δtf = 0. (4.15)


We called H the Pseudo-Hamiltonian:

H(x(t), u(t), p(t), t) ≡ g(x(t), u(t), t) + pᵀ(t) a(x(t), u(t), t). (4.16) These necessary conditions (4.14) consist of a set of 2n first-order differential equations and a set of m algebraic relations.

4.2.3 Boundary Conditions

In order to determine the boundary conditions we need to make the appropriate substitutions in Eq. (4.15). Notice that in all possible cases we will assume that we have the n equations x∗(t0) = x0.

• Fixed final time and final state specified:
since x(tf) and tf are specified, we substitute δxf = 0 and δtf = 0 in Eq. (4.15), obtaining:

x∗(tf) = xf. (4.17)

• Fixed final time and final state free:
we substitute δtf = 0 and keep δxf arbitrary, obtaining:

∂h/∂x(x∗(tf), tf) − p∗(tf) = 0. (4.18)

• Free final time and final state fixed:
the appropriate substitution in Eq. (4.15) is δxf = 0 with δtf arbitrary, obtaining:

H(x∗(tf), u∗(tf), p∗(tf), tf) + ∂h/∂t(x∗(tf), tf) = 0. (4.19)

• Free final time and free final state:
both δxf and δtf are arbitrary and independent; therefore, their coefficients must be zero, obtaining:

p∗(tf) = ∂h/∂x(x∗(tf), tf) (n equations), (4.20)

H(x∗(tf), u∗(tf), p∗(tf), tf) + ∂h/∂t(x∗(tf), tf) = 0 (1 equation). (4.21)

We have now fixed the most common boundary conditions; the next step is to fix an admissible control region.


4.2.4 Pontryagin's Minimum Principle

In realistic systems (and also for our toy models), physically realizable controls generally have intensity limitations. We will see in the next chapters that these limitations are modeled as a bound on the intensity of the interaction between our system (a Quantum Battery) and another ancillary system (the charger). If we take this limitation into account, we need to modify the third necessary condition in Eq. (4.14) to allow for admissible-control constraints (see [55]), obtaining the well-known Pontryagin Minimum Principle [28, 29].

Theorem 3. Pontryagin’s Minimum Principle

A necessary condition for u* to minimize the functional J is

H(x∗(t), u∗(t), p∗(t), t) ≤ H(x∗(t), u(t), p∗(t), t) (4.22) for all t ∈ [t0, tf] and for all admissible controls.

We need to make an important observation: an optimal control must minimize the Hamiltonian. However, this is a necessary, but not sufficient, condition for optimality. We have completed our analysis of optimal control theory, and now we summarize the results.

4.2.5 Summary of the necessary conditions for optimality

We consider a system with a time evolution described by the n differential equations

ẋ(t) = a(x(t), u(t), t). (4.23)

We seek the admissible control vector u∗ ∈ U, with U a bounded control region, which causes the system to follow an admissible trajectory that minimizes the performance measure

J = h(x(tf), tf) + ∫_{t0}^{tf} g(x(t), u(t), t) dt. (4.24)


Necessary conditions with admissible-control constraints

ẋ∗(t) = ∂H/∂p (x∗(t), u∗(t), p∗(t), t)
ṗ∗(t) = −∂H/∂x (x∗(t), u∗(t), p∗(t), t)
H(x∗(t), u∗(t), p∗(t), t) ≤ H(x∗(t), u(t), p∗(t), t) for all admissible u(t)

for all t ∈ [t0, tf], (4.25)

and:

[∂h/∂x(x∗(tf), tf) − p∗(tf)]ᵀ δxf + [H(x∗(tf), u∗(tf), p∗(tf), tf) + ∂h/∂t(x∗(tf), tf)] δtf = 0, (4.26)

with H(x(t), u(t), p(t), t) defined as in (4.16).

4.2.6 Example of a Bang-Bang Protocol

We now discuss an important example of optimal control for a particular class of systems by using the minimum principle. In this example we are interested in minimizing the time needed to reach a given state starting from an arbitrary initial state.

Our performance measure is

J = ∫_{t0}^{tf} 1 dt, (4.27)

and we assume that the n state equations of the system are of the form

ẋ(t) = a(x(t), t) + B(x(t), t) u(t), (4.28)

where B is an n × m matrix that may depend explicitly on the state and time.

We fix the constraints on the control inputs: Ui− ≤ ui(t) ≤ Ui+, i = 1, ..., m. (4.29)


The pseudo-Hamiltonian from (4.16) is

H(x(t), u(t), p(t), t) = 1 + pᵀ(t)[a(x(t), t) + B(x(t), t)u(t)]. (4.30)

From the necessary conditions (4.25) it follows that

1 + p∗ᵀ(t)[a(x∗(t), t) + B(x∗(t), t)u∗(t)] ≤ 1 + p∗ᵀ(t)[a(x∗(t), t) + B(x∗(t), t)u(t)] (4.31)

for all admissible u(t) and for all t ∈ [t0, tf]. From Eq. (4.31) it is evident that the following inequality must hold:

p∗ᵀ(t)B(x∗(t), t)u∗(t) ≤ p∗ᵀ(t)B(x∗(t), t)u(t). (4.32)

Now let us express the matrix B as

B(x∗(t), t) = [ b1(x∗(t), t)  b2(x∗(t), t)  ...  bm(x∗(t), t) ], (4.33)

with bi(x∗(t), t) the ith column of the matrix.

It is evident that, with this notation, Eq. (4.32) becomes

p∗ᵀ(t) bi(x∗(t), t) u∗i(t) ≤ p∗ᵀ(t) bi(x∗(t), t) ui(t). (4.34)

We now assume that the control components are independent. Therefore, our goal is to minimize each term

p∗ᵀ(t) bi(x∗(t), t) ui(t) (4.35)

with respect to ui(t).

Since our controls have constraints, the protocol will consist of:

Optimal Bang-Bang protocol

p∗ᵀ(t) bi(x∗(t), t) > 0 → u∗i(t) = Ui−,
p∗ᵀ(t) bi(x∗(t), t) < 0 → u∗i(t) = Ui+,
p∗ᵀ(t) bi(x∗(t), t) = 0 → undetermined. (4.36)


Figure 4.1: The relationship between a time-optimal control and its coefficient in the Hamiltonian

Problems arise when p∗ᵀ(t) bi(x∗(t), t) is zero: minimizing the Hamiltonian provides no information about how to select u∗i(t). Two possible cases can occur:

• p∗ᵀ(t) bi(x∗(t), t) passes through zero only at an isolated time t = tswitch. In that case we perform a switching of the control u∗i(t).

• p∗ᵀ(t) bi(x∗(t), t) = 0 for a finite time interval, called a singular interval. In that case we cannot select u∗i(t). The analysis of singular intervals, though, can give some important results, as we will see in their quantum version in chapter 6.
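The switching rule (4.36) can be illustrated on the textbook double integrator ẋ1 = x2, ẋ2 = u with |u| ≤ 1, whose time-optimal transfer to the origin is known to be bang-bang with a single switch. The following minimal numerical sketch (an illustration added here, not part of the thesis) integrates the one-switch protocol exactly over each constant-acceleration step and checks that it lands on the target state:

```python
import numpy as np

def simulate(x0, v0, controls, dt):
    """Integrate x1' = x2, x2' = u exactly over steps of constant u."""
    x, v = x0, v0
    for u in controls:
        x += v * dt + 0.5 * u * dt**2  # exact kinematics for constant acceleration
        v += u * dt
    return x, v

# Steer (d, 0) to (0, 0) with |u| <= 1. The time-optimal protocol is
# u = -1 for sqrt(d) seconds, then u = +1 for sqrt(d) seconds (one switch).
d = 4.0
t1 = np.sqrt(d)          # switching time = half the minimal time 2*sqrt(d)
n = 1000
dt = t1 / n
controls = [-1.0] * n + [+1.0] * n
xf, vf = simulate(d, 0.0, controls, dt)
print(xf, vf)  # both ~0: the one-switch bang-bang protocol hits the target
```

Repeating the experiment with the same duration but a different switching time leaves a nonzero final state, consistent with the bang-bang structure dictated by the sign of p∗ᵀ(t)b(x∗(t), t).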


Chapter 5

Continuous Variable Systems

A quantum system described by observables whose numerical values belong to continuous intervals is said to be a continuous variable (CV) system. The interaction with the external environment can destroy the quantum features of our system through decoherence effects. However, electromagnetic radiation interacts weakly with the environment and is therefore very attractive for studying quantum phenomena.

For this reason from now on we will refer to a continuous variable system as a multimode quantum harmonic oscillator described by the Hamiltonian

H = Σ_{k=1}^{N} ℏωk (a†k ak + 1/2), (5.1)

where, since photons are bosons, a and a† are respectively the annihilation and creation operators, with commutation relations:

[ak, ak′] = [a†k, a†k′] = 0,  [ak, a†k′] = δkk′. (5.2)

The annihilation and creation operators are defined as a combination of the adimensional position and momentum operators q̂k and p̂k:

ak = (q̂k + i p̂k)/√2,  a†k = (q̂k − i p̂k)/√2. (5.3)

5.1 Fock Space

The eigenstates |nki, with eigenvalues }ωk(nk + 12), where nk is an integer (nk = 0, 1, 2, ..., ∞), of the Hamiltonian (5.1) are known as Fock states.


They are eigenstates of the number operator Nk = a†k ak:

a†k ak |nk⟩ = nk |nk⟩. (5.4)

The vacuum state of the field mode is defined by

ak |0⟩ = 0, (5.5)

and the state vectors for the higher excited states are given by:

|nk⟩ = (a†k)^{nk} / (nk!)^{1/2} |0⟩,  nk = 0, 1, 2, .... (5.6)

The Fock states form an orthogonal and complete set of basis vectors for the Hilbert space and are a very useful representation for systems where the number of photons is very small. It is important to underline that Fock states have relatively large variances of the position and momentum operators (large with respect to the minimum allowed by the uncertainty principle).
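The last remark can be checked numerically in a truncated Fock basis (a minimal sketch added for illustration; the cutoff N = 40 is an assumption, chosen large enough for the states tested): for the adimensional quadrature q̂ of Eq. (5.3), the Fock state |n⟩ has zero mean and variance n + 1/2, growing well beyond the minimum-uncertainty value.

```python
import numpy as np

N = 40  # Fock-space cutoff (illustrative assumption)
# annihilation operator in the truncated basis: <m|a|n> = sqrt(n) delta_{m,n-1}
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
q = (a + a.conj().T) / np.sqrt(2)   # adimensional position quadrature

for n in (0, 1, 5):
    fock = np.zeros(N)
    fock[n] = 1.0
    mean = fock @ q @ fock
    var = fock @ q @ q @ fock - mean**2
    print(n, var)  # variance = n + 1/2
```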

5.2 Coherent and Squeezed States

In this section we will discuss two important classes of states, coherent and squeezed states, which are fundamental concepts in quantum optics and in particular in the definition of Gaussian states (see section 5.3.1). For an extensive treatment of these states see [55, 56].

5.2.1 Coherent States

A coherent state is a state, belonging to a complete basis of the Hilbert space of a quantized electromagnetic field, with an indefinite number of photons, which allows it to have a more precisely defined phase than a vector of the Fock basis. Such a state has, in some sense, a classical behaviour, since it is a minimum-uncertainty state (Heisenberg's uncertainty principle is saturated). More in detail, coherent states saturate the principle with a balanced form in the quadratures (position and momentum operators):

∆q̂ = ∆p̂ = 1/2, (5.7)

where ∆x̂ is defined as ∆x̂ = E[x̂²] − E[x̂]².

Coherent states are obtained by applying the displacement operator

D(α) = exp(αa†− α∗a), (5.8) with α an arbitrary complex number, on the vacuum state

|αi = D(α) |0i . (5.9)

Now we summarize some important properties of the coherent states:

1) They are eigenstates of the annihilation operator a:

a|α⟩ = α|α⟩. (5.10)

2) They are normalized:

⟨α|α⟩ = 1. (5.11)

3) The coherent states may be expanded in terms of the number states as

|α⟩ = e^{−|α|²/2} Σ_n α^n/(n!)^{1/2} |n⟩. (5.12)

4) They are not orthogonal; in fact, the scalar product of two coherent states is

⟨β|α⟩ = ⟨0|D†(β)D(α)|0⟩. (5.13)

5) They are a complete set of states; the completeness relation is expressed as

(1/π) ∫ |α⟩⟨α| d²α = 1. (5.14)

6) The expected value of the number operator is

⟨α|a†a|α⟩ = |α|². (5.15)

5.2.2 Squeezed States

Minimum-uncertainty states with unequal quadrature uncertainties are called squeezed states. A single-mode squeezed state is generated by using the unitary squeezing operator

S(ε) = exp((1/2)ε∗a² − (1/2)ε a†²), (5.16)

where ε is a complex number.


The squeezing is determined by ε through its real and imaginary parts; therefore, it can be parametrized with two real quantities r and φ such that ε = re^{2iφ}. This parametrization has a physical description in the phase space p vs q, as shown in Figure 5.1.


Figure 5.1: Phase space representation of (a) a coherent state and (b) a coherent squeezed state.

In this representation the displacement operator is a translation of the uncertainty sphere and the squeezing operator is, indeed, a "squeezing" of that sphere along an axis fixed by φ. Therefore, a coherent squeezed state is obtained by first squeezing the vacuum and then displacing it:

|α, ε⟩ = D(α)S(ε)|0⟩. (5.17)

5.3 Gaussian States

In this section we will describe particular states of CV systems, called Gaussian states, that arise when the Hamiltonian of the system is linear or quadratic in the field modes. For an extensive treatment see [55, 57]. Due to their elegant mathematical framework, Gaussian states have a privileged position in quantum information and quantum communication. Indeed, they have been used to implement quantum communication protocols [58–60] and to investigate quantum correlations [61].

We will focus on the description of a Gaussian state as a density matrix; however, it is possible to give an equivalent description of it with a much more general approach called phase space method, describing a Gaussian state through a function called characteristic function. For our purposes it will


not be necessary to use this approach, but for an exhaustive treatment see [55].

5.3.1 Canonical Commutation Relations and Definition of a Gaussian State

Given an n-mode system, the position and momentum operators and their commutation relations can be reformulated in a more elegant and useful way:

Canonical Commutation Relations (CCR)

By defining the vector of canonical operators r̂ = (q̂1, p̂1, ..., q̂n, p̂n)ᵀ, Eq. (5.2) can be expediently recast as the following geometric, label-free expression:

[r̂, r̂ᵀ] = iΩ, (5.18)

with

Ω = ⊕_{j=1}^{n} Ω1,  where Ω1 = ( 0  1 ; −1  0 ). (5.19)

(Here we have defined (a bᵀ)jk ≡ aj bk.)

As we said before, Gaussian states arise when we consider systems with quadratic Hamiltonians in the field modes. The Hamiltonian in Eq. (5.1) can be generalized, allowing an interaction between different modes, as:

Ĥ = (1/2) r̂ᵀ H r̂ + r̂ᵀ r, (5.20)

where r is a 2n-dimensional real vector and H a symmetric and positive-definite matrix (do not confuse H with Ĥ: the first is a matrix, the second an operator). H can be taken symmetric, since an antisymmetric component would add just a constant term to the Hamiltonian because of the CCR. It is also taken positive because in this way we are considering systems that are stable. We are now ready to give a definition of a Gaussian state:


Definition of a Gaussian State

Any Gaussian state ρG may be written as

ρG = e^{−βĤ} / Tr[e^{−βĤ}], (5.21)

with β ∈ R+ and Ĥ defined by r and H as in Eq. (5.20).

All states of the form (5.21) for finite β are by construction mixed states.

5.3.2 Displacement Operator

Our generic Hamiltonian (5.20) can be defined in an equivalent way, up to irrelevant constant terms, with a shift r̄ ≡ −H⁻¹r of the canonical vector:

Ĥ′ = (1/2)(r̂ − r̄)ᵀ H (r̂ − r̄). (5.22)

We can now define the Displacement operator

D̂r̄ ≡ e^{i r̄ᵀ Ω r̂}, (5.23)

which is an n-mode generalization of the displacement operator (5.8) that creates a coherent state. It can be shown that the Hamiltonian (5.22) can be written as

Ĥ′ = (1/2)(r̂ − r̄)ᵀ H (r̂ − r̄) = (1/2) D̂−r̄ r̂ᵀ H r̂ D̂r̄. (5.24)

Note that D̂−r̄ = D̂†r̄.

In order to show this, it is sufficient to prove the following theorem:

Theorem 4. Action of Displacement operators on canonical operators

Let D̂r̄ ≡ e^{i r̄ᵀ Ω r̂}; then

D̂−r̄ r̂ D̂r̄ = r̂ − r̄. (5.25)

Proof. Let us define the vector of operators f̂(r̄) = e^{−i r̄ᵀ Ω r̂} r̂ e^{i r̄ᵀ Ω r̂}, for which one has f̂(0) = r̂. Computing the first derivative of f̂k at r̄ = 0 we obtain:

∂_{r̄j} f̂k |_{r̄=0} = [ ∂_{r̄′j} e^{−i Σ_{lm} r̄′l Ω_{lm} r̂m} r̂k e^{i Σ_{st} r̄′s Ω_{st} r̂t} ]_{r̄′=0} = −i Σ_m Ω_{jm} [r̂m, r̂k] = Σ_m Ω_{jm} Ω_{mk} = −δ_{jk}.


Since all the higher-order derivatives are zero, it follows that ˆf ( ¯r) = ˆr − ¯r.

5.3.3 Symplectic transformation

Definition 7. A symplectic transformation is described by a 2n × 2n matrix S which transforms the set of canonical coordinates r̂ to r̂′ = S r̂, leaving the CCR unchanged:

S Ω Sᵀ = Ω. (5.26)

The set of these transformations forms the real symplectic group Sp(2n, R). The symplectic group is an important tool for diagonalizing second-order Hamiltonians, thanks to the Williamson theorem [62]:

Theorem 5. Williamson Theorem

Every positive-definite real 2n × 2n matrix M can be diagonalized by a symplectic transformation S ∈ Sp(2n, R) in the following form:

S M Sᵀ = D, with D = diag(d1, d1, ..., dn, dn), (5.27)

with dj ∈ R+ for all j ∈ {1, ..., n}.
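Condition (5.26) is easy to check numerically for a single mode (a small sketch added for illustration, with arbitrarily chosen parameters): both a squeezing matrix diag(e^{−r}, e^{r}) and a phase-space rotation preserve the symplectic form, and so does their product.

```python
import numpy as np

Omega = np.array([[0., 1.], [-1., 0.]])   # single-mode symplectic form Omega_1

r, phi = 0.7, 0.3                          # arbitrary squeezing / rotation parameters
squeeze = np.diag([np.exp(-r), np.exp(r)])
rotate = np.array([[np.cos(phi), np.sin(phi)],
                   [-np.sin(phi), np.cos(phi)]])
S = rotate @ squeeze

# Each factor preserves the symplectic form, hence so does the product
print(np.allclose(squeeze @ Omega @ squeeze.T, Omega))  # True
print(np.allclose(S @ Omega @ S.T, Omega))              # True
```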

Indeed, up to a Displacement operator, we are now ready to diagonalize our purely quadratic Hamiltonian:

H = S(H)ᵀ (⊕_{j=1}^{n} ωj I2) S(H), with S(H) ∈ Sp(2n, R), (5.28)

obtaining then

Ĥ = (1/2) r̂ᵀ S(H)ᵀ (⊕_{j=1}^{n} ωj I2) S(H) r̂. (5.29)

It is possible to prove [55] that the action of a symplectic matrix S on the canonical coordinates is equivalent to

S r̂ = Ŝ r̂ Ŝ†, (5.30)

with Ŝ an operator of the form Ŝ = e^{i(1/2) r̂ᵀ H′ r̂} and H′ a symmetric and positive-definite matrix.


Therefore, the most general quadratic Hamiltonian can be written as

Ĥ = (1/2)(r̂ − r̄)ᵀ H (r̂ − r̄) = D̂−r̄ Ŝ [(1/2) r̂ᵀ (⊕_{j=1}^{n} ωj I2) r̂] Ŝ† D̂r̄. (5.31)

It is easy to note that the central term in Eq. (5.31) can be seen as the Hamiltonian of a set of free, non-interacting harmonic oscillators:

(1/2) r̂ᵀ (⊕_{j=1}^{n} ωj I2) r̂ = (1/2) Σ_{j=1}^{n} ωj (q̂j² + p̂j²) = Σ_{j=1}^{n} Ĥωj. (5.32)

It follows that the most general expression for a Gaussian state is:

ρG = D̂−r̄ Ŝ [ ⊗_{j=1}^{n} e^{−βĤωj} / Π_{j=1}^{n} Tr(e^{−βĤωj}) ] Ŝ† D̂r̄,  β > 0. (5.33)

5.3.4 Single-mode Gaussian States

For our purposes, it is worth describing single-mode Gaussian states in a little more depth. In this case r̂ = (q̂1, p̂1)ᵀ and Ω = Ω1. It can be shown that, with just one mode, the displacement operators defined in Eqs. (5.8) and (5.23) are equivalent, with a different parametrization:

r̄ = (r̄1, r̄2) ⟹ α = −(r̄1 + i r̄2)/√2. (5.34)

The squeezing operator in Eq. (5.16) and the symplectic operator are also equivalent, up to "passive transformations" that do not change the energy of the state. Therefore, a generic single-mode Gaussian state has the following form:

ρG = D(α) S(ε) ρβ S†(ε) D†(α), (5.35)

with ρβ the thermal state of the system with inverse temperature β.

Ergotropy

The ergotropy of a single mode is a quantity that can be analytically calculated (see appendix B and [8]) as a function of the first and second moments (E[a], E[a†a], E[a²]), obtaining:

E(t) = ω0 ( E[a†a] − (√D − 1)/2 )|t, (5.36)

with

D ≡ (1 + 2E[a†a] − 2|E[a]|²)² − 4|E[a²] − E[a]²|². (5.37)

In appendix B we have also shown that in the case of a pure state D = 1: this is consistent with the fact that ergotropy and energy have to coincide for pure states.


Chapter 6

Unique System Optimization

We consider for simplicity a unique quantum system, with a completely general Hamiltonian in the following form:

H(t) = H0 + λ(t)H1. (6.1)

Given H0 and H1, our goal is to find the optimal function λ(t), subject to the constraint λmin ≤ λ(t) ≤ λmax, that maximizes the ergotropy of the system in a fixed time τ. We consider ρ0 = |0⟩⟨0|, the ground state of H0, as the initial state. Moreover, since the state follows a closed evolution, it preserves its purity. We have seen in Chapter 3 that, for a pure state, ergotropy and energy coincide. Consequently, our aim will be to maximize the energy of the state,

E(τ) = ⟨ρ(τ)H0⟩, (6.2)

where ⟨·⟩ ≡ Tr[·], a notation that will be used from now on. Whenever possible, we will keep the reasoning as general as we can: when we have the chance, we avoid specifying the dimension of the system and the form of H0 and H1. When that is not possible, we will focus on two particular systems: the qubit and the harmonic oscillator. In this section we analyze three possible situations:

• H0 and H1 commuting;
• the "H0 vs H1" case (i.e. two alternating non-commuting Hamiltonians);
• H0 and H1 non-commuting.

In the first situation we will keep the reasoning general. In the H0 vs H1 case we will consider the Quantum Battery as a qubit, while for non-commuting Hamiltonians we will analyze both the qubit and the quantum harmonic oscillator scenario. The first two situations are described mainly to help the reader become familiar with the idea of charging a Quantum Battery, since they are well-known and trivial cases. The problem with H0 and H1 non-commuting, instead, will require a more sophisticated analysis; in particular we will need the tools of Optimal Control Theory described in chapter 4.

6.1 Commutative case

In the case of commuting terms, [H0, H1] = 0, we can easily prove that the energy of the quantum system, when we turn off the interaction, remains unchanged:

E(τ) = ⟨ρ(τ)H0⟩ = ⟨Uτ ρ(0) Uτ† H0⟩ = ⟨ρ(0)H0⟩ = E(0), (6.3)

where we have used the fact that, given

Ut = T{exp(−(i/ℏ) ∫₀ᵗ H(t′) dt′)}, (6.4)

then [Ut, H0] = 0.

6.2 H0 vs H1

Now we can consider the particular case in which we can switch from H = H0 to H = λ(t)H1, with [H0, H1] ≠ 0, as pictorially represented in Figure 6.1.


During the time interval in which H(t) = λ(t)H1, we have

[H(t1), H(t2)] = 0 for all t1, t2 ∈ [0, τ]. (6.5)

Therefore, the unitary evolution operator in this particular interval is

Ut = e^{−i ∫₀ᵗ λ(t′)dt′ H1} = e^{−i H1 g(t)}, with g(t) ≡ ∫₀ᵗ λ(t′) dt′. (6.6)

The problem is the following: given a time interval τ, during which H = λ(t)H1, which is the optimal protocol? We shall analyze a qubit system. The evolution depends on λ(t) only through g(t); consequently, in this case, a variational approach will not be necessary.

6.2.1 Qubit Case

We want to describe a two-level system with H0 = (I − σz)/2 and H1 = ĉ · ~σ, with ĉ = (c1, c2, c3) a real vector with unit modulus. The initial state of our system is ρ0 = |0⟩⟨0|, which can be described in the Bloch sphere as

ρ0 = (I + ~r0 · ~σ)/2, with ~r0 = (0, 0, 1). (6.7)

Our thesis is completely focused on the case c3 = 0, due to the fact that a rotation along the z-axis would complicate our analysis dramatically¹. This case is very simple if we look at the evolution of our qubit under the two possible Hamiltonians. When H = H0, the evolution of our qubit can be described in the Bloch sphere as a rotation about the z-axis, while for H = λ(t)H1 it is a rotation about an arbitrary axis in the XY plane. From Eq. (3.13) it is evident that we want to keep H = λmax H1 until the Bloch vector reaches the state ~r∗ = (0, 0, −1). If the time t∗ needed to "flip" the Bloch vector is greater than our available time τ, then the best choice is to keep H = λmax H1 for the whole time interval. If instead t∗ < τ, the best choice is to keep H = λmax H1 for a time interval of length t∗ and then switch the Hamiltonian.

A more mathematical proof of the protocol can be given with a simple study of a function. The energy of the system at the final time τ (we fix the initial time t0 = 0) is:

E = ⟨Uτ ρ0 Uτ† H0⟩, (6.8)


with Uτ the evolution operator in Eq. (6.6). Since we are looking for the maximum of the energy, we differentiate with respect to g(τ) ≡ g:

∂E/∂g = ⟨Uτ(−i[H1, ρ0])Uτ† H0⟩ = ⟨[H0ᴴ, H1]ρ0⟩ = 0, with H0ᴴ = Rĉ(−2g)(ẑ) · ~σ = e^{igH1} σz e^{−igH1}. (6.9)

The versor Rĉ(−2g)(ẑ) is the z-axis versor ẑ rotated about the axis ĉ by an angle 2g, since we know that

Rn̂′(θ)(n̂) · ~σ = e^{−i(θ/2) n̂′·~σ} (n̂ · ~σ) e^{+i(θ/2) n̂′·~σ}. (6.10)

Therefore, the condition for optimality is:

⟨[Rĉ(−2g)(ẑ) · ~σ, ĉ · ~σ] σz⟩ = 0. (6.11)

For Eq. (6.11) to hold it is necessary that g = π/2. This is a condition on the form of λ(t). We can now describe the best approach to the problem:

Optimal protocol for H0 vs H1

• case τ λmax > π/2: set λ(t) = λmax for a time interval ∆t such that ∆t λmax = π/2, then keep λ(t) = 0;

• case τ λmax < π/2: set λ(t) = λmax for the whole charging time.
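The condition g = π/2 can be checked numerically (a minimal sketch added for illustration, assuming ω0 = 1 and ĉ = x̂): evolving |0⟩ with U = exp(−ig ĉ·~σ) and scanning g, the final energy ⟨H0⟩ = sin²g peaks exactly at g = π/2, where the Bloch vector is fully flipped.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0 = (np.eye(2) - sz) / 2               # reference Hamiltonian, omega0 = 1 (assumption)
psi0 = np.array([1, 0], dtype=complex)  # ground state |0>

def energy_after(g, c=sx):
    # exp(-i g c.sigma) = cos(g) I - i sin(g) c.sigma for a unit vector c
    U = np.cos(g) * np.eye(2) - 1j * np.sin(g) * c
    psi = U @ psi0
    return (psi.conj() @ H0 @ psi).real

gs = np.linspace(0, np.pi, 201)
energies = [energy_after(g) for g in gs]
g_best = gs[int(np.argmax(energies))]
print(g_best, max(energies))  # g = pi/2 gives full charge, energy 1
```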

6.3 Non-commutative case

In the case of [H0, H1] ≠ 0 and H(t) = H0 + λ(t)H1 the analysis is nontrivial: we need to use Pontryagin's Minimum Principle (PMP) in order to find the necessary conditions for maximum final energy. Following the approach of chapter 4, we have a density matrix as state variable and λ(t) as the unique control input. We use a formalism slightly different from that of chapter 4, in which our state vector and costate vector are matrices; however, the Pontryagin Minimum Principle is completely equivalent. This is an optimization problem with fixed final time τ and free final state. The evolution of our state will be described by a unitary evolution generated by the Hamiltonian H. In order to express our optimization problem as the minimization of a performance measure, we can write the energy of the system as

E(τ) = ∫₀^τ ⟨H0 Lλ(t)[ρ(t)]⟩ dt + E(0), (6.12)

where we wrote the evolution of a closed system, defined in Eq. (2.22), as:

ρ̇(t) = −(i/ℏ)[H(t), ρ(t)] ≡ Lλ(t)[ρ(t)]. (6.13)

Since we want to find an optimal λ∗(t), we can forget about the E(0) term in Eq. (6.12).

Our performance measure will then be²

Ẽ = −∫₀^τ ⟨H0 Lλ(t)[ρ(t)]⟩ dt. (6.14)

Using the PMP notation, we want to find the minimum of

J = Ẽ + ∫₀^τ { a(t)(⟨ρ(t)⟩ − 1) + ⟨πᵀ(t)(Lλ(t)[ρ(t)] − ρ̇(t))⟩ } dt. (6.15)

In this expression π(t) is a self-adjoint operator of the same dimension as ρ(t), called the costate, and a(t) is a scalar function; both act as Lagrange multipliers that enforce, respectively, the dynamical constraint of Eq. (6.13) and the normalization condition ⟨ρ(t)⟩ = 1. We can conveniently express J as

J = ∫₀^τ [ H(t) − ⟨πᵀ(t) dρ(t)/dt⟩ ] dt, (6.16)

where the function H(t) is the pseudo-Hamiltonian of the model, defined as

H(t) = ⟨(πᵀ(t) − H0) Lλ(t)[ρ(t)]⟩ + a(t)(⟨ρ(t)⟩ − 1). (6.17)

The PMP provides us with a set of necessary conditions (6.18) that have to be satisfied by an optimal choice of the control parameter λ(t) in order to minimize Ẽ:

Necessary conditions

dρ∗/dt = ∂H(ρ∗(t), λ(t), π(t), t)/∂π
dπ∗/dt = −∂H(ρ∗(t), λ(t), π(t), t)/∂ρ
H(ρ∗(t), λ∗(t), π∗(t), t) ≤ H(ρ∗(t), λ(t), π∗(t), t) for all admissible λ(t)

for all t ∈ [t0, tf] (6.18)

From now on we drop the "∗" symbol, for simplicity. The first equation gives as a result the von Neumann evolution described in Eq. (6.13), as expected. The second one gives us information about the evolution of π(t):

dπ/dt = −∂H/∂ρ = −(∂/∂ρ)[ ⟨(πᵀ(t) − H0)(−i/ℏ)[H(t), ρ(t)]⟩ + a(t)(⟨ρ(t)⟩ − 1) ].

Writing A ≡ πᵀ(t) − H0, B ≡ H(t) and C ≡ ρ(t), we obtain

dπij/dt = +(i/ℏ)(∂/∂Cij)(Clm [A, B]ml) − (∂/∂Cij)[a(t)(Cll − 1)] = (i/ℏ)[A, B]ji − a(t)δij,

(dπᵀ/dt)ij = dπji/dt = (i/ℏ)[A, B]ij − a(t)δij, i.e. dπᵀ/dt = −(i/ℏ)[H(t), πᵀ(t) − H0] − a(t)I. (6.19)

We can observe that we can add an arbitrary time-dependent real function f(t), times the identity matrix, to π(t) without changing the problem. Therefore, we fix this gauge by choosing π(t) to be traceless, with an evolution not dependent on a(t): we can eliminate the a(t) term in Eq. (6.19). In conclusion we have the following necessary conditions for optimality:

Optimal trajectories

ρ̇∗(t) = −(i/ℏ)[H(t), ρ∗(t)] = Lλ∗(t)[ρ∗(t)]
dπ∗ᵀ/dt = −(i/ℏ)[H(t), π∗ᵀ(t) − H0]
H(ρ∗(t), λ∗(t), π∗(t), t) ≤ H(ρ∗(t), λ(t), π∗(t), t) for all admissible λ(t)

for all t ∈ [t0, tf] (6.20)


Now we define

π′ ≡ πᵀ − H0 (6.21)

and note that πᵀ(τ) = 0 (PMP boundary condition from Eq. (4.18), with h = 0). Accordingly, we have π′(τ) = −H0. From the second equation of (6.20) it is easy to understand that ρ(t) and π′(t) have the same unitary evolution. Focusing now on the last equation of (6.20), we know that

H(λ) = −iλG − i⟨π′[H0, ρ]⟩, (6.22)

with G = ⟨π′[H1, ρ]⟩ purely imaginary; in fact:

G = ⟨(π′[H1, ρ])ᵀ⟩ = ⟨[H1, ρ]ᵀ π′ᵀ⟩ = ⟨π′ᵀ[ρᵀ, H1ᵀ]⟩ = −⟨π′∗[H1∗, ρ∗]⟩ = −G∗.

This is the exact same situation described in the last example 4.2.6 of chapter 4. We want to minimize H, following the last inequality of Eq. (6.20) and respecting the constraints on the control λ. Therefore, since H depends explicitly and linearly on λ, we have the following protocol:

Optimal protocol for charging a qubit

−iG(t) > 0 → λ(t) = λmin,
−iG(t) < 0 → λ(t) = λmax,
G(t) = 0 → switch (if it happens just at a point; otherwise it is a singular interval). (6.23)

It is evident that, in this case, the optimal choice could be a Bang-Bang protocol. However, as in the example in chapter 4, it is possible that a singular interval exists. This means that we cannot yet say that a Bang-Bang protocol is the only optimal choice. It is worth underlining that this result does not depend on the choice of the system. Now, our main goal is to prove that, for a qubit and for a harmonic oscillator, singular intervals are not allowed, or, at least, that they only slightly modify the Bang-Bang protocol.
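The bang-bang structure suggested by (6.23) can be explored with a brute-force search over piecewise-constant protocols (a rough numerical sketch added for illustration, not the thesis's method; the values ω0 = 1, H1 = σx, λ ∈ {−1, 1}, τ = 2 and 8 time slices are assumptions): every sequence of λmin/λmax slices is tried, and the best one charges the qubit far above the λ(t) = 0 choice, which leaves the ground state, and hence the energy, unchanged.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0 = (np.eye(2) - sz) / 2      # omega0 = 1 (assumption)

def step_unitary(lam, dt):
    """exp(-i dt (H0 + lam*sx)) via the decomposition H = I/2 + h.sigma."""
    h = np.array([lam, 0.0, -0.5])
    hn = np.linalg.norm(h)
    n = h / hn
    nsig = n[0] * sx + n[1] * sy + n[2] * sz
    return np.exp(-0.5j * dt) * (np.cos(hn * dt) * np.eye(2)
                                 - 1j * np.sin(hn * dt) * nsig)

lmin, lmax = -1.0, 1.0
tau, nsteps = 2.0, 8
dt = tau / nsteps
U = {0: step_unitary(lmin, dt), 1: step_unitary(lmax, dt)}
psi0 = np.array([1, 0], dtype=complex)

best = 0.0
for mask in range(2**nsteps):      # every bang-bang sequence of 8 slices
    psi = psi0
    for k in range(nsteps):
        psi = U[(mask >> k) & 1] @ psi
    best = max(best, (psi.conj() @ H0 @ psi).real)
print(best)  # well above 0, the energy reached by keeping lambda(t) = 0
```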

6.3.1 Qubit

In the qubit case we will use the same notation as above, choosing the reference Hamiltonian of the system such that H0 = (ω0/2)(I − σz) and H1 = ~x · ~σ, with ~x = (x1, x2, 0)³ a real vector with unit modulus. In this particular case, π′(t) is a 2 × 2 Hermitian matrix with trace −ω0, since we know that π′(τ) = −H0, which is a Hermitian matrix with trace −ω0, and its unitary evolution preserves trace and Hermiticity. Therefore, we can describe both π′(t)⁴ and ρ(t) in the Bloch sphere:

ρ(t) = (I + ~a(t) · ~σ)/2,
π′(t) = (−ω0 I + ~b(t) · ~σ)/2. (6.24)

In appendix A.1 we have shown that, in order to be in a singular interval (G(t) = 0), we must have:

az = bz = 0 or ~a ∥ ~b. (6.25)

Let us analyze the two cases separately:

• az = bz = 0 case

Since the evolution of π′ and ρ at a time t can be described as a rotation of the Bloch vectors about the axis

n̂ = (2x1λ(t), 2x2λ(t), −ω0) / √(ω0² + 4λ²(t)), (6.26)

the only way in which the Bloch vectors can remain in the XY plane is λ(t) = 0. We are saying that, in this case, we need to turn off the Hamiltonian H1 to have a singular interval.

• ~a ∥ ~b case

Since the state ρ and the shifted costate π′ have the same evolution, once their Bloch vectors ~a and ~b become parallel, they will remain parallel for all the remaining time of the protocol. This means that, at the final time τ, they will still be parallel: ~a(τ) ∥ ~b(τ). Therefore, since we know that π′(τ) = −H0, with H0 = (ω0/2)(I − σz), from Eq. (6.24) it is easy to see that bz(τ) = −1 and bx,y(τ) = 0. Due to the fact that the two Bloch vectors are parallel, it is evident that az(τ) = ±|~a| and ax,y(τ) = 0. In particular we know that |~a| = 1, because our state remains pure for the whole protocol. What we have proved is that, in this

³The third component x3 has been taken equal to zero since, as shown in appendix A.1, we could not prove the absence of singular intervals in the most general case.

⁴The −ω0 is irrelevant for our purposes: to be formal, we are representing π̃ in the Bloch sphere.


second case, in order to be in a singular interval, our state has to reach at the final time τ either our initial state, ρ(τ) = ρin (when az = 1), or the state with the maximum possible energy, ρmax = |1⟩⟨1| (when az = −1). The first option is certainly unpleasant for an optimal control method, since it does not lead to an optimal protocol, and for this reason we discard it. The second option, however, would certainly be the best protocol, since it would bring our system to the completely charged state.

In Figure 6.2 all the possible singular intervals are described.

Figure 6.2: Pictorial analysis of singular intervals in a qubit

If we generalize our Bang-Bang protocol between λmax and λmin by allowing also intervals with λ = 0, calling that a Bang-Bang-off protocol, what remains to be done is to prove that the best way to reach ρfin = |1⟩⟨1| is still through a Bang-Bang (or Bang-Bang-off) protocol. In order to do that we will study a different case: a minimum-time problem where the final state ρfin is fixed and the time interval is free. Indeed, if we prove that the protocol that minimizes the time needed to reach ρfin = |1⟩⟨1| is a Bang-Bang (or Bang-Bang-off) protocol, we have completed our analysis: Bang-Bang-off protocols would be the optimal choice in any case.

Pontryagin minimizing time

As in the example 4.2.6 in chapter 4, we are interested in minimizing the performance measure:

J = ∫₀^τ dt,


which is a simple way to express the charging time in integral form. Indeed, we want to minimize the time that the state needs to evolve from any ρin to ρ(τ) = |1⟩⟨1|. The extended functional is:

J̃ = ∫₀^τ dt {1 + ⟨π^T (L_λ[ρ(t)] − dρ/dt(t))⟩ + a(t)(⟨ρ⟩ − 1)}.   (6.28)

With the same notation as in the previous case, we can define the pseudo-Hamiltonian

H = 1 + ⟨π^T (L_λ[ρ(t)])⟩ + a(t)(⟨ρ⟩ − 1).   (6.29)

We note again that a function f(t) can be added to π(t) without changing the problem, so we fix this gauge freedom by imposing that π(t) be traceless. Repeating almost the same calculations as in Eq. (6.19), we obtain the following evolution of π:

dπ^T/dt(t) = −i[H(t), π^T(t)].   (6.30)

This is exactly the same situation as in the previous case, with π0 → π^T; therefore π^T and ρ undergo the same unitary evolution. The pseudo-Hamiltonian can be written in the same form as Eq. (6.22):
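The statement that π^T and ρ share the same unitary evolution can be checked numerically. Below is a minimal sketch (the Hamiltonian and costate are arbitrary illustrative choices, not taken from the thesis): both matrices solve dX/dt = −i[H, X], hence both are conjugated by the same propagator U = exp(−iHt), and their overlap ⟨π^T ρ⟩ = Tr(π^T ρ) is conserved along the flow.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = 0.5 * sz + 0.3 * sx          # illustrative constant Hamiltonian
U = expm(-1j * H * 2.0)          # exact propagator over a time t = 2

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # a state
piT = 0.4 * sz + 0.1 * sx                          # a traceless costate

# dX/dt = -i[H, X] has solution X(t) = U X(0) U^dagger for BOTH matrices,
# so the same unitary conjugation applies to rho and pi^T ...
rho_t = U @ rho @ U.conj().T
piT_t = U @ piT @ U.conj().T

# ... and the overlap Tr(pi^T rho) is an invariant of the joint evolution.
assert np.isclose(np.trace(piT_t @ rho_t), np.trace(piT @ rho))
```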

H(λ) = −iλG + const,   (6.31)

with G = ⟨π^T [H1, ρ]⟩. Since we want to minimize H, following the last inequality of (6.20), we obtain the following protocol:

Optimal protocol for minimizing time

−iG(t) > 0 → λ(t) = λmin,
−iG(t) < 0 → λ(t) = λmax,
G(t) = 0 → switch (if this holds only at an isolated point; otherwise it is a singular interval).
(6.32)
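The switching law above can be sketched in code. The following toy integration uses our own illustrative choices of H0, H1, control bounds and costate (a genuine Pontryagin solution would also require solving the two-point boundary-value problem for π); it propagates ρ and π^T forward while picking λ from the sign of −iG(t), which is real for Hermitian π^T, ρ and H1:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H0 = 0.5 * sz                    # free Hamiltonian (illustrative)
H1 = sx                          # control Hamiltonian (illustrative)
lam_min, lam_max = -1.0, 1.0     # control bounds (illustrative)

def rhs(X, lam):
    """dX/dt = -i[H0 + lam*H1, X], shared by rho and the costate pi^T."""
    H = H0 + lam * H1
    return -1j * (H @ X - X @ H)

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # start from |0><0|
piT = 0.4 * sz + 0.1 * sx                          # illustrative costate

dt, lam_history = 1e-3, []
for _ in range(2000):
    G = np.trace(piT @ (H1 @ rho - rho @ H1))      # G = <pi^T [H1, rho]>
    s = (-1j * G).real                             # -iG is real here
    lam = lam_min if s > 0 else lam_max            # bang-bang rule (6.32)
    lam_history.append(lam)
    rho = rho + dt * rhs(rho, lam)                 # Euler step for the state
    piT = piT + dt * rhs(piT, lam)                 # same step for the costate
```

The Euler step is only for illustration; it preserves the trace and Hermiticity of ρ exactly, though a symplectic or exact-exponential integrator would be the better choice in practice.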

In order to prove that singular intervals are not allowed, we will show that they are incompatible with the boundary conditions, which, in the case of a fixed final state and free charging time (as in (4.19)), read

H(ρ∗(τ), λ∗(τ), π∗(τ), τ) = 0.   (6.33)

We consider, as before, π^T and ρ in the Bloch sphere (but this time we do not
