
Università degli Studi di Padova

DIPARTIMENTO DI INGEGNERIA DELL'INFORMAZIONE

Corso di Laurea Magistrale in Ingegneria dell'Automazione

Tesi di laurea magistrale

Switching control of quantum dynamics

Candidato: Pierre Scaramuzza (Matricola 1045802)
Relatore: Francesco Ticozzi

Anno Accademico 2013–2014


Abstract

This work introduces a solution based on switching techniques for controlling open quantum systems. Assuming the existence of a shared steady state for a set of marginally stable generators of quantum Markov semigroups, we propose algorithms for asymptotically stabilizing such a state by suitably switching between the dynamical generators. The problem is of interest for state preparation protocols in experimental quantum physics and emerging quantum technologies. In this thesis, after a brief introduction on quantum control and the motivation for this work, we first provide the necessary background on linear switching systems, as well as a concise presentation of quantum systems and their dynamics. We focus in particular on Markovian models for open quantum systems, generated by equations in Lindblad form, following the analogy with classical Markov chains to better guide the reader unfamiliar with quantum models. We next discuss new techniques for stabilizing quantum states by switching between Lindblad generators, and extend our results to the stabilization of common invariant subspaces of density operators. Promising directions for further development of the results toward state-based feedback strategies are suggested in the conclusions.

Contents

1 Introduction
2 Switching systems
  2.1 Introduction to switching systems
  2.2 Stability of switching systems
    2.2.1 Stability under arbitrary switching
    2.2.2 Stability under constrained switching
  2.3 Periodic switching
    2.3.1 Finding a Hurwitz convex combination
  2.4 State feedback switching
3 Quantum dynamics
  3.1 Introduction to quantum dynamics
    3.1.1 Observables and states
    3.1.2 Closed and open quantum systems
  3.2 Markovian dynamics for open systems
    3.2.1 Classical Markov semigroups
    3.2.2 Quantum Markov dynamics
    3.2.3 Markovian Master Equations and Lindblad form
  3.3 Distances and norms for density operators
    3.3.1 Trace distance between quantum states
  3.4 Coherence-vector formulation
    3.4.1 Quantum states as real vectors
    3.4.2 Linear and affine maps for vectorized dynamics
4 Switching control of quantum dynamics
  4.1 Stability for Markovian Master Equations
  4.2 Definition of the problem
  4.3 Special case: unital generators
    4.3.1 Symmetric unital generators
  4.4 General case: non-unital generators
  4.5 Examples of stabilization by switching generators
  4.6 Stability of subspaces
5 Conclusion
Chapter 1
Introduction

Quantum control is a branch of control theory on which many researchers have worked in the last few decades. It is a complex and articulated area, which still needs to be examined in depth in order to be exploited in the best possible way. Indeed, the growing interest in this research topic is due to the powerful tools that quantum control provides to experimentalists in several subfields of physics. Among the most important applications is nuclear magnetic resonance [15], widely used in medical imaging and in spectroscopy. One of the typical tasks of this discipline is to describe techniques to drive an initial quantum state to a predetermined target state. For classical systems many strategies have been developed and adopted for achieving this task, among them robust control [12], optimal control [10], and even Lyapunov methods. Unfortunately, these methods often cannot be directly applied when dealing with quantum systems, due to the difficulty of exploiting measurements for quantum control. Indeed, such microscopic dynamics are characterized by behaviors which have no counterparts in classical physics.
One of the most important features of quantum mechanics is that it is difficult to acquire information about quantum states without disturbing them. On the other hand, it is well known that the most effective strategy for controlling any classical system is to exploit feedback information [11], that is, to continuously observe state measurements. For this reason existing procedures must be adapted to the quantum framework in order to be fully applicable, and others need to be created to fix otherwise unsolvable issues. The difficulties that the control engineer has to face in overcoming these challenges are also linked to the fact that proper dynamical models on which to conceive control solutions are hard to define. This is mainly due to the interactions that can arise between a system and its environment [16], [18], [20]. Interpreting such unknown contributions as sources of noise leads to describing open quantum systems with stochastic models, able to take into account existing uncertainties but at the same time more difficult to handle.
The objective of this thesis is to present an open-loop control technique which applies indiscriminately to pure and mixed states. The evolution of a quantum state is described by the generator of its dynamics, which is directly linked to the physical elements that compose the system (magnetic fields, lasers, ...). Stability properties of such dynamics can be investigated using the same mathematical tools and definitions as for classical systems. Indeed, the most used class of models for quantum evolutions are actually linear systems, for which spectral properties are well known [9]. By switching between marginally stable systems, we will show how to drive any initial state to a given shared steady state. Under a few assumptions and using piecewise-constant generators, we will thus manage to make a shared steady state globally asymptotically stable for a suitable switching system. The main issue for this technique is to find a stabilizing switching law between given generators, assuming that none of them is asymptotically stable.
Most of the definitions about switching systems will be given in Chapter 2, together with results about stability and algorithms for engineering stabilizing switching laws. In Chapter 3 we will introduce the quantum formalism and derive Markovian Master Equations as suitable dynamical models. There we will moreover describe a procedure to express master equations as vector differential equations, so as to directly apply stability results for switching linear systems. Finally, in Chapter 4, we will show how to fully exploit switching theory in the quantum setting, and some numerical examples will be provided.
Chapter 2
Switching systems

System theory deals with both continuous-time and discrete-time evolutions. Methods for controlling them have been developed in parallel, sometimes with specificities related to the particular kind of dynamics. Nevertheless, real systems are only rarely described by one model or the other, and rather by an interaction of the two. A clear example of this behavior is given by the motion of an automobile. The growth of its speed does not depend only on the acceleration input, but also on the gear shift position. Acceleration is a continuous input given by the driver, while gear shift positions belong to a discrete set. A comprehensive model of motion should then consider both dynamics and the influence of one on the other.
Such a system is called a hybrid system, and this type of dynamics has attracted significant attention from control researchers. Moreover, when the real object of control is a continuous variable, for example because the discrete contribution is sparse, one usually speaks of switching systems. Apart from being an intrinsic characteristic of many real systems, switching can also be engineered to drive the evolution of the variables of interest. This is the aspect we will focus on in the following sections, as we will later use such techniques to control quantum systems. More specifically, dealing with stabilization problems, we will identify proper time-based and state-based switching rules that make asymptotically stable steady states that would not be attractive for the individual subsystems.

2.1 Introduction to switching systems


The main components of a switching system are a set of dynamical systems and their switching signal. Each dynamical system can be expressed as a generic function

f_p(x) : R^N → R^N,

associated with an index p that belongs to a set P, which from now on we will consider finite and discrete. As we will mostly deal with linear systems, we will then describe them as real matrices A_p ∈ R^(N×N). We already anticipated that the switching signal can depend on the state of the system or on time.
In the first case the switching signal is a piecewise constant function

σ(x) : R^N → P,

and the state space is subdivided into regions, to each of which is associated a value in P. When the state crosses the boundary of a region the signal σ(x) switches, and the system associated with the selected index is chosen to drive the evolution of the state. In general, we could also suppose the existence of a reset map which specifies the new state of the system after it hits a boundary. For our goals, however, we will consider the evolution of the state to be continuous.
In the second case, the switching signal is a piecewise constant function

σ(t) : [0, +∞) → P,

where t denotes time. At given instants the signal σ(t) switches to a value in P which must be determined by the controller. Again, the system associated with the selected index is chosen to drive the state until the next switching.
The expression of a generic linear switching system is then the following:

ẋ(t) = A_p x(t)  if σ(t) = p,   x(0) = x_0.   (2.1)

Taking advantage of spectral theory for linear systems, we are now able to examine some fundamental properties of switching system stability.
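To make the time-based case of (2.1) concrete, the evolution under a piecewise-constant signal can be computed exactly, interval by interval, using matrix exponentials. The following sketch is not from the thesis: the matrices and the schedule are illustrative, and NumPy/SciPy are assumed available.

```python
import numpy as np
from scipy.linalg import expm

def simulate_switched(subsystems, schedule, x0):
    """Propagate x' = A_{sigma(t)} x through a list of (index, duration)
    pairs; sigma is constant on each interval, so the flow is expm(A*tau)."""
    x = np.asarray(x0, dtype=float)
    trajectory = [x.copy()]
    for p, tau in schedule:
        x = expm(subsystems[p] * tau) @ x
        trajectory.append(x.copy())
    return np.array(trajectory)

# Two illustrative marginally stable subsystems, with index set P = {0, 1}.
A = {0: np.array([[0.0, 1.0], [-1.0, 0.0]]),   # pure rotation
     1: np.array([[-2.0, 0.0], [0.0, 0.0]])}   # damping along x only

# A periodic time-based switching signal: alternate the two generators.
schedule = [(0, 0.05), (1, 0.05)] * 200
traj = simulate_switched(A, schedule, [1.0, 1.0])
print(np.linalg.norm(traj[-1]))   # the norm contracts under this schedule
```

Neither subsystem alone is asymptotically stable, yet the alternating schedule contracts the state: this is exactly the phenomenon formalized in Section 2.3.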

2.2 Stability of switching systems


There are mainly two ways of studying the stability of switching systems:

• stability under arbitrary switching,

• stability under constrained switching.

While the former asks what the features of a set of subsystems must be so that the switching system is stable for any switching signal, the latter, given a particular set of subsystems, tries to find the switching signal that makes the switching system stable. Before examining each of these approaches more closely, we give the following fundamental definition:

Definition 2.1. A switching system (2.1) is uniformly asymptotically stable if there exist a positive constant δ and a class KL function β such that for all switching signals σ the solutions of (2.1) with |x(0)| ≤ δ satisfy the inequality

|x(t)| ≤ β(|x(0)|, t)  ∀t ≥ 0.   (2.2)

If the inequality (2.2) is valid for all switching signals and all initial conditions, we obtain global uniform asymptotic stability (GUAS).

2.2.1 Stability under arbitrary switching


Figure 2.1: Switching between stable dynamics. (a) Stable system; (b) stable system; (c) stable switching system; (d) unstable switching system.


When dealing with stability under arbitrary switching, a necessary condition for GUAS is certainly that each individual subsystem be asymptotically stable. Indeed, if any subsystem p were unstable, then the switching system with σ ≡ p would be unstable too. Nevertheless, that is not a sufficient condition for GUAS. This fact can be illustrated by a simple example: consider two linear, stable dynamics on R²,

(a): ẋ = −0.1x + 4y, ẏ = −x − 0.1y    and    (b): ẋ = −0.1x + y, ẏ = −4x − 0.1y,   (2.3)

whose trajectories are respectively of the form depicted in Figure 2.1a and Figure 2.1b. In Figure 2.1c the initial state of the system lies on the y-axis and its evolution begins according to the dynamics represented by the red line. Each time the state crosses the x-axis, the system switches to the dynamics depicted in Figure 2.1a, while each time it crosses the y-axis it switches to those depicted in Figure 2.1b. This switching rule produces a contraction of the norm of the state, and the resulting system is asymptotically stable. On the other hand, in Figure 2.1d, the initial state still lies on the y-axis but its evolution is initially described by the dynamics represented by the blue line. Now when the state crosses the x-axis, the system switches to the dynamics depicted in Figure 2.1b, while when it crosses the y-axis it switches to those depicted in Figure 2.1a. The norm of the state then keeps increasing and the switching system is unstable. It is therefore necessary to find additional conditions to ensure asymptotic stability. These are generally derived using Lyapunov techniques, such as finding common Lyapunov functions.
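This example can be checked numerically. Both subsystems in (2.3) circulate the state with angular frequency 2, so, starting on a coordinate axis, each axis crossing occurs after a quarter period π/4; composing the two quarter-period flows in the two possible orders reproduces the contracting and the diverging behavior. A sketch, assuming switching happens exactly at the crossings (NumPy/SciPy assumed):

```python
import numpy as np
from scipy.linalg import expm

Aa = np.array([[-0.1, 4.0], [-1.0, -0.1]])   # system (a): eigenvalues -0.1 +- 2i
Ab = np.array([[-0.1, 1.0], [-4.0, -0.1]])   # system (b): eigenvalues -0.1 +- 2i

q = np.pi / 4                 # time between consecutive axis crossings
x0 = np.array([0.0, 1.0])     # initial state on the y-axis

# Half-period maps for the two possible orderings of the quarter-turn flows.
maps = {"contracting": expm(Aa * q) @ expm(Ab * q),
        "diverging":   expm(Ab * q) @ expm(Aa * q)}

results = {}
for label, M in maps.items():
    x = x0
    for _ in range(10):       # ten half-periods of switching
        x = M @ x
    results[label] = np.linalg.norm(x)
print(results)                # one ordering decays, the other blows up
```

Even though both subsystems are individually stable, the diverging ordering gains roughly a factor of four per half-period, which the weak damping cannot compensate.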

Definition 2.2. Given a positive definite continuously differentiable function V : R^N → R, we will say that it is a common Lyapunov function for the set {f_p}, p = 1, ..., m, if there exists a positive definite continuous function W : R^N → R such that

(∂V/∂x) f_p(x) ≤ −W(x)  ∀x, ∀p ∈ P.

This definition allows us to formulate the following theorem:

Theorem 2.1. If all the systems f_p share a common Lyapunov function, then the switched system (2.1) is GUAS.

The proof of this theorem can be easily derived by extending that of Lyapunov's second method. It is worth noting that the existence of a common Lyapunov function is not a necessary condition for GUAS, as proved in [1].
Moreover, finding a common Lyapunov function can be a difficult task. In any case, for the objectives of this thesis, stability under arbitrary switching is not a suitable requirement, as our goal is to find proper switching signals that make asymptotically stable switching systems out of only marginally stable subsystems.

2.2.2 Stability under constrained switching


Even if some of the subsystems being switched are unstable, reaching asymptotic stability for the switching system can still be possible by appropriately choosing the switching signal. Suppose, for example, that we have at our disposal the two subsystems

(a): ẋ = 0.1x + 4y, ẏ = −x + 0.1y    and    (b): ẋ = 0.1x + y, ẏ = −4x + 0.1y,   (2.4)

whose behavior is described in Figure 2.2a and Figure 2.2b.
Figure 2.2: Switching between unstable dynamics. (a) Unstable system; (b) unstable system; (c) stable switching system; (d) unstable switching system.

These subsystems are clearly unstable, and switching when the norm of the state starts decreasing keeps the switching system unstable. On the other hand, if switching occurs when the norm of the state starts increasing, then the switching system becomes asymptotically stable. This example shows that even if none of the individual subsystems is asymptotically stable, a stabilizing switching signal can sometimes be found. In these cases Lyapunov techniques can still be employed. A common Lyapunov function cannot actually exist: if one did, the switching system would be GUAS by Theorem 2.1, and we know it is not. Nevertheless, in particular cases of marginally stable subsystems, a stabilizing switching signal can be found by composing several Lyapunov functions [5]. We will not explore this route, as more powerful tools exist to ensure asymptotic stability. In the next sections we will then examine two techniques to reach this aim. The first is a time-based switching rule, while the second is a state-based one. Although they both rest on the same assumptions, we will show that the first is much easier to apply in the quantum setting.

2.3 Periodic switching


We now introduce the mathematical background on which we will later rely.

Lemma 2.1. Let A1, ..., Am be matrices in R^(N×N). Then there exists a positive real number ε such that

exp(Am t) exp(A(m−1) t) · · · exp(A1 t) = exp( (Σ_{p=1}^m Ap) t + Υc t² ),

for any t ≤ ε, where the entries of Υc are bounded.

Indeed, given two real square matrices A and B, there always exists a complex matrix C such that

exp(A) exp(B) = exp(C).

Moreover, if ||A|| + ||B|| ≤ ln(2) then C is real [6], and by the Baker–Campbell–Hausdorff formula it is given by the convergent series

C = A + B + (1/2)[A, B] + ...,

where [A, B] = AB − BA is called the commutator of A and B. We now make the following assumption.
Assumption 2.1. There exists a Hurwitz convex combination Ac of the matrices Ap, that is,

∃ α1, ..., αm ≥ 0,  Σ_{p=1}^m αp = 1,  such that  Ac = Σ_{p=1}^m αp Ap  is Hurwitz.
From Lemma 2.1 we can now prove the most important result of this section, as proposed in [2].

Theorem 2.2. Suppose that Assumption 2.1 holds. Then the corresponding switching system is stabilizable.

Proof. According to Lemma 2.1, if the length of the period ε is short enough,

exp(αm Am ε) · · · exp(α1 A1 ε) = exp( (Σ_{p=1}^m αp Ap) ε + Υc ε² ) = exp((Ac + Υc ε) ε),

and

Ā := Ac + Υc ε   (2.5)

is still Hurwitz. Indeed, the eigenvalues of a matrix depend continuously on its entries. As Υc is bounded, if ε → 0 then the eigenvalues of Ā approach those of Ac, and since Ac is Hurwitz there exists an ε for which Ā is Hurwitz too. Having fixed such an ε, a periodic switching path can be defined as follows:

σ(t) = 1  if mod(t, ε) ∈ [0, α1 ε),
  ...
σ(t) = m  if mod(t, ε) ∈ [ (Σ_{p=1}^{m−1} αp) ε, ε ).

At the end of each switching period ε, a state vector v whose dynamics is described by the above switching law assumes the values

v(kε) = exp(Ā kε) v(0),  k = 1, 2, ...,

where Ā was defined in (2.5). Moreover, let us define

φ(s2, s1) := e^{Ap (s2 − tp)} e^{α(p−1) A(p−1) ε} · · · e^{A(k−1) (tk − s1)},  s1 ≤ s2,

the transition matrix from the state at instant s1 ∈ (t(k−1), tk) to that at instant s2 ∈ (tp, t(p+1)) according to the given switching law. For any non-negative integers l1 ≤ l2 the evolution covers a finite number of cycles, and

φ(l2 ε, l1 ε) = e^{Ā (l2 − l1) ε}.

As Ā is Hurwitz, there exist positive numbers κ and λ such that

||φ(l2 ε, l1 ε)|| ≤ κ e^{−λ (l2 − l1) ε}.

For any s1 ≤ s2, let l1 and l2 satisfy

l1 ε ≤ s1 < (l1 + 1) ε,  (l2 − 1) ε < s2 ≤ l2 ε.

Then

||φ(s2, s1)|| ≤ ||φ(l1 ε, s1)|| ||φ(l2 ε, l1 ε)|| ||φ(s2, l2 ε)||
  ≤ κ e^{−λ (l2 − l1) ε} ||φ(0, s1 − l1 ε)|| ||φ(0, l2 ε − s2)||.

Finally, denoting

κ1 = max_{0 ≤ t ≤ ε} ||φ(0, t)||,

which is always attained because φ(0, t) is continuous in t, we get

||φ(s2, s1)|| ≤ κ1² κ e^{−λ (l2 − l1) ε} ≤ κ1² κ e^{−λ (s2 − s1)}.

The transition matrix is exponentially convergent, and the switching system is then stabilizable.
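The proof can also be illustrated numerically: for marginally stable generators whose convex combination Ac is Hurwitz, the product of the flows over one short period ε has spectral radius below one, matching exp((Ac + Υc ε) ε). A sketch with illustrative matrices, not taken from the thesis (NumPy/SciPy assumed):

```python
import numpy as np
from scipy.linalg import expm

A1 = np.array([[0.0, 1.0], [-1.0, 0.0]])   # eigenvalues +-i: marginally stable
A2 = np.array([[-2.0, 0.0], [0.0, 0.0]])   # eigenvalues -2, 0: marginally stable
a1, a2 = 0.5, 0.5
Ac = a1 * A1 + a2 * A2                     # eigenvalues -0.5, -0.5: Hurwitz

eps = 0.01                                 # one switching period
M = expm(a2 * A2 * eps) @ expm(a1 * A1 * eps)

# By Lemma 2.1, M = exp((Ac + Upsilon_c*eps) * eps), so for small eps the
# spectral radius of M is close to exp(-0.5*eps), which is below one.
rho = max(abs(np.linalg.eigvals(M)))
print(rho < 1.0, rho, np.exp(-0.5 * eps))
```

Iterating M therefore contracts the state at the essentially optimal rate given by the dominant eigenvalue of Ac, as the theorem predicts.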

2.3.1 Finding a Hurwitz convex combination


The technique we have just presented does not require finding any Lyapunov function, as it is based only on a time-based switching rule. Nevertheless, assuming that none of the individual subsystems is asymptotically stable, a Hurwitz convex combination of the Ap does not always exist. When dealing with two subsystems, a brute-force method can be used to find a pair of coefficients that makes the combination Hurwitz. In this case,

Ac = α A1 + (1 − α) A2,

and, defining β = (1 − α)/α > 0,

Â := A1 + β A2.

Clearly, Ac is Hurwitz if and only if Â is Hurwitz, since Â = (1/α) Ac. As the value of β changes, the eigenvalues of Â change too. A proper value of β is one for which all the eigenvalues of Â have negative real part. In order to find it, one can determine the intervals of 0 < β < +∞ in which the signs of the real parts of the eigenvalues are constant, and pick any value of β in each interval to check whether the corresponding Â is Hurwitz. Moreover, when the sign of the real part of an eigenvalue changes, that eigenvalue is purely imaginary. The endpoints βi, which delimit the intervals where the signs of the eigenvalues of Â do not change, are then those for which there exists an ωi ∈ R such that

det(A1 + βi A2 + iωi I) = 0.

The algorithm for finding a Hurwitz convex combination of two unstable matrices can thus be summarized:

1. Compute the set {βi | ∃ωi : det(A1 + βi A2 + iωi I) = 0} and order the βi so that β1 ≤ β2 ≤ ... ≤ βn, where n is the cardinality of the set.

2. Define a set of test points { (β1 + β2)/2, ..., (βi + β(i+1))/2, ..., 2βn + 1 }.

3. Calculate the eigenvalues of Â with β equal to each test point, to check whether one of them makes Â Hurwitz.

Even though this algorithm can be applied when switching between two subsystems, finding Hurwitz convex combinations of more than two matrices is NP-hard. Practical procedures for more complex switching systems are then unlikely to be found.
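A simplified version of this search, scanning a grid of β values instead of computing the exact interval endpoints, is straightforward to code. The sketch below is illustrative (the matrices are examples, not taken from the thesis; NumPy assumed):

```python
import numpy as np

def first_hurwitz_beta(A1, A2, betas):
    """Return the first beta in the grid for which A_hat = A1 + beta*A2
    is Hurwitz (all eigenvalues with negative real part), or None."""
    for beta in betas:
        eig = np.linalg.eigvals(A1 + beta * A2)
        if eig.real.max() < 0:
            return float(beta)
    return None

A1 = np.array([[0.0, 1.0], [-1.0, 0.0]])   # not Hurwitz: purely imaginary spectrum
A2 = np.array([[-2.0, 0.0], [0.0, 0.0]])   # not Hurwitz: one zero eigenvalue

beta = first_hurwitz_beta(A1, A2, np.linspace(0.1, 10.0, 100))
alpha = 1.0 / (1.0 + beta)                 # invert beta = (1 - alpha)/alpha
Ac = alpha * A1 + (1.0 - alpha) * A2       # the Hurwitz convex combination
print(beta, alpha, np.linalg.eigvals(Ac).real.max())
```

Since Â = A1 + βA2 is a positive multiple of Ac, checking Â on the grid is equivalent to checking the convex combination itself.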
Nevertheless, it is worth noting that the existence of a Hurwitz convex combination of the Ap is not necessary for finding a stabilizing switching law. Consider for example the switching system whose subsystems are

A1 = [ 0  4 ; −1  0 ],   A2 = [ 0  1 ; −4  0 ],

represented in Figure 2.3a and Figure 2.3b. Clearly, there cannot exist any Hurwitz convex combination of these matrices, as

Ā = α1 A1 + α2 A2

has imaginary eigenvalues

λ1,2 = ± i √( (α1 + 4α2)(4α1 + α2) ).

However, the state-based switching law

σ(x, y) = 1  if xy ≤ 0,
σ(x, y) = 2  if xy > 0,

is stabilizing, as shown in Figure 2.3c.
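This state-based law is easy to verify by direct numerical integration. A forward-Euler sketch (illustrative; the step size is chosen small enough that the axis crossings are resolved; NumPy assumed):

```python
import numpy as np

A = {1: np.array([[0.0, 4.0], [-1.0, 0.0]]),
     2: np.array([[0.0, 1.0], [-4.0, 0.0]])}

def sigma(x):
    # quadrants where x*y <= 0 use A1, the others use A2
    return 1 if x[0] * x[1] <= 0 else 2

x = np.array([0.0, 1.0])
dt = 1e-4
for _ in range(int(10.0 / dt)):   # integrate over 10 time units
    x = x + dt * (A[sigma(x)] @ x)
print(np.linalg.norm(x))          # the norm contracts toward zero
```

Under this law each subsystem carries the state along the "short" axis of its elliptical orbits, so the norm roughly halves at every quarter turn even though every convex combination of the generators is only marginally stable.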


Figure 2.3: Stable switching of marginally stable systems. (a) First system; (b) second system; (c) stable switching.

2.4 State feedback switching


Assuming the state of the system to be exactly known at each instant, a stabilizing switching law based on a state-space partition can be easily engineered. This technique again requires the validity of Assumption 2.1, that is, the existence of a Hurwitz convex combination Ac of the switching subsystems. Let P be the positive definite solution of the Lyapunov equation

Ac^T P + P Ac = −I.   (2.6)

Let us moreover define

Qp = Ap^T P + P Ap,  p ∈ P,

and a set of arbitrary real numbers rp ∈ (0, 1), one associated with each subsystem. We then choose the initial value of the switching signal as any index attaining

σ(t0) = arg min_{p ∈ P} { x0^T Q1 x0, ..., x0^T Qm x0 }.

The first switching instant is then chosen as

t1 = inf{ t > t0 : x^T(t) Q_{σ(t0)} x(t) > −r_{σ(t0)} x^T(t) x(t) },

and the next active subsystem is the one associated with

σ(t1) = arg min_{p ∈ P} { x(t1)^T Q1 x(t1), ..., x(t1)^T Qm x(t1) }.

The instant t1 must exist because we assumed that none of the subsystems is asymptotically stable. The sequences of switching times and active systems can thus be recursively defined:

t_{k+1} = inf{ t > t_k : x^T(t) Q_{σ(t_k)} x(t) > −r_{σ(t_k)} x^T(t) x(t) },   (2.7)

and

σ(t_{k+1}) = arg min_{p ∈ P} { x(t_{k+1})^T Q1 x(t_{k+1}), ..., x(t_{k+1})^T Qm x(t_{k+1}) }.   (2.8)
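A compact implementation of this law — solving (2.6) by vectorizing the Lyapunov equation, then applying rules (2.7)–(2.8) along a forward-Euler integration — is sketched below. The two subsystems are illustrative (marginally stable, with a Hurwitz average), not taken from the thesis; NumPy assumed.

```python
import numpy as np

# Illustrative marginally stable subsystems whose average Ac is Hurwitz.
A = [np.array([[0.0, 1.0], [-1.0, 0.0]]),
     np.array([[-2.0, 0.0], [0.0, 0.0]])]
Ac = 0.5 * A[0] + 0.5 * A[1]

# Solve Ac^T P + P Ac = -I via Kronecker vectorization (row-major vec).
n = Ac.shape[0]
I = np.eye(n)
K = np.kron(Ac.T, I) + np.kron(I, Ac.T)
P = np.linalg.solve(K, -I.flatten()).reshape(n, n)

Q = [Ap.T @ P + P @ Ap for Ap in A]        # the matrices Q_p
r = [0.5, 0.5]                             # arbitrary r_p in (0, 1)

x = np.array([1.0, 1.0])
p = int(np.argmin([x @ Qp @ x for Qp in Q]))       # initial index
dt = 1e-3
for _ in range(int(50.0 / dt)):
    x = x + dt * (A[p] @ x)                        # active subsystem
    if x @ Q[p] @ x > -r[p] * (x @ x):             # condition (2.7)
        p = int(np.argmin([x @ Qp @ x for Qp in Q]))   # rule (2.8)
print(np.linalg.norm(x))                           # the law drives x to 0
```

Right after every switch the argmin guarantees x^T Q_p x ≤ −x^T x (Lemma 2.2 below), so V(x) = x^T P x decreases along the whole trajectory.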

Before formally proving that this is a stabilizing switching law, we introduce the following lemma:

Lemma 2.2. For each state x ≠ 0 there exists a subsystem p for which

x^T Qp x ≤ −x^T x.

Proof. As Ac is a Hurwitz matrix, according to the definition (2.6) of P,

x^T (Ac^T P + P Ac) x = x^T ( (Σ_p αp Ap^T) P + P (Σ_p αp Ap) ) x
= α1 x^T (A1^T P + P A1) x + ... + αm x^T (Am^T P + P Am) x
= α1 x^T Q1 x + ... + αm x^T Qm x = −x^T x < 0.

Since αp ≥ 0 for all p ∈ P and Σ_p αp = 1, this is a convex combination of the terms x^T Qp x, so the smallest of them must be at most −x^T x, and the lemma is proved.


Given such a switching law, we may wonder whether the switching system is well posed, that is, whether the set of jump times is finite on any finite interval.

Theorem 2.3. Under the above switching law, the switching system is well posed and asymptotically stable.

Proof. We first show that the switching law is acceptable, that is, that t_{k+1} > t_k. Let θ be any real number greater than 1, and write x_{k+1} := x(t_{k+1}). Consider first the case

||x(t)|| ≤ θ ||x_{k+1}||  ∀t ∈ [t_k, t_{k+1}],   (2.9)

and define

g(t) = x^T(t) (Q_p + I) x(t),  t ∈ [t_k, t_{k+1}],

where p = σ(t_k^+). From Lemma 2.2, being

x^T(t_k) Q_p x(t_k) ≤ −x^T(t_k) x(t_k),

we have

g(t_k) ≤ 0,   (2.10)

and, from (2.7),

g(t_{k+1}) ≥ (1 − r_p) x_{k+1}^T x_{k+1}.   (2.11)

Differentiating g we get

dg/dt = x^T(t) ( Ap^T (Q_p + I) + (Q_p + I) Ap ) x(t).

Denoting

η_p := || Ap^T (Q_p + I) + (Q_p + I) Ap ||,

and using (2.9), we have

|dg/dt| ≤ θ² η_p x_{k+1}^T x_{k+1}  ∀t ∈ [t_k, t_{k+1}].   (2.12)

According to (2.10) and (2.11),

( g(t_{k+1}) − g(t_k) ) / ( t_{k+1} − t_k ) ≥ (1 − r_p) x_{k+1}^T x_{k+1} / ( t_{k+1} − t_k ).

Recalling then (2.12),

(1 − r_p) x_{k+1}^T x_{k+1} / ( t_{k+1} − t_k ) ≤ θ² η_p x_{k+1}^T x_{k+1},

and, consequently,

t_{k+1} − t_k ≥ (1 − r_p) / ( θ² η_p ).

If, on the other hand, (2.9) does not hold, then

∃ t* ∈ [t_k, t_{k+1}) : ||x(t*)|| > θ ||x_{k+1}||.

As the system dynamics on this time interval is described by Ap,

x(t*) = exp( Ap (t* − t_{k+1}) ) x_{k+1}.

From the latter, and recalling that

|| exp( Ap (t* − t_{k+1}) ) || ≤ exp( ||Ap|| (t_{k+1} − t*) ),

it follows that

t_{k+1} − t_k ≥ t_{k+1} − t* > ln(θ) / ||Ap||.

Finally, gathering both cases, we can say that

t_{k+1} − t_k ≥ sup_{θ > 1} min_{p ∈ P} { (1 − r_p) / (θ² η_p), ln(θ) / ||Ap|| },

so the switching signal is valid, as the difference is always positive. Choosing then V(x) = x^T P x as a Lyapunov function, we get

dV/dt = x^T(t) Q_{σ(t)} x(t) ≤ −r_{σ(t)} x^T(t) x(t) ≤ −r x^T(t) x(t),

where

r := min{ r1, ..., rm },

and the theorem is proved.
We have presented two algorithms for making a switching system asymptotically stable. The first is based on a periodic sequence of switchings, while the second is akin to a Lyapunov technique. Both rely only on the assumption that there exists a Hurwitz convex combination of the matrices describing the subsystems. Moreover, the second requires knowledge of the state all along its evolution. For classical systems that information can be easily obtained at any moment by directly observing the value of the state. This is more difficult for quantum systems, as any measurement of the state influences its dynamics. In that case, switching instants must be determined offline from exact knowledge of the initial state, which is not always available. For these reasons, when applying switching techniques to quantum control, we will first consider the time-based algorithm, assuming that a suitable Hurwitz convex combination of the generators actually exists.
Chapter 3
Quantum dynamics

3.1 Introduction to quantum dynamics


One of the main tasks of a control engineer is to drive the behavior of physical quantities related to dynamical systems, in order to make them assume prescribed values while respecting given constraints. Typical examples are regulating the temperature of a room within a short time frame, or setting the speed of a rotor by varying its power supply. The first step toward those ends is always to build a mathematical model of the physical system. Such a model is generally intended as a set of equations describing the evolution of the relevant variables. For classical systems, knowledge of the initial values of those variables is a sufficient condition to determine their evolution and thus, if the model is accurate enough, to recover a valid estimate of the behavior of the related physical quantities.
For quantum systems this challenge is much more complex. In fact, according to quantum mechanics, all that is predictable is the probability of observing an outcome, rather than the outcome itself. This uncertainty is not due to a shortcoming of quantum theory that one could expect to overcome with further study. It is, on the contrary, an essential characteristic of quantum systems, which dramatically distinguishes them from their classical counterparts. In this particular framework a state should no longer be interpreted as the set of variables representing the attributes of a classical system, but as the object that contains the information about the probability of measuring an observable quantity.

3.1.1 Observables and states


In order to describe the evolution of quantum systems we need now to in-
troduce the essential mathematical framework within which our analysis will

17
18 CHAPTER 3. QUANTUM DYNAMICS

move. To any quantum system is associated a complex Hilbert space H ' CN


which we will from now on suppose to be finite-dimensional. This assumption
is not too restrictive for the objectives of this research, and will allow
us to use the powerful tools of linear algebra. In fact, the bounded self-adjoint
operators in B(H) we will deal with, like observables and density operators,
will mainly be represented as N × N complex matrices belonging to the set
M_N, relative to N-dimensional quantum systems. The set of complex-valued
matrices M_N can itself be regarded as a Hilbert space equipped with the
Hilbert-Schmidt scalar product:
⟨X, Y⟩_HS = tr[X†Y].     (3.1)

Definition 3.1. An observable is a Hermitian operator X ∈ B(H) that is
associated to a physical variable.

Due to its Hermiticity, X can be diagonalized by a unitary matrix U,
that is:

X = U X_d U†,

where X_d is a diagonal matrix with real eigenvalues λ_i ∈ R. Thus, by the
spectral theorem, X can be expressed as a sum of orthogonal projections Π_i:

X = Σ_i λ_i Π_i,

where Σ_i Π_i = I_N and I_N is the identity element of M_N. The eigenvalues
{λ_i} are the possible outcomes of a measurement, while the {Π_i} are called
projection-valued measures, as they allow to compute the probabilities of
quantum events, as we will soon show.
Definition 3.2. The most general expression of a state is a density operator,
that is a square matrix ρ ∈ M_N with the following additional properties:
• ρ = ρ† ≥ 0;
• tr[ρ] = 1;
• tr[ρ²] ≤ 1.
We will from now on denote the set of operators respecting those constraints
by D(H) ⊂ B(H). One important property of density matrices is that they
generate a convex set, that is, any convex combination of density matrices is
still a density matrix:

ρ = Σ_j c_j ρ_j,   Σ_j c_j = 1,   c_j ≥ 0.
3.1. INTRODUCTION TO QUANTUM DYNAMICS 19

If a density matrix ρ does not admit a non-trivial convex decomposition, then
the state is said to be pure. This means that it can be expressed as

ρ = |ψ⟩⟨ψ|,

where |ψ⟩ is a norm-1 state column vector in Dirac notation and ⟨ψ| is its
adjoint. An inner product can be defined between vectors that belong to a
Hilbert space H:

⟨ψ, φ⟩ = ⟨ψ|φ⟩.

If ρ is the outer product of a vector with itself, the state is exactly known
and ρ is a rank-one orthogonal projector. On the other side, if ρ can only be
expressed as a convex combination of pure states, that is:

ρ = Σ_j c_j |ψ_j⟩⟨ψ_j|,

then it is called a mixed state. This can be used to represent an ensemble of
identical systems prepared in different states, or a single system prepared with
classical uncertainty. In the two cases the coefficient c_j denotes, respectively,
the fraction of the population prepared in the j-th state and the probability
of the state being the j-th one.
The Hilbert-Schmidt scalar product allows to link the concepts of ob-
servable and state. In fact, given a system in the state ρ, the probability of
obtaining λ_i as the outcome of a measurement of the observable X is given
by:

p_i = tr[ρΠ_i].

Moreover, the expectation of the measurement of X can be easily derived:

E_ρ[X] = Σ_i p_i λ_i = Σ_i tr[ρΠ_i]λ_i = tr[ρ Σ_i λ_i Π_i] = tr[ρX].

According to the Schrödinger picture, the evolution of a quantum system is
entirely described by the evolution of its state. This means that while the
state is time-varying, following the equations that describe the dynamical
model, the observable is constant. Thus, the possible outcomes of a measure-
ment remain unchanged, while their probabilities vary as the state evolves.
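As a simple numerical illustration of these formulas, the following Python/NumPy sketch computes the outcome probabilities p_i = tr[ρΠ_i] and the expectation tr[ρX] for a qubit; the observable σ_z and the mixed state below are arbitrary example choices, not taken from the text:

```python
import numpy as np

# Observable X = sigma_z: eigenvalues -1 and +1, projectors onto |1> and |0>.
X = np.array([[1, 0], [0, -1]], dtype=complex)
eigvals, eigvecs = np.linalg.eigh(X)
projectors = [np.outer(eigvecs[:, k], eigvecs[:, k].conj()) for k in range(2)]

# An example mixed state rho: 3/4 prepared in |0><0|, 1/4 in |1><1|.
rho = np.diag([0.75, 0.25]).astype(complex)

# Outcome probabilities p_i = tr[rho Pi_i] and expectation E_rho[X] = tr[rho X].
probs = [np.trace(rho @ P).real for P in projectors]
expectation = np.trace(rho @ X).real
```

The probabilities sum to one and the expectation agrees with Σ_i p_i λ_i, as derived above.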

3.1.2 Closed and open quantum systems


The most basic example of quantum dynamics is given by systems that are
ideally isolated from their environment.

Definition 3.3. A closed quantum system is a system which does not
exchange information (energy or matter) with any other system.
A model for this kind of evolution was first postulated in 1926 by Erwin
Schrödinger. The Schrödinger equation for the state vector is the following
ODE:

ℏ|ψ̇⟩ = −iH|ψ⟩,     (3.2)

where H is the Hamiltonian operator of the system and ℏ is the reduced
Planck constant, which we will hereafter suppose equal to 1. As the Hamiltonian
is a Hermitian operator, the state vector undergoes a unitary evolution,

|ψ(t)⟩ = U(t)|ψ(0)⟩,     (3.3)
which preserves its norm. The dynamics of density operators can be easily
derived from (3.3):

ρ(t) = |ψ(t)⟩⟨ψ(t)| = U(t)|ψ(0)⟩⟨ψ(0)|U(t)† = U(t)ρ(0)U(t)†,     (3.4)

whose infinitesimal version is given by the quantum Liouville-von Neumann
equation:

ρ̇ = −i[H(t), ρ].     (3.5)
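Equation (3.4) can be checked numerically; a minimal Python sketch (the Hamiltonian and initial state are arbitrary example choices, and scipy is assumed available for the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

# Example qubit Hamiltonian (sigma_x); hbar = 1 as assumed in the text.
H = np.array([[0, 1], [1, 0]], dtype=complex)
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)   # pure state |0><0|

def evolve(rho, H, t):
    """Closed-system evolution rho(t) = U rho U^dagger with U = exp(-iHt)."""
    U = expm(-1j * H * t)
    return U @ rho @ U.conj().T

rho_t = evolve(rho0, H, t=0.7)
```

The evolved state stays Hermitian, keeps unit trace, and, being unitarily equivalent to a pure state, keeps purity tr[ρ²] = 1.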
If the system is non-conservative, and thus the Hamiltonian is time-dependent,
the propagator must be written as a time-ordered (Dyson) exponential:

U(t) = T exp( −i ∫_{t₀}^{t} H(s) ds ).     (3.6)
Definition 3.4. An open quantum system is a quantum system that in-
teracts with its environment.

In this case the joint Hilbert space H that supports the whole dynamics
can be factorized into the Hilbert space of the system of interest, H_S, and
that of the environment, H_E. The mathematical operation which combines
the two subsystems is the tensor product,

H = H_S ⊗ H_E.

When applied to matrices, such as density operators, tensor products can be
computed as Kronecker products. If we suppose the coupled system driven
by a Hamiltonian of the form

H_tot = H_S ⊗ I_E + I_S ⊗ H_E + H_SE,

where I denotes the identity on each subsystem, then the dynamics on H is
still unitary and it can be studied as we previously did for closed quantum
systems. Nevertheless, the object we want to observe is the evolution of the
reduced state related to S. Let us suppose, for example, the initial state of the
joint system to be
ρ(0) = ρS (0) ⊗ ρE (0).
Then its evolution is described by some unitary propagator in the following
way:

ρ(t) = U_SE(t) ρ(0) U_SE(t)†.     (3.7)

The dynamics of the reduced state ρ_S can be derived from the joint one using
the operation of partial trace, defined by:

tr_A[A ⊗ B] = B · tr[A].     (3.8)

For density operators this leads to

ρ_S(t) = tr_E[ρ(t)].     (3.9)
The evolutions thus obtained are in general non-Markovian, and for this rea-
son they are difficult to handle. We will show in the next section that, under
proper assumptions, these dynamics can be treated as Markovian, in such a
way to provide mathematical models that are easier to control.
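The partial trace (3.8) is easy to implement numerically by reshaping the joint density matrix into a four-index tensor; a Python/NumPy sketch (the example states are arbitrary, and the Kronecker ordering S ⊗ E is an assumption of this snippet):

```python
import numpy as np

def partial_trace_E(rho, dS, dE):
    """Trace out the environment from a joint state on H_S (x) H_E
    (Kronecker order: system first, environment second)."""
    return np.trace(rho.reshape(dS, dE, dS, dE), axis1=1, axis2=3)

# Sanity check on a product state: tr_E[rho_S (x) rho_E] = rho_S,
# since tr[rho_E] = 1.
rho_S = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
rho_E = np.diag([0.5, 0.5]).astype(complex)
recovered = partial_trace_E(np.kron(rho_S, rho_E), 2, 2)
```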

3.2 Markovian dynamics for open systems


3.2.1 Classical Markov semigroups
In order to highlight the importance of Markovianity assumptions for quan-
tum dynamics analysis, we first need to briefly recall what a stochastic process
is and which of its properties are useful for our aims.

Definition 3.5. Let (Ω, F, P) be a probability space. A family of random
variables {X(t); t ∈ T} defined on Ω is called a stochastic process. It is a
continuous-time process if T = [a, b], −∞ ≤ a < b ≤ +∞. It is a discrete-
time process if T ⊆ Z.
As we already mentioned, we are particularly interested in a specific class
of stochastic processes, that is, Markov processes.

Definition 3.6. A discrete Markov process is a memoryless stochastic
process, that is, one for which:

p_{t₁,...,tₙ}(xₙ | xₙ₋₁; ...; x₁) = p_{tₙ₋₁,tₙ}(xₙ | xₙ₋₁),     (3.10)

where t₁, ..., tₙ are picked from a countable set.

When the set of states X = {1, 2, ...} is countable too, this kind of process
is called a Markov chain. Let us denote, for simplicity,

p_ij := p(xₙ = j | xₙ₋₁ = i),     (3.11)

that is, the probability of stepping from state i to state j, supposing that
this probability does not depend on time. By the law of total probability,

p(xₙ = j) = Σ_{i∈X} p_ij p(xₙ₋₁ = i).     (3.12)

This transition law can be expressed in a compact form,

p(xₙ) = Pᵀ p(xₙ₋₁),

where

P = | p₀₀  p₀₁  ... |
    | p₁₀  p₁₁  ... |
    |  ⋮    ⋮    ⋱  |

is a stochastic transition matrix. A stochastic matrix is one which has only
nonnegative entries and whose rows sum to one. In this case all the entries
are probabilities, and each row sums to one as it represents the total proba-
bility of jumping to any state from a given one. A transition matrix is then a
linear map describing an evolution which preserves positivity and total
probability. Moreover, as we supposed that the transition matrix does not
depend on time, the semigroup property holds. Indeed,

p(xₙ₊₁) = Pᵀ p(xₙ) = Pᵀ Pᵀ p(xₙ₋₁) = (Pᵀ)² p(xₙ₋₁),

and by induction we get

p(xₙ₊ₘ) = (Pᵀ)ᵐ (Pᵀ)ⁿ p(x₀) = (Pᵀ)ⁿ⁺ᵐ p(x₀).     (3.13)
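The semigroup property (3.13) is immediate to verify numerically; a Python/NumPy sketch with an arbitrary 3-state example chain:

```python
import numpy as np

# A 3-state stochastic matrix P (rows sum to one); example values only.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
p0 = np.array([1.0, 0.0, 0.0])          # start deterministically in state 0

# k steps of the transition law p(x_n) = P^T p(x_{n-1}); the semigroup
# property says stepping n+m times equals stepping n and then m times.
step = lambda p, k: np.linalg.matrix_power(P.T, k) @ p
```

The iteration preserves nonnegativity and total probability, exactly as argued above for stochastic matrices.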
Let us consider now a time-homogeneous continuous-time stochastic pro-
cess x(t). Defining

X_s⁻ = {x(t), ∀t ≤ s},

the past of x(t) at time s, the Markov property can be expressed as follows:

p(x(t + s) = i | X_s⁻) = p(x(t + s) = i | x(s)).     (3.14)

Supposing that x(0) = i, we define T_i as the first instant at which the process
transitions away from i, that is

x(T_i) ≠ i;   x(t) = i, ∀t < T_i.

We can easily observe that

p(T_i > s + t | T_i > s) = p(x(r) = i, r ∈ [s, s + t] | x(r) = i, r ∈ [0, s])
                         = p(x(r) = i, r ∈ [s, s + t] | x(s) = i)
                         = p(x(r) = i, r ∈ [0, t] | x(0) = i)     (3.15)
                         = p(T_i > t).

The last equation defines the loss-of-memory property, which completely
characterizes exponential distributions. We can thus introduce a scalar λ(i),
which depends on the state i, and according to which

p(T_i > t) = e^{−λ(i)t}.

Such a parameter is fundamental, as it allows to derive a dynamical model
for the evolution of the transition probabilities, as we will now show. Let
us define

p_ij := p(x(T_i) = j | x(0) = i),

the probability of jumping from state i to state j, and

λ(i, j) := λ(i) p_ij.

If the time h is small enough, then

p(T_i < h) = λ(i)h + o(h),     (3.16)

and

p(x(h) = j | x(0) = i) = p(T_i < h, x(T_i) = j | x(0) = i) + o(h)
                       = λ(i)h p_ij + o(h)     (3.17)
                       = λ(i, j)h + o(h),

where o(h) in the latter equation represents the probability of seeing two or
more jumps in [0, h]. We are now able to derive the form of the generator of
a Markov dynamics, as introduced in [24]. Denoting

P_ij(t) = p(x(t) = j | x(0) = i),

and using the Markov property, (3.16) and (3.17), we get:

dP_ij(t)/dt = lim_{h→0} [P_ij(t + h) − P_ij(t)] / h
            = lim_{h→0} (1/h) [ p(x(t + h) = j | x(0) = i) − p(x(t) = j | x(0) = i) ]
            = lim_{h→0} (1/h) [ Σ_{y∈X} p(x(t + h) = j | x(t) = y, x(0) = i) p(x(t) = y | x(0) = i)
              − p(x(t) = j | x(0) = i) ]
            = lim_{h→0} (1/h) [ (1 − λ(j)h) P_ij(t) + Σ_{y≠j} λ(y, j)h P_iy(t) − P_ij(t) + o(h) ]
            = −λ(j) P_ij(t) + Σ_{y≠j} λ(y, j) P_iy(t),

which is the Kolmogorov forward equation. We will soon show that equivalent
equations exist for quantum dynamics. These equations will be the model on
which we will base the application of switching techniques.
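The forward equation above says that P(t) = exp(Qt), where Q collects the rates λ(y, j) off the diagonal and −λ(j) on it; a Python sketch with arbitrary example rates (scipy assumed available):

```python
import numpy as np
from scipy.linalg import expm

# Rate matrix Q for a 3-state chain: Q[i, j] = lambda(i, j) for j != i,
# Q[i, i] = -lambda(i), so every row sums to zero. Example values only.
Q = np.array([[-1.0,  0.6,  0.4],
              [ 0.5, -0.8,  0.3],
              [ 0.2,  0.7, -0.9]])

# P(t) = exp(Qt) solves the Kolmogorov forward equation dP/dt = P(t) Q
# with P(0) = I; each row of P(t) is a probability distribution.
P_t = expm(Q * 0.5)
```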

3.2.2 Quantum Markov dynamics


We want now to show what kind of dynamical map E is apt to describe the
evolution of a quantum state ρ_S between two given times t₀ and t₁:

E : ρ_S(t₀) −→ ρ_S(t₁).

Let us suppose the total initial quantum state to be decomposed in the following
way:
ρ(t0 ) = ρS (t0 ) ⊗ ρE (t0 ). (3.18)
The propagator of the reduced initial state can thus be derived from that of
the unitary complete one as

E_{t,t₀}(ρ_S(t₀)) = tr_E[U_SE(t)[ρ_S(t₀) ⊗ ρ_E(t₀)]U_SE(t)†]
                 = Σ_α K_α(t) ρ_S(t₀) K_α(t)†,     (3.19)

where the K_α are operators that depend only on the initial state of the environ-
ment. Indeed, let us suppose the initial state of the environment to be a pure
state. The propagator of the reduced initial state can again be derived as

E_{t,t₀}(ρ_S(t₀)) = tr_E[U_SE(t)[ρ_S(t₀) ⊗ |ψ⟩_E⟨ψ|]U_SE(t)†]
                 = Σ_α ⟨φ_α|U_SE(t)|ψ⟩_E ρ_S(t₀) ⟨ψ|U_SE(t)†|φ_α⟩_E,

where {|φ_α⟩_E} is a basis of H_E. Thus,

K_α(t) = ⟨φ_α|U_SE(t)|ψ⟩_E,

which is an operator acting on H_S. More precisely, supposing {|γ⟩_S ⊗ |φ⟩_E}
to be a basis of H_S ⊗ H_E, K_α is the operator whose matrix elements are

{K_α}_ij = (⟨γ_i|_S ⊗ ⟨φ_α|_E) U_SE (|γ_j⟩_S ⊗ |ψ⟩_E).

It is worth noting that there is no loss of generality in assuming the initial
state of the environment to be a pure state. Indeed, if it is not, an extra
system can be introduced to purify it. Let

ρ_E = Σ_α p_α |φ_α⟩_E⟨φ_α|

be any mixed state and R a system with the same state space as E, with or-
thonormal basis {|λ⟩_R}. By the Schmidt decomposition [17], a pure state in the
extended system can be defined as

|ξ⟩_ER = Σ_α √p_α |φ_α⟩_E ⊗ |λ_α⟩_R.

The reduced state in E can be recovered via the partial trace operation:

tr_R[|ξ⟩_ER⟨ξ|] = Σ_{αβ} √(p_α p_β) |φ_α⟩_E⟨φ_β| tr[|λ_α⟩_R⟨λ_β|]
               = Σ_{αβ} √(p_α p_β) |φ_α⟩_E⟨φ_β| δ_αβ
               = Σ_α p_α |φ_α⟩_E⟨φ_α|
               = ρ_E,

which is exactly the initial state of the environment.

Definition 3.7. A quantum channel is a positivity-preserving dynamical
map that describes a physical evolution for any initial state ρ_S(t₀), and can
thus be expressed through the Kraus decomposition

E_{t,t₀}(ρ_S(t₀)) = Σ_α K_α(t) ρ_S(t₀) K_α(t)†,     (3.20)

where

Σ_α K_α(t)† K_α(t) = I_S.     (3.21)

A quantum channel actually corresponds to the notion of transition ma-
trix for a Markov chain. The positivity-preserving property indicates that
any density operator must evolve into a density operator, that is, with
nonnegative eigenvalues. Moreover, property (3.21) is necessary for the
evolution to be trace-preserving. These features are exactly the same as
those of a stochastic matrix, whose entries must be nonnegative and which
must keep the norm of probability vectors unchanged.
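A concrete Kraus map illustrating (3.20)-(3.21) is the amplitude-damping channel, a textbook example not discussed in the text; a Python/NumPy sketch:

```python
import numpy as np

# Amplitude-damping channel with decay probability gamma (example value).
gamma = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
kraus = [K0, K1]

def channel(rho, kraus):
    """E(rho) = sum_a K_a rho K_a^dagger, the Kraus decomposition (3.20)."""
    return sum(K @ rho @ K.conj().T for K in kraus)

rho = np.array([[0.2, 0.1j], [-0.1j, 0.8]], dtype=complex)
out = channel(rho, kraus)
```

Condition (3.21) guarantees trace preservation, and the output remains a valid density operator (positive semidefinite, unit trace).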
By time continuity one expects that a dynamical map can be composed of
two consecutive dynamical maps, that is:

E_{t₂,t₀}(ρ) = E_{t₂,t₁} ∘ E_{t₁,t₀}(ρ).

That is true; but let us suppose for a moment the initial state of the joint
system to be a tensor product as in (3.18). Then, as we previously saw, the
reduced evolution until t₂ can be obtained from:

E_{t₂,t₀}(ρ_S(t₀)) = tr_E[U_SE(t₂)[ρ_S(t₀) ⊗ ρ_E(t₀)]U_SE(t₂)†],

and, in the same way, the evolution until t₁ is:

E_{t₁,t₀}(ρ_S(t₀)) = tr_E[U_SE(t₁)[ρ_S(t₀) ⊗ ρ_E(t₀)]U_SE(t₁)†].

Both of these are clearly quantum channels, but observing the following dy-
namics from t₁ to t₂,

E_{t₂,t₁}(ρ_S(t₁)) = tr_E[U_SE(t₂) ρ_SE(t₁) U_SE(t₂)†],

we notice that ρ_SE(t₁) is not necessarily factorizable as in (3.18). The last
equation may therefore not be that of a quantum channel, and thus not repre-
sent a physical evolution. This peculiarity is attributable to the irreversibility
of non-unitary propagators, which is a characteristic of open-system dynam-
ics.

The impossibility of expressing a quantum channel as a composition of in-
finitesimal quantum channels excludes, in general, the formulation of a quantum
dynamical model by means of differential equations, as is done for classical
systems. Nevertheless, in most practical situations a Markovian model that
allows such compositions may represent a good dynamical model.
Definition 3.8. A system undergoes a Markovian evolution if the follow-
ing composition law holds:

E_{t₂,t₀}(ρ) = E_{t₂,t₁} ∘ E_{t₁,t₀}(ρ).



This property is analogous to the semigroup property expressed in (3.13)
for classical evolutions, where conditional probabilities are interpreted as
dynamical maps. We can thus finally give the following definition:

Definition 3.9. A Quantum Dynamical Semigroup (QDS) is a family
of quantum channels for which:

1. E_t ∘ E_s = E_{t+s},

2. tr[E(ρ)] = tr[ρ], for each quantum state ρ,

3. E(ρ) ≥ 0, for each quantum state ρ.

Supposing Markovianity of the evolutions, we will now be able to build
simple differential models onto which effective control solutions can be
projected.

3.2.3 Markovian Master Equations and Lindblad form


Assuming that the Markov property holds, the system dynamics can be
described by a differential equation of the following form:

dρ(t)/dt = lim_{ε→0} [ρ(t + ε) − ρ(t)] / ε = lim_{ε→0} ( [E_{t+ε,t} − I] / ε ) ρ(t),     (3.22)

where the generator of the evolution is

L_t = lim_{ε→0} [E_{t+ε,t} − I] / ε.
The derivation of such a generator will proceed in the same way as for
Markovian classical dynamics with the Kolmogorov forward equation. The
most general expression for the generator of a QDS, that is, the equation
that makes Definition 3.9 hold at each instant of the evolution of the state,
is called a Markovian Master Equation (MME).
Theorem 3.1. The Markovian Master Equation of a time-independent gen-
erator can be written in the form

L(ρ(t)) = −i[H, ρ(t)] + Σ_{i,j=1}^{N²−1} a_ij ( F_i ρ(t) F_j† − ½{F_j† F_i, ρ(t)} ),     (3.23)

where H is self-adjoint, and the matrix A = (a_ij) is positive semidefinite.



Proof. Let us express the Kraus decomposition (3.20) with respect to an
orthonormal basis of N² operators {F_i}, one of which is proportional to
the identity, for example F_{N²} = (1/√N) I, while the others are traceless:

ρ(t) = E_{t,0}(ρ(0)) = Σ_{i,j=1}^{N²} c_ij(t) F_i ρ(0) F_j†.     (3.24)

In this case, thus,

⟨F_i, F_j⟩ = δ_ij,   c_ij(t) = Σ_α ⟨F_i, K_α(t)⟩⟨F_j, K_α(t)⟩†,     (3.25)

and the matrix [c_ij(t)] is Hermitian and positive for any t.


The form of the generator L can be obtained from (3.22), that is:

L(ρ(t)) = lim_{ε→0} [E_{t+ε,t}(ρ(t)) − ρ(t)] / ε
        = lim_{ε→0} { (1/N) [ (c_{N²N²}(ε) − N)/ε ] ρ(t)
          + (1/√N) Σ_{i=1}^{N²−1} [ (c_{iN²}(ε)/ε) F_i ρ(t) + (c_{N²i}(ε)/ε) ρ(t) F_i† ]
          + Σ_{i,j=1}^{N²−1} (c_ij(ε)/ε) F_i ρ(t) F_j† }.     (3.26)

We now define, for simplicity, the following quantities:

a_{N²N²} = lim_{ε→0} [c_{N²N²}(ε) − N] / ε,
a_{iN²} = lim_{ε→0} c_{iN²}(ε) / ε,   i = 1, ..., N²−1,
a_ij = lim_{ε→0} c_ij(ε) / ε,   i, j = 1, ..., N²−1,
F = (1/√N) Σ_{i=1}^{N²−1} a_{iN²} F_i,
G = (1/2N) a_{N²N²} I + ½ (F† + F),
H = (1/2i) (F† − F),

where, again, the matrix [a_ij] is Hermitian and positive and H is clearly self-
adjoint. With the help of these definitions, equation (3.26) can be written

as

L(ρ(t)) = −i[H, ρ(t)] + {G, ρ(t)} + Σ_{i,j=1}^{N²−1} a_ij F_i ρ(t) F_j†.     (3.27)

As the evolution of a density operator must be trace-preserving,

0 = tr[L(ρ(t))] = tr[ ( 2G + Σ_{i,j=1}^{N²−1} a_ij F_j† F_i ) ρ(t) ],     (3.28)

from which we deduce that

G = −½ Σ_{i,j=1}^{N²−1} a_ij F_j† F_i.     (3.29)

Finally, replacing the latter in (3.27), we get the standard form of the gener-
ator:

L(ρ(t)) = −i[H, ρ(t)] + Σ_{i,j=1}^{N²−1} a_ij ( F_i ρ(t) F_j† − ½{F_j† F_i, ρ(t)} ).     (3.30)

The set {F_i} can be picked, for example, as the set of N-dimensional
traceless extended Gell-Mann matrices, while the family of a_ij specifies the
dissipative part of the generator. It is worth noting that (3.23) is a linear
matrix ODE. This feature will be very useful in the rest of our work, as
switching-systems techniques are much more developed for linear systems.
Moreover, choosing a different basis, a MME can be put in a symmetrized
form called Lindblad form:

dρ(t)/dt = −i[H, ρ(t)] + Σ_k ( L_k ρ(t) L_k† − ½{L_k† L_k, ρ(t)} ),     (3.31)

where the L_k are noise operators. The Lindblad form is easier to handle and
will be adopted from now on.
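As a numerical illustration of (3.31), the following Python sketch integrates a qubit Lindblad equation by forward Euler; the Hamiltonian σ_z and the single noise operator L = √γ σ₋ (amplitude damping, γ = 0.5) are arbitrary example choices:

```python
import numpy as np

# Example qubit generator: H = sigma_z, one noise operator L = sqrt(0.5) sigma_-.
H = np.array([[1, 0], [0, -1]], dtype=complex)
L = np.sqrt(0.5) * np.array([[0, 1], [0, 0]], dtype=complex)

def lindblad(rho):
    """Right-hand side of (3.31) for a single noise operator."""
    comm = H @ rho - rho @ H
    anti = L.conj().T @ L @ rho + rho @ L.conj().T @ L
    return -1j * comm + L @ rho @ L.conj().T - 0.5 * anti

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # pure state |+><+|
dt = 1e-3
for _ in range(2000):   # integrate up to t = 2
    rho = rho + dt * lindblad(rho)
```

The trace and Hermiticity are preserved along the evolution, and the population relaxes toward the ground state, as expected for amplitude damping.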

3.3 Distances and norms for density operators


When dealing with problems of stabilizability or controllability of dynamical
systems, it is fundamental to have a measure that quantifies how close a
given state is to another. This happens, for example, when studying steady-
state properties or when trying to follow desired trajectories. Classical
dynamics are generally described by the evolution of real vectors, on which
distances can be measured by applying the well-known Euclidean norm. On
the other side, a quantum state is represented as a density matrix, for which
the familiar definition of distance is less intuitive.

Definition 3.10. A matrix norm || · || is a vector norm on M(H), that
is, for each A, B ∈ M:

1. ||A|| ≥ 0, and ||A|| = 0 iff A = 0;

2. ||αA|| = |α| ||A|| for each scalar α ∈ C;

3. ||A + B|| ≤ ||A|| + ||B||.

Among the most used matrix norms are the Schatten p-norms; for N-
dimensional square matrices:

||A||_p = ( Σ_{i=1}^{N} σ_i^p )^{1/p},

where {σ_i} is the set of singular values of A.


When p = 1, the Schatten norm is called the trace norm:

||A||_tr = tr[√(A†A)].

Moreover, if the matrix A is Hermitian, as happens for example for density op-
erators, the trace norm is the sum of the absolute values of its eigenvalues.
When p = 2, the Schatten norm is called the Frobenius norm:

||A||_Fr = √(tr[A†A]) = √(⟨A, A⟩_HS).

The Frobenius norm is also often called the Hilbert-Schmidt norm, as it is
directly associated with the Hilbert-Schmidt product defined in (3.1).
Although the Frobenius norm appears to be the most immediate definition
of a norm, as it corresponds to the Euclidean norm for vectors, we will mainly
make use of the trace norm, as important results related to quantum dynamics
are based on it.
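Both norms are direct functions of the singular values, which makes them one-liners to compute; a Python/NumPy sketch (the test matrix is an arbitrary example):

```python
import numpy as np

def schatten(A, p):
    """Schatten p-norm: the p-norm of the vector of singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

A = np.array([[0.6, 0.2], [0.2, 0.4]])   # Hermitian, positive definite
trace_norm = schatten(A, 1)
frobenius = schatten(A, 2)
```

For this positive matrix the singular values coincide with the eigenvalues, so the trace norm equals tr[A], and the p = 2 case matches NumPy's built-in Frobenius norm.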

3.3.1 Trace distance between quantum states


One of the most important tools provided by the trace norm is the trace
distance.

Definition 3.11. The trace distance between two quantum states ρ and σ
is:

D(ρ, σ) = ½ tr[|ρ − σ|],     (3.32)

where |A| = √(A†A) denotes the positive square root of A†A.

We introduce now a property of the trace distance which will allow us to
prove the main result of this section.

Lemma 3.1. The following equation holds:

D(ρ, σ) = max_P tr[P(ρ − σ)],     (3.33)

where the maximum is taken over the set {P ≤ I} of all projectors.

Proof. First of all, it is fundamental to show that the difference of two density
operators, ρ − σ, can be expressed as the difference between two positive
operators, Q − S, with orthogonal supports. In fact, by the spectral theorem,
as ρ − σ is still a Hermitian matrix,

ρ − σ = Σ_i λ_i |ψ_i⟩⟨ψ_i| − Σ_j |λ_j| |ψ_j⟩⟨ψ_j| = Q − S,

where {λ_i} is the set of positive eigenvalues, {λ_j} the set of negative eigen-
values, and {|ψ_{i,j}⟩} the set of the corresponding orthogonal eigenvectors.
Now, |ρ − σ| = Q + S, and consequently,

D(ρ, σ) = ½ tr[|ρ − σ|] = ½ tr[Q + S] = tr[Q],

because tr[Q − S] = tr[ρ − σ] = 0 implies tr[Q] = tr[S]. Choosing P as the
projector onto the support of Q, we get:

tr[P(ρ − σ)] = tr[P(Q − S)] = tr[Q] = D(ρ, σ).

Moreover, if P is any projector:

tr[P(ρ − σ)] = tr[P(Q − S)] ≤ tr[PQ] ≤ tr[Q] = D(ρ, σ),

which completes the proof.



We saw in the last sections that the evolution of density matrices repre-
senting quantum states must be driven by trace-preserving maps. Exploiting
this fact and Lemma 3.1, we can prove an interesting result related to the
stability of quantum dynamics.

Theorem 3.2. Suppose E is a QDS, and let ρ and σ be density operators.
Then

D(E(ρ), E(σ)) ≤ D(ρ, σ).     (3.34)

Proof. Using the spectral decomposition, ρ − σ = Q − S, where Q and S are
positive matrices with orthogonal supports. Let us choose the projector P
as that for which D(E(ρ), E(σ)) = tr[P(E(ρ) − E(σ))]. Remembering that
tr[Q] = tr[S], and thus tr[E(Q)] = tr[E(S)], we see that:

D(ρ, σ) = ½ tr[|ρ − σ|]
        = ½ tr[|Q − S|]
        = ½ tr[Q] + ½ tr[S]
        = ½ tr[E(Q)] + ½ tr[E(S)]
        = tr[E(Q)]
        ≥ tr[P E(Q)]
        ≥ tr[P(E(Q) − E(S))]
        = tr[P(E(ρ) − E(σ))]
        = D(E(ρ), E(σ)).

This theorem shows that the trace distance between two states that un-
dergo the same physical dynamics cannot increase. Even if we have not yet
given a formal definition of stability for matrix dynamics, this means that
there cannot exist unstable steady states with respect to the metric induced
by the trace norm. Unfortunately, this result generally does not hold for
other norms, such as the Frobenius norm.
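The contraction property (3.34) can be checked numerically; a Python sketch using a depolarizing channel (an arbitrary example channel, not from the text) on two example qubit states:

```python
import numpy as np

def trace_distance(rho, sigma):
    """D(rho, sigma) = (1/2) * sum of singular values of rho - sigma (3.32)."""
    return 0.5 * np.linalg.svd(rho - sigma, compute_uv=False).sum()

def depolarize(rho, p):
    """A simple quantum channel: mix rho with the maximally mixed state."""
    return (1 - p) * rho + p * np.eye(2) / 2

rho = np.array([[1, 0], [0, 0]], dtype=complex)           # |0><0|
sigma = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|
d_before = trace_distance(rho, sigma)
d_after = trace_distance(depolarize(rho, 0.4), depolarize(sigma, 0.4))
```

For this affine channel the contraction is exact: E(ρ) − E(σ) = (1 − p)(ρ − σ), so the distance shrinks by the factor 1 − p.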

3.4 Coherence-vector formulation


We have been dealing so far with density operators represented by com-
plex Hermitian matrices. Moreover, under proper assumptions, we derived
dynamical models for quantum states expressed as matrix ODEs. Such equa-
tions are quite difficult to handle, as most of the powerful tools of system
theory, such as stability analysis and control techniques, were basically devel-
oped for the evolution of real vectors. Fortunately, there exist methods to
transform density matrices into vectors, and then to describe their dynamics
with linear maps. We will therefore show a handy procedure to derive this
kind of maps, and then use them for applying the switching-systems techniques
that will be introduced in the next chapter.

3.4.1 Quantum states as real vectors


We described Hermitian operators as square matrices over the complex field.
According to their definition, these operators form a vector space over the real
field:

∀a, b ∈ R, ∀X, Y ∈ B(H): aX + bY ∈ B(H).

With respect to a given basis, any Hermitian matrix can be univocally identi-
fied with the vector of coefficients relative to the elements that compose that
basis. Thus, if {F_i} is any set of N² matrices that span the real space of
complex N × N Hermitian matrices, then:

H = α_0 F_0 + α_1 F_1 + ... + α_{N²−1} F_{N²−1}, ∀H ∈ B(H),

and we obtain the following bijective correspondence:

H ←→ v_H = [α_0 α_1 ... α_{N²−1}]ᵀ.

The value of the coefficients {α_i} is the usual scalar product between H and
the elements of {F_i}:

α_i = ⟨H, F_i⟩_HS = tr[H† F_i].

When choosing a basis for D(H), that is, for matrices which all have the same
trace, one can pick the identity and a set of N² − 1 orthonormal traceless
matrices:

H = (1/N) I_N + Σ_{i=1}^{N²−1} α_i F_i.

This way, the first coefficient is the same for all density operators, and
can then be disregarded when studying the vector dynamics. The other N² − 1
coefficients completely describe the evolution of the state and compose the
so-called coherence vector.

Example 3.1. Coherence vector for 2-level states

We show here how to obtain the vector form of a 2-level density operator.
The most typical basis for this kind of system is given by the set of Pauli
matrices, usually denoted as:

σ_x = | 0  1 |,   σ_y = | 0  −i |,   σ_z = | 1   0 |,
      | 1  0 |          | i   0 |          | 0  −1 |

to which the identity, σ_0 = I_2, must be added. One can easily check that,
opportunely scaled, the Pauli matrices form an orthonormal basis:

⟨(1/√2)σ_i, (1/√2)σ_j⟩_HS = δ_ij,

where δ_ij is the Kronecker delta. Given any state ρ, we can easily decom-
pose it:

ρ = v_0 (1/√2)σ_0 + v_x (1/√2)σ_x + v_y (1/√2)σ_y + v_z (1/√2)σ_z.

The values of the v_i can be derived in the following way:

v_i = ⟨ρ, (1/√2)σ_i⟩_HS = (1/√2) tr[ρσ_i],

and the vector becomes

v_ρ = (1/√2) [1  x  y  z]ᵀ,

where x, y, z = tr[σ_{x,y,z} ρ]. This representation corresponds to the density
operator

ρ = ½ | 1 + z   x − iy |,
      | x + iy  1 − z  |

which, as any density operator, must be positive semidefinite. This means
that the following relations hold:

• 1 − z² ≥ 0, i.e. −1 ≤ z ≤ 1;

• (1 + z)(1 − z) − (x − iy)(x + iy) ≥ 0 −→ x² + y² + z² ≤ 1.

Thus, the free part of v_ρ belongs to the unit ball of R³, whose boundary is
often called the Bloch sphere.
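The decomposition of Example 3.1 is easy to implement; a Python/NumPy sketch mapping a density matrix to its Bloch components and back (the test state is an arbitrary example):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch(rho):
    """Bloch components (x, y, z) with x = tr[sigma_x rho], etc."""
    return np.array([np.trace(s @ rho).real for s in (sx, sy, sz)])

def from_bloch(x, y, z):
    """Reconstruct rho = (I + x sx + y sy + z sz) / 2."""
    return (np.eye(2) + x * sx + y * sy + z * sz) / 2

rho = np.array([[0.8, 0.1 - 0.2j], [0.1 + 0.2j, 0.2]], dtype=complex)
x, y, z = bloch(rho)
```

The round trip recovers ρ exactly, and the Bloch vector lies inside the unit ball, consistent with positive semidefiniteness.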

3.4.2 Linear and affine maps for vectorized dynamics


Given quantum states expressed in a vectorized representation, we must now
find the linear map L̂ which describes the same evolution as L(ρ). With
respect to the basis {F_i}, the density matrix ρ corresponds to the vector v_ρ
according to the following relation:

v_ρ = [α_0^ρ  α_1^ρ  ...  α_{N²−1}^ρ]ᵀ = [tr[ρF_0]  tr[ρF_1]  ...  tr[ρF_{N²−1}]]ᵀ.

As a linear map of ρ, L(ρ) admits a vectorized representation with respect
to the same basis:

v_L = [α_0^L  α_1^L  ...  α_{N²−1}^L]ᵀ.

Exploiting the linearity of the trace, we thus get:

L(ρ) = Σ_i α_i^L F_i
     = Σ_i tr[L(ρ)F_i] F_i
     = Σ_i tr[ L( Σ_j α_j^ρ F_j ) F_i ] F_i
     = Σ_i tr[ ( Σ_j α_j^ρ L(F_j) ) F_i ] F_i
     = Σ_i ( Σ_j α_j^ρ tr[L(F_j)F_i] ) F_i.

In matrix form the latter is equal to

v̇_ρ = L̂ v_ρ = | tr[L(F_0)F_0]        ...  tr[L(F_{N²−1})F_0]        |  | α_0^ρ       |
              | tr[L(F_0)F_1]        ...  tr[L(F_{N²−1})F_1]        |  |  ⋮          | .   (3.35)
              |       ⋮               ⋱          ⋮                  |  | α_{N²−1}^ρ  |
              | tr[L(F_0)F_{N²−1}]   ...  tr[L(F_{N²−1})F_{N²−1}]   |

If we suppose the first element of the basis to be the identity, then the first
row of the matrix we have just derived must be equal to zero, as α_0^ρ = 1 is
a constant of motion. Thus the map can be expressed as the following block
matrix:

L̂ = | 0  0 ... 0 |
    | b     A    |.

The model which defines the evolution of the reduced state v_r^ρ, obtained from
v_ρ by eliminating the first constant coefficient, is an affine equation:

v̇_r^ρ = A v_r^ρ + b.

If b = 0, the identity is a steady state for L(ρ) and the dynamics is said to be
unital. On the other side, if b ≠ 0, the generator is non-unital.
This kind of linear generator, which describes the evolution of vectorized
quantum states, will be the object of our study in the next chapter. After
having introduced switching-systems theory, we will apply it to sets of gener-
ators L̂_p, and show how switching techniques can be efficiently exploited to
control quantum systems.
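The matrix elements tr[L(F_j)F_i] of (3.35) can be computed directly; a Python sketch building L̂ for an example qubit Lindblad generator in the normalized Pauli basis (the Hamiltonian and noise operator are arbitrary choices) and verifying it reproduces the vectorized dynamics:

```python
import numpy as np

# Orthonormal Hermitian basis {F_i} for 2x2 matrices (normalized Paulis).
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [s / np.sqrt(2) for s in (s0, sx, sy, sz)]

H = sz                                          # example Hamiltonian
L = np.array([[0, 1], [0, 0]], dtype=complex)   # example noise operator

def lindblad(rho):
    anti = L.conj().T @ L @ rho + rho @ L.conj().T @ L
    return -1j * (H @ rho - rho @ H) + L @ rho @ L.conj().T - 0.5 * anti

# Matrix elements (L_hat)_{ij} = tr[L(F_j) F_i] as in (3.35).
Lhat = np.array([[np.trace(lindblad(Fj) @ Fi).real for Fj in basis]
                 for Fi in basis])

# Consistency check: L_hat v_rho equals the vectorization of L(rho).
rho = np.array([[0.7, 0.2 + 0.1j], [0.2 - 0.1j, 0.3]], dtype=complex)
v_rho = np.array([np.trace(rho @ F).real for F in basis])
v_dot = Lhat @ v_rho
v_direct = np.array([np.trace(lindblad(rho) @ F).real for F in basis])
```

The first row of L̂ vanishes, reflecting trace preservation, in agreement with the block structure derived above.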
Chapter 4
Switching control of quantum dynamics

In this chapter we will show how to control a quantum state by using switch-
ing-systems techniques. In particular, we will prove that, under certain con-
ditions, a quantum state can be led to a stable equilibrium, even if the Lindblad
generators of its dynamics are only marginally stable and share that same
equilibrium point. In the next sections the main part of our work will be de-
scribed. First, a description of the problem and a definition of our aims will
be given. After that we will provide a solution of the problem for a particular
kind of dynamics, followed by a more complete and exhaustive analysis of
the general case. Finally, we will show how to make attractive an invariant
subspace of density operators shared by all the generators.

4.1 Stability for Markovian Master Equations


As we saw in the previous sections, a quantum state can be represented as a
density operator, that is, a square N × N complex matrix ρ such that

1) ρ = ρ† ≥ 0;   2) tr(ρ) = 1;   3) tr(ρ²) ≤ 1.

Its dynamics in an open quantum system is described by the Markovian
Master Equation

dρ(t)/dt = L(ρ(t)) = −i[H, ρ(t)] + L_D(ρ(t)),     (4.1)

where H is the Hamiltonian of the open system and L_D describes the dissi-
pative part of the generator. A set of density operators S is invariant if

ρ(t₀) ∈ S −→ ρ(t) ∈ S, ∀t ≥ t₀.


Defining the distance of a state from a set as

D(ρ, S) := inf_{σ∈S} D(ρ, σ),

where the distance between two states is defined in (3.32), we say that a set
S is marginally stable if it is invariant and

∀ε > 0, ∃δ > 0 such that D(ρ(t₀), S) ≤ δ −→ D(E_t(ρ(t₀)), S) ≤ ε, ∀t ≥ t₀.

A set S is globally asymptotically stable if it is marginally stable and

∀ρ(t₀): lim_{t→+∞} D(E_t(ρ(t₀)), S) = 0.

The Lindblad equation is a linear matrix ODE, and can therefore be vector-
ized in such a way to obtain the following linear equation:

dv(t)/dt = L̂ v(t),     (4.2)

where v(t) is an N² × 1 column vector univocally associated with the den-
sity operator ρ(t). The form of the matrix L̂ depends on the way the quantum
system interacts with its environment. In particular, as quantum channels are
contractions in the trace norm (see Theorem 3.2), its eigenvalues lie on the
imaginary axis if there is no dissipative part in the MME, while some of
them are in the left complex half-plane if L_D ≠ 0. Moreover, it can be proved
[19] that if a set of density operators is invariant, then it is marginally stable.

As explained in Section 3.4, the vectorization of a linear matrix ODE
can be obtained in several ways, for example by stacking the columns of the
matrices on top of one another. Nevertheless, in the case of density operators,
the Hermiticity and constant-trace properties may be used to create a smarter
representation. By storing the trace value in the first element of the vector
v(t), we obtain a representation whose first element is constant, and whose
first component's derivative is therefore equal to zero:
  
v(t) = | 1      |
       | v_r(t) |,   ∀t ≥ 0.
Consequently, the first row of the matrix L̂ must also be equal to zero,

L̂ = | 0  0 ... 0 |
    | b     A    |,     (4.3)

and the linear system (4.2) can be reduced to the affine form

v̇_r(t) = A v_r(t) + b.     (4.4)

In this case, if the matrix A is asymptotically stable, and therefore invertible,
the unique steady state is

v̄_r = −A⁻¹ b,     (4.5)

and it can be proved [13] that it is globally asymptotically stable for (4.4). On
the other side, if A is not asymptotically stable, there may exist more than
one marginally stable steady state.
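A quick numerical check of (4.4)-(4.5) on an arbitrary Hurwitz example (a Python sketch; the matrices are illustrative, not a physical generator):

```python
import numpy as np

# Reduced affine dynamics v_r' = A v_r + b with A Hurwitz (example values).
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
b = np.array([0.3, 0.4])
v_bar = -np.linalg.solve(A, b)   # unique steady state (4.5)

# Forward-Euler integration from an arbitrary initial condition: the
# trajectory converges to v_bar, illustrating global asymptotic stability.
v = np.array([1.0, -1.0])
for _ in range(5000):            # integrate up to t = 5 with dt = 1e-3
    v = v + 1e-3 * (A @ v + b)
```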

4.2 Denition of the problem


Let us suppose now to be able to switch between m different environment
configurations. This means that, for each of them, the quantum system
evolution is described by

dv(t)/dt = L̂_p v(t),   p = 1, ..., m.     (4.6)

Our aim is to show that, if each of these systems shares one equilibrium point
with the others, it is possible, under certain conditions, to control any state
to that equilibrium by appropriately switching between them, although none
of them is asymptotically stable.

It is worth noting that finding a stabilizing switching law is possible only
if there exists exactly one shared steady state. Indeed, if there existed more
than one, it would be impossible to go from one to another, and none would
be asymptotically stable.
Even if there does not exist much literature on stabilizing quantum sys-
tems with switching techniques, their application in the quantum eld is
particularly interesting. Indeed, as quantum states cannot be directly ob-
served without disturbing them, time-based switching laws allow to avoid
issues created by the measurement procedures. Moreover, the control strat-
egy we will present apply indiscriminately to pure and mixed states and to
any kind of Lindblad generators. We will then begin from the particular case
of unital generators, and then extend it to general of Lindblad ones.

4.3 Special case: unital generators


Let us suppose for now that, for each generator L_p,

    b_p = [ 0  ...  0 ]^T,    p = 1, ..., m;

and that each of them shares the same equilibrium point v̄. The matrices L̂_p are
then block diagonal:

           [ 0   0  ...  0 ]
    L̂_p =  [ 0             ]
           [ ⋮       A_p    ] ,    p = 1, ..., m.
           [ 0             ]

Moreover,

    v̄ = [ 1  0  ...  0 ]^T
is clearly a common steady state for systems of this kind. From a physical
point of view these dynamics are called unital, as the identity (suitably
scaled to be a valid density operator) is a fixed point for each of them.
Stabilizing the state of a system to the identity means generating a completely
random state. Since the matrices are block diagonal, the corresponding
dynamics can be easily described by exponentiating the blocks on the diagonal,
that is

    [ 1      ]   [ 1     0   ...  0 ] [ 1      ]
    [ v_r(t) ] = [ 0               ] [ v_r(0) ] .          (4.7)
                 [ ⋮    e^{A_p t}  ]
                 [ 0               ]
In order to guarantee the asymptotic stability of the switching system we
have to suppose that there exists a Hurwitz convex combination A_c of the
different A_p, that is, that Assumption 2.1 holds. Choosing a period ε and
dwelling on each system A_p for a time α_p ε proportional to its coefficient,
the evolution of the reduced state v_r can be described as

    v_r(t) = e^{A_p (t-t_p)} e^{α_{p-1} A_{p-1} ε} ··· e^{α_1 A_1 ε} v_r(t_0),    for all t ∈ [t_p, t_{p+1}].    (4.8)

At the end of each switching period ε the evolution expressed in (4.8) can
also be written as

    v_r(kε) = e^{(A_c + ε Υ_c) kε} v_r(0),    k = 0, 1, 2, ...      (4.9)

If ε is small enough,

    Ā := A_c + ε Υ_c

is still Hurwitz and

    v_r(t) → [ 0  ...  0 ]^T    as t → +∞.

Finally, remembering that the first component of v(t) is constant,

    v(t) → [ 1  0  ...  0 ]^T    as t → +∞.

Obviously, from the vectorized steady state v̄ it is possible to derive the
equivalent density operator ρ̄ to which the corresponding switching MME converges.

Proposition 4.1. Suppose we dispose of a set of m unital generators
and that Assumption 2.1 holds. Then, periodically applying to any initial
state each generator L̂_p for a small enough time proportional to its
coefficient α_p, the switching system asymptotically converges to the completely
mixed state ρ = (1/N) I.
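The periodic switching law behind (4.8)-(4.9) can be sketched numerically. The two 2×2 blocks below are illustrative toy data (not from the thesis): neither is Hurwitz on its own, but their equal-weight convex combination is, so fast periodic switching drives the reduced state to the origin:

```python
import numpy as np
from scipy.linalg import expm

# Two marginally stable reduced blocks (illustrative toy data):
A1 = np.array([[0.0, 1.0], [-1.0, 0.0]])   # purely rotational, eigenvalues ±i
A2 = np.array([[-1.0, 0.0], [0.0, 0.0]])   # damps only the first component

# Assumption 2.1: a Hurwitz convex combination exists (here alpha_1 = alpha_2 = 1/2).
Ac = 0.5 * A1 + 0.5 * A2
assert np.all(np.linalg.eigvals(Ac).real < 0)

# One switching period of length eps: dwell alpha_p * eps on each generator.
eps = 0.05
period = expm(0.5 * eps * A2) @ expm(0.5 * eps * A1)

# Iterating the period, the reduced state converges to the origin,
# i.e. the full state converges to the completely mixed state.
v_r = np.array([1.0, 1.0])
for _ in range(2000):
    v_r = period @ v_r
print(np.linalg.norm(v_r))  # decays toward 0
```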

4.3.1 Symmetric unital generators


We have just shown that the existence of a Hurwitz convex combination
of the generators L̂_p is enough to prove the stabilizability of the completely
mixed state. Actually, Assumption 2.1 is not necessary if all the
generators L̂_p are unital, symmetric and share only one steady state. In this
case, it is sufficient to suppose that each of them is employed often enough, for
example by periodically repeating a sequence that involves all the generators
or by selecting them randomly. If such matrices are marginally stable, as
we suppose, there exists a change of basis that makes them describe the
evolution of vectors whose norm can never increase, and switching between
them must lead to the shared steady state. We now formally prove this
conjecture, summed up in the following proposition.
Proposition 4.2. Consider m unital, symmetric, marginally stable
generators L̂_p which share only one steady state v̄. Any switching law that
employs all of them often enough is stabilizing.

Proof. According to the hypotheses, we are dealing with generators which
all have the following form:

           [ 0   0  ...  0 ]
    L̂_p =  [ 0             ]
           [ ⋮       A_p    ] ,    A_p = A_p^T,    p = 1, ..., m.
           [ 0             ]

As the shared steady state is unique, it must be the one whose coherent part is

    v̄_r = [ 0  ...  0 ]^T.

Being symmetric, each matrix A_p is orthogonally diagonalizable:

    A_p^d = U_p^T A_p U_p,

where U_p is an orthogonal matrix. Moreover, by marginal stability, A_p^d has
real negative or null eigenvalues on its diagonal. Let us take the Lyapunov
function

    V = (1/2) v_r^T v_r,

which is positive definite. Clearly this Lyapunov function can also be
expressed as

    V = (1/2) (ṽ_r^T U_p^T)(U_p ṽ_r) = (1/2) ṽ_r^T ṽ_r,

where we denote by ṽ_r the vector v_r in the orthogonal basis of the system p
which drives its evolution. The derivative of the Lyapunov function is

    V̇ = ṽ_r^T (∂ṽ_r/∂t) = ṽ_r^T A_p^d ṽ_r ≤ 0,

as A_p^d is negative semidefinite. If ṽ_r belongs to the eigenspace relative to an
eigenvalue equal to zero, then the vector remains unchanged. If it does not,
and v_r is not the origin, then its norm decreases. Clearly, if all the generators
are periodically employed, or if they are randomly selected, for each ṽ_r ≠ 0
there exists at least one generator for which

    V̇ = ṽ_r^T A_p^d ṽ_r < 0.

Indeed, if there were not, there would exist a state for which

    ∂ṽ_r/∂t = A_p^d ṽ_r = 0,    p = 1, 2, ..., m.

That is impossible, as the origin is supposed to be the only shared steady
state. Thus the norm of the reduced vector ṽ_r keeps decreasing, and the
density operator ρ̄ = (1/N) I_N is asymptotically stable for the switching
dynamics generated by the Lindblad generators L_p that correspond to the L̂_p.

4.4 General case: non-unital generators


Non-unital generators are clearly those for which the identity is not a steady
state. When dealing with vectorized evolutions they assume the form of
block triangular matrices, as expressed in (4.3). These cases, where the
reduced dynamics is described by an affine model, are slightly more complex.
Nevertheless, we will show that, without making any supplementary assumption
with respect to the case discussed in the last section, we can make any
shared fixed point asymptotically stable by switching between the generators.
The dynamics of each reduced vectorized equation can thus be expressed
as

    v̇_r(t) = A_p v_r(t) + b_p,    p = 1, ..., m.    (4.10)

We will suppose from now on that there exists a common steady state v̄_r for
all the generators and that none of the matrices A_p is Hurwitz. In this case we
cannot directly use the theory of linear switching systems, because equations
like (4.10) have an affine form. Nevertheless, the following proposition will
allow us to overcome this problem.
Proposition 4.3. If all the generators L̂_p share the same steady state v̄,
then there exists a change of basis matrix

            [ 1     0  ...  0 ]
    T⁻¹  =  [ T_Q      T_R    ]                     (4.11)

that removes the affine component b_p in (4.10), that is, that makes all the
generator matrices L̂_p^d block diagonal:

            [ 0   0  ...  0 ]
            [ 0             ]           [ 0     0  ...  0 ]
    L̂_p^d = [ ⋮       Ã_p    ]  = T⁻¹   [ b_p      A_p    ] T,    p = 1, ..., m.    (4.12)
            [ 0             ]

For T to be such a matrix, defining as usual v̄_r as the coherent part of v̄,
the following constraint must be respected:

    T_Q + T_R v̄_r = [ 0  ...  0 ]^T.

Proof. Let us suppose the block structure of a matrix T⁻¹ to be

           [ T_S   T_P ]
    T⁻¹ =  [ T_Q   T_R ] ,                          (4.13)

where T_S ∈ R, T_P ∈ R^{1×(N²-1)}, T_Q ∈ R^{(N²-1)×1}, T_R ∈ R^{(N²-1)×(N²-1)}. For
that matrix to be the requested change of basis, the steady state v̄ must be
mapped to a steady state of all the block diagonalized generators L̂_p^d. As the
only equilibrium point common to all the possible generators L̂_p^d is the vector
that corresponds to the completely mixed state, the change of basis must be
such that

    [ 1 ]    [ T_S   T_P ] [ 1   ]
    [ 0 ]  = [ T_Q   T_R ] [ v̄_r ] ,                (4.14)
    [ ⋮ ]
    [ 0 ]

and thus the following equations must hold:

    • T_S + T_P v̄_r = 1;

    • T_Q + T_R v̄_r = [ 0  ...  0 ]^T.
Moreover, from (4.12) we have

    T⁻¹ L̂_p = L̂_p^d T⁻¹.

Developing the last equation we get

    [ T_S   T_P ] [ 0      0  ...  0 ]   [ T_P b_p   T_P A_p ]
    [ T_Q   T_R ] [ b_p       A_p    ] = [ T_R b_p   T_R A_p ] ,      (4.15)

and

    [ 0   0  ...  0 ]
    [ 0             ] [ T_S   T_P ]   [ 0          0  ...  0 ]
    [ ⋮       Ã_p    ] [ T_Q   T_R ] = [ Ã_p T_Q     Ã_p T_R  ] .      (4.16)
    [ 0             ]

As (4.15) and (4.16) must give the same result, comparing the second block
rows (the first rows coincide with the choice of T_P made below), and using
Ã_p = T_R A_p T_R⁻¹ from the equality of the bottom-right blocks:

    T_R b_p = Ã_p T_Q,
    T_R b_p = T_R A_p T_R⁻¹ T_Q,
    b_p - A_p T_R⁻¹ T_Q = [ 0  ...  0 ]^T.

If from the latter we define the fixed point

    v̄_r = -T_R⁻¹ T_Q,

then we obtain

    T_Q + T_R v̄_r = [ 0  ...  0 ]^T,

which is a condition we had already imposed. Choosing for simplicity

    • T_S = 1,

    • T_P = [ 0  ...  0 ],

we finally get

           [ 1     0  ...  0 ]
    T⁻¹ =  [ T_Q      T_R    ] ,                    (4.17)

which is indeed a change of basis matrix.
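The construction of Proposition 4.3 can be verified numerically. In the sketch below the matrix A and the fixed point v̄_r are illustrative toy data; b is chosen so that the steady state is shared, and T_R = I, T_Q = -v̄_r is the simplest choice satisfying the constraint T_Q + T_R v̄_r = 0:

```python
import numpy as np

# Toy non-unital generator in vectorized form (illustrative data):
#   L_hat = [[0, 0], [b, A]],  with fixed point v_bar_r satisfying A v_bar_r + b = 0.
A = np.array([[0.0, 1.0], [-1.0, -1.0]])
v_bar_r = np.array([0.5, -0.2])
b = -A @ v_bar_r                        # enforce the shared steady state

L_hat = np.zeros((3, 3))
L_hat[1:, 0] = b
L_hat[1:, 1:] = A

# Change of basis of Proposition 4.3 with the simple choice T_R = I, T_Q = -v_bar_r,
# which satisfies the constraint T_Q + T_R v_bar_r = 0:
T_inv = np.eye(3)
T_inv[1:, 0] = -v_bar_r
T = np.linalg.inv(T_inv)

L_d = T_inv @ L_hat @ T

# The affine component is removed: L_d is block diagonal
# (zero first column below the (0,0) entry).
print(np.round(L_d, 10))
```

With T_R = I the reduced block is unchanged (Ã = A), so only the affine part is transformed away.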

If Assumption 2.1 holds, then the steady state v̄ can be made asymptotically
stable for the switching system. Indeed, periodically switching between
generators as described in Section 4.3, the evolution of the state is given by

    v(t) = e^{L̂_p (t-t_p)} e^{α_{p-1} L̂_{p-1} ε} ··· e^{α_1 L̂_1 ε} v(0)
         = T e^{L̂_p^d (t-t_p)} T⁻¹ T e^{α_{p-1} L̂_{p-1}^d ε} T⁻¹ ··· T e^{α_1 L̂_1^d ε} T⁻¹ v(0)      (4.18)
         = T e^{L̂_p^d (t-t_p)} e^{α_{p-1} L̂_{p-1}^d ε} ··· e^{α_1 L̂_1^d ε} T⁻¹ v(0).

At the end of each period ε, in the new basis the reduced state evolves as

    ṽ_r(kε) = e^{(Ã_c + ε Υ_c) kε} ṽ_r(0),    k = 0, 1, 2, ...     (4.19)

and if ε is small enough

    v(t) → v̄    as t → +∞.
It is finally worth noting that the considerations made in Section 4.3.1 for
unital generators are fully extendable to the case of non-unital generators.
That is, if after the change of basis all the generators are described by
symmetric matrices Ã_p, then any switching law is stabilizing.

Proposition 4.4. Suppose we dispose of a set of m non-unital generators
with a unique shared steady state ρ̄ and that Assumption 2.1 holds. Then,
periodically applying to any initial state each generator L_p for a small
enough time proportional to its coefficient α_p, the switching system
asymptotically converges to ρ̄.

4.5 Examples of stabilization by switching generators

We now show two simple numerical examples of stabilization of quantum
systems with switching techniques. While the first one concerns the
stabilization of a pure steady state, the second proves the applicability of this
control strategy to a generic mixed state.
Example 4.1. Stabilization of a pure steady state.
Let us suppose we dispose of two marginally stable generators. The first
describes a unitary evolution driven by the Hamiltonian

        [ 0  0  0 ]
    H = [ 0  0  1 ] ,
        [ 0  1  0 ]

while the second describes a dissipative evolution driven by the single
Lindblad operator

        [ 0  1  0 ]
    L = [ 0  0  0 ] .
        [ 0  0  1 ]

These two generators share the steady state

        [ 1  0  0 ]
    ρ̄ = [ 0  0  0 ] ,
        [ 0  0  0 ]

as can be clearly seen by replacing the values of H and L in (3.31). One
possible choice of orthonormal basis for 3-level systems is given by the set of
Gell-Mann matrices, which represent the natural extension of the Pauli
matrices for 2-level systems. We will refer to them as {σ_i}_{i=0,...,8}, where
again σ_0 = (1/√3) I, while σ_i, i = 1, ..., 8, are orthonormal traceless
Hermitian matrices.
The vectorized formulation v̄ of the steady state ρ̄ can be easily derived by
the Hilbert-Schmidt product between ρ̄ and each element of the basis σ_i:

    v̄_i = ⟨ρ̄, σ_i⟩_HS = trace[ρ̄ σ_i]   →   v̄ = (1/√3) [ 1  0  0  1.22  0  0  0  0  0.71 ]^T.

The corresponding generators for the vectorized evolution can be obtained
as explained in (3.35):

    L̂_1 = [ 0   0   0   0   0   0   0    0     0
            0   0   0   0   0   1   0    0     0
            0   0   0   0  -1   0   0    0     0
            0   0   0   0   0   0   0   -1     0
            0   0   1   0   0   0   0    0     0
            0  -1   0   0   0   0   0    0     0
            0   0   0   0   0   0   0    0     0
            0   0   0   1   0   0   0    0   -1.73
            0   0   0   0   0   0   0   1.73   0  ] ,

and

    L̂_2 = [  0     0     0    0     0     0    0    0    0
             0   -0.5    0    0     0     0    0    0    0
             0     0   -0.5   0     0     0    0    0    0
            0.82   0     0   -1     0     0    0    0   0.58
             0     0     0    0   -0.5    0    1    0    0
             0     0     0    0     0   -0.5   0    1    0
             0     0     0    0     0     0   -1    0    0
             0     0     0    1     0     0    0   -1    0
             0     0     0    0     0     0    0    0    0  ] .

We must now find a change of basis that makes these generators unital. A
possible choice is

    T⁻¹ = [ 1   0   0     0     0   0   0   0     0
            0   1   0     0     0   0   0   0     0
            0   0   1     0     0   0   0   0     0
            1   0   0   -0.82   0   0   0   0     0
            0   0   0     0     1   0   0   0     0
            0   0   0     0     0   1   0   0     0
            0   0   0     0     0   0   1   0     0
            0   0   0     0     0   0   0   1     0
            1   0   0     0     0   0   0   0   -1.41 ] .

Applying this change of basis we find

    L̂_1^d = T⁻¹ L̂_1 T = [ 0   0   0     0     0   0   0    0      0
                          0   0   0     0     0   1   0    0      0
                          0   0   0     0    -1   0   0    0      0
                          0   0   0     0     0   0   0   0.81    0
                          0   0   1     0     0   0   0    0      0
                          0  -1   0     0     0   0   0    0      0
                          0   0   0     0     0   0   0    0      0
                          0   0   0   -1.22   0   0   0    0    1.22
                          0   0   0     0     0   0   0  -2.45    0  ] ,

and

    L̂_2^d = T⁻¹ L̂_2 T = [ 0     0     0    0     0     0    0    0    0
                          0   -0.5    0    0     0     0    0    0    0
                          0     0   -0.5   0     0     0    0    0    0
                          0     0     0   -1     0     0    0    0   0.33
                          0     0     0    0   -0.5    0    1    0    0
                          0     0     0    0     0   -0.5   0    1    0
                          0     0     0    0     0     0   -1    0    0
                          0     0     0    0     0     0    0   -1    0
                          0     0     0    0     0     0    0    0    0  ] .

The eigenvalues of these generators are

    α_1 = [ 0   2i  -2i   i   -i   0   i   -i   0 ],

and

    α_2 = [ 0  -1   0  -0.5  -0.5  -0.5  -0.5  -1  -1 ].

Except for the eigenvalue relative to the evolution of the trace of the density
operator, each generator has at least one more eigenvalue equal to zero. This
means that using only one generator would make it impossible to stabilize
the desired steady state. Nevertheless the convex combination

    L̂_conv = (1/2) L̂_1 + (1/2) L̂_2

has the following eigenvalues:

    α_conv = [ 0   -0.38+0.96i   -0.38-0.96i   -0.23   -0.25+0.5i   -0.25-0.5i   -0.25+0.5i   -0.25-0.5i   -0.5 ].

Now all the eigenvalues except one have negative real part. Quickly switching
between the two systems, and dwelling on each one for the same time, then
brings the state to the equilibrium, which is nothing but ρ̄ in a different basis.
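A minimal numerical check of this example can be sketched as follows. For simplicity the sketch uses the plain column-stacking vectorization of Section 3.4 rather than the Gell-Mann basis, so the superoperator matrices differ from L̂_1 and L̂_2 above while their spectra coincide: ρ̄ is a steady state of both generators, each generator alone has a degenerate kernel, and the equal-weight convex combination leaves a one-dimensional kernel.

```python
import numpy as np

def lindblad_superop(H, Ls):
    """Column-stacking vectorization of rho_dot = -i[H,rho] + sum(L rho L† - ½{L†L, rho})."""
    n = H.shape[0]
    I = np.eye(n)
    # vec(H rho - rho H) = (I⊗H - Hᵀ⊗I) vec(rho)
    S = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for L in Ls:
        LdL = L.conj().T @ L
        S = S + np.kron(L.conj(), L) - 0.5 * (np.kron(I, LdL) + np.kron(LdL.T, I))
    return S

H = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
L = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 1]], dtype=complex)

S1 = lindblad_superop(H, [])                    # purely Hamiltonian generator
S2 = lindblad_superop(np.zeros((3, 3)), [L])    # purely dissipative generator

rho_bar = np.diag([1.0, 0.0, 0.0]).astype(complex)
v_bar = rho_bar.flatten('F')                    # column stacking

# rho_bar is a steady state shared by both generators:
assert np.allclose(S1 @ v_bar, 0) and np.allclose(S2 @ v_bar, 0)

# Each generator alone has a degenerate kernel (several zero eigenvalues),
# while the equal-weight convex combination has a one-dimensional kernel,
# so the shared steady state becomes the unique equilibrium.
count_zeros = lambda M: int(np.sum(np.abs(np.linalg.eigvals(M)) < 1e-7))
z1 = count_zeros(S1)
z2 = count_zeros(S2)
zc = count_zeros(0.5 * S1 + 0.5 * S2)
print(z1, z2, zc)
```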
Example 4.2. Stabilization of a mixed steady state.
Let us now suppose we dispose of two marginally stable unital generators,
that is, generators for which the identity is a common steady state. The
corresponding Lindblad operators are

          [ 0  1  0 ]
    G_1 = [ 1  0  1 ] ,
          [ 0  1  0 ]

and

          [ 1  0   0 ]
    G_2 = [ 0  0   0 ] .
          [ 0  0  -1 ]
Choosing the same basis as before, that is the set of Gell-Mann matrices, we
naturally get, for the vectorized steady state v̄,

    v̄ = (1/√3) [ 1  0  0  0  0  0  0  0  0 ]^T.

Again, the evolutions corresponding to the Lindblad operators G_1 and G_2
are respectively

    L̂_1 = [ 0     0     0     0     0     0     0     0     0
            0   -0.5    0     0     0     0    0.5    0     0
            0     0   -2.5    0     0     0     0    2.5    0
            0     0     0   -2.5  -1.5    0     0     0    0.87
            0     0     0   -1.5   -1     0     0     0    0.87
            0     0     0     0     0    -1     0     0     0
            0    0.5    0     0     0     0   -0.5    0     0
            0     0    1.5    1     0     0     0   -2.5    0
            0     0     0    0.87  0.87   0     0     0   -1.5 ] ,

and

    L̂_2 = [ 0     0     0    0    0    0     0     0    0
            0   -0.5    0    0    0    0     0     0    0
            0     0   -0.5   0    0    0     0     0    0
            0     0     0    0    0    0     0     0    0
            0     0     0    0   -2    0     0     0    0
            0     0     0    0    0   -2     0     0    0
            0     0     0    0    0    0   -0.5    0    0
            0     0     0    0    0    0     0   -0.5   0
            0     0     0    0    0    0     0     0    0 ] .
The eigenvalues of L̂_1 are

    α_1 = [ 0   0  -1  -4  -1   0  -1  -4  -1 ],

while those of L̂_2 are

    α_2 = [ 0   0   0  -0.5  -0.5  -0.5  -0.5  -2  -2 ].

Each generator has more than one eigenvalue equal to zero, but if we define,
as in the previous example,

    L̂_conv = (1/2) L̂_1 + (1/2) L̂_2,

we find a convex combination whose eigenvalues all have negative real part,
except the one relative to the trace evolution:

    α_conv = [ 0  -0.5  -0.75  -2.37  -0.63  -0.5  -0.75  -2.25  -1.5 ].

Switching between the generators we can now bring any state to the
completely mixed state ρ̄ = (1/3) I.

4.6 Stability of subspaces


When stabilization of a given state is not a pressing requirement, or cannot
be achieved by switching, convergence to invariant subspaces of the set of
density operators can be an interesting goal. The decomposition of the
underlying Hilbert space as a direct sum of orthogonal subspaces,

    H = H_S ⊕ H_R,

leads to the following block representation of any density operator ρ:

        [ ρ_S   ρ_P ]
    ρ = [ ρ_Q   ρ_R ] .

The set of states with support on H_S is invariant for the generator L if

      [ ρ_S   0 ]   [ L_S(ρ_S)   0 ]
    L [  0    0 ] = [    0       0 ] .

Our aim is to make such a set attractive, that is, supposing that there exists
an invariant set shared by every generator L_p, to find a switching law for
which

           [ ρ_S   ρ_P ]      [ ρ_S   0 ]
    ρ(t) = [ ρ_Q   ρ_R ]  →   [  0    0 ]    (support on H_S)    as t → +∞.    (4.20)
The density matrix can be vectorized in order to obtain an intuitive
representation of the state:

        [ 1   ]
    v = [ v_S ] ,                               (4.21)
        [ v_R ]

where v_S is the invariant part of the vector, and v_R is the part that must
be driven to zero to make v_S attractive. We saw in the previous sections
that an admissible generator for this kind of vector must keep the first
element constant:

         [ 0   0  ...  0 ]
    L̂ =  [ b        A    ] .

More precisely, for the vectorization realized in (4.21), generators must have
the specific block structure

         [ 0     0     0   ]
    L̂ =  [ b_S   L_S   L_P ] ,
         [ 0     0     L_R ]

in order to keep the dynamics of invariant states inside the invariant set. It
is worth noting that if the square block L_R is not Hurwitz, then the invariant
set is not attractive. Moreover, as the dynamics of v_R is driven only by the
block L_R, we only need to focus on that block to design a switching law that
makes the invariant set attractive.
Let us suppose we dispose of m generators L̂_p with the same invariant set
and that there exists a Hurwitz convex combination of the blocks L_{pR}, that is,

    ∃ α_1, ..., α_m   s.t.   Σ_{p=1}^m α_p L_{pR} = L_{cR},    Σ_{p=1}^m α_p = 1,      (4.22)

where L_{cR} is Hurwitz. Switching generators within a period ε according to
the coefficients α_p, as described in the previous sections, we obtain the
following evolution:

    v(t) = e^{L̂_p (t-t_{p-1})} e^{α_{p-1} L̂_{p-1} ε} ··· e^{α_1 L̂_1 ε} v(t_0).     (4.23)

If ε is small enough, at the end of each period this equation can be
approximated by

                  ( [ 0      0      0     ]    )
    v(kε) = exp   ( [ b_cS   L_cS   L_cP  ] kε )  v(t_0),    k = 0, 1, 2, ...     (4.24)
                  ( [ 0      0      L_cR  ]    )

It is clear from the last relation that the components of v(t) belonging to
v_R(t) decrease exponentially, and the invariant set becomes attractive.

Proposition 4.5. Suppose we dispose of a set of m generators with a
common invariant set and that equation (4.22) holds. Then, periodically
applying to any initial state each generator L_p for a small enough time
proportional to its coefficient α_p, the switching system asymptotically
converges to the common invariant set, which thus becomes attractive.
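Since only the block L_R matters, condition (4.22) can be checked, and the resulting switching law simulated, on the R-blocks alone. The two 2×2 blocks below are illustrative toy data (not from the thesis):

```python
import numpy as np
from scipy.linalg import expm

# R-blocks of two generators (illustrative toy data); neither is Hurwitz on its own:
LR1 = np.array([[0.0, 2.0], [-2.0, 0.0]])   # marginally stable rotation
LR2 = np.array([[0.0, 0.0], [0.0, -1.0]])   # damps only the second component

# Condition (4.22): a Hurwitz convex combination with alpha_1 = alpha_2 = 1/2.
LcR = 0.5 * LR1 + 0.5 * LR2
assert np.all(np.linalg.eigvals(LcR).real < 0)

# Fast periodic switching drives v_R to zero, making the invariant set attractive.
eps = 0.02
period = expm(0.5 * eps * LR2) @ expm(0.5 * eps * LR1)
v_R = np.array([1.0, -1.0])
for _ in range(5000):
    v_R = period @ v_R
print(np.linalg.norm(v_R))  # decays toward 0
```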
Chapter 5
Conclusion

Our aim was to investigate how to exploit switching-systems techniques for
making a given quantum state globally asymptotically stable. We therefore
chose to restrict our analysis to Lindblad dynamics which all share one same
steady state but which are not asymptotically stable. Recalling results
presented in [2], we proved that a linear switching system can be made
asymptotically stable even if none of the individual subsystems is stable
itself. Indeed this happens if there exists a Hurwitz convex combination of
the matrices which describe the subsystems. Unfortunately, finding such a
convex combination, if it exists, is NP-hard unless there are only two
subsystems, a case for which we provided a simple algorithm. As linear maps
acting on density operators ρ, Lindblad equations L can be expressed as
matrices L̂ which apply a linear transformation to quantum states v_ρ written
in vectorized notation. To actually describe the dynamics of a quantum
state, any transformation L̂ must have a definite structure. Its first row
must thus be made of zeros, in order to preserve the trace component, and it
cannot have eigenvalues with positive real part. These constraints lead each
generator L̂_p to have the following form:

            [ 0      0  ...  0 ]
    L̂_p  =  [ b_p       A_p    ] ,    p = 1, ..., m,          (5.1)

where A_p must have at least one eigenvalue with real part equal to zero
in order to be only marginally stable, as we required. If all the generators
are unital, the column vectors b_p are null. That means that finding a
Hurwitz convex combination of the A_p is enough to make the complete
switching system asymptotically stable, as the first component is equal to
one for all admissible vectors. On the other hand, if the generators L̂_p are
not unital,


then the challenge is slightly more complex, as the dynamics of the reduced
vectors v_{rρ} are affine:

    v̇_{rρ} = A v_{rρ} + b.

Nevertheless, as we assumed that all the generators share the same steady
state, there exists a change of basis T that makes all the switching dynamics
unital. Asymptotic stability can again be reached by switching between
the new submatrices Ã_p. We also showed that if all the submatrices A_p of
unital generators, or Ã_p of non-unital generators after the change of basis,
are symmetric, then the switching system is uniformly asymptotically stable,
that is, any switching law is stabilizing. Finally, we proved that switching
techniques can also be used to make shared invariant subspaces attractive.
In this case the only requirement is to find a Hurwitz convex combination
of the submatrices which describe the dynamics of the components which do
not belong to the invariant sets.
As we showed in Chapter 2, state-based switching laws can be designed
for stabilizing classical systems. Their application to quantum generators is
hindered by the difficulty of exploiting feedback information obtained by
observing the state. Nevertheless, there exist methods for approximately
recovering the state of a quantum system, for example by performing
measurements on light fields which have interacted with it. This kind of operation
requires the use of stochastic models we did not discuss in this work.
State-based switching is however a valuable alternative, as it allows one to increase
the convergence rate by applying optimization algorithms. Adapting this
technique to the quantum field is thus an interesting challenge that deserves
to be taken up.
Bibliography

[1] Liberzon, Daniel. Switching in systems and control. Springer, 2003.

[2] Sun, Zhendong. Switched linear systems: control and design. Springer,
2006.

[3] Wicks, Mark A., Philippos Peleties, and Raymond A. DeCarlo.
"Construction of piecewise Lyapunov functions for stabilizing switched
systems." Proceedings of the 33rd IEEE Conference on Decision and Control,
Vol. 4. IEEE, 1994.

[4] Wicks, M., P. Peleties, and R. DeCarlo. "Switched controller synthesis for
the quadratic stabilisation of a pair of unstable linear systems." European
Journal of Control 4.2 (1998): 140-147.

[5] Branicky, Michael S. "Multiple Lyapunov functions and other analysis


tools for switched and hybrid systems." IEEE Transactions on Automatic
Control 43.4 (1998): 475-482.

[6] Blanes, Sergio, and Fernando Casas. "On the convergence and
optimization of the Baker-Campbell-Hausdorff formula." Linear Algebra and its
Applications 378 (2004): 135-158.

[7] Hespanha, Joao P. "Uniform stability of switched linear systems:
extensions of LaSalle's invariance principle." IEEE Transactions on Automatic
Control 49.4 (2004): 470-482.

[8] Li, Zheng Guo, C. Y. Wen, and Yeng Chai Soh. "Stabilization of a class
of switched systems via designing switching laws." IEEE Transactions on
Automatic Control 46.4 (2001): 665-670.

[9] Fornasini, Ettore, and Giovanni Marchesini. Appunti di teoria dei sistemi.
Libreria Progetto, 1988.


[10] Fleming, Wendell H., and R. W. Rishel. Deterministic and Stochastic


Optimal Control (Stochastic Modelling and Applied Probability). Springer,
1982.
[11] Franklin, Gene F., et al. Feedback control of dynamic systems. Vol. 3.
Reading: Addison-Wesley, 1994.
[12] Zhou, Kemin, and John Comstock Doyle. Essentials of robust control.
Vol. 104. Upper Saddle River, NJ: Prentice hall, 1998.
[13] Schirmer, S. G., and Xiaoting Wang. "Stabilizing open quantum systems
by Markovian reservoir engineering." Physical Review A 81.6 (2010):
062306.
[14] Bergholm, Ville, and Thomas Schulte-Herbrüggen. "How to Transfer be-
tween Arbitrary n-Qubit Quantum States by Coherent Control and Sim-
plest Switchable Noise on a Single Qubit." arXiv preprint arXiv:1206.4945
(2012).
[15] D'Alessandro, Domenico. Introduction to quantum control and dynam-
ics. CRC press, 2007.

[16] Alicki, R., and K. Lendi. "Quantum Dynamical Semigroups and Appli-
cations", 1987.

[17] Nielsen, Michael A., and Isaac L. Chuang. Quantum computation and
quantum information. Cambridge University Press, 2010.

[18] Petruccione, Francesco, and Heinz-Peter Breuer. The theory of open


quantum systems. Oxford University Press, 2002.

[19] Ticozzi, Francesco, and Claudio Altafini. "Modeling and control of
quantum systems: an introduction." IEEE Transactions on Automatic Control
57.8 (2012): 1898-1917.
[20] Rivas, Ángel, and Susana F. Huelga. Open Quantum Systems: An In-
troduction. Springer, 2011.

[21] Wang, Xiaoting, and S. G. Schirmer. "Contractivity of the Hilbert-


Schmidt distance under open-system dynamics." Physical Review A 79.5
(2009): 052326.
[22] Perez-Garcia, David, et al. "Contractivity of positive and trace-
preserving maps under Lp norms." Journal of Mathematical Physics 47.8
(2006): 083506.

[23] Wolf, Michael M. "Quantum Channels and Operations." Niels-Bohr In-


stitute in Copenhagen, 2008/2009.
[24] Anderson, David F. "Introduction to Stochastic Processes with Applica-
tions in the Biosciences." University of Wisconsin at Madison, 2013.
