1607.05404v1
Matrix computations are central to many algorithms in optimization and machine learning [1-3]. At the heart of these algorithms regularly lies an eigenvalue or a singular value decomposition of a matrix, or a matrix inversion. Such tasks could be performed efficiently via phase estimation on a universal quantum computer [4], as long as the matrix can be simulated (exponentiated) efficiently and controllably as a Hamiltonian acting on a quantum state. Almost exactly twenty years ago, Ref. [5] paved the way for such a simulation of quantum systems by introducing an efficient algorithm for exponentiating Hamiltonians with tensor product structure, enabling applications such as quantum computing for quantum chemistry [6]. Step by step, more general types of quantum systems were tackled and performance increased: Aharonov and Ta-Shma [7] showed a method for simulating quantum systems described by sparse Hamiltonians, while Childs et al. [8] demonstrated the simulation of a quantum walk on a sparse graph. Berry et al. [9] reduced the temporal scaling to approximately linear via higher-order Suzuki integrators. Further improvements in the sparsity scaling were presented in Ref. [10]. Beyond sparse Hamiltonians, quantum principal component analysis (qPCA) was shown to handle non-sparse positive semidefinite low-rank Hamiltonians [11] when given multiple copies of the Hamiltonian as a quantum density matrix. This method has applications in quantum process tomography and state discrimination [11], as well as in quantum machine learning [12-18], specifically in curve fitting [19] and support vector machines [20]. In an oracular setting, Refs. [10, 21, 22] showed the simulation of non-sparse Hamiltonians via discrete quantum walks. The scaling in terms of the simulated time t is t^{3/2} or even linear in t.

In the spirit of Ref. [11], we provide an alternative method for non-sparse matrices in an oracular setting which requires only one-sparse simulation techniques. We achieve a run time in terms of the maximal matrix element and a t² scaling. We discuss a class of matrices with low-rank properties that make the non-sparse methods efficient. Compared to Ref. [11], the matrices need not be positive semidefinite. In order to effectively treat a general non-Hermitian non-square matrix, we make use of an indefinite "extended Hermitian matrix" that incorporates the original matrix. With such an extended matrix, we are able to efficiently determine the singular value decomposition of dense non-square, low-rank matrices. As one possible application of our method, we discuss the Procrustes problem [1] of finding a closest isometric matrix.

Method. We are given an N × N dense (non-sparse) Hermitian indefinite matrix A ∈ C^{N×N} via efficient oracle access to the elements of A. The oracle either performs an efficient computation of the matrix elements or provides access to a storage medium for the elements, such as quantum RAM [23, 24]. Our new method simulates e^{−i(A/N)t} on an arbitrary quantum state for arbitrary times t. Note that the eigenvalues of A/N are bounded by ±‖A‖_max, where ‖A‖_max is the maximal absolute value of the matrix elements of A. This means that there exist matrices A for which the unitary e^{−i(A/N)t} can be far from the identity operator for a time of order ‖A‖_max^{−1}, i.e. an initial state can evolve to a perfectly distinguishable state. For such times, the unitary e^{−i(A/N)t} can be well approximated by a unitary generated by a low-rank matrix.

Let σ and ρ be N-dimensional density matrices. The state σ is the target state on which the matrix exponential of A/N is applied, while multiple copies of ρ are used as ancillary states. Our method embeds the N² elements of A into a Hermitian sparse matrix S_A ∈ C^{N²×N²}, which we call the "modified swap matrix" because of its close relation to the usual swap matrix. Each column of S_A contains a single element of A. The modified swap matrix between the registers for a single copy of ρ and σ is

    S_A = Σ_{j,k=1}^{N} A_{jk} |k⟩⟨j| ⊗ |j⟩⟨k| ∈ C^{N²×N²}.    (1)

This matrix is one-sparse in a quadratically bigger space and reduces to the usual swap matrix for A_{jk} = 1, j, k = 1, ..., N. Given efficient oracle access to the elements, we can simulate a one-sparse matrix such as S_A with a constant number of oracle calls and negligible error [7-9, 25]. We discuss the oracle access below. The matrix exponential of S_A is applied to a tensor product of a uniform superposition and an arbitrary state. Performing S_A for small ∆t leads to a reduced dynamics of σ when expanded to terms of second order in ∆t
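As a concrete illustration (our own sketch, not part of the original text), the structure of S_A in Eq. (1) can be checked numerically for a small Hermitian A. All names are ours, and the explicit dense matrix stands in for the oracle, which a real implementation would query instead:

```python
# Build the modified swap matrix S_A of Eq. (1) for a small Hermitian A
# and check its claimed properties (one-sparse, Hermitian, swap for A_jk = 1).
import numpy as np

N = 4
rng = np.random.default_rng(7)
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = (B + B.conj().T) / 2                      # Hermitian test matrix

# S_A = sum_{j,k} A_{jk} |k><j| (x) |j><k|, acting on C^{N^2}
S = np.zeros((N * N, N * N), dtype=complex)
for j in range(N):
    for k in range(N):
        S[k * N + j, j * N + k] = A[j, k]     # row index (k,j), column index (j,k)

# One-sparse: exactly one non-zero entry per column
assert all(np.count_nonzero(S[:, c]) == 1 for c in range(N * N))
# S_A is Hermitian because A is
assert np.allclose(S, S.conj().T)

# With A_{jk} = 1, S_A reduces to the usual swap: S1 |a>|b> = |b>|a>
S1 = np.zeros((N * N, N * N))
for j in range(N):
    for k in range(N):
        S1[k * N + j, j * N + k] = 1.0
a, b = rng.standard_normal(N), rng.standard_normal(N)
assert np.allclose(S1 @ np.kron(a, b), np.kron(b, a))
```

The single non-zero per column is what makes S_A one-sparse and hence cheap to simulate with the techniques of Refs. [7-9].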
as

    tr₁{e^{−i S_A ∆t} ρ ⊗ σ e^{i S_A ∆t}} = σ − i tr₁{S_A ρ ⊗ σ} ∆t + i tr₁{ρ ⊗ σ S_A} ∆t + O(∆t²).    (2)

Here, tr₁ denotes the partial trace over the first register, containing ρ. The first O(∆t) term is tr₁{S_A ρ ⊗ σ} = Σ_{j,k=1}^{N} A_{jk} ⟨j|ρ|k⟩ |j⟩⟨k| σ. Choosing ρ = |1⃗⟩⟨1⃗|, with |1⃗⟩ := (1/√N) Σ_k |k⟩ the uniform superposition, leads to tr₁{S_A ρ ⊗ σ} = (A/N) σ. This choice for ρ contrasts with qPCA, where ρ is proportional to the simulated matrix [11]. Analogously, the second O(∆t) term becomes tr₁{ρ ⊗ σ S_A} = σ (A/N). Thus, for small times, evolving with the modified swap matrix S_A on the bigger system is equivalent to evolving with A/N on the σ subsystem,

    tr₁{e^{−i S_A ∆t} ρ ⊗ σ e^{i S_A ∆t}} = σ − i (∆t/N) [A, σ] + O(∆t²) ≈ e^{−i (A/N) ∆t} σ e^{i (A/N) ∆t}.    (3)

Let ε₀ be the trace norm of the error term O(∆t²). We can bound this error by ε₀ ≤ 2 ‖A‖²_max ∆t² (see Appendix). Here, ‖A‖_max = max_{mn} |A_{mn}| denotes the maximal absolute element of A. Note that ‖A‖_max coincides with the largest absolute eigenvalue of S_A. The operation in Eq. (3) can be performed multiple times in a forward Euler fashion using multiple copies of ρ. For n steps the resulting error is ε = n ε₀. The simulated time is t = n ∆t. Hence, fixing ε and t,

    n = O(t² ‖A‖²_max / ε)    (4)

steps are required to simulate e^{−i (A/N) t}. The total run time of our method is n T_A, the number of steps n multiplied with the matrix oracle access time T_A (see below).

We discuss for which matrices the algorithm runs efficiently. Note that an upper bound for the eigenvalues of A/N in terms of the maximal matrix element is |λ_j|/N ≤ ‖A‖_max. At a simulation time t, only the eigenvalues of A/N with |λ_j|/N = Ω(1/t) matter. Let the number of these eigenvalues be r. Thus, effectively a matrix A_r/N is simulated with tr{A_r²/N²} = Σ_{j=1}^{r} λ_j²/N² = Ω(r/t²). It also holds that tr{A_r²/N²} ≤ ‖A‖²_max. Thus, the rank of the effectively simulated matrix is r = O(‖A‖²_max t²).

Concretely, for the algorithm to be efficient in terms of matrix oracle calls, we require that the number of simulation steps n is O(poly log N). Let the desired error be 1/ε = O(poly log N). Assuming ‖A‖_max = Θ(1), meaning a constant independent of N, we have from Eq. (4) that we can only exponentiate for a time t = O(poly log N). For such times, only the large eigenvalues of A/N with |λ_j|/N = Ω(1/poly log N) matter. Such eigenvalues can be achieved when the matrix is dense enough, for example when A/N has Θ(N) non-zeros of size Θ(1/N) per row. For the rank of the simulated matrix in this case we find r = O(poly log N); thus, effectively a low-rank matrix is simulated. To summarize, we expect the method to work well for low-rank matrices A that are dense with relatively small matrix elements.

A large class of matrices satisfies these criteria. Sample a random unitary U ∈ C^{N×N} and r suitable eigenvalues of size |λ_j| = Θ(N) and multiply them as U diag_r(λ_j) U† to construct A. Here, diag_r(λ_j) is the diagonal matrix with the r eigenvalues on the diagonal and zeros otherwise. A typical random normalized vector has absolute matrix elements of size O(1/√N). The outer product of such a vector with itself has absolute matrix elements of size O(1/N). Each eigenvalue of absolute size Θ(N) is multiplied with such an outer product and the r terms are summed up. Thus, a typical matrix element of A will be of size O(√r) and ‖A‖_max = O(r).

Phase estimation. Phase estimation provides a gateway from unitary simulation to many interesting applications. For the use in phase estimation, we extend our method such that the matrix exponentiation of A/N can be performed conditioned on additional control qubits. With our method, the eigenvalues λ_j/N of A/N can be both positive and negative. The modified swap operator S_A for a Hermitian matrix A with eigendecomposition A = Σ_j λ_j |u_j⟩⟨u_j| is augmented as |1⟩⟨1| ⊗ S_A, which is still a one-sparse Hermitian operator. The resulting unitary e^{−i |1⟩⟨1| ⊗ S_A ∆t} = |0⟩⟨0| ⊗ 1 + |1⟩⟨1| ⊗ e^{−i S_A ∆t} is efficiently simulatable. This operator is applied to a state |c⟩⟨c| ⊗ ρ ⊗ σ, where |c⟩ is an arbitrary control qubit state. Sequential application of such controlled operations allows the use of phase estimation to prepare the state [25]

    |φ⟩ = (Σ_{|λ_j|/N ≥ ε} |β_j|²)^{−1/2} Σ_{|λ_j|/N ≥ ε} β_j |u_j⟩ |λ_j/N⟩    (5)

from an initial state |ψ⟩|0 ... 0⟩ with O(⌈log(1/ε)⌉) control qubits forming an eigenvalue register. Here, β_j = ⟨u_j|ψ⟩ and ε is the accuracy for resolving eigenvalues. To achieve this accuracy, phase estimation is run for a total time t = O(1/ε). Thus, O(‖A‖²_max/ε³) queries of the oracle for A are required, which is of order O(poly log N) under the low-rank assumption for A discussed above.

Matrix oracle and resource requirements. To simulate the modified swap matrix, we employ the methods developed in Refs. [8, 9]. First, we assume access to the original matrix A,

    |j k⟩|0 ··· 0⟩ ↦ |j k⟩|A_{jk}⟩.    (6)

This operation can be provided by quantum random access memory (qRAM) [23, 24] using O(N²) storage space and quantum switches for accessing the data in T_A = O(log² N) operations. Alternatively, there are matrices whose elements are efficiently computable, i.e. T_A = O(poly log N). For the one-sparse matrix S_A, the unitary operation for the sparse simulation methods [8, 9] can be simply constructed from the oracle in Eq. (6) and is given by

    |(j, k)⟩|0 ··· 0⟩ ↦ |(j, k)⟩|(k, j), (S_A)_{(k,j),(j,k)}⟩.    (7)

Here, we use (j, k) as the label for the column/row index of the modified swap matrix.
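The reduced dynamics of Eqs. (2) and (3), including its second-order error in ∆t, can be checked classically for small N. The following numpy sketch (ours, with dense matrix exponentials standing in for the quantum simulation) compares the partial trace of the evolved state with e^{−i(A/N)∆t} σ e^{i(A/N)∆t}:

```python
# Numerically verify tr1{e^{-i S_A dt} (rho x sigma) e^{i S_A dt}}
#   ~ e^{-i A dt/N} sigma e^{i A dt/N}   with error O(dt^2), Eqs. (2)-(3).
import numpy as np
from scipy.linalg import expm

N = 4
rng = np.random.default_rng(1)
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = (B + B.conj().T) / 2                                  # Hermitian

# Modified swap matrix, Eq. (1); first tensor factor is the ancilla register
S = np.zeros((N * N, N * N), dtype=complex)
for j in range(N):
    for k in range(N):
        S[k * N + j, j * N + k] = A[j, k]

one = np.ones(N) / np.sqrt(N)                             # uniform superposition
rho = np.outer(one, one.conj())                           # ancilla rho = |1~><1~|
psi = rng.standard_normal(N) + 1j * rng.standard_normal(N)
psi /= np.linalg.norm(psi)
sigma = np.outer(psi, psi.conj())                         # target state

def step_error(dt):
    U = expm(-1j * S * dt)
    full = U @ np.kron(rho, sigma) @ U.conj().T
    # partial trace over the first (ancilla) register
    reduced = np.trace(full.reshape(N, N, N, N), axis1=0, axis2=2)
    target = expm(-1j * A * dt / N) @ sigma @ expm(1j * A * dt / N)
    return np.abs(reduced - target).max()

err_coarse = step_error(1e-2)
err_fine = step_error(1e-3)
assert err_coarse < 1e-2                  # the mismatch is small
assert err_fine < err_coarse / 30         # and shrinks roughly quadratically in dt
```

Shrinking ∆t by 10 reduces the mismatch by roughly 100, consistent with the O(∆t²) error term bounded by ε₀ ≤ 2‖A‖²_max ∆t².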
We compare the required resources with those of other methods for sparse and non-sparse matrices. For a general N × N, s-sparse matrix, O(sN) elements need to be stored. In certain cases, the sparse matrix features more structure and its elements can be computed efficiently [9, 25]. For non-sparse matrices and the qPCA method of Ref. [11], only multiple copies of the density matrix, as opposed to an operation as in Eq. (6), are required for applications such as state tomography. For machine learning via qPCA [11, 20], the density matrix is prepared from a classical source via quantum RAM [23, 24] and requires O(N²) storage. In comparison, the requirements of the method in this work are in principle not higher than those of these sparse and non-sparse methods, both in the case of qRAM access and in the case when matrix elements are computed instead of stored.

Non-square matrices. Our method also enables us to determine properties of general non-square low-rank matrices effectively. To determine the singular value decomposition of a matrix A = U Σ V† ∈ C^{M×N} with rank r, simulating the positive semidefinite matrices AA† and A†A via qPCA yields the correct singular values and vectors. However, essential information is missing, leading to ambiguities in the singular vectors that become evident when inserting diagonal matrices into the singular value decomposition of AA† that change the relative phases of the singular vectors,

    AA† = U Σ² U† = U Σ D† V† V D Σ U† =: Â Â†,    (8)

with D := diag(e^{−iϑ_j}), the ϑ_j being arbitrary phases. If A v_j = σ_j u_j for each j = 1, ..., r, then

    Â v_j = U Σ D† V† v_j = σ_j e^{iϑ_j} u_j =: σ_j û_j,    (9)

which means the phase relations between left and right singular vectors in  differ from those in A. Although A and  still share the same singular values and even the same singular vectors up to phase factors, ‖A − ‖_F will in general (with the exception of positive semidefinite matrices, where U = V) not be zero or even be small: the matrix A cannot be reproduced this way; a singular value decomposition is more than a set of singular values and normalized singular vectors. This affects all kinds of algorithms that require the appropriate phase relations between each left singular vector u_j and the corresponding right singular vector v_j. Such applications are determining the best low-rank approximation of a matrix, the signal processing algorithms discussed in Ref. [26], or determining the nearest isometric matrix of a non-Hermitian matrix, related to the unitary Procrustes problem.

In order to overcome this issue, consider the "extended matrix"

    Ã := [[0, A], [A†, 0]],    (10)

which was introduced for singular value computations in Ref. [27] and recently used for sparse quantum matrix inversion in Ref. [25]. The eigenvalues of à correspond to {±σ_j}, with {σ_j} the singular values of A for j = 1, ..., r. The corresponding eigenvectors are proportional to (u_j, ±v_j) ∈ C^{M+N}, see Appendix. The left and right singular vectors of A can be extracted from the first M and last N entries, respectively. Since à is Hermitian, its eigenvectors can be assumed to be orthonormal: ‖(u_j, v_j)‖² = ‖u_j‖² + ‖v_j‖² = 1 and (u_j, v_j) · (u_j, −v_j)† = ‖u_j‖² − ‖v_j‖² = 0, from which it follows that the norm of each of the subvectors u_j and v_j is 1/√2, independent of their respective lengths M and N. The important point is that the eigenvectors of the extended matrix preserve the correct phase relations between the left and right singular vectors, since (e^{iϑ_j} u_j, v_j) is an eigenvector of à only for the correct phase e^{iϑ_j} = 1.

The requirements for our quantum algorithm can be satisfied also for the extended matrix. For randomly sampled left and right singular vectors, the matrix elements have maximal size O(Σ_{j=1}^{r} σ_j/√(MN)), thus σ_j = O(√(MN)). In addition, a 1/(M+N) factor arises in the simulation of the extended matrix from the ancillary state ρ = |1⃗⟩⟨1⃗| as before, which leads to the requirement σ_j = Θ(M+N). These two conditions for σ_j can be satisfied if the matrix A is not too skewed, i.e. M = Θ(N). In summary, by simulating the corresponding Hermitian extended matrices, general complex matrices of low rank can be simulated efficiently, yielding the correct singular value decomposition.

Procrustes problem. The unitary Procrustes problem is to find the unitary matrix that most accurately transforms one matrix into another. It has many applications, such as in shape/factor/image analysis and statistics [1]. For non-square matrices, we thus consider the Procrustes problem of finding the isometry that most accurately transforms one matrix into another. Formally, minimize ‖W B − C‖_F among all isometries W ∈ C^{M×N}, W†W = 1, with B ∈ C^{N×K} and C ∈ C^{M×K}, where M > N. The problem is equivalent to the general problem of finding the nearest isometric matrix W ∈ C^{M×N} to a matrix A ∈ C^{M×N} by taking A = CB†. Since our quantum algorithm is restricted to low-rank matrices, let A = CB† be low-rank with rank r and singular value decomposition A = U Σ V† with U ∈ C^{M×r}, Σ ∈ R^{r×r}, and V ∈ C^{N×r}. The optimal solution to the Procrustes problem is W = U V† [1], setting all singular values to one, in both the low-rank and the full-rank situation. Since A is assumed to be low rank, we find a partial isometry with W†W = P_{col(V)}, with P_{col(V)} the projector onto the subspace spanned by the columns of V. Thus, W acts as an isometry for vectors in that subspace (see Appendix).

In a quantum algorithm, we want to apply the nearest low-rank isometry to a quantum state |ψ⟩. The state |ψ⟩ is assumed to be in or close to the subspace spanned by the columns of V. We assume that the extended matrix for A in Eq. (10) is given in oracular form and that A is not too skewed, such that σ_j/(M+N) = Θ(1) and ‖A‖_max = Θ(1). We perform phase estimation on the input state |0, ψ⟩|0 ... 0⟩ and, analogous to Eq. (5), obtain a state proportional to

    Σ_{±, σ_j/(M+N) ≥ ε} β_j^± |u_j, ±v_j⟩ |±σ_j/(M+N)⟩    (11)
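The eigenstructure of the extended matrix, eigenvalues ±σ_j with eigenvectors proportional to (u_j, ±v_j), can be illustrated with a small dense example. This is our own sketch, not the paper's code; all names are ours:

```python
# Extended matrix A~ = [[0, A], [A^dag, 0]]: eigenvalues come in pairs +-sigma_j,
# the eigenvector subblocks have norm 1/sqrt(2) and keep the phase relation A v = sigma u.
import numpy as np

M, N, r = 5, 4, 2
rng = np.random.default_rng(3)
# random rank-r matrix A = U diag(svals) V^dag
U, _ = np.linalg.qr(rng.standard_normal((M, r)) + 1j * rng.standard_normal((M, r)))
V, _ = np.linalg.qr(rng.standard_normal((N, r)) + 1j * rng.standard_normal((N, r)))
svals = np.array([3.0, 1.5])
A = U @ np.diag(svals) @ V.conj().T

Aext = np.block([[np.zeros((M, M)), A],
                 [A.conj().T, np.zeros((N, N))]])
evals, evecs = np.linalg.eigh(Aext)

# non-zero eigenvalues come in pairs +-sigma_j
nz = sorted(abs(e) for e in evals if abs(e) > 1e-9)
assert np.allclose(nz, [1.5, 1.5, 3.0, 3.0])

# eigenvector for +sigma_1 = 3.0: proportional to (u_1, v_1)/sqrt(2)
w = evecs[:, np.argmin(np.abs(evals - 3.0))]
u_part, v_part = w[:M], w[M:]
assert np.isclose(np.linalg.norm(u_part), 1 / np.sqrt(2))
# A maps the v-part onto sigma_1 times the u-part with the SAME phase
assert np.allclose(A @ v_part, 3.0 * u_part)
```

The last assertion is the point of the construction: unlike the singular vectors obtained from AA† alone, the eigenvector of à fixes the relative phase between u_j and v_j.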
with β_j^± = ⟨u_j, ±v_j|0, ψ⟩ = ±⟨v_j|ψ⟩/√2. The sum has 2r terms, corresponding to the eigenvalues of the extended matrix with absolute value greater than (M+N)ε. Performing a σ_z operation on the qubit encoding the sign of the respective eigenvalue and uncomputing the eigenvalue register yields a state proportional to Σ_j β_j |u_j, ±v_j⟩. Projecting onto the u_j part (success probability 1/2) results in a state proportional to

    Σ_{σ_j/(M+N) ≥ ε} |u_j⟩⟨v_j|ψ⟩ ∝ U V† |ψ⟩.    (12)

This prepares the desired state for the non-square low-rank Procrustes problem with accuracy ε in runtime O(‖A‖²_max log²(N+M)/ε³). Classically, performing the singular value decomposition of a low-rank A without further structural assumptions generally takes O(N³).

Conclusion. The method presented here allows non-sparse low-rank non-positive Hermitian N × N matrices A/N to be exponentiated for a time t with accuracy ε in run time O((t²/ε) ‖A‖²_max T_A), where ‖A‖_max is the maximal absolute element of A and T_A is the data access time. If the matrix elements are accessed via quantum RAM or computed efficiently and the significant eigenvalues of A are Θ(N), our method can achieve a run time of O(poly log N) for a large class of matrices. Our method allows non-Hermitian and non-square matrices to be exponentiated via extended Hermitian matrices. We have shown how to compute the singular value decomposition of a non-Hermitian non-sparse matrix on a quantum computer directly, while keeping all the correct relative phase information. As one of the many potential applications of the singular value decomposition, we can find the pseudoinverse of a matrix and the closest isometry exponentially faster than any known classical algorithm. It remains to be seen if the time complexity of our method can be improved from O(t²) to an approximately linear scaling via higher-order Suzuki-Trotter steps or other techniques. In addition, by using a (possibly unknown) ancillary state other than the uniform superposition, the oracular setting of the present work and the tomography setting of Ref. [11] could be combined.

We are grateful to Iman Marvian for insightful discussions. We acknowledge support from DARPA, NSF, and AFOSR. AS thanks the German National Academic Foundation (Studienstiftung des deutschen Volkes) and the Fritz Haber Institute of the Max Planck Society for support.

∗ [email protected]
† [email protected]

[1] G. H. Golub and C. F. Van Loan, Matrix Computations, Vol. 3 (JHU Press, 2012).
[2] S. Boyd and L. Vandenberghe, Convex Optimization (Cambridge University Press, 2004).
[3] K. P. Murphy, Machine Learning: A Probabilistic Perspective (MIT Press, 2012).
[4] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2010).
[5] S. Lloyd, Science 273, 1073 (1996).
[6] A. Aspuru-Guzik, A. D. Dutoi, P. J. Love, and M. Head-Gordon, Science 309, 1704 (2005).
[7] D. Aharonov and A. Ta-Shma, in Proceedings of the Thirty-fifth Annual ACM Symposium on Theory of Computing, STOC '03 (ACM, New York, NY, USA, 2003), pp. 20-29.
[8] A. M. Childs, R. Cleve, E. Deotto, E. Farhi, S. Gutmann, and D. A. Spielman, in Proceedings of the Thirty-fifth Annual ACM Symposium on Theory of Computing, STOC '03 (ACM, New York, NY, USA, 2003), pp. 59-68.
[9] D. W. Berry, G. Ahokas, R. Cleve, and B. C. Sanders, Comm. Math. Phys. 270, 359 (2007).
[10] D. W. Berry and A. M. Childs, Quantum Info. Comput. 12, 29 (2012).
[11] S. Lloyd, M. Mohseni, and P. Rebentrost, Nature Physics 10, 631 (2014).
[12] N. Wiebe, A. Kapoor, and K. M. Svore, arXiv preprint arXiv:1401.2142 (2014).
[13] N. Wiebe, A. Kapoor, and K. M. Svore, arXiv preprint arXiv:1412.3489 (2014).
[14] M. Benedetti, J. Realpe-Gómez, R. Biswas, and A. Perdomo-Ortiz, arXiv preprint arXiv:1510.07611 (2015).
[15] M. Schuld, I. Sinayskiy, and F. Petruccione, Physics Letters A 379, 660 (2015).
[16] M. Schuld, I. Sinayskiy, and F. Petruccione, arXiv preprint arXiv:1601.07823 (2016).
[17] I. Kerenidis and A. Prakash, arXiv preprint arXiv:1603.08675 (2016).
[18] H.-K. Lau, R. Pooser, G. Siopsis, and C. Weedbrook, arXiv preprint arXiv:1603.06222 (2016).
[19] G. Wang, arXiv preprint arXiv:1402.0660 (2014).
[20] P. Rebentrost, M. Mohseni, and S. Lloyd, Physical Review Letters 113, 130503 (2014).
[21] A. Childs, Comm. Math. Phys. 294, 581 (2010).
[22] A. Childs and R. Kothari, in Theory of Quantum Computation, Communication, and Cryptography, Lecture Notes in Computer Science Vol. 6519, edited by W. van Dam, V. Kendon, and S. Severini (Springer, Berlin, Heidelberg, 2011), p. 94.
[23] V. Giovannetti, S. Lloyd, and L. Maccone, Phys. Rev. Lett. 100, 160501 (2008).
[24] V. Giovannetti, S. Lloyd, and L. Maccone, Phys. Rev. A 78, 052310 (2008).
[25] A. W. Harrow, A. Hassidim, and S. Lloyd, Phys. Rev. Lett. 103, 150502 (2009).
[26] A. Steffens, P. Rebentrost, I. Marvian, J. Eisert, and S. Lloyd, to be submitted (2016).
[27] G. Golub and W. Kahan, Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis 2, 205 (1965).

Appendix

Norms. Denote the maximum absolute element of a matrix A ∈ C^{N×N} by ‖A‖_max := max_{j,k} |A_{jk}|. The Frobenius or Hilbert-Schmidt norm is given by ‖A‖_F := √(Σ_{j,k} |A_{jk}|²) and the nuclear norm by ‖A‖_* := Σ_{i=1}^{r} σ_i, where r is the rank and the σ_i are the singular values.
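For reference, these three norms are directly computable with numpy; the short check below (our illustration) also confirms the standard ordering ‖A‖_max ≤ ‖A‖_F ≤ ‖A‖_* for a random low-rank matrix:

```python
# Max-element, Frobenius, and nuclear norms of a random rank-2 matrix.
import numpy as np

rng = np.random.default_rng(0)
N, r = 6, 2
A = rng.standard_normal((N, r)) @ rng.standard_normal((r, N))   # rank-2

max_norm = np.max(np.abs(A))                  # ||A||_max
fro_norm = np.linalg.norm(A, 'fro')           # ||A||_F
s = np.linalg.svd(A, compute_uv=False)
nuc_norm = s.sum()                            # ||A||_* = sum of singular values

assert np.isclose(nuc_norm, np.linalg.norm(A, 'nuc'))
assert np.isclose(fro_norm, np.sqrt((s ** 2).sum()))   # ||A||_F^2 = sum sigma_j^2
assert max_norm <= fro_norm <= nuc_norm + 1e-12
```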
Modified swap matrix. The modified swap matrix is defined as

    S_A = Σ_{j,k=1}^{N} A_{jk} |k⟩⟨j| ⊗ |j⟩⟨k| ∈ C^{N²×N²}.    (13)

Taking A_{jk} → 1 leads to the original swap matrix S = Σ_{j,k=1}^{N} |k⟩⟨j| ⊗ |j⟩⟨k| ∈ C^{N²×N²}. The N² eigenvalues of S_A are

    A_{11}, A_{22}, ..., A_{NN}, A_{12}, −A_{12}, ..., A_{j,k>j}, −A_{j,k>j}, ...,    (14)

where k > j denotes an index k greater than j. The maximal absolute eigenvalue of S_A is thus max_{j,k} |A_{jk}| ≡ ‖A‖_max, corresponding to the maximal absolute matrix element of A. The square of the modified swap matrix is

    (S_A)² = Σ_{j,k=1}^{N} |A_{jk}|² |k⟩⟨k| ⊗ |j⟩⟨j| ≤ ‖A‖²_max 1.    (15)

Its eigenvalues are the |A_{jk}|² and the maximal eigenvalue is ‖A‖²_max. This already points to the result that the second-order error of our method naturally scales with ‖A‖²_max, which we will now derive.

Error analysis. In the following, we estimate the error from the second-order term in ∆t in the expansion Eq. (2). The nuclear norm of the operator part of the second-order error is

    ε_{ρ,σ} = ‖ tr₁{S_A ρ ⊗ σ S_A} − (1/2) tr₁{(S_A)² ρ ⊗ σ} − (1/2) tr₁{ρ ⊗ σ (S_A)²} ‖_*.    (16)

In Ref. [11], this error was equal to ε^{qPCA}_{ρ,σ} = ‖ρ − σ‖_* ≤ 2, which is achieved in the present algorithm by choosing A such that A_{jk} = 1 for each j, k. Here, our algorithm coincides with the qPCA method for ρ chosen as the uniform superposition. For general low-rank A, we bound Eq. (16) via the triangle inequality. Taking the nuclear norm of the first term results in

Extended matrices. We define the Hermitian extended matrix à of a complex-valued, not necessarily square matrix A ∈ C^{M×N} as

    Ã = [[0, A], [A†, 0]] ∈ C^{(M+N)×(M+N)}.    (19)

Using block matrix identities for the determinant, we obtain its characteristic polynomial

    χ_Ã(λ) = λ^{|M−N|} det[(λ1 + √(AA†))(λ1 − √(AA†))].    (20)

The eigenvalues of à are either zero or correspond to {±σ_j}, the singular values of A for j = 1, ..., r with an additional sign. Hence, if A has low rank r, then à has low rank 2r. The corresponding eigenvectors are proportional to (u_j, ±v_j) ∈ C^{M+N}, since

    [[∓σ_j 1, A], [A†, ∓σ_j 1]] (u_j, ±v_j)ᵀ = 0,    (21)

where u_j and v_j are the j-th left and right singular vectors of A, respectively. The important point is that the eigenvectors of the extended matrix preserve the correct phase relations between the left and right singular vectors, since (e^{iϑ_j} u_j, ±v_j) is an eigenvector of à only for the correct phase e^{iϑ_j} = 1:

    [[∓σ_j 1, A], [A†, ∓σ_j 1]] (e^{iϑ_j} u_j, ±v_j)ᵀ = (∓σ_j e^{iϑ_j} u_j ± A v_j, e^{iϑ_j} A† u_j − σ_j v_j)ᵀ = (e^{iϑ_j} − 1) σ_j (∓u_j, v_j)ᵀ.    (22)

The right-hand side is equal to zero only for the correct phase e^{iϑ_j} = 1.

Low-rank Procrustes. Let the isometry be W = U V† with U ∈ C^{M×r} and V ∈ C^{N×r}. Assume that M > N, giving orthogonal columns in the full-rank Procrustes problem (r = N). We find for the low-rank (partial) isometry that

    W†W = V U† U V† = V V† = Σ_{j=1}^{r} v_j v_j†.    (23)
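Eq. (23) can be checked numerically. The sketch below (our illustration, not the paper's code) builds a low-rank partial isometry W = U V† and verifies that W†W is the projector V V† onto the column space of V:

```python
# Low-rank Procrustes solution W = U V^dag: W^dag W = V V^dag is a projector,
# and W preserves norms on the subspace spanned by the columns of V.
import numpy as np

M, N, r = 6, 4, 2
rng = np.random.default_rng(5)
U, _ = np.linalg.qr(rng.standard_normal((M, r)))   # orthonormal columns
V, _ = np.linalg.qr(rng.standard_normal((N, r)))
W = U @ V.conj().T                                 # nearest (partial) isometry

P = W.conj().T @ W
assert np.allclose(P, V @ V.conj().T)              # Eq. (23)
assert np.allclose(P @ P, P)                       # idempotent, as a projector must be

# W acts isometrically on vectors inside col(V)
x = V @ rng.standard_normal(r)
assert np.isclose(np.linalg.norm(W @ x), np.linalg.norm(x))
```

For r = N the projector becomes the identity and W is a full isometry, matching the full-rank Procrustes solution.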