Understanding Quantum Information and Computation

Fundamentals of quantum algorithms


Lesson 3: Phase estimation and factoring

Contents

1. The phase-estimation problem


2. The phase-estimation procedure
• Low precision warm-up
• Quantum Fourier transform
• General procedure
3. Integer factorization by phase estimation
• Order finding by phase estimation
• Integer factorization by order finding
1. The phase-estimation problem
Spectral theorem for unitary matrices
The spectral theorem is an important fact in linear algebra. Here is a statement of
a special case of this theorem, for unitary matrices.

Spectral theorem for unitary matrices

Suppose U is an N × N unitary matrix. There exists an orthonormal basis {∣ψ_1⟩, . . . , ∣ψ_N⟩} of vectors along with complex numbers

    λ_1 = e^{2πiθ_1}, . . . , λ_N = e^{2πiθ_N}

such that

    U = ∑_{k=1}^{N} λ_k ∣ψ_k⟩⟨ψ_k∣

Each vector ∣ψ_k⟩ is an eigenvector of U having eigenvalue λ_k:

    U∣ψ_k⟩ = λ_k ∣ψ_k⟩ = e^{2πiθ_k} ∣ψ_k⟩
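As a quick numerical illustration (a minimal sketch, not part of the lesson; NumPy and the example matrix T below are my own choices), we can compute such a decomposition directly:

import numpy as np

# Example unitary: the matrix diag(1, e^{iπ/4}).
T = np.diag([1, np.exp(1j * np.pi / 4)])

# Eigendecomposition; the eigenvalues of T are distinct, so the returned
# eigenvectors form an orthonormal basis.
eigenvalues, eigenvectors = np.linalg.eig(T)

# Each eigenvalue has the form e^{2πiθ} for some θ ∈ [0, 1).
thetas = (np.angle(eigenvalues) / (2 * np.pi)) % 1
print(thetas)  # e.g. [0.    0.125]

# Reassemble U = Σ_k λ_k ∣ψ_k⟩⟨ψ_k∣ and check it matches T.
U = sum(lam * np.outer(v, v.conj()) for lam, v in zip(eigenvalues, eigenvectors.T))
print(np.allclose(U, T))  # True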
Phase estimation problem
In the phase estimation problem, we’re given two things:
1. A description of a unitary quantum circuit on n qubits.
2. An n-qubit quantum state ∣ψ⟩.

We're promised that ∣ψ⟩ is an eigenvector of the unitary operation U described by the circuit, and our goal is to approximate the corresponding eigenvalue.

Phase estimation problem

Input: A unitary quantum circuit for an n-qubit operation U and an n-qubit quantum state ∣ψ⟩
Promise: ∣ψ⟩ is an eigenvector of U
Output: An approximation to the number θ ∈ [0, 1) satisfying

    U∣ψ⟩ = e^{2πiθ} ∣ψ⟩

We can approximate θ by a fraction

    θ ≈ y / 2^m

for y ∈ {0, 1, . . . , 2^m − 1}. This approximation is taken "modulo 1."

[Figure: the eigenvalue e^{2πiθ} drawn on the unit circle in the complex plane, with θ measured as a fraction of a full turn.]
2. The phase-estimation procedure
Warm-up: using the phase kickback
Given a circuit for U, we can create a circuit for a controlled-U operation:

Let’s consider this circuit:

[Circuit: a control qubit initialized to ∣0⟩, with a Hadamard gate before and after the control of a controlled-U operation acting on ∣ψ⟩. The states ∣π_0⟩, ∣π_1⟩, ∣π_2⟩, ∣π_3⟩ label the circuit before the first Hadamard, after it, after the controlled-U, and after the second Hadamard.]

    ∣π_0⟩ = ∣ψ⟩∣0⟩

    ∣π_1⟩ = (1/√2) ∣ψ⟩∣0⟩ + (1/√2) ∣ψ⟩∣1⟩

    ∣π_2⟩ = (1/√2) ∣ψ⟩∣0⟩ + (1/√2) (U∣ψ⟩)∣1⟩ = ∣ψ⟩ ⊗ ((1/√2) ∣0⟩ + (e^{2πiθ}/√2) ∣1⟩)

    ∣π_3⟩ = ∣ψ⟩ ⊗ (((1 + e^{2πiθ})/2) ∣0⟩ + ((1 − e^{2πiθ})/2) ∣1⟩)

Measuring the top qubit yields the outcomes 0 and 1 with these probabilities:

    p_0 = ∣(1 + e^{2πiθ})/2∣^2 = cos^2(πθ)        p_1 = ∣(1 − e^{2πiθ})/2∣^2 = sin^2(πθ)

[Plot: the outcome probabilities p_0 and p_1 as functions of θ ∈ [0, 1].]
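A small NumPy sketch (not from the lesson; the diagonal U below is just an example) that simulates this warm-up circuit and reproduces the probabilities above:

import numpy as np

def kickback_probabilities(theta):
    """Simulate the warm-up circuit for U = diag(1, e^{2πiθ}) with eigenvector ∣1⟩."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    U = np.diag([1, np.exp(2j * np.pi * theta)])
    psi = np.array([0, 1], dtype=complex)      # eigenvector of U with eigenvalue e^{2πiθ}
    control = np.array([1, 0], dtype=complex)  # ∣0⟩

    # Order the registers as control ⊗ target.
    state = np.kron(H @ control, psi)
    CU = np.block([[np.eye(2), np.zeros((2, 2))], [np.zeros((2, 2)), U]])
    state = CU @ state
    state = np.kron(H, np.eye(2)) @ state

    # Probability of measuring 0 or 1 on the control qubit.
    p0 = np.linalg.norm(state[:2]) ** 2
    p1 = np.linalg.norm(state[2:]) ** 2
    return p0, p1

theta = 0.3
print(kickback_probabilities(theta))                           # ≈ (0.35, 0.65)
print(np.cos(np.pi * theta) ** 2, np.sin(np.pi * theta) ** 2)  # same values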
Iterating the unitary operation
How can we learn more about θ? One possibility is to apply the controlled-U operation
twice (or multiple times):

[Circuit: the same warm-up circuit, but with the controlled-U operation applied twice between the two Hadamard gates.]
Performing the controlled-U operation twice has the effect of squaring the eigenvalue:
[Plot: the outcome probabilities as functions of θ ∈ [0, 1] when the controlled-U operation is applied twice; the curves oscillate twice as fast as in the previous plot.]
Two control qubits
Let’s use two control qubits to perform the controlled-U operations — and then we’ll see
how best to proceed.

[Circuit: two control qubits, each initialized to ∣0⟩ and followed by a Hadamard gate, controlling three U operations on ∣ψ⟩: one U controlled by the qubit carrying a_0 and two U operations controlled by the qubit carrying a_1. The states ∣π_1⟩, ∣π_2⟩, ∣π_3⟩ label the circuit after the Hadamard gates, after the first controlled-U, and after all three controlled-U operations.]

    ∣π_1⟩ = ∣ψ⟩ ⊗ (1/2) ∑_{a_0=0}^{1} ∑_{a_1=0}^{1} ∣a_1 a_0⟩

    ∣π_2⟩ = ∣ψ⟩ ⊗ (1/2) ∑_{a_0=0}^{1} ∑_{a_1=0}^{1} e^{2πi a_0 θ} ∣a_1 a_0⟩

    ∣π_3⟩ = ∣ψ⟩ ⊗ (1/2) ∑_{a_0=0}^{1} ∑_{a_1=0}^{1} e^{2πi (2a_1 + a_0) θ} ∣a_1 a_0⟩ = ∣ψ⟩ ⊗ (1/2) ∑_{x=0}^{3} e^{2πi x θ} ∣x⟩
Two control qubits
    (1/2) ∑_{x=0}^{3} e^{2πi x θ} ∣x⟩

What can we learn about θ from this state? Suppose we're promised that θ = y/4 for y ∈ {0, 1, 2, 3}.
Can we figure out which one it is?

Define a two-qubit state for each possibility:

    ∣ϕ_y⟩ = (1/2) ∑_{x=0}^{3} e^{2πi xy/4} ∣x⟩

    ∣ϕ_0⟩ = (1/2)∣0⟩ + (1/2)∣1⟩ + (1/2)∣2⟩ + (1/2)∣3⟩
    ∣ϕ_1⟩ = (1/2)∣0⟩ + (i/2)∣1⟩ − (1/2)∣2⟩ − (i/2)∣3⟩
    ∣ϕ_2⟩ = (1/2)∣0⟩ − (1/2)∣1⟩ + (1/2)∣2⟩ − (1/2)∣3⟩
    ∣ϕ_3⟩ = (1/2)∣0⟩ − (i/2)∣1⟩ − (1/2)∣2⟩ + (i/2)∣3⟩

These vectors are orthonormal — so they can be discriminated perfectly by a projective measurement.

The unitary matrix V whose columns are ∣ϕ_0⟩, ∣ϕ_1⟩, ∣ϕ_2⟩, ∣ϕ_3⟩ is

    V = (1/2) [ 1   1   1   1
                1   i  −1  −i
                1  −1   1  −1
                1  −i  −1   i ]

and it has this action:

    V∣y⟩ = ∣ϕ_y⟩        (for every y ∈ {0, 1, 2, 3})

We can identify y by performing the inverse of V then a standard basis measurement:

    V^†∣ϕ_y⟩ = ∣y⟩      (for every y ∈ {0, 1, 2, 3})
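A short NumPy check (not part of the lesson) that the matrix V built from these columns is unitary and that its inverse maps each ∣ϕ_y⟩ back to ∣y⟩:

import numpy as np

# Build V from its defining action V∣y⟩ = ∣ϕ_y⟩: entry (x, y) is e^{2πi xy/4} / 2.
N = 4
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
V = np.exp(2j * np.pi * x * y / N) / 2

print(np.allclose(V.conj().T @ V, np.eye(N)))    # True: V is unitary
for y_val in range(N):
    phi_y = V[:, y_val]                          # the state ∣ϕ_y⟩
    print(np.round(V.conj().T @ phi_y, 10))      # the standard basis vector ∣y⟩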
Two-qubit phase estimation
    QFT_4 = (1/2) [ 1   1   1   1
                    1   i  −1  −i
                    1  −1   1  −1
                    1  −i  −1   i ]

This matrix is associated with the discrete Fourier transform (for 4 dimensions).
When we think about this matrix as a unitary operation, we call it the quantum Fourier transform.

The complete circuit for learning y ∈ {0, 1, 2, 3} when θ = y/4:

[Circuit: the two-control-qubit circuit from before (Hadamard gates on both control qubits, followed by the three controlled-U operations on ∣ψ⟩), with a QFT_4 block applied to the two control qubits before they are measured.]

The outcome probabilities when we run the circuit, as a function of θ:

[Plot: the probabilities of the four outcomes 0, 1, 2, 3 as functions of θ ∈ [0, 1]; each outcome y is most likely when θ is near y/4.]
Quantum Fourier transform
The quantum Fourier transform is defined for each positive integer N as follows.

    QFT_N = (1/√N) ∑_{x=0}^{N−1} ∑_{y=0}^{N−1} e^{2πi xy/N} ∣x⟩⟨y∣

    QFT_N ∣y⟩ = (1/√N) ∑_{x=0}^{N−1} e^{2πi xy/N} ∣x⟩

Example

    QFT_2 = (1/√2) [ 1   1
                     1  −1 ]  =  H

Example

    QFT_3 = (1/√3) [ 1        1              1
                     1   (−1+i√3)/2    (−1−i√3)/2
                     1   (−1−i√3)/2    (−1+i√3)/2 ]
Example

    QFT_4 = (1/2) [ 1   1   1   1
                    1   i  −1  −i
                    1  −1   1  −1
                    1  −i  −1   i ]
Example

    QFT_8 = (1/(2√2)) [ 1      1          1      1          1      1          1      1
                        1   (1+i)/√2      i   (−1+i)/√2    −1   (−1−i)/√2    −i   (1−i)/√2
                        1      i         −1     −i          1      i         −1     −i
                        1   (−1+i)/√2    −i    (1+i)/√2    −1    (1−i)/√2     i   (−1−i)/√2
                        1     −1          1     −1          1     −1          1     −1
                        1   (−1−i)/√2     i    (1−i)/√2    −1    (1+i)/√2    −i   (−1+i)/√2
                        1     −i         −1      i          1     −i         −1      i
                        1   (1−i)/√2     −i   (−1−i)/√2    −1   (−1+i)/√2     i    (1+i)/√2 ]
Useful shorthand notation:

    ω_N = e^{2πi/N} = cos(2π/N) + i sin(2π/N)

so that

    QFT_N = (1/√N) ∑_{x=0}^{N−1} ∑_{y=0}^{N−1} e^{2πi xy/N} ∣x⟩⟨y∣ = (1/√N) ∑_{x=0}^{N−1} ∑_{y=0}^{N−1} ω_N^{xy} ∣x⟩⟨y∣

[Figure: the points ω_1, ω_2, ω_3, ω_4, ω_8, ω_16 drawn on the unit circle in the complex plane.]
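A minimal sketch (not from the lesson) that builds QFT_N directly from this definition and checks the small examples above:

import numpy as np

def qft_matrix(N):
    """Return the N × N matrix with entries ω_N^{xy} / √N, where ω_N = e^{2πi/N}."""
    omega = np.exp(2j * np.pi / N)
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return omega ** (x * y) / np.sqrt(N)

# QFT_2 is the Hadamard gate, and every QFT_N is unitary.
print(np.allclose(qft_matrix(2), np.array([[1, 1], [1, -1]]) / np.sqrt(2)))  # True
Q8 = qft_matrix(8)
print(np.allclose(Q8.conj().T @ Q8, np.eye(8)))                              # True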
Circuits for the QFT
We can implement QFTN efficiently with a quantum circuit when N is a power of 2.

The implementation makes use of controlled-phase gates:

    [ 1   0   0   0
      0   1   0   0
      0   0   1   0
      0   0   0   e^{iα} ]

(In circuit diagrams the gate is labelled by the angle α.)

The implementation is recursive in nature. As an example, here is the circuit for QFT_32:

[Circuit for QFT_32, as shown in the original figure: it combines a Hadamard gate on one qubit, controlled-phase gates with angles π/2, π/4, π/8, and π/16 between that qubit and the other four qubits, a QFT_16 block on four of the qubits, and swap gates.]
Circuits for the QFT
Cost analysis

Let s_m denote the number of gates we need for m qubits.

• For m = 1, a single Hadamard gate is required.
• For m ≥ 2, these are the gates required:
  s_{m−1} gates for the QFT on m − 1 qubits
  m − 1 controlled-phase gates
  m − 1 swap gates
  1 Hadamard gate

    s_m = 1                    (m = 1)
    s_m = s_{m−1} + 2m − 1     (m ≥ 2)

This is a recurrence relation with a closed-form solution:

    s_m = ∑_{k=1}^{m} (2k − 1) = m^2

Additional remarks:
• The number of swap gates can be reduced.
• Approximations to QFT_{2^m} can be done at lower cost (and lower depth).
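A quick sanity check of this gate count (a sketch, not part of the lesson):

def qft_gate_count(m):
    """Gate count for the recursive QFT_{2^m} construction described above."""
    if m == 1:
        return 1                                  # a single Hadamard gate
    return qft_gate_count(m - 1) + 2 * m - 1      # smaller QFT, plus phases, swaps, Hadamard

# The recurrence matches the closed form s_m = m^2.
print(all(qft_gate_count(m) == m ** 2 for m in range(1, 20)))  # True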
Phase estimation procedure
The general phase-estimation procedure, for any choice of m:


[Circuit: a control register ∣0^m⟩ of m qubits, each passed through a Hadamard gate; the control qubits then control the operations U, U^2, U^4, . . . , U^{2^{m−1}} applied to ∣ψ⟩; finally a QFT_{2^m} block acts on the control register, which is measured to produce the outcome y.]

Warning

If we perform each U^k operation by repeating a controlled-U operation k times, increasing
the number of control qubits m comes at a high cost.
Phase estimation procedure

[Circuit: the same phase-estimation circuit as above, with ∣π⟩ labelling the state immediately before the control register is measured.]

    ∣π⟩ = ∣ψ⟩ ⊗ (1/2^m) ∑_{y=0}^{2^m−1} ∑_{x=0}^{2^m−1} e^{2πi x (θ − y/2^m)} ∣y⟩

The probability of obtaining each outcome y is therefore

    p_y = ∣ (1/2^m) ∑_{x=0}^{2^m−1} e^{2πi x (θ − y/2^m)} ∣^2
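This probability is straightforward to evaluate numerically; here is a short sketch (not from the lesson; θ = 0.17 and m = 3 are arbitrary example values):

import numpy as np

def outcome_probabilities(theta, m):
    """p_y = |(1/2^m) Σ_x e^{2πi x (θ − y/2^m)}|² for y = 0, …, 2^m − 1."""
    M = 2 ** m
    x = np.arange(M)
    return np.array([abs(np.sum(np.exp(2j * np.pi * x * (theta - y / M))) / M) ** 2
                     for y in range(M)])

p = outcome_probabilities(theta=0.17, m=3)
print(np.argmax(p), p.max())   # y = 1 is most likely, since 1/8 is the closest fraction to 0.17
print(p.sum())                 # ≈ 1.0: the outcomes form a probability distribution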
Phase estimation procedure
Best approximations

Suppose y/2^m is the best approximation to θ:

    ∣θ − y/2^m∣ ≤ 2^{−(m+1)}    (distance taken modulo 1)

Then the probability to measure y will be relatively high:

    p_y ≥ 4/π^2 ≈ 0.405

Worse approximations

Suppose there's a better approximation to θ between y/2^m and θ:

    ∣θ − y/2^m∣ ≥ 2^{−m}    (distance taken modulo 1)

Then the probability to measure y will be relatively low:

    p_y ≤ 1/4

[Plots: the probabilities of the outcomes closest to θ = 1/2 as functions of θ, for m = 3 (outcomes 3, 4, 5), m = 4 (outcomes 7, 8, 9), and m = 5 (outcomes 15, 16, 17); the peaks become narrower as m grows.]

To obtain an approximation y/2^m that is very likely to satisfy

    ∣θ − y/2^m∣ < 2^{−m}

we can run the phase estimation procedure using m control qubits several times and take y to be the
mode of the outcomes.

(The eigenvector ∣ψ⟩ is unchanged by the procedure and can be reused as many times as needed.)
3. Integer factorization by phase estimation
The order-finding problem
For each positive integer N we define

ZN = {0, 1, . . . , N − 1}

For instance, Z1 = {0}, Z2 = {0, 1}, Z3 = {0, 1, 2}, and so on.


We can view arithmetic operations on ZN as being defined modulo N.

Example

Let N = 7. We have 3 ⋅ 5 = 15, which leaves a remainder of 1 when divided by 7.

This is often expressed like this:

3 ⋅ 5 ≡ 1 (mod 7)

We can also simply write 3 ⋅ 5 = 1 when it’s clear we’re working in Z7 .

The elements a ∈ ZN that satisfy gcd(a, N) = 1 are special.

    Z*N = {a ∈ ZN ∶ gcd(a, N) = 1}

Example

    Z*21 = {1, 2, 4, 5, 8, 10, 11, 13, 16, 17, 19, 20}
The order-finding problem
Fact

For every a ∈ Z*N there must exist a positive integer k such that a^k = 1. The smallest such
k is called the order of a in ZN.

Example
For N = 21, these are the smallest powers for which this works:

    1^1 = 1     5^6 = 1     11^6 = 1     17^6 = 1
    2^6 = 1     8^2 = 1     13^2 = 1     19^6 = 1
    4^3 = 1     10^6 = 1    16^3 = 1     20^2 = 1

Order-finding problem

Input: Positive integers a and N with gcd(a, N) = 1.
Output: The smallest positive integer r such that a^r ≡ 1 (mod N).

No efficient classical algorithm for this problem is known — an efficient algorithm for order-finding
implies an efficient algorithm for integer factorization.
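For small N the order can be found by brute force; a tiny Python sketch (not from the lesson, and exponentially slower than the quantum approach as N grows):

from math import gcd

def order(a, N):
    """Smallest positive integer r with a**r ≡ 1 (mod N); requires gcd(a, N) == 1."""
    assert gcd(a, N) == 1
    r, power = 1, a % N
    while power != 1:
        power = (power * a) % N
        r += 1
    return r

# Reproduces the N = 21 table above, e.g. order(2, 21) == 6 and order(16, 21) == 3.
print([(a, order(a, 21)) for a in [1, 2, 4, 5, 8, 10, 11, 13, 16, 17, 19, 20]])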
Order-finding by phase-estimation
To connect the order-finding problem to phase estimation, consider a system whose classical
state set is ZN .

For a given element a ∈ ZN , define an operation as follows:

Ma ∣x⟩ = ∣ax⟩ (for each x ∈ ZN )

This is a unitary operation — but only because gcd(a, N) = 1!

Example

Let N = 15 and a = 2. The operation Ma has this action:

M2 ∣0⟩ = ∣0⟩ M2 ∣5⟩ = ∣10⟩ M2 ∣10⟩ = ∣5⟩


M2 ∣1⟩ = ∣2⟩ M2 ∣6⟩ = ∣12⟩ M2 ∣11⟩ = ∣7⟩
M2 ∣2⟩ = ∣4⟩ M2 ∣7⟩ = ∣14⟩ M2 ∣12⟩ = ∣9⟩
M2 ∣3⟩ = ∣6⟩ M2 ∣8⟩ = ∣1⟩ M2 ∣13⟩ = ∣11⟩
M2 ∣4⟩ = ∣8⟩ M2 ∣9⟩ = ∣3⟩ M2 ∣14⟩ = ∣13⟩
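To make the example concrete, here is a small NumPy sketch (not from the lesson) building Ma as a permutation matrix for N = 15 and a = 2 and confirming that it is unitary:

import numpy as np

def multiplication_operator(a, N):
    """Matrix of Ma acting on ∣x⟩ for x in Z_N: Ma∣x⟩ = ∣ax mod N⟩ (a permutation, hence unitary, when gcd(a, N) = 1)."""
    M = np.zeros((N, N))
    for x in range(N):
        M[(a * x) % N, x] = 1
    return M

M2 = multiplication_operator(2, 15)
print(np.allclose(M2.T @ M2, np.eye(15)))   # True: a permutation matrix, hence unitary
print(M2 @ np.eye(15)[:, 7])                # the standard basis vector for ∣14⟩, since 2·7 = 14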
Order-finding by phase-estimation
Main idea

The eigenvalues of Ma are closely connected with the order of a.

By approximating certain eigenvalues with enough precision using phase estimation,
we'll be able to compute the order.
Eigenvectors and eigenvalues
This is an eigenvector of Ma:

    ∣ψ_0⟩ = (∣1⟩ + ∣a⟩ + ⋯ + ∣a^{r−1}⟩) / √r

The associated eigenvalue is 1:

    Ma ∣ψ_0⟩ = (∣a⟩ + ∣a^2⟩ + ⋯ + ∣a^r⟩) / √r = (∣a⟩ + ⋯ + ∣a^{r−1}⟩ + ∣1⟩) / √r = ∣ψ_0⟩

To identify more eigenvectors, first recall that

    ω_r = e^{2πi/r}

This is another eigenvector of Ma:

    ∣ψ_1⟩ = (∣1⟩ + ω_r^{−1}∣a⟩ + ⋯ + ω_r^{−(r−1)}∣a^{r−1}⟩) / √r
Eigenvectors and eigenvalues
    Ma ∣ψ_1⟩ = (∣a⟩ + ω_r^{−1}∣a^2⟩ + ⋯ + ω_r^{−(r−1)}∣a^r⟩) / √r
             = (ω_r∣1⟩ + ∣a⟩ + ω_r^{−1}∣a^2⟩ + ⋯ + ω_r^{−(r−2)}∣a^{r−1}⟩) / √r
             = ω_r (∣1⟩ + ω_r^{−1}∣a⟩ + ω_r^{−2}∣a^2⟩ + ⋯ + ω_r^{−(r−1)}∣a^{r−1}⟩) / √r
             = ω_r ∣ψ_1⟩

Additional eigenvectors can be identified by similar reasoning…

For each j ∈ {0, . . . , r − 1}, this is an eigenvector of Ma:

    ∣ψ_j⟩ = (∣1⟩ + ω_r^{−j}∣a⟩ + ⋯ + ω_r^{−j(r−1)}∣a^{r−1}⟩) / √r

    Ma ∣ψ_j⟩ = ω_r^{j} ∣ψ_j⟩
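A quick numerical confirmation of these eigenvector/eigenvalue pairs (a sketch, not from the lesson, reusing the order and multiplication_operator helpers defined in the sketches above):

import numpy as np

a, N = 2, 15
r = order(a, N)                          # r = 4, since 2^4 = 16 ≡ 1 (mod 15)
Ma = multiplication_operator(a, N)
omega = np.exp(2j * np.pi / r)

for j in range(r):
    # ∣ψ_j⟩ = (1/√r) Σ_k ω_r^{−jk} ∣a^k mod N⟩
    psi_j = np.zeros(N, dtype=complex)
    for k in range(r):
        psi_j[pow(a, k, N)] += omega ** (-j * k) / np.sqrt(r)
    print(np.allclose(Ma @ psi_j, omega ** j * psi_j))   # True for every j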
A convenient eigenvector
    ∣ψ_1⟩ = (∣1⟩ + ω_r^{−1}∣a⟩ + ⋯ + ω_r^{−(r−1)}∣a^{r−1}⟩) / √r

    Ma ∣ψ_1⟩ = ω_r ∣ψ_1⟩ = e^{2πi/r} ∣ψ_1⟩

Suppose we're given ∣ψ_1⟩ as a quantum state. We can attempt to learn r as follows:

1. Perform phase estimation on the state ∣ψ_1⟩ and a quantum circuit implementing Ma.
   The outcome is an approximation y/2^m ≈ 1/r.
2. Output 2^m/y rounded to the nearest integer:

       round(2^m/y) = ⌊2^m/y + 1/2⌋

How much precision do we need to correctly determine r?

    ∣y/2^m − 1/r∣ ≤ 1/(2N^2)   ⇒   round(2^m/y) = r

Choosing m = 2 lg(N) + 1 in phase estimation makes such an approximation likely.


A random eigenvector
    ∣ψ_j⟩ = (∣1⟩ + ω_r^{−j}∣a⟩ + ⋯ + ω_r^{−j(r−1)}∣a^{r−1}⟩) / √r

    Ma ∣ψ_j⟩ = ω_r^{j} ∣ψ_j⟩ = e^{2πi j/r} ∣ψ_j⟩

Suppose we're given ∣ψ_j⟩ as a quantum state for a random choice of j ∈ {0, . . . , r − 1}. We can
attempt to learn j/r as follows:

1. Perform phase estimation on the state ∣ψ_j⟩ and a quantum circuit implementing Ma.
   The outcome is an approximation y/2^m ≈ j/r.
2. Among the fractions u/v in lowest terms satisfying u, v ∈ {0, . . . , N−1} and v ≠ 0, output
   the one closest to y/2^m. This can be done efficiently using the continued fraction algorithm.

How much precision do we need to correctly determine u/v = j/r?

    ∣y/2^m − j/r∣ ≤ 1/(2N^2)   ⇒   u/v = j/r

Choosing m = 2 lg(N) + 1 for phase estimation makes such an approximation likely.

We might get unlucky: j could have common factors with r.
If we can draw independent samples, with j ∈ {0, . . . , r − 1} chosen uniformly each time, we can recover
r with high probability by computing the least common multiple of the observed values of v.
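The classical post-processing in step 2 can be sketched with Python's Fraction type, whose limit_denominator method performs this continued-fraction search; the modulus N and the sampled outcomes below are made-up values used only for illustration:

from fractions import Fraction
from math import lcm

N, m = 21, 2 * 5 + 1                 # example modulus; m = 2·lg(N) + 1 with lg(N) rounded up

def closest_fraction(y, m, N):
    """Fraction u/v with v < N closest to y / 2^m, found via continued fractions."""
    return Fraction(y, 2 ** m).limit_denominator(N - 1)

# Suppose phase estimation returned these outcomes y (each ≈ j·2^m/r for a random j).
samples = [683, 1365, 1024]          # ≈ 1/3, 2/3, 1/2 of 2^11 (made-up values)
fractions = [closest_fraction(y, m, N) for y in samples]
print(fractions)                      # [Fraction(1, 3), Fraction(2, 3), Fraction(1, 2)]

# The order r is recovered as the least common multiple of the denominators.
print(lcm(*(f.denominator for f in fractions)))   # 6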
Implementation

To find the order of a ∈ ZN , we apply phase estimation to the operation Ma . Let’s measure the cost
as a function of n = lg(N).


[Circuit: the phase-estimation circuit with the controlled operations Ma, Ma^2, Ma^4, . . . , Ma^{2^{m−1}} applied to ∣ψ⟩, followed by a QFT_{2^m} block on the m control qubits and a measurement.]

Cost for each controlled unitary

Using the techniques from Lesson 6, we can implement Ma at cost O(n^2).

We need to implement Ma^k for each k = 1, 2, 4, 8, . . . , 2^{m−1}. Each Ma^k can be implemented
as follows:
• Compute b = a^k (mod N).
• Use a circuit for Mb.
The cost to implement Mb = Ma^k is O(n^2).
Cost for phase estimation

• m Hadamard gates: cost O(n)
• m controlled unitary operations: cost O(n^3)
• Quantum Fourier transform: cost O(n^2)

Total cost: O(n^3)
Implementation

[Circuit: the same phase-estimation circuit, but with the bottom register initialized to the state ∣1⟩ rather than an eigenvector ∣ψ⟩.]

Remaining issue: getting one of the eigenvectors ∣ψ_0⟩, . . . , ∣ψ_{r−1}⟩.

Solution: replace the eigenvector ∣ψ⟩ with the state ∣1⟩.

This works because of the following equation:

    ∣1⟩ = (∣ψ_0⟩ + ⋯ + ∣ψ_{r−1}⟩) / √r

The outcome is the same as if we chose j ∈ {0, 1, . . . , r − 1} uniformly and used ∣ψ⟩ = ∣ψ_j⟩.
Factoring through order-finding
The following method succeeds in finding a factor of N with probability at least 1/2, provided N is
odd and not a prime power.
Factor-finding method

1. Choose a ∈ {2, . . . , N − 1} at random.


2. Compute d = gcd(a, N). If d ≥ 2 then output d and stop.
3. Compute the order r of a modulo N.
4. If r is even, then compute d = gcd(a^{r/2} − 1, N). If d ≥ 2, output d and stop.
5. If this step is reached, the method has failed.

Main idea
1. By the definition of the order, we know that a^r ≡ 1 (mod N).

       a^r ≡ 1 (mod N)   ⇔   N divides a^r − 1

2. If r is even, then

       a^r − 1 = (a^{r/2} + 1)(a^{r/2} − 1)

   Each prime dividing N must therefore divide either (a^{r/2} + 1) or (a^{r/2} − 1).
   For a random a, at least one of the prime factors of N is likely to divide (a^{r/2} − 1).
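Putting the classical pieces together, here is a small Python sketch of this method (not from the lesson; it substitutes the brute-force order helper defined earlier for the quantum order-finding step, so it only works for small N):

import random
from math import gcd

def find_factor(N, attempts=20):
    """Try to find a nontrivial factor of an odd N that is not a prime power."""
    for _ in range(attempts):
        a = random.randrange(2, N)
        d = gcd(a, N)
        if d >= 2:
            return d                         # lucky: a already shares a factor with N
        r = order(a, N)                      # quantum order finding would replace this call
        if r % 2 == 0:
            d = gcd(pow(a, r // 2, N) - 1, N)
            if d >= 2:
                return d
    return None                              # every attempt failed

print(find_factor(21))   # 3 or 7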
