An Introduction To Lattices and Their Applications in Communications
Frank R. Kschischang
Chen Feng
University of Toronto, Canada
1 Fundamentals
2 Packing, Covering, Quantization, Modulation
5 Communications Applications
Notation
Euclidean Space
Lattices are discrete subgroups (under vector addition) of finite-dimensional Euclidean spaces such as Rn.

In Rn we have
• an inner product: (x, y) ≜ x1 y1 + · · · + xn yn
• a norm: ‖x‖ ≜ √(x, x)
• a metric: d(x, y) ≜ ‖x − y‖
• Vectors x and y are orthogonal if (x, y) = 0.
• A ball of radius r centered at the origin in Rn is the set Br = {x ∈ Rn : ‖x‖ ≤ r}.

[Figure: a point x in the plane with its norm ‖x‖, a second point y at distance d(x, y), and the unit vectors (1, 0) and (0, 1).]
Definition
Given m linearly independent (row) vectors g1, . . . , gm ∈ Rn, the
lattice Λ generated by them is defined as the set of all integer linear
combinations of the gi ’s:
Λ(g1, . . . , gm) ≜ {c1 g1 + c2 g2 + · · · + cm gm : c1 ∈ Z, c2 ∈ Z, . . . , cm ∈ Z}.
• g1, g2, . . . , gm: the generators of Λ
• n: the dimension of Λ
• m: the rank of Λ
• We will focus only on full-rank lattices (m = n) in this tutorial
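As a concrete illustration (a small sketch, not part of the slides; it assumes numpy), the following code enumerates the lattice points of Λ(g1, g2) with small integer coefficients for the running example g1 = (1/2, 2/3), g2 = (1/2, −2/3):

import numpy as np

# Generators of the running example lattice (rows of a generator matrix).
G = np.array([[0.5, 2/3],
              [0.5, -2/3]])

# All integer combinations c1*g1 + c2*g2 with |ci| <= 3 (a finite window of Λ).
coeffs = [(c1, c2) for c1 in range(-3, 4) for c2 in range(-3, 4)]
points = np.array([np.array(c) @ G for c in coeffs])
print(points.shape)      # (49, 2): 49 lattice points in this window
print(points[:3])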
Example: Λ((1/2, 2/3), (1/2, −2/3))
[Figure: the lattice points in the plane, with g1, g2, the origin 0, and the points −3g2 and 3g1 + g2 labelled.]
Example: Λ((3/2, 2/3), (1, 0))
[Figure: the lattice points in the plane, with g1, g2, the origin 0, and the points −3g2 and 3g1 + g2 labelled.]
Generator Matrix
Definition
A generator matrix GΛ for a lattice Λ ⊆ Rn is a matrix whose rows
generate Λ:
GΛ = [g1; g2; . . . ; gn] (the matrix whose i-th row is gi) ∈ Rn×n, and Λ = {cGΛ : c ∈ Zn}.

Example:
G1 = [1/2, 2/3; 1/2, −2/3] and G2 = [3/2, 2/3; 1, 0]
generate the previous examples.
By definition, a generator matrix is full rank.
When do G and G′ Generate the Same Lattice?

Theorem
Two full-rank matrices G and G′ generate the same lattice if and only if G′ = UG for some unimodular matrix U, i.e., an integer matrix with det(U) = ±1.
Proof
For “⇒”: Assume that G and G′ generate the same lattice. Then there are integer matrices V and V′ such that
G′ = VG and G = V′G′.
Hence,
G′ = VV′G′ = (VV′)G′,
from which it follows (since G′ is full rank) that VV′ is the identity matrix. Since det(V) and det(V′) are integers and the determinant function is multiplicative, we have det(V) det(V′) = 1. Thus det(V) is a unit in Z, and so V is unimodular.

For “⇐”: Assume that G′ = UG for a unimodular matrix U, let Λ be generated by G and let Λ′ be generated by G′. An element λ′ ∈ Λ′ can be written, for some c ∈ Zn, as λ′ = cG′ = cUG = c′G ∈ Λ, which shows, since c′ = cU ∈ Zn, that Λ′ ⊆ Λ. On the other hand, we have G = U−1G′, where U−1 is also unimodular, and a similar argument shows that Λ ⊆ Λ′.
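A small numerical sketch (illustrative only, assuming numpy) of this criterion: it checks whether G′G−1 is an integer matrix with determinant ±1, up to floating-point tolerance.

import numpy as np

def same_lattice(G, Gp, tol=1e-9):
    """Check whether G and Gp generate the same lattice: Gp = U G with U unimodular."""
    U = Gp @ np.linalg.inv(G)
    is_integer = np.allclose(U, np.round(U), atol=tol)
    det_ok = np.isclose(abs(np.linalg.det(U)), 1.0, atol=tol)
    return is_integer and det_ok

G1 = np.array([[0.5, 2/3], [0.5, -2/3]])
U = np.array([[1, 0], [3, 1]])            # unimodular: integer entries, det = 1
print(same_lattice(G1, U @ G1))           # True
print(same_lattice(G1, 2 * G1))           # False: 2*G1 generates a proper sublattice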
Lattice Determinant
Definition
The determinant, det(Λ), of a full-rank lattice Λ is given as
det(Λ) = |det(GΛ)|,
where GΛ is any generator matrix for Λ. (This is well defined: any two generator matrices differ by a unimodular matrix, whose determinant is ±1.)
Fundamental Region
Definition
A set R ⊆ Rn is called a fundamental region of a lattice Λ ⊆ Rn if
the following conditions are satisfied:
1. Rn = ∪λ∈Λ (λ + R).
2. For every λ1, λ2 ∈ Λ with λ1 ≠ λ2, (λ1 + R) ∩ (λ2 + R) = ∅.

In other words, the translates of a fundamental region R by lattice points form a disjoint covering (or tiling) of Rn.

• A fundamental region R cannot contain two points x1 and x2 whose difference is a nonzero lattice point, since if x1 − x2 = λ ∈ Λ, λ ≠ 0, for x1, x2 ∈ R, we would have x1 ∈ 0 + R and x1 = x2 + λ ∈ λ + R, contradicting Property 2.
• Algebraically, the points of a fundamental region form a complete system of coset representatives of the cosets of Λ in Rn.
Fundamental Regions for Λ((1/2, 2/3), (1/2, −2/3))
Fundamental Parallelepiped
Definition
The fundamental parallelepiped of a generating set
g1, . . . , gn ∈ Rn for a lattice Λ is the set
P(g1, . . . , gn) ≜ {a1 g1 + · · · + an gn : (a1, . . . , an) ∈ [0, 1)n}.

[Figure: the fundamental parallelepipeds P((1/2, 2/3), (1/2, −2/3)) and P((3/2, 2/3), (1, 0)), each anchored at the origin and spanned by g1 and g2.]
Their Volume = det(Λ)
Proposition
Given a lattice Λ, the fundamental parallelepiped of every generating
set for Λ has the same volume, namely det(Λ).
All Fundamental Regions Have the Same Volume
Proposition
More generally, every fundamental region R of Λ has the same
volume, namely det(Λ).
Voronoi Region
Definition
Given a lattice Λ ⊆ Rn and a point λ ∈ Λ, the Voronoi region of λ is defined as
V(λ) ≜ {x ∈ Rn : ∀λ′ ∈ Λ, ‖x − λ‖ ≤ ‖x − λ′‖},
the set of points at least as close to λ as to any other lattice point. We write V(Λ) ≜ V(0) for the Voronoi region of the origin.
Nearest-Neighbor Quantizer
Definition
A nearest-neighbor quantizer QΛ(NN) : Rn → Λ associated with a lattice Λ maps a vector to the closest lattice point:
QΛ(NN)(x) = arg minλ∈Λ ‖x − λ‖,
where ties are broken systematically.
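A brute-force sketch (illustrative only, assuming numpy; the window parameter is a choice made here): it rounds the coordinates of x in the generator basis and searches nearby integer coefficient vectors. This is exact for Zn and works for the well-conditioned 2-D examples above, but it is not a general closest-vector solver, which is a hard problem in high dimensions.

import itertools
import numpy as np

def nn_quantize(x, G, window=1):
    """Approximate nearest lattice point to x in the lattice with generator matrix G (rows)."""
    c0 = np.round(x @ np.linalg.inv(G))            # rounding in the generator basis
    best, best_dist = None, np.inf
    for delta in itertools.product(range(-window, window + 1), repeat=G.shape[0]):
        lam = (c0 + np.array(delta)) @ G           # candidate lattice point
        d = np.linalg.norm(x - lam)
        if d < best_dist:
            best, best_dist = lam, d
    return best

G = np.array([[0.5, 2/3], [0.5, -2/3]])
print(nn_quantize(np.array([0.4, 0.1]), G))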
Minimum Distance
Definition
The minimum distance of a lattice Λ ⊆ Rn is defined as
dmin(Λ) = min λ∈Λ∗ ‖λ‖,
where Λ∗ ≜ Λ \ {0} denotes the set of nonzero lattice points.

Fact: dmin(Λ) > 0. (Proof: exercise.)

[Figure: a ball of radius dmin(Λ) centered at a lattice point contains no other lattice point.]
Successive Minima
Recall that Br denotes the n-dimensional ball of radius r centered at the origin: Br ≜ {x ∈ Rn : ‖x‖ ≤ r}.

Definition
For a lattice Λ ⊂ Rn, let the i-th successive minimum be
λi(Λ) ≜ min{r > 0 : dim(span(Λ ∩ Br)) ≥ i},   i = 1, . . . , n,
the smallest radius r such that Br contains i linearly independent lattice vectors. In particular, λ1(Λ) = dmin(Λ).
Dual Lattice
Definition
The dual of a full-rank lattice Λ ⊂ Rn is the set
Λ⊥ = {x ∈ Rn : ∀λ ∈ Λ, (x, λ) ∈ Z}.

Theorem
det(Λ) · det(Λ⊥) = 1.

Proof: follows from the fact that (GΛ−1)T is a generator matrix for Λ⊥ and det(GΛ−1) = (det GΛ)−1.

Remark: the generator matrix (GΛ−1)T for Λ⊥ serves as a parity-check matrix for Λ: a point x ∈ Rn belongs to Λ if and only if x(GΛ⊥)T = xGΛ−1 ∈ Zn.
Nested Lattices
Definition
A sublattice Λ′ of Λ is a subset of Λ, which itself is a lattice. A pair of lattices (Λ, Λ′) is called nested if Λ′ is a sublattice of Λ.
Nested Lattices: Nesting Matrix
For a nested pair Λ′ ⊆ Λ, any generator matrix of Λ′ can be written as
GΛ′ = JGΛ,
where GΛ is a generator matrix of Λ and J is a nonsingular integer matrix, called the nesting matrix.
Nested Lattices: Diagonal Nesting
Theorem
Let Λ′ ⊂ Λ be a nested lattice pair. Then there exist generator matrices GΛ and GΛ′ for Λ and Λ′, respectively, such that
GΛ′ = JGΛ with J = diag(c1, c2, . . . , cn)
for some positive integers c1, . . . , cn.
Smith Normal Form
The Smith normal form is a canonical form for matrices with entries
in a principal ideal domain (PID).
Definition
Let A be a nonzero m × n matrix over a PID. There exist invertible m × m and n × n matrices P, Q such that the product PAQ = D is a diagonal matrix D = diag(d1, . . . , dr, 0, . . . , 0) with d1 | d2 | · · · | dr. The matrix D is called the Smith normal form of A.
Diagonal Nesting Follows from Smith Normal Form
Write the Smith normal form of the nesting matrix J as D = UJV, with U and V unimodular and D diagonal, or, equivalently,
J = U−1DV−1.
Thus
GΛ′ = JGΛ = U−1DV−1GΛ,
or
(UGΛ′) = D(V−1GΛ).
Since U and V−1 are unimodular, UGΛ′ and V−1GΛ are generator matrices for Λ′ and Λ, respectively, and they are related by the diagonal integer matrix D; this proves the diagonal-nesting theorem.
Nested Lattices: Labels and Enumeration
With a diagonal nesting in which GΛ′ = JGΛ with J = diag(c1, c2, . . . , cn), we get a useful labelling scheme for lattice vectors in the fundamental parallelepiped of Λ′: each such point is of the form
(a1, a2, . . . , an)GΛ
where
0 ≤ a1 < c1, 0 ≤ a2 < c2, . . . , 0 ≤ an < cn.
Note that there are det(J) = c1 c2 · · · cn labelled points.
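A short sketch (illustrative only, assuming numpy; the toy nesting J = diag(2, 3) of Z2 is an example choice) enumerating these labelled coset representatives:

import itertools
import numpy as np

G_fine = np.eye(2)                        # fine lattice: Z^2
J = np.diag([2, 3])                       # diagonal nesting matrix
G_coarse = J @ G_fine                     # coarse lattice Λ' = 2Z x 3Z
print(int(round(np.linalg.det(G_coarse))))   # 6 = det(J) labelled points

labels = list(itertools.product(range(2), range(3)))   # (a1, a2) with 0 <= ai < ci
for a in labels:
    print(a, np.array(a) @ G_fine)        # label and the corresponding lattice point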
Nested Lattices: Linear Labelling
If we periodically extend the labels to all the lattice vectors, then the labels are linear over Zc1 × Zc2 × · · · × Zcn, i.e., the label of λ1 + λ2 is obtained from the labels of λ1 and λ2 by componentwise addition, with the i-th components added modulo ci.
Complex Lattices
The theory of lattices extends to Cn, where we have many choices
for what is meant by “integer.” Generally, we take the ring R of “integers” to be a subring of C that forms a principal ideal domain.
Examples:
• R = {a + bi : a, b ∈ Z} (Gaussian integers)
• R = {a + be2πi/3 : a, b ∈ Z} (Eisenstein integers)
Definition
Given m linearly independent (row) vectors g1, . . . , gm ∈ Cn, the
complex lattice Λ generated by them is defined as the set of all
R-linear combinations of the gi ’s:
Λ(g1, . . . , gm) ≜ {c1 g1 + c2 g2 + · · · + cm gm : c1 ∈ R, c2 ∈ R, . . . , cm ∈ R}.
Balls in High Dimensions
Sphere Packing
Definition
A lattice Λ ⊂ Rn is said to pack Br if
λ1, λ2 ∈ Λ, λ1 ≠ λ2 ⇒ (λ1 + Br) ∩ (λ2 + Br) = ∅.

The packing radius is rpack(Λ) ≜ dmin(Λ)/2; Λ packs Br for every r < rpack(Λ).
Effective Radius
Definition
The effective radius of a lattice Λ is the radius of a ball of volume
det(Λ):
reff(Λ) = (det(Λ)/Vn)1/n,
where Vn denotes the volume of a ball of unit radius in Rn.

[Figure: a Voronoi cell with the packing ball of radius rpack and the ball of radius reff, which has the same volume as the cell.]
Packing Efficiency
Definition
The packing efficiency of a lattice Λ is defined as
ρpack(Λ) = rpack(Λ)/reff(Λ).

• Clearly, ρpack(Λ) ≤ 1.
• The packing density (the fraction of space covered by the packing balls) equals ρpack(Λ)n.
Packing Efficiency (Cont’d)
Sphere Covering
Definition
A lattice Λ ⊂ Rn is said to cover Rn with Br if
∪λ∈Λ (λ + Br) = Rn.

The covering radius rcov(Λ) is the smallest r such that Λ covers Rn with Br.
Covering Efficiency
It is easy to see that rcov(Λ) is the outer radius of the Voronoi region V(Λ), i.e., the radius of the smallest (closed) ball containing V(Λ).

[Figure: a Voronoi cell together with the balls of radius reff and rcov.]

Definition
The covering efficiency of a lattice Λ is
ρcov(Λ) = rcov(Λ)/reff(Λ).
• Clearly, ρcov(Λ) ≥ 1.
• ρcov(Λ) is invariant to scaling.
Covering Efficiency (Cont’d)
Quantization
Definition
A lattice quantizer is a map QΛ : Rn → Λ for some lattice Λ ⊂ Rn.

• If we use the nearest-neighbor quantizer QΛ(NN), then the quantization error xe ≜ x − QΛ(NN)(x) ∈ V(Λ).
• Suppose that xe is uniformly distributed over the Voronoi region V(Λ); then the second moment per dimension is given as
σ2(Λ) = (1/n) E[‖xe‖2] = (1/n) (1/det(Λ)) ∫V(Λ) ‖xe‖2 dxe.
• Clearly, the smaller σ2(Λ) is, the better the quantizer.
Quantization: Figure of Merit
Definition
A figure of merit of the nearest-neighbor lattice quantizer is the
normalized second moment, given as
G(Λ) = σ2(Λ)/det(Λ)2/n.
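A quick Monte Carlo sketch (illustrative only, assuming numpy) for the cubic lattice Zn, whose Voronoi region is the unit cube, so the exact values are σ2(Zn) = G(Zn) = 1/12:

import numpy as np

rng = np.random.default_rng(0)
n, num_samples = 4, 200_000

# For Z^n, the NN quantizer is componentwise rounding and V(Λ) is the unit cube.
x = rng.uniform(-10, 10, size=(num_samples, n))
xe = x - np.round(x)                            # quantization error in [-1/2, 1/2)^n
sigma2 = np.mean(np.sum(xe**2, axis=1)) / n     # second moment per dimension
G = sigma2 / 1.0**(2 / n)                       # det(Z^n) = 1, so G(Z^n) = σ²(Z^n)
print(sigma2, G)                                # both ≈ 1/12 ≈ 0.0833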
Quantization: Figure of Merit (Cont’d)
Modulation: AWGN channel
[Block diagram: input x, noise z added, output y = x + z.]

An additive-noise channel is given by the input/output relation
y = x + z,
where z is a noise vector independent of the input x.
Modulation: Error Probability
Suppose that (part of) a lattice Λ is used as a codebook; then the transmitted signal x ∈ Λ.
Since the noise pdf is monotonically decreasing in the norm ‖z‖, given a received vector y it is natural to decode x as the closest lattice point:
x̂ = arg minλ∈Λ ‖y − λ‖ = QΛ(NN)(y).
The error probability is thus
Pe(Λ) = Pr[x̂ ≠ x] = Pr[QΛ(NN)(z) ≠ 0] = Pr[z ∉ V(Λ)].

Modulation: Figure of Merit
A figure of merit of a lattice used for modulation over this channel is
µ(Λ, Pe) = det(Λ)2/n / σ2(Pe),
where σ2(Pe) denotes the largest noise variance per dimension for which the error probability does not exceed Pe.
Modulation: Figure of Merit (Cont’d)
Fun Facts about Lattices (Lifted from the Pages of [Zamir,2014])
• The seventeenth century astronomer Johannes Kepler
conjectured that the face-centered cubic lattice forms the best
sphere-packing in three dimensions. While Gauss showed that
no other lattice packing is better, the perhaps harder part—of
excluding non-lattice packings—remained open until a full
(computer-aided) proof was given in 1998 by Hales.
• The optimal sphere packings in 2 and 3 dimensions are lattice
packings—could this be the case in higher dimensions as well?
This remains a mystery.
• The early twentieth century mathematician Hermann
Minkowski used lattices to relate n-dimensional geometry with
number theory—an area he called “the geometry of numbers.”
The Minkowski-Hlawka theorem (conjectured by Minkowski
and proved by Hlawka in 1943) will play the role of Shannon’s
random coding technique in Part 4.
• Some of the stronger (post-quantum) public-key algorithms
today use lattice-based cryptography.
Fields
Definition
Recall that a field is a triple (F, +, ·) with the properties that
1 (F, +) forms an abelian group with identity 0,
2 (F∗, ·) forms an abelian group with identity 1,
3 for all x, y, z ∈ F, x · (y + z ) = (x · y ) + (x · z ),
i.e., multiplication ‘·’ distributes over addition ‘+’.
Roughly speaking, fields enjoy all the usual familiar arithmetic
properties of real numbers, including addition, subtraction,
multiplication and division (by nonzero elements), the product of
nonzero elements is nonzero, etc.
• R and C form (infinite) fields under real and complex
arithmetic, respectively.
• Z does not form a field (since most elements don’t have
multiplicative inverses).
Finite Fields
Definition
A field with a finite number of elements is called a finite field.
• Fp = {0, 1, . . . , p − 1} forms a field under integer
arithmetic modulo p, where p is a prime.
• Zm = {0, 1, . . . , m − 1} does not form a field under integer arithmetic modulo m when m is composite, since if m = ab with 1 < a, b < m, then ab ≡ 0 (mod m), yet a and b are nonzero elements of Zm. Such “zero divisors” cannot be present in a field.
The following facts are well known:
• A q-element finite field Fq exists if and only if q = pm for a
prime integer p and a positive integer m. Thus there are finite
fields of order 2, 3, 4, 5, 7, 8, 9, 11, 13, 16, . . ., but none of
order 6, 10, 12, 14, 15, . . ..
• Any two finite fields of the same order are isomorphic; thus we refer to the finite field Fq of order q.
The Vector Space Fqn
The set of n-tuples
Fqn = {(x1, . . . , xn) : x1 ∈ Fq, . . . , xn ∈ Fq}
forms a vector space over Fq with
1. vector addition defined componentwise,
2. scalar multiplication defined, for any scalar a ∈ Fq and any x ∈ Fqn, via ax = (ax1, . . . , axn).

• Any subset C ⊆ Fqn forming a vector space under the operations inherited from Fqn is called a subspace of Fqn.
• A set of vectors {v1, . . . , vk} ⊆ Fqn is called linearly independent if the only solution to the equation 0 = a1v1 + · · · + akvk in unknown scalars a1, . . . , ak is the trivial one (with a1 = · · · = ak = 0).
Dimension
Linear Block Codes over Fq
Definition
An (n, k) linear code over Fq is a k-dimensional subspace of Fqn.

Every (n, k) linear code C has a generator matrix G ∈ Fqk×n whose rows form a basis for C; by row reduction (and possibly a column permutation), G can be brought to the systematic form
GRREF = [Ik P].
The dual code of C is
C⊥ = {v ∈ Fqn : ∀c ∈ C, (v, c) = 0}.
• The dual of an (n, k) linear code is an (n, n − k) linear code.
• A generator matrix H for C⊥ is called a parity-check matrix for C and must satisfy GHT = 0k×(n−k) for every generator matrix G of C.
• Equivalently, we may write
C = {c ∈ Fqn : cHT = 0},
displaying C as the k-dimensional solution space of a system of n − k homogeneous equations in n unknowns.
Computing H from G
If G = [Ik P], then H = [−PT In−k], since GHT = −P + P = 0.
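A small sketch (illustrative only, assuming numpy) over F2 for the systematic (7, 4) Hamming code, verifying GHT = 0 (mod 2):

import numpy as np

# Systematic generator matrix G = [I_k | P] of the (7,4) binary Hamming code.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])

# Parity-check matrix H = [-P^T | I_{n-k}]; over F2, -P^T = P^T.
H = np.hstack([P.T % 2, np.eye(3, dtype=int)])

print((G @ H.T) % 2)      # all-zero 4x3 matrix, confirming G H^T = 0 over F2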
Error-Correcting Capability under Additive Errors
Let C be a linear (n, k) code over Fq. Let E ⊂ Fqn be a general set of error patterns, and suppose that when c ∈ C is sent, an adversary may add any vector e ∈ E, so that y = c + e is received.

[Block diagram: input c, error e added, output y = c + e.]

c1 + e1 = c2 + e2 ⇔ c1 − c2 = e2 − e1

Theorem
Let E ⊂ Fqn be a set of error patterns and let ∆E = {e1 − e2 : e1, e2 ∈ E}. An adversary restricted to adding patterns of E to codewords of a code C cannot cause confusion at the receiver if and only if ∆E ∩ C∗ = ∅.
Example: if E consists of the all-zero pattern and all patterns of
Hamming weight one, then ∆ E consists of the all-zero pattern and
all patterns of Hamming weight one or two. Thus C is
single-error-correcting if and only if it contains no nonzero
codewords of weight smaller than 3.
Linear Codes: A Quick Summary
From Codes to Lattices: Construction A
Definition
The modulo-p reduction of an integer vector v = (v1, . . . , vn) ∈ Zn is the vector
v mod p ≜ (v1 mod p, . . . , vn mod p) ∈ Fpn.
Given a linear code C over Fp, the Construction-A lattice is the set of all integer vectors that reduce, modulo p, to codewords of C:
ΛC = {x ∈ Zn : x mod p ∈ C}.

Nested Construction A
If C2 ⊆ C1 are nested linear codes over Fp, then the corresponding Construction-A lattices are nested, ΛC2 ⊆ ΛC1, where
ΛCi = {x ∈ Zn : x mod p ∈ Ci}.
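A tiny sketch (illustrative only, assuming numpy; p = 3 and the length-2 code generated by (1, 1) are example choices) of Construction A, listing the lattice points of ΛC that fall in [0, p)2:

import itertools
import numpy as np

p = 3
codewords = {tuple((c * np.array([1, 1])) % p) for c in range(p)}   # C = {(0,0),(1,1),(2,2)}

def in_lattice(x):
    """Membership test for the Construction-A lattice: x mod p must be a codeword."""
    return tuple(np.mod(x, p)) in codewords

points = [x for x in itertools.product(range(p), repeat=2) if in_lattice(np.array(x))]
print(points)     # [(0, 0), (1, 1), (2, 2)]: one representative per codeword in [0, p)^2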
Properties
Other Constructions
Balanced Families
Definition
A family B of (n, k) linear codes over a finite field F is called
balanced if every nonzero vector in Fn appears in the same number,
N B , of codes from B.
For example, the set of all linear (n, k) codes is a balanced family.
[Figure: a bipartite graph with the qn − 1 nonzero vectors of Fqn on one side, each of degree NB, and the |B| codes of B on the other side, each of degree qk − 1.]

Counting the (vector, code) incidences in two ways gives NB (qn − 1) = |B| (qk − 1), so NB/|B| = (qk − 1)/(qn − 1).

Averaging Lemma
For any function f defined on the nonzero vectors of Fqn,
(1/|B|) Σ_{C∈B} Σ_{w∈C∗} f(w) = ((qk − 1)/(qn − 1)) Σ_{v∈(Fqn)∗} f(v).

Proof: since every nonzero vector lies in exactly NB codes of B,
NB Σ_{v∈(Fqn)∗} f(v) = Σ_{C∈B} Σ_{w∈C∗} f(w);
now divide both sides by |B| and use NB/|B| = (qk − 1)/(qn − 1).
Now let A ⊆ Fqn be any set of “forbidden” vectors, and take
f(v) = 1 if v ∈ A, and f(v) = 0 otherwise.
Then the averaging lemma gives
(1/|B|) Σ_{C∈B} |C∗ ∩ A| = ((qk − 1)/(qn − 1)) |A∗|,
where A∗ ≜ A \ {0}. Now if
((qk − 1)/(qn − 1)) |A∗| < 1,
then the average intersection count is < 1. But since |C∗ ∩ A| is an integer, this would mean that B contains at least one code with C∗ ∩ A = ∅.
• Setting A = ∆E, we see that if ((qk − 1)/(qn − 1)) |∆E∗| < 1, or more loosely if |∆E| < qn−k,
then B contains at least one (n, k) linear code that can correct
all additive errors in a set E .
• For example setting E to a Hamming ball yields (essentially)
the Gilbert-Varshamov bound.
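A quick numerical check (illustrative only, assuming numpy; the parameters q = 2, n = 4, k = 1 and the random test function are example choices) of the averaging lemma for the balanced family of all (4, 1) linear codes over F2, where each code is the span of one nonzero vector:

import itertools
import numpy as np

q, n, k = 2, 4, 1
nonzero = [v for v in itertools.product(range(q), repeat=n) if any(v)]

# Family B: all 1-dimensional codes over F2; each is {0, v} for a nonzero v.
codes = [frozenset({(0,) * n, v}) for v in nonzero]

rng = np.random.default_rng(1)
f = {v: rng.uniform() for v in nonzero}          # an arbitrary test function on nonzero vectors

lhs = np.mean([sum(f[w] for w in C if any(w)) for C in codes])
rhs = (q**k - 1) / (q**n - 1) * sum(f.values())
print(np.isclose(lhs, rhs))                      # True: both sides agree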
Constructing mod-p Lattices of Constant Volume
It is natural to construct a family of lattices in fixed dimension n,
with a fixed determinant V f , using lifted (n, k) codes with fixed k,
where 0 < k < n. Free parameter: p.
Unscaled Construction A, lifting a code C over Fp, gives
pZn ⊂ ΛC ⊂ Zn,
with det(pZn) = pn, det(ΛC) = pn−k, det(Zn) = 1.
Scaling every lattice by γ gives
γpZn ⊂ γΛC ⊂ γZn,
with det(γpZn) = (γp)n → ∞, det(γΛC) = γn pn−k = Vf (fixed), det(γZn) = γn → 0,
where the limits are taken as p → ∞ with γ chosen so that γn pn−k = Vf.
Example: Lifting C = ((1, 1)) mod p with fixed Vf

[Figure: the scaled lattices γΛC for p = 2, 3, 5, 23; the yellow-shaded square of side γp is V(γpZ2), the Voronoi region of the coarse lattice.]

General case, as p → ∞:
• the fine lattice γZn grows increasingly “fine”;
• the Voronoi region of the coarse lattice γpZn grows increasingly large.
Minkowski-Hlawka Theorem
Minkowski-Hlawka Theorem
Let f be a Riemann-integrable function Rn → R of bounded support (i.e., f(v) = 0 if ‖v‖ exceeds some bound). Then, for any integer k, 0 < k < n, and any fixed Vf, the approximation
(1/|B|) Σ_{C∈B} Σ_{w∈γΛC∗} f(w) ≈ Vf−1 ∫Rn f(v) dv,
where B is any balanced family of linear (n, k) codes over Fp, becomes exact in the limit as p → ∞, γ → 0 with γn pn−k = Vf fixed.
Minkowski-Hlawka Theorem: A Proof
Let V be the Voronoi region of γpZn. Then, when p is sufficiently large (so that supp(f) ⊆ V),

(1/|B|) Σ_{C∈B} Σ_{w∈γΛC∗} f(w)
   = (1/|B|) Σ_{C∈B} Σ_{w∈(γΛC∗ ∩ V)} f(w)                     [supp(f) ⊆ V]
   = ((pk − 1)/(pn − 1)) Σ_{v∈((γZn)∗ ∩ V)} f(v)                 [averaging lemma]
   = ((pk − 1)/(pn − 1)) γ−n Σ_{v∈((γZn)∗ ∩ V)} f(v) γn          [multiply by unity]
   → pk−n γ−n ∫Rn f(v) dv                                       [sum → integral]
   = Vf−1 ∫Rn f(v) dv.
Minkowski-Hlawka Theorem: Equivalent Form
Theorem
Let E be a bounded subset of Rn that is Jordan-measurable (i.e., Vol(E) is the Riemann integral of the indicator function of E); let k be an integer such that 0 < k < n and let Vf be a positive real number. Then the approximation
(1/|B|) Σ_{C∈B} |γΛC∗ ∩ E| ≈ Vol(E)/Vf
becomes exact in the limit as p → ∞, γ → 0 with γn pn−k = Vf fixed. (Take f to be the indicator function of E in the previous theorem.)
Goodness for Packing
Theorem
For any n > 1 and any ε > 0, there exists a lattice Λn of dimension n such that
ρpack(Λn) = rpack(Λn)/reff(Λn) ≥ 1/(2(1 + ε)).
Lower Bound on Packing Radius
[Figure: a ball Br containing no point of Λ∗.]
If Λ∗ ∩ Br = ∅, then dmin(Λ) > r, and hence rpack(Λ) ≥ r/2.
Goodness for Packing: A Proof
For any n > 1 and any ε > 0, let Br be the ball with
Vol(Br) = rn Vn = Vf/(1 + ε)n < Vf.
Then,
(1/|B|) Σ_{C∈B} |γΛC∗ ∩ Br| → Vol(Br)/Vf < 1,
so, by the integrality argument, there exists a code C ∈ B whose lattice Λn = γΛC satisfies Λn∗ ∩ Br = ∅, and hence
rpack(Λn) ≥ r/2.
On the other hand, reff(Λn) = (Vf/Vn)1/n = r(1 + ε). Hence,
ρpack(Λn) = rpack(Λn)/reff(Λn) ≥ 1/(2(1 + ε)).
From Existence to Concentration
Let Λn be a random lattice of dimension n uniformly distributed over {γΛC : C ∈ B}, with Br as on the previous slide. Then
Pr[|Λn∗ ∩ Br| ≥ 1] → 0
as n → ∞.

Proof: Recall that (1/|B|) Σ_{C∈B} |γΛC∗ ∩ Br| → (1/(1 + ε))n, as p → ∞.
Consider the random variable |Λn∗ ∩ Br|, where Λn is uniform over {γΛC : C ∈ B}. By Markov's inequality,
Pr[|Λn∗ ∩ Br| ≥ 1] ≤ E[|Λn∗ ∩ Br|]/1 = (1/|B|) Σ_{C∈B} |γΛC∗ ∩ Br|.
Hence, Pr[|Λn∗ ∩ Br| ≥ 1] → 0, as n → ∞.
Goodness for Modulation
Theorem
There exists a sequence of lattices Λn such that for all 0 < Pe < 1,
µ(Λn, Pe ) → 2πe, as n → ∞.
Upper Bound on the Error Probability
For a specific (non-random) lattice Λ, the error probability Pe(Λ) = Pr[z ∉ V(Λ)] is upper bounded by
Pe(Λ) ≤ Pr[z ∉ Br] + ∫Br fr(v) |Λ∗ ∩ (v + Br)| dv,
where fr denotes the pdf of the noise vector z: if z ∈ Br and an error occurs, then some nonzero lattice point lies within distance ‖z‖ ≤ r of z, i.e., in z + Br.
Average Error Probability P¯e
P̄e ≜ (1/|B|) Σ_{C∈B} Pe(γΛC)
   ≤ Pr[z ∉ Br] + (1/|B|) Σ_{C∈B} ∫Br fr(v) |(γΛC)∗ ∩ (v + Br)| dv
   = Pr[z ∉ Br] + ∫Br fr(v) ( (1/|B|) Σ_{C∈B} |(γΛC)∗ ∩ (v + Br)| ) dv.

If rnoise = reff/(1 + ε), then there exist ε1, ε2 > 0 such that
reff/rnoise = (1 + ε1)(1 + ε2).
Now, we set r = reff/(1 + ε1). Then rnoise = r/(1 + ε2),
Vol(Br)/Vf = (r/reff)n = (1/(1 + ε1))n,
and
Pr[z ∉ Br] = Pr[‖z‖ > r] = Pr[‖z‖2/n > r2/n] → 0 as n → ∞,
since E[‖z‖2/n] = r2noise/n < r2/n (law of large numbers). Similarly, by the Minkowski-Hlawka theorem the averaged intersection count tends to Vol(Br)/Vf = (1/(1 + ε1))n, so the second term also vanishes and P̄e → 0.
Goodness for Modulation: A Proof
Recall that
µ(Λ, Pe) = det(Λ)2/n / σ2(Pe).
For any target error probability δ > 0, if we set rnoise = reff/(1 + ε) for some ε > 0, then P̄e ≤ δ for sufficiently large n. Hence, there exists a lattice Λn with Pe(Λn) ≤ δ and σ2(δ) ≥ r2noise/n. Therefore,
µ(Λn, δ) = Vf2/n/σ2(δ) ≤ Vf2/n/(r2noise/n) = n Vn2/n r2eff/r2noise → 2πe (1 + ε)2
(using n Vn2/n → 2πe). The theorem follows because we can make ε arbitrarily small.
From Existence to Concentration

Concentration for Large n
Let Λn be a random lattice of dimension n uniformly distributed over {γΛC : C ∈ B}. Then, for any 0 < δ < 1 and any ε > 0,
Pr[ µ(Λn, δ) ≤ 2πe(1 + ε)2 ] → 1
as n → ∞.

Proof: For any target error probability δ > 0 and any large L > 0, if we set rnoise = reff/(1 + ε) for some ε > 0, then P̄e ≤ δ/L for sufficiently large n.
Consider the random variable Pe(Λn), where Λn is uniform over {γΛC : C ∈ B}. By Markov's inequality,
Pr[Pe(Λn) ≥ δ] ≤ E[Pe(Λn)]/δ = P̄e/δ ≤ 1/L.
Hence, with probability at least 1 − 1/L, Λn has Pe(Λn) ≤ δ and σ2(δ) ≥ r2noise/n.
Simultaneous Goodness
Theorem
Let Λn be a random lattice of dimension n uniformly distributed over {γΛC : C ∈ B}. Then for any 0 < Pe < 1 and any ε > 0,
Pr[ ρpack(Λn) ≥ 1/(2(1 + ε)) and µ(Λn, Pe) ≤ 2πe(1 + ε) ] → 1
as n → ∞.

Proof: a union-bound argument.
Goodness of Nested Lattices
Nested Lattices Good for (Almost) Everything
In fact, with a refined argument, one can prove that, with high probability, both Λn and Λ′n are simultaneously good for packing, modulation, covering, and quantization.
Remark 1: goodness for covering implies goodness for quantization.
Remark 2: in order to prove the goodness for covering, we need some constraints on k and k′ of the underlying linear codes. This is beyond the scope of this tutorial.
Practical Ensembles of Lattices
For linear codes, practical ensembles include Turbo codes, LDPC
codes, Polar codes, Spatially-Coupled LDPC codes.
What about their lattice versions?
• LDPC Lattices: M-R. Sadeghi, A. H. Banihashemi, and D.
Panario, 2006
• Low-Density Lattice Codes: N. Sommer, M. Feder, and O.
Shalvi, 2008
• Low-Density Integer Lattices: N. Di Pietro, J. J. Boutros, G. Zémor, and L. Brunel, 2012
• Turbo Lattices: A. Sakzad, M.-R. Sadeghi, and D. Panario,
2012
• Polar Lattices: Y. Yan, C. Ling, and X. Wu, 2013
• Spatially-Coupled Low-Density Lattices: A. Vem, Y.-C. Huang, K. Narayanan, and H. Pfister, 2014
Towards a Unified Framework
A unified framework
It is possible to generalize the balanced families to “almost
balanced” families so that goodness of some (practical) linear codes
over Fp implies goodness of lattices.
Encoding
Encoding with a Random Dither
A message point λ ∈ Λ ∩ V(Λ′) is transmitted as
x = [λ + u] mod Λ′ = λ + u − QΛ′(NN)(λ + u),
where u is a random dither.
Clearly, x ∈ V(Λ′), and we will now show that in fact x is uniformly distributed over V(Λ′) and hence has
(1/n) E[‖x‖2] = σ2(Λ′).
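A scalar-lattice sketch (illustrative only, assuming numpy; the fine lattice Z and coarse lattice 8Z are example choices): the dithered codeword x = [λ + u] mod Λ′ stays in V(Λ′) and has second moment σ2(Λ′) regardless of λ.

import numpy as np

rng = np.random.default_rng(0)
c = 8.0                                        # coarse lattice Λ' = cZ; V(Λ') = [-c/2, c/2)

def mod_coarse(v):
    """Reduce v modulo Λ' = cZ into the Voronoi region [-c/2, c/2)."""
    return v - c * np.round(v / c)

lam = 3.0                                      # a fine-lattice (Z) message point inside V(Λ')
u = rng.uniform(-c / 2, c / 2, size=100_000)   # dither uniform over V(Λ')
x = mod_coarse(lam + u)

print(x.min(), x.max())                        # ≈ -4, 4: x stays in V(Λ')
print(np.mean(x**2), c**2 / 12)                # empirical second moment ≈ σ²(Λ') = c²/12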
The Role of the Random Dither
Crypto Lemma
If the dither u is uniform over the Voronoi region V(Λ′) and independent of λ, then x = [λ + u] mod Λ′ is uniform over V(Λ′), independent of λ.

Hence, (1/n) E[‖x‖2] = σ2(Λ′).
In practice one often uses a non-random dither chosen to achieve a
Decoding
A sensible (though suboptimal) decoding rule at the output of a Gaussian noise channel:
• Given y, map y − u to the nearest point of the fine lattice Λ.
• Reduce mod Λ′ if necessary:
λ̂ = QΛ(NN)(y − u) mod Λ′.

Understanding the decoding: Let λ′ = QΛ′(NN)(λ + u). Then,
y − u = x + z − u = (λ + u − λ′) + z − u = λ + z − λ′.
Hence, λ̂ = λ if and only if QΛ(NN)(z) ∈ Λ′. Therefore,
Pr[λ̂ ≠ λ] = Pr[QΛ(NN)(z) ∉ Λ′] ≤ Pr[QΛ(NN)(z) ≠ 0] = Pr[z ∉ V(Λ)].
Rate versus SNR
R = (1/n) log2 ( det(Λ′)/det(Λ) )
  = (1/2) log2 ( det(Λ′)2/n / det(Λ)2/n )
  = (1/2) log2 ( (σ2(Λ′)/G(Λ′)) / (σ2(Pe) · µ(Λ, Pe)) )
  = (1/2) log2 ( σ2(Λ′)/σ2(Pe) ) − (1/2) log2 ( G(Λ′) · µ(Λ, Pe) )
  = (1/2) log2 (P/N) − (1/2) log2 ( 2πe G(Λ′) )  [shaping loss]  − (1/2) log2 ( µ(Λ, Pe)/(2πe) )  [coding loss],
where P = σ2(Λ′) is the transmit power per dimension and N = σ2(Pe) is the noise variance per dimension.
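As a point of reference (a standard numerical check, not from the slides): with a cubic shaping lattice, G(Λ′) = G(Zn) = 1/12, so the shaping loss is
(1/2) log2(2πe/12) ≈ 0.25 bits per dimension (about 1.53 dB),
which is the familiar gap that a good (nearly spherical) shaping lattice can recover.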
Summary of Nested Lattice Codes
Outline
AWGN Channel Coding
[Block diagram: input x, noise z added, output y = x + z.]

y = x + z, where zi ∼ N(0, N), with independent components, independent of x.
Average power constraint: (1/n) E[‖x‖2] ≤ P.

C_AWGN = (1/2) log2(1 + P/N).
Key Intuition (Erez&Zamir’04)
x = [λ + u] mod Λ′ = λ + u − QΛ′(NN)(λ + u).
Clearly, x ∈ V(Λ′) and (1/n) E[‖x‖2] = σ2(Λ′).
Decoding with the MMSE Estimator
[Block diagram: y is scaled by α, the dither u is subtracted, and the result is quantized by QΛ(NN) and reduced mod Λ′ to produce λ̂.]

λ̂ = QΛ(NN)(αy − u) mod Λ′,
where α is the MMSE coefficient. Note that when α = 1, this reduces to our previous decoding rule.
Error Probability
Let λ′ = QΛ′(NN)(λ + u). Then,
αy − u = α(x + z) − u
       = α((λ + u − λ′) + z) − u
       = λ + (α − 1)(λ + u − λ′) + αz − λ′
       = λ + (α − 1)x + αz − λ′,
where the effective noise is nα ≜ (α − 1)x + αz. Hence,
Pe ≜ Pr[λ̂ ≠ λ] = Pr[QΛ(NN)(nα) ∉ Λ′] ≤ Pr[QΛ(NN)(nα) ≠ 0] = Pr[nα ∉ V(Λ)].
The Role of the MMSE Estimator
The effective channel noise is nα (instead of z), and the second moment per dimension of nα is
σ2(nα) ≜ (1/n) E[‖nα‖2] = (α − 1)2 σ2(x) + α2 σ2(z) = (α − 1)2 P + α2 N.
Although nα is not Gaussian, one can still show that Pe = Pr[nα ∉ V(Λ)] → 0 for lattices that are good for modulation, as n → ∞. This can be done with some additional steps.
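Minimizing σ2(nα) over α is a standard calculation (included here for completeness, not from the slides): setting the derivative 2(α − 1)P + 2αN to zero gives
α∗ = P/(P + N),   σ2(nα∗) = PN/(P + N),
and hence
(1/2) log2 ( P/σ2(nα∗) ) = (1/2) log2 (1 + P/N),
so the MMSE-scaled scheme operates at the AWGN capacity, up to the shaping and coding losses.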
Dirty-Paper Coding
[Block diagram: message m → TX → X; the channel adds the interference S (known to the TX) and the noise Z, giving Y = X + S + Z; RX outputs m̂.]

In the dirty-paper channel Y = X + S + Z, where Z is an unknown additive noise, and S is an interference signal known to the transmitter but not to the receiver.
The channel input satisfies an average power constraint:
E[‖x‖2] ≤ nP.
If S and Z are statistically independent Gaussian variables, then the channel capacity is
CDP = CAWGN = (1/2) log2(1 + P/N).
Encoding
[Block diagram: the interference s is scaled by −α and added to λ + u before the mod-Λ′ operation, producing x.]

x = [λ + u − αs] mod Λ′.
Decoding
[Block diagram: y is scaled by α, the dither u is subtracted, and the result is quantized by QΛ(NN) and reduced mod Λ′ to produce λ̂.]

λ̂ = QΛ(NN)(αy − u) mod Λ′,
where α is the MMSE coefficient.
Error Probability
Let λ′ = QΛ′(NN)(λ + u − αs). Then,
αy − u = α(x + s + z) − u
       = α((λ + u − αs − λ′) + s + z) − u
       = λ + (α − 1)(λ + u − αs − λ′) + αz − λ′
       = λ + (α − 1)x + αz − λ′.
Hence,
Pe ≜ Pr[λ̂ ≠ λ] ≤ Pr[nα ∉ V(Λ)].
Achievable Rate
σ2(nα) = (α − 1)2 P + α2 N.
With the MMSE coefficient α = P/(P + N), σ2(nα) = PN/(P + N), so the achievable rate (1/2) log2(P/σ2(nα)) = (1/2) log2(1 + P/N) = CDP.
Gaussian Two-Way Relay Channel
[Block diagram (MAC phase): users 1 and 2 transmit X1 and X2; the relay receives YMAC = X1 + X2 + Z. (BC phase): the relay broadcasts XBC; user 1 receives Y1 = XBC + Z1 and decodes λ̂2, while user 2 receives Y2 = XBC + Z2 and decodes λ̂1.]

YMAC = X1 + X2 + Z,  Y1 = XBC + Z1,  Y2 = XBC + Z2,
where Z ∼ N(0, N), Z1 ∼ N(0, N1), and Z2 ∼ N(0, N2).
Average power constraints: (1/n) E[‖x1‖2] ≤ P1, (1/n) E[‖x2‖2] ≤ P2, and (1/n) E[‖xBC‖2] ≤ PBC.
For simplicity, we first consider the symmetric case P1 = P2 = PBC and N1 = N2 = N.
Transmission Strategy
1st Phase
Encoding: user i transmits xi = [λi + ui] mod Λ′, for i = 1, 2.
Decoding (at the relay):
λ̂ = QΛ(NN)(αy − u1 − u2) mod Λ′.
1st Phase: Error Probability
Let λ′i = QΛ′(NN)(λi + ui) for i = 1, 2. Then,
αy − u1 − u2 = λ1 + λ2 + (α − 1) Σi (λi + ui − λ′i) + αz − λ′1 − λ′2
             = λ1 + λ2 + (α − 1)(x1 + x2) + αz − λ′1 − λ′2.
Note that λ̂ = [λ1 + λ2] mod Λ′ if and only if QΛ(NN)(nα) ∈ Λ′, where nα ≜ (α − 1)(x1 + x2) + αz. Hence,
Pe ≤ Pr[nα ∉ V(Λ)].
1st Phase: Achievable Rate
Note that
σ2(nα) = (α − 1)2 (σ2(x1) + σ2(x2)) + α2 σ2(z) = (α − 1)2 · 2P + α2 N.
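Optimizing over α as before (a standard calculation, included for completeness) gives
α∗ = 2P/(2P + N),   σ2(nα∗) = 2PN/(2P + N),
so each user can convey its lattice point to the relay at any rate
R < (1/2) log2 ( P/σ2(nα∗) ) = (1/2) log2 (1/2 + P/N).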
Summary of the Symmetric Case
Asymmetric Powers
Recall that the channel model is
YMAC = X1 + X2 + Z
Y1 = XBC + Z1
Y2 = XBC + Z2
N1 = N2 = N
Key idea: use the same fine lattice at both users but different
coarse lattices, each sized to meet its user’s power constraint
A Triple of Nested Lattices
Λ′1 ⊂ Λ′2 ⊂ Λ
with
σ2(Λ′1) = P1 and σ2(Λ′2) = P2,
R1 = (1/n) log2 ( det(Λ′1)/det(Λ) ) and R2 = (1/n) log2 ( det(Λ′2)/det(Λ) ).
1st Phase: Encoding
User i transmits xi = [λi + ui] mod Λ′i, where ui is a dither uniform over V(Λ′i). Clearly,
(1/n) E[‖xi‖2] = σ2(Λ′i) = Pi.
1st Phase: Decoding
λ̂ = QΛ(NN)(αy − u1 − u2) mod Λ′1.
To understand the decoding, let λ′i = QΛ′i(NN)(λi + ui) for i = 1, 2. Then, once again,
αy − u1 − u2 = λ1 + λ2 + nα − λ′1 − λ′2,
where
nα ≜ (α − 1)(x1 + x2) + αz.
Let λ = [λ1 + λ2 − λ′2] mod Λ′1. Then,
λ̂ = λ if and only if QΛ(NN)(nα) ∈ Λ′1.
1st Phase: Achievable Rates
Note that
σ2(nα) = (α − 1)2 (σ2(x1) + σ2(x2)) + α2 σ2(z) = (α − 1)2 (P1 + P2) + α2 N.
2nd Phase: Coding Scheme
The relay conveys
λ = [λ1 + λ2 − λ′2] mod Λ′1,
and since
[λ − λ2 + λ′2] mod Λ′1 = λ1,
user 2 (which knows its own λ2 and λ′2) can recover λ1 from λ.
2nd Phase: Achievable Rates
Asymmetric Powers: A Summary
Compute-and-Forward
[Block diagram: users 1, 2, 3 transmit X1, X2, X3 through the channel; relays 1, 2, 3 observe Y1, Y2, Y3 and each forwards to the destination over a link of rate R0.]

Yk = Σℓ=1..L hkℓ Xℓ + Zk.
Encoding
xℓ = [λℓ + uℓ] mod Λ′,  ℓ = 1, . . . , L.
Relay Decoding
Relay k decodes an integer combination of the users' lattice points:
t̂k = QΛ(NN)( αk yk − Σℓ=1..L akℓ uℓ ) mod Λ′.
Error Probability
Let λ′ℓ = QΛ′(NN)(λℓ + uℓ) for ℓ = 1, . . . , L. Then, since xℓ = λℓ + uℓ − λ′ℓ,
αk yk − Σℓ akℓ uℓ = αk ( Σℓ hkℓ xℓ + zk ) − Σℓ akℓ uℓ
   = Σℓ akℓ λℓ + Σℓ (αk hkℓ − akℓ)(λℓ + uℓ − λ′ℓ) + αk zk − Σℓ akℓ λ′ℓ
   = Σℓ akℓ λℓ + nαk − Σℓ akℓ λ′ℓ,
where the effective noise is nαk ≜ Σℓ (αk hkℓ − akℓ) xℓ + αk zk. Hence t̂k = [Σℓ akℓ λℓ] mod Λ′ (the desired integer combination) whenever QΛ(NN)(nαk) ∈ Λ′, and the second moment per dimension of the effective noise is
σ2(nαk) = P ‖αk hk − ak‖2 + α2k N.
The optimal scaling coefficient is
α∗k = P ak hkT / (P ‖hk‖2 + N),
where ak = (ak1, . . . , akL) and hk = (hk1, . . . , hkL), and then
σ2(nα∗k) = P ‖ak‖2 − P2 (ak hkT)2 / (P ‖hk‖2 + N).
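A small numerical sketch (illustrative only, assuming numpy; the channel vector h, the coefficient vector a, and the values P, N are example choices following the slide's symbols) that evaluates α∗k, σ2(nα∗k), and the resulting computation rate:

import numpy as np

def compute_rate(h, a, P, N):
    """MMSE scaling, effective-noise variance, and computation rate for coefficients a."""
    alpha = P * (a @ h) / (P * h @ h + N)                        # optimal coefficient α*
    sigma2 = P * np.sum((alpha * h - a) ** 2) + alpha**2 * N     # σ²(n_α*) per dimension
    return 0.5 * np.log2(P / sigma2), alpha, sigma2

h = np.array([1.3, 0.4])          # channel gains seen by one relay
a = np.array([1, 0])              # candidate integer coefficient vector
print(compute_rate(h, a, P=10.0, N=1.0))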
Decoding at the Destination
Decoding at the Destination (Cont’d)
Provided the matrix of integer coefficient vectors is invertible, the destination can solve for the individual messages, and the common message rate satisfies
R ≤ min{ (1/2) log2 ( P/σ2(nα∗1) ), . . . , (1/2) log2 ( P/σ2(nα∗L) ), R0 }.
Finding the Best Integer Coefficients
Problem formulation:
maximize R
subject to  ∀k : R ≤ (1/2) log2 ( P/σ2(nα∗k) ),
            R ≤ R0,
            A = [a1; . . . ; aL] is full rank over Fp.
Finding the Best Integer Coefficients (Cont’d)
Note that
σ2(nα∗k) = P ‖ak‖2 − P2 (ak hkT)2 / (P ‖hk‖2 + N)
         = ak ( P IL − (P2/(P ‖hk‖2 + N)) hkT hk ) akT
         = ak Mk akT,
where Mk ≜ P IL − (P2/(P ‖hk‖2 + N)) hkT hk. Thus, for each k, finding the best coefficient vector amounts to minimizing the positive-definite quadratic form ak Mk akT over nonzero integer vectors ak (a shortest-vector problem).
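A brute-force sketch (illustrative only, assuming numpy; real systems use lattice-reduction-based searches, and the search window is an example choice) that finds, for one relay, the nonzero integer vector minimizing ak Mk akT and hence maximizing the computation rate:

import itertools
import numpy as np

def best_coefficients(h, P, N, window=3):
    """Exhaustively search integer vectors a (entries in [-window, window]) minimizing a M a^T."""
    L = len(h)
    M = P * np.eye(L) - (P**2 / (P * h @ h + N)) * np.outer(h, h)
    best_a, best_val = None, np.inf
    for a in itertools.product(range(-window, window + 1), repeat=L):
        a = np.array(a)
        if not a.any():
            continue                          # skip the all-zero vector
        val = a @ M @ a
        if val < best_val:
            best_a, best_val = a, val
    rate = 0.5 * np.log2(P / best_val)
    return best_a, best_val, rate

print(best_coefficients(np.array([1.3, 0.4]), P=10.0, N=1.0))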
Compute-and-Forward: A Summary
Achievable rate:
R ≤ min{ min_k (1/2) log2 ( P/σ2(nα∗k) ), R0 },
where the coefficient matrix A = [a1; . . . ; aL] is full rank over Fp.
Successive Compute-and-Forward
Conclusion
1 Fundamentals
2 Packing, Covering, Quantization, Modulation
5 Communications Applications
Lattices give a structured approach to Gaussian information theory
problems, though the asymptotic results are still based on
random-(linear)-coding arguments.
Much work can be done in applying these tools to new problems,
and searching for constructions having tractable implementation
complexity.
Bibliography
Fundamentals of Lattices
1 J. W. S. Cassels. An Introduction to the Geometry of Numbers. Springer, 1971.
2 J. H. Conway and N. J. A. Sloane. Sphere Packings, Lattices
and Groups. Springer-Verlag, New York, 3rd Ed., 1999.
3 P. M. Gruber and C. G. Lekkerkerker. Geometry of Numbers.
North-Holland Mathematical Library, Vol. 37, 1987.
4 D. Micciancio and S. Goldwasser. Complexity of Lattice
Problems: A Cryptographic Perspective. Kluwer, 2002.
5 V. Vaikuntanathan. Lattices in Computer Science. Class notes
at the University of Toronto.
6 R. Zamir. Lattice Coding for Signals and Networks. Cambridge
University Press, Cambridge, 2014.
Bibliography (Cont’d)
Asymptotically-Good Lattices
1 U. Erez, S. Litsyn, and R. Zamir. Lattices which are good for
(almost) everything. IEEE Trans. Inform. Theory, 51:3401–
3416, Oct. 2005.
2 G. D. Forney, M. D. Trott, and S.-Y. Chung.
Sphere-bound-achieving coset codes and multilevel coset
codes.
IEEE Trans. Inform. Theory, 46:820–850, May 2000.
3 H. A. Loeliger. Averaging bounds for lattices and linear
codes.
IEEE Trans. Inform. Theory, 43:1767–1773, Nov. 1997.
4 O. Ordentlich and U. Erez. A simple proof for the existence of
good pairs of nested lattices. In Proc. of IEEEI, 2012.
5 N. Di Pietro. On Infinite and Finite Lattice Constellations for the Additive White Gaussian Noise Channel. PhD Thesis, 2014.
Bibliography (Cont’d)
Applications of Lattices
1 U. Erez, S. Shamai (Shitz), and R. Zamir. Capacity and lattice
strategies for cancelling known interference. IEEE Trans.
Inform. Theory, 51:3820–3833, Nov. 2005.
2 U. Erez and R. Zamir. Achieving 1/2 log(1 + SNR) on
the AWGN channel with lattice encoding and decoding.
IEEE Trans. Inform. Theory, 50:2293–2314, Oct. 2004.
3 B. Nazer and M. Gastpar. Compute-and-forward: harnessing
interference through structured codes. IEEE Trans. Inform.
Theory, 57:6463–6486, Oct. 2011.
4 M. P. Wilson, K. Narayanan, H. Pfister, and A. Sprintson.
Joint physical layer coding and network coding for bidirectional
relaying. IEEE Trans. Inform. Theory, 56:5641–5654, Nov.
2010.
Bibliography (Cont’d)
More on Applications of Lattices
1 C. Feng, D. Silva, and F. R. Kschischang. An algebraic
approach to physical-layer network coding. IEEE Trans. Inform.
Theory, 59:7576–7596, Nov. 2013.
2 W. Nam, S.-Y. Chung, and Y. H. Lee. Capacity of the
Gaussian two-way relay channel to within 1/2 bit. IEEE Trans.
Inform. Theory, 56:5488–5494, Nov. 2010.
3 B. Nazer. Successive compute-and-forward. In Proc. of IZS,
2012.
4 R. Zamir, S. Shamai, and U. Erez. Nested linear/lattice codes
for structured multiterminal binning. IEEE Trans. Inform.
Theory, 48:1250–1276, Jun. 2002.
5 J. Zhu and M. Gastpar. Multiple access via
compute-and-forward. submitted to IEEE Trans. Inform.
Theory, Jul. 2014.