Algebraic Characterizations of
Distance-regular Graphs
(Master of Science Thesis)
With 49 illustrations
By Safet Penjić
[email protected]
Supervised by:
Assoc. prof. dr. Štefko Miklavič
Faculty of Mathematics, Natural Sciences
and Information Technologies
University of Primorska
[email protected]
Abstract 5
II Distance-regular graphs 33
7 Definitions and easy results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
8 Characterization of DRG involving the distance matrices . . . . . . . . . . . . . 43
9 Examples of distance-regular graphs . . . . . . . . . . . . . . . . . . . . . . . . . 55
10 Characterization of DRG involving the distance polynomials . . . . . . . . . . . 64
11 Characterization of DRG involving the principal idempotent matrices . . . . . . 73
Conclusion 135
Index 137
Bibliography 139
Through this thesis we introduce distance-regular graphs and present some of their
characterizations, which depend on information retrieved from their adjacency matrix,
principal idempotent matrices, predistance polynomials and spectrum. Let Γ be a finite
simple connected graph. In Chapter I we present some basic results from algebraic graph
theory: we prove the Perron-Frobenius theorem, we show how to compute the number of walks of
a given length between two vertices of Γ and how to compute the total number of (rooted) closed
walks of a given length of Γ, and we introduce the adjacency matrix A of Γ, the principal idempotent
matrices E_i of Γ, the adjacency (Bose-Mesner) algebra of Γ and the Hoffman polynomial
of Γ. All of these results are needed in Chapters II and III. In Chapter II we define
distance-regular graphs, show some examples of these graphs, introduce the distance-i matrices A_i,
i = 0, 1, ..., D (where D is the diameter of the graph Γ), introduce the predistance polynomials p_i,
i = 0, 1, ..., d (where d + 1 is the number of distinct eigenvalues) of Γ, and prove the following sequence of
equivalences: Γ is distance-regular ⇐⇒ Γ is distance-regular around each of its vertices and
with the same intersection array ⇐⇒ the distance matrices of Γ satisfy
A_iA_j = Σ_{k=0}^{D} p_{ij}^k A_k (0 ≤ i, j ≤ D) for some constants p_{ij}^k ⇐⇒ for some constants a_h, b_h, c_h
(0 ≤ h ≤ D), c_0 = b_D = 0, the distance matrices of Γ satisfy the three-term recurrence
A_hA = b_{h−1}A_{h−1} + a_hA_h + c_{h+1}A_{h+1} (0 ≤ h ≤ D) ⇐⇒ {I, A, ..., A_D} is a basis of the
adjacency algebra A(Γ) ⇐⇒ A acts by right (or left) multiplication as a linear operator on
the vector space span{I, A_1, A_2, ..., A_D} ⇐⇒ for any integer h, 0 ≤ h ≤ D, the distance-h
matrix A_h is a polynomial of degree h in A ⇐⇒ Γ is regular, has spectrally maximum
diameter (D = d) and the matrix A_D is a polynomial in A ⇐⇒ the number a^ℓ_{uv} of walks of
length ℓ between two vertices u, v ∈ V only depends on h = ∂(u, v) ⇐⇒ for any two
vertices u, v ∈ V at distance h, we have a^h_{uv} = a^h_h and a^{h+1}_{uv} = a^{h+1}_h for any 0 ≤ h ≤ D − 1, and
a^D_{uv} = a^D_D for h = D ⇐⇒ A_iE_j = p_{ji}E_j (where the p_{ji} are some constants) ⇔ A_i = Σ_{j=0}^{d} p_{ji}E_j ⇔
A_i = Σ_{j=0}^{d} p_i(λ_j)E_j ⇔ A_i ∈ A (i, j = 0, 1, ..., d (= D)) ⇐⇒ for every 0 ≤ i ≤ d and for
every pair of vertices u, v of Γ, the (u, v)-entry of E_i depends only on the distance between u
and v ⇐⇒ E_j ◦ A_i = q_{ij}A_i (where the q_{ij} are some constants) ⇔ E_j = Σ_{i=0}^{D} q_{ij}A_i ⇔
E_j = (1/n) Σ_{i=0}^{d} q_i(λ_j)A_i (where q_i(λ_j) := m_j p_i(λ_j)/p_i(λ_0)) ⇔ E_j ∈ D (i, j = 0, 1, ..., d (= D)) ⇐⇒
A^j ◦ A_i = a_i^{(j)}A_i (where the a_i^{(j)} are some constants) ⇔ A^j = Σ_{i=0}^{d} a_i^{(j)}A_i ⇔ A^j = Σ_{i=0}^{d} Σ_{l=0}^{d} q_{il}λ_j^l A_i ⇔
A^j ∈ D (i, j = 0, 1, ..., d). Finally, in Chapter III, we introduce one interesting family of
orthogonal polynomials, the canonical orthogonal system, and prove three more
characterizations of distance-regularity which involve the spectrum: Γ is distance-regular
⇐⇒ the number of vertices at distance k from every vertex u ∈ V is |Γ_k(u)| = p_k(λ_0) for
0 ≤ k ≤ d (where {p_k}_{0≤k≤d} are the predistance polynomials) ⇐⇒ q_k(λ_0) = n / Σ_{u∈V} (1/s_k(u)) for
0 ≤ k ≤ d (where q_k = p_0 + ... + p_k, s_k(u) = |Γ_0(u)| + |Γ_1(u)| + ... + |Γ_k(u)|, and n is the number of
vertices) ⇐⇒ (Σ_{u∈V} n/(n − k_d(u))) / (Σ_{u∈V} k_d(u)/(n − k_d(u))) = Σ_{i=0}^{d} π_0² / (m(λ_i)π_i²)
(where π_h = Π_{i=0, i≠h}^{d} (λ_h − λ_i) and k_d(u) = |Γ_d(u)|). The largest part of the main results
to which I would like to draw attention can be found in [23], [38], [24] and [9].
Keywords: graph, adjacency matrix, principal idempotent matrices, adjacency algebra,
distance matrix, distance ◦-algebra, distance-regular graph, distance polynomials, predistance
polynomials, spectrum, orthogonal systems
Chapter I
Basic results from algebraic graph theory
FIGURE 1
Types of graphs (graphs I and VIII are simple; all of the others are not).
8 CHAPTER I. BASIC RESULTS FROM ALGEBRAIC GRAPH THEORY
FIGURE 2
The cube (V = {0, 1, 2, 3, 4, 5, 6, 7},
E = {{0, 1}, {0, 2}, {2, 3}, {1, 3}, {0, 4}, {1, 5}, {2, 6}, {3, 7}, {4, 5}, {4, 6}, {6, 7}, {5, 7}}).
The matrix A (or A(Γ)) stands for the adjacency 0-1 matrix of a graph Γ, with rows and columns
indexed by the vertices of Γ: (A)_{uv} = 1 if u ∼ v, and (A)_{uv} = 0 otherwise.
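For instance (a sketch in Python/numpy, not part of the original text; the edge list is the one of the cube from Figure 2), the adjacency matrix can be built directly from the edge set:

```python
import numpy as np

# Edge set of the cube from Figure 2
edges = [(0, 1), (0, 2), (2, 3), (1, 3), (0, 4), (1, 5),
         (2, 6), (3, 7), (4, 5), (4, 6), (6, 7), (5, 7)]

n = 8
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1   # (A)_{uv} = 1 iff u ~ v

# A is symmetric with zero diagonal, and the cube is 3-regular
assert (A == A.T).all() and A.trace() == 0
assert (A.sum(axis=0) == 3).all()
```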
A sequence of edges that link up with each other is called a walk. The length of a walk is
the number of edges in the walk. Consecutive edges in a walk must have a vertex in common,
so a walk determines a sequence of vertices. In general, a walk of length n from the vertex u
to the vertex v is a sequence [x1 , e1 , x2 , e2 , ..., en , xn+1 ] of edges and vertices with property
ei = {xi , xi+1 } for i = 1, ..., n and x1 = u, xn+1 = v. The vertices x2 , x3 , ..., xn are called
internal vertices. If [x1 , e1 , x2 , e2 , ..., xn , en , xn+1 ] is a walk from u to v then
[xn+1 , en , xn , en−1 , ..., x2 , e1 , x1 ] is a walk from v to u. We may speak of either of these walks as
a walk between u and v. If u = v, then the walk is said to be closed.
FIGURE 3
Simple graph and its adjacency matrix.
For two vertices u, v ∈ V, a uv-path (or path) is a walk from u to v with all of its edges
distinct. A path is called simple if all of its vertices are different. A path from u to u is called
a cycle, if all of its internal vertices are different, and the length of a shortest cycle of a graph
is called its girth. A simple graph is called connected if there is a path between every pair of
distinct vertices of the graph.
FIGURE 4
Simple connected graph (paths [u, g, w, k, y, i, x, h, w, f, v] and [u, w, y, z, w, v] go from u to v).
1. BASIC DEFINITIONS FROM GRAPH THEORY 9
FIGURE 5
Simple graph drawn in two different ways (examples of walks are [g, a, f, h, b], [c, f, h, c, a, g]
and [g, b, c, f, f, a, g]. Walk [g, b, c, f, f, a, g] has length 7. The vertex sequences for these
walks, respectively, are [y, w, z, y, x, w], [x, z, y, x, z, w, y] and [y, w, x, z, y, z, w, y]).
The distance ∂(x, y) (or distΓ(x, y)) in Γ of two vertices x, y is the length of a shortest
simple xy-path in Γ; if no such path exists, we set ∂(x, y) = ∞. The eccentricity of a
vertex u is ecc(u) := maxv∈V dist(u, v) and the diameter of the graph is D := maxu∈V ecc(u).
The set Γk (u) denotes the set of vertices at distance k from vertex u. Thus, the degree of
vertex u is δu := |Γ1 (u)| ≡ |Γ(u)|.
FIGURE 6
A Hamiltonian cycle of the cube (a Hamiltonian cycle is a cycle that visits every vertex of the
graph exactly once, except for the last vertex, which duplicates the first); here, for
example, ∂(2, 7) = 2, ecc(5) = 3, D = 3, δ4 = 3.
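The values in the caption of Figure 6 can be verified with a breadth-first search; a sketch (vertex labels as in Figure 2):

```python
from collections import deque

# Adjacency lists of the cube (edge set from Figure 2)
adj = {u: [] for u in range(8)}
for u, v in [(0, 1), (0, 2), (2, 3), (1, 3), (0, 4), (1, 5),
             (2, 6), (3, 7), (4, 5), (4, 6), (6, 7), (5, 7)]:
    adj[u].append(v)
    adj[v].append(u)

def dist_from(u):
    """Distances from u to every vertex, by breadth-first search."""
    d = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in d:
                d[y] = d[x] + 1
                queue.append(y)
    return d

ecc = {u: max(dist_from(u).values()) for u in adj}   # eccentricities
D = max(ecc.values())                                # diameter

assert dist_from(2)[7] == 2 and ecc[5] == 3 and D == 3 and len(adj[4]) == 3
```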
With Matm×n (F) we will denote the set of all m × n matrices whose entries are numbers
from a field F (in our case F is the set of real numbers R or the set of complex numbers C).
For every B ∈ Matn×n(F) define the trace of B by trace(B) = Σ_{i=1}^{n} bii = b11 + b22 + ... + bnn.
An eigenvector of a matrix A is a nonzero v ∈ Fn such that Av = λv for some scalar λ ∈ F.
An eigenvalue of A is a scalar λ such that Av = λv for some nonzero v ∈ Fn . Any such pair,
(λ, v), is called an eigenpair for A. We will denote the set of all distinct eigenvalues by σ(A).
Vector space Eλ = ker(A − λI) := {x | (A − λI)x = 0} is called an eigenspace for A. For square
matrices A, the number ρ(A) = maxλ∈σ(A) |λ| is called the spectral radius of A.
The spectrum of a graph Γ is the set of numbers which are eigenvalues of A (Γ), together
with their multiplicities as eigenvalues of A (Γ). If the distinct eigenvalues of A (Γ) are
λ0 > λ1 > ... > λs−1 and their multiplicities are m(λ0 ), m(λ1 ),...,m(λs−1 ), then we shall write
spec(Γ) = {λ0^{m(λ0)}, λ1^{m(λ1)}, ..., λ_{s−1}^{m(λ_{s−1})}}.
FIGURE 7
Petersen graph drawn in two ways and its adjacency matrix. It is not hard to compute that
trace(A) = 0, det(A − λI) = (λ − 3)(λ − 1)^5 (λ + 2)^4, σ(A) = {3, 1, −2},
dim(ker(A − 3I)) = 1, dim(ker(A − I)) = 5, dim(ker(A + 2I)) = 4, ρ(A) = 3,
spec(Γ) = {31 , 15 , −24 }.
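These values can be reproduced numerically; a sketch (not part of the original text) that builds the Petersen graph in its Kneser-graph form — vertices are the 2-subsets of a 5-element set, adjacent when disjoint — which is an assumption about the labelling but gives a graph isomorphic to the one in Figure 7:

```python
import numpy as np
from itertools import combinations

verts = list(combinations(range(5), 2))   # 10 vertices
n = len(verts)
A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        if not set(verts[i]) & set(verts[j]):   # adjacent iff disjoint
            A[i, j] = 1

# Eigenvalues with multiplicities
eig = np.round(np.linalg.eigvalsh(A)).astype(int)
spectrum = {lam: int((eig == lam).sum()) for lam in sorted(set(eig), reverse=True)}

assert spectrum == {3: 1, 1: 5, -2: 4}   # spec(Γ) = {3^1, 1^5, (−2)^4}
```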
Let σ(A) be the set of all (different) eigenvalues for some matrix A, and let λ ∈ σ(A). The
algebraic multiplicity of λ is the number of times it is repeated as a root of the characteristic
polynomial (recall that polynomial p(λ) = det(A − λI) is called the characteristic polynomial
for A). In other words, alg multA (λi ) = ai if and only if (x − λ1 )a1 ...(x − λs )as = 0 is the
characteristic equation for A. When alg multA (λ) = 1, λ is called a simple eigenvalue. The
geometric multiplicity of λ is dim ker(A − λI). In other words, geo multA (λ) is the maximal
number of linearly independent eigenvectors associated with λ.
A matrix A ∈ Matn×n(F) is said to be a reducible matrix when there exists a permutation
matrix P (a permutation matrix is a square 0-1 matrix that has exactly one entry 1 in each
row and each column and 0s elsewhere) such that

P⊤AP = [ X  Y ]
       [ 0  Z ],

where X and Z are both square. Otherwise A is said to be an irreducible matrix. P⊤AP is called a
symmetric permutation of A; the effect of P⊤AP is to interchange rows in the same way as
columns are interchanged.
In the rest of this chapter we recall some basic results from algebraic graph theory, that we
will need later:
(a.1) Since Γ is connected, A is an irreducible nonnegative matrix. Then, by the
Perron-Frobenius theorem, the maximum eigenvalue λ0 is simple, positive (in fact, it coincides
with the spectral radius of A ), and has a positive eigenvector v , say, which is useful to
normalize in such a way that minu∈V v u = 1. Moreover, Γ is regular if and only if v = j , the
all-1 vector (then λ0 = δ, the degree of Γ).
(a.2) The number of walks of length l ≥ 0 between vertices u and v is a^l_{uv} := (A^l)_{uv}.
(a.3) If Γ = (V, E) has spectrum spec(Γ) = {λ0^{m(λ0)}, λ1^{m(λ1)}, ..., λd^{m(λd)}}, then the total
number of (rooted) closed walks of length l ≥ 0 is trace(A^l) = Σ_{i=0}^{d} m(λi)λi^l.
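Facts (a.2) and (a.3) are easy to verify on a small example; a sketch (not from the text) for the 4-cycle, a hypothetical choice whose spectrum is {2^1, 0^2, (−2)^1}:

```python
import numpy as np

# 4-cycle: vertices 0-1-2-3-0
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

# (a.2): the (u,v)-entry of A^l counts walks of length l from u to v
A3 = np.linalg.matrix_power(A, 3)
assert A3[0, 1] == 4   # four walks of length 3 from 0 to 1

# (a.3): trace(A^l) = Σ m(λ_i) λ_i^l over the spectrum {2^1, 0^2, (−2)^1}
for l in range(6):
    Al = np.linalg.matrix_power(A, l)
    assert Al.trace() == 1 * 2**l + 2 * 0**l + 1 * (-2)**l
```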
(a.4) If Γ has d + 1 distinct eigenvalues, then {I, A , A 2 , ..., A d } is a basis of the adjacency
or Bose-Mesner algebra A(Γ) of matrices which are polynomials in A . Moreover, if Γ has
diameter D,
dimA(Γ) = d + 1 ≥ D + 1,
because {I, A , A 2 , ..., A D } is a linearly independent set of A(Γ). Hence, the diameter is always
less than the number of distinct eigenvalues: D ≤ d.
(a.5) A graph Γ = (V, E) with eigenvalues λ0 > λ1 > ... > λd is a regular graph if and only
if there exists a polynomial H ∈ Rd[x] such that H(A) = J, the all-1 matrix. This polynomial
is unique and it is called the Hoffman polynomial. It has zeros at the eigenvalues λi, i ≠ 0,
and H(λ0) = n := |V|. Thus,

H = (n/π0) Π_{i=1}^{d} (x − λi),

where π0 := Π_{i=1}^{d} (λ0 − λi).
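As a concrete check (a sketch, with the Petersen graph from Figure 7 as an assumed example): its spectrum {3^1, 1^5, (−2)^4} gives n = 10 and π0 = (3 − 1)(3 − (−2)) = 10, so H(x) = (x − 1)(x + 2) = x² + x − 2, and H(A) = J can be verified directly:

```python
import numpy as np
from itertools import combinations

# Petersen graph as the Kneser graph K(5,2)
verts = list(combinations(range(5), 2))
n = len(verts)
A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        if not set(verts[i]) & set(verts[j]):
            A[i, j] = 1

# Hoffman polynomial H(x) = (n/π0)(x − 1)(x + 2) = x^2 + x − 2
H_of_A = A @ A + A - 2 * np.eye(n, dtype=int)
assert (H_of_A == np.ones((n, n), dtype=int)).all()   # H(A) = J
```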
2 Perron-Frobenius theorem
(2.01) Lemma
Let h·, ·i be the standard inner product for Rn (hx, yi = x> y), and let A be a real
symmetric n × n matrix. If U is an A-invariant subspace of Rn , then U ⊥ is also A-invariant.
(2.02) Lemma
Consider an arbitrary rectangular matrix P of order m × n whose columns are linearly
independent. The column space of P is A-invariant if and only if there is a matrix D such
that AP = P D.
Proof: Denote by M the column space of P, i.e. M = span{P∗1, P∗2, ..., P∗n}, where P∗i is the ith
column of the matrix P. Because the columns of P are linearly independent, we have dim(M) = n.
(⇒) Assume that M is A-invariant. That means AP∗1 ∈ M, AP∗2 ∈ M, ..., AP∗n ∈ M.
Since M = span{P∗1, P∗2, ..., P∗n} and the vectors P∗1, P∗2, ..., P∗n are linearly independent, they
form a basis for the vector space M. Now, there are unique coefficients dij ∈ F such that
AP∗i = d1i P∗1 + d2i P∗2 + ... + dni P∗n (i = 1, ..., n),
or, simply, AP = P D.
(⇐) Assume that there is a matrix D such that AP = P D. Because
M = span{P∗1, P∗2, ..., P∗n} and the vectors P∗1, P∗2, ..., P∗n are linearly independent, they form a
basis for the vector space M. First note that AP∗i is the i-th column of AP. Since AP = P D, this
is equal to the i-th column of P D. But the i-th column of P D is d1i P∗1 + d2i P∗2 + ... + dni P∗n.
Therefore, AP∗i is a linear combination of P∗1, ..., P∗n.
Now, pick an arbitrary x ∈ M. We know that there are unique scalars c1, c2, ..., cn ∈ F such that
x = c1 P∗1 + c2 P∗2 + ... + cn P∗n. Now we have Ax = c1 AP∗1 + c2 AP∗2 + ... + cn AP∗n ∈ M,
since each AP∗i ∈ M. Therefore M is A-invariant.
(2.03) Lemma
Let A be a real symmetric matrix. If u and v are eigenvectors of A with different
eigenvalues, then u and v are orthogonal.
Proof: Suppose that Au = µu and Av = ηv. As A is symmetric, equation (1) implies that
µhv, ui = hv, Aui = hAv, ui = ηhv, ui. As µ 6= η, we must have hv, ui = 0.
(2.04) Lemma
The eigenvalues of a real symmetric matrix A are real numbers.
Proof: Let u be an eigenvector of A with eigenvalue λ. By taking the complex conjugate
of the equation Au = λu we get Āū = λ̄ū, which (since A is real) is equivalent to Aū = λ̄ū, and so ū is also
an eigenvector of A. Now, since eigenvectors are nonzero, we have ⟨u, ū⟩ > 0. The vectors u and ū
are eigenvectors of A, and if they had different corresponding eigenvalues λ and λ̄, then by
Lemma 2.03 ⟨u, ū⟩ = 0, a contradiction. We can conclude that λ = λ̄, and the lemma is proved.
(2.05) Lemma
Let A be an n × n real symmetric matrix. If U is a nonzero A-invariant subspace of Rn ,
then U contains a real eigenvector of A.
Proof: We know from Lemma 2.04 that the eigenvalues of a real symmetric matrix A are
real numbers. Pick one real eigenvalue, θ say. Notice that we can find at least one real
eigenvector for θ (we know from the definition of eigenvalue that there is some nonzero
eigenvector v, and if this vector has complex entries we can consider the equations
Av = θv and Av̄ = θv̄, from which A(v + v̄) = θ(v + v̄)). Hence a real symmetric
matrix A has at least one real eigenvector (any nonzero real vector in the kernel of A − θI, to be precise).
Let R be a matrix whose columns form an orthonormal basis for U. Then, because U is
A-invariant, AR = RB for some square matrix B (Lemma 2.02). Since RT R = I we have
RT AR = RT RB = B,
which implies that B is symmetric, as well as real. Since every symmetric matrix has at least
one eigenvalue, we may choose a real eigenvector u of B with eigenvalue λ. Then
ARu = RBu = λRu. Now, since u 6= 0 and the columns of R are linearly independent we
have Ru ≠ 0. Notice that if v = [v1, ..., vn]⊤ then Av = v1 A∗1 + ... + vn A∗n, where A∗i is the ith
column of the matrix A. Therefore, Ru is an eigenvector of A contained in U.
(2.06) Lemma
Let A be a real symmetric n × n matrix. Then Rn has an orthonormal basis consisting of
eigenvectors of A.
Proof: Let {u1, ..., um} be an orthonormal (and hence linearly independent) set of m < n
eigenvectors of A, and let M be the subspace that they span. Since A has at least one
real eigenvector in the nonzero A-invariant subspace M⊥ (Lemmas 2.01 and 2.05), we can
enlarge the set by a normalized eigenvector orthogonal to M. Repeating this argument until
m = n yields an orthonormal basis of Rn consisting of eigenvectors of A.
(2.07) Proposition
Suppose that A is an n × n matrix, with entries in R. Suppose further that A has
eigenvalues λ1 , λ2 , ..., λn ∈ R, not necessarily distinct, with corresponding eigenvectors
v1 , ..., vn ∈ Rn and that v1 , ..., vn are linearly independent. Then
P^{−1}AP = D,
where P = [v1 | v2 | ... | vn] and D = diag(λ1, λ2, ..., λn).
Proof: Since v1, ..., vn are linearly independent, they form a basis for Rn, so that every
u ∈ Rn can be written uniquely in the form u = c1v1 + c2v2 + ... + cnvn = P c, where
c = [c1, ..., cn]⊤. Applying A and using Avi = λivi gives Au = c1λ1v1 + ... + cnλnvn, and the
two sides can be written as AP c and P Dc respectively, so that
AP c = P Dc.
Note that c ∈ Rn is arbitrary. This implies that (AP − P D)c = 0 for every c ∈ Rn. Hence we
must have AP = P D. Since the columns of P are linearly independent, it follows that P is
invertible. Hence P^{−1}AP = D as required.
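As a quick numerical illustration (a sketch; the matrix below is a hypothetical example, not one from the text), numpy's `eigh` returns an orthonormal eigenbasis of a real symmetric matrix, so P^{−1}AP = D can be checked directly:

```python
import numpy as np

# A hypothetical 3 × 3 real symmetric matrix
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

lams, P = np.linalg.eigh(A)   # columns of P: orthonormal eigenvectors
D = np.diag(lams)

# P^{-1} A P = D, and since P is orthogonal, P^{-1} = P^T
assert np.allclose(np.linalg.inv(P) @ A @ P, D)
assert np.allclose(P.T, np.linalg.inv(P))
```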
(2.08) Proposition
Suppose that A is an n × n matrix, with entries in R. Suppose further that A is
diagonalizable. Then A has n linearly independent eigenvectors in Rn .
Proof: Suppose that A is diagonalizable. Then there exists an invertible matrix P, with
entries in R, such that D = P^{−1}AP is a diagonal matrix, with entries in R. Denote by v1, ..., vn
the columns of P. From AP = P D (where D = diag(λ1, λ2, ..., λn)) it is not hard to show that
Av1 = λ1v1, ..., Avn = λnvn. Since P is invertible, its columns v1, ..., vn are linearly
independent eigenvectors of A.
(2.09) Proposition
Let M be an n × n real symmetric matrix. Then there exists an orthogonal matrix
P = [v1 | v2 | ... | vn] such that
M = P DP⊤,
where D is a diagonal matrix whose diagonal entries are the eigenvalues of M, namely
D = diag(λ1, λ2, ..., λn), not necessarily distinct, with corresponding eigenvectors
v1, v2, ..., vn ∈ Rn.
Since the columns of P are orthonormal, P⊤P = I, that is,
P^{−1} = P⊤. (4)
Next, since {v1, v2, ..., vn} is a linearly independent set of eigenvectors, we have P^{−1}MP = D
(Proposition 2.07), where D is a diagonal matrix whose diagonal entries are the eigenvalues of
M, namely D = diag(λ1, λ2, ..., λn), not necessarily distinct, which correspond to the eigenvectors
v1, ..., vn ∈ Rn. In other words, M = P DP^{−1}. By Equation (4) the result follows.
⟨y, My⟩/⟨y, y⟩ ≤ λ0 for all y ∈ Rn\{0},
and
⟨y, My⟩/⟨y, y⟩ = λ0 ⇔ ⟨y, P DP⊤y⟩ = ⟨y, λ0y⟩ ⇔ ⟨P⊤y, DP⊤y⟩ = ⟨P⊤y, λ0I(P⊤y)⟩.
Proof: (i) Suppose S is a dependent set. If the vectors in S are arranged so that
M = {x1, x2, ..., xr} is a maximal linearly independent subset, then
x_{r+1} = Σ_{i=1}^{r} αi xi,
By assumption the v∗j ’s from {vi1 , vi2 , ..., viri }, 1 ≤ i ≤ k, are linearly independent, and hence
αij = 0 ∀i, j. Therefore, B is linearly independent.
Recall: Let σ(A) be a set of all (distinct) eigenvalues for some matrix A, and let λ ∈ σ(A).
The algebraic multiplicity of λ is the number of times it is repeated as a root of the
characteristic polynomial. In other words, alg multA (λi ) = ai if and only if
(x − λ1 )a1 ...(x − λs )as = 0 is the characteristic equation for A. The geometric multiplicity of λ
is dim ker(A − λI). In other words, geo multA (λ) is the maximal number of linearly
independent eigenvectors associated with λ.
Proof: (⇐) Suppose geo multA(λi) = alg multA(λi) = ai for each eigenvalue λi. If there are k
distinct eigenvalues, and if Bi is a basis for ker(A − λiI), then B = B1 ∪ B2 ∪ ... ∪ Bk contains
Σ_{i=1}^{k} ai = n vectors. We just proved in Proposition 2.11(ii) that B is a linearly independent
set, so B represents a complete set of linearly independent eigenvectors of A, and we know
this ensures that A must be diagonalizable.
(⇒) Conversely, if A is diagonalizable, and if λ is an eigenvalue for A with
alg multA(λ) = a, then there is a nonsingular matrix P such that

P^{−1}AP = D = [ λI_{a×a}  0 ]
               [ 0         B ]
Recall: A matrix A ∈ Matn×n(F) is said to be a reducible matrix when there exists a
permutation matrix P (a permutation matrix is a square 0-1 matrix that has exactly one
entry 1 in each row and each column and 0s elsewhere) such that

P⊤AP = [ X  Y ]
       [ 0  Z ],

where X and Z are both square. Otherwise A is said to be an irreducible matrix. P⊤AP is called a
symmetric permutation of A; the effect of P⊤AP is to interchange rows in the same way as
columns are interchanged.
that is, ⟨y, My⟩/⟨y, y⟩ ≥ λ0. By Theorem 2.10, ⟨z, Mz⟩/⟨z, z⟩ ≤ λ0 for all z ∈ Rn\{0}. We may conclude that
⟨y, My⟩/⟨y, y⟩ = λ0.
Since all elements in the sum are nonnegative, for each j = 1, 2, ..., n either (M)ij or yj must be
equal to 0. If (M)ij = 0 for all j = 1, 2, ..., n, then, since M is symmetric, we would have that
M is a reducible matrix, a contradiction. So there must be some j such that (M)ij ≠ 0. Now
consider a different case. If there exists one and just one j such that (M)ij = 0, and that j is i,
i.e. if (M)ii = 0 and (M)ij > 0 for all j ≠ i, j = 1, 2, ..., n, then we would obtain that all
entries of y are equal to 0, a contradiction. So there must be some j ∈ {1, 2, ..., n} such that
j ≠ i and (M)ij ≠ 0. For this j we must have yj = 0. Repeating this process over and
over for every such yj (and in a similar way, using irreducibility and the fact that y is an eigenvector),
we get that y = 0, which is a contradiction.
The assumption that there exists some i ∈ {1, 2, ..., n} such that yi = 0 leads us to a contradiction,
so it is not true. Therefore, the entries of the eigenvector y are all strictly positive, which also implies
that
any eigenvector x for the eigenvalue λ0 cannot have entries that are 0. (5)
Next we want to show that alg multA(λ0) = 1. First consider the geometric multiplicity.
Suppose there are two linearly independent eigenvectors x1, x2 ∈ ker(A − λ0I) for the
eigenvalue λ0. Then the vector z = αx1 + βx2 is also an eigenvector for the eigenvalue λ0, for every
choice of α, β ∈ R, not both zero. This means that for some choice of α and β we can force one entry zi = 0, a
contradiction with (5). So the eigenvalue λ0 must have geometric multiplicity 1. Since for
diagonalizable matrices the algebraic multiplicity is equal to the geometric multiplicity for every
eigenvalue λ (Theorem 2.12), and every symmetric matrix is diagonalizable (by Lemma 2.06
and Proposition 2.07), we may conclude that alg multA(λ0) = 1.
It is only left to show that for all other eigenvalues λi of M we must have |λi| ≤ λ0.
Assume that there exists some eigenvalue λi such that |λi| > λ0, and let y be an eigenvector that
corresponds to λi. Notice that
My = λiy ⇐⇒ y⊤My = λi y⊤y ⇐⇒ ⟨y, My⟩/⟨y, y⟩ = λi.
If we denote by z the vector z = |y|, then since |⟨y, My⟩| = |y⊤My| ≤ |y|⊤M|y| = z⊤Mz (the matrix M is
nonnegative) we have
(2.14) Example
a) Consider the matrix

A = [ 0 0 0 1 ]
    [ 0 0 0 1 ]
    [ 0 0 0 1 ]
    [ 1 1 1 0 ].

The characteristic polynomial of A is char(λ) = λ²(λ − √3)(λ + √3). It follows that the maximal
eigenvalue λ0 = √3 is simple, positive and coincides with the spectral radius of A. An eigenvector
for the eigenvalue λ0 is v = (1, 1, 1, √3)⊤, so it is positive.
FIGURE 8
Simple graph Γ1 and its adjacency matrix.
b) Consider the matrix

A = [ 0 0 0 0 1 0 ]
    [ 0 0 0 1 0 0 ]
    [ 0 0 0 1 0 0 ]
    [ 0 1 1 0 1 0 ]
    [ 1 0 0 1 0 1 ]
    [ 0 0 0 0 1 0 ].

The characteristic polynomial of A is char(λ) = λ²(λ − 1)(λ − 2)(λ + 1)(λ + 2). It follows that the
maximal eigenvalue λ0 = 2 is simple, positive and coincides with the spectral radius of A. An
eigenvector for the eigenvalue λ0 is v = (1, 1, 1, 2, 2, 1)⊤, so it is positive.
FIGURE 9
Simple graph Γ2 and its adjacency matrix. ♦
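The computations in Example 2.14 can be double-checked numerically; a minimal sketch (not part of the original text):

```python
import numpy as np

# Matrix from part a) (the star K_{1,3})
A1 = np.array([[0, 0, 0, 1],
               [0, 0, 0, 1],
               [0, 0, 0, 1],
               [1, 1, 1, 0]], dtype=float)

lam0 = max(np.linalg.eigvalsh(A1))
assert np.isclose(lam0, np.sqrt(3))           # spectral radius √3

v = np.array([1, 1, 1, np.sqrt(3)])
assert np.allclose(A1 @ v, lam0 * v)          # positive Perron eigenvector

# Matrix from part b)
A2 = np.array([[0, 0, 0, 0, 1, 0],
               [0, 0, 0, 1, 0, 0],
               [0, 0, 0, 1, 0, 0],
               [0, 1, 1, 0, 1, 0],
               [1, 0, 0, 1, 0, 1],
               [0, 0, 0, 0, 1, 0]], dtype=float)

assert np.isclose(max(np.linalg.eigvalsh(A2)), 2.0)
w = np.array([1, 1, 1, 2, 2, 1])
assert np.allclose(A2 @ w, 2 * w)
```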
(2.15) Proposition
Let Γ be a regular graph of degree k. Then:
(i) k is an eigenvalue of Γ.
(ii) If Γ is connected, then the multiplicity of k is one.
(iii) For any eigenvalue λ of Γ, we have |λ| ≤ k.
Proof: Recall that the degree of v is the number of edges of which v is an endpoint, and that a
graph is regular of degree k (or k-valent) if each of its vertices has degree k.
3. THE NUMBER OF WALKS OF A GIVEN LENGTH BETWEEN TWO VERTICES 19
(i) Let j denote the all-1 vector. Since Γ is regular of degree k, every row of A sums to k,
hence Aj = kj, so that k is an eigenvalue of Γ.
(ii) Let x = [x1, x2, ..., xn]⊤ denote any non-zero vector for which Ax = kx (that is, let x be
an arbitrary eigenvector corresponding to the eigenvalue k), and suppose that xj is an entry of x
having the largest absolute value. Comparing the jth entries of Ax = kx gives
(Ax)j = aj1x1 + aj2x2 + ... + ajnxn = kxj,
that is, Σ′xi = kxj, where the summation is over those
k vertices xi which are adjacent to xj. By the maximal property of xj, it follows that xi = xj
for all these vertices. If Γ is connected we may proceed successively in this way, eventually
showing that all entries of x are equal. Thus x is a multiple of j , and the space of eigenvectors
associated with the eigenvalue k has dimension one.
(iii) Suppose that Ay = λy, y ≠ 0, and let yj denote an entry of y which is largest in
absolute value. By the same argument as in (ii), we have Σ′yi = λyj, where the summation
is over those k vertices yi which are adjacent to yj, and so
|λ||yj| = |Σ′yi| ≤ Σ′|yi| ≤ k|yj|.
Since yj ≠ 0, it follows that |λ| ≤ k.
INDUCTION STEP
Denote the (u, v)-entry of A by auv and the (u, v)-entry of A^L by a^L_{uv}.
Suppose that the result is true for l = L, that is, there are a^L_{uv} walks of length L in Γ
between u and v. Consider the identity A^{L+1} = A^L A. We have
(A^{L+1})_{uv} = a^{L+1}_{uv} = Σ_{z∈V} a^L_{uz} azv.
(3.02) Example
Consider the graph Γ3 given in Figure 10. Say that we want to find the number of walks of
length 4 and of length 5 between vertices 3 and 7. The first thing we need to do is to find the
adjacency matrix of Γ3; after that, we need to find the (3, 7)-entry (or (7, 3)-entry) of A^4 and A^5.
FIGURE 10
Simple graph Γ3 and its adjacency matrix.
We have

A^4 = [ 7  6  5  1  1  0  0  6 ]      A^5 = [ 12 18  7  6  1  1  1 13 ]
      [ 6 12  2  5  0  1  1  6 ]            [ 18 14 17  2  7  1  1 18 ]
      [ 5  2  7  0  5  1  1  5 ]            [  7 17  2 12  2  6  6  7 ]
      [ 1  5  0  7  2  5  5  1 ]            [  6  2 12  2 17  7  7  6 ]
      [ 1  0  5  2 12  6  6  1 ]            [  1  7  2 17 14 18 18  1 ]
      [ 0  1  1  5  6  7  6  0 ]            [  1  1  6  7 18 12 13  1 ]
      [ 0  1  1  5  6  6  7  0 ]            [  1  1  6  7 18 13 12  1 ]
      [ 6  6  5  1  1  0  0  7 ]            [ 13 18  7  6  1  1  1 12 ],

so there is (A^4)_{37} = 1 walk of length 4 and there are (A^5)_{37} = 6 walks of length 5
between vertices 3 and 7. ♦
4. THE TOTAL NUMBER OF (ROOTED) CLOSED WALKS OF A GIVEN LENGTH 21
(4.02) Lemma
Let A be an n × n matrix with entries in R and suppose that A has r different eigenvalues
σ(A) = {λ1, λ2, ..., λr}. Let Ei denote the eigenspace corresponding to the eigenvalue λi, that is,
Ei = ker(A − λiI). Suppose further that dim(Ei) = mi for all 1 ≤ i ≤ r. Then the matrix A is
diagonalizable if and only if m1 + m2 + ... + mr = n.
Proof: The geometric multiplicity of λ is dim ker(A − λI). In other words, geo multA (λ) is
the maximal number of linearly independent eigenvectors associated with λ. By assumption
geo multA(λi) = dim(Ei) = mi (1 ≤ i ≤ r). Let Bi denote a basis for the eigenspace Ei, and consider the
set B = B1 ∪ B2 ∪ ... ∪ Br. Before we begin with the proof of this lemma we want to answer the
following question: how many vectors are in the set B?
For the eigenspaces E1, E2, ..., Er we have Ei ∩ Ej = {0} for i ≠ j. Why? Because if there were
some nonzero vector u such that u ∈ Ei and u ∈ Ej for i ≠ j, we would have
Au = λiu and Au = λju =⇒ λiu = λju =⇒ (λi − λj)u = 0,
which is impossible since u ≠ 0 and λi ≠ λj. Since the eigenspaces intersect trivially
and dim(Ei) = |Bi| = mi, we can conclude that the set B has m1 + m2 + ... + mr elements,
i.e.
|B| = m1 + m2 + ... + mr.
(⇒) Assume that A is diagonalizable. Then A has n linearly independent eigenvectors, say
{u1, u2, ..., un}, in Rn (Proposition 2.08). Since for every ui (1 ≤ i ≤ n) there exists some Ej
(j ∈ {1, 2, ..., r}) such that ui ∈ Ej, and since Ei ∩ Ej = {0}, we must have
dim(E1) + dim(E2) + ... + dim(Er) ≥ n, that is,
m1 + m2 + ... + mr ≥ n.
On the other hand, since Bi is a basis for Ei ⊆ Rn, and since B = B1 ∪ B2 ∪ ... ∪ Br is a
linearly independent set (Proposition 2.11(ii)), we must have |B| ≤ n. In other words,
m1 + m2 + ... + mr ≤ n.
Therefore
m1 + m2 + ... + mr = n.
(4.04) Lemma
Let Γ = (V, E) denote a simple graph with adjacency matrix A and with d + 1 distinct
eigenvalues λ0 , λ1 , ..., λd . Then there exist matrices E 0 , E 1 , ..., E d such that for every
function f(x) that has a finite value on σ(A) we have
f(A) = f(λ0)E0 + f(λ1)E1 + ... + f(λd)Ed.
(4.05) Proposition
Let Γ = (V, E) denote a simple graph with adjacency matrix A , with d + 1 distinct
eigenvalues λ0 , λ1 , ..., λd and let E 0 , E 1 , ..., E d be principal idempotents of Γ. Then each power
of A can be expressed as a linear combination of the idempotents Ei:
A^k = Σ_{i=0}^{d} λi^k Ei.
Proof: We have p(A) = Σ_{i=0}^{d} p(λi)Ei for every polynomial p ∈ R[x], where λi ∈ σ(A) (Lemma
4.04). If for the polynomial p(x) we pick p(x) = x^k, we have
A^k = Σ_{i=0}^{d} λi^k Ei.
(4.06) Proposition
Let Γ = (V, E) denote a simple graph with adjacency matrix A, spectrum
spec(Γ) = {λ0^{m(λ0)}, λ1^{m(λ1)}, ..., λd^{m(λd)}}, and let E0, E1, ..., Ed be the principal idempotents of Γ.
Then
trace(Ei) = m(λi), i = 0, 1, ..., d.
Proof: For each eigenvalue λi, 0 ≤ i ≤ d, we know that Ei = UiUi⊤, where Ui is a matrix whose
columns form an orthonormal basis for the eigenspace Ei = ker(A − λiI). From linear algebra
we also know that
trace(AB) = trace(BA)
for matrices A and B for which both products exist; the proof of this is easy:
trace(A_{m×n}B_{n×m}) = Σ_{i=1}^{m} (AB)ii = Σ_{i=1}^{m} Σ_{k=1}^{n} (A)ik(B)ki
= Σ_{k=1}^{n} Σ_{i=1}^{m} (B)ki(A)ik = Σ_{k=1}^{n} (BA)kk = trace(B_{n×m}A_{m×n}).
Therefore,
trace(Ei) = trace(UiUi⊤) = trace(Ui⊤Ui) = trace(I_{mi×mi}) = mi,
where mi = m(λi). Therefore, trace(Ei) = m(λi).
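These properties are easy to check numerically; a sketch (not part of the original text, assuming the Petersen graph from Figure 7, built in its Kneser-graph form) that computes the principal idempotents from an orthonormal eigenbasis and verifies trace(Ei) = m(λi) together with Proposition 4.05:

```python
import numpy as np
from itertools import combinations

# Petersen graph as the Kneser graph K(5,2)
verts = list(combinations(range(5), 2))
n = len(verts)
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if not set(verts[i]) & set(verts[j]):
            A[i, j] = 1

lams, U = np.linalg.eigh(A)
E = {}   # principal idempotents E_i = U_i U_i^T, one per distinct eigenvalue
for lam in (3, 1, -2):
    Ui = U[:, np.isclose(lams, lam)]
    E[lam] = Ui @ Ui.T

# trace(E_i) = m(λ_i)
assert np.isclose(np.trace(E[3]), 1)
assert np.isclose(np.trace(E[1]), 5)
assert np.isclose(np.trace(E[-2]), 4)

# A^k = Σ λ_i^k E_i (Proposition 4.05), checked for k = 3
A3 = sum(lam**3 * E[lam] for lam in E)
assert np.allclose(A3, np.linalg.matrix_power(A, 3))
```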
(4.07) Theorem
If Γ = (V, E) has spectrum spec(Γ) = {λ0^{m(λ0)}, λ1^{m(λ1)}, ..., λd^{m(λd)}}, then the total number of
(rooted) closed walks of length l ≥ 0 is trace(A^l) = Σ_{i=0}^{d} m(λi)λi^l.
Proof: The number of closed walks of length l from a vertex i to itself is (A^l)ii. Therefore, to obtain
the number of all closed walks of length l, we have to add the values (A^l)ii over all i, that is, we
have to take the trace of A^l. From Proposition 4.05 we have A^l = Σ_{i=0}^{d} λi^l Ei. If we take
the trace of both sides and use trace(Ei) = m(λi) (Proposition 4.06), we obtain
trace(A^l) = Σ_{i=0}^{d} m(λi)λi^l.
(4.08) Example
Consider the graph Γ4 given in Figure 11. This graph has three eigenvalues λ0 = 2,
λ1 = (√5 − 1)/2, λ2 = −(√5 + 1)/2, and spectrum
spec(Γ4) = {2^1, ((√5 − 1)/2)^2, (−(√5 + 1)/2)^2}.
FIGURE 11
Simple graph Γ4 and its adjacency matrix.
(5.02) Proposition
Let Γ = (V, E) denote a simple graph with adjacency matrix A and with d + 1 distinct
eigenvalues λ0, λ1, ..., λd. The principal idempotents E0, E1, ..., Ed satisfy the following properties:
(i) EiEj = δijEi, that is, EiEj = Ei if i = j and EiEj = 0 if i ≠ j;
(ii) AEi = λiEi, where λi ∈ σ(A);
(iii) p(A) = Σ_{i=0}^{d} p(λi)Ei, for every polynomial p ∈ R[x], where λi ∈ σ(A);
(iv) E0 + E1 + ... + Ed = Σ_{i=0}^{d} Ei = I;
(v) Σ_{i=0}^{d} λiEi = A, where λi ∈ σ(A).
Proof: From Definition 4.03 we have that Ei = UiUi⊤, where Ui is a matrix whose columns
form an orthonormal basis for the eigenspace Ei = ker(A − λiI), 0 ≤ i ≤ d. We know that
[U0⊤; U1⊤; ...; Ud⊤][U0 | U1 | ... | Ud] = I, so Ui⊤Uj = I if i = j and Ui⊤Uj = 0 otherwise. Therefore,
EiEj = UiUi⊤UjUj⊤ = UiUj⊤ if i = j and EiEj = 0 if i ≠ j, that is, EiEj = δijEi,
(5.03) Proposition
Suppose that non-zero vectors v1, ..., vr in a finite-dimensional real inner product space are
pairwise orthogonal. Then they are linearly independent.
Proof: Suppose that
α1v1 + ... + αrvr = 0.
Taking the inner product of both sides with vi gives αi⟨vi, vi⟩ = 0, and since vi ≠ 0 we have
αi = 0 for each i. Therefore the vectors are linearly independent.
(5.04) Proposition
If a simple graph Γ has d + 1 distinct eigenvalues, then {I, A, A2 , ..., Ad } is a basis of the
adjacency (Bose-Mesner) algebra A(Γ).
Proof: We have that the set {E0, E1, ..., Ed} forms an orthogonal set (Proposition 5.02(i)). That
means that the set {E0, E1, ..., Ed} is linearly independent (Proposition 5.03).
Next, from Proposition 5.02(iii) we see that if for the polynomial p we pick 1, x, x², ..., x^d,
then we can write A^i, for every i ∈ {0, 1, 2, ..., d}, as a linear combination of E0, E1, ..., Ed:
I = E0 + E1 + ... + Ed,
A = λ0E0 + λ1E1 + ... + λdEd,
A² = λ0²E0 + λ1²E1 + ... + λd²Ed,
...
A^d = λ0^d E0 + λ1^d E1 + ... + λd^d Ed.
The coefficient matrix B⊤ of this system is a Vandermonde matrix, and it is not hard to prove
that the columns of B⊤ constitute a linearly independent set (see [37], page 185) (hint: the
columns of B⊤ form a linearly independent set if and only if ker(B⊤) = {0}).
Now we ask the question: is {I, A, A², ..., A^d} a linearly independent set? Assume it is not.
Then there would be some numbers α0, α1, ..., αd ∈ R, not all zero, such that
α0I + α1A + α2A² + ... + αdA^d = 0. We would then obtain that
β0E0 + β1E1 + ... + βdEd = 0,
where
βi = α0 + α1λi + ... + αdλi^d, 0 ≤ i ≤ d.
In general, it may happen that βi = 0 for all i, even if some of the αi are not zero. But from the
fact that

[ β0 ]   [ 1  λ0  λ0²  ...  λ0^d ] [ α0 ]
[ β1 ]   [ 1  λ1  λ1²  ...  λ1^d ] [ α1 ]
[ β2 ] = [ 1  λ2  λ2²  ...  λ2^d ] [ α2 ]
[ ⋮  ]   [ ⋮   ⋮    ⋮         ⋮  ] [ ⋮  ]
[ βd ]   [ 1  λd  λd²  ...  λd^d ] [ αd ]

where the coefficient matrix B⊤ is actually a Vandermonde matrix, the above system has a unique
solution, and this implies that if some of the αi are nonzero, then also some of the βi are
nonzero. We obtain that {E0, E1, ..., Ed} is a linearly dependent set, a contradiction.
Since matrix B is invertible, every E k , k = 0, 1, 2, ..., d can be expressed as linear
combination of I, A , A 2 , ..., AP d
. Matrices A d+1 , A d+2 ,... we can write as linear combination of
I, A , A 2 ,..., A d because A ` = di=0 λ`iE i for every ` (Proposition 4.05). So {I, A , A 2 , ..., A d } is
maximal linearly independent set.
Therefore, {I, A, A2 , ..., Ad } is a basis of the adjacency algebra A(Γ).
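As an illustration, Proposition 5.04 can be checked numerically on a small sample graph; the following sketch (an assumption of this illustration, not part of the proof) uses the 6-cycle C6, whose four distinct eigenvalues are 2, 1, −1, −2, and verifies that the powers I, A, ..., A^d are linearly independent by computing the rank of their vectorizations.

```python
import numpy as np

# Adjacency matrix of the 6-cycle C6 (a sample graph for this sketch).
n = 6
A = np.zeros((n, n), dtype=int)
for v in range(n):
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1

# d + 1 = number of distinct eigenvalues (rounded to merge fp duplicates).
eigvals = np.linalg.eigvalsh(A)
distinct = sorted({round(ev, 8) for ev in eigvals})
d = len(distinct) - 1  # C6 has eigenvalues 2, 1, -1, -2, so d = 3

# Stack the vectorized powers I, A, ..., A^d as rows; their rank equals
# d + 1 exactly when they are linearly independent.
powers = [np.linalg.matrix_power(A, k).flatten() for k in range(d + 1)]
rank = np.linalg.matrix_rank(np.array(powers, dtype=float))
print(d + 1, rank)
```

The rank equals d + 1 = 4, in agreement with the proposition: the adjacency algebra A(Γ) of C6 has dimension 4.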
(5.05) Observation
From the last part of the proof of Proposition 5.04 we have that the principal
idempotents are in fact also elements of the Bose-Mesner algebra.
5. THE ADJACENCY (BOSE-MESNER) ALGEBRA A(Γ) 27
(5.06) Proposition
Let Γ = (V, E) denote a graph with diameter D. Then the set {I, A, A^2, ..., A^D} is
linearly independent.
Proof: Assume that α_0 I + α_1 A + ... + α_D A^D = 0 for some real scalars α_0, ..., α_D, not all 0.
Let i = max{0 ≤ j ≤ D : α_j ≠ 0}. Then
\[ A^i = -\frac{1}{\alpha_i} \left( \alpha_0 I + \alpha_1 A + ... + \alpha_{i-1} A^{i-1} \right). \tag{7} \]
Pick x, y ∈ V with ∂(x, y) = i. Recall that for 0 ≤ j ≤ D, the (x, y)-entry of A j is equal to the
number of all walks from x to y that are of length j (see Lemma 3.01). Therefore the
(x, y)-entry of A j is 0 for 0 ≤ j ≤ i − 1, and the (x, y)-entry of A i is nonzero. But this
contradicts Equation (7).
(5.07) Proposition
In a simple graph Γ with d + 1 distinct eigenvalues and diameter D, the diameter is
always less than the number of distinct eigenvalues: D ≤ d.
Proof: If Γ has d + 1 distinct eigenvalues, then {I, A , A 2 , ..., A d } is a basis of the adjacency
or Bose-Mesner algebra A(Γ) of matrices which are polynomials in A (Proposition 5.04).
Moreover, if Γ has diameter D,
dimA(Γ) = d + 1 ≥ D + 1,
because {I, A , A 2 , ..., A D } is a linearly independent set of A(Γ) (Proposition 5.06). Hence, the
diameter is always less than the number of distinct eigenvalues: D ≤ d.
(5.08) Example
a) Consider graph Γ5 given in Figure 12. Eigenvalues of Γ5 are λ0 = 3 and λ1 = −1, so
d + 1 = 2. Diameter is D = 1. Therefore D = d.
FIGURE 12
Simple graph Γ5 and its adjacency matrix.
b) Consider graph Γ6 given in Figure 13. Eigenvalues of Γ6 are λ_0 = 3, λ_1 = √5, λ_2 = 1,
λ_3 = −1 and λ_4 = −√5, so d + 1 = 5. Diameter of Γ6 is D = 3. Therefore D < d.
FIGURE 13
Simple graph Γ6 and its adjacency matrix. ♦
6 Hoffman polynomial
Matrices P and Q with dimension n × n are said to be similar matrices whenever there
exists a nonsingular matrix R such that P = R−1 QR. We write P ' Q to denote that P and
Q are similar.
(6.01) Lemma
Similar matrices have the same characteristic polynomial, so they have the same
eigenvalues with the same multiplicities.
(6.02) Lemma
If A and B are similar matrices and if A is diagonalizable, then B is diagonalizable.
Proof: Since A and B are similar, there exists a nonsingular matrix P such that
B = P^{−1} A P. Since the matrix A is diagonalizable, we know that there exists an invertible matrix
R, with entries in R, such that A = R A_0 R^{−1}, where A_0 is a diagonal matrix, with entries in R.
Now we have
B = P −1 AP = P −1 RA0 R−1 P = (P −1 R)A0 (P −1 R)−1 .
If we define D := P −1 R we have
B = DA0 D−1 .
Therefore, B is diagonalizable.
6. HOFFMAN POLYNOMIAL 29
(6.03) Lemma
Let A and B be diagonalizable matrices. Then AB = BA if and only if A and B can be
simultaneously diagonalized, i.e.,
A = U A0 U −1 and B = U B0 U −1
for some invertible matrix U, where A0 and B0 are diagonal matrices.
Proof: (⇒) Assume that the matrices A and B commute, and assume that
σ(A) = {λ_{k_0}, λ_{k_1}, ..., λ_{k_d}}, with multiplicities m(λ_{k_0}), m(λ_{k_1}), ..., m(λ_{k_d}). Since A is
diagonalizable there exists an invertible matrix P such that A_0 = P^{−1} A P, where the columns of P are
eigenvectors of A and A_0 = diag(λ_1, λ_2, ..., λ_n) (the λ's not necessarily distinct). We can reorder the
columns in the matrix P so that P produces
\[ A_0 = \begin{bmatrix} \lambda_{k_0} I & 0 & \cdots & 0 \\ 0 & \lambda_{k_1} I & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_{k_d} I \end{bmatrix}, \]
where each I is the identity matrix of appropriate size.
Now consider the matrix D = P^{−1} B P (from which it follows that B = P D P^{−1}). We have
AB = BA
P A_0 P^{−1} P D P^{−1} = P D P^{−1} P A_0 P^{−1}
P A_0 D P^{−1} = P D A_0 P^{−1}
A_0 D = D A_0
\[ \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} \begin{bmatrix} d_{11} & d_{12} & \cdots & d_{1n} \\ d_{21} & d_{22} & \cdots & d_{2n} \\ \vdots & \vdots & & \vdots \\ d_{n1} & d_{n2} & \cdots & d_{nn} \end{bmatrix} = \begin{bmatrix} d_{11} & d_{12} & \cdots & d_{1n} \\ d_{21} & d_{22} & \cdots & d_{2n} \\ \vdots & \vdots & & \vdots \\ d_{n1} & d_{n2} & \cdots & d_{nn} \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} \]
λ_i d_{ij} = d_{ij} λ_j
(λ_i − λ_j) d_{ij} = 0.
So, if λ_i ≠ λ_j we have d_{ij} = 0, and from this it follows that
\[ D = \begin{bmatrix} B^{(0)} & 0 & \cdots & 0 \\ 0 & B^{(1)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & B^{(d)} \end{bmatrix} \]
for some matrices B^{(0)}, B^{(1)}, ..., B^{(d)}, where each B^{(i)} is of dimension m(λ_{k_i}) × m(λ_{k_i}). Since B is
diagonalizable and D = P^{−1} B P, it follows that D is diagonalizable (Lemma 6.02), so there
exists an invertible matrix R, with entries in R, such that R^{−1} D R is a diagonal matrix, with
entries in R, that is
D = R D_0 R^{−1}.
Similar matrices have the same eigenvalues with the same multiplicities (Lemma 6.01), so we
have that D_0 = B_0, that is
D = R B_0 R^{−1},
and from the block-diagonal form of the matrix D we can notice that R has the form
\[ R = \begin{bmatrix} R_0 & 0 & \cdots & 0 \\ 0 & R_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & R_d \end{bmatrix}. \]
Now we have
B = P D P^{−1} = P R B_0 R^{−1} P^{−1}.
Notice that R A_0 R^{−1} is equal to A_0 because R_i (λ_{k_i} I) R_i^{−1} = λ_{k_i} I. Therefore, we have found a matrix
U := P R such that
A = U A_0 U^{−1} and B = U B_0 U^{−1}.
(⇐) Conversely, assume that there exists a matrix U such that
A = U A_0 U^{−1} and B = U B_0 U^{−1},
where A_0 and B_0 are diagonal matrices, for example
\[ A_0 = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix} \quad \text{and} \quad B_0 = \begin{bmatrix} b_{11} & 0 & \cdots & 0 \\ 0 & b_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & b_{nn} \end{bmatrix}. \]
Notice that A0 B0 = B0 A0 . Then
AB = U A0 U −1 U B0 U −1 = U A0 B0 U −1 = U B0 A0 U −1 = U B0 U −1 U A0 U −1 = BA.
Therefore, matrices A and B commute.
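The lemma (and, for symmetric matrices, Corollary 6.04) admits a small numerical illustration; the sketch below is an assumption of this illustration, with A the adjacency matrix of the 4-cycle and B = A², which commute because B is a polynomial in A. Since every eigenvector of A is then an eigenvector of B, one orthogonal matrix U diagonalizes both.

```python
import numpy as np

# Adjacency matrix of the 4-cycle; B = A @ A is a polynomial in A,
# so A and B commute and share eigenvectors.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
B = A @ A
commute = np.allclose(A @ B, B @ A)  # the hypothesis of the lemma

# An orthonormal eigenbasis of A; because B = A^2, it also diagonalizes B.
_, U = np.linalg.eigh(A)
A0 = U.T @ A @ U
B0 = U.T @ B @ U
off_A = np.abs(A0 - np.diag(np.diag(A0))).max()
off_B = np.abs(B0 - np.diag(np.diag(B0))).max()
print(commute, off_A, off_B)  # True, and both residues are ~0
```

Note the hedge in the comments: for general commuting symmetric matrices one must pick the joint eigenbasis more carefully inside degenerate eigenspaces, exactly as in the block-diagonal argument of the proof; here B is a polynomial in A, so any eigenbasis of A works.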
(6.04) Corollary
Let A and B be symmetric matrices. Then A and B are commuting matrices if and only if
there exists an orthogonal matrix U such that
A = U A0 U > , B = U B0 U > ,
where A0 is a diagonal matrix whose diagonal entries are the eigenvalues of A, and B0 is a
diagonal matrix whose diagonal entries are the eigenvalues of B.
Proof: The proof follows from Lemma 2.09 and from the proof of Lemma 6.03. If we use Lemma 2.09,
then in the proof of Lemma 6.03 the matrices P^{−1}, R^{−1} and U^{−1} can be replaced with P^⊤, R^⊤ and U^⊤,
respectively.
(6.05) Theorem (Hoffman)
Let Γ = (V, E) be a graph with adjacency matrix A, and let J denote the all-ones matrix. Then there exists a polynomial H(x) such that J = H(A) if and only if Γ is regular and connected.

Proof: (⇒) Assume that there exists a polynomial H(x) = h_0 + h_1 x + h_2 x^2 + ... + h_k x^k such
that J = H(A), that is, J = h_0 I + h_1 A + h_2 A^2 + ... + h_k A^k. Then we have
A J = h_0 A + h_1 A^2 + h_2 A^3 + ... + h_k A^{k+1} and
J A = h_0 A + h_1 A^2 + h_2 A^3 + ... + h_k A^{k+1},
that is, A J = J A (A commutes with J). In other words (since A is symmetric)
\[ \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{12} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & & \vdots \\ 1 & 1 & \cdots & 1 \end{bmatrix} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & & \vdots \\ 1 & 1 & \cdots & 1 \end{bmatrix} \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{12} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{nn} \end{bmatrix}. \]
Denote the valency of vertex u by δ_u. It is not hard to see that δ_u = (A J)_{uv} for arbitrary
v, and that δ_v = (J A)_{uv} for arbitrary u. Since (A J)_{uv} = (J A)_{uv} we have δ_u = δ_v for every
u, v ∈ V. So Γ is regular.
Next, we want to prove that the graph Γ is connected. It is not hard to see that, if u and v are
any two vertices of Γ, then for some t the (u, v)-entry of A^t is nonzero;
otherwise, no linear combination of the powers of A could have 1 as the (u, v)-entry, and
J = H(A) would be false. Thus, for some t, there is at least one walk of length t from u to v.
But this means Γ is connected.
(⇐) Conversely, assume that Γ is regular (of degree k) and connected. As we saw in the
proof of necessity, because Γ is regular, A commutes with J. Thus, since A and J are
symmetric commuting matrices, there exists an orthogonal matrix U such that
J = U J0 U > , A = U A0 U > ,
where J_0 is a diagonal matrix whose diagonal entries are the eigenvalues of J, namely
J_0 = diag(n, 0, 0, ..., 0), and A_0 is a diagonal matrix whose diagonal entries are the eigenvalues
of A, namely A_0 = diag(λ_{t_1}, λ_{t_2}, ..., λ_{t_n}). Now j = [1 1 ... 1]^⊤ is an eigenvector of both A and
J , with k and n the corresponding eigenvalues, a consequence of the fact that Γ is regular of
degree k. Because Γ is connected, k is an eigenvalue of A of multiplicity 1 (Proposition 2.15)
(also, from the same proposition, an eigenvalue of largest absolute value; see also Theorem
2.13). Let λ0 = k, λ1 , ..., λd be the distinct eigenvalues of A , and let
\[ H(x) = \frac{n \prod_{i=1}^{d} (x - \lambda_i)}{\prod_{i=1}^{d} (\lambda_0 - \lambda_i)}, \]
where n is the order of A. We can always reorder the columns of the matrix U in A = U A_0 U^⊤ and obtain
that A_0 is of the form A_0 = diag(λ_0, λ_{s_2}, ..., λ_{s_n}). Then
\[ H(A_0) = H\!\left( \begin{bmatrix} \lambda_0 & 0 & \cdots & 0 \\ 0 & \lambda_{s_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_{s_n} \end{bmatrix} \right) = \begin{bmatrix} H(\lambda_0) & 0 & \cdots & 0 \\ 0 & H(\lambda_{s_2}) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & H(\lambda_{s_n}) \end{bmatrix} = \begin{bmatrix} n & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix} = J_0, \]
that is
H(A_0) = J_0;
say H(x) = h_0 + h_1 x + ... + h_d x^d, so that
J_0 = h_0 I + h_1 A_0 + h_2 A_0^2 + ... + h_d A_0^d.
Notice that
A = U A_0 U^⊤,
A^2 = A · A = U A_0 U^⊤ U A_0 U^⊤ = U A_0^2 U^⊤,
...
A^d = A · A ··· A (d times) = U A_0 U^⊤ U A_0 U^⊤ ··· U A_0 U^⊤ = U A_0^d U^⊤,
so
J = U J_0 U^⊤ = U (h_0 I + h_1 A_0 + h_2 A_0^2 + ... + h_d A_0^d) U^⊤ = H(A).
32 CHAPTER I. BASIC RESULTS FROM ALGEBRAIC GRAPH THEORY
Let us call
\[ H(x) = \frac{n \prod_{i=1}^{d} (x - \lambda_i)}{\prod_{i=1}^{d} (\lambda_0 - \lambda_i)} \]
the Hoffman polynomial of the graph Γ, and say that the polynomial and the graph are associated
with each other. It is clear from this formula that this polynomial is of the smallest degree for
which J = H(A) holds. Further, the distinct eigenvalues of A, other than λ_0, are roots of
H(x).
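The formula above can be checked numerically; the following sketch (the choice of the 5-cycle C5, which is connected and 2-regular, is an assumption of this illustration) builds H(A) factor by factor from the distinct eigenvalues and compares it with J.

```python
import numpy as np

# Adjacency matrix of the 5-cycle C5 (connected, 2-regular).
n = 5
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1.0

# Distinct eigenvalues (rounded to merge floating-point duplicates):
# lam0 = 2 and the golden-ratio pair (sqrt(5)-1)/2, -(sqrt(5)+1)/2... wait,
# precisely 2*cos(2*pi/5) and 2*cos(4*pi/5).
eigvals = np.linalg.eigvalsh(A)
distinct = sorted({round(ev, 8) for ev in eigvals}, reverse=True)
lam0, rest = distinct[0], distinct[1:]

# H(A) = n * prod_i (A - lam_i I) / prod_i (lam0 - lam_i), lam_i != lam0.
H_A = n * np.eye(n)
for lam in rest:
    H_A = H_A @ (A - lam * np.eye(n)) / (lam0 - lam)

err = np.abs(H_A - np.ones((n, n))).max()
print(err)  # close to 0, so J = H(A)
```

The residue is at floating-point level, confirming J = H(A) for this regular connected graph.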
Chapter II
Distance-regular graphs
In this chapter we will define distance-regular graphs and show some examples of graphs
that are distance-regular. The main results will be the following characterizations:
(A) Γ is distance-regular if and only if it is distance-regular around each of its vertices and
with the same intersection array.
(B) A graph Γ = (V, E) with diameter D is distance-regular if and only if, for any integers
0 ≤ i, j ≤ D, its distance matrices satisfy
\[ A_i A_j = \sum_{k=0}^{D} p^k_{ij} A_k \quad (0 \le i, j \le D) \]
for some constants p^k_{ij}.
(C) A graph Γ = (V, E) with diameter D is distance-regular if and only if for every 0 ≤ h ≤ D there exists a polynomial p_h of degree h such that
\[ A_h = p_h(A) \quad (0 \le h \le D). \]
Characterizations (F), (H) and (I) contain two terms which may be unfamiliar:
predistance polynomials and the distance ◦-algebra D. Predistance polynomials are defined in
Definition 11.07 and the distance ◦-algebra D in Definition 8.07. Here we can say that the predistance
polynomials {p_i}_{0≤i≤d}, deg p_i = i, are a sequence of orthogonal polynomials with respect to
the scalar product ⟨p, q⟩ = (1/n) trace(p(A) q(A)), normalized in such a way that ||p_i||^2 = p_i(λ_0),
where spec(A) = {λ_0^{m_0}, λ_1^{m_1}, ..., λ_d^{m_d}}, and that the vector space D = span{I, A_1, A_2, ..., A_D} forms
an algebra with the entrywise (Hadamard) product of matrices, defined by
(X ◦ Y)_{uv} = (X)_{uv} (Y)_{uv}.
(F) Let Γ be a graph with diameter D, adjacency matrix A and d + 1 distinct eigenvalues
λ0 > λ1 > ... > λd . Let A i , i = 0, 1, ..., D, be the distance-i matrices of Γ, E j , j = 0, 1, ..., d, be
the principal idempotents of Γ, let pji , i = 0, 1, ..., D, j = 0, 1, ..., d, be constants and pj ,
j = 0, 1, ..., d, be the predistance polynomials. Finally, let A be the adjacency algebra of Γ,
and d = D. Then
(I) Let Γ be a graph with diameter D, adjacency matrix A and d + 1 distinct eigenvalues
λ_0 > λ_1 > ... > λ_d. Let A_i, i = 0, 1, ..., D, be the distance-i matrices of Γ, E_j, j = 0, 1, ..., d, be
the principal idempotents of Γ, and let a_i^{(j)}, i = 0, 1, ..., D, j = 0, 1, ..., d, be constants. Finally,
7. DEFINITIONS AND EASY RESULTS 35
⇐⇒ A j ∈ D, j = 0, 1, ..., d.
FIGURE 14
Petersen graph. For example, we have ∂(v1 , v2 ) = 2, Γ1 (v1 ) = {u1 , v3 , v4 },
Γ2 (v2 ) = {u1 , u3 , u4 , u5 , v1 , v3 }, |Γ1 (v1 ) ∩ Γ2 (v2 )| = |{u1 , v3 }| = 2.
FIGURE 15
The cube.
From Definition 9.01 we will see that the cube belongs to the family of Hamming graphs, and in
Lemma 9.08 we will prove that Hamming graphs are distance-regular. Compared with the direct
verification from the definition given here, the proof of Lemma 9.08 is much more elegant.
Let V = {0, 1, ..., 7} denote the set of vertices of the cube. Notice that the diameter of the graph
is 3 (D = 3). We must show that there exist numbers p^h_{ij} (0 ≤ h, i, j ≤ 3) such that for any
pair x, y ∈ V with ∂(x, y) = h we have
|Γ_i(x) ∩ Γ_j(y)| = |{z ∈ V : ∂(x, z) = i and ∂(z, y) = j}| = p^h_{ij}. Because we want to use only the
definition, we must consider all possible numbers p^h_{ij}, and for every one of these numbers we must
examine all possible pairs. In other words, since ∂(x, y) = ∂(y, x), we will have to
examine
|Γ0 (x) ∩ Γ0 (y)|, |Γ0 (x) ∩ Γ1 (y)|, |Γ0 (x) ∩ Γ2 (y)|, |Γ0 (x) ∩ Γ3 (y)|
|Γ1 (x) ∩ Γ1 (y)|, |Γ1 (x) ∩ Γ2 (y)|, |Γ1 (x) ∩ Γ3 (y)|
|Γ2 (x) ∩ Γ2 (y)|, |Γ2 (x) ∩ Γ3 (y)|
|Γ3 (x) ∩ Γ3 (y)|
for every pair of vertices x, y ∈ V .
FIGURE 16
The cube drawn in four different ways, and the subsets of vertices at given distances from the root.
Consider the number p0ii for 0 ≤ i ≤ 3, that is consider |Γ0 (x) ∩ Γ0 (y)|, |Γ1 (x) ∩ Γ1 (y)|,
|Γ2 (x) ∩ Γ2 (y)|, |Γ3 (x) ∩ Γ3 (y)| for every two vertices x, y such that ∂(x, y) = 0. Note that for
x, y ∈ V we have ∂(x, y) = 0 if and only if x = y. Therefore p0ii = |Γi (x) ∩ Γi (x)| = |Γi (x)| for
every x ∈ V. But it is easy to see that |Γi (x)| = 1 if i ∈ {0, 3}, and |Γi (x)| = 3 if i ∈ {1, 2}.
Consider the number p0ij for 0 ≤ i, j ≤ 3, where i 6= j. Note that for x, y ∈ V we have
∂(x, y) = 0 if and only if x = y. Since i 6= j we have p0ij = |Γi (x) ∩ Γj (x)| = |∅| = 0 for every
x ∈ V.
Next we want to find numbers p1ij for 0 ≤ i, j ≤ 3. Note that for x, y ∈ V we have
∂(x, y) = 1 if and only if x and y are neighbors. Using Figure 16 one can easily find that
p100 = 0, p101 = 1 = p110 (for example |Γ0 (0) ∩ Γ1 (2)| = |{0} ∩ {0, 3, 6}| = |{0}| = 1),
p102 = 0 = p120 , p103 = 0 = p130 (for example |Γ0 (0) ∩ Γ3 (2)| = |{0} ∩ {5}| = |∅| = 0), p111 = 0,
p112 = 2 = p121 , p113 = 0 = p131 , p122 = 0, p123 = 3 = p132 , p133 = 0.
We leave it to the reader, as an easy exercise, to evaluate p^2_{ij} for 0 ≤ i, j ≤ 3 and p^3_{ij} for
0 ≤ i, j ≤ 3 (p^2_{00} = 0, p^2_{01} = 0 = p^2_{10}, p^2_{02} = 1 = p^2_{20}, p^2_{03} = 0 = p^2_{30}, p^2_{11} = 2, ...). ♦
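The brute-force verification sketched in Example 7.02 can be automated; in the following sketch the vertices are encoded as bitstrings of length 3 (an assumption of this illustration), so that the cube's path-length distance is the Hamming distance.

```python
import itertools

# Vertices of the cube as bitstrings; distance = Hamming distance.
V = list(itertools.product([0, 1], repeat=3))
dist = {(x, y): sum(a != b for a, b in zip(x, y)) for x in V for y in V}

# For every pair (x, y) and every (i, j), count |Gamma_i(x) ∩ Gamma_j(y)|
# and check that the count depends only on h = d(x, y), i and j.
p = {}
consistent = True
for x in V:
    for y in V:
        h = dist[(x, y)]
        for i in range(4):
            for j in range(4):
                count = sum(1 for z in V
                            if dist[(x, z)] == i and dist[(z, y)] == j)
                if (h, i, j) in p and p[(h, i, j)] != count:
                    consistent = False
                p[(h, i, j)] = count

print(consistent, p[(1, 1, 2)])  # True, and p^1_{12} = 2 as computed above
```

The script confirms, among other values from the example, p^0_{11} = 3 and p^1_{12} = 2, so the cube satisfies the definition of a distance-regular graph.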
It is clear from the solution of Example 7.02 that the given definition of distance-regular
graphs is very inconvenient if we want to check whether a given graph is distance-regular or
not. Therefore, we want to obtain characterizations of distance-regular graphs which will
make it easier to check whether a given graph is distance-regular or not. In Theorem 8.12
(Characterization A), Theorem 8.15 (Characterization B), Theorem 8.22 (Characterization C)
and so on, we will obtain statements that are equivalent to the definition of a distance-regular
graph and which are "easier" to apply.
(7.03) Proposition
Let Γ = (V, E) be a distance-regular graph with diameter D. Then:
(i) For 0 ≤ h, i, j ≤ D we have phij = 0 whenever one of h, i, j is greater than the sum of
the other two.
(ii) For 0 ≤ h, i, j ≤ D we have phij 6= 0 whenever one of h, i, j is equal to the sum of the
other two.
(iii) For every x ∈ V and for every integer 0 ≤ i ≤ D we have p0ii = |Γi (x)|.
(iv) Γ is regular with valency p011 .
Proof: (i) Pick x, y ∈ V with ∂(x, y) = h and assume p^h_{ij} ≠ 0. This means that there is z ∈ V
such that ∂(x, z) = i and ∂(y, z) = j. By the triangle inequality for the path-length distance ∂ we
have h ≤ i + j, i ≤ h + j and j ≤ i + h. It follows that none of h, i, j is greater than the sum of
the other two.
FIGURE 17
Illustration for sets Γi (x) in connected graph Γ
(ii) Assume that one of h, i, j is the sum of the other two. If h = i + j, then pick x, y ∈ V
with ∂(x, y) = h, and let z denote a vertex which is at distance i from x and which lies on
some shortest path between x and y. Note that z is at distance j from y, and so phij 6= 0.
If i = h + j, then pick x, z ∈ V with ∂(x, z) = i. Let y denote a vertex which is at distance
h from x and which lies on some shortest path between x and z. Note that ∂(y, z) = j, and so
z ∈ Γ_i(x) ∩ Γ_j(y). Therefore, p^h_{ij} = |Γ_i(x) ∩ Γ_j(y)| ≠ 0. The case j = h + i is done analogously.
(iii) Pick x ∈ V and note that p0ii = |Γi (x) ∩ Γi (x)| = |Γi (x)|.
(iv) Immediately from (iii) above.
(7.04) Lemma
Let Γ = (V, E) be a distance-regular graph with diameter D, and let ki = p0ii . Then:
(i) kh phij = kj pjih for 1 ≤ i, j, h ≤ D;
(ii) ph1,h−1 + ph1h + ph1,h+1 = k1 for 0 ≤ h ≤ D;
(iii) if h + i ≤ D then ph1,h−1 ≤ pi1,i+1 .
Proof: (i) Fix x ∈ V. Let us count the number of pairs y, z ∈ V such that ∂(x, y) = h,
∂(x, z) = j and ∂(y, z) = i. We can choose y in k_h different ways (k_h = p^0_{hh} = |Γ_h(x)|), and for
every such y, there are p^h_{ij} vertices z (with ∂(x, z) = j and ∂(y, z) = i). Therefore, there are k_h p^h_{ij} such
pairs.
On the other hand, we can choose z in k_j different ways, and for every such z, there are p^j_{ih}
vertices y (with ∂(x, y) = h and ∂(y, z) = i). Therefore, there are k_j p^j_{ih} such pairs.
It follows that kh phij = kj pjih .
FIGURE 18
Illustration for numbers phij and for sets Γh (x) (vertices that are on distance h from x) of
distance-regular graph.
(ii) p^h_{1,h−1} + p^h_{1h} + p^h_{1,h+1} is the number of neighbors of an arbitrary vertex from Γ_h(x),
1 ≤ h ≤ D. Since Γ is regular with valency k_1 (by Proposition 7.03(iv)), this number is equal
to k_1.
(iii) Pick an arbitrary y ∈ Γ_h(x) and an arbitrary z ∈ Γ_{h+i}(x) ∩ Γ_i(y) (such z exists because
h + i ≤ D). Notice that z is at distance i from y. We have Γ_{h−1}(x) ∩ Γ_1(y) ⊆ Γ_{i+1}(z) ∩ Γ_1(y),
because all vertices that are in Γ_{h−1}(x) ∩ Γ_1(y) are at distance i + 1 from z, while there may
be some vertices in Γ_{i+1}(z) ∩ Γ_1(y) which are not in Γ_{h−1}(x). Therefore
p^h_{1,h−1} = |Γ_1(y) ∩ Γ_{h−1}(x)| ≤ |Γ_1(y) ∩ Γ_{i+1}(z)| = p^i_{1,i+1}.
For a better understanding of distance-regular graphs we will next introduce the concept of
distance-regularity around a vertex (local distance-regularity).
FIGURE 19
Intersection numbers around y.
FIGURE 20
Simple connected regular graph that is distance-regular around vertices 1 and 8 (the intersection
numbers around 1 are c_0 = 0, a_0 = 0, b_0 = 4, c_1 = 1, a_1 = 0, b_1 = 3, c_2 = 2, a_2 = 0, b_2 = 2,
c_3 = 3, a_3 = 0, b_3 = 1, c_4 = 4, a_4 = 0, b_4 = 0). This graph is known as the Hoffman graph.
FIGURE 21
Simple connected graph that is distance-regular around vertex 14.
Then every vertex has the same eccentricity ε and the diameter of Γ is D = ε. Directly from the
definition of distance-regularity around a vertex it follows that for every 0 ≤ h ≤ D there exist
numbers c_h, a_h and b_h such that for any pair of vertices x, y ∈ Γ with ∂(x, y) = h, we have
c_h = |Γ_1(y) ∩ Γ_{h−1}(x)|, a_h = |Γ_1(y) ∩ Γ_h(x)| and b_h = |Γ_1(y) ∩ Γ_{h+1}(x)|.
(7.08) Comment
Let x, y be any pair of vertices with ∂(x, y) = h, and let Γ = (V, E) denote an arbitrary
connected graph which is distance-regular around each of its vertices and with the same
intersection array. Then the intersection number a_h is equal to the number of neighbors of
vertex x that are at distance h from y, the coefficient b_h represents the number of neighbors of
vertex x that are at distance h + 1 from y, and the coefficient c_h represents the number of neighbors
of vertex x that are at distance h − 1 from y. ♦
FIGURE 22
Illustration for the coefficients a_h, b_h and c_h in a connected graph which is distance-regular around
each of its vertices and with the same intersection array. ♦
(7.09) Lemma
Let Γ = (V, E) denote an arbitrary connected graph which is distance-regular around each of
its vertices and with the same intersection array. Then the following (i)-(iii) hold.
(i) Γ is regular with valency k = b0 .
(ii) a0 = 0 and c1 = 1.
(iii) ai + bi + ci = k for 0 ≤ i ≤ D.
Proof: Consider the graph pictured in Figure 22. Where are the edges of this graph? We can notice
that it is not possible to have an edge between the sets Γ_i(y) and Γ_{i+2}(y) for any 1 ≤ i ≤ D − 2
(Why?). So every edge in this graph is between Γ_i(y) and Γ_{i+1}(y), and because of that
a_h + b_h + c_h is the valency of vertex x.
(i) Pick x ∈ V. We have |Γ1 (x)| = |Γ1 (x) ∩ Γ1 (x)| = b0 (see Comment 7.07). It follows that
Γ is regular with valency k = b0 .
(ii) Pick x ∈ V and note that we have a0 = |Γ1 (x) ∩ Γ0 (x)| = |∅| = 0. Pick y ∈ V such that
∂(x, y) = 1 and note that we have c1 = |Γ1 (x) ∩ Γ0 (y)| = |{y}| = 1.
(iii) Pick x ∈ V, 0 ≤ i ≤ D and y ∈ Γ_i(x). Note that, by the definition of the path-length distance,
all neighbors of y are at distance either i − 1, i or i + 1 from x. Therefore,
Γ_1(y) is a disjoint union of Γ_1(y) ∩ Γ_{i−1}(x), Γ_1(y) ∩ Γ_i(x) and Γ_1(y) ∩ Γ_{i+1}(x), and so we have
k = b_0 = |Γ_1(y)| = |Γ_1(y) ∩ Γ_{i−1}(x)| + |Γ_1(y) ∩ Γ_i(x)| + |Γ_1(y) ∩ Γ_{i+1}(x)| = c_i + a_i + b_i.
(7.10) Proposition
Let Γ = (V, E) denote an arbitrary connected graph which is distance-regular around each of
its vertices and with the same intersection array, and let k_i = p^0_{ii}. Then
(i) b0 ≥ b1 ≥ b2 ≥ ... ≥ bD−1 ;
(ii) c1 ≤ c2 ≤ c3 ≤ ... ≤ cD ;
(iii) ki−1 bi−1 = ki ci for 1 ≤ i ≤ D;
(iv) ki = (b0 b1 ...bi−1 )/(c1 c2 ...ci ) for 1 ≤ i ≤ D.
Proof: (i) Pick x, y ∈ V with ∂(x, y) = i. Consider a shortest path [x, z_1, z_2, ..., z_{i−2}, z_{i−1}, y]
from x to y. Consider the distance-partitions of Γ with respect to the vertices x and z_1 (see Figure
23 for an illustration). Denote by B_i the set B_i = Γ_{i+1}(x) ∩ Γ_1(y), by B_{i−1} the set B_{i−1} = Γ_i(z_1) ∩ Γ_1(y),
and notice that b_i = |B_i|, b_{i−1} = |B_{i−1}|. Pick an arbitrary vertex w ∈ B_i. We have that w ∼ y and
∂(z_1, w) = i (indeed, ∂(z_1, w) ≥ ∂(x, w) − 1 = i and ∂(z_1, w) ≤ ∂(z_1, y) + 1 = i). This means that w ∈ B_{i−1}. We conclude that B_i ⊆ B_{i−1}, and therefore
b_i = |B_i| ≤ |B_{i−1}| = b_{i−1}.
FIGURE 23
Illustration for coefficient ai , bi and ci in connected graph Γ with two different partition.
(ii) We will keep all notations from (i). Notice that y must be in Γi−1 (z1 ). Now, denote by
M set M = Γi−1 (x) ∩ Γ1 (y), by N set N = Γi−2 (z1 ) ∩ Γ1 (y), and notice that |M | = ci ,
|N | = ci−1 . Pick arbitrary u ∈ N. Note that since u is a neighbor of y, we have
∂(x, u) ∈ {i − 1, i, i + 1}. But on the other hand, ∂(u, x) ≤ i − 1, since ∂(u, z1 ) = i − 2 and z1
is a neighbor of x. Therefore ∂(x, u) = i − 1, and so u ∈ M. Therefore N ⊆ M and so
ci−1 = |N | ≤ |M | = ci .
(iii) In Lemma 7.04(i) we have shown that k_{i−1} p^{i−1}_{1i} = k_i p^i_{1,i−1}, 1 ≤ i ≤ D; but in the new
symbols that means precisely k_{i−1} b_{i−1} = k_i c_i for 1 ≤ i ≤ D.
(iv) Pick y ∈ V and consider the distance partition with respect to y. We claim that
|Γ_i(y)| = (b_0 b_1 ... b_{i−1})/(c_1 c_2 ... c_i). We will prove the result using induction on i.
BASIS OF INDUCTION
Observe that b0 = k1 , so the formula holds for i = 1.
INDUCTION STEP
Assume now that formula holds for i < D. We will show that formula holds also for i + 1.
Note that by (iii) we have ki+1 = bi ki /ci+1 . Since by the induction hypothesis we have
ki = (b0 b1 ...bi−1 )/(c1 c2 ...ci ), the result follows.
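The formula of part (iv) can be checked on the cube; the sketch below (vertices again as bitstrings, an assumption of this illustration) reads b_i and c_i off the distance partition and compares the layer sizes k_i with the product formula.

```python
import itertools
from math import prod

# Cube vertices as bitstrings; distance = Hamming distance.
V = list(itertools.product([0, 1], repeat=3))
dist = lambda u, v: sum(a != b for a, b in zip(u, v))

x = V[0]          # root of the distance partition
D = 3
k = [sum(1 for v in V if dist(x, v) == i) for i in range(D + 1)]

def b_of(i):      # neighbors of a layer-i vertex lying one layer further out
    y = next(v for v in V if dist(x, v) == i)
    return sum(1 for v in V if dist(y, v) == 1 and dist(x, v) == i + 1)

def c_of(i):      # neighbors of a layer-i vertex lying one layer closer in
    y = next(v for v in V if dist(x, v) == i)
    return sum(1 for v in V if dist(y, v) == 1 and dist(x, v) == i - 1)

b = [b_of(i) for i in range(D)]          # expected [3, 2, 1]
c = [c_of(i) for i in range(1, D + 1)]   # expected [1, 2, 3]

# k_i * (c_1 ... c_i) == b_0 ... b_{i-1}, i.e. k_i = (b_0...b_{i-1})/(c_1...c_i).
ok = all(k[i] * prod(c[:i]) == prod(b[:i]) for i in range(1, D + 1))
print(k, b, c, ok)
```

For the cube this gives k = (1, 3, 3, 1) with intersection array {3, 2, 1; 1, 2, 3}, matching the proposition.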
8. CHARACTERIZATION OF DRG INVOLVING THE DISTANCE MATRICES 43
FIGURE 24
Octahedron.
Distance-i matrices for the octahedron (that is, for the graph pictured in Figure 24) are
\[ A_0 = I, \qquad A_1 = \begin{bmatrix} 0&1&0&1&1&1 \\ 1&0&1&0&1&1 \\ 0&1&0&1&1&1 \\ 1&0&1&0&1&1 \\ 1&1&1&1&0&0 \\ 1&1&1&1&0&0 \end{bmatrix}, \qquad A_2 = \begin{bmatrix} 0&0&1&0&0&0 \\ 0&0&0&1&0&0 \\ 1&0&0&0&0&0 \\ 0&1&0&0&0&0 \\ 0&0&0&0&0&1 \\ 0&0&0&0&1&0 \end{bmatrix}, \]
where A_0 = I is the 6 × 6 identity matrix.
(8.02) Theorem
For an arbitrary graph Γ = (V, E) which is distance-regular around each of its vertices and
with the same intersection array, the distance-i matrices of Γ satisfy
\[ A A_i = b_{i-1} A_{i-1} + a_i A_i + c_{i+1} A_{i+1} \quad (0 \le i \le D), \]
where a_i, b_i and c_i are the intersection numbers of Γ (see Comment 7.07) and A_{−1}, A_{D+1} are
the zero matrices.
Proof: (1◦) Let Γ = (V, E) be distance-regular around each of its vertices, with diameter D.
Then for all x, y_1, y_2, y_3 ∈ V for which ∂(x, y_1) = h − 1, ∂(x, y_2) = h and ∂(x, y_3) = h + 1, there
exist constants a_h, b_h and c_h (0 ≤ h < D) (known as intersection numbers) such that
44 CHAPTER II. DISTANCE-REGULAR GRAPHS
ah = |Γ1 (y2 ) ∩ Γh (x)|, ah−1 = |Γ1 (y1 ) ∩ Γh−1 (x)|, ah+1 = |Γ1 (y3 ) ∩ Γh+1 (x)|,
bh = |Γ1 (y2 ) ∩ Γh+1 (x)|, bh−1 = |Γ1 (y1 ) ∩ Γh (x)|, bh+1 = |Γ1 (y3 ) ∩ Γh+2 (x)|,
ch = |Γ1 (y2 ) ∩ Γh−1 (x)|, ch−1 = |Γ1 (y1 ) ∩ Γh−2 (x)|, ch+1 = |Γ1 (y3 ) ∩ Γh (x)|,
FIGURE 25
Illustration for numbers ah , bh and ch .
Similarly,
\[ (b_{h-1} A_{h-1} + a_h A_h + c_{h+1} A_{h+1})_{uv} = \begin{cases} a_h, & \text{if } \partial(u, v) = h \\ b_{h-1}, & \text{if } \partial(u, v) = h - 1 \\ c_{h+1}, & \text{if } \partial(u, v) = h + 1 \\ 0, & \text{otherwise.} \end{cases} \]
(ρ_U turns out to be the characteristic vector of U, that is, (ρ_U)_x = 1 if x ∈ U and (ρ_U)_x = 0
otherwise).
(8.04) Proposition
Let Γ = (V, E) denote a connected graph which is distance-regular around a vertex y, and let
c_k, a_k and b_k be the intersection numbers around y (k = 0, 1, ..., ecc(y)). Then the polynomials
obtained from the recurrence
x r_k = b_{k−1} r_{k−1} + a_k r_k + c_{k+1} r_{k+1}, with r_0 = 1, r_1 = x,
satisfy
r_k(A) e_y = ρ_{V_k} = A_k e_y,
where k = 0, 1, ..., ecc(y) and V_k := Γ_k(y).
Proof: Let Γ = (V, E) denote a connected graph which is distance-regular around the vertex y.
Since
\[ (\rho_{V_k})_x = \begin{cases} 1, & \text{if } x \in V_k \\ 0, & \text{otherwise} \end{cases} = \begin{cases} 1, & \text{if } x \in \Gamma_k(y) \\ 0, & \text{otherwise} \end{cases} = \begin{cases} 1, & \text{if } \partial(x, y) = k \\ 0, & \text{otherwise} \end{cases} \quad (= (A_k)_{xy}) \]
and
\[ \rho_{V_k} := \sum_{z \in V_k} e_z = \sum_{z \in \Gamma_k(y)} e_z = \big( (A_k)_{1y}, (A_k)_{2y}, ..., (A_k)_{ny} \big)^{\top} = (A_k)_{*y}, \tag{8} \]
notice that
\[ (A \rho_{V_k})_x = \sum_{z \in V} a_{xz} (\rho_{V_k})_z = |\Gamma(x) \cap \Gamma_k(y)|, \]
that is
\[ (A \rho_{V_k})_x = |\Gamma(x) \cap V_k| = \begin{cases} a_k, & \text{if } \partial(y, x) = k \\ b_{k-1}, & \text{if } \partial(y, x) = k - 1 \\ c_{k+1}, & \text{if } \partial(y, x) = k + 1 \\ 0, & \text{otherwise,} \end{cases} \]
so we have
A ρ_{V_k} = b_{k−1} ρ_{V_{k−1}} + a_k ρ_{V_k} + c_{k+1} ρ_{V_{k+1}}.
On the other hand, since A_k = [(A_k)_{*1} | (A_k)_{*2} | ... | (A_k)_{*y} | ... | (A_k)_{*n}], from (8) we get
ρ_{V_k} = (A_k)_{*y} = A_k e_y, 1 ≤ k ≤ D, (9)
or in detail
A A_0 e_y = 0 + a_0 A_0 e_y + c_1 A_1 e_y,
A A_1 e_y = b_0 A_0 e_y + a_1 A_1 e_y + c_2 A_2 e_y,
A A_2 e_y = b_1 A_1 e_y + a_2 A_2 e_y + c_3 A_3 e_y,
...
A A_m e_y = b_{m−1} A_{m−1} e_y + a_m A_m e_y + 0, (10)
where m = ecc(y), b_{−1} = c_{m+1} = 0. On the other hand, the polynomials obtained from the
recurrence
x r_k = b_{k−1} r_{k−1} + a_k r_k + c_{k+1} r_{k+1}, with r_0 = 1, r_1 = x,
satisfy
A r_k(A) = b_{k−1} r_{k−1}(A) + a_k r_k(A) + c_{k+1} r_{k+1}(A),
that is
A r_0(A) e_y = 0 + a_0 r_0(A) e_y + c_1 r_1(A) e_y,
A r_1(A) e_y = b_0 r_0(A) e_y + a_1 r_1(A) e_y + c_2 r_2(A) e_y,
A r_2(A) e_y = b_1 r_1(A) e_y + a_2 r_2(A) e_y + c_3 r_3(A) e_y,
...
A r_m(A) e_y = b_{m−1} r_{m−1}(A) e_y + a_m r_m(A) e_y + 0. (11)
In the end, if we consider equations (9), (10) and (11), since r_0(A) = I and r_1(A) = A, with the
help of mathematical induction on k we obtain r_k(A) e_y = ρ_{V_k}.
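The conclusion of Proposition 8.04 can be observed numerically on the cube; the sketch below (cube vertices as bitstrings, and the intersection numbers b = (3, 2, 1), a = (0, 0, 0, 0), c = (1, 2, 3) taken from Example 7.02, both assumptions of this illustration) builds r_k(A) from the recurrence and compares r_k(A) e_y with the characteristic vector of Γ_k(y).

```python
import itertools
import numpy as np

# Cube as a graph on bitstrings of length 3; distance = Hamming distance.
V = list(itertools.product([0, 1], repeat=3))
n = len(V)
dist = lambda u, v: sum(s != t for s, t in zip(u, v))
A = np.array([[1.0 if dist(u, v) == 1 else 0.0 for v in V] for u in V])

b = [3, 2, 1]
a = [0, 0, 0, 0]
c = [0, 1, 2, 3]  # c[k] = c_k; c_0 is unused

# r_{k+1}(A) = (A r_k(A) - b_{k-1} r_{k-1}(A) - a_k r_k(A)) / c_{k+1},
# starting from r_0(A) = I and r_1(A) = A (since a_0 = 0, c_1 = 1).
R = [np.eye(n), A.copy()]
for k in (1, 2):
    R.append((A @ R[k] - b[k - 1] * R[k - 1] - a[k] * R[k]) / c[k + 1])

y = 0
e_y = np.zeros(n)
e_y[y] = 1.0
# r_k(A) e_y should be the 0/1 characteristic vector of Gamma_k(y).
ok = all(np.allclose(R[k] @ e_y,
                     [1.0 if dist(V[x], V[y]) == k else 0.0
                      for x in range(n)])
         for k in range(4))
print(ok)
```

Each r_k(A) e_y indeed picks out exactly the layer Γ_k(y), as the proposition asserts.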
(8.05) Proposition
Let Γ = (V, E) denote an arbitrary connected graph with diameter D which is distance-regular
around each of its vertices and with the same intersection array. Then for 0 ≤ i ≤ D there
exists a polynomial p_i of degree i such that
A_i = p_i(A).
Moreover, if p_i(A) = β_{0i} I + β_{1i} A + ... + β_{ii} A^i, then β_{0i}, β_{1i}, ..., β_{ii} depend only on the a_j, b_j, c_j.
From the induction hypothesis we know that for A_i and A_{i−1} there exist polynomials p_i
and p_{i−1}, of degrees i and i − 1, such that A_i = p_i(A) and A_{i−1} = p_{i−1}(A). The result now
follows from the equation A_{i+1} = (1/c_{i+1})(A A_i − b_{i−1} A_{i−1} − a_i A_i) and the induction hypothesis.
(8.06) Lemma
Let A_i ∈ Mat_Γ(R) (1 ≤ i ≤ D) denote the distance-i matrices. The vector space D defined by
D = span{I, A_1, A_2, ..., A_D}
forms an algebra with the entrywise (Hadamard) product of matrices, defined by
(X ◦ Y)_{uv} = (X)_{uv} (Y)_{uv}.
Proof: If we want to prove this lemma, we must show that all conditions from the definition of an
algebra are satisfied. Here we will only show that for arbitrary X, Y ∈ D we have X ◦ Y ∈ D.
First notice that A_i ◦ A_j ∈ D for 0 ≤ i, j ≤ D, since A_i ◦ A_j = 0 if i ≠ j, and A_i ◦ A_j = A_i if i = j.
But now, the general proof that for X, Y ∈ D we have X ◦ Y ∈ D is a consequence of the
fact that D is a vector space.
The rest of the proof is left to the reader as an easy exercise, i.e., it remains to show that:
D is a vector space;
(X ◦ Y) ◦ Z = X ◦ (Y ◦ Z), ∀X, Y, Z ∈ D;
X ◦ (Y + Z) = (X ◦ Y) + (X ◦ Z), ∀X, Y, Z ∈ D;
(X + Y) ◦ Z = (X ◦ Z) + (Y ◦ Z), ∀X, Y, Z ∈ D;
∀X, Y ∈ D and ∀α ∈ R we have α(X ◦ Y) = (αX) ◦ Y = X ◦ (αY).
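The key step of the proof, A_i ◦ A_j = δ_ij A_i, can be confirmed numerically; the sketch below uses the cube's distance matrices (cube vertices as bitstrings, an assumption of this illustration) and numpy's entrywise `*` as the Hadamard product.

```python
import itertools
import numpy as np

# Distance matrices A_0, ..., A_3 of the cube: entry (u, v) of A_i is 1
# exactly when the Hamming distance between u and v is i.
V = list(itertools.product([0, 1], repeat=3))
dist = lambda u, v: sum(s != t for s, t in zip(u, v))
Amats = [np.array([[1 if dist(u, v) == i else 0 for v in V] for u in V])
         for i in range(4)]

# Every entry (u, v) is 1 in exactly one distance matrix, hence
# A_i ∘ A_j = 0 for i != j and A_i ∘ A_i = A_i ('*' is entrywise).
ok = all(np.array_equal(Amats[i] * Amats[j],
                        Amats[i] if i == j else 0 * Amats[i])
         for i in range(4) for j in range(4))
print(ok)
```

This is exactly why D is closed under ◦: each entry of a matrix in D is determined by a single distance class.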
FIGURE 26
Intersection A ∩ D for regular graphs. ♦
(8.09) Corollary
Let Γ = (V, E) denote an arbitrary connected graph which is distance-regular around each of
its vertices and with the same intersection array, and let A_i, 1 ≤ i ≤ D, be the distance-i
matrices. Then
A^n ∈ D
for an arbitrary non-negative integer n. Moreover, if A^n = β_0 A_0 + β_1 A_1 + ... + β_D A_D, then β_0,
β_1, ..., β_D depend only on the a_j, b_j, c_j.
BASIS OF INDUCTION
It is clear that the result holds for n = 0 and n = 1 (A^0 = I ∈ span{A_0, A_1, ..., A_D} and
A^1 = A ∈ span{A_0, A_1, ..., A_D}).
INDUCTION STEP
Assume now that the result holds for n. Then there are scalars α_0, ..., α_D such that
A^n = α_0 A_0 + α_1 A_1 + ... + α_D A_D. We have
A^{n+1} = A A^n = α_0 A A_0 + α_1 A A_1 + ... + α_D A A_D.
The result now follows from Theorem 8.02: A^{n+1} is then some linear combination
of A_0, A_1, ..., A_D, say δ_0 A_0 + δ_1 A_1 + ... + δ_D A_D, where δ_0, δ_1, ..., δ_D depend only on the a_j, b_j, c_j.
Recall from Definition 5.01 that the adjacency algebra A of a graph Γ is the algebra of
polynomials in the adjacency matrix A = A(Γ). By Proposition 5.04, the dimension of A is d + 1,
where d + 1 is the number of distinct eigenvalues of Γ.
(8.10) Corollary
Let Γ = (V, E) denote an arbitrary connected graph which is distance-regular around each of
its vertices and with the same intersection array. Then we have
A = D.
(8.11) Lemma
Let Γ = (V, E) denote a connected graph which is distance-regular around each of its vertices
and with the same intersection array. Then for 0 ≤ i, j ≤ D there exist numbers α^h_{ij}
(0 ≤ h ≤ D) such that
\[ A_i A_j = \sum_{h=0}^{D} \alpha^h_{ij} A_h. \]
Proof: From Corollary 8.10, A = D = span{A_0, A_1, ..., A_D}. That means that for every A_i, A_j ∈ A
we have A_i A_j ∈ A, and so there exist unique scalars α^h_{ij} (0 ≤ i, j, h ≤ D) such that
A_i A_j = α^0_{ij} A_0 + α^1_{ij} A_1 + ... + α^D_{ij} A_D.
Notice that
(α^0_{ij} A_0 + α^1_{ij} A_1 + ... + α^D_{ij} A_D)_{xy} = α^h_{ij} if ∂(x, y) = h.
If we consider Comment 7.07, since the distance between x and y is unique, for ∂(x, y) = h we have
\[ \alpha^h_{1j} = (A_1 A_j)_{xy} = \sum_{z \in V} (A_1)_{xz} (A_j)_{zy} = |\Gamma_1(x) \cap \Gamma_j(y)| = \begin{cases} a_j, & \text{if } \partial(x, y) = j \\ c_{j+1}, & \text{if } \partial(x, y) = j + 1 \\ b_{j-1}, & \text{if } \partial(x, y) = j - 1, \end{cases} \]
and
\[ \alpha^h_{ij} = \sum_{z \in V} (A_i)_{xz} (A_j)_{zy} = |\Gamma_i(x) \cap \Gamma_j(y)|. \]
c_h(u, v) := |Γ_{h−1}(u) ∩ Γ(v)|, a_h(u, v) := |Γ_h(u) ∩ Γ(v)|, b_h(u, v) := |Γ_{h+1}(u) ∩ Γ(v)|,
do not depend on the chosen vertices u and v, but only on their distance h; in which case they
are denoted by c_h, a_h, and b_h, respectively).
Proof: (⇒) If Γ = (V, E) is distance-regular then by definition there exist numbers p^h_{ij}
(0 ≤ i, j, h ≤ D) such that for any u, v ∈ V with ∂(u, v) = h we have |Γ_i(u) ∩ Γ_j(v)| = p^h_{ij}. If
we set j = 1, i ∈ {h − 1, h, h + 1}, we have that for any two vertices u, v ∈ V at distance
∂(u, v) = h, 0 ≤ h ≤ D, the numbers
c_h(u, v) := p^h_{h−1,1} = |Γ_{h−1}(u) ∩ Γ(v)|, a_h(u, v) := p^h_{h,1} = |Γ_h(u) ∩ Γ(v)|, b_h(u, v) := p^h_{h+1,1} = |Γ_{h+1}(u) ∩ Γ(v)|
do not depend on the chosen vertices u and v, but only on their distance h. In other
words we have c_h(u, v) = c_h, a_h(u, v) = a_h, b_h(u, v) = b_h, where the numbers c_h, a_h, b_h are the
intersection numbers from Comment 7.07.
(⇐) From Lemma 8.11 we have
|Γ_i(x) ∩ Γ_j(y)| = α^h_{ij}
for x, y with ∂(x, y) = h and for 0 ≤ i, j ≤ D. Therefore, Γ is distance-regular, with p^h_{ij} = α^h_{ij}
for 0 ≤ i, j, h ≤ D.
(8.13) Comment
Thus, one intuitive way of looking at distance-regularity is to "hang" the graph from a
given vertex and observe the resulting different "layers" in which the vertex set is partitioned,
that is, the subsets of vertices at given distances from the root: if the vertices in the same layer
are "neighborhood-indistinguishable" from each other, and the whole configuration does not
depend on the chosen vertex, the graph is distance-regular (see Figure 16 for an illustration,
the cube hung from each of its vertices).
Note also that for distance-regular graphs we have (see Corollary 8.10)
A ∩ D = A = D.
FIGURE 27
Intersection A ∩ D for distance-regular graphs. ♦
Proof: (⇒) Let Γ = (V, E) be a distance-regular graph with diameter D. Pick two arbitrary
vertices u and v at distance h (∂(u, v) = h), where 0 ≤ h ≤ D. Now, for every 0 ≤ i, j ≤ D
we have
\[ (A_i A_j)_{uv} = \sum_{x \in V} (A_i)_{ux} (A_j)_{xv} = |\Gamma_i(u) \cap \Gamma_j(v)| = p^h_{ij} = (p^h_{ij} A_h)_{uv}, \]
8. CHARACTERIZATION OF DRG INVOLVING THE DISTANCE MATRICES 51
where p^h_{ij} are the numbers from the definition of DRG (Definition 7.01). From the uniqueness of distance we have

A_i A_j = Σ_{k=0}^{D} p^k_{ij} A_k.
(⇐) Assume that for any integers 0 ≤ i, j ≤ D, the distance matrices of a graph Γ = (V, E) satisfy

A_i A_j = Σ_{k=0}^{D} p^k_{ij} A_k   (0 ≤ i, j ≤ D)

for some constants p^k_{ij}. Pick two arbitrary vertices u and v at distance h (∂(u, v) = h), where 0 ≤ h ≤ D. Consider the following equations:

|Γ_1(u) ∩ Γ_h(v)| = Σ_{x∈V} (A_1)_{ux} (A_h)_{xv} = (A_1 A_h)_{uv} = (Σ_{k=0}^{D} p^k_{1h} A_k)_{uv} = p^h_{1h},

|Γ_1(u) ∩ Γ_{h−1}(v)| = Σ_{x∈V} (A_1)_{ux} (A_{h−1})_{xv} = (A_1 A_{h−1})_{uv} = (Σ_{k=0}^{D} p^k_{1,h−1} A_k)_{uv} = p^h_{1,h−1},

|Γ_1(u) ∩ Γ_{h+1}(v)| = Σ_{x∈V} (A_1)_{ux} (A_{h+1})_{xv} = (A_1 A_{h+1})_{uv} = (Σ_{k=0}^{D} p^k_{1,h+1} A_k)_{uv} = p^h_{1,h+1}.

Now we see that the numbers |Γ_1(u) ∩ Γ_h(v)|, |Γ_1(u) ∩ Γ_{h−1}(v)|, |Γ_1(u) ∩ Γ_{h+1}(v)| depend only on the distance between u and v, so the result follows from Theorem 8.12 (Characterization A).
(8.16) Exercise
Show that the graph pictured in Figure 24 is distance-regular, and find the numbers p^h_{ij} (0 ≤ i, j, h ≤ D) from the definition of DRG (Definition 7.01).
Solution: It is not hard to compute the distance matrices of the given graph: A_0 = I (the 6 × 6 identity matrix) and

A_1 =
0 1 0 1 1 1
1 0 1 0 1 1
0 1 0 1 1 1
1 0 1 0 1 1
1 1 1 1 0 0
1 1 1 1 0 0

A_2 =
0 0 1 0 0 0
0 0 0 1 0 0
1 0 0 0 0 0
0 1 0 0 0 0
0 0 0 0 0 1
0 0 0 0 1 0

Now we have

A_0 A_0 = A_0,   A_0 A_1 = A_1 A_0 = A_1,   A_0 A_2 = A_2 A_0 = A_2,
A_1 A_1 = 4A_0 + 2A_1 + 4A_2,
A_1 A_2 = A_2 A_1 = A_1,
A_2 A_2 = A_0,

so from Theorem 8.15 (Characterization B) we can conclude that the given graph is distance-regular. From the obtained equations we have p^0_{00} = 1, p^1_{01} = p^1_{10} = 1, p^2_{02} = p^2_{20} = 1, p^0_{11} = 4, p^1_{11} = 2, p^2_{11} = 4, p^1_{12} = p^1_{21} = 1, p^0_{22} = 1, and all the remaining numbers equal 0. ♦
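The check carried out by hand above can also be automated. The following is a sketch (not from the thesis) that rebuilds the distance matrices of the octahedron by breadth-first search and verifies Characterization B, recovering the numbers p^h_{ij}; the vertex labeling via antipodal pairs is an assumption matching the matrices above.

```python
from collections import deque

# Octahedron: 6 vertices, each adjacent to every vertex except its "antipode".
antipode = {0: 2, 1: 3, 2: 0, 3: 1, 4: 5, 5: 4}
n = 6
adj = [[1 if j != i and j != antipode[i] else 0 for j in range(n)] for i in range(n)]

def bfs_dist(adj, s):
    m = len(adj); dist = [None] * m; dist[s] = 0; q = deque([s])
    while q:
        u = q.popleft()
        for v in range(m):
            if adj[u][v] and dist[v] is None:
                dist[v] = dist[u] + 1; q.append(v)
    return dist

dist = [bfs_dist(adj, s) for s in range(n)]
D = max(max(r) for r in dist)                       # diameter, here 2
A = [[[1 if dist[u][v] == h else 0 for v in range(n)] for u in range(n)]
     for h in range(D + 1)]                         # distance matrices A_0..A_D

def matmul(X, Y):
    return [[sum(X[u][w] * Y[w][v] for w in range(n)) for v in range(n)] for u in range(n)]

# Characterization B: (A_i A_j)_{uv} must depend only on h = dist(u, v).
p = {}
for i in range(D + 1):
    for j in range(D + 1):
        M = matmul(A[i], A[j])
        for h in range(D + 1):
            vals = {M[u][v] for u in range(n) for v in range(n) if dist[u][v] == h}
            assert len(vals) == 1                   # constant on each distance class
            p[(h, i, j)] = vals.pop()
```

Running this reproduces, for example, p[(0, 1, 1)] = 4 and p[(1, 1, 1)] = 2, in agreement with the hand computation.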
where, by convention, b_{−1} = c_{D+1} = 0. Now, pick two arbitrary vertices u, v ∈ V at distance h (∂(u, v) = h), where 0 ≤ h ≤ D. Consider the following equations:

|Γ_h(u) ∩ Γ_1(v)| = Σ_{x∈V} (A_h)_{ux} (A)_{xv} = (A_h A)_{uv} = (b_{h−1} A_{h−1} + a_h A_h + c_{h+1} A_{h+1})_{uv} =
= { a_h, if ∂(u, v) = h; b_{h−1}, if ∂(u, v) = h − 1; c_{h+1}, if ∂(u, v) = h + 1 } = a_h,

|Γ_{h−1}(u) ∩ Γ_1(v)| = Σ_{x∈V} (A_{h−1})_{ux} (A)_{xv} = (A_{h−1} A)_{uv} = (b_{h−2} A_{h−2} + a_{h−1} A_{h−1} + c_h A_h)_{uv} =
= { a_{h−1}, if ∂(u, v) = h − 1; b_{h−2}, if ∂(u, v) = h − 2; c_h, if ∂(u, v) = h } = c_h,

|Γ_{h+1}(u) ∩ Γ_1(v)| = Σ_{x∈V} (A_{h+1})_{ux} (A)_{xv} = (A_{h+1} A)_{uv} = (b_h A_h + a_{h+1} A_{h+1} + c_{h+2} A_{h+2})_{uv} =
= { a_{h+1}, if ∂(u, v) = h + 1; b_h, if ∂(u, v) = h; c_{h+2}, if ∂(u, v) = h + 2 } = b_h.
(8.18) Example
We want to show that the graph pictured in Figure 15 is distance-regular, and we want to find its intersection array.

A_0 A = 0 + 0·A_0 + 1·A_1,
A_1 A = 3·A_0 + 0·A_1 + 2·A_2,
A_2 A = 2·A_1 + 0·A_2 + 3·A_3,
A_3 A = 1·A_2 + 0·A_3 + 0,

and from the obtained equations (and Theorem 8.17 (Characterization B′)) we conclude that the given graph is distance-regular. From this we also see that

a_0 = 0, a_1 = 0, a_2 = 0, a_3 = 0,
b_0 = 3, b_1 = 2, b_2 = 1, b_3 = 0,
c_0 = 0, c_1 = 1, c_2 = 2, c_3 = 3.

The intersection array is {3, 2, 1; 1, 2, 3}. ♦
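The same intersection array can be recovered mechanically. The following sketch (not from the thesis) computes c_h, a_h, b_h for the cube H(3, 2) = Q_3 by breadth-first search and confirms {3, 2, 1; 1, 2, 3}.

```python
from collections import deque
from itertools import product

# Hypercube Q_3 = H(3, 2): vertices are 3-bit tuples, edges join tuples
# differing in exactly one coordinate.
verts = list(product((0, 1), repeat=3))
def hamming(x, y): return sum(a != b for a, b in zip(x, y))
adj = {u: [v for v in verts if hamming(u, v) == 1] for u in verts}

def dist_from(s):
    d = {s: 0}; q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1; q.append(v)
    return d

# Collect c_h, a_h, b_h over all ordered pairs; the graph is distance-regular
# exactly when each collected set is a singleton.
D = 3
c, a, b = {}, {}, {}
for u in verts:
    d = dist_from(u)
    for v in verts:
        h = d[v]
        c.setdefault(h, set()).add(sum(1 for w in adj[v] if d[w] == h - 1))
        a.setdefault(h, set()).add(sum(1 for w in adj[v] if d[w] == h))
        b.setdefault(h, set()).add(sum(1 for w in adj[v] if d[w] == h + 1))

assert all(len(c[h]) == len(a[h]) == len(b[h]) == 1 for h in range(D + 1))
intersection_array = ([b[h].pop() for h in range(D)], [c[h].pop() for h in range(1, D + 1)])
```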
A = span{I, A, A^2, ..., A^D}.
(8.20) Comment
Since A = D (Corollary 8.10), {I, A, A^2, ..., A^d} is a basis of the adjacency algebra A (Proposition 5.04, where d + 1 is the number of distinct eigenvalues) and

A = span{I, A, A^2, ..., A^D},   D = span{A_0, A_1, ..., A_D},

we have that any distance-regular graph Γ = (V, E) with diameter D has exactly D + 1 distinct eigenvalues. So, just by realizing that a graph is distance-regular, we automatically know how many eigenvalues its adjacency matrix has! ♦
(8.21) Exercise
Compare the number of distinct eigenvalues of the distance-regular graphs given in Figure 15, Figure 24, Figure 41 and Figure 28 with the diameters of these graphs.
FIGURE 28
Petersen graph.
Solution: The eigenvalues of the octahedron (Figure 24) are −2, 0, 4 (the diameter of the octahedron is 2). The eigenvalues of the cube (Figure 15) are −3, −1, 1, 3 (the diameter of the cube is 3). The eigenvalues of the Heawood graph (Figure 41) are −3, −√2, √2 and 3 (the diameter of the Heawood graph is 3). The eigenvalues of the Petersen graph (Figure 28) are −2, 1, 3 (the diameter of the Petersen graph is 2). In each case the number of distinct eigenvalues equals the diameter plus one. ♦
Proof: This theorem can be proved in many different ways, but in our case we want to use Lemma 8.19.
(⇒) Assume that a graph Γ = (V, E) with diameter D is distance-regular. Notice that the set {A_0, A_1, ..., A_D} is linearly independent: no two vertices u, v can be at two different distances from each other, so for any position (u, v) there is exactly one distance matrix with a one entry in that position, while all the other distance matrices have a zero there. So this set is a linearly independent set of D + 1 elements. Since any distance-i matrix of a distance-regular graph Γ can be written as a polynomial of degree i in A (Proposition 8.05) (we have A_i ∈ A for any i = 0, 1, ..., D) and since dim(A) = D + 1 (see for example Lemma 8.19), the set {A_0, A_1, ..., A_D} spans A(Γ) and is therefore a basis for A(Γ).
(⇐) Assume that the set {I, A, ..., A^D} is a basis of the adjacency algebra A(Γ). Because A is an algebra and by assumption A = span{I, A, A^2, ..., A^D}, it follows that A_i A_j ∈ A for every i, j. Now, there are unique α^k_{ij} ∈ R such that

A_i A_j = α^0_{ij} A_0 + α^1_{ij} A_1 + ... + α^D_{ij} A_D = Σ_{k=0}^{D} α^k_{ij} A_k   (0 ≤ i, j ≤ D).
Proof: (⇒) Assume that a graph Γ = (V, E) with diameter D is distance-regular. Then by Corollary 8.10 and Lemma 8.19 we have A = span{I, A, A^2, ..., A^D} = span{A_0, A_1, ..., A_D}, and from this it is not hard to see that A acts by right (or left) multiplication as a linear operator on the vector space span{I, A_1, A_2, ..., A_D}.
9. EXAMPLES OF DISTANCE-REGULAR GRAPHS 55
(⇐) Now assume that in a graph Γ = (V, E) with diameter D, the matrix A acts by right multiplication as a linear operator on the vector space span{I, A_1, A_2, ..., A_D}. That means that, for some constants β_{h−1}, β_h, β_{h+1} (0 ≤ h ≤ D), the distance matrices satisfy the three-term recurrence

A_h A = β_{h−1} A_{h−1} + β_h A_h + β_{h+1} A_{h+1}.

The result now follows from Theorem 8.17 (Characterization B′).
(9.02) Example
Fix a set S = {a, b} (|S| = 2). Let V = {aaa, aab, aba, abb, baa, bab, bba, bbb}, and
E = {{x, y} : x, y ∈ V, x and y differ in exactly 1 coordinate}. Then the graph pictured in Figure 29 is the Hamming graph H(3, 2).
FIGURE 29
Hamming graph H(3, 2). ♦
(9.03) Example
Fix a set S = {a, b, c, d} (|S| = 4). Let V = {a, b, c, d}, and E = {{x, y} : x, y ∈ V, x and y differ in exactly 1 coordinate} (every two distinct words of length 1 differ in their only coordinate). Then the graph pictured in Figure 30 is the Hamming graph H(1, 4).
FIGURE 30
Hamming graph H(1, 4). ♦
(9.04) Example
Fix a set S = {a, b, c} (|S| = 3). Let V = {aa, ab, ac, ba, bb, bc, ca, cb, cc}, and
E = {{x, y} : x, y ∈ V, x and y differ in exactly 1 coordinate}. Then the graph pictured in Figure 31 is the Hamming graph H(2, 3).
FIGURE 31
Hamming graph H(2, 3). ♦
(9.05) Example
The Hamming graphs H(n, 2) are the n-dimensional hypercubes Q_n. The graph Q_4 is shown in Figure 32.
FIGURE 32
Hamming graph H(4, 2). ♦
We will show that the Hamming graphs are distance-regular. First, we need Lemma 9.06
and Lemma 9.07.
(9.06) Lemma
For all vertices x, y of H(n, q), distance ∂(x, y) = i if and only if num(x, y) = i, where
num(x, y) is defined to be the number of coordinates in which vertices x and y are different
when considered as words (or n-tuples).
(9.07) Lemma
The Hamming graphs are vertex-transitive.
Proof: Recall: The simple graphs Γ1 = (V1 , E1 ) and Γ2 = (V2 , E2 ) are isomorphic if there is a
one-to-one and onto function f from V1 to V2 with the property that a and b are adjacent in
Γ1 if and only if f (a) and f (b) are adjacent in Γ2 , for all a and b in V1 . Such a function f is
called an isomorphism. An isomorphism of a graph Γ with itself is called an automorphism of
Γ. Thus an automorphism f of Γ is a bijection of the vertex set of Γ onto itself such that u ∼ v if and only if f(u) ∼ f(v). Two vertices u and v of the graph Γ are similar if for
some automorphism α of Γ, α(u) = v. A fixed point is not similar to any other point. A graph
is vertex-transitive if every pair of vertices are similar.
By definition of vertex-transitivity, H(n, q) is vertex-transitive if for all pairs of vertices
x, y there exists an automorphism of the graph that maps x to y. In this proof, we will
interpret the vertices of H(n, q) as words (sequences) of integers d_1 d_2 ... d_n, where each d_i is between 0 and q − 1. Why this interpretation? In this way, if we for example consider H(5, 3), we can add two vertices termwise modulo q: for x = 00122, y = 00121, z = 11002 we have x + z = 11121 and y + z = 11120. This interpretation makes it easier to show that the Hamming graph is vertex-transitive.
Let v be a fixed vertex and x ∈ V(H(n, q)). Then the mapping ρ_v : x → x + v, where addition is done termwise modulo q, is an automorphism of the graph: if the words (or n-tuples) x, y differ in exactly 1 term, then the words x + v and y + v also differ in exactly 1 term, thus preserving the adjacency relation. And for any two vertices x, y ∈ V(H(n, q)), the automorphism ρ_{y−x} maps x to y. Thus, Hamming graphs are vertex-transitive.
(9.08) Lemma
The Hamming graph H(n, q) is distance-regular (with a_i = i(q − 2), b_i = (n − i)(q − 1) and c_i = i for 0 ≤ i ≤ n).
FIGURE 33
To get a neighbor of y we need to pick a term of y, say a, and change it to an element different from a, say b.
The number b_i is the number of neighbors of y that are at distance i + 1 from x. For a we must pick a zero term and change it to some b ≠ 0. So there are n − i places in which the new vertex can differ from y and q − 1 letters to choose from. Hence b_i = (n − i)(q − 1).
As for c_i, we are counting the number of vertices that are at distance i − 1 from x and adjacent to y. For the term a we pick one of the nonzero terms among y_1 y_2 ... y_i, and b must be zero. So we can change any of the i nonzero terms back to zero. Hence c_i = i. Thus the Hamming graph is distance-regular.
SECOND WAY
Pick x, y ∈ V with ∂(x, y) = i. By Lemma 9.06, x and y differ in i coordinates; assume that x = x_1 x_2 ... x_n and y = y_1 y_2 ... y_n differ in the coordinates with indices {h_1, h_2, ..., h_i}. Note that b_i = |Γ_1(x) ∩ Γ_{i+1}(y)|. Pick z ∈ Γ_1(x) ∩ Γ_{i+1}(y), and assume that z and x differ in the j-th coordinate. If j ∈ {h_1, h_2, ..., h_i}, then because ∂(x, y) = i we have ∂(z, y) ∈ {i − 1, i}, a contradiction. Therefore j ∉ {h_1, h_2, ..., h_i}. So we have n − i possibilities for j, and for each of these possibilities we have q − 1 choices for the j-th coordinate of z. Therefore

b_i = (n − i)(q − 1).

Let us now compute c_i = |Γ_1(x) ∩ Γ_{i−1}(y)| (1 ≤ i ≤ n). Pick z ∈ Γ_1(x) ∩ Γ_{i−1}(y), and assume that z and x differ in the j-th coordinate. If j ∉ {h_1, h_2, ..., h_i}, then ∂(z, y) ∈ {i, i + 1}, a contradiction. Therefore j ∈ {h_1, h_2, ..., h_i}. So we have i possibilities for j, and for each of these possibilities the j-th coordinate of z must be equal to the j-th coordinate of y. Therefore c_i = i.
It is an easy exercise to prove that H(n, q) is a regular graph. This shows that H(n, q) is distance-regular.
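Both counting arguments can be spot-checked mechanically. The following is a sketch (not from the thesis) that verifies the formulas b_i = (n − i)(q − 1), a_i = i(q − 2) and c_i = i on H(3, 3); rooting at a single vertex is justified by vertex-transitivity (Lemma 9.07).

```python
from itertools import product

# Hamming graph H(3, 3): vertices are words of length 3 over a 3-letter alphabet.
n, q = 3, 3
verts = list(product(range(q), repeat=n))
def num(x, y):                       # number of differing coordinates = distance (Lemma 9.06)
    return sum(a != b for a, b in zip(x, y))

x = (0,) * n                         # root vertex
observed = {}                        # i -> (c_i, a_i, b_i)
for y in verts:
    i = num(x, y)
    nbrs = [z for z in verts if num(y, z) == 1]
    triple = (sum(1 for z in nbrs if num(x, z) == i - 1),
              sum(1 for z in nbrs if num(x, z) == i),
              sum(1 for z in nbrs if num(x, z) == i + 1))
    # The triple must depend only on i = dist(x, y), not on the chosen y:
    assert observed.setdefault(i, triple) == triple
    # ...and must match the formulas from Lemma 9.08:
    assert triple == (i, i * (q - 2), (n - i) * (q - 1))
```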
FIGURE 34
Johnson graph J(4, 2), drawn in two different ways (this graph is also known as octahedron).♦
FIGURE 35
Johnson graph J(3, 2). ♦
FIGURE 36
Johnson graph J(5, 3). ♦
We will show the Johnson graphs are distance-regular but we need the following lemma
first.
(9.13) Lemma
If x, y are vertices of the Johnson graph J(n, r), then ∂(x, y) = i if and only if
|x ∩ y| = r − i.
...
By definition of distance, there exists a path of length i from x to y. Thus, there exists a
vertex z that is distance i − 1 from x and adjacent to y (∂(x, z) = i − 1, ∂(z, y) = 1). So by
the induction hypothesis
|z\x| = i − 1 and |y\z| = 1.
Now we notice that, since (y\x) ∩ z ⊆ z\x and (y\x)\z ⊆ y\z, we have |y\x| ≤ |z\x| + |y\z| = (i − 1) + 1 = i, so |y\x| ≤ i, which implies
|x ∩ y| ≥ r − i. (13)
From Equations (12) and (13) we conclude |x ∩ y| = r − i as desired.
(⇐) Now suppose |x ∩ y| = r − i. We need to show that ∂(x, y) = i. If ∂(x, y) < i then, by
the induction hypothesis, |x ∩ y| > r − i, a contradiction. So
∂(x, y) ≥ i.
(9.14) Lemma
The Johnson graph J(n, r) is distance-regular (with intersection numbers
a_i = (r − i)i + i(n − r − i), b_i = (r − i)(n − r − i), c_i = i^2).
Proof: It is enough to show that the intersection numbers for Johnson graphs are independent of the choice of vertices for the graph to be distance-regular. We will prove this lemma in two ways.
FIRST WAY
Let x, y be vertices of J(n, r) such that ∂(x, y) = i. By Lemma 9.13 this means |x ∩ y| = r − i. Say, without loss of generality, that x = {c_1, ..., c_{r−i}, x_1, ..., x_i} and y = {c_1, ..., c_{r−i}, y_1, ..., y_i}. To get a neighbor z of y, we need to pick an element a of y (a ∈ {c_1, ..., c_{r−i}, y_1, ..., y_i}) and replace it with an element b that is not in y (b ∉ {c_1, ..., c_{r−i}, y_1, ..., y_i}). There are four ways this can be done.
Case 1: If a is an element of x ∩ y = {c_1, ..., c_{r−i}} and b is an element of x\y = {x_1, ..., x_i}, then z differs from y in 1 element and from x in i elements, because y_1, ..., y_i ∈ z but y_1, ..., y_i ∉ x (a was common to both x and y, while b does not belong to y). This gives a neighbor of y such that ∂(x, z) = i.
FIGURE 37
To get a neighbor of y, we need to pick an element of y, say a, and replace it with an element that is not in y, say b.
Since the intersection numbers for J(n, r) are independent of the choice of vertices, the Johnson graph is distance-regular.
SECOND WAY
Pick x, y ∈ V(Γ) with ∂(x, y) = h. Let x = {x_1, x_2, ..., x_{r−h}, x_{r−h+1}, ..., x_r} and y = {x_1, x_2, ..., x_{r−h}, y_{r−h+1}, ..., y_r} (see Lemma 9.13). Pick z ∈ Γ_1(x) ∩ Γ_{h+1}(y). Note that z and x differ in exactly one element; assume x\z = {x_j}. If j ≥ r − h + 1 then {x_1, x_2, ..., x_{r−h}} ⊆ z. This implies that {x_1, x_2, ..., x_{r−h}} ⊆ z ∩ y and therefore |z ∩ y| ≥ r − h. This shows, by Lemma 9.13, that ∂(z, y) ≤ h, a contradiction.
Therefore j ∈ {1, 2, ..., r − h}. So, to get z from x we have to replace one of the elements {x_1, ..., x_{r−h}} with one of the n − r − h elements outside x ∪ y. This gives us (r − h)(n − r − h) possibilities for z in total. This shows that

b_h = (r − h)(n − r − h)   (0 ≤ h ≤ D − 1).
Pick z ∈ Γ_1(x) ∩ Γ_{h−1}(y). Hence |z\x| = 1 and |z ∩ y| = r − h + 1. Again, assume x\z = {x_j}. If j ∈ {1, 2, ..., r − h}, then |z ∩ y| ≤ r − h, a contradiction. Therefore, to get z from x, we have to replace one of the {x_{r−h+1}, ..., x_r} with one of {y_{r−h+1}, ..., y_r}. This gives us h^2 possibilities in total. Therefore c_h = h^2 (1 ≤ h ≤ D).
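The Johnson-graph counts can be verified the same way. The following sketch (not from the thesis) checks c_i = i^2, a_i = (r − i)i + i(n − r − i) and b_i = (r − i)(n − r − i) on J(5, 2), using Lemma 9.13 to measure distances.

```python
from itertools import combinations

# Johnson graph J(5, 2): vertices are the 2-subsets of {0, ..., 4}.
n, r = 5, 2
verts = [frozenset(c) for c in combinations(range(n), r)]
def dist(x, y):                      # Lemma 9.13: dist(x, y) = r - |x ∩ y|
    return r - len(x & y)

observed = {}                        # i -> (c_i, a_i, b_i)
for x in verts:
    for y in verts:
        i = dist(x, y)
        nbrs = [z for z in verts if dist(y, z) == 1]
        triple = (sum(1 for z in nbrs if dist(x, z) == i - 1),
                  sum(1 for z in nbrs if dist(x, z) == i),
                  sum(1 for z in nbrs if dist(x, z) == i + 1))
        assert observed.setdefault(i, triple) == triple   # depends only on i
        assert triple == (i * i, (r - i) * i + i * (n - r - i), (r - i) * (n - r - i))
```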
(9.16) Exercise
Prove that the Petersen graph GP(5, 2) is distance-regular.
Solution: Consider Theorem 8.12 (Characterization A). We draw the graph with the subsets of vertices at given distances from the root, taking each of the 10 vertices in turn as the root. If the vertices in the same layer are ”neighborhood-indistinguishable” from each other, and the whole configuration does not depend on the chosen vertex, the graph is distance-regular. For an illustration see Figure 38. Therefore, the Petersen graph GP(5, 2) is distance-regular.
FIGURE 38
The Petersen graph GP(5, 2) drawn in 10 different ways, each with a different root. ♦
A_k = p_k(A)   (0 ≤ k ≤ D).
(10.02) Lemma
If the graph Γ is regular, connected and of diameter 2, then Γ is distance-polynomial.
FIGURE 39
The 3-prism (example of distance-polynomial graph which is not distance-regular).
(10.03) Comment
From Proposition 8.05 we see that distance-regular graphs are distance-polynomial; that is, in a distance-regular graph each distance matrix A_h is a polynomial of degree h in A:

A_h = p_h(A) ∈ A(Γ)   (0 ≤ h ≤ D).

The simplest example (which we took from [48]) of a distance-polynomial graph which is not distance-regular is the 3-prism Γ (Figure 39). Γ clearly has diameter 2, is connected and is regular. Thus Γ is distance-polynomial. It is straightforward to check that Γ is not distance-regular. A distance-polynomial graph which is not distance-regular need not have diameter 2. This example shows that the classes of distance-regular and distance-polynomial graphs are distinct.
We shall see that the distance polynomials satisfy some nice properties which facilitate the
10. CHARACTERIZATION OF DRG INVOLVING THE DISTANCE POLYNOMIALS 65
FIGURE 40
Illustration of the classes of distance-regular and distance-polynomial graphs. ♦
(10.04) Exercise
Find the distance polynomials of the distance-regular graph given in Figure 41.
FIGURE 41
Heawood graph.
The distance matrices of the Heawood graph are A_0 = I (the 14 × 14 identity matrix) and

A_1 =
0 1 0 0 0 0 0 0 0 1 0 0 0 1
1 0 1 0 0 0 1 0 0 0 0 0 0 0
0 1 0 1 0 0 0 0 0 0 0 1 0 0
0 0 1 0 1 0 0 0 1 0 0 0 0 0
0 0 0 1 0 1 0 0 0 0 0 0 0 1
0 0 0 0 1 0 1 0 0 0 1 0 0 0
0 1 0 0 0 1 0 1 0 0 0 0 0 0
0 0 0 0 0 0 1 0 1 0 0 0 1 0
0 0 0 1 0 0 0 1 0 1 0 0 0 0
1 0 0 0 0 0 0 0 1 0 1 0 0 0
0 0 0 0 0 1 0 0 0 1 0 1 0 0
0 0 1 0 0 0 0 0 0 0 1 0 1 0
0 0 0 0 0 0 0 1 0 0 0 1 0 1
1 0 0 0 1 0 0 0 0 0 0 0 1 0

A_2 =
0 0 1 0 1 0 1 0 1 0 1 0 1 0
0 0 0 1 0 1 0 1 0 1 0 1 0 1
1 0 0 0 1 0 1 0 1 0 1 0 1 0
0 1 0 0 0 1 0 1 0 1 0 1 0 1
1 0 1 0 0 0 1 0 1 0 1 0 1 0
0 1 0 1 0 0 0 1 0 1 0 1 0 1
1 0 1 0 1 0 0 0 1 0 1 0 1 0
0 1 0 1 0 1 0 0 0 1 0 1 0 1
1 0 1 0 1 0 1 0 0 0 1 0 1 0
0 1 0 1 0 1 0 1 0 0 0 1 0 1
1 0 1 0 1 0 1 0 1 0 0 0 1 0
0 1 0 1 0 1 0 1 0 1 0 0 0 1
1 0 1 0 1 0 1 0 1 0 1 0 0 0
0 1 0 1 0 1 0 1 0 1 0 1 0 0

A_3 =
0 0 0 1 0 1 0 1 0 0 0 1 0 0
0 0 0 0 1 0 0 0 1 0 1 0 1 0
0 0 0 0 0 1 0 1 0 1 0 0 0 1
1 0 0 0 0 0 1 0 0 0 1 0 1 0
0 1 0 0 0 0 0 1 0 1 0 1 0 0
1 0 1 0 0 0 0 0 1 0 0 0 1 0
0 0 0 1 0 0 0 0 0 1 0 1 0 1
1 0 1 0 1 0 0 0 0 0 1 0 0 0
0 1 0 0 0 1 0 0 0 0 0 1 0 1
0 0 1 0 1 0 1 0 0 0 0 0 1 0
0 1 0 1 0 0 0 1 0 0 0 0 0 1
1 0 0 0 1 0 1 0 1 0 0 0 0 0
0 1 0 1 0 1 0 0 0 1 0 0 0 0
0 0 1 0 0 0 1 0 1 0 1 0 0 0
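The distance polynomials themselves can be checked numerically. Assuming the well-known Heawood intersection array {3, 2, 2; 1, 1, 3} (not stated in this solution), the three-term recurrence of Theorem 8.17 gives p_0(x) = 1, p_1(x) = x, p_2(x) = x^2 − 3 and p_3(x) = (x^3 − 5x)/3. The following sketch (not from the thesis) verifies this against the distance matrices; the edge list is one standard labeling of the Heawood graph, matching A_1 above shifted to 0-indexing.

```python
from collections import deque

# Heawood graph, 0-indexed: the 14-cycle plus the chords {i, (i+9) mod 14} for even i.
n = 14
edges = [(i, (i + 1) % n) for i in range(n)] + [(i, (i + 9) % n) for i in range(0, n, 2)]
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = A[v][u] = 1

def matmul(X, Y):
    return [[sum(X[u][w] * Y[w][v] for w in range(n)) for v in range(n)] for u in range(n)]

def bfs(s):                          # distances from s
    d = [None] * n; d[s] = 0; q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if A[u][v] and d[v] is None:
                d[v] = d[u] + 1; q.append(v)
    return d

dist = [bfs(s) for s in range(n)]
Adist = [[[1 if dist[u][v] == h else 0 for v in range(n)] for u in range(n)]
         for h in range(4)]          # A_0, A_1, A_2, A_3

# Check A_2 = p_2(A) = A^2 - 3I and A_3 = p_3(A) = (A^3 - 5A)/3.
A2, A3 = matmul(A, A), matmul(matmul(A, A), A)
P2 = [[A2[u][v] - 3 * (u == v) for v in range(n)] for u in range(n)]
P3 = [[(A3[u][v] - 5 * A[u][v]) // 3 for v in range(n)] for u in range(n)]
assert P2 == Adist[2] and P3 == Adist[3]
```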
(10.05) Proposition
Let Γ = (V, E) be a simple connected graph with adjacency matrix A, |V| = n, and let R[x] = {a_0 + a_1 x + ... + a_m x^m | a_i ∈ R} be the set of all polynomials of degree at most m ∈ N with coefficients from R. Define the inner product of two arbitrary elements p, q ∈ R[x] by

⟨p, q⟩ = (1/n) trace(p(A) q(A)).

Then R[x] is an inner product space.
Proof: We need to verify that R[x] is a vector space and that the defined product ⟨·, ·⟩ satisfies the axioms from the definition of a general inner product². We leave this as an easy exercise.
(10.06) Exercise
Let Γ = (V, E) denote a regular graph with diameter D and valency λ_0, and let {p_h}_{0≤h≤D} be its distance polynomials. Then
(i) k_h := |Γ_h(u)| = p_h(λ_0) for an arbitrary vertex u (so k_h is a number independent of u);
(ii) ‖p_h‖^2 = p_h(λ_0);
for any 0 ≤ h ≤ D.
Solution: (i) By j we denote the vector whose entries are all ones, j = (1, 1, ..., 1)^⊤. From Proposition 2.15, j is an eigenvector of A with eigenvalue λ_0, so Aj = λ_0 j,
² Recall: An inner product on a real (or complex) vector space V is a function that maps each ordered pair of vectors x, y to a real (or complex) scalar ⟨x, y⟩ such that the following four properties hold:
⟨x, x⟩ is real with ⟨x, x⟩ ≥ 0, and ⟨x, x⟩ = 0 if and only if x = 0;
⟨x, αy⟩ = α⟨x, y⟩ for all scalars α;
⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩;
⟨x, y⟩ = ⟨y, x⟩* (for real spaces, this becomes ⟨x, y⟩ = ⟨y, x⟩).
Notice that for each fixed value of x, the second and third properties say that ⟨x, y⟩ is a linear function of y. Any real or complex vector space that is equipped with an inner product is called an inner-product space.
= (c_m λ_0^m j + c_{m−1} λ_0^{m−1} j + ... + c_1 λ_0 j + c_0 j)_u = c_m λ_0^m + c_{m−1} λ_0^{m−1} + ... + c_1 λ_0 + c_0 = p_h(λ_0).
(ii) Let c^h_{uv} = 1 if the shortest path from u to v has length h, and let c^h_{uv} = 0 otherwise. Notice that we have

c^i_{hk} c^j_{kh} = { 1, if ∂(h, k) = i = j; 0, otherwise }.

If we denote the vertices of the graph Γ by the numbers from 1 to n, that is V = {1, 2, ..., n}, we have

A_h = [c^h_{uv}]_{u,v=1}^{n},   c^h_{u1} c^h_{1u} + c^h_{u2} c^h_{2u} + ... + c^h_{un} c^h_{nu} = (number of vertices at distance h from vertex u),

and therefore

trace(A_h A_h) = Σ_{k=1}^{n} (c^h_{k1} c^h_{1k} + c^h_{k2} c^h_{2k} + ... + c^h_{kn} c^h_{nk}) = |Γ_h(1)| + |Γ_h(2)| + ... + |Γ_h(n)| = n k_h.

Finally,

‖p_h‖^2 = ⟨p_h, p_h⟩ = (1/n) trace(p_h(A) p_h(A)) = (1/n) trace(A_h A_h) = k_h = |Γ_h(u)| = p_h(λ_0),

using (i) in the last step. ♦
In terms of notation from Proposition 7.10, we have kh = (b0 b1 ...bh−1 )/(c1 c2 ...ch ) for
1 ≤ h ≤ D.
(10.07) Proposition
Let {p_k}_{0≤k≤D} denote the distance polynomials of some regular graph Γ = (V, E) which has n vertices and diameter D. Then

⟨p_h, p_l⟩ = { k_h, if h = l; 0, otherwise },

where the inner product of two polynomials is defined by ⟨p, q⟩ = (1/n) trace(p(A) q(A)), and k_h = |Γ_h(u)| is a number independent of u.
Proof: Let c^h_{uv} = 1 if the shortest path from u to v has length h, and let c^h_{uv} = 0 otherwise. Notice that we have

c^h_{uv} c^ℓ_{vu} = { 1, if ∂(u, v) = h = ℓ; 0, otherwise },

that is,

(A_h)_{uv} (A_ℓ)_{uv} = { 1, if ∂(u, v) = h = ℓ; 0, otherwise }.
It follows from Exercise 10.06 that ⟨p_h, p_h⟩ = k_h. Now assume that h ≠ ℓ and compute ⟨p_h, p_ℓ⟩. We have ⟨p_h, p_ℓ⟩ = (1/n) trace(p_h(A) p_ℓ(A)) = (1/n) trace(A_h A_ℓ). Pick a vertex u of Γ and compute the (u, u)-entry of A_h A_ℓ:

(A_h A_ℓ)_{uu} = Σ_{x∈V} (A_h)_{ux} (A_ℓ)_{xu}.

As h ≠ ℓ, either (A_h)_{ux} = 0 or (A_ℓ)_{xu} = 0. Therefore (A_h A_ℓ)_{uu} = 0, and so trace(A_h A_ℓ) = 0. This shows that ⟨p_h, p_ℓ⟩ = 0.
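Proposition 10.07 is easy to test on a concrete graph. The following sketch (not from the thesis) computes the Gram matrix ⟨p_h, p_l⟩ = (1/n) trace(A_h A_l) for the cube Q_3 and checks that it equals diag(k_0, k_1, k_2, k_3) = diag(1, 3, 3, 1).

```python
from itertools import product

# Cube Q_3: in Q_3 the graph distance equals the Hamming distance (Lemma 9.06),
# so the distance matrices can be written down directly.
verts = list(product((0, 1), repeat=3))
n = len(verts)
def hamming(x, y): return sum(a != b for a, b in zip(x, y))

D = 3
Ah = [[[1 if hamming(u, v) == h else 0 for v in verts] for u in verts]
      for h in range(D + 1)]
k = [1, 3, 3, 1]                     # k_h = |Γ_h(u)| = C(3, h)

def trace_prod(X, Y):                # trace(XY) for symmetric 0/1 matrices
    return sum(X[u][w] * Y[w][u] for u in range(n) for w in range(n))

gram = [[trace_prod(Ah[h], Ah[l]) / n for l in range(D + 1)] for h in range(D + 1)]
assert all(gram[h][l] == (k[h] if h == l else 0)
           for h in range(D + 1) for l in range(D + 1))
```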
Proof: (⇒) Let Γ = (V, E) be a distance-regular graph with diameter D. Then every condition from Proposition 8.05 is satisfied, so we have

A_h = p_h(A)   (0 ≤ h ≤ D).

(⇐) Now assume that for any integer h, 0 ≤ h ≤ D, the distance-h matrix A_h is a polynomial of degree h in A; that is, A_h = p_h(A). If Γ has d + 1 distinct eigenvalues, then {I, A, A^2, ..., A^d} is a basis of the adjacency or Bose-Mesner algebra A(Γ) of matrices which are polynomials in A (Proposition 5.04). Moreover, since Γ has diameter D,

dim A(Γ) = d + 1 ≥ D + 1,

because {I, A, A^2, ..., A^D} is a linearly independent set in A(Γ) (Proposition 5.06). Hence the diameter is always less than the number of distinct eigenvalues:

D ≤ d.   (15)
Is it true that for any connected graph Γ we have A_0 + A_1 + ... + A_D = J, the all-1 matrix? Yes, and it is an easy exercise to explain why. Now notice that A_0 + A_1 + ... + A_D = J, that is p_0(A) + p_1(A) + ... + p_D(A) = J, and the degree of the polynomial p = p_0 + p_1 + ... + p_D is D. The comment after Theorem 6.05 says that the Hoffman polynomial H is the polynomial of smallest degree for which J = H(A), and this polynomial has degree d, where d + 1 is the number of distinct eigenvalues of Γ. Thus, assuming that Γ has d + 1 distinct eigenvalues and using (15), we have

D ≤ d ≤ dgr(p) = dgr(p_0 + p_1 + ... + p_D) = D.

The above reasoning leads to D = d, and to the conclusion that {I, A, A^2, ..., A^D} is a basis of the adjacency algebra A(Γ).
As the distance matrices A_i are polynomials in A, they belong to the Bose-Mesner algebra. The distance matrices are clearly linearly independent, and since the dimension of the Bose-Mesner algebra is d + 1 = D + 1, they form a basis of the Bose-Mesner algebra. By Theorem 8.22 (Characterization C), Γ is distance-regular.
The existence of the first two distance polynomials, p0 and p1 , is always guaranteed since
A 0 = I and A 1 = A .
Recall that eccentricity of a vertex u is ecc(u) := maxv∈V ∂(u, v). Now, if every vertex
u ∈ V has the maximum possible eccentricity allowed by the spectrum (that is, the number of
distinct eigenvalues minus one: ecc(u) = d, ∀u ∈ V ), the existence of the highest degree
distance polynomial suffices:
(10.09) Theorem
A graph Γ = (V, E) with diameter D and d + 1 distinct eigenvalues is distance-regular if and only if all its vertices have spectrally maximum eccentricity d (⇒ D = d) and the distance matrix A_d is a polynomial of degree d in A:

A_d = p_d(A).
This was proved by Fiol, Garriga and Yebra [19] in the context of ”pseudo-distance-regularity”, a generalization of distance-regularity that makes sense even for non-regular graphs. We will prove a similar theorem in Section 11; our proof will use Lemma 13.07, which we found in [13], and part of a proof from [21].
That is, the number (A^ℓ)_{uv} = a^ℓ_h of walks of length ℓ between two vertices u, v ∈ V only depends on h = ∂(u, v).
(⇐) Conversely, assume that, for a certain graph and any 0 ≤ k ≤ D, there are constants a^ℓ_k satisfying

A^ℓ = Σ_{k=0}^{D} a^ℓ_k A_k   (ℓ ≥ 0),

where a^ℓ_k is the number of walks of length ℓ between two vertices at distance k. As a matrix equation,
(I, A, A^2, ..., A^D)^⊤ = T (A_0, A_1, A_2, ..., A_D)^⊤,   with

T =
| a^0_0   0      0      ...   0     |
| a^1_0   a^1_1  0      ...   0     |
| a^2_0   a^2_1  a^2_2  ...   0     |
|  ...     ...    ...   ...   ...   |
| a^D_0   a^D_1  a^D_2  ...   a^D_D |
where the lower triangular matrix T, with rows and columns indexed by the integers 0, 1, ..., D, has entries (T)_{ℓk} = a^ℓ_k. In particular, note that a^0_0 = a^1_1 = 1 and a^1_0 = 0. Moreover, since a^k_k > 0, such a matrix has an inverse which is also lower triangular, and hence each A_k is a polynomial of degree k in A. Therefore, according to Theorem 10.08 (Characterization D), we are dealing with a distance-regular graph. (Of course, the entries of T^{−1} are the coefficients of the distance polynomials.)
We do not need to impose the invariance condition for each value of ℓ. For instance, if Γ is regular we have the following result:

a^h_{uv} = a^h_h (here a^h_{uv} denotes the number of walks of length h from u to v) and a^{h+1}_{uv} = a^{h+1}_h for any u, v at distance h, 0 ≤ h ≤ D − 1, and a^D_{uv} = a^D_D for h = D.
Proof: To illustrate some typical reasoning involving the intersection numbers, let us prove Characterization E′ from Characterization A.
(⇒) Assume first that Γ is distance-regular. We shall use induction on k.
BASIS OF INDUCTION
The result clearly holds for k = 0, since a^0_{uu} = 1 = a^0_0 and a^1_{uu} = 0 = a^1_0 (a^0_{uu} is the number of walks of length 0 from u to u, and a^1_{uu} is the number of walks of length 1 from u to u).
FIGURE 42
Illustration for computing the numbers a^k_{uv} and a^{k+1}_{uv}.
INDUCTION STEP
Assume that a^{k−1}_{uv} = a^{k−1}_{k−1} and a^k_{uv} = a^k_{k−1} for any vertices u, v at distance k − 1. Then, for any vertices u, v at distance k we get

a^k_{uv} = Σ_{w ∈ Γ_{k−1}(u) ∩ Γ(v)} a^{k−1}_{uw} = a^{k−1}_{k−1} |Γ_{k−1}(u) ∩ Γ(v)|,   (16)

so we have

a^k_{uv} = a^{k−1}_{k−1} c_k   for all u, v ∈ V at distance k,

and from that a^k_k = a^{k−1}_{k−1} c_k. Notice that from

a^{k+1}_{uv} = a^k_{k−1} |Γ_{k−1}(u) ∩ Γ(v)| + a^{k−1}_{k−1} c_k |Γ_k(u) ∩ Γ(v)| =   (17)
             = a^k_{k−1} |Γ_{k−1}(u) ∩ Γ(v)| + a^k_k |Γ_k(u) ∩ Γ(v)|   (18)

we have

a^{k+1}_{uv} = a^k_{k−1} c_k + a^{k−1}_{k−1} c_k a_k   for every u, v ∈ V at distance k.

(⇐) Conversely, from (16) the value

|Γ_{k−1}(u) ∩ Γ(v)| = a^k_k / a^{k−1}_{k−1}

is independent of the chosen vertices, that is,

c_k(u, v) = c_k = a^k_k / a^{k−1}_{k−1}.   (19)

Analogously, from a^{k+1}_{uv} = a^{k+1}_k and a^{k+1}_{uv} = a^k_{k−1} |Γ_{k−1}(u) ∩ Γ(v)| + a^k_k |Γ_k(u) ∩ Γ(v)| (see (18)) we get

a^{k+1}_k = a^k_{k−1} (a^k_k / a^{k−1}_{k−1}) + a^k_k |Γ_k(u) ∩ Γ(v)|,

where we have used the above value of c_k. Consequently, the value

|Γ_k(u) ∩ Γ(v)| = a^{k+1}_k / a^k_k − a^k_{k−1} / a^{k−1}_{k−1},

that is,

a_k(u, v) = a_k = a^{k+1}_k / a^k_k − a^k_{k−1} / a^{k−1}_{k−1},   (20)

is independent of u and v. Since Γ is regular, b_k = λ_0 − a_k − c_k is then also independent of u, v and, hence, since Equations (19) and (20) hold, Γ is a distance-regular graph.
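Characterization E can be observed directly on a small example. The following sketch (not from the thesis) checks on the cube Q_3 that the number of walks (A^ℓ)_{uv} depends only on h = ∂(u, v), for all ℓ ≤ 6.

```python
from itertools import product

# Cube Q_3: graph distance equals Hamming distance, so distance classes are easy.
verts = list(product((0, 1), repeat=3))
n = len(verts)
def hamming(x, y): return sum(a != b for a, b in zip(x, y))
A = [[1 if hamming(u, v) == 1 else 0 for v in verts] for u in verts]

def matmul(X, Y):
    return [[sum(X[u][w] * Y[w][v] for w in range(n)) for v in range(n)] for u in range(n)]

walks = {}                                   # walks[(l, h)] = a^l_h
P = [[int(u == v) for v in range(n)] for u in range(n)]   # A^0 = I
for l in range(7):
    for u, x in enumerate(verts):
        for v, y in enumerate(verts):
            h = hamming(x, y)
            # (A^l)_{uv} must be the same for every pair at distance h:
            assert walks.setdefault((l, h), P[u][v]) == P[u][v]
    P = matmul(P, A)                         # next power of A
```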
In Proposition 10.05 we defined an inner product on R[x] by ⟨p, q⟩ = (1/n) trace(p(A) q(A)). We also have:
(10.12) Proposition
Let Γ = (V, E) be a simple, connected graph with spectrum spec(Γ) = {λ_0^{m(λ_0)}, λ_1^{m(λ_1)}, ..., λ_d^{m(λ_d)}}, let p and q be arbitrary polynomials, and let |V| = n (the number of vertices of Γ is n). Then

⟨p, q⟩ = (1/n) Σ_{k=0}^{d} m(λ_k) p(λ_k) q(λ_k).
Proof: By Lemma 2.06, there are n orthonormal vectors v_1, ..., v_n that are eigenvectors of the adjacency matrix A of Γ. For these eigenvectors there are eigenvalues λ_{i_1}, λ_{i_2}, ..., λ_{i_n}, not necessarily distinct, and because of Proposition 2.07 we have D = P^{−1} A P, where

P = [v_1 v_2 ... v_n]   and   D = diag(λ_{i_1}, λ_{i_2}, ..., λ_{i_n}).

For arbitrary matrices A, B for which the products AB and BA exist, we know that trace(AB) = trace(BA). Now

⟨p, q⟩ = (1/n) trace(p(A) q(A)) = (1/n) trace(P p(D) q(D) P^⊤) = (1/n) trace(p(D) q(D) P^⊤ P) = (1/n) trace(p(D) q(D)) = (1/n) Σ_{k=1}^{n} p(λ_{i_k}) q(λ_{i_k}) = (1/n) Σ_{k=0}^{d} m(λ_k) p(λ_k) q(λ_k).
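Proposition 10.12 can be tested on a concrete graph. This sketch (not from the thesis) compares the trace form and the spectral form of ⟨p, q⟩ on the cube Q_3, whose spectrum {3^1, 1^3, (−1)^3, (−3)^1} is taken here as a known fact rather than computed; the chosen polynomials p, q are arbitrary illustrations.

```python
from itertools import product

# Cube Q_3 and two arbitrary polynomials p(x) = x + 2x^2, q(x) = 1 + x^2.
verts = list(product((0, 1), repeat=3))
n = len(verts)
A = [[1 if sum(a != b for a, b in zip(u, v)) == 1 else 0 for v in verts] for u in verts]

def matmul(X, Y):
    return [[sum(X[i][w] * Y[w][j] for w in range(n)) for j in range(n)] for i in range(n)]

def poly_at_A(coeffs):               # evaluate p(A) by Horner-free accumulation
    R = [[coeffs[0] * (i == j) for j in range(n)] for i in range(n)]
    P = [[int(i == j) for j in range(n)] for i in range(n)]
    for c in coeffs[1:]:
        P = matmul(P, A)             # next power of A
        R = [[R[i][j] + c * P[i][j] for j in range(n)] for i in range(n)]
    return R

def poly_at(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

spec = [(3, 1), (1, 3), (-1, 3), (-3, 1)]     # (eigenvalue, multiplicity), assumed
p, q = [0, 1, 2], [1, 0, 1]

lhs = sum(matmul(poly_at_A(p), poly_at_A(q))[i][i] for i in range(n)) / n
rhs = sum(m * poly_at(p, lam) * poly_at(q, lam) for lam, m in spec) / n
assert lhs == rhs
```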
(10.13) Proposition
Let Γ = (V, E) denote a distance-regular graph with adjacency matrix A and with spectrum spec(Γ) = {λ_0^{m(λ_0)}, λ_1^{m(λ_1)}, ..., λ_d^{m(λ_d)}}. Then the multiplicities m(λ_i), for any λ_i ∈ spec(Γ), can be computed by using the distance polynomials {p_i}_{i=0}^{d} of the graph Γ:

m(λ_i) = n ( Σ_{j=0}^{d} (1/k_j) p_j(λ_i)^2 )^{−1}   (0 ≤ i ≤ d),

where k_j := p_j(λ_0).
Proof: Consider the matrix

P =
| p_0(λ_0)  p_0(λ_1)  ...  p_0(λ_d) |
| p_1(λ_0)  p_1(λ_1)  ...  p_1(λ_d) |
|   ...       ...     ...    ...    |
| p_d(λ_0)  p_d(λ_1)  ...  p_d(λ_d) |

where the p_i(x) are the distance polynomials. From Proposition 10.07,

⟨p_h, p_l⟩ = { k_h, if h = l; 0, otherwise },

while from Proposition 10.12,

⟨p, q⟩ = (1/n) Σ_{k=0}^{d} m(λ_k) p(λ_k) q(λ_k).
where the different eigenvalues of Γ are in decreasing order, λ_0 > λ_1 > ... > λ_d, and the superscripts stand for their multiplicities m_i = m(λ_i). Then all the multiplicities add up to n = |V|, the number of vertices of Γ.
Proof: We know that an eigenvalue of A is a scalar λ such that Av = λv for some nonzero v ∈ R^n. From (A − λI)v = 0 it follows that Av = λv for some nonzero v if and only if det(A − λI) = 0, that is, iff λ ∈ {λ_0, λ_1, ..., λ_d}, where det(A − λI) = (λ − λ_0)^{m_0} (λ − λ_1)^{m_1} ··· (λ − λ_d)^{m_d}. Recall that the number m_i is called the algebraic multiplicity of λ_i.
Since A is a real symmetric matrix, it follows from Proposition 2.09 that A is diagonalizable, and (by Theorem 2.12) it follows that geo mult_A(λ) = alg mult_A(λ) for each λ ∈ σ(A) = {λ_0, λ_1, ..., λ_d}, where geo mult_A(λ_i) is the geometric multiplicity of λ_i, that is, dim ker(A − λ_i I) = dim(E_i). Finally, from Lemma 4.02, m_0 + m_1 + ... + m_d = n, and the result follows.
In Definition 4.03 we defined the principal idempotents of A by E_i := U_i U_i^⊤, where U_i are the matrices whose columns form an orthonormal basis of the eigenspace E_i := ker(A − λ_i I) (λ_0 > λ_1 > ... > λ_d are the distinct eigenvalues of A).
(11.02) Example
Let Γ = (V, E) denote a regular graph with λ_0 as its largest eigenvalue. Then (from Proposition 2.15) the multiplicity of λ_0 is 1 and j = (1, 1, ..., 1)^⊤ is an eigenvector for λ_0. From this it follows that

E_0 = U_0 U_0^⊤ = (j/‖j‖)(j^⊤/‖j‖) = (1/‖j‖^2) j j^⊤ = (1/n) J,

where J is the n × n all-ones matrix. ♦
(11.03) Exercise
A path graph P_n (n ≥ 1) is a graph with vertex set {1, 2, ..., n} and edge set {{1, 2}, {2, 3}, ..., {n − 1, n}} (a graph with n ≥ 1 vertices that can be drawn so that all of its vertices and edges lie on a single straight line; for n ≥ 2, two vertices have degree 1 and the other n − 2 vertices have degree 2). An illustration of P_3 is in Figure 43 (left).
Cartesian product Γ1 × Γ2 of graphs Γ1 and Γ2 is a graph such that
(i) the vertex set of Γ1 × Γ2 is the Cartesian product V (Γ1 ) × V (Γ2 ) and
(ii) any two vertices (u, u0 ) and (v, v 0 ) are adjacent in Γ1 × Γ2 if and only if either
(a) u = v and u0 is adjacent with v 0 in Γ2 , or
(b) u0 = v 0 and u is adjacent with v in Γ1 .
An illustration of P_3 × P_3 is in Figure 43 (right).
Determine principal idempotents for graph Γ = P3 × P3 .
FIGURE 43
Path graph P3 and graph P3 × P3 .
With the vertices of P3 × P3 ordered (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2), (3,3), the principal idempotents corresponding to λ2 = 0, λ3 = −√2 and λ4 = −2√2 are

E2 = (1/8) ·
[  3    0   −1    0   −2    0   −1    0    3 ]
[  0    2    0   −2    0   −2    0    2    0 ]
[ −1    0    3    0   −2    0    3    0   −1 ]
[  0   −2    0    2    0    2    0   −2    0 ]
[ −2    0   −2    0    4    0   −2    0   −2 ]
[  0   −2    0    2    0    2    0   −2    0 ]
[ −1    0    3    0   −2    0    3    0   −1 ]
[  0    2    0   −2    0   −2    0    2    0 ]
[  3    0   −1    0   −2    0   −1    0    3 ]

E3 = (1/8) ·
[  2   −√2    0   −√2    0    √2    0    √2   −2 ]
[ −√2    2   −√2    0     0    0    √2   −2    √2 ]
[  0   −√2    2    √2    0   −√2   −2    √2    0 ]
[ −√2    0    √2    2     0   −2   −√2    0    √2 ]
[  0     0    0     0     0    0    0     0    0 ]
[  √2    0   −√2   −2     0    2    √2    0   −√2 ]
[  0    √2   −2   −√2    0    √2    2   −√2    0 ]
[  √2   −2    √2    0     0    0   −√2    2   −√2 ]
[ −2    √2    0    √2    0   −√2    0   −√2    2 ]

E4 = (1/16) ·
[  1    −√2    1    −√2    2    −√2    1    −√2    1 ]
[ −√2    2   −√2     2   −2√2    2   −√2     2   −√2 ]
[  1    −√2    1    −√2    2    −√2    1    −√2    1 ]
[ −√2    2   −√2     2   −2√2    2   −√2     2   −√2 ]
[  2   −2√2    2   −2√2    4   −2√2    2   −2√2    2 ]
[ −√2    2   −√2     2   −2√2    2   −√2     2   −√2 ]
[  1    −√2    1    −√2    2    −√2    1    −√2    1 ]
[ −√2    2   −√2     2   −2√2    2   −√2     2   −√2 ]
[  1    −√2    1    −√2    2    −√2    1    −√2    1 ]
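The matrix E2 above has rational entries, so it can be verified exactly with a short, self-contained Python check (a sketch, not part of the thesis; the vertex ordering (1,1), (1,2), ..., (3,3) and the use of Fraction are our own choices). Since λ2 = 0, the matrix E2 must satisfy A E2 = 0, E2² = E2 and trace E2 = 3 (the multiplicity of 0):

```python
from fractions import Fraction
from itertools import product

# Adjacency matrix of the Cartesian product P3 x P3 (9 vertices).
V = list(product(range(3), range(3)))
A = [[1 if (u[0] == v[0] and abs(u[1] - v[1]) == 1) or
           (u[1] == v[1] and abs(u[0] - v[0]) == 1) else 0
      for v in V] for u in V]

# Candidate principal idempotent for lambda_2 = 0 (multiplicity 3).
M = [[ 3, 0,-1, 0,-2, 0,-1, 0, 3],
     [ 0, 2, 0,-2, 0,-2, 0, 2, 0],
     [-1, 0, 3, 0,-2, 0, 3, 0,-1],
     [ 0,-2, 0, 2, 0, 2, 0,-2, 0],
     [-2, 0,-2, 0, 4, 0,-2, 0,-2],
     [ 0,-2, 0, 2, 0, 2, 0,-2, 0],
     [-1, 0, 3, 0,-2, 0, 3, 0,-1],
     [ 0, 2, 0,-2, 0,-2, 0, 2, 0],
     [ 3, 0,-1, 0,-2, 0,-1, 0, 3]]
E2 = [[Fraction(x, 8) for x in row] for row in M]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(9)) for j in range(9)]
            for i in range(9)]

zero = [[0] * 9 for _ in range(9)]
assert matmul(A, E2) == zero                   # A E2 = lambda_2 E2 = 0
assert matmul(E2, E2) == E2                    # E2 is idempotent
assert sum(E2[i][i] for i in range(9)) == 3    # trace = multiplicity of 0
```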
(11.04) Proposition
The set {E0, E1, ..., Ed} is an orthogonal basis of the adjacency algebra A(Γ).
Proof: By Proposition 4.05 we have that A = span{E0, E1, ..., Ed}. We have seen that
EiEj = δijEi (Proposition 5.02), and since an orthogonal set is linearly independent (Proposition
5.03), the result follows.
(11.05) Proposition
Let Γ = (V, E) denote a simple graph with adjacency matrix A and with d + 1 distinct
eigenvalues. The principal idempotents of Γ satisfy the following equation

    Ei = (1/φi) Π_{j=0, j≠i}^{d} (A − λj I),   (0 ≤ i ≤ d),

where φi = Π_{j=0, j≠i}^{d} (λi − λj).
76 CHAPTER II. DISTANCE-REGULAR GRAPHS
Proof: We know that for a set of m points S = {(x1, y1), (x2, y2), ..., (xm, ym)} with distinct xi's there is a unique polynomial of degree m − 1 through all of them, namely

    p(x) = Σ_{i=1}^{m} yi · Π_{j=1, j≠i}^{m} (x − xj) / Π_{j=1, j≠i}^{m} (xi − xj).

By Lemma 4.04, we know that f(A) = f(λ0)E0 + f(λ1)E1 + ... + f(λd)Ed. If for the function f above we pick

    gi(x) = 1 if x = λi,   gi(x) = 0 if x ≠ λi,

and let p be the interpolating polynomial of degree d through the points (λ0, gi(λ0)), ..., (λd, gi(λd)), we have

    p(A) = Ei   and   p(A) = Π_{j=0, j≠i}^{d} (A − λj I) / Π_{j=0, j≠i}^{d} (λi − λj) = (1/φi) Π_{j=0, j≠i}^{d} (A − λj I).
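The product formula can be checked on a small example (our own choice, not from the thesis): for the complete graph K3 the distinct eigenvalues are λ0 = 2 and λ1 = −1, so E0 = (A + I)/3 and E1 = (A − 2I)/(−3). The sketch below verifies idempotency, mutual orthogonality and E0 + E1 = I exactly:

```python
from fractions import Fraction

# Complete graph K3: distinct eigenvalues lambda_0 = 2, lambda_1 = -1.
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
lam = [2, -1]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
def add(X, Y, a=1, b=1):
    return [[a * X[i][j] + b * Y[i][j] for j in range(3)] for i in range(3)]
def scale(X, c):
    return [[c * x for x in row] for row in X]

# E_i = (1/phi_i) prod_{j != i} (A - lambda_j I), phi_i = prod_{j != i} (lambda_i - lambda_j)
E0 = scale(add(A, I, 1, 1), Fraction(1, lam[0] - lam[1]))   # (A + I)/3
E1 = scale(add(A, I, 1, -2), Fraction(1, lam[1] - lam[0]))  # (A - 2I)/(-3)

assert matmul(E0, E0) == E0 and matmul(E1, E1) == E1        # idempotent
assert matmul(E0, E1) == [[0] * 3 for _ in range(3)]        # E0 E1 = 0
assert add(E0, E1) == I                                     # E0 + E1 = I
```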
(11.06) Theorem
The principal idempotents of Γ represent the orthogonal projectors onto Ei = ker(A − λi I)
(along im(A − λi I)).
Proof: First recall some basic definitions from Linear algebra. Subspaces X , Y of a space V
are said to be complementary whenever
V =X +Y and X ∩ Y = {0},
in which case V is said to be the direct sum of X and Y, and this is denoted by writing
V = X ⊕ Y. This is equivalent to saying that for each v ∈ V there are unique vectors x ∈ X
and y ∈ Y such that v = x + y. Vector x is called the projection of v onto X along Y. Now fix i and write each v ∈ R^n as v = Ei v + (I − Ei)v; then Ei v is in im(Ei)
11. CHARACTERIZATION OF DRG INVOLVING THE PRINCIPAL IDEMPOTENT MATRICES77
and (I − Ei)v is in ker(Ei) (because Ei((I − Ei)v) = (Ei − Ei²)v = (Ei − Ei)v = 0).
Furthermore, im(Ei) ∩ ker(Ei) = {0} because

    x ∈ im(Ei) ∩ ker(Ei) ⟹ x = Ei v and Ei x = 0 ⟹ x = Ei v = Ei² v = Ei x = 0,

and thus im(Ei) and ker(Ei) are complementary. We can now conclude that Ei is a projector, because each v ∈ R^n can be uniquely written as v = x + y, where x ∈ im(Ei) and y ∈ ker(Ei), and idempotency of Ei guarantees Ei v = Ei x + Ei y = x.
With this we have shown that Ei is the projector onto im(Ei) and that

    R^n = im(Ei) ⊕ ker(Ei).
For any real matrix we have im(Ei)⊥ = ker(Ei^⊤). Since Ei is symmetric,

    Ei = Ei^⊤ ⟺ im(Ei) = im(Ei^⊤) ⟺ im(Ei) = ker(Ei)⊥ ⟺ im(Ei) ⊥ ker(Ei),

so

    R^n = im(Ei) ⊕ ker(Ei) = ker(Ei)⊥ ⊕ ker(Ei),   that is   im(Ei) = ker(Ei)⊥,

and Ei is an orthogonal projector. Thus

    im(Ei) = im(Ui) = ker(A − λi I).
To show ker(Ei) = im(A − λi I), use A = Σ_{j=0}^{d} λj Ej together with the already established properties of the Ei's to conclude

    Ei(A − λi I) = Ei ( Σ_{j=0}^{d} λj Ej − λi Σ_{j=0}^{d} Ej ) = 0 ⟹ im(A − λi I) ⊆ ker(Ei).
But we already know that ker(A − λi I) = im(Ei), so

    dim im(A − λi I) = n − dim ker(A − λi I) = n − dim im(Ei) = dim ker(Ei),

and therefore

    im(A − λi I) = ker(Ei).

Therefore, Ei is the orthogonal projector onto Ei (along im(A − λi I)).
FIGURE 44
E i projects on the λi -eigenspace Ei .
with respect to the scalar product ⟨p, q⟩ = (1/n) Σ_{k=0}^{d} mk p(λk)q(λk)
on the space of all polynomials with degree at most d, normalized in such a way that
‖pi‖² = pi(λ0).
(11.08) Problem
Prove that the polynomials pi(x) from Definition 11.07 exist for all i = 0, 1, ..., d (so that the given definition makes sense). ♦
Solution: Consider the linearly independent set {1, x, x², ..., x^d} of d + 1 elements. Since we have a scalar product ⟨·,·⟩ we can use the Gram-Schmidt orthogonalization procedure and form an orthonormal system {r0, r1, ..., rd} (by the definition of the Gram-Schmidt procedure, notice that for our system {r0, r1, ..., rd} we will have dgr rj = j and ‖rj‖ = 1). Now, for arbitrary numbers α0, α1, ..., αd, the set {α0 r0, α1 r1, ..., αd rd} is an orthogonal set (because ⟨αj rj, αi ri⟩ = αj αi ⟨rj, ri⟩ = 0 for i ≠ j). This means that if for arbitrary rj we define c := rj(λ0) and pj(x) := c rj(x), we have

    ‖pj‖² = ⟨c rj, c rj⟩ = c² ‖rj‖² = c · c = c rj(λ0) = pj(λ0),

that is ‖pj‖² = pj(λ0). (Note that rj(λ0) ≠ 0: the zeros of orthogonal polynomials with respect to a positive discrete weight lie strictly between the smallest and the largest mesh point, so in fact pj(λ0) = rj(λ0)² > 0.) Therefore the set {p0, p1, ..., pd}, where pj(x) := rj(λ0) rj(x), is an orthogonal system with ‖pj‖² = pj(λ0) for j = 0, 1, ..., d. ♦
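The construction in the solution can be carried out explicitly for a small spectrum. As an illustration (our own example, not from the thesis) take the 4-cycle C4 with spectrum {2¹, 0², (−2)¹}; running unnormalized Gram-Schmidt on {1, x, x²} and rescaling by pk = (vk(λ0)/‖vk‖²) vk yields p0 = 1, p1 = x and p2 = (x² − 2)/2, each satisfying ‖pk‖² = pk(λ0):

```python
from fractions import Fraction

# Spectrum of the 4-cycle C4: (eigenvalue, multiplicity), lambda_0 = 2, n = 4.
spec = [(Fraction(2), 1), (Fraction(0), 2), (Fraction(-2), 1)]
n = 4
lam0 = Fraction(2)

def ev(p, x):                      # p = [c0, c1, c2] means c0 + c1 x + c2 x^2
    return p[0] + p[1] * x + p[2] * x * x

def ip(p, q):                      # <p,q> = (1/n) sum_i m_i p(lam_i) q(lam_i)
    return sum(m * ev(p, l) * ev(q, l) for l, m in spec) / n

# Gram-Schmidt on {1, x, x^2}, without normalizing (so dgr v_k = k stays exact).
mono = [[Fraction(1), 0, 0], [0, Fraction(1), 0], [0, 0, Fraction(1)]]
v = []
for b in mono:
    w = list(b)
    for u in v:
        c = ip(b, u) / ip(u, u)
        w = [wi - c * ui for wi, ui in zip(w, u)]
    v.append(w)

# p_k = v_k(lam0)/||v_k||^2 * v_k gives the normalization ||p_k||^2 = p_k(lam0).
P = [[ev(u, lam0) / ip(u, u) * c for c in u] for u in v]
for p in P:
    assert ip(p, p) == ev(p, lam0)
assert P[0] == [1, 0, 0] and P[1] == [0, 1, 0]   # p0 = 1, p1 = x
assert P[2] == [-1, 0, Fraction(1, 2)]           # p2 = (x^2 - 2)/2
```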
(11.09) Comment
We can now examine the polynomial p0 from Definition 11.07. Notice that dgr(p0) = 0, so we can, for example, say that p0 = c. Since

    ⟨p0, p0⟩ = (1/n) Σ_{k=0}^{d} mk p0(λk)p0(λk) = (c²/n) Σ_{k=0}^{d} mk = c²

and ‖pi‖² = pi(λ0), we have that c² = c, and since c ≠ 0 this is possible if and only if c = 1. Therefore
p0 = 1.
If Γ is δ-regular then

    ⟨1, x⟩ = (1/n) Σ_{i=0}^{d} mi λi = (1/n) trace(A) = 0   (by Theorem 4.07),
    ‖1‖² = (1/n) Σ_{i=0}^{d} mi = 1,
    ‖x‖² = (1/n) Σ_{i=0}^{d} mi λi² = (1/n) trace(A²) = δ = λ0   (by Theorem 4.07).

It is clear from the above three lines that if p1 = x, then p1 is orthogonal to p0
and ‖p1‖² = p1(λ0). ♦
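The three quantities above are easy to evaluate from a spectrum alone. A minimal sketch (our own, using the spectrum of the 4-cycle C4 as an example) confirms ⟨1, x⟩ = 0, ‖1‖² = 1 and ‖x‖² = δ = λ0:

```python
from fractions import Fraction

# Spectrum of the 2-regular graph C4: {2^1, 0^2, (-2)^1}, n = 4, delta = 2.
spec = [(Fraction(2), 1), (Fraction(0), 2), (Fraction(-2), 1)]
n = 4

one_dot_x = sum(m * l for l, m in spec) / n          # (1/n) trace(A)
norm1_sq  = sum(m for _, m in spec) / n              # (1/n) sum m_i
normx_sq  = sum(m * l * l for l, m in spec) / n      # (1/n) trace(A^2)

assert one_dot_x == 0
assert norm1_sq == 1
assert normx_sq == 2     # = delta = lambda_0 for a 2-regular graph
```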
(11.10) Comment
From Proposition 10.07 we see that the distance polynomials of a regular graph are orthogonal
with respect to the scalar product ⟨p, q⟩ = (1/n) trace(p(A)q(A)). Since these polynomials satisfy the
condition ‖pi‖² = pi(λ0) (see Exercise 10.06), we have that if the distance polynomials pi of a
regular graph have degree i, then they are in fact the predistance polynomials. ♦
(11.11) Proposition
Let Γ = (V, E) be a simple (connected) regular graph with spec(Γ) = {λ0^{m0}, λ1^{m1}, ..., λd^{md}},
and let p0, p1, ..., pd be the sequence of predistance polynomials. If qi = Σ_{j=0}^{i} pj then

    qd(A) = J.
Proof: To prove the claim, we first show that qi is the (unique) polynomial p of degree i that
maximizes p(λ0) subject to the constraint that ⟨p, p⟩ = ⟨qi, qi⟩. To show this property, write a
polynomial p of degree i as p = Σ_{j=0}^{i} αj pj for certain αj (for fixed i). Then the problem
reduces to maximizing p(λ0) = Σ_{j=0}^{i} αj pj(λ0) subject to Σ_{j=0}^{i} αj² pj(λ0) = ⟨qi, qi⟩, because

    ⟨p, p⟩ = ⟨Σ_{j=0}^{i} αj pj, Σ_{k=0}^{i} αk pk⟩ = Σ_{j=0}^{i} αj² ⟨pj, pj⟩ = Σ_{j=0}^{i} αj² pj(λ0).

Notice that

    ⟨p, qi⟩ = ⟨Σ_{j=0}^{i} αj pj, Σ_{k=0}^{i} pk⟩ = Σ_{j=0}^{i} αj ⟨pj, pj⟩ = Σ_{j=0}^{i} αj pj(λ0),

    ⟨qi, qi⟩ = ⟨Σ_{j=0}^{i} pj, Σ_{k=0}^{i} pk⟩ = Σ_{j=0}^{i} ⟨pj, pj⟩ = Σ_{j=0}^{i} pj(λ0).
Hence the constraint ⟨p, p⟩ = ⟨qi, qi⟩ reads Σ_{j=0}^{i} αj² pj(λ0) = Σ_{j=0}^{i} pj(λ0), or in other words

    Σ_{j=0}^{i} (1 − αj²) pj(λ0) = 0.

Since pj(λ0) > 0, j = 0, 1, ..., d (Problem 11.08), the constraint says that the weighted average of the αj² (with positive weights pj(λ0)) equals 1. Now it is not hard to see that for maximal p(λ0) subject to the constraint ⟨p, p⟩ = ⟨qi, qi⟩ we must have α0 = α1 = ... = αi = 1, and therefore qi is the optimal p.
The same conclusion can be obtained by considering the Cauchy-Schwarz inequality
|⟨p, qi⟩| ≤ ‖p‖‖qi‖ (with equality iff the polynomials p and qi are linearly dependent), that is
|⟨p, qi⟩|² ≤ ‖p‖²‖qi‖², or in other words

    p(λ0)² = [Σ_{j=0}^{i} αj pj(λ0)]² ≤ [Σ_{j=0}^{i} αj² pj(λ0)][Σ_{j=0}^{i} pj(λ0)] = [Σ_{j=0}^{i} pj(λ0)][Σ_{j=0}^{i} pj(λ0)] = qi(λ0)²,

with equality if and only if all αj are equal to one. The constraint and the fact that
pj(λ0) > 0 for all j guarantee that qi is the optimal p.
On the other hand, since ⟨p, p⟩ = (1/n) p(λ0)² + (1/n) Σ_{j=1}^{d} mj p(λj)² (Definition 11.07), that is

    (1/n) p(λ0)² = ⟨p, p⟩ − (1/n) Σ_{j=1}^{d} mj p(λj)²,

the objective of the optimization problem is clearly equivalent to minimizing Σ_{j=1}^{d} mj p(λj)².
For i = d, there is a trivial solution for this: take the polynomial that is zero on λj for all
j = 1, 2, ..., d. Hence (since qd is the optimal p) we may conclude that qd(λj) = 0 for
j = 1, 2, ..., d, and from the constraint it further follows that

    qd(λ0) = Σ_{j=0}^{d} pj(λ0) = ⟨qd, qd⟩ = (1/n) qd(λ0)²,

that is

    qd(λ0) = n.
Recall that in Example 11.02 we had E0 = (1/n)J, and from Proposition 5.02(iii)
p(A) = Σ_{i=0}^{d} p(λi)Ei, so we have

    qd(A) = Σ_{i=0}^{d} qd(λi)Ei = qd(λ0)E0 = n · (1/n)J = J.
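The identity qd(A) = J can be checked concretely (a sketch using our own example, not the thesis's): for the 4-cycle C4 the predistance polynomials work out to p0 = 1, p1 = x, p2 = (x² − 2)/2, so q2(A) = I + A + (A² − 2I)/2 should be the all-ones matrix:

```python
from fractions import Fraction

# 4-cycle C4; predistance polynomials p0 = 1, p1 = x, p2 = (x^2 - 2)/2.
A = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

A2 = matmul(A, A)
# q2(A) = p0(A) + p1(A) + p2(A) = I + A + (A^2 - 2I)/2
q2A = [[I[i][j] + A[i][j] + Fraction(A2[i][j] - 2 * I[i][j], 2)
        for j in range(4)] for i in range(4)]
assert q2A == [[1] * 4 for _ in range(4)]    # q_d(A) = J
```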
(11.12) Lemma
Let p0, p1, ..., pd be the sequence of predistance polynomials. If Ad = pd(A), then Ai = pi(A)
for all i = 0, 1, ..., d.
FIGURE 45
If ∂(x, y) < i and ∂(y, z) = d then ∂(x, z) > d − i.
Proof: Since pi is a polynomial of degree i, it follows that if x and y are two vertices at
distance larger than i, then (pi(A))xy = 0. Suppose now that Ad = pd(A). In Proposition
13.07, which is obtained independently of this lemma, we will see that for any orthogonal
system r0, r1, ..., rd and each i there is a polynomial of degree i such that rd−i equals this polynomial times rd.
Since the predistance polynomials form an orthogonal system, it follows that pi(A) = s(A)Ad
for some polynomial s of degree d − i. If the
distance between x and y is smaller than i, then for all vertices z at distance d from y, we
have that the distance between z and x is more than d − i (by the triangle inequality; see Figure 45), hence
(s(A))xz = 0. Thus

    (pi(A))xy = (s(A)Ad)xy = Σ_z (s(A))xz (Ad)zy = 0

(if the second factor in the sum (that is, (Ad)zy) is nonzero, then by the previous comments the
first factor (that is, (s(A))xz) is zero). Therefore, for arbitrary x, y we have that
(pi(A))xy = 0 both for ∂(x, y) > i and for ∂(x, y) < i. Because this holds for all i = 0, 1, ..., d and
because Σ_{i=0}^{d} pi(A) = qd(A) = J, it follows that pi(A) = Ai for all i = 0, 1, ..., d.
(11.13) Lemma
The algebras A and R[x]/⟨Z⟩, with their respective scalar products ⟨R, S⟩ = (1/n) trace(RS)
and ⟨p, q⟩ = (1/n) Σ_{i=0}^{d} mi p(λi)q(λi), are isometric (where λ0 > λ1 > ... > λd is a mesh of real
numbers and ⟨Z⟩ is the ideal generated by the polynomial Z = Π_{i=0}^{d} (x − λi); much more
about R[x]/⟨Z⟩ will be said in Section 12).
(11.14) Theorem
Let Γ be a regular graph and let p0, p1, ..., pd be its sequence of predistance polynomials.
Let δd = ‖Ad‖² = (1/n) trace(AdAd). Then δd ≤ pd(λ0), and equality is attained if and only if
Ad = pd(A).
    = (1/n) Σ_u Σ_v (R ∘ S)uv = (1/n) Σ_{u,v} (R ∘ S)uv.
Let {pi}0≤i≤d be the sequence of predistance polynomials. From Lemma 11.13 the space A is
isometric to R[x]/⟨Z⟩, so since {pi}0≤i≤d is an orthogonal basis for R[x]/⟨Z⟩, we also have that
{Ri = pi(A)}0≤i≤d is an orthogonal basis for A. Using the given scalar product, we can extend
{Ri}0≤i≤d to a basis of the space T, say to {Ri}0≤i≤d+D−1. Now an arbitrary matrix S ∈ T can be
written in the form

    S = Σ_{i=0}^{d+D−1} (⟨S, Ri⟩/‖Ri‖²) Ri = Σ_{i=0}^{d} (⟨S, pi(A)⟩/‖pi(A)‖²) pi(A) + Σ_{i=d+1}^{d+D−1} (⟨S, Ri⟩/‖Ri‖²) Ri,

where the first summand belongs to A and the second to A⊥.
Consider the orthogonal projection T → A, denoted by S ↦ S̃.
Using in A the orthogonal basis p0, p1, ..., pd of predistance polynomials, this projection can
be expressed as

    S̃ = Σ_{i=0}^{d} (⟨S, pi⟩/‖pi‖²) pi = Σ_{i=0}^{d} (⟨S, pi⟩/pi(λ0)) pi.

In particular, for S = Ad we get (using Proposition 11.11)

    Ãd = (⟨Ad, H⟩/‖pd‖²) pd = (⟨Ad, Ad⟩/‖pd‖²) pd = (δd/pd(λ0)) pd,
where

    δd = (1/n) trace(AdAd) = (1/n) Σ_u (AdAd)uu = (1/n) Σ_u Σ_v (Ad)uv (Ad)vu = (1/n) Σ_{u∈V} |Γd(u)|

(since Σ_v (Ad)uv(Ad)vu = |Γd(u) ∩ Γd(u)|). Writing N := Ad − Ãd (so that N ⊥ A), we get

    ‖N‖² = ‖Ad‖² − ‖Ãd‖² = δd − δd²/pd(λ0) = δd (1 − δd/pd(λ0)).

This implies the inequality. Moreover, equality is attained if and only if N is zero (δd = pd(λ0)
⇔ N = 0 ⇔ Ãd = Ad ⇔ Ad ∈ A ⇔ pd(A) = Ad).
We point out that the relation δd ≤ pd(λ0) holds for any graph. Now we can prove one
characterization that is very similar to the one given in Theorem 10.09 from Section 10.
Proof: Let Γk be the graph with the same vertex set as Γ and where two vertices are
adjacent whenever they are at distance k in Γ. Then, for example, Ad is the adjacency matrix of
Γd. For the matrix A we know that Aj = λ0 j (j the all-ones vector), so that A^k j = λ0^k j for any k ∈ N.
If Ad = q(A) we have Ad j = q(A)j = q(λ0)j, and this is possible if and only if Γd is a regular
graph of degree q(λ0). Next, notice that δd = q(λ0) because

    δd = ‖Ad‖² = ⟨Ad, Ad⟩ = (1/n) Σ_{u∈V} |Γd(u)| = (1/n) Σ_{u∈V} q(λ0) = q(λ0).
A0 = I = E0 + E1 + ... + Ed
We know that the distance-i matrices of a distance-regular graph satisfy the three-term recurrence
AAk = bk−1Ak−1 + akAk + ck+1Ak+1 for some constants ak, bk and ck (Theorem 8.02). Since
Ak = pk(A) = Σ_{j=0}^{d} pk(λj)Ej and A = Σ_{j=0}^{d} λjEj, we have

    ck+1Ak+1 = AAk − bk−1Ak−1 − akAk
             = (Σ_{j=0}^{d} λjEj)(Σ_{j=0}^{d} pk(λj)Ej) − bk−1 Σ_{j=0}^{d} pk−1(λj)Ej − ak Σ_{j=0}^{d} pk(λj)Ej
             = Σ_{j=0}^{d} λj pk(λj)Ej + Σ_{j=0}^{d} (−bk−1 pk−1(λj) − ak pk(λj)) Ej,

that is

    Ak+1 = (1/ck+1) Σ_{j=0}^{d} (λj pk(λj) − bk−1 pk−1(λj) − ak pk(λj)) Ej,
Proof: (⇒) Suppose that Γ is a distance-regular graph, so that it has spectrally maximal
diameter D = d (Theorem 8.22 (characterization C) and Proposition 5.04). We know that

    p(A) = Σ_{i=0}^{d} p(λi)Ei

for every polynomial p ∈ R[x], where λi ∈ σ(A) (Proposition 5.02). Now, taking p in the equation
above to be the distance polynomial pk, 0 ≤ k ≤ d, we get

    Ak = Σ_{i=0}^{d} pk(λi)Ei   (0 ≤ k ≤ d).
We have already considered the matrix P in the proof of Proposition 10.13, where we noticed that

              [ m(λ0)p0(λ0)/k0   m(λ0)p1(λ0)/k1   ...   m(λ0)pd(λ0)/kd ]
    P⁻¹ = (1/n)[ m(λ1)p0(λ1)/k0   m(λ1)p1(λ1)/k1   ...   m(λ1)pd(λ1)/kd ]
              [       ...               ...                    ...      ]
              [ m(λd)p0(λd)/k0   m(λd)p1(λd)/k1   ...   m(λd)pd(λd)/kd ]

is the inverse of P. So

    (E0, E1, ..., Ed)^⊤ = P⁻¹ (A0, A1, ..., Ad)^⊤.

Consequently,

    Ei = Σ_{j=0}^{d} (P⁻¹)ij Aj = (m(λi)/n) Σ_{j=0}^{d} (pj(λi)/pj(λ0)) Aj,   (0 ≤ i ≤ d),
and, equating the corresponding (u, v)-entries, it follows that for vertices u, v with ∂(u, v) = h,
the (u, v)-entry of Ei is equal to m(λi)ph(λi)/(n ph(λ0)). Therefore, the (u, v)-entry of Ei depends only on
the distance between u and v.
(⇐) Conversely, assume that for every 0 ≤ i ≤ d and for every pair of vertices u, v of Γ, the
(u, v)-entry of Ei depends only on the distance between u and v. Then

    Eℓ = Σ_{j=0}^{D} qjℓ Aj   (0 ≤ ℓ ≤ d)

for some constants qjℓ. Notice that the set {A0, A1, ..., AD} is linearly independent, because no
two vertices u, v can be at two different distances from each other: for any position (u, v) in
the set of distance matrices, exactly one matrix has a one in that position, and all
the other matrices have a zero there. So this set is a linearly independent set of D + 1 elements.
Since {E0, E1, ..., Ed} is a basis of the adjacency algebra A(Γ) (Proposition 11.04),
any element of A can be written as a linear combination of the Ei's; since {I, A, ..., A^D} is a
linearly independent set and the above equation implies that every Ei can be written as a
linear combination of the Ai's, we have that {I, A, ..., A^D} is also a basis of A, and the result
follows.
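The (⇒) direction can be illustrated numerically (our own sketch; the 4-cycle C4 is a distance-regular graph of diameter 2). Computing each Ei from the product formula of Proposition 11.05 and grouping the entries by distance confirms that the (u, v)-entry of Ei depends only on ∂(u, v):

```python
from fractions import Fraction

# C4 is distance-regular; check that each principal idempotent is constant
# on every distance class.
A = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
dist = [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]]
lam = [Fraction(2), Fraction(0), Fraction(-2)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def idem(i):   # E_i = (1/phi_i) prod_{j != i} (A - lam_j I)
    E, phi = I4, Fraction(1)
    for j in range(3):
        if j != i:
            E = matmul(E, [[A[r][c] - lam[j] * I4[r][c] for c in range(4)] for r in range(4)])
            phi *= lam[i] - lam[j]
    return [[e / phi for e in row] for row in E]

for i in range(3):
    E = idem(i)
    val = {}
    for u in range(4):
        for v in range(4):
            h = dist[u][v]
            # entry (u, v) must be a function of dist(u, v) alone
            assert val.setdefault(h, E[u][v]) == E[u][v]
```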
8.22 (characterization C), we have that the set {I, A, ..., AD} is a basis for A = span{I, A, ..., A^d} =
= span{E0, E1, ..., Ed}, and therefore for every Ej there are unique constants c0j, c1j, ..., cDj
such that

    Ej = Σ_{i=0}^{D} cij Ai.

So we have

    Ej ∘ Ai = (Σ_{k=0}^{D} ckj Ak) ∘ Ai = cij Ai.

Now if we define qij := cij it follows that Ej ∘ Ai = qij Ai.
(Ej ∘ Ai = qij Ai ⇒ Ej = Σ_{i=0}^{D} qij Ai). For arbitrary Ej we have

    Ej ∘ A0 + Ej ∘ A1 + ... + Ej ∘ AD = Ej ∘ (A0 + A1 + ... + AD) = Ej ∘ J = Ej.

On the other hand Σ_{i=0}^{D} Ej ∘ Ai = Σ_{i=0}^{D} qij Ai and the result follows.
(Ej = Σ_{i=0}^{D} qij Ai ⇒ Ej ∈ D). This is trivial.
(Ej ∈ D ⇒ Γ distance-regular). Since {E0, E1, ..., Ed} is an orthogonal basis for A and
Ej = Σ_{i=0}^{D} qij Ai, it follows that A ⊆ D. Next, since {I, A, A², ..., A^d} is a basis of A, and
{I, A, A², ..., A^D} is a linearly independent set, we have dim A = d + 1 ≤ D + 1 = dim D, which
implies A = D, and the result follows.
⇐⇒ A^j ∈ D, j = 0, 1, ..., d.
Proof: We will show that: Γ distance-regular ⇒ A^j ∘ Ai = ai^(j) Ai ⇒ A^j = Σ_{i=0}^{D} ai^(j) Ai ⇒
A^j ∈ D ⇒ Γ distance-regular, and that Γ distance-regular ⇔ A^j = Σ_{i=0}^{D} Σ_{ℓ=0}^{d} qiℓ λℓ^j Ai.
(Γ distance-regular ⇒ A^j ∘ Ai = ai^(j) Ai). Since Γ is distance-regular, by Theorem 8.22
(characterization C) the set {A0, A1, ..., AD} = {I, A, ..., AD} is a basis for the adjacency algebra
A = span{I, A, ..., A^d}, so d = D. Now we have that for arbitrary A^j ∈ A there exist unique constants
cj0, cj1, ..., cjD such that

    A^j = cj0 A0 + cj1 A1 + ... + cjD AD = Σ_{k=0}^{D} cjk Ak.

So

    A^j ∘ Ai = (Σ_{k=0}^{D} cjk Ak) ∘ Ai = cji Ai.

Therefore, if we define ai^(j) := cji, it follows that

    A^j ∘ Ai = ai^(j) Ai,   i, j = 0, 1, ..., d (= D).
(A^j ∘ Ai = ai^(j) Ai ⇒ A^j = Σ_{i=0}^{D} ai^(j) Ai). For arbitrary A^j we have

    Σ_{i=0}^{D} A^j ∘ Ai = A^j ∘ Σ_{i=0}^{D} Ai = A^j ∘ J = A^j,

and since A^j ∘ Ai = ai^(j) Ai it follows that

    Σ_{i=0}^{D} A^j ∘ Ai = Σ_{i=0}^{D} ai^(j) Ai.

Therefore

    A^j = Σ_{i=0}^{D} ai^(j) Ai.
(A^j = Σ_{i=0}^{D} ai^(j) Ai ⇒ A^j ∈ D). This is trivial.
From the spectral decomposition (Proposition 5.02) we also have

    A^j = Σ_{ℓ=0}^{d} λℓ^j Eℓ.

If we denote the (u, v)-entry of A^j by auv^(j), and the (u, v)-entry of Eℓ by muv(λℓ), the previous
equation implies

    auv^(j) = (A^j)uv = Σ_{ℓ=0}^{d} muv(λℓ) λℓ^j.
In this section we begin by surveying some known results about orthogonal polynomials of
a discrete variable. We will describe one interesting family of orthogonal polynomials: the
canonical orthogonal system. We begin by presenting some notation and basic facts. In
Definition 12.04 we will define the scalar product associated to (M, g), and in Definition 14.21
we will define the canonical orthogonal system. Let us say here that for a set of finitely many real
numbers M = {λ0, λ1, ..., λd} and for gℓ := g(λℓ), we define the scalar product
⟨p, q⟩ := Σ_{ℓ=0}^{d} gℓ p(λℓ)q(λℓ), where p, q ∈ Rd[x], and this product is said to be associated
to (M, g). Let us also say that the sequence of polynomials (pk)0≤k≤d, defined by p0 := q0 = 1,
p1 := q1 − q0, p2 := q2 − q1, ..., pd−1 := qd−1 − qd−2, pd := qd − qd−1 = H0 − qd−1, will be called
the canonical orthogonal system associated to (M, g), where qk denotes the orthogonal
projection of H0 := (1/(g0π0)) Π_{i=1}^{d} (x − λi) (where π0 = Π_{i=1}^{d} (λ0 − λi)) onto Rk[x]. Main results
where qk = p0 + ... + pk and sk(u) = |Nk(u)| = |Γ0(u)| + |Γ1(u)| + ... + |Γk(u)|.
(J') A regular graph Γ with n vertices and spectrum spec(Γ) = {λ0^{m(λ0)}, λ1^{m(λ1)}, ..., λd^{m(λd)}} is
92 CHAPTER III. CHARACTERIZATION OF DRG WHICH INVOLVE THE SPECTRUM
where πh = Π_{i=0, i≠h}^{d} (λh − λi) and kd(u) = |Γd(u)|.
(12.01) Example
For example, consider the quotient ring R[x]/I where I = ⟨(x − 1)(x + 4)⟩, and two elements
I + x, I + x² + 1 ∈ R[x]/I. Then
(a)

    (I + x) + (I + x² + 1) = I + x² + x + 1 = I + 1 · (x² + 3x − 4) + (−2x + 5) = I + (−2x + 5),

since x² + 3x − 4 = (x − 1)(x + 4) ∈ I.
(b)
♦
1
Recall: A nonempty subset I of a ring R is said to be a (two-sided) ideal of R if I is a subgroup of R under
addition, and if ar ∈ I and ra ∈ I for all a ∈ I and all r ∈ R. In other words, rI = {ra | a ∈ I} ⊆ I and
Ir = {ar | a ∈ I} ⊆ I for all r ∈ R.
12. BASIC FACTS ON ORTHOGONAL POLYNOMIALS OF A DISCRETE VARIABLE93
It is not hard to show that in this ring, that is in R[x]/⟨Z⟩, we also have

    α(pq) = (αp)q = p(αq)

for all p, q ∈ R[x]/⟨Z⟩ and all scalars α. Therefore R[x]/⟨Z⟩ forms a quotient algebra.
Next, notice that Z(x) = Π_{ℓ=0}^{d} (x − λℓ) is a polynomial of degree d + 1. From Abstract
algebra we also know that every element of R[x]/⟨Z⟩ can be uniquely expressed in the form

    ⟨Z⟩ + (b0 + b1x + ... + bd x^d),

where b0, ..., bd ∈ R, and if we set r(x) = b0 + b1x + ... + bd x^d then r(x) is a polynomial such that
either r(x) = 0 or dgr r < dgr Z. This implies that R[x]/⟨Z⟩ can be identified with Rd[x]. With this
in mind it is not hard to prove the following lemma.
By FM we will denote the set of all real functions defined on the mesh M = {λ0, λ1, ..., λd}:

    FM := {f : M → R | M = {λ0, λ1, ..., λd}, λ0 > λ1 > ... > λd}.
Proof: We leave it as an interesting exercise for the reader to show that FM satisfies all axioms
from the definition of a vector space².
Let M = {λ0, λ1, ..., λd} be a mesh of real numbers and consider the functions ei, defined in the
following way:

    ei(λj) = 1 if i = j,   ei(λj) = 0 otherwise.

Is the set of functions {e0, e1, ..., ed} linearly independent? That is, are there scalars α0, α1, ...,
αd, not all zero, such that

    (α0 e0 + α1 e1 + ... + αd ed)(λi) = 0, for all i = 0, 1, ..., d?
2
A nonempty set V is said to be a vector space over a field F if: (i) there exists an operation called addition
that associates to each pair x, y ∈ V a new vector x + y ∈ V called the sum of x and y; (ii) there exists an
operation called scalar multiplication that associates to each α ∈ F and x ∈ V a new vector αx ∈ V called the
product of α and x; (iii) these operations satisfy the following axioms:
(V1)-(V5). V is an additive Abelian group (with neutral element 0).
(V6). 1v = v, for all v ∈ V where 1 is the (multiplicative) identity in F
(V7). α(βv) = (αβ)v for all v ∈ V and all α, β ∈ F.
(V8)-(V9). The two distributive laws hold:
(a) α(u + v) = αu + αv for all u, v ∈ V and all α ∈ F;
(b) (α + β)v = αv + βv for all v ∈ V and all α, β ∈ F.
The members of V are called vectors, and the members of F are called scalars. The vector 0 ∈ V is called
the zero vector, and the vector −x is called the negative of the vector x. We mention only in passing that if we
replace the field F by an arbitrary ring R, then we obtain what is called an R-module (or simply a module over
R).
94 CHAPTER III. CHARACTERIZATION OF DRG WHICH INVOLVE THE SPECTRUM
An interesting question, which we will not consider here, is: Is it possible to define a
multiplication of elements in FM such that FM forms an algebra that is isomorphic with Rd[x]?
From now on, we are interested in the sets R[x]/⟨Z(x)⟩, Rd[x] and FM just as vector spaces, and
we invite the reader to show that these sets are isomorphic as vector spaces, that is, that we have the
following natural identifications

    FM ←→ R[x]/⟨Z(x)⟩ ←→ Rd[x]. (29)

For simplicity, we represent by the same symbol, say p, any of the three mathematical objects
identified in (29). When we need to specify one of the above three sets, we will be explicit.
For the sake of the next definition, it is interesting to consider the mesh M = {λ0, λ1, λ2, λ3} and
real numbers π0, π1, π2 and π3 defined as follows:

    π0 = |λ0 − λ1||λ0 − λ2||λ0 − λ3| = Π_{ℓ=0, ℓ≠0}^{3} |λ0 − λℓ|,
    π1 = |λ1 − λ0||λ1 − λ2||λ1 − λ3| = Π_{ℓ=0, ℓ≠1}^{3} |λ1 − λℓ|,
    π2 = |λ2 − λ0||λ2 − λ1||λ2 − λ3| = Π_{ℓ=0, ℓ≠2}^{3} |λ2 − λℓ|,
    π3 = |λ3 − λ0||λ3 − λ1||λ3 − λ2| = Π_{ℓ=0, ℓ≠3}^{3} |λ3 − λℓ|.
Since λ0 > λ1 > λ2 > λ3, we can also write

    π1 = (−1)(λ1 − λ0)(λ1 − λ2)(λ1 − λ3) = (−1)¹ Π_{ℓ=0, ℓ≠1}^{3} (λ1 − λℓ),
    π2 = (−1)(λ2 − λ0)(−1)(λ2 − λ1)(λ2 − λ3) = (−1)² Π_{ℓ=0, ℓ≠2}^{3} (λ2 − λℓ),
    π3 = (−1)(λ3 − λ0)(−1)(λ3 − λ1)(−1)(λ3 − λ2) = (−1)³ Π_{ℓ=0, ℓ≠3}^{3} (λ3 − λℓ).
    Zk := ((−1)^k/πk) Π_{ℓ=0, ℓ≠k}^{d} (x − λℓ)   (0 ≤ k ≤ d).
(12.06) Proposition
The interpolating polynomials Zk (0 ≤ k ≤ d) satisfy Zk(λh) = δhk and ⟨Zh, Zk⟩ = δhk gk.
Proof:

    Zk(λh) = ((−1)^k/πk) Π_{ℓ=0, ℓ≠k}^{d} (λh − λℓ) = (1/πk) (−1)^k Π_{ℓ=0, ℓ≠k}^{d} (λh − λℓ) = δhk,

since the product equals 0 if h ≠ k, and equals (−1)^k πk if h = k.
    ⟨Zh, Zk⟩ = Σ_{ℓ=0}^{d} gℓ Zh(λℓ)Zk(λℓ) = gh Zh(λh) · Zk(λh) = gh δhk = δhk gk,

since Zk(λh) equals 1 if k = h and 0 if k ≠ h.
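Both properties are easy to test on a concrete mesh. A minimal sketch (our own choice of mesh M = {3, 1, −1, −3}, exact rational arithmetic) verifies Zk(λh) = δhk:

```python
from fractions import Fraction
from math import prod

M = [Fraction(x) for x in (3, 1, -1, -3)]   # mesh lambda_0 > lambda_1 > lambda_2 > lambda_3

def pi_(k):   # pi_k = prod_{l != k} |lambda_k - lambda_l|
    return prod(abs(M[k] - M[l]) for l in range(4) if l != k)

def Z(k, x):  # Z_k(x) = ((-1)^k / pi_k) prod_{l != k} (x - lambda_l)
    return (-1) ** k / pi_(k) * prod(x - M[l] for l in range(4) if l != k)

# Z_k(lambda_h) = delta_hk
for k in range(4):
    for h in range(4):
        assert Z(k, M[h]) == (1 if h == k else 0)
```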
For a given set of m points S = {(x1, y1), (x2, y2), ..., (xm, ym)} in which the xi's are
distinct, we know that there is a unique polynomial

    p(t) = α0 + α1t + α2t² + ... + αm−1 t^{m−1}

of degree m − 1 that passes through each point in S. In fact this polynomial must be given by

    p(t) = Σ_{i=1}^{m} yi · Π_{j=1, j≠i}^{m} (t − xj) / Π_{j=1, j≠i}^{m} (xi − xj).
Consider an arbitrary polynomial p(x) = ax³ + bx² + cx + d ∈ R3[x] and consider the set of points
S = {(λ0, p(λ0)), (λ1, p(λ1)), (λ2, p(λ2)), (λ3, p(λ3))}. The Lagrange interpolation polynomial for
this set is

    p(t) = Σ_{i=0}^{3} p(λi) Π_{j=0, j≠i}^{3} (t − λj) / Π_{j=0, j≠i}^{3} (λi − λj)
         = Σ_{i=0}^{3} p(λi) [(−1)^i / ((−1)^i Π_{j=0, j≠i}^{3} (λi − λj))] Π_{j=0, j≠i}^{3} (t − λj)
         = Σ_{i=0}^{3} p(λi) ((−1)^i/πi) Π_{j=0, j≠i}^{3} (t − λj) = Σ_{i=0}^{3} p(λi)Zi(t).
(12.07) Proposition
Let M = {λ0, λ1, ..., λd}, λ0 > λ1 > ... > λd, be a mesh of real numbers. For an arbitrary
polynomial p ∈ Rd[x] we have

    p = Σ_{k=0}^{d} p(λk)Zk,

where {Z0, Z1, ..., Zd} is the family of interpolating polynomials from Definition 12.05.
Proof: Let p(x) = ad x^d + ... + a1x + a0 be an arbitrary polynomial of degree d, and consider the set
of points S = {(λ0, p(λ0)), (λ1, p(λ1)), ..., (λd, p(λd))}. The Lagrange interpolation polynomial for S
is the unique polynomial of degree d that passes through each point in S, and is given by

    p(t) = Σ_{i=0}^{d} p(λi) Π_{j=0, j≠i}^{d} (t − λj) / Π_{j=0, j≠i}^{d} (λi − λj)
         = Σ_{i=0}^{d} p(λi) [(−1)^i / ((−1)^i Π_{j=0, j≠i}^{d} (λi − λj))] Π_{j=0, j≠i}^{d} (t − λj)
         = Σ_{i=0}^{d} (p(λi)(−1)^i/πi) Π_{j=0, j≠i}^{d} (t − λj) = Σ_{i=0}^{d} p(λi)Zi(t).
For example, comparing the leading coefficients in x³ = Σ_{k=0}^{3} λk³ Zk we have

    Σ_{k=0}^{3} ((−1)^k/πk) λk³ = 1.
(12.08) Corollary
The momentlike parameters πk := Π_{ℓ=0, ℓ≠k}^{d} |λk − λℓ| satisfy

    Σ_{k=0}^{d} ((−1)^k/πk) λk^i = 0   (0 ≤ i ≤ d − 1),   Σ_{k=0}^{d} ((−1)^k/πk) λk^d = 1.
Proof: If in Proposition 12.07 we take p(x) = 1, p(x) = x, ..., p(x) = x^d, we get

    x^i = Σ_{k=0}^{d} λk^i Zk = Σ_{k=0}^{d} λk^i ((−1)^k/πk) Π_{j=0, j≠k}^{d} (x − λj)   (0 ≤ i ≤ d),

and the result follows by comparing the coefficients of x^d on both sides.
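The moment identities of the corollary can be confirmed numerically (a sketch with our own mesh {3, 1, −1, −3}; any mesh of distinct reals would do):

```python
from fractions import Fraction
from math import prod

M = [Fraction(x) for x in (3, 1, -1, -3)]
d = 3
pi = [prod(abs(M[k] - M[l]) for l in range(d + 1) if l != k) for k in range(d + 1)]

for i in range(d + 1):
    s = sum((-1) ** k * M[k] ** i / pi[k] for k in range(d + 1))
    # zero for i < d, one for i = d
    assert s == (1 if i == d else 0)
```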
(12.09) Proposition
Suppose that V is a finite-dimensional real inner product space. If {v1 , ..., vn } is an
orthogonal basis of V, then for every vector u ∈ V, we have
    u = (⟨u, v1⟩/‖v1‖²) v1 + ... + (⟨u, vn⟩/‖vn‖²) vn.
Furthermore, if {v1 , ..., vn } is an orthonormal basis of V, then for every vector u ∈ V, we have
u = hu, v1 iv1 + ... + hu, vn ivn .
Proof: Since {v1 , ..., vn } is a basis of V, there exist unique α1 , ..., αn ∈ R such that
u = α1 v1 + ... + αn vn .
For every i = 1, ..., n, we have
hu, vi i = hα1 v1 + ... + αn vn , vi i = α1 hv1 , vi i + ... + αn hvn , vi i = αi hvi , vi i
since ⟨vi, vj⟩ = 0 if j ≠ i. Clearly vi ≠ 0, so that ⟨vi, vi⟩ ≠ 0, and so

    αi = ⟨u, vi⟩/⟨vi, vi⟩
for every i = 1, ..., n. The first assertion follows immediately. For the second assertion, note
that kvi k2 = hvi , vi i = 1 for every i = 1, ..., n.
    u1 = x1/‖x1‖   and   uk = (xk − Σ_{i=1}^{k−1} ⟨ui, xk⟩ui) / ‖xk − Σ_{i=1}^{k−1} ⟨ui, xk⟩ui‖   for k = 2, ..., n

is an orthonormal basis for S.
Proof: The proof can be found in any linear algebra book (for example see [37], page 309).
(12.12) Problem
Let p(x) = (x − λ0)(x − λ1)(x − λ2) ∈ R[x] be a polynomial with distinct real roots and let
R³ := {(α1, α2, α3)^⊤ | α1, α2, α3 ∈ R}. Consider the vector space R[x]/I where
I = ⟨p⟩ := {pq | q ∈ R[x]}. Recall that the elements of R[x]/I are cosets of the form I + r, where for
any I + r1, I + r2 ∈ R[x]/I, α ∈ R we define

    vector addition: (I + r1) + (I + r2) = I + (r1 + r2), and
    scalar multiplication: α(I + r1) = I + (αr1).

Show that the vector spaces R[x]/I and R³ are isomorphic.
Solution: The roots of the polynomial p(x) are λ0, λ1 and λ2. From Abstract algebra we know³ that
an arbitrary element I + r ∈ R[x]/I can be uniquely expressed in the form

    I + (ax² + bx + c),
so we may define the map

    Φ : R[x]/I → R³,   Φ(I + (ax² + bx + c)) := (a, b, c)^⊤.

Φ is linear; for example,

    Φ((I + ax² + bx + c) + (I + a1x² + b1x + c1)) = Φ(I + (a + a1)x² + (b + b1)x + (c + c1)) = (a + a1, b + b1, c + c1)^⊤.

To see that Φ is well defined, suppose I + (ax² + bx + c) = I + (a1x² + b1x + c1). Then

    I + (a − a1)x² + (b − b1)x + (c − c1) = I,

that is

    (a − a1)x² + (b − b1)x + (c − c1) ∈ I.

In other words

    (a − a1)x² + (b − b1)x + (c − c1) = p(x)q(x)

for some q(x) ∈ R[x]. If q ≠ 0 then the degree of the left side of the above equation is at most 2, but the degree of the
right side is at least 3, a contradiction. So q = 0, which implies that

    (a − a1)x² + (b − b1)x + (c − c1) = 0,

that is

    a = a1, b = b1, c = c1 ⟹ (a, b, c)^⊤ = (a1, b1, c1)^⊤.

Therefore

    Φ(I + ax² + bx + c) = Φ(I + a1x² + b1x + c1).
3
Theorem. Suppose p = a0 + a1x + ... + an x^n ∈ R[x] (R is a ring), an ≠ 0, and let I = ⟨p⟩ = pR[x] = {pf :
f ∈ R[x]}. Then every element of R[x]/I can be uniquely expressed in the form I + (b0 + b1x + ... + bn−1 x^{n−1})
where b0, ..., bn−1 ∈ R.
Finally, we want to show that Φ is injective and surjective. Pick an arbitrary (a, b, c)^⊤ ∈ R³.
Then there exists at least one element of R[x]/I which is mapped to (a, b, c)^⊤ (for example
I + (ax² + bx + c)). Therefore Φ is surjective. Now assume that

    Φ(I + ax² + bx + c) = Φ(I + a1x² + b1x + c1)

for some a, b, c, a1, b1, c1 ∈ R. In other words (a, b, c)^⊤ = (a1, b1, c1)^⊤, which implies
a = a1, b = b1, c = c1, so

    I + (ax² + bx + c) = I + (a1x² + b1x + c1),

and Φ is injective.
13 Orthogonal systems
A family of polynomials r0, r1, ..., rd is said to be an orthogonal system when each
polynomial rk is of degree k and ⟨rh, rk⟩ = 0 for any h ≠ k.
(13.01) Lemma
Let r0, r1, ..., rd be an orthogonal system. Then each rk(x), k = 0, 1, ..., d, is orthogonal
to every polynomial of lower degree.
Proof: From Proposition 5.03 we know that {r0, r1, ..., rd} is a linearly independent set. So for
any k − 1 with 0 ≤ k − 1 < d, the set {r0, r1, ..., rk−1} is a basis for Rk−1[x] (the vector
space of all polynomials of degree at most k − 1). Now notice that an arbitrary q ∈ Rk−1[x] can
be written as a linear combination q = Σ_{i=0}^{k−1} αi ri of {r0, r1, ..., rk−1}. So we have

    ⟨q, rk⟩ = Σ_{i=0}^{k−1} αi ⟨ri, rk⟩ = 0.
(13.02) Example
In this example we work in the space R[x]/⟨Z⟩ where Z = (x − λ0)(x − λ1)(x − λ2)(x − λ3),
λ0 = 3, λ1 = 1, λ2 = −1, λ3 = −3, and the inner product is defined by ⟨p, q⟩ = Σ_{i=0}^{3} gi p(λi)q(λi),
g0 = g1 = g2 = g3 = 1/4.
It is not hard to check that the family of polynomials {r0, r1, r2, r3} = {1, x, x² − 5, 5x³ − 41x}
is an orthogonal system. From Lemma 13.01 every rk is orthogonal to every polynomial of
lower degree. Easy computation gives (in R[x]/⟨Z⟩)

    x (r0, r1, r2, r3)^⊤ = R (r0, r1, r2, r3)^⊤,   where   R =
    [ 0    1      0     0       ]
    [ 5    0      1     0       ]
    [ 0   16/5    0   144/720   ]
    [ 0    0    144/16  0       ]

Notice that xr3 = (144/16) r2 = 9x² − 45 (in R[x]/⟨Z⟩) and xr3 = 5x⁴ − 41x² = 5Z + 9x² − 45 (in R[x]), where
Z = (x − 3)(x − 1)(x + 1)(x + 3) = x⁴ − 10x² + 9.
Given any k = 0, 1, 2, 3, let Zk* = Π_{ℓ=0, ℓ≠k}^{3} (x − λℓ). For every k we have

    ⟨r3, Z0*⟩ = 144, ⟨r3, Z1*⟩ = 144, ⟨r3, Z2*⟩ = 144, ⟨r3, Z3*⟩ = 144.

Indeed, on one hand

    ⟨r3, Zk*⟩ = Σ_{i=0}^{3} gi r3(λi)Zk*(λi) = gk r3(λk)Zk*(λk) = gk r3(λk)(−1)^k πk,

and on the other hand (writing Zk* = ξ0r3 + ξ1r2 + ξ2r1 + ξ3r0, see below) ⟨r3, Zk*⟩ = ξ0‖r3‖², where ξ0 is
the ratio of the leading coefficients of Zk* and r3 and hence does not depend on k; so ⟨r3, Zk*⟩ is constant.
Comparing the value at an arbitrary k with the value at k = 0 we get

    (−1)^k gk πk r3(λk) = g0 π0 r3(λ0),   that is   r3(λk)/r3(λ0) = (−1)^k g0π0/(gk πk).
We have already seen that

    Zk* = Π_{ℓ=0, ℓ≠k}^{3} (x − λℓ) = ξ0r3 + ξ1r2 + ξ2r1 + ξ3r0,

so

    ξ0r3 = Zk* − (ξ1r2 + ξ2r1 + ξ3r0),
    r3 − (1/ξ0)Zk* = −(1/ξ0)(ξ1r2 + ξ2r1 + ξ3r0),

and since 1/ξ0 = ‖r3‖²/⟨r3, Zk*⟩ = ‖r3‖²/(g0π0 r3(λ0)), for any k = 0, 1, 2, 3 we get

    r3 − (‖r3‖²/(r3(λ0) g0π0)) Zk* ∈ R2[x].
Then the equality xr3 = αr2 + βr3 (for some α, β ∈ R, because ⟨xr3, r0⟩ = ⟨r3, xr0⟩ = 0 and
⟨xr3, r1⟩ = ⟨r3, xr1⟩ = 0), holding in R[x]/⟨Z⟩, and the comparison of the degrees allow us to
establish the existence of ψ ∈ R such that xr3 = αr2 + βr3 + ψZ in R[x]. In our example

    x (r0, r1, r2, r3)^⊤ =
    [ 0    1      0     0       ] (r0)   [ 0  ]
    [ 5    0      1     0       ] (r1) + [ 0  ]
    [ 0   16/5    0   144/720   ] (r2)   [ 0  ]
    [ 0    0    144/16  0       ] (r3)   [ 5Z ]

Notice that ψ equals the leading coefficient of r3 (compare the coefficients of x⁴ in xr3 = αr2 + βr3 + ψZ,
with Z monic), while the fact that r3 − (‖r3‖²/(r3(λ0) g0π0)) Zk* ∈ R2[x] (with Zk* monic) shows that the leading
coefficient of r3 equals ‖r3‖²/(r3(λ0) g0π0). Hence

    ψ = ‖r3‖²/(r3(λ0) g0π0)   (= 720/(12 · (1/4) · 48) = 5 in our example).
♦
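The orthogonality of {1, x, x² − 5, 5x³ − 41x} and the recurrence entries 16/5, 144/720 and 144/16 claimed in the example can all be recomputed exactly (a sketch; the inner product only sees values on the mesh, so Fourier coefficients of x·rk automatically agree with the mod-Z reduction):

```python
from fractions import Fraction

M = [Fraction(x) for x in (3, 1, -1, -3)]
g = [Fraction(1, 4)] * 4

def ip(p, q):   # <p,q> = sum_i g_i p(lambda_i) q(lambda_i); p, q given as callables
    return sum(gi * p(l) * q(l) for gi, l in zip(g, M))

r = [lambda x: Fraction(1),
     lambda x: x,
     lambda x: x * x - 5,
     lambda x: 5 * x ** 3 - 41 * x]

# pairwise orthogonality of the system
for i in range(4):
    for j in range(i):
        assert ip(r[i], r[j]) == 0

# recurrence entries: x r2 = (16/5) r1 + (144/720) r3 and x r3 = (144/16) r2 (mod Z)
xr2 = lambda x: x * r[2](x)
assert ip(xr2, r[1]) / ip(r[1], r[1]) == Fraction(16, 5)
assert ip(xr2, r[3]) / ip(r[3], r[3]) == Fraction(144, 720)   # = 1/5
xr3 = lambda x: x * r[3](x)
assert ip(xr3, r[2]) / ip(r[2], r[2]) == Fraction(144, 16)    # = 9
```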
(13.03) Proposition
Let Z := Π_{ℓ=0}^{d} (x − λℓ), where M = {λ0, λ1, ..., λd}, λ0 > λ1 > ... > λd, is a mesh of real
numbers. Every orthogonal system r0, r1, ..., rd satisfies the following properties:
(a) There exists a tridiagonal matrix R (called the recurrence matrix of the system) such
that, in R[x]/⟨Z⟩:

    x r := x (r0, r1, r2, ..., rd−1, rd)^⊤ =
    [ a0   c1                          ] (r0)
    [ b0   a1   c2                     ] (r1)
    [      b1   a2   ⋱                 ] (r2)
    [           ⋱    ⋱    ⋱            ] (...)   = R r,
    [          bd−2  ad−1  cd          ] (rd−1)
    [                bd−1  ad          ] (rd)
Proof: (a) Working in R[x]/⟨Z⟩, we have ⟨xrk, rh⟩ = 0 provided that k < h − 1 (because rh is
orthogonal to every polynomial of lower degree, and xrk is of degree k + 1) and, by symmetry
(⟨xrk, rh⟩ = ⟨rk, xrh⟩), the result is also zero when h < k − 1. Therefore, for k = 0, we can
write

    xr0 = Σ_{h=0}^{d} (⟨xr0, rh⟩/‖rh‖²) rh = (⟨xr0, r0⟩/‖r0‖²) r0 + (⟨xr0, r1⟩/‖r1‖²) r1 + ... + (⟨xr0, rd⟩/‖rd‖²) rd
        = (⟨xr0, r0⟩/‖r0‖²) r0 + (⟨xr0, r1⟩/‖r1‖²) r1,
13. ORTHOGONAL SYSTEMS
One question that immediately arises is: what about xrd? Since we work in R[x]/⟨Z⟩, the class of xrd has degree less than d + 1 (in R[x] the polynomial xrd has degree d + 1). Notice that xrd = ψZ + p(x) for some ψ ∈ R and some p(x) ∈ Rd[x]. Next, we want to consider ⟨xrd, r1⟩, ..., ⟨xrd, rd−1⟩:
bk = ⟨xrk+1, rk⟩/‖rk‖² (0 ≤ k ≤ d − 1), bd = 0,
ak = ⟨xrk, rk⟩/‖rk‖² (0 ≤ k ≤ d),
c0 = 0, ck = ⟨xrk−1, rk⟩/‖rk‖² (1 ≤ k ≤ d),
from which we get
xr0 = a0 r0 + c1 r1 ,
xrk = bk−1 rk−1 + ak rk + ck+1 rk+1 , k = 1, 2, ..., d − 1,
xrd = bd−1 rd−1 + ad rd .
Given any k = 0, 1, ..., d let πk = (−1)^k ∏_{ℓ=0, ℓ≠k}^{d} (λk − λℓ) and Zk* := ∏_{ℓ=0, ℓ≠k}^{d} (x − λℓ) = ξ0 rd + ξ1 rd−1 + ..., and notice that ξ0 does not depend on k. We have
⟨rd, Zk*⟩ = Σ_{i=0}^{d} gi rd(λi)Zk*(λi) = gk rd(λk)(−1)^k πk,
so
gk rd(λk)(−1)^k πk = g0 rd(λ0)π0 ⟹ rd(λk)/rd(λ0) = (−1)^k (g0 π0)/(gk πk).
Moreover, since
Zk* = (⟨Zk*, rd⟩/‖rd‖²) rd + ... + (⟨Zk*, r1⟩/‖r1‖²) r1 + (⟨Zk*, r0⟩/‖r0‖²) r0
(Fourier expansion), we have
rd − (‖rd‖²/(rd(λ0) g0 π0)) Zk* ∈ Rd−1[x],
and, since Zk* is monic, the leading coefficient of rd is
ψ = ‖rd‖²/(rd(λ0) g0 π0).
(b) From the equalities
xr0 = a0 r0 + c1 r1,
xrk = bk−1 rk−1 + ak rk + ck+1 rk+1, k = 1, ..., d − 1,
and the fact that xrk has degree k + 1, we realize that c1, c2, ..., cd are nonzero. For k = 0, 1, ..., d − 1, from the equality
xrk = Σ_{h=0}^{d} (⟨xrk, rh⟩/‖rh‖²) rh = (⟨xrk, rk−1⟩/‖rk−1‖²) rk−1 + (⟨xrk, rk⟩/‖rk‖²) rk + (⟨xrk, rk+1⟩/‖rk+1‖²) rk+1
(the three quotients being bk−1, ak and ck+1, respectively) we have
bk = ⟨xrk+1, rk⟩/‖rk‖² = (⟨xrk, rk+1⟩/‖rk+1‖²)(‖rk+1‖²/‖rk‖²) = ck+1 (‖rk+1‖²/‖rk‖²),
so the parameters b0, b1, ..., bd−1 are also nonzero and, moreover, bk ck+1 = ck+1² (‖rk+1‖²/‖rk‖²) > 0 for any k = 0, 1, ..., d − 1.
(c) Pick an arbitrary λh, h = 0, 1, ..., d. In the proof of (a) we have seen that, in R[x]/⟨Z⟩,
xr0 = a0 r0 + c1 r1,
xrk = bk−1 rk−1 + ak rk + ck+1 rk+1, k = 1, ..., d − 1,
xrd = bd−1 rd−1 + ad rd.
Evaluating these equalities at x = λh (which is allowed, since Z(λh) = 0) we get R(r0(λh), r1(λh), ..., rd(λh))^⊤ = λh(r0(λh), r1(λh), ..., rd(λh))^⊤. Therefore, an eigenvector associated to λh is (r0(λh), r1(λh), ..., rd−1(λh), rd(λh))^⊤.
Now we want to show that the matrix R diagonalizes with eigenvalues the elements of M.
From the above we have that {(λ0, r0), (λ1, r1), ..., (λd, rd)} is a set of eigenpairs for R, where
r0 = (r0(λ0), r1(λ0), ..., rd−1(λ0), rd(λ0))^⊤, r1 = (r0(λ1), r1(λ1), ..., rd−1(λ1), rd(λ1))^⊤, ..., rd = (r0(λd), r1(λd), ..., rd−1(λd), rd(λd))^⊤. If we use Proposition 2.11 we have that {r0, r1, ..., rd} is a linearly independent set. Therefore, by Proposition 2.07, R diagonalizes with eigenvalues the elements of M.
(d) In the proof of (a) we obtained that
rd(λk)/rd(λ0) = (−1)^k (g0 π0)/(gk πk)
for any k = 0, 1, ..., d. Since ∏_{ℓ=0, ℓ≠k}^{d} |λk − λℓ| = πk > 0 and gk > 0 for k = 0, 1, ..., d, from rd(λk) = (−1)^k rd(λ0)(g0 π0)/(gk πk) we observe that rd takes alternating signs on the points of M = {λ0, λ1, ..., λd}. Hence, this polynomial has d simple roots θi, whose mesh Md = {θ0, θ1, ..., θd−1} interlaces M. Noticing that
Z = ∏_{k=0}^{d} (x − λk)
and λ0 > θ0 > λ1 > θ1 > ... > λd−1 > θd−1 > λd, we see that Z takes alternating signs over the elements of Md. From the equality bd−1 rd−1 = (x − ad)rd − ψZ, since rd(θi) = 0 for i = 0, 1, ..., d − 1, it
turns out that rd−1 takes alternating signs on the elements of Md ; whence
Md−1 = {γ0 , γ1 , ..., γd−2 } interlaces Md and rd has alternating signs on Md−1 (because
rd(θi) = 0 and θ0 > γ0 > θ1 > γ1 > ... > θd−2 > γd−2 > θd−1). Recursively, suppose that, for k = 1, ..., d − 2, the polynomials rk+1 and rk+2 have simple real roots αi and βi, respectively, and that Mk+1 = {α0, α1, ..., αk} interlaces Mk+2 = {β0, β1, ..., βk+1}, so that rk+2 takes
alternating signs on Mk+1 . Then, the result follows by just evaluating the equality
bk rk = (x − ak+1 )rk+1 − ck+2 rk+2 at the points of Mk+1 .
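For a concrete mesh, the orthogonal system and the entries of R can be produced by Gram–Schmidt on the monomials; the sketch below is my own illustration (not from the thesis), in which polynomials are stored by their values on the mesh — enough to determine an element of R[x]/⟨Z⟩ — and the three-term recurrence of (a) and the sign condition of (b) are verified for the mesh of Example 13.02.

```python
from fractions import Fraction as F

mesh = [F(3), F(1), F(-1), F(-3)]
g = [F(1, 4)] * 4
d = len(mesh) - 1

def ip(u, v):                     # <u, v> = sum_i g_i u(lambda_i) v(lambda_i)
    return sum(gi*ui*vi for gi, ui, vi in zip(g, u, v))

# Gram-Schmidt on the monomials 1, x, ..., x^d, each stored by its values
# on the mesh (these values determine a polynomial of degree <= d).
r = []
for k in range(d + 1):
    v = [l**k for l in mesh]
    for rj in r:
        c = ip(v, rj) / ip(rj, rj)
        v = [vi - c*rji for vi, rji in zip(v, rj)]
    r.append(v)

def x_times(u):                   # values of x*u(x) on the mesh
    return [l*ui for l, ui in zip(mesh, u)]

# Recurrence entries as in Proposition 13.03 (with b_d = c_0 = 0).
a = [ip(x_times(r[k]), r[k]) / ip(r[k], r[k]) for k in range(d + 1)]
b = [ip(x_times(r[k+1]), r[k]) / ip(r[k], r[k]) for k in range(d)]
c = [ip(x_times(r[k-1]), r[k]) / ip(r[k], r[k]) for k in range(1, d + 1)]

# Verify x r_k = b_{k-1} r_{k-1} + a_k r_k + c_{k+1} r_{k+1} on the mesh,
# i.e. in R[x]/<Z>; for k = d the term c_{d+1} r_{d+1} is absent.
for k in range(d + 1):
    rhs = [a[k]*ri for ri in r[k]]
    if k > 0:
        rhs = [s + b[k-1]*ri for s, ri in zip(rhs, r[k-1])]
    if k < d:
        rhs = [s + c[k]*ri for s, ri in zip(rhs, r[k+1])]
    assert x_times(r[k]) == rhs
assert all(b[k]*c[k] > 0 for k in range(d))   # b_k c_{k+1} > 0, part (b)
```

For this symmetric mesh all ak vanish, and the monic system produced here differs from the one in Example 13.02 only by scalar factors, as orthogonal systems are determined up to scalars.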
(13.04) Example
Consider the space R[x]/⟨Z⟩, where Z = (x − 3)(x − 1)(x + 1)(x + 3), λ0 = 3, λ1 = 1, λ2 = −1, λ3 = −3, i.e., M = {3, 1, −1, −3}, and g0 = g1 = g2 = g3 = 1/4.
In Example 13.02 we have shown that r0 = 1, r1 = x, r2 = x² − 5, r3 = 5x³ − 41x, and that
r3(λk)/r3(λ0) = (−1)^k (g0 π0)/(gk πk),
and it is not hard to compute that π0 = 48, π1 = 16, π2 = 16 and π3 = 48. From the last equation we observe that r3 takes alternating signs on the points of M. An easy computation gives that the roots of r3 are
√(41/5), 0, −√(41/5).
Since 3 > √(41/5) > 1 > 0 > −1 > −√(41/5) > −3, the mesh M3 interlaces M. Next notice that in the same example we had xr3 = (144/16)r2 + 5Z. From the equality (144/16)r2 = xr3 − 5Z it turns out that r2 takes alternating signs on the elements of M3 (since r3 vanishes on M3 and Z takes alternating signs on the elements of M3); whence M2 interlaces M3 and r3 has alternating signs on M2. Indeed, the roots of r2 are
√5 and −√5,
and notice that √(41/5) > √5 > 0 > −√5 > −√(41/5). ♦
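The interlacing claims of this example are easy to confirm numerically; the following small check (my own, in floating point, not part of the thesis) encodes M, M3 and M2 and tests both chains of inequalities.

```python
import math

# Roots of r3 = 5x^3 - 41x and of r2 = x^2 - 5 (Example 13.04).
M  = [3, 1, -1, -3]
M3 = [math.sqrt(41/5), 0.0, -math.sqrt(41/5)]
M2 = [math.sqrt(5), -math.sqrt(5)]

def interlaces(inner, outer):
    # inner = {t0 > t1 > ...} interlaces outer = {l0 > l1 > ...}.
    return all(outer[i] > inner[i] > outer[i + 1] for i in range(len(inner)))

assert interlaces(M3, M)    # 3 > sqrt(41/5) > 1 > 0 > -1 > -sqrt(41/5) > -3
assert interlaces(M2, M3)   # sqrt(41/5) > sqrt(5) > 0 > -sqrt(5) > -sqrt(41/5)
print(round(M3[0], 3), round(M2[0], 3))   # 2.864 2.236
```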
(13.05) Problem
Let Z := ∏_{ℓ=0}^{d} (x − λℓ), where M = {λ0, λ1, ..., λd}, λ0 > λ1 > ... > λd, is a mesh of real numbers. If r0, r1, ..., rd is an orthogonal system associated to (M, g) in the space R[x]/⟨Z⟩, prove or disprove that ri(λ0) > 0, i = 0, 1, ..., d.
Hint: From Proposition 13.03(d) we see that every orthogonal system r0, r1, ..., rd satisfies the following property: for every i = 1, ..., d the polynomial ri has simple real roots, and if Mi denotes the mesh (set of finitely many distinct real numbers) of the ordered roots of ri, then (the points of) the mesh Md interlaces M = {λ0, λ1, ..., λd} and, for each i = 1, 2, ..., d − 1, Mi interlaces Mi+1. If we denote the elements of the set Mi by Mi = {θi1, θi2, ..., θii}, this means that every ri is of the form ri(x) = ci ∏_{j=1}^{i} (x − θij) for some ci ∈ R.
(13.06) Proposition
Let r0, r1, ..., rd−1, rd be an orthogonal system with respect to the scalar product associated to (M, g), let Z = ∏_{k=0}^{d} (x − λk), H0 = (1/(g0 π0)) ∏_{i=1}^{d} (x − λi) and π0 = ∏_{i=1}^{d} (λ0 − λi). Then the following assertions are all equivalent:
(a) r0 = 1 and the entries of the tridiagonal matrix R associated to (rk )0≤k≤d , satisfy
ak + bk + ck = λ0 , for any k = 0, 1, ..., d;
(b) r0 + r1 + ... + rd = H0 ;
(c) krk k2 = rk (λ0 ) for any k = 0, 1, ..., d.
Proof: We will show that (a) ⇒ (b), (b) ⇔ (c) and that (c) ⇒ (a). Let j := (1, 1, ..., 1)^⊤.
(a) ⇒ (b): Consider the tridiagonal matrix R (Proposition 13.03) associated to the orthogonal system (rk)0≤k≤d:

        ⎡ r0 ⎤   ⎡ a0  c1                      ⎤ ⎡ r0 ⎤
        ⎢ r1 ⎥   ⎢ b0  a1  c2                  ⎥ ⎢ r1 ⎥
        ⎢ r2 ⎥   ⎢     b1  a2   ⋱              ⎥ ⎢ r2 ⎥
xr := x ⎢  ⋮ ⎥ = ⎢         ⋱    ⋱    ⋱         ⎥ ⎢  ⋮ ⎥ = Rr.
        ⎢rd−2⎥   ⎢             ⋱  ad−2 cd−1    ⎥ ⎢rd−2⎥
        ⎢rd−1⎥   ⎢                bd−2 ad−1 cd ⎥ ⎢rd−1⎥
        ⎣ rd ⎦   ⎣                     bd−1 ad ⎦ ⎣ rd ⎦

Then, working in R[x]/⟨Z⟩ and using ak + bk + ck = λ0,
0 = j^⊤(xr − Rr) = x(j^⊤r) − j^⊤Rr = x(j^⊤r) − λ0 j^⊤r = (x − λ0)j^⊤r = (x − λ0) Σ_{k=0}^{d} rk,
so that Σ_{k=0}^{d} rk = ξH0 for some ξ ∈ R (the polynomials annihilated by multiplication with x − λ0 in R[x]/⟨Z⟩ are exactly the multiples of H0). Since
Σ_{k=0}^{d} ⟨r0, rk⟩ = ⟨r0, r0⟩ = 1
and
⟨r0, H0⟩ = Σ_{i=0}^{d} gi r0(λi)H0(λi) = g0 r0(λ0)H0(λ0) = r0(λ0) = 1,
we have 1 = Σ_{k=0}^{d} ⟨r0, rk⟩ = ξ⟨r0, H0⟩ = ξ, so it turns out that ξ = 1. Consequently, Σ_{k=0}^{d} rk = H0.
(b) ⇔ (c): Assume that r0 + r1 + ... + rd = H0 . Then
‖rk‖² = ⟨rk, rk⟩ = ⟨rk, r0 + r1 + ... + rd⟩ = ⟨rk, H0⟩ = Σ_{i=0}^{d} gi rk(λi)H0(λi) = rk(λ0).
Conversely, assume that ‖rk‖² = rk(λ0) for any k = 0, 1, ..., d. By Fourier expansion (Proposition 12.09) we have
H0 = Σ_{k=0}^{d} (⟨H0, rk⟩/‖rk‖²) rk = Σ_{k=0}^{d} (rk(λ0)/‖rk‖²) rk = Σ_{k=0}^{d} rk
(note also that r0, being a nonzero constant with ‖r0‖² = r0(λ0), equals 1).
(c) ⇒ (a): From Σ_{k=0}^{d} rk = H0 we get, in R[x]/⟨Z⟩,
x Σ_{k=0}^{d} rk = xH0 = λ0 H0 = λ0 Σ_{k=0}^{d} rk
(because (x − λ0)H0 = (1/(g0 π0))Z, and in R[x]/⟨Z⟩ this means that (x − λ0)H0 = 0); on the other hand, x Σ_k rk = Σ_k (ak + bk + ck) rk and, from the linear independence of the polynomials rk, we get ak + bk + ck = λ0.
xp0 = a0 p0 + c1 p1 ,
xpi = bi−1 pi−1 + ai pi + ci+1 pi+1 , i = 1, 2, ..., d − 1,
xpd = bd−1 pd−1 + ad pd .
Notice that the recurrence gives pd−j−1 = (1/bd−j−1)((x − ad−j)pd−j − cd−j+1 pd−j+1). Assume that for any i = 1, 2, ..., k there exists some polynomial p̃i of degree i such that pd−i(x) = p̃i(x)pd(x) (this assumption includes that pd−k(x) = p̃k(x)pd(x) and pd−(k−1)(x) = p̃k−1(x)pd(x) for some p̃k and p̃k−1). From the equality pd−j−1 = (1/bd−j−1)((x − ad−j)pd−j − cd−j+1 pd−j+1) that we had above, we have
pd−(k+1)(x) = pd−k−1(x) = (1/bd−k−1)((x − ad−k)pd−k(x) − cd−k+1 pd−k+1(x)) = (1/bd−k−1)((x − ad−k)p̃k(x) − cd−k+1 p̃k−1(x))pd(x),
which provides the induction step.
(where 0 on the right denotes the zero functional, i.e. the functional which sends everything
in Rd[x] to 0 ∈ R). The above equality is an equality of maps, and should hold for any p ∈ Rd[x] we evaluate either side on. In particular, evaluating both sides on Zi, we have
(α0 [λ0 ] + ... + αd [λd ])(Zi ) = α0 [λ0 ](Zi ) + ... + αd [λd ](Zi ) = αi Zi (λi ) = αi
on the left (by Proposition 12.06 (Zk (λh ) = δhk )) and 0 on the right. Thus we see that αi = 0
for each i, so {[λ0 ], ..., [λd ]} is linearly independent.
Now we show that {[λ0 ], ..., [λd ]} spans R∗d [x]. Let [λ] ∈ R∗d be arbitrary. For each i, let βi
denote the scalar [λ](Zi). We claim that
[λ] = β0[λ0] + ... + βd[λd].
Again, this means that both sides should give the same result when evaluated on any p ∈ Rd[x].
By linearity, it suffices to check that this is true on the basis {Z0 , ..., Zd }. Indeed, for each i we
have
(β0[λ0] + ... + βd[λd])(Zi) = β0[λ0](Zi) + ... + βd[λd](Zi) = βi = [λ](Zi),
again by Proposition 12.06 (Zk(λh) = δhk) and the definition of the βi. Thus, [λ] and
β0 [λ0 ] + ... + βd [λd ] agree on the basis, so we conclude that they are equal as elements of R∗d .
Hence {[λ0 ], ..., [λd ]} spans R∗d and therefore forms a basis of R∗d .
A proof of the next theorem can be found in almost any linear algebra book (see, for example, [2], page 117).
(14.03) Theorem
Suppose ϕ is a linear functional on V. Then there is a unique vector v ∈ V such that
ϕ(u) = hu, vi for every u ∈ V.
(14.04) Observation (induced isomorphism between the space Rd [x] and its dual)
The scalar product associated to (M, g) induces an isomorphism between the space Rd [x]
and its dual, where each polynomial p corresponds to the functional ωp , defined as
ωp (q) := hp, qi and, conversely, each form ω is associated to a polynomial pω through
hq, pω i = ω(q).
Indeed, consider the mapping ω : Rd[x] → R∗d[x] defined by ω(p) = ωp, where ωp(·) = ⟨·, p⟩. To show that ω is a bijection, pick an arbitrary ϕ ∈ R∗d[x]. From Theorem 14.03 we know that there is a unique polynomial p ∈ Rd[x] such that ϕ(q) = ⟨q, p⟩ for every q ∈ Rd[x]. Since ωp(·) = ⟨·, p⟩ we have that
ϕ(·) = ⟨·, p⟩ = ωp(·).
Indeed, consider the isomorphism ω : Rd[x] → R∗d[x] defined by ω(p) = ωp, where ωp(·) = ⟨·, p⟩. Pick an arbitrary p ∈ Rd[x]. Since {[λℓ]}0≤ℓ≤d is a basis for R∗d[x], there exist unique scalars β0, ..., βd such that
ω(p) = ωp = β0[λ0] + ... + βd[λd].
On the other hand, for every q ∈ Rd[x],
ωp(q) = ⟨p, q⟩ = Σ_{ℓ=0}^{d} gℓ p(λℓ)q(λℓ) = Σ_{ℓ=0}^{d} gℓ p(λℓ)[λℓ](q).
14. THE CANONICAL ORTHOGONAL SYSTEM
From the last two equations we can conclude that β0 = g0 p(λ0), ..., βd = gd p(λd), that is,
ωp = Σ_{ℓ=0}^{d} gℓ p(λℓ)[λℓ].
Now consider the isomorphism P : R∗d[x] → Rd[x] defined by P(ω) = pω, where pω is the unique polynomial (see Theorem 14.03) such that ω(q) = ⟨q, pω⟩ for every q ∈ Rd[x]. Pick an arbitrary ω ∈ R∗d[x]. Since {Zℓ}0≤ℓ≤d is a basis for Rd[x], there exist unique scalars γ0, ..., γd such that
P(ω) = pω = γ0 Z0 + ... + γd Zd.
We know that ω(q) = ⟨q, pω⟩ for every q ∈ Rd[x]. If for q we pick Zℓ, we have ω(Zℓ) = ⟨Zℓ, pω⟩ = ⟨Zℓ, γ0 Z0 + ... + γd Zd⟩, and since ⟨Zh, Zk⟩ = δhk gk we obtain ω(Zℓ) = gℓ γℓ.
We see that γℓ = (1/gℓ)ω(Zℓ) and therefore
pω = Σ_{ℓ=0}^{d} (1/gℓ)ω(Zℓ)Zℓ.
In particular, for the forms [λk] we obtain the polynomials
Hk := p[λk] = Σ_{ℓ=0}^{d} (1/gℓ)[λk](Zℓ)Zℓ = Σ_{ℓ=0}^{d} (1/gℓ)δℓk Zℓ = (1/gk)Zk = ((−1)^k/(gk πk))(x − λ0)···(x − λk)^···(x − λd)
(where the hat over (x − λk) denotes that this factor is not present in the product), and their scalar products are
⟨Hh, Hk⟩ = Σ_{ℓ=0}^{d} gℓ Hh(λℓ)Hk(λℓ) = gh Hh(λh)Hk(λh) = Hk(λh) = (1/gk)Zk(λh) = (1/gh)δhk.
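On the mesh itself Zk is just the k-th Lagrange (standard basis) vector, so the relations Hk = (1/gk)Zk and ⟨Hh, Hk⟩ = (1/gh)δhk can be verified directly. The sketch below is my own check (not part of the thesis) on the mesh and weights of the running example.

```python
from fractions import Fraction as F

mesh = [F(3), F(1), F(-1), F(-3)]
g = [F(1, 4)] * 4
d = len(mesh) - 1

def ip(u, v):
    return sum(gi*ui*vi for gi, ui, vi in zip(g, u, v))

# Z_k(lambda_h) = delta_hk, so on the mesh Z_k is a standard basis vector,
# and H_k = (1/g_k) Z_k.
Z = [[F(int(h == k)) for h in range(d + 1)] for k in range(d + 1)]
H = [[zh / g[k] for zh in Z[k]] for k in range(d + 1)]

for h in range(d + 1):
    for k in range(d + 1):
        assert ip(H[h], H[k]) == ((1 / g[h]) if h == k else 0)
print(ip(H[0], H[0]))   # 4  (= 1/g0)
```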
Moreover, the property
Σ_{k=0}^{d} ((−1)^k/πk) λk^i = 0 (0 ≤ i ≤ d − 1)
(see Proposition 12.08) is equivalent to stating that the form Σ_{k=0}^{d} ((−1)^k/πk)[λk] annihilates on the space Rd−1[x]. Indeed, for an arbitrary polynomial p(x) = ad−1 x^{d−1} + ... + a1 x + a0 of degree d − 1 we have
Σ_{k=0}^{d} ((−1)^k/πk)[λk](p(x)) = Σ_{k=0}^{d} ((−1)^k/πk)[λk](ad−1 x^{d−1} + ... + a1 x + a0) = ad−1 Σ_{k=0}^{d} ((−1)^k/πk)[λk](x^{d−1}) + ... + a1 Σ_{k=0}^{d} ((−1)^k/πk)[λk](x) + a0 Σ_{k=0}^{d} ((−1)^k/πk)[λk](x^0) = 0.
V = X + Y and X ∩ Y = {0},
in which case V is said to be the direct sum of X and Y, and this is denoted by writing
V = X ⊕ Y.
For a subset L of an inner-product space V, the orthogonal complement L⊥ (pronounced "L perp") of L is defined to be the set of all vectors in V that are orthogonal to every vector in L. That is,
L⊥ = {x ∈ V | ⟨m, x⟩ = 0 for all m ∈ L}.
If L is a subspace of a finite-dimensional inner-product space V, then
V = L ⊕ L⊥.
(1/g0)Z0(λ0)Z0 + (1/g1)Z1(λ0)Z1 + ... + (1/gd)Zd(λ0)Zd = (1/g0)Z0(λ0)Z0 = (1/g0)Z0 = H0.
♦
Rd[x] = R0[x] ⊕ R0[x]⊥, H0 = q0 + t0 where q0 ∈ R0[x], t0 ∈ R0[x]⊥,
Rd[x] = R1[x] ⊕ R1[x]⊥, H0 = q1 + t1 where q1 ∈ R1[x], t1 ∈ R1[x]⊥,
⋮
Rd[x] = Rd−1[x] ⊕ Rd−1[x]⊥, H0 = qd−1 + td−1 where qd−1 ∈ Rd−1[x], td−1 ∈ Rd−1[x]⊥.
FIGURE 46
Obtaining the qk by projecting H0 onto Rk [x].
We can think of flat surfaces passing through the origin whenever we encounter the term "subspace" in higher dimensions. Alternatively, the polynomial qk can be defined in the following way.
FIGURE 47
Obtaining the q's as closest points to H0.
Proof: If qk is the orthogonal projection of H0 onto Rk[x], then qk − m ∈ Rk[x] for all m ∈ Rk[x],
H0 − qk ∈ Rk[x]⊥,
and (qk − m) ⊥ (H0 − qk). The Pythagorean theorem says ‖x + y‖² = ‖x‖² + ‖y‖² whenever x ⊥ y; hence, if m̂ is a closest point of Rk[x] to H0,
‖H0 − m̂‖² = ‖H0 − qk + qk − m̂‖² = ‖H0 − qk‖² + ‖qk − m̂‖² ⟹ ‖qk − m̂‖ = 0
(because ‖H0 − m̂‖ ≤ ‖H0 − qk‖), and thus m̂ = qk.
Let S denote the sphere in Rd[x] such that 0 and H0 are antipodal points on it; that is, the sphere with center (1/2)H0 and radius (1/2)‖H0‖:
S = { p ∈ Rd[x] : ‖p − (1/2)H0‖² = ((1/2)‖H0‖)² }
(if x and y are points on a sphere and the distance between them is equal to the diameter of the sphere, then y is called an antipodal point of x, x is called an antipodal point of y, and the points x and y are said to be antipodal to each other).
(14.11) Lemma
The sphere S in Rd[x] such that 0 and H0 are antipodal points on it can also be written as S = {p ∈ Rd[x] : ‖p‖² = p(λ0)}.
Proof: We have
‖p − (1/2)H0‖² = ((1/2)‖H0‖)²,
⟨p − (1/2)H0, p − (1/2)H0⟩ = (1/4)‖H0‖²,
⟨p, p⟩ − (1/2)⟨p, H0⟩ − (1/2)⟨H0, p⟩ + (1/4)⟨H0, H0⟩ = (1/4)‖H0‖²,
⟨p, p⟩ − ⟨H0, p⟩ + (1/4)‖H0‖² = (1/4)‖H0‖²,
‖p‖² = ⟨H0, p⟩,
and in Observation 14.08 we had ⟨H0, p⟩ = p(λ0), so
‖p‖² = p(λ0).
(14.12) Problem
Prove that the projection qk is on the sphere Sk := S ∩ Rk[x].
Solution: From the proof of Lemma 14.11, the sphere S can also be written as
S = {p ∈ Rd[x] : ⟨H0 − p, p⟩ = 0}.
Since H0 − qk ∈ Rk[x]⊥ and qk ∈ Rk[x] we have
⟨H0 − qk, qk⟩ = 0,
so qk ∈ S ∩ Rk[x] = Sk. ♦
(14.13) Problem
Prove that S0 = {0, 1}.
Solution: The elements of S0 = S ∩ R0[x] are the constant polynomials p = α with ‖p‖² = p(λ0), that is,
g0α² + g1α² + ... + gdα² = α,
and, since g0 + g1 + ... + gd = 1,
α² = α,
α² − α = 0,
α(α − 1) = 0 ⟹ α = 0 or α = 1.
Therefore S0 = {0, 1}. ♦
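The projections qk of H0 onto Rk[x] can be computed explicitly for the running example, and each indeed lies on the sphere Sk, i.e. ‖qk‖² = qk(λ0). The sketch below is my own illustration (not part of the thesis), storing polynomials by their values on the mesh as before.

```python
from fractions import Fraction as F

mesh = [F(3), F(1), F(-1), F(-3)]
g = [F(1, 4)] * 4
d = len(mesh) - 1

def ip(u, v):
    return sum(gi*ui*vi for gi, ui, vi in zip(g, u, v))

# Monic orthogonal system via Gram-Schmidt (values on the mesh).
r = []
for k in range(d + 1):
    v = [l**k for l in mesh]
    for rj in r:
        v = [vi - (ip(v, rj)/ip(rj, rj))*rji for vi, rji in zip(v, rj)]
    r.append(v)

# H0 takes the value 1/g0 at lambda_0 and 0 at the other mesh points.
H0 = [1/g[0]] + [F(0)]*d

for k in range(d + 1):
    # q_k = orthogonal projection of H0 onto R_k[x] (Fourier expansion).
    qk = [F(0)]*(d + 1)
    for h in range(k + 1):
        coef = ip(H0, r[h]) / ip(r[h], r[h])
        qk = [qi + coef*ri for qi, ri in zip(qk, r[h])]
    assert ip(qk, qk) == qk[0]      # q_k is on S_k: ||q_k||^2 = q_k(lambda_0)
    print(k, qk[0])                 # q_k(lambda_0): 1, 14/5, 19/5, 4
```

The printed values q0(λ0) < q1(λ0) < q2(λ0) < q3(λ0) = 1/g0 also illustrate Corollary 14.18(b).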
(14.14) Problem
Let Rn [x] represent the vector space of polynomials (with coefficients in R) whose degree is
at most n. For every a ∈ R let U a = {p ∈ R[x] : p(a) = 0}.
(a) Find a basis of M = U a ∩ Rn [x] for all a ∈ R;
(b) Show that (U 3 + U 4 ) ∩ Rn [x] = Rn [x].
Solution: (a) Since U a is the space of all polynomials having a as a root, and Rn[x] is the space of all polynomials of degree at most n, M = U a ∩ Rn[x] is the space of all polynomials of degree at most n having a as a root. The elements of M are of the form (x − a)(αn−1 x^{n−1} + ... + α1 x + α0) for some αn−1, ..., α1, α0 ∈ R. Now it is not hard to prove that {x − a, (x − a)x, (x − a)x², ..., (x − a)x^{n−1}} is a basis of M (this basis is obtained by multiplying the standard basis {1, x, x², ..., x^{n−1}} of Rn−1[x] by x − a).
(b) Notice that U 3 is the space of all polynomials having 3 as a root, U 4 is the space of all polynomials having 4 as a root, and the elements of U 3 + U 4 are of the form (x − 3)q1(x) + (x − 4)q2(x), where q1(x), q2(x) are some polynomials (of arbitrary degree). For an arbitrary polynomial p ∈ Rn[x] we have p(x) = (x − 3)p(x) − (x − 4)p(x), so p ∈ (U 3 + U 4) ∩ Rn[x], and the claim follows.
(14.15) Problem
Let p ∈ Rn−1 [x] be a polynomial of degree n − 1 (n − 1 > 0).
(a) Let Rn−1[x] be the vector space of polynomials with degree ≤ n − 1 over R. Show that
{p(x), p(x + 1), . . . , p(x + n − 1)} is a basis of Rn−1 [x].
(b) Let
Mn = ⎡ p(x)      p(x+1)    p(x+2)    ...  p(x+n)   ⎤
     ⎢ p(x+1)    p(x+2)    p(x+3)    ...  p(x+n+1) ⎥
     ⎢   ⋮         ⋮         ⋮        ⋱     ⋮      ⎥
     ⎣ p(x+n)    p(x+n+1)  p(x+n+2)  ...  p(x+2n)  ⎦.
Show that det Mn = 0 for every x ∈ R.
cx + d = (α + β)(ax + b) + βa,
α = c/a − β,  β = d/a − bc/a²,
that is, r(x) ∈ span{p(x), p(x + 1)}. Therefore, {p(x), p(x + 1)} is a basis of R1[x].
INDUCTION STEP
Assume that the result holds for n − 2 ≥ 1; that is, assume that for an arbitrary polynomial p ∈ Rn−2[x] of degree n − 2 the set {p(x), p(x + 1), ..., p(x + n − 2)} is a basis of Rn−2[x], and use this assumption to show that {p(x), p(x + 1), ..., p(x + n − 1)} is a basis of Rn−1[x].
Let q(x) = p(x + 1) − p(x). Then deg q = deg p − 1. Indeed, if p(x) = an−1 x^{n−1} + an−2 x^{n−2} + ... + a1 x + a0, then
(14.16) Problem
Prove that the space Rk[x] ∩ Rk−1[x]⊥ has dimension one.
FIGURE 48
Vector space Rd[x] = Rk[x] ⊕ Rk[x]⊥ = Rk−1[x] ⊕ Rk−1[x]⊥. ♦
(14.17) Proposition
The polynomial qk, which is the orthogonal projection of H0 on Rk[x], can be defined as the unique polynomial of Rk[x] satisfying
qk(λ0) = max{q(λ0) : q ∈ Sk},
where Sk is the sphere {q ∈ Rk[x] : ‖q‖² = q(λ0)}. Equivalently, qk is the antipodal point of the origin in Sk.
Proof: Since H0 − qk ⊥ qk, we have
qk(λ0) = ‖qk‖² = (1/g0) − ‖H0 − qk‖² = (1/g0) − min_{q∈Sk} ‖H0 − q‖².
Next, since ‖H0‖² = 1/g0,
(1/g0) − min_{q∈Sk} ‖H0 − q‖² = ‖H0‖² − min_{q∈Sk} ‖H0 − q‖² = max_{q∈Sk} ‖q‖² = max_{q∈Sk} q(λ0).
FIGURE 49
Orthogonal projection of H0 on Rk[x], the sphere Sk, and an illustration of ‖H0‖, ‖qk‖ and ‖H0 − qk‖.
With the notation qd := H0 , we obtain the family of polynomials q0 , q1 , ..., qd−1 , qd . Let us
remark some of their properties.
(14.18) Corollary
The polynomials q0 , q1 , ..., qd−1 , qd , satisfy the following:
(a) Each qk has degree exactly k.
(b) 1 = q0(λ0) < q1(λ0) < ... < qd−1(λ0) < qd(λ0) = 1/g0.
(c) The polynomials q0 , q1 , ..., qd−1 constitute an orthogonal system with respect to the
scalar product associated to the mesh {λ1 > λ2 > ... > λd } and the weight function
λk → (λ0 − λk )gk , k = 1, ..., d.
Proof: (a) Notice that S0 = {0, 1} (see Problem 14.13). Consequently, q0 = 1. Assume that qk−1 has degree k − 1, but qk has degree less than k, that is, qk ∈ Rk−1[x]. Because of the uniqueness of the projection, and since Rk−1[x] ⊆ Rk[x], we would have qk = qk−1, and this implies that H0 − qk−1 would be orthogonal to Rk[x]. In particular, since (x − λ0)qk−1 ∈ Rk[x],
0 = ⟨H0 − qk−1, (x − λ0)qk−1⟩.
Moreover,
⟨(x − λ0)H0, qk−1⟩ = Σ_{ℓ=0}^{d} gℓ(λℓ − λ0)H0(λℓ)qk−1(λℓ) = 0,
so
0 = ⟨H0 − qk−1, (x − λ0)qk−1⟩ = −⟨(x − λ0)qk−1, qk−1⟩ = −Σ_{ℓ=0}^{d} gℓ(λℓ − λ0)qk−1(λℓ)².
Hence, since all the terms gℓ(λℓ − λ0)qk−1(λℓ)² (1 ≤ ℓ ≤ d) have the same sign, qk−1(λℓ) = 0 for any 1 ≤ ℓ ≤ d, and qk−1 would be null (because the polynomial qk−1 of degree k − 1 would have d roots), a contradiction. The result follows.
(b) Each qk has degree exactly k and, since qk(λ0) = ‖qk‖², we have qk−1(λ0) ≤ qk(λ0). If qk−1(λ0) = qk(λ0), from Proposition 14.17 we would get qk−1 = qk, which is not possible because of (a). The result follows.
(c) Let 0 ≤ h < k ≤ d − 1. Since H0 − qk is orthogonal to Rk[x] we have, in particular, ⟨H0 − qk, (x − λ0)qh⟩ = 0 and, since ⟨H0, (x − λ0)qh⟩ = 0 (as in the proof of (a)), also ⟨qk, (x − λ0)qh⟩ = 0; that is,
0 = Σ_{ℓ=0}^{d} gℓ(λ0 − λℓ)qk(λℓ)qh(λℓ) = Σ_{ℓ=1}^{d} (λ0 − λℓ)gℓ qk(λℓ)qh(λℓ),
which is precisely the claimed orthogonality of qh and qk with respect to the mesh {λ1 > ... > λd} and the weights (λ0 − λℓ)gℓ.
The polynomial qk, as the orthogonal projection of H0 onto Rk[x], can also be seen as the orthogonal projection of qk+1 onto Rk[x], as qk+1 − qk = (H0 − qk) − (H0 − qk+1) is orthogonal to Rk[x] (in other words, for qk+1 there exist unique qk ∈ Rk[x] and tk ∈ Rk[x]⊥ such that qk+1 = qk + tk). Consider the family of polynomials defined as
p0 := q0 = 1,
p1 := q1 − q0 ,
p2 := q2 − q1 ,
..
.
pd−1 := qd−1 − qd−2 ,
pd := qd − qd−1 = H0 − qd−1 .
Note that, then, qk = p0 + p1 + ... + pk (0 ≤ k ≤ d), and, in particular, p0 + p1 + ... + pd = H0 .
Let us now begin the study of the polynomials (pk )0≤k≤d .
(14.19) Proposition
The polynomials p0 , p1 , ..., pd−1 , pd constitute an orthogonal system with respect to the
scalar product associated to (M, g).
Proof: From pk = qk − qk−1 we see that pk has degree k. Moreover, for an arbitrary u ∈ Rk−1[x] we have ⟨pk, u⟩ = ⟨(H0 − qk−1) − (H0 − qk), u⟩ = 0, since both H0 − qk−1 and H0 − qk are orthogonal to Rk−1[x]; hence pk ⊥ Rk−1[x] and, in particular, ⟨ph, pk⟩ = 0 for h < k.
(14.20) Example
Consider the space R3[x], let λ0 = 3, λ1 = 1, λ2 = −1, λ3 = −3, g0 = g1 = g2 = g3 = 1/4, and let ⟨p, q⟩ = Σ_{i=0}^{3} gi p(λi)q(λi) denote the inner product in R3[x], p, q ∈ R3[x]. Then
⟨αx³ + βx² + γx + δ, 1⟩ = 5β + δ.
(iv) The entries of the recurrence matrix R associated to (pk)0≤k≤3 satisfy ak + bk + ck = λ0 for any k = 0, 1, 2, 3. ♦
Recall that p0 := q0 = 1, p1 := q1 − q0, p2 := q2 − q1, ..., pd := qd − qd−1 is the canonical orthogonal system associated to (M, g).
(14.22) Proposition
Let r0 , r1 , ..., rd−1 , rd be an orthogonal system with respect to the scalar product associated
to (M, g). Then the following assertions are all equivalent:
(a) (rk )0≤k≤d is the canonical orthogonal system associated to (M, g);
(b) r0 = 1 and the entries of the recurrence matrix R associated to (rk )0≤k≤d , satisfy
ak + bk + ck = λ0 , for any k = 0, 1, ..., d;
(c) r0 + r1 + ... + rd = H0 ;
(d) krk k2 = rk (λ0 ) for any k = 0, 1, ..., d.
Proof: Let (pk)0≤k≤d be the canonical orthogonal system associated to (M, g). Notice that pk, rk ∈ Rk[x] ∩ Rk−1[x]⊥. The space Rk[x] ∩ Rk−1[x]⊥ has dimension one (see Problem 14.16), and hence the polynomials rk, pk are proportional: rk = ξk pk. Let j := (1, 1, ..., 1)^⊤.
(a) ⇒ (b): We have r0 = p0 = 1. Consider the recurrence matrix R (Proposition 13.03) associated to the canonical orthogonal system (rk)0≤k≤d = (pk)0≤k≤d:

        ⎡ p0 ⎤   ⎡ a0  c1                      ⎤ ⎡ p0 ⎤
        ⎢ p1 ⎥   ⎢ b0  a1  c2                  ⎥ ⎢ p1 ⎥
        ⎢ p2 ⎥   ⎢     b1  a2   ⋱              ⎥ ⎢ p2 ⎥
xp := x ⎢  ⋮ ⎥ = ⎢         ⋱    ⋱    ⋱         ⎥ ⎢  ⋮ ⎥ = Rp.
        ⎢pd−2⎥   ⎢             ⋱  ad−2 cd−1    ⎥ ⎢pd−2⎥
        ⎢pd−1⎥   ⎢                bd−2 ad−1 cd ⎥ ⎢pd−1⎥
        ⎣ pd ⎦   ⎣                     bd−1 ad ⎦ ⎣ pd ⎦

Working in R[x]/⟨Z⟩, we have
xqd = xH0 = λ0 H0 = λ0 Σ_{k=0}^{d} pk
(because (x − λ0)H0 = (1/(g0 π0))Z, and in R[x]/⟨Z⟩ this means that (x − λ0)H0 = 0) and, from the linear independence of the polynomials pk, we get ak + bk + ck = λ0.
(b) ⇒ (c): Working in R[x]/⟨Z⟩ and using xr = Rr, we have:
0 = j^⊤(xr − Rr) = x(j^⊤r) − j^⊤Rr = x(j^⊤r) − λ0 j^⊤r = (x − λ0)j^⊤r = (x − λ0) Σ_{k=0}^{d} rk.
Hence, as in the proof of Proposition 13.06, there exists ξ such that Σ_{k=0}^{d} rk = ξH0 = Σ_{k=0}^{d} ξpk. Since, also, Σ_{k=0}^{d} rk = Σ_{k=0}^{d} ξk pk, where ξ0 = 1 (since by assumption r0 = 1 = p0), it turns out that ξ0 = ξ1 = ... = ξd = ξ = 1. Consequently, Σ_{k=0}^{d} rk = H0.
(c) ⇒ (d): ‖rk‖² = ⟨rk, rk⟩ = ⟨rk, r0 + r1 + ... + rd⟩ = ⟨rk, H0⟩ = rk(λ0).
(d) ⇒ (a): From rk = ξk pk, we have ξk²‖pk‖² = ‖rk‖² = rk(λ0) = ξk pk(λ0) = ξk‖pk‖². Whence ξk = 1 and rk = pk.
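For the running example, the equivalent properties of Proposition 14.22 can all be observed at once. In the sketch below (my own illustration, not part of the thesis) the canonical system is obtained by rescaling a monic orthogonal system so that ‖pk‖² = pk(λ0), after which the sum property (c) and the recurrence property (b) are checked.

```python
from fractions import Fraction as F

mesh = [F(3), F(1), F(-1), F(-3)]
g = [F(1, 4)] * 4
d = len(mesh) - 1

def ip(u, v):
    return sum(gi*ui*vi for gi, ui, vi in zip(g, u, v))

r = []                                  # monic orthogonal system (mesh values)
for k in range(d + 1):
    v = [l**k for l in mesh]
    for rj in r:
        v = [vi - (ip(v, rj)/ip(rj, rj))*rji for vi, rji in zip(v, rj)]
    r.append(v)

# Canonical system: p_k = (r_k(lambda_0)/||r_k||^2) r_k, so ||p_k||^2 = p_k(lambda_0).
p = [[(rk[0]/ip(rk, rk))*vi for vi in rk] for rk in r]
for pk in p:
    assert ip(pk, pk) == pk[0]          # property (d)

H0 = [1/g[0]] + [F(0)]*d
total = [sum(col) for col in zip(*p)]
assert total == H0                      # property (c): p_0 + ... + p_d = H0

def x_times(u):
    return [l*ui for l, ui in zip(mesh, u)]

for k in range(d + 1):                  # property (b): a_k + b_k + c_k = lambda_0
    ak = ip(x_times(p[k]), p[k]) / ip(p[k], p[k])
    bk = ip(x_times(p[k+1]), p[k]) / ip(p[k], p[k]) if k < d else F(0)
    ck = ip(x_times(p[k-1]), p[k]) / ip(p[k], p[k]) if k > 0 else F(0)
    assert ak + bk + ck == mesh[0]
print("Proposition 14.22 properties verified")
```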
(15.01) Proposition
Let X and Y be complementary subspaces of a vector space V. The projector P onto X along Y is orthogonal if and only if
Proof: First recall some basic definitions from linear algebra. Subspaces X, Y of a space V are said to be complementary whenever
V = X + Y and X ∩ Y = {0},
in which case V is said to be the direct sum of X and Y, and this is denoted by writing V = X ⊕ Y. This is equivalent to saying that for each v ∈ V there are unique vectors x ∈ X and y ∈ Y such that v = x + y. The vector x is called the projection of v onto X along Y. The vector y is called the projection of v onto Y along X. The operator P defined by P v = x is a well-defined linear operator and is called the projector onto X along Y. The vector m is called the
15. CHARACTERIZATIONS INVOLVING THE SPECTRUM
(15.02) Proposition
Let zui represent the orthogonal projection of the u-th canonical vector eu = (0, 0, ..., 0, 1, 0, ..., 0)^⊤ on Ei = ker(A − λi I), that is, zui := Ei eu. Then the (u, v)-entry of the principal idempotent Ei corresponds to the scalar product ⟨zui, zvi⟩, that is,
(Ei)uv = ⟨zui, zvi⟩ (u, v ∈ V).
Proof: Using the idempotency and symmetry of Ei (Equation (32)), we have
(Ei)uv = ⟨eu, Ei ev⟩ = ⟨eu, Ei(Ei ev)⟩ = ⟨Ei eu, Ei ev⟩ = ⟨zui, zvi⟩.
(15.03) Example
Let Γ = (V, E) denote a regular graph with λ0 as its largest eigenvalue. Then the multiplicity of λ0 is 1 and j = (1, 1, ..., 1)^⊤ is an associated eigenvector for λ0 (see Proposition 4.18). So U0 = (1/√n)j, and
E0 eu = U0 U0^⊤ eu = (1/√n)U0 = (1/n)j.
From this it follows that (E0)uv = ⟨(1/n)j, (1/n)j⟩ = (1/n²)·n = 1/n for any u, v ∈ V, and hence
E0 = (1/n)J.
♦
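For a small regular graph the identity E0 = (1/n)J is immediate to verify; the following sketch is my own toy check (not part of the thesis) on the 4-cycle C4 (2-regular, so λ0 = 2), confirming that (1/n)J is idempotent and that A·(1/n)J = λ0·(1/n)J.

```python
from fractions import Fraction as F

# 4-cycle C4 (a toy example of a regular graph), n = 4, lambda_0 = 2.
n = 4
A = [[F(int(abs(i - j) % n in (1, n - 1))) for j in range(n)] for i in range(n)]
E0 = [[F(1, n)]*n for _ in range(n)]     # claimed principal idempotent (1/n)J

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][l] for k in range(n)) for l in range(n)]
            for i in range(n)]

assert matmul(E0, E0) == E0                               # idempotent
assert matmul(A, E0) == [[F(2, n)]*n for _ in range(n)]   # A E0 = lambda_0 E0
print("E0 = J/n verified for C4")
```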
ei = Σ_{ℓ=0}^{d} ziℓ = (vi/‖v‖²)v + zi,
where ziℓ ∈ ker(A − λℓ I) and zi ∈ v⊥.
Proof: Let Ei denote the eigenspace Ei = ker(A − λi I), and let dim(Ei) = mi, for 0 ≤ i ≤ d. Since A is a real symmetric matrix, it is diagonalizable (Lemma 2.09), and for diagonalizable matrices we have
m0 + m1 + ... + md = n   (33)
(Lemma 4.02).
The matrix A is a symmetric n × n matrix, so A has n eigenvectors B = {u1, u2, ..., un} which form an orthonormal basis for Rn (Lemma 2.06). Notice that for every vector ui ∈ B there exists an Ej such that ui ∈ Ej. Since Ei ∩ Ej = {0} for i ≠ j, it is not possible that an eigenvector ui (1 ≤ i ≤ n) belongs to two different eigenspaces. So, by Equation (33), we can partition the set B into sets B0, B1, ..., Bd such that
Bi is a basis for Ei, B = B0 ∪ B1 ∪ ... ∪ Bd and Bi ∩ Bj = ∅.
Let Ui (0 ≤ i ≤ d) be matrices whose columns are an orthonormal basis for ker(A − λi I), i.e., whose columns are the vectors from Bi, and consider the matrix P = [U0|U1|...|Ud]. We have
P^⊤P = P P^⊤ = I,
I = P P^⊤ = [U0|U1|...|Ud] [U0^⊤; U1^⊤; ...; Ud^⊤] = U0U0^⊤ + U1U1^⊤ + ... + UdUd^⊤ = E0 + E1 + ... + Ed,
that is
E 0 + E 1 + ... + E d = I.
From Definition 4.03 we have that the matrices Ei (0 ≤ i ≤ d) are known as the principal idempotents. Each Ei is an n × n matrix, and if we denote the columns of Ek by z1k, z2k, ..., znk, we have
[z10 z20 ... zn0] + [z11 z21 ... zn1] + ... + [z1d z2d ... znd] = [e1 e2 ... en].
Since
ei = zi0 + zi1 + ... + zid = (vi/‖v‖²)(v1, v2, ..., vn)^⊤ + zi1 + ... + zid
(and we denote the sum zi1 + ... + zid by zi), we have
ei = (vi/‖v‖²)v + zi.
(15.06) Lemma
Let mu(λi) be the u-local multiplicity of λi and muv(λi) the uv-local multiplicities. Then
(i) (A^k)uv = Σ_{i=0}^{d} λi^k muv(λi) (the number of closed walks of length k going through vertex u can be computed in a similar way as the whole number of such rooted walks in Γ is computed by using the "global" multiplicities);
(ii) Σ_{i=0}^{d} mu(λi) = 1 (for each vertex u, the u-local multiplicities of all the eigenvalues add up to 1);
(iii) Σ_{u∈V} mu(λi) = mi for i = 0, 1, ..., d (the multiplicity of each eigenvalue λi is the sum, extended to all vertices, of its local multiplicities).
trace(Ei) = m(λi) (i = 0, 1, ..., d),
⟨p, q⟩u = (p(A)q(A))uu = Σ_{i=0}^{d} mu(λi)p(λi)q(λi).
i=0
(15.08) Lemma
The scalar product defined in Definition 11.07,
⟨p, q⟩ = (1/n) trace(p(A)q(A)) = (1/n) Σ_{k=0}^{d} mk p(λk)q(λk),
is simply the average, over all vertices, of the local scalar products:
⟨p, q⟩ = (1/n) Σ_{u∈V} ⟨p, q⟩u.
Proof: We have
(1/n) Σ_{u∈V} ⟨p, q⟩u = (1/n)(⟨p, q⟩u + ⟨p, q⟩v + ... + ⟨p, q⟩z)
= (1/n)(Σ_{i=0}^{d} mu(λi)p(λi)q(λi) + Σ_{i=0}^{d} mv(λi)p(λi)q(λi) + ... + Σ_{i=0}^{d} mz(λi)p(λi)q(λi))
= (1/n) Σ_{i=0}^{d} (mu(λi) + mv(λi) + ... + mz(λi))p(λi)q(λi)
= (1/n) Σ_{i=0}^{d} (Σ_{u∈V} mu(λi)) p(λi)q(λi) = (1/n) Σ_{i=0}^{d} mi p(λi)q(λi) = ⟨p, q⟩,
where we used Σ_{u∈V} mu(λi) = trace(Ei) = m(λi) = mi.
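Lemma 15.08, together with Lemma 15.06(ii)–(iii), can be checked on a small graph by computing the principal idempotents as Lagrange polynomials in A and reading the u-local multiplicities off their diagonals. The sketch below is my own check (not part of the thesis) on the 4-cycle C4 (distinct eigenvalues 2, 0, −2 with multiplicities 1, 2, 1), with p = q = x.

```python
from fractions import Fraction as F

n = 4
A = [[F(int(abs(i - j) % n in (1, n - 1))) for j in range(n)] for i in range(n)]
I = [[F(int(i == j)) for j in range(n)] for i in range(n)]
lams = [F(2), F(0), F(-2)]              # distinct eigenvalues of C4
mult = [1, 2, 1]                        # their multiplicities

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][l] for k in range(n)) for l in range(n)]
            for i in range(n)]

# E_i = prod_{j != i} (A - lam_j I)/(lam_i - lam_j)  (Lagrange interpolation).
E = []
for i, li in enumerate(lams):
    M = I
    for j, lj in enumerate(lams):
        if j != i:
            M = matmul(M, [[(A[r][c] - lj*I[r][c])/(li - lj) for c in range(n)]
                           for r in range(n)])
    E.append(M)

# u-local multiplicities m_u(lambda_i) = (E_i)_uu.
m_local = [[E[i][u][u] for i in range(len(lams))] for u in range(n)]
assert all(sum(m_local[u]) == 1 for u in range(n))                       # 15.06(ii)
assert [sum(m_local[u][i] for u in range(n)) for i in range(3)] == mult  # 15.06(iii)

# Lemma 15.08 with p = q = x: the average of the local products is the global one.
local = [sum(m_local[u][i]*l*l for i, l in enumerate(lams)) for u in range(n)]
glob = F(1, n)*sum(mult[i]*l*l for i, l in enumerate(lams))
assert F(1, n)*sum(local) == glob
print(glob)   # 2
```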
(15.09) Observation
Because of Proposition 13.06, for every vertex u one can define the (u-local) predistance polynomials, which satisfy the same properties as the predistance polynomials. For instance,
⟨pk^u, pℓ^u⟩u = δkℓ pk^u(λ0). ♦
Before presenting the main property of these polynomials, we need to introduce a little more notation. Let Nk(u) be the set of vertices that are at distance not greater than k from u, the so-called k-neighborhood of u (that is, Nk(u) = Γ0(u) ∪ Γ1(u) ∪ ... ∪ Γk(u) = {v : ∂(u, v) ≤ k}). For any vertex subset U, let ρU be the characteristic vector of U; that is, ρU := Σ_{u∈U} eu (the mapping ρ was defined in Definition 8.03).
(15.10) Lemma
Let Nk(u) be the k-neighborhood of a vertex u and let ρU be the characteristic vector of U. Then
(i) ρNk(u) is just the u-th column (or row) of the sum matrix I + A + ... + Ak (where Ai denotes the distance-i matrix);
(ii) ‖ρNk(u)‖² = sk(u) := |Nk(u)|.
Proof: (i) It is not hard to see that ρΓ0(u) = Σ_{v∈Γ0(u)} ev = (I)∗u, ρΓ1(u) = Σ_{v∼u} ev = (A)∗u, ρΓ2(u) = Σ_{v∈Γ2(u)} ev = (A2)∗u, ..., ρΓk(u) = Σ_{v∈Γk(u)} ev = (Ak)∗u, and the result follows.
(ii) ‖ρNk(u)‖² = ⟨ρNk(u), ρNk(u)⟩ = (I + A + ... + Ak)∗u^⊤ (I + A + ... + Ak)∗u = |Nk(u)|.
(15.11) Lemma
Let u be an arbitrary vertex of a simple graph Γ, and let p ∈ Rk[x]. Then
(i) there exist scalars αv, v ∈ Nk(u), such that p(A)eu = Σ_{v∈Nk(u)} αv ev;
(ii) ‖p‖u = ‖p(A)eu‖.
(ii) Writing p(A)eu = (s1u, s2u, ..., snu)^⊤, we have
‖p(A)eu‖² = ⟨p(A)eu, p(A)eu⟩ = (s1u, s2u, ..., snu)(s1u, s2u, ..., snu)^⊤ = (p(A)p(A))uu = ‖p‖u²,
and the result follows.
(15.12) Lemma
Let u be a fixed vertex of a regular graph Γ. Then, for any polynomial q ∈ Rk[x],
q(λ0)/‖q‖u ≤ ‖ρNk(u)‖,
and equality holds if and only if
(1/‖q‖u) q(A)eu = (1/‖ρNk(u)‖) ρNk(u),
where Nk(u) is the k-neighborhood of the vertex u (Nk(u) = {v : ∂(u, v) ≤ k}).
Proof: Let q ∈ Rk[x] and set p = q/‖q‖u, so that ‖p‖u = 1. By Lemma 15.11(i) there exist scalars αj, j ∈ Nk(u), such that p(A)eu = Σ_{j∈Nk(u)} αj ej. Then
1 = ‖p‖u² = ‖p(A)eu‖² = ⟨p(A)eu, p(A)eu⟩ = ⟨Σ_{j∈Nk(u)} αj ej, Σ_{v∈Nk(u)} αv ev⟩ = Σ_{j∈Nk(u)} αj².
Since Γ is regular, eu = (1/n)j + zu with zu ∈ j⊥ (previous lemma, with v = j), so
p(A)eu = p(A)((1/n)j + zu) = (1/n)p(A)j + p(A)zu = (1/n)p(λ0)j + p(A)zu
and
Σ_{j∈Nk(u)} αj ej = Σ_{j∈Nk(u)} αj((1/n)j + zj) = (1/n)(Σ_{j∈Nk(u)} αj) j + Σ_{j∈Nk(u)} αj zj.
Thus, projecting onto ker(A − λ0 I) we get
(1/n)p(λ0) j = (1/n)(Σ_{j∈Nk(u)} αj) j, whence p(λ0) = Σ_{j∈Nk(u)} αj.
With Nk(u) = {j1, j2, ..., js}, notice that the problem of maximizing the value p(λ0) is equivalent to the following constrained optimization problem:
• maximize f(αj1, αj2, ..., αjs) = Σ_{j∈Nk(u)} αj
• subject to Σ_{j∈Nk(u)} αj² = 1.
The absolute maximum turns out to be √(Σ_{j∈Nk(u)} 1) = √|Nk(u)| = ‖ρNk(u)‖, and it is attained at
αj = 1/√(Σ_{j∈Nk(u)} 1) = 1/√|Nk(u)| = 1/√sk(u).
If we set αj = 1/√sk(u) in the equation p(A)eu = Σ_{j∈Nk(u)} αj ej we get
(q(A)/‖q‖u) eu = Σ_{j∈Nk(u)} (1/√sk(u)) ej = (Lemma 15.10) = (1/‖ρNk(u)‖) ρNk(u),
which is the stated equality case.
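The bound of Lemma 15.12 can also be tested directly. In the sketch below (my own toy computation on C4, not part of the thesis, with q(x) = 1 + x and k = 1) equality is attained — consistent with the fact that C4 is distance-regular and 1 + x = p0 + p1 is the sum of its first two predistance polynomials.

```python
from fractions import Fraction as F

n = 4
A = [[F(int(abs(i - j) % n in (1, n - 1))) for j in range(n)] for i in range(n)]

# q(x) = 1 + x, k = 1, base vertex u = 0; C4 is 2-regular, so lambda_0 = 2.
qA = [[A[i][j] + F(int(i == j)) for j in range(n)] for i in range(n)]  # q(A) = I + A
u = 0
norm_u_sq = sum(qA[u][j]**2 for j in range(n))   # ||q||_u^2 = (q(A)^2)_uu
s1 = 3                                           # s_1(u) = |N_1(u)| in C4
q_lam0 = 3                                       # q(lambda_0) = 1 + 2

# Lemma 15.12 in squared, exact form: q(lambda_0)^2 <= ||q||_u^2 * s_k(u).
assert q_lam0**2 <= norm_u_sq * s1
assert q_lam0**2 == norm_u_sq * s1               # equality for this q and graph
print(norm_u_sq, s1)   # 3 3
```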
(15.13) Lemma
Let {pk^u}0≤k≤du be the sequence of (u-local) predistance polynomials, let qk^u := Σ_{h=0}^{k} ph^u and let sk(u) := |Nk(u)|. Then
(i) qk^u(λ0) = ‖qk^u‖u²;
(ii) qk^u(λ0) = sk(u) if and only if qk^u(A)eu = ρNk(u).
Proof: (ii) By Lemma 15.12, for an arbitrary q ∈ Rk[x] we have q(λ0)/‖q‖u = ‖ρNk(u)‖ if and only if (1/‖q‖u) q(A)eu = (1/‖ρNk(u)‖) ρNk(u). If we replace q by qk^u we get
qk^u(λ0)/‖qk^u‖u = ‖ρNk(u)‖ iff (1/‖qk^u‖u) qk^u(A)eu = (1/‖ρNk(u)‖) ρNk(u).
Since qk^u(λ0) = ‖qk^u‖u², in the case of equality we have ‖qk^u‖u = ‖ρNk(u)‖, and from this it follows that
qk^u(λ0) = sk(u) iff qk^u(A)eu = ρNk(u).
(15.14) Proposition
Let Γ denote a simple connected graph with predistance polynomials {pk}0≤k≤d. If Γ is distance-regular then pk(A) = Ak for any 0 ≤ k ≤ d (the predistance polynomials are, in this case, the distance polynomials).
where qk = p0 + ... + pk , sk (u) = |Nk (u)| = |Γ0 (u)| + |Γ1 (u)| + ... + |Γk (u)|.
Proof: (⇒) Assume that Γ is distance-regular. Then the predistance polynomials {pk}0≤k≤d are in fact the distance polynomials (Proposition 15.14), and by Proposition 10.07 we have ‖ph‖² = |Γh(u)|. Now, the number of vertices at distance not greater than k from any given vertex u is a constant, since
sk(u) = Σ_{h=0}^{k} |Γh(u)| = Σ_{h=0}^{k} ph(λ0) = qk(λ0).
We have
1/sk(u) = 1/qk(λ0),
that is,
Σ_{u∈V} 1/sk(u) = n/qk(λ0),
where we have used the relationship ⟨p, q⟩ = (1/n) Σ_{u∈V} ⟨p, q⟩u from Lemma 15.08 between the scalar products involved. Thus, we conclude that qk(λ0) never exceeds the harmonic mean of the numbers sk(u):
qk(λ0) ≤ n/(Σ_{u∈V} 1/sk(u)).
What is more, equality can hold if and only if the inequality ‖qk‖u²/qk(λ0)² ≥ 1/sk(u) above is also an equality, that is (Lemma 15.12),
qk(λ0)/‖qk‖u = ‖ρNk(u)‖ ⟺ (1/‖qk‖u) qk(A)eu = (1/‖ρNk(u)‖) ρNk(u) ⟺ qk(A)eu = (‖qk‖u/‖ρNk(u)‖) ρNk(u).
But for (u-local) predistance polynomials we have (Lemma 15.13(ii))
$$q_k^u(A)e_u = \rho N_k(u) \iff q_k^u(\lambda_0) = s_k(u) \iff q_k^u(\lambda_0) = \frac{n}{\sum_{u\in V} 1/s_k(u)},$$
and, hence, $q_k = \alpha_u q_k^u$ for every vertex $u \in V$ and some constants $\alpha_u$. Let us see that all these constants are equal to 1. Let $u, v$ be two adjacent vertices and assume $k \ge 1$. Using the second equality in Lemma 15.13(ii) we have
$$(q_k^u(A))_{uv} = (q_k^u(A)e_u)_v = (\rho N_k(u))_v = (A_0 + A_1 + \cdots + A_k)_{uv} = 1,$$
that is, $(q_k^u(A))_{uv} = (q_k^v(A))_{vu} = 1$, and, therefore (since $q_k = \alpha_u q_k^u$),
$$\frac{1}{\alpha_u}\,(q_k(A))_{uv} = \frac{1}{\alpha_v}\,(q_k(A))_{vu} = 1.$$
Hence αu = αv and, since Γ is supposed to be connected, qk = αqku for some constant α and
any vertex $u$. Moreover, using these equalities and Lemma 15.08,
$$\frac{n}{\alpha}\,q_k(\lambda_0) = \frac{1}{\alpha}\,q_k(\lambda_0)\sum_{u\in V} 1 = \sum_{u\in V} q_k^u(\lambda_0) = \sum_{u\in V}\|q_k^u\|_u^2 = \frac{1}{\alpha^2}\sum_{u\in V}\|q_k\|_u^2 = \frac{1}{\alpha^2}\sum_{u\in V}\langle q_k, q_k\rangle_u = \frac{n}{\alpha^2}\,\|q_k\|^2 = \frac{n}{\alpha^2}\,q_k(\lambda_0),$$
whence $\alpha = 1$.
Alternatively, considering the "base vertices" one by one, we may give a characterization which does not use the sum polynomials $q_k$ or the harmonic means of the $s_k(u)$'s:
But, once more, not all the conditions in Characterization J or Characterization K are
necessary to ensure distance-regularity. In fact, if the graph is regular (which guarantees the
case k = 1 since then p1 = x), only the case k = d − 1 matters. First we need Lemma 15.17
and Lemma 15.18.
(15.17) Lemma
Let $\Gamma = (V, E)$ be a simple connected graph with predistance polynomials $\{p_k\}_{0\le k\le d}$ and spectrum $\operatorname{spec}(\Gamma) = \{\lambda_0^{m_0}, \lambda_1^{m_1}, \ldots, \lambda_d^{m_d}\}$. Then
(i) $m(\lambda_i) = \dfrac{\phi_0\,p_d(\lambda_0)}{\phi_i\,p_d(\lambda_i)}$ $(0 \le i \le d)$,
(ii) $p_d(\lambda_0) = n\left(\displaystyle\sum_{j=0}^{d} \frac{\pi_0^2}{m(\lambda_j)\pi_j^2}\right)^{-1}$,
where $\phi_i = \prod_{j=0,\, j\neq i}^{d}(\lambda_i - \lambda_j)$ and the $\pi_i$'s are moment-like parameters ($\pi_i = |\phi_i|$).
where $\widehat{(x - \lambda_i)}$ denotes that this factor is not present in the product, and
$$Z_i^*(\lambda_i) = \prod_{\substack{j=1 \\ j\neq i}}^{d}(\lambda_i - \lambda_j) = (\lambda_i - \lambda_1)(\lambda_i - \lambda_2)\cdots\widehat{(\lambda_i - \lambda_i)}\cdots(\lambda_i - \lambda_d) =$$
$$\|p_d\|^2 = \langle p_d, p_d\rangle = \frac{1}{n}\sum_{j=0}^{d} m(\lambda_j)\,p_d(\lambda_j)^2,$$
$$p_d(\lambda_0) = \frac{1}{n}\sum_{j=0}^{d} m(\lambda_j)\left(\frac{\phi_0\,p_d(\lambda_0)}{\phi_j\,m(\lambda_j)}\right)^2 = p_d(\lambda_0)^2\,\frac{1}{n}\sum_{j=0}^{d}\frac{\phi_0^2}{\phi_j^2\,m(\lambda_j)},$$
$$n = p_d(\lambda_0)\sum_{j=0}^{d}\frac{\pi_0^2}{m(\lambda_j)\pi_j^2},$$
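Both formulas of Lemma 15.17 can be verified numerically. The following sketch (our own check, assuming Python with numpy; the Petersen graph data $\operatorname{spec} = \{3^1, 1^5, (-2)^4\}$ and $p_2(x) = x^2 - 3$ are used) recovers the multiplicities and $p_d(\lambda_0)$:

```python
import numpy as np

# Petersen graph: n = 10, d = 2, spectrum {3^1, 1^5, (-2)^4}, p_d(x) = x^2 - 3.
n, d = 10, 2
lam = np.array([3.0, 1.0, -2.0])     # distinct eigenvalues, λ_0 > λ_1 > λ_2
m = np.array([1.0, 5.0, 4.0])        # multiplicities m(λ_i)
p_d = lambda x: x**2 - 3

# φ_i = Π_{j != i} (λ_i - λ_j) and the moment-like parameters π_i = |φ_i|.
phi = np.array([np.prod([lam[i] - lam[j] for j in range(d + 1) if j != i])
                for i in range(d + 1)])
pi = np.abs(phi)

# (ii): p_d(λ_0) = n (Σ_j π_0^2 / (m(λ_j) π_j^2))^{-1}
pd0 = n / np.sum(pi[0]**2 / (m * pi**2))
assert np.isclose(pd0, p_d(lam[0]))  # both equal 6 = k_2

# (i): m(λ_i) = φ_0 p_d(λ_0) / (φ_i p_d(λ_i))
mult = phi[0] * pd0 / (phi * p_d(lam))
assert np.allclose(mult, m)          # recovers the multiplicities 1, 5, 4
```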
(15.18) Lemma
For a regular graph $\Gamma$ with $n$ vertices and spectrum $\operatorname{spec}(\Gamma) = \{\lambda_0^{m_0}, \lambda_1^{m_1}, \ldots, \lambda_d^{m_d}\}$ we have
$$\frac{\sum_{u\in V} n/(n - k_d(u))}{\sum_{u\in V} k_d(u)/(n - k_d(u))} = \sum_{i=0}^{d}\frac{\pi_0^2}{m(\lambda_i)\pi_i^2} \iff q_{d-1}(\lambda_0) = \frac{n}{\sum_{u\in V} 1/s_{d-1}(u)},$$
where $\pi_h = \prod_{i=0,\, i\neq h}^{d}(\lambda_h - \lambda_i)$ and $k_d(u) = |\Gamma_d(u)|$.
Proof: Since $q_d = H$, the Hoffman polynomial, and $H(\lambda_0) = n$, we have $q_{d-1}(\lambda_0) = q_d(\lambda_0) - p_d(\lambda_0) = n - p_d(\lambda_0)$. By Lemma 15.17(ii) the value of $p_d(\lambda_0)$ is $n\left(\sum_{j=0}^{d}\frac{\pi_0^2}{m(\lambda_j)\pi_j^2}\right)^{-1}$. Notice the equivalences that follow (using $s_{d-1}(u) = n - |\Gamma_d(u)|$ in the last step):
$$q_{d-1}(\lambda_0) = \frac{n}{\sum_{u\in V} 1/s_{d-1}(u)} \iff n - p_d(\lambda_0) = \frac{n}{\sum_{u\in V} 1/s_{d-1}(u)} \iff n - \frac{n}{\sum_{u\in V} 1/s_{d-1}(u)} = p_d(\lambda_0)$$
$$\iff \frac{n\sum_{u\in V} 1/s_{d-1}(u) - n}{\sum_{u\in V} 1/s_{d-1}(u)} = p_d(\lambda_0) \iff \frac{\sum_{u\in V} n/s_{d-1}(u) - \sum_{u\in V} 1}{\sum_{u\in V} 1/s_{d-1}(u)} = p_d(\lambda_0)$$
$$\iff \frac{\sum_{u\in V}\frac{n - s_{d-1}(u)}{s_{d-1}(u)}}{\sum_{u\in V} 1/s_{d-1}(u)} = p_d(\lambda_0) = n\left(\sum_{j=0}^{d}\frac{\pi_0^2}{m(\lambda_j)\pi_j^2}\right)^{-1} \iff \frac{\sum_{u\in V}\frac{n}{n - |\Gamma_d(u)|}}{\sum_{u\in V}\frac{|\Gamma_d(u)|}{n - |\Gamma_d(u)|}} = \sum_{j=0}^{d}\frac{\pi_0^2}{m(\lambda_j)\pi_j^2}.$$
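For the Petersen graph both sides of the equivalence in Lemma 15.18 can be evaluated directly. The sketch below (our own check, assuming Python with numpy; it uses $k_d(u) = 6$ and $s_{d-1}(u) = 4$ for every vertex, and $q_1(x) = 1 + x$) confirms that both conditions hold simultaneously:

```python
import numpy as np

# Petersen graph: n = 10, d = 2, k_d(u) = |Γ_2(u)| = 6, s_{d-1}(u) = |N_1(u)| = 4.
n, d = 10, 2
kd = np.full(n, 6.0)
s = n - kd                                   # s_{d-1}(u) = n - k_d(u)

# Left-hand side of the first condition.
lhs = np.sum(n / (n - kd)) / np.sum(kd / (n - kd))

# Right-hand side: Σ_i π_0^2 / (m(λ_i) π_i^2) for spec = {3^1, 1^5, (-2)^4}.
lam = np.array([3.0, 1.0, -2.0])
m = np.array([1.0, 5.0, 4.0])
phi = np.array([np.prod([lam[i] - lam[j] for j in range(d + 1) if j != i])
                for i in range(d + 1)])
pi = np.abs(phi)
rhs = np.sum(pi[0]**2 / (m * pi**2))
assert np.isclose(lhs, rhs)                  # both equal 5/3

# Second condition: q_{d-1}(λ_0) = n / Σ_u 1/s_{d-1}(u); here q_1(x) = 1 + x.
assert np.isclose(1 + lam[0], n / np.sum(1 / s))   # both equal 4
```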
Proof: If $q_k(\lambda_0) = \frac{n}{\sum_{u\in V} 1/s_k(u)}$ is satisfied for $k = d-1$, we infer that $q_{d-1}(A) = \sum_{h=0}^{d-1} A_h$ (from the proof of Theorem 15.15 (Characterization J)) and so
$$p_d(A) = H(A) - q_{d-1}(A) = J - \sum_{i=0}^{d-1} A_i = A_d,$$
where $H$ is the Hoffman polynomial. Thus, from Theorem 11.15 (Characterization D'), the result follows.
Proof: We leave this proof as an interesting exercise (challenge). A proof can be found in [20], Theorem 4.4.
Theorem 15.20 was proved by Fiol and Garriga [20], generalizing some previous results. Finally, notice that, since $A_h = p_h(A)$ implies $k_h(u) = p_h(\lambda_0)$ for every $u \in V$ (see Proposition 10.07), both characterizations (D') and (K') are closely related.
Conclusion
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied
to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic
approaches. There are several branches of algebraic graph theory, involving the use of linear
algebra, the use of group theory, and the study of graph invariants.
In this thesis we have tried to show this connection with linear algebra. We have shown how, from a given graph, one obtains the adjacency matrix, the principal idempotent matrices, the distance matrices and the predistance polynomials. Many questions arise from this; for example: what are the connections between the spectra of these matrices and the properties of the graph? What can we say about a graph from its distance matrices? Are there connections between orthogonal polynomials and properties of graphs? These are the questions we have tried to answer in Chapters II and III, in the case of distance-regular graphs.
A further direction that would be interesting to explore is to use some of these results to make a connection with group theory, or, more precisely, with coding theory. (Coding theory is the study of the properties of codes and their fitness for a specific application. Codes are used for data compression, cryptography, error correction and, more recently, also for network coding.) Given a distance-regular graph, is it possible to construct a code that would be, for example, efficient and reliable for data transmission?
Index

D, 9
H(x), 32
N_k(u), 127
P ≃ Q, 28
P_n, 74
Z_k, 95
A, 8
A(Γ), 8
Γ_k(u), 9
alg mult_A(λ), 10
δ_x, 7
dist_Γ(x, y), 9
ecc(u), 9
geo mult_A(λ), 10
E, 9
Mat_{m×n}(F), 9
num(x, y), 57
⊕, 76, 122
∂(x, y), 9
π_k, 95
ρ(A), 9
ρ, 44
σ(A), 9
spec(Γ), 9
trace(B), 9
f(A), 21
k-neighborhood, 127
k_i, 38
m_u(λ_i), 125
m_{uv}(λ_i), 125
s_k(u), 127
u ∼ v, 7
algebra
  adjacency, 24
  Bose-Mesner, 24
  distance ◦-algebra, 47
algebra over F, 47
algebraic multiplicity, 10
antipodal point, 114
automorphism of Γ, 57
characteristic polynomial, 10
characteristic vector, 127
crossed uv-local multiplicities, 125
diagonalizable, 13
diameter, 9
direct sum, 76, 122
distance polynomials, 64
edge, incident, 7
eigenpair, 9
eigenspace, 9
eigenvalue, 9
eigenvector, 9
fixed point, 57
Fourier coefficients, 98
Fourier expansion, 98
function, weight, 94
geometric multiplicity, 10
girth, 8
Gram-Schmidt sequence, 98
graph
  k-regular, 7
  connected, 8
  distance-polynomial, 64
  distance-regular, 35
  distance-regular around y, 39
  generalized Petersen graph, 63
  Hamming, 55
  Johnson, 59
  path, 74
  regular, 7
  vertex-transitive, 57
Hoffman polynomial, 32
ideal, two-sided, 92
inner product, 66
scalar product
Bibliography

[2] S. Axler: "Linear Algebra Done Right", 2nd edition, Springer, 2004.
[3] G. Bachman, L. Narici, E. Beckenstein: "Fourier and Wavelet Analysis", Springer, 2000.
[6] R. E. Blahut: "Algebraic Codes for Data Transmission", Cambridge University Press, 2003.
[10] S. Cañez: "Notes on dual spaces", lecture notes from Math 110 - Linear Algebra, downloaded from http://math.berkeley.edu/~scanez/courses/math110/Home/Home files/dual-spaces.pdf
[13] E. R. van Dam: "The spectral excess theorem for distance-regular graphs: a global (over)view", The Electronic Journal of Combinatorics 15 (2008), #R129.
[14] L. Debnath, P. Mikusinski: "Hilbert Spaces with Applications", Elsevier Academic Press, 2005.
[17] D. S. Dummit, R. M. Foote: "Abstract Algebra", 3rd edition, John Wiley & Sons, 2004.
[21] M. A. Fiol, S. Gago, E. Garriga: "A New Approach to the Spectral Excess Theorem for Distance-Regular Graphs", Linear Algebra and its Applications, 2009.
[22] M. A. Fiol: "On pseudo-distance-regularity", Linear Algebra Appl. 323 (2001), 145-165.
[28] J. I. Hall: "Polynomial Algebra over Fields", lecture notes, downloaded from http://www.mth.msu.edu/~jhall/classes/codenotes/PolyAlg.pdf
[31] A. J. Hoffman: "On the polynomial of a graph", Amer. Math. Monthly 70 (1963), 30-36.
[32] R. A. Horn, C. R. Johnson: "Matrix Analysis", Cambridge University Press, 1990.
[33] T. W. Judson: "Abstract Algebra: Theory and Applications", 1997.
[36] J. L. Massey: "Applied Digital Information Theory II", lecture notes used for a course held by prof. dr. James L. Massey from 1981 until 1997 at ETH Zurich, downloaded from http://www.isiweb.ee.ethz.ch/archive/massey scr/
[37] C. D. Meyer: "Matrix Analysis and Applied Linear Algebra", SIAM, 2000.
[38] Š. Miklavič: part of the lectures from "PhD Course Algebraic Combinatorics, Computability and Complexity" in the TEMPUS project SEE Doctoral Studies in Mathematical Sciences, 2011.
[39] L. Mirsky: "An Introduction to Linear Algebra", Oxford University Press, 1955.
[40] P. J. Olver, C. Shakiban: "Applied Linear Algebra", Pearson Prentice Hall, 2006.
[45] K. H. Rosen: "Discrete Mathematics and Its Applications", 5th edition, McGraw-Hill, 2003.
[46] K. A. Ross, C. R. B. Wright: "Discrete Mathematics", 5th edition, Prentice Hall, 2003.