Vdoc - Pub - Foundations of Discrete Harmonic Analysis Applied and Numerical Harmonic Analysis
Vdoc - Pub - Foundations of Discrete Harmonic Analysis Applied and Numerical Harmonic Analysis
Vasily N. Malozemov
Sergey M. Masharsky
Foundations
of Discrete
Harmonic
Analysis
Applied and Numerical Harmonic Analysis
Series Editor
John J. Benedetto
University of Maryland
College Park, MD, USA
Editorial Board
Emmanuel Candes
Stanford University
Stanford, CA, USA
Peter Casazza
University of Missouri
Columbia, MO, USA
Gitta Kutyniok
Technische Universität Berlin
Berlin, Germany
Ursula Molter
Universidad de Buenos Aires
Buenos Aires, Argentina
Michael Unser
Ecole Polytechnique Federal De Lausanne
Lausanne, Switzerland
Foundations of Discrete
Harmonic Analysis
Vasily N. Malozemov Sergey M. Masharsky
Mathematics and Mechanics Faculty Mathematics and Mechanics Faculty
Saint Petersburg State University Saint Petersburg State University
Saint Petersburg, Russia Saint Petersburg, Russia
This book is published under the imprint Birkhäuser, www.birkhauser-science.com by the registered
company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
LN-ANHA Series Preface
The Lecture Notes in Applied and Numerical Harmonic Analysis (LN-ANHA) book
series is a subseries of the widely known Applied and Numerical Harmonic
Analysis (ANHA) series. The Lecture Notes series publishes paperback volumes,
ranging from 80 to 200 pages in harmonic analysis as well as in engineering and
scientific subjects having a significant harmonic analysis component. LN-ANHA
provides a means of distributing brief-yet-rigorous works on similar subjects as the
ANHA series in a timely fashion, reflecting the most current research in this rapidly
evolving field.
The ANHA book series aims to provide the engineering, mathematical, and
scientific communities with significant developments in harmonic analysis, ranging
from abstract harmonic analysis to basic applications. The title of the series reflects
the importance of applications and numerical implementation, but richness and
relevance of applications and implementation depend fundamentally on the struc-
ture and depth of theoretical underpinnings. Thus, from our point of view, the
interleaving of theory and applications and their creative symbiotic evolution is
axiomatic.
Harmonic analysis is a wellspring of ideas and applicability that has flourished,
developed, and deepened over time within many disciplines and by means of
creative cross-fertilization with diverse areas. The intricate and fundamental rela-
tionship between harmonic analysis and fields such as signal processing, partial
differential equations (PDEs), and image processing is reflected in our
state-of-the-art ANHA series.
Our vision of modem harmonic analysis includes mathematical areas such as
wavelet theory, Banach algebras, classical Fourier analysis, time-frequency analy-
sis, and fractal geometry, as well as the diverse topics that impinge on them.
For example, wavelet theory can be considered an appropriate tool to deal with
some basic problems in digital signal processing, speech and image processing,
geophysics, pattern recognition, bio-medical engineering, and turbulence. These
areas implement the latest technology from sampling methods on surfaces to fast
algorithms and computer vision methods. The underlying mathematics of wavelet
theory depends not only on classical Fourier analysis but also on ideas from abstract
v
vi LN-ANHA Series Preface
harmonic analysis, including von Neumann algebras and the affine group. This
leads to a study of the Heisenberg group and its relationship to Gabor systems and
of the metaplectic group for a meaningful interaction of signal decomposition
methods.
The unifying influence of wavelet theory in the aforementioned topics illustrates
the justification for providing a means for centralizing and disseminating infor-
mation from the broader, but still focused, area of harmonic analysis. This will be a
key role of ANHA. We intend to publish with the scope and interaction that such a
host of issues demands.
Along with our commitment to publish mathematically significant works at the
frontiers of harmonic analysis, we have a comparably strong commitment to publish
major advances in applicable topics such as the following, where harmonic analysis
plays a substantial role:
The above point of view for the ANHA book series is inspired by the history of
Fourier analysis itself, whose tentacles reach into so many fields.
In the last two centuries Fourier analysis has had a major impact on the
development of mathematics, on the understanding of many engineering and sci-
entific phenomena, and on the solution of some of the most important problems in
mathematics and the sciences. Historically, Fourier series were developed in the
analysis of some of the classical PDEs of mathematical physics; these series were
used to solve such equations. In order to understand Fourier series and the kinds of
solutions they could represent, some of the most basic notions of analysis were
defined, for example, the concept of “function.” Since the coefficients of Fourier
series are integrals, it is no surprise that Riemann integrals were conceived to deal
with uniqueness properties of trigonometric series. Cantor’s set theory was also
developed because of such uniqueness questions.
A basic problem in Fourier analysis is to show how complicated phenomena,
such as sound waves, can be described in terms of elementary harmonics. There are
two aspects of this problem: first, to find, or even define properly, the harmonics or
spectrum of a given phenomenon, e.g., the spectroscopy problem in optics; second,
LN-ANHA Series Preface vii
ix
x Preface
The seminar on discrete harmonic analysis and computer aided geometric design
(shortly, DHA&CAGD) was held in St. Petersburg University from 2004 to 2014.
The seminar’s website is http://dha.spb.ru. The site was used to publish the
proceedings of the seminar’s members; these proceedings served as a basis for the
books [46, 47, 34, 44, 7] published later on. Contents of the proceedings and the
mentioned books can be considered as an addendum to this book.
Acknowledgements First of all, the authors are thankful to the students and postgraduates who,
over the years, attended the course of lectures on discrete harmonic analysis and offered beautiful
solutions to some exercises.
The first author separately expresses his gratitude to his permanent co-author Prof. A. B.
Pevnyi and to his former postgraduate students M. G. Ber and A. A. Tret’yakov. It is with these
people that we accomplished our first works in the field of discrete harmonic analysis. We also
give thanks to O. V. Prosekov, M. I. Grigoriev, and N. V. Chashnikov. By turn, they administered
the website of DHA&CAGD over 10 years.
Contents
1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Residuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Greatest Common Divisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Relative Primes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Permutations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 Bitwise Summation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.6 Complex Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.7 Roots of Unity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.8 Finite Differences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2 Signal Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1 Space of Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 Discrete Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3 Parseval Equality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4 Sampling Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.5 Cyclic Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.6 Cyclic Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.7 Optimal Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.8 Optimal Signal–Filter Pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.9 Ensembles of Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.10 Uncertainty Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3 Spline Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.1 Periodic Bernoulli Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.2 Periodic B-splines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.3 Discrete Periodic Splines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.4 Spline Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
xiii
xiv Contents
Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Acronyms
xv
Chapter 1
Preliminaries
1.1 Residuals
Consider j ∈ Z and N being a natural number. There exists a unique integer p such
that
p ≤ j/N < p + 1. (1.1.1)
It is referred to as an integral part of the fraction j/N and is noted as p = j/N . The
difference r = j − pN is called a remainder after division of j by N or a modulo
N residual of j. It is noted as r = j N . For a given j we get a representation
j = pN + r , where p = j/N and r = j N .
It is not difficult to show that
j N ∈ 0 : N − 1. (1.1.2)
j + k N N = j N (1.1.4)
hold for any integer k. The formal proof is carried out in this way. As long as
j/N ≤ j/N < j/N + 1, after addition of k we obtain
j + k N N = j + k N − ( j + k N )/N N = j − j/N N = j N .
We mention two other properties of residuals that are simple yet important: for
any integers j and k
j + k N = j N + k N = j N + k N N ,
jk N = j N k N = j N k N N .
Take nonzero integers j and k. The largest natural number that divides both j and k
is called the greatest common divisor of these numbers and is denoted by gcd ( j, k).
Designate by M the set of linear combinations of the numbers j and k with integer
coefficients:
M = {a j + bk | a ∈ Z, b ∈ Z}.
We see that r ∈ M and r < d. It is possible only when r = 0, i.e. when j is divisible
by d. Similarly we ascertain that k is divisible by d as well.
Now let j and k be divisible by a natural number d . Then d is also divisible by d .
Hence d = gcd ( j, k). The theorem is proved.
gcd ( j, k) = a0 j + b0 k. (1.2.1)
1.2 Greatest Common Divisor 3
Natural numbers n and N are referred to as relative primes if gcd (n, N ) = 1. For
relative primes n and N equality (1.2.1) takes the form
a0 n + b0 N = 1. (1.3.1)
Thus, for relative primes n and N there exist integers a0 and b0 such that equal-
ity (1.3.1) holds.
The inverse assertion is also valid: equality (1.3.1) guarantees relative primality
of n and N . It follows from Theorem 1.2.1, since the unity is absolutely the smallest
natural number.
Theorem 1.3.1 If the product jn for some j ∈ Z is divisible by N , and the integers
n and N are relative primes, then j is divisible by N .
Proof Multiply both sides of equality (1.3.1) by j and take modulo N residuals. We
get a0 jn N = j N and
a0 jn N N = j N .
Since n and N are the relative primes, Theorem 1.3.1 yields that x0 − x is divisible
by N . Taking into account the inequality |x0 − x | ≤ N − 1 we conclude that x = x0 .
Let’s summarize.
4 1 Preliminaries
Theorem 1.3.2 If gcd (n, N ) = 1 then the equation xn N = k has a unique solu-
tion on the set 0 : N − 1 for any k ∈ 0 : N − 1.
1.4 Permutations
Denote f ( j) = jn N . By virtue of Theorem 1.3.2, provided that gcd (n, N ) = 1, the
function f ( j) bijectively maps the set JN = {0, 1, . . . , N − 1} onto itself. Essen-
tially, f performs a permutation of the elements of JN . This is called an Euler
permutation.
We describe a simple way of calculating the values f ( j). It is obvious that f (0) =
0 and
f ( j + 1) = ( j + 1)n N = jn N + n N = f ( j) + n N .
f (0) = 0; f ( j + 1) = f ( j) + n N , j = 0, 1, . . . , N − 2 (1.4.1)
that makes it possible to consequently discover the values of the Euler permutation.
The results of calculations with formula (1.4.1) for n = 3 and N = 8 are presented
in Table 1.1.
Later on we will need two other permutations, revν and greyν . They are defined
on a set {0, 1, . . . , 2ν − 1} for a natural ν.
Recall that with the use of consequent bisections we can uniquely represent any
integer j ∈ 0 : 2ν − 1 in the form
where every coefficient jk is equal to either zero or unity. Instead of (1.4.2), more
compact notation is used: j = ( jν−1 , jν−2 , . . . , j0 )2 . The right side of the latter
equality is referred to as a binary code of the number j.
Introduce a notation
revν ( j) = ( j0 , j1 , . . . , jν−1 )2 .
A number revν ( j) belongs to a set 0 : 2ν − 1, and its binary code equals to the
reverted binary code of a number j. Identifier “rev” corresponds to a word reverse.
Subscript ν determines the amount
of reverted binary digits.
It is clear that revν revν ( j) = j for j ∈ 0 : 2ν − 1. Hence, in particular, it follows
that the mapping j → revν ( j) is a permutation of a set {0, 1, . . . , 2ν − 1}.
By a definition, rev1 ( j) = j for j ∈ 0 : 1. It is reckoned that rev0 (0) = 0.
Table 1.2 shows how to form a permutation revν for ν = 3.
We continue with an investigation of a permutation revν .
Theorem 1.4.1 The following recurrent relation holds:
rev0 (0) = 0;
k ∈ 0 : 2ν−1 − 1, ν = 1, 2, . . .
Proof Replace the second and the third lines in (1.4.3) with a single line
2 0 2 1 3
3 0 4 2 6 1 5 3 7
Theorem 1.4.1 makes it possible to consequently calculate the values revν ( j) for
ν = 1, 2, . . . for all j ∈ {0, 1, . . . , 2ν − 1} at once. Table 1.3 presents the results
of calculations of rev1 ( j), rev2 ( j), and rev3 ( j). A transition between the (ν − 1)-th
and the ν-th rows was performed with accordance to formula (1.4.3). It was also
taken into account that
grey0 (0) = 0;
k ∈ 0 : 2ν−1 − 1, ν = 1, 2, . . .
k ∈ 0 : 2ν−1 − 1.
Hence the function greyν ( j) as well bijectively maps onto itself the set {2ν−1 , . . . ,
2ν − 1}. Joining these two facts we conclude that the function greyν ( j) bijectively
maps the set {0, 1, . . . , 2ν − 1} onto itself. In the other words, the mapping j →
greyν ( j) is a permutation of the set {0, 1, . . . , 2ν − 1}.
1.4 Permutations 7
2 0 1 3 2
3 0 1 3 2 6 7 5 4
Formula (1.4.5) make it possible to consequently calculate the values greyν ( j) for
ν = 1, 2, . . . for all j ∈ {0, 1, . . . , 2ν − 1} at once. Table 1.4 contains the results of
calculations of grey1 ( j), grey2 ( j), and grey3 ( j).
We adduce a characteristic property of a permutation greyν .
Theorem 1.4.2 For ν ≥ 1, the binary codes of two adjacent elements greyν (k) and
greyν (k + 1), k ∈ 0 : 2ν − 2, differ in a single digit only.
Proof When ν = 1, the assertion is obvious since grey1 (0) = (0)2 and grey1 (1) =
(1)2 . We perform an induction step from ν − 1 to ν, ν ≥ 2.
Let k ∈ 0 : 2ν−1 − 1 and greyν−1 (k) = ( pν−2 , . . . , p0 )2 . According to (1.4.5)
By an inductive hypothesis, the binary codes ( pν−2 , . . . , p0 )2 and (qν−2
, . . . , q0 )2
differ only in a single digit. According to (1.4.5)
greyν (k) = greyν (2ν − 1 − k ) = (1, pν−2 , . . . , p0 )2 ,
greyν (k + 1) = greyν 2ν − 1 − (k − 1) = (1, qν−2 , . . . , q0 )2 .
8 1 Preliminaries
It is evident that the binary codes of numbers greyν (k) and greyν (k + 1) also differ
in a single digit only. The theorem is proved.
p = j ⊕k ⇔ pν = jν + kν 2 , ν = 0, 1, . . . , s − 1.
( j ⊕ k) ⊕ l = j ⊕ (k ⊕ l). (1.5.2)
( p ⊕ k) ⊕ k = p ⊕ (k ⊕ k) = p.
(k j)α = kα − jα 2 , α ∈ 0 : s − 1.
k j = k ⊕ j.
1.5 Bitwise Summation 9
It is assumed that the reader is familiar with the arithmetic operations on complex
numbers. We remind some notations:
z = u + iv a complex number,
u = Re z a real part of a complex number,
v = Im z an imaginary part of a complex number,
z = u√− iv a conjugate complex number,
|z| = u 2 + v 2 a modulus of a complex number.
|z 1 + z 2 |2 = |z 1 |2 + |z 2 |2 + 2 Re(z 1 z 2 ),
|z 1 + i z 2 |2 = |z 1 |2 + |z 2 |2 + 2 Im(z 1 z 2 ).
|z 1 + i z 2 |2 = (z 1 + i z 2 )(z 1 − i z 2 ) = |z 1 |2 + |z 2 |2 − i(z 1 z 2 − z 1 z 2 )
= |z 1 |2 + |z 2 |2 + 2 Im(z 1 z 2 ).
n−1
1 − zn
zk = .
k=0
1−z
n−1
z
kz k = [1 − nz n−1 + (n − 1)z n ]. (1.6.2)
k=1
(1 − z)2
n−1
n−1
n
n−1
(1 − z) kz k = kz k − (k − 1)z k = z − (n − 1)z n + zk
k=1 k=1 k=2 k=2
1−z n−2
z
= z − (n − 1)z n + z 2 = [1 − nz n−1 + (n − 1)z n ],
1−z 1−z
ω N = cos 2π
N
+ i sin 2π
N
.
1
ω−k
N = = cos 2πk − i sin 2πk
cos(2π k/N ) + i sin(2π k/N ) N N
= cos 2π(−k)
N
+ i sin 2π(−k)
N
.
r = 2, 3, . . .
r
r
r f ( j) = (−1)r −k f ( j + k).
k=0
k
Exercises
j j −1
− =− − 1.
N N
1.2 Prove that for j ∈ Z and natural n and N the following equality is valid:
n jn N = n j N .
1.3 Let f ( j) = jn N . Prove that, provided n 2 N = 1, the equality f f ( j) = j
is valid for j ∈ 0 : N − 1.
1.8 Assume that integers n 1 , n 2 , . . . , n s are relatively prime with an integer m. Prove
that the product of these numbers N = n 1 n 2 · · · n s is also relatively prime with m.
1.11 Under conditions of the previous exercise, prove that any integer j ∈ 0 : N − 1
can be uniquely represented in a form
s
j= jα Nα ,
α=1 N
1.12 Let conditions of the Exercise 1.10 hold. For each α ∈ 1 : s the equation
x Nα n α = 1 has a unique solution on the set 0 : n α − 1. We denote it by pα . Prove
that any integer k ∈ 0 : N − 1 can be uniquely represented in a form
s
k= k α pα N α ,
α=1 N
1.14 We take p = ( pν−1 , pν−2 , . . . , p0 )2 . Prove that the unique solution of the
equation greyν ( j) = p is an integer j = ( jν−1 , jν−2 , . . . , j0 )2 which has
jν−1 = pν−1 ,
1.17 Let n and N be relatively prime natural numbers. We put εn = ωnN , where
N −1 j N −1
ω N = exp(2πi/N ). Prove that the sets εnk k=0 and ω N j=0 are equal, i.e. that
they consist of the same elements.
1.18 Prove that for relative primes m and n there exist unique integers p ∈ 0 : m − 1
and q ∈ 0 : n − 1 with the following properties: gcd ( p, m) = 1, gcd (q, n) = 1, and
p q
ωmn = ωm ωn .
1.19 Prove that
N −1
N −1
j
zk = (z − ω N ).
k=0 j=1
1.20 Let Pr be an algebraic polynomial of the r -th degree. Prove that a finite dif-
ference of the (r + 1)-th order of Pr equals to zero identically.
Chapter 2
Signal Transforms
2.1.1 We fix a natural number N . The term signal is used to refer to an N -periodic
complex-valued function of an integer argument x = x( j), j ∈ Z. We denote the set
of all signals by C N . Two operations are introduced in C N in a natural manner—the
operation of addition of two signals and the operation of multiplication of a signal
by a complex number:
y = x1 + x2 ⇔ y( j) = x1 ( j) + x2 ( j), j ∈ Z;
y = c x ⇔ y( j) = c x( j), j ∈ Z.
N −1
c(k) δ N ( j − k) = 0 for j ∈ 0 : N − 1.
k=0
As it was mentioned, the left side of this equality equals to c( j), thus c( j) = 0 for
all j ∈ 0 : N − 1.
According to Lemma 2.1.1 any signal x can be expanded over the linearly inde-
pendent system (2.1.2). It means that the system (2.1.2) is a basis of the space C N .
Moreover, the dimension of C N is equal to N .
Lemma 2.1.2 Given a signal x, the following equality holds for all l ∈ Z:
N −1
N −1
x( j + l) = x( j). (2.1.3)
j=0 j=0
Proof Let l = pN + r , where p = l/N and r = l N (see Sect. 1.1). Using the
N -periodicity of a signal x and the fact that r ∈ 0 : N − 1, we obtain
N −1
N −1
−r −1
N N −1
x( j + l) = x( j + r ) = x( j + r ) + x( j + r − N )
j=0 j=0 j=0 j=N −r
N −1
r −1
N −1
= x( j ) + x( j ) = x( j).
j =r j =0 j=0
N −1
N −1
x(l − j) = x( j). (2.1.4)
j=0 j=0
2.1 Space of Signals 17
Indeed,
N −1
N −1
N −1
x(l − j) = x(l) + x l + (N − j) = x(l) + x(l + j )
j=0 j=1 j =1
N −1
N −1
= x( j + l) = x( j).
j =0 j=0
m−1
y( j) = x( j − pn), j ∈ Z.
p=0
We assert that y ∈ Cn .
Proof We need to verify that for any j and l from Z there holds the equality y( j +
ln) = y( j), or, equivalently,
m−1
m−1
x j − ( p − ln) = x( j − pn). (2.1.5)
p=0 p=0
It corresponds to (2.1.5).
2.1.4 We introduce the inner (scalar) product and the norm in C N :
N −1
x, y = x( j) y( j), x = x, x 1/2
.
j=0
δ N (· − k), δ N (· − l) = δ N (k − l).
18 2 Signal Transforms
N −1
N −1
δ N (· − k), δ N (· − l) = δ N ( j − k) δ N ( j − l) = xk ( j) δ N (l − j)
j=0 j=0
= xk (l) = δ N (l − k) = δ N (k − l).
Provided x = O, the inequality turns into an equality if and only if y = cx for some
c ∈ C.
z 2
= z, y − c x = z, y = y − c x, y
= y 2 − cx, y = y 2 − |x, y |2 / x 2 .
x 2
× y 2
− |x, y |2 = x 2
× z 2.
Hence follows both the inequality (2.1.6) and the condition of turning this inequality
into an equality. The lemma is proved.
y = x1 x2 ⇔ y( j) = x1 ( j) x2 ( j), j ∈ Z.
2.1.6 Along with a signal x we will consider signals x, Re x, Im x, |x| with values
x( j) = x( j), [Re x]( j) = Re x( j), [Im x]( j) = Im x( j), |x|( j) = |x( j)|. Note that
x x = |x|2 .
A signal x is called even if x(− j) = x( j) and odd if x(− j) = −x( j) for all
j ∈ Z. A signal x is called real if Im x = O and imaginary if Re x = O.
2.2.1 We take the N -th degree root of unity which we denote by ω N = exp(2πi/N ).
N −1
1 kj
ω = δ N ( j), j ∈ Z. (2.2.1)
N k=0 N
Proof The left side of (2.2.1) contains an N -periodic function, it follows from the
relation
k( j+l N ) kj kl kj
ωN = ω N ω NN = ω N for l ∈ Z
j
By putting z = ω N we obtain
N −1
1 kj
Nj
1 − ωN
ωN = j
= 0 = δ N ( j) for j ∈ 1 : N − 1.
N k=0 N (1 − ω N )
N −1
−k j
X (k) = x( j) ω N , k ∈ Z. (2.2.2)
j=0
N −1
1 kj
x( j) = X (k) ω N , j ∈ Z. (2.2.3)
N k=0
N −1
N −1 N −1
1 kj 1 −kl kj
X (k) ω N = x(l) ω N ωN
N k=0 N k=0 l=0
N −1
N −1 N −1
1 k( j−l)
= x(l) ωN = x(l) δ N ( j − l) = x( j).
l=0
N k=0 l=0
N −1
1
x( j) = X (k) u k ( j). (2.2.4)
N k=0
N −1
N −1
(k−l) j
u k , u l = u k ( j) u l ( j) = ωN = N δ N (k − l).
j=0 j=0
It is ascertained that the system (2.2.5) forms an orthogonal basis in the space C N .
This basis is called exponential.
The coefficients of the expansion (2.2.4) are determined uniquely. More to the
point, if
N −1
1
x( j) = a(l) u l ( j), j ∈ Z, (2.2.6)
N l=0
then necessarily a(k) = X (k) for all k ∈ 0 : N − 1. Indeed, let us multiply both parts
of equality (2.2.6) by u k ( j) scalarly. According to Lemma 2.2.2 we obtain
N −1
1
x, u k = a(l) u l , u k = a(k),
N l=0
thus
N −1
−k j
a(k) = x, u k = x( j) ω N = X (k).
j=0
N −1
1
δ N ( j) = u k ( j).
N k=0
We have the expansion of the unit pulse over the exponential basis. All the coefficients
in this expansion are equal to unity. By virtue of the uniqueness of such an expansion,
F N (δ N ) = 1I.
Proof
Necessity Let x be a real signal. We write
N −1
N −1
−k j −k j
X (−k) = x( j) ω N = x( j) ω N = X (k).
j=0 j=0
22 2 Signal Transforms
Hence it follows that X (−k) = X (k) for all k ∈ Z. We ascertained evenness of the
spectrum X .
Sufficiency By virtue of evenness of the spectrum X , Theorem 2.2.1, and the corol-
lary to Lemma 2.1.2 (for l = 0), we have
N −1 N −1
1 (−k) j 1 kj
x( j) = X − (−k) ω N = X (−k) ω N
N k=0 N k=0
N −1
1 kj
= X (k) ω N = x( j).
N k=0
Proof
Necessity By virtue of evenness of the signal x and the corollary to Lemma 2.1.2
(for l = 0) we get
N −1
N −1
−k(− j) −k j
X (k) = x − (− j) ω N = x(− j) ω N
j=0 j=0
N −1
−k j
= x( j) ω N = X (k).
j=0
Sufficiency We have
N −1 N −1
1 kj 1 kj
x(− j) = X (k) ω N = X (k) ω N = x( j).
N k=0 N k=0
2.2.5 Below we present two examples of DFT calculation. Note that it is sufficient
to define signals from C N by their values on the main period 0 : N − 1.
1 for j ∈ 0 : m − 1 and j ∈ N − m + 1 : N − 1;
x( j) =
0 for j ∈ m : N − m.
2.2 Discrete Fourier Transform 23
m−1 N −1
m−1
−k j k(N − j) kj
X (k) = ωN + ωN = ωN .
j=0 j=N −m+1 j=−(m−1)
n−1
−k j
2n−1
−k( j−n)−kn
n−1
−k j
X (k) = ωN − ωN = (1 − ω−kn
N ) ωN .
j=0 j=n j=0
1 1
=
1− ω−k
N 1 − cos N + i sin 2πk
2πk
N
1
=
2 sin πk
N
sin πkN
+ i cos πkN
sin πk − i cos πk 1 πk
= N N
= 1 − i cot . (2.2.7)
2 sin πk
N
2 N
x 2
= N −1 X 2
. (2.3.2)
revisit the Example 2.2.2 from the previous section. For the signal that was considered
there we have
x 2
= N = 2n,
n−1
(2k + 1)π 2
n−1
1
X 2
=4 1 − i cot =4 2 (2k+1)π
.
k=0
2n k=0
sin 2n
n−1
1
= n2.
k=0
sin2 (2k+1)π
2n
x( j) = j, j ∈ 0 : N − 1.
N −1
−k j
X (k) = j ωN .
j=0
N −1
−k j
( j + 1) ω N = X (k) + N δ N (k) = X (k).
j=0
N −1
N −1
−k j −k( j+1)
(j + 1) ω N = ωkN ( j + 1) ω N
j=0 j=0
N
−k j
= ωkN j ωN = ωkN X (k) + N .
j =1
We come to the equation X (k) = ωkN X (k) + N , from which, by virtue of (2.2.7),
it follows that
26 2 Signal Transforms
N ωkN N 1 πk
X (k) = = − = − N 1 − i cot ,
1 − ωkN 1 − ω−k
N
2 N
k ∈ 1 : N − 1.
N −1
(N − 1)N (2N − 1)
x 2
= j2 =
j=1
6
1 N −1
1 2 1
X 2
= N (N − 1)2 + N 2 .
4 4 k=1
sin2 πk
N
N −1
(N − 1)(2N − 1) 1 1 1
= (N − 1)2 + .
6 4 4 k=1 sin2 πk
N
N −1
1 N2 − 1
πk
= .
k=1
sin2 N 3
1 kj
m−1
h m ( j) = ω .
m k=0 N
m−1
x( j) = x(ln) h( j − ln), j ∈ Z. (2.4.1)
l=0
Proof By virtue of a DFT inversion formula and the theorem’s hypothesis we have
1 −(N −k) j
μ−1 N −1
kj
x( j) = X (k) ω N + X − (N − k) ω N
N k=0 k=N −μ+1
μ−1
1 kj
= X (k) ω N . (2.4.2)
N k=−μ+1
kj
We fix an integer j and put y(k) = ω N , k ∈ −μ + 1 : μ − 1. By extending y on Z
periodically with a period of m we obtain a signal y that belongs to Cm . Let us
calculate its DFT. According to Lemma 2.1.2,
m−1 μ−1
y(k − μ + 1) ωm−(k−μ+1)l = ω N ωm−k l
k j
Y (l) =
k=0 k =−μ+1
μ−1
k( j−ln)
= ωN = m h m ( j − ln).
k=−μ+1
1
m−1 m−1
y(k) = Y (l) ωm
lk
= h m ( j − ln) ωm
lk
, k ∈ Z.
m l=0 l=0
kj
m−1
ωN = h m ( j − ln) ωm
lk
, k ∈ −μ + 1 : μ − 1.
l=0
μ−1
m−1
1
x( j) = X (k) h m ( j − ln) ωm
lk
N k=−μ+1 l=0
⎧ ⎫
m−1 ⎨1 μ−1
⎬
= h m ( j − ln) X (k) ωk(ln)
⎩N N
⎭
l=0 k=−μ+1
m−1
= h m ( j − ln) x(ln).
l=0
28 2 Signal Transforms
μ−1
kj sin π j (2μ − 1)/N
ωN = . (2.4.4)
k=−μ+1
sin(π j/N )
2.4.2 The sampling theorem is related to the following interpolation problem: con-
struct a signal x ∈ C N that satisfies to the conditions
x(ln) = z(l), l ∈ 0 : m − 1,
(2.4.5)
X (k) = 0, k ∈ μ : N − μ,
m−1
x( j) = z(l) h m ( j − ln). (2.4.6)
l=0
Proof The conditions (2.4.5) are in fact a system of N linear equations with respect
to N variables x(0), x(1), …, x(N − 1). Let us consider a homogeneous system
x(ln) = 0, l ∈ 0 : m − 1,
X (k) = 0, k ∈ μ : N − μ.
According to the sampling theorem, it has only zero solution. Therefore the sys-
tem (2.4.5) is uniquely resolvable for any z(l).
Formula (2.4.6) follows from (2.4.1).
2.4.3 The interpolation formula (2.4.6) can be generalized to the case of an even m.
For m = 2μ we put
1
μ−1
kj
h m ( j) = cos(π j/n) + ωN .
m k=−μ+1
2.4 Sampling Theorem 29
m−1
x( j) = z(l) h m ( j − ln)
l=0
x(ln) = z(l), l ∈ 0 : m − 1.
1
μ−1
−1
h m (ln) = (−1)l + ωmkl + ωm(k+m)l .
m k=0 k=−μ+1
−1
m−1
ωm(k+m)l = ωmk l .
k=−μ+1 k =μ+1
μl
The summand (−1)l can be written in a form (−1)l = ωm . As a result we come to
the required formula
1 kl
m−1
h m (ln) = ω = δm (l).
m k=0 m
m−1
m−1
x(ln) = z(l ) h m (l − l )n = z(l ) δm (l − l ) = z(l).
l =0 l =0
μ−1
kj sin π j (m − 1)/N
ωN =
k=−μ+1
sin(π j/N )
sin(π j/n) cos(π j/N ) − cos(π j/n) sin(π j/N )
=
sin(π j/N )
= sin(π j/n) cot(π j/N ) − cos(π j/n).
N −1
u( j) = x(k) y( j − k), j ∈Z
k=0
F N (x ∗ y) = X Y (2.5.1)
x ∗ y = F N−1 (X Y ). (2.5.2)
Proof The equality x ∗ y = y ∗ x follows directly from (2.5.2). Let us verify the
associativity. Take three signals x1 , x2 , x3 and denote their spectra by X 1 , X 2 , X 3 .
Relying on (2.5.1) and (2.5.2) we obtain
(x1 ∗ x2 ) ∗ x3 = F N−1 F N (x1 ∗ x2 ) X 3 = F N−1 (X 1 X 2 ) X 3
= F N−1 X 1 (X 2 X 3 ) = F N−1 X 1 F N (x2 ∗ x3 ) = x1 ∗ (x2 ∗ x3 ).
N −1
[x ∗ δ N ]( j) = x(k) δ N ( j − k) = x( j).
k=0
x ∗ y = δN . (2.5.3)
It exists if and only if each component of the spectrum X is nonzero. In that case
y = F N−1 (X −1 ), where X −1 (k) = [X (k)]−1 . Let us verify this.
Applying an operation F N to both sides of (2.5.3) we get an equation X Y = 1I
with respect to Y . This equation is equivalent to (2.5.3). Such a method is called a
transition into a spectral domain. The latter equation is resolvable if and only if each
component of the spectrum X is nonzero. The solution is written explicitly in a form
Y = X −1 . The inversion formula yields y = F N−1 (X −1 ). This very signal is inverse
to x.
for any x1 , x2 from C N and any c1 , c2 from C. The simplest example of a linear
transform is a shift operator P that maps a signal x to a signal x = P(x) with
samples x ( j) = x( j − 1).
A transform L : C N → C N is referred to as stationary if L P(x) = P L(x)
for all x ∈ C N . It follows from the definition that
L Pk (x) = Pk L(x) , k = 0, 1, . . .
Proof
Necessity Taking into account that Pk (x) = x(· − k) we rewrite formula (2.1.1) in
a form
N −1
x= x(k) Pk (δ N ).
k=0
N −1
N −1
k
L(x) = x(k) L P (δ N ) = x(k) Pk L(δ N ) .
k=0 k=0
N −1
L(x) = x(k) Pk (h) = x ∗ h.
k=0
N −1
L(x) = h ∗ x = h(k) Pk (x).
k=0
Now we write
N −1
P L(x) = h(k) Pk+1 (x) = L P(x) .
k=0
r
r
h r ( j) = (−1)r −l δ N ( j + l). (2.5.5)
l=0
l
r N −1
r r −l
x( j) =
r
(−1) x(k) δ N ( j + l − k)
l=0
l k=0
N −1
r
r
= x(k) (−1)r −l δ N ( j − k) + l
k=0 l=0
l
N −1
= x(k) h r ( j − k) = [x ∗ h r ]( j),
k=0
as it was to be ascertained.
Thus, the operator r : C N → C N is a filter with an impulse response h r of a
form (2.5.5). It is obvious that h r = r (δ N ).
kj
2.5.5 We take a filter L(x) = x ∗ h and denote H = F N (h). Recall that u k ( j) = ω N .
L(u k ) = H (k) u k , k ∈ 0 : N − 1.
Proof We have
N −1
[L(u k )]( j) = [h ∗ u k ]( j) = h(l) u k ( j − l)
l=0
N −1
N −1
h(l) ω−kl
k( j−l) kj
= h(l) ω N = ωN N = H (k) u k ( j).
l=0 l=0
Theorem 2.5.4 states that exponential functions u k form a complete set of eigen-
functions of any filter L(x) = x ∗ h; in addition, an eigenfunction u k corresponds to
an eigenvalue H (k) = [F N (h)](k), k ∈ 0 : N − 1.
The signal H is referred to as a frequency response of the filter L.
34 2 Signal Transforms
N −1
Rx y ( j) = x(k) y(k − j), j ∈ Z,
k=0
F N (Rx y ) = X Y , (2.6.2)
N −1
N −1
−k j k(− j)
Y1 (k) = y1 ( j) ω N = y(− j) ω N
j=0 j=0
N −1
kj
= y( j) ω N = Y (k).
j=0
F N (Rx x ) = X X = |X |2 . (2.6.3)
k=0 k=0
N −1
= |x(k)|2 = Rx x (0).
k=0
2.6 Cyclic Correlation 35
N −1
2.6.2 The orthonormal basis {δ N (· − k)}k=0 in a space C N consists of shifts of the
unit pulse. Are there any other signals whose shifts form orthonormal bases? This
question can be answered positively.
N −1
Lemma 2.6.1 Shifts {x(· − k)}k=0 of a signal x form an orthonormal basis in a
space C N if and only if Rx x = δ N .
Proof Since
N −1
Rx x (l) = x( j) x( j − l) = x, x(· − l) ,
j=0
N −1
x(· − k), x(· − k ) = x( j − k) x ( j − k) − (k − k)
j=0
N −1
= x( j ) x j − k − k N = x, x(· − k − k N) .
j =0
Equivalence of the relations (2.6.4) and (2.6.5) guarantees the validity of the lemma’s
statement.
N −1
Theorem 2.6.2 Shifts {x(· − k)}k=0 of a signal x form an orthonormal basis in a
space C N if and only if |X (k)| = 1 for k ∈ 0 : N − 1.
Proof
N −1
Necessity If {x(· − k)}k=0 is an orthonormal basis then, by virtue of Lemma 2.6.1,
Rx x = δ N . Hence F N (Rx x ) = 1I. On the strength of (2.6.3) we have |X |2 = 1I, thus
|X (k)| = 1 for k ∈ 0 : N − 1.
N −1
x= c(k) y(· − k) (2.6.6)
k=0
and calculate the coefficients c(k). In order to do that we multiply both sides of (2.6.6)
scalarly by y(· − l), l ∈ 0 : N − 1. We gain x, y(· − l) = c(l) or
N −1
c(l) = x( j) y( j − l) = Rx y (l).
j=0
f (x) := r (x) 2
→ min,
(2.7.1)
x(ln) = z(l), l ∈ 0 : m − 1; x ∈ C N .
Here we need to construct the possibly smoothest signal that takes given values z(l)
in the nodes ln. The smoothness is characterized by the squared norm of the finite
difference of the r -th order. Most commonly r = 2.
Let us perform a change of variables
N −1
−k j
X (k) = x( j) ω N , k ∈ 0 : N − 1,
j=0
and rewrite the Problem (2.7.1) in new variables X (k). We start with a goal function.
As it was mentioned in par. 2.5.4, the equality r (x) = x ∗ h r holds, where h r is
determined by formula (2.5.5): h r = r (δ N ). Using the Parseval equality (2.3.2) and
the Convolution Theorem 2.5.1 we obtain
r (x) 2
= x ∗ h r 2 = N −1 F N (x ∗ h r ) 2
= N −1 X Hr 2
N −1
= N −1 |X (k) Hr (k)|2 .
k=0
2.7 Optimal Interpolation 37
Here
N −1
−k j
Hr (k) = h r ( j) ω N
j=0
r
N −1
r −k( j+l)+kl
= (−1)r −l δ N ( j + l) ω N
l=0
l j=0
r N −1
r −k j
= (−1)r −l ωkl
N δ N ( j) ω N
l=0
l j=0
r
r
= (−1)r −l N = (ω N − 1) .
ωkl k r
l=0
l
Denote
2π k 2 2π k 2π k πk
αk := |ωkN − 1|2 = cos − 1 + sin2 = 2 1 − cos = 4 sin2 .
N N N N
N −1
1 r
r (x) 2
= α |X (k)|2 . (2.7.2)
N k=0 k
1
n−1
X ( p + qm) = Z ( p), p ∈ 0 : m − 1, (2.7.3)
n q=0
38 2 Signal Transforms
N −1
1 r
α |X (k)|2 → min,
N k=0 k
(2.7.4)
1
n−1
X ( p + qm) = Z ( p), p ∈ 0 : m − 1.
n q=0
1 r
n−1
α |X ( p + qm)|2 → min,
N q=0 p+qm
(2.7.5)
n−1
X ( p + qm) = n Z ( p).
q=0
1 r
n−1
α |X (qm)|2 → min,
N q=1 qm
n−1
X (qm) = n Z (0).
q=0
n−1 −1
Denoting λ p = n α −r
p+qm we obtain
q=0
1 r
n−1
1
α |X ( p + qm)|2 ≥ λ p |Z ( p)|2 .
N q=0 p+qm m
or X ( p + qm) = c p α −r
p+qm , with some c p ∈ C for all q ∈ 0 : n − 1. The variables
X ( p + qm) must satisfy to the constraints of the Problem (2.7.5), so it is necessary
that
n−1
cp α −r
p+qm = n Z ( p).
q=0
Hence c p = λ p Z ( p).
It is affirmed that for every p ∈ 1 : m − 1 the unique solution of the Prob-
lem (2.7.5) is the sequence
X ∗ ( p + qm) = λ p Z ( p) α −r
p+qm , q ∈ 0 : n − 1. (2.7.8)
Moreover, the minimal value of the goal function equals to m −1 λ p |Z ( p)|2 . Let us
note that λ p is a harmonic mean of the numbers
Formulae (2.7.6) and (2.7.8) define X ∗ on the whole main period 0 : N − 1. The
inversion formula yields the unique solution of the Problem (2.7.1):
N −1
1 kj
x∗ ( j) = X ∗ (k) ω N , j ∈ Z. (2.7.9)
N k=0
The minimal value of the goal function of the Problem (2.7.1) is a total of the minimal
values of the goal functions of the Problems (2.7.5) for p = 0, 1, . . . , m − 1, so that
1
m−1
f (x∗ ) = λ p |Z ( p)|2 .
m p=1
40 2 Signal Transforms
2.7.3 Let us modify formula (2.7.9) to the form more convenient for calculations.
Represent indices k, j ∈ 0 : N − 1 in a way k = p + qm, j = s + ln, where p, l ∈
0 : m − 1 and q, s ∈ 0 : n − 1. In accordance with (2.7.6) and (2.7.8) we write
1
m−1 n−1
( p+qm)(s+ln)
x∗ (s + ln) = X ∗ ( p + qm) ω N
N p=0 q=0
⎡⎛ ⎞ ⎤
1
m−1
n−1
= ⎣⎝ 1 X ∗ ( p + qm) ωnqs ⎠ ω N ⎦ ωmpl
ps
m p=0 n q=0
⎡ ⎛ ⎞ ⎤
1 1
m−1
n−1
= Z (0) + ⎣λ p Z ( p) ⎝ 1 α −r ωqs ⎠ ω N ⎦ ωmpl .
ps
m m p=1 n q=0 p+qm n
(1) we form two arrays of constants that depend only on m, n and r : one-dimensional
⎛ ⎞−1
n−1
λp = n ⎝ α −r
p+qm
⎠ , p ∈ 1 : m − 1,
q=0
s ∈ 1 : n − 1, p ∈ 1 : m − 1;
B[s, 0] = Z (0), s ∈ 1 : n − 1,
B[s, p] = "
Z ( p) D[s, p], s ∈ 1 : n − 1, p ∈ 1 : m − 1;
(4) applying the inverse DFT of order m to all n − 1 rows of the matrix B we obtain
a solution of the Problem (2.7.1):
x∗ (ln) = z(l), l ∈ 0 : m − 1,
2.7 Optimal Interpolation 41
1
m−1
x∗ (s + ln) = B[s, p] ωmpl ,
m p=0
l ∈ 0 : m − 1, s ∈ 1 : n − 1.
2.8.1 We will proceed with a more detailed analysis of linear stationary operators
(a. k. a. filters).
L(x) := x ∗ h = Rx x . (2.8.1)
A matched filter exists. For instance, one may consider h( j) = x(− j), j ∈ Z. In
this case
N −1
[x ∗ h]( j) = x(k) x(k − j) = Rx x ( j).
k=0
Theorem 2.8.1 Let x ∈ C N be a signal with all spectral components being nonzero.
Then the impulse response h of a matched with the signal x filter is determined
uniquely.
N −1
1 kj
h( j) = X (k) ω N , j ∈ Z.
N k=0
Now we have
N −1
1 kj
h(− j) = X (k) ω N = x( j),
N k=0
N −1
1 kj
h( j) = H (k) ω N , j ∈ Z,
N k=0
gives the analytical representation of impulse responses of all filters matched with
the signal x. Indeed, signals H of a form (2.8.2), and solely such signals, satisfy to
a condition X (H − X ) = O which is equivalent to X H = F N (Rx x ). Applying the
operator F N−1 to both sides of the latter equality we gain x ∗ h = Rx x .
2.8.2 We denote by R "x x = Rx x (0) −1 Rx x a normalized auto-correlation of a
nonzero signal x. If it holds
"x x = δ N ,
R (2.8.3)
where ck are nonzero complex coefficients whose moduli are pairwise equal.
√
Only the sufficiency
√ needs proof. Let |c(k)| ≡ A > 0. Since c(k) = X (k), it
holds |X (k)| ≡ A. The Parseval equality (2.3.2) yields
N −1
N −1
Rx x (0) = |x( j)|2 = N −1 |X (k)|2 = A, (2.8.4)
j=0 k=0
√
so that |X (k)| ≡ Rx x (0). The latter identity is equivalent to (2.8.3).
2.8 Optimal Signal–Filter Pairs 43
2.8.3 A value
N −1
E(x) = |x( j)|2
j=0
where X = F N (x).
Given a nonzero signal x, we associate with it a side-lobe blanking filter (SLB
filter) whose impulse response h is determined by a condition
x ∗ h = E(x) δ N . (2.8.6)
N −1
h( j) = N −1 E(x) [X (k)]−1 ω N ,
kj
j ∈ Z.
k=0
2.8.4 Let us consider a set of signals with a given energy. We will examine an
extremal problem of selecting from this set a signal whose SLB filter has an impulse
response with the smallest energy. This problem can be formalized as follows:
x ∗ h = E(x) δ N ; E(x) = A; x, h ∈ C N .
Here A is a fixed positive number. We rewrite the problem (P) in a more compact
form:
γ := A−1 E(h) → min,
(2.8.7)
x ∗ h = A δ N ; E(x) = A; x, h ∈ C N .
Theorem 2.8.3 The minimal value of γ in the Problem (2.8.7) equals to unity. It is
achieved on any delta-correlated signal x∗ with E(x∗ ) = A. Moreover, the optimal
SLB filter h ∗ is a matched filter.
Proof Let us transfer the Problem (2.8.7) into a spectral domain. According to (2.8.5)
we gain
44 2 Signal Transforms
N −1
A
γ := |X (k)|−2 → min,
N k=0
N −1
1
|X (k)|2 = 1.
AN k=0
The left side of the inequality (2.8.8) consists of the value γ −1 , which is not greater
than unity. So γ is not less than unity. The equality to#unity is achieved if and only
N −1
if all the values ak are equal√to each other. Since k=0 k = AN it means that
a
ak ≡ A, therefore |X (k)| ≡ A. Taking into account Theorem 2.8.2 we come to
the following conclusion: there holds the inequality γ ≥ 1; the equality γ = 1 is
achieved on all delta-correlated signals x∗ with E(x∗ ) = A and solely on them.
As to the optimal SLB filters for the indicated signals x∗ , these are necessarily the
matched filters. Indeed, as long as x∗ is delta-correlated, we have
On the other hand, the constraints of the problem (P) yield x∗ ∗ h ∗ = E(x∗ ) δ N .
Therefore x∗ ∗ h ∗ = Rx∗ x∗ , which in accordance with (2.8.1) means that h ∗ is an
impulse response of a matched with x∗ filter. The theorem is proved.
2.9.1 A finite set of signals x from C N with the same energy will be referred to as
an ensemble of signals and will be denoted as Q. For the definiteness sake we will
assume that
Theorem 2.9.1 Let Q be an ensemble consisting of m signals, and let the condition
(2.9.1) hold. Then
2
Rc N − 1 Ra 2
N + ≥ 1. (2.9.2)
A m−1 A
The essential role in the proof of the above theorem is played by the following
assertion.
Lemma 2.9.1 For arbitrary signals x and y from C N there holds an equality
N −1
N −1
|Rx y ( j)|2 = Rx x ( j) R yy ( j). (2.9.3)
j=0 j=0
N −1
N −1
1 2
|Rx y ( j)|2 = [F N (Rx y )](k)
j=0
N k=0
N −1 N −1
1 1
= |X (k) Y (k)|2 = |X (k)|2 |Y (k)|2 .
N k=0 N k=0
N −1
N −1
1
Rx x ( j) R yy ( j) = [F N (Rx x )](k) [F N (R yy )](k)
j=0
N k=0
N −1
1
= |X (k)|2 |Y (k)|2 .
N k=0
The right sides of the presented relations are equal, therefore the left sides are equal
as well. The lemma is proved.
N −1
2 N −1 ⎛ ⎞
Rx x ( j) = Rx x ( j) ⎝ R yy ( j)⎠
j=0 x∈Q j=0 x∈Q y∈Q
N −1
= Rx x ( j) R yy ( j)
x∈Q y∈Q j=0
N −1
= |Rx y ( j)|2
x∈Q y∈Q j=0
N −1
N −1
= |Rx y ( j)|2 + |Rx x ( j)|2 . (2.9.4)
x∈Q y∈Q j=0 x∈Q j=0
y=x
Let us estimate the left and the right sides of this equality. Since Rx x (0) = E(x) = A
for all x ∈ Q, we have
N −1
2 2
Rx x ( j) ≥ Rx x (0) = m 2 A2 .
j=0 x∈Q x∈Q
N −1
|Rx y ( j)|2 ≤ Rc2 N (m − 1) m,
x∈Q y∈Q j=0
y=x
N −1
N −1
|Rx x ( j)| =
2
|Rx x ( j)|2 + |Rx x (0)|2
x∈Q j=0 x∈Q j=1 x∈Q
≤ Ra2 (N − 1) m + m A2 .
m (m − 1) A2 ≤ m (m − 1) N Rc2 + m (N − 1) Ra2 .
N −1
N −1
R yx (k) = y( j) x( j − k) = x( j) y( j + k) = Rx y (−k),
j=0 j=0
We come to the following conclusion: the signals x and y are non-correlated when
the signal x is orthogonal to all shifts of the signal y and the signal y is orthogonal
to all shifts of the signal x.
Let Qc be an ensemble consisting of pairwise non-correlated signals. In this case
Rc = 0. Furthermore, for such ensembles the equality (2.9.4) gets the form
2
N −1
N −1
R ( j) = |Rx x ( j)|2 . (2.9.5)
x x
j=0 x∈Qc x∈Qc j=0
Since non-correlated signals are at least orthogonal, the amount of signals in Qc does
not exceed N . We will show that in a space C N there exist ensembles containing
exactly N pairwise non-correlated signals.
−1 pj
Put Qc = {u p } Np=0 , where u p ( j) = ω N . This is an ensemble of signals with A =
N because E(u p ) = N for all p ∈ 0 : N − 1. Further,
N −1
N −1
j ( p− p )+ p k pk
Ru p u p (k) = u p ( j) u p ( j − k) = ωN = N ω N δ N ( p − p ).
j=0 j=0
As it was mentioned in par. 2.6.1, valid are the relations |Rx x (k)| ≤ Rx x (0) = A.
Assume that for some x ∈ Qc and k ∈ 1 : N − 1 there holds |Rx x (k)| < A. Then
N −1
|Rx x (k)|2 < N 2 A2 .
x∈Qc k=0
√
|Rx y ( j)| ≡ A/ N for all x, y ∈ Qa , x = y. (2.9.8)
k( j 2 + pj)
akp ( j) = ω N , k, p ∈ 0 : N − 1, gcd (k, N ) = 1. (2.9.9)
Lemma 2.9.2 Provided that N is odd, the signals akp are delta-correlated.
N −1
N −1
Rx x ( j) = x(l) x(l − j) = x(l + j) x(l)
l=0 l=0
N −1
k(l 2 +2l j+ j 2 + pl+ pj)−k(l 2 + pl)
= ωN
l=0
N −1
k( j 2 + pj) 2kl j
= ωN ωN = N x( j) δ N (2k j). (2.9.10)
l=0
2.9 Ensembles of Signals 49
The number N is relatively prime with k (by the proviso) and is relatively prime
with 2 (due to oddity), therefore N is relatively prime with the product 2k. Since
gcd (2k, N ) = 1, the mapping j → 2k j N is a permutation of a set 0 : N − 1 that
maps zero to zero. This fact, along with the definition of the unit pulse δ N , guarantees
the validity of the equality (2.9.12) and, as a consequence, of (2.9.11).
On the basis of (2.9.10) and (2.9.11) we gain
k( j 2 + pj) s( j 2 + pj)
x( j) = ω N , y( j) = ω N .
Proof We have
N −1
N −1
|Rx y ( j)|2 = x(l + j) y(l) x(q + j) y(q)
l=0 q=0
N −1
N −1
k(l 2 +2l j+ j 2 + pl+ pj)−s(l 2 + pl) −k(q 2 +2q j+ j 2 + pq+ pj)+s(q 2 + pq)
= ωN ωN .
l=0 q=0
N −1
N −1
(k−s)(l−q)(l+q+ p)+2k(l−q) j
|Rx y ( j)| =
2
ωN
q=0 l=0
N −1
N −1
(k−s)l(l+2q+ p)+2kl j
= ωN
q=0 l=0
N −1
N −1
(k−s)l(l+ p)+2kl j 2(k−s)lq
= ωN ωN
l=0 q=0
N −1
(k−s)l(l+ p)+2kl j
=N ωN δ N 2(k − s)l N . (2.9.14)
l=0
The number N is relatively prime with k − s (by the hypothesis) and is relatively
prime with 2 (due to oddity), therefore gcd (2(k − s), N ) = 1. In this case the map-
ping l → 2(k − s)l N is a permutation of a set 0 : N − 1 that maps zero to zero.
Using this fact and the definition of the unit pulse δ N we conclude that the sum in the
right side of (2.9.14) contains only one nonzero term that corresponds to l = 0. We
come to the identity |Rx y ( j)|2 ≡ N , which is equivalent to (2.9.13). The theorem is
proved.
Signals akp of a form (2.9.9) have the same energy E(akp ) = N . According to
Lemma 2.9.2, provided that N is odd, any set of these signals constitutes an ensemble
Qa with A = N and Ra = 0. Assume that signals akp from Qa satisfy to two additional
conditions:
– they have the same p;
– for any pair of signals akp , asp from Qa with k > s, the difference k − s is relatively
prime with N .
Then, by virtue of Theorem 2.9.3, the following identity is valid:
√
|Rx y ( j)| ≡ N for all x, y ∈ Qa , x = y.
√ √
It coincides with (2.9.8) since in this case A/ N = N .
Note that when N is prime, the following signals satisfy to all conditions formu-
lated above: a1, p , a2, p , . . . , a N −1, p . The amount of these signals is N − 1.
2.10 Uncertainty Principle 51
supp x = { j ∈ 0 : N − 1 | x( j) = 0}.
We denote by |supp x| the number of indices contained in a support. Along with the
support of a signal x we will consider the support of its spectrum X .
Theorem 2.10.1 (Uncertainty Principle) Given any nonzero signal x ∈ C N , the fol-
lowing inequality holds:
|supp x| × |supp X| ≥ N. (2.10.1)
The point of the inequality (2.10.1) is that the support of a nonzero signal and the
support of its spectrum cannot both be small.
2.10.2 We precede the proof of Theorem 2.10.1 by an auxiliary statement.
Lemma 2.10.1 Let m := |supp x| > 0. Then for any q ∈ 0 : N − 1 the sequence
X (q + 1), X (q + 2), . . . , X (q + m)
m
−(q+l) jk
m
q+l
X (q + l) = x( jk ) ω N = zk x( jk ), l ∈ 1 : m, (2.10.2)
k=1 k=1
−j
where z k = ω N k . It is clear that z k are pairwise different points on a unit
circle of a complex plane. We denote a = x( j1 ), . . . , x( jm ) , b = X (q + 1), . . . ,
q+l m
X (q + m) , Z = {z k }l,k=1 , and rewrite the equality (2.10.2) in a form b = Z a. It
is sufficient to show that the matrix Z is invertible. In this case the condition a = O
will imply b = O.
We have ⎡ q+1 q+1 ⎤
z1 · · · zm
Z = ⎣ ··· ··· ···· ⎦.
q+m q+m
z1 · · · zm
where in the right side we can see a nonzero Vandermonde determinant. Therefore,
= 0. This guarantees invertibility of the matrix Z .
The lemma is proved.
2.10.3 Now we turn to proving Theorem 2.10.1. Let supp X = {k1 , . . . , kn }, where
0 ≤ k1 < k2 < · · · < kn < N . We fix s ∈ 1 : n − 1. According to the lemma, the
sequence X (ks + 1), . . . , X (ks + m) contains a nonzero element. But the first
nonzero element after X (ks ) is X (ks+1 ). Therefore,
ks+1 ≤ ks + m, s ∈ 1 : n − 1. (2.10.3)
Further, the sequence X (kn + 1), . . . , X (kn + m) also contains a nonzero element,
and by virtue of N -periodicity of a spectrum the first nonzero element after X (kn )
is X (k1 + N ). Therefore,
k1 + N ≤ kn + m. (2.10.4)
2.10.4 The signal x = δ N turns the inequality (2.10.1) into an equality. In fact, we
can describe the whole set of signals that turn the inequality (2.10.1) into an equality.
and let the equality mn = N holds for this signal. Then necessarily
qj
x( j) = c ω N δn ( j − p), (2.10.6)
where q ∈ 0 : m − 1, p ∈ 0 : n − 1, and c ∈ C, c = 0.
kn + m ≤ kn−1 + 2m ≤ · · · ≤ k1 + nm = k1 + N ≤ kn + m.
2.10 Uncertainty Principle 53
Hence
ks = k1 + (s − 1)m, s ∈ 1 : n. (2.10.7)
1 1
n n−1
k j (q+sm) j
x( j) = X (ks ) ω Ns = X (q + sm) ω N
N s=1 N s=0
1 1
n−1
qj
= ωN X (q + sm) ωns j .
n s=0 m
Denote H (s) = 1
m
X (q + sm), s ∈ 0 : n − 1, and h = Fn−1 (H ). Then
qj
x( j) = ω N h( j). (2.10.8)
h( j) = c δn ( j − p), (2.10.9)
where c = h( p). The conclusion of the theorem will follow from this equality and
from (2.10.8).
Again, by virtue of n-periodicity of the signal h and (2.10.8) we have
q( p+sn)
x( p + sn) = ω N h( p) = 0, s ∈ 0 : m − 1.
That is, we pointed out m indices from the main period where the samples of the
signal x are not zero. By the theorem hypothesis |supp x| = m, so on other indices
from 0 : N − 1 the signal x is zero. In particular, for j ∈ 0 : n − 1, j = p, there will
qj
be 0 = x( j) = ω N h( j). We derived that h( j) = 0 for all j ∈ 0 : n − 1, j = p. It
means that the signal h can be represented in a form (2.10.9).
The theorem is proved.
2.10.5 To make the picture complete, let us find the spectrum of a signal x of a
form (2.10.6). We write
N −1
m−1
qj −k j (q−k)( p+sn)
X (k) = c ω N δn ( j − p) ω N =c ωN
j=0 s=0
(q−k) p
m−1
(q−k) p
= c ωN ωm(q−k)s = m c ω N δm (k − q).
s=0
54 2 Signal Transforms
Therefore, the signal x of a form (2.10.6) is not equal to zero on the indices j =
p + sn, s ∈ 0 : m − 1, while its spectrum X is not equal to zero on the indices
k = q + tm, t ∈ 0 : n − 1.
Exercises
2.1 Prove that a signal x ∈ C N is even if and only if the value x(0) is real and
x(N − j) = x( j) holds for j ∈ 1 : N − 1.
2.2 Prove that a signal x ∈ C N is odd if and only if Re x(0) = 0 and x(N − j) =
−x( j) holds for j ∈ 1 : N − 1.
2.3 Prove that any signal can be uniquely represented as a sum of an even and an
odd signal.
2.7 Let N = mn. Prove that for any signal x ∈ C N there holds
m−1 N −1
x(s + ln) = x( j) δn (s − j) for all s ∈ Z.
l=0 j=0
Numbers k and N in the Exercises 2.8 and 2.9 are natural relative primes.
2.10 Prove that a signal x ∈ C N is odd if and only if its spectrum X is pure imagi-
nary.
Exercises 55
2.11 Let a and b be two real signals from C N . We aggregate a complex signal
x = a + ib. Prove that the spectra A, B, and X of these signals satisfy to the following
relations
A(k) = 21 [X (k) + X (N − k)],
for all k ∈ Z.
2.12 Let N be an even number. We associate a real signal x with a complex signal
xa with a spectrum
⎧
⎪ X (k) for k = 0 and k = N /2,
⎨
X a (k) = 2X (k) for k ∈ 1 : N /2 − 1,
⎪
⎩
0 for k ∈ N /2 + 1 : N − 1.
Prove that Re xa = x.
2.13 Formulate and solve the problem analogous to the previous one for an odd N .
2.14 Prove that for an even N and for k ∈ 0 : N /2 − 1 there hold
N /2−1
% & −k j
X (k) = x(2 j) + ω−k
N x(2 j + 1) ω N /2 ,
j=0
N /2−1
% & −k j
X (N /2 + k) = x(2 j) − ω−k
N x(2 j + 1) ω N /2 .
j=0
N /2−1
% & −k j
X (2k) = x( j) + x(N /2 + j) ω N /2 ,
j=0
N /2−1
% & − j −k j
X (2k + 1) = x( j) − x(N /2 + j) ω N ω N /2 .
j=0
In the Exercise 2.16 through 2.19 it is required to calculate the Fourier spectrum
of given signals.
πj
2.16 x( j) = sin , j ∈ 0 : N − 1.
N
2.17 x( j) = (−1) j , j ∈ 0 : N − 1. Consider the cases of N = 2n and N = 2n +
1 separately.
56 2 Signal Transforms
j for j ∈ 0 : n,
2.18 x( j) =
j − N for j ∈ n+1 : N −1 (N = 2n + 1).
j for j ∈ 0 : n,
2.19 x( j) =
N − j for j ∈ n + 1 : N − 1 (N = 2n).
j2
2.20 Let x( j) = ω N . Find the amplitude spectrum |X | of the signal x.
The transforms presented in the Exercises 2.24 and 2.25 are referred to as pro-
longations of a signal.
x( j/n) if j n = 0,
2.26 xn ( j) =
0 for others j ∈ Z (xn ∈ Cn N ).
2.27 xn ( j) = x j/n (xn ∈ Cn N ).
The transforms presented in the Exercises 2.26 and 2.27 are referred to as stretches
of a signal.
m−1
2.29 yn ( j) = x( j + pn) for N = mn (yn ∈ Cn ).
p=0
m−1
2.30 yn ( j) = x( p + jm) for N = mn (yn ∈ Cn ).
p=0
Exercises 57
m−1
2.31 y( j) = x p + j/mm for N = mn (y ∈ C N ).
p=0
F N4 (x) = N 2 x.
2.45 A signal x ∈ C N is called binary if it takes the values +1 and −1 only. Prove
that there are no delta-correlated signals among binary ones if N = 4 p 2 , p being a
natural number.
2.46 The Exercise 2.26 introduced a signal xn ∈ Cn N that was a stretch of a signal
x ∈ C N . What is the relation between auto-correlations of these signals?
2.47 Take four signals x, y, w, and z, and form four new signals u 1 = Rx y , v1 =
Rwz , u 2 = Rxw , and v2 = R yz . Prove that Ru 1 v1 = Ru 2 v2 .
2.48 Let x and y be non-correlated signals. Prove that signals Rxw and R yz are also
non-correlated regardless of w and z.
2.49 We remind that the signals from a basis of shifts of a unit pulse are pairwise
orthogonal. Prove that they are pairwise correlated.
N −1
2.50 Prove that a system of shifts {x( j − k)}k=0 is linearly independent on Z if
and only if all the components of the spectrum X are nonzero.
N −1 N −1
2.51 Systems of shifts {x(· − k)}k=0 and {y(· − k)}k=0 are called biorthogonal if
there holds x(· − k), y(· − k ) = δ N (k − k ). Prove that the criterion of biorthog-
onality is satisfying to the condition Rx y = δ N .
N −1
2.52 Let a system of shifts {x(· − k)}k=0 be linearly independent on Z. Prove that
N −1
there exist the unique signal y ∈ C N such that the systems of shifts {x(· − k)}k=0
N −1
and {y(· − k)}k=0 are biorthogonal.
2.53 Let x ∈ C N be a nonzero signal. A value
−2 x( j − 1) + c x( j) = g( j), j ∈ Z,
n
T (t) = a(k) exp(2πikt)
k=−n
Comments
In this chapter, we introduce the basic concepts of the discrete harmonic analysis
such as discrete Fourier transform, cyclic convolution, and cyclic correlation. The
peculiarity of the presentation is that we consider a signal as an element of the
functional space C N .
We systematically use the N -periodic unit pulse δ N . The expansion (2.1.1) of
an arbitrary signal over the shifts of the unit pulse corresponds to the expansion
of a vector over the unitary vectors. Lemma 2.1.1 is elementary; however, it lets
us easily prove Theorem 2.5.3 about general form of a linear stationary operator.
Theorem 2.6.2 is a generalization of Lemma 2.1.1. It makes it clear when shifts of a
signal form an orthonormal basis in the space C N .
A solution of the optimal interpolation problem is obtained in [3]. More sophis-
ticated question of this solution’s behavior when r → ∞ is investigated in the same
paper. A similar approach is used in [2] for solving the problem of discrete periodic
data smoothing.
The problem of the optimal signal–filter pair was studied in [11]. In our book we
revise all the notions needed for the problem’s setting and give its simple solution.
A generalization of these results is presented in the paper [38].
The sections on ensembles of signals and the uncertainty principle are written on
the basis of the survey [45] and the paper [9], respectively. The point of the uncertainty
principle is that the number of indices comprising the support of a signal and the
number of indices comprising the support of its spectrum cannot be simultaneously
small. The more localized is a signal in time domain, the more dispersed is its
frequency spectrum.
Additional exercises are proposed to be solved by the reader. They are intended to
help in mastering the discrete harmonic analysis techniques. These exercises intro-
duce, in particular, such popular signal transforms as prolongation, stretching, and
subsampling. Some exercises prepare the reader for the further theory development.
These are, first of all, the Exercises 2.14 and 2.15. Special signals are considered.
We attract the reader’s attention to Frank signal (Exercise 2.43). Detailed studying
and generalization of this signal is undertaken in the paper [26].
Chapter 3
Spline Subspaces
N −1
1 k
(ω − 1)−r ω N ,
kj
br ( j) = j ∈ Z, (3.1.1)
N k=1 N
N −1
br ( j) = 0. (3.1.2)
j=0
For r = 0 we have
N −1
1 kj 1
b0 ( j) = ω N = δ N ( j) − . (3.1.3)
N k=1 N
Further,
N −1
1 k k(r − j)
br (r − j) = (ω − 1)−r ω N
N k=1 N
N −1 N −1
1 −k −r −k j 1 (N −k) j
= (1 − ω N ) ω N = (1 − ω NN −k )−r ω N
N k=1 N k=1
N −1
1
(1 − ωkN )−r ω N = (−1)r br ( j).
kj
=
N k=1
ω−k k 2 −k 2k k k −k
N (ω N − 1) = ω N (ω N − 2 ω N + 1) = ω N − 2 + ω N
2π k πk
= −2 1 − cos = −4 sin2 . (3.1.7)
N N
Taking into account Lemma 2.1.2 we gain
N −1
N −1
−k( j+r )+kr −k j
b2r ( j + r ) ωN = ωkr
N b2r ( j) ω N
j=0 j=0
2 −r
= ωkr
N (ω N − 1)
k −2r
= ω−k N (ω N − 1)
k
π k −2r
= (−1)r 2 sin .
N
The lemma is proved.
3.1 Periodic Bernoulli Functions 63
3.1.2 Shifts of a Bernoulli function can be used for expansion of arbitrary signals.
N −1
x( j) = c + r x(k) br ( j − k), j ∈ Z, (3.1.8)
k=0
N −1
where c = N −1 j=0 x( j).
Proof We denote
N −1
Ir ( j) = r x(k) br ( j − k).
k=0
N −1
I0 ( j) = x(k) [δ N ( j − k) − N −1 ] = x( j) − c, (3.1.9)
k=0
N −1
N −1
y(k) x(k) = − x(k) y(k − 1). (3.1.10)
k=0 k=0
N −1
N −1
y(k) x(k) = y(k) [x(k + 1) − x(k)]
k=0 k=0
N −1
N −1
= [y(k − 1) − y(k)] x(k) = − x(k) y(k − 1).
k=0 k=0
Recall that r (x) = r −1 (x) . Taking into account (3.1.10) and (3.1.4) we gain
N −1
Ir ( j) = − r −1 x(k) [br ( j − k) − br ( j − k + 1)]
k=0
N −1
= r −1 x(k) br −1 ( j − k) = Ir −1 ( j).
k=0
64 3 Spline Subspaces
n−1 N −1
−k j k(N − j)
X 1 (k) = (n − j) ω N + n − (N − j) ω N
j=0 j=N −n+1
n−1
−k j
n−1
kj
=n+ (n − j) ω N + (n − j ) ω N
j=1 j =1
n−1
n−1
k(n− j)−kn −k kj
= n + 2 Re (n − j) ω N = n + 2 Re ωm j ωN .
j=1 j=1
n−1
z
jz j = [(n − 1) z n − nz n−1 + 1], z = 1. (3.2.3)
j=1
(1 − z)2
n−1
ωkN
[(n − 1) ωmk − n ωmk ω−k
kj
j ωN = N + 1].
j=1
(1 − ωkN )2
n−1
1
ωm−k [n − 1 − n ω−k −k
kj
j ωN = − πk N + ωm ].
j=1
4 sin2 N
3.2.2 We put
Q 1 = x1 ; Q r = Q 1 ∗ Q r −1 , r = 2, 3, . . . (3.2.4)
− 0
Proof When r = 1, formula (3.2.5) coincides with the DFT inversion formula which
reconstructs the signal x1 from its spectrum X 1 . We perform an induction step from
r to r + 1. From validity of (3.2.5) it follows that F N (Q r ) = X 1r holds. By virtue of
the convolution theorem we write
F N (Q r +1 ) = F N (Q 1 ∗ Q r ) = X 1 X 1r = X 1r +1 .
N −1
1 r +1 kj
Q r +1 ( j) = X (k) ω N , j ∈ Z.
N k=0 1
N −1
1 kj
Q r ( j) = ω = δ N ( j)
N k=0 N
3.2.3 Later on we will need the values Q r ( pn) for p ∈ 0 : m − 1. Let us calculate
them. We will use the fact that every index k ∈ 0 : N − 1 can be represented in a
form k = qm + l, where q ∈ 0 : n − 1 and l ∈ 0 : m − 1. According to (3.2.5) we
have
1 r
m−1 n−1
pn(qm+l)
Q r ( pn) = X (qm + l) ω N
N l=0 q=0 1
n−1
1 pl 1 r
m−1
= ωm X 1 (qm + l) .
m l=0 n q=0
Denoting
1 r
n−1
Tr (l) = X (qm + l) (3.2.6)
n q=0 1
we gain
1
m−1
Q r ( pn) = Tr (l) ωmpl . (3.2.7)
m l=0
We note that a signal Tr (l) is real, m-periodic, and even. Reality follows from
the definition (3.2.6), and m-periodicity from Lemma 2.1.3. The formula (3.2.7) and
Theorem 2.2.2 guarantee evenness of Tr (l).
Figure 3.2 shows graphs of a signal Tr (l) on the main period 0 : m − 1 for m =
512, n = 2, and r = 2, 3, 4.
Let us transform formula (3.2.6). For l ∈ 1 : m − 1 we introduce a value
n−1
1 π(qm + l) −2r
r (l) = 2 sin .
n q=0 N
n−1
1 2 sin(π(qm + l)/m) 2r πl
2r
Tr (l) = = 2 sin r (l).
n q=0 2 sin(π(qm + l)/N ) m
3.2.4 We will determine the relation between discrete periodic B-splines and discrete
periodic Bernoulli functions.
Proof According to (3.2.2) and (3.1.7) a value X 1 (k) for k ∈ 1 : N − 1 can be rep-
resented in a form
ωm−k (ωmk − 1)2
X 1 (k) = −k .
ω N (ωkN − 1)2
N −1 N −1
1 r 1 k k( j+r −r n)
(ω − 1)2r (ωkN − 1)−2r ω N
kj
X 1 (k) ω N =
N k=1 N k=1 m
N −1
1 k 2r
2r k( j+r −(r − p)n)
= (ω N − 1)−2r (−1)2r − p ωN
N k=1 p=0
p
N −1
1 k r
2r k( j+r −ln)
= (ω N − 1)−2r (−1)r −l ωN
N k=1 l=−r
r − l
N −1
1 k
r
2r k( j+r −ln)
= (−1)r −l (ω N − 1)−2r ω N
l=−r
r −l N k=1
r
2r
= (−1)r −l b2r ( j + r − ln).
l=−r
r −l
Now the statement of the theorem follows from the formula (3.2.5).
m−1
S( j) = c( p) Q r ( j − pn). (3.3.1)
p=0
Proof Let
m−1
S( j) := c( p) Q r ( j − pn) = 0 ∀ j ∈ Z
p=0
for some complex coefficients c( p). We will show that all c( p) are equal to zero. We
have
N −1
m−1 N −1
−k j −k( j− pn)−kpn
0= S( j) ω N = c( p) Q r ( j − pn) ω N
j=0 p=0 j=0
m−1
N −1
−k j
= c( p) ωm−kp Q r ( j) ω N . (3.3.2)
p=0 j=0
−1 −k j
According to (3.2.5) there holds Nj=0 Q r ( j) ω N = X 1r (k). We denote C(k) =
m−1 −kp
p=0 c( p) ωm . Then equality (3.3.2) can be rewritten as C(k)X 1 (k) = 0. The
r
m−1
Q r ( j − pn) ≡ n 2r −1 . (3.3.3)
p=0
N −1
1 r k( j− pn)
m−1 m−1
Q r ( j − pn) = X 1 (k) ωN
p=0
N k=0 p=0
N −1 m−1
1 r kj 1
−kp
= X (k) ω N ω
n k=0 1 m p=0 m
N −1
1 r 1 r
n−1
kj
= X 1 (k) ω N δm (k) = X (lm) ωnl j .
n k=0 n l=0 1
m−1
S( j) = d + d(l) b2r ( j + r − ln), (3.3.4)
l=0
m−1
where l=0 d(l) = 0.
Proof
Necessity According to (3.3.1) and (3.2.9) we have
m−1 r
1 2r r −k 2r
S( j) = c( p) n + (−1) b2r j + r − (k + p)n
p=0
N k=−r
r −k
n 2r
m−1 m−1 r
r −k 2r
= c( p) + c( p)(−1) b2r j + r − k + p m n .
N p=0 p=0 k=−r
r −k
m−1
m−1
r
r −k 2r
d(l) = c( p) (−1) = 0.
l=0 p=0 k=−r
r −k
m−1
g( j) = d(l) b2r ( j + r − ln) (3.3.5)
l=0
m−1
which has l=0 d(l) = 0. Let us calculate G = F N (g). We have
N −1
m−1 N −1
−k j −k( j−ln)−kln
G(k) = g( j) ω N = d(l) b2r ( j + r − ln) ω N
j=0 l=0 j=0
m−1
N −1
−k j
= d(l) ωm−kl b2r ( j + r ) ω N .
l=0 j=0
m−1
Denote D(k) = l=0 d(l) ωm−kl . The theorem’s hypothesis yields D(0) = 0. Taking
into account (3.1.6) we gain
G(k) = \begin{cases} 0 & \text{for } k = 0, \\ (-1)^r \big(2\sin(\pi k/N)\big)^{-2r} D(k) & \text{for } k \in 1 : N-1. \end{cases}   (3.3.6)
For k = 0, m, 2m, . . . , (n − 1)m this formula is true by virtue of (3.3.7) and the
equalities A(0) = 0 and X 1 (m) = X 1 (2m) = · · · = X 1 (n − 1)m = 0. For other
k ∈ 1 : N − 1, according to (3.3.6) and (3.2.2), we gain
−2r
G(k) = (−1)r 2 sin(π k/m) X 1r (k) D(k) = A(k) X 1r (k).
1
m−1
a( p) = A(k) ωmkp .
m k=0
N −1 N −1
1 kj 1 kj
g( j) = G(k) ω N = A(k) X 1r (k) ω N
N k=0 N k=0
N −1 m−1
1 −kp kj
= a( p) ωm X 1r (k) ω N
N k=0 p=0
N −1
1 r
m−1
k( j− pn)
= a( p) X 1 (k) ω N
p=0
N k=0
m−1
= a( p) Q r ( j − pn). (3.3.9)
p=0
Remark 3.3.1 The proof contains a scheme of transition from the expansion (3.3.5)
of a signal g to the expansion (3.3.9). The scheme looks this way:
d(l) → D(k) → A(k) → a( p) .
3.3.3 Let us present an important (for what follows) property of discrete periodic
splines.
where d(l) are the coefficients from the representation (3.3.4) of the spline S.
Proof We denote by Ir (x) the expression in the left side of equality (3.3.10). Accord-
ing to (3.3.4) and (3.1.4) we have
N −1 m−1
Ir (x) = d(l) br r − (ln − j) r x( j).
j=0 l=0
m−1
N −1
Ir (x) = (−1)r d(l) r x( j) br (ln − j)
l=0 j=0
m−1
= (−1)r d(l) x(ln) − c ,
l=0
N −1 m−1
where c = N −1 j=0 x( j). Taking into account the equality l=0 d(l) = 0 we
come to (3.3.10). The theorem is proved.
3.4.1 We consider the following interpolation problem on the set Srm of discrete
periodic splines of order r :
where z(l) are arbitrary complex numbers. A detailed notation of the problem (3.4.1)
by virtue of (3.3.1) looks this way:
m−1
c( p) Q r (l − p)n = z(l), l ∈ 0 : m − 1. (3.4.2)
p=0
m−1
c( p) h(l − p) = z(l), l ∈ 0 : m − 1,
p=0
m−1
m−1
H (k) = h( p) ωm−kp = Q r ( pn) ωm−kp .
p=0 p=0
According to (3.2.7) we have H (k) = Tr (k), where, as it was mentioned after the
proof of Lemma 3.2.2, all values Tr (k) are positive. The system (3.4.3) has a unique
solution C(k) = Z (k)/Tr (k), k ∈ 0 : m − 1. The DFT inversion formula yields
c(p) = \frac{1}{m} \sum_{k=0}^{m-1} \frac{Z(k)}{T_r(k)}\, \omega_m^{kp}, \quad p \in 0 : m-1.   (3.4.4)
Let us summarize.
Theorem 3.4.1 The interpolation problem (3.4.1) has a unique solution. Coeffi-
cients of the interpolation spline S∗ are determined by formula (3.4.4).
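As an illustration of Theorem 3.4.1 (not part of the original text), the following sketch assumes NumPy and uses helper names of our choosing: it builds the discrete periodic B-spline Q_r through the DFT, computes the coefficients c(p) by formula (3.4.4), and verifies the interpolation conditions.

import numpy as np

def b_spline_Q(r, m, n):
    """Discrete periodic B-spline Q_r of order r on the period N = m*n.

    Q_1 is the triangular signal Q_1(j) = n - |j| for |j| < n (continued
    periodically), and Q_r = Q_1 * Q_{r-1} (cyclic convolution), so that
    F_N(Q_r) = X_1^r; both facts are used through the DFT below."""
    N = m * n
    j = np.arange(N)
    Q1 = np.where(j < n, n - j, 0) + np.where(N - j < n, n - (N - j), 0.0)
    X1 = np.fft.fft(Q1)                        # X_1 = F_N(Q_1)
    return np.fft.ifft(X1 ** r).real           # Q_r = F_N^{-1}(X_1^r)

def interpolation_spline(z, r, n):
    """Spline S in S_r^m with S(l*n) = z(l); coefficients from formula (3.4.4)."""
    m = len(z)
    Qr = b_spline_Q(r, m, n)
    Tr = np.fft.fft(Qr[::n])                   # T_r(k) = sum_p Q_r(p n) omega_m^{-k p}
    C = np.fft.fft(z) / Tr                     # C(k) = Z(k) / T_r(k)
    c = np.fft.ifft(C)                         # c(p) by the DFT inversion formula
    # assemble S(j) = sum_p c(p) Q_r(j - p n)
    S = sum(c[p] * np.roll(Qr, p * n) for p in range(m))
    return S.real

z = np.array([0.0, 1.0, 4.0, 1.0])
S = interpolation_spline(z, r=2, n=4)
assert np.allclose(S[::4], z)                  # interpolation conditions S(l n) = z(l)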
3.4.2 We will show that a discrete interpolation spline S∗ has an extremal property.
Incidentally we will clarify the role of the parameter r .
Consider an extremal problem
f(x) := \sum_{j=0}^{N-1} |\Delta^r x(j)|^2 \to \min,
\qquad x(ln) = z(l), \ l \in 0 : m-1; \quad x \in \mathbb{C}^N.   (3.4.5)
N −1
r
f (x) = f (S∗ + η) = S∗ ( j) + r η( j)2
j=0
N −1
= f (S∗ ) + f (η) + 2 Re r S∗ ( j) r η( j).
j=0
N −1
m−1
r S∗ ( j) r η( j) = (−1)r d∗ (l) η(ln) = 0.
j=0 l=0
Therefore, f (x) = f (S∗ ) + f (η). Hence follows the inequality f (x) ≥ f (S∗ ) that
guarantees optimality of S∗ .
Let us verify the uniqueness of a solution of the problem (3.4.5). Assume that
f (x) = f (S∗ ). Then f (η) = 0. This is possible only when r η( j) = 0 for all j ∈ Z.
Theorem 3.1.2 yields η( j) ≡ const. But η(ln) = 0 holds for l ∈ 0 : m − 1, so
η( j) ≡ 0. We gain x = S∗ . The theorem is proved.
f(x) := \sum_{j=0}^{N-1} |\Delta^r x(j)|^2 \to \min,
\qquad g(x) := \sum_{l=0}^{m-1} |x(ln) - z(l)|^2 \le \varepsilon, \quad x \in \mathbb{C}^N,   (3.5.1)

q(c) := \sum_{l=0}^{m-1} |c - z(l)|^2 \to \min,   (3.5.2)
q(c + h) = \sum_{l=0}^{m-1} |c - z(l) + h|^2
= q(c) + m|h|^2 + 2\,\mathrm{Re} \sum_{l=0}^{m-1} (c - z(l))\,\bar h.
It is obvious that a unique minimum point c∗ of the function q(c) is determined from
a condition
\sum_{l=0}^{m-1} (c - z(l)) = 0,

so that

c_* = \frac{1}{m} \sum_{l=0}^{m-1} z(l).   (3.5.3)

Herein

\varepsilon_* := q(c_*) = -\sum_{l=0}^{m-1} (c_* - z(l))\,\overline{z(l)} = \sum_{l=0}^{m-1} |z(l)|^2 - m\,|c_*|^2.   (3.5.4)
where the minimum is taken among all x ∈ C N . We take an arbitrary spline S ∈ Srm
and write down an expansion
Fα (S + H ) = α f (S + H ) + g(S + H )
N −1
= α f (S) + f (H ) + 2 Re r S( j) r H ( j)
j=0
m−1
m−1
+ g(S) + |H (ln)|2 + 2 Re [S(ln) − z(l)] H (ln).
l=0 l=0
m−1
Fα (S + H ) = Fα (S) + α f (H ) + |H (ln)|2
l=0
m−1
+ 2 Re [(−1)r α d(l) + S(ln) − z(l)] H (ln).
l=0
Here d(l) are the coefficients of the expansion (3.3.4) of the spline S over the shifts
of the Bernoulli function.
Suppose that there exists a spline Sα ∈ Srm satisfying to the conditions
m−1
Fα (Sα + H ) = Fα (Sα ) + α f (H ) + |H (ln)|2 .
l=0
N −1
m−1
| H ( j)| = 0 and
r 2
|H (ln)|2 = 0.
j=0 l=0
The former equality holds only when H ( j) ≡ const (see Theorem 3.1.2). According
to the latter one we have H ( j) ≡ 0.
It remains to verify that the system (3.5.6) has a unique solution in the class
of splines S of a form (3.3.4). We take a solution d0 , d0 (0), d0 (1), . . . , d0 (m − 1)
of the homogeneous system
By virtue of positiveness of α this equality can be true only when all d0 (l) are
equal to zero. But in this case there holds S0 ( j) ≡ d0 . At the same time, S0 (ln) =
(−1)r +1 α d0 (l) = 0 holds for l ∈ 0 : m − 1, so that d0 = 0. Thus it is proved that the
homogeneous system (3.5.7) has only zero solution. As a consequence we gain that
the system (3.5.6) has a unique solution for all z(l), l ∈ 0 : m − 1.
Let us summarize.
Theorem 3.5.1 The auxiliary problem (3.5.5) has a unique solution Sα . This is a
discrete periodic spline of a form (3.3.4) whose coefficients are determined from the
system of linear equations (3.5.6).
3.5.4 We will show that the system (3.5.6) can be solved explicitly. In order to do
this we transit into a spectral domain:
m−1
m−1
(−1) α r
d(l) ωm−kl +d ωm−kl
l=0 l=0
m−1
m−1
+ d( p) b2r (l − p)n + r ωm−kl
l=0 p=0
m−1
= z(l) ωm−kl .
l=0
m−1
m−1
(−1) α D(k) + m d δm (k) +
r
d( p) ωm−kp b2r (ln + r ) ωm−kl = Z (k).
p=0 l=0
m−1
Note that D(0) = l=0 d(l) = 0. Putting
m−1
Br (k) = b2r (ln + r ) ωm−kl
l=0
k ∈ 0 : m − 1.
For k = 0 we have
1
m−1
1
d= Z (0) = z(l) = c∗ .
m m l=0
n−1
1 π(qm + k) −2r
r (k) = 2 sin .
n q=0 N
Proof We have
m−1 N −1
1 j (ln+r ) j −kl
Br (k) = (ω − 1)−2r ω N ωm
N l=0 j=1 N
N −1 m−1
1 j 1 l( j−k)
(ω N − 1)−2r ω N
rj
= ωm
n j=1 m l=0
N −1
1 j
(ω − 1)−2r ω N δm ( j − k).
rj
=
n j=1 N
−m + 2 ≤ j − k ≤ N − 2.
The unit pulse δm in the latter sum is nonzero only when j − k = qm, q ∈ 0 : n − 1.
Taking into account this consideration and equality (3.1.7) we gain
1 qm+k
n−1
r (qm+k)
Br (k) = (ω − 1)−2r ω N
n q=0 N
3.5.5 A solution of the auxiliary problem (3.5.5) is obtained in a form (3.3.4). Let
us convert it to a form (3.3.1).
m−1
Sα ( j) = cα ( p) Q r ( j − pn), (3.5.12)
p=0
where
1
m−1 kp
Z (k) ωm
cα ( p) = 2r
. (3.5.13)
m k=0 Tr (k) + α 2 sin(π k/m)
Proof We have
m−1
Sα ( j) = d + d(l) b2r ( j + r − ln), (3.5.14)
l=0
1 Z (0)
m−1
1
d= Z (0) = Q r ( j − pn).
m m p=0 Tr (0)
The latter equality is true by virtue of (3.2.8) and (3.3.3). Further, the remark to
Theorem 3.3.1 yields
m−1
m−1
d(l) b2r ( j + r − ln) = a( p) Q r ( j − pn).
l=0 p=0
Here
1
m−1
−2r
a( p) = (−1)r 2 sin(π k/m) D(k) ωmkp
m k=1
1
m−1 kp
Z (k) ωm
=
m k=1 2 sin(π k/m) 2r α + r (k)
1
m−1 kp
Z (k) ωm
= 2r
.
m k=1 Tr (k) + α 2 sin(π k/m)
Note that when α = 0 formula (3.5.13) for the coefficients of a smoothing spline
coincides with the formula (3.4.4) for the coefficients of an interpolation spline.
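A corresponding sketch of formula (3.5.13) is given below (not part of the original text); it assumes NumPy and reuses the b_spline_Q helper from the interpolation sketch above. Setting alpha = 0 indeed reproduces the interpolation coefficients (3.4.4).

import numpy as np

def smoothing_spline_coeffs(z, r, n, alpha):
    """Coefficients c_alpha(p) of the smoothing spline, formula (3.5.13)."""
    m = len(z)
    Qr = b_spline_Q(r, m, n)
    Tr = np.fft.fft(Qr[::n]).real                        # T_r(k), real and positive
    k = np.arange(m)
    penalty = alpha * (2.0 * np.sin(np.pi * k / m)) ** (2 * r)
    C = np.fft.fft(z) / (Tr + penalty)
    return np.fft.ifft(C)                                # c_alpha(p), p = 0..m-1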
3.5.6 We introduce a function ϕ(α) = g(Sα ). According to (3.5.1), (3.5.6) and the
Parseval equality we have
m−1
α2
m−1
ϕ(α) = α 2 |d(l)|2 = |D(k)|2
l=0
m k=0
α
2 m−1
|Z (k)|2 1
m−1
|Z (k)|2
= 2
= .
m k=1 α + r (l) m k=1 1 + r (k)/α 2
We remind that z(l) ≡ const, therefore at least one of the components Z (1), . . . ,
Z (m − 1) of the discrete Fourier transform is nonzero.
The function ϕ(α) strictly increases on the semiaxis (0, +∞), whereby
limα→+0 ϕ(α) = 0. Let us determine the limit of ϕ(α) for α → +∞. Taking into
account (3.5.3) and (3.5.4) we gain
1 1
m−1 m−1
1
lim ϕ(α) = |Z (k)|2 = |Z (k)|2 − |Z (0)|2
α→+∞ m k=1 m k=0 m
m−1 2 m−1
1
m−1
= |z(l)| −
2
z(l) = |z(l)|2 − m |c∗ |2 = ε∗ ,
l=0
m l=0 l=0
where ε∗ is the critical value of the parameter ε. Hence it follows, in particular, that
the equation ϕ(α) = ε with 0 < ε < ε∗ has a unique positive root α∗ .
Theorem 3.5.3 The discrete periodic spline Sα∗ is a unique solution of the problem
(3.5.1).
Proof We take an arbitrary signal x satisfying to the constraints of the problem (3.5.1)
and assume that there holds f (x) ≤ f (Sα∗ ). Then
Taking into account that Sα∗ is the unique minimum point of Fα∗ on C N we conclude that
x( j) ≡ Sα∗ ( j). It means that whenever x( j) is not identically equal to Sα∗ ( j) there holds f (x) > f (Sα∗ ).
The theorem is proved.
3.6.1 It is ascertained in par. 3.5.6 that a unique solution of the smoothing prob-
lem (3.5.1) for 0 < ε < ε∗ is a discrete periodic spline Sα ( j) of a form (3.5.12) with
α = α∗ , where α∗ is a unique positive root of the equation ϕ(α) = ε. Here we will
consider a question of calculation of α∗ .
We introduce a function
1 1
m−1
|Z (k)|2
ψ(β) = ϕ = .
β m k=1 1 + r (k) β 2
We take an interval (−τ, +∞), where τ = mink∈1:m−1 [r (k)]−1 . On this interval
there hold inequalities ψ (β) < 0 and ψ (β) > 0, therefore the function ψ(β) is
strictly decreasing and strictly convex on (−τ, +∞). In addition to that we have
ψ(0) = ε∗ and lim ψ(β) = 0. If β∗ is a positive root of the equation ψ(β) = ε
β→+∞
then α∗ = 1/β∗ . Thus, instead of ϕ(α) = ε we can solve the equation ψ(β) = ε.
Let us consider the equivalent equation [ψ(β)]−1/2 = ε−1/2. We will solve it by
the Newton method with an initial approximation β0 = 0. Working formula of the
method looks this way:
Let us find out what this method corresponds to when it is applied to the equation
ψ(β) = ε.
Lemma 3.6.1 The function [ψ(β)]−1/2 strictly increases and is concave on the inter-
val (−τ, +∞).
It is obvious that [ψ −1/2 (β)] > 0. The inequality [ψ −1/2 (β)] ≤ 0 is equivalent to
the following:
3 ψ (β) ≤ 2 ψ(β) ψ (β).
2
(3.6.2)
Then
m−1
ηk
m−1
ηk
ψ(β) = , ψ (β) = −2 ,
k=1
(β + θk )2 k=1
(β + θk )3
m−1
ηk
ψ (β) = 6 .
k=1
(β + θk )4
so
−1/2 −1/2 ψ (βk )
0<ψ (β) ≤ ψ (βk ) 1 − (β − βk ) .
2 ψ(βk )
We denote the function in the right side of the inequality (3.6.4) by ζk (β). A graph
of this function is a hyperbola. By virtue of (3.6.4) this hyperbola lies under the graph
of the function ψ(β). Since ζk (βk ) = ψ(βk ) and ζk (βk ) = ψ (βk ), the mentioned
graphs are tangent to each other when β = βk (see Fig. 3.3). Moreover, the root βk+1
of the equation ζk (β) = ε is calculated with the aid of the formula (3.6.1).
According to what has been said it is reasonable to refer to the iterative method
(3.6.1) for solving the equation ψ(β) = ε as a tangent hyperbolas method.
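The iteration can be sketched, for instance, as follows (not part of the original text; it assumes NumPy and reuses b_spline_Q from the interpolation sketch above). The positive constants multiplying β in ψ are obtained here as the ratios T_r(k)/(2 sin(πk/m))^{2r}; it is assumed that z is not constant and that 0 < ε < ε_*.

import numpy as np

def solve_smoothing_parameter(z, r, n, eps, iters=50):
    """Newton iteration for [psi(beta)]^{-1/2} = eps^{-1/2} starting from beta_0 = 0
    (the tangent hyperbolas method); returns alpha_* = 1 / beta_*."""
    m = len(z)
    Z = np.fft.fft(z)
    Tr = np.fft.fft(b_spline_Q(r, m, n)[::n]).real
    k = np.arange(1, m)
    lam = Tr[1:] / (2.0 * np.sin(np.pi * k / m)) ** (2 * r)   # positive constants
    w = np.abs(Z[1:]) ** 2 / m                                # weights |Z(k)|^2 / m

    psi = lambda b: np.sum(w / (1.0 + lam * b) ** 2)
    dpsi = lambda b: -2.0 * np.sum(w * lam / (1.0 + lam * b) ** 3)

    beta = 0.0                                 # assumes 0 < eps < psi(0) = eps_*
    for _ in range(iters):
        p = psi(beta)
        # Newton step for F(beta) = psi(beta)**(-1/2) - eps**(-1/2)
        beta -= (p ** -0.5 - eps ** -0.5) / (-0.5 * p ** -1.5 * dpsi(beta))
    return 1.0 / beta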
m−1
S1 ( j) = c( p) Q 1 ( j − pn). (3.7.1)
p=0
We assume that the coefficients c( p) are continued with a period m on all integer
indices p. In particular, c(m) = c(0).
Lemma 3.7.1 The values S1 (0), S1 (1), . . . , S1 (N ) are calculated consecutively by
the scheme
S1 (0) = n c(0);
k ∈ 0 : n − 1, l ∈ 0 : m − 1.
m−1
m−1
S1 ( j) = c( p) Q 1 (k − ( p − l)n) = c( p + l) Q 1 (k − pn)
p=0 p=0
m−1
= c( p + l) Q 1 (k + (m − p)n).
p=0
For p ∈ 2 : m − 1 we have
n ≤ k + (m − p)n ≤ n − 1 + (m − 2)n = N − n − 1.
Hence see (3.2.4) and (3.2.1) Q 1 (k + (m − p)n) = 0 holds for the given p. We
gain
S1 ( j) = c(l) Q 1 (k) + c(l + 1) Q 1 (k + N − n).
On the basis of (3.7.3) and (3.7.4) we come to (3.7.2). The lemma is proved.
Below we present a program that implements calculations along the scheme (3.7.2).
Program Code
s1(0) := n ∗ c(0); j := 0;
for l := 0 to m − 1 do
begin h := c(l + 1) − c(l);
for k := 1 to n do
begin j := j + 1;
s1( j) := s1( j − 1) + h end
end
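The same scheme (3.7.2) can also be written, for instance, as the following sketch (not part of the original text; the function name is ours): within each segment of length n the values of S_1 change by the constant step c(l+1) − c(l).

def spline1_values(c, n):
    """Values S_1(0), ..., S_1(N) computed by scheme (3.7.2):
    S_1(0) = n*c(0), then only additions are used."""
    m = len(c)
    s1 = [n * c[0]]
    for l in range(m):
        h = c[(l + 1) % m] - c[l]             # periodic continuation: c(m) = c(0)
        for _ in range(n):
            s1.append(s1[-1] + h)
    return s1                                  # N + 1 values; s1[N] == s1[0]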
Sν = Q 1 ∗ Sν−1 , ν = 2, 3, . . . (3.7.5)
m−1
Sr ( j) = c( p) Q r ( j − pn), j ∈ Z. (3.7.6)
p=0
m−1 N −1
m−1
= c( p) Q 1 (l) Q r −1 ( j − l − pn) = c( p) Q r ( j − pn).
p=0 l=0 p=0
N −1
y( j) = x(k) Q 1 ( j − k).
k=0
n−1
y( j) = nx( j) + (n − k)[x( j + k) + x( j − k)], (3.7.7)
k=1
j ∈ 0 : N − 1.
N −1
N −1
y( j) = x(k) Q 1 (k − j) = x(k + j) Q 1 (k)
k=0 k=0
n−1 N −1
= (n − k)x(k + j) + n − (N − k) x j − (N − k)
k=0 k=N −n+1
n−1
= nx( j) + (n − k)[x( j + k) + x( j − k)].
k=1
d0 = x( j); dk = x( j + k) + x( j − k), k ∈ 1 : n − 1; tk = n − k.
n−1
y( j) = dk tk .
k=0
h k = dk + h k−1 , k = 0, 1, . . . , n − 1; h −1 = 0. (3.7.8)
n−1
n−1
n−2
y( j) = (h k − h k−1 ) tk = h k tk − h k tk+1
k=0 k=0 k=−1
n−2
n−1
= h n−1 tn−1 + hk = hk .
k=0 k=0
Thus,
n−1
y( j) = hk , (3.7.9)
k=0
Program Code
for j := 0 to n − 1 do
begin h := x( j); s := h;
for k := 1 to n − 1 do
begin h := h + x( j + k) + x( j − k) ;
s := s + h end;
y( j) := s
end
The program uses only additions. The number of additions is 3(n − 1)N .
The values x( j) for j from (−n + 1) to N + n − 2 must be given explicitly. By
virtue of periodicity one should put
m−1
S( j) = c( p) Q r ( j − pn) (3.8.1)
p=0
and transform its coefficients by a rule ξ = Fm (c). Taking into account the DFT
inversion formula we write
m−1 m−1
1
S( j) = ξ(k) ωmkp Q r ( j − pn)
m p=0 k=0
1 kp
m−1 m−1
= ξ(k) ωm Q r ( j − pn) . (3.8.2)
k=0
m p=0
We introduce a notation
1 kp
m−1
μk ( j) = ω Q r ( j − pn), k ∈ 0 : m − 1. (3.8.3)
m p=0 m
m−1
S( j) = ξ(k) μk ( j). (3.8.4)
k=0
It is obvious that the signals μk belong to Srm . According to (3.8.4) they form a
basis in Srm . We will show that this basis is orthogonal.
3.8.2 As a precursor, let us obtain an expansion of the signal μk over the exponential
basis.
1 r
n−1
(qm+k) j
μk ( j) = X (qm + k) ω N . (3.8.5)
N q=0 1
1 r
n−1 m−1
(qm+k ) j
μk ( j) = X (qm + k ) ω N δm (k − k )
N q=0 k =0 1
1 r
n−1
(qm+k) j
= X (qm + k) ω N .
N q=0 1
μk , μk = 0 for k = k ,
(3.8.6)
μk 2 = 1
m
T2r (k), k ∈ 0 : m − 1.
N −1
μk , μk = μk ( j) μk ( j)
j=0
N −1
1 r
n−1
1 (qm+k−q m−k ) j
= X 1 (qm + k) X 1r (q m + k ) ωN
N q,q =0 N j=0
1 r
n−1
= X (qm + k) X 1r (q m + k ) δ N (q − q )m + k − k .
N q,q =0 1
1 2r
n−1
1
μk =
2
X (qm + k) = T2r (k).
N q=0 1 m
3.8.3 It is ascertained that the splines {μk }m−1k=0 form an orthogonal basis in a
space Srm . A transition from the expansion (3.8.1) to the expansion (3.8.4) is based on
the coefficients transform ξ = Fm (c). An inverse transition is related to the inversion
formula for DFT: c = Fm−1 (ξ ).
Note that (3.8.5) and (3.2.2) yield
μ0 ( j) ≡ N −1 n 2r . (3.8.7)
1 −(m−k) p
m−1
μm−k ( j) = ω Q r ( j − pn) = μk ( j),
m p=0 m
Indeed,
1 k( p−l)+kl
m−1
μk ( j + ln) = ω Q r j − ( p − l)n = ωmkl μk ( j).
m p=0 m
N −1
p
μk ( j) = 0.
j=0
N −1
p m−1 n−1
p
μk ( j) = μk (q + ln)
j=0 l=0 q=0
m−1
n−1
p
n−1
p
= ωmklp μk (q) = m δm (kp) μk (q) .
l=0 q=0 q=0
N −1
μk ( j) = 0, k ∈ 1 : m − 1.
j=0
N −1
2
μk ( j) = 0.
j=0
We see that for the mentioned indices k there always exist some complex values
among μk ( j). Along with that, if m is even then all values μm/2 ( j) are real because
there holds
1
m−1
μm/2 ( j) = (−1) p Q r ( j − pn). (3.8.10)
m p=0
1
m−1
μk ( j) = Q r ( j − pn) ωmpk .
m p=0
m−1
Q r ( j − pn) = μk ( j) ωm− pk , p ∈ 0 : m − 1.
k=0
In particular,
m−1
Q r ( j) = μk ( j), j ∈ Z. (3.8.11)
k=0
m−1
ϕ( j) = ξ(k) μk ( j). (3.9.1)
k=0
m−1
ϕ( j − pn) = ξ(k) ωm−kp μk ( j). (3.9.2)
k=0
m−1
ϕ( j − pn) = ξ(k) μk ( j) ωm−kp .
k=0
1 kp
m−1
ξ(k) μk ( j) = ω ϕ( j − pn), k ∈ 0 : m − 1. (3.9.3)
m p=0 m
If every ξ(k) is nonzero then we can divide (3.9.3) by ξ(k) and thus gain an
expansion of all splines μk ( j) over the system {ϕ( j − pn)}m−1
p=0 . Therefore this
system is a basis in Srm .
p=0 be a basis in Sr . If at least one coefficient
Conversely, let {ϕ( j − pn)}m−1 m
ξ(k) in the expansion (3.9.1) is equal to zero then according to (3.9.3) the system
{ϕ( j − pn)}m−1
p=0 is linearly dependent. But this contradicts with a definition of a
basis. The theorem is proved.
3.9.2 Two splines ϕ and ψ from Srm are called dual if for all p, q ∈ 0 : m − 1 there
holds
ϕ(· − pn), ψ(· − qn) = δm ( p − q). (3.9.4)
Thus, duality of splines ϕ and ψ is characterized by the fact that the systems of their
p=0 and {ψ( j − pn)} p=0 are biorthogonal.
shifts {ϕ( j − pn)}m−1 m−1
m−1
ψ( j) = η(k) μk ( j).
k=0
! m−1
m−1 "
ϕ(· − pn), ψ(· − qn) = ξ(k) ωm−kp μk , η(l) ωm−lq μl
k=0 l=0
m−1
= ξ(k) η(k) ωmk(q− p) μk 2
k=0
1
m−1
= ξ(k) η(k) T2r (k) ωmk(q− p) . (3.9.5)
m k=0
Theorem 3.9.2 Splines ϕ and ψ from Srm are dual if and only if their coefficients
ξ(k), η(k) in the expansions over the orthogonal basis satisfy to the condition
−1
ξ(k) η(k) = T2r (k) , k ∈ 0 : m − 1. (3.9.6)
Proof
Necessity We take (3.9.4) and put p = 0 there. According to (3.9.5) we gain
1
m−1
ξ(k) η(k) T2r (k) ωmkq = δm (q).
m k=0
Therefore
m−1
ξ(k) η(k) T2r (k) = δm (q) ωm−kq = 1, k ∈ 0 : m − 1,
q=0
Sufficiency obviously follows from (3.9.5) and (3.9.6). The theorem is proved.
3.9.3 Theorem 3.9.2 lets us introduce a self-dual spline. It is obtained when ξ(k) =
η(k), k ∈ 0 : m − 1. In this case the condition (3.9.6) takes a form
−1
|ξ(k)|2 = T2r (k) , k ∈ 0 : m − 1.
The simplest self-dual spline is defined by the formula (see Fig. 3.4)
m−1
μk ( j)
ϕr ( j) = √ , j ∈ Z.
k=0
T2r (k)
According to (3.9.5) we have ϕr (· − pn), ϕr (· − qn) = δm (q − p). The latter
m−1
means that the shifts ϕr ( j − pn) p=0 form an orthonormal system.
m−1
μk ( j)
Rr ( j) = . (3.9.7)
k=0
T2r (k)
We will show how the dual splines Q r ( j) and Rr ( j) help in solving a problem of
spline processing of discrete periodic data with the least squares method.
N −1
F(S) := |S( j) − z( j)|2 → min, (3.9.8)
j=0
where the minimum is taken among all S ∈ Srm . Given an arbitrary H ∈ Srm , we have
m−1
S∗ ( j) = d(q) Rr ( j − qn). (3.9.9)
q=0
! m−1
"
d(q) Rr (· − qn) − z, Q r (· − pn) = 0, p ∈ 0 : m − 1.
q=0
N −1
d( p) = z, Q r (· − pn) = z( j) Q r ( j − pn), (3.9.10)
j=0
p ∈ 0 : m − 1.
Thus, a unique solution of the problem (3.9.8) is the spline (3.9.9) with the coefficients
being calculated with formula (3.9.10).
The problem (3.9.8) can be interpreted as a problem of orthogonal projection of
a signal z on a subspace Srm .
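The following sketch (not part of the original text; it assumes NumPy and reuses b_spline_Q from the interpolation sketch above) assembles the dual spline R_r in the spectral domain and computes the least squares projection by formulae (3.9.9) and (3.9.10).

import numpy as np

def lsq_spline_projection(z, r, m, n):
    """Orthogonal projection of a real signal z (length N = m*n) onto S_r^m."""
    N = m * n
    Qr = b_spline_Q(r, m, n)
    X1r = np.fft.fft(Qr)                               # X_1^r(k) = F_N(Q_r)(k)
    # T_{2r}(k) = (1/n) sum_q X_1^{2r}(qm + k), an m-periodic positive signal
    T2r = (X1r.real ** 2).reshape(n, m).sum(axis=0) / n
    # dual spline R_r: F_N(R_r)(qm + k) = X_1^r(qm + k) / T_{2r}(k)
    Rr = np.fft.ifft(X1r / np.tile(T2r, n)).real
    # coefficients d(p) = <z, Q_r(. - p n)>, formula (3.9.10)
    d = np.array([np.dot(z, np.roll(Qr, p * n)) for p in range(m)])
    # S_*(j) = sum_q d(q) R_r(j - q n), formula (3.9.9)
    return sum(d[q] * np.roll(Rr, q * n) for q in range(m))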
3.9.5 We can transit from the expansion (3.9.9) of the spline S∗ ( j) over the basis
m−1 m−1
Rr ( j − qn) q=0 to the expansion over the basis Q r ( j − pn) p=0 . In order to do
this we use formulae (3.9.7) and (3.9.2) and write down
m−1
m−1
m−1
ωm
−kq
S∗ ( j) = d(q) Rr ( j − qn) = d(q) μk ( j)
q=0 q=0 k=0
T2r (k)
m−1
μk ( j)
m−1
= d(q) ωm−kq .
k=0
T2r (k) q=0
m−1
S∗ ( j) = c( p) Q r ( j − pn).
p=0
μν+1 ν ν
k ( j) = cν (k) μk ( j) + cν (m ν+1 + k) μm ν+1 +k ( j), (3.10.1)
2r
where cν (l) = 2 cos(πl/m ν ) .
n ν −1
1 (qm +k) j
μνk ( j) = yν (qm ν + k) ω N ν .
N q=0
We note that
yν+1 (l) = cν (l) yν (l), l ∈ 0 : N − 1. (3.10.2)
2n ν −1
1 (qm +k) j
μν+1
k ( j) = yν+1 (qm ν+1 + k) ω N ν+1
N q=0
n ν −1
1 (2qm +k) j
= yν+1 (2qm ν+1 + k) ω N ν+1
N q=0
n ν −1
1 ((2q+1)m ν+1 +k) j
+ yν+1 (2q + 1)m ν+1 + k ω N
N q=0
n ν −1
1 (qm +k) j
= cν (k) yν (qm ν + k) ω N ν
N q=0
n ν −1
1 (qm +m +k) j
+ cν (m ν+1 + k) yν (qm ν + m ν+1 + k) ω N ν ν+1
N q=0
= cν (k) μνk ( j) + cν (m ν+1 + k) μνm ν+1 +k ( j).
wkν+1 , μν+1
k = aν (k) cν (k) μνk 2 + aν (m ν+1 + k) cν (m ν+1 + k) μνm ν+1 +k 2 ,
The second column of the determinant is nonzero, therefore the equality (3.10.4) is
possible only if there exists a number λν (k) such that
Putting
λν (m ν+1 + k) = −λν (k), k ∈ 0 : m ν+1 − 1, (3.10.5)
k ∈ 0 : m ν − 1.
Thus, a spline wkν+1 of a form (3.10.3) is orthogonal to μν+1 k if and only if coeffi-
cients aν (k) can be represented by (3.10.6), where the numbers λν (k) are of the prop-
erty (3.10.5). A condition wkν+1 ( j) ≡ 0 is equivalent to λν (k) = 0, k ∈ 0 : m ν+1 − 1.
According to (3.10.5) numbers ρν (k) = λν (k) ωm−kν satisfy to the equality
ρν (m ν+1 + k) = ρν (k), k ∈ 0 : m ν+1 − 1. It means that λν (k) can be represented
in a form λν (k) = ρν (k) ωmk ν , where ρν (k) is an arbitrary m ν+1 -periodic sequence
whose members are all nonzero.
We will consider the simplest case of ρν (k) ≡ 1. It corresponds to splines wkν+1
of a form (3.10.3) with the coefficients
wkν+1 , μν+1
k = 0 for all k, k ∈ 0 : m ν+1 − 1,
(3.10.9)
wkν+1 = μνk μνm ν+1 +k μν+1
k .
m ν+1 − 1. As long as each wkν+1 belongs to Srm ν , we have Wr ν+1 ⊂ Srm ν . As it was
m
m ν+1
noted earlier, the inclusion Sr ⊂ Sr holds as well. According to Theorem 3.10.2
mν
the splines
form an orthogonal basis in Srm ν . The space Srm ν itself can be considered as an
m m
orthogonal sum of the subspaces Sr ν+1 and Wr ν+1 , i. e.
Here Srm t = Sr1 is a one-dimensional space consisting of signals that are identically
equal to a complex constant (see par. 3.3.1).
Let us formulate the obtained result as a theorem.
Theorem 3.10.3 A space of discrete periodic splines Srm with m = 2t can be decom-
posed into orthogonal sum
ν −1
t m
S( j) = α + αν (k) wkν ( j),
ν=1 k=0
3.11.1 We return to the problem of discrete spline interpolation (see the Sect. 3.4).
We denote the only spline from the set Srm that satisfies to interpolation conditions
S(ln) = z(l), l ∈ 0 : m − 1,
by Sr,n ( j). By this we emphasize the dependence of the interpolating spline on the param-
eters r and n (with fixed m ≥ 2). We are interested in the behavior of the spline Sr,n ( j)
as r → ∞ or as n → ∞.
In this section we consider the case r → ∞.
3.11.2 Recall that
m−1
Sr,n ( j) = c( p) Q r ( j − pn), (3.11.1)
p=0
whereby
[Fm (c)](k) = Z (k)/Tr (k), k ∈ 0 : m − 1. (3.11.2)
X r = Z Vr , (3.11.4)
N −1 m−1
−k( j− pn)−kpn
X r (k) = c( p) Q r ( j − pn) ω N =
j=0 p=0
N −1
m−1
−k j
= c( p) ωm−kp Q r ( j) ω N = Fm (c) (k) F N (Q r ) (k).
p=0 j=0
where
Vr (k) = F N (Q r ) (k)/Tr (k). (3.11.6)
For another thing, Tr (0) = n 2r −1 and Tr (k) > 0 for all k ∈ Z. Therefore,
n−1
sin(π(qm + l)/N ) 2r −1
=n .
s=0
sin(π(sm + l)/N )
3.11.3 As it follows from (3.11.6) and (3.11.5), the signal Vr is N -periodic, real, and
even. For all natural r the following equalities hold: Vr (qm) = nδn (q), q ∈ 0 : n − 1,
and
n−1
Vr (qm + l) = n, l ∈ 1 : m − 1. (3.11.8)
q=0
In case of an even m we additionally put V∗ (m/2) = n/2. With the aid of the
equality V∗ (N − k) = V∗ (k) we spread V∗ onto the whole main period 0 : N − 1.
Figure 3.6 depicts the graphs of V∗ (k) for m = 3, n = 3, and m = 4, n = 3.
# Let us verify
$ validity of the limit relation (3.11.10) for q = 0 and l ∈ 1 :
(m − 1)/2 . According to (3.11.3) we have
n−1
αl r −1
Vr (l) = n 1 + .
s=1
αsm+l
n−1
# $
Vr (qm + l) = n, l ∈ 1 : (m − 1)/2 .
q=0
All the values Vr (qm + l) are non-negative and Vr (l) → n when r → ∞, so for
q ∈ 1 : n − 1 there holds
# $
lim Vr (qm + l) = 0, l ∈ 1 : (m − 1)/2 .
r →∞
# $
Note that for q ∈ 1 : n# − 1 and l $∈ 1 : (m − 1)/2 the index qm + l varies from
m + 1 to (n − 1)m + (m − 1)/2 . Bearing in mind the equality
# $
(m − 1)/2 + m/2 = m − 1
we obtain # $
(n − 1)m + (m − 1)/2 = N − m/2 − 1.
# $
q ∈ 1 : n − 1, l ∈ 1 : (m − 1)/2 .
lim Vr ( m2 ) = n
2
= V∗ ( m2 ).
r →∞
It is also clear that, due to the fact that both signals Vr and V∗ are even, there holds
lim Vr (n − 1)m + m
2
= lim Vr ( m2 ) = V∗ ( m2 ) = V∗ (n − 1)m + m
2
.
r →∞ r →∞
According to (3.11.8)
n−1
Vr (qm + m
2
) = n.
q=0
Taking into account the relations Vr (m/2) → n/2 and Vr (n − 1)m + m/2 → n/2
we conclude that
lim Vr (qm + m
2
) = 0 = V∗ (qm + m
2
), q ∈ 1 : n − 2.
r →∞
lim Vr (qm + m
2
) = V∗ (qm + m
2
) (3.11.11)
r →∞
holds.
When m = 2, the set 1 : m − 1 consists of a single index l = 1 = m/2. In this
case the relation (3.11.11) proves the lemma. Thereafter we assume that m ≥ 3. This
guarantees that the set m/2 + 1 : m − 1 is not empty.
According to (3.11.8),
n−1
Vr (qm + l) = n, l ∈ m/2 + 1 : m − 1. (3.11.12)
q=0
Furthermore,
We took into account that the following inequalities hold for the given l:
# $
1 ≤ m − l ≤ (m − 1)/2 .
q ∈ 0 : n − 2, l ∈ m/2 + 1 : m − 1.
X ∗ = Z V∗ , tm,n = F N−1 (X ∗ ).
kj
Proof Using the inversion formula and the fact that |ω N | = 1 holds for all integer k
and j we gain
N −1
1 kj
|Sr,n ( j) − tm,n ( j)| ≤ |X r (k) − X ∗ (k)| × |ω N |
N k=0
N −1
1
= |Z (k)| × |Vr (k) − V∗ (k)|.
N k=0
Now the conclusion of the theorem immediately follows from Lemma 3.11.2.
3.11.5 Let us find out the nature of the limit signal tm,n . Consider two cases depending
upon whether m is even.
Therefore,
0 for k ∈ μ : N − μ,
X ∗ (k) =
n Z (k) for the other k ∈ 0 : N − 1.
We will use the sampling theorem (see Sect. 2.4 of Chap. 2). According to it there
holds
N −1
tm,n ( j) = tm,n (ln) h m,n ( j − ln), (3.11.14)
l=0
where
μ−1
1 kj
h m,n ( j) = ωN . (3.11.15)
m k=−μ+1
N −1 μ−1 N −1
1 1
tm,n (ln) = X ∗ (k) ω N =
kln
Z (k) ωm +
kl
Z (k) ωm .
kl
N k=0 m k=0 k=N −μ+1
1
m−1
tm,n (ln) = Z (k) ωmkl = z(l).
m k=0
N −1
tm,n ( j) = z(l) h m,n ( j − ln). (3.11.17)
l=0
On the basis of (3.11.17) and (3.11.16) we conclude that the limit signal tm,n ( j)
is an interpolating trigonometric polynomial defined on a set of integer numbers.
Case of m = 2μ. We have m/2 + 1 = μ + 1 and
⎧
⎪
⎨0n for k ∈ μ + 1 : N − μ − 1,
⎪
V∗ (k) = for k = μ and k = N − μ,
⎪
⎪ 2
⎩n for the other k ∈ 0 : N − 1.
We will show that the representation (3.11.17) still holds for the limit signal tm,n ,
however, unlike with the case of m = 2μ − 1, the kernel h m,n has a form (3.11.18).
This statement is equivalent to the following: for signal (3.11.17) there holds
F N (tm,n ) = X ∗ . Let us verify this equality.
We write
N −1 m−1
−k( j−ln)−kln
F N (tm,n ) (k) = z(l) h m,n ( j − ln) ω N
j=0 l=0
m−1 N −1
−k j
= z(l) ωm−kl h m,n ( j) ω N = Z (k) V∗ (k) = X ∗ (k).
l=0 j=0
Let us summarize.
Section 2.4 of Chap. 2 denotes compact representations for the kernels h m,n ; namely,
(2.4.3) for an odd m and (2.4.7) for an even m.
%r,n ( j) = 1
Q Q r,n ( j), j ∈ Z.
n 2r −1
By virtue of Lemma 3.3.2,
m−1
%r,n ( j − pn) ≡ 1.
Q
p=0
%r,n ( j)
Taking into account non-negativity of Q r,n ( j) we conclude that the values of Q
belong to the segment [0, 1].
Let us find out how the normalized B-spline Q %r,n ( j) behaves when n is growing.
Periodic B-splines of higher orders are defined with the aid of convolution:
B_\nu(x) = \int_0^m B_{\nu-1}(t)\, B_1(x - t)\, dt, \quad \nu = 2, 3, \ldots

\sum_{l=0}^{m-1} B_\nu(x - l) \equiv 1.   (3.12.2)
and for l ∈ 1 : m − 1
1 for k = l,
B1 (k − l) =
0 for the others k ∈ 0 : m.
Hence it follows that L 1 (k) = 1 for all k ∈ 0 : m. This guarantees validity of the
identity (3.12.2) for ν = 1.
We perform an induction step from ν to ν + 1. We have
m−1 & m
m−1
L ν+1 (x) = Bν+1 (x − l) = Bν (t) B1 (x − l − t) dt.
l=0 l=0 0
For any m-periodic function f (t) that is integrable on the main period [0, m], the
following formula holds:
\int_0^m f(t - \xi)\, dt = \int_0^m f(t)\, dt \quad \text{for all } \xi \in \mathbb{R}.
& m
m−1 & m m−1
L ν+1 (x) = Bν (t − l) B1 (x − t) dt = B1 (x − t) Bν (t − l) dt
l=0 0 0 l=0
& m & m
= B1 (x − t) dt = B1 (−t) dt = 1.
0 0
As it was noted, Bν (x) ≥ 0 for all x ∈ R. Lemma 3.12.1 yields Bν (x) ≤ 1. Thus,
values of a B-spline Bν (x) belong to the segment [0, 1] for all natural ν and all real x.
Proof When ν = 1, the stated properties of the B-spline B1 (x) hold. Let us perform
an induction step from ν to ν + 1.
We have
& m & x+m/2
Bν+1 (x) = Bν (t) B1 (x − t) dt = Bν (t) B1 (x − t) dt.
0 x−m/2
Therefore,
& x & x+1
Bν+1 (x) = Bν (t) (1 − x + t) dt + Bν (t) (1 + x − t) dt.
x−1 x
By the induction hypothesis, Bν ∈ C 2ν−2 (R). It follows from (3.12.3) and (3.12.4)
that Bν+1 ∈ C 2ν (R).
If x ∈ [k, k + 1], where k is an integer number, then (x − 1) ∈ [k − 1, k] and (x +
1) ∈ [k + 1, k + 2]. According to (3.12.4) and the induction hypothesis, Bν+1 (x) on
the segment [k, k + 1] coincides with some algebraic polynomial of order not higher
than 2ν − 1. Hence Bν+1 (x) on this segment is a polynomial of order not higher than
2ν + 1.
The lemma is proved.
Lemma 3.12.3 For any natural ν and all real x, y there holds
3.12.3 Let us find out how a continuous periodic B-spline Bν (x) and a discrete
%r,n ( j) are related.
normalized B-spline Q
Lemma 3.12.4 For any given order ν there exists a non-negative Aν such that
% j Aν
Q ν,n ( j) − Bν ≤ for all j ∈ Z and n ≥ 2. (3.12.5)
n n
Further,
& m & m
Bν+1 (x) = Bν (t) B1 (x − t) dt = Bν (t + x) B1 (−t) dt
&0 m 0
& m
= Bν x − (m + t) B1 (m − t) dt = Bν (x − t) B1 (t) dt,
0 0
therefore
& m &
j j 1 N j −t t
Bν+1 = Bν − t B1 (t) dt = Bν B1 dt
n 0 n n 0 n n
N −1 &
1 k+1 j −t t
= Bν B1 dt.
n k=0 k n n
We have
N −1 &
% j 1 k+1 % k j −t t
Q ν+1,n ( j) − Bν+1 ≤ Q ν,n ( j − k) B1 − Bν B1 dt
n n k n n n
k=0
N −1 &
1 k+1 % j −t k j −t k t
= Q ν,n ( j − k) − Bν B1 + Bν B1 − B1 dt.
n k n n n n n
k=0
We know that B1 nk and Bν j−tn
do not exceed unity in modulus. Lemma 3.12.3
yields
k
t k − t 1
B1 − B1 ≤ ≤ for t ∈ [k, k + 1].
n n n n
We come to an inequality
j Aν + 2
%
Q ν+1,n ( j − k) − Bν+1 ( ) ≤ m .
n n
To finish the proof of the lemma it is remaining to put Aν+1 = m(Aν + 2).
3.12.4 Recall that the spline Sr,n of a form (3.12.1) satisfies to interpolation condi-
tions
Sr,n (ln) = z(l), l ∈ 0 : m − 1. (3.12.6)
Hence, just like in Sect. 3.4, we can obtain an expression for Cn = Fm (cn ). Let us
do it. Denote Z = Fm (z),
%n = Fm (%
G n = Fm (gn ), G gn ).
1 %n = 1 G n .
gn (l) =
% gn (l), G
n 2r −1 n 2r −1
Equation (3.12.6) in a spectral domain takes a form (see Sect. 3.4)
%n = Z .
Cn G
%n .
Therefore, Cn = Z /G
Lemma 3.12.5 A sequence of spectra G % with com-
%n converges to a spectrum G
ponents
m−1
%
G(k) = Br (l) ωm−kl . (3.12.7)
l=0
%
Furthermore, G(k) > 0 for all k ∈ Z.
Proof According to Lemma 3.12.4 we have
As a consequence,
m−1
m−1
%n (k) = lim
lim G gn (l) ωm−kl =
% %
Br (l) ωm−kl =: G(k).
n→∞ n→∞
l=0 l=0
% are positive.
Let us verify that all components of the limit spectrum G
It follows from (3.12.7) and (3.12.2) that
m−1
m−1
%
G(0) = Br (l) = Br (m − 1 − l) = 1.
l=0 l=0
2r
n−1
π(qm + k) −2r
%n (k) = sin π k
G n sin .
m q=0
mn
2r π k −2r
%n (k) ≥ sin π k
G .
m m
%
Passing to the limit as n → ∞ we gain G(k) > 0, k ∈ 1 : m − 1.
The lemma is proved.
1 1 Z (k) kp
m−1 m−1
cn ( p) = Cn (k) ωmkp = ω .
m k=0 %n (k) m
m k=0 G
1 Z (k) kp
m−1
lim cn ( p) = ωm =: c∗ ( p), (3.12.8)
n→∞ %
m k=0 G(k)
m−1
Sr (x) = c∗ ( p) Br (x − p).
p=0
Second Limit Theorem For all real x the following limit relation holds:
Sr,n (nx) − Sr (x) ≤ Sr nx − Sr (x) + Sr,n (nx) − Sr nx .
n n
(3.12.10)
The first summand in the right side of (3.12.10) vanishes as n → ∞ by virtue of
continuity of the spline Sr and by inequalities
1 nx
− ≤ − x ≤ 0.
n n
Let us make an estimate of the second summand. We write
m−1
≤
cn ( p) Q%r,n nx − pn − Br nx − p + cn ( p) − c∗ ( p) Br nx − p .
n n
p=0
Sr (l) = z(l), l ∈ 0 : m − 1.
In other words, the limit spline Sr satisfies to the same interpolation conditions as all
discrete splines Sr,n .
Exercises
3.2 Prove that the discrete Bernoulli function of the first order b1 ( j) can be repre-
sented as
b_1(j) = \frac{1}{N}\Big(\frac{N+1}{2} - j\Big) \quad \text{for } j \in 1 : N.
3.3 Prove that the discrete Bernoulli function of the second order b2 ( j) can be
represented as
b_2(j) = -\frac{N^2 - 1}{12N} + \frac{(j-1)(N-j+1)}{2N} \quad \text{for } j \in 1 : N.
r
r −l 2r
Q r ( j) =
2r
(−1) δ N ( j + r − ln).
l=−r
r −l
where the minimum is taken among all S ∈ Srm . Prove that a unique (up to an additive
constant) solution of this problem is the interpolation spline S∗ satisfying to the
conditions S∗ (ln) = x(ln), l ∈ 0 : m − 1.
3.12 Prove that the smoothing spline Sα from par. 3.5.3 is real-valued if the initial
data z(l), l ∈ 0 : m − 1, are real.
3.14 Formula (3.8.3) defines μk ( j) for all k ∈ Z. Prove that with j being fixed the
m-periodic with respect to k sequence μk ( j) is even.
3.16 Consider the expansion (3.8.4) of a spline S over the orthogonal basis. Let the
m-periodically continued coefficients ξ(k) form an even signal. Prove that in this
case S( j) takes only real values.
3.17 Under the conditions of the previous exercise, let the signal ξ composed of
the coefficients of the expansion (3.8.4) be real and even. Prove that this guarantees
reality and evenness of the spline S( j).
3.18 Prove that a self-dual spline ϕr ( j) defined by formula (3.9.7) is real and even.
3.19 Prove that the spline Rr ( j) dual to B-spline Q r ( j) is real and even.
N −1
1
Q rν ( j) =
lj
yν (l) ω N
N l=0
r
2r
Q rν+1 ( j) = Q rν ( j − pn ν ).
p=−r
r−p
3.24 Prove that an m ν -periodic on k sequence {aν (k)} defined by formula (3.10.7)
is even.
3.25 Prove that an m ν+1 -periodic on k sequence wkν+1 ( j) of a form (3.10.3) with
the coefficients (3.10.7) is even.
3.26 Let wkν+1 be the splines introduced in par. 3.10.2 and k ∈ 1 : m ν+1 − 1, k =
m ν+1 /2. Prove that there holds
N −1
2
wkν+1 ( j) = 0.
j=0
3.28 Let a spline ϕ belong to a wavelet subspace Wrm ν . Prove that the system of
m ν −1
shifts ϕ(· − ln ν ) l=0 forms a basis in Wrm ν if and only if each coefficient in the
m ν −1
expansion of ϕ over the basis wkν k=0 is nonzero.
m ν −1
3.29 Spline wavelets ϕ, ψ ∈ Wrm ν are called dual if their shifts ϕ(· − ln ν ) l=0
m ν −1
and ψ(· − ln ν ) l=0 are biorthogonal. Prove that ϕ and ψ are dual if and only if
m ν −1
their coefficients βν (k), γν (k) in the expansions over the basis wkν k=0 satisfy to
the condition
−1
βν (k) γ ν (k) = m ν wkν 2 , k = 0, 1, . . . , m ν − 1.
m ν+1 −1
Prν+1 ( j) = wkν+1 ( j).
k=0
Prove that
ν −1
m
Prν+1 ( j) = dν ( p) Q rν ( j − pn ν ),
p=0
where dν = Fm−1ν (aν ) and the sequence {aν (k)} is defined by formula (3.10.7).
3.31 Prove that the coefficients dν ( p) from the previous exercise can be represented
as
r
2r
dν ( p) = (−1) p+1 Q ν2r ( p + l + 1)n ν .
l=−r
r − l
3.35 Prove that a continuous periodic B-spline Bν (x) introduced in par. 3.12.2 is an
even function.
Comments
Discrete periodic Bernoulli functions are introduced in the paper [4]. Ibid the theorem
about expansion of an arbitrary signal over the shifts of Bernoulli functions is proved.
This theorem plays an important role in discrete harmonic analysis.
Discrete periodic splines and their numerical applications are the central point of
the paper [29]. Piecewise polynomial nature of B-splines is investigated in [28].
Defining a spline as a linear combination of the shifts of a B-spline is a standard
procedure. Less standard is an equivalent definition via a linear combination of the
shifts of a Bernoulli function (Theorem 3.3.1). The latter definition is essentially used
in devising a fundamental relation (3.3.10) which, in turn, is a basis of establishing
the minimal norm property (Theorem 3.4.2). In continuous context the minimal norm
property is peculiar to natural splines [27].
A solution of the discrete spline interpolation problem (along with the minimal
norm property) is obtained in [29]. We note that discrete spline interpolation is used
in construction of lifting schemes of wavelet decompositions of signals [33, 34,
53]. Hermite spline interpolation and its applications to computer aided geometric
design are considered in [6]. Common approaches to wavelet processing of signals
are presented in the monograph [18].
An analysis of the problem of discrete periodic data smoothing is performed within
the framework of a common smoothing theory [42]. Along with that, to implement
the common approach we utilize the techniques of discrete harmonic analysis to
the full extent. We hope that a reader will experience an aesthetic enjoyment while
examining this matter.
Formula (3.8.3) defining an orthogonal basis in a space of signals is the beginning
of discrete spline harmonic analysis per se. Many problems are dealt with in terms of
coefficients of an expansion over the orthogonal basis. In particular, it is these terms
that are used to state a criterion of duality of two splines (Theorem 3.9.2). In practical
terms, the spline dual to a B-spline helps solving a problem of spline processing of
discrete periodic data with the least squares method.
Orthogonal splines are used to obtain a wavelet decomposition of the space of
splines.
Sections 3.8–3.10 are written on the basis of the paper [13]. In continuous context
a question of orthogonal periodic splines and their applications was considered in [40,
43, 52].
Limit properties of discrete periodic splines are investigated in the papers [19,
20]. The papers [7, 21] are devoted to application of discrete periodic splines to the
problems of geometric modeling.
Some of the additional exercises are of interest on their own. For example, the
problem 3.23 presents a so-called calibration relation for B-splines. A property of an
interpolating spline noted in the problem 3.11 is referred to as the best approximation
property. Problems 3.30–3.34 introduce a notion of B-wavelet and examine some of
its properties.
Chapter 4
Fast Algorithms
N −1
−k j
X (k) = x( j) ω N
j=0
N −1
2π k N −1
2π k
= x(0) + x( j) cos j −i x( j) sin j .
j=1
N j=1
N
N −1
N −1
A(k) = x( j) c j , B(k) = x( j) s j .
j=1 j=1
Then
X (k) = x(0) + A(k) − i B(k). (4.1.1)
Note that
c j + c j−2 = cos(α j) + cos α( j − 2) = 2 cos α( j − 1) cos(α) = 2 cos(α) c j−1 ,
s j + s j−2 = sin(α j) + sin α( j − 2) = 2 sin α( j − 1) cos(α) = 2 cos(α) s j−1 .
This induces the recurrent relations that serve as a basis for further transforms:
N −1
N −1
A(k) = x( j) c j = g j − 2 cos(α) g j+1 + g j+2 c j
j=1 j=1
N −1
N N +1
= g j c j − 2 cos(α) g j c j−1 + g j c j−2
j=1 j=2 j=3
N −1
= g1 c1 + g2 c2 − 2 cos(α) g2 c1 + g j c j − 2 cos(α) c j−1 + c j−2
j=3
= g1 c1 + g2 c2 − 2 cos(α) c1 ) = g1 c1 − g2 c0 = g1 cos(α) − g2 .
Similarly, with a reference to (4.1.4) and (4.1.3), we convert the expression for B(k):
N −1
N −1
B(k) = x( j) s j = g j − 2 cos(α) g j+1 + g j+2 s j
j=1 j=1
= g1 s1 + g2 s2 − 2 cos(α) g2 s1 = g1 s1 + g2 s2 − 2 cos(α) s1 )
= g1 s1 = g1 sin(α).
Program Code
g := x(N − 1); g1 := 0; a := 2 ∗ cos(2 ∗ π ∗ k / N );
for j := N − 2 downto 1 do
begin g2 := g1; g1 := g;
g := x( j) + a ∗ g1 − g2 end
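An equivalent sketch in Python (not part of the original text; the function name is ours) returns the single spectral component X(k). It mirrors the recursion above and needs only one cosine and one sine evaluation.

import math

def goertzel(x, k):
    """Single DFT component X(k) = sum_j x(j) exp(-2*pi*i*k*j/N) by the Goertzel
    recursion; only one cosine value enters the loop."""
    N = len(x)
    alpha = 2.0 * math.pi * k / N
    a = 2.0 * math.cos(alpha)
    g, g1 = x[N - 1], 0.0
    for j in range(N - 2, 0, -1):             # j = N-2 downto 1
        g, g1 = x[j] + a * g - g1, g
    A = g * math.cos(alpha) - g1              # A(k) = g_1 cos(alpha) - g_2
    B = g * math.sin(alpha)                   # B(k) = g_1 sin(alpha)
    return x[0] + A - 1j * B                  # formula (4.1.1)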
4.2.1 With the aid of the Goertzel algorithm it is possible to calculate the whole spectrum
of a signal, but this is not the best way. More efficient methods exist, called fast
Fourier transforms (FFTs). There are several FFT algorithms, and all of them depend
on arithmetic properties of the period length N. We will focus on the case N = 2s.
Our approach to FFT is related to constructing a recurrent sequence of orthogonal
bases in a space of signals. This matter is considered in the present section. The next
section is devoted to a description of FFT.
4.2.2 In a space C N with N = 2s we will construct a recurrent sequence of orthog-
N −1
onal bases f 0 , f 1 , . . . , f s . Here f ν = { f ν (k; j)}k=0 . A signal f ν (k; j) as an ele-
ment of a space C N will be denoted as f ν (k). We put Nν = N /2ν and ν = 2ν−1 .
N −1
A sequence f ν = { f ν (k)}k=0 , ν = 0, 1, . . . , s, is defined as follows:
f 0 (k) = δ N (· − k), k ∈ 0 : N − 1;
f ν (l + pν+1 ) = f ν−1 (l + 2 pν ) + ω
l
ν+1
f ν−1 l + (2 p + 1)ν ,
(4.2.1)
f ν (l + ν + pν+1 ) = f ν−1 (l + 2 pν ) − ω l
ν+1
f ν−1 l + (2 p + 1) ν ,
p ∈ 0 : Nν − 1, l ∈ 0 : ν − 1, ν = 1, . . . , s.
p ∈ 0 : N1 − 1, σ ∈ 0 : 1.
What does the basis f s look like? Answering this question requires some additional
preparation.
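The preparation rests on the bit-reversal permutation rev_s(j) that appears in the lemmas below. A minimal sketch of it (not part of the original text; the function name is ours) is:

def rev(s, j):
    """Bit reversal rev_s(j): reverse the s binary digits of j (0 <= j < 2**s)."""
    out = 0
    for _ in range(s):
        out = (out << 1) | (j & 1)
        j >>= 1
    return out

# e.g. for s = 3: rev(3, 1) == 4 and rev(3, 6) == 3; the mapping is an involution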
The following lemma helps to clear up the explicit form of signals f ν (k).
ν+1 −1
l rev (q)
f ν (l + pν+1 ) = ων+1ν f 0 (q + pν+1 ), (4.2.4)
q=0
p ∈ 0 : Nν − 1, l ∈ 0 : ν+1 − 1, ν = 1, . . . , s.
f ν (l + pν+1 ) = f ν (l + σ ν + pν+1 )
l +σ ν
= f ν−1 (l + 2 pν ) + ω ν+1
f ν−1 l + (2 p + 1)ν
ν −1
l revν−1 (q)
= ων f 0 (q + 2 pν )
q=0
ν −1
l revν−1 (q)
+ ω
l
ν+1
ων f 0 q + (2 p + 1)ν .
q=0
Hence
ν −1
l rev (q)
f ν (l + pν+1 ) = ων+1ν f 0 (q + pν+1 )
q=0
ν −1
l rev (q+ν )
+ ων+1ν f 0 (q + ν ) + pν+1
q=0
ν+1 −1
l rev (q)
= ων+1ν f 0 (q + pν+1 ).
q=0
N −1
l revs (q)
f s (l; j) = ωN δ N ( j − q)
q=0
l revs ( j)
= ωN = u l revs ( j) , l, j ∈ 0 : N − 1. (4.2.5)
Proof The assertion is known to be true for ν = 0 (the corollary to Lemma 2.1.4);
therefore, we assume that ν ∈ 1 : s. We take k, k ∈ 0 : N − 1 and represent them
in a form k = l + pν+1 , k = l + p ν+1 , where l, l ∈ 0 : ν+1 − 1 and p, p ∈
0 : Nν − 1. Bearing in mind formula (4.2.4), the definition of signals f 0 (k) and
Lemma 2.1.4, we write
The argument of the unit pulse δ N does not exceed N − 1 in absolute value. When p =
p , it is other than zero for all q, q ∈ 0 : ν+1 − 1 because |q − q | ≤ ν+1 − 1.
Hence f ν (k), f ν (k ) = 0 for p = p .
Let p = p . Then
ν+1 −1
(l−l )revν (q)
f ν (k), f ν (k ) = ων+1
q=0
ν+1 −1
(l−l )q
= ων+1 = ν+1 δν+1 (l − l ). (4.2.7)
q =0
We used formula (2.2.1) and the fact that the mapping q → revν (q) is a permutation
of the set {0, 1, . . . , ν+1 − 1}. On the basis of (4.2.7) we conclude that the scalar
product f ν (k), f ν (k ) is nonzero only when p = p and l = l , i.e. only when
4.2.6 Let us show that the signals f ν (l + pν+1 ) given some fixed ν, l and p ∈ 0 :
Nν − 1 differ from f ν (l) only by a shift of an argument.
ν+1 −1
l rev (q)
f ν (l; j) = ων+1ν δ N ( j − q).
q=0
N −1
1
x0 = xν (k) f ν (k). (4.3.1)
2ν k=0
N −1
N −1
xν (k) = x0 ( j) f ν (k; j) = x revs ( j) f ν (k; j). (4.3.2)
j=0 j=0
In particular,
N −1
x0 (k) = x revs ( j) δ N ( j − k) = x revs (k) .
j=0
xν (l + pν+1 ) = x0 , f ν (l + pν+1 )
= x0 , f ν−1 (l + 2 pν ) + ωl
ν+1
f ν−1 l + (2 p + 1)ν
−l
= xν−1 (l + 2 pν ) + ω ν+1
xν−1 l + (2 p + 1)ν .
Similarly,
−l
xν (l + ν + pν+1 ) = xν−1 (l + 2 pν ) − ων+1
xν−1 l + (2 p + 1)ν .
−l
xν (l + pν+1 ) = xν−1 (l + 2 pν ) + ων+1
xν−1 l + (2 p + 1)ν ,
−l
(4.3.3)
xν (l + ν + pν+1 ) = xν−1 (l + 2 pν ) − ω ν+1
x ν−1 l + (2 p + 1) ν ,
p ∈ 0 : Nν − 1, l ∈ 0 : ν − 1, ν = 1, . . . , s.
Along this scheme we calculate the coefficients of expansion of the signal x0 over
all bases f ν right up to f s . Note that according to (4.3.2) and (4.2.5) we have
N −1
N −1
−k rev ( j) −k j
xs (k) = x revs ( j) ω N s = x( j ) ω N = X (k).
j=0 j =0
Thus, the coefficients xs (k) are nothing else but the spectral components of the signal
x on the main period.
Calculations with formula (4.3.3) require \sum_{\nu=1}^{s} N_\nu\, 2^{\nu-1} = \tfrac12\, sN = \tfrac12\, N \log_2 N multiplications and 2\sum_{\nu=1}^{s} N_\nu\, 2^{\nu-1} = N \log_2 N additions.
Scheme (4.3.3) is one of the versions of the fast Fourier transform for N = 2s . It
is referred to as the decimation-in-time Cooley–Tukey algorithm.
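A compact sketch of this algorithm (not part of the original text; it assumes the rev helper from the bit-reversal sketch above, and the variable names are ours) is given below: the input is placed in bit-reversed order and then s stages of butterflies are performed.

import cmath

def fft_dit(x):
    """Radix-2 decimation-in-time FFT for len(x) = 2**s, in the spirit of (4.3.3)."""
    N = len(x)
    s = N.bit_length() - 1
    a = [x[rev(s, k)] for k in range(N)]           # bit-reversed input ordering
    half = 1                                       # half-block length at this stage
    while half < N:
        step = 2 * half
        w = cmath.exp(-2j * cmath.pi / step)       # primitive root for this stage
        for p in range(0, N, step):                # block index
            for l in range(half):                  # inner index
                u, t = a[p + l], w ** l * a[p + l + half]
                a[p + l], a[p + l + half] = u + t, u - t
        half = step
    return a                                       # a[k] = X(k)

# e.g. fft_dit([1, 2, 3, 4]) == [10, -2+2j, -2, -2-2j] up to rounding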
xs (k) = X (k), k ∈ 0 : N − 1;
p ∈ 0 : Nν − 1, l ∈ 0 : ν − 1, ν = s, s − 1, . . . , 1.
Along formula (4.3.4) we descend down to x0 (k) = x revs (k) . Replacing k by
revs (k) we obtain
x(k) = x0 revs (k) , k ∈ 0 : N − 1.
Thereby we have pointed out the fast algorithm of reconstructing a signal x from its
spectrum X = F N (x) for N = 2s .
4.4.1 We rewrite formula (4.2.2) of transition from the basis f ν−1 to the basis f ν :
l+σ ν
f ν (l + σ ν + pν+1 ) = f ν−1 (l + 2 pν ) + ω ν+1
f ν−1 l + (2 p + 1)ν ,
(4.4.1)
p ∈ 0 : Nν − 1, l ∈ 0 : ν − 1, σ ∈ 0 : 1, ν = 1, . . . , s.
Let us analyze the structure of this formula. It is convenient to assume that the
basis f ν−1 is divided into ν blocks; the blocks are marked with an index l. Each
block contains Nν−1 signals with inner indices 2 p and 2 p + 1 for p ∈ 0 : Nν − 1.
According to (4.4.1) a block with an index l generates two blocks of the basis f ν
with indices l and l + ν , herein each block contains Nν signals with an inner
index p. The complete scheme of branching for N = 23 is presented in Fig. 4.1. By
virtue of Theorem 4.2.2 all the bases f 0 , f 1 , . . . , f s are orthogonal. According to
Theorem 4.2.3 the signals of each block of the basis f ν differ only by shifts of an
argument; the length of a shift is a multiple of ν+1 .
f 0 (k) = δ N (· − k), k ∈ 0 : N − 1;
f ν ( pν+1 ) = f ν−1 (2 pν ) + f ν−1 (2 p + 1)ν ,
(4.4.2)
f ν (ν + pν+1 ) = f ν−1 (2 pν ) − f ν−1 (2 p + 1)ν ,
p ∈ 0 : Nν − 1, ν = 1, . . . , s.
The signals { f ν (ν + pν+1 )} will enter a wavelet basis, while the signals
{ f ν ( pν+1 )} will participate in further branching. The recurrent relations (4.4.2)
can be written down in a single line:
f ν (σ ν + pν+1 ) = f ν−1 (2 pν ) + (−1)σ f ν−1 (2 p + 1)ν , (4.4.3)
p ∈ 0 : Nν − 1, σ ∈ 0 : 1, ν = 1, . . . , s.
ν −1 ν −1
{ f ν ( pν+1 )} Np=0 , { f ν (ν + pν+1 )} Np=0
belong to Vν−1 and are pairwise orthogonal, and their total amount coincides with
the dimension of Vν−1 , we conclude that they form an orthogonal basis of Vν−1 .
Moreover, Vν−1 is an orthogonal sum of Vν and Wν , i.e.
Vν−1 = Vν ⊕ Wν , ν = 1, . . . , s. (4.4.4)
C N = V0 = V1 ⊕ W1 = (V2 ⊕ W2 ) ⊕ W1 = . . .
= Vs ⊕ Ws ⊕ Ws−1 ⊕ · · · ⊕ W2 ⊕ W1 . (4.4.5)
Here Vs = lin f s (0) . According to (4.2.5) there holds f s (0; j) ≡ 1, so Vs is a
subspace of signals that are identically equal to a complex constant.
Subspaces Wν are referred to as wavelet subspaces. Identity (4.2.8) yields
p ∈ 0 : Nν − 1, ν = 1, . . . , s.
This means that the basis of Wν consists of shifts of the signal f ν (ν ; j); the shifts
are multiples of ν+1 .
ν+1 −1
revν (q)
f ν (ν ; j) = ων+1
ν
δ N ( j − q)
q=0
ν −1 ν+1 −1
revν (q)
revν (q)
= ω2 δ N ( j − q) + ω2 δ N ( j − q). (4.4.7)
q=0 q=ν
ν −1 ν+1 −1
f ν (ν ; j) = δ N ( j − q) − δ N ( j − q). (4.4.8)
q=0 q=ν
s
N ν −1
ϕ0 ( p) = f 0 ( p) = δ N (· − p), p ∈ 0 : N − 1;
ϕν ( p + σ Nν ) = f ν (σ ν + pν+1 ),
p ∈ 0 : Nν − 1, σ ∈ 0 : 1, ν = 1, . . . , s.
ϕ0 ( p) = δ N (· − p), p ∈ 0 : N − 1;
p ∈ 0 : Nν − 1, ν = 1, . . . , s.
In our new notations, Haar basis (4.5.1) will be constituted of the signals
ϕs (0); ϕν ( p + Nν ), p ∈ 0 : Nν − 1, ν = s, s − 1, . . . , 1.
ξ0 ( p) = x, ϕ0 ( p) = x, f 0 ( p) = x( p),
ξ0 ( p) = x( p), p ∈ 0 : N − 1;
p ∈ 0 : Nν − 1, ν = 1, . . . , s.
s
N ν −1
−s −ν
x =2 ξs (0) ϕs (0) + 2 ξν ( p + Nν ) ϕν ( p + Nν ). (4.5.5)
ν=1 p=0
4.5.2 We will give an example of expanding a signal over Haar basis. Let N = 23
and the signal x be defined by its samples on the main period as
x = (1, −1, −1, 1, 1, 1, −1, −1). Calculations performed along formula (4.5.4) are
presented in the Table 4.1.
According to (4.5.5) we obtain the expansion

x = \tfrac14 \cdot 4\,\varphi_2(3) + \tfrac12 \cdot 2\,\varphi_1(4) - \tfrac12 \cdot 2\,\varphi_1(5) = f_2(6) + f_1(1) - f_1(3).
This result can be verified immediately taking into account the form of Haar basic
functions shown in Fig. 4.3.
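Scheme (4.5.4) can be written, for instance, as the following sketch (not part of the original text; the function name is ours); applied to the signal of this example it reproduces the coefficients used in the expansion above.

def haar_dit(x):
    """Fast Haar transform following scheme (4.5.4): returns the table of
    coefficients xi_nu for nu = 0..s; only additions and subtractions are used."""
    levels = [list(x)]                              # xi_0(p) = x(p)
    cur = list(x)
    while len(cur) > 1:
        half = len(cur) // 2                        # N_nu
        sums = [cur[2 * p] + cur[2 * p + 1] for p in range(half)]
        diffs = [cur[2 * p] - cur[2 * p + 1] for p in range(half)]
        cur = sums
        levels.append(sums + diffs)                 # xi_nu(p), xi_nu(p + N_nu)
    return levels

table = haar_dit([1, -1, -1, 1, 1, 1, -1, -1])
# table[2][3] == 4, table[1][4] == 2, table[1][5] == -2, as in the expansion above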
\sum_{\nu=1}^{s} 2N_\nu = 2\,(2^{s-1} + 2^{s-2} + \cdots + 2 + 1) = 2(N-1).
\xi_{\nu-1}(2p) = \tfrac12\big[\xi_\nu(p) + \xi_\nu(p + N_\nu)\big],
\xi_{\nu-1}(2p+1) = \tfrac12\big[\xi_\nu(p) - \xi_\nu(p + N_\nu)\big],
p \in 0 : N_\nu - 1, \quad \nu = s, s-1, \ldots, 1.
4.6.1 If we take an orthogonal system of signals and perform the same permutation of
an argument of each signal, the transformed signals will remain pairwise orthogonal.
This simple idea allows us to construct new orthogonal bases in a space C N .
Let N = 2s and f 0 , f 1 , . . . , f s be orthogonal bases in C N defined in par. 4.2.2.
We put
gν (k; j) = f ν revs (k); revs ( j) , ν ∈ 0 : s.
In particular,
g0 (k; j) = δ N revs ( j) − revs (k) = δ N ( j − k).
It is evident that for each ν ∈ 0 : s the signals gν (0), gν (1), …, gν (N − 1) are pairwise
orthogonal and there holds gν (k)2 = 2ν , k ∈ 0 : N − 1.
g0 (k) = δ N (· − k), k ∈ 0 : N − 1;
s (2l)
gν (2l Nν + p) = gν−1 (l Nν−1 + p) + ωrev
N gν−1 (l Nν−1 + Nν + p),
(4.6.2)
s (2l)
gν (2l + 1)Nν + p = gν−1 (l Nν−1 + p) − ωrev N gν−1 (l N ν−1 + N ν + p),
p ∈ 0 : Nν − 1, l ∈ 0 : ν − 1, ν = 1, . . . , s.
Let ν ≥ 2. Then
revs (2l + σ )Nν + p = revs (lν−2 2s−1 + · · · + l0 2s−ν+1 + σ 2s−ν
+ ps−ν−1 2s−ν−1 + · · · + p0 )
= p0 2s−1 + · · · + ps−ν−1 2ν + σ 2ν−1 + l0 2ν−2 + · · · + lν−2
= ν+1 revs−ν ( p) + σ ν + revν−1 (l),
as it was to be ascertained.
rev (l)
s (2l)
It is remaining to check that ων+1
ν−1
= ωrev
N for l ∈ 0 : ν − 1. For ν = 1 this is
obvious, and for ν ≥ 2 this is a consequence of the equality
ν+1 −1
q revs (l)
gν (l Nν + p) = ωN g0 (q Nν + p), (4.6.5)
q=0
p ∈ 0 : Nν − 1, l ∈ 0 : ν+1 − 1, ν = 1, . . . , s.
gν (l Nν + p; j)
= f ν revs (lν−1 2s−1 + · · · + l0 2s−ν + ps−ν−1 2s−ν−1 + · · · + p0 ); revs ( j)
= f ν revν (l) + ν+1 revs−ν ( p); revs ( j)
ν+1 −1
rev (l) revν (q)
= ων+1
ν
f 0 q + ν+1 revs−ν ( p); revs ( j)
q=0
ν+1 −1
rev (l) revν (q)
= ων+1
ν
f 0 revs (revν (q)Nν + p); revs ( j)
q=0
ν+1 −1
rev (l) q
= ων+1
ν
g0 (q Nν + p); j .
q=0
gν (l Nν + p; j) ≡ gν (l Nν ; j − p), (4.6.7)
p ∈ 0 : Nν − 1, l ∈ 0 : ν+1 − 1, ν = 1, . . . , s.
ν+1 −1
q revs (l)
gν (l Nν ; j) = ωN δ N ( j − q Nν ).
q=0
Therefore
ν+1 −1
q revs (l)
gν (l Nν ; j − p) = ωN δ N j − (q Nν + p)
q=0
ν+1 −1
q revs (l)
= ωN g0 (q Nν + p; j) = gν (l Nν + p; j).
q=0
N −1
1
y= yν (k) gν (k). (4.6.8)
2ν k=0
y0 (k) = y(k), k ∈ 0 : N − 1;
p ∈ 0 : Nν − 1, l ∈ 0 : ν − 1, ν = 1, . . . , s.
N −1
N −1
−revs (k) j
ys (k) = y( j) gs (k; j) = y( j) ω N = Y revs (k) .
j=0 j=0
Hence it follows that the components of Fourier spectrum Y of a signal y are deter-
mined by the formula
Y (k) = ys revs (k) , k ∈ 0 : N − 1.
s (2l)
yν−1 (l Nν−1 + Nν + p) = 1
2
ωrev
N yν (2l Nν + p) − yν (2l + 1)Nν + p ,
p ∈ 0 : Nν − 1, l ∈ 0 : ν − 1, ν = s, s − 1, . . . , 1.
g0 (k) = δ N (· − k), k ∈ 0 : N − 1;
gν ( p) = gν−1 ( p) + gν−1 ( p + Nν ),
(4.6.10)
gν ( p + Nν ) = gν−1 ( p) − gν−1 ( p + Nν ),
p ∈ 0 : Nν − 1, ν = 1, . . . , s.
The signals {gν ( p + Nν )} will enter a wavelet basis and the signals {gν ( p)} will
participate in further branching.
The signals
gs (0); gν ( p + Nν ), p ∈ 0 : Nν − 1, ν = s, s − 1, . . . , 1, (4.6.11)
gν ( p + Nν ; j) ≡ gν (Nν ; j − p), p ∈ 0 : Nν − 1.
ν+1 −1
q revs (1)
gν (Nν ; j) = ωN δ N ( j − q Nν )
q=0
ν+1 −1
= (−1)q δ N ( j − q Nν )
q=0
ν −1
ν −1
= δ N ( j − q Nν−1 ) − δ N ( j − Nν − q Nν−1 )
q=0 q=0
= δ Nν−1 ( j) − δ Nν−1 ( j − Nν ).
s
N ν −1
−s −ν
y=2 ys (0) gs (0) + 2 yν ( p + Nν ) gν ( p + Nν ). (4.6.12)
ν=1 p=0
Here yν (k) = y, gν (k) . Bearing in mind (4.6.10) we deduce recurrent relations for
the coefficients of expansion (4.6.12):
y0 (k) = y(k), k ∈ 0 : N − 1;
yν ( p) = yν−1 ( p) + yν−1 ( p + Nν ),
(4.6.13)
yν ( p + Nν ) = yν−1 ( p) − yν−1 ( p + Nν ),
p ∈ 0 : Nν − 1, ν = 1, . . . , s.
As an example we will expand the signal from par. 4.5.2 over basis (4.6.11). It
is convenient to rename this signal as y in lieu of x. Calculations performed along
formula (4.6.13) are presented in Table 4.2.
According to (4.6.12) we obtain the expansion
This result can be verified immediately taking into account the form of Haar basic
functions shown in Fig. 4.4.
Scheme (4.6.13) of calculation of the coefficients of expansion (4.6.12) is referred
to as the decimation-in-frequency fast Haar transform. This transform requires only
additions; the number of operations is 2(N − 1).
Note that the coefficients of expansion (4.6.12) are contained in the table {yν (k)}
constructed along formula (4.6.9). Thus, in a process of calculating Fourier coeffi-
cients we incidentally calculate Haar coefficients as well.
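Scheme (4.6.13) admits, for instance, the following sketch (not part of the original text; the function name is ours); as with the decimation-in-time variant, only additions and subtractions are used.

def haar_dif(y):
    """Decimation-in-frequency fast Haar transform, scheme (4.6.13):
    y_nu(p) = y_{nu-1}(p) + y_{nu-1}(p + N_nu),
    y_nu(p + N_nu) = y_{nu-1}(p) - y_{nu-1}(p + N_nu)."""
    levels = [list(y)]                              # y_0(k) = y(k)
    cur = list(y)
    while len(cur) > 1:
        half = len(cur) // 2                        # N_nu
        sums = [cur[p] + cur[p + half] for p in range(half)]
        diffs = [cur[p] - cur[p + half] for p in range(half)]
        cur = sums
        levels.append(sums + diffs)
    return levels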
4.6.6 Formula (4.6.13) can be inverted:
p ∈ 0 : Nν − 1, ν = s, s − 1, . . . , 1.
N k −1
k
N ν −1
N −1
N −1
x= x( p) δ N (· − p) = ξ0 ( p) ϕ0 ( p)
p=0 p=0
N 1 −1
= ξ0 (2 p) ϕ0 (2 p) + ξ0 (2 p + 1) ϕ0 (2 p + 1) .
p=0
We put
N 1 −1
x= ξ0 (2 p + 1) ϕ0 (2 p) + ξ0 (2 p) ϕ0 (2 p + 1) .
p=0
Then
x = 21 (x +
x ) + 21 (x −
x ) =: 1
2
v1 + 21 w1 . (4.7.2)
N 1 −1
v1 = ξ0 (2 p) + ξ0 (2 p + 1) ϕ0 (2 p) + ϕ0 (2 p + 1)
p=0
N 1 −1
= ξ1 ( p) ϕ1 ( p),
p=0
N 1 −1
w1 = ξ0 (2 p) − ξ0 (2 p + 1) ϕ0 (2 p) − ϕ0 (2 p + 1)
p=0
N 1 −1
= ξ1 ( p + N1 ) ϕ1 ( p + N1 ).
p=0
N k −1
vk = ξk ( p) ϕk ( p)
p=0
Nk+1 −1
= ξk (2 p) ϕk (2 p) + ξk (2 p + 1) ϕk (2 p + 1)
p=0
Nk+1 −1
vk = ξk (2 p + 1) ϕk (2 p) + ξk (2 p) ϕk (2 p + 1) .
p=0
Then
vk = 21 (vk +
vk ) + 21 (vk −
vk ) =: 1
2
vk+1 + 21 wk+1 . (4.7.3)
Here
Nk+1 −1
vk+1 = ξk+1 ( p) ϕk+1 ( p),
p=0
Nk+1 −1
wk+1 = ξk+1 ( p + Nk+1 ) ϕk+1 ( p + Nk+1 ).
p=0
Nk+1 −1 ν −1
k+1
N
x = 2−k−1 ξk+1 ( p)ϕk+1 ( p) + 2−ν ξν ( p + Nν )ϕν ( p + Nν ).
p=0 ν=1 p=0
Theorem 4.7.1 (Sampling Theorem) Let x ∈ C N be a signal such that for some
k ∈ 1 : s there holds ξν ( p + Nν ) = 0 for all p ∈ 0 : Nν − 1 and ν = 1, . . . , k. Then
N k −1
ξν−1 (2 p) = 1
2
ξν ( p) + ξν ( p + Nν ) .
Taking into account the hypothesis of the theorem we gain ξν ( p) = 2 ξν−1 (2 p) for
all p ∈ 0 : Nν − 1 and ν = 1, . . . , k. Hence for p ∈ 0 : Nk − 1 there holds
2
k
−1
ϕk (0; j) = f k (0; j) = δ N ( j − q),
q=0
N k −1
The latter formula shows that in the premises of Theorem 4.7.1, the signal x is
a step-function defined by equalities x( j) = x(2k p) for j ∈ {2k p, 2k p + 1, . . . ,
2k ( p + 1) − 1}, p = 0, 1, . . . , Nk − 1.
4.7.2 Now we turn to Haar basis related to decimation in frequency.
Lemma 4.7.2 Given a signal y ∈ C N , for each k ∈ 1 : s there holds an equality
N k −1
k
N ν −1
y( j) = 1
2
y( j) + y( j − N1 ) + 1
2
y( j) − y( j − N1 )
=: 1
2
v1 ( j) + w1 ( j).
1
2
(4.7.6)
N −1
v1 = y( p) δ N (· − p) + δ N (· − p + N1 N)
p=0
N −1
= y0 ( p) g0 ( p) + g0 ( p + N1 N)
p=0
N 1 −1
N 1 −1
= y0 ( p) g1 ( p) + y0 ( p + N1 ) g1 ( p)
p=0 p=0
N 1 −1
= y1 ( p) g1 ( p),
p=0
N 1 −1
N 1 −1
w1 = y0 ( p) g1 ( p + N1 ) − y0 ( p + N1 ) g1 ( p + N1 )
p=0 p=0
N 1 −1
= y1 ( p + N1 ) g1 ( p + N1 ).
p=0
N k −1
vk = yk ( p) gk ( p)
p=0
=: 1
2
vk+1 + 21 wk+1 . (4.7.7)
Here
Nk+1 −1 Nk+1 −1
vk+1 = yk ( p) gk+1 ( p) + yk ( p + Nk+1 ) gk+1 ( p)
p=0 p=0
Nk+1 −1
= yk+1 ( p) gk+1 ( p)
p=0
and
Nk+1 −1 Nk+1 −1
wk+1 = yk ( p) gk+1 ( p+ Nk+1 ) − yk ( p+ Nk+1 ) gk+1 ( p+ Nk+1 )
p=0 p=0
Nk+1 −1
= yk+1 ( p + Nk+1 ) gk+1 ( p + Nk+1 ).
p=0
Nk+1 −1 ν −1
k+1
N
y = 2−k−1 yk+1 ( p) gk+1 ( p) + 2−ν yν ( p + Nν ) gν ( p + Nν ).
p=0 ν=1 p=0
Theorem 4.7.2 (Sampling Theorem) Let y ∈ C N be a signal such that for some
k ∈ 1 : s there holds yν ( p + Nν ) = 0 for all p ∈ 0 : Nν − 1 and ν = 1, . . . , k. Then
N k −1
y= y( p) gk ( p). (4.7.8)
p=0
Proof We have

y_{\nu-1}(p) = \tfrac12 [y_\nu(p) + y_\nu(p + N_\nu)].

Taking into account the hypothesis of the theorem we gain y_ν(p) = 2 y_{ν−1}(p) for all p ∈ 0 : N_ν − 1 and ν = 1, . . . , k. Hence for p ∈ 0 : N_k − 1 there holds y_k(p) = 2^k y_0(p) = 2^k y(p), and Lemma 4.7.2 yields (4.7.8).
Since g_k(p; j) = g_k(0; j − p) and

g_k(0; j) = \sum_{q=0}^{2^k-1} \delta_N(j - q N_k) = \delta_{N_k}(j),

formula (4.7.8) can be rewritten as

y(j) = \sum_{p=0}^{N_k-1} y(p)\, \delta_{N_k}(j - p).
The latter formula shows that under the premises of Theorem 4.7.2 the signal y is N_k-periodic.
Essentially, we have ascertained that by expanding a signal over the Haar basis related to decimation in frequency one can detect its hidden periodicity.
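A small illustration of this observation (our own code, not the book's; it repeats the sum/difference halving and reports how deep the differences vanish):

```python
import numpy as np

def hidden_period(y, tol=1e-12):
    """Return N_k = N / 2**k, where k counts consecutive levels (from level 1 on)
    whose Haar differences vanish; by Theorem 4.7.2 the signal is N_k-periodic."""
    y = np.asarray(y, dtype=float)
    N, k = len(y), 0
    while len(y) > 1 and np.allclose(y[:len(y)//2], y[len(y)//2:], atol=tol):
        y = y[:len(y)//2] + y[len(y)//2:]   # differences are zero, keep only the sums
        k += 1
    return N // 2**k

y = np.tile([1.0, -2.0, 0.5, 3.0], 8)   # N = 32 with a hidden period of 4
print(hidden_period(y))                  # 4
```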
4.8.1 Formula (4.6.12) gives an expansion of a signal y ∈ C_N over the discrete Haar basis related to decimation in frequency. As mentioned in par. 4.6.4, the basic signals satisfy the identity

g_\nu(p + N_\nu; j) \equiv g_\nu(N_\nu; j - p), \qquad p ∈ 0 : N_ν − 1, \ ν = 1, . . . , s − 1.

In other words, every basic signal of the ν-th level is a shift of a single signal g_ν(N_ν). We introduce the notation ψ_ν(j) = g_ν(N_ν; j). According to Theorem 4.6.4 we have

\psi_\nu(j) = \delta_{N_{\nu-1}}(j) - \delta_{N_{\nu-1}}(j - N_\nu), \qquad ν = 1, . . . , s.   (4.8.1)
We put

β = y_s(0); \qquad \tilde{y}_\nu(p) = y_\nu(p + N_\nu), \quad p ∈ 0 : N_ν − 1, \ ν = 1, . . . , s.

Then expansion (4.6.12) takes the form

y(j) = 2^{-s} β + \sum_{\nu=1}^{s} 2^{-\nu} \sum_{p=0}^{N_\nu-1} \tilde{y}_\nu(p)\, \psi_\nu(j - p), \qquad j ∈ \mathbb{Z}.   (4.8.2)
Proof By virtue of (4.8.1) the signal ψ_ν is N_{ν−1}-periodic. Since N_{ν−1} = 2N_ν, the signal on the right side of (4.8.3) has the same property. Hence it is sufficient to verify equality (4.8.3) on the main period 0 : N_{ν−1} − 1. When j = 0 or j = N_ν, it is a consequence of (4.8.1). When j ∈ 1 : N_ν − 1 or j ∈ N_ν + 1 : N_{ν−1} − 1, both sides of (4.8.3) are equal to zero. The lemma is proved.
As long as both sides of this equality are Nν−1 -periodic (in terms of j) signals, it is
sufficient to verify it on the period p : p + Nν−1 − 1. When j = p or j = p + Nν ,
equality (4.8.5) is true. It is also true for any other j from the given period because
in this case both sides of (4.8.5) are equal to zero.
Now (4.8.4) follows from (4.8.3) and (4.8.5).
\psi_\nu(j - l N_\nu) = (-1)^l\, \psi_\nu(j).

Taking into account that j − p = \langle j - p \rangle_{N_\nu} + \lfloor (j - p)/N_\nu \rfloor N_\nu, we come to (4.8.6).
4.8.2 We will investigate how the decimation-in-frequency discrete Haar transform acts on a cyclic convolution. Recall that the cyclic convolution of signals x and y from C_N is the signal u = x ∗ y with samples

u(j) = \sum_{k=0}^{N-1} x(k)\, y(j - k).
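As a quick illustration of this definition, here is a direct O(N^2) sketch (the helper name is ours), checked against the DFT convolution theorem:

```python
import numpy as np

def cyclic_convolution(x, y):
    """u(j) = sum_k x(k) * y(j - k), indices taken modulo N."""
    x, y = np.asarray(x), np.asarray(y)
    N = len(x)
    return np.array([sum(x[k] * y[(j - k) % N] for k in range(N)) for j in range(N)])

x, y = np.random.randn(8), np.random.randn(8)
u = cyclic_convolution(x, y)
print(np.allclose(np.fft.fft(u), np.fft.fft(x) * np.fft.fft(y)))   # True
```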
Along with (4.8.2), consider the expansions

x(j) = 2^{-s} α + \sum_{\nu=1}^{s} 2^{-\nu} \sum_{p=0}^{N_\nu-1} \tilde{x}_\nu(p)\, \psi_\nu(j - p),   (4.8.7)

u(j) = 2^{-s} γ + \sum_{\nu=1}^{s} 2^{-\nu} \sum_{p=0}^{N_\nu-1} \tilde{u}_\nu(p)\, \psi_\nu(j - p)   (4.8.8)

of the signals x and u = x ∗ y. Then γ = αβ and

\tilde{u}_\nu(p) = \sum_{q=0}^{p} \tilde{x}_\nu(q)\, \tilde{y}_\nu(p - q) - \sum_{q=p+1}^{N_\nu-1} \tilde{x}_\nu(q)\, \tilde{y}_\nu(p - q + N_\nu),   (4.8.9)

p ∈ 0 : N_ν − 1, \ ν = s − 1, s − 2, . . . , 1.
Proof According to (4.8.2),

y(j - k) = 2^{-s} β + \sum_{\nu=1}^{s} 2^{-\nu} \sum_{p=0}^{N_\nu-1} \tilde{y}_\nu(p)\, \psi_\nu(j - k - p).

As mentioned in par. 4.8.1, ψ_ν(−j) = ψ_ν(j). Along with (4.8.6) this gives

\psi_\nu(j - k - p) = \psi_\nu(k - (j - p)) = (-1)^{\lfloor (j-p)/N_\nu \rfloor}\, \psi_\nu(k - \langle j - p \rangle_{N_\nu}).

We have

y(j - k) = 2^{-s} β + \sum_{\nu=1}^{s} 2^{-\nu} \sum_{p=0}^{N_\nu-1} (-1)^{\lfloor (j-p)/N_\nu \rfloor}\, \tilde{y}_\nu(p)\, \psi_\nu(k - \langle j - p \rangle_{N_\nu}).
In the inner sum we substitute q = \langle j - p \rangle_{N_\nu}, i.e. p = \langle j - q \rangle_{N_\nu}. Then

\lfloor (j-p)/N_\nu \rfloor = \lfloor (j - \langle j - q \rangle_{N_\nu})/N_\nu \rfloor = \lfloor ((j - q) - \langle j - q \rangle_{N_\nu} + q)/N_\nu \rfloor = \lfloor (j-q)/N_\nu \rfloor.

We come to the formula

y(j - k) = 2^{-s} β + \sum_{\nu=1}^{s} 2^{-\nu} \sum_{q=0}^{N_\nu-1} (-1)^{\lfloor (j-q)/N_\nu \rfloor}\, \tilde{y}_\nu(\langle j - q \rangle_{N_\nu})\, \psi_\nu(k - q).   (4.8.10)
Let us substitute (4.8.7) and (4.8.10) into the convolution formula. Bearing in mind the reality, orthogonality, and norming of the basic functions we gain

u(j) = 2^{-s} αβ + \sum_{\nu=1}^{s} 2^{-\nu} \sum_{q=0}^{N_\nu-1} (-1)^{\lfloor (j-q)/N_\nu \rfloor}\, \tilde{x}_\nu(q)\, \tilde{y}_\nu(\langle j - q \rangle_{N_\nu}).

We write

(-1)^{\lfloor (j-q)/N_\nu \rfloor}\, \tilde{y}_\nu(\langle j - q \rangle_{N_\nu}) = (-1)^{\lfloor j/N_\nu \rfloor + \lfloor (\langle j \rangle_{N_\nu} - q)/N_\nu \rfloor}\, \tilde{y}_\nu(\langle \langle j \rangle_{N_\nu} - q \rangle_{N_\nu})
= (-1)^{\lfloor j/N_\nu \rfloor} \sum_{p=0}^{N_\nu-1} (-1)^{\lfloor (p-q)/N_\nu \rfloor}\, \tilde{y}_\nu(\langle p - q \rangle_{N_\nu})\, \delta_{N_\nu}(\langle j \rangle_{N_\nu} - p).

Since (-1)^{\lfloor j/N_\nu \rfloor}\, \delta_{N_\nu}(\langle j \rangle_{N_\nu} - p) = \psi_\nu(j - p), we therefore obtain

u(j) = 2^{-s} αβ + \sum_{\nu=1}^{s} 2^{-\nu} \sum_{p=0}^{N_\nu-1} \Bigl\{ \sum_{q=0}^{N_\nu-1} (-1)^{\lfloor (p-q)/N_\nu \rfloor}\, \tilde{x}_\nu(q)\, \tilde{y}_\nu(\langle p - q \rangle_{N_\nu}) \Bigr\}\, \psi_\nu(j - p).
But we already have representation (4.8.8) for the signal u. By virtue of the uniqueness of the expansion over an orthogonal basis we conclude that γ = αβ and that the sum in braces is nothing else but \tilde{u}_\nu(p). It remains to note that

\sum_{q=0}^{N_\nu-1} (-1)^{\lfloor (p-q)/N_\nu \rfloor}\, \tilde{x}_\nu(q)\, \tilde{y}_\nu(\langle p - q \rangle_{N_\nu}) = \sum_{q=0}^{p} \tilde{x}_\nu(q)\, \tilde{y}_\nu(p - q) - \sum_{q=p+1}^{N_\nu-1} \tilde{x}_\nu(q)\, \tilde{y}_\nu(p - q + N_\nu).
4.8.3 Now we turn to the discrete Haar basis related to decimation in time (see Sects. 4.4.2 and 4.5.1). We have

\varphi_\nu(p + N_\nu; j) \equiv \varphi_\nu(N_\nu; j - p\,2^\nu), \qquad p ∈ 0 : N_ν − 1,

so every basic signal of the ν-th level is a shift of the signal φ_ν(N_ν). For the sake of simplicity we will write φ_ν(j) instead of φ_ν(N_ν; j). Formula (4.4.8) yields

\varphi_\nu(j) = \sum_{q=0}^{2^{\nu-1}-1} \delta_N(j - q) - \sum_{q=2^{\nu-1}}^{2^\nu-1} \delta_N(j - q).   (4.8.11)

With α = ξ_s(0) and \tilde{ξ}_ν(p) = ξ_ν(p + N_ν), the expansion of a signal x over this basis takes the form

x(j) = 2^{-s} α + \sum_{\nu=1}^{s} 2^{-\nu} \sum_{p=0}^{N_\nu-1} \tilde{\xi}_\nu(p)\, \varphi_\nu(j - p\,2^\nu).   (4.8.12)
Proof The right side of (4.8.13) is N -periodic because N = 2Nν ν = Nν ν+1 ; the
left side is N -periodic as well. Hence it is sufficient to verify equality (4.8.13) for
j ∈ 0 : N − 1.
In the following lemma we will use the operation ⊕ of bitwise summation mod-
ulo 2 (see Sect. 1.5).
Lemma 4.8.5 Given k ∈ 0 : N − 1, k = (k_{s−1}, . . . , k_0)_2, the following equality is valid:

\varphi_\nu(j \oplus k) = (-1)^{k_{\nu-1}}\, \varphi_\nu(j - \lfloor k/2^\nu \rfloor\, 2^\nu), \qquad j ∈ 0 : N − 1.   (4.8.16)
Proof We write
j = j/ν+1 ν+1 + jν−1 ν + j ν ,
It is clear that
j ⊕ k = j/ν+1 ⊕ k/ν+1 ν+1 + jν−1 + kν−1 2 ν + j ν ⊕ k ν .
The dyadic convolution of signals x and y from C_N is the signal z with samples

z(j) = \sum_{k=0}^{N-1} x(k)\, y(j \oplus k), \qquad j ∈ 0 : N − 1.   (4.8.20)
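A tiny sketch of this definition (our own helper name); the commutativity check follows by substituting m = j ⊕ k in the sum:

```python
import numpy as np

def dyadic_convolution(x, y):
    """z(j) = sum_k x(k) * y(j XOR k): the dyadic convolution."""
    x, y = np.asarray(x), np.asarray(y)
    N = len(x)
    return np.array([sum(x[k] * y[j ^ k] for k in range(N)) for j in range(N)])

x, y = np.random.randn(8), np.random.randn(8)
# Substituting m = j ^ k shows that the dyadic convolution is commutative.
print(np.allclose(dyadic_convolution(x, y), dyadic_convolution(y, x)))   # True
```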
Let, along with (4.8.12), the signals y and z have the expansions

y(j) = 2^{-s} β + \sum_{\nu=1}^{s} 2^{-\nu} \sum_{p=0}^{N_\nu-1} \tilde{\eta}_\nu(p)\, \varphi_\nu(j - p\,2^\nu),

z(j) = 2^{-s} γ + \sum_{\nu=1}^{s} 2^{-\nu} \sum_{p=0}^{N_\nu-1} \tilde{\zeta}_\nu(p)\, \varphi_\nu(j - p\,2^\nu).

Then

\tilde{\zeta}_\nu(p) = \sum_{q=0}^{N_\nu-1} \tilde{\xi}_\nu(q)\, \tilde{\eta}_\nu(p \oplus q),   (4.8.21)

p ∈ 0 : N_ν − 1, \ ν = 1, . . . , s.
N ν −1
s
y( j ⊕ k) = 2−s β + 2−ν
ην ( p) ϕν ( j ⊕ k) ⊕ pν+1 .
ν=1 p=0
Since
( j ⊕ k) ⊕ pν+1 = k ⊕ ( j ⊕ pν+1 )
= k ⊕ ( j/ν+1 ⊕ p) ν+1 + jν−1 ν + j ν ,
N ν −1
s
y( j ⊕ k) = 2−s β + 2−ν ην ( p) ϕν k − ( j/ν+1 ⊕ p)ν+1 .
(−1) jν−1
ν=1 p=0
N ν −1
s
y( j ⊕ k) = 2−s β + 2−ν ην q ⊕ j/ν+1 ϕν (k − qν+1 ).
(−1) jν−1
ν=1 q=0
(4.8.22)
Let us substitute (4.8.12) and (4.8.22) into (4.8.20). Bearing in mind reality,
orthogonality, and norming of the basic signals we gain
N ν −1
s
z( j) = 2−s αβ + 2−ν (−1) jν−1
ξν (q)
ην q ⊕ j/ν+1 . (4.8.23)
ν=1 q=0
ν −1
N
(−1) jν−1
ην q ⊕ j/ν+1 = ην (q ⊕ p) (−1) j/ν δ Nν j/ν+1 − p
p=0
N ν −1
=
ην ( p ⊕ q) ϕν ( j − pν+1 ). (4.8.24)
p=0
s
N ν −1 N
ν −1
z( j) = 2−s αβ + 2−ν
ξν (q)
ην ( p ⊕ q) ϕν ( j − pν+1 ).
ν=1 p=0 q=0
Formula (4.8.21) shows that the ν-th level coefficients in the expansion of a dyadic convolution z over the Haar basis related to decimation in time are obtained as the dyadic convolution of the ν-th level expansion coefficients of the signals x and y.
4.9.1 We introduce one more recurrent sequence of signals:

w_0(k) = \delta_N(\cdot - k), \quad k ∈ 0 : N − 1;
w_\nu(l + p\,2^\nu) = w_{\nu-1}(l + 2p\,2^{\nu-1}) + w_{\nu-1}(l + (2p + 1)\,2^{\nu-1}),   (4.9.1)
w_\nu(l + 2^{\nu-1} + p\,2^\nu) = w_{\nu-1}(l + 2p\,2^{\nu-1}) - w_{\nu-1}(l + (2p + 1)\,2^{\nu-1}),

p ∈ 0 : N_ν − 1, \ l ∈ 0 : 2^{ν−1} − 1, \ ν = 1, . . . , s.
p ∈ 0 : N1 − 1, σ ∈ 0 : 1.
What do the signals w_s(k; j) look like? Answering this question requires some additional preparation.
4.9.2 We introduce a sequence of matrices

A_1 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}; \qquad A_\nu = \begin{pmatrix} A_{\nu-1} & A_{\nu-1} \\ A_{\nu-1} & -A_{\nu-1} \end{pmatrix}, \quad \nu = 2, . . . , s.   (4.9.4)
Proof When ν = 1, the assertion is obvious because A1 [k, j] = (−1)k j holds for
k, j ∈ 0 : 1. We perform an induction step from ν − 1 to ν.
We take k, j ∈ 0 : ν+1 − 1 and represent them in the following manner: k =
kν−1 2ν−1 + l, j = jν−1 2ν−1 + q. Here kν−1 , jν−1 ∈ 0 : 1 and l, q ∈ 0 : ν − 1.
According to (4.9.4) we have
l, q ∈ 0 : ν − 1, σ, τ ∈ 0 : 1.
w_\nu(l + p\,2^\nu) = \sum_{q=0}^{2^\nu-1} A_\nu[l, q]\, w_0(q + p\,2^\nu),   (4.9.8)

p ∈ 0 : N_ν − 1, \ l ∈ 0 : 2^ν − 1, \ ν = 1, . . . , s.
wν (l + pν+1 ) = wν (l + σ ν + pν+1 )
= wν−1 (l + 2 pν ) + (−1)σ wν−1 l + (2 p + 1)ν
ν −1
= Aν−1 [l , q] w0 (q + 2 pν )
q=0
ν −1
+ (−1)σ Aν−1 [l , q] w0 q + (2 p + 1)ν .
q=0
ν −1
+ Aν [l, q + ν ] w0 (q + ν + pν+1 )
q=0
ν+1 −1
= Aν [l, q] w0 (q + pν+1 ).
q=0
In particular, for ν = s we have p = 0 and

w_s(l; j) = \sum_{q=0}^{N-1} A_s[l, q]\, \delta_N(j - q) = A_s[l, j] = (-1)^{\{l, j\}_s}, \qquad l, j ∈ 0 : N − 1.

The functions

v_k(j) = (-1)^{\{k, j\}_s}, \qquad k, j ∈ 0 : N − 1,   (4.9.9)

are referred to as discrete Walsh functions.
\langle w_\nu(k), w_\nu(k') \rangle = \sum_{q=0}^{2^\nu-1} A_\nu[l, q]\, A_\nu[l', q].

Since A_ν is symmetric and

A_\nu A_\nu = 2^\nu I_{2^\nu}, \qquad \nu = 1, 2, . . . ,   (4.9.12)

we obtain

\langle w_\nu(k), w_\nu(k') \rangle = \sum_{q=0}^{2^\nu-1} A_\nu[l, q]\, A_\nu[q, l'] = (A_\nu A_\nu)[l, l'] = 2^\nu I_{2^\nu}[l, l'].

Now we can conclude that the scalar product ⟨w_ν(k), w_ν(k′)⟩ is nonzero only when p = p′ and l = l′, i.e. only when k = k′. In the latter case ‖w_ν(k)‖^2 = 2^ν for all k ∈ 0 : N − 1.
The theorem is proved.
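A small sketch that builds A_ν by the block recursion (4.9.4) and checks (4.9.12) numerically (the function name is ours):

```python
import numpy as np

def hadamard(nu):
    """A_nu via the block recursion (4.9.4)."""
    A = np.array([[1, 1], [1, -1]])
    for _ in range(nu - 1):
        A = np.block([[A, A], [A, -A]])
    return A

A3 = hadamard(3)
# Rows of A_s are the Walsh functions v_k(j) = (-1)^{ {k,j}_s }, and A_nu A_nu = 2^nu I.
print(np.array_equal(A3 @ A3, 8 * np.eye(8, dtype=int)))   # True
```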
The expansion of a signal x ∈ C_N over the Walsh basis has the form

x = \frac{1}{N} \sum_{k=0}^{N-1} x_s(k)\, v_k,   (4.10.1)

where

x_s(k) = \sum_{j=0}^{N-1} x(j)\, v_k(j), \qquad k ∈ 0 : N − 1.   (4.10.2)

More generally, for each ν ∈ 0 : s we have

x = \frac{1}{2^\nu} \sum_{k=0}^{N-1} x_\nu(k)\, w_\nu(k),   (4.10.3)

where the coefficients are computed by the recurrence

x_0(k) = x(k), \quad k ∈ 0 : N − 1;
x_\nu(l + p\,2^\nu) = x_{\nu-1}(l + 2p\,2^{\nu-1}) + x_{\nu-1}(l + (2p + 1)\,2^{\nu-1}),   (4.10.4)
x_\nu(l + 2^{\nu-1} + p\,2^\nu) = x_{\nu-1}(l + 2p\,2^{\nu-1}) - x_{\nu-1}(l + (2p + 1)\,2^{\nu-1}),

p ∈ 0 : N_ν − 1, \ l ∈ 0 : 2^{ν−1} − 1, \ ν = 1, . . . , s.

Scheme (4.10.4) realizes the fast Walsh transform.
p ∈ 0 : Nν − 1, l ∈ 0 : ν − 1, ν = s, s − 1, . . . , 1.
p ∈ 0 : Nν − 1, l ∈ 0 : ν+1 − 1, ν = 0, 1, . . . s.
We will not go into details. We just note that the branching scheme (a) in Fig. 4.2 in
this case also generates Haar basis related to decimation in time.
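A compact sketch of such a fast Walsh transform (our own function; the butterflies mirror the sum/difference structure of (4.10.4), with the output in Hadamard ordering):

```python
import numpy as np

def fast_walsh_transform(x):
    """Walsh coefficients via sum/difference butterflies: N*log2(N) additions."""
    x = np.asarray(x, dtype=float).copy()
    N = len(x)
    h = 1
    while h < N:
        for start in range(0, N, 2 * h):
            a = x[start:start + h].copy()
            b = x[start + h:start + 2 * h].copy()
            x[start:start + h] = a + b
            x[start + h:start + 2 * h] = a - b
        h *= 2
    return x

x = np.random.randn(8)
X = fast_walsh_transform(x)
# Inverse: apply the same butterflies again and divide by N.
print(np.allclose(fast_walsh_transform(X) / 8, x))   # True
```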
4.11.1 Signals of the exponential basis are ordered by frequency: the index k in the notation

u_k(j) = \omega_N^{kj} = \exp\Bigl(i\,\frac{2\pi k}{N}\, j\Bigr), \qquad k = 0, 1, . . . , N − 1,

plays the role of a frequency. We would like to order the Walsh functions v_k(j) = (-1)^{\{k,j\}_s} from this point of view. Unfortunately, the value {k, j}_s does not increase monotonically together with j. An example of this value's behavior for N = 2^3 and k = 2 = (0, 1, 0)_2 is presented in Table 4.3.
To attain monotonicity, we represent v_k(j) in the form

v_k(j) = (-1)^{\sum_{\alpha=0}^{s-1} k_\alpha (j_\alpha + j_{\alpha+1}\,2 + \cdots + j_{s-1}\,2^{s-1-\alpha})}.

Note that j = (j_{s−1}, . . . , j_0)_2, so

\lfloor j/2^\alpha \rfloor = j_{s-1}\,2^{s-1-\alpha} + \cdots + j_{\alpha+1}\,2 + j_\alpha.

We gain

v_k(j) = (-1)^{\sum_{\alpha=0}^{s-1} k_\alpha \lfloor j/2^\alpha \rfloor}, \qquad j ∈ 0 : N − 1.

Introducing the notation

\theta_k(j) = \sum_{\alpha=0}^{s-1} k_\alpha \lfloor j/2^\alpha \rfloor,
we come to the representation

v_k(j) = (-1)^{\theta_k(j)}, \qquad j ∈ 0 : N − 1.   (4.11.1)

Table 4.3  Values of {2, j}_3 for N = 2^3
j:         0  1  2  3  4  5  6  7
{2, j}_3:  0  0  1  1  0  0  1  1
s−1
s−1
θk (N ) = kα 2s−α = 2 kα 2s−1−α = 2 revs (k)
α=0 α=0
(a definition of the permutation revs can be found in Sect. 1.4). Therefore, the right
side of (4.11.1) for j = N is equal to unity. By virtue of N -periodicity, the left side
of (4.11.1) equals to unity too. Indeed, vk (N ) = vk (0) = 1. Thus, equality (4.11.1)
holds for j ∈ 0 : N .
We rewrite (4.11.1) in a form
vk ( j) = exp iπ θk ( j) , j ∈ 0 : N.
It is evident that the function θk ( j) varies from 0 to 2 revs (k) monotonically non-
decreasing while j increases from 0 to N . As a consequence, the argument π θk ( j) of
a complex number vk ( j) varies from 0 to 2π revs (k) monotonically non-decreasing
while j increases from 0 to N . Hence a point vk ( j) runs around the unit circle of the
complex plane revs (k) times. Thus a number revs (k) is treated as a frequency of a
function vk .
We denote \tilde{v}_k = v_{\mathrm{rev}_s(k)}. The function \tilde{v}_k has frequency k because rev_s(rev_s(k)) = k. It can be represented in the form

\tilde{v}_k(j) = (-1)^{\sum_{\alpha=0}^{s-1} k_{s-1-\alpha}\, j_\alpha}, \qquad k, j ∈ 0 : N − 1.

The Walsh functions \tilde{v}_0, \tilde{v}_1, . . . , \tilde{v}_{N−1} are ordered by frequency. They comprise the Walsh–Paley basis of the space C_N.
Figure 4.6 depicts the functions \tilde{v}_k(j) for N = 8.
4.11.2 There exists another ordering of Walsh functions: by the number of sign changes on the main period. To clarify this matter we need to return to Hadamard matrices (see par. 4.9.2).
We denote by wal_ν(k) the number of sign changes in the row of the Hadamard matrix A_ν with the index k ∈ 0 : 2^ν − 1. According to (4.9.4) we have wal_1(0) = 0 and wal_1(1) = 1.

Theorem 4.11.1 The following relations hold:

wal_1(k) = k, \quad k ∈ 0 : 1;
wal_\nu(2k) = wal_{\nu-1}(k),   (4.11.2)
wal_\nu(2k + 1) = 2^\nu - 1 - wal_{\nu-1}(k),   (4.11.3)

k ∈ 0 : 2^{ν−1} − 1, \ ν = 2, . . . , s.
Proof The first relation is true. We will verify (4.11.2) and (4.11.3). Recall that
Aν [k, j] = (−1){k, j}ν . Hence it follows that for k, j ∈ 0 : ν − 1 there hold
At first, let us show that (4.11.2) holds. Since Aν [2k, 2 j] = Aν [2k, 2 j + 1], we
may not take into account the elements Aν [2k, 2 j + 1] while determining the number
of sign changes. The remaining elements are Aν [2k, 2 j] = Aν−1 [k, j]; they have
walν−1 (k) sign changes by a definition. Relation (4.11.2) is ascertained.
Let us rewrite equality (4.11.3) in a form
j = 1, . . . , ν+1 − 1.
We will show that one of the rows of G j has a sign change while another one does
not. Let us consider two cases.
(a) j = 2 j + 1, j ∈ 0 : ν − 1. We write
Aν [2k, 2 j ] Aν [2k, 2 j + 1]
Gj =
Aν [2k + 1, 2 j ] Aν [2k + 1, 2 j + 1]
Aν−1 [k, j ] Aν−1 [k, j ]
= .
Aν−1 [k, j ] −Aν−1 [k, j ]
We see that G j has only one sign change either in the first or in the second row.
The sequence G 1 , . . . , G ν+1 −1 accumulates ν+1 − 1 sign changes in the rows
of the matrix Aν with indices 2k and 2k + 1, which conforms to (4.11.4).
The theorem is proved.
Corollary 4.11.1 The mapping k → wal_ν(k) is a permutation of the set {0, 1, . . . , 2^ν − 1}.
4.11.4 With the aid of the permutations rev_s and wal_s we defined the frequency and the number of sign changes of Walsh functions. It turns out that these permutations are bound with each other through the permutation grey_s (see Sect. 1.4).
Proof Let us remind the recurrent relations for the permutations revν and greyν :
rev1 (k) = k, k ∈ 0 : 1;
k ∈ 0 : ν − 1, ν = 2, 3, . . . ;
grey1 (k) = k, k ∈ 0 : 1;
k ∈ 0 : ν − 1, ν = 2, 3, . . .
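As a numerical illustration (our own check, taking grey_s to be the standard binary-reflected Gray code), counting the sign changes in the rows of A_ν and comparing with the composition of bit reversal and the inverse Gray code gives matching permutations:

```python
import numpy as np

def hadamard(nu):
    A = np.array([[1, 1], [1, -1]])
    for _ in range(nu - 1):
        A = np.block([[A, A], [A, -A]])
    return A

def sign_changes(row):
    return int(np.sum(row[1:] != row[:-1]))

def bit_reverse(k, nu):
    return int(format(k, f'0{nu}b')[::-1], 2)

def gray_to_binary(g):
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

nu = 3
wal = [sign_changes(r) for r in hadamard(nu)]
via_perms = [gray_to_binary(bit_reverse(k, nu)) for k in range(2 ** nu)]
print(wal, wal == via_perms)   # [0, 7, 3, 4, 1, 6, 2, 5] True
```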
4.12.1 Let us write down the expansion of a signal x ∈ C_N over the Walsh basis ordered by frequency:

x = \frac{1}{N} \sum_{k=0}^{N-1} \xi(k)\, \tilde{v}_k,   (4.12.1)

where ξ(k) = ⟨x, \tilde{v}_k⟩. We are interested in the case when ξ(k) = 0 for k ∈ 2^ν : N − 1, ν ∈ 0 : s − 1.
We denote

h_\nu(j) = \sum_{q=0}^{N_\nu-1} \delta_N(j - q).

The function h_ν(j) is an N-periodic step: on the main period it equals unity for j ∈ 0 : N_ν − 1 and zero for j ∈ N_ν : N − 1.
Theorem 4.12.1 Consider expansion (4.12.1). If ξ(k) = 0 for k ∈ 2^ν : N − 1, ν ∈ 0 : s − 1, then

x(j) = \sum_{l=0}^{2^\nu-1} x(l N_\nu)\, h_\nu(j - l N_\nu), \qquad j ∈ 0 : N − 1.   (4.12.2)

Formula (4.12.2) shows that under the premises of the theorem the signal x(j) is a step function: it equals x(l N_ν) for j ∈ l N_ν : (l + 1)N_ν − 1, l = 0, 1, . . . , 2^ν − 1.
4.12.2 We precede the proof of the theorem with a few auxiliary assertions.
Lemma 4.12.1 The following formula is valid:

\delta_{2^\nu}(\mathrm{rev}_s(j)) = \sum_{q=0}^{N_\nu-1} \delta_N(j - q), \qquad j ∈ 0 : N − 1.   (4.12.3)
ν−1
δν+1 ( j) = δ2 ( jα ). (4.12.6)
α=0
Proof We have
ν+1 −1
1
h ν ( j) =
vk ( j), j ∈ 0 : N − 1. (4.12.8)
ν+1 k=0
Proof Denote the right side of (4.12.8) as f ν ( j). We will show that
f ν revs ( j) = δν+1 ( j). (4.12.9)
ν+1 −1
1
f ν revs ( j) = (−1){revs (k), revs ( j)}s
ν+1 k=0
ν+1 −1 s−1
1
= (−1)kα jα .
ν+1 k=0 α=0
1 ν−1
1
1
f ν revs ( j) = ··· (−1)kα jα
ν+1 kν−1 =0 k0 =0 α=0
ν−1
1
1
= (−1)kα jα .
ν+1 α=0 kα =0
1 1 kα jα
1 1
(−1)kα jα = ω = δ2 ( jα ).
2 k =0 2 k =0 2
α α
We gain
ν−1
f ν revs ( j) = δ2 ( jα ).
α=0
Lemma 4.12.4 For all k and l from the set 0 : ν+1 − 1 there holds
Proof Let l = (lν−1 , lν−2 , . . . , l0 )2 and k = (kν−1 , kν−2 , . . . , k0 )2 . Then revν (l) =
(l0 , l1 , . . . , lν−1 )2 and
Moreover,
revs (k) = k0 2s−1 + k1 2s−2 + · · · + kν−1 2s−ν ,
so that
{revs (k), l Nν }s = k0 lν−1 + k1lν−2 + · · · + kν−1l0 . (4.12.12)
h ν ( j ⊕ l Nν ) = h ν ( j − l Nν ). (4.12.13)
Proof Let us use the fact that δ N ( j ⊕ k) = δ N ( j − k) for all j and k from 0 : N − 1.
The definition of h ν yields
N ν −1
N ν −1
h ν ( j ⊕ l Nν ) = δ N ( j ⊕ l Nν ) − q = δ N ( j ⊕ l Nν ) ⊕ q
q=0 q=0
N ν −1
= δ N j ⊕ (l Nν ⊕ q) .
q=0
ν −1 ν −1
N
N
h ν ( j ⊕ l Nν ) = δ N j ⊕ (l Nν + q) = δ N j − (l Nν + q)
q=0 q=0
N ν −1
= δ N ( j − l Nν ) − q) = h ν ( j − l Nν ).
q=0
4.12.3 Now we turn to the proof of the theorem. On the basis of the theorem's hypothesis and formula (4.12.1) we write

x(j) = \frac{1}{N} \sum_{k=0}^{2^\nu-1} \xi(k)\, \tilde{v}_k(j), \qquad j ∈ 0 : N − 1.   (4.12.14)
ν+1 −1
D(l) = d(k) (−1){revν (l), k}ν
k=0
ν+1 −1 ν+1 −1
= vk ( j) (−1){revs (k), l Nν }s =
vk ( j)
vk (l Nν ),
k=0 k=0
l ∈ 0 : ν+1 − 1.
We note that
vk ( j ) =
vk ( j)
vk ( j ⊕ j ), j, j ∈ 0 : N − 1. (4.12.15)
Indeed,
vk ( j ) = (−1){revs (k), j}s +{revs (k), j }s
vk ( j)
s−1
s−1
= (−1)ks−1−α jα + jα 2 = (−1)ks−1−α ( j⊕ j )α =
vk ( j ⊕ j ).
α=0 α=0
ν+1 −1
D(l) =
vk ( j ⊕ l Nν ) = ν+1 h ν ( j ⊕ l Nν ) = ν+1 h ν ( j − l Nν ).
k=0
With the aid of the DWT inversion formula and equality (4.12.10) we can recon-
vk ( j) for k ∈ 0 : ν+1 − 1:
struct the values
ν+1 −1
1
vk ( j) = d(k) = D(l) (−1){revν (l), k}ν
ν+1 l=0
ν+1 −1
= h ν ( j − l Nν ) (−1){revs (k), l Nν }s
l=0
ν+1 −1
= h ν ( j − l Nν )
vk (l Nν ). (4.12.16)
l=0
4.12.4 Theorem 4.12.1 can be inverted. To be more precise, the following assertion
is true.
ν+1 −1
x( j) = a(l) h ν ( j − l Nν ). (4.12.17)
l=0
N −1
ξ(k) = vk ( j)
x( j)
j=0
ν+1 −1 N −1
ξ(k) = a(l) h ν ( j ⊕ l Nν )
vk ( j)
l=0 j=0
ν+1 −1 N −1 ν+1 −1
1
= a(l)
vk ( j)
v p ( j)
v p (l Nν )
ν+1 l=0 j=0 p=0
ν+1 −1 ν+1 −1 N −1
1
= a(l)
v p (l Nν )
vk ( j)
v p ( j).
ν+1 l=0 p=0 j=0
4.13.1 We still presume that N = 2s . Let us take arbitrary nonzero complex numbers
t (0), t (1), . . . , t (N /2 − 1) and construct one more sequence of bases in C N :
g0 (k) = δ N (· − k), k ∈ 0 : N − 1;
p ∈ 0 : Nν − 1, l ∈ 0 : ν − 1, ν = 1, . . . , s.
s (2l)
Formula (4.13.1) differs from (4.6.2) only by the coefficient ωrev
N being replaced
with t (l). A transition from the basis gν−1 to the basis gν can be written in a single
line:
1
gν (2l + σ )Nν + p = (−1)σ τ [t (l)]τ gν−1 (l Nν−1 + τ Nν + p), (4.13.2)
τ =0
1
g1 (σ N1 + p) = (−1)σ τ [t (0)]τ g0 (τ N1 + p), (4.13.3)
τ =0
p ∈ 0 : N1 − 1, σ ∈ 0 : 1.
Let us express signals of the ν-th level through signals of the zero level. In order
to do this we introduce a sequence of matrices T1 , T2 , . . . , Ts by the rule
1 t (0)
T1 = ;
1 −t (0)
l, q ∈ 0 : ν − 1, σ, τ ∈ 0 : 1, ν = 2, . . . , s.
ν+1 −1
gν (l Nν + p) = Tν [l, q] g0 (q Nν + p), (4.13.5)
q=0
p ∈ 0 : Nν − 1, l ∈ 0 : ν+1 − 1, ν = 1, . . . , s.
1
ν −1
4.13.2 Let us find an explicit expression for the elements of the matrix Tν .
ν−1
Tν [l, q] = (−1){l,q}ν t (l/2α+1 )
qα
. (4.13.7)
α=0
l /2α = l/2α+1 .
s−1
gs (k; j) = (−1){k, j}s t (k/2α+1 )
jα
α=0
s−1
t (k/2α+1 )
jα
= vk ( j) , k, j ∈ 0 : N − 1, (4.13.8)
α=0
4.13.3 Let us clarify the conditions when the family gν consists of pairwise orthog-
onal signals.
1
1
T1 [k, j] T1 [k , j] = (−1)k j [t (0)] j (−1)k j [t (0)] j
j=0 j=0
1
1
(k−k ) j
= (−1)(k−k ) j = ω2 = 2δ2 (k − k ).
j=0 j=0
ν+1 −1
Tν [k, j] Tν [k , j]
j=0
ν −1
1
= Tν [2l + σ, 2q + τ ] Tν [2l + σ , 2q + τ ]
q=0 τ =0
ν −1
1
= (−1)σ τ [t (l)]τ Tν−1 [l, q] (−1)σ τ [t (l )]τ Tν−1 [l , q]
q=0 τ =0
ν −1
1
= Tν−1 [l, q] Tν−1 [l , q] (−1)(σ −σ )τ [t (l)]τ [t (l )]τ
q=0 τ =0
1
= 2ν−1 δν (l − l ) (−1)(σ −σ )τ [t (l)]τ [t (l )]τ .
τ =0
ν+1 −1
1
Tν [k, j] Tν [k , j] = 2ν−1 (−1)(σ −σ )τ = 2ν δ2 (σ − σ ).
j=0 τ =0
We see that the left side of (4.13.9) is nonzero only when l = l and σ = σ , i.e. only
when k = k . In the latter case we have
ν+1 −1
|Tν [k, j]|2 = 2ν , k ∈ 0 : ν+1 − 1.
j=0
gν (k), gν (k ) = gν (l Nν + p), gν (l Nν + p )
ν+1 −1 ν+1 −1
= Tν [l, q] Tν [l , q ] δ N (q − q )Nν + ( p − p ) .
q=0 q =0
When q = q , the corresponding terms in the double sum are equal to zero. Taking
into account (4.13.9) we gain
ν+1 −1
gν (k), gν (k ) = δ N ( p − p ) Tν [l, q] Tν [l , q]
q=0
= 2ν δ N ( p − p ) δν+1 (l − l ).
We see that the scalar product gν (k), gν (k ) is nonzero only when p = p and l = l ,
i.e. only when k = k . In the latter case gν (k)2 = 2ν holds for all k ∈ 0 : N − 1.
The theorem is proved.
4.13.4 We will consider the particular case of choosing coefficients t(l) whose moduli are equal to unity. Let us fix r ∈ 1 : s and put

t^{(r)}(l) = \omega_N^{\mathrm{rev}_s(2l)} \ \text{for } l ∈ 0 : 2^{r-1} − 1, \qquad t^{(r)}(l) = 1 \ \text{for } l ∈ 2^{r-1} : 2^{s-1} − 1.   (4.13.11)
defined by the recurrent relations (4.13.1) with the coefficients t (l) = t (r ) (l). Since
|t (r ) (l)| ≡ 1, Theorem 4.13.3 yields that signals (4.13.12) form an orthogonal basis
in a space C N for all ν ∈ 1 : s and r ∈ 1 : s. A collection of signals (4.13.12) with
ν = s is referred to as Ahmed–Rao basis with an index r .
s (2l)
When r = s, formula (4.13.11) takes a form t (s) (l) = ωrev N , l ∈ 0 : s − 1.
In this case the recurrent relations (4.13.1) with t (l) = t (s) (l) coincide with (4.6.2)
and therefore generate a sequence of bases leading to the exponential basis. An
explicit expression for the signals of this sequence is presented in Theorem 4.6.2. In
particular, formula (4.6.1) yields
revs (k) j
gs(s) (k; j) = ω N , k, j ∈ 0 : N − 1.
Note that Walsh basis is obtained both by sequence (4.9.1) and sequence (4.13.1)
with t (l) ≡ 1.
revs (2l/2α+1 )
t (r ) (l/2α+1 ) = ω N , α ∈ 0 : ν − 1. (4.13.14)
ν−1
ν−1
2α revs (l)qα
t (r ) (l/2α+1 ) (−1)−lα qα ω N
qα
=
α=0 α=0
ν−1
revs (l) qα 2α
= (−1){l,q}ν ω N α=0
revs (l)q
= (−1){l,q}ν ω N . (4.13.16)
Combining formula (4.13.7) with t (l) = t (r ) (l) and (4.13.16), we come to (4.13.13).
The lemma is proved.
Here p is the index of the most significant nonzero digit in a binary code of l.
We introduce one more notation. If q = (qν−1 , qν−2 , . . . , q0 )2 then we put
When α ∈ r − 1 : ν − 1, the left side of (4.13.15) equals to unity. The right side
of (4.13.15) also equals to unity both when α ∈ r : ν − 1 and α = r − 1. When
α ∈ 0 : r − 2, we have to repeat the manipulations from the proof of the former
lemma replacing ν by r .
Now we take l ∈ r +1 : ν+1 − 1 and represent l in form (4.13.17). When α ∈
0 : p − r , we have p − (α + 1) ≥ r − 1, therefore
t (r ) (l/2α+1 ) = 1, α ∈ 0 : p − r. (4.13.19)
the right side of (4.13.20) also equals to unity both when α ∈ p + 1 : ν − 1 and α =
p. When α ∈ p − r + 1 : p − 1, an inequality p − (α + 1) ≤ r − 2 holds, therefore
l/2α+1 ∈ 0 : r − 1 and
ν−1
ν−1
2α revs (l)qα
t (r ) (l/2α+1 ) (−1)−lα qα ω N
qα
=
α=0 α= p−r +1
ν−1
ν−1 revs (l) qα 2α
= (−1)− α= p−r +1 lα qα ωN α= p−r +1
.
Substituting this expression into (4.13.7) and taking into account that
ν−1
qα 2α = [q] p−r +1 ,
α= p−r +1
ν−1
{l, q}ν − lα qα = {l, q} p−r +1 ,
α= p−r +1
0 : 2r − 1.
Proof For k ∈ 0 : r +1 − 1, k = (kr −1 , . . . , k0 )2 , we have
s−1
[ j] p−r +1 = jα 2α = 2 p−r +1 j ,
α= p−r +1
We have
p−r p−r −α
(−1){k, j } p−r +1
= (−1) α=0 kα ( jα + jα+1 2+···+ j p−r 2 )
p−r
= (−1) α=0 kα j /α+1 .
Denote
p−r
θk ( j ) = kα j /α+1 . (4.13.21)
α=0
In this notation
gs(r ) (k; j) = exp iπ [θk ( j ) + 2−s+ p−r +2 revs (k) j ] .
By putting
ζk ( j) = θk ( j ) + 2−s+ p−r +2 revs (k) j
we come to a representation
gs(r ) (k; j) = exp iπ ζk ( j) , j ∈ 0 : N − 1. (4.13.22)
One can see that formula (4.13.22) is valid for j = N as well. Indeed, on the
strength of N -periodicity we have gs(r ) (k; N ) = gs(r ) (k; 0) = 1. At the same time
j = 2s− p+r −1 and j = 0 when j = N , so ζk (N ) = 2revs (k) and exp iπ ζk (N ) =
1.
Since ζk (0) = 0 and ζk (N ) = 2revs (k), to complete the proof of the theorem we
only need to verify that the function ζk ( j) monotonically nondecreases while j varies
from 0 to N .
We take j, l ∈ 0 : N and represent them in a form j = j 2 p−r +1 + j , l =
p−r +1
l2 + l . Presuppose that j > l. Then j ≥ l because | j − l | ≤ 2 p−r +1 − 1.
When j = l , the inequality ζk ( j) ≥ ζk (l) follows from monotonic nondecrease of
the function θk ( j ).
Assume that j > l . Let us estimate θk ( j ). According to (4.13.21) we gain
p−r
p−r
θk (2 p−r +1 − 1) ≤ kα 2 p−r +1 /α+1 = kα 2 p−r −α+1
α=0 α=0
p−r
= 2−s+ p−r +2 kα 2s−1−α ≤ 2−s+ p−r +2 revs (k).
α=0
and
|θk ( j ) − θk (l )| ≤ 2−s+ p−r +2 revs (k).
Now we write
4.14.1 We will show that the discrete Fourier transform of any order can be reduced to a DFT whose order is a power of two.
We take the Fourier matrix F_N of order N ≥ 3 with the elements

F_N[k, j] = \omega_N^{-kj}, \qquad k, j ∈ 0 : N − 1,

and introduce the matrix G_N with the elements

G_N[k, j] = \omega_{2N}^{(k-j)^2}, \qquad k, j ∈ 0 : N − 1,

together with the diagonal matrix D_N = \mathrm{diag}\bigl(\omega_{2N}^{-k^2}\bigr)_{k=0}^{N-1}.
F_N = D_N G_N D_N.   (4.14.1)

Proof We have

F_N[k, j] = \omega_N^{-kj} = \omega_{2N}^{-k^2}\, \omega_{2N}^{(k-j)^2}\, \omega_{2N}^{-j^2}.

On the other hand,

(D_N G_N D_N)[k, j] = \sum_{l=0}^{N-1} D_N[k, l]\, (G_N D_N)[l, j] = \sum_{l=0}^{N-1} \sum_{l'=0}^{N-1} D_N[k, l]\, G_N[l, l']\, D_N[l', j]
= \sum_{l=0}^{N-1} D_N[k, l]\, G_N[l, j]\, \omega_{2N}^{-j^2} = \omega_{2N}^{-k^2}\, G_N[k, j]\, \omega_{2N}^{-j^2} = \omega_{2N}^{-k^2}\, \omega_{2N}^{(k-j)^2}\, \omega_{2N}^{-j^2}.
We put

a_k = \omega_{2N}^{k^2}, \qquad k ∈ 0 : N − 1,

and choose M := 2^m > 2N − 1. We introduce a signal \tilde{h} ∈ C_M with the following values on the main period:

\tilde{h}[0 : M − 1] = \bigl(a_0, a_1, . . . , a_{N-1}, \underbrace{0, . . . , 0}_{M-(2N-1)\ \text{times}}, a_{N-1}, a_{N-2}, . . . , a_1\bigr).
We define the circulant matrix \tilde{G}_M by

\tilde{G}_M[k, j] = \tilde{h}(k - j), \qquad k, j ∈ 0 : M − 1.   (4.14.3)

Note that

\tilde{G}_M[k, j] = G_N[k, j] \quad \text{for } k, j ∈ 0 : N − 1.   (4.14.4)

Indeed, \tilde{h}(-j) = \tilde{h}(M - j) = a_j for j ∈ 1 : N − 1, so for k ∈ 0 : N − 1 we gain

\tilde{G}_M[k, 0 : N − 1] = \bigl(\tilde{h}(k), \tilde{h}(k-1), . . . , \tilde{h}(1), \tilde{h}(0), \tilde{h}(-1), . . . , \tilde{h}(-N+k+1)\bigr) = \bigl(a_k, a_{k-1}, . . . , a_1, a_0, a_1, . . . , a_{N-k-1}\bigr).   (4.14.5)
M
F N x = D N , O G z. (4.14.8)
M we gain
By a definition (4.14.3) of the matrix G
M−1
M
G z [k] = z( j)
h(k − j), k ∈ 0 : M − 1. (4.14.9)
j=0
The right side of this equality is the cyclic convolution \tilde{z} ∗ \tilde{h}. The convolution theorem yields

\tilde{z} ∗ \tilde{h} = F_M^{-1}\bigl(F_M(\tilde{z})\, F_M(\tilde{h})\bigr).   (4.14.10)

The spectrum \tilde{H} = F_M(\tilde{h}) does not depend on x. It can be calculated in advance:

\tilde{H}(k) = 1 + \sum_{j=1}^{N-1} \omega_{2N}^{j^2} \omega_M^{-kj} + \sum_{j=1}^{N-1} \omega_{2N}^{j^2} \omega_M^{-k(M-j)} = 1 + 2 \sum_{j=1}^{N-1} \omega_{2N}^{j^2} \cos\frac{2\pi k j}{M}, \qquad k ∈ 0 : M − 1.   (4.14.11)

It remains to multiply by the diagonal matrix D_N:

X(k) = \omega_{2N}^{-k^2}\, \tilde{X}(k), \qquad k ∈ 0 : N − 1.
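Below is a hedged Python sketch of the whole scheme (the helper names and the use of NumPy's FFT for the length-M transforms are our own choices, not the book's):

```python
import numpy as np

def dft_via_power_of_two(x):
    """DFT of arbitrary order N through a power-of-two cyclic convolution."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    M = 1 << int(np.ceil(np.log2(2 * N - 1)))            # M = 2^m > 2N - 1
    chirp = np.exp(-1j * np.pi * np.arange(N) ** 2 / N)  # omega_{2N}^{-k^2}
    z = np.zeros(M, dtype=complex)
    z[:N] = x * chirp                                    # D_N x, zero-padded to length M
    h = np.zeros(M, dtype=complex)
    h[:N] = np.conj(chirp)                               # h(j)     = omega_{2N}^{j^2}
    h[M - N + 1:] = np.conj(chirp[1:])[::-1]             # h(M - j) = omega_{2N}^{j^2}
    u = np.fft.ifft(np.fft.fft(z) * np.fft.fft(h))       # cyclic convolution z * h
    return chirp * u[:N]                                 # multiply by D_N once more

x = np.random.randn(12) + 1j * np.random.randn(12)       # N = 12 is not a power of two
print(np.allclose(dft_via_power_of_two(x), np.fft.fft(x)))   # True
```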
Exercises
Prove that h revs ( j) = δ Nν ( j), j ∈ 0 : N − 1.
4.3 Let ψν ( j) and ϕν ( j) be the signals introduced in Sects. 4.8.1 and 4.8.3, respec-
tively. Prove that
ϕν revs ( j) = ψν ( j), j ∈ 0 : N − 1.
ψ1 ( j) = δ N ( j) − δ N ( j − N /2),
ψν ( j) = ψν−1 (2 j), ν = 2, . . . , s.
4.5 Expand the unit pulse δ N over the Haar basis related to decimation in time.
4.6 Expand the unit pulse δ N over the Haar basis related to decimation in frequency.
4.7 Let q = (qs−1 , qs−2 , . . . , q0 )2 . Prove that
s
δ N ( j − q) = 2−s + (−1)qν−1 2−ν ϕν ( j − q/ν+1 ν+1 ).
ν=1
s
δ N ( j − q) = 2−s + (−1)qs−ν 2−ν ψν ( j − q Nν ).
ν=1
s
N ν −1
−s −ν
x( j) = 2 + 2 ψν ( j − p).
ν=1 p=0
s
N ν −1
k+1 −1
h k ( j) = δ N ( j − q), k ∈ 1 : s − 1.
q=0
s
h k ( j) = 2−s+k + 2−ν+k ϕν ( j).
ν=k+1
4.12 Prove that the following expansion is true for the unit steps from the previous
exercise:
k+1 −1
s−k
h k ( j) = 2−s+k + 2−ν ψν ( j − p).
ν=1 p=0
s
N ν −1
holds for j ∈ 0 : N − 1.
4.14 We take expansion (4.8.2) of a signal y( j). Prove that for all q ∈ Z there holds
N ν −1
s
y( j − q) = 2−s β + 2−ν (−1)( p−q)/Nν
yν p − q Nν ψν ( j − p).
ν=1 p=0
1/vk ( j) = vk ( j),
vk ( j) vk ( j) = vm ( j),
where m = k ⊕ k .
4.19 We denote by vk(1) ( j) Walsh functions with the main period 0 : N1 − 1, i.e.
{N − 1 − j, j}s = 0
4.22 Let vk ( j) be one of Walsh functions with the property vk (N − 1) = −1. Prove
that
vk (N − 1 − j) = −vk ( j), j ∈ 0 : N − 1.
4.27 We turn to discrete Walsh transform W N (see par. 4.10.1). Prove the Parseval
equality
x2 = N −1 W N (x)2 .
V2 p+1 (k) = V2 p (k − N1 ), k ∈ 0 : N − 1.
4.32 Along with the Fourier spectrum V p of the Walsh function v p with the main
period 0 : N − 1 we consider the Fourier spectrum V p(1) of the Walsh function v (1)
p
with the main period 0 : N1 − 1. Prove that the spectra V p and V p(1) are bound with
the relations
V p (2k) = 2V p(1) (k), V p (2k + 1) = 0,
N1 −1
1
VN1 + p (2k) = 0, VN1 + p (2k + 1) = V (1) (l) h(k − l)
N1 l=0 p
v2k ( j) =
vk (2 j)
4.35 Let vk be a Walsh function ordered by the number of sign changes (see
v N −1 = v1 .
par. 4.11.3). Prove that
4.36 We take a step-function (4.12.17) and expand it over the Walsh basis ordered
by frequency. What are the coefficients ξ(k) of this expansion for k ∈ 0 : ν+1 − 1?
f ( j1 N + j0 ) = v j1 ( j0 ), j1 , j0 ∈ 0 : N − 1,
Comments
introduced in [30]. The paper [25] is devoted to a comparative study of the two Haar
bases.
Among the results concerning Haar bases we lay emphasis on a convolution
theorem; to be more accurate, on a cyclic convolution theorem in case of Haar basis
related to decimation in frequency, and on a dyadic convolution theorem in case of
decimation in time. This topic has a long history. The final results were obtained
in [24].
The recurrent sequence of orthogonal bases (4.6.2) can be defined explicitly, as it
was done in [35], or can be deduced from (4.2.1) with the aid of reverse permutation.
In our book we use the latter approach as being more didactic. This approach is
published for the first time ever.
The recurrent sequence of orthogonal bases (4.9.1) that leads to fast Walsh trans-
form was studied in [37]. A question of ordering of discrete Walsh functions and
their generalizations was considered in [23, 49].
Note that the recurrent relations (4.2.1) that play a central role in construction
of fast algorithms have just been presented. However they could be deduced on the
basis of a factorization of a discrete Fourier transform matrix into a product of sparse
matrices. This requires more sophisticated technical tools, in particular, a Kronecker
product of matrices. On the subject of details and generalizations, refer to [12, 22,
32, 44, 46, 48, 50].
A sampling theorem in Walsh basis (in a more general form) is published in [39].
Ahmed–Rao bases are introduced in the book [1]. In case of N = 2^s they parametrically consolidate the Walsh basis and the exponential basis. Signals of the Walsh basis take the two values ±1; signals of the exponential basis take the N values ω_N^q, q ∈ 0 : N − 1. Signals of the Ahmed–Rao basis with an index r ∈ 1 : s take the 2^r values ω_{2^r}^q, q ∈ 0 : 2^r − 1. Ahmed–Rao bases are studied in the papers [14–16].
In Sect. 4.14 we ascertain that calculation of DFT of any order can be reduced to
calculation of DFT whose order is a power of two. This fact is mentioned in [46,
pp. 208–211].
Solutions
To Chapter 1
Hence
f ( j) ∈ {0, d, 2d, . . . , (q − 1)d}.
{a j + bk | a ∈ Z, b ∈ Z} = { p( j − k) + qk | p ∈ Z, q ∈ Z}
Let us verify that this representation of a number j is unique. Assume that there
is another representation j = j1 n 2 + j2 n 1 N with j1 ∈ 0 : n 1 − 1, j2 ∈ 0 : n 2 − 1.
Then ( j1 − j1 )n 2 + ( j2 − j2 )n 1 N = 0 holds. It means that
( j1 − j1 )n 2 + ( j2 − j2 )n 1 = pN
(a1 a2 · · · as )(n 1 n 2 · · · n s ) + pm = 1.
jn 1 n 2 = p1 n 1 n 1 n 2 = n 1 p1 n 2 = 0.
Furthermore, by virtue of relative primality of Ns+1 and n s+1 there exist integer
numbers as+1 and bs+1 with the property
1.11 According to the result of the previous exercise there exist integer numbers
a1 , a2 , . . . , as such that
s
aα Nα = 1. (S.1)
α=1
We put jα = aα jn α . Taking into account the result of Problem 1.2 we gain
s s s
jα Nα = aα jn α Nα = aα j Nα n α Nα
α=1 N α=1 N α=1 N
s
= j aα N α = j.
α=1 N
where kα ∈ 0 : n α − 1. Then
s
(kα − kα ) pα Nα = 0,
α=1 N
so that
s
(kν − kν ) pν Nν = pN
ν=1
i. e. kα = kn α .
When jν−1 = 0, this is another notation of the second line of relations (1.4.5). Let
jν−1 = 1. Equality (1.4.6) yields
ν ν
greyν 2ν−1 + jν−k 2ν−k = 2ν−1 + greyν−1 (1 − jν−k ) 2ν−k .
k=2 k=2
ν
greyν−1 jν−1 + jν−k 2 2ν−k = jν−1 + jν−2 2 2ν−2
k=2
ν
+ greyν−2 jν−2 + jν−k 2 2ν−k .
k=3
1.14 By virtue of the result of the previous exercise it is sufficient to solve a system
of equations
jν−1 = pν−1 ,
1.15 We have
n
3 n n−1
k − (k − 1)3 = k3 − k 3 = n3.
k=1 k=1 k=0
n
3 n
k − (k − 1)3 = (3k 2 − 3k + 1).
k=1 k=1
Therefore
n
3 k 2 = n 3 + 23 n(n + 1) − n = 1
2
n(2n 2 + 3n + 1) = 1
2
n(n + 1)(2n + 1),
k=1
n n
n n
Sn = [n − (n − k)] = (n − k ) = n 2n − Sn .
k=0
n−k k =0
k
It is remaining to take into account that by virtue of relative primality of n and N the
N −1
set of powers kn N k=0 is a permutation of the set {0, 1, . . . , N − 1}.
1.18 We have a0 m + b0 n = 1 for some integer a0 and b0 . We write
Hence
ωmn = ωmb0 m ωna0 n .
1.19 Denote
N −1
PN −1 (z) = zk .
k=0
For z = 1 we have
1 − zN
PN −1 (z) = .
1−z
j
It is clear that PN −1 (ω N ) = 0 for j = 1, 2, . . . , N − 1. Thus, we know N − 1 dif-
ferent roots ω N , ω2N , . . . , ω NN −1 of the polynomial PN −1 (z) of degree N − 1. This
lets us write down the representation
N −1
j
PN −1 (z) = (z − ω N ).
j=1
1.20 Let
r
Pr (z) = ak z k .
k=0
Then
r −1
Pr ( j) = ar ( j + 1)r − j r + ak ( j + 1)k − j k =: Pr −1 ( j).
k=0
To Chapter 2
2.1 If x is an even signal then x(0) = x(0) holds, so the value x(0) is real. By virtue
of N -periodicity we have x(N − j) = x( j) for j ∈ 1 : N − 1.
Conversely, let x(0) be a real number and x(N − j) = x( j) hold for j ∈ 1 : N −
1. Then x(− j) = x( j) holds for j ∈ 0 : N − 1, and N -periodicity yields x(− j) =
x( j) for all j ∈ Z.
This exercise shows how to determine an even signal through its values on the
main period only.
2.3 Assume that there exist an even signal x0 and an odd signal x1 such that x( j) =
x0 ( j) + x1 ( j) holds. Then
x(− j) = x 0 (− j) + x 1 (− j) = x0 ( j) − x1 ( j).
Now one can easily verify that the signals x0 and x1 of a form (S.4) are even and
odd, respectively, and that x = x0 + x1 holds.
2.6 We will use Lemma 2.1.4. Taking into account that r ≤ N − 1 we gain
r r
rr −s r −s r
(δ N )
r 2
= (−1) δ N (· + s), (−1) δ N (· + s )
s=0
s s =0
s
r r 2
r r r
= (−1)s+s δ N (s − s ) = .
s,s =0
s s s=0
s
m−1 m−1 N −1
x(s + ln) = x( j) δ N (s + ln − j)
l=0 l=0 j=0
N −1 N −1
m−1
= x( j) δmn (s − j) + ln = x( j) δn (s − j).
j=0 l=0 j=0
2.8 It is sufficient to verify that the equality jk N = 0 holds if and only if jk = 0
and j N = 0.
Let jk N = 0. It means that j = p k N holds for some integer p. Hence it follows
that j is divisible both by k and by N , i. e. that jk = 0 and j N = 0.
The converse proposition constitutes the contents of Problem 1.9 (for s = 2).
N −1 N −1 N −1 N −1
δ N (k j + l) = δ N k j N + l = δ N ( j + l) = δ N ( j) = 1.
j=0 j=0 j =0 j=0
2.10 The DFT inversion formula yields that a signal oddity condition x(− j) =
−x( j) is equivalent to the identity
N −1 N −1
−k j −k j
X (k) ω N = − X (k) ω N , j ∈ Z,
k=0 k=0
which, in turn, holds if and only if X (k) = −X (k) or X (k) = −X (k) for all k ∈ Z.
The latter characterizes the spectrum X as pure imaginary.
where the spectra A and B are even (see Theorem 2.2.3). Further,
N −1 N −1
(N −k) j −k j
X (N − k) ω N = X (k) ω N .
k=N /2+1 k=N /2+1
= Re x( j) = x( j).
2.13 We take a real signal x and correspond it with a complex signal xa with a
spectrum ⎧
⎪ X (k) for k = 0,
⎨
X a (k) = 2X (k) for k ∈ 1 : (N − 1)/2,
⎪
⎩
0 for k ∈ (N + 1)/2 : N − 1.
Since
(N −1)/2 N −1 N −1
kj (N −k) j −k j
X (k) ω N = X (N − k) ω N = X (k) ω N ,
k=1 k=(N +1)/2 k=(N +1)/2
we gain $ %
N −1
1 kj
Re xa ( j) = Re X (k) ω N = Re x( j) = x( j).
N k=0
N −1 N −1
−(N /2+k) j −k j
X (N /2 + k) = x( j) ω N = (−1) j x( j) ω N
j=0 j=0
N /2−1 N /2−1
−k(2 j) −k(2 j+1)
= x(2 j) ω N − x(2 j + 1) ω N
j=0 j=0
N /2−1
−k j
= x(2 j) − ω−k
N x(2 j + 1) ω N /2 .
j=0
N /2−1 N /2−1
−(2k+1) j −(2k+1)(N /2+ j)
X (2k + 1) = x( j) ω N + x(N /2 + j) ω N
j=0 j=0
N /2−1
− j −k j
= x( j) − x(N /2 + j) ω N ω N /2 .
j=0
N −1 N −1
π j −k j 1 −(2k−1) j −(2k+1) j
X (k) = sin ω = ω2N − ω2N
j=0
N N 2i j=0
& '
−(2k−1)N −(2k+1)N
1 1 − ω2N 1 − ω2N
= −(2k−1)
− −(2k+1)
2i 1 − ω2N 1 − ω2N
& '
1 1 1
= −(2k−1)
− −(2k+1)
i 1 − ω2N 1 − ω2N
& '
−(2k−1) −(2k+1)
1 ω2N − ω2N
= −(2k−1) −(2k+1) −4k
i 1 − ω2N − ω2N + ω2N
π−2k
2 ω2N sin
= −(2k−1)
N
−(2k+1) −4k
1 − ω2N − ω2N + ω2N
π π
2 sin N sin
= −1 −2k
= N
π
.
ω2N
2k
− ω2N
1
− ω2N + ω2N cos 2πk
N
− cos N
j nj
2.17 Since (−1) j = ω2 = ω2n , Lemma 2.2.1 for N = 2n yields
N −1 N −1
−k j (n−k) j
X (k) = (−1) j ω N = ωN = N δ N (k − n).
j=0 j=0
πk
Let N = 2n + 1. We will show that X (k) = 1 + i tan N
holds for k ∈ 0 : N − 1.
The definition of DFT yields
n n−1
−2k j −k(2 j+1)
X (k) = (−1) 2j
ωN + (−1)2 j+1 ω N
j=0 j=0
n−1
−2k j
= ω−2kn
N + (1 − ω−k
N ) ωN .
j=0
1 − ω−2kn 1 − ω−2kn
X (k) = ω−2kn + (1 − ω−k
N )
N
= ω−2kn + N
1 − ω−2k 1 + ω−k
N N
N N
2 πk
= = 1 + i tan .
1 + ω−k
N
N
2.18 We write
n 2n n
−k j −k( j−N ) −k j
X (k) = j ωN + ( j − N ) ωN = j ωN .
j=0 j=n+1 j=−n
n n+1 n
(1 − z) X (k) = jz j − ( j − 1)z j = −nz −n − nz n+1 + zj
j=−n j=−n+1 j=−n+1
−n+1 −n
z −z n+1
z (z − 1)
= −2nz −n + = −2nz −n + = −N z −n .
1−z 1−z
2π kn 2π kn (N − 1)π k (N − 1)π k
z −n = ωkn
N = cos + i sin = cos + i sin
N N N N
π k π k
= (−1)k cos − i sin ,
N N
πk πk πk πk πk πk
1 − z = 2 sin sin + i cos = 2i sin cos − i sin .
N N N N N N
We come to the final formula
1 (−1)k
X (k) = Ni , k ∈ 1 : N − 1.
2 πk
sin
N
2.19 The definition of DFT yields
n N −1 n−1 n−1
−k j k(N − j) −k j
= n ω−kn
kj
X (k) = j ωN + (N − j) ω N N + j ωN + j ωN
j=0 j=n+1 j=1 j=1
⎧ ⎫
⎨ n−1 ⎬
−k j
= n (−1)k + 2 Re j ωN .
⎩ ⎭
j=1
n−1
z
jz j = 1 − nz n−1 + (n − 1)z n , z = 1 (S.5)
j=1
(1 − z) 2
ω−k
n−1
−k j
j ωN = N
1 − n(−1)k ωkN + (n − 1)(−1)k . (S.6)
j=1
(1 − ω−k
N )
2
ω−k 1
N
=− , k ∈ 1 : N − 1. (S.7)
(1 − ω−k πk
N )
2
4 sin2
N
Indeed,
ω−k ω−k 1
N
= N
=
(1 − ω−k
N )
2 1− 2 ω−k
N + ω−2k
− 2 + ω−k
N N ωkN
1 1
=− =− .
2π k πk
2 1 − cos 4 sin2
N N
On the basis of (S.6) and (S.7) we gain
⎧ ⎫
⎨ n−1 ⎬ + 1 πk ,
−k j
Re j ωN =− 1 − (−1)k + n(−1)k 2 sin2
⎩ ⎭ πk N
j=1 4 sin2
N
1 1 − (−1)k
= − n(−1)k − ,
2 πk
4 sin2
N
so that
1 − (−1)k
X (k) = − , k ∈ 1 : N − 1.
πk
2 sin2
N
2.20 According to Lemmas 2.1.2 and 2.2.1 we have
N −1 N −1 N −1
j 2 −k j ( j 2 −l 2 )−k( j−l)
ω−l +kl
2
X (k) X (k) = ωN N = ωN
j=0 l=0 j, l=0
N −1 N −1 N −1 N −1
( j−l)(( j−l)+2l−k) j ( j+2l−k)
= ωN = ωN
l=0 j=0 l=0 j=0
N −1 N −1 N −1
j ( j−k) 2 jl j ( j−k)
= ωN ωN = N ωN δ N (2 j).
j=0 l=0 j=0
√
It is clear that provided N is odd the equality
|X (k)| = N holds for all
k ∈ Z. As
for N = 2n, there holds |X (k)|2 = N 1 + ωn(n−k) = N 1 + (−1) n−k
, so in this
-
N
N −1 N −1
−k( j+l)+kl −k j
X l (k) = x( j + l) ω N = ωkl
N x( j) ω N = ωkl
N X (k), k ∈ Z.
j=0 j=0
2.22 We have
N −1
1 lj −l j −k j
X l (k) = ω N + ω N x( j) ω N
2 j=0
⎛ ⎞
N −1 N −1
1⎝ −(k−l) j −(k+l) j ⎠
= x( j) ω N + x( j) ω N
2 j=0 j=0
= 21 X (k − l) + X (k + l) , k ∈ Z.
N −1 N −1
−k j (r p+q N ) −r k pj N
Y p (k) = x pj N ω N = x pj N ω N
j=0 j=0
N −1
−r k N j
= x( j) ω N = X r k N , k ∈ 0 : N − 1.
j=0
N −1 N −1
$ N −1
%
−k j 1 lj −k j
X n (k) = x( j) ωn N = X (l) ω N ωn N
j=0 j=0
N l=0
⎧ ⎫
N −1 ⎨1 N −1 ⎬ N −1
− j (k−ln)
= X (l) ωn N = X (l) h(k − ln),
⎩N ⎭
l=0 j=0 l=0
N −1
1 kj
where h(k) = ωn N .
N j=0
2.25 By virtue of N -periodicity we have x( j) = x j N , hence
n N −1 n−1 N −1
−k j −k(l N + p)
X n (k) = x( j) ωn N = x(l N + p) ωn N
j=0 l=0 p=0
⎛ ⎞
N −1 n−1 N −1
−kp −kp
= x( p) ωn N ωn−kl = ⎝ x( p) ωn N ⎠ n δn (k).
p=0 l=0 p=0
N −1
X n (k) = x(l) ωn−kln
N = X k N , k ∈ 0 : n N − 1.
l=0
N −1 n−1 N −1 n−1
−k(ln+ p) −kp
X n (k) = x(l) ωn N = x(l) ω−kl
N ωn N .
l=0 p=0 l=0 p=0
n−1
1 kp
We denote h(k) = ωn N . Then
n p=0
X n (k) = n X k N h(k), k ∈ 0 : n N − 1.
n−1 N −1
−k jm −k j
Yn (k) = x( jm) ωnm = x( j ) ω N δm ( j )
j=0 j =0
⎧ ⎫
N −1 ⎨1 m−1 ⎬
−k j − pjn
= x( j) ω N ωmn
⎩m ⎭
j=0 p=0
m−1 N −1 m−1
1 − j (k+ pn) 1
= x( j) ω N = X (k + pn).
m p=0 j=0
m p=0
2.29 The inclusion yn ∈ Cn follows from Lemma 2.1.3. Let us calculate the spec-
trum of the signal yn . We have
n−1 m−1 N −1
Yn (k) = x( j + pn) ωn−k( j+ pn) = x(l) ω−kml
N = X (km).
j=0 p=0 l=0
m−1 m−1 N −1
1 l( p+ jm)
yn ( j) = x( p + jm) = X (l) ω N
p=0
N p=0 l=0
m−1
Yn (k) = X (k + qn) h(k + qn),
q=0
m−1
1 pj
where h( j) = ωN .
m p=0
m−1
2.31 Let us introduce a signal yn ( j) = x( p + jm), yn ∈ Cn . Then y( j) =
p=0
yn ( j/m). We denote X = F N (x), Y = F N (y), and Yn = Fn (yn ). Taking into con-
sideration solutions of Exercises 2.27 (changing n by m and N by n) and 2.30, we
write
Y (k) = m Yn kn h(k), k ∈ 0 : N − 1,
m−1
Yn (k) = X (k + qn) h(k + qn), k ∈ 0 : n − 1,
q=0
m−1
1 pj
where h( j) = ω N . Hence
m p=0
m−1
Y (k) = m h(k) X kn + qn h kn + qn , k ∈ 0 : N − 1.
q=0
2.32 We have
N −1 N
−k( j+1)+k −k j
X 1 (k) = c j+1 ω N = ωkN c j ωN
j=0 j=1
⎛ ⎞
N −1
−k j
= ωkN ⎝ c j ωN + c N − c0 ⎠ = ωkN X 0 (k) + (c N − c0 ) .
j=0
2.33 A setting of the exercise states that y(k) = X (N − k) holds for all k ∈ Z.
According to (2.1.4) we have
N −1 N −1
(N −k) j kj
[F N (y)]( j) = X (N − k) ω N = X (k) ω N = N x( j).
k=0 k=0
N −1
k(N − j)
[F N2 (x)]( j) = X (k) ω N = N x(N − j).
k=0
2.35 We have
N −1 N −1
1 −1 1 (k−l) j+l j
[F N (X ∗ Y )]( j) = 2 X (l) Y (k − l) ω N
N N k=0 l=0
N −1 N −1
1 lj (k−l) j
= X (l) ω N Y (k − l) ω N
N2 l=0 k=0
N −1 N −1
1 lj kj
= X (l) ω N Y (k) ω N = x( j) y( j),
N2 l=0 k=0
X ∗ Y = N F N (x y) = N F N (1I) = N 2 δ N .
2.37 Provided the signals x and y are even, the corresponding spectra X and Y are
real (see Theorem 2.2.3). The spectrum of the convolution x ∗ y, which is equal to
X Y , is also real; hence, the convolution x ∗ y itself is even.
2.38 Auto-correlation is even because its Fourier transform is real. The immediate
proof is also possible:
N −1 N −1
Rx x (− j) = x(k) x(k + j) = x(k − j) x(k) = Rx x ( j).
k=0 k=0
N −1
Rx x ( j) = F N (Rx x ) (0) = |X (0)|2 .
j=0
F N (Rx x ∗ R yy ) = F N (Rx x ) F N (R yy ) = |X |2 |Y |2 .
N −1
E(u) = N −1 E(U ) = N −1 |X (k) Y (k)|2 = Rx x (0) R yy (0) = E(x) E(y).
k=0
N 2 −1 N −1 N −1 N −1
−k j j j −k( j1 N + j0 ) −k j0 j ( j0 −k)
V (k) = v( j) ω N 2 = ω N1 0 ω N 2 = ωN 2 ω N1
j=0 j1 , j0 =0 j0 =0 j1 =0
N −1
−k j −kk
=N ω N 2 0 δ N j0 − k N = N ω N 2 N .
j0 =0
2.44 Let us investigate, for example, a case of an odd N . Since the number s(s N + 1)
is even for any s ∈ Z, we have
( j+s N )( j+s N +1)+2q( j+s N ) s N (s N +1)
a( j + s N ) = ω2N = a( j) ω2N = a( j).
N −1 N −1
Raa ( j) = a(k) a(k − j) = a(k + j) a(k)
k=0 k=0
N −1
(k+ j)(k+ j+1)+2q(k+ j)−k(k+1)−2qk
= ω2N
k=0
N −1
j ( j+1)+2q j 2k j
= ω2N ω2N = N a( j) δ N ( j) = N δ N ( j).
k=0
Each product x(k) x(k − 1) equals either +1 or −1. Their sum can equal zero only when N is even, i.e. when N = 2n.
Further, a binary delta-correlated signal satisfies the relation |X(0)| = \sqrt{R_{xx}(0)} = \sqrt{N}, or, in more detail,

|x(0) + x(1) + \cdots + x(N − 1)| = \sqrt{2n}.

The left side of the latter equality is an integer, so the square root of 2n must be an integer as well. This is possible only when n = 2p^2. Thus, binary delta-correlated signals can exist only for N = 4p^2.
If p = 1, a binary delta-correlated signal exists; for instance, x = (1, 1, 1, −1). There is a hypothesis that no binary delta-correlated signals exist for p > 1. This hypothesis is not entirely proved yet.
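For p = 1 the example is immediate to confirm numerically (a tiny check; the helper name is ours):

```python
import numpy as np

def cyclic_autocorrelation(x):
    """R_xx(j) = sum_k x(k) x(k - j), indices taken modulo N."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    return np.array([sum(x[k] * x[(k - j) % N] for k in range(N)) for j in range(N)])

print(cyclic_autocorrelation([1, 1, 1, -1]))   # [4. 0. 0. 0.] = N * delta_N
```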
According to the same Exercise 2.26, the discrete Fourier transform of the signal from
3 32
the right side of the formula (S.8) looks this way: F N (Rx x ) k N = 3 X k N 3 .
We gained that the DFTs of the signals from the left and from the right sides of
formula (S.8) are equal. Therefore, equal are the signals themselves.
Ru 1 v1 = U1 V 1 = (X Y )(W Z ),
Ru 2 v2 = U2 V 2 = (X W )(Y Z ).
The right sides of these relations are equal. Hence the left sides are equal too.
N −1
c(k) x( j − k) = 0 for all j ∈ Z.
k=0
2.52 By virtue of the result of the previous exercise, we need to construct a signal y
such that Rx y = δ N . Using the correlation theorem we write the equivalent condition
X Y = 1I. Here each component of the spectrum X is nonzero (see Exercise 2.50),
therefore Y = (X )−1 . A desired signal is obtained with the aid of the DFT inversion
formula.
N −1
1
|x( j)|2 ≤ max |x( j)|2 ,
N j∈0:N −1
j=0
N −1
|x( j)|2 ≥ max |x( j)|2 .
j∈0:N −1
j=0
N −1 N −1 N −1
−k j −k j −k j
− 2 x( j − 1) ω N + c x( j) ω N = g( j) ω N .
j=0 j=0 j=0
Since
N −1 N −1
−k j −k j
2 x( j − 1) ω N = [x( j + 1) − 2x( j) + x( j − 1)] ω N
j=0 j=0
N −1
−k j
= (ωkN −2+ ω−k
N ) x( j) ω N
j=0
πk
= −4 sin2 N
X (k),
Hence
X (k) = G(k)/ 4 sin2 (π k/N ) + c , k ∈ Z.
A desired signal x( j) is obtained with the aid of the DFT inversion formula.
kj
exp(2πikt j ) = ω N ,
n
kj
x( j) = a(k) ω N , j ∈ Z. (S.9)
k=−n
N −1
1 kj
x( j) = N a(k) ω N , j ∈ Z.
N k=0
N −1
−k j
N a(k) = x( j) ω N
j=0
or
N −1
1
a(k) = f (t j ) exp(−2πikt j ), k ∈ Z.
N j=0
To Chapter 3
3.1 We have
N −1
1 (N −k) j
br ( j) = (ω NN −k − 1)−r ω N
N k=1
N −1
1
(ωkN − 1)−r ω N = br ( j).
kj
=
N k=1
j j
b1 ( j + 1) − b1 (1) = [b1 (k + 1) − b1 (k)] = b0 (k) = − j/N .
k=1 k=1
b1 ( j) = c − ( j − 1)/N , j ∈ 2 : N.
N
1 N −1
c= ( j − 1) = .
N2 j=1
2N
3.3 According to (3.1.4) and the result of the previous exercise, for j ∈ 1 : N − 1
we have
j j j
1 N +1
b2 ( j + 1) − b2 (1) = [b2 (k + 1) − b2 (k)] = b1 (k) = −k
k=1 k=1
N k=1
2
4 5
1 N +1 ( j + 1) j j (N − j)
= j− = .
N 2 2 2N
( j − 1)(N − j + 1)
b2 ( j) = c + , j ∈ 1 : N.
2N
Like
N in the previous exercise, the constant c can be determined from a condition
j=1 b2 ( j) = 0 equivalent to (3.1.2) for r = 2:
N N −1
1 1
c=− ( j − 1)(N − j + 1) = − j (N − j)
2N 2 j=1
2N 2 j=1
4 5
1 N 2 (N − 1) (N − 1)N (2N − 1) N2 − 1
=− − =− .
2N 2 2 6 12N
N 2 − 1 ( j − 1)(N − j + 1)
b2 ( j) = − + , j ∈ 1 : N.
12N 2N
3.4 According to (3.2.6) and (3.2.2), for n = 2 we have N = 2m and
N −1
1 −kpn kp n
Q r (· − pn), Q r (· − p n) = X 1r (k) ω N X 1r (k) ω N
N k=0
N −1
1 k( p − p)n
= X 12r (k) ω N = Q 2r ( p − p)n .
N k=0
3.7 When r = 1, the assertion follows from the definition of Q 1 ( j). We perform an
induction step from r − 1 to r , r ≥ 2. According to (3.2.4) we have
& n−1 N −1
'
Q r ( j) = + Q 1 (k) Q r −1 ( j − k)
k=0 k=N −n+1
n−1
= Q 1 (k) Q r −1 ( j − k). (S.10)
k=−n+1
Given j ∈ 0 : (r − 1)(n − 1), we take the right side of (S.10) and consider the
summand corresponding to k = 0. This summand equals to Q 1 (0) Q r −1 ( j). By virtue
of the inductive hypothesis it is positive. As far as the other summands are nonneg-
ative, we gain Q r ( j) > 0.
If j ∈ (r − 1)(n − 1) + 1 : r (n − 1), we will consider the summand correspond-
ing to k = n − 1. It is equal to Q 1 (n − 1) Q r −1 ( j − n + 1). Since j − n + 1 belongs
to the set (r − 2)(n − 1) + 1 : (r − 1)(n − 1), we have Q r −1 ( j − n + 1) > 0. There-
fore, in this case the right side of (S.10) also contains a positive summand which
guarantees positivity of Q r ( j).
For j = r (n − 1) we have
so that the right side of (S.10) contains the only nonzero summand corresponding to
k = n − 1. We gain
Q r r (n − 1) = Q 1 (n − 1) Q r −1 (r − 1)(n − 1) = 1.
(r − 1)(n − 1) + 1 : N − (r − 1)(n − 1) − 1
3.8 Let us calculate the DFT of the signal x( j) that stands in the right side of the
equality being proved. We have
(n−1)/2 N −1 (n−1)/2
−k j k(N − j) kj
X (k) = ωN + ωN = ωN .
j=0 j=N −(n−1)/2 j=−(n−1)/2
sin(π k/m)
X (k) = , k ∈ 1 : N − 1.
sin(π k/N )
We see that the DFTs of the signals x and Q 1/2 are equal. Therefore, the signals
themselves are also equal.
sin(π k/m)
G(0) = 0, G(k) = for k ∈ 1 : N − 1.
sin(π k/N )
Therefore
N −1 N −1
1 −k j 1 (N −k) j
Q 01/2 ( j) = G(k) ω N =− G(N − k) ω N
N k=1
N k=1
N −1
1 kj
=− G(k) ω N = −Q 01/2 ( j).
N k=1
Hence follows the equality Re Q 01/2 = O which means by a definition that the signal
Q 01/2 is pure imaginary.
3.10 Let us use the equality c( p) = c(− p), evenness of a B-spline Q r ( j), and
formula (2.1.4). We gain
m−1
m−1
S(− j) = c(− p) Q r − j + (− p)n = c( p) Q r ( j − pn) = S( j),
p=0 p=0
r (x − S) 2
= r (S∗ − S) + r (x − S∗ ) 2
= r (S∗ − S) 2 + r (x − S∗ ) 2
N −1
+ 2 Re r S∗ ( j) − S( j) r x( j) − S ∗ ( j) .
j=0
N −1
m−1
r S∗ ( j) − S( j) r x( j) − S ∗ ( j) = (−1)r d(l) x(ln) − S ∗ (ln) = 0.
j=0 l=0
Here d(l) are the coefficients of the expansion of the discrete periodic spline S∗ − S
over the shifts of the Bernoulli function. Combining the given equalities we gain
r (x − S) 2
= r (S∗ − S) 2
+ r (x − S∗ ) 2 .
r (S∗ − S1 ) 2
= 0.
3.12 Basic functions of the spline Sα presented in a form (3.3.4) are real. Its coeffi-
cients are determined from the system of linear equations (3.5.6) with the real matrix.
Provided the right side of the system is real its solution is real as well.
3.13 The assertion follows from formula (3.8.5).
3.14 A conclusion about evenness of μk ( j) with respect to k follows from formu-
lae (3.8.7) and (3.8.8) and from the result of Exercise 2.1.
3.15 In this case an orthogonal basis is formed by two splines μ0 ( j) and μ1 ( j).
According to (3.8.7) and (3.8.10) we have
μ0 ( j) ≡ 1, μ1 ( j) = 1
2
Q 1 ( j) − Q 1 ( j − 2) .
m−1
S( j) = c( p) Q r ( j − pn).
p=0
m−1
S( j) = c( p) Q r ( j − pn)
p=0
lets us draw a conclusion that S is real (which is obvious) and even (see Exercise 3.10).
3.18 The coefficients ξ(k) = [T2r (k)]−1/2 in the expansion of ϕr ( j) over the orthog-
onal basis are real and comprise an even sequence. It is remaining to use the result
of Exercise 3.17.
3.19 The solution is similar to the previous one.
3.20 According to (3.9.7) and (3.8.9) we have
we have
2r r
2r 2r
cν (l) = ωm−lr (ωm
l
+ 1) 2r
= ωm−l(r − p)
= ωm−lp .
ν ν
p=0
p ν
p=−r
r−p ν
3.23 Taking into account (3.10.2) and the result of the previous exercise we gain
N −1 N −1
1 1
Q rν+1 ( j)
lj lj
= yν+1 (l) ω N = cν (l) yν (l) ω N
N l=0
N l=0
r N −1 r
1 2r l( j− pn ν ) 2r
= yν (l) ω N = Q rν ( j − pn ν ).
N p=−r
r−p l=0 p=−r
r−p
3.24 We have
aν (−k) = ωm−kν cν (m ν+1 − k) μνm ν+1 −k 2 .
Let us use the fact that the sequences {cν (k)} and {μνk } are even with respect to
k. (The former one is even by a definition. As for evenness of the latter one, see
Exercise 3.14.) Taking into account that m ν+1 − k = m ν − (m ν+1 + k) we gain
3.25 In the same way as in the solution of the previous exercise we have
ν+1
w−k ( j) = a ν (k) μνk ( j) + a ν (m ν+1 + k) μνm ν+1 +k ( j) = wkν+1 ( j).
N −1 m ν+1 −1 n ν+1 −1
ν+1 2 ν+1 2
wk ( j) = wk ( p + ln ν+1 )
j=0 l=0 p=0
m ν+1 −1 n ν+1 −1
ν+1 2
= ωm2klν+1 wk ( p)
l=0 p=0
n ν+1 −1
2
= m ν+1 δm ν+1 (2k) wkν+1 ( p) .
p=0
Similarly, with a reference to Exercise 3.14, one can deduce the equality
3.28 If
m ν −1
ϕ( j) = βν (k) wkν ( j)
k=0
m ν −1
ϕ( j − ln ν ) = βν (k) wkν ( j) ωm−lk
ν
.
k=0
m ν −1
1
βν (k) wkν ( j) = ωm
lk
ν
ϕ( j − ln ν ).
mν k=0
Now the solution finishes in the same way as the proof of Theorem 3.9.1.
Therefore the equality ϕ(· − ln ν ), ψ(· − l n ν ) = δm ν (l − l ) holds if and only if
m ν −1
Prν+1 ( j) = aν (k) μνk ( j). (S.11)
k=0
Since
m ν −1
1
μνk ( j) = ωmkpν Q rν ( j − pn ν ),
mν p=0
we have
m ν −1
$ m ν −1
%
1
Prν+1 ( j) = Q rν ( j − pn ν ) aν (k) ωmkpν .
p=0
mν k=0
1 ν
μνm ν+1 +k 2
= T (m ν+1 + k).
m ν 2r
Therefore,
r
1 ν 2r
aν (k) = T2r (m ν+1 + k) (−1)l ωmk(l+1) .
mν l=−r
r − l ν
m ν −1
1
dν ( p) = aν (k) ωmkpν
mν k=0
r m ν −1
1 2r
= 2 (−1) l
T2rν (m ν+1 + k) ωmk(νp+l+1)
mν l=−r
r −l k=0
r $ m −1
%
1 2r 1 ν
= (−1) l
T2rν (k) ωm(k−m ν+1 )( p+l+1)
mν l=−r
r − l m ν k=0
ν
r
1 2r
= (−1) p+1 Q ν2r ( p + l + 1) n ν .
m ν l=−r r − l
n ν −1
1 (qm ν +k) j
μνk ( j) = yν (qm ν + k) ω N
N q=0
(see par. 3.10.1). Bearing in mind m ν -periodicity of the sequence {aν (k)} we gain
m ν −1 n ν −1
1 (qm ν +k) j
Prν+1 ( j) = aν (qm ν + k) yν (qm ν + k) ω N
N k=0 q=0
N −1
1 lj
= aν (l) yν (l) ω N .
N l=0
Hence
F N (Prν+1 ) (l) = aν (l) yν (l), l ∈ 0 : N − 1.
m ν+1 −1 m ν+1 −1
Prν+1 (· − ln ν+1 ), Prν+1 (· − l n ν+1 ) = ωm−kl
ν+1
wkν+1 , ωm−kν+1l wkν+1
k=0 k =0
m ν+1 −1
= wkν+1 2
ωm−k(l−l
ν+1
)
,
k=0
m ν −1
Prν+1 ( j − n ν ) = dν ( p) Q rν j − ( p + 1)n ν
p=0
m ν −1
= dν ( p − 1) Q rν ( j − pn ν ).
p=0
Here
1
r
2r
dν ( p − 1) = (−1) p Q ν2r ( p + l)n ν .
mν l=−r
r −l
Since
1
r
2r
dν (− p − 1) = (−1) p Q ν2r ( p − l)n ν
mν l=−r
r +l
p 1
r
2r
= (−1) Q ν2r ( p + l)n ν = dν ( p − 1),
m ν l=−r r − l
we gain
Prν+1 (− j − n ν ) = Prν+1 ( j − n ν ).
This means that the real spline Prν+1 ( j − n ν ) is even with respect to j.
3.35 B-spline B1 (x) is even, it follows from its definition. Assume that Bν−1 (−x) =
Bν−1 (x) holds for some ν ≥ 2. In this case
6 6
m m
Bν (−x) = Bν−1 (t) B1 (x + t) dt = Bν−1 (m − t) B1 x − (m − t) dt
60 m 0
To Chapter 4
4.1 Let j = ( js−1 , js−2 , . . . , j0 )2 . A condition j = pNν implies that not all com-
ponents js−ν−1 , . . . , j0 are equal to zero. And then
4.2 Use the solution of the previous exercise. Bear in mind that the equality
revs ( pNν ) = revν ( p) holds for p ∈ 0 : ν+1 − 1.
4.3 We have
ϕν ( j) = ϕν (Nν ; j) = f ν (ν ; j),
ψν ( j) = gν (Nν ; j) = f ν revs (Nν ); revs ( j) .
4.4 The first equality follows from formula (4.8.1). To prove the second equality,
let us use the result of Exercise 2.4. We gain
It can be obtained in the same way as in the example from par. 4.5.2, but it also can
be deduced analytically. Indeed, according to (4.5.3) we have
Hence
s s s
2−ν ϕν ( j) = 2 2−ν ϕν−1 (0; j) − 2−ν ϕν (0; j)
ν=1 ν=1 ν=1
s−1 s
= 2−ν ϕν (0; j) − 2−ν ϕν (0; j)
ν=0 ν=1
−s
= ϕ0 (0; j) − 2 ϕs (0; j).
It can be obtained in the same way as in the example from par. 4.6.5, but it also can
be deduced analytically. Indeed, according to (4.6.10) we have
Hence
s s s
−ν −ν
2 ψν ( j) = 2 2 gν−1 (0; j) − 2−ν gν (0; j)
ν=1 ν=1 ν=1
s−1 s
= 2−ν gν (0; j) − 2−ν gν (0; j)
ν=0 ν=1
−s
= g0 (0; j) − 2 gs (0; j).
s Nν −1
δ N ( j − q) = 2−s α + 2−ν ξν ( p) ϕν ( j − pν+1 ).
ν=1 p=0
Here
N −1
α = δ N (· − q), ϕs (0) = δ N ( j − q) = 1,
j=0
Note that
therefore condition (S.12) holds only for p = q/ν+1 . Referring to formula (4.4.6)
again and bearing in mind that qν−1 ∈ 0 : 1 we gain
Thus,
s
δ N ( j − q) = 2−s + 2−ν (−1)qν−1 ϕν ( j − q/ν+1 ν+1 ).
ν=1
s Nν −1
−s −ν
δ N ( j − q) = 2 β + 2 yν ( p) ψν ( j − p).
ν=1 p=0
Here
N −1
β = δ N (· − q), gs (0) = δ N ( j − q) = 1,
j=0
yν ( p) = δ N (· − q), gν ( p + Nν ) = gν ( p + Nν ; q) = gν (Nν ; q − p)
= ψν (q − p) = δ Nν−1 (q − p) − δ Nν−1 (q − p − Nν ).
If qs−ν = 0 then
we conclude that $
1 for p = q Nν ,
yν ( p) =
0 for p = q Nν .
If qs−ν = 1 then
2 gν−1 ( p; j) = gν ( p; j) + ψν ( j − p), p ∈ 0 : Nν − 1.
Therefore
s Nν −1
−ν
2 ψν ( j − p)
ν=1 p=0
s Nν −1
= 2−ν [2 gν−1 ( p; j) − gν ( p; j)]
ν=1 p=0
s−1 Nν+1 −1 s−1 Nν −1 N −1
= 2−ν gν ( p; j) − 2−ν gν ( p; j) − 2−s gs (0; j) + g0 (0; j)
ν=0 p=0 ν=0 p=0 p=0
N −1 s−1 Nν −1
= δ N ( j − p) − 2−s − 2−ν δ Nν ( j − p). (S.13)
p=0 ν=0 p=Nν+1
Nν −1
δ Nν ( j − p) = js−ν−1 . (S.14)
p=Nν+1
Nν −1 Nν −1
δ Nν ( j − p) = δ Nν ( js−ν−1 2s−ν−1 + · · · + j0 ) − p . (S.15)
p=Nν+1 p=Nν+1
If js−ν−1 = 0 then the right side of (S.15) equals to zero. In this case (S.15) corre-
sponds to (S.14). Let js−ν−1 = 1. Then the right side of (S.15) equals to unity. In
this case (S.15) also corresponds to (S.14).
Substituting (S.14) into (S.13) and taking into account that
N −1
δ N ( j − p) ≡ 1
p=0
we gain
s−1
x( j) = 1 − 2−ν js−ν−1 = 1 − 2 j/N , j ∈ 0 : N − 1.
ν=0
We have
Nν −1
−s
s
−ν
y revs ( j) = 2 + 2 f ν ν + pν+1 ; revs ( j)
ν=1 p=0
Nν −1
s
= 2−s + 2−ν f ν ν + ν+1 revs−ν ( p); revs ( j) .
ν=1 p=0
According to (4.6.3) there holds ν + ν+1 revs−ν ( p) = revs (Nν + p), therefore
f ν ν + ν+1 revs−ν ( p); revs ( j) = f ν revs ( p + Nν ); revs ( j)
= gν ( p + Nν ; j) = ψν ( j − p).
Nν −1
s
y revs ( j) = 2−s + 2−ν ψν ( j − p) = 1 − 2 j/N .
ν=1 p=0
Hence
y( j) = 1 − 2 revs ( j)/N , j ∈ 0 : N − 1.
s s−1 s
2−ν ϕν ( j) = 2−ν ϕν (0; j) − 2−ν ϕν (0; j)
ν=k+1 ν=k ν=k+1
−k −s
=2 ϕk (0; j) − 2 ϕs (0; j). (S.16)
k+1 −1
ϕk (0; j) = δ N ( j − q) =: h k ( j).
q=0
Nν ≥ 2k = k+1 .
Hence
s−k s−k−1 s−k
2−ν ψν ( j − p) = 2−ν gν ( p; j) − 2−ν gν ( p; j)
ν=1 ν=0 ν=1
k+1 −1 k+1 −1
gs−k ( p; j) = δk+1 ( j − p) ≡ 1.
p=0 p=0
Nν −1
s
x( j ⊕ q) = 2−s α + 2−ν ξν ( p) ϕν ( j ⊕ q) ⊕ pν+1 .
ν=1 p=0
Since
( j ⊕ q) ⊕ pν+1 = j ⊕ (q ⊕ pν+1 )
= j ⊕ (q/ν+1 ⊕ p)ν+1 + qν−1 ν + qν ,
Nν −1
−s
s
qν−1 −ν
x( j ⊕ q) = 2 α + (−1) 2 ξν ( p) ϕν j − (q/ν+1 ⊕ p)ν+1 .
ν=1 p=0
Nν −1
s
y( j − q) = 2−s β + 2−ν yν ( p) (−1)(q+ p)/Nν ψν j − q + p Nν .
ν=1 p=0
Finally we gain
Nν −1
−s
s
−ν
y( j − q) = 2 β + 2 (−1)( p −q)/Nν yν p − q Nν ψν ( j − p ).
ν=1 p =0
gν (k; j) = δ Nν ( j − k)
Nν −1
w( j) = a(k) ψν ( j − k).
k=0
2Nν −1
w( j) = w(k) δ Nν−1 ( j − k). (S.18)
k=0
2Nν −1
w( j) = − w(k) δ Nν−1 ( j − k − Nν ). (S.19)
k=0
Summing (S.18) and (S.19) and taking into account (4.8.1) we gain
2Nν −1
2 w( j) = w(k) ψν ( j − k)
k=0
Nν −1 Nν −1
= w(k) ψν ( j − k) + w(k + Nν ) ψν ( j − k − Nν )
k=0 k=0
Nν −1
= [w(k) − w(k + Nν )] ψν ( j − k).
k=0
4.17 Since discrete Walsh functions vk ( j) take only two values +1 and −1, there
holds [vk ( j)]2 ≡ 1. An equivalent notation is 1/vk ( j) = vk ( j).
Further,
s−1
vk ( j) vk ( j) = (−1)kα +kα 2 jα = vm ( j),
α=0
where m = k ⊕ k .
4.18 Take into account that 2k + 1 = 2k ⊕ 1 and use the result of the previous
exercise.
4.19 We remind that the numbers vk (0), vk (1), . . . , vk (N − 1) form the row of
the Hadamard matrix As with the index k. The required formulae follow from the
recurrent relation 4 5
As−1 As−1
As = .
As−1 −As−1
vk ( j) = vk (N − 1)vk ( j).
(N − 1 − j)α = 1 − jα , α ∈ 0 : s − 1.
Hence s−1
vk ( j) = (−1) α=0 kα (N −1− j)α = vk (N − 1 − j).
N1 + p = (1, 0, ps−3 , . . . , p0 )2 .
We have
s−3
{3N2 + p, j}s = js−1 + 1 + pα jα ,
α=0
s−3
{N1 + p, j + N2 N }s = js−1 + 12 + pα jα .
α=0
Since
(−1) js−1 +12 = (−1) js−1 +1 ,
For N = 8 we have
j ∈ 0 : 7.
4.25 As it was noted in the solution of the previous exercise, rν ( j) = (−1) js−ν holds.
Bearing this in mind we gain
s
s
vk ( j) = (−1)ks−ν js−ν = [rν ( j)]ks−ν , k ∈ 0 : N − 1.
ν=1 ν=1
4.26 Let us use the identity v0 (k) ≡ 1 and the fact that vk ( j) = v j (k). We write
N −1 N −1
vk ( j) = v0 (k) v j (k).
k=0 k=0
Now the required equality follows from orthogonality of Walsh functions and the
equality v0 , v0 = N .
N −1 N −1
xs 2
= x( j) vk ( j) x s (k)
k=0 j=0
N −1 7 N −1 8
= x( j) x s (k) vk ( j) = N x 2 ,
j=0 k=0
4.28 On the basis of the definitions of the discrete Walsh transform and the dyadic convolution we write

[W_N(z)](k) = \sum_{j=0}^{N-1} \sum_{l=0}^{N-1} x(l)\, y(j \oplus l)\, v_k\bigl((j \oplus l) \oplus l\bigr) = \sum_{l=0}^{N-1} \sum_{j=0}^{N-1} x(l)\, y(j)\, v_k(j \oplus l).

Hence

[W_N(z)](k) = \sum_{l=0}^{N-1} x(l)\, v_k(l) \sum_{j=0}^{N-1} y(j)\, v_k(j) = [W_N(x)](k)\, [W_N(y)](k),

as was to be proved.
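A quick numerical confirmation of this dyadic convolution theorem (our own code; W_N is realized here through the Sylvester-Hadamard matrix):

```python
import numpy as np

def walsh_transform(x):
    """W_N(x)(k) = sum_j x(j) v_k(j), computed via the Sylvester-Hadamard matrix."""
    N = len(x)
    A = np.array([[1]])
    while A.shape[0] < N:
        A = np.block([[A, A], [A, -A]])
    return A @ np.asarray(x)

def dyadic_convolution(x, y):
    N = len(x)
    return np.array([sum(x[l] * y[j ^ l] for l in range(N)) for j in range(N)])

x, y = np.random.randn(8), np.random.randn(8)
z = dyadic_convolution(x, y)
print(np.allclose(walsh_transform(z), walsh_transform(x) * walsh_transform(y)))  # True
```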
4.29 We denote $V_p = F_N(v_p)$. Taking into account that $v_1(j) = (-1)^{j_0} = (-1)^j$ for $j \in 0 : N-1$ we obtain
\[
V_1(k) = \sum_{j=0}^{N-1} (-1)^j\, \omega_N^{-kj}
= \sum_{j=0}^{N-1} \omega_2^{\,j}\, \omega_N^{-kj}
= \sum_{j=0}^{N-1} \omega_N^{-j(k-N_1)} = N\, \delta_N(k - N_1).
\]
\[
V_2(k) = \sum_{j=0}^{N-1} (-1)^{\lfloor j/2\rfloor}\, \omega_N^{-kj}
= \sum_{q=0}^{1} \sum_{l=0}^{N_1-1} (-1)^l\, \omega_N^{-k(2l+q)}
= \sum_{q=0}^{1} \omega_N^{-kq} \sum_{l=0}^{N_1-1} \omega_{N_1}^{-l(k-N_2)}
= N_1 \bigl(1 + \omega_N^{-k}\bigr)\, \delta_{N_1}(k - N_2).
\]
\[
V_3(k) = N_1 \bigl(1 - \omega_N^{-k}\bigr)\, \delta_{N_1}(k - N_2).
\]
The Fourier spectra $V_2(k)$ and $V_3(k)$ are nonzero on the main period only for $k = N_2 = N/4$ and $k = N_2 + N_1 = 3N/4$.
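These spectra are easy to confirm with an FFT. The sketch below is illustrative only; it assumes $v_1(j) = (-1)^{j_0}$, $v_2(j) = (-1)^{j_1}$, $v_3(j) = (-1)^{j_0+j_1}$ and the DFT kernel $\omega_N^{-kj}$, which is exactly what `numpy.fft.fft` computes.

```python
import numpy as np

N = 16
N1, N2 = N // 2, N // 4
k = np.arange(N)
v = {p: np.array([1 - 2 * (bin(p & j).count("1") & 1) for j in range(N)]) for p in (1, 2, 3)}
V = {p: np.fft.fft(v[p]) for p in (1, 2, 3)}       # kernel exp(-2*pi*i*k*j/N) = omega_N^{-kj}

assert np.allclose(V[1], N * (k == N1))            # V_1(k) = N delta_N(k - N_1)
for p in (2, 3):
    support = np.nonzero(~np.isclose(V[p], 0))[0]
    print(p, support)                              # both supports are [N/4, 3N/4] = [4, 12]
```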
\[
R_\nu(k) = \sum_{p=0}^{\Delta_{\nu+1}-1} \sum_{q=0}^{N_\nu-1} (-1)^p\, \omega_N^{-k(pN_\nu+q)}
= \sum_{p=0}^{\Delta_{\nu+1}-1} (-1)^p\, \omega_{\Delta_{\nu+1}}^{-kp} \sum_{q=0}^{N_\nu-1} \omega_N^{-kq}
= \sum_{p=0}^{\Delta_{\nu+1}-1} \omega_{\Delta_{\nu+1}}^{-p(k-\Delta_\nu)} \sum_{q=0}^{N_\nu-1} \omega_N^{-kq}
= 2^\nu\, \delta_{\Delta_{\nu+1}}(k - \Delta_\nu) \sum_{q=0}^{N_\nu-1} \omega_N^{-kq}.
\]
It is clear that the Fourier spectrum $R_\nu(k)$ is nonzero on the main period only for $k = \Delta_\nu + l\,\Delta_{\nu+1} = (2l+1)\Delta_\nu$, $l \in 0 : N_\nu - 1$. For these $k$, according to (2.2.7) we obtain
\[
R_\nu(k) = 2^\nu \sum_{q=0}^{N_\nu-1} \omega_N^{-(2l+1)\Delta_\nu q}
= 2^\nu \sum_{q=0}^{N_\nu-1} \omega_{N_{\nu-1}}^{-(2l+1)q}.
\]
4.31 It is known that $v_{2p+1}(j) = v_{2p}(j)\, v_1(j)$ (see Exercise 4.18). Let us use the result of Exercise 2.35, which yields
\[
V_{2p+1} = N^{-1}\,(V_{2p} * V_1).
\]
Since $V_1(l) = N\,\delta_N(l - N_1)$ (see Exercise 4.29), this gives
\[
V_{2p+1}(k) = \sum_{l=0}^{N-1} V_{2p}(l)\, \delta_N(k - l - N_1) = V_{2p}(k - N_1).
\]
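The shift relation $V_{2p+1}(k) = V_{2p}(k - N_1)$ can also be verified numerically; the short sketch below uses the same conventions as the previous snippet and is not part of the book's text.

```python
import numpy as np

N = 16
N1 = N // 2
row = lambda p: np.array([1 - 2 * (bin(p & j).count("1") & 1) for j in range(N)])
for p in range(N1):
    V_even = np.fft.fft(row(2 * p))
    V_odd = np.fft.fft(row(2 * p + 1))
    assert np.allclose(V_odd, np.roll(V_even, N1))   # V_{2p+1}(k) = V_{2p}(k - N_1)
print("shift relation verified for N =", N)
```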
\[
V_p(k) = \sum_{j=0}^{N_1-1} v_p(j)\, \omega_N^{-kj} + \sum_{j=0}^{N_1-1} v_p(N_1 + j)\, \omega_N^{-k(N_1+j)}
= \bigl(1 + (-1)^k\bigr) \sum_{j=0}^{N_1-1} v_p^{(1)}(j)\, \omega_N^{-kj}.
\]
\[
V_{N_1+p}(2k) = \frac{1}{N} \sum_{l=0}^{N_1-1} V_p(2l+1)\, V_{N_1}\bigl(2(k-l)-1\bigr)
+ \frac{1}{N} \sum_{l=0}^{N_1-1} V_p(2l)\, V_{N_1}\bigl(2(k-l)\bigr). \qquad (S.23)
\]
Both sums on the right-hand side of (S.23) equal zero: the former because $V_p(2l+1) = 0$, the latter because $V_{N_1}\bigl(2(k-l)\bigr) = 0$. Thus $V_{N_1+p}(2k) = 0$ for $k \in 0 : N_1 - 1$. Further, using equality (S.21) we obtain, similarly to (S.23),
\[
V_{N_1+p}(2k+1) = \frac{1}{N_1} \sum_{l=0}^{N_1-1} V_p^{(1)}(l)\, V_{N_1}\bigl(2(k-l)+1\bigr), \qquad k \in 0 : N_1 - 1.
\]
\[
V_{3N_2+p}(k) = \sum_{j=0}^{N-1} v_{N_1+p}(j + N_2)\, \omega_N^{-kj}
= \sum_{j=0}^{N-1} v_{N_1+p}(j)\, \omega_N^{-k(j-N_2)}.
\]
We obtain
\[
\{\mathrm{rev}_s(2k),\ j\}_s = \{\mathrm{rev}_s(k),\ 2j\}_s = \sum_{\alpha=0}^{s-2} k_{s-2-\alpha}\, j_\alpha.
\]
4.35 It is required to verify the equality $\mathrm{wal}_s^{-1}(N-1) = 1$ or, equivalently, $\mathrm{wal}_s(1) = N-1$. By definition, $\mathrm{wal}_s(1)$ is the number of sign changes of the Walsh function $v_1(j)$ on the main period. Since $v_1(j) = (-1)^j$ for $j \in 0 : N-1$, we have $\mathrm{wal}_s(1) = N-1$.
4.36 It follows from the proof of Theorem 4.12.2 that the formula
\[
\xi(k) = N_\nu \sum_{l=0}^{\Delta_{\nu+1}-1} a(l)\, v_k(l N_\nu)
\]
4.37 With the aid of Theorem 4.11.1 we consecutively fill out the table of values of the permutations $\mathrm{wal}_1(k)$, $\mathrm{wal}_2(k)$, and $\mathrm{wal}_3(k)$ (Table S.1).

Table S.1
  ν = 1:  wal_1(k), k ∈ 0 : 1:   0 1
  ν = 2:  wal_2(k), k ∈ 0 : 3:   0 3 1 2
  ν = 3:  wal_3(k), k ∈ 0 : 7:   0 7 3 4 1 6 2 5

On the basis of the definition of an inverse mapping we obtain
\[
\mathrm{wal}_3^{-1}(k) = \{0, 4, 6, 2, 3, 7, 5, 1\}.
\]
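A short sketch recomputing the ν = 3 row of Table S.1 and its inverse, assuming (as in 4.35) that $\mathrm{wal}_s(k)$ is the number of sign changes of $v_k$ on the main period; the names below are illustrative only.

```python
N = 8
rows = [[1 - 2 * (bin(k & j).count("1") & 1) for j in range(N)] for k in range(N)]
wal = [sum(r[j] != r[j + 1] for j in range(N - 1)) for r in rows]   # sign changes of v_k
wal_inv = [wal.index(k) for k in range(N)]
print(wal)       # [0, 7, 3, 4, 1, 6, 2, 5]  -- row nu = 3 of Table S.1
print(wal_inv)   # [0, 4, 6, 2, 3, 7, 5, 1]
```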
Therefore
\[
F(k_1 N + k_0) = \sum_{j_1, j_0=0}^{N-1} f(j_1 N + j_0)\, v_{k_1 N + k_0}(j_1 N + j_0)
= \sum_{j_1, j_0=0}^{N-1} v_{j_1}(j_0)\, v_{k_1}(j_1)\, v_{k_0}(j_0)
= \sum_{j_1=0}^{N-1} v_{k_1}(j_1) \sum_{j_0=0}^{N-1} v_{j_1}(j_0)\, v_{k_0}(j_0)
= N \sum_{j_1=0}^{N-1} v_{k_1}(j_1)\, \delta_N(j_1 - k_0) = N\, v_{k_1}(k_0).
\]
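A numerical sketch of this last computation, assuming (as the first equality suggests) that $f(j_1 N + j_0) = v_{j_1}(j_0)$ and that $F$ is the order-$N^2$ Walsh spectrum of $f$; the helper `vk` and the loop structure are illustrative only.

```python
import numpy as np

N = 8
M = N * N
vk = lambda k, j: 1 - 2 * (bin(k & j).count("1") & 1)
f = np.array([vk(j // N, j % N) for j in range(M)])                  # f(j1*N + j0) = v_{j1}(j0)
F = np.array([sum(int(f[j]) * vk(k, j) for j in range(M)) for k in range(M)])
for k1 in range(N):
    for k0 in range(N):
        assert F[k1 * N + k0] == N * vk(k1, k0)
print("F(k1*N + k0) = N v_{k1}(k0) verified")
```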
Index

A
Ahmed–Rao basis, 177
Algorithm
  Cooley–Tukey, decimation in frequency, 139
  Cooley–Tukey, decimation in time, 128
  Goertzel, 121
Amplitude spectrum, 56
Auto-correlation, 34
  normalized, 42

B
Basis
  Ahmed–Rao, 177
  exponential, 21
  Haar, decimation in frequency, 140
  Haar, decimation in time, 132
  in space of splines, orthogonal, 90
  of shifts, 35
  Walsh–Hadamard, 159
  Walsh–Paley, 162
  wavelet, 130, 139
Bernoulli, discrete functions, 61
Binary
  code, 4
  signal, 58
Bitwise summation, 8
B-spline
  continuous periodic, 109
  discrete periodic, 65
  normalized, 108

C
Cauchy–Bunyakovskii inequality, 18
Continuous periodic
  B-spline, 109
  spline, 114
Convolution
  cyclic, 30
  dyadic, 153
  skew-cyclic, 151
  theorem, 30
    in Haar basis, 149, 153
Cooley–Tukey algorithm
  decimation in frequency, 139
  decimation in time, 128
Correlation
  auto, 34
  cross, 34
  cyclic, 34
  theorem, 34
Cross-correlation, 34
Cyclic
  convolution, 30
  correlation, 34

D
Delta-correlated signal, 42
DFT inversion formula, 20
Discrete Fourier transform, 19
Discrete functions
  Ahmed–Rao, 180
  Bernoulli, 61
  Rademacher, 189
  Walsh, 157
    ordered by frequency, 162
    ordered by sign changes, 164
Discrete periodic B-spline, 65

E
Energy of signal, 43
Ensemble of signals, 44
Equality
  Parseval, 24
  Parseval, generalized, 24
Euler permutation, 4
Even signal, 19
Exponential basis, 21

F
Fast Fourier transform, 128, 139
Fast Haar transform
  decimation in frequency, 142
  decimation in time, 135
Fast transform
  Fourier, 128, 139
  Haar, 135, 142
  Walsh, 160
Fast Walsh transform
  decimation in time, 160
Filter, 32
  matched, 41
  SLB, 43
Filter response
  frequency, 33
  impulse, 32
Fourier spectrum, 20
Frank signal, 57
Frank–Walsh signal, 190
Frequency response, 33

G
Generalized Parseval equality, 24
Goertzel algorithm, 121

L
Linear transform, 31

M
Matched filter, 41
Matrix
  Hadamard, 155

N
Non-correlated signals, 47
Normalized
  auto-correlation, 42
  B-spline, 108
  signal, 17

O
Odd signal, 19
Orthogonal basis in space of splines, 90
Orthogonal signals, 17

P
Parseval equality, 24
  generalized, 24
Peak factor of signal, 58
Permutation
  Euler, 4
  grey_ν, 6
  rev_ν, 4, 124, 136
  wal_ν, 164
Prolongation of signal, 56

R
Rademacher, discrete functions, 189
Real signal, 19