
Lecture Notes in Applied and Numerical Harmonic Analysis

Vasily N. Malozemov
Sergey M. Masharsky

Foundations of Discrete Harmonic Analysis

Applied and Numerical Harmonic Analysis

Lecture Notes in Applied and Numerical Harmonic Analysis

Series Editor
John J. Benedetto
University of Maryland
College Park, MD, USA

Editorial Board
Emmanuel Candes
Stanford University
Stanford, CA, USA

Peter Casazza
University of Missouri
Columbia, MO, USA

Gitta Kutyniok
Technische Universität Berlin
Berlin, Germany

Ursula Molter
Universidad de Buenos Aires
Buenos Aires, Argentina

Michael Unser
École Polytechnique Fédérale de Lausanne
Lausanne, Switzerland

More information about this subseries at http://www.springer.com/series/13412


Vasily N. Malozemov · Sergey M. Masharsky

Foundations of Discrete Harmonic Analysis

Vasily N. Malozemov
Mathematics and Mechanics Faculty
Saint Petersburg State University
Saint Petersburg, Russia

Sergey M. Masharsky
Mathematics and Mechanics Faculty
Saint Petersburg State University
Saint Petersburg, Russia

Applied and Numerical Harmonic Analysis
ISSN 2296-5009    ISSN 2296-5017 (electronic)
Lecture Notes in Applied and Numerical Harmonic Analysis
ISSN 2512-6482    ISSN 2512-7209 (electronic)
ISBN 978-3-030-47047-0    ISBN 978-3-030-47048-7 (eBook)
https://doi.org/10.1007/978-3-030-47048-7

Mathematics Subject Classification (2010): 42C10, 42C20, 65D07, 65T50, 65T60

© Springer Nature Switzerland AG 2020


This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This book is published under the imprint Birkhäuser, www.birkhauser-science.com by the registered
company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
LN-ANHA Series Preface

The Lecture Notes in Applied and Numerical Harmonic Analysis (LN-ANHA) book
series is a subseries of the widely known Applied and Numerical Harmonic
Analysis (ANHA) series. The Lecture Notes series publishes paperback volumes,
ranging from 80 to 200 pages in harmonic analysis as well as in engineering and
scientific subjects having a significant harmonic analysis component. LN-ANHA
provides a means of distributing brief-yet-rigorous works on similar subjects as the
ANHA series in a timely fashion, reflecting the most current research in this rapidly
evolving field.
The ANHA book series aims to provide the engineering, mathematical, and
scientific communities with significant developments in harmonic analysis, ranging
from abstract harmonic analysis to basic applications. The title of the series reflects
the importance of applications and numerical implementation, but richness and
relevance of applications and implementation depend fundamentally on the struc-
ture and depth of theoretical underpinnings. Thus, from our point of view, the
interleaving of theory and applications and their creative symbiotic evolution is
axiomatic.
Harmonic analysis is a wellspring of ideas and applicability that has flourished,
developed, and deepened over time within many disciplines and by means of
creative cross-fertilization with diverse areas. The intricate and fundamental rela-
tionship between harmonic analysis and fields such as signal processing, partial
differential equations (PDEs), and image processing is reflected in our
state-of-the-art ANHA series.
Our vision of modern harmonic analysis includes mathematical areas such as
wavelet theory, Banach algebras, classical Fourier analysis, time-frequency analy-
sis, and fractal geometry, as well as the diverse topics that impinge on them.
For example, wavelet theory can be considered an appropriate tool to deal with
some basic problems in digital signal processing, speech and image processing,
geophysics, pattern recognition, bio-medical engineering, and turbulence. These
areas implement the latest technology from sampling methods on surfaces to fast
algorithms and computer vision methods. The underlying mathematics of wavelet
theory depends not only on classical Fourier analysis but also on ideas from abstract


harmonic analysis, including von Neumann algebras and the affine group. This
leads to a study of the Heisenberg group and its relationship to Gabor systems and
of the metaplectic group for a meaningful interaction of signal decomposition
methods.
The unifying influence of wavelet theory in the aforementioned topics illustrates
the justification for providing a means for centralizing and disseminating infor-
mation from the broader, but still focused, area of harmonic analysis. This will be a
key role of ANHA. We intend to publish with the scope and interaction that such a
host of issues demands.
Along with our commitment to publish mathematically significant works at the
frontiers of harmonic analysis, we have a comparably strong commitment to publish
major advances in applicable topics such as the following, where harmonic analysis
plays a substantial role:

Bio-mathematics, bio-engineering, Machine learning;


and bio-medical signal processing; Phaseless reconstruction;
Communications and RADAR; Quantum informatics;
Compressive sensing (sampling) Remote sensing;
and sparse representations; Sampling theory;
Data science, data mining Spectral estimation;
and dimension reduction; Time-frequency and time-scale
Fast algorithms; analysis–Gabor theory
Frame theory and noise reduction; and wavelet theory
Image processing and
super-resolution;

The above point of view for the ANHA book series is inspired by the history of
Fourier analysis itself, whose tentacles reach into so many fields.
In the last two centuries Fourier analysis has had a major impact on the
development of mathematics, on the understanding of many engineering and sci-
entific phenomena, and on the solution of some of the most important problems in
mathematics and the sciences. Historically, Fourier series were developed in the
analysis of some of the classical PDEs of mathematical physics; these series were
used to solve such equations. In order to understand Fourier series and the kinds of
solutions they could represent, some of the most basic notions of analysis were
defined, for example, the concept of “function.” Since the coefficients of Fourier
series are integrals, it is no surprise that Riemann integrals were conceived to deal
with uniqueness properties of trigonometric series. Cantor’s set theory was also
developed because of such uniqueness questions.
A basic problem in Fourier analysis is to show how complicated phenomena,
such as sound waves, can be described in terms of elementary harmonics. There are
two aspects of this problem: first, to find, or even define properly, the harmonics or
spectrum of a given phenomenon, e.g., the spectroscopy problem in optics; second,
to determine which phenomena can be constructed from given classes of harmonics, as done, for example, by the mechanical synthesizers in tidal analysis.
Fourier analysis is also the natural setting for many other problems in engi-
neering, mathematics, and the sciences. For example, Wiener’s Tauberian theorem
in Fourier analysis not only characterizes the behavior of the prime numbers but
also provides the proper notion of spectrum for phenomena such as white light; this
latter process leads to the Fourier analysis associated with correlation functions in
filtering and prediction problems, and these problems, in turn, deal naturally with
Hardy spaces in the theory of complex variables.
Nowadays, some of the theory of PDEs has given way to the study of Fourier
integral operators. Problems in antenna theory are studied in terms of unimodular
trigonometric polynomials. Applications of Fourier analysis abound in signal pro-
cessing, whether with the fast Fourier transform (FFT), or filter design, or the
adaptive modeling inherent in time-frequency-scale methods such as wavelet
theory.
The coherent states of mathematical physics are translated and modulated
Fourier transforms, and these are used, in conjunction with the uncertainty prin-
ciple, for dealing with signal reconstruction in communications theory. We are back
to the raison d’être of the ANHA series!

John J. Benedetto
Series Editor
University of Maryland
College Park, MD, USA
Preface

Discrete harmonic analysis is a mathematical discipline predominantly targeted at advanced applications of digital signal processing. The notion of a signal requires a closer definition: a signal in discrete harmonic analysis is defined as a complex-valued periodic function of an integer argument.
In this book we study transforms of signals. One of the fundamental transforms
is the discrete Fourier transform (DFT). In 1965, Cooley and Tukey in their paper
[8] proposed the fast Fourier transform (FFT), a fast method of calculation of the
DFT. Essentially, this discovery set the stage for development of discrete harmonic
analysis as a self-consistent discipline.
The DFT inversion formula causes a signal to be expanded over the exponential
basis. Expanding a signal over various bases is the main technique of digital signal
processing.
An argument of a signal is interpreted as time. Components of the discrete
Fourier transform comprise a frequency spectrum of a signal. Analysis in the time
and frequency domain lets us uncover the structure of a signal and determine ways
of transforming a signal to obtain required properties.
In practice, we are faced with a necessity to process signals of various natures
such as acoustic, television, seismic, radio signals, or signals coming from the outer
space. These signals are received by physical devices. When we take a reading of a
device at regular intervals we obtain a discrete signal. It is this signal that is a
subject of further digital processing. To start with, we calculate a frequency spec-
trum of the discrete signal. It corresponds to representing a signal in a form of a sum
of simple summands being its frequency components. By manipulating with fre-
quency components we achieve an improvement of specific features of a signal.
This book is aimed at an initial acquaintance with the subject. It is written on the
basis of the lecture course that the first author has been delivering since 1995 at the Faculty of Mathematics and Mechanics of St. Petersburg State University.
The book consists of four chapters. The first chapter briefly exposes the facts that
are being used in the main text. These facts are well known and are related to
residuals, permutations, complex numbers, and finite differences.


In the second chapter we consider basic transforms of signals. The centerpieces are the discrete Fourier transform, cyclic convolution, and cyclic correlation. We study
the properties of these transforms. As an application, we provide solutions to the
problems of optimal interpolation and optimal signal–filter pair. Separate sections
are devoted to ensembles of signals and to the uncertainty principle in discrete
harmonic analysis.
In the third chapter we introduce discrete periodic splines and study their fun-
damental properties. We establish an extremal property of the interpolation splines.
In terms of splines, we offer an elegant solution to the problem of smoothing of
discrete periodic data. We construct a system of orthogonal splines. With the aid of
dual splines, we solve the problem of spline processing of discrete data with the
least squares method.
We obtain a wavelet expansion of an arbitrary spline. We prove two limit
theorems related to interpolation splines.
The focus of the fourth chapter is on fast algorithms: the fast Fourier transform,
the fast Haar transform, and the fast Walsh transform. To build a fast algorithm we
use an original approach stemming from introduction of a recurrent sequence of
orthogonal bases in the space of discrete periodic signals. In this way we manage to
form wavelet bases which altogether constitute a wavelet packet. In particular, Haar
basis is a wavelet one. We pay a lot of attention to it in the book.
We investigate an important question of ordering of Walsh functions. We ana-
lyze in detail Ahmed–Rao bases that fall in between Walsh basis and the expo-
nential basis.
The main version of the fast Fourier transform (it is called the Cooley–Tukey
algorithm) is targeted to calculate the DFT whose order is a power of two. At the
end of the fourth chapter, we show how to use the Cooley–Tukey algorithm to
calculate a DFT of any order.
A specific feature of the book is a large number of exercises. They allow us to
lessen the burden of the main text. Many special and auxiliary facts are formalized
as exercises. All the exercises are endowed with solutions. Separate exercises or
exercise groups are independent, so you as a reader can select only those that seem
interesting to you. The most efficient way is solving an exercise and then checking
your solution against the one presented in the book. It will let you actively master
the matter.
At the end of the book we put the list of references. We lay emphasis on the books [5, 41, 49] that we used to study the fundamentals of discrete harmonic analysis back in the day.
The first version of the book was published in 2003 as a preprint. In 2012 Lan’
publishers published the book in Russian. This English edition is an extended and
improved version of the Russian edition.

The seminar on discrete harmonic analysis and computer aided geometric design
(shortly, DHA&CAGD) was held in St. Petersburg University from 2004 to 2014.
The seminar’s website is http://dha.spb.ru. The site was used to publish the
proceedings of the seminar’s members; these proceedings served as a basis for the
books [46, 47, 34, 44, 7] published later on. Contents of the proceedings and the
mentioned books can be considered as an addendum to this book.

St. Petersburg, Russia
July 2019

Vasily N. Malozemov
Sergey M. Masharsky

Acknowledgements First of all, the authors are thankful to the students and postgraduates who,
over the years, attended the course of lectures on discrete harmonic analysis and offered beautiful
solutions to some exercises.
The first author separately expresses his gratitude to his permanent co-author Prof. A. B.
Pevnyi and to his former postgraduate students M. G. Ber and A. A. Tret’yakov. It is with these
people that we accomplished our first works in the field of discrete harmonic analysis. We also
give thanks to O. V. Prosekov, M. I. Grigoriev, and N. V. Chashnikov. In turn, they administered the website of DHA&CAGD for over 10 years.
Contents

1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Residuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Greatest Common Divisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Relative Primes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Permutations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 Bitwise Summation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.6 Complex Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.7 Roots of Unity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.8 Finite Differences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2 Signal Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1 Space of Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 Discrete Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3 Parseval Equality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4 Sampling Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.5 Cyclic Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.6 Cyclic Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.7 Optimal Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.8 Optimal Signal–Filter Pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.9 Ensembles of Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.10 Uncertainty Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3 Spline Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.1 Periodic Bernoulli Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.2 Periodic B-splines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.3 Discrete Periodic Splines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.4 Spline Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73


3.5 Smoothing of Discrete Periodic Data . . . . . . . . . . . . . . . . . . . . . 75


3.6 Tangent Hyperbolas Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.7 Calculation of Discrete Spline’s Values . . . . . . . . . . . . . . . . . . . 84
3.8 Orthogonal Basis in a Space of Splines . . . . . . . . . . . . . . . . . . . 88
3.9 Bases of Shifts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.10 Wavelet Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.11 First Limit Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.12 Second Limit Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
4 Fast Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4.1 Goertzel Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4.2 First Sequence of Orthogonal Bases . . . . . . . . . . . . . . . . . . . . . . 123
4.3 Fast Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.4 Wavelet Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
4.5 Haar Basis. Fast Haar Transform . . . . . . . . . . . . . . . . . . . . . . . . 132
4.6 Decimation in Frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.7 Sampling Theorem in Haar Bases . . . . . . . . . . . . . . . . . . . . . . . 142
4.8 Convolution Theorem in Haar Bases . . . . . . . . . . . . . . . . . . . . . 147
4.9 Second Sequence of Orthogonal Bases . . . . . . . . . . . . . . . . . . . . 155
4.10 Fast Walsh Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
4.11 Ordering of Walsh Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
4.12 Sampling Theorem in Walsh Basis . . . . . . . . . . . . . . . . . . . . . . 166
4.13 Ahmed–Rao Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
4.14 Calculation of DFT of Any Order . . . . . . . . . . . . . . . . . . . . . . . 183
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190

Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Acronyms

DFT Discrete Fourier transform


DHT Discrete Haar transform
DWT Discrete Walsh transform
FFT Fast Fourier transform
SLBF Side-lobe blanking filter

Chapter 1
Preliminaries

The following notations are used throughout the book:


Z, R, C sets of integer, real, and complex numbers, respectively;
m : n the set of consecutive integers {m, m + 1, . . . , n}.
The notation A := B or B =: A means that A equals B by definition.

1.1 Residuals

Consider j ∈ Z and a natural number N. There exists a unique integer p such that

$$p \le j/N < p + 1. \tag{1.1.1}$$

It is referred to as the integral part of the fraction j/N and is denoted by $p = \lfloor j/N \rfloor$. The difference r = j − pN is called the remainder after division of j by N, or the modulo N residual of j. It is denoted by $r = \langle j \rangle_N$. For a given j we get the representation j = pN + r, where $p = \lfloor j/N \rfloor$ and $r = \langle j \rangle_N$.
It is not difficult to show that

$$\langle j \rangle_N \in 0 : N - 1. \tag{1.1.2}$$

Indeed, multiply the inequalities (1.1.1) by N and subtract pN. We obtain $0 \le j - pN < N$, which is equivalent to (1.1.2).
It follows from the definitions that the equalities

$$\lfloor (j + kN)/N \rfloor = \lfloor j/N \rfloor + k, \tag{1.1.3}$$

$$\langle j + kN \rangle_N = \langle j \rangle_N \tag{1.1.4}$$


hold for any integer k. The formal proof is carried out in this way. Since $\lfloor j/N \rfloor \le j/N < \lfloor j/N \rfloor + 1$, after addition of k we obtain

$$\lfloor j/N \rfloor + k \le (j + kN)/N < \lfloor j/N \rfloor + k + 1.$$

This is equivalent to (1.1.3). Equality (1.1.4) is a direct consequence of (1.1.3). Indeed,

$$\langle j + kN \rangle_N = j + kN - \lfloor (j + kN)/N \rfloor N = j - \lfloor j/N \rfloor N = \langle j \rangle_N.$$

We mention two other properties of residuals that are simple yet important: for any integers j and k

$$\langle j + k \rangle_N = \bigl\langle \langle j \rangle_N + k \bigr\rangle_N = \bigl\langle \langle j \rangle_N + \langle k \rangle_N \bigr\rangle_N,$$

$$\langle jk \rangle_N = \bigl\langle \langle j \rangle_N\, k \bigr\rangle_N = \bigl\langle \langle j \rangle_N \langle k \rangle_N \bigr\rangle_N.$$

The proof of these equalities relies on formula (1.1.4).
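The integral part and the residual are easy to experiment with on a computer. The following short Python sketch (the helper names int_part and residual are ours, not the book's) checks the identities (1.1.2)–(1.1.4) and the two properties above; Python's floor division and modulo already behave as required, including for negative j.

```python
# A small illustration of Sect. 1.1 (assumes Python; helper names are ours).

def int_part(j, N):
    """Integral part floor(j/N) from (1.1.1); Python's // floors toward minus infinity."""
    return j // N

def residual(j, N):
    """Modulo N residual <j>_N = j - floor(j/N)*N, an element of 0 : N-1."""
    return j % N

N = 7
for j in range(-30, 30):
    assert j == int_part(j, N) * N + residual(j, N)                 # j = pN + r
    assert 0 <= residual(j, N) <= N - 1                              # (1.1.2)
    for k in range(-5, 6):
        assert int_part(j + k * N, N) == int_part(j, N) + k          # (1.1.3)
        assert residual(j + k * N, N) == residual(j, N)              # (1.1.4)
        assert residual(j + k, N) == residual(residual(j, N) + k, N)  # sum property
        assert residual(j * k, N) == residual(residual(j, N) * k, N)  # product property
print("all residual identities hold")
```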

1.2 Greatest Common Divisor

Take nonzero integers j and k. The largest natural number that divides both j and k
is called the greatest common divisor of these numbers and is denoted by gcd ( j, k).
Designate by M the set of linear combinations of the numbers j and k with integer
coefficients:
M = {a j + bk | a ∈ Z, b ∈ Z}.

Theorem 1.2.1 The smallest natural number in M equals to gcd ( j, k).

Proof Let d = a0 j + b0 k be the smallest natural number in M. We will show that


j is divisible by d. Using the representation j = pd + r , where r ∈ 0 : d − 1, we
write
r = j − pd = j − p(a0 j + b0 k) = (1 − pa0 ) j − pb0 k.

We see that r ∈ M and r < d. It is possible only when r = 0, i.e. when j is divisible
by d. Similarly we ascertain that k is divisible by d as well.
Now let j and k be divisible by a natural number d′. Then d is also divisible by d′.
Hence d = gcd(j, k). The theorem is proved.

According to Theorem 1.2.1, there exist integers a0 and b0 such that

gcd ( j, k) = a0 j + b0 k. (1.2.1)
Formula (1.2.1) is referred to as a linear representation of the greatest common divisor.
Note that
gcd ( j, k) = gcd (| j|, |k|)

since the integers j and − j have the same divisors.
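The linear representation (1.2.1) is not only an existence statement; the coefficients a₀, b₀ can be produced by the extended Euclidean algorithm. Below is a minimal Python sketch (the function name ext_gcd is ours and is not part of the book).

```python
def ext_gcd(j, k):
    """Return (d, a0, b0) with d = gcd(j, k) = a0*j + b0*k (extended Euclidean algorithm)."""
    if k == 0:
        return abs(j), (1 if j >= 0 else -1), 0
    d, a, b = ext_gcd(k, j % k)
    # d = a*k + b*(j mod k) = a*k + b*(j - (j//k)*k) = b*j + (a - (j//k)*b)*k
    return d, b, a - (j // k) * b

d, a0, b0 = ext_gcd(126, 35)
print(d, a0, b0)                      # 7, 2, -7:  2*126 - 7*35 = 7
assert a0 * 126 + b0 * 35 == d
```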

1.3 Relative Primes

Natural numbers n and N are referred to as relative primes if gcd (n, N ) = 1. For
relative primes n and N equality (1.2.1) takes the form

a0 n + b0 N = 1. (1.3.1)

Thus, for relative primes n and N there exist integers a0 and b0 such that equal-
ity (1.3.1) holds.
The inverse assertion is also valid: equality (1.3.1) guarantees relative primality
of n and N . It follows from Theorem 1.2.1, since the unity is absolutely the smallest
natural number.
Theorem 1.3.1 If the product jn for some j ∈ Z is divisible by N , and the integers
n and N are relative primes, then j is divisible by N .
Proof Multiply both sides of equality (1.3.1) by j and take modulo N residuals. We get $\langle a_0 jn \rangle_N = \langle j \rangle_N$ and

$$\bigl\langle a_0 \langle jn \rangle_N \bigr\rangle_N = \langle j \rangle_N.$$

It is clear that $\langle j \rangle_N = 0$ if $\langle jn \rangle_N = 0$. This is a symbolic equivalent of the theorem's statement.
Now multiply both sides of equality (1.3.1) by a number k ∈ 0 : N − 1. Using modulo N residuals we come to the relation

$$\bigl\langle \langle a_0 k \rangle_N\, n \bigr\rangle_N = k.$$

This result can be interpreted as follows: the equation $\langle xn \rangle_N = k$ for any k ∈ 0 : N − 1 has the solution $x_0 = \langle a_0 k \rangle_N$ on the set 0 : N − 1. Let us show that the solution is unique. Assume that $\langle x'n \rangle_N = k$ for some x′ ∈ 0 : N − 1. Then

$$\bigl\langle (x_0 - x')n \bigr\rangle_N = \bigl\langle \langle x_0 n \rangle_N - \langle x'n \rangle_N \bigr\rangle_N = 0.$$

Since n and N are relative primes, Theorem 1.3.1 yields that $x_0 - x'$ is divisible by N. Taking into account the inequality $|x_0 - x'| \le N - 1$ we conclude that x′ = x₀.
Let us summarize.

Theorem 1.3.2 If gcd(n, N) = 1 then the equation $\langle xn \rangle_N = k$ has a unique solution on the set 0 : N − 1 for any k ∈ 0 : N − 1.
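Theorem 1.3.2 can be tried out directly: the solution is $x_0 = \langle a_0 k \rangle_N$, where a₀ is the coefficient from (1.3.1). In the Python sketch below (our names; Python 3.8+ is assumed) a₀ is obtained as the modular inverse pow(n, -1, N).

```python
def solve_residual_equation(n, N, k):
    """Solve <x*n>_N = k for x in 0 : N-1, assuming gcd(n, N) = 1."""
    a0 = pow(n, -1, N)        # a0 with <a0*n>_N = 1; raises ValueError if gcd(n, N) != 1
    return (a0 * k) % N

n, N = 3, 8
for k in range(N):
    x0 = solve_residual_equation(n, N, k)
    assert (x0 * n) % N == k
```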

1.4 Permutations

Denote $f(j) = \langle jn \rangle_N$. By virtue of Theorem 1.3.2, provided that gcd(n, N) = 1, the function f(j) bijectively maps the set $J_N = \{0, 1, \ldots, N-1\}$ onto itself. Essentially, f performs a permutation of the elements of $J_N$. This is called an Euler permutation.
We describe a simple way of calculating the values f(j). It is obvious that f(0) = 0 and

$$f(j+1) = \langle (j+1)n \rangle_N = \bigl\langle \langle jn \rangle_N + n \bigr\rangle_N = \langle f(j) + n \rangle_N.$$

We come to the recurrence relation

$$f(0) = 0; \qquad f(j+1) = \langle f(j) + n \rangle_N, \qquad j = 0, 1, \ldots, N-2, \tag{1.4.1}$$

which makes it possible to compute the values of the Euler permutation successively. The results of calculations with formula (1.4.1) for n = 3 and N = 8 are presented in Table 1.1.
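The recurrence (1.4.1) is immediate to implement; the following Python sketch (euler_permutation is our name) reproduces the row of Table 1.1.

```python
def euler_permutation(n, N):
    """Values f(0), ..., f(N-1) of f(j) = <jn>_N, built by the recurrence (1.4.1)."""
    f = [0] * N
    for j in range(N - 1):
        f[j + 1] = (f[j] + n) % N        # f(j+1) = <f(j) + n>_N
    return f

print(euler_permutation(3, 8))           # [0, 3, 6, 1, 4, 7, 2, 5], as in Table 1.1
assert euler_permutation(3, 8) == [(3 * j) % 8 for j in range(8)]
```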
Later on we will need two other permutations, revν and greyν . They are defined
on a set {0, 1, . . . , 2ν − 1} for a natural ν.
Recall that with the use of consequent bisections we can uniquely represent any
integer j ∈ 0 : 2ν − 1 in the form

j = jν−1 2ν−1 + jν−2 2ν−2 + · · · + j1 2 + j0 , (1.4.2)

where every coefficient jk is equal to either zero or unity. Instead of (1.4.2), more
compact notation is used: j = ( jν−1 , jν−2 , . . . , j0 )2 . The right side of the latter
equality is referred to as a binary code of the number j.
Introduce a notation

revν ( j) = ( j0 , j1 , . . . , jν−1 )2 .

Table 1.1 Euler permutation for n = 3 and N = 8


j                          0  1  2  3  4  5  6  7
$\langle 3j \rangle_8$     0  3  6  1  4  7  2  5

Table 1.2 Permutation revν for ν = 3


j ( j2 , j1 , j0 )2 ( j0 , j1 , j2 )2 rev3 ( j)
0 (0, 0, 0)2 (0, 0, 0)2 0
1 (0, 0, 1)2 (1, 0, 0)2 4
2 (0, 1, 0)2 (0, 1, 0)2 2
3 (0, 1, 1)2 (1, 1, 0)2 6
4 (1, 0, 0)2 (0, 0, 1)2 1
5 (1, 0, 1)2 (1, 0, 1)2 5
6 (1, 1, 0)2 (0, 1, 1)2 3
7 (1, 1, 1)2 (1, 1, 1)2 7

The number $\mathrm{rev}_\nu(j)$ belongs to the set $0 : 2^\nu - 1$, and its binary code equals the reversed binary code of the number j. The identifier “rev” corresponds to the word reverse. The subscript ν determines the number of reversed binary digits.
It is clear that $\mathrm{rev}_\nu(\mathrm{rev}_\nu(j)) = j$ for $j \in 0 : 2^\nu - 1$. Hence, in particular, it follows that the mapping $j \to \mathrm{rev}_\nu(j)$ is a permutation of the set $\{0, 1, \ldots, 2^\nu - 1\}$.
By definition, $\mathrm{rev}_1(j) = j$ for j ∈ 0 : 1. It is reckoned that $\mathrm{rev}_0(0) = 0$.
Table 1.2 shows how to form a permutation revν for ν = 3.
We continue with an investigation of a permutation revν .
Theorem 1.4.1 The following recurrent relation holds:

$$\mathrm{rev}_0(0) = 0; \qquad
\begin{aligned}
\mathrm{rev}_\nu(2k) &= \mathrm{rev}_{\nu-1}(k),\\
\mathrm{rev}_\nu(2k+1) &= 2^{\nu-1} + \mathrm{rev}_{\nu-1}(k),
\end{aligned}
\qquad k \in 0 : 2^{\nu-1}-1,\ \nu = 1, 2, \ldots \tag{1.4.3}$$

Proof Replace the second and the third lines in (1.4.3) with a single line

$$\mathrm{rev}_\nu(2k+\sigma) = \sigma 2^{\nu-1} + \mathrm{rev}_{\nu-1}(k), \qquad \sigma \in 0:1,\ k \in 0 : 2^{\nu-1}-1,\ \nu = 1, 2, \ldots \tag{1.4.4}$$

When ν = 1, formula (1.4.4) becomes of a known form $\mathrm{rev}_1(\sigma) = \sigma$, σ ∈ 0 : 1. Let it be ν ≥ 2. For any $k \in 0 : 2^{\nu-1}-1$ and σ ∈ 0 : 1 we have

$$k = k_{\nu-2} 2^{\nu-2} + \cdots + k_1 2 + k_0, \qquad 2k + \sigma = k_{\nu-2} 2^{\nu-1} + \cdots + k_0 \cdot 2 + \sigma,$$

$$\mathrm{rev}_\nu(2k+\sigma) = \sigma 2^{\nu-1} + k_0 2^{\nu-2} + \cdots + k_{\nu-2} = \sigma 2^{\nu-1} + \mathrm{rev}_{\nu-1}(k).$$

The theorem is proved.



Table 1.3 Consequent calculation of permutations revν


ν revν ( j) for j = 0, 1, . . . , 2ν − 1
1 0 1

2 0 2 1 3

3 0 4 2 6 1 5 3 7

Theorem 1.4.1 makes it possible to successively calculate the values $\mathrm{rev}_\nu(j)$ for ν = 1, 2, . . . , for all $j \in \{0, 1, \ldots, 2^\nu - 1\}$ at once. Table 1.3 presents the results of calculations of rev₁(j), rev₂(j), and rev₃(j). A transition between the (ν − 1)-th and the ν-th rows was performed in accordance with formula (1.4.3). It was also taken into account that

$$\mathrm{rev}_\nu(2k+1) = \mathrm{rev}_\nu(2k) + 2^{\nu-1}, \qquad k = 0, 1, \ldots, 2^{\nu-1}-1.$$
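A computational counterpart of Theorem 1.4.1: rev_rec below builds the table row by row exactly as described, and rev_direct checks it against a direct reversal of the ν binary digits (both names are ours, not the book's).

```python
def rev_rec(nu):
    """[rev_nu(0), ..., rev_nu(2**nu - 1)] built by the recurrence (1.4.3)."""
    table = [0]
    for step in range(1, nu + 1):
        prev, table = table, [0] * (2 * len(table))
        for k, value in enumerate(prev):
            table[2 * k] = value                         # rev(2k)   = rev_{nu-1}(k)
            table[2 * k + 1] = 2 ** (step - 1) + value   # rev(2k+1) = 2^{nu-1} + rev_{nu-1}(k)
    return table

def rev_direct(j, nu):
    """Reverse the nu binary digits of j directly."""
    return int(format(j, f"0{nu}b")[::-1], 2) if nu > 0 else 0

print(rev_rec(3))                                        # [0, 4, 2, 6, 1, 5, 3, 7], as in Table 1.3
assert rev_rec(3) == [rev_direct(j, 3) for j in range(8)]
```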

Now we turn to a permutation $\mathrm{grey}_\nu$. It is defined recursively:

$$\mathrm{grey}_0(0) = 0; \qquad
\begin{aligned}
\mathrm{grey}_\nu(k) &= \mathrm{grey}_{\nu-1}(k),\\
\mathrm{grey}_\nu(2^\nu - 1 - k) &= 2^{\nu-1} + \mathrm{grey}_{\nu-1}(k),
\end{aligned}
\qquad k \in 0 : 2^{\nu-1}-1,\ \nu = 1, 2, \ldots \tag{1.4.5}$$

Let us make sure that the mapping $j \to \mathrm{grey}_\nu(j)$ is indeed a permutation of the set $\{0, 1, \ldots, 2^\nu - 1\}$.
When ν = 1, this is obvious since, by definition, grey₁(j) = j for j ∈ 0 : 1. Assume that the assertion is true for $\mathrm{grey}_{\nu-1}$. According to the second line of (1.4.5), the function $\mathrm{grey}_\nu(j)$ bijectively maps the set $\{0, 1, \ldots, 2^{\nu-1} - 1\}$ onto itself. The third line of (1.4.5), after the argument replacement $k' = 2^{\nu-1} - 1 - k$, takes the form

$$\mathrm{grey}_\nu(2^{\nu-1} + k') = 2^{\nu-1} + \mathrm{grey}_{\nu-1}(2^{\nu-1} - 1 - k'), \qquad k' \in 0 : 2^{\nu-1} - 1. \tag{1.4.6}$$

Hence the function $\mathrm{grey}_\nu(j)$ also bijectively maps the set $\{2^{\nu-1}, \ldots, 2^\nu - 1\}$ onto itself. Joining these two facts we conclude that the function $\mathrm{grey}_\nu(j)$ bijectively maps the set $\{0, 1, \ldots, 2^\nu - 1\}$ onto itself. In other words, the mapping $j \to \mathrm{grey}_\nu(j)$ is a permutation of the set $\{0, 1, \ldots, 2^\nu - 1\}$.

Table 1.4 Consequent calculation of permutations greyν


ν greyν ( j) for j = 0, 1, . . . , 2ν − 1
1 0 1

2 0 1 3 2

3 0 1 3 2 6 7 5 4

Formula (1.4.5) makes it possible to successively calculate the values $\mathrm{grey}_\nu(j)$ for ν = 1, 2, . . . , for all $j \in \{0, 1, \ldots, 2^\nu - 1\}$ at once. Table 1.4 contains the results of calculations of grey₁(j), grey₂(j), and grey₃(j).
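The recurrence (1.4.5) translates into a few lines of Python (grey_rec is our name). The final assertion checks the property established in Theorem 1.4.2 below: adjacent values differ in a single binary digit.

```python
def grey_rec(nu):
    """[grey_nu(0), ..., grey_nu(2**nu - 1)] built by the recurrence (1.4.5)."""
    table = [0]
    for step in range(1, nu + 1):
        prev, n = table, len(table)
        table = [0] * (2 * n)
        for k in range(n):
            table[k] = prev[k]                                # grey(k)            = grey_{nu-1}(k)
            table[2 * n - 1 - k] = 2 ** (step - 1) + prev[k]  # grey(2^nu - 1 - k) = 2^{nu-1} + grey_{nu-1}(k)
    return table

g = grey_rec(3)
print(g)                                                      # [0, 1, 3, 2, 6, 7, 5, 4], as in Table 1.4
assert all(bin(g[k] ^ g[k + 1]).count("1") == 1 for k in range(len(g) - 1))
```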
We adduce a characteristic property of a permutation greyν .

Theorem 1.4.2 For ν ≥ 1, the binary codes of two adjacent elements greyν (k) and
greyν (k + 1), k ∈ 0 : 2ν − 2, differ in a single digit only.

Proof When ν = 1, the assertion is obvious since grey1 (0) = (0)2 and grey1 (1) =
(1)2 . We perform an induction step from ν − 1 to ν, ν ≥ 2.
Let k ∈ 0 : 2ν−1 − 1 and greyν−1 (k) = ( pν−2 , . . . , p0 )2 . According to (1.4.5)

greyν (k) = (0, pν−2 , . . . , p0 )2 ,


(1.4.7)
greyν (2ν − 1 − k) = (1, pν−2 , . . . , p0 )2 .

If k ∈ 0 : 2ν−1 − 2 and greyν−1 (k + 1) = (qν−2 , . . . , q0 )2 then by an inductive


hypothesis the binary codes ( pν−2 , . . . , p0 )2 and (qν−2 , . . . , q0 )2 differ only in one
digit. Since greyν (k + 1) = (0, qν−2 , . . . , q0 )2 , the same is true for the binary codes
of numbers greyν (k) and greyν (k + 1).
In case of k = 2ν−1 − 1 we have k + 1 = 2ν − 1 − k so that the binary codes of
numbers greyν (k) and greyν (k + 1) differ in the most significant digit only, as can
be seen from (1.4.7).
It is remaining to consider the indices k from a set 2ν−1 : 2ν − 2. Put k  = 2ν −
1 − k. It is clear that k  ∈ 1 : 2ν−1 − 1. Let

greyν−1 (k  ) = ( pν−2 , . . . , p0 )2 ,

greyν−1 (k  − 1) = (qν−2 , . . . , q0 )2 .


By an inductive hypothesis, the binary codes ( pν−2 , . . . , p0 )2 and (qν−2

, . . . , q0 )2
differ only in a single digit. According to (1.4.5)

greyν (k) = greyν (2ν − 1 − k  ) = (1, pν−2 , . . . , p0 )2 ,
  
greyν (k + 1) = greyν 2ν − 1 − (k  − 1) = (1, qν−2 , . . . , q0 )2 .

It is evident that the binary codes of numbers greyν (k) and greyν (k + 1) also differ
in a single digit only. The theorem is proved. 

1.5 Bitwise Summation

Take two integers $j = (j_{s-1}, j_{s-2}, \ldots, j_0)_2$ and $k = (k_{s-1}, k_{s-2}, \ldots, k_0)_2$ from the set $0 : 2^s - 1$. An operation of bitwise summation ⊕ associates the numbers j and k with an integer $p = (p_{s-1}, p_{s-2}, \ldots, p_0)_2$ that has $p_\nu = \langle j_\nu + k_\nu \rangle_2$ for ν = 0, 1, . . . , s − 1. Thus,

$$p = j \oplus k \quad \Longleftrightarrow \quad p_\nu = \langle j_\nu + k_\nu \rangle_2, \quad \nu = 0, 1, \ldots, s-1.$$

It follows from the definition that

$$j \oplus j = 0 \quad \text{for all } j \in 0 : 2^s - 1. \tag{1.5.1}$$

The bitwise summation operation is commutative and associative, i.e. $j \oplus k = k \oplus j$ and

$$(j \oplus k) \oplus l = j \oplus (k \oplus l). \tag{1.5.2}$$

Let us verify the associativity. For ν ∈ 0 : s − 1 we have

$$\bigl\langle (j \oplus k)_\nu + l_\nu \bigr\rangle_2 = \bigl\langle \langle j_\nu + k_\nu \rangle_2 + l_\nu \bigr\rangle_2 = \bigl\langle (j_\nu + k_\nu) + l_\nu \bigr\rangle_2
= \bigl\langle j_\nu + (k_\nu + l_\nu) \bigr\rangle_2 = \bigl\langle j_\nu + \langle k_\nu + l_\nu \rangle_2 \bigr\rangle_2 = \bigl\langle j_\nu + (k \oplus l)_\nu \bigr\rangle_2.$$

This corresponds to (1.5.2).
The equation x ⊕ k = p with fixed k and p from $0 : 2^s - 1$ has a unique solution x = p ⊕ k on the set $0 : 2^s - 1$. Indeed, according to (1.5.2) and (1.5.1)

$$(p \oplus k) \oplus k = p \oplus (k \oplus k) = p.$$

By virtue of the mentioned properties of bitwise summation we may affirm that the mapping j → j ⊕ k with a fixed k is a permutation of the set $\{0, 1, \ldots, 2^s - 1\}$. Table 1.5 shows an example of such a permutation for s = 3 and $k = 5 = (1, 0, 1)_2$.
We could introduce an operation of bitwise subtraction k ⊖ j by setting

$$(k \ominus j)_\alpha = \langle k_\alpha - j_\alpha \rangle_2, \quad \alpha \in 0 : s-1.$$

But this operation is redundant because

$$k \ominus j = k \oplus j.$$

Table 1.5 Permutation j → j ⊕ k for s = 3 and k = 5 = (1, 0, 1)2


j ( j2 , j1 , j0 )2 j ⊕k
0 (0, 0, 0)2 5
1 (0, 0, 1)2 4
2 (0, 1, 0)2 7
3 (0, 1, 1)2 6
4 (1, 0, 0)2 1
5 (1, 0, 1)2 0
6 (1, 1, 0)2 3
7 (1, 1, 1)2 2

Let us verify the last equality. We write it in the expanded form

$$\langle k_\alpha - j_\alpha \rangle_2 = \langle k_\alpha + j_\alpha \rangle_2, \qquad \alpha \in 0 : s-1, \tag{1.5.3}$$

where $k_\alpha, j_\alpha \in 0:1$. We fix α. When $k_\alpha = j_\alpha$, equality (1.5.3) is valid (0 = 0). Let $k_\alpha \ne j_\alpha$. Then $\langle k_\alpha + j_\alpha \rangle_2 = 1$. At the same time, $\langle k_\alpha - j_\alpha \rangle_2 = 1$ for $k_\alpha = 1$, $j_\alpha = 0$; and for $k_\alpha = 0$, $j_\alpha = 1$ we have

$$\langle k_\alpha - j_\alpha \rangle_2 = \langle -1 \rangle_2 = \langle 2 - 1 \rangle_2 = 1.$$
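In Python the bitwise summation of this section is the built-in XOR operator ^, so the properties above can be checked in a couple of lines (a small sketch, assuming Python).

```python
s, k = 3, 5
universe = range(2 ** s)
for j in universe:
    assert j ^ j == 0                                   # (1.5.1)
    for l in universe:
        assert j ^ l == l ^ j                           # commutativity
        assert (j ^ k) ^ l == j ^ (k ^ l)               # (1.5.2), associativity
print([j ^ k for j in universe])                        # [5, 4, 7, 6, 1, 0, 3, 2], as in Table 1.5
assert sorted(j ^ k for j in universe) == list(universe)   # j -> j XOR k is a permutation
```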

1.6 Complex Numbers

It is assumed that the reader is familiar with the arithmetic operations on complex numbers. We remind some notations:

$z = u + iv$   a complex number,
$u = \operatorname{Re} z$   the real part of a complex number,
$v = \operatorname{Im} z$   the imaginary part of a complex number,
$\bar{z} = u - iv$   the conjugate complex number,
$|z| = \sqrt{u^2 + v^2}$   the modulus of a complex number.

It is obvious that $|z|^2 = z\bar{z}$. Also valid are the formulae

$$|z_1 + z_2|^2 = |z_1|^2 + |z_2|^2 + 2\operatorname{Re}(z_1 \bar{z}_2),$$

$$|z_1 + iz_2|^2 = |z_1|^2 + |z_2|^2 + 2\operatorname{Im}(z_1 \bar{z}_2).$$

Let us verify, for example, the latter one. We have

$$|z_1 + iz_2|^2 = (z_1 + iz_2)(\bar{z}_1 - i\bar{z}_2) = |z_1|^2 + |z_2|^2 - i(z_1\bar{z}_2 - \bar{z}_1 z_2)
= |z_1|^2 + |z_2|^2 + 2\operatorname{Im}(z_1\bar{z}_2).$$

The following two formulae are well known: for a natural n

$$(z_1 + z_2)^n = \sum_{k=0}^{n} \binom{n}{k} z_1^{n-k} z_2^k, \tag{1.6.1}$$

where $\binom{n}{k}$ is a binomial coefficient; for $z \ne 1$

$$\sum_{k=0}^{n-1} z^k = \frac{1 - z^n}{1 - z}.$$

Substituting the values $z_1 = 1$, $z_2 = 1$, and $z_1 = 1$, $z_2 = -1$ into (1.6.1) we obtain, in particular,

$$\sum_{k=0}^{n} \binom{n}{k} = 2^n, \qquad \sum_{k=0}^{n} (-1)^k \binom{n}{k} = 0.$$

We will need one more formula: for $z \ne 1$

$$\sum_{k=1}^{n-1} k z^k = \frac{z}{(1-z)^2}\bigl[1 - n z^{n-1} + (n-1) z^n\bigr]. \tag{1.6.2}$$

In order to prove it we write

$$(1-z)\sum_{k=1}^{n-1} k z^k = \sum_{k=1}^{n-1} k z^k - \sum_{k=2}^{n} (k-1) z^k = z - (n-1) z^n + \sum_{k=2}^{n-1} z^k
= z - (n-1) z^n + z^2\,\frac{1 - z^{n-2}}{1-z} = \frac{z}{1-z}\bigl[1 - n z^{n-1} + (n-1) z^n\bigr],$$

which is equivalent to (1.6.2) when $z \ne 1$.

1.7 Roots of Unity

Let N be a natural number, N ≥ 2. We introduce a complex number

$$\omega_N = \cos\frac{2\pi}{N} + i\sin\frac{2\pi}{N}.$$

With respect to the Moivre formula, for a natural k we write

$$\omega_N^k = \cos\frac{2\pi k}{N} + i\sin\frac{2\pi k}{N}. \tag{1.7.1}$$

In particular, $\omega_N^N = 1$. The number $\omega_N$ is referred to as the N-th degree root of unity.
Formula (1.7.1) is valid for k = 0. It is also valid for negative integer powers of $\omega_N$. Indeed,

$$\omega_N^{-k} = \frac{1}{\cos(2\pi k/N) + i\sin(2\pi k/N)} = \cos\frac{2\pi k}{N} - i\sin\frac{2\pi k}{N}
= \cos\frac{2\pi(-k)}{N} + i\sin\frac{2\pi(-k)}{N}.$$

It means that formula (1.7.1) is valid for all k ∈ Z.
We note that $\omega_N^{-1} = \bar{\omega}_N$ and $\omega_{nN}^{\,n} = \omega_N$ for a natural n. From (1.7.1) and from the properties of trigonometric functions it follows that for all integers j and k

$$(\omega_N^k)^j = \omega_N^{kj}, \qquad \omega_N^k\, \omega_N^j = \omega_N^{k+j}.$$

With respect to the Euler formula we write $\omega_N = \exp(2\pi i/N)$. It is this compact form of the number $\omega_N$ that will be used throughout the book.

1.8 Finite Differences

Take a complex-valued function of an integer argument f(j), j ∈ Z. The finite differences of the function f are defined recursively:

$$[\Delta(f)](j) = [\Delta^1(f)](j) = f(j+1) - f(j),$$

$$[\Delta^r(f)](j) = \bigl[\Delta\bigl(\Delta^{r-1}(f)\bigr)\bigr](j) = [\Delta^{r-1}(f)](j+1) - [\Delta^{r-1}(f)](j), \qquad r = 2, 3, \ldots$$

Usually the notation $\Delta^r f(j)$ is used instead of $[\Delta^r(f)](j)$.
The finite difference of the r-th order $\Delta^r f(j)$ can be expressed by means of the values of the function f(j) directly. The following formula is valid:

$$\Delta^r f(j) = \sum_{k=0}^{r} (-1)^{r-k} \binom{r}{k} f(j+k).$$

It can be easily proved by an induction on r.
It is obvious that a finite difference of any order of the function f(j) ≡ const equals to zero identically.
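A short numerical check of this section (the names finite_difference and finite_difference_explicit are ours): the recursive definition, the explicit binomial formula, and the remark on constants agree.

```python
from math import comb

def finite_difference(f, j, r):
    """r-th finite difference of f at j, by the recursive definition."""
    if r == 1:
        return f(j + 1) - f(j)
    return finite_difference(f, j + 1, r - 1) - finite_difference(f, j, r - 1)

def finite_difference_explicit(f, j, r):
    """The same quantity via the explicit formula with binomial coefficients."""
    return sum((-1) ** (r - k) * comb(r, k) * f(j + k) for k in range(r + 1))

f = lambda j: j ** 3 - 2 * j + 1
for j in range(5):
    for r in range(1, 5):
        assert finite_difference(f, j, r) == finite_difference_explicit(f, j, r)
assert finite_difference(lambda j: 7, 0, 3) == 0       # differences of a constant vanish
```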

Exercises

1.1 Let j ∈ Z and N be a natural number. Prove that

$$\left\lfloor \frac{-j}{N} \right\rfloor = -\left\lfloor \frac{j-1}{N} \right\rfloor - 1.$$

1.2 Prove that for j ∈ Z and natural n and N the following equality is valid:

$$\langle nj \rangle_{nN} = n \langle j \rangle_N.$$

1.3 Let $f(j) = \langle jn \rangle_N$. Prove that, provided $\langle n^2 \rangle_N = 1$, the equality $f(f(j)) = j$ is valid for j ∈ 0 : N − 1.

1.4 We put $f(j) = \langle jn + l \rangle_N$, where n and N are relatively prime natural numbers and l ∈ Z. Prove that the sequence f(0), f(1), . . . , f(N − 1) is a permutation of numbers 0, 1, . . . , N − 1.

1.5 Find the values of a function $f(j) = \langle jn \rangle_N$ for j = 0, 1, . . . , N − 1 in case of gcd(n, N) = d.

1.6 Prove that gcd(j, k) = gcd(j − k, k).

1.7 Let n₁, n₂ be relative primes, N = n₁n₂ and j ∈ 0 : N − 1. Prove that there exist unique integers j₁ ∈ 0 : n₁ − 1 and j₂ ∈ 0 : n₂ − 1 such that $j = \langle j_1 n_2 + j_2 n_1 \rangle_N$.

1.8 Assume that integers n₁, n₂, . . . , nₛ are relatively prime with an integer m. Prove that the product of these numbers N = n₁n₂ · · · nₛ is also relatively prime with m.

1.9 We take pairwise relatively prime numbers n₁, n₂, . . . , nₛ. Prove that if a number j ∈ Z is divisible by each n_α, α ∈ 1 : s, then j is also divisible by their product N = n₁n₂ · · · nₛ.

1.10 Let n₁, n₂, . . . , nₛ be pairwise relatively prime numbers unequal to unity. Let N = n₁n₂ · · · nₛ. We denote $N_\alpha = N/n_\alpha$. Prove that there exist integers b₁, b₂, . . . , bₛ such that

$$b_1 N_1 + b_2 N_2 + \cdots + b_s N_s = 1.$$

1.11 Under conditions of the previous exercise, prove that any integer j ∈ 0 : N − 1 can be uniquely represented in a form

$$j = \Bigl\langle \sum_{\alpha=1}^{s} j_\alpha N_\alpha \Bigr\rangle_N,$$

where $j_\alpha \in 0 : n_\alpha - 1$. Find the explicit expression for the coefficients $j_\alpha$.



1.12 Let conditions of the Exercise 1.10 hold. For each α ∈ 1 : s the equation $\langle x N_\alpha \rangle_{n_\alpha} = 1$ has a unique solution on the set $0 : n_\alpha - 1$. We denote it by $p_\alpha$. Prove that any integer k ∈ 0 : N − 1 can be uniquely represented in a form

$$k = \Bigl\langle \sum_{\alpha=1}^{s} k_\alpha p_\alpha N_\alpha \Bigr\rangle_N,$$

where $k_\alpha \in 0 : n_\alpha - 1$. Find the explicit expression for the coefficients $k_\alpha$.

1.13 Let $j = (j_{\nu-1}, j_{\nu-2}, \ldots, j_0)_2$. Prove that

$$\mathrm{grey}_\nu(j) = j_{\nu-1} 2^{\nu-1} + \sum_{k=2}^{\nu} \langle j_{\nu-k+1} + j_{\nu-k} \rangle_2\, 2^{\nu-k}.$$

1.14 We take $p = (p_{\nu-1}, p_{\nu-2}, \ldots, p_0)_2$. Prove that the unique solution of the equation $\mathrm{grey}_\nu(j) = p$ is an integer $j = (j_{\nu-1}, j_{\nu-2}, \ldots, j_0)_2$ which has

$$j_{\nu-1} = p_{\nu-1}, \qquad j_{\nu-k} = \langle p_{\nu-1} + p_{\nu-2} + \cdots + p_{\nu-k} \rangle_2, \quad k = 2, \ldots, \nu.$$

1.15 Prove that

$$\sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6}.$$

1.16 Prove that

$$\sum_{k=1}^{n} k \binom{n}{k} = n\, 2^{n-1}.$$

1.17 Let n and N be relatively prime natural numbers. We put $\varepsilon_n = \omega_N^n$, where $\omega_N = \exp(2\pi i/N)$. Prove that the sets $\{\varepsilon_n^k\}_{k=0}^{N-1}$ and $\{\omega_N^j\}_{j=0}^{N-1}$ are equal, i.e. that they consist of the same elements.

1.18 Prove that for relative primes m and n there exist unique integers p ∈ 0 : m − 1 and q ∈ 0 : n − 1 with the following properties: gcd(p, m) = 1, gcd(q, n) = 1, and $\omega_{mn} = \omega_m^p\, \omega_n^q$.

1.19 Prove that

$$\sum_{k=0}^{N-1} z^k = \prod_{j=1}^{N-1} (z - \omega_N^j).$$

1.20 Let $P_r$ be an algebraic polynomial of the r-th degree. Prove that a finite difference of the (r + 1)-th order of $P_r$ equals zero identically.
Chapter 2
Signal Transforms

2.1 Space of Signals

2.1.1 We fix a natural number N . The term signal is used to refer to an N -periodic
complex-valued function of an integer argument x = x( j), j ∈ Z. We denote the set
of all signals by C N . Two operations are introduced in C N in a natural manner—the
operation of addition of two signals and the operation of multiplication of a signal
by a complex number:

y = x1 + x2 ⇔ y( j) = x1 ( j) + x2 ( j), j ∈ Z;
y = c x ⇔ y( j) = c x( j), j ∈ Z.

As a result C N becomes a linear complex space. Zero element of C N is a signal O


such that O( j) = 0 for all j ∈ Z.
2.1.2 A unit N -periodic pulse is a signal δ N which is equal to unity if j is divisible
by N and equal to zero for other j ∈ Z. It is clear that δ N (− j) = δ N ( j).
Lemma 2.1.1 Given x ∈ C_N, valid is the equality

$$x(j) = \sum_{k=0}^{N-1} x(k)\, \delta_N(j - k), \qquad j \in \mathbb{Z}. \tag{2.1.1}$$

Proof Both sides of (2.1.1) contain N -periodic functions, therefore it is sufficient


to verify the equality for j ∈ 0 : N − 1. Since the inequalities −(N − 1) ≤ j − k ≤
N − 1 hold for k, j ∈ 0 : N − 1, it follows that δ N ( j − k) = 0 for k = j. Hence
N −1

x(k) δ N ( j − k) = x( j) δ N (0) = x( j).
k=0

The lemma is proved. 



Formula (2.1.1) gives an analytic representation of a signal x through its values


on the main period JN = 0 : N − 1.
Consider a system of shifts of the unit pulse

δ N ( j), δ N ( j − 1), . . . , δ N ( j − N + 1). (2.1.2)

This system is linearly independent on Z. Indeed, let

N −1

c(k) δ N ( j − k) = 0 for j ∈ 0 : N − 1.
k=0

As it was mentioned, the left side of this equality equals to c( j), thus c( j) = 0 for
all j ∈ 0 : N − 1.
According to Lemma 2.1.1 any signal x can be expanded over the linearly inde-
pendent system (2.1.2). It means that the system (2.1.2) is a basis of the space C N .
Moreover, the dimension of C N is equal to N .

2.1.3 The following assertion will be frequently used later on.

Lemma 2.1.2 Given a signal x, the following equality holds for all l ∈ Z:

$$\sum_{j=0}^{N-1} x(j + l) = \sum_{j=0}^{N-1} x(j). \tag{2.1.3}$$

Proof Let l = pN + r , where p = l/N  and r = l N (see Sect. 1.1). Using the
N -periodicity of a signal x and the fact that r ∈ 0 : N − 1, we obtain

N −1
 N −1
 −r −1
N N −1

x( j + l) = x( j + r ) = x( j + r ) + x( j + r − N )
j=0 j=0 j=0 j=N −r
N −1
 r −1
 N −1

= x( j ) + x( j ) = x( j).
j =r j =0 j=0

The lemma is proved. 

Corollary 2.1.1 Under conditions of Lemma 2.1.2, valid is the equality

$$\sum_{j=0}^{N-1} x(l - j) = \sum_{j=0}^{N-1} x(j). \tag{2.1.4}$$

Indeed,

N −1
 N −1
 N −1

 
x(l − j) = x(l) + x l + (N − j) = x(l) + x(l + j )
j=0 j=1 j =1
N −1
 N −1

= x( j + l) = x( j).
j =0 j=0

The following result is related to Lemma 2.1.2.


Lemma 2.1.3 Let N = mn, where m and n are natural numbers, x ∈ C_N and

$$y(j) = \sum_{p=0}^{m-1} x(j - pn), \qquad j \in \mathbb{Z}.$$

We assert that y ∈ C_n.
Proof We need to verify that for any j and l from Z there holds the equality y( j +
ln) = y( j), or, equivalently,


m−1
 
 m−1
x j − ( p − ln) = x( j − pn). (2.1.5)
p=0 p=0

We fix j and introduce a signal z( p) = x( j − pn). This signal is m-periodic. Accord-


ing to (2.1.3),

m−1 
m−1
z( p − l) = z( p).
p=0 p=0

It corresponds to (2.1.5). 
2.1.4 We introduce the inner (scalar) product and the norm in C_N:

$$\langle x, y \rangle = \sum_{j=0}^{N-1} x(j)\,\overline{y(j)}, \qquad \|x\| = \langle x, x \rangle^{1/2}.$$

Two signals x, y are called orthogonal if ⟨x, y⟩ = 0. A signal x is called normalized if ‖x‖ = 1.
We denote a shift x(j − k) of a signal x(j), as an element of the space C_N, by x(· − k).

Lemma 2.1.4 For all k, l ∈ Z there holds an equality

$$\bigl\langle \delta_N(\cdot - k),\, \delta_N(\cdot - l) \bigr\rangle = \delta_N(k - l).$$

Proof We fix an integer k and introduce a signal xk ( j) = δ N ( j − k). Recall that


δ N ( j) = δ N (− j) for all j ∈ Z. Taking into account formula (2.1.1) we write

N −1
 N −1

δ N (· − k), δ N (· − l) = δ N ( j − k) δ N ( j − l) = xk ( j) δ N (l − j)
j=0 j=0
= xk (l) = δ N (l − k) = δ N (k − l).

The lemma is proved. 

Corollary 2.1.2 The system of signals (2.1.2) is orthonormal, i.e. it constitutes an


orthonormal basis in the space C N .

Lemma 2.1.5 Given arbitrary signals x and y, there holds a Cauchy–Bunyakovskii inequality

$$|\langle x, y \rangle| \le \|x\| \cdot \|y\|. \tag{2.1.6}$$

Provided x ≠ O, the inequality turns into an equality if and only if y = cx for some c ∈ C.

Proof Provided x = O, the inequality (2.1.6) holds as an equality. Assume that


x = O. We take a signal y and denote c = y, x /x, x . For a signal z = y − c x
we have z, x = 0. Taking into account that y, x = x, y , we write

z 2
= z, y − c x = z, y = y − c x, y
= y 2 − cx, y = y 2 − |x, y |2 / x 2 .

We come to the equality

x 2
× y 2
− |x, y |2 = x 2
× z 2.

Hence follows both the inequality (2.1.6) and the condition of turning this inequality
into an equality. The lemma is proved. 

2.1.5 We can introduce an operation of multiplication of two signals in the linear


complex space C N :

y = x1 x2 ⇔ y( j) = x1 ( j) x2 ( j), j ∈ Z.

In this case C N becomes a commutative algebra with unity. Unity element is a


signal 1I that has 1I( j) = 1 for all j ∈ Z. Given a signal x, the inverse signal x −1
is determined from a condition x x −1 = x −1 x = 1I. It exists if and only if all values
x( j) are nonzero. In this case x −1 ( j) = [x( j)]−1 , j ∈ Z.

2.1.6 Along with a signal x we will consider the signals $\bar{x}$, Re x, Im x, |x| with values $\bar{x}(j) = \overline{x(j)}$, [Re x](j) = Re x(j), [Im x](j) = Im x(j), |x|(j) = |x(j)|. Note that $x\bar{x} = |x|^2$.

A signal x is called even if x(− j) = x( j) and odd if x(− j) = −x( j) for all
j ∈ Z. A signal x is called real if Im x = O and imaginary if Re x = O.

2.1.7 Later on we will be interested in a space C N for N ≥ 2. However, for N = 1


this space also has meaning: C1 consists of signals x with x( j) ≡ c, where c is a
complex number. In this case δ1 = 1I.

2.2 Discrete Fourier Transform

2.2.1 We take the N -th degree root of unity which we denote by ω N = exp(2πi/N ).

Lemma 2.2.1 Valid is the equality

$$\frac{1}{N}\sum_{k=0}^{N-1} \omega_N^{kj} = \delta_N(j), \qquad j \in \mathbb{Z}. \tag{2.2.1}$$

Proof The left side of (2.2.1) contains an N-periodic function; it follows from the relation

$$\omega_N^{k(j+lN)} = \omega_N^{kj}\,\bigl(\omega_N^N\bigr)^{kl} = \omega_N^{kj} \quad \text{for } l \in \mathbb{Z}$$

(see Sect. 1.7). A unit pulse $\delta_N(j)$ is N-periodic as well. Thus, it is sufficient to verify equality (2.2.1) for j ∈ 0 : N − 1.
For j = 0 it is trivial. Let j ∈ 1 : N − 1. We will use the geometric progression summation formula

$$\sum_{k=0}^{N-1} z^k = \frac{1 - z^N}{1 - z} \quad \text{for } z \ne 1.$$

By putting $z = \omega_N^j$ we obtain

$$\frac{1}{N}\sum_{k=0}^{N-1} \omega_N^{kj} = \frac{1 - \omega_N^{Nj}}{N(1 - \omega_N^j)} = 0 = \delta_N(j) \quad \text{for } j \in 1 : N - 1.$$

The lemma is proved.

2.2.2 Discrete Fourier transform (DFT) is a mapping $F_N : C_N \to C_N$ that associates a signal x with a signal $X = F_N(x)$ with the values

$$X(k) = \sum_{j=0}^{N-1} x(j)\, \omega_N^{-kj}, \qquad k \in \mathbb{Z}. \tag{2.2.2}$$

The signal X is referred to as a Fourier spectrum of the signal x, or just a spectrum. The values X(k) are called spectral components.

Theorem 2.2.1 Valid is the inversion formula

$$x(j) = \frac{1}{N} \sum_{k=0}^{N-1} X(k)\, \omega_N^{kj}, \qquad j \in \mathbb{Z}. \tag{2.2.3}$$

Proof According to (2.2.2) and (2.2.1) we have

$$\frac{1}{N}\sum_{k=0}^{N-1} X(k)\,\omega_N^{kj} = \frac{1}{N}\sum_{k=0}^{N-1}\sum_{l=0}^{N-1} x(l)\,\omega_N^{-kl}\,\omega_N^{kj}
= \sum_{l=0}^{N-1} x(l)\Bigl(\frac{1}{N}\sum_{k=0}^{N-1}\omega_N^{k(j-l)}\Bigr) = \sum_{l=0}^{N-1} x(l)\,\delta_N(j-l) = x(j).$$

The theorem is proved.

Formula (2.2.3) can be written in a shorter way: x = F N−1 (X ). If we


 now substitute
F N (x) instead of X into the right side, we will get x = F N−1 F N (x) , so that F N−1 F N
 
is an identity operator. As far as X = F N (x) = F N F N−1 (X ) , we conclude that
F N F N−1 is also an identity operator.
The mapping F N−1 : C N → C N is called an inverse DFT.
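Definitions (2.2.2) and (2.2.3) can be transcribed into Python almost literally. The sketch below (dft and idft are our names; the O(N²) loops are for illustration only, not for speed) verifies that $F_N^{-1} F_N$ is the identity on a sample signal.

```python
import cmath

def dft(x):
    """X(k) = sum_j x(j) * omega_N^(-k*j), formula (2.2.2)."""
    N = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / N) for j in range(N))
            for k in range(N)]

def idft(X):
    """x(j) = (1/N) * sum_k X(k) * omega_N^(k*j), formula (2.2.3)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * j / N) for k in range(N)) / N
            for j in range(N)]

x = [1, 2 - 1j, 0, 3j, -1, 0.5, 2, 1j]                 # one period of a signal in C_8
x_back = idft(dft(x))
assert all(abs(a - b) < 1e-9 for a, b in zip(x, x_back))
```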
2.2.3 We introduce the notation $u_k(j) = \omega_N^{kj}$. With this notation the inversion formula for the DFT takes the form

$$x(j) = \frac{1}{N}\sum_{k=0}^{N-1} X(k)\, u_k(j). \tag{2.2.4}$$

It means that a signal x( j) is expanded over the system of signals

u 0 ( j), u 1 ( j), . . . , u N −1 ( j). (2.2.5)

The coefficients of this expansion are the spectral components.

Lemma 2.2.2 The system of signals (2.2.5) is orthogonal. In addition, $\|u_k\|^2 = N$ for all k ∈ 0 : N − 1.

Proof For k, l ∈ 0 : N − 1 we have

$$\langle u_k, u_l \rangle = \sum_{j=0}^{N-1} u_k(j)\,\overline{u_l(j)} = \sum_{j=0}^{N-1} \omega_N^{(k-l)j} = N\,\delta_N(k - l).$$

Hence the lemma’s statement follows evidently. 

It is ascertained that the system (2.2.5) forms an orthogonal basis in the space C N .
This basis is called exponential.
The coefficients of the expansion (2.2.4) are determined uniquely. More to the
point, if
N −1
1 
x( j) = a(l) u l ( j), j ∈ Z, (2.2.6)
N l=0

then necessarily a(k) = X (k) for all k ∈ 0 : N − 1. Indeed, let us multiply both parts
of equality (2.2.6) by u k ( j) scalarly. According to Lemma 2.2.2 we obtain

N −1
1 
x, u k = a(l) u l , u k = a(k),
N l=0

thus
N −1
 −k j
a(k) = x, u k = x( j) ω N = X (k).
j=0

We rewrite formula (2.2.1) in a form

N −1
1 
δ N ( j) = u k ( j).
N k=0

We have the expansion of the unit pulse over the exponential basis. All the coefficients
in this expansion are equal to unity. By virtue of the uniqueness of such an expansion,
F N (δ N ) = 1I.

2.2.4 Here we provide frequently used properties of discrete Fourier transform.

Theorem 2.2.2 A signal x is real if and only if its spectrum X is even.

Proof
Necessity Let x be a real signal. We write

N −1
 N −1

−k j −k j
X (−k) = x( j) ω N = x( j) ω N = X (k).
j=0 j=0

Hence it follows that X (−k) = X (k) for all k ∈ Z. We ascertained evenness of the
spectrum X .

Sufficiency By virtue of evenness of the spectrum X , Theorem 2.2.1, and the corol-
lary to Lemma 2.1.2 (for l = 0), we have

N −1 N −1
1    (−k) j 1  kj
x( j) = X − (−k) ω N = X (−k) ω N
N k=0 N k=0
N −1
1  kj
= X (k) ω N = x( j).
N k=0

Thus, x is a real signal.


The theorem is proved. 

Theorem 2.2.3 A signal x is even if and only if its spectrum X is real.

Proof
Necessity By virtue of evenness of the signal x and the corollary to Lemma 2.1.2
(for l = 0) we get

N −1
 N −1
  −k(− j)  −k j
X (k) = x − (− j) ω N = x(− j) ω N
j=0 j=0
N −1
 −k j
= x( j) ω N = X (k).
j=0

Sufficiency We have

N −1 N −1
1  kj 1  kj
x(− j) = X (k) ω N = X (k) ω N = x( j).
N k=0 N k=0

The theorem is proved. 

As a consequence of Theorems 2.2.2 and 2.2.3 we get the following result: a


signal x is real and even if and only if its spectrum X is real and even.

2.2.5 Below we present two examples of DFT calculation. Note that it is sufficient
to define signals from C N by their values on the main period 0 : N − 1.

Example 2.2.1 Let m be a natural number, 2m ≤ N, and

$$x(j) = \begin{cases} 1 & \text{for } j \in 0 : m-1 \text{ and } j \in N-m+1 : N-1; \\ 0 & \text{for } j \in m : N-m. \end{cases}$$

(In case of m = 1 the signal x(j) coincides with $\delta_N(j)$.) We will show that

$$X(k) = \begin{cases} 2m-1 & \text{for } k = 0; \\[4pt] \dfrac{\sin\bigl((2m-1)k\pi/N\bigr)}{\sin(k\pi/N)} & \text{for } k \in 1 : N-1. \end{cases}$$

Indeed, by the definition of DFT


m−1 N −1
 
m−1
−k j k(N − j) kj
X (k) = ωN + ωN = ωN .
j=0 j=N −m+1 j=−(m−1)

In particular, X (0) = 2m − 1. Further, by the geometric progression summation


formula, for k ∈ 1 : N − 1 we have

ω−k(m−1) − ωkm 1 − ω−k


X (k) = N N
× N
=
1 − ωN
k
1 − ω−k
N
ω−k(m−1) − ωkm −km
N − ωN + ωk(m−1)
= N N
2 − ωkN − ω−k
  N
cos 2(m − 1)kπ/N − cos(2mkπ/N )
=
1 − cos(2kπ/N )
   
sin (2m − 1)kπ/N sin(kπ/N ) sin (2m − 1)kπ/N
= = .
sin2 (kπ/N ) sin(kπ/N )

Example 2.2.2 Let N = 2n and

$$x(j) = \begin{cases} 1 & \text{for } j \in 0 : n-1, \\ -1 & \text{for } j \in n : N-1. \end{cases}$$

Let us show that

$$X(k) = \begin{cases} 0 & \text{for even } k, \\[4pt] 2\bigl(1 - i\cot\frac{\pi k}{N}\bigr) & \text{for odd } k. \end{cases}$$

By the definition of DFT


n−1
−k j

2n−1
−k( j−n)−kn

n−1
−k j
X (k) = ωN − ωN = (1 − ω−kn
N ) ωN .
j=0 j=n j=0

Since ω2 = −1, we have ω−kn


N
−kn
= ω2n = ω2−k = (−1)k , so that


⎨ 0 for even k,
X (k) = 2 (1 − ω−kn
N ) 4

⎩ = for odd k.
−k
1 − ωN 1 − ω−k
N

Now it is remaining to mention that in a case when k is not divisible by N (in


particular, when k is odd) the following equality holds:

1 1
=  
1− ω−k
N 1 − cos N + i sin 2πk
2πk
N
1
=  
2 sin πk
N
sin πkN
+ i cos πkN
sin πk − i cos πk 1 πk 
= N N
= 1 − i cot . (2.2.7)
2 sin πk
N
2 N

2.3 Parseval Equality

2.3.1 The following statement is true.

Theorem 2.3.1 Let $X = F_N(x)$, $Y = F_N(y)$. Then

$$\langle x, y \rangle = N^{-1} \langle X, Y \rangle. \tag{2.3.1}$$

Proof According to (2.2.2) and (2.2.3) we obtain


⎛ ⎞
N −1
 N −1 
 N −1
1 1 ⎝ −k j
X (k) Y (k) = x( j) ω N ⎠ Y (k)
N k=0 N k=0 j=0
N −1
 N −1  N −1
 1  −k j

= x( j) Y (k) ω N = x( j) y( j).
j=0
N k=0 j=0

The theorem is proved. 

Corollary 2.3.1 The following equality is valid:

$$\|x\|^2 = N^{-1} \|X\|^2. \tag{2.3.2}$$

Formula (2.3.2) is referred to as a Parseval equality and formula (2.3.1) as a generalized Parseval equality.
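Both equalities are easy to confirm numerically; the following sketch (standard library only, our function names) checks (2.3.1) and (2.3.2) on random signals, using the inner product of Sect. 2.1.4.

```python
import cmath, random

def dft(x):
    N = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / N) for j in range(N))
            for k in range(N)]

def inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

N = 16
x = [complex(random.random(), random.random()) for _ in range(N)]
y = [complex(random.random(), random.random()) for _ in range(N)]
X, Y = dft(x), dft(y)
assert abs(inner(x, y) - inner(X, Y) / N) < 1e-9      # generalized Parseval equality (2.3.1)
assert abs(inner(x, x) - inner(X, X) / N) < 1e-9      # Parseval equality (2.3.2)
```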
2.3.2 Parseval equality can be used for calculation of trigonometric sums in cases
where explicit formulae for the spectral components of a signal are available. Let us

revisit the Example 2.2.2 from the previous section. For the signal that was considered there we have

$$\|x\|^2 = N = 2n, \qquad \|X\|^2 = 4\sum_{k=0}^{n-1}\Bigl|1 - i\cot\frac{(2k+1)\pi}{2n}\Bigr|^2 = 4\sum_{k=0}^{n-1}\frac{1}{\sin^2\frac{(2k+1)\pi}{2n}}.$$

By virtue of (2.3.2) we obtain

$$\sum_{k=0}^{n-1}\frac{1}{\sin^2\frac{(2k+1)\pi}{2n}} = n^2.$$

Consider one more example. Let

x( j) = j, j ∈ 0 : N − 1.

We will show that

$$X(k) = \begin{cases} \frac{1}{2} N(N-1) & \text{for } k = 0; \\[4pt] -\frac{1}{2} N\bigl(1 - i\cot\frac{\pi k}{N}\bigr) & \text{for } k \in 1 : N-1. \end{cases} \tag{2.3.3}$$

By the definition of DFT we have

N −1
 −k j
X (k) = j ωN .
j=0

In particular, X (0) = 21 N (N − 1). Let k ∈ 1 : N − 1. We write

N −1
 −k j
( j + 1) ω N = X (k) + N δ N (k) = X (k).
j=0

At the same time,

N −1
 N −1

−k j −k( j+1)
(j + 1) ω N = ωkN ( j + 1) ω N
j=0 j=0


N
−k j  
= ωkN j ωN = ωkN X (k) + N .
j =1

 
We come to the equation X (k) = ωkN X (k) + N , from which, by virtue of (2.2.7),
it follows that

N ωkN N 1  πk 
X (k) = = − = − N 1 − i cot ,
1 − ωkN 1 − ω−k
N
2 N

k ∈ 1 : N − 1.

Formula (2.3.3) is ascertained.


Let us calculate the squares of the norms of the signal x and its spectrum X. We have

$$\|x\|^2 = \sum_{j=1}^{N-1} j^2 = \frac{(N-1)N(2N-1)}{6}$$

(see the problem 1.15 from Chap. 1),

$$\|X\|^2 = \frac{1}{4} N^2 (N-1)^2 + \frac{1}{4} N^2 \sum_{k=1}^{N-1}\frac{1}{\sin^2\frac{\pi k}{N}}.$$

On the ground of (2.3.2) we get

$$\frac{(N-1)(2N-1)}{6} = \frac{1}{4}(N-1)^2 + \frac{1}{4}\sum_{k=1}^{N-1}\frac{1}{\sin^2\frac{\pi k}{N}}.$$

After uncomplicated transformations we come to a remarkable formula

$$\sum_{k=1}^{N-1}\frac{1}{\sin^2\frac{\pi k}{N}} = \frac{N^2-1}{3}.$$

2.4 Sampling Theorem

2.4.1 As a sample we refer to a value x( j) of a signal x for some fixed argument j.


The theorem below shows that, under a certain assumption, a signal x can be com-
pletely restored from its samples on a grid coarser than Z.

Let N = mn, where n ≥ 2 and m = 2μ − 1. We denote

1  kj
m−1
h m ( j) = ω .
m k=0 N

Theorem 2.4.1 (Sampling Theorem) If the spectrum X of a signal x equals to zero


on the set of indices μ : N − μ then
2.4 Sampling Theorem 27


m−1
x( j) = x(ln) h( j − ln), j ∈ Z. (2.4.1)
l=0

Proof By virtue of a DFT inversion formula and the theorem’s hypothesis we have

1   −(N −k) j 
μ−1 N −1

kj 
x( j) = X (k) ω N + X − (N − k) ω N
N k=0 k=N −μ+1
μ−1

1 kj
= X (k) ω N . (2.4.2)
N k=−μ+1

kj
We fix an integer j and put y(k) = ω N , k ∈ −μ + 1 : μ − 1. By extending y on Z
periodically with a period of m we obtain a signal y that belongs to Cm . Let us
calculate its DFT. According to Lemma 2.1.2,


m−1 μ−1

y(k − μ + 1) ωm−(k−μ+1)l = ω N ωm−k l
k j
Y (l) =
k=0 k =−μ+1
μ−1
 k( j−ln)
= ωN = m h m ( j − ln).
k=−μ+1

The inversion formula yields

1  
m−1 m−1
y(k) = Y (l) ωm
lk
= h m ( j − ln) ωm
lk
, k ∈ Z.
m l=0 l=0

Recalling a definition of the signal y we gain

kj

m−1
ωN = h m ( j − ln) ωm
lk
, k ∈ −μ + 1 : μ − 1.
l=0

It remains to substitute this expression into (2.4.2). We come to the formula

μ−1
 
m−1
1
x( j) = X (k) h m ( j − ln) ωm
lk
N k=−μ+1 l=0
⎧ ⎫

m−1 ⎨1 μ−1
 ⎬
= h m ( j − ln) X (k) ωk(ln)
⎩N N

l=0 k=−μ+1


m−1
= h m ( j − ln) x(ln).
l=0
28 2 Signal Transforms

The theorem is proved. 

In case of an odd m = 2μ − 1 the kernel h m ( j) can be represented as follows:



⎨1 for j = 0;
h m ( j) = sin(π j/n) (2.4.3)
⎩ for j ∈ 1 : N − 1.
m sin(π j/N )

The equality h m (0) = 1 is obvious. Let j ∈ 1 : N − 1. Then, as it was shown in


par. 2.2.5 during analysis of the Example 2.2.1,

μ−1  
 kj sin π j (2μ − 1)/N
ωN = . (2.4.4)
k=−μ+1
sin(π j/N )

It remains to take into account that 2μ − 1 = m and N = mn.

2.4.2 The sampling theorem is related to the following interpolation problem: con-
struct a signal x ∈ C N that satisfies to the conditions

x(ln) = z(l), l ∈ 0 : m − 1,
(2.4.5)
X (k) = 0, k ∈ μ : N − μ,

where z(l) are given numbers (generally, complex ones).

Theorem 2.4.2 The unique solution of the Problem (2.4.5) is a signal


m−1
x( j) = z(l) h m ( j − ln). (2.4.6)
l=0

Proof The conditions (2.4.5) are in fact a system of N linear equations with respect
to N variables x(0), x(1), …, x(N − 1). Let us consider a homogeneous system

x(ln) = 0, l ∈ 0 : m − 1,
X (k) = 0, k ∈ μ : N − μ.

According to the sampling theorem, it has only zero solution. Therefore the sys-
tem (2.4.5) is uniquely resolvable for any z(l).
Formula (2.4.6) follows from (2.4.1). 
2.4.3 The interpolation formula (2.4.6) can be generalized to the case of an even m.
For m = 2μ we put

1 
μ−1
 kj
h m ( j) = cos(π j/n) + ωN .
m k=−μ+1
2.4 Sampling Theorem 29

Theorem 2.4.3 A signal


m−1
x( j) = z(l) h m ( j − ln)
l=0

satisfies to interpolation conditions

x(ln) = z(l), l ∈ 0 : m − 1.

Proof Let us show that h m (ln) = δm (l). We have

1 
μ−1
 −1

h m (ln) = (−1)l + ωmkl + ωm(k+m)l .
m k=0 k=−μ+1

We replace an index in the latter sum by putting k = k + m. When k goes from


−μ + 1 to −1, the index k goes from −μ + 1 + m = μ + 1 to m − 1, thus

−1
 
m−1
ωm(k+m)l = ωmk l .
k=−μ+1 k =μ+1

μl
The summand (−1)l can be written in a form (−1)l = ωm . As a result we come to
the required formula
1  kl
m−1
h m (ln) = ω = δm (l).
m k=0 m

On the basis of Lemma 2.1.1, for l ∈ 0 : m − 1 we gain


m−1
 
 m−1
x(ln) = z(l ) h m (l − l )n = z(l ) δm (l − l ) = z(l).
l =0 l =0

The theorem is proved. 

In case of an even m = 2μ the kernel h m ( j) can be represented as follows:



1 for j = 0;
h m ( j) = (2.4.7)
1
m
sin( πnj ) cot( πNj ) for j ∈ 1 : N − 1.

The equality h m (0) = 1 is evident. Let j ∈ 1 : N − 1. Then according to (2.4.4)


30 2 Signal Transforms

μ−1  
 kj sin π j (m − 1)/N
ωN =
k=−μ+1
sin(π j/N )
sin(π j/n) cos(π j/N ) − cos(π j/n) sin(π j/N )
=
sin(π j/N )
= sin(π j/n) cot(π j/N ) − cos(π j/n).

The remaining follows from the definition of h m ( j).

2.5 Cyclic Convolution

2.5.1 Let x and y be signals of C N . A signal u = x ∗ y with samples

N −1

u( j) = x(k) y( j − k), j ∈Z
k=0

is referred to as a cyclic convolution of signals x and y.

Theorem 2.5.1 (Convolution Theorem) Let X = F N (x) and Y = F N (y). Then

F N (x ∗ y) = X Y (2.5.1)

where the right side is a component-wise product of spectra.

Proof According to Lemma 2.1.2 we have


 N −1
N −1 

 −k( j−l)−kl
[F N (x ∗ y)](k) = x(l) y( j − l) ω N
j=0 l=0
N −1
 N −1
 −k( j−l)
= x(l) ω−kl
N y( j − l) ω N
l=0 j=0
N −1
 N −1
 −k j
= x(l) ω−kl
N y( j) ω N = X (k) Y (k),
l=0 j=0

which conforms to (2.5.1). The theorem is proved. 

Corollary 2.5.1 Valid is the formula

x ∗ y = F N−1 (X Y ). (2.5.2)

Theorem 2.5.2 A cyclic convolution is commutative and associative.


2.5 Cyclic Convolution 31

Proof The equality x ∗ y = y ∗ x follows directly from (2.5.2). Let us verify the
associativity. Take three signals x1 , x2 , x3 and denote their spectra by X 1 , X 2 , X 3 .
Relying on (2.5.1) and (2.5.2) we obtain
   
(x1 ∗ x2 ) ∗ x3 = F N−1 F N (x1 ∗ x2 ) X 3 = F N−1 (X 1 X 2 ) X 3
   
= F N−1 X 1 (X 2 X 3 ) = F N−1 X 1 F N (x2 ∗ x3 ) = x1 ∗ (x2 ∗ x3 ).

The theorem is proved. 

2.5.2 A linear complex space C N where component-wise product of signals as an


operation of multiplication is replaced with a cyclic convolution constitutes another
commutative algebra with unity. Here unity element is δ N because according to
Lemma 2.1.1

N −1

[x ∗ δ N ]( j) = x(k) δ N ( j − k) = x( j).
k=0

An inverse element y to a signal x is defined by a condition

x ∗ y = δN . (2.5.3)

It exists if and only if each component of the spectrum X is nonzero. In that case
y = F N−1 (X −1 ), where X −1 (k) = [X (k)]−1 . Let us verify this.
Applying an operation F N to both sides of (2.5.3) we get an equation X Y = 1I
with respect to Y . This equation is equivalent to (2.5.3). Such a method is called a
transition into a spectral domain. The latter equation is resolvable if and only if each
component of the spectrum X is nonzero. The solution is written explicitly in a form
Y = X −1 . The inversion formula yields y = F N−1 (X −1 ). This very signal is inverse
to x.

2.5.3 A transform L : C N → C N is called linear if

L(c1 x1 + c2 x2 ) = c1 L(x1 ) + c2 L(x2 )

for any x1 , x2 from C N and any c1 , c2 from C. The simplest example of a linear
transform is a shift operator P that maps a signal x to a signal x = P(x) with
samples x ( j) = x( j − 1).    
A transform L : C N → C N is referred to as stationary if L P(x) = P L(x)
for all x ∈ C N . It follows from the definition that
   
L Pk (x) = Pk L(x) , k = 0, 1, . . .

Here P0 is an identity operator.


32 2 Signal Transforms

Theorem 2.5.3 A transform L : C N → C N is both linear and stationary if and only


if there exists a signal h such that

L(x) = x ∗ h for all x ∈ C N . (2.5.4)

Proof
Necessity Taking into account that Pk (x) = x(· − k) we rewrite formula (2.1.1) in
a form
N −1

x= x(k) Pk (δ N ).
k=0

According to the hypothesis, an operator L is linear and stationary. Hence

N −1
 N −1
 k    
L(x) = x(k) L P (δ N ) = x(k) Pk L(δ N ) .
k=0 k=0

Denoting h = L(δ N ) we obtain

N −1

L(x) = x(k) Pk (h) = x ∗ h.
k=0

Sufficiency The linearity of a convolution operator is evident. We will verify the


stationarity. By virtue of the commutativity of a cyclic convolution we have

N −1

L(x) = h ∗ x = h(k) Pk (x).
k=0

Now we write
N −1
    
P L(x) = h(k) Pk+1 (x) = L P(x) .
k=0

The theorem is proved. 

It is affirmed that a linear stationary operator L can be represented in a form (2.5.4)


where h = L(δ N ). Such an operator is also referred to as a filter, and the signal h is
referred to as its impulse response.

2.5.4 As an example we consider an operation of taking a finite difference of the


r -th order:
r  
r −l r
[ (x)]( j) =  x( j) =
r r
(−1) x( j + l).
l=0
l
2.5 Cyclic Convolution 33

We will show that r (x) = x ∗ h r , where

r  
r
h r ( j) = (−1)r −l δ N ( j + l). (2.5.5)
l=0
l

According to (2.1.1) we have


r  N −1
r r −l
 x( j) =
r
(−1) x(k) δ N ( j + l − k)
l=0
l k=0
N −1
   
r
r  
= x(k) (−1)r −l δ N ( j − k) + l
k=0 l=0
l
N −1

= x(k) h r ( j − k) = [x ∗ h r ]( j),
k=0

as it was to be ascertained.
Thus, the operator r : C N → C N is a filter with an impulse response h r of a
form (2.5.5). It is obvious that h r = r (δ N ).
kj
2.5.5 We take a filter L(x) = x ∗ h and denote H = F N (h). Recall that u k ( j) = ω N .

Theorem 2.5.4 Valid is the equality

L(u k ) = H (k) u k , k ∈ 0 : N − 1.

Proof We have

N −1

[L(u k )]( j) = [h ∗ u k ]( j) = h(l) u k ( j − l)
l=0
N −1
 N −1

h(l) ω−kl
k( j−l) kj
= h(l) ω N = ωN N = H (k) u k ( j).
l=0 l=0

The theorem is proved. 

Theorem 2.5.4 states that exponential functions u k form a complete set of eigen-
functions of any filter L(x) = x ∗ h; in addition, an eigenfunction u k corresponds to
an eigenvalue H (k) = [F N (h)](k), k ∈ 0 : N − 1.
The signal H is referred to as a frequency response of the filter L.
34 2 Signal Transforms

2.6 Cyclic Correlation

2.6.1 Let x and y be signals of C N . A signal Rx y with samples

N −1

Rx y ( j) = x(k) y(k − j), j ∈ Z,
k=0

is referred to as a cross-correlation of signals x and y.


Put y1 ( j) = y(− j). Then
Rx y = x ∗ y1 . (2.6.1)

Theorem 2.6.1 (Correlation Theorem) The following formula is valid:

F N (Rx y ) = X Y , (2.6.2)

where X = F N (x) and Y = F N (y).

Proof By virtue of (2.6.1) and (2.5.1) we may write F N (Rx y ) = X Y1 , where Y1 =


F N (y1 ). It is remaining to verify that Y1 = Y . According to (2.1.4) we have

N −1
 N −1

−k j k(− j)
Y1 (k) = y1 ( j) ω N = y(− j) ω N
j=0 j=0
N −1
 kj
= y( j) ω N = Y (k).
j=0

The theorem is proved. 

A signal Rx x is referred to as an auto-correlation of a signal x. According to (2.6.2)

F N (Rx x ) = X X = |X |2 . (2.6.3)

We note that |Rx x ( j)| ≤ Rx x (0) for j ∈ 1 : N − 1. Indeed, by virtue of Cauchy–


Bunyakovskii inequality (2.1.6) and Lemma 2.1.2 we have
 N −1 
 
 
|Rx x ( j)| =  x(k) x(k − j)
 
k=0
 N −1 1/2  N −1 1/2
 
≤ |x(k)| 2
|x(k − j)| 2

k=0 k=0
N −1

= |x(k)|2 = Rx x (0).
k=0
2.6 Cyclic Correlation 35

N −1
2.6.2 The orthonormal basis {δ N (· − k)}k=0 in a space C N consists of shifts of the
unit pulse. Are there any other signals whose shifts form orthonormal bases? This
question can be answered positively.
N −1
Lemma 2.6.1 Shifts {x(· − k)}k=0 of a signal x form an orthonormal basis in a
space C N if and only if Rx x = δ N .
Proof Since
N −1

Rx x (l) = x( j) x( j − l) = x, x(· − l) ,
j=0

the condition Rx x = δ N is equivalent to the following:

x, x(· − l) = δ N (l), l ∈ 0 : N − 1. (2.6.4)

At the same time, for k, k ∈ 0 : N − 1 we have

N −1
  
x(· − k), x(· − k ) = x( j − k) x ( j − k) − (k − k)
j=0
N −1
    
= x( j ) x j − k − k N = x, x(· − k − k N) .
j =0

Orthonormality condition x(· − k), x(· − k ) = δ N (k − k) takes a form


   
x, x(· − k − k N) = δ N k − k N , k, k ∈ 0 : N − 1. (2.6.5)

Equivalence of the relations (2.6.4) and (2.6.5) guarantees the validity of the lemma’s
statement. 
N −1
Theorem 2.6.2 Shifts {x(· − k)}k=0 of a signal x form an orthonormal basis in a
space C N if and only if |X (k)| = 1 for k ∈ 0 : N − 1.
Proof
N −1
Necessity If {x(· − k)}k=0 is an orthonormal basis then, by virtue of Lemma 2.6.1,
Rx x = δ N . Hence F N (Rx x ) = 1I. On the strength of (2.6.3) we have |X |2 = 1I, thus
|X (k)| = 1 for k ∈ 0 : N − 1.

Sufficiency Let |X | = 1I. Then, according to (2.6.3), F N (Rx x ) = 1I. It is possible


only when Rx x = δ N . It is remaining to refer to Lemma 2.6.1. The theorem is
proved. 
2.6.3 Let us take N complex numbers Y (k), k ∈ 0 : N − 1, whose moduli are equal
to unity. With the aid of the inverse Fourier transform we construct a signal y =
F N−1 (Y ). By virtue of Theorem 2.6.2 its shifts {y(· − k)}k=0
N −1
form an orthonormal
basis in a space C N . Let us expand a signal x over this basis
36 2 Signal Transforms

N −1

x= c(k) y(· − k) (2.6.6)
k=0

and calculate the coefficients c(k). In order to do that we multiply both sides of (2.6.6)
scalarly by y(· − l), l ∈ 0 : N − 1. We gain x, y(· − l) = c(l) or

N −1

c(l) = x( j) y( j − l) = Rx y (l).
j=0

We come to the formula


N −1

x= Rx y (k) y(· − k).
k=0

2.7 Optimal Interpolation

2.7.1 Let N = mn, where n ≥ 2, and r be a natural number. We consider an extremal


problem

f (x) := r (x) 2
→ min,
(2.7.1)
x(ln) = z(l), l ∈ 0 : m − 1; x ∈ C N .

Here we need to construct the possibly smoothest signal that takes given values z(l)
in the nodes ln. The smoothness is characterized by the squared norm of the finite
difference of the r -th order. Most commonly r = 2.
Let us perform a change of variables

N −1
 −k j
X (k) = x( j) ω N , k ∈ 0 : N − 1,
j=0

and rewrite the Problem (2.7.1) in new variables X (k). We start with a goal function.
As it was mentioned in par. 2.5.4, the equality r (x) = x ∗ h r holds, where h r is
determined by formula (2.5.5): h r = r (δ N ). Using the Parseval equality (2.3.2) and
the Convolution Theorem 2.5.1 we obtain

r (x) 2
= x ∗ h r 2 = N −1 F N (x ∗ h r ) 2
= N −1 X Hr 2

N −1

= N −1 |X (k) Hr (k)|2 .
k=0
2.7 Optimal Interpolation 37

Here
N −1
 −k j
Hr (k) = h r ( j) ω N
j=0


r  
N −1
r −k( j+l)+kl
= (−1)r −l δ N ( j + l) ω N
l=0
l j=0


r   N −1

r −k j
= (−1)r −l ωkl
N δ N ( j) ω N
l=0
l j=0
r  
r
= (−1)r −l N = (ω N − 1) .
ωkl k r

l=0
l

Denote
 2π k 2 2π k  2π k  πk
αk := |ωkN − 1|2 = cos − 1 + sin2 = 2 1 − cos = 4 sin2 .
N N N N

Then |Hr (k)|2 = αkr and

N −1
1  r
r (x) 2
= α |X (k)|2 . (2.7.2)
N k=0 k

Now we turn to the constraints. We have


N −1
1  1 
m−1 n−1
x(ln) = X (k) ωkln
N = X (qm + p) ωm(qm+ p)l
N k=0 N p=0 q=0
⎡ ⎤
1 
m−1 
n−1
= ⎣1 X ( p + qm)⎦ ωmpl .
m p=0 n q=0

The constraints of the Problem (2.7.1) take a form


⎡ ⎤
1 ⎣1 
m−1 n−1
X ( p + qm)⎦ ωmpl = z(l), l ∈ 0 : m − 1.
m p=0 n q=0

The latter formula is an expansion of a signal z ∈ Cm over the exponential basis. It


is equivalent to

1
n−1
X ( p + qm) = Z ( p), p ∈ 0 : m − 1, (2.7.3)
n q=0
38 2 Signal Transforms

where Z = Fm (z). On the basis of (2.7.2) and (2.7.3) we come to an equivalent


setting of the Problem (2.7.1):

N −1
1  r
α |X (k)|2 → min,
N k=0 k
(2.7.4)
1
n−1
X ( p + qm) = Z ( p), p ∈ 0 : m − 1.
n q=0

2.7.2 The Problem (2.7.4) falls into m independent subproblems corresponding to


different p ∈ 0 : m − 1:

1  r
n−1
α |X ( p + qm)|2 → min,
N q=0 p+qm
(2.7.5)

n−1
X ( p + qm) = n Z ( p).
q=0

Since α0 = 0, we get the following problem for p = 0:

1  r
n−1
α |X (qm)|2 → min,
N q=1 qm

n−1
X (qm) = n Z (0).
q=0

Its solution is evident:


 
X ∗ (0) = n Z (0), X ∗ (m) = X ∗ (2m) = · · · = X ∗ (n − 1)m = 0. (2.7.6)

The minimal value of the goal function equals to zero.


Let p ∈ 1 : m − 1. In this case each coefficient α p+qm , q ∈ 0 : n − 1, is positive.
According to Cauchy–Bunyakovskii inequality (2.1.6) we have
 2
 n−1 
   r/2  −r/2 
2 
|n Z ( p)| =  α p+qm X ( p + qm) α p+qm 
 q=0 
⎛ ⎞⎛ ⎞

n−1 
n−1
≤⎝ αrp+qm |X ( p + qm)|2 ⎠⎝ α −r
p+qm
⎠. (2.7.7)
q=0 q=0
2.7 Optimal Interpolation 39


n−1 −1
Denoting λ p = n α −r
p+qm we obtain
q=0

1  r
n−1
1
α |X ( p + qm)|2 ≥ λ p |Z ( p)|2 .
N q=0 p+qm m

Inequality (2.7.7) turns into an equality if and only if


r/2 −r/2
α p+qm X ( p + qm) = c p α p+qm ,

or X ( p + qm) = c p α −r
p+qm , with some c p ∈ C for all q ∈ 0 : n − 1. The variables
X ( p + qm) must satisfy to the constraints of the Problem (2.7.5), so it is necessary
that

n−1
cp α −r
p+qm = n Z ( p).
q=0

Hence c p = λ p Z ( p).
It is affirmed that for every p ∈ 1 : m − 1 the unique solution of the Prob-
lem (2.7.5) is the sequence

X ∗ ( p + qm) = λ p Z ( p) α −r
p+qm , q ∈ 0 : n − 1. (2.7.8)

Moreover, the minimal value of the goal function equals to m −1 λ p |Z ( p)|2 . Let us
note that λ p is a harmonic mean of the numbers

αrp , αrp+m , . . . , αrp+(n−1)m .

Formulae (2.7.6) and (2.7.8) define X ∗ on the whole main period 0 : N − 1. The
inversion formula yields the unique solution of the Problem (2.7.1):

N −1
1  kj
x∗ ( j) = X ∗ (k) ω N , j ∈ Z. (2.7.9)
N k=0

The minimal value of the goal function of the Problem (2.7.1) is a total of the minimal
values of the goal functions of the Problems (2.7.5) for p = 0, 1, . . . , m − 1, so that

1 
m−1
f (x∗ ) = λ p |Z ( p)|2 .
m p=1
40 2 Signal Transforms

2.7.3 Let us modify formula (2.7.9) to the form more convenient for calculations.
Represent indices k, j ∈ 0 : N − 1 in a way k = p + qm, j = s + ln, where p, l ∈
0 : m − 1 and q, s ∈ 0 : n − 1. In accordance with (2.7.6) and (2.7.8) we write

1 
m−1 n−1
( p+qm)(s+ln)
x∗ (s + ln) = X ∗ ( p + qm) ω N
N p=0 q=0
⎡⎛ ⎞ ⎤
1 
m−1 
n−1
= ⎣⎝ 1 X ∗ ( p + qm) ωnqs ⎠ ω N ⎦ ωmpl
ps
m p=0 n q=0
⎡ ⎛ ⎞ ⎤
1 1 
m−1 
n−1
= Z (0) + ⎣λ p Z ( p) ⎝ 1 α −r ωqs ⎠ ω N ⎦ ωmpl .
ps
m m p=1 n q=0 p+qm n

We come to the following scheme of solving the Problem (2.7.1):

(1) we form two arrays of constants that depend only on m, n and r : one-dimensional
⎛ ⎞−1

n−1
λp = n ⎝ α −r
p+qm
⎠ , p ∈ 1 : m − 1,
q=0

and (column-wise) two-dimensional


⎛ ⎞
1 
n−1
D[s, p] = ⎝ α −r ωqs ⎠ ω N ,
ps
n q=0 p+qm n

s ∈ 1 : n − 1, p ∈ 1 : m − 1;

(2) we calculate Z = Fm (z) and "


Z ( p) = λ p Z ( p) for p ∈ 1 : m − 1;
(3) we introduce a two-dimensional array B with the columns

B[s, 0] = Z (0), s ∈ 1 : n − 1,

B[s, p] = "
Z ( p) D[s, p], s ∈ 1 : n − 1, p ∈ 1 : m − 1;

(4) applying the inverse DFT of order m to all n − 1 rows of the matrix B we obtain
a solution of the Problem (2.7.1):

x∗ (ln) = z(l), l ∈ 0 : m − 1,
2.7 Optimal Interpolation 41

1 
m−1
x∗ (s + ln) = B[s, p] ωmpl ,
m p=0

l ∈ 0 : m − 1, s ∈ 1 : n − 1.

2.8 Optimal Signal–Filter Pairs

2.8.1 We will proceed with a more detailed analysis of linear stationary operators
(a. k. a. filters).

A filter L with an impulse response h is called matched with a signal x if

L(x) := x ∗ h = Rx x . (2.8.1)

A matched filter exists. For instance, one may consider h( j) = x(− j), j ∈ Z. In
this case
N −1

[x ∗ h]( j) = x(k) x(k − j) = Rx x ( j).
k=0

Let us clarify the question of the uniqueness of a matched filter.

Theorem 2.8.1 Let x ∈ C N be a signal with all spectral components being nonzero.
Then the impulse response h of a matched with the signal x filter is determined
uniquely.

Proof We take a signal h satisfying to the condition (2.8.1). Denote by X and H


the spectra of the signals x and h, respectively. The convolution theorem and for-
mula (2.6.3) yield X H = F N (Rx x ) = X X . As far as X (k) = 0 for all k ∈ Z, it holds
H = X . Thus, by the inversion formula of DFT,

N −1
1  kj
h( j) = X (k) ω N , j ∈ Z.
N k=0

Now we have
N −1
1  kj
h(− j) = X (k) ω N = x( j),
N k=0

whence it follows that h( j) = x(− j), j ∈ Z. The theorem is proved. 

Provided the spectrum X of a signal x has zero components, a matched filter is


not unique. One may put
42 2 Signal Transforms

X (k), if X (k) = 0,
H (k) = (2.8.2)
ck , if X (k) = 0,

where ck are arbitrary complex numbers. The inversion formula

N −1
1  kj
h( j) = H (k) ω N , j ∈ Z,
N k=0

gives the analytical representation of impulse responses of all filters matched with
the signal x. Indeed, signals H of a form (2.8.2), and solely such signals, satisfy to
a condition X (H − X ) = O which is equivalent to X H = F N (Rx x ). Applying the
operator F N−1 to both sides of the latter equality we gain x ∗ h = Rx x .
 
2.8.2 We denote by R "x x = Rx x (0) −1 Rx x a normalized auto-correlation of a
nonzero signal x. If it holds

"x x = δ N ,
R (2.8.3)

the corresponding signal x is called delta-correlated. As long as R "x x (0) = 1, the


"
equality Rx x (0) = δ N (0) holds automatically. Thus, the condition (2.8.3) is equiva-
lent to Rx x ( j) = 0 for j = 1, . . . , N − 1.
It is not difficult to describe the whole set of all delta-correlated signals. In order
to do that we transfer equality (2.8.3) into a spectral domain. Taking into account
 −1
that F N (Rx x ) = |X |2 and F N (δ N ) = 1I, we gain Rx x (0) |X |2 = 1I. It means that

|X (k)| = Rx x (0) for all k ∈ Z. We come to the following conclusion.

Theorem 2.8.2 A nonzero signal x is delta-correlated if and only if it can be repre-


sented as
N −1
1  kj
x( j) = ck ω N , j ∈ Z,
N k=0

where ck are nonzero complex coefficients whose moduli are pairwise equal.

Only the sufficiency
√ needs proof. Let |c(k)| ≡ A > 0. Since c(k) = X (k), it
holds |X (k)| ≡ A. The Parseval equality (2.3.2) yields

N −1
 N −1

Rx x (0) = |x( j)|2 = N −1 |X (k)|2 = A, (2.8.4)
j=0 k=0


so that |X (k)| ≡ Rx x (0). The latter identity is equivalent to (2.8.3).
2.8 Optimal Signal–Filter Pairs 43

2.8.3 A value
N −1

E(x) = |x( j)|2
j=0

is referred to as the energy of a signal x. According to (2.8.4) it holds E(x) = Rx x (0)


and
E(x) = N −1 E(X ), (2.8.5)

where X = F N (x).
Given a nonzero signal x, we associate with it a side-lobe blanking filter (SLB
filter) whose impulse response h is determined by a condition

x ∗ h = E(x) δ N . (2.8.6)

A signal and its SLB filter form a signal–filter pair.


Transition of the equality (2.8.6) into a spectral domain yields X H = E(x) 1I.
We see that SLB filter exists if and only if each component of the spectrum X of a
signal x is nonzero. Moreover H = E(x) X −1 and

N −1

h( j) = N −1 E(x) [X (k)]−1 ω N ,
kj
j ∈ Z.
k=0

2.8.4 Let us consider a set of signals with a given energy. We will examine an
extremal problem of selecting from this set a signal whose SLB filter has an impulse
response with the smallest energy. This problem can be formalized as follows:

(P) Minimize [E(x)]−1 E(h) under constraints

x ∗ h = E(x) δ N ; E(x) = A; x, h ∈ C N .

Here A is a fixed positive number. We rewrite the problem (P) in a more compact
form:
γ := A−1 E(h) → min,
(2.8.7)
x ∗ h = A δ N ; E(x) = A; x, h ∈ C N .

A solution (x∗ , h ∗ ) of this problem is referred to as an optimal signal–filter pair.

Theorem 2.8.3 The minimal value of γ in the Problem (2.8.7) equals to unity. It is
achieved on any delta-correlated signal x∗ with E(x∗ ) = A. Moreover, the optimal
SLB filter h ∗ is a matched filter.

Proof Let us transfer the Problem (2.8.7) into a spectral domain. According to (2.8.5)
we gain
44 2 Signal Transforms

γ := (AN )−1 E(H ) → min,


X H = A 1I, (AN )−1 E(X ) = 1.

The spectrum H can be excluded. Taking into account that H = AX −1 we write

N −1
A 
γ := |X (k)|−2 → min,
N k=0
N −1
1 
|X (k)|2 = 1.
AN k=0

Denote ak = |X (k)|2 . The classical inequality between a harmonic mean and an


arithmetic mean yields
# N −1
1 N 1 ak
# N −1 −1 ≤ = 1.
k=0
(2.8.8)
A k=0 ak A N

The left side of the inequality (2.8.8) consists of the value γ −1 , which is not greater
than unity. So γ is not less than unity. The equality to#unity is achieved if and only
N −1
if all the values ak are equal√to each other. Since k=0 k = AN it means that
a
ak ≡ A, therefore |X (k)| ≡ A. Taking into account Theorem 2.8.2 we come to
the following conclusion: there holds the inequality γ ≥ 1; the equality γ = 1 is
achieved on all delta-correlated signals x∗ with E(x∗ ) = A and solely on them.
As to the optimal SLB filters for the indicated signals x∗ , these are necessarily the
matched filters. Indeed, as long as x∗ is delta-correlated, we have

Rx∗ x∗ = Rx∗ x∗ (0) δ N = E(x∗ ) δ N .

On the other hand, the constraints of the problem (P) yield x∗ ∗ h ∗ = E(x∗ ) δ N .
Therefore x∗ ∗ h ∗ = Rx∗ x∗ , which in accordance with (2.8.1) means that h ∗ is an
impulse response of a matched with x∗ filter. The theorem is proved. 

2.9 Ensembles of Signals

2.9.1 A finite set of signals x from C N with the same energy will be referred to as
an ensemble of signals and will be denoted as Q. For the definiteness sake we will
assume that

E(x) = A for all x ∈ Q. (2.9.1)

We introduce two characteristics of an ensemble of signals:


2.9 Ensembles of Signals 45

Ra = max max |Rx x ( j)|,


x∈Q j∈1:N −1

Rc = max max |Rx y ( j)|.


x,y∈Q j∈0:N −1
x= y

Note that if Ra = 0 then every signal in Q is delta-correlated (see par. 2.8.2).

Theorem 2.9.1 Let Q be an ensemble consisting of m signals, and let the condition
(2.9.1) hold. Then
 2  
Rc N − 1 Ra 2
N + ≥ 1. (2.9.2)
A m−1 A

The essential role in the proof of the above theorem is played by the following
assertion.
Lemma 2.9.1 For arbitrary signals x and y from C N there holds an equality

N −1
 N −1

|Rx y ( j)|2 = Rx x ( j) R yy ( j). (2.9.3)
j=0 j=0

Proof On the basis of (2.3.2) and (2.6.2) we write

N −1
 N −1
1   2
|Rx y ( j)|2 = [F N (Rx y )](k)
j=0
N k=0
N −1 N −1
1  1 
= |X (k) Y (k)|2 = |X (k)|2 |Y (k)|2 .
N k=0 N k=0

In addition to that, according to (2.3.1) and (2.6.3) we have

N −1
 N −1
1 
Rx x ( j) R yy ( j) = [F N (Rx x )](k) [F N (R yy )](k)
j=0
N k=0
N −1
1 
= |X (k)|2 |Y (k)|2 .
N k=0

The right sides of the presented relations are equal, therefore the left sides are equal
as well. The lemma is proved. 

Proof of the theorem The equality (2.9.3) yields


46 2 Signal Transforms


N −1 
2 N −1  ⎛ ⎞
    
 
 Rx x ( j) = Rx x ( j) ⎝ R yy ( j)⎠
 
j=0 x∈Q j=0 x∈Q y∈Q
N −1

= Rx x ( j) R yy ( j)
x∈Q y∈Q j=0
N −1

= |Rx y ( j)|2
x∈Q y∈Q j=0
N −1
 N −1

= |Rx y ( j)|2 + |Rx x ( j)|2 . (2.9.4)
x∈Q y∈Q j=0 x∈Q j=0
y=x

Let us estimate the left and the right sides of this equality. Since Rx x (0) = E(x) = A
for all x ∈ Q, we have

N −1 
2  2
   
   
 Rx x ( j) ≥  Rx x (0) = m 2 A2 .
   
j=0 x∈Q x∈Q

Further, by a definition of Rc and Ra ,

N −1

|Rx y ( j)|2 ≤ Rc2 N (m − 1) m,
x∈Q y∈Q j=0
y=x

N −1
 N −1
 
|Rx x ( j)| =
2
|Rx x ( j)|2 + |Rx x (0)|2
x∈Q j=0 x∈Q j=1 x∈Q

≤ Ra2 (N − 1) m + m A2 .

Combining the derived estimates we come to inequality

m (m − 1) A2 ≤ m (m − 1) N Rc2 + m (N − 1) Ra2 .

Hence (2.9.2) follows evidently. The theorem is proved. 

The inequality (2.9.2) is referred to as Sidel’nikov–Sarwate inequality. It shows,


in particular, that the values Ra and Rc cannot be arbitrary small simultaneously.
Below we will consider two extreme cases when one of these values equals to zero
and the other one gets the smallest possible magnitude.
2.9 Ensembles of Signals 47

2.9.2 Two signals x, y from C N are called non-correlated if Rx y (k) ≡ 0. Since

N −1
 N −1

R yx (k) = y( j) x( j − k) = x( j) y( j + k) = Rx y (−k),
j=0 j=0

the identity Rx y (k) ≡ 0 holds together with R yx (k) ≡ 0.


Let us rewrite the conditions Rx y = O and R yx = O in the equivalent form

x, y(· − k) = 0, y, x(· − k) = 0, k ∈ Z.

We come to the following conclusion: the signals x and y are non-correlated when
the signal x is orthogonal to all shifts of the signal y and the signal y is orthogonal
to all shifts of the signal x.
Let Qc be an ensemble consisting of pairwise non-correlated signals. In this case
Rc = 0. Furthermore, for such ensembles the equality (2.9.4) gets the form
 2
N −1  
   N −1
 
 R ( j)  = |Rx x ( j)|2 . (2.9.5)
 x x 

j=0 x∈Qc  x∈Qc j=0

Since non-correlated signals are at least orthogonal, the amount of signals in Qc does
not exceed N . We will show that in a space C N there exist ensembles containing
exactly N pairwise non-correlated signals.
−1 pj
Put Qc = {u p } Np=0 , where u p ( j) = ω N . This is an ensemble of signals with A =
N because E(u p ) = N for all p ∈ 0 : N − 1. Further,

N −1
 N −1
 j ( p− p )+ p k pk
Ru p u p (k) = u p ( j) u p ( j − k) = ωN = N ω N δ N ( p − p ).
j=0 j=0

It is evident that Ru p u p (k) ≡ 0 for p = p , p, p ∈ 0 : N − 1, i.e. the ensemble Qc


consists of N pairwise non-correlated signals. Note as well that Ru p u p (k) = N u p (k).
In particular, |Ru p u p (k)| ≡ N for all p ∈ 0 : N − 1. The latter identity has a general
nature. Namely, the following theorem is true.
Theorem 2.9.2 If an ensemble Qc consists of N pairwise non-correlated signals,
and E(x) = A for all x ∈ Qc , then

|Rx x (k)| ≡ A for all x ∈ Qc . (2.9.6)

Proof According to (2.9.5) we have


 2
N −1  
   

|Rx x (k)| ≥ 
2
Rx x (0) = N 2 A2 . (2.9.7)
x∈Qc k=0 x∈Qc 
48 2 Signal Transforms

As it was mentioned in par. 2.6.1, valid are the relations |Rx x (k)| ≤ Rx x (0) = A.
Assume that for some x ∈ Qc and k ∈ 1 : N − 1 there holds |Rx x (k)| < A. Then

N −1

|Rx x (k)|2 < N 2 A2 .
x∈Qc k=0

This contradicts with (2.9.7). The theorem is proved. 

Corollary 2.9.1 For any ensemble consisting of N pairwise non-correlated signals,


the inequality (2.9.2) is fulfilled as an equality.

Indeed, we need to take into account that in this case m = N , Rc = 0, and, by


virtue of Theorem 2.9.2, Ra = A.

2.9.3 Now we turn to ensembles Qa consisting of delta-correlated signals. For √ such


ensembles there holds Ra = 0, so that the inequality (2.9.2) gets a form Rc ≥ A/ N .
This estimation does not depend on the amount of signals in an ensemble. It, in
particular, turns into an equality if


|Rx y ( j)| ≡ A/ N for all x, y ∈ Qa , x = y. (2.9.8)

We will present an example of an ensemble that satisfies the condition (2.9.8).


Consider a two-parametric collection of signals of a form

k( j 2 + pj)
akp ( j) = ω N , k, p ∈ 0 : N − 1, gcd (k, N ) = 1. (2.9.9)

Lemma 2.9.2 Provided that N is odd, the signals akp are delta-correlated.

Proof For the sake of simplicity we denote x = akp . We have

N −1
 N −1

Rx x ( j) = x(l) x(l − j) = x(l + j) x(l)
l=0 l=0
N −1
 k(l 2 +2l j+ j 2 + pl+ pj)−k(l 2 + pl)
= ωN
l=0
N −1

k( j 2 + pj) 2kl j
= ωN ωN = N x( j) δ N (2k j). (2.9.10)
l=0
2.9 Ensembles of Signals 49

We will show that in our case

δ N (2k j) = δ N ( j), j ∈ Z. (2.9.11)

Rewrite (2.9.11) in an equivalent form


 
δ N 2k j N = δ N ( j), j ∈ 0 : N − 1. (2.9.12)

The number N is relatively prime with k (by the proviso) and is relatively prime
with 2 (due to oddity), therefore N is relatively prime with the product 2k. Since
gcd (2k, N ) = 1, the mapping j → 2k j N is a permutation of a set 0 : N − 1 that
maps zero to zero. This fact, along with the definition of the unit pulse δ N , guarantees
the validity of the equality (2.9.12) and, as a consequence, of (2.9.11).
On the basis of (2.9.10) and (2.9.11) we gain

Rx x ( j) = N x( j)δ N ( j) = N x(0)δ N ( j) = N δ N ( j).

The lemma is proved. 

Let N be an odd number. We take two signals of a form (2.9.9):

k( j 2 + pj) s( j 2 + pj)
x( j) = ω N , y( j) = ω N .

To be definite, we assume that k > s.

Theorem 2.9.3 Provided that gcd (k − s, N ) = 1, the following identity holds:



|Rx y ( j)| ≡ N. (2.9.13)

Proof We have

N −1
 N −1

|Rx y ( j)|2 = x(l + j) y(l) x(q + j) y(q)
l=0 q=0
N −1 
 N −1
k(l 2 +2l j+ j 2 + pl+ pj)−s(l 2 + pl) −k(q 2 +2q j+ j 2 + pq+ pj)+s(q 2 + pq)
= ωN ωN .
l=0 q=0

The power is reduced to a form

k(l − q)(l + q + 2 j + p) − s(l − q)(l + q + p) = (k − s)(l − q)(l + q + p) + 2k(l − q) j.


50 2 Signal Transforms

Taking into account Lemma 2.1.2 we gain

N −1 
 N −1
(k−s)(l−q)(l+q+ p)+2k(l−q) j
|Rx y ( j)| =
2
ωN
q=0 l=0
N −1 
 N −1
(k−s)l(l+2q+ p)+2kl j
= ωN
q=0 l=0
N −1
 N −1

(k−s)l(l+ p)+2kl j 2(k−s)lq
= ωN ωN
l=0 q=0
N −1
 (k−s)l(l+ p)+2kl j  
=N ωN δ N 2(k − s)l N . (2.9.14)
l=0

The number N is relatively prime with k − s (by the hypothesis) and is relatively
prime with 2 (due to oddity), therefore gcd (2(k − s), N ) = 1. In this case the map-
ping l → 2(k − s)l N is a permutation of a set 0 : N − 1 that maps zero to zero.
Using this fact and the definition of the unit pulse δ N we conclude that the sum in the
right side of (2.9.14) contains only one nonzero term that corresponds to l = 0. We
come to the identity |Rx y ( j)|2 ≡ N , which is equivalent to (2.9.13). The theorem is
proved. 

Signals akp of a form (2.9.9) have the same energy E(akp ) = N . According to
Lemma 2.9.2, provided that N is odd, any set of these signals constitutes an ensemble
Qa with A = N and Ra = 0. Assume that signals akp from Qa satisfy to two additional
conditions:
– they have the same p;
– for any pair of signals akp , asp from Qa with k > s, the difference k − s is relatively
prime with N .
Then, by virtue of Theorem 2.9.3, the following identity is valid:

|Rx y ( j)| ≡ N for all x, y ∈ Qa , x = y.
√ √
It coincides with (2.9.8) since in this case A/ N = N .
Note that when N is prime, the following signals satisfy to all conditions formu-
lated above: a1, p , a2, p , . . . , a N −1, p . The amount of these signals is N − 1.
2.10 Uncertainty Principle 51

2.10 Uncertainty Principle

2.10.1 We refer to the following set as a support of a signal x ∈ C N :

supp x = { j ∈ 0 : N − 1 | x( j) = 0}.

We denote by |supp x| the number of indices contained in a support. Along with the
support of a signal x we will consider the support of its spectrum X .
Theorem 2.10.1 (Uncertainty Principle) Given any nonzero signal x ∈ C N , the fol-
lowing inequality holds:
|supp x| × |supp X| ≥ N. (2.10.1)

The point of the inequality (2.10.1) is that the support of a nonzero signal and the
support of its spectrum cannot both be small.
2.10.2 We precede the proof of Theorem 2.10.1 by an auxiliary statement.
Lemma 2.10.1 Let m := |supp x| > 0. Then for any q ∈ 0 : N − 1 the sequence

X (q + 1), X (q + 2), . . . , X (q + m)

contains at least one nonzero element.


Proof Let supp x = { j1 , . . . , jm }. We fix q ∈ 0 : N − 1 and write


m
−(q+l) jk

m
q+l
X (q + l) = x( jk ) ω N = zk x( jk ), l ∈ 1 : m, (2.10.2)
k=1 k=1

−j
where z k = ω N k . It is clear that z k are pairwise different points on a unit
  
circle of a complex plane. We denote a = x( j1 ), . . . , x( jm ) , b = X (q + 1), . . . ,
 q+l m
X (q + m) , Z = {z k }l,k=1 , and rewrite the equality (2.10.2) in a form b = Z a. It
is sufficient to show that the matrix Z is invertible. In this case the condition a = O
will imply b = O.
We have ⎡ q+1 q+1 ⎤
z1 · · · zm
Z = ⎣ ··· ··· ···· ⎦.
q+m q+m
z1 · · · zm

The determinant  of this matrix can be transformed to a form


 
 1 ··· 1 
$ q+1  z 1 · · · z m 
m 
= z k  
 · · · · · · · · · · 
k=1
 z m−1 · · · z m−1 
1 m
52 2 Signal Transforms

where in the right side we can see a nonzero Vandermonde determinant. Therefore,
 = 0. This guarantees invertibility of the matrix Z .
The lemma is proved. 

2.10.3 Now we turn to proving Theorem 2.10.1. Let supp X = {k1 , . . . , kn }, where
0 ≤ k1 < k2 < · · · < kn < N . We fix s ∈ 1 : n − 1. According to the lemma, the
sequence X (ks + 1), . . . , X (ks + m) contains a nonzero element. But the first
nonzero element after X (ks ) is X (ks+1 ). Therefore,

ks+1 ≤ ks + m, s ∈ 1 : n − 1. (2.10.3)

Further, the sequence X (kn + 1), . . . , X (kn + m) also contains a nonzero element,
and by virtue of N -periodicity of a spectrum the first nonzero element after X (kn )
is X (k1 + N ). Therefore,
k1 + N ≤ kn + m. (2.10.4)

On the ground of (2.10.3) and (2.10.4) we gain

k1 + N ≤ kn + m ≤ kn−1 + 2m ≤ · · · ≤ k1 + nm. (2.10.5)

Hence it follows that mn ≥ N .


The theorem is proved. 

2.10.4 The signal x = δ N turns the inequality (2.10.1) into an equality. In fact, we
can describe the whole set of signals that turn the inequality (2.10.1) into an equality.

Theorem 2.10.2 Let x ∈ C N be a signal with properties

m = |supp x|, n = |supp X|,

and let the equality mn = N holds for this signal. Then necessarily
qj
x( j) = c ω N δn ( j − p), (2.10.6)

where q ∈ 0 : m − 1, p ∈ 0 : n − 1, and c ∈ C, c = 0.

Proof Just like in the previous paragraph, we denote

supp F N (x) = {k1 , . . . , kn }.

According to (2.10.5) and the equality mn = N we have

kn + m ≤ kn−1 + 2m ≤ · · · ≤ k1 + nm = k1 + N ≤ kn + m.
2.10 Uncertainty Principle 53

Hence
ks = k1 + (s − 1)m, s ∈ 1 : n. (2.10.7)

By a definition, kn = k1 + (n − 1)m ≤ N − 1, thus k1 ≤ m − 1. Denoting q = k1 ,


we rewrite (2.10.7) in a form ks = q + (s − 1)m.
Let us express the signal x through its spectrum:

1  1 
n n−1
k j (q+sm) j
x( j) = X (ks ) ω Ns = X (q + sm) ω N
N s=1 N s=0

1  1 
n−1
qj
= ωN X (q + sm) ωns j .
n s=0 m

Denote H (s) = 1
m
X (q + sm), s ∈ 0 : n − 1, and h = Fn−1 (H ). Then

qj
x( j) = ω N h( j). (2.10.8)

Since x is a nonzero signal, according to (2.10.8) we can find an index p such


that h( p) = 0. By virtue of n-periodicity of the signal h we can presume that p ∈
0 : n − 1. We will show that

h( j) = c δn ( j − p), (2.10.9)

where c = h( p). The conclusion of the theorem will follow from this equality and
from (2.10.8).
Again, by virtue of n-periodicity of the signal h and (2.10.8) we have
q( p+sn)
x( p + sn) = ω N h( p) = 0, s ∈ 0 : m − 1.

That is, we pointed out m indices from the main period where the samples of the
signal x are not zero. By the theorem hypothesis |supp x| = m, so on other indices
from 0 : N − 1 the signal x is zero. In particular, for j ∈ 0 : n − 1, j = p, there will
qj
be 0 = x( j) = ω N h( j). We derived that h( j) = 0 for all j ∈ 0 : n − 1, j = p. It
means that the signal h can be represented in a form (2.10.9).
The theorem is proved. 

2.10.5 To make the picture complete, let us find the spectrum of a signal x of a
form (2.10.6). We write

N −1
 
m−1
qj −k j (q−k)( p+sn)
X (k) = c ω N δn ( j − p) ω N =c ωN
j=0 s=0

(q−k) p

m−1
(q−k) p
= c ωN ωm(q−k)s = m c ω N δm (k − q).
s=0
54 2 Signal Transforms

Therefore, the signal x of a form (2.10.6) is not equal to zero on the indices j =
p + sn, s ∈ 0 : m − 1, while its spectrum X is not equal to zero on the indices
k = q + tm, t ∈ 0 : n − 1.

Exercises

2.1 Prove that a signal x ∈ C N is even if and only if the value x(0) is real and
x(N − j) = x( j) holds for j ∈ 1 : N − 1.

2.2 Prove that a signal x ∈ C N is odd if and only if Re x(0) = 0 and x(N − j) =
−x( j) holds for j ∈ 1 : N − 1.

2.3 Prove that any signal can be uniquely represented as a sum of an even and an
odd signal.

2.4 Prove that δmn (m j) = δn ( j) for all j ∈ Z.

2.5 Prove that



m−1
δmn ( j + ln) = δn ( j) for all j ∈ Z.
l=0

2.6 Prove that for r ∈ 1 : N − 1 there holds


r  2
 r
 (δ N )
r 2
= .
s=0
s

2.7 Let N = mn. Prove that for any signal x ∈ C N there holds


m−1 N −1

x(s + ln) = x( j) δn (s − j) for all s ∈ Z.
l=0 j=0

Numbers k and N in the Exercises 2.8 and 2.9 are natural relative primes.

2.8 Prove that δk N ( j) = δk ( j) δ N ( j) holds for all j ∈ Z.

2.9 Prove that


N −1

δ N (k j + l) = 1 for all l ∈ Z.
j=0

2.10 Prove that a signal x ∈ C N is odd if and only if its spectrum X is pure imagi-
nary.
Exercises 55

2.11 Let a and b be two real signals from C N . We aggregate a complex signal
x = a + ib. Prove that the spectra A, B, and X of these signals satisfy to the following
relations
A(k) = 21 [X (k) + X (N − k)],

B(k) = − 21 i [X (k) − X (N − k)]

for all k ∈ Z.
2.12 Let N be an even number. We associate a real signal x with a complex signal
xa with a spectrum

⎪ X (k) for k = 0 and k = N /2,

X a (k) = 2X (k) for k ∈ 1 : N /2 − 1,


0 for k ∈ N /2 + 1 : N − 1.

Prove that Re xa = x.
2.13 Formulate and solve the problem analogous to the previous one for an odd N .
2.14 Prove that for an even N and for k ∈ 0 : N /2 − 1 there hold

N /2−1
 % & −k j
X (k) = x(2 j) + ω−k
N x(2 j + 1) ω N /2 ,
j=0

N /2−1
 % & −k j
X (N /2 + k) = x(2 j) − ω−k
N x(2 j + 1) ω N /2 .
j=0

2.15 Prove that for an even N and for k ∈ 0 : N /2 − 1 there hold

N /2−1
 % & −k j
X (2k) = x( j) + x(N /2 + j) ω N /2 ,
j=0

N /2−1
 % & − j −k j
X (2k + 1) = x( j) − x(N /2 + j) ω N ω N /2 .
j=0

In the Exercise 2.16 through 2.19 it is required to calculate the Fourier spectrum
of given signals.
πj
2.16 x( j) = sin , j ∈ 0 : N − 1.
N
2.17 x( j) = (−1) j , j ∈ 0 : N − 1. Consider the cases of N = 2n and N = 2n +
1 separately.
56 2 Signal Transforms

j for j ∈ 0 : n,
2.18 x( j) =
j − N for j ∈ n+1 : N −1 (N = 2n + 1).

j for j ∈ 0 : n,
2.19 x( j) =
N − j for j ∈ n + 1 : N − 1 (N = 2n).

j2
2.20 Let x( j) = ω N . Find the amplitude spectrum |X | of the signal x.

The Exercise 2.21 through 2.31 contain some transforms of a signal x ∈ C N .


It is required to establish the relation between spectra of the given signal and the
transformed one.

2.21 xl ( j) = x( j + l), where l ∈ Z.


2πl j
2.22 xl ( j) = cos x( j), where l ∈ Z.
N
 
2.23 y p ( j) = x  pj N , in assumption that p and N are natural relative primes.

x( j) for j ∈ 0 : N − 1,
2.24 xn ( j) =
0 for j ∈ N : n N − 1 (xn ∈ Cn N ).
 
2.25 xn ( j) = x  j N for j ∈ 0 : n N − 1 (xn ∈ Cn N ).

The transforms presented in the Exercises 2.24 and 2.25 are referred to as pro-
longations of a signal.

x( j/n) if  j n = 0,
2.26 xn ( j) =
0 for others j ∈ Z (xn ∈ Cn N ).
 
2.27 xn ( j) = x  j/n (xn ∈ Cn N ).

The transforms presented in the Exercises 2.26 and 2.27 are referred to as stretches
of a signal.

2.28 yn ( j) = x( jm) for N = mn (yn ∈ Cn ).

This transform is referred to as subsampling.


m−1
2.29 yn ( j) = x( j + pn) for N = mn (yn ∈ Cn ).
p=0


m−1
2.30 yn ( j) = x( p + jm) for N = mn (yn ∈ Cn ).
p=0
Exercises 57


m−1
 
2.31 y( j) = x p +  j/mm for N = mn (y ∈ C N ).
p=0

2.32 Given the complex numbers c0 , c1 , . . . , c N −1 , c N , we form two signals


x0 ( j) = c j , j ∈ 0 : N − 1, and x1 ( j) = c j+1 , j ∈ 0 : N − 1. What is the relation
between the spectra of these signals?
2.33 A spectrum X of a signal x is associated with a signal y with samples

X (0) for k = 0,
y(k) =
X (N − k) for k ∈ 1 : N − 1.

Prove that x = N −1 F N (y).


2.34 Prove that for any x ∈ C N valid is the equality

F N4 (x) = N 2 x.

2.35 Prove the formula F N (x y) = N −1 (X ∗ Y ).


2.36 Assume that all samples of a signal x ∈ C N are nonzero. Introduce a signal
y = x −1 . Prove that the DFTs X and Y of signals x and y are bound with a relation
X ∗ Y = N 2δN .
2.37 Prove that convolution of two even signals is even.
2.38 Prove that auto-correlation Rx x is an even function for any x ∈ C N .
2.39 Prove that
N −1

Rx x ( j) = |X (0)|2 .
j=0

2.40 Let u = x ∗ y. Prove that Ruu = Rx x ∗ R yy .


2.41 Take two delta-correlated signals x and y and their convolution u = x ∗ y.
Prove that E(u) = E(x) E(y).
2.42 Prove that convolution of two delta-correlated signals is delta-correlated.
2.43 A Frank signal v belongs to a space C N 2 and is defined by the formula v( j1 N +
j j
j0 ) = ω N1 0 , j1 , j0 ∈ 0 : N − 1. Prove that a Frank signal is delta-correlated.
2.44 Consider a Zadoff–Chu signal
 j 2 +2q j
ω2N for an even N ,
a( j) =
j ( j+1)+2q j
ω2N for an odd N ,

where q ∈ Z is a parameter. Prove that a signal a belongs to a space C N and is


delta-correlated.
58 2 Signal Transforms

2.45 A signal x ∈ C N is called binary if it takes the values +1 and −1 only. Prove
that there are no delta-correlated signals among binary ones if N = 4 p 2 , p being a
natural number.
2.46 The Exercise 2.26 introduced a signal xn ∈ Cn N that was a stretch of a signal
x ∈ C N . What is the relation between auto-correlations of these signals?
2.47 Take four signals x, y, w, and z, and form four new signals u 1 = Rx y , v1 =
Rwz , u 2 = Rxw , and v2 = R yz . Prove that Ru 1 v1 = Ru 2 v2 .
2.48 Let x and y be non-correlated signals. Prove that signals Rxw and R yz are also
non-correlated regardless of w and z.
2.49 We remind that the signals from a basis of shifts of a unit pulse are pairwise
orthogonal. Prove that they are pairwise correlated.
N −1
2.50 Prove that a system of shifts {x( j − k)}k=0 is linearly independent on Z if
and only if all the components of the spectrum X are nonzero.
N −1 N −1
2.51 Systems of shifts {x(· − k)}k=0 and {y(· − k)}k=0 are called biorthogonal if
there holds x(· − k), y(· − k ) = δ N (k − k ). Prove that the criterion of biorthog-
onality is satisfying to the condition Rx y = δ N .
N −1
2.52 Let a system of shifts {x(· − k)}k=0 be linearly independent on Z. Prove that
N −1
there exist the unique signal y ∈ C N such that the systems of shifts {x(· − k)}k=0
N −1
and {y(· − k)}k=0 are biorthogonal.
2.53 Let x ∈ C N be a nonzero signal. A value

max |x( j)|2


j∈0:N −1
p(x) = N#
−1
N −1 |x( j)|2
j=0

is referred to as a peak factor of a signal x. Prove that 1 ≤ p(x) ≤ N . Clarify the


cases when the inequalities are fulfilled as equalities.
2.54 Let g ∈ C N . Find a signal x ∈ C N satisfying to the equation

−2 x( j − 1) + c x( j) = g( j), j ∈ Z,

where c > 0 is a parameter.


2.55 Consider a complex-valued 1-periodic function of a real argument f (t). Find
the coefficients of a trigonometric polynomial of a form


n
T (t) = a(k) exp(2πikt)
k=−n

that satisfies to the interpolation conditions T (t j ) = f (t j ), where t j = j/(2n + 1),


j ∈ Z.
Comments 59

Comments

In this chapter, we introduce the basic concepts of the discrete harmonic analysis
such as discrete Fourier transform, cyclic convolution, and cyclic correlation. The
peculiarity of the presentation is that we consider a signal as an element of the
functional space C N .
We systematically use the N -periodic unit pulse δ N . The expansion (2.1.1) of
an arbitrary signal over the shifts of the unit pulse corresponds to the expansion
of a vector over the unitary vectors. Lemma 2.1.1 is elementary; however, it lets
us easily prove Theorem 2.5.3 about general form of a linear stationary operator.
Theorem 2.6.2 is a generalization of Lemma 2.1.1. It makes it clear when shifts of a
signal form an orthonormal basis in the space C N .
A solution of the optimal interpolation problem is obtained in [3]. More sophis-
ticated question of this solution’s behavior when r → ∞ is investigated in the same
paper. A similar approach is used in [2] for solving the problem of discrete periodic
data smoothing.
The problem of the optimal signal–filter pair was studied in [11]. In our book we
revise all the notions needed for the problem’s setting and give its simple solution.
A generalization of these results is presented in the paper [38].
The sections on ensembles of signals and the uncertainty principle are written on
the basis of the survey [45] and the paper [9], respectively. The point of the uncertainty
principle is that the number of indices comprising the support of a signal and the
number of indices comprising the support of its spectrum cannot be simultaneously
small. The more localized is a signal in time domain, the more dispersed is its
frequency spectrum.
Additional exercises are proposed to be solved by the reader. They are intended to
help in mastering the discrete harmonic analysis techniques. These exercises intro-
duce, in particular, such popular signal transforms as prolongation, stretching, and
subsampling. Some exercises prepare the reader for the further theory development.
These are, first of all, the Exercises 2.14 and 2.15. Special signals are considered.
We attract the reader’s attention to Frank signal (Exercise 2.43). Detailed studying
and generalization of this signal is undertaken in the paper [26].
Chapter 3
Spline Subspaces

3.1 Periodic Bernoulli Functions

3.1.1 Let N ≥ 2 and r ≥ 0 be integer numbers. A signal

N −1
1  k
(ω − 1)−r ω N ,
kj
br ( j) = j ∈ Z, (3.1.1)
N k=1 N

is referred to as a discrete periodic Bernoulli function of order r . According to a


definition we have

  0 for k = 0,
F N (br ) (k) =
(ωkN − 1)−r for k ∈ 1 : N − 1.
 
A condition F N (br ) (0) = 0 means that

N −1

br ( j) = 0. (3.1.2)
j=0

For r = 0 we have
N −1
1  kj 1
b0 ( j) = ω N = δ N ( j) − . (3.1.3)
N k=1 N

Theorem 3.1.1 For all r ≥ 0 and j ∈ Z valid are the equalities

br +1 ( j) := br +1 ( j + 1) − br +1 ( j) = br ( j), (3.1.4)

br (r − j) = (−1)r br ( j). (3.1.5)

© Springer Nature Switzerland AG 2020 61


V. N. Malozemov and S. M. Masharsky, Foundations of Discrete
Harmonic Analysis, Applied and Numerical Harmonic Analysis,
https://doi.org/10.1007/978-3-030-47048-7_3
62 3 Spline Subspaces

Proof According to (3.1.1) we write


N −1
1  k
(ω − 1)−r −1 ω N (ωkN − 1)
kj
br +1 ( j) =
N k=1 N
N −1
1  k
(ω − 1)−r ω N = br ( j).
kj
=
N k=1 N

Further,
N −1
1  k k(r − j)
br (r − j) = (ω − 1)−r ω N
N k=1 N
N −1 N −1
1  −k −r −k j 1  (N −k) j
= (1 − ω N ) ω N = (1 − ω NN −k )−r ω N
N k=1 N k=1
N −1
1 
(1 − ωkN )−r ω N = (−1)r br ( j).
kj
=
N k=1

The theorem is proved. 




Lemma 3.1.1 For k ∈ 1 : N − 1 valid is the formula


N −1
  π k −2r
−k j
b2r ( j + r ) ω N = (−1)r 2 sin . (3.1.6)
j=0
N

Proof Note that

ω−k k 2 −k 2k k k −k
N (ω N − 1) = ω N (ω N − 2 ω N + 1) = ω N − 2 + ω N
 2π k  πk
= −2 1 − cos = −4 sin2 . (3.1.7)
N N
Taking into account Lemma 2.1.2 we gain

N −1
 N −1

−k( j+r )+kr −k j
b2r ( j + r ) ωN = ωkr
N b2r ( j) ω N
j=0 j=0
 
2 −r
= ωkr
N (ω N − 1)
k −2r
= ω−k N (ω N − 1)
k

 π k −2r
= (−1)r 2 sin .
N
The lemma is proved. 

3.1 Periodic Bernoulli Functions 63

3.1.2 Shifts of a Bernoulli function can be used for expansion of arbitrary signals.

Theorem 3.1.2 Any signal x ∈ C N for each r ≥ 0 can be represented as

N −1

x( j) = c + r x(k) br ( j − k), j ∈ Z, (3.1.8)
k=0

 N −1
where c = N −1 j=0 x( j).

Proof We denote
N −1

Ir ( j) = r x(k) br ( j − k).
k=0

According to (3.1.3), for r = 0 we have

N −1

I0 ( j) = x(k) [δ N ( j − k) − N −1 ] = x( j) − c, (3.1.9)
k=0

which conforms to (3.1.8).


Let r ≥ 1. At first we will show that for arbitrary signals x and y from C N there
holds a summation by parts formula

N −1
 N −1

y(k) x(k) = − x(k) y(k − 1). (3.1.10)
k=0 k=0

Indeed, Lemma 2.1.2 yields

N −1
 N −1

y(k) x(k) = y(k) [x(k + 1) − x(k)]
k=0 k=0
N −1
 N −1

= [y(k − 1) − y(k)] x(k) = − x(k) y(k − 1).
k=0 k=0

Recall that r (x) =  r −1 (x) . Taking into account (3.1.10) and (3.1.4) we gain

N −1

Ir ( j) = − r −1 x(k) [br ( j − k) − br ( j − k + 1)]
k=0
N −1

= r −1 x(k) br −1 ( j − k) = Ir −1 ( j).
k=0
64 3 Spline Subspaces

Thus, Ir ( j) = Ir −1 ( j) = · · · = I1 ( j) = I0 ( j). It is remaining to refer to formula


(3.1.9). The theorem is proved. 


Let us substitute x( j) = br +s ( j) into (3.1.8). According to (3.1.2) and (3.1.4) we


gain
N −1

br +s ( j) = bs (k) br ( j − k).
k=0

This formula emphasizes a convolution nature of Bernoulli functions.

3.2 Periodic B-splines

3.2.1 We suppose that N = mn and m ≥ 2. Let us introduce a signal



⎨n − j for j ∈ 0 : n − 1,
x1 ( j) = 0 for j ∈ n : N − n, (3.2.1)

j − N + n for j ∈ N − n + 1 : N − 1,

and calculate its discrete Fourier transform X 1 = F N (x1 ).

Lemma 3.2.1 The following equality holds:


⎧ 2
⎨n f or k = 0,

X 1 (k) = sin(π k/m) 2 (3.2.2)
⎩ f or k ∈ 1 : N − 1.
sin(π k/N )

Proof A definition of DFT yields


n−1 N −1

−k j k(N − j)
X 1 (k) = (n − j) ω N + n − (N − j) ω N
j=0 j=N −n+1


n−1
−k j

n−1
kj
=n+ (n − j) ω N + (n − j  ) ω N
j=1 j  =1


n−1  
n−1 
k(n− j)−kn −k kj
= n + 2 Re (n − j) ω N = n + 2 Re ωm j ωN .
j=1 j=1

For k = 0 we get X 1 (0) = n + n(n − 1) = n 2 . Further we assume that


k ∈ 1 : N − 1.
3.2 Periodic B-splines 65

Let us use the formula (see Sect. 1.6)


n−1
z
jz j = [(n − 1) z n − nz n−1 + 1], z = 1. (3.2.3)
j=1
(1 − z)2

Substituting z = ωkN into (3.2.3) we gain


n−1
ωkN
[(n − 1) ωmk − n ωmk ω−k
kj
j ωN = N + 1].
j=1
(1 − ωkN )2

Equality (3.1.7) yields


n−1
1
ωm−k [n − 1 − n ω−k −k
kj
j ωN = − πk N + ωm ].
j=1
4 sin2 N

Hence it follows that


   
1 2π k   2π k  sin(π k/m) 2
X 1 (k) = n − πk
n 1 − cos − 1 − cos = .
2 sin2 N
N m sin(π k/N )

The lemma is proved. 




The signal x1 is characterized by two properties: by the equality x1 (ln) = n δm (l)


and by linearity on each interval ln : (l + 1)n, l ∈ Z.

3.2.2 We put
Q 1 = x1 ; Q r = Q 1 ∗ Q r −1 , r = 2, 3, . . . (3.2.4)

A signal Q r is referred to as a discrete periodic B-spline of order r . According


to (3.2.1) and (3.2.4) it takes only non-negative integer values. Figure 3.1 depicts a
graph of a B-spline Q r ( j) for m = 8, n = 5, and r = 2.

− 0

Fig. 3.1 Graph of a B-spline Q r ( j) for m = 8, n = 5, and r = 2


66 3 Spline Subspaces

Theorem 3.2.1 For all natural r there holds


N −1
1  r kj
Q r ( j) = X (k) ω N , j ∈ Z. (3.2.5)
N k=0 1

Proof When r = 1, formula (3.2.5) coincides with the DFT inversion formula which
reconstructs the signal x1 from its spectrum X 1 . We perform an induction step from
r to r + 1. From validity of (3.2.5) it follows that F N (Q r ) = X 1r holds. By virtue of
the convolution theorem we write

F N (Q r +1 ) = F N (Q 1 ∗ Q r ) = X 1 X 1r = X 1r +1 .

The DFT inversion formula yields

N −1
1  r +1 kj
Q r +1 ( j) = X (k) ω N , j ∈ Z.
N k=0 1

The theorem is proved. 




As it was noted earlier, B-spline Q r ( j) takes only non-negative integer values.


Hereto we should add that Q r ( j) is an even signal. It follows from Theorems 3.1.2
and 2.2.3.
Formula (3.2.5) can be considered as a definition of B-spline of order r . It has
meaning for m = 1 as well. In this case, equality (3.2.2) takes a form X 1 = N 2 δ N
so that according to (3.2.5) we gain Q r ( j) ≡ N 2r −1 .
We also note that for m = N (and n = 1)

N −1
1  kj
Q r ( j) = ω = δ N ( j)
N k=0 N

holds for all natural r .

3.2.3 Later on we will need the values Q r ( pn) for p ∈ 0 : m − 1. Let us calculate
them. We will use the fact that every index k ∈ 0 : N − 1 can be represented in a
form k = qm + l, where q ∈ 0 : n − 1 and l ∈ 0 : m − 1. According to (3.2.5) we
have

1  r
m−1 n−1
pn(qm+l)
Q r ( pn) = X (qm + l) ω N
N l=0 q=0 1
 n−1 
1  pl 1  r
m−1
= ωm X 1 (qm + l) .
m l=0 n q=0
3.2 Periodic B-splines 67

=4

=3

=2
0
2

Fig. 3.2 Graphs of a signal Tr (l) for m = 512, n = 2, and r = 2, 3, 4

Denoting
1 r
n−1
Tr (l) = X (qm + l) (3.2.6)
n q=0 1

we gain
1 
m−1
Q r ( pn) = Tr (l) ωmpl . (3.2.7)
m l=0

We note that a signal Tr (l) is real, m-periodic, and even. Reality follows from
the definition (3.2.6), and m-periodicity from Lemma 2.1.3. The formula (3.2.7) and
Theorem 2.2.2 guarantee evenness of Tr (l).
Figure 3.2 shows graphs of a signal Tr (l) on the main period 0 : m − 1 for m =
512, n = 2, and r = 2, 3, 4.
Let us transform formula (3.2.6). For l ∈ 1 : m − 1 we introduce a value

n−1  
1 π(qm + l) −2r
r (l) = 2 sin .
n q=0 N

Lemma 3.2.2 Valid is the equality



n 2r −1 for l = 0,
Tr (l) =  2r (3.2.8)
2 sin πl
m
r (l) for l ∈ 1 : m − 1.

Proof Equality (3.2.2) yields

X 1 (m) = X 1 (2m) = · · · = X 1 (n − 1)m = 0,


68 3 Spline Subspaces

therefore Tr (0) = n −1 X 1r (0) = n 2r −1 . For l ∈ 1 : m − 1 we have

n−1  
1  2 sin(π(qm + l)/m) 2r  πl
2r
Tr (l) = = 2 sin r (l).
n q=0 2 sin(π(qm + l)/N ) m

The lemma is proved. 




From (3.2.8) it follows, in particular, that Tr (l) > 0 for all l ∈ Z.

3.2.4 We will determine the relation between discrete periodic B-splines and discrete
periodic Bernoulli functions.

Theorem 3.2.2 Valid is the formula


 
1 2r 
r
r −l 2r
Q r ( j) = n + (−1) b2r ( j + r − ln). (3.2.9)
N l=−r
r −l

Proof According to (3.2.2) and (3.1.7) a value X 1 (k) for k ∈ 1 : N − 1 can be rep-
resented in a form
ωm−k (ωmk − 1)2
X 1 (k) = −k .
ω N (ωkN − 1)2

Bearing this in mind we gain

N −1 N −1
1  r 1  k k( j+r −r n)
(ω − 1)2r (ωkN − 1)−2r ω N
kj
X 1 (k) ω N =
N k=1 N k=1 m
N −1  
1  k 2r
2r k( j+r −(r − p)n)
= (ω N − 1)−2r (−1)2r − p ωN
N k=1 p=0
p
N −1  
1  k r
2r k( j+r −ln)
= (ω N − 1)−2r (−1)r −l ωN
N k=1 l=−r
r − l
   N −1 
1  k
r
2r k( j+r −ln)
= (−1)r −l (ω N − 1)−2r ω N
l=−r
r −l N k=1
r  
2r
= (−1)r −l b2r ( j + r − ln).
l=−r
r −l

Now the statement of the theorem follows from the formula (3.2.5). 

3.3 Discrete Periodic Splines 69

3.3 Discrete Periodic Splines

3.3.1 Let N = mn and N ≥ 2. A discrete periodic spline S( j) of order r is defined


as a linear combination of shifts of B-spline Q r ( j) with complex coefficients:


m−1
S( j) = c( p) Q r ( j − pn). (3.3.1)
p=0

The set of splines of a form (3.3.1) is denoted by Srm . Since Q r ( j) = δ N ( j) for


m = N , Lemma 2.1.1 yields SrN = C N for all natural r . For m = 1 we have Q r ( j) ≡
N 2r −1 , therefore Sr1 is a set of signals that are identically equal to a complex constant.

Lemma 3.3.1 The basic signals Q r ( j − pn), p ∈ 0 : m − 1, are linearly indepen-


dent on Z.

Proof Let

m−1
S( j) := c( p) Q r ( j − pn) = 0 ∀ j ∈ Z
p=0

for some complex coefficients c( p). We will show that all c( p) are equal to zero. We
have
N −1
 
m−1 N −1

−k j −k( j− pn)−kpn
0= S( j) ω N = c( p) Q r ( j − pn) ω N
j=0 p=0 j=0
 m−1
  
N −1 
−k j
= c( p) ωm−kp Q r ( j) ω N . (3.3.2)
p=0 j=0

 −1 −k j
According to (3.2.5) there holds Nj=0 Q r ( j) ω N = X 1r (k). We denote C(k) =
m−1 −kp
p=0 c( p) ωm . Then equality (3.3.2) can be rewritten as C(k)X 1 (k) = 0. The
r

formula (3.2.2) guarantees that X 1 (k) = 0 for k ∈ 0 : m − 1. Hence C(k) = 0 for


r

the same indices k. The DFT inversion formula yields c( p) = 0 for p ∈ 0 : m − 1.


The lemma is proved. 


It is obvious that Srm is a linear complex space. It is a subspace of C N . On the


basis of Lemma 3.3.1 one can ascertain that the dimension of Srm equals to m.

Lemma 3.3.2 Valid is the identity


m−1
Q r ( j − pn) ≡ n 2r −1 . (3.3.3)
p=0
70 3 Spline Subspaces

Proof According to (3.2.5) and (2.2.1) we gain

 N −1
1  r  k( j− pn)
m−1 m−1
Q r ( j − pn) = X 1 (k) ωN
p=0
N k=0 p=0
N −1  m−1 
1 r kj 1

−kp
= X (k) ω N ω
n k=0 1 m p=0 m
N −1
1 r 1 r
n−1
kj
= X 1 (k) ω N δm (k) = X (lm) ωnl j .
n k=0 n l=0 1

It is remaining to take into account that X 1 (lm) = 0 for l ∈ 1 : n − 1, and X 1 (0) =


n2. 


From (3.3.3) it follows, in particular, that a signal identically equal to a complex


constant belongs to Srm .

3.3.2 It is possible to give an equivalent definition of a discrete periodic spline with


the aid of Bernoulli functions.

Theorem 3.3.1 A signal S belongs to S_r^m if and only if it can be represented as

S(j) = d + \sum_{l=0}^{m-1} d(l)\, b_{2r}(j + r - ln),   (3.3.4)

where \sum_{l=0}^{m-1} d(l) = 0.

Proof
Necessity According to (3.3.1) and (3.2.9) we have


m−1  r   
1 2r r −k 2r
S( j) = c( p) n + (−1) b2r j + r − (k + p)n
p=0
N k=−r
r −k
 
n 2r  
m−1 m−1 r
r −k 2r
= c( p) + c( p)(−1) b2r j + r − k + p m n .
N p=0 p=0 k=−r
r −k

Collecting similar terms in the double sum we come to (3.3.4). Herein


m−1 
m−1 
r  
r −k 2r
d(l) = c( p) (−1) = 0.
l=0 p=0 k=−r
r −k

Sufficiency As it was mentioned above, a signal f(j) ≡ d belongs to S_r^m. It remains to verify that the set S_r^m contains the signal

g(j) = \sum_{l=0}^{m-1} d(l)\, b_{2r}(j + r - ln)   (3.3.5)

with \sum_{l=0}^{m-1} d(l) = 0. Let us calculate G = F_N(g). We have

G(k) = \sum_{j=0}^{N-1} g(j)\, ω_N^{-kj} = \sum_{l=0}^{m-1} d(l) \sum_{j=0}^{N-1} b_{2r}(j + r - ln)\, ω_N^{-k(j-ln)-kln}
     = \sum_{l=0}^{m-1} d(l)\, ω_m^{-kl} \Big( \sum_{j=0}^{N-1} b_{2r}(j + r)\, ω_N^{-kj} \Big).

Denote D(k) = \sum_{l=0}^{m-1} d(l)\, ω_m^{-kl}. The theorem's hypothesis yields D(0) = 0. Taking into account (3.1.6) we gain

G(k) = 0 for k = 0,   G(k) = (-1)^r (2 \sin(πk/N))^{-2r} D(k) for k ∈ 1 : N − 1.   (3.3.6)

Note that due to m-periodicity there holds D(m) = D(2m) = · · · = D((n − 1)m) = 0, therefore

G(0) = G(m) = G(2m) = · · · = G((n − 1)m) = 0.   (3.3.7)

We introduce an m-periodic signal A(k):

A(k) = 0 for k = 0,   A(k) = (-1)^r (2 \sin(πk/m))^{-2r} D(k) for k ∈ 1 : m − 1.

We will show that


G(k) = A(k) X 1r (k), k ∈ 0 : N − 1. (3.3.8)

For k = 0, m, 2m, . . . , (n − 1)m this formula is true by virtue of (3.3.7) and the equalities A(0) = 0 and X_1(m) = X_1(2m) = · · · = X_1((n − 1)m) = 0. For the other k ∈ 1 : N − 1, according to (3.3.6) and (3.2.2), we gain

G(k) = (-1)^r (2 \sin(πk/m))^{-2r} X_1^r(k)\, D(k) = A(k)\, X_1^r(k).

Formula (3.3.8) allows conversion of a signal g to a form (3.3.1). Indeed, we put

1 
m−1
a( p) = A(k) ωmkp .
m k=0

Then (3.2.5) yields

N −1 N −1
1  kj 1  kj
g( j) = G(k) ω N = A(k) X 1r (k) ω N
N k=0 N k=0
N −1  m−1 
1   −kp kj
= a( p) ωm X 1r (k) ω N
N k=0 p=0

  N −1 
1  r
m−1
k( j− pn)
= a( p) X 1 (k) ω N
p=0
N k=0


m−1
= a( p) Q r ( j − pn). (3.3.9)
p=0

The theorem is proved. 




Remark 3.3.1 The proof contains a scheme of transition from the expansion (3.3.5)
of a signal g to the expansion (3.3.9). The scheme looks this way:
       
d(l) → D(k) → A(k) → a( p) .

3.3.3 Let us present an important (for what follows) property of discrete periodic
splines.

Theorem 3.3.2 For an arbitrary spline S ∈ Srm and an arbitrary signal x ∈ C N


there holds
N −1
 
m−1
r S( j) r x( j) = (−1)r d(l) x(ln), (3.3.10)
j=0 l=0

where d(l) are the coefficients from the representation (3.3.4) of the spline S.

Proof We denote by Ir (x) the expression in the left side of equality (3.3.10). Accord-
ing to (3.3.4) and (3.1.4) we have

N −1  m−1
  
Ir (x) = d(l) br r − (ln − j) r x( j).
j=0 l=0

Using formulae (3.1.5) and (3.1.8) we gain




m−1 
N −1 
Ir (x) = (−1)r d(l) r x( j) br (ln − j)
l=0 j=0


m−1
 
= (−1)r d(l) x(ln) − c ,
l=0

 N −1 m−1
where c = N −1 j=0 x( j). Taking into account the equality l=0 d(l) = 0 we
come to (3.3.10). The theorem is proved. 


3.4 Spline Interpolation

3.4.1 We consider the following interpolation problem on the set Srm of discrete
periodic splines of order r :

S(ln) = z(l), l ∈ 0 : m − 1, (3.4.1)

where z(l) are arbitrary complex numbers. A detailed notation of the problem (3.4.1)
by virtue of (3.3.1) looks this way:


m−1
c( p) Q r (l − p)n = z(l), l ∈ 0 : m − 1. (3.4.2)
p=0

Thus, a problem of discrete spline interpolation is reduced to solving a system of


linear equations (3.4.2) with respect to spline’s coefficients c( p).
We introduce a signal h( p) = Q r ( pn). By means of this signal we can rewrite
the system (3.4.2) as follows:


m−1
c( p) h(l − p) = z(l), l ∈ 0 : m − 1,
p=0

or in more compact form c ∗ h = z. Going into a spectral domain we obtain the


equivalent system of equations

C(k) H (k) = Z (k), k ∈ 0 : m − 1, (3.4.3)

where C = Fm (c), Z = Fm (z), and


m−1 
m−1
H (k) = h( p) ωm−kp = Q r ( pn) ωm−kp .
p=0 p=0

According to (3.2.7) we have H (k) = Tr (k), where, as it was mentioned after the
proof of Lemma 3.2.2, all values Tr (k) are positive. The system (3.4.3) has a unique
solution C(k) = Z(k)/T_r(k), k ∈ 0 : m − 1. The DFT inversion formula yields

c(p) = \frac{1}{m} \sum_{k=0}^{m-1} \frac{Z(k)}{T_r(k)}\, ω_m^{kp},   p ∈ 0 : m − 1.   (3.4.4)

Let us summarize.
Theorem 3.4.1 The interpolation problem (3.4.1) has a unique solution. Coeffi-
cients of the interpolation spline S∗ are determined by formula (3.4.4).
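A short numerical sketch of this result follows; it is ours, not the book's code. It reuses the helpers X1 and T from the sketch after Lemma 3.2.2, and np.fft.fft / np.fft.ifft match the forward and inverse DFT conventions used in this book.

import numpy as np
# X1 and T are the helpers from the sketch after Lemma 3.2.2.

def Q(r, m, n):
    # Q_r = F_N^{-1}(X_1^r); the discrete periodic B-spline is real-valued
    return np.fft.ifft(X1(m * n, m) ** r).real

def interpolation_coefficients(z, n, r):
    m = len(z)
    Z = np.fft.fft(z)                    # Z = F_m(z)
    return np.fft.ifft(Z / T(r, m, n))   # c(p) by formula (3.4.4)

m, n, r = 8, 5, 2
z = np.cos(2 * np.pi * np.arange(m) / m) + 0.3      # sample interpolation data
c = interpolation_coefficients(z, n, r)
N = m * n
Qr = Q(r, m, n)
j = np.arange(N)
S = sum(c[p] * Qr[(j - p * n) % N] for p in range(m))   # spline (3.3.1)
assert np.allclose(S[::n].real, z)                       # S(ln) = z(l)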
3.4.2 We will show that a discrete interpolation spline S∗ has an extremal property.
Incidentally we will clarify the role of the parameter r .
Consider an extremal problem

N −1

f (x) := |r x( j)|2 → min,
j=0
(3.4.5)
x(ln) = z(l), l ∈ 0 : m − 1; x ∈ C N .

Theorem 3.4.2 The unique solution of the problem (3.4.5) is the discrete interpolation spline S∗.

Proof Let x be an arbitrary signal satisfying the constraints of the problem (3.4.5). We put η = x − S∗. It is evident that η(ln) = 0 holds for l ∈ 0 : m − 1. On the
strength of linearity of a finite difference of order r we have

N −1
  r 
f (x) = f (S∗ + η) =  S∗ ( j) + r η( j)2
j=0
N −1

= f (S∗ ) + f (η) + 2 Re r S∗ ( j) r η( j).
j=0

Theorem 3.3.2 yields

N −1
 
m−1
r S∗ ( j) r η( j) = (−1)r d∗ (l) η(ln) = 0.
j=0 l=0

Therefore, f (x) = f (S∗ ) + f (η). Hence follows the inequality f (x) ≥ f (S∗ ) that
guarantees optimality of S∗ .
Let us verify the uniqueness of a solution of the problem (3.4.5). Assume that
f (x) = f (S∗ ). Then f (η) = 0. This is possible only when r η( j) = 0 for all j ∈ Z.
Theorem 3.1.2 yields η( j) ≡ const. But η(ln) = 0 holds for l ∈ 0 : m − 1, so
η( j) ≡ 0. We gain x = S∗ . The theorem is proved. 


3.5 Smoothing of Discrete Periodic Data

3.5.1 Let N = mn. We consider a problem of smoothing of discrete periodic data


in the following setting:

N −1

f (x) := |r x( j)|2 → min,
j=0
(3.5.1)

m−1
g(x) := |x(ln) − z(l)| ≤ ε, x ∈ C N ,
2

l=0

where ε > 0 is a fixed value (a parameter). Thus, it is required to find a signal x∗ ∈ C_N that provides the given accuracy of approximation g(x∗) ≤ ε of the data z(l) on the coarse grid {ln}_{l=0}^{m−1} and has the minimal squared norm of the finite difference of the r-th order. The latter condition characterizes the “smoothness” of the desired signal.
Note that for ε = 0 the problem (3.5.1) is equivalent to the problem (3.4.5).

3.5.2 As a preliminary we will solve an auxiliary problem

q(c) := \sum_{l=0}^{m-1} |c - z(l)|^2 → min,   (3.5.2)

where the minimum is taken among all c ∈ C. We have

q(c + h) = \sum_{l=0}^{m-1} |(c - z(l)) + h|^2 = q(c) + m|h|^2 + 2 \operatorname{Re} \sum_{l=0}^{m-1} (c - z(l))\, \overline{h}.

It is obvious that the unique minimum point c_* of the function q(c) is determined from the condition

\sum_{l=0}^{m-1} (c - z(l)) = 0,

so that
c_* = \frac{1}{m} \sum_{l=0}^{m-1} z(l).   (3.5.3)

Herein

ε_* := q(c_*) = - \sum_{l=0}^{m-1} (c_* - z(l))\, \overline{z(l)} = \sum_{l=0}^{m-1} |z(l)|^2 - m |c_*|^2.   (3.5.4)

The number ε∗ is a critical value of the parameter ε. When ε ≥ ε∗, a solution of the problem (3.5.1) is the signal x∗(j) ≡ c∗ because in this case there hold g(x∗) = q(c∗) = ε∗ ≤ ε and f(x∗) = 0. Later on we assume that 0 < ε < ε∗. In particular, ε∗ > 0. This guarantees that m ≥ 2 and z(l) ≢ const.
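The quantities c∗ and ε∗ of (3.5.3)–(3.5.4) are computed directly from the data; a minimal sketch (the function name is ours):

import numpy as np

def critical_value(z):
    # c_* per (3.5.3) and eps_* per (3.5.4)
    z = np.asarray(z, dtype=complex)
    c_star = z.mean()
    eps_star = np.sum(np.abs(z) ** 2) - len(z) * abs(c_star) ** 2
    return c_star, eps_star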
3.5.3 We fix a parameter α > 0, introduce a function

Fα (x) = α f (x) + g(x),

and consider yet another auxiliary problem

Fα (x) → min, (3.5.5)

where the minimum is taken among all x ∈ C N . We take an arbitrary spline S ∈ Srm
and write down an expansion

Fα (S + H ) = α f (S + H ) + g(S + H )
 N −1 
= α f (S) + f (H ) + 2 Re r S( j) r H ( j)
j=0


m−1 
m−1
+ g(S) + |H (ln)|2 + 2 Re [S(ln) − z(l)] H (ln).
l=0 l=0

According to Theorem 3.3.2 we have


m−1
Fα (S + H ) = Fα (S) + α f (H ) + |H (ln)|2
l=0


m−1
+ 2 Re [(−1)r α d(l) + S(ln) − z(l)] H (ln).
l=0

Here d(l) are the coefficients of the expansion (3.3.4) of the spline S over the shifts
of the Bernoulli function.
Suppose that there exists a spline Sα ∈ S_r^m satisfying the conditions

(−1)r α d(l) + S(ln) = z(l), l ∈ 0 : m − 1,



m−1
(3.5.6)
d(l) = 0.
l=0

Then for any H ∈ C N there holds an equality


m−1
Fα (Sα + H ) = Fα (Sα ) + α f (H ) + |H (ln)|2 .
l=0

It, in particular, yields Fα (Sα + H ) ≥ Fα (Sα ), so Sα is a solution of the prob-


lem (3.5.5). Moreover, this solution is unique. Indeed, assuming that Fα (Sα + H ) =
Fα (Sα ) we gain

N −1
 
m−1
| H ( j)| = 0 and
r 2
|H (ln)|2 = 0.
j=0 l=0

The former equality holds only when H ( j) ≡ const (see Theorem 3.1.2). According
to the latter one we have H ( j) ≡ 0.
It remains to verify that the system (3.5.6) has a unique solution in the class of splines S of the form (3.3.4). We take a solution d_0, d_0(0), d_0(1), . . . , d_0(m − 1) of the homogeneous system

(−1)r α d(l) + S(ln) = 0, l ∈ 0 : m − 1,



m−1
(3.5.7)
d(l) = 0.
l=0

We denote the corresponding spline by S0 . According to Theorem 3.3.2 and (3.5.7)


we have
N −1
 
m−1 
m−1
|r S0 ( j)|2 = (−1)r d0 (l) S0 (ln) = −α |d0 (l)|2 .
j=0 l=0 l=0

By virtue of positiveness of α this equality can be true only when all d0 (l) are
equal to zero. But in this case there holds S0 ( j) ≡ d0 . At the same time, S0 (ln) =
(−1)r +1 α d0 (l) = 0 holds for l ∈ 0 : m − 1, so that d0 = 0. Thus it is proved that the
homogeneous system (3.5.7) has only zero solution. As a consequence we gain that
the system (3.5.6) has a unique solution for all z(l), l ∈ 0 : m − 1.
Let us summarize.
Theorem 3.5.1 The auxiliary problem (3.5.5) has a unique solution Sα . This is a
discrete periodic spline of a form (3.3.4) whose coefficients are determined from the
system of linear equations (3.5.6).
3.5.4 We will show that the system (3.5.6) can be solved explicitly. In order to do
this we transit into a spectral domain:


m−1 
m−1
(−1) α r
d(l) ωm−kl +d ωm−kl
l=0 l=0

 m−1
m−1 
+ d( p) b2r (l − p)n + r ωm−kl
l=0 p=0


m−1
= z(l) ωm−kl .
l=0

We denote D = Fm (d) and Z = Fm (z). Taking into account that kl = k(l − p) + kp


we gain


m−1 
m−1
(−1) α D(k) + m d δm (k) +
r
d( p) ωm−kp b2r (ln + r ) ωm−kl = Z (k).
p=0 l=0

m−1
Note that D(0) = l=0 d(l) = 0. Putting


m−1
Br (k) = b2r (ln + r ) ωm−kl
l=0

we come to a system of linear equations with respect to d, D(1), . . . , D(m − 1):

[(−1)r α + Br (k)] D(k) + m d δm (k) = Z (k), (3.5.8)

k ∈ 0 : m − 1.

For k = 0 we have

d = \frac{1}{m} Z(0) = \frac{1}{m} \sum_{l=0}^{m-1} z(l) = c_*.

For k ∈ 1 : m − 1 the equation (3.5.8) takes a form

[(−1)r α + Br (k)] D(k) = Z (k). (3.5.9)

Lemma 3.5.1 For k ∈ 1 : m − 1 there holds

Br (k) = (−1)r r (k), (3.5.10)

where, just like in par. 3.2.3,

n−1  
1 π(qm + k) −2r
r (k) = 2 sin .
n q=0 N

Proof We have

m−1 N −1
1  j (ln+r ) j −kl
Br (k) = (ω − 1)−2r ω N ωm
N l=0 j=1 N

N −1  m−1 
1 j 1  l( j−k)
(ω N − 1)−2r ω N
rj
= ωm
n j=1 m l=0
N −1
1 j
(ω − 1)−2r ω N δm ( j − k).
rj
=
n j=1 N

As far as j ∈ 1 : N − 1 and k ∈ 1 : m − 1, it holds

−m + 2 ≤ j − k ≤ N − 2.

The unit pulse δm in the latter sum is nonzero only when j − k = qm, q ∈ 0 : n − 1.
Taking into account this consideration and equality (3.1.7) we gain

1  qm+k
n−1
r (qm+k)
Br (k) = (ω − 1)−2r ω N
n q=0 N

1   −(qm+k) qm+k −r


n−1
= ωN (ω N − 1)2
n q=0
n−1  
1 π(qm + k) −2r
= (−1) r
2 sin = (−1)r r (k).
n q=0 N

The lemma is proved. 




On the basis of (3.5.9) and (3.5.10) we write



⎨ 0 for k = 0,

D(k) = (−1)r Z (k)

⎩ for k ∈ 1 : m − 1.
α + r (k)

The DFT inversion formula yields

(−1)r  Z (k) ωmkl


m−1
d(l) = , l ∈ 0 : m − 1. (3.5.11)
m k=1 α + r (k)

The explicit solution of the system (3.5.6) is found.
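A sketch of this explicit solution follows. The helper lam() implements the quantity from par. 3.2.3 that enters (3.5.11), namely (1/n) Σ_q (2 sin(π(qm + k)/N))^{−2r}; all function names are ours, not the book's.

import numpy as np

def lam(r, m, n):
    # the quantity from par. 3.2.3 for k in 1 : m-1
    N = m * n
    k = np.arange(1, m)
    q = np.arange(n)[:, None]
    return ((2 * np.sin(np.pi * (q * m + k) / N)) ** (-2.0 * r)).mean(axis=0)

def smoothing_system_solution(z, n, r, alpha):
    # explicit solution of the system (3.5.6): returns the constant d = c_*
    # and the coefficients d(l) per (3.5.11)
    m = len(z)
    Z = np.fft.fft(z)
    D = np.zeros(m, dtype=complex)
    D[1:] = (-1) ** r * Z[1:] / (alpha + lam(r, m, n))   # D(0) = 0
    return Z[0] / m, np.fft.ifft(D)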

3.5.5 A solution of the auxiliary problem (3.5.5) is obtained in a form (3.3.4). Let
us convert it to a form (3.3.1).

Theorem 3.5.2 The smoothing spline S_α(j) can be represented as

S_α(j) = \sum_{p=0}^{m-1} c_α(p)\, Q_r(j - pn),   (3.5.12)

where
c_α(p) = \frac{1}{m} \sum_{k=0}^{m-1} \frac{Z(k)\, ω_m^{kp}}{T_r(k) + α\,(2 \sin(πk/m))^{2r}}.   (3.5.13)
m k=0 Tr (k) + α 2 sin(π k/m)

Proof We have

m−1
Sα ( j) = d + d(l) b2r ( j + r − ln), (3.5.14)
l=0

where the coefficients d(l) are calculated by formula (3.5.11) and

1  Z (0)
m−1
1
d= Z (0) = Q r ( j − pn).
m m p=0 Tr (0)

The latter equality is true by virtue of (3.2.8) and (3.3.3). Further, the remark to
Theorem 3.3.1 yields


m−1 
m−1
d(l) b2r ( j + r − ln) = a( p) Q r ( j − pn).
l=0 p=0

Here

1  
m−1
−2r
a( p) = (−1)r 2 sin(π k/m) D(k) ωmkp
m k=1

1 
m−1 kp
Z (k) ωm
=
m k=1 2 sin(π k/m) 2r α + r (k)

1 
m−1 kp
Z (k) ωm
= 2r
.
m k=1 Tr (k) + α 2 sin(π k/m)

We used formula (3.2.8) again. Substituting derived expressions into (3.5.14) we


come to (3.5.12). The theorem is proved. 


Note that when α = 0 formula (3.5.13) for the coefficients of a smoothing spline
coincides with the formula (3.4.4) for the coefficients of an interpolation spline.
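A sketch of (3.5.13), reusing the helper T from the sketch after Lemma 3.2.2 (the function name is ours):

import numpy as np
# T is the helper from the sketch after Lemma 3.2.2.

def smoothing_coefficients(z, n, r, alpha):
    # c_alpha(p) per (3.5.13); for alpha = 0 this reduces to (3.4.4)
    m = len(z)
    k = np.arange(m)
    Z = np.fft.fft(z)
    denom = T(r, m, n) + alpha * (2 * np.sin(np.pi * k / m)) ** (2 * r)
    return np.fft.ifft(Z / denom)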

3.5.6 We introduce a function ϕ(α) = g(Sα ). According to (3.5.1), (3.5.6) and the
Parseval equality we have


m−1
α2 
m−1
ϕ(α) = α 2 |d(l)|2 = |D(k)|2
l=0
m k=0

α 
2 m−1
|Z (k)|2 1 
m−1
|Z (k)|2
= 2
= .
m k=1 α + r (l) m k=1 1 + r (k)/α 2

Recall that z(l) ≢ const, therefore at least one of the components Z(1), . . . , Z(m − 1) of the discrete Fourier transform is nonzero.
The function ϕ(α) strictly increases on the semiaxis (0, +∞), whereby lim_{α→+0} ϕ(α) = 0. Let us determine the limit of ϕ(α) for α → +∞. Taking into account (3.5.3) and (3.5.4) we gain

\lim_{α→+∞} ϕ(α) = \frac{1}{m} \sum_{k=1}^{m-1} |Z(k)|^2 = \frac{1}{m} \sum_{k=0}^{m-1} |Z(k)|^2 - \frac{1}{m} |Z(0)|^2
= \sum_{l=0}^{m-1} |z(l)|^2 - \frac{1}{m} \Big| \sum_{l=0}^{m-1} z(l) \Big|^2 = \sum_{l=0}^{m-1} |z(l)|^2 - m |c_*|^2 = ε_*,

where ε∗ is the critical value of the parameter ε. Hence it follows, in particular, that
the equation ϕ(α) = ε with 0 < ε < ε∗ has a unique positive root α∗ .

Theorem 3.5.3 The discrete periodic spline Sα∗ is a unique solution of the problem
(3.5.1).

Proof We take an arbitrary signal x satisfying the constraints of the problem (3.5.1) and assume that f(x) ≤ f(S_{α∗}). Then

F_{α∗}(x) = α∗ f(x) + g(x) ≤ α∗ f(S_{α∗}) + ε = α∗ f(S_{α∗}) + ϕ(α∗) = α∗ f(S_{α∗}) + g(S_{α∗}) = F_{α∗}(S_{α∗}).

Taking into account that S_{α∗} is the unique minimum point of F_{α∗} on C_N we conclude that x(j) ≡ S_{α∗}(j). It means that in the case x(j) ≢ S_{α∗}(j) there holds f(x) > f(S_{α∗}). The theorem is proved.


3.6 Tangent Hyperbolas Method

3.6.1 It is ascertained in par. 3.5.6 that a unique solution of the smoothing prob-
lem (3.5.1) for 0 < ε < ε∗ is a discrete periodic spline Sα ( j) of a form (3.5.12) with
α = α∗ , where α∗ is a unique positive root of the equation ϕ(α) = ε. Here we will
consider a question of calculation of α∗ .

We introduce a function

1 1 
m−1
|Z (k)|2
ψ(β) = ϕ = .
β m k=1 1 + r (k) β 2

We take an interval (−τ, +∞), where τ = mink∈1:m−1 [r (k)]−1 . On this interval
there hold inequalities ψ  (β) < 0 and ψ  (β) > 0, therefore the function ψ(β) is
strictly decreasing and strictly convex on (−τ, +∞). In addition to that we have
ψ(0) = ε∗ and lim ψ(β) = 0. If β∗ is a positive root of the equation ψ(β) = ε
β→+∞
then α∗ = 1/β∗ . Thus, instead of ϕ(α) = ε we can solve the equation ψ(β) = ε.
Let us consider the equivalent equation [ψ(β)]^{−1/2} = ε^{−1/2}. We will solve it by the Newton method with the initial approximation β_0 = 0. The working formula of the method is

β_{k+1} = β_k - \frac{ψ^{-1/2}(β_k) - ε^{-1/2}}{-\frac{1}{2}\, ψ^{-3/2}(β_k)\, ψ'(β_k)}
        = β_k + \frac{2\, ψ(β_k)}{ψ'(β_k)} \Big( 1 - \big( ψ(β_k)/ε \big)^{1/2} \Big),   k = 0, 1, . . .   (3.6.1)

Let us find out what this method corresponds to when it is applied to the equation
ψ(β) = ε.

3.6.2 We will need the following properties of the function [ψ(β)]−1/2 .

Lemma 3.6.1 The function [ψ(β)]−1/2 strictly increases and is concave on the inter-
val (−τ, +∞).

Proof We have  −1/2 


ψ (β) = − 21 ψ −3/2 (β) ψ  (β),
 −1/2   2 
ψ (β) = − 21 − 23 ψ −5/2 (β) ψ  (β) + ψ −3/2 (β) ψ  (β)
 2 
= 14 ψ −5/2 (β) 3 ψ  (β) − 2 ψ(β) ψ  (β) .

It is obvious that [ψ −1/2 (β)] > 0. The inequality [ψ −1/2 (β)] ≤ 0 is equivalent to
the following:
3 ψ  (β) ≤ 2 ψ(β) ψ  (β).
2
(3.6.2)

To verify (3.6.2) we introduce notations



ηk = m −1 |Z (k)| r (k) , θk = [r (k)]−1 .
2

Then

m−1
ηk 
m−1
ηk
ψ(β) = , ψ  (β) = −2 ,
k=1
(β + θk )2 k=1
(β + θk )3


m−1
ηk
ψ  (β) = 6 .
k=1
(β + θk )4

The inequality (3.6.2) takes a form


m−1 √ √ 2 m−1 m−1 
 ηk ηk  ηk  ηk
≤ . (3.6.3)
k=1
β + θk (β + θk )2 k=1
(β + θk )2 k=1
(β + θk )4

The latter is true on the strength of Cauchy–Bunyakovskii inequality. The lemma is


proved. 


Remark 3.6.1 By virtue of Lemma 2.1.5 the inequality (3.6.3) is fulfilled as an


equality if and only if β + θk ≡ const. If not all values θk = [r (k)]−1 for k ∈ 1 :
m − 1 are equal to each other then the inequality (3.6.3) holds strictly. In this case
the function [ψ(β)]−1/2 is strictly concave.

3.6.3 According to concavity of the function [ψ(β)]−1/2 , for β ≥ 0 we have

ψ −1/2 (β) − ψ −1/2 (βk ) ≤ − 21 ψ −3/2 (βk ) ψ  (βk ) (β − βk ),

so  
−1/2 −1/2 ψ  (βk )
0<ψ (β) ≤ ψ (βk ) 1 − (β − βk ) .
2 ψ(βk )

Raising to the power of −2 we gain


 −2
ψ  (βk )
ψ(β) ≥ ψ(βk ) 1 − (β − βk ) . (3.6.4)
2 ψ(βk )

We denote the function in the right side of the inequality (3.6.4) by ζk (β). A graph
of this function is a hyperbola. By virtue of (3.6.4) this hyperbola lies under the graph
of the function ψ(β). Since ζk (βk ) = ψ(βk ) and ζk (βk ) = ψ  (βk ), the mentioned
graphs are tangent to each other when β = βk (see Fig. 3.3). Moreover, the root βk+1
of the equation ζk (β) = ε is calculated with the aid of the formula (3.6.1).

According to what has been said it is reasonable to refer to the iterative method
(3.6.1) for solving the equation ψ(β) = ε as a tangent hyperbolas method.
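A minimal sketch of the iteration (3.6.1) follows, assuming 0 < ε < ε∗ and reusing the helper lam() from the sketch of (3.5.11); the function name and the stopping rule are ours.

import numpy as np
# lam is the helper from the sketch of formula (3.5.11).

def solve_smoothing_parameter(z, n, r, eps, iters=100, tol=1e-12):
    # tangent hyperbolas iteration (3.6.1) for psi(beta) = eps; returns alpha_* = 1/beta_*
    m = len(z)
    Z = np.fft.fft(z)
    w = np.abs(Z[1:]) ** 2 / m          # weights |Z(k)|^2 / m, k = 1, ..., m-1
    L = lam(r, m, n)

    def psi(b):
        return np.sum(w / (1 + L * b) ** 2)

    def dpsi(b):
        return -2 * np.sum(w * L / (1 + L * b) ** 3)

    beta = 0.0                          # initial approximation beta_0 = 0
    for _ in range(iters):
        p = psi(beta)
        step = 2 * p / dpsi(beta) * (1 - np.sqrt(p / eps))
        beta += step
        if abs(step) < tol:
            break
    return 1.0 / beta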

Fig. 3.3 Tangent hyperbolas method

3.7 Calculation of Discrete Spline’s Values

3.7.1 We start with a discrete periodic spline of the first order


m−1
S1 ( j) = c( p) Q 1 ( j − pn). (3.7.1)
p=0

We assume that the coefficients c( p) are continued with a period m on all integer
indices p. In particular, c(m) = c(0).
Lemma 3.7.1 The values S_1(0), S_1(1), . . . , S_1(N) are calculated consecutively by the scheme

S_1(0) = n\, c(0);
S_1(ln + k + 1) = S_1(ln + k) + (c(l + 1) - c(l)),   k ∈ 0 : n − 1, l ∈ 0 : m − 1.   (3.7.2)

Proof Let j = ln + k, k ∈ 0 : n − 1, l ∈ 0 : m − 1. Formula (3.7.1) yields


m−1 
m−1
S1 ( j) = c( p) Q 1 (k − ( p − l)n) = c( p + l) Q 1 (k − pn)
p=0 p=0


m−1
= c( p + l) Q 1 (k + (m − p)n).
p=0

For p ∈ 2 : m − 1 we have

n ≤ k + (m − p)n ≤ n − 1 + (m − 2)n = N − n − 1.

Hence see (3.2.4) and (3.2.1) Q 1 (k + (m − p)n) = 0 holds for the given p. We
gain
S1 ( j) = c(l) Q 1 (k) + c(l + 1) Q 1 (k + N − n).

Recall that Q 1 (k) = n − k for k ∈ 0 : n − 1. Furthermore, Q 1 ( j) = j − N + n for


j ∈ N − n : N − 1. Since N − n ≤ k + N − n ≤ N − 1 for k ∈ 0 : n − 1, it holds
Q_1(k + N − n) = k. Thus, for k ∈ 0 : n − 1 and l ∈ 0 : m − 1 we have

S_1(ln + k) = c(l)(n - k) + c(l + 1)k = n\,c(l) + k\,(c(l + 1) - c(l)).   (3.7.3)

In particular, S_1(ln) = n c(l) holds for l ∈ 0 : m − 1. By virtue of periodicity the latter equality is true for l = m as well.
Note that formula (3.7.3) is true for k = n. In this case it takes the form S_1((l + 1)n) = n c(l + 1), l ∈ 0 : m − 1. Replacing k by k + 1 in (3.7.3) we write

S_1(ln + k + 1) = n\,c(l) + (k + 1)(c(l + 1) - c(l)),   k ∈ 0 : n − 1.   (3.7.4)

On the basis of (3.7.3) and (3.7.4) we come to (3.7.2). The lemma is proved.

Below we present a program that implements calculations along the scheme (3.7.2).

Program Code
s1(0) := n ∗ c(0); j := 0;
for l := 0 to m − 1 do
begin h := c(l + 1) − c(l);
for k := 1 to n do
begin j := j + 1;
s1( j) := s1( j − 1) + h end
end

We see that calculation of values S1 ( j) for j = 0, 1, . . . , N requires one multipli-


cation by n and (n + 1)m additions.
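The same scheme in Python (an illustrative transcription, not the book's code; the coefficients c are extended m-periodically, so c(m) = c(0)):

import numpy as np

def eval_S1(c, n):
    # values S_1(0), ..., S_1(N) by the scheme (3.7.2)
    m = len(c)
    s1 = np.empty(m * n + 1, dtype=complex)
    s1[0] = n * c[0]
    j = 0
    for l in range(m):
        h = c[(l + 1) % m] - c[l]        # h = c(l+1) - c(l)
        for k in range(n):
            j += 1
            s1[j] = s1[j - 1] + h        # S_1(ln+k+1) = S_1(ln+k) + h
    return s1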
3.7.2 Now we turn to a general case of a discrete periodic spline of order r . With
the aid of cyclic convolution we introduce a sequence of signals

Sν = Q 1 ∗ Sν−1 , ν = 2, 3, . . . (3.7.5)

Here S1 is a spline of a form (3.7.1).



Theorem 3.7.1 The following equality holds:

S_r(j) = \sum_{p=0}^{m-1} c(p)\, Q_r(j - pn),   j ∈ Z.   (3.7.6)

Proof When r = 1, formula (3.7.6) coincides with (3.7.1). We perform an induction


step from r − 1 to r . According to the inductive hypothesis, (3.7.5), and (3.2.4), we
have
N −1

Sr ( j) = Q 1 (l) Sr −1 ( j − l)
l=0


m−1 N −1
 
m−1
= c( p) Q 1 (l) Q r −1 ( j − l − pn) = c( p) Q r ( j − pn).
p=0 l=0 p=0

The theorem is proved. 




3.7.3 Theorem 3.7.1 shows that calculation of values of Sr ( j) is reduced to calcu-


lation of values of S1 ( j) and consecutive convolution with B-spline Q 1 ( j). We will
consider a question of calculation of a convolution with Q 1 separately. Let

N −1

y( j) = x(k) Q 1 ( j − k).
k=0

Theorem 3.7.2 The following equality holds:

y(j) = n\,x(j) + \sum_{k=1}^{n-1} (n - k)\,[x(j + k) + x(j - k)],   j ∈ 0 : N − 1.   (3.7.7)

Proof Since Q 1 is even, we have

N −1
 N −1

y( j) = x(k) Q 1 (k − j) = x(k + j) Q 1 (k)
k=0 k=0


n−1 N −1

= (n − k)x(k + j) + n − (N − k) x j − (N − k)
k=0 k=N −n+1


n−1
= nx( j) + (n − k)[x( j + k) + x( j − k)].
k=1

The theorem is proved. 




We fix j ∈ 0 : N − 1 and introduce the notations

d_0 = x(j);   d_k = x(j + k) + x(j - k), k ∈ 1 : n − 1;   t_k = n - k.

Formula (3.7.7) can be rewritten in the form

y(j) = \sum_{k=0}^{n-1} d_k t_k.

We construct a sequence of numbers {h_k} by the rule

h_k = d_k + h_{k-1},   k = 0, 1, . . . , n − 1;   h_{-1} = 0.   (3.7.8)

Taking into account that t_k − t_{k+1} = 1 we gain

y(j) = \sum_{k=0}^{n-1} (h_k - h_{k-1})\, t_k = \sum_{k=0}^{n-1} h_k t_k - \sum_{k=-1}^{n-2} h_k t_{k+1}
     = h_{n-1} t_{n-1} + \sum_{k=0}^{n-2} h_k = \sum_{k=0}^{n-1} h_k.

Thus,
y(j) = \sum_{k=0}^{n-1} h_k,   (3.7.9)

where the h_k are calculated with the recurrent formula (3.7.8).


Below we present a program of calculation of values y( j) for j ∈ 0 : N − 1 which
is based on the representation (3.7.9).

Program Code
for j := 0 to n − 1 do
begin h := x( j); s := h;
for k := 1 to n − 1 do
begin h := h + x( j + k) + x( j − k) ;
s := s + h end;
y( j) := s
end

The program uses only additions. The number of additions is 3(n − 1)N .
The values x( j) for j from (−n + 1) to N + n − 2 must be given explicitly. By
virtue of periodicity one should put

x( j) = x(N + j) for j ∈ −n + 1 : −1;


x( j) = x( j − N ) for j ∈ N : N + n − 2.
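The same computation in Python, with the periodic extension replaced by reduction of indices modulo N (an illustrative transcription, not the book's code). Applying it r − 1 times to the values S_1(0), . . . , S_1(N − 1) yields S_r, in accordance with (3.7.5) and Theorem 3.7.1.

import numpy as np

def convolve_with_Q1(x, n):
    # y(j) = sum_k x(k) Q_1(j - k) via the accumulation scheme (3.7.8)-(3.7.9)
    x = np.asarray(x)
    N = len(x)
    y = np.empty(N, dtype=complex)
    for j in range(N):
        h = x[j]                          # h_0 = d_0 = x(j)
        s = h
        for k in range(1, n):
            h = h + x[(j + k) % N] + x[(j - k) % N]   # h_k = d_k + h_{k-1}
            s = s + h
        y[j] = s                          # y(j) = h_0 + ... + h_{n-1}
    return y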

3.8 Orthogonal Basis in a Space of Splines

3.8.1 We consider a discrete periodic spline


m−1
S( j) = c( p) Q r ( j − pn) (3.8.1)
p=0

and transform its coefficients by a rule ξ = Fm (c). Taking into account the DFT
inversion formula we write
m−1  m−1 
1  
S( j) = ξ(k) ωmkp Q r ( j − pn)
m p=0 k=0

  
1  kp
m−1 m−1
= ξ(k) ωm Q r ( j − pn) . (3.8.2)
k=0
m p=0

We introduce the notation

μ_k(j) = \frac{1}{m} \sum_{p=0}^{m-1} ω_m^{kp}\, Q_r(j - pn),   k ∈ 0 : m − 1.   (3.8.3)

Formula (3.8.2) takes a form


m−1
S( j) = ξ(k) μk ( j). (3.8.4)
k=0

It is obvious that the signals μk belong to Srm . According to (3.8.4) they form a
basis in Srm . We will show that this basis is orthogonal.

3.8.2 As a precursor, let us obtain an expansion of the signal μk over the exponential
basis.

Lemma 3.8.1 The following equality holds:

μ_k(j) = \frac{1}{N} \sum_{q=0}^{n-1} X_1^r(qm + k)\, ω_N^{(qm+k)j}.   (3.8.5)

Proof According to (3.2.5) and (3.8.3) we have


 N −1 
1  kp  r
m−1
l( j− pn)
μk ( j) = ωm X 1 (l) ω N
m N p=0 l=0
N −1  m−1 
1  r lj 1

(k−l) p
= X (l) ω N ω
N l=0 1 m p=0 m
N −1
1  r lj
= X (l) ω N δm (k − l).
N l=0 1

The index l can be represented as l = qm + k  , where q ∈ 0 : n − 1 and k  ∈ 0 :


m − 1. Bearing this in mind we gain

1  r
n−1 m−1
(qm+k  ) j
μk ( j) = X (qm + k  ) ω N δm (k − k  )
N q=0 k  =0 1

1  r
n−1
(qm+k) j
= X (qm + k) ω N .
N q=0 1

The lemma is proved. 



Theorem 3.8.1 The following relations hold:

⟨μ_k, μ_{k'}⟩ = 0 for k ≠ k',   ‖μ_k‖^2 = \frac{1}{m} T_{2r}(k),   k ∈ 0 : m − 1.   (3.8.6)

Proof On the basis of (3.8.5) we write

N −1

μk , μk  = μk ( j) μk  ( j)
j=0
 N −1 
1  r
n−1
1 (qm+k−q  m−k  ) j
= X 1 (qm + k) X 1r (q  m + k  ) ωN
N q,q  =0 N j=0

1  r
n−1
= X (qm + k) X 1r (q  m + k  ) δ N (q − q  )m + k − k  .
N q,q  =0 1

It is evident that |k − k  | ≤ m − 1 and |(q − q  )m + k − k  | ≤ N − 1. If k = k  then


the argument of the unit pulse δ N is nonzero for all q, q  ∈ 0 : n − 1. In this case
there holds μk , μk  = 0. When k = k  , according to (3.2.6) we gain

1  2r
n−1
1
μk  =
2
X (qm + k) = T2r (k).
N q=0 1 m

The theorem is proved. 




3.8.3 It is ascertained that the splines {μk }m−1k=0 form an orthogonal basis in a
space Srm . A transition from the expansion (3.8.1) to the expansion (3.8.4) is based on
the coefficients transform ξ = Fm (c). An inverse transition is related to the inversion
formula for DFT: c = Fm−1 (ξ ).
Note that (3.8.5) and (3.2.2) yield

μ0 ( j) ≡ N −1 n 2r . (3.8.7)

Lemma 3.8.2 For k ∈ 1 : m − 1 the following equality holds:

μ_{m-k}(j) = \overline{μ_k(j)},   j ∈ Z.   (3.8.8)

Proof According to (3.8.3) we have

\overline{μ_{m-k}(j)} = \frac{1}{m} \sum_{p=0}^{m-1} ω_m^{-(m-k)p}\, Q_r(j - pn) = μ_k(j),

which is equivalent to (3.8.8). The lemma is proved.




Lemma 3.8.3 For all integer l there holds

μk ( j + ln) = ωmkl μk ( j), j ∈ Z. (3.8.9)

Indeed,

1  k( p−l)+kl
m−1
μk ( j + ln) = ω Q r j − ( p − l)n = ωmkl μk ( j).
m p=0 m

Theorem 3.8.2 Let k ∈ 1 : m − 1 and let p be a natural number. If the product kp is not divisible by m then

\sum_{j=0}^{N-1} \big(μ_k(j)\big)^p = 0.

Proof On the basis of (3.8.9) we write

N −1
 
 p m−1 n−1
 p
μk ( j) = μk (q + ln)
j=0 l=0 q=0


m−1 
n−1
 p 
n−1
 p
= ωmklp μk (q) = m δm (kp) μk (q) .
l=0 q=0 q=0

Hence the required equality follows immediately. 




Let us consider special cases. For p = 1 we have

N −1

μk ( j) = 0, k ∈ 1 : m − 1.
j=0

Let p = 2. For k ∈ 1 : m − 1 the condition that 2k is divisible by m reduces to the relation 2k = m. Thus, for k ∈ 1 : m − 1, k ≠ m/2, the following equality holds:

\sum_{j=0}^{N-1} \big(μ_k(j)\big)^2 = 0.

We see that for the mentioned indices k there always exist some complex values
among μk ( j). Along with that, if m is even then all values μm/2 ( j) are real because
there holds
1 
m−1
μm/2 ( j) = (−1) p Q r ( j − pn). (3.8.10)
m p=0

3.8.4 Let us return to formula (3.8.3) and rewrite it as follows:

1 
m−1
μk ( j) = Q r ( j − pn) ωmpk .
m p=0

This formula has a form of the DFT inversion formula, therefore


m−1
Q r ( j − pn) = μk ( j) ωm− pk , p ∈ 0 : m − 1.
k=0

In particular,

m−1
Q r ( j) = μk ( j), j ∈ Z. (3.8.11)
k=0

3.9 Bases of Shifts


3.9.1 As it was noted in par. 3.3.1, the shifts {Q_r(j − pn)}_{p=0}^{m−1} of a B-spline form a basis in the space S_r^m. Are there any other splines with a similar property? This question can be answered completely.
We take a spline ϕ ∈ S_r^m and expand it over the orthogonal basis:


m−1
ϕ( j) = ξ(k) μk ( j). (3.9.1)
k=0

Formula (3.8.9) yields


m−1
ϕ( j − pn) = ξ(k) ωm−kp μk ( j). (3.9.2)
k=0

Hence the shifts ϕ( j − pn) also belong to Srm .


Theorem 3.9.1 The system of shifts {ϕ(j − pn)}_{p=0}^{m−1} forms a basis in S_r^m if and only if each coefficient ξ(k) in the expansion (3.9.1) is nonzero.


Proof We rewrite (3.9.2) in a form


m−1
ϕ( j − pn) = ξ(k) μk ( j) ωm−kp .
k=0

The DFT inversion formula yields

1  kp
m−1
ξ(k) μk ( j) = ω ϕ( j − pn), k ∈ 0 : m − 1. (3.9.3)
m p=0 m

If every ξ(k) is nonzero then we can divide (3.9.3) by ξ(k) and thus gain an
expansion of all splines μk ( j) over the system {ϕ( j − pn)}m−1
p=0 . Therefore this
system is a basis in Srm .
p=0 be a basis in Sr . If at least one coefficient
Conversely, let {ϕ( j − pn)}m−1 m

ξ(k) in the expansion (3.9.1) is equal to zero then according to (3.9.3) the system
{ϕ( j − pn)}m−1
p=0 is linearly dependent. But this contradicts with a definition of a
basis. The theorem is proved. 

3.9.2 Two splines ϕ and ψ from Srm are called dual if for all p, q ∈ 0 : m − 1 there
holds 
ϕ(· − pn), ψ(· − qn) = δm ( p − q). (3.9.4)

Thus, duality of splines ϕ and ψ is characterized by the fact that the systems of their
p=0 and {ψ( j − pn)} p=0 are biorthogonal.
shifts {ϕ( j − pn)}m−1 m−1

Along with (3.9.1) we write an expansion


m−1
ψ( j) = η(k) μk ( j).
k=0

We note that (3.9.2) and (3.8.6) yield

 ! m−1
 
m−1 "
ϕ(· − pn), ψ(· − qn) = ξ(k) ωm−kp μk , η(l) ωm−lq μl
k=0 l=0


m−1
= ξ(k) η(k) ωmk(q− p) μk 2
k=0

1 
m−1
= ξ(k) η(k) T2r (k) ωmk(q− p) . (3.9.5)
m k=0

Theorem 3.9.2 Splines ϕ and ψ from Srm are dual if and only if their coefficients
ξ(k), η(k) in the expansions over the orthogonal basis satisfy to the condition
 −1
ξ(k) η(k) = T2r (k) , k ∈ 0 : m − 1. (3.9.6)

Proof
Necessity We take (3.9.4) and put p = 0 there. According to (3.9.5) we gain

1 
m−1
ξ(k) η(k) T2r (k) ωmkq = δm (q).
m k=0

Therefore


m−1
ξ(k) η(k) T2r (k) = δm (q) ωm−kq = 1, k ∈ 0 : m − 1,
q=0

which is equivalent to (3.9.6).

Sufficiency obviously follows from (3.9.5) and (3.9.6). The theorem is proved. 


3.9.3 Theorem 3.9.2 lets us introduce a self-dual spline. It is obtained when ξ(k) =
η(k), k ∈ 0 : m − 1. In this case the condition (3.9.6) takes a form
 −1
|ξ(k)|2 = T2r (k) , k ∈ 0 : m − 1.

The simplest self-dual spline is defined by the formula (see Fig. 3.4)

ϕ_r(j) = \sum_{k=0}^{m-1} \frac{μ_k(j)}{\sqrt{T_{2r}(k)}},   j ∈ Z.

Fig. 3.4 Graph of a self-dual spline ϕ_r(j) for m = 8, n = 5, and r = 2

According to (3.9.5) we have ⟨ϕ_r(· − pn), ϕ_r(· − qn)⟩ = δ_m(q − p). The latter means that the shifts {ϕ_r(j − pn)}_{p=0}^{m−1} form an orthonormal system.

3.9.4 According to (3.8.11) each coefficient in the expansion of the discrete periodic B-spline Q_r(j) over the orthogonal basis is equal to unity. By virtue of (3.9.6) the spline R_r(j) dual to Q_r(j) has the form (see Fig. 3.5)

R_r(j) = \sum_{k=0}^{m-1} \frac{μ_k(j)}{T_{2r}(k)}.   (3.9.7)

Fig. 3.5 Graph of a spline R_r(j) dual to a B-spline Q_r(j) for m = 8, n = 5, and r = 2



We will show how the dual splines Q r ( j) and Rr ( j) help in solving a problem of
spline processing of discrete periodic data with the least squares method.

Consider an extremal problem

N −1

F(S) := |S( j) − z( j)|2 → min, (3.9.8)
j=0

where the minimum is taken among all S ∈ Srm . Given an arbitrary H ∈ Srm , we have

F(S + H ) = (S − z) + H 2 = F(S) + H 2 + 2 Re S − z, H .

If we manage to construct a spline S∗ ∈ Srm such that the difference S∗ − z is orthogo-


nal to any element H of Srm , then S∗ will be the unique solution of the problem (3.9.8).
Let us use a representation

S_*(j) = \sum_{q=0}^{m-1} d(q)\, R_r(j - qn).   (3.9.9)

The condition of orthogonality can be written as

\Big⟨ \sum_{q=0}^{m-1} d(q)\, R_r(· − qn) − z,\; Q_r(· − pn) \Big⟩ = 0,   p ∈ 0 : m − 1.

On the strength of duality of the splines Q_r and R_r we gain

d(p) = ⟨z, Q_r(· − pn)⟩ = \sum_{j=0}^{N-1} z(j)\, Q_r(j - pn),   p ∈ 0 : m − 1.   (3.9.10)

Thus, a unique solution of the problem (3.9.8) is the spline (3.9.9) with the coefficients
being calculated with formula (3.9.10).
The problem (3.9.8) can be interpreted as a problem of orthogonal projection of
a signal z on a subspace Srm .
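A sketch of this projection: the coefficients d(p) of (3.9.10) are plain inner products of the data with the shifted B-splines. The helper Q is the one from the interpolation sketch in Sect. 3.4 (which in turn uses X1); the function names are ours.

import numpy as np
# Q is the helper from the interpolation sketch in Sect. 3.4.

def least_squares_coefficients(z, n, r):
    # d(p) = <z, Q_r(. - pn)> per (3.9.10); z is a length-N signal, N = m*n.
    # Q_r is real-valued, so no conjugation is needed in the inner product.
    N = len(z)
    m = N // n
    Qr = Q(r, m, n)
    j = np.arange(N)
    return np.array([np.sum(z * Qr[(j - p * n) % N]) for p in range(m)])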

3.9.5 We can pass from the expansion (3.9.9) of the spline S∗(j) over the basis {R_r(j − qn)}_{q=0}^{m−1} to the expansion over the basis {Q_r(j − pn)}_{p=0}^{m−1}. In order to do this we use formulae (3.9.7) and (3.9.2) and write

S_*(j) = \sum_{q=0}^{m-1} d(q)\, R_r(j - qn) = \sum_{q=0}^{m-1} d(q) \sum_{k=0}^{m-1} \frac{ω_m^{-kq}}{T_{2r}(k)}\, μ_k(j)
       = \sum_{k=0}^{m-1} \frac{μ_k(j)}{T_{2r}(k)} \sum_{q=0}^{m-1} d(q)\, ω_m^{-kq}.

We denote D = F_m(d), ξ(k) = D(k)/T_{2r}(k), c = F_m^{−1}(ξ). Then

S_*(j) = \sum_{p=0}^{m-1} c(p)\, Q_r(j - pn).
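A sketch of this change of basis, reusing the helper T from the sketch after Lemma 3.2.2 (the function name is ours):

import numpy as np
# T is the helper from the sketch after Lemma 3.2.2.

def dual_to_bspline_coefficients(d, n, r):
    # D = F_m(d), xi(k) = D(k)/T_{2r}(k), c = F_m^{-1}(xi)
    m = len(d)
    D = np.fft.fft(d)
    xi = D / T(2 * r, m, n)
    return np.fft.ifft(xi)     # coefficients c(p) in the basis of Q_r shifts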

3.10 Wavelet Subspaces

3.10.1 We will carry out further analysis in assumption that m = 2t , where t is a


natural number. We put m ν = m/2ν and n ν = 2ν n. In this case m ν n ν = N for all
ν = 0, 1, . . . , t.
We denote orthogonal splines corresponding to parameters m ν , n ν by μνk . In
particular, μ0k = μk . We have by a definition that μνk ∈ Srm ν . On the strength of (3.8.3)
we may consider the splines μνk being defined for all integer k. It is clear that they
are m ν -periodic on k.

Theorem 3.10.1 The following recurrent formula holds for ν = 0, 1, . . . , t − 1:

μ_k^{ν+1}(j) = c_ν(k)\, μ_k^ν(j) + c_ν(m_{ν+1} + k)\, μ_{m_{ν+1}+k}^ν(j),   (3.10.1)

where c_ν(l) = (2 \cos(πl/m_ν))^{2r}.

Proof We introduce the notation

y_ν(l) = n_ν^{2r} for l = 0,   y_ν(l) = \Big( \frac{\sin(πl/m_ν)}{\sin(πl/N)} \Big)^{2r} for l ∈ 1 : N − 1.

Equality (3.8.5) yields

n ν −1
1  (qm +k) j
μνk ( j) = yν (qm ν + k) ω N ν .
N q=0

We note that
yν+1 (l) = cν (l) yν (l), l ∈ 0 : N − 1. (3.10.2)

Bearing in mind m ν -periodicity of the signal cν we gain

2n ν −1
1  (qm +k) j
μν+1
k ( j) = yν+1 (qm ν+1 + k) ω N ν+1
N q=0
n ν −1
1  (2qm +k) j
= yν+1 (2qm ν+1 + k) ω N ν+1
N q=0
n ν −1
1  ((2q+1)m ν+1 +k) j
+ yν+1 (2q + 1)m ν+1 + k ω N
N q=0
n ν −1
1  (qm +k) j
= cν (k) yν (qm ν + k) ω N ν
N q=0
n ν −1
1  (qm +m +k) j
+ cν (m ν+1 + k) yν (qm ν + m ν+1 + k) ω N ν ν+1
N q=0
= cν (k) μνk ( j) + cν (m ν+1 + k) μνm ν+1 +k ( j).

The theorem is proved. 



m ν+1
Formula (3.10.1), in particular, yields an inclusion Sr ⊂ Srm ν .

3.10.2 Let us construct a nonzero spline wkν+1 ∈ Srm ν , k ∈ 0 : m ν+1 − 1, of a form

wkν+1 ( j) = aν (k) μνk ( j) + aν (m ν+1 + k) μνm ν+1 +k ( j) (3.10.3)

being orthogonal to μν+1


k . Since

wkν+1 , μν+1
k = aν (k) cν (k) μνk 2 + aν (m ν+1 + k) cν (m ν+1 + k) μνm ν+1 +k 2 ,

the condition wkν+1 , μν+1


k = 0 can be written down with the aid of a determinant
of order two:  
 aν (k) cν (m ν+1 + k) μνm ν+1 +k 2 
 
  = 0. (3.10.4)
 a (m ν 2 
ν ν+1 + k) −cν (k) μk 

The second column of the determinant is nonzero, therefore the equality (3.10.4) is
possible only if there exists a number λν (k) such that

aν (k) = λν (k) cν (m ν+1 + k) μνm ν+1 +k 2 ,

aν (m ν+1 + k) = −λν (k) cν (k) μνk 2 .



Putting
λν (m ν+1 + k) = −λν (k), k ∈ 0 : m ν+1 − 1, (3.10.5)

we come to a single formula

aν (k) = λν (k) cν (m ν+1 + k) μνm ν+1 +k 2 , (3.10.6)

k ∈ 0 : m ν − 1.

Thus, a spline wkν+1 of a form (3.10.3) is orthogonal to μν+1 k if and only if coeffi-
cients aν (k) can be represented by (3.10.6), where the numbers λν (k) are of the prop-
erty (3.10.5). A condition wkν+1 ( j) ≡ 0 is equivalent to λν (k) = 0, k ∈ 0 : m ν+1 − 1.
According to (3.10.5) numbers ρν (k) = λν (k) ωm−kν satisfy to the equality
ρν (m ν+1 + k) = ρν (k), k ∈ 0 : m ν+1 − 1. It means that λν (k) can be represented
in a form λν (k) = ρν (k) ωmk ν , where ρν (k) is an arbitrary m ν+1 -periodic sequence
whose members are all nonzero.
We will consider the simplest case of ρν (k) ≡ 1. It corresponds to splines wkν+1
of a form (3.10.3) with the coefficients

aν (k) = ωmk ν cν (m ν+1 + k) μνm ν+1 +k 2 . (3.10.7)

Formula (3.10.7) lets us consider the splines wkν+1 being


 defined for all integer k.
In addition to that, according to (3.10.3), the sequence wkν+1 ( j) is m ν+1 -periodic
on k.
We note that

w0ν+1 ( j) = aν (m ν+1 ) μνm ν+1 ( j) = −22r μν0 2 μνm ν+1 ( j).

Lemma 3.10.1 For all integer numbers l there holds

wkν+1 ( j + ln ν+1 ) = ωmklν+1 wkν+1 ( j). (3.10.8)

Proof According to (3.10.3) and (3.8.9) we write

wkν+1 ( j + ln ν+1 ) = aν (k) μνk ( j + 2ln ν ) + aν (m ν+1 + k) μνm ν+1 +k ( j + 2ln ν )


= aν (k) ωm2lkν μνk ( j) + aν (m ν+1 + k) ωm2l(m
ν
ν+1 +k)
μνm ν+1 +k ( j)
= ωmklν+1 wkν+1 ( j).

The lemma is proved. 



 m ν+1 −1
Theorem 3.10.2 The splines wkν+1 k=0 form an orthogonal system. Moreover,

wkν+1 , μν+1
k = 0 for all k, k  ∈ 0 : m ν+1 − 1,
(3.10.9)
wkν+1  = μνk  μνm ν+1 +k  μν+1
k .

Proof The equalities wkν+1 , wkν+1  = 0 and wkν+1 , μν+1


k = 0 for k = k  , k, k  ∈
0:m − 1, follow from (3.10.3), (3.10.1), and orthogonality of the system
 ν mν+1
ν −1
μk k=0 . The equality wkν+1 , μν+1
k = 0 is provided by the choice of a spline wkν+1 .
ν+1
The norm of wk is calculated directly on the basis of formula (3.10.7). Indeed,

wkν+1 2 = |aν (k)|2 μνk 2 + |aν (m ν+1 + k)|2 μνm ν+1 +k 2


 
= μνk 2 μνm ν+1 +k 2 |cν (m ν+1 + k)|2 μνm ν+1 +k 2 + |cν (k)|2 μνk 2
= μνk 2 μνm ν+1 +k 2 μν+1
k  .
2

It is remaining to extract the square root. The theorem is proved. 



3.10.3 We denote by Wr ν+1 a linear hull spanned by the splines wkν+1 , k ∈ 0 :
m

m ν+1 − 1. As long as each wkν+1 belongs to Srm ν , we have Wr ν+1 ⊂ Srm ν . As it was
m
m ν+1
noted earlier, the inclusion Sr ⊂ Sr holds as well. According to Theorem 3.10.2

the splines

μν+1 ν+1 ν+1 ν+1 ν+1 ν+1


0 , μ1 , . . . , μm ν+1 −1 , w0 , w1 , . . . , wm ν+1 −1

form an orthogonal basis in Srm ν . The space Srm ν itself can be considered as an
m m
orthogonal sum of the subspaces Sr ν+1 and Wr ν+1 , i. e.

Srm ν = Srm ν+1 ⊕ Wrm ν+1 . (3.10.10)

Applying formula (3.10.10) consecutively for ν = 0, 1, . . . , t − 1, we come to the


expansion

Srm = Srm 0 = Srm 1 ⊕ Wrm 1 = Srm 2 ⊕ Wrm 2 ⊕ Wrm 1


= Srm 2 ⊕ Wrm 2 ⊕ Wrm 1 = . . .
= Srm t ⊕ Wrm t ⊕ Wrm t−1 ⊕ · · · ⊕ Wrm 1 .

Here Srm t = Sr1 is a one-dimensional space consisting of signals that are identically
equal to a complex constant (see par. 3.3.1).
Let us formulate the obtained result as a theorem.
Theorem 3.10.3 A space of discrete periodic splines Srm with m = 2t can be decom-
posed into orthogonal sum

Srm = Sr1 ⊕ Wrm t ⊕ Wrm t−1 ⊕ · · · ⊕ Wrm 1 .



According to this theorem any spline S ∈ Srm with m = 2t can be represented as

 ν −1
t m
S( j) = α + αν (k) wkν ( j),
ν=1 k=0

where α and αν (k) are complex coefficients.


The subspaces Wrm ν , ν ∈ 1 : t, are called wavelet ones.
m m
Formula (3.10.10) shows that Wr ν+1 is the orthogonal complement of Sr ν+1 to
Sr .

3.11 First Limit Theorem

3.11.1 We return to the problem of discrete spline interpolation (see the Sect. 3.4).
We denote the only spline from the set Srm that satisfies to interpolation conditions

S(ln) = z(l), l ∈ 0 : m − 1,

by Sr,n ( j). By this we emphasize dependency of the interpolating spline on the param-
eters r and n (with fixed m ≥ 2). We are interested in behavior of the spline Sr,n ( j)
whether r → ∞ or n → ∞.
In this section we consider the case r → ∞.
3.11.2 Recall that

m−1
Sr,n ( j) = c( p) Q r ( j − pn), (3.11.1)
p=0

whereby
[Fm (c)](k) = Z (k)/Tr (k), k ∈ 0 : m − 1. (3.11.2)

Here Z (k) = [Fm (z)](k) and


T_r(k) = n^{2r-1} for k = 0,   T_r(k) = \frac{1}{n} \sum_{s=0}^{n-1} \Big( \frac{\sin(πk/m)}{\sin(π(sm + k)/N)} \Big)^{2r} for k ∈ 1 : m − 1.   (3.11.3)

Let us find the discrete Fourier transform of Sr,n .


Lemma 3.11.1 The following formula holds for the spectrum X_r = F_N(S_{r,n}):

X_r = Z V_r,   (3.11.4)

where Z = F_m(z) and for q ∈ 0 : n − 1 there holds

V_r(qm + l) = n\, δ_n(q) for l = 0,   V_r(qm + l) = n \Big[ \sum_{s=0}^{n-1} \Big( \frac{\sin(π(qm + l)/N)}{\sin(π(sm + l)/N)} \Big)^{2r} \Big]^{-1} for l ∈ 1 : m − 1.   (3.11.5)

Proof According to (3.11.1) we have

N −1 m−1
  
−k( j− pn)−kpn
X r (k) = c( p) Q r ( j − pn) ω N =
j=0 p=0

 N −1

m−1
−k j    
= c( p) ωm−kp Q r ( j) ω N = Fm (c) (k) F N (Q r ) (k).
p=0 j=0

Taking into account (3.11.2) we gain

X r (k) = Z (k)Vr (k),

where  
Vr (k) = F N (Q r ) (k)/Tr (k). (3.11.6)

We will show that the signals Vr can be represented in a form (3.11.5).


As you know (see (3.2.5) and (3.2.2)),


⎨n 2r for k = 0,
 
[F N (Q r )](k) = sin(π k/m) 2r (3.11.7)

⎩ for k ∈ 1 : N − 1.
sin(π k/N )

For another thing, Tr (0) = n 2r −1 and Tr (k) > 0 for all k ∈ Z. Therefore,

Vr (0) = n and Vr (m) = Vr (2m) = · · · = Vr (n − 1)m = 0.

This fact can be written as Vr (qm) = nδn (q), q ∈ 0 : n − 1.


For k = qm + l, where l = 0, m-periodicity of the signal Tr (k) and formu-
lae (3.11.7) and (3.11.3) yield
2r −1
[F N (Q r )](qm + l) sin(πl/m) Tr (l)
Vr (qm + l) = =
Tr (l) sin(π(qm + l)/N )
2r

n−1   
sin(π(qm + l)/N ) 2r −1
=n .
s=0
sin(π(sm + l)/N )

The lemma is proved. 




3.11.3 As it follows from (3.11.6) and (3.11.5), the signal Vr is N -periodic, real, and
even. For all natural r the following equalities hold: Vr (qm) = nδn (q), q ∈ 0 : n − 1,
and

n−1
Vr (qm + l) = n, l ∈ 1 : m − 1. (3.11.8)
q=0

Equality (3.11.8) also holds for l = 0.
We will show that, for each fixed k ∈ Z, the sequence {V_r(k)}_{r=1}^{∞} has a limit when r → ∞.
To do so, we introduce a spectrum V∗ with the following components:

V_*(k) = n for k = 0, 1, . . . , ⌊(m − 1)/2⌋,   V_*(k) = 0 for k = ⌊m/2⌋ + 1, . . . , ⌊N/2⌋.

In case of an even m we additionally put V∗(m/2) = n/2. With the aid of the equality V∗(N − k) = V∗(k) we spread V∗ onto the whole main period 0 : N − 1. Figure 3.6 depicts the graphs of V∗(k) for m = 3, n = 3, and m = 4, n = 3.

It is evident that V∗ (qm) = n δn (q), q ∈ 0 : n − 1, because

m/2 + 1 ≤ m and (n − 1)m ≤ N − m/2 − 1.

Therefore, for all natural r there holds

Vr (qm) = V∗ (qm), q ∈ 0 : n − 1. (3.11.9)

Fig. 3.6 Graphs of the spectrum V∗(k) on the main period for m = 3, n = 3 and for m = 4, n = 3

Lemma 3.11.2 The following limit relation holds:

\lim_{r→∞} V_r(k) = V_*(k),   k ∈ Z.   (3.11.10)

Proof Let k = qm + l, where q ∈ 0 : n − 1 and l ∈ 0 : m − 1. In case of l = 0 the


conclusion of the lemma is a trivial consequence of the equality (3.11.9). Thereafter
we assume that l ∈ 1 : m − 1.
Denote αk = sin2 (kπ/N ), k ∈ 0 : N − 1. We will need a few properties of these
numbers.
1. The biggest value of αk equals to αN /2 . In case of an odd N it is achieved on
two indices k = (N − 1)/2 and k = (N + 1)/2; in case of an even N the only
critical point is k = N /2. # $
2. For s ∈ 1 : n − 1 and l ∈ 1 : (m − 1)/2 there holds αsm+l > αl . It follows from
the inequalities 2l < m and

l < sm + l ≤ (n − 1)m + l < nm − l

and from the equality αl = α N −l (see Fig. 3.7).


3. If m is even then αsm+m/2 > αm/2 holds for s ∈ 1 : n − 2. Indeed,

m/2 < sm + m/2 ≤ (n − 2)m + m/2 < nm − m/2.

# Let us verify
$ validity of the limit relation (3.11.10) for q = 0 and l ∈ 1 :
(m − 1)/2 . According to (3.11.3) we have

 n−1  
 αl r −1
Vr (l) = n 1 + .
s=1
αsm+l

On the strength of the property (2) of the numbers αk we gain


# $
lim Vr (l) = n = V∗ (l), l ∈ 1 : (m − 1)/2 .
r →∞

Further, from (3.11.8) it follows that

Fig. 3.7 Graph of the function y = sin² x on its period
0


n−1
# $
Vr (qm + l) = n, l ∈ 1 : (m − 1)/2 .
q=0

All the values Vr (qm + l) are non-negative and Vr (l) → n when r → ∞, so for
q ∈ 1 : n − 1 there holds
# $
lim Vr (qm + l) = 0, l ∈ 1 : (m − 1)/2 .
r →∞

# $
Note that for q ∈ 1 : n# − 1 and l $∈ 1 : (m − 1)/2 the index qm + l varies from
m + 1 to (n − 1)m + (m − 1)/2 . Bearing in mind the equality
# $
(m − 1)/2 + m/2 = m − 1

we obtain # $
(n − 1)m + (m − 1)/2 = N − m/2 − 1.

Furthermore m + 1 > m/2 + 1, so

m/2 + 1 < qm + l ≤ N − m/2 − 1.

By a definition, V∗ (qm + l) = 0 for the given q and l. Hence

lim Vr (qm + l) = 0 = V∗ (qm + l),


r →∞

# $
q ∈ 1 : n − 1, l ∈ 1 : (m − 1)/2 .

If m is even, we should additionally consider the case of l = m/2 where


V∗ (l) = n/2. According to (3.11.5) we have
 n−2 
 r −1
αm/2
Vr ( m2 ) = n 2 + .
s=1
αsm+m/2

By virtue of the property (3) of the numbers αk we gain

lim Vr ( m2 ) = n
2
= V∗ ( m2 ).
r →∞

It is also clear that, due to the fact that both signals Vr and V∗ are even, there holds

lim Vr (n − 1)m + m
2
= lim Vr ( m2 ) = V∗ ( m2 ) = V∗ (n − 1)m + m
2
.
r →∞ r →∞

According to (3.11.8)


n−1
Vr (qm + m
2
) = n.
q=0

Taking into account the relations Vr (m/2) → n/2 and Vr (n − 1)m + m/2 → n/2
we conclude that

lim Vr (qm + m
2
) = 0 = V∗ (qm + m
2
), q ∈ 1 : n − 2.
r →∞

Thus, for all q ∈ 0 : n − 1 the limit relation

lim Vr (qm + m
2
) = V∗ (qm + m
2
) (3.11.11)
r →∞

holds.
When m = 2, the set 1 : m − 1 consists of a single index l = 1 = m/2. In this
case the relation (3.11.11) proves the lemma. Thereafter we assume that m ≥ 3. This
guarantees that the set m/2 + 1 : m − 1 is not empty.
According to (3.11.8),


n−1
Vr (qm + l) = n, l ∈ m/2 + 1 : m − 1. (3.11.12)
q=0

Furthermore,

lim Vr (n − 1)m + l = lim Vr (m − l) = n = V∗ (m − l) = V∗ (n − 1)m + l .


r →∞ r →∞

We took into account that the following inequalities hold for the given l:
# $
1 ≤ m − l ≤ (m − 1)/2 .

Now (3.11.12) yields

lim Vr (qm + l) = 0 = V∗ (qm + l),


r →∞

q ∈ 0 : n − 2, l ∈ m/2 + 1 : m − 1.

Therefore, the limit relation

lim Vr (qm + l) = V∗ (qm + l), l ∈ m/2 + 1 : m − 1,


r →∞

holds for all q ∈ 0 : n − 1.


The lemma is proved. 


3.11.4 We introduce the notations

X ∗ = Z V∗ , tm,n = F N−1 (X ∗ ).

First Limit Theorem The following limit relation holds:

\lim_{r→∞} S_{r,n}(j) = t_{m,n}(j),   j ∈ Z.   (3.11.13)

kj
Proof Using the inversion formula and the fact that |ω N | = 1 holds for all integer k
and j we gain

N −1
1  kj
|Sr,n ( j) − tm,n ( j)| ≤ |X r (k) − X ∗ (k)| × |ω N |
N k=0
N −1
1 
= |Z (k)| × |Vr (k) − V∗ (k)|.
N k=0

Now the conclusion of the theorem immediately follows from Lemma 3.11.2. 


3.11.5 Let us find out the nature of the limit signal tm,n . Consider two cases depending
upon whether m is even.

Case of m = 2μ − 1. We have m/2 + 1 = μ. By a definition, the even signal V∗


takes the following values:

0 for k ∈ μ : N − μ,
V∗ (k) =
n for the other k ∈ 0 : N − 1.

Therefore, 
0 for k ∈ μ : N − μ,
X ∗ (k) =
n Z (k) for the other k ∈ 0 : N − 1.

We will use the sampling theorem (see Sect. 2.4 of Chap. 2). According to it there
holds
N −1

tm,n ( j) = tm,n (ln) h m,n ( j − ln), (3.11.14)
l=0

where
μ−1

1 kj
h m,n ( j) = ωN . (3.11.15)
m k=−μ+1

We will show that


tm,n (ln) = z(l), l ∈ 0 : m − 1. (3.11.16)

The inversion formula yields

N −1  μ−1 N −1 
1  1  
tm,n (ln) = X ∗ (k) ω N =
kln
Z (k) ωm +
kl
Z (k) ωm .
kl
N k=0 m k=0 k=N −μ+1

Replacing the index k  = k − N + m in the second sum and bearing in mind


m-periodicity of the spectrum Z we gain

1 
m−1
tm,n (ln) = Z (k) ωmkl = z(l).
m k=0

Formula (3.11.16) is ascertained.


Now the expression (3.11.14) takes a form

N −1

tm,n ( j) = z(l) h m,n ( j − ln). (3.11.17)
l=0

On the basis of (3.11.17) and (3.11.16) we conclude that the limit signal tm,n ( j)
is an interpolating trigonometric polynomial defined on a set of integer numbers.
Case of m = 2μ. We have m/2 + 1 = μ + 1 and


⎨0n for k ∈ μ + 1 : N − μ − 1,

V∗ (k) = for k = μ and k = N − μ,

⎪ 2
⎩n for the other k ∈ 0 : N − 1.

We introduce a signal h m,n = F N−1 (V∗ ). By a definition of discrete Fourier trans-


form, the following formula is true:
 μ−1 N −1 
1  kj n μj (N −μ) j
 kj
h m,n ( j) = n ωN + ωN + ωN + n ωN
N k=0 2 k=N −μ+1
 μ−1
 
1 kj
= cos(π j/n) + ωN . (3.11.18)
m k=−μ+1

We will show that the representation (3.11.17) still holds for the limit signal tm,n ,
however, unlike with the case of m = 2μ − 1, the kernel h m,n has a form (3.11.18).
This statement is equivalent to the following: for signal (3.11.17) there holds
F N (tm,n ) = X ∗ . Let us verify this equality.

We write
N −1  m−1
  
  −k( j−ln)−kln
F N (tm,n ) (k) = z(l) h m,n ( j − ln) ω N
j=0 l=0


m−1 N −1
 −k j
= z(l) ωm−kl h m,n ( j) ω N = Z (k) V∗ (k) = X ∗ (k).
l=0 j=0

This is exactly what had to be ascertained.


Note that for m = 2μ, by virtue of Theorem 2.4.3, the signal of a form (3.11.17)
with the kernel (3.11.18) satisfies to the interpolation conditions

tm,n (ln) = z(l), l ∈ 0 : m − 1.

Let us summarize.

Lemma 3.11.3 The limit signal tm,n ( j) is an interpolating trigonometric polynomial


defined on the set of integer numbers. The kernel h m,n in the representation (3.11.17)
of this signal has a form (3.11.15) when m = 2μ − 1 and a form (3.11.18) when
m = 2μ.

Section 2.4 of Chap. 2 denotes compact representations for the kernels h m,n ; namely,
(2.4.3) for an odd m and (2.4.7) for an even m.

3.12 Second Limit Theorem

3.12.1 Now we turn to examination of limit behavior of the interpolating spline


Sr,n ( j) in a case of n → ∞ (and fixed r and m). For this end we present Sr,n ( j) in
the following form:

S_{r,n}(j) = \sum_{p=0}^{m-1} c(p)\, \tilde{Q}_{r,n}(j - pn),   (3.12.1)

where \tilde{Q}_{r,n} is the normalized B-spline defined by the formula

\tilde{Q}_{r,n}(j) = \frac{1}{n^{2r-1}}\, Q_{r,n}(j),   j ∈ Z.

By virtue of Lemma 3.3.2,

\sum_{p=0}^{m-1} \tilde{Q}_{r,n}(j - pn) \equiv 1.

Taking into account the non-negativity of Q_{r,n}(j) we conclude that the values of \tilde{Q}_{r,n}(j) belong to the segment [0, 1].
Let us find out how the normalized B-spline \tilde{Q}_{r,n}(j) behaves when n grows.

3.12.2 We introduce a continuous periodic B-spline of the first order. We define it


as an m-periodic function of a real argument that is determined on the main period
[0, m] by the following formula:


⎨1 − x for x ∈ [0, 1],
B1 (x) = 0 for x ∈ [1, m − 1],


x − m + 1 for x ∈ [m − 1, m].

Periodic B-splines of higher orders are defined with the aid of convolution:
& m
Bν (x) = Bν−1 (t) B1 (x − t) dt, ν = 2, 3, . . .
0

It is clear that Bν (x) ≥ 0 for any natural ν and real x.

Lemma 3.12.1 The following identity holds for all natural ν:


m−1
Bν (x − l) ≡ 1. (3.12.2)
l=0

Proof We will make use of an induction on ν.


Denote the left side of the identity (3.12.2) by L ν (x). When ν = 1, the function
L 1 (x) is a continuous m-periodic polygonal path that is fully identified by its values
at the nodes x ∈ {0, 1, . . . , m}. By a definition,

1 for k = 0 and k = m,
B1 (k) =
0 for k ∈ 1 : m − 1,

and for l ∈ 1 : m − 1

1 for k = l,
B1 (k − l) =
0 for the others k ∈ 0 : m.

Hence it follows that L 1 (k) = 1 for all k ∈ 0 : m. This guarantees validity of the
identity (3.12.2) for ν = 1.
We perform an induction step from ν to ν + 1. We have


m−1 & m
m−1
L ν+1 (x) = Bν+1 (x − l) = Bν (t) B1 (x − l − t) dt.
l=0 l=0 0

For any m-periodic function f (t) that is integrable on the main period [0, m], the
following formula holds:
& m & m
f (t − ξ ) dt = f (t) dt ∀ξ ∈ R.
0 0

Using this formula and the induction hypothesis we gain

& m
m−1 & m m−1
 
L ν+1 (x) = Bν (t − l) B1 (x − t) dt = B1 (x − t) Bν (t − l) dt
l=0 0 0 l=0
& m & m
= B1 (x − t) dt = B1 (−t) dt = 1.
0 0

The lemma is proved. 




As it was noted, Bν (x) ≥ 0 for all x ∈ R. Lemma 3.12.1 yields Bν (x) ≤ 1. Thus,
values of a B-spline Bν (x) belong to the segment [0, 1] for all natural ν and all real x.

Lemma 3.12.2 A periodic B-spline Bν (x) is a (2ν − 2) times continuously differ-


entiable function of a real variable Bν ∈ C 2ν−2 (R) . On each segment [k, k + 1]
for an integer k, the spline Bν (x) coincides with some algebraic polynomial of order
not higher than 2ν − 1.

Proof When ν = 1, the stated properties of the B-spline B1 (x) hold. Let us perform
an induction step from ν to ν + 1.
We have
& m & x+m/2
Bν+1 (x) = Bν (t) B1 (x − t) dt = Bν (t) B1 (x − t) dt.
0 x−m/2

By a definition of a B-spline of the first order,




⎨1 − (x − t), if (x − t) ∈ [0, 1],
B1 (x − t) = 1 + (x − t), if (x − t) ∈ [−1, 0],


0, if 1 ≤ |x − t| ≤ m/2.

Therefore,
& x & x+1
Bν+1 (x) = Bν (t) (1 − x + t) dt + Bν (t) (1 + x − t) dt.
x−1 x

Using the definition of a derivative we gain


& x & x+1

Bν+1 (x) =− Bν (t) dt + Bν (t) dt. (3.12.3)
x−1 x

Yet another differentiation yields



Bν+1 (x) = Bν (x − 1) + Bν (x + 1) − 2Bν (x). (3.12.4)

By the induction hypothesis, Bν ∈ C 2ν−2 (R). It follows from (3.12.3) and (3.12.4)
that Bν+1 ∈ C 2ν (R).
If x ∈ [k, k + 1], where k is an integer number, then (x − 1) ∈ [k − 1, k] and (x +

1) ∈ [k + 1, k + 2]. According to (3.12.4) and the induction hypothesis, Bν+1 (x) on
the segment [k, k + 1] coincides with some algebraic polynomial of order not higher
than 2ν − 1. Hence Bν+1 (x) on this segment is a polynomial of order not higher than
2ν + 1.
The lemma is proved. 

Lemma 3.12.3 For any natural ν and all real x, y there holds

|Bν (x) − Bν (y)| ≤ |x − y|.

Proof For ν = 1 the inequality is evident. Replacing the index ν + 1 by ν in for-


mula (3.12.3) we come to the fact that the derivative Bν (x) equals to a difference
of two integrals taken over the segments of unit length, whereby the values of the
integrand Bν−1 (t) for ν ≥ 2 fall into the segment [0, 1]. It is clear that |Bν (x)| ≤ 1.
This guarantees validity of the required inequality. 


3.12.3 Let us find out how a continuous periodic B-spline Bν (x) and a discrete
%r,n ( j) are related.
normalized B-spline Q
Lemma 3.12.4 For any given order ν there exists a non-negative Aν such that
  
% j  Aν
 Q ν,n ( j) − Bν ≤ for all j ∈ Z and n ≥ 2. (3.12.5)
n n

Proof For ν = 1 we have Q %1,n ( j) = B1 ( j ), so we can take A1 = 0.


n
Let the inequality (3.12.5) hold for some ν ≥ 1. We will verify that it holds for
the next ν too. We write
N −1 N −1  
% 1% % 1% k
Q ν+1,n ( j) = Q ν,n ( j − k) Q 1,n (k) = Q ν,n ( j − k) B1
n k=0 n k=0 n
N −1 &  
1  k+1 % k
= Q ν,n ( j − k) B1 dt.
n k=0 k n

Further,
& m & m
Bν+1 (x) = Bν (t) B1 (x − t) dt = Bν (t + x) B1 (−t) dt
&0 m 0
& m
= Bν x − (m + t) B1 (m − t) dt = Bν (x − t) B1 (t) dt,
0 0

therefore
  & m   &    
j j 1 N j −t t
Bν+1 = Bν − t B1 (t) dt = Bν B1 dt
n 0 n n 0 n n
N −1 &    
1  k+1 j −t t
= Bν B1 dt.
n k=0 k n n

We have
   N −1 &      
% j  1  k+1  % k j −t t 
 Q ν+1,n ( j) − Bν+1 ≤  Q ν,n ( j − k) B1 − Bν B1  dt
n n k n n n
k=0

N −1 &            
1  k+1  % j −t k j −t k t 
=  Q ν,n ( j − k) − Bν B1 + Bν B1 − B1  dt.
n k n n n n n
k=0

 
We know that B1 nk and Bν j−tn
do not exceed unity in modulus. Lemma 3.12.3
yields
 k    
t   k − t  1

B1 − B1 ≤ ≤ for t ∈ [k, k + 1].
n n n n

Finally, by virtue of the induction hypothesis, for the same t we have


        
% j − t  
% j − k   j −k j − t 
 Q ν,n ( j − k) − Bν  ≤ Q ν,n ( j − k) − Bν  + Bν − Bν 
n n n n

Aν  t − k   Aν + 1
≤ + ≤ .
n n n

We come to an inequality
 j  Aν + 2
%
 Q ν+1,n ( j − k) − Bν+1 ( ) ≤ m .
n n

To finish the proof of the lemma it is remaining to put Aν+1 = m(Aν + 2). 


3.12.4 Recall that the spline Sr,n of a form (3.12.1) satisfies to interpolation condi-
tions
Sr,n (ln) = z(l), l ∈ 0 : m − 1. (3.12.6)

Hence, just like in Sect. 3.4, we can obtain an expression for Cn = Fm (cn ). Let us
do it. Denote Z = Fm (z),

gn (l) = Q r (ln), % %r (ln),


gn (l) = Q

%n = Fm (%
G n = Fm (gn ), G gn ).

By a definition of a normalized B-spline, there hold

1 %n = 1 G n .
gn (l) =
% gn (l), G
n 2r −1 n 2r −1
Equation (3.12.6) in a spectral domain takes a form (see Sect. 3.4)

%n = Z .
Cn G

%n .
Therefore, Cn = Z /G
 
Lemma 3.12.5 A sequence of spectra G % with com-
%n converges to a spectrum G
ponents

m−1
%
G(k) = Br (l) ωm−kl . (3.12.7)
l=0

%
Furthermore, G(k) > 0 for all k ∈ Z.
Proof According to Lemma 3.12.4 we have

lim % %r (ln) = Br (l).


gn (l) = lim Q
n→∞ n→∞

As a consequence,


m−1 
m−1
%n (k) = lim
lim G gn (l) ωm−kl =
% %
Br (l) ωm−kl =: G(k).
n→∞ n→∞
l=0 l=0

% are positive.
Let us verify that all components of the limit spectrum G
It follows from (3.12.7) and (3.12.2) that


m−1 
m−1
%
G(0) = Br (l) = Br (m − 1 − l) = 1.
l=0 l=0

For k ∈ 1 : m − 1, on the basis of (3.2.7) and (3.2.8) we write

 2r 
n−1 
π(qm + k) −2r
%n (k) = sin π k
G n sin .
m q=0
mn

Dropping all summands but the one corresponding to q = 0 we come to an inequality


 2r  π k −2r
%n (k) ≥ sin π k
G n sin .
m mn
Since sin x < x for x > 0, we have

 2r  π k −2r
%n (k) ≥ sin π k
G .
m m
%
Passing to the limit as n → ∞ we gain G(k) > 0, k ∈ 1 : m − 1.
The lemma is proved. 


3.12.5 According to Lemma 3.12.5, coefficients cn ( p) of an interpolating spline


Sr,n ( j) of a form (3.12.1) converge as n → ∞. Indeed, the inversion formula yields

1  1  Z (k) kp
m−1 m−1
cn ( p) = Cn (k) ωmkp = ω .
m k=0 %n (k) m
m k=0 G

Passing to the limit we gain

1  Z (k) kp
m−1
lim cn ( p) = ωm =: c∗ ( p), (3.12.8)
n→∞ %
m k=0 G(k)

% is defined by formula (3.12.7).


where the denominator G(k)
We introduce an m-periodic spline


m−1
Sr (x) = c∗ ( p) Br (x − p).
p=0

It follows from Lemma 3.12.2 that Sr ∈ C 2r −2 (R).

Second Limit Theorem For all real x the following limit relation holds:

lim Sr,n (nx) = Sr (x). (3.12.9)


n→∞

Proof Fix an arbitrary x ∈ R. We have

        
 Sr,n (nx) − Sr (x) ≤ Sr nx − Sr (x) + Sr,n (nx) − Sr nx .
n n
(3.12.10)
The first summand in the right side of (3.12.10) vanishes as n → ∞ by virtue of
continuity of the spline Sr and by inequalities

1 nx
− ≤ − x ≤ 0.
n n
Let us make an estimate of the second summand. We write

  nx   m−1   nx 


   % 
Sr,n (nx) − Sr =
  cn ( p) Q r,n nx − pn − c∗ ( p) Br − p 
n n
p=0

 
m−1      


cn ( p) Q%r,n nx − pn − Br nx − p  +  cn ( p) − c∗ ( p) Br nx − p  .
n n
p=0

Lemma 3.12.4 yields


  nx 
 %  Ar
 Q r,n nx − pn − Br −p ≤ .
n n

By virtue of (3.12.8), cn ( p) → c∗( p) as n → ∞ for all p ∈ 0 : m − 1. Hereto we


should add that all the sequences cn ( p) are bounded and Br nx n
− p does not
exceed unity in modulus. These facts guarantee that the second summand in the right
side of (3.12.10) vanishes as n → ∞. Therefore, the limit relation (3.12.9) holds.
The theorem is proved. 


It follows from (3.12.6) and (3.12.9) that

Sr (l) = z(l), l ∈ 0 : m − 1.

In other words, the limit spline Sr satisfies to the same interpolation conditions as all
discrete splines Sr,n .

Exercises

3.1 Prove that discrete Bernoulli functions are real.

3.2 Prove that the discrete Bernoulli function of the first order b1( j) can be
represented as

b1( j) = (1/N) ( (N + 1)/2 − j )   for j ∈ 1 : N.

3.3 Prove that the discrete Bernoulli function of the second order b2( j) can be
represented as

b2( j) = −(N² − 1)/(12N) + ( j − 1)(N − j + 1)/(2N)   for j ∈ 1 : N.

3.4 Prove that for n = 2 there holds

Tr(l) = 2^{2r−1} ( cos^{2r}(πl/N) + sin^{2r}(πl/N) ),   l ∈ 0 : m − 1.

3.5 Prove that for p, p′ ∈ 0 : m − 1 there holds

⟨Qr(· − pn), Qr(· − p′n)⟩ = Q2r( (p − p′)n ).

3.6 Prove that

Δ^{2r} Qr( j) = Σ_{l=−r}^{r} (−1)^{r−l} \binom{2r}{r−l} δN( j + r − ln).

3.7 Let 2r(n − 1) ≤ N − 2. Prove that the B-spline Qr( j) has the following properties:

• Qr( j) > 0 for j ∈ 0 : r(n − 1) and j ∈ N − r(n − 1) : N − 1,
• Qr( j) = 0 for j ∈ r(n − 1) + 1 : N − r(n − 1) − 1,
• Qr(r(n − 1)) = 1.

(The statement of the exercise guarantees that the set of arguments where Qr( j) = 0
holds is not empty.)

3.8 Let N = mn, m ≥ 2. We put

Q_{1/2}( j) = (1/N) ( n + Σ_{k=1}^{N−1} ( sin(πk/m) / sin(πk/N) ) ωN^{kj} ).

Prove that if n is odd then there holds

Q_{1/2}( j) = 1 for j ∈ 0 : (n − 1)/2 and j ∈ N − (n − 1)/2 : N − 1,
Q_{1/2}( j) = 0 for the other j ∈ 0 : N − 1.

3.9 Prove that the signal Q̃_{1/2}( j) = Q_{1/2}( j) − 1/m is pure imaginary if n is even.

3.10 Consider a spline S( j) of the form (3.3.1). Let its coefficients c(p) after
m-periodic continuation form an even signal. Prove that the spline S( j) is even
as well.

3.11 We take a signal x ∈ C_N and consider the extremal problem

‖Δ^r (x − S)‖² → min,

where the minimum is taken among all S ∈ S_r^m. Prove that a unique (up to an additive
constant) solution of this problem is the interpolation spline S∗ satisfying the
conditions S∗(ln) = x(ln), l ∈ 0 : m − 1.

3.12 Prove that the smoothing spline Sα from par. 3.5.3 is real-valued if the initial
data z(l), l ∈ 0 : m − 1, are real.

3.13 Prove that the orthogonal spline μk ( j) is even with respect to j.



3.14 Formula (3.8.3) defines μk( j) for all k ∈ Z. Prove that, with j fixed, the
sequence μk( j), which is m-periodic with respect to k, is even.

3.15 Construct an orthogonal basis in Srm with m = n = 2 and r = 1.

3.16 Consider the expansion (3.8.4) of a spline S over the orthogonal basis. Let the
m-periodically continued coefficients ξ(k) form an even signal. Prove that in this
case S( j) takes only real values.

3.17 Under the conditions of the previous exercise, let the signal ξ composed of
the coefficients of the expansion (3.8.4) be real and even. Prove that this guarantees
reality and evenness of the spline S( j).

3.18 Prove that a self-dual spline ϕr ( j) defined by formula (3.9.7) is real and even.

3.19 Prove that the spline Rr ( j) dual to B-spline Q r ( j) is real and even.

3.20 Prove that

Σ_{q=0}^{m−1} Rr( j − qn) ≡ n^{−2r}.

3.21 Prove that the numbers

T_r^ν(k) = (1/n_ν) Σ_{q=0}^{n_ν−1} y_ν(q m_ν + k)

(see par. 3.10.1) satisfy the recurrent relation

T_r^{ν+1}(k) = (1/2) ( c_ν(k) T_r^ν(k) + c_ν(m_{ν+1} + k) T_r^ν(m_{ν+1} + k) ).

3.22 Prove that

c_ν(l) = Σ_{p=−r}^{r} \binom{2r}{r−p} ω_{m_ν}^{−lp}.

3.23 Prove that the B-splines

Q_r^ν( j) = (1/N) Σ_{l=0}^{N−1} y_ν(l) ωN^{lj}

satisfy the recurrent relation

Q_r^{ν+1}( j) = Σ_{p=−r}^{r} \binom{2r}{r−p} Q_r^ν( j − p n_ν).

3.24 Prove that the m_ν-periodic (in k) sequence {a_ν(k)} defined by formula (3.10.7)
is even.

3.25 Prove that the m_{ν+1}-periodic (in k) sequence {w_k^{ν+1}( j)} of the form (3.10.3)
with the coefficients (3.10.7) is even.

3.26 Let w_k^{ν+1} be the splines introduced in par. 3.10.2 and k ∈ 1 : m_{ν+1} − 1,
k ≠ m_{ν+1}/2. Prove that there holds

Σ_{j=0}^{N−1} ( w_k^{ν+1}( j) )² = 0.

3.27 Prove that the spline w_{m_{ν+2}}^{ν+1}( j) is real-valued.

3.28 Let a spline ϕ belong to a wavelet subspace W_r^{m_ν}. Prove that the system of
shifts {ϕ(· − l n_ν)}_{l=0}^{m_ν−1} forms a basis in W_r^{m_ν} if and only if each coefficient
in the expansion of ϕ over the basis {w_k^ν}_{k=0}^{m_ν−1} is nonzero.
 m ν −1
3.29 Spline wavelets ϕ, ψ ∈ W_r^{m_ν} are called dual if their shifts {ϕ(· − l n_ν)}_{l=0}^{m_ν−1}
and {ψ(· − l n_ν)}_{l=0}^{m_ν−1} are biorthogonal. Prove that ϕ and ψ are dual if and only if
their coefficients β_ν(k), γ_ν(k) in the expansions over the basis {w_k^ν}_{k=0}^{m_ν−1} satisfy
the condition

β_ν(k) γ̄_ν(k) = ( m_ν ‖w_k^ν‖² )^{−1},   k = 0, 1, . . . , m_ν − 1.

3.30 Let ν ∈ 0 : t − 1. By analogy with (3.8.11) we introduce a B-wavelet

P_r^{ν+1}( j) = Σ_{k=0}^{m_{ν+1}−1} w_k^{ν+1}( j).

Prove that

P_r^{ν+1}( j) = Σ_{p=0}^{m_ν−1} d_ν(p) Q_r^ν( j − p n_ν),

where d_ν = F_{m_ν}^{−1}(a_ν) and the sequence {a_ν(k)} is defined by formula (3.10.7).

3.31 Prove that the coefficients d_ν(p) from the previous exercise can be represented
as

d_ν(p) = (−1)^{p+1} Σ_{l=−r}^{r} \binom{2r}{r−l} Q_{2r}^ν( (p + l + 1) n_ν ).

3.32 Calculate F N (Prν+1 ).



3.33 Prove that

⟨P_r^{ν+1}(· − l n_{ν+1}), P_r^{ν+1}(· − l′ n_{ν+1})⟩ = ζ_r^{ν+1}(l − l′),

where ζ_r^{ν+1} is the DFT of order m_{ν+1} of the sequence {‖w_k^{ν+1}‖²}.

3.34 Prove that the spline Prν+1 ( j − n ν ) is even with respect to j.

3.35 Prove that a continuous periodic B-spline Bν (x) introduced in par. 3.12.2 is an
even function.

Comments

Discrete periodic Bernoulli functions are introduced in the paper [4], where the theorem
about expansion of an arbitrary signal over the shifts of Bernoulli functions is also proved.
This theorem plays an important role in discrete harmonic analysis.
Discrete periodic splines and their numerical applications are the central point of
the paper [29]. Piecewise polynomial nature of B-splines is investigated in [28].
Defining a spline as a linear combination of the shifts of a B-spline is a standard
procedure. Less standard is an equivalent definition via a linear combination of the
shifts of a Bernoulli function (Theorem 3.3.1). The latter definition is essentially used
in devising a fundamental relation (3.3.10) which, in turn, is a basis of establishing
the minimal norm property (Theorem 3.4.2). In continuous context the minimal norm
property is peculiar to natural splines [27].
A solution of the discrete spline interpolation problem (along with the minimal
norm property) is obtained in [29]. We note that discrete spline interpolation is used
in construction of lifting schemes of wavelet decompositions of signals [33, 34,
53]. Hermite spline interpolation and its applications to computer aided geometric
design are considered in [6]. Common approaches to wavelet processing of signals
are presented in the monograph [18].
An analysis of the problem of discrete periodic data smoothing is performed within
the framework of a common smoothing theory [42]. Along with that, to implement
the common approach we utilize the techniques of discrete harmonic analysis to
the full extent. We hope that a reader will experience an aesthetic enjoyment while
examining this matter.
Formula (3.8.3) defining an orthogonal basis in a space of signals is the beginning
of discrete spline harmonic analysis per se. Many problems are dealt with in terms of
coefficients of an expansion over the orthogonal basis. In particular, it is these terms
that are used to state a criterion of duality of two splines (Theorem 3.9.2). In practical
terms, the spline dual to a B-spline helps solving a problem of spline processing of
discrete periodic data with the least squares method.
Orthogonal splines are used to obtain a wavelet decomposition of the space of
splines.

Sections 3.8–3.10 are written on the basis of the paper [13]. In continuous context
a question of orthogonal periodic splines and their applications was considered in [40,
43, 52].
Limit properties of discrete periodic splines are investigated in the papers [19,
20]. The papers [7, 21] are devoted to application of discrete periodic splines to the
problems of geometric modeling.
Some of the additional exercises are of interest on their own. For example, the
problem 3.23 presents a so-called calibration relation for B-splines. A property of an
interpolating spline noted in the problem 3.11 is referred to as the best approximation
property. Problems 3.30–3.34 introduce a notion of B-wavelet and examine some of
its properties.
Chapter 4
Fast Algorithms

4.1 Goertzel Algorithm

Let us consider the question of calculating a single component of a spectrum X =
F_N(x). We fix k ∈ 0 : N − 1 and write down

X(k) = Σ_{j=0}^{N−1} x( j) ωN^{−kj}

     = x(0) + Σ_{j=1}^{N−1} x( j) cos( (2πk/N) j ) − i Σ_{j=1}^{N−1} x( j) sin( (2πk/N) j ).

Denote α = 2πk/N, c_j = cos(αj), s_j = sin(αj), and

A(k) = Σ_{j=1}^{N−1} x( j) c_j,    B(k) = Σ_{j=1}^{N−1} x( j) s_j.

Then
X (k) = x(0) + A(k) − i B(k). (4.1.1)

Note that

c_j + c_{j−2} = cos(αj) + cos(α( j − 2)) = 2 cos(α( j − 1)) cos(α) = 2 cos(α) c_{j−1},
s_j + s_{j−2} = sin(αj) + sin(α( j − 2)) = 2 sin(α( j − 1)) cos(α) = 2 cos(α) s_{j−1}.

This induces the recurrent relations that serve as a basis for further transforms:


c_j = 2 cos(α) c_{j−1} − c_{j−2},   j = 2, 3, . . . ,
c_0 = 1,  c_1 = cos(α);                                                  (4.1.2)

s_j = 2 cos(α) s_{j−1} − s_{j−2},   j = 2, 3, . . . ,
s_0 = 0,  s_1 = sin(α).                                                  (4.1.3)

In order to calculate A(k) and B(k) we construct a recurrent sequence {g_j} based
on the conditions

x( j) = g_j − 2 cos(α) g_{j+1} + g_{j+2},   j = N − 1, N − 2, . . . , 1,
g_{N+1} = g_N = 0.                                                       (4.1.4)

Such a construction is possible. Indeed, for j = N − 1 we gain g_{N−1} = x(N − 1).
The values g_{N−2}, . . . , g_1 are determined sequentially from the formula

g_j = x( j) + 2 cos(α) g_{j+1} − g_{j+2},   j = N − 2, . . . , 1,
g_N = 0,  g_{N−1} = x(N − 1).                                            (4.1.5)

According to (4.1.4) and (4.1.2) we have

A(k) = Σ_{j=1}^{N−1} x( j) c_j = Σ_{j=1}^{N−1} ( g_j − 2 cos(α) g_{j+1} + g_{j+2} ) c_j

     = Σ_{j=1}^{N−1} g_j c_j − 2 cos(α) Σ_{j=2}^{N} g_j c_{j−1} + Σ_{j=3}^{N+1} g_j c_{j−2}

     = g_1 c_1 + g_2 c_2 − 2 cos(α) g_2 c_1 + Σ_{j=3}^{N−1} g_j ( c_j − 2 cos(α) c_{j−1} + c_{j−2} )

     = g_1 c_1 + g_2 ( c_2 − 2 cos(α) c_1 ) = g_1 c_1 − g_2 c_0 = g_1 cos(α) − g_2.

Similarly, with a reference to (4.1.4) and (4.1.3), we convert the expression for B(k):

B(k) = Σ_{j=1}^{N−1} x( j) s_j = Σ_{j=1}^{N−1} ( g_j − 2 cos(α) g_{j+1} + g_{j+2} ) s_j

     = g_1 s_1 + g_2 s_2 − 2 cos(α) g_2 s_1 = g_1 s_1 + g_2 ( s_2 − 2 cos(α) s_1 )

     = g_1 s_1 = g_1 sin(α).

Now formula (4.1.1) takes the form

X(k) = x(0) + g_1 cos(α) − g_2 − i g_1 sin(α) = x(0) − g_2 + g_1 ωN^{−k}.        (4.1.6)

Calculation of X(k) with a fixed k using formula (4.1.6) is referred to as the Goertzel
algorithm.
The key element of the Goertzel algorithm is scheme (4.1.5) that describes construc-
tion of the sequence {g_j}. In fact, we do not need the whole sequence {g_j} but only
two of its elements, g_2 and g_1. This goal is achieved by the following group of
operators:

Program Code
g := x(N − 1); g1 := 0; a := 2 ∗ cos(2 ∗ π ∗ k / N );
for j := N − 2 downto 1 do
begin g2 := g1; g1 := g;
g := x( j) + a ∗ g1 − g2 end

As the output we obtain g = g_1 and g1 = g_2. The cycle uses N − 2 multiplications
by the real number a and 2(N − 2) additions.
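The scheme is easy to try out in a few lines of code. The following sketch is ours, not the book's (the book's program code above is Algol-style pseudocode); it is written in Python, and the function name goertzel and the test signal are illustrative assumptions. It implements (4.1.5) and (4.1.6) and checks the result against the direct DFT sum.

import cmath
import math

def goertzel(x, k):
    # a sketch of the Goertzel recursion (4.1.5)-(4.1.6) for a single sample X(k)
    N = len(x)
    alpha = 2 * math.pi * k / N
    a = 2 * math.cos(alpha)
    g, g1 = x[N - 1], 0.0                 # g = g_{N-1}, g1 = g_N = 0
    for j in range(N - 2, 0, -1):         # j = N-2, ..., 1
        g, g1 = x[j] + a * g - g1, g
    # now g = g_1 and g1 = g_2; formula (4.1.6):
    return x[0] - g1 + g * cmath.exp(-2j * math.pi * k / N)

# quick check against the direct DFT sum
x = [1.0, 2.0, -1.0, 3.0]
N = len(x)
for k in range(N):
    direct = sum(x[j] * cmath.exp(-2j * math.pi * k * j / N) for j in range(N))
    assert abs(goertzel(x, k) - direct) < 1e-12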

4.2 First Sequence of Orthogonal Bases

4.2.1 With the aid of the Goertzel algorithm it is possible to calculate the whole spec-
trum of a signal. However, this is not the best way. More effective methods exist that
are called fast Fourier transforms (FFTs). There are several FFT algorithms, and all
of them depend on arithmetic properties of the length of the period N. We will focus
on the case of N = 2^s.
Our approach to FFT is related to constructing a recurrent sequence of orthogonal
bases in a space of signals. This matter is considered in the present section. The next
section is devoted to a description of FFT.
4.2.2 In a space C_N with N = 2^s we will construct a recurrent sequence of orthog-
onal bases f_0, f_1, . . . , f_s. Here f_ν = { f_ν(k; j)}_{k=0}^{N−1}. A signal f_ν(k; j) as an
element of the space C_N will be denoted by f_ν(k). We put N_ν = N/2^ν and Δ_ν = 2^{ν−1}.
A sequence f_ν = { f_ν(k)}_{k=0}^{N−1}, ν = 0, 1, . . . , s, is defined as follows:

f_0(k) = δN(· − k),   k ∈ 0 : N − 1;

f_ν(l + pΔ_{ν+1}) = f_{ν−1}(l + 2pΔ_ν) + ω_{Δ_{ν+1}}^{l} f_{ν−1}(l + (2p + 1)Δ_ν),
                                                                         (4.2.1)
f_ν(l + Δ_ν + pΔ_{ν+1}) = f_{ν−1}(l + 2pΔ_ν) − ω_{Δ_{ν+1}}^{l} f_{ν−1}(l + (2p + 1)Δ_ν),

p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_ν − 1,  ν = 1, . . . , s.

An index k at f_ν(k) is represented in the form k = pΔ_{ν+1} + r, where p ∈ 0 : N_ν − 1
and r ∈ 0 : Δ_{ν+1} − 1. In turn, r = σΔ_ν + l, where σ ∈ 0 : 1 and l ∈ 0 : Δ_ν − 1.
Thus, k = pΔ_{ν+1} + σΔ_ν + l. Note that ω_{Δ_{ν+1}}^{Δ_ν} = ω_2 = −1. This allows writing
down the recurrent relations (4.2.1) in a single line:

f_ν(l + σΔ_ν + pΔ_{ν+1}) = f_{ν−1}(l + 2pΔ_ν) + ω_{Δ_{ν+1}}^{l+σΔ_ν} f_{ν−1}(l + (2p + 1)Δ_ν),
                                                                         (4.2.2)
p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_ν − 1,  σ ∈ 0 : 1,  ν = 1, . . . , s.

In particular, for ν = 1 we gain

f_1(σ + 2p) = f_0(2p) + ω_2^σ f_0(2p + 1),                               (4.2.3)

p ∈ 0 : N_1 − 1,  σ ∈ 0 : 1.

What does the basis f_s look like? Answering this question requires some additional
preparation.

4.2.3 In Sect. 1.4 we introduced a permutation rev_ν. This permutation is defined on
the set {0, 1, . . . , 2^ν − 1}; it maps a number j = ( j_{ν−1}, j_{ν−2}, . . . , j_0)_2 to the
number rev_ν( j) = ( j_0, j_1, . . . , j_{ν−1})_2 whose binary code is the reversed binary code
of the number j. By definition, rev_0(0) = 0.
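A small code sketch may help here. The following Python function (our illustration; the name rev is an assumption, and Python is not used in the book) computes rev_ν( j) by reversing the ν-bit binary code, and spot-checks the two identities stated in Lemma 4.2.1 below.

def rev(j, nu):
    # reverse the nu-bit binary code of j, i.e. rev_nu(j)
    r = 0
    for _ in range(nu):
        r = (r << 1) | (j & 1)
        j >>= 1
    return r

# e.g. rev(6, 3) == 3, since 110 reversed is 011
nu = 4
for q in range(1 << (nu - 1)):            # q in 0 : Δ_ν − 1
    assert 2 * rev(q, nu - 1) == rev(q, nu)
    assert 2 * rev(q, nu - 1) + 1 == rev((1 << (nu - 1)) + q, nu)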

The following lemma helps to clear up the explicit form of signals f ν (k).

Lemma 4.2.1 Given q ∈ 0 : Δ_ν − 1, valid are the equalities

2 rev_{ν−1}(q) = rev_ν(q),
2 rev_{ν−1}(q) + 1 = rev_ν(Δ_ν + q).

Proof When ν = 1, the assertion is trivial. Let ν ≥ 2. Then for q ∈ 0 : ν − 1 we


have
q = qν−2 2ν−2 + · · · + q0 , revν−1 (q) = q0 2ν−2 + · · · + qν−2 ,

2 revν−1 (q) = q0 2ν−1 + · · · + qν−2 2 + 0 = revν (q),

2 revν−1 (q) + 1 = q0 2ν−1 + · · · + qν−2 2 + 1 = revν (2ν−1 + q) = revν (ν + q).

The lemma is proved. 




4.2.4 Now we return to the recurrent relations (4.2.1) and (4.2.2).



Theorem 4.2.1 Valid is the formula

f_ν(l + pΔ_{ν+1}) = Σ_{q=0}^{Δ_{ν+1}−1} ω_{Δ_{ν+1}}^{l rev_ν(q)} f_0(q + pΔ_{ν+1}),          (4.2.4)

p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_{ν+1} − 1,  ν = 1, . . . , s.

Proof When ν = 1, formula (4.2.4) coincides with (4.2.3) if we replace σ by l there.


We perform an induction step from ν − 1 to ν.
Take l ∈ 0 : ν+1 − 1 and represent it in a form l = σ ν + l  , where l  ∈ 0 :
ν − 1 and σ ∈ 0 : 1. According to (4.2.2) and to the inductive hypothesis, for
p ∈ 0 : Nν − 1 we have

f ν (l + pν+1 ) = f ν (l  + σ ν + pν+1 )
l  +σ ν  
= f ν−1 (l  + 2 pν ) + ω ν+1
f ν−1 l  + (2 p + 1)ν

 ν −1
l  revν−1 (q)
= ων f 0 (q + 2 pν )
q=0

 ν −1
l  revν−1 (q)  
+ ω
l
ν+1
ων f 0 q + (2 p + 1)ν .
q=0

Let us examine the coefficients. Lemma 4.2.1 yields

l  revν−1 (q) (l  +σ ν )revν−1 (q) l(2rev (q)) l rev (q)


ων = ων = ων+1 ν−1 = ων+1ν ,

l  revν−1 (q) l(2rev (q)+1) l rev (q+ν )


ω
l
ν+1
ων = ων+1 ν−1 = ων+1ν .

Hence

 ν −1
l rev (q)
f ν (l + pν+1 ) = ων+1ν f 0 (q + pν+1 )
q=0

 ν −1
l rev (q+ν )  
+ ων+1ν f 0 (q + ν ) + pν+1
q=0
ν+1 −1
 l rev (q)
= ων+1ν f 0 (q + pν+1 ).
q=0

The theorem is proved. 




When ν = s, formula (4.2.4) takes the form

f_s(l; j) = Σ_{q=0}^{N−1} ωN^{l rev_s(q)} δN( j − q)
          = ωN^{l rev_s( j)} = u_l(rev_s( j)),   l, j ∈ 0 : N − 1.               (4.2.5)

We deduced that f_s is an exponential basis with a reversed argument.


4.2.5 Consider the signals f_ν(k) for a fixed ν.

Theorem 4.2.2 For each ν ∈ 0 : s the system of signals

f_ν(0), f_ν(1), . . . , f_ν(N − 1)                                        (4.2.6)

is orthogonal and ‖ f_ν(k)‖² = 2^ν holds for all k ∈ 0 : N − 1.

Proof The assertion is known to be true for ν = 0 (the corollary to Lemma 2.1.4);
therefore, we assume that ν ∈ 1 : s. We take k, k  ∈ 0 : N − 1 and represent them
in a form k = l + pν+1 , k  = l  + p  ν+1 , where l, l  ∈ 0 : ν+1 − 1 and p, p  ∈
0 : Nν − 1. Bearing in mind formula (4.2.4), the definition of signals f 0 (k) and
Lemma 2.1.4, we write

f ν (k), f ν (k  ) = f ν (l + pν+1 ), f ν (l  + p  ν+1 )


 
ν+1 −1 ν+1 −1
 l  rev (q  ) 
l revν (q)
= ων+1 f 0 (q + pν+1 ), ων+1ν f 0 (q  + p  ν+1 )
q=0 q  =0
ν+1 −1 ν+1 −1
  l rev (q)−l  revν (q  )  
= ων+1ν δ N q − q  + ( p − p  )ν+1 .
q=0 q  =0

The argument of the unit pulse δ N does not exceed N − 1 in absolute value. When p =
p  , it is other than zero for all q, q  ∈ 0 : ν+1 − 1 because |q − q  | ≤ ν+1 − 1.
Hence f ν (k), f ν (k  ) = 0 for p = p  .
Let p = p  . Then

ν+1 −1
 (l−l  )revν (q)
f ν (k), f ν (k  ) = ων+1
q=0
ν+1 −1
 (l−l  )q 
= ων+1 = ν+1 δν+1 (l − l  ). (4.2.7)
q  =0

We used formula (2.2.1) and the fact that the mapping q → revν (q) is a permutation
of the set {0, 1, . . . , ν+1 − 1}. On the basis of (4.2.7) we conclude that the scalar
product f ν (k), f ν (k  ) is nonzero only when p = p  and l = l  , i.e. only when

k = k  . In the latter case  f ν (k)2 = ν+1 = 2ν for all k ∈ 0 : N − 1. The theorem


is proved. 


Essentially, we ascertained that for each ν ∈ 0 : s the system of signals (4.2.6)


forms an orthogonal basis in a space C N .

4.2.6 Let us show that the signals f ν (l + pν+1 ) given some fixed ν, l and p ∈ 0 :
Nν − 1 differ from f ν (l) only by a shift of an argument.

Theorem 4.2.3 Given l ∈ 0 : Δ_{ν+1} − 1, valid is the identity

f_ν(l + pΔ_{ν+1}; j) ≡ f_ν(l; j − pΔ_{ν+1}),   p ∈ 0 : N_ν − 1.               (4.2.8)

Proof Let us write down formula (4.2.4) for p = 0:

ν+1 −1
 l rev (q)
f ν (l; j) = ων+1ν δ N ( j − q).
q=0

Hence it follows that


ν+1 −1
 l rev (q)
f ν (l; j − pν+1 ) = ων+1ν δ N ( j − pν+1 − q)
q=0
ν+1 −1
 l rev (q)
= ων+1ν f 0 (q + pν+1 ; j) = f ν (l + pν+1 ; j).
q=0

The theorem is proved. 




4.3 Fast Fourier Transform

4.3.1 In the previous section we constructed s + 1 orthogonal bases f_0, f_1, . . . , f_s
in a space C_N with N = 2^s. Let us take a signal x ∈ C_N. It can be expanded over
any of these bases. Bearing in mind our final goal, we will expand the signal
x_0( j) = x(rev_s( j)), j ∈ 0 : N − 1:

x_0 = (1/2^ν) Σ_{k=0}^{N−1} x_ν(k) f_ν(k).                                 (4.3.1)

To determine the coefficients x_ν(k), we multiply both sides of (4.3.1) scalarly
by f_ν(l). According to Theorem 4.2.2 we gain ⟨x_0, f_ν(l)⟩ = x_ν(l), so that

x_ν(k) = Σ_{j=0}^{N−1} x_0( j) \overline{f_ν(k; j)} = Σ_{j=0}^{N−1} x(rev_s( j)) \overline{f_ν(k; j)}.    (4.3.2)

In particular,

x_0(k) = Σ_{j=0}^{N−1} x(rev_s( j)) δN( j − k) = x(rev_s(k)).

The recurrent relation (4.2.1) yields

x_ν(l + pΔ_{ν+1}) = ⟨x_0, f_ν(l + pΔ_{ν+1})⟩
                 = ⟨x_0, f_{ν−1}(l + 2pΔ_ν) + ω_{Δ_{ν+1}}^{l} f_{ν−1}(l + (2p + 1)Δ_ν)⟩
                 = x_{ν−1}(l + 2pΔ_ν) + ω_{Δ_{ν+1}}^{−l} x_{ν−1}(l + (2p + 1)Δ_ν).

Similarly,

x_ν(l + Δ_ν + pΔ_{ν+1}) = x_{ν−1}(l + 2pΔ_ν) − ω_{Δ_{ν+1}}^{−l} x_{ν−1}(l + (2p + 1)Δ_ν).

We come to the recurrent scheme

x_0(k) = x(rev_s(k)),   k ∈ 0 : N − 1;

x_ν(l + pΔ_{ν+1}) = x_{ν−1}(l + 2pΔ_ν) + ω_{Δ_{ν+1}}^{−l} x_{ν−1}(l + (2p + 1)Δ_ν),
                                                                         (4.3.3)
x_ν(l + Δ_ν + pΔ_{ν+1}) = x_{ν−1}(l + 2pΔ_ν) − ω_{Δ_{ν+1}}^{−l} x_{ν−1}(l + (2p + 1)Δ_ν),

p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_ν − 1,  ν = 1, . . . , s.

Along this scheme we calculate the coefficients of expansion of the signal x_0 over
all bases f_ν right up to f_s. Note that according to (4.3.2) and (4.2.5) we have

x_s(k) = Σ_{j=0}^{N−1} x(rev_s( j)) ωN^{−k rev_s( j)} = Σ_{j′=0}^{N−1} x( j′) ωN^{−kj′} = X(k).

Thus, the coefficients x_s(k) are nothing else but the spectral components of the signal
x on the main period.
Calculations with formula (4.3.3) require Σ_{ν=1}^{s} N_ν Δ_ν = (1/2) Σ_{ν=1}^{s} N_ν Δ_{ν+1} =
(1/2) sN = (1/2) N log₂N multiplications and 2 Σ_{ν=1}^{s} N_ν Δ_ν = N log₂N additions.
Scheme (4.3.3) is one of the versions of the fast Fourier transform for N = 2^s. It
is referred to as the decimation-in-time Cooley–Tukey algorithm.
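For the reader who wants to experiment, here is a minimal Python sketch of scheme (4.3.3) (our own illustration, not the book's code; the names rev and fft_dit are assumptions). It performs the initial bit-reversal permutation and then the s levels of butterflies, and checks the output against the direct DFT sum.

import cmath

def rev(j, nu):
    # reverse the nu-bit binary code of j
    r = 0
    for _ in range(nu):
        r = (r << 1) | (j & 1)
        j >>= 1
    return r

def fft_dit(x):
    # decimation-in-time FFT following scheme (4.3.3); len(x) must be a power of two
    N = len(x)
    s = N.bit_length() - 1
    a = [x[rev(k, s)] for k in range(N)]             # x_0(k) = x(rev_s(k))
    for nu in range(1, s + 1):
        D = 1 << (nu - 1)                            # Δ_ν
        D2 = 2 * D                                   # Δ_{ν+1}
        b = [0j] * N
        for p in range(N >> nu):                     # p in 0 : N_ν − 1
            for l in range(D):                       # l in 0 : Δ_ν − 1
                w = cmath.exp(-2j * cmath.pi * l / D2)   # ω_{Δ_{ν+1}}^{−l}
                u = a[l + 2 * p * D]
                v = w * a[l + (2 * p + 1) * D]
                b[l + p * D2] = u + v
                b[l + D + p * D2] = u - v
        a = b
    return a                                         # a[k] = x_s(k) = X(k)

x = [complex(j + 1) for j in range(8)]
X = fft_dit(x)
for k in range(8):
    direct = sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / 8) for j in range(8))
    assert abs(X[k] - direct) < 1e-9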

4.3.2 Formula (4.3.3) can be inverted:

x_s(k) = X(k),   k ∈ 0 : N − 1;

x_{ν−1}(l + 2pΔ_ν) = (1/2) ( x_ν(l + pΔ_{ν+1}) + x_ν(l + Δ_ν + pΔ_{ν+1}) ),
                                                                         (4.3.4)
x_{ν−1}(l + (2p + 1)Δ_ν) = (1/2) ω_{Δ_{ν+1}}^{l} ( x_ν(l + pΔ_{ν+1}) − x_ν(l + Δ_ν + pΔ_{ν+1}) ),

p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_ν − 1,  ν = s, s − 1, . . . , 1.

Along formula (4.3.4) we descend down to x_0(k) = x(rev_s(k)). Replacing k by
rev_s(k) we obtain

x(k) = x_0(rev_s(k)),   k ∈ 0 : N − 1.

Thereby we have pointed out the fast algorithm of reconstructing a signal x from its
spectrum X = F N (x) for N = 2s .

4.4 Wavelet Bases

4.4.1 We rewrite formula (4.2.2) of transition from the basis f_{ν−1} to the basis f_ν:

f_ν(l + σΔ_ν + pΔ_{ν+1}) = f_{ν−1}(l + 2pΔ_ν) + ω_{Δ_{ν+1}}^{l+σΔ_ν} f_{ν−1}(l + (2p + 1)Δ_ν),
                                                                         (4.4.1)
p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_ν − 1,  σ ∈ 0 : 1,  ν = 1, . . . , s.

Let us analyze the structure of this formula. It is convenient to assume that the
basis f_{ν−1} is divided into Δ_ν blocks; the blocks are marked with an index l. Each
block contains N_{ν−1} signals with inner indices 2p and 2p + 1 for p ∈ 0 : N_ν − 1.
According to (4.4.1) a block with an index l generates two blocks of the basis f_ν
with indices l and l + Δ_ν; herein each block contains N_ν signals with an inner
index p. The complete scheme of branching for N = 2³ is presented in Fig. 4.1. By
virtue of Theorem 4.2.2 all the bases f_0, f_1, . . . , f_s are orthogonal. According to
Theorem 4.2.3 the signals of each block of the basis f_ν differ only by shifts of the
argument; the length of a shift is a multiple of Δ_{ν+1}.

We can significantly increase the number of orthogonal bases in C N with N = 2s


if we use a vertical constituent. This is demonstrated in Fig. 4.2. The squares denote
blocks. A number inside a square shows how many signals are there in this block.
Hanging blocks are distinguished by double squares.
In all four variants of branching, unions of signals contained in hanging blocks
form orthogonal bases.

Fig. 4.1 Branching scheme for N = 2³

Fig. 4.2 Wavelet bases (four variants (a)–(d) of hanging blocks with 8, 4, 2 and 1 signals)

Indeed, orthogonality of signals of the ν-th level is known. Signals of a hanging


block of the (ν + 1)-th level are linear combinations of signals of some block of the
ν-th level, and therefore they are orthogonal to signals from another block of the ν-th
level.
An orthogonal basis made of hanging blocks of all levels ν = 1, . . . , s will be
referred to as a wavelet basis, and the collection of all such bases will be referred to
as a wavelet packet.
Below we will study in detail the wavelet basis generated by the branching
scheme (a) in Fig. 4.2.
4.4.2 When l = 0, formula (4.2.1) takes the form

f_0(k) = δN(· − k),   k ∈ 0 : N − 1;

f_ν(pΔ_{ν+1}) = f_{ν−1}(2pΔ_ν) + f_{ν−1}((2p + 1)Δ_ν),
                                                                         (4.4.2)
f_ν(Δ_ν + pΔ_{ν+1}) = f_{ν−1}(2pΔ_ν) − f_{ν−1}((2p + 1)Δ_ν),

p ∈ 0 : N_ν − 1,  ν = 1, . . . , s.

The signals { f_ν(Δ_ν + pΔ_{ν+1})} will enter a wavelet basis, while the signals
{ f_ν(pΔ_{ν+1})} will participate in further branching. The recurrent relations (4.4.2)
can be written down in a single line:

f_ν(σΔ_ν + pΔ_{ν+1}) = f_{ν−1}(2pΔ_ν) + (−1)^σ f_{ν−1}((2p + 1)Δ_ν),              (4.4.3)

p ∈ 0 : N_ν − 1,  σ ∈ 0 : 1,  ν = 1, . . . , s.

We introduce the linear spans

V_ν = lin( { f_ν(pΔ_{ν+1})}_{p=0}^{N_ν−1} ),   ν = 0, 1, . . . , s;

W_ν = lin( { f_ν(Δ_ν + pΔ_{ν+1})}_{p=0}^{N_ν−1} ),   ν = 1, . . . , s.

It is evident that V_0 = lin( {δN(· − p)}_{p=0}^{N−1} ) = C_N. On the strength of (4.4.2) we
have V_ν ⊂ V_{ν−1} and W_ν ⊂ V_{ν−1}. Since the signals

{ f_ν(pΔ_{ν+1})}_{p=0}^{N_ν−1},   { f_ν(Δ_ν + pΔ_{ν+1})}_{p=0}^{N_ν−1}

belong to Vν−1 and are pairwise orthogonal, and their total amount coincides with
the dimension of Vν−1 , we conclude that they form an orthogonal basis of Vν−1 .
Moreover, Vν−1 is an orthogonal sum of Vν and Wν , i.e.

Vν−1 = Vν ⊕ Wν , ν = 1, . . . , s. (4.4.4)

This formula corresponds to a branching step. Successively applying (4.4.4) we
come to an orthogonal decomposition of the space C_N:

C_N = V_0 = V_1 ⊕ W_1 = (V_2 ⊕ W_2) ⊕ W_1 = . . .
    = V_s ⊕ W_s ⊕ W_{s−1} ⊕ · · · ⊕ W_2 ⊕ W_1.                            (4.4.5)
Here V_s = lin( f_s(0) ). According to (4.2.5) there holds f_s(0; j) ≡ 1, so V_s is the
subspace of signals that are identically equal to a complex constant.
Subspaces W_ν are referred to as wavelet subspaces. Identity (4.2.8) yields

f_ν(Δ_ν + pΔ_{ν+1}; j) = f_ν(Δ_ν; j − pΔ_{ν+1}),

p ∈ 0 : N_ν − 1,  ν = 1, . . . , s.

This means that the basis of W_ν consists of shifts of the signal f_ν(Δ_ν; j); the shifts
are multiples of Δ_{ν+1}.

Theorem 4.4.1 Given ν ∈ 1 : s, valid is the formula

               ⎧  1   for j ∈ 0 : Δ_ν − 1,
f_ν(Δ_ν; j) =  ⎨ −1   for j ∈ Δ_ν : Δ_{ν+1} − 1,                          (4.4.6)
               ⎩  0   for j ∈ Δ_{ν+1} : N − 1.

Proof On the basis of (4.2.4) we write

ν+1 −1
  revν (q)
f ν (ν ; j) = ων+1
ν
δ N ( j − q)
q=0
 ν −1 ν+1 −1
 revν (q)
 revν (q)
= ω2 δ N ( j − q) + ω2 δ N ( j − q). (4.4.7)
q=0 q=ν

For ν = 1 we have f 1 (1 ; j) = δ N ( j) − δ N ( j − 1), which corresponds to (4.4.6).


Let ν ≥ 2. When q ∈ 0 : ν − 1, Lemma 4.2.1 yields revν (q) = 2 revν−1 (q),
rev (q)
therefore ω2 ν = (−1)2 revν−1 (q) = 1. When q ∈ ν : ν+1 − 1, it can be repre-
sented in a form q = ν + q  , where q  ∈ 0 : ν − 1. Lemma 4.2.1 now yields
rev (q) 
revν (q) = revν (ν + q  ) = 2 revν−1 (q  ) + 1, so that ω2 ν = (−1)2 revν−1 (q )+1 =
−1. Substituting the obtained expressions for the coefficients into (4.4.7) we gain

 ν −1 ν+1 −1
 
f ν (ν ; j) = δ N ( j − q) − δ N ( j − q). (4.4.8)
q=0 q=ν

This conforms to (4.4.6). The theorem is proved. 




4.5 Haar Basis. Fast Haar Transform

4.5.1 According to (4.4.5) the signals

f_s(0);   f_ν(Δ_ν + pΔ_{ν+1}),  p ∈ 0 : N_ν − 1,  ν = s, s − 1, . . . , 1          (4.5.1)

form an orthogonal basis in the space C_N with N = 2^s. It is referred to as the discrete
Haar basis related to decimation in time. Figure 4.3 depicts the Haar basis for N = 2³.

Any signal x ∈ C_N can be expanded over basis (4.5.1):

x = 2^{−s} x_s(0) f_s(0) + Σ_{ν=1}^{s} 2^{−ν} Σ_{p=0}^{N_ν−1} x_ν(Δ_ν + pΔ_{ν+1}) f_ν(Δ_ν + pΔ_{ν+1}).   (4.5.2)

Fig. 4.3 Haar basis related to decimation in time for N = 23

In order to simplify the indexing we introduce the following notations:

ϕ_0(p) = f_0(p) = δN(· − p),   p ∈ 0 : N − 1;

ϕ_ν(p + σN_ν) = f_ν(σΔ_ν + pΔ_{ν+1}),

p ∈ 0 : N_ν − 1,  σ ∈ 0 : 1,  ν = 1, . . . , s.

In particular, ϕ_s(σ) = f_s(σΔ_s) for σ ∈ 0 : 1. For s = 3 the signals (4.5.1)

f 3 (0), f 3 (4), f 2 (2), f 2 (6), f 1 (1), f 1 (3), f 1 (5), f 1 (7)

shown in Fig. 4.3 coincide with the signals

ϕ3 (0), ϕ3 (1), ϕ2 (2), ϕ2 (3), ϕ1 (4), ϕ1 (5), ϕ1 (6), ϕ1 (7).

According to (4.4.3) we have

ϕν ( p + σ Nν ) = ϕν−1 (2 p) + (−1)σ ϕν−1 (2 p + 1).

We come to the recurrent relations

ϕ_0(p) = δN(· − p),   p ∈ 0 : N − 1;

ϕ_ν(p) = ϕ_{ν−1}(2p) + ϕ_{ν−1}(2p + 1),
                                                                         (4.5.3)
ϕ_ν(p + N_ν) = ϕ_{ν−1}(2p) − ϕ_{ν−1}(2p + 1),

p ∈ 0 : N_ν − 1,  ν = 1, . . . , s.

In our new notations, Haar basis (4.5.1) is constituted of the signals

ϕ_s(0);   ϕ_ν(p + N_ν),  p ∈ 0 : N_ν − 1,  ν = s, s − 1, . . . , 1.

We put ξ_ν(k) = ⟨x, ϕ_ν(k)⟩. In particular,

ξ_0(p) = ⟨x, ϕ_0(p)⟩ = ⟨x, f_0(p)⟩ = x(p),
ξ_s(0) = ⟨x, ϕ_s(0)⟩ = ⟨x, f_s(0)⟩ = x_s(0),
ξ_ν(p + N_ν) = ⟨x, ϕ_ν(p + N_ν)⟩ = ⟨x, f_ν(Δ_ν + pΔ_{ν+1})⟩ = x_ν(Δ_ν + pΔ_{ν+1}).

On the basis of (4.5.3) we gain

ξ_0(p) = x(p),   p ∈ 0 : N − 1;

ξ_ν(p) = ξ_{ν−1}(2p) + ξ_{ν−1}(2p + 1),
                                                                         (4.5.4)
ξ_ν(p + N_ν) = ξ_{ν−1}(2p) − ξ_{ν−1}(2p + 1),

p ∈ 0 : N_ν − 1,  ν = 1, . . . , s.

Formula (4.5.2) takes the form

x = 2^{−s} ξ_s(0) ϕ_s(0) + Σ_{ν=1}^{s} 2^{−ν} Σ_{p=0}^{N_ν−1} ξ_ν(p + N_ν) ϕ_ν(p + N_ν).          (4.5.5)

Along scheme (4.5.4), for every ν we calculate N_ν coefficients ξ_ν(p + N_ν) of the
wavelet expansion (4.5.5) and N_ν coefficients ξ_ν(p) that will be utilized at the
next ν.

4.5.2 We will give an example of expanding a signal over the Haar basis. Let N = 2³
and the signal x be defined by its samples on the main period as
x = (1, −1, −1, 1, 1, 1, −1, −1). Calculations performed along formula (4.5.4) are
presented in Table 4.1.
According to (4.5.5) we obtain the expansion

x = (1/4)·4 ϕ_2(3) + (1/2)·2 ϕ_1(4) − (1/2)·2 ϕ_1(5) = f_2(6) + f_1(1) − f_1(3).

Table 4.1 Calculation of Haar coefficients

This result can be verified immediately taking into account the form of Haar basic
functions shown in Fig. 4.3.
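The same computation is easy to reproduce programmatically. Below is a Python sketch (ours, not the book's; the name fht_time and the dictionary layout of the coefficients are illustrative choices) of scheme (4.5.4), applied to the signal of this example.

def fht_time(x):
    # decimation-in-time fast Haar transform, scheme (4.5.4); len(x) = 2**s
    N = len(x)
    s = N.bit_length() - 1
    xi = list(x)                                     # ξ_0(p) = x(p)
    detail = {}                                      # detail[ν][p] = ξ_ν(p + N_ν)
    for nu in range(1, s + 1):
        Nnu = N >> nu
        detail[nu] = [xi[2 * p] - xi[2 * p + 1] for p in range(Nnu)]
        xi = [xi[2 * p] + xi[2 * p + 1] for p in range(Nnu)]   # ξ_ν(p), used at the next level
    return xi[0], detail                             # ξ_s(0) and the wavelet coefficients

x = [1, -1, -1, 1, 1, 1, -1, -1]
xs0, detail = fht_time(x)
assert xs0 == 0
assert detail[3] == [0]
assert detail[2] == [0, 4]                # ξ_2(2) = 0, ξ_2(3) = 4
assert detail[1] == [2, -2, 0, 0]         # ξ_1(4), ξ_1(5), ξ_1(6), ξ_1(7)
# by (4.5.5): x = (1/4)·4·ϕ_2(3) + (1/2)·2·ϕ_1(4) − (1/2)·2·ϕ_1(5), as above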

4.5.3 Scheme (4.5.4) of calculation of the coefficients of expansion (4.5.5) is referred
to as the decimation-in-time fast Haar transform. This transform requires only addi-
tions; the number of additions is

2 Σ_{ν=1}^{s} N_ν = 2 (2^{s−1} + 2^{s−2} + · · · + 2 + 1) = 2(N − 1).

4.5.4 Formula (4.5.4) can be inverted:

ξ_{ν−1}(2p) = (1/2) ( ξ_ν(p) + ξ_ν(p + N_ν) ),
ξ_{ν−1}(2p + 1) = (1/2) ( ξ_ν(p) − ξ_ν(p + N_ν) ),

p ∈ 0 : N_ν − 1,  ν = s, s − 1, . . . , 1.

Herein x(p) = ξ_0(p), p ∈ 0 : N − 1. We have derived the fast algorithm of recon-
structing the samples of a signal x given in the form (4.5.5). The reconstruction is
performed on the main period.

4.6 Decimation in Frequency

4.6.1 If we take an orthogonal system of signals and perform the same permutation of
the argument of each signal, the transformed signals will remain pairwise orthogonal.
This simple idea allows us to construct new orthogonal bases in the space C_N.
Let N = 2^s and f_0, f_1, . . . , f_s be the orthogonal bases in C_N defined in par. 4.2.2.
We put

g_ν(k; j) = f_ν(rev_s(k); rev_s( j)),   ν ∈ 0 : s.

In particular,

g_0(k; j) = δN(rev_s( j) − rev_s(k)) = δN( j − k).

According to (4.2.5) we have

g_s(k; j) = ωN^{rev_s(k) j}.                                              (4.6.1)

It is evident that for each ν ∈ 0 : s the signals g_ν(0), g_ν(1), …, g_ν(N − 1) are pairwise
orthogonal and there holds ‖g_ν(k)‖² = 2^ν, k ∈ 0 : N − 1.

Theorem 4.6.1 There hold the recurrent relations

g_0(k) = δN(· − k),   k ∈ 0 : N − 1;

g_ν(2lN_ν + p) = g_{ν−1}(lN_{ν−1} + p) + ωN^{rev_s(2l)} g_{ν−1}(lN_{ν−1} + N_ν + p),
                                                                         (4.6.2)
g_ν((2l + 1)N_ν + p) = g_{ν−1}(lN_{ν−1} + p) − ωN^{rev_s(2l)} g_{ν−1}(lN_{ν−1} + N_ν + p),

p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_ν − 1,  ν = 1, . . . , s.

Proof As a preliminary, we will ascertain that


 
revs (2l + σ )Nν + p = revν−1 (l) + σ ν + ν+1 revs−ν ( p) (4.6.3)

for p ∈ 0 : Nν − 1, l ∈ 0 : ν − 1, σ ∈ 0 : 1, ν = 1, . . . , s. Recall that rev0 (0) = 0


by a definition, so formula (4.6.3) for ν = 1 takes a form

revs (σ N1 + p) = σ + 2 revs−1 ( p), p ∈ 0 : N1 − 1, σ ∈ 0 : 1.

The latter equality can be easily verified:

revs (σ N1 + p) = revs (σ 2s−1 + ps−2 2s−2 + · · · + p0 )


= p0 2s−1 + · · · + ps−2 2 + σ = 2 revs−1 ( p) + σ.

Let ν ≥ 2. Then
 
revs (2l + σ )Nν + p = revs (lν−2 2s−1 + · · · + l0 2s−ν+1 + σ 2s−ν
+ ps−ν−1 2s−ν−1 + · · · + p0 )
= p0 2s−1 + · · · + ps−ν−1 2ν + σ 2ν−1 + l0 2ν−2 + · · · + lν−2
= ν+1 revs−ν ( p) + σ ν + revν−1 (l),

as it was to be ascertained.

The recurrent relations (4.6.2) can be written down in a single line


  s (2l)
gν (2l +σ )Nν + p = gν−1 (l Nν−1 + p) + (−1)σ ωrevN gν−1 (l Nν−1 + Nν + p),
(4.6.4)
where σ ∈ 0 : 1. To verify (4.6.4) we use formulae (4.6.3) and (4.2.2). We gain
   
gν (2l + σ )Nν + p; j = f ν revs ((2l + σ )Nν + p); revs ( j)
 
= f ν revν−1 (l) + σ ν + ν+1 revs−ν ( p); revs ( j)
 
= f ν−1 revν−1 (l) + 2 revs−ν ( p)ν ; revs ( j)
revν−1 (l)  
+(−1)σ ων+1 f ν−1 revν−1 (l) + (2revs−ν ( p) + 1)ν ; revs ( j)
 
= f ν−1 revs (l Nν−1 + p); revs ( j)
revν−1 (l)  
+(−1)σ ων+1 f ν−1 revs (l Nν−1 + Nν + p); revs ( j)
(l)
= gν−1 (l Nν−1 + p; j) + (−1)σ ων+1
rev
ν−1
gν−1 (l Nν−1 + Nν + p; j).

rev (l)
s (2l)
It is remaining to check that ων+1
ν−1
= ωrev
N for l ∈ 0 : ν − 1. For ν = 1 this is
obvious, and for ν ≥ 2 this is a consequence of the equality

2s−ν revν−1 (l) = l0 2s−2 + · · · + lν−2 2s−ν = revs (2l).

The theorem is proved. 




Theorem 4.6.2 Valid is the equality

g_ν(lN_ν + p) = Σ_{q=0}^{Δ_{ν+1}−1} ωN^{q rev_s(l)} g_0(qN_ν + p),                    (4.6.5)

p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_{ν+1} − 1,  ν = 1, . . . , s.

Proof At first we note that


 
revs revν (q)Nν + p = p0 2s−1 + · · · + ps−ν−1 2ν + qν−1 2ν−1 + · · · + q0
= q + ν+1 revs−ν ( p). (4.6.6)

According to (4.2.4) and (4.6.6) we have

gν (l Nν + p; j)
 
= f ν revs (lν−1 2s−1 + · · · + l0 2s−ν + ps−ν−1 2s−ν−1 + · · · + p0 ); revs ( j)
 
= f ν revν (l) + ν+1 revs−ν ( p); revs ( j)
ν+1 −1
 rev (l) revν (q)  
= ων+1
ν
f 0 q + ν+1 revs−ν ( p); revs ( j)
q=0

ν+1 −1
 rev (l) revν (q)  
= ων+1
ν
f 0 revs (revν (q)Nν + p); revs ( j)
q=0
ν+1 −1
 rev (l) q 
= ων+1
ν
g0 (q Nν + p); j .
q=0

revν (l) s (l)


It is remaining to take into account that ω ν+1
= ωrev
N for l ∈ 0 : ν+1 − 1. The
theorem is proved. 


Theorem 4.6.3 Valid is the identity

g_ν(lN_ν + p; j) ≡ g_ν(lN_ν; j − p),                                      (4.6.7)

p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_{ν+1} − 1,  ν = 1, . . . , s.

Proof Equality (4.6.5) yields

ν+1 −1
 q revs (l)
gν (l Nν ; j) = ωN δ N ( j − q Nν ).
q=0

Therefore
ν+1 −1
 q revs (l)  
gν (l Nν ; j − p) = ωN δ N j − (q Nν + p)
q=0
ν+1 −1
 q revs (l)
= ωN g0 (q Nν + p; j) = gν (l Nν + p; j).
q=0

The theorem is proved. 



4.6.2 A signal y ∈ C_N with N = 2^s can be expanded over any basis g_ν:

y = (1/2^ν) Σ_{k=0}^{N−1} y_ν(k) g_ν(k).                                   (4.6.8)

Here y_ν(k) = ⟨y, g_ν(k)⟩. Relying on (4.6.2), in the usual way we come to recurrent
relations for the coefficients y_ν(k) of expansion (4.6.8):

y_0(k) = y(k),   k ∈ 0 : N − 1;

y_ν(2lN_ν + p) = y_{ν−1}(lN_{ν−1} + p) + ωN^{−rev_s(2l)} y_{ν−1}(lN_{ν−1} + N_ν + p),
                                                                         (4.6.9)
y_ν((2l + 1)N_ν + p) = y_{ν−1}(lN_{ν−1} + p) − ωN^{−rev_s(2l)} y_{ν−1}(lN_{ν−1} + N_ν + p),

p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_ν − 1,  ν = 1, . . . , s.

For ν = s, according to (4.6.1) we obtain

y_s(k) = Σ_{j=0}^{N−1} y( j) \overline{g_s(k; j)} = Σ_{j=0}^{N−1} y( j) ωN^{−rev_s(k) j} = Y(rev_s(k)).

Hence it follows that the components of the Fourier spectrum Y of a signal y are
determined by the formula

Y(k) = y_s(rev_s(k)),   k ∈ 0 : N − 1.

Scheme (4.6.9) is referred to as the decimation-in-frequency Cooley–Tukey algo-
rithm for calculation of the discrete Fourier transform.
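A direct transcription of scheme (4.6.9) into code can look as follows (a Python sketch of ours, not taken from the book; the names rev and fft_dif are assumptions). The input is processed in natural order, and the final reindexing Y(k) = y_s(rev_s(k)) restores the spectrum in natural order.

import cmath

def rev(j, nu):
    # reverse the nu-bit binary code of j
    r = 0
    for _ in range(nu):
        r = (r << 1) | (j & 1)
        j >>= 1
    return r

def fft_dif(y):
    # decimation-in-frequency FFT following scheme (4.6.9)
    N = len(y)
    s = N.bit_length() - 1
    a = [complex(v) for v in y]                      # y_0(k) = y(k)
    for nu in range(1, s + 1):
        Nnu = N >> nu                                # N_ν
        Nprev = 2 * Nnu                              # N_{ν−1}
        b = [0j] * N
        for l in range(1 << (nu - 1)):               # l in 0 : Δ_ν − 1
            w = cmath.exp(-2j * cmath.pi * rev(2 * l, s) / N)   # ω_N^{−rev_s(2l)}
            for p in range(Nnu):                     # p in 0 : N_ν − 1
                u = a[l * Nprev + p]
                v = w * a[l * Nprev + Nnu + p]
                b[2 * l * Nnu + p] = u + v
                b[(2 * l + 1) * Nnu + p] = u - v
        a = b
    return [a[rev(k, s)] for k in range(N)]          # Y(k) = y_s(rev_s(k))

y = [complex(3 - j) for j in range(8)]
Y = fft_dif(y)
for k in range(8):
    direct = sum(y[j] * cmath.exp(-2j * cmath.pi * k * j / 8) for j in range(8))
    assert abs(Y[k] - direct) < 1e-9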
Formula (4.6.9) can be inverted:

y_s(k) = Y(rev_s(k)),   k ∈ 0 : N − 1;

y_{ν−1}(lN_{ν−1} + p) = (1/2) ( y_ν(2lN_ν + p) + y_ν((2l + 1)N_ν + p) ),

y_{ν−1}(lN_{ν−1} + N_ν + p) = (1/2) ωN^{rev_s(2l)} ( y_ν(2lN_ν + p) − y_ν((2l + 1)N_ν + p) ),

p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_ν − 1,  ν = s, s − 1, . . . , 1.

Herein y(k) = y_0(k), k ∈ 0 : N − 1.


4.6.3 The structure of formula (4.6.4) is similar to the structure of formula (4.4.1).
It is convenient to assume that the basis gν−1 is divided into ν blocks; the blocks
are marked by an index l. Each block contains Nν−1 signals with inner indices p and
p + Nν for p ∈ 0 : Nν − 1. According to (4.6.4) a block with an index l generates
two blocks of the basis gν with inner indices 2l and 2l + 1, herein each block contains
Nν signals with an index p. On the strength of (4.6.7) the signals inside a block differ
only by a shift of an argument. The complete scheme of branching is the same as it
was shown in Fig. 4.1, par. 4.4.1.
Wavelet bases are formed of blocks of different levels. This is done in the same
way as in par. 4.4.1.
4.6.4 We will investigate the branching scheme (a) in Fig. 4.2. It corresponds to
putting l = 0 in (4.6.2). We derive the recurrent relations

g_0(k) = δN(· − k),   k ∈ 0 : N − 1;
g_ν(p) = g_{ν−1}(p) + g_{ν−1}(p + N_ν),
                                                                         (4.6.10)
g_ν(p + N_ν) = g_{ν−1}(p) − g_{ν−1}(p + N_ν),
p ∈ 0 : N_ν − 1,  ν = 1, . . . , s.

The signals {g_ν(p + N_ν)} will enter a wavelet basis and the signals {g_ν(p)} will
participate in further branching.
The signals

g_s(0);   g_ν(p + N_ν),  p ∈ 0 : N_ν − 1,  ν = s, s − 1, . . . , 1,               (4.6.11)

form an orthogonal basis in the space C_N with N = 2^s. It is referred to as the discrete
Haar basis related to decimation in frequency.
According to (4.6.1) we have gs (0; j) ≡ 1. Identity (4.6.7) yields

gν ( p + Nν ; j) ≡ gν (Nν ; j − p), p ∈ 0 : Nν − 1.

Theorem 4.6.4 For ν ∈ 1 : s there holds

gν (Nν ; j) = δ Nν−1 ( j) − δ Nν−1 ( j − Nν ), j ∈ Z.

Proof Using formula (4.6.5) with l = 1 and p = 0 we gain

ν+1 −1
 q revs (1)
gν (Nν ; j) = ωN δ N ( j − q Nν )
q=0
ν+1 −1

= (−1)q δ N ( j − q Nν )
q=0

 ν −1 
 ν −1

= δ N ( j − q Nν−1 ) − δ N ( j − Nν − q Nν−1 )
q=0 q=0
= δ Nν−1 ( j) − δ Nν−1 ( j − Nν ).

The theorem is proved. 




Figure 4.4 depicts basis (4.6.11) for N = 23 .

4.6.5 Any signal y ∈ C_N can be expanded over basis (4.6.11):

y = 2^{−s} y_s(0) g_s(0) + Σ_{ν=1}^{s} 2^{−ν} Σ_{p=0}^{N_ν−1} y_ν(p + N_ν) g_ν(p + N_ν).          (4.6.12)

Here y_ν(k) = ⟨y, g_ν(k)⟩. Bearing in mind (4.6.10) we deduce recurrent relations for
the coefficients of expansion (4.6.12):

Fig. 4.4 Haar basis related to decimation in frequency for N = 23

y_0(k) = y(k),   k ∈ 0 : N − 1;
y_ν(p) = y_{ν−1}(p) + y_{ν−1}(p + N_ν),
                                                                         (4.6.13)
y_ν(p + N_ν) = y_{ν−1}(p) − y_{ν−1}(p + N_ν),
p ∈ 0 : N_ν − 1,  ν = 1, . . . , s.

Along this scheme, for every ν we calculate N_ν coefficients y_ν(p + N_ν) of the wavelet
expansion (4.6.12) and N_ν coefficients y_ν(p) that will be utilized at the next ν.

As an example we will expand the signal from par. 4.5.2 over basis (4.6.11). It
is convenient to rename this signal as y in lieu of x. Calculations performed along
formula (4.6.13) are presented in Table 4.2.
According to (4.6.12) we obtain the expansion

y = g2 (2) − g1 (5) + g1 (7).

Table 4.2 Calculation of Haar coefficients



This result can be verified immediately taking into account the form of Haar basic
functions shown in Fig. 4.4.
Scheme (4.6.13) of calculation of the coefficients of expansion (4.6.12) is referred
to as the decimation-in-frequency fast Haar transform. This transform requires only
additions; the number of operations is 2(N − 1).
Note that the coefficients of expansion (4.6.12) are contained in the table {yν (k)}
constructed along formula (4.6.9). Thus, in a process of calculating Fourier coeffi-
cients we incidentally calculate Haar coefficients as well.
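For comparison with par. 4.5.2, here is a Python sketch of scheme (4.6.13) (our illustration; the name fht_freq is an assumption) applied to the same signal; it reproduces the coefficients behind the expansion y = g_2(2) − g_1(5) + g_1(7).

def fht_freq(y):
    # decimation-in-frequency fast Haar transform, scheme (4.6.13); len(y) = 2**s
    N = len(y)
    s = N.bit_length() - 1
    a = list(y)                                      # y_0(p) = y(p)
    detail = {}                                      # detail[ν][p] = y_ν(p + N_ν)
    for nu in range(1, s + 1):
        Nnu = N >> nu
        detail[nu] = [a[p] - a[p + Nnu] for p in range(Nnu)]
        a = [a[p] + a[p + Nnu] for p in range(Nnu)]  # y_ν(p), used at the next level
    return a[0], detail

y = [1, -1, -1, 1, 1, 1, -1, -1]
ys0, detail = fht_freq(y)
assert ys0 == 0
assert detail[2] == [4, 0]                # y_2(2) = 4, y_2(3) = 0
assert detail[1] == [0, -2, 0, 2]         # y_1(4), ..., y_1(7)
# by (4.6.12): y = (1/4)·4·g_2(2) − (1/2)·2·g_1(5) + (1/2)·2·g_1(7) = g_2(2) − g_1(5) + g_1(7)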
4.6.6 Formula (4.6.13) can be inverted:

y_{ν−1}(p) = (1/2) [ y_ν(p) + y_ν(p + N_ν) ],
y_{ν−1}(p + N_ν) = (1/2) [ y_ν(p) − y_ν(p + N_ν) ],

p ∈ 0 : N_ν − 1,  ν = s, s − 1, . . . , 1.

Herein y(p) = y_0(p), p ∈ 0 : N − 1. We have derived the fast algorithm of recon-
structing the samples of a signal y given in the form (4.6.12). The reconstruction is
performed on the main period.

4.7 Sampling Theorem in Haar Bases

4.7.1 We begin with the Haar basis related to decimation in time.

Lemma 4.7.1 Given a signal x ∈ C_N, for each k ∈ 1 : s there holds the equality

x = 2^{−k} Σ_{p=0}^{N_k−1} ξ_k(p) ϕ_k(p) + Σ_{ν=1}^{k} 2^{−ν} Σ_{p=0}^{N_ν−1} ξ_ν(p + N_ν) ϕ_ν(p + N_ν).   (4.7.1)

In case of k = s we obtain the wavelet expansion (4.5.5).


Proof Let k = 1. We have

N −1
 N −1

x= x( p) δ N (· − p) = ξ0 ( p) ϕ0 ( p)
p=0 p=0


N 1 −1

= ξ0 (2 p) ϕ0 (2 p) + ξ0 (2 p + 1) ϕ0 (2 p + 1) .
p=0

We put

N 1 −1


x= ξ0 (2 p + 1) ϕ0 (2 p) + ξ0 (2 p) ϕ0 (2 p + 1) .
p=0

Then
x = 21 (x + 
x ) + 21 (x − 
x ) =: 1
2
v1 + 21 w1 . (4.7.2)

On the basis of (4.5.3) and (4.5.4) we gain


N 1 −1

v1 = ξ0 (2 p) + ξ0 (2 p + 1) ϕ0 (2 p) + ϕ0 (2 p + 1)
p=0


N 1 −1

= ξ1 ( p) ϕ1 ( p),
p=0


N 1 −1

w1 = ξ0 (2 p) − ξ0 (2 p + 1) ϕ0 (2 p) − ϕ0 (2 p + 1)
p=0


N 1 −1

= ξ1 ( p + N1 ) ϕ1 ( p + N1 ).
p=0

Now formula (4.7.2) conforms to (4.7.1) for k = 1.


We perform an induction step from k to k + 1. We denote


N k −1

vk = ξk ( p) ϕk ( p)
p=0
Nk+1 −1

= ξk (2 p) ϕk (2 p) + ξk (2 p + 1) ϕk (2 p + 1)
p=0

and introduce a signal

Nk+1 −1


vk = ξk (2 p + 1) ϕk (2 p) + ξk (2 p) ϕk (2 p + 1) .
p=0

Then
vk = 21 (vk + 
vk ) + 21 (vk − 
vk ) =: 1
2
vk+1 + 21 wk+1 . (4.7.3)

Here
Nk+1 −1

vk+1 = ξk+1 ( p) ϕk+1 ( p),
p=0

Nk+1 −1

wk+1 = ξk+1 ( p + Nk+1 ) ϕk+1 ( p + Nk+1 ).
p=0

Combining (4.7.1) and (4.7.3) we gain

Nk+1 −1 ν −1
 
k+1 
N
x = 2−k−1 ξk+1 ( p)ϕk+1 ( p) + 2−ν ξν ( p + Nν )ϕν ( p + Nν ).
p=0 ν=1 p=0

The lemma is proved. 




Theorem 4.7.1 (Sampling Theorem) Let x ∈ C_N be a signal such that for some
k ∈ 1 : s there holds ξ_ν(p + N_ν) = 0 for all p ∈ 0 : N_ν − 1 and ν = 1, . . . , k. Then

x = Σ_{p=0}^{N_k−1} x(2^k p) ϕ_k(p).                                        (4.7.4)

Proof By virtue of (4.7.1) it is sufficient to verify that ξk ( p) = 2k x(2k p) for p ∈


0 : Nk − 1. On the basis of (4.5.4) we have

ξν−1 (2 p) = 1
2
ξν ( p) + ξν ( p + Nν ) .

Taking into account the hypothesis of the theorem we gain ξν ( p) = 2 ξν−1 (2 p) for
all p ∈ 0 : Nν − 1 and ν = 1, . . . , k. Hence for p ∈ 0 : Nk − 1 there holds

ξk ( p) = 2 ξk−1 (2 p) = 22 ξk−2 (22 p) = · · · = 2k ξ0 (2k p) = 2k x(2k p).

The theorem is proved. 




We note that according to (4.2.4) there holds

ϕ_k(0; j) = f_k(0; j) = Σ_{q=0}^{2^k−1} δN( j − q),

i.e. ϕ_k(0; j) is a periodic step that equals unity for j = 0, 1, . . . , 2^k − 1 and
equals zero for j = 2^k, . . . , N − 1. By virtue of (4.2.8), for p ∈ 0 : N_k − 1 we
have

ϕ_k(p; j) = f_k(pΔ_{k+1}; j) = f_k(0; j − pΔ_{k+1}) = ϕ_k(0; j − 2^k p).

This observation lets us rewrite (4.7.4) in the form

x( j) = Σ_{p=0}^{N_k−1} x(2^k p) ϕ_k(0; j − 2^k p).

The latter formula shows that in the premises of Theorem 4.7.1 the signal x is
a step-function defined by the equalities x( j) = x(2^k p) for j ∈ {2^k p, 2^k p + 1, . . . ,
2^k(p + 1) − 1}, p = 0, 1, . . . , N_k − 1.
4.7.2 Now we turn to the Haar basis related to decimation in frequency.

Lemma 4.7.2 Given a signal y ∈ C_N, for each k ∈ 1 : s there holds the equality

y = 2^{−k} Σ_{p=0}^{N_k−1} y_k(p) g_k(p) + Σ_{ν=1}^{k} 2^{−ν} Σ_{p=0}^{N_ν−1} y_ν(p + N_ν) g_ν(p + N_ν).   (4.7.5)

In case of k = s we obtain the wavelet expansion (4.6.12).


Proof Let k = 1. We have

y( j) = 1
2
y( j) + y( j − N1 ) + 1
2
y( j) − y( j − N1 )
=: 1
2
v1 ( j) + w1 ( j).
1
2
(4.7.6)

According to (4.6.10) and (4.6.13) we write

N −1

v1 = y( p) δ N (· − p) + δ N (· − p + N1 N)
p=0
N −1

= y0 ( p) g0 ( p) + g0 ( p + N1 N)
p=0


N 1 −1 
N 1 −1

= y0 ( p) g1 ( p) + y0 ( p + N1 ) g1 ( p)
p=0 p=0


N 1 −1

= y1 ( p) g1 ( p),
p=0


N 1 −1 
N 1 −1

w1 = y0 ( p) g1 ( p + N1 ) − y0 ( p + N1 ) g1 ( p + N1 )
p=0 p=0


N 1 −1

= y1 ( p + N1 ) g1 ( p + N1 ).
p=0

Now formula (4.7.6) corresponds to (4.7.5) for k = 1.



We perform an induction step from k to k + 1. We denote


N k −1

vk = yk ( p) gk ( p)
p=0

and write down


Nk −1
1 
vk = yk ( p) gk ( p) + gk ( p + Nk+1 Nk )
2 p=0
Nk −1
1 
+ yk ( p) gk ( p) − gk ( p + Nk+1 Nk )
2 p=0

=: 1
2
vk+1 + 21 wk+1 . (4.7.7)

Here
Nk+1 −1 Nk+1 −1
 
vk+1 = yk ( p) gk+1 ( p) + yk ( p + Nk+1 ) gk+1 ( p)
p=0 p=0
Nk+1 −1

= yk+1 ( p) gk+1 ( p)
p=0

and
Nk+1 −1 Nk+1 −1
 
wk+1 = yk ( p) gk+1 ( p+ Nk+1 ) − yk ( p+ Nk+1 ) gk+1 ( p+ Nk+1 )
p=0 p=0
Nk+1 −1

= yk+1 ( p + Nk+1 ) gk+1 ( p + Nk+1 ).
p=0

Combining (4.7.5) and (4.7.7) we gain

Nk+1 −1 ν −1
 
k+1 
N
y = 2−k−1 yk+1 ( p) gk+1 ( p) + 2−ν yν ( p + Nν ) gν ( p + Nν ).
p=0 ν=1 p=0

The lemma is proved. 




Theorem 4.7.2 (Sampling Theorem) Let y ∈ C_N be a signal such that for some
k ∈ 1 : s there holds y_ν(p + N_ν) = 0 for all p ∈ 0 : N_ν − 1 and ν = 1, . . . , k. Then

y = Σ_{p=0}^{N_k−1} y(p) g_k(p).                                            (4.7.8)

Proof By virtue of (4.7.5) it is sufficient to verify that yk ( p) = 2k y( p) for p ∈ 0 :


Nk − 1. On the basis of (4.6.13) we have

yν−1 ( p) = 1
2
yν ( p) + yν ( p + Nν ) .

Taking into account the hypothesis of the theorem we gain yν ( p) = 2 yν−1 ( p) for all
p ∈ 0 : Nν − 1 and ν = 1, . . . , k. Hence for p ∈ 0 : Nk − 1 there holds

yk ( p) = 2 yk−1 ( p) = 4 yk−2 ( p) = · · · = 2k y0 ( p) = 2k y( p).

The theorem is proved. 




Note that according to (4.6.5) there holds

g_k(0; j) = Σ_{q=0}^{Δ_{k+1}−1} δN( j − qN_k) = δ_{N_k}( j).

On the strength of (4.6.7), for p ∈ 0 : N_k − 1 we have

g_k(p; j) = g_k(0; j − p) = δ_{N_k}( j − p).                                (4.7.9)

This lets us rewrite (4.7.8) in the form

y( j) = Σ_{p=0}^{N_k−1} y(p) δ_{N_k}( j − p).

The latter formula shows that in the premises of Theorem 4.7.2 the signal y is
N_k-periodic.
Essentially, we ascertained that by expanding a signal over the Haar basis
related to decimation in frequency one can detect its hidden periodicity.

4.8 Convolution Theorem in Haar Bases

4.8.1 Formula (4.6.12) gives an expansion of a signal y ∈ C_N over the discrete Haar
basis related to decimation in frequency. As was mentioned in par. 4.6.4, the basic
signals satisfy the identity

g_ν(p + N_ν; j) ≡ g_ν(N_ν; j − p),

p ∈ 0 : N_ν − 1,  ν = 1, . . . , s − 1.

In other words, every basic signal of the ν-th level is a shift of a single signal
g_ν(N_ν). We introduce the notation ψ_ν( j) = g_ν(N_ν; j). According to Theorem 4.6.4
we have

ψ_ν( j) = δ_{N_{ν−1}}( j) − δ_{N_{ν−1}}( j − N_ν),   ν = 1, . . . , s.              (4.8.1)

In particular, ψ_ν(− j) = ψ_ν( j), ψ_ν( j − N_ν) = −ψ_ν( j). To simplify formula (4.6.12)
we put

β = y_s(0);   ỹ_ν(p) = y_ν(p + N_ν),  p ∈ 0 : N_ν − 1,  ν = 1, . . . , s.

In the new notations formula (4.6.12) takes the form

y( j) = 2^{−s} β + Σ_{ν=1}^{s} 2^{−ν} Σ_{p=0}^{N_ν−1} ỹ_ν(p) ψ_ν( j − p),   j ∈ Z.          (4.8.2)

We took advantage of the fact that g_s(0; j) ≡ 1.


We will examine in more detail the properties of the signal ψν .

Lemma 4.8.1 Valid is the formula

ψ_ν( j) = (−1)^{⌊ j/N_ν⌋} δ_{N_ν}( j),   j ∈ Z.                              (4.8.3)

Proof By virtue of (4.8.1) the signal ψν is Nν−1 -periodic. Since Nν−1 = 2Nν , of
the same property is the signal in the right side of (4.8.3). Hence it is sufficient to
verify equality (4.8.3) on the main period 0 : Nν−1 − 1. When j = 0 or j = Nν , it
is a consequence of (4.8.1). When j ∈ 1 : Nν − 1 or j ∈ Nν + 1 : Nν−1 − 1, both
sides of (4.8.3) are equal to zero. The lemma is proved. 


Lemma 4.8.2 Given p ∈ 0 : N_ν − 1, valid is the equality

ψ_ν( j − p) = (−1)^{⌊ j/N_ν⌋} δ_{N_ν}( j − p),   j ∈ Z.                       (4.8.4)

Proof Let us show that

(−1)( j− p)/Nν  δ Nν ( j − p) = (−1) j/Nν  δ Nν ( j − p). (4.8.5)

As long as both sides of this equality are Nν−1 -periodic (in terms of j) signals, it is
sufficient to verify it on the period p : p + Nν−1 − 1. When j = p or j = p + Nν ,
equality (4.8.5) is true. It is also true for any other j from the given period because
in this case both sides of (4.8.5) are equal to zero.
Now (4.8.4) follows from (4.8.3) and (4.8.5). 


Lemma 4.8.3 For any integers j and p there holds

ψ_ν( j − p) = (−1)^{⌊p/N_ν⌋} ψ_ν( j − ⟨p⟩_{N_ν}).                            (4.8.6)

Proof According to (4.8.3), for each integer j and l we have

ψν ( j − l Nν ) = (−1)l ψν ( j).
 
Taking into account that j − p = j − p Nν −  p/Nν  Nν we come to (4.8.6). 

4.8.2 We will investigate how the decimation-in-frequency discrete Haar transform
acts on a cyclic convolution. Recall that a cyclic convolution of signals x and y from
C_N is the signal u = x ∗ y with samples

u( j) = Σ_{k=0}^{N−1} x(k) y( j − k).

Theorem 4.8.1 (Convolution Theorem) Let u be a cyclic convolution of signals x
and y. If along with (4.8.2) we have the expansions

x( j) = 2^{−s} α + Σ_{ν=1}^{s} 2^{−ν} Σ_{p=0}^{N_ν−1} x̃_ν(p) ψ_ν( j − p),                  (4.8.7)

u( j) = 2^{−s} γ + Σ_{ν=1}^{s} 2^{−ν} Σ_{p=0}^{N_ν−1} ũ_ν(p) ψ_ν( j − p),                  (4.8.8)

then it is necessary that γ = αβ, ũ_s(0) = x̃_s(0) ỹ_s(0) and

ũ_ν(p) = Σ_{q=0}^{p} x̃_ν(q) ỹ_ν(p − q) − Σ_{q=p+1}^{N_ν−1} x̃_ν(q) ỹ_ν(p − q + N_ν),        (4.8.9)

p ∈ 0 : N_ν − 1,  ν = s − 1, s − 2, . . . , 1.

Proof Let us transform a formula


s 
N ν −1

y( j − k) = 2−s β + 2−ν 
yν ( p) ψν ( j − k − p).
ν=1 p=0

As it was mentioned in par. 4.8.1, there holds ψν (− j) = ψν ( j). Along with (4.8.6)
it gives us
   
ψν ( j − k − p) = ψν k − ( j − p) = (−1)( j− p)/Nν  ψν k − j − p Nν .

We have

 
N ν −1
s
 
y( j − k) = 2−s β + 2−ν (−1)( j− p)/Nν  
yν ( p) ψν k − j − p Nν .
ν=1 p=0

We change the variables: q = j − p Nν . Then p = j − q Nν and

       
j−p j − j −q Nν j −q − j −q Nν +q j −q
= = = .
Nν Nν Nν Nν

We come to a formula

 
N ν −1
s
 
y( j − k) = 2−s β + 2−ν (−1)( j−q)/Nν  
yν j −q Nν ψν (k − q). (4.8.10)
ν=1 q=0

Let us substitute (4.8.7) and (4.8.10) into a convolution formula. Bearing in mind
reality, orthogonality and norming of the basic functions we gain

 
N ν −1
s
 
u( j) = 2−s αβ + 2−ν (−1)( j−q)/Nν  
xν (q) 
yν j −q Nν .
ν=1 q=0

We write
 
(−1)( j−q)/Nν  
yν j −q Nν
 
= (−1) j/Nν +( j Nν −q)/Nν  
yν j Nν − q Nν

N ν −1
   
= (−1) j/Nν  (−1)( p−q)/Nν  
yν p − q Nν δ Nν j Nν −p .
p=0

According to (4.8.4) we have


 
δ Nν j Nν − p = δ Nν ( j − p) = (−1) j/Nν  ψν ( j − p),

therefore

 Nν −1 N
 ν −1 
−s
s
−ν ( p−q)/Nν 
 
u( j) = 2 αβ + 2 (−1) 
xν (q)
yν p − q Nν ψν ( j − p).
ν=1 p=0 q=0

But we already have representation (4.8.8) for the signal u. By virtue of uniqueness of
expansion over an orthogonal basis we conclude that γ = αβ and the sum in braces
is nothing else but 
u ν ( p). It is remaining to note that


N ν −1 
p
 
(−1)( p−q)/Nν   yν p − q
xν (q)  Nν = 
xν (q) 
yν ( p − q)
q=0 q=0


N ν −1

− 
xν (q) 
yν ( p − q + Nν ).
q= p+1

The theorem is proved. 




The expression in the right side of formula (4.8.9) is referred to as a skew-cyclic
convolution of the signals x̃_ν and ỹ_ν. Thus, the coefficients of the ν-th level in the
expansion of a cyclic convolution u over the Haar basis related to decimation in
frequency are obtained as a result of a skew-cyclic convolution of the expansion
coefficients of the ν-th level of the signals x and y.
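As a small illustration (ours, not the book's; the name skew_cyclic_convolution is an assumption), formula (4.8.9) can be transcribed directly:

def skew_cyclic_convolution(a, b):
    # skew-cyclic convolution of two sequences of equal length, formula (4.8.9)
    n = len(a)
    return [sum(a[q] * b[p - q] for q in range(p + 1))
            - sum(a[q] * b[p - q + n] for q in range(p + 1, n))
            for p in range(n)]

# for n = 2 the result is (a0*b0 − a1*b1, a0*b1 + a1*b0),
# i.e. the product of the corresponding polynomials modulo z**2 + 1
print(skew_cyclic_convolution([1, 2], [3, 4]))       # [-5, 10]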

4.8.3 Now we turn to the discrete Haar basis related to decimation in time (see
Sects. 4.4.2 and 4.5.1). We have

ϕ_ν(p + N_ν; j) = f_ν(Δ_ν + pΔ_{ν+1}; j) = f_ν(Δ_ν; j − pΔ_{ν+1}) = ϕ_ν(N_ν; j − pΔ_{ν+1}).

For the sake of simplicity we will write ϕ_ν( j) instead of ϕ_ν(N_ν; j). Formula (4.4.8)
yields

ϕ_ν( j) = Σ_{q=0}^{Δ_ν−1} δN( j − q) − Σ_{q=Δ_ν}^{Δ_{ν+1}−1} δN( j − q).                  (4.8.11)

We take expansion (4.5.5) and rename α := ξ_s(0) and ξ̃_ν(p) := ξ_ν(p + N_ν). For-
mula (4.5.5) takes the form

x( j) = 2^{−s} α + Σ_{ν=1}^{s} 2^{−ν} Σ_{p=0}^{N_ν−1} ξ̃_ν(p) ϕ_ν( j − pΔ_{ν+1}).             (4.8.12)

We took advantage of the fact that ϕ_s(0; j) ≡ f_s(0; j) ≡ 1.

We will examine in more detail the properties of the signal ϕν .

Lemma 4.8.4 Valid is the equality

ϕ_ν( j) = (−1)^{⌊ j/Δ_ν⌋} δ_{N_ν}( ⌊ j/Δ_{ν+1}⌋ ),   j ∈ Z.                     (4.8.13)

Proof The right side of (4.8.13) is N -periodic because N = 2Nν ν = Nν ν+1 ; the
left side is N -periodic as well. Hence it is sufficient to verify equality (4.8.13) for
j ∈ 0 : N − 1.

The right side of (4.8.13) equals to 1 for j ∈ 0 : ν − 1, equals to −1 for j ∈


ν : ν+1 − 1, and equals to zero for j ∈ ν+1 : N − 1 (take into account that
N = Nν ν+1 ). According to (4.8.12), ϕν ( j) has the same values in the indicated
nodes j. The lemma is proved. 


Substituting j − pν+1 instead of j in (4.8.13) we gain


 
ϕν ( j − pν+1 ) = (−1) j/ν  δ Nν  j/ν+1  − p . (4.8.14)

In case of j ∈ 0 : N − 1 the coefficient (−1) j/ν  can be represented in another


way. Let j = ( js−1 , . . . , j0 )2 . Then
 
j js−1 2s−1 + · · · + jν 2ν + jν−1 2ν−1
= = js−1 2s−ν + · · · + jν 2 + jν−1 ,
ν 2ν−1
so
(−1) j/ν  = (−1) jν−1 . (4.8.15)

In the following lemma we will use the operation ⊕ of bitwise summation mod-
ulo 2 (see Sect. 1.5).

Lemma 4.8.5 Given k ∈ 0 : N − 1, k = (k_{s−1}, . . . , k_0)_2, valid is the equality

ϕ_ν( j ⊕ k) = (−1)^{k_{ν−1}} ϕ_ν( j − ⌊k/Δ_{ν+1}⌋ Δ_{ν+1} ),   j ∈ 0 : N − 1.      (4.8.16)

Proof We write
j =  j/ν+1 ν+1 + jν−1 ν + j ν ,

k = k/ν+1 ν+1 + kν−1 ν + k ν .

It is clear that
   
j ⊕ k =  j/ν+1  ⊕ k/ν+1  ν+1 + jν−1 + kν−1 2 ν + j ν ⊕ k ν .

According to (4.8.13) and (4.8.15) we have


 
ϕν ( j ⊕ k) = (−1) jν−1 +kν−1 δ Nν  j/ν+1  ⊕ k/ν+1  .

Let us use the equality δ Nν (a ⊕ b) = δ Nν (a − b), where a, b ∈ 0 : Nν − 1 (it is true


both for a = b and for a = b). We gain
 
ϕν ( j ⊕ k) = (−1) jν−1 +kν−1 δ Nν  j/ν+1 −k/ν+1  . (4.8.17)

Formulae (4.8.14) and (4.8.15) yield


   
(−1) jν−1 δ Nν  j/ν+1  − k/ν+1  = ϕν j − k/ν+1 ν+1 . (4.8.18)

Combining (4.8.17) and (4.8.18) we come to (4.8.16). The lemma is proved. 




Corollary 4.8.1 For p ∈ 0 : N_ν − 1 we have

ϕ_ν( j ⊕ pΔ_{ν+1}) = ϕ_ν( j − pΔ_{ν+1}),   j ∈ 0 : N − 1.                    (4.8.19)

4.8.4 Application of the decimation-in-time discrete Haar transform to a cyclic con-
volution does not produce a satisfactory result. More efficient is the application of
this transform to a dyadic convolution.
Let x and y be signals of C_N. The signal z ∈ C_N with samples

z( j) = Σ_{k=0}^{N−1} x(k) y( j ⊕ k),   j ∈ 0 : N − 1,                       (4.8.20)

is referred to as the dyadic convolution of the signals x and y.
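Since ⊕ is the bitwise XOR of the binary codes, the definition translates into code directly. The following Python sketch (ours; the name dyadic_convolution is an assumption) computes (4.8.20) for a period N that is a power of two.

def dyadic_convolution(x, y):
    # dyadic convolution (4.8.20): z(j) = sum over k of x(k) * y(j XOR k)
    N = len(x)
    return [sum(x[k] * y[j ^ k] for k in range(N)) for j in range(N)]

# j ^ k is Python's bitwise XOR, i.e. the operation ⊕;
# for N = 4, for instance, z(0) = x0*y0 + x1*y1 + x2*y2 + x3*y3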

Theorem 4.8.2 (Dyadic Convolution Theorem) Let z be a dyadic convolution of
signals x and y. If along with (4.8.12) we have the expansions

y( j) = 2^{−s} β + Σ_{ν=1}^{s} 2^{−ν} Σ_{p=0}^{N_ν−1} η̃_ν(p) ϕ_ν( j − pΔ_{ν+1}),

z( j) = 2^{−s} γ + Σ_{ν=1}^{s} 2^{−ν} Σ_{p=0}^{N_ν−1} ζ̃_ν(p) ϕ_ν( j − pΔ_{ν+1}),

then it is necessary that γ = αβ and

ζ̃_ν(p) = Σ_{q=0}^{N_ν−1} ξ̃_ν(q) η̃_ν(p ⊕ q),                                 (4.8.21)

p ∈ 0 : N_ν − 1,  ν = 1, . . . , s.

Proof We fix j ∈ 0 : N − 1. According to (4.8.19) we have

 
N ν −1
s
 
y( j ⊕ k) = 2−s β + 2−ν 
ην ( p) ϕν ( j ⊕ k) ⊕ pν+1 .
ν=1 p=0

Since

( j ⊕ k) ⊕ pν+1 = k ⊕ ( j ⊕ pν+1 )
 
= k ⊕ ( j/ν+1  ⊕ p) ν+1 + jν−1 ν + j ν ,

equality (4.8.16) yields

 
N ν −1
s
 
y( j ⊕ k) = 2−s β + 2−ν ην ( p) ϕν k − ( j/ν+1  ⊕ p)ν+1 .
(−1) jν−1 
ν=1 p=0

Performing a change of variables q = p ⊕  j/ν+1  we come to a formula

 
N ν −1
s
 
y( j ⊕ k) = 2−s β + 2−ν ην q ⊕  j/ν+1  ϕν (k − qν+1 ).
(−1) jν−1 
ν=1 q=0
(4.8.22)
Let us substitute (4.8.12) and (4.8.22) into (4.8.20). Bearing in mind reality,
orthogonality, and norming of the basic signals we gain

 
N ν −1
s
 
z( j) = 2−s αβ + 2−ν (−1) jν−1 
ξν (q) 
ην q ⊕  j/ν+1  . (4.8.23)
ν=1 q=0

The following transform is based on formulae (4.8.14) and (4.8.15):

ν −1
  N  
(−1) jν−1 
ην q ⊕  j/ν+1  = ην (q ⊕ p) (−1) j/ν  δ Nν  j/ν+1  − p

p=0


N ν −1

= 
ην ( p ⊕ q) ϕν ( j − pν+1 ). (4.8.24)
p=0

Substituting (4.8.24) into (4.8.23) we come to the expansion


s 
N ν −1  N
 ν −1 
z( j) = 2−s αβ + 2−ν 
ξν (q) 
ην ( p ⊕ q) ϕν ( j − pν+1 ).
ν=1 p=0 q=0

Uniqueness of expansion over an orthogonal basis guarantees that γ = αβ holds,


and that the sum in braces is equal to 
ζν ( p). The theorem is proved. 


Formula (4.8.21) shows that coefficients of the ν-th level in the expansion of a
dyadic convolution z over Haar basis related to decimation in time are obtained as
a result of a dyadic convolution of expansion coefficients of the ν-th level of the
signals x and y.

4.9 Second Sequence of Orthogonal Bases

4.9.1 We retain the notations N = 2^s, N_ν = N/2^ν, Δ_ν = 2^{ν−1}. We will construct
another sequence of orthogonal bases w_ν = {w_ν(k; j)}_{k=0}^{N−1}, ν = 0, 1, . . . , s, with
the aid of the recurrent relations

w_0(k) = δN(· − k),   k ∈ 0 : N − 1;

w_ν(l + pΔ_{ν+1}) = w_{ν−1}(l + 2pΔ_ν) + w_{ν−1}(l + (2p + 1)Δ_ν),
                                                                         (4.9.1)
w_ν(l + Δ_ν + pΔ_{ν+1}) = w_{ν−1}(l + 2pΔ_ν) − w_{ν−1}(l + (2p + 1)Δ_ν),

p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_ν − 1,  ν = 1, . . . , s.

These formulae differ from (4.2.1) only by the coefficient ω_{Δ_{ν+1}}^{l} being replaced
with unity. A transition from the basis w_{ν−1} to the basis w_ν can be written in a single
line:

w_ν(l + σΔ_ν + pΔ_{ν+1}) = w_{ν−1}(l + 2pΔ_ν) + (−1)^σ w_{ν−1}(l + (2p + 1)Δ_ν),
                                                                         (4.9.2)
p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_ν − 1,  σ ∈ 0 : 1,  ν = 1, . . . , s.

In particular, for ν = 1 we gain

w_1(σ + 2p) = w_0(2p) + (−1)^σ w_0(2p + 1),                              (4.9.3)

p ∈ 0 : N_1 − 1,  σ ∈ 0 : 1.

What do the signals w_s(k; j) look like? Answering this question requires some
additional preparation.
4.9.2 We introduce a sequence of matrices

A_1 = ( 1   1 ) ,      A_ν = ( A_{ν−1}   A_{ν−1} ) ,   ν = 2, . . . , s.          (4.9.4)
      ( 1  −1 )              ( A_{ν−1}  −A_{ν−1} )

The matrix A_ν is referred to as a Hadamard matrix. It is a square matrix of order Δ_{ν+1}.
We will suppose that the indices of its rows and columns vary from 0 to Δ_{ν+1} − 1.
Let k, j ∈ 0 : Δ_{ν+1} − 1, k = (k_{ν−1}, k_{ν−2}, . . . , k_0)_2 and j = ( j_{ν−1}, j_{ν−2}, . . . , j_0)_2.
We put

{k, j}_ν = Σ_{α=0}^{ν−1} k_α j_α.

Theorem 4.9.1 Elements of a Hadamard matrix satisfy the formula

A_ν[k, j] = (−1)^{{k, j}_ν},   k, j ∈ 0 : Δ_{ν+1} − 1.                       (4.9.5)



Proof When ν = 1, the assertion is obvious because A1 [k, j] = (−1)k j holds for
k, j ∈ 0 : 1. We perform an induction step from ν − 1 to ν.
We take k, j ∈ 0 : ν+1 − 1 and represent them in the following manner: k =
kν−1 2ν−1 + l, j = jν−1 2ν−1 + q. Here kν−1 , jν−1 ∈ 0 : 1 and l, q ∈ 0 : ν − 1.
According to (4.9.4) we have

Aν [k, j] = (−1)kν−1 jν−1 Aν−1 [l, q]. (4.9.6)

Taking into account the inductive hypothesis we gain

Aν [k, j] = (−1)kν−1 jν−1 +{l,q}ν−1 = (−1){k, j}ν .

The theorem is proved. 




Later on we will need formula (4.9.6) in the form

A_ν[l + σΔ_ν, q + τΔ_ν] = (−1)^{στ} A_{ν−1}[l, q],                           (4.9.7)

l, q ∈ 0 : Δ_ν − 1,  σ, τ ∈ 0 : 1.

4.9.3 Let us get an explicit expression for the signals w_ν(k).

Theorem 4.9.2 There holds the representation

w_ν(l + pΔ_{ν+1}) = Σ_{q=0}^{Δ_{ν+1}−1} A_ν[l, q] w_0(q + pΔ_{ν+1}),                 (4.9.8)

p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_{ν+1} − 1,  ν = 1, . . . , s.

Proof When ν = 1, formula (4.9.8) coincides with (4.9.3) if we replace σ by l in


the latter one. We perform an induction step from ν − 1 to ν.
We represent an index l ∈ 0 : ν+1 − 1 in a form l = σ ν + l  , where l  ∈ 0 :
ν − 1 and σ ∈ 0 : 1. On the basis of (4.9.2) and the inductive hypothesis we write

wν (l + pν+1 ) = wν (l  + σ ν + pν+1 )
 
= wν−1 (l  + 2 pν ) + (−1)σ wν−1 l  + (2 p + 1)ν

 ν −1

= Aν−1 [l  , q] w0 (q + 2 pν )
q=0

ν −1
 
+ (−1)σ Aν−1 [l  , q] w0 q + (2 p + 1)ν .
q=0

Formula (4.9.7) yields

Aν−1 [l  , q] = Aν [l  + σ ν , q] = Aν [l, q],

(−1)σ Aν−1 [l  , q] = Aν [l  + σ ν , q + ν ] = Aν [l, q + ν ].

Taking this into account we gain


 ν −1

wν (l + pν+1 ) = Aν [l, q] w0 (q + pν+1 )


q=0

 ν −1

+ Aν [l, q + ν ] w0 (q + ν + pν+1 )
q=0
ν+1 −1

= Aν [l, q] w0 (q + pν+1 ).
q=0

The theorem is proved. 




When ν = s, formula (4.9.8) takes the form

w_s(l; j) = Σ_{q=0}^{N−1} A_s[l, q] δN( j − q) = A_s[l, j] = (−1)^{{l, j}_s},   l, j ∈ 0 : N − 1.

The functions

v_k( j) = (−1)^{{k, j}_s},   k, j ∈ 0 : N − 1,                              (4.9.9)

are referred to as the discrete Walsh functions. Thus,

w_s(k; j) = v_k( j),   k, j ∈ 0 : N − 1.                                  (4.9.10)

Figure 4.5 depicts Walsh functions for N = 8.
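The doubling rule (4.9.4) and formula (4.9.5) are easy to check numerically. The sketch below is only an illustration of ours (it assumes Python with NumPy is available; the function names are not from the book): it builds A_ν by the recursion and compares it with the bitwise-inner-product formula for the Walsh functions.

```python
import numpy as np

def hadamard(nu):
    """Build A_nu by the doubling rule (4.9.4), starting from A_1 = [[1, 1], [1, -1]]."""
    A = np.array([[1, 1], [1, -1]])
    for _ in range(nu - 1):
        A = np.block([[A, A], [A, -A]])
    return A

def walsh(k, j):
    """Discrete Walsh function v_k(j) = (-1)^{ {k,j}_s }: the exponent {k,j}_s is the
    number of binary positions where k and j both have a one."""
    return -1 if bin(k & j).count("1") % 2 else 1

# Check of (4.9.5) and (4.9.10) for s = 3, N = 8.
s, N = 3, 8
A = hadamard(s)
assert all(A[k, j] == walsh(k, j) for k in range(N) for j in range(N))
```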

4.9.4 Consider the signals w_ν(k) for a fixed ν.

Theorem 4.9.3 For each ν ∈ 0 : s the signals

    w_ν(0), w_ν(1), . . . , w_ν(N − 1)          (4.9.11)

are pairwise orthogonal and ‖w_ν(k)‖² = 2^ν holds for all k ∈ 0 : N − 1.



Fig. 4.5 Walsh functions for N = 8

Proof The assertion is known to be true for ν = 0. Let ν ∈ 1 : s. We take k, k′ ∈ 0 : N − 1 and represent them in the form k = l + pΔ_{ν+1}, k′ = l′ + p′Δ_{ν+1}, where l, l′ ∈ 0 : Δ_{ν+1} − 1 and p, p′ ∈ 0 : N_ν − 1. According to (4.9.8) and Lemma 2.1.4 we have

    ⟨w_ν(k), w_ν(k′)⟩ = Σ_{q,q′=0}^{Δ_{ν+1}−1} A_ν[l, q] A_ν[l′, q′] δ_N(q − q′ + (p − p′)Δ_{ν+1}).

If p ≠ p′ then ⟨w_ν(k), w_ν(k′)⟩ = 0. Let p = p′. In this case we have

    ⟨w_ν(k), w_ν(k′)⟩ = Σ_{q=0}^{Δ_{ν+1}−1} A_ν[l, q] A_ν[l′, q].

Note that by virtue of (4.9.5) the matrix A_ν is symmetric. Furthermore,

    A_ν A_ν = 2^ν I_{2^ν},  ν = 1, 2, . . . ,          (4.9.12)

where I_{2^ν} is an identity matrix of order 2^ν. For ν = 1 this is evident. We perform an induction step from ν − 1 to ν. According to (4.9.4)

    A_ν A_ν = [ A_{ν−1}  A_{ν−1} ; A_{ν−1}  −A_{ν−1} ] [ A_{ν−1}  A_{ν−1} ; A_{ν−1}  −A_{ν−1} ]
            = [ 2A_{ν−1}A_{ν−1}  O ; O  2A_{ν−1}A_{ν−1} ]
            = [ 2^ν I_{2^{ν−1}}  O ; O  2^ν I_{2^{ν−1}} ] = 2^ν I_{2^ν}.

Validity of equality (4.9.12) is ascertained. On the basis of the stated properties of the Hadamard matrix A_ν we obtain

    ⟨w_ν(k), w_ν(k′)⟩ = Σ_{q=0}^{Δ_{ν+1}−1} A_ν[l, q] A_ν[q, l′] = (A_ν A_ν)[l, l′] = 2^ν I_{2^ν}[l, l′].

Now we can conclude that the scalar product ⟨w_ν(k), w_ν(k′)⟩ is nonzero only when p = p′ and l = l′, i.e. only when k = k′. In the latter case ‖w_ν(k)‖² = 2^ν for all k ∈ 0 : N − 1.
The theorem is proved. □


Essentially, we have ascertained that for each ν ∈ 0 : s the signals (4.9.11) form an orthogonal basis in the space C^N. In particular, the Walsh functions v_0, v_1, . . . , v_{N−1} of the form (4.9.9) constitute an orthogonal basis. This basis is referred to as the Walsh–Hadamard basis or, more often, the Walsh basis. Note that ‖v_k‖² = N holds for all k ∈ 0 : N − 1.

4.10 Fast Walsh Transform

4.10.1 Any signal x ∈ C^N can be expanded over the Walsh basis:

    x = (1/N) Σ_{k=0}^{N−1} x_s(k) v_k.          (4.10.1)

Here x_s(k) = ⟨x, v_k⟩ or, in the explicit form,

    x_s(k) = Σ_{j=0}^{N−1} x(j) v_k(j),  k ∈ 0 : N − 1.          (4.10.2)

The transform W N that maps a signal x ∈ C N to a signal xs = W N (x) with com-


ponents (4.10.2) is referred to as the discrete Walsh transform (DWT). By analogy
with DFT, a signal xs is referred to as a Walsh spectrum of a signal x. Formula (4.10.1)
can be interpreted as the inversion formula for DWT.

We will consider a question of fast calculation of a Walsh spectrum.


4.10.2 For each ν ∈ 0 : s a signal x ∈ C^N can be expanded over the basis (4.9.11):

    x = (1/2^ν) Σ_{k=0}^{N−1} x_ν(k) w_ν(k).          (4.10.3)

Here x_ν(k) = ⟨x, w_ν(k)⟩. In the case ν = s formula (4.10.3) coincides with (4.10.1). This follows from (4.9.10).
The recurrence relations (4.9.1) for the basis functions generate recurrence relations for the coefficients of the expansion (4.10.3):

    x_0(k) = x(k),  k ∈ 0 : N − 1;
    x_ν(l + pΔ_{ν+1}) = x_{ν−1}(l + 2pΔ_ν) + x_{ν−1}(l + (2p + 1)Δ_ν),
    x_ν(l + Δ_ν + pΔ_{ν+1}) = x_{ν−1}(l + 2pΔ_ν) − x_{ν−1}(l + (2p + 1)Δ_ν),          (4.10.4)
    p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_ν − 1,  ν = 1, . . . , s.

When ν = s, we obtain the Walsh spectrum xs of the signal x.


Scheme (4.10.4) is referred to as the decimation-in-time fast Walsh transform.
This scheme requires only additions; the number of operations is N log2 N .
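As an illustration of scheme (4.10.4), here is a minimal in-place sketch of the decimation-in-time fast Walsh transform. It is our own code, not taken from the book; plain Python with no external libraries is assumed. At stage ν it combines samples at distance Δ_ν inside blocks of length Δ_{ν+1}, so only additions and subtractions are used.

```python
def fwt(x):
    """Fast Walsh transform by (4.10.4): returns x_s(k) = <x, v_k>, k = 0..N-1,
    for N = 2**s, using N*log2(N) additions/subtractions."""
    y = list(x)
    N = len(y)
    h = 1                                   # h = Delta_nu = 2**(nu - 1)
    while h < N:
        for start in range(0, N, 2 * h):    # block of length Delta_{nu+1} = 2*h
            for l in range(start, start + h):
                a, b = y[l], y[l + h]
                y[l], y[l + h] = a + b, a - b   # sums in the first half, differences in the second
        h *= 2
    return y

def ifwt(X):
    """Inversion: since A_s A_s = N * I by (4.9.12), applying the same butterflies once
    more and dividing by N recovers the samples, in agreement with scheme (4.10.5) below."""
    N = len(X)
    return [v / N for v in fwt(X)]
```

For example, fwt([1, 0, 0, 0, 0, 0, 0, 0]) returns eight ones, which is the Walsh spectrum of the unit pulse δ_N.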
Relation (4.10.4) can be inverted:

    x_{ν−1}(l + 2pΔ_ν) = ½ [x_ν(l + pΔ_{ν+1}) + x_ν(l + Δ_ν + pΔ_{ν+1})],
    x_{ν−1}(l + (2p + 1)Δ_ν) = ½ [x_ν(l + pΔ_{ν+1}) − x_ν(l + Δ_ν + pΔ_{ν+1})],          (4.10.5)
    p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_ν − 1,  ν = s, s − 1, . . . , 1.

Herein x(k) = x_0(k), k ∈ 0 : N − 1. Scheme (4.10.5) makes it possible to quickly reconstruct the samples of a signal x given in the form (4.10.1). The reconstruction is performed on the main period.
4.10.3 The sequence of orthogonal bases {w_ν(k; j)}_{k=0}^{N−1}, ν = 0, 1, . . . , s, generates wavelet bases and a wavelet packet. This is done in the same way as in the case of the sequence {f_ν(k; j)}_{k=0}^{N−1}, ν = 0, 1, . . . , s (see par. 4.4.1). The block structure of the bases w_ν is utilized; this structure follows from the formula analogous to (4.2.8):

    w_ν(l + pΔ_{ν+1}; j) ≡ w_ν(l; j − pΔ_{ν+1}),
    p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_{ν+1} − 1,  ν = 0, 1, . . . , s.

We will not go into details. We just note that the branching scheme (a) in Fig. 4.2 in this case also generates the Haar basis related to decimation in time.

4.11 Ordering of Walsh Functions

4.11.1 Signals of the exponential basis are ordered by frequency. The index k in the notation

    u_k(j) = ω_N^{kj} = exp(i 2πkj/N),  k = 0, 1, . . . , N − 1,

is the frequency of the signal u_k(j) in the following sense: while j varies from 0 to N (to the beginning of the next period), the argument of the complex number u_k(j), equal to 2πkj/N, monotonically increases from 0 to 2πk, i.e. the point u_k(j) runs around the unit circle of the complex plane k times.
We will consider the Walsh functions

    v_k(j) = (−1)^{{k, j}_s} = exp(iπ {k, j}_s),  k = 0, 1, . . . , N − 1,

from this point of view. Unfortunately, the value {k, j}_s does not increase monotonically together with j. An example of this value's behavior for N = 2³ and k = 2 = (0, 1, 0)_2 is presented in Table 4.3.
To attain monotonicity, we represent v_k(j) in the form

    v_k(j) = (−1)^{Σ_{α=0}^{s−1} k_α (j_α + j_{α+1}·2 + ··· + j_{s−1}·2^{s−1−α})}.

Note that

    j/2^α = j_{s−1} 2^{s−1−α} + ··· + j_{α+1}·2 + j_α + j_{α−1} 2^{−1} + ··· + j_0 2^{−α},

so

    ⌊j/2^α⌋ = j_{s−1} 2^{s−1−α} + ··· + j_{α+1}·2 + j_α.

We obtain

    v_k(j) = (−1)^{Σ_{α=0}^{s−1} k_α ⌊j/2^α⌋},  j ∈ 0 : N − 1.

Introducing the notation

    θ_k(j) = Σ_{α=0}^{s−1} k_α ⌊j/2^α⌋

we come to the representation

Table 4.3 Behavior of the value {k, j}₃

    j        0  1  2  3  4  5  6  7
    {k, j}₃  0  0  1  1  0  0  1  1

    v_k(j) = (−1)^{θ_k(j)},  j ∈ 0 : N − 1.          (4.11.1)

Formula (4.11.1) is valid for j = N = 2^s as well. In this case we have

    θ_k(N) = Σ_{α=0}^{s−1} k_α 2^{s−α} = 2 Σ_{α=0}^{s−1} k_α 2^{s−1−α} = 2 rev_s(k)

(a definition of the permutation rev_s can be found in Sect. 1.4). Therefore, the right side of (4.11.1) for j = N is equal to unity. By virtue of N-periodicity, the left side of (4.11.1) equals unity too. Indeed, v_k(N) = v_k(0) = 1. Thus, equality (4.11.1) holds for j ∈ 0 : N.
We rewrite (4.11.1) in the form

    v_k(j) = exp(iπ θ_k(j)),  j ∈ 0 : N.

It is evident that the function θ_k(j) varies from 0 to 2 rev_s(k), monotonically non-decreasing, while j increases from 0 to N. As a consequence, the argument π θ_k(j) of the complex number v_k(j) varies from 0 to 2π rev_s(k), monotonically non-decreasing, while j increases from 0 to N. Hence the point v_k(j) runs around the unit circle of the complex plane rev_s(k) times. Thus the number rev_s(k) is treated as the frequency of the function v_k.
We denote ṽ_k = v_{rev_s(k)}. The function ṽ_k has frequency k because rev_s(rev_s(k)) = k. It can be represented in the form

    ṽ_k(j) = (−1)^{Σ_{α=0}^{s−1} k_{s−1−α} j_α},  k, j ∈ 0 : N − 1.

The Walsh functions ṽ_0, ṽ_1, . . . , ṽ_{N−1} are ordered by frequency. They comprise the Walsh–Paley basis of the space C^N.
Figure 4.6 depicts the functions ṽ_k(j) for N = 8.
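For a quick numerical illustration, the following sketch (our own illustrative code; plain Python assumed, names are not from the book) lists the frequency rev_s(k) of each Walsh function v_k and evaluates the frequency-ordered functions ṽ_k = v_{rev_s(k)}.

```python
def rev(k, s):
    """Bit reversal rev_s(k): reverse the s binary digits of k (see Sect. 1.4)."""
    r = 0
    for _ in range(s):
        r, k = (r << 1) | (k & 1), k >> 1
    return r

def walsh_paley(k, j, s):
    """Frequency-ordered Walsh function v~_k(j) = v_{rev_s(k)}(j) = (-1)^{sum_a k_{s-1-a} j_a}."""
    return -1 if bin(rev(k, s) & j).count("1") % 2 else 1

s, N = 3, 8
print([rev(k, s) for k in range(N)])   # frequencies of v_0, ..., v_7: [0, 4, 2, 6, 1, 5, 3, 7]
```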

4.11.2 There exists another ordering of Walsh functions—by the number of sign changes on the main period. To clarify this matter we need to return to Hadamard matrices (see par. 4.9.2).
We denote by wal_ν(k) the number of sign changes in the row of the Hadamard matrix A_ν with the index k ∈ 0 : Δ_{ν+1} − 1. According to (4.9.4) we have wal_1(0) = 0 and wal_1(1) = 1.

Theorem 4.11.1 The following recurrence relations hold:

    wal_1(k) = k,  k ∈ 0 : 1;
    wal_ν(2k) = wal_{ν−1}(k),          (4.11.2)
    wal_ν(2k + 1) = 2^ν − 1 − wal_{ν−1}(k),          (4.11.3)
    k ∈ 0 : Δ_ν − 1,  ν = 2, . . . , s.



Fig. 4.6 Walsh functions ordered by frequency, for N = 8


Proof The first relation is true. We will verify (4.11.2) and (4.11.3). Recall that A_ν[k, j] = (−1)^{{k, j}_ν}. Hence it follows that for k, j ∈ 0 : Δ_ν − 1 there hold

    A_ν[2k, 2j] = A_ν[2k, 2j + 1] = A_ν[2k + 1, 2j] = A_{ν−1}[k, j],
    A_ν[2k + 1, 2j + 1] = −A_{ν−1}[k, j].

At first, let us show that (4.11.2) holds. Since A_ν[2k, 2j] = A_ν[2k, 2j + 1], we may disregard the elements A_ν[2k, 2j + 1] while determining the number of sign changes. The remaining elements are A_ν[2k, 2j] = A_{ν−1}[k, j]; they have wal_{ν−1}(k) sign changes by definition. Relation (4.11.2) is ascertained.
Let us rewrite equality (4.11.3) in the form

    wal_ν(2k) + wal_ν(2k + 1) = 2^ν − 1.          (4.11.4)

We introduce submatrices of order two

    G_j = [ A_ν[2k, j − 1]  A_ν[2k, j] ; A_ν[2k + 1, j − 1]  A_ν[2k + 1, j] ],  j = 1, . . . , Δ_{ν+1} − 1.

We will show that one of the rows of G_j has a sign change while the other one does not. Let us consider two cases.
(a) j = 2j′ + 1, j′ ∈ 0 : Δ_ν − 1. We write

    G_j = [ A_ν[2k, 2j′]  A_ν[2k, 2j′ + 1] ; A_ν[2k + 1, 2j′]  A_ν[2k + 1, 2j′ + 1] ]
        = [ A_{ν−1}[k, j′]  A_{ν−1}[k, j′] ; A_{ν−1}[k, j′]  −A_{ν−1}[k, j′] ].

It is obvious that only the second row of G_j has a sign change.
(b) j = 2j′, j′ ∈ 1 : Δ_ν − 1. In this case we have

    G_j = [ A_ν[2k, 2(j′ − 1) + 1]  A_ν[2k, 2j′] ; A_ν[2k + 1, 2(j′ − 1) + 1]  A_ν[2k + 1, 2j′] ]
        = [ A_{ν−1}[k, j′ − 1]  A_{ν−1}[k, j′] ; −A_{ν−1}[k, j′ − 1]  A_{ν−1}[k, j′] ].

We see that G_j has exactly one sign change, either in the first row or in the second one.
The sequence G_1, . . . , G_{Δ_{ν+1}−1} accumulates Δ_{ν+1} − 1 sign changes in the rows of the matrix A_ν with indices 2k and 2k + 1, which conforms to (4.11.4).
The theorem is proved. □


Corollary 4.11.1 The mapping k → wal_ν(k) is a permutation of the set {0, 1, . . . , Δ_{ν+1} − 1}.

Indeed, this is evident for ν = 1. If it is true for ν − 1 then it is true for ν as well, because by virtue of (4.11.2) the even indices wal_ν(2k) take the values from 0 to Δ_ν − 1, and the odd indices wal_ν(2k + 1), in accordance with (4.11.3), take the values from Δ_ν to Δ_{ν+1} − 1.

4.11.3 Recall that A_s[k, j] = v_k(j) for k = 0, 1, . . . , N − 1 (see par. 4.9.3). Therefore the value wal_s(k) is the number of sign changes of the Walsh function v_k(j) on the main period. As it was ascertained above, the number of sign changes differs for different Walsh functions and varies from 0 to N − 1.
We denote v̂_k = v_{wal_s^{−1}(k)}. As long as wal_s(wal_s^{−1}(k)) = k, the function v̂_k has k sign changes on the main period. The Walsh functions v̂_0, v̂_1, . . . , v̂_{N−1} are ordered by the number of sign changes.
Figure 4.7 shows the functions v̂_k(j) for N = 8.

4.11.4 With the aid of the permutations revs and wals we defined frequency and
number of sign changes of Walsh functions. It turns out that these permutations are
bound with each other through the permutation greys (see Sect. 1.4).

Fig. 4.7 Walsh functions ordered by sign changes, for N = 8

Theorem 4.11.2 For each ν ∈ 1 : s there holds the equality

    grey_ν(wal_ν(k)) = rev_ν(k),  k ∈ 0 : Δ_{ν+1} − 1.          (4.11.5)

Proof Let us recall the recurrence relations for the permutations rev_ν and grey_ν:

    rev_1(k) = k,  k ∈ 0 : 1;
    rev_ν(2k) = rev_{ν−1}(k),
    rev_ν(2k + 1) = 2^{ν−1} + rev_{ν−1}(k),
    k ∈ 0 : Δ_ν − 1,  ν = 2, 3, . . . ;

    grey_1(k) = k,  k ∈ 0 : 1;
    grey_ν(k) = grey_{ν−1}(k),
    grey_ν(2^ν − 1 − k) = 2^{ν−1} + grey_{ν−1}(k),
    k ∈ 0 : Δ_ν − 1,  ν = 2, 3, . . .

For ν = 1 formula (4.11.5) is valid. We perform an induction step from ν − 1


to ν.

We will consider the cases of even and odd k separately. Let k = 2k′, k′ ∈ 0 : Δ_ν − 1. According to Theorem 4.11.1 and the inductive hypothesis we have

    grey_ν(wal_ν(2k′)) = grey_ν(wal_{ν−1}(k′)) = grey_{ν−1}(wal_{ν−1}(k′)) = rev_{ν−1}(k′) = rev_ν(2k′).

Similarly, for k = 2k′ + 1, k′ ∈ 0 : Δ_ν − 1, we obtain

    grey_ν(wal_ν(2k′ + 1)) = grey_ν(2^ν − 1 − wal_{ν−1}(k′))
        = 2^{ν−1} + grey_{ν−1}(wal_{ν−1}(k′)) = 2^{ν−1} + rev_{ν−1}(k′) = rev_ν(2k′ + 1).

The theorem is proved. 




Corollary 4.11.2 The following equality is valid:

    wal_s^{−1}(k) = rev_s(grey_s(k)),  k ∈ 0 : N − 1.

Indeed, according to (4.11.5) we have

    grey_s(wal_s(k)) = rev_s(k),  k ∈ 0 : N − 1.

Now we pass to the inverse mappings:

    wal_s^{−1}(grey_s^{−1}(k)) = rev_s^{−1}(k),  k ∈ 0 : N − 1.

Replacing k by grey_s(k) in the latter relation we obtain

    wal_s^{−1}(k) = rev_s^{−1}(grey_s(k)),  k ∈ 0 : N − 1.

It remains to take into account that rev_s^{−1}(l) = rev_s(l) for l ∈ 0 : N − 1.
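Corollary 4.11.2 is convenient for generating the sequency ordering directly from bit operations. The sketch below is an illustration of ours (plain Python; it uses grey_s(k) = k ⊕ ⌊k/2⌋ from Sect. 1.4) and checks, for a small s, that the Walsh function with index rev_s(grey_s(k)) indeed has exactly k sign changes on the main period.

```python
def grey(k):
    """Binary-reflected Gray code: grey_s(k) = k XOR floor(k/2)."""
    return k ^ (k >> 1)

def rev(k, s):
    """Bit reversal rev_s(k)."""
    r = 0
    for _ in range(s):
        r, k = (r << 1) | (k & 1), k >> 1
    return r

def sign_changes(k, s):
    """wal_s(k): number of sign changes of v_k(j) = (-1)^{ {k,j}_s } over j = 0..2**s - 1."""
    row = [(-1) ** bin(k & j).count("1") for j in range(1 << s)]
    return sum(1 for a, b in zip(row, row[1:]) if a != b)

s = 3
assert all(sign_changes(rev(grey(k), s), s) == k for k in range(1 << s))
```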

4.12 Sampling Theorem in Walsh Basis

4.12.1 Let us write down the expansion of a signal x ∈ C^N over the Walsh basis ordered by frequency:

    x = (1/N) Σ_{k=0}^{N−1} ξ(k) ṽ_k,          (4.12.1)

where ξ(k) = ⟨x, ṽ_k⟩. We are interested in the case when ξ(k) = 0 for k ∈ Δ_{ν+1} : N − 1, ν ∈ 0 : s − 1.

We denote

    h_ν(j) = Σ_{q=0}^{N_ν−1} δ_N(j − q).

The function h_ν(j) is an N-periodic step. On the main period, it is equal to unity for j ∈ 0 : N_ν − 1 and is equal to zero for j ∈ N_ν : N − 1.

Theorem 4.12.1 Consider expansion (4.12.1). If ξ(k) = 0 holds for k ∈ Δ_{ν+1} : N − 1, ν ∈ 0 : s − 1, then

    x(j) = Σ_{l=0}^{Δ_{ν+1}−1} x(l N_ν) h_ν(j − l N_ν),  j ∈ 0 : N − 1.          (4.12.2)

Formula (4.12.2) shows that under the premises of the theorem the signal x(j) is a step-function. It equals x(l N_ν) for j ∈ l N_ν : (l + 1)N_ν − 1, l = 0, 1, . . . , Δ_{ν+1} − 1.
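Theorem 4.12.1 can be checked numerically before going through the proof. The sketch below is our own illustration (it assumes NumPy is available; names are ours): it builds a signal whose Fourier–Walsh coefficients vanish for k ≥ Δ_{ν+1} and verifies that the signal is constant on each block l N_ν : (l + 1)N_ν − 1.

```python
import numpy as np

def walsh_paley_matrix(s):
    """Rows are the frequency-ordered Walsh functions v~_k(j), j = 0..2**s - 1."""
    N = 1 << s
    rev = lambda k: int(format(k, "0{}b".format(s))[::-1], 2)
    return np.array([[(-1) ** bin(rev(k) & j).count("1") for j in range(N)]
                     for k in range(N)])

s, nu = 4, 2
N, Nnu = 1 << s, 1 << (s - nu)            # N = 16, N_nu = 4
V = walsh_paley_matrix(s)
xi = np.zeros(N)
xi[:1 << nu] = np.random.rand(1 << nu)    # xi(k) = 0 for k >= Delta_{nu+1} = 2**nu
x = V.T @ xi / N                          # x = (1/N) * sum_k xi(k) * v~_k
blocks = x.reshape(1 << nu, Nnu)
assert np.allclose(blocks, blocks[:, :1]) # x is a step-function, constant on each block
```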
4.12.2 We will precede the proof of the theorem with a few auxiliary assertions.

Lemma 4.12.1 The following formula is valid:

    δ_{Δ_{ν+1}}(rev_s(j)) = Σ_{q=0}^{N_ν−1} δ_N(j − q),  j ∈ 0 : N − 1.          (4.12.3)

Proof If j = (j_{s−1}, j_{s−2}, . . . , j_0)_2 then there holds

    rev_s(j) = j_0 2^{s−1} + ··· + j_{s−ν−1} 2^ν + j_{s−ν} 2^{ν−1} + ··· + j_{s−1}.

Denote p = j_0 2^{s−ν−1} + ··· + j_{s−ν−1} and r = j_{s−ν} 2^{ν−1} + ··· + j_{s−1}. Then rev_s(j) = pΔ_{ν+1} + r, where r ∈ 0 : Δ_{ν+1} − 1. On the strength of Δ_{ν+1}-periodicity of the unit pulse δ_{Δ_{ν+1}} we have

    δ_{Δ_{ν+1}}(rev_s(j)) = δ_{Δ_{ν+1}}(r).          (4.12.4)

Let j ∈ 0 : N_ν − 1. In this case j_{s−1} = j_{s−2} = ··· = j_{s−ν} = 0. In particular, r = 0. According to equality (4.12.4) we get δ_{Δ_{ν+1}}(rev_s(j)) = 1. And the right side of (4.12.3) also equals unity for the given indices j.
If j ∈ N_ν : N − 1 then at least one of the digits j_{s−1}, j_{s−2}, . . . , j_{s−ν} is nonzero. As a consequence we obtain r ≠ 0, i.e. r ∈ 1 : Δ_{ν+1} − 1. Formula (4.12.4) yields δ_{Δ_{ν+1}}(rev_s(j)) = 0. The right side of (4.12.3) also equals zero for the mentioned indices j.
The lemma is proved. □

Formula (4.12.3) can be compactly rewritten in the form δ_{Δ_{ν+1}}(rev_s(j)) = h_ν(j), whence it follows that

    h_ν(rev_s(j)) = δ_{Δ_{ν+1}}(j),  j ∈ 0 : N − 1.          (4.12.5)

Lemma 4.12.2 For j = (j_{s−1}, j_{s−2}, . . . , j_0)_2 there holds

    δ_{Δ_{ν+1}}(j) = Π_{α=0}^{ν−1} δ_2(j_α).          (4.12.6)

Proof We have

    j = j_{s−1} 2^{s−1} + ··· + j_ν 2^ν + j_{ν−1} 2^{ν−1} + ··· + j_0.

Denote p′ = j_{s−1} 2^{s−ν−1} + ··· + j_ν and r′ = j_{ν−1} 2^{ν−1} + ··· + j_0. Then j = p′Δ_{ν+1} + r′ and

    δ_{Δ_{ν+1}}(j) = δ_{Δ_{ν+1}}(r′).          (4.12.7)

Two cases are possible.
(1) j_0 = j_1 = ··· = j_{ν−1} = 0. In this case r′ = 0. According to (4.12.7) we have δ_{Δ_{ν+1}}(j) = 1. It is obvious that the right side of (4.12.6) equals unity as well.
(2) At least one of the digits j_0, j_1, . . . , j_{ν−1} is nonzero. Let j_{α′} = 1 for some α′ ∈ 0 : ν − 1. Then the factor δ_2(j_{α′}) in the right side of (4.12.6) is equal to zero, so the whole product equals zero. Since r′ ∈ 1 : Δ_{ν+1} − 1 in this case, equality (4.12.7) yields δ_{Δ_{ν+1}}(j) = 0.
The lemma is proved. □


Lemma 4.12.3 The following expansion holds:

    h_ν(j) = (1/Δ_{ν+1}) Σ_{k=0}^{Δ_{ν+1}−1} ṽ_k(j),  j ∈ 0 : N − 1.          (4.12.8)

Proof Denote the right side of (4.12.8) by f_ν(j). We will show that

    f_ν(rev_s(j)) = δ_{Δ_{ν+1}}(j).          (4.12.9)

According to the definition of the Walsh functions ṽ_k we have

    f_ν(rev_s(j)) = (1/Δ_{ν+1}) Σ_{k=0}^{Δ_{ν+1}−1} (−1)^{{rev_s(k), rev_s(j)}_s} = (1/Δ_{ν+1}) Σ_{k=0}^{Δ_{ν+1}−1} Π_{α=0}^{s−1} (−1)^{k_α j_α}.

Since k = (k_{ν−1}, k_{ν−2}, . . . , k_0)_2, we write

    f_ν(rev_s(j)) = (1/Δ_{ν+1}) Σ_{k_{ν−1}=0}^{1} ··· Σ_{k_0=0}^{1} Π_{α=0}^{ν−1} (−1)^{k_α j_α} = (1/Δ_{ν+1}) Π_{α=0}^{ν−1} Σ_{k_α=0}^{1} (−1)^{k_α j_α}.

Let us use the formula

    (1/2) Σ_{k_α=0}^{1} (−1)^{k_α j_α} = (1/2) Σ_{k_α=0}^{1} ω_2^{k_α j_α} = δ_2(j_α).

We obtain

    f_ν(rev_s(j)) = Π_{α=0}^{ν−1} δ_2(j_α).

Now (4.12.9) follows from (4.12.6).
Relations (4.12.9) and (4.12.5) yield f_ν(rev_s(j)) = h_ν(rev_s(j)). Replacing j by rev_s(j) in the latter equality we come to (4.12.8).
The lemma is proved. □

Formula (4.12.8) is an expansion of the step h ν over the Walsh basis ordered by
frequency.

Lemma 4.12.4 For all k and l from the set 0 : Δ_{ν+1} − 1 there holds

    {rev_ν(l), k}_ν = {rev_s(k), l N_ν}_s.          (4.12.10)

Proof Let l = (l_{ν−1}, l_{ν−2}, . . . , l_0)_2 and k = (k_{ν−1}, k_{ν−2}, . . . , k_0)_2. Then rev_ν(l) = (l_0, l_1, . . . , l_{ν−1})_2 and

    {rev_ν(l), k}_ν = l_{ν−1} k_0 + l_{ν−2} k_1 + ··· + l_0 k_{ν−1}.          (4.12.11)

Moreover,

    rev_s(k) = k_0 2^{s−1} + k_1 2^{s−2} + ··· + k_{ν−1} 2^{s−ν},
    l N_ν = l_{ν−1} 2^{s−1} + l_{ν−2} 2^{s−2} + ··· + l_0 2^{s−ν},

so that

    {rev_s(k), l N_ν}_s = k_0 l_{ν−1} + k_1 l_{ν−2} + ··· + k_{ν−1} l_0.          (4.12.12)

Comparing (4.12.11) and (4.12.12) we come to (4.12.10). The lemma is proved. □




Lemma 4.12.5 For j ∈ 0 : N − 1 and l ∈ 0 : Δ_{ν+1} − 1 there holds

    h_ν(j ⊕ l N_ν) = h_ν(j − l N_ν).          (4.12.13)

Proof Let us use the fact that δ_N(j ⊕ k) = δ_N(j − k) for all j and k from 0 : N − 1. The definition of h_ν yields

    h_ν(j ⊕ l N_ν) = Σ_{q=0}^{N_ν−1} δ_N((j ⊕ l N_ν) − q) = Σ_{q=0}^{N_ν−1} δ_N((j ⊕ l N_ν) ⊕ q) = Σ_{q=0}^{N_ν−1} δ_N(j ⊕ (l N_ν ⊕ q)).

Since l N_ν = l_{ν−1} 2^{s−1} + l_{ν−2} 2^{s−2} + ··· + l_0 2^{s−ν} and q = q_{s−ν−1} 2^{s−ν−1} + ··· + q_0, we have l N_ν ⊕ q = l N_ν + q. Hence

    h_ν(j ⊕ l N_ν) = Σ_{q=0}^{N_ν−1} δ_N(j ⊕ (l N_ν + q)) = Σ_{q=0}^{N_ν−1} δ_N(j − (l N_ν + q)) = Σ_{q=0}^{N_ν−1} δ_N((j − l N_ν) − q) = h_ν(j − l N_ν).

The lemma is proved. □




4.12.3 Now we turn to the proof of the theorem. On the basis of the theorem’s
hypothesis and formula (4.12.1) we write

ν+1 −1
1 
x( j) = ξ(k)
vk ( j), j ∈ 0 : N − 1. (4.12.14)
N k=0

We fix an integer j ∈ 0 : N − 1, introduce a function d(k) = vk ( j), k ∈ 0 : ν+1 −


1, and expand d over the Walsh basis that is defined on the set {0, 1, . . . , ν+1 − 1}
and ordered by frequency. To do that we calculate Fourier–Walsh coefficients of the
function d. According to (4.12.10) we have

ν+1 −1

D(l) = d(k) (−1){revν (l), k}ν
k=0
ν+1 −1 ν+1 −1
 
= vk ( j) (−1){revs (k), l Nν }s =
 
vk ( j)
vk (l Nν ),
k=0 k=0

l ∈ 0 : ν+1 − 1.

We note that

vk ( j  ) = 
vk ( j)
 vk ( j ⊕ j  ), j, j  ∈ 0 : N − 1. (4.12.15)

Indeed,

vk ( j  ) = (−1){revs (k), j}s +{revs (k), j }s
vk ( j)


s−1


s−1

= (−1)ks−1−α jα + jα 2 = (−1)ks−1−α ( j⊕ j )α = 
vk ( j ⊕ j  ).
α=0 α=0

Taking into account (4.12.15), (4.12.8) and (4.12.13), we gain

ν+1 −1

D(l) = 
vk ( j ⊕ l Nν ) = ν+1 h ν ( j ⊕ l Nν ) = ν+1 h ν ( j − l Nν ).
k=0

With the aid of the DWT inversion formula and equality (4.12.10) we can recon-
vk ( j) for k ∈ 0 : ν+1 − 1:
struct the values 
ν+1 −1
1 

vk ( j) = d(k) = D(l) (−1){revν (l), k}ν
ν+1 l=0
ν+1 −1

= h ν ( j − l Nν ) (−1){revs (k), l Nν }s
l=0
ν+1 −1

= h ν ( j − l Nν )
vk (l Nν ). (4.12.16)
l=0

It is remaining to substitute (4.12.16) into (4.12.14). This yields


ν+1 −1 ν+1 −1
1  
x( j) = ξ(k) h ν ( j − l Nν )vk (l Nν )
N k=0 l=0
ν+1 −1
   ν+1 −1 
1
= h ν ( j − l Nν ) ξ(k)
vk (l Nν )
l=0
N k=0
ν+1 −1

= h ν ( j − l Nν ) x(l Nν ).
l=0

The theorem is proved. 

4.12.4 Theorem 4.12.1 can be inverted. To be more precise, the following assertion is true.

Theorem 4.12.2 Let x be a step-function of the form

    x(j) = Σ_{l=0}^{Δ_{ν+1}−1} a(l) h_ν(j − l N_ν).          (4.12.17)

Then its Fourier–Walsh coefficients ξ(k), defined by the formula

    ξ(k) = Σ_{j=0}^{N−1} x(j) ṽ_k(j),

are equal to zero for k ∈ Δ_{ν+1} : N − 1.

Proof On the basis of (4.12.13), (4.12.8), and (4.12.15) we have

    ξ(k) = Σ_{l=0}^{Δ_{ν+1}−1} a(l) Σ_{j=0}^{N−1} h_ν(j ⊕ l N_ν) ṽ_k(j)
         = (1/Δ_{ν+1}) Σ_{l=0}^{Δ_{ν+1}−1} a(l) Σ_{j=0}^{N−1} ṽ_k(j) Σ_{p=0}^{Δ_{ν+1}−1} ṽ_p(j) ṽ_p(l N_ν)
         = (1/Δ_{ν+1}) Σ_{l=0}^{Δ_{ν+1}−1} a(l) Σ_{p=0}^{Δ_{ν+1}−1} ṽ_p(l N_ν) Σ_{j=0}^{N−1} ṽ_k(j) ṽ_p(j).

For each k ∈ Δ_{ν+1} : N − 1 and p ∈ 0 : Δ_{ν+1} − 1, the Walsh functions ṽ_k and ṽ_p are orthogonal. This guarantees that the coefficients ξ(k) are equal to zero for k ∈ Δ_{ν+1} : N − 1. The theorem is proved. □


4.13 Ahmed–Rao Bases

4.13.1 We still presume that N = 2^s. Let us take arbitrary nonzero complex numbers t(0), t(1), . . . , t(N/2 − 1) and construct one more sequence of bases in C^N:

    g_0(k) = δ_N(· − k),  k ∈ 0 : N − 1;
    g_ν(2l N_ν + p) = g_{ν−1}(l N_{ν−1} + p) + t(l) g_{ν−1}(l N_{ν−1} + N_ν + p),
    g_ν((2l + 1)N_ν + p) = g_{ν−1}(l N_{ν−1} + p) − t(l) g_{ν−1}(l N_{ν−1} + N_ν + p),          (4.13.1)
    p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_ν − 1,  ν = 1, . . . , s.

Formula (4.13.1) differs from (4.6.2) only in that the coefficient ω_N^{rev_s(2l)} is replaced with t(l). A transition from the basis g_{ν−1} to the basis g_ν can be written in a single line:

    g_ν((2l + σ)N_ν + p) = Σ_{τ=0}^{1} (−1)^{στ} [t(l)]^τ g_{ν−1}(l N_{ν−1} + τ N_ν + p),          (4.13.2)

where σ ∈ 0 : 1. In particular, for ν = 1 we have

    g_1(σ N_1 + p) = Σ_{τ=0}^{1} (−1)^{στ} [t(0)]^τ g_0(τ N_1 + p),          (4.13.3)
    p ∈ 0 : N_1 − 1,  σ ∈ 0 : 1.

Let us express signals of the ν-th level through signals of the zero level. In order to do this we introduce a sequence of matrices T_1, T_2, . . . , T_s by the rule

    T_1 = [ 1  t(0) ; 1  −t(0) ];
    T_ν[2l + σ, 2q + τ] = (−1)^{στ} [t(l)]^τ T_{ν−1}[l, q],          (4.13.4)
    l, q ∈ 0 : Δ_ν − 1,  σ, τ ∈ 0 : 1,  ν = 2, . . . , s.

Theorem 4.13.1 The following representation is valid:

    g_ν(l N_ν + p) = Σ_{q=0}^{Δ_{ν+1}−1} T_ν[l, q] g_0(q N_ν + p),          (4.13.5)
    p ∈ 0 : N_ν − 1,  l ∈ 0 : Δ_{ν+1} − 1,  ν = 1, . . . , s.

Proof When ν = 1, formula (4.13.5) coincides with (4.13.3) up to notations. We


perform an induction step from ν − 1 to ν, ν ≥ 2.
We take an index l ∈ 0 : ν+1 − 1 and represent it in a form l = 2l  + σ , σ ∈ 0 :
1, l  ∈ 0 : ν − 1. On the basis of (4.13.2), (4.13.4) and the inductive hypothesis we
write
 
gν (l Nν + p) = gν (2l  + σ )Nν + p

1
= (−1)σ τ [t (l  )]τ gν−1 (l  Nν−1 + τ Nν + p)
τ =0


1 
ν −1

= (−1)σ τ [t (l  )]τ Tν−1 [l  , q  ] g0 (q  Nν−1 + τ Nν + p)


τ =0 q  =0
1 
  ν −1
 
= Tν [2l  + σ, 2q  + τ ] g0 (2q  + τ )Nν + p
τ =0 q  =0
ν+1 −1

= Tν [l, q] g0 (q Nν + p).
q=0

The theorem is proved. 




From (4.13.5) with ν = s it follows that

gs (l; j) = Ts [l, j], l, j ∈ 0 : N − 1. (4.13.6)

4.13.2 Let us find an explicit expression for the elements of the matrix T_ν.

Theorem 4.13.2 Given ν ∈ 1 : s, for l, q ∈ 0 : Δ_{ν+1} − 1, l = (l_{ν−1}, . . . , l_0)_2, q = (q_{ν−1}, . . . , q_0)_2, the following formula is valid:

    T_ν[l, q] = (−1)^{{l,q}_ν} Π_{α=0}^{ν−1} [t(⌊l/2^{α+1}⌋)]^{q_α}.          (4.13.7)

Proof When ν = 1, formula (4.13.7) takes a form

T1 [l, q] = (−1)lq [t (0)]q , l, q ∈ 0 : 1.

The validity of this equality is obvious. We perform an induction step from ν − 1


to ν, ν ≥ 2.
Let l = 2l  + σ and q = 2q  + τ , where σ, τ ∈ 0 : 1 and l  , q  ∈ 0 : ν − 1.
Binary digits of the numbers l and l  and q and q  are bound with the relations

lα+1 = lα , qα+1 = qα , l0 = σ, q0 = τ.

Bearing in mind the inductive hypothesis and equality (4.13.4) we gain

Tν [l, q] = Tν [2l  + σ, 2q  + τ ] = (−1)σ τ [t (l  )]τ Tν−1 [l  , q  ]


ν−2

  q
= (−1)σ τ [t (l  )]τ (−1){l ,q }ν−1 t (l  /2α+1 ) α
α=0
ν−1

= (−1){l,q}ν [t (l/2)]q0 t (l  /2α )

.
α=1

It is remaining to verify that for α ∈ 1 : ν − 1 there holds

l  /2α  = l/2α+1 .

When α = ν − 1, the latter equality turns to an undoubtedly true assertion 0 = 0.



Let α ∈ 1 : ν − 2. If l  = lν−2 2ν−2 + · · · + l0 then l = 2l  + σ = lν−2

2ν−1 + · · · +

l0 2 + σ and
l  /2α  = lν−2

2ν−2−α + · · · + lα = l/2α+1 .

The theorem is proved. 




On the basis of (4.13.6) and (4.13.7) we come to the representation

    g_s(k; j) = (−1)^{{k,j}_s} Π_{α=0}^{s−1} [t(⌊k/2^{α+1}⌋)]^{j_α}
              = v_k(j) Π_{α=0}^{s−1} [t(⌊k/2^{α+1}⌋)]^{j_α},  k, j ∈ 0 : N − 1,          (4.13.8)

where v_k(j) is the Walsh function of order k.
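Representation (4.13.8) makes it easy to generate the signals g_s(k; ·) for any admissible choice of the coefficients t(l). The sketch below is our own illustration (it assumes NumPy; names are not from the book): it builds the matrix whose rows are g_s(k; ·) according to (4.13.8) and, for unimodular t(l), checks the orthogonality stated in Theorem 4.13.3 below.

```python
import numpy as np

def g_matrix(t, s):
    """Rows are g_s(k; j) = (-1)^{ {k,j}_s } * prod_a [t(floor(k / 2**(a+1)))]**j_a, cf. (4.13.8)."""
    N = 1 << s
    G = np.empty((N, N), dtype=complex)
    for k in range(N):
        for j in range(N):
            value = complex((-1) ** bin(k & j).count("1"))
            for a in range(s):
                if (j >> a) & 1:              # the factor enters only when the digit j_a = 1
                    value *= t[k >> (a + 1)]
            G[k, j] = value
    return G

s, N = 3, 8
t = np.exp(2j * np.pi * np.random.rand(N // 2))    # arbitrary coefficients with |t(l)| = 1
G = g_matrix(t, s)
assert np.allclose(G @ G.conj().T, N * np.eye(N))  # pairwise orthogonal rows, squared norm N
```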

4.13.3 Let us clarify the conditions under which the family g_ν consists of pairwise orthogonal signals.

Lemma 4.13.1 If |t(l)| = 1 for l ∈ 0 : Δ_ν − 1 then for any k, k′ ∈ 0 : Δ_{ν+1} − 1 there holds

    Σ_{j=0}^{Δ_{ν+1}−1} T_ν[k, j] \overline{T_ν[k′, j]} = 2^ν δ_{Δ_{ν+1}}(k − k′).          (4.13.9)

Proof For ν = 1 we have |t (0)| = 1, therefore


1 
1

T1 [k, j] T1 [k  , j] = (−1)k j [t (0)] j (−1)k j [t (0)] j
j=0 j=0


1


1
(k−k  ) j
= (−1)(k−k ) j = ω2 = 2δ2 (k − k  ).
j=0 j=0

We perform an induction step from ν − 1 to ν, ν ≥ 2.


Let k = 2l + σ , k  = 2l  + σ  , and j = 2q + τ , where σ, σ  , τ ∈ 0 : 1 and

l, l , q ∈ 0 : ν − 1. On the basis of (4.13.4) and the inductive hypothesis we write

ν+1 −1

Tν [k, j] Tν [k  , j]
j=0

 ν −1 
1
= Tν [2l + σ, 2q + τ ] Tν [2l  + σ  , 2q + τ ]
q=0 τ =0

 ν −1 
1

= (−1)σ τ [t (l)]τ Tν−1 [l, q] (−1)σ τ [t (l  )]τ Tν−1 [l  , q]
q=0 τ =0
 
ν −1  
1


= Tν−1 [l, q] Tν−1 [l  , q] (−1)(σ −σ )τ [t (l)]τ [t (l  )]τ
q=0 τ =0


1

= 2ν−1 δν (l − l  ) (−1)(σ −σ )τ [t (l)]τ [t (l  )]τ .
τ =0

If l = l  then the obtained expression equals to zero. Let l = l  . By virtue of the


lemma’s hypothesis |t (l)| = 1 for l ∈ 0 : ν − 1, hence

ν+1 −1
 
1

Tν [k, j] Tν [k  , j] = 2ν−1 (−1)(σ −σ )τ = 2ν δ2 (σ − σ  ).
j=0 τ =0

We see that the left side of (4.13.9) is nonzero only when l = l  and σ = σ  , i.e. only
when k = k  . In the latter case we have
ν+1 −1

|Tν [k, j]|2 = 2ν , k ∈ 0 : ν+1 − 1.
j=0

The lemma is proved. 




Actually, Lemma 4.13.1 states that, provided |t(l)| = 1 for l ∈ 0 : Δ_ν − 1, the rows of the matrix T_ν are pairwise orthogonal and the squared norm of each row is equal to 2^ν.

Theorem 4.13.3 Provided |t(l)| = 1 for l ∈ 0 : Δ_ν − 1, the signals

    g_ν(0), g_ν(1), . . . , g_ν(N − 1)          (4.13.10)

are pairwise orthogonal and ‖g_ν(k)‖² = 2^ν holds for all k ∈ 0 : N − 1.

Proof Let k = l Nν + p and k  = l  Nν + p  , where p, p  ∈ 0 : Nν − 1 and l, l  ∈ 0 :


ν+1 − 1. According to (4.13.5) and Lemma 2.1.4 we have

gν (k), gν (k  ) = gν (l Nν + p), gν (l  Nν + p  )
ν+1 −1 ν+1 −1
   
= Tν [l, q] Tν [l  , q  ] δ N (q − q  )Nν + ( p − p  ) .
q=0 q  =0

When q = q  , the corresponding terms in the double sum are equal to zero. Taking
into account (4.13.9) we gain

ν+1 −1

 
gν (k), gν (k ) = δ N ( p − p ) Tν [l, q] Tν [l  , q]
q=0

= 2ν δ N ( p − p  ) δν+1 (l − l  ).

We see that the scalar product gν (k), gν (k  ) is nonzero only when p = p  and l = l  ,
i.e. only when k = k  . In the latter case gν (k)2 = 2ν holds for all k ∈ 0 : N − 1.
The theorem is proved. 


Corollary 4.13.1 Provided |t(l)| = 1 for l ∈ 0 : N/2 − 1, for each ν ∈ 1 : s the signals (4.13.10) form an orthogonal basis in the space C^N. The squared norm of each signal is equal to 2^ν.

4.13.4 We will consider a particular case of choosing the coefficients t(l) whose moduli are equal to unity. Let us fix r ∈ 1 : s and put

    t^{(r)}(l) = ω_N^{rev_s(2l)}  for l ∈ 0 : Δ_r − 1,
    t^{(r)}(l) = 1               for l ∈ Δ_r : Δ_s − 1.          (4.13.11)

Consider the signals

    g_ν^{(r)}(0), g_ν^{(r)}(1), . . . , g_ν^{(r)}(N − 1)          (4.13.12)

defined by the recurrence relations (4.13.1) with the coefficients t(l) = t^{(r)}(l). Since |t^{(r)}(l)| ≡ 1, Theorem 4.13.3 yields that the signals (4.13.12) form an orthogonal basis in the space C^N for all ν ∈ 1 : s and r ∈ 1 : s. The collection of signals (4.13.12) with ν = s is referred to as the Ahmed–Rao basis with index r.
When r = s, formula (4.13.11) takes the form t^{(s)}(l) = ω_N^{rev_s(2l)}, l ∈ 0 : Δ_s − 1. In this case the recurrence relations (4.13.1) with t(l) = t^{(s)}(l) coincide with (4.6.2) and therefore generate a sequence of bases leading to the exponential basis. An explicit expression for the signals of this sequence is presented in Theorem 4.6.2. In particular, formula (4.6.1) yields

    g_s^{(s)}(k; j) = ω_N^{rev_s(k) j},  k, j ∈ 0 : N − 1.

When r = 1, we have t^{(1)}(l) = 1, l ∈ 0 : Δ_s − 1. On the basis of (4.13.8) we come to the formula g_s^{(1)}(k; j) = v_k(j), where v_k(j) is a Walsh function of order k.

Note that Walsh basis is obtained both by sequence (4.9.1) and sequence (4.13.1)
with t (l) ≡ 1.

4.13.5 We will investigate in more detail the Ahmed–Rao bases with index r ∈ 2 : s − 1. To do that we turn to the matrices T_ν^{(r)} whose elements are calculated by formula (4.13.7) with t(l) = t^{(r)}(l), l ∈ 0 : Δ_s − 1.

Lemma 4.13.2 The following equality is valid for ν ≤ r:

    T_ν^{(r)}[l, q] = ω_N^{rev_s(l) q},  l, q ∈ 0 : Δ_{ν+1} − 1.          (4.13.13)

Proof Let l = (lν−1 , . . . , l0 )2 and q = (qν−1 , . . . , q0 )2 . For ν ≤ r we have l/2α+1 


∈ 0 : r − 1 and

revs (2l/2α+1 )
t (r ) (l/2α+1 ) = ω N , α ∈ 0 : ν − 1. (4.13.14)

We will show that


α
t (r ) (l/2α+1 ) = (−1)−lα ω2N revs (l) , α ∈ 0 : ν − 1. (4.13.15)

When α = ν − 1, the left side of (4.13.15) equals to unity. Since

2ν−1 revs (l) = 2ν−1 (l0 2s−1 + · · · + lν−1 2s−ν )


= l0 2s+ν−2 + · · · + lν−1 2s−1 ,

the right side of (4.13.15) also equals to unity when α = ν − 1.


Let us take α ∈ 0 : ν − 2. We write
 
revs (2l/2α+1 ) = revs (lν−1 , . . . , lα+1 , 0)2
= lα+1 2s−2 + · · · + lν−1 2s−ν+α
= 2α [l0 2s−1 + · · · + lα 2s−α−1 + lα+1 2s−α−2 + · · ·
+ lν−1 2s−ν − (l0 2s−1 + · · · + lα 2s−α−1 )]
= 2α revs (l) − (l0 2s+α−1 + · · · + lα 2s−1 ).

From here and from (4.13.14) follows formula (4.13.15).


Formula (4.13.15) yields

ν−1
 ν−1
 2α revs (l)qα
t (r ) (l/2α+1 ) (−1)−lα qα ω N

=
α=0 α=0
ν−1
revs (l) qα 2α
= (−1){l,q}ν ω N α=0

revs (l)q
= (−1){l,q}ν ω N . (4.13.16)

Combining formula (4.13.7) with t (l) = t (r ) (l) and (4.13.16), we come to (4.13.13).
The lemma is proved. 


Now we will consider the case ν ≥ r + 1. Note that an integer l ∈ Δ_{r+1} : Δ_{ν+1} − 1 can be represented in the form

    l = (1, l_{p−1}, . . . , l_{r−1}, . . . , l_0)_2,  p ∈ r : ν − 1.          (4.13.17)

Here p is the index of the most significant nonzero digit in the binary code of l.
We introduce one more notation. If q = (q_{ν−1}, q_{ν−2}, . . . , q_0)_2 then we put

    [q]_α = (q_{ν−1}, q_{ν−2}, . . . , q_α, 0, . . . , 0)_2,  α ∈ 1 : ν − 1.

Lemma 4.13.3 Given ν ≥ r + 1 and q ∈ 0 : Δ_{ν+1} − 1, the following equality is valid:

    T_ν^{(r)}[l, q] = ω_N^{rev_s(l) q}  for l ∈ 0 : Δ_{r+1} − 1;
    T_ν^{(r)}[l, q] = (−1)^{{l,q}_{p−r+1}} ω_N^{rev_s(l) [q]_{p−r+1}}  for l = (1, l_{p−1}, . . . , l_0)_2,  p ∈ r : ν − 1.          (4.13.18)

Proof For l ∈ 0 : r +1 − 1 we have l/2α+1  ∈ 0 : r − 1, therefore formula


(4.13.14) holds. It is remaining to transit from (4.13.14) to (4.13.15). We will show
how to perform such a transition in this case.
Let l = (lr −1 , . . . , l0 )2 , so that lν−1 = · · · = lr = 0 and

2α revs (l) = 2α (l0 2s−1 + · · · + lr −1 2s−r )


= l0 2s−1+α + · · · + lr −1 2s−r +α .

When α ∈ r − 1 : ν − 1, the left side of (4.13.15) equals to unity. The right side
of (4.13.15) also equals to unity both when α ∈ r : ν − 1 and α = r − 1. When
α ∈ 0 : r − 2, we have to repeat the manipulations from the proof of the former
lemma replacing ν by r .
Now we take l ∈ r +1 : ν+1 − 1 and represent l in form (4.13.17). When α ∈
0 : p − r , we have p − (α + 1) ≥ r − 1, therefore

l/2α+1  = (1, l p−1 , . . . , lα+1 )2 ≥ 2r +1 = r .

According to (4.13.11) we write

t (r ) (l/2α+1 ) = 1, α ∈ 0 : p − r. (4.13.19)

We will show that, when α ∈ p − r + 1 : ν − 1, there holds


α
t (r ) (l/2α+1 ) = (−1)−lα ω2N revs (l) . (4.13.20)

When α ∈ p : ν − 1, the left side of (4.13.20) equals to unity. Since

2α revs (l) = 2α (l0 2s−1 + · · · + l p−1 2s− p + 2s− p−1 )


= l0 2s−1+α + · · · + l p−1 2s− p+α + 2s− p−1+α ,

the right side of (4.13.20) also equals to unity both when α ∈ p + 1 : ν − 1 and α =
p. When α ∈ p − r + 1 : p − 1, an inequality p − (α + 1) ≤ r − 2 holds, therefore
l/2α+1  ∈ 0 : r − 1 and

revs (2l/2α+1 ) = 2α revs (l) − (l0 2s−1+α + · · · + lα 2s−1 ).

From here and from (4.13.14) follows formula (4.13.20).


Let us turn to a calculation of Tν(r ) [l, q] for l ∈ r +1 : ν+1 − 1. On the basis
of (4.13.19) and (4.13.20) we write

ν−1
 ν−1
 2α revs (l)qα
t (r ) (l/2α+1 ) (−1)−lα qα ω N

=
α=0 α= p−r +1
ν−1
ν−1 revs (l) qα 2α
= (−1)− α= p−r +1 lα qα ωN α= p−r +1
.

Substituting this expression into (4.13.7) and taking into account that

ν−1

qα 2α = [q] p−r +1 ,
α= p−r +1

ν−1

{l, q}ν − lα qα = {l, q} p−r +1 ,
α= p−r +1

we obtain the required formula for Tν(r ) [l, q] when l ∈ r +1 : ν+1 − 1.


The lemma is proved. 


According to Theorem 4.13.1 signals (4.13.12) can be represented in form (4.13.5)


with Tν = Tν(r ) . Elements of the matrix Tν(r ) for r ∈ 2 : s − 1 are defined by for-
mula (4.13.13) or (4.13.18), depending on the relation between the values of r , ν
and l. Thereby the question of the explicit representation of the signals gν(r ) (k; j),
k, j ∈ 0 : N − 1, is resolved.

Theorem 4.13.4 The Ahmed–Rao functions g_s^{(r)}(k; j) for r ∈ 2 : s − 1 can be represented by the formula

    g_s^{(r)}(k; j) = ω_N^{rev_s(k) j}  for k ∈ 0 : Δ_{r+1} − 1;
    g_s^{(r)}(k; j) = (−1)^{{k,j}_{p−r+1}} ω_N^{rev_s(k) [j]_{p−r+1}}  for k = (1, k_{p−1}, . . . , k_0)_2,  p ∈ r : s − 1.

The proof immediately follows from (4.13.6) and (4.13.18).


4.13.6 Below we present a characteristic property of Ahmed–Rao functions.

Theorem 4.13.5 For each j, a function g_s^{(r)}(k; j) takes one of the 2^r values ω_{2^r}^q, q ∈ 0 : 2^r − 1.
Proof For k ∈ 0 : r +1 − 1, k = (kr −1 , . . . , k0 )2 , we have

revs (k) = k0 2s−1 + · · · + kr −1 2s−r = 2s−r revr (k),

so that Theorem 4.13.4 yields


revs (k) j revr (k) j
gs(r ) (k; j) = ω N = ω2r 2r
.

Let k = (1, k p−1 , . . . , k0 )2 , p ∈ r : s − 1. Then

revs (k) = k0 2s−1 + · · · + k p−1 2s− p + 2s− p−1 = 2s− p−1 k  ,

where k  = rev p+1 (k). At the same time,


s−1
[ j] p−r +1 = jα 2α = 2 p−r +1 j  ,
α= p−r +1

where j  =  j/2 p−r +1 . Hence

revs (k)[ j] p−r +1 = 2s−r k  j  .

On the basis of Theorem 4.13.4 we gain

revs (k)[ j] p−r +1 2r −1 {k, j} p−r +1 + k  j 


gs(r ) (k; j) = (−1){k, j} p−r +1 ω N = ω2r 2r
.

The theorem is proved. 



4.13.7 In conclusion we will consider the question of the frequency of discrete Ahmed–Rao functions.

Theorem 4.13.6 The frequency of the function g_s^{(r)}(k; j) is equal to rev_s(k) for each r ∈ 2 : s − 1.
Proof When k ∈ 0 : r +1 − 1, the assertion is evident.
Let k = (1, k p−1 , . . . , k0 )2 , p ∈ r : s − 1. Note that 1 ≤ p − r + 1 ≤ s − r ≤
s − 2. We take j ∈ 0 : N − 1 and represent it in a form j = j  2 p−r +1 + j  , where
j  ∈ 0 : 2 p−r +1 − 1 and j  ∈ 0 : 2s− p+r −1 − 1. Then
 revs (k) j  2 p−r +1
gs(r ) (k; j) = (−1){k, j } p−r +1
ωN .

We have
 p−r    p−r −α
(−1){k, j } p−r +1
= (−1) α=0 kα ( jα + jα+1 2+···+ j p−r 2 )
p−r 
= (−1) α=0 kα  j /α+1  .

Denote

p−r
θk ( j  ) = kα  j  /α+1 . (4.13.21)
α=0

In this notation
 
gs(r ) (k; j) = exp iπ [θk ( j  ) + 2−s+ p−r +2 revs (k) j  ] .

By putting
ζk ( j) = θk ( j  ) + 2−s+ p−r +2 revs (k) j 

we come to a representation
 
gs(r ) (k; j) = exp iπ ζk ( j) , j ∈ 0 : N − 1. (4.13.22)

One can see that formula (4.13.22) is valid for j = N as well. Indeed, on the
strength of N -periodicity we have gs(r ) (k; N ) = gs(r ) (k; 0) = 1. At the  same time

j  = 2s− p+r −1 and j  = 0 when j = N , so ζk (N ) = 2revs (k) and exp iπ ζk (N ) =
1.
Since ζk (0) = 0 and ζk (N ) = 2revs (k), to complete the proof of the theorem we
only need to verify that the function ζk ( j) monotonically nondecreases while j varies
from 0 to N .
We take j, l ∈ 0 : N and represent them in a form j = j  2 p−r +1 + j  , l =
 p−r +1
l2 + l  . Presuppose that j > l. Then j  ≥ l  because | j  − l  | ≤ 2 p−r +1 − 1.
When j = l  , the inequality ζk ( j) ≥ ζk (l) follows from monotonic nondecrease of


the function θk ( j  ).
Assume that j  > l  . Let us estimate θk ( j  ). According to (4.13.21) we gain


p−r

p−r
θk (2 p−r +1 − 1) ≤ kα 2 p−r +1 /α+1  = kα 2 p−r −α+1
α=0 α=0

p−r
= 2−s+ p−r +2 kα 2s−1−α ≤ 2−s+ p−r +2 revs (k).
α=0

On the strength of monotonic nondecrease of θk ( j  ) we have

0 ≤ θk ( j  ) ≤ 2−s+ p−r +2 revs (k)



and
|θk ( j  ) − θk (l  )| ≤ 2−s+ p−r +2 revs (k).

Now we write

ζk ( j) − ζk (l) ≥ 2−s+ p−r +2 revs (k)( j  − l  ) − |θk ( j  ) − θk (l  )|


≥ 2−s+ p−r +2 revs (k)[( j  − l  ) − 1] ≥ 0.

The theorem is proved. 




4.14 Calculation of DFT of Any Order

4.14.1 We will show that a discrete Fourier transform of any order can be reduced to a DFT whose order is a power of two.
We take the Fourier matrix F_N of order N ≥ 3 with the elements

    F_N[k, j] = ω_N^{kj},  k, j ∈ 0 : N − 1.

The Fourier spectrum of a signal x ∈ C^N can be represented in the form X = F̄_N x, where F̄_N[k, j] = ω_N^{−kj}.
We introduce two more matrices of order N: a diagonal matrix D_N with the diagonal elements

    D_N[k, k] = ω_{2N}^{−k²},  k ∈ 0 : N − 1,

and a Toeplitz matrix G_N with the elements

    G_N[k, j] = ω_{2N}^{(k−j)²},  k, j ∈ 0 : N − 1.

Theorem 4.14.1 The following factorization holds:

    F̄_N = D_N G_N D_N.          (4.14.1)

Proof We have

    F̄_N[k, j] = ω_N^{−kj} = ω_{2N}^{−k²} ω_{2N}^{(k−j)²} ω_{2N}^{−j²}.

At the same time,

    (D_N G_N D_N)[k, j] = Σ_{l=0}^{N−1} D_N[k, l] (G_N D_N)[l, j]
                        = Σ_{l=0}^{N−1} Σ_{l′=0}^{N−1} D_N[k, l] G_N[l, l′] D_N[l′, j]
                        = Σ_{l=0}^{N−1} D_N[k, l] G_N[l, j] ω_{2N}^{−j²}
                        = ω_{2N}^{−k²} G_N[k, j] ω_{2N}^{−j²} = ω_{2N}^{−k²} ω_{2N}^{(k−j)²} ω_{2N}^{−j²}.

Comparing the obtained formulae we come to (4.14.1). □



4.14.2 We denote a_k = ω_{2N}^{k²}. Then the row of the matrix G_N with index k can be represented in the form

    G_N[k, 0 : N − 1] = (a_k, a_{k−1}, . . . , a_1, a_0, a_1, . . . , a_{N−k−1}),  k ∈ 0 : N − 1.          (4.14.2)

Let m be the minimal natural number such that

    M := 2^m > 2N − 1.

We introduce a signal h̃ ∈ C^M with the following values on the main period:

    h̃[0 : M − 1] = (a_0, a_1, . . . , a_{N−1}, 0, . . . , 0, a_{N−1}, a_{N−2}, . . . , a_1),

where the number of zeros equals M − (2N − 1). We denote by G̃_M the Toeplitz matrix of order M with the elements

    G̃_M[k, j] = h̃(k − j),  k, j ∈ 0 : M − 1.          (4.14.3)

Note that

    G̃_M[k, j] = G_N[k, j]  for k, j ∈ 0 : N − 1.          (4.14.4)

Indeed, by the definition

    h̃(−j) = h̃(M − j) = a_j  for j ∈ 1 : N − 1.

For k ∈ 0 : N − 1 we obtain

    G̃_M[k, 0 : N − 1] = (h̃(k), h̃(k − 1), . . . , h̃(1), h̃(0), h̃(−1), . . . , h̃(−N + k + 1))
                      = (a_k, a_{k−1}, . . . , a_1, a_0, a_1, . . . , a_{N−k−1}).          (4.14.5)

Now (4.14.4) follows from (4.14.2) and (4.14.5).



Equality (4.14.4) can be written in the matrix form

    G_N = (I_N, O) G̃_M (I_N; O),          (4.14.6)

where (I_N; O) denotes the block column consisting of the identity matrix I_N of order N and the zero matrix O of order (M − N) × N, and (I_N, O) is its transpose.

4.14.3 According to (4.14.1) and (4.14.6) we have

    F̄_N x = D_N (I_N, O) G̃_M (I_N; O) D_N x.          (4.14.7)

We denote z = D_N x and augment the vector z with zeros to obtain an M-dimensional vector z̃. Now formula (4.14.7) can be rewritten as follows:

    F̄_N x = D_N (I_N, O) G̃_M z̃.          (4.14.8)

By the definition (4.14.3) of the matrix G̃_M we obtain

    (G̃_M z̃)[k] = Σ_{j=0}^{M−1} z̃(j) h̃(k − j),  k ∈ 0 : M − 1.          (4.14.9)

The right side of this equality contains the cyclic convolution z̃ ∗ h̃. The convolution theorem yields

    z̃ ∗ h̃ = F_M^{−1}(F_M(z̃) F_M(h̃)).          (4.14.10)

The spectrum H̃ = F_M(h̃) does not depend on x. It can be calculated in advance:

    H̃(k) = 1 + Σ_{j=1}^{N−1} ω_{2N}^{j²} ω_M^{−kj} + Σ_{j=1}^{N−1} ω_{2N}^{j²} ω_M^{−k(M−j)}
          = 1 + 2 Σ_{j=1}^{N−1} ω_{2N}^{j²} cos(2πkj/M),  k ∈ 0 : M − 1.          (4.14.11)

On the basis of (4.14.8)–(4.14.11) we come to the following scheme of calculation of the spectrum X = F̄_N x.
1. Form a vector z with the components z(j) = ω_{2N}^{−j²} x(j), j ∈ 0 : N − 1, and augment it with zeros up to an M-dimensional vector z̃.
2. Calculate Z̃ = F_M(z̃).
3. Component-wise multiply the vector Z̃ by the vector H̃ of the form (4.14.11). Denote the resulting vector by Ỹ.
4. Calculate X̃ = F_M^{−1}(Ỹ).
5. Find the components of the spectrum X by the formula

    X(k) = ω_{2N}^{−k²} X̃(k),  k ∈ 0 : N − 1.

The worst case for this scheme is N = 2^s + 1, when

    2N − 1 = 2^{s+1} + 1,  M = 2^{s+2} = 4(N − 1).
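The five steps above amount to what is commonly known as Bluestein's chirp algorithm. A minimal sketch of ours is given below (it assumes NumPy; np.fft.fft computes Σ_j x(j) ω_N^{−kj}, which matches the convention X = F̄_N x). The three length-M transforms have power-of-two order, and F_M(h̃) can be precomputed once N is fixed, as noted for (4.14.11).

```python
import numpy as np

def dft_any_order(x):
    """DFT of arbitrary order N via three power-of-two FFTs (steps 1-5 above)."""
    x = np.asarray(x, dtype=complex)
    N = x.size
    a = np.exp(1j * np.pi * np.arange(N) ** 2 / N)     # a_k = omega_{2N}^{k^2}
    M = 1 << (2 * N - 1).bit_length()                  # minimal power of two M > 2N - 1
    z = np.zeros(M, dtype=complex)
    z[:N] = x / a                                      # step 1: z(j) = omega_{2N}^{-j^2} x(j), zero-padded
    h = np.zeros(M, dtype=complex)                     # the signal h~ on its main period
    h[:N] = a
    h[M - N + 1:] = a[:0:-1]                           # h~(M - j) = a_j for j = 1..N-1
    conv = np.fft.ifft(np.fft.fft(z) * np.fft.fft(h))  # steps 2-4: cyclic convolution of z~ and h~
    return conv[:N] / a                                # step 5: X(k) = omega_{2N}^{-k^2} * (conv)(k)

# Quick check against NumPy's FFT:
# x = np.random.rand(12); assert np.allclose(dft_any_order(x), np.fft.fft(x))
```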

Exercises

4.1 Let N = 2^s and x be a signal such that x(j) = 0 for j ∈ Δ_{ν+1} : N − 1. Consider the signal y(j) = x(rev_s(j)), j ∈ 0 : N − 1. Prove that the samples y(j) are equal to zero when j is not divisible by N_ν.
4.2 Consider a signal

    h(j) = Σ_{q=0}^{Δ_{ν+1}−1} δ_N(j − q).

Prove that h(rev_s(j)) = δ_{N_ν}(j), j ∈ 0 : N − 1.
4.3 Let ψν ( j) and ϕν ( j) be the signals introduced in Sects. 4.8.1 and 4.8.3, respec-
tively. Prove that  
ϕν revs ( j) = ψν ( j), j ∈ 0 : N − 1.

4.4 Prove that the signal ψν ( j) satisfies to a recurrent relation

ψ1 ( j) = δ N ( j) − δ N ( j − N /2),

ψν ( j) = ψν−1 (2 j), ν = 2, . . . , s.

4.5 Expand the unit pulse δ N over the Haar basis related to decimation in time.
4.6 Expand the unit pulse δ N over the Haar basis related to decimation in frequency.
4.7 Let q = (qs−1 , qs−2 , . . . , q0 )2 . Prove that


s
δ N ( j − q) = 2−s + (−1)qν−1 2−ν ϕν ( j − q/ν+1  ν+1 ).
ν=1

4.8 Let q = (qs−1 , qs−2 , . . . , q0 )2 . Prove that


s
δ N ( j − q) = 2−s + (−1)qs−ν 2−ν ψν ( j − q Nν ).
ν=1

4.9 We introduce a signal


s 
N ν −1
−s −ν
x( j) = 2 + 2 ψν ( j − p).
ν=1 p=0

Prove that x( j) = 1 − 2 j/N for j ∈ 0 : N − 1.

4.10 We introduce a signal


s 
N ν −1

y( j) = 2−s + 2−ν ϕν ( j − pν+1 ).


ν=1 p=0

Prove that y( j) = 1 − 2 revs ( j)/N for j ∈ 0 : N − 1.

4.11 Consider unit steps of a form

k+1 −1

h k ( j) = δ N ( j − q), k ∈ 1 : s − 1.
q=0

Prove that there holds


s
h k ( j) = 2−s+k + 2−ν+k ϕν ( j).
ν=k+1

4.12 Prove that the following expansion is true for the unit steps from the previous
exercise:
k+1 −1

s−k 
h k ( j) = 2−s+k + 2−ν ψν ( j − p).
ν=1 p=0

4.13 We take expansion (4.8.12) of a signal x( j) and suppose that q = (qs−1 ,


qs−2 , . . . , q0 )2 . Prove that


s 
N ν −1

x( j ⊕ q) = 2−s α + (−1)qν−1 2−ν 


ξν ( p ⊕ q/ν+1 ) ϕν ( j − pν+1 )
ν=1 p=0

holds for j ∈ 0 : N − 1.

4.14 We take expansion (4.8.2) of a signal y( j). Prove that for all q ∈ Z there holds

 
N ν −1
s
 
y( j − q) = 2−s β + 2−ν (−1)( p−q)/Nν  
yν p − q Nν ψν ( j − p).
ν=1 p=0

4.15 We denote by Vν a linear hull spanned by the signals gν (k; j), k = 0, 1, . . . ,


Nν − 1. Prove that Vν = C Nν .

4.16 We denote by Wν a linear hull spanned by the signals ψν ( j − k), k =


0, 1, . . . , Nν − 1. Prove that
$ % &
Wν = w ∈ C Nν−1 % w( j − Nν ) = −w( j), j ∈ Z .

4.17 Prove that Walsh functions vk ( j) have the following properties:

1/vk ( j) = vk ( j),

vk ( j) vk  ( j) = vm ( j),

where m = k ⊕ k  .

4.18 Prove that


v2k+1 ( j) = v2k ( j) v1 ( j).

4.19 We denote by vk(1) ( j) Walsh functions with the main period 0 : N1 − 1, i.e.

vk(1) ( j) = (−1){k, j}s−1 , k, j ∈ 0 : N1 − 1.

Prove that the formulae

vk ( j) = vk(1) ( j), vk (N1 + j) = vk(1) ( j),

v N1 +k ( j) = vk(1) ( j), v N1 +k (N1 + j) = −vk(1) ( j)

are valid for k, j ∈ 0 : N1 − 1.

4.20 Prove that for N = 2s the equality

{N − 1 − j, j}s = 0

holds for all j ∈ 0 : N − 1.

4.21 Let vk ( j) be one of Walsh functions with the property vk (N − 1) = 1. Prove


that
vk (N − 1 − j) = vk ( j), j ∈ 0 : N − 1.

4.22 Let vk ( j) be one of Walsh functions with the property vk (N − 1) = −1. Prove
that
vk (N − 1 − j) = −vk ( j), j ∈ 0 : N − 1.

4.23 Prove that for N ≥ 8 and p ∈ 0 : N2 − 1 there holds


 
v3N2 + p ( j) = v N1 + p j + N2 N , j ∈ 0 : N − 1.

4.24 Functions rν ( j) = v Nν ( j), ν ∈ 1 : s, are referred to as Rademacher functions.


Plot the graphs of Rademacher functions on the main period for N = 8.
4.25 Express Walsh functions via Rademacher functions.
4.26 Prove that
N −1
1 
vk ( j) = δ N ( j), j ∈ 0 : N − 1.
N k=0

4.27 We turn to discrete Walsh transform W N (see par. 4.10.1). Prove the Parseval
equality
x2 = N −1 W N (x)2 .

4.28 Let z be a dyadic convolution of signals x and y from C N with N = 2s (see


par. 4.8.4). Prove that
W N (z) = W N (x) W N (y).

4.29 Calculate the Fourier spectrum of Walsh functions v1 , v2 , and v3 .


4.30 We denote by Rν the Fourier spectrum of the Rademacher function rν (see
Exercise 4.24). Prove that
⎧  
⎨ 2ν 1 − i cot (2l + 1)π for k = (2l + 1)ν , l ∈ 0 : Nν − 1;
Rν (k) = Nν−1

0 for others k ∈ 0 : N − 1.

4.31 Let V p = F N (v p ). Prove that for p ∈ 0 : N1 − 1 there holds

V2 p+1 (k) = V2 p (k − N1 ), k ∈ 0 : N − 1.

4.32 Along with the Fourier spectrum V p of the Walsh function v p with the main
period 0 : N − 1 we consider the Fourier spectrum V p(1) of the Walsh function v (1)
p
with the main period 0 : N1 − 1. Prove that the spectra V p and V p(1) are bound with
the relations
V p (2k) = 2V p(1) (k), V p (2k + 1) = 0,

N1 −1
1 
VN1 + p (2k) = 0, VN1 + p (2k + 1) = V (1) (l) h(k − l)
N1 l=0 p

for all p, k ∈ 0 : N1 − 1, where


 π(2 j + 1) 
h( j) = 2 1 − i cot .
N

4.33 Prove that for N ≥ 8 and p ∈ 0 : N2 − 1 there holds

V3N2 + p (k) = i k VN1 + p (k), k ∈ 0 : N − 1.

4.34 Prove that for Walsh functions 


vk ordered by frequency the formula

v2k ( j) = 
 vk (2 j)

is true for all k, j ∈ 0 : N1 − 1.

4.35 Let  vk be a Walsh function ordered by the number of sign changes (see
v N −1 = v1 .
par. 4.11.3). Prove that 

4.36 We take a step-function (4.12.17) and expand it over the Walsh basis ordered
by frequency. What are the coefficients ξ(k) of this expansion for k ∈ 0 : ν+1 − 1?

4.37 Calculate the inverse permutation wal−1


3 (k) values for k = 0, 1, . . . , 7.

4.38 We introduce a Frank–Walsh signal

f ( j1 N + j0 ) = v j1 ( j0 ), j1 , j0 ∈ 0 : N − 1,

and put F = W N 2 ( f ). Prove that

F(k1 N + k0 ) = N vk1 (k0 ), k1 , k0 ∈ 0 : N − 1.

Comments

Goertzel’s algorithm of fast calculation of a single component of Fourier spectrum


is published in [10].
Introduction of the recurrent sequence of orthogonal bases (4.2.1) is a crucial
point. Each signal of the ν-th basis in sequence (4.2.1) depends only on two signals
of the (ν − 1)-th basis. This produces a simple recurrent scheme of calculation of
coefficients of expansion of a signal over all bases in the sequence. The last step gives
us a complete Fourier spectrum. Moreover, sequence (4.2.1) lets us form wavelet
bases; coefficients of expansion of a signal over these bases are contained in the
table of coefficients of expansion of a signal over the bases of the initial recurrent
sequence.
We briefly reminded the contents of Sects. 4.2–4.4. They are written on the basis
of the papers [35–37].
The simplest wavelet basis is Haar basis. We pay a lot of attention to it in our book
(Sects. 4.5–4.8, Exercises 4.3–4.14). The fact is, there are two Haar bases. One of
them is related to decimation in time, it is well known (e.g. see [51] where an extensive
bibliography is collected). Another one is related to decimation in frequency, it is

introduced in [30]. The paper [25] is devoted to a comparative study of the two Haar
bases.
Among the results concerning Haar bases we lay emphasis on a convolution
theorem; to be more accurate, on a cyclic convolution theorem in case of Haar basis
related to decimation in frequency, and on a dyadic convolution theorem in case of
decimation in time. This topic has a long history. The final results were obtained
in [24].
The recurrent sequence of orthogonal bases (4.6.2) can be defined explicitly, as it
was done in [35], or can be deduced from (4.2.1) with the aid of reverse permutation.
In our book we use the latter approach as being more didactic. This approach is
published for the first time ever.
The recurrent sequence of orthogonal bases (4.9.1) that leads to fast Walsh trans-
form was studied in [37]. A question of ordering of discrete Walsh functions and
their generalizations was considered in [23, 49].
Note that the recurrent relations (4.2.1) that play a central role in construction
of fast algorithms have just been presented. However they could be deduced on the
basis of a factorization of a discrete Fourier transform matrix into a product of sparse
matrices. This requires more sophisticated technical tools, in particular, a Kronecker
product of matrices. On the subject of details and generalizations, refer to [12, 22,
32, 44, 46, 48, 50].
A sampling theorem in Walsh basis (in a more general form) is published in [39].
Ahmed–Rao bases are introduced in the book [1]. In the case N = 2^s they parametrically consolidate the Walsh basis and the exponential basis. Signals of the Walsh basis take the two values ±1; signals of the exponential basis take the N values ω_N^q, q ∈ 0 : N − 1. Signals of the Ahmed–Rao basis with index r ∈ 1 : s take the 2^r values ω_{2^r}^q, q ∈ 0 : 2^r − 1. Ahmed–Rao bases are studied in the papers [14–16].
In Sect. 4.14 we ascertain that calculation of DFT of any order can be reduced to
calculation of DFT whose order is a power of two. This fact is mentioned in [46,
pp. 208–211].
Solutions

To Chapter 1

1.1 The assertion is evident if N = 1. Let N ≥ 2 and j ∈ pN + 1 : ( p + 1)N for


some p ∈ Z. Then
 j − 1  j 
= p, − = −( p + 1),
N N
whence the required equality follows.

1.2 By a definition of a residual we write


 
n jn N = n j − (n j)/(n N ) n N = n( j −  j/N N ) = n j N .

1.3 For j ∈ 0 : N − 1 we have


   
f f ( j) =  f ( j)n N =  jn N n N
=  jn 2  N = jn 2  N N
=  j N = j.

1.4 Since f ( j) ∈ 0 : N − 1, it is sufficient to ascertain that f ( j) = f ( j ) for j =


j , j, j ∈ 0 : N − 1. Let us assume the contrary: f ( j) = f ( j ) for some j, j with
mentioned properties. Then

( j − j )n N = ( jn + l) − ( j n + l) N =  jn + l N −  j n + l N N
=  f ( j) − f ( j ) N = 0.

Taking into account relative primality of n and N we conclude that j − j is divisible


by N . But | j − j | ≤ N − 1, therefore j = j . This contradicts with our assumption.

1.5 We have n = pd and N = qd, herein gcd ( p, q) = 1. According to the result


of Exercise 1.2 we have

f ( j) =  j pdqd = d j pq = d  jq p q .

Hence
f ( j) ∈ {0, d, 2d, . . . , (q − 1)d}.

1.6 Use the equality of sets

{a j + bk | a ∈ Z, b ∈ Z} = { p( j − k) + qk | p ∈ Z, q ∈ Z}

and Theorem 1.2.1.


1.7 According to (1.3.1) we have a1 n 2 + a2 n 1 = 1 for some integer a1 and a2 . We
put j1 = a1 jn 1 and j2 = a2 jn 2 . Taking into account the result of Exercise 1.2 we
gain
 
 j1 n 2 + j2 n 1  N = a1 jn 1 n 2 + a2 jn 2 n 1 N
= a1 jn 2 n 1 n 2 + a2 jn 1 n 1 n 2 N
=  j (a1 n 2 + a2 n 1 ) N = j.

Let us verify that this representation of a number j is unique. Assume that there
is another representation j =  j1 n 2 + j2 n 1  N with j1 ∈ 0 : n 1 − 1, j2 ∈ 0 : n 2 − 1.
Then ( j1 − j1 )n 2 + ( j2 − j2 )n 1  N = 0 holds. It means that

( j1 − j1 )n 2 + ( j2 − j2 )n 1 = pN

for some integer p. Taking modulo n 1 residuals we come to the equality ( j1 −


j1 )n 2 n 1 = 0. On the strength of relative primality of n 1 and n 2 the difference j1 − j1
is divisible by n 1 . Since at the same time | j1 − j1 | ≤ n 1 − 1, it is necessary that
j1 = j1 . Similarly one can show that j2 = j2 .
1.8 According to (1.3.1) we have aα n α + bα m = 1 for some integer aα and bα ,
α ∈ 1 : s. Multiplying out these equalities we gain

(a1 a2 · · · as )(n 1 n 2 · · · n s ) + pm = 1.

Hence relative primality of the product N = n 1 n 2 · · · n s and the number m follows.


1.9 At first we consider the case of s = 2. Let j = p1 n 1 and j = p2 n 2 . Then 0 =
 jn 2 =  p1 n 1 n 2 . Since gcd (n 1 , n 2 ) = 1, Theorem 1.3.1 yields  p1 n 2 = 0. Taking
into account the result of Exercise 1.2 we gain

 jn 1 n 2 =  p1 n 1 n 1 n 2 = n 1  p1 n 2 = 0.

We perform an induction step from s to s + 1. By virtue of the inductive hypothesis


j is divisible by the product n 1 n 2 · · · n s . Furthermore j is divisible by n s+1 . According
to the result of the previous exercise the number n 1 n 2 · · · n s is relatively prime with
n s+1 . Therefore, j is divisible by the product N = (n 1 n 2 · · · n s )n s+1 .

1.10 For s = 2 we have N = n 1 n 2 , N1 = n 2 , and N2 = n 1 . The statement of the


exercise corresponds to formula (1.3.1). We perform an induction step from s to
s + 1.
Let N = n 1 · · · n s n s+1 . By virtue of the inductive hypothesis there exist integer
numbers a1 , a2 , . . . , as such that
s

aα = 1.
α=1
n s+1

Furthermore, by virtue of relative primality of Ns+1 and n s+1 there exist integer
numbers as+1 and bs+1 with the property

as+1 Ns+1 + bs+1 n s+1 = 1.

Combining two presented equalities we gain


s
bs+1 aα Nα = bs+1 n s+1 = 1 − as+1 Ns+1 ,
α=1

which is equivalent to what was required.

1.11 According to the result of the previous exercise there exist integer numbers
a1 , a2 , . . . , as such that
s
aα Nα = 1. (S.1)
α=1

We put jα = aα jn α . Taking into account the result of Problem 1.2 we gain
s s s
jα Nα = aα jn α Nα = aα j Nα n α Nα
α=1 N α=1 N α=1 N
s
= j aα N α = j.
α=1 N

Uniqueness of this representation is verified in the same way as in the solution of


Problem 1.7. 
It follows from (S.1) that aα n α Nα n α = 1. It means that the number pα = aα n α
is a unique solution of the equation x Nα n α = 1 on the set 0 : n α − 1. Taking into
account this fact we come to a final expression for the coefficients jα :

jα = aα n α j nα
=  pα jn α , α ∈ 1 : s.

1.12 We will show that the formula


s
k= k α pα N α , (S.2)
α=1 N

where kα ∈ 0 : n α − 1, establishes a bijective mapping between assemblies of coeffi-


cients (k1 , k2 , . . . , ks ) and numbers k ∈ 0 : N − 1. Since both considered sets have
the same number of elements, it is sufficient to verify that different integers k cor-
respond to different assemblies of coefficients. Assume that along with (S.2) there
holds
s
k= k α pα N α ,
α=1 N

where kα ∈ 0 : n α − 1. Then

s
(kα − kα ) pα Nα = 0,
α=1 N

so that
s
(kν − kν ) pν Nν = pN
ν=1

for some integer p. Taking modulo n α residuals we gain (kα − kα ) pα Nα n α = 0.


By virtue of relative primality of the numbers pα Nα and n α the difference kα − kα
is divisible by n α . Since at the same time |kα − kα | ≤ n α − 1, it is necessary that
kα = kα for all α ∈ 1 : s.
It is ascertained that any number k ∈ 0 : N − 1 allows representation (S.2) where
kα ∈ 0 : n α − 1. Let us rewrite formula (S.2) in a form
s
kν pν Nν = q N + k, q ∈ Z.
ν=1

Taking modulo n α residuals and bearing in mind the definition of pα we gain



kn α = kα  pα Nα n α nα
= kα ,

i. e. kα = kn α .

1.13 Let us show that


 ν   ν 
ν−1 ν−k ν−1
greyν jν−1 2 + jν−k 2 = jν−1 2 + greyν−1  jν−1 + jν−k 2 2ν−k .
k=2 k=2
(S.3)

When jν−1 = 0, this is another notation of the second line of relations (1.4.5). Let
jν−1 = 1. Equality (1.4.6) yields

 ν   ν 
greyν 2ν−1 + jν−k 2ν−k = 2ν−1 + greyν−1 (1 − jν−k ) 2ν−k .
k=2 k=2

As long as 1 − jν−k = 1 + jν−k 2 , the latter formula corresponds to (S.3) for


jν−1 = 1.
Further, the same considerations yield

 ν 
greyν−1  jν−1 + jν−k 2 2ν−k =  jν−1 + jν−2 2 2ν−2
k=2
 ν 
+ greyν−2  jν−2 + jν−k 2 2ν−k .
k=3

We have used the equality



 jν−1 + jν−2 2 +  jν−1 + jν−k 2 2 =  jν−2 + jν−k 2 .

Continuing this process we gain

greyν ( j) = jν−1 2ν−1 +  jν−1 + jν−2 2 2ν−2 +  jν−2 + jν−3 2 2ν−3


+ · · · +  j1 + j0 2 .

We could use more compact representation greyν ( j) = j ⊕  j/2.

1.14 By virtue of the result of the previous exercise it is sufficient to solve a system
of equations
jν−1 = pν−1 ,

 jν−k+1 + jν−k 2 = pν−k , k = 2, . . . , ν.

This system is solved in an elementary way. Indeed, for k ∈ 2 : ν we have

 pν−1 + pν−2 + · · · + pν−k 2 =  jν−1 + ( jν−1 + jν−2 ) + ( jν−2 + jν−3 )


+ · · · + ( jν−k+1 + jν−k )2 = jν−k .

1.15 We have
n
 3  n n−1
k − (k − 1)3 = k3 − k 3 = n3.
k=1 k=1 k=0

At the same time



n
 3  n
k − (k − 1)3 = (3k 2 − 3k + 1).
k=1 k=1

Therefore
n
3 k 2 = n 3 + 23 n(n + 1) − n = 1
2
n(2n 2 + 3n + 1) = 1
2
n(n + 1)(2n + 1),
k=1

which is equivalent to what was required.


1.16 We denote  
n
n
Sn = k .
k=1
k
n   
Taking into account the formula k
= n
n−k
we gain

n   n  
n n
Sn = [n − (n − k)] = (n − k ) = n 2n − Sn .
k=0
n−k k =0
k

Hence the required equality follows in an obvious way.


1.17 We have
kn N
εnk = ωkn
N = ωN .

It is remaining to take into account that by virtue of relative primality of n and N the
  N −1
set of powers kn N k=0 is a permutation of the set {0, 1, . . . , N − 1}.
1.18 We have a0 m + b0 n = 1 for some integer a0 and b0 . We write

ωmb0 ωna0 = ωmn ωnm = ωmn


b0 n a0 m a0 m+b0 n
= ωmn .

Hence
ωmn = ωmb0 m ωna0 n .

Putting p = b0 m and q = a0 n we obtain the required decomposition ωmn =


p q p q
ωm ωn . Let us prove its uniqueness. Assume that ωmn = ωm ωn for some p ∈ 0 :
p− p q −q
m − 1 and q ∈ 0 : n − 1. Then there holds ωm = ωn . Raising to the power n
( p− p )nm
we come to the equality ωm = 1. Hence ( p − p )nm = 0. We know that
gcd (m, n) = 1, therefore the difference p − p is divisible by m. Since at the same
time | p − p | ≤ m − 1, we gain p = p. Similarly one can show that q = q.
Relative primality of p and m and q and n follows from the relations

(a0 + b0 /mn) m + b0 m n = 1,



a0 n m + (a0 /nm + b0 ) n = 1.

1.19 Denote
$$P_{N-1}(z) = \sum_{k=0}^{N-1} z^k.$$
For $z \ne 1$ we have
$$P_{N-1}(z) = \frac{1 - z^N}{1 - z}.$$
It is clear that $P_{N-1}(\omega_N^j) = 0$ for $j = 1, 2, \ldots, N - 1$. Thus, we know $N - 1$ different roots $\omega_N, \omega_N^2, \ldots, \omega_N^{N-1}$ of the polynomial $P_{N-1}(z)$ of degree $N - 1$. This lets us write down the representation
$$P_{N-1}(z) = \prod_{j=1}^{N-1} (z - \omega_N^j).$$

1.20 Let
$$P_r(z) = \sum_{k=0}^{r} a_k z^k.$$
Then
$$\Delta P_r(j) = a_r \big[ (j+1)^r - j^r \big] + \sum_{k=0}^{r-1} a_k \big[ (j+1)^k - j^k \big] =: P_{r-1}(j).$$
Therefore the finite difference of the first order of a polynomial of degree $r$ is a polynomial of degree $r - 1$. The same considerations yield that the finite difference of the second order of a polynomial of degree $r$ is a polynomial of degree $r - 2$, and so on. The finite difference of the $r$-th order of a polynomial of degree $r$ is identically equal to a constant, so the finite difference of the $(r+1)$-th order of a polynomial of degree $r$ is identically equal to zero.

To Chapter 2

2.1 If $x$ is an even signal then $x(0) = \overline{x(0)}$ holds, so the value $x(0)$ is real. By virtue of $N$-periodicity we have $x(N - j) = \overline{x(j)}$ for $j \in 1 : N - 1$.

Conversely, let $x(0)$ be a real number and let $x(N - j) = \overline{x(j)}$ hold for $j \in 1 : N - 1$. Then $x(-j) = \overline{x(j)}$ holds for $j \in 0 : N - 1$, and $N$-periodicity yields $x(-j) = \overline{x(j)}$ for all $j \in \mathbb{Z}$.

This exercise shows how to determine an even signal through its values on the main period only.

2.2 The solution is similar to the previous one.

2.3 Assume that there exist an even signal $x_0$ and an odd signal $x_1$ such that $x(j) = x_0(j) + x_1(j)$ holds. Then
$$\overline{x(-j)} = \overline{x_0(-j)} + \overline{x_1(-j)} = x_0(j) - x_1(j).$$
Adding and subtracting the given equalities we gain
$$x_0(j) = \tfrac{1}{2} \big[ x(j) + \overline{x(-j)} \big], \qquad x_1(j) = \tfrac{1}{2} \big[ x(j) - \overline{x(-j)} \big]. \tag{S.4}$$
Now one can easily verify that the signals $x_0$ and $x_1$ of the form (S.4) are even and
odd, respectively, and that $x = x_0 + x_1$ holds.
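A numerical sketch of the decomposition (S.4) in Python with NumPy (not part of the original text). It assumes the book's convention that an even signal satisfies $x(-j) = \overline{x(j)}$, which is consistent with evenness producing a real spectrum (Theorem 2.2.3).

    # Sketch of (S.4); assumes the evenness convention x(-j) = conj(x(j)).
    import numpy as np

    def even_odd_parts(x):
        """Split an N-periodic signal into its even and odd parts as in (S.4)."""
        xr = np.conj(np.roll(x[::-1], 1))        # xr[j] = conj(x(-j)) on the main period
        return 0.5 * (x + xr), 0.5 * (x - xr)    # even part, odd part

    rng = np.random.default_rng(0)
    x = rng.normal(size=8) + 1j * rng.normal(size=8)
    x0, x1 = even_odd_parts(x)
    print(np.allclose(x0 + x1, x))               # True
    print(np.allclose(np.fft.fft(x0).imag, 0))   # the even part has a real spectrum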

2.4 A signal δmn (m j) is n-periodic; hence, it is sufficient to verify the equality


δmn (m j) = δn ( j) for j ∈ 0 : n − 1. It is obviously true for j = 0. If 1 ≤ j ≤ n − 1
then m ≤ m j ≤ mn − m hold, so δmn (m j) = 0. And δn ( j) = 0 for these j as well.
2.5 The signal $x(j) = \sum_{l=0}^{m-1} \delta_{mn}(j + ln)$, by virtue of Lemma 2.1.3, is $n$-periodic,
so it is sufficient to prove the equality x( j) = δn ( j) for j ∈ 0 : n − 1. Note that the
inequalities 0 ≤ j + ln ≤ mn − 1 hold for given j and l ∈ 0 : m − 1; moreover, the
left inequality turns into an equality only when j = 0 and l = 0. Hence it follows
that x(0) = 1 = δn (0) and x( j) = 0 = δn ( j) for j ∈ 1 : n − 1.

2.6 We will use Lemma 2.1.4. Taking into account that $r \le N - 1$ we gain
$$\|\Delta^r(\delta_N)\|^2
= \Big\langle \sum_{s=0}^{r} (-1)^{r-s} \binom{r}{s}\, \delta_N(\cdot + s),\;
\sum_{s'=0}^{r} (-1)^{r-s'} \binom{r}{s'}\, \delta_N(\cdot + s') \Big\rangle
= \sum_{s, s'=0}^{r} (-1)^{s+s'} \binom{r}{s} \binom{r}{s'}\, \delta_N(s - s')
= \sum_{s=0}^{r} \binom{r}{s}^2.$$
It is not difficult to show that
$$\sum_{s=0}^{r} \binom{r}{s}^2 = \binom{2r}{r}.$$
To do so we should take the identity $(1+z)^r (1+z)^r = (1+z)^{2r}$ and equate the coefficients of $z^r$. We come to the more compact formula
$$\|\Delta^r(\delta_N)\|^2 = \binom{2r}{r}.$$

2.7 Lemma 2.1.1 yields
$$\sum_{l=0}^{m-1} x(s + ln) = \sum_{l=0}^{m-1} \sum_{j=0}^{N-1} x(j)\, \delta_N(s + ln - j)
= \sum_{j=0}^{N-1} x(j) \sum_{l=0}^{m-1} \delta_{mn}\big( (s - j) + ln \big)
= \sum_{j=0}^{N-1} x(j)\, \delta_n(s - j).$$
We used the result of Exercise 2.5 in the last transition.

2.8 It is sufficient to verify that the equality $\langle j \rangle_{kN} = 0$ holds if and only if $\langle j \rangle_k = 0$ and $\langle j \rangle_N = 0$.

Let $\langle j \rangle_{kN} = 0$. It means that $j = p\,kN$ holds for some integer $p$. Hence it follows that $j$ is divisible both by $k$ and by $N$, i.e. that $\langle j \rangle_k = 0$ and $\langle j \rangle_N = 0$.

The converse proposition constitutes the contents of Problem 1.9 (for $s = 2$).

2.9 When k and N are relative primes, the mapping j → k j N is a permutation of


the set {0, 1, . . . , N − 1}, therefore

N −1 N −1 N −1 N −1
 
δ N (k j + l) = δ N k j N + l = δ N ( j + l) = δ N ( j) = 1.
j=0 j=0 j =0 j=0

2.10 The DFT inversion formula yields that the signal oddity condition $x(-j) = -\overline{x(j)}$ is equivalent to the identity
$$\sum_{k=0}^{N-1} X(k)\, \omega_N^{-kj} = -\sum_{k=0}^{N-1} \overline{X(k)}\, \omega_N^{-kj}, \qquad j \in \mathbb{Z},$$
which, in turn, holds if and only if $\overline{X(k)} = -X(k)$, or $X(k) = -\overline{X(k)}$, for all $k \in \mathbb{Z}$.
The latter characterizes the spectrum $X$ as pure imaginary.

2.11 Linearity of DFT yields
$$X(k) = A(k) + i\, B(k),$$
where the spectra $A$ and $B$ are even (see Theorem 2.2.3). Further,
$$\overline{X(N-k)} = \overline{X(-k)} = \overline{A(-k)} - i\, \overline{B(-k)} = A(k) - i\, B(k).$$
It is remaining to add and subtract the obtained equalities.

We see that calculation of spectra of two real signals $a$ and $b$ is reduced to calculation of a spectrum of one complex signal $x = a + ib$.
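The remark above invites a numerical illustration. The following NumPy sketch (not from the book) recovers the spectra of two real signals from a single FFT of $x = a + ib$, using $A(k) = \tfrac{1}{2}[X(k) + \overline{X(N-k)}]$ and $B(k) = [X(k) - \overline{X(N-k)}]/(2i)$.

    # Sketch of Exercise 2.11: two real spectra from one complex DFT.
    import numpy as np

    def two_real_spectra(a, b):
        """Return A = F_N(a) and B = F_N(b) from the single spectrum X = F_N(a + i*b)."""
        X = np.fft.fft(a + 1j * b)
        Xr = np.conj(np.roll(X[::-1], 1))    # Xr[k] = conj(X(N - k))
        return 0.5 * (X + Xr), (X - Xr) / 2j

    rng = np.random.default_rng(1)
    a, b = rng.normal(size=16), rng.normal(size=16)
    A, B = two_real_spectra(a, b)
    print(np.allclose(A, np.fft.fft(a)), np.allclose(B, np.fft.fft(b)))   # True True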

2.12 The DFT inversion formula yields


 N /2 N /2−1

1 kj kj
xa ( j) = X (k) ω N + X (k) ω N
N k=0 k=1
⎡ ⎤
N /2 N −1
1 ⎣ kj (N −k) j ⎦
= X (k) ω N + X (N − k) ω N .
N k=0 k=N /2+1

The spectrum X of a real signal x is even, therefore

N −1 N −1
(N −k) j −k j
X (N − k) ω N = X (k) ω N .
k=N /2+1 k=N /2+1

Taking into account that Re z = Re z we gain


⎡ ⎤
N /2 N −1
1 ⎣ kj kj ⎦
Re xa ( j) = Re X (k) ω N + Re X (k) ω N
N k=0 k=N /2+1

= Re x( j) = x( j).

2.13 We take a real signal x and correspond it with a complex signal xa with a
spectrum ⎧
⎪ X (k) for k = 0,

X a (k) = 2X (k) for k ∈ 1 : (N − 1)/2,


0 for k ∈ (N + 1)/2 : N − 1.

Let us show that Re xa = x. We have


(N −1)/2 (N −1)/2

1 kj kj
xa ( j) = X (k) ω N + X (k) ω N .
N k=0 k=1

Since
(N −1)/2 N −1 N −1
kj (N −k) j −k j
X (k) ω N = X (N − k) ω N = X (k) ω N ,
k=1 k=(N +1)/2 k=(N +1)/2

we gain $ %
N −1
1 kj
Re xa ( j) = Re X (k) ω N = Re x( j) = x( j).
N k=0

2.14 Let us verify, for instance, the second equality. We have

N −1 N −1
−(N /2+k) j −k j
X (N /2 + k) = x( j) ω N = (−1) j x( j) ω N
j=0 j=0
N /2−1 N /2−1
−k(2 j) −k(2 j+1)
= x(2 j) ω N − x(2 j + 1) ω N
j=0 j=0
N /2−1
  −k j
= x(2 j) − ω−k
N x(2 j + 1) ω N /2 .
j=0

2.15 Let us verify, for instance, the second equality. We have

N /2−1 N /2−1
−(2k+1) j −(2k+1)(N /2+ j)
X (2k + 1) = x( j) ω N + x(N /2 + j) ω N
j=0 j=0
N /2−1
  − j −k j
= x( j) − x(N /2 + j) ω N ω N /2 .
j=0

2.16 We immediately obtain

N −1 N −1
π j −k j 1  −(2k−1) j −(2k+1) j 
X (k) = sin ω = ω2N − ω2N
j=0
N N 2i j=0
& '
−(2k−1)N −(2k+1)N
1 1 − ω2N 1 − ω2N
= −(2k−1)
− −(2k+1)
2i 1 − ω2N 1 − ω2N
& '
1 1 1
= −(2k−1)
− −(2k+1)
i 1 − ω2N 1 − ω2N
& '
−(2k−1) −(2k+1)
1 ω2N − ω2N
= −(2k−1) −(2k+1) −4k
i 1 − ω2N − ω2N + ω2N
π−2k
2 ω2N sin
= −(2k−1)
N
−(2k+1) −4k
1 − ω2N − ω2N + ω2N
π π
2 sin N sin
= −1 −2k
= N
π
.
ω2N
2k
− ω2N
1
− ω2N + ω2N cos 2πk
N
− cos N

j nj
2.17 Since (−1) j = ω2 = ω2n , Lemma 2.2.1 for N = 2n yields

N −1 N −1
−k j (n−k) j
X (k) = (−1) j ω N = ωN = N δ N (k − n).
j=0 j=0

πk
Let N = 2n + 1. We will show that X (k) = 1 + i tan N
holds for k ∈ 0 : N − 1.
The definition of DFT yields

n n−1
−2k j −k(2 j+1)
X (k) = (−1) 2j
ωN + (−1)2 j+1 ω N
j=0 j=0
n−1
−2k j
= ω−2kn
N + (1 − ω−k
N ) ωN .
j=0

In particular, X (0) = 1. Similar to (2.2.7), for k ∈ 1 : N − 1 we gain

1 − ω−2kn 1 − ω−2kn
X (k) = ω−2kn + (1 − ω−k
N )
N
= ω−2kn + N
1 − ω−2k 1 + ω−k
N N
N N
2 πk
= = 1 + i tan .
1 + ω−k
N
N

2.18 We write
n 2n n
−k j −k( j−N ) −k j
X (k) = j ωN + ( j − N ) ωN = j ωN .
j=0 j=n+1 j=−n

It is obvious that X (0) = 0. Let k ∈ 1 : N − 1. We denote z = ω−k


N . Taking into
account that z −n = z n+1 we gain

n n+1 n
(1 − z) X (k) = jz j − ( j − 1)z j = −nz −n − nz n+1 + zj
j=−n j=−n+1 j=−n+1
−n+1 −n
z −z n+1
z (z − 1)
= −2nz −n + = −2nz −n + = −N z −n .
1−z 1−z

Let us perform some transformations:

2π kn 2π kn (N − 1)π k (N − 1)π k
z −n = ωkn
N = cos + i sin = cos + i sin
N N N N
 π k π k 
= (−1)k cos − i sin ,
N N
πk  πk πk  πk  πk πk 
1 − z = 2 sin sin + i cos = 2i sin cos − i sin .
N N N N N N
We come to the final formula
$$X(k) = \frac{N i}{2} \cdot \frac{(-1)^k}{\sin(\pi k / N)}, \qquad k \in 1 : N - 1.$$
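A quick NumPy check of this closed form (not part of the original text). The signal is taken from the first line of the derivation: $x(j) = j$ for $j \in -n : n$, extended $N$-periodically, $N = 2n + 1$.

    # Sketch verifying the closed form of Exercise 2.18.
    import numpy as np

    n = 6
    N = 2 * n + 1
    j = np.arange(N)
    x = np.where(j <= n, j, j - N).astype(float)   # x(j) = j on -n..n, extended N-periodically
    X = np.fft.fft(x)

    k = np.arange(1, N)
    closed = 0.5 * N * 1j * (-1.0) ** k / np.sin(np.pi * k / N)
    print(np.allclose(X[1:], closed), np.isclose(X[0], 0.0))   # True True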
2.19 The definition of DFT yields

n N −1 n−1 n−1
−k j k(N − j) −k j
= n ω−kn
kj
X (k) = j ωN + (N − j) ω N N + j ωN + j ωN
j=0 j=n+1 j=1 j=1
⎧ ⎫
⎨ n−1 ⎬
−k j
= n (−1)k + 2 Re j ωN .
⎩ ⎭
j=1

It is clear that X (0) = n 2 . Let k ∈ 1 : N − 1. We will use formula

n−1
z  
jz j = 1 − nz n−1 + (n − 1)z n , z = 1 (S.5)
j=1
(1 − z) 2

(see Sect. 1.6). Since the number z = ω−k


N is other than unity and z = (−1) , for-
n k

mula (S.5) yields

ω−k  
n−1
−k j
j ωN = N
1 − n(−1)k ωkN + (n − 1)(−1)k . (S.6)
j=1
(1 − ω−k
N )
2

It is not difficult to verify that

ω−k 1
N
=− , k ∈ 1 : N − 1. (S.7)
(1 − ω−k πk
N )
2
4 sin2
N
Indeed,

ω−k ω−k 1
N
= N
=
(1 − ω−k
N )
2 1− 2 ω−k
N + ω−2k
− 2 + ω−k
N N ωkN
1 1
=−  =− .
2π k  πk
2 1 − cos 4 sin2
N N
On the basis of (S.6) and (S.7) we gain
⎧ ⎫
⎨ n−1 ⎬ + 1  πk ,
−k j
Re j ωN =− 1 − (−1)k + n(−1)k 2 sin2
⎩ ⎭ πk N
j=1 4 sin2
N
1 1 − (−1)k
= − n(−1)k − ,
2 πk
4 sin2
N
so that
1 − (−1)k
X (k) = − , k ∈ 1 : N − 1.
πk
2 sin2
N
2.20 According to Lemmas 2.1.2 and 2.2.1 we have

N −1 N −1 N −1
j 2 −k j ( j 2 −l 2 )−k( j−l)
ω−l +kl
2
X (k) X (k) = ωN N = ωN
j=0 l=0 j, l=0
N −1 N −1 N −1 N −1
( j−l)(( j−l)+2l−k) j ( j+2l−k)
= ωN = ωN
l=0 j=0 l=0 j=0
N −1 N −1 N −1
j ( j−k) 2 jl j ( j−k)
= ωN ωN = N ωN δ N (2 j).
j=0 l=0 j=0


It is clear that provided N is odd the equality
 |X (k)| = N holds for all
 k ∈ Z. As
for N = 2n, there holds |X (k)|2 = N 1 + ωn(n−k) = N 1 + (−1) n−k
, so in this
-  
N

case we have |X (k)| = N 1 + (−1)n−k for all k ∈ Z.

2.21 The definition of DFT and Lemma 2.1.2 yield
$$X_l(k) = \sum_{j=0}^{N-1} x(j+l)\, \omega_N^{-k(j+l) + kl}
= \omega_N^{kl} \sum_{j=0}^{N-1} x(j)\, \omega_N^{-kj} = \omega_N^{kl}\, X(k), \qquad k \in \mathbb{Z}.$$
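A one-line NumPy check of the shift theorem just proved (an illustrative sketch, not from the book).

    # Sketch of Exercise 2.21 (shift theorem).
    import numpy as np

    N, l = 16, 3
    rng = np.random.default_rng(2)
    x = rng.normal(size=N) + 1j * rng.normal(size=N)

    x_shift = np.roll(x, -l)                 # x_l(j) = x(j + l)
    k = np.arange(N)
    omega = np.exp(2j * np.pi / N)
    print(np.allclose(np.fft.fft(x_shift), omega ** (k * l) * np.fft.fft(x)))   # True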

2.22 We have
N −1
1  lj −l j  −k j
X l (k) = ω N + ω N x( j) ω N
2 j=0
⎛ ⎞
N −1 N −1
1⎝ −(k−l) j −(k+l) j ⎠
= x( j) ω N + x( j) ω N
2 j=0 j=0
 
= 21 X (k − l) + X (k + l) , k ∈ Z.

2.23 Since gcd ( p, N ) = 1, there exist integers r and q such that r p + q N = 1,


herein we can suppose that r ∈ 1 : N − 1 and gcd (r, N ) = 1. Taking into account
that the mapping j →  pj N is a permutation of the set {0, 1, . . . , N − 1}, we write

N −1 N −1
  −k j (r p+q N )   −r k pj N
Y p (k) = x  pj N ω N = x  pj N ω N
j=0 j=0
N −1
−r k N j  
= x( j) ω N = X r k N , k ∈ 0 : N − 1.
j=0

We see that Euler permutation j →  pj N of samples of a signal leads to Euler


permutation k → r k N of components of its spectrum. Here r p N = 1.
2.24 On the basis of the definition of DFT and the inversion formula we gain

N −1 N −1
$ N −1
%
−k j 1 lj −k j
X n (k) = x( j) ωn N = X (l) ω N ωn N
j=0 j=0
N l=0
⎧ ⎫
N −1 ⎨1 N −1 ⎬ N −1
− j (k−ln)
= X (l) ωn N = X (l) h(k − ln),
⎩N ⎭
l=0 j=0 l=0

N −1
1 kj
where h(k) = ωn N .
N j=0
 
2.25 By virtue of N -periodicity we have x( j) = x  j N , hence

n N −1 n−1 N −1
−k j −k(l N + p)
X n (k) = x( j) ωn N = x(l N + p) ωn N
j=0 l=0 p=0
⎛ ⎞
N −1 n−1 N −1
−kp −kp
= x( p) ωn N ωn−kl = ⎝ x( p) ωn N ⎠ n δn (k).
p=0 l=0 p=0

We come to the following formula:


$
n X (k/n) if kn = 0,
X n (k) =
0 otherwise.

2.26 The definition of DFT yields

N −1
 
X n (k) = x(l) ωn−kln
N = X k N , k ∈ 0 : n N − 1.
l=0

2.27 Let us represent a number j ∈ 0 : n N − 1 in a form j = ln + p, where p ∈


0 : n − 1 and l ∈ 0 : N − 1. Since  j/n = l, we gain

N −1 n−1 N −1 n−1
−k(ln+ p) −kp
X n (k) = x(l) ωn N = x(l) ω−kl
N ωn N .
l=0 p=0 l=0 p=0

n−1
1 kp
We denote h(k) = ωn N . Then
n p=0

 
X n (k) = n X k N h(k), k ∈ 0 : n N − 1.

2.28 According to (2.2.1) we have

n−1 N −1
−k jm −k j
Yn (k) = x( jm) ωnm = x( j ) ω N δm ( j )
j=0 j =0
⎧ ⎫
N −1 ⎨1 m−1 ⎬
−k j − pjn
= x( j) ω N ωmn
⎩m ⎭
j=0 p=0
m−1 N −1 m−1
1 − j (k+ pn) 1
= x( j) ω N = X (k + pn).
m p=0 j=0
m p=0

2.29 The inclusion $y_n \in C_n$ follows from Lemma 2.1.3. Let us calculate the spectrum of the signal $y_n$. We have
$$Y_n(k) = \sum_{j=0}^{n-1} \sum_{p=0}^{m-1} x(j + pn)\, \omega_n^{-k(j+pn)}
= \sum_{l=0}^{N-1} x(l)\, \omega_N^{-kml} = X(km).$$
The inversion formula
$$\sum_{p=0}^{m-1} x(j + pn) = \frac{1}{n} \sum_{k=0}^{n-1} X(km)\, \omega_n^{kj}, \qquad j \in 0 : n - 1,$$
is referred to as a Poisson summation formula. Here $x \in C_N$ and $N = mn$.
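The Poisson summation formula is easy to verify numerically. The sketch below (NumPy, not part of the original text) checks it for $N = mn$ with $m = 3$, $n = 5$.

    # Sketch of the Poisson summation formula from Exercise 2.29.
    import numpy as np

    m, n = 3, 5
    N = m * n
    rng = np.random.default_rng(3)
    x = rng.normal(size=N) + 1j * rng.normal(size=N)
    X = np.fft.fft(x)

    j = np.arange(n)
    left = x.reshape(m, n).sum(axis=0)           # sum_p x(j + p*n)
    k = np.arange(n)
    right = (X[k * m][None, :] * np.exp(2j * np.pi * np.outer(j, k) / n)).sum(axis=1) / n
    print(np.allclose(left, right))              # True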

2.30 The DFT inversion formula yields

m−1 m−1 N −1
1 l( p+ jm)
yn ( j) = x( p + jm) = X (l) ω N
p=0
N p=0 l=0

m−1 m−1 n−1


1 (qn+k)( p+ jm)
= X (qn + k) ω N
N p=0 q=0 k=0
⎧ ⎡ ⎤⎫
1
n−1 ⎨m−1 1
m−1 ⎬
(qn+k) p ⎦
= X (qn + k) ⎣ ωN ωk j .
n ⎩ m ⎭ n
k=0 q=0 p=0

On the strength of uniqueness of the expansion over an orthogonal basis we gain

m−1
Yn (k) = X (k + qn) h(k + qn),
q=0

m−1
1 pj
where h( j) = ωN .
m p=0


m−1
2.31 Let us introduce a signal yn ( j) = x( p + jm), yn ∈ Cn . Then y( j) =
p=0
yn ( j/m). We denote X = F N (x), Y = F N (y), and Yn = Fn (yn ). Taking into con-
sideration solutions of Exercises 2.27 (changing n by m and N by n) and 2.30, we
write  
Y (k) = m Yn kn h(k), k ∈ 0 : N − 1,

m−1
Yn (k) = X (k + qn) h(k + qn), k ∈ 0 : n − 1,
q=0

m−1
1 pj
where h( j) = ω N . Hence
m p=0

m−1
   
Y (k) = m h(k) X kn + qn h kn + qn , k ∈ 0 : N − 1.
q=0

2.32 We have
N −1 N
−k( j+1)+k −k j
X 1 (k) = c j+1 ω N = ωkN c j ωN
j=0 j=1
⎛ ⎞
N −1
−k j  
= ωkN ⎝ c j ωN + c N − c0 ⎠ = ωkN X 0 (k) + (c N − c0 ) .
j=0

2.33 A setting of the exercise states that y(k) = X (N − k) holds for all k ∈ Z.
According to (2.1.4) we have

N −1 N −1
(N −k) j kj
[F N (y)]( j) = X (N − k) ω N = X (k) ω N = N x( j).
k=0 k=0

Thereby we ascertained that calculation of inverse DFT of the spectrum X is reduced


to calculation of direct DFT of the signal y.

2.34 Let $X = F_N(x)$. Then the DFT inversion formula yields
$$[F_N^2(x)](j) = \sum_{k=0}^{N-1} X(k)\, \omega_N^{k(N-j)} = N\, x(N - j).$$
Applying the latter formula to the signal $F_N^2(x)$ we gain
$$[F_N^4(x)](j) = N\, [F_N^2(x)](N - j) = N^2\, x\big( N - (N - j) \big) = N^2\, x(j).$$
This result can be restated as follows: the mapping $N^{-2} F_N^4 : C_N \to C_N$ is the identity one.
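A NumPy check of both identities of Exercise 2.34 (illustrative sketch; numpy.fft.fft is the unnormalized DFT used here).

    # Sketch of Exercise 2.34: four applications of the DFT return N^2 times the signal.
    import numpy as np

    N = 12
    rng = np.random.default_rng(4)
    x = rng.normal(size=N) + 1j * rng.normal(size=N)

    F = lambda v: np.fft.fft(v)                           # unnormalized DFT
    print(np.allclose(F(F(F(F(x)))), N ** 2 * x))         # True
    print(np.allclose(F(F(x)), N * np.roll(x[::-1], 1)))  # F_N^2(x)(j) = N x(N - j)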

2.35 We have
N −1 N −1
1 −1 1 (k−l) j+l j
[F N (X ∗ Y )]( j) = 2 X (l) Y (k − l) ω N
N N k=0 l=0
N −1 N −1
1 lj (k−l) j
= X (l) ω N Y (k − l) ω N
N2 l=0 k=0
N −1 N −1
1 lj kj
= X (l) ω N Y (k) ω N = x( j) y( j),
N2 l=0 k=0

which is equivalent to what was required.

2.36 According to the result of Exercise 2.35 we have

X ∗ Y = N F N (x y) = N F N (1I) = N 2 δ N .

2.37 Provided the signals x and y are even, the corresponding spectra X and Y are
real (see Theorem 2.2.3). The spectrum of the convolution x ∗ y, which is equal to
X Y , is also real; hence, the convolution x ∗ y itself is even.

2.38 Auto-correlation is even because its Fourier transform is real. The immediate
proof is also possible:

N −1 N −1
Rx x (− j) = x(k) x(k + j) = x(k − j) x(k) = Rx x ( j).
k=0 k=0

2.39 The correlation theorem yields

N −1
 
Rx x ( j) = F N (Rx x ) (0) = |X (0)|2 .
j=0

2.40 The correlation theorem and the convolution theorem yield

F N (Ruu ) = |F N (u)|2 = |F N (x ∗ y)|2 = |X |2 |Y |2 ,

F N (Rx x ∗ R yy ) = F N (Rx x ) F N (R yy ) = |X |2 |Y |2 .

Hence it follows that F N (Ruu ) = F N (Rx x ∗ R yy ). The inversion formula yields


Ruu = Rx x ∗ R yy .
The obtained result can be stated this way: auto-correlation of convolution equals
to convolution of auto-correlations.

2.41 As long as $x$ and $y$ are delta-correlated signals, there holds $|X(k)|^2 \equiv R_{xx}(0)$
and $|Y(k)|^2 \equiv R_{yy}(0)$. On the basis of the Parseval equality and the convolution
theorem we gain

N −1
E(u) = N −1 E(U ) = N −1 |X (k) Y (k)|2 = Rx x (0) R yy (0) = E(x) E(y).
k=0

2.42 Let u = x ∗ y and Rx x = E(x) δ N , R yy = E(y) δ N . According to the results


of Exercises 2.40 and 2.41 we have

Ruu = Rx x ∗ R yy = E(x) E(y) δ N ∗ δ N = E(u) δ N ,

which was to be ascertained.

2.43 It is evident that E(v) = N 2 ; therefore, it is sufficient to prove that |V (k)| ≡ N .


The definition of DFT yields

N 2 −1 N −1 N −1 N −1
−k j j j −k( j1 N + j0 ) −k j0 j ( j0 −k)
V (k) = v( j) ω N 2 = ω N1 0 ω N 2 = ωN 2 ω N1
j=0 j1 , j0 =0 j0 =0 j1 =0
N −1
−k j   −kk
=N ω N 2 0 δ N j0 − k N = N ω N 2 N .
j0 =0

As a consequence we gain the required identity.



2.44 Let us investigate, for example, the case of an odd $N$. Since the number $s(sN + 1)$ is even for any $s \in \mathbb{Z}$, we have
$$a(j + sN) = \omega_{2N}^{(j+sN)(j+sN+1) + 2q(j+sN)} = a(j)\, \omega_{2N}^{sN(sN+1)} = a(j).$$
It means that $a \in C_N$. Further,
$$R_{aa}(j) = \sum_{k=0}^{N-1} a(k)\, \overline{a(k-j)} = \sum_{k=0}^{N-1} a(k+j)\, \overline{a(k)}
= \sum_{k=0}^{N-1} \omega_{2N}^{(k+j)(k+j+1) + 2q(k+j) - k(k+1) - 2qk}
= \omega_{2N}^{j(j+1) + 2qj} \sum_{k=0}^{N-1} \omega_{2N}^{2kj} = N\, a(j)\, \delta_N(j) = N\, \delta_N(j).$$
We ascertained that the signal $a$ is delta-correlated.
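A numerical check that $a(j) = \omega_{2N}^{j(j+1)+2qj}$ is delta-correlated (NumPy sketch, not from the book; the odd-$N$ case treated above).

    # Sketch of Exercise 2.44: cyclic autocorrelation of the quadratic-phase signal.
    import numpy as np

    N, q = 9, 2                                # odd N
    j = np.arange(N)
    a = np.exp(1j * np.pi * (j * (j + 1) + 2 * q * j) / N)   # omega_{2N}^{j(j+1)+2qj}

    # R_aa(t) = sum_k a(k + t) * conj(a(k))
    R = np.array([np.vdot(a, np.roll(a, -t)) for t in range(N)])
    print(np.allclose(R, np.r_[N, np.zeros(N - 1)]))          # True: N*delta_N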

2.45 If a binary signal x ∈ C N is delta-correlated then, in particular, Rx x (1) = 0


holds. It means that
N −1
x(k) x(k − 1) = 0.
k=0

Each product x(k) x(k − 1) equals either to +1 or to −1. Their sum can equal to
zero only when $N$ is even, i.e. when $N = 2n$.

Further, a binary delta-correlated signal satisfies the relation $|X(0)| = \sqrt{R_{xx}(0)} = \sqrt{N}$ or, in more detail,
$$\big| x(0) + x(1) + \cdots + x(N-1) \big| = \sqrt{2n}.$$
The left side of the latter equality is an integer number, so the square root of $2n$ must be an integer as well. This is possible only when $n = 2p^2$. Thus, binary delta-correlated signals can exist only for $N = 4p^2$.
If p = 1, a binary delta-correlated signal exists. It is, for instance,
x = (1, 1, 1, −1). There is a hypothesis that no binary delta-correlated signals exist
for p > 1. This hypothesis is not entirely proved yet.

2.46 Let us show that


$
Rx x ( j/n) if  jn = 0,
Rxn xn ( j) = (S.8)
0 for others j ∈ 0 : n N − 1.
 
In the solution of Exercise 2.26 it was ascertained that X n (k) = X k N , therefore
  3  32
Fn N (Rxn xn ) (k) = |X n (k)|2 = 3 X k N 3 , k ∈ 0 : n N − 1.

According to the same Exercise 2.26, the discrete Fourier transform of the signal from
   3  32
the right side of the formula (S.8) looks this way: F N (Rx x ) k N = 3 X k N 3 .
We gained that the DFTs of the signals from the left and from the right sides of
formula (S.8) are equal. Therefore, equal are the signals themselves.

2.47 The correlation theorem yields

Ru 1 v1 = U1 V 1 = (X Y )(W Z ),

Ru 2 v2 = U2 V 2 = (X W )(Y Z ).

The right sides of these relations are equal. Hence the left sides are equal too.

2.48 We have Rx y = O. Keeping the notations of the previous exercise we consec-


utively gain u 1 = Rx y = O, U1 = O, Ru 1 v1 = O, Ru 2 v2 = Ru 1 v1 = O. It is remaining
to recall that u 2 = Rxw and v2 = R yz .

2.49 Let k, k ∈ 0 : N − 1, k = k , and x = δ N (· − k), y = δ N (· − k ). According


to Lemma 2.1.4 we have

Rx y ( j) = δ N (· − k), δ N (· − k − j) = δ N (k − k − j)
 
= δ N j − k − k  N .

For j = k − k  N we gain Rx y ( j ) = 1. This characterizes the signals as being


correlated.

2.50 We take a linear combination of shifts of a signal $x$ and equate it to zero:
$$\sum_{k=0}^{N-1} c(k)\, x(j - k) = 0 \quad \text{for all } j \in \mathbb{Z}.$$
This condition can be rewritten in the form $c * x = O$, which by virtue of the convolution theorem is equivalent to the relation $C\,X = O$. Now it is clear that $C = O$ (and therefore $c = O$) if and only if all the components of the spectrum $X$ are nonzero.

2.51 The proof is similar to the proof of Lemma 2.6.1.

2.52 By virtue of the result of the previous exercise, we need to construct a signal $y$ such that $R_{xy} = \delta_N$. Using the correlation theorem we write the equivalent condition $X\,\overline{Y} = \mathbb{1}$. Here each component of the spectrum $X$ is nonzero (see Exercise 2.50), therefore $Y = (\overline{X})^{-1}$. The desired signal is obtained with the aid of the DFT inversion formula.
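A NumPy sketch of this construction (not part of the original text). It assumes the correlation convention $R_{xy}(j) = \sum_k x(k)\, \overline{y(k-j)}$ used in this chapter, so that the spectral condition reads $X\,\overline{Y} = \mathbb{1}$.

    # Sketch of Exercise 2.52: a signal biorthogonal to the shifts of x.
    import numpy as np

    N = 8
    rng = np.random.default_rng(5)
    x = rng.normal(size=N) + 0.1              # generic signal; spectrum assumed nonzero
    X = np.fft.fft(x)
    assert np.all(np.abs(X) > 1e-9)

    y = np.fft.ifft(1.0 / np.conj(X))
    # R_xy(t) = sum_k x(k) * conj(y(k - t)) should equal delta_N
    R = np.array([np.dot(x, np.conj(np.roll(y, t))) for t in range(N)])
    print(np.allclose(R, np.r_[1.0, np.zeros(N - 1)]))   # True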

2.53 The following inequalities are true:

N −1
1
|x( j)|2 ≤ max |x( j)|2 ,
N j∈0:N −1
j=0

N −1
|x( j)|2 ≥ max |x( j)|2 .
j∈0:N −1
j=0

Hence it follows that 1 ≤ p(x) ≤ N . The left inequality is fulfilled as an equality


only if |x( j)| ≡ const, and the right one only if a signal x has a single nonzero sample
on the main period.

2.54 Consider the equivalent equation in the spectral domain:
$$-\sum_{j=0}^{N-1} \Delta^2 x(j-1)\, \omega_N^{-kj} + c \sum_{j=0}^{N-1} x(j)\, \omega_N^{-kj} = \sum_{j=0}^{N-1} g(j)\, \omega_N^{-kj}.$$
Since
$$\sum_{j=0}^{N-1} \Delta^2 x(j-1)\, \omega_N^{-kj}
= \sum_{j=0}^{N-1} \big[ x(j+1) - 2x(j) + x(j-1) \big]\, \omega_N^{-kj}
= \big( \omega_N^k - 2 + \omega_N^{-k} \big) \sum_{j=0}^{N-1} x(j)\, \omega_N^{-kj}
= -4 \sin^2\Big( \frac{\pi k}{N} \Big)\, X(k),$$
the equation for the spectra can be rewritten as
$$\Big( 4 \sin^2 \frac{\pi k}{N} + c \Big)\, X(k) = G(k).$$
Hence
$$X(k) = G(k) \Big/ \Big( 4 \sin^2 (\pi k / N) + c \Big), \qquad k \in \mathbb{Z}.$$
The desired signal $x(j)$ is obtained with the aid of the DFT inversion formula.
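A NumPy sketch of this spectral solution method (not from the book). It assumes the equation reconstructed above, $-\Delta^2 x(j-1) + c\,x(j) = g(j)$ with $c > 0$.

    # Sketch of Exercise 2.54: solving the difference equation through the spectra.
    import numpy as np

    N, c = 16, 0.7
    rng = np.random.default_rng(6)
    g = rng.normal(size=N)

    k = np.arange(N)
    X = np.fft.fft(g) / (4.0 * np.sin(np.pi * k / N) ** 2 + c)
    x = np.fft.ifft(X).real

    # residual of -Delta^2 x(j-1) + c*x(j) - g(j)
    residual = -(np.roll(x, -1) - 2 * x + np.roll(x, 1)) + c * x - g
    print(np.allclose(residual, 0))     # True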

2.55 We denote $N = 2n + 1$. By virtue of 1-periodicity of the function $f(t)$, the signal $x(j) = f(t_j)$ is $N$-periodic. Note that
$$\exp(2\pi i k t_j) = \omega_N^{kj},$$
so the interpolation conditions take the form
$$x(j) = \sum_{k=-n}^{n} a(k)\, \omega_N^{kj}, \qquad j \in \mathbb{Z}. \tag{S.9}$$
Let us extend the vector of coefficients $a(k)$ periodically with period $N$ to all integer indices and rewrite formula (S.9) this way:
$$x(j) = \frac{1}{N} \sum_{k=0}^{N-1} \big[ N a(k) \big]\, \omega_N^{kj}, \qquad j \in \mathbb{Z}.$$
The latter equality has the form of a DFT inversion formula. Therefore,
$$N a(k) = \sum_{j=0}^{N-1} x(j)\, \omega_N^{-kj}$$
or
$$a(k) = \frac{1}{N} \sum_{j=0}^{N-1} f(t_j) \exp(-2\pi i k t_j), \qquad k \in \mathbb{Z}.$$
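A NumPy sketch of this computation (not part of the original text). The nodes $t_j = j/N$ are an assumption consistent with the identity $\exp(2\pi i k t_j) = \omega_N^{kj}$ used above.

    # Sketch of Exercise 2.55: trigonometric interpolation coefficients via the DFT.
    import numpy as np

    n = 5
    N = 2 * n + 1
    t = np.arange(N) / N
    f = lambda u: np.cos(2 * np.pi * u) + 0.3 * np.sin(6 * np.pi * u)   # 1-periodic test function

    a = np.fft.fft(f(t)) / N             # a(k) = (1/N) sum_j f(t_j) exp(-2*pi*i*k*t_j)
    k = np.arange(-n, n + 1)
    p = lambda u: np.real(sum(a[kk % N] * np.exp(2j * np.pi * kk * u) for kk in k))
    print(np.allclose(p(t), f(t)))       # the interpolation conditions hold: True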

To Chapter 3

3.1 We have
N −1
1 (N −k) j
br ( j) = (ω NN −k − 1)−r ω N
N k=1
N −1
1
(ωkN − 1)−r ω N = br ( j).
kj
=
N k=1

This guarantees reality of br ( j).

3.2 According to (3.1.4) and (3.1.3), for $j \in 1 : N - 1$ we have
$$b_1(j+1) - b_1(1) = \sum_{k=1}^{j} \big[ b_1(k+1) - b_1(k) \big] = \sum_{k=1}^{j} b_0(k) = -j/N.$$
Denoting $c = b_1(1)$ we gain
$$b_1(j) = c - (j-1)/N, \qquad j \in 2 : N.$$
The last equality is true for $j = 1$ as well.

The constant $c$ can be determined from the condition $\sum_{j=1}^{N} b_1(j) = 0$, equivalent to (3.1.2) for $r = 1$:
$$c = \frac{1}{N^2} \sum_{j=1}^{N} (j - 1) = \frac{N-1}{2N}.$$
We come to the formula
$$b_1(j) = \frac{1}{N} \Big( \frac{N+1}{2} - j \Big), \qquad j \in 1 : N.$$

3.3 According to (3.1.4) and the result of the previous exercise, for $j \in 1 : N - 1$ we have
$$b_2(j+1) - b_2(1) = \sum_{k=1}^{j} \big[ b_2(k+1) - b_2(k) \big] = \sum_{k=1}^{j} b_1(k)
= \frac{1}{N} \sum_{k=1}^{j} \Big( \frac{N+1}{2} - k \Big)
= \frac{1}{N} \Big[ \frac{N+1}{2}\, j - \frac{(j+1)\,j}{2} \Big] = \frac{j(N-j)}{2N}.$$
Denoting $c = b_2(1)$ we gain
$$b_2(j) = c + \frac{(j-1)(N-j+1)}{2N}, \qquad j \in 1 : N.$$
Like in the previous exercise, the constant $c$ can be determined from the condition $\sum_{j=1}^{N} b_2(j) = 0$, equivalent to (3.1.2) for $r = 2$:
$$c = -\frac{1}{2N^2} \sum_{j=1}^{N} (j-1)(N-j+1) = -\frac{1}{2N^2} \sum_{j=1}^{N-1} j(N-j)
= -\frac{1}{2N^2} \Big[ \frac{N^2(N-1)}{2} - \frac{(N-1)N(2N-1)}{6} \Big] = -\frac{N^2-1}{12N}.$$
We come to the formula
$$b_2(j) = -\frac{N^2-1}{12N} + \frac{(j-1)(N-j+1)}{2N}, \qquad j \in 1 : N.$$
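The closed forms of Exercises 3.2 and 3.3 can be checked against the spectral formula for $b_r$ appearing in the solution of Exercise 3.1, $b_r(j) = \frac{1}{N} \sum_{k=1}^{N-1} (\omega_N^k - 1)^{-r}\, \omega_N^{kj}$. A NumPy sketch (not from the book):

    # Sketch checking Exercises 3.2-3.3 against the spectral formula for b_r.
    import numpy as np

    N = 16
    j = np.arange(1, N + 1)
    k = np.arange(1, N)
    omega = np.exp(1j * 2 * np.pi / N)

    def b(r):
        return np.array([np.sum((omega ** k - 1.0) ** (-r) * omega ** (k * jj))
                         for jj in j]).real / N

    b1_closed = ((N + 1) / 2 - j) / N
    b2_closed = -(N ** 2 - 1) / (12 * N) + (j - 1) * (N - j + 1) / (2 * N)
    print(np.allclose(b(1), b1_closed), np.allclose(b(2), b2_closed))   # True True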
3.4 According to (3.2.6) and (3.2.2), for n = 2 we have N = 2m and

Tr (l) = 21 [X 1r (l) + X 1r (m + l)], l ∈ 0 : m − 1,


 
πk 2
where X 1 (k) = 2 cos 2m for k ∈ 0 : N − 1. Taking into account that X 1 (m + l) =
 
πl 2
2 sin 2m for l ∈ 0 : m − 1 we come to the required formula.

3.5 We note that


N −1 N −1
1 k( j− pn) 1  −kpn  k j
Q r ( j − pn) = X 1r (k) ω N = X 1r (k) ω N ωN .
N k=0
N k=0

Now we use the generalized Parseval equality (2.3.1). We gain

N −1
 1  −kpn  kp n 
Q r (· − pn), Q r (· − p n) = X 1r (k) ω N X 1r (k) ω N
N k=0
N −1
1 k( p − p)n  
= X 12r (k) ω N = Q 2r ( p − p)n .
N k=0

3.6 On the strength of (3.2.9) and (3.1.4) we have


r  
2r r −l
 Q r ( j) =
2r
(−1) 2r b2r ( j + r − ln)
l=−r
r − l
r  
2r
= (−1)r −l b0 ( j + r − ln).
l=−r
r −l

It is remaining to use formula (3.1.3).

3.7 When r = 1, the assertion follows from the definition of Q 1 ( j). We perform an
induction step from r − 1 to r , r ≥ 2. According to (3.2.4) we have
& n−1 N −1
'
Q r ( j) = + Q 1 (k) Q r −1 ( j − k)
k=0 k=N −n+1
n−1
= Q 1 (k) Q r −1 ( j − k). (S.10)
k=−n+1

Given j ∈ 0 : (r − 1)(n − 1), we take the right side of (S.10) and consider the
summand corresponding to k = 0. This summand equals to Q 1 (0) Q r −1 ( j). By virtue
of the inductive hypothesis it is positive. As far as the other summands are nonneg-
ative, we gain Q r ( j) > 0.
If j ∈ (r − 1)(n − 1) + 1 : r (n − 1), we will consider the summand correspond-
ing to k = n − 1. It is equal to Q 1 (n − 1) Q r −1 ( j − n + 1). Since j − n + 1 belongs
to the set (r − 2)(n − 1) + 1 : (r − 1)(n − 1), we have Q r −1 ( j − n + 1) > 0. There-
fore, in this case the right side of (S.10) also contains a positive summand which
guarantees positivity of Q r ( j).

For j = r (n − 1) we have

(r − 1)(n − 1) ≤ j − k ≤ (r + 1)(n − 1),

so that the right side of (S.10) contains the only nonzero summand corresponding to
k = n − 1. We gain
   
Q r r (n − 1) = Q 1 (n − 1) Q r −1 (r − 1)(n − 1) = 1.

At last, let j ∈ r (n − 1) + 1 : N − r (n − 1) − 1. In this case the difference j − k


for all k ∈ −n + 1 : n − 1 belongs to the set

(r − 1)(n − 1) + 1 : N − (r − 1)(n − 1) − 1

on which Q r −1 ( j − k) = 0 holds. We gain that Q r ( j) = 0 holds for the given j.


The fact that Q r ( j) > 0 for j ∈ N − r (n − 1) + 1 : N − 1 follows from evenness
of a B-spline Q r ( j).

3.8 Let us calculate the DFT of the signal x( j) that stands in the right side of the
equality being proved. We have

(n−1)/2 N −1 (n−1)/2
−k j k(N − j) kj
X (k) = ωN + ωN = ωN .
j=0 j=N −(n−1)/2 j=−(n−1)/2

It is evident that X (0) = n. Suppose that k ∈ 1 : N − 1. Using formula (2.4.4) for


μ = (n + 1)/2 we gain

sin(π k/m)
X (k) = , k ∈ 1 : N − 1.
sin(π k/N )

We see that the DFTs of the signals x and Q 1/2 are equal. Therefore, the signals
themselves are also equal.

3.9 Denote G = F N (Q 01/2 ). We have

sin(π k/m)
G(0) = 0, G(k) = for k ∈ 1 : N − 1.
sin(π k/N )

Note that by virtue of parity of n for k ∈ 1 : N − 1 there holds


 
sin π(N − k)/m sin(π n − π k/m) sin(π k/m)
G(N − k) =  = =− = −G(k).
sin π(N − k)/N sin(π − π k/N ) sin(π k/N )

Therefore
N −1 N −1
1 −k j 1 (N −k) j
Q 01/2 ( j) = G(k) ω N =− G(N − k) ω N
N k=1
N k=1
N −1
1 kj
=− G(k) ω N = −Q 01/2 ( j).
N k=1

Hence follows the equality Re Q 01/2 = O which means by a definition that the signal
Q 01/2 is pure imaginary.
3.10 Let us use the equality c( p) = c(− p), evenness of a B-spline Q r ( j), and
formula (2.1.4). We gain

m−1
  m−1
S(− j) = c(− p) Q r − j + (− p)n = c( p) Q r ( j − pn) = S( j),
p=0 p=0

which is equivalent to what was required.


3.11 We have

r (x − S) 2
= r (S∗ − S) + r (x − S∗ ) 2
= r (S∗ − S) 2 + r (x − S∗ ) 2

N −1
   
+ 2 Re r S∗ ( j) − S( j) r x( j) − S ∗ ( j) .
j=0

Theorem 3.3.2 yields

N −1
    m−1
 
r S∗ ( j) − S( j) r x( j) − S ∗ ( j) = (−1)r d(l) x(ln) − S ∗ (ln) = 0.
j=0 l=0

Here d(l) are the coefficients of the expansion of the discrete periodic spline S∗ − S
over the shifts of the Bernoulli function. Combining the given equalities we gain

r (x − S) 2
= r (S∗ − S) 2
+ r (x − S∗ ) 2 .

Hence follows optimality of S∗ .


Let S1 ∈ S rm be another solution of the original problem. Then

r (S∗ − S1 ) 2
= 0.

Theorem 3.1.2 yields S∗ ( j) − S1 ( j) ≡ const, i. e. S1 differs from S∗ by an additive


constant.

3.12 Basic functions of the spline Sα presented in a form (3.3.4) are real. Its coeffi-
cients are determined from the system of linear equations (3.5.6) with the real matrix.
Provided the right side of the system is real its solution is real as well.
3.13 The assertion follows from formula (3.8.5).
3.14 A conclusion about evenness of μk ( j) with respect to k follows from formu-
lae (3.8.7) and (3.8.8) and from the result of Exercise 2.1.
3.15 In this case an orthogonal basis is formed by two splines μ0 ( j) and μ1 ( j).
According to (3.8.7) and (3.8.10) we have
 
μ0 ( j) ≡ 1, μ1 ( j) = 1
2
Q 1 ( j) − Q 1 ( j − 2) .

Since Q 1 ( j) for j = 0, 1, 2, 3 takes the values 2, 1, 0, 1, we gain that μ1 ( j) for


the same arguments j takes the values 1, 0, −1, 0.
3.16 We put c = Fm−1 (ξ ). Then

m−1
S( j) = c( p) Q r ( j − pn).
p=0

Coefficients c( p) in this representation are real (see Theorem 2.2.2). It is remaining


to take into account that the values of a B-spline Q r are also real.
3.17 According to Theorems 2.2.2 and 2.2.3 the signal c = Fm−1 (ξ ) is real and
even. The representation

m−1
S( j) = c( p) Q r ( j − pn)
p=0

lets us draw a conclusion that S is real (which is obvious) and even (see Exercise 3.10).
3.18 The coefficients ξ(k) = [T2r (k)]−1/2 in the expansion of ϕr ( j) over the orthog-
onal basis are real and comprise an even sequence. It is remaining to use the result
of Exercise 3.17.
3.19 The solution is similar to the previous one.
3.20 According to (3.9.7) and (3.8.9) we have

m−1 m−1 m−1 −kq m−1 m−1


μk ( j) ωm μk ( j)
Rr ( j − qn) = = ωm−kq
q=0 q=0 k=0
T2r (k) k=0
T2r (k) q=0
m−1
μk ( j) μ0 ( j)
=m δm (k) = m .
k=0
T2r (k) T2r (0)

It is remaining to take into account that μ0 ( j) ≡ 1


N
n 2r and T2r (0) = n 4r −1 .
3.21 The solution is similar to the proof of Theorem 3.10.1.
3.22 Since
 2
ωm−lν (ωm
l
ν
+ 1)2 = ωm
l
ν
+ ωm−lν + 2 = 2 cos(πl/m ν ) ,

we have
 
2r r  
2r 2r
cν (l) = ωm−lr (ωm
l
+ 1) 2r
= ωm−l(r − p)
= ωm−lp .
ν ν
p=0
p ν
p=−r
r−p ν

3.23 Taking into account (3.10.2) and the result of the previous exercise we gain

N −1 N −1
1 1
Q rν+1 ( j)
lj lj
= yν+1 (l) ω N = cν (l) yν (l) ω N
N l=0
N l=0
r   N −1 r  
1 2r l( j− pn ν ) 2r
= yν (l) ω N = Q rν ( j − pn ν ).
N p=−r
r−p l=0 p=−r
r−p

3.24 We have
aν (−k) = ωm−kν cν (m ν+1 − k) μνm ν+1 −k 2 .

Let us use the fact that the sequences {cν (k)} and {μνk } are even with respect to
k. (The former one is even by a definition. As for evenness of the latter one, see
Exercise 3.14.) Taking into account that m ν+1 − k = m ν − (m ν+1 + k) we gain

aν (−k) = ωm−kν cν (m ν+1 + k) μνm ν+1 +k 2


= a ν (k).

3.25 In the same way as in the solution of the previous exercise we have

ν+1
w−k ( j) = a ν (k) μνk ( j) + a ν (m ν+1 + k) μνm ν+1 +k ( j) = wkν+1 ( j).

3.26 On the basis of (3.10.8) we gain

N −1 m ν+1 −1 n ν+1 −1
 ν+1 2  ν+1 2
wk ( j) = wk ( p + ln ν+1 )
j=0 l=0 p=0
m ν+1 −1 n ν+1 −1
 ν+1 2
= ωm2klν+1 wk ( p)
l=0 p=0
n ν+1 −1
 2
= m ν+1 δm ν+1 (2k) wkν+1 ( p) .
p=0

Hence the required equality follows obviously.

3.27 According to evenness of the signal aν (see Exercise 3.24) we have

aν (m ν+1 + m ν+2 ) = aν (m ν − m ν+2 ) = a ν (m ν+2 ).

Similarly, with a reference to Exercise 3.14, one can deduce the equality

μνm ν+1 +m ν+2 ( j) = μνm ν+2 ( j).

On the basis of (3.10.7) we gain


 
wmν+1
ν+2
( j) = 2 Re aν (m ν+2 ) μνm ν+2 ( j) .

3.28 If
m ν −1
ϕ( j) = βν (k) wkν ( j)
k=0

then (3.10.8) yields

m ν −1
ϕ( j − ln ν ) = βν (k) wkν ( j) ωm−lk
ν
.
k=0

By virtue of the DFT inversion formula we write

m ν −1
1
βν (k) wkν ( j) = ωm
lk
ν
ϕ( j − ln ν ).
mν k=0

Now the solution finishes in the same way as the proof of Theorem 3.9.1.

3.29 We note that


m ν −1

ϕ(· − ln ν ), ψ(· − l n ν ) = βν (k) γ ν (k) wkν 2
ωmk(l−l
ν
)
.
k=0


Therefore the equality ϕ(· − ln ν ), ψ(· − l n ν ) = δm ν (l − l ) holds if and only if

m ν βν (k) γ ν (k) wkν 2


≡1

holds (see the proof of Theorem 3.9.2).

3.30 Let us use formula (3.10.3). We gain



m ν −1
Prν+1 ( j) = aν (k) μνk ( j). (S.11)
k=0

Since
m ν −1
1
μνk ( j) = ωmkpν Q rν ( j − pn ν ),
mν p=0

we have
m ν −1
$ m ν −1
%
1
Prν+1 ( j) = Q rν ( j − pn ν ) aν (k) ωmkpν .
p=0
mν k=0

It is remaining to put dν = Fm−1


ν
(aν ).
3.31 Recall that
aν (k) = ωmk ν cν (m ν+1 + k) μνm ν+1 +k 2 .

According to the result of Exercise 3.22 we have


r  
2r
cν (m ν+1 + k) = (−1)l ωmklν .
l=−r
r −l

Equality (3.8.6) yields

1 ν
μνm ν+1 +k 2
= T (m ν+1 + k).
m ν 2r

Therefore,
r  
1 ν 2r
aν (k) = T2r (m ν+1 + k) (−1)l ωmk(l+1) .
mν l=−r
r − l ν

Taking into consideration formula (3.2.7) we gain

m ν −1
1
dν ( p) = aν (k) ωmkpν
mν k=0
r   m ν −1
1 2r
= 2 (−1) l
T2rν (m ν+1 + k) ωmk(νp+l+1)
mν l=−r
r −l k=0
r  $ m −1
%
1 2r 1 ν
= (−1) l
T2rν (k) ωm(k−m ν+1 )( p+l+1)

mν l=−r
r − l m ν k=0
ν

r  
1 2r  
= (−1) p+1 Q ν2r ( p + l + 1) n ν .
m ν l=−r r − l

3.32 Let us use formula (S.11) and the fact that

n ν −1
1 (qm ν +k) j
μνk ( j) = yν (qm ν + k) ω N
N q=0

(see par. 3.10.1). Bearing in mind m ν -periodicity of the sequence {aν (k)} we gain

m ν −1 n ν −1
1 (qm ν +k) j
Prν+1 ( j) = aν (qm ν + k) yν (qm ν + k) ω N
N k=0 q=0
N −1
1 lj
= aν (l) yν (l) ω N .
N l=0

Hence  
F N (Prν+1 ) (l) = aν (l) yν (l), l ∈ 0 : N − 1.

3.33 According to (3.10.8) and Theorem 3.10.2 we have

m ν+1 −1 m ν+1 −1

Prν+1 (· − ln ν+1 ), Prν+1 (· − l n ν+1 ) = ωm−kl
ν+1
wkν+1 , ωm−kν+1l wkν+1
k=0 k =0
m ν+1 −1
= wkν+1 2
ωm−k(l−l
ν+1
)
,
k=0

which is equivalent to what was required.

3.34 According to the results of Exercises 3.30 and 3.31 we have

m ν −1
 
Prν+1 ( j − n ν ) = dν ( p) Q rν j − ( p + 1)n ν
p=0
m ν −1
= dν ( p − 1) Q rν ( j − pn ν ).
p=0

Here  
1
r
2r  
dν ( p − 1) = (−1) p Q ν2r ( p + l)n ν .
mν l=−r
r −l

At the same time


m ν −1
Prν+1 (− j − n ν ) = dν ( p − 1) Q rν ( j + pn ν )
p=0
m ν −1
= dν (− p − 1) Q rν ( j − pn ν ).
p=0

Since
 
1
r
2r  
dν (− p − 1) = (−1) p Q ν2r ( p − l)n ν
mν l=−r
r +l
 
p 1
r
2r  
= (−1) Q ν2r ( p + l)n ν = dν ( p − 1),
m ν l=−r r − l

we gain
Prν+1 (− j − n ν ) = Prν+1 ( j − n ν ).

This means that the real spline Prν+1 ( j − n ν ) is even with respect to j.

3.35 B-spline B1 (x) is even, it follows from its definition. Assume that Bν−1 (−x) =
Bν−1 (x) holds for some ν ≥ 2. In this case
6 6
m m  
Bν (−x) = Bν−1 (t) B1 (x + t) dt = Bν−1 (m − t) B1 x − (m − t) dt
60 m 0

= Bν−1 (t) B1 (x − t) dt = Bν (x).


0

To Chapter 4

4.1 Let j = ( js−1 , js−2 , . . . , j0 )2 . A condition j = pNν implies that not all com-
ponents js−ν−1 , . . . , j0 are equal to zero. And then

revs ( j) ≥ j0 2s−1 + · · · + js−ν−1 2ν ≥ 2ν = ν+1 .


 
Hence it follows that y( j) := x revs ( j) = 0.

4.2 Use the solution of the previous exercise. Bear in mind that the equality
revs ( pNν ) = revν ( p) holds for p ∈ 0 : ν+1 − 1.

4.3 We have
ϕν ( j) = ϕν (Nν ; j) = f ν (ν ; j),
 
ψν ( j) = gν (Nν ; j) = f ν revs (Nν ); revs ( j) .

It remains to take into consideration that revs (Nν ) = ν .

4.4 The first equality follows from formula (4.8.1). To prove the second equality,
let us use the result of Exercise 2.4. We gain

ψν−1 (2 j) = δ Nν−2 (2 j) − δ Nν−2 (2 j − Nν−1 )


 
= δ2Nν−1 (2 j) − δ2Nν−1 2( j − Nν )
= δ Nν−1 ( j) − δ Nν−1 ( j − Nν ) = ψν ( j).

4.5 The required expansion has a form


s
δ N ( j) = 2−s + 2−ν ϕν ( j).
ν=1

It can be obtained in the same way as in the example from par. 4.5.2, but it also can
be deduced analytically. Indeed, according to (4.5.3) we have

2 ϕν−1 (0; j) = ϕν (0; j) + ϕν (Nν ; j).

Hence
s s s
2−ν ϕν ( j) = 2 2−ν ϕν−1 (0; j) − 2−ν ϕν (0; j)
ν=1 ν=1 ν=1
s−1 s
= 2−ν ϕν (0; j) − 2−ν ϕν (0; j)
ν=0 ν=1
−s
= ϕ0 (0; j) − 2 ϕs (0; j).

It is remaining to take into account that ϕ0 (0; j) = δ N ( j) and ϕs (0; j) ≡ 1.

4.6 The required expansion has a form


s
δ N ( j) = 2−s + 2−ν ψν ( j).
ν=1

It can be obtained in the same way as in the example from par. 4.6.5, but it also can
be deduced analytically. Indeed, according to (4.6.10) we have

2 gν−1 (0; j) = gν (0; j) + gν (Nν ; j).



Hence
s s s
−ν −ν
2 ψν ( j) = 2 2 gν−1 (0; j) − 2−ν gν (0; j)
ν=1 ν=1 ν=1
s−1 s
= 2−ν gν (0; j) − 2−ν gν (0; j)
ν=0 ν=1
−s
= g0 (0; j) − 2 gs (0; j).

It is remaining to take into account that g0 (0; j) = δ N ( j) and gs (0; j) ≡ 1.

4.7 According to (4.8.12) we have

s Nν −1
δ N ( j − q) = 2−s α + 2−ν ξν ( p) ϕν ( j − pν+1 ).
ν=1 p=0

Here
N −1
α = δ N (· − q), ϕs (0) = δ N ( j − q) = 1,
j=0

ξν ( p) = δ N (· − q), ϕν ( p + Nν ) = δ N (· − q), f ν (ν + pν+1 )


= f ν (ν + pν+1 ; q) = f ν (ν ; q − pν+1 ).

Since −N + ν+1 ≤ q − pν+1 ≤ N − 1 for p ∈ 0 : Nν − 1, formula (4.4.6)


yields that a coefficient ξν ( p) is nonzero if and only if

q − pν+1 ∈ 0 : ν+1 − 1. (S.12)

Note that

q − pν+1 = (q/ν+1  − p)ν+1 + qν−1 ν + · · · + q0 ,

therefore condition (S.12) holds only for p = q/ν+1 . Referring to formula (4.4.6)
again and bearing in mind that qν−1 ∈ 0 : 1 we gain

ξν (q/ν+1 ) = f ν (ν ; qν−1 ν + · · · + q0 ) = (−1)qν−1 .

Thus,
s
δ N ( j − q) = 2−s + 2−ν (−1)qν−1 ϕν ( j − q/ν+1  ν+1 ).
ν=1

4.8 According to (4.8.2) we have

s Nν −1
−s −ν
δ N ( j − q) = 2 β + 2 yν ( p) ψν ( j − p).
ν=1 p=0

Here
N −1
β = δ N (· − q), gs (0) = δ N ( j − q) = 1,
j=0

yν ( p) = δ N (· − q), gν ( p + Nν ) = gν ( p + Nν ; q) = gν (Nν ; q − p)
= ψν (q − p) = δ Nν−1 (q − p) − δ Nν−1 (q − p − Nν ).

Since q = l Nν−1 + qs−ν Nν + q Nν , there holds


 
yν ( p) = δ Nν−1 (qs−ν Nν + q Nν − p) − δ Nν−1 (qs−ν − 1)Nν + q Nν − p .

If qs−ν = 0 then

yν ( p) = δ Nν−1 (q Nν − p) − δ Nν−1 (q Nν − p − Nν ).

Taking into account the inequalities |q Nν − p| ≤ Nν − 1 and

−Nν−1 + 1 ≤ q Nν − p − Nν ≤ −1,

we conclude that $
1 for p = q Nν ,
yν ( p) =
0 for p = q Nν .

If qs−ν = 1 then

yν ( p) = δ Nν−1 (q Nν − p + Nν ) − δ Nν−1 (q Nν − p).

Taking into account that 1 ≤ q Nν − p + Nν ≤ Nν−1 − 1 we gain


$
−1 for p = q Nν ,
yν ( p) =
0 for p = q Nν .

Moreover, in both cases yν (q Nν ) = (−1)qs−ν holds.



We come to the formula


s
−s
δ N ( j − q) = 2 + 2−ν (−1)qs−ν ψν ( j − q Nν ).
ν=1

4.9 Formula (4.6.10) yields

2 gν−1 ( p; j) = gν ( p; j) + ψν ( j − p), p ∈ 0 : Nν − 1.

Therefore
s Nν −1
−ν
2 ψν ( j − p)
ν=1 p=0
s Nν −1
= 2−ν [2 gν−1 ( p; j) − gν ( p; j)]
ν=1 p=0
s−1 Nν+1 −1 s−1 Nν −1 N −1
= 2−ν gν ( p; j) − 2−ν gν ( p; j) − 2−s gs (0; j) + g0 (0; j)
ν=0 p=0 ν=0 p=0 p=0
N −1 s−1 Nν −1
= δ N ( j − p) − 2−s − 2−ν δ Nν ( j − p). (S.13)
p=0 ν=0 p=Nν+1

We used formula (4.7.9) while performing the last transition.


Let j = ( js−1 , js−2 , . . . , j0 )2 . We will show that

Nν −1
δ Nν ( j − p) = js−ν−1 . (S.14)
p=Nν+1

As far as j = l Nν + js−ν−1 2s−ν−1 + · · · + j0 , we have

Nν −1 Nν −1
 
δ Nν ( j − p) = δ Nν ( js−ν−1 2s−ν−1 + · · · + j0 ) − p . (S.15)
p=Nν+1 p=Nν+1

If js−ν−1 = 0 then the right side of (S.15) equals to zero. In this case (S.15) corre-
sponds to (S.14). Let js−ν−1 = 1. Then the right side of (S.15) equals to unity. In
this case (S.15) also corresponds to (S.14).
Substituting (S.14) into (S.13) and taking into account that

N −1
δ N ( j − p) ≡ 1
p=0

we gain
s−1
x( j) = 1 − 2−ν js−ν−1 = 1 − 2 j/N , j ∈ 0 : N − 1.
ν=0

4.10 A definition yields

ϕν ( j − pν+1 ) = ϕν (Nν ; j − pν+1 ) = f ν (ν + pν+1 ; j).

We have
Nν −1
  −s
s
−ν
 
y revs ( j) = 2 + 2 f ν ν + pν+1 ; revs ( j)
ν=1 p=0
Nν −1
s
 
= 2−s + 2−ν f ν ν + ν+1 revs−ν ( p); revs ( j) .
ν=1 p=0

According to (4.6.3) there holds ν + ν+1 revs−ν ( p) = revs (Nν + p), therefore
   
f ν ν + ν+1 revs−ν ( p); revs ( j) = f ν revs ( p + Nν ); revs ( j)
= gν ( p + Nν ; j) = ψν ( j − p).

Taking into consideration the result of the previous exercise we gain

Nν −1
  s
y revs ( j) = 2−s + 2−ν ψν ( j − p) = 1 − 2 j/N .
ν=1 p=0

Hence
y( j) = 1 − 2 revs ( j)/N , j ∈ 0 : N − 1.

4.11 In the same way as in the solution of Exercise 4.5 we write

s s−1 s
2−ν ϕν ( j) = 2−ν ϕν (0; j) − 2−ν ϕν (0; j)
ν=k+1 ν=k ν=k+1
−k −s
=2 ϕk (0; j) − 2 ϕs (0; j). (S.16)

It was noted in par. 4.7.1 that

k+1 −1
ϕk (0; j) = δ N ( j − q) =: h k ( j).
q=0

Now the required expansion follows from (S.16).



4.12 Note that for ν ∈ 1 : s − k there holds an inequality

Nν ≥ 2k = k+1 .

According to (4.6.10) we have

2 gν−1 ( p; j) = gν ( p; j) + ψν ( j − p) for p ∈ 0 : k+1 − 1.

Hence
s−k s−k−1 s−k
2−ν ψν ( j − p) = 2−ν gν ( p; j) − 2−ν gν ( p; j)
ν=1 ν=0 ν=1

= g0 ( p; j) − 2−s+k gs−k ( p; j).

Summing up the last equations on p from 0 to k+1 − 1 we gain

s−k k+1 −1 k+1 −1


2−ν ψν ( j − p) = h k ( j) − 2−s+k gs−k ( p; j). (S.17)
ν=1 p=0 p=0

Equality (4.7.9) yields gs−k ( p; j) = δk+1 ( j − p), so

k+1 −1 k+1 −1
gs−k ( p; j) = δk+1 ( j − p) ≡ 1.
p=0 p=0

Now the required expansion follows from (S.17).

4.13 On the basis of (4.8.12) and (4.8.19) we have

Nν −1
s
 
x( j ⊕ q) = 2−s α + 2−ν ξν ( p) ϕν ( j ⊕ q) ⊕ pν+1 .
ν=1 p=0

Since

( j ⊕ q) ⊕ pν+1 = j ⊕ (q ⊕ pν+1 )
 
= j ⊕ (q/ν+1  ⊕ p)ν+1 + qν−1 ν + qν ,

equality (4.8.16) yields

Nν −1
−s
s
qν−1 −ν
 
x( j ⊕ q) = 2 α + (−1) 2 ξν ( p) ϕν j − (q/ν+1  ⊕ p)ν+1 .
ν=1 p=0

Performing a change of variables p = q/ν+1  ⊕ p we gain the required expan-


sion.
4.14 On the basis of (4.8.2) and (4.8.6) we have

Nν −1
s
 
y( j − q) = 2−s β + 2−ν yν ( p) (−1)(q+ p)/Nν  ψν j − q + p Nν .
ν=1 p=0

We change the variables: p = q + p Nν . In this case p =  p − q Nν and


 
 q + p   q +  p − q   p − ( p − q) −  p − q 
Nν Nν
= =
Nν Nν Nν
 p − ( p − q)/N N  p −q
ν ν
= =− .
Nν Nν

Finally we gain

Nν −1
−s
s
−ν
 
y( j − q) = 2 β + 2 (−1)( p −q)/Nν  yν  p − q Nν ψν ( j − p ).
ν=1 p =0

4.15 According to (4.7.9) the identity

gν (k; j) = δ Nν ( j − k)

is true for k ∈ 0 : Nν − 1. Hence the required equality follows obviously.


4.16 We denote
 3 
G ν = w ∈ C Nν−1 3 w( j − Nν ) = −w( j), j ∈ Z .

It is required to verify that Wν = G ν .


Let us take w ∈ Wν . Then

Nν −1
w( j) = a(k) ψν ( j − k).
k=0

By virtue of formula (4.8.1) we have ψν (· − k) ∈ C Nν−1 , therefore w ∈ C Nν−1 . Fur-


ther, the same formula (4.8.1) yields ψν ( j − Nν ) = −ψν ( j), whence it follows that
w( j − Nν ) = − w( j). So, w ∈ G ν . We have ascertained that Wν ⊂ G ν .
Now let w ∈ G ν . Since w ∈ C Nν−1 , there holds

2Nν −1
w( j) = w(k) δ Nν−1 ( j − k). (S.18)
k=0

We rewrite the equality w( j − Nν ) = −w( j) in a form

2Nν −1
w( j) = − w(k) δ Nν−1 ( j − k − Nν ). (S.19)
k=0

Summing (S.18) and (S.19) and taking into account (4.8.1) we gain

2Nν −1
2 w( j) = w(k) ψν ( j − k)
k=0
Nν −1 Nν −1
= w(k) ψν ( j − k) + w(k + Nν ) ψν ( j − k − Nν )
k=0 k=0
Nν −1
= [w(k) − w(k + Nν )] ψν ( j − k).
k=0

Hence w ∈ Wν . The inclusion G ν ⊂ Wν is ascertained, so is the equality G ν = Wν .

4.17 Since discrete Walsh functions vk ( j) take only two values +1 and −1, there
holds [vk ( j)]2 ≡ 1. An equivalent notation is 1/vk ( j) = vk ( j).
Further,

s−1
vk ( j) vk ( j) = (−1)kα +kα 2 jα = vm ( j),
α=0

where m = k ⊕ k .

4.18 Take into account that 2k + 1 = 2k ⊕ 1 and use the result of the previous
exercise.

4.19 We remind that the numbers vk (0), vk (1), . . . , vk (N − 1) form the row of
the Hadamard matrix As with the index k. The required formulae follow from the
recurrent relation 4 5
As−1 As−1
As = .
As−1 −As−1

4.20 Let j ∈ 0 : N − 1. The numbers N − 1 − j and j belong to the set 0 : N − 1,


and their sum equals to N − 1 = (1, 1, . . . , 1)2 . This is possible only when the
binary codes of these numbers satisfy to the following condition: if a binary digit of
one of these numbers equals to zero then the same binary digit of another number
equals to unity.

4.21 Under the conditions of the exercise we have

vk ( j) = vk (N − 1)vk ( j).

Since N − 1 = (1, . . . , 1)2 , there holds


s−1 s−1
vk ( j) = (−1) α=0 kα (1+ jα ) = (−1) α=0 kα (1− jα ) .

Note that for j ∈ 0 : N − 1 the following equality is true:

(N − 1 − j)α = 1 − jα , α ∈ 0 : s − 1.

Hence s−1
vk ( j) = (−1) α=0 kα (N −1− j)α = vk (N − 1 − j).

4.22 The solution is similar to one of the previous exercise.

4.23 We fix p = ( ps−3 , . . . , p0 )2 and write

3N2 + p = N1 + N2 + p = (1, 1, ps−3 , . . . , p0 )2 ,

N1 + p = (1, 0, ps−3 , . . . , p0 )2 .

Let us take j = ( js−1 , js−2 , . . . , j0 )2 . If js−2 = 0, i. e. j = ( js−1 , 0, js−3 , . . . , j0 )2 ,


then  j + N2  N = ( js−1 , 1, js−3 , . . . , j0 )2 . We gain
s−3
{3N2 + p, j}s = {N1 + p,  j + N2  N }s = js−1 + pα jα .
α=0

This guarantees validity of the equality


 
v3N2 + p ( j) = v N1 + p  j + N2  N . (S.20)

Let js−2 = 1, i. e. j = ( js−1 , 1, js−3 , . . . , j0 )2 . Then

 j + N2  N = ( js−1 + 12 , 0, js−3 , . . . , j0 )2 .

We have
s−3
{3N2 + p, j}s = js−1 + 1 + pα jα ,
α=0

s−3
{N1 + p,  j + N2  N }s =  js−1 + 12 + pα jα .
α=0

Since
(−1) js−1 +12 = (−1) js−1 +1 ,

equality (S.20) holds in this case as well.



Fig. S.1 Graphs of the Rademacher functions for N = 8

4.24 The definition of Rademacher functions yields
$$r_\nu(j) = v_{N_\nu}(j) = (-1)^{j_{s-\nu}} = (-1)^{\lfloor j/N_\nu \rfloor}.$$
For $N = 8$ we have
$$r_1(j) = (-1)^{\lfloor j/4 \rfloor}, \quad r_2(j) = (-1)^{\lfloor j/2 \rfloor}, \quad r_3(j) = (-1)^{j}, \qquad j \in 0 : 7.$$
Figure S.1 depicts the graphs of the functions $r_1$, $r_2$ and $r_3$.
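Since the graphs of Figure S.1 are easy to regenerate, here is a small NumPy sketch (not part of the original text) that prints the three Rademacher functions for $N = 8$.

    # Sketch of Exercise 4.24: Rademacher functions r_nu(j) = (-1)^floor(j / N_nu).
    import numpy as np

    s = 3
    N = 2 ** s                       # N = 8
    j = np.arange(N)
    for nu in range(1, s + 1):
        N_nu = N // 2 ** nu          # N_1 = 4, N_2 = 2, N_3 = 1
        print(nu, (-1) ** (j // N_nu))
    # 1 [ 1  1  1  1 -1 -1 -1 -1]
    # 2 [ 1  1 -1 -1  1  1 -1 -1]
    # 3 [ 1 -1  1 -1  1 -1  1 -1]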

4.25 As it was noted in the solution of the previous exercise, rν ( j) = (−1) js−ν holds.
Bearing this in mind we gain


s 
s
vk ( j) = (−1)ks−ν js−ν = [rν ( j)]ks−ν , k ∈ 0 : N − 1.
ν=1 ν=1

4.26 Let us use the identity v0 (k) ≡ 1 and the fact that vk ( j) = v j (k). We write

N −1 N −1
vk ( j) = v0 (k) v j (k).
k=0 k=0

Now the required equality follows from orthogonality of Walsh functions and the
equality v0 , v0  = N .

4.27 We denote $x_s = W_N(x)$. Equalities (4.10.2) and (4.10.1) yield
$$\|x_s\|^2 = \sum_{k=0}^{N-1} \Big( \sum_{j=0}^{N-1} x(j)\, v_k(j) \Big)\, \overline{x_s(k)}
= \sum_{j=0}^{N-1} x(j)\, \overline{\Big( \sum_{k=0}^{N-1} x_s(k)\, v_k(j) \Big)} = N\, \|x\|^2,$$
which is equivalent to what was required.

4.28 On the basis of the definitions of the discrete Walsh transform and the dyadic convolution we write
$$[W_N(z)](k) = \sum_{j=0}^{N-1} \Big( \sum_{l=0}^{N-1} x(l)\, y(j \oplus l) \Big)\, v_k\big( (j \oplus l) \oplus l \big)
= \sum_{l=0}^{N-1} \sum_{j=0}^{N-1} x(l)\, y(j)\, v_k(j \oplus l).$$
According to the result of Exercise 4.17 we have
$$v_k(j \oplus l) = v_{j \oplus l}(k) = v_j(k)\, v_l(k) = v_k(j)\, v_k(l).$$
Hence
$$[W_N(z)](k) = \Big( \sum_{l=0}^{N-1} x(l)\, v_k(l) \Big) \Big( \sum_{j=0}^{N-1} y(j)\, v_k(j) \Big) = [W_N(x)](k)\, [W_N(y)](k),$$
as was to be proved.
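Both results can be checked with a few lines of NumPy (an illustrative sketch, not from the book). The Walsh–Hadamard matrix is built from the recursion for $A_s$ quoted in Exercise 4.19, so its rows are the Walsh functions $v_k$.

    # Sketch of Exercises 4.27-4.28: Walsh transform, its Parseval-type identity,
    # and the dyadic convolution theorem.
    import numpy as np

    def hadamard(s):
        """Hadamard matrix A_s of order 2^s; its rows are the Walsh functions v_k(j)."""
        A = np.array([[1]])
        for _ in range(s):
            A = np.block([[A, A], [A, -A]])
        return A

    s = 4
    N = 2 ** s
    A = hadamard(s)
    rng = np.random.default_rng(7)
    x, y = rng.normal(size=N), rng.normal(size=N)

    # Exercise 4.27: ||W_N(x)||^2 = N * ||x||^2
    print(np.isclose(np.sum((A @ x) ** 2), N * np.sum(x ** 2)))   # True

    # Exercise 4.28: dyadic convolution z(j) = sum_l x(l) y(j XOR l)
    j = np.arange(N)
    z = np.array([np.sum(x * y[jj ^ j]) for jj in j])
    print(np.allclose(A @ z, (A @ x) * (A @ y)))                  # True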
4.29 We denote V p = F N (v p ). Taking into account that v1 ( j) = (−1) j0 = (−1) j
for j ∈ 0 : N − 1 we gain

N −1 N −1 N −1
−k j j −k j − j (k−N1 )
V1 (k) = (−1) j ω N = ω2 ω N = ωN = N δ N (k − N1 ).
j=0 j=0 j=0

Further, v2 ( j) = (−1) j1 = (−1) j/2 for j ∈ 0 : N − 1. We put j = 2l + q, q ∈


0 : 1, l ∈ 0 : N1 − 1. Then  j/2 = l and

N −1 1 N1 −1
 j/2 −k j −k(2l+q)
V2 (k) = (−1) ωN = (−1)l ω N
j=0 q=0 l=0
1 N1 −1
−kq
= ωN ω−l(k−N
N1
2)
= N1 (1 + ω−k
N )δ N1 (k − N2 ).
q=0 l=0

Since v3 ( j) = (−1) j1 + j0 = (−1) j/2+ j = (−1)3l+q , we have

V3 (k) = N1 (1 − ω−k
N )δ N1 (k − N2 ).

The Fourier spectra V2 (k) and V3 (k) on the main period are not equal to zero only
for k = N2 = N /4 and k = N2 + N1 = 3N /4. Herein

V2 (N /4) = N1 (1 + ω4−1 ) = N1 (1 − i),



V2 (3N /4) = N1 (1 + ω4−3 ) = N1 (1 + i);

V3 (N /4) = N1 (1 + i), V3 (3N /4) = N1 (1 − i).

Further results on Fourier spectrum of Walsh functions can be found in [17].


4.30 We will use the fact that rν ( j) = (−1) j/Nν  for j ∈ 0 : N − 1 (see the solution
of Exercise 4.24). We put j = pNν + q, q ∈ 0 : Nν − 1, p ∈ 0 : ν+1 − 1. Then
 j/Nν  = p and

ν+1 −1 Nν −1 ν+1 −1 Nν −1
−k( pNν +q) −kp −kq
Rν (k) = (−1) p
ωN = (−1) p ων+1 ωN
p=0 q=0 p=0 q=0
ν+1 −1 Nν −1 Nν −1
− p(k−ν ) −kq −kq
= ων+1 ωN = 2ν δν+1 (k − ν ) ωN .
p=0 q=0 q=0

It is clear that the Fourier spectrum Rν (k) on the main period is not equal to zero
only for k = ν + lν+1 = (2l + 1)ν , l ∈ 0 : Nν − 1. Given these k, according
to (2.2.7) we gain

Nν −1 Nν −1
−(2l+1)ν q −(2l+1)q
Rν (k) = 2ν ωN = 2ν ω Nν−1
q=0 q=0

1 − ω−(2l+1)N ν  (2l + 1)π 


2ν = 2ν 1 − i cot
Nν−1
= .
1 − ω−(2l+1)
Nν−1
Nν−1

4.31 It is known that v2 p+1 ( j) = v2 p ( j) v1 ( j) (see Exercise 4.18). Let us use the
result of Exercise 2.35 which yields

V2 p+1 = N −1 (V2 p ∗ V1 ).

Since V1 (k) = N δ N (k − N1 ) (see Exercise 4.29), we have

N −1
V2 p+1 (k) = V2 p (l) δ N (k − l − N1 ) = V2 p (k − N1 ).
l=0

4.32 According to the result of Exercise 4.19 for p ∈ 0 : N1 − 1 we have

N1 −1 N1 −1
−k j −k(N1 + j)
V p (k) = v p ( j) ω N + v p (N1 + j) ω N
j=0 j=0
N1 −1
  −k j
= 1 + (−1)k v (1)
p ( j) ω N .
j=0

Hence immediately follows that

V p (2k) = 2V p(1) (k), V p (2k + 1) = 0,


(S.21)
k ∈ 0 : N1 − 1.

Now we note that N1 + p = N1 ⊕ p for p ∈ 0 : N1 − 1. Therefore (see Exer-


cise 4.17)
v N1 + p ( j) = v N1 ( j) v p ( j).

Going over to Fourier transforms we gain (see Exercise 2.35)

VN1 + p = N −1 (V p ∗ VN1 ). (S.22)

The spectrum VN1 is calculated easily. Indeed,


$
1 for j ∈ 0 : N1 − 1,
 j/N1 
v N1 ( j) = (−1) js−1
= (−1) =
−1 for j ∈ N1 : N − 1.

As it is shown in par. 2.2.5,


$
0 for even k,
VN1 (k) =
πk
2(1 − i cot N
) for odd k.

Let us write down formula (S.22) in more detail:

N1 −1 N −1
1   1 1  
VN1 + p (2k) = V p (2l + 1) VN1 2(k − l) − 1 + V p (2l) VN1 2(k − l) .
N N
l=0 l=0
(S.23)
Both sums in the right side
 of (S.23)
 equal to zero: the former one due to V p (2l + 1),
the latter one due to VN1 2(k − l) . Thus, VN1 + p (2k) = 0 for k ∈ 0 : N1 − 1. Further,
using equality (S.21) we gain, similar to (S.23),

N1 −1
1  
VN1 + p (2k + 1) = V p(1) (l) VN1 2(k − l) + 1 ,
N1 l=0

k ∈ 0 : N1 − 1.

It is remaining to take into account that VN1 (2 j + 1) = h( j).

4.33 By virtue of N -periodicity of Walsh functions and equality (S.20) for p ∈ 0 :


N2 − 1 we gain

N −1 N −1
−k j −k( j−N2 )
V3N2 + p (k) = v N1 + p ( j + N 2 ) ω N = v N1 + p ( j) ω N
j=0 j=0

= ω4k VN1 + p (k) = i VN1 + p (k), k ∈ 0 : N − 1.


k

4.34 It is sufficient to verify that

{revs (2k), j}s = {revs (k), 2 j}s , k, j ∈ 0 : N1 − 1.

Let k = (0, ks−2 , . . . , k0 )2 and j = (0, js−2 , . . . , j0 )2 . Then

revs (k) = (k0 , . . . , ks−2 , 0)2 , revs (2k) = (0, k0 , . . . , ks−2 )2 .

We gain
s−2
{revs (2k), j}s = {revs (k), 2 j}s = ks−2−α jα .
α=0

4.35 It is required to verify that there holds the equality wal−1s (N − 1) = 1 or,
which is equivalent, wals (1) = N − 1. By the definition, wals (1) is the number of
sign changes of the Walsh function v1 ( j) on the main period. Since v1 ( j) = (−1) j
for j ∈ 0 : N − 1, we have wals (1) = N − 1.

4.36 It follows from the proof of Theorem 4.12.2 that the formula

ν+1 −1
ξ(k) = Nν a(l) vk (l Nν )
l=0

is true for k ∈ 0 : ν+1 − 1.

4.37 With the aid of Theorem 4.11.1 we consecutively fill out the table of values of
the permutations wal1 (k), wal2 (k), and wal3 (k) (Table S.1).
On the basis of the definition of an inverse mapping we gain

wal−1
3 (k) = {0, 4, 6, 2, 3, 7, 5, 1}.

Table S.1 Values of walν (k)


for ν = 1, 2, 3 ν walν (k) for
k = 0, 1, . . . , 2ν − 1
1 0 1

2 0 3 1 2

3 0 7 3 4 1 6 2 5

4.38 We note that

vk1 N +k0 ( j1 N + j0 ) = (−1){k1 , j1 }s +{k0 , j0 }s = vk1 ( j1 ) vk0 ( j0 ).

Therefore
N −1
F(k1 N + k0 ) = f ( j1 N + j0 ) vk1 N +k0 ( j1 N + j0 )
j1 , j0 =0
N −1
= v j1 ( j0 ) vk1 ( j1 ) vk0 ( j0 )
j1 , j0 =0
N −1 N −1
= vk1 ( j1 ) v j1 ( j0 ) vk0 ( j0 )
j1 =0 j0 =0
N −1
=N vk1 ( j1 ) δ N ( j1 − k0 ) = N vk1 (k0 ).
j1 =0
References

1. Ahmed, N., Rao, K.R.: Orthogonal Transforms for Digital Signal Processing. Springer, Hei-
delberg, New York (1975)
2. Ber, M.G., Malozemov, V.N.: On the recovery of discrete periodic data. Vestnik Leningrad.
Univ. Math. 23(3), 8–14 (1990)
3. Ber, M.G., Malozemov, V.N.: Interpolation of discrete periodic data. Probl. Inf. Transm. 28(4),
351–359 (1992)
4. Ber, M.G., Malozemov, V.N.: Best formulas for the approximate calculation of the discrete
Fourier transform. Comput. Math. Math. Phys. 32(11), 1533–1544 (1992)
5. Blahut, R.E.: Fast Algorithms for Digital Signal Processing. Addison-Wesley, Reading, MA
(1984)
6. Chashnikov, N.V.: Hermite spline interpolation in the discrete periodic case. Comput. Math.
Math. Phys. 51(10), 1664–1678 (2011)
7. Chashnikov, N.V.: Discrete Periodic Splines and Coons Surfaces. Lambert Academic Publish-
ing (2010) (in Russian)
8. Cooley, J.W., Tukey, J.W.: An algorithm for the machine calculation of complex Fourier series.
Math. Comput. 19(90), 297–301 (1965)
9. Donoho, D.L., Stark, P.B.: Uncertainty principles and signal recovery. SIAM J. Appl. Math.
49(3), 906–931 (1989)
10. Goertzel, G.: An algorithm for the evaluation of finite trigonometric series. Am. Math. Monthly
65, 34–35 (1958)
11. Ipatov, V.P.: Periodic Discrete Signals with Optimal Correlation Properties. Radio i Svyaz,
Moscow (1992). (in Russian)
12. Johnson, J., Johnson, R.W., Rodriguez, D., Tolimieri, R.: A methodology for designing, mod-
ifying and implementing Fourier transform algorithms on various architectures. Circuits Syst.
Signal Process. 9(4), 449–500 (1990)
13. Kirushev, V.A., Malozemov, V.N., Pevnyi, A.B.: Wavelet decomposition of the space of discrete
periodic splines. Mathem. Notes 67(5), 603–610 (2000)
14. Korovkin, A.V.: Generalized discrete Ahmed-Rao transform. Vestnik Molodyh Uchenyh 2,
33–41 (2003). (in Russian)
15. Korovkin, A.V., Malozemov, V.N.: Ahmed-Rao bases. Mathem. Notes 75(5), 780–786 (2004)
16. Korovkin, A.V., Masharsky, S.M.: On the fast Ahmed-Rao transform with subsampling in
frequency. Comput. Math. Math. Phys. 44(6), 934–944 (2004)


17. Lvovich, A.A., Kuzmin, B.D.: Analytical expression for spectra of Walsh functions.
Radiotekhnika 35(1), 33–39 (1980). (in Russian)
18. Mallat, S.: A Wavelet Tour of Signal Processing, 2nd edn. Academic Press (1999)
19. Malozemov, V.N., Chashnikov, N.V.: Limit theorems of the theory of discrete periodic splines.
J. Math. Sci. 169(2), 188–211 (2010)
20. Malozemov, V.N., Chashnikov, N.V.: Limit theorems in the theory of discrete periodic splines.
Doklady Math. 83(1), 39–40 (2011)
21. Malozemov, V.N., Chashnikov, N.V.: Discrete periodic splines with vector coefficients for
computer-aided geometric design. Doklady Math. 80(3), 797–799 (2009)
22. Malozemov, V.N., Masharsky, S.M.: Glassman’s formula, fast Fourier transform, and wavelet
expansions. Am. Math. Soc. Transl. 209(2), 93–114 (2003)
23. Malozemov, V.N., Masharsky, S.M.: Generalized wavelet bases related with discrete Vilenkin-
Chrestenson transform. St. Petersburg Math. J. 13(1), 75–106 (2002)
24. Malozemov, V.N., Masharsky, S.M.: Haar spectra of discrete convolutions. Comput. Math.
Math. Phys. 40(6), 914–921 (2000)
25. Malozemov, V.N., Masharsky, S.M.: Comparative study of two wavelet bases. Probl. Inf.
Transm. 36(2), 114–124 (2000)
26. Malozemov, V.N., Masharsky, S.M., Tsvetkov, K.Yu.: Frank signal and its generalizations.
Probl. Inf. Transm. 37(2), 100–107 (2001)
27. Malozemov, V.N., Pevnyi, A.B.: Polynomial Splines. LGU, Leningrad (1986). (in Russian)
28. Malozemov, V.N., Pevnyi, A.B.: Discrete periodic B-splines. Vestnik St. Petersburg Univ. Math.
30(4), 10–14 (1997)
29. Malozemov, V.N., Pevnyi, A.B.: Discrete periodic splines and their numerical applications.
Comput. Math. Math. Phys. 38(8), 1181–1192 (1998)
30. Malozemov, V.N., Pevnyi, A.B., Tretyakov, A.A.: Fast wavelet transform for discrete periodic
signals and patterns. Probl. Inf. Transm. 34(2), 161–168 (1998)
31. Malozemov, V.N., Prosekov, O.V.: Fast Fourier transform of small orders. Vestnik St. Petersburg
Univ. Math. 36(1), 28–35 (2003)
32. Malozemov, V.N., Prosekov, O.V.: Parametric versions of the fast Fourier transform. Doklady
Math. 78(1), 576–578 (2008)
33. Malozemov, V.N., Solov’eva, N.A.: Parametric lifting schemes of wavelet decompositions. J.
Math. Sci. 162(3), 319–347 (2009)
34. Malozemov, V.N., Solov’eva, N.A.: Wavelets and Frames in Discrete Analysis. Lambert Aca-
demic Publishing (2012) (in Russian)
35. Malozemov, V.N., Tret’yakov, A.A.: New approach to the Cooley-Tukey algorithm. Vestnik
St. Petersburg Univ. Math. 30(3), 47–50 (1997)
36. Malozemov, V.N., Tret’yakov, A.A.: The Cooley-Tukey algorithm and discrete Haar transform.
Vestnik St. Petersburg Univ. Math. 31(3), 27–30 (1998)
37. Malozemov, V.N., Tret’yakov, A.A.: Partitioning, orthogonality and permutations. Vestnik St.
Petersburg Univ. Math. 32(1), 14–19 (1999)
38. Malozemov, V.N., Tsvetkov, K.Yu.: On optimal signal-filter pairs. Probl. Inf. Transm. 39(2),
216–226 (2003)
39. Malozemov, V.N., Tsvetkov, K.Yu.: A sampling theorem in Vilenkin-Chrestenson basis. Com-
mun. Appl. Anal. 10(2), 201–207 (2006)
40. Kamada, M., Toraichi, K., Mori, R.: Periodic spline orthogonal bases. J. Approx.
Theory 55(1), 27–34 (1988)
41. McClellan, J.H., Rader, C.M.: Number Theory in Digital Signal Processing. Prentice-Hall,
Englewood Cliffs, NJ (1979)
42. Morozov, V.A.: Regular Methods for Solving Ill-Posed Problems. Nauka, Moscow (1987). (in
Russian)
43. Narcowich, F.J., Ward, J.D.: Wavelets associated with periodic basis functions. Appl. Comput.
Harmonic Anal. 3(1), 40–56 (1996)
44. Prosekov, O.V., Malozemov, V.N.: Parametric Variants of the Fast Fourier Transform. Lambert
Academic Publishing (2010) (in Russian)

45. Sarwate, D., Pursley, M.: Cross-correlation properties of pseudorandom and related sequences.
Proc. IEEE 68(5), 593–619 (1980)
46. Malozemov, V.N. (ed.): Selected Chapters of Discrete Harmonic Analysis and Geometric Mod-
eling. Part One. VVM, St. Petersburg (2014). (in Russian)
47. Malozemov, V.N. (ed.): Selected Chapters of Discrete Harmonic Analysis and Geometric Mod-
eling. Part Two. VVM, St. Petersburg (2014). (in Russian)
48. Temperton, C.: Self-sorting in-place fast Fourier transform. SIAM J. Sci. Statist. Comput.
12(4), 808–823 (1991)
49. Trakhtman, A.M., Trakhtman, V.A.: Fundamentals of the Theory of Discrete Signals on Finite
Intervals. Sov. Radio, Moscow (1975). (in Russian)
50. Vlasenko, V.A., Lappa, Yu.M., Yaroslavsky, L.P.: Methods of Synthesis of Fast Algorithms for
Signal Convolution and Spectral Analysis. Nauka, Moscow (1990). (in Russian)
51. Zalmanzon, L.A.: Fourier, Walsh and Haar Transforms and Their Application to Control.
Communications and Other Fields. Nauka, Moscow (1989). (in Russian)
52. Zheludev, V.A.: Wavelets based on periodic splines. Rus. Acad. Sci. Dokl. Math. 49(2), 216–
222 (1994)
53. Zheludev, V.A., Pevnyi, A.B.: Biorthogonal wavelet schemes based on discrete spline interpo-
lation. Comput. Math. Math. Phys. 41(4), 502–513 (2001)
Index

A
Ahmed–Rao basis, 177
Algorithm
  Cooley–Tukey
    decimation in frequency, 139
    decimation in time, 128
  Goertzel, 121
Amplitude spectrum, 56
Auto-correlation, 34
  normalized, 42

B
Basis
  Ahmed–Rao, 177
  exponential, 21
  Haar, decimation in frequency, 140
  Haar, decimation in time, 132
  in space of splines, orthogonal, 90
  of shifts, 35
  Walsh–Hadamard, 159
  Walsh–Paley, 162
  wavelet, 130, 139
Bernoulli, discrete functions, 61
Binary
  code, 4
  signal, 58
Bitwise summation, 8
B-spline
  continuous periodic, 109
  discrete periodic, 65
  normalized, 108

C
Cauchy–Bunyakovskii inequality, 18
Continuous periodic
  B-spline, 109
  spline, 114
Convolution
  cyclic, 30
  dyadic, 153
  skew-cyclic, 151
  theorem, 30
    in Haar basis, 149, 153
Cooley–Tukey algorithm
  decimation in frequency, 139
  decimation in time, 128
Correlation
  auto, 34
  cross, 34
  cyclic, 34
  theorem, 34
Cross-correlation, 34
Cyclic
  convolution, 30
  correlation, 34

D
Delta-correlated signal, 42
DFT inversion formula, 20
Discrete Fourier transform, 19
Discrete functions
  Ahmed–Rao, 180
  Bernoulli, 61
  Rademacher, 189
  Walsh, 157
    ordered by frequency, 162
    ordered by sign changes, 164
Discrete periodic B-spline, 65
Discrete periodic spline, 69
  interpolating, 74
  smoothing, 79
Discrete transform
  Fourier, 19
  Walsh, 159
Discrete Walsh functions, 157
Discrete Walsh transform, 159
Dual splines, 92
Dual spline wavelets, 118
Dyadic convolution, 153
  theorem, 153

E
Energy of signal, 43
Ensemble of signals, 44
Equality
  Parseval, 24
  Parseval, generalized, 24
Euler permutation, 4
Even signal, 19
Exponential basis, 21

F
Fast Fourier transform, 128, 139
Fast Haar transform
  decimation in frequency, 142
  decimation in time, 135
Fast transform
  Fourier, 128, 139
  Haar, 135, 142
  Walsh, 160
Fast Walsh transform
  decimation in time, 160
Filter, 32
  matched, 41
  SLB, 43
Filter response
  frequency, 33
  impulse, 32
Fourier spectrum, 20
Frank signal, 57
Frank–Walsh signal, 190
Frequency response, 33

G
Generalized Parseval equality, 24
Goertzel algorithm, 121

H
Haar basis
  related to decimation in frequency, 140
  related to decimation in time, 132
Hadamard matrix, 155

I
Imaginary signal, 19
Impulse response, 32
Inequality
  Cauchy–Bunyakovskii, 18
  Sidel’nikov–Sarwate, 46
Interpolation, 28, 36, 73

L
Linear transform, 31

M
Matched filter, 41
Matrix
  Hadamard, 155

N
Non-correlated signals, 47
Normalized
  auto-correlation, 42
  B-spline, 108
  signal, 17

O
Odd signal, 19
Orthogonal basis in space of splines, 90
Orthogonal signals, 17

P
Parseval equality, 24
  generalized, 24
Peak factor of signal, 58
Permutation
  Euler, 4
  greyν, 6
  revν, 4, 124, 136
  walν, 164
Prolongation of signal, 56

R
Rademacher, discrete functions, 189
Real signal, 19
Recurrent relations, 5, 6, 87, 121, 123, 128, 130, 133, 136, 138, 140, 155, 160, 172
Residual, 1

S
Sampling theorem, 26
  in Haar basis, 144, 146
  in Walsh basis, 167
Self-dual spline, 93
Side-lobe blanking filter, 43
Sidel’nikov–Sarwate inequality, 46
Signal, 15
  binary, 58
  delta-correlated, 42
  energy, 43
  even, 19
  Frank, 57
  Frank–Walsh, 190
  imaginary, 19
  normalized, 17
  odd, 19
  peak factor, 58
  prolongation, 56
  real, 19
  stretch, 56
  subsampling, 56
  support, 51
  Tr (l), 67
  Zadoff–Chu, 57
Signal–filter pair, 41
Signals
  ensemble, 44
  non-correlated, 47
  orthogonal, 17
Skew-cyclic convolution, 151
Spectrum
  amplitude, 56
  Fourier, 20
  Walsh, 159
Spline
  continuous periodic, 114
  discrete periodic, 69
  self-dual, 93
Splines
  dual, 92
Spline wavelets
  dual, 118
Stationary transform, 31
Stretch of signal, 56
Subsampling, 56
Subspace
  of splines, 69
  wavelet, 100, 131
Support of signal, 51

T
Tangent hyperbolas method, 81
Theorem
  convolution, 30
    in Haar basis, 149, 153
  correlation, 34
  dyadic convolution, 153
  sampling, 26
    in Haar basis, 144, 146
    in Walsh basis, 167
Transform
  linear, 31
  stationary, 31

U
Uncertainty principle, 51
Unit pulse, 15

W
Walsh–Hadamard basis, 159
Walsh–Paley basis, 162
Walsh spectrum, 159
Wavelet
  basis, 130, 139
  packet, 130
  subspace, 100, 131

Z
Zadoff–Chu signal, 57