Digital Communication Systems Engineering With Software-Defined Radio
To my parents, Lingzhen and Xuexun
DP
Di Pu
Alexander M. Wyglinski
artechhouse.com
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.
ISBN-13: 978-1-60807-525-6
Preface xiii
Chapter 1
What Is an SDR? 1
1.1 Historical Perspective 1
1.2 Microelectronics Evolution and Its Impact on Communications Technology 2
1.2.1 SDR Definition 3
1.3 Anatomy of an SDR 5
1.3.1 Design Considerations 6
1.4 Build It and They Will Come 7
1.4.1 Hardware Platforms 8
1.4.2 SDR Software Architecture 10
1.5 Chapter Summary 13
1.6 Additional Readings 13
References 13
Chapter 2
Signals and Systems Overview 15
2.1 Signals and Systems 15
2.1.1 Introduction to Signals 15
2.1.2 Introduction to Systems 16
2.2 Fourier Transform 18
2.2.1 Introduction and Historical Perspective 19
2.2.2 Definition 20
2.2.3 Properties 20
2.3 Sampling Theory 23
2.3.1 Uniform Sampling 24
2.3.2 Frequency Domain Representation of Uniform Sampling 24
2.3.3 Nyquist Sampling Theorem 26
2.3.4 Sampling Rate Conversion 27
2.4 Pulse Shaping 30
2.4.1 Eye Diagrams 32
2.4.2 Nyquist Pulse Shaping Theory 32
2.4.3 Two Nyquist Pulses 34
2.5 Filtering 39
2.5.1 Ideal Filter 39
2.5.2 Z-Transform 39
2.5.3 Digital Filtering 42
2.6 Chapter Summary 47
2.7 Problems 47
References 50
Chapter 3
Probability Review 53
3.1 Fundamental Concepts 53
3.1.1 Set Theory 53
3.1.2 Partitions 54
3.1.3 Functions 56
3.1.4 Axioms and Properties of Probability 57
3.1.5 Conditional Probability 57
3.1.6 Law of Total Probability and Bayes’ Rule 58
3.1.7 Independence 59
3.2 Random Variables 59
3.2.1 Discrete Random Variables 60
3.2.2 Continuous Random Variables 65
3.2.3 Cumulative Distribution Functions 69
3.2.4 Central Limit Theorem 70
3.2.5 The Bivariate Normal 71
3.3 Random Processes 72
3.3.1 Statistical Characteristics of Random Processes 74
3.3.2 Stationarity 76
3.3.3 Gaussian Processes 77
3.3.4 Power Spectral Density and LTI Systems 78
3.4 Chapter Summary 79
3.5 Additional Readings 80
3.6 Problems 80
References 88
Chapter 4
Digital Transmission Fundamentals 89
4.1 What Is Digital Transmission? 89
4.1.1 Source Encoding 91
4.1.2 Channel Encoding 92
4.2 Digital Modulation 94
4.2.1 Power Efficiency 94
4.2.2 Pulse Amplitude Modulation 95
4.2.3 Quadrature Amplitude Modulation 98
4.2.4 Phase Shift Keying 99
4.2.5 Power Efficiency Summary 104
4.3 Probability of Bit Error 105
4.3.1 Error Bounding 107
Chapter 5
Basic SDR Implementation of a Transmitter and a Receiver 131
5.1 Software Implementation 131
5.1.1 Repetition Coding 132
5.1.2 Interleaving 134
5.1.3 BER Calculator 135
5.1.4 Receiver Implementation over an Ideal Channel 136
5.2 USRP Hardware Implementation 137
5.2.1 Frequency Offset Compensation 138
5.2.2 Finding Wireless Signals: Observing IEEE 802.11 WiFi Networks 140
5.2.3 USRP In-phase/Quadrature Representation 141
5.3 Open-Ended Design Project: Automatic Frequency Offset Compensator 145
5.3.1 Introduction 145
5.3.2 Objective 146
5.3.3 Theoretical Background 147
5.4 Chapter Summary 149
5.5 Problems 149
References 152
Chapter 6
Receiver Structure and Waveform Synthesis of a Transmitter and a Receiver 153
6.1 Software Implementation 153
6.1.1 Observation Vector Construction 153
6.1.2 Maximum-Likelihood Decoder Implementation 157
6.1.3 Correlator Realization of a Receiver in Simulink 159
6.2 USRP Hardware Implementation 162
6.2.1 Differential Binary Phase-Shift Keying 163
6.2.2 Differential Quadrature Phase-Shift Keying 166
6.2.3 Accelerate the Simulink Model that Uses USRP Blocks 166
6.3 Open-Ended Design Project: Frame Synchronization 167
6.3.1 Frame Synchronization 167
Chapter 7
Multicarrier Modulation and Duplex Communications 177
7.1 Theoretical Preparation 177
7.1.1 Single Carrier Transmission 177
7.1.2 Multicarrier Transmission 181
7.1.3 Dispersive Channel Environment 183
7.1.4 OFDM with Cyclic Prefix 185
7.1.5 Frequency Domain Equalization 186
7.1.6 Bit and Power Allocation 187
7.2 Software Implementation 189
7.2.1 MATLAB Design of Multicarrier Transmission 189
7.2.2 Simulink Design of OFDM 192
7.3 USRP Hardware Implementation 194
7.3.1 Eye Diagram 194
7.3.2 Matched Filter Observation 195
7.4 Open-Ended Design Project: Duplex Communication 197
7.4.1 Duplex Communication 197
7.4.2 Half-Duplex 198
7.4.3 Time-Division Duplexing 198
7.4.4 Useful Suggestions 199
7.4.5 Evaluation and Expected Outcomes 200
7.5 Chapter Summary 201
7.6 Problems 201
References 204
Chapter 8
Spectrum Sensing Techniques 207
8.1 Theoretical Preparation 207
8.1.1 Power Spectral Density 207
8.1.2 Practical Issues of Collecting Spectral Data 209
8.1.3 Hypothesis Testing 214
8.1.4 Spectral Detectors and Classifiers 218
8.2 Software Implementation 222
8.2.1 Constructing Energy Detector 222
8.2.2 Observing Cyclostationary Detector 226
8.3 USRP Hardware Experimentation 227
8.4 Open-Ended Design Project: CSMA/CA 230
8.4.1 Carrier Sense Multiple Access 230
Chapter 9
Applications of Software-Defined Radio 239
9.1 Cognitive Radio and Intelligent Wireless Adaptation 239
9.1.1 Wireless Device Parameters 241
9.2 Vehicular Communication Networks 242
9.2.1 VDSA Overview 243
9.2.2 Transmitter Design 244
9.2.3 Receiver Design 245
9.2.4 VDSA Test-Bed Implementation 245
9.3 Satellite Communications 246
9.4 Chapter Summary 250
References 250
Appendix A
Getting Started with MATLAB and Simulink 253
A.1 MATLAB Introduction 253
A.2 Edit and Run a Program in MATLAB 253
A.3 Useful MATLAB Tools 254
A.3.1 Code Analysis and M-Lint Messages 254
A.3.2 Debugger 255
A.3.3 Profiler 256
A.4 Simulink Introduction 257
A.5 Getting Started in Simulink 257
A.5.1 Start a Simulink Session 257
A.5.2 Start a Simulink Model 257
A.5.3 Simulink Model Settings 258
A.6 Build a Simulink Model 259
A.6.1 Obtain the Blocks 260
A.6.2 Set the Parameters 260
A.6.3 Connect the Blocks 262
A.7 Run Simulink Simulations 264
References 266
Appendix B
Universal Hardware Driver (UHD) 267
B.1 Setting Up Your Hardware 267
B.2 Installing UHD-Based USRP I/O Blocks 267
B.3 Burning the Firmware to an SD Card 268
Appendix C
Data Flow on USRP 271
C.1 Receive Path 271
C.1.1 Situation 1 272
C.1.2 Situation 2 274
C.2 Transmit Path 274
References 276
Appendix D
Quick Reference Sheet 277
D.1 LINUX 277
D.1.1 Helpful Commands 277
D.1.2 Modify the Iptables 277
D.2 MATLAB 278
D.2.1 How to Start MATLAB 278
D.2.2 The MATLAB Environment 278
D.2.3 Obtaining Help 278
D.2.4 Variables in MATLAB 278
D.2.5 Vectors and Matrices in MATLAB 279
D.3 USRP2 Hardware 279
D.3.1 XCVR2450 Daughtercard 279
D.3.2 Sampling 280
D.3.3 Clocking 281
D.3.4 DDC and DUC 281
D.4 Differential Phase-Shift Keying (DPSK) 282
Reference 282
Appendix E
Trigonometric Identities 283
About the Authors 285
Index 287
Preface
The communications sector has witnessed a significant paradigm shift over the
past several decades with respect to how wireless transceivers and algorithms are
implemented for a wide range of applications. Until the 1980s, almost all wireless
communication systems were based on relatively static, hard-wired platforms using
technologies such as application-specific integrated circuits (ASICs). Nonetheless,
with numerous advances being made in the areas of digital processing technol-
ogy, digital-to-analog converters (DACs), analog-to-digital converters (ADCs), and
computer architectures, communication engineers started rethinking how wireless
transceivers could be implemented entirely in the digital domain, resulting in both
digital communications and digital signal processing algorithms being constructed
completely in programmable digital logic and software modules. Subsequently, this
rethinking by the communications sector has given rise to software-defined radio
(SDR) technology, where all the baseband processing of the wireless transceiver
is entirely performed in the digital domain (e.g., programmable logic, software
source code). Nevertheless, despite the fact that SDR technology has been around
for nearly three decades, undergraduate and graduate courses focusing on digital
communications have been mostly taught using educational methodologies and
techniques that were either based on transmission technologies from half a century
ago or entirely lacked any physical hardware context whatsoever. Consequently,
there exists a significant need by the wireless community for an educational ap-
proach that leverages SDR technology in order to provide individuals with the latest
insights on digital communication systems; this book was primarily written in order
to satisfy this need.
The objective of this book is to provide the reader with a “hands-on” educa-
tional approach employing software-defined radio (SDR) experimentation to fa-
cilitate the learning of digital communication principles and the concepts behind
wireless data transmission, and provide an introduction to wireless access tech-
niques. Based on four years of educational experiences by the authors with respect
to the instruction of a digital communications course for senior undergraduate and
junior graduate students at Worcester Polytechnic Institute (WPI) based on SDR
experimentation, it is expected that enabling individuals to prototype and evaluate
actual digital communication systems capable of performing “over-the-air” wireless
data transmission and reception will provide them with a first-hand understanding
of the design tradeoffs and issues associated with these systems, as well as provide
them with a sense of actual “real world” operational behavior. Given that the learn-
ing process associated with using an actual SDR platform for prototyping digital
communication systems is often lengthy, tedious, and complicated, it is difficult to
employ this tool within a relatively short period of time, such as a typical under-
graduate course (e.g., 7–14 weeks). Consequently, this book contains a collection
of preassembled Simulink experiments along with several software examples to
enable the reader to successfully implement these designs over a short period of
time, as well as simultaneously synthesize several key concepts in digital commu-
nication system theory. Furthermore, in order to provide the reader with a solid
grasp of the theoretical fundamentals needed to understand and implement digital
communication systems using SDR technology, three overview chapters near the
beginning of this book provide a quick summary of essential concepts in signals
and systems, probability theory, and basic digital transmission.
The intended readers for this book are senior undergraduate and entry-level
graduate students enrolled in courses in digital communication systems, industry
practitioners in the telecommunications sector attempting to master software-defined
radio technology, and wireless communication researchers (government, industry,
academia) seeking to verify designs and ideas using SDR technology. It is assumed
that the reader possesses an electrical and computer engineering background with a
concentration in signals, systems, and information. This book, along with its corre-
sponding collection of software examples and 28 sets of lecture slides (which can be
viewed at http://www.artechhouse.com and at http://www.sdr.wpi.edu/), is designed
to introduce the interested reader to concepts in digital communications, wireless
data transmission, and wireless access via a structured, “hands-on” approach.
In order to efficiently utilize this book for learning digital communications with
SDR technology, it is recommended that the interested individual reads this book
in a sequential manner. Starting with an overview of SDR technology in Chapter 1,
the reader will then be provided with a quick refresher of signals and systems, probability
theory, and basic digital communications in Chapter 2, Chapter 3, and Chapter 4.
Following this review of the fundamentals, the next four chapters, namely, Chapters
5 through 8, focus on performing basic wireless data transmission, designing differ-
ent types of wireless receivers, introducing the concept of multicarrier modulation,
and devising spectrum sensing techniques. Finally, Chapter 9 provides an overview
of the various advanced applications where SDR technology has made a signifi-
cant impact. A collection of appendixes at the end of this book provides the reader
with instant access to information in order to help them quickly overcome the rela-
tively steep learning curve associated with SDR implementations. In addition to the
hands-on experiments, end-of-chapter problems also provide the readers with an
opportunity to strengthen their theoretical understanding of the concepts covered in
this book. Note that the 28 sets of lecture slides are closely related to the sequence
of topics presented in this book, where each set of slides is intended for a lecture of
approximately 60–90 minutes in duration. As a result, this book and its associated
materials are well suited for a senior-level undergraduate course on digital communi-
cations taught over a 7-week academic term or a 14-week academic semester.
This book was made possible by the extensive support of numerous individu-
als and organizations throughout the duration of this project. First, we are deeply
indebted to our collaborators and sponsors at the MathWorks in Natick, MA, for
their constant support. In particular, we would like to sincerely thank Don Orofino,
Kate Fiore, Mike McLernon, and Ethem Sozer for their endless guidance and assis-
tance throughout the entire project. Second, we would like to sincerely thank Dan-
iel Cullen for his assistance in getting this educational approach “off the ground”
in 2010 with the first offering of the SDR-based digital communications course on
which this book is based. Third, we would like to express our deepest gratitude
to the WPI ECE department, especially its department head, Fred Looft, for being
an unwavering supporter in making the course that this book is based on become a
reality and a permanent addition to the WPI ECE curriculum. Finally, we would like
to thank our families for their support and encouragement.
Di Pu
Worcester Polytechnic Institute, USA
Alexander M. Wyglinski
Worcester Polytechnic Institute, USA
Chapter 1
What Is an SDR?
Modern society as we know it today is becoming increasingly dependent on the reliable and seamless wireless exchange of information. For instance, a rapidly growing number of individuals are using smartphones in order to access websites, send and receive emails, view streaming multimedia content, and engage in social networking activities anywhere on the planet at any time. Furthermore, numerous drivers on the road today are extensively relying upon global positioning system (GPS) devices and other wireless navigation systems in order to travel in a new neighborhood or city without the need for a conventional map. Finally, in many hospitals and medical centers across the nation and around the world, the health of a patient is now being continuously monitored using an array of medical sensors attached to him/her that send vital information wirelessly to a computer workstation for analysis by a medical expert.
The enabling technology for supporting any form of wireless information exchange is the digital transceiver, which is found in every cell phone, WiFi-enabled laptop, Bluetooth device, and other wireless appliances designed to transmit and receive digital data. Digital transceivers are capable of performing a variety of baseband operations, such as modulation, source coding, forward error correction, and equalization. Although digital transceivers were initially implemented using integrated circuits and other forms of nonprogrammable electronics, the advent of programmable baseband functionality for these digital transceivers is the result of numerous advancements in microprocessor technology over the past several decades. Consequently, there exists a plethora of digital transceiver solutions with a wide range of capabilities and features. In this chapter, we will provide an introduction to software-defined radio (SDR) hardware platforms and software architectures, as well as study how this technology has evolved over the past several decades to become a transformative communications technology.
1.1 HISTORICAL PERSPECTIVE
The term “software-defined radio” was first coined by Joseph Mitola [1], although SDR technology had been available since the 1970s and the first demonstrated SDR prototype was presented in 1988 [2]. However, the key milestone for the advancement of SDR technology took place in the early 1990s with the first publicly funded SDR development initiative, called SpeakEasy I/II, by the U.S. military [3]. The SpeakEasy project was a significant leap forward in SDR technology since it used a collection of programmable microprocessors for implementing more than 10 military communication standards, with transmission carrier frequencies ranging from 2 MHz to 2 GHz, which at that time was a major advancement in communication systems engineering. Additionally, the SpeakEasy implementation allowed for software upgrades of new functional blocks, such as modulation schemes and coding schemes. The first generation of the SpeakEasy system initially used a Texas Instruments TMS320C40 processor (40 MHz). However, the SpeakEasy II platform was the first SDR platform to involve field programmable gate array (FPGA) modules for implementing digital baseband functionality. Note that given the microprocessor technology at the time, the physical size of the SpeakEasy prototypes was large enough to fit in the back of a truck.
Q: Why were the first SDR platforms physically very large in size?
Following this initial SDR project, research and development activities in this area continued to advance the current state-of-the-art in SDR technology, with one of the outcomes being the Joint Tactical Radio System (JTRS), which is a next-generation voice-and-data radio used by the U.S. military and employs the software communications architecture (SCA) [4]. Initially developed to support avionics [5] and dynamic spectrum access [6] applications, JTRS is scheduled to become the standard communications platform for the U.S. Army by 2012.
1.2 MICROELECTRONICS EVOLUTION AND ITS IMPACT ON COMMUNICATIONS TECHNOLOGY
The microelectronics industry has rapidly evolved over the past six decades, resulting in numerous advances in microprocessor systems that have enabled many of the applications we take for granted every day. The rate at which this evolution has progressed over time has been characterized by the well-known Moore’s Law, which defines the long-term trend of the number of transistors that can be accommodated on an integrated circuit. In particular, Moore’s Law dictates that the number of transistors per integrated circuit approximately doubles every two years, which subsequently affects the performance of microprocessor systems, such as processing speed and memory. One area that the microelectronics industry has significantly influenced over the past half century is the digital communication systems sector, where microprocessor systems have been increasingly employed in the implementation of digital transceivers, yielding more versatile, powerful, and portable communication system platforms capable of performing a growing number of advanced operations and functions. With the latest advances in microelectronics and microprocessor systems, this has given rise to software-defined radio (SDR) technology, where baseband radio functionality can be entirely implemented in digital logic and software, as illustrated in Figure 1.2.
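As a quick numerical illustration of this doubling behavior, the trend described by Moore's Law can be projected in a few lines of Python. This is a back-of-the-envelope sketch, not data from this book; the 1990 starting year and one-million transistor count below are assumed values chosen only for illustration.

```python
def moore_projection(base_year: int, base_count: int, target_year: int,
                     doubling_period_years: float = 2.0) -> float:
    """Project a transistor count assuming a doubling every two years."""
    doublings = (target_year - base_year) / doubling_period_years
    return base_count * 2 ** doublings

# Illustrative: a chip with ~1 million transistors in 1990 would be
# projected to reach ~1 billion transistors by 2010 (10 doublings).
print(moore_projection(1990, 1_000_000, 2010))  # 1024000000.0
```

Twenty years at a two-year doubling period yields ten doublings, or a factor of 1,024, which is why memory and processing capability grew by roughly three orders of magnitude over that span.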
There exist several different types of microprocessor systems for SDR implementations. For instance, one popular choice is the general purpose microprocessor, which is often used in SDR implementations and prototypes due to its high level of flexibility with respect to reconfigurability, as well as due to its ease of implementation regarding new designs. On the other hand, general purpose microprocessors are not specialized for mathematical computations, and they can be potentially power inefficient. Another type of microprocessor system, called a digital signal processor (DSP), is specialized for performing mathematical computations, allows the implementation of new digital communication modules to be performed with relative ease, and is relatively power efficient (e.g., DSPs are used in cellular telephones). On the other hand, DSPs are not well suited for computationally intensive processes and can be rather slow. Alternatively, field programmable gate arrays (FPGAs) are computationally powerful but power inefficient, and they are neither flexible nor easy to reconfigure with new modules. Similarly, graphics processing units (GPUs) are extremely powerful computationally, but they are difficult to use, and implementing new modules on them is difficult as well.
1.2.1 SDR Definition
Given these microprocessor hardware options, let us now proceed with formulating a definition for an SDR platform. An SDR is a class of reconfigurable/reprogrammable radios whose physical layer characteristics can be significantly modified via software changes. It is capable of implementing different functions at different times on the same platform, it defines in software various baseband radio features (e.g., modulation, error correction coding), and it possesses some level of software control over RF front-end operations (e.g., transmission carrier frequency). Since all of the baseband radio functionality is implemented using software, this implies that potential design options and radio modules available to the SDR platform can be readily stored in memory and called upon when needed for a specific application, such as a specific modulation, error correction coding, or other functional block needed to ensure reliable communications. Note that due to the programmable nature of the SDR platform and its associated baseband radio modules, these functional blocks can potentially be changed in real-time and the operating parameters of functional blocks can be adjusted either by a human operator or an automated process.
Although definitions may vary to a certain degree regarding what constitutes an SDR platform, several key characteristics that generally define an SDR can be summarized by the following list [7]:

• Multifunctionality: Possessing the ability to support multiple types of radio functions using the same digital communication system platform.
• Global mobility: Transparent operation with different communication networks located in different parts of the world (i.e., not confined to just one standard).
• Compactness and power efficiency: Many communication standards can be supported with just one SDR platform.
• Ease of manufacturing: Baseband functions are a software problem, not a hardware problem.
• Ease of upgrading: Firmware updates can be performed on the SDR platform to enable functionality with the latest communication standards.
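The multifunctionality characteristic amounts to being able to select among baseband modules stored in software. The sketch below is illustrative Python, not the MATLAB/Simulink environment used later in this book; the module names, mapping tables, and `transmit` helper are assumptions made for illustration. It shows modulation "modules" swapped by name at run time, in the spirit of exchanging a BPSK mapper for a QPSK mapper.

```python
import numpy as np

# Illustrative modulation "modules": symbol mappers keyed by name.
def bpsk_map(bits):
    # Map each bit to an antipodal point on the real axis: 0 -> -1, 1 -> +1.
    return np.array([2 * b - 1 for b in bits], dtype=complex)

def qpsk_map(bits):
    # Map bit pairs to Gray-coded, unit-energy constellation points.
    table = {(0, 0): 1 + 1j, (0, 1): -1 + 1j,
             (1, 1): -1 - 1j, (1, 0): 1 - 1j}
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([table[p] for p in pairs]) / np.sqrt(2)

modules = {"BPSK": bpsk_map, "QPSK": qpsk_map}

def transmit(bits, scheme):
    """Select the active modulation module by name, as an SDR might
    when channel conditions change."""
    return modules[scheme](bits)

bits = [1, 0, 1, 1]
print(transmit(bits, "BPSK"))  # four real-valued symbols
print(transmit(bits, "QPSK"))  # two complex symbols
```

In an actual SDR, the swap would involve reloading programmable logic or software modules rather than a dictionary lookup, but the principle of selecting among stored baseband configurations is the same.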
Figure 1.1 An illustration of multifunctionality in an FPGA-based SDR platform. (a) Swapping different modulation schemes on the same FPGA. (b) Digital communication transmitter chain on an FPGA using the multiprocessor system-on-chip (MPSoC) concept.

One of the most commonly leveraged SDR characteristics is that of multifunctionality, where the SDR platform employs a particular set of baseband radio modules based on how well the communication system will perform as a result of that configuration. To illustrate how multifunctionality might work on an SDR platform, suppose we employ an FPGA-based implementation consisting of the baseband radio architecture shown in Figure 1.1(a). Now, let us assume that the transmission environment has changed and the SDR platform has determined that a change in the modulation scheme is needed to continue guaranteeing sufficient transmission performance. Consequently, using the multiprocessor system-on-chip (MPSoC) concept, the SDR platform quickly swaps out the BPSK modulation software module and replaces it with a QPSK modulation software module, allowing for the continued operation of the SDR platform with minimal interruption. The way that the MPSoC concept works is shown in Figure 1.1(b).
When the binary information is introduced to the transmitter, the first task performed is to remove all redundant/repeating binary patterns from the information in order to increase the efficiency of the transmission. This is accomplished using the source encoder block, which is designed to strip out all redundancy from the information. Note that at the receiver, the source decoder reintroduces the redundancy in order to return the binary information back to its original form. Once the redundancy has been removed from the binary information at the transmitter, a channel encoder is employed to introduce a controlled amount of redundancy to the information stream in order to protect it from potential errors introduced during the transmission process across a noisy channel. A channel decoder is used to remove this controlled redundancy and return the binary information back to its original form. The next step at the transmitter is to convert the binary information into unique electromagnetic waveform properties, such as amplitude, carrier frequency, and phase. This is accomplished using a mapping process called modulation. Similarly, at the receiver the demodulation process converts the electromagnetic waveform back into its respective binary representation. Finally, the discrete samples outputted by the modulation block are resampled and converted into a baseband analog waveform using a digital-to-analog (D/A) converter before being processed by the radio frequency (RF) front-end of the communication system and upconverted to an RF carrier frequency. At the receiver, the reverse operation is performed, where the intercepted analog signal is downconverted by the RF front-end to a baseband frequency before being sampled and processed by an analog-to-digital (A/D) converter.
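The chain just described can be sketched end to end in software. The toy Python example below stands in for the real blocks under simplifying assumptions: a trivial repetition code (a scheme revisited in Chapter 5) plays the role of the channel encoder/decoder, and BPSK amplitude mapping plays the role of modulation; source coding and the analog front-end are omitted. It is an illustrative sketch, not an implementation from this book.

```python
def channel_encode(bits, n=3):
    """Toy channel encoder: repeat each bit n times (controlled redundancy)."""
    return [b for b in bits for _ in range(n)]

def channel_decode(coded, n=3):
    """Toy channel decoder: majority vote over each group of n repeated bits."""
    return [int(sum(coded[i:i + n]) > n // 2) for i in range(0, len(coded), n)]

def modulate(bits):
    """BPSK mapping: each bit becomes an antipodal amplitude."""
    return [2 * b - 1 for b in bits]

def demodulate(symbols):
    """Hard decision mapping each received amplitude back to a bit."""
    return [1 if s > 0 else 0 for s in symbols]

# Transmit, flip one symbol to emulate a channel error, then receive.
data = [1, 0, 1]
tx = modulate(channel_encode(data))
tx[1] = -tx[1]                      # single symbol error in the channel
rx = channel_decode(demodulate(tx))
print(rx)  # [1, 0, 1] -- the repetition code corrects the error
```

The controlled redundancy added by the channel encoder is exactly what lets the receiver vote away the injected error, which is the protective role the paragraph above attributes to channel coding.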
1.3.1 Design Considerations
Given the complexity of an SDR platform and its respective components, as described in the previous section as well as in Figure 1.2, it is important to understand the limitations of a specific SDR platform and how various design decisions may impact the performance of the resulting prototype. For instance, it is very desirable to have real-time baseband processing for spectrum sensing and agile transmission operations with high computational throughput and low latency. However, if the microprocessor being employed by the SDR platform is not sufficiently powerful to support the computational operations of the digital communication system, one needs to reconsider either the overall transceiver design or the requirements for low latency and high throughput. Otherwise, the SDR implementation will fail to operate properly, yielding transmission errors and poor communication performance. An example of when the microprocessor is not capable of handling the computational needs of the SDR platform is shown in Figure 1.3, where the transmission being produced by the SDR platform occurs in bursts, yielding several periods of transmitted signal interspersed with periods of no transmission. Note that such a bursty transmission would be extremely difficult to handle at the receiver due to the intermittent nature of the signal being intercepted.

Other design considerations to think about when devising digital communication systems based on an SDR platform include the following:
Figure 1.3 An example of sampling rate errors when the SDR platform is unable to keep up with the transmission of the data. Note that the bursts can result in significant difficulty at the receiver with respect to the interception and decoding of the received signal.

• The integration of the physical and network layers via a real-time protocol implementation on an embedded processor. Note that most communication systems are divided into logically separated layers in order to more readily facilitate the design of the communication system. However, it is imperative that each layer is properly designed due to the strong interdependence between all the layers.
• Ensuring that a sufficiently wide bandwidth radio front-end exists with agility over multiple subchannels and a scalable number of antennas for spatial processing. Given how many of the advanced communication system designs involve the use of multiple antennas and wideband transmissions, it is important to know what the SDR hardware is capable of doing with respect to these physical attributes.
• Many networks employing digital communication systems possess a centralized architecture for controlling the operations of the overall network (e.g., control channel implementation). Knowing your radio network architecture is important since it will dictate what sort of operations are essential for one digital transceiver to communicate with another.
• The ability to perform controlled experiments in different environments (e.g., shadowing and multipath, indoor and outdoor environments) is important for the sake of demonstrating the reliability of a particular SDR implementation. In other words, if an experiment involving an SDR prototype system is conducted twice in a row in the exact same environment and using the exact same operating parameters, it is expected that the resulting output and performance should be the same. Consequently, being able to perform controlled experiments provides the SDR designer with a “sanity check” capability.
• Reconfigurability and fast prototyping through a software design flow for algorithm and protocol description.
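The real-time throughput concern raised in the discussion of Figure 1.3 can be estimated with simple arithmetic: if the host generates samples more slowly than the radio front-end consumes them, the transmitted signal becomes bursty. The sketch below is an illustrative first-order calculation; the sample rates and the `duty_cycle` helper are assumed values for illustration, not specifications of any particular SDR platform.

```python
def duty_cycle(sample_rate_hz, samples_processed_per_sec):
    """Estimate the fraction of time the radio can actually transmit when
    the host cannot produce samples as fast as the front end consumes them."""
    return min(1.0, samples_processed_per_sec / sample_rate_hz)

# Illustrative numbers: a 25 Msample/s front end fed by a host that can
# only sustain 10 Msample/s yields bursty output, active 40% of the time.
print(duty_cycle(25e6, 10e6))  # 0.4
```

A result below 1.0 corresponds to the intermittent, burst-like transmission illustrated in Figure 1.3, signaling that either the transceiver design or the latency/throughput requirements need to be revisited.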
1.4 BUILD IT AND THEY WILL COME

Given our brief overview of SDR technology and its definition, as well as a survey of the various microprocessor design options and constraints available when implementing an SDR platform, we will now focus our attention on several well-known SDR hardware implementations published in the open literature. Note that when designing a complete SDR system from scratch, it is very important to have both a hardware platform that is sufficiently programmable and computationally powerful and a software architecture that allows a communication system designer to implement a wide range of different transceiver realizations. In this section, we will first study some of the well-known SDR hardware platforms before talking about some of the available SDR software architectures.
1.4.1 Hardware Platforms
10
Exploration into advanced wireless communication and networking techniques requires highly flexible hardware platforms. As a result, SDR is very well suited due to its rapidly reconfigurable attributes, which allow for controlled yet realistic experimentation. The use of real-time test bed operations enables a large set of experiments for various receiver settings, transmission scenarios, and network configurations. Furthermore, SDR hardware provides an excellent alternative for comprehensive evaluation of communication systems operating within a networked environment, whereas Monte Carlo simulations can be computationally exhaustive and are only as accurate as the devised computer model. In this section, we will study several well-known SDR hardware platforms used by the wireless community for research and experimentation.
One of the most well-known of all SDR hardware platforms is the Universal Software Radio Peripheral (USRP) concept introduced by Matt Ettus, founder and president of Ettus Research LLC, which is considered to be a relatively inexpensive hardware platform for enabling SDR design and development [8]. All the baseband digital communication algorithms and digital signal processing are conducted on a computer workstation "host," where the USRP platform acts as a radio peripheral allowing for over-the-air transmissions and the libusrp library file defines the interface between the USRP platform and the host computer workstation. Note that the USRP design is open source, which allows for user customization and fabrication. Furthermore, the USRP platform design is modular in terms of the supported RF front-ends, referred to as daughtercards. We will now talk about two types of USRP platforms: the USRP1 and USRP2.
The Universal Software Radio Peripheral-Version 1 (USRP1) was designed and manufactured by Ettus Research LLC for a variety of different communities interested in an inexpensive SDR platform. The USRP1 uses a USB interface between the host computer workstation and the USRP1 platform, which results in a data bottleneck due to the low data rates supported by the USB connection. The USRP1 supports up to two RF transceiver daughtercards, possesses an Altera Cyclone EP1C12Q240C8 FPGA for performing sampling and filtering, contains four high-speed analog-to-digital converters, each capable of 64 MS/s at a resolution of 12 bits with an 85 dB SFDR (AD9862), and contains four high-speed digital-to-analog converters, each capable of 128 MS/s at a resolution of 14 bits with an 83 dB SFDR (AD9862).
Following the success of the USRP1, Ettus Research LLC officially released the Universal Software Radio Peripheral-Version 2 (USRP2) platform in September 2008, as shown in Figure 1.4(a). The USRP2 platform provides a more capable
Figure 1.4 Examples of software-defined radio platforms. (a) Front view of a Universal Software Radio Peripheral–Version 2 (USRP2) software-defined radio platform by Ettus Research LLC. (b) Front view of a Kansas University Agile Radio (KUAR) software-defined radio platform.
SDR device for enabling digital communication system design and implementation. The USRP2 features a gigabit Ethernet interface between the host computer workstation and the USRP2 platform, supports one RF transceiver daughtercard, possesses a Xilinx Spartan 3-2000 FPGA for performing sampling and filtering, contains two 100 MS/s, 14-bit analog-to-digital converters (LTC2284) with a 72.4 dB SNR and 85 dB SFDR for signals at the Nyquist frequency, contains two 16-bit digital-to-analog converters (AD9777) capable of 160 MS/s without interpolation and up to 400 MS/s with 8× interpolation, and is MIMO-capable for supporting the processing of digital communication system designs employing multiple antennas.
The radio frequency (RF) front-ends are usually very difficult to design and are often limited to a narrow range of transmission carrier frequencies. This is due to the fact that the properties of the RF circuit and its components change across different frequencies and that the RF filters are constrained in the frequency range they can sweep. Consequently, in order to support a wide range of transmission carrier frequencies, both the USRP1 and USRP2 platforms can use an assortment of modular RF daughtercards, such as the following:

• BasicTX: A transmitter that supports carrier frequencies within 1–250 MHz;
• BasicRX: A receiver that supports carrier frequencies within 1–250 MHz;
• RFX900: A transceiver that supports carrier frequencies within 800–1000 MHz with a 200+ mW output;
• RFX2400: A transceiver that supports carrier frequencies within 2.3–2.9 GHz with a 20+ mW output;
• XCVR2450: A transceiver that supports carrier frequencies within two bands, namely, 2.4–2.5 GHz with an output of 100+ mW and 4.9–5.85 GHz with an output of 50+ mW.
Another well-known SDR hardware platform was the Kansas University Agile Radio (KUAR), a small form factor SDR platform containing a Xilinx Virtex-II Pro FPGA board and a PCI Express 1.4 GHz Pentium-M microprocessor, as shown in Figure 1.4(b) [9, 10]. For its size and capability, the KUAR was one of the leading SDR implementations of its day, incorporating substantial computational resources as well as wideband frequency operations. Around the same time period, the Berkeley BEE2 was designed as a powerful reconfigurable computing engine with five Xilinx Virtex-II Pro FPGAs on a custom-built emulation board [11]. The Berkeley Wireless Research Center (BWRC) cognitive radio test-bed hardware architecture consists of the BEE2, several reconfigurable 2.4-GHz radio modems, and fiber link interfaces for connections between the BEE2 and the radio modems. The software architecture consists of a Simulink-based design flow and a BEE2-specific operating system, which together provide an integrated environment for implementation and simple data acquisition during experiments.
With respect to compact SDR platforms, Motorola developed and built a 10-MHz to 4-GHz CMOS-based small form factor cognitive radio platform prototype [12]. Fundamentally flexible, with a low-power transceiver radio frequency integrated circuit (RFIC) at the core of this experimental platform, this prototype can receive and transmit signals of many wireless protocols, both standard and experimental. Carrier frequencies from 10 MHz to 4 GHz with channel bandwidths from 8 kHz to 20 MHz were supported. Similarly, the Maynooth Adaptable Radio System (MARS) is a custom-built small form factor SDR platform [13]. The original objective of the MARS platform was to be a personal-computer-connected radio front-end where all the signal processing is implemented on the computer's general-purpose processor. The MARS platform was designed to deliver performance equivalent to that of a future base station, supporting the wireless communication standards in the 1700-MHz to 2450-MHz frequency range. Furthermore, the communication standards GSM1800, PCS1900, IEEE 802.11b/g, and UMTS (TDD and FDD) are also supported.
Rice University Wireless Open Access Research Platform (WARP) radios include a Xilinx Virtex-II Pro FPGA board as well as a MAX2829 transceiver [14], while the Lyrtech Small Form Factor SDR, developed by a company from the Canadian province of Québec, leverages industrial collaborations with Texas Instruments and Xilinx to produce high-performance SDR platforms consisting of an array of different microprocessor technologies [15]. Finally, Epiq Solutions recently released the MatchStiq SDR platform, a powerful yet very compact form factor SDR platform capable of being deployed in the field to perform a variety of wireless experiments, including deployment onboard vehicles such as automobiles and unmanned aerial vehicles [16].
1.4.2 SDR Software Architecture

Given the programmable attributes of an SDR platform, it is vitally important to also develop an efficient and reliable software architecture that would operate on these platforms in order to perform the various data transmission functions we expect from a wireless communications system. In this section, we will review some of the SDR software architectures currently available for use with a wide variety of SDR hardware platforms.
One of the first Simulink interfaces to the USRP2 platform was implemented as part of an MS thesis at WPI and generously sponsored by the MathWorks [17]. In this research project, the focus was on creating a Simulink blockset capable of communicating with the USRP2 libraries, which in turn allows for communications with the USRP2 platform itself. The resulting blocksets from this thesis research are shown in Figure 1.5. By creating a Simulink interface to this SDR hardware, the existing signal processing libraries provided by the MathWorks can be extensively leveraged in order to create actual digital communications systems capable of performing over-the-air data transmission with other USRP2 platforms.
The architectures of the Simulink transmitter and receiver blocks are shown in Figure 1.6. In Figure 1.6(b), we observe how the Simulink receiver block calls the functions implemented by the S-function at different parts of the simulation. While the simulation is initializing, it calls mdlStart such that a data handler object and first-in first-out (FIFO) register are created and a USRP2 object is instantiated. Once the USRP2 object has been created, the operating parameters set in the mask are passed down to the USRP2 hardware while the model is in the process of initializing. Furthermore, the data handler object loads data into the FIFO and, while the simulation is running, Simulink repeatedly calls mdlOutputs such that a frame of data is read from the FIFO, converted to a Simulink data type, and sent to the output port to be received in the simulation. Note that when the simulation has finished, the FIFO and data handler are deallocated. The transmitter shown in Figure 1.6(a) possesses a similar mode of operation.
From the MS thesis and the development of the first Simulink prototype blockset interface with the USRP2 SDR platform, the MathWorks built upon the lessons learned from this experience and ultimately created the SDRu blockset, which can be downloaded from the MathWorks website and installed with MATLAB R2011a or later along with the Communications Toolbox [18]. After several years of development, the SDRu blocks are at the core of numerous SDR implementations using Simulink and the USRP2, as well as educational activities such as those to be discussed later in this book.
Figure 1.5 The initial prototype Simulink transmitter and receiver interfaces for the USRP2 platform [17].
Figure 1.6 Architecture of the initial prototype interfaces to the USRP2 platform [17]. (a) Initial prototype Simulink transmitter interface. (b) Initial prototype Simulink receiver interface.
Other SDR software architectures include the popular open-source GNU Radio software [19], which is a community-based effort for devising an SDR software architecture capable of interfacing with any SDR hardware platform, especially the USRP family of products, and enabling them to reliably and seamlessly communicate with other SDR platforms as well as conventional wireless systems. Given the large
[4] White, B. E., "Tactical Data Links, Air Traffic Management, and Software Programmable Radios," Proceedings of the 18th Digital Avionics Systems Conference, St. Louis, MO, 1999.
[5] Eyermann, P. A., "Joint Tactical Radio Systems—A Solution to Avionics Modernization," Proceedings of the 18th Digital Avionics Systems Conference, St. Louis, MO, 1999.
[6] Bergstrom, C., S. Chuprun, and D. Torrieri, "Adaptive Spectrum Exploitation Using Emerging Software Defined Radios," Proceedings of the IEEE Radio and Wireless Conference, Denver, CO, 1999.
[7] Reed, J. H., Software Radio: A Modern Approach to Radio Engineering, Prentice Hall PTR, 2002.
[8] Ettus Research LLC, "USRP Networked Series," https://www.ettus.com/product/category/USRP_Networked_Series.
[9] Minden, G. J., J. B. Evans, L. Searl, D. DePardo, V. R. Petty, et al., "KUAR: A Flexible Software-Defined Radio Development Platform," Proceedings of the IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, Dublin, Ireland, 2007.
[10] Minden, G. J., J. B. Evans, L. Searl, D. DePardo, V. R. Petty, et al., "Cognitive Radio for Dynamic Spectrum Access—An Agile Radio for Wireless Innovation," IEEE Communications Magazine, May 2007.
[11] Chen, C., J. Wawrzynek, and R. W. Brodersen, "BEE2: A High-End Reconfigurable Computing System," IEEE Design and Test of Computers Magazine, March/April 2005.
[12] Shi, Q., D. Taubenheim, S. Kyperountas, P. Gorday, N. Correal, et al., "Link Maintenance Protocol for Cognitive Radio System with OFDM PHY," Proceedings of the IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, Dublin, Ireland, 2007.
[13] Farrell, R., M. Sanchez, and G. Corley, "Software-Defined Radio Demonstrators: An Example and Future Trends," International Journal of Digital Multimedia Broadcasting, 2009.
[14] Rice University WARP, WARP: Wireless Open-Access Research Platform, http://warp.rice.edu/.
[15] Lyrtech RD Incorporated, Lyrtech RD Processing Systems: Software-Defined Radios, http://lyrtechrd.com/en/products/families/+processing-systems+software-defined-radio.
[16] Epiq Solutions, MatchStiq: Handheld Reconfigurable RF Transceiver, http://epiqsolutions.com/matchstiq/.
[17] Leferman, M. J., "Rapid Prototyping Interface for Software Defined Radio Experimentation," Master's thesis, Worcester Polytechnic Institute, Worcester, MA, 2010.
[18] The MathWorks, USRP Hardware Support from MATLAB and Simulink, http://www.mathworks.com/discovery/sdr/usrp.html.
[19] GNU Radio, Welcome to GNU Radio!, http://gnuradio.org/.
[20] Nychis, The Comprehensive GNU Radio Archive Network, http://www.cgran.org/.
[21] Sutton, P. D., J. Lotze, H. Lahlou, S. A. Fahmy, K. Nolan, et al., "Iris: An Architecture for Cognitive Radio Networking Testbeds," IEEE Communications Magazine, September 2010.
[22] Kenington, P. B., RF and Baseband Techniques for Software Defined Radio, Norwood, MA: Artech House, 2005.
[23] Cabric, D., D. Taubenheim, G. Cafaro, and R. Farrell, "Cognitive Radio Platforms and Testbeds," Cognitive Radio Communications and Networks: Principles and Practice, Academic Press, 2009.
[24] Farrell, R., R. Villing, A. M. Wyglinski, C. R. Anderson, J. H. Reed, et al., "Rationale for a Clean Slate Radio Architecture," Proceedings of the SDR Forum Technical Conference, Washington, DC, 2008.
This chapter covers the theory of linear time-invariant (LTI) signals and systems, which forms the fundamental basis for later chapters. Sections 2.3 to 2.5 introduce several more advanced topics, such as sampling, pulse shaping, and filtering. However, this chapter does not discuss these topics at length, preferring to direct the reader to more comprehensive books on these subjects.
2.1.1 Introduction to Signals
A signal is defined as any physical quantity that varies with time, space, or any other
independent variable or variables. Mathematically, we describe a signal as a func-
tion of one or more independent variables [1]. For example, a cosine wave signal
is defined as:

s(t) = A cos(2πft)  (2.1)

where the signal amplitude A and carrier frequency f are constants. This signal is thus a function of t; in other words, it varies with time. We can classify signals based on different criteria.
In terms of distribution, signals can be classified as deterministic signals and random signals. For example, the cosine wave signal defined in (2.1) is a deterministic signal because for each value of the variable t, there is one and only one corresponding s(t). However, the temperature of a location at each hour of a day is a random signal. Even if we know the temperature for all past hours, we cannot predict exactly what the temperature will be for the next hour.
With respect to the value of the independent variable t, signals can be classified as either continuous time signals or discrete time signals. If t can take on any value from −∞ to ∞, the signal is a continuous time signal. If t can only take on discrete values from −∞ to ∞, the signal is a discrete time signal. In this case, we usually use n instead of t to denote the time variable, and the resulting signal is written as s[n] instead of s(t). Therefore, a discrete time cosine wave signal can be expressed as:

s[n] = A cos(2πfn),  n = 1, …, N  (2.2)
where n denotes the discrete time index. Figure 2.1(a) shows an example of a continuous time triangular wave, and Figure 2.1(b) shows an example of a discrete time triangular wave.
In this chapter, we will discuss converting a signal from continuous-time do-
main to discrete-time domain using sampling, as well as from discrete-time domain
Figure 2.1 A triangular wave used as an example to illustrate continuous time and discrete time signals. (a) Continuous time signal. (b) Discrete time signal.
2.1.2 Introduction to Systems

A system may be defined as a physical device that performs an operation on a signal [1]. We usually denote a system by the letter S. For example, assume the input to a system is a continuous-time signal x(t); after some operations by the system S, the output is a continuous-time signal y(t), as shown in Figure 2.3. This continuous-time system can be defined as:

y(t) = S[x(t)]  (2.3)
Figure 2.2 A triangular wave used as an example to illustrate periodic and aperiodic signals. (a) Periodic triangular wave. (b) Aperiodic triangular wave.
Figure 2.3 A continuous system S, with input signal x(t) and output signal y(t).
If

S[x1(t)] = y1(t),  S[x2(t)] = y2(t)  (2.7)

then

S[a x1(t) + b x2(t)] = a y1(t) + b y2(t)  (2.8)

for any constants a and b.
There are several other important properties of systems. Since this chapter fo-
cuses on continuous-time systems, the following properties are discussed in the
continuous-time context:
• Causal: A system is causal if, whenever the input signal is 0 for t < t0, the corresponding output signal is also 0 for t < t0. In other words, the output signal cannot exist before the input signal. Otherwise, the system is called a noncausal system.
• Memoryless: A system is memoryless if the output value at any time t depends only on the input signal value at that same time t.
Electrical and computer engineers often deal with physical phenomena that are
structured as some form of energy transmission via wave emission. Whether it is
sound waves, wireless signals, or ultrasound pulses, it is very important to pos-
sess a collection of tools that can analyze and study these forms of transmission.
Throughout history, numerous scientists and engineers have studied and developed
mathematical tools that can be used to characterize these wave emissions.
One family of mathematical tools developed during the early 19th century is Fourier analysis, which can be used to analyze signals and systems in the frequency domain. We are often more accustomed to analyzing signals and systems in the time domain. However, the frequency-domain perspective can provide more insight into the various features of a particular signal and/or system, as well as its manipulation by combination with other signals and processing using other tools.
The Fourier analysis family mainly includes the Fourier transform (FT), the Fourier series (FS), the discrete-time Fourier transform (DTFT), and the discrete Fourier transform (DFT). Each has its own applications. In this section, we will focus on the Fourier transform, but we will also briefly cover the other three.
equation. The subsequent development of the field is known as harmonic analysis and is also an early instance of representation theory. The first fast Fourier transform (FFT) algorithm for the DFT was discovered around 1805 by Carl Friedrich Gauss when interpolating measurements of the orbits of the asteroids Juno and Pallas [4].
2.2.2 Definition
As mentioned at the beginning of this section, each tool in the Fourier family has its
own application, which is listed in Table 2.1. These applications are classified based
on the properties of input signals, as shown in Figure 2.5.
The Fourier transform, in essence, decomposes or separates a continuous, aperiodic waveform or function, as shown in Figure 2.5(a), into sinusoids of different frequencies that sum to the original waveform. It identifies or distinguishes the different frequency sinusoids and their respective amplitudes [5].
The Fourier transform of x(t) is defined as [1]:

X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt  (2.11)

where t is the time variable in seconds across the time domain, and ω is the frequency variable in radians per second across the frequency domain.
Applying a similar transform to X(ω) yields the inverse Fourier transform [1]:

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω  (2.12)
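The pair of definitions (2.11) and (2.12) can be checked numerically. The following Python sketch (illustrative code, not part of the original text; the signal, truncation limit T, and step size dt are arbitrary choices) approximates the integral in (2.11) with the trapezoidal rule for the two-sided exponential x(t) = e^{−a|t|} and compares it against the well-known closed-form transform 2a/(a² + ω²):

```python
import math

def fourier_transform(x, w, T=20.0, dt=1e-3):
    """Trapezoidal approximation of X(w) = integral of x(t)*e^{-jwt} dt.

    For a real, even x(t) the imaginary (sine) part integrates to zero,
    so only the cosine part of the kernel is accumulated.
    """
    n = int(2 * T / dt)
    acc = 0.0
    for i in range(n + 1):
        t = -T + i * dt
        weight = 0.5 if i in (0, n) else 1.0  # trapezoidal endpoint weights
        acc += weight * x(t) * math.cos(w * t)
    return acc * dt

a = 2.0
x = lambda t: math.exp(-a * abs(t))  # two-sided exponential pulse

for w in (0.0, 1.0, 5.0):
    numeric = fourier_transform(x, w)
    closed_form = 2 * a / (a ** 2 + w ** 2)  # known transform of e^{-a|t|}
    print(w, numeric, closed_form)
```

The two columns printed for each ω should agree to several decimal places, which is a direct numerical confirmation of the forward definition (2.11).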
2.2.3 Properties
Several properties of Fourier transform are useful when studying signals and sys-
tems in the Fourier transform domain:
Figure 2.5 A triangular wave used as an example to illustrate four different types of signals. (a) Continuous, aperiodic signal. (b) Continuous, periodic signal. (c) Discrete, aperiodic signal. (d) Discrete, periodic signal.
Table (excerpt): x(t) = e^{−a|t|}, a > 0  ↔  X(ω) = 2a/(a² + ω²)
Suppose the time signal is x(t) and its Fourier transform is X(ω).
The Fourier transform properties introduced here are summarized in Table 2.4
for your quick reference.
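Although Table 2.4 is not reproduced here, one property such tables typically include, the time-shifting property x(t − t0) ↔ e^{−jωt0}X(ω), can be verified numerically. The Python sketch below is illustrative only; the signal e^{−|t|}, the shift t0, and the integration parameters are arbitrary choices made for the demonstration:

```python
import cmath
import math

def ft(x, w, T=25.0, dt=1e-3):
    """Trapezoidal approximation of X(w) = integral of x(t)*e^{-jwt} dt."""
    n = int(2 * T / dt)
    acc = 0 + 0j
    for i in range(n + 1):
        t = -T + i * dt
        weight = 0.5 if i in (0, n) else 1.0  # trapezoidal endpoint weights
        acc += weight * x(t) * cmath.exp(-1j * w * t)
    return acc * dt

x = lambda t: math.exp(-abs(t))  # test signal (a = 1)
t0 = 1.5                         # time shift
shifted = lambda t: x(t - t0)    # x(t - t0)

w = 2.0
lhs = ft(shifted, w)                      # FT of the shifted signal
rhs = cmath.exp(-1j * w * t0) * ft(x, w)  # phase-rotated FT of the original
print(abs(lhs - rhs))  # should be near zero if the property holds
```

If the time-shifting property holds, the two quantities agree up to the numerical integration error.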
Figure 2.6 Basic parts of an analog-to-digital converter (ADC) [1]. Sampling takes place in the
sampler block.
2.3.1 Uniform Sampling
Concerning sampling interval, there are infinite ways to do sampling. However, if we
specify the sampling interval as a constant number Ts, we get the most widely used sam-
pling, called uniform sampling, or periodic sampling. Using this method, we are taking
“samples” of the continuous-time signal every Ts seconds, which can be defined as:
where x(t) is the input continuous-time signal, x[n] is the output discrete-time sig-
nal, Ts is the sampling period, and fs = 1/Ts is the sampling frequency.
An equivalent model for the uniform sampling operation is shown in
Figure 2.7(a), where the continuous-time signal x(t) is multiplied by an impulse
train p(t) to form the sampled signal xs(t), which can be defined as:
x_s(t) = x(t) p(t)  (2.15)
Figure 2.7 An equivalent model for the uniform sampling operation. (a) A continuous-time signal x(t) is multiplied by a periodic pulse p(t) to form the sampled signal xs(t). (b) A periodic pulse p(t).
According to Figure 2.7(b), we can define the sampling function p(t) as:

p(t) = ∑_{k=−∞}^{∞} δ(t − kTs)  (2.16)

where at the time instants kTs, we have p(t) = 1. According to [7], p(t) is a Dirac comb constructed from Dirac delta functions.
Substituting (2.16) into (2.15) gives:

x_s(t) = x(t) p(t) = x(t) ∑_{k=−∞}^{∞} δ(t − kTs)  (2.17)
To understand the sampling process in the frequency domain, we need to take the Fourier transform of xs(t). Applying the frequency-domain convolution property to (2.17) yields:

X_s(f) = f_s ∑_{k=−∞}^{∞} X(f − k f_s)  (2.20)

Equation (2.20) tells us that uniform sampling creates images of the Fourier transform of the input signal, and these images are periodic with the sampling frequency fs.
The Nyquist sampling theorem states that a real signal, x(t), which is bandlimited to B Hz can be reconstructed without error from samples taken uniformly at a rate R > 2B samples per second. This minimum sampling frequency, Fs = 2B Hz, is called the Nyquist rate or the Nyquist frequency. The corresponding sampling interval, T = 1/(2B), is called the Nyquist interval [1]. A signal bandlimited to B Hz that is sampled at less than the Nyquist rate of 2B (i.e., at an interval T > 1/(2B)) is said to be undersampled.
When a signal is undersampled, its spectrum has overlapping tails, where Xs(f) no longer has complete information about the spectrum and it is no longer possible
Figure 2.8 The spectrum of the original signal x(t) and the sampled signal xs(t) in the frequency domain. (a) The spectrum of the original signal x(t), with bandwidth −fh to fh and amplitude A. (b) The spectrum of the sampled signal xs(t), which satisfies the Nyquist sampling theorem. (c) The spectrum of the sampled signal xs(t), which does not satisfy the Nyquist sampling theorem and exhibits aliasing.
to recover x(t) from the sampled signal. In this case, the tailing spectrum does not go to zero, but is instead folded back onto the apparent spectrum. This inversion of the tail is called spectral folding or aliasing, as shown in Figure 2.8(c) [5].
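Aliasing can be demonstrated in a few lines of Python (an illustrative sketch, not from the original text): a 3-Hz cosine sampled at fs = 5 Hz, below its Nyquist rate of 6 Hz, produces exactly the same samples as a 2-Hz cosine, making the two tones indistinguishable after sampling; sampling at fs = 10 Hz keeps them distinct.

```python
import math

def sample(f, fs, n_samples=50):
    """Uniformly sample cos(2*pi*f*t) at rate fs: x[n] = cos(2*pi*f*n/fs)."""
    return [math.cos(2 * math.pi * f * n / fs) for n in range(n_samples)]

# Undersampled: fs = 5 Hz < 2B = 6 Hz, so the 3-Hz tone aliases to |3 - 5| = 2 Hz.
alias = max(abs(a - b) for a, b in zip(sample(3.0, 5.0), sample(2.0, 5.0)))
print("max sample difference at fs=5 Hz:", alias)

# Properly sampled: fs = 10 Hz > 6 Hz, so the two tones produce distinct samples.
distinct = max(abs(a - b) for a, b in zip(sample(3.0, 10.0), sample(2.0, 10.0)))
print("max sample difference at fs=10 Hz:", distinct)
```

At fs = 5 Hz the two sample sequences match to within floating-point error, which is exactly the ambiguity that the Nyquist criterion rules out.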
where x[n] is the original signal, y[n] is the decimated signal, and D is the decimation rate.
According to (2.22), the sampling rates of the original signal and the decimated signal are related by:

F_y = F_x / D  (2.23)

where Fx is the sampling rate of the original signal, and Fy is the sampling rate of the decimated signal.
Since the frequency variables in radians, ωx and ωy, can be related to the sampling rates Fx and Fy by:

ω_x = 2πF T_x = 2πF / F_x  (2.24)

and

ω_y = 2πF T_y = 2πF / F_y  (2.25)

it follows that ωx and ωy are related by:
ω_y = D ω_x  (2.26)

which means that the frequency range of ωx is stretched into the corresponding frequency range of ωy by a factor of D.
In order to avoid aliasing in the decimated sequence y[n], it is required that 0 ≤ |ωy| ≤ π. Based on (2.26), this implies that the spectrum of the original sequence should satisfy 0 ≤ |ωx| ≤ π/D. Therefore, in practice, decimation is usually a two-step process, consisting of a lowpass anti-aliasing filter and a downsampler, as shown in Figure 2.9. The lowpass anti-aliasing filter is used to constrain the bandwidth of the input signal to the downsampler, x[n], to 0 ≤ |ωx| ≤ π/D.
In the frequency domain, the spectrum of the decimated signal, y[n], can be expressed as [1]:

Y(ω_y) = (1/D) ∑_{k=0}^{D−1} H_D((ω_y − 2πk)/D) S((ω_y − 2πk)/D)  (2.27)

where S(ω) is the spectrum of the input signal s[n], and HD(ω) is the frequency response of the lowpass filter hD[n]. With a properly designed filter HD(ω), the aliasing is eliminated, and consequently all but the first (k = 0) term in (2.27) vanish [1]. Hence, (2.27) becomes:

Y(ω_y) = (1/D) H_D(ω_y/D) S(ω_y/D) = (1/D) S(ω_y/D)  (2.28)
Figure 2.9 The structure of decimation, consisting of a lowpass anti-aliasing filter and a downsampler.
for 0 ≤ ωy ≤ π. The spectra for the sequences x[n] and y[n] are illustrated in Figure 2.10, where the frequency range of the intermediate signal is 0 ≤ |ωx| ≤ π/D, and the frequency range of the decimated signal is 0 ≤ |ωy| ≤ π.
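The two-step decimation structure of Figure 2.9 can be sketched in Python (illustrative code; the Hamming-windowed-sinc anti-aliasing filter, the tap count, and the tone frequencies are assumptions made for this demonstration). A tone below π/D survives filtering and downsampling, while a tone above π/D is suppressed before the downsampler can alias it:

```python
import cmath
import math

def fir_lowpass(num_taps, wc):
    """Hamming-windowed sinc FIR lowpass with cutoff wc (rad/sample)."""
    M = (num_taps - 1) // 2
    h = []
    for k in range(num_taps):
        m = k - M
        ideal = wc / math.pi if m == 0 else math.sin(wc * m) / (math.pi * m)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * k / (num_taps - 1))
        h.append(ideal * window)
    return h

def convolve_valid(x, h):
    """FIR filtering, keeping only the fully-overlapped (steady-state) samples."""
    return [sum(h[k] * x[n - k] for k in range(len(h)))
            for n in range(len(h) - 1, len(x))]

def tone_amp(sig, w):
    """Amplitude of the sinusoidal component of sig at frequency w (rad/sample)."""
    c = sum(s * cmath.exp(-1j * w * n) for n, s in enumerate(sig))
    return abs(c) * 2 / len(sig)

D = 3                                       # decimation rate
x = [math.cos(0.1 * math.pi * n) + math.cos(0.8 * math.pi * n)
     for n in range(2000)]                  # tones at 0.1*pi (keep), 0.8*pi (reject)
h = fir_lowpass(101, math.pi / D)           # anti-aliasing filter, cutoff pi/D
xf = convolve_valid(x, h)                   # step 1: lowpass filter
y = xf[::D]                                 # step 2: downsample by D

print("kept tone amplitude:", tone_amp(xf, 0.1 * math.pi))
print("rejected tone amplitude:", tone_amp(xf, 0.8 * math.pi))
print("sample counts: %d -> %d (rate reduced by D=%d)" % (len(xf), len(y), D))
```

The tone at 0.1π passes with near-unity amplitude while the tone at 0.8π, which would otherwise alias into the decimated band, is attenuated by the filter's stopband.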
The process of increasing the sampling rate is called interpolation, which can be accomplished by interpolating I − 1 new samples between successive values of the signal. In the time domain, it can be defined as:

y[n] = x[n/I] for n = 0, ±I, ±2I, …; 0 otherwise  (2.29)

where x[n] is the original signal, y[n] is the interpolated signal, and I is the interpolation rate.
According to (2.29), the sampling rates of the original signal and the interpolated signal are related by:

F_y = I F_x  (2.30)

where Fx is the sampling rate of the original signal, and Fy is the sampling rate of the interpolated signal. Since (2.24) and (2.25) also hold here, it follows that ωx and ωy are related by:
Figure 2.10 The spectra for the sequences x[n] and y[n], where the frequency range of ωx is stretched into the corresponding frequency range of ωy by a factor of D. (a) Spectrum of the intermediate sequence. (b) Spectrum of the decimated sequence.
ω_y = ω_x / I  (2.31)

which means that the frequency range of ωx is compressed into the corresponding frequency range of ωy by a factor of I. Therefore, after the interpolation, there will be I replicas of the spectrum of x[n], where each replica occupies a bandwidth of π/I. Since only the frequency components of y[n] in the range 0 ≤ |ωy| ≤ π/I are unique (i.e., all the other replicas are the same as this one), the images of Y(ω) above ωy = π/I should be rejected by passing the signal through a lowpass filter with the following frequency response:

H_I(ω_y) = C for 0 ≤ |ω_y| ≤ π/I; 0 otherwise  (2.32)

where C is a scale factor.
Therefore, in reality, interpolation is also a two-step process, consisting of an upsampler and a lowpass filter, as shown in Figure 2.11. The spectrum of the output signal z[n] is:

Z(ω_z) = C X(ω_z I) for 0 ≤ |ω_z| ≤ π/I; 0 otherwise  (2.33)

where X(ω) is the spectrum of the input signal x[n].
The spectra for the sequences x[n], y[n], and z[n] are illustrated in Figure 2.12, where the frequency range of the original signal is 0 ≤ |ωx| ≤ π, and the frequency range of the interpolated signal is 0 ≤ |ωz| ≤ π/I.
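The upsampler-plus-lowpass interpolation structure of Figure 2.11 can be sketched similarly (illustrative Python; the windowed-sinc filter with gain C = I and the test tone are assumptions for the demonstration). With I = 2, a tone at 0.4π in x[n] appears at 0.2π after zero insertion, together with an image at 0.8π that the filter with cutoff π/I removes:

```python
import cmath
import math

def fir_lowpass(num_taps, wc):
    """Hamming-windowed sinc FIR lowpass with cutoff wc (rad/sample)."""
    M = (num_taps - 1) // 2
    h = []
    for k in range(num_taps):
        m = k - M
        ideal = wc / math.pi if m == 0 else math.sin(wc * m) / (math.pi * m)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * k / (num_taps - 1))
        h.append(ideal * window)
    return h

def convolve_valid(x, h):
    return [sum(h[k] * x[n - k] for k in range(len(h)))
            for n in range(len(h) - 1, len(x))]

def tone_amp(sig, w):
    c = sum(s * cmath.exp(-1j * w * n) for n, s in enumerate(sig))
    return abs(c) * 2 / len(sig)

I = 2                                              # interpolation rate
x = [math.cos(0.4 * math.pi * n) for n in range(400)]

# Step 1: upsample -- insert I-1 zeros between successive samples, as in (2.29).
y = []
for s in x:
    y += [s] + [0.0] * (I - 1)

# Step 2: lowpass with cutoff pi/I and gain C = I to remove the images, as in (2.32).
h = [I * tap for tap in fir_lowpass(101, math.pi / I)]
z = convolve_valid(y, h)

print("tone moved to 0.2*pi, amplitude:", tone_amp(z, 0.2 * math.pi))
print("image at 0.8*pi, amplitude:", tone_amp(z, 0.8 * math.pi))
```

Choosing the filter gain C = I restores the tone to its original unit amplitude, compensating for the energy spread caused by the inserted zeros.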
In Section 2.3, we have shown that an analog signal can be converted to a digital
signal using an analog-to-digital converter (ADC), as illustrated in Figure 2.6. These
digital signals consisting of "1" and "0" values offer significant advantages for signal processing. However, they cannot be directly transmitted in a real-world situation. Instead, they must first be converted into an analog signal in order to be transmitted. This conversion is done by the "transmit" or "pulse-shaping"
filter, which changes each symbol in the digital message into a suitable analog pulse
[8]. This process is shown in Figure 2.13, where the impulse modulator is denoted
by p(t), and the impulse response of the transmit filter is denoted by hT(t).
Similar to the sampling function in Section 2.3.2, the impulse modulator can be defined as:

p(t) = ∑_{k=−∞}^{∞} δ(t − kT)  (2.34)
Figure 2.11 The structure of interpolation, consisting of an upsampler and a lowpass filter.
Figure 2.12 The spectra for the sequences x[n], y[n], and z[n], where the frequency range of ωx is compressed into the corresponding frequency range of ωy by a factor of I. (a) Spectrum of the original sequence. (b) Spectrum of the intermediate sequence. (c) Spectrum of the interpolated sequence.
where p(t) = 1 for t = kT, and p(t) = 0 for all other time instants. Therefore, the analog pulse train after the impulse modulator is:

s_a(t) = s[n] p(t) = ∑_{k=−∞}^{∞} s(kT) δ(t − kT)  (2.35)

that is, s_a(t) = s(kT) at t = kT and s_a(t) = 0 for t ≠ kT.
Figure 2.13 On the transmitter side, the impulse modulator and transmit filter convert the digital
symbols into analog pulses for the sake of transmission.
where each digital symbol s[n] initiates an analog pulse that is scaled by the value
of the symbol.
If we do not have a transmit filter in our system, these pulses sa(t) will be transmitted through the channel right away. In theory, the receiver should then observe the same pulse train, only delayed. In reality, however, adjacent symbols interfere with each other. There are two situations in which adjacent symbols may interfere: when the pulse shape is wider than a single symbol interval T, and when there is a nonunity channel that "smears" nearby pulses, causing them to overlap. Both of these situations are called intersymbol interference (ISI) [8]. In order to solve this problem, pulse-shaping filters are introduced to bandlimit the transmit waveform.
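The impulse modulator and transmit filter of Figure 2.13 can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the text; the rectangular pulse and the rate of 4 samples per symbol are arbitrary choices for the example.

```python
import numpy as np

def shape_symbols(symbols, pulse, sps):
    """Impulse-modulate a symbol sequence (one value every T, zeros in
    between) and convolve it with the transmit filter, as in Figure 2.13."""
    train = np.zeros(len(symbols) * sps)
    train[::sps] = symbols          # impulse modulator output s_a(t)
    return np.convolve(train, pulse)

# Hypothetical choices: rectangular pulse, 4 samples per symbol.
sps = 4
rect = np.ones(sps)
tx = shape_symbols(np.array([1.0, -1.0, 1.0]), rect, sps)
```

Replacing `rect` with a bandlimited Nyquist pulse, such as the raised cosine introduced later in this section, is what keeps the transmitted waveform from smearing across symbol intervals.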
2.4.1 Eye Diagrams
In telecommunication, an eye diagram, also known as an eye pattern, is an oscil-
loscope display in which a digital data signal from a receiver is repetitively sampled
and applied to the vertical input, while the data rate is used to trigger the horizontal
sweep [8]. It is so called because, for several types of coding, the pattern looks like
a series of eyes between a pair of rails.
Several system performance measures can be derived by analyzing the display,
especially the extent of the ISI. As the eye closes, the ISI increases; as the eye opens,
the ISI decreases. In addition, if the signals are too long, too short, poorly synchronized with the system clock, too high, too low, too noisy, or too slow to change, or have too much undershoot or overshoot, this can be observed from the eye diagram. For example, Figure 2.14 shows a typical eye pattern for a noisy QPSK signal.
Figure 2.14 A typical eye pattern for the QPSK signal. The width of the opening indicates the time over which sampling for detection might be performed. The optimum sampling time corresponds to the maximum eye opening, yielding the greatest protection against noise. If there were no filtering in the system, the diagram would look like a box rather than an eye.
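The overlay that an oscilloscope performs can be emulated numerically: slice the sampled waveform into traces a couple of symbol periods long and plot them on top of one another. The sketch below (the waveform and rates are hypothetical, and the plotting call is omitted) shows the slicing step; an unfiltered square waveform like this one would produce exactly the "box" mentioned in the caption.

```python
import numpy as np

def eye_segments(waveform, sps, span=2):
    """Cut a sampled waveform into consecutive traces `span` symbol
    periods long; overlaying all rows of the result on one set of axes
    produces the eye diagram."""
    seg = span * sps
    n_traces = len(waveform) // seg
    return waveform[:n_traces * seg].reshape(n_traces, seg)

# Hypothetical unfiltered waveform: 100 alternating symbols, 8 samples each.
sps = 8
wave = np.repeat(np.tile([1.0, -1.0], 50), sps)
traces = eye_segments(wave, sps)
```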
Now, let us consider under which condition we can assure that there is no in-
tersymbol interference between symbols. The input to the equivalent channel, sa(t),
has been defined in (2.35). As mentioned before, each analog pulse is scaled by the
value of the symbol, so we can express sa(t) in another way:
Figure 2.15 The equivalent channel of a communication system, which consists of the transmit
filter, the channel, and the receive filter.
where ak is the value of the kth symbol. It yields the output to the equivalent chan-
nel, y(t), which is:
$$y(t) = s_a(t) * h(t) = \sum_{k=-\infty}^{\infty} a_k\,[\delta(t - kT) * h(t)] = \sum_{k=-\infty}^{\infty} a_k\,h(t - kT) \qquad (2.38)$$
Therefore, given a specific time instant, for example, t = mT, where m is a constant,
the input sa(t) is:
$$s_a(mT) = \sum_{k=-\infty}^{\infty} a_k\,\delta(mT - kT) = a_m \qquad (2.39)$$
Since we do not want the interference from the other symbols, we would like the
output to contain only the am term, which is:
$$y(mT) = a_m\,h(mT - mT) \qquad (2.41)$$

This requires that the impulse response satisfy:

$$h(mT - kT) = \begin{cases} C & k = m \\ 0 & k \neq m \end{cases} \qquad (2.42)$$

where C is some nonzero constant.
If we generalize (2.42) to any time instant t, we can get the Nyquist pulse shap-
ing theory as follows.
The condition that one pulse does not interfere with other pulses at subsequent
T-spaced sample instants is formalized by saying that h(t) is a Nyquist pulse if it
satisfies:
$$h(kT) = \begin{cases} C & k = 0 \\ 0 & k \neq 0 \end{cases} \qquad (2.43)$$
for all integers k.
One pulse that satisfies this condition is the sinc pulse, defined as:

$$\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x} \qquad (2.44)$$
As shown in Figure 2.16, when the variable x takes an integer value k, the value of the sinc function will be:

$$\mathrm{sinc}(k) = \begin{cases} 1 & k = 0 \\ 0 & k \neq 0 \end{cases} \qquad (2.45)$$
In other words, the zero crossings of the sinc function occur at nonzero integer values.
Another important property of the sinc function is that the sinc pulse is the inverse Fourier transform of a rectangular signal, as shown in Figure 2.17(a). Suppose the rectangular signal is defined as [9]:
$$H(\omega) = \begin{cases} T & |\omega| < \frac{1}{2T} \\ 0 & \text{otherwise} \end{cases} \qquad (2.46)$$
Taking the inverse Fourier transform of the rectangular signal, we obtain the sinc signal:

$$h(t) = \mathrm{sinc}\left(\frac{t}{T}\right) \qquad (2.47)$$

Sampling this pulse at the time instants t = kT yields:

$$h(kT) = \mathrm{sinc}\left(\frac{kT}{T}\right) = \mathrm{sinc}(k) \qquad (2.48)$$
Figure 2.16 The plot of the sinc function as defined in (2.44). The x-axis is x, and the y-axis is sinc(x).
Since k is an integer here, according to (2.45), we can continue writing (2.48) as:
$$h(kT) = \mathrm{sinc}\left(\frac{kT}{T}\right) = \mathrm{sinc}(k) = \begin{cases} 1 & k = 0 \\ 0 & k \neq 0 \end{cases} \qquad (2.49)$$
Comparing (2.49) with (2.43), we can easily see that if we let t = kT, the sinc pulse exactly satisfies the Nyquist pulse shaping theory of Section 2.4.2. In other words, by choosing the sampling times at kT (a sampling frequency equal to 1/T), our sampling instants are located at the equally spaced zero crossings, as shown in Figure 2.17(b), so there will be no intersymbol interference.
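The zero-crossing property in (2.45) is easy to verify numerically; `numpy.sinc` implements exactly the normalized sinc of (2.44):

```python
import numpy as np

# numpy.sinc is the normalized sinc of (2.44): sin(pi*x) / (pi*x).
k = np.arange(-10, 11)        # T-spaced sampling instants, t = kT
samples = np.sinc(k)

# One sample (k = 0) is unity; every other T-spaced sample falls on a
# zero crossing, which is exactly the Nyquist condition (2.45).
```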
Recall that the Nyquist sampling theorem in Section 2.3.3 states that a real signal, x(t), which is bandlimited to B Hz, can be reconstructed without error using a minimum sampling frequency of fs = 2B Hz.
Figure 2.17 The sinc pulse in the time domain and its Fourier transform, the rectangular pulse, in the frequency domain. (a) The rectangular pulse in the frequency domain, defined in (2.46). (b) The sinc pulse defined in (2.49). The x-axis is k, where k = t/T, and the y-axis is sinc(k).
However, the tails of the sinc pulse decay only as:

$$h(t) \sim \frac{1}{t} \qquad (2.51)$$

so it is obvious that a timing error can cause large ISI.
Given that the Nyquist-I pulse is not practical, we need to introduce the Nyquist-II pulse, which has a larger bandwidth B > Bmin but less sensitivity to ISI. Since this type of pulse is more robust, it is much more widely used in practice.
The raised cosine pulse is one of the most important types of Nyquist-II pulses,
which has the frequency transfer function defined as:
$$H_{RC}(f) = \begin{cases} T & 0 \le |f| \le \frac{1-\beta}{2T} \\[4pt] \frac{T}{2}\left(1 + \cos\left(\frac{\pi T}{\beta}\left(|f| - \frac{1-\beta}{2T}\right)\right)\right) & \frac{1-\beta}{2T} \le |f| \le \frac{1+\beta}{2T} \\[4pt] 0 & |f| \ge \frac{1+\beta}{2T} \end{cases} \qquad (2.52)$$

where β is the rolloff factor, which takes a value from 0 to 1, and β/(2T) is the excess bandwidth.
The spectrum of the raised cosine pulse is shown in Figure 2.18. In general, its bandwidth satisfies B ≥ 1/(2T). When β = 0, the bandwidth B = 1/(2T) and there is no excess bandwidth; the spectrum is then a rectangular pulse. At the other extreme, when β = 1, the bandwidth reaches its maximum, B = 1/T.
By taking the inverse Fourier transform of HRC(f ), we can obtain the impulse
response of raised cosine pulse, defined as:
$$h_{RC}(t) = \mathrm{sinc}\left(\frac{t}{T}\right) \frac{\cos\left(\frac{\pi\beta t}{T}\right)}{1 - \left(\frac{2\beta t}{T}\right)^2} \qquad (2.53)$$
Figure 2.18 Spectrum of a raised cosine pulse, defined in (2.52), shown for rolloff factors β = 0, 0.5, and 1. The x-axis is the normalized frequency f0. The actual frequency can be obtained by f0/T.
Nyquist-II pulses have low ISI sensitivity because their peak distortion, the series formed by the tails of hRC(t), converges quickly:

$$D_p = \sum_{n=-\infty}^{\infty} \left| h_{RC}(\varepsilon' + (n - k)) \right| \sim \frac{1}{n^3} \qquad (2.54)$$
Therefore, when a timing error occurs, the distortion does not accumulate to infinity the way it can for Nyquist-I pulses [10].
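As a quick check of (2.53), the sketch below evaluates the raised cosine impulse response at the T-spaced instants t = kT and confirms that it, too, is a Nyquist pulse. The rolloff β = 0.35 is an arbitrary choice that keeps the sample instants away from the removable singularity of the denominator at t = ±T/(2β).

```python
import numpy as np

def raised_cosine(t, T=1.0, beta=0.35):
    """Raised cosine impulse response h_RC(t) of (2.53). beta = 0.35 is
    an arbitrary rolloff that keeps t = kT away from the removable
    singularity at t = +/- T/(2*beta)."""
    t = np.asarray(t, dtype=float)
    num = np.sinc(t / T) * np.cos(np.pi * beta * t / T)
    den = 1.0 - (2.0 * beta * t / T) ** 2
    return num / den

# Sampled at the T-spaced instants t = kT, h_RC behaves like the sinc
# pulse: one at k = 0, zero at every other integer (no ISI).
k = np.arange(-5, 6)
samples = raised_cosine(k)
```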
Actually, in many practical communications systems, root raised cosine filters are used [11]. If we consider the communication channel to be ideal, and we assume that the transmit filter and the receive filter are identical, we can use root raised cosine filters for both of them, and their net response must equal the HRC(f) defined in (2.52). Since the frequency response of the equivalent channel is the product of the transmit and receive filter responses, the frequency response of the root raised cosine filter must satisfy HT(f) = HR(f) = √(HRC(f)).
2.5 Filtering
When doing signal processing, we usually need to get rid of the noise and extract
the useful signal. This process is called filtering, and the device employed is called
a filter, which discriminates, according to some attribute of the objects applied at
its input, what passes through it [1]. A typical filter is a frequency-selective circuit: if the noise and the useful signal have different frequency distributions, applying this circuit attenuates or even eliminates the noise while retaining the useful signal.
Filters can be classified in several ways. For example, according to their frequency response, filters can be classified as lowpass, highpass, bandpass, and bandstop. According to the signals they process, filters can be classified as analog filters and digital filters: analog filters deal with continuous-time signals, while digital filters deal with discrete-time signals. This section will focus on digital filters.
2.5.1 Ideal Filter
As mentioned before, filters are usually classified according to their frequency-
domain characteristics as lowpass, highpass, bandpass, and bandstop or band-
eliminated filters [1]. The ideal magnitude response characteristics of these types of
filters are shown in Figure 2.19.
According to Figure 2.19, the magnitude response characteristics of an ideal filter can be generalized as follows: in the passband, the magnitude is a constant, while in the stopband, the magnitude falls to zero. In reality, however, this type of ideal filter cannot be achieved, so a practical filter is actually an optimum approximation of an ideal filter.
2.5.2 Z-Transform
In Section 2.2, we introduced the Fourier transform, which deals with continuous-time signals in the frequency domain. Since we are focusing on digital filter design in this section, where discrete-time signals are involved, we need to introduce a new type of transform, namely, the z-transform.
The z-transform of a discrete-time signal x[n] is defined as the power series:
$$X(z) = \sum_{n=-\infty}^{\infty} x[n]\,z^{-n} \qquad (2.58)$$
Figure 2.19 Ideal magnitude response characteristics of four types of filters on the frequency range [0, 2π]. (a) Lowpass filter. (b) Highpass filter. (c) Bandpass filter, where (ωC1, ωC2) is the passband. (d) Bandstop filter, where (ωC1, ωC2) is the stopband.
The z-transform generalizes the Fourier transform from the real frequency line (the jω axis) to the entire complex plane.
According to Section 2.3.2, we know that if a continuous-time signal x(t) is
uniformly sampled, its sampling signal xs(t) can be expressed as:
$$x_s(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT) \qquad (2.60)$$
where T is the sampling interval. If we take the Laplace transform of both sides,
we will get:
$$X_s(s) = \int_{-\infty}^{\infty} x_s(t)\,e^{-st}\,dt = \int_{-\infty}^{\infty} \left[\sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT)\right] e^{-st}\,dt \qquad (2.61)$$
Since integration and summation are both linear operators, we can exchange their or-
der. Then, based on the sampling property of the delta function, we can further get:
$$X_s(s) = \sum_{n=-\infty}^{\infty} x(nT) \left[\int_{-\infty}^{\infty} \delta(t - nT)\,e^{-st}\,dt\right] = \sum_{n=-\infty}^{\infty} x(nT)\,e^{-snT} \qquad (2.62)$$
Let z = e^{sT}, or s = (1/T) ln z; then (2.62) becomes:

$$X(z) = \sum_{n=-\infty}^{\infty} x(nT)\,z^{-n} \qquad (2.63)$$
Since T is the sampling interval, x(nT) = x[n]. The previous equation can be further
written as:
$$X(z) = \sum_{n=-\infty}^{\infty} x[n]\,z^{-n} \qquad (2.64)$$

where

$$z = e^{sT} \qquad (2.65)$$

or

$$s = \frac{1}{T}\ln(z) \qquad (2.66)$$
According to (2.58), the z-transform is a power series in z⁻¹. The z-transform has no meaning unless the series converges. Given a bounded-amplitude sequence x[n], the set of all z values that make its z-transform converge is called the region of convergence (ROC). Based on the theory of series, these z values must satisfy:

$$\sum_{n=-\infty}^{\infty} \left| x[n]\,z^{-n} \right| < \infty \qquad (2.67)$$
The frequently used z-transform pairs and their region of convergence are listed in
Table 2.5.
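A z-transform pair can be spot-checked by truncating the series (2.58) at a point inside the ROC. The sketch below (not from the text) checks the causal pair aⁿu[n] ↔ z/(z − a), ROC |z| > |a|, at the arbitrary point z = 2:

```python
import numpy as np

def z_transform_numeric(x, z, n_terms=200):
    """Truncated evaluation of X(z) = sum_{n>=0} x[n] z^{-n} for a causal
    sequence x, per (2.58); accurate when z lies inside the ROC."""
    n = np.arange(n_terms, dtype=float)
    return np.sum(x(n) * z ** (-n))

# Check the pair a^n u[n] <-> z / (z - a), ROC |z| > |a|, at z = 2.
a, z = 0.5, 2.0
numeric = z_transform_numeric(lambda n: a ** n, z)
closed_form = z / (z - a)
```

Evaluating the same truncated sum at a point with |z| ≤ |a| would diverge as more terms are added, which is exactly what condition (2.67) rules out.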
When discussing a linear time-invariant system, the z-transform of its system impulse response can be expressed as the ratio of two polynomials:

$$H(z) = \frac{B(z)}{A(z)} \qquad (2.68)$$
where the roots of A(z) = 0 are called the poles of the system, and the roots of
B(z) = 0 are called the zeros of the system. It is possible that the system can have
multiple poles and zeros.
If we factorize the numerator B(z) and the denominator A(z), (2.68) can be written as:

$$H(z) = C\,\frac{(z - z_1)(z - z_2)\cdots(z - z_m)}{(z - p_1)(z - p_2)\cdots(z - p_n)} \qquad (2.69)$$
where C is a constant, {pk} are all the poles, and {zk} are all the zeros. It will help us
draw the pole-zero plot of H(z).
Suppose we have a linear time-invariant system whose system impulse response
is defined as:
$$h[n] = n^2 a^n u[n] \qquad (2.70)$$
According to Table 2.5, its z-transform is as follows:
(The relevant entries of Table 2.5 are: n aⁿ u[n] ↔ az/(z − a)², ROC |z| > |a| > 0; ((1/a)ⁿ + (1/b)ⁿ) u[n] ↔ az/(az − 1) + bz/(bz − 1), ROC |z| > max(1/|a|, 1/|b|); e^{an} u[n] ↔ z/(z − e^a), ROC |z| > e^a.)

$$H(z) = a\,\frac{z(z + a)}{(z - a)^3} \qquad (2.71)$$
Comparing (2.71) with (2.69), we can easily see that this system has two zeros, z1 = 0 and z2 = −a, and three poles, p1 = p2 = p3 = a. Its pole-zero plot is shown in Figure 2.20.
There are several properties of the z-transform that are useful when studying
signals and systems in the z-transform domain. Since these properties are very simi-
lar to those of the Fourier transform introduced in Section 2.2.3, we list them in
Table 2.6 without further discussion.
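Poles and zeros such as those of (2.71) can also be located numerically by rooting the numerator and denominator polynomials; the sketch below does so for the hypothetical value a = 0.5:

```python
import numpy as np

# H(z) = a * z * (z + a) / (z - a)^3 from (2.71), with the arbitrary a = 0.5.
a = 0.5
numerator = a * np.poly([0.0, -a])    # polynomial with roots z = 0 and z = -a
denominator = np.poly([a, a, a])      # polynomial with a triple root at z = a

zeros = np.roots(numerator)           # the system zeros
poles = np.roots(denominator)         # the system poles (multiplicity three)
```

Note that repeated roots are numerically delicate: the computed triple pole is only accurate to roughly the cube root of machine precision, which is why the check below uses a loose tolerance.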
2.5.3 Digital Filtering
In order to perform digital filtering, the input and output of a digital system must both be discrete-time series. If the input series is x[n] and the impulse response of the filter is h[n], then the output series y[n] will be:

$$y[n] = x[n] * h[n] = \sum_{k=-\infty}^{\infty} h[k]\,x[n - k] \qquad (2.72)$$
Figure 2.20 Pole-zero plot of the system defined in (2.71). Poles are denoted using crossings, and
zeros are denoted using circles. The region of convergence of this system is the region outside the
circle z = | a |.
Equivalently, in the z-domain, Y(z) = H(z)X(z) (2.73), where X(z) and Y(z) are the z-transforms of the input and output series, x[n] and y[n], and H(z) is the z-transform of h[n].
Suppose the time signal is x[n] and its z-transform signal is X(z).
As mentioned in Section 2.5.1, the ideal filters are not achievable in practice.
Consequently, we limit our attention to the class of linear time-invariant systems
specified by the difference equation [1]:
$$y[n] = -\sum_{k=1}^{N} a_k\,y[n - k] + \sum_{k=0}^{M} b_k\,x[n - k] \qquad (2.74)$$
where y[n] is the current filter output, the y[n − k] are previous filter outputs, and the x[n − k] are current or previous filter inputs. This system has the following transfer function:
$$H(z) = \frac{\sum_{k=0}^{M} b_k\,z^{-k}}{1 + \sum_{k=1}^{N} a_k\,z^{-k}} \qquad (2.75)$$
where the {ak} are the filter’s feedback coefficients corresponding to the poles of the
filter, the {bk} are the filter’s feed forward coefficients corresponding to the zeros of
the filter, and N is the filter’s order.
The basic digital filter design problem is to approximate any of the ideal fre-
quency response characteristics with a system that has the frequency response (2.75)
by properly selecting the coefficients {ak} and {bk} [1].
There are two basic types of digital filters: finite impulse response (FIR) and infinite impulse response (IIR) filters. When excited by a unit sample δ[n], the impulse response h[n] of a system may last a finite duration, as shown in Figure 2.21(a), or continue forever after the input is applied, as shown in Figure 2.21(b). In the former case, the system is finite impulse response, and in the latter case, the system is infinite impulse response.
A FIR filter of length M with input x[n] and output y[n] can be described by the
difference equation [1]:
$$y[n] = b_0 x[n] + b_1 x[n-1] + \cdots + b_{M-1} x[n - M + 1] = \sum_{k=0}^{M-1} b_k\,x[n - k] \qquad (2.76)$$
where the filter has no feedback coefficients {ak}, so H(z) has only zeros.
The IIR filter has been defined in (2.74); it has one or more nonzero feedback coefficients {ak}. Therefore, once the filter has been excited with an impulse, there is always an output.
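The FIR/IIR distinction can be seen directly by implementing the difference equation (2.74) and exciting it with a unit sample. This is an illustrative sketch; the coefficient values are arbitrary.

```python
import numpy as np

def difference_equation(b, a, x):
    """Evaluate (2.74) directly: y[n] = -sum_k a_k y[n-k] + sum_k b_k x[n-k].
    Here b holds b_0..b_M and a holds only the feedback taps a_1..a_N."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k - 1] * y[n - k] for k in range(1, len(a) + 1) if n - k >= 0)
        y[n] = acc
    return y

impulse = np.array([1.0, 0.0, 0.0, 0.0, 0.0])

# FIR: no feedback taps, so the impulse response simply replays b and dies.
h_fir = difference_equation([1.0, 1.0, 1.0], [], impulse)

# IIR: one feedback tap a_1 = -0.5 gives h[n] = 0.5^n, which never dies out.
h_iir = difference_equation([1.0], [-0.5], impulse)
```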
(a)
(b)
Figure 2.21 The impulse responses of an FIR filter and an IIR filter. (a) The impulse response of an
FIR filter. (b) The impulse response of an IIR filter.
A cascaded integrator-comb (CIC) filter consists of one or more integrator and comb filter pairs. In the case of a decimating CIC, the input signal is fed through one or more cascaded integrators, then a down-sampler, followed by one or more comb sections whose number equals the number of integrators. An interpolating CIC is simply the reverse of this architecture, with the down-sampler replaced by an up-sampler, as shown in Figure 2.22.
To illustrate a simple form of a comb filter, consider a moving average FIR filter
described by the difference equation:
$$y[n] = \frac{1}{M + 1} \sum_{k=0}^{M} x[n - k] \qquad (2.77)$$

Its system function is:

$$H(z) = \frac{1}{M + 1} \sum_{k=0}^{M} z^{-k} = \frac{1}{M + 1}\,\frac{1 - z^{-(M+1)}}{1 - z^{-1}} \qquad (2.78)$$
Figure 2.22 The structure of an interpolating cascaded integrator-comb filter [14], with input sig-
nal x[n] and output signal y[n]. This filter consists of a comb and integrator filter pair, and an up-
sampler with interpolation ratio L.
for all integer values of k except k = 0, L, 2L, . . ., ML, as shown in Figure 2.23.
The common form of the CIC filter usually consists of several stages, and the
system function for the composite CIC filter is:
$$H(z) = H_L(z)^N = \left(\frac{1}{M + 1}\,\frac{1 - z^{-L(M+1)}}{1 - z^{-L}}\right)^N \qquad (2.81)$$

where N is the number of stages in the composite filter.
Characteristics of CIC filters include a linear phase response and the use of only delay, addition, and subtraction. In other words, a CIC filter requires no multiplication operations, which makes it well suited for hardware implementation.
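A single CIC stage without the rate change can be sketched directly from its building blocks. The integrator-comb cascade below reproduces an (M + 1)-sample moving sum using only additions and subtractions, matching (2.78) up to the 1/(M + 1) scaling; the input signal is an arbitrary ramp chosen for the example.

```python
import numpy as np

def cic_moving_sum(x, M):
    """One CIC stage with no rate change: integrator 1/(1 - z^-1) followed
    by comb 1 - z^-(M+1), i.e., (M + 1) times the H(z) of (2.78). Only
    additions and subtractions are used; no multipliers."""
    integ = np.cumsum(x)                                  # integrator
    delayed = np.concatenate([np.zeros(M + 1), integ[:-(M + 1)]])
    return integ - delayed                                # comb

x = np.arange(10, dtype=float)      # arbitrary test input
out = cic_moving_sum(x, M=3)

# Reference: a direct (M + 1)-sample moving sum.
ref = np.array([x[max(0, n - 3):n + 1].sum() for n in range(10)])
```

In a real decimating CIC, the down-sampler sits between the integrator and comb sections so the combs run at the lower rate, but the input-output equivalence sketched here is the same.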
Figure 2.23 The zeros of a CIC filter defined in (2.79), where all the zeros are on the unit circle.
2.6 Chapter Summary
Software-defined radio experimentation requires both strong computer skills and extensive knowledge of digital signal processing. This chapter covered several useful topics, including sampling, pulse shaping, and filtering. Its purpose is to help readers prepare for the experimentation chapters, especially those involving the USRP hardware. For example, on the USRP receive path, there are analog-to-digital converters and digital down-converters, where the theory of sampling is needed. On the USRP transmit path, besides digital-to-analog converters and digital up-converters, there are CIC filters, where filtering knowledge is useful. In addition, when you build a real communication system, you will need a transmit filter and a receive filter, which requires expertise in pulse shaping.
2.7 PROBLEMS
$$X(j\omega) = \begin{cases} 1 & |\omega| < 2\ \mathrm{rad/s} \\ 0 & \text{otherwise} \end{cases} \qquad (2.84)$$
6. [Sampling Rate Conversion] Let us cascade the decimator and the interpo-
lator introduced in Section 2.3.4 to perform a sampling rate conversion by
factor 3/4, as shown in Figure 2.24(a), where the input signal is i[n] and the
output signal is o[n]. Suppose the spectrum of the input signal is shown in
Figure 2.24(b).
$$F(e^{j\omega}) = \frac{Y(e^{j\omega})}{X(e^{j\omega})}. \qquad (2.86)$$
Figure 2.25 A cascaded system, with input signal x [n] and output signal y [n].
$$x[n] = 3\delta(n - 2) + 2\delta(n - k), \qquad (2.87)$$
what is its z transform? What is its region of convergence?
9. [Z Transform] Given a z transform
$$X(z) = \frac{5z}{z^2 - 3z + 2}, \qquad (2.88)$$
what is its inverse z transform?
10. [Z Transform] Let y[n] = x[n] * h[n] be the output of a discrete-time system, where * is the convolution operator. Let h[n] = (1/3)ⁿ u[n] and

$$x[n] = \begin{cases} \left(\tfrac{1}{2}\right)^{n} & n \ge 0, \\[4pt] \left(\tfrac{1}{4}\right)^{-n} & n < 0. \end{cases} \qquad (2.89)$$
Find Y (z) and specify its region of convergence.
11. [Stable System] Suppose a linear time-invariant system is specified by the following difference equation:
12. [Ideal Filter] Suppose we have a system shown in Figure 2.26(a), where the
input signals are
$$f(t) = \frac{\sin(2t)}{2\pi t}, \qquad (2.91)$$
and
13. [FIR Filter] Suppose we have a third-order real-coefficient FIR filter, whose frequency response is

$$H(e^{j\omega}) = \sum_{n=0}^{3} h[n]\,e^{-j\omega n} \qquad (2.93)$$

Given H(e^{j0}) = 2, H(e^{j\pi/2}) = 7 − j3, and H(e^{j\pi}) = 0, what is H(z)?
14. [FIR Filter] Design a second-order FIR filter whose frequency response has a zero magnitude at ω = π/2. Specify the design using a time-domain difference equation.
15. [Linear Filter] Draw a direct form realization of the linear filter with the fol-
lowing impulse response:
References
[13] Hogenauer, E. B., "An Economical Class of Digital Filters for Decimation and Interpolation," IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. 29, No. 2, 1981, pp. 155–162.
[14] Lyons, R. G., Streamlining Digital Signal Processing: A Tricks of the Trade Guidebook, Wiley-IEEE Press, 2007.
Chapter 3
Probability Review
3.1 Fundamental Concepts

3.1.1 Set Theory
When dealing with the outcomes of random events and phenomena, one useful mathematical tool is set theory. Suppose we define the sample space as the set of all possible, distinct, indecomposable measurements that could be observed. The sample space is usually denoted as Ω and can either be continuous (e.g., Ω = [0, ∞)) or discrete (e.g., Ω = {0, 1, 2, . . .}). An outcome is an element, or point, located within the sample space Ω, while an event is a collection of outcomes.
The idea of sample spaces, outcomes, and events can be described using set theory. For example, suppose we are given A and B as collections of points, and ω is a point in Ω; then:

1. ω ∈ Ω: "ω is an element of Ω."
2. A ⊂ B: "A is a subset of B."
3. A = B: "A is an equal set to B."
4. A ⊂ B and A ≠ B: "A is a proper subset of B."
and
$$\left(\bigcap_{n=1}^{\infty} A_n\right)^{c} = \bigcup_{n=1}^{\infty} A_n^{c} \quad \text{and} \quad \left(\bigcup_{n=1}^{\infty} A_n\right)^{c} = \bigcap_{n=1}^{\infty} A_n^{c} \qquad (3.3)$$
Finally, a summary of possible set operations and their associated Venn diagram
representations are shown in Table 3.2.
3.1.2 Partitions
Partitioning of a sample space is a very powerful tool in set theory, since it allows an individual to "divide and conquer" all the possible events into mutually exclusive, complementary subsets. Suppose we have a family of nonempty sets Bn that are pairwise disjoint and form a partition of the entire sample space Ω, as shown in Figure 3.2(a), where the sets satisfy:

$$B_n \cap B_m = \emptyset,\ \forall\, n, m,\ n \ne m, \quad \text{and} \quad \bigcup_{n=1}^{N} B_n = \Omega \qquad (3.4)$$
Consequently, we can carve up some set A using these partitions, such that:
$$A = A \cap \Omega = A \cap \left(\bigcup_{n=1}^{N} B_n\right) = \bigcup_{n=1}^{N} \left(A \cap B_n\right) \qquad (3.5)$$
(The set operations of Table 3.2 are: Complement: A^c := {ω ∈ Ω : ω ∉ A}; Union: A ∪ B := {ω ∈ Ω : ω ∈ A or ω ∈ B}; Intersection: A ∩ B := {ω ∈ Ω : ω ∈ A and ω ∈ B}; Set difference: B \ A := B ∩ A^c.)
Figure 3.2 An illustration of the partitioning of a set A into several disjoint subsets based on the disjoint sets {Bn}. (a) Sample space Ω partitioned into disjoint sets {Bn}. (b) Set A partitioned by disjoint sets {Bn}.
3.1.3 Functions
By definition, functions are used to transform or map various values into another
quantitative representation. This mapping process can be readily represented in
Figure 3.3, where the values of x located within the domain X are mapped by a
function f to a collection of values y contained within a specific range, which forms
part of a potentially larger co-domain Y.
Notice how the range contains all possible values of f(x), namely {f(x) : x ∈ X}. Furthermore, the range is a subset of the co-domain Y, and under specific circumstances the range and co-domain are one and the same set of values. Whenever dealing with functions, there is a collection of associated terminology describing the mapping operations performed by functions such as f:
• Onto: A function is "onto" if its range equals its co-domain, such that for every y ∈ Y, f(x) = y has a solution.
• One-to-one: The condition f(x1) = f(x2) implies x1 = x2.
• Invertible: For every y ∈ Y, there is a unique x ∈ X with f(x) = y, which means that the function is both onto and one-to-one, and that for every y ∈ Y, f(x) = y has a unique solution.
Range
Domain Co-domain
Figure 3.3 An illustration of how a function maps values from the domain to a range contained
within a co-domain.
$$P\left(\bigcap_{n=1}^{\infty} A_n\right) = \lim_{N \to \infty} P\left(\bigcap_{n=1}^{N} A_n\right)$$
Now taking the definition for the conditional probability and isolating for P (AÇBi),
we get the following:
which becomes the definition for the Law of Total Probability, namely:

$$P(A) = \sum_{k=1}^{n} P(A \mid B_k)\,P(B_k) \qquad (3.10)$$

Combining these results gives:

$$P(B_i \mid A) = \frac{P(A \cap B_i)}{P(A)} = \frac{P(A \cap B_i)}{\sum_{k=1}^{n} P(A \mid B_k)\,P(B_k)} \qquad (3.11)$$
which is referred to as Bayes’ Rule. With the Law of Total Probability and Bayes’
Rule now defined, it is possible to solve complex probability models within the
context of a set theory framework using these tools and their variants.
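Both results are one-liners once the priors P(Bk) and the likelihoods P(A | Bk) are tabulated. The numbers below are hypothetical, chosen only to exercise the formulas:

```python
# Hypothetical two-set partition {B1, B2} with priors and likelihoods.
prior = {"B1": 0.3, "B2": 0.7}          # P(B_k)
likelihood = {"B1": 0.9, "B2": 0.2}     # P(A | B_k)

# Law of Total Probability: P(A) = sum_k P(A | B_k) P(B_k).
p_a = sum(likelihood[b] * prior[b] for b in prior)

# Bayes' Rule (3.11): P(B1 | A) = P(A | B1) P(B1) / P(A).
posterior_b1 = likelihood["B1"] * prior["B1"] / p_a
```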
3.1.7 Independence
There are several occasions where a special relationship exists between two or more
events. For instance, if knowledge of an occurrence of an event B does not alter the
probability of some other event A, we can say that A is independent of B; that is,
$$P(A) = P(A \mid B) = \frac{P(A \cap B)}{P(B)} \qquad (3.12)$$
In general, when we have events Ai, i = 1, 2, . . ., we say that they are mutually independent when every finite subset I of them satisfies:

$$P\left(\bigcap_{i \in I} A_i\right) = \prod_{i \in I} P(A_i) \qquad (3.14)$$
Note that if (3.14) holds for all subsets of I containing exactly two events but not
necessarily three or more, we say that the Ai are pairwise independent.
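The gap between pairwise and mutual independence can be demonstrated with a classic example (not from the text): two fair coin flips with the events "first flip is heads," "second flip is heads," and "the two flips agree." Every pair of these events is independent, yet (3.14) fails when all three are taken together.

```python
import itertools

# Sample space of two fair coin flips; all four outcomes equally likely.
omega = list(itertools.product([0, 1], repeat=2))

def P(event):
    return sum(1 for w in omega if event(w)) / len(omega)

A = lambda w: w[0] == 1          # first flip is heads
B = lambda w: w[1] == 1          # second flip is heads
C = lambda w: w[0] == w[1]       # the two flips agree

# Every pair satisfies P(Ai ∩ Aj) = P(Ai) P(Aj) ...
pairwise = all(
    abs(P(lambda w: e1(w) and e2(w)) - P(e1) * P(e2)) < 1e-12
    for e1, e2 in itertools.combinations([A, B, C], 2)
)
# ... but the triple intersection violates (3.14): P(A∩B∩C) = 1/4, not 1/8.
triple = abs(P(lambda w: A(w) and B(w) and C(w)) - P(A) * P(B) * P(C)) < 1e-12
```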
3.2 Random Variables

A random variable is a mapping from the sample space to the real numbers:

$$X = X(\omega) \qquad (3.15)$$
where X is the random variable, and w is the outcome of an experiment found in the
sample space W. An example of the mapping process relating some experiment outcome
w by the random variable X to some real number value is shown in Figure 3.4.
Note that the sample space Ω is the domain of the function X(ω), while ΩX is the range, and the co-domain is the real line ℝ.
Given this mapping function, let us now develop a notation for defining the
probability of an event or an outcome. First, we will let upper case letters denote
random variables, while lower case letters will denote the possible values of the
random variables. Second, let us define the following expression for the probability
of a specific event, as well as its shorthand representation on the right-hand side:
where P(.) denotes the probability measure, w Î W denotes the outcomes are ele-
ments of the sample space W, and X(w) denotes the mapping function that produces
an output value less than a deterministic quantity x. On the right-hand side, the
shorthand version is interpreted as “the probability that the random variable X will
produce an output that is less than the deterministic quantity x.”
Finally, some additional shorthand expressions and their equivalent full math-
ematical representations include the scenario where if B is a set of real numbers,
then the probability that X produces an output belonging to that set is given by:
Another expression is when B is an interval such that B = (a, b], the probability that
the output of the random variable X is given by:
$$\sum_i P(X = x_i) = 1 \qquad (3.19)$$
Using the Law of Total Probability that we have studied in Section 3.1.6, we know that:
where the set B is composed of a collection of values xi. A specific form of discrete random variable is the integer-valued random variable, where its output values are the integers xi = i; that is,
Several frequently used PMFs are specified in Table 3.4, including uniform, Poisson,
and Bernoulli random variables.
When dealing with multiple discrete random variables X1, . . ., Xn, the generalized definition for all of these random variables to be independent is given as:

$$P\left(\bigcap_{j=1}^{n} \{X_j \in B_j\}\right) = \prod_{j=1}^{n} P(X_j \in B_j), \quad \forall\ \text{choices of sets}\ B_1, \ldots, B_n \qquad (3.24)$$
Note that if X1, . . ., Xn are independent, then so is any subset of them. Furthermore, if X1, . . ., Xn, . . . is an infinite sequence of random variables, then they are independent if the generalized expression holds for every finite n = 1, 2, . . .. Finally, if for every B the probability P(Xj ∈ B) does not depend on j, then we say that all the Xj are identically distributed. In the case when all the Xj are also independent of each other, we refer to these random variables as independent and identically distributed (i.i.d.).
Among the frequently used PMFs of Table 3.4 are the Poisson and Bernoulli distributions:

Poisson: p_X(k) = λ^k e^{−λ} / k!,  k = 0, 1, 2, . . .

Bernoulli: p_X(k) = p for k = 1; 1 − p for k = 0; and 0 otherwise.
3.2.1.1 Expectation
One way to characterize a random variable is by its expected value or mean, which
can be quantitatively determined using the expression:
where the sum of each value is weighted by its probability of occurrence. Conse-
quently, using the definition of the PMF, we can rewrite (3.25) as:
Suppose we have a random variable X and a real-valued function g(x) that maps
it to a new random variable Z (i.e., Z = g(X)). Recall that X is actually the mapping
of points from a sample space W to a collection of real numbers. Therefore, we are
performing a mapping of a mapping when solving for the random variable Z (i.e.,
Z(w) = g(X(w))). Thus, in order to compute the expectation of the random variable
Z, namely E[Z], we can employ the following expression:
Q: Derive the Markov Inequality, P(X ≥ a) ≤ E[X^r] / a^r, using the definition for the expectation.
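Computing an expectation from a PMF is a direct weighted sum. The sketch below uses a hypothetical fair four-sided die to evaluate both E[X] and E[g(X)] for g(x) = x²:

```python
# Hypothetical PMF: a fair four-sided die.
support = [1, 2, 3, 4]
pmf = {x: 0.25 for x in support}

# E[X] = sum_i x_i * p_X(x_i): each value weighted by its probability.
mean = sum(x * pmf[x] for x in support)

# E[Z] for Z = g(X) with g(x) = x^2: weight g(x_i) by the same PMF.
mean_square = sum(x ** 2 * pmf[x] for x in support)
```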
$$p_{Y|X}(y_j \mid x_i) = P(Y = y_j \mid X = x_i) = \frac{P(X = x_i,\ Y = y_j)}{P(X = x_i)} = \frac{p_{XY}(x_i, y_j)}{p_X(x_i)} \qquad (3.29)$$
$$p_Y(y_j) = \sum_i p_{Y|X}(y_j \mid x_i)\,p_X(x_i) \qquad (3.32)$$

Similarly, for a random variable Z:

$$P(Z = z) = \sum_i P(Z = z \mid X = x_i)\,P(X = x_i)$$
Looking at the resulting equation more closely, we notice that since we already have
the entire expression conditioned on the fact that the random variable X produces
a deterministic output xi, namely, X = xi, we can substitute the random variable X
with the deterministic output xi such that:
This operation is referred to as the Substitution Law, and it can be used to help
solve expressions and operations involving multiple random variables.
Similarly, the conditional expectation is given as:

$$E[g(Y) \mid X = x_i] = \sum_j g(y_j)\,p_{Y|X}(y_j \mid x_i) \qquad (3.35)$$

and the Substitution Law for conditional expectation is:

$$E[g(X, Y) \mid X = x_i] = E[g(x_i, Y) \mid X = x_i] \qquad (3.36)$$

Q: Derive the resulting expressions for the Law of Total Probability for Expectation and the Substitution Law.
$$P(X = 1 \mid Y = j) \ge P(X = 0 \mid Y = j) \qquad (3.38)$$
Figure 3.5 An example of a binary channel where transmission values generated by the random
variable X are being observed at the receiver as the random variable Y.
If X = 0 and X = 1 are equally likely to occur, we can simplify this expression such
that it yields the maximum likelihood (ML) rule:
$$\frac{P(Y = j \mid X = 1)}{P(Y = j \mid X = 0)} \ge \frac{P(X = 0)}{P(X = 1)} \qquad (3.42)$$
where the right-hand side is referred to as the “threshold” since it does not depend on j.
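The ML rule amounts to comparing the likelihood ratio against the threshold, which equals unity for equally likely inputs. The crossover probabilities below are hypothetical values for a binary channel like that of Figure 3.5:

```python
# Hypothetical conditional PMFs p(y | x) for the binary channel.
p_y_given_x = {
    0: {0: 0.9, 1: 0.1},      # X = 0 is received correctly 90% of the time
    1: {0: 0.2, 1: 0.8},      # X = 1 is received correctly 80% of the time
}

def ml_decide(j):
    """Decide X = 1 when the likelihood ratio of (3.42) meets the unity
    threshold implied by equally likely inputs."""
    ratio = p_y_given_x[1][j] / p_y_given_x[0][j]
    return 1 if ratio >= 1.0 else 0
```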
where X is the discrete random variable, and both a and b are boundaries of a subset belonging to the sample space Ω. However, suppose now that X is a continuous random variable that can take on the entire continuum of values within the interval [a, b). In order to compute the same probability in this case, we can start by viewing this scenario as the summation of an infinite number of points in [a, b) with the space between samples equal to Δx.
Given an infinite number of samples, Δx becomes so small that it ultimately converges to dx, and our summation becomes an integral expression. Therefore, we can rewrite (3.43) in terms of continuous random variables, yielding:
$$P(a \le X < b) = \int_a^b f(t)\,dt \qquad (3.44)$$
where f(t) is called the probability density function or PDF. Note that the PDF is the
continuous version of the PMF that we discussed previously in this chapter. More-
over, generally, we can express the probability of a continuous random variable
using the PDF by the expression:
$$P(X \in B) = \int_B f(t)\,dt = \int_{-\infty}^{+\infty} I_B(t)\,f(t)\,dt \qquad (3.45)$$
where g(.) is some real function that is applied to the random variable X.
Frequently used PDFs include the following:

Exponential: f(x) = λe^{−λx} for x ≥ 0, and f(x) = 0 for x < 0.

Laplace: f(x) = (λ/2) e^{−λ|x|}.

Cauchy: f(x) = (λ/π) / (λ² + x²), λ > 0.

Gaussian: f(x) = (1/√(2πσ²)) e^{−0.5((x−μ)/σ)²}.
$$P(Y \in C \mid X = x) = \frac{P(Y \in C,\ X = x)}{P(X = x)} \qquad (3.48)$$
we observe that the occurrence of P(X = x) = 0 would yield a “divide-by-zero” sce-
nario. Consequently, we need to determine another definition for the conditional
probability (and conditional expectation) that would work within the continuous
random variable framework.
It can be shown that in order to calculate the conditional probability, one must
employ a conditional density [1], which can be defined as:
$$f_{Y|X}(y \mid x) = \frac{f_{XY}(x, y)}{f_X(x)} \qquad (3.49)$$

where f_X(x) > 0. Thus, leveraging the conditional density, we can now compute the
conditional probability without concern for a “divide-by-zero” scenario by solving
for the following expression:
$$P(Y \in C \mid X = x) = \int_C f_{Y|X}(y \mid x)\,dy \qquad (3.50)$$
$$P(Y \in C) = \int_{-\infty}^{+\infty} P(Y \in C \mid X = x)\,f_X(x)\,dx \qquad (3.51)$$
where we weigh all the conditional probabilities by the PDF of X before integrating
them together to form the overall probability of the event Y Î C. Finally, just as in
the discrete random variable case, we can employ a form of Substitution Law for
continuous random variables when dealing with conditional probability, which is
defined by:
Note that if X and Y are independent, then the joint density factors, yielding the
following expression for the conditional density:
fY|X(y|x) = fXY(x, y)/fX(x) = fX(x) fY(y)/fX(x) = fY(y)  (3.53)
which implies that when the two random variables are independent, we do not need
to condition one event on the other.
Similarly, the conditional expectation when dealing with continuous random
variables is defined as the following expression employing the conditional density,
namely:
E[g(Y) | X = x] = ∫_{−∞}^{+∞} g(y) fY|X(y|x) dy  (3.54)
Furthermore, the Law of Total Probability for a conditional expectation is given as:
E[g(X, Y)] = ∫_{−∞}^{+∞} E[g(X, Y) | X = x] fX(x) dx  (3.55)
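The averaging in (3.55) can be checked numerically. The following sketch (in Python/NumPy rather than the MATLAB used in this book's exercises; the specific model Y = X + W is an illustrative choice, not from the text) compares a direct Monte Carlo estimate of E[g(X, Y)] against the average of the conditional expectations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# X ~ N(0, 1); conditioned on X = x, Y = x + W with W ~ N(0, 1) independent.
x = rng.standard_normal(n)
y = x + rng.standard_normal(n)

# Direct Monte Carlo estimate of E[g(X, Y)] with g(x, y) = x * y.
direct = np.mean(x * y)

# Conditional route: E[XY | X = x] = x * E[Y | X = x] = x^2, so averaging the
# conditional expectations over the samples of X mimics the outer integral.
via_conditional = np.mean(x ** 2)
# Both estimates approximate the exact value E[X^2] = 1.
```

Both routes converge to the same number, illustrating that conditioning followed by averaging recovers the unconditional expectation.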
Finally, the Substitution Law for conditional expectations states that:

E[g(X, Y) | X = x] = E[g(x, Y) | X = x]  (3.56)
3.2.3 Cumulative Distribution Functions

The cumulative distribution function (CDF) of a random variable X is defined as:

FX(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt  (3.57)
which describes the probability that the outcome of an experiment described by the
random variable X is less than or equal to the dummy variable x.
As an example, suppose that we want to calculate the probability P(a ≤ X < b) using the PDF shown in Figure 3.6(a). One approach to quickly evaluating this probability is to leverage the tail probabilities of this distribution, namely, P(X < a) (shown in Figure 3.6(b)) and P(X < b) (shown in Figure 3.6(c)). Notice how the tail probabilities are actually the CDFs of X based on (3.57), where FX(a) = P(X < a) and FX(b) = P(X < b).
Consequently, given that we are only interested in the region of the PDF where these two tail probabilities do not intersect, we can compute the following probability:

P(a ≤ X < b) = FX(b) − FX(a)  (3.58)

where all we really need are the values for the CDF of X at x = a and x = b.
Several fundamental characteristics of the CDF include the fact that FX(x) is bounded between zero and one, and that FX(x) is a nondecreasing function (i.e., FX(x1) ≤ FX(x2) if x1 ≤ x2). Furthermore, the PDF is the derivative of the CDF in terms of the dummy variable x, which we can define as:
fX(x) = (d/dx) FX(x)  (3.59)
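The link between the CDF and interval probabilities is easy to verify numerically. A minimal sketch (Python/NumPy rather than MATLAB; the exponential distribution and the values of λ, a, and b are illustrative choices) compares the CDF difference against an empirical estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, a, b = 2.0, 0.5, 1.5

# Closed-form CDF of an exponential random variable: F_X(x) = 1 - exp(-lam * x).
def F(x):
    return 1.0 - np.exp(-lam * x)

# P(a <= X < b) from the difference of the two CDF (tail probability) values.
p_from_cdf = F(b) - F(a)

# Empirical estimate from random samples.
samples = rng.exponential(scale=1.0 / lam, size=500_000)
p_empirical = np.mean((samples >= a) & (samples < b))
```

The empirical fraction of samples landing in [a, b) matches FX(b) − FX(a) to within Monte Carlo error.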
Figure 3.6 An example of how the CDF can be used to obtain the tail probabilities P(X < a) and P(X < b) in order to quickly calculate P(a ≤ X < b). (a) The region of the PDF of the random variable X that needs to be integrated in order to yield P(a ≤ X < b). (b) The region of the PDF of the random variable X that needs to be integrated in order to yield P(X < a). (c) The region of the PDF of the random variable X that needs to be integrated in order to yield P(X < b).
3.2.4 Central Limit Theorem

When summing together a large number of independent random variables, we can employ a useful theorem that is capable of helping us define the
overall behavior of the summed random variables. Referred to as the central limit the-
orem (CLT), this theorem states the conditions under which the mean of a sufficiently
large number of independent random variables, each with finite mean and variance,
can be approximated by a Normal (i.e., Gaussian) distribution. Note that the CLT
requires the random variables to be identically distributed. Since real-world quantities
are often the balanced sum of many unobserved random events, this theorem provides
a partial explanation for the prevalence of the normal probability distribution.
Suppose we have X1, X2, X3, …, Xn independent and identically distributed random variables with a common mean μ and common variance σ². Let us define the normalized sum of these random variables by the expression:

Yn = (1/√n) Σ_{i=1}^{n} (Xi − μ)/σ  (3.60)
where each random variable Xi is normalized by the standard deviation σ and its mean shifted to zero by μ. Consequently, if we take the limit n → ∞, then by the CLT the CDF of Yn becomes:

lim_{n→∞} F_{Yn}(y) = (1/√(2π)) ∫_{−∞}^{y} e^{−t²/2} dt  (3.61)
where the integrand is the definition for the standard normal PDF. The reason why this result for the CLT is so important is that if we take a collection of independent and identically distributed random variables possessing any distribution with finite mean and variance and sum them all together, the distribution of their normalized sum approaches a Gaussian.
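The CLT can be observed directly in simulation. The sketch below (Python/NumPy; the uniform distribution and the values of n and the trial count are arbitrary choices for illustration) forms the normalized sum (3.60) for uniform random variables and checks that it behaves like a standard normal:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 200, 20_000

# X_i ~ Uniform(0, 1): mean 1/2 and variance 1/12.
mu, sigma = 0.5, np.sqrt(1.0 / 12.0)
x = rng.uniform(0.0, 1.0, size=(trials, n))

# Normalized sum Y_n from (3.60); by the CLT it is approximately N(0, 1).
y = np.sum((x - mu) / sigma, axis=1) / np.sqrt(n)

mean_y, var_y = y.mean(), y.var()
p_below_zero = np.mean(y < 0.0)  # should approach the standard normal CDF at 0
```

A histogram of `y` would closely trace the standard normal bell curve even though each Xi is uniform.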
Recall that the PDF of a Gaussian random variable is defined as:

fX(x) = (1/√(2πσ²)) e^{−(x−μ)²/(2σ²)}  (3.62)
where μ is the mean of the Gaussian random variable X, and σ² is the variance of X. In the case that μ = 0 and σ² = 1, we refer to X as a standard Normal random variable.
3.2.5 The Bivariate Normal

Although the univariate Normal distribution occurs rather frequently in the
real world, there are several instances where a bivariate Normal distribution could
also be defined to characterize a specific distribution involving two Gaussian
random variables possessing some degree of correlation between each other. An illus-
tration of an example of a bivariate Normal distribution is shown in Figure 3.7. To
visualize the impact of correlation between two Gaussian random variables forming
a bivariate Normal distribution, let us examine the examples shown in Figure 3.8
and Figure 3.9, where the zero-mean Gaussian random variables X and Y produce
10,000 coordinate points in two-dimensional space. Since these random variables
are zero-mean, the density of the resulting scatter plot is mostly concentrated around
the origin at coordinates (0, 0). We can see in Figure 3.8 that since X and Y are uncorrelated, the resulting density of coordinate outputs is circularly symmetric around the means of the two random variables. On the other hand, if X and Y are correlated, the resulting density of output coordinates is heavily skewed along an axis within the two-dimensional plane, as shown in Figure 3.9.
Mathematically speaking, the general definition for a bivariate Gaussian density with parameters μX, μY, σX², σY², and correlation coefficient ρ is given by:

fXY(x, y) = exp( −(1/(2(1 − ρ²))) [ ((x − μX)/σX)² − 2ρ ((x − μX)/σX)((y − μY)/σY) + ((y − μY)/σY)² ] ) / (2πσXσY√(1 − ρ²))  (3.63)
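Scatter plots like those in Figures 3.8 and 3.9 can be generated with a few lines of code. The sketch below (Python/NumPy; the value ρ = 0.9 is an illustrative choice) uses the standard construction Y = ρX + √(1 − ρ²)Z to obtain a zero-mean, unit-variance pair with the desired correlation:

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho = 10_000, 0.9  # rho = 0 gives the circular cloud, rho = 0.9 the skewed one

# Zero-mean, unit-variance jointly Gaussian pair with correlation rho:
# Y = rho * X + sqrt(1 - rho^2) * Z, with X and Z independent standard normals.
x = rng.standard_normal(n)
z = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho ** 2) * z

# The empirical correlation coefficient should approach rho.
rho_hat = np.corrcoef(x, y)[0, 1]
```

Plotting the pairs (e.g., matplotlib's `plt.scatter(x, y, s=1)`) reproduces the elongated cloud of Figure 3.9; setting `rho = 0` reproduces the circular cloud of Figure 3.8.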
3.3 Random Processes

As we have seen thus far, a random variable is simply a rule for assigning to every possible outcome ω from a sample space Ω a number X(ω). Given this description,
Figure 3.7 Density of a bivariate Normal distribution with no correlation (i.e., ρ = 0).
Figure 3.8 Scatter plot of a bivariate Normal distribution with no correlation (i.e., ρ = 0).
we will now provide a definition for a random process, which is also referred to as a stochastic process. A random process is a family of time-domain functions depending on the parameters t and ω (i.e., we can define a random process by the function X(t, ω)).
Figure 3.9 Scatter plot of a bivariate Normal distribution with a substantial amount of correlation
between Gaussian random variables X and Y.
Given this additional degree of freedom, namely, the time parameter t, we should ask ourselves how we should interpret the behavior of a random process. Referring to the random process illustration shown in Figure 3.10, we see that a random process X(t, ω) consists of a collection of deterministic time-domain functions, each of which corresponds to an outcome ωi from the sample space Ω. The "randomness" associated with the random process is derived from the fact that at every time instant, the real-valued output of the random process is generated by the completely random selection of one of the time-domain functions at that specific time instant. Furthermore, notice how the selection of the same deterministic time-domain function corresponding to the same outcome at two different time instants, say t0 and t1, could yield a real-valued output that is different. In general, we can interpret the random process X(t, ω) as follows:

• If t and ω are both variable, X(t, ω) is a family of time-domain functions.
• If t is fixed and ω is variable, X(t, ω) is a random variable.
• If ω is fixed and t is variable, X(t, ω) is a single deterministic time-domain function.
• If both t and ω are fixed, X(t, ω) is a single number.
The second-order distribution F(x1, x2; t1, t2) = P(X(t1) ≤ x1, X(t2) ≤ x2) and the corresponding second-order density f(x1, x2; t1, t2) = ∂²F(x1, x2; t1, t2)/(∂x1 ∂x2) characterize the process at pairs of time instants. One useful first-order characterization tool is the mean function:

μX(t) = E[X(t, ω)]  (3.72)
Another useful statistical characterization tool for a random process X(t, ω) is the autocorrelation function RXX(t1, t2), where we evaluate the amount of correlation that the random process X(t, ω) possesses at two different time instants t1 and t2. We can define this mathematically by the expression:

RXX(t1, t2) = E[X(t1, ω) X*(t2, ω)] = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} x1 x2* f(x1, x2; t1, t2) dx1 dx2  (3.73)
Note that the value of the diagonal for RXX(t1, t2) is the average power of X(t, ω), namely:

RXX(t, t) = E[|X(t, ω)|²]

Since RXX(t, t) = E[|X(t, ω)|²] ≥ 0, for given time instants t1 and t2 we have the following (Cauchy-Schwarz) inequality:

|RXX(t1, t2)| ≤ √(E[|X(t1, ω)|²] · E[|X(t2, ω)|²])
3.3.2 Stationarity
A stationary process is a random process that exhibits the same behavior at any
two time instants (i.e., the random process is time invariant). Two common forms
of stationary processes are strict-sense stationary (SSS) random processes and wide-
sense stationary (WSS) random processes. In this section, we will define these two
types of random processes.
For a strict-sense stationary process, f(x1, x2; t1, t2) = f(x1, x2; t1 + c, t2 + c) for any value of c. Consequently, setting c = −t2, the joint density of the random process at time instants t and t + τ is independent of t and is equal to f(x1, x2; τ).
A wide-sense stationary (WSS) random process, on the other hand, only needs to satisfy the following two conditions:

• The mean function μX(t) does not depend on time t (i.e., μX(t) = E[X(t, ω)] = μX).
• The autocorrelation function RXX(t + τ, t) depends only on the relative difference between t and t + τ (i.e., RXX(t + τ, t) = RXX(τ)).
Several observations about WSS random processes include the following:

• The average power of a WSS random process is independent of time since E[|X(t, ω)|²] = RXX(0).
• The autocovariance function of a WSS random process is equal to CXX(τ) = RXX(τ) − |μX|².
• The correlation coefficient of a WSS random process is given by ρXX(τ) = CXX(τ)/CXX(0).
• Two random processes X(t, ω) and Y(t, ω) are jointly WSS if each is WSS and their cross-correlation depends only on τ = t1 − t2.
• If the random process X(t, ω) is WSS and uncorrelated, then CXX(τ) = qδ(τ), where q is some multiplicative constant.
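The last property is easy to check by estimating the sample autocovariance of a white sequence. The sketch below (Python/NumPy; the choice of a Gaussian white process with q = 2 is illustrative) shows the characteristic spike at lag zero and near-zero values elsewhere:

```python
import numpy as np

rng = np.random.default_rng(4)
n, q = 100_000, 2.0

# Zero-mean white (uncorrelated) WSS process with per-sample variance q.
x = rng.normal(0.0, np.sqrt(q), size=n)

def autocov(x, lag):
    """Biased sample autocovariance C_XX(lag) of a zero-mean sequence."""
    if lag == 0:
        return float(np.mean(x * x))
    return float(np.mean(x[:-lag] * x[lag:]))

c0 = autocov(x, 0)  # approaches q: the delta spike at tau = 0
c1 = autocov(x, 1)  # approaches 0
c5 = autocov(x, 5)  # approaches 0
```

Only the lag-zero term survives, which is the discrete-time analogue of CXX(τ) = qδ(τ).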
3.3.2.3 Cyclostationarity

There exists another form of stationarity characteristic that often occurs in wireless data transmission. A cyclostationary random process Y(t) is defined by a mean function μY(t) that is periodic across time t as well as an autocorrelation function RYY(τ + θ, θ) that is periodic across θ for a fixed value of τ. Consequently, the time-averaged autocorrelation of a cyclostationary random process Y(t) with period T0 can be described mathematically by:

RYY(τ) = (1/T0) ∫_0^{T0} RYY(τ + θ, θ) dθ  (3.80)
Suppose we define a random variable Y as a linear functional of a Gaussian random process X(t), namely:

y = ∫_0^T g(t) X(t) dt  (3.81)
Note that the random variable Y has a Gaussian distribution, where its PDF is
defined as:
fY(y) = (1/√(2πσY²)) e^{−(y−μY)²/(2σY²)}  (3.82)
where μY is the mean and σY² is the variance. Such processes are important because
they closely match the behavior of numerous physical phenomena, such as additive
white Gaussian noise (AWGN).
The power spectral density (PSD) SXX(f) of a WSS random process and its autocorrelation function RXX(τ) form a Fourier transform pair, a result known as the Einstein-Wiener-Khinchin (EWK) relations:

SXX(f) = ∫_{−∞}^{+∞} RXX(τ) e^{−j2πfτ} dτ  (3.83)

RXX(τ) = ∫_{−∞}^{+∞} SXX(f) e^{+j2πfτ} df  (3.84)
Using the EWK relations, we can derive several general properties of the PSD of a stationary process, namely:

• SXX(0) = ∫_{−∞}^{+∞} RXX(τ) dτ;
• E[X²(t)] = ∫_{−∞}^{+∞} SXX(f) df;
• SXX(f) ≥ 0 for all f;
• SXX(−f) = SXX(f) for a real-valued process X(t);
• The power spectral density, appropriately normalized, has the properties usually associated with a probability density function:

pX(f) = SXX(f) / ∫_{−∞}^{+∞} SXX(f) df  (3.85)
A very powerful consequence of the EWK relations is its usefulness when attempt-
ing to determine the autocorrelation function or PSD of a WSS random process that
is the output of a linear time-invariant (LTI) system whose input is also a WSS ran-
dom process. Specifically, suppose we denote H(f ) as the frequency response of an
LTI system h(t). We can then relate the power spectral density of input and output
random processes by the following equation:
where SXX(f ) is the PSD of input random process and SYY (f ) is the PSD of output
random process. This very useful relationship is illustrated in Figure 3.11.
3.4 Chapter Summary

In this chapter, a brief introduction to some of the key mathematical tools for ana-
lyzing and modeling random variables and random processes has been presented.
Of particular importance, the reader should understand how to mathematically
manipulate Gaussian random variables, Gaussian random processes, and bivariate
Normal distributions since they frequently occur in digital communications and
wireless data transmission applications. Furthermore, understanding how station-
arity works and how to apply the EWK relations to situations involving random
processes being filtered by LTI systems is vitally important, especially when dealing
with the processing and treatment of received wireless signals by the communica-
tion system receiver.
Figure 3.11 An example of how an LTI system h(t) can transform the PSD between the WSS random process input X(t) and the WSS random process output Y(t).
3.5 Additional Readings
Although this chapter attempts to provide the reader with an introduction to some
of the key mathematical tools needed to analyze and model the random variables
and random processes that frequently occur in many digital communication systems,
the treatment of these mathematical tools is by no means rigorous nor thorough.
Consequently, the interested reader is encouraged to see some of the available text-
books that address this broad area in greater detail. For instance, the gold standard
for any textbook on the subject of probability and random processes is by Papoulis
and Pillai [1]. On the other hand, those individuals seeking to understand probabil-
ity and random processes theory within the context of communication networks
would potentially find the textbook by Leon-Garcia to be highly relevant [3].
For individuals who are interested in activities involving physical layer digital
communication systems and digital signal processing, such as Wiener filtering, the
textbook by Gubner would be a decent option given that many of the examples and
problems included in this publication are related to many of the classic problems
in communication systems [4]. Regarding textbooks that possess numerous solved
examples and explanations, the books by Hsu [5] and Krishnan [6] would serve as
suitable reading material. Finally, for those individuals who are interested in study-
ing advanced topics and treatments of random processes, the textbook by Grimmett
and Stirzaker would be a suitable publication [7].
3.6 Problems
1. [Probability of Event] A die is tossed twice and the number of dots facing up in
each toss is counted and noted in the order of occurrence.
HINT: Assume that the probability of any subinterval is proportional to its length.
Figure 3.12 An asymmetric communications channel with input values (“0,” “1”) and output
values (“0,” “1,” “2”).
namely a value of “0” with a probability a, and a value of “1” with a probability
1 − a. The output can take one of the three discrete values: “0,” “1,” and “2.”
Use the notation,
Rj = {j is received}, j = 0, 1, 2.
The conditional probabilities for the occurrence of the output symbols are also
shown in the figure. For example, P[R0|T0] = p1 (read as: “probability that a ‘0’
is received given that a ‘0’ was transmitted equals p1”).
(a) Find the probability that the output is “0” (i.e., find P[R0]).
(b) Find the probability that the input was “0” given that the output is “0” (i.e.,
find P[T0|R0]).
(c) Find the probability that the input was “1” given that the output is “2” (i.e.,
find P[T1|R2]).
(a) Generate a 1 × 1,000,000 vector containing "0" and "1" with probabilities a and 1 − a (NOTE: Use in this problem a = 0.1, 0.5, and 0.8). Show as a function of vector index, n, the accumulation of the probabilities of "0" and "1" as n increases from 1 to 1,000,000. Do these values correspond to the theoretical (expected) outcome?
(b) Take the vector generated in part (a) and feed it to the asymmetric channel of
Figure 3.12. Plot the accumulated values of P[R0], P[T0|R0], and P[T1|R2] as
n increases from 1 to 1,000,000. Does the asymptotic behavior of these three
scenarios correspond to the theoretical results in question 3?
One way of writing a MATLAB script is explained as follows. Notice that the following steps need to be implemented cumulatively as n increases from 1 to 1,000,000.
• Generate the transmit symbol vector using randsrc() function. The rand-
src() function allows you to generate the input vector such that each entry
of the alphabet occurs with the desired probability.
• Select the indices associated with zeros (ones) in the transmit symbol vector
using find() function.
• Randomize the indices associated with zeros (ones) in the transmit symbol
vector using randperm() function.
• Select the first 100 × p1% (100 × q1%) elements from the vector containing
randomized zero (one) locations and store them as zeros, select the next 100 ×
p2% (100 × q2%) elements from the same vector containing randomized
zero (one) locations and store them as ones, and so on. (You may have to use
round() function to select an integer number of zeros (ones).)
• Count the appropriate values, and store P[R0] as a function of n, P[T0|R0] as
a function of n and P[T1|R2] as a function of n.
• Substitute the numerical values in your result from question 3. Verify that the plots of P[R0] versus n, P[T0|R0] versus n, and P[T1|R2] versus n settle at these numerically obtained values as n increases.
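The steps above can be sketched compactly in Python/NumPy as well (the exercises assume MATLAB's randsrc/find/randperm; this is only an equivalent outline, and the conditional probabilities p_i and q_i below are assumed placeholder values since Figure 3.12's actual labels are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(6)
n, a = 1_000_000, 0.5

# Assumed conditional probabilities (the true p_i, q_i come from Figure 3.12):
p = np.array([0.80, 0.15, 0.05])  # P[R_j | T_0], j = 0, 1, 2
q = np.array([0.05, 0.15, 0.80])  # P[R_j | T_1], j = 0, 1, 2

# Transmit vector: "0" with probability a, "1" with probability 1 - a.
tx = (rng.random(n) >= a).astype(int)

# Pass every symbol through the asymmetric channel.
rx = np.where(tx == 0,
              rng.choice(3, size=n, p=p),
              rng.choice(3, size=n, p=q))

# Empirical versus theoretical values of P[R0] and P[T0 | R0].
pR0_emp = np.mean(rx == 0)
pR0_theory = a * p[0] + (1 - a) * q[0]
pT0R0_emp = np.mean((tx == 0) & (rx == 0)) / pR0_emp
pT0R0_theory = a * p[0] / pR0_theory   # Bayes' rule
```

Tracking these quantities cumulatively as n grows reproduces the convergence behavior the problem asks you to plot.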
6. [Bernoulli Random Numbers] Suppose you have just been hired by the Acme
Randomness Company, Inc. and your first assignment is to deal with their
RNG3000 random number generator, as shown in Figure 3.13. This random
number generator operates as follows: Three Bernoulli random number genera-
tors are continuously producing sequences of “1” and “0” values, where the ith
Bernoulli random generator has a probability of pi of producing a “1” and a
probability of 1 − pi of producing a “0.” Although these three random number
generators are simultaneously producing binary values, only one is actually being
selected by a switch for a duration of N samples, and the switch randomly selects
a random number generator after every N samples. Note that the value for N is
selected at random from a range [1000, 10000] only once when the RNG3000
starts up. Finally, the output of the switch is then passed through a binary sym-
metric channel. The binary symmetric channel possesses a probability of p for
the case when the input samples remains the same at the output of the channel,
and a probability 1 – p when the sample value is flipped to the other binary
value. Note that the RNG3000 is capable of producing L = 10,000,000 random
samples in a single operation.
Given that p = 0.995, p1 = 0.9, p2 = 0.5, p3 = 0.1, and that we are not informed
about the value of N generated at the start of the process, design a device that
can be connected to the output of the binary symmetric channel capable of de-
termining the approximate value of N based solely on the continuous stream of
output samples from the binary symmetric channel, as well as determine which
Bernoulli random number generator is being selected by the switch for each seg-
ment of N samples. Note that you will need to implement Figure 3.13 in the first
place such that you can test your proposed design.
• Store in a single vector all the L samples produced at the output of the
RNG3000 that you have implemented for a fixed value of N (note that N is
generated only once via a uniform random number generator).
• To determine the size of N, apply a window of size N starting with N = 1000
and slide it across the vector of length L, recording the statistics of “0” and
“1” values per window. Based on the statistics, and iteratively repeating this
procedure for incrementally increasing window sizes, one can then determine
based on the statistics the appropriate size of N.
• Once N has been determined, one can use the statistics per window in order to ascertain which one of the three Bernoulli random number generators was employed.
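The windowed-statistics idea can be prototyped as follows (Python/NumPy sketch under simplifying assumptions: N is fixed and known here, whereas in the actual problem N is drawn once from [1000, 10000] and must be estimated by sweeping window sizes):

```python
import numpy as np

rng = np.random.default_rng(7)
p_channel = 0.995
p_gen = [0.9, 0.5, 0.1]   # p1, p2, p3 of the three Bernoulli generators
N, segments = 4000, 50    # N fixed for illustration only

# Switched Bernoulli stream: each N-sample segment comes from one generator.
choices = rng.integers(0, 3, size=segments)
stream = np.concatenate(
    [(rng.random(N) < p_gen[c]).astype(int) for c in choices])

# Binary symmetric channel: each sample is flipped with probability 1 - p.
flips = (rng.random(stream.size) < (1.0 - p_channel)).astype(int)
observed = np.bitwise_xor(stream, flips)

# Windowed statistics: each window mean concentrates near
# p_i * p + (1 - p_i) * (1 - p), identifying the active generator.
window_means = observed.reshape(segments, N).mean(axis=1)
expected = np.array([p_gen[c] * p_channel + (1 - p_gen[c]) * (1 - p_channel)
                     for c in choices])
```

Because the three expected window means (roughly 0.896, 0.5, and 0.104 here) are well separated, each window's sample mean cleanly identifies which generator was active.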
fX(x) = c(1 − x²) for −1 ≤ x ≤ 1, and 0 elsewhere  (3.87)
Figure 3.13 Schematic of the Acme Corporation’s RNG3000 random number generator.
Once your receiver has been constructed, please answer the following questions:
(a) What is the approximate value of the parameter λ of the exponential random variable describing the interburst time?
(b) What is the approximate value of the variance σ² of the Gaussian noise added to the transmitted message?
(c) Concatenating all the transmission bursts together, what is the secret message?
10. [Random Variable] A random variable X has the following probability density
function (PDF):
fX(x) = c(1 − x²) for −1 ≤ x ≤ 1, and 0 elsewhere  (3.89)
11. [RLC Circuit] In this problem, suppose the Acme Corporation is focusing their
efforts on the design of an RLC circuit similar to the one shown in Figure 3.14.
The voltage transfer function between the source and the capacitor is:
H(ω) = 1 / ((1 − ω²LC) + jωRC)  (3.90)
Suppose that the resonant frequency of the circuit, ω0, is the value of ω that maximizes |H(ω)|².
(a) Solve for the expression of the resonant frequency in terms of R, L, and C.
(b) Given that these circuits are mass produced, then the actual values of R, L,
and C in a particular device may vary somewhat from their design specifica-
12. [Bivariate Normal] In this problem, you have just been assigned another project
by the Acme Randomness Company, with respect to the characterization of some
test data that was recently collected during a measurement campaign. This test
data is saved in the file testdata.mat.
An initial analysis by the company has shown that this data is a concatenation of five bivariate Normal vector pairs, (X, Y), where the length for each pair of vectors is derived from a discrete uniform random variable that produces values between 100,000 and 1,000,000. Note that all of these bivariate Normal
vectors are zero-mean and unit variance.
Furthermore, each of these vector pairs possesses a different positive correlation factor ρXY between them, where ρXY is defined as:
ρXY = E[((X − mX)/σX) · ((Y − mY)/σY)]  (3.91)
where mX and mY are the mean values of X and Y, while sX and sY are the stan-
dard deviation values of X and Y.
Your task is to determine the approximate lengths of these five vectors and to find the approximate correlation factor ρXY for each pair of vectors.
• Use the plotting command scatter in order to show the correlation be-
tween X and Y. Please include these scatter plots in your solutions and make
sure to explain them thoroughly.
• Employ a sliding window approach in order to test the correlation of a spe-
cific segment. Changes in the correlation could be indicative of a transition
from one vector to another.
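The sliding-window strategy can be prototyped on synthetic data before touching testdata.mat. The sketch below (Python/NumPy rather than MATLAB's scatter; two segments with made-up lengths and correlations stand in for the five real ones) shows how windowed correlation estimates reveal both the segment boundaries and the per-segment ρXY:

```python
import numpy as np

rng = np.random.default_rng(8)

def bivariate(n, rho):
    """Zero-mean, unit-variance bivariate Normal pair with correlation rho."""
    x = rng.standard_normal(n)
    y = rho * x + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
    return x, y

# Two concatenated segments with different correlation factors
# (the real data file contains five such segments of random lengths).
x1, y1 = bivariate(150_000, 0.2)
x2, y2 = bivariate(200_000, 0.8)
x = np.concatenate([x1, x2])
y = np.concatenate([y1, y2])

# Windowed correlation estimates; an abrupt change in the estimate
# marks a boundary between two segments.
win = 20_000
rho_first = np.corrcoef(x[:win], y[:win])[0, 1]   # inside segment 1
rho_last = np.corrcoef(x[-win:], y[-win:])[0, 1]  # inside segment 2
```

Sliding the window sample by sample (or in strides) and plotting the estimate versus position localizes each transition between segments.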
(a) If X(t) contains a DC component equal to A, then RXX(τ) will contain a constant component equal to A².
(b) If X(t) contains a sinusoidal component, then RXX(τ) will also contain a sinusoidal component of the same frequency.
14. [Cross-correlation] Consider a pair of stationary processes X(t) and Y(t). Show that the cross-correlations RXY(τ) and RYX(τ) of these processes have the following properties:
E [Xn+ 1 | Yn , …, Y1 ] = Xn , n ≥ 1 (3.92)
where μZ = p − q and σZ² = 4pq. Let us define the following random walk process:
Wn = Σ_{i=1}^{n} Zi = Σ_{i=1}^{n−1} Zi + Zn = Wn−1 + Zn  (3.95)

The mean of Wn is μW = n(p − q), while its variance is σW² = 4npq.
Prove that Yn = Wn − n(p − q) is a discrete martingale with respect to {Zk, 1 ≤ k ≤ n}.
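Before attempting the proof, the stated moments of the random walk can be sanity-checked in simulation (Python/NumPy sketch; the values p = 0.6, n = 200 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(9)
p, n, trials = 0.6, 200, 10_000
q = 1.0 - p

# Steps Z_i = +1 with probability p and -1 with probability q.
z = np.where(rng.random((trials, n)) < p, 1, -1)

# Random walk W_n from (3.95) and the compensated process Y_n = W_n - n(p - q).
w = z.sum(axis=1)
y = w - n * (p - q)

mean_w, var_w = w.mean(), w.var()  # theory: n(p - q) = 40 and 4npq = 192
mean_y = y.mean()                  # a martingale keeps its initial mean of zero
```

The empirical mean and variance of Wn land on n(p − q) and 4npq, and the compensated process Yn stays centered at zero, consistent with the martingale property to be proved.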
E[Y(t)] < ∞,  E[Y(t) | X(u), 0 ≤ u ≤ s] = Y(s)  (3.96)
Y(t) = e^{kW(t) − k²σ²t/2},  t > 0  (3.97)

where k is an arbitrary constant. Show that Y(t) is a martingale with respect to {W(t), t ≥ 0}.
17. [Advanced Topic] Suppose that we define a random process as Y(t) = W²(t), where W(t) is a Wiener process with variance σ²t. Show that Y(t) is a submartingale.
18. [Advanced Topic] Suppose that we have a Wiener process X(t). Using the follow-
ing arbitrary linear combination:
Σ_{i=1}^{n} ai X(ti) = a1 X(t1) + a2 X(t2) + … + an X(tn)  (3.98)

where 0 ≤ t1 ≤ … ≤ tn and the ai are real constants, show that it can also be a Gaussian process.
19. [Advanced Topic] Suppose we define the term continuous in probability for a random process {X(t), t ∈ T} when for every ε > 0 and t ∈ T:

lim_{h→0} P(|X(t + h) − X(t)| ≥ ε) = 0
20. [Advanced Topic] Suppose we define X(ω) as the Fourier transform of a random process X(t). If X(ω) is nothing more than white noise with zero mean and an autocorrelation function that is equal to q(ω1)δ(ω1 − ω2), then show that X(t) is wide-sense stationary with a power spectral density equal to q(ω)/2π.
References
[1] Papoulis A., and S. Unnikrishna Pillai, Probability, Random Variables and Stochastic Pro-
cesses, 4th Edition, McGraw-Hill Higher Education, 2002.
[2] Abramowitz, M., and I. Stegun, Handbook of Mathematical Functions: with Formulas,
Graphs, and Mathematical Tables, Dover Publications, 1965.
[3] Leon-Garcia, A., Probability, Statistics, and Random Processes for Electrical Engineering,
3rd Edition., Prentice Hall, 2008.
[4] Gubner J. A., Probability and Random Processes for Electrical and Computer Engineers,
Cambridge University Press, 2006.
[5] Hsu H., Schaum’s Outline of Probability, Random Variables, and Random Processes,
McGraw-Hill, 1996.
[6] Krishnan V., Probability and Random Processes, Wiley-Interscience, 2006.
[7] Grimmett G., and Stirzaker D., Probability and Random Processes, 3rd Edition, Oxford
University Press, 2001.
Chapter 4
Digital Transmission Fundamentals
A typical digital communication system includes source encoding and source decoding blocks that handle the removal of redundant information from the binary
data, channel encoding and channel decoding blocks that introduce a controlled
amount of redundant information to protect the transmission from potential errors,
and the radio frequency front-end or RF front-end blocks that handle the conversion of baseband waveforms to higher carrier frequencies.
One may ask the question: Why do we need all these blocks in our digital commu-
nication system? Notice in Figure 4.2 the presence of a channel between the transmitter
and the receiver of the digital transmission system. The main reason that the design of a
digital communication system tends to be challenging, and that so many blocks are in-
volved in its implementation, is due to this channel. If the channel was an ideal medium,
where the electromagnetic waveforms from the transmitter are clearly sent to the receiver
without any sort of distortion or disturbances, then the design of digital communication
systems would be trivial. However, in reality a channel introduces a variety of random
impairments to a digital transmission that can potentially affect the correct reception of
waveforms intercepted at the receiver. For instance, a channel may introduce some form
of noise that can obfuscate some of the waveform characteristics. Furthermore, in many
real-world scenarios many of these nonideal effects introduced by the channel are time-
varying and thus difficult to deal with, especially if they vary rapidly in time.
Thus, under real-world conditions, the primary goal of any digital communica-
tion system is to transmit a binary message m(t) and have the reconstructed version of this binary message m̂(t) at the output of the receiver be equal to each other. In
other words, our goal is to have P(m̂(t) ≠ m(t)) as small as needed for a particular
application. The metric for quantitatively assessing the error performance of a digi-
tal communication system is referred to as the probability of error or bit error rate
(BER), which we define as Pe = P(m̂(t) ≠ m(t)). Note that several data transmission
applications possess different Pe requirements due in part to the data transmission
rate. For instance, for digital voice transmission, a BER of Pe ~ 10−3 is considered
acceptable, while for an average data transmission application a BER of Pe ~ 10−5 −
10−6 is deemed sufficient. On the other hand, for a very high data rate application
such as those that employ fiber-optic cables, a BER of Pe ~ 10−9 is needed since any
more errors would flood a receiver given that the data rates can be extremely high.
To help mitigate errors that may occur due to the impairments introduced by
the channel, we will briefly study how source encoding and channel encoding works
before proceeding with an introduction to modulation.
4.1.1 Source Encoding
One of the goals of any communication system is to efficiently and reliably com-
municate information across a medium from a transmitter to a receiver. As a re-
sult, it would be ideal if all the redundant information from a transmission could
be removed in order to minimize the amount of information that needs to be sent
across the channel, which would ultimately result in a decrease in the amount of
time, computational resources, and power being expended on the transmission.
Consequently, source encoding is a mechanism designed to remove redundant in-
formation in order to facilitate more efficient communications.
The way source encoding operates is by taking a sequence of source symbols u and mapping them to a corresponding sequence of source-encoded symbols v, where each vi ∈ v is as close to random as possible and the components of v are uncorrelated (i.e., unrelated). Thus, by performing this source encoding operation we hope to
achieve some level of redundancy minimization in vi ∈ v, thus limiting the amount
of wasted radio resources employed in the transmission of otherwise “predictable
symbols” in u. In other words, a source encoder removes redundant information
from the source symbols in order to realize efficient transmission. Note that in order
to perform source encoding, the source symbols need to be digital.
A single analog television channel occupies 6 MHz of frequency bandwidth. On the other hand, up to eight digitally encoded television channels can fit within the same frequency bandwidth of 6 MHz.
92 Digital Transmission Fundamentals
since our goal is to maximize the minimum Hamming distance in a given codebook
to ensure that the probability of accidentally choosing a codeword other than the
correct codeword is kept to a minimum. For example, suppose we have a codebook
consisting of {101,010}. We can readily calculate the minimum Hamming distance
to be equal to dH,min = 3, which is the best possible result. On the other hand, a code-
book consisting of {111, 101} possesses a minimum Hamming distance of dH,min = 1,
which is relatively poor in comparison to the previous codebook example.
In the event that a codeword is corrupted during transmission, decoding spheres
(also known as Hamming spheres) can be employed in order to make decisions on
the received information, as shown in Figure 4.3, where codewords that are cor-
rupted during transmission are mapped to the nearest eligible codeword. Note that
when designing a codebook, the decoding spheres should not overlap in order to en-
able the best possible decoding performance at the receiver (i.e., dH,min = 2t + 1).
A rate 1/3 repetition code with no source encoding would look like:

1 → 111 = c1 (1st codeword)
0 → 000 = c2 (2nd codeword)

What are the Hamming distances for the codeword pairs dH(111,000) and dH(111,101)?
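Hamming distances are simple to compute programmatically; a short Python helper (an illustrative sketch, not from the text) answers the question directly:

```python
def hamming_distance(c1: str, c2: str) -> int:
    """Number of positions in which two equal-length codewords differ."""
    assert len(c1) == len(c2)
    return sum(b1 != b2 for b1, b2 in zip(c1, c2))

d1 = hamming_distance("111", "000")  # the two repetition codewords differ in all 3 bits
d2 = hamming_distance("111", "101")  # these differ in only 1 bit
```

The rate 1/3 repetition codebook {111, 000} thus attains dH,min = 3, whereas a codebook containing both 111 and 101 would have dH,min = 1.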
Claude Shannon derived a quantitative expression that describes the limit on the data rate, or capacity, of a digital transceiver in order to achieve error-free transmission.
Suppose one considers a channel with capacity C and we transmit data at a
fixed code rate of K/N, which is equal to Rc (a constant). Consequently, if we in-
crease N, then we must increase K in order to keep Rc equal to a constant. What
Shannon states is that there exists a code such that for Rc = K/N < C and as N → ∞,
we have the probability of error Pe → 0. Conversely, for Rc = K/N ≥ C, Shannon
indicated that no such code exists. Hence, C is the limit in rate for reliable communications (i.e., C is the absolute limit; one cannot transmit any faster than this rate without causing errors).
So why is the result so important? First, the concept of reliability in digital com-
munications is usually expressed as the probability of bit error, which is measured
at the output of the receiver. As a result, it would be convenient to know what this
capacity is, given the transmission bandwidth, B, and the received signal-to-noise ratio (SNR), using mathematical tools rather than empirical measurements. Thus, Shannon
derived the information capacity of the channel, which turned out to be equal to:
the channel capacity. Thus, as η → 1, the system becomes more efficient. Therefore,
the capacity expression provides us with a basis for tradeoff analysis between B and
SNR, and it can be used for comparing the noise performance of one modulated
scheme versus another.
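The tradeoff between B and SNR implied by the capacity expression C = B log2(1 + SNR) can be explored numerically; a small Python sketch (the numeric values are illustrative, not from the text):

```python
import math

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon channel capacity C = B * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Tradeoff between B and SNR: halving the bandwidth keeps the same capacity
# only if (1 + SNR) is squared, i.e., the required SNR grows very quickly.
B, snr = 1e6, 15.0                                    # 1 MHz, SNR = 15 (~11.8 dB)
C = channel_capacity(B, snr)                          # 4.0 Mbit/s
C_half_band = channel_capacity(B / 2, (1 + snr) ** 2 - 1)
print(C, C_half_band)                                 # both 4000000.0
```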
Es = ∫0T s2(t)dt   (4.3)
where T is the period of the symbol. Then, for a modulation scheme consisting of M
symbols, we can define the average symbol energy via the following weighted average:
Es = P(s1(t)) · ∫0T s12(t)dt + … + P(sM(t)) · ∫0T sM2(t)dt   (4.4)
where P(si(t)) is the probability that the symbol si(t) occurs. Furthermore, if we
would like to calculate the average energy per bit, we can approximate this using Es
and dividing this quantity by b = log2 (M) bits per symbol, yielding:
Eb = Es / b = Es / log2(M)   (4.5)
To quantitatively assess the similarity between two symbols in terms of their
physical characteristics, we define the Euclidean distance as:
dij2 = ∫0T (si(t) − sj(t))2 dt = EΔsij   (4.6)
where ∆sij (t) = si(t) – sj(t). Since we are often interested in the worst-case scenario
when assessing the performance of a modulation scheme, we usually compute the
minimum Euclidean distance, namely:
dmin2 = min{si(t), sj(t), i ≠ j} ∫0T (si(t) − sj(t))2 dt   (4.7)
Thus, the power efficiency of a signal set used for modulation is given by the
expression:
εp = dmin2 / Eb   (4.8)
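Equations (4.3)–(4.8) translate directly into numerical routines on sampled waveforms. A minimal Python sketch using Riemann sums (equiprobable symbols assumed; function names are our own):

```python
def energy(s, dt):
    """Symbol energy Es = integral of s^2(t) dt, via a Riemann sum (4.3)."""
    return sum(x * x for x in s) * dt

def euclidean_dist_sq(si, sj, dt):
    """Squared Euclidean distance: integral of (si(t) - sj(t))^2 dt (4.6)."""
    return sum((a - b) ** 2 for a, b in zip(si, sj)) * dt

def power_efficiency(signals, bits_per_symbol, dt):
    """eps_p = d_min^2 / Eb (4.8), assuming equiprobable symbols."""
    d2 = [euclidean_dist_sq(si, sj, dt)
          for i, si in enumerate(signals) for sj in signals[i + 1:]]
    Eb = sum(energy(s, dt) for s in signals) / len(signals) / bits_per_symbol
    return min(d2) / Eb

# B-PAM check: s1(t) = +A and s2(t) = -A over [0, T] gives eps_p = 4.
A, T, N = 1.0, 1e-3, 1000
dt = T / N
print(power_efficiency([[A] * N, [-A] * N], 1, dt))   # ~4.0
```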
The most basic form of PAM is binary PAM (B-PAM), where the individual
binary digits are mapped to a waveform s(t) possessing two amplitude levels ac-
cording to the following modulation rule:
• “1” → s1(t);
• “0” → s2(t).
where s1(t) is the waveform s(t) possessing one unique amplitude level, while s2(t) is
also based on the waveform s(t) but possesses another unique amplitude level. Note
that the waveform s(t) is defined across a time period T and is zero otherwise. Since
the duration of the symbols is equivalent to the duration of the bits, the bit rate for
a B-PAM transmission is defined as Rb = 1/T bits per second.
The energy of the waveform s(t) is defined as:
Es = ∫0T s2(t)dt [Joules]   (4.9)
where u(t) is the unit step function and A is the signal amplitude. Furthermore, sup-
pose that the bit “1” is defined by the amplitude A while the bit “0” is defined by
the amplitude −A. We can subsequently write our modulation rule to be equal to:
• “1” → s(t);
• “0” → −s(t).
Therefore, the symbol energy is given by Es = A2T = A2/Rb. From this result, we
can define the energy per bit for a B-PAM transmission as:
Eb = P(1) · ∫0T s12(t)dt + P(0) · ∫0T s22(t)dt   (4.11)
where P(1) is the probability that the bit is a “1,” and P(0) is the probability that the
bit is a “0.” Thus, if we define s1(t) = s(t) and s2(t) = –s(t), then the average energy
per bit is equal to:
Eb = Es{P(1) + P(0)} = Es = ∫0T s2(t)dt = A2T   (4.12)
which is then plugged into (4.8) in order to yield a power efficiency result for a
B-PAM transmission of:
εp = dmin2 / Eb = 4A2T / A2T = 4   (4.14)
As we will observe throughout the rest of this section, a power efficiency result of
4 is the best possible result that you can obtain for any digital modulation scheme
when all possible binary sequences are each mapped to a unique symbol.
Suppose we now generalize the B-PAM results obtained for the average bit en-
ergy, the minimum Euclidean distance, and the power efficiency and apply them to
the case when we try mapping binary sequences to one of M possible unique signal
amplitude levels, referred to as M-ary pulse amplitude modulation (M-PAM). First,
let us express the M-PAM waveform as:
In order to calculate the average symbol energy, Es, we can simplify the mathematics by exploiting the symmetry of the signal constellation, which yields:
Es = (2A2T/M) · Σi=1M/2 (2i − 1)2
   = A2T (M2 − 1)/3 (simplified via summation tables)   (4.18)

→ Eb = Es / log2(M) = A2T (22b − 1) / (3b)
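The closed form in (4.18) can be verified against the direct sum over the M-PAM amplitude levels ±A, ±3A, …, ±(M − 1)A; a short Python sketch:

```python
def mpam_avg_energy_sum(M, A=1.0, T=1.0):
    """Average M-PAM symbol energy via the sum in (4.18):
    Es = (2*A^2*T/M) * sum_{i=1}^{M/2} (2i - 1)^2."""
    return (2 * A * A * T / M) * sum((2 * i - 1) ** 2
                                     for i in range(1, M // 2 + 1))

def mpam_avg_energy_closed(M, A=1.0, T=1.0):
    """Closed form: Es = A^2 * T * (M^2 - 1) / 3."""
    return A * A * T * (M * M - 1) / 3

for M in (2, 4, 8, 16):
    assert abs(mpam_avg_energy_sum(M) - mpam_avg_energy_closed(M)) < 1e-9
print("closed form matches the direct sum for M = 2, 4, 8, 16")
```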
where ωc is the carrier frequency, and Ai and Bi are the in-phase and quadrature amplitude levels. Notice how the cosine and sine functions are used to modulate these
amplitude levels in orthogonal dimensions. Visualizing M-QAM as a signal constel-
lation, the signal waveforms will assume positions in both the real and imaginary
axes, as shown in Figure 4.7.
To compute the power efficiency of M-QAM, εp,M-QAM, we first need to calculate the minimum Euclidean distance, which becomes:
dmin2 = ∫0T Δs2(t)dt = 2A2T   (4.21)
where we have selected the following signal waveforms without loss of generality:
s1(t) = A · cos(ωct) + A · sin(ωct)
s2(t) = 3A · cos(ωct) + A · sin(ωct)   (4.22)
In order to derive the average symbol energy, Es, we leverage the expression from M-ary PAM by replacing M with √M such that:
Es = A2T (M − 1)/3   (4.23)
which can then be used to solve:
Eb = Es / log2(M) = A2T (2b − 1) / (3b)   (4.24)
Thus, the power efficiency is equal to:

εp,M-QAM = dmin2 / Eb = 6b / (2b − 1)   (4.25)
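The M-QAM power efficiency can be checked directly from the constellation geometry (I/Q levels ±A, ±3A, …, with each carrier term of amplitude a contributing energy a2T/2 over a symbol period). A Python sketch; for b = 2 this recovers the QPSK value of 4:

```python
import math

def mqam_power_efficiency(M, A=1.0, T=1.0):
    """eps_p = d_min^2 / Eb for square M-QAM, from constellation geometry.

    I/Q levels are +/-A, +/-3A, ...; a cos or sin carrier term of amplitude
    a contributes energy a^2 * T/2 over one symbol period.
    """
    side = math.isqrt(M)                     # sqrt(M) levels per dimension
    levels = [(2 * i - 1) * A for i in range(1, side // 2 + 1)]
    levels += [-a for a in levels]
    avg_level_energy = sum(a * a for a in levels) / len(levels)
    Es = 2 * avg_level_energy * (T / 2)      # I plus Q dimensions
    Eb = Es / math.log2(M)
    d2min = (2 * A) ** 2 * (T / 2)           # adjacent points differ by 2A
    return d2min / Eb

for b in (2, 4, 6):                          # M = 4, 16, 64
    M = 2 ** b
    print(M, mqam_power_efficiency(M), 6 * b / (2 ** b - 1))  # columns agree
```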
which is designed specifically for the symbol-set used by the modulator, determines
the phase of the received signal and maps it back to the symbol it represents, thus
recovering the original data. This requires the receiver to be able to compare the
phase of the received signal to a reference signal; such a system is termed coherent.
Phase shift keying (PSK) characterizes symbols by their phase. Mathematically,
a PSK signal waveform is represented by:
si(t) = A cos(2πfct + (2i − 1)π/m),  for i = 1, …, m   (4.26)

where A is the amplitude, fc is the carrier frequency, and (2i − 1)π/m is the phase offset of each symbol.
each symbol. PSK presents an interesting set of tradeoffs with PAM and QAM. In
amplitude modulation schemes, channel equalization is an important part of decod-
ing the correct symbols. In PSK schemes, the phase of the received signal is much
more important than the amplitude information.
There are several types of PSK modulation schemes based on the number of
M possible phase values a particular PSK waveform can be assigned. One of the
most popular and most robust is binary PSK, or B-PSK, modulation, whose signal
constellation is illustrated in Figure 4.8. In general, the modulation rule for B-PSK
modulation is the following:
“1” → s1(t) = A · cos(ωct + θ)
“0” → s2(t) = A · cos(ωct + θ + π) = −s1(t)   (4.27)

In other words, the two signal waveforms that constitute a B-PSK modulation scheme are separated in phase by π radians.
In order to derive the power efficiency of a B-PSK modulation scheme, εp,BPSK, we first need to compute the minimum Euclidean distance dmin2 by employing the definition and solving for:
dmin2 = ∫0T (s1(t) − s2(t))2 dt
      = 4A2 ∫0T cos2(ωct + θ)dt
      = 4A2T/2 + (4A2/2) ∫0T cos(2ωct + 2θ)dt
      = 2A2T   (4.28)
Notice in (4.28) how the second term disappeared from the final result. This is due to the fact that the second term possessed a carrier frequency that was twice that of the original signal. Since the carrier signal is a periodic sinusoidal waveform, integrating such a signal possessing a high frequency would result in the positive portion of the integration canceling out the negative portion of the integration, yielding an answer that is close to zero. Consequently, we refer to this term as a double frequency term. Also note that many communication systems filter their received signals, which means the probability of filtering out the double frequency term is also quite high.
Note that another way of computing dmin2 is to use the concept of correlation, which describes the amount of similarity between two different signal waveforms. In this case, we can express the minimum Euclidean distance as:
dmin2 = ∫0T (s2(t) − s1(t))2 dt = Es1 + Es2 − 2ρ12   (4.29)

where the symbol energy for symbol i, Esi, and the correlation between symbols 1 and 2, ρ12, are given by:
Es1 = ∫0T s12(t)dt = A2 ∫0T cos2(ωct + θ)dt
    = A2T/2 + (A2/2) ∫0T cos(2ωct + 2θ)dt
    = A2T/2

Es2 = A2T/2

Eb = P(0) · Es2 + P(1) · Es1 = A2T/2   (4.30)
Note that since the number of bits represented by a single symbol is equal to one,
both the bit energy and symbol energy are equivalent.
Finally, applying the definition for the power efficiency, we get the following
expression:
εp,BPSK = dmin2 / Eb = 4   (4.31)
This is supposed to be the largest possible value for εp for a modulation scheme employing all possible signal representations, (i.e., M = 2b waveforms). Notice that when using the correlation approach to calculate the minimum Euclidean distance, in order to get a large εp, we need to maximize dmin2, which means we want ρ12 < 0. Thus, to achieve this outcome, we need the two waveforms to be antipodal, (i.e., s2(t) = −s1(t)).

Q: Show that for the signal pair s1(t) = A · cos(ωct + θ) and s2(t) = 0, the power efficiency is equal to εp = 2.
So far we have studied a PSK modulation scheme that consists of only two waveforms. We will now expand our PSK signal constellation repertoire to include four distinct waveforms per modulation scheme. In quadrature PSK (QPSK) modulation, a signal waveform possesses the following representation:
dmin2 = ∫0T Δs2(t)dt = 2A2T   (4.34)
Next, we would like to find Eb , which requires us to average over all the signal
waveforms. Consequently, this is equal to:
where the symbol energy of all four symbols is equal to Es1 = Es2 = Es3 = Es4 = A2T.
Finally, solving for the power efficiency using (4.8), we get:
εp,QPSK = dmin2 / Eb = 4   (4.36)
which is the same as BPSK but with 2 bits per symbol, making this a fantastic result!
Finally, let us study the general case when a PSK modulation scheme has a
choice of M possible phase values, where the distance of a signal constellation point
to the origin is always a constant and the signal constellation consists of M equally
spaced points on a circle. Referred to as M-ary PSK (M-PSK), a signal waveform
can be mathematically represented as:
si(t) = A · cos(ωct + 2πi/M),  for i = 0, 1, 2, …, M − 1   (4.37)
Note that there are several advantages and disadvantages with this modulation
scheme. For instance, as M increases the spacing between signal constellation points
decreases, thus resulting in a decrease in error robustness. Conversely, having the
information encoded in the phase results in constant envelope modulation, which
is good for nonlinear power amplifiers and makes the transmission robust to am-
plitude distortion channels.
Regarding the derivation of the power efficiency for an M-PSK modulation scheme, εp,M-PSK, suppose we define two adjacent M-PSK signal waveforms as s1(t) = A · cos(ωct) and s2(t) = A · cos(ωct + 2π/M). Calculating the minimum Euclidean distance using:

dmin2 = Es1 + Es2 − 2ρ12   (4.38)
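With these two adjacent waveforms, Es1 = Es2 = A2T/2 and ρ12 = (A2T/2)cos(2π/M), so (4.38) gives dmin2 = A2T(1 − cos(2π/M)). This can be confirmed by direct numerical integration; a Python sketch (the carrier frequency is chosen so that an integer number of cycles fits in [0, T], making the double-frequency term vanish):

```python
import math

def mpsk_dmin_sq(M, A=1.0, T=1.0, cycles=10, N=20000):
    """d_min^2 between adjacent M-PSK symbols by midpoint-rule integration
    of (s1(t) - s2(t))^2, with s1 = A*cos(wc*t), s2 = A*cos(wc*t + 2*pi/M).
    The carrier completes an integer number of cycles over [0, T]."""
    wc = 2 * math.pi * cycles / T
    dt = T / N
    total = 0.0
    for k in range(N):
        t = (k + 0.5) * dt
        d = A * math.cos(wc * t) - A * math.cos(wc * t + 2 * math.pi / M)
        total += d * d * dt
    return total

for M in (2, 4, 8):
    closed_form = 1.0 * (1 - math.cos(2 * math.pi / M))   # A^2 * T * (...)
    print(M, mpsk_dmin_sq(M), closed_form)                # columns agree
```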
One of the most commonly used quantitative metrics for measuring the performance of
a digital communication system is the probability of bit error or BER, which is the prob-
ability that a bit transmitted will be decoded incorrectly. This metric is very important
when assessing whether the design of a digital communication system meets the specific error robustness requirements of the application to be supported (e.g., voice, multimedia,
data). Furthermore, having a metric that quantifies error performance is helpful when
comparing one digital communication design with another. Consequently, in this sec-
tion we will provide a mathematical introduction to the concept of BER.
Suppose that a signal si(t), i = 1, 2, was transmitted across an AWGN channel
with noise signal n(t), and that a receiver intercepts the signal r(t). The objective of
the receiver is to determine whether either s1(t) or s2(t) was sent by the transmitter.
Given that the transmission of either s1(t) or s2(t) is a purely random event, the only
information that the receiver has about what was sent by the transmitter is the ob-
served intercepted signal r(t), which contains either signal in addition to some noise
introduced by the AWGN channel.
Given this situation, we employ the concept of hypothesis testing [2] in order to
set up a framework by which the receiver can decide on whether s1(t) or s2(t) was
sent based on the observation of the intercepted signal r(t). Thus, let us employ the
following hypothesis testing framework:
H1 : r(t) = s1(t) + n(t), 0 ≤ t ≤ T
H2 : r(t) = s2(t) + n(t), 0 ≤ t ≤ T

where the similarity between two waveforms x(t) and y(t) is measured by their correlation ∫0T x(t)y(t)dt.
Consequently, our decision rule on whether s1(t) or s2(t) was transmitted given that we observe r(t) is to choose s1(t) whenever ∫0T r(t)s1(t)dt ≥ ∫0T r(t)s2(t)dt and to choose s2(t) otherwise,
where we assume that s1(t) was transmitted. Recall that correlation tells us how
similar one waveform is to another waveform. Therefore, if the receiver knows the
appearance of s1(t) and s2(t), we can then determine which of these two waveforms
is more correlated to r(t). Since s1(t) was assumed to be transmitted, ideally the
received signal r(t) should be more correlated to s1(t) than s2(t).
On the other hand, what happens if some distortion, interference, and/or noise
is introduced in the transmission channel such that the transmitted signal waveforms
are corrupted? In the situation where a transmitted signal waveform is sufficiently cor-
rupted such that it appears to be more correlated to another possible signal waveform,
the receiver could potentially select an incorrect waveform, thus yielding an error
event. In other words, assuming s1(t) was transmitted, an error event occurs when:
∫0T r(t)s2(t)dt ≥ ∫0T r(t)s1(t)dt

which, after substituting r(t) = s1(t) + n(t), reduces to:

Es1 − ρ12 ≤ z

where z = ∫0T n(t)(s2(t) − s1(t))dt.
From this expression, we observe that both Es1 and ρ12 are deterministic quan-
tities. On the other hand, z is based on the noise introduced by the transmission
channel, and thus it is a random quantity that requires some characterization. Since
n(t) is a Gaussian random variable, then z is also a Gaussian random variable. This is due to the fact that the process of integration is equivalent to a summation across an infinite number of samples, and since we are summing up Gaussian random variables, the result is also a Gaussian random variable. With z ~ N(0, σ2), we now need to calculate the variance of z, σ2, which can be solved as follows:
σ2 = E{z2} = (N0/2) ∫0T (s1(t) − s2(t))2 dt
   = (N0/2)(Es1 + Es2 − 2ρ12) → assume Es1 = Es2 = E
   = N0(E − ρ12)
where E = Ei = ∫0T si2(t)dt and ρ12 = ∫0T s1(t)s2(t)dt. Note that we are assuming that the channel is introducing zero-mean noise, which means the sum of these noise contributions, (i.e., z), will also be zero-mean.
With both deterministic and random quantities characterized, we can now pro-
ceed with the derivation for the probability of bit error. The probability of an error
occurring given that a “1” was transmitted, (i.e., P(e|1)), is equal to:
P(z ≥ E − ρ12) = Q((E − ρ12)/σ)  → since z ~ N(0, σ2) and E − ρ12 is constant

= Q(√((E − ρ12)2/σ2))  → use σ2 = N0(E − ρ12)

= Q(√((E − ρ12)/N0))
When dealing with a large number of signal waveforms that form a modulation
scheme, the resulting probability of error, Pe, is expressed as a sum of pairwise er-
ror probabilities (i.e., the probability of one received symbol being another, specific
received symbol). The pairwise error probability of si(t) being decoded when sj(t)
was transmitted is given as:
Q(√(dij2 / (2N0)))   (4.50)
where N0 is the variance of the noise. An important note here is that we are assuming the noise is AWGN, since Q-functions apply specifically to Gaussian random variables. Therefore, the complete expression for Pe can be expressed as:
Q(√(dmin2/(2N0))) ≤ Pe ≤ Q(√(d1j2/(2N0))) + … + Q(√(dMj2/(2N0))),  i ≠ j   (4.51)
where the second half of the relationship is the summation of every single pairwise
error probability.
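The pairwise sum in (4.51) is straightforward to evaluate for a concrete constellation. A Python sketch for QPSK represented as unit-energy signal vectors (an illustrative choice, not from the text), with Q implemented via the complementary error function:

```python
import math

def qfunc(x):
    """Gaussian Q-function, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def worst_case_union_bound(points, N0):
    """Right-hand side of (4.51): the sum of pairwise Q(sqrt(d_ij^2/(2*N0)))
    terms, maximized over the transmitted symbol j."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return max(
        sum(qfunc(math.sqrt(d2(pi, pj) / (2 * N0)))
            for i, pi in enumerate(points) if i != j)
        for j, pj in enumerate(points))

qpsk = [(1, 0), (0, 1), (-1, 0), (0, -1)]    # unit-energy QPSK vectors
N0 = 0.25
lower = qfunc(math.sqrt(2.0 / (2 * N0)))     # Q(sqrt(d2min/2N0)), d2min = 2
upper = worst_case_union_bound(qpsk, N0)
print(lower, upper)                          # lower <= Pe <= upper
```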
approaches zero. You will find that computing the pairwise error probability of constellation points farther away yields negligible contributions to the total Pe, so omitting those terms can save a significant amount of time as well as computing cycles. Thus, an accurate estimate of P(e) can be computed from the following upper and lower bounds:
Q(√(dmin2/(2N0))) ≤ P(e) ≤ Σi∈I Q(√(dij2/(2N0)))   (4.52)
where I is the set of all signal waveforms within the signal constellation that are
immediately adjacent to the signal waveform j. In order to accurately assess the per-
formance of a communications system, it must be simulated until a certain number
of symbol errors are confirmed [3]. In most cases, 100 errors will give a 95 percent confidence interval; this stopping rule will be employed later on in this book in order to characterize the bit error rate of any digital communication system under evaluation.
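The 100-error stopping rule can be implemented directly. A Python Monte Carlo sketch for antipodal (B-PSK) signaling over AWGN, compared against the well-known result Pb = Q(√(2Eb/N0)); the Eb/N0 value is illustrative:

```python
import math
import random

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_ber(ebn0_db, target_errors=100, seed=1):
    """Simulate antipodal signaling over AWGN until the target number of
    bit errors is observed (the stopping rule cited from [3])."""
    rng = random.Random(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))    # unit-energy bits: Eb = 1
    errors = bits = 0
    while errors < target_errors:
        tx = rng.choice((-1.0, 1.0))
        rx = tx + rng.gauss(0.0, sigma)
        errors += (rx >= 0.0) != (tx > 0.0)  # decision: sign of rx
        bits += 1
    return errors / bits

ebn0_db = 4.0
sim = bpsk_ber(ebn0_db)
theory = qfunc(math.sqrt(2.0 * 10.0 ** (ebn0_db / 10.0)))
print(sim, theory)    # simulated BER lands near the theoretical value
```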
Until this point we have studied digital communication systems from a signal wave-
form perspective. Leveraging this perspective, we have developed mathematical
tools for analyzing the power efficiency and BER of different modulation schemes.
However, there are several instances where the use of a signal waveform framework
can be tedious or somewhat cumbersome. In this section, we will introduce another perspective on how to characterize and analyze modulation schemes using a different mathematical representation: signal vectors.
Suppose we define φj(t) as an orthonormal set of functions over the time interval [0, T] such that:

∫0T φi(t)φj(t)dt = 1 if i = j, and 0 otherwise
Given that si(t) is the ith signal waveform, we would like to represent this wave-
form as a sum of several orthonormal functions; that is,
si(t) = Σk=1N sik φk(t)   (4.53)
Figure 4.10 Sample vector representation of si(t) in three-dimensional space using basis functions φ1(t), φ2(t), and φ3(t).
In order to find the vector elements, sil, we need to solve the expression:
∫0T si(t)φl(t)dt = Σk=1N sik ∫0T φk(t)φl(t)dt = sil   (4.55)
which is essentially a dot product or projection of the signal waveform si(t) onto the orthonormal function φl(t). At the same time, if we perform the vector dot product
between the signal waveforms si(t) and sj(t), we get a correlation operation that is
equal to:
∫0T si(t)sj(t)dt = si · sj = ρij   (4.56)
while the energy of a signal si(t) is equal to:
T
Esi = ∫ si2 (t)dt = si ⋅ si =||si||2 (4.57)
0
dmin2 = ∫0T Δsij2(t)dt = ∫0T (si(t) − sj(t))2 dt

where the correlation term between signal waveforms si(t) and sj(t) is given by:

ρij = ∫0T si(t)sj(t)dt = si · sj   (4.58)
In order to solve for the power efficiency, we choose a set of orthonormal basis functions φi(t), i = 1, 2, …, k, where k is the dimension of the signal vector space. Given this set of functions, we can now represent the vectors si, i = 1, 2, …, M, where si = (si1, si2, …, sik) and:

sij = ∫0T si(t)φj(t)dt   (4.59)
Consequently, using the vector representations for the signals and the orthonor-
mal functions, we can calculate the minimum Euclidean distance:
dmin2 = min i≠j ||si − sj||2   (4.60)

the average energy per bit:

Eb = Es / log2(M)   (4.61)

and the power efficiency:

εp = dmin2 / Eb   (4.62)
4.5 Gram-Schmidt Orthogonalization
si(t) = Σk=1N sik φk(t)   (4.63)
4.5.1 An Example
Suppose we want to perform the Gram-Schmidt orthogonalization procedure of the sig-
nals shown in Figure 4.11 in the order s3(t), s1(t), s4(t), s2(t) and obtain a set of orthonor-
mal functions {fm(t)}. Note that the order in which the signal waveforms are employed
to generate the orthonormal basis functions is very important, since each ordering of
signal waveforms can yield a potentially different set of orthonormal basis functions.
∴ φ3(t) = 0   (4.69)

but we notice the resulting φ3(t) is equal to zero. This implies that the signal waveform s4(t) can be entirely characterized by only φ1(t) and φ2(t). Finally, for s2(t), we get the following:
g4(t) = s2(t) − Σj=13 s2j φj(t) = s2(t), since s2j = 0 for j = 1, 2, 3

∴ φ4(t) = g4(t) / √(∫0T g42(t)dt) = s2(t)/2   (4.70)
Consequently, with the orthonormal basis functions {f1(t), f2(t), f4(t)} defined, we
can now express the four signal waveforms as:
• s1 = (2/√3, √6/3, 0);
• s2 = (0, 0, 2);
• s3 = (√3, 0, 0);
• s4 = (−1/√3, −4/√6, 0).
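The Gram-Schmidt procedure is mechanical enough to automate on sampled waveforms. A Python sketch (the two piecewise-constant waveforms below are hypothetical stand-ins, not the waveforms of Figure 4.11):

```python
import math

def gram_schmidt(signals, dt):
    """Discrete Gram-Schmidt: return orthonormal basis functions (as sample
    vectors) for the given sampled waveforms, skipping zero residuals as in
    the phi_3(t) = 0 case above."""
    def inner(x, y):
        return sum(a * b for a, b in zip(x, y)) * dt
    basis = []
    for s in signals:
        g = list(s)
        for phi in basis:
            c = inner(s, phi)                     # projection coefficient
            g = [a - c * p for a, p in zip(g, phi)]
        norm = math.sqrt(inner(g, g))
        if norm > 1e-9:                           # zero residual: dependent
            basis.append([a / norm for a in g])
    return basis

# Hypothetical piecewise-constant waveforms on [0, 3] (not Figure 4.11):
dt, N = 0.01, 300
w1 = [1.0] * N                   # amplitude 1 on [0, 3]
w2 = [1.0] * 200 + [0.0] * 100   # amplitude 1 on [0, 2], 0 on [2, 3]
basis = gram_schmidt([w1, w2], dt)
print(len(basis))                # 2 orthonormal functions
```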
4.6 Optimal Detection
Detection theory, or signal detection theory, is used in order to discern between signal
and noise [2]. Using this theory, we can explain how changing the decision threshold
will affect the ability to discern between two or more scenarios, often exposing how
adapted the system is to the task, purpose, or goal at which it is aimed.
si(t) = Σk=1N sik φk(t),  r(t) = Σk=1N rk φk(t),  n(t) = Σk=1N nk φk(t)
Given that all of these signal waveforms use the same orthonormal basis functions,
we can rewrite the waveform model expression r(t) = si(t) + n(t) into:
Σk=1N rk φk(t) = Σk=1N sik φk(t) + Σk=1N nk φk(t)  →  r = si + n
nk = ∫0T n(t)φk(t)dt   (4.71)
which is the projection of the noise signal waveform on the orthonormal basis
function fk(t). Since the noise signal n(t) is a Gaussian random variable and the
integration process is a linear operation, this means that nk is a Gaussian random
variable as well. Thus, the noise signal vector n is a Gaussian vector. Let us now
proceed with determining the statistical characteristics of n in order to employ this
knowledge in signal waveform detection.
First, we would like to calculate the mean of these vector elements. Thus, by
applying the definition for the expectation, this yields:
E{nk} = E{∫0T n(t)φk(t)dt}
      = ∫0T E{n(t)}φk(t)dt
      = 0   (4.72)
since E{n(t)} = 0, which ultimately means that the mean of the noise signal vector is E{n} = 0.
The next step is to calculate the variance of these vector elements. Suppose we
let (nnT)kl = nknl be equal to the (k, l)th element of nnT. Therefore, in order to deter-
mine E{nknl}, where nk and nl are defined by:
nk = ∫0T n(t)φk(t)dt,  nl = ∫0T n(ρ)φl(ρ)dρ

E{nknl} = E{∫0T ∫0T n(t)n(ρ)φk(t)φl(ρ)dtdρ}
Solving E{nknl} yields:

E{nknl} = ∫0T ∫0T E{n(t)n(ρ)}φk(t)φl(ρ)dtdρ
        = ∫0T ∫0T (N0/2)δ(t − ρ)φk(t)φl(ρ)dtdρ
        = (N0/2) ∫0T φk(t)φl(t)dt
        = (N0/2) δ(k − l)   (4.73)
where the integration of the product of the two orthonormal functions φk(t) and φl(t) yields a delta function, since only when k = l do these two functions project onto each other. As a result, the matrix equivalent of this outcome is equal to:

E{nnT} = (N0/2) IN×N   (4.74)
Given the vector representation of the Gaussian random variable obtained in (4.74),
we need to define the joint probability density function of this representation in order
to characterize the individual elements of this vector. Leveraging the assumption that the noise elements are independent of each other, we can express the joint probability density function as the product of the individual probability density functions for each element, yielding:
p(n) = p(n1, n2, …, nN) = p(n1)p(n2)…p(nN) = (1/(2πσ2)N/2) Πi=1N e−ni2/2σ2
where p(ni) = (1/(σ√(2π))) e−ni2/2σ2 is the probability density function for the vector element ni. Since we know that E{nknl} = (N0/2)δ(k − l), we can then solve E{nk2} = N0/2 = σ2.
Additionally, we know that the dot product of a vector can be written as the sum-
mation of the squared elements, namely:
Σi=1N ni2 = ||n||2   (4.75)
which can then be used to yield the following expression for the joint probability
density function:
p(n) = p(n1, n2, …, nN) = (1/(2πσ2)N/2) e−||n||2/2σ2   (4.76)
where the probability of error is P(e) = P(error), the probability of correct reception
is P(c) = P(correct), and P(e) = 1 – P(c) is the complementary relationship between
these two probabilities. Then, using the law of total probability, the overall prob-
ability of correct detection is equal to:
P(c) = ∫V P(c|r = ρ) p(ρ)dρ   (4.78)
where P(c|r = ρ) ≥ 0 and p(ρ) ≥ 0. Therefore, we observe that P(c) attains a maximum value when P(c|r = ρ) also possesses a maximum value.
In order to maximize P(c|r = ρ), we use the following decision rule at the
receiver:
for i = 1, 2, …, M and i ≠ k. Note that for this decision rule we are assuming that sk is present in ρ such that:
ρ = sk + n → m̂ = mk   (4.80)
for i = 1,2,..., M. In the next section, we will mathematically derive the maximum
likelihood detector given the optimal decision rule for data transmissions being
performed across AWGN channels.
r = si + n (4.86)
where si is the ith signal waveform sent by the transmitter, n is the noise introduced
to the data transmission by the AWGN channel, and r is the intercepted signal
= max si {−||ρ − si||2 / (2σ2)}
= min si ||ρ − si||   (4.91)
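Decision rule (4.91) says that maximum-likelihood detection over AWGN reduces to a nearest-neighbor search in the signal vector space. A minimal Python sketch (the QPSK vectors are an illustrative choice):

```python
def ml_detect(rho, constellation):
    """Decision rule (4.91): choose the signal vector s_i nearest to the
    observed vector rho in Euclidean distance."""
    def dist_sq(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(range(len(constellation)),
               key=lambda i: dist_sq(rho, constellation[i]))

# Illustrative QPSK signal vectors:
qpsk = [(1, 0), (0, 1), (-1, 0), (0, -1)]
print(ml_detect((0.9, -0.2), qpsk))   # 0, nearest to (1, 0)
print(ml_detect((-0.1, 0.8), qpsk))   # 1, nearest to (0, 1)
```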
Since we are interested in the choice of si that yields the maximum value for the
decision rule, we can rewrite this decision rule as:
where g(t) is a pulse signal, w(t) is a white noise process with mean μ = 0 and power spectral density equal to N0/2, and x(t) is the observed received signal. Assuming
the receiver knows all the possible waveforms of g(t) produced by the transmitter,
the objective of the receiver is to detect the pulse signal g(t) in an optimum manner
based on an observed received signal x(t). Note that the signal g(t) may represent a
“1” or a “0” in a digital communication system.
In order to enable the receiver to successfully detect the pulse signal g(t) in an
optimal manner given the observed received signal x(t), let us filter x(t) such that
the effects of the noise are minimized in some statistical sense so the probability of
correct detection is enhanced. Suppose we filter x(t) using h(t) such that the output
of this process yields:
where n(t) is the result of the noise signal w(t) filtered by h(t) and g0(t) is the filtered
version of g(t) by h(t). The transmission model and filtering operation by h(t) is il-
lustrated in Figure 4.14.
Let us rewrite this filtering operation in the frequency domain, where the time-
domain convolution operations become frequency-domain products. Thus, taking
the inverse Fourier transform of H(f)G(f), which is equivalent to a convolution of
h(t) and g(t), we get the following expression for the filtered version of g(t):
g0(t) = ∫−∞∞ H(f)G(f)ej2πft df   (4.95)
where the inverse Fourier transform returns the filtering operation back to the time
domain.
Let us now calculate the instantaneous power of the filtered signal g0(t), which
is given as:
|g0(t)|2 = |∫−∞∞ H(f)G(f)ej2πft df|2   (4.96)

|g0(T)|2 = |∫−∞∞ H(f)G(f)ej2πfT df|2   (4.98)
which is the magnitude squared of the inverse Fourier transform of H(f)G(f) = F{h(t)*g(t)}. Since w(t) is a white Gaussian process with power spectral density N0/2, we know by the EWK Theorem that the power spectral density of the filtered noise signal n(t) is equal to SN(f) = (N0/2)|H(f)|2. Therefore, applying the definition for η and including these expressions will yield:
η = |∫−∞∞ H(f)G(f)ej2πfT df|2 / ((N0/2) ∫−∞∞ |H(f)|2 df)   (4.99)
From this resulting expression, we see that we need to solve for the frequency response H(f) such that it yields the largest possible value for the peak pulse SNR η. In order to obtain a closed-form solution, let us employ Schwarz's Inequality. Suppose that we have two complex functions, say, φ1(x) and φ2(x), such that:
∫−∞∞ |φ1(x)|2 dx < ∞  and  ∫−∞∞ |φ2(x)|2 dx < ∞   (4.100)
Then, by Schwarz’s Inequality we can rewrite the following integral expression as
an inequality:
|∫−∞∞ φ1(x)φ2(x)dx|2 ≤ (∫−∞∞ |φ1(x)|2 dx) · (∫−∞∞ |φ2(x)|2 dx)   (4.101)
η ≤ (2/N0) ∫−∞∞ |G(f)|2 df   (4.103)
Thus, in order to make this expression an equality, the optimal value for H(f) should be equal to:

Hopt(f) = G*(f) e−j2πfT

which in the time domain corresponds to hopt(t) = g(T − t), (i.e., a time-flipped and time-shifted version of the pulse g(t)).
The reason that we call these filters matched filters is because when we
convolve the time-flipped and time-shifted version of the transmitted
pulse signal with itself, the process is SNR maximizing. Consequently,
if a receiver intercepts some unknown noise-corrupted signal, it can
readily identify which one was sent by a transmitter by matching this
intercepted signal to all the available signal waveforms known at the
receiver using an implementation illustrated in Figure 4.15.
Figure 4.16 An example of the time-domain input and output signal waveforms processed by a
matched filter. (a) Time-domain representation of the input signal to the matched filter. (b) Time-
domain impulse response of the matched filter. (c) Time-domain representation of the output signal
of the matched filter.
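The behavior in Figure 4.16 can be reproduced in a few lines: convolving a sampled pulse with its time-flipped replica h[k] = g[N−1−k] yields an output that peaks at the sampling instant t = T, with peak value equal to the pulse energy. A Python sketch for a rectangular pulse:

```python
def matched_filter_output(g, dt):
    """Convolve a sampled pulse with its matched filter h[k] = g[N-1-k]
    (a time-flipped replica) and return the full output sequence."""
    h = g[::-1]
    N = len(g)
    y = []
    for n in range(2 * N - 1):
        acc = 0.0
        for k in range(max(0, n - N + 1), min(N, n + 1)):
            acc += g[k] * h[n - k]
        y.append(acc * dt)
    return y

# Rectangular pulse of amplitude A over [0, T]:
A, T, N = 2.0, 1.0, 100
dt = T / N
g = [A] * N
y = matched_filter_output(g, dt)
print(max(y))                    # peak equals the pulse energy Eg = A^2 * T
print(y.index(max(y)) == N - 1)  # True: the peak occurs at the t = T sample
```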
transmitter and its signal characteristics, it is also possible to employ a more sta-
tistical approach for determining which signal waveforms have been sent, even in
the presence of a noisy, corruption-inducing channel. Specifically, we can employ
the concept of correlation such that we only need to assume knowledge about the
waveforms themselves.1
Suppose we start with the decision rule derived at the beginning of this section and expand it such that:

||ρ − si||2 = ρ · ρ − 2ρ · si + si · si   (4.106)
Since ρ ⋅ ρ is common to all the decision metrics for different values of the signal
waveforms si , we can conveniently omit it from the expression, thus yielding:
1. For a matched filtering implementation, knowledge of both the transmission signal waveforms and the statistical characteristics of the noise introduced by the channel is needed by the receiver.
ρ · si = ∫0T ρ(t)si(t)dt,  si · si = ∫0T si2(t)dt = Esi
Based on this result, we can design a receiver structure that leverages correla-
tion in order to decide which signal waveform was sent by the transmitter based on
the observed intercepted signal at the receiver. A schematic of a correlation-based
implementation is shown in Figure 4.17. Given r(t) = si(t) + n(t), where we observe r(t) = ρ(t) at the input to the receiver, we first correlate r(t) with si(t) across all i.
Next, we normalize the correlation result by the corresponding signal energy Esi
in order to facilitate a fair comparison. Note that if all energy values are the same
for each possible signal waveform, we can dispense with the energy normalization
process since this will have no impact on the decision making. Finally, the resulting
decision values for each of the branches are compared against each other, and the
branch with the largest resulting value is selected.
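A Python sketch of a correlation-based receiver in the spirit of Figure 4.17. Here the branch metric is ρ · si − Esi/2, (i.e., (4.106) with the common ρ · ρ term dropped), which plays the role of the energy-compensation step described above; the sampled waveforms are hypothetical:

```python
import random

def correlation_receiver(r, signals, dt):
    """Bank of correlators: choose the waveform maximizing the branch
    metric rho . s_i - E_si / 2, which accounts for unequal energies."""
    def inner(x, y):
        return sum(a * b for a, b in zip(x, y)) * dt
    metrics = [inner(r, s) - 0.5 * inner(s, s) for s in signals]
    return metrics.index(max(metrics))

# Two hypothetical sampled waveforms and a noisy observation of the first:
rng = random.Random(0)
dt, N = 0.01, 100
s0 = [1.0] * N
s1 = [1.0] * 50 + [-1.0] * 50
r = [a + rng.gauss(0.0, 0.3) for a in s0]
print(correlation_receiver(r, [s0, s1], dt))   # 0: decides s0 was sent
```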
4.9 Additional Readings
Given the introductory nature of this chapter with respect to the topic of digital
communications, the interested reader is definitely encouraged to explore the
numerous books that provide a substantially more detailed and advanced treat-
ment of this topic. For instance, the latest edition of the seminal digital commu-
nications textbook by Proakis and Salehi [4] provides a rigorous, mathematical
treatment of many of the concepts covered in this chapter, in addition to many
other topics not presented, such as spread spectrum technologies, equalization,
and RAKE receiver implementations. To complement this mathematically intensive treatment, the authors also published a companion textbook that treats digital communications from a more applied perspective, including examples in MATLAB and Simulink [5].
As for introductory textbooks on digital communications, Sklar wrote an excellent text that provides the reader with a balance of mathematical rigor, detailed explanations, and several well-crafted examples [6]. The textbook by Couch is also
in the same category as Sklar, but it treats both analog and digital communications
[7], which is well suited for individuals who do not possess a background in the
communications field. Rice wrote his introductory textbook on digital communica-
tions from a discrete-time perspective, which is suited for an individual possessing
a background in discrete-time signals and systems [8]. The textbook also provides
numerous end-of-chapter problems as well as MATLAB examples available online,
providing the reader with many opportunities to practice the theoretical concepts
covered within this text.
The classic digital communications book by Barry, Messerschmitt, and Lee [9]
is an excellent reference textbook for those individuals who possess some under-
standing about digital communications but need convenient and immediate access
to detailed information. Similarly, the textbooks by Madhow [10] and Pursley [11]
both provide readers with a more advanced treatment of digital communication
theory. Finally, the book by Hsu [12] is an excellent reference textbook that consists
mostly of a popular collection of solved problems.
4.10 Problems
1. Power efficiency: Find and compare the power efficiency for the following
three binary signal sets. Give the relative performance in decibels. Assume
the s1(t) and s2(t) are equally likely.
(a) s1(t) = B sin(ω0t + φ) and s2(t) = B sin(ω0t − φ) for 0 ≤ t ≤ T and where Eb ≤ A²T/2. Find the best (B, φ).
(b) s1(t) = A sin(ω0t + θ) and s2(t) = B sin(ω0t) for 0 ≤ t ≤ T and where Eb ≤ A0²T/2 and A0 is known. Find the best (A, B, θ).
126 Digital Transmission Fundamentals
Figure 4.18 Three eight-point signal constellations (8-PSK, “7-around-1,” and “Box”).
(a) N is a Gaussian random variable. Solve for the mean and variance of N.
(b) Determine the three conditional probabilities of error: Pe given that s(t)
was sent, Pe given that −s(t) was sent, and Pe given that 0 was sent.
HINT: Solve in terms of E, N, and A, as well as express the answer
using the Q-function:
Q(x) = (1/√(2π)) ∫_x^∞ e^(−u²/2) du (4.110)

d/dx ∫_f(x)^∞ g(a) da = −(df/dx) g(f(x)) (4.111)
Figure 4.21 Four signaling waveforms employed in binary pulse amplitude modulation scheme.
7. Modulation and power efficiency: Suppose we are given the four waveforms
shown in Figure 4.21 as candidates for a binary pulse amplitude modulation
(PAM) modulation scheme. Let us consider the signal pairs P1: (s1(t), s2(t)), P2: (s1(t), s3(t)), and P3: (s1(t), s4(t)).
(a) Determine the minimum Euclidean distance, d²min, for signal pairs P1, P2, and P3.
(b) Calculate the average energy per bit, Eb, for signal pairs P1, P2, and P3.
Note that the pair of signals represent either a binary 1 or a binary 0, each
of which has an equal probability of occurring in the transmission.
(c) Of the three signal waveform pairs, which possesses the best power efficiency εp? Explain in your own words why this is the case.
8. Modulation and power efficiency:
(a) Calculate the average symbol energy, Es, for the tertiary signal constellation shown in Figure 4.22. Assume all three symbols are equiprobable.
(b) Calculate the minimum Euclidean distance, d²min, for the tertiary signal constellation shown in Figure 4.22.
(c) Suppose that each signal constellation point in Figure 4.22 requires b = 2 bits in order to achieve unique representation. Find the power efficiency, εp.
9. Modulation and power efficiency:
(a) Given the four waveforms in Figure 4.23, compute the Euclidean dis-
tance between all possible pairs of waveforms:
dij² = ∫_0^T (si(t) − sj(t))² dt, i ≠ j (4.112)
(b) Suppose that you are designing a modulator that maps b = 1 bit into
a waveform and that you are given the responsibility of choosing two
waveforms from the available four waveforms shown in Figure 4.23.
To achieve the best possible result when operating in an additive noise
channel, which two waveforms would you choose? Justify your answer.
HINT: Use your results in part (a).
References
1. The experiment files used in this book are available at [1].
132 Basic SDR Implementation of a Transmitter and a Receiver
When working in Simulink, the format of the data is extremely important. For example, if a [36 × 1] signal is sent to an input port expecting a [37 × 1] signal, as highlighted in Figure 5.2, not only will the model not run, but the software will also produce numerous error messages, as shown in Figure 5.3. Understanding how each block affects the data format will greatly simplify the design process of a communication system when we use Simulink.
5.1.1 Repetition Coding
One of the key building blocks of any communication system is the forward error
correction (FEC), where redundant data is added to the transmitted stream to make
Figure 5.2 Mask parameters of the Bernoulli Binary Generator block, where the samples per frame
is 36.
5.1 Software Implementation 133
Figure 5.3 Error messages from Simulink. By clicking on the error message, a detailed description
of the source of this error will be provided in the lower window to help with the debugging
process.
it more robust to channel errors. There are many types of FEC techniques, such
as the repetition coding approach, where each transmitted bit is repeated multiple
times. For example, Problem 1 in Section 5.5 introduces a repetition code with rep-
etition factor R = 5 in a binary pulse amplitude modulation (BPAM) transmission
system. In this section, we will explore together one technique for combating the
introduction of errors to data transmissions by implementing a simple repetition
coder (repetition factor R = 4). So what does it mean by a repetition coder with
repetition factor R = 4? A simple definition is that if a “0” symbol is to be transmit-
ted, this “0” symbol will be repeated four times by the repetition coder, such that
the output would be “0000”.
Let us first start by double clicking on the repetition coder block, which will
result in a MATLAB function block editor opening, in which we can write cus-
tomized MATLAB code. As mentioned previously, setting break points is a great
way for understanding and debugging M-files. For more information about break
points and how they can be used to debug and evaluate the code, please refer to
Appendix A.
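Before moving on, the coder's logic can be summarized in a short sketch. The following Python fragment is an illustrative stand-in for the MATLAB function block code described above (with a majority-vote decoder added for completeness); it captures the R = 4 behavior:

```python
def repetition_encode(bits, R=4):
    """Repeat each input bit R times; with R = 4, a 0 becomes 0000."""
    return [b for b in bits for _ in range(R)]

def repetition_decode(coded, R=4):
    """Majority-vote over each group of R received bits (the usual
    decoder for a repetition code)."""
    return [int(sum(coded[i:i + R]) > R / 2)
            for i in range(0, len(coded), R)]

print(repetition_encode([0]))                       # [0, 0, 0, 0]
print(repetition_decode([1, 1, 0, 1, 0, 0, 0, 1]))  # [1, 0]
```

With R = 4 the majority-vote decoder tolerates a single flipped bit per group; two or more flips within one group can still cause a symbol error.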
5.1.2 Interleaving
A repetition code is one of several useful tools available to a communication systems engineer for achieving robust data transmission. However, it is sometimes not enough, since it does not address the situation where a large quantity of data is corrupted in contiguous blocks. For instance, if a transmitter sends the data stream “101101”, a repetition coder with a repetition factor of 4 will yield:

111100001111111100001111,

where each input bit is repeated four times. While this encoding scheme may appear robust to error, it is still possible during a data transmission that a significant noise burst occurs over many consecutive bits, corrupting numerous adjacent binary digits in the transmission.
Interleaving is an approach where binary data is reordered such that the correla-
tion existing between the individual bits within a specific sequence is significantly
reduced. Since errors usually occur across a consecutive series of bits, interleaving
a bit sequence prior to transmission and de-interleaving the intercepted sequence at
the receiver allows for the dispersion of bit errors across the entire sequence, thus
minimizing its impact on the transmitted message. A simple interleaver will mix up
the repeated bits to make the redundancy in the data even more robust to error. It
reorders the duplicated bits amongst each other to ensure that at least one redun-
dant copy of each will arrive even if a series of bits are lost. For example, if we use
an interleaving step of 4, it means we reorder the vector by index [1, 5, 9, ..., 2,
6, 10, ...]. As a result, running “111100001111111100001111” through such an
interleaver will yield the following output:
101101101101101101101101.
The interleaving step can be any factor of the data length. However, different mixing algorithms will change the effectiveness of the interleaver.
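The step-4 interleaver described above can be sketched in a few lines of Python (an illustrative stand-in for the MATLAB implementation), and it reproduces the example from the text. De-interleaving a length-L block interleaved with step s is simply interleaving again with the complementary step L/s:

```python
def interleave(bits, step):
    """Reorder by index [1, 5, 9, ..., 2, 6, 10, ...] (1-based), i.e.
    read out every step-th element starting from each offset in turn."""
    return [bits[i] for r in range(step) for i in range(r, len(bits), step)]

def deinterleave(bits, step):
    """Invert interleave(); for a length-L block this is interleaving
    again with the complementary step L // step."""
    return interleave(bits, len(bits) // step)

coded = list("111100001111111100001111")
print("".join(interleave(coded, 4)))  # 101101101101101101101101
```

A burst of consecutive errors in the interleaved stream lands in different repetition groups after de-interleaving, which is exactly the dispersion effect described above.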
Once we have implemented the interleaver, let us combine the repetition coder and the interleaver into a single FEC subsystem. Although the simple interleaving technique introduced above is sufficient for our implementation, there are various other forms of interleaving that we will investigate in the next two subsections.
Figure 5.6 Approximate appearance of final Simulink transceiver model. This model implements
a digital communication system using 16-QAM and repetition coding. We can simulate the com-
munication channel by adding the AWGN Channel block.
In this section, we will deal with several fundamental but important concepts in digital communication system design via SDR using the Ethernet-based USRP platforms.2 First is frequency offset compensation. Due to device imperfections, a frequency offset exists in every communication system, which prevents ideal signal reception. Therefore, users need to determine the amount of frequency offset and compensate for it. The other important concept is the in-phase/quadrature (I/Q) representation of signals. Suppose we have a signal s(t) with phase φ(t); we can represent this signal using its I and Q components, namely,
sI (t) = s(t)cos(φ(t)) (5.1)
and
sQ (t) = s(t)sin(φ(t)) (5.2)
2. Simulink and MATLAB support most of the Ethernet-based USRP platforms, including USRP2, USRP N210 rev2, and rev4.
Since the default data type of the USRP is I/Q, understanding how I and Q data flow in the USRP will greatly help us with the SDR design and implementation. In addition, we will become familiar with some of the useful Simulink tools for digital communication systems.
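The mapping of equations (5.1) and (5.2), and its inverse, can be illustrated with a short Python sketch (the sample magnitude and phase below are arbitrary example values, not taken from the text):

```python
import math

def to_iq(s, phi):
    """I/Q decomposition of a sample with magnitude s and phase phi,
    per equations (5.1) and (5.2): sI = s*cos(phi), sQ = s*sin(phi)."""
    return s * math.cos(phi), s * math.sin(phi)

def from_iq(s_i, s_q):
    """Inverse mapping: recover (magnitude, phase) from the I/Q pair."""
    return math.hypot(s_i, s_q), math.atan2(s_q, s_i)

s_i, s_q = to_iq(2.0, 0.7)  # arbitrary magnitude and phase
```

Since sI² + sQ² = s², the complex sample sI + jsQ carries both the amplitude and the phase of the original signal, which is why the USRP exchanges complex I/Q samples with the host.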
∆f = (f × ppm)/10⁶ (5.3)

where ppm is the peak variation (expressed as ±), f is the center frequency (in Hz), and ∆f is the peak frequency variation (in Hz). Due to imperfections and tolerances in the manufacturing process, oscillator crystals typically have inaccuracies of about 20 or 50 parts per million (ppm), which are guaranteed by the manufacturer.
Figure 5.7 A 10.7 MHz local oscillator on the SDR PCB (from [5]), which is used to generate the carrier wave in order to modulate signals from baseband to the RF band.
5.2 USRP Hardware Implementation 139
∆f = (2.45 × 10⁹ × 20)/10⁶ = 49,000 Hz (5.4)
In other words, when using a carrier frequency of 2.45 GHz, the worst-case frequency offset could be as much as about 50 kHz. Note that if there are two USRP2s trying to communicate with each other, where one has an offset of −50 kHz and the other has an offset of +50 kHz, the overall difference in frequency will be 100 kHz. It is necessary to compensate for this frequency offset in order to achieve successful digital communication between the USRPs. For example, we have measured some of the USRP2s to have frequency offsets of as much as 45 kHz (45 kHz/2.45 GHz ≈ 18 ppm). With frequency offset compensation, the USRP2 works fine, but without it the digital communication system will not work at all.
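The worst-case computation of equations (5.3) and (5.4) is a one-liner; a Python sketch:

```python
def ppm_offset_hz(center_freq_hz, ppm):
    """Peak frequency variation of eq. (5.3): delta_f = f * ppm / 10^6."""
    return center_freq_hz * ppm / 1e6

# Eq. (5.4): a 20-ppm crystal at a 2.45-GHz carrier
print(ppm_offset_hz(2.45e9, 20))  # 49000.0
```

Between two radios the offsets add in the worst case, so two 20-ppm crystals at 2.45 GHz can differ by up to roughly 2 × 49 kHz ≈ 100 kHz, consistent with the discussion above.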
In order to devise a software-based approach for frequency offset compensation, let us perform the following steps to measure and compensate for this phenomenon. First, arbitrarily select a carrier somewhere within either of the supported bands of the XCVR2450 daughtercard (2.4 to 2.5 GHz and 4.9 to 5.9 GHz). For example, suppose we choose 2.45 GHz. Note that it is not recommended to choose this exact frequency, since other individuals using wireless equipment, including software-defined radios, might be transmitting at the same time and frequency.
Then, let us run siggen.mdl on one USRP and observeFFT.mdl on
another USRP, as shown in Figure 5.8 and Figure 5.9. By setting the Center fre-
quency parameter in both SDRu Transmitter and SDRu Receiver blocks, we can
use the carrier frequency selected in the previous step, as shown in Figure 5.10(a)
and Figure 5.10(b). Please note that these two blocks can be obtained by typing
sdrulib in the command window.
While these two models are running, we can use the FFT plot from the spectrum scope to measure the frequency offset between the received signal and the desired carrier. In order to obtain a better view on the scope, we can right-click on the plot and choose “Autoscale”. An example of the output from the FFT scope is shown in Figure 5.11(a). Once the value of the frequency offset is obtained, we can stop the models and adjust the carrier frequency of the transmitter by the amount of this offset. For example, by visual
Figure 5.8 The structure of siggen.mdl. In this model, since the SDRU Transmitter block requires
the complex input, the Real-Imag to Complex block converts real and imaginary inputs to a complex-
valued output signal.
inspection, we determine that the value at which the peak occurred is ∆f Hz, as shown in Figure 5.11(b). Then, we can set the center frequency of the transmitter to be 2.45 GHz + ∆f. By iteratively adjusting the carrier frequency and observing the result, we should be able to determine the carrier frequency to use on the transmitter in order to observe the correct carrier frequency on the receiver.
Keep in mind that both the transmitter and the receiver have frequency offsets. For example, 2.45 GHz on the receiver is somewhere within 2.45 GHz ± 20 ppm and is not exactly 2.45 GHz. When tuning the transmitter to eliminate the frequency offset on the receiver, we are essentially compensating for the offset between them; it is the relative offset between the transmitter and receiver that matters. Note that we can adjust either the transmitter or the receiver (or both) to compensate for the offset. The basic idea is to get the two devices synchronized in carrier frequency with each other. If more than two USRPs are communicating with each other, it is important to make sure that each transmitter-receiver pair is tuned to the other.
Figure 5.9 The structure of observeFFT.mdl. In this model, FFT Display is an enabled subsystem. We
use the Data Len parameter to qualify the execution of this part. Specifically, when Data Len contains
a zero value, there is no data, so FFT Display cannot be enabled.
(a) (b)
Figure 5.10 Set Center frequency of the SDRu Transmitter block and the SDRu Receiver block. (a)
Set Center frequency of the SDRu Transmitter block to be 2.45 GHz; (b) Set Center frequency of the
SDRu Receiver block to be 2.45 GHz.
(a)
(b)
Figure 5.11 Determine the offset ∆f based on the output from the FFT scope. (a) An example of the output from the FFT scope; (b) by visual inspection, we can determine the offset ∆f, the value at which the peak occurred.
Recall from Section 4.2.3 that a bandpass signal can be represented by the sum of its in-phase (I) and quadrature (Q) components. Moreover, the data input to the USRP board is complex, comprising I and Q. Since I and Q play an important role in digital communications, we are going to observe the I and Q data on the transmitter and receiver sides.
Figure 5.12 The structure of the main subsystem of observeiq.mdl. Since the SDRu Receiver block produces a complex output, the Complex to Real-Imag block converts the complex-valued signal to real and imaginary components.
(a)
(b)
Figure 5.13 Sample plots from the two scopes when both I and Q inputs are sine waves. (a) Output
from Real Scope; (b) output from Imag Scope.
5.3 Open-Ended Design Project: Automatic Frequency Offset Compensator 145
Figure 5.14 The structure of the digital upconverter (DUC) on the transmitter path, where a baseband I/Q complex signal Iin(t) and Qin(t) is upconverted to the IF band by a carrier frequency of ωc.
On the receiver path, the standard FPGA configuration includes digital down converters (DDCs) implemented with a 4-stage cascaded integrator-comb (CIC) filter, as shown in Figure 5.15. First, the DDC down-converts the received signal from the IF band to baseband. Next, it decimates the signal so that the data rate can be accommodated by the Gigabit Ethernet link and is reasonable for the processing capability of host computers. The complex input signal (at IF) is multiplied by a complex exponential of constant frequency (usually the IF). The resulting signal is also complex and centered at 0 Hz. Then the signal is decimated by a factor M. The decimator can be treated as a lowpass filter followed by a downsampler. In this structure, the coordinate rotation digital computer (CORDIC) [9], also known as the digit-by-digit method and Volder's algorithm, is a simple and efficient algorithm for calculating hyperbolic and trigonometric functions. It is commonly used when no hardware multiplier is available, as the only operations it requires are addition, subtraction, bit shifts, and table lookups.
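CORDIC itself is compact. The following Python sketch of rotation-mode CORDIC (Volder's algorithm) computes cos and sin using only additions, subtractions, a small arctangent table, and scalings by 2⁻ⁱ that stand in for the hardware bit shifts; it is an illustration of the algorithm, not the USRP's actual FPGA implementation:

```python
import math

def cordic_cos_sin(theta, iterations=32):
    """Rotation-mode CORDIC for |theta| <= pi/2. Each step rotates by
    +/- atan(2^-i); in hardware the 2^-i scalings are bit shifts and
    the atan values come from a small lookup table."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Aggregate gain of all the micro-rotations; applied once at the end.
    gain = 1.0
    for i in range(iterations):
        gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * gain, y * gain

c, s = cordic_cos_sin(math.pi / 6)  # approximately (0.8660..., 0.5)
```

Each iteration adds roughly one bit of precision, which is why a few dozen shift-and-add stages suffice for the mixing operation in the DDC.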
Given this insight into the structure of the DUC and DDC, we can now perform the corresponding mathematical derivations.
5.3.1 Introduction
Starting from this chapter, there will be several open-ended design projects involv-
ing the implementation of an advanced digital transmission/reception system using
SDR. These projects are related to what we have learned and done in each chapter,
but require additional thinking and investigation. For each design project, a poten-
tial solution will be provided. However, there is no single solution for these projects,
so please do not feel constrained to this solution. You are highly encouraged to
propose your own solution and try it out. Each reader is expected to come up with
his/her own innovative design.
Figure 5.15 The structure of the digital downconverter on the receiver path, where the input RF signal I′(t) and Q′(t) is first down-converted to the IF band by a carrier frequency of ωc, and then decimated by a factor M.
5.3.2 Objective
Although every communications system will need to compensate for frequency offsets
introduced by nonideal RF crystals, it is difficult and often impractical to compensate
for this manually. Full communications systems often include automatic frequency
offset compensation that can correct this nonideality without end-user intervention.
This feature is essential for devices that must communicate with other devices with
unknown or changing frequency offsets. The objective of this project is to design and
implement a software-defined radio (SDR) communication system capable of auto-
matically calculating the frequency offset between two USRP platforms.
In Section 5.2.1, we have already found the frequency offset between two USRPs manually, namely, by comparing the FFT of the transmitted and received signals. In this project, let us determine this offset automatically. In other words, in the receiver model, the only thing we need to do is hit the “start simulation” button, which will result in the model providing us with the value of the frequency offset.
5.3.3 Theoretical Background
Before applying any method to the USRP boards, it is highly recommended to test the method with a Simulink-only model, without the USRP transmitter and receiver blocks, to see whether it works. This allows the approach to be examined under ideal conditions before actually testing it in the lab setting.
Figure 5.16 The frequency offset of the Simulink model can be introduced by filling out the Phase
offset and Frequency offset parameters of the Phase/Frequency Offset block.
Suppose the FFT index found in the previous step is idx, the frame size used in this model is Fsize, and the FFT length of the Magnitude FFT block is N; then the actual frequency offset in Hz can be calculated by:

∆f = (idx × Fsize)/(N × 2)

or, in terms of the decimation factor M:

∆f = (idx × Fsize)/(N × M)
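The measurement itself reduces to locating the FFT peak and converting its bin index to Hz. The following Python sketch illustrates the idea, with a direct DFT standing in for the Magnitude FFT block; the bin spacing sample_rate/N used here is the generic relation, while the exact constants in the Simulink model depend on its frame size and decimation factor:

```python
import cmath
import math

def dft_peak_index(samples):
    """Index of the largest-magnitude bin of a direct O(N^2) DFT,
    standing in for the Magnitude FFT block of the Simulink model."""
    n = len(samples)
    def mag(k):
        return abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n)))
    return max(range(n), key=mag)

def offset_hz(idx, sample_rate, n):
    """Convert a peak bin index to Hz using the bin spacing
    sample_rate / n; bins above n/2 represent negative frequencies."""
    if idx > n // 2:
        idx -= n
    return idx * sample_rate / n

# A complex tone at +3 kHz sampled at 64 kHz over n = 64 samples:
fs, n, f0 = 64e3, 64, 3e3
tone = [cmath.exp(2j * math.pi * f0 * t / fs) for t in range(n)]
print(offset_hz(dft_peak_index(tone), fs, n))  # 3000.0
```

In an automatic compensator this estimated offset would then be fed back to retune the center frequency of the transmitter or receiver.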
The techniques discussed above all happen on the receiver side. On the transmitter side, additional blocks may be needed in the model. Of course, we are not constrained to any particular blocks; we can use all the blocks available in Simulink. If necessary, we can also use MATLAB to implement our method.
Q 1. When you are done with your design, compare your results with what you obtained in Section 5.2.1. Are they identical to each other? If not, which one do you think is more accurate? Why?

Q 2. Using siggen.mdl, you can generate various frequency offsets over a wide frequency band. Record these offset values, as well as the results provided by your design. Over which frequency band do they match each other well? Can you explain the reason?
5.4 Chapter Summary
This chapter serves as a starting point for SDR experimentation. In this chapter, we first implement a bare-bones communication system in Simulink, where we apply two error-correction techniques, namely, repetition coding and interleaving. Then, several experiments with the USRP hardware familiarize us with this platform, highlighting the frequency offset and I/Q data of the USRP. In the end, the open-ended design project helps determine the frequency offset between two USRP boards.
5.5 Problems
(a) What is the probability that a single received symbol accidentally gets
decoded incorrectly?
(b) What is the probability that the repetition decoder yields an output that is incorrect?
(c) What would be the impact on the error performance of the repetition decoder when the noise standard deviation of the AWGN channel is increased to σ = 1?
Bn = In + In−1 (5.5)
at a rate 1/T symbols per second. The sequence {In} consists of binary digits se-
lected independently from the alphabet {−1, +1} with equal probability. Hence,
the filtered signal has the form:
v(t) = ∑_{n=−∞}^{∞} Bn g(t − nT),  T = 1/(2W) (5.6)
(a) Sketch the signal space diagram for v(t) and determine the probability of
occurrence of each symbol.
(b) Determine the autocorrelation of the sequence {Bn}.
(c) Solve for the power spectral density (PSD) of the sequence {Bn}.
(d) Given that g(t) = u(t) − u(t − T), where u(t) is a unit step function, solve for
the PSD of the output signal v(t).
P(t/T) ↔ T sinc(Tf) (5.7)
(a) Sketch the signal space diagram for the QPRS signal and determine the
probability of occurrence of each symbol.
(b) Determine the autocorrelation of vc(t), vs(t), and v(t).
(c) Solve for the power spectral density (PSD) of vc(t), vs(t), and v(t).
4. Block Interleaving Refer to Section 5.1.2.1 for the definition of block interleav-
ing, and answer the following questions:
(a) Show how a sequence of bits labeled b1 through b20 can be block in-
terleaved when N = 5 and M = 4. What is the resulting interleaved
sequence?
(b) If the interleaved sequence experiences an error burst lasting five bit
periods, show how the error burst is dispersed in the de-interleaved
sequence.
(c) What is the amount of end-to-end delay caused by the block interleaving
and de-interleaving processes?
5. Convolutional Interleaving Refer to Section 5.1.2.2 for the definition and block
diagram of convolutional interleaving, and answer the following questions:
(a) Show how a sequence of bits labeled b1 through b30 can be interleaved
when N = 4. What is the resulting interleaved sequence?
(b) If the interleaved sequence experiences an error burst lasting five bit
periods, show how the error burst is dispersed in the de-interleaved
sequence.
(c) What is the amount of end-to-end delay caused by the interleaving and
de-interleaving processes?
Figure 5.20 The spectrum of transmitted signal x(t), and two carrier waves.
References
In this chapter, we will introduce several techniques for designing and implementing
two different receiver structures commonly used in the creation of a digital com-
munication system, as well as assess their performance observed during over-the-air
transmission. At the same time, we will show how to construct a series of ortho-
normal basis functions that can be combined in order to produce a wide range of
signal waveforms that could be employed by a digital transmitter and receiver. In
the experimental portion of this chapter, we will implement two different modula-
tion structures and then observe their performance. Finally, to wrap up this chapter,
we will develop a frame synchronization design for the open-ended design project.
Recall from Sections 4.7.1 and 4.7.2 the design of a maximum-likelihood receiver
structure based on a matched filter realization, as well as a correlator-based ap-
proach. Using the theory introduced in Section 4.6, we will now proceed to build
our own implementation for the matched filter and correlator-based receivers. Spe-
cifically, we will implement a two-step approach, where the first step creates the
observation vector from the received signal waveforms, followed by the decision-
making process in the second step. For the purpose of this experiment, we will use
PAM transmission and an AWGN channel to validate the design presented in this
chapter, as shown in Figure 6.1.
154 Receiver Structure and Waveform Synthesis of a Transmitter and a Receiver
Figure 6.1 The received signal x(t) is generated by a PAM transmission and an AWGN channel.
Let us download and open the correlator.m file. The code has been partially
implemented, and we will finish the rest of the code with the following steps and
plot the results.
For the first part of the code, let us first start off this design by defining and
implementing the four equiprobable transmitted signals specified in problem 1 of
Section 6.5, s1(t), s2(t), s3(t), s4(t), as shown in Figure 6.3. Each of them corresponds
to one of the four transmitted symbols mi, i = 1, 2, 3, 4. Since these four transmitted
symbols are of equal probability, these four transmitted signals are also equiprobable.
In this part, {si(t)} are given in the form of time domain signals. As specified by correlator.m, the duration of each signal is 3 s, and the number of symbols is 100, defined by the number variable, so we should set the transmission time to 300 s in order to ensure that 100 symbols are transmitted and received in each simulation loop.
Figure 6.2 A bank of correlators operating on the received signal x(t) to produce the observation
vector X that will be used for a correlator-based detector.
6.1 Software Implementation 155
Figure 6.3 The waveform of four equiprobable transmitted signals, s1(t), s2(t), s3(t), and s4(t). Each
of them lasts for 3s, and corresponds to one of the transmitted symbols.
Figure 6.4 A sample plot of s(t) generated by 10 transmitted symbols mi.
Next, let us create an AWGN channel that will be used to add zero-mean white Gaussian noise of variance σ² to the transmitted signal s(t), producing the received signal x(t) = s(t) + n(t), where n(t) denotes the noise.
Q Plot the time domain representations of the input signal s(t) and the output signal x(t) of the channel for several different noise variances. Explain how the noise could potentially impair the successful decoding of the intercepted signal at the receiver.
We can use randn in MATLAB to generate the Gaussian noise. For instance, to create 100 zero-mean Gaussian values with standard deviation 2, we can use the following command: 2.*randn(100,1);
Once the received signal waveforms have been implemented in MATLAB, let us proceed to define the orthonormal basis functions {φm(t)} derived in Problem 1 of Section 6.5. This step is similar to the one in which {si(t)} were defined; just make sure that the vectors for {si(t)} and {φm(t)} have the same length.
In the end, let us implement an “integrate-and-dump” block in order to obtain the observation vector X, whose elements are given by correlating the received signal with each basis function:

Xm = ∫_0^T x(t) φm(t) dt
Figure 6.5 The operations done by an integrate-and-dump block. Notice how the integrator accu-
mulates the input waveform across the time period T before it resets (i.e., dumps, in order to repeat
the process again).
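Numerically, the integrate-and-dump operation is just a Riemann-sum approximation of the correlation integral over one symbol period. A Python sketch, using a hypothetical constant input and a rectangular basis function as the illustration:

```python
import math

def integrate_and_dump(x, phi, dt):
    """Approximate X_m = integral over [0, T] of x(t) * phi_m(t) dt by a
    Riemann sum; x and phi are sampled at the same instants, dt apart."""
    return sum(xs * ps for xs, ps in zip(x, phi)) * dt

# Hypothetical example: a constant signal of amplitude 2 over T = 3 s,
# correlated against the rectangular basis function phi(t) = 1/sqrt(T).
T, dt = 3.0, 0.01
n = int(T / dt)
x = [2.0] * n
phi = [1.0 / math.sqrt(T)] * n
X1 = integrate_and_dump(x, phi, dt)  # approximately 2*sqrt(3)
```

After each symbol period the accumulator is reset (“dumped”) and the sum starts over for the next symbol, exactly as Figure 6.5 depicts.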
Q When you have done all these steps, test your implementation with a transmission of duration 300 s, randomly consisting of one of the four equiprobable signals of duration T = 3 s each, with zero-mean white Gaussian noise of variance 0.5 added. Assume perfect synchronization between the received signal and the communications system. Plot each of the elements of the observation vector X using the stem command in MATLAB.
Figure 6.6 A maximum-likelihood (ML) decoder system operating on the observation vector X from Section 6.1.1 to produce an estimate m̂ of the transmitted symbol mi that minimizes the average probability of symbol error. Notice how the element-by-element product of the vectors X and si followed by an accumulator is equal to the dot product XTsi.
In order to get started with this stage of the implementation, let us download
and open the decoder.m file. Although this code is incomplete, it will serve as a
starting point for the rest of this experiment.
For the first part of the code, let us define the four vector representations of the signals {si(t)}, which are denoted as sv in the code and have been derived in Problem 1 of Section 6.5. Each signal vector svi should be a 1 × 3 vector.
As long as we have the svi vectors, we can calculate the energy of each signal using the following expression:

Ei = ||svi||² = ⟨svi, svi⟩ = ∑_{j=1}^{3} svij², i = 1, 2, 3, 4 (6.4)

where ⟨svi, svi⟩ is the inner product of svi with itself and svij is the jth component of the signal vector.
Given the energy of each signal, let us proceed to implement the accumulator and subtraction shown in Figure 6.6. The accumulator adds up all the element-by-element products between X and si.
Q Plot the output of the accumulator for each of the branches. Provide your observations and explain.
sk = arg min_{si} ||xj − si||² = arg min_{si} (xj · xj − 2 xj · si + si · si)

where xj · xj is common to all decision metrics for different si, so we can omit it, thus yielding:

min_{si} (−2 xj · si + si · si) = max_{si} (2 xj · si − si · si) = max_{si} 2 (∫_0^T xj(t) si(t) dt − 0.5 ∫_0^T si²(t) dt) (6.6)
Therefore, it is necessary to subtract half of the signal energy in each branch. Note
that if all energy values are the same for each possible signal waveform, we can
dispense with the energy normalization process since this will have no impact on
the decision making. In the end, select the largest branch and decode it to produce
the estimate m̂.
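The decision rule of equation (6.6), correlate, subtract half the signal energy, and pick the largest branch, can be sketched in Python as follows; the signal vectors and the observation below are hypothetical illustrations, not values from the text:

```python
def ml_decode(x, signal_vectors):
    """Pick the branch maximizing x . s_i - E_i / 2, per eq. (6.6);
    returns the index of the decided signal."""
    best_index, best_metric = 0, float("-inf")
    for i, sv in enumerate(signal_vectors):
        energy = sum(c * c for c in sv)                 # E_i = ||sv_i||^2
        metric = sum(a * b for a, b in zip(x, sv)) - 0.5 * energy
        if metric > best_metric:
            best_index, best_metric = i, metric
    return best_index

# Hypothetical 1 x 3 signal vectors and a noisy observation near sv[2]:
sv = [[1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
print(ml_decode([0.9, 1.1, 0.1], sv))  # 2
```

When the signal energies differ, omitting the 0.5 Ei term biases the decision toward higher-energy signals, which is exactly why the subtraction in each branch matters.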
There are several factors that can impact the performance of the entire system, such
as limits of integration, energy subtraction, and SNR of the channel. Energy sub-
traction has been discussed before. As for the other two factors, if we integrate over
a shorter duration than the symbol length, we are correlating the received symbol
with a smaller portion of the transmitted symbol. If we increase the SNR value, it
implies that the transmit power is also increased.
additive white Gaussian noise (AWGN) channel, and a four-branch receiver using
correlation in each branch to help determine which symbol was transmitted every
T = 1 second. Let us assume perfect synchronization in this system.
Q Given these factors, try out the following changes to the system and compare them with the original results:

1. In Section 6.1.1, do not integrate over the entire period T; integrate only until 0.75T. Plot both the estimate m̂ and the actual transmitted symbol mi and compare with the original plot.

2. In Section 6.1.2, do not subtract the energy of si(t) from each branch. Plot both the estimate m̂ and the actual transmitted symbol mi and compare with the original plot.

5. Combine your implementation from Sections 6.1.1 and 6.1.2. Change the parameters of the AWGN channel in Section 6.1.1 to make the signal-to-noise ratio (SNR) range from 10⁻⁵ to 10⁻¹. Compare the m̂ in Section 6.1.2 with the input x in Section 6.1.1 to obtain the bit error rate. Plot a BER curve. NOTE: In order to get an accurate BER curve, you may need a large number of transmitted signals, especially when the SNR value is high.
In order to prepare for our implementation, let us download and open the integrate_and_dump.mdl file, which is shown in Figure 6.7. In this Simulink model,
we can find three important components that will be used in our own implemen-
tation. Note that the Integrate and Dump block accumulates samples according
to the signal possessing the most samples per unit time. In this case, the Gaussian
Noise Generator is producing 100 samples per 1 second duration. We can run the
experiment for different pulse durations T (the default is T = 1 second).
Now let us create a new Simulink model that will contain our design for the four-
branch correlator-based receiver. First, implement a transmitter that can generate
each of the four waveforms shown in Figure 6.17. The Multiport Switch block can
be used to determine which waveform is transmitted for each symbol period of
time.
Next, pass this signal stream through an AWGN channel in order to corrupt
the signal with noise. Vary the amount of noise introduced to the signal stream and
observe the shape of the received signal r(t). Note that other channel effects such as
fading can also corrupt the signal. However, we are only focusing on the AWGN
in this exercise.
Given the corrupted intercepted signal r(t), we will conduct the following steps to create the receiver: First, for each branch, we will repeat si(t) indefinitely such that it aligns with the intercepted signal r(t). One approach is to use the Repeating Sequence block in the basic Simulink Sources blockset. Then, multiply the repeating si(t) waveform in each of the branches with the intercepted signal r(t), and feed the output of this multiplication into the Integrate and Dump block.
As discussed before, due to the unique symbol energies, the system must compensate for the differences in each branch in order to operate properly. Therefore, take the output of the Integrate and Dump block and subtract from it the energy of the corresponding waveform si(t). Since the receiver knows all the waveforms ahead of time, simply calculate each symbol energy manually and use a Constant block and an addition block to subtract the energy from the integrated signal stream. The relevant signal energy is scaled and subtracted from each branch, thus producing a measure of the difference between the received signal and each of the waveforms at each sample instant.
Q: What do you notice about the relative energy levels for each of the branches?
Finally, implement the decision-making block that selects the maximum value at each time instant T = 1 second. The ML decoder simply chooses the symbol with
the highest correlation as the estimate of the transmitted symbol, which is then
converted to binary and compared to a binary version of the transmitted symbol.
Note that there is a delay of one sample interval in the decoder output. In order to
calculate the error rate, the binary input also needs to be delayed by one sample
interval to align with the decoded symbol.
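The branch structure just described (correlate, compensate for the symbol energy, then decide) can be sketched outside Simulink as well. In the Python sketch below, the four waveforms are hypothetical sinusoids of unequal energy standing in for Figure 6.17, and the energy compensation uses the ML metric ⟨r, si⟩ − Ei/2:

```python
import math
import random

random.seed(7)

FS = 100  # samples per symbol period T = 1 second
t = [n / FS for n in range(FS)]
# Four hypothetical waveforms of unequal energy (stand-ins for Figure 6.17).
waveforms = [
    [(0.5 + 0.5 * k) * math.sin(2 * math.pi * (k + 1) * ti) for ti in t]
    for k in range(4)
]
energies = [sum(x * x for x in s) / FS for s in waveforms]

def receive(r):
    """One branch per candidate waveform: correlate (integrate-and-dump),
    subtract half the symbol energy (ML metric <r, s_i> - E_i/2),
    and pick the branch with the largest metric."""
    metrics = []
    for s, E in zip(waveforms, energies):
        corr = sum(ri * si for ri, si in zip(r, s)) / FS
        metrics.append(corr - E / 2)
    return max(range(4), key=lambda i: metrics[i])

# Send each of the four symbols once through an AWGN channel.
sigma = 0.2
errors = sum(
    receive([x + random.gauss(0, sigma) for x in waveforms[m]]) != m
    for m in range(4)
)
print("symbol errors out of 4:", errors)
```

Because the branch energies differ, omitting the energy term biases the decision toward the high-energy waveforms.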
Now that we have created the transmitter and the receiver, let us calculate the bit error rate of this receiver design for a range of noise variance values, and plot the BER curve of this model down to 10−3. A MATLAB script can be created to set various values for the SNR, which are then used to control the Simulink model and determine the BER parameters. The BER block can be used to terminate the simulation for each iteration, with the termination condition set to stop after either a certain number of errors has been generated or a certain number of symbols has been transmitted.
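The script-driven sweep described above can be mimicked in a few lines. This Python stand-in estimates BER for simple antipodal signaling (an illustrative assumption; it does not drive the Simulink model) and applies the same stop-after-errors-or-symbols rule:

```python
import random

random.seed(1)

def ber_point(sigma, target_errors=100, max_bits=200_000):
    """Estimate BER for antipodal +/-1 signaling in AWGN, stopping after
    either target_errors bit errors or max_bits transmitted bits --
    mirroring the stop criteria described for the BER block."""
    errors = bits = 0
    while errors < target_errors and bits < max_bits:
        tx = random.choice((-1, 1))
        rx = tx + random.gauss(0, sigma)
        errors += (rx > 0) != (tx > 0)
        bits += 1
    return errors / bits

# Sweep noise levels; BER falls as the noise variance shrinks.
for sigma in (1.0, 0.7, 0.5):
    print(f"sigma={sigma}: BER ~ {ber_point(sigma):.4f}")
```

At low noise levels, errors are rare, so the error-count stop criterion is what keeps the estimate statistically meaningful.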
Q: Suppose you removed the energy subtraction for all the branches. Does the BER performance of the system change?
In the previous section, we studied how to design actual digital receivers in software
based on the ML concept. However, there is significant value in evaluating these
designs in actual hardware as well in order to provide insights that only real-world
implementations can provide. Therefore, we will start off with the design of two different modulation schemes employed in a basic transmitter and receiver implementation. From this experiment, we will get our first taste of how the simulation world differs from the actual hardware world.
6.2 USRP Hardware Implementation
6.2.1.1 Transmitter
The transmitter has a basic structure, which is made up of four blocks, as shown
in Figure 6.8.
The purpose and special parameters of each block are as follows: The Signal From Workspace block is the source of this model. In this block, we use a valid MATLAB expression ([1 0]') to specify the Signal parameter, which generates a repeating '10' pattern. We use this signal since it is easy to observe. In the Sample time parameter, there is a variable named BitRate. This variable is specified by callback functions, which will be introduced shortly. We set Samples per frame to 179, because the input frame size of the SDRu Transmitter block is set to 358, and the Raised Cosine Transmit Filter has an upsampling factor of 2.
In a DBPSK system, the input binary sequence is differentially modulated us-
ing a DBPSK modulator. In Simulink, this operation is conducted by the DBPSK
Modulator Baseband block. The output of this block is a baseband representation
of the modulated signal.
The Raised Cosine Transmit Filter block upsamples and filters the input signal
using a square root raised cosine FIR filter. The icon of the block shows the impulse
response of the filter. Similar to the Signal From Workspace block, the variable
named oversampling is defined in callback functions.
The SDRu Transmitter block is the sink of this model, which can be obtained
by typing sdrulib in the command window. Please note that the input frame size to
this block does not have to be 358. It can be any number you would like.
6.2.1.2 Receiver
The receiver behaves in an inverse manner relative to the transmitter, but since more operations are involved, it is more complicated than the transmitter. The receiver model is made up of three parts, as shown in Figure 6.9.
Figure 6.8 The structure of DBPSK transmitter, where a binary source is modulated using DBPSK
and filtered using raised cosine pulse shaping filter.
Figure 6.9 The structure of a DBPSK receiver, where the transmitted signal is received by the SDRu
Receiver block and fed into the DBPSK Receiver subsystem. Frequency offset between two USRP2s
should be compensated to ensure signal reception.
The first part is the frequency offset compensator. We should use the frequency
offset found in Section 5.2.1 in this part. The second part is the SDRu Receiver
block and the third part is an enabled subsystem, where all the main operations of
this model reside, as shown in Figure 6.10.
The purpose and special parameters of each block inside the enabled subsystem are as follows: Since the output of the SDRu Receiver block is sample-based, we use the Frame Conversion block to convert it to frame-based. This block does not make any changes to the input signal other than its sampling mode.
To complete the pulse shaping filter pair begun by the Raised Cosine Transmit Filter, the Raised Cosine Receive Filter block filters the input signal using a square root raised cosine FIR filter. The icon of the Raised Cosine Receive Filter block shows the impulse response of the filter.
One of the most significant advantages of a DPSK modulation scheme is that we
do not need to worry about the carrier recovery, which estimates and compensates
for frequency and phase differences between a received signal’s carrier wave and the
receiver's local oscillator for the purpose of coherent demodulation. However, we still need to implement some form of timing recovery. The purpose of the timing recovery is to obtain symbol synchronization. Two quantities must be determined by the receiver to achieve symbol synchronization: the first is the sampling frequency, and the other is the sampling phase. The Mueller-Muller Timing Recovery block recovers the symbol timing phase of the input signal using the Mueller-Muller method.
Figure 6.10 The structure of the DBPSK Receiver subsystem, where a symbol is filtered using a raised cosine pulse shaping filter and demodulated using DBPSK. The timing recovery is conducted by the Mueller-Muller Timing Recovery block.
This block implements a decision-directed, data-aided feedback method that re-
quires prior recovery of the carrier phase.
Corresponding to the DBPSK Modulator Baseband on the transmitter side, the
DBPSK Demodulator Baseband block demodulates a signal that was modulated
using the differential binary phase shift keying method. The input is a baseband
representation of the modulated signal.
In the end, the To Workspace and Display blocks serve as the sinks of this model. Using these two blocks, we can observe the received data in two ways: we can either observe a portion of the data directly through Display, or save a larger amount of data to the workspace through To Workspace.
Figure 6.11 Create model callback functions using the Callbacks pane of the model's Model Properties dialog box. In our DBPSK example, several parameters are defined in InitFcn.
speed of the USRP2 board, such that the model cannot be executed in real time.
Here is a collection of performance improvements we can make in our Simulink
models to approach, if not achieve, real time:
Applying several or all of these approaches, we should be able to improve the real-time processing ability of our Simulink models with USRP blocks. In addition, as of MATLAB R2012b, Simulink includes a so-called Performance Advisor, which can automatically improve a model's performance.
In the previous experiments in this chapter, we have sent digital information consisting of ones and zeros. However, in real-world situations, we usually need to send messages that are made up of packets or frames of data. Therefore, knowing where each frame starts is a crucial step in performing any form of data reception. In this section, we will develop a frame synchronization mechanism in an open-ended manner using the principle of correlation.
Figure 6.12 Autocorrelation function of the Barker-7 code, which exhibits a single large correlation peak and low sidelobes.
At the beginning, in the Barker Code Generator block, let us specify the length of the Barker code, and this block will generate the corresponding codeword, whose elements are either 1 or –1. However, the Bernoulli binary source produces either 1 or 0. In order to attach the Barker code to the frame, we need to change its format using the Bipolar to Unipolar Converter block. Then, the Barker code is attached to the beginning of each frame using Matrix Concatenate to form a marker and frame pair, which is known as a packet. In the middle, the Delay block is used to simulate a real channel, which incurs a delay of 5 samples. In the end, the Error Rate Calculation block uses the delay value we have found to compensate for the computation delay and verify whether we have found it correctly. If the value is correct, the error rate should be 0.
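The peak-picking step inside the delay computation can be sketched as follows; the payload bits are arbitrary illustrative values, and the channel is modeled as a pure 5-sample delay:

```python
# Correlate the received stream against the bipolar Barker code and take
# the lag of the correlation peak as the channel delay estimate.
barker7 = [1, 1, 1, -1, -1, 1, -1]           # Barker-7 codeword (+/-1 format)
payload = [1, 0, 1, 1, 0, 0, 1, 0]           # arbitrary frame bits (example)
# Bipolar-to-unipolar conversion, then prepend the marker to build a packet.
packet = [(b + 1) // 2 for b in barker7] + payload

delay = 5                                    # channel delay being simulated
received = [0] * delay + packet              # delayed copy of the packet

def find_delay(rx, code):
    """Slide the bipolar code over the received bits (mapped back to +/-1)
    and return the lag with the largest correlation."""
    bipolar_rx = [2 * b - 1 for b in rx]
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(rx) - len(code) + 1):
        val = sum(c * r for c, r in zip(code, bipolar_rx[lag:]))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

print("estimated delay:", find_delay(received, barker7))
```

The low sidelobes of the Barker autocorrelation (Figure 6.12) are what make the peak unambiguous.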
Figure 6.13 Simulink model that realizes frame synchronization using Barker code.
170 Receiver Structure and Waveform Synthesis of a Transmitter and a Receiver
Figure 6.14 Two key subsystems in Simulink model mFindFrameSync.mdl. (a) Compute Delay
subsystem, which calculates the delay of the channel by detecting the peak of the correlation of
the received data and the Barker code. (b) Consecutive Delay Comparison subsystem, which
checks whether the calculated delay remains the same for several iterations.
In addition, there are two key subsystems in this model: Compute Delay (Align Signals → Enabled Delay Computation → Compute Delay), as shown in Figure 6.14(a), calculates the delay of the channel by detecting the peak of the correlation of the received data and the Barker code. Consecutive Delay Comparison (Align Signals → Consecutive Delay Comparison), as shown in Figure 6.14(b), checks whether the calculated delay remains the same for numDelayCalcs iterations. If it does, the Enabled Delay Computation subsystem is disabled, so that we do not need to recalculate the delay over and over. This subsystem guarantees the correctness of the model's result and also improves its efficiency.
Figure 6.15 Simulink model that realizes frame transmission using Barker code.
Compared to the basic model, there are three different blocks. First, in the Signal From Workspace block, let us specify a source from the workspace in order to use it as our test message. For example, suppose we would like to transmit the test message "Hello world" to the awaiting receiver. Since "Hello world" is in ASCII character format, it needs to be converted into binary, the format the computer can process, before it can be transmitted. We run charToBitsAndBack.m to obtain the corresponding bit stream and save it in a variable called "sBit" so that it can be used as the signal source. In this block, "Samples per frame" should be the same as the length of "sBit." Although the length of the bit stream for "Hello world" is 77, 10 zeros are appended to the end of this bit stream, so the length of "sBit" becomes 87.
Next, in the MATLAB Function block, we use a MATLAB function to extract the useful information from a frame. In this example, delay+13+1 is the position of the first payload bit in the stream and delay+13+77 is that of the last. The number 13 corresponds to the length of the Barker code, and 77 is the length of the bit stream for "Hello world." Therefore, if we want to transmit some other sentence, or we want to use a Barker code of some other length, these values need to be modified.
Finally, in the To Workspace block, we save the useful bits to the workspace, and these bits can be converted to ASCII characters using the second half of charToBitsAndBack.m. The converted results are illustrated in Figure 6.16, which shows that we have successfully received the transmitted message.
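Since charToBitsAndBack.m itself is not reproduced in the text, the sketch below performs the analogous conversions in Python, assuming 7-bit ASCII codes (11 characters × 7 bits = 77 bits) and the unipolar Barker-13 marker; the function and variable names are ours, not the script's:

```python
def chars_to_bits(msg):
    """Convert a string to a 7-bit-per-character ASCII bit stream."""
    bits = []
    for ch in msg:
        bits.extend((ord(ch) >> i) & 1 for i in range(6, -1, -1))  # MSB first
    return bits

def bits_to_chars(bits):
    """Inverse conversion: group the stream into 7-bit ASCII codes."""
    chars = []
    for i in range(0, len(bits) - 6, 7):
        code = 0
        for b in bits[i:i + 7]:
            code = (code << 1) | b
        chars.append(chr(code))
    return "".join(chars)

msg = "Hello world"
s_bit = chars_to_bits(msg) + [0] * 10        # pad to 87 bits, as in the text
print(len(chars_to_bits(msg)), len(s_bit))   # 77 87

# Receiver side: given the estimated channel delay and the 13-bit marker,
# the payload occupies bits delay+13+1 through delay+13+77 (1-indexed).
barker13 = [1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1]   # unipolar Barker-13
delay = 5
stream = [0] * delay + barker13 + s_bit
payload = stream[delay + 13 : delay + 13 + 77]
print(bits_to_chars(payload))                # Hello world
```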
This chapter covers two basic receiver structures commonly used in the creation of a digital communication system and assesses the performance observed during over-the-air transmission. It also shows how to construct a series of orthonormal basis functions that can be combined to produce a wide range of signal waveforms. In the hardware experimental part of this chapter, two different modulation schemes are implemented using USRP hardware. Finally, a frame synchronization approach is designed for the open-ended design project, which enables packet transmission.
6.5 PROBLEMS
1. [Gram-Schmidt Orthogonalization] Carry out the Gram-Schmidt orthogonalization procedure on the signals in Figure 6.16 in the order s3(t), s1(t), s4(t), s2(t) and thus obtain a set of orthonormal functions {fn(t)}. Then, determine the vector representation of the signals {sn(t)} by using the orthonormal functions {fm(t)}. Also, determine the signal energies.
2. [Matched Filter Realization] Consider the signal s(t) = (A/T) t cos(ωc t) for 0 ≤ t ≤ T.
(a) Determine the impulse response of the matched filter for the signal.
(b) Determine the output of the matched filter at t = T.
(c) Suppose the signal s(t) is passed through a correlator that correlates the input s(t) with s(t). Determine the value of the correlator output at t = T. Compare your result with that in part (b).
3. [Maximum Likelihood Detection] A certain digital baseband modulation scheme uses the pulse shown in Figure 6.18 to represent binary symbol "1" and the negative of this pulse to represent binary "0." Derive the formula
4. [Receiver Design] A pair of orthogonal signals s1(t) and s2(t) over the observation interval 0 < t < 3T are shown in Figure 6.19. The received signal is defined by
x(t) = sk(t) + w(t), 0 < t < 3T, k = 1, 2, (6.10)
where w(t) is additive white Gaussian noise of zero mean and power spectral density N0/2.
(a) Design a receiver that decides in favor of signal s1(t) or s2(t), assuming that these two signals are equiprobable.
(b) Calculate the average probability of symbol error incurred by this receiver for E/N0 = 4, where E is the signal energy.
5. [Correlator Realization] A binary digital communication system employs the signals
s0(t) = 0, 0 ≤ t ≤ T,
s1(t) = A, 0 ≤ t ≤ T, (6.11)
for the transmission of information. The demodulator is implemented as a bank of correlators followed by samplers that sample the outputs of the correlators at t = T.
(a) Determine the optimum detector for an AWGN channel and the optimum threshold, assuming that the signals are equiprobable.
(b) Determine the probability of error as a function of the SNR. How does the above signaling scheme compare with antipodal signaling?
6. [Error Bound] The signal constellation for a communication system with 16
equiprobable symbols is shown in Figure 6.20. The channel is AWGN with
noise power spectral density of N0/2.
(a) Using the union bound, find a bound in terms of A and N0 on the error
probability for this channel.
(b) Determine the average SNR per bit for this channel.
(c) Express the bound found in part (a) in terms of the average SNR per bit.
(d) Compare the power efficiency of this system with a 16-level PAM system.
Chapter 7
Multicarrier Modulation and Duplex Communications
Why do digital communication systems use multicarrier modulation? What are the disadvantages of transmitting at a high data rate using a single carrier transmission? What happens if part of the channel is severely attenuated? Multicarrier modulation (MCM) addresses these issues via a "divide-and-conquer" approach. More specifically, MCM transmits data in parallel frequency bands simultaneously rather than in a single large channel. Since dividing the transmission into small bands allows for individual treatment of these subcarriers, a multicarrier transmission scheme becomes robust to fast-fading channels and narrowband interference.
Note that there are several important distinctions between multicarrier transmission in a wired communications environment versus a wireless communications environment. Without loss of generality, we will be focusing in this chapter on a wireless communications environment, but it is important to note that certain assumptions made here are not necessarily optimal for a wireline implementation.
transmit large quantities of information over a short period of time while being
robust to transmission noise. The fundamental unit of information in these digital
communication systems is the bit, which consists of only two values (“on” and
“off,” “1” and “0,” and so on). However, these systems usually map, or modulate,
groups of bits into symbols prior to transmission. At the receiver, each symbol is
compared to a known set of symbols and is demodulated into the group of bits cor-
responding to the closest matching symbol. In the next subsection, we will look at
the modulation scheme that will be employed in this chapter.
where ωk = 2πk/2N is the carrier frequency and 2N is the period of the symbol. Note that we have to limit the possible carrier frequencies to integer multiples of 2π/2N in order to maintain orthogonality between the sinusoidal and cosinusoidal carriers in a digital implementation (see the following derivations).
In this chapter, we will be dealing with rectangular QAM signal constellations, such as those shown in Figures 7.2(a), 7.2(b), and 7.2(c) for 4-QAM, 16-QAM, and 64-QAM, respectively. This basically amounts to saying that a[ℓ], b[ℓ] ∈ {±(2k − 1)E, k = 1, ..., 2^(D/2−1)}, where E is some positive constant that scales the energy of the sent signals and D/2 is the number of bits used to represent the amplitude level of one of the carriers during a symbol.
Figure 7.1 Rectangular QAM modulator (multirate model of a digital implementation), where the carrier signals are cos(ωkn) and sin(ωkn), and their amplitudes are determined by the input bits d[m].
Figure 7.2 Three types of QAM signal constellations. The in-phase values indicate the amplitude of the carrier cos(ωkn), while the quadrature values indicate the amplitude of the carrier sin(ωkn). (a) 4-QAM signal constellation. (b) Rectangular 16-QAM signal constellation. (c) Rectangular 64-QAM signal constellation.
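With E = 1, the amplitude set above can be enumerated directly; this short Python sketch reproduces the per-carrier rails of the three constellations in Figure 7.2:

```python
def rail_levels(D, E=1):
    """Per-carrier amplitude levels {±(2k−1)E, k = 1..2^(D/2−1)}
    for a D-bit-per-symbol rectangular QAM constellation."""
    half = 2 ** (D // 2 - 1)
    pos = [(2 * k - 1) * E for k in range(1, half + 1)]
    return sorted(-p for p in pos) + pos

# 4-QAM (D = 2), 16-QAM (D = 4), and 64-QAM (D = 6) rails:
for D in (2, 4, 6):
    levels = rail_levels(D)
    print(f"{2**D}-QAM rails: {levels} ({len(levels)**2} points)")
```

Each constellation has (number of rails)² points, matching the 4, 16, and 64 symbols of Figure 7.2.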
One of the advantages of QAM signaling is the fact that demodulation is relatively simple to perform. As shown in Figure 7.3, the received signal, r[n], is split into two streams, each multiplied by one of the carriers, cos(ωkn) and sin(ωkn), followed by a summation block (implemented here through filtering by a rectangular window followed by downsampling). This process produces estimates of the in-phase and quadrature amplitudes, â[ℓ] and b̂[ℓ], namely,
â[ℓ] = Σ_{n=2ℓN}^{2ℓN+2N−1} r[n] cos(ωk n)
     = Σ_{n=2ℓN}^{2ℓN+2N−1} a′[n] cos(ωk n) cos(ωk n) + Σ_{n=2ℓN}^{2ℓN+2N−1} b′[n] sin(ωk n) cos(ωk n)
     = Σ_{n=2ℓN}^{2ℓN+2N−1} a′[n] cos(2πkn/2N) cos(2πkn/2N) + Σ_{n=2ℓN}^{2ℓN+2N−1} b′[n] sin(2πkn/2N) cos(2πkn/2N)
     = Σ_{n=2ℓN}^{2ℓN+2N−1} a′[n]/2    (7.2)
and
b̂[ℓ] = Σ_{n=2ℓN}^{2ℓN+2N−1} r[n] sin(ωk n)
     = Σ_{n=2ℓN}^{2ℓN+2N−1} a′[n] cos(ωk n) sin(ωk n) + Σ_{n=2ℓN}^{2ℓN+2N−1} b′[n] sin(ωk n) sin(ωk n)
     = Σ_{n=2ℓN}^{2ℓN+2N−1} a′[n] cos(2πkn/2N) sin(2πkn/2N) + Σ_{n=2ℓN}^{2ℓN+2N−1} b′[n] sin(2πkn/2N) sin(2πkn/2N)
     = Σ_{n=2ℓN}^{2ℓN+2N−1} b′[n]/2    (7.3)
Figure 7.3 Rectangular QAM demodulator, where the received signal, r[n], is split into two streams and each multiplied by carriers, followed by a summation block.
where, due to the orthogonality of the two carriers, the cross terms vanish, leaving the desired amplitude (after some trigonometric manipulation). The bits estimated from â[ℓ] and b̂[ℓ] are then multiplexed together, forming the reconstructed version of d[m], namely d̂[m].
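The cancellation of the cross terms in (7.2) and (7.3) is easy to check numerically. The Python sketch below modulates one symbol with constant, arbitrarily chosen amplitudes a and b, then correlates against the two carriers:

```python
import math

N, k = 8, 3                       # 2N = 16 samples per symbol, subcarrier k
a, b = 3.0, -1.0                  # in-phase and quadrature amplitudes
w = 2 * math.pi * k / (2 * N)

# One modulated symbol: a*cos + b*sin over n = 0 .. 2N-1.
x = [a * math.cos(w * n) + b * math.sin(w * n) for n in range(2 * N)]

# Demodulate: correlate with cos and sin. The cross terms sum to zero and
# the matched terms sum to N*a and N*b (the a'/2 factor of (7.2) summed
# over 2N samples), so dividing by N recovers a and b.
a_hat = sum(xi * math.cos(w * n) for n, xi in enumerate(x)) / N
b_hat = sum(xi * math.sin(w * n) for n, xi in enumerate(x)) / N
print(round(a_hat, 6), round(b_hat, 6))
```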
We have so far dealt with modulation and demodulation in an ideal setting. Thus, one should expect that d̂[m] = d[m] with probability 1. However, in the next subsection we will examine a physical phenomenon that distorts the transmitted signal, resulting in transmission errors.
Figure 7.4 Transmitter of an orthogonally multiplexed QAM system, where several QAM modula-
tors are put in parallel, each with a different carrier frequency.
each modulator came from portions of a high-speed bit stream. This is the principle behind multicarrier modulation.
Orthogonal frequency division multiplexing (OFDM) is an efficient type of multicarrier modulation that employs the discrete Fourier transform (DFT) and inverse DFT (IDFT) to modulate and demodulate the data streams. Since the carriers used in Figure 7.4 are sinusoidal functions of 2πkn/2N, it should come as no surprise that a 2N-point DFT or IDFT can carry out the same modulation, since it also contains summations of terms of the form e^(±j2πkn/2N). The setup of an OFDM system is presented in Figure 7.5. A high-speed digital input, d[m], is demultiplexed into N subcarriers using a commutator. The data on each subcarrier is then modulated into an M-QAM symbol, which maps a group of log2(M) bits at a time. Unlike the representation of (7.1), for subcarrier k we will rearrange ak[ℓ] and bk[ℓ] into real and imaginary components such that the output of the "modulator" block is pk[ℓ] = ak[ℓ] + jbk[ℓ]. In order for the output of the IDFT block to be real, given N subcarriers we must use a 2N-point IDFT, where terminals k = 0 and k = N are "don't care" inputs. For the subcarriers 1 ≤ k ≤ N − 1, the inputs are pk[ℓ] = ak[ℓ] + jbk[ℓ], while for the subcarriers N + 1 ≤ k ≤ 2N − 1, the inputs are the complex conjugates, pk[ℓ] = a2N−k[ℓ] − jb2N−k[ℓ].
The IDFT is then performed, yielding
s[2ℓN + n] = (1/2N) Σ_{k=0}^{2N−1} pk[ℓ] e^{j2πnk/2N}    (7.4)
where this time 2N consecutive samples of s[n] constitute an OFDM symbol, which is a sum of N different QAM symbols.
This results in the data being modulated onto several subchannels. This is achieved by multiplying each data stream by a sin(Nx)/sin(x) frequency response, several of which are shown in Figure 7.6.
The subcarriers are then multiplexed together using a commutator, forming the signal s[n], and transmitted to the receiver. Once at the receiver, the signal is demultiplexed into 2N subcarriers of data, ŝ[n], using a commutator and a 2N-point DFT, defined as
p̄k[ℓ] = Σ_{n=0}^{2N−1} ŝ[2ℓN + n] e^{−j2πnk/2N}    (7.5)
Figure 7.5 Overall schematic of an orthogonal frequency division multiplexing system, where the DFT and IDFT are employed to modulate and demodulate the data streams.
Figure 7.6 Characteristics of orthogonal frequency division multiplexing: frequency response of OFDM subcarriers.
and applied to the inputs, yielding the estimates of pk[ℓ], denoted p̄k[ℓ]. The output of the equalizer, p̂k[ℓ], then passes through a demodulator, and the result is multiplexed together using a commutator, yielding the reconstructed high-speed bit stream, d̂[m].
Figure 7.7 Example of a channel response due to dispersive propagation. Notice the three distinct propagation paths, p1, p2, and p3, that start at the transmitter Tx and are intercepted at the receiver Rx. (a) Impulse response. (b) Frequency response. (c) The process by which dispersive propagation arises.
open field case, the channel impulse response (CIR) would be a delta function, since no other copies would be received by the receiver antenna. On the other hand, an indoor environment would have several copies intercepted at the receiver antenna, and thus its CIR would be similar to the example in Figure 7.7(a). The corresponding frequency response of the example CIR is shown in Figure 7.7(b).
In an xDSL environment, the same principles can be applied to the wireline
environment. The transmitted signal is sent across a network of telephone wires,
with numerous junctions, bridging taps, and connections to other customer appli-
ances (e.g., telephones, xDSL modems). If the impedances are not matched well in
the network, reflections occur and will reach the devices connected to the network,
including the desired receiver.
With the introduction of the CIR, new problems arise in our implementation that need to be addressed. In Section 7.1.4, we will look at how to undo the smearing effect the CIR has on the transmitted signal. In Section 7.1.5, we will look at one technique employed extensively in OFDM that inverts the CIR effects in the frequency domain.
Figure 7.8 The process of adding, smearing, capturing, and removing a cyclic prefix. (a) Add the cyclic prefix to an OFDM symbol. (b) The channel h(n) smears the previous symbol into the cyclic prefix. (c) Remove the cyclic prefix.
186 Multicarrier Modulation and Duplex Communications
Despite the usefulness of the cyclic prefix, there are several disadvantages. First, the length of the cyclic prefix must be sufficient to capture the effects of the CIR. If it is not, the cyclic prefix fails to prevent the distortion introduced by other symbols. The second disadvantage is the amount of overhead introduced by the cyclic prefix. By adding more samples to buffer the symbols, we must send more information across the channel to the receiver. This means that to get the same throughput as a system without the cyclic prefix, we must transmit at a higher data rate.
r̃[n] = Σ_{k=0}^{n−K} h[k] s[n − K − k] + Σ_{k=n−K+1}^{L−1} h[k] s[n − k + 2N − K],    K ≤ n ≤ K + L − 1
r̃[n] = Σ_{k=0}^{L−1} h[k] s[n − K − k],    K + L ≤ n ≤ 2N − 1
From this equation, it is readily seen that, after removal of the cyclic prefix, the received sequence r[n] = r̃[n + K] is
r[n] = Σ_{k=0}^{2N−1} h[k] s[((n − k))_{2N}] = h[n] ⊛_{2N} s[n]    (7.6)
Thus, the received samples, after removal of the cyclic prefix, are just made up of the circular convolution of the sent signal (i.e., 2N samples per symbol) with the channel impulse response h[n]. If one now looks at (7.6) in the frequency domain, it follows that
R[k] = H[k]S[k]
where capital letters represent the 2N-point DFTs of the corresponding sequences. Referring back to Figure 7.5, the 2N-point DFT R[k] of the received samples is already computed and is denoted by p̄k[ℓ].
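This equivalence between the cyclic prefix and circular convolution can be verified numerically; here is a Python sketch with an arbitrary 2N-sample symbol and a short illustrative CIR:

```python
# Adding a cyclic prefix of length K >= L-1 turns the linear convolution
# with the L-tap CIR h[n] into a circular convolution over the 2N-sample
# symbol, which is why R[k] = H[k]S[k] after the DFT.
N2 = 8                                       # 2N samples per OFDM symbol
s = [1.0, -2.0, 0.5, 3.0, -1.0, 0.0, 2.0, -0.5]
h = [1.0, 0.4, 0.2]                          # CIR of length L = 3
K = len(h) - 1                               # cyclic prefix length

tx = s[-K:] + s                              # prepend the cyclic prefix
# Linear convolution through the channel.
rx = [sum(h[j] * tx[n - j] for j in range(len(h)) if n - j >= 0)
      for n in range(len(tx))]
r = rx[K:K + N2]                             # remove the cyclic prefix

# Circular convolution of s and h over 2N samples, for comparison.
circ = [sum(h[j] * s[(n - j) % N2] for j in range(len(h))) for n in range(N2)]
print(all(abs(a - b) < 1e-12 for a, b in zip(r, circ)))
```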
Referring to Figures 7.6 and 7.7(b), if we consider the multiplication of the corresponding frequency samples together, we notice that each of the subcarriers experiences a different channel "gain" H[k]. Therefore, what must be done is to multiply each subcarrier by a gain that is the inverse of the channel frequency response acting on that subcarrier. This is the principle behind per-tone equalization. By knowing what the channel frequency gains are at the different subcarriers, one can reverse the distortion caused by the channel by dividing the subcarriers by them. For instance, if the system has 64 subcarriers centered at frequencies ωk = 2πk/64, k = 0, . . . , 63, then one would take the CIR h[n] and compute its 64-point FFT, resulting in the frequency response H[k], k = 0, . . . , 63. Then, to reverse the effect of the channel on each subcarrier, simply take the inverse of the channel frequency response point corresponding to that subcarrier,
W[k] = 1/H[k]    (7.7)
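Per-tone equalization then reduces to one complex division per subcarrier. Here is a Python sketch using a direct DFT and an illustrative three-tap CIR:

```python
import cmath

def dft(x):
    """Direct M-point DFT (no FFT library, for self-containment)."""
    M = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / M) for n in range(M))
            for k in range(M)]

M = 8
h = [1.0, 0.3, 0.1] + [0.0] * (M - 3)        # CIR zero-padded to M taps
s = [1.0, -1.0, 1.0, 1.0, -1.0, 1.0, -1.0, -1.0]

H = dft(h)
S = dft(s)
R = [Hk * Sk for Hk, Sk in zip(H, S)]        # R[k] = H[k]S[k] (circular channel)
S_hat = [Rk / Hk for Rk, Hk in zip(R, H)]    # apply W[k] = 1/H[k]
print(all(abs(a - b) < 1e-9 for a, b in zip(S_hat, S)))
```

Note that this sketch assumes H[k] is nonzero on every tone; a deeply faded subcarrier makes the inversion noise-sensitive, which is one motivation for the bit- and power-allocation strategies discussed next.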
Figure 7.9 Comparison of variable-rate and constant-rate commutators with equivalent total rate N_Total = Σ_{i=0}^{M−1} N_i. (a) Constant-rate commutator. (b) Variable-rate commutator.
bi = log2(1 + γi/Γ)    (7.8)
where γi is the SNR of subcarrier i (not in dB). Of course, (7.8) gives rise to noninteger numbers of bits; round the resulting bi values appropriately.
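Here is a Python sketch of this bit-loading rule; the 8.8-dB SNR gap and the subcarrier SNRs are illustrative values rather than ones taken from the text, and the rounding is simple truncation:

```python
import math

def bit_allocation(snrs, gap=10 ** (8.8 / 10)):
    """Per-subcarrier bit loading b_i = log2(1 + gamma_i / Gamma), with
    the result truncated to an integer number of bits. The gap value
    (8.8 dB, converted to linear) is an illustrative assumption."""
    return [int(math.log2(1 + g / gap)) for g in snrs]

# Linear (not dB) subcarrier SNRs; strong subcarriers carry more bits.
snrs = [1000.0, 250.0, 60.0, 8.0, 1.0]
print(bit_allocation(snrs))                  # [7, 5, 3, 1, 0]
```

Subcarriers whose SNR falls below the gap receive zero bits, effectively switching them off.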
P(f) = λ − N(f)/|H(f)|²,    f ∈ F
P(f) = 0,    f ∉ F    (7.9)
where N(f) is the power spectral density of the Gaussian noise, H(f) is the transfer function representing a linear channel, and λ is the value for which
∫_F P(f) df = P
with P being the total available transmit power.
Figure 7.10 An illustration of the water filling principle. Notice how power is allocated to each
subcarrier, such that the resulting power would be a constant K.
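The water level λ can be found by bisection once the N(f)/|H(f)|² terms are known. Here is a discrete-subcarrier Python sketch with illustrative values:

```python
def water_fill(noise_over_gain, total_power, iters=60):
    """Bisect for the water level lambda in (7.9): allocate
    P_i = max(lambda - N_i/|H_i|^2, 0) so that the allocations sum to
    the power budget. noise_over_gain holds the N_i/|H_i|^2 terms."""
    lo, hi = 0.0, max(noise_over_gain) + total_power
    for _ in range(iters):
        lam = (lo + hi) / 2
        used = sum(max(lam - v, 0.0) for v in noise_over_gain)
        if used > total_power:
            hi = lam
        else:
            lo = lam
    return [max(lam - v, 0.0) for v in noise_over_gain]

# Quiet subcarriers receive more power; very noisy ones may get none.
alloc = water_fill([0.1, 0.5, 2.5], total_power=1.0)
print([round(p, 3) for p in alloc])          # [0.7, 0.3, 0.0]
```

As in Figure 7.10, each active subcarrier is "filled" up to the common level λ, while subcarriers whose noise-to-gain term exceeds λ are left empty.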
7.2 Software Implementation
Using the theory that has been introduced in Section 7.1, we will now proceed to build our own implementation of one type of MCM transmission system, namely, an OFDM system. Specifically, we will design and prototype an OFDM communication system using MATLAB, followed by its implementation in Simulink, both of which consist of a transmitter, a channel, and a receiver. With these implementations, we can observe the BER performance and the transmission spectrum of the OFDM system.
Since the complete OFDM system is a rather complicated structure, let us break
down the whole system into several smaller pieces and start the implementation
from these basic functions. For each function, after its implementation, we will
evaluate it in order to make sure that it is functioning properly. This is a very useful
and widely employed strategy for engineers when developing and testing a large-scale, complicated system.
First of all, let us implement the rectangular QAM modulator as shown in Fig-
ure 7.1, given an arbitrary carrier frequency wk. The M-QAM modulator modu-
lates every b = log2 M binary bits into one of the M complex numbers. We can
get a better idea of this modulation process by looking at the constellation plot,
as shown in Figure 7.2. Corresponding to the QAM modulator, let us implement
the rectangular QAM demodulator as shown in Figure 7.3, given an arbitrary
190 Multicarrier Modulation and Duplex Communications
carrier frequency ωk. The M-QAM demodulator converts the complex numbers back to binary bits.
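Although the book builds these blocks in MATLAB/Simulink, the baseband mapping can be sketched in Python/NumPy. The bit-to-symbol assignment below is a simple binary (non-Gray) labeling chosen for illustration; it is not necessarily the exact labeling used by the book's blocks.

```python
import numpy as np

def qam_mod(bits, M):
    """Map groups of b = log2(M) bits onto a rectangular M-QAM
    constellation with levels {-(m-1), ..., -1, 1, ..., m-1}."""
    b = int(np.log2(M))
    m = int(np.sqrt(M))                              # points per I/Q axis
    vals = bits.reshape(-1, b)
    idx = vals.dot(1 << np.arange(b - 1, -1, -1))    # bit group -> integer
    i, q = idx // m, idx % m
    return (2 * i - (m - 1)) + 1j * (2 * q - (m - 1))

def qam_demod(symbols, M):
    """Nearest-neighbor decisions back to bits (inverse of qam_mod)."""
    b = int(np.log2(M))
    m = int(np.sqrt(M))
    i = np.clip(np.round((symbols.real + (m - 1)) / 2), 0, m - 1).astype(int)
    q = np.clip(np.round((symbols.imag + (m - 1)) / 2), 0, m - 1).astype(int)
    idx = i * m + q
    return ((idx[:, None] >> np.arange(b - 1, -1, -1)) & 1).reshape(-1)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 4 * 100)       # 100 random 16-QAM symbols
syms = qam_mod(bits, 16)
recovered = qam_demod(syms, 16)
```

Round-tripping random bits through the modulator and demodulator with no channel should recover them exactly, which is a useful unit test before adding noise.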
Figure 7.11 Probability of bit error curves for 4-QAM, rectangular 16-QAM, and 64-QAM (probability of bit error from 10^−1 to 10^−4 versus SNR from 0 to 25 dB).
Q: Verify that Figure 7.4 and its corresponding receiver work under ideal conditions. Which carrier frequencies did you not use in the implementation?
Then, let us implement Figure 7.5 using IDFT and DFT blocks. In MATLAB, these two blocks can be realized using the ifft and fft functions. Note that the size of the input/output should be a variable. At the transmitter, in order to obtain a real-valued time series s[n], according to the IDFT properties, a 2N-point IDFT should be implemented. However, the output of the M-QAM modulator consists of N complex numbers. Therefore, before going through the IDFT block, we need to rearrange and expand the N-point complex vector to 2N points as follows:
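One common convention for this rearrangement (an assumption here; the book's exact index mapping may differ) is to place the N symbols and their complex conjugates symmetrically in a 2N-point vector so that the IDFT output is real:

```python
import numpy as np

def to_real_ofdm(X):
    """Expand N complex QAM symbols into a 2N-point conjugate-symmetric
    vector so its IDFT is real-valued. Convention (an assumption): bins
    1..N-1 carry X[0..N-2], the Nyquist bin carries the real part of
    X[N-1] (a Nyquist bin must be real), and DC is left empty."""
    N = len(X)
    full = np.zeros(2 * N, dtype=complex)
    full[1:N] = X[:N - 1]
    full[N] = X[N - 1].real
    full[N + 1:] = np.conj(X[:N - 1])[::-1]   # mirror-image conjugates
    return full

rng = np.random.default_rng(1)
X = rng.standard_normal(8) + 1j * rng.standard_normal(8)
s = np.fft.ifft(to_real_ofdm(X))              # real time-domain OFDM symbol
```

Because the expanded vector satisfies full[2N−k] = conj(full[k]), the imaginary part of the IDFT output is zero up to floating-point error.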
Q: Implement the channel filter, given as an input the channel impulse response, and add the noise generator. Test your implementation with the following parameters, whose impulse responses are shown in Figure 7.12:

h1 = [1 0 0 0 0 0 0 0]
h2 = [1 0.1 0.0001 0 0 0]
h3 = [1 0.3 0.4 0.32 0.2 0.1 0.05 0.06 0.02 0.009]  (7.13)

with μ = 0 and σ² = 0.001.
The last component we need to take into account is the cyclic prefix. Let us implement the cyclic prefix add and remove blocks of the OFDM system, as shown in Figure 7.8, with the length of the cyclic prefix an input variable. At the transmitter, the cyclic prefix is added to eliminate the intercarrier interference (ICI) caused by the frequency selective channel.
Figure 7.12 The impulse responses of the three channels h1, h2, and h3, which will result in three different ICI effects (amplitude versus time in samples).
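The cyclic prefix blocks themselves are short. The Python/NumPy sketch below (with channel taps borrowed from the style of (7.13); the prefix length is an illustrative choice) also checks the key property: when the prefix is at least as long as the channel memory, the channel's linear convolution acts circularly on the retained samples.

```python
import numpy as np

def add_cp(x, L):
    """Prepend the last L samples of the OFDM symbol as a cyclic prefix."""
    return np.concatenate([x[-L:], x])

def remove_cp(y, L):
    """Discard the first L (prefix) samples at the receiver."""
    return y[L:]

rng = np.random.default_rng(2)
x = rng.standard_normal(16)               # one time-domain OFDM symbol
h = np.array([1.0, 0.3, 0.4])             # short multipath channel
L = 4                                     # CP length >= len(h) - 1
rx = np.convolve(add_cp(x, L), h)[:16 + L]    # channel output (truncated)
y = remove_cp(rx, L)
# Equivalent circular convolution computed in the frequency domain
circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 16)))
```

The match between y and circ is exactly what lets the receiver undo the channel with one complex gain per subcarrier after the DFT.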
Load and open the ofdm.mdl file. The example OFDM transmitter and receiver have already been implemented and provided, as shown in Figure 7.13. We will finish the rest of the model with the following steps and experiment with the parameters.
Q:
1. With your implementation, plot the BER curves for the three channel realizations in (7.13) between BER values of 10^−2 and 10^−4. In particular, simulate your system when it uses N = 16, 32, and 64 subcarriers and employs a cyclic prefix of length N/4. The high-speed input x[m] is uniformly distributed. Plot the BER results.
2. Employ both the constant-rate commutators, which use one of BPSK, 4-QAM, 16-QAM, or 64-QAM, and the variable-rate commutator. Plot the BER results. Which approach attains the best performance? Please justify.
In order to finish the model, let us start off this design by getting the AWGN block from the Simulink Library Browser and connecting the transmitter and receiver via the AWGN block. Then, connect the Spectrum Scope after the AWGN channel and connect the transmitter and receiver to the inputs of the Error Rate Calculation block. By doing this, we have completed the model. Now, let us play around with this model by changing some parameters.
Q:
1. Change the SNR value of the AWGN channel and observe the effect in the frequency domain via the Spectrum Scope.
2. Open the OFDM Transmitter and OFDM Receiver subsystems, change the number of subcarriers, and observe the effect in the frequency domain via the Spectrum Scope.
A matched filter is a theoretical framework and should not be mistaken for the name of a specific type of filter. It is a filtering process designed to take a received signal and minimize the effect of the noise present in it. Hence, it maximizes the SNR of the filtered signal. It happens that an optimum filter exists for each signal shape transmitted, and it is a function only of the transmitted pulse shape. Due to its direct relationship to the transmitted pulse shape, it is called a matched filter. One commonly used matched filter design employed by many communication systems is the square root raised cosine filter.
In this section, we use two models to illustrate a typical setup in which a transmitter uses a square root raised cosine filter to perform pulse shaping and the corresponding receiver uses a square root raised cosine filter as a matched filter to the transmitted signal. At the receiver, we can plot an eye diagram from the filtered received signal, from which we can learn the effect of a matched filter in a communication system.
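The pulse-shaping/matched-filter cascade can be sketched outside Simulink as well. The square root raised cosine taps below are built by square-rooting a raised-cosine frequency response and inverse transforming, an illustrative construction rather than the toolbox's filter object; the cascade of the transmit and receive filters should then be close to a zero-ISI Nyquist (raised cosine) pulse.

```python
import numpy as np

def srrc_taps(beta, sps, span):
    """Square-root raised cosine FIR taps via frequency sampling:
    build a raised-cosine spectrum, take its square root, and inverse
    transform (an illustrative design, truncated to `span` symbols)."""
    n = sps * span + 1
    N = 1024
    f = np.fft.fftfreq(N, d=1.0 / sps)        # frequency in symbol rates
    H = np.zeros(N)
    edge_lo, edge_hi = (1 - beta) / 2, (1 + beta) / 2
    H[np.abs(f) <= edge_lo] = 1.0
    roll = (np.abs(f) > edge_lo) & (np.abs(f) <= edge_hi)
    H[roll] = 0.5 * (1 + np.cos(np.pi / beta * (np.abs(f[roll]) - edge_lo)))
    h = np.fft.fftshift(np.real(np.fft.ifft(np.sqrt(H))))
    taps = h[N // 2 - n // 2: N // 2 + n // 2 + 1]
    return taps / np.sqrt(np.sum(taps ** 2))  # unit energy

taps = srrc_taps(beta=0.5, sps=8, span=6)
cascade = np.convolve(taps, taps)             # TX filter then matched RX filter
peak = int(np.argmax(cascade))
isi = cascade[peak + 8::8]                    # later symbol-spaced samples
```

The near-zero values of `isi` at symbol-spaced instants are the Nyquist zero-ISI property the eye diagram makes visible: the eye is widest exactly at those sampling instants.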
7.3.1 Eye Diagram
In telecommunication, an eye diagram, also known as an eye pattern, is an oscil-
loscope display in which a digital data signal from a receiver is repetitively sampled
and applied to the vertical input, while the data rate is used to trigger the horizontal
sweep. It is so called because, for several types of coding, the pattern looks like a
series of eyes between a pair of rails.
Several system performance measures can be derived by analyzing the display.
If the signals are too long, too short, poorly synchronized with the system clock,
too high, too low, too noisy, too slow to change, or have too much undershoot or
overshoot, this can be observed from the eye diagram. For example, Figure 7.14
shows a typical eye pattern, where the x-axis represents timing and the y-axis repre-
sents amplitude. We can also obtain the information concerning time variation and
amount of distortion from the eye diagram. In general, the most frequent usage of
the eye pattern is for qualitatively assessing the extent of the ISI. As the eye closes,
the ISI increases; as the eye opens, the ISI decreases.
Figure 7.14 A typical eye pattern, where the x-axis represents timing and the y-axis represents amplitude. The annotations indicate the best sampling instant, the distortion amount, the time variation, and "the eye."
Figure 7.15 A typical eye pattern for the QPSK signal. The width of the opening indicates the time over which sampling for detection might be performed. The optimum sampling time corresponds to the maximum eye opening, yielding the greatest protection against noise. If there were no filtering in the system, the pattern would look like a box rather than an eye.
Figure 7.16 A model that illustrates a typical receiver setup, in which the receiver uses a square
root raised cosine filter as a matched filter corresponding to the square root raised cosine filter on
the transmitter.
The receiver uses a Raised Cosine Receive Filter as the matched filter for the Raised Cosine Transmit Filter on the transmitter. By double-clicking the Discrete-Time Eye Diagram Scope, we can observe the eye diagram of our system. Perform the following tasks and plot your observations.
Figure 7.17 The subsystem of the receiver model, in which the receiver plots an eye diagram from the filtered received signal. Based on this diagram, we can learn the effect of a matched filter in a communication system.
Figure 7.18 The subsystem of the receiver model without a Raised Cosine Receive Filter.
Please note that when the eye diagram has two widely opened “eyes,” it in-
dicates the appropriate instants at which to sample the filtered signal before de-
modulating. It also indicates the absence of intersymbol interference at the sampling
instants of the received waveform. A large SNR in the channel will produce a low-noise eye diagram. We can also construct a Simulink-only model using an AWGN Channel block and then change the SNR parameter in that block to see how the eyes in the diagram change.
7.4 OPEN-ENDED DESIGN PROJECT: DUPLEX COMMUNICATION

In this section, we will design a duplex single carrier communication system based
on what we have accomplished thus far in this book. A duplex communication
system is a system composed of two connected parties or devices that can commu-
nicate with one another in both directions. Duplex systems are often employed in
many communications networks, either to allow for a communication “two-way
street” between two connected parties or to provide a “reverse path” for the moni-
toring and remote adjustment of equipment in the field.
Systems that do not need the duplex capability instead use simplex communication. These include broadcast systems, where one station transmits and the others just "listen." Several examples of communication systems employing simplex communications include television broadcasting and FM radio transmissions.
Figure 7.19 A photograph of XCVR2450 RF transceiver daughterboard (from [12]), where a com-
mon local oscillator is used for both receive and transmit.
7.4.2 Half-Duplex
A half-duplex (HDX) system provides communication in both directions, but only
one direction at a time (not simultaneously). Typically, once a party begins receiv-
ing a signal, it must wait for the transmitter to stop transmitting before replying.
An example of a half-duplex system is a two-party system such as a “walkie-talkie”
style two-way radio, wherein one must use “over” or another previously designated
command to indicate the end of transmission and ensure that only one party trans-
mits at a time, because both parties transmit and receive on the same frequency. A
good analogy for a half-duplex system would be a one-lane road with traffic con-
trollers at each end. Traffic can flow in both directions, but only one direction at a
time, regulated by the traffic controllers. There are several different ways to control the traffic in a half-duplex communication system, and we suggest you use time-division duplexing.
Figure 7.20 A half-duplex communication system using time-division duplexing. A common carrier
is shared between station A and station B, the resource being switched in time.
Figure 7.21 An example of the status output at station B, where it shows the switch between the
transmitter mode and the receiver mode as well as the received messages.
7.5 CHAPTER SUMMARY
7.6 PROBLEMS
Figure 7.22 Schematics of generic multicarrier modulation employing synthesis and analysis filter-
banks. (a) Generic multicarrier transmitter. (b) Generic multicarrier receiver.
spectra for the following: (a) upsampled signal y(0)(n), (b) output of the
synthesis filter g(0)(n), (c) composite signal s(n), (d) output of the analy-
sis filter f (1)(n), (e) downsampled signal dˆ(1)(n).
NOTE: Please make sure to properly indicate values of the x-axis.
(b) Suppose now that a frequency selective channel exists between the
transmitter and receiver, as shown in Figure 7.23(f). Sketch the result-
ing signal spectra at the input to the multicarrier receiver, namely, r(n).
If each of the subcarrier equalizers w(0)(n), w(1)(n), w(2)(n), and w(3)(n) consists of only one coefficient, what should their values be and why?
HINT: Refer to Figure 7.23(f) and the frequency response values at the
center of each subcarrier.
(c) Suppose that the average frequency attenuation per subcarrier is equal to: the noise spectral density is equal to N0 = 10^−6, the transmit power level for each subcarrier is Pi = 1 × 10^−3, and the SNR gap is equal to Γ = 20. We understand that the number of bits that can be allocated to subcarrier i is equal to:
bi = log2(1 + γi/Γ),  i = 1, …, N  (7.14)
where the subcarrier signal-to-noise ratio (SNR) is equal to:

γi = Pi|Ci|²/N0  (7.15)
Figure 7.23 Sketches of the synthesis filter g(0)(n), all four signal spectra, and the channel response. (a) Frequency response for synthesis filter g(0)(n). (b) Signal spectrum for d(0)(n). (c) Signal spectrum for d(1)(n). (d) Signal spectrum for d(2)(n). (e) Signal spectrum for d(3)(n). (f) Frequency response of the wireless channel.
transmits two bits per symbol epoch while modulation scheme 2 (i.e., MS2) transmits four bits per symbol epoch. Using the information provided in part (c), the probability of bit error for each subcarrier when employing MS1 is Pe,MS1 = {7 × 10^−7, 2 × 10^−6, 1 × 10^−5, 5 × 10^−8} while for MS2 it is Pe,MS2 = {9 × 10^−6, 2 × 10^−5, 1 × 10^−3, 1 × 10^−6}. In order to maximize the throughput given an average probability of bit error constraint of 1 × 10^−5, which modulation scheme should be used for each subcarrier? What is the maximum throughput? Please justify your answer.
HINT: When computing the average probability of bit error, remember
to weigh each subcarrier quality by the number of bits being supported
and divide by the total bits to be transmitted.
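As a sanity check on the hint, the search space here is small enough to enumerate. The sketch below brute-forces all 2^4 assignments using the Pe values quoted above and the bit-weighted average the hint describes.

```python
import itertools

# Bits per symbol and per-subcarrier bit error probabilities from the problem
bits = {"MS1": 2, "MS2": 4}
Pe = {"MS1": [7e-7, 2e-6, 1e-5, 5e-8],
      "MS2": [9e-6, 2e-5, 1e-3, 1e-6]}
target = 1e-5

best = None
for choice in itertools.product(["MS1", "MS2"], repeat=4):
    total_bits = sum(bits[m] for m in choice)
    # Bit-weighted average probability of bit error, per the hint
    avg = sum(bits[m] * Pe[m][i] for i, m in enumerate(choice)) / total_bits
    if avg <= target * (1 + 1e-9) and (best is None or total_bits > best[0]):
        best = (total_bits, choice, avg)
```

The enumeration confirms the intuition that the subcarrier with the worst MS2 error rate (1 × 10^−3) must fall back to MS1 while the rest can stay at four bits.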
10. [Bit and Power Allocation]: Suppose we have an orthogonal frequency-
division multiplexing (OFDM) communication system consisting of N =
8 subcarriers that is transmitting over a frequency selective fading chan-
nel possessing an average frequency attenuation per subcarrier equal
to:
(a) Suppose that Pi = 1 × 10−5 for all subcarriers, and the SNR gap is equal
to G = 20. What is the bit allocation across all these subcarriers rounded
to the nearest integer? What is the average quantization error between
the optimal (real) bi values and the rounded (integer) values across all
the subcarriers?
(b) Suppose bi = 3 bits across all subcarriers given an SNR gap equal to G = 20.
What should be the approximate subcarrier power values Pi?
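Equations (7.14) and (7.15) are straightforward to evaluate numerically. Since the problem's attenuation values are not reproduced here, the |Ci| values and N0 in the sketch below are hypothetical placeholders used only to illustrate the computation.

```python
import numpy as np

# Hypothetical per-subcarrier attenuations |C_i| (the problem's actual
# values are not reproduced here) and an assumed noise density
C = np.array([0.9, 0.7, 0.5, 0.8, 0.6, 0.4, 0.95, 0.3])
P_i = 1e-5        # transmit power per subcarrier
N0 = 1e-9         # noise spectral density (illustrative)
Gamma = 20.0      # SNR gap

gamma = P_i * np.abs(C) ** 2 / N0        # per-subcarrier SNR, as in (7.15)
b_real = np.log2(1 + gamma / Gamma)      # real-valued bit loading, (7.14)
b_int = np.round(b_real)                 # rounded (integer) allocation
quant_err = np.mean(np.abs(b_real - b_int))
```

The same few lines answer both sub-questions: `b_int` is the rounded allocation and `quant_err` is the average quantization error between the real and integer bit values.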
References
Chapter 8
Spectrum Sensing Techniques

With advanced digital communication systems such as cognitive radio [1, 2] being
used in a growing number of wireless applications, the topic of spectrum sensing has
become increasingly important. This chapter will introduce the concept, the funda-
mental principles, and the practical applications of spectrum sensing. In the experi-
mental part of this chapter, two popular spectral detectors will be implemented and
their performance will be observed by applying them to several Simulink generated
signals. This is followed by a USRP hardware implementation of a wideband spectrum
sensing technique. Finally, the open-ended design project will focus on a commonly
used medium access control (MAC) scheme referred to as carrier sense multiple access
with collision avoidance (CSMA/CA), which heavily relies on spectrum sensing.
In recent years, a large portion of the assigned spectrum has been observed to be sparsely and sporadically utilized during several spectrum measurement campaigns [3, 4], as illustrated in Figure 8.1. In particular, spectrum occupancy by licensed transmissions is often concentrated across specific frequency ranges while a significant amount of the spectrum remains either underutilized or completely unoccupied. Therefore, dynamic
spectrum access [6, 7] has been proposed for increasing spectrum efficiency via the
real-time adjustment of radio resources using a combination of local spectrum sensing,
probing, and autonomous establishment of local wireless connectivity among cognitive
radio (CR) nodes and networks. During this process, spectrum sensing is employed for
the purpose of identifying unoccupied licensed spectrum (i.e., spectral “white spaces”).
Once these white spaces have been identified, secondary users (SU) opportunistically
utilize these spectral white spaces by wirelessly operating across them while simultane-
ously not causing harmful interference to the primary users (PU).1 Currently, there exist several techniques for spectrum sensing. This chapter will emphasize two of them, namely, energy detection and cyclostationary feature detection.
1. Primary users are licensed users who are assigned certain channels, and secondary users are unlicensed users who
are allowed to use the channels assigned to a primary user only when they do not cause any harmful interference
to the primary user [8].
The power spectral density (PSD) is often used to characterize the signal; it is obtained by taking the Fourier transform of the autocorrelation RXX(τ) of the WSS random process X(t). The PSD and the autocorrelation of a function are mathematically related by the Einstein-Wiener-Khinchin (EWK) relations [9], namely:
SXX(f) = ∫_{−∞}^{∞} RXX(τ) e^{−j2πfτ} dτ  (8.1)

RXX(τ) = ∫_{−∞}^{∞} SXX(f) e^{+j2πfτ} df  (8.2)
A very powerful consequence of the EWK relations is its usefulness when at-
tempting to determine the autocorrelation function or PSD of a WSS random pro-
cess that is the output of a linear time-invariant (LTI) system whose input is also
a WSS random process. Specifically, suppose we denote H(f ) as the frequency re-
sponse of an LTI system h(t). We can then relate the power spectral density of input
and output random processes with the following equation:
where SXX (f ) is the PSD of input random process and SYY (f ) is the PSD of output
random process. As we will see later, the experiments in Section 8.2 of this chapter
are all based on this very useful relationship, as illustrated in Figure 8.2.
Figure 8.2 An example of how an LTI system h(t) can transform the PSD between the WSS random process input X(t) and the WSS random process output Y(t).
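The input-output PSD relation SYY(f) = |H(f)|² SXX(f) is easy to verify empirically. The sketch below (illustrative filter taps and segment counts) pushes unit-variance white noise through a short FIR filter and compares averaged periodograms against |H(f)|².

```python
import numpy as np
from numpy.fft import fft

rng = np.random.default_rng(3)
nfft, segs = 256, 2000
h = np.array([1.0, 0.5, 0.25])            # LTI system h(t) (FIR sketch)
Sxx = np.zeros(nfft)
Syy = np.zeros(nfft)
for _ in range(segs):
    x = rng.standard_normal(nfft)         # white WSS input: flat PSD ~ 1
    y = np.convolve(x, h)[:nfft]          # filtered output (edges ignored)
    Sxx += np.abs(fft(x)) ** 2 / nfft     # periodogram of the input
    Syy += np.abs(fft(y)) ** 2 / nfft     # periodogram of the output
Sxx /= segs
Syy /= segs
H2 = np.abs(fft(h, nfft)) ** 2            # |H(f)|^2 on the same grid
```

Averaging over many segments is essential: a single periodogram has roughly 100% relative variance, so the |H(f)|² shape only emerges after averaging, which is exactly the sweep-averaging theme of Section 8.1.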
Figure 8.3 A spectrum analyzer is employed to provide a snapshot of radio frequency bandwidth.
(a) Using Agilent CSA-N1996A spectrum analyzer to take spectrum measurements (from [10]).
(b) Power spectral density of a pulse shaped QPSK signal, collected by an Agilent Technologies
spectrum analyzer [11].
The spectrum measurements presented later in this chapter are the average of thousands of spectrum sweeps across a single
bandwidth. Note that for the same measurement equipment, the speed at which
spectrum measurements can be obtained varies from seconds to hours and days
depending on the choice of several sweep parameters. However, the selection of
these sweep parameters is heavily dependent on what signals are being observed,
what sort of characteristics are being sought after in the spectrum measurements,
and how the spectrum measurement information will be post-processed afterward.
As a result, Table 8.1 summarizes several parameters used to define spectrum measurement processes and the aspects of measurement they mainly affect. In the following subsection, we will examine how the choice of these parameters can directly impact the outcome of the measurements obtained.
Figure 8.4 Two FFT plots of the same sine wave. (a) A 32-point FFT plot. (b) A 512-point FFT plot.
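The resolution difference in Figure 8.4 follows directly from the FFT bin spacing fs/N. A quick Python/NumPy check (the tone frequency here is an arbitrary illustrative choice):

```python
import numpy as np

fs = 1.0          # normalized sample rate
f0 = 0.2          # tone frequency as a fraction of fs (illustrative)
errs = {}
for N in (32, 512):
    t = np.arange(N)
    X = np.abs(np.fft.rfft(np.sin(2 * np.pi * f0 * t)))
    # Estimated frequency = peak bin times the bin spacing fs/N
    errs[N] = abs(np.argmax(X) * fs / N - f0)
```

With N = 32 the bin spacing is fs/32 ≈ 0.031, so the peak can miss the true frequency by over 0.01; with N = 512 the spacing shrinks 16-fold and the estimate lands almost on top of the tone, mirroring the two panels of Figure 8.4.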
Figure 8.5 A simple example showing the relation between the total bandwidth B and the window
size W.
where E[·] is the expectation operator and N is the number of sweeps. For example, Figure 8.6(a) shows a single sine wave whose amplitude is 1 with added white Gaussian noise. Figure 8.6(b) shows the result of accumulating 1000 such sine waves and taking their average. Comparing these two plots, we find that the effect of the additive white Gaussian noise can be greatly reduced by the averaging process.
In real-world applications, as the number of sweeps used in the averaging pro-
cess increases, the noise level will be better captured. This can be used to estimate
the noise statistics and would be very useful when determining the energy threshold
for the SDR implementation of an energy detector.
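The averaging effect can be verified in a few lines: averaging N independent sweeps shrinks the noise standard deviation by roughly √N (the waveform and sweep count below mirror the Figure 8.6 setup).

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 500, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)                    # amplitude-1 sine wave
single = clean + rng.standard_normal(500)            # one noisy sweep
sweeps = clean + rng.standard_normal((1000, 500))    # 1000 noisy sweeps
averaged = sweeps.mean(axis=0)                       # sweep-averaged result

err_single = np.std(single - clean)                  # ~1 (noise std)
err_avg = np.std(averaged - clean)                   # ~1/sqrt(1000)
```

The residual noise of the averaged trace is what an energy detector can measure to estimate the noise floor before setting its threshold.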
8.1.3 Hypothesis Testing
As mentioned at the beginning of Section 8.1, in dynamic spectrum access net-
works, spectrum sensing is employed for the purpose of identifying unoccupied
licensed spectrum, which is equivalent to detecting the frequency locations of the
primary signals. Therefore, spectrum sensing can be interpreted as a signal detec-
tion problem. Most signal detection problems can be formulated in the framework
Figure 8.6 The sine wave with and without the averaging process. (a) A sine wave whose amplitude is
1 with the added white Gaussian noise. (b) Accumulating 1000 sine waves and taking their average.
of hypothesis testing, i.e., testing whether or not there are primary signals in a particular channel. The two hypotheses are denoted as follows:

H0: x[k] = n[k] (no primary signals)
H1: x[k] = s[k] + n[k] (primary signals present)  (8.9)
δ(x) = 1 if x ∈ Γ1;  δ(x) = 0 if x ∈ Γ1^c  (8.11)
When the observation x falls inside the region Γ1, we will choose H1. However, if the observation falls outside the region Γ1, we will choose H0. Therefore, (8.11) is known as a decision rule, which is a function that maps an observation to an appropriate hypothesis [12]. In the context of spectrum sensing, different spectral detectors and classifiers are actually the implementations of different decision rules. In Section 8.1.4, two decision rules will be introduced.
Regardless of the precise signal model or detector used, sensing errors are inevitable due to additive noise, limited observations, and the inherent randomness of the observed data [13]. In testing H0 versus H1 in (8.9), there are two types of errors that can be made, namely, H0 can be falsely rejected or H1 can be falsely rejected [12]. In the first case, there are actually no primary signals in the channel, but the testing detects an occupied channel, so this type of error is called a false alarm or type I error. In the second case, there actually exist primary signals in the channel, but the testing detects only a vacant channel. Thus, we refer to this type of error as a missed detection or type II error. Consequently, a false alarm may lead to a potentially wasted opportunity for the SU to transmit, while a missed detection could potentially lead to a collision with the PU [13].
Given these two types of errors, the performance of a detector can be character-
ized by two parameters, namely, the probability of false alarm (PF), and the prob-
ability of missed detection (PM) [14], which correspond to type I and type II errors,
respectively, and thus can be defined as:
PF = P{Decide H1 | H0}  (8.12)

and

PM = P{Decide H0 | H1}  (8.13)

Note that based on PM, another frequently used parameter is the probability of detection, which can be derived as follows:

PD = 1 − PM = P{Decide H1 | H1}  (8.14)
which characterizes the detector’s ability to identify the primary signals in the chan-
nel, so PD is usually referred to as the power of the detector.
As for detectors, we would like their probability of false alarm to be as low as possible and, at the same time, their probability of detection to be as high as possible. However, in real-world situations this is not achievable, because these two parameters constrain each other. To show their relationship, a plot called the receiver operating characteristic (ROC) is usually employed [15], as shown in Figure 8.7, where the x-axis is the probability of false alarm and the y-axis is the probability of detection. From this plot, we observe that as PD increases, PF also increases. An optimal point that reaches the highest PD and the lowest PF does not exist. Therefore, the detection problem is also a tradeoff, which depends on how the type I and type II errors should be balanced.
Figure 8.7 A typical receiver operating characteristic (ROC), where the x-axis is the probability of
false alarm (PF) and the y-axis is the probability of detection (PD).
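An empirical ROC like Figure 8.7 can be generated by sweeping a detector threshold. The sketch below uses an energy detector (sum of squared samples) with an arbitrary constant signal at 0 dB SNR under H1; the sample counts and threshold grid are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
Nsamp, trials, snr = 16, 4000, 1.0
# Decision statistic D = sum of squared samples under each hypothesis
noise_D = np.sum(rng.standard_normal((trials, Nsamp)) ** 2, axis=1)           # H0
sig = np.sqrt(snr) * np.ones(Nsamp)
sig_D = np.sum((sig + rng.standard_normal((trials, Nsamp))) ** 2, axis=1)     # H1

thresholds = np.linspace(0, 80, 200)
PF = np.array([(noise_D > T).mean() for T in thresholds])   # false alarms
PD = np.array([(sig_D > T).mean() for T in thresholds])     # detections
```

Plotting PD against PF traces the ROC: raising the threshold lowers both curves together, so a lower false alarm rate is always paid for with a lower detection probability.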
The energy of a received continuous-time signal x(t) is given by:

E = ∫_{−∞}^{∞} |x(t)|² dt  (8.15)

In discrete time, the decision statistic becomes:

D = Σ_{k=1}^{N} (x[k])²  (8.16)
where x[k] is the received signal and N is the total number of received samples. The underlying assumption is that with the presence of a signal in the channel, there would be significantly more energy than if there was no signal present. Therefore, energy detection involves the application of a threshold T in the frequency domain, which is used to decide whether a transmission is present at a specific frequency, as shown in Figure 8.8. Any portion of the frequency band where the energy exceeds the threshold is considered to be occupied by a transmission, with the decision rule as follows:
δ(D) = 1 if D > T;  δ(D) = 0 if D < T  (8.17)
where D is the decision statistic calculated by (8.16) and T is the predefined energy
threshold. Therefore, in Figure 8.8, any portion of the frequency band where the
energy exceeds –100 dBm is considered an occupied channel.
Since different transmitters employ different signal power levels and trans-
mission ranges, one of the major concerns of energy detection is the selection of
an appropriate threshold. A threshold that works for one transmission may not
be appropriate for another. Figure 8.9 shows two typical detection errors caused
by inappropriate energy detection threshold. In Figure 8.9(a), the threshold is
too low, so some noise is considered primary signals, resulting in a type I error,
Figure 8.8 Energy detection threshold level, denoted as T. Any portion of the frequency band
where the energy exceeds the threshold is considered occupied by a transmission.
false alarm. In Figure 8.9(b), the threshold is too high, so some primary signals
are ignored, incurring the type II error, missed detection.
A signal x(t) exhibits cyclostationarity when its mean and autocorrelation are periodic with some period T0:

Mx(t) = Mx(t + T0)  (8.18)

and

Rx(t, τ) = Rx(t + T0, τ)  (8.19)

where Mx(t) is the mean value of the signal x(t) and Rx(t, τ) is the autocorrelation function of the signal x(t). These periodicities occur for signals possessing well-defined characteristics due to several processes such as sampling, scanning, modulating, multiplexing, and coding, which can be exploited to determine the modulation scheme of the unknown signal [17].
The periodic nature of the signal allows it to be expressed as a Fourier series
[18, 19]:
Figure 8.9 Inappropriate energy detection threshold levels. (a) Low energy detection threshold
level yields type I error, false alarm. (b) High energy detection threshold level yields type II error,
missed detection.
Rx(t, τ) = E{x(t + τ/2) x*(t − τ/2)} = Σ_{α} Rx^α(τ) e^{j2παt}  (8.20)
where E{·} is the expectation operator, {α} is the set of Fourier components, and Rx^α(τ) is the cyclic autocorrelation function (CAF) given by:
Rx^α(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} Rx(t, τ) e^{−j2παt} dt  (8.21)
Alternatively, in the case when Rx(t, τ) is periodic in t with period T0, (8.21) can be expressed as:
Rx^α(τ) = (1/T0) ∫_{−T0/2}^{T0/2} Rx(t, τ) e^{−j2παt} dt  (8.22)
Sx^α(f) = ∫_{−∞}^{∞} Rx^α(τ) e^{−j2πfτ} dτ  (8.23)
Sx^α(f) = lim_{T→∞} lim_{Δt→∞} (1/Δt) ∫_{−Δt/2}^{Δt/2} (1/T) XT(t, f + α/2) XT*(t, f − α/2) dt  (8.24)

where

XT(t, f) = ∫_{t−T/2}^{t+T/2} x(u) e^{−j2πfu} du  (8.25)
The x-axis represents the cyclic frequency α, the y-axis represents the spectral frequency f, and the z-axis represents the corresponding magnitude of the SOF for each (α, f) pair. Note that when α does not equal zero, the SOF values are approximately zero.
Cyclostationary feature detectors can be implemented via FFTs. Knowledge
of the noise variance is not required to set the detection threshold. Hence, the
detector does not suffer from the “SNR wall” problem of the energy detector.
However, the performance of the detector degrades in the presence of timing
and frequency jitters (which smear out the spectral lines) and RF nonlineari-
ties (which induce spurious peaks). Representative papers that consider the ap-
proach are [20–22].
In Section 8.1.4, two forms of spectral detectors and classifiers were presented. In this section, we will implement these detectors in software and apply them to several Simulink-generated signals in order to perform detection. From these software implementations, we will gain a better understanding of the characteristics of these two detectors, as well as their specific applications.
Figure 8.10 Distinctive cyclic features of different modulation schemes. (a) SOF of QPSK signal in
an AWGN channel at 10 dB SNR. (b) SOF of 4PAM signal in an AWGN channel at 10 dB SNR.
Then, the modulated signals are pulse shaped for over-the-air transmission. In the
end, vectors of each transmission are saved to the MATLAB workspace by Signal
To Workspace blocks.
Once the model has been downloaded and opened in Simulink, let us run the
model once and check the MATLAB workspace in order to see whether the sig-
nals have been saved. If yes, let us attach the Spectrum Scope block to the AWGN
Channel block to observe the output of each channel. Please note that in order to
get a satisfactory frequency resolution, the FFT length of the Spectrum Scope block
should be set properly.
Q: Vary the SNR value of the AWGN Channel block and observe the output spectrum of each channel. At what point do the signals become unobservable due to noise?
Figure 8.11 Signal generation model, where signals of three different modulation schemes are
saved to the MATLAB workspace.
1. Prefiltering of the intercepted signal extracts the frequency band of interest.
2. Analog-to-digital conversion (ADC) transforms the filtered intercepted sig-
nal into discrete time samples.
3. Fast Fourier transform (FFT) provides the frequency representation of the
signal.
4. Square-law device yields the square of the magnitude of the frequency re-
sponse from the FFT output.
5. Averaging of N samples of the squared FFT magnitude can be performed by a moving average filter of length N, which windows and time-averages the last N samples of the input. The window size N should be adjusted depending on the performance of the system.2
Please note that in real-world applications, the first two blocks are usually im-
plemented in the hardware, which depends on the software-defined radio equip-
ment itself and is not under our control. Therefore, for the implementation in this
section, only the last three blocks need to be implemented, and the signals from
Section 8.2.1.1 will serve as the discrete-time signals.
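Steps 3 through 5 of the pipeline can be sketched as follows (the FFT size, averaging length, threshold, and test tone are illustrative choices, not values prescribed by the text):

```python
import numpy as np

def energy_detect(x, nfft, N_avg, T):
    """Sketch of the FFT / square-law / moving-average energy detector:
    FFT -> squared magnitude -> length-N moving average -> threshold."""
    X = np.fft.fft(x[:nfft])
    P = np.abs(X) ** 2 / nfft                    # square-law device
    kernel = np.ones(N_avg) / N_avg              # moving average filter
    D = np.convolve(P, kernel, mode="same")      # averaged decision statistic
    return D > T                                  # decision rule per bin

rng = np.random.default_rng(6)
n = np.arange(1024)
tone = 3.0 * np.cos(2 * np.pi * 0.25 * n)        # strong tone at fs/4
x = tone + rng.standard_normal(1024)             # tone buried in noise
occupied = energy_detect(x, nfft=1024, N_avg=5, T=20.0)
```

For a real input, the tone appears in the two mirror bins (here 256 and 768), and the averaged statistic there sits far above the noise floor, so only a narrow cluster of bins around each trips the threshold.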
Figure 8.12 Schematic of an energy detector implementation employing prefiltering and a square-
law device to produce the decision statistic of the intercepted signals.
2. You can perform a linear averaging, which averages the N samples in a nonweighted manner. Or, you can perform
an exponential averaging, which averages the N samples in a weighted manner, and the most recent samples are
given more weighting than the older ones.
For each trial, a different threshold is selected. Using this threshold, we compute the probability of false alarm (the number of times we incorrectly say that there is a signal present) and the probability of missed detection (the number of times we did not find the signal). After 10 trials, we select the threshold that performs the best concerning both metrics and set it as the threshold value that will be employed in the energy detection for this specific SNR.
Please note that in this code, Cx is the SOF generated from the received QPSK signal, and SCF is the corresponding spectral correlation function. The SOF is plotted by the surf function, and the specifications of the plot are listed by the set function.
3. The interested reader can check out [16–19] for more information about cyclostationary detectors.
Q:
1. Generate an AWGN vector and plot its SOF. Does this plot support the conclusion that cyclostationary detectors are robust to Gaussian noise? Why?
2. Why might cyclostationary detectors be susceptible to frequency selective fading?
3. Adjust the roll-off factors of the pulse shaping filters and re-examine the coherence functions. What effect does the excess bandwidth have on the effectiveness of the detector?
Figure 8.13 Using Spectrum Scope to observe the FFT of the received signal, where the observed
frequency range is from −100 kHz to 100 kHz and the bandwidth is 200 kHz.
Then, let us transmit a sinusoidal signal in that frequency range and perform
the test again.
Figure 8.14 Two types of windows, which will be used to divide the wideband channel into narrowband subchannels. (a) Rectangular window. (b) Hamming window.
Figure 8.15 A single frequency sweep result from two types of windows. (a) A single frequency sweep result using a rectangular window. (b) A single frequency sweep result using a Hamming window. Note there are overlaps in transition areas.
In the previous experiments found within this book, we were dealing with a wireless network consisting of only two nodes. However, in many real-world situations, we encounter multiple nodes communicating with each other in a distributed fashion, where these nodes share a common channel or medium and employ a medium access control (MAC) protocol that enables every node to access this medium. In this section, we will design and implement an SDR communication system capable of achieving medium access control using a simple random backoff process for collision avoidance and the spectrum sensing procedure designed in the earlier parts of this chapter.
• Carrier sense describes the fact that a transmitter uses feedback from a re-
ceiver that detects a carrier wave before trying to send. That is, it tries to
detect the presence of an encoded signal from another station before attempt-
ing to transmit. If a carrier is sensed, the station waits for the transmission in
progress to finish before initiating its own transmission.
• Multiple access describes the fact that multiple stations send and receive on
the medium. Transmissions by one node are generally received by all other
stations using the medium.
8.4 Open-Ended Design Project: CSMA/CA 231
8.4.2 Collision Avoidance
Carrier sense multiple access with collision avoidance (CSMA/CA) is a variant of CSMA. Collision avoidance improves CSMA performance by deferring a node's wireless transmission whenever another node is transmitting, thus reducing the probability of collision through the use of a random binary exponential backoff time.
In this protocol, a carrier sensing scheme is used, namely, a node wishing to
transmit data has to first listen to the channel for a predetermined amount of time
to determine whether or not another node is transmitting on the channel within
the wireless range. If the channel is sensed “idle,” then the node is permitted to
begin the transmission process. If the channel is sensed as “busy,” the node defers
its transmission for a random period of time, namely the backoff time. A simple
example of CSMA/CA is shown in Figure 8.16, where node 1 wants to transmit,
but the channel is sensed as “busy,” so node 1 defers its transmission for five
slots.
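The sense-and-defer behavior in this example can be sketched as follows; the slot duration, contention-window size, and the sensing and transmit routines are all assumptions standing in for the chapter's energy-detection and USRP code:

```matlab
% CSMA/CA deferral sketch: sense, and if busy, back off a random number of slots
slotTime   = 0.01;                        % assumed slot duration in seconds
maxBackoff = 8;                           % assumed contention-window size
while channelIsBusy()                     % hypothetical energy-detection check
    backoffSlots = randi(maxBackoff);     % e.g., node 1 draws 5 in Figure 8.16
    pause(backoffSlots * slotTime);       % defer for the random backoff time
end
transmitFrame();                          % hypothetical transmit routine
```

A binary exponential backoff would additionally double maxBackoff after each deferral; the fixed window above is the simplest variant.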
8.4.3 Implementation Approach
Although medium access control is usually employed in multiple node communica-
tion systems, as a starting point, we will implement a two-node (radio 1 and radio 2)
communication system in this section. These two radios are both transceivers, shar-
ing a common communication channel and using CSMA/CA protocol. Since they are
transceivers, they can switch between transmit and receive modes. We can define the
duration of each mode. Please note since this system is also a half-duplex communica-
tion system, we can build it upon the system obtained in Section 7.4. However, unlike
Section 7.4, where two MATLAB scripts are used to control the switch between dif-
ferent modes, in this section, CSMA/CA is employed to switch the modes. Therefore,
our system should perform the following three stages, as shown in Figure 8.17:
Figure 8.16 An example of a 3-node communication system employing CSMA/CA protocol (based
on [26]), where node 1 employs a backoff time of five time slots.
• Stage 1:
– Radio 1 is in transmit mode. It switches between transmitting a random num-
ber of “Hello world” frames and listening for a codeword from radio 2.
– Radio 2 does the spectrum sensing using energy detection. If the channel
is sensed as “busy,” radio 2 enters a random backoff time. If the channel
is sensed “idle,” radio 2 begins to transmit a codeword to radio 1 and the
system enters stage 2.
• Stage 2:
– Radio 2 is in the transmit mode. It repeatedly sends a random number of
“Change” messages to radio 1.
– Radio 1 is in the receive mode, and it does the spectrum sensing using en-
ergy detection. If the channel is sensed as “busy,” radio 1 keeps receiving
the incoming message. If the channel is sensed “idle,” radio 1 begins to
decode the message it has received, and the system enters stage 3.
• Stage 3:
– If radio 1 can get at least one “Change” in its decoded message, radio 1
begins to broadcast “Goodbye,” and their communication ends.
– If radio 1 does not get any “Change” in its decoded message, the whole
system returns to the origin and starts from stage 1.
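The three stages above amount to a small state machine at each radio. A hedged skeleton of radio 2's side is shown below; every helper function is hypothetical and stands in for the sensing, transmit, and decoding code developed earlier:

```matlab
% Skeleton of radio 2's three-stage behavior (all helpers are hypothetical)
stage = 1;
while stage > 0
    switch stage
        case 1                                   % sense until the channel is idle
            if channelIsBusy()                   % energy detection
                pause(randi(8) * 0.01);          % random backoff
            else
                stage = 2;
            end
        case 2                                   % transmit codewords repeatedly
            for k = 1:randi(10)
                transmitCodeword('Change');
            end
            stage = 3;
        case 3                                   % listen for radio 1's reply
            msg = receiveAndDecode();
            if contains(msg, 'Goodbye')
                stage = 0;                       % communication ends
            else
                stage = 1;                       % restart from stage 1
            end
    end
end
```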
Figure 8.18 An example of a successful implementation at radio 2, which demonstrates the three
stages introduced in Section 8.4.3.
be generated while the system is running. Instead, they can be defined before the system is running.
stage 1 and keeps sensing. When “Hello world” is received, radio 2 continues to
stage 2 and begins transmitting. After a predetermined transmission time, radio 2
proceeds to stage 3, in the listening mode.
8.5 CHAPTER SUMMARY
This chapter focuses on the principles and implementation of two spectrum sensing techniques, namely, energy detection and cyclostationary feature detection. The theory of spectrum sensing and the practical issues of collecting spectral data are first introduced, followed by the implementation of the energy detector and the observation of the cyclostationary feature detector in MATLAB. Using the USRP, a wideband spectrum sensor is implemented in the hardware experiment. Finally, the open-ended design project realizes the CSMA/CA protocol, which relies heavily on these spectrum sensing techniques and enables multiple access.
8.6 PROBLEMS
1. [Power Spectral Density] Calculate and plot the power spectral density of a
100 MHz sinusoidal tone. How would you expect this to look on a spectrum
analyzer?
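One way to start this problem (the 1-GHz sampling rate is an assumption; any rate above the 200-MHz Nyquist rate works, and pwelch requires the Signal Processing Toolbox):

```matlab
% Estimate and plot the PSD of a 100-MHz tone using Welch's method
fs = 1e9;  t = 0:1/fs:1e-5;            % assumed 1-GHz sampling rate
x  = cos(2*pi*100e6*t);
[pxx, f] = pwelch(x, [], [], [], fs);  % default window, overlap, and FFT size
plot(f/1e6, 10*log10(pxx));
xlabel('Frequency (MHz)'); ylabel('PSD (dB/Hz)');
% On a spectrum analyzer, expect a single narrow spike at 100 MHz
```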
2. [Power Spectral Density] The power spectral density of a narrowband noise
signal, n(t), is shown in Figure 8.19. The carrier frequency is 10 Hz.
(a) Find the power spectral densities of the in-phase and quadra-
ture components of n(t).
(b) Find their cross-spectral densities.
3. [Power Spectral Density] The input to a time-invariant linear system with impulse response h(t) is a white Gaussian noise process X(t) with two-sided spectral density N0/2. The output is the random process Y(t). The filter’s response is given in terms of pT(t), the rectangular pulse of duration T, by h(t) = (sin(2πt)/T) ∗ pT(t).
(a) Find the cross-correlation function RXY(t), −∞ < t < ∞.
(b) Find RYY(0).
Figure 8.19 Power spectral density Snn(f) of a narrowband noise signal, n(t). The density has unit height over 8 ≤ |f| ≤ 12 Hz and is zero elsewhere.
References
[1] Mitola III, J., and G. Q. Maguire, “Cognitive Radio: Making Software Radios More Personal,” IEEE Personal Communications, Vol. 6, No. 4, Aug. 1999, pp. 13–18.
[2] Haykin, S., “Cognitive Radio: Brain-Empowered Wireless Communications,” IEEE Journal on Selected Areas in Communications, Vol. 23, No. 2, Feb. 2005, pp. 201–220.
[3] McHenry, M. A., P. A. Tenhula, D. McCloskey, D. A. Roberson and C. S. Hood, “Chicago
Spectrum Occupancy Measurements Analysis and a Long-Term Studies Proposal,” Proceed-
ings of Workshop on Technology and Policy for Accessing Spectrum, Boston, MA, 2006.
[4] Pagadarai, S., and A. M. Wyglinski, “A Quantitative Assessment of Wireless Spectrum
Measurements for Dynamic Spectrum Access,” Proceedings of the IEEE International
Conference on Cognitive Radio Oriented Wireless Networks and Communications,
Hannover, Germany, 2009.
[5] Pagadarai, S., R. Rajbanshi, and A. M. Wyglinski, “Agile Transmission Techniques,” Cog-
nitive Radio Communications and Networks: Principles and Practice, Elsevier, 2009.
[6] Akyildiz, I. F., W. Y. Lee, M. C. Vuran, and S. Mohanty, “NeXt Generation/Dynamic
Spectrum Access/Cognitive Radio Wireless Networks: A Survey,” Computer Networks,
Vol. 50, No. 13, Sept 2006, pp. 2127–2159.
[7] Zhao, Q., and B. M. Sadler, “A Survey of Dynamic Spectrum Access,” IEEE Signal Pro-
cessing, Vol. 24, No. 3, May 2007, pp. 79–89.
[8] Liu, Y., P. Ning, and H. Dai, “Authenticating Primary Users’ Signals in Cognitive Radio Networks via Integrated Cryptographic and Wireless Link Signatures,” Proceedings of the IEEE Symposium on Security and Privacy, Oakland, CA, May 2010.
[9] Hanssen, A., and L. Scharf, “A Theory of Polyspectra for Nonstationary Stochastic Processes,”
IEEE Transactions on Signal Processing, Vol. 51, No. 5, May 2003, pp. 1243–1252.
[10] WPI Wireless Innovation Lab, Welcome to SQUIRRELWeb, http://www.spectrum.wpi.edu/.
[11] Leferman, M. J., “Rapid Prototyping Interface for Software Defined Radio Experimentation,” Master’s Thesis, Worcester Polytechnic Institute, Worcester, MA, 2010.
[12] Poor, H. V., An Introduction to Signal Detection and Estimation, Springer, 2010.
[13] Zhao, Q., and A. Swami, “Spectrum Sensing and Identification,” Cognitive Radio Communications and Networks: Principles and Practice, Elsevier, 2009.
[14] Kay, S. M., “Statistical Decision Theory I,” Fundamentals of Statistical Signal Processing,
Volume 2: Detection Theory, Prentice Hall, 1998.
[15] Shanmugan, K. S., and A. M. Breipohl, “Signal Detection,” Random Signals: Detection,
Estimation and Data Analysis, Wiley, 1988.
[16] Like, E. C., Non-Cooperative Modulation Recognition Via Exploitation of Cyclic Statis-
tics, Master’s Thesis, 2007.
[17] Like, E. C., V. D. Chakravarthy, P. Ratazzi, and Z. Wu, “Signal Classification in Fading
Channels Using Cyclic Spectral Analysis,” EURASIP Journal on Wireless Communications
and Networking, 2009.
[18] Gardner, W. A., W. A. Brown, and C.-K. Chen, “Spectral Correlation of Modulated Signals—Part II: Digital Modulation,” IEEE Transactions on Communications, Vol. 35, No. 6, 1987, pp. 595–601.
[19] Gardner, W. A., Cyclostationarity in Communications and Signal Processing, IEEE Press,
Piscataway, NJ, 1993.
[20] Cabric, D., S. M. Mishra, and R. W. Brodersen, “Implementation Issues in Spectrum Sens-
ing for Cognitive Radios,” Proceedings of the Asilomar Conference on Signals, Systems
and Computers, 2004.
[21] Kim, K., I. A. Akbar, K. K. Bae, J.-S. Um, and C. M. Spooner, “Cyclostationary Approaches
to Signal Detection and Classification in Cognitive Radio,” Proceedings of IEEE Sympo-
sium on New Frontiers in Dynamic Spectrum Access Networks, Dublin, Ireland, 2007.
[22] Sutton, P. D., K. E. Nolan, and L. E. Doyle, “Cyclostationary Signatures in Practical Cogni-
tive Radio Applications,” IEEE Journal on Selected Areas in Communications, 2008.
[23] Hamming, R. W., “Windows,” Digital Filters, 3rd edition, Dover Publications, 1997.
[24] Harris, F. J., “On the Use of Windows for Harmonic Analysis with the Discrete Fourier
Transform,” Proceedings of IEEE, Vol. 66, No. 1, Jan. 1978, pp. 51–83.
[25] Tanenbaum, A. S., “The Medium Access Control Sublayer,” Computer Networks, 4th edi-
tion, Prentice Hall, 2002.
[26] http://www.hpl.hp.com/personal/Jean_Tourrilhes/Linux/Linux.Wireless.mac.html.
[27] Cabric, D., A. Tkachenko, and R. W. Brodersen, “Experimental Study of Spectrum Sensing
Based on Energy Detection and Network Cooperation,” Proceedings of the First Interna-
tional Workshop on Technology and Policy for Accessing Spectrum, Boston, MA, 2006.
Chapter 9
Applications of Software-Defined Radio
Given the latest advances in SDR technology with respect to computational horse-
power, energy efficiency, and form factor compactness, communication system de-
signers are devising innovative wireless transmission implementations capable of
enabling a wide range of wireless data transmission applications deemed either im-
possible or prohibitively costly only several years ago. Until this chapter, the focus
of this book was primarily on providing a solid theoretical foundation with respect
to digital communications and related areas, as well as synthesizing those concepts
in a series of practical, hands-on exercises that help provide a deeper understanding
about how digital transceivers actually operate in the real world.
In this chapter, we provide a context for the theory and practice covered thus
far in this book by exploring three state-of-the-art applications where SDR technol-
ogy has made significant, almost revolutionary, contributions. Specifically, we first
start in Section 9.1 with an exploration of wireless systems capable of performing
autonomous decision making, referred to as cognitive radio. Then, we focus in Sec-
tion 9.2 on the use of SDR technology in enabling next-generation wireless com-
munications in vehicular networks. Finally, we study how SDR technology can be
adapted for satellite communication systems in Section 9.3 before concluding this
chapter.
240 Applications of Software-Defined Radio
Figure 9.1 Wireless networking device employing “knobs” and “dials” in order to perform adaptation.
9.1 Cognitive Radio and Intelligent Wireless Adaptation 241
Table 9.1 Several Common Wireless Networking Device Configuration Parameters (from [1, 5])
Parameter Description
Transmit power Raw transmission power
Modulation type Type of modulation
Modulation index Number of symbols for a given modulation scheme
Carrier frequency Center frequency of the carrier
Bandwidth Bandwidth of transmission signal
Channel coding rate Specific rate of coding
Frame size Size of transmission frame
Time-division duplexing Percentage of transmit time
Symbol rate Number of symbols per second
Table 9.2 Several Common Wireless Networking Environmental Parameters (from [1, 5])
Parameter Description
Signal power Signal power as seen by the receiver
Noise power Noise power density for a given channel
Delay spread Variance of the path delays and their amplitudes for a channel
Battery life Estimated energy left in batteries
Power consumption Power consumption of current configuration
Spectrum information Spectrum occupancy information
Table 9.3 Several Common Wireless Networking Target Experiences (from [1, 5])
Objective Description
Minimize bit error rate Improve overall BER of the transmission environment
Maximize data throughput Increase overall data throughput transmitted by radio
Minimize power consumption Decrease amount of power consumed by system
Minimize interference Reduce the radio interference contributions
Maximize spectral efficiency Maximize the efficient use of the frequency spectrum
All forms of wireless communications require access to radio frequency (RF) spec-
trum. However, given the finite amount of RF spectrum and the rapidly growing
number of wireless applications and end users, many sectors are experiencing the
effects of spectrum scarcity due to the inefficient utilization of this natural re-
source. One solution that is capable of accommodating this growing demand for
RF spectrum while providing greater wireless access is dynamic spectrum access
(DSA). As opposed to the conventional spectrum allocation paradigm, which fo-
cuses on static licensed spectrum assignments where the license holders possess
exclusive transmission rights, the DSA paradigm enables unlicensed devices to
temporarily “borrow” unused licensed spectrum while ensuring the rights of the
incumbent license holders are respected. One technology capable of achieving
9.2 Vehicular Communication Networks 243
DSA is cognitive radio, which often employs autonomous and flexible communi-
cations techniques implemented via an SDR programmable wireless communica-
tions platform.1
There has been much research and development over the past several decades
in order to facilitate ubiquitous, reliable, and efficient wireless data connectivity
between individual vehicles as well as with roadside information infrastructure [8].
The objective of these research efforts into vehicular networking is to further en-
hance the overall driving experience while on the road [9–12]. In particular, wire-
less connectivity can be used to enable a greater level of vehicular safety such that
the driver’s situational awareness is enhanced [13]. Given the potential benefits of
vehicular networking, it is expected that commercial vehicle manufacturers will
introduce this technology in the near future [14–16].
Despite the advantages of employing wireless systems in vehicular applica-
tions, there is increasing concern that the rapid growth of this sector coupled with
significant bandwidth requirements of the vehicular wireless transmissions will
result in a spectrum scarcity issue. One remedy for this issue is to employ SDR
technology that is specifically designed to perform DSA networking across several
vehicles as well as with associated roadside infrastructure, which is referred to
as vehicular dynamic spectrum access (VDSA) [15, 17, 18]. Given the potential
of this solution, the feasibility of VDSA has been extensively studied for several
candidate frequency bands [17], networking protocols enabling VDSA commu-
nications have been assessed [14–16], the theoretical capacity of VDSA networks
have been derived [18], and experimental prototype VDSA platforms and test-
beds have been constructed [19, 20].
9.2.1 VDSA Overview
VDSA operates by establishing a wireless data transmission link between any two
vehicular communication devices across unoccupied portions of licensed spectrum.
Thus, a vehicular wireless device must be capable of identifying all the unoccupied
frequency bands via spectrum sensing prior to any data transmission. An example
of VDSA is shown in Figure 9.2 where “car A” communicates with “car B” by
forming a secondary (unlicensed) user (SU) transmission link LAB across some un-
occupied spectrum that is located between two primary (licensed) user (PU) signals.
Suppose that all vehicular communications are decentralized in this scenario, and
that all information is shared between vehicles using multihop relaying. Conse-
quently, wireless data transmission is subsequently performed between “car B” and
“car C” via the transmission link LBC , which is also located within a region of
unoccupied spectrum, in order to relay the information to all vehicles within the
geographical vicinity.
1. Given the growing number of wireless interfaces being employed on a single vehicle, it is also anticipated
that automobile manufacturers will eventually integrate all the communication systems on a vehicle into a
single SDR platform. Such an approach would facilitate remote updating of wireless standards and protocols
on vehicles without the need for an expensive upgrade recall, as well as simplify the communications
infrastructure on the vehicle itself.
Figure 9.2 An illustration of the vehicular dynamic spectrum access (VDSA) concept (from [20]). In this
scenario, the vehicular secondary (unlicensed) users (SUs) temporarily access unoccupied spectrum
from primary (licensed) users (PUs) for performing wireless data transmission while on the road.
9.2.2 Transmitter Design
An architecture for a VDSA transmitter is shown in Figure 9.3, where the primary
concern when transmitting information within a VDSA networking architecture is
the potential for interference with existing primary and other secondary user signals
located within the spectral and geographical vicinity. As a result, several factors
must be considered when designing the VDSA transmitter, namely [20]:
Figure 9.3 An illustration of a VDSA transmitter design showing how it searches for and avoids both PUs and SUs, as well as determines the optimal location for the next SU carrier frequency, prior to secondary transmission [20].
may need to employ some sort of contention-based physical layer medium access control (MAC) in order to check the availability of a frequency band and avoid interference with the primary user or other secondary users.
• Message generation and protocol: Assuming that the VDSA network is op-
erating in a decentralized mode, it is necessary that the header information
of each frame shared within the network contains information needed to
facilitate the formation and continuation of any VDSA transmission links.
Examples of various information to be included in the header are the unique
identification number of the source vehicular wireless platform, time stamp
information, number of hops the message can be relayed before it is consid-
ered out of date, and identification number of the destination vehicle.
Figure 9.4 A photograph of the actual VDSA networking experiment conducted on March 12,
2011 at 12:50 PM [20]. Picture taken from car D. The figure shows the VDSA network implemented
on the highway I-190 north of Worcester, MA. As indicated on the figure, the leading car is perform-
ing spectrum sensing and the rest of the cars form the proposed VDSA networking architecture.
where the leading car was equipped with a spectrum analyzer in order to record all
the VDSA transmissions occurring during the experiment, while each of the remaining
cars was equipped with a USRP2 SDR platform with its antenna anchored to the ex-
terior of each vehicle. The experiment was initiated in Worcester, MA along Interstate
I-190 north to Leominster, MA followed by heading east along Route 2 until the in-
tersection with Interstate I-495, where the proposed VDSA network proceeded south
until the intersection between Interstate I-495 and Interstate I-290, where the vehicles
finally headed back west on Interstate I-290 to Worcester, MA.
During the experiment, car 1 of the VDSA network was continuously broadcast-
ing hazard messages, while the remaining three cars constantly performed spectrum
sensing over a 100-MHz-wide frequency band from 2.4 GHz to 2.5 GHz and per-
formed multihop relaying in the event that a message was successfully intercepted,
as shown in Figure 9.5. The receiver would lock on to the frequency and start to
decode the message as soon as transmission was detected using the three-sigma rule.
If the message was successfully decoded and recorded by the USRP2, it switched to
transmit mode and broadcasted the message to the rest of the cars. In the event that
no message was decoded, or the decoded message did not correspond to the desired
network protocol, the USRP2 ignored the transmission and started another itera-
tion of spectrum sensing. Each radio was connected to a laptop that would record
and time stamp every received message for performance analysis.
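The three-sigma detection rule mentioned above can be sketched as follows; in practice the noise mean and standard deviation would be estimated from signal-free sensing intervals rather than generated synthetically as they are here:

```matlab
% Three-sigma rule: flag a bin as occupied if its power exceeds mu + 3*sigma
noiseFloor = abs(randn(1, 1024)).^2;        % placeholder noise-only measurement
mu    = mean(noiseFloor);
sigma = std(noiseFloor);
spectrum = noiseFloor;
spectrum(300) = mu + 10*sigma;              % inject a strong test signal in bin 300
occupied = spectrum > mu + 3*sigma;         % detection mask over frequency bins
detectedBins = find(occupied);              % bins where transmission is declared
```

Under Gaussian noise statistics, a bin exceeding the mean by three standard deviations is a rare event, so the rule keeps the false-alarm rate low while remaining trivial to compute.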
9.3 Satellite Communications
Figure 9.5 Three-dimensional plot showing a sample of the experimental results obtained from the VDSA test-bed involving the secondary user transmission in the neighborhood of licensed user signals. Notice the multihop transmission of “car 1” → “car 2” → “car 3” → “car 4,” where the transmission frequencies were dynamically chosen based on availability.
systems contained only the hardware necessary to perform the tasks that they were
designed for, which made these systems difficult to modify and upgrade. However,
with the advent of various digital processing and computing technologies, commu-
nication systems have evolved to the point where many of the encoders, modula-
tors, filters, and decoders used by these communication systems can be implemented
either partially or entirely using software and/or programmable logic [22]. Those
communications systems that implement all of their baseband functionality in soft-
ware are referred to as SDR [23].
While some SDR platforms employ general-purpose processors or even digital
signal processors (DSPs), many use a technology called field-programmable gate arrays (FPGAs). An FPGA consists of reconfigurable logic elements and a switch matrix
to route signals between them. These devices can be configured to support simple
logic operations, such as addition, or more complex systems, such as digital filters.
In addition, some FPGAs can support dynamic reconfiguration, where a system is
capable of swapping components as needed, without any reprogramming. Conse-
quently, the firmware on an FPGA can be updated remotely in order to install new
blocks and remove unused blocks. Although relatively difficult to use with respect
to the design and prototype of a communication system, one of the primary advan-
tages of an FPGA is that it is very computationally powerful, making it a suitable
choice for any SDR implementation involving complex operations. Commercially
available examples of FPGA-based SDR platforms include Lyrtech’s small form
factor (SFF) SDR development platform and the Wireless Open-Access Research
Platform (WARP) [24, 25]. Although the USRP family of SDR platforms available
from Ettus Research LLC possess an FPGA, these have been mostly used to perform
sampling and filtering operations on incoming and outgoing data streams.
One such CubeSat FPGA-based SDR platform that attempts to meet these
characteristics was presented in [30–32], with the hardware platform shown in
Figure 9.6 and the SDR firmware architecture shown in Figure 9.7. The actual
SDR communications hardware is based on the redesign of a USRP1 SDR platform
by Ettus Research LLC, which publishes their open source SDR hardware designs
and schematics online. At the center of the SDR architecture is the MicroBlaze
softcore microprocessor, which acts as a traffic mediator, guiding data between
the USB2 port to the various other devices in the system. The SPI and I2C control-
lers provide the users with a means of sending configuration information to the
USRP and to the RF daughtercards. One of the RS232 controllers connects to a serial port on the board, while the other connects to the AT90, which controls the
Figure 9.6 A photograph of the COSMIAC CubeSat SDR system. The bottom board is the FPGA
board while the one on top is the RF daughtercard [29].
ASIM for the device. The other components simply control the peripherals that
give them their names.
Several advantages of this implementation of a CubeSat FPGA-based SDR plat-
form include the following [30]:
As for disadvantages, this system presented in [30] employs components that only
work with the Xilinx toolset, so the system loses its platform independence. Fur-
thermore, the USB2 controller that is at the core of the USRP1 implementation is
currently priced at $14,000, which can be prohibitively expensive.
Figure 9.7 Block diagram of the proposed CubeSat SDR firmware. The MicroBlaze processor acts
as the central control for all of the various communications blocks [30].
9.4 Chapter Summary
In this chapter, we briefly explored three applications where SDR technology is employed in order to facilitate wireless data transmission in challenging environments.
Whether it is intelligently and adaptively seeking out transmission opportunities
across time, frequency, and space; enabling the transfer of information between ve-
hicles while driving along a stretch of highway; or facilitating the exchange of data
between a satellite in orbit and a ground station back on Earth, SDR is increasingly
becoming the core technology that is enabling these and many other applications
in today’s society.
References
[1] Wyglinski, A. M., M. Nekovee, and T. Hou, Cognitive Radio Communications and Net-
works: Principles and Practice, Academic Press, 2009, http://www.elsevierdirect.com/
ISBN/9780123747150/ Cognitive-Radi-Communications-and-Networks.
[2] Barker, B., A. Agah, and A. M. Wyglinski, “Mission-Oriented Communications Properties
for Software Defined Radio Configuration,” Cognitive Radio Networks, CRC Press, 2008.
[3] Newman, T., A. M. Wyglinski, J. B. Evans, “Cognitive Radio Implementation for Efficient
Wireless Communication,” Encyclopedia of Wireless and Mobile Communications, CRC
Press, 2007.
[4] Newman, T., R. Rajbanshi, A. M. Wyglinski, J. B. Evans, and G. J. Minden, “Population
Adaptation for Genetic Algorithm-Based Cognitive Radios,” ACM/Springer Mobile Net-
works and Applications, 2008.
[22] Wireless Innovation Forum, “The Three Sigma Rule,” What is Software Defined Radio, http://data.memberclicks.com/site/sdf/SoftwareDefinedRadio.pdf.
[23] Youngblood, G., “A Software-Defined Radio for the Masses, Part 1,” QEX, Jul/Aug:
13–21, 2002, http://www.arrl.org/files/file/Technology/tis/info/pdf/020708qex013.pdf.
[24] Mouser—Lyrtech SFF SDR Development Platform, April 2011, http://www.mouser.com/
Lyrtech/.
[25] Wireless Open-Access Research Platform, April 2011, http://warp.rice.edu/index.php.
[26] Vulcan Wireless SDR Products, April 2011, http://www.vulcanwireless.com/products/.
[27] Lynaugh, K., and M. Davis, Software Defined Radios for Small Satellites, Nov. 2010, http://www.cosmiac.org/ReSpace2010/.
[28] California Polytechnic State University, CubeSat Design Specification Rev. 12, 2009, http://
www.cubesat.org/images/developers/cds_rev12.pdf.
[29] COSMIAC CubeSat SDR Photograph, April 2011, http://www.cosmiac.org/images/
CCRBwSDR.JPG.
[30] Olivieri, S., J. Aarestad, L. H. Pollard, A. M. Wyglinski, C. Kief, et al., “Modular FPGA-Based Software Defined Radio for CubeSats,” Proceedings of the IEEE International Conference on Communications, Ottawa, Canada, June 2012.
[31] Olivieri, S., A. M. Wyglinski, L. H. Pollard, J. Aarestad, and C. Kief, “Responsive Satellite Communications via FPGA-Based Software-Defined Radio for SPA-U Compatible Platforms,” presented at the ReSpace/MAPLD 2010 Conference, Albuquerque, NM, Nov. 2010, http://www.cosmiac.org/ReSpace2010/.
[32] Olivieri, S. J., “Modular FPGA-Based Software Defined Radio for CubeSats,” Master’s
Thesis, Worcester Polytechnic Institute, Worcester, MA, May 2011.
Appendix A
You will be using MATLAB and Simulink for the experiments, as well as for the
open-ended design projects in this book. This appendix serves as a brief refresher of
MATLAB, since you should have used it before. However, if you don’t have exten-
sive experience with Simulink, then this appendix shows you how to get started with
the tool. Please note the MATLAB portion of this appendix is mainly based on the
MATLAB Documentation presented in [1] and the Simulink portion is based on the
Simulink Basics Tutorial presented in [2], but here we extract the most important and
fundamental concepts such that you can quickly get started by reading this appendix. For
more information about these two products, you are encouraged to refer to [1] and [2].
MATLAB is widely used in all areas of applied mathematics, in education and research
at universities, and in industry. MATLAB stands for MATrix LABoratory and the soft-
ware is built up around vectors and matrices. Consequently, this makes the software
particularly useful for solving problems in linear algebra, but also for solving algebraic
and differential equations as well as numerical integration. MATLAB possesses a collection of graphical tools capable of producing advanced GUIs and data plots in both 2D and 3D. MATLAB also has several toolboxes useful for performing communications, signal processing, image processing, optimization, and other specialized operations.
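For example, the matrix orientation of the language turns common linear-algebra and integration tasks into one-liners:

```matlab
% Solving a linear system and integrating numerically in a few lines
A = [2 1; 1 3];  b = [3; 5];
x = A \ b;                               % solve A*x = b by left division
q = integral(@(t) exp(-t.^2), 0, Inf);   % numerical integration, approx sqrt(pi)/2
```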
When writing programs, you will need to do this in a separate window, called the editor.
To open the editor, go to the "File" menu and choose either the "New...Script" option (to
create a new program) or the "Open" option (to open an existing document).
In the editor, you can now type in your code in much the same way that you would use a
text editor or a word processor. There are menus for editing the text that are also similar
to those of any word processor. While typing your code in the editor, no commands will be
performed. Consequently, in order to run a program, first save the file with a
".m" extension, and then type the name of the file containing your program at the
command window prompt. When typing the filename in the command window, do
not include ".m". By pressing enter, MATLAB will run your program and
perform all the commands given in your file.
In case your code has errors, MATLAB will complain when you try to run the
program in the command window. When this happens, try to interpret the error
message and make the necessary changes to your code in the editor. The error that
is reported by MATLAB is hyperlinked to the line in the file that caused the prob-
lem. Using your mouse, you can jump directly to the line in your program that has
caused the error. After you have made the changes, make sure you save your file
before trying to run the program again in the command window.
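As a minimal illustration (the file name average3.m is hypothetical), suppose the editor contains the following script:

```matlab
% average3.m -- compute the mean of a fixed three-element vector
x = [2 4 9];
m = sum(x)/numel(x)   % no semicolon, so the result is printed
```

Saving this as average3.m and typing average3 (without the ".m") at the command window prompt runs the script and prints m = 5.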
A.3 Useful MATLAB Tools
This section introduces general techniques for finding errors, as well as using automatic
code analysis functions in order to detect possible areas for improvement within the
MATLAB code. In particular, the MATLAB debugger features located within the Editor,
as well as equivalent Command Window debugging functions, will be covered.
Debugging is the process by which you isolate and fix problems with your code.
Debugging helps to correct two kinds of errors: syntax errors, such as mistyping a
function or variable name, and run-time errors, which produce unexpected results
when the code runs.
A.3.1 M-Lint
You can check for coding problems in three different ways, all of which report
the same messages:
• Continuously check code in the Editor while you work: view M-Lint messages
and modify your file based on the messages. The messages update automatically
and continuously, so you can see whether your changes addressed the issues raised.
You can also get M-Lint messages using the mlint function. For more information
about this function, you can type help mlint in the Command Window. Read the
online documentation [3] for more information about this tool.
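For example, assuming a file named myscript.m exists on the MATLAB path (a hypothetical name), you can request its M-Lint report from the Command Window:

```matlab
% Report potential problems and suggested improvements for one file:
mlint('myscript.m')
```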
A.3.2 Debugger
The MATLAB Editor, graphical debugger, and MATLAB debugging functions are
useful for correcting run-time problems. They enable you to access function workspaces
and examine or change the values they contain. You can also set and clear breakpoints,
which are indicators that temporarily halt execution in a file. While stopped
at a breakpoint, you can change the workspace context, view the function call
stack, and execute the lines in a file one by one.
There are two important techniques in debugging: the breakpoint and the step.
Setting breakpoints to pause the execution of a function enables you to
examine values where you think the problem might be located. While debugging, you
can also step through an M-file, pausing at points where you want to examine values.
There are three basic types of breakpoints that you can set in M-files: standard
(line) breakpoints, conditional breakpoints, and error breakpoints.
You cannot set breakpoints while MATLAB is busy (e.g., running an M-file) unless
that M-file is paused at a breakpoint. While the program is paused, you can view
the value of any variable currently in the workspace, thus allowing you to examine
values when you want to see whether a line of code has produced the expected re-
sult or not. If the result is as expected, continue running or step to the next line. If
the result is not as expected, then that line, or a previous line, contains an error.
While debugging, you can change the value of a variable in the current workspace
to see if the new value produces expected results. While the program is paused, assign
a new value to the variable in the Command Window, Workspace browser, or Array
Editor. Then continue running or stepping through the program. If the new value does
not produce the expected results, the program has a different problem.
Besides using the Editor, which is a graphical user interface, you can also debug
MATLAB files by using debugging functions from the Command Window, or you
can use both methods interchangeably. Read the online documentation [4] for more
information about this tool.
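As a sketch of the Command Window workflow (the file name myscript.m and line number 10 are hypothetical):

```matlab
dbstop in myscript at 10   % set a breakpoint at line 10 of myscript.m
myscript                   % run the file; execution pauses at line 10
x                          % inspect a variable in the paused workspace
x = 5;                     % optionally assign a new value before continuing
dbstep                     % execute the next line
dbcont                     % resume until the next breakpoint (if any)
dbclear all                % remove all breakpoints
dbquit                     % exit debug mode
```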
A.3.3 Profiler
Profiling is a way to measure the amount of time a program spends on performing
various functions. Using the MATLAB Profiler, you can identify which functions
in your code consume the most time. You can then determine why you are calling
them and look for ways to minimize their use. It is often helpful to decide whether
the number of times a particular function is called is reasonable. Since programs of-
ten have several layers, your code may not explicitly call the most time-consuming
functions. Rather, functions within your code might be calling other time-consuming
functions that can be several layers down into the code. In this case, it is important
to determine which of your functions are responsible for such calls.
Profiling helps to uncover performance problems that you can then solve, for example,
by avoiding unnecessary computation or by changing your algorithm to avoid costly functions.
When you reach the point where most of the time is spent on calls to a small number
of built-in functions, you have probably optimized the code as much as you can
expect. The Profiler can be opened in several ways; for example, type profile
viewer in the Command Window.
To profile an M-file or a line of code, follow these steps after you open the Profiler,
as shown in Figure A.1:
1. In the Run this code field in the Profiler, type the statement you want to run.
2. Click Start Profiling (or press Enter after typing the statement).
3. When profiling is complete, the Profile Summary report appears in the Pro-
filer window. Read the online documentation [5] for more information about
this tool.
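The same measurement can be scripted from the Command Window using the profile function (the call to myfunction is a hypothetical stand-in for the statement being profiled):

```matlab
profile on          % start collecting timing data
myfunction();       % run the code of interest
profile off         % stop collecting
profile viewer      % open the Profile Summary report
```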
A.4 Simulink Introduction
Figure A.2 Start a Simulink session by clicking on the Simulink icon, which is circled.
Figure A.3 Simulink library browser. To create a new model, click on the new file icon, which is circled.
If your communications model does not work well with these default settings, you
can change each of the individual settings as the model requires.
A.6 Build a Simulink Model
To demonstrate how a system is represented using Simulink, we will build the block
diagram for a simple model consisting of a random number input and two scope
displays, which is shown in Figure A.5.
This model consists of four blocks: a Random Number block, an Abs block, and two
Scope blocks. The Random Number block is a source block from which a random signal
is generated. This signal is transferred through a line in the direction indicated by the
arrow to the Abs math block. The Abs block modifies its input signal (calculates its
absolute value) and outputs a new signal through a line to the Scope block. The Scope
is a sink block used to display a signal, much like an oscilloscope.
Building the model involves three steps:
1. Get the necessary blocks from the Library Browser and place them in the
model window.
2. Modify the parameters of the blocks according to the system that we are
modeling.
3. Connect all the blocks with lines to complete the Simulink model.
Each of these steps will be explained in detail using our example system. Once a
system is built, simulations can be run to analyze its behavior.
The same method can be used to place the Abs and Scope blocks in the model
window. The Abs block can be found in the "Math Operations" subfolder and the
Scope block is located in the "Sinks" subfolder. Arrange the four blocks in the model
window by selecting and dragging an individual block to a new location so that
they look similar to Figure A.7.
Figure A.6 Obtain Random Number block from Simulink Library Browser.
Let us assume that our system’s random input has the following Gaussian dis-
tribution: Mean = 0, Variance = 1, Seed = 0. Enter these values into the appropriate
fields and leave the “Sample time” set to 0.1. Then click “OK” to accept them and
exit the window.
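For reference, the block's output can be mimicked in plain MATLAB; the sketch below assumes the same parameters (Mean = 0, Variance = 1, sample time 0.1 s) and an illustrative 10-second run:

```matlab
t = 0:0.1:10;                    % one sample every 0.1 s
x = 0 + sqrt(1)*randn(size(t));  % mean + sqrt(variance)*randn
plot(t, x);                      % rough equivalent of the Scope display
```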
The Abs block simply returns the absolute value of its input, and the Scope
block plots its input signal as a function of time, so there are no system parameters
that we can change for these two blocks. We will look at the Scope block in more
detail after we have run our simulation.
Figure A.7 Obtain all the necessary blocks for the model and arrange them in the model window.
Figure A.8 By double-clicking on the Random Number block, we can open the block and set the pa-
rameters. The "Variance" parameter is highlighted because its value will be changed in Section A.7.
Figure A.9 Two blocks can be connected by drawing a line between them. (a) Two blocks are
properly connected, where the arrowhead is filled in. (b) Two blocks are not connected, where the
arrowhead is open.
After drawing in the lines and repositioning the blocks, the example
system model should look like Figure A.5.
In some models, it will be necessary to branch a signal so that it is transmitted
to two or more different input terminals. For example, in this Simulink model, the
output of the Random Number block is transmitted to two different input terminals:
one is the Abs block, and the other is the Scope 1 block. This can be done by first placing
the mouse cursor at the location where the signal is to branch. Then, using either
the CTRL key in conjunction with the left mouse button or just the right mouse
button, drag the new line to its intended destination. The routing of lines and the
location of branches can be changed by dragging them to their desired new positions.
To delete an incorrectly drawn line, simply click on it to select it and hit the
DELETE key.
Figure A.10 Start running a simulation by clicking on the Start option of the Simulation menu,
which is circled.
A.7 Run Simulink Simulations
Now that our model has been constructed, we are ready to simulate the system. To do
this, go to the "Simulation" menu and click on "Start," as shown in Figure A.10, or
just click on the "Start/Pause Simulation" button in the model window toolbar (it looks
like the "Play" button on a VCR) if you are using the Windows operating system. Since
our example is a relatively simple model, its simulation runs almost instantaneously.
With more complicated systems, however, you will be able to see the progress of the
simulation by observing its running time in the lower box of the model window.
Double-click the Scope and the Scope 1 blocks to view the random signal and its absolute
value for the simulation as a function of time. Once the Scope window appears, click
the "Autoscale" button in its toolbar (it looks like a pair of binoculars) to scale the graph
to better fit the window. Having done this, you should be able to see the results shown in
Figure A.11. Observe that each point in Figure A.11(b) is the absolute value of the
corresponding point in Figure A.11(a), which is exactly the behavior expected from this model.
Figure A.11 The scope displays of the Simulink model. (a) The random signal displayed by Scope 1.
(b) The absolute value of the random signal displayed by Scope.
As mentioned in Section A.6.2, we can easily change the parameters of a
block by double-clicking on it. How would this change affect the output observed
by the scopes? Let us change the "Variance" of the Random Number block
to 2 and keep the other parameters as they were. Then, re-run the simulation and
view the scopes. Please note that the scope graphs will not change unless the simulation
is re-run, even though the "Variance" has been modified. The new scope graphs
should now look like Figure A.12.
Comparing Figure A.12 with Figure A.11, we notice that the only difference
between this output and the one from our original system is the amplitude of the
random signal. The shapes are identical because the same seed value is used. However,
since the variance is doubled, the standard deviation grows by a factor of √2, so the
new amplitude is about 1.4 times the original amplitude.
Figure A.12 The scope displays of the Simulink model with variance = 2. (a) The random signal
displayed by Scope 1. (b) The absolute value of the random signal displayed by Scope.
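A quick way to see the scaling is to compare standard deviations, since the amplitude of a zero-mean Gaussian signal scales with its standard deviation:

```matlab
% Ratio of standard deviations when the variance goes from 1 to 2:
ratio = sqrt(2)/sqrt(1)   % approximately 1.4142, not 2
```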
References

Appendix B
Universal Hardware Driver (UHD)
The UHD is the Universal Software Radio Peripheral (USRP) hardware driver. It works
on all major platforms (Linux, Windows, and Mac), and can be built with the GCC,
Clang, and MSVC compilers [1].
The goal of the UHD is to provide a host driver and API for current and future
Ettus Research products. Users can use the UHD driver standalone or
with third-party applications, such as Simulink. However, there are a few things you
need to do before you can actually use your Ethernet-based USRP hardware with
the MATLAB R2011b software via the UHD interface, which will be introduced in
the following sections. In this appendix, we assume that the employed operating
system is Ubuntu [2].
Connect the Ethernet-based USRP board to your computer using a gigabit Ethernet
cable. For optimal results, attach the USRP directly to the computer on a private
network (i.e., one without any other computers, routers, or switches). Connecting
the USRP board directly to your computer requires the following:
With the latest UHD interface support and the Communications System Toolbox, you
can use an Ethernet-based USRP with MATLAB and Simulink. Starting with MATLAB
R2012b, the USRP support package can be downloaded and installed using the
"targetinstaller" command in MATLAB. For releases prior to R2012b, go to [3] and
download the support package. You need to fill out a form and submit it before you can
save the package on your local computer. After you get the package, install it following
steps 1–4 in README.txt.
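On MATLAB R2012b and later, the installer can be launched directly from the Command Window:

```matlab
% Launch the support package installer GUI and select the USRP package:
targetinstaller
```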
MATLAB R2011b support for Ethernet-based USRP devices has been tested using
UHD binary download release 003.002.003. For other MATLAB versions, the
user should check the MathWorks documentation for the support package to find
out which version of UHD is supported. You can download the necessary firmware
images from [4] and burn them to an SD card following the instructions below.
Please note that you need to be a root user in order to perform the following steps on
the Ubuntu operating system.
In the Ubuntu operating system, in order to configure the Ethernet card for your USRP
hardware, you need to perform the following tasks:
Since the USRP radio has an IP address of 192.168.10.2, you need to assign your host
computer an address of the form 192.168.10.X, where X is any value other than 2.
• sudo bash
• cd /root/scripts/
• emacs iptables.sh
Now, you should have the Iptables open in front of you. You only need to modify
two lines concerning the default firewall policies:
When this modification is done, you should save the Iptables and execute it by typ-
ing the following command on the terminal:
• ./iptables.sh
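Once the network and firewall are configured, you can verify from MATLAB that the radio is reachable (this assumes the USRP support package is installed):

```matlab
% Enumerate attached USRP devices; the default radio address is 192.168.10.2
devs = findsdru
```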
B.7 Problems with Unicode
A user may get the following error when running findsdru (after running setupsdru):
??? The conversion from a local code page string to Unicode changes the number
of characters. This is not supported. Error in usrp_uhd_mapi
This error occurs due to incompatible localization between the USRP hardware
and your host computer. It can be fixed on Linux by going to a command shell and
executing:
• export LANG=C, or
• export LANG=en_US.ISO8859-1
These commands reset the locale to be compatible with the usrp_uhd_mapi code.
References
[1] http://ettus-apps.sourcerepo.com/redmine/ettus/projects/uhd/wiki/
[2] http://www.ubuntu.com/
[3] http://www.mathworks.com/programs/usrp/
[4] http://files.ettus.com/binaries/uhd_stable/releases/uhd_003.002.003-release/images-only/
[5] https://ettus-apps.sourcerepo.com/redmine/ettus/projects/uhd/repository/revisions/master/
show/host/utils
Appendix C
Data Flow on USRP
This appendix introduces how data flows on the USRP board. After reading this
appendix, you should have better insight into the USRP board. To make things clearer,
the data flow is introduced in two parts: the receive (RX) path and the transmit (TX)
path, as shown in Figure C.1.
C.1 Receive Path
On the RX path, we have four ADCs and four DDCs. Generally speaking, on each
daughterboard, the two analog input signals are sent to two separate ADCs. The
digitized samples are then sent to the FPGA for processing. Upon entering the
FPGA, the digitized signals are routed by a multiplexer (MUX) to the appropriate
digital down-converter (DDC). There are four DDCs, each of which has two inputs
(I and Q). The MUX is like a router or a circuit switcher; it determines which ADC
(or constant zero) is connected to each DDC input. We can control the MUX using
the usrp.set_mux() method in Python. This allows multiple channels to be
selected out of the same ADC sample stream.
The standard FPGA configuration includes digital down-converters implemented
with four-stage cascaded integrator-comb (CIC) filters. CIC filters are
very high-performance filters that use only adders and delays. For spectral shaping and
out-of-band signal rejection, 31-tap half-band filters are cascaded with
the CIC filters to form the complete DDC stage. The standard FPGA configuration
implements two complete digital down-converters. There is also an image with four
DDCs but without half-band filters. This allows one, two, or four separate RX
channels. It is possible to specify the firmware and FPGA files to be used by
loading the corresponding rbf files. By default, an rbf is loaded from /usr/local/
share/usrp/rev{2,4}. The one used, unless you specify the fpga_filename con-
structor argument when instantiating a usrp source or sink, is std_2rxhb_2tx.rbf.
Table C.1 lists the three current rbf files and their usage.
Now let us take a closer look at the digital down-converter. What does it do?
First, it down-converts the signal from the IF band to baseband. Second, it decimates
the signal so that the data rate can be handled by USB 2.0 and is reasonable
for the host computer's processing capability. Figure C.2 shows the block diagram
of the DDC. The complex input signal (at IF) is multiplied by a constant-frequency
(usually also IF) complex exponential. The resulting signal is also complex and centered
at 0 Hz. Then we decimate the signal by a factor M.
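The DDC's two operations (mixing and decimation) can be sketched in MATLAB; the sample rate, IF frequency, and decimation factor below are illustrative values, and the anti-alias filtering is omitted for brevity:

```matlab
fs = 64e6; f_if = 4e6; M = 8;           % illustrative values only
n  = 0:9999;
x  = exp(1j*2*pi*f_if*n/fs);            % stand-in complex IF input
y  = x .* exp(-1j*2*pi*f_if*n/fs);      % mix down: now centered at 0 Hz
y  = y(1:M:end);                        % decimate by a factor M
```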
Note that when there are multiple channels (up to four), the channels are interleaved.
For example, with four channels, the sequence sent over the USB would be "I0
Q0 I1 Q1 I2 Q2 I3 Q3 I0 Q0 I1 Q1," and so on. The USRP can operate in full-duplex
mode. When in this mode, the transmit and receive sides are completely independent of
one another. The only consideration is that the combined data rate over the bus must be
32 MBps or less. Finally, the I/Q complex signal enters the computer via the USB.
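The interleaving convention above can be undone on the host with a simple reshape; this sketch uses a stand-in integer stream in place of real samples:

```matlab
raw = 1:24;                           % stand-in interleaved stream (4 channels)
s   = reshape(raw, 8, []);            % one column per I0 Q0 I1 Q1 I2 Q2 I3 Q3 round
ch  = s(1:2:end,:) + 1j*s(2:2:end,:); % row k holds channel k-1 as I + jQ
```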
To illustrate the effect of the half-band filter in the Rx path, two different situations
are discussed. Given that the bandwidth of each Rx channel is f, if we set the center
frequency to 0 Hz, then in the frequency domain each channel occupies [–f/2, f/2].
C.1.1 Situation 1
Suppose there are four users (u1, u2, u3, and u4), and each of them uses one of the
Rx channels, as shown in Figure C.3. In the frequency domain, the bandwidths they
occupy are all from –f/2 to f/2. In this situation, the FPGA rbf file should be
std_4rx_0tx.rbf, which means the FPGA firmware contains four Rx paths
without half-bands and zero Tx paths.
Figure C.2 The structure of the digital down-converter on the receive path, where the input IF signals I'(t)
and Q'(t) are first down-converted to baseband by a carrier frequency of wc, and then decimated
by a factor M.
Figure C.3 Four Rx channels (labeled 1, 2, 3, and 4), starting from the inputs A1, A2, B1, and B2, pass-
ing through one of the ADCs and one of the DDCs, then reaching the USB in the end.
Figure C.4 Two Rx channels (labeled 1 and 2), starting from the inputs A1 and B1, passing through
one of the ADCs and one of the DDCs with half-band filter, then reaching the USB in the end.
After entering the DDC, the bandwidth of each signal will be expanded to [–Mf/4, Mf/4].
The sum of all the bandwidths is 2Mf.
C.1.2 Situation 2
Suppose there are two users (u1 and u2), and each of them uses one of the Rx
channels, as shown in Figure C.4. In the frequency domain, the bandwidths they
occupy are both from –f/2 to f/2. In this situation, the FPGA rbf file should be
std_2rxhb_2tx.rbf, which means the FPGA firmware contains two Rx paths with
half-bands and two Tx paths. If the decimation value of the DDC is M/2, after entering
the DDC, the bandwidth of each signal will be expanded to [–Mf/4, Mf/4]. Then,
after entering the half-band filter, the bandwidth of each signal will be further
expanded to [–Mf/2, Mf/2]. The sum of all the bandwidths is still 2Mf.
C.2 Transmit Path
On the TX path, the story is much the same, except that it happens in reverse.
We need to send a baseband I/Q complex signal to the USRP board. The digital
up-converter (DUC) will interpolate the signal, up-convert it to the IF band, and finally
send it through the DAC, as shown in Figure C.5.
Figure C.5 The structure of the digital up-converter on the transmit path, where a baseband I/Q com-
plex signal Iin(t) and Qin(t) is up-converted to the IF band by a carrier frequency of wc. I'(t) and Q'(t)
will be sent to the DAC.
More specifically, interleaved data sent from the host computer is first pushed
into the transmit FIFO on the USRP. Each complex sample is 32 bits long (16 bits
for in-phase and 16 bits for quadrature). This data is de-interleaved and sent to the
input of an interpolation stage. Assuming the user has specified an interpolation
factor of L, this interpolation stage will interpolate the input data by a factor of L/4
using CIC filters.
The output of the interpolation stage is sent to the demultiplexer (DEMUX).
The DEMUX is less complicated than the receiver MUX. Here, the in-phase and
quadrature output of each CIC interpolator is sent to in-phase and quadrature in-
puts of one of the DAC chips on the motherboard. The user specifies which DAC
chip receives the output of each CIC interpolator [3].
Inside the AD9862 CODEC chip, the complex-valued signal is interpolated
by a factor of four using half-band filter interpolators. This completes the
requested factor-of-L interpolation. After the half-band interpolators, the complex-
valued signal is sent to a DUC. Note that, at this point, the signal is not necessarily
modulated to a carrier frequency; the daughterboard might further up-convert
the signal.
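The two interpolation stages compose to the requested overall factor; a quick check with an illustrative value of L:

```matlab
L   = 128;        % illustrative interpolation factor (multiple of 4)
cic = L/4;        % FPGA CIC interpolation stage
hb  = 4;          % AD9862 half-band interpolators
total = cic*hb    % equals L, the requested factor
```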
The in-phase and quadrature output of the DUC are sent as 14-bit samples to
individual digital-to-analog converters, which convert them to analog signals at a
rate of 128 mega-samples per second. These analog signals are then sent from the
AD9862 to either daughterboard interface J667 or J669, which represent slots TxA
and TxB, respectively.
Unlike the DDCs, the DUCs on the transmit side are actually contained
in the AD9862 CODEC chips, not in the FPGA. The only transmit signal processing
blocks in the FPGA are the interpolators. The interpolator outputs can be routed to
any of the four CODEC inputs [1].
As we have mentioned before, the multiple RX channels (1, 2, or 4) must all
be the same data rate (i.e., same decimation ratio). The same applies to the 1, 2, or
4 TX channels, which each must be at the same data rate (which may be different
from the RX rate) [4].
References
[1] Patton, L. K., A GNU Radio Based Software-Defined Radar, Master's Thesis, 2007.
[2] Proakis, J. G., and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms,
and Applications, 3rd ed., Upper Saddle River, NJ: Prentice Hall, 1996.
[3] Shen, D., GNU Radio Tutorials, http://www.nd.edu/˜dshen/GNU/, [Online], 2005.
[4] Abbas Hamza, F., The USRP Under 1.5X Magnifying Lens!, gnuradio.org/redmine/
attachments/129/USRP_Documentation.pdf, 2008.
Appendix D
Quick Reference Sheet
D.1 LINUX
• sudo bash
• cd /root/scripts/
• emacs iptables.sh
Now, you should have the Iptables open in front of you. You only need to
modify two lines concerning the default firewall policies:
When this modification is done, you should save the Iptables and execute it by
typing the following command on the terminal:
• ./iptables.sh
D.2 MATLAB
If you do not want to have the result of a calculation printed out to the com-
mand window, you can put a semi-colon at the end of the command; MATLAB still
performs the command in the “background.”
• To know what variables have been declared, type whos. Alternatively, you
can view the values by opening the workspace window. This is done by se-
lecting the Workspace option from the View menu.
• To erase all variables from the MATLAB memory, type clear.
• To erase a specific variable, say x, type clear x.
• To clear two specific variables, say x and y, type clear x y; that is, separate
the different variables with a space. Variables can also be cleared by selecting
them in the Workspace window and selecting the delete option.
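The commands above can be exercised in a short session:

```matlab
x = 5; y = 7;   % semicolons suppress the printed output
whos            % list the declared variables (here x and y)
clear x y       % erase only x and y
clear           % erase every variable from memory
```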
D.2.5.2 Matrices
Matrices can be created according to the following example. The matrix A is cre-
ated by typing A=[1 2; 3 4; 5 6], (i.e., rows are separated with semi-colons).
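For example:

```matlab
A = [1 2; 3 4; 5 6]   % 3x2 matrix; semicolons separate the rows
size(A)               % returns the dimensions, 3 rows and 2 columns
```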
D.3 USRP2 Hardware
• There are two cables inside the USRP2 that connect the RF1 and RF2 SMA
connectors on the front of the USRP2 case to connectors J2 and J1 on the
daughtercard.
• The XCVR2450 always uses J1 for transmit and J2 for receive. This behavior
appears to be hard-coded into the FPGA firmware and is not software-controllable
at the present time.
D.3.2 Sampling
D.3.2.1 Decimation
The sampling frequency of the USRP2 is:

fs = 100/D MHz (D.1)

where D is the decimation, whose value can be any multiple of 4 from 4 through
512. (Nonmultiples of 4 yield different types of cascaded integrator-comb (CIC)
filters, but this probably isn’t what you want.)
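Equation (D.1) is easy to evaluate for a candidate decimation value:

```matlab
D  = 256;           % decimation: any multiple of 4 from 4 through 512
fs = 100e6 / D      % resulting sample rate in Hz (390.625 kHz here)
```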
The decimation parameter affects the bandwidth of the received signal, since it
affects the sampling rate. Decimation essentially involves lowpass-filtering and then
downsampling by the decimation factor. Recall that downsampling by a factor of X
is equivalent to sampling the original signal at a rate of fs/X instead of fs. Therefore,
larger decimation factors result in smaller sampling rates and thus smaller
bandwidths. (After all, the bandwidth depends on the Nyquist frequency, fs/2.)
The frequency resolution of the FFT is determined by the sampling frequency
(fs) and the number of points (N) in the FFT:

frequency resolution = fs/N (D.2)

For example, an FFT with 1024 points and a sampling frequency of 128 kHz
has a resolution of

frequency resolution = 128 kHz / 1024 points = 125 Hz (D.3)

In other words, each bin of the FFT is 125 Hz wide. (Spectrum analyzers com-
monly refer to this value as resolution bandwidth.)
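Equations (D.2) and (D.3) in MATLAB form:

```matlab
fs  = 128e3;        % sampling frequency in Hz
N   = 1024;         % number of FFT points
res = fs / N        % frequency resolution: 125 Hz per FFT bin
```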
Note there is the tradeoff between resolution in time and resolution in frequency.
FFTs with more points have finer-grain frequency resolution, but require more time to
collect the samples (since there are more points). FFTs with fewer points have coarser
resolution in frequency. Also note that larger FFTs are more computationally expensive,
which is important to keep in mind because you will be running the FFTs in real time.
Larger decimation values result in a smaller sampling rate; thus, if you keep
your FFT at 1024 points, larger decimation values will yield finer-grain fre-
quency resolution because (BW)/(N points) will be a smaller resolution band-
width value.
The punchline: You should use a high decimation value (e.g., 400 or 512) and
a sufficient number of FFT points (e.g., 1024) when taking fine measurements, such
as your oscillator frequency offset error measurements. This will afford the best
possible measurement precision.
Finally, here are a few additional useful links:
• http://en.wikipedia.org/wiki/Decimation_(signal_processing)
• http://zone.ni.com/devzone/cda/tut/p/id/3983
D.3.2.2 Interpolation
The interpolation parameter is used on the transmit side (DAC), whereas the
decimation parameter is used on the receive side (ADC). Interpolation has the same
range of values (any multiple of 4 from 4 to 512) as decimation.
According to [1]: “The FPGA talks to the DAC at 100 MS/s just like it talks
to the ADC at 100 MS/s. The interpolation from 100 MS/s to 400 MS/s happens
inside the DAC chip itself. Unless you are doing something fancy, you can think of
the DAC as operating at 100 MS/s.”
D.3.3 Clocking
The clock rate of the Spartan-3 FPGA in the USRP2 is 100 MHz.
• http://gnuradio.org/redmine/wiki/gnuradio/USRP2GenFAQ
• http://gnuradio.org/redmine/wiki/gnuradio/UsrpFAQDDC
D.4 Differential Phase Shift Keying (DPSK)
Differential phase shift keying (DPSK) is a form of phase modulation that conveys data
by changing the phase of the carrier wave. For example, a differentially encoded BPSK
(DBPSK) system transmits a binary "1" by adding 180° to the current
phase, while a binary "0" is transmitted by adding 0° to the current phase.
Since a physical channel is present between the transmitter and receiver within a
communication system, this channel will often introduce an unknown phase shift to
the PSK signal. Under these circumstances, differential schemes can yield a better
error rate than ordinary schemes, which rely on precise phase information.
For both BPSK and QPSK modulation schemes, there is a phase ambiguity
whenever the constellation is rotated by some amount due to the presence of phase
distortion within the communications channel through which the signal passes. This
problem can be resolved by using the data to change, rather than set, the phase.
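A DBPSK encode/decode round trip can be sketched as follows; this is an illustrative baseband model, not code from the text:

```matlab
bits  = [1 0 1 1 0];                     % example data
phase = pi * cumsum(bits);               % bit 1 adds 180 deg, bit 0 adds 0
s     = exp(1j*phase);                   % transmitted DBPSK symbols
rx    = [1, s];                          % prepend a reference symbol
d     = angle(rx(2:end).*conj(rx(1:end-1)));  % phase change per symbol
bits_hat = double(abs(d) > pi/2)         % recovers [1 0 1 1 0]
```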
Reference
[1] http://gnuradio.org/redmine/wiki/gnuradio/USRP2GenFAQ.
Appendix E
Trigonometric Identities
cos(θ) = (1/2)[exp(jθ) + exp(−jθ)]

sin(θ) = (1/2j)[exp(jθ) − exp(−jθ)]

cos²(θ) = (1/2)[1 + cos(2θ)]

sin²(θ) = (1/2)[1 − cos(2θ)]

tan(α ± β) = [tan(α) ± tan(β)] / [1 ∓ tan(α)tan(β)]

sin(α)sin(β) = (1/2)[cos(α − β) − cos(α + β)]

cos(α)cos(β) = (1/2)[cos(α − β) + cos(α + β)]

sin(α)cos(β) = (1/2)[sin(α − β) + sin(α + β)]
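Each identity can be spot-checked numerically in MATLAB; for example, the last product-to-sum identity:

```matlab
a = 0.7; b = 1.3;                 % arbitrary test angles
lhs = sin(a)*cos(b);
rhs = 0.5*(sin(a-b) + sin(a+b));
ok  = abs(lhs - rhs) < 1e-12      % true: the identity holds
```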
About the Authors
Di Pu was born in Suzhou, China. She received her B.S. degree (with distinction)
from Nanjing University of Science and Technology (NJUST), Nanjing, China, in
2007 and M.S. degree from Worcester Polytechnic Institute (WPI), Worcester, MA,
in 2009. Since March 2008, she has been a member of the Wireless Innovation
Laboratory, where she is conducting research into cognitive radio system imple-
mentations. Di Pu is a recipient of the 2007 Institute Fellowship for pursuing Ph.D.
studies at WPI in electrical engineering.
Dr. Alexander M. Wyglinski is an associate professor of electrical and computer
engineering at Worcester Polytechnic Institute (WPI) and director of the Wireless Innovation
Laboratory (WI Lab). He received his Ph.D. degree from McGill University in 2005,
his M.S.(Eng.) degree from Queen's University at Kingston in 2000, and his B.Eng. degree
from McGill University in 1999, all in electrical engineering. Prior to joining WPI,
Dr. Wyglinski was at the Information and Telecommunication Technology Center (ITTC)
of the University of Kansas from 2005 to 2007 as an assistant research professor.
Dr. Wyglinski is very actively involved in the wireless communications research
community, especially in the fields of cognitive radio systems and dynamic spectrum
access networks. He currently serves on the editorial boards of both IEEE Communications
Magazine and IEEE Communications Surveys and Tutorials; he was a tutorial
co-chair for the 2008 IEEE Symposia on New Frontiers in Dynamic Spectrum Access
Networks (IEEE DySPAN 2008); he is currently the student travel grants chair for
the 2010 IEEE Symposia on New Frontiers in Dynamic Spectrum Access Networks
(IEEE DySPAN 2010); he is the Communications Techniques and Technologies track
co-chair of the 2009 IEEE Military Communications Conference; he was the Wire-
less Mobile Communications track chair of the 2008 IEEE Military Communications
Conference; he has been a guest co-editor for the IEEE Communications magazine
with respect to two feature topics on cognitive radio (May 2007, April 2008); he was a
technical program committee co-chair of the Second International Conference on Cog-
nitive Radio Oriented Wireless Networks and Communications (CrownCom 2007);
he was a track chair for both the 64th and 66th IEEE Vehicular Technology Confer-
ence (VTC); and he is currently a technical program committee member on numerous
IEEE and other international conferences in wireless communications and networks.
Dr. Wyglinski is a senior member of the IEEE, IEEE Communications Society, IEEE Signal Processing Society, IEEE Vehicular Technology Society, IEEE Women in Engineering, Eta Kappa Nu, and Sigma Xi. Dr. Wyglinski's current research interests are in the areas of wireless communications, wireless networks, cognitive radios, software-defined radios, transceiver optimization algorithms, dynamic spectrum access networks, spectrum sensing techniques, machine learning techniques for communication systems, signal processing techniques for digital communications, hybrid fiber-wireless networking, and both multihop and ad hoc networks.
Index
A
Accelerate the Simulink model, 166–167
Analog-to-digital conversion, 23
Aperiodic, 16
Applications of software-defined radio, 239
ASCII, 171
Autocorrelation function, 75
Autocovariance function, 76
Averaging sweeps, 214
AWGN channel, 117–118

B
Barker code, 168–169
Bayes' rule, 58
Bit allocation, 187–188
Bit error rate (BER), 105
Bivariate normal, 71–72
Block interleaving, 135

C
Callback functions, 165
Carrier sense multiple access (CSMA), 230
Cascaded integrator-comb (CIC) filter, 44–46
Causal, 18
Central limit theorem (CLT), 71
Channel encoding, 92–93
Channel filter, 191
Cognitive radio, 207, 239
Collecting spectral data, 209–214
Collision avoidance (CA), 231
Commutator, 182
Conditional expectation, 62–64
Conditional probability, 57–58
Continuous-time, 15
Convolutional interleaving, 135
Correlator realization, 122–124, 159–162
Cumulative distribution function (CDF), 69
Cyclic autocorrelation function (CAF), 220
Cyclic prefix, 185–186
Cyclostationary feature detection, 219–222
Cyclostationary process, 219
Cyclostationary random process, 77

D
Decimation, 27–28
Decision region, 119
Decision rule, 116
Decision statistic, 218
Detection theory, 113
Differential phase shift keying (DPSK), 163–165
Differential quadrature phase-shift keying (DQPSK), 166
Digital down converter (DDC), 146
Digital modulation, 94
Digital transceiver, 1
Digital transmission, 89–91
Digital up converter (DUC), 145
Dirac comb, 25
Discrete-time, 15
Discrete Fourier transform (DFT), 182
Dispersive channel environment, 183–184
Duplex communication, 197
Dwell time, 213
Dynamic spectrum access (DSA), 207

E
Einstein-Wiener-Khinchin (EWK) relations, 79
Energy detection, 218–219
Energy subtraction, 158–159
Energy threshold, 218
Error bounding, 107–108
Euclidean distance, 97
Expected value/mean, 62
Eye diagram, 32
R
Raised cosine pulse, 37–38
Random process, 72
Random variable, 59–60
Receiver design, 245
Receiver operating characteristic (ROC), 217
Receiver structure, 153
Rectangular QAM, 178
Rectangular window, 229
Region of convergence (ROC), 41
Relative frequency, 53
Repetition coding, 132–133

S
Sample space, 54
Sampling, 23
Sampling function, 24
Satellite communication, 247
SDRu Receiver block, 141
SDRu Transmitter block, 141
Set theory, 53–54
Shannon's channel coding theorem, 93–94
Signal, 15
Signal constellation, 179
Signal waveform, 108
Simplex communication, 197
Sinc pulse, 34–37
Single carrier transmission, 177–181
Sliding window approach, 227
Software-defined radio (SDR), 1
Source encoding, 91
SpeakEasy, 1–2
Spectral coherence function (SOF), 221
Spectral correlation function (SCF), 221
Spectrum analyzer, 210
Spectrum sensing, 207
Square root raised cosine filter, 194–196
Stationary process, 76
Strict-sense stationary (SSS), 76–77
Substitution law, 64
Sweep resolution, 211
Sweep time, 214
System, 16

T
Tapered window, 229
Test-bed implementation, 245–246
Time division duplexing (TDD), 198
Total probability, 58
Transmitter design, 244–245

U
Uniform sampling, 24
Universal Software Radio Peripheral (USRP), 8–9
University Wireless Open Access Research Platform (WARP), 10

V
Vector representation, 108–109
Vehicular communication network, 242
Venn diagram, 54

W
Water filling principle, 189
Waveform synthesis, 153
Wide-sense stationary (WSS), 77
WiFi network, 141
Window size, 211
Wireless local area network (WLAN), 140–141

X
XCVR2450 daughtercard, 139

Z
Z transform, 39