
Suhash Chandra Dutta Roy

Circuits, Systems and Signal Processing
A Tutorials Approach

Springer
Suhash Chandra Dutta Roy
Department of Electrical Engineering
Indian Institute of Technology Delhi
New Delhi, Delhi
India

ISBN 978-981-10-6918-5
ISBN 978-981-10-6919-2 (eBook)


https://doi.org/10.1007/978-981-10-6919-2

Library of Congress Control Number: 2017962026

© Springer Nature Singapore Pte Ltd. 2018


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or
part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way,
and transmission or information storage and retrieval, electronic adaptation, computer software,
or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are
exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in
this book are believed to be true and accurate at the date of publication. Neither the publisher nor
the authors or the editors give a warranty, express or implied, with respect to the material
contained herein or for any errors or omissions that may have been made. The publisher remains
neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore
189721, Singapore
Dedicated
to
My Parents
Shri Suresh Chandra Dutta Roy
who told me ‘high quality steel cannot be made without burning
iron’
and said, ‘Suhash you will surely shine’
and
Shrimati Suruchi Bala Dutta Roy
who must have given me the genes to keep the fire on
and says, ‘Son, I am with you, even though you lost me at nine’
Preface

Starting from 1962, I have written and published a large number of articles in
IETE Journals. At various points of time, starting from the early 80s, I have
been requested by students, as well as teachers and researchers, to publish a
book of collected reprints, appropriately edited and sequenced. Of late, this
request has intensified, and the demand for reprints of some of the tutorial
papers I wrote has increased considerably, not only from India but also from
abroad, because of my five video courses related to Circuits, Systems and
Signal Processing (CSSP) successfully uploaded by NPTEL on the YouTube.
I thought it would be a good idea to venture into such a project at this time.
This is a book for you, students, teachers and researchers in the subjects
related to CSSP.
As you would notice, I have written this book in a conversational style to
make you feel at ease while reading it. I have also injected some wit and
humour at appropriate places to make it enjoyable.
This book is divided into four parts, dealing with Signals and Systems,
Passive Circuits, Active Circuits and Digital Signal Processing. An appendix
has also been added to give simple derivations of mathematics used
throughout this book.
In each chapter, I have added some examples so that the students may
appreciate the fundamentals treated in the chapter and apply them to practical
cases.
This book contains chapters based only on articles of a tutorial nature and
those containing educational innovations. Purely research papers are
excluded.
The details of the parts are as follows:
Part I on Signals and Systems comprises six chapters on basic concepts in
signals and systems, state variable characterization, some fundamental issues
involving the impulse function and partial fraction expansion of rational
functions in s and z, having repeated poles.
Part II on Analysis of Passive Circuits consists of 16 chapters on circuit
analysis without transforms, transient response of RLC networks, circuits
which are deceptive in appearance, resonance, many faces of the single tuned
circuit, analysis and design of the parallel-T RC network, perfect transformer,
capacitor charging through a lamp, difference equations and resistive networks,
a third-order driving point synthesis problem, an example of LC driving point
synthesis, low-order Butterworth filters, band-pass/band-stop filter design by
frequency transformation and optimum passive differentiators.
Part III on Active Circuits contains eight chapters on BJT biasing, analysis
of a high-frequency transistor stage, transistor Wien bridge oscillator, anal-
ysis of sinusoidal oscillators, triangular to sine wave converter and the
Wilson current mirror.
Part IV on Digital Signal Processing comprises eight chapters on the
ABCD’s of digital signal processing, second-order band-pass and band-stop
filters, all-pass digital filters, FIR lattice and fast Fourier transform.
In the Appendix, simplified treatment of some apparently difficult topics is
presented. These are roots of a polynomial, Euler’s relation, approximation
of the square root of the sum of two squares, solution to cubic and quartic
equations, solving second-order linear differential equations and Chebyshev
polynomials.
I hope students and teachers will benefit from these chapters. It is only
then that I shall feel adequately rewarded. For any mistakes/confusions/
clarifications, please feel free to contact me on email at s.c.dutta.
[email protected]. I take such mail on top priority, I assure you.
Happy learning!

New Delhi, India Suhash Chandra Dutta Roy


Acknowledgements

I owe so much to so many that it is not possible to acknowledge all of them in
this limited space. To the ones whose credit I have not been able to mention, I
ask for forgiveness.
To the three successive Presidents, Shri R. K. Gupta, Shrimati Smriti
Dagur and Lt Gen (Dr.) AKS Chandele, PVSM, AVSM (retd), I am indebted
for their enthusiasm and encouragement;
To the successive Publication Committee Chairpersons and Members
of the Governing Council, I would like to say a big thank you for supporting
my proposal of writing this book;
To Shrimati Sreelatha Menon, Former Managing Editor of IETE, for her
constant advice and suggestions for taking the project forward;
To Shrimati Sandeep Kaur Mangat, I would like to express my deep
gratitude for successfully leading this project to a conclusion, with her
immense patience, and love and passion for work, much beyond her duty,
and for carrying out all the necessary hard work;
To all the workers of the Publications Section and the Secretariat in
general, I have no words to convey my heartfelt appreciation for their diligent
work;
To my students—real and virtual, I would like to express my love and best
wishes for their raising interesting questions in the class as well as outside,
and through innumerable emails, which led to the polishing and refreshing
of the contents;
To my wife and lifelong companion, Shrimati Sudipta Dutta Roy, I cannot
say enough to express my love and adoration for being with me in all
weathers—in rain and sunshine—and in all the moments of happiness and
depression, particularly during the years that I devoted to compiling and
editing this book;
To my two sons, Sumantra and Shoubhik, and their families, I would like
to say ‘I have been and shall always be with you, even though many a time I
could not pay due attention to family duties and responsibilities’;
Finally, to my grandson Soham, is due a special word. He stood by my
table patiently and asked me to hurry up the work so that he gets his share of
time to play with me. It is because of this that this book has seen the light
of the day in such a short time.
Suhash Chandra Dutta Roy

About the Book

This book, I claim, is unique in character. Such a book has never been
written. This book is unique, because it is innovative all throughout. It is
innovative in Titles of Chapters, Abstracts, Headings of Articles, Subhead-
ings, References and Problems. I have injected wit and humour, so as to
retain the interest of the readers and to ignite their imagination. I shall not
write more about this book. Read and you will know.
I do not wish to receive any compliments, whatsoever. What I wish to
receive is your frank and blunt criticism, pointing out the deficiencies.
I promise I shall take them seriously and make my best efforts to take care
of them in the next edition.
Happy reading!

Contents

Part I Signals and Systems


1 Basic Concepts in Signals and Systems . . . . . . . . . . . . . . . . . . 3
Linear System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Elementary Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Time Invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Impulse Response and Convolution . . . . . . . . . . . . . . . . . . . . . . . 6
LTI System Response to Exponential Signals . . . . . . . . . . . . . . . 7
The Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Linear System Response to Periodic Excitation . . . . . . . . . . . . . . 10
The Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Spectral Density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Concluding Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2 The Mysterious Impulse Function and its Mysteries . . . . . . . . 17
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Impulse Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
How Do You Solve Differential Equations Involving
an Impulse Function? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .... 19
How Do You Solve Difference Equations Involving
an Impulse Function? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Fourier Transform of the Unit Step Function . . . . . . . . . . . . . . . 21
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3 State Variables—Part I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Why State Variables? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
What is a State? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Definitions and Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Standard Form of Linear State Equations . . . . . . . . . . . . . . . . . . 27
How Do You Choose State Variables in a Physical System? . . . 28
How Do You Choose State Variables When the System
Differential Equation is Given? . . . . . . . . . . . . . . . . . . . . . . . . . . 29

How Does One Solve Linear Time-Invariant State Equations? . . . . . . . . . . . . . 31
An Advice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4 State Variables—Part II . . . . . . . . . . . . . . . . . . . . . . . . . . .... 37
Properties of the Fundamental Matrix . . . . . . . . . . . . . . . . . .... 37
The Fundamental State Transition Equation . . . . . . . . . . . . .... 39
Procedures for Evaluating the Fundamental Matrix:
Described in Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .... 39
By Exponential Series Expansion . . . . . . . . . . . . . . . . . . .... 39
By Solution of the Homogeneous Differential Equations
using Classical Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
By Evaluating the Inverse Laplace Transform of (sI–A) . . . . . 40
State Transition Flow Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Concluding Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Appendix on Review of Matrix Algebra . . . . . . . . . . . . . . . . . . . 46
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5 Carry Out Partial Fraction Expansion of Functions
with Repeated Poles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
The Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
An Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Another Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6 A Very Simple Method of Finding the Residues
at Repeated Poles of a Rational Function in z−1 . . . . . . . . . . . 53
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
The Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

Part II Passive Circuits


7 Circuit Analysis Without Transforms . . . . . . . . . . . . . . . . . . . 61
An Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Some Nomenclatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Force-Free Response: General Considerations . . . . . . . . . . . . . . . 63
Force-Free Response of a Simple RC Circuit . . . . . . . . . . . . . . . 63
Force-Free Response of a Simple RL Circuit . . . . . . . . . . . . . . . 65
First-Order Circuits with More Than One Energy
Storage Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Force-Free Response of a Second-Order Circuit . . . . . . . . . . . . . 65
Root Locus of the Second-Order Circuit . . . . . . . . . . . . . . . . . . . 68
Natural Frequencies of Circuits with a Forcing Function . . . . . . . 68

Concept of Impedance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Relation Between Impedance and Natural Frequencies . . . . . . . . 69
Forced Response to an Exponential Excitation . . . . . . . . . . . . . . 70
Forced Response Due to DC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Forced Response to a Sinusoidal Excitation . . . . . . . . . . . . . . . . 70
Basic Elements and Their V–I Relationships for Sinusoidal
Excitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
An Example of the Use of Phasors and Impedances . . . . . . . . . . 72
Back to Complete Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Step Response of an RL Circuit . . . . . . . . . . . . . . . . . . . . . . . . . 75
Sinusoidal Response of a Series RC Circuit . . . . . . . . . . . . . . . . 75
Response of an RC Circuit to an Exponential Excitation . . . . . . 76
Step Response of an RLC Circuit . . . . . . . . . . . . . . . . . . . . . . . . 78
Sinusoidal Response of an RLC Circuit . . . . . . . . . . . . . . . . . . . 78
Pulse Response of an RC Circuit . . . . . . . . . . . . . . . . . . . . . . . . 80
Impulse Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
8 Transient Response of RLC Networks Revisited . . . . . . . . . . . 83
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Example Circuit and the Differential Equation . . . . . . . . . . . . . . 83
Analytical Solution of the Differential Equation . . . . . . . . . . . . . 84
Evaluating the Constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Overdamped Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Underdamped Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Critically Damped Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Concluding Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
9 Appearances Can Be Deceptive: A Circuit Paradox . . . . . . . . 89
The Illusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
AC Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
DC Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
10 Appearances Can Be Deceptive: An Initial
Value Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Establishing I2(0−): One Possibility . . . . . . . . . . . . . . . . . . . . . . . 93
Establishing I2 (0−): Another Possibility . . . . . . . . . . . . . . . . . . . 93
Solve the Circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
11 Resonance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Q: A Figure of Merit for Coils and Capacitors . . . . . . . . . . . . . . 95
Series Resonance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Parallel Resonance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Impedance/Admittance Variation with Frequency: Universal Resonance Curves . . . . . . . . 97
Bandwidth of Resonance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Other Types of Resonant Circuits . . . . . . . . . . . . . . . . . . . . . . . . 100
An Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Some Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
12 The Many Faces of the Single-Tuned Circuit . . . . . . . . . . . . . 105
Notations: First Things First . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
The Possible Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
The Low-Pass Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
The High-Pass Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
The Band-pass Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
The Band-stop Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
13 Analyzing the Parallel-T RC Network . . . . . . . . . . . . . . . . . . . 111
Mesh Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Node Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Two-Port Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Analysis by Miller’s Equivalence . . . . . . . . . . . . . . . . . . . . . . . . 113
Splitting the T’s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Yet Another Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
14 Design of Parallel-T Resistance–Capacitance Networks
For Maximum Selectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Network Configuration and Simplification . . . . . . . . . . . . . . . . . . 118
Null Condition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Transfer Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Selectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Linearity of the Selectivity Curve . . . . . . . . . . . . . . . . . . . . . . . . 121
Selectivity of an Amplifier Using the General Parallel-T RC
Network in the Negative Feedback Line . . . . . . . . . . . . . . . . . . . 123
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
15 Perfect Transformer, Current Discontinuity
and Degeneracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
An Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Analysis of the General Circuit . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Condition for Continuity of Currents Under Perfect Coupling . . . . . . . . . . . . 127
Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
16 Analytical Solution to the Problem of Charging
a Capacitor Through a Lamp . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
The Circuit and the Differential Equation . . . . . . . . . . . . . . . . . . 131
Solution of the Differential Equation . . . . . . . . . . . . . . . . . . . . . . 132
Energy Dissipated in the Lamp . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
17 Difference Equations, Z-Transforms
and Resistive Ladders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Solution by Difference Equation Approach . . . . . . . . . . . . . . . . . 136
Z-Transform Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Resistance Between Any Two Arbitrary Nodes
of an Infinite Ladder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Concluding Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
18 A Third-Order Driving Point Synthesis Problem . . . . . . . . . . 141
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Is Z(s) at All Realizable? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Alternative Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
A Problem for the Student. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
19 Interference Rejection in a UWB System:
An Example of LC Driving Point Synthesis . . . . . . . . . . . . . . 147
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
The Four Canonical Realizations . . . . . . . . . . . . . . . . . . . . . . . . . 148
Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Effect of Losses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
20 Low-Order Butterworth Filters: From Magnitude
to Transfer Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Butterworth Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Basis of the Alternative Method . . . . . . . . . . . . . . . . . . . . . . . . . 152
Application to Low Orders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
First-Order Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Second-Order Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153

Third-Order Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153


Fourth-Order Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Fifth-Order Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Sixth-Order Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Seventh-Order Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Application to Chebyshev Filters . . . . . . . . . . . . . . . . . . . . . . . . . 156
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
21 Band-Pass/Band-Stop Filter Design by Frequency
Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Band-Pass Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Band-Stop Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Concluding Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
22 Optimum Passive Differentiators . . . . . . . . . . . . . . . . . . . . . . . 165
Optimal Transfer Function and Its Realizability . . . . . . . . . . . . . 166
Second-order Optimal and Suboptimal Differentiators . . . . . . . . . 170
Third-order Suboptimal Passive Differentiator . . . . . . . . . . . . . . . 171
Optimal RC Differentiators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Suboptimal RC Differentiator . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174

Part III Active Circuits


23 Amplifier Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
24 Appearances Can Be Deceptive: The Case
of a BJT Biasing Circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Analysis of N2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
An Example of Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
25 BJT Biasing Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
The Generalized Circuits and Special Cases . . . . . . . . . . . . . . . . 191
Bias Stability Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193

An Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Design of N1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Design of N2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Design of N3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Performances of N1, N2 and N3 . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Design and Performance of N4 . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Using the Total Change Formula . . . . . . . . . . . . . . . . . . . . . . . . . 197
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
26 Analysis of a High-Frequency Transistor Stage . . . . . . . . . . . 199
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Two Port Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
27 Transistor Wien Bridge Oscillator . . . . . . . . . . . . . . . . . . . . . . 203
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Circuit 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Circuit 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Circuit 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Practical Circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
28 Analysing Sinusoidal Oscillator Circuits: A Different
Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
An Op-Amp Oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Transistor Version of the Wien Bridge Oscillator . . . . . . . . . . . . 214
Another Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Concluding Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
29 Triangular to Sine-Wave Converter . . . . . . . . . . . . . . . . . . . . . 217
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
30 Dynamic Output Resistance of the Wilson
Current Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225

Part IV Digital Signal Processing


31 The ABCDs of Digital Signal Processing––PART 1 . . . . . . . . 229
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
The Basic Digital Signal Processor . . . . . . . . . . . . . . . . . . . . . . . 230
The Sampling Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Quantization Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Transfer Function of a Digital Signal Processor . . . . . . . . . . . . . 236
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
32 The ABCDs of Digital Signal Processing–PART 2 . . . . . . . . . 241
Realization of Digital Signal Processors . . . . . . . . . . . . . . . . . . . 241
The Discrete Fourier Transform. . . . . . . . . . . . . . . . . . . . . . . . . . 242
The Fast Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Applications of FFT to Compute Convolution
and Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ......... 247
Application of FFT to Find the Spectrum of
a Continuous Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Concluding Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
33 On Second-Order Digital Band-Pass
and Band-Stop Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Design for Arbitrary Pass-band Tolerance . . . . . . . . . . . . . . . . . . 257
Realization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
34 Derivation of Second-Order Canonic All-Pass
Digital Filter Realizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Derivation of the Structure of Fig. 34.2 . . . . . . . . . . . . . . . . . . . . 263
Derivation of the Structure of Fig. 34.3 . . . . . . . . . . . . . . . . . . . . 263
Alternative Derivation of the Structure of Fig. 34.2 . . . . . . . . . . 264
Yet Another Derivation of the Structure of Fig. 34.2 . . . . . . . . . 265
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
35 Derivation of the FIR Lattice Structure . . . . . . . . . . . . . . . . . . 267
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Concluding Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269

36 Solution to a Problem in FIR Lattice Synthesis . . . . . . . . . . . 271


Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Conventional Synthesis Procedure . . . . . . . . . . . . . . . . . . . . . . . . 272
Linear Phase Transfer Function . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Nonlinear Phase FIR Function with hN (N ) = ±1 . . . . . . . . . . . . 274
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
37 FIR Lattice Structures with Single-Multiplier Sections . . . . . . 279
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Realization 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Realization 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Realization 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Realization 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
38 A Note on the FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Derivation of the Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Recurrence Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Alternative Derivation for M(N) . . . . . . . . . . . . . . . . . . . . . . . . . 285
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
Appendix: Some Mathematical Topics Simplified . . . . . . . . . . . . . . 287
About the Author

Suhash Chandra Dutta Roy  I was fortunate enough to be educated at the Calcutta University,
where great professors and researchers like
C. V. Raman, S. N. Bose, M. N. Saha and
S. Radhakrishnan taught and made a name,
globally, not only for them, but also for the uni-
versity and the country. Immediately after my
master's, I had to take up a job in a West Bengal
Government Research Institute. I soon discovered
that many things are done at the Institute, but not
research (like an American Drug Store!). I there-
fore quit and shifted to the newly established
University of Kalyani as a lecturer. There I spent
a few years, and after my Ph.D., I left the country
to take up an Assistant Professorship at the
University of Minnesota. A few years passed by,
but I felt increasingly guilty that I was not doing
anything for my motherland. I therefore returned
to the country as an Associate Professor at IIT
Delhi where I served for more than four decades
as professor, head of the department and dean.
The last two positions were imposed upon me,
and I suffered because they cut down the time
available for research and interaction with my
dear students. I formally retired at the age of 60,
but continued to teach and carry out research, as
an Emeritus Fellow, followed by INSA Senior
Scientist and INSA Honorary Scientist. Since my
term as an Emeritus Fellow was over, I did not

take any money from IIT Delhi and served voluntarily, simply for the love of
teaching and research. I finally quit IIT Delhi 12 years after retirement. I am
now settled in a DDA Flat at Hauz Khas, where I live happily with my wife,
but still continue to do research. We both are reasonably healthy, because of
strict diet and exercise, including pranayama. I have been extraordinarily
lucky to have had gems of Ph.D. students, 30 of them, who have done
exemplary work in research and innovation. In fact, I consider myself as
shining from reflected glory. I have also been lucky to have received high
recognition through Fellowship of IEEE, Distinguished Fellowship of IETE
and Fellowship of all the relevant national academies. I have been awarded
some prestigious national awards, including the Shanti Swarup Bhatnagar
Prize. Over and above all the awards and recognition, however, what I value
most is the love and affection of my students. I now spend time giving
professional lectures and also delivering sermons on innovations in teaching
and research, in general, and also on how to improve their standards in the
country. I have the hobby of listening to Hindustani classical music and
researching on its masters. I also love spending quality time with my
grandson, Soham. Reading political history and detective stories are my other
hobbies. I also read poetry and compose some poems and short stories, for
my own pleasure. In short, I have lived a complete life, with nothing to
complain about.
Part I
Signals and Systems

This part contains six chapters based on the same number of articles.
Although they may appear to be disjointed, in reality, they are not. For
example, the second chapter is about an impulse function, which often
appears as a signal and in input–output characterization of systems. State
variable characterization, dealt with in the following two parts, relates to a
special kind of system, viz. linear system. State variable was a hot topic when
the corresponding articles were written. They are still used, mostly in control
systems. The fifth part relates to a rational function in s which is the Laplace
transform of the output of a system. A simple method is presented, in contrast
to usual textbook methods, for partial fraction expansion of a function with
repeated poles. The last part relates to the same topic in Digital Signal Pro-
cessing where s is replaced by the variable z.
All throughout here, as well as everywhere in the book, my effort has been
to simplify life so that you can sail in gentle waters and do not have to shed
tears. Tears are costly and should not be allowed to flow just like that!
Reserve them for practical life where you would have ample opportunities to
shed them.
All the examples here, at the end of each chapter, have some twists and
turns; you have to unwind them to be able to figure out how you should
proceed. Do not think along difficult lines because difficult problems always
have simple solutions. This is true not only here but in life in general. Do not
give up. If a path does not lead you to the goal, return and choose an
alternative path. Sometimes, when you go some way in the latter, it appears
that the first path gives you the solution! There are no straightforward rules;
you have to find your own path. All the time, do not allow your mind to be
polluted by complicated rules and procedures. All that is complicated is
useless. This is also true for life in general.
But enough of this philosophical discourse. Coming down to the problem
at hand, you should learn the fundamentals carefully and completely. Once
you make them your own, no application will appear difficult. I have done
this throughout my life, in teaching as well as research, and I have received
excellent rewards, well beyond my expectations.
Help your classmates whenever they are in difficulty in solving problems,
or otherwise. This helps in strengthening the bonds and also in clearing your
own understanding at places where they were fuzzy.
The problems I have set have a different character as compared to the
usual textbook ones. Some have intentionally designed wrong results. You
will have to correct them before going ahead. Do work out all the problems.
Intentionally, I have not given the correct answers. Consult your classmates;
if a majority gets the same result, then you can be confident that your solution
is correct. A good teacher is born and not made. Identify one or two such
good teachers and consult them in case of difficulties. Don’t give up if they
turn you down on one pretext or other. After all, your teacher should realize
that teaching–learning is an interactive process, and unless there is a strong
and loving relationship among the three components, viz. teacher, student
and subject, teaching–learning becomes routine, monotonous and repelling.
Consult the video courses, fully downloadable from the YouTube; I have
created five of them, but there are many more from IITs through the NPTEL
programme. Choose the best few and follow them. It is not that these are all
faultless, but faults are few and far between. If you find a mistake, confirm
with an email to the concerned teacher and clarify. I have been doing this
every day since my first video course appeared in the YouTube.
Finally, I wish you happy reading and happy learning. Write to me at the
email address given earlier, if you have a question. I enjoy interacting
through emails as most of the teachers also do.
1  Basic Concepts in Signals and Systems

In this chapter, I shall tell you all you wanted to know about signals and systems but were afraid to ask your teacher. Starting with the definition of linear systems, some elementary signals are introduced, which is followed by the notion of time-invariant systems. Signals and systems are then coupled through impulse response, convolution and response to exponential signals. This naturally leads to Fourier series representation of periodic signals and consequently, signal representation in the frequency domain in terms of amplitude and phase spectra. Does this sound difficult? I shall simplify it to the extent possible, do not worry. Linear system response to periodic signals, discussed next, is then easy to understand. To handle non-periodic signals, Fourier transform is introduced by viewing a non-periodic function as the limiting case of a periodic one, and its application to linear system analysis is illustrated. The concepts of energy and power signals and the corresponding spectral densities are then introduced.

Keywords  Linear systems · Signals · Amplitude and phase spectra · Response to periodic signals · Non-periodic signals · Fourier series and transforms · Energy and power signals · Spectral density

Source: S. C. Dutta Roy, “Basic Concepts in Signals and Systems,” IETE Journal of Education, vol. 40, pp. 3–11, January–June 1999. Lectures delivered at Trieste, Italy, in the URSI/ICTP Basic Course on Telecommunication Science, January 1989.

Linear System

The concept of linear systems plays an important role in the analysis and synthesis of most practical systems, be it communication, control or instrumentation. Consider a system S which produces an output y when an input x is applied to it (both y and x are usually functions of time). We shall denote this symbolically as

  S
x → y

Then S is said to be linear if it obeys two principles, viz. principle of superposition and principle of homogeneity. The former implies that if x1 → y1 (note that we have omitted S above the arrow: this is implied) and x2 → y2, then x1 + x2 → y1 + y2. The principle of homogeneity implies that if x → y, then ax → ay where a is an arbitrary constant. Note, in passing, that a could be zero, i.e. zero input should lead to zero output in a linear system. Combining the two
principles, we can now formally define a linear system as one in which

x1,2 → y1,2  ⇒  ax1 + bx2 → ay1 + by2,   (1.1)

where the notation ⇒ is used to mean ‘implies’. As an example, consider the system described by the well-known equation of a straight line

y = mx + c   (1.2)

It may seem surprising but Eq. 1.2 does not describe a linear system unless c = 0, simply because zero input does not lead to zero output. Another way of demonstrating this is to apply ax as input; then the output is

y′ = max + c ≠ ay = max + ca   (1.3)

By the same token, the dynamic system described by

dy/dt + 5y = 5x + 6   (1.4)

is not linear, because (x = 0, y = 0) does not satisfy the equation.
Another, and a bit more subtle example, is shown in Fig. 1.1. Is this system linear? Obviously x = 0 leads to z = 0 but then this is only a necessary condition for a linear system. Is it sufficient? To test this, apply x = x0; then z = z0. Now apply x = −x0; the output is still z0 instead of −z0. The obvious conclusion is that the system is nonlinear.

Fig. 1.1  A subtle example of a nonlinear system: two blocks in cascade, y = y0 (x/x0)² followed by z = z0 (y/y0)^(1/2)

Almost all practical systems are nonlinear, which are usually much more difficult to handle than linear systems. Hence, we make our life comfortable by approximating (or idealizing?) a nonlinear system by a linear one. This should not surprise you, because life, in general, is nonlinear and we make it simple by approximating it by a linear one. Otherwise, life would have been so complicated that it would not be worth living. Enjoyment would have been out of question! Also, in many situations, a nonlinear system is ‘incrementally’ linear, i.e. the system is linear if an increment Δx in x is considered as the input and the corresponding increment Δy in y is considered as the output. Both Eqs. 1.2 and 1.4 are descriptions of such incrementally linear systems. A transistor amplifier is a highly nonlinear system, but it behaves as a linear one if the input is an AC signal superimposed on a much larger DC bias.

Elementary Signals

A signal, in the context of electrical engineering, is a time-varying current or voltage. An arbitrary signal can be decomposed into some elementary or ‘basic’ signals, which, by themselves also occur frequently in nature. These are (i) the exponential signal e^{at} where a may be real, imaginary or complex, (ii) the unit step function u(t) and (iii) the unit impulse function, δ(t). When a is purely imaginary in e^{at}, we get a particularly important situation, because if a = jω and ω is real, then

e^{jωt} = cos ωt + j sin ωt   (1.5)

Thus sinusoidal signals, cos ωt and sin ωt, which are so important in the study of communications, are special cases of the exponential signal. The quantity ω, as is well known, is the frequency in radians/s, while f = ω/(2π) is the frequency in cycles/s or Hz.
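If you want to experiment with these elementary signals numerically, here is a minimal sketch, assuming Python with NumPy is at hand; the sampling interval, frequency and pulse width are arbitrary illustrative choices. It checks Euler’s relation of Eq. 1.5 and then builds the unit step along with a unit-area rectangular pulse, the limiting view of δ(t) taken up in the next section.

```python
import numpy as np

# Discrete time grid (illustrative choices throughout)
dt = 1e-3                      # sample spacing, in seconds
t = np.arange(-1.0, 1.0, dt)   # from -1 s to +1 s

# (i) Complex exponential e^{jwt} and Euler's relation, Eq. 1.5
f = 5.0                        # frequency in Hz
w = 2 * np.pi * f              # omega = 2*pi*f, in rad/s
x = np.exp(1j * w * t)
assert np.allclose(x.real, np.cos(w * t))   # real part is cos(wt)
assert np.allclose(x.imag, np.sin(w * t))   # imaginary part is sin(wt)

# (ii) Unit step u(t): 0 for t < 0 and 1 for t > 0
u = np.where(t < 0, 0.0, 1.0)

# (iii) Rectangular pulse of width Delta and height 1/Delta:
#       its area (strength) is 1, and it approaches delta(t) as Delta -> 0
Delta = 10 * dt
d_approx = np.where((t >= 0) & (t < Delta), 1.0 / Delta, 0.0)
print("area under the pulse:", d_approx.sum() * dt)   # approximately 1
```

Plotting u and d_approx for smaller and smaller Δ reproduces the pictures of Figs. 1.2 and 1.3 in the next section.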


The unit step function, shown in Fig. 1.2, is defined by

u(t) = 0 for t < 0
     = 1 for t > 0   (1.6)

Note that it is discontinuous at t = 0. The unit impulse function δ(t) is related to u(t) through

u(t) = ∫_{−∞}^{t} δ(τ) dτ   (1.7)

or

δ(t) = du(t)/dt   (1.8)

Obviously, it exists only at t = 0, and the value there is infinitely large, but

∫_{−∞}^{∞} δ(τ) dτ = ∫_{0−}^{0+} δ(τ) dτ = 1,   (1.9)

i.e. the area under the plot of δ(t) versus t is unity. This is called the strength of the impulse; for example, the strength of the impulse Kδ(t) is K. Obviously, there is some formal difficulty with regard to the definition of δ(t), but we shall not enter into this debate here. δ(t) can be viewed as the limit of the rectangular pulse shown in Fig. 1.3 as Δ tends to 0; Fig. 1.3 also shows the representation of δ(t). Two important properties of δ(t) are that

x(t) δ(t − t0) = x(t0) δ(t − t0)   (1.10)

and

∫_{−∞}^{∞} x(τ) δ(t − τ) dτ = x(t)   (1.11)

Equation 1.11 easily follows from Eqs. 1.9 and 1.10, and represents the ‘Sifting’ or ‘Sampling’ property of the impulse function.

Fig. 1.2  The unit step function
Fig. 1.3  A limiting view of δ(t): a rectangular pulse of height 1/Δ and width Δ, together with the representation of δ(t)

Time Invariance

At this point, we need to introduce another concept, viz. that of time invariance of a system. A system S is time invariant if a time shift in the input signal causes the same time shift in the output signal, i.e. if x(t) → y(t) implies x(t − t0) → y(t − t0).
Both Eqs. 1.2 and 1.4 are descriptions of time-invariant systems. On the other hand, y(t) = tx(t) represents a time-varying system. Most of the practical systems we encounter are time-invariant systems.
Systems which are linear and time invariant (LTI) are particularly simple to analyze in terms of their impulse response or frequency response function, as will be demonstrated in what follows.
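The time-invariance test is purely operational: shift the input and apply the system, or apply the system and shift the output, and compare. Here is a minimal numerical sketch of that test, again assuming Python with NumPy (the test signal and the delay are arbitrary choices); it shows y(t) = t x(t) failing the test, while a simple delay-free scaling passes it.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 1.0, dt)
t0 = 0.2                          # time shift in seconds (arbitrary)
k0 = int(round(t0 / dt))          # the same shift, expressed in samples

def shift(x, k):
    """Delay a sampled signal by k samples, treating it as zero before t = 0."""
    y = np.zeros_like(x)
    y[k:] = x[:len(x) - k]
    return y

x = np.sin(2 * np.pi * 3 * t)     # an arbitrary test input

def sys_tv(x):                    # y(t) = t * x(t): time varying (see text)
    return t * x

def sys_ti(x):                    # y(t) = 3 * x(t): a time-invariant scaling
    return 3.0 * x

for name, system in [("y(t) = t x(t)", sys_tv), ("y(t) = 3 x(t)", sys_ti)]:
    same = np.allclose(system(shift(x, k0)), shift(system(x), k0))
    print(name, "passes the time-invariance test:", same)
```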
Impulse Response and Convolution

Consider an LTI system whose response to a unit impulse function is h(t), i.e.

δ(t) → h(t)   (1.12)

By time invariance, therefore

δ(t − τ) → h(t − τ)   (1.13)

By homogeneity, if we multiply the left-hand side of Eq. 1.13 by x(τ)dτ, the right-hand side should also get multiplied by x(τ)dτ, i.e.

x(τ) δ(t − τ) dτ → x(τ) h(t − τ) dτ   (1.14)

By superposition, if we integrate the left-hand side of Eq. 1.14, we should do the same for the right-hand side, i.e.

∫_{−∞}^{∞} x(τ) δ(t − τ) dτ → ∫_{−∞}^{∞} x(τ) h(t − τ) dτ   (1.15)

But, by Eq. 1.11, the left-hand side of Eq. 1.15 is simply x(t), so the right-hand side should be y(t). Thus, if the unit impulse response h(t) of an LTI system is known, then one can find the output of the system due to an arbitrary excitation x(t) as

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ   (1.16)
     = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ,   (1.17)

where the second form follows simply, through a change of variable. The integral Eqs. 1.16 or 1.17 is called the convolution integral and the operation of convolution is symbolically denoted as

y(t) = x(t) * h(t)

It is a simple matter to prove that convolution operation is commutative (i.e. x(t) * h(t) = h(t) * x(t); in fact, this is what equivalence of Eqs. 1.16 and 1.17 implies), associative (i.e. x(t) * [h1(t) * h2(t)] = [x(t) * h1(t)] * h2(t); this is useful in the analysis of cascade connection of systems) and distributive (i.e. x(t) * [h1(t) + h2(t)] = x(t) * h1(t) + x(t) * h2(t); this is useful in the analysis of parallel systems).

Fig. 1.4  An RC network: the source x(t) in series with R, with the capacitor C across which the output y(t) is taken; i(t) is the loop current

As an example of application of the convolution integral, consider the RC network shown in Fig. 1.4, where both x(t) and y(t) are voltages, and the capacitor is uncharged before application of x(t) (an alternate way of expressing this is to say that C is initially relaxed). When x(t) = δ(t), the current in the circuit is i(t) = δ(t)/R. This impulse of current charges the capacitor to a voltage

(1/C) ∫_{0−}^{0+} [δ(τ)/R] dτ = 1/(RC)   (1.18)

at t = 0+. For t > 0+, δ(t) = 0; hence the capacitor charge decays exponentially; so does the voltage across it, according to

y(t) = [1/(RC)] e^{−t/(RC)}   (1.19)

Thus, the impulse response of the RC network is

h(t) = (1/T) e^{−t/T} u(t),   (1.20)

where T = RC is called the time constant of the network. Now, suppose the input is changed to a unit step voltage, i.e. x(t) = u(t). Then the response is, by Eq. 1.17,
Impulse Response and Convolution 7

Z1 part (cost xt) or imaginary part (sin xt), then the


1 s=T output will be H(jx) ejxt or Re [H(jx) ejxt] or Im
yðtÞ ¼ e uðsÞuðt sÞds ð1:21Þ
T [H(jx)ejxt], respectively. For example, if
1
H(jx) = |H(jx)| ej∠H(jx) and the input is cos xt,
Zt then the output shall be |H(jx)| cos (xt +
1 s=T

t=T

∠H(jx)). H(jx) varies with frequency, and the
¼ e ds ¼ 1 e uðtÞ; ð1:22Þ
T plots of |H(jx)| and ∠H(jx) versus x are known
0
as magnitude and phase responses, respectively.
where the lower limit arises due to the factor u(s) Since the principle of superposition holds in a
and the upper limit arises as a consequence of the linear system, the response to a linear combina-
tion of exponential signals, i ai esi t , will be of
P
factor u(t − s) in the integrand.
the form i ai Hðsi Þesi t . It is precisely this fact
P

which motivated Fourier to explore if an arbitrary


LTI System Response to Exponential signal could be represented as a superposition of
Signals exponential signals. As is now well known, this
can indeed be done by a Fourier series for a
Let x(t) = est where s, as you will see later, is the periodic signal and by the Fourier transform for a
complex frequency r + j x, be applied to a general, not necessarily periodic, signal.
system with impulse response h(t); then by
Eq. 1.17, the response is
The Fourier Series
Z1

yðtÞ ¼ hðsÞ esðt ds ð1:23Þ Consider a linear combination of the exponential
1 signal ejx0 t with its harmonically related expo-
nential signals ejkx0 t k = 0, ±1, ±2, … :
Z1 1
st ss
X
¼e hðsÞ e ds ð1:24Þ xðtÞ ¼ ak ejkx0 t ð1:27Þ
1 k¼ 1

In this, k = 0 gives a constant term or dc, in


¼ H ðsÞest ; ð1:25Þ electrical engineering language; ejx0 t is the
smallest frequency term, with a frequency x0 and
where period T = 2p/x0, and is called the fundamental
Z1 frequency. The term ej2x0 t has a frequency 2x0,
HðsÞ ¼ hðsÞ e ss
ds ð1:26Þ while e j2x0 t has a frequency −2x0; the period of
either term is T/2, and both the terms represent
1
what is known as the second harmonic. A similar
is called the system function or transfer function interpretation holds for the general term ejkx0 t ;
of the system and is a function of s only. A signal which has a period T/|k|. Note that we take the
for which the output differs from the input only frequency as positive or negative, but the period
by a scaling factor (perhaps complex) is called is taken as positive. Obviously, the summation
the eigenfunction of the system, and the scaling Eq. 1.27 is periodic with a period equal to T, in
factor is called the eigenvalue of the system. which there are |k| periods of the general term
Obviously, est is an eigen function of an LTI ejkx0 t but only one period of the fundamental.
system, and H(s) is its eigenvalue. What about a given periodic function x(t) with
When s = jx, H represents the frequency a period T, i.e. x(t + mT) = x(t), m = 0, ±1, ±2,
response of the system, i.e. if x(t) = ejxt or its real … ? Can it be decomposed into the form
8 1 Basic Concepts in Signals and Systems

Eq. 1.27? It turns out that under certain condi- T


2  t  T=2 which virtually becomes −s/2 
tions which are satisfied by all but a few excep- t  +s/2, because x(t) = 0 at other values of
tional cases, one can indeed do so. To determine t within the chosen interval. Hence
ak’s, multiply both sides of Eq. 1.27 by e jnx0 t
and integrate over the interval 0 to T. Obviously, Zs=2
RT A jkx0 t
this results in an integral 0 ejðk nÞx0 t dt on the ak ¼ e dt
T
right-hand side, which is zero if k 6¼ n, and T if s=2
RT
k = n. Thus, an = (1/T) 0 xðtÞ e jnx0 t dt or 2A kx0 s ð1:29Þ
¼ sin
kx0 2
Z T
1 jkx0 t sin kx20 s
ak ¼ xðtÞ e dt ð1:28Þ ¼ sA kx0 s
T 0
2

ak represents the weight of the kth harmonic and


This is of the form sA sin x/x, where x = kx0s/2.
is called the spectral coefficient of x(t). ak is, in
Note that ak is real, and can be positive, zero
general, complex. A plot of |ak| versus k will
or negative. Hence, separate amplitude and phase
consist of discrete lines at k = 0, ±1, ±2, …, it
spectrum plots are not necessary; a single dia-
resembles a spectrum as observed on a spectro-
gram suffices and is shown in Fig. 1.6. Note that
scope and is called the amplitude spectrum.
ak has a maximum value at DC, i.e. k = 0, the
Similarly, one can draw a phase spectrum.
value being sA (this checks with direct calcula-
It is obvious from Eq. 1.27 that x(t) could be
tion from Fig. 1.5). The envelope of the spec-
written as the summation of a sine and a cosine
trum is of the form sin x/x and exhibits damped
series, and that the corresponding coefficients
RT oscillations with zeros at x = p (i.e. kx0 = 2p/s),
could be found from 0 xðtÞ cos kxt dt 2p (i.e. kx0 = p/s), … Further, the sketch is
RT
and 0 xðtÞ sin kxt dt: It is, however, much more symmetrical about x = 0, because sin x/x is an
convenient to handle the exponential form of the even function. The spectrum consists of discrete
Fourier series as given in Eq. 1.27. lines, two adjacent lines being separated by 2p T ¼
As an example of application of the Fourier x0 rad/s.
series, consider the pulse stream shown in Some important points emerge from the
Fig. 1.5. sketch of Fig. 1.6. As T increases, the lines get
At this point, note that in Eq. 1.28, the lower closer and ultimately when T ! ∞, corre-
and the upper limit of integration are not sponding to a single pulse, the spectrum becomes
important so long as their difference is T. This is continuous and will be characterized by the
Rt þT sin xs=2
so because t00 ejðk nÞx0 t is independent of t0. function sA xs=2 , where kx0 has become the
In the example under consideration, it is obvi- continuous variable x. This, as we shall see, is
ously convenient to choose the interval the Fourier transform of the single pulse.
Second, since the lines concentrated in the
lower frequency range are of higher amplitude,
x(t) most of the energy of the periodic wave of
Fig. 1.5 must be confined to lower frequencies.
A Third, as s decreases, the spectrum spreads out,
i.e. there is an inverse relationship between pulse
width and frequency spread.
t Since the energy of the periodic wave is
–T –T/2 /2 0 /2 T/2 T
mostly confined to the lower frequency range, a
convenient measure of bandwidth of the signal is
Fig. 1.5 A rectangular pulse stream from zero frequency to the frequency of the first
The Fourier Series 9

ak

TA

–3p –2p –p p 2p 3p x
6p - 4p - 2p - –w w0 2p 4p 6p w = 2x/t
0
T T T T T T
0

Fig. 1.6 Spectrum of x(t) of Fig. 1.5

zero crossing, i.e. the bandwidth in Hz, B, can be A periodic signal that is of great importance in
taken as 1/s. digital communication is the impulse train
If x(t) of Eq. 1.27 is the voltage across or
1
the current through a one-ohm resistor, then X
RT xðtÞ ¼ dðt kTÞ ð1:32Þ
the average power dissipated is T1 0 jxðtÞj2 dt. k¼ 1
If one writes |x(t)|2 = x(t) x(t), where bar
denotes complex conjugate, and substitutes for as shown in Fig. 1.7. If this is expanded in
x(t) and x(t) from Eq. 1.27, there results the Fourier series
following: 1
X
1
X 1
X xðtÞ ¼ ak ejkx0 t ; ð1:33Þ
2 jðk nÞx0 t k¼ 1
jxðtÞj ¼ an e
ak  ð1:30Þ
k¼ 1 n¼ 1

RT
As we have already seen, 0 ejðk nÞx0 t dt is
x(t)
zero if k 6¼ n and equals T when k = n. Thus, the
average power becomes
1
ZT 1
1 X
jxðtÞj2 dt ¼ jak j2 ð1:31Þ
T t
0
k¼ 1 3T 2T T 0 T 2T 3T

This is known as Parseval’s theorem. Fig. 1.7 A periodic impulse train


10 1 Basic Concepts in Signals and Systems

where xo = 2p/T, then As a simple example, consider the RC net-


work of Fig. 1.4; we have already derived its
ZT=2 impulse response as
1 jkx0 t 1
ak ¼ dðtÞe dt ¼ ð1:34Þ
T T 1 t=ðRCÞ
T=2 hðtÞ ¼ e uðtÞ ð1:37Þ
RC
The spectrum is sketched in Fig. 1.8 so that
What is the bandwidth of this signal? The
amplitude is a constant at all frequencies, unlike Z1
1
e tðjx þ RCÞ dt
1
the spectrum of Fig. 1.6. Hence, the bandwidth is HðjxÞ ¼ ð1:38Þ
infinite. This agrees with our observation about RC
0
bandwidth and pulse duration, because Fig. 1.7
is the degenerate form of Fig. 1.5 with s ! 0 and 1
¼ ð1:39Þ
A ! ∞. jxRC þ 1

When excited by the periodic impulse train of


Linear System Response to Periodic Fig. 1.7, the response y(t) can be found in two
Excitation ways. First, since

dðtÞ ! hðtÞ;
From the discussion on LTI system response to
exponential signals, it follows that a linear sys-
it follows that
tem, excited by the periodic signal of Eq. 1.27,
will produce an output signal which is also 1
X 1
X
periodic with the same period, and is given by dðt kTÞ ! hðt kTÞ
k¼ 1 k¼ 1
1
X
yðtÞ ¼ ak Hðjkx0 Þejkx0 t ; ð1:35Þ so that
k¼ 1
1
1 X ðt kTÞ=ðRCÞ
where yðtÞ ¼ e uðt kTÞ ð1:40Þ
RC k¼ 1
Z1
HðjxÞ ¼ hðtÞe jxt
dt ð1:36Þ Should you try to sketch this waveform, you
would realize how messy it looks; also not much
1
information about the effect of the RC network
and h(t) is the unit impulse response. will be obvious from this sketch. On the other
hand, the Fourier series method gives, from
Eqs. 1.35, 1.34 and 1.39.
1
X 1=T
yðtÞ ¼ ejkx0 t ð1:41Þ
ak jkx0 RC þ 1
k¼ 1

1/T
Let this be written as 1 jkx0 t
P
k¼ 1 bk e ; then the
sketch of |bk| versus x = kx0 looks like that
shown in Fig. 1.9. Comparing this with Fig. 1.8,
w
2w 0 w0 0 w0 2w 0 we note that the RC network attenuates higher
frequencies as compared to lower ones and hence
Fig. 1.8 Spectrum of the impulse train of Fig. 1.7 acts as a low-pass filter. The bandwidth of the
Linear System Response to Periodic Excitation 11

bk Now, turn to the frequency domain. If the


signal bandwidth is taken as B = 1/s Hz, then
obviously for fidelity, the RC filter must pass all
frequencies up to 1/s Hz with as little attenuation
as possible. Thus, Bf must be at least equal to
B = 1/s. Since the attenuation is 3 dB instead of
zero at Bf and the input spectrum is not limited to
2w 0 w0 0 w0 w = kw 0
B, the pulse shape will be distorted. For reduced
2w 0
distortion, we need to increase Bf and we expect
Fig. 1.9 Spectrum of output y(t) given in Eq. 1.41 good results if Bf  B i.e. RC  s.

filter Bf is defined as the frequency at which the |


The Fourier Transform
H(jx)| falls down by 3 dB as compared to its dc
value, i.e.
Now consider a non-periodic function x(t) which
pffiffiffi exists in the range −T/2  t  T/2, and is zero
jHðj2pBf Þj ¼ Hðj0Þ= 2 ð1:42Þ
outside this range. Consider a periodic extension
xp(t) of x(t), as shown in Fig. 1.11. xp(t) can be
Combining this with Eq. 1.39 gives Bf =
expanded in Fourier series as
1/(2pRC).
What would be the response of the RC filter to 1
X
the rectangular pulse stream of Fig. 1.5? This xp ðtÞ ¼ ak ejkx0 t ; ð1:43Þ
will of course depend on the relative values of T, k¼ 1
s and Bf.
First let us confine ourselves to the time where x0 = 2p/T and
domain. If the product RC is comparable to T, ZT=2
then the output will consist of overlapping pul- 1 jkx0 t
ak ¼ xp ðtÞe dt ð1:44Þ
ses, and will retain very little similarity to the T
T=2
input. Let, therefore, RC  T; then depending
on s, the response during one period will be of
ZT=2
the form shown in Fig. 1.10. It is obvious that for 1 jkx0 t
¼ xðtÞe dt ð1:45Þ
fidelity, i.e. if the output is to closely resemble T
the input, we require RC  s < T. T=2

Fig. 1.10 Response of RC y(t) y(t)


network in one period of
Fig. 1.5

t t
0 t 0 t

RC ≥ t RC << t
12 1 Basic Concepts in Signals and Systems

x(t) Z1
jxt
XðjxÞ ¼ F ½xðtފ ¼ xðtÞe dt ð1:51Þ
1
t
-T 0 T
and the inverse Fourier transform as
2 2
xp (t)
Z1
1 1
xðtÞ ¼ F ½Xðjxފ ¼ XðjxÞejxt dx
2p
t 1
–T -T 0 T T ð1:52Þ
2 2
Without entering into the question of exis-
Fig. 1.11 A non-periodic function x(t) and its periodic
extension xp(t) tence, we simply state below the conditions,
named after Dirichlet, under which x(t) is Fourier
transformable. These are
because for |t|  T/2, xp(t) = x(t). Also, since
x(t) = 0 for |t| > T/2, we can write R1
(1) jxðtÞjdt\1
Z1 1
1 jkx0 t (2) finite number of maxima and minima within
ak ¼ xðtÞe dt ð1:46Þ
T any finite interval, and
1
(3) finite number of finite discontinuities within
If we define any finite interval.

Z1 Referring to Eqs. 1.26 or 1.36, it should be


jxt obvious that the impulse response h(t) and the
XðjxÞ ¼ xðtÞe dt ð1:47Þ
1
frequency response H(jx) are Fourier transform
pairs. Explicitly
then from Eq. 1.46, we get
HðjwÞ ¼ F ½hðtފ ð1:53Þ
1 1
ak ¼ Xðjkx0 Þ ¼ Xðjkx0 Þx0 ð1:48Þ As an example of Fourier transformation,
T 2p
consider the rectangular pulse shown in
Thus, Eq. 1.43 can be written as Fig. 1.12. Notice that this is the limiting form of
1
1 X
xp ðtÞ ¼ Xðjkx0 Þx0 ejkx0 t ð1:49Þ
2p k¼ 1
x(t)

Now let T tend to infinity; then xp(t) tends to


x(t), kx0 tends to x, a continuous variable, x0
tends to dx, and the summation becomes an
integral. Thus, Eq. 1.49 becomes

Z1
1
xðtÞ ¼ XðjxÞejxt dx ð1:50Þ t
2p –t /2 0 t /2
1

Combining Eqs. 1.47 and 1.48, we now for-


mally define the Fourier transform of x(t) as Fig. 1.12 A rectangular pulse
The Fourier Transform 13

the periodic function of Fig. 1.5 with T ! ∞. Let t − s = n; then the integral inside the
Applying Eq. 1.51, we get bracket becomes e−jxs H(jx), so that
Zs=2 Z1
jxt sinðxs=2Þ
XðjxÞ ¼ A e dt ¼ sA ð1:54Þ YðjxÞ ¼ HðjxÞxðsÞe jxt
ds; ð1:59Þ
ðxs=2Þ
s=2 1

This, as will be easily recognized, is the lim- i.e.


iting form of Fig. 1.6, and is the envelope of the
same figure. This verifies the observation made YðjxÞ ¼ HðjxÞXðjxÞ ð1:60Þ
in the discussion on Fourier series.
In words, this amounts to saying that the
The Fourier transform has many important
spectrum of the output of a linear system is
properties, the most important in the context of
simply the product of the spectrum of the input
analysis of linear systems being that it converts a
signal and the frequency response of the system.
convolution in the time domain to a multiplica-
The output in the time domain, y(t) can be simply
tion in the frequency domain, i.e. if
found by taking the inverse Fourier transform of
Z1 Y(jx).
yðtÞ ¼ xðtÞ  hðtÞ ¼ xðsÞhðt sÞds ð1:55Þ To illustrate the application of Eq. 1.60,
1 consider a linear system having the impulse
response
then, assuming that y(t), x(t) and h(t) are Fourier
at
transformable, and F [y(t)] = Y(jx), we get hðtÞ ¼ e uðtÞ; a [ 0 ð1:61Þ

YðjxÞ ¼ XðjxÞHðjxÞ ð1:56Þ which is excited by an input signal

bt
The proof of Eq. 1.56 is simple and proceeds x ðt Þ ¼ e uðtÞ; b [ 0 ð1:62Þ
as follows:
By direct integration, it is easily shown that
YðjxÞ ¼ F ½yðtފ
1 1
Z1 Z1
2 3
HðjxÞ ¼ and XðjxÞ ¼ ð1:63Þ
jxt a þ jx b þ jx
¼ 4 xðsÞhðt sÞds5 e dt
1 1 Thus
ð1:57Þ
1
YðjxÞ ¼ ð1:64Þ
Interchange the order of integration and notice ða þ jxÞðb þ jxÞ
that x(s) does not depend on t; the result is
To determine y(t), one may write
Z1 Z1
2 3
jxt A B
YðjxÞ ¼ xðsÞ4 hðt sÞ e dt5ds
YðjxÞ ¼ þ ð1:65Þ
1 1 ða þ jxÞ ðb þ jxÞ
ð1:58Þ
14 1 Basic Concepts in Signals and Systems

and find A and B as Depending on the nature of the signal, the spectral
density is also to be qualified as power or energy.
1 Consider an energy signal x(t). Using the facts
A¼ B¼ ð1:66Þ
b a that jxðtÞj2 ¼ xðtÞxðtÞ and F ½xðtފ ¼ Xð jxÞ and
combining with the inversion integral Eq. 1.52, it
so that
is not difficult to show that
  
1 1 1 1 Z1 Z1
yðtÞ ¼ F 1
b a a þ jx b þ jx E¼ 2
jxðtÞj dt ¼ jXðjxÞj2 dx ð1:72Þ
ð1:67Þ 2p
1 1

1 at bt

¼ e e uðtÞ ð1:68Þ The rightmost expression in Eq. 1.72 shows


b a
that |X(jx)|2/(2p) has the dimension of energy per
Things are of course, different if a = b; then unit radian frequency, i.e. |X(jx)|2 has the
one goes back to Eq. 1.64 and uses the property dimension of energy per unit Hz. For this reason,
that if F [x(t)] = X(jx) then F [tx(t)] = j dX(jx)/ |X(jx)|2 is referred to as the energy density
dx. Accordingly if a = b, then spectrum of the signal x(t). Incidentally, Eq. 1.72
is known as the Parseval’s relation (cf. Eq. 1.31).
y ðt Þ ¼ t e at
uðtÞ ð1:69Þ For a periodic signal, which is a power signal,
we have already seen in Eq. 1.31 that the average
power P is given by 1 2
P
k¼ 1 jak j ; where |ak| is
Spectral Density the amplitude of the kth harmonic. If we define,
in similarity with Eq. 1.72,
In using Fourier transform to calculate the energy
Z1
or power of a signal, the notion of spectral den-
P¼ Sx ðf Þdf ð1:73Þ
sity is an important one. The total energy and
1
average power of a signal x(t) are defined as

ZT Z1 then Sx(f) qualifies as the power per unit Hz and


2 2 is called the power spectral density. In terms of
E ¼ lim jxðtÞj dt ¼ jxðtÞj dt ð1:70Þ
T!1 |ak|, it is easily seen that
T 1
1
X
and Sx ðf Þ ¼ jak j2 dðf kfo Þ; ð1:74Þ
k¼ 1
ZT
1
P ¼ lim jxðtÞj2 dt; ð1:71Þ where f0 = x0/(2p) is the fundamental frequency.
T!1 2T
T

respectively. A signal x(t) is called an energy


signal if 0 < E < ∞ and a power signal if Concluding Comments
0 < P < ∞. A given signal x(t) can be either an
energy signal or a power signal but not both. For more on the impulse function, see the next
A periodic signal (e.g. the one of Fig. 1.5) is chapter. In this chapter, we have talked about the
usually a power signal, while a non-periodic signal fundamentals of signals and systems and their
(e.g. the one of Fig. 1.12) is usually an energy relationship. Fourier series and Fourier transform
signal. Power and energy signals are mutually and the concepts of spectra––amplitude as well
exclusive because the former has infinite energy as phase––and the concepts of energy, power and
while the latter has zero average power. their density functions have also been introduced.
Problems 15

P:5 Determine the impulse response of a system


Problems
consisting of a cascade of two systems
characterized by the impulse response
You have to think carefully. These are designed
also carefully, as the problems are designed to at
f ðt Þ ¼ e uð t Þ
test your grasp of the fundamentals and the ease
with which you can find a clue.
and
P:1 Sketch (−1), (−½), (0), (½), (1), (1 − t) gð t Þ ¼ e bt
uðtÞ;
P:2 The impulse response of a system is h(t − t0).
Find an expression for the output y(t), if the where a > 0, b > 0 and (i) a 6¼ b
input is t2x(t1/2). (ii) a = b.
P:3 Determine the Fourier transform of the
function tu(t).
P:4 Determine and sketch the spectrum of the
following function: Bibliography
8
0; t\0
1. A.V. Oppenheim, A.S. Willsky, I.T. Young, Signals
<
xðtÞ ¼ 1; 0\t\ T20 and systems. Prentice Hall (1983)
dðt TÞ; t[0
:
The Mysterious Impulse Function
and its Mysteries 2

Some fundamental issues relating to this


Introduction
mysterious but fascinating impulse function
in continuous as well as discrete time domain
The issues considered in this chapter are usually
are discussed first. These are: definition and
treated in a routine manner in courses on circuit
relation to the unit step function, dimension of
theory, signals and systems and digital signal
impulse response, solution of differential and
processing. That problems may arise without a
difference equations involving the impulse
thorough understanding of the concepts involved
function and Fourier transform of the unit step
are usually not clearly brought out, or avoided.
function. Conceptual understanding is empha-
These problems are usually related to the
sized at every point. This is very important,
occurrence of the impulse function dðtÞ in the
particularly for beginners like you. So read the
continuous time domain and dðnÞ in the discrete
chapter carefully and grasp the contents. This
time domain in various situations. The chapter
will help you to understand the later course on
makes an attempt to clarify the concepts involved
signals and systems, control, DSP, etc.
in a few of such problem cases and the manner in
which they can be solved. In particular, starting
Keywords
with the definitions of dðtÞ and dðnÞ and their

Impulse function Unit step function relationship with the unit step functions u(t) and

Impulse response Differential equation u(n), respectively, we discuss the following

Difference equation Fourier transform issues: dimension of impulse response, solution

Signals and systems Digital signal of differential and difference equations involving
the impulse function, and Fourier transforms of
processing
u(t) and u(n). The last discussion is a concluding
one.
The chapter is written in a tutorial style so as
to be appealing to students and teachers alike.
The chapter also asks some questions, the
answers to which are left as open problems.

Source: S. C. Dutta Roy, “Some Fundamental Issues


Related to the Impulse Function,” IETE Journal of
Education, Vol 57, pp 2–8, January–June 2016.

© Springer Nature Singapore Pte Ltd. 2018 17


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_2
18 2 The Mysterious Impulse Function and its Mysteries

Definitions starting time is t = 0, then the value of the pulse,


rectangular or triangular, is 0 at t = 0−, 0, as well
The unit impulse function dðtÞ in the continuous as 0+, which is not the case with dðtÞ.
time domain is defined as The impulse function may also be looked
upon as the limiting value of some distributions,
dðtÞ ¼ 0; t 6¼ 0 and dðtÞ ! 1; t ¼ 0 ð2:1Þ p
e.g. ða pÞ 1 expð t2 =a2 Þ or a 1 sinc ðx=aÞ, the
integral of which from −∞ to +∞ is unity, as
such that
a ! 0. However, this point of view is not rele-
Zb vant in the context of this chapter, and will not be
dðtÞ dt ¼ 1; ð2:2Þ pursued further.
The function dðtÞ is best visualized by relating
a
it to the unit step function u(t), defined by
where a  0− and b  0+. The adjective ‘unit’
refers to Eq. 2.2 and the area under the function, uðtÞ ¼ 1; t  0 þ and uðtÞ ¼ 0; t  0 ð2:3Þ
which is also known as the strength of the
Clearly, the differentiation of u(t) will give
impulse. An impulse A dðtÞ will have an area
dðtÞ, satisfying both Eqs. 2.1 and 2.2. Hence, the
A under the integral and will be called an impulse
relationship between the two functions can be
of strength A. The definition must include
formally written as
Eqs. 2.1 and 2.2 together. Often Eq. 2.2 is not
paid attention to, but it must be understood that Zt
infinity is incomprehensible in a finite world. If dðtÞ ¼ duðtÞ=dt and uðtÞ ¼ dðtÞ dt; ð2:4Þ
Eq. 2.2 was not coupled to Eq. 2.1, then dðtÞ
a
would not be admissible in the realm of func-
tions, particularly for engineering analysis and where a  0−. Unlike dðtÞ, u(t) can be generated
design. Even otherwise, the impulse function is a in the laboratory by a switch in series with a
mathematical anomaly. From a purely mathe- voltage source.
matical point of view, dðtÞ is not strictly a On the other hand, in the discrete time
function because any extended real function that domain, there is no uncertainty or confusion
is zero everywhere except at a single point must about the unit impulse function dðnÞ, which is
have a total integral equal to zero. There is a defined as
fascinating history behind the acceptance of dðtÞ,
not as a function, but as a distribution or a gen- dðnÞ ¼ 1; n ¼ 0 and dðnÞ ¼ 0; n 6¼ 0 ð2:5Þ
eralized function by mathematicians. However, it
would be too much of a diversion to go into the The unit step function u(n) is defined as
details in this chapter. Instead, we refer to an
excellent tutorial article by Balakrishnan [1] on uðnÞ ¼ 1; n  0 and uðnÞ ¼ 0; n\0 ð2:6Þ
the subject. For our purposes here, Eqs. 2.1 and
2.2 as the definition of dðtÞ will suffice. Thus, the relationships between dðnÞ and u
It must be understood that dðtÞ is only a (n) are
mathematical concept and is introduced to facil-
itate analysis and design. It cannot be generated dðnÞ ¼ uðnÞ uðn 1Þ and uðnÞ
n
in the laboratory. Textbooks usually give a visual
X
¼ dðn kÞ ð2:7Þ
aid of a rectangular pulse of duration s and height k¼0
1=s or a triangular pulse of width 2s and height
1=s. By allowing s ! 0, one can visualize dðtÞ. In order to distinguish between the continuous
However, this limiting approach may fail to give and discrete time domains, dðnÞ should more
correct results in system analysis, because if the appropriately be called the ‘unit sample
Definitions 19

function’. However, in conformity with the HðsÞ ¼ L½yðtÞ=L½xðtފjzero initial conditions ; ð2:10Þ
common usage, we shall continue to call dðnÞ as
the unit impulse function, it being implied that where L stands for the Laplace transform and the
the argument of d will make it clear which initial conditions are on y(t) and its derivative(s),
domain we are referring to. if the order of the system is more than one. It is
known that the h(t) and H(s) are related to each
other by
Impulse Response
HðsÞ ¼ L½hðtÞ evaluated under zero initial conditionsŠ
In the continuous time domain, the impulse
ð2:11Þ
response h(t) of a system is generally interpreted
as the response of the system to an excitation Textbooks usually omit the condition under
d(t). What is the dimension of h(t)? To answer which h(t) is to be evaluated. It must be
this question, look at the first relation in Eq. 2.4; emphasized that h(t) is the impulse response
if u(t) is a voltage, then dðtÞ has the dimension of under zero-state condition.
volts/second, which we can neither generate nor Similarly, for the discrete time domain, the
apply to a system. However, for a linear system, transfer function is defined as
differentiation of the unit step response will give
h(t). This is how we can measure h(t) in the HðzÞ ¼ Z ½yðnÞ=Z ½xðnފjzero initial conditions ð2:12Þ
laboratory. Obviously, h(t) will have the dimen-
sion of (second)−1. and
Now look at the convolution relation:
HðzÞ ¼ Z ½hðnÞ evaluated under zero initial conditionsŠ;
Zþ 1
ð2:13Þ
yðtÞ ¼ xðsÞhðt sÞds; ð2:8Þ
1 where Z stands for the z-transform.

where y(t) is the response of a system to an


excitation x(t). Clearly, if both x(t) and y(t) are
voltages or currents, then h(t) will have the How Do You Solve Differential
dimension of (second)−1. However, if x(t) is a Equations Involving an Impulse
voltage and y(t) is a current, then h(t) will have Function?
the dimension of (ohm-second)−1. Similarly, if x
(t) is a current and y(t) is a voltage, then the We shall illustrate the kind of difficulties that
dimension of h(t) will be ohm/second. arise in solving differential equations involving
In the discrete time domain, h(n) is dimen- an impulse function, with a typical problem.
sionless, as are all signals, because the process- Consider the following differential equation:
ing is concerned only with numbers. The
y00 þ 2y0 þ 2y ¼ dðtÞ; ð2:14Þ
convolution relation
1
X where prime denotes differentiation with respect
yðnÞ ¼ xðnÞ  hðnÞ ¼ xðkÞhðn kÞ ð2:9Þ to t. Let the given initial conditions be
k¼ a
yð0 Þ ¼ 1 and y0 ð0 Þ ¼ 2 ð2:15Þ
does not involve any dimensions.
In the context of the continuous time domain, Let us try to solve Eq. 2.14 by using Laplace
the transfer function is defined as transforms, as is usually done. In order to take
20 2 The Mysterious Impulse Function and its Mysteries

account of dðtÞ on the right-hand side of constants in the former from the initial condi-
Eq. 2.14, we take the Laplace transform of y(t) as tions. The method gives correct results if there is
no impulse function in the excitation function,
Z1 but not in the present case. Why? The question is
YðsÞ ¼ yðtÞ expð stÞdt ð2:16Þ left to you as an open problem.
0 Applying the first method, we first find the
zero-input solution, i.e. the solution of Eq. 2.14
Then, the Laplace transformation of Eq. 2.14 with the right-hand side equal to zero. It can be
gives easily verified that the solution is of the form
s2 YðsÞ syð0 Þ y0 ð0 Þ þ 2 ½sYðsÞ yð0 ފ þ 2YðsÞ ¼ 1 yzi ðtÞ ¼ A cos ðt þ hÞ; ð2:22Þ
ð2:17Þ
where A and h are constants to be evaluated from
Combining Eqs. 2.17 and 2.15 and simplify- the initial conditions given by Eq. 2.15. Carrying
ing, we get out the steps, we get
p
YðsÞ ¼ ðs þ 1Þ=ðs2 þ 2s þ 2Þ ð2:18Þ A¼ 5 and h ¼ arctan 2 ð2:23Þ

Using the standard table of Laplace trans- Now consider the zero-state solution where
forms, inversion of Eq. 2.18 gives the initial conditions are to be put equal to zero.
Obviously, this can be done by the Laplace
yðtÞ ¼ ½expð tÞ cos tŠ uðtÞ ð2:19Þ transform method by putting y(0−) = 0 and
y′(0−) = 0 in Eq. 2.17. Then, Eq. 2.17 gives
Differentiation of Eq. 2.19 gives
Yzs ðsÞ ¼ 1=ðs2 þ 2s þ 2Þ ð2:24Þ
y0 ðtÞ ¼ ½expð tÞ cos tŠdðtÞ exp ð tÞ:½cos t þ sin tŠ uðtÞ
ð2:20Þ On inversion, Eq. 2.24 gives

The solutions Eqs. 2.19 and 2.20 are valid for yzs ðtÞ ¼ ½expð tÞ sin tŠ uðtÞ ð2:25Þ
t  0+. In particular, at t = 0+,
Combining Eqs. 2.22, 2.23 and 2.25, we get
0
yð0 þ Þ ¼ 1 and y ð0 þ Þ ¼ 1 ð2:21Þ the total solution as
p
Note that while y(0 +) = y(0−), y′(0+) ¼ 6 y′(0−). yðtÞ ¼ 5 cosðt þ arctan 2Þ þ ½expð tÞ sin tŠ uðtÞ
This discontinuity in initial condition is typical, ð2:26Þ
whenever an impulse function figures in the
right-hand side of a differential equation. Also note Note that the first term does not involve u(t)
that if Eqs. 2.19 and 2.20 are applied at t = 0−, the because the zero-input solution is valid for all
values are: y(0−) = 0 and y′(0−) = 0, because both t. Differentiating Eq. 2.26, we get
u(t) and dðtÞ are zero at t = 0−. p
Can we get a solution which is valid at t = 0− y0 ðtÞ ¼ 5 sin ðt þ arctan 2Þ
also? The answer is yes, if we apply the super- þ ½expð tÞ sin tŠ dðtÞ ½expð tÞðsin t þ cos tފuðtÞ
position of ‘zero-input’ and ‘zero-state’ solu- ð2:27Þ
tions. This method is seldom emphasized in
circuit theory courses. Instead, one uses the Putting t = 0− in Eqs. 2.26 and 2.27 and
superposition of the complementary function and noting that both u(t) and dðtÞ are zero at t = 0−,
the particular solution, and then evaluates the we get the same values as in Eq. 2.15.
How Do You Solve Differential Equations Involving an Impulse Function? 21

On the other hand, putting t = 0+ in Eqs. 2.26 Combining Eqs. 2.31 with 2.29 and simpli-
and 2.27 and noting that u(0+) = 1 and fying gives
dð0 þ Þ ¼ 0; we get the same values as in
1 1
Eq. 2.21. YðzÞ ¼ 6ðz 1Þ=ð1 þ z 6z 2 Þ ð2:32Þ

Expanding Eq. 2.32 in partial fractions and


taking the inverse transform gives
How Do You Solve Difference
Equations Involving an Impulse yðnÞ ¼ ½4:8ð 3Þn þ 1:2ð2Þn Š uðnÞ ð2:33Þ
Function?
This solution is strictly valid for n  0.
Do the types of difficulties discussed in the pre- However, it can be easily modified to take care of
vious section arise in the discrete time domain n < 0 by adding the terms dðn þ 1Þ and
also? The answer is yes, if z-transform method is dðn þ 2Þ to the right-hand side of Eq. 2.33.
applied blindly because the resulting solution is Consider now the zero-input, zero-state
strictly valid for n  0. However, the situation method of solution. The zero-input solution is
can be corrected by adding the necessary terms of the form
representing the initial conditions for n < 0. As
in the continuous time domain, the complemen- yzi ðnÞ ¼ K1 ð 3Þn þ K2 ð2Þn ; ð2:34Þ
tary function and particular solution method does
not work here. Why? The answer to this question where K1 and K2 are to be evaluated from
is again left to you as an open problem. On the Eq. 2.29. The result is
other hand, as in the continuous time case, if the
difference equation is solved in the n- domain by yzi ðnÞ ¼ ½5:4ð 3Þn þ 1:6 ð2Þn Š ð2:35Þ
the zero-input and zero-state method, correct
This part of the solution is valid for all n.
solution is obtained, with initial conditions taken
The zero-state solution can be obtained by the z-
into account. As in the previous case, we shall
transform technique by putting y(−1) = y(−2) = 0
illustrate these facts with an example.
in Eq. 2.31. Inverting the resulting Y(z), we get
Let it be required to solve the difference
equation
yzs ðnÞ ¼ ½0:6 ð 3Þn þ 0:4 ð2Þn Š uðnÞ ð2:36Þ

yðnÞ þ yðn 1Þ 6yðn 2Þ ¼ dðnÞ ð2:28Þ which is valid for n  0. The total solution is the
sum of Eqs. 2.35 and 2.36, i.e.
subject to the initial conditions
yðnÞ ¼ ½5:4ð 3Þn þ 1:6 ð2Þn Š þ ½0:6 ð 3Þn þ 0:4 ð2Þn Š uðnÞ
yð 1Þ ¼ 1 and yð 2Þ ¼ 1 ð2:29Þ
ð2:37Þ
If we use the z-transform method blindly and
Clearly, Eq. 2.37 will give the correct values
take the z-transform of y(n) as
for y(−1) and y(−2). Also, for n  0, Eq. 2.37
1
X gives the same result as Eq. 2.33, as expected.
n
YðzÞ ¼ yðnÞz ð2:30Þ
n¼0

Fourier Transform of the Unit Step


then the z-transform of Eq. 2.28 gives
Function

YðzÞ þ z 1 YðzÞ þ yð 1Þ 6 z 2 YðzÞ



For ready reference, recall that the Fourier
1
ð2:31Þ
transform of a continuous time function y(t) is

þz yð 1Þ þ yð 2Þ ¼ 1
defined as
22 2 The Mysterious Impulse Function and its Mysteries

Zþ 1 Zþ 1
YðjxÞ ¼ yðtÞ exp ð jxtÞdt ð2:38Þ F ½uðtފ ¼ uðtÞ expð jxtÞdt;
1 1
Zþ 1 ð2:42Þ
while that of the discrete time function y(n) is ¼ expð jxtÞdt
defined as
0
¼ ½expð jxtÞ=ð jxފ 1

1
X 0
Y ½exp ðjxފ ¼ yðnÞ exp ð jxnÞ ð2:39Þ
n¼ 1 where F stands for the Fourier transform. Since
exp(−j∞) is not known, we evaluate Eq. 2.42
Note that in either case, the transform is a
indirectly. It is easy to show that
continuous function of x, although y(n) is a
discrete variable. In Eq. 2.38, x ¼ 2pf , where F ½expð atÞ uðtފ ¼ 1=ða þ jxÞ ð2:43Þ
f is the frequency in Hz, while in Eq. 2.39,
x ¼ 2pf =fs , where fs is the sampling frequency It is tempting to put a = 0 in Eq. 2.43 and
in Hz. Consequently, in Eq. 2.38, x has the conclude that F[u(t)] = 1/(jx), but this is part of
dimension of (second)−1, while in Eq. 2.39, x is the story. Hidden here is the fact that Eq. 2.43
dimensionless. has a real part as well as an imaginary part, i.e.
In order to be useful, any transformation must
be reversible, i.e. one must be able to recover the 1=ða þ jxÞ ¼ ½a=ða2 þ x2 ފ ½jx=ða2 þ x2 ފ
original signal from the transformed one in a ð2:44Þ
unique way. The Fourier transform is no excep-
tion for which the inverse Fourier transforms is While 1/( jx) takes care of the imaginary part
when a = 0, the real part becomes 1=x2 when
Zþ 1
a = 0, and it tends to infinity when x ! 0:
yðtÞ ¼ ½1=ð2pފ YðjxÞ expðjxtÞdx ð2:40Þ Hence, there is an impulse function at x ¼ 0: (It
1 would be wrong to say that the real part becomes
of the form 0/0 when both a and x are zero,
corresponding to Eq. 2.38 and
because the denominator becomes 02 so that the
Z1 real part becomes 1/0, i.e. infinite). To determine
yðnÞ ¼ ½1ð2pފ Y ½expðjxފ exp ðjxnÞdx the strength of the impulse, we find the area
under a=ða2 þ x2 Þ: This is given by
1
ð2:41Þ Zþ 1
½a=ða2 þ x2 ފdx ¼ ½arctan ðx=aފj11 ¼ p
corresponding to Eq. 2.40. The limits of the
1
integral in Eq. 2.41 cover a range of 2p because
Y[exp(jx)] is periodic with a period of 2p. That ð2:45Þ
both Eqs. 2.38 and 2.40 or Eqs. 2.39 and 2.41
Thus, the real part in Eq. 2.44 becomes pdðxÞ
cannot be definitions is often not appreciated by
when both a and x tend to zero. Finally,
students.
therefore,
Now, we turn to the main item of discussion,
i.e. the Fourier transforms of u(t) and u(n). F ½uðtފ ¼ pdðxÞ þ ½1=jxފ ð2:46Þ
Applying the definition, we get
Fourier Transform of the Unit Step Function 23

For finding the Fourier transform of u(n), if confusions have been clarified. That the com-
we apply the definition Eq. 2.39 blindly, then it plementary function, and particular solution
is tempting to conclude that the required trans- method does not work if the excitation contains
form is 1=½1 expð jxފ. However, as in the an impulse function does not appear to have been
case of u(t), this is a part of the story; hidden here recorded earlier. The reason why it does not work
is the fact that u(n) has an average value of ½. To has been left as an open problem for you.
show that 1=½1 expð jxފ is not F[u(n)], con- The reason why misconceptions and confu-
sider the sequence sions persist in the minds of students and
teachers lies in the kind of rote learning that is
sgn ðnÞ ¼ 1=2 for n\0 and þ 1=2 for n  0 emphasized and practiced in the class. Also, even
ð2:47Þ if the teacher is knowledgeable and wishes to
explain the concepts and subtle points, he/she
It is not difficult to show, by applying (2.39) finds no time to do this because of the pressure to
that F ½sgnðnފ ¼ 1=½1 expð jxފ. Clearly, ‘cover’ the syllabus, which invariably is
overloaded.
uðnÞ ¼ sgnðnÞ þ ð1=2Þ ð2:48Þ

Thus, F[u(n)] will have an additional term Problems


corresponding to the average value of ½. To find
this term, consider the following inverse P:1 Solve the equation
z-transform:
y000 þ 3y00 þ 3y0 þ y ¼ dðtÞ;
þ1
X Zþ p
F 1 ½p dðx þ 2pkފ ¼ ½1ð2pފ p
1
given
p
þ1

X
dðx þ 2pkÞ expðjxnÞdx yð0Þ ¼ 0; y0 ð0Þ ¼ 0; y00 ð0Þ
1

ð2:49Þ P:2 Solve the equation

In the range of integration, only the k = 0 term yðn 4Þ þ yðn 3Þ þ yðn 1Þ ¼ dðn 1Þ
will have an effect, which gives the right-hand
side of Eq. 2.49 as ½. Since the Fourier trans- given
form gives a one-to-one correspondence between
yð0Þ ¼ yð 1Þ ¼ yð 2Þ ¼ yð 3Þ ¼ 1:
the n and x domains, we conclude that
P:3 Solve the equation
F½uðnފ ¼ 1=½1 expð jxފ
1
X ð2:50Þ dyðtÞ at
þp dðx þ 2pkÞ þ q0 yðtÞ ¼ Ae uðtÞ
1
dt

given that

Conclusion yð0Þ ¼ y0

In this chapter, some fundamental and conceptual P:4 Solve the equation
issues, relating to the unit impulse function, in
both continuous and discrete time domains, are y0 þ q0 y ¼ q0 x0 þ x
presented and some possible misconceptions and
24 2 The Mysterious Impulse Function and its Mysteries

Acknowledgement The author acknowledges the help


given that received from Professor Yashwant V Joshi in the prepa-
ration of this chapter.
yð0Þ ¼ 0

P:5 Solve the equation


Reference
yðnÞ þ q0 yðn 1Þ ¼ xðnÞ;
1. V. Balakrishnan, All about the Dirac delta function(?).
given that Resonance 8(8), 48–58 (2003)
yð 1Þ ¼ y0 :
State Variables—Part I
3

Here, we introduce state variables, which were linear and time-invariant systems only. Linearity
the hot topics in the middle of 1960s. Later, implies that if inputs x1(t) and x2(t) produce
they have seeped into and made deep impact outputs y1(t) and y2(t), respectively, then an input
on signals and systems, circuit theory, con- ax1(t) + bx2(t), where a and b are arbitrary con-
trols, etc. You better get familiar with them stants, should lead to the output ay1(t) +by2(t).
and make friends with them as early as This involves the two principles of homogeneity
possible. and superposition. Time invariance implies that
if an input x(t) produces the output y(t), then the
delayed input x(t − s) should produce an output
Keywords y(t − s), delayed by the same amount s. For state
 
State variables Standard forms Choice of variable characterization of systems which do not

state variables Solution of state equations obey linearity and/or time invariance, the reader
is referred to the literature listed under
references.
This discussion on state variables is organized
Why State Variables?
in two parts. In Part I, we introduce the concept
of state; clarify definitions and symbols; present
The state variable approach provides an extre-
the standard form of linear state equations; dis-
mely powerful technique for the study of system
cuss how state variables are chosen in a given
theory and has led to many results of far-reaching
physical system, or a differential equation
importance. This chapter is intended to serve as
description of the same and elaborate on the
an introduction to the basic concepts and tech-
methods of solving the state equations. In Part II
niques involved in the state variable characteri-
of this presentation, in the next chapter, we shall
zation of systems. The discussion is restricted to
deal with the properties of the fundamental
matrix and procedures for evaluation of the same,
and dwell upon the state transition flow graph
method in considerable details. The references
Source: S. C. Dutta Roy, “State Variables—Part I,” for the whole chapter will be given in Part II,
IETE Journal of Education, vol. 38, pp. 11–18, January– which will also include an appendix on the
March 1997. essentials of matrix algebra.
This chapter and the next one are based on some notes
prepared for some students at the University of
Minnesota in USA as early as 1965.

© Springer Nature Singapore Pte Ltd. 2018 25


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_3
26 3 State Variables—Part I

What is a State? state is described by a set of numbers, called state


variables, which contains sufficient information
Consider an electrical network containing resis- regarding the past history of the system so that
tances, capacitances and inductances, whose the future behaviour can be computed. For
response to an excitation e(t) is r(t). Since the example, in an electrical network, the inductor
network is known, a complete knowledge of currents and capacitor voltages constitute the
e(t) over the time interval −∞ to t is sufficient to state variables.
determine the output r(t) over the same time To represent a system of more than two or
interval. However, if e(t) is known over the time three state variables, it is convenient to use the
interval t0 to t, as is usually the case (with t0 vector space notation of modern algebra. A sys-
taken as equal to zero), then the currents through tem defined by n independent states xi(t), i = 1,
the inductors and the voltages across the capac- 2, … n, (corresponding to an nth order system),
itors in the network must be known at some time can be represented by the state vector
t1, t0  t1  t (usually, t1 = t0, hence the name 2 3
‘initial conditions’), in order to determine r x1 ðtÞ
(t) over the interval t0 to t. These currents and 6 x2 ðtÞ 7
xð t Þ ¼ 6 . 7 ð3:1Þ
6 7
voltages constitute the ‘state’ of the network at 4 .. 5
time t1. In this sense, the state of the network is xn ðtÞ
related to its memory; for a purely resistive net-
work (zero memory), only the present input is Consider the multiple-input, multiple-output
required to determine the present output. (or multivariable) system shown in Fig. 3.1. It
Next, consider a general type of linear system, can be described by a set of simultaneous
which is described by a set of linear differential differential equations relating the state vector
equations for t  t0. The complete solution of x(t) given by Eq. 3.1, the excitation vector1
these equations will involve arbitrary constants
2 3
which can be determined from ‘initial’, ‘given’ u1 ðtÞ
or ‘boundary’ conditions at time t = t0 (or at 6 u2 ðtÞ 7
uðtÞ ¼ 6 . 7; ð3:2Þ
6 7
t = t1 where t0  t1  t). The boundary con- 4 .. 5
ditions can thus be termed as the ‘state’ of the
um ðtÞ
system at t = t0. Heuristically, the state of a
system separates the future from the past, so that
the state contains all relevant information con-
cerning the past history of the system, which is u1(t) y1(t)
required to determine the response for any input. u2(t) y2(t)
. .
. .
The evolution of an excited system from one .
.
.

. n-th Order .
.

state to another may be visualized as a process of .


.
.
.
System
.
.
.
.
. .

state transition. .
.
.
.
.
.
.
.
.
.
.
.

um(t) yp(t)

Definitions and Symbols Fig. 3.1 Schematic representation of a multivariable


system

The state of a dynamical system may now be


formally defined as the minimal amount of
information necessary at any time such that this
1
information, together with the input or the exci- Note that the symbol u(t) is usually reserved for the unit
step function. We have used u(t) to denote the excitation
tation function and the equations describing the
function for conformity with the control literature. We
dynamics of the system, characterize completely shall use the symbol l(t) for the unit step function when
the future state and output of the system. The the occasion arises.
Definitions and Symbols 27

where m is the number of inputs, and the D


response vector
. +
2 3 + Σ x x +
y1 ðtÞ u B C ? y
+
6 y2 ðtÞ 7
yðtÞ ¼ 6 . 7; ð3:3Þ
6 7
4 .. 5
A
yp ðtÞ
Fig. 3.2 Block diagram representation of the state
where p is the number of outputs, through the equations
system parameters, as discussed next.
Example 1 Consider the network shown in
Fig. 3.3. We identify the inductor current i1 and
Standard Form of Linear State
the capacitor voltage v2 as the state variables.
Equations
There are two excitations, e1(t) and e2(t). Let the
desired responses be i1 and i2. Thus, for this
In general, the equations of a dynamical system
system, we identify
can be written in the following functional forms
     
xðtÞ ¼ f½xðt0 Þ; uðt0 ; tފ ð3:4Þ i1 e1 i
x¼ ; u¼ and y¼ 1 ð3:8Þ
v2 e2 i2
yðtÞ ¼ g½xðt0 Þ; uðt0 ; tފ ð3:5Þ

for t  t0. If the system can be described by a A system of two first-order differential equa-
set of ordinary linear differential equations, then tions can be written for this electric circuit as
the state equations can be written as follows:

_ ¼ AðtÞxðtÞ þ BðtÞuðtÞ
xðtÞ ð3:6Þ Lðdi1 =dtÞ ¼ e1 ðv2 þ e2 Þ ¼ e1 e2 v2 ð3:9Þ
yðtÞ ¼ CðtÞxðtÞ þ DðtÞuðtÞ; ð3:7Þ
i2 ¼ Cðdv2 =dtÞ ¼ i1 ðv2 þ e2 Þ=R ð3:10Þ
where the dot above the symbol x denotes
differentiation with respect to time and A(t), B(t), Equations 3.9 and 3.10 can be rearranged as
C(t) and D(t) are, in general, time-varying real follows:
matrices. Equations 3.6 and 3.7 are the standard
forms of linear state equations, as will be illus- ðdi1 =dtÞ ¼ ð 1=LÞv2 þ ðl=LÞe1 þ ð 1=LÞe2
trated in the examples to follow. If the system ð3:11Þ
is time invariant, then the matrices A(t), B(t),
C(t) and D(t) are constants. A general block
ðdv2 =dtÞ ¼ ð1=C Þi1 þ ½ 1=ðRCފv2 þ ½ 1=ðRC ފe2
diagram representation for the state equations is
given in Fig. 3.2. ð3:12Þ
Referring to Eqs. 3.1 to 3.7, we observe that
These two equations can be combined into the
the dimensions of the matrices A, B, C and D are
following single matrix equation:
n  n, n  m, p  n and p  m, respectively.

       
d i1 0 1=L i1 1=L 1=L e1
¼ þ ð3:13Þ
dt v2 1=C 1=ðRCÞ v2 0 1=ðRCÞ e2
28 3 State Variables—Part I

L C How Do You Choose State Variables


in a Physical System?
i1 + v2 –
+ +
e1(t) R e2(t) The choice of state variables in Example 1 was
– –i very simple. We chose the capacitor voltage and
2
the inductor current as the two state variables.
This should not lead one to believe that, in
Fig. 3.3 Circuit for Example 1 general, an electrical network with l inductors
and c capacitors would require l + c state vari-
ables. This is because not all capacitor voltages
A comparison of Eq. 3.13 with Eq. 3.6 yields can be specified independently when there are
the identification capacitor loops in the network. Similarly, not all
  inductor currents can be specified independently
0 1=L when there are inductance cut-sets2 in the net-
A¼ and
1=C 1=ðRCÞ work. An allowable set of state variables is the
  ð3:14Þ
1=L 1=L set of all capacitor voltages, diminished by a
B¼ subset equal to the number of capacitor loops,
0 1=ðRCÞ
plus the set of inductor currents, diminished by a
The equation for the output vector y is derived subset equal to the number of inductance
by recognizing the fact that cut-sets. Thus to find the state equations, we must
eliminate, from the set of equations governing
the network, all non-state variables, that is all
i1 ¼ i1 ð3:15Þ
resistance voltages and currents, and those
and capacitor voltages and inductor currents which
correspond to certain members of capacitance
i2 ¼ i1 ðv2 =RÞ ðe2 =RÞ ð3:16Þ loops and inductance cut-sets.
These topological circuit constraints, which
Equations 3.15 and 3.16 may be combined reduce the total number of state variables from
into the following single matrix equation: the total number of storage elements, apply to
other systems also. However, the state variables

i1
 
1 0

i1
 
0 0

e1
 of a non-electrical system can perhaps be detec-
¼ þ ted more easily by drawing the analogous elec-
i2 1 1=R v2 0 1=R e2
trical system and by finding the all capacitor
ð3:17Þ loops and all inductance cut-sets in this analo-
gous representation.
Equation 3.17 is of the form Eq. 3.7 with
Example 2 Consider the network shown in
    Fig. 3.4 in which there is one all capacitor loop
1 0 0 0
C¼ and D ¼ and one all inductance node. Thus, although
1 1=R 0 1=R
there are six energy storage elements, we shall
ð3:18Þ require only 6 − 1 − 1 = 4 state variables. This
reduction occurs because we can assign arbitrary
In this chapter, we shall restrict the term
initial voltages to two capacitors only (the initial
‘state equation’ to denote the set of first-order
voltage on the remaining capacitor is determined
differential equations given by Eq. 3.6 only,
by Kirchoff’s voltage law); similarly, we can
because the other equation, viz. Eq. 3.7,
assign arbitrary initial currents in two inductors
relating the output to the input and the state
vectors, can be easily obtained by algebraic 2
A cut-set is a set of branches, which, when removed,
means. splits the network into two unconnected parts.
How Do You Choose State Variables in a Physical System? 29

C3 detailed exposition of this topic, see Kuh and


i1 i2
Rohrer [1].
R1

L1 L2
+ L3 +
+
V1 V1 C1 C2 V2

– – How Do You Choose State Variables
R2
When the System Differential
Equation is Given?
Fig. 3.4 Circuit for Example 2
When the physical system is given, the natural
choice of state variables is the quantities associ-
only (the third inductor current is determined by ated with the energy storage elements. Suppose,
Kirchoff’s current law). Let us, therefore, choose however, that the only given information about
v1, v2, i1 and i2 as the state variables. We can then the system is a single differential equation
write the following equations: involving the output of the system, y, and its first
3
9 k derivatives, e.g.
ðvi v1 Þ=R1 ¼ C1 ðdv1 =dtÞ þ i1 þ C3 ½dðv1 v2 Þ=dtŠ >
>
C2 ðdv2 =dtÞ ¼ i2 þ C3 ½dðv1 v2 Þ=dtŠ
=  
v1 L1 ðdi1 =dtÞ ¼ v2 þ L2 ðdi2 =dtÞ > F y; y_ ; yð2Þ ; . . .; yðkÞ þ uðtÞ ¼ 0 ð3:20Þ
>
¼ L3 ½dði1 i2 Þ=dtŠ þ R2 ði1 i2 Þ
;

ð3:19Þ If Eq. 3.20 can be rearranged to the form


 
yðkÞ ¼ f y; y_ ; yð2Þ ; . . .; yðk 1Þ
þ uð t Þ ð3:21Þ
By manipulating these equations, one can
express the quantities (dv1/dt), (dv2/dt), (di1/dt) then a natural, but quite arbitrary choice of state
and (di2/dt) in terms of v1, v2, i1, i2 and vi, and variables would be the following:
hence obtain the state formulation. This reduc-
tion to a canonical form is left to you as an x1 ¼ y; x2 ¼ y_ ; x3 ¼ yð2Þ ; . . .; xk ¼ yðk 1Þ
exercise.
ð3:22Þ
In the preceding example, it was easy to
identify the all-capacitance loop and the It follows that the state variables obey the
all-inductance node. In a complicated network, following differential equations:
however, difficulties may arise in counting the
number of such subsets. To organize the count- x_ 1 ¼ x2
ing in such a situation, first, open circuit all x_ 2 ¼ x3
elements in the network excepting the capacitors ... ð3:23Þ
and count the number of independent loops by x_ k 1 ¼ xk
the usual topological rule, namely (Nl)c = Nb − x_ k ¼ f ðx1 ; x2 ; . . .; xk Þ þ uðtÞ
Nj + Ns, where Nb is the number of branches, Nj
is the number of nodes, and Ns is the number of For linear systems, f(x1, x2,…, xk) will be of
separate parts. Next, short circuit all elements of the form
the network excepting the inductors and find the
number of independent nodes by the formula f ¼ a1 x 1 þ a2 x 2 þ    þ ak x k ð3:24Þ
(Nn)l = Nj − Ns. Then, the minimal number of
state variables is N = Nl+c − (Nl)c − (Nn)l, where
Nl+c is the total number of energy storage ele-
ments in the network. It should also become
apparent now that the choice of state variables for 3
The symbol y(k) stands for the kth derivative of y with
a given physical system is not unique. For a more respect to t.
30 3 State Variables—Part I

Hence, the state equation will be of the form where b is a constant to be determined, and leave
Eq. 3.6 with the other state variables unchanged. Then the
2 3 dynamic equations become
0 1 0 0 0  0
6 0
6 0 1 0 0  0 7 7 x_ 1 ¼ x2 ; x_ 2 ¼ x3 ; . . .; x_ k 2 ¼ xk 1 ; ð3:32Þ
6 0 0 0 1 0  0 7
A¼6 6
7
6      7 7 and
4 0 0 0 0 0  1 5
a1 a2 a3 a4 a5    ak x_ k 1 ¼ yðk 1Þ
¼ xk þ bu
ð3:25Þ
The last equation must satisfy Eq. 3.27, so
which is of dimension k  k, and that
0
2 3
ð2Þ
x_ k ¼ xk 1 bu_ ¼ yðkÞ bu_
607
¼ a1 ðxk þ buÞ a2 xk 1    ak x1
07
6 7
B¼6 6.7 ð3:26Þ
4 .. 5 þ ðb1 bÞu_ þ b2 u

1 ð3:33Þ

which has the dimension k  1. The term involving u_ disappears if we choose


A more complicated situation arises when the b1 = b. Then
differential equation also involves the derivatives
x_ k ¼ a1 x k a2 x k l ;    ak xl þ ðb2 a1 b1 Þu
of the input. For example, consider the equation
ð3:34Þ
ðk 1Þ
y ð k Þ þ a1 y þ    þ ak 1 y_ þ ak y
¼ b1 u_ ðtÞ þ b2 uðtÞ ð3:27Þ and the state equation becomes of the form
Eq. 3.6 with
If we make the same choice of the state 2 3
variables as in the previous case, i.e. 0 1 0 0  0
6
6 0 0 1 0  0 77
A¼6       7
x1 ¼ y; x2 ¼ y_ ; . . .; xk ¼ yðk 1Þ
ð3:28Þ 6 7
4 0 0 0 0  1 5
ak ak 1    a1
we would obtain the following dynamic equations:
ð3:35Þ
x_ 1 ¼ x2 ; x_ 2 ¼ x3 ; . . .; x_ k 1 ¼ xk ð3:29Þ
which is of dimension k  k, and
and the remaining equation for x_ k would be, from
Eq. 3.27, 2
0
3
6 0 7
x_ k ¼ a1 x k a2 x k l ;    ak x1 þ bl u_ þ b2 u 6 .. 7
B¼6 . 7 ð3:36Þ
ð3:30Þ 6 7
4 b 5
b2 a1 b 1
which is not in the normal form because of the
presence of the u_ term.
which is of dimension k  1. For a more general
To avoid this difficulty, let us choose
procedure for determining the normal form of
linear differential equations, you are referred to
xk ¼ yð k lÞ
bu; ð3:31Þ
Schwartz and Friedland [2].
How Does One Solve Linear Time-Invariant State Equations? 31

How Does One Solve Linear We expect the solution to be of the same form
Time-Invariant State Equations? as Eq. 3.41, but involving matrix exponential
functions.
Consider the familiar first-order differential
equation Laplace Transform Method
Taking the Laplace transform of Eq. 3.42 gives
ðdx=dtÞ ¼ ax þ bu; ð3:37Þ

where x(t) and u(t) are scalar functions of time. sXðsÞ xð0Þ ¼ AXðsÞ þ BUðsÞ ð3:43Þ
The solution of this equation can be obtained by
using either the integrating factor or the Laplace
A block diagram representation of Eq. 3.43 is
transform method. Using the latter method, we
shown in Fig. 3.5. The matrix solution of this
get, from Eq. 3.37,
equation is given by
sX ðsÞ xð0Þ ¼ aX ðsÞ þ bU ðsÞ ð3:38Þ
XðsÞ ¼ ðsI AÞ 1 xð0Þ þ ðsI AÞ 1 BUðsÞ;
or, ð3:44Þ
ðs aÞX ðsÞ ¼ xð0Þ þ bU ðsÞ ð3:39Þ where I is the identity matrix
or, 2 3
1 0 0  0
X ðsÞ ¼ ½xð0Þ=ðs aފ þ ½bU ðsÞ=ðs aފ ð3:40Þ 6 0 1 0  0 7
I¼6
4 
7 ð3:45Þ
   5
or, 0 0 0  1

Zt of dimension n  n and ( )−1 indicates matrix


1 at aðt
xðtÞ ¼ L X ðsÞ ¼ e xð0Þ þ e sÞ
buðsÞds inversion. Clearly, the solution is determined by
0 the properties of the matrix (sI − A), aptly called
ð3:41Þ the characteristic matrix of the system. If we
write
where the last term on the right-hand side is
easily recognized as the familiar convolution LðsÞ ¼ sI A ð3:46Þ
integral. The same result could also have been
then
obtained by using the integrating factor 2 at .
Now, consider the vector differential equation
L 1 ðsÞ ¼ ðsI AÞ 1 ¼ La ðsÞ=DðsÞ; ð3:47Þ
ðdx=dtÞ ¼ Ax þ Bu ð3:42Þ

Fig. 3.5 Block diagram x(o)


representation of (3.43)

+
U(s) Input B + å 1/S X(s)
+ State output vector

A
32 3 State Variables—Part I

where La(s) is the adjoint of the characteristic The Laplace transform method of solution
matrix and requires evaluation of the adjoint of a matrix.
Also, in general, it is difficult to obtain the
DðsÞ ¼ det ðsI AÞ ð3:48Þ
required inverse transformation.
Before we proceed to discuss a second
This determinant is a polynomial of degree
method of solution, we digress a little in order to
n and is called the characteristic polynomial of
introduce the important concept of the transfer
the matrix A or of the system characterized by
matrix. Taking the Laplace transform of Eq. 3.7
the matrix A. The roots of this polynomial are
and substituting the value of X(s) from Eq. 3.44,
called by various different names, some of them
we get
being eigenvalues, natural modes, natural fre-
quencies and characteristic roots. As the ele-
YðsÞ ¼ CðsI AÞ 1 xð0Þ þ ½CðsI AÞ 1 B þ DŠXðsÞ
ments in La(s) are the co-factors of L(s), these
will be of order (n − 1). ð3:52Þ
In the event that the entries in the adjoint
With zero initial conditions, this becomes
matrix have a common factor, this will also
appear in ∆(s). This will permit the cancellation
of the common factor in L−1(s). This cancellation YðsÞ ¼ ½CðsI AÞ 1 B þ DŠXðsÞ ¼ HðsÞXðsÞ;
should be done before evaluating the inverse ð3:53Þ
Laplace transform of L−1(s).
The time domain solution follows by taking where
the inverse Laplace transform of the expression
HðsÞ ¼ 2½CðsI AÞ 1 B þ DŠ
½1=DðsފLa ðsÞxð0Þ þ ½1=DðsފLa ðsÞBUðsÞ H11 ðsÞ H12 ðsÞ    H1n ðsÞ
3
ð3:49Þ 6 H21 ðsÞ H22 ðsÞ    H2n ðsÞ 7
¼64 
7
   5
As in the case of ordinary expressions in Hp1 ðsÞ Hp2 ðsÞ    Hpn ðsÞ
Laplace variable s, we expand each of the terms ð3:54Þ
in (3.49) into a partial fraction and take the
inverse Laplace transform. For example, for the Thus, the ith component Yi(s) of the transform
force-free system, assuming that the roots of Y(s) of the output vector may be written as
∆(s) = 0 are all distinct, we have
h i Yi ðsÞ ¼Hi1 ðsÞX1 ðsÞ þ Hi2 ðsÞX2 ðsÞ
x ðt Þ ¼ L l
ðsI AÞ 1
x ð 0Þ ð3:55Þ
þ    þ Hin ðsÞXn ðsÞ
¼ L 1 f½X1 =ðs s1 ފ þ ½X2 =ðs s2 ފ
þ    þ ½Xn =ðs sn ފg Clearly, Hij(s), the (i, j)th element of H(s), is
the transfer function between Xj(s) and Yi(s) and
¼ X1 es1 t þ X2 es2 t þ    þ Xn esn t
ð3:50Þ hij ðtÞ ¼ L 1 ½Hij ðsފ ð3:56Þ

where is the response at the ith output terminal due to an


unit impulse at the jth input terminal.
Xi ¼ lim ½ðs si ÞLa ðsÞ=Dðsފxð0Þ ð3:51Þ We have thus found a general expression for
s!sl
the transfer functions of a system in terms of the
The case in which the roots are not all distinct matrices appearing in the normal form charac-
can be handled by the same technique as in the terization. The matrix H(s) is called the transfer
ordinary expressions in the Laplace variable s. matrix of the system.
How Does One Solve Linear Time-Invariant State Equations? 33

Example 3 Consider again the circuit in Fig. 3.5 The transfer matrix is given by Eq. 3.61
with L = (1/2) Henry, C = 1 Farad and R = (1/3) Thus, the response i2(t) to the input
ohm. Then from Eqs. 3.14 and 3.18, A, B, C and e1(t) = d(t) is the inverse transform of H21(s),
D matrices become giving
2t t
0

2
 
2 2
 h21 ðtÞ ¼ 4e 2e ð3:62Þ
A¼ B¼
1 3 0 3 which checks with the result of a direct calcula-
   
1 0 0 0 tion. It may also be noted that only H22(s) has a
C¼ and D ¼
1 3 0 3 numerator of second degree, so that only
ð3:57Þ h22(t) involves an impulsive term. This is exactly
what we expect, since e2(t) = d(t) will result in
an impulse of current through C.
Therefore
By Series Expansion
  We next consider a solution of the vector dif-
s 2
LðsÞ ¼ ðsI AÞ ¼ ð3:58Þ ferential equation by series expansion. First,
1 sþ3
consider the homogeneous equation
and

sþ3
 
1 1 2
ðsI AÞ ¼
s2 þ 3s þ 2 1 s
ð3:59Þ
½ 1=ðs þ 2ފ þ ½2=ðs þ 1ފ½2=ðs þ 2ފ þ ½ 2=ðs þ 1ފ
 
¼
½ 1=ðs þ 2ފ þ ½1=ðs þ 1ފ½2=ðs þ 2ފ þ ½ 1=ðs þ 1ފ

For the force-free case, the time domain x_ ¼ Ax ð3:63Þ


solution is Performing a Taylor series expansion about
  the origin of the state space, we get
e 2t þ 2e t 2e 2t 2e t
xð t Þ ¼ x(0)
e 2t þ e t 2e 2t e t xðtÞ ¼ E0 þ E1 t þ E2 t2 þ    ð3:64Þ
ð3:60Þ
where Ei’s are column vectors, whose elements
are constants. In order to determine these vectors,

HðsÞ ¼ CðsI AÞ 1 B þ D
sþ3
     
1 1 0 2 2 2 0 0
¼ 2 þ
s þ 3s þ 2 1 3 1 s 0 3 0 3 ð3:61Þ
2ðs þ 3Þ
 
1 2s
¼ 2
s þ 3s þ 2 2s ð3s2 þ 2sÞ
34 3 State Variables—Part I

we successively differentiate Eq. 3.64 and set Substituting this expression in Eq. 3.6, we
t = 0; thus obtain

xð0Þ ¼ E0 ; xð0Þ
_ ¼ E1 ; xð2Þ ð0Þ ¼ 2E2 ; . . . ðd/=dtÞq þ /ðdq=dtÞ ¼ A/q þ B u ð3:70Þ
ð3:65Þ
But
However, from Eq. 3.63, we note that
d/=dt ¼ d eAt =dt ¼ AeAt ¼ A/ ð3:71Þ
ð2Þ
_
xð0Þ ¼ Axð0Þ; x ð0Þ ¼ A xð0Þ
_ ¼ A½Axð0ފ
so that
¼ A2 xð0Þ; . . .
ð3:66Þ /ðdq=dtÞ ¼ B u ð3:72Þ

where A2 implies the matrix multiplication or


A  A. Continuing the process, we obtain the
following series solution for x(t): dq=dt ¼ / 1 ðtÞB uðtÞ ð3:73Þ

xðtÞ ¼ I þ At þ ðA2 t2 =2!Þ þ ðA3 t3 =3!Þ þ    xð0Þ


 
or
ð3:67Þ
Zt
This infinite series defines the matric expo- qðtÞ ¼ / 1 ðsÞBuðsÞ ds ð3:74Þ
nential function eAt which may be shown to be 0

convergent for all square matrices A. Therefore


Thus, the particular integral is given by
At
xð t Þ ¼ e xð 0Þ ð3:68Þ
Zt
Comparing this result with Eq. 3.41, we note the xp ðtÞ ¼ /ðtÞ / 1 ðsÞBuðsÞ ds ð3:75Þ
equivalence of the unforced part of the solution. 0

The function eAt contains a great deal of informa-


and the complete solution is
tion about the system behaviour and, as such, is
called the fundamental or the transition matrix, Zt
denoted by /ðtÞ. It possesses all the important
xðtÞ ¼ /ðtÞxð0Þ þ /ðtÞ/ 1 ðsÞBuðsÞ ds
properties of the scalar exponential function. In
0
particular, the derivative of the function yields the
function itself pre-multiplied by a constant. Using ð3:76Þ
this property and the method of variation of
parameters, we now proceed to determine the This expression is analogous to the familiar
forced part of the solution, i.e. the particular complete solution of a scalar first-order differ-
integral of the original equation given by Eq. 3.6. ential equation, given in Eq. 3.41. The proper-
By analogy with the particular solution of the ties of the fundamental matrix /ðtÞ allow a
scalar equation given by Eq. 3.37, we let the further simplification of Eq. 3.76. These prop-
particular integral of the vector differential erties and methods for evaluation of /ðtÞ will be
equation be discussed in details in Part II, which will also
include a bibliography and an appendix on
xp ðtÞ ¼ /ðtÞqðtÞ ð3:69Þ matrix algebra.
An Advice 35

L3
+ C1 C3
u (t ) C2 R L
– L2
+ R1
u (t ) L1 R2
Fig. P.1 Circuit for Problem P.1 –

Fig. P.3 Circuit for Problem P.3

+ R C2
u(t) C1 C3 R
– L3

L1 R
Fig. P.2 Circuit for Problem P.2 +
u (t ) L2 C L4

An Advice Fig. P.4 Circuit for Problem P.4

The best way to comprehend a topic like this is to


follow the examples given here as well as those P:3. Write the state equations for the circuit
in the next chapter and in addition, solve as many shown in Fig. P.3.
examples as possible, from the reference books, P:4. Write the state equations for the circuit
cited below. shown in Fig. P.4.
P:5. Solve the state equations for the circuit in
Fig. P.4.
Problems

I believe, as I mentioned on earlier occasion, that


they are not difficult. References

P:1. Write the state equations for the circuit in 1. E.S. Kuh, R.A. Rohrer, The state variable approach to
network theory, in Proc IRE, vol 53 (July 1965),
Fig. P.1. pp. 672–686
P:2. Write the state equations for the circuit in 2. R.J. Schwartz, B. Friedland, Linear Systems (McGraw
Fig. P.2. Hill, 1965)
State Variables—Part II
4

In the first part of this discussion on state the same. In particular, we dwell upon the state
variables, which hopefully you have grasped, transition flow graph method in considerable
we presented the basic concepts of state details. We provide a bibliography at the end and
variables and state equations, and some include an appendix on the essentials of matrix
methods for solution of the latter. In this algebra.
second and concluding part, we dwell upon Equations and figures occurring in Part I and
the properties and evaluation of the funda- referred to here are not reproduced, for brevity,
mental matrix. An appendix on matrix algebra and it is suggested that you make it convenient to
is also included. Several examples have been have Part I at hand for ready reference.
given to illustrate the techniques. This is the
last part, be assured!
Properties of the Fundamental
Matrix
Keywords

Fundamental matrix Fundamental state One important property of the fundamental

equation Evaluating fundamental state matrix /(t), namely that

matrix State transition flow graphs
Review of matrix algebra d/=dt ¼ A/

has already been mentioned.


In Part I of this discussion, we introduced the Consider now the solution to the homoge-
basic concepts of state variables and state equa- neous vector differential equation, given by
tions, and discussed several methods of solution Eq. 3.68 in Part I:
of the latter. In this second and the concluding
part, we deal with the properties of the funda- xðtÞ ¼ /ðtÞxð0Þ ð4:1Þ
mental matrix and procedures for evaluation of
Or, explicitly

Source: S. C. Dutta Roy, “State Variables—Part II,”


IETE Journal of Education, vol. 38, pp. 99–107, April–
June 1997.

© Springer Nature Singapore Pte Ltd. 2018 37


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_4
38 4 State Variables—Part II

2
x1 ðtÞ
3 2 32 3 Consider the evaluation of the transient
/11 ðtÞ /12 ðtÞ  /1n ðtÞ x1 ð0Þ
6 x2 ðtÞ 7 6 6 x2 ð0Þ 7 response of a time-invariant system at various
/21 ðtÞ /22 ðtÞ  /2n ðtÞ 7
6 .. 7 ¼ 6 values of time t1 and t2 while the initial time was
   4 ... 5
6 7 76 7

6 7
4 . 5 4 5
t0. At t = t1,
xn ðtÞ /n1 ðtÞ /n2 ðtÞ  /nn ðtÞ xn ð0Þ
ð4:2Þ xðt1 Þ ¼ /ðt1 t0 Þxðt0 Þ ð4:7Þ

We can determine the elements of the funda- At t = t2, if the initial time is considered as t1, then
mental matrix by setting to zero all initial con-
ditions except one, evaluating the output of the xðt2 Þ ¼ /ðt2 t1 Þxðt1 Þ
ð4:8Þ
states, and repeating the procedure. For example, ¼ /ðt2 t1 Þ/ðt1 t0 Þxðt0 Þ
setting
On the other hand, considering the initial time
x1 ð0Þ ¼ 1; x2 ð0Þ ¼ x3 ð0Þ ¼    ¼ xn ð0Þ ¼ 0 as t0, we get
ð4:3Þ
xðt2 Þ ¼ /ðt2 t0 Þxðt0 Þ ð4:9Þ
we obtain
Comparing Eqs. 4.8 and 4.9, we get
x1 ðtÞ ¼ /11 ðtÞ; x2 ðtÞ ¼ /21 ðtÞ; . . .; xn ðtÞ ¼ /nl ðtÞ
/ðt2 t0 Þ ¼ /ðt2 t1 Þ/ðt1 t0 Þ ð4:10Þ
ð4:4Þ
This important relationship justifies the name
Thus, in general, /ij(t) is the transient response
‘state transition matrix’ for /(t). Clearly,
of the ith state due to unit initial condition of the
/(t2 − t0) is a sequence of state transitions. Since
jth state, when zero initial conditions apply to all
the relation must hold at t2 = t0, we obtain
other states. This property will be utilized later to
determine the state equations of any system
/ðt2 t0 Þ ¼ /ðt2 t2 Þ ¼ I
represented by an analog computer simulation. ð4:11Þ
So far, in solving the state equations, the ini- ¼ /ðt0 t1 Þ/ðt1 t0 Þ
tial conditions were taken at t = 0. If, instead, the
initial time is taken as t = t0, then the funda- and
mental matrix becomes
/ 1 ðt1 t0 Þ ¼ /ðt0 t1 Þ ð4:12Þ
t0 Þ
/ðt t0 Þ ¼ eAðt ð4:5Þ
Now setting the initial time of interest to zero,
and the complementary solution changes to we get

xðtÞ ¼ /ðt t0 Þxðt0 Þ ð4:6Þ / 1 ðt1 Þ ¼ /ð t1 Þ ð4:13Þ

Furthermore, we have only considered Or, in general


time-invariant systems. For a time-varying sys-
tem, we must use the fundamental matrix /(t, t0) / 1 ðtÞ ¼ /ð tÞ ð4:14Þ
which depends on both the present time t and the
This relation can be used to simplify the
initial time t0, and not just on the time difference
general solution, given by Eq. 3.76 of the state
t − t0. Also the matrices A, B, C and D become
equation.
functions of time t.
The Fundamental State Transition Equation 39

The Fundamental State Transition By Exponential Series Expansion


Equation
Example 4 For the network in Fig. 3.3 with the
For an initial time t = t0, the general solution of
specified element values, as in Example 3, the
the vector differential equation given by Eq. 3.6,
A matrix was shown to be
as found in Eq. 3.76, modifies as follows:
 
0 2
Zt A¼ ð4:18Þ
1 3
xðtÞ ¼ /ðt t0 Þ xðt0 Þ þ /(t)/ 1 ðsÞBuðsÞds
t0 Thus
ð4:15Þ     
0 2 0 2 2 6
A2 ¼ ¼ ð4:19Þ
Using Eqs. 4.10 and 4.14, we may write the 1 3 1 3 3 7
fundamental state transition equation of a     
3 2 6 0 2 6 14
time-invariant system as A ¼ ¼ ð4:20Þ
3 7 1 3 7 15
Zt
and so on. Hence
xðtÞ ¼ /ðt t0 Þ xðt0 Þ þ /(t - s)BuðsÞds
2 6 t2
     
t0 1 0 0 2
/ðtÞ ¼ þ tþ
0 1 1 3 3 7 2!
ð4:16Þ 
6 14 t3

þ þ 
7 15 3!
For a time-varying system, the more general "
t 2
t3 t2 t 3
#
1 þ 0:t 2 2! þ 6 3!    0 2t þ 6 2! 14 3! þ 
form of the transition matrix /(t, t0) must be used ¼ 2
3t t 3 2
7t t 3
0þt þ 7 3! þ 1 3t þ 15 3! þ 
and then the state transition equation becomes 2! 2!

ð4:21Þ
Zt
xðtÞ ¼ /ðt; t0 Þ xðt0 Þ þ /(t; s)BðsÞuðsÞds lf the calculation is continued, each series in
Eq. 4.21 turns out to be the expansion of the sum
t0
of two exponentials, and the fundamental matrix
ð4:17Þ
becomes
 
The solution in either Eqs. 4.16 or 4.17 e 2t þ 2e t 2e 2t 2e t
/ðtÞ ¼ ð4:22Þ
consists of the unforced natural response due to e 2t þ e t 2e 2t e t
the initial conditions plus a matrix convolu-
tion integral containing the matrix of the inputs This method is effective only in very simple
u (s). cases.

By Solution of the Homogeneous


Procedures for Evaluating Differential Equations using Classical
the Fundamental Matrix: Described Methods
in Steps

The fundamental matrix /(t) = eAt can be eval- Example 5 Consider the network in Fig. 3.3
uated by various methods which are now dis- again. The homogeneous differential equation for
cussed through some examples. this system is
40 4 State Variables—Part II

x_ ¼ Ax ð4:23Þ
    
x1 ðtÞ e 2t þ2e t 2e 2t 2e t x1 ð0Þ
¼
x2 ðtÞ e 2t þe t 2e 2t e t x2 ð0Þ
or,
ð4:31Þ
    
x_ 1 0 2 x1
¼ ð4:24Þ Since x(t) = /(t) x(0), the square matrix in
x_ 2 1 3 x2
Eq. 4.31 is /(t). This result checks with the one
obtained by the previous method.
This matrix equation represents two simulta-
This method also works out well for simple
neous differential equations, namely
systems but, for higher order systems, it becomes
x_ 1 ¼ 2x2
 laborious and time consuming.
ð4:25Þ
x_ 2 ¼ x1 3x2

Eliminating x2, we obtain By Evaluating the Inverse Laplace


Transform of (sI–A)
ð2Þ
x1 ¼ 2x2 ¼ 2x1 þ 6x2
Comparing the first equation in Eqs. 3.50 and
¼ 2x1 þ 6ð x_ 1 =2Þ ¼ 2x1 3_x1
4.1, we obtain the important relationship
or
/ðtÞ ¼ L 1 ½sI AÞ 1 Š ð4:32Þ
ð2Þ
x1 þ 3x1 þ 2x1 ¼ 0 ð4:26Þ This relation can be used to determine the
fundamental matrix, as illustrated in the follow-
The general solution for this homogeneous ing example.
second-order differential equation is
Example 6 Consider the circuit in Fig. 3.3 once
2t t more. For this,
x1 ðtÞ ¼ C1 e þ C2 e ; ð4:27Þ
   
where C1 and C2 are arbitrary constants. From s 0 0 2
ðsI AÞ ¼
Eqs. 4.25 and 4.27, we get  0 s 1 3
s 2
2t t ¼ ð4:33Þ
x2 ðtÞ ¼ C1 e þ ðC2 =2Þe ð4:28Þ 1 sþ3

We have to determine the matrix that multi- The characteristic function of the system is
plies the initial state vector x(0) to yield x(t).
Thus, we must express C1 and C2 in terms of the DðsÞ ¼ dctðsI AÞ ¼ ðs þ 1Þðs þ 2Þ ð4:34Þ
components of x(0). From Eqs. 4.27 and 4.28,
we find that which has two real distinct roots. Thus

C1 þ C2 ¼ x1 ð0Þ adjoint of ðsI AÞ


ð4:29Þ ðsI AÞ 1 ¼ ð4:35Þ
ðs þ 1Þðs þ 2Þ
C1 þ ðC2 =2Þ ¼ x2 ð0Þ
 
1 sþ3 2
Solving Eq. 4.29, we get ¼ ð4:36Þ
ðs þ 1Þðs þ 2Þ 1 s
 2 1 2 2

C1 ¼ x1 ð0Þ þ 2x2 ð0Þ sþ1 þ sþ2
ð4:30Þ ¼ s þ1 1 s þ1 2 1 2 ð4:37Þ
C2 ¼ 2x1 ð0Þ 2x2 ð0Þ sþ1 sþ2 sþ1 þ sþ2

Substitution of these expressions for the con- Evaluating the inverse transform of each of
stants C1 and C2 into Eqs. 4.27 and 4.28 gives the terms, we find
Procedures for Evaluating the Fundamental Matrix: Described in Steps 41

simplifies the necessary steps. The state transi-


/ðtÞ ¼ L 1 ½ðsI AÞ 1 Š ð4:38Þ
  tion flow graph provides such a method; in
2e t e 2t 2e t þ 2e 2t addition, it intuitively illustrates the physical
¼ ð4:39Þ
e t e 2t e t þ 2e 2t foundation of the vector state transition equation.

which agrees with Eq. 4.31.


Next, consider the system response to a unit State Transition Flow Graphs
step in place of e2(t), while e1(t) = 0. To simplify
matters, assume that the initial conditions are The state transition flow graphs differ from
zero. Then Mason’s signal flow graphs in that, while the latter
precludes consideration of initial conditions, the
Zt    former does include all initial conditions.
/11 ðt sÞ /12 ðt sÞ 0
xð t Þ ¼ ds Analog computers are considered obsolete
/21 ðt sÞ /22 ðt sÞ 1
t0 nowadays, but for the purpose of developing
ð4:40Þ state transition flow graphs, consider, for a
moment, how systems are simulated on an ana-
since1 l(t – t0) = 1 for t > t0. Thus log computer. For linear systems, we require
2 3 only the following computing abilities:
Rt
6 /12 ðt sÞds 7
(i) Multiply a machine variable by a positive
6t
xðtÞ ¼ 6 R0t ð4:41Þ
7
7 or a negative constant coefficient (In
/22 ðt sÞds
4 5
t0
practice, this can be done by potentiome-
ters and amplifiers).
Combining Eqs. 4.40 with 4.41, we get (ii) Sum up two or more machine variables
(This is accomplished by summing
Zt   amplifiers).
ðt sÞ 2ðt sÞ
x1 ðtÞ ¼ e e ds (iii) Produce the time integral of a machine
t0 ð4:42Þ variable (This is done by an integrating
1 1 amplifier).
ðt t0 Þ 2ðt t0 Þ
¼ e þ e
2 2
The mathematical descriptions of these three
Zt 
ðt sÞ 2ðt sÞ
 basic analog computations are
x2 ðtÞ ¼ e þ 2e ds
ð4:43Þ
ð1Þ
9
t0 x2 ðtÞ ¼ ax1 ðtÞ >
>
¼e ðt t0 Þ
e 2ðt t0 Þ ð2Þ x3 ðtÞ ¼ x0 ðtÞx1 ðtÞ x2 ðtÞ =
R Rt ;
ð3Þ x2 ðtÞ ¼ x1 ðtÞdt ¼ x1 ðtÞdt þ x2 ð0Þ >
>
;
In this method, we must determine the adjoint 0
of a matrix (sI − A), the roots of the character- ð4:44Þ
istic function D(s) and the inverse transform of
each term in (sI − A) matrix. This process will respectively, where a is a constant, positive or
demand an increasingly tedious calculation for negative.
higher order systems. Taking the Laplace transform of each of these
In view of the shortcomings of the three equations, we get
methods illustrated in the preceding examples, it
ð1Þ
9
would be worthwhile to develop a method of X2 ðsÞ ¼ aX1 ðsÞ =
determining the state transition matrix which ð2Þ X3 ðsÞ ¼ X0 ðsÞX1 ðsÞ X2 ðsÞ ð4:45Þ
ð3Þ X2 ðsÞ ¼ X1sðsÞ þ x2 sð0Þ
;
1
Recall that l(t) stands for the unit step function.
42 4 State Variables—Part II

The analog computer representations of where


Eq. 4.44 and the flow graph representations of
Eq. 4.45 are shown in Fig. 4.1 on the left- and Mk ðsÞ ¼ gain of the kth forward path; ð4:49aÞ
right-hand sides, respectively. Note that the rep-
DðsÞ ¼ system determinant or characteristic function
resentations of operations (1) and (2) are identi-
¼ 1 ðsum of all individual loop gainsÞ
cal to the corresponding flow graph notation but
þ ðsum of gain products of all possible
the integral operation is different.
A state transition of flow graph is defined as a combinations of two non - touching loops; i:e:;
cause and effect representation of a set of system two loops having no common nodeÞ
equations in normal form using the flow graphs of ðsum of gain products of all possible
basic computer elements such as those shown in combinations of three non - touching loopsÞ þ    :
Fig. 4.1. The only dynamic (or storage) element in ð4:49bÞ
the computer is the integrator. The output of each
integrator (or each node point in a signal flow graph Dk ðsÞ ¼ the value of DðsÞfor that part of the graph not
in which a branch with transmittance s−1 ends) can touching the kth forward path:
therefore be considered as a state variable (or ð4:49cÞ
transform of the state variable). These constitute a
set of state variables (or their transforms). There- The method can best be illustrated by con-
fore, if we can present the system under investi- sidering an example.
gation by an analog computer diagram or a state
Example 7 Consider, for a change, another
transition flow graph, we can easily determine the
second-order system, described by the following
state vector x(t) or its transform X(s).
state equations:
The state transition equation and its Laplace
transform are given by    
0 1 0
x_ ¼ xþ uðtÞ ð4:50Þ
Zt 2 3 1
xðtÞ ¼ /ðt t0 Þxðt0 Þ þ /ðt sÞBuðsÞds
Taking the Laplace transform of both sides,
t0
we get
ð4:46Þ
X1 ðsÞ X1 ðsÞ
    
1 0 1
XðsÞ ¼ ðsI AÞ 1 xðt0 Þ þ ðsI AÞ 1 BUðsÞ ¼
X2 ðsÞ s 2 3 X2 ðsÞ
ð4:47Þ ð4:51Þ
x1 ðt0 Þ
   
0
þ UðsÞ þ
Now if we can determine Eq. 4.47 directly 1 x2 ðt0 Þ
from the state transition flow graph, then we can
avoid the matrix inversion, and by taking the The state transition flow graph diagram can be
inverse Laplace transform of each element, we drawn as shown in Fig. 4.2, where the initial
can obtain Eq. 4.46 directly. The system equa- time has been assumed to be t0. There are only
tion is obtained in the form of Eq. 4.47 by two loops in the graph of gains −2s−2 and −3s−1
applying Mason’s gain formula to the state which touch at a node. Thus, the system char-
transition flow graph. This formula relates the acteristic function is
gain Mxy(s) between an independent node x and a
dependent node y in the following manner: 3 2 ðs þ 1Þðs þ 2Þ
DðsÞ ¼ 1 þ þ ¼ ð4:52Þ
s s2 s2
Xy ðsÞ X Mk ðsÞDk ðsÞ
Mxy ðsÞ ¼ ¼ ; ð4:48Þ Now, applying Mason’s gain formula
Xx ðsÞ k
DðsÞ
Eq. 4.48, we get
State Transition Flow Graphs 43

α
(1) x1(t) α x2(t) x1(s) x2(s)

x1(t)
x0(s)
+ 1

(2) x0(t) + ∑ x3(t) 1


x1(s) x3(s)
- -1

x2(s)
x2(t)
x2(o)

2(0)
-1
(3) s
1
-1
1 s
-1
s
x1(t) ∫ x2(t) x1(s) x2(s) x1(s) x2(s)

Fig. 4.1 Time and frequency domain representations of basic operations in an analog computer

U11 ðsÞ ¼ gain from node x1 ðt0 Þ to X1 ðsÞ s


X2 ðsÞ=U ðsÞ ¼ ð4:58Þ
ð4:53Þ ðs þ 1Þðs þ 2Þ
ðs þ 3Þ
¼
ðs þ 1Þðs þ 2Þ Thus
U12 ðsÞ ¼ gain from x2 ðt0 Þ to X1 ðsÞ  " sþ3 1
#
X1 ðsÞ x1 ðt0 Þ
 
1 ð4:54Þ ðs þ 1Þðs þ 2Þ ðs þ 1Þðs þ 2Þ
¼ ¼ 2 s
ðs þ 1Þðs þ 2Þ X2 ðsÞ ðs þ 1Þðs þ 2Þ ðs þ 1Þðs þ 2Þ
x2 ðt0 Þ
" #
1
U21 ðsÞ ¼ gain from node x1 ðt0 Þ to X2 ðsÞ þ
ðs þ 1Þðs þ 2Þ
UðsÞ
s
2 ð4:55Þ ðs þ 1Þðs þ 2Þ
¼
ðs þ 1Þðs þ 2Þ ð4:59Þ

and The state transition equation is obtained


by taking the inverse Laplace transform of
U22 ðsÞ ¼ gain from node x2 ðt0 Þ to X2 ðsÞ Eq. 4.59
s ð4:56Þ
¼
ðs þ 1Þðs þ 2Þ
x1 ðtÞ 2e t e 2t e t e 2t
   
¼
The relation between X1(s) and U(s) is given x2 ðtÞ 2e t þ 2e 2t e t þ 2e 2t
(" #)
1
by the gain, from node U(s) to X1(s) and is equal x1 ðt0 Þ
 
ðs þ 1Þðs þ 2Þ
to þL 1 s UðsÞ
x2 ðt0 Þ ðs þ 1Þðs þ 2Þ

1 ð4:60Þ
X1 ðsÞ=U ðsÞ ¼ ð4:57Þ
ðs þ 1Þðs þ 2Þ
The fundamental matrix in this result checks
The relation between X2(s) and U(s) is given by with the one found out earlier in Eq. 4.39 using
the gain from node U(s) to X2(s) and is equal to the inverse matrix process. The extra benefit that
44 4 State Variables—Part II

Fig. 4.2 State transition flow x2(to) x1(to)


graph of the system of
Example 7

-
s-1 s1

U(s) 1 s
-1
s-1
x2(s) x1(s)
-3
-2

has been derived from the state transition flow The state variables are chosen as
graph is that the effect of the input signal has
been included. It is a distinct advantage that we x1 ¼ w; x2 ¼ w_ and x3 ¼ wð2Þ ð4:64Þ
have avoided the necessity of evaluating the
integral Then, the set of first-order differential equa-
tions describing the system is
Zt 9
/ðt sÞBuðsÞ ds ð4:61Þ x_ 1 ¼ x2 =
t0
x_ 2 ¼ x3 ð4:65Þ
x_ 3 ¼ 2x2 3x3 þ uðtÞ
;

For example, consider the case where the


system is subjected to a unit step applied at In matrix form, this set of equations becomes
t = t0, i.e. u (t) = l(t − t0) and the initial condi- 2 3 2 3
tions are zero. From Eq. 4.60, we easily obtain 0 1 0 0
x_ ¼ 4 0 0 1 5x þ 4 0 5uðtÞ ð4:66Þ
1 ð1=sÞe st0 ðt t0 Þ
) 0 2 3 1
x1 ðtÞ ¼ L ðs þ 1Þðs þ 2Þ ¼ 1
2 e þ 12 e 2ðt t0 Þ
x2 ðtÞ ¼ L 1 e st0 ¼e ðt t0 Þ
e 2ðt t0 Þ
ðs þ 1Þðs þ 2Þ The state transition flow graph of this system
ð4:62Þ is shown in Fig. 4.3.
The characteristic function of the system is
These check with the earlier obtained results
[see Eqs. 4.42 and 4.43]. 1 3 2
2s 1 s 1

DðsÞ ¼ 1 3s ¼ 1þ þ
The dominant advantage of the signal flow s s2
graph method of obtaining the state transition ðs þ 1Þðs þ 2Þ
¼ ð4:67Þ
equation is that it does not become more difficult or s2
tedious as the order of the system increases. This
assertion will be clear from the next example, Using Mason’s gain formula, we obtain
where a third-order system is considered.
s 1 D1 ðsÞ s 2 D2 ðsÞ
X1 ðsÞ ¼ x1 ðt0 Þ þ x2 ðt0 Þ
Example 8 Consider a third-order system repre- DðsÞ DðsÞ
sented by the differential equation s 3 D3 ðsÞ s 3 D4 ðsÞ
þ x3 ðt0 Þ þ UðsÞ
DðsÞ DðsÞ
d3 w d2 w dw
3
þ3 2 þ2 ¼ uðtÞ ð4:63Þ ð4:68Þ
dt dt dt
State Transition Flow Graphs 45

x3(to) x2(to) x1(to)

s-1 s-1 s-1

U(s) 1 s-1 s-1 s-1


x3(s) x2(s) x1(s)
-3
-2

Fig. 4.3 State transition flow graph of the system of Example 8

Now 21 sþ3 1
3 2 UðsÞ
3
s sqðsÞ sqðsÞ sqðsÞ
sþ3 1 7 1 6 UðsÞ 7
xðtÞ ¼ L 1 4 0
9 6 7
D1 ðsÞ ¼ 1 ð 3s 1 2s 2 Þ ¼ DðsÞ > qðsÞ 5xðt0 Þ þ L 4 qðsÞ 5
6
> qðsÞ
D2 ðsÞ ¼ 1 ð 3s 1 Þ ¼ 1 þ 3=s 0 2s 1 sUðsÞ
=
ð4:69Þ qðsÞ qðsÞ qðsÞ
D3 ðsÞ ¼ 1 >
ð4:71Þ
>
D4 ðsÞ ¼ 1
;

Therefore where q(s) = s2 + 3 s + 2. For an input step of


magnitude u(t0), we get

2 3 s s
3
lðtÞ 2 lðsÞ 2e þ 12 e 2s 1
2 lðsÞ e þ 12 e 2s

xðtÞ ¼ 4 0 s 2s s 2s
2e e e e 5xðt0 Þ
6 7
s 2s s 2s
0 2e 4e e þ 2e
2 3 1 1 2s s
3 ð4:72Þ
4 lðsÞ þ 2 s 2e þ 2e
þ4 1 s 1 2s
lðsÞ e þ e 5uðt0 Þ;
6 7
2 2
s 2s
e e

where s = t − t0 and l(t) stands for the unit step


1 ðs þ 3Þ function.
X1 ðsÞ ¼ x1 ðt0 Þ þ x2 ðt0 Þ
s ðs þ 1Þðs þ 2Þ
1 1
þ x3 ðt0 Þ þ UðsÞ
sðs þ 1Þðs þ 2Þ sðs þ 1Þðs þ 2Þ Concluding Discussion
ð4:70Þ
As stated in the introduction in Part I, the aim
Continuing the process, we obtain the state of this presentation was to introduce the reader
transition equation as given by Eq. 4.72. to the fundamentals of state variable
46 4 State Variables—Part II

characterization of linear systems. We started


Dij ¼ ð 1Þi þ j Mij ð4:74Þ
with the concept of state and discussed the choice
of state variables for a given system. Next, we
attempted to solve the state equation and we Operations
discovered the importance of the state transition
or the fundamental matrix. Special methods were Addition
shown to be convenient for evaluating the fun- Two matrices of the same size are added by
damental matrix in simple cases. In the general summing the corresponding elements, i.e. if
case of a high-order system, one must take resort
to the state transition flow graph, the properties
AþB ¼ C ð4:75Þ
and applications of which have been briefly
explained.
then
The references cited below should be useful to
readers interested in further exploration of state cij ¼ aij þ bij ð4:76Þ
variables in characterization, analysis and syn-
thesis of systems. Multiplication of a matrix by a scalar
In the appendix, we have given a short, but If b is a scalar quantity and
comprehensive review of matrix algebra for
ready reference on the notations, operations, bA ¼ Ab ¼ C ¼ ½Cij Š ð4:77Þ
properties and types of matrices.
then the new matrix C has the elements

Appendix on Review of Matrix Cij ¼ baij ð4:78Þ


Algebra
Multiplication of matrices
The product AB may be formed if the number
Notations
of columns in A is equal to the number of rows in
A matrix A of dimension m  n is a rectangular
B. Such matrices are said to be conformable in the
array of mn elements arranged in m rows and
order stated. Thus, if A is m  p and B is p  n,
n columns as follows:
then AB = C is an m  n matrix defined by
2 3
a11 a12    a1n p
6 a21 a22    a2n 7
X
Cij ¼ aij bkj ð4:79Þ
 
A ¼4
6 7 ¼ aij ð4:73Þ
    5 k¼1
am1 am2    amn
Matrix multiplication is associative [i.e. (AB)
The elements aij are called scalars; they may C = A(BC)], distributive with respect to addition
be real or complex numbers, or functions of real [i.e. A (B + C) = AB + AC] but not, in general,
or complex variables. commutative [i.e. AB 6¼ BA in general]. In the
A square matrix has the same number of rows product AB, A is said to pre-multiply B; an
and columns and can be associated with a equivalent statement is that B post-multiplies A.
determinant, usually written as det A or |A|. Transpose of a matrix
A minor of an n  n determinant, denoted by The transpose of A, denoted as AT, is a matrix
Mij, is the (n − 1)  (n − 1) determinant formed formed by interchanging the rows and columns
by crossing out the ith row and the jth column. of A, i.e.
The corresponding co-factor Dij is
Appendix on Review of Matrix Algebra 47

Inverse and transpose of a product


½aij ŠT ¼ ½aji Š ¼ AT ð4:80Þ
It can be shown that
Adjoint of a matrix ðA B Þ 1
¼ B 1A 1
The adjoint of A, denoted as Adj A, is a ð4:86Þ
matrix formed by replacing each element in A by ðABÞT ¼ BT AT
its co-factor and then taking the transpose of the
result, i.e. Cayley–Hamilton theorem
The matrix (A − k I) is called the character-
Adj A ¼ ½Dij ŠT ð4:81Þ istic matrix of A. If A is a square matrix, then the
equation
Conjugate matrix
 is formed
The conjugate of A, denoted as A,
detðA kIÞ ¼ uðkÞ ¼ 0 ð4:87Þ
by replacing each element of A by its complex
conjugate, i.e.
is called its characteristic equation. Cayley–
 ¼ ½ Hamilton theorem states that every square matrix
A aij Š ð4:82Þ
A satisfies its characteristic equation, i.e. u (A) = 0
Inversion
The inverse of A, denoted by A−1 is defined Types
as that matrix which, when pre-multiplied or Real and complex
post-multiplied by A, gives the unit or identity A is real or complex according to whether
matrix (see Eq. 4.74) I, i.e. aij’s are real or complex.
Diagonal scalar and unit or identity
1
AA ¼ A 1A ¼ I ð4:83Þ A is diagonal if aij = 0, i 6¼ j. A is a scalar if
A is diagonal and diagonal elements are all equal,
It can be shown that i.e. A = kI. A is unit or identity matrix I if A is
diagonal and all the diagonal elements are unity.
1 adj A ½Dij ŠT Hermitian and Skew-Hermitian
A ¼ ¼ ð4:84Þ
det A D A is Hermitian if AT = A, i.e. aij = aij .
Obviously, the diagonal elements of a Hermitian
A are real numbers.
Properties  Obviously,
A is skew-Hermitian if AT = A.
Equality the diagonal elements in this case are either zero
The matrices A and B are said to be equal if or pure imaginaries.
and only if aij = bij for all i and j. Orthogonal and unitary
Equivalents A is orthogonal AT = A−1. A is unitary if A is
The matrices A and B are said to be equiva- both Hermitian and orthogonal, i.e. if A  T = A−1.
lent if and only if non-singular matrices P and Positive definite
Q exist such that A real symmetric A is positive definite if
n
B ¼ PAQ ð4:85Þ
X
xT Ax ¼ aij xi xj [ 0 ð4:88Þ
i¼j¼1
Rank
The rank of A is defined as the dimension of for all non-trivial x. A necessary and sufficient
the largest square sub-matrix in A whose deter- condition is that the characteristic roots of A are
minant does not vanish. positive.
48 4 State Variables—Part II

Singular P:1. Determine the fundamental state transition


A is singular if det A = 0, i.e. no inverse matrix for P.1 circuit of the previous
exists. chapter.
Symmetric P:2. Same for P.2 circuit of the previous chapter.
A is symmetric if it is square and A = AT, i.e. P:3. Same for P.3 circuit of previous chapter.
aij = aji. P:4. Draw the state transition flow graph for the
Vector, column or row circuit of P.4 of the previous chapter.
A is a column vector if it has one column and P:5. Same for P.3 circuit of the previous chapter.
a row vector if it has one row.
Zero
A is zero if aij = 0.
Bibliography

Problems 1. L.A. Zadeh, C.A. Desoer, Linear System Theory: A


State Space Approach (McGraw-Hill, 1963)
2. S. Seeley, Dynamic System Analysis (Reinhold Pub-
These problems are slightly more difficult than lishing Co, 1964)
those in the previous chapters. But have no
fear—if you have followed the contents of this
chapter, then these should not be an issue. You
will sail through comfortably.
Carry Out Partial Fraction Expansion
of Functions with Repeated Poles 5

This chapter aims to simplify partial fraction For finding the constants K0 ; K1 ; . . .Kn 1 ,
expansion with repeated poles––presented most of the textbooks (see, e.g. [1]) recommend a
here are some techniques which should make procedure based on differentiation of the function
this topic considerably easier.
F1 ðsÞ ¼ ðs s0 Þn F ðsÞN ðsÞ=DðsÞ ð5:3Þ

The general formula is, in fact


Keywords
 Repeated poles 1 dr F1 ðsÞ

Partial fraction expansion
Kr ¼ r ¼ 0 to n 1 ð5:4Þ
New method r! dsr s¼s0

For even a moderate value of n, this can


As you are well aware, the function become quite tedious. Several alternative proce-
dures have therefore been suggested in the liter-
NðsÞ ature—mostly in journals on circuits, systems
FðsÞ ¼ ð5:1Þ
ðs s0 Þn DðsÞ and controls—and a few of them have been
mentioned in some recent textbooks. While
can be expanded in partial fractions as follows: making a critical survey of all these procedures,

K0 K1 K2 Kn 1 N1 ðsÞ
FðsÞ ¼ n þ 1
þ n 2
þ  þ þ ð5:2Þ
ðs s0 Þ ðs s0 Þn ðs s0 Þ ðs s0 Þ DðsÞ

Source: S. C. Dutta Roy, “Carry Out Partial Fraction


Expansion of Functions with Repeated Poles without
Tears,” Students’ Journal of the IETE, vol. 26, pp. 129–
131, October 1985.

© Springer Nature Singapore Pte Ltd. 2018 49


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_5
50 5 Carry Out Partial Fraction Expansion of Functions …

the author has come to the conclusion that a On long division, we get
procedure given in [2] and credited to Professor
Leonard O. Goldstone of the Polytechnic Insti-
tute of Brooklyn, is the best. The method will be
described here with a slight modification; it is
hoped that you will readily appreciate its merits
as compared to the differentiation or any other
procedure you may have come across so far, and
will adopt it in your future work.

The Method
Thus
Look at Eq. 5.2 and note that if s s0 is replaced 1 1 p=4
GðpÞ ¼ p2 þ p ð5:8Þ
by 1p,1 then it becomes 2 4 2p þ 1

1 Now restoring s, we get


Gð pÞ ¼ K0 pn þ K1 pn þ    þ Kn 1 p þ G1 ð pÞ;
ð5:5Þ 1
2
1
4
1
4
GðpÞ ¼ FðsÞ ¼ 2
þ ð5:9Þ
ðs þ 1Þ sþ1 sþ3
where for convenience, we have called
F ðsÞ ¼ Fðs0 þ 1=pÞ as Gð pÞ and N1 ðs0 þ 1=pÞ= which completes the expansion.
Dðs0 þ 1=pÞ as G1 ð pÞ. Thus, if one transforms
F(s) to G(p) and carries out a long division to
extract, in the quotient, all powers of p from n to Another Example
1 then the constants K0 ; K1 ; . . .Kn 1 will auto-
matically be found out! The remaining function Consider, next, a more complicated example.
G1(p) should then be transformed back to the Let
s-variable, and expanded for other simple or
multiple poles. 2s2 þ s þ 1
FðsÞ ¼ ð5:10Þ
ðs þ 1Þðs þ 2Þ2 ðs þ 3Þ3

An Example Putting s þ 3 ¼ 1=p and simplifying, we get

Now consider a specific example for illustration. 2s2 þ s þ 1 ¼ ð2=p2 Þ ð11=pÞ þ 16 ð5:11Þ
Let
and
sþ2
FðsÞ ¼ ð5:6Þ
ðs þ 1Þ2 ðs þ 3Þ ðs þ 1Þðs þ 2Þ2 ¼ ð1=p3 Þ ð4=p2 Þ þ ð5=pÞ 2
ð5:12Þ
Putting s þ 1 ¼ 1=p and simplifying gives
We have found the transformed form of
p3 þ p2 (s + 1) (s + 2)2 separately because we shall need
GðpÞ ¼ ð5:7Þ
2p þ 1 it later, while expanding in terms of the multiple
pole at s = −2. Combining Eqs. 5.10, 5.11 and
5.12, we get, after elementary simplifications

1
This is the modification;
Another Example 51

16p6 11p5 þ 2p4 7q3 28q2 þ 85


4 q
FðsÞ ¼ GðpÞ ¼ ð5:13Þ G2 ðqÞ ¼ G1 ðpÞ ¼ ð5:17Þ
2p3 þ 5p2 4p þ 1 qþ1

The long division proceeds as follows: Carrying out the long division gives
85
- q + 1) 7 q 3 - 28q 2 +
4
(
q -7 q 2 + 21q
3 2
7q - 7q
85
-21q 2 + q
4
-21q 2 + 21q
1
4
q

Thus
1
q
G2 ðqÞ ¼ 7q2 þ 21q þ 4
ð5:18Þ
qþ1

From Eqs. 5.14 and 5.18, we get

Thus 29 2 85
FðsÞ ¼ 8p3 p p
2 41
q
29 2 85 7q2 þ 21q þ 4 ð5:19Þ
GðpÞ ¼ 8p3 p p þ G1 ðpÞ; ð5:14Þ qþ1
2 4

where Finally substituting p ¼ 1=ðs þ 3Þ and q ¼


1=ðs þ 2Þ in Eq. 5.19 gives the desired partial
225 3 141 2
þ 85 fraction expansion:
4 p 2 p 4 p
G1 ðpÞ ¼ ð5:15Þ
2p þ 5p2
3 4p þ 1
8 29=2 85=4
FðsÞ ¼ 3 2
Expansion in terms of the repeated pole at ðs þ 3Þ ðs þ 3Þ sþ3
ð5:20Þ
s = −3 is complete. The remainder function is 7 21 1

2
þ þ 4
now to be expanded in terms of the repeated pole ðs þ 2Þ sþ2 sþ1
at s = −2. In order to accomplish this, G1(p) has
to be transformed back to a function of s, which
we shall call F2(s); the job is not difficult because Problems
dividing both numerator and denominator of
Eq. 5.15 by p3, and taking help of Eq. 5.12, we P:1. Expand the function
simply get
sþ3
225 141 85 2 F ðsÞ ¼
4 2 ðs þ 3Þ þ 4 ðs þ 3Þ ðs þ 1Þ3 ðs þ 2Þ2
G1 ðpÞ ¼ F2 ðsÞ ¼ 2
ðs þ 1Þðs þ 2Þ
ð5:16Þ P:2. Do the same for

Now, put s + 2 = 1/q in Eq. 5.16 and call the sþ2


F ðsÞ ¼
resulting function as G2(q). After a bit of sim- ðs þ 1Þ4
plification, the following result is obtained:
52 5 Carry Out Partial Fraction Expansion of Functions …

P:3. Do the same for P:5. Do the same for

sþ3 sþ3
F ðsÞ ¼ F ðsÞ ¼
ðs þ 1Þ3 ðs þ 2Þ4 s2 ðs þ 1Þ2

P:4. Do the same for


References
3
ðs þ 1Þ ðs þ 2Þ
F ðsÞ ¼
s4 1. M.E. van Valkenburg, Network Analysis (Prentice-Hall
of India, New Delhi, pp. 186–187) (1974)
and find f(t). Note: f(t) will contain a 2. F.F. Kuo, Network Analysis and Synthesis (Wiley,
New York, pp. 153–154)
d-function. Beware!
A Very Simple Method of Finding
the Residues at Repeated Poles 6
of a Rational Function in z−1

If you have followed the last chapter carefully,


Y ðzÞ ¼ P1 ðzÞ=½ð1 pz 1 Þq Q1 ðzފ
this one would be a cakewalk! The two
discussions are similar except for the variables. ¼ ½A1 =ð1 pz 1 ފ þ ½A2 =ð1 pz 1 Þ2 Š þ . . .
A very simple method is given for finding the þ ½Aq =ð1 pz 1 Þq Š þ Y1 ðzÞ;
residues at repeated poles of a rational function ð6:1Þ
in z−1. Compared to the multiple differentiation
formula given in most textbooks, and several where Ai’s are the residues, i = 1 to q, and
other alternatives, this method appears to be Y1(z) contains terms due to other poles. Text-
the simplest and the most elegant. It requires books (see, e.g. [1]) usually give the following
only a long division preceded by a small formula for finding Ai’s:
amount of processing of the given function.
Ai ¼ f1=½ðq iÞ!ð pÞq i Šg  jdq i =dðz 1 Þq i

½ð1 pz 1 ÞL Y ðzފjpz1 ¼ 1:
Keywords ð6:2Þ
Partial fraction expansion  Repeated poles
New method This expression, involving multiple differen-
tiations, is indeed formidable, and students
invariably make mistakes in calculation. In a
recent paper [1, 2], three alternative methods
Introduction were outlined. These are

Let Y(z), a proper rational function in z−1, have a (1) Multiply both sides of Eq. 6.1 by
pole at z = p, where p may be real or complex, of ½ð1 pz 1 Þq Q1 ðzފ, simplify the right-hand
multiplicity q. Then, Y(z) can be expanded in side, equate the coefficients of powers of z−1
partial fractions as follows: on both sides to get a set of linear equations in
the unknown constants, and solve them.

Source: S. C. Dutta Roy, “A Very Simple Method of


Finding the Residues at Repeated Poles of a Rational
Function in z−1,” IETE Journal of Education, vol. 56,
pp 68–70, July–December 2015.

© Springer Nature Singapore Pte Ltd. 2018 53


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_6
54 6 A Very Simple Method of Finding the Residues at Repeated Poles …

(2) Put arbitrary specific values of z−1 on both and let, in general,
sides, like 0;  14 ; 1; 2 etc., and solve the F ðzÞjð1=pÞ ¼ x ¼ F 0 ð xÞ ð6:7Þ
z 1
resulting set of linear algebraic equations.
(3) Obtain Aq as Then, Eq. 6.1 becomes

Aq ¼ ð1 pz Þ1 q
Y ðzÞjpz1 ¼1 ð6:3Þ Y 0 ð xÞ ¼ P01 ð xÞ=½xq Q01 ð xފ ¼ ½ðA1 =pÞ=xŠ þ ½ðA2 =p2 Þ=x2 Š
þ . . . þ ½ðAq =pq Þ=xq Š þ Y10 ð xÞ:
and find the rational function, ð6:8Þ

Y2 ðzÞ ¼ Y ðzÞ ½Aq =ð1 pz 1 Þq Š ð6:4Þ Multiply both sides by xq, so that Eq. 6.8
becomes
Clearly, Y2(z) will have a multiple pole of order
q − 1 at z = p. Now, Aq−1 can be obtained as P01 ð xÞ=Q01 ð xÞ ¼ A01 xq 1
þ A02 xq 2
þ . . . þ A0q þ xq Y10 ð xÞ;
ð6:9Þ
Aq 1 ¼ ð1 pz 1 Þq 1 Y2 ðzÞjpz1 ¼ 1 ð6:5Þ
where
The process can now be repeated till all the
Ai’s are obtained and the remainder function A0i ¼ Ai =pi : ð6:10Þ
Yq(z) can then be handled.
Here, we present yet another method, which Now make a long division of P1′(x) by Q1′(x),
does not appear to be known to teachers and starting with the lowest powers. Then, the quo-
students, and to the best of knowledge of the tients will give all the required residues and the
author, it has not appeared in any literature. The remainder gives the numerator of the function
method is based on a small amount of prepro- Y1′(x) which can then be analysed for the resi-
cessing of Eq. 6.1 followed by a long division; dues at the other poles.
besides elegance, it can claim to be the simplest We shall now illustrate the method by an
of all known methods. example.
The method presented here is an adaptation of
a similar one for Laplace transforms, given in the
previous chapter and Kuo’s book [3] which was Example
published long back, in 1966, and a tutorial paper
on the method [1] appeared in 1985, the method Let
did not figure in any textbook so far, and is not
known to teachers and students. The author has Y ðzÞ ¼ ½1 ð1=8Þz 1 Š=½1 ð1=2Þz 1 Š3 ½1 ð1=4Þz 1 Šg
used this method routinely in his courses on ¼ fA1 =½1 1 3
ð1=2Þz Š g þ fA2 =½1 ð1=2Þz 1 Š2 g
network theory and signals and systems and has
þ fA3 =½1 ð1=2Þz 1 Šg þ fA4 =½1 ð1=4Þz 1 Šg:
found it to be well received by students.
ð6:11Þ
Equation 6.11 can be rewritten as
The Method
Y ðzÞ ¼ 4ð8 z 1 Þ=½ð2 z 1 Þ3 ð4 z 1 ފ
In the method under discussion, first make a
change of variable from z−1 to ¼ ½8A1 =ð2 z 1 Þ3 Š þ ½4A2 =ð2 z 1 Þ2 Š
1 þ ½2A3 =ð2 z 1 ފ þ ½4A4 =ð4 z 1 ފ:
x ¼ ð1=pÞ z ð6:6Þ
ð6:12Þ
Example 55

Put x ¼ 2 z 1 and multiply both sides of Conclusion


3
Eq. 6.12 by x . The result is
A very simple method, claimed to be the simplest
2 and the most elegant, has been presented for
4ð6 þ xÞ=ð2 þ xÞ ¼ 8A1 þ 4A2 x þ 2A3 x
þ ½4A4 x3 =ð2 þ xފ: finding the residues at multiple poles of a rational
function in z−1.
ð6:13Þ

Now, make a long division of 24 + 4x by


Problems
2 + x as shown below.
All problems concern partial fraction expansion
and finding the residues.

5 þ 4z 1 0:9z 2
P:1. F ðzÞ ¼ ð1 0:6z 1 Þ2 ð1 þ 0:5z 1 Þ

1:2z 1 þ 0:48z 2
P:2. F ðzÞ ¼ 4 ð1 0:4z 1 Þ3

4 3 2
þ 4:5z 5
Comparing the result with the right-hand side P:3. F ðzÞ ¼ ðz 4z0:5Þ2 ðz3:2z 5:2z
þ 2:1Þ2 ðz 3Þðz þ 4:5Þ
of Eq. 6.13, we get
Clue: First convert this F(z) into a rational
A1 ¼ 3=2; A2 ¼ 1; A3 ¼ 1; and A4 ¼ 1=2: function in z−1
ð6:14Þ
2
þ 4:5z
P:4. F ðzÞ ¼ ðz 2z
þ 0:6Þ3 ðz
5
2:1Þ
Substituting these values in Eq. 6.11 and
using the inversion formula Clue: Same as in P.3

4z 4 3:2z 3 5:2z 2 4:5z 1 5


Z 1
½1=ð1 pz 1 Þq Š ¼ ½1=ðq 1Þ!Šðn þ 1Þðn þ 2Þ. . . P:5. F ðzÞ ¼ ðz 1 0:5Þ2 ðz 1 þ 2:1Þ2 ðz 1 3Þðz 1 þ 4:5Þ
ðn þ q 1Þpn uðnÞ;
ð6:15Þ Clue: First convert the denominator factors
into (1 − az−1) forms.
we finally get, after simplification,

yðnÞ ¼ Z 1
Y ðzÞ ¼ f½ð3n2 þ 5n þ 6Þ=4Šð1=2Þn

ð1=2Þð1=4Þn guðnÞ ð6:16Þ References

u(n) being the unit step function. 1. S.C. Dutta Roy, Comments on fair and square
computation of inverse z-transforms of rational func-
tions. IEEE Trans. Educ. 58(1), 56–57 (Feb 2015)
2. S.C. Dutta Roy, Carry out partial fraction expansion of
rational functions with multiple poles—without tears.
Stud. J. IETE. 26, 129–31 (Oct 1985)
3. F.F. Kuo, in Network Analysis and Synthesis (Wiley,
New York, 1966), pp. 153–154
Part II
Passive Circuits

This is the largest of the four parts and contains 16 chapters. As in Part I, the
topics are interrelated here also. We deal with passive circuits and two dis-
tinct aspects of it, viz. analysis and synthesis. Most curricula today empha-
size only the first aspect, viz. analysis, which deals with the problem of
finding the response of a given circuit to a given excitation. The synthesis
aspect deals with designing a circuit to perform in a specified way; by far,
synthesis is more exciting than analysis. Analysis, however difficult it may
be, is always possible. However, a synthesis problem may or may not have a
solution. In real life, it is synthesis that is required more than analysis.
Synthesis is an art and not a science. The beauty of synthesis is that if one
solution exists, then there exists an indefinite number of solutions. For
analysis, the solution is unique. Synthesis, therefore, facilitates choice. From
among a variety of solutions, you can select the one that is the best for your
situation.
The first 11 chapters of this part are concerned with analysis. Circuit
analysis can be performed with ease in the case of linear circuits with the help
of transforms—Fourier or Laplace—which transports the problem from the
time domain to the frequency domain. In the latter, there is no differentiation
or integration; instead, there are only algebraic manipulations, viz. addition,
multiplication and division. That is the reason why you always prefer to work
in the frequency domain. However, frequency domain analysis has its own
demerits, as you have observed in Chap. 2. A differential (or difference)
equation cannot be solved for all times by transforming it to an algebraic
equation. There comes the difficulty of initial conditions which do not match.
So time domain solutions are comprehensive; they do not give rise to such
anomalies. The other reason why time domain cannot be divorced once for
all is that most undergraduate curricula treat transform techniques later, may
be in the second year. On the other hand, a basic Electrical Engineering
course is taught in the very first year. Hence, one has to work in the time
domain. This is how it was and is still is at IIT Delhi. Your curriculum would
be no exception. I faced a difficulty while teaching the first-year course on
basic Electrical Engineering. Circuit analysis forms a large part of the course,
and I found that most textbooks either bring in the Laplace transform there
58 Part II: Passive Circuits

itself or avoid it by introducing an artificial excitation est. Then follows the


concept of poles, zeroes and inversion of a function in s. When the student
goes to the second year, it is difficult for him or her to accept Laplace
transform in place of est excitation. To remove this difficulty, I prepared a set
of notes by remaining solely in the time domain and discarded the prescribed
textbook. Students found these notes very useful and my other colleagues
asked for them when they were assigned to teach this course. This is the
genesis of Chap. 7.
Chapter 8 deals most comprehensively with the RLC circuit analysis in
the time domain. The three cases of damping, viz. under-damping, critical
damping and over-damping, are thoroughly analysed, and simple methods
are devised for finding the response easily. Chapter 11 deals in the same
circuit with Fourier transforms. Many types of resonances that may occur by
taking the output across various elements or a combination of them are
treated in Chap. 12.
Chapters 9 and 10 deal with two problems which counteract the popular
belief that seeing is believing. Chapter 9 gives a circuit paradox which I hope
you will find interesting, while in Chap. 10, I talk of the problem of initial
values in an inductor.
The parallel-T RC network is an important circuit and finds application in
many situations. Primarily, it is a null network, but it can be used to make a
selective amplifier, a measuring instrument or a frequency discriminator.
Chapter 13 presents several different methods for analysing this network and
gives some simple methods to avoid the messy mesh or node equations.
Chapter 14 goes deeper into the performance of the network with regard to
selectivity and spells out design conditions to achieve maximum selectivity.
A perfect transformer does not exist in practice, but the concept facilitates
design and applications of this important component. There arise some
peculiar phenomena, like current discontinuity from t=0− to t=0+, and
degeneracy. These are of great theoretical interest and help you to understand
the limitations of the device. Read this with attention to realize that what you
take for granted with this simple device is not in general practical.
Resistive ladders are very commonly treated in the first course of circuits.
Infinite ladders are particularly important because they bring in some irra-
tional numbers and functions in their performance characteristics. Such
networks can be analysed by step-by-step analysis and clever reasoning
rather than mesh and node equation, which would be infinite in number.
Difference equation formulation, however, gives an easier method, and
z-transforms can be easily applied to them. This is what forms the contents of
Chap. 17.
Chapters 18 and 19 deal with synthesis. In Chap. 18, we start with the
driving-point synthesis of a relatively simple function, viz. one of the third
order. Chapter 19 shows how the practical problem of interference rejection
in an ultra-wideband system reduces to the synthesis of an LC driving-point
function. We work out several alternative solutions to facilitate a choice of
the best one.
A filter design problem has to be solved in several steps. First, from the
specification of cut-off frequency(ies) and pass- and stop-band tolerances,
Part II: Passive Circuits 59

you choose a magnitude function. Then by the process of analytic continu-


ation, find the corresponding transfer function. Finally, realize the transfer
function by using inductors and capacitors. The second step involves finding
the poles and zeroes of the transfer function. Chapter 20 gives some shortcuts
for moving from the magnitude function to the transfer function, by simple
coefficient matching. This saves a lot of calculation. Also, normally you
consider the Butterworth type because of its simplicity.
Design of band-pass (BP) and band-stop (BS) filters usually proceeds
from a corresponding normalized low-pass (LP) transfer function. After
realizing the LP function, you apply the frequency transformation technique
to obtain the actual BP/BS components. There exists some confusion
regarding this particular transformation. Chapter 21 attempts to remove these
confusions.
Chapter 22 deals with passive differentiators. It gives, starting with the
well-known RC differentiators, other circuits which give improved linearity
of the frequency response. This contains a number of innovations and the
new circuits are truly new, not available in textbooks.
Circuit Analysis Without Transforms
7

Is it simpler? In most cases, it is. Remember curriculum. At this stage, you are not exposed to
the difficulty you faced by working solely in Laplace and Fourier transforms, and hence you
the time domain, in the previous chapter cannot appreciate how they simplify circuit
(Chap. 2), in solving a differential equation analysis. This chapter is an attempt to show that
with impulsive excitation? Except for these circuits can indeed be completely analysed
odd cases, time domain analysis is usually without the help of transform techniques.
simpler. In this chapter, we discuss how linear We first introduce the concepts of natural
circuits can be completely analysed without response, forced response, transient response and
using Laplace or Fourier transforms. Is this steady-state response, and deal with typical
analysis simpler than that using transform examples of force-free response. The concepts of
techniques? You should judge for yourself to impedance, admittance, poles and zeros are then
realize. introduced through the artifice of est excitation. It
is shown that the natural frequencies of a circuit
depend upon the kind of forcing function, and
Keywords that they are related to the poles and zeros of an

Circuit analysis Differential equation impedance function.

Time domain Force-free response Next, we deal with forced response to expo-

First-order circuit Second-order circuit nential excitation, and in particular to sinusoidal

Damping Root locus Impedance  excitation; introduce the concept of phasors; and

Natural response Natural frequencies demonstrate how steady-state sinusoidal

DC and sinusoidal excitation Pulse response response can be found out only through phasors
Impulse response and impedances. Several examples of complete
response are worked out. The chapter concludes
Analysis of electrical circuits forms part of a core with an introduction to the impulse and an
course for all engineering students which is example of impulse response of a circuit.
usually offered in the very first semester of the Throughout the chapter, the emphasis is on
understanding through examples, rather than on
intricate theories. Once you learn the technique
through simple, heuristic and common sense
Source: S. C. Dutta Roy, “Circuit Analysis without arguments, detailed justification through trans-
Transforms,” IETE Journal of Education, vol. 39, form techniques, an exposure to which will come
pp. 111–127, April–June 1998.

© Springer Nature Singapore Pte Ltd. 2018 61


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_7
62 7 Circuit Analysis Without Transforms

to you later in the curriculum, will make you a d2 i R di 1


master of the same. 2
þ þ i¼0 ð7:5Þ
dt L dt LC
Let us start with an example.
In general, for a second-order equation like
Eq. 7.5, ic(t) will be of the form
An Example
ic ðtÞ ¼ A1 il ðtÞ þ A2 i2 ðtÞ; ð7:6Þ
Consider the circuit in Fig. 7.1, where we wish to
find i(t) due to an excitation by a current source where A1 and A2 are constants. The particular
is(t). Note that C may have an initial voltage integral ip(t) of Eq. 7.4 is any function which
vc(0) = V, say, and L may have an initial current satisfies the equation as it is. For example, if
i(0) = I, say, it being assumed that is(t) has been is(t) = est, then ip(t) = Kest where, by substitution
switched on at t = 0. By KCL, we get in Eq. 7.4, we get
dvc ðtÞ 
R 1

1
is ðtÞ ¼ C þ iðtÞ ð7:1Þ 2
K s þ sþ ¼ ð7:7Þ
dt L LC LC
Also
or
diðtÞ
vc ðtÞ ¼ L þ RiðtÞ ð7:2Þ 1=ðLCÞ
dt K¼ ð7:8Þ
1
s2 þ RL s þ LC
Substituting Eq. 7.2 in Eq. 7.1 gives
In general, therefore, the complete solution to
d2 i di Eq. 7.4 is given by
is ðtÞ ¼ LC 2 þ RC þ i; ð7:3Þ
dt dt
iðtÞ ¼ A1 i1 ðtÞ þ A2 i2 ðtÞ þ ip ðtÞ ð7:9Þ
where we have dropped the argument t from
i(t) for brevity. Equation 7.3 can be written more It is emphasized that ip(t) will not contain any
succinctly as unknown constants. The unknown constants here
are A1 and A2, which are to be evaluated from the
d2 i Rdi 1 is ðtÞ
þ þ i¼ ð7:4Þ two initial conditions. The condition i(0) = I
dt2 Ldt LC LC
gives
This is a second-order differential equation
with constant coefficients and as you are aware, I ¼ A1 i1 ð0Þ þ A2 i2 ð0Þ þ ip ð0Þ ð7:10Þ
the solution will consist of two parts—the com-
The other condition vc(0) = V gives, from
plementary function and the particular integral.
Eq. 7.2,
The complementary function ic(t) is the solution
to the homogeneous equation 
di
V ¼ Rið0Þ þ L  ð7:10aÞ
dt t¼0
L
or, combining with Eq. 7.9.
i(t)
+ V ¼ RI þ L½A1 i01 ð0Þ þ A2 i02 ð0Þ þ i0p ð0Þ; ð7:11Þ
C vc(t) R
iS (t) ˗ where the symbol f′(0) has been used for
df/dt|t = 0. In Eqs. 7.10 and 7.11, everything else
is known except A1 and A2, and hence they can
Fig. 7.1 An RLC circuit excited by a current source be found out.
Some Nomenclatures 63

Some Nomenclatures not necessarily the same, neither are the


steady-state response and forced response.
In the previous example, the term ic(t) of Eq. 7.6
arises due to initial energy in the capacitor as
Force-Free Response: General
well as the inductor. However, Eq. 7.6 only
Considerations
gives the form of the response due to the initial
conditions; the values of the constants depend
First, let us study some force-free cases. Obvi-
upon the particular solution ip(t) and its deriva-
ously, the equation to be solved will be a
tive at t = 0.
homogeneous equation of order determined by
This part of the response is called the natural
the number of energy storage elements. In the
response. Had the circuit been force free, i.e. if
example given earlier in the chapter (page 62),
is(t) = 0, then obviously A1 and A2 would be
the order was two because of one L and one C. In
determined from the initial conditions only.
general, our equation will be of the form
Hence, although the form of the natural response
remains the same in force free as well as forced dn x dn1 x dx
þ an1 n1 þ    þ a1 þ a0 ¼ 0;
cases, the actual values will be different. dt n dt dt
The part ip(t) of the total response depends ð7:12Þ
solely on the forcing function and the parameters
and the architecture of the circuit; it is indepen- where x is a voltage or a current and an–1, an–2,…
dent of the initial conditions. ip(t) is called the a1 and a0 are constants determined by the
forced response and is usually of the same form parameters of the circuit.
as the forcing function. An example has already Although there are various methods of solving
been given for is(t) = est. As another example, if Eq. 7.12, we find it convenient to assume a
is(t) = Is, a dc, then ip(t) is also a dc, Ip, whose solution of the form x = Aest, substitute it in
value is obtained from Eq. 7.4 as Ip = Is, because Eq. 7.12, and obtain the following algebraic
equation in s:
dIp d2 I p
¼ 0 and ¼ 0: sn þ an 1 sn 1 þ    þ a1 s þ a0 ¼ 0 ð7:13Þ
dt dt2
This is called the characteristic equation of
The superposition of natural response (in form
the circuit and its roots are called the charac-
only) and the forced response gives the total or
teristic roots or natural frequencies of the circuit.
complete response.
Since Eq. 7.13 has n roots—s1, s2, … sn, natu-
The natural response of the circuit, in the
rally, our desired solution will be of the form
force-free case, usually decays with time and
becomes negligible as t ! ∞ the term ‘usually’
x ¼ A1 e s 1 t þ A 2 e s 2 t þ . . . þ A n e s n t ; ð7:14Þ
indicates the practical situation where dissipation
is invariably present. In the dissipationless case, where the constants A1, A2, , An, are to be
it is possible that the natural response does not determined from the n initial conditions––one on
decay with time. The forced response can be each energy storage element.
maintained indefinitely by an appropriate forcing We now consider some specific examples.
function, but for some forcing functions, (like
e–at, a > 0), it may also decay with time. That
part of the total response which decays with time Force-Free Response of a Simple RC
is called the transient response. The value of the Circuit
response as t ! ∞ (or in practice, after the
transient part has become negligible) is called the Consider the situation depicted in Fig. 7.2, where
steady state response. It must be emphasized that the switch S is in position a for a long time so
the transient response and natural response are that C is fully charged to the voltage V. At t = 0,
64 7 Circuit Analysis Without Transforms

S with the fact that the circuit has only one


a b energy storage element, viz. C. Hence, our
i(t) solution is
t=0
+ iðtÞ ¼ Aet=ðRCÞ ð7:18Þ
– V R
To evaluate A, we take help of the fact that i
C (0) R = V. Here, the argument of i is to be
interpreted as 0+, i.e. immediately after the
switch has been shifted from a to b [note that i
Fig. 7.2 A simple RC circuit (0−) = 0]. Hence

the switch is shifted to the position b. We then V


¼A ð7:19Þ
have a force-free RC circuit (R and C are in series R
or parallel?) with an initial condition on C, as
and finally
shown in Fig. 7.3. KVL around the loop in
Fig. 7.3 gives
V t=ðRCÞ
iðtÞ ¼ e ð7:20Þ
Zt R
1
Ri þ idt  V ¼ 0 ð7:15Þ
C A plot of Eq. 7.20 is shown in Fig. 7.4. The
0
product RC has the dimension of time, is denoted
The sign is negative on V because V and by T, and is called the time constant of the cir-
i oppose each other, i.e. i tends to charge C to a cuit. It is the time after which the current decays
voltage whose polarity is opposite to that of to e1 ¼ 0:368 times the initial value. At t = 5T,
V. Equation 7.15 is an integral equation and can the current value is VR e5 ffi 0:0067 VR which is
be converted to a homogeneous differential only 0.67% of the initial value and hence can be
equation by differentiation. The result is: neglected. It is therefore said that the current has
a life time of 5T.
di 1 Physically, the current i represents the rate of
R þ i¼0 ð7:16Þ
dt C decay of charge in the capacitor. As t ! ∞, the
charge tends to zero and the current also tends to
Assuming i = Aest, we get the characteristic zero.
equation

1
sþ ¼0 ð7:17Þ
RC i(t)
V
Thus, there is only one characteristic root or R
1
natural frequency at s ¼  RC ; this is consistent
V
eR

i(t)
+
C V R

t
T

Fig. 7.3 Circuit in Fig. 7.2 at t  0, with i(0) R = V Fig. 7.4 Plot of Eq. 7.20
Force-Free Response of a Simple RL Circuit 65

Force-Free Response of a Simple RL


Circuit

Now consider the circuit in Fig. 7.5, where the


switch S is in the position a for a long time, so
that a steady current V/r is established in the
inductor, and then at t = 0, S is shifted to position
b. We wish to find the current i(t), t  0+. The
differential equation is, by KVL,

di
L þ Ri ¼ 0 ð7:21Þ
dt

and by following a similar procedure as in the


previous example, we get

V Rt=L
iðtÞ ¼ e ð7:22Þ
r Fig. 7.6 More than one C or L but still first-order
Here, the time constant is T = L/R.

First-Order Circuits with More Than


One Energy Storage Element

Both of the circuits in the previous two examples


are first-order circuits, because they are governed Fig. 7.7 Second order circuits
by a first-order differential equation. It is not
necessary that there should be only one energy
storage element for a first-order circuit. There can
be more than one capacitor, for example, in an Force-Free Response
RC circuit, but they must be connected in such a of a Second-Order Circuit
fashion that the capacitors can be combined into
one. As an example, the circuits in Fig. 7.6 are Consider a charged capacitor C, with an initial
first-order circuits, but not the ones in Fig. 7.7. voltage V0 to be connected across a series RL
combination at t = 0 as shown in Fig. 7.8.
Assume the inductor to be initially relaxed.
r Application of KVL gives
a
Zt
S di 1
L þ Ri þ idt  V0 ¼ 0 ð7:23Þ
b dt C
t=0 0

V Differentiating both sides gives


L
R d2 i Rdi i
þ þ ¼0 ð7:24Þ
dt2 L dt LC

Fig. 7.5 A simple RL circuit


66 7 Circuit Analysis Without Transforms

i (t ) Hence, the solution becomes

V0
L R iðtÞ ¼ ðes1 t  es2 t Þ ð7:32Þ
Lðs1  s2 Þ
C
Case I:
+ vC (t) – R 2 1
 
 [0
2L LC
Fig. 7.8 A second-order circuit
In this case, the roots s1 and s2 are real, neg-
which is the same as Eq. 7.5. The characteristic ative and distinct and the solution is
equation is
iðtÞ ¼ V0 e2bL ðebt  ebt Þ
at
)
R 1 ; ð7:33Þ
s2 þ sþ ¼0 ð7:25Þ ¼ V0 e sinh bt
at

L LC bL

which has the roots where


rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
s R R2 1
R 2 1 a¼ þ and b ¼ ¼ ð7:34Þ

R 2
s1;2 ¼   ð7:26Þ 2L 4L LC
2L 2L LC
Case II:
The most general solution is, therefore,
R 2 1
 
 \0
iðtÞ ¼ A1 es1 t þ A2 es2 t ð7:27Þ 2L LC

Clearly, the nature of the solution will depend In this case, the roots will be complex con-
upon whether 2L
 R 2 1
 LC is >, = or <0. Accord- jugates of each other:
ingly, we shall have three cases. But first let us s1;2 ¼ a  jx; ð7:35Þ
evaluate A1 and A2. The condition i(0) = 0 gives,
from Eq. 7.27, where a is defined in Eq. 7.34 and
A1 þ A 2 ¼ 0 ð7:28Þ rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
1 R2
x¼  2 ¼ x2n  a2 ð7:36Þ
Also, from Eq. 7.23, putting t = 0, we get LC 4L

di xn being given by
L  ¼ V0 ð7:29Þ
dt t¼0
x2n ¼ 1=ðLC Þ ð7:37Þ
which, combined with Eq. 7.27, gives
Hence, the complete solution is
s1 A1 þ s2 A2 ¼ V0 =L ð7:30Þ
iðtÞ ¼ eat ½A1 ejxt þ A2 ejxt  ð7:38Þ
Solving Eqs. 7.28 and 7.30, we get
which can be simplified to the form
V0
A1 ¼ ¼ A2 ð7:31Þ iðtÞ ¼ Aeat sinðxt þ hÞ ð7:39Þ
Lðs1  s2 Þ
Force-Free Response of a Second-Order Circuit 67

This is a damped sinusoid with the damping there are oscillations, while case I is called the
coefficient a (per second), natural frequency of overdamped case because the response decays
oscillation x and initial phase h. A and h are to be monotonically with time after reaching the first
determined, as in the earlier case, from i(0) = 0 maximum. In the background of this nomencla-

and ddti ¼ VL0 . Applied to Eq. 7.39, it gives
 ture, the case under consideration should be
t¼0 called the critically damped case. From the the-
ory of differential equations, the solution to this
h¼0 ð7:40Þ
case will be since s1 = s2 = −a,
and
iðtÞ ¼ eat ðA1 þ A2 tÞ ð7:43Þ
V0
A¼ ð7:41Þ While this is the general form, i(0) = 0 indi-
xL
cates that A1 = 0. Hence,
Thus, finally,
iðtÞ ¼ A2 t eat ð7:44Þ
V0 at
iðtÞ ¼ e sin xt ð7:42Þ 
xL The other initial condition, viz. ddti ¼ V0 =L

t¼0
Note that Eq. 7.39 is the most general form; it gives
has reduced to Eq. 7.42 because we took the
V0
initial current in the inductor as zero. A plot of A2 ¼ ð7:44aÞ
L
Eq. 7.42 is shown in Fig. 7.9.

Case III: Hence, finally,

V0 at
R 2 1 iðtÞ ¼ te ð7:44bÞ
 
¼ L
2L LC
a plot of which is shown in Fig. 7.10. Note that
In this case, the roots are real and equal. i(t) = 0 at t = 0 as well as at t = ∞, and that there is
Case II is called the underdamped case, because a maximum at t = 1/a, the maximum value being

V0
i(t) IM ¼ ð7:45Þ
aLe

V
0 V -a t
0e
wL i(t)
wL

V
0
a Le

V
- 0 2p t
wL 1
w a

Fig. 7.9 Plot of Eq. 7.42 Fig. 7.10 Plot of Eq. 7.44b
68 7 Circuit Analysis Without Transforms

It is worth mentioning here that the plot of


qffiffiffi
L
(iii) when R ¼ 2 C, both the roots coalesce at
Eq. 7.33 (for the overdamped case) will be sim-
ilar to that of Fig. 7.10 except that the maximum s = −a, i.e. on the negative real axis (point
will be smaller and the decay will be slower. P in Fig. 7.11); and
qffiffiffi
(iv) when R increases beyond 2 CL , we get the
overdamped case, with both roots on the
Root Locus of the Second-Order negative real axis, one (s1) going towards
Circuit the origin and the other (s2) moving
towards −∞.
The roots of the characteristic equation of the
circuit considered in the preceding section move
in the complex plane s = r + jx as R is varied.
Referring to Eq. 7.26, we see that Natural Frequencies of Circuits
with a Forcing Function
(i) when R = 0 (note R cannot be negative),
the roots are purely imaginary Consider the example in Fig. 7.1 again, we have
pffiffiffiffiffiffi seen that the natural response satisfies Eq. 7.5,
s1;2 ¼ j= LC ¼ jxn ð7:46Þ which is same as Eq. 7.25. The natural response
pffiffiffiffiffiffiffiffiffi is therefore of the form Eq. 7.27 with the natural
(ii) When 0 \ R\ 2 L=C , the roots are frequencies given by Eq. 7.26. Now suppose
complex: instead of a current source is(t), we connect a
voltage source vs(t) across C, as shown in
s1;2 ¼ a  jx; ð7:47Þ Fig. 7.12. The differential equation now becomes
where di
L þ Ri ¼ vs ðtÞ ð7:48Þ
dt
a2 þ x2 ¼ x2n ; ð7:47aÞ
so that the complementary function ic(t) satisfies
i.e. roots move in a circle, starting from + the following homogeneous equation
jxn and −jxn, towards the negative real
axis; the centre of the circle is at the origin di
L þ Ri ¼ 0 ð7:49Þ
of the complex plane and its radius is xn, dt
as shown in Fig. 7.11;
The characteristic equation is, therefore,

s þ ðR=LÞ ¼ 0 ð7:50Þ

jωn

i(t)
ωn
P
σ L
α
+
vs (t) C R
˗
– jωn

Fig. 7.11 Root locus of the series RLC circuit Fig. 7.12 Circuit of Fig. 7.1 driven by a voltage source
Natural Frequencies of Circuits with a Forcing Function 69

and the natural frequency is concerned, they perform the same role as that of
a resistance. Impedances can be combined
s0 ¼ R=L ð7:51Þ exactly as resistances. For example, consider a
series combination of R, L and C carrying a
This is quite different from Eq. 7.26 and leads current est, then the voltage drop across the
us to the conclusion that the natural frequencies combination is
of a circuit under a forcing function depend upon
the nature of the latter. We next consider a
 
1 st
general technique based on the concept of v¼ R þ sL þ e ð7:56Þ
sC
impedance for finding the natural frequencies.
But before that, note that s1, 2 of Eq. 7.26 can be Thus, the ratio of v to i is
called open-circuit natural frequencies because
is(t) = 0 in Fig. 7.1 makes C look at an open 1
ZðsÞ ¼ R þ sL þ ð7:57Þ
circuit to the left, while s0 of Eq. 7.51 qualifies to sC
be called a short-circuit natural frequency
because vs(t) = 0 makes C look at a short circuit. which is simply the addition of the individual
impedances. For the circuit in Fig. 7.1, the
impedance seen by is(t) is

Concept of Impedance 1
sC ðR þ sLÞ
ZðsÞ ¼ 1
ð7:58Þ
R þ sL þ sC
Suppose a current est passes through an inductor
L, then the voltage developed across it will be

di Relation Between Impedance


v¼L ¼ sLest ð7:52Þ and Natural Frequencies
dt

The ratio of v to i is Equation 7.58 can be simplified to as follows:

ZL ¼ sL ð7:53Þ 1 ðs þ R=LÞ
ZðsÞ ¼ 1
ð7:59Þ
C s2 þ s RL þ LC
and is called the impedance of the inductor.
Similarly, if the voltage across a capacitor C is and can be rewritten as
est, then the current through it is given by
1 ðs  s0 Þ
dv ZðsÞ ¼ ; ð7:60Þ
i ¼ C ¼ sCest ð7:54Þ C ðs  s1 Þðs  s2 Þ
dt
where s0 is the same as the natural frequency
so that the ratio of v to i in this case is
with voltage excitation, given in Eq. 7.51, and s1
1 and s2 are the natural frequencies with current
Zc ¼ ð7:55Þ excitation, given by Eq. 7.26. These observations
sC
hold in general, too. That is, the open-circuit
Zc is called the impedance of the capacitor. natural frequencies are the value of s at which Z
For a resistance R, the ratio of v to i is simply R, (s) ! ∞; these values are called the poles of Z
independent of the form of v or i. It follows that (s). Similarly, the short-circuit natural frequen-
Zc and ZL have the same dimension as that of R, cies are those values of s at which Z(s) = 0; these
and that as far as an exponential excitation is values are given the name zeros. Clearly, both
70 7 Circuit Analysis Without Transforms

jω 2Ω

1 R2
X j - 5e -3t + +
LC 4L2 1H
0.5 F
v vC (t)
– –

σ
R Fig. 7.14 An example of exponential excitation
-
L
Since Z(s) = v/i when v or i is of the form est
or Aest, it follows that
R
- X
2L vðtÞ ¼ iðtÞZ ðsÞjs ¼  3
ð7:62Þ
¼ 5e3t  2 ¼ 10e3t

Fig. 7.13 Pole-zero plot for the circuit in Fig. 7.1 with If we are interested in finding the voltage
R2 < 4L/C
across the capacitor, then we can use potential
division, i.e.
poles and zeros can, in general, be complex 
quantities. In the s = r + jx plane, a zero is 2ðs þ 3Þ 
ðs þ 2Þðs þ 1Þ 
indicated by a small circle, while a pole is indi- vc ðtÞ ¼ vðtÞ  þ 3Þ 
¼ 0 ð7:63Þ
cated by a cross and the picture so obtained is 2 þ ðs þ2ðs2Þðs þ 1Þs¼3
called the pole-zero plot. For the circuit in
Fig. 7.1, the pole-zero plot for a typical under-
damped case is shown in Fig. 7.13. Forced Response Due to DC
An admittance is defined as the reciprocal of
an impedance and is denoted by Y. The poles and A DC can be considered as an exponential
zeros of Y are obviously the zeros and poles, excitation with s = 0. Then, the impedances of
respectively, of Z. R, L and C become R, 0 and ∞, respectively.
Hence for DC excitation, an inductor acts as a
short circuit and a capacitor behaves as an open
Forced Response to an Exponential circuit.
Excitation

We shall illustrate the calculation of forced Forced Response to a Sinusoidal


response to an exponential excitation by an Excitation
example. Consider the circuit in Fig. 7.14. We
are interested in finding the voltage v. The The forced response to an excitation of the form
impedance faced by the current source is est is of the same form, except for multiplication
by a constant. The forced response to dc is also a
1
ðs þ 3Þ 0:5s dc. If s = jx, then the forced response will be of
ZðsÞ ¼ 2 þ 1
s þ 3 þ 0:5s the form Aeixx where A can be a complex
ð7:61Þ
2ðs þ 3Þ quantity. Let A = |A|ejh; then, symbolically, we
¼ 2þ
ðs þ 2Þðs þ 1Þ can write
Forced Response to a Sinusoidal Excitation 71

Imaginary part
ejxt ! j Ajejðxt þ hÞ ð7:64Þ

Since ejxt is the superposition of cos xt and


b
j sin xt, and since we are considering only linear b
circuits, it follows that a

Imðejxt Þ ¼ sinxt ! Im½j Ajejðxt þ hÞ 


ð7:65Þ
¼ j Ajsinðxt þ hÞ θ
Real part

and a

Reðejxt Þ ¼ cosxt ! Re½j Ajejðxt þ hÞ  Fig. 7.15 Phasor representation


ð7:66Þ
¼ j Ajcosðxt þ hÞ
or
This suggests a methodology for finding the
Vm cosðxt þ hÞ ! jBjcosðxt þ /Þ ð7:70Þ
forced response of a general circuit to sinusoidal
excitation. Suppose, we have a voltage excitation
Thus, one can divorce the time dependence
of the form
and work only in terms of A = |A|ejh and impe-
pffiffiffi dances to find B = |B|ej/. It is conventional to
vðtÞ ¼ Vm cosðxt þ hÞ ¼ 2Vcosðxt þ hÞ;
work in terms of rms rather than peak values, so
ð7:67Þ
that the given excitation will be represented by
pffiffiffi
where V is the rms value of the voltage. We can a ¼ A= 2 instead of A and the response
pffiffiffi
write obtained will then be in the form b ¼ B= 2
rather than B. Each of these quantities is called a
vðtÞ ¼ Re½Vm ejh ejxt  ¼ Re½Aejxt ; ð7:68Þ phasor and can be represented as a vector in the
complex plane as shown in Fig. 7.15.
where A is the complex quantity Vm eih. We can From the convention just discussed, obviously
then find the response of the circuit to the a current 10 cos xt will be represented by the
pffiffiffi
exponential Aejxt by using impedance concepts. phasor ð10= 2Þej0 ; i.e. cos xt has been taken to
In this case, the impedance of R, L and C will be define the reference direction for angle. (It must
ZR = R be mentioned that there is nothing sacred about
ZL ¼ jxL ¼ jXL ð7:69Þ this convention. In fact, we shall, as illustrated
later, sometime find it convenient to use sinxt as
the reference phasor for angles.)
1
ZC ¼ ¼ jXC As examples, the current 10 cos (xt + p/6) will
jxC pffiffiffi
be represented by the phasor ð10= 2Þejp=6 while
XL = xL and Xc = –1/(xC) are called the the current 10 sin (xt + p/6) will be represented
by the phasor p10ffiffi2 e j p6  p2 ¼ p10ffiffi2 ejp=3 because
 
reactances of L and C, respectively.
Suppose the response is Bejxt where B is
another complex quantity |B|ej/. Then sinðxt þ p=6Þ ¼ cosðxt þ p=6 p=2Þ ð7:71Þ

Aejxt ! Bejxt Phasors can be added or subtracted like vec-


Re½Aejxt  ! Re½Bejxt  tors. Suppose, for example, we wish to find v =
pffiffiffi
v1 − v2 where v1 ¼ 100 2 cos ð100ptp=6Þ
72 7 Circuit Analysis Without Transforms

Im
B
Basic Elements and Their
V–I Relationships for
Sinusoidal Excitation
200
pffiffiffi
Note that if a current i ¼ 2I cos xt flows
pffiffiffi
through a resistance, the voltage across it is 2IR
p /3
0
cos xt. Thus, the current and the voltage phasors
-p / 6
Rc are Iej0 and IRej0. Thus, the voltage and current
are in phase.
100
A If the same current flows through an induc-
pffiffiffi
tance L, the voltage across it is  2 LIx sin
pffiffiffi
xt ¼ 2x LI cos ðxt þ p=2Þ. The phasor cor-
responding to this is xL I ejp/2 = jxLI.
The impedance of the inductor jxL is
C therefore simply the ratio of voltage to current
phasor. Also note that the voltage leads the
D current by 90° or the current lags the voltage by
90°.
Fig. 7.16 Phasor addition
Similarly for a capacitor C, with sinusoidal
pffiffiffi 1
excitation, the impedance is jxC and the voltage
and v2 ¼ 200 2 cosð100pt þ p=3Þ. The phasors
1 ¼ across it lags the current by 90°.
corresponding to these two voltage are V
In general, if a current phasor I flows that an
100e jp=6 
and V2 ¼ 200e þ jp=3
, which are shown impedance Z(jx), then the voltage phasor across
in Fig. 7.16 by the vectors OA and OB, respec- the latter is
tively. The vector OC represents the negative of
OB and OD and the vector sum of OA and OC  ¼ IZ
V ð7:75Þ
represents the phasor corresponding to v1 − v2.
One can find the phasor corresponding to OD by which is often referred to as Ohm’s law for
measurements or by geometrical formulas. It is, sinusoidal excitation. Phasors, like actual
however, most convenient to compute this by voltages/currents, obey KCL and KVL, pro-
converting both OA and OB to rectangular vided, of course, the circuit is linear, for which
coordinates. Thus all currents and voltages will be of the same
frequency.
1 ¼ 100 cos p=6  j100 sin p=6

V pffiffiffi ð7:72Þ
¼ 50 3  j50
2 ¼ 200 cos p=3  j200 sin p=3

V pffiffiffi ð7:73Þ
An Example of the Use of Phasors
¼ 100 þ j100 3 and Impedances
pffiffiffi
Hence Let a current source iðtÞ ¼ 10 2 cos
pffiffiffi pffiffiffi ð1000t þ p=4Þ be connected across a parallel
1  V
V 2 ¼ ð50 3  100Þ  jð50 þ 100 3Þ
combination of R, L and C as shown in Fig. 7.17.
ð7:74Þ It is required to determine the voltage v(t) across
the combination and the currents through the
whose magnitude M and phase h can be easily individual branches.
found out. The quantity v1 − v2 will then be The admittance (=1/impedance) of the com-
pffiffiffi
M 2 cos ð100pt þ hÞ. bination is
An Example of the Use of Phasors and Impedances 73

Thus

C = 0.2 mF
+ IR ¼ p10ffiffi A 9

L = 1 mH
i(t) 2 pffiffi >
=
v R=1Ω IL ¼ 10= 2
¼ 10ffiffi jp=2
p e
jxL

pffiffiffi 2 pffiffiffi pffiffiffi >
Ic ¼ ð10= 2ÞjxC ¼ j10 2 ¼ 10 2ejp=2
;
iR iC
iL
ð7:80Þ

Fig. 7.17 A parallel RLC circuit with sinusoidal These various phasors are shown in Fig. 7.18.
excitation
In the time domain, the expressions would be
Y ¼ R1 þ jxL
1
9
þ jxC 9
1 3
= vðtÞ ¼ 10 cos 1000t >
¼ 1  j 1000103 þ j  1000  2  10 iR ðtÞ ¼ 10 cos 1000t
>
=
¼ 1þj
; ð7:81Þ
iL ðtÞ ¼ 10 sin 1000t >
>
ð7:76Þ ic ðtÞ ¼ 20 sin 1000t
;

Hence, the impedance is

1 1j Back to Complete Response


Z¼ ¼ ð7:77Þ
1þj 2
We now return to the complete response of a
The current phasor is given circuit to a given excitation. Consider first
a simple RL circuit to which a sinusoidal voltage
10 source Vm cos xt is connected in series at t = 0,
I ¼ 10ejp=4 ¼ p ffiffiffi ð1 þ jÞ ð7:78Þ
2 as shown in Fig. 7.19. The differential equation
is
Hence, the voltage phasor will be
di
L þ Ri ¼ Vm cos xt ð7:82Þ
 ¼ IZ ¼ 10
V
10
pffiffiffi ð1  jÞð1 þ jÞ ¼ pffiffiffi ð7:79Þ
dt
2 2 2
whose complete solution is

iðtÞ ¼ ic ðtÞ þ ip ðtÞ; ð7:83Þ


Im

IC = 10 2e j p /2
t=0
R
I = 10e j p /2
+
i(t)
Vm cos ωt
Re

V = 10 / 2
I R = 10 / 2
L
10
IL = e - j p /2
2

Fig. 7.18 Phasor diagram for the circuit in Fig. 7.17


Fig. 7.19 A series RL circuit with sinusoidal excitation
74 7 Circuit Analysis Without Transforms

where To obtain A, one appeals to the initial condi-


tion. Let i(0) = 0; then from Eq. 7.91,
ic ðtÞ ¼ AeRt=L ð7:84Þ  
Vm 1 xL
A ¼  pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi cos tan ð7:92Þ
and R2 þ x2 L2 R

ip ðtÞ ¼ Im cosðxt þ hÞ ð7:85Þ As is evident from Fig. 7.20.

The particular solution parameters Im and h


 
1 xL R
cos tan ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ð7:93Þ
can be easily obtained from phasor analysis. The R R þ x2 L2
2
voltage phasor is
Vm jo Hence, finally
 ¼p
V ffiffiffi e ð7:86Þ
2

 

Vm 1 xL R Rt=L
iðtÞ ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi cos xt  tan  pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi e ð7:94Þ
R2 þ x2 L2 R R2 þ x2 L2

The impedance is
The first term in square brackets is the forced
Z ¼ R þ jxL ð7:87Þ
response, while the second term is the natural
Hence, the current phasor is response. Note that the natural response form
depends on R and L only, but the actual value is
Vmffiffi
p affected by the forcing function. Also note that
2 Vm 1
I ¼ ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ej tan ðxL=RÞ the natural response here is also the transient part
R þ jxL 2ðR2 þ x2 L2 Þ while the forced response is also the steady-state
ð7:88Þ response.
In general, if the forcing function is not
Hence sinusoidal, one finds the particular solution by
assuming it to be of the same form as the forcing
Vm
Im ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ð7:89Þ function, and substituting it in the complete dif-
ðR þ x2 L2 Þ
2
ferential equation to determine the unknown
constant.
and

xL
h ¼  tan1 ð7:90Þ
R
R2
Thus +
w2
L2
Vm ωL
iðtÞ ¼ AeRt=L þ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
R2 þ x2 L2
  ð7:91Þ θ
xL
cos xt  tan1 R
R
Fig. 7.20 Computing cos tan1 xL
 
R
Back to Complete Response 75

S
t=0
t=0 R R i(t)
V i(t) +
L v(t) =Vm sin t
+
– vC (t)
C
Fig. 7.21 An RL circuit excited by a DC voltage source –

Fig. 7.23 A series RC circuit with sinusoidal excitation


We now consider several other examples of
complete response.
and finally

V
Step Response of an RL Circuit iðtÞ ¼ ð1  eRt=L Þ ð7:98Þ
R
Consider the circuit in Fig. 7.21, where S was A plot of i is shown in Fig. 7.22.
open for a long time and switched on at t = 0.
Hence, there is no initial current in L, i.e.
i(0+) = i(0−) = 0. ( continuity of current in an Sinusoidal Response of a Series RC
inductor). From the differential equation Circuit
di
L þ Ri ¼ V; t [ 0 þ ð7:95Þ Consider the situation shown in Fig. 7.23, where
dt
vc(0) = 0. The impedance of the series RC circuit
it follows that for est excitation is

iðtÞ ¼ ic ðtÞ þ ip ðtÞ 1


ZðsÞ ¼ R þ ð7:99Þ
ð7:96Þ sC
V
¼ AeRt=L þ
R and since the excitation is a voltage source, the
natural frequency will be the zero of Z(s),
because the natural frequency is the zero of Z
occurring at s = −1/(CR). Also, the forced current
(s) = R + sL and the forced response is a con-
response can be found from the voltage phasor
stant whose value is determined by Eq. 7.95
itself. Note that i = constant means that di/dt = 0 pffiffiffi
 ¼ ðVm = 2Þej0
V ð7:100Þ
so that the particular solution is V/R. To find A,
take help of the fact that i(0+) = 0; thus
(where we have taken sin xt as the reference
phasor, instead of cos xt; this does not change
A ¼ V=R ð7:97Þ
the result) and the impedance for ejxt excitation;

1
ZðjxÞ ¼ R þ
i
jxC
rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
V 1 1
¼ R2 þ 2 2 \  tan1
R x C xCR
ð7:101Þ
t 1
Note that instead of writing e tan 1=ðxCRÞ , we
Fig. 7.22 Step response of an RL circuit have indicated ∠tan−1 1/(xCR); these are
76 7 Circuit Analysis Without Transforms

interchangeable notations. Hence, the current


phasor will be
1
+
w2
 1
I ¼ V ¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
Vm l C2
ffi \ tan1 R2
ZðjxÞ 
2 R2 þ 1
 xCR
x2 C 2
θ
ð7:102Þ
ωCR

Thus, the forced response is Fig. 7.24 Computing sin tan1 xCR
 1


 
Vm 1
ip ðtÞ ¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi sin xt þ tan1
R2 þ x21C2 xCR
Response of an RC Circuit
ð7:103Þ to an Exponential Excitation
and the complete response is In the circuit shown in Fig. 7.25, let
  vc(0) = V and the current source be exponentially
Vm 1
iðtÞ ¼ Aet=ðRCÞ þ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi sin xt þ tan1 decaying, i.e.
R2 þ x2 C2 1 xCR

ð7:104Þ is ðtÞ ¼ Ieat ; ð7:108Þ

To evaluate A, note that vc(0−) = vc(0+) has


been assumed to be zero so that i(0+) = v(0)/
R = 0. Hence
  t=0
Vm 1 +
A ¼  qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi sin tan1 ð7:105Þ R
xCR iS (t) C v(t)
R2 þ x21C2

Vm 1
¼  qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ð7:106Þ
2 1 1 þ x 2 C 2 R2
R þ x2 C 2 Fig. 7.25 Current excited parallel RC circuit

from Fig. 7.24. Hence, finally

et=ðRCÞ
 

Vm 1
iðtÞ ¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi sin xt þ tan1  pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ð7:107Þ
R2 þ x21C2 xCR 1 þ x2 C2 R2
Response of an RC Circuit to an Exponential Excitation 77

where a is real, positive and not equal to 1/(CR). This gives the same value of K as in
The impedance to est excitation is Eq. 7.113. Hence, we have the complete
response of the circuit as
1
R  sC 1 1
ZðsÞ ¼ 1
¼ 1
ð7:109Þ I 1
R þ sC C s þ CR vðtÞ ¼ AetðCRÞ þ eat ð7:115Þ
1
C CR a
and therefore the natural frequency is given by
To find A, put v(0) = V; this gives
1
s¼ ð7:110Þ I
CR V ¼ Aþ 1  ð7:116Þ
C CR a
the pole of Z(s). The forced response will be of
the form vp(t) = Ke−at and referring to the dif- or,
ferential equation
I
dv v A¼V 1  ð7:117Þ
C þ ¼ is ðtÞ ð7:111Þ C CR  a
dt R
This can be substituted in Eq. 7.115 to get the
we get
complete solution. The result is
K
CKa þ ¼I ð7:112Þ I
R vðtÞ ¼  1
1 1
 ðeat  eCR Þ þ VeCR
C CR  a
or
ð7:118Þ
I I 1
K¼1 ¼ 1 ð7:113Þ
R  Ca C CR  a We have written v(t) in this form to illustrate
1
the case when a ¼ CR . In general, by series
expansion, we can write

" #
I n o
 et ð Þ1
1 1
vðtÞ ¼ e CR
Vþ 1 CRa
C CR a
" ( 1 2 )# ð7:119Þ
t2 CR a

1
CR I 1
¼e Vþ 1  t a þ þ 
C CR  a CR 2!

Note that an easier method for calculating 1


K would have been to refer to the fact that Z When a ! CR , Eq. 7.119 gives
(s) = v/is(t) for est excitation so that for Ie−at  
at I
excitation, we would have vðtÞ ¼ e Vþ t ð7:120Þ
C
I 1
vp ðtÞ ¼ Ieat ZðaÞ ¼ 1
eat ð7:114Þ This result can also be derived from the fact
C CR a
that if the characteristic root of a first-order
78 7 Circuit Analysis Without Transforms

equation (s = −a) is also contained in the forcing equation, it is clear that the forced response is
exponential function, then the particular solution zero. Then, the total response is
is of the form Kte−at. Substituting this in
Eq. 7.111 gives iðtÞ ¼ A1 es1 t þ A2 es2 t ; ð7:126Þ

1 where A1 and A2 are to be found from the initial


CK½tðaÞeat þ eat  þ Kteat ¼ Ieat
R conditions on L and C. Depending on the value
ð7:121Þ of R, we can have three cases. For real and
distinct s1,2 (overdamped case), Eq. 7.126 is the
in which the first and the third terms on the general form. For s1 = s2 (critically damped
left-hand side cancel, giving case), the solution becomes

I iðtÞ ¼ ðA1 þ A2 tÞes1 t ð7:127Þ


K¼ ð7:122Þ
C
For complex s1,2 = −a ± jx (underdamped
Hence, v(t) is of the form case), the solution can be put in the form
 
I iðtÞ ¼ Aeat sinðxt þ hÞ ð7:128Þ
vðtÞ ¼ A þ t eat ð7:123Þ
C
In each case, there are two constants to be
Putting t = 0 in this gives determined from the initial conditions. Let L and
C be initially relaxed. Then i(0) = 0 and vc(0) = 0.
A¼V ð7:124Þ The first condition gives, for the overdamped case,

A1 þ A 2 ¼ 0 ð7:128aÞ
Step Response of an RLC Circuit
while the second gives
In the second-order circuit of Fig. 7.26, as we
have seen, the natural response will be of the vL ð0Þ ¼ Við0ÞRvc ð0Þ ¼ V; ð7:128bÞ
form A1 es1 t þ A2 es2 t ; where
i.e.

di
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi )
R R2 1
s1;2 ¼  2L  4L 2  LC
ð7:125Þ L  ¼V ð7:128cÞ
dt t ¼ 0
D  a  jx
or
while the forced response will be a constant.
From the circuit as well as from this differential V
s 1 A2 þ S 2 A2 ¼ ð7:129Þ
L

Hence, A1 and A2 can be determined. The


S other two cases can be similarly dealt with.
t=0 R i(t)

V + Sinusoidal Response of an RLC


VL L Circuit

+
C vc
– Consider the circuit in Fig. 7.27, where the switch
has been in the closed position for a long time so
Fig. 7.26 An RLC circuit with step excitation that v(0) = 0 and i(0) = 0 (why don’t we write v
Sinusoidal Response of an RLC Circuit 79

The complete solution is, therefore,


R = 1Ω pffiffiffi 
t=2 3
+ vðtÞ ¼ Ae cos t þ h þ sinðt  p=4Þ
2

C = 1F
iS (t) =
v ð7:134Þ
1 t=0
sin t –
2 L = 1H
To determine A and h, take help of v(0) = 0
and i(0) = 0. The first condition gives

Fig. 7.27 RLC circuit excited by a sinusoidal current 0 ¼ A cos h sin p=4 ð7:135Þ

(0) = i(0) = 0?) Since the excitation is a current or


generator, the poles of Z(s) or the zeros of Y(s) will
1
determine the natural frequencies. We have A cos h ¼ pffiffiffi ð7:136Þ
2
1 1 s2 þ s þ 1
YðsÞ ¼ sC þ ¼ sþ ¼ The second condition says that
R þ sL sþ1 sþ1
ð7:130Þ 
dv
C  ¼ is ð0Þ  ið0Þ ð7:137Þ
The natural frequencies are, therefore, the dt t¼0
roots of
Substituting for v(t) from Eq. 7.134 and
2
s þ s þ 1 ¼ 0; ð7:130aÞ simplifying, we get
pffiffiffi
i.e. 2
pffiffiffi A ¼ pffiffiffi ð7:138Þ
1 3 3 sin h þ cos h
s1;2 ¼ j ð7:131Þ
2 2 Combining Eq. 7.136 and Eq. 7.138, we get
The natural response is thus of the form Ae –t/2
pffiffi pffiffiffi
cos 3 t þ h . The forced response phasor, V,  is 1 2
2 pffiffiffi ¼ pffiffiffi ð7:139Þ
given by the product of the current phasor cor- 2 cos h 3 sin h þ cos h
responding to i(t) and the impedance at s = j1.
Hence or
pffiffiffi

3 sin h ¼ cos h ð7:140Þ
 ¼ 1 \0
1 þ j1
V
2 j1
ð7:132Þ or
1
1
¼ \0 ð1  j1Þ ¼ pffiffiffi \  p=4;
2 2 p
h¼ ð7:141Þ
6
where we have again taken sin xt as the refer-
ence phasor, without any loss of accuracy. Hence, from Eq. 7.136,
Hence, the forced response will be rffiffiffi
2
vP ¼ sinðtp=4Þ ð7:133Þ A¼ ð7:142Þ
3
80 7 Circuit Analysis Without Transforms

Finally, therefore, the solution for v is v(t)


rffiffiffi pffiffiffi 
2 t=2 3 p V
vðtÞ ¼ e cos tþ þ sinðt  p=4Þ
3 2 6
ð7:143Þ

The first term is the transient response while 0


T t
the second term represents the steady-state
response. Fig. 7.29 Pulse excitation of the circuit in Fig. 7.28: A
gate

Pulse Response of an RC Circuit iðtÞ ¼ i1 ðtÞi2 ðtÞ


Vh 1 tT
i ð7:147Þ
For the circuit in Fig. 7.28, where v(t) is a pulse ¼ e RC uðtÞ  e RC uðt  TÞ
R
as shown in Fig. 7.29, we can write
A sketch of i(t) is shown in Fig. 7.30.
vðtÞ ¼ V ½uðtÞuðtT Þ; ð7:144Þ What about the voltage across the capacitor?
It can be found by integrating i(t) from 0 to t.
where u(t) is the unit step function. We know the Physically, it is not difficult to argue that C charges
response to Vu(t) to be according to the relation
V 1
i1 ðtÞ ¼ e RC uðtÞ; ð7:145Þ vc ðtÞ ¼ V½1et=ðCRÞ  ð7:148Þ
R
for 0 < t T. At t = T, the voltage source
where the multiplication by u(t) indicates that
becomes zero, and current direction reverses.
i1(t) = 0 for t < 0. The response to the delayed
Hence, C discharges through R from the value V
step Vu(t − T) will be, from the principle of time
[1 − e−T/(CR)] to zero exponentially. Hence,
invariance (which states that if the excitation is
vc(t) will vary with time as shown in Fig. 7.31.
delayed, so will the response be),
The expression for vc(t) is
V tT
i2 ðtÞ ¼ e RC uðt  TÞ; ð7:146Þ
R
Current
where multiplication by u(t − T) indicates that
i2(t) = 0 for t < T. Hence, by superposition, the
complete response will be i(t) i2 (t)
V
R
i1 (t)

0 t
R i(t ) T
+
v(t)
– V
C - -i2 (t)
R

Fig. 7.28 An RC circuit excited by the pulse of Fig. 7.29 Fig. 7.30 Pulse response of the circuit in Fig. 7.28
Pulse Response of an RC Circuit 81

vC (t) Z0 þ
T
V (1 - e
-
CR ) dðtÞdt ¼ 1 ð7:150Þ
V
0

It follows that d(t − a) is a unit impulse


occurring at t = a and that
Za þ
dðt  aÞdt ¼ 1 ð7:151Þ
T t a

Fig. 7.31 Plot of vc(t) for the circuit in Fig. 7.28 If the range of integral does not include the
value of t at which the impulse occurs, then the
integral will be zero, e.g.
8
< V 1  eCR1 ; 0 t T Z1 Z0
vc ðtÞ ¼ ð7:149Þ dðtÞdt ¼ 0 ¼ dðtÞdt ð7:152Þ
: V 1  eCRT etT
CR t  T

0þ 1

Also
Impulse Response
f ðtÞdðt  aÞ ¼ f ðaÞdðt  aÞ ð7:153Þ
Consider the pulse shown in Fig. 7.32 whose
area is unity. Let T be decreased to T/2, and the and
height increased to 2/T so that the area is still Za þ
unity. Let T ! 0 and the height ! ∞ in such f ðtÞdðt  aÞdt ¼ f ðaÞ ð7:154Þ
manner that the area is still unity. This limiting
a
condition of the pulse, whose duration is zero
and the height is infinite such that the area under Impulse is, of course, a hypothetical function,
it is unity, is called a unit impulse. A unit but is a useful concept in analysing circuits and
impulse function is denoted by d(t), and since systems. Let an impulse Q(t) be applied to an RC
infinite height is not a determinate quantity, we circuit as shown in Fig. 7.33 with v(0−) = 0.
define d(t) by the integral Then, the infinite current Q(t) flows through
C and establishes a voltage
Z0 þ
þ 1 Q
vð0 Þ ¼ QdðtÞ dt ¼ ð7:155Þ
C C
0
f(t)
At t  0+, Q(t) acts as an open circuit; hence
1/T v(t) decays with time according to

Q t=ðCRÞ
vðtÞ ¼ e ð7:156Þ
C

T t Note that here v(0−) 6¼ v(0+), this is so


because an infinite amount of current flows
Fig. 7.32 A pulse which becomes a unit impulse when through C for the short duration 0− to 0+.
T!0
82 7 Circuit Analysis Without Transforms

i i1

+ R
R
Q (t) C v R +
– v ~ i2
– C

L = CR 2
Fig. 7.33 An RC circuit excited by an impulsive current
source Fig. 7.34 Circuit for P.1

Similarly when a voltage excitation /d(t) is P:2. Find the transfer impedance ZT ðsÞ ¼ VðsÞ
IðsÞ for
applied across a series RL combination with no the circuit of P.1 and sketch its poles and
initial current in L, a current i(0+) = //L is zeros.
established. Here also i(0−) 6¼ i(0+) in an P:3. Write the differential equation governing
inductor. But for these exceptions, which are, of the circuit in Fig. 7.33 in the text. Clue:
course, hypothetical ones, the inductor current Chap. 8.
and capacitor voltage cannot change
P:4. Determine the transfer function ZT ðsÞ ¼ VðsÞ
Is ðsÞ
instantaneously.
for the circuit in Fig. 7.27 of the text and
sketch is poles and zeros.
Problems P:5. Working in the frequency domain, find the
unit step response of the circuit in Fig. 7.26
P:1. Write the differential equation for the circuit in the text. Find the inverse transform and
shown below for i, i1 as well as i2 verify that the response in the time domain,
(Fig. 7.34). as determine in the text, is identical.
Transient Response of RLC Networks
Revisited 8

As compared to the conventional approach of Aest is shown to be suitable for the overdamping
trial solutions for solving the differential and underdamping cases while for critical
equation governing the transient response of damping, the trial solution assumes the form
RLC networks, we present here a different (A1 + A2t)est because of repeated roots of the
approach which is totally analytical. We also characteristic equation. To a beginner, it is not
show that the three cases of damping, viz. clear why one has to use trial solutions, and why
overdamping, critical damping and under- they have to be of these specific forms, but since
damping, can be dealt with in a unified they work, he (or she) accepts the solutions
manner from the general solution. Won’t faithfully (This illustrates the principle of the end
you appreciate my innovations? Please do justifying the means!). This chapter presents a
and encourage me. different approach to the solution of the differ-
ential equation, which is totally analytical. Also,
it is shown that the three cases, particularly, the
Keywords critical damping one, need not be treated sepa-
Transient response of RLC network rately and that a unified treatment is possible

Overdamped case Underdamped case from the general solution of the differential
Critically damped case equation.

Introduction Example Circuit and the Differential


Equation
In dealing with transients in RLC networks, most
textbooks on circuit theory [1, 2] derive the For a clear exposition and understanding of the
governing second-order differential equation and analytical approach, we shall consider the simple
then treat the three different cases of damping, series RLC circuit shown in Fig. 8.1, where the
viz. overdamping, critical damping and under- switch is closed at t = 0 with i (0−) = 0 and vC
damping separately. A trial solution of the form (0−) = V. KVL gives, for t > 0.

Zt
di 1
Source: S. C. Dutta Roy, “Transient Response of RLC Ri þ L þ idt V ¼ 0: ð8:1Þ
dt C
Networks Revisited,” IETE Journal of Education, vol. 0
44, pp. 207–211, October–December 2003.

© Springer Nature Singapore Pte Ltd. 2018 83


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_8
84 8 Transient Response of RLC Networks Revisited

t=0
R y0 þ ðs þ 2aÞy þ ðs2 þ 2as þ x20 Þi ¼ 0 ð8:7Þ

Since s is our choice, let

s2 þ 2as þ x20 ¼ 0 ð8:8Þ


+
C i(t)
– L Note that this is precisely the result we would
have obtained if we tried the solution i = Aest on
Eq. 8.2, and as is well known, Eq. 8.8 is called
the characteristic equation of the system, having
the two roots
Fig. 8.1 The simple RLC circuit
s1;2 ¼ a  b; ð8:9Þ
Differentiating Eq. 8.1 and dividing both sides where
by L, we get qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
b¼ a2 x20 ð8:10Þ
i00 þ 2ai0 þ x20 i ¼ 0; ð8:2Þ
Combining Eqs. 8.7 and 8.8, we have
where prime (′) denotes differentiation with
respect to t, y0 þ ðs þ 2aÞy ¼ 0; ð8:11Þ

2a ¼ R=L and x20 ¼ 1=ðLCÞ ð8:3Þ which can be written in the form

dy=y ¼ ðs þ 2aÞdt ð8:12Þ


Analytical Solution of the Differential
Integrating Eq. 9.12 and simplifying, we get
Equation
ðs þ 2aÞt
y ¼ k1 e ; ð8:13Þ
As already mentioned, the conventional approach
to solving Eq. 8.2 is to try the solution i = Aest,
where k1 is a constant, as are all the k1’s in what
with little or, at the best, heuristic justification.
follows. Combining Eqs. 8.13 with 8.4, we get
We take a different approach here, by introducing
the following first-order equation in i:
the new variable
i0 si ¼ k1 e ðs þ 2aÞt
ð8:14Þ
y ¼ i0 si; ð8:4Þ
Unlike Eq. 8.11, however, Eq. 8.14 is not a
where s is a constant to be chosen shortly. From
homogeneous equation. That should not be a
Eq. 8.4, we have
cause for worry because we can use the
i0 ¼ y þ si ð8:5Þ well-known method of integrating factor, which
in this case is e–st. Multiplying both sides of
so that Eq. 8.14 by this factor, the result can be put in
the form
i00 ¼ y0 þ si0 ¼ y0 þ sy þ s2 i ð8:6Þ
st 0 2ðs þ aÞt
ðie Þ ¼ k1 e ð8:15Þ
Substituting Eqs. 8.5 and 8.6 in Eq. 8.2 and
simplifying, we get Integrating Eq. 8.15 and simplifying, we get
Analytical Solution of the Differential Equation 85

i ¼ k2 est þ k2 e ðs þ 2aÞt
ð8:16Þ Also, referring to Eq. 8.1 and putting t = 0+,
we get
Now s has two possible values given by
Eq. 8.9. If we choose s = s1 = −a + b then
−(s + 2a) = −a – b = s2. Similarly, if we choose ðdi=dtÞ0 þ ¼ V=L ð8:21Þ
s = s2, then −(s + 2a) = −a + b = s1. Thus in
Combining this with Eq. 8.20 and simplify-
either case, our solution is of the form
ing, we get
i ¼ k4 es1 t þ k5 es2 t ð8:17Þ
k4 ¼ V=½Lðs1 s2 ފ ð8:22Þ

From Eqs. 8.20, 8.22 and 8.9, the general


Evaluating the Constants solution can be written in the form

Now we go back to the example circuit. To V at


i¼ e ðebt e bt
Þ ð8:23Þ
evaluate the constants in Eq. 8.17, we invoke the 2bL
two initial conditions, viz. i(0−) = 0 and
vC(0−) = V. Because of continuity of inductor This single equation, as will be shown here, is
currents and capacitor voltages, we have adequate for considering all the cases of
damping.
ið0 þ Þ ¼ ið0 Þ ¼ 0 and vC ð0 þ Þ ¼ vC ð0 Þ ¼ V
ð8:18Þ
Overdamped Case
From Eq. 8.17 and the first condition in
Eq. 8.18, we get The RLC circuit is overdamped if a > x0, i.e.
from Eq. 8.3, R > 2√(L/C). Consequently, b is
k4 þ k5 ¼ 0 ð8:19Þ real and we can write Eq. 8.23 in the form

so that V at
i¼ e sinh bt ð8:24Þ
bL
i ¼ k4 ðes1 t es2 t Þ ð8:20Þ

Fig. 8.2 Current response for various cases of damping


86 8 Transient Response of RLC Networks Revisited

This gives i = 0 at t = 0, as it should; the while its zero crossings occur at intervals of
same holds at t = ∞ also because b< a. Hence, p/xd. The maxima and minima occur at times
there must be a maximum at some value of t, satisfying the equation di/dt = 0. Carrying out
say t. Differentiating Eq. 8.24 with respect to the required algebra, we get maxima at
t and putting the result to zero gives, after
simplification, t2n þ 1 ¼ ð1=xd Þ½tan 1 ðxd =aÞ þ 2npŠ; n
¼ 0; 1; 2; . . . ð8:31Þ
tb ¼ ð1=bÞ tanh 1 ðb=aÞ ð8:25Þ
and minima at
The maximum value of i (= i) is obtained by
combining Eqs. 8.24 and 8.25; using the identity t2n ¼ ðl=xd Þ½tan 1 ðxd =aÞ þ ð2n lÞpŠ;
ð8:32Þ
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi n ¼ 1; 2; 3; . . .
sinh h ¼ 1= 1 þ coth2 h ð8:26Þ
Combining Eqs. 8.30, 8.31 and the identity
to simplify the result, we get pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
sin h ¼ 1= 1 þ cot2 h; ð8:33Þ
ða=bÞ tanh 1 ðb=aÞ
pffiffiffiffiffiffiffiffiffi
ib ¼ V C=Le ð8:27Þ
and simplifying, we get the value of the first
The general nature of variation of the current maximum current as
is shown in Fig. 8.2. Using the series
ða=xd Þ tan 1 ðxd =aÞ
pffiffiffiffiffiffiffiffiffi
imin;1 ¼ V C=Le ð8:34Þ
1 3 5
tanh h ¼ h þ ðh =3Þ þ ðh =5Þ þ . . .to 1; jhj\1;
ð8:28Þ Similarly, the value of the first minimum
current can be found as
It is easily shown that as b increases, tb
ða=xd Þ½tan 1 ðxd =aÞ þ pŠ
pffiffiffiffiffiffiffiffiffi
increases and ib decreases. The time tb is the imin;1 ¼ V C=Le ð8:35Þ
smallest and the current ib is the highest when
b = 0, but we shall talk about this limiting situ- All successive maxima (as well as minima)
ation later. differ from each other in magnitude by the factor
e 2ap=xd :

Underdamped Case
Critically Damped Case
The RLC circuit is underdamped if a < x0, i.e.
R < 2√(L/C). In this case, b is imaginary. Let
For this case, b = 0, i.e. a = x0 or R = 2√(L/C),
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi and as already mentioned, in the conventional
b ¼ j x20 a2 ¼ jxd ð8:29Þ approach, one notes that here s1 = s2 = −a, and
to obtain two independent solutions, one tries out
Putting this value in Eq. 8.23 and simplifying, a solution of the form (A1 + A2t)est. One
we get approach [3] for justifying this trial solution is to
V assume a solution of the form f(t) est; substitute it
at
i¼ e sin xd t ð8:30Þ in Eq. 8.2; take help of the fact that s + a = 0;
xd L
and hence obtain f″(t) = 0, so that f(t) has to be of
Obviously, the response is oscillatory but the form A1 + A2t. Here, we obtain the solution
damped, as shown in Fig. 8.2. The envelope of directly from the general solution Eq. 8.23 as the
the damped sine wave deceases exponentially, limiting case of b ! 0. Thus
Critically Damped Case 87

V ebt e bt Concluding Comments


at
i¼ e lim : ð8:36Þ
2L b!0 b
A totally analytical approach is given here for
Using L’Hospital’s rule, this becomes dealing with transients in second-order networks
and systems, which does not require trial solu-
V tions. It deserves to be mentioned here that our
at
i¼ te ð8:37Þ method is an improvement over that using the
L
operator concept, as given in some books [4]. In
A plot of Eq. 8.37 is shown in Fig. 8.2. It is the latter, one uses the operator D for d/dt and D2
easily shown that the maximum occurs at the for d2/dt2 so that Eq. 8.2 can be written as
time t = t0 where
ðD2 þ 2aD þ x20 Þi ¼ 0 ð8:43Þ
t0 ¼ 1=a; ð8:38Þ
One then treats (D2 + 2aD + x20) as an alge-
the maximum value being braic expression, finds its roots s1 and s2 (same as
those given by Eq. 8.9) and rewrites Eq. 8.43 as
pffiffiffiffiffiffiffiffiffi
i0 ¼ ðV=eÞ C=L; ð8:39Þ ðD s1 ÞðD s2 Þi ¼ 0 ð8:44Þ

where use has been made of the fact that The conceptual difficulty of a student arises on
R = 2√(L/C). Using the infinite series Eq. 8.28 two counts—first, in accepting D2 for d2/dt2 and
and the following: second, in treating (D2 + 2aD + x20) as an alge-
braic expression in D, i.e. in treating D as a
sinh h ¼ h þ ðh3 =3!Þ þ ðh5 =5!Þ þ . . .: to 1; jhj\1; variable instead of an operator. Of course, here
also, as in the trial solution method, the end can
ð8:40Þ
be used to justify the means as follows:
it is easily shown that Eqs. 8.38 and 8.39 are the  
limiting values of Eqs. 8.25 and 8.27, respec- ðD s1 ÞðD s2 Þi ¼ ðD s1 Þ ddti s2 i
2
tively, for b ! 0. ¼ ddt2i s2 ddti s1 ddti þ s1 s2 i ¼ i00 þ 2ai0 þ x20 i;
The critically damped case could also have
been treated ab initio. The procedure is the same ð8:45Þ
up to Eq. 8.15, at which stage we take account of
which is the same as the left-hand side of
the fact that s1 = s2 = −a. Equation 8.15 then
Eq. 8.2. The method of solution is similar to ours
becomes
in that one solves two first-order differential
equations, viz. the homogeneous equation
st 0
ðie Þ ¼ k1 ð8:41Þ (D − s1)y = 0 first, and then the nonhomoge-
neous equation (D − s2)i = y. Here also, no
Integrating Eq. 8.41 gives special care is needed to deal with the critically
damped case in which s1 = s2 = −a.
i ¼ ðk1 t þ k2 Þest ð8:42Þ

k1 and k2 can now be evaluated from the ini- Problems


tial conditions to arrive at precisely the same
result as Eq. 8.37. P:1. Arrange an RLC circuit to give a band-stop
response in as many ways as you can.
88 8 Transient Response of RLC Networks Revisited

L1 L3 C2 P:4. Draw as many third-order circuits as pos-


sible and comment on the nature of the
+ frequency response.
+
v/H L2 C1 C3 R v o /H P:5. Take any one circuit of P.4 and determine
– –
its poles and zeros.

Fig. 8.3 For P.3


References
P:2. Find at least two circuits using a parallel LC
1. W.H Hayt, Jr, J. Kemmerly, Engineering Circuit
and a series LC circuit to obtain a band-stop Analysis (McGraw-Hill, 1978)
response. Find the poles and zeroes of the 2. M.E Van Valkenburg, Network Analysis (Prentice Hall
transfer function. of India, 1974)
P:3. Obtain the differential equation governing 3. A.B. Carlson, Circuits (John Wiley, 1996)
4. D.K. Cheng, Analysis of Linear Systems (Addison
the circuit Fig. 8.3. Wesley, 1959)
Appearances Can Be Deceptive:
A Circuit Paradox 9

How can a paradox be deceptive? What – gmRc, where gm = gm3 = gm4. Since Q1 as well
appears to be an obvious conclusion may not as Q2 basically act as emitter followers, it is
be correct after all! This is illustrated with the logical to assume that v3 ≅ v1 and v4 ≅ v2 so that
help of a differential amplifier circuit, whose the overall gain of the circuit is the same as Ad.
gain is actually half of what it appears to be. However convincing this logic may be, things
This paradox is indeed deceptive. See for turn out to be quite different in practice. In fact,
yourself and decide if you wish to agree or the differential gain of the overall amplifier is
not. half of Ad! Let us see how.

Keywords AC Analysis
Paradox  DC analysis
First, recall that the gain of the simple follower
(all gains are referred to the mid-band situation,
of course) in Fig. 9.2, ignoring the effects of rx
and r0 in the hybrid-p equivalent circuit, is [1–3].

v0 ðb þ 1ÞRE
¼ ð9:1Þ
vi rp þ ðb þ 1ÞRE
The Illusion
ffi l, if rp  ðb þ 1ÞRE ð9:2Þ
Consider the differential amplifier circuit shown
in Fig. 9.1, where the symbols v0–v4 are used for The emitter follower Q1 has a load of rp3 in
small signal ac voltages. As has been proved in the differential mode (in this mode the node
standard textbooks [1–3], the gain of the internal E acts as virtual ac ground); hence its gain is
differential amplifier comprising of matched
transistors Q3 and Q4 is Ad = v0/(v3 – v4) ≅ v3 ðb1 þ 1Þrp3
¼ ; ð9:3Þ
v1 rp1 þ ðb1 þ 1Þrp3

where the subscripts on the parameters refer to


Source: S. C. Dutta Roy, “Appearances can be
Deceptive: A Circuit Paradox,” Students’ Journal of the corresponding transistors. Now
the IETE, vol. 37, pp. 79–81, July–September 1996.

© Springer Nature Singapore Pte Ltd. 2018 89


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_9
90 9 Appearances Can Be Deceptive: A Circuit Paradox

Fig. 9.1 The circuit under +VCC


consideration

RC RC

+ – Q2 +
+ Q1 v0
v2
v1


Q3 Q4
+ +
v3 v4
– –
E

IEE
R EE

b1 b V T b V T VT rp1
rp1 ¼ ¼ 1 ¼ 1 ¼ ð9:4Þ ¼ ; ð9:6Þ
gm1 IC1 b1B1 IB1 b1 þ 1

and, similarly, where VT stands for the thermal voltage (kT/q ≅


25 mV at room temperature) and capital I with
V T VT VT capital subscript stands for DC current. Substi-
rp3 ¼ ¼ ¼ ð9:5Þ
IB3 IE1 ðb1 þ 1ÞIB1 tuting Eq. 9.6 in Eq. 9.3 gives

v3 1
¼ ð9:7Þ
v1 2
Fig. 9.2 AC equiva-
lent of a simple emitter Hence, each of the emitter followers Q1 and
follower circuit Q2 has a gain of 1/2 instead of 1, and the actual
differential gain of the overall circuit is
Q
+
vi v0 v0 gm R c
¼ ¼ ; ð9:8Þ

+
v1  v2 2ðv3  v4 Þ 2

v0 where gm = gm3 = gm4, as mentioned earlier.


RE

DC Analysis 91

DC Analysis Thus,
 
The result obtained above can be corroborated by b3 ðb1 þ 1ÞIC1 b2
VBE3  VBE4 ¼ VT ln
analysing the DC characteristics of the circuit. b1 b4 ðb2 þ 1Þ=IC2
We wish to establish a relationship between ð9:15Þ

V0 D VC3  VC4 ¼ ðVCC IC3 RC ÞðVCC IC4 RC Þ If the transistors are matched, as is the case
with IC fabrication, then Eq. 9.15 becomes
¼ ðIC4 IC3 ÞRC
ð9:9Þ IC1
VBE3  VBE4 ¼ VT ln ¼ VBE1  VBE2
IC2
and ð9:16Þ
Vi D VB1  VB2 ð9:10Þ
From Eqs. 9.12 and 9.16, we get
Recall the basic current–voltage relationship
IC3
of an active transistor Vi ¼ 2ðVBE3  VBE4 Þ ¼ 2VT ln ð9:17Þ
IC4
Ic ffi Is exp ðVBE =VT Þ ð9:11Þ Solving for IC4/IC3 from Eq. 9.17 gives

Now,  
IC4 Vi
¼ exp  ð9:18Þ
IC3 2VT
Vi ¼VBE1 þ VBE3 þ VE  ðVBE2 þ VBE4 þ VE Þ
¼ðVBE1 VBE2 Þ þ ðVBE3 VBE4 Þ so that
ð9:12Þ  
Vi
IC4  IC3 exp  2V T
1
Also, assuming IS1 = IS2 = IS3 = IS4 = IS, we ¼   ð9:19Þ
IC4 þ IC3 Vi
exp  2V þ1
have T

IC3 b IB3 Also, note that IC4 + IC3 = IEE = constant.


VBE3 ¼ VT ln ¼ VT ln ¼ 3
IS IS Hence from Eqs. 9.19 and 9.9, we get

Vi
b IE1 b ðb þ 1ÞIC1 V0 ¼ IEE R; tanh ð9:20Þ
¼ VT ln 3 ¼ VT ln ¼ 3 1 ð9:13Þ 4VT
IS b1 IS
The differential gain, evaluated at V1 = 0, is
Similarly,
 
b ðb þ 1ÞIC2 dV0 2 Vi 1
VBE4 ¼ VT ln 4 2 ð9:14Þ ¼ IEE RC sech
b2 IS dVi Vi ¼0 4VT 4VT Vi ¼0
92 9 Appearances Can Be Deceptive: A Circuit Paradox

IEE RC P:4. What happens when b1, b2 & b3 all ! ∞ in


¼ ð9:21Þ Eq. 9.15 of text? Comment on the result.
4VT
P:5 What happens when RE is replaced by a
For Vi = 0, IC3 = IC4 = IEE/2; hence the gain current generator?
is
Acknowledgements Acknowledgement is due to my
IC3 gm RC students in the EE204 N class, to whom I had posed this
 RC ¼  ; ð9:22Þ paradox as a challenge during the semester commencing
2VT 2
January 1996. Special mention must be made of Ankur
Srivastava, Atul Saroop and Ram Sadhwani whose
where gm = gm3 = gm4. This is exactly the same enthusiastic participation in the resolution of the paradox
as the result derived under AC analysis. made it an enjoyable experience for me.

References
Problems
1. S.G. Burns, P.R. Bond, Principles of Electronic
P:1. What happens when REE ! 1 in Fig. 9.1 Circuits (West Publishing Co, St Paul, USA, 1987)
of text. What about REE = 1? 2. A.S. Sedra, K.C. Smith, Microelectronic Circuits
P:2. What happens when RE ! 1 in Fig. 9.2 of (Sanders College Publishing, Fort Worth, USA, 1992)
text. What about RE = 1? 3. J. Millman, A. Grabel, Microelectronics, 2nd edn.
(McGraw Hill, New York, 1987)
P:3. Approximate exp. function in Eq. 9.19 of
text by the first two terms and find IC4 in
terms of IC3.
Appearances Can Be Deceptive:
An Initial Value Problem 10

An initial value problem is posed and solved Establishing I2(0−): One Possibility
in a systematic way, illustrating the fact that
what meets the eye may not be the truth! To investigate the problem, let us examine how
i2(0−) can be established in the circuit. One pos-
sible way is shown in Fig. 10.2, where the switch is
Keyword
closed at t = −∞ with i1(−∞−) = 0 and
Initial value problem
i2(−∞−) = I2, an arbitrary value. Since for
t > −∞, the circuit is equivalent to a series com-
bination of V, R and L1, i1 increases exponentially
The Problem from zero towards the asymptotic value V/R,
reachable, at t = 0. All this time, can i2 change
Consider the circuit shown in Fig. 10.1 in which from the value I2? This is not possible, because any
the switch is closed from t = −∞ and is opened change in i2 requires a corresponding change in the
at t = 0. The problem that is posed here is: what voltage across it (v2), but since L2 is
is i2(0−) (This forms part of Problem 5.9 in [1])? short-circuited, v2 is forced to remain zero. Thus, i2
One way of looking at the problem is to remains I2 throughout, and I2 can be arbitrary! The
realize that at t = 0−, L2 behaves as a short circuit current through the short circuit keeps on changing
and therefore, there are two short circuits in from −I2 at t = −∞− to (V/R) −I2 at t = 0−.
parallel. Since i1(0−) = V/R, is i2(0−) = V/(2R)?
The answer, as we shall demonstrate here, is; no,
not necessarily. In fact, we show that i2(0−) can Establishing I2 (0−): Another
have an arbitrary value I2. Possibility

Figure 10.3 shows another method of establish-


ing i2(0−). The switch is shorted at t = −∞ with
i1(−∞−) = i2(−∞−) = V1/R. The current i1
remains constant at V1/R (why?) and so does i2 at
the same value, the current through the short

Source: S. C. Dutta Roy, “Appearances can be


Deceptive: An Initial Value Problem,” IETE Journal of
Education, vol. 45, pp. 31–32, January–March 2004.

© Springer Nature Singapore Pte Ltd. 2018 93


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_10
94 10 Appearances Can Be Deceptive: An Initial Value Problem

Fig. 10.1 The circuit under consideration
Fig. 10.2 One possible way of establishing i2(0−)
Fig. 10.3 Another possible way of establishing i2(0−)

Solve the Circuit

We have established that i2(0−) in the circuit of Fig. 10.1 can have an arbitrary value I2. Hence, for solving for i1 in this circuit, I2 also needs to be specified (in [1], I2 is not specified). The solution proceeds by invoking the principle of conservation of flux, viz.

L1 i1(0−) + L2 i2(0−) = L1 i1(0+) + L2 i2(0+).   (10.1)

Since i1(0+) = i2(0+), i1(0−) = V/R and i2(0−) = I2, we get

i1(0+) = (V L1/R + L2 I2)/(L1 + L2).   (10.2)

For t ≥ 0+, i1 = i2, so that the differential equation governing the circuit becomes

R i1 + (L1 + L2)(di1/dt) = V.   (10.3)

This equation has a solution of the form

i1(t) = A + B e^(−Rt/(L1 + L2)).   (10.4)

Using the boundary condition given by Eq. 10.2 and i1(∞) = V/R, one can find the constants A and B. The final result is:

i1(t) = V/R + [(L2 I2 − V L2/R)/(L1 + L2)] e^(−Rt/(L1 + L2)).   (10.5)

Note that the circuit of Fig. 10.1 is an example of a situation where, upon switching, inductor currents are not continuous, i.e. i1(0−) ≠ i1(0+) and i2(0−) ≠ i2(0+).
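The closed-form result is easy to check numerically. The following Python sketch (the element values and I2 are arbitrary illustrative choices, not taken from the text) confirms that Eq. 10.5 starts from the flux-conservation value of Eq. 10.2, settles to V/R, and satisfies the governing equation Eq. 10.3:

```python
import numpy as np

# Illustrative values (assumed): V, R, L1, L2 and the arbitrary pre-switching current I2
V, R, L1, L2, I2 = 10.0, 2.0, 1.0, 3.0, 0.5

# Common current just after switching, from flux conservation (Eqs. 10.1-10.2)
i1_0plus = (V * L1 / R + L2 * I2) / (L1 + L2)

def i1(t):
    # Closed-form solution of Eq. 10.5
    tau = (L1 + L2) / R
    return V / R + (L2 * I2 - V * L2 / R) / (L1 + L2) * np.exp(-t / tau)

assert np.isclose(i1(0.0), i1_0plus)   # boundary value of Eq. 10.2
assert np.isclose(i1(50.0), V / R)     # steady state V/R for t >> (L1 + L2)/R

# Residual of the governing equation Eq. 10.3: R*i1 + (L1+L2)*di1/dt - V
t = np.linspace(0.0, 5.0, 2001)
residual = R * i1(t) + (L1 + L2) * np.gradient(i1(t), t) - V
print("max |residual| =", np.abs(residual).max())  # small; limited only by finite differences
```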
Problems
P.1. What happens when the switch in Fig. 10.1 of the text is closed from t = 0 to t = T0 and then opened?
P.2. What happens when the switch in Fig. 10.1 is shifted to be across L1?
P.3. Determine the response of the circuit of Fig. 10.1 when L2 is replaced by a capacitor C.
P.4. Same when the C of P.3 is shifted to be across L1.
P.5. Same when C as well as the switch are shifted to be across L1.

Reference
1. F.F. Kuo, Network Analysis and Synthesis (Wiley, 1966), p. 129
11 Resonance

In this chapter, we discuss the basic concepts of resonance in electrical circuits, and its characterization, and illustrate its application by an example. Several problems have been added at the end for the students to work out. Do work them out.

Keywords
Resonance · Figure of merit for coils and capacitors · Q · Series resonance · Parallel resonance · Impedance · Admittance · Bandwidth

A one-port network containing resistors and inductors has the property that the current lags behind the voltage. For a resistor–capacitor one-port, on the other hand, the current leads the voltage. If a one-port contains inductors as well as capacitors, in addition to the inevitable dissipative element, viz. resistors, then the circuit may be inductive at some frequencies, capacitive at some other frequencies and, most interestingly, purely resistive at one or more frequencies. In the last situation, the current obviously is in phase with the voltage and the power factor is unity. It is also clear that this situation can arise only when the inductive and capacitive reactances (or susceptances) cancel each other. Such a situation is known as resonance, which plays a very important role in impedance matching, filtering, measurements, and many other applications.
Two types of resonance are distinguished, viz. series and parallel. A cancellation of reactances in series is referred to as series resonance, while if susceptances in parallel cancel, we call it parallel resonance. In either case, the condition of resonance is usually associated with an extremum (maximum or minimum) of impedance or admittance magnitude, and of voltage/current.

Q: A Figure of Merit for Coils and Capacitors

Dissipation, as already mentioned, is an inevitable phenomenon in nature, in general, and in electric circuits, in particular. In other words, you cannot make a pure inductor or capacitor in practice; there will always be some losses. The less the losses are, the better is the reactive element.

Source: S. C. Dutta Roy, “Resonance,” Students’ Journal of the IETE, vol. 36, pp. 169–178, October–December 1995.


A figure of merit, Q, is defined for reactive elements in terms of the energy stored in and dissipated by the element, as follows:

Q = 2π (maximum energy stored per cycle)/(energy dissipated per cycle).   (11.1)

Consider a coil having an inductance L and a series resistance R, through which a current¹ i(t) = Im sin ωt flows. Then, the maximum energy stored in a cycle is (1/2)L Im², while the average power dissipated is (1/2)R Im². But power is energy per unit time, so that the energy dissipated per cycle is (average power) × (time duration of one cycle) = (1/2)R Im²/f. Thus, the Q of an inductor is

Q = 2πfL/R = ωL/R.   (11.2)

Similarly, for a capacitor, usually represented by a pure capacitance C in parallel with a resistance R, you can show (Problem 1) that the Q is given by

Q = ωCR.   (11.3)

As you can see from Eqs. 11.2 and 11.3, Q is a linear function of frequency. While this is very nearly the case for an air capacitor, it is not so for a coil or for other types of capacitors. For a coil, the Q usually increases at low frequencies, attains a maximum at some frequency, and then decreases. This happens because of the skin effect, due to which the resistance increases with frequency, and at sufficiently high frequencies this increase is at a more rapid rate than the linear increase of frequency.

¹As a matter of notation, we shall use small i or v for an instantaneous value, subscript m for the maximum value, capital I or V for the phasor representation, and |I| or |V| for the rms value.

Series Resonance

Fig. 11.1 A series RLC circuit

Consider the circuit of Fig. 11.1, in which a series RLC circuit is excited by a sinusoidal voltage generator of variable frequency, represented by the phasor Vg∠0°. The current through the circuit, represented by the phasor I = |I|∠θ, is given by

I = Vg/[R + jωL + 1/(jωC)],   (11.4)

where the denominator represents the total impedance Z(jω). Note that R includes the generator internal resistance, coil losses and any external resistance which may have been inserted. Taking the magnitude and phase of Eq. 11.4, we have

|I| = Vg/√[R² + (ωL − 1/(ωC))²],   (11.5)

θ = −tan⁻¹[(ωL − 1/(ωC))/R].   (11.6)

Note that at ω = 0, i.e. DC, at which the capacitor acts as an open circuit, the current is zero; the same is true at ω = ∞, at which the inductor acts as an open circuit. In between, the current shows a maximum at the frequency ω₀ at which the second term in the denominator of Eq. 11.5 vanishes, i.e.

ω₀L = 1/(ω₀C)  or  ω₀ = 1/√(LC).   (11.7)

The maximum value of the current, denoted by I₀, is

I₀ = Vg/R.   (11.8)

A sketch of |I| versus ω is shown in Fig. 11.2a, while Fig. 11.2b shows a sketch of the corresponding phase θ.
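Equations 11.4–11.8 are easy to explore numerically. The short Python sketch below (the element values are assumed for illustration only) sweeps the frequency, locates the current maximum, and checks it against ω₀ = 1/√(LC) and I₀ = Vg/R:

```python
import numpy as np

# Illustrative element values (assumed for this sketch)
R, L, C, Vg = 10.0, 1e-3, 1e-9, 1.0

w0 = 1.0 / np.sqrt(L * C)                 # Eq. 11.7
w = np.linspace(0.1 * w0, 10 * w0, 100001)

Z = R + 1j * w * L + 1.0 / (1j * w * C)   # series impedance
I = Vg / Z                                # Eq. 11.4

k = np.argmax(np.abs(I))
print("resonance (rad/s): computed =", w[k], "  1/sqrt(LC) =", w0)
print("peak current:", np.abs(I[k]), "  Vg/R =", Vg / R)            # Eq. 11.8
print("phase at resonance (deg):", np.degrees(np.angle(I[k])))      # ~0, cf. Eq. 11.6
print("Q = w0*L/R =", w0 * L / R)                                    # Eq. 11.2
```

The phase printed at the peak is essentially zero, confirming that the current and the voltage are in phase at resonance.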

It is easy to argue Parallel Resonance


from Eq. 11.6 that the phase is +p/2 at dc,
decreasing to 0° at x ¼ xo and then to –p/2 as Figure 11.4 shows a parallel RLC circuit excited
x ! 1. This is a reflection of the fact that the by a sinusoidal current generator of varying fre-
impedance ZðjxÞ is capacitive for x\xo quency. The voltage across the circuit is given by
purely resistive at x ¼ xo and inductive for the product of Ig and total impedance, i.e.
x [ xo . At the frequency xo , the current and
Ig
voltage are in phase, and by definition, reso- V¼ 1
; ð11:12Þ
G þ jxC þ
nance occurs. jxL

Taking the current as the reference, the phasor


diagram for the series resonant circuit is shown in where G = 1/R. Note the similarity of this with
Fig. 11.3 for three frequencies, viz. (i) x\xo , Eq. 11.4; this is, in fact, expected because of the
(ii) x ¼ xo and (iii) x [ xo . In each of these duality of the circuits of Figs. 11.1 and 11.4.
diagrams, Thus, all the results derived earlier will apply
here also. For completeness, we summarize
I below the main results:
VR ¼ IR; VL ¼ jxLI and VC ¼ ð11:9Þ
jxC
(i) If Ig = Ig ∠0°, V = |V| ∠/, then
At x\xo ; 1=xC [ xL so that |VC| > |VL|.
Ig
Consequently, the resultant of VR and VC + VL jVj ¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ð11:13Þ
1 2
 
which gives Vg lagging behind I, i.e. I leads Vg. G2 þ xC  xL
At x [ xo the reverse is the case, while at
x ¼ xo , VC = −VL and Vg and VR are the same.
1
It is interesting to observe that at resonance, xC  xL
/ ¼  tan1 ð11:14Þ
G
xo L
VL ¼ jxo LIo ¼ j Vg ¼ jQVg : ð11:10Þ
R (ii) |V| is a maximum, equal to IgR at the res-
onance frequency xo given by Eq. 11.7
where Q refers to that of the series RL combi- (iii) V leads Ig for x\xo lags Ig for x [ xo
nation at resonance. Similarly, at resonance and is in phase with Ig at x ¼ xo
(iv) At resonance, IL = –jQ Ig and Ic = jQ Ig,
Io
VC ¼ ¼ jxo LIo ¼ jQVg ; ð11:11Þ where Q ¼ xRo L ¼ xo CR (Problem 3).
jxo C

where use has been made of Eq. 11.7.2 In most The phasor diagram can be easily constructed,
practical situations, Q is required to be greater and is left to you as an exercise (Problem 4).
than unity; thus the voltage across the capacitor
or the inductor at resonance is greater than the
input voltage, i.e. the resonant circuit can be used Impedance/Admittance Variation
as a voltage magnifier. with Frequency: Universal Resonance
Curves

From the similarity between series and parallel


resonance equations, it follows that the two cases
can be described by a single equation of the form

2
Equation 11.11 can also be written as VC = Vg/(jo CR).
1
H ðjxÞ ¼  ; ð11:15Þ
Obviously, the Q of the series RC circuit at resonance is l/ 1
1 þ j xb  xy
(o CR), in contrast to Eq. 11.3 for a parallel RC circuit
(Problem 2).

Fig. 11.2 Variation of a magnitude and b phase of current in a series RLC circuit
Fig. 11.3 Phasor diagrams for the series RLC circuit at frequencies a below, b at and c above the resonance frequency

1
AðjxÞ ¼ aH ðjxÞ ¼   : ð11:16Þ
j 1
aþ a xb  xc

The interpretation of AðjxÞ is also given in


Table 11.1, where the subscript zero refers to the
value at resonance.
Fig. 11.4 A parallel RLC circuit
Note that in either series or parallel resonance,
pffiffiffiffiffi
the resonance frequency is xo ¼ 1= bc;
where the interpretation of the symbols for the sffiffiffi sffiffiffi
pffiffiffiffiffi b bx
two kinds of resonance are given in Table 11.1. xb ¼ x bc ¼ ð11:17Þ
Equation 11.15 can be written in a normalized c c xo
form as follows:

Table 11.1 Interpretation of the symbols in Eq. 11.15 for series and parallel resonances
Type of resonance H ðjxÞ a b c AðjxÞ
Y ðjxÞ
Series Y ðjxÞ R L C
Yo ¼ I ðIjx
o
Þ

Z ðjxÞ
Parallel Z ðjxÞ G C L
Zo ¼ V ðVjx
o
Þ

and similarly, A more useful form of the universal resonance


rffiffiffi curve can be obtained if the behaviour at or near
c x
xc ¼ ð11:18Þ resonance is only of concern. Define the frac-
b xo
tional deviation of source frequency from the
resonance frequency as
introducing these in Eq. 11.16, we get
1 x  xo
AðjxÞ ¼ : ð11:19Þ d¼ ; ð11:23Þ
j
qffiffi
b x xo
xo
1þ a c xo  x
i.e.
Now, for series resonance, x ¼ xo ð1 þ dÞ ð11:24Þ
sffiffiffi rffiffiffiffi
1 b 1 L L xo L Then Eq. 11.22 can be written as
¼ ¼ pffiffiffiffiffiffi ¼ ¼ Q: ð11:20Þ
a c R C R LC R
1
AðjxÞ ¼ : ð11:25Þ
Similarly for parallel resonance, 1 þ jQdð2 þ dÞ=ð1 þ dÞ

sffiffiffi Universal resonance curves are obtained by


1 b R plotting |A| and ∠A versus d with Q as a
¼ ¼ Q: ð11:21Þ
a c xo L parameter. Note that d = 0 corresponds to x ¼
x0 or x ¼ x=x0 ¼ 1 (Fig. 11.5).
Thus, Eq. 11.19 becomes Equation 11.25 is an exact expression; for
d  1, it can be approximated by
1
AðjxÞ ¼  : ð11:22Þ
1 þ jQ x
 xxo 1
xo AðjxÞ ffi : ð11:26Þ
1 þ j2Qd
This expression is independent of the type of
resonance or the actual element values used in
the circuit, and is, therefore, applicable to all Bandwidth of Resonance
resonant circuits of the form of Fig. 11.1 or
Fig. 11.4. A family of curves can be drawn for The sharpness of resonance, as we shall see, is
the variation of |A| and ∠A with the normalized determined by Q. A measure of the sharpness is
frequency x ¼ xxo , taking Q as a parameter. the bandwidth, defined as the band of frequencies
Because of universal applicability, these are around xo at which the magnitude of AðjxÞ is no
pffiffiffi
called universal resonance curves. less than 1= 2

Fig. 11.5 Showing the universal resonance curves
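The universal resonance curve of Eq. 11.25, and its narrow-band approximation Eq. 11.26, can be tabulated directly. The sketch below (the Q values are arbitrary choices for illustration) shows how closely the approximation tracks the exact expression for small δ:

```python
import numpy as np

def A_exact(delta, Q):        # Eq. 11.25
    return 1.0 / (1.0 + 1j * Q * delta * (2 + delta) / (1 + delta))

def A_approx(delta, Q):       # Eq. 11.26, valid for |delta| << 1
    return 1.0 / (1.0 + 2j * Q * delta)

for Q in (10, 50, 100):       # assumed illustrative Q values
    for d in (0.001, 0.01, 0.05):
        e, a = abs(A_exact(d, Q)), abs(A_approx(d, Q))
        print(f"Q={Q:4d}  delta={d:6.3f}  |A| exact={e:.4f}  |A| approx={a:.4f}")
```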



Figure 11.5 shows the bandwidth, B, as Other Types of Resonant Circuits


B ¼ x2 ¼ x1 ; ð11:27Þ
The parallel resonant circuit of Fig. 11.4 cannot
where x2 is called the ‘upper half power fre- be made in practice because you cannot make a
quency’ and x1 is called the ‘lower half power pure inductance (in contrast, almost pure capac-
frequency’. The nomenclature ‘half power’ is itances can be readily obtained). A practical cir-
derived from the following consideration. Con- cuit is shown in Fig. 11.6, where R is the
sider, for example series resonance in which winding resistance of the coil, usually much less
AðjxÞ ¼ I ðjxÞ=Io . The power dissipated in R at than the inductive reactance xL at or around
 pffiffiffi2  resonance. If this is true, then the total impedance
x1;2 will be Io = 2 R ¼ Io2 R 2, which is half
of the circuit is
of that dissipated at xo . A similar interpretation
can be derived for parallel resonance. 1
To determine x1;2 turn to Eq. 11.22 and notice Z ðjxÞ ¼
jxC þ R þ1jxL
that jAðjxÞj ¼ p1ffiffi2 implies R þ jxL
¼ ð11:32Þ
 1 þ jxRC  x2 LC
x xo
Q  ¼ 1 ð11:28Þ jxL
xo x ffi :
1 þ jxRC  x2 LC
Solving 11.28, we get four solutions for x, The last expression can be written as
two of which are negative while the other two are
positive. Obviously, the latter are the acceptable 1
ones. These are given by (Problem 7) ZðjxÞ ¼ RC 1
ð11:33Þ
L þ jxC þ jxL
2sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 3
 2
1 1 which is of the form of Eq. 11.15, and hence all
x1;2 ¼ xo 4 1 þ  5: ð11:29Þ
2Q 2Q the results we have derived will apply. Notice
that at resonance, the impedance attains a maxi-
mum value, given by
Thus
xo L xo L 1
B ¼ x2  x1 ¼ : ð11:30Þ Zo ffi ¼
Q RC R xo C
ð11:34Þ
2
xo L 2

Thus, higher Q leads to a lower B and hence a ðxo LÞ
¼ ¼R ¼ RQ2 ;
sharper resonance. R R
In a general selective response curve of the
type of Fig. 11.5, one often defines the ‘sharp-
ness of resonance’ or ‘selectivity’ as the ratio
(frequency of maximum response)/(bandwidth).
Applied to the two cases of resonance discussed
here, this quantity, as is evident from Eq. 11.3, is
identical with Q.
Also note, from Eq. 11.29, that

ω₁ω₂ = ω₀²,   (11.31)

which shows that the resonance curve is geometrically symmetrical about ω₀, i.e. the response at ω = ω₀/a is the same as that at aω₀. This is also obvious from Eq. 11.22.

Fig. 11.6 A practical parallel resonant circuit
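The half-power relations just derived are easily verified numerically. In the sketch below (ω₀ and Q are assumed values), the cutoff frequencies are computed from Eq. 11.29 and checked against the defining condition |A(jω)| = 1/√2 of Eq. 11.22, against B = ω₀/Q (Eq. 11.30) and against ω₁ω₂ = ω₀² (Eq. 11.31):

```python
import numpy as np

w0, Q = 2 * np.pi * 1e6, 40.0          # assumed values

# Half-power frequencies from Eq. 11.29
s = np.sqrt(1 + 1 / (2 * Q) ** 2)
w2, w1 = w0 * (s + 1 / (2 * Q)), w0 * (s - 1 / (2 * Q))

# Check against the defining condition |A| = 1/sqrt(2), using Eq. 11.22
A = lambda w: 1.0 / (1.0 + 1j * Q * (w / w0 - w0 / w))
print(abs(A(w1)), abs(A(w2)))                    # both ~ 0.7071
print("B =", w2 - w1, "  w0/Q =", w0 / Q)        # Eq. 11.30
print("w1*w2 =", w1 * w2, "  w0^2 =", w0 ** 2)   # Eq. 11.31
```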

where Q is again xo L=R because R and L are in where


series. The condition xL  R assumed for the
1 1
approximations is usually taken to mean Q 10, Z ðjxÞ ¼ ¼ : ð11:36Þ
jxC þ R þ1jxL Y ðjxÞ
as a rule of thumb.
Another interesting resonant circuit is shown
in Fig. 11.7, which has the property that if Winding a coil of Q 10 at 1000 kHz does
pffiffiffiffiffiffiffiffiffi not pose a problem at all; thus from the results
R1 ¼ R2 ¼ L=C , then the impedance is R at all
derived under ‘other types of resonant circuits’
frequencies, i.e. this becomes an all-pass reso-
discussed earlier,
nant circuit (Problem 8).
1
xo ¼ 2p  106 ffi pffiffiffiffiffiffi ð11:37Þ
LC
An Example
and
In concluding this discussion, we illustrate the L
analysis and design of a resonant circuit of Zðjxo Þ ¼ Zo ffi ¼ RQ2 ð11:38Þ
RC
practical importance.
The fractional deviation in frequency corre-
Example 1 The voltage induced in a radio
sponding to x2 = 2p  1050  103 r/s is
receiver aerial may be approximated by a voltage
generator of internal resistance 2000 X and an 1050  1000 50
emf containing equal amplitudes of the frequen- d¼ ¼ ¼ 0:05  1;
1000 1000
cies 1000 and 1050 kHz. It is desired to tune the ð11:39Þ
receiver to the first frequency with the second
frequency discriminated at least by a factor of 2. so that we can apply Eq. 11.26. Thus
Design a resonant circuit of the form of Fig. 11.6
for the purpose. Z ðjx2 Þ 1
¼ : ð11:40Þ
Zo 1 þ j0:1Q
Solution: The overall circuit is shown in
Fig. 11.8. By superposition, the voltage V will be The condition of the problem demands that
the sum of the voltages developed due to the two

sources applied independently. Let us, therefore,


Vg Vg


1 þ R Y þ 1 þ R Y ðjx Þ
2: ð11:41Þ

consider the response due to a source Vg ∠0° of g o g 2


frequency x; then
Combining Eqs. 11.40 and 11.41 and simpli-
Vg Z ðjxÞ Vg fying we get
V¼ ¼ ; ð11:35Þ
Rg þ Z ðjxÞ 1 þ Rg Y ðjxÞ pffiffiffi
0:1Q
3: ð11:42Þ
1 þ Zo =Rg

Let
0:1Q
¼ 2: ð11:43Þ
1 þ Zo =Rg

Then
Zo
¼ :05 Q  1: ð11:44Þ
Rg

Fig. 11.7 A resonant circuit having resistances in both


parallel branches

Fig. 11.8 Circuit for


Example 1

Thus, we must have Q > 20. Let Q = 40; then 4. With V as the reference phasor, draw the
phasor diagrams for the circuit of Fig. 11.4
Zo ¼ Rg ¼ 2000 X ð11:45Þ for x\xo ; x ¼ xo and x [ xo where
pffiffiffiffiffiffi
xo ¼ 1= LC .
From Eq. 11.38, therefore 5. In a series resonant circuit, derive an
expression for the voltage across the capac-
R ¼ Zo =Q2 ¼ 2000=16; 000 ¼ 1:25 X ð11:46Þ
itor. Find the frequency at which this voltage
is a maximum; find also this maximum value.
Also, from Q = Q ¼ xo L=R, we have
6. Derive an expression for the bandwidth on
RQ 1:25  40 25 the basis of expression Eq. 11.26.
L¼ ¼ H ¼ lH: ð11:47Þ 7. Verify Eq. 11.29.
xo 2p  106 p
8. Analyze the circuit of Fig. 11.7 and derive
Hence, finally, from Eq. 11.37, the conditions for all-pass resonance.
9. A medium wave broadcast receiver spans
1 1 the range 570–1560 kHz, with tuning
C¼ ¼ 2 F
xo L 4p  10  25
2 12
p  10
6 accomplished by a series resonant circuit
1 using an air variable capacitor, ranging from
¼ lF ð11:48Þ
100p 3 to 500 pF. What value of inductance is
needed? At 570 kHz, the capacitor voltage
and the design is complete. is desired to be 10 times the signal picked
up by the aerial. What is the total resistance
in the tuned circuit? What will be the signal
Some Problems multiplication factor at 1560 kHz? What are
the bandwidths of the circuit when tuned at
If you have understood this chapter, then you 570 and 1560 kHz?
should be able to work out the following prob- 10. A signal generator produces a fundamental
lems. Try them. at 1 kHz of 1 V amplitude and its second
and third harmonics at 0.5 and 0.3 V
1. Show that for a capacitor, represented by an amplitudes respectively. It is required to
equivalent circuit consisting of a pure suppress each harmonic to less than 1% of
capacitance C in parallel with a resistance the fundamental. Design a suitable circuit
R, the Q is given by Q ¼ xCR. for the purpose.
2. Show that for a series RC circuit, the Q is 11. Determine the frequency of resonance for
given by Q ¼ 1=ðxCRÞ. the circuit of Fig. 11.7 exactly, and the
3. Show that for a parallel RL circuit, the Q is value of the impedance at resonance. Also
given by Q ¼ R=ðxLÞ. find the condition for maximum impedance.

12. A 1 V, 1 MHz, 2500 X internal resistance Bibliography


source is to deliver maximum power to a
load of 1 X. Explain how this can be 1. F.F. Kuo, Network Analysis and Synthesis (Wiley,
achieved using resonance. New York, 1966)
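Before leaving this chapter, the design arrived at in Example 1 can be checked numerically. The sketch below uses the values obtained there (Rg = 2000 Ω, R = 1.25 Ω, L = 25/π µH, C = 1/(100π) µF) and verifies that the 1050 kHz component is attenuated, relative to the 1000 kHz component, by a factor of at least 2:

```python
import numpy as np

# Element values arrived at in Example 1
Rg = 2000.0
R  = 1.25
L  = 25e-6 / np.pi
C  = 1e-6 / (100 * np.pi)

def response(f):
    w = 2 * np.pi * f
    Z = 1.0 / (1j * w * C + 1.0 / (R + 1j * w * L))   # circuit of Fig. 11.6, cf. Eq. 11.36
    return abs(Z / (Rg + Z))                          # |V/Vg|, cf. Eq. 11.35

f0, f2 = 1000e3, 1050e3
print("response at 1000 kHz :", response(f0))
print("response at 1050 kHz :", response(f2))
print("discrimination factor:", response(f0) / response(f2))   # should be >= 2
```

The printed discrimination factor comes out slightly above 2, meeting the specification of the example.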
12 The Many Faces of the Single-Tuned Circuit

It is shown that the simple single-tuned circuit is capable of performing a variety of filtering functions and that it can be analyzed graphically for obtaining the relevant performance parameters.

Keywords
Single-tuned circuit · Low-pass · High-pass · Band-pass

Figure 12.1 shows the simple single-tuned RLC series circuit under consideration. In the usual textbooks, the circuit is analyzed for its behaviour at and near resonance, where resonance implies the in-phase condition of V1 and I. That it is capable of performing a variety of filtering functions, depending on how the output is chosen, is not generally discussed. Also, except for Kuo [1], who treated one of the configurations graphically, it is hard to find a reference in which graphical construction is used to find the major performance indices of the circuit. In this chapter, we bring out the versatility of the circuit and extend the graphical analysis of [1] to other configurations.

Notations: First Things First

In the analysis of the various configurations of Fig. 12.1, we shall adopt the following notations, which are more or less standard, and illustrated in Fig. 12.2:

p1, p1* = complex conjugate poles of the network function
−a = −R/(2L) = real part of either pole
b = {[1/(LC)] − [R²/(4L²)]}^(1/2) = imaginary part of either pole
ζ = (R/2)(C/L)^(1/2) = damping factor
θ = tan⁻¹(b/a) = cos⁻¹ζ
ωn = 1/(LC)^(1/2) = (a² + b²)^(1/2) = undamped natural frequency
ωm = frequency of maximum response
M1(ω), M2(ω) = vectors drawn from the poles to an arbitrary point jω
ψ = angle between M1(ω) and M2(ω)
ω2,1 = upper and lower 3-dB cutoff frequencies
* denotes complex conjugate

The Possible Configurations

The possible configurations in which the circuit of Fig. 12.1 can be used differ from each other in the location of the output and are shown in Fig. 12.3. Let Hx denote the transfer function V2/V1 of

Source: S. C. Dutta Roy, “The Many Faces of the Single-Tuned Circuit,” IETE Journal of Education, vol. 41, pp. 101–104, July–December 2000.

Fig. 12.1 The circuit under consideration

Fig. 12.3, where x = a, b, c, d, e or f. Then, the variety of transfer functions obtained is as follows:

Ha(s) = ωn²/(s² + 2ζωn s + ωn²)   (12.1a)
Hb(s) = s²/(s² + 2ζωn s + ωn²)   (12.1b)
Hc(s) = 2ζωn s/(s² + 2ζωn s + ωn²)   (12.1c)
Hd(s) = (s² + ωn²)/(s² + 2ζωn s + ωn²)   (12.1d)
He(s) = (2ζωn s + ωn²)/(s² + 2ζωn s + ωn²)   (12.1e)
Hf(s) = (s² + 2ζωn s)/(s² + 2ζωn s + ωn²)   (12.1f)

Fig. 12.2 Illustrating the notations used in the chapter

The first four transfer functions are basically those of low-pass, high-pass, band-pass and band-stop filters, respectively. However, if ζ is small, Ha, as well as Hb, may be used as band-pass filters. The transfer functions He and Hf represent mixed-type filters and are not of much interest; hence, they will not be considered any further in this chapter.
We shall analyze each configuration in a conventional manner and follow it up with graphical constructions and interpretations. While only the final results are of interest, the use of graphics gives much more physical meaning than is possible by using routine algebra and calculus.

The Low-Pass Configuration

Figure 12.1a represents a low-pass configuration with unity DC response and zero response at infinite frequency. In between these two extremes, the response may be monotonically decreasing or may show a maximum. By putting s = jω in (12.1a) and taking the magnitude, we get

|H(jω)| = ωn²/√[(ωn² − ω²)² + 4ζ²ωn²ω²].   (12.2)

Fig. 12.3 Various possible (a) R L (b) R


configurations of the
+ +
single-tuned circuit C
+ +
V1 C V2 V1 V2
– – L

– –

(c) (d)
R
L C + +
+ + C
V1 R V2 V1 V2
– –
L
– –

(e) L
(f)
C

C + +
L
+ +
V1 V1
V2 V2
– R –
R
– –

By differentiating Eq. 12.2 with respect to ω and setting the result to zero, one finds that the maximum response occurs at the frequency ωma, where

ωma = √(b² − a²) = ωn√(1 − 2ζ²).   (12.3)

Obviously, the maximum will exist if ζ < 1/√2, which is equivalent to b > a or θ > π/4. For ζ = 1/√2, the response will be maximally flat (MF), while for ζ > 1/√2, the response will lie below the MF curve, as shown in Fig. 12.4. The maximum response is obtained by combining Eqs. 12.2 and 12.3 and is given by

Hma = 1/[2ζ√(1 − ζ²)].   (12.4)

Fig. 12.4 Magnitude response of the low-pass configuration of Fig. 12.1 for various values of ζ

If Hma is greater than √2, which occurs for ζ < 0.383, then there will exist two 3 dB cutoff frequencies ω2a and ω1a; otherwise there will exist only one upper cutoff frequency ω2a. These frequencies can be calculated by solving the equation obtained by equating Eq. 12.2 to Hma/√2. After a considerable amount of algebra, we get

ω2a, ω1a = ωn[1 − 2ζ² ± 2ζ√(1 − ζ²)]^(1/2).   (12.5)
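The closed-form results in Eqs. 12.3–12.5 can be cross-checked against a direct numerical search on Eq. 12.2. The sketch below (ωn normalized to 1, ζ assumed) does this, and also evaluates the product ω1a·ω2a, anticipating the remark that follows:

```python
import numpy as np

wn, zeta = 1.0, 0.25          # assumed; zeta < 0.383, so both cutoffs exist

H = lambda w: wn**2 / np.sqrt((wn**2 - w**2)**2 + 4 * zeta**2 * wn**2 * w**2)  # Eq. 12.2

w = np.linspace(1e-4, 3 * wn, 400001)
mag = H(w)

w_ma = wn * np.sqrt(1 - 2 * zeta**2)                 # Eq. 12.3
H_ma = 1 / (2 * zeta * np.sqrt(1 - zeta**2))         # Eq. 12.4
print("peak frequency: search =", w[np.argmax(mag)], "  Eq. 12.3 =", w_ma)
print("peak value:     search =", mag.max(), "  Eq. 12.4 =", H_ma)

rad = 2 * zeta * np.sqrt(1 - zeta**2)
w2a = wn * np.sqrt(1 - 2 * zeta**2 + rad)            # Eq. 12.5
w1a = wn * np.sqrt(1 - 2 * zeta**2 - rad)
print("3-dB check:", H(w1a), H(w2a), "  target =", H_ma / np.sqrt(2))
print("w1a*w2a =", w1a * w2a,
      "  wn^2*sqrt(1-8z^2+8z^4) =", wn**2 * np.sqrt(1 - 8*zeta**2 + 8*zeta**4),
      "  w_ma^2 =", w_ma**2)
```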

Note that is easily shown from the right-angled triangle O-


qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi jxma-C that xma is indeed given by Eq. 12.3.
x2a ; x1a ¼ x2n 1 8f2 þ 8f4 6¼ x2ma ð12:6Þ Now if an arc is drawn with A as the centre and
pffiffiffi
Ap1 = 2b as the radius, then its intersection
in contrast to what Kuo claims in [1], p. 236. with the positive jx-axis occurs at jx2a, x2a being
At this point, it is instructive to bring in Kuo’s the upper cutoff frequency, because the value of x
graphical analysis of the configuration of there is p/4, being half of p/2, the angle subtended
Fig. 12.1a. Note that the area of the shaded tri- at the centre A by the arc from p1 to p*1. It can be
angle in Fig. 12.2 is given by ba as well as (1/2)| verified by considering the right-angled triangle
M1||M2| sin x so that O-jx2a-A that x2a is indeed given by Eq. 12.5.
Kuo [1], at this point, claimed that the lower
jM1 jjM2 j ¼ 2ba= sin w ð12:7Þ cutoff frequency x1a is given by x2ma/x2a, which,
as we have shown in Eq. 12.6, is not true. How-
Now the magnitude response of Ha can be ever, a graphical construction is also possible for
written as finding x1a, as shown by Martinez [2]. Draw an
arc with B as the centre and Bp1 as the radius. The
jHa j ¼ x2n =jM1 jjM2 j ¼ ½x2n =ð2baފ sin w point of its intersection with the positive jx-axis is
ð12:8Þ jx1a, because the value of x there is (p/2) + (p/4)
pffiffiffi
i.e. sin x is again 1= 2. From the right-angled
Thus, the variation of |Ha| with x is consoli- triangle O-jx1a-B, it can now be verified that x1a
dated in the variation of the angle w with x. Now is indeed given by Eq. 12.5.
refer to Fig. 12.5, where p1Ap*1B is the so-called
peaking circle of radius b, centred at C. Its
intersection with the jx-axis previously occurs at The High-Pass Configuration
jxma (xma being the frequency of maximum
response) because the value of w at this point is The configuration of Fig. 12.1b is the dual of
the maximum, equal to p/2, being the angle Fig. 12.1a, because its dc response is zero while
subtended by a diameter on the circumference. It at infinite frequency, Hb becomes unity. By fol-
lowing the same procedure as in the low-pass
case, it can be shown that the maximum response
occurs at the frequency xmb, where
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
xmb ¼ xn = 1 2f2 ð12:9Þ

Comparing Eqs. 12.3 and 12.9, we see that


xma xmb = x2n i.e. xma and xmb are geometri-
cally symmetrical about xn. In fact, xmb can be
obtained by the graphical construction shown in
Fig. 12.6. Mark the points −xma and −xn, on the
negative real axis. Join −xma to jxn, thereby
creating the angle x. Construct the same angle x
at −xn. The intersection of the new line with the
positive jx-axis gives jxmb, because

tan u ¼ xn =xma ¼ xmb =xn ð12:10Þ

Fig. 12.5 Graphical analysis of the low-pass configura-


tion of Fig. 12.1

  
x xn
Hc ðjxÞ ¼ 2fxn 2fxn þ jxn
xn x
ð12:13Þ
It is clear that the magnitude characteristic
will have true geometric symmetry about xn,
which is also the frequency of maximum
response. The maximum response, in this case, is
unity, and by the usual procedure, the 3 dB
cutoff frequencies are obtained as
qffiffiffiffiffiffiffiffiffiffiffiffi 
2
x2c ; x1c ¼ xn 1þf  f ð12:14Þ

Fig. 12.6 Graphical construction for obtaining xmb from x Clearly, x2c ; x1c ¼ x2n ; which is a reflection
of the geometric symmetry of the transfer func-
tion Hc(jx).
A graphical construction for finding x2c and
It is easily shown that Hmb, the maximum
x1c is shown in Fig. 12.7. With the origin as the
response of |Hb(jx)| is the same as Hma, as given
centre and Op1 (=xn) as the radius, draw the
by Eq. 12.4.
circle p1Ap*1B. It cuts the positive jx-axis at C
Continuing the analysis, the 3 dB cutoff
which is jxn. Since OD is fxn, the distance DC
frequencies x2b and x1b can be obtained by
pffiffiffi
pffiffiffiffiffiffiffiffiffiffiffiffi
must be xn 1 þ f2 . With D as the centre and
equating |Hb(jx)| to Hmb 2. After consider-
DC as the radius, draw the circle ECFG. Now the
able algebra, one obtains the following pffiffiffiffiffiffiffiffiffiffiffiffi
result: distance OE is xn 1 þ f2 þ f , while the
pffiffiffiffiffiffiffiffiffiffiffiffi
 qffiffiffiffiffiffiffiffiffiffiffiffiffi1=2 distance OF is xn 1 þ f2 f . Hence,
x2b ; x1b ¼ xn = 1 2f2  2f 1 f2 drawing two arcs with OE and OF as radii will
cut the positive jx-axis at jx2c and jx1c, respec-
ð12:11Þ tively, as shown in Fig. 12.7.

Comparing Eqs. 12.5 and 12.11, we note that

x2b x1a ¼ x2a x1b ¼ x2n ð12:12Þ

Thus, following the same procedure as


depicted in Fig. 12.6, one can find x2b and x1b
graphically from Fig. 12.5.
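The geometric-symmetry relations of Eqs. 12.9–12.12 are easily confirmed numerically. In this sketch (ωn and ζ are assumed values), ωmb and the high-pass cutoff frequencies are computed from the closed forms of Eqs. 12.5, 12.9 and 12.11, and the products are compared with ωn²:

```python
import numpy as np

wn, zeta = 1.0, 0.3                                   # assumed values (zeta < 0.383)

w_ma = wn * np.sqrt(1 - 2 * zeta**2)                  # Eq. 12.3
w_mb = wn / np.sqrt(1 - 2 * zeta**2)                  # Eq. 12.9
rad = 2 * zeta * np.sqrt(1 - zeta**2)
w2a, w1a = wn * np.sqrt(1 - 2*zeta**2 + rad), wn * np.sqrt(1 - 2*zeta**2 - rad)   # Eq. 12.5
w2b, w1b = wn / np.sqrt(1 - 2*zeta**2 - rad), wn / np.sqrt(1 - 2*zeta**2 + rad)   # Eq. 12.11

print("w_ma*w_mb =", w_ma * w_mb)                                  # = wn**2
print("w2b*w1a   =", w2b * w1a, "  w2a*w1b =", w2a * w1b)          # both = wn**2, Eq. 12.12
```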

The Band-pass Configuration

The configuration of Fig. 12.1c, characterized by


the transfer function Hc of (1c), is a true band-pass
filter because of its response at dc, as well as Fig. 12.7 Graphical analysis of the bandpass configura-
infinite frequency, are zero. By writing Hc(jx) as tion of Fig. 12.1c

The Band-stop Configuration Problems

By looking at the circuit of Fig. 12.1d or its P:1. In a single-tuned circuit, if the output is
transfer function Hd, it is clear that it will give taken across the series combination of R and
null transmission at the frequency xn and that the L, what kind of frequency response will you
dc and infinite frequency responses would be obtain? Sketch it.
unity. To determine the 3 dB cutoff frequencies P:2. If L and C are in parallel in an RLC circuit
x2d and x1d, we let and the output is taken across L, what will
be the frequency response? Sketch it.
ðx2 x2n Þ2 1 P:3. You require two poles in the frequency
¼ ð12:15Þ
ðx2 x2n Þ2 þ 4f 2
x2 x2n 2 response and are supplied with three reactive
elements. Draw an appropriate circuit and
and obtain the same values as those given by sketch its frequency response. Comment on
Eq. 12.14. Hence, the graphical construction of the d.c and infinite frequency responses.
Fig. 12.7 works for this circuit also except that P:4. Same as above except that you require two
xn is now the frequency of rejection and not of nulls in the frequency response.
maximum response. P:5. If you require three nulls, what is the min-
imum number of reactances needed? Draw
an appropriate circuit and comment on the
Conclusion d.c. and infinite frequency responses. Also,
comment on the height of the peaks.
We have shown in this chapter, how a relatively
simple circuit like that of series RLC combination
can be used to illustrate various circuit concepts
like poles, zeros and their effects on the fre- References
quency response; filtering of various kinds;
1. F.F. Kuo, Network Analysis and Synthesis (Wiley,
graphical analysis; geometric symmetry, etc.
1966)
This should be of interest to teachers as well as 2. J.R. Martinez, Graphical solution for 3 dB points.
students of circuit theory. Electron. Eng., 48–51 (January 1967)
13 Analyzing the Parallel-T RC Network

Following a review of the various alternative methods available for analyzing the parallel-T RC network, we present yet another conceptually elegant method. This discussion illustrates the famous saying of Ramakrishna Paramhansa: as many religions, as many ways. Don't just grab one method; learn all of them and decide for yourself which one you find to be the simplest.

Keywords
Parallel-T network · Mesh analysis · Node analysis · Two-port method · Splitting the parallel-T

The parallel-T RC network shown in Fig. 13.1a has fascinated many circuits researchers, including me [1–9]. It has many applications, foremost among them being notch filtering [10, 11], active band-pass filtering [12, 13], measurements [14, 15], compensation in control systems [16, 17], FM detection [18], and sine-wave generation [19]. Various methods are available in the literature for analyzing this network, a review of which is given in this chapter. This is followed by yet another method, which is conceptually elegant and is believed to be new.
For illustrating the various methods of analysis, we have used the symmetrical configuration shown in Fig. 13.1b, for simplicity. It should, however, be emphasized that the method of analysis does not depend upon the composition of the individual arms, each of which could as well be a general RLC impedance.

Mesh Analysis

The network of Fig. 13.1b has been redrawn in Fig. 13.2 in order to clearly indicate one choice of four independent meshes. Following standard procedure, the four mesh equations can be written in matrix form as follows:

Source: S. C. Dutta Roy, “Analyzing the Parallel-T RC Network,” IETE Journal of Education, vol. 44, pp. 111–116, July–September 2003.


[ R + 1/(2sC)    −1/(2sC)            −R              0                  ] [I1]   [Vi]
[ −1/(2sC)       R + RL + 1/(2sC)    −R              −RL                ] [I2] = [0 ]
[ −R             −R                  2R + 2/(sC)     −1/(sC)            ] [I3]   [0 ]
[ 0              −RL                 −1/(sC)         R/2 + RL + 1/(sC)  ] [I4]   [0 ]   (13.1)

Fig. 13.1 a The general parallel-T network; b the symmetrical form

Since

V0 = (I2 − I4) RL,   (13.2)

we need to evaluate I2 and I4 only. If Δ denotes the determinant of the 4 × 4 mesh impedance matrix in Eq. 13.1 and Δij its cofactors, then

I2 = Vi Δ12/Δ and I4 = Vi Δ14/Δ,   (13.3)

so that from Eq. 13.2, the voltage transfer function is obtained as

T = V0/Vi = (Δ12 − Δ14) RL/Δ = (M14 − M12) RL/Δ,   (13.4)

where Mij is the minor of Δij, i.e. Δij = (−1)^(i+j) Mij. Evaluating Δ and its minors, and simplifying Eq. 13.4, will require pages of calculations. However, if it is done with care, one ends up with the following expression:

T = (p² + 1)/[p² + 2(2 + r)p + 1 + 2r],   (13.5)

where we have used the following notations:

p = sCR and r = R/RL.   (13.6)
p ¼ sCR and r ¼ R=RL : ð13:6Þ

I3
C Node Analysis
I4
R R Refer to Fig. 13.1b again where all node voltages
+ +
have been identified. The node equations for
Vi I1 2C I2 V0 RL R/2
– V1, V2 and V0 can be written in the following

matrix form:
2 32 3
Fig. 13.2 Redrawn form of Fig. 13.lb for mesh analysis 2ðG þ sCÞ 0 G V1
4 0 2ðG þ sCÞ sC 54 V 2 5
2G 3 sC sC þ G þ GL V0
Since GVi
V0 ¼ ðI2 I4 ÞRL ; ð13:2Þ ¼ 4 sCVi 5;
0
we need to evaluate I2 and I4 only. If D denotes ð13:7Þ
the determinant of the 4  4 mesh impedance
matrix in Eq. 13.2 and Dij its cofactors, then where

G ¼ 1=R and GL ¼ 1=RL : ð13:8Þ Finally, the transfer function is

Denoting the determinant of the 3  3 matrix on T ¼ y21 =ðy22 þ GL Þ


the left of Eq. 13.7 by D, we get ¼ ðp2 þ 1Þ=½p2 þ 4p þ 1 þ 2RGL ðp þ 1ފ;
ð13:14Þ
V0 ¼ ðGD013 þ sCD023 Þ=Vi =D0 ð13:9Þ
which is identical to Eq. 13.5 because RGL = r.
so that
0
T ¼ ðGM13 0
sCM23 Þ=D0 : ð13:10Þ
Analysis by Miller’s Equivalence
0
The evaluation of D, M13 and M23 is a much
easier task than that in mesh analysis. It is not Refer to Fig. 13.1b again. By Miller’s theorem,
difficult to show that simplifying the resulting this circuit is equivalent to that shown in
expression gives the same result as in Eq. 13.5. Fig. 13.3a
where

Two-Port Method Y1 ¼ i1 =Vi and Y2 ¼ i2 =V0 : ð13:15Þ

The network of Fig. 13.1b is the parallel con- Since Y1 occurs across the input voltage
nection of two T-networks, viz. A: R–2C–R and source, it does not affect the transfer function.
B: C–R/2–C, which is terminated in RL. The We, therefore, have to find Y2 only. Obviously,
z-parameters of T-networks are as follows:
i2 ¼ sCðV0 V2 Þ: ð13:16Þ
z11A ¼ z22A ¼ R þ ½1=ð2sCފ;
To find V2, write the node equation at this
z12A ¼ z21A ¼ 1=ð2sCÞ; node as follows:
ð13:11Þ
z11B ¼ z22B ¼ ðR=2Þ þ ½1=ðsCފ;
and z12B ¼ z21B ¼ R=2: ð2sC þ 2GÞV2 ¼ sCVi þ sCV0 ð13:17Þ

The corresponding y-parameters can be found or,


from the conversion formulas [20] as
V2 ¼ sCðVi þ V0 Þ=½2ðsC þ Gފ: ð13:18Þ
y11A ¼ y22A ¼ ð2p þ 1Þ=½2Rðp þ 1ފ;
Combining Eq. 13.15 with Eqs. 13.16 and
y12A ¼ y21A ¼ 1=½2Rðp þ 1ފ; 13.18, we get
y11B ¼ y22B ¼ pðp þ 2Þ=½2Rðp þ 1ފ; and
1
y12B ¼ y21B ¼ p2 =½2Rðp þ 1ފ: Y2 ¼ p½ð1 T Þp þ 2Š=½2Rðp þ 1ފ: ð13:19Þ
ð13:12Þ Although Y2 involves T which we wish to find, one
should not be worried. As you would see, we shall
Thus, the overall y-parameters are the following:
find T in terms of T–1 and then by cross multiplying
and simplifying, we shall find an explicit expres-
y11 ¼ y22 ¼ ðp2 þ 4p þ 1Þ=½2Rðp þ 1ފ and y12
sion for T. By applying Thevenin’s theorem to the
¼ y21 ¼ ðp2 þ 1Þ=½2Rðp þ 1ފ:
left of the XX′ line in Fig. 13.3a, we get the
ð13:13Þ equivalent circuit shown in Fig. 13.3b. Thus

X
(a) (b)
i1 R i2 R
R R
+ + + +
Vi 2p + 1
Vi Y1 2C Y2 RL V0 2p + 1 Y2 + GL V0

Fig. 13.3 a Miller’s equivalent of the network of Fig. 13.1b; b Equivalent circuit of Fig. 13.3a obtained by using
Thevenin’s theorem

T ¼ V0 =Vi Simplifying Eq. 13.22 and taking the ratio


T gives the same result as Eq. 13.5.
¼ ½1=ð2p þ 1ފ: ½1=ðY2 þ GL ފ=f½1=ðY2 þ GL ފ :
þ R þ ½R=ð2p þ 1ފg
ð13:20Þ Yet Another Method
Combining Eqs. 13.20 with 13.19, and simpli-
Look at Fig. 13.4a again. Instead of applying
fying, we get
Thevenin’s theorem, let us apply ladder analysis
method [20] starting from RL and going to the
T ¼ 1=fp½ðl T 1 Þp þ 2Š þ 2ðp þ 1ÞRGL þ 2p þ 1g:
left, and again starting from RL and going to the
ð13:21Þ right. In order not to clutter Fig. 13.4a, we have
redrawn the circuit in Fig. 13.5, where all branch
Now cross multiply and simplify. The final result
currents have been identified. At node V0,
is the same as Eq. 13.5.
I1 þ I2 ¼ IL ¼ V0 GL : ð13:23Þ
Splitting the T’S
Going to the left, we get
Let the C–R/2–C T-network in Fig. 13.1b be
V1 ¼ RI1 þ V0 ; ð13:24aÞ
separated at the input side, turned through 180°
and be terminated in another voltage source Vi. I3 ¼ 2sCV1 ¼ 2pI1 þ 2pGV0 ; ð13:24bÞ
The result is shown in Fig. 13.4a, which would
be completely equivalent to Fig. 13.1b because I5 ¼ I1 þ I3 ¼ ð2p þ 1ÞI1 þ 2pGV0 ð13:24cÞ
potentials at all the nodes have been preserved.
and
Now apply Thevenin’s theorem to the left of XX′
and to the right of YY′ to get the equivalent circuit Vi ¼ RI5 þ V1 ¼ 2ðp þ lÞRI1 þ ð2p þ lÞV0 :
shown in Fig. 13.4b [8]. Next, write the node
ð13:24dÞ
equation at the load as follows:

V0 ½Vi =ð2p þ 1ފ V0 V0 ½Vi p=ðp þ 2ފ From the last equation, we have
þ þ
R þ ½R=ð2p þ 1ފ RL ðR=pÞ þ ½R=ðp þ 2ފ I1 ¼ ½Vi ð2p þ lÞV0 Š=½2ðp þ 1ÞRŠ: ð13:25Þ
¼ 0:
ð13:22Þ

Fig. 13.4 a Spread out X Y


version of Fig. 13.1b by (a)
V1 V0 V2
splitting the two T’s;
b Equivalent circuit of R R C C
Fig. 13.4a obtained by two +
applications of Thevenin’s Vi 2C +
RL R/2 Vi
theorem –

X Y

(b) R/(2p + 1) V0 R/(p + 2)

R C
+
+
Vi /(2p + 1) RL
– pVi /(p + 2)

Fig. 13.5 Fig 13.4a circuit I5 I1 I2 I6


redrawn to illustrate the new V1 V0 V2
method
R R C C
+
Vi 2C +
RL R/2 Vi

I3 IL I4 –

Now start from V0 and go to the right. This is Now combine Eqs. 13.23, 13.25 and 13.27 to get
what we get:
ðp2 þ 1ÞVi ðp2 þ 4p þ 1ÞV0 ¼ 2ðp þ lÞRGL V0 :
V2 ¼ ½I2 =ðsCފ þ V0 ¼ ðR=pÞI2 þ V0 ; ð13:26aÞ ð13:28Þ
I4 ¼ 2GV2 ¼ ð2=pÞI2 þ 2GV0 ; ð13:26bÞ
Simplifying Eq. 13.28 gives the same result as
I6 ¼ I4 þ I2 ¼ ½ð2=pÞ þ 1ŠI2 þ 2GV0 ð13:26cÞ Eq. 13.5.

and
Conclusion
Vi ¼ ½1=ðsC ފI6 þ V2
¼ ð2R=pÞ½ð1=pÞ þ 1ŠI2 þ ½ð2=pÞ þ 1ŠV0 : In this chapter, we have discussed six different
ð13:26dÞ methods for analyzing the parallel-T RC net-
work. Of these, mesh analysis requires more
The last equation gives effort than any other method. The node analysis
comes next in terms of computational effort. The
I2 ¼ ½p2 Vi pðp þ 2ÞV0 Š=½2ðp þ 1ÞRŠ: ð13:27Þ efforts needed in the two-port method and the

method using Miller’s equivalence are compa- 4. S.C. Dutta Roy, The definition of Q of RC networks.
rable and can be bracketed to occupy the joint Proc. IEEE. 52, 44, (1964)
5. D.G.O. Morris & S.C. Dutta Roy, Q and selectivity.
third position in terms of decreasing computa- Proc. IEEE. 53, 87–89, (1965)
tional effort. Splitting the T’s is common to the 6. S.C. Dutta Roy, Dual input null networks, Proc.
last two methods—one using Thevenin’s theo- IEEE. 55, 221–222, (1967)
rem, and the other using ladder analysis tech- 7. S.C. Dutta Roy & N. Choudhury, An application of
dual input networks. Proc. IEEE. 58, 847–848,
nique. Both are conceptually elegant and require (1970)
almost the same amount of effort. They, there- 8. S.C. Dutta Roy, A quick method for analyzing
fore, qualify for the joint fourth position in the parallel ladder networks. Int. J. Elect. Eng. Educ. 13,
list; of these, the last method does not seem to 70–75, (1976)
9. S.C. Dutta Roy, Miller’s theorem revisited Circ. Syst.
have appeared earlier in the literature and is Signal Process. 19, 487–499, (2000)
therefore believed to be new. 10. L. Stanton, Theory and applications of the parallel-
T resistance capacitance frequency selective network.
Proc. IRE. 34, 447–456 (1946)
11. A.E. Hastings, Analysis of the resistance capacitance
Problems parallel-T network and applications. Proc. IRE. 34,
126–129 (1946)
P:1. What kind of transfer function do you get if, 12. H. Fleischer, Low frequency feedback amplifiers, in
in Fig. 13.1a R3 ! ∞ and C3 = 0? Vacuum Tube Amplifiers, ed. by G.E. Valley Jr., H.
Wallman, McGrawHill, (1948, Chapter 10)
P:2. Same, if in Fig. 13.1b, R/2 ! ∞ and 13. C.K. Battye, A low frequency selective amplifier.
2C = 0 in the shunt branches? J. Sci. Inst. 34, 263–265 (1957)
P:3. Find the two-port parameters of the circuit 14. W.N. Tuttle, Bridged-T and parallel-T null networks
of P.1 and hence the transfer function. for measurements at rf. Proc. IRE. 28, 23–30 (1940)
15. K. Posel, Recording of pressure step functions of low
P:4. Same for the circuit of P.2. amplitude by means of composite dielectric capaci-
P:5. Apply Miller to P.3 and P.4 and verify that tance transducer in parallel-T network. Amer. Rocket
you get the same transfer functions. Soc. J. 21, 1243–1251 (1961)
16. A.B. Rosenstein, J. Slaughter, Twin T compensation
using root locus method. AlEE Trans, Part II
(Applications and Industry) 81, 339–350 (1963)
17. A.C. Barker, A.B. Rosenstein, s-plane synthesis of
References the symmetrical twin-T network. IEEE. Trans. Appl.
Indus. 83, 382–388 (1964)
18. J.R. Tillman, Linear frequency discriminator. Wirel.
1. S.C. Dutta Roy, A twin-tuned RC network. Ind. Eng. 23, 281–286 (1946)
J. Phys. 36, 369–378 (1962) 19. A.P. Bolle, Theory of twin-T RC networks and their
2. S.C. Dutta Roy, On the design of parallel-T resistance applications to oscillators. J. Brit. IRE. 13, 571–587
capacitance networks for maximum selectivity. (1953)
J. Inst. Telecommun. Eng. 8, 218–233, (1962) 20. F.F. Kuo, Network Analysis and Synthesis (John
3. S.C. Dutta Roy, Parallel-T RC networks: limitations of Wiley, 1966, Chapter 9)
design equations and shaping the transmission char-
acteristic. Ind. J. Pure Appl. Phys. 1, 175–181, (1963)
14 Design of Parallel-T Resistance–Capacitance Networks for Maximum Selectivity

A simple analysis is presented for obtaining an expression for the transfer function, and hence the selectivity, QT, of a general parallel-T resistance–capacitance network. The maximum value of QT obtainable by a suitable choice of elements is shown to be ½. A design procedure for approaching this maximum value is given. An expression for the selectivity, QA, of an amplifier using a general parallel-T resistance–capacitance network in the negative feedback line has been deduced, and the advantages of having an increased QT explained. The parallel-T RC is an important network and you cannot do without it, if you wish to remain in circuit design. An expression has been given for estimating the departure from linearity of the amplitude response characteristic at a particular frequency. This is used to find an optimum value of QT for the best performance of the network as an F.M. discriminator at low frequencies. It is shown that the required value of QT is very near its maximum value.

Keywords
Parallel-T RC network · Selectivity · Selective amplifier

Introduction

In the low-frequency range, an inductance–capacitance tuned circuit is seldom used as a frequency-selective network because of the following disadvantages: (a) the large physical size of the inductor requires space and makes the equipment bulky, (b) an inductor of large value is expensive and (c) the value of Q obtainable is low. A resistance–capacitance network is a better choice for all these considerations. Of all such networks, the parallel-T RC network is the most extensively used one.
Much work has been done on the symmetrical configuration of the parallel-T RC network and a fairly impressive list of references is available on its theory and applications. The general asymmetrical configuration of the network has received less attention, important work in this line being due to Stanton [1], Wolf [2] and Oono [3]. Stanton [1] has given an expression for the transfer function of a general parallel-T RC network in which the components occur as the ratios

(a series arm resistance or reactance)/(total series arm resistance or reactance)   (14.1)

Source: S. C. Dutta Roy, “Design of Parallel-T Resistance–Capacitance Networks for Maximum Selectivity,” Journal of the Institution of Telecommunication Engineers, vol. 8, pp. 218–223, September 1962.


and By itself, the parallel-T RC network behaves


as a rejection filter. A resonance characteristic,
ðtotal series arm reactance) similar to that of a tuned amplifier, can be
:
ðtotal series arm resistance) obtained by using it as the feedback network of
an amplifier which has an odd number of stages.
A simpler expression is deduced in this chapter Fleischer [7] has shown that by using a sym-
by assuming the series arm impedances to be metrical configuration of the network, the maxi-
arbitrary multiples of the shunt arm impedances. mum value of Q obtainable is approximately Go/
Using Morris’s [4] definition of Q of resistance– 4, where Go is the open-loop gain of the ampli-
capacitance networks, an expression is deduced fier. In this chapter, an expression has been
for the selectivity, QT, of the general parallel-T deduced for the selectivity, QA, of an amplifier
network. It is shown that the maximum value of using a general parallel-T network in the negative
QT is 12, which is in conformity with Wolf’s [2] feedback line and the advantages of having an
result. A design procedure is then suggested for increased value of QT are explained.
such networks, which is more general than that
given by Wolf [2]. Oono’s [3] work is an
extension of that of Stanton [1] for the case when Network Configuration
the effects of the source and the load impedances and Simplification
are not negligible. Throughout this chapter,
however, the source impedance has been A parallel-T RC network with completely arbi-
assumed to be negligible and the load impedance trary values of the elements is shown in
has been assumed to be infinite. Fig. 14.1, where R is a resistance parameter, C a
In its application in an F.M. discriminator [5, capacitance parameter and m1, m2, m3, n1, n2 and
6], it is desired that the amplitude transfer func- n3 are numerical constants. In the conventional
tion should have a linear variation with fre- symmetrical configuration,
quency. An expression is given in this chapter for
estimating the departure from linearity of the m1 ¼ m2 ¼ n1 ¼ n2 ¼ 1; m3 ¼ 2=k and
amplitude transfer characteristic at a particular n3 ¼ 1=ð2kÞ;
frequency. This is used to find an optimum value
of QT for best performance of a single parallel-T where the parameter k controls the selectivity of
network in the above application. It is shown that the transfer characteristic and has a value of unity
in the frequency range of interest, the required for maximum Q equal to 14.
value of QT is very near to its maximum value. Without any loss of generality, we can assume
m3 = n3 = 1. Each of the two tees in Fig. 14.1
can be converted to an equivalent pi-network.
m1C m2C For the two networks in Fig. 14.2 to be equiva-
lent, the elements should be related as follows:

ZA ¼ R=Z2 ; ZB ¼ R=Z3 ; ZC ¼ R=Z1 ; ð14:1Þ


n1R n2R

m3C where R = Z1Z2 + Z2Z3 + Z3Z1. Employing Eq. 14.1


IN n3R OUT
and denoting by subscripts 1 and 2 the equivalent
pi-elements of the R-C-R and C-R-C tees respectively,
we have
Fig. 14.1 A General Parallel-T network

(a) Z1 Z2 (b)

ZB
Z3 ZA ZC

Fig. 14.2 a A Tee-network, b A Pi-network

9 (n1 + n2 )R n1n2CR 2
ZA1 ¼ n1 R þ ðn1 þ n2 Þ=ðn2 pCÞ >
>
>
ZB1 ¼ ðn1 þ n2 ÞR þ pn1 n2 CR2
>
>
>
>
>
>
ZC1 ¼ n2 R þ ðn1 þ n2 Þ=ðn1 pCÞ
>
>
>
=
ZA2 ¼ Rðm1 þ m2 Þ=m1 þ 1=ðpm1 CÞ ;
>
2 2
¼ 1=ðm1 m2 p C RÞ
>
ZB2 >
>
>
> 1 m1m2
C
>
þ ðm1 þ m2 Þ=ðpCm1 m2 Þ >
> - 2 2
m1m2w C R (m1 + m2 )
>
>
>
and ZC2 ¼ Rðm1 þ m2 Þ=m2 þ 1=ðpm2 CÞ ;
ð14:2Þ Fig. 14.3 Showing the ZB arm of the pi-equivalent of the
network of Fig. 14.1
where p = jx, x being the angular frequency.
The pi-equivalent of the network of Fig. 14.1 Without any loss of generality, we can let
will then have its elements given by x20 ¼ 1=ðC 2 R2 Þ. Then

ZA1  ZA2 ZB1  ZB2 m1 m2 ðn1 þ n2 Þ ¼ m1 m2 n1 n2 =ðm1 þ m2 Þ ¼ 1:


ZA ¼ ; ZB ¼ ; ZC
ZA1 þ ZA2 ZB1 þ ZB2 ð14:4Þ
ZC1  ZC2
¼ : ð14:3Þ
ZC1 þ ZC2 Equation 14.4 shows that if x0 is fixed, then the
number of arbitrary numerical constants is
reduced to two only.
Null Condition

For zero transmission, since ZA and ZC cannot be Transfer Function


zero with ordinary circuit elements, we must
have ZB = a. At this point, it is convenient to Let xCR = x; then the rejection frequency is
have a look at the elements composing the ZB given by x = 1 and from Eq. 14.2,
arm as shown in Fig. 14.3. We note that this is
ZB1 ¼ Rfðn1 þ n2 Þ þ jn1 n2 xg
9
simply an anti-resonant circuit having infinite >
>
¼ fR=ðm1 m2 xÞgfð1=xÞ þ jðm1 þ m2 Þg =
>
impedance at a frequency given by ZB2
:
ZC1 ¼ fR=ðn1 xÞgfn1 n2 x  jðn1 þ n2 Þg >
>
1 m1 þ m2
>
;
x20 ¼ ¼ 2 2 : and ZC2 ¼ fR=ðm2 xÞgfðm1 þ m2 Þx  jg
C 2 R2 m 1 m2 ðn1 þ n2 Þ C R m1 m2 n1 n2
ð14:5Þ

Table 14.1 Examples of design for maximum QT

n1      m1      QT       m2       n2
1.0     0.95    0.487    0.026    40.04
1.0     0.90    0.475    0.053    20.11
1.0     0.85    0.459    0.081    13.51
1.0     0.80    0.444    0.111    10.25
1.0     0.75    0.429    0.143    8.33
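The entries of Table 14.1 follow directly from QT = 1/(n1 + 1/m1) (Eq. 14.9) and from the expressions for m2 and n2 in terms of m1 and n1 obtained from the constraint of Eq. 14.4 (Eq. 14.10), both derived below. The short sketch regenerates the table, so design values for any other (n1, m1) pair can be obtained the same way; last-digit differences are rounding in the original:

```python
# Reproduces Table 14.1 from Q_T = 1/(n1 + 1/m1) (Eq. 14.9) and the
# expressions for m2 and n2 that follow from the constraint of Eq. 14.4 (Eq. 14.10).
for n1, m1 in [(1.0, 0.95), (1.0, 0.90), (1.0, 0.85), (1.0, 0.80), (1.0, 0.75)]:
    QT = 1.0 / (n1 + 1.0 / m1)
    m2 = (n1 - m1) / (m1 * n1**2 + 1)
    n2 = (m1**2 * n1 + 1) / (m1 * (n1 - m1))
    print(f"n1={n1:.2f}  m1={m1:.2f}  QT={QT:.3f}  m2={m2:.3f}  n2={n2:.2f}")
```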

Combining Eqs. 14.3, 14.4 and 14.5 and Using Morris’s [4] definition of Q of a resis-
simplifying, we get tance–capacitance network, we have from
Eq. 14.8
1 þ jm1 m2 n1 n2 x >
9
ZB ¼ R >
m1 m2 ð1  x2 Þ = QT ¼ l=ðn1 þ 1=m1 Þ: ð14:9Þ
: ð14:6Þ
m 1 m 2 n1 n2 x  j >
ZC ¼ R
>
m2 ð1 þ m1 n1 Þx
; Thus QT can be increased by decreasing n1
and increasing m1. The extent to which this can
For zero source and infinite load impedances, be done depends, however, on m2 and n2 also,
the transfer function of the network is given by because these must remain positive as m1 and n1
are changed. At this stage, therefore, we require
T ¼ ZC =ðZB þ ZC Þ: the expressions for m2 and n2 in terms of m1 and
n1. They can be easily obtained from Eq. 14.4 as
Substituting for ZB and ZC from Eq. 14.6 and
simplifying, we get n1  m1 m21 n1 þ 1
m2 ¼ 2
and n2 ¼ :
m 1 n1 þ 1 m1 ðn1  m1 Þ
1
T¼ : ð14:7Þ ð14:10Þ
1  jðn1 þ 1=m1 Þ=ðx  1=xÞ
Thus for m2 and n2 to be positive, (n1 − m1) must
This expression for the transfer function of a
remain positive. Under this restriction, QT will
general parallel-T network is much simpler to
have a maximum value of 12 when m1 = n1 = 1.
handle than that of Stanton [1], as it contains
only two numerical constants. In Eq. 14.7, the
term varying with frequency occurs as (x − 1/x);
Design
it thus follows that both the amplitude and phase
transfer characteristics of the network will be
At m1 = n1 = 1, Eq. 14.10 gives m2 = 0 and
symmetrical about x = 1 when plotted on a log
n2 = a, so that the corresponding arms are
(x) scale.
effectively open circuited and the output is zero
at all frequencies. Even with finite elements of
moderate values, however, QT can be made to
Selectivity
approach this maximum value, as will be evident
from the following example. Let n1 = 1.0 and
Equation 14.7 can be written as
m1 = 0.9; then QT = 0.475. In the conventional
1  x2 symmetrical case, QT = 0.250 so that the
T¼ ð14:8Þ improvement is as much as 90%. Also from
1 þ jðn1 þ 1=m1 Þx  x2
Eq. 14.10, m2 = 0.053 and n2 = 20.11. For a

rejection frequency of 1000 c/s., we can choose or,


C = 0.01 lF and R = 16 KX. Then the series
1=jT j2 ¼ 1 þ 1= Q2T y2

ð14:11Þ
resistances required are n1R = l6 KX and n2R =
321.7 KX and the series capacitances required
where y = x − 1/x. Differentiating Eq. 14.11
are m1C = 0.009 lF, and m2C = 530 llF. Thus
with respect to y gives
elements of reasonable values can be used to
approach the maximum selectivity. Table 14.1 djTj jTj3
shows some typical examples of design for ¼ 2 3: ð14:12Þ
dy QT y
improved QT.
Differentiating again, we get
!
Linearity of the Selectivity Curve d2 jT j 3jT j3 jT j2
¼ 2 4 1 :
dy2 QT y Q2T y2
Detection of a frequency-modulated signal is
usually carried out by first converting it into an Combining this with Eq. 14.11, we have
amplitude-modulated signal by a device called a
discriminator and then applying the A.M. signal d 2 jT j 3jT j5
¼  : ð14:13Þ
to an ordinary A.M. detector. The circuit dy2 Q2T y4
arrangement of the discriminators used in the
high-frequency range may be looked upon as Again,
consisting of two channels, each containing an djT j d jT j dy
inductance–capacitance circuit. The two LC cir- ¼ 
dx dy dx
cuits are tuned to two different frequencies f1 and
d 2 jT j d jT j d 2 y d2 jT j dy 2
 
f2 such that (f1 * f2) is slightly greater than
) ¼  þ  :
twice the peak deviation and (f1 + f2)/2 is equal dx2 dy dx2 dy2 dx
to the carrier frequency of the F.M. wave to be
detected. The difference between the rectified Substituting the values of d|T|/dy and d2|T|/dy2
outputs of the two channels then varies linearly from Eqs. 14.12 and 14.13, we have
with frequency in the frequency range of interest. (   )
In the low-frequency range, the two tuned cir- d2 jTj jTj3 d2 y 3jTj2 dy 2
¼ 2 3  : ð14:14Þ
cuits are replaced by two parallel-T RC networks dx2 QT y dx2 y dx
[5, 6] whose rejection frequencies are chosen in
the same manner as f1 and f2 in the Also,
high-frequency circuit. A single parallel-T net-
dy d2 y
work can also be used as a discriminator if it can ¼ 1 þ 1=x2 and 2 ¼ 2=x3 : ð14:15Þ
be so designed that a linear relation exists dx dx
between the amplitude transfer function (|T|) and Combining Eq. 14.14 with Eqs. 14.11 and 14.15
the frequency (x) in the frequency range of gives
interest. It will be shown that this condition is
approximately satisfied when the network d 2 jT j 1
¼
dx2
3=2
selectivity is nearly equal to its maximum

Q2T y3 1 þ Q21y2
value. 8 T
2 9
From Eqs. 14.7 and 14.9, we can write <2 3 1 þ x12 =
 þ   :
:x3 y 1 þ 1 ;
1 2 QT y 2
jT j ¼ 1=2
f1 þ 1=ðQ2T y2 Þg
For a perfectly linear curve, the first differential
coefficient is a constant and the second

differential coefficient is zero. Thus, the value of It is natural to suggest that x0 should be cho-
d2|T|/dx2 (neglecting sign) is a measure of the sen to be somewhere near the centre of the band
departure from linearity, the least value corre- 0 < x < 1 so that with the carrier frequency
sponding to maximum linearity. From the above, coincident with x0, a frequency deviation of the
we see that d2|T|/dx2 is a function of both x and order of 50 per cent of the carrier frequency can
QT so that for a particular value of QT, the lin- be detected. Since, however, |T| ! 1 as x ! 0,
earity varies from point to point. there will be a considerable deviation from lin-
In the particular application considered, the earity at very low frequencies. We thus choose x0
frequency range of interest is 0 < x < 1. In this to be nearer to 1 than to 0. Let x0 = 0.55; then the
range, y is negative and the expression within the required value of QT is 0.485, which is very near
second bracket can be made zero, i.e. perfect to its maximum value. The improvement in lin-
linearity can be attained at a single frequency by earity as QT approaches this value will be evident
suitably choosing QT. If the normalized value of from Fig. 14.4, where the magnitude of the
this frequency is denoted by x0 and y0 = x0–1/x0, amplitude transfer function has been plotted in
the required value of QT is given by the band 0  x  1 for various values of QT.
The curve for QT = 0.495 is appreciably linear
1=2
over the range 0.2 < x < 1.

ð1=y0 Þ
QT ¼ 3
: ð14:16Þ
1:5x0 ð1 þ 1=x20 Þ þ y0

Fig. 14.4 Showing the


1.0
selectivity curves of the
parallel-T network for
QT = 0.250, 0.350 and 0.495

0.8

0.6
QT
=0
.49
0.3 0
0.2

5
50
5

0.4

0.2

0 0.2 0.4 0.6 0.8 1.0


x
Selectivity of an Amplifier Using the General Parallel-T RC … 123

Selectivity of an Amplifier Using Thus with the network considered previously,


the General Parallel-T RC Network
in the Negative Feedback Line QA ¼ 0:475ðG0 þ 1Þ

In a low-frequency selective amplifier, a while with the conventional symmetrical


parallel-T RC network is used in the negative network,
feedback line. If the open-loop gain of the
QA ¼ 0:25ðG0 þ 1Þ:
amplifier is G0, then the gain with feedback is

G ¼ G0 =ð1 þ G0 TÞ: For the same open-loop gain of 50 (say), the


values of QA in these cases are respectively 24.20
Combining this with Eq. 14.7, we get and 12.75 while for the same QA of value 12.75,
the amplifier with the asymmetrical network need
j
have a gain of 26 only.
x
1þ QT  1x 2
G ¼ G0 j x
G0 þ 1 þ QT  1x 2

91=2
Conclusion
8  2
x 1
>
< 1þ  1x 2
QT
>
=
) jGj ¼ G0  2 : In situations where a continuous adjustment of
:ðG0 þ 1Þ2 þ 1  x 2 >
> ; the rejection frequency is desired, a general
QT 1x
configuration will, of course, be of limited
applicability, as the elements of the same kind
The resonant gain is G0; the gain is 3 dB. below
are neither equal nor simply related. But for a
this value at frequencies given by |G| = 2−1/2 G0
fixed rejection frequency, a general network with
which on simplification reduces to the following:
proper asymmetry will definitely be a better
 2 choice. Also in its application as an F.M. dis-
1 x
 ¼ G20 þ 2G0  1: criminator in the low-frequency range, a value of
QT 1  x2
QT nearly equal to its maximum value is
required. Thus, the design procedure given in the
The solutions of this equation are
chapter will be of much use in these situations.
( 1=2 )
1 1 1
x1;2 ¼ þ 4G00  ;
2G00 Q2T QT Problems

where P:1. Determine the transfer function of Fig. 14.1


circuit if m1 = 0 and comment on the kind
G00 ¼ ðG20 þ 2G0  1Þ2 :
1
of filtering it can do.
P:2. Same if m2 = 0.
Thus the selectivity of the amplifier is P:3. Same if m3 = 0.
P:4. Same if n1 = n2 = 0
QA ¼ 1=ðx2 x1 Þ ¼ G00 QT : P:5. Same if n3 = ∞.

For G0 > 20, G00 ’G0 þ 1 to within an error of


less than 0.25% so that Acknowledgments The author is indebted to Prof. J. N.
Bhar, D.Sc., F.N.I., and to Dr. A. K. Choudhury, M.Sc.,
D. Phil., for their kind help and advice in the preparation
QA ’ ðG0 þ 1ÞQT : of this chapter.
124 14 Design of Parallel-T Resistance–Capacitance Networks For …

References 4. D. Morris, Q as a mathematical parameter. Electron.


Eng. 306 (1954)
5. J.R. Tillman, Linear frequency discriminator. Wirel.
1. L. Stanton, Theory and applications of parallel-T Engr. 23, 281 (1946)
resistance capacitance frequency selective network. 6. Paul T. Stine, Parallel-T discriminator design tech-
Proc. IRE. 34, 447 (1946) nique. Proc. Natl. Elec. Conf. IX, 26 (1950)
2. A. Wolf, Note on a parallel-T resistance capacitance 7. H. Fleischer, in Vacuum Tube Amplifiers, ed. by G.E.
network, Proc. I.R.E. 34, 659 (1946) Valley Jr. and H. Wallman (McGraw-Hill, 1948),
3. Y. Oono, Design of parallel-T resistance capacitance Chap. 10, p. 394
network. Proc. I.R.E. 43, 617 (1953)
Perfect Transformer, Current
Discontinuity and Degeneracy 15

That on connecting a source in the primary It is not common to find an analysis of coupled
circuit of a perfectly coupled transformer, the coils with initial currents in textbooks on circuit
currents in both the primary and secondary theory. Somehow, in the large number of books
coils may be discontinuous does not appear to consulted by the author, it is always assumed that
have been widely discussed in the literature. the coils are initially relaxed and imperfectly
In this discussion, we present an analysis of coupled. The only exception happens to be the
the general circuit and show that in general, book by Kuo [1], where the circuit shown in
the currents will be discontinuous, except for Fig. 15.1 has been analyzed with due regard to
specific combinations of the initial currents in initial conditions. It has been shown that when
the two coils. Although unity coupling coef- M 2 < L1L2, i.e. when the coefficient of coupling
pffiffiffiffiffiffiffiffiffiffi
ficient cannot be realized in practice, a k ¼ M= L1 L2 \1, the currents i1 and i2 must be
perfectly coupled transformer is a useful continuous at t = 0. On the other hand, if k = 1,
concept in circuit analysis and synthesis, and then for the specific case i1(0–) = i2(0–) = 0, the
the results presented here should be of interest currents are discontinuous, with
to students as well as teachers of circuit
theory. i1 ð0 þ Þ ¼ VL2 =ðR1 L2 þ R2 L1 Þ ð15:1Þ

and

Keywords i2 ð0 þ Þ ¼ VM=ðR1 L2 þ R2 L1 Þ ð15:2Þ


Perfect transform  Current discontinuity
Degeneracy Kuo is, however, silent on what happens when
the coils are not initially relaxed. The specific
question is the following: Is k = 1 necessary as
well as sufficient for the currents to be discon-
tinuous? We show, in this chapter, that this con-
dition is necessary but not sufficient. In other
words, even for k = 1, the currents may display
Source: S. C. Dutta Roy, “Perfect Transformer, Current continuity. First, we demonstrate this through an
Discontinuity and Degeneracy”, IETE Journal of example. We next consider a more general circuit
Education, vol. 43, pp. 135–138, July–September 2002. and analyze it to obtain expressions for i1(0+) and

© Springer Nature Singapore Pte Ltd. 2018 125


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_15
126 15 Perfect Transformer, Current Discontinuity and Degeneracy

R1 voltage source is generalized to v(t) instead of a


M battery. We assume that v(t) does not contain
t=0
+ impulses. The loop equations now become
V
– I1 I2 R2
L1 L2
vðtÞ ¼R1 i1 ðtÞ þ L1 i01 ðtÞ þ Mi02 ðtÞ
Zt
1 ð15:5Þ
þ i1 ðtÞ dt þ v1 ð0 Þ
C1
Fig. 15.1 The circuit analyzed by Kuo [1] and used in 0
the example of this chapter with specific values
0 ¼Mi01 ðtÞ þ R2 i2 þ L2 i02 ðtÞ
Zt
i2(0+) in terms of circuit parameters, source value 1 ð15:6Þ
þ i2 ðtÞ dt þ v2 ð0 Þ
at t = 0+, and currents and voltages in the circuit C2
0
at t = 0−. We then derive the condition for cur-
rent continuity in a perfectly coupled transformer.
The sum of the last two terms on the right-hand
Finally, we consolidate the main results of the
side of Eq. 15.5 represents v1(t) = q1(t)/C1, where
chapter and make some concluding remarks.
q1(t) denotes the charge on C1. Similarly, v2(t) =
q2(t)/C2. Integrals of v1(t), v2(t) as well as v(t) from
t = 0− to t = 0+ will be zero because none of
An Example them contains impulses. Thus, if we integrate
Eqs. 15.5 and 15.6 from t = 0− to t = 0+,
For the sake of completeness and for ready ref-
we get
erence, we include, briefly, the analysis and
results of Kuo for the circuit shown in Fig. 15.1 0 ¼ L1 ½i1 ð0 þ Þ
in Appendix A. Let i1 ð0 ފ þ M ½i2 ð0 þ Þ i2 ð0 ފ ð15:7Þ
9
L1 ¼ 4H; L2 ¼ 1H; M ¼ 2H = 0 ¼ M ½i1 ð0 þ Þ i1 ð0 ފ þ L2 ½i2 ð0 þ Þ i2 ð0 ފ
R1 ¼ 8X; R2 ¼ 3X; V ¼ 6V ð15:3Þ
i1 ð0 Þ ¼ 0 and ið0 Þ ¼ 1A
; ð15:8Þ

From Eqs. 15.25 and 15.26, then, we get These are the same as in Kuo’s circuit, as given in
Eqs. 15.20 and 15.21. Note that Eqs. 15.7 and
15.8 imply that the principle of conservation of

i1 ð0 þ Þ ¼ 0
ð15:4Þ flux applies to each coil individually, i.e. the flux
i2 ð0 þ Þ ¼ 1A
in either coil at t = 0− is the same as that at t = 0+.
Hence, the currents are continuous despite k = 1. Also note that the generalized circuit does not
This counterexample is sufficient to prove that change the conclusion arrived at in Kuo’s circuit,
k = 1 is only a necessary but not a sufficient viz. that if k < 1, then the currents in the two coils
condition for current discontinuity. must be continuous.
For the case k = 1, Eq. 15.6 gives, at t = 0+,
the following equation:
Analysis of the General Circuit
0 ¼R2 i2 ð0 þ Þ þ ðM=L1 Þ½L1 i01 ð0 þ Þ
ð15:9Þ
We now consider the general circuit shown in þ Mi02 ð0 þ ފ þ v2 ð0 Þ;
Fig. 15.2 which includes an initially charged
capacitor in each loop and, in addition, the which can be rewritten as
Analysis of the General Circuit 127

R1 Condition for Continuity of Currents


t=0
M Under Perfect Coupling
+
v(t) If the currents are to be continuous, then it
– I1 L1 I2 R2
L2 suffices to equate Eq. 15.14 to i1(0−) or
C1 C2 Eq. 15.15 to i2(0−) because from Eq. 15.7
or Eq. 15.8, i1(0+) = i(0−) guarantees that
– v1(t) + +v2(t) – i2(0+) = i2(0−), and vice versa. Equating
Eq. 15.15 to i2(0−) gives the following condition
Fig. 15.2 A more general circuit than that shown in
Fig. 15.1 with a generalized source v(t), and initially
for continuity:
charged capacitors in each loop
R1 Mi1 ð0 Þ M½vð0 þ Þ v1 ð0 ފ L1 v2 ð0 Þ
L1 i01 ð0 þ Þ þ Mi02 ð0 þ Þ ¼ ðL1 =MÞ i2 ð0 Þ ¼
R2 L1
ð15:10Þ
½R2 i2 ð0 þ Þ þ v2 ð0 ފ ð15:16Þ

Now, putting t = 0+ in Eq. 15.5 and substituting with i1(0−) arbitrary. In other words, for every
from Eq. 15.10, we get i1(0−), there exists one i2(0−) for the currents to
be continuous and vice versa. For all other com-
vð0þ Þ¼R1 i01 ð0þ Þ binations of i1(0−) and i2(0−), the currents will be
ðL1 =MÞ½R2 i2 ð0þ Þþv2 ð0 ފþv1 ð0 Þ:
discontinuous. It is, of course, implied that other
ð15:11Þ conditions, viz., v(0+), v1(0−) and v2(0−), do not
change. Should that be the case, it is clear that the
Combining Eq. 15.11 with 15.7, we get the fol- relationship between i2(0−) and i1(0−), as given
lowing two simultaneous equations in i1(0+) and by Eq. 15.16, is a straight line with a slope of
i2(0+): R1M/(R2L1) and an intercept of
R2 L1
i1 ð0 þ Þ i2 ð0 þ Þ
R1 M M½vð0 þ Þ v1 ð0 ފ þ L1 v2 ð0 Þ
vð0 þ Þ v1 ð0 Þ þ ðL1 =MÞv2 ð0 Þ ð15:17Þ
¼ R2 L1
R1
ð15:12Þ on the i2(0−) axis. For the example considered
earlier, the slope is 4/3 while the intercept is
M M −1 A.
i1 ð0 þ Þ þ i2 ð0 þ Þ ¼ i1 ð0 Þ þ i2 ð0 Þ
L1 L1
ð15:13Þ

Solving Eqs. 15.12 and 15.13 gives, finally,

R2 ½L1 i1 ð0 Þ þ Mi2 ð0 ފ þ L2 ½vð0 þ Þ v1 ð0 ފ þ Mv2 ð0 Þ


i1 ð0 þ Þ ¼ ð15:14Þ
R1 L2 þ R2 L1

R1 ½L2 i2 ð0 Þ þ Mi1 ð0 ފ þ M½vð0 þ Þ v1 ð0 ފ þ L1 v2 ð0 Þ


i2 ð0 þ Þ ¼ ð15:15Þ
R1 L2 þ R2 L1
128 15 Perfect Transformer, Current Discontinuity and Degeneracy

Concluding Remarks by the two examples in [1] (pp. 124–126). For


k < 1 in the circuit shown in Fig. 15.1, the sys-
We have shown in this chapter that in an tem has two natural frequencies, although it has
imperfectly coupled transformer, the currents in three inductances L1, L2 and M. They are not
the two coils are always continuous. For perfect physically connected at a junction, but in the
coupling, on the other hand, the currents are equivalent1 circuit shown in Fig. 15.3, we do
always discontinuous except for specific combi- have a junction of L1 − M, M and L2 − M. This
nations of the two initial currents. More specifi- is not an ‘effective’ junction in the sense of [2],
cally, for each initial current in one coil, there but we may call it an ‘equivalent’ junction. It is
exists a particular value of the initial current in no wonder, therefore, that we get the
the other coil, for which the currents will be second-order system, instead of the third-order
continuous. These combinations lie on a straight one. An alternative way of justifying the result is
line, when one current is plotted against the to note that we can specify only two initial
other. These conclusions are valid for any com- conditions for the system, the initial current in
bination of resistors and capacitors in the two M being dependent on those in L1 and L2.
loops, with or without initial charges in the However, despite the degeneracy, there is no
capacitors. It is obvious, however, that including discontinuity in the currents!
another inductor in either or both loops makes When k = 1, further degeneracy sets in, not
the coupling imperfect, and the currents will then because we cannot specify two initial currents,
be always continuous. but (in our opinion) because M is completely
In this context, the following two observations specified if L1 and L2 are specified. As the second
made by Seshu and Balabanian [2] are of example of [2] (pp. 125–126) demonstrates, the
interest: system now has only one natural frequency and
behaves like the first-order system. Despite this
(1) ‘If idealized R, L and C branches, voltage ‘double’ degeneracy, however, the currents are
generators, and current generators are arbi- not always discontinuous, as demonstrated in this
trarily connected together, the system may chapter analytically and by an example. It is clear
not have the maximum possible order. It is that a deeper examination of the case is needed to
only when such degeneracies are present that resolve the issue in terms of physical concepts.2
discontinuities in inductance currents and
capacitance voltages are encountered. No
existence theorems have been proved by Problems
mathematicians for these cases …’ (p. 103).
(2) ‘… it may be expected that inductance cur- P:1. Suppose R1 in Fig. 15.1 circuit is shunted
rents will be discontinuous when there are by a capacitor C. Investigate the disconti-
junctions or effective junctions… at which nuity in this circuit.
only inductances and current generators are P:2. Same, with C shifted to be across R2.
present’ (p. 104). P:3. Same, with C shifted to be in series with R2.
P:4. Same, with C in series with R1.
By an effective junction in the second obser-
vation, the authors mean ‘a junction at which 1
This ‘equivalent’ circuit implies only mathematical
only inductances and current generators would equivalence (of the loop equations) but not physical
meet if we suitably interchanged series connected equivalence, because the two coils have no common
two terminal networks or shorted some branches. point.
2
Thus an effective junction is the same as a cut Notably, in [2], there are no examples or discussions on
set’ (p. 104, footnote). initial conditions in coupled coils. In the only example in
which a coupled coil appears (pp 110–112), inductor
The first observation regarding degeneracy is junctions are created through additional inductors in each
clearly demonstrated in the case of coupled coils circuit and a current generator in the secondary circuit.
Problems 129

Fig. 15.3 A mathematical L2 – M


L1 – M
equivalent circuit for two
coupled coils: coupling is not
always good! M

M
L1 ≡
L2

P:5. Same, with C1 in series with R1, C2 in series which, along with Eq. 15.20 or 15.21, clearly
with R2, and a voltage output taken across indicates that if L1L2 > M2, i.e. k < l, then the
R2. currents are continuous at t = 0. On the other
hand, if k = 1, then they need not be. In fact, in
this case, Eq. 15.19 gives at t = 0+:

ðM=L1 Þ L1 i01 ð0 þ Þ þ Mi02 ð0 þ Þ


 
Appendix R2 i 2 ð 0 þ Þ ¼
ð15:23Þ
Kuo’s analysis and results for the circuit are
shown in Fig. 15.1. which, substituted in Eq. 15.18 with t = 0+,
The loop equations for the circuit shown in yields
Fig. 15.1 are
V ¼ R1 i1 ð0 þ Þ ðL1 =MÞR2 i2 ð0 þ Þ ð15:24Þ
VuðtÞ ¼ L1 i01 ðtÞ þ R1 i1 ðtÞ þ Mi02 ðtÞ ð15:18Þ
Combining this with Eq. 15.20, one can solve
0 ¼ Mi01 ðtÞ þ R2 i2 ðtÞ þ L2 i02 ðtÞ ð15:19Þ for i1(0+) and i2(0+). The results are3
Integrating Eqs. 15.18 and 15.19 from t = 0− VL2 þ R2 ½L1 i1 ð0 Þ þ Mi2 ð0 ފ
to t = 0+, we get i1 ð0 þ Þ ¼
R1 L2 þ R2 L1
ð15:25Þ
L1 ½il ð0 þ Þ i 1 ð0 ފ þ M ½ i 2 ð0 þ Þ i2 ð0 ފ
¼0 VM þ R1 ½Mi1 ð0 Þ þ L2 i2 ð0 ފ
i2 ð0 þ Þ ¼
ð15:20Þ R1 L2 þ R2 L1
ð15:26Þ
M ½i1 ð0 þ Þ il ð0 ފ þ L2 ½i2 ð0 þ Þ i2 ð0 ފ
¼0
ð15:21Þ
References
Combining Eqs. 15.19 and 15.21 gives
1. F.F. Kuo, Network Analysis and Synthesis (John
Wiley, New York, 1966), pp. 123–126
ðL1 L2 2. S. Seshu, N. Balabanian, Linear Network Analysis
M 2 Þ½i1 ð0 þ Þ i1 ð0 ފ½i2 ð0 þ Þ i2 ð0 ފ (John Wiley, New York, 1963), pp. 101–112
¼0
ð15:22Þ

3
Kuo [1], at this point, assumes i1(0–) = i2(0–), presum-
ably, as an example. We give general results in Eqs. 15.25
and 15.26.
Analytical Solution to the Problem
of Charging a Capacitor Through 16
a Lamp

An analytical solution is presented for the R varies with the current i flowing through it.
problem of charging a capacitor through a They solved the resulting differential equation by
lamp, by assuming a polynomial relationship applying numerical techniques and found a close
between the resistance of the lamp and the fit between these results and the experimental
current flowing through it. The total energy ones. The aim of this chapter is to present an
dissipated in the lamp is also easily calculated analytical, rather than numerical solution to the
thereby. An example of an available practical problem. For this purpose, we assume a poly-
case is used to illustrate the theory. nomial relationship between R(i) and i. The total
energy dissipated in the lamp is also easily cal-
culated thereby. The experimental data of RV are
Keywords used to illustrate the validity of the theory.
Capacitor charging  Differential equation
Energy
The Circuit and the Differential
Equation

Introduction The circuit under consideration is shown in


Fig. 16.1, which obeys the integral equation
The charging of a capacitor from a battery
Zt
through a resistance is a standard topic in the 1
iRðiÞ þ i dt ¼ V ð16:1Þ
undergraduate curriculum of Physics or Engi- C
neering in the theory as well as laboratory clas- 0

ses. A 2006 paper by Ross and Venugopal [1]


Differentiating Eq. 16.1, we get
(hereafter referred to as RV) deals with an
interesting variation of this topic in which the di dR i
resistor is replaced by a lamp, whose resistance R þi þ ¼ 0; ð16:2Þ
dt dt C

where, for brevity, the dependence of R on i is


not shown explicitly. Assuming, as in RV, that
Source: S. C. Dutta Roy, “Analytical Solution to the the thermal relaxation time of the lamp filament
Problem of Charging a Capacitor through a Lamp,” is much less than the time during which the
IETE Journal of Education, vol. 47, pp. 145–147, July–
September 2006.

© Springer Nature Singapore Pte Ltd. 2018 131


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_16
132 16 Analytical Solution to the Problem of Charging a Capacitor …

LAMP C dR
¼ R0 ða1 þ 2a2 i þ 3a3 i2 Þ ð16:7Þ
R(i ) di
i
Combining Eq. 16.4 with Eqs. 16.6 and 16.7, we
get, on simplification,
V t=0
½ð1=iÞ þ 2a1 þ 3a2 i þ 4a3 i2 Šdi ¼ dt=ðR0 CÞ:
Fig. 16.1 The basic charging circuit ð16:8Þ

Integrating both sides of Eq. 16.8 gives


current in the filament changes significantly, we
can write
ln i þ 2a1 i þ ð3=2Þa2 i2 þ ð4=3Þa3 i3
dR dR di ¼ ½t=ðR0 Cފ þ K: ð16:9Þ
¼ : ð16:3Þ
dt di dt
To evaluate the integration constant K, we
Combining Eqs. 16.2 and 16.3, we get note that at t = 0, i = i0 = V/(R0). Putting this
  initial condition in Eq. 16.9, we get the value of
dR di i K as the left hand side of Eq. 16.9 with i replaced
Rþi þ ¼ 0: ð16:4Þ
di dt C by i0. Finally, therefore, the equation for the
current becomes

Solution of the Differential Equation t ¼ R0 C½Inði0 =iÞ þ 2a1 ði0 iÞ þ ð3=2Þa2 ði20
i2 Þ þ ð4=3Þa3 ði30 i3 ފ:
As illustrated in Fig. 3 of RV, the variation of R ð16:10Þ
(i) with i is approximately linear, except at high
values of i. In general, we can assume R and i to Equation 16.10 is transcendental in i and for a
obey a polynomial relationship of the form given t, it has to be solved numerically. A better
! strategy would be to compute t for various values
N
X of i in the range of interest and then to plot the
R ¼ R0 1 þ ak i k ; ð16:5Þ variation of i with t, as we shall do in the
k¼1
example to follow.
where N will depend upon the required accuracy.
For most practical situations, N = 2 or 3 suffices.
We shall consider here a third order polynomial,
Energy Dissipated in the Lamp
but if required, the treatment can be extended to
The energy dissipated in the lamp is given by
any order. Let, therefore,
Z1
R ¼ R0 ð1 þ a1 i þ a2 i2 þ a3 i3 Þ ð16:6Þ
E¼ RðiÞi2 dt ð16:11Þ
Then 0
Energy Dissipated in the Lamp 133

Combining Eq. 16.11 with Eq. 16.6, substituting A plot of Eq. 16.14 is shown in Fig. 16.2, which,
for dt from Eq. 16.8, changing the limits of the as predicted by RV, is virtually indistinguishable
integral (from t = 0 to i = i0 and t = ∞ to i = 0) from that given in Fig. 4 of their paper.
and simplifying, we get

Zt0
CR20 i þ 3a1 i2 þ 4a2 þ 2a21 i3 þ 5ða3 þ a1 a2 Þi4 þ 6a1 a3 þ 3a22 i5 þ 7a2 a3 i6 þ 4a23 i7 di
   

0
2
a21 4 a22 6 a23 8
    
2 i0 3 5 7
¼ CR0 þ a1 i0 þ a2 þ i þ ða3 þ a1 a2 Þi0 þ a1 a3 þ i þ a2 a3 i0 þ i0
2 2 0 2 0 2
ð16:12Þ

The total energy dissipated in the lamp for this


case is given by
Example 2
a21 4

i0
E¼ CR20 3
þ a1 i 0 þ i 0 ; ð16:15Þ
2 2
We use the experimental data given in RV to
illustrate the application of the theory presented
which is calculated as 2.772 J.
here. As mentioned earlier, Fig. 3 of RV shows
that the variation of R(i) with i is predominantly
linear. By considering the two points (0.03 A,
Conclusion
10 X) and (0.07 A, 20 X) in this figure, we get

RðiÞ ¼ 2:5ð1 þ 100iÞ ð16:13Þ It is shown that if the functional dependence of the
lamp resistance on current is known in the form of
With C = 0.154 F and i0 = 0.15 A (as given in a polynomial relationship, then the charging
Fig. 4 of RV), Eq. 16.10 becomes, for this case, process of a series capacitor can be analytically
determined. It is then also easy to determine the
t ¼ 0:385ð28:1 ln i 200iÞ: ð16:14Þ energy dissipated in the lamp during the charging
process. It is easily shown that the discharging of
0
a charged capacitor through a lamp also follows
10
Eq. 16.2 and hence the theory presented here also
applies to the discharging process.

–1
Current (i)

10
Problems

P:1. In the circuit of Fig. 16.1, add an inductor L


in series. Write the differential equation and
–2
10
solve it.
0 2 4 6 8 10 12
P:2. Let, in Fig. 16.1, C be shifted to be across
Time (t )
the lamp. Obtain the differential equation
Fig. 16.2 Variation of i with t for the example and solve it.
134 16 Analytical Solution to the Problem of Charging a Capacitor …

P:3. Can you solve Eq. 16.10 analytically? After Acknowledgements The author thanks Professor Jaya-
all, it is a cubic equation, and can be solved deva for his help in the preparation of Fig. 16.2.
by Cardan’s method. Try it.
P:4. What happens if Eq. 16.6 has another term Reference
a 4t 4?
P:5. Repeat the example in the text with an extra 1. R. Ross, P. Venugopal, On the problem of (dis) charg-
term 10i2 in Eq. 16.13. ing a capacitor through a lamp. Am. J. Phys. 74, 523–
525 (2006)
Difference Equations, Z-Transforms
and Resistive Ladders 17

It is shown that the semi-infinite and infinite equations. KCL (Kirchoff’s Current Law), KVL
resistive ladder networks composed of identi- (Kirchoff’s Voltage Law) and Ohm’s law should
cal resistors can be conveniently analyzed by be adequate for dealing with such networks. Yet,
the use of difference equations or z-trans- there are situations where the use of difference
forms. Explicit and simple expressions are equations and/or frequency domain techniques
obtained for the input resistance, node volt- offers significant advantages over conventional
ages and the resistance between two arbitrary methods. This chapter is concerned with one
nodes of the network. such situation, viz. a semi-infinite or infinite
resistive ladder network.
The semi-infinite resistive ladder network
Keywords shown in Fig. 17.1 is often posed as a problem

Infinite networks Resistive ladders [1, 2] to undergraduate students for finding the

Difference equations Z-transforms input resistance Ri = V0/I0. The solution is easily
found by noting that the resistance looking to the
right of nodes 1 and ground should also be Ri.
Thus,

Introduction R2i RRi R2 ¼ 0 ð17:1Þ

Difference equations and z-transforms are tech- which gives the quadratic equation
niques for dealing with discrete time signals and
systems, of which the former is in time domain R2i RRi R2 ¼ 0: ð17:2Þ
and the latter is in the frequency domain. Anal-
ysis of a purely resistive network does not nor- Noting that Ri must be positive, we get
mally require any tool in the frequency domain,  pffiffiffi
neither does the network process discrete time Ri ¼ R 1 þ 5 =2 ¼ RU; ð17:3Þ
signals so as to require the use of difference
where U is the so-called ‘golden ratio’.
What about the potential vn at node n, 1 
Source: S. C. Dutta Roy, “Difference Equations, n < ∞? Solution of this problem appears in [3]
Z-Transforms and Resistive Ladders,” IETE Journal of in the form of an integral obtained by using the
Education, vol. 52, pp. 11–15, January–June 2011.
concept of discrete Fourier transform. It will be
© Springer Nature Singapore Pte Ltd. 2018 135
S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_17
136 17 Difference Equations, Z-Transforms and Resistive Ladders

0 I0 1 2 n 1 n n+

V0 ... ... ... to


infinity

Fig. 17.1 The semi-infinite resistive ladder. Each resistance is of value R

shown in this chapter that the solution can be analyzed a semi-infinite ladder in which the
obtained in a simpler form by using the theory of resistors in the successive sections differ by a
difference equations or by application of the z- factor of b. He showed that by choosing b ap-
p
transform technique. In the process, we have also propriately, one can obtain the golden ratio, 2
considered the infinite resistive ladder of and some other irrational numbers in a
Fig. 17.2 and have calculated the resistance non-geometric context. Parera-Lopez [7] made
offered to a battery connected between two some generalizations of [5, 6]. Denardo et al. [8]
arbitrary nodes of this infinite ladder. presented some numerical and laboratory exper-
Besides [3], there exists a substantial volume iments on finite N-section ladders and showed
of literature on the subject of semi-infinite and that the convergence of the input resistance to
infinite resistive ladders. Some of the prominent RU is exponential and rapid. For example, for
ones, which are of educational and pedagogic N  5, the deviation from RU occurs only in the
interest, will be reviewed here. Lavatelli [4] fourth place of decimal, while for N  7, the
considered an infinite balanced ladder i.e. one in deviation occurs in fifth place of decimal.
which the lower ground line of Fig. 17.2 is Bapeswara Rao [9] related the finite resistance
replaced by a chain of resistors. He gave a dif- ladder to the effective resistance between the
ference equation formulation for the resistance centre and a vertex of an N-sided polygon of
between two arbitrary nodes. resistors.
Our treatment here in Part IV has been Besides these papers of pedagogic interest,
inspired by his work and follows the same line of there have appeared many scholarly papers on
analysis. Srinivasan [5] considered the infinite networks in IEEE and other professional
semi-infinite ladder with different values of series journals, the most prominent author being
and shunt resistors and showed that when they Zemanian (see, e.g. [10, 11] and the references
are equal, the input resistance is RU, as in cited there). Reference [10] deserves special
Eq. 17.3. He also showed that the successive mention because it is a tutorial paper addressed
convergents of the continued fraction form of the to undergraduate students in a rather unique and
input resistance are related to the Fibonacci enjoyable style. Zemanian’s book [12] gives a
sequence. As an extension of [5], Thomson [6] comprehensive treatment of the subject with the
necessary mathematical rigour.

1 0 +1

Solution by Difference Equation


to ...
infinity I0 ... ... to Approach
infinity

Consider, in Fig. 17.1, the nodes n − 1, n and


n + 1, n > 0. By writing KCL at node n and
Fig. 17.2 Infinite resistive ladder driven by a current
simplifying, we get
source I0 at node 0. Each resistance has a value R
Solution by Difference Equation Approach 137

3vn vn vn þ 1 ¼ 0: ð17:4Þ we have


1

This is a difference equation of order 2, and Z ½dðnފ ¼ 1; Z ½vn þ 1 Š ¼ zV ðzÞ;


ð17:11Þ
assuming a solution of the form kn, we get the Z ½vn 1 Š ¼ z 1 V ðzÞ:
characteristic equation
Thus taking the z-transform of both sides of
k2 3k þ 1 ¼ 0: ð17:5Þ 17.9 and simplifying, we get
1
The solution of Eq. 17.5 are VðzÞ z
¼ 1
I0 R 1 3z þz 2
 pffiffiffi z 1
k1; 2 ¼ 3  5 =2: ð17:6Þ ¼ ; ð17:12Þ
ð1 az Þð1 z 1 =aÞ
1

Note that k1k2 = 1; for convenience, we shall where a is the same as that given by 17.7.
call k1 as a so that k2 = a−1. Thus, the general Expending 17.12 in partial fractions and using
solution for vn is p
the fact that a 1=a ¼ 5, we get
 pffiffiffi
vn ¼ Aan þ Ba n ; a ¼ 3 þ 5 =2: ð17:7Þ  
VðzÞ 1 1 1
¼ pffiffiffi : ð17:13Þ
I0 R 5 1 z 1 =a 1 az 1
The constants A and B are evaluated from the
boundary conditions v0 = V0 and v∞ = 0, the The pole at z = a is outside the unit circle
latter being dictated by physical considerations. while that at z = 1/a is inside the unit circle. The
The second condition forces A to be zero while physical situation demands that the sequence vn
the first one makes B = V0. Thus, finally, should decrease on both sides of n = 0 and tend
h pffiffiffi in to zero when n ! ∞. Hence, the first term in
v n ¼ V0 3 5 =2 : ð17:8Þ 17.13 represents the z-transform of the
right-sided sequence {v0, v1, … to ∞} with
|z| < a as the region of convergence, while the
Z-Transform Solution second term represents the z-transform of
the left-sided sequence {v−1, v−2, … to ∞} with
To apply the z-transform technique [13], it is |z| < a as the region of convergence. Thus, the
instructive to consider the infinite ladder of inversion of Eq. 17.13 gives
Fig. 17.2, with a current generator I0 connected
vn 1
between node 0 and ground. Then the difference ¼ pffiffiffi ½a n uðnÞ þ an uð n 1ފ; ð17:14Þ
I0 R 5
equation 17.4 is modified to the following:
where u(n) is the unit step function, having the
3vn vn 1 vn þ 1 ¼ I0 dðnÞ; ð17:9Þ
value unity for n  0 and zero otherwise. More
where d(n) = 1 for n = 0 and zero otherwise. explicitly,
Defining the z-transform in the usual manner, i.e. pffiffiffi!n
I0 R 3 5
1 vn ¼ pffiffiffi ; n  0; ð17:15Þ
X
n 5 2
Z ½vn Š ¼ VðzÞ ¼ vn z ; ð17:10Þ
n¼ 1
138 17 Difference Equations, Z-Transforms and Resistive Ladders

pffiffiffi!n The only difference between 17.18 and 17.4 is


I0 R 3 þ 5
vn ¼ pffiffiffi ; n\0 ð17:16Þ that the right-hand side in the former is not zero.
5 2 Hence we shall have a constant term, represent-
ing the particular solution of Eq. 17.18, in addi-
This gives the complete solution for the infi- tion to the solution of the form given by
nite ladder of Fig. 17.2. The resistance seen by Eq. 17.7. It is easily seen from Eq. 17.18 that
the current generator I0 is this constant term is I0. Thus the solution of
pffiffiffi Eq. 17.18 is
R1 ¼ RRi Ri ¼ R= 5 ð17:17Þ
 pffiffiffi
p in ¼ Aan þ Ba n
þ I0 ; a ¼ 3 þ 5 =2:
so that v0 ¼ I0 R= 5; this verifies that Eq. 17.15
gives correct results for the semi-infinite ladder, ð17:19Þ
as derived independently in Eq. 17.8. Also, as
expected, vn = v−n, and both tend to zero as The constant A and B have to be determined
n ! ∞. from the boundary conditions that hold at nodes
m and m + r. Since the network is perfectly
symmetrical with respect to an imaginary vertical
Resistance Between Any Two line at the middle, the voltages at nodes
Arbitrary Nodes of an Infinite Ladder m + r and m are, respectively, +V0/2 and −V0/2.
Thus
We now consider another relevant problem in the
infinite ladder, viz. that of finding the resistance i1 ¼ ir ¼ V0 =ð2RT Þ: ð17:20Þ
offered to a source connected between any two
Combining Eqs. 17.19 and 17.20, we get two
arbitrary nodes m and m + r. Let the source be a
simultaneous equations in A and B, the solution
voltage generator V0 and let a set of r + 1 mesh
of which gives
currents be formulated as shown in Fig. 17.3,
where the last mesh includes V0 and the network ½V0 =ð2RT ފ I0  1 r 
to the left of node m and that to the right of node ðA; BÞ ¼ a ;a : ð17:21Þ
1 þ ar 1
m + r have been replaced by an equivalent
p
resistance RT ¼ RjjRi ¼ ð 5 1Þ=2. Consider Thus, finally,
the nth mesh, 1 < n < r. Writing KVL around
this mesh gives the equation ½V0 =ð2RT ފ I0  n 1 nþr

in ¼ I0 þ a þa :
1 þ ar 1
3in in 1 in þ 1 ¼ I0 : ð17:18Þ ð17:22Þ

I0

m m +1 m
V0 /2 in 1
i1 in in+1 ir
RT ... ... RT

Fig. 17.3 Circuit for determining the resistance between any two arbitrary nodes m and m + r. Each unmarked
resistance has a value R
Resistance Between Any Two Arbitrary Nodes of an Infinite Ladder 139

Application of KVL around the (r + 1)th mesh earlier, the use of z-transforms is believed to be
gives new and instructive. The explicit formulas for the
node voltages and the resistance between two
r
X arbitrary nodes also appear to be new.
V0 ¼ ðI0 in ÞR: ð17:23Þ
n¼1

Combining Eqs. 17.22 and 17.23 gives Problems


r 
½V0 =ð2RT ފ þ I0 X 1 nþr P:1. Suppose in Fig. 17.1, the ladder is termi-
an

V0 ¼ þa : nated at the third node on the right. What
1 þ ar 1 n¼1
impedance does I0 face? This is easy!
ð17:24Þ
P:2. Suppose, in Fig. 17.2, the current generator
is replaced by a voltage generator and the
Clearly,
ladder is terminated in node
r
X r
X 1 ar marked + n and −n. What current will flow
1 nþr
an ¼ a ¼ : ð17:25Þ from the generator? This is super-easy!
n¼1 n¼1
1 a
P:3. Suppose, in Fig. 17.2, each shunt resistors
or is replaced by a capacitor C. What is the
Using Eq. 17.25 in Eq. 17.24 and simplify-
input impedance? This is not so easy, but
ing, we get
not difficult too!
" pffiffiffi!n # P:4. Same as P.4, but each series resistor is
2R 3 5 replaced by a capacitor C. What is the input
Rr ¼ pffiffiffi 1 : ð17:26Þ
5 2 impedance? Same level of difficulty as in
P.3.
It is easily verified by direct calculation that P:5. Same as P.5, but each shunt resistor is
Eq. 17.26 give correct results for r = 1, 2 and 3, replaced by an inductance L.
which are, respectively,
pffiffiffi
R1 ¼ Rð1 1= 5Þ; Acknowledgments This work was supported by the
pffiffiffi Indian National Science Academy through the Honorary
R2 ¼ Rð3 5Þ; ð17:27Þ Scientist scheme.
pffiffiffi
R3 ¼ 8Rð1 2= 5Þ:

For a semi-infinite ladder, the condition of References


symmetry no longer holds and the appropriate
boundary conditions have to be used at both the 1. E.M. Purcell, in Electricity and Magnetism, Berkeley
Physics Course—Vol. 2, 2nd edn. (New York,
end meshes 1 and r. For examples, if m = 1, then McGraw-Hill, 1985), pp. 167–168
the boundary conditions are i1 = ir = V0/(RT + 2. F.W. Sears, M.W. Zemansky, in College Physics,
R); for m = 0, the resistance would be R + (the World Students, 5th edn. (Reading, MA,
Addison-Wesley, 1980)
value for m = 1); for m = 2, the boundary con-
3. R.M. Dimeo, Fourier transform solution to the
ditions are i1 = ir = V0/(RT + 2/3) and so on. semi-infinite resistance ladder. American J. Phys.
68(7), 669–670 (2000)
4. L. Lavatelli, The resistive net and difference equa-
tion. American J. Phys. 40(9), 1246–1257 (1972,
Concluding Discussion September)
5. T.P. Srinivasan, Fibonacci sequence, golden ratio
and a network of resistors. American J. Phys. 60(5),
We have used difference equations and z-trans-
461–462 (1992)
form to analyze semi-infinite and infinite resis- 6. D. Thompson, Resistor networks and irrational
tive ladders. While the former has been used numbers. American J. Phys. 65(1), 88 (1997)
140 17 Difference Equations, Z-Transforms and Resistive Ladders

7. J.J. Parera-Lopez, T-iterated electrical networks and 11. A.H. Zemanian, Infinite electrical networks. Proc.
numerical sequences. American J. Phys. 65(5), 437– IEEE 64(1), 1–17 (1976)
439 (1997) 12. A.H. Zemanian, Transfiniteness for graphs, electrical
8. B. Denardo, J. Earwood, V. Sazonava, Experiments networks and random walks (Birkhauser, Boston,
with electrical resistive networks. American J. Phys. MA, 1996)
67(11), 981–986 (1999, November) 13. S.K. Mitra, in Digital Signal Processing—A Com-
9. V.V. Bapeswara Rao, Analysis of doubly excited puter Based Approach, 3rd edn, Chapter 6 (New
symmetric ladder networks. American J. Phys. 68(5), York, McGraw-Hill, 2006)
484–485 (2000)
10. A.H. Zemanian, Infinite electrical networks: a
reprise. IEEE Trans. Circuits Sys. 35(11), 1346–
1358 (1988)
A Third-Order Driving Point
Synthesis Problem 18

Minimal realizations of an interesting multiplying constant, which in this case is unity,


third-order impedance function are discussed. is a hidden specification.
The solution, based on an elegant algebraic The order of the impedance function, defined
identity, illustrates several basic concepts of as the degree of the numerator or denominator,
driving point function synthesis. whichever is higher, being three, we shall natu-
rally require three reactive elements. The fourth
element must then be a resistance. Can all the
Keyword three reactive elements be of the same kind, viz.
Driving point synthesis  Third-order either inductance or capacitance? Having a pole
impedance function at the origin (s = 0) obviously excludes an all
inductor solution, because an RL impedance
cannot have such a pole. How about all reactive
elements being capacitances? That is, how about
Introduction an RC realization of Eq. 18.1? Note that
Eq. 18.1 has a pole at s = ∞ and we know that
Consider the impedance function an RC impedance cannot have such a pole.
Also note that Z(s) poles are at s = 0 and
ð s þ aÞ ð s þ bÞ ð s þ c Þ s = −(a + b + c) while its zeros are at s = −a,
Z ðsÞ ¼ ; ð18:1Þ
sðs þ a þ b þ cÞ −b and −c. Since a + b + c > a, b as well as c,
we conclude that poles and zeros of Z(s) do not
where a, b and c are arbitrary non-negative real alternate. This alternation of poles and zeros is an
quantities. The problem is to have a minimal essential requirement of RC or RL impedances.
realization of Eq. 18.1, i.e. a realization which Hence we conclude that Eq. 18.1 can neither be
uses no more than four elements. Why four? RL nor RC; if at all realizable, it must be RLC.
Apparently, there are three specifications, namely
a, b and c, but then you should realize that the
Is Z(s) at All Realizable?

The question that arises at this stage is the fol-


Source: S. C. Dutta Roy, “A Third-Order Driving Point lowing: Is Z(s) at all realizable? It is known that
Synthesis Problem,” Students’ Journal of the IETE, vol. Z(s) will be realizable if it is a positive real
36, pp. 179–183, October–December 1995. function (PRF) [1], i.e. if (i) Z(s) is real for s real
© Springer Nature Singapore Pte Ltd. 2018 141
S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_18
142 18 A Third-Order Driving Point Synthesis Problem

and (ii) Re Z(s)  0 for Re s  0. There are is twice the residue of Z(s) at s ¼ jx1 , and
many ways of testing for a PRF, but one Z3(s) is the remaining function to be tested. The
pre-processing or simplification that should first term in Eq. 18.6 represents a parallel con-
invariably be carried out is to look for poles and nection between an inductance K1 =x21 and a
zeros on the jx-axis, including s = 0 and s = ∞, capacitance 1/K1.
and to remove them. This step is the testing for a If instead of a pole, one finds one or more
PRF is known as the ‘Foster preamble’. In par- zeros of Z(s) on the jx-axis, then one removes
ticular, if Z(s) has a pole at the origin (s = 0), them from Y(s) = 1/Z(s) which will have a pole
then one can write at those points. Here, removal of a pole at s = 0,
s = ∞ and s ¼ jx1 corresponds to the removal
K0 of an inductance, capacitance and a series con-
Z ðsÞ ¼ þ Z1 ðsÞ; ð18:2Þ
s nection of inductance and capacitance, respec-
tively, all in parallel with the remaining
where
admittance function to be tested.
K0 ¼ sZ ðsÞjs¼0 : ð18:3Þ It can be shown that if the original function
was PR, then so is the remainder function after
is the residue of Z(s) at the pole at s = 0 and removal of any pole or zero on the jx-axis. This,
Z1(s) is the remaining function to be tested. The in fact, gives validity of the Foster preamble! But
term K0/s obviously represents a capacitor of then, how are we simplifying the testing? Note
value 1/K0. Naturally K0 has to be positive, that Z1(s) of Eq. 18.2 as well as Z2(s) of Eq. 18.4
otherwise no further testing is needed. In fact if Z will be one order less than Z(s), while Z3(s) of
(s) is PRF, then it can be shown that its residue at Eq. 18.6 will have an order reduction by two.
all poles on the jx-axis have to be real and Hence, indeed, the remainder functions are
positive, but not necessarily vice versa. simplified.
If, instead of the origin, Z(s) has a pole at In the present case of Z(s) given by Eq. 18.1,
s = ∞, then one can write we have a pole s = 0 due to the factor s in the
denominator, and also at s = ∞ because the
Z ðsÞ ¼ K1 s þ Z2 ðsÞ; ð18:4Þ degree of the numerator is one greater than that
of the denominator (it cannot be more than one or
where less than one, see [1]). Let us remove them. The
residues are, from Eqs. 18.3 and 18.5,
Z ðsÞ
K1 ¼ Lim ð18:5Þ
s!1 s K0 ¼ abc=ða þ b þ cÞ and K1 ¼ 1: ð18:8Þ

is the residue of Z(s) at the pole at s = ∞ and If we remove the pole at s = ∞ first, the
Z2(s) is the remaining function to be tested. Here remainder function is
K∞s represents an inductor of value K∞. Finally,
if Z(s) has poles at s ¼ jx1 , then one can write Z10 ðsÞ ¼ Z ðsÞ s
ð s þ aÞ ð s þ bÞ ð s þ c Þ
K1 s ¼ s
Z ðsÞ ¼ þ Z3 ðsÞ; ð18:6Þ sðs þ a þ b þ cÞ ð18:9Þ
s2 þ x21
sðab þ bc þ caÞ þ abc
¼ :
where sðs þ a þ b þ c Þ

This step, as explained earlier, leads to the



s2 þ x21 Z ðsÞ

K1 ¼ ð18:7Þ partial realization of Fig. 18.1a and reduces the
s 2
s ¼ x21
order from three to two. As is obvious from
Is Z(s) at All Realizable? 143

Eq. 18.9, Z10 ðsÞ does not (and cannot) have a pole which can be easily verified. Thus Z20 ðsÞ has no
at s = ∞, but it retains the pole at s = 0 of Z obvious defect for positive realness. Not only
(s) with the same residue. If we now remove this that, because Z20 ðsÞ has a zero at s = ∞, its
pole from Z10 ðsÞ, we have a remainder function reciprocal Y20 ðsÞ has a pole at s = ∞ which can
be removed. The corresponding residue is
abc
Z20 ðsÞ ¼ Z10 ðsÞ : ð18:10Þ
sða þ b þ c Þ
0 Y20 ðsÞ
On simplification, this reduces to the K12 ¼ Lims!1
s
following: ða þ b þ cÞðs þ a þ b þ cÞ
¼ Lims!1
s ð a þ bÞ ð b þ c Þ ð c þ aÞ
ða þ b þ cÞðab þ bc þ caÞ abc
Z20 ðsÞ ¼ : ð18:13Þ
ða þ b þ c Þðs þ a þ b þ c Þ
ð18:11Þ
aþbþc
The partial realization resulting from this step ¼ ; ð18:14Þ
ð a þ bÞ ð b þ c Þ ð c þ aÞ
is shown in Fig. 18.1b. Note also that the order
of Z20 ðsÞ is one, which is one less than that of where in Eq. 18.13, we have used the identity
Z10 ðsÞ, as expected. Eq. 18.12 in conjunction with Eq. 18.11. If we
In order to proceed further with the testing, it remove this pole from Y20 ðsÞ which corresponds
is necessary to ensure that the numerator constant 0
to a capacitance of value K12 in parallel, we are
of Eq. 18.11 is positive. It is indeed so, because left with the following remainder function
of the algebraic identity
ða þ b þ cÞs
ða þ b þ cÞðab þ bc þ caÞ Y30 ðsÞ ¼ Y20 ðsÞ :
ða þ bÞðb þ cÞðc þ aÞ
ð18:12Þ
¼ ða þ bÞðb þ cÞðc þ aÞ þ abc; ð18:15Þ

Fig. 18.1 Various steps in


a +b + c
the testing of Z(s) for positive
realness, leading to a (a) 1 (b) abc
1
complete realization through
Foster preamble only
Z(s)
Z1¢ (s) Z(s) Z2¢ (s)

a +b + c
1 abc
(c)

Z(s) (a +b)(b + c)(c + a)


(a +b + c)2

a +b + c
(a +b)(b + c)(c + a)
144 18 A Third-Order Driving Point Synthesis Problem

Simplification of Eq. 18.15 leads to Y10 ðsÞ


0
K11 ¼ Lims!1
s
ða þ b þ c Þ2 sðs þ a þ b þ c Þ
Y30 ðsÞ ¼ ð18:16Þ ¼ Lims!1
ða þ bÞðb þ cÞðc þ aÞ s½sðab þ bc þ caÞ þ abcŠ
1
which is a positive constant, equivalent to a ¼ :
ab þ bc þ ca
resistance of value (a + b) (b + c) (c + a)/(a +
b + c)2. The realization obtained at this stage is ð18:17Þ
shown in Fig. 18.1c, which is, in fact, a complete
This removal means partial realization
realization. Nothing is left to test anymore! 0
through a capacitance of value K11 in parallel,
We have, therefore, shown that Z(s) is PR and 0

leaving a remainder Y2 ðsÞ, as shown in
in the process, which involved only the Foster
preamble, we have solved the synthesis problem. Fig. 18.2b, where

sðs þ a þ b þ c Þ s
Y2 ðsÞ ¼
sðab þ bc þ caÞ þ abc ab þ bc þ ca
Alternative Realization
s½ða þ b þ cÞðab þ bc þ caÞ abcŠ
¼ :
It is known that solution to a synthesis problem, ðab þ bc þ caÞ½sðab þ bc þ caÞ þ abcŠ
if it exists, is never unique [1]. Can we, in the ð18:18Þ
present case, find another realization? Let us see.
As in the previous section, let us first remove Once again, because of Eq. 18.12, the coeffi-
the pole at s = ∞, leaving the remainder Z10 ðsÞ cient of s in the numerator of Eq. 18.18 is posi-
given by Eq. 18.9. Instead of removing the pole tive, and we can re-write Y2 ðsÞ as
at s = 0 from Z10 ðsÞ, note that Z10 ðsÞ has a zero at
s = ∞. Let us, therefore, consider the admittance s ð a þ bÞ ð b þ c Þ ð c þ aÞ
Y2 ðsÞ ¼ :
Y10 ðsÞ ¼ l=Z10 ðsÞ and remove its pole at s = ∞. ðab þ bc þ caÞ½sðab þ bc þ caÞ þ abcŠ
The residue is ð18:19Þ

Fig. 18.2 Various steps in (a)


the alternative realization of Z 1 (b) 1
(s)
1
Z(s) Z(s) Y2 (s )
Z1¢ (s) ab +bc + ca

(a + b )(b + c )(c + a )
1 abc (ab + bc + ca )
(c)

1
Z(s) (ab + bc + ca )2
ab + bc + ca
(a + b )(b + c )(c + a )
Alternative Realization 145

Y2 ðsÞ has a zero at the origin, which can be A word of caution must be sounded here. That
removed as the pole of Z2 ðsÞ ¼ 1=Y2 ðsÞ: In fact, continued fraction expansion works here is a
we can easily see that matter of luck; it may not work in a general RLC
case. Even in this case, you may try continued
ðab þ bc þ caÞ2 fraction expansion starting with the lowest
Z2 ðsÞ ¼
ða þ bÞðb þ cÞðc þ aÞ powers and soon get frustrated!
ð18:20Þ
abcðab þ bc þ caÞ
þ :
sða þ bÞðb þ cÞðc þ aÞ
A Problem for the Student
The second term corresponds to the pole at the
origin. Also, observe that this decomposition Can you find out another alternative minimal
corresponds to a series combination of a capaci- realization? If this proves tough, try relaxing on
tance and resistance value indicated in Fig. 18.2c. the minimal requirement—first with three reac-
The synthesis is complete and as you can see, this tances and more than one resistance and later
is different from the network of Fig. 18.1c. with more than three reactances and more than
It is interesting to observe that the alternative one resistance. No more! Isn’t life simple?
realization can be mechanized through continued
fraction expansion starting with the highest Acknowledgments Acknowledgement is made to S.
powers, as follows: Tirtoprodjo who first posed the problem [2] and to S.
Erfani et al. who gave the solution of Fig. 18.1, although
s3 þ s2 ða þ b þ cÞ þ sðab þ bc þ caÞ þ abc in a cryptic form [3].
Z ðsÞ ¼
s2 þ sða þ b þ c Þ
ð18:21Þ
(
References
s ðab þ bc þ caÞ2
¼ s þ 1= þ 1=
ab þ bc þ ca ða þ bÞðb þ cÞðc þ aÞ 1. F.F. Kuo, Network Analysis and Synthesis (Wiley,

sða þ bÞðb þ cÞðc þ aÞ
 New York, 1966). Chapter 10
þ 1= : 2. S. Tirtoprodjo, On the lighter side. IEEE CAS
abcðab þ bc þ caÞ
Magazine, 5(1), 25 (1983, March)
ð18:22Þ 3. S. Erfani et al., On the lighter side—Solution to the
march puzzle. IEEE CAS Magazine 5(2), 22 (1983)
Interference Rejection in a UWB
System: An Example of LC Driving 19
Point Synthesis

Synthesis of an LC driving point function is extensive tables are available in textbooks [1]
one of the initial topics in the study of network and handbooks [2]. In particular, one-port or
synthesis. This chapter gives a practical driving point synthesis, one of the starting topics
example of application of such synthesis in in the subject, appears to be of little use in
the design of a notch filter for interference practice. This chapter deals with a recent appli-
rejection in an ultra wide-band (UWB) system. cation of LC driving point synthesis, which may
The example can used to motivate students to be used to enhance the motivation of students to
learn network synthesis with all seriousness, learn the subject with all seriousness.
and not merely as a matter of academic The example is taken from a 2009 paper [3]
exercise. dealing with an integrated double-notch filter and
implemented with 0.13 lm CMOS technology,
for rejection of interference in an ultra wide-band
Keywords (UWB) system. The problem, translated to net-

LC driving point synthesis Notch filter work synthesis language, is to design a filter to

UWB systems Network synthesis reject the frequencies around f1 and f2 and
pass those around fp, where f1 < fp < f2. The
authors of [3] set the design values as x1 = 2p
2.4  109 rad/s, x2 = 2p  5.2  109 rad/s and
xp = 2p  4.8  109 rad/s, where x = 2pf, and
Introduction
suggested and designed the network shown in
Fig. 19.1 for this purpose. The current generator
The subject of network synthesis is considered
and the shunt resistance in Fig. 19.1 represent the
by most students as moderately difficult, mathe-
equivalent circuit of an amplifier and the LC
matical and mainly of academic interest. The
network has a driving point impedance Z(s),
underlying reason is that it encounters very few
which is required to have series resonance (and
practical applications, except in the design of
hence zero impedance) at x1 and x2, thus
filters, which is a two-port network, for which
shunting out all the current from the load at these
Source: S. C. Dutta Roy, “Interference Rejection in a frequencies, and parallel resonance (and hence
UWB System: An Example of LC Driving Point infinite impedance) at xp, thus passing all the
Synthesis,” IETE Journal of Education, vol. 50, current through the load at this frequency. Thus,
pp. 55–58, May–August 2009.

© Springer Nature Singapore Pte Ltd. 2018 147


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_19
148 19 Interference Rejection in a UWB System: An Example …

basically, one requires to design an impedance K1 ¼ 1 ¼ L1 ; ð19:3bÞ


Z(s) of the form
  and
s2 þ x21 s2 þ x22
ZðsÞ ¼   ; ð19:1Þ   
s s2 þ x2p Kp ¼ x2p x21 x22 x2p =x2p ) C2 ¼ 1=Kp ;
L2 ¼ Kp =x2p :
where without any loss in generality, the scaling
ð19:3cÞ
constant is assumed to be unity. In this chapter,
we shall treat Eq. 19.1 as the function to be These results are reproduced in Table 19.1, in
synthesized, and derive the form of Fig. 19.1, as
which the capacitors are given as C1 x21 and
well as the other alternative canonic forms, along
C2 x21 , for later convenience.
with their element values. We shall then compare
Foster II form is obtained by the PFE of Y(s)
the various networks on the basis of the
= 1/Z(s); the results are as follows:
required total inductance, total capacitance and
grounded and ungrounded capacitors, which are K1 s K2 s
important considerations for integrated circuit Y ðsÞ ¼ þ 2 ;
þ x1 s þ x22
s22
implementation.    
K1 ¼ x2p x21 = x22 x2p ; ð19:4aÞ
 
K2 ¼ x22 x2p = x22 x21 ;

The Four Canonical Realizations

L1 ¼ 1=K1 ; C1 ¼ 1= x21 L1 ;

The network of Fig. 19.1 is easily recognized as the L2 ¼ 1=K2 ;
C2 ¼ 1= x22 L2 :

Foster I realization of Eq. 19.1 [4]. As is well
known, there are four basic structures for the ð19:4bÞ
canonical synthesis of an LC driving point synthe-
sis, viz. Foster I, Foster II, Cauer I and Cauer II [4]. The elements in Eq. 19.4b refer to the net-
Foster I form is obtained by partial fraction work shown in Fig. 19.2, and the values are
expansion (PFE) of Z(s), given by shown in Table 19.1.
For Cauer I network, we make a continued
K0 Kp s fraction expansion (CFE) of Eq. 19.1 starting
Z ðsÞ ¼ þ K1 s þ 2 ; ð19:2Þ
s s þ x2p with the highest powers. The quotients of the
CFE give the following element values with
where with reference to Fig. 19.1, reference to the structure shown in Fig. 19.3.
These values are also shown in Table 19.1.
K0 ¼ x21 x22 =x2p ) C1 ¼ 1=K0 ; ð19:3aÞ
1 1
L1 ¼ 1; C1 ¼ ¼ ;
x21 þ x22 x2p x23
C1 L1
x43 x23 x2p x21 x22
L2 ¼ ; C2 ¼ 
x23 x2p x21 x2
2 x23 x22 x21
Z(s) L2 C2 ð19:5Þ

The Cauer II realization will be of the form


shown in Fig. 19.4, and the element values are
Fig. 19.1 Foster I network connected to a current obtained from the quotients of the CFE of
generator and a shunt resistance, which represent the
Eq. 19.1 starting with the lowest powers. These
equivalent circuit of an amplifier
The Four Canonical Realizations 149

Table 19.1 I Comparison of the four canonical structures


Parameter Expressions and values of network
L1 L2 L1 + L2 x21 C1 x21 C2 x21 (C1 + C2)
(Num. (Num. value)
value)
! !
Foster I 1 x21 x22 2.130 x2p x21 2.775
1 1 x2p L2
x2p x2p x22
(0.130) (0.852) (1.923)

Foster II ðx22 x21 Þ ðx22 x21 Þ 1.630 1/L1 x21 3.429


ðx2p x21 Þ ðx22 x2p Þ (3.268) x22 L2
0.306) (1.324) (0.161)
Cauer I 1 x43 2.378 x21 x23 0.852
x23 x2p x21 x22 x23 x22 L2
(1.378) (0.590) (0.262)
x23 ¼ x21 þ x22 x2p
Cauer II x24 x24 9.810 x2p x21 0.877
x2p x24 x2p x22 x24 L2
(1.130) (8.680) (0.852) (0.025)
x21 x22
x24 ¼ x21 x22
x2p
Note The expressions are slightly modified versions of those in the text

values are given below and are also shown in


Table 19.1.

L1 L2 L1 ¼ x24 =x2p ; x24 ¼ x21 þ x22 x21 x22 =x2p ;


Z(s)
 
L2 ¼ x24 = x24 x2p ; C2 ¼ 1= x24 L2 ;

C1 C2
ð19:6aÞ
Fig. 19.2 Foster II form of Z(s)
and
L1 L2
C1 ¼ x2p = x21 x22 :

ð19:6bÞ

Z(s) C1 C2

Comparison

Fig. 19.3 Cauer I form of Z(s) Using the same specifications as given in [3],
and mentioned in the Introduction, we have
C1 C2 computed the numerical values of the elements
for the various structures. These are shown in
Table 19.1 inside brackets below the corre-
Z(s) L1 L2 sponding algebraic expression. Note that no
powers of 10 are involved in the expressions
for C1 x21 and C2 x21 because of multiplica-
tion of the capacitors by x21 . Also, for
Fig. 19.4 Cauer II form of Z(s)
150 19 Interference Rejection in a UWB System: An Example …

computational convenience, some of the alge- Problems


braic expressions given in Table 19.1 are also
slightly modified versions of the formulas given P:1. Can you find an alternative network to the
in the text. C1, L1, L2, C2 configuration in Fig. 19.1?
A look at the total capacitance (Ct) and total P:2. Could we do with third-order impedances
inductance (Lt) values in Table 19.1 show that for C1, L1 combination as will as L2, C2
Cauer realizations have considerably smaller Ct combination in Fig. 19.1. What will this
as compared to the Foster realizations, with circuit perform as?
Cauer I having the lowest value and Cauer II P:3. Suppose there are two frequencies which
having a marginal increase over the same. The have to be rejected. Draw the necessary
reverse is the case with respect to Lt with Foster I circuit configuration.
having the lowest value and Foster II having a P:4. Same as P.3 except that three frequencies
marginal increase over it. Another point to be have to be rejected. Draw an alternation
noted is that both Foster II and Cauer I networks circuit also.
have both capacitors connected to ground, which P:5. Draw another alternative circuit for P.4 and
is, in general, a desirable feature in integrated compare the two.
circuits.
Acknowledgements The work was supported by the
Indian National Science Academy through their Honorary
Scientist scheme. The author acknowledges the help of
Effect of Losses Dr. Sumantra Dutta Roy for his help in the preparation of
the diagrams.
In practice, all reactive elements are lossy, i.e.
all inductors have a series resistance and all
capacitors have a shunt conductance. How- References
ever, the losses in inductors dominate over
those in the capacitors. A practical scheme for 1. L. Weinberg, Network Analysis and Synthesis
(McGrawHill, New York, 1962)
effective compensation of the losses for the
2. W.K. Chen (ed.), in Passive, Active and Digital
network in Fig. 19.1 has been given in [3] Filters, Volume 3 of Handbook of Circuits and Filters
using a single negative resistance realized with (Boca Raton, CRC Press, 2009)
active devices. Analysis of the effects of los- 3. A. Vallese, A. Bevilacqua, C. Sandner, M. Tiebout, A.
Gerosa, A. Neviani, Analysis and design of an
ses on the notch depths and maximum output
integrated notch filter for the rejection of interference
for the four structures will be a worthwhile in UWB systems. IEEE J. Solid-State Circuits 44,
project for the students, and a comparison 331–343 (2009)
may reveal the superiority of one network 4. F.F. Kuo, Network Analysis and Synthesis (Wiley,
New York, 1966). Chapter 11
over the others.
Low-Order Butterworth Filters: From
Magnitude to Transfer Function 20

A simple method is given for obtaining the HN(s) has all its poles on the left half of the unit
transfer function of Butterworth filters of circle centered at s = 0, at equal angular intervals
orders 1 to 6. of p/N, with none occurring on the jx-axis. This
gives rise to the property that if we write

Keywords HN ðsÞ ¼ 1=BN ðsÞ ð20:2Þ



Butterworth filters Transfer functions
 
Magnitude Orders of filter Chebyshev filter then, BN(s), the so-called Butterworth polyno-
mial, has symmetrical coefficients, i.e. BN(s) is of
the form

BN ðsÞ ¼ 1 þ b1 s þ b2 s2 þ    þ b2 sN 2
þ b1 sN 1
þ sN
Butterworth Filters ð20:3Þ

Butterworth filter is the most elegant of all filters In standard textbooks (see, e.g. [1–3]), the
in more than one way. The magnitude squared procedure prescribed for finding BN(s) is to locate
function of a low-pass Butterworth filter, having all the roots of
a normalized 3 dB cutoff frequency of 1 rad/sec,
is given by BN ðsÞBN ð sÞ ¼ 1 þ ð s2 ÞN ð20:4Þ

jHN ðjxÞj2 ¼ 1= 1 þ x2N ;


 
ð20:1Þ and then to take the left half-plane ones for BN(s),
i.e.
where N is the order of the filter. As is well N
Y
known, it has a monotonically decaying response BN ð s Þ ¼ ðs sk Þ; Re sk \0; k ¼ 1 to N
in the whole frequency range. It is maximally flat k¼1
at dc, and the corresponding transfer function ð20:5Þ

Source: S. C. Dutta Roy, “Low Order Butterworth


Filters: From Magnitude to Transfer Function,” Journal
of the IETE, vol. 37, pp. 221–225, October–December
1996.

© Springer Nature Singapore Pte Ltd. 2018 151


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_20
152 20 Low-Order Butterworth Filters: From Magnitude to Transfer …

It may be noted that Eq. 20.4 is simply the so that


denominator of Eq. 20.1 with x2 replaced by BN ð sÞ ¼ mN ðsÞ nN ðsÞ ð20:10Þ
−s2, a process known as ‘analytic continuation’.
An explicit formula for sk is [3]: Combining Eqs. 20.4, 20.8 and 20.9, we get

sk ¼ sin½ð2k 1Þp=ð2N ފ þ jcos½ð2k 1Þp=ð2N ފ; m2N n2N ¼ 1 þ ð lÞN s2N ð20:11Þ
k ¼ 1 to N
ð20:6Þ or,
2  2
Obviously, these roots occur in symmetry 1 þ b2 s2 þ    b1 s þ b3 s 3 þ   

with respect to the real as well as the imaginary ¼ 1 þ ð 1ÞN s2N ð20:12Þ
axes. It is also clear that if N is odd, then there is
a root of BN(s) at s = −1, i.e. (s + 1) is a factor of Obviously, the constant terms and the
BN(s); the other (N − 1) roots will be complex coefficients of s2N on both sides of Eq. 20.12 are
and will occur in conjugate pairs. Consequently, identically equal. The coefficients of s2 ; s4 ; . . .;
one can write s2N 2 must then each be zero. This, combined

8
> N=2
Q2 
>
>
< s þ 2s sin½ð2k 1Þp=ð2N ފ þ 1 ; N even
k¼1
BN ð s Þ ¼ ðN Q1Þ=2 
ð20:7Þ
> 
: ðs þ 1Þ s2 2s sin½ð2k 1Þp=ð2N ފ þ 1 ; N odd
>
>
k¼1

When N is large, one can use Eq. 20.7 for with the symmetry of the coefficients, makes it
finding BN(s) in quickest possibly way. However, easy to find BN(s) for low orders.
when N is low, an alternative trick can be applied, A further simplification can be obtained for
and that is the subject matter of this chapter. the odd-order case. It follows from Eq. 20.11
that if N is odd, then (1 − s2) will be a factor
of BN(s) BN(−s), which contributes (1 + s) to
Basis of the Alternative Method BN(s) and (1 − s) to BN(−s) as factors. The
other factor of BN(s) BN(−s) will be
Since the roots of BN(s) are strictly in the left half
ð1 þ s2 þ s4 þ    þ s2N 2 Þ. Hence, the proce-
of the s-plane, BN(s) qualifies as a strict Hurwitz
dure simplifies to that of finding the coefficients
polynomial. We can write
of the polynomial
BN ð s Þ ¼ 1 þ b 2 s 2 þ    þ b 1 s þ b 3 s 3 þ   
   
CN 1 ðsÞ ¼1 þ c1 s þ c2 s2 þ    þ c2 sN 3
¼ mN ðsÞ þ nN ðsÞ;
2
þ c 1 sN þ sN 1 ;
ð20:8Þ
ð20:13Þ
where mN and nN are the even and odd parts of
BN(s). By definition where

mN ð sÞ ¼ mN ðsÞ and nN ð sÞ ¼ nN ðsÞ; CN 1 ðsÞCN 1 ð sÞ ¼ 1 þ s2 þ s4 þ    þ s2N 2

ð20:9Þ ð20:14Þ
Basis of the Alternative Method 153

Notice that CN−1(s) is also written with coef- Second-Order Case


ficient symmetry. This follows from the fact that
all real polynomial factors of a polynomial with For N = 2, we have from Eq. 20.3,
symmetrical coefficients retain the symmetry
property. From Eqs. 20.13 and 20.14, we have B2 ð s Þ ¼ 1 þ b 1 s þ s 2 ð20:19Þ
2  2 and hence, from Eq. 20.12,
1 þ c2 s2 þ    c1 s þ c 3 s3 þ   


¼ 1 þ s2 þ s4 þ    þ s2N 2 ð20:15Þ 2
1 þ s2 b21 s2 ¼ 1 þ s4

ð20:20Þ
0 2N−2
Again, the coefficients of s and s are
identical on both sides, while the coefficients of Equating the coefficients of s2 on both sides gives
s2 ; s4 ; . . .; s2N 4 , each equated to unity, will form
a set of equations for determining the ci 2 b21 ¼ 0 ð20:21Þ
coefficients.
or
We shall now apply the method to all the pffiffiffi
possible low orders; in the process, the difficul- b1 ¼ 2 ð20:22Þ
ties encountered for high orders will become
obvious. It will also be clear that one cannot Hence,
apply the method blindly, because of the occur- pffiffiffi
rence of multiple solutions for the coefficients, B2 ðsÞ ¼ 1 þ 2s þ s2 ð20:23Þ
and the consequent need for identifying the
correct solutions. An obvious constraint is that all
coefficients must be positive, but, as will be Third-Order Case
shown, this alone is not enough.
As already pointed out, (1 + s) will be a factor of
B3(s), so that
Application to Low Orders
B3 ðsÞ ¼ ð1 þ sÞC2 ðsÞ ð20:24Þ
First-Order Case where

For N = 1, we have from Eq. 20.11, C2 ðsÞ ¼ 1 þ c1 s þ s2 ð20:25Þ

m21 n21 ¼ 1 s2 ð20:16Þ From Eq. 20.15, then, we get


2
1 þ s2 c21 s2 ¼ 1 þ s2 þ s4

Consequently, ð20:26Þ

m1 ¼ 1 and n1 ¼ s ð20:17Þ Equating the coefficients of s2 on both sides gives

so that 2 c21 ¼ 1 ð20:27Þ


B1 ðsÞ ¼ 1 þ s ð20:18Þ or,
This is, of course, a trivial case. c1 ¼ 1 ð20:28Þ
154 20 Low-Order Butterworth Filters: From Magnitude to Transfer …

so that Which ones are admissible? To test this, we


bring in the strict Hurwitz character of B4(s),
B3 ð s Þ ¼ ð 1 þ s Þ 1 þ s þ s 2
 
ð20:29Þ which demands that the ratio m4(s)/n4(s) should
be an LC driving point function of the fourth
¼ 1 þ 2s þ 2s2 þ s3 ð20:30Þ order. Let us try

pffiffiffi pffiffiffi
rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi

b2 ¼ 2 2 and b1 ¼ 2 2 2
Fourth-Order Case
We can write
For N = 4, we have from Eq. 20.8 and the
symmetry of coefficients,  pffiffiffi 2
m4 ðsÞ s4 þ 2 2 s þ1
¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
pffiffiffiffi ;
B4 ð s Þ ¼ 1 þ b1 s þ b2 s 2 þ b1 s 3 þ s 4 ð20:31Þ n4 ð s Þ 
2 2 2 ðs3 þ sÞ

and from Eq. 20.12, we get " pffiffiffi 2 #


1 1 2 s þ1
¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
pffiffiffiffi þs ;

1 þ b2 s2 þ s4
2  2
b1 s þ b1 s 3 ¼ 1 þ s 8

2 2 2 s3 þ s
ð20:32Þ ð20:38Þ

Equating the coefficients of s and s on both 6 4 p


Since ð1 2Þ is negative, Eq. 20.38 cannot
sides of Eq. 20.32 gives rise to the following set qualify as an LC driving point function. Hence,
of nonlinear equations: the acceptable solutions are the ones with positive
signs in Eqs. 20.36 and 20.37. We therefore get
2b2 ¼ b21 ð20:33Þ
pffiffiffi  pffiffiffi
rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
 ffi
and B4 ðsÞ ¼1 þ 2 2 þ 2 s þ 2 þ 2 s2
b22 þ 2 ¼ 2b21 ð20:34Þ pffiffiffi 3
rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
 ffi
þ 2 2 þ 2 s þ s4
Note that because of symmetry, equating the
ð20:39Þ
coefficients of s2 on both sides of Eq. 20.32 gives
the same result as Eq. 20.33. Combining
Eqs. 20.33 and 20.34 gives ¼ 1 þ 2:6131 s þ 3:4142s2 þ 2:6131 s3 þ s4
ð20:40Þ
b22 4b2 þ 2 ¼ 0 ð20:35Þ
This case illustrates the difficulty we
Solving this quadratic, we get the following encounter with multiple solutions. Finding the
two solutions for b2: acceptable solution requires a Hurwitz test,
pffiffiffi which, in this particular case, has not proved to
b2 ¼ 2  2 ð20:36Þ be difficult. Note that for the Hurwitz test, one
can also use continued fraction expansion, rather
Consequently, from Eq. 20.33 than partial fraction expansion.

pffiffiffi
rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi

b1 ¼ 2 2 2 ð20:37Þ Fifth-Order Case

Both of these values of b2 and b1 are posi- As in the third-order case, (1 + s) is a factor of
tive and are candidates for belonging to B4(s). B5(s) and we can write
Application to Low Orders 155

p p p
B5 ðsÞ ¼ ð1 þ sÞC4 ðsÞ; ð20:41Þ ¼ð1 þ ð 5 þ 1Þs þ ð 5 þ 3Þs2 þ ð 5 þ 3Þs3
p
þ ð 5 þ 1Þs4 þ s5 Þ;
¼ ðl þ sÞ l þ c1 s þ c2 s2 þ c1 s3 þ s4 ; ð20:42Þ
 
ð20:52Þ
where
¼1 þ 3:2361 s þ 5:2361 s2 þ 5:2361 s3
2  2
1 þ c2 s2 þ s4 c1 s þ c 1 s3 þ 3:2361 s4 þ s5


¼ 1 þ s2 þ s4 þ s6 þ s8 ð20:43Þ ð20:53Þ

Equating the coefficients of s2 and s4 on both


sides gives Sixth-Order Case
2c2 c21 ¼1 ð20:44Þ
For N = 6,
and
B6 ðsÞ ¼ 1 þ b1 s þ b2 s2 þ b3 s3 þ b2 s4 þ b1 s5 þ s6
c22 þ2 2c21 ¼1 ð20:45Þ ð20:54Þ

Combining Eqs. 20.44 and 20.45 give the and


following quadratic equation in c2:
2  2
1 þ b2 s 2 þ b2 s 4 þ s 6 b1 s þ b3 s 3 þ b1 s 5

c22 4c2 þ 3 ¼ 0 ð20:46Þ ¼ 1 þ s12
It is easy to see that the solutions for c2 are ð20:55Þ

c2 ¼ 3; 1 ð20:47Þ Equating the coefficients of s2, s4 and s6 on both


sides give the following equations:
Correspondingly, from Eq. 20.44,
2b22 ¼ b21 ; ð20:56Þ
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi p
c1 ¼ 2c2 1 ¼ 5; 1 ð20:48Þ
2b2 þ b22 ¼ 2b1 b3 ; ð20:57Þ
Try c2 = c1 = 1; then
and
C4 ðsÞ ¼ 1 þ s þ s2 þ s3 þ s4 ð20:49Þ
2 þ 2b22 ¼ 2b21 þ b23 ð20:58Þ
The ratio of even to odd parts of C4(s) is
From Eq. 20.56, we get
s4 þ s2 þ 1 1
¼ sþ 3 ; ð20:50Þ pffiffiffiffiffiffiffi
s3 þ s s þs b1 ¼ 2b2 ð20:59Þ

which is obviously not an LC driving point and from Eqs. 20.58 and 20.59, we get
function. Hence, the acceptable solutions are:
p p
c2 = 3 and c1 ¼ 5 giving b3 ¼ 2j1 b2 j; ð20:60Þ
p p p
B5 ðsÞ ¼ ð1 þ sÞ 1 þ 5s þ 3s2 þ 5s3 þ s4
 
where the magnitude sign is included to indicate
ð20:51Þ that b3 must be positive. Combining Eqs. 20.57,
156 20 Low-Order Butterworth Filters: From Magnitude to Transfer …

20.59 and 20.60 give the following cubic equa- Equating the coefficients of like powers of
tion in b2: s on both sides gives
b32 12b22 þ 36b2 16 ¼ 0 ð20:61Þ 2c2 c21 ¼ 1; ð20:69Þ
A cubic equation is analytically solvable [4],
although not as easily as the quadratic equation. c22 þ 2c2 2c1 c3 ¼ 1; ð20:70Þ
For the general cubic equation
and
3 2
ax þ bx þ cx þ d ¼ 0 ð20:62Þ
2 þ 2c22 c23 2c21 ¼ 1 ð20:71Þ
2
all the three roots are real if b − 3ac > 0, while
From Eq. 20.69, we get
if b2 − 3ac < 0, then there is only one real root,
the other two being complex conjugates of each pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
other. As can be easily verified, the second case c1 ¼ 2c2 1 ð20:72Þ
is valid for Eq. 20.61; hence, our job is simply to
find the real root of Eq. 20.61. Trial and error while Eq. 20.71 gives
seems to be the best policy at this stage, and after
a few trials, we get qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
c3 ¼ 3 4c2 þ 2c22 ð20:73Þ
b2 ¼ 7:4641 ð20:63Þ
Substituting these values in Eq. 20.70, we get
as a reasonably good solution. The correspond-
ing values of the other two coefficients are qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
ffi
c22 þ 2c2

obtained from Eqs. 20.59 and 20.60 as 1¼2 ð2c2 1Þ 3 4c2 þ 2c22

b1 ¼ 3:8637 and b3 ¼ 9:1416 ð20:64Þ ð20:74Þ

Hence, finally, Squaring both sides and simplifying, we get the


following quartic equation in c2:
B6 ðsÞ ¼1 þ 3:8637 s þ 7:4671 s2 þ 9:1416 s3
c42 12c32 þ 42c22 44c2 þ 13 ¼ 0 ð20:75Þ
þ 7:4671 s4 þ 3:8637 s5 þ s6
ð20:65Þ A cubic equation was bad enough; this is
worse! Fortunately, however, a quartic equation
Seventh-Order Case can also be solved analytically [4]; however, the
effort involved does not justify proceeding fur-
Since the order is odd, we have ther. Using Eq. 20.7 appears to be a much better
proposition, not just for N = 7, but for all higher
B7 ðsÞ ¼ ð1 þ sÞC6 ðsÞ; ð20:66Þ orders.

¼ ð1 þ sÞ 1 þ c1 s þ c2 s2 þ c3 s3 þ c2 s4 þ c1 s5 þ s6 ;
 

ð20:67Þ
Application to Chebyshev Filters
where
What about filters other than Butterworth? For
2 4 6 2
2 example, Chebyshev? Does the procedure pre-
c1 s þ c 3 s3 þ c 1 s5
  
1 þ c2 s þ c2 s þ s
sented here offer any simplicity? Let us
¼ 1 þ s2 þ s4 þ s6 þ s8 þ s10 examine.
ð20:68Þ
Application to Chebyshev Filters 157

For the Chebyshev low-pass filter, the nor- Thus,


malized magnitude squared function is given by b0 ¼ 1 and b1 ¼ e ð20:82Þ

jHN ðjxÞj2 ¼ 1= 1 þ e2 TN2 ðxÞ ;


ð20:76Þ and

where TN is the Chebyshev polynomial of the D1 ðsÞ ¼ 1 þ es ð20:83Þ


first kind, of order N. For the first few orders, we
For the second-order case, we get
have
2 2
b0 þ b2 s2 b21 s2 ¼ 1 þ e2 2x2
 
T1 ð x Þ ¼ x 1 jx2 ¼ s2

T2 ðxÞ ¼ 2x2 ð20:84Þ


ð20:77Þ
T3 ðxÞ ¼ 4x3 3x 1 On simplification, Eq. 20.84 becomes
T4 ðxÞ ¼ 8x4 8x2 þ 1
b20 þ 2b2 b21 s2 þ b22 s4
 

¼ 1 þ e2 þ 4e2 s2 þ 4e2 s4
 
Let ð20:85Þ

HN ðsÞ ¼ 1=DN ðsÞ ¼ 1=½mN ðsÞ þ nN ðsފ; Equating the coefficients of like powers of
ð20:78Þ s gives
pffiffiffiffiffiffiffiffiffiffiffiffi
b0 ¼ 1 þ e 2 ð20:86Þ
¼ 1= b0 þ b1 s þ b2 s2 þ    þ bN sN ; ð20:79Þ
 

b2 ¼ 2e ð20:87Þ
where the usual notations have been used. Note
that neither b0 nor bN is fixed, as in the Butter- and
worth case; there is no coefficient symmetry
either. The equation for finding the coefficients is 2b0 b2 b21 ¼ 4e2 ð20:88Þ


b0 þ b2 s 2 þ   
2 
b1 s þ b3 s 3 þ   
2 Combining Eqs. 20.86–20.88, we get
¼ 1 þ e2 TN2 ðs=jÞ rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
pffiffiffiffiffiffiffiffiffiffiffiffi 
ð20:80Þ b1 ¼ 2 e 1 þ e2 e ð20:89Þ

Although the right-hand side of Eq. 20.80 Thus,


involves j, in actual practice, CN2 ðxÞ shall involve
only x2 ; x4 ; . . . so that x2 has to be replaced by
rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
pffiffiffiffiffiffiffiffiffiffiffiffi ffi
pffiffiffiffiffiffiffiffiffiffiffiffi
D2 ðsÞ ¼ 1 þ e2 þ 2 e 1þe 2 e s þ 2es2
−s2; hence, only real coefficients would be
encountered. ð20:90Þ
For the first-order case, we have
Now consider the third-order case, for which we
b20 b21 s2 ¼ 1 e2 s2 ð20:81Þ get, from Eqs. 20.77 and 20.80.

2  2 2
b0 þ b2 s 2 b1 s þ b3 s 3 ¼ 1 þ e2 4x3
 
3x jx2 ¼ s2
2
¼ 1 þ e2 ð sÞ2 4s2 3 ð20:91Þ


¼ 1 9e2 s2 24e2 s4 16e2 s6


158 20 Low-Order Butterworth Filters: From Magnitude to Transfer …

Equating the coefficients of like powers of s on Butterworth filter and other types of filters, even
both sides, we get of a lower order, have been pointed out.

b0 ¼ 1; b3 ¼ 4e ð20:92Þ
Problems
2b2 b21 ¼ 9e2 ð20:93Þ
P:1. Without finding poles and zeroes, can you
and formulate a procedure and give the equa-
tions for finding an Nth-order Butterworth
b22 8eb1 ¼ 24e2 ; ð20:94Þ polynomial? No, no, I am not asking you to
solve these equations, because that will be
where, in Eqs. 20.93 and 20.94, the values given too must to ask for. With the kind of
in Eq. 20.92 have been utilized. Combining training I have given to you, I believe you
Eqs. 20.93 and 20.94 give the following cubic should be able to do it.
equation for b2: P:2. For the third-order case, find the zeroes of
the third-order Butterworth polynomial. Do
b32 þ 48e2 b2 128e2 ¼ 0 ð20:95Þ not bring poles and zeroes into the scene.
They pollute and hamper you intellectual
According to the theory of cubic equations
development! Of course, you substitute the
[4], this also has only one real root, which
values of the coefficients from the text.
obviously depends on e.
P:3. Same as P.1 for order = 4.
One should, at this point, be convinced that
P:4. Same as P.2 for order = 5.
the applicability of the technique presented in
P:5. Same as P.2 for order = 7.
this chapter: to the Chebyshev filter, or for that
matter, to any other kind of filter would be lim-
ited. Even for the Butterworth case, the limit
appears to be set by the sixth order. References

1. M.E. Van Valkenburg, Introduction to Modern Net-


work Synthesis (Wiley, New York, 1964)
Conclusion 2. N. Balabanian, in Network Synthesis (Englewood
Cliffs, NJ, Prentice Hall, Inc, 1958)
3. S. Karni, Network Theory: analysis and Synthesis
A simple method is presented for finding the (Allyn and Bacon Inc, Boston, 1966)
Butterworth polynomials of orders one to six, 4. S. Neumark, Solution of Cubic and Quartic Equations
and its limitations for higher orders of (Pergamon Press, London, 1965)
Band-Pass/Band-Stop Filter Design
by Frequency Transformation 21

 
Given the specifications of a band-pass filter x0 s x0
S¼ þ ; ð21:1Þ
(BPF) or a band-stop filter (BSF), the same B x0 s
can be translated to those of a normalized
low-pass filter (LPF) by frequency transfor- where S = R + jX is the LPF complex frequency
mation. Once the latter is designed, one can variable, s ¼ r þ jx is the BPF complex fre-
realize the BPF/BSF by using the same quency variable,
transformation in a reverse manner. The
process of translation to the normalized LPF B ¼ xp2 xp1 ð21:2Þ
is usually not explained in details in standard
textbooks, and in some of them, the process is the bandwidth and
has even been wrongly stated or illustrated. pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffiffiffiffi
x0 ¼ xp1 xp2 ¼ xs1 xs2 ð21:3Þ
This chapter clarifies this important step in
BPF/BSF design.
is the centre frequency of the BPF response,
which is geometrically symmetrical about x0 .
Keywords
Similarly, the BSF response of Fig. 21.1c can be

Band-Pass Band-Stop  Frequency obtained from the LPF response of Fig. 21.1a
through the transformation.
transformation
  
x0 s x0
S ¼ 1= þ ; ð21:4Þ
B x0 s

Introduction where, again Eqs. 21.2 and 21.3 are valid, but
B does not have the interpretation of bandwidth.
As is well known the normalized LPF response As in the BPF case, the BSF characteristic is also
of Fig. 21.1a can be transformed to the BPF geometrically symmetrical about x0 .
response of Fig. 21.1b by the transformation Given Fig. 21.1a, it is easy to obtain the
characteristics of Fig. 21.1b or Fig. 21.1c by
using Eqs. 21.1 or 21.4 as the case may be, but
given a BPF or BSF response, how does one go
Source: S. C. Dutta Roy, “Band-Pass/Band-Stop Filter
Design by Frequency Transformation,” IETE Journal of to the normalized LPF response? In particular,
Education, vol. 45, pp. 145–149, July–September 2004. how does one find the edge of the stop-band Xs,

© Springer Nature Singapore Pte Ltd. 2018 159


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_21
160 21 Band-Pass/Band-Stop Filter Design by Frequency Transformation

(a) (b)

(c)

Fig. 21.1 a Normalized LPF characteristic. b BPF response obtained through Eq. 21.1. c BSF response obtained
through Eq. 21.4

pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
in Fig. 21.1a? Also, obviously xp1 xp2 may not If xp1 xp2 ¼ x0s1 x0s2 then no modification of
pffiffiffiffiffiffiffiffiffiffiffiffiffiffi
be equal to xs1 xs2 in the given specifications. the characteristics is necessary. However, if this
How does one proceed? These questions are is not the case, then two cases may arise:
either not answered or not adequately explained
in textbooks [1–3]. Some textbooks [4, 5] in fact I. xp1 xp2 \x0s1 x0s2
have given wrong answers/illustrations (see II. xp1 xp2 [ x0s1 x0s2
Appendix). The purpose of this chapter is to
clarify these important points. In case I, reduce x0s2 to xs2 ¼ xp1 xp2 =x0s1
and rename x0s1 as xs1 , as shown in Fig. 21.2a.
In case II, increase x0s1 to xs1 ¼ xp1 xp2 =x0s2 and
Band-Pass Case rename x0s2 as xs2 , as shown in Fig. 21.2b. In
both cases adjustments have been made to
Let the BPF specifications be: guarantee geometric symmetry, thus facilitating
the application of Eq. 21.1.
1  magnitude  dp ; for xp1  x  p2 ; and Now comes the question of finding Xs. Note
0  magnitude  ds ; for 0  x  x0s1 and x0s2 that S = ±jXs should correspond to s ¼ jxs1 as
 x  1: well as s ¼ jxs2 , where the signs may or may
Band-Pass Case 161

(a) (b)

Fig. 21.2 Adjustments in given BPF response to ensure that xp1 xp2 ¼ xs1 xs2

not correspond to each other. Putting S = jXs and xp2 xp1


Xs ¼ ð21:6Þ
s ¼ jxs1 in Eq. 21.1 and simplifying, we get xs2 xs1
xs2 xs1
Xs ¼ ð21:5Þ
xp2 xp1
Example
Since a positive value of Xs has been
obtained, the correspondence of S = jXs to s ¼ As an example, let us design a maximally flat
jxs1 is validated. Similarly, one can show that BSF to satisfy the specifications shown in
S = jXs also corresponds to s ¼ jxs2 and that Fig. 21.4a, where magnitude refers to that of the
S = −jXs corresponds to s ¼ jxs1 as well as transfer function V2/11 of the network shown in
s ¼ jxs2 .
Fig. 21.4b. Since here fs10 fs20 ¼ 6 ðkHzÞ2 [ fp1 fp2
¼ 5 ðkHzÞ2 , f denoting x=2p, we adjust fs10 to
Band-Stop Case fs1 ¼ 5=3 kHz, and set fs2 ¼ fs20 ¼ 3kHz. Thus,
by Eq. 21.6, we get
The adjustments needed in the band-stop case are
illustrated in Fig. 21.3. It is easily shown that, here

(a)
(b)

Fig. 21.3 Adjustments in given BSF response to ensure that xp1 xp2 ¼ xs1 xs2 . a xp1 xp2 \x0s1 x0s2 ;
xs1 ¼ xp1 xp2 =x0s2 ; x0s2 ¼ xs2 ; b xp1 xp2 \x0s1 x0s2 ; xs2 ¼ xp1 xp2 =x0s2 ; x0s1 ¼ xs1
162 21 Band-Pass/Band-Stop Filter Design by Frequency Transformation

(a)

(b)

Fig. 21.4 a BSF specification, b desired network

fp2 fp1
Xs ¼ ¼ 2:25 ð21:7Þ
fs2 fs1

Thus, the normalized LPF to be designed has


the specifications shown in Fig. 21.5 with a
network of the form of Fig. 21.4b but with a
terminating resistance of 1 ohm. Since a maxi-
mally flat design is needed, the order required is
given by
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
log10 104 1
N ¼ 5:67887 ð21:8Þ
log10 2:25

Thus, a sixth-order Butterworth filter is nee-


ded. Taking values from standard Tables [1], we
get the complete normalized LPF shown in Fig. 21.5 Characteristics of the normalized LPF corre-
Fig. 21.6. To convert it to a de-normalized BSF sponding to the adjusted BSF response of Fig. 21.4a
with a termination of 1 K, the following
replacements are to be made:

Fig. 21.6 Normalized Butterworth filter satisfying the specifications of Fig. 21.5
Example 163

(1) each inductance Li by a parallel combination Appendix


of inductance RLi B=x20 and capacitance
1=ðRLi BÞ; Temes and Lapatra [4] recommend adjusting
(2) each capacitance Ci by a series combination either of the pass-band edges to obtain geometric
of inductance R=ðCi BÞ and capacitance symmetry. This is obviously not advisable
Ci B=ðx20 RÞ; and because admitting part of the transition band into
(3) resistance 1 ohms by R. the pass-band allows undesirable frequencies to
be passed, along with the noise at these fre-
p quencies, thus deteriorating the signal to noise
where R = 1000 ohms, x0 ¼ 2p 5 
103 radians/sec and B = 8p  103 radians/sec. ratio. On the other hand, adjusting the stop-band
by pushing some of the transition band into the
stop-band not only attenuates undesired fre-
Concluding Comments quencies to a greater extent, but also improves
the signal to noise ratio.
This chapter attempts to supply the important Karni [5], in an example to illustrate the
steps needed in designing a BPF/BSF through design of a BPF, computes the stop-band edge of
frequency transformation. As noted in the intro- the normalized LPF as the ratio of the upper
duction, these steps are either wrongly stop-band edge of the BPF to the bandwidth.
stated/illustrated, as discussed in the Appendix, This is obviously wrong!
or not explained in details in standard textbooks.

References
Problems
1. F.F. Kuo, in Network Analysis and Synthesis (Wiley,
1996)
P:1. Determine the low-pass transfer function 2. H. Ruston, J. Bordogna, in Electric Networks: func-
corresponding to a BPF having the same tions, Filters, Analysis (McGraw-Hill, 1966)
specification as those given in the example 3. M.E. Van Valkenburg, in Introduction to Modem
in the text. Network Synthesis (Wiley, 1964)
4. G.C. Temes, J.W. Lapatra, in Introduction to Circuit
P:2. Design the BPF corresponding to the above. Synthesis and Design (McGraw-Hill, 1977), pp. 556–
P:3. Design the LPF corresponding to P.1. 557
P:4. What will happen if geometric symmetry is 5. S. Karni, in Network Theory–Analysis and Synthesis
ignored in the BPF design? (Allyn and Bacon, 1966), p. 379
P:5. Same as above except that the design is to
be for a BSF.
Optimum Passive Differentiators
22

A general, nth order, the transfer function (TF) This chapter is complementary to [1], which
is derived, whose time-domain response deals with optimum passive integrators. Follow-
approximates optimally that of an ideal differ- ing, a parallel approach, optimum differentiators
entiator, optimality criterion chosen being the of order n have been suggested here, the opti-
maximization of the first n derivatives of the mality criterion being the maximum possible
ramp response at t = 0+. It is shown that values for the first n derivatives of the ramp
transformerless, passive, unbalanced realizabil- response at t = 0+.
ity is ensured for n < 3, but for n > 3, the TF is We show that transformerless RLC unbal-
unstable. For n = 3, the TF is not realizable, anced realization is possible only for n < 3, and
however, near optimum results can be obtained that for n  3, the optimum transfer function
by perturbation of the pole locations. Optimum (TF) is unstable. Near optimum results can be
TFs are also derived for the additional con- achieved for n = 3 by perturbation of the pole
straint of inductorless realizability. It is shown locations. RLC realizations for n  2 give, in
that TFs for n  2 are not realizable. For all n, general, a damped oscillatory output around the
however, near optimum results can be achieved ideal differentiated value. However, one can
by small perturbations of the pole locations; this reduce the amplitude of these oscillations such
is illustrated in this chapter for n = 2. Network that the output is within a prescribed limit of
realizations, for a variety of cases, are also tolerance. In this chapter, we assume the toler-
given. ance as ±5% of the ideal differentiated value.
We will also derive an nth-order TF with an
additional constraint of RC realizability, and
Keywords show that for n > 2, optimal RC realizations are
Differentiators  Networks  Optimization not possible; however, near optimum results can
be achieved by small perturbations of the pole
locations for all n.

© Springer Nature Singapore Pte Ltd. 2018 165


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_22
166 22 Optimum Passive Differentiators

Network realizations are given for the fol- With an input vin(t) = t u(t), the output will be
lowing cases: n = 3, suboptimal, RLC; n = 2,
vn ðtÞ ¼ L 1 Vn ðsÞ ¼ L 1
ð1=s2 ÞHn ðsÞ :
 
optimal RLC; n = 2, suboptimal RLC (Oscilla-
tions limited to ±5% of the ideal differentiated ð22:4bÞ
value); n = 2, suboptimal, RC.
The optimality criteria chosen are:

Optimal Transfer Function and Its


Realizability ðiÞ vn ðtÞjt¼1 ¼ 1 ð22:5aÞ

The ideal differentiator has a normalized TF maximum possible mnð1Þ ð0 þ Þ; mnð2Þ ð0 þ Þ;


ðiiÞ
mnð3Þ ð0 þ Þ. . .; mðnÞ þ
n ð0 Þ:
H 0 ðsÞ ¼ s ð22:1Þ
ð22:5bÞ
so that when the input voltage vin(t) is t u(t), i.e. a ðiÞ
where vn ðtÞ:
denotes the ith derivative of
ramp function, the output voltage vout(t) = u(t), a vn(t), and
unit step function. (iii) highest possible order zero, at s = 0, of
The TF given by Eq. 22.1 is not realizable and
most common approximation used is L½1 vn ðtފ ð22:5cÞ

H1 ðsÞ ¼ sT=ðsT þ 1Þ: ð22:2Þ


Criterion (ii) is chosen to minimise the rise
which can be realized by an RC or an RL net- time of vn(t); the reason for the choice of criterion
work, shown in Figs. 22.1a, b, respectively. (iii) will be discussed later. We shall, in the
When driven by vin(t) = t u(t), the output is sequel, impose the condition of a grounded
transformerless network as part of realization
t=T
v1 ðtÞ ¼ Tð1 e ÞuðtÞ: ð22:3Þ constraints. Combining Eqs. 22.4a, 22.4b and
22.5a with the final value theorem gives
v1(t) rises exponentially to the true (ideal) value of
differentiation (=T u(t)), with a time constant T, a0 ¼ 0 and a1 ¼ b0 ð22:6Þ
and reaches the ideal value only at t = ∞. How-
ever, in practice, the time taken for the output to Using Eq. 22.6, Hn(s) and vn(s) become
reach ±10% of the ideal value determines the
usefulness of a differentiator. We shall choose a an sn þ an 1 sn 1 þ    þ a2 s2 þ b0 s
Hn ðsÞ ¼
tighter tolerance of ±5% to compare the perfor- s n þ bn 1 s n 1 þ    þ b2 s 2 þ b1 s þ b0
mance of various differentiators in this chapter. ð22:7aÞ
The approximation will improve with
decreasing T, in Eq. 22.3, but it also reduces the and
output level. We, therefore, derive an optimal TF,
1 an sn þan 1 sn 1 þ  þa2 s2 þb0 s
 
of general order n, which does not suffer from this
Vn ðsÞ ¼ 2 n
disadvantage. Let, the required nth-order TF be s s þbn 1 sn 1 þ  þb2 s2 þb1 sþb0
ð22:7bÞ
an s n þ an 1 s n 1 þ    þ a1 s þ a0
Hn ðsÞ ¼ n :
s þ bn 1 sn 1 þ    þ b1 s þ b0 By the initial value theorem, we have
ð22:4aÞ
vn ð0Þ ¼ Lim s Vn ðsÞ ¼ 0 ð22:7cÞ
s!1
Optimal Transfer Function and Its Realizability 167

(a) (b) (c)

(d) (e) (f)

(g) (h)

(a)
(b)
(c)
(d)
(e)
(f)

(g)
(h)

Fig. 22.1 Differentiator Networks


168 22 Optimum Passive Differentiators

ðiÞ ð2Þ
If we let Vn,1(s) D Lmn ðtÞ; then by the differen- To maximize vn ð0Þ; (criterion Eq. 22.5b),
tiation theorem of Laplace transform, under the Fialkow–Gerst condition, an−1 
bn−1, we have to choose
Vn;i ðsÞ ¼ sVn;i 1 ðsÞ vin 1 ð0Þ; i  1 ð22:8Þ
an 1 ¼ bn 1 ð22:12Þ
Thus,
which gives

Vn;1 ðsÞ sVn ðsÞ vn ð0Þ vnð2Þ ð0Þ ¼ 0 ð22:13Þ


1 an s n þ an 1 s n 1 þ    þ a2 s 2 þ b0 s
 
¼
s sn þ bn 1 sn 1 þ    þ b2 s2 þ b1 s þ b0 From Eqs. 22.11a and 22.12, we get
ð22:9aÞ Vn;2 ðsÞ
ðan 2 bn 2 Þsn 2 þ    þ ða2 b2 Þs2 þ ðb0 b1 Þs b0
¼
and sn þ bn 1 sn 1 þ    þ b2 s2 þ b1 s þ b0
ð22:14Þ
vnð1Þ ð0Þ ¼ Lim s Vn;1 ðsÞ ¼ an ð22:9bÞ
s!1 Again, using Eq. 22.8, we have
Vn;3 ðsÞ ¼ sVn 2 ðsÞ mð2Þ
n ð0Þ
From criterion Eq. 22.5b, and the Fialkow–
ðiÞ ðan 2 bn 2 Þsn 1 þ    þ ða2 b2 Þs3 þ ðb0 b1 Þs2 b0 s
Gerst condition, we note that vn ð0Þ ¼ an ¼ 1: ¼
sn þ bn 1 sn 1 þ    þ b2 s2 þ b1 s þ b0
Thus Eqs. 22.7a and 22.9a become, respectively, ð22:15aÞ
s n þ an 1 s n 1 þ    þ a2 s 2 þ b0 s and
Hn ðsÞ ¼
s n þ bn 1 s n 1 þ    þ b2 s 2 þ b1 s þ b0
ð22:10aÞ vnð3Þ ð0Þ ¼ Lim s Vn;3 ðsÞ ¼ an 2 bn 2
s!1

1
 n
s þ an 1 sn 1 þ    þ a2 s2 þ b0 s
 ð22:15bÞ
Vn;1 ðsÞ ¼
s sn þ bn 1 sn 1 þ    þ b2 s2 þ b1 s þ b0
By arguments similar to those already used,
ð22:10bÞ ð3Þ
the maximum value of vn ð0Þ is obtained when
Again we have, from Eqs. 22.8 and 22.10b
an 2 ¼ bn 2 ð22:16Þ
n 1
Vn;2 ðsÞ ¼ ðan 1 bn 1 Þs
ðan 2 bn 2 Þsn 2 þ    þ ða2 b2 Þs2 þ ðb0 b1 Þs b0 for which
þ
sn þ bn 1 sn 1 þ    þ b2 s2 þ b1 s þ b0
ð22:11aÞ vnð3Þ ð0Þ ¼ 0 ð22:17Þ

so that This yields

vnð2Þ ð0Þ ¼ Lim s Vn;2 ðsÞ ¼ an 1 bn 1 Vn;3 ðsÞ


s!1
ðan 3 bn 3 Þsn 2 þ    þ ða2 b2 Þs3 þ ðb0 b1 Þs2 b0 s
ð22:11bÞ ¼
sn þ bn 1 sn 1 þ    þ b2 s2 þ b1 s þ b0
ð22:18Þ
Optimal Transfer Function and Its Realizability 169

Proceeding in this manner, till we maximize and finally, the optimum TF is


ðnÞ
vn ð0Þ we get
sn þ sn 1 þ sn 2 þ    þ s2 þ s
Hn ðsÞ ¼
a1 ¼ bi ; i ¼ 1; 2; . . .; ðn 1Þ ð22:19Þ sn þ sn 1 þ sn 2 þ    þ s2 þ s þ 1
ð22:25Þ
Combining these with Eq. 22.6 gives sðsn 1Þ
¼ ð22:26Þ
sn þ 1 1
s n þ bn 1 s n 1 þ    þ b2 s 2 þ b0 s
Hn ðsÞ ¼ n
s þ bn 1 s n 1 þ    þ b2 s 2 þ b0 s þ b0 The poles of Hn(s) are located on the unit
ð22:20aÞ circle, at

and Sr ¼ cj2pr=ðn þ 1Þ  r ¼ 1; 2; . . .; n ð22:27Þ

1 sn 1 þbn 1 sn 2 þ  þb2 sþb0


 
where r = 0 is excluded because s = 1 is a pole
Vn ðsÞ ¼
s sn þbn 1 sn 1 þ  þb2 s2 þb0 sþb0 as well as a zero. For stability, the poles should
ð22:20bÞ be in the left half of the s-plane, i.e.

We may also write Hn(s) of Eq. 22.20a as p=2  2pr=ðn þ 1Þ  3p=2 ð22:28Þ

Nn ðsÞ Nn ðsÞ Equation 22.28 is violated for n = 4. The TF


Hn ðsÞ ¼ ¼ ; ð22:21Þ for n = 3 is
Dn ðsÞ Nn ðsÞ þ Dn ð0Þ

where Nn(s) and Dn(s) denote, respectively, the s3 þ s2 þ s


H3 ðsÞ ¼ ð22:29Þ
numerator and the denominator polynomials of s3 þ s2 þ s þ 1
Eq. 22.20a.
It can be easily shown that the poles of H3(s),
Let
at s = ±j, do not have purely imaginary residues,
qðtÞDð1 vn ðtÞÞ uðtÞ ð22:22Þ hence H3(s) cannot be realized [3]. Had
H3(s) been realizable, the ramp response would
denote the deviation of vn(t) from the ideal output have been given by
u(t). Had vn(t) been the ideal output; q(t) as well 1
1=s2 H3 ðsÞ
  
vs ðtÞ ¼ L
as Q(s) = Lq(t) would be zero. Since this is not   12 
1 t 1
the case in practice, we impose condition ¼ 1 e cosðt þ p=4Þ uðtÞ
2 2
Eq. 22.5c, i.e. Q(s) should have a zero of the
highest possible order at s = 0. Now ð22:30Þ

1 A plot of v3(t) is shown in Fig. 22.2 (curve f).


Q ðsÞ ¼ Vn ðsÞ ð22:23Þ
s Clearly, v3(t) is of little use due to the undamped
oscillations.
Substituting for Vn(s) from Eq. 22.20b and We thus conclude that the only optimum,
simplifying, it is easy to observe that Q(s) will passive, grounded and transformerless network
have a zero of the highest order (=n − 1), at realizable approximations of Eq. 22.1 are
s = 0 if

bn 1 ¼ bn 2 ¼    ¼ b2 ¼ b0 ¼ 1 ð22:24Þ
170 22 Optimum Passive Differentiators

Fig. 22.2 Ramp responses of various differentiators with (e = 0.601), f third-order, optimal RLC (unrealizable),
final value normalised to unity. a Ideal case, b first-order g third-order, suboptimal RLC (e = 0.5), h third-order,
RC, c second-order, suboptimal RC (e = 0.01), d sec- suboptimal RLC (e = 0.71)
ond-order, optimal RLC, e second-order, suboptimal RLC

H1 ðsÞ ¼ s=ðs þ 1Þ ð22:31Þ fully compensated by appropriate reduction in


the series resistance R. The ramp response of the
second-order differentiator is given by
H2 ðsÞ ¼ ðs2 þ sÞ=ðs2 þ s þ 1Þ ð22:32Þ
ms ðtÞ ¼ L 1 1=s2 H2 ðsÞ
  
n  pffiffiffi hpffiffiffiffiffiffiffiffi io
¼ 1 2= 3 e t=2 cos 3=2 t þ p=6
Equation 22.31 is the same as Eq. 22.2 with uðtÞ
T normalized to unity.
ð22:33Þ

and is plotted in Fig. 22.2, curve d.


The damped oscillations exhibited by v2(t) re-
Second-order Optimal duce the utility of H2(s) vis-a-vis H1(s). How-
and Suboptimal Differentiators ever, by shifting the poles of H2(s), we may bring
down the amplitude of the oscillations to achieve
Dividing the numerator and denominator of the desired tolerance limits of the output. If we
Eq. 22.32 by s, two simple network realizations take
of second-order passive differentiators, as shown
in Figs. 22.1c, d are obtained. Realization of H20 ðsÞ¼ðs2 þsÞ= s2 þð1þeÞsþ1 ;where0\e\1
 
Fig. 22.1c may be preferred to that of Fig. 22.1d
ð22:34aÞ
since the effect of losses in the inductor L can be
Second-order Optimal and Suboptimal Differentiators 171

then where
1
" # 2 e 2
1 1
2 ð1þeÞt=2 a ¼ tan and b ¼ ð1 e2 =4Þ2
m2 ðtÞ ¼ 1 1=2
e cosðbt þaÞ uðtÞ; 2þe
ð3þeÞ
ð22:36bÞ
ð22:34bÞ
H20 ðtÞ can be realized using the Fialkow-Gerst
Where technique [3], one such realization being shown

1 e 2
1
1
in Fig. 22.1g.
1
a ¼ tan and b ¼ ð3 2e e2 Þ2 =2 As is clear from Eq. 22.36a, v03 ðtÞ also gives
3þe
damped oscillations around the ideal differenti-
ð22:34cÞ
ated value. However, we can decrease the oscil-
lations by increasing e. In particular, e = 1 gives
It may be seen that damping increases as we
critical damping [4], i.e. no oscillations and
increase e from 0 to 1. In particular, H20 ðsÞ e¼0 ¼
v03 ðtÞje¼1 ¼ 1 ¼ v1 ðtÞ. The optimum value of e so
H2 ðsÞ and m02 ðtÞ e¼0 ¼ v2 ðtÞ: Critical damping is that v03 ðtÞ may reach the ideal differentiated value
achieved for e = 1 [4], and v02 ðtÞje¼1 ¼ 1 ¼ v1 ðtÞ. within a tolerance of ±5%, in the shortest pos-
Thus for critical damping, v02 ðtÞ coincides with sible time, is found to be e = 0.71. Curves g and
v1(t) and the rise time of v02 ðtÞ is maximal. The h in Fig. 22.2. show v03 ðtÞ for e = 0.5 and for
rise time of v02 ðtÞ decreases with the decrease of e = 0.71, respectively.
damping (i.e. with the decrease of e). The opti-
mum value of e, such that v02 ðtÞ may reach the
ideal differentiated value, within a tolerance of Optimal RC Differentiators
±5%, in the shortest possible time is found to be
e = 0.601, and the response under this condition From Eq. 22.21, one can write the nth-order
is shown by curve e in Fig. 22.2. differentiator TF as

Nn ðsÞ
Third-order Suboptimal Passive Hn ðsÞ ¼ ð22:37Þ
Nn ðsÞ þ bo
Differentiator
For RC realizability, the roots of the denom-
The third-order TF given by Eq. 22.29 is not inator polynomial should be distinct and located
realizable due to its poles at s = ±j. We may, on the negative real axis of the s-plane, i.e.
however, realize a network by shifting the poles
n
slightly to the left in the s-plane. The TF will no Y
Nn ðsÞ þ bo ¼ ðs þ ri Þ; ð22:38Þ
longer remain optimal, and we call this a sub-
i¼1
optimal realization. The suboptimal TF will be
where
sðs2 þ s þ 1Þ
H30 ðsÞ ¼ ; where 0\e\ 1
ðs þ 1Þðs2 þ es þ 1Þ 0\r1 \r2 \    \rn ð22:39Þ
ð22:35Þ
Equating the constant terms and the coeffi-
The ramp response of Eq. 22.35 is given by cients of s on both sides of Eq. 22.38, we get
" # n
e t ð1 eÞ
Y
v03 ðtÞ¼ 1 e 1
et=2
cosðbtþaÞ uðtÞ; bo ¼ ri ð22:40Þ
2 e bð2 eÞ 2
i¼1

ð22:36aÞ
172 22 Optimum Passive Differentiators

and where
!
n n
1 Y a ¼ 2ð1 þ eÞ and b ¼ 2ð2e þ e2 Þ1=2 ð22:47bÞ
X
bo ¼ ri ð22:41Þ
r
j¼1 j i¼1
Also,
Combining Eqs. 22.40 and 22.41 gives
v02RC ðtÞ e¼0 Dv2RC ðtÞ ¼ 1 2t
 
ð1 þ tÞe uðtÞ
n
X ð22:48Þ
1=ri ¼ 1 ð22:42Þ
i¼1
A plot of v02RC ðtÞ, for e = 0.01, is shown in
For optimum results, we must maximize bo, as Fig. 22.2. (curve c). As the plots for v2RC ðtÞ and
shown in the appendix. Following, a procedure v02RC ðtÞ do not differ by more than 1% in the time
similar to one used in Section 5 of [1]; maxi- range shown, these are not shown separately in
mization of bo yields Fig. 22.2.
A realization of H20 RC ðsÞ; using the F-G tech-
r1 ¼ r2 ¼ r3 ¼    ¼ rn ¼ n ð22:43Þ nique [3] is shown in Fig. 22.1h.
and

bo ¼ nn ð22:44Þ Conclusion

Thus nth-order optimal RC differentiator is The problem of obtaining an optimum approxima-


tion to the ideal differentiator by passive, transfor-
ðs þ nÞn nn merless, unbalanced network has been investigated.
HnRC ðsÞ ¼ ð22:45Þ
ðs þ nÞn The following conclusions have been arrived at:

(a) RLC, optimal differentiators are not realiz-


able for order n  3; however, suboptimal
Suboptimal RC Differentiator
RLC differentiators, for n = 3 can be realized
by pole perturbation technique.
HnRC ðsÞ is not realizable for n  2 as the TFs
(b) RC, optimal differentiators are not realizable
have multiple poles at s = −n for n  2. To
for order n  2; however, suboptimal RC
make the poles distinct, we perturb them from
differentiators for all n can be realized by
location −n, so that HnRC ðsÞ becomes realizable
pole perturbation.
for all n. The same methodology as suggested in
(c) RC differentiators, of all orders, reach the
Section 6 of [1] can be followed; then the
ideal value of differentiation only at t = ∞.
second-order suboptimal TF becomes
(d) Although the response of optimal RLC dif-
s2 þ 4s ferentiators approaches the ideal value faster
H20 RC ðsÞ ¼ 2 ð22:46Þ than that of RC differentiators, the former
s þ 4ð1 þ eÞs þ 4
exhibit damped/undamped oscillations. This
which gives, for a ramp input restricts the use of optimal RLC differentia-
tors. However, the amplitude of oscillations
can be limited to any desired tolerance by
 
1
m02RC ðtÞ¼ 1 e at ðaþb 1Þebt ða b 1Þe bt uðtÞ;
 
2b pole perturbation, such that the RLC differ-
ð22:47aÞ entiators give better performance than the RC
differentiators. We have chosen a tolerance of
Conclusions 173

±5% of the ideal differentiated value and the

e ¼ 0:71
results obtained are given in Table 22.1.

A variety of differentiators using operational

RLC, suboptimal n = 3

2 þ es þ 1Þ
amplifiers are known. But the constraints of the

þs þs
active device viz. offset voltages and currents,

2
H30 ðsÞ ¼ ðs þ s1Þðs
finite gain-bandwidth product, finite dynamic
2
range and slew rate limiting, etc. lead to further
problems. The proposed optimum passive dif-
0.576
Table 22.1 Values of s5, the normalised time taken by various differentiators to give output voltage within ±5% of the ideal output voltage

ferentiators, which are free from aforesaid limi-


tations, may be successfully employed in areas,
where the frequency spectrum of the signal is
e ¼ 0:601

relatively wide or where simple and reliable cir-


RLC, suboptimal n = 2

cuits with minimum power consumption are a


necessity. A few such applications are the hom-
H20 ðsÞ ¼ s2 sþ esþ þs 1

ing devices of an underwater torpedo, (where the


2

dynamic range requirement is large of the order


of 80 dB) and guidance system of a long-range
0.587

guided missile (where the signal spectrum is


wide, up to about 60 MHz).
e ¼ 0:01

Problems
þ eÞs þ 4
RC, suboptimal n = 2

P:1. Apply a square pulse of duration 1 s and


s þ 4s

height 1 V to a first-order differentiator.


H20 RC ðsÞ ¼ s2 þ 4ð1
2

Find the output v0/H and sketch it.


P:2. Determine the transfer function of a sub-
optimal differentiator of order 4 and obtain
0.7

the output for a ramp function.


P:3. Obtain the transfer function for an optimal
H1 ðsÞ ¼ s þs 1

RC differentiator of order 2 and find its


RC, optimal

support for a unit step input.


n=1

P:4. Same as P.3 except that the input is a ramp


1.0

function.
P:5. Same as P.3 except that the input is an
H0(s) = s (unrealisable)

impulse function.
Ideal case

Appendix
0.0

In this section, we examine the nature of bo and


substantiate the assertions made in Section 5 that
differentiators

higher the value of bo, better is the approxima-


TF of the

tion. For the first-order case, the TF and the


corresponding ramp response are, respectively,
Case

s5
174 22 Optimum Passive Differentiators

bo s
H1 ðsÞ ¼ ð22:49Þ
s þ bo 1
D GðsÞ ð22:56Þ
and s
bo t where
v1 ðtÞ ¼ ð1 e ÞuðtÞ ð22:50Þ

Clearly, higher the bo, closer is v1(t) to u s n 1 þ bn 1 s n 2 þ    þ b3 s 2 þ b2 s þ bo


GðsÞ ¼
(t) which is the ideal ramp response. Maximum s n þ bn 1 s n 1 þ    þ b2 s 2 þ b1 s þ bo
value of bo can be unity in H1(s) [F-G conditions ð22:57aÞ
of realizability of H1(s)].
For the second-order case, the TF and the 1 þ ðb2 =bo Þs þ ðb3 =bo Þs2 þ   
¼ ð22:57bÞ
ramp response are, respectively 1 þ ðb1 =bo Þs þ ðb2 =bo Þs2 þ   

s 2 þ bo s Equation 22.56 shows that vn(t) = L−1


H2 ðsÞ ¼ 2 ð22:51Þ Vn(s) can be interpreted as the unit step response
s þ bo s þ bo
of the low-pass function G(s). Equation 22.55,
and together with the initial and final value theorems
of Laplace transforms shows that vn(t) rises from
" #
e bo t=2

1 bo a value zero at t = 0 to unity at t = ∞. To enable
m2 ðtÞ¼ 1 1=2
cos xo t tan uðtÞ; us make vn(t) achieve unity value in as short a
ð4 bo Þ 2xo
time as possible, we must choose b0 such that the
ð22:52Þ
rise time sr, of vn(t) is as small as possible. Using
Elmore’s formula [2], with the assumption that
where
the plot of vn(t) is monotonic, (whereby Elmore’s
1=2
b2o =4

x o ¼ bo ð22:53Þ formula can be applied), we get
 1   2 12
b22

1 bo sr ¼ 2p b1 2bo ðb2 b3 Þ
As cos xo t tan  1; smaller the

2xo bo
value e
bo t=2
; closer is v2(t) to u(t). Increase of ð22:58Þ
ð4 bo Þ1=2
bo t=2
bo decreases e faster (i.e. exponentially)
sr, decreases monotonically with the increase of
than (4 − bo)1/2. Thus higher the bo, smaller is
bo. Thus bo should be as large as possible. The
the value of e
bo t=2
and consequently closer is assumption of vn(t) being monotonic has impli-
ð4 bo Þ1=2
the v2(t) to the ideal value u(t). cations as mentioned in the Appendix of [1].
For the general case, a semi-rigorous argu-
ment can be forwarded as follows. As
References
s n þ bn 1 s n 1 þ    þ b2 s 2 þ bo s
Hn ðsÞ ¼ n
s þ bn 1 s n 1 þ    þ b2 s 2 þ b1 s þ bo 1. S.C. Dutta Roy, Optimum passive integrators, in IEE
Proceedings, part G, (vol. 130, No. 5, pp. 196–200),
ð22:54Þ
Oct 1983
2. W.C. Elmore, Transient response of damped linear
and network with particular regard to wide band ampli-
fiers. J. Appl. Phys. 19, 55–63 (1948)
1 sn 1 þbn 1 sn 2 þ  þb3 s2 þb2 sþbo 3. N. Balabanian, Network Synthesis (Prentice Hall, 1958)
 
Vn ðsÞ ¼ 4. M.E. Van Valkenburg, Network analysis (Prentice
s sn þbn 1 sn 1 þ  þb2 s2 þb1 sþbo Hall of India, 1983)
ð22:55Þ
Part III
Active Circuits

Passive circuits have their own limitations and can do very little when
amplification, oscillation and other essential function of practical circuits can
offer. Part III therefore concentrates on active circuits, which are combina-
tions of passive circuits with active devices. Vacuum tubes are the things of
the past and are seldom used, except in broadcast applications. We therefore
treat circuits with transistors and operational amplifiers as the active devices.
Amplifier fundamentals are presented in Chap. 23; this material was the first
broadcast in India on actual educational materials and was done from studios
of Space Application Centre at Ahmedabad, under the ‘Teacher in the Sky’
experiment of IETE. Judged by the positive feedback from students, it was a
great success.
Again, that appearances can be deceptive occurs in active circuits also.
This is illustrated in Chap. 24 with the BJT biasing circuit as an example.
BJT biasing is dealt with, comprehensively, in Chap. 25, and it is proved that
bias stability is the best in the four resistor circuit. A high-frequency tran-
sistor stage, consisting of emitter feedback, is analysed in detail in Chap. 26,
using the hybrid equivalent circuit of the transistor, which was carefully
avoided till then in most textbooks. Transistor Wien Bridge oscillator is
treated comprehensively in Chap. 27, where various circuits and their merits
and demerits are enumerated. In contrast to the hybrid parameter, I used the
h-parameter equivalent circuit of the transistor because the former was not
known till then.
The usual analysis of the oscillator, as given in textbooks, is to use the
Nyquist criterion Ab=1. In Chap. 28, I formulate several simpler, in fact much
simpler, methods for doing the same, without the difficulty of identifying
A and b, which is not easy even for experienced researchers. The triangular to
sine wave converter is discussed in Chap. 29 with step-by-step logical anal-
ysis. The Wilson current mirror, presented in Chap. 30, is a versatile circuit
and is used as an essential component in various analog ICs. The dynamic
resistance is calculated easily. That completes our journey through active
circuits. I hope it will be a smooth one, without getting lost in the rather
complicated equivalent circuits.
Amplifier Fundamentals
23

This chapter presents the fundamentals of a amplifier is an essential component. The Public
bipolar junction transistor amplifier and Address system used at large gatherings, like
includes the following aspects: choice of political rallies and music concerts, is another
Q point, classes of operation, incremental very common example. Under this topic, we
equivalent circuit, frequency response, cas- shall discuss the essential features of an amplifier
cading, broadbanding and pulse testing. The along with the analysis of a typical circuit.
emphasis is on understanding the fundamen- In order to introduce the subject, consider a
tal, rather than rigorous analysis or elaborate typical single-transistor amplifier circuit, shown in
design procedure. Fig. 23.1, in which the transistor is connected in
the common emitter configuration. The phrase
‘common emitter’, incidentally, implies that the
Keywords emitter terminal is common between the input and

Amplifier Transistor characteristics the output. In the circuit of Fig. 23.1, we have

CE configuration Biasing Hybrid P shown an n-p-n transistor, whose dc
equivalent circuit collector-to-emitter voltage, VCE, is determined by
the supply voltage VCC, the collector resistance RC
and the emitter resistance RE through the equation
The term ‘amplifier’ stands for any device which
amplifies or magnifies a weak signal so as to VCE ¼ VCC IC RC IE RE ð23:1Þ
make it detectable and useful. An amplifier is
perhaps the most important electronic circuit and IC and IE are the dc collector and emitter
was the motivation or the leading reason behind currents, which are, of course, approximately
the invention of the triode and the transistor. An equal, because the DC base current IB is much
amplifier is also a part of our daily life. The smaller than IC (IB = IC/b, b * 50). IB is deter-
Radio, the Television and the Stereo are common mined by the relation (see Fig. 23.2)
examples of electronic equipment where the
R2
VCC ¼ IB ðR1 jjR2 Þ þ VBE þ IE RE ;
R 1 þ R2
ð23:2Þ
S.C Dutta Roy, “Amplifier Fundamentals,” Students’
Journal of the IETE, vol. 35, pp. 143–150, July– where VBE is the dc base-to-emitter voltage and
December 1994.
is of the order of 0.7 V for a silicon transistor.

© Springer Nature Singapore Pte Ltd. 2018 177


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_23
178 23 Amplifier Fundamentals

virtually short-circuited. If CE is not there


(opened), then the signal voltage developed
across RE would cause a negative feedback,
because the actual signal input to the transistor
would have been the voltage across R2 minus the
voltage across RE. The gain is thereby reduced to
approximately—RL =RE ; RL ¼ RC jjR0L .
Now let us come back to the question of
choosing the Q point. Figure 23.3 shows the
transistor characteristics along with a plot of
Eq. 23.1 given by the line ABC. The usable
region of the characteristic is bounded by the
Fig. 23.1 A single-stage CE amplifier maximum ratings of the transistor viz VCEmax,
ICmax, the maximum collector dissipation
IC and VCE determine the dc operating point or (VCEIC), PDmax, along with the saturation line
the so-called Q point of the transistor. More will near VCE = 0, and the cutoff line iB = 0. If VCE
be said about this later. and IC are such that the Q point is near B, then
CC1 and CC2 in Fig. 23.1 are coupling signal excursions like that shown will lead to
capacitors, their values being so chosen that they faithful variations in the collector current, as
act as short circuits at the signal frequency. Had shown on the left side of the figure. This is the
CC1 not been there (short-circuited), R2 would linear range of operation and goes under the
have been virtually short-circuited (R1  R2, name of Class A operation in the literature.
usually) and the transistor could not have been Obviously, Class A operation cannot extend to a
biased properly. Similarly, if CC2 is not there point close to A during the positive excursions of
(short-circuited), the DC through RC would have iB, or to a point close to C during the negative
found a path through the load resistor R0L , so that excursions of iB, because of the distortion due to
the Q point would shift with variations in the saturation and cutoff, respectively. If the Q point
load. The capacitor CE across RE is chosen to be is the same as the point marked VCC in Fig. 23.3,
large so that at the signal frequency, RE is then current will flow in the collector circuit only
during the positive excursions of the signal. No
current will flow during the negative excursions
(this condition can be achieved by opening R1 in
Fig. 23.1). Obviously, the circuit will act as a
half-wave rectifier. In order to reproduce the
positive, as well as the negative portions of the
signal, we need another transistor to take care of
the negative half. These two transistors are con-
figured in the so-called push–pull operation, as
shown in Fig. 23.4, which, incidentally, uses a
complementary symmetric pair of transistors
(n-p-n and p-n-p) in a transformerless configu-
ration. The usual arrangement of push–pull uses
a centre-tapped transformer at the input and
another such transformer at the output. The
important point is that the transistors in Fig. 23.4
operate under what is defined as the Class B
Fig. 23.2 DC equivalent of the base-emitter circuit condition. Naturally, the Class B condition
23 Amplifier Fundamentals 179

Fig. 23.3 Transistor


characteristics, showing load
line and the bounds of
operation

condition. A further improvement in efficiency


is possible if the transistor is biased beyond
cutoff under quiescent conditions; the collector
current will then flow for only a part of the
positive half cycle (assuming an n-p-n transistor).
This, of course, leads to excessive distortion,
which can be avoided by using a parallel reso-
nant circuit in the collector, tuned to the signal
frequency. Then the voltage developed across the
tuned circuit will mostly consist of the signal
component. This is called the Class C mode of
operation, and can be used to achieve practical
Fig. 23.4 Complementary symmetry push–pull class B efficiencies of about 85%.
amplifier We shall now confine our attention to small
signal Class A operation so that the analysis is
possible through an incremental or ac equivalent
produces more distortion than the Class A circuit. The adjective ‘incremental’ refers to the
operation. Nevertheless, Class B is invariably condition that signal components of current and
used in low-frequency (audio) large signal or voltage in the transistor are small perturbations to
power amplifiers, because of a drastic improve- the total current and voltage respectively. Under
ment in efficiency (from a maximum of 25% in this assumption, the amplifier behaves as a linear
Class A to a maximum of 78.5% in Class B) system and DC and AC analyses can be done
arising out of the reduced (to zero, ideally) power separately, the latter being carried out with the
dissipation under the quiescent or no-signal equivalent circuit.
180 23 Amplifier Fundamentals

Fig. 23.5 Hybrid-p


equivalent circuit of transistor

We shall now consider the circuit of Fig. 23.1


again, and carry out its analysis. Although vari-
ous equivalent circuits have been proposed in the
literature, the most versatile one viz the hybrid p,
as shown in Fig. 23.5, shall be used here. We
have used somewhat simplified notation as
compared to what is used in most textbooks. Fig. 23.6 Simplified hybrid-p equivalent circuit of the
While B, E and C stand for the base, emitter and transistor
collector respectively, B′ is used to denote the
internal base. The difference between B and B′ is has also been ignored. By inspection, you see
the occurrence of the ‘base-spreading’ resistor, rx that the gain is
which is usually an order of magnitude smaller vo
than rp, the base-emitter resistor. Typically, Ao ¼ ¼ gm R L ð23:3Þ
vi
rx = 100 Ω, rp = 1 K, r = 4 MΩ, ro = 80 K,
Cp = 100 pF, C = 3 pF and Co = 1 pF, while gm Next, consider the low-frequency response for
is determined by the quiescent collector current which we ignore internal capacitances but not the
IC according to the relation external ones. There are three such capacitances
viz CC1, CC2 and CE, and it becomes rather
gm ¼ ½IC ðin mAÞ=26Š mhos involved to consider the effects of all three
simultaneously. We, therefore, consider them
Except at high frequencies, or in rigorous
one by one. Suppose CE, CC2 ! ∞; then the
analysis at other frequencies, we can ignore rx.
equivalent circuit becomes that shown in
Also, notice that ro will come across the load
Fig. 23.8. With Rp ¼ R1 kR2 krp , we have the
which is normally much smaller than 80 K;
low-frequency gain
hence it can be ignored, along with Co which is
basically the stray capacitance. Normally, r can vo vo V
also be ignored in comparison to the impedance AL ðsÞ ¼ ¼  ;
vi V vi
of C at the frequencies at which it counts. Hence,
r also qualifies to be omitted from consideration, Rp
¼ gm R L ; ð23:4Þ
thus leading to the much simplified form of Rp þ sC1C1
Fig. 23.6.
First, we consider midband frequencies at Putting s = jx and Rp CC1 = l/x1 we get
which the effects of all capacitances—internal
(Cp, C) as well as external (CC1, CC2 and CE) can Ao
AL ðjxÞ ¼ ð23:5Þ
be ignored, the former acting as open circuits 1 j xx1
while the latter act as short circuits. Then, the
equivalent circuit of Fig. 23.1 becomes that This shows that with increasing frequency, the
shown in Fig. 23.7, where the source resistance gain rises from zero to the midband value Ao and
23 Amplifier Fundamentals 181

Fig. 23.7 Equivalent circuit of Fig. 23.1 at midband frequencies

and
   
1 1
x001 ¼ 1 þ gm þ RE ; ð23:8Þ
RE C E rp
gm
 ð23:9Þ
CE
Fig. 23.8 Low-frequency equivalent circuit with CE,
CC2 ! ∞
The form of Eq. 23.9 arises because
pffiffiffi
reaches the value Ao = 2 when x = x1. In terms gm rp ¼ bð¼ hfe Þ  1
of decibels, this is equivalent to saying that the
gain reaches 3 dB below the midband value at Usually, x001  xz .
x = x1. Hence, x1 is called the low-frequency The question now arises: how to determine the
cutoff point. low-frequency 3 dB cutoff (xL) when none of the
In a similar manner, we can calculate the three capacitances can be considered as
effect of CC2 with CC1, CE ! ∞ as giving rise to short-circuits. A guideline for the designer is that
an expression of the form Eq. 23.5 with x1 one of the capacitances should be used to control
replaced by x01 ¼ 1= CC2 ðRC þ R0L Þ . The effect xL while the other two should be so chosen that the
 

of CE with CC1, CC2 ! ∞ is a bit more critical frequencies due to them are an order of
involved, and it can be shown that the gain is magnitude less than the desired xL. For example,
proportional to with CC1 ! ∞, CE = 200 lF, CC2 = 10 lF,
RE = 100 Ω, Rc = R′L = 2 K, Rp  rp = 1 K and
1 jxz =x b = gm rp = 100, the gain is of the form
; ð23:6Þ
1 jx001 =x
ðconstantÞð1 jx=xz Þ
; ð23:10Þ
where ð1 jx=x01 Þð1 jx=x001 Þ

1 where xz, = 50 r/s, x′1 = 25 r/s and x001 ¼ 509.


xz ¼ ð23:7Þ
ðCE RE Þ 1 r/s. The value of xL is then determined by x001
and is given by fL = 509.1/(2p) = 81.02 Hz.
182 23 Amplifier Fundamentals

Fig. 23.9 High frequency


equivalent circuit of the
amplifier of Fig. 23.1

Finally, we consider the high-frequency The effect of Rx, as expected, is to reduce the
response of the amplifier, for which the equiva- midband gain by the factor rp/(Rx + rp). Also
lent circuit is shown in Fig. 23.9. Notice that we putting s = jx, and denoting CT ðrp k Rx Þ by
have no longer ignored the effect of rp or of the 1/x2, we see that the HF 3-dB cutoff is given by
source internal resistance Rs. The reason is that
the small capacitor Cl reflects at the input (across 1
x2 ¼ ð23:14Þ
Cp) as a much larger capacitance CT ðrp k Rx Þ

To get an idea of x2, let, for a typical tran-


CM ¼ Cl ð1 þ gm RL Þ; ð23:11Þ
sistor amplifier,
which is approximately the midband gain times RS ¼ 900 X; rx ¼ 100 X: rp ¼ 1 K; Cl ¼ 4 pF
Cl. This is known as the ‘Miller effect’.
Cp ¼ 31 pF; gm ¼ 58 m - mhos and RL ¼ 2 K
The total capacitance
ð23:15Þ
CT ¼ Cp þ CM ð23:12Þ
Then
may have a reactance which is comparable to 3
CT ¼ 31 þ 4  ð1 þ 58  10  2  103 Þ
Rx = rx + Rs at high frequencies. With Miller
 500 pF
effect taken into account, the equivalent circuit
simplifies to that shown in Fig. 23.10, where we ð23:16Þ
have ignored R1 k R2 in comparison to rp.
rp k Rx ¼ 1 K k 1 K ¼ 500 X ð23:17Þ
Obviously, the gain is given by
and
vo V gm RL ðrp k 1=sCT Þ x2 1
AH ðsÞ ¼ ¼ f2 ¼ ¼ 12
Hz
V vi Rx þ ðrp k 1=sCT Þ 2p 2p  500  10  500
gm RL r p ¼ 636 kHz
¼ ð23:13Þ
sCT rp Rx þ Rx þ rp ð23:18Þ
g m RL r p 1
¼
Rx þ rp 1 þ sCT ðrp k Rx Þ

Fig. 23.10 Simplified equivalent of Fig. 23.9 using Fig. 23.11 Frequency response of the gain of circuit of
Miller effect Fig. 23.1
23 Amplifier Fundamentals 183

Fig. 23.12 A cascade of three stages

Fig. 23.13 Showing how stages interact with each other

As we can see from the analysis, the gain of the


typical amplifier of Fig. 23.1, called an RC cou- Fig. 23.14 Effect of cascading a number of stages
pled amplifier for obvious reasons, will have a
band-pass characteristic as shown in Fig. 23.11. Aon ¼ Ano ð23:19Þ
The gain of a single-stage amplifier may not
be adequate for the specific application. Hence, pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
one uses multistage amplifiers by cascading x1n ¼ x1 = 21=n 1 ð23:20Þ
several stages as shown in Fig. 23.12. Analysis pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
of such multistage amplifiers proceeds stage by x2n ¼ x2 21=n 1 ð23:21Þ
stage by considering the Thevenin equivalent of
the previous stage as constituting the source, In a given amplifier, how does one increase
while the succeeding stage constitutes the load. the x2 and decrease x1? A number of such
This is illustrated in Fig. 23.13 for analysis of compensation techniques are available and we
Stage#2 of the circuit in Fig. 23.12. In general, shall discuss, qualitatively, one example of each.
therefore, cascading does not lead to a multipli- A simple philosophy of increasing x2 is to use a
cation of gains. load whose impedance increases with frequency,
Suppose that we have succeeded in cascading so that the fall in gain due to the factor 1 + jx/x2 in
a number of identical non-interacting stages, the denominator can be partially compensated.
each having a midband gain Ao and low-and Such a load is shown in Fig. 23.15.
high-frequency 3-dB cutoff at x1 and x2 A similar philosophy can be applied to com-
respectively. The term ‘non-interacting’ here pensate for low-frequency fall off, by using a
means that each stage has an input impedance load which increases with decreasing frequency.
which is much higher than the output impedance One such load is shown in Fig. 23.16.
of the previous stage, i.e. Zi, n+1  Z0, n. Under We conclude this discussion by pointing out
this condition, what happens to the overall x1 that in order to test a given amplifier for its low-
and x2? This is illustrated qualitatively in and high-frequency responses, it is convenient to
Fig. 23.14. Obviously, x1 increase and x2 use a pulse as the input. As shown in Fig. 23.17,
decreases, i.e. the overall bandwidth decreases the response to a pulse will be a gradually rising
while the gain increases. It can be easily shown waveform, which after reaching a maximum,
that when n such stages are cascaded, the overall does not stay there, but sags a little before set-
parameters are given by tling down to the zero value. The rise time,
184 23 Amplifier Fundamentals

Fig. 23.15 HF
compensation

Fig. 23.17 Pulse response of an RC coupled amplifier

between the joint of R1A and R1B and


Fig. 23.16 L-F
ground. Derive the necessary equations for
compensation the biasing condition of the transistor.
P:2. At frequencies at which r  1/(xC) and
r0  (1/xC0), find an expression for the
frequency response of a transistor, assuming
a load RL and a source of resistance RS.
P:3. Consider a two-stage cascaded amplifier
with source of resistance RS and load RL.
Find an expression for the overall gain, if
the stages interact with each other.
P:4. In Fig. 23.15, a capacitor C is connected
from the joint of L and RC and ground. Find
an expression for the high-frequency gain.
P:5. In Fig. 23.16, C is neither open-nor
short-circuit. Derive an expression for the
defined as the time required for the waveform to
low-frequency gain.
rise from 10 to 90% of the final value, can be
related to x2, while the amount of sag can be
For further information on amplifiers, see [1]
related to x1.

Reference
Problems
1. A.S. Sedra, K.C. Smith, Microelectronic Circuits
P:1. Suppose in Fig. 23.1, R1 is split into R1A (Sanders College Publishing, Fortworth, 1992)
and R1B and a resistor R1C is connected
Appearances Can Be Deceptive: The
Case of a BJT Biasing Circuit 24

It is shown that bias stability is the best with Keywords


the four resistor circuit. A two-resistor BJT 
Bias Bias stability  2, 3 and 4 resistor
biasing circuit, which appears to be an attrac- biasing
tive alternative to the familiar four resistor
circuit, is shown to have serious limitations. It
is also shown that even when augmented by
one or two resistors, these limitations are only
partially overcome and that the bias stability Introduction
that can be achieved thereby is poorer than
that of the four resistor circuit. Any student of Electronics should be familiar
with the four resistors BJT biasing circuit shown
in Fig. 24.1, to be called N1, hereafter, and any
standard textbook would give the derivation of
the following expression for the collector current
IC (see, e.g. [1–3]):

½VCC R1 =ðR1 þ R2 Þ  VBE þ ICBO ½1 þ ð1bÞ½ðRE þ R1 k R2 Þ


IC ¼ ; ð24:1Þ
RE þ f½RE þ ðR1 k R2 Þ=bg

where the symbols VBE, ICBO and b have their


usual meanings. Stabilization of IC against vari-
ations of these three parameters due to tempera-
ture change and/or replacement of transistor
demands that
Source: S. C. Dutta Roy, “Appearances can be
Deceptive: The Case of a BJT Biasing Circuit,”
Students’ Journal of the IETE, vol. 37, pp. 3–6,
January–June 1996.

© Springer Nature Singapore Pte Ltd. 2018 185


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_24
186 24 Appearances Can Be Deceptive …

R1 k R2  bRE ð24:2Þ

and

VCC R1 =ðR1 þ R2 Þ  VBE þ ICBO ½RE þ ðR1


k R2 Þ;
ð24:3Þ

where the usual assumption of b  1 has been


made. It would be recognized that Eq. 24.3 is a
conservative condition because what we require
is that the left-hand side should be much greater
than both of the two terms on the right-hand side. Fig. 24.2 N2—an apparently attractive alternative to N1
It is also easy to see that the midband gain of the
circuit is −gmRc if RE is bypassed for AC by a stability achieved thereby compares poorly with
capacitor, as shown in Fig. 24.1. that of N1.
Another circuit for BJT biasing, usually
mentioned as an exercise for the student in
textbooks, is that of Fig. 24.2, to be designated Analysis of N2
as N2, for brevity. As compared to N1, N2 looks
attractive because it uses only two, instead of Application of Kirchoff’s voltage law to the loop
four resistors. It is believed that bias stabilization formed by VCC, RC, RF and the base–emitter
occurs in N2 due to feedback through RF. We junction gives
shall examine this circuit critically in this chapter
and demonstrate that it has serious limitations, as VCC ¼ ðIC þ IB ÞRC þ IB RF þ VBE ð24:4Þ
compared to N1. We also show that even with the
addition of one or two resistors, these limitations Also, the base and collector currents are
can only be overcome partially, and that the bias related to the following equation

IC ¼ bIB þ ðb þ 1ÞICBO ð24:5Þ

Combining Eqs. 24.4 and 24.5, we get

VCC  VBE þ ICBO ½1 þ ð1=bÞðRC þ RF Þ


IC ¼
RC þ ½ðRC þ RF Þ=b
ð24:6Þ

Bias stability now demands that

RF  bRC ð24:7Þ

and

VCC  VBE þ ICBO ðRC þ RF Þ ð24:8Þ

under the usual assumption that b  1 and the


conservatism mentioned earlier. The midband
Fig. 24.1 N1—the familiar four resistor BJT biasing gain of the circuit can be easily derived to be −gm
circuit
Analysis of N2 187

(RF k RC ) under the assumption of gm  (1/RF),


which is usually satisfied. The limitations of the
circuit are best brought out through a numerical
example, as worked out next.

An Example of Design

Let the transistor Q have the following


parameters

VBE ¼ 0:6 V; ICBO ¼ 10 nA and b ¼ 100


ð24:9Þ
Fig. 24.3 N3—modified form of N2 to avoid saturation
at 25 °C, and let the Q-point be of Q

VCE ¼ 4 V and IC ¼ 4 mA ð24:10Þ

with VCC = 12 V. Also, let the midband gain for a gain of −160, we need RF1 = 2 K. RF2 has
required be −160. The gm of the transistor is to be chosen to satisfy the base current IB =
0.04 mA. From the relation
gm ¼ 4 mA/25 mV ¼ 0:16 f ð24:11Þ
IB ¼ ðVCC VBE Þ=ðRF2 þ RF1 Þ ð24:16Þ
so that for a gain of 160, we need
we calculate RF2 as 83 K. Thus, our RF is 85 K,
RC k RF ¼ 1K ð24:12Þ which, to our disappointment, does not satisfy
Eq. 24.7, because b RC ¼ 200 K. To satisfy
The specification on VCE determines RC as Eq. 24.7, RF should not exceed 20 K, taking the
thumb rule of 1:10 for the sign ‘’ to be satis-
RC ffi 2K ð24:13Þ fied. Clearly, N3 does not achieve bias stability!
What should we do now? Use one more
From Eqs. 24.12 and 24.13, we get resistor? Let us see.
This additional resistor can be either from the
RF ¼ 2K ð24:14Þ
base to ground, as shown in Fig. 24.4, or from the
emitter to ground, as shown in Fig. 24.5. These two
Note that 1/RF equals 0:001 f which is indeed
circuits will be designated as N4 and N5, respectively.
much smaller than gm ¼ 0:16 f thus validating
Consider N5 first. It is not difficult to realize that
the midband gain formula, but the problem arises
since the same current passes through RC and RE, the
elsewhere. With RF = 2 K, the base current is
expression for IC will be the same as Eq. 24.6 except
IB ¼ ðVCE VBE Þ=RF ¼ 1:7 mA ð24:15Þ that RC + RE will take the place of RC. Thus, for bias
stability, we need RF  b ðRE þ RC Þ and since IB
Since bIB has a value of 170 mA, clearly, the is still given by Eq. 24.16, RF does not change.
transistor will be saturated! Hence, this modification is of no use.
What is the remedy? In [2], it is suggested that For N4, given in Fig. 24.4, the currents in the
we split RF into two parts and use a bypass various branches, as indicated, can be easily
capacitor, as shown in Fig. 24.3. This circuit, to established. Kirchoff’s voltage law can be used
be called N3, has a gain −gm (RC k RF1 ) so that to write the following equation:
188 24 Appearances Can Be Deceptive …

With b  1, bias stability requirements now


become

RF  bRC ð24:19Þ

and

VCC R1 =ðR1 þ RC þ RF Þ  VBE þ ICBO ½R1


k ðRC þ RF Þ
ð24:20Þ

Since RC will have to be 2 K to satisfy the Q-


point and RF1 will also have to be 2 K to satisfy
Fig. 24.4 N4—a modification of N3 the gain requirement, the question that arises is
the following: is it possible to choose
RF2 < 83 K?
Note that

½IB þ ðVBE =R1 ÞRF ¼ VCE VBE ð24:21Þ

Putting numerical values, this gives

ð3:4=RF Þð0:6=R1 Þ ¼ 0:04  103 ð24:22Þ

We want to have RF 20 K. If RF is chosen,


arbitrarily, as 17 K, Eq. 24.22 gives
R1 = 3.75 K. Now look at the other requirement,
given by Eq. 24.20. Putting numerical values,
the left-hand side is calculated as 1.978 V while
the right-hand side is greater than 0.6 V. Hence,
Eq. 24.20 is not satisfied. In fact, it can be shown
Fig. 24.5 N5—another modification of N3 that the highest value of the left-hand side of
Eq. 24.20 under the constraint of Eq. 24.22 and
VCC ¼ ½IC þ IB þ ðVBE =R1 ÞRC RF 20 K occurs when RF = 20 K, and that
ð24:17Þ this value is only 2.075. Hence, we conclude that
þ ½IB þ ðVBE =R1 ÞRF þ VBE
N4, like N5, is also not of much use in stabilizing
Combining this with Eq. 24.5, and solving for the transistor Q-point.
IC gives

½VCC R1 =ðR1 þ RC þ RF Þ  VBE þ ICBO ½1 þ ð1=bÞ½ðR1 jjRC þ RF Þ


IC ¼ ð24:18Þ
½R1 RC =ðR1 þ RC þ RF Þ þ f½R1 jjðRC þ RF Þ=bg
Conclusion 189

Conclusion P:2. In Fig. 24.1 circuit, a resistor RL is con-


nected between the collector and ground.
The preceding example clearly demonstrates the Comment on the bias stability of the circuit.
limitations of N2, and its modified versions—N3, Justify.
N4 and N5, in stabilizing the Q-point of a BJT. P:3. In Fig. 24.3 circuit, the capacitor is there
On the other hand, one can easily show that N1 and is neither a short circuit nor an open
with RC = RE = 1 K and R1 = R2 = 20 K gives circuit. Deriver an expression for the
low-frequency gain.
R1 k R2 ¼ 10 K; bRE ¼ 100 K; P:4. In Fig. 24.4 circuit, the capacitor is there
VCC R1 =ðR1 þ R2 Þ ¼ 6 V; and and is neither a short circuit nor an open
VBE þ ICBO ðRE þ R1 k R2 Þ ffi 0:6V circuit. Derive an expression for the
high-frequency gain.
so that both Eqs. 24.2 and 24.3 are approximately P:5. What happens when the capacitor is shifted
satisfied. We conclude therefore that N1 is the to have a position between the collector and
best choice for stabilizing the Q-point of a BJT. ground? Carry out the necessary analysis
for the low-frequency gain.

Problems

You may have to couple these with the previous References


chapter
1. J. Millman, A. Grabel, Microelectronics (McGraw-
P:1. In Fig. 24.1 circuit, the capacitor is neither Hill, New York, 1987)
2. S.G. Burns, P.R. Bond, Principles of Electronic
open nor short. Find an expression for the Circuits (West Publishing Company, St. Paul, 1987)
low-frequency gain, using, of course, the 3. A.S. Sedra, K.C. Smith, Microelectronic Circuits
hybrid p equivalent circuit. (Sanders College Publishing, Fortworth, 1992)
BJT Biasing Revisited
25

The familiar four resistor circuit for biasing a choice of a few additional resistors and then
bipolar junction transistor (BJT) is generalized transformed to a different topology. The latter is
through simple reasoning, and transformed to shown to yield, as special cases, three alternative
yield a different topology. Three alternative four resistor circuits, to be called N2, N3 and N4,
four resistor circuits are derived as special which do not appear to have been widely known
cases of the transformed generalized circuit, in the literature in the context of biasing a BJT.
which do not appear to have been widely From a detailed and careful analysis, it is shown
known in the literature. A detailed and careful that the bias stability parameters achieved in all
analysis reveals that the bias stability param- the four circuits—N1, N2, N3 and N4—are com-
eters of all alternative circuits are comparable parable. An illustrative example of bias design is
to those of the conventional circuit. An worked out to demonstrate this fact.
illustrative example is included for demon-
strating this fact.

The Generalized Circuits and Special


Keywords Cases
BJT  Biasing  Bias stability  Design
Let a resistor connected between nodes X and
Y be denoted by R(X, Y). In Fig. 25.1a, there are
five nodes—A, B, C, E and G—and a most
Introduction general biasing circuit would be the one in which
every node is connected to every other node by a
Figure 25.1a shows the familiar BJT biasing resistor. Several exceptions are to be made,
circuit for linear class A amplification, and is to however. The resistors R(A, G), R(A, E), R(C, E)
be called N1 in the sequel. It uses four resistors, and R(C, G) are not necessary for biasing and
of which RE1 is usually by-passed for AC [1–3]. cause additional loss of power. If these four
In this chapter, N1 is generalized by a proper resistors are excluded, then the generalized
biasing circuit looks like that shown in
Fig. 25.1b.
If the two p-networks BCA and BEG are
Source: S. C. Dutta Roy, “BJT Biasing Revisited,” IETE converted into T’s, then the transformed circuit
Journal of Education, vol. 46, pp. 27–33, January–
March 2005. becomes that shown in Fig. 25.1c. Note that

© Springer Nature Singapore Pte Ltd. 2018 191


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_25
192 25 BJT Biasing Revisited

A
+VCC

IC + I 1 RC1
(a) (b) (c)
D

R2 IC RC2
I1
A A
+VCC +VCC IB C

R2 Rc2 IC B Q
E
C C R1 I1 IB

B Q B Q IC + IB RE1

E E F
R1 RBE
C RE1 RE2 IC + I 1

G G G

Fig. 25.1 a N1—the familiar four resistor BJT biasing circuit; b a generalized BJT biasing circuit; c alternative
generalized BJT biasing circuit obtained by transformation of the circuit of (b)

since p to T conversion does not involve any gives the circuit of Fig. 25.4, henceforth to
subtraction operation, all resistances in be referred to as N4.
Fig. 25.1c are positive. Hence, both circuits can
claim to be general canonic biasing circuits for Note that (i) for the convenience of reference,
the BJT. Four special cases of four resistor cir- we have designated the circuits such that Ni
cuits can be derived from the two generalized refers to Fig i, and that (ii) N2 cannot be derived
circuits as follows. from Fig. 25.1b. Because of the latter, the circuit
of Fig. 25.1c, therefore seems to have an edge
(1) Let R(B, C) = R(B, E) = ∞ in Fig. 25.1b or over that of Fig. 25.1b in terms of topological
R(A, D) = R(F, G) = 0 in Fig. 25.1c; then generality. Also, note that another four resistor
we get the conventional circuit N1 of circuit can be obtained by setting R(A, B) = R(B,
Fig. 25.1a. G) = ∞ or R(D, C) = R(E, F) = 0 in Fig. 25.1c.
(2) Let R(E, F) = R(F, G) = 0 in Fig. 25.1c; However, in the resulting circuit, RC1 and RE2
then we get the circuit of Fig. 25.2, hence- carry the same DC; hence for biasing purposes,
forth referred to as N2. they can be combined into one resistance and the
(3) Let R(B, C) = R(B, G) = ∞ in Fig. 25.1b or circuit thereby behaves as a three resistor one. It
R(A, D) = R(E, F) = 0 in Fig. 25.1c; then we has been found that such a circuit has a poorer
obtain the circuit of Fig. 25.3, which we bias stability than N1, and will not, therefore, be
shall refer to as N3. considered further.
(4) Let R(A, B) = R(B, E) = ∞ in Fig. 25.1b or In N2, it is of advantage to by-pass RC1 for
R(D, C) = R(F, G) = 0 in Fig. 25.1c; this AC, as shown, by connecting a large capacitor
The Generalized Circuits and Special Cases 193

A +V•
+VCC A

RC1 I1 + IC
RC1 I1 + I C

C
D
IC
I1 R2
I1 R2 IC RC2
B Q
IC IB
C E
I1 - I B R1
B Q RE1
IB E

R1 G

G Fig. 25.4 N4—yet another alternative BJT biasing


circuit

Fig. 25.2 N2—an alternative BJT biasing circuit

A Bias Stability Analysis


+VCC

RC2 IC The generalized circuit of Fig. 25.1c will only be


R2 analyzed because it gives all the four special
cases. For this purpose, all the resistors have
C
been named and the currents in all of them have
B Q been identified. Let the transistor Q be charac-
E terized by the parameters VBE, b and ICBO, where
the symbols have their usual meanings. The
R1 currents IC and IB are related by the following
equation
RE2
IC ¼ bIB þ ðb þ 1ÞICBO : ð25:1Þ
G
By applying Kirchoff’s voltage law, one
obtains
Fig. 25.3 N3—another alternative BJT biasing circuit
VCC ðIC þ I1 Þ ðRC1 þ RE2 Þ ¼ I1 R2 þ ðI1 IB ÞR1 ;
between nodes D and G, so that the gain is
ð25:2Þ
determined by RC2 only. It may be noted that this
form of the circuit finds use in multistage and
amplifiers for power supply decoupling, where
RC1 is common to all the stages. For single-stage ðI1 IB ÞR1 ¼ VBE þ ðIC þ IB ÞRE1 ð25:3Þ
biasing, however, N2 does not appear to have
been used. As compared to N1, N2 trades an From Eq. 25.3, I1 can be obtained in terms of
additional resistor at the collector side for that at IB and IC. Substituting this value in Eq. 25.2,
the emitter side. In N3 as well as N4, the emitter replacing IB in terms of IC and ICBO from
resistance is by-passed for ac, as in N1. Eq. 25.1, and carrying out some algebraic
194 25 BJT Biasing Revisited

manipulations, one obtains the following and/or replacement of transistor, where the
expression for IC: changes are not infinitesimal. One can find, from
Eq. 25.4, the net change ∆IC = IC2 − IC1 and
V1  VBE þ ICBO r1 divide by IC1 to determine the fractional (or
IC ¼ ; ð25:4Þ
r2 þ ðr1 =bÞ percentage) variation of IC. In most textbooks,
however, this procedure is not followed because
where the resulting expression is considered to be ‘very
formidable and not too informative’ [2, p. 412].
V1 ¼ VCC R1 =ðR1 þ R2 þ RC1 þ RE2 Þ; ð25:5aÞ
Instead, they consider the change of IC due to
r1 ¼ RE1 þ R1 k ðR2 þ RC1 þ RE2 Þ; ð25:5bÞ each parameter separately, holding the other two
constant. We shall also follow the same proce-
r2 ¼ RE1 þ ½R1 ðRC1 þ RE2 Þ=ðR1 þ R2 þ RC1 þ RE2 Þ;
dure to start with, and then show that considering
ð25:5cÞ all the changes simultaneously is not as difficult
as it is made out to be.
and we have made the simplifying, but practical
To follow the conventional procedure, let d,
assumption that b  1 so that the factor [1 + (1/
dv and dI denote the partial fractional changes in
b)] can be approximated by unity.
IC due to changes in b, VBE and ICBO, respec-
Bias stability is achieved if IC can be made
tively. When all the three parameters vary
insensitive to variations in VBE, b and ICBO. Note
simultaneously, and each d is small (<0.1), the
that IC is independent of RC2, but of course, RC2
total fractional change in IC, to be denoted by dT,
has an important effect on VCE. Also, note that
is estimated as the sum of d, dv, and dI.
RE2 always occurs in combination with RC1 in the
From Eq. 25.4, one can easily derive expres-
form RC1 + RE2. This is to be expected because,
sions for the fractional deviations. The results
as is clear from Fig. 25.1c, the same current
are:
IC + I1 flows in them. As mentioned earlier, both
resistors are not necessary; one can make either Db=b1
RC1 = 0 or RE2 = 0 without any loss of general- db ¼ ; ð25:9Þ
1 þ b2 ðr2 =r1 Þ
ity. This fact is also reflected in the circuits N1–
N4. DVBE
dv ¼ ; ð25:10Þ
Referring to Eq. 25.4, we observe that bias V1  VBE1 þ ICBO r1
stability demands the following conditions to be
met: and
DICBO
r2 =r1  1=b; ð25:6Þ d1 ¼ ð25:11Þ
ICBO1 þ ½ðV1  VBE1 Þ=r1 
V1  VBE ; ð25:7Þ
It is clear that the resistance r1, given by
and Eq. 25.5b determines dv and dI, while d is
determined by the ratio r2/r1, which can be
V1  ICBO r1 ð25:8Þ obtained from Eqs. 25.5b and 25.5c as

As will be evident from the practical designs r2 RE1 ðR1 þR2 þRC1 þRE2 ÞþR1 ðRC1 þRE2 Þ
¼
worked out later in the chapter, usually VBE  r1 RE1 ðR1 þR2 þRC1 þRE2 ÞþR1 ðRC1 þRE2 ÞþR1 R2
ICBO r1 so that satisfying Eq. 25.7 automatically ð25:12Þ
satisfies Eq. 25.8.
To obtain a quantitative measure of bias sta- The values of V1, r1 and r2/r1 for the four
bility, consider the case in which the parameter circuits are given in Table 25.1. In practical cir-
set (VBE, b, ICBO) changes from (VBE1, b1, ICBO1) cuits, RC1, RE1 and RE2 will be of the same orders
to (VBE2, b2, ICBO2) due to temperature variation of magnitude (≅1 K) while R1 and R2 will be one
Bias Stability Analysis 195

order higher. It can, therefore, be observed that r1 a negligible shunting effect. Let the various cir-
and r2/r1 are comparable for all the four circuits, cuit and transistor parameters be as follows:
which makes them comparable in terms of bias
stability performance. In particular, if RC1 of N2 VCE ¼ 4 V; IC ¼ 4 mA; VCC ¼ 12 V; VBE
is the same as RE2 of N3, then N2 and N3 will ¼ 0:6 V; ICBO ¼ 10 nA
have identical behaviour. and b ¼ 100;
When all the three parameters vary simulta- ð25:14Þ
neously, as is usually the case in practice, one
can easily show, using Eq. 25.4, that The last two quantities being measured at 25 °
  C. Also, let the gain required be −160, so that the
DIC DVBE þ ICBO r1 required RC2 = 160/gm = 160/(40 IC) = 1 K.
d0 ¼ ¼ 1þ
IC1 V1  VBE þ ICBO r1
  1
Db Db
1þ 1þ 1 Design of N1
b1 b1 þ ðr1 =r2 Þ
ð25:13Þ
Since b = 100  1, we require RC2 + RE1 ≅ (
VCC − VCE)/IC = 2 K. Thus RE1 = 1 K. From
This expression does indeed look formidable,
Eq. 25.6 and Table 25.1, it is required to have
but is not difficult for computation once the
RE1 =ðRE1 þ R1 k R2 Þ  1=b; with numerical
numerical values are available. Also, to compare
values substituted, this translates to
the competing circuits, all that changes are the
R1 k R2  99 K. Let R1 and R2 be arbitrarily
values of r1 and r2/r1.
chosen as 20 K each. Then V1 becomes 6 V so
In the next section, an illustrative example of
that Eq. 25.7 is satisfied. Also, r1 is calculated as
design is worked out for absolute as well as
11 K so that ICBO r1 = 11  10−5; thus Eq. 25.8
comparative performances of the four circuits.
is also satisfied. The design is summarized in
column 2 of Table 25.2.

An Example
Design of N2
For a fair comparison of the four circuits, one
should design each circuit for the same Q point The gain requirement fixes RC2 as 1 K. From
and the same gain. First, consider N1, N2 and N3, Eqs. 25.6–25.8 and Table 25.1, the requirements
in all of which, the gain is approximately of bias stability become
−gmRC2. Since the Q points are the same, one
should have identical RC2 in each circuit. This R2  99RC1
ensures that the output resistance is also equal, and VCC R1 =ðR1 þ RC1 þ R2 Þ  0:6; 108 R1
under the usual assumption of r0 of the BJT k ðRC1 þ R2 Þ:
being much greater than RC2. The input resis- ð25:15Þ
tance in each circuit is approximately r in the
usual situation of base biasing resistances having Also, for this circuit,

Table 25.1 Values of r1 Circuit r1 r2/r1


and r2/r1 for N1–N4
N1 RE1 þ R1 k R2 RE1 =ðRE1 þ R1 k R2 Þ
N2 R1 k ðRC1 þ R2 Þ RC1/(RC1 + R2)
N3 R1 k ðRE2 þ R2 Þ RE2/(RE2 + R2)
N4 RE1 þ R1 k ðR2 þ RC1 Þ RE1 ðR1 þ R2 þ RC1 Þ þ R1 RC1
RE1 ðR1 þ R2 þ RC1 Þ þ R1 RC1 þ R1 R2
196 25 BJT Biasing Revisited

Table 25.2 Bias stability of example designs


Circuit N1 N2 N3 N4
Design parameters RC2 = RE1 = 1 K RC2 = RC1 = 1 K RC2 = RC1 = 1 K RE1 = 0.896 K
R2 = R1 = 20 K R1 = 10 K R1 = 10 K R1 = 32.5 K
R2 = 9 K R2 = 9 K R2 = 20 K
RC1 = 1.05 K
V1(volts) 6 6 Same 7.28
r1(K) 11 5 Values 12.76
r2(K) 1 0.5 As 1.43
db 0.0342 0.0313 Those 0.0281
dv 0.0463 0.0463 Listed 0.0374
dI 0.0208 0.0095 For 0.0195
dT 0.1013 0.0871 Circuit 0.0850
d0 0.1034 0.0883 N2 0.0863

VCE ¼ VCC ðI1 þ IC ÞRC1 IC RC2 ; ð25:16Þ


Performances of N1, N2 and N3
and
Let b change from b1 = 100 to b2 = 150 and let
I1 ¼ IB þ ðVBE =R1 Þ ffi ðIC =bÞ þ ðVBE =R1 Þ: the temperature change from 25 to 125 °C. As is
ð25:17Þ well known, VBE decreases with temperature at
the rate of 2.5 mV/°C, and ICBO doubles for
Combining Eqs. 25.16 and 25.17 and substi- every 10 °C rise in temperature. Thus, VBE
tuting numerical values (note RC2 = 1 K) gives changes by ∆VBE = –250 mV while the corre-
the condition sponding change in ICBO is ∆ICBO = (210 − 1)
ICBO = 10.23 lA.
ð4=RC1 Þð0:6=R1 Þ ¼ 4:04  103 ð25:18Þ For each of the three designs, the values of d,
dv and dI can now be calculated from Eqs. 25.9–
Let R1 = 10 K; then RC1 is obtained from 25.11 and Table 25.1. These values are given
Eq. 25.18 as 0.976 K ≅ 1 K. Now from Table 25.2 along with other necessary informa-
Eq. 25.15, we should have R2  99 K. With tion. Note that no partial fractional deviation is
R2 = 9 K, V1, which is the left-hand side of more than 0.1 so that in each circuit, the total
Eq. 25.15, becomes 6 V, and ICBO r1, which is fractional deviation can be obtained by summing
the second expression on the right-hand side of the three partial ones. The circuits N2 and N3
Eq. 25.15, becomes 5  10−8; thus Eq. 25.15 is cause a smaller change in IC than N1, although no
satisfied. The design is complete and is given in special care was taken to show N2 and N3 in a
column 3 of Table 25.2. brighter light than N1. However, no generaliza-
tion can be made from this specific design; it is
possible that with a redesign of N1, balance can
Design of N3 be tilted in its favour. All that can be said, and it
has been said earlier, is that bias stabilities that
As mentioned earlier, the design of N2 will also can be achieved by the three circuits are
work for N3 if RE2 is taken as 1 K. comparable.
Design and Performance of N4 197

Design and Performance of N4 as in the other designs, the value of ICBO r1 is


negligible compared to 0.6 so that Eq. 25.8 is
N4 is different from N1, N2 and N3 in that it had satisfied.
DC as well as AC feedback through R2. The ac The complete design along with values of d,
equivalent circuit is shown in Fig. 25.5, from dv, dI and dT are given in Table 25.2.
which the gain can be calculated as

Vo ð1=R2 Þ  gm Using the Total Change Formula


¼ ð25:19Þ
Vi ð1=RC1 Þ þ ð1=R2 Þ
For the example under consideration, the total
By Miller’s theorem, the input impedance change formula Eq. 25.13 was used for each
would be R1 krp kR2 =ð1 þ jgainjÞ. While the design, and the values of d0 are found to be
shunting effect of R1 can be ignored, that of R2/ 0.1034, 0.0883, 0.0883 and 0.0863 for N1, N2, N3
(1 + |gain|) cannot; in fact, the latter will, in and N4, respectively. Clearly, d0 > dT. Thereby
practice, be one order smaller than r ! The output showing that the estimate of total fractional
impedance is approximately RC1 k R2 =ð1 þ change on the basis of partial changes is an
jgainj1 Þ ffi RC1 if R2  RC1 as is usually the optimistic one.
case.
With a gain requirement of −160, and R2
arbitrarily chosen as 20 K, Eq. 25.19 gives Conclusion
RC1 = 1.05 K. Referring to Fig. 25.4, we see that
I1 = (VCE − VBE)/R2 = 0.17 mA. Hence, voltage The BJT biasing circuit has been generalized and
drop across RE1 is VCC − VCE − (IC + I1)RC1 = transformed to yield three alternative four resistor
3.62 V; since the current through RE1 is circuits, whose bias stability performance is
4.04 mA, we get RE1 = 0.896 K. Finally, comparable to that of the commonly used four
R1 = (VBE + voltage drop across RE1)/ resistor circuit. It has been shown that the com-
(I1 − IB) = 32.5 K. monly used performance measure dT obtained by
With the above design, r1 and r2/r1 are cal- summing the partial fractional changes is an
culated from Table 25.1 as 12.76 K and 0.112; optimistic one. It has also been shown that the
thus Eq. 25.6 is satisfied. Also, V1 here becomes calculation of the total fractional change d0 poses
7.28 V so that Eq. 25.7 is also satisfied. Finally, no problem even though the formula looks for-
midable. Another general guideline in designing
a bias circuit that has been revealed in our
R2 designs is that once V1  VBE has been estab-
lished, V1  ICBO r1 need not be checked
+
+ because VBE is usually a few orders greater than
Vi R1 rx gm Vi RC1 Vo ICBO r1.
-
The circuits N1, N2 and N3 have similar per-
- formance in terms of gain, input and output
impedances, while for the same gain and output
impedance, N4 has one order lower input
Fig. 25.5 AC equivalent circuit of N4 impedance.
198 25 BJT Biasing Revisited

Problems References
P:1. Replace the dotted capacitor C by a firm 1. J. Millman, A. Grabel, Microelectronics
connection. Choose C such that its impe- (McGraw-Hill, New York, 1987)
dance is comparable to RE1. What happens 2. S.G. Burns, P.R. Bond, Principles of Electronic
Circuits (West Publishing House, St Paul, 1987)
to the biasing? Analyze.
3. A.S. Sedra, K.C. Smith, Microelectronic Circuits
P:2. What if RBE is absent in Fig. 25.1b? (Oxford University Press, New York, 1998)
P:3. What if RBE = 0 in Fig. 25.1b?
P:4. Let REZ = 0 in Fig. 25.1c. What is the effect
on biasing?
P:5. What if C in Fig. 25.2 is not too large to
become a short circuit at AC?
Analysis of a High-Frequency
Transistor Stage 26

It is shown that, contrary to popular belief, circuit is not valid for output impedance calcu-
classical two-port network theory is adequate lations. There exist several other methods for
for an exact analysis of a general carrying out the analysis: the classical node or
high-frequency transistor stage, including mesh analysis, analysis using feedback concepts,
emitter feedback, almost by inspection. driving point impedance technique [3], and the
recently proposed open and short circuit tech-
nique [2]. Of these, the last one appears attrac-
Keywords tive, and is based on the calculation of two
Two-port analysis  High-frequency stage simpler gain functions and a driving point
impedance.
The purpose of this chapter is to show that
classical two-port network theory is adequate for
Introduction analyzing the circuit exactly, almost by inspec-
tion. The method has been tested in the under-
Consider the high-frequency amplifier circuit graduate classes and has been well received.
shown in Fig. 26.1, which includes an
un-bypassed emitter resistance RE. The capacitor
Cl is traditionally singled out as the troublesome Two Port Analysis
element, but for which the analysis would have
been much simpler. In most textbooks on elec- Let as indicated in Fig. 26.1,
tronics, therefore, the circuit is unilateralized
through application of Miller’s theorem, and Rx ¼ Rs þ rz and Zp ¼ 1=ðgp þ sCp Þ ¼ 1=Yp
simplified by assuming a resistive load, and ð26:1Þ
ignoring the reflected Miller admittance on the
load side [1]. These assumptions, as one readily where gp = 1/rp. We shall carry out the analysis
appreciates, are not always valid; further, as in several steps. First, consider the two-port
pointed out by Yeung [2], the Miller equivalent shown in Fig. 26.2a. By inspection, its y-matrix
is
 
yp 0
Source: S. C. Dutta Roy, “Analysis of a High Frequency ½yŠa ¼ ð26:2Þ
gm 0
Transistor Stage,” Students’ Journal of the IETE, vol.
29, pp. 5–7, January 1988

© Springer Nature Singapore Pte Ltd. 2018 199


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_26
200 26 Analysis of a High-Frequency Transistor Stage

N (a) (b)
IS IL
+ Cm
1 V Zp gm V 2 1 2
RS rx Cm
+
RX Cp
V rp gmV ZL VL
+ -
VS (c) Cm
(d)
-
Zp +
RE 1 V Zp gm V 2 1 RE 2

Fig. 26.1 Equivalent circuit incorporated along with the (e) Cm


actual components
+
V Zp gm V

Next, consider the two-port of Fig. 26.2b for


1 2
which, again by inspection, RE
 
sCl sCl
½yŠb ¼ ð26:3Þ
sCl sCl Fig. 26.2 Steps in derivation

Now, connect the two two-ports of Fig. 26.2a, If we connect the two-ports of Fig. 26.2c, d in
b in parallel, as in Fig. 26.2c; the y-matrix of this series, the two-port of Fig. 26.2e results, whose
two-port is the sum of Eqs. 26.2 and 26.3, i.e., z-matrix is obtained by adding 26.6 and 26.7, i.e.,
  " 1 1
#
yp þ sCl sCl RE þ yp þ gm RE þ yp þ g m
½yŠc ¼ ð26:4Þ ½zŠe ¼ yp þ sCl
gm sCl sCl RE þ
sCl gm
RE þ
sCl ðyp þ gm Þ sCl ðyp þ gm Þ

The determinant of Eq. 26.4 is ð26:8Þ

j yjc ¼ sCl ðyp þ gm Þ ð26:5Þ Adding Rx in series at the input port in


Fig. 26.2e gives the two-port N, indicated by
Hence the z-matrix of the two-port of dashed outline in Fig. 26.1. The z-matrix of N is
Fig. 26.2c is therefore the same as given in Eq. 26.8 except
  for an increase of z11 by Rx. Hence
1 y22c y12c
½zŠc ¼ " 1 1
#
jyj y21c y11c R x þ RE þ yp þ g m RE þ y p þ gm
" c 1 1
# ½zŠN ¼ sCl gm yp þ sCl
y p þ gm y p þ gm RE þ sCl ðyp þ gm Þ RE þ sCl ðyp þ gm Þ
¼ gm þ sCl yp þ sCl ð26:6Þ
sCl ðyp þ gm Þ sCl ðyp þ gm Þ ð26:9Þ

Next consider the two-port of Fig. 26.2d. Its Now postulate the currents IS and IL as shown
z-matrix is given by in Fig. 26.1. Then
 
RE RE Vs ¼ Is z11N þ IL z12N ð26:10Þ
½zŠd ¼ ð26:7Þ
RE RE
Two Port Analysis 201

VL ¼ Is z21N þ IL z22N ð26:11Þ high-frequency transistor stage having an


un-bypassed emitter resistor and a general load.
It should be clear that the effect of rl could be
VL ¼ IL ZL ð26:12Þ taken account of by replacing sCl in Eq. 26.9 by
gl + sCl and that the effect of a parallel ro, Co
Solving for IL from Eqs. 26.11 and 26.12, we combination across the current generator gm
get V could be taken account of by putting y22 a =
go + sCo, instead of zero, in Eq. 26.2 and con-
IL ¼ Is z21N =ðz22N þ ZL Þ ð26:13Þ tinuing the analysis.

Substituting this in Eqs. 26.11 and 26.12


gives the voltage transfer function H(s) = VL/Vs Problems
and the input impedance Zin = Vs/Is as
P:1. Rederive the equations by assuming gm
H ðsÞ ¼ ZL z21N =ðZ11N ZL þ jzjN Þ ð26:14Þ
! ∞ in Fig. 26.1.
P:2. What happens if Cl = 0 in Fig. 26.1?
Zin ¼ ðz11N ZL þ jzjNÞ=ðz22N þ ZL Þ ð26:15Þ P:3. What happens if Cl ! ∞ in Fig. 26.1?
P:4. Now, it’s the turn of RE. What happens
The output impedance Zout faced by the load when RE = 0 in Fig. 26.1?
is precisely 1/y22N, i.e. P:5. What happens when RE ! ∞ in Fig. 26.1?

Zout ¼ jzjN =z11N ð26:16Þ

Combining Eqs. 26.14–26.16 with Eq. 26.9 References


gives the desired expressions.
1. J. Millman, Microelectronics (McGraw-Hill, New
York, 1979)
2. K.S. Yeung, An open and short circuit technique for
Conclusion analyzing electronic circuits. IEEE Trans. Educ. E-30,
55–56 (1987)
It has been demonstrated that simple two-port 3. R.D. Kelly, Electronic circuit analysis and design by
techniques are adequate for analyzing, exactly driving point impedance techniques. IEEE Trans.
Educ. E-13, 154–167 (1970)
and almost by inspection, a general
Transistor Wien Bridge Oscillator
27

Three possible circuits of transistor Wien working into an infinite impedance load, it has a
bridge oscillator, derived from analogy with transfer function given by
the corresponding vacuum tube circuit, are
described. Approximate formulas for the fre- vout 1
¼  
quency of oscillation and the voltage gain vin 1þ C1 R1 þ C2 R2
þ j xC2 R1  xC11 R2
C1 R 2
required for maintenance of oscillations are
ð27:1Þ
deduced. A practical circuit using two OC71
transistors is given. The frequency of oscilla- where x = 2 pf denotes the angular frequency of
tion is found to agree fairly well with that the driving source. The phase shift produced by
calculated from theory. The relative merits of the network is zero at a frequency xo, where
the different forms have also been discussed.
 1=2
1
xo ¼ ð27:2Þ
Keywords
R1 R2 C1 C2
Transistor  Oscillator  Wien bridge Figure 27.2 shows the circuit of a vacuum
tube oscillator using the network of Fig. 27.1. It
consists of a two-stage amplifier with positive
feedback provided through the Wien network.
Introduction Under open loop conditions, the amplifier has a
flat gain and a phase shift of 360° over the fre-
The RC network shown in Fig. 27.1 is a quency range of interest. Thus the circuit will
degenerated form of the Wien bridge and will, oscillate at a frequency given by Eq. 27.2 pro-
henceforward, be referred to as the Wien net- vided that the open loop gain Ao of the amplifier
work. Driven by an ideal voltage generator and satisfies the inequality

C2 R1
Ao  1 þ þ
C1 R2

A common emitter (CE) transistor amplifier is


Source: S. C. Dutta Roy, “Transistor Wien Bridge
Oscillator,” Journal of the Institution of analogous to a common cathode vacuum tube
Telecommunication Engineers, vol. 8, pp. 186–196, amplifier in that both give voltage amplification
July 1962 with a phase reversal. A transistor circuit,
© Springer Nature Singapore Pte Ltd. 2018 203
S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_27
204 27 Transistor Wien Bridge Oscillator

maintenance of oscillations. In view of the


R1 C1 low input impedance of a CE amplifier, the
gain of the first stage will be small; the
vin R2 C2 vout second stage will then have to provide the
necessary voltage gain.
(4) In an oscillator circuit, it is desirable that the
frequency of oscillation should be controlled
Fig. 27.1 Degenerated Wien bridge network by varying the passive elements only. In the
above circuit, since both the series and the
shunt arms arc supplemented by transistor
H .T . + impedances, the frequency will be largely
RL controlled by the transistor impedance
R1 C1 CL
parameters, which are again functions of the
OUT transistor operating point.
V1 V2
These difficulties may be obviated by using a
current dual of the Wien network as the feedback
R2 C2
+ + network [1, 2]. In this chapter, it is shown that
- - H .T . – circuits analogous to that of Fig. 27.2 can be
designed to overcome some or all the difficulties
listed above. In all, three circuits have been dis-
Fig. 27.2 Circuit of a vacuum tube Wien bridge
cussed, using 4, 3 and 2 transistors. In the anal-
oscillator
ysis of these circuits, a transistor amplifier has
been assumed to be capable of representation by
analogous to that of Fig. 27.2, will, however, the equivalent circuit shown in Fig. 27.3 in
have the following drawbacks. which the effect of the collector capacitance has
been assumed to be negligible. The expressions
(1) A transistor amplifier in the CE mode has a for the voltage gain (A) and the output impedance
high output impedance so that the behaviour (Zi) are:
at the output terminals is essentially that of a
current generator. The series arm of the Wien v2 h21 ZL
A¼ ¼ ð27:3Þ
network will thus be supplemented by the v1 h11 þ Dk ZL
output impedance of the equivalent voltage
generator. In contrast, in Fig. 27.2, V2 acts as
a voltage generator with an internal impe- v1 h11 þ Dk ZL
Zi ¼ ¼ ð27:4Þ
dance rpRL/(rp + RL) (rp = plate resistance of i1 1 þ h22 ZL
V2) which is usually small compared to R1.
(2) The input impedance of a CE transistor
i1 i2
amplifier is low and causes a considerable + -
loading of the shunt arm of the Wien net- h11 1/h22
h12v2
work. In contrast, the input impedance of a Zg
vacuum tube is very high, ideally infinite, so v1 v2 ZL
that the Wien network in Fig. 27.2 works h21i1
vin
into an open circuit load.
(3) Since the shunt arm is heavily loaded and the
impedance of the series arm is increased, a Fig. 27.3 Low frequency equivalent circuit of a transis-
large voltage gain will be required for tor amplifier
Introduction 205

Table 27.1 Low Parameter Configuration


frequency h-parameters of
OC71 at Vc = −2 V, CB CE CC
Ic = 3 mA h11 (X) 17 800 800
h12 8  10−1 5.4  10−4 1
h21 −0.979 47 −47
h22 ðfÞ 1.6  10−6 80  10−6 80  10−6
∆k 8.1  10−1 0.0386 47.06

where effect of the biasing resistors is negligible and that


the coupling and decoupling condensers behave
Dk ¼ h11 h22 h12 h21 as short circuits at the frequency of oscillation.
The two-stage amplifier T1T2 gives an output
voltage which is in phase with the input voltage.
A practical circuit, using two OC71 transis- The first stage has a load approximately equal to
tors, is given. Each transistor is maintained at the the input impedance of the second and as such
following operating point: collector to emitter has a low gain. In view of the high input impe-
voltage, Vc = −2 V; collector current, Ic = 3 dance of the CC stage T3, T2 has a load
mA. The h-parameters at the above operating approximately equal to RL2 and can be designed
point are given in Table 27.1. The frequency of to have a high gain. As the output impedance of
oscillation is found to agree fairly well with that T3 is low, the Wien network is effectively sup-
calculated from theory. plied by a voltage generator. The network has a
load equal to the input impedance of the CC
stage T4 and as such, if R2 is not too high, the
Circuit 1 loading may be considered to be negligible. T4
has a low output impedance and is not loaded
A common collector (CC) transistor amplifier is when connected to the input of T1. The CC
analogous to a cathode follower circuit. It has a stages produce no phase shift. Thus the circuit
high input and a low output impedance while the will oscillate at a frequency at which the phase
gain is slightly less than unity. Thus the high shift through the network is zero.
impedance output of a CE stage can he trans- An approximate solution for this circuit can be
formed into a low impedance one by cascading a obtained as follows. Let us assume that the
CC stage to it. The problem of loading of the transistors T1–T4 are identical. If the effective
shunt arm of the Wien network can be solved in a load impedance of T2 is not very large, then its
similar way. The transistor analogue of the Wien input impedance is approximately h11e, where the
bridge oscillator of Fig. 27.2 will then look like subscript e is used to mean a common emitter
that shown in Fig. 27.4. Only the A.C. equivalent parameter. The effective load impedance of T1 is
circuit has been drawn on the assumption that the

T1 T2 T3 T4

RL1 RL2 C1 R1
Re1 C2 R2 Re2

Fig. 27.4 Circuit 1 using four transistors (2 CE and 2 CC)


206 27 Transistor Wien Bridge Oscillator

RL1 h11e A3 ’ 1. Thus if the voltage at the input of T1 is


ðRL Þ1 ’ v1, then that at the input of the Wien network is
RL1 þ h11e
A1A2v1.
The voltage gain of this stage is, by formula Since Re2 is shunted by the input impedance of
Eq. 27.3, T1 which is small ð’ h11e Þ the input impedance of
T4, (Zi)4, will not, in general, be negligible com-
h21e ðRL Þ1 pared with the impedance of the shunt arm of the
A1 ¼
h11e þ ðDh Þe ðRL Þ1 Wien network. (Zi)4, however, can be calculated as
follows. The load impedance of T4 is
Now, (RL)1 < h11e and from Table 27.1,
(Dk)e  1. Thus Re2 h11e
ðRL Þ4 ’
Re2 þ h11e
h21e h21e RL1
A1 ’  ðRL Þ1 ¼ ð27:5Þ
h11e RL1 þ h11e Thus from Eq. 27.4,
The effective load impedance of T2 is
h11e þ ðDk Þe ðRL Þ4
ðZi Þ4 ¼
RL1 ðZi Þ3 1 þ h22e ðRL Þ4
ðRL Þ2 ¼
RL2 þ ðZi Þ3
Now, (RL)4 < h11e so that from Table 27.1,
where (Zi)3 is the input impedance of T3. From h22e(RL)4  1. Therefore,
Eq. 27.4,
Re2 h11e
ðZi Þ4 ’ h11e þ ðDk Þ ð27:7Þ
h11e þ ðDh Þe Re1 Re2 þ h11e
ðZi Þs ’
1 þ h22e Re1
Let
The approximation involved in the above
R2 ðZi Þ4
equation is that the loading of Re1 by the Wien R02 ¼ ð27:8Þ
network and the following stage is negligible. R2 þ ðZi Þ4
Substituting values from Table 27.1 and assum-
ing Re1 = 1 kX, we get ðZi Þ3 ’ 48 kX. Now, RL2 Then from Eq. 27.1, the voltage at the input
will be of the order of 3 kX or less so that we can of T4 is
put (Zi)3  RL2 and get ðRL Þ2 ’ RL2 . Thus the A1 A2 v 1
gain of the second stage is C1 R1 þ C2 R02
  ð27:9Þ
1þ C1 R02 þ j xC2 R1  xC11 R0
2

h21e RL2
A2 ’ ð27:6Þ The gain of T4 is, by formula Eq. 27.3,
h11e þ ðDk Þe RL2
h21e ðRL Þ4
The gain of the third stage is A4 ¼
h11e þ ðDk Þe ðRL Þ4
h21e Re2
A3 ’ Assuming Re2 = 1 kX and substituting for the
h11e þ ðDk Þe Re1
parameters from Table 27.1, we get A4 ’ 1.
Assuming Re1 = 1 kX and putting the values Thus the output voltage of T4 is given by
of the parameters from Table 27.1, we get Eq. 27.9; but, this is equal to v1 so that
Circuit 1 207

C1 R1 þ C2 R02 Total negative feedback may be applied by


 
1
A1 A2 ¼ 1 þ 0 þ j xC 2 R 1  0 connecting a suitable resistance from the output
C1 R2 xC1 R2
ð27:10Þ of T2, T3 or T4 to the first emitter, the biasing
resistance connected to it being partly or wholly
Equating the imaginary parts on either side of unbypassed. If a suitable non-linear element is
Eq. 27.10 gives the frequency of oscillation as used for the feedback resistance, amplitude sta-
bilization of the oscillator output will result.

1
1=2 The use of negative feedback raises the input
xo ¼ 0 ð27:11Þ impedance of T1 and thus reduces the loading of
C1 C2 R1 R2
Re2. As a result, the input impedance of T4
Combining Eqs. 27.7 and 27.8 with increases and the loading of the shunt arm of the
Eq. 27.11, we have Wien network decreases. If sufficient negative
feedback can be used, then a1 ! 0 and x0 ! xn.
" ( )#1=2 The output of the oscillator is to be taken from
R2 ðRe2 þh11e Þ
xo ¼xn 1þ the third emitter, as the output impedance of T3 is
h11e ðh11e þRe2 ÞþðDk Þe Re2 h11e small (’ 64 X for OC71 transistor if RL2 = 2
ð27:12Þ 2 kX and Re1 = 1 kX). The output impedance of
T4 is also small, but a load connected at this point
where xn denotes the angular frequency at will reduce (Zi)4 and as such R02 . Thus a variation
which the phase shift through the isolated net- in the load impedance will result in a change in
work is zero. The quantity within the second the frequency of oscillation.
bracket in Eq. 27.12 may be looked upon as a
correction factor, a1. If R2 < 3 kX and Re2 = 1
kX, then for the OC71 transistor, a1 < 0 14 and Circuit 2
we can write
  A common base transistor amplifier can give
1 voltage amplification with zero phase shift. Thus
xo ’ xn 1 þ a1
2 the two-stage CE amplifier in Fig. 27.4 can be
replaced by a single stage CB amplifier as shown
The condition for maintenance of oscillations
in Fig. 27.5. As in the previous case, only the A.
is obtained by equating the real parts on either
C. equivalent circuit has been drawn on the same
side of Eq. 27.10. Combining this with Eqs. 27.5
assumptions as made before.
and 27.6 gives
Let the voltage at the input of T1 be v1. As in
2
h21e RL1 RL2 the previous case, the input impedance of T2 will
be high compared to RL so that the gain of T1 is
ðRL1 þ h11e Þ h11e þ ðDk Þe RL2

C 2 R1 h21b RL
¼ 1þ þ ð1 þ a1 Þ ð27:13Þ A1 ’
C 1 R2 h11b þ ðDk Þ RL
b

Normally, the left-hand side of eq. 27.13 will Substituting values from Table 27.1, we note
be far in excess of the right-hand side, so that the that h21b ’ 1 and that if RL < 3 kX then
output waveform will be distorted. A good (Dk)bRL < 2  43. Thus, to a first approximation,
waveform can be obtained by inserting a suitable we can neglect (Dk)bRL compared to h11b and get
resistance Rf at the point marked X in Fig. 27.4. It
is, however, better to reduce the gain by negative RL
A1 ’ ð27:14Þ
feedback. Local negative feedback may be h11b
applied through unbypassed emitter resistance.
208 27 Transistor Wien Bridge Oscillator

Fig. 27.5 Circuit 2 using T1


three transistors (1 CB and 2 T2 T3
CC)

RL C1 R1
Re1 C2 R2 Re2

The gain of T2 is approximately unity so that the


wo ¼ wn ð1 þ a2 Þ1=2
input to the Wien network is A1v1. The input
impedance of T3, (Zi)3, will be small compared
where
with that of T2 because of the heavy loading of Re2
by the input circuit of T1. The load impedance of R2
T3 is ’ h11b and from Table 27.1, h22eh11b  1; a2 ¼
h11e þ ðDk Þe h11b
thus
The correction factor a2 in this case is quite
ðZi Þ3 ’ h11e þ ðDk Þe h11b large. For the OC71 transistor, if R2 = 3.2 kX,
then a2 = 2. Equating the real parts on either side
Let
of Eq. 27.17 gives the condition for maintenance
of oscillations. Combining this with Eqs. 27.14
R2 h11e þ ðDk Þe h11b

R02 ¼ ð27:15Þ and 27.16, we get
R2 þ h11e þ ðDk Þe h11b
h21e RL C 2 R1
Then the output of the Wien network will be k
¼ 1þ þ ð1 þ a2 Þ
h11e þ ðD Þe h11b C 1 R2
given by
ð27:18Þ
A 1 v1
v3 ¼ C1 R1 þ C2 R02
  As in the previous case, the left-hand side of
1þ C1 R02 þ j xC2 R1  xC11 R0 Eq. 27.18 will usually be in excess of the
2
right-hand side. A suitable resistance may be
The gain of T3 is used between the emitters of T3 and T1 to get a
good waveform. Alternatively, negative feedback
h21e h11b may be applied by connecting a suitable resis-
A3 ’ ð27:16Þ
h11e þ ðDk Þe h11b tance from the output of T1, T2 or T3 to the base
of T1, the biasing resistances at this point being
The output of T3 is given by v4 = A3v3. But partly or wholly unbypassed. As in the previous
v4 = v1; thus case, negative feedback reduces the correction
  factor, a2.
C2 R1 1 As the output impedance of T3 and the input
A 1 A3 ¼ 1 þ þ þ j xC2 R1 
C1 R02 xC1 R02 impedance of T1 are both small, the output
ð27:17Þ voltage may be taken from the third emitter, if
the load impedance is not too small. Alterna-
Equating the imaginary parts on either side of tively, the output may be taken from the second
Eq. 27.17 gives the frequency of oscillation as emitter as in the previous case.
Circuit 3 209

T1 h21b ZL
C1 R1 T2 A1 ¼
h11b þ ðDk Þb ZL
RL C2
R2 Re
Now ZL < RL and RL is of the order of 3 kX.
Thus (Dk)bZL  h11b and since h21b ’ 1,

Fig. 27.6 Circuit 3 using two transistors (1 CB and 1 ZL


CC) A1 ’ ð27:23Þ
h11b

The voltage across R1 is A1v1, v1 being the


Circuit 3 voltage at the input of T1. The output voltage of
the Wien network is, therefore,
A further simplification of the Wien bridge
oscillator circuit is possible if we omit the tran-
A 1 v1
sistor T2 in Fig. 27.5. The resulting circuit is v2 ¼ C1 R1 þ C2 R02
 
shown in Fig. 27.6. In view of the high output 1þ C1 R02 þ j xC2 R1  xC11 R0
2
impedance of T1, the series arm of the Wien
ð27:24Þ
network will also be supplemented by an extra
impedance in this circuit. The voltage gain of T2 is
As in circuit 2, the effective load impedance of
T2 is approximately h11b and since h22ch11b  1, h21c h11b
A2 ’ ð27:25Þ
its input impedance is h11c þ ðDk Þc h11b

ðZi Þ2 ’ h11c þ ðDk Þc h11b The output of T2 is A2v2 = v1. Combining this
with Eqs. 27.22–27.25, we get
Let  
h21c RL RL
¼ D 1þ ð27:26Þ
R2 ðZi Þ2 R2 h11c þ ðDk Þc h11b Z1 þ Z2
R02 ¼ ¼
R2 þ ðZi Þ2 1 þ R2 = h11c þ ðDk Þc h11b

where D denotes the denominator of the
ð27:19Þ
right-hand side of Eq. 27.24. Now from
1 Eqs. 27.20 and 27.21,
Z1 ¼ R1 þ ð27:20Þ
jxC1
R02 D
and Z1 þ Z2 ¼ ð27:27Þ
jxC2 R02 þ 1

R02 Combining 27.26 and 27.27 gives


Z2 ¼ ð27:21Þ
jxC2 R02 þ 1
h21c RL C2 R1 þRL
¼1þ þ
The effective load impedance of T1 is h
h11c þðD Þc h11b C1 R02
 
1
RL ðZ1 þ Z2 Þ þj xC2 ðR1 þRL Þ 
ZL ¼ ð27:22Þ xC1 R02
R L þ Z1 þ Z2
ð27:28Þ
so that the voltage gain of T1 is
210 27 Transistor Wien Bridge Oscillator

Equating the imaginary parts on either side of Eq. 27.28 and substituting for R2′ from Eq. 27.19 gives the frequency of oscillation as

ωo = ωn [{1 + R2/(h11c + (Δh)c h11b)}/{1 + (RL/R1)}]^(1/2)   (27.29)

Equation 27.29 shows that ωo can be made equal to ωn by choosing

R1R2 = RL[h11c + (Δh)c h11b]   (27.30)

Equating the real parts on either side of Eq. 27.28 gives the condition for maintenance of oscillations as

h21cRL/[h11c + (Δh)c h11b] = 1 + C2/C1 + (R1/R2)(1 + RL/R1){1 + R2/[h11c + (Δh)c h11b]}   (27.31)

Here also the left-hand side of Eq. 27.31 will be in excess of the right-hand side, and the gain can be reduced by the same methods as employed in circuit 2. If degeneration is used, then the input impedance of T1 will be raised and the output impedance lowered. The former reduces the loading of the shunt arm and the latter reduces the impedance adding to the series arm of the Wien network. If sufficient negative feedback can be applied, then ωo can be made to approach ωn very closely.

For simplicity's sake, let us suppose that the gain is reduced by inserting a resistance Rf between the emitters of T2 and T1. Then the load impedance of T2 is

(ZL)2 ≈ (Rf + h11b)Re/(Re + Rf + h11b)

Therefore

(Zi)2 ≈ h11c + (Δh)c(ZL)2

because even if (ZL)2 = 1 kΩ, h22c(ZL)2 = 80 × 10⁻³ ≪ 1. The frequency of oscillation will be given by

ωo = ωn [{1 + R2/[h11c + (Δh)c Re(Rf + h11b)/(Re + Rf + h11b)]}/{1 + (RL/R1)}]^(1/2)   (27.32)

The condition of oscillation is modified to the following:

h21cRL/[h11c + (Δh)c Re(Rf + h11b)/(Re + Rf + h11b)] = 1 + C2/C1 + (R1/R2)(1 + RL/R1){1 + R2/[h11c + (Δh)c Re(Rf + h11b)/(Re + Rf + h11b)]}

This can be solved to find the appropriate value of Rf. It is, however, more convenient to put a variable resistance for Rf and to adjust it experimentally.

The output voltage is taken from the emitter of T2 from the same considerations as stated in the previous case.

Practical Circuit

A practical version of the two-transistor circuit is shown in Fig. 27.7a. Each transistor is maintained at the operating point at which the parameters of Table 27.1 apply. This was done for comparing the actual frequency with that calculated from theory. By choosing a smaller value of Ic, the circuit could be designed to work on a 6 V battery. A 9 V battery could also be used for establishing the required operating point, but the biasing resistors required are so small that, besides drawing a large power from the battery, their effect on the A.C. operation becomes quite appreciable.

A slightly lower value of resistance was used at the collector of T2 than that at the collector of T1 to establish a slight difference of potential between the two emitters.
Fig. 27.7 a Practical oscillator circuit using two transistors; b arrangement for negative feedback

Table 27.2 Comparing the actual frequency with that calculated from theory

R1 (kΩ)   C1 (µF)   R2 (kΩ)   C2 (µF)    Rf (Ω)   fa (c/s)   fc (c/s)
40        0.106     89        0.105      140      115        113
0.97      0.106     10.4      0.105      395      215        212
0.97      0.106     1.48      0.105      415      755        730
1.4       0.022     1.48      0.0208     375      3365       3400
0.97      0.011     1.48      0.0095     410      7223       7580
0.97      0.0065    1.48      0.00642    385      11,312     12,000
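The fc entries above follow from Eq. 27.32, which needs the transistor h-parameters of Table 27.1 (not reproduced in this excerpt). The following minimal Python sketch shows the shape of that calculation for one row; the load resistance RL and the composite term X, standing for h11c + (Δh)c(ZL)2, are assumed illustrative placeholders and not data from the chapter.

    import numpy as np

    # Third row of Table 27.2
    R1, C1, R2, C2 = 0.97e3, 0.106e-6, 1.48e3, 0.105e-6
    RL = 2.2e3   # assumed: the 2.2 kOhm collector resistor of Fig. 27.7a
    X = 8.0e3    # assumed placeholder for h11c + (Delta h)_c (ZL)2 of T2

    f_n = 1/(2*np.pi*np.sqrt(R1*C1*R2*C2))        # uncorrected Wien frequency
    f_o = f_n*np.sqrt((1 + R2/X)/(1 + RL/R1))     # loading correction of Eq. 27.32
    print(round(f_n), round(f_o))                 # ~1259 Hz and ~758 Hz

With these placeholder values the corrected frequency falls close to the measured fa = 755 c/s of that row, which illustrates how strongly the loading correction pulls ωo below ωn when condition 27.30 is not satisfied.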

In Fig. 27.7a, the gain is shown to be reduced by inserting a variable resistance Rf in the positive feedback line. Thus, for this circuit, formula 27.32 will be applicable. The arrangement for reducing the gain by negative feedback is shown in Fig. 27.7b.

The values of the Wien network components (R1, C1, R2 and C2), the feedback resistance (Rf), the actual frequency of oscillation (fa) and the frequency calculated from Eq. 27.32 (fc) are shown in Table 27.2. In calculating fc for the first two cases, the effects of the biasing resistances were also taken into account. It will be seen that fa agrees fairly well with fc in the frequency range shown.

Discussions

From economic considerations, circuit 3 should be preferred as it uses the least number of transistors and other components.

The dependence of the frequency of oscillation on the transistor operating point is most pronounced in circuit 2, because the correction factor a2 is usually greater than unity. The correction factor a1 in circuit 1 is generally less than unity, while that in circuit 3 can be made a minimum by choosing R1, R2 and RL such that Eq. 27.30 is satisfied. Condition 27.30 cannot, however, be maintained in the lower audio range because of the large values of condensers required.

The change in the frequency of oscillation due to a given change of load impedance will be the highest in circuit 3 and the least in circuit 1. The lower limit of frequency in any of the three circuits considered will be set by the maximum value of the coupling and bypass capacitors that can be used, while the high-frequency limit will be set primarily by the collector capacitance.

Note that the OC71 is obsolete, so do not search for one in the market; instead, wire up the circuit with a commonly available transistor.
When this paper was written, [3] was our Bible for transistor circuits. Also see [4] for an early form of transistor oscillator.

Problems

P:1. Analyze the circuit of Fig. 27.3 with hybrid-π parameters. h-parameters are not used anymore. Do you know the reason?
P:2. Same for the circuit of Fig. 27.4.
P:3. Same for the circuit of Fig. 27.5.
P:4. Same for the circuit of Fig. 27.6.
P:5. What if there is no negative feedback?

References

1. D.E. Hooper, A.E. Jackets, Current derived resistance capacitance oscillators using junction transistors. Electron. Eng. 28, 333 (1956)
2. R. Hutchins, Selective RC amplifier using transistors. Electron. Eng. 33, 84 (1961)
3. R.F. Shea, Principles of Transistor Circuits (Wiley, 1953), p. 336
4. P.G. Sulzer, Low distortion transistor audio oscillator. Electronics 26, 171 (1953)
Analysing Sinusoidal Oscillator
Circuits: A Different Approach 28

Conventionally, in analysing sinusoidal oscillator circuits, one uses the Barkhausen criterion, viz. Aβ = 1 in a positive feedback amplifier whose gain without feedback is A and whose feedback factor is β. However, the identification of A and β poses problems because of mutual loading of the amplifier and the feedback network. A different approach is presented here which does not require such identification. The method is based on assuming a voltage at an arbitrary node and coming back to it through the feedback loop.

Keywords
Sinusoidal oscillator · Different approach

Introduction

In most textbooks on analog electronic circuits, sinusoidal oscillator circuits are analysed by using the Barkhausen criterion, viz. Aβ = 1 in a positive feedback amplifier, where A is the gain of the amplifier without feedback and β is the feedback factor. However, except where the amplifier is nearly ideal, e.g. in op-amp circuits, the amplifier and the feedback networks load each other and the identification of A and β poses a problem. In this chapter, we propose a different approach which does not require such identification. In fact, we do not use feedback concepts at all. Instead, we assume a voltage at an arbitrary node and come back to the same node through the feedback loop. This results in the so-called characteristic equation of the oscillator. By putting s = jω and equating the real and imaginary parts of the equation to zero, we get the condition for, and the frequency of, oscillation.

An Op-Amp Oscillator

Consider the Wien bridge RC oscillator, shown in Fig. 28.1, using an op-amp as the gain element. Here, A refers to the gain between the nodes N1 and N2, and clearly,

A = 1 + (R2/R1)   (28.1)

which is independent of the feedback network because the input impedance of the op-amp tends to infinity and the output impedance tends to zero.

Source: S. C. Dutta Roy, "Analyzing Sinusoidal Oscillator Circuits: A Different Approach," IETE Journal of Education, vol. 45, pp. 9–12, January–March 2004.
Fig. 28.1 An op-amp Wien bridge oscillator

Fig. 28.2 Transistor Wien bridge oscillator


Because of the same reason, the transfer function
of the b network is, by inspection,
gm1V1 gm2V2

b¼½R=ððsCRþ1Þފ=fRþ ½ð1=sC ފ rp1 C


+ +
þR=½ðsCRþ1ފg¼sCR= s2 C2 R2 þ3sCRþ1 :
 
V1 V2 R R C
RC rp2
- -
ð28:2Þ

Putting the values of A and b from Eqs. 28.1


and 28.2 in Ab = 1, and simplifying, we get the Fig. 28.3 AC equivalent circuit of Fig. 28.2
characteristic equation as

s2 C 2 R2 þ ½2 ððR2 =R1 ފsCR þ 1 ¼ 0: ð28:3Þ usual meanings. To analyse this circuit, we start
at V1, and return to V1 through the feedback
Now putting s = jx in Eq. 28.3 and equating loop. Note that
its real and imaginary parts, we get the condition
of oscillation as V2 ¼ gm1 V1 R0c ; ð28:6Þ
R2 ¼ 2R1 ð28:4Þ where
and the frequency of oscillation as R0c ¼ Rc jjrp2 : ð28:7Þ
x0 ¼ 1=ðRC Þ: ð28:5Þ The current generator gm2 V2 in parallel with
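The same loop can be traced symbolically. The short SymPy sketch below (an illustration, not part of the original article) reproduces Eqs. 28.2–28.5 directly from Aβ = 1.

    import sympy as sp

    s, w, R, C, R1, R2 = sp.symbols('s w R C R1 R2', positive=True)

    A = 1 + R2/R1                 # Eq. 28.1
    Z_ser = R + 1/(s*C)           # series RC arm
    Z_par = R/(s*C*R + 1)         # parallel RC arm
    beta = sp.simplify(Z_par/(Z_ser + Z_par))        # Eq. 28.2

    # Characteristic equation from A*beta = 1 (numerator of A*beta - 1), Eq. 28.3
    char_eq = sp.expand(sp.numer(sp.together(A*beta - 1)))
    cond = sp.solve([sp.re(sp.expand(char_eq.subs(s, sp.I*w))),
                     sp.im(sp.expand(char_eq.subs(s, sp.I*w)))],
                    [R2, w], dict=True)
    print(beta)   # equivalent to s*C*R/(s**2*C**2*R**2 + 3*s*C*R + 1)
    print(cond)   # R2 = 2*R1 (Eq. 28.4) and w = 1/(R*C) (Eq. 28.5)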
Transistor Version of the Wien Bridge Oscillator

Now consider the transistorized version of the Wien bridge oscillator shown in Fig. 28.2. Assume that the shunting effects of R1, R2, R3 and R4 are negligible, that the coupling and bypass capacitances behave as short circuits and that the transistor internal capacitances behave as open circuits at the frequency of oscillation. Then, the AC equivalent circuit becomes that shown in Fig. 28.3, where the symbols have their usual meanings.

Fig. 28.3 AC equivalent circuit of Fig. 28.2

To analyse this circuit, we start at V1 and return to V1 through the feedback loop. Note that

V2 = −gm1V1Rc′,   (28.6)

where

Rc′ = Rc||rπ2.   (28.7)

The current generator gm2V2 in parallel with R can be converted to a voltage source −gm2V2R in series with R. Then, one can find V1 as

V1 = −gm2V2R [R′/(1 + sCR′)]/{R + [1/(sC)] + [R′/(1 + sCR′)]},   (28.8)

where

R′ = R||rπ1.   (28.9)

Combining Eq. 28.8 with Eq. 28.6, cancelling V1 from both sides and simplifying, we get the following characteristic equation:
s²C²RR′ + sC(R + 2R′ − gm1gm2RR′Rc′) + 1 = 0   (28.10)

Putting s = jω in this equation and equating the real and imaginary parts, we get the condition of oscillation as

gm1gm2RR′Rc′ = R + 2R′   (28.11)

and the frequency of oscillation as

ω0 = 1/(C√(RR′)).   (28.12)
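A quick numerical check of Eqs. 28.11 and 28.12 can be made by forcing the condition of oscillation and looking at the roots of Eq. 28.10; the element values below are arbitrary illustrations, not taken from the chapter.

    import numpy as np

    R, Rp, C, Rcp = 10e3, 4.7e3, 10e-9, 2.2e3   # R, R' = R||r_pi1, C, Rc' = Rc||r_pi2 (assumed)

    gm_prod = (R + 2*Rp)/(R*Rp*Rcp)             # enforce Eq. 28.11: gm1*gm2*R*R'*Rc' = R + 2R'

    # Coefficients of Eq. 28.10: s^2*C^2*R*R' + s*C*(R + 2R' - gm1*gm2*R*R'*Rc') + 1 = 0
    coeffs = [C**2*R*Rp, C*(R + 2*Rp - gm_prod*R*Rp*Rcp), 1.0]
    print(np.roots(coeffs))                     # a purely imaginary pair +/- j*omega0
    print(1/(C*np.sqrt(R*Rp)))                  # Eq. 28.12: omega0 = 1/(C*sqrt(R*R'))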
Another Example

As another example, consider the transistor Colpitts oscillator shown in Fig. 28.4. At the frequencies at which the Colpitts circuit is used, the transistor internal capacitances may also have to be considered, and we shall do so. Again, we assume the coupling and bypass capacitances to behave as shorts and ignore the shunting effects of R1 and R2. Then, the AC equivalent circuit becomes that shown in Fig. 28.5, which is redrawn in Fig. 28.6 in a form more suitable for analysis. The current generator gmV1 and its shunting elements Rc and C can be replaced by a Thevenin equivalent, and the resulting circuit is shown in Fig. 28.7, where

Z1 = Rc/(sRcC + 1), Z2 = sL/(s²LCμ + 1) and Z3 = rπ/[srπ(C + Cπ) + 1].   (28.13)

Fig. 28.4 Transistor Colpitts oscillator

Fig. 28.5 AC equivalent circuit of Fig. 28.4

Fig. 28.6 Redrawn form of Fig. 28.5

Fig. 28.7 Simplified version of Fig. 28.6

From Fig. 28.7, we get

V1 = −gmV1Z1Z3/(Z1 + Z2 + Z3).   (28.14)

Cancelling V1 from both sides, combining with Eq. 28.13, and simplifying, one gets the following characteristic equation:

s³L[Cμ(2C + Cπ) + C(C + Cπ)] + s²L[Cμ(Gc + gπ + gm) + Gc(C + Cπ) + gπC] + s[2C + Cπ + LGcgπ] + (Gc + gπ + gm) = 0.   (28.15)


Putting s = jω in Eq. 28.15 and equating the real and imaginary parts, we get the frequency of oscillation as given by

ω0² = (2C + Cπ + LGcgπ)/{L[Cμ(2C + Cπ) + C(C + Cπ)]} = (Gc + gπ + gm)/{L[Cμ(Gc + gπ + gm) + Gc(C + Cπ) + gπC]},   (28.16)

where the second part gives the condition of oscillation.
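Equation 28.16 is easy to check numerically: pick element values, compute ω0² from the first expression, choose gm so that the second expression (the condition) is met, and confirm that s = jω0 is then a root of Eq. 28.15. The values below are assumed for illustration only.

    import numpy as np

    # Illustrative element values (assumed, not taken from the chapter)
    L_, C, Cpi, Cmu = 10e-6, 1e-9, 50e-12, 5e-12
    Gc, gpi = 1/4.7e3, 1/2.5e3

    w0sq = (2*C + Cpi + L_*Gc*gpi)/(L_*(Cmu*(2*C + Cpi) + C*(C + Cpi)))  # first part of Eq. 28.16

    # gm that satisfies the second part of Eq. 28.16 (the condition of oscillation)
    gm = w0sq*L_*(Gc*(C + Cpi) + gpi*C)/(1 - w0sq*L_*Cmu) - Gc - gpi

    # Coefficients of Eq. 28.15 and a check that s = j*w0 is indeed a root
    a3 = L_*(Cmu*(2*C + Cpi) + C*(C + Cpi))
    a2 = L_*(Cmu*(Gc + gpi + gm) + Gc*(C + Cpi) + gpi*C)
    a1 = 2*C + Cpi + L_*Gc*gpi
    a0 = Gc + gpi + gm
    s = 1j*np.sqrt(w0sq)
    print(abs(a3*s**3 + a2*s**2 + a1*s + a0))   # ~0 (within rounding)
    print(np.sqrt(w0sq)/(2*np.pi), "Hz")        # about 2.2 MHz with these values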
Concluding Comments

Rather than undertaking the involved task of identifying A and β in a sinusoidal oscillator circuit, we show that it is easier and less prone to mistakes to start at a convenient node voltage and return to the same through the feedback loop. The characteristic equation is thus obtained; putting s = jω in it and equating the real and imaginary parts on both sides gives the frequency of, as well as the condition for, oscillation.

Problems

P:1. What happens when the series RC is interchanged with the parallel RC in Fig. 28.1? Derive the necessary equations, and justify your conclusions.
P:2. Suppose in Fig. 28.1 the series R is absent; what will happen? Oscillations? Justify your answer with the necessary equations.
P:3. What happens in Fig. 28.2 if the capacitor marked ∞ is not infinite? Again, justify your answer with equations.
P:4. In Fig. 28.3, if rπ1 and rπ2 are infinitely large, what will happen? Justify.
P:5. If, in Fig. 28.1, the two C's are replaced by two L's and C is replaced by a single L, what would happen? Justify your answer with the necessary derivations.
Triangular to Sine-Wave Converter
29

This chapter describes how a triangular wave is converted into a sine wave by using a piecewise linear transfer characteristic. A detailed analysis of the basic circuit is given, and its actual implementation in an available IC chip is briefly discussed.

Keyword
Conversion of waves

Source: S. C. Dutta Roy, "Triangular to Sine Wave Converter," Students' Journal of the IETE, vol. 31, pp. 90–94, April 1990.

Introduction

Given a symmetrical triangular wave as shown in Fig. 29.1, is it possible to convert it into a sine wave by an electronic circuit? The answer turns out to be in the affirmative. Such a converter is, in fact, available as an analog IC chip, whose transfer or input–output characteristic consists of nine symmetrical, piecewise linear segments, as shown in Fig. 29.2. The central segment has a slope of unity, while the slope of the succeeding segments is in decreasing order as we go to the right or to the left. The last two segments, viz. those for Vi > V4′ and Vi < –V4′, have a slope of zero. If Vi is a symmetrical triangular wave with a peak value of Vp = V4′, then the output shall be an approximation to a sine wave with a peak value of V4, as shown in Fig. 29.3. If Vp exceeds V4′, it is obvious that the resulting sine wave shall have a clipped top and bottom (Fig. 29.3). On the other hand, if Vp < V4′, then we get a reduced-amplitude sine wave of poorer quality as compared to the case when Vp = V4′.

The basic electronic circuit utilized to achieve the transfer characteristic (Fig. 29.2) is shown in Fig. 29.4, where V1 < V2 < V3 < V4, these being reference voltages derived from the power supply through an appropriate resistive voltage divider network. The circuit is functionally symmetrical about the centre line. The upper half of the circuit realizes the characteristic for Vi > 0, while the lower half takes care of the part Vi < 0. Because of symmetry, it suffices to consider only the part for Vi > 0.

To keep life simple, assume that all the diodes are ideal, i.e. they act as short circuits. We shall see later that in the actual chip, this is approximately ensured by a pnp–npn transistor combination. Suppose 0 < Vi < V1; then none of the diodes conduct and Vo = Vi. This is the situation for the central part of the characteristic in Fig. 29.2. When Vi is increased such that V1 ≤ Vi < V2′, where V2′ is the input needed to make Vo = V2, diode D1 conducts and the equivalent


circuit is shown in Fig. 29.5a. To find Vo, apply KCL at the node, which gives

(Vi − Vo)Gi = (Vo − V1)G1,   (29.1)

where Gx = 1/Rx, x = i, 1, 2, 3. Solving for Vo gives

Vo = (V1G1 + ViGi)/(G1 + Gi)   (29.2)

Fig. 29.1 A symmetrical triangular wave and the approximate sine wave that can be obtained by shaping it

Fig. 29.2 Transfer characteristics of the triangular to sine-wave converter

Fig. 29.3 Shape of the output waveform for various values of Vp in relation to V4′

Fig. 29.4 The basic circuit of the triangular to sine-wave converter
Fig. 29.5 Equivalent circuits for various input voltage ranges

This describes the second segment of the characteristic in Fig. 29.2, whose slope is Gi/(G1 + Gi) = R1/(R1 + Ri) < 1. To determine V2′, put Vo = V2 and Vi = V2′ in Eq. 29.2; this gives, on simplification,

V2′ = V2(1 + Ri/R1) − V1(Ri/R1)   (29.3)

Note that V2′ > V2, which is of course expected.

When V2′ ≤ Vi < V3′, where V3′ is the input needed to make Vo = V3, both of the diodes D1 and D2 conduct and the equivalent circuit is shown in Fig. 29.5b. Again applying KCL and simplifying, we get

Vo = (ViGi + V1G1 + V2G2)/(Gi + G1 + G2)   (29.4)

This describes the third segment in Fig. 29.2, whose slope is Gi/(Gi + G1 + G2) = (R1||R2)/[(R1||R2) + Ri]. To determine V3′, put Vi = V3′ and Vo = V3 in Eq. 29.4 and solve for V3′. The result is

V3′ = V3[1 + (Ri/R1) + (Ri/R2)] − V1Ri/R1 − V2Ri/R2   (29.5)

When Vi is further increased so that V3′ ≤ Vi < V4′, where V4′ is the input needed to make Vo = V4, diodes D1, D2 and D3 conduct and the equivalent circuit is shown in Fig. 29.5c. From this, one can solve for Vo as

Vo = (ViGi + V1G1 + V2G2 + V3G3)/(Gi + G1 + G2 + G3)   (29.6)

This characterizes the fourth segment of Fig. 29.2, which has a slope of

Gi/(Gi + G1 + G2 + G3) = (R1||R2||R3)/[(R1||R2||R3) + Ri].

By putting Vi = V4′ and Vo = V4 in Eq. 29.6, one can obtain V4′ as

V4′ = V4[1 + (Ri/R1) + (Ri/R2) + (Ri/R3)] − V1Ri/R1 − V2Ri/R2 − V3Ri/R3   (29.7)
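The segment equations 29.2, 29.4 and 29.6 translate directly into code. The sketch below (illustrative only, with the ideal diodes assumed in the text) evaluates the converter's transfer characteristic for Vi > 0; the values used in the call are the Intersil 8038 figures quoted later in the chapter.

    def converter_output(Vi, Ri, R, V):
        # Piecewise-linear Vo(Vi) of Fig. 29.4 for Vi >= 0, ideal diodes.
        # R = [R1, R2, R3], V = [V1, V2, V3, V4]; conductances G = 1/R.
        Gi = 1.0/Ri
        Vo = Vi                                  # central segment: no diode conducts
        for k in range(3):                       # try D1, D2, D3 in turn
            G = [1.0/r for r in R[:k+1]]
            cand = (Vi*Gi + sum(V[j]*G[j] for j in range(k+1)))/(Gi + sum(G))
            if cand > V[k]:                      # diode D(k+1) actually conducts
                Vo = cand                        # Eqs. 29.2, 29.4, 29.6 respectively
        return min(Vo, V[3])                     # D4 clamps the output at V4

    # Intersil 8038 values quoted in the text; an input at V4' gives an output of ~V4
    print(converter_output(3.269, 1.0, [10.0, 2.7, 0.8], [1.159, 1.637, 2.180, 2.469]))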
Finally, when Vi ≥ V4′, all four diodes D1, D2, D3 and D4 conduct, the equivalent circuit is shown in Fig. 29.5d and Vo settles at V4. This corresponds to the last segment of the characteristic in Fig. 29.2.

For negative input voltages, a similar analysis can be performed with the part of the circuit below the centre line in Fig. 29.4, and it can be shown that the characteristic shown in Fig. 29.2 in the third quadrant is realized thereby.

In the Intersil 8038 chip implementation of the circuit shown in Fig. 29.4, the resistance values used are Ri = 1 K, R1 = 10 K, R2 = 2.7 K and R3 = 0.8 K. The voltages V1, V2, V3 and V4 are derived from the +10 V, −10 V supplies through the resistive network shown in Fig. 29.6. It is readily calculated that V4 = 2.469 V, V3 = 2.180 V, V2 = 1.637 V and V1 = 1.159 V.

Fig. 29.6 The resistive voltage divider network for generating the reference voltages

The implementation of the diode is done in a clever way such that (i) the Thevenin impedance of each reference voltage is transformed to an insignificant value, and (ii) the voltage drop across the conducting diode is virtually reduced to zero. The actual circuit for the R1, D1, V1 and R1, D1′, −V1 legs of Fig. 29.4 is shown in Fig. 29.7. The diode D1 is realized by the complementary pnp (Q2)–npn (Q1) emitter follower pair. If Q1 and Q2 are matched, then their base-emitter drops will be equal and opposite. Thus, the voltage at the emitter of Q2 will be V1 − VBE,Q1 − VBE,Q2 = V1. Also, because of the 33 K resistor in the emitter lead of Q1, it will present an impedance of 33 K multiplied by its beta (βQ1 ≈ 100) to the source V1. This impedance will therefore be of the order of 3300 K and should not affect the potential divider shown in Fig. 29.6 at all! On the other hand, the effective Thevenin impedance of the V1 source, viz. [(1.6 + 0.33 + 0.375 + 0.2 + 5.2)||(0.33 + 0.375 + 0.2 + 5.2)] K ≅ 3.4 K, will be transformed to 3.4 K/βQ1 at the emitter of Q1, and to 3.4 K/(βQ1βQ2) at the emitter of Q2. Assuming βQ1 = βQ2 = 100, the resulting impedance reduces to 0.34 Ω only. Similarly, the impedances presented to D2, D3 and D4 can be calculated as 0.34, 0.33 and 0.32 Ω, respectively. The scheme shown in Fig. 29.4 is therefore realized to a high degree of accuracy.

Since all parameters are known, we can now calculate the input voltages at the various break points shown in Fig. 29.2 from Eqs. 29.3, 29.5 and 29.7. These are, respectively, V2′ = 1.685 V, V3′ = 2.483 V and V4′ = 3.269 V. The circuit can therefore be made optimum use of if the triangular wave peak voltage is 3.269 V; then a sine wave with a peak voltage of 2.469 V is obtained.
Fig. 29.7 Realization of diodes D1 and D1′

Why should one bother about generating a sine wave by conversion of a triangular wave? Instead, why should not one use an LC or RC sinusoidal oscillator? The reason is that it is very difficult to obtain a wide-range, variable-frequency sinusoidal oscillator. In contrast, one can easily generate a 100:1 frequency sweep with a voltage-controlled triangular wave oscillator. The resulting wave can then be shaped to a sine wave by a triangular to sine-wave converter as described in this chapter.

Problems

P:1. Convert a sine waveform to a triangular waveform.
P:2. Same, but a square waveform.
P:3. Same, but the desired waveform is shown in Fig. P.3 below.

Fig. P.3

P:4. Suppose the diodes in Fig. 29.4 are all nonideal, but identical. What will happen?
P:5. Given a square wave going positive as well as negative, how would you generate a chain of positive impulses?

Bibliography

1. S. Soclof, Applications of Analog Integrated Circuits (Prentice Hall, 1985)
2. A.B. Grebene, Analog Integrated Circuit Design (Van Nostrand Reinhold, 1972)
Dynamic Output Resistance
of the Wilson Current Mirror 30

A simple derivation is given for the dynamic output resistance of the Wilson current mirror, which forms a basic building block in many analog integrated circuits.

Keywords
Current mirror · Wilson circuit · Dynamic output resistance

Source: S. C. Dutta Roy, "Dynamic Output Resistance of the Wilson Current Mirror," Students' Journal of the IETE, vol. 31(4), 1990 and 32(1), pp. 165–168, 1991.

Introduction

The Wilson current mirror, shown in Fig. 30.1b, is a basic building block in many analog integrated circuits. As compared to the simple current mirror shown in Fig. 30.1a, it has the advantage of achieving base current cancellation, so that I0 = I1, even if the base currents of the transistors (all assumed identical) are not negligible as compared to their respective collector currents. Further, its dynamic output resistance is greater than that of the simple current mirror by a factor of β/2. This has been mentioned by Wilson [1] but not proved. Grebene [2] follows Wilson, but refers to a hybrid-π equivalent circuit analysis made by Davidse [3]. Gray and Meyer [4] also do not prove this result. Soclof [5] attempted a simple proof, but his result is higher by a factor of 2 due to a mistake in the assumed current distributions. In view of the importance of the Wilson current mirror, and in view of the fact that Soclof's books [5, 6] are the most comprehensive texts available on the subject, we present here a simple and correct analysis leading to the result claimed by Wilson [1] and others.

Derivation

We adopt here the same approach as that of Soclof [5] and represent Q3 by an ideal transistor Q3′ in parallel with its dynamic collector-to-emitter conductance g0 = 1/r0, as shown in Fig. 30.2. Let the output voltage change by a small amount ∆V0 and let the consequent change in the output current be ∆I0. If ∆I0 can be determined in terms of ∆V0 and transistor parameters, then the dynamic output conductance can be calculated as

g0′ = 1/r0′ = ∆I0/∆V0.

Due to the incremental change ∆V0, let the collector current of Q2 change by ∆I; since Q1 and Q2 are matched and have the same base-to-emitter voltages, the collector current of Q1 will also change by the same amount ∆I. Assuming that the current I1 remains a constant, i.e. ∆I1 = 0, KCL dictates that the base current of Q3 must change
by −∆I. This causes the Q3′ collector current to change by −β∆I. The increment of current through g0 will be g0∆V0, because the diode-connected transistor Q2 offers a negligible dynamic resistance compared to that of Q3 (= r0).

Fig. 30.1 a A simple current mirror, b Wilson current mirror

Fig. 30.2 Incremental equivalent circuit of the Wilson current mirror

Now consider the dotted rectangular box in Fig. 30.2 (representing Q3); it has two currents entering, viz. −∆I and ∆I0, and one current leaving, viz. ∆I′, so that, by KCL,

∆I0 − ∆I = ∆I′   (30.1)

Assuming all transistors to have the same β, we see that ∆I′ supplies ∆I, and also the incremental base currents of Q1 as well as Q2, each of which is ∆I/β. Thus

∆I′ = ∆I + 2∆I/β   (30.2)

Combining Eqs. 30.1 and 30.2, we get

∆I = ∆I0/[2(1 + 1/β)]   (30.3)

Now applying KCL at the collector of Q3′, we have

∆I0 = g0∆V0 − β∆I   (30.4)

Combining Eqs. 30.3 and 30.4 and simplifying, we get

∆I0/∆V0 = g0′ = g0/{1 + β/[2(1 + 1/β)]}   (30.5)

Assuming β ≫ 2, as is the case in practice, this reduces to

g0′ = 2g0/β   (30.6)

so that the equivalent dynamic output resistance becomes

r0′ = βr0/2   (30.7)

This agrees with the claim of Wilson [1] and others.
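The three KCL relations above are linear and can be solved mechanically; the SymPy fragment below (an illustration, not part of the original article) reproduces Eqs. 30.5–30.7.

    import sympy as sp

    beta, g0, dV0 = sp.symbols('beta g0 dV0', positive=True)
    dI, dI0, dIp = sp.symbols('dI dI0 dIp')

    sol = sp.solve([sp.Eq(dI0 - dI, dIp),          # Eq. 30.1
                    sp.Eq(dIp, dI + 2*dI/beta),    # Eq. 30.2
                    sp.Eq(dI0, g0*dV0 - beta*dI)], # Eq. 30.4
                   [dI, dI0, dIp], dict=True)[0]

    g0_out = sp.simplify(sol[dI0]/dV0)   # equivalent to g0/{1 + beta/[2(1 + 1/beta)]}, Eq. 30.5
    print(g0_out)
    print(sp.limit(g0_out*beta/(2*g0), beta, sp.oo))  # -> 1: g0' ~ 2*g0/beta, r0' ~ beta*r0/2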
It may be mentioned here that Soclof's equivalent circuit assumed ∆I = ∆I0, which, as is obvious, violates KCL and makes ∆I′ = 0!

Problems

P:1. Search out the literature for other current mirrors. Make a list of them and enumerate their merits and demerits as compared to the Wilson current mirror. Hint: Consult the references given above, and if you cannot find them, ask me.
P:2. Justify, with derivations, what happens when the Q1 and Q2 emitters are connected to current generators.
P:3. Why is ∆I1 = 0 in Fig. 30.2? What happens when ∆I1 ≠ 0?
P:4. Why is g0 connected from the collector to the emitter in Fig. 30.2?
P:5. What happens when the g0 end is shifted from the emitter to the base?

References

1. G.R. Wilson, A monolithic junction FET-NPN operational amplifier. IEEE J. Solid State Circ. SC-3, 341–348 (December 1968)
2. A.B. Grebene, Bipolar and MOS Analog Integrated Circuit Design (John Wiley, 1984)
3. J. Davidse, Integration of Analogue Electronic Circuits (Academic Press, 1979)
4. P.R. Gray, R.G. Meyer, Analysis and Design of Analog Integrated Circuits (John Wiley, 1984)
5. S. Soclof, Analog Integrated Circuits (Prentice Hall, 1985)
6. S. Soclof, Applications of Analog Integrated Circuits (Prentice Hall, 1985)
Part IV
Digital Signal Processing

The field of Digital Signal Processing (DSP) has fascinated me for over
three decades, since I first met it in the early 1970s. This is reflected in the
eight chapters of Part IV. The first two articles, on which Chaps. 31 and 32
are based, were written while I was teaching at the University of Leeds,
during 1972–1973. DSP was at its infant state at that time, and students had
difficulty in understanding the basic concepts. That is why, I innovated the
title as ‘The ABCD’s of Digital Signal Processing’. In these articles, I
described DSP from common-sense arguments. These became popular with
the students I taught, almost instantly. I hope beginning students of DSP
will like them, even now. The article on second-order digital filters,
described in Chap. 33, was again inspired by inquisitive questions from
students in the class, and I took pains to give simple derivations of
band-pass and band-stop filters, touching on the limits of selectivity
attainable by them. Chapters 34 through 37 were also inspired by students'
queries in the class. The Chapter 34 topic was in fact solved by four M.Tech.
students of IIT Delhi, with clues from me, and it was satisfying to note that
they could come up with simple derivations of this important element of
DSP, viz. all pass filters. I have always encouraged my teacher colleagues
at IIT Delhi and elsewhere to involve students in their research by throwing
challenges on unsolved problems to students in the class. This article was a
result of such a challenge.
The FIR lattice structure is usually first described and then analysed to
find the performance parameters. I took upon myself the task of viewing this
as a synthesis problem and succeeded. This forms the content of Chap. 35.
A special problem arose during the course of this development, and I solved
it in the article on which Chap. 36 is based. The FIR lattice, as described in the
textbooks, uses twice the minimum number of multipliers required in a
canonic realization; can the count be brought down to this minimum? Long
back, Johnson had answered this question in the affirmative, but surprisingly, his paper, although published in IEEE Trans-
actions on Audio and Electroacoustics, went completely ignored by later
workers, even the famous ones. I took upon myself the job of giving due
credit to this unsung hero in the article on which Chap. 37 is based.
workers, even the famous ones. I took upon myself the job of giving due
credit to this unsung hero in the article on which Chap. 36 is based.
In FFT signal flow graphs, there are some redundant multipliers. Also,
there are identical multipliers which can be combined. Taking all these into
account, the minimum possible number of multipliers is found out in
Chap. 38.
That completes Part IV and the main contents of the book.
The ABCDs of Digital Signal
Processing––PART 1 31

In this chapter, the basic concepts of digital


Introduction
signal processing will be introduced, leading
to a mathematical description of a digital
The first textbook on digital signal processing, by
signal processor in terms of, first, a difference
Rader and Gold (1969), came out in 1969, and
equation and, second, a z-domain transfer
included the following sentence in its preface:
function. In the process, the effects of sam-
‘The field of digital signal processing is too new
pling and quantization will be briefly touched
to allow us to predict subsequent developments’.
upon. Implementation of a processor by
Today, more than four decades later, one cannot
special purpose hardware and discrete Fourier
certainly claim the field to be new, particularly in
transform technique will be discussed. The
view of the phenomenal progress made in the
fast Fourier transform (FFT) will be intro-
techniques of digital signal processing, leading to
duced and several of its applications will be
dramatic improvements in system efficiency, and
presented, along with the pitfalls and incorrect
its many applications in very diverse fields like
usage of the technique.
biomedical engineering, geophysics, acoustics,
radar and sonar, radio astronomy, etc. These
advances, in turn, have been stimulated by fan-
Keyword
tastic advances in integrated circuit technology

Fundamentals of DSP Difference equation
and computer hardware, in terms of volume, cost
 
Z-transform Sampling Quantization
and speed.

DFT FFT and its pitfalls
One way of gauging the progress of a field is
to look at the available literature related to it. By
as early as the 1970s, there were more than ten
textbooks [1–10], four collections of significant
papers [11–14], a large number of special issues
of professional research journals [15] and
numerous journal conference articles reporting
on new techniques, or improvements on known
ones, or novel applications of digital signal pro-
Source: S. C. Dutta Roy, “The ABCD’s of Digital Signal
Processing (Part 1),” Students’ Journal of the IETE, vol. cessing [16].
21, pp. 3–12, January 1980. On a subject as vast as the literature explosion
suggests, it is not easy to decide as to what

© Springer Nature Singapore Pte Ltd. 2018 229


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_31
230 31 The ABCDs of Digital Signal Processing––PART 1

x*(nT) x (nT) y(nT)


x(t) LPF1 A/D Processor

y(T) Extra- y*(nT)


y(t) LPF2 polator D/A

Fig. 31.1 Block diagram of a digital signal processor whose input and output are continuous signals

should be included in the ‘ABCD’s’ because as with. Let us then consider a continuous signal x
the subject advances, so do the ‘ABCD’s’. The (t), which is to be processed so as to facilitate the
choice and organization of topics in this chapter extraction of a desired information which, we
have been greatly influenced by discussions with will assume, is again a continuous signal. In
some members of potential readers and, of other words, we wish to transform the given
course, by personal preferences. Starting with the signal x(t) into another signal y(t) which is, in
basic concepts involved in sampling and quan- some sense, more desirable than the original. For
tization, a mathematical description will be given example, x(t) may, typically, be the desired sig-
of the digital signal processor in terms of a dif- nal contaminated by an undesired interference
ference equation and then in terms of a z-domain and our aim in processing may be to get rid of the
transfer function. Implementation of the proces- latter. As another example, we may wish to
sor by special purpose hardware as well as by the enhance or estimate some component or param-
discrete Fourier transform (DFT) technique will eter of a signal. Whatever our aim may be for the
be discussed. In the latter, two basic forms of the processing, if the processor is to be digital, x(t)
fast Fourier transform (FFT) algorithm will be must be converted first to a discrete form x*(nT).
introduced. The presentation will conclude with A conceptually simple way of doing this is to
some applications of the FFT, along with its have a switch in series which closes for a very
pitfalls and potential incorrect usage. short time s after every T seconds, where s  T,
Throughout the presentation, efforts will be as shown in Fig. 31.1. Ignore the box marked
made to keep the mathematics as simple as LPF, for the time being; also do not worry as to
possible, and rigorous proofs and derivations will what should the value of T be. The resulting
be avoided as much as practicable. System signal x*(nT) would appear as a series of narrow
design aspects, starting from a given specifica- pulses at times t = 0, T, 2T, …, the height of each
tion, will not be dealt with at all. This topic, pulse being equal to the value of the continuous
along with others, will form the subject matter of signal at that instant of time. This sampled signal
a future chapter. x*(nT) is then suitably coded or quantized in the
analog-to-digital converter (A/D). The output of
the A/D converter, x(nT), is a series of coded
The Basic Digital Signal Processor numbers, coming out every T seconds (ignoring
the small but non-zero A/D conversion time, as
While digital signals and systems can be compared to T). The signal x(nT) is now in a
designed without reference to continuous sys- form which can be processed by digital hard-
tems, it is intuitively appealing, and often easier ware, indicated by the box labelled PRO-
to understand, to build the theory of digital signal CESSOR in Fig. 31.1. The processor thus
processing starting from continuous signals and accepts a time series, i.e. series of numbers
systems which most engineers are more familiar appearing at equal intervals of T seconds at its
The Basic Digital Signal Processor 231

input, and performs some operations on them to filtering and the digital signal processor is then
produce another time series y(nT). These opera- more commonly known as a digital filter.
tions, in the most general linear digital signal The output of the processor in Fig. 31.1, y
processor, can be described by the following (nT), is another sequence of numbers, which can
linear1 difference equation: be fed to a decoder, the digital-to-analog con-
verter, to produce pulses of short duration, whose
M N
X X amplitudes are in proportion with the value of y
yðnTÞ ¼ ai xðnT iTÞ þ bj yðnT jTÞ;
(nT). Continuous output can be obtained by
i¼0 j¼1
passing this pulse sequence through an extrapo-
ð31:1Þ lator or data reconstructor, which can be a simple
zero-order hold, described by
where ai’s and bj’s are suitable constants. x(nT)
and y(nT) are, respectively, the input and output y1 ðnT þ tÞ ¼ y1 ðnT Þ; 0  t  T; ð31:2Þ
of the processor at the instant nT, while x(nT – iT)
and y(nT – jT) represent, respectively, the input at i.e. the value between two sampling instants is
some past instant nT – iT and the output at another held at the value of the immediately preceding
past instant nT – jT. At the instant nT, therefore, sample.2 The output of the extrapolator, y1(t),
the computation of the output y(nT) requires the will contain some high-frequency ripples, which
past M inputs and N outputs; the processor may be removed, if desired, by passing the signal
therefore should have a memory in which these through a low-pass filter, LPF2. This filter is,
past input and output numbers can be stored. The however, a simple, inexpensive one and typi-
constants ai and bj are also to be stored, of course. cally, a single R–C section serves the purpose.
What the processor does after receiving the pre- Before we conclude this section, a discussion of
sent input x(nT), in fact, is then to recall the the block LPF1, is in order. The choice of T or the
constants, past inputs and past outputs and per- sampling frequency fs = 1/T should be in confor-
form the computation specified in Eq. 31.1, to mity with the sampling theorem, i.e. fs should be
deliver the output y(nT). The computation does greater than 2fh, where fh is the highest frequency
require some small but non-zero time, but this content of the signal x(t) to be processed. If this is
must be smaller than T in order that the processor not the case, distortion occurs and x*(nT) will not be
may be ready to receive the next input with a a true discrete representation of x(t); another way of
clean state. We shall, in the following discus- saying this is that it would not be possible to recover
sions, ignore this computation time and continue x(t) from x*(nT). The sampling frequency should
to call the output due to x(nT) as y(nT). therefore be sufficiently high. However, for other
Before we proceed further, it is wise to recall reasons (e.g. coefficient quantization error, to be
the following sources of nonidealness in our discussed later), the sampling frequency should not
signal processing system in Fig. 31.1. (i) s, the be too high. For a signal contaminated with
duration for which the sampling switch remains high-frequency noise (impulse noise), the mini-
closed, (ii) A/D conversion time, and (iii) pro- mum required sampling frequency may be inordi-
cessor computation time. The first two ideally nately high. In such cases, it would be advantageous
should be zero, and the third should be at least to filter the continuous signal x(t), before sampling,
less than T, if not significantly so. by passing it through an inexpensive analog
By choosing the ai’s, bj’s, M and N in low-pass filter, LPF1 as shown in Fig. 31.1. LPF1
Eq. 31.1, one can achieve a variety of process- may also typically be a single R–C section
ing. If the emphasis is on shaping the spectrum of
x(t) in a desired fashion, we call the processing as 2
It can be shown that the zero-order hold has a lowpass
frequency response with linear phase characteristic.
Higher-order holds are not generally used because their
1
We shall confine our attention to linear digital signal implementation is difficult, their phase response is not
processing only. linear and they introduce more delay to the signal.
232 31 The ABCDs of Digital Signal Processing––PART 1

The Sampling Process 1 X 1


SðtÞ ¼ ejkxst ; ð31:4Þ
T k¼ 1
As already mentioned, a sampler may be viewed
as a switch which closes for a short duration s,
where xs ¼ 2p=T ¼ 2pfs . Thus,
after every T seconds, where s  T, as shown in
Fig. 31.2a. Implicit, of course, is also the 1
1 X
assumption that the sampler can open and close x  ðnTÞ ¼ xðtÞejkxs t ð31:5Þ
T k¼ 1
instantaneously. A typical input x(t) and output
x* (nT) of the sampler are shown in Fig. 31.2b, c,
respectively. A considerable simplicity in If one now takes the Fourier transform of both
understanding and analysis is achieved if s is sides and uses the notations X  ðxÞ and XðxÞ to
assumed to be zero, i.e. if the sampler is assumed represent the spectra of x(t) and x*(nT) respec-
to be an ideal impulse sampler whose output is a tively, then one obtains.
series of impulses appearing at the sampling 1 X
X  ðxÞ ¼ Xðx þ kxs Þ ð31:6Þ
instants, as shown in Fig. 31.2d, the strength of T k¼ 1
the impulse at t = nT being equal to x(t)|t = nT.
Then one can write Thus the spectrum of the sampled signal is
1
obtained by superimposing, on the original signal
spectrum, the same spectrum shifted on the fre-
X
x  ðnTÞ ¼ xðtÞ dðt nTÞD xðtÞSðtÞ; say
n¼ 1 quency scale by ks, where k takes all integer
ð31:3Þ values, positive and negative, except zero. This is
pictorially shown in Fig. 31.3 for a hypothetical
What Eq. 31.3 says in essence is that the X  ðxÞ which is bandlimited such that XðxÞ ¼ 0
sampled sequence x*(nT) is obtained by multi- for jxj [ xs =2. Note that X  ðxÞ is the same as
plying the continuous signal x(t) by an impulse XðxÞ in the band jxj [ xs =2, called the base
train occurring at 2T, – T, 0, T, 2T,…; we have band, and it is possible to recover XðxÞ from
named this impulse train as another function S(t). X  ðxÞ by using a low-pass filter whose trans-
Obviously, S(t) is periodic with period T and can mission characteristic is shown by dotted lines.
be expanded in Fourier series. When this is done, An example of a situation where XðxÞ is not
one finds bandlimited and has considerable amplitude

Fig. 31.2 The sampling x*(n )


process a sampler, b a (a) (c)
continuous signal, c its
sampled version and
d idealized sampled version t

x(t) T
t
T

(b) x(t)
(d) x*(nT)|t = 0

t t
The Sampling Process 233

xw

wh w
-ws -wh O ws
2 2

x* w
LPF RESPONSE

w
-3ws -ws –ws -ws O ws ws ws 3ws
2 2 2 2
Fig. 31.3 Showing the spectrum of the sampled signal in relation to that of the continuous signal

beyond xs/2, called the Nyquist frequency, is frequency, xh, contained in the signal. This, in
shown in Fig. 31.4. An alternative equivalent essence, is the well-known sampling theorem.
description of this situation is that the Nyquist Practical signal are not, however, bandlimited
frequency is lower than the highest frequency xh and the signal spectrum recovered by passing
contained in XðxÞ. In this case obviously, XðxÞ X  ðxÞ through an ideal low-pass filter will be
does not keep its identity in X  ðxÞ and no filter different from XðxÞ due to spillover from the
can recover XðxÞ from X  ðxÞ. Thus, to keep the adjacent bands. The error so introduced is called
information content of the signal intact in the aliasing or folding error. Further, an ideal filter,
sampled version, we must choose the sampling with the brickwall characteristic, is not realizable
frequency xs to be at least twice the highest in practice and this introduces an additional error.
To keep these two errors within the tolerable
limits, the sampling frequency is often required
to be sufficiently high.
xw Another source of error is the fact that s is not
zero in practice; the nature and extent of its
contribution to distortion has been discussed in
Shapiro (1978) and will not be considered here.

Quantization Errors3
-ws O w
ws
2 2
-wh wh
In this section, we would like to point out an
x* w inherent limitation on the accuracy of digital
signal processors. This limitation arises due to

-wh -ws O ws wh w
2 2
3
This example is reprinted by kind permission of John
Fig. 31.4 Aliasing error Wiley & Sons Inc.
234 31 The ABCDs of Digital Signal Processing––PART 1

the fact that all digital systems operate with a To begin with, the numbers we are dealing
finite number of bits, or a finite word length. with must be represented in binary notation in
Rather than going into a detailed theory, we order to be stored, manipulated and operated
prefer to illustrate the various errors, resulting upon by digital hardware. Consider the coeffi-
from finite word length, through a simple cient 0.81, it can be written as
example (Peled and Liu [7]). Suppose our pro-
1 1 1 1
cessor is required to implement the following 0:81 ¼ ð Þ1 þ ð Þ2 þ ð Þ5 þ ð Þ6 þ   
2 2 2 2
difference equation.
i.e. in base 2, 0.81 can be represented as
yðnÞ ¼xðnÞ xðn 2Þ 0.11001110101 … An infinite number of bits are
þ 1:2727922yðn 1Þ 0:81yðn 2Þ; needed to represent this coefficient exactly. Since
all practical memory circuits have a finite number
of bits for each word, the infinite binary string
yðnÞ ¼xðnÞ xðn 2Þ must be modified. If one uses a memory with a
þ 1:2727922yðn 1Þ 0:81yðn 2Þ; 6-bit word length, a simple way to store our
ð31:7Þ number will be to keep only the 6 most signifi-
cant bits, that is, 0.11001 as the approximate
where for brevity, x(nT) and y(nT) have been value for 0.81. However, 0.11001 in base 2
represented as x(n) and y(n), respectively. A pos- represents the number 0.78125, thus introducing
sible basic arrangement is shown in Fig. 31.5. an error of 0.02875 in this coefficient. Similarly,
It consists of a memory for storing the coef- 1.2727922 has a 6-bit base 2 representation of
ficients; a set of data registers for storing the 1.01000 or l.25, resulting in an error of
input and output samples; an arithmetic unit to 0.0227922. Obviously, ±1 or 0 can be repre-
perform the computation according to Eq. 31.7 sented exactly; finally, therefore, the equation
and a control unit (not shown) for providing the that the processor actually implements is
timing signals.
yðnÞ ¼ xðnÞ xðn 2Þ þ 1:25yðn 1Þ 0:78125yðn 2Þ
ð31:8Þ

MEMORY FOR CO-EFFICIENTS The resulting error is called the coefficient


0.81 quantization error.
1.2727922 Another source of error is the quantization of
the input data in the A/D converter. Suppose x(t) in
Fig. 31.1 is sinusoidal and consider the following
DATA REGISTERS input segment… 0.2955, 0.5564, 0.8912,
0.9320… Suppose the A/D converter yields 8 bits
x(n) and let the data registers in Fig. 31.5 be of the same
capacity. Truncated to 8 bits, the above input data
x(n 2) segment becomes, in binary form,
ARITHMETIC x(n)
UNIT 0:0100101; 0:1000111; 0:111010; 0:111011
y(n 1)

corresponding to the values… 0.2890625,


y(n 2) 0.5546875, 0.890625, 0.9265,… which, obvi-
ously, differ from the actual samples of the
sinusoidal signal. The resulting error is called the
Fig. 31.5 Implementation of Eq. 31.7. Control unit is input quantization error.
not shown
Quantization Errors 235

The third source of error is due to the limited Also, to keep things simple, let us consider a
accuracy with which arithmetic operation can be causal signal, i.e. let x(t) = 0, t < 0. Further,
performed. In computing the term—0.78125y since a delta function exists only when its argu-
(n − 2) in Eq. 31.8, for example, the product of a ment is zero, we could rewrite Eq. 31.9 in the
6-bit number (−0.78125) and an 8-bit number form
[y (n − 2)] will give 14 significant bits. This must
1
be shortened to 8 bits so that the result will fit in X
X  ðnTÞ ¼ xðnTÞdðt nTÞ ð31:10Þ
the 8-bit data register. The error thus committed n¼0
is known as the round-off error. Further and
more importantly, the previously computed out- If we take Laplace transform (LT) on both
put samples are used via Eq. 31.8 to compute sides and call the LT of x*(nT) as x*(s), we get
later output samples, and this has a cumulative
1
effect. X
X  ðsÞ ¼ xðnTÞe snT
ð31:11Þ
What are the overall effects of these errors?
n¼0
Unless carefully analysed and accounted for, the
results can be very disappointing. For example, To get rid of the transcendental function est,
coefficient quantization error may convert a let us replace it by z, i.e. let
stable processor into an unstable one. The arith-
metic round-off errors can result in low-level z ¼ esT ð31:12Þ
limit cycles and overflow oscillations.
Further, let X  ðsÞ , X ðzÞ; then
1
Z-Transform X
XðzÞ ¼ X  ðsÞD xðnTÞz n
ð31:13Þ
n¼0
Recall that the input–output relation of a digital
signal processor is expressed by a linear differ- The variable z need not necessarily be thought
ence equation of the form of Eq. 31.1. It is well of as esT; it could be interpreted as an ordinary
known that the solution of such an equation is variable whose exponent (ignoring the negative
greatly simplified by using the z-transform (just sign) represents the position of the particular
as the solution of linear differential equations, pulse in the sequence {x(nT)}. When viewed in
which can be used to characterize linear contin- the latter light, X(z) is a ‘generating function’ and
uous systems, is greatly simplified by using may be treated without identification with a
Laplace transforms). Further, a better under- Laplace transform.
standing of the digital signal processor, particu- The infinite summation Eq. 31.13 defines the
larly its frequency domain or spectral behaviour, z-transform of the sequence {x(nT)} or more
is obtained from its z-transformed description. concisely {x(n)}. (Note the use of {.} to represent
Consider the sampled signal x*(nT) described a sequence and the dropping of T for brevity).
in Eq. 31.3, reproduced below in a slightly dif- Thus, formally,
ferent, but equivalent form
1
X
n
1
X ZfxðnÞg ¼ XðzÞD xðnÞz ð31:14Þ
X  ðnTÞ ¼ xðtÞdðt nTÞ ð31:9Þ n¼0
n¼ 1
236 31 The ABCDs of Digital Signal Processing––PART 1

We shall not go into the details of existence, In the discrete case, by analogy with Eq. (19),
convergence and other mathematical properties we define convolution of two sequences {x1(n)}
of the z-transform here, but it is better to and {x2(n)} as another sequence {x(n)} such that
remember that the series Eq. 31.14 converges
n
outside a circle in the z-plane whose radius X
xðnÞ ¼ x1 ðrÞx2 ðn rÞ
equals the n-th root of maximum x(n) in {x (n)}. r¼0
Given X(z), one can recover {x(n)}, in gen- n
ð31:20Þ
X
eral, by applying the inversion integral and ¼ x1 ðn rÞx2 ðrÞ
Cauchy’s residue theorem; however, for rational r¼0

X (z), as is usually the case, a long division is


adequate. As an example, if The discrete convolution can be given a
graphical interpretation, analogous to continuous
z 1 convolution, but we would not discuss this here.
XðzÞ ¼ ¼ 1
ð31:15Þ Instead, we now state the third important prop-
z k 1 kz
erty of z-transforms, viz. that if Eq. 31.20 is true,
then then so is,
X ðzÞ ¼ 3 þ kz l þ k2 z 2
þ  ð31:16Þ X ðzÞ ¼ X1 ðzÞX2 ðzÞ ð31:21Þ
and obviously x(n) = kn. This result is exactly analogous of the Laplace
Three important properties of z-transforms transform of the convolution of two continuous
will now be stated without proof. First, the functions. The proofs of Eqs. 31.17, 31.18 and
z-transform is a linear operation, i.e. if 31.21 follow easily from the definition of
Z fxi ðnÞg ¼ Xi ðzÞ; then z-transform given in Eq. 31.14.
" #
P
X P
X
Z fai xi ðnÞg ¼ ai xi ðzÞ ð31:17Þ
i¼1 i¼1 Transfer Function of a Digital Signal
Processor
The second property concerns the z-transform
of a shifted sequence, viz. Consider the digital signal processor described in
Eq. 31.1 once again and let Z[{x(n)}; {y
m
Z fxðn mÞg ¼ z X ðzÞ ð31:18Þ (n)}] = X(z); Y(z). If one takes the z-transform of
both sides and uses the shifting property of
The third concerns the z-transform of a con- z-transforms mentioned in the preceding section,
volution of two sequences. Before we state this, it is easy to see that
however, let us understand what we mean by the
convolution of two sequences. In the continuous M
X N
X
domain, convolution x(t) of two functions YðzÞ ¼ ai XðzÞz i þ bj YðzÞz j
ð31:22Þ
x1(t) and x2(t) is defined by i¼0 j¼0

xðtÞ ¼ x1 ðtÞ  x2 ðtÞ This can be put in the form


Zt M
1
P
¼ x1 ðtÞx2 ðt TÞdT ai z
YðzÞ i¼0
0 ð31:19Þ HðzÞ ¼ ¼ ð31:23Þ
XðzÞ N
P
Zt 1 bj z 1
j¼0
¼ x1 ðt TÞx2 ðTÞdT
0
The quantity H(z), defined ‘as the ratio of
z-transform of output sequence to the z-transform
Transfer Function of a Digital Signal Processor 237

of the input sequence’, obviously is a character- M


X
istic of the processor only and is an adequate yðnÞ ¼ hðrÞxðn rÞ ð31:28Þ
representation of it. It is, by analogy with con- r¼0

tinuous systems, called the z-domain transfer


Note that this is of the same form as Eq. 13.1
function or simply the transfer of the digital
with bj’s equal to zero, and ar = h (r). Under this
signal processor. Note that
condition, the transfer function H (z) given in
Y ðzÞ ¼ H ðzÞ when X ðzÞ ¼ 1 ð31:24Þ Eq. 31.23 becomes a polynomial in z−1. A pro-
cessor which is not FIR is of the Infinite Impulse
What does X(z) = 1 signify? We can write Response (IIR) type. For this, at least one bj in
Eq. 31.1 is non-zero and H (z) in Eq. 31.23 is a
X ðzÞ ¼ 1 þ 0:z l þ 0:z 2
ð31:25Þ rational function in z−1.
There are two other terms which are very
If one compares Eq. 31.25 with Eq. 31.14, it commonly used in digital signal processing ter-
is obvious that XðzÞ ¼ 1 corresponds to an input minology; these are non-recursive and recursive.
sequence. A very common mistake that has been perpetu-
ated in the literature is to identify FIR with
x ð nÞ ¼ 1 for n ¼ 0 ¼ 0 otherwise ð31:26Þ non-recursive and IIR with recursive. As pointed
out by Gold and Jordan [17], the terms recursive
i:e:xðnÞ ¼ fl; 0; 0; . . .g. This is called the unit and non-recursive should be used only to describe
pulse and we see that it plays the same role as the the method of realization. A realization in which
impulse function dðtÞ in a continuous system. no past values of the output have to be called back
The inverse z-transform of H(z), denoted by to compute the present output is called
fhðnÞg, is obviously the output sequence of the non-recursive; if one or more past values of the
processor when the input is a unit pulse. fhðnÞg output are required for computing the present
is called the impulse response of the digital sig- value of the output, the realization is called
nal processor. recursive. Obviously, FIR processors are realized
From Eq. 31.23 and the z-transform property most conveniently in non-recursive form, while
for convolution, it should also be apparent that recursive form is to be preferred for IIR proces-
for a general input fxðnÞg, the output sequence sors. But, as shown by Gold and Jordan (loc. cit.),
fyðnÞg should be given by the convolution of the FIR processors can be realized recursively, and
input sequence with the impulse response, i.e. IIR processors can be realized non-recursively.
N
The transfer function Eq. 31.23 can be
expressed as
X
yðnÞ ¼ xðrÞhðn rÞ ð31:27Þ
r¼0
1 1 1
ðz a1 Þðz a2 Þ. . .ðz aM Þ
HðzÞ ¼ A 1 1 1
;
n
X ðz b1 Þðz b2 Þ. . .ðz bN Þ
¼ xðn rÞhðrÞ
ð31:29Þ
r¼0

Suppose we have a digital signal processor, where A is a real constant, and a0 s and b0 s are
characterized by an impulse response fhðnÞg, either real or complex if they are complex they
where h (n) = 0, n > M. Such a processor is said occur in conjugate pairs, a0 s are called zeros and
to be of the Finite Impulse Response (FIR) type. b0 s are called poles of the digital signal processor
For this, Eq. 31.27 becomes in the z−1 plane. A digital signal processor is
238 31 The ABCDs of Digital Signal Processing––PART 1

(a) j Im Z (b) (c)


H (e jwT )
Avg H (e jwT )
wT ws
Re Z 2 ws

w O
|Z| = 1 ws ws
2

Fig. 31.6 Pole-zero sketch of Eqs. 31.30 or 31.31, b Magnitude response, c Phase response (To be continued)

stable if the poles in the z−1 plane all lie outside magnitude response is symmetrical while the
the unit circle, which is equivalent to having all phase response is antisymmetrical around the
poles inside the unit circle with z-plane; this Nyquist frequency xs ; =2 and that both responses
comes from the correspondence of the jx-axis in repeat after every xs radians.
the s-plane to the unit circle in the z−1 plane, see
Eq. 31.12. The frequency response of the digital
signal processor is obtained by putting z ¼ ejx T Problems
in H (z). As an example, let a digital signal
processor be described by the difference equation P:1. What happens to the spectrum if the
impulses in Fig. 31.2d are replaced by thin
yðnÞ ¼ xðnÞ xðn lÞ 0:8yðn 2Þ ð31:30Þ rectangular pulses?
P:2. If the base spectrum in Fig. 31.3 is a full
The transfer function of the system is sinusoid form—xh and xh [ x2s ; what will
happen to the sampled spectrum?
1 z 1 P:3. Why are all powers of z in z-transform
HðzÞ ¼ negative? What is the meaning of positive
1 þ 0:81z 2
1 z 1 powers? Comment on their realizability in
¼ ð31:31Þ real time and virtual time.
ð1 þ j0:9z 1 Þð1 j0:9z 1 Þ
P:4. Can a z-transform with negative powers of
zðz 1Þ z have a numerator of degree higher than
¼ ;
ðz þ j0:9Þðz j0:9Þ that of the denominator? What will be its
inverse transform?
where the last form has been used to facilitate a P:5. Can you realize a difference equation with
pole-zero sketch in the z-plane, as shown in term like x(n + 1), x(n + 2) …?
Fig. 31.6a. Putting z ¼ ejxt in Eq. 31.31 and
evaluating the amplitude and phase, one can
obtain the plots shown in Figs. 31.6b, c. It can References
also be done graphically by drawing the vectors
shown in Fig. 31.6a for a particular frequency. 1. A.V. Oppenheim, R.W. Schafer, Digital Signal
Obviously, our signal processor described by Processing. (Prentice-Hall, 1975)
Eq. 31.30 represents a band-pass filter in the 2. L.R. Rabiner, B. Gold, Theory and Applications of
baseband. It should be mentioned that the Digital Signal Processing. (Prentice Hall, 1975)
References 239

3. W.D. Stanley, Digital Signal Processing (Reston, 1975)
4. M.H. Ackroyd, Digital Filters (Butterworth, 1973)
5. E.O. Brigham, The Fast Fourier Transform (Prentice-Hall, 1974)
6. K. Steiglitz, An Introduction to Discrete Systems (Wiley, 1974)
7. A. Peled, B. Liu, Digital Signal Processing (Wiley, 1976)
8. S.A. Tretter, Introduction to Discrete Time Signal Processing (Wiley, 1976)
9. R.E. Bogner, A.G. Constantinides, Introduction to Digital Filtering (Wiley Interscience, 1975)
10. D. Childers, A. Durling, Digital Filtering and Signal Processing (West Pub. Co., 1975)
11. A.V. Oppenheim (ed.), Papers on Digital Signal Processing (MIT Press, 1969)
12. L.R. Rabiner, C. Rader (eds.), Digital Signal Processing (IEEE Press, 1972)
13. B. Liu (ed.), Digital Filters and the Fast Fourier Transform (Dowden Hutchinson Ross, 1975)
14. A.V. Oppenheim et al. (eds.), Selected Papers in Digital Signal Processing II (IEEE Press, 1976)
15. See e.g. the special issues of IEEE Transactions on Audio and Electroacoustics: June 1967, September 1968, June 1969, June 1970, December 1970, October 1972, June 1973, June 1975; IEEE Transactions on Circuit Theory: November 1971, July 1973; IEEE Transactions on Circuits and Systems: March 1975; Proceedings of IEEE: July 1972, October 1972, April 1975; IEEE Transactions on Computers: July 1972, May 1974; IEEE Transactions on Communication Technology: December 1971
16. Digital signal processing papers appear in Proceedings of IEEE, IEEE Transactions on Acoustics, Speech and Signal Processing (formerly Audio and Electroacoustics), IEEE Transactions on Circuits and Systems (formerly Circuit Theory), IEEE Transactions on Communications, IEEE Transactions on Computers, Bell System Technical Journal, International Journal of Circuit Theory and Applications, Proceedings IEE, IEE Journal on Electronic Circuits and Systems, Electronics Letters, Radio and Electronic Engineer, Journal of the Acoustical Society of America, and many others. Conferences which devote a significant portion of time to digital signal processing papers are the IEEE International Conference on ASSP, the IEEE International Conference on CAS, Allerton, Asilomar, the Midwest Symposium, the European Conference on Circuit Theory and Design, NATO Special Conferences, the Summer Schools on Circuit Theory held at Prague, etc.
17. B. Gold, K.L. Jordan, Digital Signal Processing (McGraw-Hill, 1968)

Bibliography

18. B. Gold, C. Rader, Digital Processing of Signals (McGraw-Hill, 1969)
19. H.D. Helms et al. (eds.), Literature in Digital Signal Processing (IEEE Press, 1976)
20. L. Shapiro, Sampling theory in digital processing. Electron. Eng. 45–50 (May 1978)
21. B. Gold, K. Jordan, A note on digital filter synthesis. Proc. IEEE 56, 1717–1718 (October 1968)
22. J.W. Cooley, J.W. Tukey, An algorithm for the machine calculation of complex Fourier series. Math. Comput. 19, 297–301 (April 1965)
23. W.M. Gentleman, G. Sande, Fast Fourier transforms - for fun and profit, in 1966 Fall Joint Computer Conference, AFIPS Proceedings, pp. 563–578
The ABCDs of Digital Signal
Processing–PART 2 32

Here, we deal with the realizations of DSPs, the DFT, the FFT, the application of FFT to compute convolution and correlation, and the application of FFT to find the spectrum of a continuous signal.

Keywords
DSP realization · DFT · FFT and its applications · Convolution · Correlation · Picket fence effect

Source: S. C. Dutta Roy, "The ABCDs of Digital Signal Processing (Part 2)," Students' Journal of the IETE, vol. 21, pp. 60–70, April 1980. Revised version of the text of a seminar series.

Realization of Digital Signal Processors

It should be clear from what has been discussed so far that a digital signal processor may be realized by use of the storage registers, arithmetic unit and control unit of a general-purpose computer. Alternatively, special digital hardware may be designed to perform the required computations; this would result in a special-purpose processor (e.g. for radar or sonar signals) that would more or less be committed to a specific job. In either case, the digital signal processor can be represented by a variety of equivalent realization diagrams or structures. When implemented in a general-purpose computer, the structure may be thought of as the representation of a computational algorithm, from which a computer program is derived. When implemented by special-purpose hardware, it is often convenient to think of the structure as specifying a hardware configuration.

Corresponding to the basic operations required for implementation of a digital signal processor, the basic elements required to represent a difference equation pictorially are an adder, a delay and a constant multiplier, the commonly used symbols for which are shown in Fig. 32.1. Physically, Fig. 32.1a represents a means for adding together two sequences, Fig. 32.1b represents a means for multiplying a sequence by a constant, and Fig. 32.1c represents a means for storing the previous value of a sequence. The representation used for a single sample delay arises from the fact that the z-transform of x(n - 1) is simply z^{-1} times the z-transform of x(n).

As an example of the representation of a difference equation in terms of these elements, consider the second-order equation

y(n) = b_1 y(n-1) + b_2 y(n-2) + a x(n).    (32.1)

A realization structure for Eq. 32.1 is shown in Fig. 32.2. In terms of a computer program, Fig. 32.2 shows explicitly that storage must be provided for the variables y(n - 1) and y(n - 2) and also the constants b_1, b_2 and a.

Fig. 32.1 Basic elements used in the realization diagram of a digital signal processor

Fig. 32.2 A realization of Eq. 32.1

Further, Fig. 32.2 shows that to compute an output sample y(n), one must form the products b_1 y(n-1), b_2 y(n-2) and a x(n), and add them together. In terms of special digital hardware, Fig. 32.2 indicates that we must provide storage for the variables and constants, as well as means for multiplication and addition. Thus, diagrams such as Fig. 32.2 serve to depict the complexity of a digital signal processor algorithm and the amount of hardware required to realize the processor.

As mentioned earlier, a variety of structures can be derived to implement a given difference equation. In the case of Eq. 32.1, for example, the transfer function is

H(z) = a / (1 - b_1 z^{-1} - b_2 z^{-2}).    (32.2)

Let b_1 and b_2 be such that the poles of H(z) are real. Then, one could write H(z) in the following two equivalent forms:

H(z) = [a_1 / (1 - c_1 z^{-1})] [a_2 / (1 - c_2 z^{-1})]    (32.3)

     = a_1' / (1 - c_1 z^{-1}) + a_1'' / (1 - c_2 z^{-1}),    (32.4)

where c_1 + c_2 = b_1, c_1 c_2 = -b_2, a_1 a_2 = a = a_1' + a_1'' and a_1' c_2 + a_1'' c_1 = 0. Each first-order form, constant/(1 - c_i z^{-1}), can be implemented by using a delay, two constant multiplications and an adder. To implement the form of Eq. 32.3, one needs to cascade two first-order realizations, as shown in Fig. 32.3a, while the form of Eq. 32.4 requires a parallel connection of two first-order forms, as shown in Fig. 32.3b. Obviously, the three realizations of Figs. 32.2 and 32.3a, b differ in computational algorithm and hardware requirements; what is more important, however, is that the quantization errors also, in general, differ from structure to structure. We shall not, however, explore this point further in this chapter, but shall be content with the knowledge that a digital signal processor can be implemented by various equivalent structures, and one should choose the one which is the optimum under the given set of constraints.

We shall also not discuss here methods of finding the difference equation to suit a particular set of specifications; this we shall reserve for a future article. The rest of this chapter will deal with the DFT technique in relation to digital signal processing.

The Discrete Fourier Transform

As we have already seen, one can implement a digital signal processor, described by Eq. 31.1, in a general-purpose computer or by special-purpose hardware. Another way of implementing a digital signal processor is based on the fact that the output sequence {y(n)} is the convolution of the input sequence {x(n)} with the impulse response sequence {h(n)} of the

Fig. 32.3 a Cascade realization. b Parallel realization

processor, as given by Eq. 31.27. This equation is reproduced here for convenience as follows:

y(n) = \sum_{r=0}^{n} h(r)\,x(n-r) = \sum_{r=0}^{n} h(n-r)\,x(r).    (32.5)

Recall that in the continuous signal case, the convolution of the input signal x(t) with the impulse response h(t) gives the output y(t), and that in the frequency domain, this amounts to a multiplication of the transforms (Laplace or Fourier) of x(t) and h(t) to give the transform of y(t); y(t) can then be obtained by the inverse transform operation. A similar operation can be performed with discrete time systems if we have a suitable transform. As we have already seen, the z-transform does provide such a vehicle; however, for numerical computation, a modified version of it, called the discrete Fourier transform (DFT), has been found most suitable. The signal processing operation then simply boils down to the following sequence of computations:

(i) Compute the DFT of {x(n)}.
(ii) Compute the DFT of {h(n)}.
(iii) Multiply the two.
(iv) Compute the inverse DFT (IDFT) of the product.

Let, for simplicity, the notation x_k be used for x(k) ≡ x(kT), and consider a sequence {x_k} of length N, i.e. k = 0, 1, 2, ..., N - 1. Then, the DFT of {x_k} is defined by

A_r = \sum_{k=0}^{N-1} x_k e^{-j2\pi rk/N},   r = 0, 1, 2, ..., N - 1.    (32.6)

The DFT is thus also a sequence {A_r} of length N. The x_k's may be complex numbers; the

A_r's are almost always complex. For notational convenience, let

W = e^{-j2\pi/N},    (32.7)

so that

A_r = \sum_{k=0}^{N-1} x_k W^{rk},   r = 0, 1, ..., N - 1.    (32.8)

If one compares Eq. 32.6 with the continuous Fourier transform A(ω) of a signal x(t), viz.

A(\omega) = \int_{-\infty}^{\infty} x(t) e^{-j2\pi ft}\,dt,    (32.9)

then one way of interpreting the DFT is that it gives the N-point discrete spectrum of the N-point time series {x(kT)} at the frequency points r/(NT), r = 0, 1, ..., N - 1; the fundamental frequency, obviously, is f_o = 1/(NT).

The inverse DFT (IDFT) of the complex sequence {A_r}, r = 0, 1, ..., N - 1, is given by

x_k = (1/N) \sum_{r=0}^{N-1} A_r W^{-rk},   k = 0, 1, ..., N - 1.    (32.10)

That this exists and is unique can be easily established by substituting Eq. 32.8 in Eq. 32.10 and carrying out some elementary manipulations. Since e^{jθ} is periodic with a period 2π, it follows from Eqs. 32.8 and 32.10 that

A_r = A_{r+mN},   x_k = x_{k+mN},   m = 0, ±1, ±2, ...    (32.11)

i.e. both DFT and IDFT yield sequences which are periodic, with periods Nf_o = T^{-1} = f_s and NT respectively.

The Fast Fourier Transform

The fast Fourier transform (FFT) is a highly efficient method for computing the DFT of a time series. A direct computation from Eq. 32.8 would require N^2 complex multiplications; in contrast, application of FFT can reduce this number to (N/2) log_2 N. For example, for N = 512, the ratio (N/2) log_2 N ÷ N^2 becomes less than 1%. This drastic reduction in computation time through FFT has made the FFT an important tool in many signal processing applications.

The DFT, given by Eq. 32.8, and its inverse, given by Eq. 32.10, are of the same form, so that any algorithm capable of computing one may be used for computing the other by simply exchanging the roles of x_k and A_r, and making appropriate scale factor and sign changes. There are two basic forms of FFT; the first, due to Cooley and Tukey [1], is known as decimation in time, while the other, obtained by reversing the roles of x_k and A_r, gives the form called decimation in frequency, and was proposed by Gentleman and Sande [2]. Clearly, they should be equivalent; it is, however, worth distinguishing between them and discussing them separately.

Let N be even and the sequence {x_k} be decomposed as

{x_k} = {u_k} + {v_k},    (32.12)

where

u_k = x_{2k},   v_k = x_{2k+1},   k = 0, 1, 2, ..., N/2 - 1.    (32.13)

Thus {u_k} contains the even-numbered points and {v_k} contains the odd-numbered points of {x_k}, and each has N/2 points. The DFTs of {u_k} and {v_k} are, therefore,

B_r = \sum_{k=0}^{N/2-1} u_k e^{-j2\pi rk/(N/2)} = \sum_{k=0}^{N/2-1} u_k e^{-j4\pi rk/N},   r = 0, 1, 2, ..., N/2 - 1,

C_r = \sum_{k=0}^{N/2-1} v_k e^{-j4\pi rk/N}.    (32.14)
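Before proceeding with the FFT, it may help to see Eqs. 32.7–32.11 in computational form. The following short Python sketch (an illustration, not part of the original text; the function names are arbitrary) evaluates the DFT and IDFT directly from their definitions:

```python
import cmath

def dft(x):
    """Direct evaluation of Eq. 32.8: A_r = sum_k x_k W^(rk), W = exp(-j 2 pi / N).
    This costs on the order of N^2 complex multiplications."""
    N = len(x)
    W = cmath.exp(-2j * cmath.pi / N)
    return [sum(x[k] * W ** (r * k) for k in range(N)) for r in range(N)]

def idft(A):
    """Inverse DFT of Eq. 32.10: x_k = (1/N) sum_r A_r W^(-rk)."""
    N = len(A)
    W = cmath.exp(-2j * cmath.pi / N)
    return [sum(A[r] * W ** (-r * k) for r in range(N)) / N for k in range(N)]
```

Applying idft(dft(x)) to any length-N sequence returns the original samples to within round-off, which is the uniqueness property noted after Eq. 32.10.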

The DFT we want is

A_r = \sum_{k=0}^{N-1} x_k e^{-j2\pi rk/N}
    = \sum_{k=0}^{N/2-1} x_{2k} e^{-j4\pi rk/N} + \sum_{k=0}^{N/2-1} x_{2k+1} e^{-j2\pi r(2k+1)/N},   r = 0, 1, 2, ..., N - 1
    = B_r + e^{-j2\pi r/N} C_r,   0 ≤ r < N/2,    (32.15)

because B_r and C_r are defined for r = 0 to (N/2) - 1. Further, B_r and C_r are periodic with period N/2, so that

B_{r+N/2} = B_r  and  C_{r+N/2} = C_r.    (32.16)

Thus

A_{r+N/2} = B_r + e^{-j2\pi(r+N/2)/N} C_r = B_r - e^{-j2\pi r/N} C_r,   0 ≤ r < N/2.    (32.17)

Finally, using Eqs. 32.7, 32.15 and 32.17, we get

A_r = B_r + W^r C_r,
A_{r+N/2} = B_r - W^r C_r,   0 ≤ r < N/2.    (32.18)

A direct calculation of B_r and C_r from Eq. 32.14 requires (N/2)^2 complex multiplications each. Another N such multiplications are required to compute the A_r's from Eq. 32.18, thus making a total of 2(N/2)^2 + N = N^2/2 + N, which is less than N^2 if N > 2. This is illustrated in Fig. 32.4 by a signal flow diagram for N = 8, where we have used the fact that W^{N/2} = -1, so that -W^r = W^{r+N/2}.

Fig. 32.4 Illustrating the first step in decimation in time form of FFT for N = 8

The DFTs of {u_k} and {v_k}, k = 0, 1, ..., (N/2) - 1, can now be computed through a similar decomposition if N/2 is even; thus the computation of {B_r} and {C_r} reduces to the task of finding the DFTs of four sequences, each of N/4 samples. These reductions can be continued as long as each sequence has an even number of samples. Thus if N = 2^n, one can make n such reductions by applying Eqs. 32.13 and 32.18, first for N, then for N/2 and so on, and finally for a two-point function. The DFT of a one-point function is, of course, the sample itself. The successive reduction of an 8-point DFT, which began in Fig. 32.4, is continued in Figs. 32.5 and 32.6. In Fig. 32.6, the operation has been completely reduced to complex multiplications and additions. The number of summing nodes is (8)(3) = 24, and 24 complex additions are, therefore, required; the number of complex multiplications needed is also 24 = (no. of stages) × (no. of multiplications in each stage) = (3)(8). Half of these multiplications are easily eliminated by noting that W^7 = -W^3, W^6 = -W^2, W^5 = -W^1 and W^4 = -W^0. Thus, in general, N log_2 N complex additions and, at most, (1/2) N log_2 N complex multiplications are required for the computation of an N-point DFT, when N is a power of 2.
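The decimation-in-time recursion of Eqs. 32.13 and 32.18 translates almost line by line into the following recursive Python sketch (an illustration, not part of the original text; practical FFTs use an iterative, in-place version of the same idea, as suggested by Fig. 32.6):

```python
import cmath

def fft_dit(x):
    """Radix-2 decimation-in-time FFT; assumes len(x) is a power of 2."""
    N = len(x)
    if N == 1:
        return list(x)               # the DFT of a one-point sequence is the sample itself
    B = fft_dit(x[0::2])             # DFT of the even-numbered points {u_k}, Eq. 32.13
    C = fft_dit(x[1::2])             # DFT of the odd-numbered points {v_k}
    A = [0j] * N
    for r in range(N // 2):
        WrC = cmath.exp(-2j * cmath.pi * r / N) * C[r]   # W^r C_r
        A[r] = B[r] + WrC            # Eq. 32.18, first half
        A[r + N // 2] = B[r] - WrC   # Eq. 32.18, second half
    return A
```

For an 8-point sequence this performs exactly the combinations drawn in Figs. 32.4–32.6, and its output agrees with the direct dft() sketch given earlier.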

Fig. 32.5 Illustrating two steps of decimation in time form of FFT for N = 8

Fig. 32.6 Illustrating decimation in time form of FFT for N = 8

When N is not a power of 2, but has a factor p, one can develop equations analogous to 32.13 through 32.18 by forming p different sequences, {u_k^{(i)}} = {x_{pk+i}}, i = 0 to p - 1, each having N/p samples. For example, if N = 15, having a factor 3, we can form three sequences as follows:

{u_k^{(0)}} = {x_0, x_3, x_6, x_9, x_{12}}
{u_k^{(1)}} = {x_1, x_4, x_7, x_{10}, x_{13}}    (32.19)
{u_k^{(2)}} = {x_2, x_5, x_8, x_{11}, x_{14}}.

Each of these sequences has a DFT B_r^{(i)}, and the DFT of {x_k} can be computed from p simpler DFTs. Further simplification occurs if N has additional prime factors.

In the decimation in frequency form of FFT, the sequence {x_k}, k = 0, 1, ..., N - 1 and N even, is decomposed as

u_k = x_k,   v_k = x_{k+N/2},   k = 0, 1, ..., N/2 - 1,    (32.20)

i.e. {uk} is composed of the first N/2 points and x0 A0


{vk} is composed of the last N/2 points of {xk}.
x1 A2
Then one can write DFT
(N = 4)
x2 A4
X1
N=2
j2prk=N 2prðk þ N=2Þ=N
Ar ¼ ½uk e þ vk e Š x3 A8
k¼0
W0
X1
N=2
x4 A1
jpr 2prk=N
¼ ðuk þ e vk Þe ; W0
W1
k¼0 x5 A3
W1 DFT
r ¼ 0; 1; . . .N 1: x6
W2 (N = 4)
A5
W2
ð32:21Þ W3
x7
A7
W3
Consider the even-numbered and
odd-numbered points of the DFT separately; let Fig. 32.7 Illustrating the first step in decimation in
frequency form of FFT for N = 8
Rr ¼ A2r 0  r\N=2
ð32:22Þ
Sr ¼ A2r þ 1 : Fig. 32.2 by two 2-point DFTs, and each of the
2-point DFTs by two 1-point transforms, these
It is this step that may be called the decima- last being equivalency operations. These steps
tion in frequency. Note that for computing Rr, are indicated in Figs. 32.8 and 32.9.
Eq. 32.21 becomes There are many variations and modifications
of the two basic FFT schemes, which we would
X1
N=2
not discuss here.
j2rpk=ðN=2Þ
Rr ¼ A2r ¼ ðuk þ vk Þe :
k¼0
ð32:23Þ
Applications of FFT to Compute
which we recognize as the N/2-point DFT of the
Convolution and Correlation
sequence {uk + vk}. Similarly,
It may be recalled that our motivation for intro-
X1
N=2 ducing the DFT and FFT was to convert the
jpð2r þ 1Þ j2pð2r þ 1Þk=N
Sr ¼ A2r þ 1 ¼ ½uk þ vk e Še convolution relation Eq. 32.5, viz.
k¼0
n
X
X1
N=2
yn ¼ hr x n r
j2pk=N j2prk=ðN=2Þ
¼ ðuk vk Þe e :
r¼0
k¼0
n
ð32:25Þ
ð32:24Þ
X
¼ hn r x r
r¼0
which we recognize as the N/2-point DFT of the
sequence fðuk vk Þe j2pk=n g. into a product form, through the DFT. To this
Thus, the DFT of an N–sample sequence {xk}, end, assume that both the impulse response {hn},
N even, can be computed as the N/2 point DFT of and the input {xn} are band limited to 1/2T Hz.
a simple combination of the first N/2 and the last Then the output {yn} is also frequency band
N/2 samples of {xk} for even-numbered points, limited. Also, if both {hn} and {xn} are defined
and a similar DFT of a different combination of for the range 0  n  N – I, then {yn} is defined
the same samples of {xk} for the odd-numbered for the range 0  n  2 N – 1. For example, if
points. This is illustrated in Fig. 32.7 for N = 8. {hn} = {h0, h1} and {xn} = {x0, x1}, then
As was the case with decimation in time, we fyn g ¼ fh0 x0 ; h0 x1 þ h1 x0 ; h1 x1 g. Let the DFT’s
can replace each of the DFTs indicated in of {xn} and {hn} be {Ar} and {Hr} respectively.

Fig. 32.8 Illustrating two steps of decimation in frequency form of FFT for N = 8

Fig. 32.9 Illustrating decimation in frequency form of FFT for N = 8
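A corresponding recursive Python sketch of the decimation-in-frequency form, following Eqs. 32.20, 32.23 and 32.24 (again an illustration, not part of the original text), is:

```python
import cmath

def fft_dif(x):
    """Radix-2 decimation-in-frequency FFT; assumes len(x) is a power of 2."""
    N = len(x)
    if N == 1:
        return list(x)
    u, v = x[:N // 2], x[N // 2:]                          # Eq. 32.20
    s = [uk + vk for uk, vk in zip(u, v)]                  # feeds the even-numbered outputs, Eq. 32.23
    t = [(uk - vk) * cmath.exp(-2j * cmath.pi * k / N)
         for k, (uk, vk) in enumerate(zip(u, v))]          # feeds the odd-numbered outputs, Eq. 32.24
    R, S = fft_dif(s), fft_dif(t)
    A = [0j] * N
    A[0::2], A[1::2] = R, S                                # A_{2r} = R_r, A_{2r+1} = S_r, Eq. 32.22
    return A
```

This reproduces the signal flow of Figs. 32.7–32.9 and gives the same result as the decimation-in-time sketch, as expected.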



Then, the nth sample in the IDFT of the product arises due to the fact that the DFT assumes both
{ArHr} is {xn} and {hn} to be periodic. Further, fy0n g is of
length N instead of 2 N –1. Note that if we
N 1
X extend both {xn} and {hn} to a length 2 N by
y0n ¼ 1=N Ar H r W rn
;
ð32:26Þ adding N zeros to each, i.e. if we change {xn} to
r¼0
f^xn g = {x0, x1 … xN – 1, 0, … 0} and similarly
n ¼ 0; 1. . . N 1:
for {hn}, then the perturbation term becomes
zero. Further, the sequence {yn} will be N + N –
Substituting in Eq. 32.26,
1 = 2 N – 1 terms long, i.e. y2N– 2 will be the last,
N 1
X non-zero term in {yn}. As an example, let N = 4,
Ar ¼ xk W rk ; i.e.
k¼0
ð32:27Þ fxn g ¼ fx0 ; x1 ; x2 ; x3 g
N 1
X ð32:30Þ
Hr ¼ hl W :rl fhn g ¼ fh0 ; h1 ; h2 ; h3 g:
l¼0
The true convolution of {xn} with {hn} gives
and carrying out some elementary manipulations,
it is not difficult to show that Eq. 32.26 simplifies y 0 ¼ x 0 h0
to y 1 ¼ x 0 h1 þ x 1 h0
N 1
X y 2 ¼ x 0 h2 þ x 1 h1 þ x 2 h0
y0n ¼ x k hn k y 3 ¼ x 0 h3 þ x 1 h2 þ x 2 h1 þ x 3 h0 ð32:31Þ
k¼0
ð32:28Þ y 4 ¼ x 1 h3 þ x 2 h2 þ x 3 h1
n
X N 1
X
¼ x k hn k þ x k hN þ n k y 5 ¼ x 2 h3 þ x 3 h2
k¼0 k¼n þ 1 y 6 ¼ x 3 h3 :

On the other hand, the DFT procedure, lead-


¼ yn þ perturbation term: ð32:29Þ
ing to Eq. 32.28 gives
The last form is obtained by comparison with 3
X
Eq. 32.25, while the last term in Eq. 32.28 rep- y0n ¼ xk h n k ; ð32:32Þ
resents the ‘cyclical’ part of the convolution, k¼0
arising out of the periodicity of DFT; and IDFT; so that
h is the cyclical variable passing from ho to hN – 1
as k passes from n to n + 1. The convolution can y00 ¼ x0 h0 þ x1 h 1 þ x2 h 2 þ x3 h 3

be made cyclical in x instead of h by ¼ x0 h0 þ ðx1 h3 þ x2 h2 þ x3 h1 Þ


inter-changing x and h in Eq. 32.28. y01 ¼ x0 h1 þ x1 h0 þ ðx2 h3 þ x3 h2 Þ ð32:33Þ
The procedure outlined for implementing a y02 ¼ x0 h2 þ x1 h1 þ x2 h0 þ ðx3 h1 Þ
digital signal processor, viz. taking the DFTs of
y 0 3 ¼ x 0 h3 þ x 1 h2 þ x 2 h1 þ x 3 h0 :
{xn} and {hn}, multiplying them, and taking
IDFT of the product, does not, therefore, give the
where the perturbation terms are bracketed. Also,
desired output sequence {yn} unless the pertur-
fy0n g consists of only 4 terms. Now let
bation term in 32.28 can be made zero. This term

f^xn g ¼ fx0 ; x1 ; x2 ; x3 ; 0; 0; 0; 0g be corrected, as demonstrated by the example, by


ð32:34Þ adding zeros to both {xn} and {hn} and thereby
^n g ¼ fh0 ; h1 ; h2 ; h3 ; 0; 0; 0; 0g:
fh increase their lengths sufficiently so that no
overlap occurs in the resultant convolution.
Then the DFT procedure gives We now state formally the steps for comput-
7
ing convolution by DFT:
X
y0n ¼ xk h n k ; ð32:35Þ
k¼0 (i) Let {xn} be defined for
so that
0nM 1
y00 ¼ x 0 h0
y01 ¼ x 0 h1 þ x 1 h0 and {hn} be defined for
y02 ¼ x 0 h2 þ x 1 h1 þ x 2 h0 0nP 1
y03 ¼ x 0 h3 þ x 1 h2 þ x 2 h1 þ x 3 h0 ð32:36Þ
y04 ¼ x 1 h3 þ x 2 h2 þ x 3 h1 (ii) Select N such that
y05 ¼ x 2 h3 þ x 3 h2 N PþM 1
y06 ¼ x 3 h3 : N ¼ 2k

By comparing with Eq. 32.31, we see that (iii) Form the new sequences f^xn g and f^hn g
such that
fy0n g ¼ fyn g
n ¼ 0; 1; 2; . . .7: xn ; 0  n  M

1
^xn ¼
0; M  n  N 1
Thus, the modification does give correct 
hn ; 0  n  P 1
results. ^hn ¼
0; P  n  N 1
Before stating this simple remedy in formal
terms, we would like to emphasize that blind use ^ r g and fH
(iv) Compute the DFTs fA ^ r g of f^xn g
of FFT for computing the convolution of two
^
and fhn g by FFT.
sequences will lead to incorrect results, because
the DFT introduces a periodic extension of both (v) Compute
data and processor impulse response. This results ^rH
^ r g ¼ fA
fB ^rg
in cyclic or periodic convolution, rather than the
desired noncyclic or aperiodic convolution. If ^ r g by FFT; the result
(vi) Find the IDFT of fB
{xn} and {hn} contain N samples each, then the is {yn}.
true convolution should result in 2 N – 1 samples
for {yn}. If DFT is used, then {Ar} and {Hr} each This technique is referred to as select-saving.
consist of N samples, so does {ArHr} and hence Next, we consider the application of FFT to
its IDFT. Hence, {yn′} found by DFT is not the compute the cross-correlation sequence {Rxy(k)}
same as {yn} because of folding (or aliasing or of two given sequences {xn} and {yn}, each of
cycling) occurring in the time domain. This can length N, where

N 1 Application of FFT to Find


ð32:37Þ the Spectrum of a Continuous Signal
X
Rxy ðkÞ D 1=N xn yn k
n¼0
The DFT, as we have seen, is specifically con-
and the auto-correlation sequence {Rxx(k)} of a cerned with the analysis and processing of dis-
sequence {xn}, where crete periodic signals, and that it is a zero-order
N 1
X approximation of the continuous Fourier trans-
Rxx ðkÞ D 1=N xn xn k : ð32:38Þ form. It is, therefore, tempting to apply the DFT
n¼0 directly to provide, through FFT, a numerical
spectral analysis of sampled versions of contin-
Note that the essential difference between uous signals. This would be a perfectly valid
convolution, as given by Eq. 32.25, and corre- application, if the continuous, signal is periodic,
lation, as given by Eqs. 32.37 and 32.38 is that band limited and sampled in accordance with the
one of the sequences is reversed in direction for sampling theorem. Deviations from these cause
one operation as compared with the other. Thus, errors, and most of the problems in using the
if FFT is to be used to compute correlation, the DFT to approximate the CFT (C for continuous)
same kind of precautions, as discussed for con- are caused by a misunderstanding of what this
volution, are to be exercised. The procedure here, approximation involves.
is based on the fact that if There are, essentially, three phenomena,
which contribute to errors in relating the DFT to
DFTfxn g ¼ fAr g
the CFT. The first, called aliasing, has already
and been discussed (Part 1). The solution to this
DFTfyn g ¼ fBr g problem is to ensure that the sampling rate is
high enough to avoid any spectral overlap. This
then requires some prior knowledge of the nature of
( the spectrum, so that the appropriate sampling
N 1
X  rate may be chosen. In absence of such prior
DFT xn yn k g ¼ Ar Br ; ð32:39Þ knowledge, the signal must be prefiltered to
n¼0
ensure that no components higher than the fold-
where bar denotes complex conjugate. Thus ing frequency appear.
applied to Eqs. 32.37 and 32.38, one obtains The second problem is that of leakage, arising
due to the practical requirement of observing the
fRxy ðkÞg ¼ IDFTfAr Br =Ng signal over a finite interval. This is equivalent to
ð32:40Þ multiplying the signal by a window function. The
¼ IDFTfSxy ðrÞg
simplest window is a rectangular function as
shown in Fig. 32.10a, and its effect on the
spectrum of a sine signal, shown in Fig. 32.10b,
fRxx ðkÞg ¼ IDFTf1 Ar =2 =Ng
ð32:41Þ is displayed in Fig. 32.10c. Note that there
¼ IDFTfSxx ðrÞg occurs a spreading or leakage of the spectral
components away from the correct frequency;
where {Sxy(r)} and {Sxx(r)} are the cross-power this results in an undesirable modification of the
spectrum sequence and auto power spectrum total spectrum.
sequences respectively.
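As an illustration of Eqs. 32.40 and 32.41, the correlation sequences can be computed as follows (a sketch, not part of the original text, using NumPy's FFT routines). Note that, exactly as for convolution, the result is the cyclic correlation implied by the periodic extension of the DFT, so zero-padding is needed if the aperiodic correlation is wanted:

```python
import numpy as np

def cross_correlation(x, y):
    """Cyclic cross-correlation of two real length-N sequences, Eq. 32.40:
    {R_xy(k)} = IDFT{ A_r * conj(B_r) / N }."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    N = len(x)
    A, B = np.fft.fft(x), np.fft.fft(y)
    Sxy = A * np.conj(B) / N          # cross-power spectrum sequence {S_xy(r)}
    return np.fft.ifft(Sxy).real      # imaginary parts are only round-off for real data

def auto_correlation(x):
    """Cyclic auto-correlation, Eq. 32.41: {R_xx(k)} = IDFT{ |A_r|^2 / N }."""
    x = np.asarray(x, dtype=float)
    A = np.fft.fft(x)
    return np.fft.ifft(np.abs(A) ** 2 / len(x)).real
```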

Fig. 32.10 Illustrating 'leakage' due to finite observation time

The leakage effect cannot always be isolated from the aliasing effect, because leakage may also lead to aliasing if the highest frequency of the composite spectrum moves beyond the folding frequency. This possibility is particularly significant in the case of a rectangular window, because the tail of the window spectrum does not converge rapidly.

The solution to the leakage problem is to choose a window function that minimizes the spreading. One example is the so-called 'raised cosine' window, in which a raised cosine wave is applied to the first and last 10 per cent of the data and a weight of unity is applied in between. Since only 20% of the terms in the time series are given a weight other than unity, the computation required to apply this window in the time domain is relatively small, as compared to other continuously varying weight windows, e.g. the Hamming window.

The third problem in relating the DFT to the CFT is the picket fence effect, resulting from the inability of the DFT to observe the spectrum as a continuous function, since the computation of the spectrum is limited to integer multiples of the fundamental frequency f_o = 1/(NT). In a sense, the observation of the spectrum with the DFT is analogous to looking at it through a sort of 'picket fence', since we can observe the exact behaviour only at discrete points. It is possible that a major peak lies between two of the discrete transform lines, and this will go undetected without some additional processing.
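One common construction of the 'raised cosine' window just described is sketched below (an illustration, not part of the original text): a half-cosine taper over the first and last 10 per cent of the record and unity weight elsewhere.

```python
import math

def raised_cosine_window(N, taper=0.1):
    """Weights w(n), n = 0..N-1: cosine taper over the first and last taper*N samples."""
    L = max(1, int(taper * N))
    w = [1.0] * N
    for n in range(L):
        c = 0.5 * (1.0 - math.cos(math.pi * n / L))   # rises smoothly from 0 towards 1
        w[n] = c
        w[N - 1 - n] = c
    return w

# windowed record: [w_n * x_n for w_n, x_n in zip(raised_cosine_window(len(x)), x)]
```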

One procedure for reducing the picket fence are always ‘better than one’. ‘… And three are
effect is to vary the number of points N in a time better still’, the proverb continues; this third ‘ar-
period by adding zeros at the end of the original row’ is provided by the charge transfer devices,
record, while maintaining the original record intact. which can perform analog as well as digital signal
This process artificially changes the period, which, processing. As compared to the digital signal
in turn, changes the locations of the spectral lines processors we have talked about, the charge
without altering the continuous form of the original transfer processing has the distinct advantage of
spectrum. In this manner, spectral components not requiring an A/D conversion, and hence is less
originally hidden from view may be shifted to expensive, more versatile and more accurate.
points where they may be observed.

Problems
Concluding Comments
P:1. Draw the equivalent of Fig. 32.6 for
The aim of this chapter was to introduce the basic N = 16. You have to take an A3 paper with
concepts involved in digital signal processing, 90° turn around.
including an introduction to the FFT and its P:2. Draw the FFT diagram for N = 16 using
applications. We went through the sampling decimation in frequency.
process carefully, and pointed out the various P:3. What is the minimum possible number of
errors introduced by quantization. A brief dis- non-trivial multipliers in Fig. 32.8?
cussion on structures was included to facilitate an P:4. What is better? DIT or DIF?
understanding of the implementation of a digital P:5. What is a possible remedy for eliminating
signal processor in a general-purpose computer leakage altogether? Is it practicable?
or by a special-purpose hardware. Two basic
forms of FFT were introduced, and two of the
most important applications of the FFT were
discussed. It was pointed out that correct appli- References
cation of FFT requires a much more than casual
understanding of the periodic extension intro- 1. J.W. Cooley, J.W. Tukey An algorithm for the
duced by the DFT process. machine calculation of complex fourier series. Math.
Comput. 19, 297–301 (April 1965)
In conclusion, it is worth mentioning that 2. W.M. Gentleman, G. Sande, Fast Fourier
digital signal processing is not an answer to all Transforms-for Fun and Profit. in 1966 Fall Joint
signal processing problems. Digital and analog Computer Conference of AFIPS Proceedings,
techniques form ‘two arrows in the quiver’, which pp. 563–578
On Second-Order Digital Band-Pass
and Band-Stop Filters 33

The chapter deals with the derivation, design, is that of a normalized digital band-pass filter
limitations and realization of second-order (BPF) whose centre frequency x0 and 3-dB
digital band-pass (BP) and band-stop (BS) fil- bandwidth B are given by
ters with independent control of the centre
frequency and the bandwidth in the BP case, 2a
x0 ¼ cos 1 b and B ¼ cos 1
: ð33:2Þ
and rejection frequency and the difference 1 þ a2
between the pass-band edges in the BS case.
Thus x0 and b are independently controllable by
varying a and b, provided one can realize Eq. 33.1
Keywords by only two multipliers of the same values. Such a

Digital filter Band-stop  Band-pass realization using the lattice structure has been given
Second-order filters in [1]. The complement of Eq. 33.1, obtained by
subtracting H1 (z) from unity, is

1þa 1 2bz 1 þ z 2
H2 ðzÞ ¼ : ð33:3Þ
2 1 bð1 þ aÞz 1 þ az 2

It represents a band-stop filter (BSF) whose


rejection frequency x0 and the parameter B = x2
Introduction – x1, where x2 and x1 are the two 3 dB fre-
quencies, are also given by Eq. 33.2. Hence, the
The second-order transfer function BPF realization can also be used for realizing the
BSF.
1 a 1 z 2 While one appreciates the elegance of the
H1 ðzÞ ¼ ð33:1Þ transfer function Eq. 33.1, the question of how it
2 1 bð1 þ aÞz 1 þ az 2
was conceived of has not been answered in
textbooks. Another question that arises is the
following: Is the a-controllability of the band-
width valid for any arbitrary pass-band toler-
ance? Yet another relevant question concerns the
Source: S. C. Dutta Roy, “On Second-Order Digital realization of a canonic structure with multipliers
Band-Pass and Band-Stop Filters,” IETE Journal of a and b. Are structures other than that given in
Education, vol. 49, pp. 59–63, May–August 2008.


Eq. 33.1 possible? This chapter presents answers cos x0 ¼ 2r cos h=ð1 þ r 2 Þ: ð33:8Þ
to all these questions.
Comparing with Eq. 33.2, we, therefore, have

Derivation 2r cos h=ð1 þ r 2 Þ ¼ b: ð33:9Þ

For a BPF, we argue that the response at x = 0 Also, combining Eqs. 33.7 and 33.8, we note
and p should both be zero. With z = ejx, x = 0 that the maximum magnitude will be unity if
and p translate to z = 1 and –1, respectively.
Hence, the numerator polynomial of the transfer K ¼ ð1 r 2 Þ=2; ð33:10Þ
function must be of the form K(1 – z−2), K being
a constant. Also, we know that real poles and, combining Eq. 33.7 with Eq. 33.10, we can
severely limit the selectivity of a BPF. Hence, we write
let the poles be at re±jh, where r is close to but
less than unity for high selectivity. The denom- 1
jH1 ðejx Þj2 ¼ i2 : ð33:11Þ
inator polynomial of the BPF transfer function is,
h
ð1 þ r2 Þ cos x 2r cos h
1þ ð1 r 2 Þ sin x
therefore,

ð1 rejh z 1 Þð1 re jh z 1 Þ Now, suppose, instead of the range


pffiffi
¼ 1 2r cos hz 1 þ r 2 z 2 : ð33:4Þ 1= 2\ jHðejx Þj \1; the pass-band is defined
pffiffiffiffiffiffiffiffiffiffiffiffi
by 1= 1 þ e2 \ jHðejx Þj \1; where e < 1 is
The required transfer function is, therefore, arbitrary. Then from Eq. 33.11, the pass-band
edge frequencies x2 and x1 will satisfy the
Kð1 z 2 Þ equation
H1 ðzÞ ¼ : ð33:5Þ
1 2r cos hz 1 þ r 2 z 2
ð1 þ r 2 Þcos x 2r cos h ¼ ð1 r 2 Þsin x:
where K will be chosen to normalize the maxi- ð33:12Þ
mum magnitude to unity. The frequency
response of Eq. 33.5 is given by We let (at the moment arbitrarily, but justified
by later results)
2jK sin x
H1 ðejx Þ ¼ :
½ð 1 r 2 Þ cos x 2r cos hŠð1 r2 Þ sin x
ð1 þ r 2 Þcos x1 2r cosh ¼ ð1 r 2 Þsin x1 ;
ð33:6Þ
ð33:13Þ
The magnitude squared function can be writ-
and
ten as
ð1 þ r 2 Þcosx2 2r cos h ¼ ð1 r 2 Þsin x2 :
jx 2 4K 2
jH1 ðe Þj ¼ h i2 : ð33:14Þ
ð1 þ r 2 Þ cos x 2r cos h
sin x þ ð1 r 2 Þ2
Subtracting Eq. 33.14 from Eq. 33.13 gives
ð33:7Þ

Maximum value of |H1(ejx)|2 is reached when ð1 þ r 2 Þ ðcosx1 cosx2 Þ


the first term in the denominator vanishes, i.e. at ¼ ð1 r 2 Þ ðsinx1 þ sinx2 Þ: ð33:15Þ
x = x0, where

Or, Or,

x2 þ x1 x2 x1 cos x1 þ cos x2  4rcos h=ð1 þ r 2 Þ ¼ 2b


ð1 þ r 2 Þ sin sin
2 2 ¼ 2cos x0 :
x2 þ x1 x2 x1
¼ eð1 r 2 Þ sin cos : ð33:23Þ
2 2
ð33:16Þ
Thus, cosx1 and cosx2 are approximately
This gives arithmetically symmetrical about cosx0.

x2 x1 eð1 r 2 Þ
tan ¼ : ð33:17Þ
2 1 þ r2 Design for Arbitrary Pass-band
Tolerance
Using the relationships
Combining Eqs. 33.17 and 33.21, and letting be
cos 2 h ¼ 2cos2 h 1 and sec2 h ¼ 1 þ tan2 h;
denote the bandwidth for an arbitrary pass-band
ð33:18Þ tolerance specified by e, we get
we get from Eq. 33.17, after simplification, Be eð1 aÞ
tan ¼ : ð33:24Þ
2 1þa
ð1 þ r 2 Þ2 e2 ð1 r 2 Þ2
cosðx2 x1 Þ ¼ :
ð1 þ r 2 Þ2 þ e2 ð1 r 2 Þ2 Thus, given e and Be, one can choose a from
ð33:19Þ
e tan B2e
a¼ : ð33:25Þ
The 3-dB bandwidth B is obtained as x2 – x1 e þ tan B2e
with e = 1. Hence,
Equation 33.25 puts a constraint on the
2r 2 specifications of e and Be. Since 0 < Be < p, tan
cos b ¼ : ð33:20Þ
1 þ r4 (Be/2) > 0; also a = r2 is a real positive quantity.
Hence, e and Be must satisfy
Comparing Eq. 33.20 with Eq. 33.2, we get
Be
2 e [ tan : ð33:26Þ
r ¼ a: ð33:21Þ 2

Substituting Eqs. 33.9, 33.10 and 33.21 in This is quite logical because, with two
Eq. 33.5, it becomes identical with Eq. 33.1. parameters e and b, one cannot satisfy three
Note, in passing, that for small pass-band tol- specifications, viz. e, Be and x0. The constraint
erance (e ! 0) or small bandwidth (x2 – x1 ! 0) Eq. 33.26 is shown graphically in Fig. 33.1 in
or both addition of Eqs. 33.13 and 33.14 gives the form of a plot of e ¼ tan B2e : No specification
point which lies below the curve can be met by
ð1 þ r 2 Þðcos x1 þ cos x2 Þ  4r cos h: ð33:22Þ the second-order BPF characterized by Eq. 33.1.
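For the 3-dB case (ε = 1), Eq. 33.2 can be inverted in closed form. The following sketch (not part of the original text) computes the two multiplier values of Eq. 33.1 from a specified centre frequency ω₀ and 3-dB bandwidth B, both in radians:

```python
import math

def bpf_multipliers(w0, B):
    """Second-order band-pass filter of Eq. 33.1:
    H1(z) = 0.5*(1 - a)*(1 - z^-2) / (1 - b*(1 + a)*z^-1 + a*z^-2).
    b fixes the centre frequency, a fixes the 3-dB bandwidth (0 < B < pi/2 assumed)."""
    b = math.cos(w0)                          # Eq. 33.2: w0 = arccos(b)
    a = (1.0 - math.sin(B)) / math.cos(B)     # solves cos(B) = 2a/(1 + a^2) with 0 < a < 1
    return a, b

# Example: a, b = bpf_multipliers(math.pi / 2, 0.1 * math.pi)  ->  a ~ 0.727, b ~ 0
```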

2.0
a bð1 þ aÞz 1 þ z 2
A2 ðzÞ ¼ : ð33:27Þ
1 bð1 þ aÞz 1 þ az 2

1.5 by noting that

H1;2 ðzÞ ¼ ð1=2Þ½1  A2 ðzފ: ð33:28Þ


e 1.0
Figure 33.2 shows the implementation of
Eq. 33.28, while Fig. 33.3 shows the realization
of A2(z) with a lattice structure using the two
0.5
multipliers a and b.
We give here another realization of A2(z),
starting from that of the transfer function
0
0 p/4 p/2 3p/4 p d2 þ d1 z 1 þ z 2
Be A2 ðzÞ ¼ : ð33:29Þ
1 þ d1 z 1 þ d2 z 2
Fig. 33.1 Plot of e ¼ tan B2e : Specification points below
as given in [1] and reproduced in Fig. 33.4. For
the curve cannot be met by the second-order filter
Eq. 33.28,

d1 ¼ bð1 þ aÞ and d2 ¼ a: ð33:30Þ


Realization
The part marked by nodes A, B and C in
In [1], the realizations of Eq. 33.1 and 33.3 have Fig. 33.4 can be modified to the equivalent
been derived from that of the all-pass filter configuration shown in Fig. 33.5, where the two

Fig. 33.2 Implementation of X(z)H1(z)


Eq. 33.28

1
1/2
X(z)

A2(z) X(z)H2(z)

Fig. 33.3 Realization of


A2(z) of Eq. 33.27, as given in
X1(z) a b
[1]

1
1

A2(z)X1(z) z 1 z 1

Fig. 33.4 Implementation of 1 1


z z Y1
A2(z) of Eq. 33.29, as given in X1
[1]

A
d1
1

B d2 C

Fig. 33.5 Equivalent A A


representation of the part
ABC of Fig. 33.4

d1= b (1+a) b

1+a
B C B C
d2 = a

multipliers a and b have been separated out. each other. Design equations have been derived
Replacing the part ABC of Fig. 33.4 by part and the limitations of the design have been
(b) of Fig. 33.5 gives an alternative to the lattice pointed out for arbitrary pass-band tolerance. An
structure of Fig. 33.3. Whether other alternative alternative canonic realization structure has also
structures are possible or not is left as an open been presented, in which a and b are the only
problem for the reader. two multipliers.

Conclusion Problems

A derivation has been given of the elegant P:1. Suppose b = 0 in Eq. 33.3. What kind of
second-order band-pass/band-stop filter transfer filter do you get?
function, in which the two parameters a and b P:2. In Eq. 33.3, investigate what happens when
control the centre frequency and the difference (i) b = +1 and (ii) b = –1.
between the pass-band edges, independently of

P:3. Look at Eq. 33.19. Find cos (x2 + x1) and Reference
find the product cos (x2 + 1) cos (x2 – x1).
Interpret the result. 1. S.K. Mitra, Digital Signal Processing—A Computer
P:4. What happens when b = 0 in Eq. 33.27. Based Approach, Second Edition (McGraw-Hill, New
P:5. What happens when (i) b = +1 and York, 2000)
(ii) b = –1 in Eq. 33.27?
Derivation of Second-Order Canonic
All-Pass Digital Filter Realizations 34

This chapter deals with the derivation of two 34.3, which realize, respectively, the following
canonic all-pass digital filter realizations, first transfer functions:
proposed by Mitra and Hirano. In contrast to d1 þ z 1
their derivation, which uses a three-pair A1 ðzÞ ¼ ; ð34:1Þ
1 þ d1 z 1
approach, our derivation is much simpler
because we use a two-pair approach, in which
only four, instead of nine parameters have to d1 d2 þ d 1 z 1 þ z 2
A2 ðzÞ ¼ ; ð34:2Þ
be chosen. 1 þ d1 z 1 þ d1 d2 z 2

and
Keywords d2 þ d1 z 1 þ z 2

Canonical All-pass  Digital filter B2 ðzÞ ¼
1 þ d1 z 1 þ d2 z 2
 ð34:3Þ
Realizations
Note that the transformed forms of these
structures [3] will also give canonic realizations
Introduction of the same transfer functions; these are not being
considered in this chapter.
All-pass digital filters have been recognized as The derivation of the first-order structure was
basic building blocks of many digital signal given in [2] by using the two-pair approach, i.e.
processors [1]. Any arbitrary order all-pass filter by assuming a multiplier-less two-pair with the
can be realized by cascading first- and single multiplier d1 as its termination, as shown
second-order ones only. Mitra and Hirano [2] in Fig. 34.4. Using the two-pair relationship
proposed the canonic first- and second-order     
configurations shown in Figs. 34.1, 34.2 and Y1 t11 t12 X1
¼ ; ð34:4Þ
Y2 t21 t22 X2

and the terminating constraint:


X2 ¼ d1 Y2 ; ð34:5Þ
Source: S. C. Dutta Roy, P. Uday Kiran, Bhargav R.
Vyas,Tarun Aggarwal and D. G. Senthil Kumar,
“Derivation of Second-Order Canonic All-Pass Digital
Filter Realizations”, IETE Journal of Education, vol. 47,
pp. 153–157, October–December 2006.


X1 z –1 –1

d1

Y1

Fig. 34.1 Canonical realization of Eq. 34.1

Fig. 34.2 Canonical z 1 1 Y1


X1 z
realization of Eq. 34.2

1
d1 d2
X2
Y2

Fig. 34.3 Canonical


X1 z 1 z 1 Y1
realization of Eq. 34.3

1 d1
d2

X1
one obtains
Y2
Y1 t11 d1 ðt11 t12 t12 t21 Þ
A1 ðzÞ ¼ ¼ : ð34:6Þ
X1 1 d1 t22
Digital d1
2-pair Comparing Eq. 34.6 with Eq. 34.1, we see
Y1 that various choices are possible, of which the set
X2
t11 ¼ z 1 ; t22 ¼ z 1 ; t12 ¼ 1 þ z 1 ; and
Fig. 34.4 Digital two-pair terminated in multiplier d1 1
t21 ¼ 1 z ð34:7Þ

gives the structure of Fig. 34.1, while inter- Again, obvious choices of t12 and t21 are as
changing the expressions of t12 and t21 in follows:
Eq. 34.7 gives the transposed form of Fig. 34.1.
2
If we follow the same procedure for deriving t12 ¼ 1 z ð34:12Þ
the structures for Eqs. 34.2 and 34.3, we have to
start with a multiplier-less three-pair, two of whose and
pairs will be terminated in d1 and d2. As given in
1
[2], analysis for the required t-parameters of the t21 ¼ d2 þ z þ d2 z 2 : ð34:13Þ
3  3 t-matrix becomes quite involved. The pur-
pose of this chapter is to present much simpler From Eqs. 34.4, 34.9, 34.12 and 34.13, we
derivations of the structures of Figs. 34.2 and get
34.3, by using the two-pair approach only. Y1 ¼ z 2 X1 þ ð1 z 2 ÞX2 ; ð34:14Þ

and
Derivation of the Structure 1
of Fig. 34.2 Y2 ¼ ðd2 þ z þ d2 z 2 ÞX1 ðz 1
þ d2 z 2 ÞX2 :
ð34:15Þ
With the aim of deriving the structure of Fig. 34.2,
we start with same constrained two-pair shown in Equations 34.14 and 34.15 can be rewritten in
Fig. 34.4, where the two-pair is no longer the following forms
multiplier-less. Instead, it contains one multiplier
(d2) and two delays. Following the steps of Y1 ¼ z 2 ðX1 X2 Þ þ X2 ; ð34:16Þ
Eqs. 34.4, 34.5 and 34.6, we now have to match
the right-hand sides of Eqs. 34.6 and 34.2, i.e. and

Y2 ¼ z 1 ðX1 X2 Þ þ d2 X1 þ z 2 ðX1
 
t11 d1 ðt11 t22 t12 t21 Þ d1 d2 þ d1 z þ z 1 2 X2 Þ
¼ ð34:17Þ
1 d1 t22 1 þ d1 z 1 þ d1 d2 z 2
ð34:8Þ
It is easily verified that Fig. 34.2 is a real-
An obvious set of simple choices is the ization of these two equations, where, for con-
following: venience, the locations of the signals Y2 and X2
are also indicated.
t11 ¼ z 2 ; t22 ¼ ðz 1
þ d2 z 2 Þ; ð34:9Þ

t11 t22 t12 t21 ¼ ðd2 þ z 1 Þ:


Derivation of the Structure
From Eq. 34.9, we get of Fig. 34.3
t12 t21 ¼ t11 t22 þ d2 þ z 1
In contrast to the derivation of [2], which again
¼ z 2 ðz 1 þ d2 z 2 Þ þ d2 þ z 1 ;
uses the three-pair approach, ab initio, for
ð34:10Þ Eq. 34.3, we derive the structure of Fig. 34.3
from that of Fig. 34.2 by an elementary manip-
which can be simplified to the following: ulation. Let, in Eq. 34.3, d2 = d1 p2. Then
Eq. 34.3 becomes identical in form to Eq. 34.2
t12 t21 ¼ ð1 z 2 Þðd2 þ z 1
þ d2 z 2 Þ ð34:11Þ with d2 replaced by p2. In the resulting diagram,
the part with p2, Y2 and d1 is reproduced in

(a) (b) Then from Eqs. 34.19–34.21, we get


2
d1 z þ d1 z 1 d1 z 2
d1 t12 t21 ¼ 1
 ;
1 þ d1 z 1 þ d1 z 1 1 þ d 1 z 1
P2 d2
ð34:22Þ
d1 Y2
which, on simplification, gives
Fig. 34.5 a Part of modified Fig. 34.2 and b its
equivalent
d1 ð1 z 2 Þð1 þ d1 z 1 þ z 2 Þ
t12 t21 ¼ 1
 
1 þ d1 z 1 þ d1 z 1
Fig. 34.5a. Shifting d1 to the two inputs of ð34:23Þ
summing point gives the equivalent diagram of
Fig. 34.5b. Replacing the latter in the original Subject to the constraint of Eq. 34.23, various
diagram gives the configuration of Fig. 34.3. choices for t12 and t21 are possible. By some trial
and error, we have found the following choices
to yield a canonic solution:
Alternative Derivation
of the Structure of Fig. 34.2 d1 ð1 z 2 Þ
t12 ¼ ; ð34:24Þ
1 þ d1 z 1
We now ask the question: If, in Fig. 34.4, we
replace d1 by d2, can we get another canonic and
realization of Eq. 34.2? In effect, then, we should
have ð1 þ d1 z 1 þ z 2 Þ z 2
t21 ¼ 1
¼ 1þ 
1 þ d1 z 1 þ d1 z 1
z 2 þ d1 z 1 þ d1 d2 t11 d2 ðt11 t22 t12 t21 Þ ð34:25Þ
¼ 
d1 d2 z 2 þ d1 z 1 þ 1 1 d2 t22
ð34:18Þ We now have the following basic equations:
2
In order to have the two denominators in z þ d1 z 1 d1 ð1 z 2 Þ
Y1 ¼ X 1 þ X2 ; ð34:26Þ
Eq. 34.18 of the same form, we now divide both 1 þ d1 z 1 1 þ d1 z 1
the numerator and denominator of the left-hand
side of Eq. 34.18 by (1 + d1z−1). Then, we and
identify
z 2 d1 z 2
 
Y2 ¼ 1þ 1
X1 1
X2 
d1 z 2 1 þ d1 z 1 þ d1 z
t22 ¼ : ð34:19Þ
1 þ d1 z 1 ð34:27Þ

Correspondingly, for the modified numerator, A systematic procedure for obtaining the
we get realization diagram is depicted in Fig. 34.6. Part
(a) of the figure shows how t11X1 is obtained with
d1 X2 = 0, while part (b) of the same figure shows
t11 t22 t12 t21 ¼ 1
; ð34:20Þ
1 þ d1 z how the same hardware can realize t12X2 with
X1 = 0. Superimposing parts (a) and (b), we get
and the part (c) with the solid lines, which give the
2 output Y1. To obtain the output Y2, note the value
z þ d1 z 1
t11 ¼ ; ð34:21Þ of the signal at node A, as indicated, and that just
1 þ d1 z 1
adding X1 to it gives Y2 according to Eq. 34.26.

(a)
X1 X1z -2 - X 2d1z -2
(b)
1 + d1z -1 1 + d1z -1
1 + d1z -1

X1 z 1 t11X1 z 1 t12X2

1 z 1
z
1 1
d1 d1 X2

X1d1z -1 X 2d1 X2
1 + d1z -1 1 + d1z -1 1 + d1z -1

X1z -2 X 2d1z -2
(c) 1 + d1z -1 1 + d1z -1

X1 z 1 A Y1

z 1
1
d1 X2 d2
Y2

Fig. 34.6 Steps in the realization of Eqs. 34.25 and 34.26

Finally, multiplying Y2 by d2 gives X2, as indi- under this constraint, one possible set of choices
cated by the broken lines in Fig. 34.6c. This for the t-parameters is the following:
configuration is indeed identical with that of
Fig. 34.2. t11 ¼ d1 d2 ; t22 ¼ ðd1 þ d1 d2 z 1 Þ; t12
Besides the transposed structures of Figs. 34.2 ¼ ð1 d1 d2 Þ and t21 ¼ d1 þ ð1 þ d1 d2 Þz 1 ;
and 34.3, are there other canonical possibilities? ð34:28Þ
We leave this as an open question to the reader.
and that the resulting realization is the transpose
of that of Fig. 34.2.
Yet Another Derivation
of the Structure of Fig. 34.2
Conclusion
Another question that arises at this point is the
following: If, in Fig. 34.4, we replace d1 by z−1, In this chapter, we have derived canonic real-
and aim for a two-pair containing one delay and izations of second-order all-pass digital filters by
the multipliers d1 and d2, do we get a new a procedure, which is much simpler than that of
structure? It is left to the readers to verify that the original three-pair approach of [2]. We have

also derived the realization of Eq. 34.3 from that to Eq. 34.1). Find the overall transfer
for Eq. 34.2 by an elementary manipulation of function and find its characteristics.
Fig. 34.2; this is drastically simpler as compared P:5. Do the same for two second-order ones,
to the repetition of the three-pair approach, as having different d1 and d2 (refer to
done in [2]. Eq. 34.2).

Problems

P:1. Write down the transfer function of a References


third-order all-pass filter and draw its
structure. 1. P.A. Regalia, S.K. Mitra, P.P. Vaidyanathan,
The digital all-pass network: a versatile signal
P:2. Do the same for a fourth-order transfer processing building block. Proc. IEEE 76, 19–37
function. (1988)
P:3. When two digital 2-points are cascaded, 2. S.K Mitra, K. Hirano, Digital all-pass networks.
how do you find the overall parameters. Do IEEE Trans. Circ. Sys. 21, 688–700 (September
1974)
this in terms of t-parameter. 3. A.V. Oppenheim, R.W. Schafer, J.R. Buck,
P:4. Two first-order all-pass transfer functions Discrete-Time Signal Processing (Prentice Hall, New
are cascaded. They have different d1s (refer Jersey, 2000), p. 363
Derivation of the FIR Lattice
Structure 35

A simple derivation is presented for the FIR and then derive the recursion formula for the
lattice structure, based on the digital two-pair coefficients of the lower order transfer functions
concept. Go ahead, read it and judge for
yourself whether it is simple or not! Xi ðzÞ
Hi ðzÞ ¼
X0 ðzÞ
X i
¼ 1þ anðiÞ z n ; i ¼ N 1 to 1:
Keywords
n¼1
Lattice structure  Realization ð35:2Þ

In the process, one finds the multipliers as

ðiÞ
k i ¼ ai ð35:3Þ
Introduction
and also the relationship
In discussing the FIR lattice structure, it is usual in
Xi0 ðzÞ
textbooks on digital signal processing (see, e.g. Hi0 ðzÞ ¼ ¼ z i Hi ðz 1 Þ: ð35:4Þ
[1–3]) to assume the configuration of Fig. 35.1a, X0 ðzÞ
where each section is of the form shown in
Fig. 35.1b, for realizing the transfer function i.e. the two transfer functions Hi(z) and Hi0 (z) are
a pair of mirror image polynomials. Specifically,
XN ðzÞ XN with Hi(z) given by Eq. 35.2.
HN ðzÞ ¼ ¼ 1þ anðNÞ z n
ð35:1Þ
X0 ðzÞ n¼1 i 1
ðiÞ
X
Hi0 ðzÞ ¼ z i þ ai n z n: ð35:5Þ
n¼0

Even Mitra [1], who introduced the concept of


digital two-pair [4] and used the same to derive
IIR lattice structures, did not use it to derive FIR
lattice structure. However, Vaidyanathan [5], a
Source: S. C. Dutta Roy, “Derivation of the FIR Lattice former student of Mitra, used this approach to
Structure”, IETE Journal of Education, vol. 45, pp. 211– derive a variety of FIR lattice structures for the
212, October–December 2004.


Fig. 35.1 a The general cascaded FIR lattice structure. b The ith section of (a)

so-called ‘lossless bounded real (LBR)’ transfer Hi ðzÞ ¼ t11 Hi 1 ðzÞ þ t12 Hi0 1 ðzÞ; ð35:9Þ
functions.
We present here a simple derivation of the Hi0 ðzÞ ¼ t21 Hi 1 ðzÞ þ t22 Hi0 1 ðzÞ: ð35:10Þ
FIR lattice structure of Fig. 35.1 using the
Observe that in Eq. 35.9, t11 has to be unity in
two-pair approach, for a general transfer function
order to satisfy the requirement of Eq. 35.2 that
of the form Eq. 35.1 with the only constraints of
the constant term in Hi(z) should be unity. Also,
Eqs. 35.2 and 35.4, and no others.
for satisfying the requirement that Hi(z) should
be a polynomial of order i in z−1, t12 must be of
the form kiz−1. Thus, we have
Derivation
t11 ¼ 1 and t12 ¼ ki z 1 : ð35:11Þ
Consider the ith stage of the FIR lattice, shown in
Fig. 35.1b, and let it be characterized by the We next put the constraint of Eq. 35.4 and
transmission matrix get, from Eqs. 35.9 and 35.11,

Hi0 ðzÞ ¼ z i Hi ðz 1 Þ
" #
ðiÞ ðiÞ
ðiÞ t11 t12
T ¼ ðiÞ ðiÞ : ð35:6Þ
¼ z i Hi 1 ðz 1 Þ þ ki zHi0 1 ðz 1 Þ
 
t21 t22
ði 1Þ
¼ z i Hi 1 ðz 1 Þ þ ki z Hi0 1 ðz 1 Þ
For simplicity, we shall drop the superscript ði 1Þ ði 1Þ
(i) in the following discussion. Equation 35.6 ¼ z 1z Hi 1 ðz 1 Þ þ ki z Hi0 1 ðz 1 Þ
implies that ¼ ki Hi 1 ðzÞ þ z 1 Hi0 1 ðzÞ

Xi ðzÞ ¼ t11 Xi 1 ðzÞ þ t12 Xi0 1 ðzÞ ð35:7Þ ð35:12Þ

Comparing the last line of Eq. 35.12 with


Xi0 ðzÞ ¼ t21 Xi 1 ðzÞ þ t22 Xi0 1 ðzÞ: ð35:8Þ
Eq. 35.10, we observe that
In terms of the transfer functions Hi(z) and 1
t21 ¼ ki and t22 ¼ z ð35:13Þ
Hi0 ðzÞ; Eqs. 35.7 and 35.8 translate to the
following:

The structure resulting from Eqs. 35.11 and Problems


35.13 is precisely that of Fig. 35.1b.
To obtain the coefficients of Hi − 1(z) from P:1. What happens if the lower bound slanting
those of Hi(z), one follows the same procedure as arrows in Fig. 35.1b points in the opposite
in [1]. The result is direction? What if the upper bound slanting
arrow points in the opposite direction?
ðiÞ ðiÞ
an k i ai What if both do the same?
anði 1Þ
¼ n
; n¼i 1 to 1; P:2. Suppose all arrows in Fig. 35.1a are
1 ki2
reversed. What kind of transfer function do
i¼N 1 to 1
we get?
ð35:14Þ P:3. Suppose x1, x1′ arrows point in the opposite
direction. What kind of transfer function
would you get?
Concluding Comments P:4. Re-derive the equations in terms of trans-
mission parameters.
A simple derivation has been presented for the P:5. Besides t- and transmission parameters,
FIR lattice structure on the basis of the constraint what other parameter are meaningful in the
that the two transfer functions obtained at the context of digital two-pairs? How are they
output of any section bear a mirror image rela- related t- and transmission parameters?
tionship to each other.
It is not difficult to appreciate that other lattice
structures are possible to derive by assuming some
other relationship between Hi(z) and Hi0 ðzÞ, e.g. References

Hi0 ðzÞ ¼ z i Hi ð z 1 Þ: ð35:15Þ 1. S.K. Mitra, Digital Signal Processing—A


Computer-Based Approach (McGrawHill, New York,
These structures, when carefully derived, differ 2001)
2. A.V. Oppenheim, R.W. Schafer, Discrete-Time Signal
from those of Fig. 35.1b in one or more of the
Processing, Englewood Cliffs (Prentice Hall, NJ,
following aspects: the position of the delay branch; 1989)
positions of the multipliers; relative signs of the two 3. J.G. Proakis, D.G. Manolakis, Introduction to Digital
multipliers, i.e. one multiplier may be the negative Signal Processing (Macmillan, New York, 1989)
4. S.K. Mitra, R.J. Sherwood, Digital ladder networks.
of the other; and nonuniformity of the signs of the IEEE Trans. Audio Electroacoust. AU-21, 30–36
multipliers from one section to the next. Examples (February 1973)
of such structures can be found in [5]. Detailed 5. P.P. Vaidyanathan, Passive cascaded lattice structures
derivation of these structures for the general transfer for low-sensitivity FIR design, with applications to
filter banks. IEEE Trans. Circ. Syst. CAS-33, 1045–
function of Eq. 35.1 and their recurrence relations 1064 (November 1986)
will be presented in a later chapter.
Solution to a Problem in FIR Lattice
Synthesis 36

In FIR lattice synthesis, if at any but the last which is known to be realizable by the lattice
stage, a lattice parameter becomes ±1, then structure of Fig. 36.1a, where each ki block has
the synthesis fails. A linear phase transfer the structure shown in Fig. 36.1b, provided that
function is an example of this situation. This at no stage in the synthesis procedure except the
chapter, written in a tutorial style, is con- last one, one encounters a parameter ki = ±1. For
cerned with a simple solution to this problem, example, if hN(N) = ±1, one cannot obtain a
demonstrated through simple examples, rather lattice by the usual procedure. This problem was
than detailed mathematical analysis, some of not adequately addressed to in the literature (see
which is available in (Dutta Roy in IEE e.g. [3–5]), and was solved in [1] for the linear
Proc-Vis Image Signal Process 147:549–552, phase case with rigorous mathematical analysis
2000 [1]). and proofs. However, the solution for the non-
linear phase case with hN(N) = ±1 was given in
[1] in terms of parallel lattices, which, in general,
Keywords is neither delay canonic nor multiplier canonic.
FIR filters  Lattice synthesis In this chapter, written in a tutorial style, we
present the essence of [1] through several simple
examples for easy comprehension by the stu-
dents. We also give a canonic solution to the
nonlinear phase case through a tapped lattice
Introduction
structure.
Note that although two multipliers have
Consider the FIR transfer function
been shown in Fig. 36.1b, each lattice section
N
X should also be realized by a single multiplier
n structure. Unfortunately, however, such a struc-
HN ðzÞ ¼ 1 þ hN ðnÞz ð36:1Þ
n¼1 ture does not exist as yet. However, we shall
refer to the structure of Fig. 36.1 as multiplier
canonic, even if we show two multipliers in each
lattice section.

Source: S. C. Dutta Roy, ‘Solution to a Problem in FIR


Lattice Synthesis’, IETE Journal of Education, vol. 43,
pp. 33–36, January–March 2002 (Corrections on p. 219,
October–December 2002).


(a) x(n) y(n)


0 1 2 N 1 N 0.38889 0.38462 0.2
x(n) y (n)
k1 k1 kN

Fig. 36.2 Lattice structure for the transfer function given


(b) i 1 i by Eq. 36.6
i 1 i ki
ki Example 1
ki
Let
z 1
1 2 3
H3 ðzÞ ¼ 1 þ 0:5 z þ 0:3 z 0:2 z ð36:6Þ
Fig. 36.1 a General FIR lattice structure b Composition
of the ith block in (a)
Here, h3(1) = 0.5, h3(2) = 0.3 and
h3(3) = −0.2. Hence k3 = h3(3) = −0.2. From
Eq. 36.5,

Conventional Synthesis Procedure h2 ð1Þ ¼ ½h3 ð1Þ k3 h3 ð2ފ=ð1 k32 Þ ¼ 0:53846


ð36:7Þ
We first review the conventional FIR lattice
synthesis procedure [2]. By analysis of Fig. 36.1, and
it is easily shown that if Hi(z) is the transfer
function from node 0 to node i, then h2 ð2Þ ¼ ½h3 ð2Þ k3 h3 ð1ފ=ð1 k32 Þ ¼ 0:38462
ð36:8Þ
Hi ðzÞ ¼ Hi 1 ðzÞ þ ki z i Hi 1 ðz 1 Þ; i ¼ 1 ! N
ð36:2Þ Hence,
1 2
Also, Hi(z) is of the form H2 ðzÞ ¼ 1 þ 0:53846 z þ 0:38462 z ð36:9Þ

i
X so that k2 = 0.38462. Apply Eq. 36.5 again to get
n
Hi ðzÞ ¼ 1 þ hi ðnÞz ð36:3Þ
n¼1
h1 ð1Þ ¼ ½h2 ð1Þ k2 h2 ð1ފ=ð1 k22 Þ
ð36:10Þ
The lattice parameters (ki) of Fig. 36.1 are ¼ h2 ð1Þ=ð1 þ k2 Þ ¼ 0:38889
given by
Thus
ki ¼ hi ðiÞ; i ¼ 1 ! N ð36:4Þ H1 ðzÞ ¼ 1 þ 0:38889 z 1
ð36:11Þ
To obtain Hi−1(z) from Hi(z), i = N ! 2, one giving k1 = 0.38889. The synthesis is thereby
uses the following recursion formula: complete and the resulting structure is shown in
Fig. 36.2.
hi 1 ðnÞ ¼½hi ðnÞ ki hi ði nފ=ð1 ki2 Þ
i¼N!2
ð36:5Þ Linear Phase Transfer Function

Obviously, if ki = ±1, then the synthesis fails! As is well known, there are four different types of
We now illustrate the conventional procedure linear phase transfer functions, viz. (1) symmet-
by an example. rical impulse response of even length;
Linear Phase Transfer Function 273

(2) symmetrical impulse response of odd length; Nth-order case, N odd, which will require (N−1)/2
(3) asymmetrical impulse response of even non-trivial lattice sections to begin with, followed
length; and (4) asymmetrical impulse response of by the same number of simple delays, and ending
odd length. We shall consider each of these cases in one unity parameter lattice section.
through simple examples.
Example 3: Illustrating case 2
Example 2: Illustrating Case 1 Let
Let
1 2 3
4 ðzÞ ¼ 1 þ h4 ð1Þz þ h4 ð2Þz þ h4 ð1Þz þz 4
1 2 3
H5 ðzÞ ¼1 þ h5 ð1Þz þ h5 ð2Þz þ h5 ð2Þz ð36:16Þ
4 5
þ h5 ð1Þz þz
Here we have k4 = 1; also Eq. 36.16 can be
ð36:12Þ
rewritten as
Here k5 = 1. Note that Eq. 36.12 can be rewritten as
H4 ðzÞ ¼ 1 þ h4 ð1Þz 1 þ ð1=2Þh4 ð2Þz 2
 

þ ð1Þz 4 1 þ h4 ð1Þz þ ð1=2Þh4 ð2Þz2


 
H5 ðzÞ ¼ 1 þ h5 ð1Þz 1 þ h5 ð2Þz 2
 

þ ð1Þz 5 1 þ h5 ð1Þz þ h5 ð2Þz2 ð36:17Þ


 

ð36:13Þ
Combining this with Eq. 36.2 with i = 4, we get
Comparing this with Eq. 36.2 with i = 5, we 1 2
H3 ðzÞ ¼ 1 þ h4 ð1Þz þ ð1=2Þh4 ð2Þz ¼ H2 ðzÞ
note that
ð36:18Þ
1 2
H4 ðzÞ ¼ 1 þ h5 ð1Þz þ h5 ð2Þz ¼ H2 ðzÞ
which implies that k3 = 0 and k2 = (1/2)h4(2).
ð36:14Þ
Finally, as in Eq. 36.15, we have
because the order of the polynomial is 2. This order
k1 ¼ h1 ð1Þ ¼ h2 ð1Þ=ð1 þ k2 Þ
reduction means that k4 = k3 = 0. Also k2 = h5(2), ð36:19Þ
and by the formula in Eq. 36.10, we get ¼ h4 ð1Þ=½1 þ ð1=2Þh4 ð2ފ

k1 ¼ h1 ð1Þ ¼ h2 ð1Þ=ð1 þ k2 Þ The resulting lattice is shown in Fig. 36.4. In


¼ h5 ð1Þ=½1 þ h5 ð2ފ ð36:15Þ general, for an Nth-order transfer function with
N even, we shall require N/2 non-trivial lattice
sections, (N/2)−1 simple delays, and a unity
The synthesis is now complete and the result- parameter lattice section.
ing structure is shown in Fig. 36.3. Note that only
two non-trivial lattice parameters are needed for Example 4: Illustrating case 3
the synthesis of a fifth-order transfer function; this Let
is what it should be, because there are only two
1 2 3
independent parameters in the transfer function H5 ðzÞ ¼ 1 þ h5 ð1Þz þ h5 ð2Þz h5 ð2Þz
36.12. This can be easily generalized to the 4 5
h5 ð1Þz z
ð36:20Þ

x(n) y(n) y(n)


k1 k2 k5 = 1 x(n)
k1 k2 k4 = 1
z 1 z 1 z 1

Fig. 36.3 Synthesis of Eq. 36.12 Fig. 36.4 Synthesis of Eq. 36.16
274 36 Solution to a Problem in FIR Lattice Synthesis

x(n) y(n)
k1 = h4(1) k4 = 1

z 1 z 1

Fig. 36.5 Synthesis of Eq. 36.22

Here k5 = −1 and we can rewrite Eq. 36.20 as


Nonlinear Phase FIR Function
H5 ðzÞ ¼ 1 þ h5 ð1Þz 1 þ h5 ð2Þz 2 with hN(N) = –1
 

þ ð 1Þz 5 1 þ h5 ð1Þz þ h5 ð2Þz2


 
Let
ð36:21Þ
N 1
X
n N
Comparing with Example 2, we see that the HN ðzÞ ¼ 1 þ hN ðnÞz z ð36:25Þ
lower order polynomial is the same in this case n¼1
also. Hence, the realization of Fig. 36.2 is valid
for this case also, except that the last lattice where N may be even or odd. We first consider
section will have k5 = −1. the case of even N and illustrate our new pro-
cedure with examples.
Example 5: Illustrating case 4 Example 6
Let Let
1 3 4
H4 ðzÞ ¼ 1 þ h4 ð1Þz h4 ð1Þz z ð36:22Þ H4 ðzÞ ¼ 1 þ h4 ð1Þz 1
þ h4 ð2Þz 2
þ h4 ð3Þz 3
þz 4

ð36:26Þ
Note that because of asymmetry, h4(2) is identi-
cally zero. Here, we have k4 = −1 and we can write This can be decomposed as follows:
1
þ ð 1Þz 4 ½1 þ h4 ð1ÞzŠ
 
H4 ðzÞ ¼ 1 þ h4 ð1Þz
H4 ðzÞ ¼ 1 þ h4 ð3Þz 1 þ h4 ð2Þz 2 þ h4 ð3Þz 3 4
 
þz
ð36:23Þ þ ½h4 ð1Þ h4 ð3ފz 1


Thus, ð36:27Þ

1 The first transfer function (within square


H3 ðzÞ ¼ 1 þ h4 ð1Þz ¼ H1 ðzÞ ð36:24Þ
brackets) is linear phase and is the same as
i.e. k3 = k2 = 0 and k1 = h4(1). The synthesis is, Eq. 36.16 with h4(1) replaced by h4(3). The
therefore, complete and the resulting structure is second transfer function in Eq. 36.27 (within
shown in Fig. 36.5. In general, for N even and curly brackets) can be realized by tapping the k1
asymmetric impulse response, there will be (N/2) block after the delay z−1, multiplying it by [h4(1)
−1 non-trivial lattice sections, N/2 simple delays −h4(3)], and adding it to the main output, as
and a last lattice section with the parameter −1. shown in Fig. 36.6.
Nonlinear Phase FIR Function with hN(N) = ±1 275

Fig. 36.6 Synthesis of x(n)


Eq. 36.26: k1 ¼
h4 ð3Þ=½1 þ ð1=2Þh4 ð2ފ k1 y(n)
k2 k4 = 1
and k2 ¼ ð1=2Þh4 ð2Þ
k1
z 1
z 1

[h4(1) h4 (3)]

Note that if each lattice can be realized by a 


H4 ðzÞ ¼ 1 h4 ð3Þz 1
þ h4 ð3Þz 3
z 4

single multiplier structure; then the realization of  1 2

Fig. 36.6 will be a delay, as well as multiplier þ ½h4 ð3Þ þ h4 ð1ފz þ h4 ð2Þz
canonic. Unfortunately, however, such a struc- ð36:29Þ
ture does not exist; this is still an unsolved
problem. For higher order transfer functions, one The first of these transfer functions is linear
would require more tappings at the end of delays phase with k4 = −1, k3 = k2 = 0 and k1 = −h4(3).
and the multiplier coefficients have to be appro- The second transfer function is realized by taking
priately chosen. The next example illustrates tappings after the first and second delays, as
both of these points, although the order of the shown in Fig. 36.7. The three outputs are then
transfer function is the same. combined. The multipliers a and b are found
from the following equation:
Example 7
z 1 a þ k1 þ z 1 1 1 2

Let z b ¼ ½h4 ð3Þ þ h4 ð1ފz þ h4 ð2Þz
ð36:30Þ
1 2 3 4
H4 ðzÞ ¼ 1 þ h4 ð1Þz þ h4 ð2Þz þ h4 ð3Þz z
ð36:28Þ This gives

Here k4 = −1 and the necessary decomposition is b ¼ h4 ð2Þ and a


¼ h4 ð3Þ þ h4 ð1Þ h4 ð2Þh4 ð3Þ ð36:31Þ
as follows:

Fig. 36.7 Synthesis of x(n)


Eq. 36.28: a and b are given
by Eq. 36.31 k1 S y(n)
k4 = 1
k1= h4 (3)
z 1

1 1
z z

b
a

S
276 36 Solution to a Problem in FIR Lattice Synthesis

x(n) The first transfer function is linear phase,


k1 k2
antisymmetrical and can be realized by the pro-
k1 k2
k5 = 1 cedure already illustrated. We shall have

z 1 z 1 z 1 z 1 k5 ¼ 1; k4 ¼ k3 ¼ 0; k2 ¼ h5 ð3Þ and
a b k1 ¼ h5 ð4Þ=½1 h5 ð3ފ
S S
ð36:36Þ
y(n)
The second transfer function in Eq. 36.35 is
Fig. 36.8 Synthesis of Eq. 36.33: k1 ¼ h5 ð4Þ=
realized by tappings after the first and second
½1 þ h5 ð3ފ; k2 ¼ h5 ð3Þ; a ¼ h5 ð1Þ h5 ð4Þ bk1 and
b ¼ h5 ð2Þ þ h5 ð3Þ delays. The final realization is the same as that
shown in Fig. 36.8 with k values given by
Eq. 36.36 and
Example 8
Let a ¼ h5 ð1Þ þ h5 ð4Þ bk1 and
ð36:37Þ
b ¼ h5 ð2Þ þ h5 ð3Þ
1 2
H5 ðzÞ ¼1 þ h5 ð1Þz þ h5 ð2Þz
3 4 5
ð36:32Þ
þ h5 ð3Þz þ h5 ð4Þz þz

Here k5 = 1 and we can rewrite Eq. 36.32 as Conclusion

H5 ðzÞ ¼ 1 þ h5 ð4Þz 1 þ h5 ð3Þz 2 þ h5 ð3Þz 3 þ h5 ð4Þz 4 5


 
þz We have demonstrated, through simple exam-
þ ½h5 ð1Þ h5 ð4ފz 1 þ ½h5 ð2Þ h5 ð3ފz 2

ples, how an FIR lattice transfer function
HN ðzÞ ¼ 1 þ Nn¼11 hN ðnÞz n  z N can be
P
ð36:33Þ
realized for both linear and nonlinear phase cases
The first transfer function has already been with canonic delays and multipliers. The proce-
realized in Example 2, except that here h5(4) dure is expected to be useful to students and
takes the place of h5(1) and h5(3) takes the place teachers, as well as designers concerned with
of h5(2). The second transfer function in digital signal processing.
Eq. 36.33 can be realized by tapping the signals
after the first and the second delays. The proce-
dure is the same as in Example 7 and is not
repeated here. The final result is shown in
Problems
Fig. 36.8.
P:1. What happens in Fig. 36.1b if one k1 is in
the reverse direction?
Example 9
P:2. Obtain a lattice structure for
Let
1 2
H5 ðzÞ ¼1 þ h5 ð1Þz þ h5 ð2Þz
3 4 5
ð36:34Þ
þ h5 ð3Þz þ h5 ð4Þz z H0 ðzÞ ¼ 1 þ 0:5 z 1
þ 0:3 z 2
þ 0:5 z 3
þz 4

The necessary decomposition is as follows:


P:3. Obtain a lattice if z−5 in Eq. 36.12 has a
H5 ðzÞ ¼ 1 h5 ð4Þz 1 h5 ð3Þz 2 þ h5 ð3Þz 3 þ h5 ð4Þz 4 5 negative sign.
 
z
þ ½h5 ð1Þ þ h5 ð4ފz 1 þ ½h5 ð2Þ þ h5 ð3ފz 2 P:4. Obtain a lattice for Eq. 36.16 with


ð36:35Þ h4(2) = 0.
Problems 277

P:5. Can you decompose Eq. 36.26 in any 2. A.V. Oppenheim, R.W. Schafer, Discrete Time Signal
fashion other than Eq. 36.27? Give as many Processing (Prentice Hall, New Jersey, 1989)
3. M. Bellanger, Digital Processing of Signals (Wiley,
as possible if you can. Hoboken, 1984)
4. J.G. Proakis, D.G. Manolakis, Digital Signal Process-
ing (McMillan, Basingstoke, 1992)
5. L.B. Jackson, Digital Filters and Signal Processing
(Kluwer, Alphen aan den Rijn, 1989)
References

1. S.C. Dutta Roy, Synthesis of FIR lattice structures.


IEE Proc-Vis Image Signal Process 147, 549–552
(2000)
FIR Lattice Structures
with Single-Multiplier Sections 37

An alternative derivation is given for the linear This structure uses two identical multipliers in
prediction FIR lattice structures with each section, and is attributed to Itakura and
single-multiplier sections. As compared to the Saito [1]. Most textbooks on Digital Signal
previous approaches, this method is believed to Processing (DSP) refer to this structure and its
be conceptually simpler and more straightfor- variations in details. They assume, rather than
ward. derive, the structure and then analyze it for
finding the transfer function as well as some
recurrence formulas. A simple derivation of the
Keywords structure has recently been given in [2], and an
FIR lattice  Single-multiplier realization alternative lattice structure in which the two
transfer functions HN(z) and HN′(z) are comple-
mentary to each other, i.e. HN′(z) = z– N
HN(−z−1), has been given in [3].
Introduction There exists a corresponding lattice structure
for all-pass IIR transfer functions, which also
The linear prediction FIR lattice filter, shown in
uses two multipliers per section. This structure
Fig. 37.1, realizes the transfer function
has been derived by Mitra by using his two-pair
XN or multiplier extraction approach [4]. Using the
YðzÞ n same approach, Mitra also derived a modified IIR
HN ðzÞ ¼ ¼ 1þ hn z ð37:1Þ
XðzÞ n¼1 lattice structure which uses a single multiplier per
section, and is therefore canonic in multipliers.
and its mirror image transfer function As it is well known, the basic all-pass IIR
structure can be used with additional multipliers
Y 0 ðzÞ
HN0 ðzÞ ¼ ¼z N
HN ðz 1 Þ: ð37:2Þ and summers to realize any arbitrary IIR transfer
XðzÞ function [5].
A question arises as to whether a
single-multiplier structure is possible for the FIR
lattice also. The answer was given by Makhoul,
as early as 1978 [6]. He derived two structures,
called LF2 and LF3, each section of which
Source: S. C. Dutta Roy, “FIR Lattice Structures with
Single-Multiplier Sections,” IETE Journal of Education, contains three multipliers, and then converted
vol. 47, pp. 119–122, July–September 2006. them to single-multiplier ones by a clever

© Springer Nature Singapore Pte Ltd. 2018 279


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_37
280 37 FIR Lattice Structures with Single-Multiplier Sections

Fig. 37.1 The basic f0(n) f1(n) f2(n) fN (n)


two-multiplier structure of [1] = y(n)
k1 k2 kN
x(n) = f(n)

z 1 k1 z 1 k2 z 1 kN

g0(n) g1(n) g2(n) gN(n) = y ¢(n)

manipulation. Nine years after the appearance of single-multiplier realizations, we require one or
[6], Dognata and Vaidyanathan [7] gave an more additional variables involving a linear
alternative derivation of the single-multiplier FIR combination of X1 and z−1 X2 without any mul-
lattice by the multiplier extraction approach, and tiplication. The possibilities for a third variable
another 2 years later, Krishna [8] indicated that are X1 + z−1 X2; X1 − z1 X2; −X1 + z−1 X2; and
the same structure could also be arrived at by the −X1 − z−1 X2. We shall now investigate some of
eigen-decomposition approach. these cases.
Surprisingly, no mention could be found of
Makhoul’s work in any of the large number of
textbooks scanned by the author. The purpose of Realization 1
this chapter is, first, to bring this fine piece of
work to the attention of teachers and students of In terms of the variable set (X1, z−1 X2, X1 + z−1
DSP, and, second, to present an alternative, class X2), Eqs. 37.4 and 37.5 can be rewritten as
tested and conceptually simpler procedure for
deriving Makhoul’s single-multiplier structures. Y1 ¼ ð1 km ÞX1 þ km ðX1 þ z 1 X2 Þ ð37:6Þ

and
Derivation
Y2 ¼ ð1 km Þz 1 X2 þ km ðX1 þ z 1 X2 Þ: ð37:7Þ
Consider the mth section of the two-multiplier
structure, shown in Fig. 37.2 with signals trans- Dividing both sides of Eqs. 37.6 and 37.7 by
formed to the z-domain. Also, for simplicity, let (1 − km), we get

Y1 km
Fm 1 ðzÞ ¼ X1 ðzÞ; Gm 1 ðzÞ ¼ X2 ðzÞ; ð37:3aÞ ¼ X1 þ ðX1 þ z 1 X2 Þ; ð37:8Þ
1 km 1 km
and

Fm ðzÞ ¼ Y1 ðzÞ; and Gm ðzÞ ¼ Y2 ðzÞ; ð37:3bÞ


Fm -1 (z )[= X1 ] Fm (z )[ = Y1]
By inspection of Fig. 37.2, we get
km
1
Y1 ¼ X1 þ km z X2 ; ð37:4Þ

and
1 km
z
1
Y2 ¼ km X1 þ z X2 : ð37:5Þ
Gm -1(z )[ = X 2 ] Gm (z )[ = Y2 ]
The input variables occurring in Eqs. 37.4 and
37.5 are X1 and z−1 X2. Clearly, in order to obtain Fig. 37.2 The mth section of Fig. 37.1: Is m = 1?
Derivation 281

and Since the procedure for other realizations is


similar, we shall, for brevity, only give the main
Y2 km results for three other cases.
¼ z 1 X2 þ ðX1 þ z 1 X2 Þ ð37:9Þ
1 km 1 km

The resulting structure involves only one Realization 2


multiplier, as shown in Fig. 37.3a. However,
each output is now scaled by the factor Taking the variable set as (X1, z−1 X2, X1 − z−1
1/(1 − km). Y1 and Y2 can, of course, be recov- X2), Eqs. 37.4 and 37.5 can be rewritten as
ered by multiplying the outputs by the factor
(1 − km). If each stage of Fig. 37.1 is thus con- Y1 ¼ ð1 þ km ÞX1 km ðX1 z 1 X2 Þ; ð37:10Þ
verted into a single-multiplier one, then all output
multipliers can be clubbed into a single multiplier and
QN
of value ð1 km Þ at the input of the overall
m¼1 Y2 ¼ ð1 þ km Þz 1 X2 þ km ðX1 z 1 X2 Þ: ð37:11Þ
lattice, thus reducing the total number of multi-
pliers from 2N to N + l. Note that instead of Single-multiplier realization is obtained by
lumping the multipliers at the input, one can also dividing both sides of Eqs. 37.10 and 37.11 by
distribute them appropriately in order to prevent (1 + km), and is shown in Fig. 37.3b which is the
overflow and/or minimize quantization errors. same as LF2/l(b) of [6]. The multiplier needed at
The total number of multipliers may still be Q
N
the input of the lattice is ð1 þ km Þ in this case.
much less than 2N, as required in the lattice of m¼1
Fig. 37.1. One should also keep in mind that
considerations of overflow and quantization
errors may dictate the use of additional Realization 3
lumped/distributed scaling in the structure of
Fig. 37.1 too. Also, observe that Fig. 37.3a is the For the same variable set as in realization 1,
same as the one-multiplier lattice form LF2/l(a) Eqs. 37.4 and 37.5 can also be rewritten as
of Makhoul [6]. follows:

(a) (b) Y1/(1+km)


Y1/(1 km )
X1 X1
1

1
km/(1 km ) km/(1 + km)
1
z 1 z
X2 X2

Y2/(1 km ) Y2/(1 + km)

(c) Y1/(km 1) (d) Y1/(km + 1)


X1 X1
1/(km 1) 1/(km + 1)

1
z 1
1
1
z
X2 X2
Y2/(km – 1) Y2/(km + 1)

Fig. 37.3 Four single-multiplier realizations of the lattice section of Fig. 37.2
282 37 FIR Lattice Structures with Single-Multiplier Sections

Y1 ¼ ðX1 þ z 1
X2 Þ þ ðkm lÞz 1 X2 ; ð37:12Þ be written or in the new editions of the books
which exist. It is also hoped that the conceptually
and simpler approach presented here for deriving the
single-multiplier FIR lattice would appeal to
Y2 ¼ ðX1 þ z 1 X2 Þ þ ðkm 1Þz 1 X1 : ð37:13Þ students and teachers of DSP.

The corresponding single-multiplier realiza-


tion is obtained by dividing both sides of Problems
Eqs. 37.12 and 37.13 by (km − 1) and is shown
in Fig. 37.3c, which is the same as LF3/l(a) of P:1. In Fig. 37.1, if all arrows are reversed, what
[6]. The multiplier needed at the input N of the kind of transfer function is obtained?
Q
N
P:2. What if only the lower line arrows are
lattice is ðkm 1Þ:
m¼1 reversed in Fig. 37.1?
P:3. Write the equations in Fig. 37.2 with only
the lattice arrows reversed.
Realization 4 P:4. In Fig. 37.3a and b, again reverse all the
arrows, and comment on the transfer func-
Taking the same variable set as in realization 2, tion so obtained.
we can also modify Eqs. 37.4 and 37.5 as P:5. Do the same for Fig. 37.3c and d.
follows:

Y1 ¼ ðX1 z 1 X2 Þ þ ð1 þ km Þz 1 X2 ; ð37:14Þ Acknowledgments The author thanks R. Vishwanath


for helpful discussions.
and

Y2 ¼ ðX1 z 1
X2 Þ þ ð1 þ km ÞX1 : ð37:15Þ References

Single-multiplier realization is obtained 1. F. Itakura, S. Saito, Digital Filtering Techniques for


by dividing both sides of Eqs. 37.14 and 37.15 Speech Analysis and Synthesis, in Proc 7th lnt Cong
Acoust, (Budapest, Hungary, 1971) pp. 261–264
by (1 + km), and is shown in Fig. 37.3d, which is 2. S.C. Dutta Roy, R. Vishwanath, Derivation of the FIR
the same as LF3/l(b) of [6]. The multiplier lattice. IETE J. Educ. 45, 211–212 (October–Decem-
needed at the input of the lattice is now ber 2004)
Q
N 3. S.C. Dutta Roy, R. Vishwanath, Another FIR lattice
ð1 þ km Þ: structure, Int. J. Circ. Theor. Appl. 33, 347–351,
m¼1 (July–August 2005)
As can be easily verified, the other input 4. S.K. Mitra, Digital Signal Processing: A Computer
variable sets like (X1, z−1 X2, −X1 ± z−1 X2), Based Approach (McGrawHill, New York, 2001)
5. A.H. Gray, J.D. Markel, Digital lattice and ladder filter
(−X1, z−1 X2, −X1 ± z−1 X2), etc., give simple synthesis. IEEE Trans. Audio Electroacoust. AU-21,
variations of the four structures shown in 491–500 (December 1973)
Fig. 37.3. 6. J. Makhoul, A class of all-zero lattice digital filters:
properties and applications. IEEE Trans. Acoust.
Speech Sig. Process. ASSP-26, 304–314, (August
1978)
Conclusion 7. Z. Doganata, P.P. Vaidyanathan, On one-multiplier
implementations of FIR lattice structures. IEEE Trans.
As indicated in the introduction, Makhoul’s work Circ. Syst. CAS-34, 1608–1609 (December 1987)
8. H. Krishna, An eigen-decomposition approach to
[6] has not received adequate recognition in one-multiplier realizations of FIR lattice structures.
textbooks on DSP. It is hoped that this chapter IEEE Trans. Circ. Syst. CAS-36, 145–146, (January
will facilitate inclusion of this work in books to 1989)
A Note on the FFT
38

This chapter gives a formula for the exact predicted by the N log2 N formula. As another
number of non-trival multipliers required in example, the FFT diagrams for a 32-point
the basic N-point FFT algorithms, where N is sequence, in both DIT and DIF forms show the
an integral power of 2. Now proceed further, number of non-trivial multipliers to be 49,
but not too far! instead of 160, as predicted by the N log2 N for-
mula. This reduction is effected by using the
butterfly simplification and the facts that WN0 ¼ 1
Keywords N=2
and WN ¼ 1; where WN = exp (−j2p/N). It
FFT  Computation  Number of multipliers is, therefore, of interest to find out the actual
number of non-trivial multipliers needed in a
general N-point FFT, where N = 2q, q being a
positive integer. This note gives a formula for
this purpose.
Introduction

In the usual presentation of the Fast Fourier


Transform (FFT) in textbooks, it is mentioned Derivation of the Formula
that the basic N-point FFT algorithms reduce the
number of multipliers from N2 to N log2 N ([1], Consider the DIT algorithm of an N-point FFT,
p 287), if N is a power of 2. If one looks at the incorporating butterfly simplification. The last
actual FFT diagrams, either decimation-in-time (qth) stage of the computation will have N/2
(DIT) or decimation-in-frequency (DIF), of an multipliers, of which WN0 is trivial and the others
8-point sequence, as shown in Fig. 38.1a and b, ðN=2Þ 1
are WN1 ; WN2 ; . . .; WN : Thus the number of
respectively, one finds that the actual number of multipliers at this stage is [(N/2) −1]. The pre-
non-trivial multipliers is 5, instead of 24, as ceding stage [(q−1)th] will have two groups of
multipliers, each having (N/4) members. In each
group, there will be a WN0 multiplier. Hence, the
number of multipliers at the [(q−1)th] stage is
2[(N/4)−1]. Similarly, at the (q−2)th stage, the
Source: S. C. Dutta Roy, “A Note on the FFT,” IETE
Journal of Education, vol. 46, pp. 61–63, April–June
2005.

© Springer Nature Singapore Pte Ltd. 2018 283


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2_38
284 38 A Note on the FFT

(a)
x(0) X(0)

x(4) X(1)
–1

x(2) X(2)
–1
W8 2

x(6) X(3)
–1 –1

x(1) X(4)
–1
W81

x(5) X(5)
–1 –1
W8 2

x(3) X(6)
–1 –1
W8 2 W83

x(7) X(7)
–1 –1 –1
(b)
x(0) X(0)

x(1) X(4)
–1

x(2) X(2)
–1
W8 2
x(3) X(6)
–1 –1

x(4) X(1)
–1
W81
x(5) X(5)
–1 –1
W8 2
x(6) X(3)
–1 –1
3 2
W8 W8
x(7) X(7)
–1 –1 –1

Fig. 38.1 a Decimation-in-time FFT flow diagram for a 8-point sequence, showing only the non-trivial multipliers;
b Decimation-in-frequency FFT flow diagram for a 8-point sequence, again showing only the non-trivial multipliers

number of multipliers is 4[(N/8) −1] and so on, second stage contributes to (N/4) multipliers.
till we reach the second stage where there are The first stage has (N/2) multipliers, each of
(N/4) groups of multipliers, with two multipliers value WN0 : Thus, the total number of non-trivial
in each group, one of them being WN0 : Hence the multipliers in an N-point FFT becomes
Derivation of the Formula 285


N
 
N
 
N

Recurrence Relation
M ðN Þ ¼20 1 þ 21 1 þ 22 1
2 4 8
N A recurrence formula for M(N) can be derived as
þ    þ ð2 1Þ
4
follows. For a 2N-point FFT, Eq. 28.2 gives
" " " "
qth stage jðq 1Þth stage j ðq 2Þth stagej2nd stage
      M ð2N Þ ¼N ½log2 ð2N Þ 2Š þ 1
N N N
¼ 1 þ 2 þ 4 ¼N ½log2 2 þ log2 N 2Š þ 1
2 2 2
  ð38:3Þ
þ  þ
N N ¼N ðI þ log2 N 2Þ þ 1
2 4
  ¼N ðlog2 N 1Þ þ 1:
N N
¼ ðq 1Þ 1þ2þ4þ  þ
2 4
N Also,
1 þ 2 þ 2 þ    þ 2q 2
2

¼ ðlog2 N 1Þ
2
N 1 2q 1 2M ðN Þ ¼ N ðlog2 N 2Þ þ 2:
¼ ðlog2 N 1Þ
2 1 2 
N N From Eqs. 38.3 and 38.4, we get
¼ ðlog2 N 1Þ þ 1 :
2 2
ð38:1Þ M ð2N Þ ¼ 2M ðN Þ þ N 1;

Finally, therefore, which is the required recurrence formula.

N
M ðN Þ ¼ ðlog2 N 2Þ þ 1: ð38:2Þ
2 Alternative Derivation for M(N)
The same number of multipliers arises in DIF
An alternative derivation of the formula for
also, by recognizing that the index of stages will
M(N) follows by noting that the actual number of
be reversed in Eq. 38.1. Table 38.1 shows a
multipliers after using the butterfly simplification
comparison of the actual number of non-trivial
is (N/2) log2 N, in which the number of WN0
multipliers M(N) and the number predicted by
the N log2 N formula. multipliers is, using the DIT,

1 at the qth stage;


2 at theðq 1Þth stage;
Table 38.1 Number of multipliers in FFT 4 at the ðq 2Þth stage;
N M(N) N log2 N . . .. . .. . .. . .. . .. . .. . .. . .;
2 0 2 ðN=4Þ at the 2nd stage; and
4 1 8 ðN=2Þ at the 1st stage:
8 5 24
16 17 64 Hence, the total number of WN0 multipliers is
32 49 160 N
64 129 384 1þ2þ4þ  þ
2
128 321 896 ¼ 1 þ 2 þ 22 þ    2q l

256 769 2048 1 2q


512 1793 3584
¼
1 2
¼ N 1:
286 38 A Note on the FFT

Thus, P:3. What if the number of points is 15? I mean


is DIF.
N P:4. What if the number of points is 6? DIF
M ðN Þ ¼ log2 N ðN 1Þ
2 ð38:4Þ again.
N P:5. Same as P.4 for DIT.
¼ ðlog2 N 2Þ þ 1;
2

which is the same as Eq. 38.2.


Reference
Problems
1. A.V. Oppenheim, R.W. Schafer, Digital Signal Pro-
cessing (Prentice Hall, New Jersey, 1975)
P:1. What looks simpler? DIT or DIF? Why?
P:2. Draw the DIF diagram for a 16-point FFT.
Appendix: Some Mathematical Topics
Simplified

In the Appendix, I give some simple, common Chebyshev was out-and-out a mathematician.
sense methods for deriving mathematical for- Little did he know that his polynomials would be
mulas frequently used in CSSP. Appendix A.1 found so useful by filter designers. They do
gives a semi-analytical method for finding the appear to be complicated to students, but a
roots of a polynomial. Euler’s relation forms the reading of Appendices A.6 and A.7 would show
basis of complex numbers. A fresh look at it that you can derive Chebyshev polynomial
forms the content of Appendix A.2. The square identities with ease and compute the coefficients
root of the sum of two squares is required in of Chebyshev polynomials without difficulty.
finding the magnitude of complex quantity. An As in Section I, all chapters in this section end up
approximation appears in Appendix A.3. with five carefully designed problems. Each prob-
It is well known that algebraic equations of lem requires a thorough understanding of the con-
order more than 2 are difficult to solve. For third tents of the corresponding chapter. Work them out
and fourth orders, analytical solutions exist, but carefully and the joy of finding the clue can perhaps
are difficult to implement. For still higher orders, be compared with the joy you derive when you get a
numerical methods have to be resorted to. For piece of your most favourite food. Learning is, by
cubic and quartic equations, simplified proce- all accounts, feeding yourself. You can never
dures are given in Appendix A.4. Appendix A.5 overeat, and if you think you have done so, it will
gives many ways of solving an ordinary linear cause no uncomfortable feeling. Learning is con-
second order differential equation. A simple suming food for your intellectual development. The
method has been presented in Appendix A.5 for more you learn, the more you would like to learn.
this purpose. Happy learning, dear students!

© Springer Nature Singapore Pte Ltd. 2018 287


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2
A.1: A Semi-analytical Method for Finding
the Roots of a Polynomial

A systematic method, which combines graphical, ing, in connection with the determination of
analytical, and numerical techniques, is pre- poles and zeros of transfer functions, and in
sented for finding the roots of a polynomial testing for stability of a given system. It is well
P0(s) of any degree. Real roots are first found by known that analytical solutions are possible only
a simple graphical method, and then the purely for polynomials of degree 4 or less, and for
imaginary roots are found by the Hurwitz test. higher degrees, numerical methods have to be
When all the real and purely imaginary roots are resorted to.
removed from P0(s), the remainder polynomial In this section, we present a semi-analytical
P2(s) will have only complex conjugate roots and method, consisting of a combination of graphi-
hence will be of even degree. When this degree is cal, analytical, and numerical techniques for
2, the roots are obvious. For P2(s) of degree 4, a finding the roots of a polynomial P0(s) of any
variation of a previously published analytical degree. When P0(s) contains only one or two
method, combined with a graphical display, is pairs of complex conjugate roots, besides those
presented which is easier to apply. When the on the real and imaginary axes, it is shown that a
degree of P2(s) is greater than 4, only numerical combination of graphical and analytical methods
methods have to be used. suffices. When P0(s) contains three or more pairs
of complex conjugate roots, numerical methods
have to be resorted to, after extracting all the real
Keywords and imaginary axis roots from P0(s).
Hurwitz test • Polynomial roots • Quartic poly-
nomial • Solution of algebraic equations
Roots on the Real Axis

Let
Introduction
P0 ðsÞ ¼ sN þ aN 1 sN1 þ   
ðA:1:1Þ
The problem of finding the roots of a given þ a2 s 2 þ a1 s þ a0
polynomial arises in all fields of science and
engineering, particularly in electrical engineer- where the coefficients are real and a0 6¼ 0. (If a0
= 0, then there is a root at s = 0, and the poly-
nomial degree is reduced by one.) Let s = r + jx,
where r and x are real and can be positive or
negative. If P0(s) contains real roots, all on the
Source: S. C. Dutta Roy, “A Semi-Analytical Method for negative r axis, then all ai’s will be positive,
Finding the Roots of a Polynomial,” IETE Journal of while the existence of one or more positive real
Education, vol. 55, pp. 90–93, July–December 2014. roots will be indicated by one or more ai’s being

© Springer Nature Singapore Pte Ltd. 2018 289


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2
290 A.1: A Semi-analytical Method for Finding the Roots of a Polynomial

negative or missing. Such real roots can be indicates that there are real roots at s = −2 and s =
simply obtained by plotting jP0 ðrÞj versus r −3.
for one or both sides of r = 0, as appropriate. In general, if P0(s) of Eq. A.1.1 contains real
We plot the magnitude rather than the value roots at s = ri ; i ¼ 1; 2; . . .; M; M  N, where ri
because visually the zero crossings from negative can be positive as well as negative, then
to positive values, or vice versa, of P0(r) are not P0(s) can be written as
as appealing (or perhaps not as accurate) as the Y
position of the nulls, similar to those occurring in P0 ðsÞ ¼ ðs  ri ÞP1 ðsÞ ðA:1:3Þ
the magnitude response of null networks. i¼1M
As an example, consider the 10th-degree
polynomial where P1(s) does not have any real roots. In the
case of Eq. A.1.2, the continued product term
P0 ðsÞ ¼ s10 þ 8s9 þ 31s8 þ 87s7 þ 188s6 þ 317s5 simplifies to (s2 + 5s + 6) and P1(s), obtained by
dividing Eq. A.1.2 by this quadratic becomes
þ 428s4 þ 452s3 þ 372s2 þ 204s þ 72:
ðA:1:2Þ P1 ðsÞ ¼ s8 þ 3s7 þ 10s6 þ 19s5 þ 33s4
þ 38s3 þ 40s2 þ 24s þ 12:
Since there are no negative coefficients, we
need to plot jP0 ðrÞj only for negative values of r. ðA:1:4Þ
This plot is shown in Fig. A.1.1 which clearly

16

14

12

10

6
|P 0 (s)|, dB

4
5 4.5 4 3.5 3 2.5 2 1.5 1 0.5 0

Fig. A.1.1 Plot of jP0 ðrÞj, in dB, versus r for the example of Eq. A.1.2
A.1: A Semi-analytical Method for Finding the Roots of a Polynomial 291

Roots on the Imaginary Axis Also, the plot of jP1 ðjxÞj2 versus x2, as given
in Fig. A.1.2, shows nulls at x2 = 2 and 3, thus
As is well known [1], roots on the imaginary axis confirming Eq. A.1.7.
are revealed by performing the Hurwitz test. It P2(s) of Eq. A.1.5 can be obtained by long
consists of performing a continued fraction division of P1(s) by D(s). For the example case,
expansion (CFE), starting with the highest pow- this process gives
ers, of the odd rational function (even part of the
polynomial)/(odd part of the polynomial) or its P2 ðsÞ ¼ s4 þ 3s3 þ 5s2 þ 9s þ 2 ðA:1:8Þ
reciprocal, depending on which one has a pole at
infinity. The existence of jx-axis roots makes the This will have two pairs of complex conjugate
CFE end prematurely, and the last divisor con- roots. How to find them will be discussed in the
tains all these roots. (Note, in passing, that if the next section.
coefficient of any quotient in the CFE is negative,
then the polynomial has roots in the right half
plane; this is important in stability testing.) Complex Conjugate Roots
In the present case of P1(s), if the CFE men-
tioned above does end prematurely, and if the In general, P2(s) will have (N − M − 2Q)/2 pairs
last divisor is D(s), then we can write of complex conjugate roots. If this number is 1,
then P2(s) is a quadratic and its roots are easily
P1 ðsÞ ¼ DðsÞP2 ðsÞ ðA:1:5Þ found. If P2(s) is of degree 4, as in Eq. A.1.8, the
method given in [2] or [3] may be followed.
where D(s) is of the form However, a confusion is likely to arise about
Y signs in following this procedure. A variation of
DðsÞ ¼ ðs2 þ x2k Þ: ðA:1:6Þ this procedure will now be given, which avoids
k¼1Q
this confusion and also does not require the
analytical solution of the ‘resolvent’ cubic
Note that we have taken D(s) to be an even
equation. Let
polynomial because a possible root at s = 0 can
be taken out either at the beginning or while
P2 ðsÞ ¼ s4 þ a3 s3 þ a2 s2 þ a1 s þ a0 : ðA:1:9Þ
finding the real roots. P2(s) is of degree N − M −
2Q and contains only complex conjugate roots. If
We express the right-hand side of Eq. A.1.9
the degree of D(s) is high, it may not be possible
as the difference of two squares, rather than the
to find its roots analytically. In such a case, put s2
product of two quadratics, as in [2] and [3], as
= S. The resulting polynomial in S will have
follows:
roots only on the negative real axis of the com-
plex variable S, and hence the graphical proce- s4 þ a3 s3 þ a2 s2 þ a1 s þ a0
dure used in Sect. “Roots on the Real Axis” can ðA:1:10Þ
be used. ¼ ðs2 þ as þ bÞ2  ðcs þ dÞ2
Clearly, Hurwitz test could also be avoided by
plotting |P1(jx)|2 versus x2 which will show where a, b, c, and d are constants to be deter-
nulls at x2 = x2k . For the example of Eq. A.1.4, mined. Equating the coefficients of powers of
CFE of the even part/odd part ends prematurely s on both sides of Eq. A.1.10, we get the fol-
at the fourth step, and the last divisor is lowing set of four equations:

2a ¼ a3 ðA:1:11Þ
DðsÞ ¼ s4 þ 5s2 þ 6 ¼ ðs2 þ 3Þ ðs2 þ 2Þ:
ðA:1:7Þ
292 A.1: A Semi-analytical Method for Finding the Roots of a Polynomial

16

14

12

10

6
|P 1 (jw)|2

4
0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5
w2

Fig. A.1.2 Plot of jP1 ðjxÞj2 versus x2 for Eq. A.1.4

while Eqs. A.1.13 and A.1.15 give


a2 þ 2b  c2 ¼ a2 ðA:1:12Þ
d ¼ ½ða3 y=2Þ  a1 =ð2cÞ: ðA:1:17Þ
2ðab  cdÞ ¼ a1 ðA:1:13Þ
Finally, Eqs. A.1.14 and A.1.15 give
b2  d 2 ¼ a0 : ðA:1:14Þ
d2 ¼ ðy2 =4Þa0 : ðA:1:18Þ
Equation A.1.11 gives
Now combine Eqs. A.1.16–A.1.18; after
a ¼ a3 =2: ðA:1:15Þ simplification, we get the following cubic equa-
tion in y:
To solve for b, c, and d, we have found it
convenient to express c and d in terms of 2b,
FðyÞ ¼ y3 a2 y2 þ ða1 a3  4a0 Þy
which, for reasons to be made clear a little later,
will be denoted by y. Then from Eqs. A.1.12 and þ ð4a0 a2  a3  a0 a23  a1 Þ ¼ 0:
A.1.15, we get ðA:1:19Þ

c2 ¼ a2 þ y þ ða23 =4Þ ðA:1:16Þ


A.1: A Semi-analytical Method for Finding the Roots of a Polynomial 293

60

50

40

30
|F(y)|

20

10

0
0 1 2 3 4 5 6
y

Fig. A.1.3 Plot of |F(y)| versus y

It is interesting to note that although the For the example of Eq. A.1.8, Eq. A.1.19
approaches are slightly different, Eq. A.1.19 is becomes
the same as the ‘resolvent’ cubic Eq. A.1.12 of
[3]. This is not unexpected though, because B + FðyÞ ¼ y3  5y2 þ 4y þ 6 ¼ 0: ðA:1:24Þ
b of [3] is the same as 2b in the approach adopted
here. This is why 2b was denoted by y earlier in The plot of jFðyÞj versus y, as shown in
this section. Fig. A.1.3, reveals only one real root at y1 = 3.
F(y), being a cubic polynomial, must have at Substituting this value in Eqs. A.1.20–A.1.23,
least one real root. Instead of following the along with the values of ai’s, we get a = b = ±1.5
analytical procedure of [3], we can plot jFðyÞj and c = d = ±0.5 (these are coincidences and not
versus y to get the real root(s) from the null true in general). Substituting these values in
location(s). If y1 is a real root, then our final Eq. A.1.10 and factorizing, we get
solution will be as follows:
P2 ðsÞ ¼ ðs2 þ 2s þ 2Þðs2 þ s þ 1Þ: ðA:1:25Þ
a ¼ a3 =2 ðA:1:20Þ
If P2(s) is of degree 6 or more, there exists no
b ¼ y1 =2 ðA:1:21Þ graphical or analytical method for finding the roots,
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
 ffi and one has to take help of numerical methods.
c ¼  a2 þ y1 þ ða23 =4Þ ðA:1:22Þ

d ¼ ½ða3 y1 =2Þ a1 =ð2cÞ ðA:1:23Þ


294 A.1: A Semi-analytical Method for Finding the Roots of a Polynomial

Conclusions P:3. Solve s4 + 2s2 + 3s + 4 = 0 by any method.


P:4. Write P(s) = s5 + as4 + bs3 + cs2 + d =
In this section, a semi-analytical method has (s + e)(s + fs + g)(s2 + hs + j) and by
2

been presented for finding the real and purely solving for P(s) = 0.
imaginary roots of an arbitrary polynomial. After P:5. Find the roots of (s + 1)4 = 0.
removing the factors corresponding to these two
types of roots, the remaining polynomial will Any resemblance to Butterworth polynomial
only have complex conjugate roots. If there is roots? Show the roots graphically.
only one pair of such roots, then the roots are
obtained by solving a quadratic equation. For
two pairs of complex conjugate roots, a variation Acknowledgements This work was supported by the
Indian National Science Academy through the Honorary
of an earlier published method is given, which is Scientist programme. The author thanks Professor Y. V.
easier to apply. If there are more than two pairs Joshi for his help in the preparation of this manuscript.
of complex conjugate roots, then there is no
alternative but to use numerical methods. References

1. E.A. Guillemin, Synthesis of Passive Net-


Problems
works (Wiley, Hoboken, NJ, 1964)
2. M. Abramowitz, I.A. Stegun (eds.), Hand-
P:1. Read Ref. 3, and try solving a cubic equa- book of Mathematical Functions (Dover,
tion s3 + as2 + bs + c = 0 by decomposing New York, 1965)
the LHS as (s2 + ds + e) (s + f) = 0 and 3. S.C. Dutta Roy, On the solution of quadratic
eliminating all constants except one. What and cubic equations. IETE J. Educ. 47(2),
do you get in this remaining constant? 91–95 (2006)
P:2. Solve s3 + 2s2 + 4s + 1 = 0 by any method.
A.2: A Fresh Look at the Euler’s Relation

A direct proof of the Euler’s relation ej = cos h + Integrating Eq. A.2.3 and using the initial
j sin h, is presented. It is direct in the sense that condition y(0) = 1, it follows that y(h) = ejh.
unlike the existing proofs, it does not presume Both of these proofs presume that there is a
any connection between ejh and the trigonometric connection between ejh and the trigonometric cos h
functions cosh and sinh. and sin h. Presented here is a proof which does not
do so, and in this sense it can be considered as a
direct proof.
Keywords
Euler’s formula • Proof
The Proof
Euler’s relation
Let the real and imaginary parts of the complex
ejh ¼ cos h þ j sin h ðA:2:1Þ quantity ejh be denoted by f(h) and g(h),
respectively; then
is usually proved in mathematics and circuit
theory texts [1, 2] by appealing to the infinite ejh ¼ f ðhÞ þ jgðhÞ: ðA:2:4Þ
h
series expansions of ej , cosh and sinh. Another
way [3, 4] of showing the truth of the formula is Differentiating Eq. A.2.4 and denoting d()/dh
based on the observation that if by ()′, one obtains

yðhÞ ¼ cos h þ j sin h ðA:2:2Þ f 0 ðhÞ þ jg0 ðhÞ ¼ ejh ¼ jf ðhÞ  gðhÞ ðA:2:5Þ

then Equating the real and imaginary parts on the


two sides of Eq. A.2.5 gives
dyðhÞ=yðhÞ ¼ jdh: ðA:2:3Þ
f 0 ðhÞ  g0 ðhÞ and g0 ðhÞ ¼ f ðhÞ ðA:2:6Þ

Differentiating one equation in Eq. A.2.6 and


combining with the other yields the following
differential equations for f(h) and g(h):
Source: S. C. Dutta Roy, “A Fresh Look at the Euler’s
f 00 ðhÞ þ f ðhÞ ¼ 0 and g00 ðhÞ þ gðhÞ ¼ 0 ðA:2:7Þ
Relation,” Students’ Journal of the IETE, vol. 22, pp. 1–
2, January 1981.

© Springer Nature Singapore Pte Ltd. 2018 295


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2
296 A.2: A Fresh Look at the Euler’s Relation

Each of these equations describes a simple Substituting for hf in either of the equations in
harmonic motion; the solutions for f(h) and g(h) Eq. A.2.11 gives
are, therefore K¼l ðA:2:14Þ

f ðhÞ ¼ Kf cosðh þ hf Þ and gðhÞ Thus, finally,


¼ Kg cosðh þ hg Þ ðA:2:8Þ
f ðhÞ ¼ cos h and gðhÞ ¼ sin h ðA:2:15Þ
where Kf, hf, Kg and hg are constants. Putting in
Eq. A.2.4 gives the initial conditions f(0) = 1 and and from equations A.2.4 and A.2.15, Eq. A.2.1
g(0) = 0. Substituting these in Eq. A.2.8, there follows.
results Q.E.D.

Kf ¼ 1= cos hf and hg ¼ ð2r þ lÞp=2; Problems


ðA:2:9Þ
r ¼ 0; 1; 2; . . .
P:1. Solve for h: ðcos hÞ4 ¼ 1.
From Eqs. A.2.8 and A.2.9, one obtains P:2. Repeat for ðsin hÞ4 ¼ 1.
h
P:3. Solve for h: ejN ¼ 1, N > 1. Any relation
f ðhÞ ¼ cos ðh þ hf Þ= cos hf and gðhÞ ¼ K sin h
with the solutions of P.1 and P.2? Any rela-
ðA:2:10Þ tion with Chebyshev? Maybe, maybe not.
h h
P:4. Solve for h : ej þ ejN ¼ 1, again N > 1.
where K = ±Kg. Substituting Eq. A.2.10 in  N  N
Eq. A.2.6 results in the following two equations: P:5. Solve for h: ejh 1 þ ejh 2 ¼ 1; N1 6¼
N2 ; N1;2 [ 1:
sinðh þ hf Þ= cos hf ¼ K sin h and
References
cosðh þ hf Þ= cos hf ¼ K cos h: ðA:2:11Þ

Dividing the first equation in Eq. A.2.11 by 1. H. Sohon, Engineering Mathematics (Van
the second gives Nostrand, New York, 1953), p. 65
2. W.H. Hayt Jr., J.E. Kemmerly, Engineering
tanðh þ hf Þ ¼ tan h; ðA:2:12Þ Circuit Analysis (McGraw-Hill, New York,
1978), pp. 747–748
which is satisfied only if 3. W.H. Hayt Jr., J.E. Kemmerly, Engineering
Circuit Analysis (McGraw-Hill, New York,
hf ¼ 2pp; p ¼ 0; 1; 2; . . . ðA:2:13Þ 1962), p. 283
4. A.G. Beged-Dov, Another look at Euler’s
relation. IEEE Trans. Educ. E-9, 44 (1966)
A.3: Approximating the Square Root
of the Sum of Two Squares

pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
It is shown that ðx2 þ y2 Þ; with x > y, can be loss of accuracy, we propose, in this section, an
approximated by x + y2/(2x) for 0  y/x  1/2, approximation for S for ~S; consisting of two
and by 0.816 (x + 0.722 y) for 1/2  y/x  1 to expressions valid for the ranges 0  y/x  0.5
within a relative error of 0.64%. This should be and 0.5  y/x  l, and show that the maximum
useful in computations, but more so in analytical percentage relative error e, defined by
developments involving such expressions.  
 
e ¼ ðS  e
SÞ=S  100 ðA:3:3Þ
Keywords
Square root • Sum of two squares is thereby reduced to a value of 0.64 only e
S
When dealing with complex numbers, as in cir- being the approximated value.
cuit analysis or FFT computation, it is often
required to calculate the value of
Derivation
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
S ¼ ðx2 þ y2 Þ ðA:3:1Þ
In view of Eq. A.3.2, we can write
where, without loss of generality, it may be pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
S¼x ð1 þ t2 Þ ðA:3:4Þ
assumed that
where
0\y\x ðA:3:2Þ
0\t ¼ y=x\1 ðA:3:5Þ
As is well known, evaluating the square root
is somewhat tedious and time consuming. To
The problem is therefore to approximate
speed up the processing time, without significant pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
ð1 þ t2 Þ; a plot of which is shown in Fig.
A.3.1 (not to scale). Note that the latter part of
the graph (for t > t1 say) can be approximated by
a straight line, a possible candidate for which is
indicated as a + bt. Also, when t is small, one can
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
approximate ð1 þ t2 Þ by 1 + t2/2, a plot of
Source: S. C. Dutta Roy, “Approximating the Square Root
of the Sum of Two Squares,” Students’ Journal of the which is also shown in Fig. A.3.1 (in a slightly
IETE, vol. 32, pp. 11–13, April–June 1991. exaggerated form). In order to obtain a uniformly

© Springer Nature Singapore Pte Ltd. 2018 297


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2
298 A.3: Approximating the Square Root of the Sum of Two Squares

2
e3%
1+ t2

1 +ct e2 %

a +bt
1 + t2/2

e¢1%
e 1%

1
0 t1 t01 t2 t02 1
t
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
Fig. A.3.1 Showing the variation (not to scale) of ð1 þ t2 Þ; ð1 þ t2 =2Þ and a þ bt

‘good’ approximation over the entire range of t, Now, according to Eq. A.3.7b, we have
we assume that qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 1 þ t2 =2 0  t  t1
ð1 þ t2 Þ ffi ðA:3:6a; bÞ ð1 þ t12 Þ  ða þ b t1 Þ ð1 þ t12 Þ
a þ bt t1  t  1 hpffiffiffi i pffiffiffi
¼ 2  ða þ bÞ = 2 ðA:3:8Þ
and that the following relative errors are equal:

(i) e1′ at t = t1 computed by using Eq. A.3.6a,b which can be simplified to the following:
(ii) e1 at t = t1 computed by using Eq. A.3.6a,b
(iii) e3 at t = 1, and b ¼ ab ðA:3:9Þ
(iv) e2 at t = t2, which is the maximum value of e
where
in the range
t01  t  t02 (see Fig. A.3.1).

pffiffiffi qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffi


b¼ 2  ð1 þ t12 Þ = ð1 þ t12 Þ  2t1  ðA:3:10Þ
We therefore have the three equations
To determine e2, note that in the range t01 
e0t ¼ e2 ðA:3:7aÞ t  t02.
e1 ¼ e3 ðA:3:7bÞ h pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffii pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
e ¼ 100 að1 þ btÞ  ð1 þ t2 Þ = ð1 þ t2 Þ ðA:3:11Þ
e2 ¼ e3 ðA:3:7cÞ
The maximum of e occurs when de/dt = 0;
for determining the unknown quantities t1, a and b. carrying out the differentiation and simplifying
Once these are known, the maximum relative error gives the rather simple result
em ð¼ e1 ¼ e01 ¼ e2 ¼ e3 Þ will also be known.
A.3: Approximating the Square Root of the Sum of Two Squares 299

t2 ¼ b ðA:3:12Þ Numerical experimentation with this seem-


ingly hopeless equation for t1 reveals that the
Putting this in Eq. A.3.11, we get solution is, surprisingly, t1 ≅ 0.5. Further
refinement shows that t1 ≅ 0.5035, and a simple
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi

2 calculation shows that under this condition, em =


e2 ¼ 100 a ð1 þ b Þ  1 ðA:3:13Þ 0.64. Finally, we calculate b and a from Eqs.
A.3.10 and A.3.15 as
Now using Eq. A.3.7c, we get
b ¼ 0:722 and a ¼ 0:816 ðA:3:19Þ
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffi
a =ð1 þ b2 Þ  1 ¼ 1  að1 þ bÞ= 2 ðA:3:14Þ

This can be simplified to the following;


Concluding Comments
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
We have shown that
pffiffiffi
a ¼ 2= ð1 þ b2 Þ þ ð1 þ bÞ= 2 ðA:3:15Þ

pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi x þ y2 =ð2xÞ 0  y=x  0:5
ðx2 þ y2 Þ ffi
0:816ðx þ 0:722yÞ 0:5  y=x  1
Thus if b is known, which in turn requires that
t1 is known, we can compute a from Eq. A.3.15 ðA:3:20a; bÞ
and b from Eq. A.3.9. To find t1, we use Eq.
A.3.7a, which dictates that to within a relative error of 0.64%. This repre-
sents a uniformly ‘good’ approximation, and
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi should be useful in speeding up the computation
2
ð1 þ t1 =2Þ  ð1 þ t12 Þ ð1 þ t12 Þ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi

of ðx2 þ y2 Þ, but more so in analytical devel-
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
opments involving such expressions. As an
¼ a þ b t2  ð1 þ t22 Þ ð1 þ t22 Þ
example, let it be required to find out if the
ðA:3:16Þ equation
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
Combining this with Eqs. A.3.9 and A.3.12, ð1 þ x2 Þ þ x2 þ x ¼ 2 ðA:3:21Þ
and simplifying, we get
has a real root in the range 0.5  x  1 and if
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
so, what is its approximate value. Using Eq.
ð1 þ t12 =2Þ= ð1 þ t12 Þ ¼ a ð1 þ b2 Þ ðA:3:17Þ
A.3.20a,b, Eq. A.3.21 becomes

Now substituting the values of a and b from 0:816 ð1 þ 0:722xÞ þ x2 þ x2 ¼ 0 ðA:3:22Þ
Eqs. A.3.15 and A.3.10 respectively, and sim-
plifying, we get

1 þ t12 =2 2
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ðA:3:18Þ
ð1 þ t1 Þ 1 þ ð1  t1 Þ= ½4ð1 þ t1 Þ  2ð1 þ t1 Þ f2ð1 þ t12 Þg
2 2
300 A.3: Approximating the Square Root of the Sum of Two Squares

Solving this quadratic gives one value of x as P:3. Will the answer to P.2. be 0.816
0.5528. Putting this value in Eq. A.3.21, the (x + 0.722y) (f + 0.722g)?
left-hand side becomes 2.001, which differs from P:4. What will be the approximation of
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
the right-hand side by 0.05% only. Note that the ðx2 þ y2 Þ if y x?
exact solution of Eq. A.3.21 will require the P:5. Repeat this for P.1.
solution of a quadratic equation.

Problems
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
P:1. Can you approximate ða þ bxyÞðx2 þ y2 Þ?
Take b a.
P:2. How about approximating
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
ðx2 þ y2 Þðf 2 þ g2 Þ?
A.4: On the Solution of Quartic and Cubic
Equations

A method of solving a quartic equation, which to find the pass-band edges. Abramowitz and
does not require extracting the roots of complex Stegun’s Handbook [1], which has been consid-
numbers is explained in detail. In the process, the ered as ‘The Bible’ by scientists and engineers
solution of a cubic equation has also been pre- for ages, was consulted, but to the author’s sur-
sented, with the same degree of simplicity. prise, the correct solution could not be obtained.
On deeper examination, it was found that there is
a typographical mistake in signs in the last
Keywords
equation on page 17 and that the opening state-
Quartic equation • Cubic equation • Solution ment on page 18 is ambiguous. Also, the method
requires handling square roots of complex num-
bers. A number of other such references [2–7]
and internet sources [8, 9] were also consulted
Introduction
and it was found that they were either sketchy or
had typographical mistakes or required finding
When faced with a problem in mathematics,
the square and cube roots of complex numbers.
electrical engineers–students, faculty, researchers
We present here a solution to the problem which
and practitioners alike–usually consult mathe-
does not require messy calculations with com-
matics handbooks and encyclopaedias for a
plex numbers. In the process, we also deal with
quick solution. While trying to design a dualband
the solution of a general cubic equation of the
band-pass filter by using frequency transforma-
form
tion of a normalized low-pass filter, the author
was confronted with the problem of solving a
y 3 þ b2 y 2 þ b1 y þ b0 ¼ 0 ðA:4:2Þ
quartic equation of the form
with the same kind of simplicity, as compared to
z4 þ a3 z3 þ a2 z2 þ a1 z þ a0 ¼ 0; ðA:4:1Þ
the solutions given in [1–9] and also in [10] and
[11]. The treatment is based on simplification
and consolidation of a monograph [12] by
S Neumark, a British aeronautical engineer,
whose work does not appear to have been
Source: S. C. Dutta Roy, “On the Solution of Quartic and appreciated or even referred to in the literature.
Cubic Equations,” IETE Journal of Education, vol. 47, We illustrate the procedures by examples whose
pp. 91–95, April–June 2006. solutions are known beforehand.

© Springer Nature Singapore Pte Ltd. 2018 301


S. C. Dutta Roy, Circuits, Systems and Signal Processing,
https://doi.org/10.1007/978-981-10-6919-2
302 A.4: On the Solution of Quartic and Cubic Equations

Solution to the Quartic This is the so-called ‘resolvent’ cubic and


checks with the equation given in [1], page 17. If
Equation A.4.1 can be written as y = y1 satisfies Eq. A.4.12, then from Eqs. A.4.3,
A.4.9 and A.4.10, the roots of the quartic Eq.
ðz2 þ Az þ BÞðz2 þ a z þ bÞ ¼ 0; ðA:4:3Þ A.4.1 are obtained by solving the following two
quadratic equations:
where
! rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
A þ a ¼ a3 ; ðA:4:4Þ a3
2 a3 y1  2a1 y1 y21
z þ  pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi z þ   a0
2 2 y21  4a0 2 4
B þ b þ Aa ¼ a2 ; ðA:4:5Þ
¼ 0:
Ab þ aB ¼ a1 ðA:4:6Þ ðA:4:13Þ

and From Eq. A.4.11, the second term in the


Bb ¼ a0 : ðA:4:7Þ coefficient of z in Eq. A.4.13 can be written as
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi

a2
 43 þ y1  a2 : Choosing the positive sign
Let
here leads to the mistake in [1] as pointed out in
B þ b ¼ y: ðA:4:8Þ the Introduction. Choosing the negative sign
gives the correct results, as demonstrated in the
Then from Eqs. A.4.7 and A.4.8, we get a Example worked out later. Hence we get the
quadratic equation in B or b, the solution of simplified form of Eq. A.4.13 as:
which gives
rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi! rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 2 a3 a23 y1 y21
y  y2  4a0 y
y2  4a0 z þ
þ y 1  a2 z þ   a0
B¼ and b ¼ 2 4 2 4
2 2
ðA:4:9Þ ¼ 0:
ðA:4:14Þ
From Eqs. A.4.4, A.4.6 and A.4.9 (hence-
forth, we take the positive sign for B and the We next consider the solution of the cubic
negative sign for b, without any loss of gener- equation A.4.12, and shall do so with general
ality), we get coefficients, as in Eq. A.4.2.

a3 a3 y  2a1
A; a ¼  pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi : ðA:4:10Þ
2 2 y2  4a0 Solution of the Cubic Equation

Finally, substituting for A, a, B and b in Consider Eq. A.4.2. In the literature, it is the
Eq. A.4.5, and simplifying gives usual practice to derive a ‘depressed’ cubic i.e.
another cubic equation in which the y2 term is
a23 ða3 y  2a1 Þ2 missing. To this end, we include the first two
yþ  pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ¼ a2 : ðA:4:11Þ  3
4 4 y2  4a0 terms of Eq. A.4.2 in y þ b32 . Then Eq. A.4.2
can be written as
Simplification of Eq. A.4.11 results in the
following cubic equation in y: 2
b2 3 b2 b3
yþ   b1 y þ b0  2 ¼ 0:
y3  a2 y2 þ ða1 a3  4a0 Þy 3 3 27
ðA:4:15Þ
ða21 þ a0 a23 4a0 a2 Þ ¼ 0: ðA:4:12Þ
A.4: On the Solution of Quartic and Cubic Equations 303

We next supplement y in the second term by Since a cubic equation is required to have one
+b2/3; then Eq. A.4.15 becomes real root, Eq. A.4.22 will be applicable only
when b22 − 3b1 > 0. We shall consider the other
2
b2 3 b b case, i.e. b22 − 3b1 < 0 shortly.
yþ  2  b1 y þ 2 þ b0
3 Notice that the left-hand side of Eq. A.4.21 is
3 3
b32 b2 b22 the third order Chebyshev polynomial in x which
 þ  b1
27 3 3 oscillates between −1 and +1 for 1  x  þ 1;
¼ 0: and is monotonically increasing or decreasing for
ðA:4:16Þ |x| > 1, as shown in Fig. A.4.1. This figure will
only give real root(s), and clearly, it suffices to
Let consider R 0, because changing the sign of R,
simply leads to the roots changing their signs. It
b2 is also obvious from the figure that for R < 1,
yþ ¼ kx: ðA:4:17Þ
3 there will be three real roots, of which one is
positive and the other two are negative. When
Usually, k is taken as unity in the literature. R > 1, there will be only one real (positive) root.
With a general k, Eq. A.4.16 can be simplified to When R = 1, there will be a double root at −1/2
the following: and a single root at +1.
For R < 1, the real positive root x = x1 occurs
27k3 x3 9kðb22 3b1 Þx þ ð27b0 þ 2b32 9b1 b2 Þ between the point A, where x = √3/2 and x = 1.
¼ 0: Since x < 1, we can write
ðA:4:18Þ
x1 ¼ cos h; ðA:4:23Þ
Dividing both sides by (27k3/4), we get
where 0\h\p=6. Equation A.4.21 then
4ðb22  3b1 Þ becomes
4x3  xþ
3k2
cos 3h ¼ R: ðA:4:24Þ
4ð27b0 þ 2b32  9b1 b2 Þ
¼ 0: ðA:4:19Þ
27k3 Hence

Now comes the brilliant idea of forcing the y1 ¼ cos ½ðcos1 RÞ=3: ðA:4:25Þ
coefficient of x as −3 by choosing
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi Equation A.4.21 can now be rewritten in the
b22  3b1
2 form
k¼ ðA:4:20Þ
3
4x3 3xR ¼ ðxx1 Þ ð4x2 þ c1 x þ c0 Þ ¼ 0:
Using the negative sign in Eq. A.4.20, Eq. ðA:4:26Þ
A.4.19 becomes
Comparing coefficients, we get c1 = 4x1 and
4x3  3x ¼ R; ðA:4:21Þ c0 = R/x1 = 4x21 − 3. Thus the other two real roots
of Eq. A.4.21 are the solutions of the quadratic
where
27b0 þ 2b32  9b1 b2 equation
R¼ : ðA:4:22Þ
2ðb22  3b1 Þ3=2 4x2 þ 4x1 x þ 4x21 3 ¼ 0 ðA:4:27Þ
304 A.4: On the Solution of Quartic and Cubic Equations

Fig. A.4.1 Plot of R = 4x³ − 3x

so that

x₂,₃ = −(x₁/2) ± (√3/2)√(1 − x₁²) = −[(cos θ)/2 ∓ (√3/2) sin θ] = −cos(π/3 ± θ).    (A.4.28)

Now consider the case R > 1. Figure A.4.1 shows that there is only one real (positive) root at x = x₁ > 1. We can therefore write

x₁ = cosh α.    (A.4.29)

Substituting this in Eq. A.4.21 and simplifying, we get

x₁ = cosh[(cosh⁻¹R)/3].    (A.4.30)

Equations A.4.26 and A.4.27 are applicable here also, so that the two complex roots are given by

x₂,₃ = −[(cosh α)/2] ± j(√3/2) sinh α.    (A.4.31)

Finally, there remains the case b₂² − 3b₁ < 0. In this case, choose k such that the coefficient of x in Eq. A.4.19 becomes +3, i.e. let

k = 2√(3b₁ − b₂²)/3.    (A.4.32)

Then Eq. A.4.19 becomes

4x³ + 3x = R = (27b₀ + 2b₂³ − 9b₁b₂)/(2|b₂² − 3b₁|^(3/2)).    (A.4.33)

Notice that, for uniformity with the previous case, we have modified the denominator of R. Here also, as in the previous case, we need to solve only for positive R; for negative R, all roots will change sign. The plot of the left-hand side of Eq. A.4.33 is shown in Fig. A.4.2, from which it is clear that there is only one real root x₁, which is positive for positive R.

Fig. A.4.2 Plot of R = 4x³ + 3x

Since x₁ can have any value, we let

x₁ = sinh β.    (A.4.34)

Then Eq. A.4.33 gives

x₁ = sinh[(sinh⁻¹R)/3].    (A.4.35)

As in the earlier case, the cubic equation Eq. A.4.33 can be factorized as

(x − x₁)(4x² + 4x₁x + 4x₁² + 3) = 0    (A.4.36)

so that the other two roots, which are complex conjugates, are given by

x₂,₃ = −[(sinh β)/2] ± j(√3/2) cosh β.    (A.4.37)

Table A.4.1 gives a summary of the procedure for solving a general cubic equation.

Example

Let the equation to be solved be

z⁴ − 5z³ + 5z² + 5z − 6 = 0.    (A.4.38)

As can be easily verified, Eq. A.4.38 has the roots −1, +1, +2 and +3. Let us see what our procedure gives. From Eq. A.4.12, the resolvent cubic becomes

y³ − 5y² − y + 5 = 0.    (A.4.39)

As can be easily verified, the roots of Eq. A.4.39 are −1, +1 and +5, but let us follow the procedure as given here. From Eqs. A.4.22 and A.4.39, R is calculated as −10/(7√7). Since |R| < 1, we get, by applying Table A.4.1,

x₁,₂,₃ = 0.756, 0.189, −0.945.    (A.4.40)

Table A.4.1 Procedure for solving a cubic equation

Equation to be solved: y³ + b₂y² + b₁y + b₀ = 0

Compute R = (27b₀ + 2b₂³ − 9b₁b₂)/(2|b₂² − 3b₁|^(3/2))

Case b₂² − 3b₁ > 0, 0 < R < 1:  θ = (cos⁻¹R)/3;  x₁ = cos θ;  x₂,₃ = −cos(π/3 ± θ)
Case b₂² − 3b₁ > 0, R > 1:      α = (cosh⁻¹R)/3;  x₁ = cosh α;  x₂,₃ = (−cosh α ± j√3 sinh α)/2
Case b₂² − 3b₁ < 0, any R:      β = (sinh⁻¹R)/3;  x₁ = sinh β;  x₂,₃ = (−sinh β ± j√3 cosh β)/2

yᵢ = [−b₂ ∓ 2|b₂² − 3b₁|^(1/2) xᵢ]/3, i = 1, 2, 3 (upper sign for b₂² − 3b₁ > 0, lower sign for b₂² − 3b₁ < 0)

Note: The solutions are valid for positive R. For negative R, compute the solutions for |R|, and then negate the values of x before substitution in the equation for yᵢ.
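The recipe of Table A.4.1, together with the quadratic split of Eq. A.4.14, is easy to mechanize. The short Python sketch below is an illustrative addition (not part of the original article); the helper name solve_cubic is ours, and the sign pairing of Eq. A.4.14 is the one used in the worked example.

import math

def solve_cubic(b2, b1, b0):
    # Roots of y^3 + b2*y^2 + b1*y + b0 = 0, following Table A.4.1.
    d = b2 * b2 - 3.0 * b1
    R = (27.0 * b0 + 2.0 * b2 ** 3 - 9.0 * b1 * b2) / (2.0 * abs(d) ** 1.5)
    neg = R < 0
    R = abs(R)                                   # solve for |R|, negate x afterwards (table note)
    r3 = math.sqrt(3.0)
    if d > 0 and R <= 1.0:                       # three real roots (cosine case)
        t = math.acos(R) / 3.0
        xs = [math.cos(t), -math.cos(math.pi/3 + t), -math.cos(math.pi/3 - t)]
    elif d > 0:                                  # one real root (hyperbolic cosine case)
        a = math.acosh(R) / 3.0
        xs = [math.cosh(a),
              complex(-math.cosh(a)/2,  r3*math.sinh(a)/2),
              complex(-math.cosh(a)/2, -r3*math.sinh(a)/2)]
    else:                                        # b2^2 - 3*b1 < 0 (hyperbolic sine case)
        b = math.asinh(R) / 3.0
        xs = [math.sinh(b),
              complex(-math.sinh(b)/2,  r3*math.cosh(b)/2),
              complex(-math.sinh(b)/2, -r3*math.cosh(b)/2)]
    if neg:
        xs = [-x for x in xs]
    k = (-2.0 if d > 0 else 2.0) * math.sqrt(abs(d)) / 3.0   # Eqs. A.4.20 / A.4.32
    return [-b2/3.0 + k*x for x in xs]

# Check on the resolvent cubic of the example, Eq. A.4.39: y^3 - 5y^2 - y + 5 = 0
print(solve_cubic(-5.0, -1.0, 5.0))              # expected roots: 5, 1, -1 (in some order)

# Splitting the quartic of Eq. A.4.38, z^4 - 5z^3 + 5z^2 + 5z - 6 = 0, via Eq. A.4.14
a3, a2, a1, a0 = -5.0, 5.0, 5.0, -6.0
y1 = 1.0                                         # a real root of the resolvent cubic
p = math.sqrt(a3*a3/4.0 + y1 - a2)
q = math.sqrt(y1*y1/4.0 - a0)
# the two quadratic factors z^2 + A z + B = 0 and z^2 + a z + b = 0:
print((a3/2.0 + p, y1/2.0 - q), (a3/2.0 - p, y1/2.0 + q))   # expect (-1, -2) and (-4, 3)

Running the sketch reproduces the roots and the two quadratic factors obtained in the example that follows.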

Correspondingly,

y₁,₂,₃ = (5 − 4√7 x₁,₂,₃)/3 = −1, 1, 5.    (A.4.41)

These values check with the solutions obtained by inspection of Eq. A.4.39. Selecting y₁ = 1 and using Eq. A.4.14, the quadratics then become

z² − 4z + 3 = 0 and z² − z − 2 = 0.    (A.4.42)

These give z = −1, 1, 2 and 3, as expected. The same results are obtained had we selected y₁ = −1 or 5. It can be verified that a reversal of signs in the constant term of Eq. A.4.14 gives wrong results. This example also demonstrates that the opening sentence on p. 18 of [1] is ambiguous, because it does not say what is to be done if all the three real roots give real coefficients in the quadratic equations. As the present example shows, any of them can be used. A question that arises at this point is the following: would it give the correct answer if we choose a complex, instead of a real, root for y₁? The answer is yes.

For example, the quartic equation

z⁴ − 2z² + √3 z − 0.5 = 0    (A.4.43)

has the following resolvent cubic:

y³ + 2y² + 2y + 1 = 0.    (A.4.44)

Equation A.4.44 has a real root at y = −1 and a pair of complex roots at y = (−1 ± j√3)/2. Using either of the complex roots, it can be shown that we get the correct roots as obtained by using y₁ = −1. The real root is preferred because it reduces the computational effort considerably.

Problems

P:1. Can you solve a sixth order equation by decomposing as in Eq. A.4.3?
P:2. How about an eighth order?
P:3. Could you solve a cubic equation by starting with trigonometry right from the start?
P:4. Solve x⁴ + x² + 1 = 0 by any method.

P:5. Will the trigonometric approach work for P.4? A yes or no answer will not do. You must have the necessary mathematical support to justify your answer.

Acknowledgement The author thanks his former student and current colleague, Professor Jayadeva, for many helpful discussions on this topic during their evening walks in the corridors of IIT Delhi.

References

1. M. Abramowitz, I.A. Stegun, Handbook of Mathematical Functions (Dover, 1965)
2. G.A. Korn, T.M. Korn, Mathematical Handbook for Scientists and Engineers (McGraw-Hill, 1968)
3. R.S. Burington, Handbook of Mathematical Tables and Formulas (McGraw-Hill, 1973)
4. C.E. Pearson, Handbook of Applicable Mathematics (Van Nostrand, 1974)
5. W. Gellert et al., The VNR Concise Encyclopaedia of Mathematics (Van Nostrand, 1975)
6. E.W. Weisstein, CRC Concise Encyclopaedia of Mathematics (Chapman and Hall, 1999)
7. I.N. Bronshtein et al., Handbook of Mathematics (Springer, Berlin, 2000)
8. http://www.sosmath.com/algebra/factor/fac12/fac12.html: The quartic formula
9. http://mathforum.org/dr.math/faq/cubic.equations.html: Cubic and quartic equations
10. http://www.sosmath.com/algebra/factor/fac1/fac1l.html: The cubic formula
11. http://mathforum.org/dr.math/faq/cubic.equations.html: Cubic equations—another solution
12. S. Neumark, Solution of Cubic and Quartic Equations (Pergamon, 1965)
A.5: Many Ways of Solving an Ordinary
Linear Second Order Differential Equation
with Constant Coefficients

There are many different ways of solving an ordinary linear second order differential equation with constant coefficients. Some of them are available in textbooks while some others are scattered in journal publications. A comprehensive survey of these methods is presented in this section, along with the essential steps in each method and the relevant references.

'As many faiths, as many ways'—Shri Ramakrishna Paramhansa

Keywords
ODE • Solution

Introduction

There are many problems in electrical engineering and physics where one is required to solve the following ordinary linear second order differential equation:

y″ + 2ay′ + ω₀²y = 0,    (A.5.1)

where, in the usual situation, the prime denotes differentiation with respect to time t, and a and ω₀ are constants, subject to the initial conditions

y(0) = y₀ and y′(0) = p₀.    (A.5.2)

For example, when a capacitor C is charged to a voltage V and discharged through an inductance L in series with a resistance R, the current y in the circuit obeys Eq. A.5.1 with a = R/(2L) and ω₀² = 1/(LC) [1]. Many techniques exist in the literature for solving Eq. A.5.1, of which the following are commonly available in one textbook or the other: (1) Laplace transform method; (2) assuming an exponential solution; and (3) operator method. Several other methods have appeared in the literature, mostly in journals, some of which are quite simple, innovative, and/or of pedagogical interest. We present here a survey of all these techniques, along with the essential steps in each method and the relevant reference(s).

Source: S. C. Dutta Roy, "Many Ways of Solving an Ordinary Linear Second Order Differential Equation with Constant Coefficients," IETE Journal of Education, vol. 48, pp. 73–76, April–June 2007.
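As a concrete illustration of the series RLC discharge mentioned in the Introduction, the short Python sketch below (added purely for illustration; the component values are arbitrary and the initial conditions y(0) = 0, y′(0) = V/L follow from KVL at t = 0) maps R, L, C to the constants a and ω₀ of Eq. A.5.1 and integrates the equation numerically.

import numpy as np
from scipy.integrate import solve_ivp

R, L, C, V = 100.0, 10e-3, 1e-6, 5.0          # arbitrary example values
a, w0 = R / (2 * L), 1.0 / np.sqrt(L * C)     # a = R/(2L), omega_0^2 = 1/(LC)
print("a =", a, " omega_0 =", w0,
      "(overdamped)" if a > w0 else "(underdamped)" if a < w0 else "(critically damped)")

def rhs(t, x):                                 # Eq. A.5.1 as a first order system
    y, yp = x
    return [yp, -2 * a * yp - w0**2 * y]

sol = solve_ivp(rhs, (0.0, 2e-3), [0.0, V / L], max_step=1e-6)
print("current after 2 ms:", sol.y[0, -1])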

Laplace Transform Method

Taking the Laplace transform of Eq. A.5.1 and denoting the Laplace transform of y(t) by Y(s), we get

s²Y(s) − sy₀ − p₀ + 2a[sY(s) − y₀] + ω₀²Y(s) = 0.    (A.5.3)

On simplification and factorization, this gives

Y(s) = [(s + 2a)y₀ + p₀]/[(s − s₁)(s − s₂)],    (A.5.4)

where

s₁,₂ = −a ± β and β = √(a² − ω₀²).    (A.5.5)

Expanding Eq. A.5.4 in partial fractions and taking the inverse Laplace transform, we get

y(t) = A₁e^(s₁t) + A₂e^(s₂t),    (A.5.6)

where

A₁,₂ = ±[2ay₀ + p₀ + s₁,₂y₀]/(2β).    (A.5.7)

Combining Eqs. A.5.5–A.5.7 and simplifying, one obtains

y(t) = (e^(−at)/β)[(y₀a + p₀) sinh βt + y₀β cosh βt].    (A.5.8)

As shown in [1], this expression is adequate for considering all the three cases, viz. (i) overdamping: a > ω₀; (ii) critical damping: a = ω₀; and (iii) underdamping: a < ω₀. We shall not, therefore, pursue these cases separately at this stage.

It is interesting to observe that the expression Eq. A.5.8 is also adequate for considering the undamped case, i.e. a = 0. Under this condition,

β = jω₀, sinh βt = j sin ω₀t and cosh βt = cos ω₀t.    (A.5.9)

Putting these values in Eq. A.5.8 and simplifying, we get

y(t) = y₀ cos ω₀t + (p₀/ω₀) sin ω₀t,    (A.5.10)

which can be put in the form

y(t) = √(y₀² + (p₀/ω₀)²) cos{ω₀t − tan⁻¹[p₀/(y₀ω₀)]}.    (A.5.11)

Assuming an Exponential Solution: Why Not Do It With Trigonometric Functions? Because of Approximation Errors

In this method, we assume a solution of the form Ae^(st) for a ≠ ω₀ and (A + Bt)e^(st) for a = ω₀. In the first case, putting the assumed solution in Eq. A.5.1 gives the so-called characteristic equation

s² + 2as + ω₀² = 0,    (A.5.12)

which has the roots given by Eq. A.5.5. Thus both e^(s₁t) and e^(s₂t) are solutions of Eq. A.5.1 and the general solution is the same as that given by Eq. A.5.6. It is easily shown that, for satisfying the initial conditions given in Eq. A.5.2, A₁,₂ are the same as given in Eq. A.5.7, so that the required solution is given by Eq. A.5.8. As mentioned earlier, this expression can handle all the three cases of relative values of a and ω₀, but it is instructive to pursue the assumed solution (A + Bt)e^(st) for the case a = ω₀ a bit further. As pointed out in [1], a heuristic justification for this assumed solution comes from the argument that in this case, one can try a general solution of the form f(t)e^(st) instead of Ae^(st), where f(t) is to be determined. Substituting this trial solution in Eq. A.5.1 and simplifying, we get

f″(t) + 2(s + a)f′(t) + (s² + 2as + ω₀²)f(t) = 0.    (A.5.13)

Since, from Eq. A.5.12, s = −a = −ω₀ in this case, Eq. A.5.13 reduces to

f″(t) = 0,    (A.5.14)

i.e.

f(t) = A + Bt and y(t) = (A + Bt)e^(st).    (A.5.15)

Evaluating A and B from the initial conditions, we finally get, for this critically damped case,

y(t) = [y₀ + (p₀ + y₀a)t]e^(−at).    (A.5.16)

Operator Method

In this method, we define the operators D = d/dt and D² = d²/dt², so that Eq. A.5.1 becomes

(D² + 2aD + ω₀²)y = 0.    (A.5.17)

We then treat the quadratic operator (D² + 2aD + ω₀²) as an algebraic expression and factorize it to obtain the following changed form of Eq. A.5.17:

(D − s₁)(D − s₂)y = 0,    (A.5.18)

where s₁,₂ are given by Eq. A.5.5. Now let

(D − s₂)y = z    (A.5.19)

so that Eq. A.5.18 becomes the following first order homogeneous equation in z:

(D − s₁)z = 0.    (A.5.20)

The solution of Eq. A.5.20 is

z(t) = K₁e^(s₁t),    (A.5.21)

where K₁ is a constant. Now combine Eqs. A.5.19 and A.5.21 and solve the resulting non-homogeneous first order differential equation by the integrating factor method. The result is of the same form as Eq. A.5.6. The rest of the procedure is the same as in the previous section.

The critical damping case poses no problem with this method. Under critical damping, Eq. A.5.18 becomes

(D + a)(D + a)y = 0,    (A.5.22)

which can be solved by following the same steps as in the case a ≠ ω₀, and results in the same expression as Eq. A.5.16.

Solution by Change of Variable

As pointed out in [1], for the beginner student, who has not been exposed to Laplace transforms, the conventional approach is to use either an assumed solution or the operator method. In either case, the student has conceptual difficulty in accepting why a solution should be assumed, and that too of a particular type, or why, with D = d/dt, D²y is d²y/dt² and not (dy/dt)², and how (D² + 2aD + ω₀²) can be treated as a polynomial and factorized. To obviate these difficulties, we proposed in [1] a change of variable from y to z with

z = y′ − sy,    (A.5.23)

where s is an unknown constant. Obtain y″ from Eq. A.5.23 and substitute in Eq. A.5.1; the result is

z′ + (s + 2a)z + (s² + 2as + ω₀²)y = 0.    (A.5.24)

Now choose s such that the y term vanishes in Eq. A.5.24; this gives the same equation as Eq. A.5.12, with the possible values of s as given in Eq. A.5.5. Taking either value of s and solving the first order homogeneous equation in z, we get

z(t) = K₁e^(−(s + 2a)t).    (A.5.25)

Putting this value in Eq. A.5.23 and solving for y gives

y(t) = K₂e^(st) + K₃e^(−(s + 2a)t),    (A.5.26)

where K₂,₃ are constants. Note that in Eq. A.5.25, taking either s = s₁ or s = s₂ makes no difference, because

−(s₁,₂ + 2a) = −(−a ± β + 2a) = −a ∓ β = s₂,₁.    (A.5.27)

Thus the solution is of the same form as Eq. A.5.6.

It is to be recognized that the clue to the method is provided by the operator method, but it has the advantage that the student has no difficulty in comprehending this solution. Also, note that when s₁ = s₂ = −a (critical damping case), Eq. A.5.25 becomes z(t) = K₁e^(−at). Putting this value in Eq. A.5.23, solving for y and evaluating the constants lead to the same result as Eq. A.5.16.

We next discuss some less known methods (LKM) for solving Eq. A.5.1, which do not appear to be included in textbooks, but deserve to be.

LKM 1: Modified Operator Method

This method is due to Garrison [2, 3] and starts with rewriting Eq. A.5.1 as

(d²/dt² + 2a d/dt)y = −ω₀²y.    (A.5.28)

Add a²y to both sides to get

(d²/dt² + 2a d/dt + a²)y = (a² − ω₀²)y.    (A.5.29)

As in the operator method of Sect. 4, the operator on the left-hand side of Eq. A.5.29 can be factored as (d/dt + a)(d/dt + a). Denoting either factor by Dₘ, Eq. A.5.29 combined with Eq. A.5.5 gives

Dₘ²y = β²y.    (A.5.30)

Since two operations by Dₘ are equivalent to multiplication by β², we see that

Dₘy = ±βy.    (A.5.31)

If we take the positive sign in Eq. A.5.31 and solve the resulting first order equation, we get y₁(t) = A₁e^(s₁t), while taking the negative sign gives y₂(t) = A₂e^(s₂t). Thus the general solution is the same as Eq. A.5.6. For the critical damping case, let z = Dₘy [4]; then the equation to be solved is Dₘz = 0; the further procedure is similar to that in the operator method of Sect. 4.

LKM 2: Another Change of Variable Method

This method is due to Greenberg [5] and is based on a change of variable such that the first differential coefficient term is eliminated; simultaneously, it also removes the difficulty in analyzing the critical damping case. We let

y = ze^(−at).    (A.5.32)

Substituting this in Eq. A.5.1 and simplifying, we get

z″ = β²z,    (A.5.33)

β being the same as in Eq. A.5.5. Now choose the trial solution z = e^(kt); putting this in Eq. A.5.33 gives k² = β², or k = ±β, so that the general solution for z becomes

z(t) = A₁e^(βt) + A₂e^(−βt).    (A.5.34)

Combining Eq. A.5.34 with Eq. A.5.32, we get the same solution as Eq. A.5.6. For the critical damping case, β = 0, so that Eq. A.5.33 gives z″ = 0 or z = A + Bt, and combined with Eq. A.5.32, we get the same solution as Eq. A.5.15.
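The substitution of Eq. A.5.32 is easy to check symbolically. The following small sympy sketch (an illustrative addition, not from the original text) substitutes y = ze^(−at) into Eq. A.5.1 and confirms that what remains is z″ − (a² − ω₀²)z, i.e. Eq. A.5.33.

import sympy as sp

t, a, w0 = sp.symbols('t a omega_0', positive=True)
z = sp.Function('z')
y = z(t) * sp.exp(-a * t)                                          # the change of variable, Eq. A.5.32
residual = sp.diff(y, t, 2) + 2 * a * sp.diff(y, t) + w0**2 * y    # left-hand side of Eq. A.5.1
print(sp.simplify(residual * sp.exp(a * t)))
# the result is z''(t) - (a**2 - omega_0**2)*z(t), up to term ordering, which is Eq. A.5.33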

Instead of a trial solution, one could also make a change of variable from z to x = z′ − sz, as in the method discussed in Sect. 4. Then z″ = x′ + sz′ = x′ + sx + s²z. Substituting this in Eq. A.5.33 and choosing s such that the z-term is absent in the result leads to s = ±β and x′ + sx = 0, which has the solution x = K₁e^(−st). Finally, solving the equation z′ − sz = K₁e^(−st), we get z(t) = K₂e^(st) + K₃e^(−st), which is of the same form as Eq. A.5.34, irrespective of whether s = +β or s = −β.

State Variable Method

This method involves more effort than any other method discussed so far, but is of pedagogical interest when the state variables are first introduced to undergraduate students [6]. We let

x₁ = y and x₂ = y′.    (A.5.35)

Then we can write

x₁′ = x₂ and x₂′ = −ω₀²x₁ − 2ax₂.    (A.5.36)

The two equations in Eq. A.5.36 can be written in the familiar matrix form

x′ = Ax,    (A.5.37)

where

x = [x₁  x₂]ᵀ and A = [0  1; −ω₀²  −2a].    (A.5.38)

The solution of Eq. A.5.37, as is well known [7], is

x = e^(At)x₀,    (A.5.39)

where

x₀ = [y₀  p₀]ᵀ    (A.5.40)

and e^(At) is the so-called fundamental matrix. The latter can be calculated by using any of the well-known techniques [8]. The result, for this case, is

e^(At) = (e^(−at)/β) [a sinh βt + β cosh βt   sinh βt; −ω₀² sinh βt   β cosh βt − a sinh βt].    (A.5.41)

Combining Eq. A.5.41 with Eqs. A.5.39 and A.5.35 gives the desired y(t), which is the same as that given by Eq. A.5.8.
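All of the methods above lead to the closed form of Eq. A.5.8 (or Eq. A.5.16 at critical damping). As a quick sanity check, the short Python sketch below (added for illustration; it is not part of the original article, and the parameter values are arbitrary) compares Eq. A.5.8 with a direct numerical integration of Eq. A.5.1.

import numpy as np
from scipy.integrate import solve_ivp

a, w0 = 1.0, 3.0                        # an underdamped example (a < omega_0); any values work
y0, p0 = 2.0, -1.0                      # initial conditions of Eq. A.5.2
beta = np.sqrt(complex(a**2 - w0**2))   # beta of Eq. A.5.5 (purely imaginary here)

def y_closed(t):
    # Eq. A.5.8; the imaginary part is zero to round-off, so the real part is taken
    return ((np.exp(-a*t)/beta) * ((y0*a + p0)*np.sinh(beta*t) + y0*beta*np.cosh(beta*t))).real

def rhs(t, x):                          # Eq. A.5.1 written as a first order system
    y, yp = x
    return [yp, -2*a*yp - w0**2 * y]

tt = np.linspace(0.0, 5.0, 201)
sol = solve_ivp(rhs, (0.0, 5.0), [y0, p0], t_eval=tt, rtol=1e-10, atol=1e-12)
print("max deviation:", np.max(np.abs(sol.y[0] - y_closed(tt))))
# the deviation should be at the level of the integration tolerance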

Conclusion

A comprehensive survey has been presented here of the methods for solving Eq. A.5.1, some of which are well known and some are less known. Conceptually, the method based on change of variables given in Sect. 5 appears to be the simplest and easily comprehensible by the beginner. The less known methods given in Sects. 6 and 7 are quite instructive and should find a place in textbooks. The method based on state variables has a pedagogical value for introducing state variables to the beginner, rather than for solving Eq. A.5.1.

It should be mentioned here that, except for the methods of Sects. 6 and 7, all other methods are applicable for solving higher order ordinary linear differential equations with constant coefficients [9].

Problems

P:1. Suppose, for Eq. A.5.1, the initial conditions Eq. A.5.2 are given at 0−. How would you modify the solution?
P:2. Instead of 0 on the RHS of Eq. A.5.1, let there be a constant. How do you find a solution?
P:3. Repeat with RHS = f(y).
P:4. Repeat with RHS = f(x).
P:5. Suppose the middle term on the LHS of Eq. A.5.1 is 2ayy′. Can you find a solution?

References

1. S.C. Dutta Roy, Transients in RLC networks revisited. IETE J. Educ. 44, 207–211 (2003)
2. J.D. Garrison, On the solution of the equation for damped oscillation. Am. J. Phys. 42, 694–695 (1974)
3. J.D. Garrison, Erratum: On the solution of the equation for damped oscillation. Am. J. Phys. 43, 463 (1975)
4. S. Balasubramanian, R. Fatchally, Comment on the solution of the equation for damped oscillation. Am. J. Phys. 44, 705 (1976)
5. H. Greenberg, Further remarks concerning the solution of the equation ẍ + 2aẋ + ω²x = 0. Am. J. Phys. 44, 1135–1136 (1976) (Note that in Eqs. (5) and (6) of this contribution, (ω² − a²) should be replaced by (a² − ω²))
6. D.S. Zrnic, Additional remarks on the equation ẍ + 2aẋ + ω²x = 0. Am. J. Phys. 41, 712 (1973) (Note that in Eq. (3) of this contribution, the sign of the (1, 2) element of the matrix should be positive)
7. S.C. Dutta Roy, An introduction to the state variable characterization of linear systems—Part I. IETE J. Educ. 38, 11–18 (1997)
8. S.C. Dutta Roy, An introduction to the state variable characterization of linear systems—Part II. IETE J. Educ. 38, 99–107 (1997)
9. S.C. Dutta Roy, Solution of an ordinary linear differential equation with constant coefficients, unpublished manuscript. That gives me an idea. I should try to publish this manuscript as soon as possible.
A.6: Proofs of Two Chebyshev Polynomial
Identities Useful in Digital Filter Design

Alternate proofs of two Chebyshev polynomial identities, which are useful in the design of low-pass recursive digital filters, are presented. As compared to those provided by Yip [1], our proofs appear to be simpler and are direct, rather than inductive.

Keywords
Chebyshev polynomial • Identities • Application in DSP

In 1980, Yip provided proofs of the following two Chebyshev polynomial identities:

T_{2N}(x) + 1 = 2[T_N(x)]²    (A.6.1)

and

T_{2N+1}(x) + 1 = (1 + x)[2 Σ_{i=0}^{N} (−1)ⁱ T_{N−i}(x) + (−1)^(N+1)]²,    (A.6.2)

where T_i is the i-th degree Chebyshev polynomial of the first kind. As shown by Shenoi and Agrawal [2], these identities are useful in the design of recursive low-pass digital filters. In proving Eq. A.6.1, Yip first proved the identity

T_n(x)T_m(x) = (1/2)[T_{m+n}(x) + T_{|m−n|}(x)]    (A.6.3)

and then substituted m = n = N. In proving Eq. A.6.2, Yip used the method of induction. We present here simpler proofs of Eqs. A.6.1 and A.6.2, and in the latter case, we give a direct, rather than inductive, proof based solely on the properties of trigonometric functions.

Proof of the First Identity

Letting x = cos θ, we have

T_i(x) = cos iθ.    (A.6.4)

Using Eq. A.6.4 and the trigonometric formula

cos 2φ = 2 cos²φ − 1,    (A.6.5)

Equation A.6.1 follows easily by putting φ = Nθ.

Source: S. C. Dutta Roy, "Proofs of Two Chebyshev Polynomial Identities Useful in Digital Filter Design," Journal of the IETE, vol. 28, p. 605, November 1982. (Corrections in vol. 29, p. 132, March 1983).
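Both identities are easy to spot-check numerically before proving them. The short Python sketch below is an illustrative addition (not part of the original paper): it evaluates T_k(x) = cos(k cos⁻¹x) on a grid, exactly as in Eq. A.6.4, and confirms Eqs. A.6.1 and A.6.2 to round-off for several values of N.

import numpy as np

def T(k, x):
    # k-th degree Chebyshev polynomial of the first kind on |x| <= 1, per Eq. A.6.4
    return np.cos(k * np.arccos(x))

x = np.linspace(-1.0, 1.0, 1001)
for N in range(1, 6):
    err1 = np.max(np.abs((T(2*N, x) + 1) - 2 * T(N, x)**2))                 # Eq. A.6.1
    s = sum((-1)**i * T(N - i, x) for i in range(N + 1))
    err2 = np.max(np.abs((T(2*N + 1, x) + 1) - (1 + x)*(2*s + (-1)**(N + 1))**2))   # Eq. A.6.2
    print(N, err1, err2)
# both error columns stay at round-off level for every N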


Proof of the Second Identity

Using Eq. A.6.4, the right-hand side of Eq. A.6.2 becomes

RHS = (1 + cos θ)[2 Σ_{i=0}^{N} (−1)ⁱ cos(N − i)θ + (−1)^(N+1)]².    (A.6.6)

Putting

2 cos(N − i)θ = e^(j(N−i)θ) + e^(−j(N−i)θ)    (A.6.7)

in Eq. A.6.6, the first term within its square brackets, to be denoted by F for brevity, becomes

F = e^(jNθ) Σ_{i=0}^{N} (−e^(−jθ))ⁱ + e^(−jNθ) Σ_{i=0}^{N} (−e^(jθ))ⁱ
  = e^(jNθ)[1 − (−e^(−jθ))^(N+1)]/(1 + e^(−jθ)) + e^(−jNθ)[1 − (−e^(jθ))^(N+1)]/(1 + e^(jθ)).    (A.6.8)

By routine simplification of Eq. A.6.8, we get

F = [cos Nθ + cos(N + 1)θ − (−1)^(N+1)(1 + cos θ)]/(1 + cos θ).    (A.6.9)

Combining Eq. A.6.6 with Eq. A.6.9, and simplifying, gives

RHS = [cos Nθ + cos(N + 1)θ]²/(1 + cos θ).    (A.6.10)

Now using the trigonometric identity

cos C + cos D = 2 cos[(C + D)/2] cos[|C − D|/2]    (A.6.11)

and Eq. A.6.5 in Eq. A.6.10, the latter simplifies to

RHS = 2 cos²[(2N + 1)θ/2].    (A.6.12)

Using Eq. A.6.5 once again gives

RHS = cos(2N + 1)θ + 1 = T_{2N+1}(x) + 1 = LHS,    (A.6.13)

which completes the proof.

Problems

P:1. Can these identities be proved with other kinds of polynomials, for example, a second order one? Try it and let me know.
P:2. Could the second identity be proved by bringing in Euler again? That is, by replacing cos θ by Re e^(jθ)?
P:3. Prove Eq. A.6.5 with Euler's identity.
P:4. Write T_{2N}(x) as a polynomial in T_N(x).
P:5. Can you prove Eq. A.6.3 without invoking induction? That is, directly?

References

1. P.C. Yip, On a conjecture for the design of low-pass recursive filters. IEEE Trans. ASSP-28, 6, 768 (1980)
2. K. Shenoi, B.P. Agrawal, On the design of recursive low-pass digital filters. IEEE Trans. ASSP-28, 1, 79–84 (1980)
A.7: Computation of the Coefficients
of Chebyshev Polynomials

A simple derivation is given of a closed form formula for the computation of the coefficients of Chebyshev polynomials, which, as is well known, are required for the design of equal ripple filters. A modification of the formula is also given for facilitating fast computation.

Keywords
Chebyshev polynomials • Computation • Coefficients

Source: S. C. Dutta Roy, "Computation of the Coefficients of Chebyshev Polynomials", IETE Journal of Education, vol. 49, pp. 19–21, January–April 2008.

Introduction

Chebyshev polynomials are required in the design of filters in which the pass-band or the stop-band is desired to have an equal ripple characteristic [1, 2]. As is well known, elliptic filters, in which both pass- and stop-bands are equal ripple, are the optimum ones. However, their design is rather involved because of the computational complexity of the elliptic functions. Next to them in the category of optimum filters comes the Chebyshev filter. For a normalized Chebyshev low-pass filter with cutoff at 1 rad/s, the magnitude squared function is given by

|H(jω)|² = 1/[1 + ε²C_n²(ω)],    (A.7.1)

where C_n(x) is the Chebyshev polynomial, defined by

C_n(x) = cos(n cos⁻¹x) for x ≤ 1, and cosh(n cosh⁻¹x) for x > 1.    (A.7.2)

C_n(x) is usually computed by the recursion relation

C_{n+1}(x) = 2xC_n(x) − C_{n−1}(x)    (A.7.3)

with the initial conditions

C₀(x) = 1 and C₁(x) = x.    (A.7.4)

Tables for low order C_n(x) are available in textbooks. However, when n is high, one starts from the two highest n for which entries exist in the Table and then uses Eq. A.7.3 recursively. Clearly, this computation is time consuming. Nguyen [3] derived a recursive formula for the coefficients and formulated some rules for cutting down on the computation time. Johnson and


Johnson [4] derived the following closed form representation of C_n(x):

C_n(x) = Σ_{k=0}^{⌊n/2⌋} C(n, 2k) (−1)^k x^(n−2k) (1 − x²)^k,    (A.7.5)

where ⌊n/2⌋ is the integer part of (n/2) and C(n, 2k) denotes the binomial coefficient. They obtained this result by expressing C_n(x) of Eq. A.7.2 as

C_n(x) = Re[exp(jn cos⁻¹x)].    (A.7.6)

An alternative derivation of Eq. A.7.5 was given by Cole [5] by treating Eq. A.7.3 as a difference equation and applying the z-transform to it.

In this section, we give a simple derivation of Eq. A.7.5 and a modification of this formula which directly gives the coefficients of x^(n−2r), r = 0 to ⌊n/2⌋, and facilitates faster computation as compared to the existing methods.

Derivation

Let

cos⁻¹x = θ,    (A.7.7)

so that

C_n(x) = cos nθ = (1/2)(e^(jnθ) + e^(−jnθ))
       = (1/2)[(x + j√(1 − x²))ⁿ + (x − j√(1 − x²))ⁿ]
       = (1/2)[(x − √(x² − 1))ⁿ + (x + √(x² − 1))ⁿ].    (A.7.8)

Note that this is the same as Eq. (12) in [5], derived by using z-transforms. Using the Binomial theorem, we get

(x − √(x² − 1))ⁿ = Σ_{i=0}^{n} C(n, i) (−1)ⁱ x^(n−i) (√(x² − 1))ⁱ.    (A.7.9)

If (−1)ⁱ is deleted from Eq. A.7.9, then we get the expansion for (x + √(x² − 1))ⁿ. Substituting these in Eq. A.7.8, we observe that the odd-i terms will cancel. Hence, if we let k = i/2, then Eq. A.7.8 becomes

C_n(x) = Σ_{k=0}^{⌊n/2⌋} C(n, 2k) x^(n−2k) (x² − 1)^k
       = Σ_{k=0}^{⌊n/2⌋} (−1)^k C(n, 2k) x^(n−2k) (1 − x²)^k.    (A.7.10)

The last form in Eq. A.7.10 is the same as Eq. A.7.5.

Simplification of Eq. A.7.10

We can write Eq. A.7.10 as

C_n(x) = xⁿ Σ_{k=0}^{⌊n/2⌋} (−1)^k C(n, 2k) [x⁻²(1 − x²)]^k.    (A.7.11)

The term [x⁻²(1 − x²)]^k in Eq. A.7.11 can be expressed as

(x⁻² − 1)^k = (−1)^k (1 − x⁻²)^k = (−1)^k Σ_{r=0}^{k} C(k, r) (−1)^r x^(−2r).    (A.7.12)

Substituting the last form in Eq. A.7.12 for [x⁻²(1 − x²)]^k in Eq. A.7.11, we get

C_n(x) = Σ_{k=0}^{⌊n/2⌋} Σ_{r=0}^{k} C(n, 2k) C(k, r) (−1)^r x^(n−2r),    (A.7.13)

where ⌊n/2⌋ has the usual significance. Equation A.7.13 can also be written as

C_n(x) = Σ_{r=0}^{⌊n/2⌋} a_{n−2r} x^(n−2r),    (A.7.14)

where

a_{n−2r} = (−1)^r Σ_{k=0}^{⌊n/2⌋} C(n, 2k) C(k, r).    (A.7.15)

Equations A.7.14 and A.7.15 constitute the simplified formula for computation. In using these, note the following additional simplifying features: (1) the quantities C(n, 0), C(n, 2), C(n, 4), ..., C(n, 2⌊n/2⌋) are required for each coefficient and can be pre-calculated and stored; (2) C(k, r) = 0 for k < r; and (3) C(k, 0) = C(k, k) = 1.

We now illustrate the computation with two examples.

Examples

Consider the case of n = 7. From Eqs. A.7.14 and A.7.15, we get

C₇(x) = Σ_{r=0}^{3} a_{7−2r} x^(7−2r),    (A.7.16)

where

a_{7−2r} = (−1)^r Σ_{k=0}^{3} C(7, 2k) C(k, r).    (A.7.17)

For the various values of r, the coefficients are calculated as follows:

r = 0: a₇ = Σ_{k=0}^{3} C(7, 2k)·C(k, 0) = C(7, 0) + C(7, 2) + C(7, 4) + C(7, 6) = 1 + 21 + 35 + 7 = 64;

r = 1: a₅ = −Σ_{k=0}^{3} C(7, 2k)·C(k, 1) = −[C(7, 2)·C(1, 1) + C(7, 4)·C(2, 1) + C(7, 6)·C(3, 1)] = −(21·1 + 35·2 + 7·3) = −112;

r = 2: a₃ = Σ_{k=0}^{3} C(7, 2k)·C(k, 2) = C(7, 4)·C(2, 2) + C(7, 6)·C(3, 2) = 35·1 + 7·3 = 56;

and

r = 3: a₁ = −Σ_{k=0}^{3} C(7, 2k)·C(k, 3) = −C(7, 6)·C(3, 3) = −7·1 = −7.

Thus

C₇(x) = 64x⁷ − 112x⁵ + 56x³ − 7x.    (A.7.18)

Next, consider the example of n = 8. With the experience of the previous example, we can directly write

a_{8−2r} = (−1)^r Σ_{k=0}^{4} C(8, 2k) C(k, r)
         = (−1)^r [C(8, 0)·C(0, r) + C(8, 2)·C(1, r) + C(8, 4)·C(2, r) + C(8, 6)·C(3, r) + C(8, 8)·C(4, r)]
         = (−1)^r [C(0, r) + 28·C(1, r) + 70·C(2, r) + 28·C(3, r) + C(4, r)].    (A.7.19)

For the various values of r, Eq. A.7.19 gives

r = 0: a₈ = 1 + 28 + 70 + 28 + 1 = 128;
r = 1: a₆ = −(28·1 + 70·2 + 28·3 + 4) = −256;
r = 2: a₄ = (70·1 + 28·3 + 6) = 160;
r = 3: a₂ = −(28·1 + 4) = −32;

and

r = 4: a₀ = 1.

Thus

C₈(x) = 128x⁸ − 256x⁶ + 160x⁴ − 32x² + 1.    (A.7.20)

Equations A.7.18 and A.7.20 agree with those calculated by using any other method.
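The two examples above are easy to reproduce programmatically. The following Python sketch is an illustrative addition (not part of the original paper; the function names are ours): it evaluates Eqs. A.7.14 and A.7.15 directly and cross-checks the n = 7 and n = 8 results against the recursion of Eq. A.7.3.

from math import comb

def chebyshev_coeffs(n):
    # Coefficients a_{n-2r}, r = 0..floor(n/2), of C_n(x) per Eq. A.7.15 (highest power first).
    return [(-1)**r * sum(comb(n, 2*k) * comb(k, r) for k in range(n//2 + 1))
            for r in range(n//2 + 1)]

def chebyshev_coeffs_by_recursion(n):
    # The same coefficients obtained from the recursion of Eq. A.7.3, for cross-checking.
    if n == 0:
        return [1]
    prev, cur = [1], [1, 0]                     # C_0(x) = 1, C_1(x) = x (highest power first)
    for _ in range(n - 1):
        nxt = [2*c for c in cur] + [0]          # 2x * C_m(x)
        for i, c in enumerate(prev):            # subtract C_{m-1}(x), aligned at the low-order end
            nxt[len(nxt) - len(prev) + i] -= c
        prev, cur = cur, nxt
    return cur[0::2]                            # only alternate powers are non-zero

print(chebyshev_coeffs(7))                      # [64, -112, 56, -7]       -> Eq. A.7.18
print(chebyshev_coeffs(8))                      # [128, -256, 160, -32, 1] -> Eq. A.7.20
assert chebyshev_coeffs(7) == chebyshev_coeffs_by_recursion(7)
assert chebyshev_coeffs(8) == chebyshev_coeffs_by_recursion(8)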

Conclusion

A method has been presented for rapid calculation of the coefficients of Chebyshev polynomials of high order. The method should be useful in designing high order equal ripple filters.

Problems

P:1. Any other method that you can find out for deriving Eq. A.7.6?
P:2. Compute C₁₅(x).
P:3. Repeat for C₁₆(x).
P:4. Compare Eq. A.7.18 with a Butterworth polynomial of the same order. What differences do you observe?
P:5. What about Legendre polynomials? Are you not familiar with Legendre, a cousin of Butterworth? Read Kuo and thou wilt come to know.

References

1. A. Budak, Passive and Active Network Analysis and Synthesis (Houghton Mifflin, 1974)
2. H. Lam, Analog and Digital Filters (Prentice Hall, 1979)
3. T.V. Nguyen, A triangle of coefficients for Chebyshev polynomials, in Proceedings of the IEEE, vol. 72 (July 1984), pp. 982–983
4. D.E. Johnson, J.R. Johnson, Mathematical Methods in Engineering Physics (Ronald Press, 1965)
5. J.D. Cole, A new derivation of a closed form expression for Chebyshev polynomials of any order. IEEE Trans. Educ. 32, 390–392 (1989)
