Modern Control Design
With MATLAB and SIMULINK

Ashish Tewari
Indian Institute of Technology, Kanpur, India

JOHN WILEY & SONS, LTD


Copyright © 2002 by John Wiley & Sons Ltd
Baffins Lane, Chichester,
West Sussex, PO19 1UD, England
National 01243 779777
International (+44) 1243 779777
e-mail (for orders and customer service enquiries): [email protected]
Visit our Home Page on http://www.wiley.co.uk
or http://www.wiley.com
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted,
in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except
under the terms of the Copyright Designs and Patents Act 1988 or under the terms of a licence issued by
the Copyright Licensing Agency, 90 Tottenham Court Road, London, W1P 9HE, UK, without the permission
in writing of the Publisher, with the exception of any material supplied specifically for the purpose of being
entered and executed on a computer system, for exclusive use by the purchaser of the publication.
Neither the authors nor John Wiley & Sons Ltd accept any responsibility or liability for loss or damage
occasioned to any person or property through using the material, instructions, methods or ideas contained
herein, or acting or refraining from acting as a result of such use. The authors and Publisher expressly disclaim
all implied warranties, including merchantability or fitness for any particular purpose.
Designations used by companies to distinguish their products are often claimed as trademarks. In all instances
where John Wiley & Sons is aware of a claim, the product names appear in initial capital or capital letters.
Readers, however, should contact the appropriate companies for more complete information regarding trade-
marks and registration.
Other Wiley Editorial Offices
John Wiley & Sons, Inc., 605 Third Avenue,
New York, NY 10158-0012, USA
Wiley-VCH Verlag GmbH, Pappelallee 3,
D-69469 Weinheim, Germany
John Wiley, Australia, Ltd, 33 Park Road, Milton,
Queensland 4064, Australia
John Wiley & Sons (Canada) Ltd, 22 Worcester Road,
Rexdale, Ontario, M9W 1L1, Canada
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01,
Jin Xing Distripark, Singapore 129809

British Library Cataloguing in Publication Data


A catalogue record for this book is available from the British Library
ISBN 0 471 49679 0

Typeset in 10/12½ pt Times by Laserwords Private Limited, Chennai, India


Printed and bound in Great Britain by Biddles Ltd, Guildford and King's Lynn
This book is printed on acid-free paper responsibly manufactured from sustainable forestry,
in which at least two trees are planted for each one used for paper production.
To the memory of my father,
Dr. Kamaleshwar Sahai Tewari.
To my wife, Prachi, and daughter, Manya.
Contents

Preface

1. Introduction
1.1 What is Control?
1.2 Open-Loop and Closed-Loop Control Systems
1.3 Other Classifications of Control Systems
1.4 On the Road to Control System Analysis and Design
1.5 MATLAB, SIMULINK, and the Control System Toolbox
References

2. Linear Systems and Classical Control
2.1 How Valid is the Assumption of Linearity?
2.2 Singularity Functions
2.3 Frequency Response
2.4 Laplace Transform and the Transfer Function
2.5 Response to Singularity Functions
2.6 Response to Arbitrary Inputs
2.7 Performance
2.8 Stability
2.9 Root-Locus Method
2.10 Nyquist Stability Criterion
2.11 Robustness
2.12 Closed-Loop Compensation Techniques for Single-Input, Single-Output Systems
2.12.1 Proportional-integral-derivative compensation
2.12.2 Lag, lead, and lead-lag compensation
2.13 Multivariable Systems
Exercises
References

3. State-Space Representation
3.1 The State-Space: Why Do I Need It?
3.2 Linear Transformation of State-Space Representations
3.3 System Characteristics from State-Space Representation
3.4 Special State-Space Representations: The Canonical Forms
3.5 Block Building in Linear, Time-Invariant State-Space
Exercises
References

4. Solving the State-Equations
4.1 Solution of the Linear Time Invariant State Equations
4.2 Calculation of the State-Transition Matrix
4.3 Understanding the Stability Criteria through the State-Transition Matrix
4.4 Numerical Solution of Linear Time-Invariant State-Equations
4.5 Numerical Solution of Linear Time-Varying State-Equations
4.6 Numerical Solution of Nonlinear State-Equations
4.7 Simulating Control System Response with SIMULINK
Exercises
References

5. Control System Design in State-Space
5.1 Design: Classical vs. Modern
5.2 Controllability
5.3 Pole-Placement Design Using Full-State Feedback
5.3.1 Pole-placement regulator design for single-input plants
5.3.2 Pole-placement regulator design for multi-input plants
5.3.3 Pole-placement regulator design for plants with noise
5.3.4 Pole-placement design of tracking systems
5.4 Observers, Observability, and Compensators
5.4.1 Pole-placement design of full-order observers and compensators
5.4.2 Pole-placement design of reduced-order observers and compensators
5.4.3 Noise and robustness issues
Exercises
References

6. Linear Optimal Control
6.1 The Optimal Control Problem
6.1.1 The general optimal control formulation for regulators
6.1.2 Optimal regulator gain matrix and the Riccati equation
6.2 Infinite-Time Linear Optimal Regulator Design
6.3 Optimal Control of Tracking Systems
6.4 Output Weighted Linear Optimal Control
6.5 Terminal Time Weighting: Solving the Matrix Riccati Equation
Exercises
References

7. Kalman Filters
7.1 Stochastic Systems
7.2 Filtering of Random Signals
7.3 White Noise, and White Noise Filters
7.4 The Kalman Filter
7.5 Optimal (Linear, Quadratic, Gaussian) Compensators
7.6 Robust Multivariable LQG Control: Loop Transfer Recovery
Exercises
References

8. Digital Control Systems
8.1 What are Digital Systems?
8.2 A/D Conversion and the z-Transform
8.3 Pulse Transfer Functions of Single-Input, Single-Output Systems
8.4 Frequency Response of Single-Input, Single-Output Digital Systems
8.5 Stability of Single-Input, Single-Output Digital Systems
8.6 Performance of Single-Input, Single-Output Digital Systems
8.7 Closed-Loop Compensation Techniques for Single-Input, Single-Output Digital Systems
8.8 State-Space Modeling of Multivariable Digital Systems
8.9 Solution of Linear Digital State-Equations
8.10 Design of Multivariable, Digital Control Systems Using Pole-Placement: Regulators, Observers, and Compensators
8.11 Linear Optimal Control of Digital Systems
8.12 Stochastic Digital Systems, Digital Kalman Filters, and Optimal Digital Compensators
Exercises
References

9. Advanced Topics in Modern Control
9.1 Introduction
9.2 H∞ Robust, Optimal Control
9.3 Structured Singular Value Synthesis for Robust Control
9.4 Time-Optimal Control with Pre-shaped Inputs
9.5 Output-Rate Weighted Linear Optimal Control
9.6 Nonlinear Optimal Control
Exercises
References

Appendix A: Introduction to MATLAB, SIMULINK and the Control System Toolbox

Appendix B: Review of Matrices and Linear Algebra

Appendix C: Mass, Stiffness, and Control Influence Matrices of the Flexible Spacecraft

Answers to Selected Exercises

Index
Preface

The motivation for writing this book can be ascribed chiefly to the usual struggle of
an average reader to understand and utilize controls concepts, without getting lost in
the mathematics. Many textbooks are available on modern control, which do a fine
job of presenting the control theory. However, an introductory text on modern control
usually stops short of the really useful concepts - such as optimal control and Kalman
filters - while an advanced text which covers these topics assumes too much mathe-
matical background of the reader. Furthermore, the examples and exercises contained
in many control theory textbooks are too simple to represent modern control appli-
cations, because of the computational complexity involved in solving practical prob-
lems. This book aims at introducing the reader to the basic concepts and applications
of modern control theory in an easy to read manner, while covering in detail what
may be normally considered advanced topics, such as multivariable state-space design,
solutions to time-varying and nonlinear state-equations, optimal control, Kalman filters,
robust control, and digital control. An effort is made to explain the underlying princi-
ples behind many controls concepts. The numerical examples and exercises are chosen
to represent practical problems in modern control. Perhaps the greatest distinguishing
feature of this book is the ready and extensive use of MATLAB (with its Control
System Toolbox) and SIMULINK®, as practical computational tools to solve problems
across the spectrum of modern control. The MATLAB/SIMULINK combination has become
the single most common - and industry-wide standard - software in the analysis and
design of modern control systems. In giving the reader a hands-on experience with the
MATLAB/SIMULINK and the Control System Toolbox as applied to some practical design
problems, the book is useful for a practicing engineer, apart from being an introductory
text for the beginner.
This book can be used as a textbook in an introductory course on control systems at
the third, or fourth year undergraduate level. As stated above, another objective of the
book is to make it readable by a practicing engineer without a formal controls back-
ground. Many modern control applications are interdisciplinary in nature, and people
from a variety of disciplines are interested in applying control theory to solve practical
problems in their own respective fields. Bearing this in mind, the examples and exercises
are taken to cover as many different areas as possible, such as aerospace, chemical, elec-
trical and mechanical applications. Continuity in reading is preserved, without frequently
referring to an appendix, or other distractions. At the end of each chapter, readers are

® MATLAB, SIMULINK, and Control System Toolbox are registered trademarks of The MathWorks, Inc.
given a number of exercises, in order to consolidate their grasp of the material presented
in the chapter. Answers to selected numerical exercises are provided near the end of
the book.
While the main focus of the material presented in the book is on the state-space
methods applied to linear, time-invariant control - which forms a majority of modern
control applications - the classical frequency domain control design and analysis is not
neglected, and large parts of Chapters 2 and 8 cover classical control. Most of the
example problems are solved with MATLAB/SIMULINK, using MATLAB command
lines, and SIMULINK block-diagrams immediately followed by their resulting outputs.
The reader can directly reproduce the MATLAB statements and SIMULINK blocks
presented in the text to obtain the same results. Also presented are a number of computer
programs in the form of new MATLAB M-files (i.e. the M-files which are not included
with MATLAB, or the Control System Toolbox) to solve a variety of problems ranging
from step and impulse responses of single-input, single-output systems, to the solution
of the matrix Riccati equation for the terminal-time weighted, multivariable, optimal
control design. This is perhaps the only available controls textbook which gives ready
computer programs to solve such a wide range of problems. The reader becomes aware
of the power of MATLAB/SIMULINK in going through the examples presented in the
book, and gets a good exposure to programming in MATLAB/SIMULINK. The numer-
ical examples presented require MATLAB 6.0, SIMULINK 4.0, and Control System
Toolbox 5.0. Older versions of this software can also be adapted to run the examples and
models presented in the book, with some modifications (refer to the respective Users'
Manuals).
The numerical examples in the book through MATLAB/SIMULINK and the Control
System Toolbox have been designed to prevent the use of the software as a black box, or by
rote. The theoretical background and numerical techniques behind the software commands
are explained in the text, so that readers can write their own programs in MATLAB, or
another language. Many of the examples contain instructions on programming. It is also
explained how many of the important Control System Toolbox commands can be replaced
by a set of intrinsic MATLAB commands. This is to avoid over-dependence on a particular
version of the Control System Toolbox, which is frequently updated with new features.
After going through the book, readers are better equipped to learn the advanced features
of the software for design applications.
Readers are introduced to advanced topics such as H∞ robust optimal control, struc-
tured singular value synthesis, input shaping, rate-weighted optimal control, and nonlinear
control in the final chapter of the book. Since the book is intended to be of introduc-
tory rather than exhaustive nature, the reader is referred to other articles that cover these
advanced topics in detail.
I am grateful to the editorial and production staff at the Wiley college group, Chichester,
who diligently worked with many aspects of the book. I would like to specially thank
Karen Mossman, Gemma Quilter, Simon Plumtree, Robert Hambrook, Dawn Booth and
See Hanson for their encouragement and guidance in the preparation of the manuscript.
I found working with Wiley, Chichester, a pleasant experience, and an education into
the many aspects of writing and publishing a textbook. I would also like to thank my
students and colleagues, who encouraged and inspired me to write this book. I thank all
the reviewers for finding the errors in the draft manuscript, and for providing many
constructive suggestions. Writing this book would have been impossible without the
constant support of my wife, Prachi, and my little daughter, Manya, whose total age
in months closely followed the number of chapters as they were being written.

Ashish Tewari
1
Introduction

1.1 What is Control?


When we use the word control in everyday life, we are referring to the act of producing a
desired result. By this broad definition, control is seen to cover all artificial processes. The
temperature inside a refrigerator is controlled by a thermostat. The picture we see on the
television is a result of a controlled beam of electrons made to scan the television screen
in a selected pattern. A compact-disc player focuses a fine laser beam at the desired spot
on the rotating compact-disc in order to produce the desired music. While driving a car,
the driver is controlling the speed and direction of the car so as to reach the destination
quickly, without hitting anything on the way. The list is endless. Whether the control is
automatic (such as in the refrigerator, television or compact-disc player), or caused by a
human being (such as the car driver), it is an integral part of our daily existence. However,
control is not confined to artificial processes alone. Imagine living in a world where
the temperature is unbearably hot (or cold), without the life-supporting oxygen, water or
sunlight. We often do not realize how controlled the natural environment we live in is. The
composition, temperature and pressure of the earth's atmosphere are kept stable in their
livable state by an intricate set of natural processes. The daily variation of temperature
caused by the sun controls the metabolism of all living organisms. Even the simplest
life form is sustained by unimaginably complex chemical processes. The ultimate control
system is the human body, where the controlling mechanism is so complex that even
while sleeping, the brain regulates the heartbeat, body temperature and blood-pressure by
countless chemical and electrical impulses per second, in a way not quite understood yet.
(You have to wonder who designed that control system!) Hence, control is everywhere
we look, and is crucial for the existence of life itself.
A study of control involves developing a mathematical model for each component of
the control system. We have twice used the word system without defining it. A system
is a set of self-contained processes under study. A control system by definition consists
of the system to be controlled - called the plant - as well as the system which exercises
control over the plant, called the controller. A controller could be either human, or an
artificial device. The controller is said to supply a signal to the plant, called the input to
the plant (or the control input), in order to produce a desired response from the plant,
called the output from the plant. When referring to an isolated system, the terms input and
output are used to describe the signal that goes into a system, and the signal that comes
out of a system, respectively. Let us take the example of the control system consisting
of a car and its driver. If we select the car to be the plant, then the driver becomes the
controller, who applies an input to the plant in the form of pressing the gas pedal if it
is desired to increase the speed of the car. The speed increase can then be the output
from the plant. Note that in a control system, what control input can be applied to the
plant is determined by the physical processes of the plant (in this case, the car's engine),
but the output could be anything that can be directly measured (such as the car's speed
or its position). In other words, many different choices of the output can be available
at the same time, and the controller can use any number of them, depending upon the
application. Say if the driver wants to make sure she is obeying the highway speed limit,
she will be focusing on the speedometer. Hence, the speed becomes the plant output. If
she wants to stop well before a stop sign, the car's position with respect to the stop sign
becomes the plant output. If the driver is overtaking a truck on the highway, both the
speed and the position of the car vis-à-vis the truck are the plant outputs. Since the plant
output is the same as the output of the control system, it is simply called the output when
referring to the control system as a whole. After understanding the basic terminology of
the control system, let us now move on to see what different varieties of control systems
there are.

1.2 Open-Loop and Closed-Loop Control Systems


Let us return to the example of the car driver control system. We have encountered the
not so rare breed of drivers who generally boast of their driving skills with the following
words: "Oh I am so good that I can drive this car with my eyes closed!" Let us imagine
we give such a driver an opportunity to live up to that boast (without riding with her,
of course) and apply a blindfold. Now ask the driver to accelerate to a particular speed
(assuming that she continues driving in a straight line). While driving in this fashion,
the driver has absolutely no idea about what her actual speed is. By pressing the gas
pedal (control input) she hopes that the car's speed will come up to the desired value,
but has no means of verifying the actual increase in speed. Such a control system, in
which the control input is applied without the knowledge of the plant output, is called
an open-loop control system. Figure 1.1 shows a block-diagram of an open-loop control
system, where the sub-systems (controller and plant) are shown as rectangular blocks, with
arrows indicating input and output to each block. By now it must be clear that an open-
loop controller is like a rifle shooter who gets only one shot at the target. Hence, open-loop
control will be successful only if the controller has a pretty good prior knowledge of the
behavior of the plant, which can be defined as the relationship between the control input

[Block diagram: desired speed → Controller (driver) → control input (gas pedal force) → Plant (car) → output (speed)]

Figure 1.1 An open-loop control system: the controller applies the control input without knowing the
plant output

and the plant output. If one knows what output a system will produce when a known
input is applied to it, one is said to know the system's behavior.
Mathematically, the relationship between the output of a linear plant and the control
input (the system's behavior) can be described by a transfer function (the concepts of
linear systems and transfer functions are explained in Chapter 2). Suppose the driver
knows from previous driving experience that, to maintain a speed of 50 kilometers per
hour, she needs to apply one kilogram of force on the gas pedal. Then the car's transfer
function is said to be 50 km/hr/kg. (This is a very simplified example. The actual car
is not going to have such a simple transfer function.) Now, if the driver can accurately
control the force exerted on the gas pedal, she can be quite confident of achieving her
target speed, even though blindfolded. However, as anybody reasonably experienced with
driving knows, there are many uncertainties - such as the condition of the road, tyre
pressure, the condition of the engine, or even the uncertainty in gas pedal force actually
being applied by the driver - which can cause a change in the car's behavior. If the
transfer function in the driver's mind was determined on smooth roads, with properly
inflated tyres and a well maintained engine, she is going to get a speed of less than
50 km/hr with 1 kg force on the gas pedal if, say, the road she is driving on happens to
have rough patches. In addition, if a wind happens to be blowing opposite to the car's
direction of motion, a further change in the car's behavior will be produced. Such an
unknown and undesirable input to the plant, such as road roughness or the head-wind, is
called a noise. In the presence of uncertainty about the plant's behavior, or due to a noise
(or both), it is clear from the above example that an open-loop control system is unlikely
to be successful.
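
To put a number on this, consider a small MATLAB illustration of the blindfolded driver (our own sketch, not from the text; the 20 percent gain reduction is a hypothetical figure):

% Open-loop control with an uncertain plant (hypothetical numbers)
K_assumed = 50;          % transfer function assumed by the driver (km/hr per kg)
K_actual = 40;           % actual gain on a rough road, say 20 percent lower
yd = 50;                 % desired speed (km/hr)
u = yd/K_assumed;        % open-loop control input: 1 kg of pedal force
y = K_actual*u           % actual speed achieved: 40 km/hr
speed_error = yd - y     % a 10 km/hr error the blindfolded driver never detects

Since the input is computed once from the assumed transfer function and never corrected, any uncertainty in the plant appears directly as an error in the output.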
Suppose the driver decides to drive the car like a sane person (i.e. with both eyes
wide open). Now she can see her actual speed, as measured by the speedometer. In this
situation, the driver can adjust the force she applies to the pedal so as to get the desired
speed on the speedometer; it may not be a one shot approach, and some trial and error
might be required, causing the speed to initially overshoot or undershoot the desired value.
However, after some time (depending on the ability of the driver), the target speed can be
achieved (if it is within the capability of the car), irrespective of the condition of the road
or the presence of a wind. Note that now the driver - instead of applying a pre-determined
control input as in the open-loop case - is adjusting the control input according to the
actual observed output. Such a control system in which the control input is a function
of the plant's output is called a closed-loop system. Since in a closed-loop system the
controller is constantly in touch with the actual output, it is likely to succeed in achieving
the desired output even in the presence of noise and/or uncertainty in the linear plant's
behavior (transfer-function). The mechanism by which the information about the actual
output is conveyed to the controller is called feedback. On a block-diagram, the path
from the plant output to the controller input is called a feedback-loop. A block-diagram
example of a possible closed-loop system is given in Figure 1.2.
Comparing Figures 1.1 and 1.2, we find a new element in Figure 1.2 denoted by a circle
before the controller block, into which two arrows are leading and out of which one arrow
is emerging and leading to the controller. This circle is called a summing junction, which
adds the signals leading into it with the appropriate signs which are indicated adjacent to
the respective arrowheads. If a sign is omitted, a positive sign is assumed. The output of

[Block diagram: desired output yd and the fed-back output y (speed) enter a summing junction; the error yd - y drives the Controller (driver), whose control input u (gas pedal force) drives the Plant (car); the output y is returned through the feedback loop]

Figure 1.2 Example of a closed-loop control system with feedback; the controller applies a control
input based on the plant output

the summing junction is the arithmetic sum of its two (or more) inputs. Using the symbols
u (control input), y (output), and yd (desired output), we can see in Figure 1.2 that the
input to the controller is the error signal (yd - y). In Figure 1.2, the controller itself is a
system which produces an output (control input), u, based upon the input it receives in
the form of (yd - y). Hence, the behavior of a linear controller could be mathematically
described by its transfer-function, which is the relationship between u and (yd - y). Note
that Figure 1.2 shows only a popular kind of closed-loop system. In other closed-loop
systems, the input to the controller could be different from the error signal (yd - y).
The controller transfer-function is the main design parameter in the design of a control
system and determines how rapidly - and with what maximum overshoot (i.e. maximum
value of |yd - y|) - the actual output, y, will become equal to the desired output, yd. We
will see later how the controller transfer-function can be obtained, given a set of design
requirements. (However, deriving the transfer-function of a human controller is beyond
the present science, as mentioned in the previous section.) When the desired output, yd, is
a constant, the resulting controller is called a regulator. If the desired output is changing
with time, the corresponding control system is called a tracking system. In any case, the
principal task of a closed-loop controller is to make (yd - y) = 0 as quickly as possible.
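
As a simple numerical illustration (ours, not the book's; the first-order car model and all the gains are assumptions chosen only for illustration), the following MATLAB fragment simulates a proportional controller, u = K(yd - y), closing the loop around the car:

% Closed-loop speed control with proportional feedback (assumed model)
tau = 5; Kcar = 50;        % assumed car time constant (s) and gain (km/hr per kg)
K = 0.5; yd = 50;          % proportional controller gain; desired speed (km/hr)
dt = 0.01; t = 0:dt:10;
y = zeros(size(t));        % output (speed), starting from rest
for k = 1:length(t)-1
    u = K*(yd - y(k));                          % control input from the error signal
    y(k+1) = y(k) + dt*(-y(k) + Kcar*u)/tau;    % car model: tau*y' + y = Kcar*u
end
plot(t,y), xlabel('Time (s)'), ylabel('Speed (km/hr)')

Unlike the open-loop case, the pedal force is continually corrected by the measured error. Note, though, that purely proportional action leaves a small steady-state offset; compensation techniques that remove it are taken up in Chapter 2.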
Figure 1.3 shows a possible plot of the actual output of a closed-loop control system.
Whereas the desired output yd has been achieved after some time in Figure 1.3, there
is a large maximum overshoot which could be unacceptable. A successful closed-loop
controller design should achieve both a small maximum overshoot, and a small error
magnitude |yd - y| as quickly as possible. In Chapter 4 we will see that the output of a
linear system to an arbitrary input consists of a fluctuating sort of response (called the
transient response), which begins as soon as the input is applied, and a settled kind of
response (called the steady-state response) after a long time has elapsed since the input
was initially applied. If the linear system is stable, the transient response would decay
to zero after sometime (stability is an important property of a system, and is discussed
in Section 2.8), and only the steady-state response would persist for a long time. The
transient response of a linear system depends largely upon the characteristics and the
initial state of the system, while the steady-state response depends both upon system's
characteristics and the input as a function of time, i.e. u(t). The maximum overshoot is
a property of the transient response, but the error magnitude |yd - y| at large time (or in
the limit t → ∞) is a property of the steady-state response of the closed-loop system. In

[Plot: output y(t) versus time t, rising from zero, overshooting the desired output yd, and settling to yd]

Figure 1.3 Example of a closed-loop control system's response; the desired output is achieved after
some time, but there is a large maximum overshoot

Figure 1.3 the steady-state response asymptotically approaches a constant yd in the limit
t → ∞.
Figure 1.3 shows the basic fact that it is impossible to get the desired output imme-
diately. The reason why the output of a linear, stable system does not instantaneously
settle to its steady-state has to do with the inherent physical characteristics of all prac-
tical systems that involve either dissipation or storage of energy supplied by the input.
Examples of energy storage devices are a spring in a mechanical system, and a capacitor
in an electrical system. Examples of energy dissipation processes are mechanical friction,
heat transfer, and electrical resistance. Due to a transfer of energy from the applied input
to the energy storage or dissipation elements, there is initially a fluctuation of the total
energy of the system, which results in the transient response. As the time passes, the
energy contribution of storage/dissipative processes in a stable system declines rapidly,
and the total energy (hence, the output) of the system tends to the same function of time
as that of the applied input. To better understand this behavior of linear, stable systems,
consider a bucket with a small hole in its bottom as the system. The input is the flow
rate of water supplied to the bucket, which could be a specific function of time, and the
output is the total flow rate of water coming out of the bucket (from the hole, as well
as from the overflowing top). Initially, the bucket takes some time to fill due to the hole
(dissipative process) and its internal volume (storage device). However, after the bucket
is full, the output largely follows the changing input.
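
The transient and steady-state portions of a response are easy to see numerically. The sketch below is our own example (the second-order system is arbitrary, and tf and step are Control System Toolbox commands); it produces a response of the shape of Figure 1.3:

% Transient versus steady-state response of a stable second-order system
wn = 1; zeta = 0.2;                    % assumed natural frequency and damping ratio
sys = tf(wn^2, [1 2*zeta*wn wn^2]);    % unity steady-state gain
[y,t] = step(sys, 30);                 % response to a unit step desired output
max_overshoot = max(y) - 1             % a property of the transient response
error_at_large_time = abs(1 - y(end))  % a property of the steady-state response
plot(t,y), xlabel('Time'), ylabel('Output, y(t)')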
While the most common closed-loop control system is the feedback control system, as
shown in Figure 1.2, there are other possibilities such as the feedforward control system.
In a feedforward control system - whose example is shown in Figure 1.4 - in addition
to a feedback loop, a feedforward path from the desired output (yd) to the control input
is generally employed to counteract the effect of noise, or to reduce a known undesirable
plant behavior. The feedforward controller incorporates some a priori knowledge of the
plant's behavior, thereby reducing the burden on the feedback controller in controlling

[Block diagram: the desired output yd and the fed-back output y (speed) enter a summing junction driving the Feedback controller (driver + gas pedal); a second summing junction adds the feedback controller's output, the Feedforward controller (engine RPM governor) output, and the disturbance to form the control input u (fuel flow) to the Plant (car); the output y is returned through the feedback loop]

Figure 1.4 A closed-loop control system with a feedforward path; the engine RPM governor takes
care of the fuel flow disturbance, leaving the driver free to concentrate on achieving desired speed with
gas pedal force

the plant. Note that if the feedback controller is removed from Figure 1.4, the resulting
control system becomes open-loop type. Hence, a feedforward control system can be
regarded as a hybrid of open and closed-loop control systems. In the car driver example,
the feedforward controller could be an engine rotational speed governor that keeps the
engine's RPM constant in the presence of disturbance (noise) in the fuel flow rate caused
by known imperfections in the fuel supply system. This reduces the burden on the driver,
who would have been required to apply a rapidly changing gas pedal force to counteract
the fuel supply disturbance if there was no feedforward controller. Now the feedback
controller consists of the driver and the gas-pedal mechanism, and the control input is the
fuel flow into the engine, which is influenced by not only the gas-pedal force, but also by
the RPM governor output and the disturbance. It is clear from the present example that
many practical control systems can benefit from the feedforward arrangement.
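
A rough numerical analog of Figure 1.4 is sketched below (our construction, with assumed numbers; the disturbance is taken to be known exactly, so that the feedforward term can cancel it outright):

% Feedback plus feedforward: the feedforward term cancels a known disturbance
tau = 5; Kcar = 50; K = 0.5; yd = 50;  % same assumed car model and gains as before
dt = 0.01; t = 0:dt:10;
d = 0.2*sin(2*t);                      % known fuel-flow disturbance (assumed)
y = zeros(size(t));
for k = 1:length(t)-1
    u = K*(yd - y(k)) - d(k);          % feedback action plus feedforward cancellation
    y(k+1) = y(k) + dt*(-y(k) + Kcar*(u + d(k)))/tau;  % disturbance enters with u
end
plot(t,y), xlabel('Time (s)'), ylabel('Speed (km/hr)')

Because the feedforward term subtracts the known disturbance before it reaches the plant, the feedback controller faces the same problem as in the undisturbed case; in practice the cancellation is only as good as the a priori knowledge of the disturbance.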
In this section, we have seen that a control system can be classified as either open- or
closed-loop, depending upon the physical arrangement of its components. However, there
are other ways of classifying control systems, as discussed in the next section.

1.3 Other Classifications of Control Systems


Apart from being open- or closed-loop, a control system can be classified according to
the physical nature of the laws obeyed by the system, and the mathematical nature of the
governing differential equations. To understand such classifications, we must define the
state of a system, which is the fundamental concept in modern control. The state of a
system is any set of physical quantities which need to be specified at a given time in order
to completely determine the behavior of the system. This definition is a little confusing,
because it introduces another word, determine, which needs further explanation given in
the following paragraph. We will return to the concept of state in Chapter 3, but here let
us only say that the state is all the information we need about a system to tell what the
system is doing at any given time. For example, if one is given information about the
speed of a car and the positions of other vehicles on the road relative to the car, then
one has sufficient information to drive the car safely. Thus, the state of such a system
consists of the car's speed and relative positions of other vehicles. However, for the same
system one could choose another set of physical quantities to be the system's state, such
as velocities of all other vehicles relative to the car, and the position of the car with
respect to the road divider. Hence, by definition the state is not a unique set of physical
quantities.
A control system is said to be deterministic when the set of physical laws governing the
system are such that if the state of the system at some time (called the initial conditions)
and the input are specified, then one can precisely predict the state at a later time. The laws
governing a deterministic system are called deterministic laws. Since the characteristics of
a deterministic system can be found merely by studying its response to initial conditions
(transient response), we often study such systems by taking the applied input to be zero.
A response to initial conditions when the applied input is zero depicts how the system's
state evolves from some initial time to that at a later time. Obviously, the evolution of
only a deterministic system can be determined. Going back to the definition of state, it is
clear that the latter is arrived at keeping a deterministic system in mind, but the concept of
state can also be used to describe systems that are not deterministic. A system that is not
deterministic is either stochastic, or has no laws governing it. A stochastic (also called
probabilistic) system has such governing laws that although the initial conditions (i.e.
state of a system at some time) are known in every detail, it is impossible to determine
the system's state at a later time. In other words, based upon the stochastic governing
laws and the initial conditions, one could only determine the probability of a state, rather
than the state itself. When we toss a perfect coin, we are dealing with a stochastic law that
states that both the possible outcomes of the toss (head or tail) have an equal probability
of 50 percent. We should, however, make a distinction between a physically stochastic
system, and our ability (as humans) to predict the behavior of a deterministic system based
upon our measurement of the initial conditions and our understanding of the governing
laws. Due to an uncertainty in our knowledge of the governing deterministic laws, as
well as errors in measuring the initial conditions, we will frequently be unable to predict
the state of a deterministic system at a later time. Such a problem of unpredictability is
highlighted by a special class of deterministic systems, namely chaotic systems. A system
is called chaotic if even a small change in the initial conditions produces an arbitrarily
large change in the system's state at a later time.
An example of chaotic control systems is a double pendulum (Figure 1.5). It consists
of two masses, m₁ and m₂, joined together and suspended from point O by two rigid
massless links of lengths L₁ and L₂ as shown. Here, the state of the system can be
defined by the angular displacements of the two links, θ₁(t) and θ₂(t), as well as their
respective angular velocities, θ₁⁽¹⁾(t) and θ₂⁽¹⁾(t). (In this book, the notation used for
representing a kth order time derivative of f(t) is f⁽ᵏ⁾(t), i.e. dᵏf(t)/dtᵏ = f⁽ᵏ⁾(t).
Thus, θ₁⁽¹⁾(t) denotes dθ₁(t)/dt, etc.) Suppose we do not apply an input to the system,
and begin observing the system at some time, t = 0, at which the initial conditions are,
say, θ₁(0) = 40°, θ₂(0) = 80°, θ₁⁽¹⁾(0) = 0°/s, and θ₂⁽¹⁾(0) = 0°/s. Then at a later time,
say after 100 s, the system's state will be very much different from what it would have
been if the initial conditions were, say, θ₁(0) = 40.01°, θ₂(0) = 80°, θ₁⁽¹⁾(0) = 0°/s, and
θ₂⁽¹⁾(0) = 0°/s. Figure 1.6 shows the time history of the angle θ₂(t) between 85 s and 100 s

Figure 1.5 A double pendulum is a chaotic system because a small change in its initial conditions
produces an arbitrarily large change in the system's state after some time

[Plot: θ₂ (degrees) versus time (s) over 85-100 s, showing the two solutions diverging]

Figure 1.6 Time history between 85 s and 100 s of angle θ₂ of a double pendulum with m₁ = 1 kg,
m₂ = 2 kg, L₁ = 1 m, and L₂ = 2 m for the two sets of initial conditions θ₁(0) = 40°, θ₂(0) = 80°,
θ₁⁽¹⁾(0) = 0°/s, θ₂⁽¹⁾(0) = 0°/s, and θ₁(0) = 40.01°, θ₂(0) = 80°, θ₁⁽¹⁾(0) = 0°/s, θ₂⁽¹⁾(0) = 0°/s,
respectively

for the two sets of initial conditions, for a double pendulum with m₁ = 1 kg, m₂ = 2 kg,
L₁ = 1 m, and L₂ = 2 m. Note that we know the governing laws of this deterministic
system, yet we cannot predict its state after a given time, because there will always be
some error in measuring the initial conditions. Chaotic systems are so interesting that they
have become the subject of specialization at many physics and engineering departments.
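
A time history like Figure 1.6 can be reproduced with a few lines of MATLAB. The sketch below is ours (the book's own program may differ); it integrates the standard point-mass double-pendulum equations with ode45 for the two sets of initial conditions:

% Double pendulum: two runs differing by 0.01 deg in theta1(0)
% State x = [theta1; theta1dot; theta2; theta2dot]; standard point-mass equations
m1 = 1; m2 = 2; L1 = 1; L2 = 2; g = 9.81;
f = @(t,x) [x(2); ...
    (-g*(2*m1 + m2)*sin(x(1)) - m2*g*sin(x(1) - 2*x(3)) - 2*sin(x(1) - x(3))*m2 ...
    *(x(4)^2*L2 + x(2)^2*L1*cos(x(1) - x(3))))/(L1*(2*m1 + m2 - m2*cos(2*x(1) - 2*x(3)))); ...
    x(4); ...
    (2*sin(x(1) - x(3))*(x(2)^2*L1*(m1 + m2) + g*(m1 + m2)*cos(x(1)) ...
    + x(4)^2*L2*m2*cos(x(1) - x(3))))/(L2*(2*m1 + m2 - m2*cos(2*x(1) - 2*x(3))))];
opts = odeset('RelTol',1e-8,'AbsTol',1e-8);
[ta,xa] = ode45(f, [0 100], [40*pi/180; 0; 80*pi/180; 0], opts);
[tb,xb] = ode45(f, [0 100], [40.01*pi/180; 0; 80*pi/180; 0], opts);
plot(ta, xa(:,3)*180/pi, tb, xb(:,3)*180/pi)   % the two histories of theta2
xlim([85 100]), xlabel('Time (s)'), ylabel('\theta_2 (deg)')

The two trajectories stay close for a long while and then separate completely, even though the initial conditions differ by only 0.01 degree.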
Any unpredictable system can be mistaken to be a stochastic system. Taking the
car driver example of Section 1.2, there may exist deterministic laws that govern the
road conditions, wind velocity, etc., but our ignorance about them causes us to treat
such phenomena as random noise, i.e. stochastic processes. Another situation when a
deterministic system may appear to be stochastic is exemplified by the toss of a coin
deliberately loaded to fall every time on one particular side (either head or tail). An
unwary spectator may believe such a system to be stochastic, when actually it is very
much deterministic!
When we analyze and design control systems, we try to express their governing physical
laws by differential equations. The mathematical nature of the governing differential
equations provides another way of classifying control systems. Here we depart from the
realm of physics, and delve into mathematics. Depending upon whether the differential
equations used to describe a control system are linear or nonlinear in nature, we can call
the system either linear or nonlinear. Furthermore, a control system whose description
requires partial differential equations is called a distributed parameter system, whereas a
system requiring only ordinary differential equations is called a lumped parameter system.
A vibrating string, or a membrane is a distributed parameter system, because its properties
(mass and stiffness) are distributed in space. A mass suspended by a spring is a lumped
parameter system, because its mass and stiffness are concentrated at discrete points in
space. (A more common nomenclature of distributed and lumped parameter systems is
continuous and discrete systems, respectively, but we avoid this terminology in this book
as it might be confused with continuous time and discrete time systems.) A particular
system can be treated as linear, or nonlinear, distributed, or lumped parameter, depending
upon what aspects of its behavior we are interested in. For example, if we want to study
only small angular displacements of a simple pendulum, its differential equation of motion
can be treated to be linear; but if large angular displacements are to be studied, the same
pendulum is treated as a nonlinear system. Similarly, when we are interested in the motion
of a car as a whole, its state can be described by only two quantities: the position and
the velocity of the car. Hence, it can be treated as a lumped parameter system whose
entire mass is concentrated at one point (the center of mass). However, if we want to
take into account how the tyres of the car are deforming as it moves along an uneven
road, the car becomes a distributed parameter system whose state is described exactly by
an infinite set of quantities (such as deformations of all the points on the tyres, and their
time derivatives, in addition to the speed and position of the car). Other classifications
based upon the mathematical nature of governing differential equations will be discussed
in Chapter 2.
Yet another way of classifying control systems is whether their outputs are contin-
uous or discontinuous in time. If one can express the system's state (which is obtained
by solving the system's differential equations) as a continuous function of time, the
system is called continuous in time (or analog system). However, a majority of modern
control systems produce outputs that 'jump' (or are discontinuous) in time. Such control
systems are called discrete in time (or digital systems). Note that in the limit of very small
time steps, a digital system can be approximated as an analog system. In this book, we
will make this assumption quite often. If the time steps chosen to sample the discontin-
uous output are relatively large, then a digital system can have a significantly different
behaviour from that of a corresponding analog system. In modern applications, even
analog controllers are implemented on a digital processor, which can introduce digital
characteristics to the control system. Chapter 8 is devoted to the study of digital systems.
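
As a small illustration of the difference (our example, using the Control System Toolbox command c2d on an arbitrary first-order plant):

% An analog system and two digital (sampled) versions of it
sys = tf(1, [1 1]);        % an arbitrary first-order analog plant
sysd1 = c2d(sys, 0.05);    % digital system with a small time step (0.05 s)
sysd2 = c2d(sys, 1.0);     % digital system with a large time step (1 s)
step(sys, 'k', sysd1, 'b--', sysd2, 'r-.', 8)
legend('analog', 'digital, T = 0.05 s', 'digital, T = 1 s')

With the small time step the digital response is nearly indistinguishable from the analog one, as assumed above; with the large step the staircase character of the digital output is evident.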
There are other minor classifications of control systems based upon the systems' char-
acteristics, such as stability, controllability, observability, etc., which we will take up
in subsequent chapters. Frequently, control systems are also classified based upon the
number of inputs and outputs of the system, such as single-input, single-output system,
or two-input, three-output system, etc. In classical control (an object of Chapter 2)
the distinction between single-input, single-output (SISO) and multi-input, multi-output
(MIMO) systems is crucial.

1.4 On the Road to Control System Analysis and Design

When we find an unidentified object on the street, the first thing we may do is prod or poke
it with a stick, pick it up and shake it, or even hit it with a hammer and hear the sound it
makes, in order to find out something about it. We treat an unknown control system in a
similar fashion, i.e. we apply some well known inputs to it and carefully observe how it
responds to those inputs. This has been an age old method of analyzing a system. Some
of the well known inputs applied to study a system are the singularity functions, thus
called due to their peculiar nature of being singular in the mathematical sense (their time
derivative tends to infinity at some time). Two prominent members of this zoo are the unit
step function and the unit impulse function. In Chapter 2, useful computer programs are
presented to enable you to find the response to impulse and step inputs - as well as the
response to an arbitrary input - of a single-input, single-output control system. Chapter 2
also discusses important properties of a control system, namely, performance, stability,
and robustness, and presents the analysis and design of linear control systems using the
classical approach of frequency response, and transform methods. Chapter 3 introduces
the state-space modeling for linear control systems, covering various applications from
all walks of engineering. The solution of a linear system's governing equations using
the state-space method is discussed in Chapter 4. In this chapter, many new computer
programs are presented to help you solve the state-equations for linear or nonlinear
systems.
The design of modern control systems using the state-space approach is introduced in
Chapter 5, which also discusses two important properties of a plant, namely its controlla-
bility and observability. In this chapter, it is first assumed that all the quantities defining
the state of a plant (called state variables) are available for exact measurement. However,
this assumption is not always practical, since some of the state variables may not be
measurable. Hence, we need a procedure for estimating the unmeasurable state variables
from the information provided by those variables that we can measure. Later sections of
Chapter 5 contain material about how this process of state estimation is carried out by
an observer, and how such an estimation can be incorporated into the control system in
the form of a compensator. Chapter 6 introduces the procedure of designing an optimal
control system, which means a control system meeting all the design requirements in
the most efficient manner. Chapter 6 also provides new computer programs for solving
important optimal control problems. Chapter 7 introduces the treatment of random signals
generated by stochastic systems, and extends the philosophy of state estimation to plants
with noise, which is treated as a random signal. Here we also learn how an optimal
state estimation can be carried out, and how a control system can be made robust with
respect to measurement and process noise. Chapter 8 presents the design and analysis of
digital control systems (also called discrete time systems), and covers many modern digital
control applications. Finally, Chapter 9 introduces various advanced topics in modern
control, such as advanced robust control techniques, nonlinear control, etc. Some of the
topics contained in Chapter 9, such as input shaping control and rate-weighted optimal
control, are representative of the latest control techniques.
At the end of each chapter (except Chapter 1), you will find exercises that help you
grasp the essential concepts presented in the chapter. These exercises range from analytical
to numerical, and are designed to make you think, rather than apply ready-made formulas
for their solution. At the end of the book, answers to some numerical exercises are
provided to let you check the accuracy of your solutions.

Modern control design and analysis requires a lot of linear algebra (matrix multipli-
cation, inversion, calculation of eigenvalues and eigenvectors, etc.) which is not very
easy to perform manually. Try to remember the last time you attempted to invert a
4 x 4 matrix by hand! It can be a tedious process for any matrix whose size is greater
than 3 x 3 . The repetitive linear algebraic operations required in modern control design
and analysis are, however, easily implemented on a computer with the use of standard
programming techniques. A useful high-level programming language available for such
tasks is MATLAB®, which not only provides the tools for carrying out the matrix
operations, but also contains several other features, such as the time-step integration
of linear or nonlinear governing differential equations, which are invaluable in modern
control analysis and design. For example, in Figure 1.6 the time-history of a double-
pendulum has been obtained by solving the coupled governing nonlinear differential
equations using MATLAB. Many of the numerical examples contained in this book have
been solved using MATLAB. Although not required for doing the exercises at the end of
each chapter, it is recommended that you familiarize yourself with this useful language
with the help of Appendix A, which contains information about the commonly used
MATLAB operators in modern control applications. Many people, who shied away from
modern control courses because of their dread of linear algebra, began taking interest
in the subject when MATLAB became handy. Nowadays, personal computer versions of
MATLAB are commonly applied to practical problems across the board, including control
of aerospace vehicles, magnetically levitated trains, and even stock-market applications.
You may find MATLAB available at your university's or organization's computer center.
While Appendix A contains useful information about MATLAB which will help you in
solving most of the modern control problems, it is recommended that you check with
the MATLAB user's guide [1] at your computer center for further details that may be
required for advanced applications.
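
For instance, the 4 x 4 inversion and the eigenvalue problem mentioned above each take one MATLAB line (a trivial sketch of ours, with an arbitrary example matrix):

A = [0 1 0 0; 0 0 1 0; 0 0 0 1; -24 -50 -35 -10];  % an arbitrary 4 x 4 matrix
Ainv = inv(A)         % its inverse
[V,D] = eig(A)        % eigenvectors (columns of V) and eigenvalues (diagonal of D)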
SIMULINK® is a very useful Graphical User Interface (GUI) tool for modeling control
systems, and simulating their time response to specified inputs. It lets you work directly
with the block-diagrams (rather than mathematical equations) for designing and analyzing

® MATLAB, SIMULINK and Control System Toolbox are registered trademarks of MathWorks, Inc.
control systems. For this purpose, numerous linear and nonlinear blocks, input sources,
and output devices are available, so that you can easily put together almost any practical
control system. Another advantage of using SIMULINK is that it works seamlessly with
MATLAB, and can draw upon the vast programming features and function library of
MATLAB. A SIMULINK block-diagram can be converted into a MATLAB program
(called M-file). In other words, a SIMULINK block-diagram does all the programming
for you, so that you are free to worry about other practical aspects of a control system's
design and implementation. With advanced features (such as the Real Time Workshop for
C-code generation, and specialized block-sets) one can also use SIMULINK for practical
implementation of control systems [2]. We will be using SIMULINK as a design and
analysis tool, especially in simulating the response of a control system designed with
MATLAB.
For solving many problems in control, you will find the Control System Toolbox® [3]
for MATLAB very useful. It contains a set of MATLAB M-files of numerical procedures
that are commonly used to design and analyze modern control systems. The Control
System Toolbox is available at a small extra cost when you purchase MATLAB, and is
likely to be installed at your computer center if it has MATLAB. Many solved examples
presented in this book require the Control System Toolbox. In the solved examples,
effort has been made to ensure that the application of MATLAB is clear and direct. This
is done by directly presenting the MATLAB line commands - and some MATLAB M-
files - followed by the numerical values resulting after executing those commands. Since
the commands are presented exactly as they would appear in a MATLAB workspace, the
reader can easily reproduce all the computations presented in the book. Again, take some
time to familiarize yourself with MATLAB, SIMULINK and the Control System Toolbox
by reading Appendix A.

References
1. MATLAB® 6.0 - User's Guide, The MathWorks, Inc., Natick, MA, USA, 2000.
2. SIMULINK® 4.0 - User's Guide, The MathWorks, Inc., Natick, MA, USA, 2000.
3. Control System Toolbox 5.0 for Use with MATLAB® - User's Guide, The MathWorks, Inc., Natick, MA, USA, 2000.
2
Linear Systems and Classical Control

2.1 How Valid is the Assumption of Linearity?

It was mentioned in Chapter 1 that we need differential equations to describe the behavior
of a system, and that the mathematical nature of the governing differential equations is
another way of classifying control systems. In a large class of engineering applications,
the governing differential equations can be assumed to be linear. The concept of linearity
is one of the most important assumptions often employed in studying control systems.
However, the following questions naturally arise: what is this assumption and how valid
is it anyway? To answer these questions, let us consider lumped parameter systems
for simplicity, even though all the arguments presented below are equally applicable
to distributed systems. (Recall that lumped parameter systems are those systems whose
behavior can be described by ordinary differential equations.) Furthermore, we shall
confine our attention (until Section 2.13) to single-input, single-output (SISO) systems.
For a general lumped parameter, SISO system (Figure 2.1) with input u(t) and output
y(t), the governing ordinary differential equation can be written as

y^{(n)}(t) = f(y^{(n-1)}(t), y^{(n-2)}(t), \ldots, y^{(1)}(t), y(t), u^{(m)}(t), u^{(m-1)}(t), \ldots, u^{(1)}(t), u(t), t)    (2.1)

where y^{(k)} denotes the kth derivative of y(t) with respect to time, t, e.g. y^{(n)} = d^n y/dt^n,
y^{(n-1)} = d^{n-1}y/dt^{n-1}, and u^{(k)} denotes the kth time derivative of u(t). This notation for
derivatives of a function will be used throughout the book. In Eq. (2.1), f(·) denotes a
function of all the time derivatives of y(t) of order (n − 1) and less, as well as the time
derivatives of u(t) of order m and less, and time, t. For most systems m ≤ n, and such
systems are said to be proper.
Since n is the order of the highest time derivative of y(t) in Eq. (2.1), the
system is said to be of order n. To determine the output, y(t), Eq. (2.1) must be
somehow integrated in time, with u(t) known and for specific initial conditions
y(0), y^{(1)}(0), y^{(2)}(0), \ldots, y^{(n-1)}(0). Suppose we are capable of solving Eq. (2.1), given
any time varying input, u(t), and the initial conditions. For simplicity, let us assume that
the initial conditions are zero, and we apply an input, u(t), which is a linear combination
of two different inputs, u_1(t) and u_2(t), given by

u(t) = c_1 u_1(t) + c_2 u_2(t)    (2.2)


Figure 2.1 A general lumped parameter system with input, u(t), and output, y(t)

where c_1 and c_2 are constants. If the resulting output, y(t), can be written as

y(t) = c_1 y_1(t) + c_2 y_2(t)    (2.3)

where y_1(t) is the output when u_1(t) is the input, and y_2(t) is the output when u_2(t) is the
input, then the system is said to be linear; otherwise it is called nonlinear. In short, a linear
system is said to obey the superposition principle, which states that the output of a linear
system to an input consisting of a linear combination of two different inputs (Eq. (2.2))
can be obtained by linearly superposing the outputs to the respective inputs (Eq. (2.3)).
(The superposition principle is also applicable for non-zero initial conditions, if the initial
conditions on y(t) and its time derivatives are linear combinations of the initial conditions
on y_1(t) and y_2(t), and their corresponding time derivatives, with the constants c_1 and
c_2.)
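To make the superposition test concrete, here is a minimal numerical sketch (not from the original text; the first-order system y^{(1)}(t) = −y(t) + u(t), the inputs, and the constants are assumed purely for illustration):

% Numerical check of the superposition principle, Eqs. (2.2) and (2.3),
% for an assumed linear system y' = -y + u with zero initial conditions.
c1 = 2; c2 = -3;                               % assumed constants
u1 = @(t) sin(t); u2 = @(t) exp(-t);           % two assumed inputs
sim = @(u) ode45(@(t,y) -y + u(t), [0 5], 0);  % integrate y' = -y + u, y(0) = 0
s1 = sim(u1); s2 = sim(u2); s12 = sim(@(t) c1*u1(t) + c2*u2(t));
t = 0:0.1:5;
% The output to c1*u1 + c2*u2 matches c1*y1 + c2*y2 to integration tolerance:
max(abs(deval(s12,t) - (c1*deval(s1,t) + c2*deval(s2,t))))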
Since linearity is a mathematical property of the governing differential equations,
it is possible to say merely by inspecting the differential equation whether a system is
linear. If the function f(·) in Eq. (2.1) contains no powers (other than one) of y(t) and
its derivatives, or mixed products of y(t), its derivatives, and u(t) and its derivatives,
or transcendental functions of y(t) and u(t), then the system will obey the superposition
principle, and its linear differential equation can be written as

a_n y^{(n)}(t) + a_{n-1} y^{(n-1)}(t) + \cdots + a_1 y^{(1)}(t) + a_0 y(t) = b_m u^{(m)}(t) + b_{m-1} u^{(m-1)}(t) + \cdots + b_1 u^{(1)}(t) + b_0 u(t)    (2.4)

Note that even though the coefficients a_0, a_1, \ldots, a_n and b_0, b_1, \ldots, b_m (called the
parameters of a system) in Eq. (2.4) may be varying with time, the system given by
Eq. (2.4) is still linear. A system with time-varying parameters is called a time-varying
system, while a system whose parameters are constant with time is called a time-invariant
system. In the present chapter, we will be dealing only with linear, time-invariant systems.
It is possible to express Eq. (2.4) as a set of lower order differential equations, whose
individual orders add up to n. Hence, the order of a system is the sum of orders of all
the differential equations needed to describe its behavior.

Example 2.1
For an electrical network shown in Figure 2.2, the governing differential equations
are the following:

v_1^{(1)}(t) = -v_1(t)(1/R_1 + 1/R_3)/C_1 + v_2(t)/(C_1 R_3) + e(t)/(R_1 C_1)    (2.5a)

v_2^{(1)}(t) = v_1(t)/(C_2 R_3) - (v_2(t)/C_2)(1/R_2 + 1/R_3) + e(t)/(R_2 C_2)    (2.5b)

Figure 2.2 Electrical network for Example 2.1

where v_1(t) and v_2(t) are the voltages of the two capacitors, C_1 and C_2, e(t) is the
applied voltage, and R_1, R_2, and R_3 are the three resistances as shown.
On inspection of Eq. (2.5), we can see that the system is described by two first
order, ordinary differential equations. Therefore, the system is of second order.
Upon the substitution of Eq. (2.5b) into Eq. (2.5a), and by eliminating v_2(t), we get
the following second order differential equation:

C_1 C_2 v_1^{(2)}(t) + [C_2(1/R_1 + 1/R_3) + C_1(1/R_2 + 1/R_3)] v_1^{(1)}(t) + [(1/R_2 + 1/R_3)(1/R_1 + 1/R_3) - 1/R_3^2] v_1(t) = C_2 e^{(1)}(t)/R_1 + (1/R_2 + 1/R_3) e(t)/R_1 + e(t)/(R_2 R_3)    (2.6)

Assuming y(t) = v_1(t) and u(t) = e(t), and comparing Eq. (2.6) with Eq. (2.4), we
can see that there are no higher powers, transcendental functions, or mixed products
of the output, input, and their time derivatives. Hence, the system is linear.
Suppose we do not have an input, u(t), applied to the system in Figure 2.1.
Such a system is called an unforced system. Substituting u(t) = u^{(1)}(t) = u^{(2)}(t) =
\cdots = u^{(m)}(t) = 0 into Eq. (2.1), we can obtain the following governing differential
equation for the unforced system:

y^{(n)}(t) = f(y^{(n-1)}(t), y^{(n-2)}(t), \ldots, y^{(1)}(t), y(t), 0, 0, \ldots, 0, 0, t)    (2.7)

In general, the solution, y(t), to Eq. (2.7) for a given set of initial conditions is
a function of time. However, there may also exist special solutions to Eq. (2.7)
which are constant. Such constant solutions for an unforced system are called its
equilibrium points, because the system continues to be at rest when it is already
at such points. A large majority of control systems are designed for keeping a
plant at one of its equilibrium points, such as the cruise-control system of a car
and the autopilot of an airplane or missile, which keep the vehicle moving at a
constant velocity. When a control system is designed for maintaining the plant at
an equilibrium point, then only small deviations from the equilibrium point need to
be considered for evaluating the performance of such a control system. Under such
circumstances, the time behavior of the plant and the resulting control system can
generally be assumed to be governed by linear differential equations, even though
the governing differential equations of the plant and the control system may be
nonlinear. The following examples demonstrate how a nonlinear system can be
linearized near its equilibrium points. Also included is an example which illustrates
that such a linearization may not always be possible.

Example 2.2
Consider a simple pendulum (Figure 2.3) consisting of a point mass, m, suspended
from a hinge at point O by a rigid massless link of length L. The equation of motion
of the simple pendulum in the absence of an externally applied torque about point
O, in terms of the angular displacement, θ(t), can be written as

L\theta^{(2)}(t) + g\sin(\theta(t)) = 0    (2.8)

This governing equation indicates a second-order system. Due to the presence of
sin(θ) - a transcendental function of θ - Eq. (2.8) is nonlinear. From our everyday
experience with a simple pendulum, it is clear that it can be brought to rest at only
two positions, namely θ = 0 and θ = π rad (180°). Therefore, these two positions
are the equilibrium points of the system given by Eq. (2.8). Let us examine the
behavior of the system near each of these equilibrium points.
Since the only nonlinear term in Eq. (2.8) is sin(θ), if we can show that sin(θ) can
be approximated by a linear term, then Eq. (2.8) can be linearized. Expanding sin(θ)
about the equilibrium point θ = 0, we get the following Taylor series expansion:

\sin(\theta) = \theta - \theta^3/3! + \theta^5/5! - \theta^7/7! + \cdots    (2.9)

If we assume that the motion of the pendulum about θ = 0 consists of small angular
displacements (say θ < 10°), then sin(θ) ≈ θ, and Eq. (2.8) becomes

L\theta^{(2)}(t) + g\theta(t) = 0    (2.10)

Figure 2.3 A simple pendulum of length L and mass m


Similarly, expanding sin(θ) about the other equilibrium point, θ = π, by assuming
a small angular displacement, φ, such that φ = θ − π, and noting that sin(θ) =
−sin(φ) ≈ −φ, we can write Eq. (2.8) as

L\phi^{(2)}(t) - g\phi(t) = 0    (2.11)

We can see that both Eqs. (2.10) and (2.11) are linear. Hence, the nonlinear
system given by Eq. (2.8) has been linearized about both of its equilibrium points.
Second order linear ordinary differential equations (especially homogeneous ones
like Eqs. (2.10) and (2.11)) can be solved analytically. It is well known (and you
may verify) that the solution to Eq. (2.10) is of the form θ(t) = A sin(t(g/L)^{1/2}) +
B cos(t(g/L)^{1/2}), where the constants A and B are determined from the initial
conditions, θ(0) and θ^{(1)}(0). This solution implies that θ(t) oscillates about the
equilibrium point θ = 0. However, the solution to Eq. (2.11) is of the form φ(t) =
C exp(t(g/L)^{1/2}), where C is a constant, which indicates an exponentially increasing
φ(t) if φ(0) ≠ 0. (This nature of the equilibrium point at θ = π can be experimen-
tally verified by anybody trying to stand on one's head for any length of time!)
The comparison of the solutions to the linearized governing equations close to the
equilibrium points (Figure 2.4) brings us to an important property of an equilibrium
point, called stability.

Figure 2.4 Solutions to the governing differential equation linearized about the two equilibrium
points (θ = 0 and θ = π)
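The two responses plotted in Figure 2.4 are easily reproduced numerically. A minimal MATLAB sketch (not from the original text; the values g = 9.8 m/s^2, L = 1 m, and θ(0) = 0.2 rad are assumed purely for illustration) that compares the nonlinear pendulum, Eq. (2.8), with the linearized solution of Eq. (2.10):

% Compare the nonlinear pendulum, Eq. (2.8), with its linearization about
% theta = 0, Eq. (2.10). Assumed illustrative values: g = 9.8, L = 1.
g = 9.8; L = 1;
f = @(t,x) [x(2); -(g/L)*sin(x(1))];   % nonlinear state equations of Eq. (2.8)
[t,x] = ode45(f, [0 5], [0.2; 0]);     % theta(0) = 0.2 rad, theta'(0) = 0
thlin = 0.2*cos(t*sqrt(g/L));          % solution of Eq. (2.10), same initial conditions
plot(t, x(:,1), t, thlin, '--'), xlabel('Time (s)'), ylabel('\theta (rad)')

For small initial displacements the two curves remain close, illustrating the validity of the linearization near the stable equilibrium point.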

Stability is defined as the ability of a system to approach one of its equilibrium points
once displaced from it. We will discuss stability in detail later. Here, suffice it to say
that the pendulum is stable about the equilibrium point θ = 0, but unstable about the
equilibrium point θ = π. While Example 2.2 showed how a nonlinear system can be
linearized close to its equilibrium points, the following example illustrates how a nonlinear
system's description can be transformed into a linear system description through a clever
change of coordinates.

Example 2.3
Consider a satellite of mass m in an orbit about a planet of mass M (Figure 2.5).
The distance of the satellite from the center of the planet is denoted r(t), while its
orientation with respect to the planet's equatorial plane is indicated by the angle
θ(t), as shown. Assuming there are no gravitational anomalies that cause a departure
from Newton's inverse-square law of gravitation, the governing equation of motion
of the satellite can be written as

r^{(2)}(t) - h^2/r(t)^3 + k^2/r(t)^2 = 0    (2.12)

where h is the constant angular momentum, given by

h = r(t)^2 \theta^{(1)}(t) = constant    (2.13)

and k^2 = GM, with G being the universal gravitational constant.


Equation (2.12) represents a nonlinear, second order system. However, since we
are usually interested in the path (or the shape of the orbit) of the satellite, given
by r(θ), rather than its distance from the planet's center as a function of time, r(t),
we can transform Eq. (2.12) into the following linear differential equation by using
the coordinate transformation u(θ) = 1/r(θ):

u^{(2)}(\theta) + u(\theta) - k^2/h^2 = 0    (2.14)

Figure 2.5 A satellite of mass m in orbit around a planet of mass M, at a distance r(t) from the
planet's center and azimuth angle θ(t) from the equatorial plane

Being a linear, second order ordinary differential equation (similar to Eq. (2.10)),
Eq. (2.14) is easily solved for u(θ), and the solution transformed back to r(θ),
given by

r(\theta) = (h^2/k^2)/[1 + A(h^2/k^2)\cos(\theta - B)]    (2.15)

where the constants A and B are determined from r(θ) and r^{(1)}(θ) specified at given
values of θ. Such specifications are called boundary conditions, because they refer
to points in space, as opposed to initial conditions, when quantities at given instants
of time are specified. Equation (2.15) can represent a circle, an ellipse, a parabola,
or a hyperbola, depending upon the magnitude of A(h^2/k^2) (called the eccentricity
of the orbit).
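A minimal sketch (not from the original text; the values of h^2/k^2, the eccentricity, and B = 0 are assumed purely for illustration) that plots the orbit shape given by Eq. (2.15):

% Plot the orbit of Eq. (2.15) for assumed orbit constants.
p  = 1;                    % h^2/k^2, assumed for illustration
ec = 0.6;                  % eccentricity A(h^2/k^2), assumed (< 1 gives an ellipse)
th = 0:0.01:2*pi;
r  = p./(1 + ec*cos(th));  % Eq. (2.15) with B = 0
polar(th, r)               % a closed (elliptical) orbit, since ec < 1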
Note that we could also have linearized Eq. (2.12) about one of its equilibrium
points, as we did in Example 2.2. One such equilibrium point is given by r(t) =
constant, which represents a circular orbit. Many practical orbit control applications
consist of minimizing deviations from a given circular orbit using rocket thrusters
to provide radial acceleration (i.e. acceleration along the line joining the satellite
and the planet) as an input, u(t), which is based upon the measured deviation from
the circular path fed back to an onboard controller, as shown in Figure 2.6. In such
a case, the governing differential equation is no longer homogeneous like Eq. (2.12),
but has a non-homogeneous forcing term on the right-hand side, given by

r^{(2)}(t) - h^2/r(t)^3 + k^2/r(t)^2 = u(t)    (2.16)

Since the deviations from a given circular orbit are usually small, Eq. (2.16) can be
suitably linearized about the equilibrium point r(t) = C. (This linearization is left
as an exercise for you at the end of the chapter.)

Figure 2.6 Orbit feedback control system for maintaining a circular orbit of a satellite
around a planet

Examples 2.2 and 2.3 illustrated how a nonlinear system can be linearized for practical
control applications. However, as pointed out earlier, it is not always possible to do so.
If a nonlinear system has to be moved from one equilibrium point to another (such as
changing the speed or altitude of a cruising airplane), the assumption of linearity that is
possible in the close neighborhood of each equilibrium point disappears as we cross the
nonlinear region between the equilibrium points. Also, if the motion of a nonlinear system
consists of large deviations from an equilibrium point, again the concept of linearity is not
valid. Lastly, the characteristics of a nonlinear system may be such that it does not have
any equilibrium point about which it can be linearized. The following missile guidance
example illustrates such a nonlinear system.
Example 2.4
Radar or laser-guided missiles used in modern warfare employ a special guidance
scheme which aims at flying the missile along a radar or laser beam that is illumi-
nating a moving target. The guidance strategy is such that a correcting command
signal (input) is provided to the missile if its flight path deviates from the moving
beam. For simplicity, let us assume that both the missile and the target are moving
in the same plane (Figure 2.7). Although the distance from the beam source to the
target, R_T(t), is not known, it is assumed that the angles made by the missile and
the target with respect to the beam source, θ_M(t) and θ_T(t), are available for precise
measurement. In addition, the distance of the missile from the beam source, R_M(t),
is also known at each instant.
A guidance law provides the following normal acceleration command signal,
a_c(t), to the missile:

(2.17)

As the missile is usually faster than the target, if the angular deviation [θ_T(t) −
θ_M(t)] is made small enough, the missile will intercept the target. The feedback
guidance scheme of Eq. (2.17) is called beam-rider guidance, and is shown in
Figure 2.8.

Figure 2.7 A beam-guided missile follows a beam that continuously illuminates a moving target
located at distance R_T(t) from the beam source

Figure 2.8 Beam-rider closed-loop guidance for a missile


The beam-rider guidance can be significantly improved in performance if we can
measure the angular velocity, θ_T^{(1)}(t), and the angular acceleration, θ_T^{(2)}(t), of the
target. Then the beam's normal acceleration can be determined from the following
equation:

(2.18)

In such a case, along with a_c(t) given by Eq. (2.17), an additional command
signal (input) can be provided to the missile in the form of the missile's acceleration
perpendicular to the beam, a_Mc(t), given by

(2.19)

Since the final objective is to make the missile intercept the target, it must be
ensured that θ_M^{(1)}(t) = θ_T^{(1)}(t) and θ_M^{(2)}(t) = θ_T^{(2)}(t), even though [θ_T(t) − θ_M(t)] may
not be exactly zero. (To understand this philosophy, remember how we catch up
with a friend's car so that we can chat with her. We accelerate (or decelerate) until
our velocity (and acceleration) become identical with our friend's car; then we can
talk with her, although the two cars are abreast, they are not exactly in the same
position.) Hence, the following command signal for the missile's normal acceleration
perpendicular to the beam must be provided:

(2.20)
The guidance law given by Eq. (2.20) is called command line-of-sight guidance,
and its implementation along with the beam-rider guidance is shown in the block
diagram of Figure 2.9. It can be seen in Figure 2.9 that while θ_T(t) is being fed
back, the angular velocity and acceleration of the target, θ_T^{(1)}(t) and θ_T^{(2)}(t), respec-
tively, are being fed forward to the controller. Hence, similar to the control system
of Figure 1.4, additional information about the target is being provided by a feedfor-
ward loop to improve the closed-loop performance of the missile guidance system.

Figure 2.9 Beam-rider and command line-of-sight guidance for a missile

Note that both Eq. (2.17) and Eq. (2.20) are nonlinear in nature, and generally cannot
be linearized about an equilibrium point. This example shows that the concept of linearity
is not always valid. For more information on missile guidance strategies, you may refer
to the excellent book by Zarchan [1].

2.2 Singularity Functions


It was mentioned briefly in Chapter 1 that some peculiar, well known input functions are
generally applied to test the behavior of an unknown system. A set of such test functions
is called singularity functions. The singularity functions are important because they can be
used as building blocks to construct any arbitrary input function and, by the superposition
principle (Eq. (2.3)), the response of a linear system to any arbitrary input can be easily
obtained as the linear superposition of responses to singularity functions. The two distinct
singularity functions commonly used for determining an unknown system's behavior are
the unit impulse and unit step functions. A common property of these functions is that
they are continuous in time, except at a given time. Another interesting fact about the
singularity functions is that they can be derived from each other by differentiation or
integration in time.
The unit impulse function (also called the Dirac delta function), δ(t − a), is seen in
Figure 2.10 to be a very large spike occurring for a very small duration, applied at time
t = a, such that the total area under the curve (shaded region) is unity. A unit impulse
function can be multiplied by a constant to give a general impulse function (whose area
under the curve is not unity). From this description, we recognize an impulse function to
be the force one feels when hit by a car - and in all other kinds of impacts.
The height of the rectangular pulse in Figure 2.10 is 1/ε, whereas its width is ε seconds,
ε being a very small number. In the limit ε → 0, the unit impulse function tends to infinity
(i.e. δ(t − a) → ∞). The unit impulse function shown in Figure 2.10 is an idealization
of the actual impulse, whose shape is not rectangular, because it takes some time to reach
the maximum value, unlike the unit impulse function (which becomes very large instan-
taneously). Mathematically, the unit impulse function can be described by the following
equations:

\delta(t - a) = 0, for t \neq a    (2.21)

\int_{-\infty}^{\infty} \delta(t - a)\,dt = 1    (2.22)

Figure 2.10 The unit impulse function; a pulse of infinitesimal duration (ε) and very large magnitude (1/ε) such that its total area is unity
Note that δ(t − a) is discontinuous at t = a. Furthermore, since the unit impulse
function is non-zero only in the period a ≤ t ≤ a + ε, we can also express Eqs. (2.21)
and (2.22) by

\int_{a}^{a+\varepsilon} \delta(t - a)\,dt = 1    (2.23)

However, when utilizing the unit impulse function for control applications, Eq. (2.22) is
much more useful. In fact, if δ(t − a) appears inside an integral with infinite integra-
tion limits, then such an integral is very easily carried out with the use of Eqs. (2.21)
and (2.22). For example, if f(t) is a continuous function, then the well known Mean
Value Theorem of integral calculus can be applied to show that

\int_{T_1}^{T_2} f(t)\delta(t - a)\,dt = f(a)\int_{T_1}^{T_2} \delta(t - a)\,dt = f(a)    (2.24)

where T_1 < a < T_2. Equation (2.24) indicates an important property of the unit impulse
function called the sampling property, which allows the time integral of any continuous
function f(t) weighted by δ(t − a) to be simply equal to the function f(t) evaluated at
t = a, provided the limits of integration bracket the time t = a.
The unit step function, u_s(t − a), is shown in Figure 2.11 to be a jump of unit magni-
tude at time t = a. It is aptly named, because it resembles a step of a staircase. Like the
unit impulse function, the unit step function is also a mathematical idealization, because
it is impossible to apply a non-zero input instantaneously. Mathematically, the unit step
function can be defined as follows:

u_s(t - a) = \begin{cases} 0, & t < a \\ 1, & t > a \end{cases}    (2.25)

It is clear that u_s(t − a) is discontinuous at t = a, and its time derivative at t = a is
infinite. Recalling from Figure 2.10 that in the limit ε → 0 the unit impulse function
tends to infinity (i.e. δ(t − a) → ∞), we can express the unit impulse function, δ(t − a),
as the time derivative of the unit step function, u_s(t − a), at time t = a. Also, since the
time derivative of u_s(t − a) is zero at all times, except at t = a (where it is infinite), we
can write

\delta(t - a) = du_s(t - a)/dt    (2.26)

Figure 2.11 The unit step function, u_s(t − a); a jump of unit magnitude at time t = a

Figure 2.12 The unit ramp function, r(t − a); a ramp of unit slope applied at time t = a

Or, conversely, the unit step function is the time integral of the unit impulse function,
given by

u_s(t - a) = \int_{-\infty}^{t} \delta(\tau - a)\,d\tau    (2.27)

A useful function related to the unit step function is the unit ramp function, r(t − a),
which is seen in Figure 2.12 to be a ramp of unit slope applied at time t = a. It is like an
upslope of 45° you suddenly encounter while driving down a perfectly flat highway
at t = a. Mathematically, r(t − a) is given by

r(t - a) = \begin{cases} 0, & t < a \\ (t - a), & t > a \end{cases}    (2.28)

Note that r(t − a) is continuous everywhere, but its slope is discontinuous at t = a.
Comparing Eq. (2.28) with Eq. (2.25), it is clear that

r(t - a) = (t - a)u_s(t - a)    (2.29)

or

r(t - a) = \int_{-\infty}^{t} u_s(\tau - a)\,d\tau    (2.30)

Thus, the unit ramp function is the time integral of the unit step function, or conversely,
the unit step function is the time derivative of the unit ramp function, given by

u_s(t - a) = dr(t - a)/dt    (2.31)

The basic singularity functions (unit impulse and step), and their relatives (unit ramp
function) can be used to synthesize more complicated functions, as illustrated by the
following examples.

Example 2.5
The rectangular pulse function, f(t), shown in Figure 2.13, can be expressed by
subtracting one step function from another as

f(t) = f_0[u_s(t + T/2) - u_s(t - T/2)]    (2.32)

Figure 2.13 The rectangular pulse function of magnitude f_0

Example 2.6
The decaying exponential function, f(t) (Figure 2.14), is zero before t = 0, and
decays exponentially from a magnitude of f_0 at t = 0. It can be expressed by
multiplying the unit step function with f_0 and a decaying exponential term, given by

f(t) = f_0 e^{-t/\tau} u_s(t)    (2.33)

Figure 2.14 The decaying exponential function of magnitude f_0

Example 2.7
The sawtooth pulse function, f(t), shown in Figure 2.15, can be expressed in terms
of the unit step and unit ramp functions as follows:

f(t) = (f_0/T)[r(t) - r(t - T)] - f_0 u_s(t - T)    (2.34)

Figure 2.15 The sawtooth pulse of height f_0 and width T
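Constructions such as Eqs. (2.32)-(2.34) are easy to check numerically. A minimal MATLAB sketch (not from the original text; f_0 = 1 and T = 2 are assumed purely for illustration) that builds the sawtooth pulse of Eq. (2.34) from step and ramp building blocks:

% Synthesize the sawtooth pulse of Eq. (2.34) from unit steps and ramps.
f0 = 1; T = 2;                       % assumed illustrative values
us = @(t,a) double(t >= a);          % unit step u_s(t - a), Eq. (2.25)
r  = @(t,a) (t - a).*us(t,a);        % unit ramp r(t - a), Eq. (2.29)
t  = -1:0.01:4;
f  = (f0/T)*(r(t,0) - r(t,T)) - f0*us(t,T);   % Eq. (2.34)
plot(t, f), xlabel('Time (s)'), ylabel('f(t)')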

After going through Examples 2.5-2.7, and with a little practice, you can decide merely
by looking at a given function how to synthesize it using the singularity functions. The
unit impulse function has a special place among the singularity functions, because it can be
used to describe any arbitrarily shaped function as a sum of suitably scaled unit impulses,
δ(t − a), applied at appropriate times, t = a. This fact is illustrated in Figure 2.16, where
the function f(t) is represented by

f(t) = \sum_{\tau=-\infty}^{t} f(\tau)\delta(t - \tau)\Delta\tau    (2.35)

or, in the limit Δτ → 0,

f(t) = \int_{-\infty}^{t} f(\tau)\delta(t - \tau)\,d\tau    (2.36)

Figure 2.16 Any arbitrary function, f(t), can be represented by summing up unit impulse functions,
δ(t − τ), applied at t = τ and multiplied by the area f(τ)Δτ, for all values of τ from −∞ to t

Equation (2.36) is one of the most important equations of modern control theory,
because it lets us evaluate the response of a linear system to any arbitrary input, f(t), by
the use of the superposition principle. We will see how this is done when we discuss the
response to singularity functions in Section 2.5. While the singularity functions and their
relatives are useful as test inputs for studying the behavior of control systems, we can also
apply some well known continuous time functions as inputs to a control system. Examples
of continuous time test functions are the harmonic functions sin(ωt) and cos(ωt), where
ω is a frequency, called the excitation frequency. As an alternative to singularity inputs
(which are often difficult to apply in practical cases), measuring the output of a linear
system to harmonic inputs gives essential information about the system's behavior, which
can be used to construct a model of the system that will be useful in designing a control
system. We shall study next how such a model can be obtained.

2.3 Frequency Response


Frequency response is related to the steady-state response of a system when a harmonic
function is applied as the input. Recall from Section 1.2 that the steady-state response is the
linear system's output after the transient response has decayed to zero. Of course, the
requirement that the transient response should have decayed to zero after some time calls
for the linear system to be stable. (An unstable system will have a transient response
shooting to infinite magnitudes, irrespective of what input is applied.) The steady-state
response of a linear system is generally of the same shape as that of the applied input,
e.g. a step input applied to a linear, stable system yields a steady-state output which
is also a step function. Similarly, the steady-state response of a linear, stable system
to a harmonic input is also harmonic. Studying a linear system's characteristics based
upon the steady-state response to harmonic inputs constitutes a range of classical control
methods called the frequency response methods. Such methods formed the backbone of
the classical control theory developed between 1 900-60, because the modern state-space
methods (to be discussed in Chapter 3) were unavailable then to give the response of
a linear system to any arbitrary input directly in the time domain (i.e. as a function of
time). Modern control techniques still employ frequency response methods to shed light
on some important characteristics of an unknown control system, such as the robustness of
multi-variable (i.e. multi-input, multi-output) systems. For these reasons, we will discuss
frequency response methods here.
A simple choice of the harmonic input, u(t), can be

u(t) = u_0\cos(\omega t) or u(t) = u_0\sin(\omega t)    (2.37)

where u_0 is the constant amplitude and ω is the frequency of excitation (sometimes called
the driving frequency). If we choose to write the input (and output) of a linear system
as complex functions, the governing differential equation can be replaced by complex
as complex functions, the governing differential equation can be replaced by complex
algebraic equations. This is an advantage, because complex algebra is easier to deal with
than differential equations. Furthermore, there is a vast factory of analytical machinery
for dealing with complex functions, as we will sample later in this chapter. For these
powerful reasons, let us express the harmonic input in the complex space as

u(t) = u_0 e^{i\omega t}    (2.38)

where i = \sqrt{-1} (a purely imaginary quantity), and

e^{i\omega t} = \cos(\omega t) + i\sin(\omega t)    (2.39)

Equation (2.39) is a complex representation in which cos(ωt) is called the real part of
e^{iωt} and sin(ωt) is called the imaginary part of e^{iωt} (because it is multiplied by the
imaginary number i). The complex space representation of the harmonic input given by
Eq. (2.38) is shown in Figure 2.17. The two axes of the complex plane are called the real

Figure 2.17 Complex space representation of a harmonic input, u(t)


and imaginary axis, respectively, as shown. Hence, the complex space representation of a
harmonic function is a device for representing both the possibilities of a simple harmonic
input, namely u_0 cos(ωt) and u_0 sin(ωt), in one expression. By obtaining
a steady-state response to the complex input given by Eq. (2.38), we will be obtaining
simultaneously the steady-state responses of a linear, stable system to u_0 cos(ωt) and
u_0 sin(ωt).
When you studied the solution of ordinary differential equations, you learnt that the solu-
tion consists of two parts - the complementary solution (or the solution to the unforced
differential equation (Eq. (2.7))), and a particular solution which depends upon the input.
While the transient response of a linear, stable system is largely described by the comple-
mentary solution, the steady-state response is the same as the particular solution at large
times. The particular solution is of the same form as the input, and must by itself
satisfy the differential equation. Hence, you can verify that the steady-state responses to
u(t) = u_0 cos(ωt) and u(t) = u_0 sin(ωt) are given by y_ss(t) = y_0 cos(ωt) and y_ss(t) =
y_0 sin(ωt), respectively (where y_0 is the amplitude of the resulting harmonic, steady-state
output, y_ss(t)), by plugging the corresponding expressions of u(t) and y_ss(t) into Eq. (2.4),
which represents a general linear system. You will see that the equation is satisfied in
each case. In the complex space, we can write the steady-state response to a harmonic input
as follows:

y_{ss}(t) = y_0(i\omega)e^{i\omega t}    (2.40)
Here, the steady-state response amplitude, y_0, is a complex function of the frequency
of excitation, ω. We will shortly see the implications of a complex response amplitude.
Consider a linear, lumped parameter control system governed by Eq. (2.4), which can be
re-written as follows:

D_1\{y_{ss}(t)\} = D_2\{u(t)\}    (2.41)

where D_1{·} and D_2{·} are differential operators (i.e. they operate on the steady-state
output, y_ss(t), and the input, u(t), respectively, by differentiating them), given by

D_1\{\cdot\} = a_n d^n/dt^n + a_{n-1} d^{n-1}/dt^{n-1} + \cdots + a_1 d/dt + a_0    (2.42)

and

D_2\{\cdot\} = b_m d^m/dt^m + b_{m-1} d^{m-1}/dt^{m-1} + \cdots + b_1 d/dt + b_0    (2.43)

Then noting that

D_1\{e^{i\omega t}\} = [(i\omega)^n a_n + (i\omega)^{n-1} a_{n-1} + \cdots + (i\omega)a_1 + a_0]e^{i\omega t}    (2.44)

and

D_2\{e^{i\omega t}\} = [(i\omega)^m b_m + (i\omega)^{m-1} b_{m-1} + \cdots + (i\omega)b_1 + b_0]e^{i\omega t}    (2.45)

we can write, using Eq. (2.41),

y_0(i\omega) = G(i\omega)u_0    (2.46)


where G(iω) is called the frequency response of the system, and is given by

G(i\omega) = [(i\omega)^m b_m + (i\omega)^{m-1} b_{m-1} + \cdots + (i\omega)b_1 + b_0]/[(i\omega)^n a_n + (i\omega)^{n-1} a_{n-1} + \cdots + (i\omega)a_1 + a_0]    (2.47)
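Equation (2.47) is straightforward to evaluate numerically. A minimal sketch (not from the original text; the coefficient vectors and the frequency are assumed purely for illustration):

% Evaluate G(iw) of Eq. (2.47) at one frequency from coefficient vectors,
% using polyval on the numerator and denominator polynomials.
num = [0.5 0];      % [b1 b0], assumed for illustration
den = [1 30 1e6];   % [a2 a1 a0], assumed for illustration
w   = 500;          % excitation frequency (rad/s), assumed
G   = polyval(num, i*w)/polyval(den, i*w)   % complex value of G(iw)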

Needless to say, the frequency response G(iω) is also a complex quantity, consisting of
both real and imaginary parts. Equations (2.46) and (2.47) describe how the steady-state
output of a linear system is related to its input through the frequency response, G(iω).
Instead of the real and imaginary parts, an alternative description of a complex quantity is
in terms of its magnitude and phase, which can be thought of as a vector's length and
direction, respectively. Representation of a complex quantity as a vector in the complex
space is called a phasor. The length of the phasor in the complex space is called its
magnitude, while the angle made by the phasor with the real axis is called its phase. The
magnitude of a phasor represents the amplitude of a harmonic function, while the phase
determines the value of the function at t = 0. The phasor description of the steady-state
output amplitude is given by

y_0(i\omega) = |y_0(i\omega)|e^{i\alpha(\omega)}    (2.48)

where |y_0(iω)| is the magnitude and α(ω) is the phase of y_0(iω). It is easy to see that

|y_0(i\omega)| = [real\{y_0(i\omega)\}^2 + imag\{y_0(i\omega)\}^2]^{1/2};  \alpha(\omega) = \tan^{-1}[imag\{y_0(i\omega)\}/real\{y_0(i\omega)\}]    (2.49)

where real{·} and imag{·} denote the real and imaginary parts of a complex number. We
can also express the frequency response, G(iω), in terms of its magnitude, |G(iω)|, and
phase, φ(ω), as follows:

G(i\omega) = |G(i\omega)|e^{i\phi(\omega)}    (2.50)

Substituting Eqs. (2.48) and (2.50) into Eq. (2.46), it is clear that |y_0(iω)| = |G(iω)|u_0
and α(ω) = φ(ω). Hence, the steady-state response of a linear system excited by a
harmonic input of amplitude u_0 and zero phase (u_0 = u_0 e^{i0}) is given through Eq. (2.40) by

y_{ss}(t) = y_0(i\omega)e^{i\omega t} = |G(i\omega)|u_0 e^{i\phi(\omega)}e^{i\omega t} = |G(i\omega)|u_0 e^{i[\omega t + \phi(\omega)]}    (2.51)

Thus, the steady-state response to a zero phase harmonic input acquires its phase from the
frequency response, which is purely a characteristic of the linear system. You can easily
show that if the harmonic input has a non-zero phase, then the phase of the steady-state
response is the sum of the input phase and the phase of the frequency response, φ(ω). The
phasor representation of the steady-state response amplitude is depicted in Figure 2.18.

Figure 2.18 Phasor representations of a harmonic input, u(t), with zero phase and amplitude u_0,
and steady-state response amplitude, y_0(iω), of a linear system with frequency response, G(iω)

From Eq. (2.51), it is clear that the steady-state response is governed by the amplitude
of the harmonic input, u_0, and the magnitude and phase of the frequency response, G(iω),
which represent the characteristics of the system, and are functions of the frequency of
excitation. If we excite the system at various frequencies, and measure the magnitude and
phase of the steady-state response, we could obtain G(iω) using Eq. (2.51), and conse-
quently, crucial information about the system's characteristics (such as the coefficients a_k
and b_k in Eq. (2.47)). In general, we would require G(iω) at as many frequencies as are
the number of unknowns, a_k and b_k, in Eq. (2.47). Conversely, if we know a system's
parameters, we can study some of its properties, such as stability and robustness, using
frequency response plots (as discussed later in this chapter). Therefore, plots of the magnitude
and phase of G(iω) with frequency, ω, serve as important tools in the analysis and design
of control systems. Alternatively, we could derive the same information as obtained from
the magnitude and phase plots of G(iω) from the path traced by the tip of the frequency
response phasor in the complex space as the frequency of excitation is varied. Such a
plot of G(iω) in the complex space is called a polar plot (since it represents G(iω) in
terms of the polar coordinates, |G(iω)| and φ(ω)). Polar plots have an advantage over
the frequency plots of magnitude and phase in that both magnitude and phase can be
seen in one (rather than two) plots. Referring to Figure 2.18, it is easily seen that a phase
φ(ω) = 0° corresponds to the real part of G(iω), while the phase φ(ω) = 90° corresponds
to the imaginary part of G(iω). When talking about stability and robustness properties,
we will refer again to the polar plot.
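As a quick numerical illustration of Eq. (2.51), a minimal sketch (not from the original text; the first-order frequency response and the input values are assumed purely for illustration):

% Steady-state response to u(t) = u0*cos(w*t), via Eq. (2.51).
% Assumed illustrative frequency response: G(iw) = 1/(1 + 0.1*i*w).
u0 = 2; w = 5;                              % input amplitude and frequency, assumed
G  = 1/(1 + 0.1*i*w);                       % frequency response evaluated at w
yss = @(t) abs(G)*u0*cos(w*t + angle(G));   % |G(iw)|*u0*cos(wt + phi(w))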
Since the range of frequencies required to study a linear system is usually very large,
it is often useful to plot the magnitude, |G(iω)|, and phase, φ(ω), with respect to the
frequency, ω, on a logarithmic scale of frequency, called Bode plots. In Bode plots,
the magnitude is usually converted to gain in decibels (dB) by taking the logarithm of
|G(iω)| to the base 10, and multiplying the result by 20 as follows:

gain = 20\log_{10}|G(i\omega)|    (2.52)

As we will see later in this chapter, important information about a linear, single-input,
single-output system's behavior (such as stability and robustness) can be obtained from
the Bode plots, which serve as a cornerstone of classical control design techniques.
Factoring the polynomials in G(iω) (Eq. (2.47)) just produces addition of terms in
log_10|G(iω)|, which enables us to construct Bode plots with log-paper and pencil. Despite
this, Bode plots are cumbersome to construct by hand. With the availability of personal
computers and software with mathematical functions and graphics capability - such as
MATLAB - Bode plots can be plotted quite easily. In MATLAB, all you have to do is
specify a set of frequencies, ω, at which the gain and phase plots are desired, and use the
intrinsic functions abs and angle, which calculate the magnitude and phase (in radians),
respectively, of a complex number. If you have MATLAB's Control System Toolbox
(CST), the task of obtaining a Bode plot becomes even simpler through the use of the
command bode as follows:

»G=tf(num,den); bode(G,w) <enter> %a Bode plot will appear on the screen

Here » is the MATLAB prompt, <enter> denotes the pressing of the 'enter' (or 'return')
key, and the % sign indicates that everything to its right is a comment. In the bode
command, w is the specified frequency vector consisting of equally spaced frequency
values at which the gain and phase are desired, and G is the name given to the frequency
response of the linear, time-invariant system created using the CST LTI object function tf,
which requires num and den as the vectors containing the coefficients of the numerator and
denominator polynomials, respectively, of G(iω) in Eq. (2.47) in decreasing powers
of s. These coefficients should be specified as follows, before using the tf and bode
commands:

»num=[bm bm-1 ... b0]; den=[an an-1 ... a0]; <enter>

By using the MATLAB command logspace, the w vector can also be pre-specified as
follows:

»w=logspace(-2,3); <enter> %w consists of equally spaced frequencies in the range 0.01-1000 rad/s.

(Using a semicolon after a MATLAB command suppresses the print-out of the result on
the screen.)
Obviously, w must be specified before you use the bode command. If you don't specify
w, MATLAB will automatically generate an appropriate w vector, and create the plot.
Instead of plotting the Bode plot, you may like to store the magnitude (mag), |G(iω)|,
and the phase, φ(ω), at a given set of frequencies, w, for further processing by using the
following MATLAB command:

»[mag,phase,w]=bode(num,den,w); <enter>

For more information about Bode plots, do the following:

»help bode <enter>

The same procedure can be used to get help on any other MATLAB command. The
example given below will illustrate what Bode plots look like. Before we do that, let us
try to understand in physical terms what a frequency response (given by the Bode plot) is.
Musical notes produced by a guitar are related to its frequency response. The guitar
player makes each string vibrate at a particular frequency, and the notes produced by the
various strings are the measure of whether the guitar is being played well or not. Each
string of the guitar is capable of being excited at many frequencies, depending upon where
the string is struck, and where it is held. Just like the guitar, any system can be excited
at a set of frequencies. When we use the word excited, it is quite in the literal sense,
because it denotes the condition (called resonance) when the magnitude of the frequency
response, |G(iω)|, becomes very large, or infinite. The frequencies at which a system can
be excited are called its natural (or resonant) frequencies. The high-pitched voice of many
a diva has shattered the opera-house window panes while accidentally singing at one of
the natural frequencies of the window! If a system contains energy dissipative processes
(called damping), the frequency response magnitude at the natural frequencies is large, but
finite. An undamped system, however, has infinite response at each natural frequency. A
natural frequency is indicated by a peak in the gain plot, or as the frequency where the
phase changes by 180°. A practical limitation of Bode plots is that they show only an inter-
polation of the gain and phase through selected frequency points. The frequencies where
|G(iω)| becomes zero or infinite are excluded from the gain plot (since the logarithm of zero
is undefined, and an infinite gain cannot be shown on any scale). Instead, only frequency
points located close to the zero magnitude and infinite gain frequencies of
the system can be used in the gain plot. Thus, the Bode gain plot for a guitar will consist
of several peaks, corresponding to the natural frequencies of the notes being struck. One
could determine from the peaks the approximate values of the natural frequencies.

Example 2.8
Consider the electrical network shown in Figure 2.19, consisting of three resistances,
R_1, R_2, and R_3, a capacitor, C, and an inductor, L, connected to a voltage source,
e(t), and a switch, S. When the switch, S, is closed at time t = 0, the current
passing through the resistance R_1 is i_1(t), and that passing through the inductor, L,
is i_2(t). The input to the system is the applied voltage, e(t), and the output is the
current, i_2(t).
The two governing equations of the network are

e(t) = R_1 i_1(t) + R_3[i_1(t) - i_2(t)]    (2.53)

0 = R_2 i_2(t) + R_3[i_2(t) - i_1(t)] + L i_2^{(1)}(t) + (1/C)\int_0^t i_2(\tau)\,d\tau    (2.54)

:± e(f)

Figure 2.19 Electrical network for Example 2.8


Differentiating Eq. (2.54) and eliminating i_1(t), we can write

L i_2^{(2)}(t) + [(R_1 R_3 + R_1 R_2 + R_2 R_3)/(R_1 + R_3)] i_2^{(1)}(t) + (1/C) i_2(t) = [R_3/(R_1 + R_3)] e^{(1)}(t)    (2.55)

Comparing Eq. (2.55) with Eq. (2.4), we find that the system is linear and
of second order, with y(t) = i_2(t), u(t) = e(t), a_2 = L, a_1 = (R_1 R_3 + R_1 R_2 +
R_2 R_3)/(R_1 + R_3), a_0 = 1/C, b_0 = 0, and b_1 = R_3/(R_1 + R_3). Hence, from Eq. (2.47), the
frequency response of the system is given by

G(i\omega) = [R_3/(R_1 + R_3)](i\omega)/[L(i\omega)^2 + ((R_1 R_3 + R_1 R_2 + R_2 R_3)/(R_1 + R_3))(i\omega) + 1/C]    (2.56)

For R_1 = R_3 = 10 ohms, R_2 = 25 ohms, L = 1 henry, and C = 10^{-6} farad, the
frequency response is the following:

G(i\omega) = 0.5(i\omega)/[(i\omega)^2 + 30(i\omega) + 10^6]    (2.57)
Bode gain and phase plots of the frequency response given by Eq. (2.57) are plotted
in Figure 2.20 using the following MATLAB commands:

»w=logspace(-1,4); <enter>

(This command produces equally spaced frequency points on a logarithmic scale from
0.1 to 10000 rad/s, and stores them in the vector w.)

»G=i*w*0.5./(-w.*w+30*i*w+1e6); <enter>

Figure 2.20 Bode plot for the electrical network in Example 2.8; a peak in the gain plot and
the corresponding phase change of 180° denotes the natural frequency of the system
(This command calculates the value of G(iω) by Eq. (2.57) at each of the speci-
fied frequency points in w, and stores them in the vector G. Note the MATLAB
operations .* and ./ which allow element-by-element multiplication and division,
respectively, of two arrays (see Appendix B).)

»gain=20*log10(abs(G)); phase=180*angle(G)/pi; <enter>

(This command calculates the gain and phase of G(iω) at each frequency point in
w using the MATLAB intrinsic functions abs, angle, and log10, and stores them in
the vectors gain and phase, respectively. We are assuming, however, that G does
not become zero or infinite at any of the frequencies contained in w.)

»subplot(211), semilogx(w,gain), grid, subplot(212), semilogx(w,phase), grid <enter>

(This command produces the gain and phase Bode plots as two (unlabeled) subplots,
as shown in Figure 2.20. Labels for the axes can be added using the MATLAB
commands xlabel and ylabel.)
The Bode plots shown in Figure 2.20 are obtained much more easily through the
Control System Toolbox (CST) command bode as follows:

»num=[0.5 0]; den=[1 30 1e6]; g=tf(num,den); bode(g,w) <enter>

Note the peak in the gain plot of Figure 2.20 at the frequency ω = 1000 rad/s.
At the same frequency the phase changes by 180°. Hence, ω = 1000 rad/s is the
system's natural frequency. To verify whether this is the exact natural frequency,
we can rationalize the denominator in Eq. (2.57) (i.e. make it a real number by
multiplying both the numerator and denominator by a suitable complex factor - in this
case (−ω^2 + 10^6) − 30iω) and express the magnitude and phase as follows:

|G(i\omega)| = [225\omega^4 + 0.25\omega^2(-\omega^2 + 10^6)^2]^{1/2}/[(-\omega^2 + 10^6)^2 + 900\omega^2];  \phi(\omega) = \tan^{-1}[(-\omega^2 + 10^6)/(30\omega)]    (2.58)

From Eq. (2.58), it is clear that |G(iω)| has a maximum value (0.0167, or
−35.547 dB) - and φ(ω) jumps by 180° - at ω = 1000 rad/s. Hence, the natural
frequency is exactly 1000 rad/s. Figure 2.20 also shows that the gain at ω =
0.1 rad/s is −150 dB, which corresponds to |G(0.1i)| = 10^{-7.5} = 3.1623 × 10^{-8},
a small number. Equation (2.58) indicates that |G(0)| = 0. Hence, ω = 0.1 rad/s
approximates quite well the zero-frequency gain (called the DC gain) of the system.
The frequency response is used to define a linear system's property called bandwidth,
defined as the range of frequencies from zero up to the frequency, ω_b, where
|G(iω_b)| = 0.707|G(0)|. Examining the numerator of |G(iω)| in Eq. (2.58), we
see that |G(iω)| vanishes at ω = 0 and ω = 1999100 rad/s (the numerator roots
can be obtained using the MATLAB intrinsic function roots). Since |G(0)| = 0,
the present system's bandwidth is ω_b = 1999100 rad/s (which lies beyond the
frequency range of Figure 2.20). Since the degree of the denominator polynomial
of G(iω) in Eq. (2.47) is greater than that of the numerator polynomial, it follows
that |G(iω)| → 0 as ω → ∞. Linear systems with G(iω) having a higher degree
denominator polynomial (than the numerator polynomial) in Eq. (2.47) are called
strictly proper systems. Equation (2.58) also shows that φ(ω) → 90° as ω → 0,
and φ(ω) → −90° as ω → ∞. For a general system, φ(ω) → −k·90° as ω → ∞,
where k is the number by which the degree of the denominator polynomial of G(iω)
exceeds that of the numerator polynomial (in the present example, k = 1).
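These high- and low-frequency limits can be cross-checked numerically; a minimal sketch (not from the original text; the two test frequencies are chosen arbitrarily):

% Check the limits of Eq. (2.57): |G| -> 0 at very low and very high
% frequencies, with phase +90 deg as w -> 0 and -90 deg as w -> infinity.
w = [1e-3 1e9];                           % one very low, one very high frequency
G = 0.5*i*w./((i*w).^2 + 30*i*w + 1e6);   % Eq. (2.57), element-wise
[abs(G); angle(G)*180/pi]                 % magnitudes and phases (degrees)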
Let us now draw a polar plot of G(iω) as follows (note that we need more
frequency points close to the natural frequency for a smooth polar plot, because of
the 180° phase jump at the natural frequency):

»w=[logspace(-1,2.5) 350:2:1500 logspace(3.18,5)]; <enter>

(This command creates a frequency vector, w, with more frequency points close to
1000 rad/s.)

»G=i*w*0.5./(-w.*w+30*i*w+1e6); <enter>

»polar(angle(G),abs(G)); <enter>

(This command for generating a polar plot requires phase angles in radians, but the
plot shows the phase in degrees.)
The resulting polar plot is shown in Figure 2.21. The plot is in polar coordinates,
|G(iω)| and φ(ω), with circles of constant radius, |G(iω)|, and radial lines of
constant φ(ω) overlaid on the plot. Conventionally, polar plots show either all posi-
tive, or all negative, phase angles. In the present plot, the negative phase angles have
been shown as positive angles using the transformation φ → (φ + 360°), which is
acceptable since both sine and cosine functions are invariant under this transfor-
mation for φ < 0 (e.g. φ = −90° is the same as φ = 270°).

Figure 2.21 Polar plot of the frequency response, G(iω), of the electrical system of Example 2.8