Modern Control Design
With MATLAB and SIMULINK
Ashish Tewari
Indian Institute of Technology, Kanpur, India
Contents

Preface
1 Introduction
1.1 What is Control?
1.2 Open-Loop and Closed-Loop Control Systems
1.3 Other Classifications of Control Systems
1.4 On the Road to Control System Analysis and Design
1.5 MATLAB, SIMULINK, and the Control System Toolbox
References
Index
Preface
The motivation for writing this book can be ascribed chiefly to the usual struggle of
an average reader to understand and utilize controls concepts, without getting lost in
the mathematics. Many textbooks on modern control are available, and they do a fine
job of presenting control theory. However, an introductory text on modern control
usually stops short of the really useful concepts - such as optimal control and Kalman
filters - while an advanced text which covers these topics assumes too much mathe-
matical background of the reader. Furthermore, the examples and exercises contained
in many control theory textbooks are too simple to represent modern control appli-
cations, because of the computational complexity involved in solving practical prob-
lems. This book aims at introducing the reader to the basic concepts and applications
of modern control theory in an easy to read manner, while covering in detail what
may be normally considered advanced topics, such as multivariable state-space design,
solutions to time-varying and nonlinear state-equations, optimal control, Kalman filters,
robust control, and digital control. An effort is made to explain the underlying princi-
ples behind many controls concepts. The numerical examples and exercises are chosen
to represent practical problems in modern control. Perhaps the greatest distinguishing
feature of this book is the ready and extensive use of MATLAB (with its Control
System Toolbox) and SIMULINK®, as practical computational tools to solve problems
across the spectrum of modern control. The MATLAB/SIMULINK combination has become
the single most common - and industry-wide standard - software in the analysis and
design of modern control systems. By giving the reader hands-on experience with
MATLAB/SIMULINK and the Control System Toolbox as applied to practical design
problems, the book is useful for the practicing engineer, apart from being an introductory
text for the beginner.
This book can be used as a textbook in an introductory course on control systems at
the third, or fourth year undergraduate level. As stated above, another objective of the
book is to make it readable by a practicing engineer without a formal controls back-
ground. Many modern control applications are interdisciplinary in nature, and people
from a variety of disciplines are interested in applying control theory to solve practical
problems in their own respective fields. Bearing this in mind, the examples and exercises
are taken to cover as many different areas as possible, such as aerospace, chemical, elec-
trical and mechanical applications. Continuity in reading is preserved, without frequently
referring to an appendix, or other distractions. At the end of each chapter, readers are
® MATLAB, SIMULINK, and Control System Toolbox are registered trademarks of The MathWorks, Inc.
given a number of exercises, in order to consolidate their grasp of the material presented
in the chapter. Answers to selected numerical exercises are provided near the end of
the book.
While the main focus of the material presented in the book is on the state-space
methods applied to linear, time-invariant control - which forms a majority of modern
control applications - the classical frequency domain control design and analysis is not
neglected, and large parts of Chapters 2 and 8 cover classical control. Most of the
example problems are solved with MATLAB/SIMULINK, using MATLAB command
lines, and SIMULINK block-diagrams immediately followed by their resulting outputs.
The reader can directly reproduce the MATLAB statements and SIMULINK blocks
presented in the text to obtain the same results. Also presented are a number of computer
programs in the form of new MATLAB M-files (i.e. the M-files which are not included
with MATLAB, or the Control System Toolbox) to solve a variety of problems ranging
from step and impulse responses of single-input, single-output systems, to the solution
of the matrix Riccati equation for the terminal-time weighted, multivariable, optimal
control design. This is perhaps the only available controls textbook which gives ready
computer programs to solve such a wide range of problems. The reader becomes aware
of the power of MATLAB/SIMULINK in going through the examples presented in the
book, and gets a good exposure to programming in MATLAB/SIMULINK. The numer-
ical examples presented require MATLAB 6.0, SIMULINK 4.0, and Control System
Toolbox 5.0. Older versions of this software can also be adapted to run the examples and
models presented in the book, with some modifications (refer to the respective Users'
Manuals).
The numerical examples in the book, worked out using MATLAB/SIMULINK and the Control
System Toolbox, have been designed to prevent the use of the software as a black box, or by
rote. The theoretical background and numerical techniques behind the software commands
are explained in the text, so that readers can write their own programs in MATLAB, or
another language. Many of the examples contain instructions on programming. It is also
explained how many of the important Control System Toolbox commands can be replaced
by a set of intrinsic MATLAB commands. This is to avoid over-dependence on a particular
version of the Control System Toolbox, which is frequently updated with new features.
After going through the book, readers are better equipped to learn the advanced features
of the software for design applications.
Readers are introduced to advanced topics such as H∞ robust optimal control, struc-
tured singular value synthesis, input shaping, rate-weighted optimal control, and nonlinear
control in the final chapter of the book. Since the book is intended to be of introduc-
tory rather than exhaustive nature, the reader is referred to other articles that cover these
advanced topics in detail.
I am grateful to the editorial and production staff at the Wiley college group, Chichester,
who diligently worked with many aspects of the book. I would like to specially thank
Karen Mossman, Gemma Quilter, Simon Plumtree, Robert Hambrook, Dawn Booth and
See Hanson for their encouragement and guidance in the preparation of the manuscript.
I found working with Wiley, Chichester, a pleasant experience, and an education into
the many aspects of writing and publishing a textbook. I would also like to thank my
students and colleagues, who encouraged and inspired me to write this book. I thank all
the reviewers for finding the errors in the draft manuscript, and for providing many
constructive suggestions. Writing this book would have been impossible without the
constant support of my wife, Prachi, and my little daughter, Manya, whose total age
in months closely followed the number of chapters as they were being written.
Ashish Tewari
1
Introduction
controller, who applies an input to the plant in the form of pressing the gas pedal if it
is desired to increase the speed of the car. The speed increase can then be the output
from the plant. Note that in a control system, what control input can be applied to the
plant is determined by the physical processes of the plant (in this case, the car's engine),
but the output could be anything that can be directly measured (such as the car's speed
or its position). In other words, many different choices of the output can be available
at the same time, and the controller can use any number of them, depending upon the
application. If, say, the driver wants to make sure she is obeying the highway speed limit,
she will focus on the speedometer. Hence, the speed becomes the plant output. If
she wants to stop well before a stop sign, the car's position with respect to the stop sign
becomes the plant output. If the driver is overtaking a truck on the highway, both the
speed and the position of the car vis-à-vis the truck are the plant outputs. Since the plant
output is the same as the output of the control system, it is simply called the output when
referring to the control system as a whole. After understanding the basic terminology of
the control system, let us now move on to see what different varieties of control systems
there are.
Figure 1.1 An open-loop control system: the controller applies the control input without knowing the
plant output
OPEN-LOOP AND CLOSED-LOOP CONTROL SYSTEMS
and the plant output. If one knows what output a system will produce when a known
input is applied to it, one is said to know the system's behavior.
Mathematically, the relationship between the output of a linear plant and the control
input (the system's behavior) can be described by a transfer function (the concepts of
linear systems and transfer functions are explained in Chapter 2). Suppose the driver
knows from previous driving experience that, to maintain a speed of 50 kilometers per
hour, she needs to apply one kilogram of force on the gas pedal. Then the car's transfer
function is said to be 50 km/hr/kg. (This is a very simplified example. The actual car
is not going to have such a simple transfer function.) Now, if the driver can accurately
control the force exerted on the gas pedal, she can be quite confident of achieving her
target speed, even though blindfolded. However, as anybody reasonably experienced with
driving knows, there are many uncertainties - such as the condition of the road, tyre
pressure, the condition of the engine, or even the uncertainty in gas pedal force actually
being applied by the driver - which can cause a change in the car's behavior. If the
transfer function in the driver's mind was determined on smooth roads, with properly
inflated tyres and a well maintained engine, she is going to get a speed of less than
50 km/hr with 1 kg force on the gas pedal if, say, the road she is driving on happens to
have rough patches. In addition, if a wind happens to be blowing opposite to the car's
direction of motion, a further change in the car's behavior will be produced. Such an
unknown and undesirable input to the plant, such as road roughness or the head-wind, is
called a noise. In the presence of uncertainty about the plant's behavior, or due to a noise
(or both), it is clear from the above example that an open-loop control system is unlikely
to be successful.
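The arithmetic of this open-loop failure is easy to check. The book's own examples use MATLAB/SIMULINK; the short Python sketch below is only an illustration, and the rough-road gain of 44 km/hr per kg is a hypothetical number, not one taken from the text.

```python
# Open-loop control under model uncertainty (illustrative numbers only).
def open_loop_speed(pedal_force_kg, gain_km_hr_per_kg):
    """Steady-state speed predicted by a static transfer function."""
    return gain_km_hr_per_kg * pedal_force_kg

nominal_gain = 50.0  # km/hr per kg, learned on smooth roads
actual_gain = 44.0   # km/hr per kg, a hypothetical rough-road value

# The blindfolded driver applies the force her nominal model calls for:
force = 50.0 / nominal_gain  # 1 kg, aiming at 50 km/hr

print(open_loop_speed(force, nominal_gain))  # 50.0, the speed she expects
print(open_loop_speed(force, actual_gain))   # 44.0, the speed she gets
```

Because nothing feeds the measured speed back to the driver, the 6 km/hr error persists for as long as the road stays rough; removing this dependence on a perfect plant model is exactly what the closed-loop arrangement discussed next accomplishes.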
Suppose the driver decides to drive the car like a sane person (i.e. with both eyes
wide open). Now she can see her actual speed, as measured by the speedometer. In this
situation, the driver can adjust the force she applies to the pedal so as to get the desired
speed on the speedometer; it may not be a one shot approach, and some trial and error
might be required, causing the speed to initially overshoot or undershoot the desired value.
However, after some time (depending on the ability of the driver), the target speed can be
achieved (if it is within the capability of the car), irrespective of the condition of the road
or the presence of a wind. Note that now the driver - instead of applying a pre-determined
control input as in the open-loop case - is adjusting the control input according to the
actual observed output. Such a control system in which the control input is a function
of the plant's output is called a closed-loop system. Since in a closed-loop system the
controller is constantly in touch with the actual output, it is likely to succeed in achieving
the desired output even in the presence of noise and/or uncertainty in the linear plant's
behavior (transfer-function). The mechanism by which the information about the actual
output is conveyed to the controller is called feedback. On a block-diagram, the path
from the plant output to the controller input is called a feedback-loop. A block-diagram
example of a possible closed-loop system is given in Figure 1.2.
Comparing Figures 1.1 and 1.2, we find a new element in Figure 1.2 denoted by a circle
before the controller block, into which two arrows are leading and out of which one arrow
is emerging and leading to the controller. This circle is called a summing junction, which
adds the signals leading into it with the appropriate signs which are indicated adjacent to
the respective arrowheads. If a sign is omitted, a positive sign is assumed. The output of
Figure 1.2 Example of a closed-loop control system with feedback; the controller applies a control
input based on the plant output
the summing junction is the arithmetic sum of its two (or more) inputs. Using the symbols
u (control input), y (output), and yd (desired output), we can see in Figure 1.2 that the
input to the controller is the error signal (yd - y). In Figure 1.2, the controller itself is a
system which produces an output (control input), u, based upon the input it receives in
the form of (yd - y). Hence, the behavior of a linear controller could be mathematically
described by its transfer-function, which is the relationship between u and (yd - y). Note
that Figure 1.2 shows only a popular kind of closed-loop system. In other closed-loop
systems, the input to the controller could be different from the error signal (yd — y).
The controller transfer-function is the main design parameter in the design of a control
system and determines how rapidly - and with what maximum overshoot (i.e. maximum
value of |yd - y|) - the actual output, y, will become equal to the desired output, yd. We
will see later how the controller transfer-function can be obtained, given a set of design
requirements. (However, deriving the transfer-function of a human controller is beyond
the present science, as mentioned in the previous section.) When the desired output, yd, is
a constant, the resulting controller is called a regulator. If the desired output is changing
with time, the corresponding control system is called a tracking system. In any case, the
principal task of a closed-loop controller is to make (yd - y) = 0 as quickly as possible.
Figure 1.3 shows a possible plot of the actual output of a closed-loop control system.
Whereas the desired output yd has been achieved after some time in Figure 1.3, there
is a large maximum overshoot which could be unacceptable. A successful closed-loop
controller design should achieve both a small maximum overshoot, and a small error
magnitude |yd - y| as quickly as possible. In Chapter 4 we will see that the output of a
linear system to an arbitrary input consists of a fluctuating sort of response (called the
transient response), which begins as soon as the input is applied, and a settled kind of
response (called the steady-state response) after a long time has elapsed since the input
was initially applied. If the linear system is stable, the transient response would decay
to zero after some time (stability is an important property of a system, and is discussed
in Section 2.8), and only the steady-state response would persist for a long time. The
transient response of a linear system depends largely upon the characteristics and the
initial state of the system, while the steady-state response depends both upon the system's
characteristics and the input as a function of time, i.e. u(t). The maximum overshoot is
a property of the transient response, but the error magnitude |yd - y| at large time (or in
the limit t → ∞) is a property of the steady-state response of the closed-loop system. In
Figure 1.3 Example of a closed-loop control system's response; the desired output is achieved after
some time, but there is a large maximum overshoot
Figure 1.3 the steady-state response asymptotically approaches a constant yd in the limit
t → ∞.
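A few lines of simulation make the transient, the settling, and the residual error concrete. This sketch is not from the book (whose examples use MATLAB/SIMULINK): the first-order car model dv/dt = -c v + g u, the proportional law u = K (v_desired - v), and every numerical value are assumptions chosen purely for illustration.

```python
# Proportional feedback on a hypothetical first-order "car" model:
#   dv/dt = -c*v + g*u,   with control input u = K*(v_desired - v)
def simulate(v_desired, K, c=0.5, g=25.0, dt=0.01, t_end=5.0):
    v, history = 0.0, []
    for _ in range(int(t_end / dt)):
        u = K * (v_desired - v)      # control input computed from the error
        v += dt * (-c * v + g * u)   # Euler step of the plant dynamics
        history.append(v)
    return history

out = simulate(v_desired=50.0, K=2.0)
print(round(out[-1], 1))  # 49.5: the output settles close to the desired 50
```

With purely proportional control a small steady-state error remains; raising K shrinks it, at the price of larger control effort.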
Figure 1.3 shows the basic fact that it is impossible to get the desired output imme-
diately. The reason why the output of a linear, stable system does not instantaneously
settle to its steady-state has to do with the inherent physical characteristics of all prac-
tical systems that involve either dissipation or storage of energy supplied by the input.
Examples of energy storage devices are a spring in a mechanical system, and a capacitor
in an electrical system. Examples of energy dissipation processes are mechanical friction,
heat transfer, and electrical resistance. Due to a transfer of energy from the applied input
to the energy storage or dissipation elements, there is initially a fluctuation of the total
energy of the system, which results in the transient response. As time passes, the
energy contribution of storage/dissipative processes in a stable system declines rapidly,
and the total energy (hence, the output) of the system tends to the same function of time
as that of the applied input. To better understand this behavior of linear, stable systems,
consider a bucket with a small hole in its bottom as the system. The input is the flow
rate of water supplied to the bucket, which could be a specific function of time, and the
output is the total flow rate of water coming out of the bucket (from the hole, as well
as from the overflowing top). Initially, the bucket takes some time to fill due to the hole
(dissipative process) and its internal volume (storage device). However, after the bucket
is full, the output largely follows the changing input.
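The bucket behaves like a stable first-order system, and its two-phase response is easy to reproduce numerically. The sketch below is illustrative only (the book's computations use MATLAB): the model dy/dt = -a (y - u(t)), the value a = 2, and the slowly varying input are all assumed.

```python
import math

# A stable first-order system, dy/dt = -a*(y - u(t)), as a rough
# stand-in for the leaky bucket (a = 2 and the input are assumptions).
def response(a=2.0, dt=0.001, t_end=10.0):
    y, ys, us = 0.0, [], []
    for k in range(int(t_end / dt)):
        u = 1.0 + 0.5 * math.sin(0.5 * k * dt)  # slowly varying inflow
        y += dt * (-a * (y - u))                # Euler integration step
        ys.append(y)
        us.append(u)
    return ys, us

ys, us = response()
print(abs(ys[100] - us[100]) > 0.5)  # True: at first the transient dominates
print(abs(ys[-1] - us[-1]) < 0.05)   # True: later the output follows the input
```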
While the most common closed-loop control system is the feedback control system, as
shown in Figure 1.2, there are other possibilities such as the feedforward control system.
In a feedforward control system - whose example is shown in Figure 1.4 - in addition
to a feedback loop, a feedforward path from the desired output (yd) to the control input
is generally employed to counteract the effect of noise, or to reduce a known undesirable
plant behavior. The feedforward controller incorporates some a priori knowledge of the
plant's behavior, thereby reducing the burden on the feedback controller in controlling
(Block diagram: the desired output, yd, and the fed-back output, y (car speed), drive the feedback
controller (driver + gas pedal); a feedforward controller (engine RPM governor), in a path from the
desired output, counteracts the fuel-flow disturbance; the controller outputs add at a summing
junction to form the control input, u (fuel flow), to the plant (car).)
Figure 1.4 A closed-loop control system with a feedforward path; the engine RPM governor takes
care of the fuel flow disturbance, leaving the driver free to concentrate on achieving desired speed with
gas pedal force
the plant. Note that if the feedback controller is removed from Figure 1.4, the resulting
control system becomes open-loop type. Hence, a feedforward control system can be
regarded as a hybrid of open and closed-loop control systems. In the car driver example,
the feedforward controller could be an engine rotational speed governor that keeps the
engine's RPM constant in the presence of disturbance (noise) in the fuel flow rate caused
by known imperfections in the fuel supply system. This reduces the burden on the driver,
who would have been required to apply a rapidly changing gas pedal force to counteract
the fuel supply disturbance if there was no feedforward controller. Now the feedback
controller consists of the driver and the gas-pedal mechanism, and the control input is the
fuel flow into the engine, which is influenced by not only the gas-pedal force, but also by
the RPM governor output and the disturbance. It is clear from the present example that
many practical control systems can benefit from the feedforward arrangement.
In this section, we have seen that a control system can be classified as either open- or
closed-loop, depending upon the physical arrangement of its components. However, there
are other ways of classifying control systems, as discussed in the next section.
one has sufficient information to drive the car safely. Thus, the state of such a system
consists of the car's speed and relative positions of other vehicles. However, for the same
system one could choose another set of physical quantities to be the system's state, such
as velocities of all other vehicles relative to the car, and the position of the car with
respect to the road divider. Hence, by definition the state is not a unique set of physical
quantities.
A control system is said to be deterministic when the set of physical laws governing the
system are such that if the state of the system at some time (called the initial conditions)
and the input are specified, then one can precisely predict the state at a later time. The laws
governing a deterministic system are called deterministic laws. Since the characteristics of
a deterministic system can be found merely by studying its response to initial conditions
(transient response), we often study such systems by taking the applied input to be zero.
A response to initial conditions when the applied input is zero depicts how the system's
state evolves from some initial time to that at a later time. Obviously, the evolution of
only a deterministic system can be determined. Going back to the definition of state, it is
clear that the latter is arrived at keeping a deterministic system in mind, but the concept of
state can also be used to describe systems that are not deterministic. A system that is not
deterministic is either stochastic, or has no laws governing it. A stochastic (also called
probabilistic) system has such governing laws that although the initial conditions (i.e.
state of a system at some time) are known in every detail, it is impossible to determine
the system's state at a later time. In other words, based upon the stochastic governing
laws and the initial conditions, one could only determine the probability of a state, rather
than the state itself. When we toss a perfect coin, we are dealing with a stochastic law that
states that both the possible outcomes of the toss (head or tail) have an equal probability
of 50 percent. We should, however, make a distinction between a physically stochastic-
system, and our ability (as humans) to predict the behavior of a deterministic system based
upon our measurement of the initial conditions and our understanding of the governing
laws. Due to an uncertainty in our knowledge of the governing deterministic laws, as
well as errors in measuring the initial conditions, we will frequently be unable to predict
the state of a deterministic system at a later time. Such a problem of unpredictability is
highlighted by a special class of deterministic systems, namely chaotic systems. A system
is called chaotic if even a small change in the initial conditions produces an arbitrarily
large change in the system's state at a later time.
An example of chaotic control systems is a double pendulum (Figure 1.5). It consists
of two masses, m1 and m2, joined together and suspended from point O by two rigid
massless links of lengths L1 and L2 as shown. Here, the state of the system can be
defined by the angular displacements of the two links, θ1(t) and θ2(t), as well as their
respective angular velocities, θ1^(1)(t) and θ2^(1)(t). (In this book, the notation used for
representing a kth order time derivative of f(t) is f^(k)(t), i.e. d^k f(t)/dt^k = f^(k)(t).
Thus, θ1^(1)(t) denotes dθ1(t)/dt, etc.) Suppose we do not apply an input to the system,
and begin observing the system at some time, t = 0, at which the initial conditions are,
say, θ1(0) = 40°, θ2(0) = 80°, θ1^(1)(0) = 0°/s, and θ2^(1)(0) = 0°/s. Then at a later time,
say after 100 s, the system's state will be very much different from what it would have
been if the initial conditions were, say, θ1(0) = 40.01°, θ2(0) = 80°, θ1^(1)(0) = 0°/s, and
θ2^(1)(0) = 0°/s. Figure 1.6 shows the time history of the angle θ2(t) between 85 s and 100 s
Figure 1.5 A double pendulum is a chaotic system because a small change in its initial conditions
produces an arbitrarily large change in the system's state after some time
Figure 1.6 Time history between 85 s and 100 s of angle θ2 of a double pendulum with m1 = 1 kg,
m2 = 2 kg, L1 = 1 m, and L2 = 2 m for the two sets of initial conditions θ1(0) = 40°, θ2(0) = 80°,
θ1^(1)(0) = 0°/s, θ2^(1)(0) = 0°/s, and θ1(0) = 40.01°, θ2(0) = 80°, θ1^(1)(0) = 0°/s, θ2^(1)(0) = 0°/s,
respectively
for the two sets of initial conditions, for a double pendulum with m1 = 1 kg, m2 = 2 kg,
L1 = 1 m, and L2 = 2 m. Note that we know the governing laws of this deterministic
system, yet we cannot predict its state after a given time, because there will always be
some error in measuring the initial conditions. Chaotic systems are so interesting that they
have become the subject of specialization at many physics and engineering departments.
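Sensitive dependence on initial conditions is easy to demonstrate numerically. The double pendulum's equations of motion are lengthy, so the sketch below substitutes the simplest standard chaotic system, the logistic map x(k+1) = 4 x(k) (1 - x(k)); both the substitution and the numbers are this sketch's own assumptions, not the book's.

```python
# Two orbits of the chaotic logistic map started a tiny distance apart
# (much as 40 degrees differs from 40.01 degrees) diverge completely.
def max_divergence(x0, delta=1e-10, steps=80):
    """Largest gap observed between two orbits started `delta` apart."""
    x, y, gap = x0, x0 + delta, 0.0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        y = 4.0 * y * (1.0 - y)
        gap = max(gap, abs(x - y))
    return gap

print(max_divergence(0.40) > 0.1)  # True: the 1e-10 difference has grown huge
```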
Any unpredictable system can be mistaken to be a stochastic system. Taking the
car driver example of Section 1.2, there may exist deterministic laws that govern the
road conditions, wind velocity, etc., but our ignorance about them causes us to treat
such phenomena as random noise, i.e. stochastic processes. Another situation when a
deterministic system may appear to be stochastic is exemplified by the toss of a coin
deliberately loaded to fall every time on one particular side (either head or tail). An
unwary spectator may believe such a system to be stochastic, when actually it is very
much deterministic!
When we analyze and design control systems, we try to express their governing physical
laws by differential equations. The mathematical nature of the governing differential
equations provides another way of classifying control systems. Here we depart from the
realm of physics, and delve into mathematics. Depending upon whether the differential
equations used to describe a control system are linear or nonlinear in nature, we can call
the system either linear or nonlinear. Furthermore, a control system whose description
requires partial differential equations is called a distributed parameter system, whereas a
system requiring only ordinary differential equations is called a lumped parameter system.
A vibrating string, or a membrane is a distributed parameter system, because its properties
(mass and stiffness) are distributed in space. A mass suspended by a spring is a lumped
parameter system, because its mass and stiffness are concentrated at discrete points in
space. (A more common nomenclature of distributed and lumped parameter systems is
continuous and discrete systems, respectively, but we avoid this terminology in this book
as it might be confused with continuous time and discrete time systems.) A particular
system can be treated as linear, or nonlinear, distributed, or lumped parameter, depending
upon what aspects of its behavior we are interested in. For example, if we want to study
only small angular displacements of a simple pendulum, its differential equation of motion
can be treated to be linear; but if large angular displacements are to be studied, the same
pendulum is treated as a nonlinear system. Similarly, when we are interested in the motion
of a car as a whole, its state can be described by only two quantities: the position and
the velocity of the car. Hence, it can be treated as a lumped parameter system whose
entire mass is concentrated at one point (the center of mass). However, if we want to
take into account how the tyres of the car are deforming as it moves along an uneven
road, the car becomes a distributed parameter system whose state is described exactly by
an infinite set of quantities (such as deformations of all the points on the tyres, and their
time derivatives, in addition to the speed and position of the car). Other classifications
based upon the mathematical nature of governing differential equations will be discussed
in Chapter 2.
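The pendulum remark can be quantified by integrating the nonlinear equation of motion, theta'' = -(g/L) sin(theta), from rest and comparing the resulting period with the linear small-angle prediction 2*pi*sqrt(L/g). The sketch below is illustrative; g = 9.81 m/s^2, L = 1 m, and the integration scheme are assumptions, not taken from the text.

```python
import math

# Nonlinear pendulum theta'' = -(g/L)*sin(theta), released from rest at
# theta0; the first zero crossing of theta occurs at a quarter period.
def period_ratio(theta0_rad, g=9.81, L=1.0, dt=1e-4):
    """Nonlinear period divided by the linear prediction 2*pi*sqrt(L/g)."""
    theta, omega, t = theta0_rad, 0.0, 0.0
    while theta > 0.0:
        omega += dt * (-(g / L) * math.sin(theta))  # semi-implicit Euler
        theta += dt * omega
        t += dt
    return (4.0 * t) / (2.0 * math.pi * math.sqrt(L / g))

print(round(period_ratio(math.radians(5)), 2))    # 1.0: the linear model is fine
print(round(period_ratio(math.radians(120)), 2))  # about 1.37: it is not
```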
Yet another way of classifying control systems is whether their outputs are contin-
uous or discontinuous in time. If one can express the system's state (which is obtained
by solving the system's differential equations) as a continuous function of time, the
system is called continuous in time (or analog system). However, a majority of modern
control systems produce outputs that 'jump' (or are discontinuous) in time. Such control
systems are called discrete in time (or digital systems). Note that in the limit of very small
time steps, a digital system can be approximated as an analog system. In this book, we
will make this assumption quite often. If the time steps chosen to sample the discontin-
uous output are relatively large, then a digital system can have a significantly different
behavior from that of a corresponding analog system. In modern applications, even
analog controllers are implemented on a digital processor, which can introduce digital
characteristics to the control system. Chapter 8 is devoted to the study of digital systems.
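How much the sampling step matters can be seen in a small experiment. Nothing below is from the text: the first-order plant, the proportional law held constant between samples (a zero-order hold), and all numbers are illustrative assumptions.

```python
# A continuous plant dv/dt = -v + u whose control law u = 2*(1 - v) is
# recomputed only at sampling instants and held constant in between.
def final_value(sample_period, dt=0.001, t_end=8.0):
    v, u, next_sample = 0.0, 0.0, 0.0
    for k in range(int(t_end / dt)):
        if k * dt >= next_sample:   # the digital controller runs only here
            u = 2.0 * (1.0 - v)
            next_sample += sample_period
        v += dt * (-v + u)          # the plant evolves between samples
    return v

analog_like = final_value(0.01)  # fast sampling: close to the analog loop
coarse = final_value(1.5)        # coarse sampling: visibly different behavior
print(abs(analog_like - 2.0 / 3.0) < 0.02)  # True: near the analog value 2/3
print(abs(coarse - 2.0 / 3.0) > 0.2)        # True: far from it
```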
There are other minor classifications of control systems based upon the systems' char-
acteristics, such as stability, controllability, observability, etc., which we will take up
in subsequent chapters. Frequently, control systems are also classified based upon the
number of inputs and outputs of the system, such as single-input, single-output system,
or two-input, three-output system, etc. In classical control (the subject of Chapter 2)
the distinction between single-input, single-output (SISO) and multi-input, multi-output
(MIMO) systems is crucial.
digital control systems (also called discrete time systems), and covers many modern digital
control applications. Finally, Chapter 9 introduces various advanced topics in modern
control, such as advanced robust control techniques, nonlinear control, etc. Some of the
topics contained in Chapter 9, such as input shaping control and rate-weighted optimal
control, are representative of the latest control techniques.
At the end of each chapter (except Chapter 1), you will find exercises that help you
grasp the essential concepts presented in the chapter. These exercises range from analytical
to numerical, and are designed to make you think, rather than apply ready-made formulas
for their solution. At the end of the book, answers to some numerical exercises are
provided to let you check the accuracy of your solutions.
Modern control design and analysis requires a lot of linear algebra (matrix multipli-
cation, inversion, calculation of eigenvalues and eigenvectors, etc.) which is not very
easy to perform manually. Try to remember the last time you attempted to invert a
4 × 4 matrix by hand! It can be a tedious process for any matrix whose size is greater
than 3 × 3. The repetitive linear algebraic operations required in modern control design
and analysis are, however, easily implemented on a computer with the use of standard
programming techniques. A useful high-level programming language available for such
tasks is MATLAB®, which not only provides the tools for carrying out the matrix
operations, but also contains several other features, such as the time-step integration
of linear or nonlinear governing differential equations, which are invaluable in modern
control analysis and design. For example, in Figure 1.6 the time-history of a double-
pendulum has been obtained by solving the coupled governing nonlinear differential
equations using MATLAB. Many of the numerical examples contained in this book have
been solved using MATLAB. Although not required for doing the exercises at the end of
each chapter, it is recommended that you familiarize yourself with this useful language
with the help of Appendix A, which contains information about the commonly used
MATLAB operators in modern control applications. Many people, who shied away from
modern control courses because of their dread of linear algebra, began taking interest
in the subject when MATLAB became handy. Nowadays, personal computer versions of
MATLAB are commonly applied to practical problems across the board, including control
of aerospace vehicles, magnetically levitated trains, and even stock-market applications.
You may find MATLAB available at your university's or organization's computer center.
While Appendix A contains useful information about MATLAB which will help you in
solving most of the modern control problems, it is recommended that you check with
the MATLAB user's guide [1] at your computer center for further details that may be
required for advanced applications.
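As a rough illustration of the kind of linear-algebra chore being described, the 4 × 4 inversion mentioned above takes a single call in any numerical environment. The sketch below uses Python with NumPy only as a stand-in for the corresponding MATLAB commands (which are described in Appendix A); the matrix entries are arbitrary:

```python
import numpy as np

# An arbitrary, well-conditioned 4 x 4 matrix: tedious to invert by hand,
# but a single numerical operation on a computer.
A = np.array([[4., 2., 0., 1.],
              [2., 5., 1., 0.],
              [0., 1., 3., 2.],
              [1., 0., 2., 6.]])

A_inv = np.linalg.inv(A)

# Check the inversion: A * A_inv should reproduce the 4 x 4 identity matrix.
residual = np.max(np.abs(A @ A_inv - np.eye(4)))
print(residual < 1e-12)  # True
```

(In MATLAB the equivalent is a call to inv(A); see Appendix A.)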
SIMULINK® is a very useful Graphical User Interface (GUI) tool for modeling control
systems, and simulating their time response to specified inputs. It lets you work directly
with the block-diagrams (rather than mathematical equations) for designing and analyzing
® MATLAB, SIMULINK and Control System Toolbox are registered trademarks of MathWorks, Inc.
control systems. For this purpose, numerous linear and nonlinear blocks, input sources,
and output devices are available, so that you can easily put together almost any practical
control system. Another advantage of using SIMULINK is that it works seamlessly with
MATLAB, and can draw upon the vast programming features and function library of
MATLAB. A SIMULINK block-diagram can be converted into a MATLAB program
(called M-file). In other words, a SIMULINK block-diagram does all the programming
for you, so that you are free to worry about other practical aspects of a control system's
design and implementation. With advanced features (such as the Real Time Workshop for
C-code generation, and specialized block-sets) one can also use SIMULINK for practical
implementation of control systems [2]. We will be using SIMULINK as a design and
analysis tool, especially in simulating the response of a control system designed with
MATLAB.
For solving many problems in control, you will find the Control System Toolbox® [3]
for MATLAB very useful. It contains a set of MATLAB M-files of numerical procedures
that are commonly used to design and analyze modern control systems. The Control
System Toolbox is available at a small extra cost when you purchase MATLAB, and is
likely to be installed at your computer center if it has MATLAB. Many solved examples
presented in this book require the Control System Toolbox. In the solved examples,
effort has been made to ensure that the application of MATLAB is clear and direct. This
is done by directly presenting the MATLAB line commands - and some MATLAB M-files - followed by the numerical values resulting after executing those commands. Since
the commands are presented exactly as they would appear in a MATLAB workspace, the
reader can easily reproduce all the computations presented in the book. Again, take some
time to familiarize yourself with MATLAB, SIMULINK and the Control System Toolbox
by reading Appendix A.
References
1. MATLAB® 6.0 - User's Guide, The MathWorks Inc., Natick, MA, USA, 2000.
2. SIMULINK® 4.0 - User's Guide, The MathWorks Inc., Natick, MA, USA, 2000.
3. Control System Toolbox 5.0 for Use with MATLAB® - User's Guide, The MathWorks Inc.,
Natick, MA, USA, 2000.
2
Linear Systems and Classical Control
It was mentioned in Chapter 1 that we need differential equations to describe the behavior
of a system, and that the mathematical nature of the governing differential equations is
another way of classifying control systems. In a large class of engineering applications,
the governing differential equations can be assumed to be linear. The concept of linearity
is one of the most important assumptions often employed in studying control systems.
However, the following questions naturally arise: what is this assumption and how valid
is it anyway? To answer these questions, let us consider lumped parameter systems
for simplicity, even though all the arguments presented below are equally applicable
to distributed systems. (Recall that lumped parameter systems are those systems whose
behavior can be described by ordinary differential equations.) Furthermore, we shall
confine our attention (until Section 2.13) to single-input, single-output (SISO) systems.
For a general lumped parameter, SISO system (Figure 2.1) with input u(t) and output
y(t), the governing ordinary differential equation can be written as

y^(n)(t) = f(y^(n-1)(t), y^(n-2)(t), ..., y^(1)(t), y(t), u^(m)(t), u^(m-1)(t), ..., u^(1)(t), u(t), t)    (2.1)
where y^(k) denotes the kth derivative of y(t) with respect to time, t, e.g. y^(n) = d^n y/dt^n,
y^(n-1) = d^(n-1)y/dt^(n-1), and u^(k) denotes the kth time derivative of u(t). This notation for
derivatives of a function will be used throughout the book. In Eq. (2.1), f(·) denotes a
function of all the time derivatives of y(t) of order (n - 1) and less, as well as the time
derivatives of u(t) of order m and less, and time, t. For most systems m < n, and such
systems are said to be proper.
Since n is the order of the highest time derivative of y(t) in Eq. (2.1), the
system is said to be of order n. To determine the output y(t), Eq. (2.1) must be
somehow integrated in time, with u(t) known and for specific initial conditions
y(0), y^(1)(0), y^(2)(0), ..., y^(n-1)(0). Suppose we are capable of solving Eq. (2.1), given
any time varying input, u(t), and the initial conditions. For simplicity, let us assume that
the initial conditions are zero, and we apply an input, u(t), which is a linear combination
of two different inputs, u1(t) and u2(t), given by

u(t) = c1u1(t) + c2u2(t)    (2.2)

Figure 2.1 A general lumped parameter system with input, u(t), and output, y(t)

where c1 and c2 are constants. If the resulting output, y(t), can be written as

y(t) = c1y1(t) + c2y2(t)    (2.3)

where y1(t) is the output when u1(t) is the input, and y2(t) is the output when u2(t) is the
input, then the system is said to be linear, otherwise it is called nonlinear. In short, a linear
system is said to obey the superposition principle, which states that the output of a linear
system to an input consisting of linear combination of two different inputs (Eq. (2.2))
can be obtained by linearly superposing the outputs to the respective inputs (Eq. (2.3)).
(The superposition principle is also applicable for non-zero initial conditions, if the initial
conditions on y(t) and its time derivatives are linear combinations of the initial conditions
on y1(t) and y2(t), and their corresponding time derivatives, with the constants c1 and
c2.) Since linearity is a mathematical property of the governing differential equations,
it is possible to say merely by inspecting the differential equation whether a system is
linear. If the function /() in Eq. (2.1) contains no powers (other than one) of y(t) and
its derivatives, or the mixed products of y ( t ) , its derivatives, and u(t) and its derivatives,
or transcendental functions of j(0 and u(t), then the system will obey the superposition
principle, and its linear differential equation can be written as
Example 2.1
For an electrical network shown in Figure 2.2, the governing differential equations
are the following:
v1^(1)(t) = v2(t)/(C1R3) - (v1(t)/C1)(1/R1 + 1/R3) + e(t)/(R1C1)    (2.5a)

v2^(1)(t) = v1(t)/(C2R3) - (v2(t)/C2)(1/R2 + 1/R3) + e(t)/(R2C2)    (2.5b)
where v1(t) and v2(t) are the voltages of the two capacitors, C1 and C2, e(t) is the
applied voltage, and R1, R2, and R3 are the three resistances as shown.
On inspection of Eq. (2.5), we can see that the system is described by two first
order, ordinary differential equations. Therefore, the system is of second order.
Upon the substitution of Eq. (2.5b) into Eq. (2.5a), and by eliminating v2, we get
the following second order differential equation:
v1^(2)(t) + [(1/R1 + 1/R3)/C1 + (1/R2 + 1/R3)/C2]v1^(1)(t)
+ [1/(R1R2) + 1/(R2R3) + 1/(R3R1)]v1(t)/(C1C2)
= e^(1)(t)/(R1C1) + [1/(R1R2) + 1/(R2R3) + 1/(R3R1)]e(t)/(C1C2)    (2.6)
Assuming y(t) = v1(t) and u(t) = e(t), and comparing Eq. (2.6) with Eq. (2.4), we
can see that there are no higher powers, transcendental functions, or mixed products
of the output, input, and their time derivatives. Hence, the system is linear.
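The superposition test of Eqs. (2.2) and (2.3) is easy to carry out numerically. The sketch below (in Python, using a hypothetical first-order linear system, y^(1)(t) + 2y(t) = u(t), rather than the circuit of Example 2.1) integrates the system with Euler's method and confirms that the response to c1u1(t) + c2u2(t) equals c1y1(t) + c2y2(t) to within rounding error:

```python
import numpy as np

def simulate(u, dt=1e-3, n=2000):
    """Euler integration of the linear system y'(t) + 2*y(t) = u(t), y(0) = 0."""
    y = np.zeros(n)
    for k in range(n - 1):
        y[k + 1] = y[k] + dt * (u(k * dt) - 2.0 * y[k])
    return y

u1 = lambda t: np.sin(5.0 * t)   # first test input
u2 = lambda t: 1.0               # second test input (a unit step)
c1, c2 = 2.0, -0.5               # arbitrary constants of Eq. (2.2)

# Response to the combined input c1*u1 + c2*u2 ...
y_combined = simulate(lambda t: c1 * u1(t) + c2 * u2(t))
# ... equals the same linear combination of the individual responses (Eq. (2.3)):
y_superposed = c1 * simulate(u1) + c2 * simulate(u2)

print(np.max(np.abs(y_combined - y_superposed)) < 1e-10)  # True
```

Repeating the check with a nonlinear term (say, y² in place of 2y) makes the two responses differ, which is precisely the distinction drawn above.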
Suppose we do not have an input, u(t), applied to the system in Figure 2.1.
Such a system is called an unforced system. Substituting u(t) = u^(1)(t) = u^(2)(t) =
... = u^(m)(t) = 0 into Eq. (2.1) we can obtain the following governing differential
equation for the unforced system:

y^(n)(t) = f(y^(n-1)(t), y^(n-2)(t), ..., y^(1)(t), y(t), t)    (2.7)
In general, the solution, y(t), to Eq. (2.7) for a given set of initial conditions is
a function of time. However, there may also exist special solutions to Eq. (2.7)
which are constant. Such constant solutions for an unforced system are called its
equilibrium points, because the system continues to be at rest when it is already
at such points. A large majority of control systems are designed for keeping a
plant at one of its equilibrium points, such as the cruise-control system of a car
and the autopilot of an airplane or missile, which keep the vehicle moving at a
constant velocity. When a control system is designed for maintaining the plant at
an equilibrium point, then only small deviations from the equilibrium point need to
be considered for evaluating the performance of such a control system. Under such
circumstances, the time behavior of the plant and the resulting control system can
generally be assumed to be governed by linear differential equations, even though
the governing differential equations of the plant and the control system may be
nonlinear. The following examples demonstrate how a nonlinear system can be
linearized near its equilibrium points. Also included is an example which illustrates
that such a linearization may not always be possible.
Example 2.2
Consider a simple pendulum (Figure 2.3) consisting of a point mass, m, suspended
from a hinge at point O by a rigid massless link of length L. The equation of motion
of the simple pendulum in the absence of an externally applied torque about point
O in terms of the angular displacement, θ(t), can be written as

θ^(2)(t) = -(g/L) sin(θ(t))    (2.8)

If we assume that motion of the pendulum about θ = 0 consists of small angular
displacements (say θ < 10°), then sin(θ) ≈ θ, and Eq. (2.8) becomes

θ^(2)(t) + (g/L)θ(t) = 0    (2.10)

Similarly, for small angular deviations, φ(t) = θ(t) - π, about the other equilibrium
point, θ = π, Eq. (2.8) becomes

φ^(2)(t) - (g/L)φ(t) = 0    (2.11)
We can see that both Eqs. (2.10) and (2.11) are linear. Hence, the nonlinear
system given by Eq. (2.8) has been linearized about both of its equilibrium points.
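The accuracy of the small-angle linearization can be checked numerically. The sketch below (in Python, with assumed values g = 9.81 m/s², L = 1 m, and a 5° initial displacement released from rest) integrates the nonlinear equation of motion and compares the result with the analytical solution of the linearized Eq. (2.10):

```python
import numpy as np

g, L = 9.81, 1.0                 # assumed gravity and pendulum length
w = np.sqrt(g / L)               # natural frequency of the linearized pendulum
theta0 = np.deg2rad(5.0)         # small initial displacement, zero initial rate

# Integrate the nonlinear equation theta'' = -(g/L)*sin(theta) by the
# semi-implicit Euler method, tracking the worst deviation from the
# linearized solution theta(t) = theta0*cos(w*t).
dt, n = 1e-4, 20000              # two seconds of motion
th, om, worst = theta0, 0.0, 0.0
for k in range(n):
    worst = max(worst, abs(th - theta0 * np.cos(w * k * dt)))
    om += dt * (-(g / L) * np.sin(th))
    th += dt * om

print(worst < 0.02 * theta0)  # True: agreement within 2% of the amplitude
```

Repeating the run with a large initial angle (say 90°) makes the two solutions diverge quickly, which is the caveat above: linearization is only valid for small deviations from the equilibrium point.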
Second order linear ordinary differential equations (especially the homogeneous ones
like Eqs. (2.10) and (2.11)) can be solved analytically. It is well known (and you
may verify) that the solution to Eq. (2.10) is of the form θ(t) = A sin(t(g/L)^1/2) +
B cos(t(g/L)^1/2), where the constants A and B are determined from the initial
conditions, θ(0) and θ^(1)(0). This solution implies that θ(t) oscillates about the
equilibrium point θ = 0. However, the solution to Eq. (2.11) is of the form φ(t) =
C exp(t(g/L)^1/2), where C is a constant, which indicates an exponentially increasing
φ(t) if φ(0) ≠ 0. (This nature of the equilibrium point at θ = π can be experimentally
verified by anybody trying to stand on one's head for any length of time!)
The comparison of the solutions to the linearized governing equations close to the
equilibrium points (Figure 2.4) brings us to an important property of an equilibrium
point, called stability.
Figure 2.4 Solutions to the governing differential equation linearized about the two equilibrium
points (θ = 0 and θ = π)
Stability is defined as the ability of a system to approach one of its equilibrium points
once displaced from it. We will discuss stability in detail later. Here, suffice it to say
that the pendulum is stable about the equilibrium point θ = 0, but unstable about the
equilibrium point θ = π. While Example 2.2 showed how a nonlinear system can be
linearized close to its equilibrium points, the following example illustrates how a nonlinear
system's description can be transformed into a linear system description through a clever
change of coordinates.
Example 2.3
Consider a satellite of mass m in an orbit about a planet of mass M (Figure 2.5).
The distance of the satellite from the center of the planet is denoted r(t), while its
orientation with respect to the planet's equatorial plane is indicated by the angle
θ(t), as shown. Assuming there are no gravitational anomalies that cause a departure
from Newton's inverse-square law of gravitation, the governing equation of motion
of the satellite can be written as

r^(2)(t) - r(t)[θ^(1)(t)]^2 = -k^2/r^2(t)    (2.12)

where k^2 = GM and G is the universal gravitational constant. Since the angular
momentum (per unit mass) of the satellite, given by

h = r^2(t)θ^(1)(t)    (2.13)

is conserved, the change of variables w = 1/r, with θ as the independent variable,
transforms Eq. (2.12) into

d^2w/dθ^2 + w = k^2/h^2    (2.14)
Being a linear, second order ordinary differential equation (similar to Eq. (2.10)),
Eq. (2.14) is easily solved for w(θ), and the solution transformed back to r(θ)

Figure 2.5 A satellite of mass m in orbit around a planet of mass M at a distance r(t) from the
planet's center, and azimuth angle θ(t) from the equatorial plane
given by

r(θ) = (h^2/k^2)/[1 + A(h^2/k^2) cos(θ - B)]    (2.15)
where the constants A and B are determined from r(θ) and r^(1)(θ) specified at given
values of θ. Such specifications are called boundary conditions, because they refer
to points in space, as opposed to initial conditions when quantities at given instants
of time are specified. Equation (2.15) can represent a circle, an ellipse, a parabola,
or a hyperbola, depending upon the magnitude of A(h^2/k^2) (called the eccentricity
of the orbit).
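A quick numerical reading of the conic-section solution, with hypothetical parameter values (not from the text): the scale h^2/k^2 and the eccentricity A(h^2/k^2) determine the orbit's shape, and for an eccentricity below one the radius repeats every 2π, i.e. the orbit is closed:

```python
import numpy as np

p = 7000.0   # hypothetical h^2/k^2 (km): sets the orbit's scale
e = 0.3      # eccentricity A*(h^2/k^2): e < 1 ellipse, e = 1 parabola, e > 1 hyperbola
B = 0.0      # orientation (phase) constant

def r(theta):
    """Orbit radius r(theta) = (h^2/k^2) / (1 + e*cos(theta - B))."""
    return p / (1.0 + e * np.cos(theta - B))

print(round(r(0.0), 1))                      # 5384.6  (closest approach, p/(1+e))
print(round(r(np.pi), 1))                    # 10000.0 (farthest point, p/(1-e))
print(abs(r(0.0) - r(2.0 * np.pi)) < 1e-9)   # True: the ellipse closes on itself
```

For e ≥ 1 the denominator can reach zero, r(θ) grows without bound, and the trajectory is open (parabolic or hyperbolic), matching the classification above.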
Note that we could also have linearized Eq. (2.12) about one of its equilibrium
points, as we did in Example 2.2. One such equilibrium point is given by r(t) =
constant, which represents a circular orbit. Many practical orbit control applications
consist of minimizing deviations from a given circular orbit using rocket thrusters
to provide radial acceleration (i.e. acceleration along the line joining the satellite
and the planet) as an input, u(t), which is based upon the measured deviation from
the circular path fed back to an onboard controller, as shown in Figure 2.6. In such
a case, the governing differential equation is no longer homogeneous as Eq. (2.12),
but has a non-homogeneous forcing term on the right-hand side given by

r^(2)(t) - r(t)[θ^(1)(t)]^2 + k^2/r^2(t) = u(t)    (2.16)
Since the deviations from a given circular orbit are usually small, Eq. (2.16) can be
suitably linearized about the equilibrium point r(t) = C. (This linearization is left
as an exercise for you at the end of the chapter.)
Figure 2.6 On orbit feedback control system for maintaining a circular orbit of a satellite
around a planet: the measured radial deviation, r(t) - C, from the desired circular orbit,
r(t) = C, is fed back to the orbit controller, which commands the thruster radial
acceleration, u(t), acting on the satellite
Examples 2.2 and 2.3 illustrated how a nonlinear system can be linearized for practical
control applications. However, as pointed out earlier, it is not always possible to do so.
If a nonlinear system has to be moved from one equilibrium point to another (such as
changing the speed or altitude of a cruising airplane), the assumption of linearity that is
possible in the close neighborhood of each equilibrium point disappears as we cross the
nonlinear region between the equilibrium points. Also, if the motion of a nonlinear system
consists of large deviations from an equilibrium point, again the concept of linearity is not
valid. Lastly, the characteristics of a nonlinear system may be such that it does not have
any equilibrium point about which it can be linearized. The following missile guidance
example illustrates such a nonlinear system.
Example 2.4
Radar or laser-guided missiles used in modern warfare employ a special guidance
scheme which aims at flying the missile along a radar or laser beam that is illumi-
nating a moving target. The guidance strategy is such that a correcting command
signal (input) is provided to the missile if its flight path deviates from the moving
beam. For simplicity, let us assume that both the missile and the target are moving
in the same plane (Figure 2.7). Although the distance from the beam source to the
target, RT(t), is not known, it is assumed that the angles made by the missile and
the target with respect to the beam source, θM(t) and θT(t), are available for precise
measurement. In addition, the distance of the missile from the beam source, RM(t),
is also known at each instant.
A guidance law provides the following normal acceleration command signal,
ac(t), to the missile
(2.17)
As the missile is usually faster than the target, if the angular deviation [θT(t) -
θM(t)] is made small enough, the missile will intercept the target. The feedback
guidance scheme of Eq. (2.17) is called beam-rider guidance, and is shown in
Figure 2.8.
Figure 2.7 Beam guided missile follows a beam that continuously illuminates a moving target
located at distance RT(t) from the beam source
In such a case, along with ac(t) given by Eq. (2.17), an additional command
signal (input) can be provided to the missile in the form of missile's acceleration
perpendicular to the beam, aMc(t), given by
Since the final objective is to make the missile intercept the target, it must be
ensured that θM^(1)(t) = θT^(1)(t) and θM^(2)(t) = θT^(2)(t), even though [θT(t) - θM(t)] may
not be exactly zero. (To understand this philosophy, remember how we catch up
with a friend's car so that we can chat with her. We accelerate (or decelerate) until
our velocity (and acceleration) become identical with our friend's car, then we can
talk with her; although the two cars are abreast, they are not exactly in the same
position.) Hence, the following command signal for missile's normal acceleration
perpendicular to the beam must be provided:
The guidance law given by Eq. (2.20) is called command line-of-sight guidance,
and its implementation along with the beam-rider guidance is shown in the block
diagram of Figure 2.9. It can be seen in Figure 2.9 that while θT(t) is being fed
back, the angular velocity and acceleration of the target, θT^(1)(t) and θT^(2)(t), respectively,
are being fed forward to the controller. Hence, similar to the control system
of Figure 1.4, additional information about the target is being provided by a feedfor-
ward loop to improve the closed-loop performance of the missile guidance system.
Figure 2.9 Block diagram of the combined beam-rider and command line-of-sight guidance
system: the acceleration commands, ac(t) and aMc(t), are computed from the missile's
angular position and the target's angular position
Note that both Eq. (2.17) and Eq. (2.20) are nonlinear in nature, and generally cannot
be linearized about an equilibrium point. This example shows that the concept of linearity
is not always valid. For more information on missile guidance strategies, you may refer
to the excellent book by Zarchan [1].
Figure 2.10 The unit impulse function; a pulse of infinitesimal duration (ε) and very large magnitude
(1/ε) such that its total area is unity
However, when utilizing the unit impulse function for control applications, Eq. (2.22) is
much more useful. In fact, if δ(t - a) appears inside an integral with infinite integration
limits, then such an integral is very easily carried out with the use of Eqs. (2.21)
and (2.22). For example, if f(t) is a continuous function, then the well known Mean
Value Theorem of integral calculus can be applied to show that

∫f(t)δ(t - a) dt = f(a), integrated from t = -∞ to ∞    (2.23)
A related singularity function is the unit step function, us(t - a), shown in
Figure 2.11, defined as

us(t - a) = 0 for t < a; 1 for t > a

Figure 2.11 The unit step function, us(t - a); a jump of unit magnitude at time t = a
Figure 2.12 The unit ramp function; a ramp of unit slope applied at time t = a
Or, conversely, the unit step function is the time integral of the unit impulse function,
given by

us(t - a) = ∫δ(τ - a) dτ, integrated from τ = -∞ to t    (2.27)
A useful function related to the unit step function is the unit ramp function, r(t - a),
which is seen in Figure 2.12 to be a ramp of unit slope applied at time t = a. It is like an
upslope of 45° angle you suddenly encounter while driving down a perfectly flat highway
at t = a. Mathematically, r(t - a) is given by

r(t - a) = (t - a)us(t - a)    (2.28)

or

r(t - a) = ∫us(τ - a) dτ, integrated from τ = -∞ to t    (2.30)

Thus, the unit ramp function is the time integral of the unit step function, or conversely,
the unit step function is the time derivative of the unit ramp function, given by

us(t - a) = dr(t - a)/dt    (2.31)
The basic singularity functions (unit impulse and step), and their relatives (unit ramp
function) can be used to synthesize more complicated functions, as illustrated by the
following examples.
Example 2.5
The rectangular pulse function, f(t), shown in Figure 2.13, can be expressed by
subtracting one step function from another as

f(t) = us(t + T/2) - us(t - T/2)    (2.32)

Figure 2.13 The rectangular pulse function of unit height between t = -T/2 and t = T/2
Example 2.6
The decaying exponential function, f(t) (Figure 2.14) is zero before t = 0, and
decays exponentially from a magnitude of f0 at t = 0. It can be expressed by
multiplying the unit step function with f0 and a decaying exponential term, given by

f(t) = f0 e^(-t/τ) us(t)    (2.33)

Figure 2.14 The decaying exponential function, f(t) = f0 e^(-t/τ) us(t)
Example 2.7
The sawtooth pulse function, f(t), shown in Figure 2.15, can be expressed in terms
of the unit step and unit ramp functions as follows:

f(t) = (f0/T)[r(t) - r(t - T)] - f0 us(t - T)    (2.34)

Figure 2.15 The sawtooth pulse function; a ramp of slope f0/T applied at t = 0 and switched
off at t = T
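The syntheses of Examples 2.5 and 2.7 are easily verified numerically. In the Python sketch below, the unit step and unit ramp are defined directly from their definitions, and the sawtooth is built as one possible combination (a ramp of slope f0/T switched off at t = T; the exact expression used in the text may be arranged differently):

```python
import numpy as np

def u_s(t, a=0.0):
    """Unit step function applied at t = a."""
    return np.where(t >= a, 1.0, 0.0)

def ramp(t, a=0.0):
    """Unit ramp function (slope 1) applied at t = a."""
    return np.where(t >= a, t - a, 0.0)

f0, T = 2.0, 4.0                       # hypothetical pulse height and duration
t = np.array([-3.0, 1.0, 3.0, 5.0])    # a few sample instants

# Example 2.5: rectangular pulse between t = -T/2 and t = T/2
pulse = u_s(t, -T / 2) - u_s(t, T / 2)

# Sawtooth pulse: ramp of slope f0/T, dropped back to zero at t = T
saw = (f0 / T) * (ramp(t) - ramp(t, T)) - f0 * u_s(t, T)

print(pulse.tolist())  # [0.0, 1.0, 0.0, 0.0]
print(saw.tolist())    # [0.0, 0.5, 1.5, 0.0]
```

The printed samples confirm that the pulse is "on" only inside (-T/2, T/2), and that the sawtooth rises with slope f0/T and vanishes after t = T.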
After going through Examples 2.5-2.7, and with a little practice, you can decide merely
by looking at a given function how to synthesize it using the singularity functions. The
unit impulse function has a special place among the singularity functions, because it can be
Figure 2.16 Any arbitrary function, f(t), can be represented by summing up unit impulse functions,
δ(t - τ), applied at t = τ and multiplied by the area f(τ)Δτ, for all values of τ from -∞ to t
used to describe any arbitrary shaped function as a sum of suitably scaled unit impulses,
δ(t - a), applied at appropriate time, t = a. This fact is illustrated in Figure 2.16, where
the function f(t) is represented by

f(t) = Σ f(τ)Δτ δ(t - τ), summed over all values of τ from -∞ to t    (2.35)

or, in the limit Δτ → 0,

f(t) = ∫f(τ)δ(t - τ) dτ, integrated from τ = -∞ to t    (2.36)

Equation (2.36) is one of the most important equations of modern control theory,
because it lets us evaluate the response of a linear system to any arbitrary input, f(t), by
the use of the superposition principle. We will see how this is done when we discuss the
response to singularity functions in Section 2.5. While the singularity functions and their
relatives are useful as test inputs for studying the behavior of control systems, we can also
apply some well known continuous time functions as inputs to a control system. Examples
of continuous time test functions are the harmonic functions sin(o>f) and cos(<wf)» where
o) is a frequency, called the excitation frequency. As an alternative to singularity inputs
(which are often difficult to apply in practical cases), measuring the output of a linear
system to harmonic inputs gives essential information about the system's behavior, which
can be used to construct a model of the system that will be useful in designing a control
system. We shall study next how such a model can be obtained.
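Before moving on, the sifting property of Eq. (2.36) can be checked numerically by replacing δ(t - τ) with the finite-width pulse of Figure 2.10 (duration ε, height 1/ε); a Python sketch with an arbitrary continuous function f(t):

```python
import numpy as np

# An arbitrary continuous function to be "sifted"
f = lambda t: np.exp(-t) * np.sin(3.0 * t)

a, eps = 0.7, 1e-6    # sample instant, and the width of the pulse approximating delta
# The integrand f(tau)*delta(tau - a) vanishes except where the pulse is "on",
# so the integral reduces to (1/eps) times the integral of f over [a, a + eps]:
tau = np.linspace(a, a + eps, 1001)
dtau = tau[1] - tau[0]
integral = np.sum(f(tau[:-1]) * (1.0 / eps)) * dtau

# As eps -> 0, the integral sifts out the single value f(a):
print(abs(integral - f(a)) < 1e-5)  # True
```

Shrinking eps further drives the error toward zero, which is exactly the statement of the sifting property.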
The steady-state response of a linear system is generally of the same shape as that of the applied input,
e.g. a step input applied to a linear, stable system yields a steady-state output which
is also a step function. Similarly, the steady-state response of a linear, stable system
to a harmonic input is also harmonic. Studying a linear system's characteristics based
upon the steady-state response to harmonic inputs constitutes a range of classical control
methods called the frequency response methods. Such methods formed the backbone of
the classical control theory developed between 1900-60, because the modern state-space
methods (to be discussed in Chapter 3) were unavailable then to give the response of
a linear system to any arbitrary input directly in the time domain (i.e. as a function of
time). Modern control techniques still employ frequency response methods to shed light
on some important characteristics of an unknown control system, such as the robustness of
multi-variable (i.e. multi-input, multi-output) systems. For these reasons, we will discuss
frequency response methods here.
A simple choice of the harmonic input, u(t), can be u(t) = u0 cos(ωt) or u(t) =
u0 sin(ωt), which can be combined in the complex space as u(t) = u0 e^(iωt) =
u0 cos(ωt) + i u0 sin(ωt).

Figure 2.17 Phasor representation of the harmonic input, u0 cos(ωt), in the complex space
When you studied solution to ordinary differential equations, you learnt that their solution
consists of two parts - the complementary solution (or the solution to the unforced
differential equation (Eq. (2.7)), and a particular solution which depends upon the input.
While the transient response of a linear, stable system is largely described by the complementary
solution, the steady-state response is the same as the particular solution at large
times. The particular solution is of the same form as the input, and must by itself
satisfy the differential equation. Hence, you can verify that the steady-state responses to
u(t) = u0 cos(ωt) and u(t) = u0 sin(ωt) are given by yss(t) = y0 cos(ωt) and yss(t) =
y0 sin(ωt), respectively (where y0 is the amplitude of the resulting harmonic, steady-state
output, yss(t)) by plugging the corresponding expressions of u(t) and yss(t) into Eq. (2.4),
which represents a general linear system. You will see that the equation is satisfied in
each case. In the complex space, we can write the steady-state response to harmonic input
as follows:

yss(t) = y0(iω) e^(iωt)    (2.40)
Here, the steady-state response amplitude, y0, is a complex function of the frequency
of excitation, ω. We will shortly see the implications of a complex response amplitude.
Consider a linear, lumped parameter, control system governed by Eq. (2.4) which can be
re-written as follows
D1{yss(t)} = D2{u(t)}    (2.41)

where D1{·} and D2{·} are differential operators (i.e. they operate on the steady-state
output, yss(t), and the input, u(t), respectively, by differentiating them), given by

D1{·} = d^n/dt^n + a_(n-1)d^(n-1)/dt^(n-1) + ... + a_1 d/dt + a_0    (2.42)

and

D2{·} = b_m d^m/dt^m + b_(m-1)d^(m-1)/dt^(m-1) + ... + b_1 d/dt + b_0    (2.43)
Substituting u(t) = u0 e^(iωt) and yss(t) = y0(iω) e^(iωt) into Eq. (2.41), and noting
that each time derivative of e^(iωt) merely multiplies it by iω, the steady-state response
amplitude can be expressed as

y0(iω) = G(iω)u0    (2.46)

where G(iω) is called the frequency response of the system, and is given by

G(iω) = [b_m(iω)^m + b_(m-1)(iω)^(m-1) + ... + b_1(iω) + b_0]/[(iω)^n + a_(n-1)(iω)^(n-1) + ... + a_1(iω) + a_0]    (2.47)
Needless to say, the frequency response G(iω) is also a complex quantity, consisting of
both real and imaginary parts. Equations (2.46) and (2.47) describe how the steady-state
output of a linear system is related to its input through the frequency response, G(iω).
Instead of the real and imaginary parts, an alternative description of a complex quantity is
in terms of its magnitude and the phase, which can be thought of as a vector's length and
direction, respectively. Representation of a complex quantity as a vector in the complex
space is called a phasor. The length of the phasor in the complex space is called its
magnitude, while the angle made by the phasor with the real axis is called its phase. The
magnitude of a phasor represents the amplitude of a harmonic function, while the phase
determines the value of the function at t = 0. The phasor description of the steady-state
output amplitude is given by

y0(iω) = |y0(iω)| e^(iα(ω))    (2.48)

where |y0(iω)| is the magnitude and α(ω) is the phase of y0(iω). It is easy to see that

|y0(iω)| = [real{y0(iω)}^2 + imag{y0(iω)}^2]^(1/2); α(ω) = tan^(-1)[imag{y0(iω)}/real{y0(iω)}]    (2.49)

where real{·} and imag{·} denote the real and imaginary parts of a complex number. We
can also express the frequency response, G(iω), in terms of its magnitude, |G(iω)|, and
phase, φ(ω), as follows:

G(iω) = |G(iω)| e^(iφ(ω))    (2.50)

Substituting Eqs. (2.48) and (2.50) into Eq. (2.46), it is clear that |y0(iω)| = |G(iω)|u0
and α(ω) = φ(ω). Hence, the steady-state response of a linear system excited by a
harmonic input of amplitude u0 and zero phase (u0 = u0 e^(i0)) is given through Eq. (2.40) by

yss(t) = |G(iω)| u0 e^(i[ωt + φ(ω)])    (2.51)
Thus, the steady-state response to a zero phase harmonic input acquires its phase from the
frequency response, which is purely a characteristic of the linear system. You can easily
show that if the harmonic input has a non-zero phase, then the phase of the steady-state
response is the sum of the input phase and the phase of the frequency response, φ(ω). The
phasor representation of the steady-state response amplitude is depicted in Figure 2.18.
From Eq. (2.51), it is clear that the steady-state response is governed by the amplitude
of the harmonic input, u0, and the magnitude and phase of the frequency response, G(iω),
which represent the characteristics of the system, and are functions of the frequency of
excitation. If we excite the system at various frequencies, and measure the magnitude and
phase of the steady-state response, we could obtain G(iω) using Eq. (2.51), and consequently,
crucial information about the system's characteristics (such as the coefficients ak
Figure 2.18 Phasor representations of a harmonic input, u(t), with zero phase and amplitude u0,
and steady-state response amplitude, y0(iω), of a linear system with frequency response, G(iω)
and bk, in Eq. (2.47)). In general, we would require G(iω) at as many frequencies as are
the number of unknowns, ak and bk, in Eq. (2.47). Conversely, if we know a system's
parameters, we can study some of its properties, such as stability and robustness, using
frequency response plots (as discussed later in this chapter). Therefore, plots of magnitude
and phase of G(iω) with frequency, ω, serve as important tools in the analysis and design
of control systems. Alternatively, we could derive the same information as obtained from
the magnitude and phase plots of G(iω) from the path traced by the tip of the frequency
response phasor in the complex space as the frequency of excitation is varied. Such a
plot of G(iω) in the complex space is called a polar plot (since it represents G(iω) in
terms of the polar coordinates, |G(iω)| and φ(ω)). Polar plots have an advantage over
the frequency plots of magnitude and phase in that both magnitude and phase can be
seen in one (rather than two) plots. Referring to Figure 2.18, it is easily seen that a phase
φ(ω) = 0° corresponds to the real part of G(iω), while the phase φ(ω) = 90° corresponds
to the imaginary part of G(iω). When talking about stability and robustness properties,
we will refer again to the polar plot.
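As a numerical illustration, consider a hypothetical second-order system (not from the text), y^(2)(t) + 2y^(1)(t) + 4y(t) = 4u(t). Its frequency response follows from Eq. (2.47), and its magnitude and phase are computed below in Python (NumPy's abs and angle play the same roles as the MATLAB functions of the same names):

```python
import numpy as np

# Hypothetical system y'' + 2y' + 4y = 4u, so by Eq. (2.47):
#   G(i*w) = 4 / ((i*w)^2 + 2*(i*w) + 4)
def G(w):
    s = 1j * w
    return 4.0 / (s ** 2 + 2.0 * s + 4.0)

w = 2.0                       # excitation frequency (rad/s)
mag = np.abs(G(w))            # |G(iw)|         (MATLAB: abs)
phase = np.angle(G(w))        # phi(w), radians (MATLAB: angle)

# At w = 2 the real part of the denominator, (i*2)^2 + 4, vanishes, leaving
# G = 4/(4i) = -i: unit magnitude and a phase of -90 degrees.
print(round(mag, 6))                 # 1.0
print(round(np.degrees(phase), 1))   # -90.0
```

By Eq. (2.51), the steady-state response of this system to u0 cos(2t) is therefore u0 cos(2t - 90°).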
Since the range of frequencies required to study a linear system is usually very large,
it is often useful to plot the magnitude, |G(iω)|, and phase, φ(ω), with respect to the
frequency, ω, on a logarithmic scale of frequency, called Bode plots. In Bode plots,
the magnitude is usually converted to gain in decibels (dB) by taking the logarithm of
|G(iω)| to the base 10, and multiplying the result with 20 as follows:

gain = 20 log10 |G(iω)| (dB)    (2.52)
As we will see later in this chapter, important information about a linear, single-input,
single-output system's behavior (such as stability and robustness) can be obtained from
the Bode plots, which serve as a cornerstone of classical control design techniques.
Factoring the polynomials in G(iω) (Eq. (2.47)) merely produces an addition of terms in
log10 |G(iω)|, which enables us to construct Bode plots with log-paper and pencil. Even
so, Bode plots are cumbersome to construct by hand. With the availability of personal
computers and software with mathematical functions and graphics capability - such as
MATLAB - Bode plots can be produced quite easily. In MATLAB, all you have to do is
FREQUENCY RESPONSE 31
specify a set of frequencies, ω, at which the gain and phase plots are desired, and use the
intrinsic functions abs and angle, which calculate the magnitude and phase (in radians),
respectively, of a complex number. If you have MATLAB's Control System Toolbox
(CST), the task of obtaining a Bode plot becomes even simpler through the use of the
command bode, as follows:
Here » is the MATLAB prompt, <enter> denotes the pressing of the 'enter' (or 'return')
key, and the % sign indicates that everything to its right is a comment. In the bode
command, w is the specified frequency vector consisting of equally spaced frequency
values at which the gain and phase are desired, and G is the name given to the frequency
response of the linear, time-invariant system created using the CST LTI object function tf,
which requires num and den as the vectors containing the coefficients of the numerator and
denominator polynomials, respectively, of G(iω) in Eq. (2.47) in decreasing powers
of s. These coefficients should be specified as follows, before using the tf and bode
commands:
By using the MATLAB command logspace, the w vector can also be pre-specified as
follows:
(Using a semicolon after a MATLAB command suppresses the print-out of the result on
the screen.)
Obviously, w must be specified before you use the bode command. If you don't specify
w, MATLAB will automatically generate an appropriate w vector and create the plot.
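The computation that bode performs on num and den can be sketched in plain Python, for readers following along without MATLAB. The coefficient vectors below describe a hypothetical first-order system, 1/(s + 1), used purely for illustration (not a system from the text):

```python
import cmath, math

def logspace(a, b, n=50):
    """Points equally spaced on a log10 scale between 10**a and 10**b,
    mimicking MATLAB's logspace."""
    return [10.0 ** (a + (b - a) * k / (n - 1)) for k in range(n)]

def polyval(coeffs, s):
    """Evaluate a polynomial given in decreasing powers of s (MATLAB convention)."""
    result = 0.0 + 0.0j
    for c in coeffs:
        result = result * s + c
    return result

def freq_response(num, den, w):
    """G(i*w) = num(i*w)/den(i*w) at each frequency in w."""
    return [polyval(num, 1j * wk) / polyval(den, 1j * wk) for wk in w]

# Hypothetical system 1/(s + 1): num = [1], den = [1, 1]
w = logspace(-1, 2)
G = freq_response([1.0], [1.0, 1.0], w)
gain = [20.0 * math.log10(abs(g)) for g in G]   # like 20*log10(abs(G))
phase = [cmath.phase(g) for g in G]             # like angle(G), in radians
```

At ω = 1 rad/s this system's magnitude is 1/√2, i.e. about -3 dB, the familiar corner-frequency value.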
Instead of plotting the Bode plot, you may like to store the magnitude (mag), |G(iω)|,
and the phase, φ(ω), at a given set of frequencies, w, for further processing by using the
following MATLAB command:
»[mag,phase,w] = bode(num,den,w); <enter>
(You can obtain help on any MATLAB command, such as bode, by typing help followed
by the command's name at the prompt.) The example given below will illustrate what
Bode plots look like. Before we do that, let us try to understand in physical terms what
a frequency response (given by the Bode plot) is.
Musical notes produced by a guitar are related to its frequency response. The guitar
player makes each string vibrate at a particular frequency, and the notes produced by the
various strings are the measure of whether the guitar is being played well or not. Each
string of the guitar is capable of being excited at many frequencies, depending upon where
32 LINEAR SYSTEMS AND CLASSICAL CONTROL
the string is struck, and where it is held. Just like the guitar, any system can be excited
at a set of frequencies. When we use the word excited, it is quite in the literal sense,
because it denotes the condition (called resonance) when the magnitude of the frequency
response, |G(iω)|, becomes very large, or infinite. The frequencies at which a system can
be excited are called its natural (or resonant) frequencies. The high-pitched voice of many
a diva has shattered the opera-house window panes by accidentally singing at one of
the natural frequencies of the window! If a system contains energy dissipative processes
(called damping), the frequency response magnitude at the natural frequencies is large, but
finite. An undamped system, however, has an infinite response at each natural frequency. A
natural frequency is indicated by a peak in the gain plot, or as the frequency where the
phase changes by 180°. A practical limitation of Bode plots is that they show only an
interpolation of the gain and phase through selected frequency points. The frequencies where
|G(iω)| becomes zero or infinite are excluded from the gain plot (since the logarithm of zero
is undefined, and an infinite gain cannot be shown on any scale). Instead, only frequency
points located close to the zero-magnitude and infinite-gain frequencies of
the system can be used in the gain plot. Thus, the Bode gain plot for a guitar will consist
of several peaks, corresponding to the natural frequencies of the notes being struck. One
could determine from the peaks the approximate values of the natural frequencies.
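Reading natural frequencies off the peaks of a sampled gain plot can be automated with a simple local-maximum scan. A minimal sketch in Python, using made-up gain data purely for illustration (a real application would use gain values computed from G(iω)):

```python
def natural_frequencies(w, gain_db):
    """Estimate natural frequencies as the frequencies of interior local
    maxima (peaks) of a sampled gain plot."""
    peaks = []
    for k in range(1, len(gain_db) - 1):
        if gain_db[k] > gain_db[k - 1] and gain_db[k] > gain_db[k + 1]:
            peaks.append(w[k])
    return peaks

# Made-up sampled data with peaks near 100 and 1000 rad/s
w = [10, 50, 100, 500, 1000, 5000]
g = [-40, -20, -5, -25, -3, -30]
print(natural_frequencies(w, g))   # [100, 1000]
```

As the text notes, only approximate values are recovered this way: the true peak may fall between the sampled frequency points.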
Example 2.8
Consider the electrical network shown in Figure 2.19, consisting of three resistances,
R1, R2, and R3, a capacitor, C, and an inductor, L, connected to a voltage source,
e(t), and a switch, S. When the switch, S, is closed at time t = 0, the current
passing through the resistance R1 is i1(t), and that passing through the inductor, L,
is i2(t). The input to the system is the applied voltage, e(t), and the output is the
current, i2(t).
The two governing equations of the network are the loop voltage equations, Eqs. (2.53)
and (2.54). Eliminating i1(t) between them yields a single second-order differential
equation relating the output, i2(t), to the input, e(t):

d²i2/dt² + [(R1R3 + R1R2 + R2R3)/(R1 + R3)] di2/dt + (1/C) i2(t) = [R3/(R1 + R3)] de/dt    (2.55)
Comparing Eq. (2.55) with Eq. (2.4) we find that the system is linear and
of second order, with y(t) = i2(t), u(t) = e(t), a0 = 1/C, a1 = (R1R3 + R1R2 +
R2R3)/(R1 + R3), b0 = 0, and b1 = R3/(R1 + R3). Hence, from Eq. (2.47), the
frequency response of the system is given by

G(iω) = b1 iω/[(iω)² + a1 iω + a0] = 0.5 iω/[(10⁶ − ω²) + 30 iω]    (2.57)

where the numerical values a0 = 10⁶, a1 = 30, and b1 = 0.5 correspond to the
parameter values used in this example.
Bode gain and phase plots of the frequency response given by Eq. (2.57), shown in
Figure 2.20, can be obtained using the following MATLAB commands:
»w=logspace(-1,4); <enter>
(This command produces equally spaced frequency points on a logarithmic scale from
0.1 to 10 000 rad/s, and stores them in the vector w.)
Figure 2.20 Bode plot for the electrical network in Example 2.8; a peak in the gain plot and
the corresponding phase change of 180° denote the natural frequency of the system
(This command calculates the value of G(iω) by Eq. (2.57) at each of the speci-
fied frequency points in w, and stores them in the vector G. Note the MATLAB
operations .* and ./, which allow element-by-element multiplication and division,
respectively, of two arrays (see Appendix B).)
(This command calculates the gain and phase of G(iω) at each frequency point in
w using the MATLAB intrinsic functions abs, angle, and log10, and stores them in
the vectors gain and phase, respectively. We are assuming, however, that G does
not become zero or infinite at any of the frequencies contained in w.)
(This command produces gain and phase Bode plots as two (unlabeled) subplots,
as shown in Figure 2.20. Labels for the axes can be added using the MATLAB
commands xlabel and ylabel.)
The Bode plots shown in Figure 2.20 are obtained much more easily through the
Control System Toolbox (CST) command bode as follows:
Note the peak in the gain plot of Figure 2.20 at the frequency ω = 1000 rad/s.
At the same frequency the phase changes by 180°. Hence, ω = 1000 rad/s is the
system's natural frequency. To verify whether this is the exact natural frequency,
we can rationalize the denominator in Eq. (2.57) (i.e. make it a real number by
multiplying both numerator and denominator by a suitable complex factor - in this
case (−ω² + 10⁶) − 30iω) and express the magnitude and phase as follows:

|G(iω)| = [225ω⁴ + 0.25ω²(−ω² + 10⁶)²]^(1/2)/[(−ω² + 10⁶)² + 900ω²]
φ(ω) = tan⁻¹[(−ω² + 10⁶)/(30ω)]    (2.58)

From Eq. (2.58), it is clear that |G(iω)| has a maximum value (0.0167 or
−35.547 dB) - and φ(ω) jumps by 180° - at ω = 1000 rad/s. Hence, the natural
frequency is exactly 1000 rad/s. Figure 2.20 also shows that the gain at ω =
0.1 rad/s is −150 dB, which corresponds to |G(0.1i)| = 10^−7.5 = 3.1623 × 10⁻⁸,
a small number. Equation (2.58) indicates that |G(0)| = 0. Hence, ω = 0.1 rad/s
approximates quite well the zero-frequency gain (called the DC gain) of the system.
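The resonance at ω = 1000 rad/s can be checked numerically. Using the numeric frequency response implied by Eq. (2.58) (a Python sketch; the corresponding MATLAB computation with abs is analogous):

```python
import math

def G(w):
    """Frequency response of Example 2.8 in the numeric form implied by
    Eq. (2.58): G(i*w) = 0.5*i*w / ((1e6 - w**2) + 30j*w)."""
    return 0.5j * w / ((1e6 - w ** 2) + 30j * w)

peak = abs(G(1000.0))   # magnitude at the natural frequency, 1000 rad/s
print(round(peak, 4))   # 0.0167, i.e. roughly -35.5 dB
# Slightly off resonance the magnitude is smaller, confirming a maximum there:
print(abs(G(990.0)) < peak and abs(G(1010.0)) < peak)   # True
```

At exactly ω = 1000 rad/s the real part of the denominator vanishes, leaving |G| = 500/30000 = 1/60.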
The frequency response is used to define a linear system's property called bandwidth,
defined as the range of frequencies from zero up to the frequency, ωb, where
|G(iωb)| = 0.707|G(0)|. Examining the numerator of |G(iω)| in Eq. (2.58), we
see that |G(iω)| vanishes at ω = 0 and ω = 1999 100 rad/s (the numerator roots
can be obtained using the MATLAB intrinsic function roots). Since |G(0)| = 0,
the present system's bandwidth is ωb = 1999 100 rad/s (which lies beyond the
frequency range of Figure 2.20). Since the degree of the denominator polynomial
of G(iω) in Eq. (2.47) is greater than that of the numerator polynomial, it follows
that |G(iω)| → 0 as ω → ∞. Linear systems with G(iω) having a higher-degree
denominator polynomial (than the numerator polynomial) in Eq. (2.47) are called
strictly proper systems. Equation (2.58) also shows that φ(ω) → 90° as ω → 0,
and φ(ω) → −90° as ω → ∞. For a general system, φ(ω) → −k90° as ω → ∞,
where k is the number by which the degree of the denominator polynomial of G(iω)
exceeds that of the numerator polynomial (in the present example, k = 1).
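The low- and high-frequency phase limits quoted above can also be checked numerically for this example, again using the numeric frequency response implied by Eq. (2.58) (Python sketch; MATLAB's angle would give the same values in radians):

```python
import cmath, math

def G(w):
    # Numeric frequency response of Example 2.8, as implied by Eq. (2.58)
    return 0.5j * w / ((1e6 - w ** 2) + 30j * w)

low = math.degrees(cmath.phase(G(1e-3)))   # approaches +90 deg as w -> 0
high = math.degrees(cmath.phase(G(1e9)))   # approaches -90 deg as w -> infinity
print(round(low), round(high))   # 90 -90
```

The 180° swing between these two limits is exactly the phase jump seen at the natural frequency in Figure 2.20.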
Let us now draw a polar plot of G(iω) as follows (note that we need more
frequency points close to the natural frequency for a smooth polar plot, because of
the 180° phase jump at the natural frequency):
(This command creates a frequency vector, w, with more frequency points close to
1000 rad/s.)
(This command for generating a polar plot requires phase angles in radians, but the
plot shows the phase in degrees.)
The resulting polar plot is shown in Figure 2.21. The plot is in polar coordinates,
|G(iω)| and φ(ω), with circles of constant radius, |G(iω)|, and radial lines of
constant φ(ω) overlaid on the plot. Conventionally, polar plots show either all posi-
tive, or all negative, phase angles. In the present plot, the negative phase angles have
been shown as positive angles using the transformation φ → (φ + 360°), which is
acceptable since both sine and cosine functions are invariant under this transfor-
mation for φ < 0 (e.g. φ = −90° is the same as φ = 270°).
Figure 2.21 Polar plot of the frequency response, G(iω), of the electrical system of Example 2.8
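The transformation used above to display negative phase angles as positive ones is simply a modulo-360° wrap. A small Python sketch of the idea:

```python
def wrap_positive(phi_deg):
    """Map any phase angle (in degrees) into [0, 360), so that negative
    angles appear as their positive equivalents on a polar plot."""
    return phi_deg % 360.0

print(wrap_positive(-90.0))    # 270.0 (same polar direction as -90 deg)
print(wrap_positive(-180.0))   # 180.0
```

Since sine and cosine are periodic with period 360°, the wrapped angle points in exactly the same direction in the complex plane.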