Bio Control System
NOTES ON LESSON
Introduction to Control
A control system is a dynamical system that affects the behaviour of another system.
Examples of control systems can be found all around us, and in fact there are very few
mechanical or electro-mechanical systems that do not include some kind of feedback
control device. In robotics, control design algorithms are responsible for the motion of
the manipulators. In flight applications, control algorithms are designed for stabilization,
altitude regulation and disturbance rejection. Cruise control is an interesting application
in which the automobile's speed is set at a fixed value. In electronic amplifiers feedback
is used to reduce the damaging influence of external noise. In addition, these days control
systems can be found in diverse fields ranging from semiconductor manufacturing to
environmental regulation.
This course is intended to present you with the basic principles and techniques for the
design of feedback control systems. At this point in your study you have mastered the
prerequisite topics such as dynamics and the basic mathematical tools that are needed for
their analysis. Control system design relies on your knowledge in these fields but also
requires additional skills in system interfacing. As you will see from this course, from
further electives, or from future experience, the design of feedback control systems
depends on
The study of electronic feedback amplifiers provided the impetus for much of the
progress of control design during the first part of the 20th century. The work of
Nyquist (1932) and Bode (1945) used mathematical methods based on complex
analysis for the analysis of the stability and performance of electronic amplifiers.
These techniques are still in use in many technological applications as we will see
in this course. Such complex analytic methods are currently called classical
control techniques.
During the Second World War, advances in control design centered on the use
of stochastic analysis techniques to model noise and the development of new
methods for filtering and estimation. The MIT mathematician N. Wiener was very
influential in this development. Also during that period, research at MIT
Radiation Laboratory gave rise to more systematic design methods for
servomechanisms.
During the 1950's a different approach to the analysis and design of control
systems was developed. This approach concentrated on differential equations and
dynamical systems as opposed to complex analytic methods. One advantage of
this approach is that it is intimately related to physical modeling and can be
viewed as a continuation of the methods of analytical mechanics. In addition, it
provided a computationally attractive methodology for both analysis and design
of control systems. Work by Kalman in the USA and Pontryagin in the USSR laid
the foundation for what is currently called modern control.
The system (or process) that we will consider consists of a tank that contains oil. The
tank sits on a heater which supplies energy to the oil. The energy supplied by the heater
depends on the input voltage to the heater which is controlled by a knob. Please refer to
Figure 1 for a sketch of this thermal system.
In this application we are interested in regulating the oil temperature. The system can be
represented by a block diagram as shown in Figure 2. The input to the system is the
voltage to the heater or knob position, whereas the output of the system is the oil
temperature.
Let us suppose that this process is in a factory and a worker adjusts the voltage knob to
set the oil temperature at a desired value determined by his boss. The boss decides the
desired temperature based on the factory requirements. The worker uses a lookup table to
position the knob. The table contains knob positions and the corresponding values of
steady state temperature (by steady state we mean the final value, since it takes some time
to heat the oil to the desired temperature). An experiment was performed in the past to
obtain the lookup table. The knob was adjusted at the different positions and the
corresponding values of temperature were measured by a temperature sensor such as a
thermometer or a thermocouple. See Figure 3.
Figure 3: Open loop control of the oil temperature. Note that there is no device to
measure the oil temperature.
The boss would like the oil temperature to be more accurate, that is, the oil temperature
should be very close to the desired value. He buys a thermometer (temperature sensor) so
that the worker can read the oil temperature and thus adjust the knob appropriately to
make the temperature as close as possible to the desired value. If the measured
temperature is less than the desired one he will raise the knob, otherwise he will lower it.
He keeps doing this until the desired value of temperature is achieved. The worker is able
to do a better job because he has a sensor, or feedback information, about the oil
temperature. In this case we have a closed loop system.
Figure 5: Closed loop control of the oil temperature. Note that there is a thermometer to
measure the oil temperature.
In the above example the controller was a human (worker). The controller is the part of
the system that makes decisions based on an objective. The objective in the thermal
system was to achieve the desired value of oil temperature. This was done by making the
difference between the desired and measured values of temperature equal to zero. The
human could be replaced by an electronic circuit, computer or a mechanical controller.
We used a thermometer for the temperature measurement. A thermocouple, which
converts temperature to voltage could be used for the electronic circuit or computer
controller. No sensor is needed for the mechanical controller. The relationship between
pressure and temperature could be used to design a mechanism for moving the knob up
and down to achieve the desired value of temperature.
Modeling of Systems
The first step in control design is the development of a mathematical model for the
process or system under consideration. In the modeling of systems, we assume a cause
and effect relationship described by the simple input/output diagram below. An input is
applied to a system, and the system processes it to produce an output. In general, a
system has the three basic components listed below.
1. Inputs: These represent the variables at the designer's disposal. The designer
produces these signals directly and applies them to the system under
consideration. For example, the voltage source to a motor and the external torque
input to a robotic manipulator both represent inputs. Systems may have single or
multiple inputs.
2. Outputs: These represent the variables which the designer ultimately wants to
control and that can be measured by the designer. For example, in a flight control
application an output may be the altitude of the aircraft, and in automobile cruise
control the output is the speed of the vehicle.
3. System or Plant: This represents the dynamics of a physical process which relate
the input and output signals. For example, in automobile cruise control, the output
is the vehicle speed, the input is the supply of gasoline, and the system itself is the
automobile. Similarly, an air conditioning system regulates the temperature in a
room. The output from the system is the air temperature in the room. The input is
cool air added to the room. The system itself is the room full of air with its air
flow characteristics. Note that the system could be mechanical, thermal, fluid,
electrical, electro-mechanical, etc.
In this lecture we will show by examples how mathematical models for simple
engineering systems can be developed. Notice that in each example four steps are
taken. First, a diagram of all system components and externally applied inputs is
drawn. From this diagram, the inputs and outputs are identified. Then a diagram is
made of the system components in which all internal signals are shown. Finally,
the differential equations governing the system dynamics are obtained. These
equations form the mathematical model of the system.
Example 1 Thermal system: Oil, tank and heater described in lecture #2.
The oil, tank and heater system was described in the previous lecture. Now, we
develop a mathematical model (differential equation) for this system which
describes its dynamics. The different signals and components needed to derive a
mathematical model for the thermal system are shown in Figure 1.
Energy supplied by the heater = Energy stored by oil + Energy lost to the
surrounding environment by conduction
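The resulting equation is not reproduced in these notes; under the usual lumped-parameter assumptions (heater power q, fluid thermal capacitance C, thermal resistance R to the surroundings at ambient temperature Ta), this energy balance leads to a first-order model of the form

$$C\,\frac{dT}{dt} = q - \frac{T - T_a}{R},$$

which is the same heat balance equation used again later in the course for the PI control example.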
Figure 3: (a) Diagram of the mechanical system components. (b) Free body
diagram of the mechanical system.
where b is the damping coefficient and k is the spring stiffness (see Table 2.2,
page 35 in the textbook). Equation (2) is the differential equation that describes
the dynamics of the spring-mass-damper system. Note that the input and output
appear in this equation. If we know the input f, then we can solve equation (2) for
the output x.
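Equation (2) itself is not reproduced above; for a mass m acted on by an external force f, a damper with coefficient b and a spring with stiffness k, Newton's second law gives the standard form

$$m\,\frac{d^2x}{dt^2} + b\,\frac{dx}{dt} + k\,x = f.$$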
The thermal and mechanical systems described in this lecture can be represented
by the following block diagram:
The system is a storage tank of cross-sectional area A whose liquid level or height is h.
The liquid enters the tank from the top and leaves the tank at the bottom through the
valve, whose fluid resistance is R. The volume flow rate in and the volume flow rate out
are qi and qo, respectively. The fluid density ρ is constant. Please refer to Figure 1 for a
schematic diagram of the system. In such a system it is desired to regulate the water level
in the tank. Assume that the variable that we can change to control the water level is qi.
In order to obtain the differential equation of the system we use the conservation of mass
principle which states that
The time rate of change of fluid mass inside the tank = the mass flow rate in - mass flow
rate out
where A is the cross-sectional area of the tank, g is the acceleration due to gravity and R is
the fluid resistance through the valve (see Table 2.2, page 35 in the textbook). Equation
(1) is the differential equation that describes the dynamics of our fluid system. Note
that the input and output appear in this equation. If we know the input qi, then we can
solve equation (1) for the output h. A block diagram representation of the fluid system is
shown in Figure 2.
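Equation (1) itself is not reproduced above; one common form, assuming the valve is modeled as a linear fluid resistance R relating the pressure ρgh at the bottom of the tank to the mass flow rate out, is

$$\rho A\,\frac{dh}{dt} = \rho\,q_i - \frac{\rho g h}{R}.$$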
This system contains two masses, a spring, and a damper (refer to Figure 3). An external
force is applied to the first mass and we would like to control the position of the second
mass. The external force f indirectly affects the motion of the second mass.
Figure 3: (a) Diagram of the mechanical system components. (b) Free body diagram of
the system.
Notice how the system dynamics relate the input and output. Remember that f is the input
and z is the output. A block diagram representation of the mechanical system is shown in
Figure 4.
The system below shows an electrical circuit with a current source i, resistor R, inductor
L, and capacitor C. All of these parts are connected in parallel. It is required to regulate
the capacitor voltage V.
Next we find a set of differential equations that describes this system. Kirchhoff's current
law is applied: the sum of the three currents (through R, L, and C) is equal to the overall source
current i.
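The resulting equation is not reproduced above; with v denoting the capacitor voltage (common to all three parallel elements), Kirchhoff's current law gives

$$C\,\frac{dv}{dt} + \frac{v}{R} + \frac{1}{L}\int_0^{t} v(\tau)\,d\tau = i(t).$$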
In the next lecture we show how a state space representation of this system can be
obtained.
In deriving a mathematical model for a physical system one usually begins with a system
of differential equations. It is often convenient to rewrite these equations as a system of
first order differential equations. We will call this system of differential equations a state
space representation. The solution to this system is a vector that depends on time and
which contains enough information to completely determine the trajectory of the
dynamical system. This vector is referred to as the state of the system, and the
components of this vector are called the state variables. In order to illustrate these
concepts consider the following two examples.
This is the same example discussed in lecture #3, where we derived the mathematical
model of the system. The equation describing the evolution of this system can be written
as (see equation (2) in lecture #3):
Suppose that the input to the system is f and the output is x. In this problem we have a
second order differential equation which means that we need two initial conditions to
solve it uniquely. For example, specifying the value of x(to) and dx/dt(to) will allow for
solution of x(t) for t>to. Therefore, we will also require two state variables to describe
this system. We will label each state variable xi where i = 1,2 and make the assignments
Rewriting the differential equation above in terms of the state variables xi yields the
following state-space description for the system:
We can use matrix notation to rewrite the equations in the above state space
representation.
So for the above example, the state vector takes the form:
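The matrices are not reproduced above; for the spring-mass-damper equation m d²x/dt² + b dx/dt + k x = f, with x1 = x and x2 = dx/dt, the standard matrix form is

$$\frac{d}{dt}\begin{bmatrix}x_1\\ x_2\end{bmatrix} = \begin{bmatrix}0 & 1\\ -k/m & -b/m\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}0\\ 1/m\end{bmatrix}f, \qquad y = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}.$$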
This is the same example discussed in the previous lecture where we derived the
mathematical model of the system (set of differential equations). The equations
describing the evolution of this system can be written as (see equations (2) and (3) from
the previous lecture):
Suppose that the input to the system is f and the output is z. In this problem we have two
second order differential equations which means that we need four initial conditions to
solve them uniquely. For example, specifying the value of w(to), dw/dt(to), z(to), and
dz/dt(to) will allow for solution of w(t) and z(t) for t>to. Therefore, we will also require
four state variables to describe this system. We will label each state variable xi where i =
1, ...,4 and make the assignments
Rewriting the two differential equations above in terms of the state variables xi yields the
following state-space description for the system:
We can use matrix notation to rewrite the equations in the above state space
representation.
So for the above example, the state vector takes the form:
We will usually want to choose the smallest set of state variables which will still fully
describe the system. A system description in terms of the minimum number of state
variables is called a minimal realization of the system. We will want next to compute the
solution of the state space equation. This leads us to the study of Laplace transforms.
Laplace Transforms
Laplace transforms are mathematical operations that transform functions of a single real
variable into functions of a complex variable. They are extremely useful in the study of
linear systems for many reasons. The most important reason is that by using Laplace
transforms one can transform a linear differential equation into an algebraic equation.
Another important reason for Laplace transforms is that they supply a different
representation for a linear system. Such a representation will be the basis for the classical
control techniques. Given a function of time x(t) for t > 0, we define the Laplace
transform of x(t) as follows:
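The defining integral is not reproduced above; the standard definition is

$$X(s) = \mathcal{L}\{x(t)\} = \int_0^{\infty} x(t)\,e^{-st}\,dt.$$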
A natural question to ask is: what are the conditions that the signal x(t) must satisfy in
order for the Laplace transform to exist? The Laplace transform exists for signals for
which the above transformation integral converges. However, the following fact supplies
an easy way to check a sufficient condition for the existence of a Laplace transform.
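The condition itself is not reproduced above; a standard statement of it (an exponential-order bound) is: if there exist constants M > 0 and a such that

$$|x(t)| \le M\,e^{at} \quad \text{for all } t \ge 0,$$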
then the Laplace transform L(x(t)) = X(s) exists for Re(s)>a. Note that Re(s) is the real
part of s.
Since any physically realizable signal is continuous and will not blow up so fast that no
exponential can bound it, the previous fact assures us that the Laplace transform can be
defined for any physically realizable signal. Let us now consider a few examples of how
to calculate Laplace transforms. Note: For this course, we will use the letter 'j' to
represent the complex constant square root of -1.
This property shows that the process of differentiation in the time-domain corresponds to
multiplication by s in the Laplace domain plus the addition of the constant -x(0).
In general we have
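The formulas referred to are not reproduced above; the standard differentiation properties are

$$\mathcal{L}\left\{\frac{dx}{dt}\right\} = sX(s) - x(0), \qquad \mathcal{L}\left\{\frac{d^n x}{dt^n}\right\} = s^n X(s) - s^{n-1}x(0) - s^{n-2}\dot{x}(0) - \cdots - x^{(n-1)}(0).$$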
We can find the Laplace transform for t and sin2t using the Laplace transform table. In
part (a) we need to shift s by 3 whereas we shift s by -1 in part (b). Therefore the Laplace
transforms of the above two functions are given as
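The two functions and their resulting transforms are not reproduced here; the shift they rely on is the standard frequency-shift property

$$\mathcal{L}\{e^{-at}x(t)\} = X(s+a).$$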
In this lecture we discuss how to calculate the inverse Laplace transform. The simplest
way to invert a Laplace transform is to convert the function to be inverted into a sum of
functions which are Laplace transforms that we have already calculated. By the linearity
of Laplace transforms, the given function can be inverted directly and its inverse will be a
sum of the corresponding time domain functions. The table given under the
supplementary material on this web site will be very useful here. It lists some common
time domain functions and their Laplace transforms. Some of these relationships were
derived in Lecture 6.
We will next give an example that demonstrates how the Laplace transform can be used
in solving an ordinary differential equation. Note that the inverse Laplace transform will
be needed to solve the problem.
Example 1:
where u is the input to the system and x is the output of the system. Find the output signal
x(t) if the input
Example 2: Calculate the inverse Laplace transform for the following function
Example 3: Calculate the inverse Laplace transform for the following function
Example 4: Calculate the inverse Laplace Transform for the following function. Note
that the denominator has real and complex roots.
The roots of the denominator of X(s) are s = -6, s = -3+4j and s = -3-4j . We first calculate
the partial fraction expansion of X(s) without factoring the second order polynomial
which has the complex roots.
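The expression for X(s) itself is not reproduced in these notes. As a purely illustrative sketch with the same pole locations (s = -6 and s = -3 ± 4j), the partial fraction expansion and the inverse transform can be checked symbolically, for example with SymPy:

import sympy as sp

s, t = sp.symbols('s t', positive=True)

# Hypothetical X(s) with the same poles as in the example: s = -6 and s = -3 +/- 4j
# (the original X(s) is not reproduced in these notes)
X = 1 / ((s + 6) * (s**2 + 6*s + 25))

# Partial fraction expansion without factoring the quadratic that has the complex roots
print(sp.apart(X, s))

# Inverse Laplace transform, i.e. the corresponding time-domain signal for t > 0
x_t = sp.inverse_laplace_transform(X, s, t)
print(sp.simplify(x_t))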
In this lecture we will begin studying a few methods for the solution of linear differential
equations. In particular we will focus on those differential equations that arise from
modeling linear mechanical systems such as a mass-spring system. First consider the
following differential equation:
In order to solve this equation for a unique y(t), we need the initial conditions:
Let us start by writing this equation in state space form. Define
with
Before we develop the complete solution for this general problem we will look at the
solution of a very simple case. Suppose x(t) and u(t) are scalars, so that A = a
and B = b are also scalars. Consider the first order differential equation
This last equation is known as the scalar form of the variation of parameters formula. The
components of this equation are as follows:
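The formula and the description of its components are not reproduced above; for dx/dt = a x + b u with initial condition x(0), it reads

$$x(t) = e^{at}x(0) + \int_0^{t} e^{a(t-\tau)}\,b\,u(\tau)\,d\tau,$$

where the first term is the response due to the initial condition and the second term is the response due to the input.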
Example: Calculating the solution to a specific case of the scalar problem given above.
with input
where I is the n x n identity matrix. By using Laplace transforms we can transform the
above equation into
and thus
and
which is the inverse Laplace transform. One can also verify that Φ(t) is equal to the
following infinite sum by showing that the sum converges and satisfies the above matrix
differential equation.
Observe that we can use Φ(t) as an integrating factor in the following way:
Note that for one of the steps above we have used the fact that Φ(t) commutes with A;
that is, AΦ(t) = Φ(t)A. Using the infinite sum expression for Φ(t), the reader can easily
verify this fact. In a similar way to the scalar case we arrive at
In the general case where the initial condition is given at some point t0, we have
where
is given. This is the matrix form of the variation of parameters formula. The matrix Φ(t) is
called the state transition matrix.
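The formulas referenced in this derivation are not reproduced above; in standard notation, with Φ(t) = e^{At}, they are

$$\Phi(t) = e^{At} = \mathcal{L}^{-1}\{(sI - A)^{-1}\} = \sum_{k=0}^{\infty}\frac{A^k t^k}{k!}, \qquad x(t) = \Phi(t - t_0)\,x(t_0) + \int_{t_0}^{t}\Phi(t-\tau)\,B\,u(\tau)\,d\tau.$$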
Suppose u(t) = 0.
We first find
Therefore,
and
Hence for the unforced case (u(t) = 0), we have by the variation of parameters formula
We may use the Φ(t) which we found previously, since no part of its derivation has
changed. Hence, we have
Now using the variation of parameters formula given above, we can see that our system
satisfies the equation:
Example 3: For the spring-mass-damper system shown, find the position and velocity of
the mass if the initial position is 1 m, the initial velocity is zero and the external force
f(t) = 5 N.
The state variables are the position and velocity of the mass. Recall that the state space
representation for this system is written as
We need to solve this equation in order to find the state variables. The variation of
parameters formula will be used to solve the problem. We first calculate the state
transition matrix
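The state transition matrix and the remaining integration are not reproduced above. As a numerical sketch (the notes do not give values for m, b and k in this example, so the values below are assumed purely for illustration), the same forced response can be obtained by integrating the state equation directly:

import numpy as np
from scipy.integrate import solve_ivp

# Assumed (hypothetical) parameter values -- the notes do not specify m, b, k here
m, b, k = 1.0, 3.0, 2.0          # kg, N.s/m, N/m
f = 5.0                          # constant external force, N

# State space model: x1 = position, x2 = velocity
A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])
B = np.array([0.0, 1.0 / m])

def rhs(t, x):
    return A @ x + B * f

x0 = [1.0, 0.0]                  # initial position 1 m, initial velocity 0
sol = solve_ivp(rhs, (0.0, 10.0), x0)

print(sol.y[0, -1])              # position approaches f/k = 2.5 m
print(sol.y[1, -1])              # velocity approaches 0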
Stability
Conceptually, a stable system is one for which the output is small in magnitude whenever
the applied input is small in magnitude. In other words, a stable system will not blow
up when bounded inputs are applied to it. Equivalently, a stable system's output will
always decay to zero when no input is applied to it at all. However, we will need to
express these ideas in a more precise definition.
is a stable system if
By the variation of parameters formula, the state x(t) for such an unforced system
satisfies
In this case, the system output y(t) = Cx(t) is driven only by the initial conditions.
If a < 0, then the system is stable because x(t) goes to zero as t goes to infinity for any
initial condition. If a = 0, x(t) stays constant at the initial condition and according to our
definition of stability the system is unstable. Finally, if a > 0 then x(t) goes to infinity
(blows up) as t goes to infinity and thus the system is unstable.
Example 2: Let us analyze the stability of the following unforced linear system
Because both exponentials have a negative number multiplying t , for any values of the
initial conditions we have
Examples of stable and unstable systems are the spring-mass and spring-mass-damper
systems. These two systems are shown in Figure 1. The spring-mass system (Figure 1(a))
is unstable since if we pull the mass away from the equilibrium position and release it, it
will oscillate forever (assuming there is no air resistance). Therefore it will not go
back to the equilibrium position as time passes. On the other hand, the spring-mass-
damper system (Figure 1(b)) is stable since for any initial position and velocity of the
mass, the mass will go to the equilibrium position and stop there when it is left to move
on its own. (See the animations by clicking on the links that are provided below Figure
1.)
Let us analyze the stability of the spring-mass system using mathematical relations. The
equation of motion for the spring-mass system shown in Figure 1(a) is written as
Note that there is no damping and no external force (b = 0 and f = 0). Rewriting the above
differential equation in the state space form with the mass position and velocity as state
variables, we get
Therefore, the state variables of the system as a function of time are given as
Note that the two state variables do not go to zero as time goes to infinity for any initial
condition. They instead oscillate around the equilibrium point (x1 = 0 and x2 = 0).
Therefore, the system is unstable.
Another example that demonstrates the concept of stability is the pendulum system, which
is shown in Figure 2. If air resistance is neglected the pendulum will oscillate forever
and thus the system will be unstable. On the other hand, the mass will always go back to
its equilibrium position if air resistance is taken into account, and the system will therefore
be stable.
Stability (Continued)
The stability of a system can be related to the eigenvalues of the system A matrix. These
eigenvalues can be determined from the characteristic polynomial of the matrix. For a
matrix A, its characteristic polynomial is
and the eigenvalues of the matrix A are the roots of the characteristic polynomial.
The roots of the characteristic polynomial are the values of s that satisfy the following
relation
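The relation itself is not reproduced above; in standard form it is

$$p(s) = \det(sI - A) = 0.$$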
Therefore the roots of the characteristic polynomial are s = -1 and s = -2. Thus the
eigenvalues of A are -1 and -2. It is a fact that the eigenvalues of a diagonal matrix, such
as the matrix A, are its diagonal elements. Notice that the A matrix of this example is the
same as the A matrix of the unforced system studied in a previous example (see previous
lecture). Notice also that the eigenvalues of A are the constants multiplying t in the
exponentials of the unforced state solution of that example. Remember that
Because of this fact, it is easy to see that negative eigenvalues correspond to a stable
system while non-negative eigenvalues will produce an unstable system.
is a stable system if all of the eigenvalues of matrix A have negative real part.
The eigenvalues of the A matrix are -1 and 2. Since one of the eigenvalues is non-
negative, this system is unstable.
Exercise: Solve the differential equation of this example and note that the second state
variable will blow up as t goes to infinity which produces an unstable system.
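A quick numerical check of this eigenvalue test can be done with NumPy. The matrix below is a hypothetical stand-in with the same eigenvalues (-1 and 2) as the example above, since the original A matrix is not reproduced here:

import numpy as np

# Hypothetical A matrix with eigenvalues -1 and 2
A = np.array([[-1.0, 0.0],
              [0.0, 2.0]])

eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)                              # [-1.  2.]

# Stable if and only if every eigenvalue has a negative real part
is_stable = np.all(eigenvalues.real < 0)
print("stable" if is_stable else "unstable")    # unstable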
Example 3: Let us analyze the stability of a system whose A matrix is of size 3x3. Also,
find the state transition matrix.
The characteristic equation has the roots s = -1, s = -2+4j and s = -2-4j. Since the
real parts of all the roots are negative, we conclude that the system is stable.
where u is the input, y is the output and x(0) is the initial condition. Pictorially, the system
can be represented as follows:
The first term is the unforced response, resulting only from the initial conditions. Next,
we want to find an input/output relationship in the Laplace domain. We first take the
Laplace transform of the state equation with the initial condition set to zero.
Taking the Laplace transform of the output equation y = Cx+Du , we find that
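The resulting expressions are not reproduced above; in standard form, with zero initial condition,

$$X(s) = (sI - A)^{-1}B\,U(s), \qquad Y(s) = \left[C(sI - A)^{-1}B + D\right]U(s).$$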
An internal or state space system representation describes the evolution of the system in
the time domain. However, an external or input/output system description is developed in
the Laplace domain. We now consider how an external representation may be obtained
from an internal one.
The term T(s) = C(sI - A)^{-1}B + D is called the transfer function of the system and it
determines the output Y(s) for any given input U(s). Notice that the equation
Y(s) = T(s)U(s) does not contain any information about the system state or the initial conditions.
For a given set of system matrices A, B, C, D, there is only one corresponding transfer
function T(s). However, one transfer function may correspond to many different state
space representations. A state space description obtained from a transfer function is
known as a realization and can take on many different forms. We will study a few of
these forms such as
the controllable canonical form and the observable canonical form later on in the course.
Remember that in computing a transfer function, the initial conditions are set to zero.
Therefore, a piece of information is lost in the transformation from a state space
description to a transfer function.
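As a small numerical illustration of passing from a state space description to a transfer function (using a hypothetical second-order system, not one taken from these notes), SciPy's signal module can be used:

import numpy as np
from scipy import signal

# Hypothetical state space realization, for illustration only
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Convert to a transfer function T(s) = C (sI - A)^{-1} B + D
num, den = signal.ss2tf(A, B, C, D)
print(num)   # numerator coefficients, highest power of s first
print(den)   # denominator coefficients: s^2 + 3s + 2 = det(sI - A)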
Solution
(b) Using the transfer function T(s) calculated in (a) and the equation Y(s) =T(s)U(s), we
can calculate the output for a given input signal.
Note that in the example above, the denominator of the transfer function T(s) is of degree
two, which is the same as the number of state variables in our state space system
representation. However, this is not always the case. A state space system with n states
may have a transfer function whose denominator has degree less than n. We define the order of a
state space system as the number of its state variables. This tells us that the degree of the
transfer function (i.e. the degree of the denominator polynomial) is less than or equal to
the order of the system. In the special case where the order of the system is equal to the
degree of the transfer function, we say that the system has minimal order.
We must first calculate the adjoint and the determinant of the matrix (sI-A) (see matrix
inverse under tools in the supplementary section):
Finally, we have
m > 0, b > 0 and k > 0. Let the state variables be the position and velocity of the mass.
The output of the system is the position and the input is the external force f .
For parts (c), (d) and (e) let m = 1 kg, b = N.s/m, and k = 1 N/m.
(c) Find the state variables (position and velocity of the mass) as functions of time if the
initial conditions are equal to zero and f(t) = 1.
(d) Find the output y(t) if the initial conditions are equal to zero and f(t) = sin t.
(e) Find the output y(t) if the initial position is equal to 1 m, the initial velocity is equal to
zero and f(t) = sin t.
Solution:
Let the position of the mass be z. Let us first find the state space representation of the
system. We previously found the differential equation that governs the dynamics of this
system:
The input u = f and the output y = z . Thus, the state space equations are written as:
(a) In order to test the stability of the system we find the eigenvalues of the matrix A:
Note that for m > 0, b > 0 and k > 0 the real part of the eigenvalues is always negative.
Therefore, the system is stable. For any initial condition the mass will go to the
equilibrium point and stop if it is left to move without applying the external force
(input).
Note that the transfer function of the system can also be found by taking the Laplace
transform of both sides of the differential equation that governs the system dynamics and
setting all the initial conditions to zero.
(c) To find the state variables as functions of time we use the variation of parameters
formula. Let us first compute the state transition matrix with m = 1 kg, b = N.s/m, and k =
1 N/m:
Exercise: Complete the integration and get the answer for part (c). Is there an easier way
to solve this part of the example?
(d) Since all the initial conditions are equal to zero we can use the transfer function found
in part (b) to find y(t).
(e) This part is the same as part (d) except for the initial condition. In part (d) the initial
condition is assumed to be zero and the output is the forced response. Thus, if we find the
unforced response and add it to the one found in part (d) we get the answer.
The unforced response (or response due to the initial condition) is given as:
The output is equal to the unforced response found here plus the forced response found in
part (d), that is (variation of parameters formula)
Example: Pendulum
The figure below shows the pendulum system. A mass m is attached to a cable of length
L. The angular position of the mass measured from the vertical axis is θ. The positive
direction for the angular position is counterclockwise. We will assume that there is no
external force or air resistance on the system. This means that the mass will oscillate
freely forever about the vertical axis.
We first obtain the differential equation that governs the dynamics of the pendulum
system. The free-body-diagram that shows all the forces on the mass is drawn below.
Note that the above differential equation is nonlinear. We choose the state variables as
follows (z is usually used for nonlinear systems and x for linear systems):
In general, the state equation for a nonlinear system with n state variables z1, z2, ... zn and
input v (we use v for nonlinear systems and u for linear systems) can be written as
Equivalently,
Define Δz = z - z0, the deviation of the state from its nominal value, and Δv = v - v0, the
deviation of the input from its nominal value. We can write
In order to obtain an approximation for our system, we express dz/ dt as its Taylor series
expansion about the nominal trajectory as follows:
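The expansion itself is not reproduced above; writing the state equation as dz/dt = f(z, v), the first-order Taylor approximation about the nominal trajectory (z0, v0) is

$$\frac{dz}{dt} \approx f(z_0, v_0) + \left.\frac{\partial f}{\partial z}\right|_{(z_0, v_0)}\Delta z + \left.\frac{\partial f}{\partial v}\right|_{(z_0, v_0)}\Delta v.$$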
Choose a nominal state vector z0 and a nominal input v0. We usually choose the nominal
state vector and input at the equilibrium or steady state condition.
We will linearize the system around the equilibrium point where the two state variables
are equal to zero:
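The resulting equations are not reproduced above; for the pendulum with no external input, taking z1 = θ and z2 = dθ/dt, the nonlinear state equations and their linearization about z1 = z2 = 0 (using sin θ ≈ θ) take the standard form

$$\frac{d}{dt}\begin{bmatrix}z_1\\ z_2\end{bmatrix} = \begin{bmatrix}z_2\\ -\dfrac{g}{L}\sin z_1\end{bmatrix}, \qquad \frac{d}{dt}\begin{bmatrix}\Delta z_1\\ \Delta z_2\end{bmatrix} \approx \begin{bmatrix}0 & 1\\ -\dfrac{g}{L} & 0\end{bmatrix}\begin{bmatrix}\Delta z_1\\ \Delta z_2\end{bmatrix}.$$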
We first obtain a mathematical model of the above system. The force exerted on the mass
by the electromagnet is
and obtain a state space representation for the system. Since we have a second order
differential equation in y, we choose y and its derivative as state variables. Since we also
have a first order differential equation in i, we choose i as a third state variable.
We would like to write this in a linear state space form, but we cannot because of the
presence of the nonlinearities. Therefore, we proceed by finding a linear approximation to
this system. The first step in this process is to define a nominal trajectory for the system.
In this case, we choose a nominal trajectory based on a steady-state condition. Consider
the case where v and y are constants, and denote their values by v0 and y0. Our nominal
conditions become:
Our condition for equilibrium: the weight of the ball is equal to the magnetic force.
Written in terms of our state variables we have
In the steady state case, we assume that the system input is also a constant. Therefore, we
have
System Controllability
there exists a control input u(t) to move the system from
We want to find u(t) such that at time T the resulting system state will be:
where the subscript k denotes the time step. Suppose the system starts at the initial
condition x0 and we want to find the input sequence u0, u1, ..., u(n-1) so that the final state
xn is equal to a desired final state xd. By writing the equations for the evolution of the
system during the n time steps we arrive at
The unknowns in this problem are the elements of the control sequence u0, u1, ..., u(n-1).
The initial condition x0 and the desired final state xd are known a priori. Now we can rewrite the above equations as
follows:
In order for the above set of equations to have a solution for any initial condition and
any final desired state the following condition must hold
Therefore, our discrete-time system is controllable if and only if the controllability matrix
is of full rank. In the case of single input systems, i.e. B is an n x 1 matrix, the condition
reduces to
This is called the Kalman rank test. Although the test has been derived for discrete-time
systems it is also valid for continuous-time systems. We summarize the test in the
following:
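The test itself is not reproduced above; in standard form, the controllability matrix and the Kalman rank test are

$$\mathrm{Con} = \begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix}, \qquad \text{the system is controllable} \iff \operatorname{rank}(\mathrm{Con}) = n.$$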
Equivalently,
Clearly, this system is not controllable. The input has no effect on one of the states. Check
this with the Kalman controllability test:
Equivalently,
We notice that the dynamical equations for the two state variables are identical.
Therefore, if the initial states are the same, then the two state variables remain equal for all time. Thus, whatever the input
signal, no final state in which the two state variables differ can be reached. Therefore, this system is
uncontrollable. Verify using the Kalman controllability test:
where u is a scalar, we have the following test for controllability: if det(Con) is not equal
to zero then the system is controllable, and if det(Con) is equal to zero then the system is
uncontrollable.
We would like to determine if this system is controllable or not. We first form the
controllability matrix:
Note that
We then evaluate the determinant of the controllability matrix. We find that det(Con) = -
32 which is not equal to zero. Therefore, the system is controllable.
This is the same example we considered in lecture #4. We will determine here whether or
not the system is controllable with the following values
Therefore, the system is controllable. The external force is applied to one mass but we
can control the position and velocity of the second mass as we like!
and we are interested in transferring the system to a desired final state in T units of time. We will
now give a formula for an input u(t) on the time interval from zero to T that will produce
the desired state transfer. First we need to compute the following matrix, called the
controllability gramian at time T, which is given by
Note that the superscript denotes the transpose operation (see matrix transpose under
tools in the supplementary section). The desired input is given by the following formula:
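The two formulas are not reproduced above. One common convention, consistent with the variation of parameters verification that follows (with the transpose written here as ⊤, x0 the initial state and xd the desired state), is

$$Q(T) = \int_0^{T} \Phi(T-\tau)\,B\,B^{\top}\,\Phi(T-\tau)^{\top}\,d\tau, \qquad u(t) = B^{\top}\,\Phi(T-t)^{\top}\,Q(T)^{-1}\bigl(x_d - \Phi(T)\,x_0\bigr).$$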
Let us verify that the above control input produces the desired transfer. From the
variation of parameters formula, our state equation is the following:
Evaluating our state at time T and substituting our input equation leads to the following:
Hence, our input u(t) as given above will perform the desired state transfer.
We would like to transfer the system from its initial state of:
The first step in constructing our control input is to calculate the controllability gramian
Q(1). In order to do this, we must determine our state transition matrix:
We can finally construct the required control input. Using the formula given above we
have:
A Matlab code for the numerical verification of the solution of this example is available
under Matlab in the supplementary section of this web site.
Example 4:
The figure shows a mass of 1 kg on a smooth surface. The mass is initially moving with
a velocity of 0.5 m/s. Is it possible to find a force F(t) such that in 1 second
the mass will move 1 m and attain a velocity of 1 m/s? If yes, find F(t); if no, explain
why.
We first find the differential equation of motion. Use Newton's second law:
Let the state variables of the system be the position and velocity of the mass, that is
Therefore the system is controllable and we can solve the problem. Now, we find the
state transition matrix
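The remaining computation is not reproduced above. A numerical sketch of the whole construction (state transition matrix, controllability gramian over [0, 1], and the resulting force), assuming the mass starts from position 0, is:

import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec, solve_ivp

# Double integrator: x1 = position, x2 = velocity, 1 kg mass, input F
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
T = 1.0
x0 = np.array([0.0, 0.5])     # assumed initial position 0 m, given initial velocity 0.5 m/s
xd = np.array([1.0, 1.0])     # desired position 1 m and velocity 1 m/s at t = 1 s

# Controllability gramian Q(T) = integral of Phi(T-s) B B' Phi(T-s)' over [0, T]
Q, _ = quad_vec(lambda s: expm(A * (T - s)) @ B @ B.T @ expm(A * (T - s)).T, 0.0, T)

# Force F(t) = B' Phi(T-t)' Q^{-1} (xd - Phi(T) x0); for this data it works out to F(t) = 2 - 3t
c = np.linalg.solve(Q, xd - expm(A * T) @ x0)
def F(t):
    return (B.T @ expm(A * (T - t)).T @ c).item()

# Verify by simulating the state equation with this force
sol = solve_ivp(lambda t, x: A @ x + B.flatten() * F(t), (0.0, T), x0, rtol=1e-9)
print(F(0.0), F(1.0))         # approximately 2.0 and -1.0
print(sol.y[:, -1])           # approximately [1.0, 1.0]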
Assume we have a process with input u and output y . If the transfer function of the
process is equal to P(s) , then we can write
Example 1: Consider the block diagram shown below. The system represented by the
transfer function C(s) has the input E(s) and the output U(s). The system represented by
the transfer function P(s) has the input U(s) and the output Y(s). We would like to find
the transfer function from E to Y. In this case the input will be E and the output will be Y.
We have
Therefore,
Let us find the transfer function T(s) of the above closed loop (feedback) system. This
will be the transfer function from the input to the system R to the output of the system Y .
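The resulting expression is not reproduced above; for this loop, with forward path C(s)P(s), sensor S(s) in the feedback path and negative feedback, the standard result is

$$T(s) = \frac{Y(s)}{R(s)} = \frac{C(s)P(s)}{1 + C(s)P(s)S(s)}.$$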
PID Controllers
In feedback control systems the controller takes the error signal (difference between the
desired and measured signals) and processes it. The output of the controller is passed as
an input to the process. One type of controller which is widely used in industrial
applications is the PID (proportional integral derivative) controller. The proportional part
of this controller multiplies the error by a constant. The integral part of the PID controller
integrates the error. Finally the derivative part differentiates the error. The output of the
controller is the sum of the previous three signals. We have
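The controller equations are not reproduced above; in standard form, with e(t) the error signal and kp, ki, kd the proportional, integral and derivative gains,

$$u(t) = k_p\,e(t) + k_i\int_0^{t}e(\tau)\,d\tau + k_d\,\frac{de(t)}{dt}, \qquad C(s) = \frac{U(s)}{E(s)} = k_p + \frac{k_i}{s} + k_d\,s.$$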
In the previous lecture we showed how to obtain the transfer function of a closed loop
system. In addition, we derived the transfer function of a PID controller. In this lecture
we will discuss the response of a closed loop system, final value theorem and tracking
problem.
We will deal with block diagrams of the form shown below. Comparing this diagram
with the one given in the previous lecture we note that the sensor block S(s) is missing
here. The closed loop system below is called a unity feedback system since S(s) = 1. In
practice this can be easily achieved by adding an amplifier to the measurement system in
order to make the measured and actual outputs equal in value. The units of the two outputs
will usually be different.
From the previous lecture the transfer function of the above closed loop (feedback)
system is (with S(s) = 1):
If we know the input signal r(t), then the response of the system due to the input signal
can be found as follows
1. Unit step,
2. Step,
3. Ramp,
4. Sine signal,
We now state the final value theorem which will be needed in the discussion of the
tracking problem.
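The theorem statement is not reproduced above; in its standard form, provided all poles of sY(s) lie in the open left half plane,

$$\lim_{t\to\infty} y(t) = \lim_{s\to 0} sY(s).$$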
Example 1:
Tracking Problem
Tracking means that the output follows the input (or reference) signal as time goes to
infinity.
In order to check if tracking is achieved mathematically we follow one of the relations
given below.
Let
Solution:
(a) We first find the transfer function of the closed loop system
The roots of the denominator of the transfer function are -1+2j and -1-2j. The system is
therefore stable (negative real parts) and there will be oscillations in the response
(roots with nonzero imaginary parts). Think of the denominator of the transfer function as det(sI-A). Now we
find the output y(t)
The plot of the response of the system (y versus t ) is shown below. From the graph note
that the maximum value of y is equal to 1.234 whereas the steady state (final) value of y
is equal to 1. The overshoot is the quantity by which the output goes above its final value:
(b) There are two methods to check if tracking is achieved. The first method is to find the
value of the output as time goes to infinity and see if the output approaches the input. The
other method is to use the final value theorem as discussed in the tracking section above.
We will solve this part using the two methods.
method 1:
Therefore, tracking is achieved. Note that since the output y(t) was found in part (a) we
could use it to find the value of the output as time goes to infinity.
method 2: Check if the error goes to zero as time goes to infinity. Using the final value
theorem we have
(c) We will repeat part (a) with the new values of proportional and integral gains
The roots of the denominator of the transfer function are at -2 and -3. Therefore, the
system is stable and no oscillations will be present in the response. Let us now find the
response of the closed loop system when it is given a unit step function
The output plot is shown below. Note that as time goes to infinity the output goes to the
set point which is equal to 1. Therefore, tracking is achieved. Also note that there are no
oscillations since all the roots of the denominator of the transfer function are real. There is an overshoot
due to the existence of two exponentials in the response.
(d) We will repeat part (a) with the new values of proportional and integral gains
The roots of the denominator of the transfer function (characteristic polynomial) are at 0 and
-2. This indicates that the system is unstable.
Let us now find the response of the closed loop system when it is given a unit step
function
The output plot is shown below. Note that as time goes to infinity the output goes to
0.5, which is NOT equal to 1 (the desired signal). Therefore, tracking is NOT achieved. In
this case proportional control action alone is not enough to satisfy the objective of
tracking.
(e) The transfer function of the closed loop system is equal to (see above)
For stability the denominator of the transfer function should have roots with negative real
parts (think of the denominator as det(sI-A)). For no oscillations the roots should be purely
real (no imaginary part). In part (b) of this problem we gave values for the proportional
and integral gains that will satisfy the conditions of part (e). As an exercise try to find
other values of the proportional and integral gains that will satisfy the conditions of part
(e). In addition, try to select values of the gains that will also eliminate the overshoot in
the system.
We will consider a practical example that demonstrates the ideas of transfer function
methods.
For the tank, fluid and heater process (that we studied earlier in this course) shown
below, let
the power supplied by the heater = 20V W, where V is the input voltage,
the thermal capacitance of the fluid C = 60 W·s/°C, and
the thermal resistance of the tank R = 0.2 °C/W.
We would like to control the temperature of the fluid. It is required that the temperature
tracks given desired constant reference signals. A suitable solution for such a problem is
the use of PI controllers. In order to design a control system for the above process, we
need to select a sensor for temperature measurement. We will choose a thermocouple that
converts every 10 °C to 1 volt. This means that the measurement will be equal to 0.1T,
where T is the fluid temperature. Let us apply the following steps in the design of a PI
controller for the above process.
1. Modeling: Recall that the differential equation of the above thermal process is obtained
using the heat balance equation and is written as (state space representation)
Note that we choose the state variable to be the temperature difference between the fluid
temperature and ambient temperature. The reason for this is to obtain a linear state space
representation for the system. By adding the value Ta to x we obtain T. For the purpose of
simplifying the discussion, we will let Ta = 0, that is, x = T and y =x =T.
The transfer function of the process from the input voltage to the temperature is
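The expression is not reproduced above; from the heat balance C dx/dt = 20V - x/R with the parameter values given earlier (C = 60 W·s/°C, R = 0.2 °C/W), the transfer function from the input voltage V to the temperature x = T works out to

$$P(s) = \frac{X(s)}{V(s)} = \frac{20}{Cs + 1/R} = \frac{20}{60s + 5}.$$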
If the initial temperature and the input to the process are known one can solve for the
time response of the temperature using the transfer function and the state space
representation of the system. Note that when we deal with transfer functions, the initial
condition is assumed to be equal to zero.
2. PI (proportional plus integral) control: We will now study the effect of connecting the
above process to a PI controller in a closed loop (feedback) system. The sensor will be
the thermocouple above. The block diagram of the closed loop system is drawn below
Note that the denominator of the transfer function depends on the controller parameters kp
and ki that the control engineer selects based on certain requirements. The roots of the
denominator are called poles. Poles with negative real parts produce a stable system, and poles with nonzero imaginary parts
cause oscillations in the response of the system. The response of the system depends on
the poles. These poles depend in turn on the controller parameters. We conclude that it is
important to select kp and ki carefully in order to design a good control system.
As an exercise let us find the response of the system when the proportional gain is equal
to 5, the integral gain is equal to 0 and the reference signal is equal to 5 volts. Note that in
this case the controller becomes proportional since there is no integral action. We have
As time goes to infinity the output temperature goes to 33.3. The measured temperature
goes to 0.1*33.3 = 3.33 < r, where r = 5 (reference signal). Thus, tracking is not achieved
with a P (proportional) controller. Let us test the tracking ability of the closed loop
system with PI controller when the input reference signal is constant (r = a). We will use
the final value theorem as discussed in the previous lecture.
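Before moving to the PI case, the P-control numbers above can be verified numerically; the sketch below assumes the process model P(s) = 20/(60s + 5) derived earlier and the 0.1 thermocouple gain:

from scipy import signal

kp = 5.0                              # proportional gain (ki = 0, pure P control)
num_p, den_p = [20.0], [60.0, 5.0]    # process model P(s) = 20/(60s + 5)
h = 0.1                               # thermocouple gain: 1 volt per 10 degC
r = 5.0                               # constant reference signal, volts

# Closed loop from r to temperature: kp*P / (1 + h*kp*P) = 100 / (60s + 15)
num_cl = [kp * num_p[0]]
den_cl = [den_p[0], den_p[1] + h * kp * num_p[0]]
sys_cl = signal.TransferFunction(num_cl, den_cl)

# Steady-state temperature for the constant reference (DC gain times r)
print(r * num_cl[-1] / den_cl[-1])    # 33.33... degC, i.e. 3.33 V measured, so r = 5 V is not tracked

# Full step response, if the transient is also of interest
t, y = signal.step(sys_cl)
print(r * y[-1])                      # also approaches about 33.3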
The poles of the system are the roots of the denominator and are equal to
The poles are negative for all positive values of kp and ki . Therefore, the use of a PI
controller produces a stable closed loop system. If the integral gain is equal to zero, one
pole will be equal to zero and according to our stability definition, the system will be
unstable.
In summary, stability, tracking ability, and system response requirements are important
factors in determining the controller parameters.