A transfer function is the relationship between the input and the output of a certain system. Given the
system below, we say that:
The output is Y(s), the input is R(s) and the plant is given by G(s). The relationship of these three is
given by:
Y(s) = G(s)·R(s)

where Y(s) is the output, R(s) is the input, and G(s) is the plant.

From this equation, if we divide both sides by R(s), then we will get:

Y(s)/R(s) = G(s)
And in general,
G(s) = N(s)/D(s) = (polynomial of order m)/(polynomial of order n)
Therefore, the relationship between the input and the output is given by the plant response and is called
the transfer function.
We recall again that the zeros of the transfer function are derived by equating the numerator to zero:

N(s) = 0

And the poles of the system are derived by equating the denominator to zero:

D(s) = 0
We state that: Zeros and poles affect the open-loop and closed-loop stability of a system. We say that
a system is stable if for a bounded input, the output is also bounded.
There are two (2) predominant system configurations in control systems: open loop and closed loop.
For an open loop shown below, we have already stated the transfer function.
Again, in general, the transfer function of such system is governed by system plant, given by:
G(s) = N(s)/[(s − p1)(s − p2)⋯(s − pn)]
Notice that the above transfer function has a denominator that has already been factored. Here, the poles pi may be real or complex. Even though we disregard the multiplicity of poles here, the point we are after is that the general solution will be of the form:

y(t) = c1·e^(p1·t) + c2·e^(p2·t) + ⋯ + cn·e^(pn·t)

And we can see here that for this to be stable, the poles should have negative real parts so the response decays. If there is a pole with a positive real part, then the system is unstable. We conclude that for open-loop systems, the poles solely determine system stability, and these poles should lie in the left half of the s-plane.
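The pole test above can be sketched in a few lines of code. This is a minimal sketch (the function name is ours, not from the text): given the open-loop poles, the system is stable exactly when every pole lies strictly in the left half of the s-plane.

```python
def is_open_loop_stable(poles):
    """BIBO stability of an open-loop system: every pole must have a
    strictly negative real part (left half of the s-plane)."""
    return all(complex(p).real < 0 for p in poles)

# poles at -1 and -2 +/- 3j: all in the left half-plane -> stable
print(is_open_loop_stable([-1, -2 + 3j, -2 - 3j]))  # True
# one pole at +0.5: a growing exponential term -> unstable
print(is_open_loop_stable([-1, 0.5]))               # False
```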
The system shown in Figure 4.2 is a closed loop negative feedback system. The circle acts as a
summing block. Note the signs. The output is multiplied by a certain “k” and is fed back to the summing
block. Its sign is negative. This is the reason such a diagram is called a negative feedback system. In
most systems, we would always want to have negative feedback because this can ensure system
stability.
Y(s)/R(s) = G(s)/(1 + kG(s)) = N(s)/(D(s) + kN(s))
Unlike the open loop system where stability relies on the system poles only, the stability of a closed
loop system is determined by the zeros and poles.
R : reference input
D : disturbance (known/unknown, random/deterministic)
N : sensor/measurement noise
For any control system, we always have the reference input R. This is what every control system would
want to achieve. The actual output is represented by Y.
The output Y is sampled or sensed by an appropriate transducer. Since transducers are electronic components, they are inherently noisy. This may be due to characteristics of the device that depend on temperature, etc. The sampled output plus the sensor noise is fed back to the system, particularly to the summing block.
The error E is the difference between the input and the output, and its desired value is zero. A zero value denotes that the output has reached the desired input.
The error is then fed to a block K. This block K, in general, will be called the compensator or controller.
This block compensates or controls the plant G in order for the plant to reach the desired reference
input. The output of the compensator or controller drives the plant G.
However, there are certain disturbances in the output. These can be known or unknown or random or
deterministic or a combination of both. An example could be an additional load in the propeller of a
certain electric fan. We know that if the load is heavy, then the propeller should slow down.
Let us now determine the relationships of these representations. We assume that for any block diagram
we have, we are using Laplace transforms as their transfer functions. We begin with the output Y.
Y = KGE + D
E = R − Y − N

If we substitute the second equation into the first equation, we will see that:

Y = KG(R − Y − N) + D
Y = KGR − KGY − KGN + D
Y + KGY = KGR − KGN + D
(1 + KG)Y = KGR − KGN + D

Y = [KG/(1 + KG)]·R − [KG/(1 + KG)]·N + [1/(1 + KG)]·D
We can notice from the final answer that the output Y is dependent on three inputs, the desired input,
disturbance, and noise. However, we see also that the noise and disturbance have multiplying factors
dictated by the controller and the plant. This means that we can really minimize their effect by designing
appropriate controllers/compensators.
The tracking error E can also be derived using the same steps, except that the first equation is now substituted into the second equation.

Y = KGE + D
E = R − Y − N

Following the same derivation steps, we can see that:

E = R − KGE − D − N
E + KGE = R − D − N
(1 + KG)E = R − D − N

E = [1/(1 + KG)]·R − [1/(1 + KG)]·D − [1/(1 + KG)]·N
Ideally, we want the tracking error to equal zero. In practice, a small error, normally in the range of 1-5% of the desired value, is acceptable for any system design. Mathematically, we can have zero error only if the magnitude of "KG" is infinite.
U = KE

U = [K/(1 + KG)]·(R − D − N)

This is the command accepted by the system plant. We arrive at this equation by simply multiplying the tracking error signal by "K".
Y = [KG/(1 + KG)]·R − [KG/(1 + KG)]·N + [1/(1 + KG)]·D

E = [1/(1 + KG)]·R − [1/(1 + KG)]·D − [1/(1 + KG)]·N

U = [K/(1 + KG)]·(R − D − N)
These equations will be our governing equations in meeting our objective of achieving a desirable
reference input.
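The three governing equations can be sanity-checked numerically, treating every block as a scalar gain at a single frequency point. A small sketch (the function name and the numbers are ours, purely for illustration):

```python
def closed_loop_signals(K, G, R, D, N):
    """Evaluate the three governing equations for scalar K, G and
    scalar inputs R, D, N (one frequency point)."""
    KG = K * G
    Y = (KG * R - KG * N + D) / (1 + KG)   # output
    E = (R - D - N) / (1 + KG)             # tracking error
    U = K * (R - D - N) / (1 + KG)         # actuator command
    return Y, E, U

K, G, R, D, N = 2.0, 3.0, 1.0, 0.1, 0.05
Y, E, U = closed_loop_signals(K, G, R, D, N)
# the closed-form expressions must satisfy the defining loop equations
assert abs(Y - (K * G * E + D)) < 1e-12    # Y = KGE + D
assert abs(E - (R - Y - N)) < 1e-12        # E = R - Y - N
assert abs(U - K * E) < 1e-12              # U = KE
print("loop equations satisfied")
```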
We say again that we want to have a small error E. A small error E will tell the control engineer that
the actual output is near the desired reference input.
With a small tracking error, it follows that we also have a small actuator input. Providing a small actuator input lets the plant move toward the desired state in small steps, thus keeping it from going unstable.
Now, if we look at the tracking error E and the output Y, we see their dependence on the noise and disturbance. As much as possible, these two, N and D, should be small enough not to affect the system's response.
How do we minimize the disturbance D and the noise N? As a practical approach, the system must be well shielded from these two unwanted signals. For example, the sensors must have high precision/accuracy to read the actual output, and they should have small internal noise. However, there is no general recipe for minimizing these two, because each control system has its own disturbances and noises. A good control engineer would at least consider all disturbances and noises that greatly affect the system's performance.
From the three equations above, we see that their denominators are the same. We call this the return
difference, defined by:
J = 1 + KGH
The return difference is the measure of the difference between the actual error or output and the
sampled error or output. One can see this clearly from the derivation of the output and error signals
above.
We note that the “H” represents the feedback transfer function. For the closed loop system above, H
= 1. When this happens, we have a unity negative feedback system on our hand.
KGH is normally defined as the loop gain because if we take out the noise, disturbance, input, and
output, these three blocks form a loop, thus having the name loop gain. In the future, we will see that
this plays an important role in determining system stability.
S = 1/(1 + KGH) = 1/J

The sensitivity S measures how sensitive the system is to the reference input, noise, and disturbance. Of course, as much as possible, we want our system to be highly sensitive to the reference input and least sensitive to the noise and disturbance.
T = 1 − S

T = KGH/(1 + KGH)

Note that the complementary sensitivity T multiplies only the desired reference input and the noise.

output : Y = S·D + T·(R − N)
error : E = S·(R − D − N)
input : U = K·S·(R − D − N)
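These definitions can be checked numerically at a single frequency point. A small scalar sketch (names and values are ours, for illustration only):

```python
def sensitivities(K, G, H=1.0):
    """Sensitivity S = 1/(1 + KGH) and complementary sensitivity
    T = KGH/(1 + KGH) for scalar blocks at one frequency point."""
    L = K * G * H                       # loop gain KGH
    return 1.0 / (1.0 + L), L / (1.0 + L)

S, T = sensitivities(K=10.0, G=0.5)
assert abs((S + T) - 1.0) < 1e-12       # complementary: T = 1 - S
print(S, T)  # a large loop gain gives a small S and a T close to 1
```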
To formulate our control system objectives, we now have the following observations.
• Disturbance rejection – We must reduce the effect of disturbance D by having a small S, i.e., being less
sensitive to such disturbances.
• Good tracking – We must have a small error, ideally equal to zero, thus requiring again a small S, i.e.,
being less sensitive to both disturbance and noise.
• Bounded actuator signals – From the two abovementioned objectives, we saw that S is small, and it
has to be kept that way. For a bounded actuator signal, we need a small U, thus requiring a small KS
or small K.
• Noise immunity - We must reduce the effect of noise by having a small T and, since T = 1 − S, a large S.
We see now that for some control objectives we need a small S, and for others we need a large S. In a control system, technically and practically speaking, we cannot have the best of both worlds. So part of control system design is a compromise on what is best for the system as a whole. Normally, this is defined by the control specifications set by the designer or the employer.
In order to meet these objectives, the design must take into consideration the following.
o System stability
o Noise immunity
The system response is the output of the system for a given input. So if we say "step response", then this is the system's response to a step input.
In control systems, we follow standard reference inputs. One advantage of using such inputs is the
ease of computation, especially when using Laplace transforms.
So, given a standard reference input (impulse, step, ramp inputs), we classify the system according to
its response.
y(t) = y_t(t) + y_ss(t)

y_t(t) : transient response, with lim(t→∞) y_t(t) = 0
y_ss(t) : steady-state response
The transient response is determined by the plant’s transfer function, while the steady-state response
is determined by the forcing function or the reference input.
In the long run, the effect of the transient response dies out, leaving the steady-state response. This
steady-state response must be equal to the desired input.
What are the standard reference inputs used in control systems? We have to take note that these
reference inputs should not only be mathematical in nature but also found in practical applications.
Standard inputs also allow the use of linearity and superposition in the analysis of any control system
project, plus the fact that they have simple Laplace transforms.
• Step input: u(t) ⇔ 1/s. The practical representation of this is the common ON/OFF switch. When the switch is OFF, the function is equal to zero. At the instant the switch is turned ON, the function u(t) changes from zero to one. The discontinuity lies in how fast the switch was pressed. There are many different kinds of switches; the most common is the mechanical switch we use at home. Electronic switches are made of BJTs, diodes, FETs, etc.
• Ramp input: r(t) = t·u(t) ⇔ 1/s². A ramp is a monotonically increasing function whose slope determines how quickly it changes. A common practical application is a change in temperature in a room or in a device.
• Parabolic input: (t²/2)·u(t) ⇔ 1/s³. This can be used for robot trajectories or any motion following a parabolic path.
• Sinusoidal input: cos ωt ⇔ s/(s² + ω²) and sin ωt ⇔ ω/(s² + ω²). The most common applications of these are AC machines or rotating machines.
Other reference inputs may arise from the superposition of these standard inputs. The analysis is
simply done by adding their Laplace transforms.
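These transform pairs can be reproduced with a computer algebra system. A small sketch using sympy (assumed available; symbol names are ours):

```python
import sympy as sp

t, s, w = sp.symbols('t s w', positive=True)

# Laplace transforms of the standard reference inputs
step     = sp.laplace_transform(sp.Heaviside(t), t, s, noconds=True)
ramp     = sp.laplace_transform(t, t, s, noconds=True)
parabola = sp.laplace_transform(t**2 / 2, t, s, noconds=True)
sine     = sp.laplace_transform(sp.sin(w * t), t, s, noconds=True)
cosine   = sp.laplace_transform(sp.cos(w * t), t, s, noconds=True)

print(step, ramp, parabola, sine, cosine)
```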
The classical control technique uses the unity gain negative feedback control system shown below.
What makes this classical is that the feedback has a unity transfer function or simply a gain block, i.e.,
no rational polynomial transfer functions. Please note here that we used H as a controller.
We recall that the error of such closed loop system is given by:
E = [1/(1 + GH)]·R
For simplicity's sake, we neglect disturbance and noise. This is also a rational polynomial function, and in the long run, we want it to be equal to zero. The steady-state error is defined as:

e_ss = lim(t→∞) e(t)

Using the Final Value Theorem of the Laplace transform, we will see that:

e_ss = lim(s→0) s·E(s) = lim(s→0) s·R(s)/(1 + GH(s))
Note that there is a factor “s” and the error is dependent on the reference input and the system plant
and controller. Whether in the time or frequency domain, we want this value to be equal to zero.
From this error response, we can also see the loop gain GH(s). This is represented by:

GH(s) = K·(s − z1)⋯(s − zm) / [s^j·(s − p1)⋯(s − pn)],  j = system type

The variable "j" in the denominator is defined as the system type. If j = 0, the system is said to be Type 0; if j = 1, the system is said to be Type 1, and so on. The system type is the number of poles located at the origin, i.e., poles equal to zero.
We now study the importance of the system type to the error response. Keep in mind that we want the error response to be zero as time approaches infinity, or equivalently as the complex variable "s" approaches zero.
G(s) = 1/(s + 1)

Clearly, this is a first-order system and a Type 0 system. Also, note that we will be placing this in an open-loop configuration, as shown below.
We get the step response of the system above. Its step response is shown below.
[Figure: "Step Response" plot, amplitude (0 to 1) vs. time (0 to 6 seconds)]
From the graph, we see that at approximately five seconds, the system approached its desired value
of one. From zero to less than five seconds is the transient response.
Now, using the same system, but this time, closing the loop with the controller equal to one and as
shown below,
The step response is also determined and shown below. Note the following changes.
• The steady-state value is 0.5, thus having a steady-state error of 0.5 since the desired is one.
• Transient time decreases by almost half or is now approximately equal to 2.5 seconds.
[Figure: "Step Response" plot, amplitude (0 to 0.5) vs. time (0 to 3 seconds)]
The two superimposed plots are shown below to clearly see the changes brought about by closing the
loop.
[Figure: "Step Response" plot, open-loop and closed-loop responses superimposed, amplitude vs. time (0 to 6 seconds)]
Figure 4.9 Open loop and closed loop systems step responses
Though the open-loop system reaches the final value of one, closing the loop is still the better way to control a system. It is only by chance that the open-loop system G(s) has a final value of one when the Final Value Theorem is applied.
Now let us mathematically analyze the system above, since the plots generated above came from well-known software (MATLAB).
E/R = (R − Y)/R = 1/(1 + G) = (s + 1)/(s + 2)

If the input is a unit step, then, using the Final Value Theorem, we will see that:

r(t) = u(t)

e_ss = lim(s→0) s·[(s + 1)/(s + 2)]·(1/s) = 1/2
We observe that for a Type 0 system, there is a finite steady-state error. Note also that a step input itself contributes one pole at the origin, since it has an "s" in its denominator.
G(s) = 1/[s(s + 1)]

which is Type 1 in nature. Doing the same mathematical analysis, we see that:
E/R = (R − Y)/R = 1/(1 + G) = s(s + 1)/(s² + s + 1)

r(t) = u(t)

e_ss = lim(s→0) s·[s(s + 1)/(s² + s + 1)]·(1/s) = 0
The error response now has a steady-state value of zero, meaning the desired input is reached. If the
input to this system is now a ramp input, we will have:
r(t) = t·u(t)

e_ss = lim(s→0) s·[s(s + 1)/(s² + s + 1)]·(1/s²) = lim(s→0) (s + 1)/(s² + s + 1) = 1
In summary, the system type determines the steady-state error value of any given system. It also determines up to what input will produce a zero steady-state error. If a plant is Type 2, then both step and ramp inputs will produce zero steady-state error at the output, while a parabolic input will produce a finite steady-state error, and so on.
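The steady-state errors worked out above can be reproduced by applying the Final Value Theorem symbolically. A sketch using sympy (assumed available; the helper name is ours), for a unity-feedback loop with E(s) = R(s)/(1 + G(s)):

```python
import sympy as sp

s = sp.symbols('s', positive=True)

def steady_state_error(G, R):
    """Final Value Theorem applied to E(s) = R(s)/(1 + G(s));
    assumes the limit exists (closed loop stable)."""
    E = R / (1 + G)
    return sp.limit(s * E, s, 0)

G0 = 1 / (s + 1)        # Type 0 plant from the text
G1 = 1 / (s * (s + 1))  # Type 1 plant from the text
print(steady_state_error(G0, 1 / s))     # step input  -> 1/2
print(steady_state_error(G1, 1 / s))     # step input  -> 0
print(steady_state_error(G1, 1 / s**2))  # ramp input  -> 1
```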
If a problem is solved using Laplace Transforms, then the final answer will still be in terms of the variable
“s” and not the time variable “t”. There is a need to find a way to reverse this operation to go back to
the time domain perspective.
This is where the Inverse Laplace Transform comes into play. It reverses the operation, thus transforming the frequency-domain signal into its time-domain representation. In order to do this, the Inverse Laplace Transform integral must be used, as given below.
f(t) = L⁻¹{F(s)} = (1/(2πj)) ∫ from σ−j∞ to σ+j∞ of F(s)·e^(st) ds
However, using such is difficult to evaluate because it requires contour integration using complex
variables theory.
The complicated fraction is split up into forms that are in the Laplace Transform table.
The Laplace Transform table provides common engineering problem pairs. This implies that once the
polynomial is split into simpler fractions, all that is left is to look at the Laplace Transform table.
Most Laplace transform expressions are not in a directly recognizable form but, in most cases, appear as a rational function, that is:
F(s) = N(s)/D(s) = (bm·s^m + b(m−1)·s^(m−1) + ⋯ + b1·s + b0)/(an·s^n + a(n−1)·s^(n−1) + ⋯ + a1·s + a0)

where an, bm are coefficients. If

m < n → proper rational function
m ≥ n → improper rational function
Remember that any rational function will always have a numerator and a denominator, and this time,
the numerator and denominator are functions of “s”.
From the rational function above, it is noteworthy that the leading coefficients bm and an can never be equal to zero; if that happened, the degree of the numerator or denominator would decrease by one. Coefficients other than these two can be equal to zero.
If the degree of the numerator is less than the degree of the denominator, then we have a proper rational function. Most engineering problems and scenarios are of this form.
If the degree of the numerator is greater than or equal to the degree of the denominator, then we have an improper rational function.
One of the most important things to do in a partial fraction is getting the roots. Both the numerator and
the denominator have roots. Roots are values that will make the polynomial equal to zero.
If the numerator is equated to zero, then the roots are called zeros. If the denominator is equated to
zero, then the roots are called poles. For partial fraction expansion, getting the zeros is not yet of
utmost importance. Getting the poles is the primary thing needed to determine, and these poles are
determined through the basic factoring techniques.
Once the denominator has been factored out to determine the poles, one would like to observe whether
these poles are distinct and real, repeated and real, complex conjugates, or a combination of real and
complex conjugate poles.
Complex poles always appear in conjugate pairs; thus they always contribute an even-degree factor in the "s" polynomial. Recall also that the complex conjugate of a + bj is a − bj. As another example, the complex conjugate of 2 − 2j is 2 + 2j.
Because the approach differs slightly from problem to problem when determining the inverse Laplace transform, there is a need to explore the different cases.
The first step in proceeding with partial fractions for all cases is always factoring. If all poles are distinct and real, then for a given proper rational polynomial in "s", the rational function can be written as below.

F(s) = N(s)/[(s − p1)(s − p2)(s − p3)⋯(s − pn)],  p1 ≠ p2 ≠ p3 ≠ ⋯ ≠ pn

Notice that no pole is equal to any of the other poles; they are totally unique from the rest. If, after examination, the poles are found to be unique, then the partial fraction expansion of the given rational polynomial follows the form:

F(s) = A1/(s − p1) + A2/(s − p2) + A3/(s − p3) + ⋯ + An/(s − pn)
Observe that there will be "n" fractions in order to fully represent the given rational polynomial. For a polynomial of degree "n", there are "n" values that will make the polynomial equal to zero. One can also see that the above partial fraction expansion has denominators of degree one.
In partial fraction expansion, it is required to just evaluate the coefficients An. After all An’s are solved,
look at the Laplace Transform table, and convert it to the time domain representation.
An = (s − pn)·F(s) |s=pn

Each term then corresponds to the transform pair:

K/(s + a) ⇔ K·e^(−at)·u(t)
The exponent in the exponential function represents a root of the homogeneous (characteristic) equation of the differential equation.
Example 1: Given F(s) = (3s + 2)/(s² + 3s + 2)

The first step is always to factor the denominator. Leave the numerator as is.

F(s) = (3s + 2)/(s² + 3s + 2) = (3s + 2)/[(s + 1)(s + 2)]
The partial fraction expansion of the polynomial above is:

F(s) = A1/(s + 1) + A2/(s + 2)
To evaluate the coefficients A1 and A2, recall that:

A1 = (s − p1)·F(s) |s=p1
A2 = (s − p2)·F(s) |s=p2

A1 = (s + 1)·(3s + 2)/[(s + 1)(s + 2)] |s=−1 = [3(−1) + 2]/(−1 + 2) = −1
And

A2 = (s + 2)·(3s + 2)/[(s + 1)(s + 2)] |s=−2 = [3(−2) + 2]/(−2 + 1) = 4
Now that all coefficients are completely known, the partial fraction expansion of the given rational polynomial is:

F(s) = (3s + 2)/(s² + 3s + 2) = (3s + 2)/[(s + 1)(s + 2)] = −1/(s + 1) + 4/(s + 2)
Looking at the Laplace Transform table, the time-domain representation of the above partial fractions is:

f(t) = L⁻¹{F(s)} = (−e^(−t) + 4e^(−2t))·u(t)
Do not forget the “u(t)”. This is the unit step function that implies that this is a one-sided or unilateral
Laplace transform pair.
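For distinct poles, the cover-up evaluation above can be done in plain Python. A minimal sketch (the function name is ours): each residue is the numerator evaluated at the pole, divided by the product of that pole's distances to the other poles.

```python
def residues_distinct(num, poles):
    """Cover-up method for distinct poles p_i:
    A_i = N(p_i) / product over k != i of (p_i - p_k)."""
    result = []
    for i, p in enumerate(poles):
        denom = 1.0
        for k, q in enumerate(poles):
            if k != i:
                denom *= (p - q)
        result.append(num(p) / denom)
    return result

# F(s) = (3s + 2)/((s + 1)(s + 2)): expect A1 = -1, A2 = 4
print(residues_distinct(lambda x: 3 * x + 2, [-1.0, -2.0]))  # [-1.0, 4.0]
```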
Example 2: Given F(s) = (3s² + 2s + 5)/(s³ + 12s² + 44s + 48)

The first step is always to factor the denominator. Leave the numerator as is.

F(s) = (3s² + 2s + 5)/(s³ + 12s² + 44s + 48) = (3s² + 2s + 5)/[(s + 2)(s + 4)(s + 6)]
The partial fraction expansion of the polynomial above is:

F(s) = A1/(s + 2) + A2/(s + 4) + A3/(s + 6)

To evaluate the coefficients A1, A2, and A3, recall that:

A1 = (s − p1)·F(s) |s=p1
A2 = (s − p2)·F(s) |s=p2
A3 = (s − p3)·F(s) |s=p3
A1 = (s + 2)·(3s² + 2s + 5)/[(s + 2)(s + 4)(s + 6)] |s=−2

A1 = [3(−2)² + 2(−2) + 5]/[(−2 + 4)(−2 + 6)] = 13/8
A2 = (s + 4)·(3s² + 2s + 5)/[(s + 2)(s + 4)(s + 6)] |s=−4

A2 = [3(−4)² + 2(−4) + 5]/[(−4 + 2)(−4 + 6)] = −45/4
and

A3 = (s + 6)·(3s² + 2s + 5)/[(s + 2)(s + 4)(s + 6)] |s=−6

A3 = [3(−6)² + 2(−6) + 5]/[(−6 + 2)(−6 + 4)] = 101/8
Now that all coefficients are completely known, the partial fraction expansion of the given rational polynomial is:

F(s) = (3s² + 2s + 5)/(s³ + 12s² + 44s + 48) = (3s² + 2s + 5)/[(s + 2)(s + 4)(s + 6)]

F(s) = (13/8)/(s + 2) − (45/4)/(s + 4) + (101/8)/(s + 6)
Looking at the Laplace Transform table, the time-domain representation of the above partial fractions is:

f(t) = L⁻¹{F(s)} = [(13/8)e^(−2t) − (45/4)e^(−4t) + (101/8)e^(−6t)]·u(t)
Do not forget the “u(t)”. This is the unit step function that implies that this is a one-sided or unilateral
Laplace transform pair.
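The residues above can be cross-checked with a computer algebra system. A sketch using sympy (assumed available):

```python
import sympy as sp

s = sp.symbols('s')
F = (3 * s**2 + 2 * s + 5) / ((s + 2) * (s + 4) * (s + 6))

# residue at each simple pole equals the partial-fraction coefficient
for p in (-2, -4, -6):
    print(p, sp.residue(F, s, p))  # expect 13/8, -45/4, 101/8
```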
If repeated real poles with multiplicity m occur, then the proper rational function can be represented by:

F(s) = N(s)/[(s − p1)^m·(s − p2)(s − p3)⋯(s − pn)]

The partial fraction expansion of such a function is:

F(s) = A11/(s − p1)^m + A12/(s − p1)^(m−1) + ⋯ + A1m/(s − p1) + A2/(s − p2) + ⋯ + An/(s − pn)

Notice that there occur "m" partial fractions for a pole with multiplicity "m".

For the simple poles, the coefficients are evaluated just the same:

An = (s − pn)·F(s) |s=pn
However, for the pole p1 with multiplicity "m", the coefficients are evaluated by:

A1k = [1/(k − 1)!]·d^(k−1)/ds^(k−1) [(s − p1)^m·F(s)] |s=p1

where A1k is the coefficient of 1/(s − p1)^(m−k+1). In this way, we see that the derivative operation is involved: as the multiplicity increases, so does the number of differentiations, namely one less than the multiplicity.
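The differentiation formula can be automated symbolically. A sketch using sympy (assumed available; the function name and the k-indexing convention in the docstring are ours):

```python
import sympy as sp

s = sp.symbols('s')

def repeated_pole_coeffs(F, p, m):
    """Coefficient A_k of 1/(s - p)**k for a pole p of multiplicity m:
    A_k = (1/(m-k)!) * d^(m-k)/ds^(m-k) [(s-p)**m * F] at s = p."""
    g = sp.cancel((s - p)**m * F)
    return {k: sp.diff(g, s, m - k).subs(s, p) / sp.factorial(m - k)
            for k in range(1, m + 1)}

# F(s) = (s + 3)/((s + 1)**2 (s + 2)), repeated pole at s = -1
F = (s + 3) / ((s + 1)**2 * (s + 2))
coeffs = repeated_pole_coeffs(F, -1, 2)
print(coeffs)  # expect -1/(s + 1) and 2/(s + 1)**2
```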
Example 1: Given F(s) = (s + 3)/[(s + 1)²(s + 2)]

The first step is always to factor the denominator. Leave the numerator as is. Since the given rational polynomial is already factored, the partial fraction expansion of the polynomial above is:

F(s) = A1/(s + 2) + A21/(s + 1)² + A22/(s + 1)
To evaluate the coefficients A1, A21, and A22, recall that:

A1 = (s − p1)·F(s) |s=p1
A21 = (s − p2)²·F(s) |s=p2
A22 = d/ds [(s − p2)²·F(s)] |s=p2

A1 = (s + 2)·(s + 3)/[(s + 1)²(s + 2)] |s=−2 = 1

A21 = (s + 1)²·(s + 3)/[(s + 1)²(s + 2)] |s=−1 = 2
and

A22 = d/ds [(s + 1)²·(s + 3)/((s + 1)²(s + 2))] |s=−1
A22 = d/ds [(s + 3)/(s + 2)] |s=−1 = −1/(s + 2)² |s=−1 = −1
Another way of getting the coefficient A22 is shown below. This method may be used in any case, though it is most commonly used in this one.
Since A1 and A21 are already known, substitute any value of "s" except the roots (in this case, −1 and −2). Using s = 0:

(s + 3)/[(s + 1)²(s + 2)] |s=0 = 1/(s + 2) + 2/(s + 1)² + A22/(s + 1) |s=0

3/[(1)²(2)] = 1/2 + 2 + A22

A22 = 3/2 − 1/2 − 2 = −1
Notice that this answer is the same as the previous one but without the differentiation process.
Now that all coefficients are completely known, the partial fraction expansion of the given rational polynomial is:

F(s) = (s + 3)/[(s + 1)²(s + 2)] = 1/(s + 2) + 2/(s + 1)² − 1/(s + 1)
Looking at the Laplace Transform table, the time-domain representation of the above partial fractions is:

f(t) = L⁻¹{F(s)} = (e^(−2t) + 2te^(−t) − e^(−t))·u(t)
Do not forget the “u(t)”. This is the unit step function that implies that this is a one-sided or unilateral
Laplace transform pair.
Example 2: Given F(s) = (s² + 3s + 1)/[(s + 1)³(s + 2)²]

The first step is always to factor the denominator. Leave the numerator as is. Since the given rational polynomial is already factored, the partial fraction expansion has the form:

F(s) = A11/(s + 1)³ + A12/(s + 1)² + A13/(s + 1) + A21/(s + 2)² + A22/(s + 2)

The coefficients are evaluated by:
A11 = (s + 1)³·F(s) |s=−1
A12 = d/ds [(s + 1)³·F(s)] |s=−1
A13 = (1/2!)·d²/ds² [(s + 1)³·F(s)] |s=−1
A21 = (s + 2)²·F(s) |s=−2
A22 = d/ds [(s + 2)²·F(s)] |s=−2
A11 = (s + 1)³·(s² + 3s + 1)/[(s + 1)³(s + 2)²] |s=−1 = (s² + 3s + 1)/(s + 2)² |s=−1 = −1

A12 = d/ds [(s² + 3s + 1)/(s + 2)²] |s=−1 = (s + 4)/(s + 2)³ |s=−1 = 3
A13 = (1/2!)·d²/ds² [(s² + 3s + 1)/(s + 2)²] |s=−1
= (1/2)·d/ds [(s + 4)/(s + 2)³] |s=−1
= (1/2)·[(s + 2) − 3(s + 4)]/(s + 2)⁴ |s=−1
= (−s − 5)/(s + 2)⁴ |s=−1
A13 = −4
A21 = (s + 2)²·(s² + 3s + 1)/[(s + 1)³(s + 2)²] |s=−2 = 1

and

A22 = d/ds [(s + 2)²·(s² + 3s + 1)/((s + 1)³(s + 2)²)] |s=−2
= [(s + 1)³(2s + 3) − 3(s + 1)²(s² + 3s + 1)]/(s + 1)⁶ |s=−2
= [(s + 1)(2s + 3) − 3(s² + 3s + 1)]/(s + 1)⁴ |s=−2
= (−s² − 4s)/(s + 1)⁴ |s=−2
A22 = 4
Now that all coefficients are completely known, the partial fraction expansion of the given rational polynomial is:

F(s) = −1/(s + 1)³ + 3/(s + 1)² − 4/(s + 1) + 1/(s + 2)² + 4/(s + 2)

Looking at the Laplace Transform table, the time-domain representation of the above partial fractions is:

f(t) = L⁻¹{F(s)} = [−(1/2)t²e^(−t) + 3te^(−t) − 4e^(−t) + te^(−2t) + 4e^(−2t)]·u(t)
Do not forget the “u(t)”. This is the unit step function that implies that this is a one-sided or unilateral
Laplace transform pair.
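The whole expansion above can be cross-checked at once. A sketch using sympy (assumed available):

```python
import sympy as sp

s = sp.symbols('s')
F = (s**2 + 3 * s + 1) / ((s + 1)**3 * (s + 2)**2)

# partial fraction decomposition in one call
expansion = sp.apart(F, s)
print(expansion)
```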
COMPLEX POLES
In most cases, the poles in the denominator occur in complex conjugate pairs. These poles have both real and imaginary parts. Since they always occur in conjugate pairs, the number of such poles is always even.
One may think of these conjugate pairs as distinct complex poles, and indeed they are. However, proceeding that way adds difficulty. One important note for complex poles: their coefficients will also be complex conjugates of each other.
Do not proceed in this manner; instead, remember that complex poles will always result in a sinusoid in the time domain.
Use the frequency shifting property of the Laplace transform, stated below:

e^(−at)·f(t) ⇔ F(s + a)
Example 1: Given F(s) = (3s + 9)/(s² + 4s + 5)

The first step is always to factor the denominator. Leave the numerator as is. The partial fraction expansion of the polynomial above is:

F(s) = (3s + 9)/(s² + 4s + 5)

F(s) = (Bs + C)/[(s² + 4s + 4) + 1]

F(s) = (Bs + C)/[(s + 2)² + 1]

Notice what was done in the denominator. This is a factoring technique known as completing the square, which is very useful in the analysis of linear systems using Laplace transforms. Also, take note of the numerator: it takes the form of the derivative of the denominator, with arbitrary coefficients.
Let s = 0:

(3s + 9)/(s² + 4s + 5) |s=0 = (Bs + C)/[(s + 2)² + 1] |s=0

9/5 = C/(4 + 1)

C = 9
Let s = 1:

(3s + 9)/(s² + 4s + 5) |s=1 = (Bs + C)/[(s + 2)² + 1] |s=1

(3 + 9)/(1 + 4 + 5) = (B + 9)/[(1 + 2)² + 1]

12/10 = (B + 9)/10

B = 3
Looking at the coefficients' values, one can see that they match the numerator of the given rational polynomial. Hence, when the denominator consists solely of one complex-conjugate pair, as above, it is NO LONGER necessary to evaluate the arbitrary coefficients this way; they can be read off directly.
However, if the poles are not solely complex, meaning there are other poles, then proceed as in the method above.
We now have:

F(s) = (3s + 9)/[(s + 2)² + 1]

This can also be expressed as:

F(s) = 3(s + 2 − 2)/[(s + 2)² + 1] + 9·1/[(s + 2)² + 1]
The two fractions are separated, one with “s”, one without the “s”. From here on, there is a need to
“massage” the partial fraction.
The technique for this step is to remember the form of the Laplace transform under the frequency shifting property, i.e.:

e^(−at)·f(t) ⇔ F(s + a)

Look at the denominator. Since the "s" appears as "s + 2", the numerator must also contain "s + 2". Since 2 is just a constant, add and subtract 2 without changing the expression above. Then we have:
F(s) = 3(s + 2)/[(s + 2)² + 1] − 3(2)/[(s + 2)² + 1] + 9·1/[(s + 2)² + 1]

F(s) = 3(s + 2)/[(s + 2)² + 1] + 3/[(s + 2)² + 1]
Using the frequency shifting property of the Laplace transform, it can be seen that:

f(t) = L⁻¹{F(s)} = (3e^(−2t)·cos t + 3e^(−2t)·sin t)·u(t)
Do not forget the “u(t)”. This is the unit step function that implies that this is a one-sided or unilateral
Laplace transform pair.
Remember that for complex poles, a decaying sinusoid will always occur. In this kind of problem, it is handy to remember Euler's identity, i.e.:

e^(±jθ) = cos θ ± j·sin θ
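The decaying-sinusoid result above can be verified symbolically. A sketch using sympy (assumed available):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = (3 * s + 9) / (s**2 + 4 * s + 5)

# complex-conjugate poles at -2 +/- j give a decaying sinusoid
f = sp.inverse_laplace_transform(F, s, t)
print(sp.simplify(f))
```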
COMBINATION
For rational functions with both real and complex roots, use whichever of the three cases above applies to each factor of the denominator.

Example 1: Given F(s) = (s + 3)/(s³ + 5s² + 12s + 8)

The first step is always to factor the denominator. Leave the numerator as is.

F(s) = (s + 3)/[(s + 1)(s² + 4s + 8)]
The partial fraction expansion of the given rational polynomial is:

F(s) = A/(s + 1) + (Bs + C)/(s² + 4s + 8)

A = (s + 1)·(s + 3)/[(s + 1)(s² + 4s + 8)] |s=−1

A = (−1 + 3)/[(−1)² + 4(−1) + 8]

A = 2/5
To solve for the constants B and C, substitute any value of "s" except the pole s = −1. Using s = 0:

(s + 3)/[(s + 1)(s² + 4s + 8)] |s=0 = (2/5)/(s + 1) + (Bs + C)/(s² + 4s + 8) |s=0

3/8 = 2/5 + C/8

C = 8·(3/8 − 2/5) = 3 − 16/5

C = −1/5
and, using s = 1:

(s + 3)/[(s + 1)(s² + 4s + 8)] |s=1 = (2/5)/(s + 1) + (Bs + C)/(s² + 4s + 8) |s=1

(1 + 3)/[(1 + 1)(1 + 4 + 8)] = (2/5)/2 + (B − 1/5)/13

2/13 = 1/5 + B/13 − 1/65

B = 13·(2/13 − 1/5 + 1/65) = 2 − 13/5 + 1/5

B = −2/5
We now have:

F(s) = (2/5)/(s + 1) + [−(2/5)s − 1/5]/(s² + 4s + 8)
Manipulate the equation so that Laplace transform pairs in the table can be seen. Then do the following:

F(s) = (2/5)·1/(s + 1) − (2/5)·s/(s² + 4s + 8) − (1/5)·1/(s² + 4s + 8)

F(s) = (2/5)·1/(s + 1) − (2/5)·s/[(s² + 4s + 4) + 4] − (1/5)·1/[(s² + 4s + 4) + 4]

F(s) = (2/5)·1/(s + 1) − (2/5)·s/[(s + 2)² + 4] − (1/5)·1/[(s + 2)² + 4]

F(s) = (2/5)·1/(s + 1) − (2/5)·(s + 2 − 2)/[(s + 2)² + 2²] − (1/5)·(1/2)·2/[(s + 2)² + 2²]

F(s) = (2/5)·1/(s + 1) − (2/5)·(s + 2)/[(s + 2)² + 2²] + (2/5)·2/[(s + 2)² + 2²] − (1/10)·2/[(s + 2)² + 2²]

F(s) = (2/5)·1/(s + 1) − (2/5)·(s + 2)/[(s + 2)² + 2²] + (3/10)·2/[(s + 2)² + 2²]
f(t) = L⁻¹{F(s)} = [(2/5)e^(−t) − (2/5)e^(−2t)·cos 2t + (3/10)e^(−2t)·sin 2t]·u(t)
Do not forget the “u(t)”. This is the unit step function that implies that this is a one-sided or unilateral
Laplace transform pair.
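The algebra in this example can be spot-checked numerically: an identity between two rational functions must hold at every point where both are defined. A plain-Python sketch (function names are ours) comparing the original F(s) with the partial-fraction form at random complex points:

```python
import random

def F(x):
    """The given rational function F(s) = (s + 3)/((s + 1)(s^2 + 4s + 8))."""
    return (x + 3) / ((x + 1) * (x**2 + 4 * x + 8))

def F_expanded(x):
    """The partial-fraction form derived above."""
    return (2 / 5) / (x + 1) + (-(2 / 5) * x - 1 / 5) / (x**2 + 4 * x + 8)

random.seed(0)
for _ in range(200):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    a, b = F(z), F_expanded(z)
    assert abs(a - b) <= 1e-9 * (1 + abs(a))   # agree up to rounding
print("partial-fraction expansion matches F(s) at 200 random points")
```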
Recall that improper fractions exhibit this property: the degree of the numerator is greater than or equal to the degree of the denominator, m ≥ n. To solve such a problem, divide the numerator by the denominator to obtain an expression of the form:

F(s) = k0 + k1·s + k2·s² + ⋯ + k(m−n)·s^(m−n) + N(s)/D(s)

where N(s)/D(s) is a proper rational function.
Remember your long division method because that is the key to turning the improper rational function
into a proper rational function.
For the remaining proper rational function, use the appropriate case as studied
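The long-division step can be sketched in plain Python (the helper name is ours), with polynomials as coefficient lists, highest power first:

```python
def polydiv(num, den):
    """Polynomial long division; coefficient lists, highest power first.
    Returns (quotient, remainder) with deg(remainder) < deg(den)."""
    num = [float(c) for c in num]
    q = []
    while len(num) >= len(den):
        c = num[0] / den[0]          # next quotient coefficient
        q.append(c)
        padded = list(den) + [0.0] * (len(num) - len(den))
        num = [a - c * b for a, b in zip(num, padded)][1:]
    return q, num

# F(s) = (s^2 + 2s + 2)/(s + 1)  ->  quotient s + 1, remainder 1
quot, rem = polydiv([1, 2, 2], [1, 1])
print(quot, rem)  # [1.0, 1.0] [1.0]
```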
Example 1: Given F(s) = (s² + 2s + 2)/(s + 1)

F(s) = (s² + 2s + 2)/(s + 1) = 1 + s + 1/(s + 1)
The inverse Laplace of this is:

f(t) = L⁻¹{F(s)} = δ(t) + δ′(t) + e^(−t)·u(t)