CH15
15.1 Introduction
To analyze linear circuits, we use differential equations (DEs). For a series RLC circuit, for example,

Ri + L di/dt + (1/C) ∫_{−∞}^{t} i(τ) dτ = V_s
DEs allow us to find the complete solution, i.e., both the transient and the steady-state parts:

v(t) = v_t(t) + v_ss(t)
But as the circuit grows larger and contains several sources, solving the differential equations directly becomes nontrivial.
For sinusoidal sources we learned AC (phasor-domain) analysis, which yields only the steady-state solution. In this chapter we will learn another tool, the Laplace transform, to obtain both the transient and the steady-state solution for a linear circuit.
Like frequency-domain analysis, Laplace-transform analysis is a transform-domain tool, and the analysis has the following steps:
1. Transform the circuit (or its differential equation) from the time domain to the s-domain.
2. Solve the resulting algebraic equations in s.
3. Take the inverse transform to return to the time domain.
In short, the Laplace transform turns a function f(t) into another function F(s); while the argument of the first function is t, the argument of the latter is s:

f(t)  —ℒ[·]→  F(s)

For example, the series RLC circuit described in the t-domain by

d²v/dt² + (R/L) dv/dt + (1/LC) v = (1/LC) v_s

becomes an algebraic equation in the s-domain.
But we will use the one-sided Laplace transform,

F(s) = ℒ[f(t)] = ∫_{0⁻}^{∞} f(t) e^{−st} dt,   s = σ + jω,

as our functions are assumed to be zero for t < 0. F(s) exists when the defining integral converges absolutely:

∫_{0⁻}^{∞} |f(t)e^{−st}| dt = ∫_{0⁻}^{∞} |f(t)e^{−(σ+jω)t}| dt = ∫_{0⁻}^{∞} e^{−σt}|f(t)| dt < ∞
In the gray region above (the region of convergence, σ > σ_c), |F(s)| < ∞ and F(s) exists. F(s) is undefined outside the region of convergence.
Fortunately, all the functions that we will deal with in circuit analysis meet the convergence requirement. Therefore, we do not need to specify the region of convergence.
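The convergence requirement can be illustrated numerically. Below is a minimal Python sketch (the test function f(t) = e^{2t}, its abscissa σ_c = 2, and the step counts are my own illustrative choices, not from the notes): the truncated integral ∫₀^T e^{−σt}|f(t)| dt settles to a finite value for σ > σ_c but keeps growing with T for σ < σ_c.

```python
import math

def abs_integral(f, sigma, T, n=50000):
    """Approximate the truncated integral ∫_0^T |f(t)| e^{-sigma t} dt
    with the trapezoidal rule."""
    dt = T / n
    g = lambda t: abs(f(t)) * math.exp(-sigma * t)
    total = 0.5 * (g(0.0) + g(T))
    for k in range(1, n):
        total += g(k * dt)
    return total * dt

f = lambda t: math.exp(2.0 * t)   # f(t) = e^{2t}, so sigma_c = 2

# sigma = 3 > sigma_c: partial integrals settle near 1/(sigma - 2) = 1
inside = [abs_integral(f, 3.0, T) for T in (10.0, 20.0, 40.0)]
# sigma = 1 < sigma_c: partial integrals grow without bound
outside = [abs_integral(f, 1.0, T) for T in (10.0, 20.0, 40.0)]
```

The growing `outside` values are the numerical signature of a divergent transform integral.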
Inverse Laplace Transform
After circuit analysis we need to go back to time-domain. Therefore, we need to define the
inverse Laplace transform. The inverse Laplace transform is defined as
ℒ⁻¹[F(s)] = f(t) = (1/2πj) ∫_{σ₁−j∞}^{σ₁+j∞} F(s) e^{st} ds

where we integrate over −∞ < ω < ∞ along the vertical line s = σ₁ + jω, for a given σ₁ > σ_c.
The inverse Laplace transform with this definition requires complex analysis, which is beyond the scope of this course.
But, instead of inverting by integration, we will use look-up tables, because there is a one-to-one relation between a function and its Laplace transform. In other words, a function f(t) and its Laplace transform constitute a transform pair, represented as
𝑓(𝑡) ⟺ 𝐹(𝑠)
Example: Find the Laplace transforms of (a) u(t), (b) e^{−at}u(t), and (c) δ(t).
Solution:
a) ℒ[u(t)] = ∫_{0⁻}^{∞} 1·e^{−st} dt = −(1/s)e^{−st} |₀^∞ = −(1/s)(e^{−∞} − e⁰) = −(1/s)(0 − 1) = 1/s
b) ℒ[e^{−at}u(t)] = ∫_{0⁻}^{∞} e^{−at}e^{−st} dt = ∫_{0⁻}^{∞} e^{−(s+a)t} dt = −(1/(s+a))e^{−(s+a)t} |₀^∞ = 1/(s+a)
c) ℒ[δ(t)] = ∫_{0⁻}^{∞} δ(t)e^{−st} dt = e^{−s·0} = 1
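These transform pairs are easy to sanity-check numerically. The sketch below (the truncation point T and step count n are ad-hoc choices) approximates the defining integral with the trapezoidal rule and compares it against 1/s and 1/(s+a) for the unit step and the decaying exponential:

```python
import math

def laplace_num(f, s, T=60.0, n=120000):
    """Numerically approximate the one-sided Laplace transform
    F(s) = ∫_0^∞ f(t) e^{-st} dt, truncated at t = T (trapezoidal rule)."""
    dt = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * dt
        total += f(t) * math.exp(-s * t)
    return total * dt

s, a = 2.0, 3.0
F_step = laplace_num(lambda t: 1.0, s)               # expect 1/s     = 0.5
F_exp  = laplace_num(lambda t: math.exp(-a * t), s)  # expect 1/(s+a) = 0.2
```

The same helper can be reused to spot-check any pair in the look-up table for real s > σ_c.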
Integrating by parts with u = t and dv = e^{−st}dt,

ℒ[t u(t)] = uv − ∫ v du = −t(1/s)e^{−st} |_{0⁻}^{∞} + (1/s)∫_{0⁻}^{∞} e^{−st} dt

= [−t(1/s)e^{−st} − (1/s²)e^{−st}] |_{0⁻}^{∞} = (−0 − 0) − (−0 − 1/s²) = 1/s²
Note that, by L'Hôpital's rule,

lim_{t→∞} t e^{−st} = lim_{t→∞} t/e^{st} = lim_{t→∞} 1/(s e^{st}) = 0
ℒ[50 cos ωt u(t)] = 50·(1/2)(1/(s−jω) + 1/(s+jω)) = (50/2)·(s+jω+s−jω)/(s²+ω²) = 50·s/(s²+ω²)
15.3 Properties of Laplace transform:
Linearity: If 𝐹1 (𝑠) and 𝐹2 (𝑠) are LT of the functions 𝑓1 (𝑡) and 𝑓2 (𝑡), then,
ℒ[𝑎1 𝑓1 (𝑡) + 𝑎2 𝑓2 (𝑡)] = 𝑎1 𝐹1 (𝑠) + 𝑎2 𝐹2 (𝑠)
where 𝑎1 and 𝑎2 are constants. This property is a direct consequence of the definition of LT.
Thus,
ℒ[cos ωt u(t)] = (1/2)(1/(s−jω) + 1/(s+jω)) = s/(s²+ω²)
Scaling: If F(s) is the Laplace transform of f(t), then, substituting x = at (a > 0),

ℒ[f(at)] = ∫_{0⁻}^{∞} f(x) e^{−(s/a)x} (dx/a) = (1/a) ∫_{0⁻}^{∞} f(x) e^{−(s/a)x} dx

Comparing this result with the definition of the LT (replace the dummy variable x with t, and s with s/a), we can write

ℒ[f(at)] = (1/a) F(s/a)
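A quick numerical check of the scaling property (the values of a and s are arbitrary picks): ℒ[f(at)] evaluated directly should match (1/a)F(s/a). With f(t) = e^{−t}, so that f(at) = e^{−at} and F(s) = 1/(s+1):

```python
import math

def laplace_num(f, s, T=60.0, n=120000):
    # trapezoidal-rule approximation of ∫_0^∞ f(t) e^{-st} dt
    dt = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        total += f(k * dt) * math.exp(-s * k * dt)
    return total * dt

a, s = 2.0, 2.0
lhs = laplace_num(lambda t: math.exp(-a * t), s)  # L[f(at)], f(t) = e^{-t}
rhs = (1.0 / a) * (1.0 / (s / a + 1.0))           # (1/a) F(s/a), F(s) = 1/(s+1)
```

Both sides equal 1/(s+a) = 0.25 here, as the property predicts.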
For example, we have seen that ℒ[sin ωt u(t)] = ω/(s²+ω²).
Time Shift: If F(s) is the Laplace transform of f(t), then, for a ≥ 0,

ℒ[f(t−a)u(t−a)] = e^{−as} F(s)

As an example,

ℒ[cos ωt u(t)] = s/(s²+ω²)

Using the time-shift property we can write

ℒ[cos ω(t−a) u(t−a)] = e^{−as} s/(s²+ω²)
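The time-shift result can be verified numerically as well (ω, a, and s below are arbitrary test values):

```python
import math

def laplace_num(f, s, T=80.0, n=160000):
    # trapezoidal-rule approximation of ∫_0^∞ f(t) e^{-st} dt
    dt = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        total += f(k * dt) * math.exp(-s * k * dt)
    return total * dt

w, a, s = 2.0, 1.0, 1.5
shifted = lambda t: math.cos(w * (t - a)) if t >= a else 0.0  # cos w(t-a) u(t-a)
lhs = laplace_num(shifted, s)
rhs = math.exp(-a * s) * s / (s**2 + w**2)                    # e^{-as} s/(s^2+w^2)
```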
Frequency Shift: If F(s) is the Laplace transform of f(t), then,

ℒ[e^{−at}f(t)u(t)] = ∫_{0⁻}^{∞} e^{−at}f(t)e^{−st} dt = ∫_{0⁻}^{∞} f(t)e^{−(s+a)t} dt = F(s+a)
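For instance, shifting ℒ[cos ωt u(t)] = s/(s²+ω²) gives the damped-cosine pair ℒ[e^{−at}cos ωt u(t)] = (s+a)/((s+a)²+ω²), which the sketch below checks numerically (parameter values are arbitrary):

```python
import math

def laplace_num(f, s, T=60.0, n=120000):
    # trapezoidal-rule approximation of ∫_0^∞ f(t) e^{-st} dt
    dt = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        total += f(k * dt) * math.exp(-s * k * dt)
    return total * dt

w, a, s = 2.0, 1.0, 1.0
lhs = laplace_num(lambda t: math.exp(-a * t) * math.cos(w * t), s)
rhs = (s + a) / ((s + a)**2 + w**2)   # F(s+a) with F(s) = s/(s^2+w^2)
```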
Time Differentiation: If F(s) is the Laplace transform of f(t), then, integrating by parts,

ℒ[df/dt] = ∫_{0⁻}^{∞} (df/dt) e^{−st} dt = f(t)e^{−st} |_{0⁻}^{∞} + s ∫_{0⁻}^{∞} f(t)e^{−st} dt

= 0 − f(0⁻) + sF(s) = sF(s) − f(0⁻)

The Laplace transform of the second derivative of f(t) follows from repeated application of the above equality:

ℒ[d²f/dt²] = s ℒ[f′(t)] − f′(0⁻) = s[sF(s) − f(0⁻)] − f′(0⁻)

= s²F(s) − sf(0⁻) − f′(0⁻)
Repeating this process, we can obtain the LT of the nth derivative of f(t) as

ℒ[dⁿf/dtⁿ] = sⁿF(s) − s^{n−1}f(0⁻) − s^{n−2}f′(0⁻) − ⋯ − s⁰f^{(n−1)}(0⁻)
Example:
We can use time differentiation to obtain the LT of sin ωt from that of cos ωt.
Let f(t) = cos ωt u(t); then f(0⁻) = 1 and f′(t) = −ω sin ωt u(t). Using the time-differentiation and linearity properties,

ℒ[sin ωt u(t)] = −(1/ω) ℒ[f′(t)] = −(1/ω)[sF(s) − f(0⁻)]

= −(1/ω)(s·s/(s²+ω²) − 1) = ω/(s²+ω²)
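The differentiation property used in this example can itself be checked numerically: ℒ[f′(t)] computed directly should equal sF(s) − f(0⁻). With f(t) = cos ωt u(t), so f′(t) = −ω sin ωt for t > 0 (ω and s below are arbitrary test values):

```python
import math

def laplace_num(f, s, T=60.0, n=120000):
    # trapezoidal-rule approximation of ∫_0^∞ f(t) e^{-st} dt
    dt = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        total += f(k * dt) * math.exp(-s * k * dt)
    return total * dt

w, s = 2.0, 1.0
lhs = laplace_num(lambda t: -w * math.sin(w * t), s)  # L[f'(t)] directly
rhs = s * (s / (s**2 + w**2)) - 1.0                   # s F(s) - f(0^-)
```

Both sides equal −ω²/(s²+ω²) = −0.8 for these values.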
Time Integration: If F(s) is the Laplace transform of f(t), the Laplace transform of its integral is

ℒ[∫₀ᵗ f(x) dx] = ∫_{0⁻}^{∞} [∫₀ᵗ f(x) dx] e^{−st} dt
Then, integrating by parts with u = ∫₀ᵗ f(x) dx and dv = e^{−st} dt,

ℒ[∫₀ᵗ f(x) dx] = {[∫₀ᵗ f(x) dx](−(1/s)e^{−st})} |_{0⁻}^{∞} − ∫_{0⁻}^{∞} (−1/s)e^{−st} f(t) dt
where, assuming ∫₀^∞ f(x) dx < ∞, i.e., the integral of f(t) is finite, the first term (in curly brackets) yields zero at t = ∞ because (1/s)e^{−s·∞} = 0. Evaluating it at t = 0, we get

[∫₀⁰ f(x) dx](−(1/s)e^{−0}) = [0](−1/s) = 0
Thus, the first term is zero. Then we have

ℒ[∫₀ᵗ f(x) dx] = (1/s) ∫_{0⁻}^{∞} f(t)e^{−st} dt = (1/s) F(s)
Example: f(t) = u(t) and F(s) = 1/s. Using the integration property, we can find ℒ[t]:

ℒ[∫₀ᵗ f(x) dx] = ℒ[∫₀ᵗ u(x) dx] = ℒ[t] = (1/s)F(s) = (1/s)(1/s) = 1/s²
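The resulting pair ℒ[t u(t)] = 1/s² can be sanity-checked numerically (s is an arbitrary test value):

```python
import math

def laplace_num(f, s, T=60.0, n=120000):
    # trapezoidal-rule approximation of ∫_0^∞ f(t) e^{-st} dt
    dt = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        total += f(k * dt) * math.exp(-s * k * dt)
    return total * dt

s = 2.0
F_ramp = laplace_num(lambda t: t, s)  # ∫_0^t u(x)dx = t, expect F(s)/s = 1/s^2
```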
This notation extends to repeated integration and differentiation: f^{(−1)}(t), f^{(0)}(t), f^{(1)}(t), f^{(2)}(t), f^{(3)}(t) denote the integral of f(t), f(t) itself, and its successive derivatives.
Frequency Differentiation: If F(s) is the Laplace transform of f(t), then ℒ[t f(t)] = −dF(s)/ds. For example,

ℒ[t e^{−at} u(t)] = −(d/ds)(1/(s+a)) = 1/(s+a)²
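This pair, too, can be spot-checked numerically against the closed form −d/ds[1/(s+a)] = 1/(s+a)² (parameter values below are arbitrary):

```python
import math

def laplace_num(f, s, T=60.0, n=120000):
    # trapezoidal-rule approximation of ∫_0^∞ f(t) e^{-st} dt
    dt = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        total += f(k * dt) * math.exp(-s * k * dt)
    return total * dt

a, s = 1.0, 1.0
lhs = laplace_num(lambda t: t * math.exp(-a * t), s)  # L[t e^{-at} u(t)]
rhs = 1.0 / (s + a)**2                                # 1/(s+a)^2 = 0.25 here
```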
Time Periodicity: A periodic function f(t) with period T can be represented as the sum of the time-shifted functions shown in the figure below.
Thus,
𝑓(𝑡) = 𝑓1 (𝑡) + 𝑓2 (𝑡) + 𝑓3 (𝑡) + ⋯
where 𝑓1 (𝑡) is
f₁(t) = f(t)[u(t) − u(t−T)] = { f(t), 0 < t < T;  0, otherwise }
We can express 𝑓(𝑡) as,
𝑓(𝑡) = 𝑓1 (𝑡) + 𝑓1 (𝑡 − 𝑇) + 𝑓1 (𝑡 − 2𝑇) + 𝑓1 (𝑡 − 3𝑇) + ⋯
We now transform each term in f(t), apply the time-shift property, and obtain

F(s) = F₁(s) + F₁(s)e^{−Ts} + F₁(s)e^{−2Ts} + F₁(s)e^{−3Ts} + ⋯

= F₁(s)[1 + e^{−Ts} + e^{−2Ts} + e^{−3Ts} + ⋯]

Since |e^{−Ts}| < 1 for σ > 0, the geometric series sums to

F(s) = F₁(s)/(1 − e^{−Ts})
Initial-Value Theorem: Letting s → ∞ in the time-differentiation property (the integral of (df/dt)e^{−st} then vanishes), we obtain

lim_{s→∞} sF(s) = f(0)
Final-Value Theorem: Letting s → 0 in the same property,

lim_{s→0} [sF(s) − f(0⁻)] = ∫_{0⁻}^{∞} (df/dt) e^{−0·t} dt = ∫_{0⁻}^{∞} df = f(∞) − f(0⁻)

or

f(∞) = lim_{s→0} sF(s)

This holds provided that (1) F(s) has no poles with positive real part and (2) F(s) has at most one pole at the origin.
The reason for the first requirement is that if F(s) has a pole with positive real part σ > 0, then f(t) will contain a term like Ae^{σt}, and hence f(∞) does not converge.
The second requirement is due to the fact that sF(s) will be infinite if F(s) has more than one pole at the origin: only one pole (i.e., 1/s) is cancelled by the multiplication by s in sF(s).
For example, applying the theorem blindly to f(t) = sin t u(t), with F(s) = 1/(s²+1), would give f(∞) = lim_{s→0} s/(s²+1) = 0. This result is incorrect: f(t) = sin t oscillates between −1 and +1 and does not have a limit as t → ∞.
Thus, the final-value theorem cannot be used to find the final value of f(t) = sin t, because F(s) has poles at s = ±j, which are not in the left half of the s-plane.
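A small sketch of both theorems in action (the test functions below are my own choices): for f(t) = (1 − e^{−t})u(t), F(s) = 1/(s(s+1)), so sF(s) tends to 1 as s → 0 and to 0 as s → ∞, matching f(∞) = 1 and f(0) = 0. For sin t the s → 0 limit exists numerically, but it is meaningless because the poles ±j violate the theorem's conditions.

```python
def sF_lowpass(s):
    # f(t) = (1 - e^{-t}) u(t)  ->  F(s) = 1/s - 1/(s+1) = 1/(s(s+1))
    return s * (1.0 / (s * (s + 1.0)))

def sF_sin(s):
    # f(t) = sin t u(t)  ->  F(s) = 1/(s^2+1); FVT does NOT apply (poles at ±j)
    return s * (1.0 / (s * s + 1.0))

final_value = sF_lowpass(1e-6)    # ≈ f(∞) = 1
initial_value = sF_lowpass(1e6)   # ≈ f(0) = 0
bogus = sF_sin(1e-6)              # ≈ 0, but sin t has no limit as t → ∞
```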
The initial-value and final-value theorems show the relationship between the origin and infinity in the time domain and the s-domain. They serve as useful checks on LTs.
Below is the list of properties of the LT.
Example 15.3 Obtain the LT of 𝑓(𝑡) = 𝛿(𝑡) + 2𝑢(𝑡) − 3𝑒 −2𝑡 𝑢(𝑡).
Solution: By linearity,

F(s) = 1 + 2/s − 3/(s+2) = (s² + s + 4)/(s(s+2))

Example: Find ℒ[t² sin 2t] using frequency differentiation. Since ℒ[sin 2t u(t)] = 2/(s²+4),

ℒ[t² sin 2t] = (−1)² (d²/ds²)(2/(s²+4)) = (d/ds)[−4s/(s²+4)²] = (12s² − 16)/(s²+4)³
Solution: Let f₁(t) = cos 3t u(t) ⟹ F₁(s) = ℒ[cos 3t u(t)] = s/(s² + 3²)
Then

ℒ[t² cos 3t u(t)] = ℒ[t² f₁(t)] = (−1)² d²F₁(s)/ds²

dF₁(s)/ds = 1/(s²+3²) − 2s²/(s²+3²)² = ((s²+3²) − 2s²)/(s²+3²)² = (−s²+3²)/(s²+3²)²

d²F₁(s)/ds² = −2s/(s²+3²)² + (−s²+3²)(d/ds)[1/(s²+3²)²]

= −2s/(s²+3²)² + (−s²+3²)(−4s/(s²+3²)³)
Solution:

g(t) = 10[u(t−2) − u(t−3)]

Given that we know the LT of u(t),

G(s) = 10(e^{−2s}/s − e^{−3s}/s) = (10/s)(e^{−2s} − e^{−3s})
Practice Problem 15.5 Find the LT of the function ℎ(𝑡) in Figure below.
H(s) = ℒ[h(t)] = ∫_{0⁻}^{4} 20e^{−st} dt + ∫_{4}^{8} 10e^{−st} dt

= 20(−(1/s)e^{−st} |₀⁴) + 10(−(1/s)e^{−st} |₄⁸)

= (20/s)(1 − e^{−4s}) + (10/s)(e^{−4s} − e^{−8s}) = (10/s)(2 − e^{−4s} − e^{−8s})
Alternative method (preferred):

h(t) = 20[u(t) − u(t−4)] + 10[u(t−4) − u(t−8)] = 10[2u(t) − u(t−4) − u(t−8)]

ℒ[h(t)] = 10[2/s − e^{−4s}/s − e^{−8s}/s] = (10/s)[2 − e^{−4s} − e^{−8s}]
Practice Problem 15.6 Find the LT of the periodic function h(t) given below.
The Inverse Laplace Transform
Recall that the inverse Laplace transform is defined as

ℒ⁻¹[F(s)] = f(t) = (1/2πj) ∫_{σ₁−j∞}^{σ₁+j∞} F(s) e^{st} ds
The inverse Laplace transform with this definition requires complex analysis, which is beyond the scope of this course. Instead, the inverse LT is usually obtained from a lookup table; in other words, we will obtain the inverse LT by using the LTs of known functions.
The LT of a function 𝑓(𝑡) has the following general form,
𝑁(𝑠)
𝐹(𝑠) =
𝐷(𝑠)
where 𝑁(𝑠) is the numerator polynomial and 𝐷(𝑠) is the denominator polynomial.
We will use partial fraction expansion to break F(s) down into simple terms whose inverse LTs can be obtained from a lookup table listing the LTs of known functions.
Simple Poles:
In simple poles case, all poles (roots of 𝐷(𝑠) = 0 ) are distinct and we can express the 𝐷(𝑠)
as product of the factors as
F(s) = N(s) / ((s+p₁)(s+p₂)⋯(s+pₙ))
s = −p₁, −p₂, …, −pₙ are the simple roots, and pᵢ ≠ pⱼ for all i ≠ j.
Partial Fraction Expansion:
If the degree of 𝑁(𝑠) is smaller than the degree of 𝐷(𝑠), we use partial fraction expansion
to decompose 𝐹(𝑠) as
F(s) = k₁/(s+p₁) + k₂/(s+p₂) + ⋯ + kₙ/(s+pₙ)
Example:
F(s) = 2/((s+3)(s+5)) = k₁/(s+3) + k₂/(s+5)
There are many ways of finding the expansion coefficients. One way is using the residue
method.
Residue Method (also known as Heaviside):
The expansion coefficients 𝑘1 , 𝑘2 , … , 𝑘𝑛 are known as the residues of 𝐹(𝑠). If we multiply
both sides of the above equation by (𝑠 + 𝑝1 ), we obtain
(s+p₁)F(s) = k₁ + (s+p₁)k₂/(s+p₂) + ⋯ + (s+p₁)kₙ/(s+pₙ)
Since 𝑝𝑖 ≠ 𝑝𝑗 , setting 𝑠 = −𝑝1 leaves only 𝑘1 on right hand side. Hence, 𝑘1 is obtained as
(𝑠 + 𝑝1 )𝐹(𝑠)|𝑠=−𝑝1 = 𝑘1
Using the same method, we can obtain any residue as
𝑘𝑖 = (𝑠 + 𝑝𝑖 )𝐹(𝑠)|𝑠=−𝑝𝑖
Once we have obtained the kᵢ values, since ℒ⁻¹[kᵢ/(s+pᵢ)] = kᵢe^{−pᵢt}u(t), we can write the inverse LT f(t) as

f(t) = (k₁e^{−p₁t} + k₂e^{−p₂t} + ⋯ + kₙe^{−pₙt})u(t)
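For distinct poles the cover-up evaluation can be automated. A small sketch with exact rational arithmetic (the helper name `residues` and the use of `fractions.Fraction` are my own choices); it reproduces the expansion of the earlier example F(s) = 2/((s+3)(s+5)):

```python
from fractions import Fraction

def residues(num, poles):
    """Cover-up residues for F(s) = num(s) / Π(s + p_i) with distinct poles:
    k_i = num(-p_i) / Π_{j≠i} (p_j - p_i)."""
    ks = []
    for i, p in enumerate(poles):
        denom = Fraction(1)
        for j, q in enumerate(poles):
            if j != i:
                denom *= (q - p)
        ks.append(Fraction(num(-p), 1) / denom)
    return ks

# F(s) = 2/((s+3)(s+5)): k1 = 2/(5-3) = 1, k2 = 2/(3-5) = -1
k1, k2 = residues(lambda s: 2, [Fraction(3), Fraction(5)])
# so f(t) = (e^{-3t} - e^{-5t}) u(t)
```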
What if N(s) has a larger degree than D(s)?
If the degree of N(s) is larger than the degree of D(s), then we divide N(s) by D(s) and obtain the following function of s:
N(s)/D(s) = F₁(s) + N₂(s)/D(s)
where 𝐹1 (𝑠) is a polynomial in 𝑠 and the degree of 𝑁2 (𝑠) is smaller than the degree of 𝐷(𝑠).
Hence, we will apply partial fraction expansion to 𝑁2 (𝑠)/𝐷(𝑠).
Repeated Poles:
Suppose 𝐹(𝑠) has 𝑛 repeated poles at 𝑠 = −𝑝. Then we can express 𝐹(𝑠) as
F(s) = kₙ/(s+p)ⁿ + kₙ₋₁/(s+p)ⁿ⁻¹ + ⋯ + k₂/(s+p)² + k₁/(s+p) + F₁(s)
where 𝐹1 (𝑠) is the remaining part of 𝐹(𝑠) which does not have any pole at 𝑠 = −𝑝. We
can determine the expansion coefficient 𝑘𝑛 as
𝑘𝑛 = (𝑠 + 𝑝)𝑛 𝐹(𝑠)|𝑠=−𝑝
To determine kₙ₋₁, we multiply each term in the above expression of F(s) by (s+p)ⁿ and differentiate to get rid of kₙ; then we evaluate the result at s = −p to eliminate all other coefficients except kₙ₋₁. Thus, we obtain
kₙ₋₁ = (d/ds)[(s+p)ⁿF(s)] |_{s=−p}
Repeating this gives
kₙ₋₂ = (1/2!)(d²/ds²)[(s+p)ⁿF(s)] |_{s=−p}
Combining all, we can write the general formula as
kₙ₋ᵢ = (1/i!)(dⁱ/dsⁱ)[(s+p)ⁿF(s)] |_{s=−p},   i = 0, 1, …, n−1

where

(d⁰/ds⁰)[(s+p)ⁿF(s)] = (s+p)ⁿF(s)
Once we obtain the values of 𝑘1 , 𝑘2 , … , 𝑘𝑛 by partial fraction expansion, we may apply
inverse transform to each term.
ℒ⁻¹[1/(s+a)ⁿ] = (t^{n−1}e^{−at}/(n−1)!) u(t)

and obtain

f(t) = (k₁e^{−pt} + k₂te^{−pt} + (k₃/2!)t²e^{−pt} + ⋯ + (kₙ/(n−1)!)t^{n−1}e^{−pt}) u(t) + f₁(t)
Complex Poles:
Method 1 (first order)
The complex poles can be handled with the same Heaviside (cover-up) method as simple poles. Since N(s) and D(s) in F(s) = N(s)/D(s) have real coefficients, the complex poles of D(s) come in conjugate pairs, and we can express F(s) as

F(s) = k₁/(s+p) + k₂/(s+p*) + F₁(s)

where p is the complex pole and p* is its complex conjugate, and F₁(s) is the remaining part of F(s) that does not have the pair of complex poles at s = −p and s = −p*. Then we can obtain k₁ and k₂ by cover-up as before.
This method produces complex values for k₁ and k₂; an alternative is the second-order (completing-the-square) method.
Whether the poles are simple (distinct), repeated, or complex conjugates, there is a method that allows us to find all the coefficients at once. It is called the method of algebra.
Example:
Start with the partial fraction expansion,
F(s) = (s+3)/(s(s+2)²(s+5)) = A₁/s + A₂/(s+2) + A₃/(s+2)² + A₄/(s+5)
Multiply both sides by the denominator of the left side to clear the fractions:

s + 3 = A₁(s+2)²(s+5) + A₂s(s+2)(s+5) + A₃s(s+5) + A₄s(s+2)²

Equating the coefficients of s³, s², s¹, and s⁰ on both sides yields

A₁ + A₂ + A₄ = 0
9A₁ + 7A₂ + A₃ + 4A₄ = 0
24A₁ + 10A₂ + 5A₃ + 4A₄ = 1
20A₁ = 3
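The 4×4 system above can be solved exactly with a few lines of Gauss-Jordan elimination over rationals (a sketch; the helper `solve_aug` is my own). It returns A₁ = 3/20, A₂ = −7/36, A₃ = −1/6, A₄ = 2/45, which can be checked by substituting back into the four equations.

```python
from fractions import Fraction as Fr

def solve_aug(aug):
    """Gauss-Jordan elimination on an augmented matrix of exact rationals."""
    n = len(aug)
    m = [row[:] for row in aug]
    for col in range(n):
        piv = next(r for r in range(col, n) if m[r][col] != 0)  # find a pivot
        m[col], m[piv] = m[piv], m[col]
        m[col] = [x / m[col][col] for x in m[col]]              # normalize row
        for r in range(n):
            if r != col and m[r][col] != 0:                     # eliminate col
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n] for row in m]

# coefficients of A1..A4 from matching powers of s (rows: s^3, s^2, s^1, s^0)
system = [
    [Fr(1),  Fr(1),  Fr(0), Fr(1), Fr(0)],
    [Fr(9),  Fr(7),  Fr(1), Fr(4), Fr(0)],
    [Fr(24), Fr(10), Fr(5), Fr(4), Fr(1)],
    [Fr(20), Fr(0),  Fr(0), Fr(0), Fr(3)],
]
A1, A2, A3, A4 = solve_aug(system)
```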
f(t) = ℒ⁻¹[5] + ℒ⁻¹[6/(s+4)] − ℒ⁻¹[7s/(s²+25)]

= 5δ(t) + 6e^{−4t}u(t) − 7cos 5t u(t)

= 5δ(t) + [6e^{−4t} − 7cos 5t]u(t)
Thus,
F(s) = 1/(s+1) + 3/(s+3) − 4/(s+4)
and
𝑓(𝑡) = (𝑒 −𝑡 + 3𝑒 −3𝑡 − 4𝑒 −4𝑡 )𝑢(𝑡)
Practice Problem 15.10 Obtain 𝑔(𝑡) if,
Example 15.11
Practice Problem 15.11
Find 𝑔(𝑡) given that,
Example 15.15 Use the Laplace transform to solve the differential equation,
Solution We take the Laplace transform of each term in the given differential equation,
and obtain,
Convolution: The output y(t) of a linear system with input x(t) and unit impulse response h(t) is

y(t) = x(t) * h(t) = ∫_{−∞}^{∞} x(λ)h(t−λ) dλ

where λ is a dummy variable and the asterisk '*' denotes convolution. This equation states that the output of a system is obtained by convolving the input x(t) with the unit impulse response of the system h(t). The convolution is commutative: x(t) * h(t) = h(t) * x(t).
If both x(t) = 0 for t < 0 and h(t) = 0 for t < 0, then, since h(t−λ) = 0 for t−λ < 0, i.e., for λ > t, we have

y(t) = ∫₀ᵗ x(λ)h(t−λ) dλ
Properties of the Convolution:
where the integral in brackets ranges from 0 to t because u(t−λ) = 1 for λ ≤ t and u(t−λ) = 0 for λ > t. The term in brackets is simply the convolution of f₁(t) and f₂(t). Then we have

ℒ[f₁(t) * f₂(t)] = F₁(s)F₂(s)
First fold the signal x₁(λ) to get x₁(−λ), and then shift by t to obtain x₁(t−λ), as below.
Then for each range of values of t we will compute the convolution y(t) = x₁(t) * x₂(t).
a) For t ≤ 0 (no overlap, y(t) = 0)
b) For 0 < t ≤ 1
c) For 1 < 𝑡 ≤ 2
d) For 2 < 𝑡 ≤ 3
Method 2: Laplace
X₁(s) = 1/s − (1/s)e^{−s}

and

X₂(s) = 1/s + (1/s)e^{−s} − (2/s)e^{−2s}
Thus,
Y(s) = X₁(s)X₂(s) = (1/s)(1 − e^{−s}) · (1/s)(1 + e^{−s} − 2e^{−2s})

= (1/s²)(1 − 3e^{−2s} + 2e^{−3s})
If we write three terms separately, we have
Y(s) = 1/s² − 3e^{−2s}/s² + 2e^{−3s}/s²
Taking the inverse transform term by term (ℒ⁻¹[1/s²] = t u(t), plus time shifts) gives

y(t) = t u(t) − 3(t−2)u(t−2) + 2(t−3)u(t−3)

Combining the parts, we get:
• For t < 0, y(t) = 0, because all the unit-step functions start at t = 0 or later.
• For 0 < t ≤ 2, we have a ramp with slope 1.
• For 2 < t ≤ 3, a new ramp with slope −3 starts, and the net slope is 1 − 3 = −2.
• For t > 3, a new ramp with slope +2 starts, and the net slope is 1 − 3 + 2 = 0.
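Both methods can be cross-checked with a brute-force discrete convolution (the step size dt and the sample points below are arbitrary choices): the sampled result should follow the piecewise-linear y(t) = t u(t) − 3(t−2)u(t−2) + 2(t−3)u(t−3) obtained from Y(s).

```python
dt = 0.001
N = int(4.0 / dt)   # sample both signals on [0, 4)
x1 = [1.0 if k * dt < 1.0 else 0.0 for k in range(N)]          # u(t) - u(t-1)
x2 = [1.0 if k * dt < 1.0 else (2.0 if k * dt < 2.0 else 0.0)  # u(t)+u(t-1)-2u(t-2)
      for k in range(N)]

def y_num(t):
    """Discrete approximation of (x1 * x2)(t) = Σ_k x1[k] x2[i-k] dt."""
    i = int(round(t / dt))
    return sum(x1[k] * x2[i - k] for k in range(max(0, i - N + 1), i + 1)) * dt

def y_exact(t):
    # inverse transform of Y(s): t u(t) - 3(t-2)u(t-2) + 2(t-3)u(t-3)
    r = lambda tau: tau if tau > 0 else 0.0
    return r(t) - 3.0 * r(t - 2.0) + 2.0 * r(t - 3.0)
```

Sampling y_num at a few points inside each of the ranges above matches the ramp-sum picture.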