
Chapter 15: Introduction to Laplace Transform

15.1 Introduction
To analyze linear circuits, we use differential equations (DE).

$$Ri + L\frac{di}{dt} + \frac{1}{C}\int_{-\infty}^{t} i(\tau)\,d\tau = V_s$$

Initial values: 𝑖(0) and 𝑣(0)

DEs allow us to find the complete solution, i.e., both the transient and the steady-state solution:

$$v(t) = v_t(t) + v_{ss}(t)$$
But as the circuit gets larger and we have different sources, finding the solution via
differential equations is not trivial.

For sinusoidal sources we learned AC (phasor-domain) analysis, which gives the steady-state solution.

In this chapter we will learn another tool, the Laplace transform, to obtain both the transient and steady-state solutions of a linear circuit.
Like frequency domain, Laplace transform is a transform domain analysis tool, and the
analysis has the following steps:

1. Transform the signal and the circuit to Laplace domain.


2. Analyze the circuit in Laplace domain.
3. Transform back to time domain.
Laplace transform has several advantages:
1. It allows us to obtain the total solution.
2. It can be applied to a much wider variety of inputs than just sinusoidal inputs.
3. It allows us to solve circuit problems with initial conditions, replacing differential equations with algebraic equations.

15.2 Definition of the Laplace Transform


Given any function 𝑓(𝑡), its Laplace transform, denoted by 𝐹(𝑠) or ℒ[𝑓(𝑡)], is defined as

$$\mathcal{L}[f(t)] = F(s) = \int_{0^-}^{\infty} f(t)\,e^{-st}\,dt$$

where 𝑠 is a complex variable given by,


𝑠 = 𝜎 + 𝑗𝜔
Units of the parameter 𝑠:
Since the argument 𝑠𝑡 of the exponential 𝑒^(−𝑠𝑡) must be dimensionless, it follows that 𝑠 has the dimensions of frequency and units of inverse seconds (s⁻¹).

The reason for lower limit: 0−

We need to capture any discontinuity around 𝑡 = 0.



$$\mathcal{L}[\delta(t)] = \int_{0^-}^{\infty} \delta(t)\,e^{-st}\,dt$$

For the impulse function δ(𝑡) we need to start the integration from just before 0, i.e., 𝑡 = 0⁻.

In short, the Laplace transform maps a function 𝑓(𝑡) into another function 𝐹(𝑠); the argument of the former is 𝑡, while the argument of the latter is 𝑠.

𝑓(𝑡) ⟶ ℒ[ · ] ⟶ 𝐹(𝑠)

Therefore, we say the transform is from the 𝑡-domain to the 𝑠-domain.
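This 𝑡-domain to 𝑠-domain mapping can be tried out symbolically. A minimal sketch using Python's sympy library (the library choice is an assumption of this sketch, not something the notes rely on):

```python
from sympy import symbols, laplace_transform, exp, simplify

t = symbols('t', positive=True)
s = symbols('s')

# Map a t-domain function into its s-domain image F(s)
F = laplace_transform(exp(-2*t), t, s, noconds=True)
print(F)  # 1/(s + 2)
```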


How to use the Laplace transform in circuit analysis:
When we apply the Laplace transform to circuit analysis, differential equations represent the circuit in the time domain; each term in them plays the role of 𝑓(𝑡). Their Laplace transforms, which correspond to 𝐹(𝑠), form algebraic equations that represent the circuit in the frequency domain.

$$t\text{-domain:}\quad \frac{d^2v}{dt^2} + \frac{R}{L}\frac{dv}{dt} + \frac{1}{LC}v = \frac{1}{LC}v_s$$

$$s\text{-domain:}\quad s^2 V(s) + c_1 s V(s) + c_0 V(s) = c_s V_s(s)$$

One-sided and two-sided Laplace transform


The Laplace definition that we have given ignores 𝑓(𝑡) for 𝑡 < 0. Therefore, we will assume that 𝑓(𝑡) is written as
𝑓(𝑡)𝑢(𝑡), or 𝑓(𝑡), 𝑡 ≥ 0.
This transform is known as the one-sided (or unilateral) Laplace transform.

The two-sided (or bilateral) Laplace transform is given by

$$F(s) = \int_{-\infty}^{\infty} f(t)\,e^{-st}\,dt$$

But we will use one-sided Laplace, as our functions are assumed to be zero for 𝑡 < 0.

Existence of Laplace transform


Not all functions have a Laplace transform. For 𝑓(𝑡) to have one, the Laplace integral must converge to a finite value.

That is, the following integral should have a finite value.

$$\int_{0^-}^{\infty} \left| f(t)\,e^{-st} \right| dt = \int_{0^-}^{\infty} \left| f(t)\,e^{-(\sigma+j\omega)t} \right| dt = \int_{0^-}^{\infty} e^{-\sigma t}\,|f(t)|\,dt < \infty$$

where we have used the fact that $|e^{j\omega t}| = \sqrt{\cos^2\omega t + \sin^2\omega t} = 1$.


This integral converges when σ exceeds some real value σ꜀. Thus, the region of convergence for the Laplace transform is Re(𝑠) = σ > σ꜀, as shown in the figure below.

In the gray region, |𝐹(𝑠)| < ∞ and 𝐹(𝑠) exists; 𝐹(𝑠) is undefined outside the region of convergence.
Fortunately, all the functions that we will deal in circuit analysis meet the convergence
requirement. Therefore, we do not need to specify the region of convergence.
Inverse Laplace Transform
After circuit analysis we need to go back to time-domain. Therefore, we need to define the
inverse Laplace transform. The inverse Laplace transform is defined as

$$\mathcal{L}^{-1}[F(s)] = f(t) = \frac{1}{2\pi j}\int_{\sigma_1 - j\infty}^{\sigma_1 + j\infty} F(s)\,e^{st}\,ds$$

where we integrate over $-\infty < \omega < \infty$ for a given $\sigma_1 > \sigma_c$.

Evaluating the inverse Laplace transform with this definition requires complex analysis, which is beyond the scope of this course.

Using Look-up Tables for Inverse Laplace transform:

Instead of inverting by integration, we will use look-up tables, because there is a one-to-one relation between a function and its Laplace transform. In other words, a function 𝑓(𝑡) and its Laplace transform constitute a transform pair, represented as

𝑓(𝑡) ⟺ 𝐹(𝑠)

I will use the abbreviation LT for the Laplace transform.


Example 15.1 Determine the LT of, a) 𝑢(𝑡), b) 𝑒 −𝑎𝑡 𝑢(𝑡) , 𝑎 > 0, and c) δ(𝑡)

Solution:

$$a)\ \mathcal{L}[u(t)] = \int_{0^-}^{\infty} 1\cdot e^{-st}\,dt = -\frac{1}{s}e^{-st}\Big|_{0}^{\infty} = -\frac{1}{s}\left(e^{-\infty} - e^{0}\right) = -\frac{1}{s}(0-1) = \frac{1}{s}$$

$$b)\ \mathcal{L}[e^{-at}u(t)] = \int_{0^-}^{\infty} e^{-at}e^{-st}\,dt = \int_{0^-}^{\infty} e^{-(s+a)t}\,dt = -\frac{1}{s+a}e^{-(s+a)t}\Big|_{0}^{\infty} = \frac{1}{s+a}$$


$$c)\ \mathcal{L}[\delta(t)] = \int_{0^-}^{\infty} \delta(t)\,e^{-st}\,dt = e^{-0} = 1$$
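Parts (a) and (b) can be double-checked symbolically, e.g. with sympy (a quick sketch, not part of the derivation above):

```python
from sympy import symbols, laplace_transform, exp, simplify

t = symbols('t', positive=True)
s, a = symbols('s a', positive=True)

# a) unit step u(t): the one-sided transform integrates the constant 1
assert laplace_transform(1, t, s, noconds=True) == 1/s

# b) decaying exponential e^{-at} u(t)
F = laplace_transform(exp(-a*t), t, s, noconds=True)
assert simplify(F - 1/(s + a)) == 0
```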

Practice Problem 15.1


Find the LT of 𝑟(𝑡) = 𝑡𝑢(𝑡) (the ramp function), 𝐴𝑒^(−𝑎𝑡)𝑢(𝑡), and 𝐵𝑒^(−𝑗𝜔𝑡)𝑢(𝑡).

For 𝑟(𝑡) = 𝑡𝑢(𝑡)


$$\mathcal{L}[t\,u(t)] = \int_{0^-}^{\infty} t\,u(t)\,e^{-st}\,dt = \int_{0^-}^{\infty} t\,e^{-st}\,dt$$

Integrate by parts: $u = t \Rightarrow du = dt$ and $dv = e^{-st}dt \Rightarrow v = -\frac{1}{s}e^{-st}$

$$\mathcal{L}[t\,u(t)] = uv - \int v\,du = -t\,\frac{1}{s}e^{-st}\Big|_{0^-}^{\infty} + \frac{1}{s}\int_{0^-}^{\infty} e^{-st}\,dt$$

$$= \left[-t\,\frac{1}{s}e^{-st} - \frac{1}{s^2}e^{-st}\right]\Bigg|_{0^-}^{\infty} = (-0-0) - \left(-0-\frac{1}{s^2}\right) = \frac{1}{s^2}$$
Note that:
$$\lim_{t\to\infty} \frac{t}{s}e^{-st} = \lim_{t\to\infty} \frac{t}{s\,e^{st}}$$

Using L'Hôpital's rule,

$$\lim_{t\to\infty} \frac{t}{s\,e^{st}} = \lim_{t\to\infty} \frac{\frac{d}{dt}[t]}{\frac{d}{dt}[s\,e^{st}]} = \lim_{t\to\infty} \frac{1}{s^2 e^{st}} = 0$$

For 𝐴𝑒^(−𝑎𝑡)𝑢(𝑡):

$$\mathcal{L}[Ae^{-at}u(t)] = \int_{0^-}^{\infty} Ae^{-at}u(t)\,e^{-st}\,dt = A\int_{0^-}^{\infty} e^{-at}u(t)\,e^{-st}\,dt = A\,\mathcal{L}[e^{-at}u(t)] = \frac{A}{s+a}$$

For 𝐵𝑒^(−𝑗𝜔𝑡)𝑢(𝑡):

$$\mathcal{L}[Be^{-j\omega t}u(t)] = B\,\mathcal{L}[e^{-j\omega t}u(t)] = \frac{B}{s+j\omega}$$

Example 15.2 Determine the LT of 𝑓(𝑡) = sin 𝜔𝑡 𝑢(𝑡)


Solution:
$$\mathcal{L}[\sin\omega t] = \int_{0^-}^{\infty} (\sin\omega t)\,e^{-st}\,dt = \int_{0^-}^{\infty} \left(\frac{e^{j\omega t} - e^{-j\omega t}}{2j}\right) e^{-st}\,dt$$

$$= \frac{1}{2j}\int_{0^-}^{\infty} \left(e^{-(s-j\omega)t} - e^{-(s+j\omega)t}\right) dt = \frac{1}{2j}\left(\frac{1}{s-j\omega} - \frac{1}{s+j\omega}\right) = \frac{\omega}{s^2+\omega^2}$$

Practice Problem 15.2 Find the LT of 𝑓(𝑡) = 50 cos 𝜔𝑡 𝑢(𝑡)


Solution:
$$F(s) = \mathcal{L}[50\cos\omega t\,u(t)] = \int_{0^-}^{\infty} (50\cos\omega t\,u(t))\,e^{-st}\,dt = 50\int_{0^-}^{\infty} \frac{e^{j\omega t}+e^{-j\omega t}}{2}\,e^{-st}\,dt$$

$$= 50\cdot\frac{1}{2}\left(\frac{1}{s-j\omega} + \frac{1}{s+j\omega}\right) = \frac{50}{2}\,\frac{s+j\omega+s-j\omega}{s^2+\omega^2} = 50\,\frac{s}{s^2+\omega^2}$$
15.3 Properties of Laplace transform:
Linearity: If 𝐹1 (𝑠) and 𝐹2 (𝑠) are LT of the functions 𝑓1 (𝑡) and 𝑓2 (𝑡), then,
ℒ[𝑎1 𝑓1 (𝑡) + 𝑎2 𝑓2 (𝑡)] = 𝑎1 𝐹1 (𝑠) + 𝑎2 𝐹2 (𝑠)
where 𝑎1 and 𝑎2 are constants. This property is a direct consequence of the definition of LT.

Since ℒ[𝑓(𝑡)] is an integral and integration is a linear operation, the LT is also a linear operation.


For example, by linearity we may write,

$$\mathcal{L}[\cos\omega t\,u(t)] = \mathcal{L}\left[\frac{1}{2}\left(e^{j\omega t}+e^{-j\omega t}\right)\right] = \frac{1}{2}\mathcal{L}[e^{j\omega t}] + \frac{1}{2}\mathcal{L}[e^{-j\omega t}]$$

But we have seen that $\mathcal{L}[e^{-at}] = \frac{1}{s+a}$. Thus,

$$\mathcal{L}[\cos\omega t\,u(t)] = \frac{1}{2}\left(\frac{1}{s-j\omega} + \frac{1}{s+j\omega}\right) = \frac{s}{s^2+\omega^2}$$

Scaling: If 𝐹(𝑠) is the LT of 𝑓(𝑡), then,

$$\mathcal{L}[f(at)] = \int_{0^-}^{\infty} f(at)\,e^{-st}\,dt$$
where 𝑎 > 0 is a constant. If we let 𝑥 = 𝑎𝑡, 𝑑𝑥 = 𝑎𝑑𝑡, then,

$$\mathcal{L}[f(at)] = \int_{0^-}^{\infty} f(x)\,e^{-(s/a)x}\,\frac{dx}{a} = \frac{1}{a}\int_{0^-}^{\infty} f(x)\,e^{-(s/a)x}\,dx$$
Comparing this result with the definition of the LT (replace the dummy variable 𝑥 with 𝑡, and 𝑠 with 𝑠/𝑎), we can write

$$\mathcal{L}[f(at)] = \frac{1}{a}\,F\!\left(\frac{s}{a}\right)$$

For example, we have seen that $\mathcal{L}[\sin\omega t\,u(t)] = \frac{\omega}{s^2+\omega^2}$.

Then, using the scaling property, we can obtain ℒ[sin 2𝜔𝑡 𝑢(𝑡)] as

$$\mathcal{L}[\sin 2\omega t\,u(t)] = \frac{1}{2}\,\frac{\omega}{(s/2)^2+\omega^2} = \frac{2\omega}{s^2+4\omega^2}$$
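The scaling result can be confirmed symbolically; this sketch (using sympy, with 𝑎 = 2 as in the example) compares the direct transform of 𝑓(2𝑡) against (1/2)𝐹(𝑠/2):

```python
from sympy import symbols, laplace_transform, sin, simplify

t = symbols('t', positive=True)
s, w = symbols('s omega', positive=True)

F = laplace_transform(sin(w*t), t, s, noconds=True)         # omega/(s**2 + omega**2)
scaled = laplace_transform(sin(2*w*t), t, s, noconds=True)  # transform of f(2t) directly
# Scaling property with a = 2: L[f(2t)] = (1/2) F(s/2)
assert simplify(scaled - F.subs(s, s/2)/2) == 0
```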
Time Shift: If 𝐹(𝑠) is the Laplace transform of 𝑓(𝑡), then, for 𝑎 ≥ 0,

$$\mathcal{L}[f(t-a)u(t-a)] = \int_{0^-}^{\infty} f(t-a)\,u(t-a)\,e^{-st}\,dt$$

But 𝑢(𝑡 − 𝑎) = 0 for 𝑡 < 𝑎 and 𝑢(𝑡 − 𝑎) = 1 for 𝑡 > 𝑎. Hence,

$$\mathcal{L}[f(t-a)u(t-a)] = \int_{a}^{\infty} f(t-a)\,e^{-st}\,dt$$

If we let 𝑥 = 𝑡 − 𝑎, then 𝑑𝑥 = 𝑑𝑡 and 𝑡 = 𝑥 + 𝑎. As 𝑡 → 𝑎, 𝑥 → 0 and as 𝑡 → ∞, 𝑥 → ∞.


Thus,
$$\mathcal{L}[f(t-a)u(t-a)] = \int_{0^-}^{\infty} f(x)\,e^{-s(x+a)}\,dx = e^{-as}\int_{0^-}^{\infty} f(x)\,e^{-sx}\,dx = e^{-as}F(s)$$

As an example,
$$\mathcal{L}[\cos\omega t\,u(t)] = \frac{s}{s^2+\omega^2}$$
Using the time-shift property we can write,
$$\mathcal{L}[\cos\omega(t-a)\,u(t-a)] = e^{-as}\,\frac{s}{s^2+\omega^2}$$
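The time-shift property can be checked directly from the defining integral; a sketch with sympy, using 𝑓(𝑡) = 𝑒^(−𝑡)𝑢(𝑡) and a shift of 𝑎 = 3 (both chosen arbitrarily for the check):

```python
from sympy import symbols, integrate, exp, oo, simplify

t = symbols('t', positive=True)
s = symbols('s', positive=True)
a = 3  # shift amount chosen for the check

# F(s) for f(t) = e^{-t} u(t)
F = integrate(exp(-t)*exp(-s*t), (t, 0, oo))          # 1/(s + 1)
# L[f(t - a) u(t - a)]: the integrand vanishes below t = a
shifted = integrate(exp(-(t - a))*exp(-s*t), (t, a, oo))
assert simplify(shifted - exp(-a*s)*F) == 0
```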
Frequency Shift: If 𝐹(𝑠) is the Laplace transform of 𝑓(𝑡), then,

$$\mathcal{L}[e^{-at}f(t)u(t)] = \int_{0^-}^{\infty} e^{-at}f(t)\,e^{-st}\,dt = \int_{0^-}^{\infty} f(t)\,e^{-(s+a)t}\,dt = F(s+a)$$

Question: why is it called frequency shift?


As an example,
$$\cos\omega t\,u(t) \Longleftrightarrow \frac{s}{s^2+\omega^2}, \qquad \sin\omega t\,u(t) \Longleftrightarrow \frac{\omega}{s^2+\omega^2}$$
Using the frequency-shift property we can obtain the LT of the damped sine and damped cosine functions as
$$\mathcal{L}[e^{-at}\cos\omega t\,u(t)] = \frac{s+a}{(s+a)^2+\omega^2}$$
$$\mathcal{L}[e^{-at}\sin\omega t\,u(t)] = \frac{\omega}{(s+a)^2+\omega^2}$$
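The damped-cosine pair can be confirmed symbolically, e.g. with sympy (a quick check, not part of the notes' derivation):

```python
from sympy import symbols, laplace_transform, exp, cos, simplify

t = symbols('t', positive=True)
s, a, w = symbols('s a omega', positive=True)

# Frequency shift: L[e^{-at} cos(wt) u(t)] = (s + a)/((s + a)^2 + w^2)
F = laplace_transform(exp(-a*t)*cos(w*t), t, s, noconds=True)
assert simplify(F - (s + a)/((s + a)**2 + w**2)) == 0
```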
Time Differentiation
If 𝐹(𝑠) is the Laplace transform of 𝑓(𝑡), then the LT of its derivative is,

$$\mathcal{L}\left[\frac{df}{dt}u(t)\right] = \int_{0^-}^{\infty} \frac{df}{dt}\,e^{-st}\,dt$$

To integrate by parts, we let $u = e^{-st}$, $du = -s\,e^{-st}\,dt$ and $dv = (df/dt)\,dt = df$, $v = f(t)$.
Then

$$\mathcal{L}\left[\frac{df}{dt}u(t)\right] = f(t)\,e^{-st}\Big|_{0^-}^{\infty} - \int_{0^-}^{\infty} f(t)\left[-s\,e^{-st}\right]dt$$

$$= 0 - f(0^-) + s\int_{0^-}^{\infty} f(t)\,e^{-st}\,dt = sF(s) - f(0^-)$$

The Laplace transform of the second derivative of 𝑓(𝑡) follows by repeated application of the above equality,
$$\mathcal{L}\left[\frac{d^2 f}{dt^2}\right] = s\,\mathcal{L}[f'(t)] - f'(0^-) = s[sF(s) - f(0^-)] - f'(0^-) = s^2 F(s) - s f(0^-) - f'(0^-)$$
Repeating this process, we can obtain the LT of the 𝑛th derivative of 𝑓(𝑡) as
$$\mathcal{L}\left[\frac{d^n f}{dt^n}\right] = s^n F(s) - s^{n-1}f(0^-) - s^{n-2}f'(0^-) - \cdots - s^0 f^{(n-1)}(0^-)$$

where $f^{(n-1)}(0^-)$ is the $(n-1)$th-order derivative of 𝑓(𝑡) evaluated at 𝑡 = 0⁻.

Example:
We can use time differentiation to obtain the LT of sin 𝜔𝑡 from that of cos 𝜔𝑡.

Let 𝑓(𝑡) = cos 𝜔𝑡 𝑢(𝑡); then 𝑓(0⁻) = 1 and 𝑓′(𝑡) = −𝜔 sin 𝜔𝑡 𝑢(𝑡). Using the time-differentiation property and linearity,
$$\mathcal{L}[\sin\omega t\,u(t)] = -\frac{1}{\omega}\mathcal{L}[f'(t)] = -\frac{1}{\omega}\left[sF(s) - f(0^-)\right] = -\frac{1}{\omega}\left(\frac{s^2}{s^2+\omega^2} - 1\right) = \frac{\omega}{s^2+\omega^2}$$
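The time-differentiation property itself is easy to verify symbolically; a sketch with sympy, using the same 𝑓(𝑡) = cos 𝜔𝑡:

```python
from sympy import symbols, laplace_transform, diff, cos, simplify

t = symbols('t', positive=True)
s, w = symbols('s omega', positive=True)

f = cos(w*t)                                   # f(0-) = 1
F = laplace_transform(f, t, s, noconds=True)   # s/(s**2 + omega**2)
dF = laplace_transform(diff(f, t), t, s, noconds=True)
# Property: L[f'] = s F(s) - f(0-)
assert simplify(dF - (s*F - 1)) == 0
```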
Time Integration: If 𝐹(𝑠) is the Laplace transform of 𝑓(𝑡), the Laplace transform of its integral is,
$$\mathcal{L}\left[\int_0^t f(x)\,dx\right] = \int_{0^-}^{\infty}\left[\int_0^t f(x)\,dx\right]e^{-st}\,dt$$

To integrate this by parts, we let

$$u = \int_0^t f(x)\,dx,\quad du = f(t)\,dt \qquad\text{and}\qquad dv = e^{-st}dt,\quad v = -\frac{1}{s}e^{-st}$$

Then
$$\mathcal{L}\left[\int_0^t f(x)\,dx\right] = \left\{\left[\int_0^t f(x)\,dx\right]\left(-\frac{1}{s}e^{-st}\right)\right\}\Bigg|_{0^-}^{\infty} - \int_{0^-}^{\infty}\left(-\frac{1}{s}\right)e^{-st}f(t)\,dt$$

Assuming $\int_0^{\infty} f(x)\,dx < \infty$, i.e., the integral of 𝑓(𝑡) is finite, the first term (in curly brackets) evaluated at 𝑡 = ∞ yields zero because $\frac{1}{s}e^{-s\cdot\infty} = 0$. Evaluating it at 𝑡 = 0, we get,

$$\left[\int_0^0 f(x)\,dx\right]\left(-\frac{1}{s}e^{-0}\right) = [0]\left(-\frac{1}{s}\right) = 0$$
Thus, the first term is zero. Then we have
$$\mathcal{L}\left[\int_0^t f(x)\,dx\right] = \frac{1}{s}\int_{0^-}^{\infty} f(t)\,e^{-st}\,dt = \frac{1}{s}F(s)$$

Example: 𝑓(𝑡) = 𝑢(𝑡) and 𝐹(𝑠) = 1/𝑠. Using the integration property, we can find ℒ[𝑡].
$$\mathcal{L}\left[\int_{0^-}^t f(x)\,dx\right] = \mathcal{L}\left[\int_{0^-}^t u(x)\,dx\right] = \mathcal{L}[t] = \frac{1}{s}F(s) = \frac{1}{s}\left(\frac{1}{s}\right) = \frac{1}{s^2}$$

Thus, the LT of the ramp function 𝑓(𝑡) = 𝑡𝑢(𝑡) is

$$\mathcal{L}[t] = \frac{1}{s^2} \implies \mathcal{L}[t^2] = \,?$$

Using this result and time integration again, we obtain
$$\mathcal{L}\left[\int_0^t x\,dx\right] = \mathcal{L}\left[\frac{t^2}{2}\right] = \frac{1}{s}\frac{1}{s^2} \quad\text{or}\quad \mathcal{L}[t^2] = \frac{2}{s^3}$$
Repeated applications of time integration lead to
$$\mathcal{L}[t^n] = \frac{n!}{s^{n+1}}$$
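The general 𝑡ⁿ formula can be spot-checked for the first few powers, e.g. with sympy (a quick sketch):

```python
from sympy import symbols, laplace_transform, factorial, simplify

t = symbols('t', positive=True)
s = symbols('s', positive=True)

# L[t^n] = n! / s^(n+1) for n = 1, 2, 3, 4
for n in range(1, 5):
    F = laplace_transform(t**n, t, s, noconds=True)
    assert simplify(F - factorial(n)/s**(n + 1)) == 0
```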
Similarly, we can show that,
$$\mathcal{L}\left[\int_{-\infty}^{t} f(x)\,dx\right] = \mathcal{L}\left[\int_{-\infty}^{0^-} f(x)\,dx + \int_{0^-}^{t} f(x)\,dx\right] = \frac{f^{(-1)}(0^-)}{s} + \frac{F(s)}{s}$$

where $f^{(-1)}(0^-) = \int_{-\infty}^{0^-} f(t)\,dt$, and $f^{(-1)}(0^-)/s$ is the Laplace transform of the constant $f^{(-1)}(0^-)$.

Notation: the sequence
$$\int_{-\infty}^{t} f(x)\,dx,\quad f(t),\quad \frac{df(t)}{dt},\quad \frac{d^2 f(t)}{dt^2},\quad \frac{d^3 f(t)}{dt^3}$$
is denoted by
$$f^{(-1)}(t),\quad f^{(0)}(t),\quad f^{(1)}(t),\quad f^{(2)}(t),\quad f^{(3)}(t)$$

Frequency Differentiation: If 𝐹(𝑠) is the Laplace transform of 𝑓(𝑡), then,
$$F(s) = \int_{0^-}^{\infty} f(t)\,e^{-st}\,dt$$

Taking the derivative with respect to 𝑠, we get

$$\frac{dF(s)}{ds} = \int_{0^-}^{\infty} f(t)\left(-t\,e^{-st}\right)dt = \int_{0^-}^{\infty} [-t f(t)]\,e^{-st}\,dt = \mathcal{L}[-t f(t)]$$

and the frequency-differentiation property becomes,

$$\mathcal{L}[t f(t)] = -\frac{dF(s)}{ds}$$

Repeated application of this property leads to
$$\mathcal{L}[t^n f(t)] = (-1)^n \frac{d^n F(s)}{ds^n}$$
For example, we have seen that ℒ[𝑒^(−𝑎𝑡)] = 1/(𝑠 + 𝑎). Using frequency differentiation, we get

$$\mathcal{L}[t\,e^{-at}\,u(t)] = -\frac{d}{ds}\left(\frac{1}{s+a}\right) = \frac{1}{(s+a)^2}$$

Note that if 𝑎 = 0, we obtain ℒ[𝑡] = 1/𝑠², as before.


Time Periodicity: If 𝑓(𝑡) is a periodic function, as shown in the figure below,

𝑓(𝑡) can be represented as the sum of the time-shifted functions shown in the next figure.

Thus,
𝑓(𝑡) = 𝑓1 (𝑡) + 𝑓2 (𝑡) + 𝑓3 (𝑡) + ⋯
where 𝑓₁(𝑡) is
$$f_1(t) = f(t)[u(t) - u(t-T)] = \begin{cases} f(t), & 0 < t < T \\ 0, & \text{otherwise} \end{cases}$$
We can express 𝑓(𝑡) as,
𝑓(𝑡) = 𝑓1 (𝑡) + 𝑓1 (𝑡 − 𝑇) + 𝑓1 (𝑡 − 2𝑇) + 𝑓1 (𝑡 − 3𝑇) + ⋯
We now transform each term in 𝑓(𝑡), apply the time-shift property, and obtain,
𝐹(𝑠) = 𝐹1 (𝑠) + 𝐹1 (𝑠)𝑒 −𝑇𝑠 + 𝐹1 (𝑠)𝑒 −2𝑇𝑠 + 𝐹1 (𝑠)𝑒 −3𝑇𝑠 + ⋯
= 𝐹1 (𝑠)[1 + 𝑒 −𝑇𝑠 + 𝑒 −2𝑇𝑠 + 𝑒 −3𝑇𝑠 + ⋯ ]

Let 𝑋(𝑠) = 1 + 𝑒 −𝑇𝑠 + 𝑒 −2𝑇𝑠 + 𝑒 −3𝑇𝑠 + ⋯


So that 𝐹(𝑠) = 𝐹1 (𝑠)𝑋(𝑠). If we multiply both sides of this expression by 𝑒 −𝑇𝑠 , we get,
𝑒 −𝑇𝑠 𝑋(𝑠) = 𝑒 −𝑇𝑠 + 𝑒 −2𝑇𝑠 + 𝑒 −3𝑇𝑠 + ⋯
If we add and subtract 1 on the right-hand side, we have,
$$e^{-Ts}X(s) = -1 + 1 + e^{-Ts} + e^{-2Ts} + e^{-3Ts} + \cdots = -1 + X(s)$$
Thus,
$$X(s) = \frac{1}{1-e^{-Ts}}$$
Therefore, the LT of the periodic function 𝑓(𝑡) is
$$F(s) = \frac{F_1(s)}{1-e^{-Ts}}$$
where 𝐹₁(𝑠) is the LT of 𝑓₁(𝑡).

Alternative proof (in textbook):


We have the series 1 + 𝑒^(−𝑇𝑠) + 𝑒^(−2𝑇𝑠) + 𝑒^(−3𝑇𝑠) + ⋯. But we know that, if |𝑥| < 1,
$$1 + x + x^2 + x^3 + \cdots = \frac{1}{1-x}$$
Setting 𝑥 = 𝑒^(−𝑇𝑠), we have
$$|x| = |e^{-T(\sigma+j\omega)}| = |e^{-j\omega T}|\cdot|e^{-\sigma T}| = |e^{-\sigma T}|$$
The period is 𝑇 > 0. Hence, for the sum 1 + 𝑒^(−𝑇𝑠) + 𝑒^(−2𝑇𝑠) + 𝑒^(−3𝑇𝑠) + ⋯ to converge, we need 𝜎 > 0. In that case |𝑒^(−𝑇𝑠)| = |𝑒^(−𝜎𝑇)| < 1, which means we can use 1/(1 − 𝑥) as the limit of the series.
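The periodicity formula can be checked numerically against a truncated sum of the time-shifted copies. This sketch (plain Python, no libraries) uses a unit-amplitude square wave with period 𝑇 = 2 chosen for illustration, evaluated at one numeric value of 𝑠:

```python
import math

# Unit-amplitude square wave: f(t) = 1 on [0, 1), 0 on [1, 2), period T = 2
T = 2.0
s = 1.0  # evaluate the transforms at one numeric point

# F1(s): transform of a single period, f1(t) = u(t) - u(t - 1)
F1 = (1 - math.exp(-s*1.0)) / s

# Periodicity formula: F(s) = F1(s) / (1 - e^{-sT})
F_formula = F1 / (1 - math.exp(-s*T))

# Direct check: sum the time-shifted copies F1 * e^{-kTs}
F_direct = sum(F1 * math.exp(-k*T*s) for k in range(200))
assert abs(F_formula - F_direct) < 1e-12
```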

Initial and Final Values:


The initial-value and final-value properties allow us to find the initial value 𝑓(0) and the final
value 𝑓(∞) of 𝑓(𝑡) directly from its LT 𝐹(𝑠).
Using the time-differentiation property, we have
$$\mathcal{L}\left[\frac{df}{dt}\right] = \int_{0^-}^{\infty}\frac{df}{dt}\,e^{-st}\,dt = sF(s) - f(0)$$
If we let 𝑠 → ∞, then because of the vanishing exponential 𝑒^(−𝑠𝑡) inside the integral, we get,
$$\lim_{s\to\infty}\left[\int_{0^-}^{\infty}\frac{df}{dt}\,e^{-st}\,dt\right] = 0$$

Since 𝑓(0) is independent of 𝑠, we get,
$$\lim_{s\to\infty}\left[sF(s) - f(0)\right] = \lim_{s\to\infty} sF(s) - f(0) = 0$$

Thus,
$$\lim_{s\to\infty} sF(s) = f(0)$$

This is known as the initial-value theorem.


For example, we know that,
$$f(t) = e^{-2t}\cos 10t \iff F(s) = \frac{s+2}{(s+2)^2 + 10^2}$$
Using the initial-value theorem,
$$f(0) = \lim_{s\to\infty} sF(s) = \lim_{s\to\infty}\frac{s^2+2s}{(s+2)^2+10^2} = \lim_{s\to\infty}\frac{1+2/s}{1+4/s+104/s^2} = 1$$

which is the value we expect from the given 𝑓(𝑡).

Similarly, using the time-differentiation property, we have
$$\lim_{s\to 0}\left[sF(s) - f(0^-)\right] = \int_{0^-}^{\infty}\frac{df}{dt}\,e^{0t}\,dt = \int_{0^-}^{\infty} df = f(\infty) - f(0^-)$$

or
$$f(\infty) = \lim_{s\to 0} sF(s)$$

This is known as the final-value theorem.

For example, we know that,
$$f(t) = e^{-2t}\sin 5t\,u(t) \iff F(s) = \frac{5}{(s+2)^2+5^2}$$
Applying the final-value theorem,
$$f(\infty) = \lim_{s\to 0} sF(s) = \lim_{s\to 0}\frac{5s}{s^2+4s+29} = 0$$
as expected from the given 𝑓(𝑡).
Note: for the final-value theorem to hold, 𝐹(𝑠) must satisfy the following requirements:
1. The poles of 𝐹(𝑠) must have negative real parts (lie in the left half of the 𝑠-plane).

2. 𝐹(𝑠) can have at most one pole at the origin (𝑠 = 0).

The reason for the first requirement is that if 𝐹(𝑠) has poles with positive real part 𝜎 > 0,
then 𝑓(𝑡) will have a term like 𝐴𝑒 𝜎𝑡 , hence 𝑓(∞) does not converge.

The second requirement is due to the fact that 𝑠𝐹(𝑠) will be infinite if 𝐹(𝑠) has more than one pole at the origin; only one pole (i.e., one factor of 1/𝑠) is cancelled by the 𝑠 multiplication in 𝑠𝐹(𝑠).

Example: We know that,
$$f(t) = \sin t\,u(t) \iff F(s) = \frac{1}{s^2+1}$$
So that
$$f(\infty) = \lim_{s\to 0} sF(s) = \lim_{s\to 0}\frac{s}{s^2+1} = 0 \qquad \text{(incorrect!)}$$

This result is incorrect: 𝑓(𝑡) = sin 𝑡 oscillates between −1 and +1 and does not have a limit as 𝑡 → ∞.

Thus, the final-value theorem cannot be used to find the final value of 𝑓(𝑡) = sin 𝑡, because 𝐹(𝑠) has poles at 𝑠 = ±𝑗, which are not in the left half of the 𝑠-plane.

The initial-value and final-value theorems show the relationship between the origin and infinity in the time domain and the 𝑠-domain. They serve as useful checks on LTs.
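Both theorems reduce to taking limits of 𝑠𝐹(𝑠); the two examples above can be checked symbolically, e.g. with sympy:

```python
from sympy import symbols, limit, oo

s = symbols('s', positive=True)

# f(t) = e^{-2t} cos(10t) u(t): initial value should be f(0) = 1
F = (s + 2)/((s + 2)**2 + 100)
assert limit(s*F, s, oo) == 1

# f(t) = e^{-2t} sin(5t) u(t): final value should be f(oo) = 0
G = 5/((s + 2)**2 + 25)
assert limit(s*G, s, 0) == 0
```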
Below is the list of properties of LT.
Example 15.3 Obtain the LT of 𝑓(𝑡) = 𝛿(𝑡) + 2𝑢(𝑡) − 3𝑒^(−2𝑡)𝑢(𝑡).
Solution: By linearity,
$$F(s) = 1 + \frac{2}{s} - \frac{3}{s+2} = \frac{s^2+s+4}{s(s+2)}$$

Practice Problem 15.3

Example 15.4 Determine the LT of 𝑓(𝑡) = 𝑡² sin 2𝑡 𝑢(𝑡)

Solution: We know that
$$\mathcal{L}[\sin 2t] = \frac{2}{s^2+2^2}$$
Using frequency differentiation,

$$\mathcal{L}[t^2\sin 2t] = (-1)^2\frac{d^2}{ds^2}\left(\frac{2}{s^2+4}\right) = \frac{d}{ds}\left[\frac{-4s}{(s^2+4)^2}\right] = \frac{12s^2-16}{(s^2+4)^3}$$

Practice Problem 15.4 Find the LT of 𝑓(𝑡) = 𝑡² cos 3𝑡 𝑢(𝑡)

Solution: Let $f_1(t) = \cos 3t\,u(t) \implies F_1(s) = \mathcal{L}[\cos 3t\,u(t)] = \frac{s}{s^2+3^2}$

Then
$$\mathcal{L}[t^2\cos 3t\,u(t)] = \mathcal{L}[t^2 f_1(t)] = (-1)^2\frac{d^2 F_1(s)}{ds^2}$$

$$\frac{dF_1(s)}{ds} = \frac{1}{s^2+3^2} - \frac{2s^2}{(s^2+3^2)^2} = \frac{(s^2+3^2)-2s^2}{(s^2+3^2)^2} = \frac{-s^2+3^2}{(s^2+3^2)^2}$$

$$\frac{d^2 F_1(s)}{ds^2} = \frac{-2s}{(s^2+3^2)^2} + (-s^2+3^2)\frac{d}{ds}\left[\frac{1}{(s^2+3^2)^2}\right] = \frac{-2s}{(s^2+3^2)^2} + (-s^2+3^2)\left(-\frac{4s}{(s^2+3^2)^3}\right)$$

$$\implies \frac{d^2 F_1(s)}{ds^2} = \frac{-2s(s^2+3^2) - 4s(-s^2+3^2)}{(s^2+3^2)^3} = \frac{2s(s^2-27)}{(s^2+3^2)^3}$$
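The final answer can be confirmed symbolically, e.g. with sympy (a quick sanity check of the derivative bookkeeping above):

```python
from sympy import symbols, laplace_transform, cos, simplify

t = symbols('t', positive=True)
s = symbols('s', positive=True)

# Practice 15.4: L[t^2 cos(3t) u(t)] = 2s(s^2 - 27)/(s^2 + 9)^3
F = laplace_transform(t**2*cos(3*t), t, s, noconds=True)
assert simplify(F - 2*s*(s**2 - 27)/(s**2 + 9)**3) == 0
```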

Example 15.5 Find the LT of the following function.

Solution:
$$g(t) = 10[u(t-2) - u(t-3)]$$
Given that we know the LT of 𝑢(𝑡),
$$G(s) = 10\left(\frac{e^{-2s}}{s} - \frac{e^{-3s}}{s}\right) = \frac{10}{s}\left(e^{-2s} - e^{-3s}\right)$$

Practice Problem 15.5 Find the LT of the function ℎ(𝑡) in Figure below.

$$H(s) = \mathcal{L}[h(t)] = \int_{0^-}^{4} 20\,e^{-st}\,dt + \int_{4}^{8} 10\,e^{-st}\,dt$$

$$= 20\left(-\frac{1}{s}e^{-st}\Big|_0^4\right) + 10\left(-\frac{1}{s}e^{-st}\Big|_4^8\right)$$

$$= \frac{20}{s}\left(1-e^{-4s}\right) + \frac{10}{s}\left(e^{-4s}-e^{-8s}\right) = \frac{10}{s}\left(2 - e^{-4s} - e^{-8s}\right)$$

Alternative method (preferred):
$$h(t) = 20[u(t)-u(t-4)] + 10[u(t-4)-u(t-8)] = 10[2u(t) - u(t-4) - u(t-8)]$$
$$\mathcal{L}[h(t)] = 10\left[\frac{2}{s} - \frac{e^{-4s}}{s} - \frac{e^{-8s}}{s}\right] = \frac{10}{s}\left[2 - e^{-4s} - e^{-8s}\right]$$
Practice Problem 15.6 Find the LT of the function periodic function ℎ(𝑡) given below

Practice Problem 15.7 Find the initial and final values of


15.4 The Inverse Laplace Transform
Formally, the inverse LT is given by,

$$\mathcal{L}^{-1}[F(s)] = f(t) = \frac{1}{2\pi j}\int_{\sigma_1-j\infty}^{\sigma_1+j\infty} F(s)\,e^{st}\,ds$$

Evaluating the inverse Laplace transform with this definition requires complex analysis, which is beyond the scope of this course.

In practice, the inverse LT is usually obtained from a lookup table; in other words, we obtain the inverse LT by using the LTs of known functions.
The LT of a function 𝑓(𝑡) has the following general form,
$$F(s) = \frac{N(s)}{D(s)}$$
where 𝑁(𝑠) is the numerator polynomial and 𝐷(𝑠) is the denominator polynomial.

• The roots of 𝑁(𝑠) = 0 are the zeros of the 𝐹(𝑠)


• The roots of 𝐷(𝑠) = 0 are the poles of the 𝐹(𝑠)

We will use partial fraction expansion to break 𝐹(𝑠) down into simple terms whose inverse LTs can be obtained from a lookup table listing the LTs of known functions.

So, we obtain the inverse LT with the following simple steps.


1. Decompose 𝐹(𝑠) into simple terms using partial fraction expansion.

2. Find the inverse LT of each term by matching entries in Lookup Table.

Simple Poles:
In the simple-poles case, all poles (roots of 𝐷(𝑠) = 0) are distinct and we can express 𝐷(𝑠) as a product of factors:
$$F(s) = \frac{N(s)}{(s+p_1)(s+p_2)\cdots(s+p_n)}$$
Here 𝑠 = −𝑝₁, −𝑝₂, …, −𝑝ₙ are the simple roots, and 𝑝ᵢ ≠ 𝑝ⱼ for all 𝑖 ≠ 𝑗.
Partial Fraction Expansion:
If the degree of 𝑁(𝑠) is smaller than the degree of 𝐷(𝑠), we use partial fraction expansion to decompose 𝐹(𝑠) as
$$F(s) = \frac{k_1}{s+p_1} + \frac{k_2}{s+p_2} + \cdots + \frac{k_n}{s+p_n}$$
Example:
$$F(s) = \frac{2}{(s+3)(s+5)} = \frac{k_1}{s+3} + \frac{k_2}{s+5}$$

$$\implies \frac{k_1}{s+3} + \frac{k_2}{s+5} = \frac{k_1(s+5) + k_2(s+3)}{(s+3)(s+5)} = \frac{(k_1+k_2)s + 5k_1 + 3k_2}{(s+3)(s+5)}$$

The numerator $(k_1+k_2)s + 5k_1 + 3k_2$ must equal that of 𝐹(𝑠). Thus,
$$\left.\begin{aligned} k_1 + k_2 &= 0 \\ 5k_1 + 3k_2 &= 2 \end{aligned}\right\} \implies \begin{aligned} k_1 &= 1 \\ k_2 &= -1 \end{aligned} \implies F(s) = \frac{1}{s+3} - \frac{1}{s+5}$$
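This expansion can be reproduced with sympy's `apart`, which performs partial fraction decomposition (a quick check of the coefficients found above):

```python
from sympy import symbols, apart, together, simplify

s = symbols('s')

F = 2/((s + 3)*(s + 5))
expanded = apart(F, s)
# residues: k1 = 1 at s = -3, k2 = -1 at s = -5
assert simplify(expanded - (1/(s + 3) - 1/(s + 5))) == 0
assert simplify(together(expanded) - F) == 0
```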

There are many ways of finding the expansion coefficients. One way is using the residue
method.
Residue Method (also known as Heaviside):
The expansion coefficients 𝑘₁, 𝑘₂, …, 𝑘ₙ are known as the residues of 𝐹(𝑠). If we multiply both sides of the expansion by (𝑠 + 𝑝₁), we obtain
$$(s+p_1)F(s) = k_1 + \frac{(s+p_1)k_2}{s+p_2} + \cdots + \frac{(s+p_1)k_n}{s+p_n}$$
Since 𝑝ᵢ ≠ 𝑝ⱼ, setting 𝑠 = −𝑝₁ leaves only 𝑘₁ on the right-hand side. Hence, 𝑘₁ is obtained as
$$k_1 = (s+p_1)F(s)\big|_{s=-p_1}$$
Using the same method, we can obtain any residue as
$$k_i = (s+p_i)F(s)\big|_{s=-p_i}$$
Once we have obtained the 𝑘ᵢ values, since $\mathcal{L}^{-1}\left[\frac{k_i}{s+p_i}\right] = k_i e^{-p_i t}u(t)$, we can write the inverse LT 𝑓(𝑡) as
$$f(t) = \left(k_1 e^{-p_1 t} + k_2 e^{-p_2 t} + \cdots + k_n e^{-p_n t}\right)u(t)$$
What if 𝑁(𝑠) has a larger degree than 𝐷(𝑠)?

If the degree of 𝑁(𝑠) is larger than the degree of 𝐷(𝑠), then we divide 𝑁(𝑠) by 𝐷(𝑠) and obtain
$$\frac{N(s)}{D(s)} = F_1(s) + \frac{N_2(s)}{D(s)}$$

where 𝐹1 (𝑠) is a polynomial in 𝑠 and the degree of 𝑁2 (𝑠) is smaller than the degree of 𝐷(𝑠).
Hence, we will apply partial fraction expansion to 𝑁2 (𝑠)/𝐷(𝑠).

Repeated Poles:
Suppose 𝐹(𝑠) has 𝑛 repeated poles at 𝑠 = −𝑝. Then we can express 𝐹(𝑠) as
$$F(s) = \frac{k_n}{(s+p)^n} + \frac{k_{n-1}}{(s+p)^{n-1}} + \cdots + \frac{k_2}{(s+p)^2} + \frac{k_1}{s+p} + F_1(s)$$
where 𝐹1 (𝑠) is the remaining part of 𝐹(𝑠) which does not have any pole at 𝑠 = −𝑝. We
can determine the expansion coefficient 𝑘𝑛 as

𝑘𝑛 = (𝑠 + 𝑝)𝑛 𝐹(𝑠)|𝑠=−𝑝
To determine 𝑘ₙ₋₁, we multiply each term in the above expression of 𝐹(𝑠) by (𝑠 + 𝑝)ⁿ and differentiate to get rid of 𝑘ₙ; then we evaluate the result at 𝑠 = −𝑝 to eliminate all coefficients except 𝑘ₙ₋₁. Thus, we obtain,
$$k_{n-1} = \frac{d}{ds}\left[(s+p)^n F(s)\right]\Big|_{s=-p}$$
Repeating this gives
$$k_{n-2} = \frac{1}{2!}\frac{d^2}{ds^2}\left[(s+p)^n F(s)\right]\Big|_{s=-p}$$
Combining all, we can write the general formula as
$$k_{n-i} = \frac{1}{i!}\frac{d^i}{ds^i}\left[(s+p)^n F(s)\right]\Big|_{s=-p}, \qquad i = 0, 1, \ldots, n-1$$

where
$$\frac{d^0}{ds^0}\left[(s+p)^n F(s)\right] = (s+p)^n F(s)$$
Once we obtain the values of 𝑘1 , 𝑘2 , … , 𝑘𝑛 by partial fraction expansion, we may apply
inverse transform to each term.

$$\mathcal{L}^{-1}\left[\frac{1}{(s+a)^n}\right] = \frac{t^{n-1}e^{-at}}{(n-1)!}\,u(t)$$

and obtain,
$$f(t) = \left(k_1 e^{-pt} + k_2 t e^{-pt} + \frac{k_3}{2!}t^2 e^{-pt} + \cdots + \frac{k_n}{(n-1)!}t^{n-1}e^{-pt}\right)u(t) + f_1(t)$$

Complex Poles:
Method 1 (First order)
The complex poles can be handled with the same simple Heaviside (cover-up) method. Since 𝑁(𝑠) and 𝐷(𝑠) in 𝐹(𝑠) = 𝑁(𝑠)/𝐷(𝑠) have real coefficients, the complex poles of 𝐷(𝑠) come in conjugate pairs, and we can express 𝐹(𝑠) as
$$F(s) = \frac{k_1}{s+p} + \frac{k_2}{s+p^*} + F_1(s)$$
where 𝑝 is the complex pole and 𝑝∗ is its complex conjugate. 𝐹1 (𝑠) is the remaining part of
𝐹(𝑠) that does not have pairs of complex poles at 𝑠 = −𝑝 and 𝑠 = −𝑝∗ . Then we can obtain

$$k_1 = (s+p)F(s)\big|_{s=-p} \qquad\text{and}\qquad k_2 = (s+p^*)F(s)\big|_{s=-p^*} = k_1^*$$

This method produces complex values for 𝑘₁ and 𝑘₂; an alternative is the second-order, or completing-the-square, method.

Method 2 (Second order, or completing the square)


$$F(s) = \frac{k_1}{s+p} + \frac{k_2}{s+p^*} + F_1(s) = \frac{A_1 s + A_2}{(s+p)(s+p^*)} + F_1(s) = \frac{A_1 s + A_2}{s^2 + as + b} + F_1(s)$$

We choose the form 𝐴₁𝑠 + 𝐴₂ because we know the denominator is second order, and the numerator must have a smaller order; hence it is taken as first order.
If we write the denominator in alternative way by complete the square as
𝑠 2 + 𝑎𝑠 + 𝑏 = 𝑠 2 + 2𝛼𝑠 + 𝛼 2 + 𝛽 2 = (𝑠 + 𝛼)2 + 𝛽 2
We can also express the numerator in terms of new variables 𝛼 and 𝛽 as,
𝐴1 𝑠 + 𝐴2 = 𝐴1 (𝑠 + 𝛼) + 𝐵1 𝛽
Thus, we can express 𝐹(𝑠) as
𝐴1 (𝑠 + 𝛼) 𝐵1 𝛽
𝐹(𝑠) = + + 𝐹1 (𝑠)
(𝑠 + 𝛼)2 + 𝛽 2 (𝑠 + 𝛼)2 + 𝛽 2
Then using the lookup table, we can write inverse LT as

𝑓(𝑡) = (𝐴1 𝑒 −𝛼𝑡 cos 𝛽𝑡 + 𝐵1 𝑒 −𝛼𝑡 sin 𝛽𝑡) 𝑢(𝑡) + 𝑓1 (𝑡)
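The completing-the-square result can be sanity-checked on a concrete pair of complex poles; a sketch with sympy, using the illustrative function 𝐹(𝑠) = (𝑠 + 1)/(𝑠² + 2𝑠 + 5) (so α = 1, β = 2, 𝐴₁ = 1, 𝐵₁ = 0):

```python
from sympy import symbols, inverse_laplace_transform, exp, cos, simplify

t = symbols('t', positive=True)
s = symbols('s')

# F(s) = (s + 1)/((s + 1)^2 + 2^2): expect f(t) = e^{-t} cos(2t) for t > 0
F = (s + 1)/(s**2 + 2*s + 5)
f = inverse_laplace_transform(F, s, t)
assert simplify(f - exp(-t)*cos(2*t)) == 0
```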

The Method of Algebra for partial fraction coefficients

Whether the poles are simple (distinct), repeated, or complex conjugates, there is a method which allows us to find all the coefficients at once. It is called the method of algebra.
Example:
Start with the partial fraction expansion,
$$F(s) = \frac{s+3}{s(s+2)^2(s+5)} = \frac{A_1}{s} + \frac{A_2}{s+2} + \frac{A_3}{(s+2)^2} + \frac{A_4}{s+5}$$

Multiply both sides by the denominator of the left side to clear the fractions:

$$s(s+2)^2(s+5)\left[\frac{s+3}{s(s+2)^2(s+5)}\right] = s(s+2)^2(s+5)\left[\frac{A_1}{s} + \frac{A_2}{s+2} + \frac{A_3}{(s+2)^2} + \frac{A_4}{s+5}\right]$$

$$s+3 = (s+2)^2(s+5)A_1 + s(s+2)(s+5)A_2 + s(s+5)A_3 + s(s+2)^2 A_4$$

Now expand the right-hand side as a polynomial in 𝑠:

$$s+3 = (A_1+A_2+A_4)s^3 + (9A_1+7A_2+A_3+4A_4)s^2 + (24A_1+10A_2+5A_3+4A_4)s + 20A_1$$
If two polynomials are equal, then their corresponding coefficients must be equal. Therefore, we have

$$\begin{aligned} A_1 + A_2 + A_4 &= 0 \\ 9A_1 + 7A_2 + A_3 + 4A_4 &= 0 \\ 24A_1 + 10A_2 + 5A_3 + 4A_4 &= 1 \\ 20A_1 &= 3 \end{aligned}$$

We can put this into matrix form as

$$\begin{bmatrix} 1 & 1 & 0 & 1 \\ 9 & 7 & 1 & 4 \\ 24 & 10 & 5 & 4 \\ 20 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} A_1 \\ A_2 \\ A_3 \\ A_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 3 \end{bmatrix} \implies \begin{bmatrix} A_1 \\ A_2 \\ A_3 \\ A_4 \end{bmatrix} = \begin{bmatrix} 0.1500 \\ -0.1944 \\ -0.1667 \\ 0.0444 \end{bmatrix}$$
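The linear system above can be solved numerically, e.g. with numpy (a sketch reproducing the coefficient values):

```python
import numpy as np

# Coefficient-matching equations from the example above, in matrix form
A = np.array([
    [1.0,  1.0,  0.0, 1.0],  # s^3 coefficients
    [9.0,  7.0,  1.0, 4.0],  # s^2
    [24.0, 10.0, 5.0, 4.0],  # s^1
    [20.0, 0.0,  0.0, 0.0],  # s^0
])
b = np.array([0.0, 0.0, 1.0, 3.0])
coeffs = np.linalg.solve(A, b)
# A1 = 3/20, A2 = -7/36, A3 = -1/6, A4 = 2/45
assert np.allclose(coeffs, [3/20, -7/36, -1/6, 2/45])
```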

Exponential term in 𝐹(𝑠):


Example
$$F(s) = \frac{(s+3)e^{-as}}{s^3 + 7s^2 + 10s} = \frac{(s+3)e^{-as}}{s(s+2)(s+5)}$$
We can express 𝐹(𝑠) as,
$$F(s) = \left[\frac{A_1}{s} + \frac{A_2}{s+2} + \frac{A_3}{s+5}\right]e^{-as}$$
The coefficients 𝐴₁, 𝐴₂ and 𝐴₃ are found in the same way as before, and we can write,
$$F(s) = \left[\frac{0.3}{s} - \frac{1}{6}\,\frac{1}{s+2} - \frac{2}{15}\,\frac{1}{s+5}\right]e^{-as} = 0.3\,\frac{e^{-as}}{s} - \frac{1}{6}\,\frac{e^{-as}}{s+2} - \frac{2}{15}\,\frac{e^{-as}}{s+5}$$
Thus,
$$f(t) = 0.3\,u(t-a) - \frac{1}{6}e^{-2(t-a)}u(t-a) - \frac{2}{15}e^{-5(t-a)}u(t-a)$$
6 15
Practice Problem 15.8
Find the inverse LT of $F(s) = 5 + \frac{6}{s+4} - \frac{7s}{s^2+25}$

$$f(t) = \mathcal{L}^{-1}[5] + \mathcal{L}^{-1}\left[\frac{6}{s+4}\right] - \mathcal{L}^{-1}\left[\frac{7s}{s^2+25}\right] = 5\delta(t) + \left[6e^{-4t} - 7\cos 5t\right]u(t)$$

Example 15.9 Find 𝑓(𝑡), given
$$F(s) = \frac{s^2+12}{s(s+2)(s+3)}$$
Solution: We need to find the partial fraction expansion of 𝐹(𝑠). Since there are 3 simple poles, we let,
$$\frac{s^2+12}{s(s+2)(s+3)} = \frac{A}{s} + \frac{B}{s+2} + \frac{C}{s+3}$$
where 𝐴, 𝐵 and 𝐶 are constants to be determined. We can find these constants using two
approaches.
Thus, 𝐴 = 2, 𝐵 = −8, 𝐶 = 7, and we have
$$F(s) = \frac{2}{s} - \frac{8}{s+2} + \frac{7}{s+3}$$
By finding the inverse LT of each term, we obtain
$$f(t) = \left(2 - 8e^{-2t} + 7e^{-3t}\right)u(t)$$
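The whole example can be replayed symbolically, e.g. with sympy's `inverse_laplace_transform` (a quick end-to-end check):

```python
from sympy import symbols, inverse_laplace_transform, exp, simplify

t = symbols('t', positive=True)
s = symbols('s')

F = (s**2 + 12)/(s*(s + 2)*(s + 3))
f = inverse_laplace_transform(F, s, t)
# expected: (2 - 8 e^{-2t} + 7 e^{-3t}) u(t); u(t) = 1 for t > 0
assert simplify(f - (2 - 8*exp(-2*t) + 7*exp(-3*t))) == 0
```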

Practice Problem 15.9 Find 𝑓(𝑡) if,
$$F(s) = \frac{6(s+2)}{(s+1)(s+3)(s+4)}$$

Thus,
$$F(s) = \frac{1}{s+1} + \frac{3}{s+3} - \frac{4}{s+4}$$

and
$$f(t) = \left(e^{-t} + 3e^{-3t} - 4e^{-4t}\right)u(t)$$
Practice Problem 15.10 Obtain 𝑔(𝑡) if,
Example 15.11
Practice Problem 15.11
Find 𝑔(𝑡) given that,
Example 15.15 Use the Laplace transform to solve the differential equation,

Solution We take the Laplace transform of each term in the given differential equation,
and obtain,

or

Hence,

Hence,

Taking the inverse LT of each term we will have,


15.5 The Convolution Integral
The term convolution means “folding.” Convolution is an invaluable tool to the engineer
because it provides a means of viewing and characterizing physical systems.
For example, it is used to find the response 𝑦(𝑡) of a system to an excitation 𝑥(𝑡), knowing the system impulse response ℎ(𝑡). This is achieved through the convolution integral, defined as
$$y(t) = \int_{-\infty}^{\infty} x(\lambda)\,h(t-\lambda)\,d\lambda$$
or simply
$$y(t) = x(t)*h(t)$$
where λ is a dummy variable and the asterisk ‘*’ denotes convolution. This equation states that the output of a system is obtained by convolving the input 𝑥(𝑡) with the unit impulse response of the system ℎ(𝑡). The convolution is commutative: 𝑥(𝑡) ∗ ℎ(𝑡) = ℎ(𝑡) ∗ 𝑥(𝑡).

If 𝑥(𝑡) = 0 for 𝑡 < 0, the convolution integral simplifies to
$$y(t) = \int_{0}^{\infty} x(\lambda)\,h(t-\lambda)\,d\lambda$$
Similarly, if ℎ(𝑡) = 0 for 𝑡 < 0, using commutativity, we again have
$$y(t) = \int_{-\infty}^{\infty} h(\lambda)\,x(t-\lambda)\,d\lambda = \int_{0}^{\infty} h(\lambda)\,x(t-\lambda)\,d\lambda$$

If both 𝑥(𝑡) = 0 and ℎ(𝑡) = 0 for 𝑡 < 0, then since ℎ(𝑡 − λ) = 0 for 𝑡 − λ < 0, i.e., λ > 𝑡, we have
$$y(t) = \int_{0}^{t} x(\lambda)\,h(t-\lambda)\,d\lambda$$
Properties of the Convolution:

The Laplace Transform of the Convolution integral


Given two functions 𝑓₁(𝑡) and 𝑓₂(𝑡) with Laplace transforms 𝐹₁(𝑠) and 𝐹₂(𝑠), respectively, their convolution is
$$f(t) = f_1(t)*f_2(t)$$
Taking the Laplace transform gives,
$$F(s) = \mathcal{L}[f_1(t)*f_2(t)] = F_1(s)\,F_2(s)$$

Proof: 𝐹1 (𝑠) is defined as

If we multiply this by 𝐹₂(𝑠), we obtain

Using time shift property of the LT, we have


If we substitute this result in 𝐹1 (𝑠)𝐹2 (𝑠)

Interchanging the order of integrations gives

Where the integral in brackets ranges from 0 to 𝑡 because 𝑢(𝑡 − λ) = 1 for λ ≤ 𝑡 and
𝑢(𝑡 − λ) = 0 for λ > 𝑡 . The term in brackets is simply the convolution of 𝑓1 (𝑡) and 𝑓2 (𝑡).
Then we have,

That means convolution in the time domain corresponds to multiplication in the s-domain.


Example:
𝑥(𝑡) = 4𝑒 −𝑡 and ℎ(𝑡) = 5𝑒 −2𝑡
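For this pair, the convolution can be worked out directly from the 0-to-𝑡 form of the integral; a sketch with sympy (it evaluates the closed form 𝑦(𝑡) = 20(𝑒^(−𝑡) − 𝑒^(−2𝑡)) for 𝑡 ≥ 0, which also follows from the s-domain product 20/((𝑠+1)(𝑠+2))):

```python
from sympy import symbols, integrate, exp, simplify

t, lam = symbols('t lambda', positive=True)

# y(t) = integral_0^t x(lam) h(t - lam) dlam, with x(t) = 4e^{-t}, h(t) = 5e^{-2t}
y = integrate(4*exp(-lam) * 5*exp(-2*(t - lam)), (lam, 0, t))
assert simplify(y - 20*(exp(-t) - exp(-2*t))) == 0
```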
Example 15.12 Find the convolution of the following two signals,

First fold the signal to get 𝑥₁(−λ), and then shift by 𝑡 to obtain 𝑥₁(𝑡 − λ), as below.

Then for each range of values of 𝑡 we will compute the convolution 𝑦(𝑡) = 𝑥1 (𝑡) ∗ 𝑥2 (𝑡)

For 0 < 𝑡 < 1 since there is no overlap


𝑦(𝑡) = 𝑥1 (𝑡) ∗ 𝑥2 (𝑡) = 0

For 1 < 𝑡 < 2

For 2 < 𝑡 < 3

For 3 < 𝑡 < 4


For 𝑡 > 4 there is no overlap between the signals
𝑦(𝑡) = 0

Combining all the cases we obtain

Practice Problem 15.12


Graphically convolve the following functions. Verify your result by performing the
equivalent operation in s-domain.
Method 1:

Let's first mirror and shift 𝑥₁(𝑡):


a) For 𝑡 < 0 there is no overlap between 𝑥1 (𝑡 − λ)and 𝑥2 (𝑡), hence
𝑦(𝑡) = 𝑥1 (𝑡) ∗ 𝑥2 (𝑡) = 0

b) For 0 < 𝑡 ≤ 1

𝑦(𝑡) = 𝑥1 (𝑡) ∗ 𝑥2 (𝑡) = 1 × 𝑡 = 𝑡

c) For 1 < 𝑡 ≤ 2

𝑦(𝑡) = 𝑥1 (𝑡) ∗ 𝑥2 (𝑡)


= 1 × (1 − (𝑡 − 1)) + 2 × (𝑡 − 1) = 𝑡

d) For 2 < 𝑡 ≤ 3

𝑦(𝑡) = 𝑥1 (𝑡) ∗ 𝑥2 (𝑡) = 2 × (2 − (𝑡 − 1))


= −2𝑡 + 6

e) For 𝑡 > 3 there is no overlap between 𝑥1 (𝑡 − λ)and 𝑥2 (𝑡), hence


𝑦(𝑡) = 𝑥1 (𝑡) ∗ 𝑥2 (𝑡) = 0
Combining all the cases, we have

Method 2: Laplace

We can express 𝑥₁(𝑡) and 𝑥₂(𝑡) in the following way:

$$x_1(t) = u(t) - u(t-1) \qquad\text{and}\qquad x_2(t) = u(t) + u(t-1) - 2u(t-2)$$

Then we have the corresponding Laplace transforms,

$$X_1(s) = \frac{1}{s} - \frac{1}{s}e^{-s}, \qquad X_2(s) = \frac{1}{s} + \frac{1}{s}e^{-s} - \frac{2}{s}e^{-2s}$$

Thus,
$$Y(s) = X_1(s)X_2(s) = \frac{1}{s}\left(1-e^{-s}\right)\frac{1}{s}\left(1+e^{-s}-2e^{-2s}\right) = \frac{1}{s^2}\left(1 - 3e^{-2s} + 2e^{-3s}\right)$$

Writing the three terms separately,
$$Y(s) = \frac{1}{s^2} - 3\,\frac{e^{-2s}}{s^2} + 2\,\frac{e^{-3s}}{s^2}$$

Combining the parts, we get

$$y(t) = t\,u(t) - 3(t-2)u(t-2) + 2(t-3)u(t-3)$$

Analyzing this function, we see that

• For 𝑡 < 0, 𝑦(𝑡) = 0, because every term starts at 𝑡 = 0 or later.
• For 0 < 𝑡 ≤ 2 we have a ramp with slope 1.
• For 2 < 𝑡 ≤ 3 a new ramp with slope −3 starts, and the net slope is 1 − 3 = −2.
• For 𝑡 > 3 a new ramp with slope +2 starts, and the net slope is 1 − 3 + 2 = 0.

Therefore, we get the same result as we obtained with method 1.
