
B.Sc./B.A.

Second Year
Mathematics, Paper - III

DIFFERENTIAL EQUATIONS

MADHYA PRADESH BHOJ (OPEN) UNIVERSITY - BHOPAL


Reviewer Committee
1. Dr (Prof) Piyush Bhatnagar, Professor, Govt MLB College, Bhopal
2. Dr (Prof) Anil Rajput, Professor, Govt C.S. Azad (PG) College, Sehore
3. Dr Rajkumar Bhimtae, Assistant Professor, Govt College Vidisha, MP

Advisory Committee
1. Dr Jayant Sonwalkar, Hon'ble Vice Chancellor, Madhya Pradesh Bhoj (Open) University, Bhopal
2. Dr H.S. Tripathi, Registrar, Madhya Pradesh Bhoj (Open) University, Bhopal
3. Dr Neelam Wasnik, Dy Director Printing, Madhya Pradesh Bhoj (Open) University, Bhopal
4. Dr (Prof) Piyush Bhatnagar, Professor, Govt MLB College, Bhopal
5. Dr (Prof) Anil Rajput, Professor, Govt C.S. Azad (PG) College, Sehore
6. Dr (Prof) Rajkumar Bhimtae, Assistant Professor, Govt College Vidisha, MP

COURSE WRITERS

Dr Bikas Chandra Bhui, Head, Mathematics Department, Meghnad Saha Institute of Technology, Kolkata
Dr Dipak Chatterjee, Distinguished Professor, St Xavier's College, Kolkata
Units (1-5)

Copyright © Reserved, Madhya Pradesh Bhoj (Open) University, Bhopal

All rights reserved. No part of this publication which is material protected by this copyright notice
may be reproduced or transmitted or utilized or stored in any form or by any means now known or
hereinafter invented, electronic, digital or mechanical, including photocopying, scanning, recording
or by any information storage or retrieval system, without prior written permission from the Registrar,
Madhya Pradesh Bhoj (Open) University, Bhopal.

Information contained in this book has been published by VIKAS® Publishing House Pvt. Ltd. and has
been obtained by its Authors from sources believed to be reliable and correct to the best of their
knowledge. However, the Madhya Pradesh Bhoj (Open) University, Bhopal, the Publisher and its Authors
shall in no event be liable for any errors, omissions or damages arising out of use of this information
and specifically disclaim any implied warranties of merchantability or fitness for any particular use.

Published by Registrar, MP Bhoj (Open) University, Bhopal in 2020

Vikas® is the registered trademark of Vikas® Publishing House Pvt. Ltd.


VIKAS® PUBLISHING HOUSE PVT. LTD.
E-28, Sector-8, Noida - 201301 (UP)
Phone: 0120-4078900  Fax: 0120-4078999
Regd. Office: A-27, 2nd Floor, Mohan Co-operative Industrial Estate, New Delhi 110044
 Website: www.vikaspublishing.com  Email: [email protected]
SYLLABI-BOOK MAPPING TABLE
Differential Equations

UNIT-1: Series Solutions of Differential Equations
Series Solutions of Differential Equations, Power Series Method, Bessel and
Legendre Equations, Bessel's and Legendre's Functions and Their Properties,
Recurrence and Generating Function, Orthogonality of Functions.
Mapping in Book: Unit-1: Series Solutions of Differential Equations (Pages 3-44)

UNIT-2: Laplace Transformation
Laplace Transformation, Linearity of the Laplace Transformation, Existence
Theorem for Laplace Transforms, Laplace Transforms of Derivatives and
Integrals, Shifting Theorems, Differentiation and Integration of Transforms.
Mapping in Book: Unit-2: Laplace Transformation (Pages 45-72)

UNIT-3: Laplace Transforms: Inverse and Solving Differential Equations
Inverse Laplace Transforms, Convolution Theorem, Application of Laplace
Transformation in Solving Linear Differential Equations with Constant
Coefficients.
Mapping in Book: Unit-3: Laplace Transforms: Inverse and Solving Differential Equations (Pages 73-88)

UNIT-4: Partial Differential Equations of the First Order
Partial Differential Equations of the First Order, Lagrange's Solution, Some
Special Types of Equations which can be Solved Easily by Methods Other Than
the General Method, Charpit's General Method.
Mapping in Book: Unit-4: Partial Differential Equations of the First Order (Pages 89-112)

UNIT-5: Partial Differential Equations of the Second and Higher Orders
Partial Differential Equations of Second and Higher Orders, Classification of
Partial Differential Equations of Second Order, Homogeneous and
Non-Homogeneous Equations with Constant Coefficients, Partial Differential
Equations Reducible to Equations with Constant Coefficients.
Mapping in Book: Unit-5: Partial Differential Equations of the Second and Higher Orders (Pages 113-152)
CONTENTS
INTRODUCTION 1-2
UNIT 1 SERIES SOLUTIONS OF DIFFERENTIAL EQUATIONS 3-44
1.0 Introduction
1.1 Objectives
1.2 Power Series Method
1.2.1 Convergence—Interval and Radius
1.2.2 Operations on Power Series
1.2.3 Existence of Power Series Solutions and Real Analytic Functions
1.3 Bessel, Legendre and Hypergeometric Equations
1.3.1 Bessel Equations
1.3.2 Legendre’s Equation
1.3.3 Hypergeometric Equation
1.3.4 Regular Singular Point
1.4 Generating Functions and Recurrence Relations
1.5 Orthogonality of Bessel Functions and Legendre Polynomials
1.6 Answers to ‘Check Your Progress’
1.7 Summary
1.8 Key Terms
1.9 Self-Assessment Questions and Exercises
1.10 Further Reading
UNIT 2 LAPLACE TRANSFORMATION 45-72
2.0 Introduction
2.1 Objectives
2.2 Laplace Transformation
2.3 Existence Theorem for Laplace Transforms
2.4 Laplace Transforms of Derivatives and Integrals
2.5 Shifting Theorems
2.6 Differentiation and Integration of Transforms
2.7 Answers to ‘Check Your Progress’
2.8 Summary
2.9 Key Terms
2.10 Self-Assessment Questions and Exercises
2.11 Further Reading
UNIT 3 LAPLACE TRANSFORMS: INVERSE AND
SOLVING DIFFERENTIAL EQUATIONS 73-88
3.0 Introduction
3.1 Objectives
3.2 Inverse Laplace Transforms
3.3 Convolution Theorem
3.4 Application of Laplace Transformation in Solving Linear Differential Equations with Constant
Coefficients
3.5 Answers to ‘Check Your Progress’
3.6 Summary
3.7 Key Terms
3.8 Self-Assessment Questions and Exercises
3.9 Further Reading
UNIT 4 PARTIAL DIFFERENTIAL EQUATIONS OF THE FIRST ORDER 89-112
4.0 Introduction
4.1 Objectives
4.2 Partial Differential Equations of the First Order Lagrange’s Solution
4.3 Solution of Some Special Types of Equations
4.4 Charpit's General Method
4.5 Answers to ‘Check Your Progress’
4.6 Summary
4.7 Key Terms
4.8 Self-Assessment Questions and Exercises
4.9 Further Reading
UNIT 5 PARTIAL DIFFERENTIAL EQUATIONS OF
THE SECOND AND HIGHER ORDERS 113-152
5.0 Introduction
5.1 Objectives
5.2 Partial Differential Equations of Second and Higher Orders
5.3 Classification of Partial Differential Equations of Second Order
5.4 Homogeneous and Non-Homogeneous Equations with Constant Coefficients
5.5 Partial Differential Equations Reducible to Equations with Constant Coefficients
5.6 Answers to ‘Check Your Progress’
5.7 Summary
5.8 Key Terms
5.9 Self-Assessment Questions and Exercises
5.10 Further Reading
INTRODUCTION
In mathematics, a differential equation is an equation that relates one or more
functions and their derivatives. In applications, the functions generally represent
physical quantities, the derivatives represent their rates of change, and the differential
equation defines a relationship between the two. Because such relations are
extremely common, differential equations play a prominent role in
many disciplines including engineering, physics, economics, and biology. The study
of differential equations consists mainly of the study of their solutions (the set of
functions that satisfy the equation), and of the properties of their solutions. Only
the simplest differential equations are solvable by explicit formulas; however, many
properties of solutions of a given differential equation may be determined without
computing them exactly.
If a closed-form expression for the solutions is not available, the solutions
may be numerically approximated using computers. The theory of dynamical
systems puts emphasis on qualitative analysis of systems described by differential
equations, while many numerical methods have been developed to determine
solutions with a given degree of accuracy. Differential equations, thus, allow us to
model changing patterns in both physical and mathematical problems.
Fundamentally, a differential equation is a mathematical equation for an
unknown function of one or several variables that relates the values of the function
itself and its derivatives of various orders. Differential equations first came into
existence with the invention of calculus by Newton and Leibniz. The Euler–Lagrange
equation was developed in the 1750s by Euler and Lagrange in connection with
their studies of the tautochrone problem. This is the problem of determining a
curve on which a weighted particle will fall to a fixed point in a fixed amount of
time, independent of the starting point. Lagrange solved this problem in 1755 and
sent the solution to Euler. Both further developed Lagrange’s method and applied
it to mechanics, which led to the formulation of Lagrangian mechanics.
Differential equations can be divided into several types. Apart from describing
the properties of the equation itself, these classes of differential equations can help
inform the choice of approach to a solution. Commonly used distinctions include
whether the equation is: Ordinary/Partial, Linear/Non-linear, and Homogeneous/
Heterogeneous.
This book, Differential Equations, is designed to be a comprehensive and
easily accessible book covering the basic concepts of differential equations. It will
help readers to understand the basics of series solutions of differential equations,
power series method, Bessel and Legendre equations, existence theorem for
Laplace transforms, Laplace transforms of derivatives and integrals, shifting
theorems, differentiation and integration of transforms, inverse Laplace transforms,
convolution theorem, partial differential equations of the first order, Charpit’s general
method, partial differential equations of second and higher orders, homogeneous
and non-homogeneous equations with constant coefficients, partial differential
equations reducible to equations with constant coefficients. The book is divided
Self - Learning
Material 1
into five units that follow the Self-Instruction Mode (SIM) with each unit beginning
with an Introduction to the unit, followed by an outline of the Objectives. The
detailed content is then presented in a simple but structured manner interspersed
with Check Your Progress to test the student’s understanding of the topic. A
Summary along with a list of Key Terms and a set of Self-Assessment Questions
and Exercises is also provided at the end of each unit for understanding, revision
and recapitulation. The topics are logically organized and explained with related
theorems and examples, analysis and formulations to provide a background for
logical thinking and analysis with good knowledge of differential equations. The
examples have been carefully designed so that the students can gradually build up
their knowledge and understanding.

UNIT 1 SERIES SOLUTIONS OF
DIFFERENTIAL EQUATIONS
Structure
1.0 Introduction
1.1 Objectives
1.2 Power Series Method
1.2.1 Convergence—Interval and Radius
1.2.2 Operations on Power Series
1.2.3 Existence of Power Series Solutions and Real Analytic Functions
1.3 Bessel, Legendre and Hypergeometric Equations
1.3.1 Bessel Equations
1.3.2 Legendre’s Equation
1.3.3 Hypergeometric Equation
1.3.4 Regular Singular Point
1.4 Generating Functions and Recurrence Relations
1.5 Orthogonality of Bessel Functions and Legendre Polynomials
1.6 Answers to ‘Check Your Progress’
1.7 Summary
1.8 Key Terms
1.9 Self-Assessment Questions and Exercises
1.10 Further Reading

1.0 INTRODUCTION
In mathematics, the power series method is used to search for a power series
solution to certain differential equations. Physical problems in many fields lead to
differential equations which must be solved. Some of these can be solved by
elementary methods, but when those methods are not applicable we resort to
series solutions. The power series method gives series solutions directly only to
initial value problems; this is not a limitation for linear equations, since the method
may turn up multiple linearly independent solutions which may be combined (by
superposition) to solve boundary value problems as well. A further restriction, for
nonlinear equations, is that the series coefficients are then specified by a nonlinear
recurrence, the nonlinearities being inherited from the differential equation.
In general, such a solution assumes a power series with unknown coefficients
and then substitutes that solution into the differential equation to find a recurrence
relation for the coefficients. Legendre’s equations are very important equations of
this type. These equations and their solutions play a significant and basic role in
applied mathematics. The power series method is considered the standard
basic method for solving linear differential equations with variable coefficients.
One of the most important differential equations in applied mathematics is Bessel’s
differential equation. Various differential equations can be reduced to Bessel’s
equation.

In this unit, you will study the series solutions of differential equations, the
power series method, Bessel and Legendre equations, Bessel's and Legendre's
functions and their properties, recurrence and generating functions, and the
orthogonality of functions.

1.1 OBJECTIVES
After going through this unit, you will be able to:
• Get an overview of the power series method
• Explain the Bessel, Legendre and hypergeometric equations
• Describe Bessel's and Legendre's functions, generating functions and recurrence relations
• Understand the orthogonality of Bessel functions as well as Legendre polynomials

1.2 POWER SERIES METHOD


The power series method is used to search for a power series solution to certain
differential equations. Basically, such a solution assumes a power series with
unknown coefficients and then substitutes that series into the differential equation
to find a recurrence relation for the coefficients. The power series method can
also be applied to certain nonlinear differential equations, though with less flexibility.

If a homogeneous linear differential equation has constant coefficients, it can
be solved by algebraic methods, and its solutions are elementary functions known
from calculus ($e^x$, $\cos x$, etc.). However, if such an equation has variable
coefficients (functions of $x$), it must be solved by other methods. The standard basic
technique for solving linear differential equations with variable coefficients is the
power series method. The solution is obtained in the form of a power series,
which explains the name. These series can be used to evaluate the values of
solutions, to explore their properties, and to obtain other representations of the
solutions. The operations on power series include differentiation, addition,
multiplication, etc.
Power Series
A power series about $a$, or just power series, is any series that can be written in the
form

$$\sum_{n=0}^{\infty} c_n (x - a)^n$$

where $a$ and the $c_n$ are numbers. The $c_n$'s are often called the coefficients of
the series. The most important thing about a power series is that it is a function of
$x$. We know from calculus that a power series (in powers of
$x - x_0$) is an infinite series of the form,
$$\sum_{m=0}^{\infty} a_m (x - x_0)^m = a_0 + a_1 (x - x_0) + a_2 (x - x_0)^2 + \cdots \quad (1.1)$$

where $a_0, a_1, a_2, \ldots$ are constants, known as the coefficients of the series.
Here $x_0$ is a constant, known as the center of the series, and $x$ is a variable.

If $x_0 = 0$, we obtain a power series in powers of $x$:

$$\sum_{m=0}^{\infty} a_m x^m = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots \quad (1.2)$$

Assume here that all variables and constants are real. Well known examples
of power series are the Maclaurin series.

$$\frac{1}{1-x} = \sum_{m=0}^{\infty} x^m = 1 + x + x^2 + \cdots \quad (|x| < 1, \text{ the geometric series}),$$

$$e^x = \sum_{m=0}^{\infty} \frac{x^m}{m!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots,$$

$$\cos x = \sum_{m=0}^{\infty} \frac{(-1)^m x^{2m}}{(2m)!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots,$$

$$\sin x = \sum_{m=0}^{\infty} \frac{(-1)^m x^{2m+1}}{(2m+1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots.$$
The power series method is used for solving differential equations because
it is simple and universally used as a standard technique. We first describe
the procedure and then illustrate it using simple equations. For a given differential
equation,
y  p x  y  qx  y  0

Now represent p x  and q x  as power series in powers of x or of x  x0


if the solutions are required in powers of x  x0 . If, p x  and q x  are polynomials
then this step can be escaped. Let us assume an answer in the form of a power
series with anonymous coefficients as,

y   am x m  a0  a1 x  a2 x 2  a3 x 3  (1.3)
m0

Insert this series and the series that we have obtained by term wise
differentiation into the equation.

(a) y   mam x  a1  2a2 x  3a3 x 


m 1 2
(1.4)
m 1

(b) y   mm  1am x


m2
 2a2  3  2a3 x  4  3a4 x 2  (1.5)
m2
Now collect like powers of $x$ and equate the coefficient of
each power of $x$ to zero, beginning with the constant terms, then the terms
containing $x$, the terms containing $x^2$, etc. This provides relations from which the
unknown coefficients in the series (1.3) can be determined successively.
The procedure can be explained using some simple equations that can also be solved with
elementary methods.
Example 1.1: Solve $y' - y = 0$.
Solution: Step 1. Insert the series (1.3) and (1.4) into the equation:
$$(a_1 + 2a_2 x + 3a_3 x^2 + \cdots) - (a_0 + a_1 x + a_2 x^2 + \cdots) = 0$$
Step 2. Collect the like powers of $x$ to find
$$(a_1 - a_0) + (2a_2 - a_1)x + (3a_3 - a_2)x^2 + \cdots = 0$$
Step 3. Equating the coefficient of each power of $x$ to zero we get
$$a_1 - a_0 = 0, \quad 2a_2 - a_1 = 0, \quad 3a_3 - a_2 = 0, \ldots$$
Step 4. After solving these equations, $a_1, a_2, \ldots$ can be expressed in terms
of $a_0$, which is arbitrary:
$$a_1 = a_0, \quad a_2 = \frac{a_1}{2} = \frac{a_0}{2!}, \quad a_3 = \frac{a_2}{3} = \frac{a_0}{3!}, \ldots$$
Step 5. Using these coefficients the series (1.3) becomes
$$y = a_0 + a_0 x + \frac{a_0}{2!} x^2 + \frac{a_0}{3!} x^3 + \cdots$$
Step 6. Finally we obtain the familiar general solution
$$y = a_0 \left(1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots\right) = a_0 e^x.$$
Example 1.2: Solve $y' = 2xy$.
Solution: Following the same method, insert the series (1.3) and (1.4)
into the equation:
$$a_1 + 2a_2 x + 3a_3 x^2 + \cdots = 2x\,(a_0 + a_1 x + a_2 x^2 + \cdots)$$
Carrying out the multiplication by $2x$ on the right, the resulting equation can be written
as
$$a_1 + 2a_2 x + 3a_3 x^2 + 4a_4 x^3 + 5a_5 x^4 + 6a_6 x^5 + \cdots = 2a_0 x + 2a_1 x^2 + 2a_2 x^3 + 2a_3 x^4 + 2a_4 x^5 + \cdots$$
Comparing coefficients, we conclude that
$$a_1 = 0, \quad 2a_2 = 2a_0, \quad 3a_3 = 2a_1, \quad 4a_4 = 2a_2, \quad 5a_5 = 2a_3, \ldots$$
Therefore $a_3 = 0, \; a_5 = 0, \ldots$, and for the coefficients having even subscripts,
$$a_2 = a_0, \quad a_4 = \frac{2a_2}{4} = \frac{a_0}{2!}, \quad a_6 = \frac{2a_4}{6} = \frac{a_0}{3!}, \ldots$$
where $a_0$ remains arbitrary. Using these coefficients the series (1.3)
provides the solution in the form
$$y = a_0 \left(1 + x^2 + \frac{x^4}{2!} + \frac{x^6}{3!} + \frac{x^8}{4!} + \cdots \right) = a_0 e^{x^2}$$
The answer can be verified using the method of separating variables.
Example 1.3: Solve $y'' + y = 0$.
Solution: Insert the series (1.3) and (1.5) into the equation to obtain
$$(2a_2 + 3 \cdot 2\, a_3 x + 4 \cdot 3\, a_4 x^2 + \cdots) + (a_0 + a_1 x + a_2 x^2 + \cdots) = 0$$
By collecting the like powers of $x$ we get
$$(2a_2 + a_0) + (3 \cdot 2\, a_3 + a_1)x + (4 \cdot 3\, a_4 + a_2)x^2 + \cdots = 0$$
On equating the coefficient of each power of $x$ to zero we get
$$2a_2 + a_0 = 0 \quad (\text{coefficient of } x^0),$$
$$3 \cdot 2\, a_3 + a_1 = 0 \quad (\text{coefficient of } x^1),$$
$$4 \cdot 3\, a_4 + a_2 = 0 \quad (\text{coefficient of } x^2), \text{ etc.}$$
After solving these equations, we observe that $a_2, a_4, \ldots$ can be
expressed in terms of $a_0$, and similarly $a_3, a_5, \ldots$ can be expressed in terms of $a_1$,
where $a_0$ and $a_1$ are arbitrary:
$$a_2 = -\frac{a_0}{2!}, \quad a_3 = -\frac{a_1}{3!}, \quad a_4 = -\frac{a_2}{4 \cdot 3} = \frac{a_0}{4!}, \ldots$$
Using these coefficients the series (1.3) can be written as
$$y = a_0 + a_1 x - \frac{a_0}{2!} x^2 - \frac{a_1}{3!} x^3 + \frac{a_0}{4!} x^4 + \frac{a_1}{5!} x^5 - \cdots$$
Reordering the terms, which is permissible for a power series, we can write
$$y = a_0 \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right) + a_1 \left(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots\right)$$
in which we recognize the familiar general solution
$$y = a_0 \cos x + a_1 \sin x.$$
In other cases we may end up with new functions defined by power series. If
such equations and their solutions are of practical or theoretical importance, they
are given names and studied systematically. This is how the
Legendre, the Bessel and the Gauss hypergeometric equations were
established.
Basic Concepts
In calculus, a power series is an infinite series of the form,

$$\sum_{m=0}^{\infty} a_m (x - x_0)^m = a_0 + a_1 (x - x_0) + a_2 (x - x_0)^2 + \cdots \quad (1.6)$$

Taking the variable $x$, the center $x_0$, and the coefficients $a_0, a_1, \ldots$ to be real,
the $n$th partial sum of the series (1.6) is

$$s_n(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)^2 + \cdots + a_n (x - x_0)^n \quad (1.7)$$

where $n = 0, 1, \ldots$. If the terms of $s_n$ are omitted from (1.6), then
the remaining expression becomes

$$R_n(x) = a_{n+1} (x - x_0)^{n+1} + a_{n+2} (x - x_0)^{n+2} + \cdots \quad (1.8)$$

This expression is termed the remainder of (1.6) subsequent
to the term $a_n (x - x_0)^n$.
For example, consider the geometric series
$$1 + x + x^2 + \cdots + x^n + \cdots$$
Here we have
$$s_0 = 1, \quad R_0 = x + x^2 + x^3 + \cdots,$$
$$s_1 = 1 + x, \quad R_1 = x^2 + x^3 + x^4 + \cdots,$$
$$s_2 = 1 + x + x^2, \quad R_2 = x^3 + x^4 + x^5 + \cdots, \text{ etc.}$$
In this manner, the series (1.6) is associated with the sequence of its partial
sums $s_0(x), s_1(x), s_2(x), \ldots$. If for $x = x_1$ this sequence converges, say

$$\lim_{n \to \infty} s_n(x_1) = s(x_1),$$

then the series (1.6) is termed convergent at $x = x_1$, and the
number $s(x_1)$ is termed the value or sum of (1.6) at $x_1$. This is written as

$$s(x_1) = \sum_{m=0}^{\infty} a_m (x_1 - x_0)^m.$$

For every $n$ we have

$$s(x_1) = s_n(x_1) + R_n(x_1). \quad (1.9)$$

If the sequence of partial sums diverges at $x = x_1$, then the series (1.6) is termed
divergent at $x = x_1$.
In the case of convergence, for any positive $\varepsilon$ there is an $N$ (which depends on
$\varepsilon$) such that, by Equation (1.9),

$$|R_n(x_1)| = |s(x_1) - s_n(x_1)| < \varepsilon \quad \text{for all } n > N. \quad (1.10)$$

Mathematically, this signifies that all $s_n(x_1)$ with $n > N$ lie between
$s(x_1) - \varepsilon$ and $s(x_1) + \varepsilon$. In practice, it means that in the case of convergence the sum $s(x_1)$ of
(1.6) can be approximated at $x_1$ as accurately as we please by $s_n(x_1)$, by taking $n$
large enough.
1.2.1 Convergence—Interval and Radius
The convergence of the series may depend upon the value of x that we put into the
series. A power series may converge for some values of x and not for other values
of x.
We can consider that there is a number $R$ such that the power series will
converge for $|x - a| < R$ and will diverge for $|x - a| > R$. This number is termed
the radius of convergence for the series. Remember that the series may or may
not converge at $|x - a| = R$; whatever happens at these points does not change
the radius of convergence.
Secondly, the interval of all $x$'s, including the endpoints, for which the power
series converges is termed the interval of convergence of the series. These
two concepts are quite strongly coupled together. If we know that the radius of
convergence of a power series is $R$, then we have the following:

The interval of convergence must then contain the interval $a - R < x < a + R$,
since we know that the power series will converge for these values. We also
know that the interval of convergence cannot contain $x$'s in the ranges $x < a - R$
and $x > a + R$, since we know the power series diverges for these values of $x$.
Therefore, to completely identify the interval of convergence we have to determine
whether the power series converges at $x = a - R$ and at $x = a + R$.
If the power series converges at one or both of these values, then we must
include those in the interval of convergence.
1. If the series (1.6) converges at $x = x_0$, all its terms
are zero except for the first, $a_0$. In special cases $x = x_0$ may be the only $x$ for
which (1.6) converges. Such a series is not considered
significant.
2. If there are any other values of $x$ for which the series converges,
then such values form an interval, termed the convergence
interval. If this interval is finite, then it has the midpoint $x_0$ and is of the
form
$$|x - x_0| < R \quad (1.11)$$
The series (1.6) converges for all $x$ such that $|x - x_0| < R$
and diverges for all $x$ such that $|x - x_0| > R$. Here the number $R$ is the
radius of convergence of (1.6). It can be acquired using
either of the following formulas, provided these limits exist and are
not zero:
$$\text{(a)} \;\; R = 1 \Big/ \lim_{m \to \infty} \sqrt[m]{|a_m|} \qquad \text{(b)} \;\; R = 1 \Big/ \lim_{m \to \infty} \left| \frac{a_{m+1}}{a_m} \right| \quad (1.12)$$
If these limits are infinite, then the series (1.6) converges only at
the center $x_0$.
3. The convergence interval can at times be infinite, i.e., the series (1.6)
converges for all $x$. For example, if the limit in (1.12a)
or (1.12b) is zero, then this case takes place and we write
$R = \infty$. For each $x$ for which the series (1.6) converges, the series
has a definite value $s(x)$. We state that the series (1.6) represents
the function $s(x)$ in the convergence interval and write
$$s(x) = \sum_{m=0}^{\infty} a_m (x - x_0)^m \quad (|x - x_0| < R).$$

The following examples will make the concept clear.


Convergence at the Center: Consider the series
$$\sum_{m=0}^{\infty} m! \, x^m = 1 + x + 2x^2 + 6x^3 + \cdots$$
We include $a_m = m!$ in Equation (1.12b):
$$\left|\frac{a_{m+1}}{a_m}\right| = \frac{(m+1)!}{m!} = m + 1 \to \infty \quad \text{as } m \to \infty$$
This series converges only at the center $x = 0$ and hence it is a useless series.

Convergence in a Finite Interval: Consider the geometric series
$$\frac{1}{1-x} = \sum_{m=0}^{\infty} x^m = 1 + x + x^2 + \cdots$$
Here $a_m = 1$, so $|a_{m+1}/a_m| = 1$ and Equation (1.12b) gives $R = 1$; the series
converges in the finite interval $|x| < 1$.

Convergence for All $x$: For the series
$$e^x = \sum_{m=0}^{\infty} \frac{x^m}{m!}$$
we include $a_m = 1/m!$. Hence Equation (1.12b) becomes
$$\left|\frac{a_{m+1}}{a_m}\right| = \frac{1/(m+1)!}{1/m!} = \frac{1}{m+1} \to 0 \quad \text{as } m \to \infty$$
This series converges for all $x$ and is considered significant.
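Both limits can be inspected numerically. The sketch below (an added illustration, not from the original text; $m = 50$ stands in for "$m$ large") evaluates the ratio $|a_{m+1}/a_m|$ exactly for the two coefficient choices above.

```python
# Inspecting the limits in formula (1.12b) at a large m (here m = 50).
from fractions import Fraction
from math import factorial

m = 50

# a_m = m!  : |a_{m+1}/a_m| = m + 1 -> infinity, so R = 0
# (the series sum m! x^m converges only at its center).
ratio_factorial = Fraction(factorial(m + 1), factorial(m))
assert ratio_factorial == m + 1

# a_m = 1/m! : |a_{m+1}/a_m| = 1/(m+1) -> 0, so R = infinity
# (the series for e^x converges for all x).
ratio_exp = Fraction(1, factorial(m + 1)) / Fraction(1, factorial(m))
assert ratio_exp == Fraction(1, m + 1)
```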
The following hint will help you to solve some specific problems. Consider
$$\sum_{m=0}^{\infty} \frac{(-1)^m x^{3m}}{8^m} = 1 - \frac{x^3}{8} + \frac{x^6}{64} - \frac{x^9}{512} + \cdots$$
This is considered as a series in powers of $t = x^3$ having coefficients
$a_m = (-1)^m / 8^m$, so that Equation (1.12b) becomes
$$\left| \frac{a_{m+1}}{a_m} \right| = \frac{8^m}{8^{m+1}} = \frac{1}{8}$$
Then $R = 8$, and therefore the series converges for $|t| < 8$, i.e., $|x| < 2$.
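The substitution trick can be verified numerically. The sketch below (an added illustration, not from the original text) computes the ratio $|a_{m+1}/a_m|$ for $a_m = (-1)^m/8^m$, recovers $R = 8$ in $t = x^3$, and hence the bound $|x| < 2$.

```python
# Radius of convergence for the series in powers of t = x^3.
# (Powers of 8 are exact in binary floating point, so ratio and R_t are exact.)
def a(m):
    return (-1) ** m / 8 ** m      # coefficients a_m = (-1)^m / 8^m

ratio = abs(a(51) / a(50))         # |a_{m+1}/a_m| = 1/8 for every m
R_t = 1 / ratio                    # radius in t, by (1.12b)
assert R_t == 8.0

R_x = R_t ** (1 / 3)               # radius in x, since t = x^3
assert abs(R_x - 2.0) < 1e-12
```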
Example 1.4: Determine the radius of convergence and interval of convergence
for the power series
$$\sum_{n=1}^{\infty} \frac{(-1)^n n}{4^n} (x + 3)^n$$
Solution: This power series will certainly converge at the point $x = -3$. We have to
determine the remainder of the $x$'s for which it converges by using an
appropriate test. Using the test we derive the condition(s) on $x$ which can be
used to determine the values of $x$ for which the power series will converge and the
values of $x$ for which the power series will diverge. From this we evaluate the
radius of convergence and also the interval of convergence. The most suitable test
in this case is the ratio test. Using the test we get
$$L = \lim_{n \to \infty} \left| \frac{(-1)^{n+1}(n+1)(x+3)^{n+1}}{4^{n+1}} \cdot \frac{4^n}{(-1)^n\, n\, (x+3)^n} \right| = \lim_{n \to \infty} \frac{n+1}{4n}\,|x + 3|$$
Here $x$ does not depend on the limit, hence it can be factored out of the limit.
We must keep the absolute value bars on it so that everything remains positive.
The limit then becomes
$$L = |x + 3| \lim_{n \to \infty} \frac{n+1}{4n} = \frac{1}{4}\,|x + 3|$$
The ratio test states that if $L < 1$ the series will converge, if $L > 1$ the series
will diverge, and if $L = 1$ then anything may happen. Thus,
$$\frac{1}{4}|x+3| < 1 \;\Rightarrow\; |x+3| < 4 \quad \text{(convergence)}, \qquad |x+3| > 4 \quad \text{(divergence)}$$
These are the required conditions for the radius of convergence.
Hence, the radius of convergence for this power series is $R = 4$.
Next we obtain the interval of convergence. For this we obtain most of the
interval by solving the first inequality above:
$$-4 < x + 3 < 4 \;\Rightarrow\; -7 < x < 1$$
Hence, the interval of validity is given by $-7 < x < 1$. Now we determine
whether the power series will converge or diverge at the endpoints of this interval.
Remember that these values of $x$ correspond to the value of $x$ that gives $L = 1$.
The convergence at these points can be determined by just plugging them into the
original power series and observing if the series converges or diverges.
For $x = -7$: In this case the series is
$$\sum_{n=1}^{\infty} \frac{(-1)^n n}{4^n} (-4)^n = \sum_{n=1}^{\infty} (-1)^n (-1)^n\, n = \sum_{n=1}^{\infty} n$$
This series is divergent since $\lim_{n \to \infty} n = \infty \neq 0$.
For $x = 1$: In this case the series is
$$\sum_{n=1}^{\infty} \frac{(-1)^n n}{4^n}\, 4^n = \sum_{n=1}^{\infty} (-1)^n n$$
This series is also divergent since $\lim_{n \to \infty} (-1)^n n$ does not exist.
Hence, in this case the power series will not converge for either endpoint.
The interval of convergence is $-7 < x < 1$.
Example 1.5: Determine the radius of convergence and interval of convergence
for the power series
$$\sum_{n=1}^{\infty} \frac{2^n}{n} (4x - 8)^n$$
Solution: Using the ratio test we get
$$L = \lim_{n \to \infty} \left| \frac{2^{n+1}(4x-8)^{n+1}}{n+1} \cdot \frac{n}{2^n (4x-8)^n} \right| = \lim_{n \to \infty} \frac{2n}{n+1}\,|4x - 8| = 2\,|4x - 8|$$
Thus the convergence or divergence can be determined as follows:
$$2\,|4x - 8| < 1 \quad \text{(convergence)}, \qquad 2\,|4x - 8| > 1 \quad \text{(divergence)}$$
For the radius of convergence we need these conditions in the form $|x - a| < R$ and $|x - a| > R$, i.e., we
have to factor the 4 out of the absolute value bars to obtain the accurate radius of
convergence. This gives
$$8\,|x - 2| < 1 \;\Rightarrow\; |x - 2| < \frac{1}{8}$$
Thus, the radius of convergence for this power series is $R = \dfrac{1}{8}$. Next we
find the interval of convergence by first solving the inequality that gives convergence:
$$-\frac{1}{8} < x - 2 < \frac{1}{8} \;\Rightarrow\; \frac{15}{8} < x < \frac{17}{8}$$
Now verify the end points.
For $x = \dfrac{15}{8}$: Here $4x - 8 = -\dfrac{1}{2}$, so the series is
$$\sum_{n=1}^{\infty} \frac{2^n}{n} \left(-\frac{1}{2}\right)^n = \sum_{n=1}^{\infty} \frac{(-1)^n}{n}$$
This series is the alternating harmonic series, and it converges.
For $x = \dfrac{17}{8}$: Here $4x - 8 = \dfrac{1}{2}$, so the series is
$$\sum_{n=1}^{\infty} \frac{2^n}{n} \left(\frac{1}{2}\right)^n = \sum_{n=1}^{\infty} \frac{1}{n}$$
This series is the harmonic series and it diverges. Hence, the power series
converges for one of the end points but not the other. Then, the interval of
convergence for this power series is given as
$$\frac{15}{8} \le x < \frac{17}{8}$$
Example 1.6: Determine the radius of convergence and interval of convergence
for the power series
$$\sum_{n=0}^{\infty} n! \,(2x + 1)^n$$
Solution: Using the ratio test we get
$$L = \lim_{n \to \infty} \left| \frac{(n+1)!\,(2x+1)^{n+1}}{n!\,(2x+1)^n} \right| = |2x + 1| \lim_{n \to \infty} (n + 1)$$
Here, the limit is infinite, but the term with the $x$'s sits in front of the
limit. We have $L = \infty > 1$ provided $x \neq -\dfrac{1}{2}$. Hence, this power series will only
converge if $x = -\dfrac{1}{2}$.
The basic principle is that every power series converges at its center $x = a$,
and here that is $x = -\dfrac{1}{2}$. We get $a$ from $2x + 1 = 2\left(x + \dfrac{1}{2}\right)$, noting that the coefficient of $x$ must be one.
The radius of convergence is $R = 0$ and the interval of convergence is the single point $x = -\dfrac{1}{2}$.
Example 1.7: Determine the radius of convergence and interval of convergence
for the power series
$$\sum_{n=1}^{\infty} \frac{(x - 6)^n}{n^n}$$
Solution: In this case we use the root test to get
$$L = \lim_{n \to \infty} \left| \frac{(x-6)^n}{n^n} \right|^{1/n} = \lim_{n \to \infty} \frac{|x - 6|}{n} = 0$$
Since $L = 0 < 1$ regardless of the value of $x$, this power series will converge
for every $x$. In such conditions we say that the radius of convergence is $R = \infty$
and the interval of convergence is $-\infty < x < \infty$.
Example 1.8: Determine the radius of convergence and interval of convergence
for the power series
$$\sum_{n=1}^{\infty} \frac{x^{2n}}{(-3)^n}$$
Solution: In this case the significant difference is the exponent on the $x$, which is
$2n$ rather than the standard $n$. We use the root test again to determine the
convergence. It gives
$$L = \lim_{n \to \infty} \left| \frac{x^{2n}}{(-3)^n} \right|^{1/n} = \frac{|x|^2}{3} = \frac{x^2}{3}$$
We acquire convergence if
$$\frac{x^2}{3} < 1, \quad \text{i.e., } x^2 < 3$$
In this case the radius of convergence is NOT 3, because the radius of
convergence needs an exponent of 1 on the $x$. As a result, taking square roots,
$$|x| < \sqrt{3}$$
Here the quantity inside the absolute value bars looks like the radius of convergence, and $R$ is
$\sqrt{3}$. The reversed inequality, $x^2 > 3$, gives divergence. Next we
obtain the interval of convergence. From the inequality we have
$$-\sqrt{3} < x < \sqrt{3}$$
Now verify the end points.

For $x = -\sqrt{3}$: The power series is
$$\sum_{n=1}^{\infty} \frac{(-\sqrt{3})^{2n}}{(-3)^n} = \sum_{n=1}^{\infty} \frac{3^n}{(-3)^n} = \sum_{n=1}^{\infty} (-1)^n$$
This series is divergent since $\lim_{n \to \infty} (-1)^n$ does not exist.

For $x = \sqrt{3}$: Here we square the $x$, so this series will be the same as in the preceding
step, which is divergent.

The interval of convergence is $-\sqrt{3} < x < \sqrt{3}$.
1.2.2 Operations on Power Series
The acceptable operations on power series are differentiation, integration,
addition, subtraction, division and multiplication. A condition
regarding the vanishing of every coefficient of a power series is also listed, which is
considered the basic tool of the power series method.
Differentiation
The differentiation of a power series can be done term by term. More specifically,
if the series
$$y(x) = \sum_{m=0}^{\infty} a_m (x - x_0)^m$$

converges for $|x - x_0| < R$, where R > 0, then the series obtained by
differentiating term by term also converges for those x, and it represents the
derivatives y′ and y″ for those x, so that,
y x    mam  x  x0 
m 1
m 1
x x
0  R.

Also,

yx    mm  1am  x  x0 
m2
m2
x x 0  R  , etc.

Addition
Two power series can be added term by term. More specifically, if the series,
$$\sum_{m=0}^{\infty} a_m (x - x_0)^m \quad \text{and} \quad \sum_{m=0}^{\infty} b_m (x - x_0)^m \qquad (1.13)$$

have positive radii of convergence with sums f(x) and g(x), then the series:

$$\sum_{m=0}^{\infty} (a_m + b_m)(x - x_0)^m$$

Converges and denotes f(x) + g(x) for each x which is in the interior of the
convergence interval of each of the given series.
Multiplication
The two given power series can also be multiplied term by term. Assume that the
series in Equation (1.13) contains positive radii of convergence and f(x) and g(x)
are their sums. Then we obtain the series by multiplying each term of the first series
with each term of the second series and collecting like powers of x − x₀, so that,
$$\sum_{m=0}^{\infty} \left(a_0 b_m + a_1 b_{m-1} + \cdots + a_m b_0\right)(x - x_0)^m$$
$$= a_0 b_0 + (a_0 b_1 + a_1 b_0)(x - x_0) + (a_0 b_2 + a_1 b_1 + a_2 b_0)(x - x_0)^2 + \cdots$$
Converges and denotes f(x) g(x) for each x in the interior of the convergence
interval of every given series.
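The multiplication rule is the Cauchy product of the two coefficient sequences. A minimal sketch (with illustrative coefficients aₘ = bₘ = 1, chosen only for the check): since (Σ xᵐ)² = 1/(1 − x)², the product coefficients must come out as m + 1.

```python
def cauchy_product(a, b):
    """c_m = a_0 b_m + a_1 b_{m-1} + ... + a_m b_0 (product coefficients)."""
    return [sum(a[k] * b[m - k] for k in range(m + 1))
            for m in range(min(len(a), len(b)))]

# (sum x^m) * (sum x^m) = 1/(1-x)^2 = sum (m+1) x^m
ones = [1.0] * 10
c = cauchy_product(ones, ones)
print(c)  # [1.0, 2.0, ..., 10.0]
```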
Vanishing of Coefficients
In case there is a positive radius of convergence of a power series as well as an
identically zero sum all through its interval of convergence, then every coefficient
of the series will be zero.
Shifting Summation Indices
This can be explained using a specific example. Consider the following series,
$$x^2 \sum_{m=2}^{\infty} m(m-1)\, a_m x^{m-2} + 2 \sum_{m=1}^{\infty} m\, a_m x^{m-1}$$

$$= x^2\left(2a_2 + 6a_3 x + 12a_4 x^2 + \cdots\right) + 2\left(a_1 + 2a_2 x + 3a_3 x^2 + \cdots\right)$$
This series can be written as a single series. To do this, we first take the x²
inside the first summation, to obtain,
$$\sum_{m=2}^{\infty} m(m-1)\, a_m x^m + \sum_{m=1}^{\infty} 2m\, a_m x^{m-1}.$$

Assume that we have used s as the summation letter of the series which is to
be obtained. We just replace s with m in the first series. This change of notation is
possible because a summation letter is only a dummy index and we can use any
letter for this dummy index which has not been used before. In the second series we
shift the index by one unit, setting m − 1 = s, so that m = s + 1. The summation now
starts with s = 0, because m = 0 + 1 = 1, which is the previous start. Collectively,
it becomes:
$$\sum_{s=2}^{\infty} s(s-1)\, a_s x^s + \sum_{s=0}^{\infty} 2(s+1)\, a_{s+1} x^s.$$

In the first series we can start the summation at s = 0 instead of s = 2, since the
added terms vanish, to obtain,
$$\sum_{s=0}^{\infty} \left[s(s-1)\, a_s + 2(s+1)\, a_{s+1}\right] x^s = 2a_1 + 4a_2 x + (2a_2 + 6a_3) x^2 + (6a_3 + 8a_4) x^3 + \cdots$$

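A quick numerical check of the index shift: evaluating the two original sums and the single combined sum at a sample point must give the same value. The coefficients below are arbitrary trial values, chosen only for the check.

```python
a = [0.3, -1.2, 0.7, 2.0, -0.5, 0.9, 0.1, -0.4]  # arbitrary trial coefficients
x = 0.8
N = len(a)
pad = a + [0.0]  # a_N = 0, so the shifted series can run one index further

# Original form: x^2 sum_{m>=2} m(m-1) a_m x^(m-2) + 2 sum_{m>=1} m a_m x^(m-1)
lhs = x**2 * sum(m * (m - 1) * a[m] * x**(m - 2) for m in range(2, N)) \
      + 2 * sum(m * a[m] * x**(m - 1) for m in range(1, N))

# Combined form: sum_{s>=0} [ s(s-1) a_s + 2(s+1) a_{s+1} ] x^s
rhs = sum((s * (s - 1) * pad[s] + 2 * (s + 1) * pad[s + 1]) * x**s
          for s in range(N))
print(lhs, rhs)  # the two values agree
```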
1.2.3 Existence of Power Series Solutions and Real Analytic Functions
We have already learned the various properties of power series. We now ask when an
equation has power series solutions. Consider the equation with
coefficients p and q, and the function r on the right side,
$$y'' + p(x)\, y' + q(x)\, y = r(x) \qquad (1.14)$$
Every solution of Equation (1.14) can then be represented by a power series.
The same is true if $\tilde{h}$, $\tilde{p}$, $\tilde{q}$ and $\tilde{r}$ in the equation
$$\tilde{h}(x)\, y'' + \tilde{p}(x)\, y' + \tilde{q}(x)\, y = \tilde{r}(x) \qquad (1.15)$$
can be represented by power series, where $\tilde{h}(x_0) \neq 0$ and x₀ is the center of the series.

Real Analytic Function

A real function f(x) is termed analytic at a point x = x₀ if it can be represented by
a power series in powers of x − x₀ with radius of convergence R > 0. In mathematics,
an analytic function is a function that is locally given by a convergent power series.
There exist both real analytic functions and complex analytic functions. Functions
of each type are infinitely differentiable, but complex analytic functions exhibit
properties that do not hold generally for real analytic functions. A function is analytic
if and only if it is equal to its Taylor series in some neighborhood of every point.
This concept can be stated with the help of the following basic theorem.

Theorem 1: Existence of Power Series Solutions


In Equation (1.14), if p, q and r are analytic at x = x₀, then every solution of
Equation (1.14) is analytic at x = x₀ and can be represented by a power series in
powers of x − x₀ with radius of convergence R > 0. The same is true if $\tilde{h}$, $\tilde{p}$, $\tilde{q}$ and
$\tilde{r}$ in Equation (1.15) are analytic at x = x₀ and $\tilde{h}(x_0) \neq 0$.

Using this theorem you can prove the existence of power series solutions.

Check Your Progress


1. Why is power series method used?
2. How is power series represented?
3. Define power series in calculus.
4. On what value does convergence of power series depends?
5. Define the term interval of convergence.
6. Which operations are acceptable in power series?
7. When is every coefficient of the power series zero?

1.3 BESSEL, LEGENDRE AND HYPERGEOMETRIC EQUATIONS
The following equations and their basic properties are used in evaluating power
series solutions.
1.3.1 Bessel Equations
In mathematics, Bessel functions, first defined by the mathematician Daniel
Bernoulli and generalized by Friedrich Bessel, are canonical solutions y(x) of
Bessel’s differential equation:
$$x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + (x^2 - \alpha^2)\, y = 0$$
Here α is an arbitrary real or complex number (the order of the Bessel function);
the most common and important cases are α an integer or a half-integer. Although
α and –α produce the same differential equation, it is
conventional to define different Bessel functions for these two orders so that the
Bessel functions are mostly smooth functions of α. Bessel functions are also known
Self - Learning
Material 19
Series Solutions of as cylinder functions or cylindrical harmonics because they are found in the solution
Differential Equations
to Laplace’s equation in cylindrical coordinates.
Bessel Functions of the First Kind: Jα
Bessel functions of the first kind, denoted as Jα(x), are solutions of Bessel's
differential equation that are finite at the origin (x = 0) for non-negative integer α
and diverge as x approaches zero for negative non-integer α. The solution type
and normalization of Jα(x) are defined by its properties. It is possible to define the
function by its Taylor series expansion around x = 0 as:
$$J_\alpha(x) = \sum_{m=0}^{\infty} \frac{(-1)^m}{m!\, \Gamma(m + \alpha + 1)} \left(\frac{x}{2}\right)^{2m + \alpha}$$
Here (z) is the gamma function, which is a generalization of the factorial


function to non-integer values. The graphs of Bessel functions roughly looks like
oscillating sine or cosine functions that decay proportionally to 1/√x, although
their roots are not generally periodic, except asymptotically for large x (Refer
Figure 1.1). The Taylor series indicates that – J1(x) is the derivative of J0(x) just
like –sin x is the derivative of cos x. Mathematically, the derivative of Jn(x) can be
expressed in terms of Jn±1(x) using the identities.
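The Taylor series above can be evaluated directly, and the derivative identity J₀′(x) = −J₁(x) verified with a central difference (a sketch; the truncation length 60 and the test point are arbitrary choices):

```python
import math

def J(alpha, x, terms=60):
    """Bessel function of the first kind via its series:
    J_alpha(x) = sum_m (-1)^m / (m! * Gamma(m+alpha+1)) * (x/2)^(2m+alpha)."""
    return sum((-1)**m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2) ** (2 * m + alpha) for m in range(terms))

# J0'(x) = -J1(x), checked with a central difference at x = 1.5
x, h = 1.5, 1e-6
d_J0 = (J(0, x + h) - J(0, x - h)) / (2 * h)
print(d_J0, -J(1, x))  # the two values agree
```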

Fig. 1.1 Plot of the Bessel Functions of the First Kind, Jα(x), for Integer Orders α = 0, 1, 2.
For non-integer α, the functions Jα(x) and J−α(x) are linearly independent
and are therefore the two solutions of the differential equation. Alternatively, for
integer order α, the following relationship is valid:
$$J_{-n}(x) = (-1)^n J_n(x)$$
Remember that the Gamma function becomes infinite for negative integer
arguments. This means that the two solutions are no longer linearly independent.
Bessel’s Integrals: Another definition of the Bessel function, for integer
values of n, is possible using an integral representation of the form:
$$J_n(x) = \frac{1}{\pi} \int_0^{\pi} \cos(n\tau - x \sin \tau)\, d\tau$$

Another integral representation is:
$$J_n(x) = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{i(x \sin \tau - n\tau)}\, d\tau$$

This was the approach that Bessel used and from this definition he derived
several properties of the function. The definition may be extended to non-integer
orders by the addition of another term:
$$J_\alpha(x) = \frac{1}{\pi} \int_0^{\pi} \cos(\alpha\tau - x \sin \tau)\, d\tau - \frac{\sin(\alpha\pi)}{\pi} \int_0^{\infty} e^{-x \sinh t - \alpha t}\, dt, \qquad x > 0.$$
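For integer n, the integral representation can be compared numerically against the series definition (a sketch using the trapezoidal rule; the step count is an arbitrary choice):

```python
import math

def J_series(n, x, terms=60):
    return sum((-1)**m / (math.factorial(m) * math.gamma(m + n + 1))
               * (x / 2) ** (2 * m + n) for m in range(terms))

def J_integral(n, x, steps=2000):
    """J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin t) dt, trapezoidal rule."""
    f = lambda t: math.cos(n * t - x * math.sin(t))
    h = math.pi / steps
    s = 0.5 * (f(0.0) + f(math.pi)) + sum(f(k * h) for k in range(1, steps))
    return s * h / math.pi

print(J_integral(0, 1.5), J_series(0, 1.5))  # agree to several digits
```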
Relation to Hypergeometric Series


The Bessel functions can be expressed in terms of the generalized hypergeometric
series as,
$$J_\alpha(x) = \frac{(x/2)^\alpha}{\Gamma(\alpha + 1)}\; {}_0F_1\!\left(\alpha + 1;\ -\tfrac{x^2}{4}\right)$$
This expression is related to the development of Bessel functions in terms of


the Bessel–Clifford function.
Bessel Functions of the Second Kind: Yα
The Bessel functions of the second kind, denoted by Yα(x), are solutions of the
Bessel differential equation. They have a singularity at the origin (x = 0). Yα(x) is
sometimes also called the Neumann function, and is occasionally denoted instead
by Nα(x). For non-integer α, it is related to Jα(x) by:
$$Y_\alpha(x) = \frac{J_\alpha(x) \cos(\alpha\pi) - J_{-\alpha}(x)}{\sin(\alpha\pi)}$$
In the case of integer order n, the function is defined by taking the limit as a
non-integer α tends to ‘n’:
$$Y_n(x) = \lim_{\alpha \to n} Y_\alpha(x)$$
This denotes the result in integral form,
$$Y_n(x) = \frac{1}{\pi} \int_0^{\pi} \sin(x \sin\theta - n\theta)\, d\theta - \frac{1}{\pi} \int_0^{\infty} \left[e^{nt} + (-1)^n e^{-nt}\right] e^{-x \sinh t}\, dt.$$
For the case of non-integer α, the definition of Yα(x) is redundant.
Alternatively, when α is an integer Yα(x) is the second linearly independent solution
of Bessel’s equation. Similarly to the case for the functions of the first kind, the
following relationship is considered valid:
$$Y_{-n}(x) = (-1)^n Y_n(x)$$
Both Jα(x) and Yα(x) are holomorphic functions of x on the complex plane
cut along the negative real axis. When α is an integer, the Bessel functions J are
entire functions of x. If x is held fixed, then the Bessel functions are entire functions
of α.
1.3.2 Legendre’s Equation
In mathematics, Legendre’s equation is the diophantine equation and is
represented as,
ax2 + by2 + cz2 = 0.
This equation is named after Adrien Marie Legendre who proved in 1785
that it is solvable in integers x, y, z, not all zero, if and only if –bc, –ca and –ab are
quadratic residues modulo a, b and c, respectively, where a, b, c are nonzero,
squarefree, pairwise relatively prime integers, not all positive or all negative.
The Legendre differential equation is the second order ordinary differential
equation and is written as:
$$\frac{d}{dx}\!\left[(1 - x^2) \frac{dy}{dx}\right] + l(l + 1)\, y = 0$$
It can also be written as:
$$(1 - x^2)\, y'' - 2x\, y' + l(l + 1)\, y = 0.$$
Here L is termed the Legendre operator:
$$L = \frac{d}{dx}\!\left[(1 - x^2)\frac{d}{dx}\right].$$
The Frobenius method can be used to solve the equation in the region −1 < x < 1.
We set the parameter p in the Frobenius method to zero.



By substituting these terms into the original equation, we obtain:

Thus,

And,

This series converges when,

Therefore the series solution has to be cut by specifying:

Cutting the series at specific integer values l produces polynomials of degree l,
termed Legendre polynomials.
Solution via Power Series
The Legendre’s equation is given by,
$$(1 - x^2)\, y'' - 2x\, y' + l(l + 1)\, y = 0$$
This equation is analytic around x₀ = 0, so we can use the standard power
series method to determine y(x). In this case, consider
$$y(x) = \sum_{m=0}^{\infty} a_m x^m$$
Upon substitution of this and its appropriate derivative relationships into the
original equation, we obtain the recurrence relation of the form,
$$a_{m+2} = -\frac{(l - m)(l + m + 1)}{(m + 1)(m + 2)}\, a_m$$
Here a0 and a1 are arbitrary constants and m = 0, 1, 2, …. Hence, the


solution to Legendre's equation can be written as,
$$y(x) = a_0\, y_1(x) + a_1\, y_2(x)$$
Where,
$$y_1(x) = 1 - \frac{l(l+1)}{2!} x^2 + \frac{(l-2)\,l\,(l+1)(l+3)}{4!} x^4 - \cdots$$
And,
$$y_2(x) = x - \frac{(l-1)(l+2)}{3!} x^3 + \frac{(l-3)(l-1)(l+2)(l+4)}{5!} x^5 - \cdots$$
These series converge for |x| < 1.
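The recurrence can be iterated numerically. The sketch below assumes the standard form a_{m+2} = −(l − m)(l + m + 1) aₘ / ((m + 1)(m + 2)); for integer l one of the two series terminates, e.g. l = 2 with a₀ = −1/2 and a₁ = 0 reproduces P₂(x) = (3x² − 1)/2.

```python
def legendre_series_coeffs(l, a0, a1, n_terms=12):
    """Series-solution coefficients of Legendre's equation, from the
    (assumed standard) recurrence
    a_{m+2} = -(l - m)(l + m + 1) / ((m + 1)(m + 2)) * a_m."""
    a = [0.0] * n_terms
    a[0], a[1] = a0, a1
    for m in range(n_terms - 2):
        a[m + 2] = -(l - m) * (l + m + 1) / ((m + 1) * (m + 2)) * a[m]
    return a

c = legendre_series_coeffs(2, -0.5, 0.0)
print(c[:4])  # [-0.5, 0.0, 1.5, 0.0]: the polynomial (3x^2 - 1)/2
```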


1.3.3 Hypergeometric Equation
The Gaussian Hypergeometric Differential Equation is of the type:
$$x(1 - x)\, y'' + \left[c - (a + b + 1)x\right] y' - ab\, y = 0$$
Where a, b and c are constants. The indicial equation of the hypergeometric


differential equation is of the form:
$$r(r + c - 1) = 0$$
Which has the roots r₁ = 0 and r₂ = 1 − c. Using the Frobenius method, the
series solution for r₁ = 0 can be expressed as:
$$y_1(x) = 1 + \frac{ab}{c\, 1!}\, x + \frac{a(a+1)\, b(b+1)}{c(c+1)\, 2!}\, x^2 + \cdots$$
Where c ≠ 0, −1, −2, −3, … and the series converges for −1 < x < 1. This
series is termed as Hypergeometric Series. The sum of the hypergeometric
series is denoted by F (a, b; c; x) and is called Hypergeometric Function,
which is represented as:
$$F(a, b; c; x) = \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n\, n!}\, x^n$$
General Solution: If c, a − b and c − a − b are all non-integers, then the general
solution for the hypergeometric differential equation is of the form:
$$y(x) = A\, F(a, b; c; x) + B\, x^{1-c}\, F(a - c + 1,\ b - c + 1;\ 2 - c;\ x)$$
Which is valid for −1 < x < 1.
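Partial sums of the hypergeometric series are easy to compute, and can be sanity-checked against the elementary identity F(a, b; b; x) = (1 − x)^(−a) (an illustrative check, not an example from the text):

```python
def hyp_F(a, b, c, x, terms=200):
    """Partial sum of F(a,b;c;x) = sum_n (a)_n (b)_n / ((c)_n n!) x^n, |x| < 1."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return total

x = 0.3
print(hyp_F(2.0, 5.0, 5.0, x), (1 - x) ** -2)  # both about 2.0408
```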
Gamma Function: A hypergeometric function can be expressed in terms of gamma
functions as,

For x = 1,
$$F(a, b; c; 1) = \frac{\Gamma(c)\, \Gamma(c - a - b)}{\Gamma(c - a)\, \Gamma(c - b)}.$$
1.3.4 Regular Singular Point


A regular singular point of a differential equation is a singular point of the equation
at which none of the solutions has an essential singularity. In mathematics, in the
theory of ordinary differential equations in the complex plane, the points of the plane
are classified into ordinary points, at which the equation's coefficients are analytic
functions and singular points at which some coefficient has a singularity. Then
amongst singular points, an important distinction is made between a regular
singular point, where the growth of solutions is bounded (in any small sector) by
an algebraic function and an irregular singular point, where the full solution set
requires functions with higher growth rates. This distinction occurs between the
hypergeometric equation, with three regular singular points and the Bessel equation
which is in a sense a limiting case, but where the analytic properties are substantially
different. More specifically, consider an ordinary linear differential equation of nth
order of the form,
$$p_n(z)\, y^{(n)} + p_{n-1}(z)\, y^{(n-1)} + \cdots + p_0(z)\, y = 0$$
Where the pᵢ(z) are meromorphic functions. One can assume that
pₙ(z) = 1; if this is not the case then the equation must be divided by pₙ(z).
This may introduce singular points to consider. The equation should be studied on
the Riemann sphere to include the point at infinity as a possible singular point. A
Möbius transformation can be applied to move ∞ into the finite part of the complex
plane if required.
Then the Frobenius method based on the indicial equation can be applied
for finding possible solutions that are power series times complex powers (z – a)r
near any given a in the complex plane where r need not be an integer. This function
may exist, therefore, a branch cut extending out from a or on a Riemann surface of
Self - Learning
Material 25
Series Solutions of some punctured disc around a. This presents no difficulty for a an ordinary point.
Differential Equations
When a is a regular singular point, which by definition means that pₙ₋ᵢ(z) has a
pole of order at most i at a, the Frobenius method also can be used to
provide n independent solutions near a. Otherwise the point a is an irregular
singularity. An ordinary differential equation whose only singular points, including
the point at infinity, are regular singular points is called a Fuchsian ordinary
differential equation.
Examples for Second Order Differential Equations
The above equation is reduced to the form:
$$y'' + p_1(x)\, y' + p_0(x)\, y = 0$$
The following conditions can be distinguished:


 Point a is an ordinary point when functions p1(x) and p0(x) are analytic at
x = a.
 Point a is a regular singular point if p1(x) has a pole of order up to 1 at
x = a and p0(x) has a pole of order up to 2 at x = a.
 Otherwise point a is an irregular singular point.
Following examples of ordinary differential equations have singular points
and known solutions.
Bessel Differential Equation: This is an ordinary differential equation of
second order. It is established in the solution to Laplace’s equation in cylindrical
coordinates:
$$x^2 y'' + x y' + (x^2 - \alpha^2)\, y = 0$$
For an arbitrary real or complex number α (the order of the Bessel function).
The most common and important special case is where α is an integer n.
Dividing this equation by x2 gives:
$$y'' + \frac{1}{x}\, y' + \left(1 - \frac{\alpha^2}{x^2}\right) y = 0$$
In this case p1(x) = 1/x has a pole of first order at x = 0.


When α ≠ 0, p0(x) = 1 − α²/x² has a pole of second order at x = 0. Thus
this equation has a regular singularity at 0.
To see what happens as x → ∞ one has to use a Möbius transformation,
for example x = 1/(w − b). After performing the algebra:

Now, p1(w) = 1/(w – b) has a pole of first order at w = b and p0(w) has a
pole of fourth order at w = b. Thus this equation has an irregular singularity at w = b,
corresponding to x = ∞. There is a basis for solutions of this differential equation
that are Bessel functions.
Legendre Differential Equation: This is an ordinary differential equation of
second order. It is obtained in the solution of Laplace's equation in spherical
coordinates:
$$\frac{d}{dx}\!\left[(1 - x^2) \frac{dy}{dx}\right] + l(l + 1)\, y = 0$$
Opening the square bracket gives:
$$(1 - x^2)\, y'' - 2x\, y' + l(l + 1)\, y = 0$$
Dividing by (1 − x²) gives:
$$y'' - \frac{2x}{1 - x^2}\, y' + \frac{l(l + 1)}{1 - x^2}\, y = 0$$
This differential equation has regular singular points at −1, +1 and ∞.


Hypergeometric Equation: The equation can be defined as,
$$z(1 - z)\, y'' + \left[c - (a + b + 1)z\right] y' - ab\, y = 0$$
Dividing both sides by z(1 − z) gives:
$$y'' + \frac{c - (a + b + 1)z}{z(1 - z)}\, y' - \frac{ab}{z(1 - z)}\, y = 0$$
This differential equation has regular singular points at 0, 1 and ∞. A solution


is the hypergeometric function.

Check Your Progress


8. What are the Bessel functions of first kind?
9. How is Bessel function of second kind denoted?
10. How is Legendre’s equation denoted mathematically?
11. Write the Legendre differential equation of the second order.
12. What is the form of indicial equation in hypergeometric differential equations?
13. Define regular singular point for a differential equation.

1.4 GENERATING FUNCTIONS AND RECURRENCE RELATIONS
In mathematics, a recurrence relation is an equation that recursively defines a
sequence, where each term of the sequence is defined as a function of the preceding
terms. The term difference equation also sometimes refers to a specific type of
recurrence relation. Remember that 'difference equation' is frequently used to
refer to any recurrence relation. An example of a recurrence relation is the logistic
map represented as:
$$x_{n+1} = r\, x_n (1 - x_n)$$
Some simply defined recurrence relations can have very complex actions
and they are considered as a part of the field of mathematics known as nonlinear
analysis. Solving a recurrence relation means obtaining a closed-form solution: a
non-recursive function of n.
Linear Homogeneous Recurrence Relations with Constant Coefficients
An order d linear homogeneous recurrence relation with constant coefficients is an
equation of the form:
$$a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_d a_{n-d}$$
Where the d coefficients cᵢ (for all i) are constants. More specifically, this is
an infinite set of simultaneous linear equations, one for each n > d – 1. A sequence which
satisfies a relation of this form is called a Linear Recursive Sequence or LRS.
There are d degrees of freedom for an LRS: the initial values a₀, …, a_{d−1} can be
taken to be any values, but then the linear recurrence determines the sequence
uniquely. The same coefficients yield the characteristic polynomial of the form,
$$p(t) = t^d - c_1 t^{d-1} - c_2 t^{d-2} - \cdots - c_d$$
Whose d roots play a crucial role in finding and understanding the sequences
satisfying the recurrence. If the roots r1, r2, ... are all distinct, then the solution to
the recurrence takes the form,
$$a_n = k_1 r_1^n + k_2 r_2^n + \cdots + k_d r_d^n$$
Where the coefficients ki are determined in order to fit the initial conditions
of the recurrence. When the same roots occur multiple times, the terms in this
formula corresponding to the second and later occurrences of the same root are
multiplied by increasing powers of n. For example, if the characteristic polynomial
can be factored as (x – r)3 with the same root r occurring three times, then the
solution would take the form,
$$a_n = \left(k_1 + k_2 n + k_3 n^2\right) r^n$$
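As a concrete illustration (an assumed example, not from the text): the Fibonacci recurrence aₙ = aₙ₋₁ + aₙ₋₂ has characteristic polynomial t² − t − 1 with distinct roots, and fitting the initial values a₀ = 0, a₁ = 1 gives k₁ = −k₂ = 1/√5.

```python
import math

r1 = (1 + math.sqrt(5)) / 2  # roots of t^2 - t - 1
r2 = (1 - math.sqrt(5)) / 2

def fib_closed(n):
    # k1 = -k2 = 1/sqrt(5) fits the initial conditions a0 = 0, a1 = 1
    return (r1 ** n - r2 ** n) / math.sqrt(5)

def fib_rec(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([round(fib_closed(n)) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```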
Rational Generating Function


Linear recursive sequences are precisely the sequences whose generating function
is a rational function where the denominator is the auxiliary polynomial and the
numerator is obtained from the seed values.

The simplest cases are periodic sequences, aₙ = aₙ₋d for n ≥ d, which have
sequence a₀, a₁, …, a_{d−1}, a₀, … and whose generating function is a sum of
geometric series of the form:
$$\frac{a_0 + a_1 x + \cdots + a_{d-1} x^{d-1}}{1 - x^d}$$
Generally, given the recurrence relation as:
$$a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_d a_{n-d}$$
With generating function:
$$\sum_{n \geq 0} a_n x^n$$
The series is annihilated at ad and above by the polynomial:
$$1 - c_1 x - c_2 x^2 - \cdots - c_d x^d$$
Multiplying the generating function by the polynomial yields:

Here the coefficient on xⁿ vanishes by the recurrence relation for n ≥ d.


Thus,

On dividing yields:

This expresses the generating function as a rational function. The denominator
is $x^d p(1/x)$, a transform of the auxiliary polynomial that reverses the order of
the coefficients. This normalization gives the simple relation to the auxiliary
polynomial, so that b₀ = a₀.

Relationship to Difference Equations

Given an ordered sequence {aₙ} of real numbers, the first difference is defined as,
$$\Delta a_n = a_{n+1} - a_n.$$
The second difference is defined as,
$$\Delta^2 a_n = \Delta a_{n+1} - \Delta a_n$$
This can be simplified to,
$$\Delta^2 a_n = a_{n+2} - 2a_{n+1} + a_n$$
Basically, the kth difference of the sequence aₙ is written as Δᵏaₙ, which is
defined recursively as,
$$\Delta^k a_n = \Delta^{k-1} a_{n+1} - \Delta^{k-1} a_n.$$
The more restrictive definition of a difference equation is an equation composed
of aₙ and its kth differences. Linear recurrence relations are difference equations,
and conversely; this is a simple and common form of recurrence. For example,
consider the difference equation,

This equation is equivalent to the recurrence relation,

Thus one can solve many recurrence relations by rephrasing them as difference
equations, and then solving the difference equation, analogously to how one solves
ordinary differential equations.
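The difference operator is straightforward to implement. For a polynomial sequence of degree k the kth difference is constant, e.g. aₙ = n² has second difference identically 2 (an illustrative check):

```python
def diff(seq):
    """First difference: (delta a)_n = a_{n+1} - a_n."""
    return [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]

def kth_diff(seq, k):
    for _ in range(k):
        seq = diff(seq)
    return seq

squares = [n * n for n in range(8)]
print(kth_diff(squares, 2))  # [2, 2, 2, 2, 2, 2]
```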
Generating Functions
Consider the generating function,
$$g(x) = \frac{1 + x - \sqrt{x^2 - 6x + 1}}{4x},$$
derived from the combinatorial definition of the (small) Schröder numbers. These
numbers satisfy several recurrence relations. To show how those recurrences are
related to the generating function, note that the derivative of g(x) is,


Therefore, g(x) and g(x) can be written as,



Where Q represents the square root of x² − 6x + 1. A relationship between g, g′
and x that does not involve square roots can be found by simply solving each of
these two equations for Q and then equating the two expressions. This gives,

Let sⱼ denote the jth small Schröder number; then the functions g and g′ have
the series expansions,

By substituting these expressions for g and g′ into the previously defined
equation and setting the coefficient of each power of x in the resulting expression
to zero, we obtain the following equations:

, etc.

This gives the convolution recurrence relation of the form,

A simpler recurrence arises if we can obtain a linear equation linking these two
functions. For this we multiply one of the two equations for Q by 4 and the other
by 4Q², to give,

Since Q² = x² − 6x + 1, we can substitute this into the right side equation


along with the expression for Q given by the left side equation. This gives,

After expanding and simplifying, this equation can be written in the form,

By substituting the series expansions for g and g′ into this equation and setting
the coefficients of each power of x in the final expression to zero, we obtain the
following sequence of expressions,

, etc.
We get the recurrence as,

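The recurrence itself is not legible in this copy; assuming the standard three-term form (n + 1)sₙ = 3(2n − 1)sₙ₋₁ − (n − 2)sₙ₋₂ with s₀ = s₁ = 1, it generates the familiar small Schröder numbers 1, 1, 3, 11, 45, 197, …:

```python
def small_schroeder(n_terms):
    """Small Schröder numbers via the (assumed) standard recurrence
    (n + 1) s_n = 3(2n - 1) s_{n-1} - (n - 2) s_{n-2},  s_0 = s_1 = 1."""
    s = [1, 1]
    for n in range(2, n_terms):
        # the division is always exact for these integers
        s.append((3 * (2 * n - 1) * s[n - 1] - (n - 2) * s[n - 2]) // (n + 1))
    return s

print(small_schroeder(7))  # [1, 1, 3, 11, 45, 197, 903]
```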
To proceed in the opposite direction, i.e., given a recurrence relation for
a sequence of numbers, one can establish the generating function for that sequence.
A given sequence can satisfy various different recurrence relations, hence there
is no unique starting point. If the convolution recurrence relation is known, then we
multiply each of the individual relations by the corresponding power of x as follows:

, etc.

Adding these equations gives,

By definition, each of the summations involves the generating function g(x);
making these substitutions and re-arranging terms gives,

Differentiating this equation we have,

This differential equation can be solved by first making a change of variables.
Then define a function h(x) and its derivative in terms of g(x) and its derivative as
shown below:

Remember that if we set A(x) = 2, B(x) = (1 − x), and C(x) = x, the differential
equation written in terms of h is simply h′ = 0, which implies that h is an arbitrary
constant of integration. Inserting the values of A, B and C into the equation for h
and solving for g, we obtain,

Setting the constant h = 0 to match the initial values recovers the
original generating function for the small Schröder numbers.
For a given second order recurrence relation we proceed in a similar way
multiplying each of the individual relations by the respective power of x. This gives
the equations of the form,

, etc.
Adding these we have:

Each of the two summations on the right can be divided into two parts, one with
a factor of n and the other without. The summations carrying the factor n can be
represented in terms of the derivative of the generating function via the relation,

By substituting these summations in the previous equation we get,

Take s₁ = 1 and re-arrange terms. Then divide by x to get,

To solve this differential equation we can use the previous method, defining a
new function h = Ag² + Bg + C so that the equation becomes trivial. Since the equation is
linear, i.e., there are no terms with products of g or its derivatives, we put A = 0.
If we set B equal to the coefficient of g′ in the above equation, the coefficient of g
would have to be B′ = 2x − 6. We can also use an integrating factor, i.e., we can
multiply by an (initially) arbitrary function R(x) to give,

Set B(x) equal to the coefficient of g′ in this equation and then determine what
R(x) must be in order for B′(x) to be the coefficient of g. The derivative of B is,

By setting this equal to the coefficient of g, we obtain the following condition on
the function R:

Re-arranging terms gives,

Integrating both sides gives,

Taking the exponential of both sides gives,

Consequently, we have:

Integrating the expression for C(x) gives,

Using these expressions for B and C, the differential equation written
in terms of the variable h = Bg + C is simply h′ = 0, where h is a constant. We get,

Here the constant h equals 1/4 in order to match the initial value. This gives
the original generating function. Simple integration could be used because the
coefficients satisfied certain conditions. Basically, suppose a sequence is defined by
the initial values s₀, s₁ and a second order recurrence relation of the form,

where A through F are constants; the integrating factor R(x) is obtained by
integration. Then,

If A, C and E are all non-zero, then this is equivalent to the condition,

When this condition is achieved, we get the generating function:

Where h is a constant determined by the initial values. For the Schröder
numbers, the values A = 1, B = 0, C = 6, D = 9, E = 1 and F = 3 satisfy the
declared specifications.
Now consider a series of numbers with the initial values s0 = s1 = 1 satisfying
the recurrence,

The coefficients satisfy the declared specifications, hence we can use the
above formula to give the generating function,

By expanding this function into a series, we obtain:

The coefficients of this power series satisfy the given recurrence relation.

1.5 ORTHOGONALITY OF BESSEL FUNCTIONS AND LEGENDRE POLYNOMIALS
In mathematics, an orthogonal polynomial sequence is an infinite sequence of
real polynomials p₀(x), p₁(x), p₂(x), …, of one variable x, in which each pₙ has
degree n and such that any two different polynomials in the sequence are orthogonal
to each other under a particular version of the L2 inner product.
The theory of orthogonal polynomials includes many definitions of
orthogonality. In abstract notation, one writes ⟨p, q⟩ = 0 when the polynomials
p(x) and q(x) are orthogonal. A sequence of orthogonal polynomials is a
sequence of polynomials p₀, p₁, p₂, … such that pₙ has degree n and all
distinct members of the sequence are orthogonal to each other.
The algebraic and analytic properties of the polynomials depend upon the
specific assumptions about the operator ⟨·, ·⟩. In the classical formulation, the
operator is defined in terms of the integral of a weighted product and happens to
be an inner product.
Let [x₁, x₂] be an interval in the real line, where x₁ = −∞ and x₂ = ∞ are
allowed. This is termed the interval of orthogonality. Let,
W = W(x)
be a function on the interval that is strictly positive on the interior (x₁, x₂),


but which may be zero or go to infinity at the end points. Additionally, W must
satisfy the requirement that for any polynomial f the integral
$$\int_{x_1}^{x_2} |f(x)|\, W(x)\, dx$$
is finite. Such a W is called a weight function.

Given any x₁, x₂ and W as above, define an operation on pairs of polynomials
f and g by,
$$\langle f, g \rangle = \int_{x_1}^{x_2} f(x)\, g(x)\, W(x)\, dx.$$
This operation is an inner product on the vector space of all polynomials. It


induces a notion of orthogonality in the usual way, namely that two polynomials
are orthogonal if their inner product is zero.
The general theory applies to operators ⟨·, ·⟩ satisfying the axioms of an inner
product. This includes inner products within a Hilbert space where the polynomials
can be interpreted as an orthogonal basis and inner products that can be defined
as integrals of the form,
$$\langle f, g \rangle = \int f(x)\, g(x)\, d\mu(x)$$
Where μ is a positive measure; this in turn includes the classical definition as
well as the probabilistic definition where the measure is a probability measure and
the discrete definition where the integral is an infinite weighted sum.
Bessel Functions
We have already discussed Bessel functions in the previous section. Here we will
consider the orthogonality of Bessel function. Consider that using sin in our equation
it gives:

Here the sines are equal to zero at the limits of integration; hence with Bessel
functions we likewise work with their zeros. Assume:

where the functions u and v satisfy the following differential equations:
$$u'' + \frac{1}{x}\, u' + \left(a^2 - \frac{\nu^2}{x^2}\right) u = 0$$
$$v'' + \frac{1}{x}\, v' + \left(b^2 - \frac{\nu^2}{x^2}\right) v = 0$$
By multiplying the first equation by v and the second by u and on subtracting
we get:

Hence,

Thus for a ≠ b it has the form,

In this case the weight function x is included in the orthogonality relation.


For a = b it can be represented as:

These derivations can be used for determining the coefficients in an expansion


of a function for a series of Bessel functions. If the function f(x) is to be expanded
in the range 0 < x < a, then we have the equation:

where the αₙ are selected so that the Bessel functions vanish at the end point of
the interval. The coefficients in the expansion are specified as:

This derivation can be used to solve the problem.
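The weighted orthogonality relation can be checked numerically for J₀, using its first two positive zeros (standard tabulated values; the series truncation and quadrature step counts below are arbitrary choices):

```python
import math

def J0(x, terms=60):
    # series for the Bessel function of order zero
    return sum((-1)**m / math.factorial(m)**2 * (x / 2)**(2 * m)
               for m in range(terms))

z1, z2 = 2.404825557695773, 5.520078110286311  # first two positive zeros of J0

def trap(f, a, b, steps=4000):
    h = (b - a) / steps
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, steps)))

# Weighted orthogonality: integral_0^1 x J0(z1 x) J0(z2 x) dx = 0
off_diag = trap(lambda x: x * J0(z1 * x) * J0(z2 * x), 0.0, 1.0)
print(off_diag)  # essentially zero
```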


Legendre Polynomials
The simplest classical orthogonal polynomials are the Legendre polynomials for
which the interval of orthogonality is [–1, 1] and the weight function is simply 1:

These are all orthogonal over [–1, 1]; whenever m ≠ n,
$$\int_{-1}^{1} P_m(x)\, P_n(x)\, dx = 0.$$
The Legendre polynomials are standardized so that Pn(1) = 1 for all n.


The following differential equation is Legendre’s equation of the form,
$$(1 - x^2)\, y'' - 2x\, y' + n(n + 1)\, y = 0$$
The second form of the differential equation is:
$$\frac{d}{dx}\!\left[(1 - x^2)\, y'\right] + n(n + 1)\, y = 0$$
The recurrence relation is,
$$(n + 1)\, P_{n+1}(x) = (2n + 1)\, x\, P_n(x) - n\, P_{n-1}(x)$$
A mixed recurrence is,
$$P'_{n+1}(x) - P'_{n-1}(x) = (2n + 1)\, P_n(x)$$
As per the Rodrigues’ formula it is,
$$P_n(x) = \frac{1}{2^n\, n!} \frac{d^n}{dx^n}\left[(x^2 - 1)^n\right]$$
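The standard three-term (Bonnet) recurrence (k + 1)P_{k+1} = (2k + 1)x Pₖ − k P_{k−1}, together with simple numerical integration, verifies both the standardization Pₙ(1) = 1 and the orthogonality relation (a sketch; tolerances and step counts are arbitrary):

```python
def legendre(n):
    """Coefficient list of P_n from (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    Pkm1, Pk = [1.0], [0.0, 1.0]  # P_0 = 1, P_1 = x
    if n == 0:
        return Pkm1
    for k in range(1, n):
        prev = Pkm1 + [0.0] * (len(Pk) + 1 - len(Pkm1))  # pad P_{k-1}
        xPk = [0.0] + Pk                                  # x * P_k
        Pkm1, Pk = Pk, [((2 * k + 1) * xPk[i] - k * prev[i]) / (k + 1)
                        for i in range(len(xPk))]
    return Pk

def peval(c, x):
    return sum(ci * x**i for i, ci in enumerate(c))

def inner(cm, cn, steps=2000):
    # trapezoidal approximation of integral_{-1}^{1} P_m P_n dx
    h = 2.0 / steps
    f = lambda x: peval(cm, x) * peval(cn, x)
    return h * (0.5 * (f(-1.0) + f(1.0)) + sum(f(-1.0 + k * h)
                                               for k in range(1, steps)))

print(legendre(2))                      # [-0.5, 0.0, 1.5], i.e. (3x^2 - 1)/2
print(inner(legendre(2), legendre(4)))  # near 0
```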
The associated Legendre polynomials, denoted P_l^{(m)}(x), where l and
m are integers with 0 ≤ m ≤ l, are defined as,
$$P_l^{(m)}(x) = (1 - x^2)^{m/2}\, \frac{d^m}{dx^m} P_l(x).$$
The m in parentheses is a parameter; the m in brackets denotes the mth
derivative of the Legendre polynomial. These 'polynomials' are misnamed, as they
are not polynomials when m is odd. They have a recurrence relation as:

For fixed m, the sequence of associated Legendre polynomials P_m^{(m)}, P_{m+1}^{(m)}, …
are orthogonal over [–1, 1], with weight 1.

For given m, the functions P_l^{(m)} are the solutions of:

Check Your Progress


14. What is a recurrence relation?
15. Define linear homogeneous recurrence relations with constant coefficients.
16. What is an orthogonal polynomial sequence?
17. What are the simplest classical orthogonal polynomials?

1.6 ANSWERS TO ‘CHECK YOUR PROGRESS’


1. The power series method is used to search a power series solution to certain
differential equations. Basically, such a solution assumes a power series
with unknown coefficients and then substitutes that solution into the differential
equation for finding a recurrence relation for the coefficients.
2. A power series about a, or just power series, is any series that can be written in the form,

Σ (n = 0 to ∞) cn (x − a)^n,

where a and the cn are numbers. The cn's are often called the coefficients of the series.
3. In calculus, a power series is an infinite series of the form,

Σ (m = 0 to ∞) am (x − x0)^m = a0 + a1 (x − x0) + a2 (x − x0)² + …

4. The convergence of the series may depend upon the value of x that we put
into the series. A power series may converge for some values of x and not
for other values of x.
5. The interval of all x’s, including the end points, for which the power series
converges is termed as the interval of convergence of the series.
6. In the power series, the acceptable operations are differentiation, integration,
addition, subtraction, division and multiplication of power series.
7. In case there is a positive radius of convergence of a power series as well
as an identically zero sum all through its interval of convergence, then every
coefficient of the series will be zero.
8. Bessel functions of the first kind, denoted as Jα(x), are solutions of Bessel's differential equation that are finite at the origin (x = 0) for non-negative integer α and diverge as x approaches zero for negative non-integer α.
9. The Bessel functions of the second kind, denoted by Yα(x), are solutions of the Bessel differential equation. They have a singularity at the origin (x = 0).
10. In mathematics, Legendre's equation is the diophantine equation and is represented as,
ax² + by² + cz² = 0.
11. The Legendre differential equation is the second order ordinary differential equation and is written as:

(1 − x²) y″ − 2x y′ + n(n + 1) y = 0.
12. The indicial equation of the hypergeometric differential equation is of the form:

r (r + c − 1) = 0,

which has the roots r1 = 0 and r2 = 1 − c.


13. A regular singular point of a differential equation is a singular point of the equation at which none of the solutions has an essential singularity.
14. A recurrence relation is an equation that recursively defines a sequence,
where each term of the sequence is defined as a function of the preceding
terms. The term difference equation also sometimes refers to a specific
type of recurrence relation.
15. An order d linear homogeneous recurrence relation with constant coefficients is an equation of the form:

an = c1 an−1 + c2 an−2 + … + cd an−d,

where the d coefficients ci (for all i) are constants. More specifically, this is an infinite set of simultaneous linear equations, one for each n > d − 1.
16. An orthogonal polynomial sequence is an infinite sequence of real polynomials p0, p1, p2, …, of one variable x, in which each pn has degree n and such that any two different polynomials in the sequence are orthogonal to each other.
17. The simplest classical orthogonal polynomials are the Legendre polynomials
for which the interval of orthogonality is [–1, 1] and the weight function is
simply 1.
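The closed-form idea behind answers 14 and 15 can be made concrete with a small illustrative sketch (added here, not in the original text): the order-2 recurrence an = an−1 + an−2 solved both by iteration and via its characteristic roots.

```python
import math

def solve_iterative(a0, a1, n):
    """Iterate a_n = a_{n-1} + a_{n-2}, an order-2 linear homogeneous recurrence."""
    a, b = a0, a1
    for _ in range(n):
        a, b = b, a + b
    return a

def solve_closed_form(n):
    """Closed form from the characteristic equation r^2 = r + 1:
    a_n = (phi^n - psi^n) / sqrt(5) for a0 = 0, a1 = 1 (Binet's formula)."""
    phi = (1 + math.sqrt(5)) / 2
    psi = (1 - math.sqrt(5)) / 2
    return round((phi**n - psi**n) / math.sqrt(5))
```

Both routes produce the same sequence; the closed form is the "non-recursive function of n" that solving a recurrence relation aims for.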

1.7 SUMMARY
 The power series method is used to search a power series solution to certain
differential equations. Basically, such a solution assumes a power series
with unknown coefficients and then substitutes that solution into the differential
equation for finding a recurrence relation for the coefficients.
Self - Learning
40 Material
 If a homogeneous linear differential equation has constant coefficients, then it can be solved using algebraic methods, and its solutions are elementary functions known from calculus (e^x, cos x, etc.). However, if such an equation has variable coefficients (functions of x), it must be solved by other methods.
 In calculus, a power series is an infinite series of the form,

Σ (m = 0 to ∞) am (x − x0)^m = a0 + a1 (x − x0) + a2 (x − x0)² + …

 The convergence of the series may depend upon the value of x that we put
into the series. A power series may converge for some values of x and not
for other values of x.
 Consider a number R so that the power series will converge for |x − a| < R and will diverge for |x − a| > R. This number is termed as the radius of convergence for the series.
 The series may or may not converge if |x − a| = R. Whatever happens at these points will not change the radius of convergence.
 The interval of all x’s including the end points for which the power series
converges is termed as the interval of convergence of the series.
 In the power series, the acceptable operations are differentiation, integration,
addition, subtraction, division and multiplication.
 If there is a positive radius of convergence of a power series as well as an
identically zero sum all through its interval of convergence, then every
coefficient of the series will be zero.
 A real function f(x) is termed as analytic at a point x = x0 if it can be denoted by a power series in powers of x − x0 with radius of convergence R > 0.
 Bessel functions are canonical solutions y(x) of Bessel's differential equation of the form,

x² y″ + x y′ + (x² − α²) y = 0.
 Bessel functions of the first kind, denoted as Jα(x), are solutions of Bessel’s
differential equation that are finite at the origin (x = 0) for non-negative
integer α and diverge as x approaches zero for negative non-integer α.
 The Bessel functions of the second kind, denoted by Yα(x), are solutions of
the Bessel differential equation. They have a singularity at the origin (x = 0).
 Legendre's equation is the diophantine equation and is represented as ax² + by² + cz² = 0.
 The Gaussian hypergeometric differential equation is of the type,

x (1 − x) y″ + [c − (a + b + 1) x] y′ − ab y = 0,

where a, b and c are constants.
 The indicial equation of the hypergeometric differential equation is of the form r (r + c − 1) = 0, which has the roots r1 = 0 and r2 = 1 − c.
 A regular singular point of a differential equation is a singular point of the
equation at which none of the solutions has an essential singularity.
 A recurrence relation is an equation that recursively defines a sequence,
where each term of the sequence is defined as a function of the preceding
terms. The term difference equation also sometimes refers to a specific
type of recurrence relation.
 Solving a recurrence relation means obtaining a closed-form solution: a
non-recursive function of n.
 An order d linear homogeneous recurrence relation with constant coefficients is an equation of the form,

an = c1 an−1 + c2 an−2 + … + cd an−d,

where the d coefficients ci (for all i) are constants.
 An orthogonal polynomial sequence is an infinite sequence of real polynomials p0, p1, p2, …, of one variable x, in which each pn has degree n and such that any two different polynomials in the sequence are orthogonal to each other.
 The theory of orthogonal polynomials includes many definitions of orthogonality. In abstract notation, it is written as ⟨p, q⟩ = 0 when the polynomials p(x) and q(x) are orthogonal.
 The algebraic and analytic properties of the polynomials depend upon the specific assumptions about the inner-product operator ⟨·, ·⟩.
 The simplest classical orthogonal polynomials are the Legendre polynomials
for which the interval of orthogonality is [–1, 1] and the weight function is
simply 1. The Legendre polynomials are standardized so that Pn(1) = 1 for
all n.
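The radius-of-convergence points summarized above can be sketched numerically via the ratio test, R = lim |am/am+1|. The snippet below is an added illustration (not from the original text), applied to the series with coefficients am = 1/2^m, whose radius is 2:

```python
def radius_of_convergence(coeff, m=200):
    """Estimate R = lim |a_m / a_{m+1}| (ratio test) by evaluating at a large index m."""
    return abs(coeff(m) / coeff(m + 1))

# Coefficients a_m = 1/2^m give the power series of 1/(1 - x/2), with radius R = 2:
# the series converges for |x| < 2 and diverges for |x| > 2.
R = radius_of_convergence(lambda m: 1.0 / 2**m)
```

For a geometric-type coefficient sequence the ratio is constant, so even a single large index recovers R exactly.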

1.8 KEY TERMS


 Radius of convergence: For a number R, the power series will converge for |x − a| < R and will diverge for |x − a| > R. This number is termed as the radius of convergence for the series.
 Interval of convergence: The interval of all x’s including the end points
for which the power series converges is termed as the interval of convergence
of the series.
 Vanishing of coefficients: If a power series has a positive radius of convergence and an identically zero sum all through its interval of convergence, then every coefficient of the series must be zero.
 Real analytic function: A real function f(x) is termed as analytic at a point x = x0 if it can be denoted by a power series in powers of x − x0 with radius of convergence R > 0. In mathematics, an analytic function is a function that is locally given by a convergent power series.
 Regular singular point: A regular singular point of a differential equation
is a singular point of the equation at which none of the solutions has an
essential singularity.
 Recurrence relation: It is an equation that recursively defines a sequence
where each term of the sequence is defined as a function of the preceding
terms. The term difference equation also sometimes refers to a specific type
of recurrence relation.

1.9 SELF-ASSESSMENT QUESTIONS AND EXERCISES
Short-Answer Questions
1. Why is power series method used?
2. Define a power series and its functionality.
3. Why is regular singular point used in power series?
4. Define the significance of Bessel, Legendre and Hypergeometric equations
in power series.
5. What are recurrence relations?
6. Why is generating function used?
7. What do you mean by the orthogonality?
8. What role does Bessel functions and Legendre polynomials have in
orthogonality?
Long-Answer Questions
1. Prove that the given power series is centered at 0.

2. Prove that the given power series is centered at 2.

3. Find the radius of convergence of,

4. Find the radius of convergence of,

5. Show that the given function satisfies the differential equation y″ + xy′ − y = 0.
6. Determine the radius of convergence and interval of convergence for the

given power series, .

7. Determine the radius of convergence for the power series .

8. Find a power series representation for the following function and determine
its interval of convergence.

9. Find a power series representation for the following function and determine
its interval of convergence.
f(x) = x/(5 − x)
10. Prove that the Legendre polynomials are orthogonal over (−1, 1) with weighting function 1 and satisfy,

∫_{-1}^{1} Pm(x) Pn(x) dx = (2/(2n + 1)) δmn.
1.10 FURTHER READING

K. P. Gupta and J. K. Goyal. 2013. Integral Transform. Meerut (UP): Pragati Prakashan.
Sharma, J. N. and R. K. Gupta. 2015. Differential Equations (Paperback
Edition). Meerut (UP): Krishna Prakashan Media (P) Ltd.
Raisinghania, M. D. 2013. Ordinary and Partial Differential Equations. New
Delhi: S. Chand Publishing.
Coddington, Earl A. and N. Levinson. 1972. Theory of Ordinary Differential Equations. New Delhi: Tata McGraw-Hill.
Coddington, Earl A. 1987. An Introduction to Ordinary Differential Equations.
New Delhi: Prentice Hall of India.
Boyce, W. E. and Richard C. DiPrima. 1986. Elementary Differential Equations
and Boundary Value Problems. New York: John Wiley and Sons, Inc.
Ross, S. L. 1984. Differential Equations, 3rd Edition. New York: John Wiley
and Sons.
Sneddon, I. N. 1986. Elements of Partial Differential Equations. New York:
McGraw-Hill Education.

UNIT 2 LAPLACE TRANSFORMATION


Structure
2.0 Introduction
2.1 Objectives
2.2 Laplace Transformation
2.3 Existence Theorem for Laplace Transforms
2.4 Laplace Transforms of Derivatives and Integrals
2.5 Shifting Theorems
2.6 Differentiation and Integration of Transforms
2.7 Answers to ‘Check Your Progress’
2.8 Summary
2.9 Key Terms
2.10 Self-Assessment Questions and Exercises
2.11 Further Reading

2.0 INTRODUCTION
Laplace was a French mathematician, astronomer and physicist who played a
leading role in the development of the metric system. The Laplace transform is
widely used in engineering applications (mechanical and electronic), especially
where the driving force is discontinuous. It is also used in process control. The
Laplace transformation helps us to solve an equation or system of equations
containing differential and integral terms by transforming the equation in ‘t’ space
to one in ‘s’ space making the problem much easier to solve. The Laplace transform
provides a useful method of solving certain types of differential equations when
certain initial conditions are given, especially when the initial values are zero.
The Laplace transform is similar to the Fourier transform. While the Fourier
transform of a function is a complex function of a real variable (frequency), the
Laplace transform of a function is a complex function of a complex variable. The
Laplace transform is usually restricted to transformation of functions of t with
t  0. A consequence of this restriction is that the Laplace transform of a function
is a holomorphic function of the variable s. Unlike the Fourier transform, the Laplace
transform of a distribution is generally a well-behaved function. Techniques of
complex variables can also be used to directly study Laplace transforms. As a
holomorphic function, the Laplace transform has a power series representation.
This power series expresses a function as a linear superposition of moments of the
function. This perspective has applications in probability theory.
In this unit, you will study the Laplace transformation, the existence theorem for Laplace transforms, Laplace transforms of derivatives and integrals, shifting theorems, and the differentiation and integration of transforms.

2.1 OBJECTIVES
After going through this unit, you will be able to:
NOTES  Discuss the Laplace transformation
 Describe the existence theorem for Laplace transforms
 Explain the Laplace transforms of derivatives and integrals
 Define and understand the shifting theorems
 Understand differentiation and integration of transforms

2.2 LAPLACE TRANSFORMATION


In mathematics, the Laplace transform is a widely used integral transform and is denoted by ℒ. It is a linear operator on a function f(t) with a real argument t (t ≥ 0) that transforms it into a function F(s) with a complex argument s. As a
bijective transformation the respective pairs of f(t) and F(s) are matched in tables.
The Laplace transform has the significant property so that various relationships
and operations over the originals f(t) correspond to simpler relationships and
operations over the images F(s).
The Laplace transform can be related to the Fourier transform. The Fourier
transform resolves a function or signal into its modes of vibration and the Laplace
transform resolves a function into its moments. The original signal depends on time
and therefore Laplace transform is called the time domain representation of the
signal, whereas the Fourier transform depends on frequency and is called the
frequency domain representation of the signal. Similar to the Fourier transform,
the Laplace transform is also used for solving differential and integral equations. In
physics and engineering, it is used for analysis of linear time-invariant systems such
as electrical circuits, harmonic oscillators, optical devices and mechanical systems.
Switching from operations of calculus to algebraic operations on transforms
is known as operational calculus which is an essential area of applied mathematics
and with regard to an engineer, the Laplace transform method is basically a very
essential operational technique. It is particularly useful in problems where the
mechanical or electrical driving force has discontinuities, is impulsive or is a
complicated periodic function, not merely a sine or cosine.
Another benefit of the Laplace transform is that it helps in solving the problems
in a straightforward manner, initial value problems regardless of initially obtaining a
basic solution, and nonhomogeneous differential equation exclusive of initially
answering the corresponding homogeneous equation.
In this chapter we consider Laplace transforms from a practical approach
and exemplify their usage through essential engineering problems wherein many of
them are associated with ordinary differential equations. Partial differential equations
can also be treated by Laplace transforms.

The Laplace transform is named in honor of mathematician and astronomer Pierre-Simon Laplace, who used the transform in his work on probability theory. Leonhard Euler considered integrals of the form,

z = ∫ X(x) e^(ax) dx and z = ∫ X(x) x^A dx.
These integrals were the solutions of differential equations but were not
used in the long run. Joseph Louis Lagrange was an admirer of Euler and in his
work on integrating probability density functions, explored expressions of the form,

∫ X(x) e^(−ax) a^x dx.
This was interpreted within modern Laplace transform theory. These integrals
have attracted Laplace’s attention for using the integrals themselves as solutions of
equations. He used an integral of the form,

∫ x^s φ(x) dx.
This integral was akin to a Mellin transform, to transform the whole of a


difference equation in order to look for solutions of the transformed equation.
The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), defined by:

F(s) = ℒ{f(t)} = ∫_0^∞ e^(−st) f(t) dt.

The parameter s is a complex number s = σ + iω with real numbers σ and ω. The meaning of the integral depends on the types of functions of interest. A necessary condition for existence of the integral is that f must be locally integrable on (0, ∞). For locally integrable functions that decay at infinity or are of exponential type, the integral can be understood as a (proper) Lebesgue integral. However, for various applications it is considered as a conditionally convergent improper integral at ∞.
The Laplace transform can also be defined for a finite Borel measure μ by the Lebesgue integral of the form,

ℒ{μ}(s) = ∫_[0,∞) e^(−st) dμ(t).

An important special case is where μ is a probability measure or, more specifically, the Dirac delta function. In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a distribution function f. In that case the expression is of the form,

ℒ{f}(s) = ∫_(0−)^∞ e^(−st) f(t) dt.

Here the lower limit of 0− is short notation that means,

lim (ε → 0+) ∫_(−ε)^∞.
This limit emphasizes that any point mass located at 0 is entirely captured by the Laplace transform.
Bilateral Laplace Transform
When the Laplace transform is used without qualification, the unilateral or one-sided transform is normally meant. Alternatively, the Laplace transform
can be defined as the bilateral Laplace transform or two-sided Laplace
transform by extending the limits of integration to be the entire real axis. If that is
done the common unilateral transform simply becomes a special case of the bilateral
transform where the definition of the function being transformed is multiplied by
the Heaviside step function. The bilateral Laplace transform is defined as follows:

F(s) = ℒ{f}(s) = ∫_(−∞)^∞ e^(−st) f(t) dt.
Inverse Laplace Transform


The inverse Laplace transform is also known by various names, such as the Bromwich integral, the Fourier–Mellin integral and Mellin's inverse formula. It is given by the following complex integral:

f(t) = ℒ^(−1){F}(t) = (1/(2πi)) lim (T → ∞) ∫_(γ−iT)^(γ+iT) e^(st) F(s) ds,

where γ is a real number chosen so that the contour path of integration is in the region of convergence of F(s).
Region of Convergence
If f is a locally integrable function, then the Laplace transform F(s) of f converges provided that the following limit exists:

lim (R → ∞) ∫_0^R f(t) e^(−st) dt.

The Laplace transform converges absolutely if the following integral exists:

∫_0^∞ |f(t) e^(−st)| dt.

The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former instead of the latter sense.
The set of values for which F(s) converges absolutely is either of the form Re{s} > a or else Re{s} ≥ a, where a is an extended real constant, −∞ ≤ a ≤ ∞. This follows from the dominated convergence theorem. The constant
a is known as the abscissa of absolute convergence, and depends on the
growth behavior of ƒ(t). Analogously, the two-sided transform converges absolutely
in a strip of the form a < Re{s} < b and possibly including the lines Re{s} = a or
Re{s} = b. The subset of values of s for which the Laplace transform converges
absolutely is called the region of absolute convergence or the domain of absolute
convergence. In the two-sided case, it is sometimes called the strip of absolute
convergence. The Laplace transform is analytic in the region of absolute convergence.
Similarly, the set of values for which F(s) converges (conditionally or
absolutely) is known as the region of conditional convergence or simply the region
NOTES
of convergence. If the Laplace transform converges (conditionally) at s = s0, then
it automatically converges for all s with Re{s} > Re{s0}. Therefore the region of
convergence is a half-plane of the form Re{s} > a, possibly including some points
of the boundary line Re{s} = a. In the region of convergence Re{s} > Re{s0}, the
Laplace transform of f can be expressed by integrating by parts as the integral,

F(s) = (s − s0) ∫_0^∞ e^(−(s−s0)t) β(t) dt, where β(u) = ∫_0^u e^(−s0 t) f(t) dt.

That is, in the region of convergence F(s) can effectively be expressed as


the absolutely convergent Laplace transform of some other function. In particular,
it is analytic. A variety of theorems, in the form of Paley–Wiener Theorems, exist
concerning the relationship between the decay properties of ƒ and the properties
of the Laplace transform within the region of convergence.
Differential equations and corresponding initial as well as boundary value
problems can be solved through the Laplace transform method. There are three
basic steps for the process of solution:
Step 1. Transformation of the provided hard problem is done into a simple
equation (Subsidiary Equation).
Step 2. The use of purely algebraic modifications is done for solving the
subsidiary equation.
Step 3. The answer obtained of the subsidiary equation is again transformed
for getting the answer of the provided problem.
Through this, Laplace transforms help in reducing the problem of solving a differential equation to an algebraic problem. Tables of functions and their transforms have made this process an easy task to perform; their role is quite equivalent to that of integral tables in calculus. (Refer to the standard table of transforms given at the end of the chapter.)
Consider a given function f(t) that is defined for all t ≥ 0. Multiply f(t) by e^(−st) and integrate with respect to t from zero to infinity. If the resultant integral exists with some finite value then it is a function of s, represented as F(s):

F(s) = ∫_0^∞ e^(−st) f(t) dt.

This function F(s) of the variable s is the Laplace transform of the basic function f(t) and is depicted by L(f). Hence,

F(s) = L(f) = ∫_0^∞ e^(−st) f(t) dt.    (2.1)

Here the basic function f depends on t, and the new function F, its transform, depends on s. The process that provides F(s) from a given f(t) is the Laplace transform.

The basic function f(t) in Equation (2.1) is called the inverse transform or inverse of F(s) and is depicted by L^(−1)(F). It is written as,

f(t) = L^(−1)(F).
Notation
The basic functions are indicated by lowercase letters and the associated
transforms by the same letters in capitals. Implying F(s) indicates the transform of
f(t) and Y(s) indicates the transform of y(t).
Example 2.1: If f(t) = 1 for t ≥ 0 then find F(s).

Solution: From Equation (2.1) using integration we get,

L(f) = L(1) = ∫_0^∞ e^(−st) dt = [−(1/s) e^(−st)]_0^∞ = 1/s  (s > 0).

The notation is appropriate. Here the interval of integration in Equation (2.1) is infinite, and such an integral is termed an improper integral. According to the rule,

∫_0^∞ e^(−st) f(t) dt = lim (T → ∞) ∫_0^T e^(−st) f(t) dt.

Hence, the notation means,

∫_0^∞ e^(−st) dt = lim (T → ∞) [−(1/s) e^(−st)]_0^T = lim (T → ∞) [−(1/s) e^(−sT) + (1/s) e^0] = 1/s  (s > 0).
Example 2.2: Let f(t) = e^(at) for t ≥ 0, where a is a constant. Find L(f) of the exponential function.

Solution: Using Equation (2.1) we get,

L(e^(at)) = ∫_0^∞ e^(−st) e^(at) dt = [−(1/(s − a)) e^(−(s−a)t)]_0^∞.

If s − a > 0 then we get,

L(e^(at)) = 1/(s − a).
Theorem 2.1: Linearity of the Laplace Transform

The Laplace transform is a linear operation; that is, for any functions f(t) and g(t) whose Laplace transforms exist and any constants a and b,

L{a f(t) + b g(t)} = a L{f(t)} + b L{g(t)}.

Proof: By the definition,

L{a f(t) + b g(t)} = ∫_0^∞ e^(−st) [a f(t) + b g(t)] dt

= a ∫_0^∞ e^(−st) f(t) dt + b ∫_0^∞ e^(−st) g(t) dt

= a L{f(t)} + b L{g(t)}.

Example 2.3: Using Theorem 2.1 find L(f) if f(t) = cosh at = (1/2)(e^(at) + e^(−at)).

Solution: Using Theorem 2.1 and Example 2.2 we have,

L(cosh at) = (1/2) L(e^(at)) + (1/2) L(e^(−at)) = (1/2) [1/(s − a) + 1/(s + a)].

By taking the common denominator, while s > a (≥ 0), we have,

L(cosh at) = s/(s² − a²).

2.3 EXISTENCE THEOREM FOR LAPLACE TRANSFORMS
This cannot be termed as a major practical problem, as in many situations the solution of a differential equation can be checked through substitution without any hassle. For a fixed s the integral in Equation (2.1) will exist if the complete integrand e^(−st) f(t) tends to zero as t → ∞ like an exponential function with a negative exponent. This implies that f(t) itself should not grow faster than e^(kt).

Let f(t) be a function piecewise continuous on [0, A] (for every A > 0) and of exponential order at infinity, with |f(t)| ≤ M e^(kt). Then the Laplace transform is defined for s > k; that is, F(s) exists for all s > k.
The function f(t) need not be continuous. The piecewise continuity is of practical importance because discontinuous inputs are exactly those for which the Laplace transform method becomes particularly useful. By definition, a function f(t) is piecewise continuous on a finite interval a ≤ t ≤ b if f(t) is defined on that interval and is such
that the interval can be subdivided into finitely many intervals in each of which f(t)
is continuous and has finite limits as t approaches either endpoint of the interval of
subdivision from the interior.
Theorem 2.2: Existence Theorem for Laplace Transforms
Let f(t) be a function that is piecewise continuous on every finite interval in the range t ≥ 0 and satisfies,

|f(t)| ≤ M e^(kt) for all t ≥ 0,    (2.2)
For some constants k and M. Then the Laplace transform of f(t) exists
NOTES for all s > k.
Proof: Since f(t) is piecewise continuous, e^(−st) f(t) is integrable over any finite interval on the t-axis. From Equation (2.2), taking s > k, we obtain the following expression:

|L(f)| ≤ ∫_0^∞ |f(t)| e^(−st) dt ≤ ∫_0^∞ M e^(kt) e^(−st) dt = M/(s − k).

Here the condition s > k was required for the existence of the last integral. Hence proved.
The conditions of Theorem 2.2 are easy to check when we want to find whether a given function satisfies an inequality of the form given in Equation (2.2). For example,

cosh t ≤ e^t,  t^n ≤ n! e^t  (n = 0, 1, …) for all t > 0.    (2.3)

Any function which is bounded in absolute value for all t ≥ 0, e.g., the sine and cosine functions of a real variable, satisfies the condition. A function that does not satisfy a relation of the form given in Equation (2.2) is the exponential function e^(t²), because no matter how large we take M and k in Equation (2.2),

e^(t²) > M e^(kt) for all t > t0,

where t0 is a sufficiently large number which depends on M and k.
The conditions in Theorem 2.2 are sufficient and not necessary. For example, the function 1/√t is infinite at t = 0, but its transform exists. From the definition, with the substitution st = x and using Γ(1/2) = √π, we have,

L(t^(−1/2)) = ∫_0^∞ e^(−st) t^(−1/2) dt = (1/√s) ∫_0^∞ e^(−x) x^(−1/2) dx = Γ(1/2)/√s = √(π/s).
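A numerical spot check of this transform (an added sketch, not in the original text): the substitution t = u² removes the singularity at t = 0, since ∫_0^∞ e^(−st) t^(−1/2) dt = 2 ∫_0^∞ e^(−s u²) du.

```python
import math

def laplace_inv_sqrt(s, U=8.0, n=80000):
    """L(1/sqrt(t))(s) computed as 2 * integral of e^(-s u^2) du on [0, infinity),
    after substituting t = u^2 to remove the singularity at t = 0."""
    h = U / n
    return 2.0 * h * sum(math.exp(-s * ((i + 0.5) * h)**2) for i in range(n))

val = laplace_inv_sqrt(1.0)  # expected: sqrt(pi/s) with s = 1, i.e., sqrt(pi)
```

The result matches √(π/s) closely, confirming that the transform exists even though Theorem 2.2's hypotheses fail at t = 0.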
Uniqueness

If the Laplace transform of a given function exists, it is uniquely determined. Conversely, it can also be shown that if two functions defined on the positive real axis have the same transform, these functions cannot differ over an interval of positive length, although they may differ at isolated points. As this is of no significance in applications, we may say that the inverse of a given transform is essentially unique. In particular, if two continuous functions have the same transform, they are completely identical. Indeed, this is of practical importance.
Check Your Progress
1. Why is Laplace transform used in mathematics?
2. How will you represent Laplace transform of a function?
3. Write the three basic steps used in Laplace transform.
4. Write the linearity theorem of the Laplace transform.
5. Define the existence theorem for Laplace transform.

2.4 LAPLACE TRANSFORMS OF DERIVATIVES AND INTEGRALS
Proof of the Laplace Transform of a Function’s Derivative
It is often convenient to use the differentiation property of the Laplace transform to find the transform of a function's derivative. This can be derived from the basic expression for a Laplace transform as follows:

ℒ{f′(t)} = ∫_(0−)^∞ e^(−st) f′(t) dt = [e^(−st) f(t)]_(0−)^∞ + s ∫_(0−)^∞ e^(−st) f(t) dt,

yielding,

ℒ{f′(t)} = s ℒ{f(t)} − f(0−).

In the bilateral case,

ℒ{f′(t)} = s ∫_(−∞)^∞ e^(−st) f(t) dt = s ℒ{f(t)}.

The general result,

ℒ{f^(n)(t)} = s^n ℒ{f(t)} − s^(n−1) f(0−) − … − f^(n−1)(0−),

where f^(n) is the nth derivative of f, can then be established with an inductive argument.
Laplace Transform of the Derivative

Consider that the Laplace transform of y(t) is Y(s). Then the Laplace transform of y′(t) is,

L{y′(t)} = s Y(s) − y(0).

For the second derivative we have,

L{y″(t)} = s² Y(s) − s y(0) − y′(0).

For the nth derivative we have,

L{y^(n)(t)} = s^n Y(s) − s^(n−1) y(0) − s^(n−2) y′(0) − … − y^(n−1)(0).

Derivatives of the Laplace Transform

Let Y(s) be the Laplace transform of y(t). Then,

L{t y(t)} = −Y′(s).

We can compute the Laplace transform of t sin(ωt) as follows:

L{t sin(ωt)} = −d/ds [ω/(s² + ω²)] = 2ωs/(s² + ω²)².

The Laplace transform method is used for solving differential equations. The Laplace transform replaces operations of calculus by algebraic operations on transforms. Roughly, differentiation of f(t) is replaced by multiplication of L(f) by s, and integration of f(t) is replaced by division of L(f) by s.

Theorem 2.3: Laplace Transform of the Derivative of f(t)

Suppose that f(t) is continuous for all t ≥ 0, satisfies Equation (2.2) for some k and M, and has a derivative f′(t) that is piecewise continuous on every finite interval in the range t ≥ 0. Then the Laplace transform of the derivative f′(t) exists when s > k, and,

L(f′) = s L(f) − f(0)  (s > k).

Proof: Consider the situation when f′(t) is continuous for all t ≥ 0. Then, by the definition and by integration by parts we have,

L(f′) = ∫_0^∞ e^(−st) f′(t) dt = [e^(−st) f(t)]_0^∞ + s ∫_0^∞ e^(−st) f(t) dt.

Since f satisfies Equation (2.2), the integrated portion on the right is zero at the upper limit when s > k, and at the lower limit it contributes −f(0). The last integral is L(f), which exists for s > k. This proves that the expression on the right exists when s > k and is equal to −f(0) + s L(f). Consequently, L(f′) exists when s > k. If the derivative f′(t) is only piecewise continuous, the proof is quite similar: in this case, the range of integration in the original integral is split into parts in each of which f′ is continuous. This theorem may be extended to piecewise continuous functions f(t).
Theorem 2.4: Laplace Transform of the Derivative of Any Order n

Let f(t) and its derivatives f′(t), f″(t), …, f^(n−1)(t) be continuous functions for all t ≥ 0, satisfying Equation (2.2) for some k and M, and let the derivative f^(n)(t) be piecewise continuous on every finite interval in the range t ≥ 0. Then the Laplace transform of f^(n)(t) exists when s > k and is given by,

L(f^(n)) = s^n L(f) − s^(n−1) f(0) − s^(n−2) f′(0) − … − f^(n−1)(0).
Example 2.4: If f(t) = t² then derive L(f) from L(1).

Solution: Since f(0) = 0, f′(0) = 0, f″(t) = 2 and L(2) = 2 L(1) = 2/s,

we get,

L(f″) = L(2) = 2/s = s² L(f), hence L(t²) = 2/s³.
Example 2.5: Derive the Laplace transform of cos ωt.

Solution: Let f(t) = cos ωt.

Then f″(t) = −ω² cos ωt = −ω² f(t). Also f(0) = 1, f′(0) = 0.

Now we take the transform, L(f″) = −ω² L(f).

We get,

−ω² L(f) = L(f″) = s² L(f) − s,

hence L(f) = L(cos ωt) = s/(s² + ω²).
Example 2.6: If f(t) = sin² t then find L(f).

Solution: Given is f(0) = 0 and f′(t) = 2 sin t cos t = sin 2t,

which gives,

L(sin 2t) = 2/(s² + 4) = s L(f), or L(sin² t) = 2/(s(s² + 4)).
Example 2.7: If f(t) = t sin ωt, then find L(f).
Solution: Given is, f(0) = 0 and,

f ′(t) = sin ωt + ωt cos ωt, so f ′(0) = 0,
f ″(t) = 2ω cos ωt − ω² t sin ωt = 2ω cos ωt − ω² f(t).

Also,

L(f ″) = 2ω L(cos ωt) − ω² L(f) = s² L(f).

Using the formula for the Laplace transform of cos ωt, we obtain,

(s² + ω²) L(f) = 2ω L(cos ωt) = 2ωs/(s² + ω²).

The outcome is,

L(t sin ωt) = 2ωs/(s² + ω²)².
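The three results above (Examples 2.5 to 2.7) can be verified the same way; the following sketch, again assuming SymPy is available, checks each derived transform.

```python
import sympy as sp

t, s, w = sp.symbols('t s omega', positive=True)

# Each pair is (f(t), the transform derived in the text).
checks = [
    (sp.cos(w*t),   s/(s**2 + w**2)),          # Example 2.5
    (sp.sin(t)**2,  2/(s*(s**2 + 4))),         # Example 2.6
    (t*sp.sin(w*t), 2*w*s/(s**2 + w**2)**2),   # Example 2.7
]
for f, expected in checks:
    F = sp.laplace_transform(f, t, s, noconds=True)
    assert sp.simplify(F - expected) == 0
```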

Laplace Transform of the Integral of a Function

Differentiation and integration are inverse processes. Consequently, as differentiation of a function corresponds to multiplication of its transform by s, we expect integration of a function to correspond to division of its transform by s, because division is the inverse operation of multiplication.
Theorem 2.5: Integration of f(t)
Let F(s) be the Laplace transform of f(t). If f(t) is piecewise continuous and satisfies an inequality of the form |f(t)| ≤ Me^(kt), then

L{∫₀^t f(τ) dτ} = (1/s) F(s) for (s > 0, s > k)

Or, if the inverse transform of both sides of the above equation is taken,

∫₀^t f(τ) dτ = L⁻¹{(1/s) F(s)}.
Proof: Suppose that f(t) is piecewise continuous and satisfies Equation (2.2) for some k and M. Clearly, if Equation (2.2) holds for some negative k, it also holds for positive k; hence we may assume that k is positive. Then the integral,

g(t) = ∫₀^t f(τ) dτ

is continuous, and by using Equation (2.2) we obtain for any positive t,

|g(t)| ≤ ∫₀^t |f(τ)| dτ ≤ M ∫₀^t e^(kτ) dτ = (M/k)(e^(kt) − 1) ≤ (M/k) e^(kt) for (k > 0).

This shows that g(t) also satisfies an inequality of the form given in Equation (2.2). Also, g′(t) = f(t), except for points at which f(t) is discontinuous. Hence g′(t) is piecewise continuous on each finite interval, and the theorem on the transform of a derivative gives,

L(f(t)) = L(g′(t)) = sL(g(t)) − g(0) for (s > k).

Here, clearly, g(0) = 0, so that L(f) = sL(g), which proves the theorem.
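The integration property just proved can likewise be checked on a concrete function; the sketch below (SymPy assumed) uses f(t) = sin t, for which g(t) = 1 − cos t.

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

g = sp.integrate(sp.sin(tau), (tau, 0, t))                 # g(t) = 1 - cos(t)
F = sp.laplace_transform(sp.sin(t), t, s, noconds=True)    # F(s) = 1/(s^2 + 1)
G = sp.laplace_transform(g, t, s, noconds=True)

# Theorem 2.5: L{ integral of f from 0 to t } = F(s)/s
assert sp.simplify(G - F/s) == 0
```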

Check Your Progress

6. Why is the differentiation property of the Laplace transform used?
7. State the theorem on the Laplace transform of the derivative of any order n.
2.5 SHIFTING THEOREMS

Theorem 2.6: First Shifting Theorem
If f(t) has the transform F(s) (where s > k), then e^(at) f(t) has the transform F(s − a), where s − a > k. As a formula,

L(e^(at) f(t)) = F(s − a)

Or, if we take the inverse on both sides, then we have,

e^(at) f(t) = L⁻¹{F(s − a)}.

Proof: We obtain F(s − a) by replacing s by s − a in the defining integral, to get,

F(s − a) = ∫₀^∞ e^(−(s−a)t) f(t) dt = ∫₀^∞ e^(−st) [e^(at) f(t)] dt = L(e^(at) f(t)).

If F(s) exists (i.e., is finite) for s greater than some k, then the first integral exists for s − a > k. Now take the inverse on both sides to obtain the second formula in the theorem.
Damped Vibrations
From the first shifting theorem we obtain the following useful formulas:

L(e^(at) cos ωt) = (s − a)/((s − a)² + ω²)

L(e^(at) sin ωt) = ω/((s − a)² + ω²).

For negative a these f(t) are damped vibrations.
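A quick symbolic check of the two damped-vibration formulas (SymPy assumed; a is taken negative to match the damped case):

```python
import sympy as sp

t, s, w = sp.symbols('t s omega', positive=True)
a = sp.Symbol('a', negative=True)   # negative a: damped vibrations

F_cos = sp.laplace_transform(sp.exp(a*t)*sp.cos(w*t), t, s, noconds=True)
F_sin = sp.laplace_transform(sp.exp(a*t)*sp.sin(w*t), t, s, noconds=True)

assert sp.simplify(F_cos - (s - a)/((s - a)**2 + w**2)) == 0
assert sp.simplify(F_sin - w/((s - a)**2 + w**2)) == 0
```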
Some Important Transforms
Table 2.1 gives a list of basic transforms which includes some functions f(t) and
their Laplace transforms L(f). From these transforms we can obtain nearly all the
other transforms that we require.
Table 2.1 Some Functions f(t) and Their Laplace Transforms L(f)

      f(t)                  L(f)               |       f(t)               L(f)
 1    1                     1/s                |   7   cos ωt             s/(s² + ω²)
 2    t                     1/s²               |   8   sin ωt             ω/(s² + ω²)
 3    t²                    2!/s³              |   9   cosh at            s/(s² − a²)
 4    t^n (n = 0, 1, …)     n!/s^(n+1)         |  10   sinh at            a/(s² − a²)
 5    t^a (a positive)      Γ(a + 1)/s^(a+1)   |  11   e^(at) cos ωt      (s − a)/((s − a)² + ω²)
 6    e^(at)                1/(s − a)          |  12   e^(at) sin ωt      ω/((s − a)² + ω²)
Proofs: Formulas 1, 2 and 3 in Table 2.1 are special cases of formula 4. We prove formula 4 by induction. It is true for n = 0 because 0! = 1. We now take the induction hypothesis that it holds for any positive integer n. Using integration by parts, we get,

L(t^(n+1)) = ∫₀^∞ e^(−st) t^(n+1) dt = [−(1/s) e^(−st) t^(n+1)]₀^∞ + ((n + 1)/s) ∫₀^∞ e^(−st) t^n dt

The integral-free part is zero at t = 0 and for t → ∞. The right side equals ((n + 1)/s) L(t^n). From this and the induction hypothesis we obtain,

L(t^(n+1)) = ((n + 1)/s) L(t^n) = ((n + 1)/s) · (n!/s^(n+1)) = (n + 1)!/s^(n+2).

This proves Formula 4 of Table 2.1.
Γ(a + 1) in Formula 5 in Table 2.1 is the gamma function. We get formula 5 by setting st = x:

L(t^a) = ∫₀^∞ e^(−st) t^a dt = ∫₀^∞ e^(−x) (x/s)^a (dx/s) = (1/s^(a+1)) ∫₀^∞ e^(−x) x^a dx

where s > 0. The last integral is precisely the one that defines Γ(a + 1), so we have L(t^a) = Γ(a + 1)/s^(a+1), as claimed.

Note that Γ(n + 1) = n! for non-negative integer n, so that formula 4 also follows from 5.
From Table 2.1 and the first shifting theorem we immediately obtain another useful formula,

L(t^n e^(at)) = n!/(s − a)^(n+1).

For example, L(t e^(at)) = 1/(s − a)².
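The formula L(t^n e^(at)) = n!/(s − a)^(n+1) can be spot-checked for the first few n (SymPy assumed):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.Symbol('a', real=True)

for n in range(4):   # n = 0, 1, 2, 3
    F = sp.laplace_transform(t**n * sp.exp(a*t), t, s, noconds=True)
    assert sp.simplify(F - sp.factorial(n)/(s - a)**(n + 1)) == 0
```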
Theorem 2.7: Second Shifting Theorem, Dirac's Delta Function

The unit step function u(t − a) is a function and not a product. Here, 'u' is the function name, 't' is the variable and 'a' is a constant. The unit step function works in a very simple but efficient way: the value of u is always zero as long as 't' is smaller than 'a', and turns to one as soon as 't' is larger than 'a'.

It works like an 'ON' switch at time t = a. But this means that if we multiply the unit step function by any arbitrary function f(t), the resulting product is zero as long as 't' is smaller than 'a' and turns to f(t) as soon as 't' is larger than 'a'.
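This switching behaviour is easy to see numerically; a small sketch follows (SymPy's Heaviside assumed as the unit step, with the switch time a = 2 chosen arbitrarily):

```python
import sympy as sp

t = sp.Symbol('t', positive=True)
a = 2                      # hypothetical switch-on time

u = sp.Heaviside(t - a)
assert u.subs(t, 1) == 0   # before t = a the switch is off
assert u.subs(t, 3) == 1   # after t = a the switch is on

# Multiplying u(t - a) into f(t) suppresses f until t = a:
f = sp.sin(t)
assert (u*f).subs(t, 1) == 0
assert (u*f).subs(t, 3) == sp.sin(3)
```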

2.6 DIFFERENTIATION AND INTEGRATION OF TRANSFORMS

Various techniques are used for obtaining transforms and inverse transforms, and their application to solving differential equations is extensive. Here the differentiation and integration of the transform F(s) are considered, and the corresponding operations are identified for basic functions f(t). So far, differentiation and integration have been performed in the 't' space; we now transform these operations into the 's' space.
Differentiation of Laplace Transforms
Let F(s) be the Laplace transform of the function f(t). This means, by the definition of the transform, that,

F(s) = L(f) = ∫₀^∞ e^(−st) f(t) dt

Since the integration is done with respect to t, we may perform the differentiation with respect to s inside the integral. It can be shown that if f(t) satisfies the conditions of the existence theorem, then the derivative F′(s) of its transform is,

F′(s) = −∫₀^∞ e^(−st) t f(t) dt.

Thus, if we differentiate the transform itself with respect to s, this results in a multiplication of the original function by −t. Consequently, if L(f) = F(s), then L(t f(t)) = −F′(s); that is, differentiation of the transform of a function corresponds to the multiplication of the function by −t. Equivalently,

L⁻¹(F′(s)) = −t f(t).

This property helps to find further transforms.
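The rule L(t f(t)) = −F′(s) can be confirmed on a sample function (SymPy assumed; f(t) = e^(−t) is an arbitrary choice):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.exp(-t)                                           # sample function

F = sp.laplace_transform(f, t, s, noconds=True)          # F(s) = 1/(s + 1)
lhs = sp.laplace_transform(t*f, t, s, noconds=True)      # L(t f(t))

assert sp.simplify(lhs + sp.diff(F, s)) == 0             # L(t f(t)) = -F'(s)
```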
Integration of Transforms
Again, we use the definition of the Laplace transform in order to find the integration of a transform. If f(t) satisfies the conditions of the existence theorem and the limit of f(t)/t, as t approaches 0 from the right, exists, then

L{f(t)/t} = ∫_s^∞ F(s̃) ds̃ for (s > k);

In this way, integration of the transform of a function f(t) corresponds to the division of f(t) by t. Equivalently,

L⁻¹{∫_s^∞ F(s̃) ds̃} = f(t)/t

In fact, from the definition it follows that,

∫_s^∞ F(s̃) ds̃ = ∫_s^∞ [∫₀^∞ e^(−s̃t) f(t) dt] ds̃

and it can also be shown that under these assumptions we may reverse the order of integration, as,

∫_s^∞ F(s̃) ds̃ = ∫₀^∞ [∫_s^∞ e^(−s̃t) ds̃] f(t) dt

Integration of e^(−s̃t) with respect to s̃ gives e^(−s̃t)/(−t). Hence the integral over s̃ on the right equals e^(−st)/t. Therefore,

∫_s^∞ F(s̃) ds̃ = ∫₀^∞ e^(−st) (f(t)/t) dt = L{f(t)/t} for (s > k).

Example 2.8: Find the inverse transform of the function ln(1 + ω²/s²).
Solution: By differentiation,

d/ds [ln(1 + ω²/s²)] = (1/(1 + ω²/s²)) · (−2ω²/s³) = −2ω²/(s(s² + ω²)) = −(2/s − 2s/(s² + ω²))

The final equality can be checked through direct calculation. The expression in parentheses,

F(s) = 2/s − 2s/(s² + ω²),

is the derivative of the given function times −1, so that the given function is the integral of F(s̃) from s to ∞. From Table 2.1 we obtain,

f(t) = L⁻¹(F) = L⁻¹{2/s − 2s/(s² + ω²)} = 2 − 2 cos ωt.

This function satisfies the required conditions. Therefore,

L⁻¹{ln(1 + ω²/s²)} = L⁻¹{∫_s^∞ F(s̃) ds̃} = f(t)/t.

The final result is,

L⁻¹{ln(1 + ω²/s²)} = (2/t)(1 − cos ωt).

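Both steps of Example 2.8 can be verified symbolically (SymPy assumed): the differentiation giving −F(s), and the transform of f(t) = 2 − 2 cos ωt.

```python
import sympy as sp

t, s, w = sp.symbols('t s omega', positive=True)

ln_expr = sp.log(1 + w**2/s**2)
F = 2/s - 2*s/(s**2 + w**2)

# d/ds ln(1 + w^2/s^2) = -F(s)
assert sp.simplify(sp.diff(ln_expr, s) + F) == 0

# f(t) = 2 - 2 cos(wt) indeed has the transform F(s)
lhs = sp.laplace_transform(2 - 2*sp.cos(w*t), t, s, noconds=True)
assert sp.simplify(lhs - F) == 0
```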
Check Your Progress

8. State the first shifting theorem.
9. How are differentiation and integration of transforms done?
2.7 ANSWERS TO 'CHECK YOUR PROGRESS'

1. In mathematics, the Laplace transform is a widely used integral transform and is denoted by L. It is a linear operator of a function f(t) including a real argument t (t ≥ 0) that transforms it to a function F(s) with a complex argument s.
2. The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), defined by:

   F(s) = ∫₀^∞ e^(−st) f(t) dt
3. There are three basic steps for the process of solution:


Step 1. Transformation of the provided hard problem is done into a simple
equation (subsidiary equation).
Step 2. The use of purely algebraic modifications is done for solving the
subsidiary equation.
Step 3. The answer obtained of the subsidiary equation is again transformed
for getting the answer of the provided problem.
4. The Laplace transform is a linear operation; which means, for any functions f(t) and g(t) whose Laplace transforms exist and any constants a and b,

   L{a f(t) + b g(t)} = a L{f(t)} + b L{g(t)}.

5. A function f(t) is piecewise continuous on a finite interval a ≤ t ≤ b if f(t) is defined on that interval and is such that the interval can be subdivided into finitely many intervals, in each of which f(t) is continuous and has finite limits as t approaches either endpoint of the interval of subdivision from the interior.
6. It is convenient to use the differentiation property of the Laplace transform
to find the transform of a function’s derivative.
7. Let f(t) and its derivatives f ′(t), f ″(t), …, f^(n−1)(t) be continuous functions for all t ≥ 0, satisfying |f^(j)(t)| ≤ Me^(kt) for some constants k and M, and let the derivative f^(n)(t) be piecewise continuous on every finite interval in the range t ≥ 0. Then the Laplace transform of f^(n)(t) exists when s > k and is given by,

   L(f^(n)) = s^n L(f) − s^(n−1) f(0) − s^(n−2) f ′(0) − … − f^(n−1)(0).

8. If f(t) has the transform F(s) (where s > k), then e^(at) f(t) has the transform F(s − a), where s − a > k. The formula is,

   L(e^(at) f(t)) = F(s − a)

   Or, if we take the inverse on both sides, then we have,

   e^(at) f(t) = L⁻¹{F(s − a)}.
9. Various techniques are used for the purpose of attaining transforms or inverse
NOTES
transforms and application associated to it for answering differential
equations is quite high. The differentiation and integration of transforms F(s)
is considered and the related operations are recognized for basic functions
f(t).

2.8 SUMMARY
 In mathematics, the Laplace transform is a widely used integral transform and is denoted by L. It is a linear operator of a function f(t) including a real argument t (t ≥ 0) that transforms it to a function F(s) with a complex argument s.
 The Laplace transform has the significant property so that various relationships
and operations over the originals f(t) correspond to simpler relationships
and operations over the images F(s).
 The Laplace transform is named in honor of mathematician and astronomer
Pierre-Simon Laplace, who used the transform in his work on probability
theory. Leonhard Euler considered integrals of the form,

and

 The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), defined by:

   F(s) = ∫₀^∞ e^(−st) f(t) dt
 The Laplace transform can be defined of a finite Borel measure μ by the


Lebesgue integral of the form,

 The bilateral Laplace transform is defined as follows:

 Transformation of the provided hard problem is done into a simple equation


(Subsidiary Equation).
 The use of purely algebraic modifications is done for solving the subsidiary
equation.

 The answer obtained of the subsidiary equation is again transformed for
getting the answer of the provided problem.
 The Laplace transform is a linear operation; which means, for any functions
f(t) and g(t) whose Laplace transforms exist and any constants a and b,
L{a f(t) + b g(t)} = a L{f(t)} + b L{g(t)}.
 A function f(t) is piecewise continuous on a finite interval a ≤ t ≤ b if f(t) is
defined on that interval and is such that the interval can be subdivided into
finitely many intervals in each of which f(t) is continuous and has finite limits
as t approaches either endpoint of the interval of subdivision from the interior.
 Let f t  be a function that is piecewise continuous on every finite interval
in the range t  0 and satisfies,

f t   Me kt for all t  0


For some constants k and M. Then the Laplace transform of f(t) exists for
all s > k.
 It is often convenient to use the differentiation property of the Laplace
transform to find the transform of a function’s derivative.
 The Laplace transform replaces operations of calculus by operations of algebra on transforms. Roughly, differentiation of f(t) is replaced by multiplication of L(f) by s, and integration of f(t) is replaced by division of L(f) by s.
 Suppose that f(t) is continuous for all t ≥ 0, satisfies |f(t)| ≤ Me^(kt) for some k and M, and has a derivative f ′(t) that is piecewise continuous on every finite interval in the range t ≥ 0. Then the Laplace transform of the derivative f ′(t) exists when s > k, and,

   L(f ′) = sL(f) − f(0) for (s > k)
 Let f(t) and its derivatives f ′(t), f ″(t), …, f^(n−1)(t) be continuous functions for all t ≥ 0, satisfying |f^(j)(t)| ≤ Me^(kt) for some constants k and M, and let the derivative f^(n)(t) be piecewise continuous on every finite interval in the range t ≥ 0. Then the Laplace transform of f^(n)(t) exists when s > k and is given by,

   L(f^(n)) = s^n L(f) − s^(n−1) f(0) − s^(n−2) f ′(0) − … − f^(n−1)(0).
 As differentiation of a function corresponds to the multiplication of its transform by s, we expect integration of a function to correspond to division of its transform by s, because division is the inverse operation of multiplication.
 If f(t) has the transform F(s) (where s > k), then e^(at) f(t) has the transform F(s − a), where s − a > k. As a formula,

   L(e^(at) f(t)) = F(s − a)

   Or, if we take the inverse on both sides, then we have,

   e^(at) f(t) = L⁻¹{F(s − a)}.
 Various techniques are used for obtaining transforms and inverse transforms, and their application to solving differential equations is extensive. The differentiation and integration of the transform F(s) are considered, and the corresponding operations are identified for basic functions f(t).

2.9 KEY TERMS


 Laplace transformation: It is a widely used integral transform and is denoted by L. It is a linear operator of a function f(t) including a real argument t (t ≥ 0) that transforms it to a function F(s) with a complex argument s.
 Operational calculus: Switching from operations of calculus to algebraic
operations on transforms is known as operational calculus which is an essential
area of applied mathematics.
 Bilateral Laplace transform: The Laplace transform can be defined as
the bilateral Laplace transform or two-sided Laplace transform by extending
the limits of integration to be the entire real axis.

2.10 SELF-ASSESSMENT QUESTIONS AND


EXERCISES

Short-Answer Questions
1. Why is Laplace transform used?
2. Differentiate between bilateral and inverse Laplace transforms.
3. What do you mean by region of convergence?
4. Differentiate between linearity and existence theorems of Laplace transform.
5. Define the Laplace transform of a function’s derivative.
6. Elaborate on the Laplace transform of the derivative of any order n.
7. Define the shifting theorems.
8. How do differentiation and integration of transforms take place?
9. How is Laplace transform solved schematically?
Long-Answer Questions
1. Discuss the significance of Laplace transform in solving differential equations.
2. Find the Laplace transforms of the following:
(i) 2t³ + 3t² − 5t + 2
(ii) e3(t 1) Self - Learning
Material 65
(iii) (e^(3t) + e^(−2t))²
(iv) sin at cos at
(v) sin³ bt
(vi) 3t² + cos³ bt
(vii) sin at cos bt
3. Find the Laplace transforms of the following:
(i) t³ e^(5t)
(ii) e^(−t) sin(2t + 3)
(iii) cosh at cos bt
(iv) sinh at sin bt
(v) 3t² e^(−3t) + 5e^(3t) cos 2t
4. Find the Laplace transforms of the following:
(i) (2t + 1) sin 2t
(ii) (t + 2) cos 3t
(iii) t² sin at
(iv) t² cos at
(v) t e^(−t) cos 2t
(vi) t e^(−at) sin at
5. Find the Laplace transforms of:
(i) (e^(at) − cos at)/t
(ii) (sin² t)/t
(iii) ((sin 2t)/t)²
(iv) ((sin at)/(at))²
(v) (1 − e^(at))/t
6. Find
(i) L{∫₀^t (sin aτ)/τ dτ}
(ii) Prove that L{∫₀^t (f(τ)/τ) dτ} = (1/s) ∫_s^∞ L[f(t)] ds
(iii) Find L{∫₀^t (sin² τ)/τ dτ}
(iv) Find L{∫₀^t ((τ − e^(aτ))/τ) dτ}
(v) Find L{e^t ∫₀^t τ cos τ dτ}

7. Find the Laplace transform of the following periodic functions:
(i) f(t) = E sin ωt, for 0 ≤ t ≤ (π/ω), and f(t + (π/ω)) = f(t), for all t.
(ii) f(t) = |cos ωt|
(iii) f(t) = 0, for 0 < t < (π/ω); = −sin ωt, for (π/ω) < t < 2(π/ω); and f(t + 2(π/ω)) = f(t), for all t.
8. Find the Laplace transforms of the following functions given that f(t) is a periodic function of period 2π.
(i) f(t) = e^t, for 0 < t < 2π
(ii) f(t) = π − t, for 0 < t < 2π
(iii) f(t) = t², for 0 < t < 2π
(iv) f(t) = t, for 0 < t < π; = 0, for π < t < 2π
(v) f(t) = t, for 0 < t < π; = 2π − t, for π < t < 2π
9. Find the inverse transform of the following:
(i) 1/(3p + 4)⁵
(ii) 1/(2 + 3s)³
(iii) (2s² + 5s + 2)/(s + 1)⁴
(iv) (2s + 1)/(s² + 4)
(v) (4s + 1)/(s + 1)²
(vi) s/(s² + 4)²
(vii) (s² + 3s + 2)/(s² + 4)²
10. Find the inverse transform of the following functions:
(i) 1/(s(s² + 4))
(ii) 1/(s²(s² + a²))
(iii) 1/(s(s + 2)(s − 2))
(iv) s/((s + 2)(s + 7))
(v) s/((2s + 3)(3s + 5))
11. Find the Laplace transforms of:
(i) L(2t² − e^(−t))
(ii) L{(t² + 1)²}
(iii) L{(sin t − cos t)²}
(iv) L(cosh² 4t)
(v) L{f(t)} if f(t) = 0 when 0 < t < 2, and 4 when t > 2
(vi) L{t³ e^(−3t)}
(vii) L{(t + 2)² e^t}

12. Show that ∫₀^∞ t e^(−3t) sin t dt = 3/50.

13. Solve using L{f(t)/t} = ∫_s^∞ F(u) du:
(i) Find L{(sin² t)/t}
(ii) Find L{(1 − e^t)/t}

(iii) Evaluate ∫₀^∞ t e^(−3t) cos t dt
2.11 FURTHER READING


Gupta, K. P. and J. K. Goyal. 2013. Integral Transform. Meerut (UP): Pragati Prakashan.
Sharma, J. N. and R. K. Gupta. 2015. Differential Equations (Paperback
Edition). Meerut (UP): Krishna Prakashan Media (P) Ltd.
Raisinghania, M. D. 2013. Ordinary and Partial Differential Equations. New
Delhi: S. Chand Publishing.
Coddington, Earl A. and N. Levinson. 1972. Theory of Ordinary Differential Equations. New Delhi: Tata McGraw-Hill.
Coddington, Earl A. 1987. An Introduction to Ordinary Differential Equations.
New Delhi: Prentice Hall of India.
Boyce, W. E. and Richard C. DiPrima. 1986. Elementary Differential Equations
and Boundary Value Problems. New York: John Wiley and Sons, Inc.
Ross, S. L. 1984. Differential Equations, 3rd Edition. New York: John Wiley
and Sons.
Sneddon, I. N. 1986. Elements of Partial Differential Equations. New York:
McGraw-Hill Education.

Table of Laplace Transforms (operation transforms and function transforms)
UNIT 3 LAPLACE TRANSFORMS: INVERSE AND SOLVING DIFFERENTIAL EQUATIONS
Structure
3.0 Introduction
3.1 Objectives
3.2 Inverse Laplace Transforms
3.3 Convolution Theorem
3.4 Application of Laplace Transformation in Solving Linear Differential
Equations with Constant Coefficients
3.5 Answers to ‘Check Your Progress’
3.6 Summary
3.7 Key Terms
3.8 Self-Assessment Questions and Exercises
3.9 Further Reading

3.0 INTRODUCTION
In mathematics, the inverse Laplace transform of a function F(s) is the piecewise-
continuous and exponentially-restricted real function f(t). It can be proven that, if
a function F(s) has the inverse Laplace transform f(t), then f(t) is uniquely
determined (considering functions which differ from each other only on a point set
having Lebesgue measure zero as the same). This result was first proven by Mathias
Lerch in 1903 and is known as Lerch’s Theorem. The Laplace transform and the
inverse Laplace transform together have a number of properties that make them
useful for analysing linear dynamical systems.
In this unit, you will study about the inverse Laplace transforms, Convolution
theorem and application of Laplace transformation in solving linear differential
equations with constant coefficients.

3.1 OBJECTIVES
After going through this unit, you will be able to:
 Briefly explain the inverse Laplace transforms
 Derive the convolution theorem
 Discuss the application of Laplace transformation in solving linear differential
equations with constant coefficients

3.2 INVERSE LAPLACE TRANSFORMS
We can now define the inverse Laplace transform:

Given a function F(s), the inverse Laplace transform of F, denoted by L⁻¹[F], is that function f whose Laplace transform is F.

More succinctly:

f(t) = L⁻¹[F(s)] if and only if L[f(t)] = F(s)

This, along with our understanding about always assuming t ≥ 0, assures us that the above definition for L⁻¹[F] is unambiguous. In this definition, of course, we assume F(s) can be given as L[f(t)] for some function f.
Example 3.1: We have,

Because

Likewise, since

We have,

Tables of Laplace transforms can thus be read in reverse: instead of reading off the F(s) for each f(t) found, read off the f(t) for each F(s).
As you may have already noticed, we take inverse transforms of ‘functions
of s that are denoted by upper case Roman letters and obtain functions of t that
are denoted by the corresponding lower case Roman letter. These notational
conventions are consistent with the notational conventions laid down earlier for
the Laplace transform.
We should also note that the phrase inverse Laplace transform can refer to
either the ‘inverse transformed function f or to the process of computing f from F.
By the way, there is a formula for computing inverse Laplace transforms. If
you must know, it is,

The integral here is over a line in the complex plane, and  is a suitably
chosen positive value.

Linearity of the Inverse Transform
The fact that the inverse Laplace transform is linear follows immediately from the linearity of the Laplace transform. To see that, let us consider L⁻¹[αF(s) + βG(s)], where α and β are any two constants and F and G are any two functions for which inverse Laplace transforms exist. Following our conventions, we will denote those inverse transforms by f and g. That is,

f = L⁻¹[F] and g = L⁻¹[G].

Remember, this is completely the same as stating that,

L[f] = F and L[g] = G.

Because we already know the Laplace transform is linear, we know

L[αf + βg] = αL[f] + βL[g] = αF(s) + βG(s).

This, along with the definition of the inverse transform and the above definitions of f and g, yields

L⁻¹[αF(s) + βG(s)] = αf + βg = αL⁻¹[F(s)] + βL⁻¹[G(s)].

The same computation can be carried out with as many functions and constants as desired, which then gives the theorem discussed below.
Theorem 3.1: (Linearity of the Inverse Laplace Transform)
The inverse Laplace transform is linear. That is,

L⁻¹[c₁F₁(s) + c₂F₂(s) + … + cₙFₙ(s)] = c₁L⁻¹[F₁(s)] + c₂L⁻¹[F₂(s)] + … + cₙL⁻¹[Fₙ(s)]

when each cₖ is a constant and each Fₖ is a function having an inverse Laplace transform.

Let us use the linearity to compute a few inverse transforms.
Example 3.2: Find

Solution: We know that,

Which gives the desired result. This can then be used with the required
inverse transform, so that we can combine linearity with one of mathematics’
oldest tricks (multiplying by 1 with, in this case, 1 = 3/3)

The use of linearity along with ‘multiplying by 1’ will be used again and
again.

Example 3.3: Find the inverse Laplace transform of,

Solution:
We know,

and

So,

Which, after a little arithmetic, reduces to

Partial Fractions
When using the Laplace transform with differential equations, we often get
transforms that can be converted via ‘partial fractions’ to forms that are easily
inverse transformed using the tables and linearity. This means that the general
method(s) of partial fractions are particularly important. By now, you should be well-acquainted with using partial fractions. Remember, the basic idea is that, if
we have a fraction of two polynomials,

And P(s) can be factored into two smaller polynomials,


P(s) = P1(s) P2(s)
Then two other polynomials Q1(s) and Q2(s) can be found, so that

Moreover, if the degree of Q(s) is less than the degree of P(s) , then the
degree of each Qk(s) will be less than the degree of the corresponding Pk(s) .
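In practice the decomposition can be done by hand or with a CAS; the sketch below (SymPy assumed) decomposes a hypothetical transform chosen only for illustration.

```python
import sympy as sp

s = sp.Symbol('s')

# Hypothetical rational transform (not from the text), for illustration only
F = (3*s + 7)/((s - 1)*(s + 2))
parts = sp.apart(F, s)

# Expected decomposition: 10/(3(s - 1)) - 1/(3(s + 2))
expected = sp.Rational(10, 3)/(s - 1) - sp.Rational(1, 3)/(s + 2)
assert sp.simplify(parts - expected) == 0
```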
Inverse Transforms of Shifted Functions
All the identities derived for the Laplace transform can be rewritten in terms of the inverse Laplace transform. Of particular value to us is the first shifting identity

L[e^(at) f(t)] = F(s − a)

where F = L[f(t)] and a is any fixed real number. In terms of the inverse transform, this is

L⁻¹[F(s − a)] = e^(at) f(t)

where f = L⁻¹[F(s)] and a is any fixed real number. Viewed this way, we have a nice way to find inverse transforms of functions that can be written as 'shifts' of functions in our tables.
Example 3.4: Consider

Here, the ‘shift’ is clearly by a = 6 , and we have, by the above identity,

(3.1)

We now need to figure out the f (t) from the fact that,

Letting X = s – 6 in this equation, we have

Thus,

And

Plugging this back into Equation (3.1), we obtain

In many cases, determining the shift is part of the problem.


Example 3.5: Consider finding the inverse Laplace transform of,

Solution:
If the denominator could be factored nicely, we would use partial fractions. This
denominator does not factor nicely (unless we use complex numbers). When that
happens, try completing the square to rewrite the denominator in terms of ‘s – a’
for some constant a. Here,
Hence,

...(3.2)
Again, we need to find f (t) from a shifted version of its transform. Here,

Letting X = s – 4 in this equation, we have

Which means the formula for F (s) is,

Thus,

Plugging this back into Equation (3.2), we get

3.3 CONVOLUTION THEOREM


The convolution theorem expresses one more essential property of the Laplace transform, related to products of transforms. It often happens that two transforms F(s) and G(s) are given whose inverses f(t) and g(t) are known, and the inverse h(t) of the product H(s) = F(s)G(s) has to be calculated from those known inverses f(t) and g(t). This inverse h(t) is written (f * g)(t), which is a standard notation, and is called the convolution of f and g.
Theorem 3.2: Convolution Theorem
Let f(t) and g(t) satisfy the hypothesis of the existence theorem. Then the product of their transforms F(s) = L(f) and G(s) = L(g) is the transform H(s) = L(h) of the convolution h(t) of f(t) and g(t), which is denoted by (f * g)(t) and defined by,

h(t) = (f * g)(t) = ∫₀^t f(τ) g(t − τ) dτ

Proof: The theorem can be proved from the definition of G(s) and the second shifting theorem. For each fixed τ (τ ≥ 0) we have,

e^(−sτ) G(s) = L{g(t − τ) u(t − τ)} = ∫₀^∞ e^(−st) g(t − τ) u(t − τ) dt = ∫_τ^∞ e^(−st) g(t − τ) dt

where s > k. From this and the definition of F(s) we obtain,

F(s)G(s) = ∫₀^∞ e^(−sτ) f(τ) G(s) dτ = ∫₀^∞ f(τ) [∫_τ^∞ e^(−st) g(t − τ) dt] dτ

where s > k. Here we integrate over t from τ to ∞ and then over τ from 0 to ∞.
Example 3.6: Using convolution, find the inverse h(t) of,

H(s) = 1/(s² + 1)² = (1/(s² + 1)) · (1/(s² + 1)).

Solution: We know that each factor on the right has the inverse sin t. Hence by the convolution theorem we get,

h(t) = L⁻¹(H) = sin t * sin t = ∫₀^t sin τ sin(t − τ) dτ
     = −(1/2) ∫₀^t cos t dτ + (1/2) ∫₀^t cos(2τ − t) dτ
     = −(1/2) t cos t + (1/2) sin t.

Similarly, 1/s² has the inverse t and 1/s the inverse 1, and the convolution theorem confirms that 1/s³ = (1/s²)(1/s) has the inverse,

t * 1 = ∫₀^t τ dτ = t²/2.
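The convolution in Example 3.6 can be reproduced directly from the defining integral (SymPy assumed):

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

# h = sin * sin from the defining integral
h = sp.integrate(sp.sin(tau)*sp.sin(t - tau), (tau, 0, t))
assert sp.simplify(h - (sp.sin(t) - t*sp.cos(t))/2) == 0

# Its transform is the product of the two factors, 1/(s^2 + 1)^2
H = sp.laplace_transform(h, t, s, noconds=True)
assert sp.simplify(H - 1/(s**2 + 1)**2) == 0
```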


Example 3.7: If H(s) = 1/(s²(s − a)), then find h(t).
Solution: From Table 2.1 we know that,

L⁻¹(1/s²) = t, L⁻¹(1/(s − a)) = e^(at).

Using the convolution theorem and integrating by parts, we get the answer:

h(t) = t * e^(at) = ∫₀^t τ e^(a(t−τ)) dτ = e^(at) ∫₀^t τ e^(−aτ) dτ = (1/a²)(e^(at) − at − 1).

3.4 APPLICATION OF LAPLACE TRANSFORMATION IN SOLVING LINEAR DIFFERENTIAL EQUATIONS WITH CONSTANT COEFFICIENTS

The Laplace transform is an elegant way of solving linear differential equations with constant coefficients quickly and schematically. Instead of solving the differential equation with the initial conditions directly in the original domain, a mapping into the frequency domain is taken, where only an algebraic equation has to be solved. Solving differential equations is performed as per the guidelines given in Figure 3.1, which involve the following three steps:
 Transformation of the differential equation into the mapped space.
 Solving the algebraic equation in the mapped space.
 Back transformation of the solution into the original space.

Fig. 3.1 Schema for Solving Differential Equations Using the Laplace Transformation

The following examples will make the concept clear.
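The three steps can be sketched on a small hypothetical initial value problem, y″ + y = 1 with y(0) = y′(0) = 0 (chosen only for illustration; SymPy assumed):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.Symbol('Y')   # Y(s), the transform of the unknown y(t)

# Step 1: transform the ODE.  L(y'') = s^2 Y - s y(0) - y'(0) = s^2 Y here.
subsidiary = sp.Eq(s**2*Y + Y, 1/s)

# Step 2: solve the algebraic (subsidiary) equation for Y(s).
Ys = sp.solve(subsidiary, Y)[0]          # 1/(s*(s**2 + 1))

# Step 3: transform back into the original space.
y = sp.inverse_laplace_transform(Ys, s, t)
assert sp.simplify(y - (1 - sp.cos(t))) == 0
```

Substituting y = 1 − cos t back into the equation confirms the solution.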


Example 3.8: Consider the differential equation with
the initial conditions .
Solution: We can solve the differential equation using the following steps:

Step 1:

Step 2:

Step 3: The complex function F(s) must be decomposed into partial fractions
in order to get,

From the inverse Laplace transformation the solution of the given differential
equation is, NOTES

Example 3.9: Solve the following system of differential equation using Laplace
transform:

Solution: Notice that the system is not given in matrix form; hence, matrix form is not required in the solution. The system is nonhomogeneous.
To solve the system with Laplace transforms, we take the transform of both differential equations:
Now use the initial condition and simplify to get,

To solve this for one of the transforms, multiply the top equation by s and
the bottom by –3 and then add. We get,

Solving for X1 gives,

On partial fraction we get,

Taking the inverse transform gives the first solution,

To find the second solution we can eliminate X1 to find the transform for X2.
However, in this case notice that the second differential equation is as follows,
Laplace Transforms:
Inverse and Solving
Differential Equations
Substituting the first solution and integrating gives,

NOTES

Reapplying the second initial condition to get the constant of integration


gives,

The second solution is,

Putting all this together gives the solution to the system as,

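The elimination carried out above can also be delegated to a computer algebra system. Since the system of this example is not reproduced legibly here, the following SymPy sketch uses a hypothetical system x1′ = x2, x2′ = −4x1 with x1(0) = 1, x2(0) = 0:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s, X1, X2 = sp.symbols('s X1 X2')

# Transform both equations, using L{x'} = s X(s) - x(0).
eq1 = sp.Eq(s*X1 - 1, X2)   # from x1' = x2,    x1(0) = 1
eq2 = sp.Eq(s*X2, -4*X1)    # from x2' = -4 x1, x2(0) = 0

# Solve the algebraic system for the transforms ...
sol = sp.solve([eq1, eq2], [X1, X2])

# ... and invert each one to recover the time-domain solutions.
x1 = sp.inverse_laplace_transform(sol[X1], s, t)  # cos 2t
x2 = sp.inverse_laplace_transform(sol[X2], s, t)  # -2 sin 2t
```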
Other systems of differential equations of practical significance can be solved
using the Laplace transform method in a similar manner, or by taking eigenvalues
and eigenvectors, as shown in Example 3.10, which is based on an electrical network.
Example 3.10: Find the currents i1(t) and i2(t) in the network shown in the
following figure, with L and R measured in terms of the usual units, v(t) = 100
volts if 0 ≤ t ≤ 0.5 sec and 0 thereafter, and i1(0) = 0, i2(0) = 0.

Solution: The model of the network is obtained using the Kirchhoff's voltage
law as,

0.8 i1′ + 1·(i1 − i2) + 1.4 i1 = 100 [1 − u(t − 1/2)]

1·i2′ + 1·(i2 − i1) = 0.

Dividing by 0.8 and reordering we get,

i1′ + 3 i1 − 1.25 i2 = 125 [1 − u(t − 1/2)]

i2′ − i1 + i2 = 0

With i1(0) = 0, i2(0) = 0 we obtain from the second shifting theorem the
subsidiary equations:

(s + 3) I1 − 1.25 I2 = 125 (1/s − e^{−s/2}/s)

−I1 + (s + 1) I2 = 0.

Algebraically solving for I1 and I2 gives:

I1 = 125 (s + 1)(1 − e^{−s/2}) / [s (s + 1/2)(s + 7/2)]

I2 = 125 (1 − e^{−s/2}) / [s (s + 1/2)(s + 7/2)],

The right hand sides without the factor 1 − e^{−s/2} have the partial fraction
expansions of the form:

500/(7s) − 125/[3(s + 1/2)] − 625/[21(s + 7/2)]  and
500/(7s) − 250/[3(s + 1/2)] + 250/[21(s + 7/2)],

respectively. The inverse transform of this provides the solution for 0 < t < 1/2,

i1(t) = 500/7 − (125/3) e^{−t/2} − (625/21) e^{−7t/2}

i2(t) = 500/7 − (250/3) e^{−t/2} + (250/21) e^{−7t/2},  0 < t < 1/2

As per the second shifting theorem, the solution for t > 1/2 is obtained by
subtracting from this i1(t − 1/2) and i2(t − 1/2), respectively. We get,

i1(t) = −(125/3)(1 − e^{1/4}) e^{−t/2} − (625/21)(1 − e^{7/4}) e^{−7t/2}

i2(t) = −(250/3)(1 − e^{1/4}) e^{−t/2} + (250/21)(1 − e^{7/4}) e^{−7t/2},  t > 1/2
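As a sanity check (not part of the original solution), the currents found for 0 < t < 1/2 can be substituted back into the system with SymPy:

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# Currents for 0 < t < 1/2 from Example 3.10
i1 = sp.Rational(500, 7) - sp.Rational(125, 3)*sp.exp(-t/2) \
     - sp.Rational(625, 21)*sp.exp(-sp.Rational(7, 2)*t)
i2 = sp.Rational(500, 7) - sp.Rational(250, 3)*sp.exp(-t/2) \
     + sp.Rational(250, 21)*sp.exp(-sp.Rational(7, 2)*t)

# On this interval the forcing equals 125, so the system reads
# i1' + 3 i1 - 1.25 i2 = 125 and i2' - i1 + i2 = 0.
assert sp.simplify(sp.diff(i1, t) + 3*i1 - sp.Rational(5, 4)*i2 - 125) == 0
assert sp.simplify(sp.diff(i2, t) - i1 + i2) == 0
```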
Similarly, the systems of differential equations of higher order can also be
NOTES solved using the Laplace transform method. The higher order differential equations
involve the higher derivatives x"(t), x"'(t), etc. These mathematical models are
used to solve physics and engineering problems.

Check Your Progress


1. What is the inverse Laplace transform?
2. Explain the linearity of the inverse Laplace transform.
3. Define the basics of convolution theorem.
4. Why is Laplace transform used?

3.5 ANSWERS TO ‘CHECK YOUR PROGRESS’


1. We can now define the inverse Laplace transform: given a function F(s), the
inverse Laplace transform of F, denoted by L⁻¹[F], is that function f whose
Laplace transform is F.
2. The inverse Laplace transform is linear. That is,

where each ck is a constant and each Fk is a function having an inverse
Laplace transform.
3. Convolution Theorem is one more essential basic property of the Laplace
transform related to the products of transforms. Many a time, it occurs
that two transforms F(s) and G(s) are provided whose inverses f(t) and
g(t) are known, and the inverse of the product H(s) = F(s)G(s) has to be
calculated from those known inverses f(t) and g(t). This inverse h(t) is
written (f * g)(t), which is a standard notation, and is called the convolution
of f and g.
4. The Laplace transform is an elegant method for the fast and systematic solving
of linear differential equations with constant coefficients.

3.6 SUMMARY
 There is a formula for computing inverse Laplace transforms. If you must
know, it is,

 When using the Laplace transform with differential equations, we often get
transforms that can be converted via ‘partial fractions’ to forms that are
easily inverse transformed using the tables and linearity. This means that the
general methods of partial fractions are particularly important.
 All the identities derived for the Laplace transform can be rewritten in terms
of the inverse Laplace transform. Of particular value to us is the first shifting
identity,

where F = L[f(t)] and a is any fixed real number.


 Let f(t) and g(t) satisfy the hypotheses of the existence theorem. Then the
product of their transforms F(s) = L{f} and G(s) = L{g} is the transform
H(s) = L{h} of the convolution h(t) of f(t) and g(t), which is denoted by
(f * g)(t) and defined by,
h(t) = (f * g)(t) = ∫₀ᵗ f(τ) g(t − τ) dτ
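The convolution property can be checked symbolically for particular functions; a short SymPy sketch with the arbitrary choice f(t) = t, g(t) = e^t:

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

f = t
g = sp.exp(t)

# h = (f * g)(t) = integral from 0 to t of f(tau) g(t - tau) d tau
h = sp.integrate(f.subs(t, tau) * g.subs(t, t - tau), (tau, 0, t))

F = sp.laplace_transform(f, t, s, noconds=True)
G = sp.laplace_transform(g, t, s, noconds=True)
H = sp.laplace_transform(h, t, s, noconds=True)

# Convolution theorem: L{f * g} = L{f} L{g}
assert sp.simplify(H - F*G) == 0
```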

 The complex function F(s) must be decomposed into partial fractions in


order to get,

 From the inverse Laplace transformation the solution of the given differential
equation is,

 The systems of differential equations of higher order can also be solved


using the Laplace transform method.
 The higher order differential equations involve the higher derivatives x"(t),
x"'(t), etc.

3.7 KEY TERMS


 Inverse Laplace transform: Given a function F(s), the inverse Laplace
transform of F, denoted by L⁻¹[F], is that function f whose Laplace
transform is F.
 Convolution theorem: Convolution Theorem is one more essential basic
property of the Laplace transform related to the products of transforms.
Many a time, it occurs that two transforms F(s) and G(s) are provided
whose inverses f(t) and g(t) are known, and the inverse of the product
H(s) = F(s)G(s) has to be calculated from those known inverses f(t) and
g(t).

3.8 SELF-ASSESSMENT QUESTIONS AND EXERCISES
Short-Answer Questions
1. Give the definition of the inverse Laplace transform.
2. What is the linearity of the inverse Laplace transform?
3. Explain the inverse transforms of shifted functions.
4. State the convolution theorem.
5. Give the applications of Laplace transform in solving linear differential
equations.
Long-Answer Questions
1. Discuss about the inverse Laplace transform with suitable examples.
2. Analyse the use of partial fraction in inverse Laplace transform.
3. Briefly describe the convolution theorem. Give examples.
4. Find the inverse Laplace transform for each of the following:

(i) (ii) (iii)

(iv) (v) (vi)

5. Using the tables and ‘linearity’, find the inverse Laplace transform for each
of the following:

(i) (ii) (iii)

(iv) (v) (vi)

6. Verify the following inverse Laplace transforms assuming  is any real


constant:

(i)

(ii)

7. Solve each of the following initial-value problems using the Laplace transform:
(i) y′ + 9y = 0 with y(0) = 4
(ii) y″ + 9y = 0 with y(0) = 4 and y′(0) = 6
8. Using the tables and partial fractions, find the inverse Laplace transform for
each of the following:

(i) (ii)

(iii) (iv)

(v) (vi)

(vii) (viii)

(ix)

9. Solve each of the following initial-value problems using the Laplace transform
(and partial fractions):
(i) y – 9y = 0 with y(0) = 4 and y (0) = 9
(ii) y + 9y = 27t3 with y(0) = 0 and y (0) = 0
(iii) y + 8y + 7y = 165e4t with y(0) = 8 and y (0) = 1
10. Using the translation identity (and the tables), find the inverse Laplace
transform for each of the following:

(i) (ii) (iii)

(iv) (v) (vi)

(vii) (viii)

11. The inverse transforms of the following could be computed using partial
fractions. Instead, find the inverse transform of each using the appropriate
integration identity from section

(i) (ii) (iii)

3.9 FURTHER READING
Gupta, K. P. and J. K. Goyal. 2013. Integral Transform. Meerut (UP): Pragati
Prakashan.
Sharma, J. N. and R. K. Gupta. 2015. Differential Equations (Paperback
Edition). Meerut (UP): Krishna Prakashan Media (P) Ltd.
Raisinghania, M. D. 2013. Ordinary and Partial Differential Equations. New
Delhi: S. Chand Publishing.
Coddington, Earl A. and N. Levinson. 1972. Theory of Ordinary Differential
Equations. New Delhi: Tata McGraw-Hill.
Coddington, Earl A. 1987. An Introduction to Ordinary Differential Equations.
New Delhi: Prentice Hall of India.
Boyce, W. E. and Richard C. DiPrima. 1986. Elementary Differential Equations
and Boundary Value Problems. New York: John Wiley and Sons, Inc.
Ross, S. L. 1984. Differential Equations, 3rd Edition. New York: John Wiley
and Sons.
Sneddon, I. N. 1986. Elements of Partial Differential Equations. New York:
McGraw-Hill Education.

UNIT 4 PARTIAL DIFFERENTIAL EQUATIONS OF THE FIRST ORDER

Structure
4.0 Introduction
4.1 Objectives
4.2 Partial Differential Equations of the First Order Lagrange’s Solution
4.3 Solution of Some Special Types of Equations
4.4 Charpit's General Method
4.5 Answers to ‘Check Your Progress’
4.6 Summary
4.7 Key Terms
4.8 Self-Assessment Questions and Exercises
4.9 Further Reading

4.0 INTRODUCTION
In mathematics, a first-order partial differential equation is a partial differential
equation that involves only first derivatives of the unknown function of n variables.
Such equations arise in the construction of characteristic surfaces for hyperbolic
partial differential equations, in the calculus of variations, in some geometrical
problems, and in simple models for gas dynamics whose solution involves
the method of characteristics. If a family of solutions of a single first-order partial
differential equation can be found, then additional solutions may be obtained by
forming envelopes of solutions in that family. In a related procedure, general solutions
may be obtained by integrating families of ordinary differential equations.
In this unit, you will study the partial differential equations of the first
order and Lagrange's solution, the solution of some special types of equations,
and Charpit's general method.

4.1 OBJECTIVES
After going through this unit, you will be able to:
 Derive the partial differential equations of the first order and Lagrange's solution
 Know the solution of some special types of equations which can be solved easily
by methods other than the general method
 Describe Charpit's general method of solution and its special cases

4.2 PARTIAL DIFFERENTIAL EQUATIONS OF THE FIRST ORDER LAGRANGE'S SOLUTION

Lagrange’s Equation
The partial differential equation Pp + Qq = R, where P, Q, R are functions of x, y,
z, is called Lagrange’s Linear Differential Equation.
Form the auxiliary equations dx/P = dy/Q = dz/R and find two independent solutions
of the auxiliary equations, say u(x, y, z) = C1 and v(x, y, z) = C2, where C1 and C2
are constants. Then the solution of the given equation is F(u, v) = 0 or u = F(v).
For example, solve (y² + z²)p − xyq = −xz
The auxiliary equations are,
dx/(y² + z²) = dy/(−xy) = dz/(−xz) (4.1)

Taking the last two equations, we get,
dy/y = dz/z
Integrating, we get log y = log z + constant,
Hence y/z = C1
Each of the ratios in Equation (4.1) is equal to,
(x dx + y dy + z dz) / [x(y² + z²) − xy² − xz²]
i.e., (x dx + y dy + z dz) / 0
i.e., x dx + y dy + z dz = 0
Hence after integration this reduces to,
x² + y² + z² = C2
Hence the general solution of the equation is,
F(y/z, x² + y² + z²) = 0
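Any candidate first integral u(x, y, z) = C of the auxiliary equations must satisfy P ∂u/∂x + Q ∂u/∂y + R ∂u/∂z = 0, which gives a mechanical way to check the two integrals found above; a SymPy sketch (reading the example equation as (y² + z²)p − xyq = −xz, consistent with the auxiliary equations used here):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Coefficients of (y^2 + z^2) p - x y q = -x z
P, Q, R = y**2 + z**2, -x*y, -x*z

def is_first_integral(u):
    # u = C is a first integral iff P u_x + Q u_y + R u_z = 0
    return sp.simplify(P*sp.diff(u, x) + Q*sp.diff(u, y) + R*sp.diff(u, z)) == 0

assert is_first_integral(y/z)                  # u = y/z
assert is_first_integral(x**2 + y**2 + z**2)   # v = x^2 + y^2 + z^2
```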

Example 4.1: Solve x² ∂z/∂x + y² ∂z/∂y = (x + y)z
Solution: The auxiliary equations are,
dx/x² = dy/y² = dz/[(x + y)z]
i.e., (dx − dy)/(x² − y²) = dz/[(x + y)z]
i.e., (dx − dy)/(x − y) = dz/z
i.e., log (x − y) = log z + constant
Hence (x − y)/z = C1
Also dx/x² = dy/y²
Hence −1/x = −1/y + constant
i.e., 1/y − 1/x = C2
Hence the solution is, F(1/y − 1/x, (x − y)/z) = 0
Example 4.2: Solve (x² − yz)p + (y² − zx)q = z² − xy
Solution: The subsidiary equations are,
dx/(x² − yz) = dy/(y² − zx) = dz/(z² − xy)
Hence (dx − dy)/[(x² − yz) − (y² − zx)] = d(x − y)/[(x − y)(x + y + z)]
= d(y − z)/[(y − z)(x + y + z)]
i.e., d(x − y)/(x − y) = d(y − z)/(y − z)
Integrating, log (x − y) = log (y − z) + log C1
Hence (x − y)/(y − z) = C1 (4.2)
Using multipliers x, y, z, each of the subsidiary ratios equals,
(x dx + y dy + z dz)/(x³ + y³ + z³ − 3xyz) = (x dx + y dy + z dz)/[(x + y + z)(x² + y² + z² − xy − yz − zx)]
And, using multipliers 1, 1, 1, each is also equal to
(dx + dy + dz)/(x² + y² + z² − xy − yz − zx)
Hence (x dx + y dy + z dz)/(x + y + z) = dx + dy + dz
i.e., x dx + y dy + z dz = (x + y + z) d(x + y + z)
On integrating, we get,
x² + y² + z² = (x + y + z)² + constant
Hence xy + yz + zx = C2 (4.3)
From Equations (4.2) and (4.3), we get the solution,
F((x − y)/(y − z), xy + yz + zx) = 0, where F is arbitrary.
Example 4.3: Solve (a – x)p + (b – y)q = c – z
Solution: The subsidiary equations are,
dx/(a − x) = dy/(b − y) = dz/(c − z) (4.4)
From Equation (4.4),
dy/(b − y) = dz/(c − z)
i.e., dy/(y − b) = dz/(z − c)
log (y − b) = log (z − c) + log C1
Hence (y − b)/(z − c) = C1
Also
dx/(a − x) = dy/(b − y)
i.e., dx/(x − a) = dy/(y − b)
Hence log (x − a) = log (y − b) + log C2
i.e., (x − a)/(y − b) = C2

The general solution is,
F((y − b)/(z − c), (x − a)/(y − b)) = 0
Example 4.4: Solve (y – z)p + (z – x)q = x – y
Solution: The auxiliary equations are,
dx/(y − z) = dy/(z − x) = dz/(x − y) = (dx + dy + dz)/0
 dx + dy + dz = 0
Integrating we get, x + y + z = C1
Also each ratio,
xdx  ydy  zdz
=
x( y  z )  y ( z  x)  z ( x  y )

xdx  ydy  zdz


=
0
 xdx + ydy + zdz = 0
On integrating, we get,
x2 + y2 + z2 = C2
 The general solution is,
F(x + y + z, x2 + y2 + z2) = 0
Example 4.5: Solve (mz − ny)p + (nx − lz)q = ly − mx
Solution: The auxiliary equations are,
dx/(mz − ny) = dy/(nx − lz) = dz/(ly − mx)
Using multipliers x, y, z, we get each ratio
= (x dx + y dy + z dz)/[x(mz − ny) + y(nx − lz) + z(ly − mx)]
= (x dx + y dy + z dz)/0
Hence x² + y² + z² = C1
Also by using multipliers l, m, n, we get each ratio
= (l dx + m dy + n dz)/0
Hence lx + my + nz = C2
The general solution is,
F(x² + y² + z², lx + my + nz) = 0
Example 4.6: Solve x(y − z)p + y(z − x)q = z(x − y)
Solution: The auxiliary equations are,
dx/(xy − xz) = dy/(yz − yx) = dz/(zx − zy)
= (dx + dy + dz)/0
Hence dx + dy + dz = 0
On integrating, we get, x + y + z = C1 (4.5)
Dividing the three ratios by x, y, z respectively,
(dx/x)/(y − z) = (dy/y)/(z − x) = (dz/z)/(x − y) = (dx/x + dy/y + dz/z)/0
Hence dx/x + dy/y + dz/z = 0
On integrating, log x + log y + log z = log C2
Hence xyz = C2 (4.6)
From Equations (4.5) and (4.6), the general solution is, F(x + y + z, xyz) = 0
Example 4.7: Solve x²p + y²q = z²
Solution: The auxiliary equations are,
dx/x² = dy/y² = dz/z²
From dx/x² = dy/y²,
−1/x = −1/y + constant
Hence 1/y − 1/x = C1
Also dy/y² = dz/z²
Hence −1/y = −1/z + constant
i.e., 1/z − 1/y = C2
The general solution is,
F(1/y − 1/x, 1/z − 1/y) = 0
Example 4.8: Solve ( y + z)p + (z + x)q = x + y
Solution: The auxiliary equations are,
dx/(y + z) = dy/(z + x) = dz/(x + y)
i.e., (dx − dy)/(y − x) = (dy − dz)/(z − y) = (dz − dx)/(x − z)
= (dx + dy + dz)/[2(x + y + z)]
Considering the first two members and integrating, we get,
(x − y)/(y − z) = C1
Considering the first member and the last ratio and integrating, we get,
log (x − y) = −(1/2) log (x + y + z) + log C2
i.e., log [(x − y)² (x + y + z)] = constant
Hence (x − y)² (x + y + z) = C2²
The general solution is,
F((x − y)/(y − z), (x − y)² (x + y + z)) = 0

4.3 SOLUTION OF SOME SPECIAL TYPES OF EQUATIONS

Wave Equation
For deriving the equation governing small transverse vibrations of an elastic string,
we position the string along the x-axis, extend it to its length L and fix it at its ends
x = 0 and x = L. Distort the string and at some instant, say t = 0, release it to
vibrate. Now the problem is to find the deflection u(x, t) of the string at point x
and at any time t > 0.
To obtain u(x, t) as the result of a partial differential equation we have to
make simplifying assumptions as follows:
1. The string is homogeneous. The mass of the string per unit length is constant.
The string is perfectly elastic and hence does not offer any resistance to
bending.
2. The tension in the string is constant throughout.
3. The vibrations in the string are small so the slope at each point remains
small.
For modeling the differential equation, consider the forces working on a
small portion of the string. Let the tension be T1 and T2 at the endpoints P and Q
of the chosen portion. The horizontal components of the tension are constant
because the points on the string move vertically according to our assumption.
Hence we have,
T1 cos α = T2 cos β = T = Constant (4.7)

The two forces in the vertical direction are −T1 sin α and T2 sin β of T1
and T2. The negative sign shows that the component is directed downward. If ρ
is the mass of the undeflected string per unit length and Δx is the length of that
portion of the string that is undeflected, then by Newton's second law the resultant
of these two forces is equal to the mass ρΔx of the portion times the acceleration
∂²u/∂t²:
T2 sin β − T1 sin α = ρΔx ∂²u/∂t² (4.8)
By using Equation (4.7), we can divide Equation (4.8) by
T2 cos β = T1 cos α = T to get,
(T2 sin β)/(T2 cos β) − (T1 sin α)/(T1 cos α) = tan β − tan α = (ρΔx/T) ∂²u/∂t² (4.9)
Since tan α and tan β are the slopes of the string at x and x + Δx, therefore,
tan α = (∂u/∂x) at x and tan β = (∂u/∂x) at x + Δx
By dividing Equation (4.9) by Δx and substituting the values of tan α and
tan β, we have,
(1/Δx)[(∂u/∂x) at x + Δx − (∂u/∂x) at x] = (ρ/T) ∂²u/∂t².
As x approaches zero, the equation becomes the linear partial differential
equation,
 2u 2  u
2
T
 c , c2  (4.10)
t 2 x 2 ρ
Which is the one-dimensional wave equation governing the vibrations of an
elastic string,
∂²u/∂t² = c² ∂²u/∂x² (4.11)
To determine the solution we use the boundary conditions at x = 0 and
x = L,
u(0, t) = 0, u(L, t) = 0 for all t (4.12)
The initial velocity and initial deflection of the string determine the form of
motion. If f(x) is the original deflection and g(x) is the initial velocity, then our initial
conditions are,
u  x,0  f  x  (4.13)
And

u
 g x . (4.14)
t t 0

Now the problem is to get the solution of Equation (4.11) satisfying the
Equations (4.12) to (4.14).
By using the method of separation of variables, verify solutions of the wave
Equation (4.11) of the form,
u  x, t   F  x G t  (4.15)
Which are a product of two functions, F(x) and G(t). Note here that each
of these functions is dependent on one variable, i.e., either x or t. By differentiating
Equation (4.15) two times both with respect to x and t, we obtain,

∂²u/∂t² = F G″ and ∂²u/∂x² = F″ G
By substituting these values in the wave equation we get,
F G″ = c² F″ G.
Divide this equation by c² F G to get,
G″/(c² G) = F″/F.
The equations on either side are dependent on different variables. Hence
changing x will not change G and changing t will not change F and the other side
will remain constant. Thus,

G″/(c² G) = F″/F = k.
Or
F″ − kF = 0 (4.16)
And
G″ − c² k G = 0. (4.17)
The constant k is arbitrary.
NOTES Now we will find the solutions of Equation (4.16) and (4.17) so that the
equation u = FG fulfills the boundary Equations (4.12), that is,
u 0, t   F 0G t   0, u L, t   F L G t   0
For all t.
When G  0, then u  0.
Therefore, G  0 and,
(a) F(0) = 0, (b) F(L) = 0 (4.18)
For k = 0 the general solution of Equation (4.16) is F = ax + b, and from
Equation (4.18) we obtain a = b = 0 and hence F ≡ 0, which gives
u ≡ 0. For a positive value of k, i.e., k = μ², the general solution of Equation
(4.16) is,
F = A e^{μx} + B e^{−μx},
And from Equation (4.18), we again get F ≡ 0. Hence choose k < 0, i.e.,
k = −p². Then Equation (4.16) becomes,
F″ + p² F = 0
The general solution of the above equation is,
F(x) = A cos px + B sin px.
Using conditions of Equation (4.18), we have,
F(0) = A = 0 and F(L) = B sin pL = 0
B = 0 implies F ≡ 0. Thus we take sin pL = 0, giving,
pL = nπ, so that p = nπ/L, where n is an integer, (4.19)
For B = 1, we get infinitely many solutions F(x) = Fn(x), where,
Fn(x) = sin(nπx/L), n = 1, 2, … . (4.20)
These solutions satisfy Equation (4.18). The value of the constant k is now
limited to the values k = −p² = −(nπ/L)², resulting from Equation (4.19), so
Equation (4.17) becomes,
G″ + λn² G = 0 where λn = cnπ/L (4.21)
A general solution is,
Gn(t) = Bn cos λn t + Bn* sin λn t.
Hence solutions of Equation (4.11) satisfying Equation (4.12) are
un(x, t) = Fn(x) Gn(t), written as
un(x, t) = (Bn cos λn t + Bn* sin λn t) sin(nπx/L), n = 1, 2, … . (4.22)
Functions of this type are called the eigenfunctions and the values
λn = cnπ/L are called the eigenvalues of the vibrating string. This set of λn is
known as the spectrum.
Each un represents a harmonic motion with frequency λn/2π = cn/2L
cycles per unit time. This motion is known as the nth normal mode of the string.
The first normal mode is referred to as the fundamental mode (n = 1) while the others
are known as overtones.
A single solution un(x, t) will in general not satisfy the initial Equations (4.13)
and (4.14). But every sum of the un is again a solution of Equation (4.11), since the
equation is linear and homogeneous. To obtain a solution that satisfies Equations
(4.13) and (4.14), consider the following infinite series,
u(x, t) = Σn=1..∞ un(x, t) = Σn=1..∞ (Bn cos λn t + Bn* sin λn t) sin(nπx/L),
where λn = cnπ/L (4.23)
Therefore,
u(x, 0) = Σn=1..∞ Bn sin(nπx/L) = f(x). (4.24)

Select the coefficients Bn so that u(x, 0) becomes the Fourier sine series
of f(x). Thus,
Bn = (2/L) ∫0^L f(x) sin(nπx/L) dx, n = 1, 2, … . (4.25)

Similarly, by differentiating Equation (4.23) with respect to t, we get,
(∂u/∂t) at t = 0 = [ Σn=1..∞ (−Bn λn sin λn t + Bn* λn cos λn t) sin(nπx/L) ] at t = 0
= Σn=1..∞ Bn* λn sin(nπx/L) = g(x)
The Bn* should be selected so that for t = 0 the partial derivative ∂u/∂t
becomes the Fourier sine series of the function g(x). So,
Bn* λn = (2/L) ∫0^L g(x) sin(nπx/L) dx
Here, since λn = cnπ/L,
Bn* = (2/(cnπ)) ∫0^L g(x) sin(nπx/L) dx, n = 1, 2, … . (4.26)
Now, let us consider the case when the initial velocity g(x) is zero. Then the
Bn* are zero and Equation (4.23) becomes,
u(x, t) = Σn=1..∞ Bn cos λn t sin(nπx/L), λn = cnπ/L. (4.27)
We know that,
cos(cnπt/L) sin(nπx/L) = (1/2)[ sin(nπ(x − ct)/L) + sin(nπ(x + ct)/L) ]
Therefore Equation (4.27) becomes,
u(x, t) = (1/2) Σn=1..∞ Bn sin(nπ(x − ct)/L) + (1/2) Σn=1..∞ Bn sin(nπ(x + ct)/L)
The above two series are generated by substituting x − ct and x + ct,
respectively, for the variable x in the Fourier sine series given in Equation (4.24)
for f(x). Thus,
u(x, t) = (1/2)[ f*(x − ct) + f*(x + ct) ] (4.28)
where f* is the odd periodic extension of f with the period 2L. By
differentiating Equation (4.28) we see that u(x, t) is a solution of Equation (4.11),
given that f(x) is twice differentiable on the interval 0 < x < L and has one-sided
second derivatives at x = 0 and x = L, which are zero. u(x, t) is obtained as a
solution satisfying Equations (4.12) to (4.14).
If f  x  and f  x  are merely piecewise continuous or if the one-sided
derivatives are not zero, then for each t there will be finitely many values of x at
which the second derivatives of u appearing in Equation (4.11) do not exist. Except
at these points the wave equation will still be satisfied. We can then regard u(x, t)
as a generalized solution.
Example 4.9: Determine the solution of the wave Equation (4.11) corresponding
to the following triangular initial deflection,
f(x) = (2k/L) x if 0 < x < L/2
f(x) = (2k/L)(L − x) if L/2 < x < L
And zero initial velocity. Partial Differential
Equations of the First Order
Solution: Since g  x   0 , we have Bn *  0 in Equation (4.23).

The Bn are given by Equation (4.17) and thus Equation (4.23) takes the
NOTES
8k 1 π πc 1 3π 3 πc 
form u  x, t   2 12 sin L x cos L t  3 2 sin L x cos L t  .
π  
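The coefficients in this series follow from Equation (4.25); a SymPy sketch recomputing them for the triangular deflection:

```python
import sympy as sp

x, L, k = sp.symbols('x L k', positive=True)
n = sp.symbols('n', positive=True, integer=True)

# B_n = (2/L) * integral over [0, L] of f(x) sin(n pi x / L) dx,
# with the triangular f given piecewise on [0, L/2] and [L/2, L].
Bn = (sp.S(2)/L) * (sp.integrate((2*k*x/L)*sp.sin(n*sp.pi*x/L), (x, 0, L/2))
                    + sp.integrate((2*k*(L - x)/L)*sp.sin(n*sp.pi*x/L), (x, L/2, L)))

# For each integer n this reduces to 8*k*sin(n*pi/2)/(n**2*pi**2):
# zero for even n, alternating in sign for odd n, as in the series above.
for m in [1, 2, 3, 4, 5]:
    assert sp.simplify(Bn.subs(n, m) - 8*k*sp.sin(m*sp.pi/2)/(m**2*sp.pi**2)) == 0
```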

4.4 CHARPIT'S GENERAL METHOD


Charpit’s method is used to find the solution of most general partial differential
equation of order one, given by,
F(x, y, z, p, q) = 0 (4.29)
The primary idea in Charpit’s method is the introduction of a second partial
differential equation of order one,
f(x, y, z, p, q, a) = 0 (4.30)
containing an arbitrary constant ‘a’ and satisfying the following conditions:
1. Equations (4.29) and (4.30) can be solved to give,
p = p(x, y, z, a) and q = q(x, y, z, a)
2. The equation
dz = p(x, y, z, a) dx + q(x, y, z, a) dy (4.31)
is integrable.
When a function ‘f’ satisfying the conditions 1 and 2 has been found, the
solution of Equation (4.31) containing two arbitrary constants (including ‘a’) will
be a solution of Equation (4.29). The Condition 1 will hold if,

F f
 F , f  p p
J  0
 p, q  F f (4.32)
q q

Condition 2 will hold when,
∂p/∂y + q ∂p/∂z = ∂q/∂x + p ∂q/∂z (4.33)
Substituting the values of p and q as functions of x, y and z in Equations
(4.29) and (4.30) and differentiating with respect to x,
∂F/∂x + (∂F/∂p)(∂p/∂x) + (∂F/∂q)(∂q/∂x) = 0
And
∂f/∂x + (∂f/∂p)(∂p/∂x) + (∂f/∂q)(∂q/∂x) = 0
Therefore, eliminating ∂p/∂x,
(∂F/∂p ∂f/∂q − ∂F/∂q ∂f/∂p) ∂q/∂x = ∂F/∂x ∂f/∂p − ∂F/∂p ∂f/∂x
Or ∂q/∂x = (1/J)(∂F/∂x ∂f/∂p − ∂F/∂p ∂f/∂x)
Similarly ∂p/∂y = −(1/J)(∂F/∂y ∂f/∂q − ∂F/∂q ∂f/∂y)
∂p/∂z = −(1/J)(∂F/∂z ∂f/∂q − ∂F/∂q ∂f/∂z)
And ∂q/∂z = (1/J)(∂F/∂z ∂f/∂p − ∂F/∂p ∂f/∂z) (4.34)
Substituting the values from Equation (4.34) in Equation (4.33) and multiplying
through by J,
(∂F/∂p) ∂f/∂x + (∂F/∂q) ∂f/∂y + (p ∂F/∂p + q ∂F/∂q) ∂f/∂z − (∂F/∂x + p ∂F/∂z) ∂f/∂p − (∂F/∂y + q ∂F/∂z) ∂f/∂q = 0 (4.35)
The Equation (4.35), being linear in the variables x, y, z, p, q and f, has the
following subsidiary equations:
dx/(∂F/∂p) = dy/(∂F/∂q) = dz/(p ∂F/∂p + q ∂F/∂q) = dp/[−(∂F/∂x + p ∂F/∂z)] = dq/[−(∂F/∂y + q ∂F/∂z)] (4.36)
If any of the integrals of Equation (4.36) involves p or q, then it is of the form
of Equation (4.30).
Then we solve Equations (4.29) and (4.30) for p and q and integrate Equation (4.31).
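Before the worked examples, the whole workflow can be seen on the simpler equation pq = z (not one of the examples below). For F = pq − z, the subsidiary Equations (4.36) give dp/p = dq/q, hence p = aq; substituting back yields q = √(z/a), p = √(az), and integrating dz = p dx + q dy gives the complete integral z = (ax + y + b)²/(4a). A SymPy sketch verifying this surface:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', nonzero=True)

# Candidate complete integral of p q = z obtained via Charpit's method
z = (a*x + y + b)**2 / (4*a)

p = sp.diff(z, x)   # p = dz/dx
q = sp.diff(z, y)   # q = dz/dy
assert sp.simplify(p*q - z) == 0   # the surface satisfies p q = z
```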
Example 4.10: Get the complete integral of the equation,
p² + q² − 2px − 2qy + 2xy = 0 (4.37)
Solution: The subsidiary equations are,
dp/[2(p − y)] = dq/[2(q − x)] = dx/[2(p − x)] = dy/[2(q − y)] (4.38)
Hence (dp + dq)/[2(p + q − x − y)] = (dx + dy)/[2(p + q − x − y)]
i.e., dp + dq = dx + dy
Integrating, we get,
p + q = x + y + a
where a is a constant,
Hence (p − x) + (q − y) = a (4.39)
Equation (4.37) can also be written as,
(p − x)² + (q − y)² = (x − y)²
Now {(p − x) − (q − y)}² = 2{(p − x)² + (q − y)²} − {(p − x) + (q − y)}²
Hence (p − x) − (q − y) = ± √[2(x − y)² − a²] (4.40)
Adding Equations (4.39) and (4.40),
2(p − x) = a ± √[2(x − y)² − a²]
Or p = x + a/2 ± (1/2)√[2(x − y)² − a²]
Similarly, subtracting Equation (4.40) from Equation (4.39),
q = y + a/2 ∓ (1/2)√[2(x − y)² − a²]
Now dz = p dx + q dy
Or
dz = [x + a/2 ± (1/2)√(2(x − y)² − a²)] dx + [y + a/2 ∓ (1/2)√(2(x − y)² − a²)] dy
= (1/2) d(x² + y²) + (a/2) d(x + y) ± (1/2)√[2(x − y)² − a²] d(x − y)
On integrating,
z + b = (x² + y²)/2 + (a/2)(x + y) ± (1/2) ∫ √(2U² − a²) dU
where U = x − y and b is an arbitrary constant, so that
z + b = (x² + y²)/2 + (a/2)(x + y) ± [ ((x − y)/4)√(2(x − y)² − a²) − (a²/(4√2)) log{√2(x − y) + √(2(x − y)² − a²)} ].

Example 4.11: Determine the complete integral of the equation,
p² + q² − 2px − 2qy + 1 = 0 (4.41)
Solution: The subsidiary equations are,
dx/[2(p − x)] = dy/[2(q − y)] = dp/(2p) = dq/(2q) (4.42)
With,
dp/p = dq/q
On integrating, we get,
p = aq (4.43)
where ‘a’ is an arbitrary constant.
Substituting the value of p from Equation (4.43) in Equation (4.41),
q²(1 + a²) − 2q(ax + y) + 1 = 0
Hence q = [(ax + y) ± √{(ax + y)² − (1 + a²)}]/(1 + a²)
Now dz = p dx + q dy
Which gives,
dz = q(a dx + dy) = q d(ax + y)
= [1/(1 + a²)] [(ax + y) ± √{(ax + y)² − (1 + a²)}] d(ax + y)
Integrating,
z = b + [1/(1 + a²)] [ (ax + y)²/2 ± ( ((ax + y)/2)√{(ax + y)² − (1 + a²)} − ((1 + a²)/2) log{(ax + y) + √((ax + y)² − (1 + a²))} ) ]
where b is an arbitrary constant.
Example 4.12: Find the complete integral of the following equation,
2(pq + py + qx) + x² + y² = 0 (4.44)
Solution: The subsidiary equations of Equation (4.44) are,
dx/[2(q + y)] = dy/[2(p + x)] = dp/[−2(q + x)] = dq/[−2(p + y)] (4.45)
Hence dp + dq + dx + dy = 0
Integrating,
p + q + x + y = Constant = a (say)
Or (p + x) + (q + y) = a (4.46)
Equation (4.44) can be written as,
2(p + x)(q + y) + (x − y)² = 0
Or (p + x)(q + y) = −(1/2)(x − y)²
Hence (p + x) − (q + y) = ± √[{(p + x) + (q + y)}² − 4(p + x)(q + y)]
= ± √[a² + 2(x − y)²] (4.47)
Adding Equations (4.46) and (4.47),
2(p + x) = a ± √[a² + 2(x − y)²]
Or p + x = a/2 ± (1/2)√[a² + 2(x − y)²]
Subtracting Equation (4.47) from Equation (4.46),
q + y = a/2 ∓ (1/2)√[a² + 2(x − y)²]
Now dz = p dx + q dy
Giving,
dz = −(x dx + y dy) + (a/2)(dx + dy) ± (1/2)√[a² + 2(x − y)²] d(x − y)
= −(1/2) d(x² + y²) + (a/2) d(x + y) ± (1/2)√[a² + 2(x − y)²] d(x − y)
Integrating the above equation, we get,
2z = b − (x² + y²) + a(x + y) ± ∫ √[a² + 2(x − y)²] d(x − y)
= b − (x² + y²) + a(x + y) ± [ ((x − y)/2)√{a² + 2(x − y)²} + (a²/(2√2)) log{√2(x − y) + √(a² + 2(x − y)²)} ],
where b is an arbitrary constant.

Example 4.13: Find the complete integral of the equation,
p² + q² − 2pq tanh 2y = sech² 2y
Solution: The subsidiary equations are,
dx/[2(p − q tanh 2y)] = dy/[2(q − p tanh 2y)] = dp/0
= dq/[4pq sech² 2y − 4 sech² 2y tanh 2y]
Hence dp = 0
Or p = Constant = a (say)
Therefore,
q² − 2a tanh 2y · q + a² − sech² 2y = 0
Hence q = a tanh 2y ± √(a² tanh² 2y − a² + sech² 2y)
= a tanh 2y ± √(1 − a²) sech 2y
Now dz = p dx + q dy
Gives,
dz = a dx + [a tanh 2y ± √(1 − a²) sech 2y] dy
= d[ax + (a/2) log cosh 2y] ± √(1 − a²) sech 2y dy
Integrating,
z = b + ax + (a/2) log cosh 2y ± √(1 − a²) ∫ 2 dy/(e^{2y} + e^{−2y})
= b + ax + (a/2) log cosh 2y ± √(1 − a²) tan⁻¹(e^{2y}),
where b is an arbitrary constant.

Example 4.14: Find the complete integral of,
$$xp + 3yq = 2\left(z + x^2 q^2\right) \qquad (4.48)$$
Solution: The subsidiary equations are,
$$\frac{dx}{x} = \frac{dy}{3y - 4x^2 q} = \frac{dp}{p + 4xq^2} = \frac{dq}{-q}$$
From the first and last fractions,
$$\frac{dq}{q} = -\frac{dx}{x}$$
∴ qx = Constant = a
∴ q = a/x
Substituting in Equation (4.48) we get,
$$p = \frac{2(z + a^2)}{x} - \frac{3ya}{x^2}$$
∴ dz = pdx + qdy
Gives,
$$dz = \left[\frac{2(z+a^2)}{x} - \frac{3ya}{x^2}\right]dx + \frac{a}{x}\,dy$$
Multiplying by x² and rearranging,
$$x^2\,dz - 2x(z+a^2)\,dx = -3ya\,dx + ax\,dy$$
i.e.,
$$x^4\, d\left(\frac{z+a^2}{x^2}\right) = -3ay\,dx + ax\,dy$$
i.e.,
$$d\left(\frac{z+a^2}{x^2}\right) = a\left(\frac{dy}{x^3} - \frac{3y\,dx}{x^4}\right) = d\left(\frac{ay}{x^3}\right)$$
On integrating, we get
$$\frac{z+a^2}{x^2} = \frac{ay}{x^3} + b$$
Or z = a(y/x − a) + bx², where a and b are arbitrary constants.
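As a quick sanity check of the complete integral z = a(y/x − a) + bx², the partial derivatives p and q can be substituted back into Equation (4.48); the residual should vanish identically. A minimal sketch (the sample constants are chosen arbitrarily):

```python
def residual(a, b, x, y):
    # z = a(y/x - a) + b x^2, so p = -a y/x^2 + 2 b x and q = a/x
    z = a*(y/x - a) + b*x*x
    p = -a*y/(x*x) + 2*b*x
    q = a/x
    # Equation (4.48): x p + 3 y q - 2 (z + x^2 q^2) should be zero
    return x*p + 3*y*q - 2*(z + x*x*q*q)

worst = max(abs(residual(a, b, x, y))
            for a in (0.7, -1.2) for b in (0.3, 2.0)
            for x in (0.5, 2.0) for y in (-1.0, 3.0))
assert worst < 1e-12
```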

Check Your Progress


1. Define Lagrange’s linear differential equation.
2. What are the assumptions for solving the wave equation?
3. What is the nth normal mode of the string?
4. Where is Charpit’s method used?

4.5 ANSWERS TO ‘CHECK YOUR PROGRESS’


1. The partial differential equation Pp + Qq = R, where P, Q, R are functions
of x, y, z is called Lagrange’s linear differential equation.
2. We have to make following assumptions:
(a) The mass of the string for each unit length is constant (‘homogeneous
string’). The string is perfectly elastic and does not offer any resistance
to bending.
(b) The tension caused by stretching the string before fixing it at the ends
is so large that the action of the gravitational force on the string can be
neglected.
(c) The string performs small transverse motions in a vertical plane; that
is, every particle of the string moves strictly vertically and so that the
deflection and the slope at every point of the string always remain
small in absolute value.

3. Each $u_n(x,t) = \left(B_n\cos\lambda_n t + B_n^{*}\sin\lambda_n t\right)\sin\dfrac{n\pi x}{L}$ represents a harmonic
motion having the frequency $\lambda_n/2\pi = cn/2L$ cycles per unit time. This
motion is called the nth normal mode of the string.
4. Charpit’s method is used to find the solution of most general partial differential
equation of order one.

4.6 SUMMARY
 The partial differential equation Pp + Qq = R, where P, Q, R are functions
of x, y, z, is called Lagrange’s linear differential equation.
 The tension in the string is constant throughout.

 The vibrations in the string are small, so the slope at each point remains small.
 The horizontal components of the tension are constant because the points
on the string move vertically according to our assumption.
 The initial velocity and initial deflection of the string determine the form of
motion. If f(x) is the original deflection and g(x) is the initial velocity, then
our initial conditions are,
u x,0  f  x 

u
 g x .
t t 0

 Charpit’s method is used to find the solution of most general partial differential
equation of order one, given by,
F(x, y, z, p, q) = 0
 The primary idea in this method is the introduction of a second partial
differential equation of order one,
f(x, y, z, p, q, a) = 0

4.7 KEY TERMS


 Partial differential equation: Any equation which contains one or more
partial derivatives is called a partial differential equation.
 Fundamental mode: The first normal mode is referred as the fundamental
mode.
 Charpit’s method: Charpit’s method is used to find the solution of most
general partial differential equation of order one, which is given by
F(x, y, z, p, q) = 0.

4.8 SELF-ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Define Lagrange’s linear differential equation with suitable examples.
2. What is a space function?
3. Which equations are termed singular integrals?

4. Give the assumptions of the wave equation.
5. What are the eigenfunctions?
6. Define the term spectrum.
7. Explain Charpit’s method.
Long-Answer Questions
1. Discuss the first order Lagrange’s equations. Give examples.
2. Briefly describe the wave equation with suitable examples.
3. Explain Charpit’s general method to find the solution of a general partial
differential equation.
4. Solve the following differential equations:
(i) $(3z - 4y)p + (4x - 2z)q = 2y - 3x$
(ii) $x(z^2 - y^2)p + y(x^2 - z^2)q = z(y^2 - x^2)$
5. How does the frequency of the fundamental mode of the vibrating string
depend on (a) the length of the string, (b) the mass per unit length, (c) the
tension? What happens to that frequency if we double the tension?
6. Find u(x, t) of the string of length L = π when c² = 1, the initial velocity is
zero, and the initial deflection is
(i) 0.01 sin 3x
(ii) k(sin x − sin 2x)
(iii) 0.1x(π − x)
(iv) 0.1x(π² − x²)
7. Find the deflection u(x, t) of the string of length L = π and c² = 1 for zero
initial displacement and ‘triangular’ initial velocity u_t(x, 0) = 0.01x if
0 ≤ x ≤ π/2, u_t(x, 0) = 0.01(π − x) if π/2 ≤ x ≤ π. (Initial conditions with
u_t(x, 0) ≠ 0 are hard to realize experimentally.)
8. Find solutions u(x, y) of the following equations by separating variables.
(i) u_x + u_y = 0
(ii) u_x − u_y = 0
(iii) y²u_x − x²u_y = 0
(iv) u_x + u_y = (x + y)u
(v) u_xx + u_yy = 0
(vi) u_xy − u = 0
(vii) u_xx − u_yy = 0
(viii) xu_xy + 2yu = 0
9. Show that the substitution of $u(x,t) = \sum_{n=1}^{\infty} G_n(t)\sin\dfrac{n\pi x}{L}$ (L = length of the string) into
the wave equation governing free vibrations leads to
$$\ddot{G}_n + \lambda_n^2 G_n = 0, \qquad \lambda_n = \frac{cn\pi}{L}.$$
10. Forced vibrations of the string under an external force P(x, t) per unit length
acting normal to the string are governed by the equation
$$u_{tt} = c^2 u_{xx} + \frac{P}{\rho}.$$
11. Find complete integrals of the following equations:
(i) $p^2 + px + q = z$
(ii) $p^2 x + q^2 y = z$
(iii) $px + qy = z\sqrt{1 + pq}$
(iv) $p(1 + q^2) = q(z - a)$
(v) $pq + x(2y + 1)p + (y^2 + y)q = (2y + 1)z$
(vi) $(p + q)(px + qy) = 1$
(vii) $pxy + pq + qy = yz$
(viii) $(p^2 + q^2)x = pz$
(ix) $2(y + zq) = q(xp + yq)$

4.9 FURTHER READING


Gupta, K. P. and J. K. Goyal. 2013. Integral Transform. Meerut (UP): Pragati
Prakashan.
Sharma, J. N. and R. K. Gupta. 2015. Differential Equations (Paperback
Edition). Meerut (UP): Krishna Prakashan Media (P) Ltd.
Raisinghania, M. D. 2013. Ordinary and Partial Differential Equations. New
Delhi: S. Chand Publishing.

Coddington, Earl A. and N. Levinson. 1972. Theory of Ordinary Differential
Equations. New Delhi: Tata McGraw-Hill.
Coddington, Earl A. 1987. An Introduction to Ordinary Differential Equations.
New Delhi: Prentice Hall of India.
Boyce, W. E. and Richard C. DiPrima. 1986. Elementary Differential Equations
and Boundary Value Problems. New York: John Wiley and Sons, Inc.
Ross, S. L. 1984. Differential Equations, 3rd Edition. New York: John Wiley
and Sons.
Sneddon, I. N. 1986. Elements of Partial Differential Equations. New York:
McGraw-Hill Education.

UNIT 5 PARTIAL DIFFERENTIAL EQUATIONS OF THE SECOND AND HIGHER ORDERS
Structure
5.0 Introduction
5.1 Objectives
5.2 Partial Differential Equations of Second and Higher Orders
5.3 Classification of Partial Differential Equations of Second Order
5.4 Homogeneous and Non-Homogeneous Equations with Constant Coefficients
5.5 Partial Differential Equations Reducible to Equations with Constant
Coefficients
5.6 Answers to ‘Check Your Progress’
5.7 Summary
5.8 Key Terms
5.9 Self-Assessment Questions and Exercises
5.10 Further Reading

5.0 INTRODUCTION
In mathematics, a Partial Differential Equation (PDE) is a differential equation that
contains unknown multivariable functions and their partial derivatives. PDEs are
used to formulate problems involving functions of several variables, and are either
solved by hand, or used to create a computer model. A special case is Ordinary
Differential Equations (ODEs), which deal with functions of a single variable and
their derivatives. PDEs can be used to describe a wide variety of phenomena such
as sound, heat, diffusion, electrostatics, electrodynamics, fluid dynamics,
elasticity, gravitation and quantum mechanics. These seemingly distinct physical
phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential
equations often model one-dimensional dynamical systems, partial differential
equations often model multidimensional systems. PDEs find their generalisation
in stochastic partial differential equations.
Partial differential equations are equations that involve rates of change with
respect to continuous variables. The position of a rigid body is specified by six
parameters, but the configuration of a fluid is given by the continuous distribution of
several parameters, such as the temperature, pressure, and so forth. The dynamics
for the rigid body take place in a finite-dimensional configuration space; the dynamics
for the fluid occur in an infinite-dimensional configuration space. This distinction
usually makes PDEs much harder to solve than ordinary differential equations, but
here again, there will be simple solutions for linear problems. Classic domains
where PDEs are used include acoustics, fluid dynamics, electrodynamics, and heat
transfer.
In this unit, you will study the partial differential equations of second and
higher orders, the classification of partial differential equations of second order,
homogeneous and non-homogeneous equations with constant coefficients, and
partial differential equations reducible to equations with constant coefficients.

5.1 OBJECTIVES


After going through this unit, you will be able to:
 Analyse the partial differential equations of second and higher orders
 Discuss the classification of partial differential equations of second order
 Classify the homogeneous and non-homogeneous equations with constant
coefficients
 Briefly explain the partial differential equations reducible to equations with
constant coefficients

5.2 PARTIAL DIFFERENTIAL EQUATIONS OF SECOND AND HIGHER ORDERS
The general form of a linear differential equation of nth order is,
$$\frac{d^n y}{dx^n} + P_1\frac{d^{n-1}y}{dx^{n-1}} + P_2\frac{d^{n-2}y}{dx^{n-2}} + \cdots + P_{n-1}\frac{dy}{dx} + P_n y = Q$$
Where P₁, P₂, ..., Pₙ and Q are functions of x alone or constants.
The linear differential equations with constant coefficients are of the form,
$$\frac{d^n y}{dx^n} + P_1\frac{d^{n-1}y}{dx^{n-1}} + P_2\frac{d^{n-2}y}{dx^{n-2}} + \cdots + P_{n-1}\frac{dy}{dx} + P_n y = Q \qquad (5.1)$$
Where P₁, P₂, ..., Pₙ are constants and Q is a function of x.
The equation,
$$\frac{d^n y}{dx^n} + P_1\frac{d^{n-1}y}{dx^{n-1}} + P_2\frac{d^{n-2}y}{dx^{n-2}} + \cdots + P_{n-1}\frac{dy}{dx} + P_n y = 0 \qquad (5.2)$$
is then called the Reduced Equation (R.E.) of the Equation (5.1).
If y = y₁(x), y = y₂(x), ..., y = yₙ(x) are n solutions of this reduced equation,
then y = c₁y₁ + c₂y₂ + ... + cₙyₙ is also a solution of the reduced equation, where
c₁, c₂, ..., cₙ are arbitrary constants.
The solutions y = y₁(x), y = y₂(x), y = y₃(x), ..., y = yₙ(x) are said to be
linearly independent if the Wronskian of the functions is not zero, where the
Wronskian of the functions y₁, y₂, ..., yₙ, denoted by W(y₁, y₂, ..., yₙ), is defined
by,
$$W(y_1, y_2, \ldots, y_n) = \begin{vmatrix} y_1 & y_2 & y_3 & \cdots & y_n \\ y_1' & y_2' & y_3' & \cdots & y_n' \\ \vdots & \vdots & \vdots & & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & y_3^{(n-1)} & \cdots & y_n^{(n-1)} \end{vmatrix}$$
Since the general solution of a differential equation of nth order contains n
arbitrary constants, u = c₁y₁ + c₂y₂ + ... + cₙyₙ is the complete solution of the
reduced Equation (5.2). Let v be any solution of the differential Equation (5.1); then,
$$\frac{d^n v}{dx^n} + P_1\frac{d^{n-1}v}{dx^{n-1}} + P_2\frac{d^{n-2}v}{dx^{n-2}} + \cdots + P_{n-1}\frac{dv}{dx} + P_n v = Q \qquad (5.3)$$
Since u is a solution of Equation (5.2), we get,
$$\frac{d^n u}{dx^n} + P_1\frac{d^{n-1}u}{dx^{n-1}} + P_2\frac{d^{n-2}u}{dx^{n-2}} + \cdots + P_{n-1}\frac{du}{dx} + P_n u = 0 \qquad (5.4)$$
Now adding Equations (5.3) and (5.4), we get,
$$\frac{d^n (u+v)}{dx^n} + P_1\frac{d^{n-1}(u+v)}{dx^{n-1}} + P_2\frac{d^{n-2}(u+v)}{dx^{n-2}} + \cdots + P_{n-1}\frac{d(u+v)}{dx} + P_n (u+v) = Q$$
This shows that y = u + v is the complete solution of the Equation (5.1).
Introducing the operators D for d/dx, D² for d²/dx², D³ for d³/dx³, etc., the Equation
(5.1) can be written in the form,
Dⁿy + P₁Dⁿ⁻¹y + P₂Dⁿ⁻²y + ... + Pₙ₋₁Dy + Pₙy = Q
Or (Dⁿ + P₁Dⁿ⁻¹ + P₂Dⁿ⁻² + ... + Pₙ₋₁D + Pₙ)y = Q
Or F(D)y = Q, where F(D) = Dⁿ + P₁Dⁿ⁻¹ + P₂Dⁿ⁻² + ... + Pₙ₋₁D + Pₙ
From the above discussion it is clear that the general solution of F(D)y = Q
consists of two parts:
(i) The Complementary Function (C.F.), which is the complete primitive of the
Reduced Equation (R.E.) and is of the form
y = c₁y₁ + c₂y₂ + ... + cₙyₙ, containing n arbitrary constants.
(ii) The Particular Integral (P.I.), which is a solution of F(D)y = Q containing
no arbitrary constant.
Rules for Finding the Complementary Function
Let us consider the 2nd order linear differential equation,
$$\frac{d^2 y}{dx^2} + P_1\frac{dy}{dx} + P_2 y = 0 \qquad (5.5)$$
Let y = Ae^{mx} be a trial solution of the Equation (5.5); then the Auxiliary Equation
(A.E.) of Equation (5.5) is given by,
$$m^2 + P_1 m + P_2 = 0 \qquad (5.6)$$
The Equation (5.6) has two roots m = m₁, m = m₂. We discuss the following
cases:
(i) When m₁ ≠ m₂, the complementary function will be,
y = c₁e^{m₁x} + c₂e^{m₂x}, where c₁ and c₂ are arbitrary constants.
(ii) When m₁ = m₂, the complementary function will be,
y = (c₁ + c₂x)e^{m₁x}, where c₁ and c₂ are arbitrary constants.
(iii) When the auxiliary Equation (5.6) has complex roots of the form α + iβ
and α − iβ, the complementary function will be,
y = e^{αx}(c₁cos βx + c₂sin βx)
Let us consider the equation of order n,
$$\frac{d^n y}{dx^n} + P_1\frac{d^{n-1}y}{dx^{n-1}} + P_2\frac{d^{n-2}y}{dx^{n-2}} + \cdots + P_{n-1}\frac{dy}{dx} + P_n y = 0 \qquad (5.7)$$
Let y = Ae^{mx} be a trial solution of Equation (5.7); then the auxiliary equation is,
$$m^n + P_1 m^{n-1} + P_2 m^{n-2} + \cdots + P_{n-1}m + P_n = 0 \qquad (5.8)$$
Rule (1): If m₁, m₂, m₃, ..., mₙ be n distinct real roots of Equation (5.8), then the
general solution will be,
y = c₁e^{m₁x} + c₂e^{m₂x} + c₃e^{m₃x} + ... + cₙe^{mₙx}
where c₁, c₂, c₃, ..., cₙ are arbitrary constants.
Rule (2): If the two roots m₁ and m₂ of the auxiliary equation are equal, each to m,
the corresponding part of the general solution will be (c₁ + c₂x)e^{mx}; and if the
three roots m₃, m₄, m₅ are equal to λ, the corresponding part of the solution is
(c₃ + c₄x + c₅x²)e^{λx}. When the other roots are distinct, the general solution
will be,
y = (c₁ + c₂x)e^{mx} + (c₃ + c₄x + c₅x²)e^{λx} + c₆e^{m₆x} + ... + cₙe^{mₙx}
Rule (3): If a pair of imaginary roots α ± iβ occurs twice, the corresponding part
of the general solution will be,
e^{αx}[(c₁ + c₂x)cos βx + (c₃ + c₄x)sin βx]
and the general solution will be,
y = e^{αx}[(c₁ + c₂x)cos βx + (c₃ + c₄x)sin βx] + c₅e^{m₅x} + ... + cₙe^{mₙx}
where c₁, c₂, ..., cₙ are arbitrary constants and m₅, m₆, ..., mₙ are distinct real
roots of Equation (5.8).
Rule (4): If the two roots (real) be m and −m, the corresponding part of the
general solution will be c₁e^{mx} + c₂e^{−mx}
= c₁(cosh mx + sinh mx) + c₂(cosh mx − sinh mx)
= c₁′ cosh mx + c₂′ sinh mx, where c₁′ = c₁ + c₂, c₂′ = c₁ − c₂,
and the general solution will be,
y = c₁′ cosh mx + c₂′ sinh mx + c₃e^{m₃x} + c₄e^{m₄x} + ... + cₙe^{mₙx}
where the constants are arbitrary and m₃, m₄, ..., mₙ are distinct real
roots of Equation (5.8).
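The rules above are easy to verify numerically. For instance, y″ − 3y′ + 2y = 0 has auxiliary equation m² − 3m + 2 = 0 with distinct roots m = 1, 2, so Rule (1) gives y = c₁eˣ + c₂e²ˣ. A minimal sketch (the constants c₁, c₂ and sample points are arbitrary choices):

```python
import math

c1, c2 = 1.5, -0.8           # arbitrary constants in the C.F.

def y(x):   return c1*math.exp(x) + c2*math.exp(2*x)
def yp(x):  return c1*math.exp(x) + 2*c2*math.exp(2*x)   # y'
def ypp(x): return c1*math.exp(x) + 4*c2*math.exp(2*x)   # y''

# y'' - 3y' + 2y should vanish identically for every choice of c1, c2
worst = max(abs(ypp(x) - 3*yp(x) + 2*y(x)) for x in (-1.0, 0.0, 0.5, 2.0))
assert worst < 1e-9
```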
Rules for Finding Particular Integrals
Any particular solution of F(D)y = f(x) is known as its Particular Integral (P.I.).
The P.I. of F(D)y = f(x) is symbolically written as,
P.I. = (1/F(D)){f(x)}, where F(D) is the operator.
The operator 1/F(D) is defined as that operator which, when operated on
f(x), gives a function φ(x) such that F(D)φ(x) = f(x),
i.e., (1/F(D)){f(x)} = φ(x) (= P.I.)
∴ F(D)[(1/F(D)){f(x)}] = F(D)φ(x) = f(x)
Obviously, F(D) and 1/F(D) are inverse operators.
Case I: Let F(D) = D; then (1/D){f(x)} = ∫f(x)dx.
Proof: Let y = (1/D){f(x)}. Operating by D, we get Dy = D·(1/D){f(x)}, or Dy = f(x), or
dy/dx = f(x), or dy = f(x)dx.
Integrating both sides with respect to x, we get,
y = ∫f(x)dx, since a particular integral does not contain any arbitrary constant.
Case II: Let F(D) = D − m, where m is a constant; then,
$$\frac{1}{D-m}\{f(x)\} = e^{mx}\int e^{-mx} f(x)\,dx$$
Proof: Let (1/(D − m)){f(x)} = y. Then operating by D − m, we get,
(D − m)·(1/(D − m)){f(x)} = (D − m)y
Or f(x) = dy/dx − my
Or dy/dx − my = f(x), which is a first order linear differential equation with
I.F. = e^{∫−m dx} = e^{−mx}.
Multiplying the above equation by e^{−mx} and integrating with respect to x, we
get,
ye^{−mx} = ∫f(x)e^{−mx}dx, since a particular integral does not contain any arbitrary
constant,
Or y = e^{mx}∫f(x)e^{−mx}dx.
Note: If
$$\frac{1}{F(D)} = \frac{a_1}{D-m_1} + \frac{a_2}{D-m_2} + \cdots + \frac{a_n}{D-m_n}$$
where aᵢ and mᵢ (i = 1, 2, ..., n) are constants, then
$$\frac{1}{F(D)}\{f(x)\} = a_1 e^{m_1 x}\int f(x)e^{-m_1 x}dx + a_2 e^{m_2 x}\int f(x)e^{-m_2 x}dx + \cdots + a_n e^{m_n x}\int f(x)e^{-m_n x}dx = \sum_{i=1}^{n} a_i e^{m_i x}\int f(x)e^{-m_i x}dx$$
We now discuss methods of finding particular integrals for certain specific types
of right-hand functions.
Type 1: F(D)y = e^{mx}, where m is a constant.
Then P.I. = (1/F(D)){e^{mx}} = e^{mx}/F(m), if F(m) ≠ 0.
If F(m) = 0, then we replace D by D + m in F(D):
P.I. = (1/F(D)){e^{mx}} = e^{mx}·(1/F(D + m)){1}
Example 5.1: Solve (D³ − 2D² − 5D + 6)y = (e^{2x} + 3)² + e^{3x}cosh x.
Solution: The reduced equation is,
(D³ − 2D² − 5D + 6)y = 0 ...(5.9)
Let y = Ae^{mx} be a trial solution of Equation (5.9). Then the auxiliary equation is,
m³ − 2m² − 5m + 6 = 0 or m²(m − 1) − m(m − 1) − 6(m − 1) = 0
Or (m − 1)(m² − m − 6) = 0 or (m − 1)(m − 3)(m + 2) = 0 or m = 1, 3, −2
∴ The complementary function is,
y = c₁eˣ + c₂e³ˣ + c₃e⁻²ˣ, where c₁, c₂, c₃ are arbitrary constants.
Again,
$$(e^{2x}+3)^2 + e^{3x}\cosh x = e^{4x} + 6e^{2x} + 9 + e^{3x}\cdot\frac{e^{x}+e^{-x}}{2} = \frac{3}{2}e^{4x} + \frac{13}{2}e^{2x} + 9e^{0\cdot x}$$
∴ The particular integral is,
$$y = \frac{1}{(D-1)(D-3)(D+2)}\left\{\frac{3}{2}e^{4x} + \frac{13}{2}e^{2x} + 9e^{0\cdot x}\right\}$$
$$= \frac{3}{2}\cdot\frac{e^{4x}}{(4-1)(4-3)(4+2)} + \frac{13}{2}\cdot\frac{e^{2x}}{(2-1)(2-3)(2+2)} + 9\cdot\frac{e^{0\cdot x}}{(0-1)(0-3)(0+2)}$$
$$= \frac{e^{4x}}{12} - \frac{13}{8}e^{2x} + \frac{3}{2}$$
Hence the general solution is,
y = C.F. + P.I.
$$= c_1 e^{x} + c_2 e^{3x} + c_3 e^{-2x} + \frac{e^{4x}}{12} - \frac{13}{8}e^{2x} + \frac{3}{2}.$$
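The particular integral found in Example 5.1 can be confirmed by direct differentiation: the combination y‴ − 2y″ − 5y′ + 6y must reproduce (e^{2x} + 3)² + e^{3x}cosh x. A minimal sketch with hand-computed derivatives:

```python
import math

def yp(x):  return math.exp(4*x)/12 - 13*math.exp(2*x)/8 + 1.5
def d1(x):  return math.exp(4*x)/3 - 13*math.exp(2*x)/4            # y'
def d2(x):  return 4*math.exp(4*x)/3 - 13*math.exp(2*x)/2          # y''
def d3(x):  return 16*math.exp(4*x)/3 - 13*math.exp(2*x)           # y'''

def rhs(x):
    return (math.exp(2*x) + 3)**2 + math.exp(3*x)*math.cosh(x)

worst = max(abs(d3(x) - 2*d2(x) - 5*d1(x) + 6*yp(x) - rhs(x))
            for x in (-0.5, 0.0, 0.4, 1.0))
assert worst < 1e-8
```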
1 1
Notes: 1. When F(m) = 0 and F′(m) ≠ 0,
$$\text{P.I.} = \frac{1}{F(D)}\{e^{mx}\} = x\,\frac{1}{F'(D)}\{e^{mx}\} = \frac{xe^{mx}}{F'(m)}$$
2. When F(m) = 0, F′(m) = 0 and F″(m) ≠ 0, then
$$\text{P.I.} = \frac{1}{F(D)}\{e^{mx}\} = x^2\,\frac{1}{F''(D)}\{e^{mx}\} = \frac{x^2 e^{mx}}{F''(m)}$$
And so on.
Type 2: f(x) = e^{mx}V, where V is any function of x.
Here the Particular Integral (P.I.) of F(D)y = f(x) is,
$$\text{P.I.} = \frac{1}{F(D)}\{e^{mx}V\} = e^{mx}\,\frac{1}{F(D+m)}\{V\}.$$
Example 5.2: Solve (D² − 5D + 6)y = x²e^{3x}.
Solution: The reduced equation is,
(D² − 5D + 6)y = 0 (5.10)
Let y = Ae^{mx} be a trial solution of Equation (5.10); then the auxiliary equation is
m² − 5m + 6 = 0 or m(m − 3) − 2(m − 3) = 0 or (m − 3)(m − 2) = 0
∴ m = 2, 3
∴ The complementary function is,
y = c₁e^{2x} + c₂e^{3x}, where c₁ and c₂ are arbitrary constants.
The particular integral is,
$$y = \frac{1}{D^2-5D+6}\{x^2 e^{3x}\} = e^{3x}\,\frac{1}{(D+3)^2 - 5(D+3) + 6}\{x^2\}$$
$$= e^{3x}\,\frac{1}{D^2+D}\{x^2\} = e^{3x}\,\frac{1}{D}(1+D)^{-1}\{x^2\}$$
$$= e^{3x}\,\frac{1}{D}(1 - D + D^2 - D^3 + \cdots)\{x^2\}$$
$$= e^{3x}\,\frac{1}{D}\{x^2 - 2x + 2\} = e^{3x}\left(\frac{x^3}{3} - x^2 + 2x\right)$$
Hence the general solution is,
y = C.F. + P.I.
$$= c_1 e^{2x} + c_2 e^{3x} + e^{3x}\left(\frac{x^3}{3} - x^2 + 2x\right).$$
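To confirm the particular integral of Example 5.2, substitute y_p = e^{3x}(x³/3 − x² + 2x) into (D² − 5D + 6)y; the result should be x²e^{3x}. A minimal sketch using hand-computed derivatives:

```python
import math

def yp(x):  return math.exp(3*x)*(x**3/3 - x**2 + 2*x)
def d1(x):  return math.exp(3*x)*(x**3 - 2*x**2 + 4*x + 2)        # y'
def d2(x):  return math.exp(3*x)*(3*x**3 - 3*x**2 + 8*x + 10)     # y''

worst = max(abs(d2(x) - 5*d1(x) + 6*yp(x) - x**2*math.exp(3*x))
            for x in (-1.0, 0.0, 0.7, 1.5))
assert worst < 1e-8
```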
Recall: (i) (1 + x)⁻¹ = 1 − x + x² − x³ + x⁴ − x⁵ + ...
(ii) (1 − x)⁻¹ = 1 + x + x² + x³ + x⁴ + x⁵ + ...
Type 3: (a) F(D)y = sin ax or cos ax, where F(D) = φ(D²).
Here P.I. = (1/F(D)){sin ax} = sin ax/φ(−a²), if φ(−a²) ≠ 0,
or P.I. = (1/F(D)){cos ax} = cos ax/φ(−a²), if φ(−a²) ≠ 0.
[Note that D² has been replaced by −a², but D has not been replaced by −a.]
(b) F(D)y = sin ax or cos ax, and F(D) = φ(D², D).
Here P.I. = (1/φ(D², D)){sin ax} = (1/φ(−a², D)){sin ax}, if φ(−a², D) ≠ 0,
or y = (1/φ(D², D)){cos ax} = (1/φ(−a², D)){cos ax}, if φ(−a², D) ≠ 0.
(c) F(D)y = sin ax or cos ax, and 1/F(D) = φ(D)/ψ(D²).
Here P.I. = (φ(D)/ψ(D²)){sin ax} = φ(D){sin ax}/ψ(−a²), if ψ(−a²) ≠ 0,
or y = (φ(D)/ψ(D²)){cos ax} = φ(D){cos ax}/ψ(−a²), if ψ(−a²) ≠ 0.
(d) F(D)y = sin ax or cos ax, F(D) = φ(D²) but φ(−a²) = 0.
Here P.I. = (1/F(D)){sin ax or cos ax} = x·(1/F′(D)){sin ax or cos ax}
Alternatively, sin ax and cos ax can be written in the forms
$$\sin ax = \frac{e^{iax} - e^{-iax}}{2i}, \qquad \cos ax = \frac{e^{iax} + e^{-iax}}{2},$$
and the P.I. found by the method of Type 1.
Example 5.3: Solve (D⁴ + 2D² + 1)y = cos x.
Solution: The reduced equation is (D⁴ + 2D² + 1)y = 0.
Let y = Ae^{mx} be a trial solution. Then the auxiliary equation is,
m⁴ + 2m² + 1 = 0 or (m² + 1)² = 0 or m = ±i, ±i
∴ C.F. = (c₁ + c₂x)cos x + (c₃ + c₄x)sin x, where c₁, c₂, c₃ and c₄ are
arbitrary constants.
$$\therefore\ \text{P.I.} = \frac{1}{D^4+2D^2+1}\{\cos x\} = x\,\frac{1}{4D^3+4D}\{\cos x\}$$
[∵ φ(D²) = D⁴ + 2D² + 1 and φ(−1²) = 1 − 2 + 1 = 0, so (1/F(D)){f(x)} = x·(1/F′(D)){f(x)}]
$$= \frac{x}{4}\cdot\frac{1}{D^3+D}\{\cos x\} = \frac{x}{4}\cdot x\,\frac{1}{3D^2+1}\{\cos x\}$$
$$= \frac{x^2}{4}\cdot\frac{\cos x}{-3+1} = -\frac{x^2}{8}\cos x$$
Hence the general solution is,
y = C.F. + P.I.
$$= (c_1 + c_2 x)\cos x + (c_3 + c_4 x)\sin x - \frac{x^2}{8}\cos x.$$
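Because Example 5.3 applies the repeated-root rule twice, checking y_p = −(x²/8)cos x directly against (D⁴ + 2D² + 1)y = cos x is a useful safeguard. A minimal sketch using central finite differences for the derivatives:

```python
import math

def yp(x):
    return -(x*x/8)*math.cos(x)

def check(x, h=1e-2):
    # five-point samples for central-difference stencils
    f = [yp(x + k*h) for k in (-2, -1, 0, 1, 2)]
    d2 = (f[3] - 2*f[2] + f[1]) / h**2                       # y''
    d4 = (f[4] - 4*f[3] + 6*f[2] - 4*f[1] + f[0]) / h**4     # y''''
    return abs(d4 + 2*d2 + yp(x) - math.cos(x))

assert max(check(x) for x in (0.3, 1.1, 2.5)) < 1e-3
```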
Example 5.4: Solve (D² − 4)y = sin 2x.
Solution: The reduced equation is,
(D² − 4)y = 0
Let y = Ae^{mx} be a trial solution; then the auxiliary equation is,
m² − 4 = 0, ∴ m = ±2
The complementary function is,
y = c₁e^{2x} + c₂e^{−2x}, where c₁, c₂ are arbitrary constants.
The particular integral is,
$$y = \frac{1}{D^2-4}\{\sin 2x\} = \frac{\sin 2x}{-2^2-4} \quad [\text{Replacing } D^2 \text{ by } -2^2]$$
$$= -\frac{1}{8}\sin 2x$$
The general solution is y = C.F. + P.I. = c₁e^{2x} + c₂e^{−2x} − (1/8)sin 2x.
and Higher Orders 8
Example 5.5: Solve (3D2 + 2D – 8)y = 5 cos x.
NOTES Solution: The reduced equation is,
(3D2 + 2D – 8)y = 0
Let y = Aemx be a trial solution and then the auxiliary equation is,
3m2 + 2m – 8 = 0 or 3m2 + 6m – 4m – 8 = 0
Or 3m (m + 2) – 4 (m + 2) = 0 or (m + 2) (3m – 4) = 0
4
Or m = – 2, m =
3
 The complementary function is,
4
x
y = c1e–2x + c2 e 3 when c1 and c2 are arbitrary constants.
The particular integral is,
1 1
y= {5cos x} = 5 {cos x}
2
3D  2 D  8 (3D  4)( D  2)

(3D  4)( D  2) (3D  4)( D  2)


=5 2 2
{cos x} =5 {cos x}
(9 D  16)( D  4) [9( 12 )  16][12  4]

( D)
[D2 is replaced by – 12 in the denominator] form
(D2 )

5 1
= [3D 2 6 D 4 D 8]{cos x} = [3 D 2  2 D  8]cos x
( 25) ( 5) 25

1  d2 d 
=  3 2 (cos x)  2 (cos x)  8cos x 
25  dx dx 
1 1
=  3cos  2sin x  8 cos x  = (2 sin x  11cos x)
25 25
The general solution is,
y = C.F. + P.I.
1
= c1e –2x + c2e4/3x + (2 sin x  11cos x ) .
25
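The particular integral (1/25)(2 sin x − 11 cos x) from Example 5.5 is easy to verify term by term, since 3y″ + 2y′ − 8y must reduce to 5 cos x. A minimal sketch:

```python
import math

def yp(x):  return (2*math.sin(x) - 11*math.cos(x)) / 25
def d1(x):  return (2*math.cos(x) + 11*math.sin(x)) / 25     # y'
def d2(x):  return (-2*math.sin(x) + 11*math.cos(x)) / 25    # y''

worst = max(abs(3*d2(x) + 2*d1(x) - 8*yp(x) - 5*math.cos(x))
            for x in (0.0, 0.9, 2.2, 4.0))
assert worst < 1e-12
```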
Type 4: F(D)y = xⁿ, n a positive integer.
Here P.I. = (1/F(D)){xⁿ} = [F(D)]⁻¹{xⁿ}
In this case, [F(D)]⁻¹ is expanded in a binomial series in ascending powers of
D up to Dⁿ, and each term of the expansion then operates on xⁿ. The terms in
the expansion beyond Dⁿ need not be considered, since the result of their operation
on xⁿ will be zero.
Example 5.6: Solve D²(D² + D + 1)y = x².
Solution: The reduced equation is,
D²(D² + D + 1)y = 0 (5.11)
Let y = Ae^{mx} be a trial solution of Equation (5.11); then the auxiliary equation
is,
m²(m² + m + 1) = 0
$$\therefore\ m = 0,\ 0 \quad\text{and}\quad m = \frac{-1 \pm \sqrt{1-4}}{2} = \frac{-1 \pm \sqrt{3}\,i}{2}$$
∴ The complementary function is,
$$y = (c_1 + c_2 x)e^{0\cdot x} + e^{-x/2}\left(c_3\cos\frac{\sqrt{3}}{2}x + c_4\sin\frac{\sqrt{3}}{2}x\right)$$
$$= c_1 + c_2 x + e^{-x/2}\left(c_3\cos\frac{\sqrt{3}}{2}x + c_4\sin\frac{\sqrt{3}}{2}x\right)$$
where c₁, c₂, c₃, c₄ are arbitrary constants.
The particular integral is,
$$y = \frac{1}{D^2(D^2+D+1)}\{x^2\} = \frac{1}{D^2}(1+D+D^2)^{-1}\{x^2\}$$
$$= \frac{1}{D^2}\{1 - (D+D^2) + (D+D^2)^2 - (D+D^2)^3 + \cdots\}\{x^2\}$$
$$= \frac{1}{D^2}\{x^2 - (2x+2) + 2\} = \frac{1}{D^2}\{x^2 - 2x\}$$
$$= \frac{1}{D}\left\{\frac{x^3}{3} - x^2\right\} = \frac{x^4}{12} - \frac{x^3}{3}$$
The general solution is y = C.F. + P.I.
$$= c_1 + c_2 x + e^{-x/2}\left(c_3\cos\frac{\sqrt{3}}{2}x + c_4\sin\frac{\sqrt{3}}{2}x\right) + \frac{x^4}{12} - \frac{x^3}{3}.$$
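For Example 5.6 the particular integral is a polynomial, so the check D²(D² + D + 1)y_p = x², i.e. y_p⁗ + y_p‴ + y_p″ = x², can be done exactly. A minimal sketch:

```python
def derivatives(x):
    # y_p = x^4/12 - x^3/3: its 2nd, 3rd and 4th derivatives
    d2 = x**2 - 2*x
    d3 = 2*x - 2
    d4 = 2
    return d2, d3, d4

for x in (-2.0, 0.0, 1.0, 3.5):
    d2, d3, d4 = derivatives(x)
    assert abs(d4 + d3 + d2 - x**2) < 1e-12
```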

Example 5.7: Solve (D² + 4)y = x sin²x.
Solution: The reduced equation is,
(D² + 4)y = 0
The trial solution y = Ae^{mx} gives the auxiliary equation as,
m² + 4 = 0, m = ±2i
The complementary function is y = c₁cos 2x + c₂sin 2x.
For the particular integral, write x sin²x = x/2 − (x/2)cos 2x, so that,
$$y = \frac{1}{D^2+4}\left\{\frac{x}{2}\right\} - \frac{1}{D^2+4}\left\{\frac{x}{2}\cos 2x\right\}$$
For the first term,
$$\frac{1}{D^2+4}\left\{\frac{x}{2}\right\} = \frac{1}{4}\left(1+\frac{D^2}{4}\right)^{-1}\left\{\frac{x}{2}\right\} = \frac{x}{8}$$
For the second term, write cos 2x = (e^{2ix} + e^{−2ix})/2 and shift the exponentials
through the operator:
$$\frac{1}{D^2+4}\left\{\frac{x}{2}\cos 2x\right\} = \frac{1}{4}\left[e^{2ix}\frac{1}{D^2+4iD}\{x\} + e^{-2ix}\frac{1}{D^2-4iD}\{x\}\right]$$
Expanding each inverse operator binomially, as in the previous examples, this
reduces to
$$\frac{x^2}{16}\sin 2x + \frac{x}{32}\cos 2x$$
∴ P.I. = x/8 − (x²/16)sin 2x − (x/32)cos 2x
Hence the general solution is y = C.F. + P.I.
$$= c_1\cos 2x + c_2\sin 2x + \frac{x}{8} - \frac{x^2}{16}\sin 2x - \frac{x}{32}\cos 2x.$$
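The signs in Example 5.7 are easy to get wrong, so a direct check is worthwhile: with y_p = x/8 − (x²/16)sin 2x − (x/32)cos 2x we should have y_p″ + 4y_p = x sin²x. A minimal sketch with a finite-difference second derivative:

```python
import math

def yp(x):
    return x/8 - (x*x/16)*math.sin(2*x) - (x/32)*math.cos(2*x)

def check(x, h=1e-4):
    d2 = (yp(x + h) - 2*yp(x) + yp(x - h)) / h**2     # y''
    return abs(d2 + 4*yp(x) - x*math.sin(x)**2)

assert max(check(x) for x in (0.4, 1.3, 2.7)) < 1e-5
```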
Example 5.8: Solve (D⁴ + D³ − 3D² − 5D − 2)y = 3xe⁻ˣ.
Solution: The reduced equation is,
(D⁴ + D³ − 3D² − 5D − 2)y = 0 (5.12)
The trial solution y = Ae^{mx} gives the auxiliary equation as,
m⁴ + m³ − 3m² − 5m − 2 = 0
Or m³(m + 1) − 3m(m + 1) − 2(m + 1) = 0
Or (m + 1)(m³ − 3m − 2) = 0 or (m + 1)²(m² − m − 2) = 0
Or (m + 1)³(m − 2) = 0
∴ m = −1, −1, −1, 2
The complementary function is y = (c₁ + c₂x + c₃x²)e⁻ˣ + c₄e^{2x}.
The particular integral is,
$$y = \frac{1}{(D+1)^3(D-2)}\{3xe^{-x}\} = 3e^{-x}\,\frac{1}{D^3(D-3)}\{x\}$$
$$= 3e^{-x}\,\frac{1}{D^3}\cdot\frac{1}{(-3)(1-D/3)}\{x\} = -e^{-x}\,\frac{1}{D^3}\left(1+\frac{D}{3}+\frac{D^2}{9}+\cdots\right)\{x\}$$
$$= -e^{-x}\,\frac{1}{D^3}\left\{x+\frac{1}{3}\right\} = -e^{-x}\,\frac{1}{D^2}\left\{\frac{x^2}{2}+\frac{x}{3}\right\} = -e^{-x}\,\frac{1}{D}\left\{\frac{x^3}{6}+\frac{x^2}{6}\right\}$$
$$= -e^{-x}\left(\frac{x^4}{24}+\frac{x^3}{18}\right)$$
The general solution is y = C.F. + P.I.
$$= (c_1 + c_2 x + c_3 x^2)e^{-x} + c_4 e^{2x} - e^{-x}\left(\frac{x^4}{24}+\frac{x^3}{18}\right).$$
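Example 5.8 involves a fourth-order operator, so a finite-difference check of y_p = −e⁻ˣ(x⁴/24 + x³/18) against (D⁴ + D³ − 3D² − 5D − 2)y = 3xe⁻ˣ is a convenient safeguard. A minimal sketch:

```python
import math

def yp(x):
    return -math.exp(-x)*(x**4/24 + x**3/18)

def check(x, h=1e-2):
    # five-point central-difference stencils for y', y'', y''', y''''
    f = [yp(x + k*h) for k in (-2, -1, 0, 1, 2)]
    d1 = (f[3] - f[1]) / (2*h)
    d2 = (f[3] - 2*f[2] + f[1]) / h**2
    d3 = (f[4] - 2*f[3] + 2*f[1] - f[0]) / (2*h**3)
    d4 = (f[4] - 4*f[3] + 6*f[2] - 4*f[1] + f[0]) / h**4
    lhs = d4 + d3 - 3*d2 - 5*d1 - 2*yp(x)
    return abs(lhs - 3*x*math.exp(-x))

assert max(check(x) for x in (0.5, 1.2, 2.0)) < 1e-2
```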

Type 5: (a) F(D)y = xV, where V is a function of x.
Here,
$$\text{P.I.} = \frac{1}{F(D)}\{xV\} = \left[x - \frac{F'(D)}{F(D)}\right]\frac{1}{F(D)}\{V\}.$$
Example 5.9: Solve (D² + 9)y = x sin x.
Solution: The reduced equation is (D² + 9)y = 0. (5.13)
The trial solution y = Ae^{mx} gives the auxiliary equation as,
m² + 9 = 0 or m = ±3i
∴ C.F. = c₁cos 3x + c₂sin 3x, where c₁ and c₂ are arbitrary constants.
And,
$$\text{P.I.} = \frac{1}{F(D)}\{x\sin x\}, \quad\text{where } F(D) = D^2+9$$
$$= \left[x - \frac{F'(D)}{F(D)}\right]\frac{1}{F(D)}\{\sin x\} = \left[x - \frac{2D}{D^2+9}\right]\left\{\frac{\sin x}{-1+9}\right\}$$
$$= \frac{x\sin x}{8} - \frac{1}{8}\cdot\frac{2D}{-1+9}\{\sin x\} = \frac{x\sin x}{8} - \frac{\cos x}{32}$$
Hence the general solution is,
y = C.F. + P.I. = c₁cos 3x + c₂sin 3x + (x sin x)/8 − (cos x)/32
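The Type 5(a) formula in Example 5.9 gives y_p = (x sin x)/8 − (cos x)/32; substituting into y″ + 9y should return x sin x exactly. A minimal sketch with hand-computed derivatives:

```python
import math

def yp(x):  return x*math.sin(x)/8 - math.cos(x)/32
def d2(x):
    # y'' = cos x/4 + cos x/32 - x sin x/8
    return math.cos(x)/4 + math.cos(x)/32 - x*math.sin(x)/8

worst = max(abs(d2(x) + 9*yp(x) - x*math.sin(x))
            for x in (0.0, 0.8, 1.9, 3.1))
assert worst < 1e-12
```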

NOTES (b) F (D) y = xnV where V is any function of x.


n
1 1 F (D) 1
HereP.I. = { f ( x)} = {x nV } x {V }
F ( D) F ( D) F ( D) F ( D)
Example 5.10: Solve (D² − 1)y = x²sin x.
Solution: The reduced equation is (D² − 1)y = 0. (5.14)
Let y = Ae^{mx} be a trial solution. Then the auxiliary equation is,
m² − 1 = 0 or m = ±1
∴ C.F. = c₁eˣ + c₂e⁻ˣ, where c₁ and c₂ are arbitrary constants.
$$\text{P.I.} = \frac{1}{F(D)}\{x^2\sin x\}, \quad\text{where } F(D) = D^2-1$$
$$= \left[x - \frac{2D}{D^2-1}\right]^2 \frac{1}{D^2-1}\{\sin x\} = \left[x - \frac{2D}{D^2-1}\right]^2\left\{-\frac{1}{2}\sin x\right\}$$
[since (1/(D² − 1)){sin x} = sin x/(−1 − 1) = −(1/2)sin x]
Applying the operator once,
$$\left[x - \frac{2D}{D^2-1}\right]\left\{-\frac{1}{2}\sin x\right\} = -\frac{x}{2}\sin x - \frac{1}{2}\cos x$$
Applying it once more,
$$\text{P.I.} = -\frac{x^2}{2}\sin x - \frac{x}{2}\cos x + \frac{1}{D^2-1}\{x\cos x\}$$
and
$$\frac{1}{D^2-1}\{x\cos x\} = \left[x - \frac{2D}{D^2-1}\right]\left\{-\frac{1}{2}\cos x\right\} = -\frac{x}{2}\cos x + \frac{1}{2}\sin x$$
$$\therefore\ \text{P.I.} = -\frac{1}{2}x^2\sin x - x\cos x + \frac{1}{2}\sin x$$
Hence the general solution is,
y = C.F. + P.I.
$$= c_1 e^{x} + c_2 e^{-x} - \frac{1}{2}x^2\sin x - x\cos x + \frac{1}{2}\sin x.$$
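Example 5.10 applies the Type 5(b) operator twice, so it is worth confirming that y_p = −(1/2)x²sin x − x cos x + (1/2)sin x satisfies y″ − y = x²sin x. A minimal sketch:

```python
import math

def yp(x):
    return -0.5*x*x*math.sin(x) - x*math.cos(x) + 0.5*math.sin(x)

def d2(x):
    # y'' = -x cos x + (x^2/2) sin x + (1/2) sin x
    return -x*math.cos(x) + 0.5*x*x*math.sin(x) + 0.5*math.sin(x)

worst = max(abs(d2(x) - yp(x) - x*x*math.sin(x))
            for x in (0.0, 1.1, 2.6, 4.2))
assert worst < 1e-12
```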

Check Your Progress
1. Write the general linear differential equation with constant coefficients.
2. What is the complementary function of a 2nd order linear differential
equation if the roots m₁ and m₂ of the auxiliary equation are equal?
3. What is the particular integral?
4. What are the three types of second order partial differential equations?
5. What is the complementary function of the equation
(A₀Dⁿ + A₁Dⁿ⁻¹D′ + A₂Dⁿ⁻²D′² + ... + AₙD′ⁿ)z = 0 if the roots are
distinct?

5.3 CLASSIFICATION OF PARTIAL DIFFERENTIAL EQUATIONS OF SECOND ORDER
Consider the following linear partial differential equation of the second order in
two independent variables,
$$A\frac{\partial^2 u}{\partial x^2} + B\frac{\partial^2 u}{\partial x \partial y} + C\frac{\partial^2 u}{\partial y^2} + D\frac{\partial u}{\partial x} + E\frac{\partial u}{\partial y} + Fu = G$$
Where A, B, C, D, E, F, and G are functions of x and y.
This equation, when converted to a quasi-linear partial differential equation,
takes the form,
$$A\frac{\partial^2 u}{\partial x^2} + B\frac{\partial^2 u}{\partial x \partial y} + C\frac{\partial^2 u}{\partial y^2} + f\left(x, y, u, \frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}\right) = 0$$
These equations are said to be of:
1. Elliptic Type if B² − 4AC < 0
2. Parabolic Type if B² − 4AC = 0
3. Hyperbolic Type if B² − 4AC > 0

Let us consider some examples to understand this:
$$\text{(i)}\quad \frac{\partial^2 u}{\partial x^2} - 2x\frac{\partial^2 u}{\partial x \partial y} + x^2\frac{\partial^2 u}{\partial y^2} - 2\frac{\partial u}{\partial y} = 0$$
⟹ u_xx − 2xu_xy + x²u_yy − 2u_y = 0
Comparing it with the general equation we find that,
A = 1, B = −2x, C = x²
Therefore,
B² − 4AC = (−2x)² − 4x² = 0 for all x and y
So the equation is parabolic at all points.
(ii) y²u_xx + x²u_yy = 0
Comparing it with the general equation we get,
A = y², B = 0, C = x²
Therefore,
B² − 4AC = 0 − 4x²y² < 0 for all x ≠ 0, y ≠ 0
So the equation is elliptic at all such points.
(iii) x²u_xx − y²u_yy = 0
Comparing it with the general equation we find that,
A = x², B = 0, C = −y²
Therefore,
B² − 4AC = 0 + 4x²y² > 0 for all x ≠ 0, y ≠ 0
So the equation is hyperbolic at all such points.
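The discriminant test above is mechanical enough to code directly. A minimal sketch (the function name and sample point are illustrative, not from the text):

```python
def classify(A, B, C):
    """Classify A u_xx + B u_xy + C u_yy + ... = 0 by the sign of B^2 - 4AC."""
    disc = B*B - 4*A*C
    if disc < 0:
        return "elliptic"
    if disc == 0:
        return "parabolic"
    return "hyperbolic"

x, y = 2.0, 3.0
# (i)  u_xx - 2x u_xy + x^2 u_yy - 2 u_y = 0  ->  parabolic everywhere
assert classify(1, -2*x, x*x) == "parabolic"
# (ii) y^2 u_xx + x^2 u_yy = 0                ->  elliptic for x, y != 0
assert classify(y*y, 0, x*x) == "elliptic"
# (iii) x^2 u_xx - y^2 u_yy = 0               ->  hyperbolic for x, y != 0
assert classify(x*x, 0, -y*y) == "hyperbolic"
```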
The following three are the most commonly used partial differential equations
of the second order:
1. Laplace equation,
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$$
This equation is of elliptic type.
2. One-dimensional heat flow equation,
$$\frac{\partial u}{\partial t} = c^2\frac{\partial^2 u}{\partial x^2}$$
This equation is of parabolic type.
3. One-dimensional wave equation,
$$\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2}$$
This equation is of hyperbolic type.
5.4 HOMOGENEOUS AND NON-HOMOGENEOUS EQUATIONS WITH CONSTANT COEFFICIENTS
Homogeneous Linear Equations with Constant Coefficients
Let f(D, D′)z = V(x, y) (5.15)
Then if,
f(D, D′) = A₀Dⁿ + A₁Dⁿ⁻¹D′ + A₂Dⁿ⁻²D′² + ... + AₙD′ⁿ (5.16)
where A₀, A₁, A₂, ..., Aₙ are constants, Equation (5.15) is known as a
homogeneous equation and takes the form,
(A₀Dⁿ + A₁Dⁿ⁻¹D′ + A₂Dⁿ⁻²D′² + ... + AₙD′ⁿ)z = V(x, y) (5.17)

Complementary Function
Consider the equation,
(A₀Dⁿ + A₁Dⁿ⁻¹D′ + A₂Dⁿ⁻²D′² + ... + AₙD′ⁿ)z = 0 (5.18)
Let,
z = φ(y + mx) (5.19)
be a solution of Equation (5.18).
Now Dʳz = mʳφ⁽ʳ⁾(y + mx),
D′ˢz = φ⁽ˢ⁾(y + mx),
and DʳD′ˢz = mʳφ⁽ʳ⁺ˢ⁾(y + mx).
Therefore, on substituting Equation (5.19) in Equation (5.18), we get,
(A₀mⁿ + A₁mⁿ⁻¹ + A₂mⁿ⁻² + ... + Aₙ)φ⁽ⁿ⁾(y + mx) = 0
which will be satisfied if,
A₀mⁿ + A₁mⁿ⁻¹ + A₂mⁿ⁻² + ... + Aₙ = 0 (5.20)
Equation (5.20) is known as the auxiliary equation.
Let m₁, m₂, ..., mₙ be the roots of Equation (5.20).
Then the following three cases arise:
Case I: Roots m_1, m_2, …, m_n are Distinct.

Part of C.F. corresponding to m = m_1 is,

z = φ_1(y + m_1 x)

where φ_1 is an arbitrary function.

Part of C.F. corresponding to m = m_2 is,

z = φ_2(y + m_2 x)

where φ_2 is any arbitrary function.

Now since our equation is linear, the sum of solutions is also a solution. Therefore, our complementary function becomes,

C.F. = φ_1(y + m_1x) + φ_2(y + m_2x) + …… + φ_n(y + m_nx)
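Case I can be illustrated with a hypothetical equation (D² − 5DD' + 6D'²)z = 0, whose auxiliary equation m² − 5m + 6 = 0 has distinct roots m = 2, 3. The sketch below (SymPy assumed; sin and exp stand in for the arbitrary functions) checks that φ_1(y + 2x) + φ_2(y + 3x) satisfies the equation:

```python
import sympy as sp

x, y = sp.symbols('x y')
D  = lambda w: sp.diff(w, x)     # the operator D  = d/dx
Dp = lambda w: sp.diff(w, y)     # the operator D' = d/dy

# C.F. = phi_1(y + 2x) + phi_2(y + 3x), with sample functions sin and exp
z = sp.sin(y + 2*x) + sp.exp(y + 3*x)

# (D^2 - 5DD' + 6D'^2) z
residual = sp.simplify(D(D(z)) - 5*Dp(D(z)) + 6*Dp(Dp(z)))
```

The residual vanishes identically, as Case I predicts.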
Case II: Roots are Imaginary.

Let the pair of complex roots of Equation (5.20) be u ± iv.

Then the corresponding part of the complementary function is,

z = φ_1(y + ux + ivx) + φ_2(y + ux − ivx)   …(5.21)

Let y + ux = P and vx = Q

Then z = φ_1(P + iQ) + φ_2(P − iQ)

If φ_1 + φ_2 = ξ_1

And i(φ_1 − φ_2) = ξ_2

Then,

φ_1 = (1/2)(ξ_1 − iξ_2)

And

φ_2 = (1/2)(ξ_1 + iξ_2)

Substituting these values in Equation (5.21), we get,

z = (1/2)ξ_1(P + iQ) − (1/2)iξ_2(P + iQ) + (1/2)ξ_1(P − iQ) + (1/2)iξ_2(P − iQ)

Or

z = (1/2){ξ_1(P + iQ) + ξ_1(P − iQ)} − (1/2) i{ξ_2(P + iQ) − ξ_2(P − iQ)}
Case III: Roots are Repeated.

Let m be the repeated root of Equation (5.20). Then we have,

(D − mD')(D − mD')z = 0

Putting (D − mD')z = U,   (5.22)

we get (D − mD')U = 0   (5.23)

Since the equation is linear, it has the following subsidiary equations,

dx/1 = dy/(−m) = dU/0   (5.24)

Two independent integrals of Equation (5.24) are,

y + mx = Constant

And U = Constant

∴ U = φ(y + mx)

This is a solution of Equation (5.23), where φ is an arbitrary function.

Substituting in Equation (5.22),

∂z/∂x − m ∂z/∂y = φ(y + mx)   (5.25)

which has the following subsidiary equations,

dx/1 = dy/(−m) = dz/φ(y + mx)

Two independent integrals of Equation (5.25) are,

y + mx = Constant

And z − xφ(y + mx) = Constant

Therefore z = xφ(y + mx) + ψ(y + mx)   (5.26)

This is a solution of Equation (5.25), where ψ is an arbitrary function.

Equation (5.26) is the part of C.F. corresponding to a twice-repeated root.

In general, if the root m is repeated r times, the corresponding part of C.F. is,

z = x^{r−1}φ_1(y + mx) + x^{r−2}φ_2(y + mx) + … + φ_r(y + mx)

where φ_1, φ_2, …, φ_r are arbitrary functions.


Example 5.11: Solve the equation (D³ − 3D²D' + 3DD'² − D'³)z = 0.

Solution: The A.E. of the given equation is,

m³ − 3m² + 3m − 1 = 0

Or (m − 1)³ = 0

∴ m = 1, 1, 1

∴ C.F. = x²φ_1(y + x) + xφ_2(y + x) + φ_3(y + x).
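Example 5.11 can be verified for sample choices of the arbitrary functions (sin, exp and cos are illustrative stand-ins; SymPy assumed):

```python
import sympy as sp

x, y = sp.symbols('x y')
z = x**2*sp.sin(y + x) + x*sp.exp(y + x) + sp.cos(y + x)

D  = lambda w: sp.diff(w, x)
Dp = lambda w: sp.diff(w, y)

# (D^3 - 3 D^2 D' + 3 D D'^2 - D'^3) z, i.e. (D - D')^3 z
res = sp.simplify(D(D(D(z))) - 3*Dp(D(D(z)))
                  + 3*Dp(Dp(D(z))) - Dp(Dp(Dp(z))))
```

The residual is zero: each application of (D − D') lowers the power of x in front of the arbitrary function, and three applications annihilate the whole C.F.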

Non-Homogeneous Linear Equations with Constant Coefficients

If all the terms on the left hand side of Equation (5.15) are not of the same degree, then Equation (5.15) is said to be a non-homogeneous equation. The equation is said to be reducible if the symbolic function f(D, D') can be resolved into factors, each of which is of first degree in D and D', and irreducible otherwise.

For example, the equation,

f(D, D')z = (D² − D'² + 2D + 1)z = (D + D' + 1)(D − D' + 1)z = x² + xy

is reducible, while the equation,

f(D, D')z = (DD' + D − 3D' + 2)z = cos(x + 2y)

is irreducible.
Reducible Non-Homogeneous Equations

In the equation,

f(D, D') = (a_1D + b_1D' + c_1)(a_2D + b_2D' + c_2) … (a_nD + b_nD' + c_n)   …(5.27)

where the a's, b's and c's are constants.

The complementary function satisfies,

(a_1D + b_1D' + c_1)(a_2D + b_2D' + c_2) … (a_nD + b_nD' + c_n)z = 0   (5.28)

Any solution of the equation,

(a_iD + b_iD' + c_i)z = 0   (5.29)

is a solution of Equation (5.28).

Forming the Lagrange's subsidiary equations of Equation (5.29),

dx/a_i = dy/b_i = dz/(−c_i z)   (5.30)

The two independent integrals of Equation (5.30) are,

b_i x − a_i y = Constant

And z = Constant · e^{−(c_i/a_i)x}, if a_i ≠ 0

Or z = Constant · e^{−(c_i/b_i)y}, if b_i ≠ 0

Therefore,

z = e^{−(c_i/a_i)x} φ_i(b_i x − a_i y), if a_i ≠ 0

Or

z = e^{−(c_i/b_i)y} ψ_i(b_i x − a_i y), if b_i ≠ 0

This is the general solution of Equation (5.29). Here φ_i and ψ_i are arbitrary functions.
Example 5.12: Solve the differential equation,

(D² − D'² + 3D + 3D')z = 0.

Solution: The equation can also be written as,

(D + D')(D − D' + 3)z = 0

∴ C.F. = φ_1(y − x) + e^{−3x} φ_2(x + y)

Or

ψ_1(y − x) + e^{3y} ψ_2(x + y)
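The two pieces of the C.F. in Example 5.12 can be checked with sample functions (sin and cos are illustrative; SymPy assumed):

```python
import sympy as sp

x, y = sp.symbols('x y')
D  = lambda w: sp.diff(w, x)
Dp = lambda w: sp.diff(w, y)
op = lambda w: D(D(w)) - Dp(Dp(w)) + 3*D(w) + 3*Dp(w)   # D^2 - D'^2 + 3D + 3D'

z1 = sp.sin(y - x)                  # phi_1(y - x), phi_1 = sin (sample)
z2 = sp.exp(-3*x)*sp.cos(x + y)     # e^{-3x} phi_2(x + y), phi_2 = cos (sample)

r1 = sp.simplify(op(z1))
r2 = sp.simplify(op(z2))
```

Both residuals vanish, confirming each factor contributes an independent family of solutions.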
When the Factors are Repeated

Let the factor be repeated two times and given by (aD + bD' + c).

Consider the equation,

(aD + bD' + c)(aD + bD' + c)z = 0   (5.31)

Put (aD + bD' + c)z = U   (5.32)

Then Equation (5.31) reduces to,

(aD + bD' + c)U = 0   (5.33)

The general solution of Equation (5.33) is,

U = e^{−(c/a)x} φ(bx − ay), if a ≠ 0   (5.34)

Or

U = e^{−(c/b)y} ψ(bx − ay), if b ≠ 0   (5.35)

Substituting Equation (5.34) in Equation (5.32), we obtain,

(aD + bD' + c)z = e^{−(c/a)x} φ(bx − ay)   (5.36)

The subsidiary equations are,

dx/a = dy/b = dz/(e^{−(c/a)x} φ(bx − ay) − cz)   (5.37)

The two independent integrals of Equations (5.37) are given by,

bx − ay = Constant = λ   (5.38)

And dz/dx + (c/a)z = (1/a) e^{−(c/a)x} φ(bx − ay) = (1/a) e^{−(c/a)x} φ(λ)   (5.39)

Equation (5.39), being an ordinary linear equation, has the following solution:

z e^{(c/a)x} = (1/a) x φ(λ) + Constant

Or z e^{(c/a)x} = (1/a) x φ(bx − ay) + Constant

Therefore, the general solution of Equation (5.36) is,

z = e^{−(c/a)x} (x/a) φ(bx − ay) + φ_1(bx − ay) e^{−(c/a)x}

= e^{−(c/a)x} [x φ_2(bx − ay) + φ_1(bx − ay)]   …(5.40)

where φ_1 and φ_2 are arbitrary functions.

Similarly, from Equations (5.35) and (5.32), we get

z = e^{−(c/b)y} [y ψ_2(bx − ay) + ψ_1(bx − ay)]

where ψ_1 and ψ_2 are arbitrary functions.

In general, for an r times repeated factor (aD + bD' + c),

z = e^{−(c/a)x} Σ_{i=1}^{r} x^{i−1} φ_i(bx − ay), if a ≠ 0

Or

z = e^{−(c/b)y} Σ_{i=1}^{r} y^{i−1} ψ_i(bx − ay), if b ≠ 0

where φ_1, φ_2, …, φ_r and ψ_1, ψ_2, …, ψ_r are arbitrary functions.
Example 5.13: Solve the differential equation,

(2D − D' + 4)(D + 2D' + 1)²z = 0

Solution: The C.F. corresponding to the factor (2D − D' + 4) is,

e^{4y} φ(x + 2y)

The C.F. corresponding to the factor (D + 2D' + 1)² is,

e^{−x} [x φ_2(2x − y) + φ_1(2x − y)]

Hence C.F. = e^{4y} φ(x + 2y) + e^{−x} [x φ_2(2x − y) + φ_1(2x − y)]
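Example 5.13 can be verified with sample arbitrary functions (sin and cos below are illustrative; SymPy assumed). Since constant-coefficient operators commute, applying the repeated factor twice and the simple factor once should annihilate the whole C.F.:

```python
import sympy as sp

x, y = sp.symbols('x y')
D  = lambda w: sp.diff(w, x)
Dp = lambda w: sp.diff(w, y)
L1 = lambda w: 2*D(w) - Dp(w) + 4*w     # (2D - D' + 4)
L2 = lambda w: D(w) + 2*Dp(w) + w       # (D + 2D' + 1)

z = (sp.exp(4*y)*sp.sin(x + 2*y)
     + sp.exp(-x)*(x*sp.sin(2*x - y) + sp.cos(2*x - y)))

res = sp.simplify(L1(L2(L2(z))))
```

The residual is identically zero, confirming the form of the complementary function.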

Irreducible Non-Homogeneous Equations

For solving the equation,

f(D, D')z = 0   (5.41)

substitute z = c e^{ax+by}, where a, b and c are constants.   (5.42)

Now D^r z = c a^r e^{ax+by}

D^r D'^s z = c a^r b^s e^{ax+by}

And D'^s z = c b^s e^{ax+by}

Substituting Equation (5.42) in Equation (5.41), we get,

c f(a, b) e^{ax+by} = 0

which will hold if,

f(a, b) = 0   (5.43)

For any selected value of a (or b), Equation (5.43) gives one or more values of b (or a). Thus there exist infinitely many pairs of numbers (a_i, b_i) satisfying Equation (5.43).

Thus

z = Σ_{i=1}^{∞} c_i e^{a_i x + b_i y}   (5.44)

where f(a_i, b_i) = 0 ∀ i, is a solution of Equation (5.41).

If

f(D, D') = (D − hD' − k) g(D, D')   (5.45)

then any pair (a, b) such that,

a − hb − k = 0   (5.46)

satisfies Equation (5.43). There are an infinite number of such solutions.

From Equation (5.46), a = hb + k.

Thus

z = Σ_{i=1}^{∞} c_i e^{(hb_i + k)x + b_i y}

= e^{kx} Σ_{i=1}^{∞} c_i e^{b_i(y + hx)}   (5.47)

This is the part of C.F. corresponding to the linear factor (D − hD' − k) given in Equation (5.45).

Equation (5.47) is equivalent to,

e^{kx} φ(y + hx)

where φ is an arbitrary function.

Equation (5.44) is the general solution if f(D, D') has no linear factor; otherwise the general solution will be composed partly of arbitrary functions and partly of arbitrary constants.
Example 5.14: Solve the differential equation (2D⁴ − 3D²D' + D'²)z = 0.

Solution: The given equation is equivalent to,

(2D² − D')(D² − D')z = 0

The part of C.F. corresponding to the first factor is,

Σ_{i=1}^{∞} c_i e^{a_i x + b_i y}

where a_i and b_i are related by,

2a_i² − b_i = 0

Or b_i = 2a_i²

Similarly, the part of C.F. corresponding to the second factor is,

Σ_{i=1}^{∞} d_i e^{e_i(x + e_i y)}

where the e_i and d_i are arbitrary constants.

∴ C.F. = Σ_{i=1}^{∞} c_i e^{a_i(x + 2a_i y)} + Σ_{i=1}^{∞} d_i e^{e_i(x + e_i y)}
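A spot check of Example 5.14 (SymPy assumed; the exponents a = 2 for the first family and a = 3 for the second are illustrative choices): e^{a(x + 2ay)} should be annihilated by (2D² − D') and e^{a(x + ay)} by (D² − D'), so any combination solves the full equation:

```python
import sympy as sp

x, y = sp.symbols('x y')
D  = lambda w: sp.diff(w, x)
Dp = lambda w: sp.diff(w, y)
# 2D^4 - 3D^2 D' + D'^2
full = lambda w: 2*D(D(D(D(w)))) - 3*Dp(D(D(w))) + Dp(Dp(w))

# a = 2: exponent 2x + 8y = 2(x + 4y); a = 3: exponent 3x + 9y = 3(x + 3y)
z = sp.exp(2*(x + 4*y)) + sp.exp(3*(x + 3*y))
res = sp.simplify(full(z))
```

The residual vanishes, matching f(a, b) = 0 along both families b = 2a² and b = a².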
Particular Integral

In the equation,

f(D, D')z = V(x, y)   …(5.48)

f(D, D') is a non-homogeneous function of D and D'.

P.I. = [1/f(D, D')] V(x, y)   …(5.49)

Here, if V(x, y) is of the form e^{ax+by}, where a and b are constants, then we use the following theorem to evaluate the particular integral:

Theorem 5.1: If f(a, b) ≠ 0, then,

[1/f(D, D')] e^{ax+by} = [1/f(a, b)] e^{ax+by}

Proof: By differentiation,

D^r e^{ax+by} = a^r e^{ax+by}

D'^s e^{ax+by} = b^s e^{ax+by}

D^r D'^s e^{ax+by} = a^r b^s e^{ax+by}

∴ f(D, D') e^{ax+by} = f(a, b) e^{ax+by}

Operating on both sides by 1/f(D, D'),

e^{ax+by} = f(a, b) [1/f(D, D')] e^{ax+by}

Dividing the above equation by f(a, b),

[1/f(a, b)] e^{ax+by} = [1/f(D, D')] e^{ax+by}

Or [1/f(D, D')] e^{ax+by} = [1/f(a, b)] e^{ax+by}
Example 5.15: Solve the equation (D² − D'² − 3D + 3D')z = e^{x−2y}.

Solution: The given equation is equivalent to,

(D − D')(D + D' − 3)z = e^{x−2y}

C.F. = φ_1(y + x) + e^{3x} φ_2(y − x)

P.I. = [1/((D − D')(D + D' − 3))] e^{x−2y}

= [1/((1 + 2)(1 − 2 − 3))] e^{x−2y}

= −(1/12) e^{x−2y}

Therefore, z = φ_1(y + x) + e^{3x} φ_2(y − x) − (1/12) e^{x−2y}
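The complete solution of Example 5.15 can be verified with sample arbitrary functions (sin and cos below are illustrative; SymPy assumed):

```python
import sympy as sp

x, y = sp.symbols('x y')
D  = lambda w: sp.diff(w, x)
Dp = lambda w: sp.diff(w, y)
op = lambda w: D(D(w)) - Dp(Dp(w)) - 3*D(w) + 3*Dp(w)   # D^2 - D'^2 - 3D + 3D'

z = sp.sin(y + x) + sp.exp(3*x)*sp.cos(y - x) - sp.exp(x - 2*y)/12
res = sp.simplify(op(z) - sp.exp(x - 2*y))
```

The residual vanishes: the C.F. pieces are killed by the operator, and the P.I. reproduces the right-hand side.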
But in case V(x, y) is of the form e^{ax+by} φ(x, y), where a and b are constants, then the following theorem is used to evaluate the particular integral:

Theorem 5.2: If φ(x, y) is any function, then

[1/f(D, D')] e^{ax+by} φ(x, y) = e^{ax+by} [1/f(D + a, D' + b)] φ(x, y)

Proof: From Leibnitz's theorem for successive differentiation, we have

D^r[e^{ax+by} φ(x, y)] = e^{ax+by}[D^r φ(x, y) + ^rC_1 a D^{r−1} φ(x, y) + ^rC_2 a² D^{r−2} φ(x, y) + … + ^rC_r a^r φ(x, y)]

= e^{ax+by}(D^r + ^rC_1 a D^{r−1} + ^rC_2 a² D^{r−2} + … + ^rC_r a^r) φ(x, y)

= e^{ax+by}(D + a)^r φ(x, y).

Similarly,

D'^s[e^{ax+by} φ(x, y)] = e^{ax+by}(D' + b)^s φ(x, y)

And D^r D'^s[e^{ax+by} φ(x, y)] = D^r[e^{ax+by}(D' + b)^s φ(x, y)]

= e^{ax+by}(D + a)^r (D' + b)^s φ(x, y)

So f(D, D')[e^{ax+by} φ(x, y)] = e^{ax+by} f(D + a, D' + b) φ(x, y)   (5.50)

Put f(D + a, D' + b) φ(x, y) = ψ(x, y)

⇒ φ(x, y) = [1/f(D + a, D' + b)] ψ(x, y)

Substituting in Equation (5.50), we get,

f(D, D') [e^{ax+by} [1/f(D + a, D' + b)] ψ(x, y)] = e^{ax+by} ψ(x, y)

Operating on the equation by 1/f(D, D'),

e^{ax+by} [1/f(D + a, D' + b)] ψ(x, y) = [1/f(D, D')] [e^{ax+by} ψ(x, y)]

Replacing ψ(x, y) by φ(x, y), we have,

[1/f(D, D')] e^{ax+by} φ(x, y) = e^{ax+by} [1/f(D + a, D' + b)] φ(x, y)

Example 5.16: Solve (D² − D'² − 3D + 3D')z = xy + e^{x+2y}.

Solution: The given equation is equivalent to,

(D − D')(D + D' − 3)z = xy + e^{x+2y}

C.F. = φ_1(y + x) + e^{3x} φ_2(x − y)

P.I. = [1/((D − D')(D + D' − 3))] xy + [1/((D − D')(D + D' − 3))] e^{x+2y}

= −(1/3) [1/(D − D')] [1 − (D + D')/3]^{−1} xy + e^{x+2y} [1/((D + 1 − D' − 2)(D + 1 + D' + 2 − 3))] · 1

= −(1/(3D)) (1 − D'/D)^{−1} [1 + (D + D')/3 + (D + D')²/9 + …] xy + e^{x+2y} [1/((D − D' − 1)(D + D'))] · 1

= −(1/(3D)) (1 − D'/D)^{−1} [xy + (x + y)/3 + 2/9] − x e^{x+2y}

= −(1/(3D)) [xy + x²/2 + (x + y)/3 + x/3 + 2/9] − x e^{x+2y}

= −(1/3) [x²y/2 + x³/6 + x²/3 + xy/3 + 2x/9] − x e^{x+2y}

∴ z = φ_1(y + x) + e^{3x} φ_2(x − y) − (1/3)[x²y/2 + x³/6 + x²/3 + xy/3 + 2x/9] − x e^{x+2y}
Example 5.17: Solve (D² − DD' + D' − 1)z = cos(x + 2y) + e^y + xy + 1.

Solution: The equation is equivalent to,

(D − 1)(D − D' + 1)z = cos(x + 2y) + e^y + xy + 1

Complementary Function = e^x φ_1(y) + e^y φ_2(x + y).

The particular integral corresponding to cos(x + 2y) is,

[1/(D² − DD' + D' − 1)] cos(x + 2y)

= [1/(−1 + 2 + D' − 1)] cos(x + 2y)   [putting D² = −1, DD' = −2]

= (1/D') cos(x + 2y)

= (1/2) sin(x + 2y)

Corresponding to e^y, the particular integral is,

[1/(D² − DD' + D' − 1)] e^y

= [1/((D − 1)(D − D' + 1))] e^y

= −[1/(D − D' + 1)] e^y

= −e^y [1/(D − D')] · 1   [by Theorem 5.2, with (1/(D − D')) · 1 = −y]

= y e^y.

The particular integral corresponding to the part (xy + 1) is,

[1/((D − 1)(D − D' + 1))] (xy + 1)

= −(1 − D)^{−1} [1 + (D − D')]^{−1} (xy + 1)

= −(1 + D + D² + …)(1 − (D − D') + (D − D')² − …)(xy + 1)

= −(1 + D + D² + …)(xy + 1 − (y − x) − 2)

= −(1 + D + D² + …)(xy − y + x − 1)

= −(xy − y + x − 1 + y + 1)

= −(xy + x)

= −x(y + 1)

∴ z = e^x φ_1(y) + e^y φ_2(x + y) + (1/2) sin(x + 2y) + y e^y − x(y + 1)
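The full solution of Example 5.17 can be checked against the right-hand side with sample arbitrary functions (sin and cos below are illustrative; SymPy assumed):

```python
import sympy as sp

x, y = sp.symbols('x y')
D  = lambda w: sp.diff(w, x)
Dp = lambda w: sp.diff(w, y)
op = lambda w: D(D(w)) - D(Dp(w)) + Dp(w) - w   # D^2 - DD' + D' - 1

z = (sp.exp(x)*sp.sin(y) + sp.exp(y)*sp.cos(x + y)
     + sp.sin(x + 2*y)/2 + y*sp.exp(y) - x*(y + 1))
rhs = sp.cos(x + 2*y) + sp.exp(y) + x*y + 1

res = sp.simplify(op(z) - rhs)
```

The residual vanishes: the C.F. pieces contribute nothing, while the three particular-integral pieces reproduce cos(x + 2y), e^y and xy + 1 respectively.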
5.5 PARTIAL DIFFERENTIAL EQUATIONS REDUCIBLE TO EQUATIONS WITH CONSTANT COEFFICIENTS

The equation,

f(xD, yD')z = V(x, y)

where f(xD, yD') = Σ_{r,s} c_{rs} x^r y^s D^r D'^s, c_{rs} = Constant,   (5.51)

is reduced to a linear partial differential equation with constant coefficients by the following substitution:

u = log x, v = log y   (5.52)

By the substitution of Equation (5.52),

xD = x ∂/∂x = ∂/∂u = d (say), since ∂/∂x = (1/x) ∂/∂u

And,

x²D² = x² ∂/∂x((1/x) ∂/∂u) = x²(−(1/x²) ∂/∂u + (1/x²) ∂²/∂u²)

= ∂²/∂u² − ∂/∂u

= d(d − 1)

Therefore,

x^r D^r = d(d − 1)(d − 2) … (d − r + 1)

And y^s D'^s = d'(d' − 1)(d' − 2) … (d' − s + 1)

Hence f(xD, yD') = Σ c_{rs} d(d − 1) … (d − r + 1) d'(d' − 1) … (d' − s + 1)

= g(d, d')

Here the coefficients in g(d, d') are constants.

Thus by this substitution Equation (5.51) is reduced to,

g(d, d')z = V(e^u, e^v)

Or g(d, d')z = U(u, v)   (5.53)

Equation (5.53) can be solved by the methods that have been described for solving partial differential equations with constant coefficients.
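A small illustrative check of the substitution (SymPy assumed; w(u) = u³ is an arbitrary sample function): xD acting on a function of log x agrees with d = ∂/∂u, and x²D² with d(d − 1):

```python
import sympy as sp

x, u = sp.symbols('x u', positive=True)
w = u**3                               # a sample smooth function of u
zx = w.subs(u, sp.log(x))              # the same function expressed in x

xD   = x*sp.diff(zx, x)                # x dz/dx
x2D2 = x**2*sp.diff(zx, x, 2)          # x^2 d^2z/dx^2

d  = sp.diff(w, u)                     # dz/du
dd = sp.diff(w, u, 2)                  # d^2z/du^2

chk1 = sp.simplify(xD - d.subs(u, sp.log(x)))            # xD -> d
chk2 = sp.simplify(x2D2 - (dd - d).subs(u, sp.log(x)))   # x^2D^2 -> d(d - 1)
```

Both differences vanish, confirming the operator identities used below.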
Example 5.18: Solve the differential equation,

(x²D² − 4xyDD' + 4y²D'² + 6yD')z = x³y⁴

Solution: Put u = log x, v = log y.

The given equation can be reduced to,

[d(d − 1) − 4dd' + 4d'(d' − 1) + 6d']z = e^{3u+4v}

Or (d − 2d')(d − 2d' − 1)z = e^{3u+4v}

The complementary function is φ_1(2u + v) + e^u φ_2(2u + v)

= φ_1(log x²y) + x φ_2(log x²y)

= ψ_1(x²y) + x ψ_2(x²y)

And the particular integral is [1/((d − 2d')(d − 2d' − 1))] e^{3u+4v}

= (1/30) e^{3u+4v}

= (1/30) x³y⁴

∴ z = ψ_1(x²y) + x ψ_2(x²y) + (1/30) x³y⁴.
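Example 5.18 can be verified directly in the original variables with sample arbitrary functions (sin and cos are illustrative; SymPy assumed):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
D  = lambda w: sp.diff(w, x)
Dp = lambda w: sp.diff(w, y)
# x^2 D^2 - 4xy DD' + 4y^2 D'^2 + 6y D'
op = lambda w: (x**2*D(D(w)) - 4*x*y*D(Dp(w))
                + 4*y**2*Dp(Dp(w)) + 6*y*Dp(w))

z = sp.sin(x**2*y) + x*sp.cos(x**2*y) + x**3*y**4/30
res = sp.simplify(op(z) - x**3*y**4)
```

The residual is zero: both ψ-parts are killed by the operator, and the particular integral yields x³y⁴.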


Example 5.19: Find the solution of (x²D² − y²D'² + xD − yD')z = 0.

Solution: Put u = log x, v = log y.

The given differential equation can be reduced to,

[d(d − 1) − d'(d' − 1) + d − d']z = 0

⇒ (d² − d'²)z = 0

A.E. is,

m² − 1 = 0

∴ m = 1, −1

∴ z = φ_1(v + u) + φ_2(v − u)

= φ_1(log xy) + φ_2(log (y/x))

= ψ_1(xy) + ψ_2(y/x).
Example 5.20: Determine the solution of the following equation,

(x²D² + 2xyDD' + y²D'²)z + nz − n(xD + yD')z = x² + y² + x³

Solution: Put u = log x, v = log y.

The equation reduces to,

[d(d − 1) + 2dd' + d'(d' − 1)]z − n(d + d')z + nz = e^{2u} + e^{2v} + e^{3u}

Or

[(d + d')² − (n + 1)(d + d') + n]z = e^{2u} + e^{2v} + e^{3u}

Or

(d + d' − n)(d + d' − 1)z = e^{2u} + e^{2v} + e^{3u}

C.F. = e^{nu} φ_1(u − v) + e^u φ_2(u − v)

= x^n φ_1(x/y) + x φ_2(x/y)

P.I. = [1/((d + d' − n)(d + d' − 1))] (e^{2u} + e^{2v} + e^{3u})

= (x² + y²)/(2 − n) + x³/(2(3 − n))

∴ z = x^n φ_1(x/y) + x φ_2(x/y) + (x² + y²)/(2 − n) + x³/(2(3 − n))

Example 5.21: Solve (x²D² − xyDD' − 2y²D'² + xD − 2yD')z = log (y/x) + 1/2.

Solution: Put u = log x, v = log y.

Our equation reduces to,

[d(d − 1) − dd' − 2d'(d' − 1) + d − 2d']z = v − u + 1/2

Or (d² − dd' − 2d'²)z = v − u + 1/2

Or (d − 2d')(d + d')z = v − u + 1/2

C.F. = φ_1(2u + v) + φ_2(u − v)

= φ_1(x²y) + φ_2(x/y)

P.I. = [1/(d² − dd' − 2d'²)] (v − u + 1/2)

= [1/(d − 2d')] · (1/d) (1 + d'/d)^{−1} (v − u + 1/2)

= [1/(d − 2d')] · (1/d) (v − u + 1/2 − u)

= [1/(d − 2d')] (uv − u² + u/2)

= (1/d) [1 + 2d'/d + 4d'²/d² + …] (uv − u² + u/2)

= (1/d) (uv − u² + u/2 + u²)

= (1/d) (uv + u/2)

= u²v/2 + u²/4

= (1/2) (log x)² log y + (1/4) (log x)²

∴ z = φ_1(x²y) + φ_2(x/y) + (1/2)(log x)² log y + (1/4)(log x)².
Example 5.22: Solve the differential equation,

(x²D² + 2xyDD' + y²D'²)z = (x² + y²)^{n/2}

Solution: Put u = log x, v = log y.

The equation is reduced to [d(d − 1) + 2dd' + d'(d' − 1)]z = (e^{2u} + e^{2v})^{n/2}

Or (d + d')(d + d' − 1)z = (e^{2u} + e^{2v})^{n/2}

C.F. = φ_1(u − v) + e^u φ_2(u − v)

= φ_1(log (x/y)) + x φ_2(log (x/y))

= ψ_1(x/y) + x ψ_2(x/y)

The particular integral is [1/((d + d')(d + d' − 1))] (e^{2u} + e^{2v})^{n/2}

Substituting Z = [1/(d + d' − 1)] (e^{2u} + e^{2v})^{n/2}, we have

∂Z/∂u + ∂Z/∂v − Z = (e^{2u} + e^{2v})^{n/2}

The subsidiary equations are du/1 = dv/1 = dZ/(Z + (e^{2u} + e^{2v})^{n/2})

Two independent integrals of the equation are given by,

u − v = Constant = a (say)

And dZ/dv − Z = (e^{2u} + e^{2v})^{n/2} = e^{nv}(e^{2a} + 1)^{n/2}

Since this equation is linear, therefore,

Z e^{−v} = [e^{(n−1)v}/(n − 1)] (e^{2a} + 1)^{n/2}

∴ Z = [e^{nv}/(n − 1)] (e^{2a} + 1)^{n/2}

= (e^{2u} + e^{2v})^{n/2}/(n − 1)

∴ P.I. = [1/(d + d')] [(e^{2u} + e^{2v})^{n/2}/(n − 1)]

= [1/(n − 1)] [∫ (e^{2u} + e^{2a} e^{2u})^{n/2} du]_{a = v − u}

= [1/(n − 1)] [(e^{2a} + 1)^{n/2} ∫ e^{nu} du]_{a = v − u}

= [1/(n(n − 1))] [e^{nu} (e^{2a} + 1)^{n/2}]_{a = v − u}

= (e^{2u} + e^{2v})^{n/2}/(n(n − 1))

= (x² + y²)^{n/2}/(n(n − 1))

∴ z = ψ_1(x/y) + x ψ_2(x/y) + (x² + y²)^{n/2}/(n(n − 1)).

Example 5.23: Solve (x²D² − 2xyDD' + y²D'² − xD + 3yD')z = 8y/x.

Solution: Put u = log x, v = log y.

Our equation reduces to,

[d(d − 1) − 2dd' + d'(d' − 1) − d + 3d']z = 8e^{v−u}

Or (d² − 2dd' + d'² − 2d + 2d')z = 8e^{v−u}

Or (d − d')(d − d' − 2)z = 8e^{v−u}

C.F. = φ_1(u + v) + e^{2u} φ_2(u + v)

= φ_1(xy) + x² φ_2(xy)

P.I. = 8 · [1/((d − d')(d − d' − 2))] e^{v−u}

= 8 e^{v−u}/[(−1 − 1)(−1 − 1 − 2)]

= e^{v−u}

= y/x

∴ z = φ_1(xy) + x² φ_2(xy) + y/x.


Example 5.24: Solve (x²D² + 2xyDD' + y²D'²)z = x^m y^n.

Solution: Put u = log x, v = log y.

The equation reduces to,

[d(d − 1) + 2dd' + d'(d' − 1)]z = e^{mu+nv}

Or (d + d')(d + d' − 1)z = e^{mu+nv}

C.F. = φ_1(u − v) + e^u φ_2(u − v)

= φ_1(x/y) + x φ_2(x/y)

P.I. = [1/((d + d')(d + d' − 1))] e^{mu+nv}

= [1/((m + n)(m + n − 1))] e^{mu+nv}

= [1/((m + n)(m + n − 1))] x^m y^n

∴ z = φ_1(x/y) + x φ_2(x/y) + x^m y^n/((m + n)(m + n − 1)).

Check Your Progress


6. Write the homogeneous linear equations with constant coefficients.
7. When is a non-homogeneous equation said to be reducible?
8. Which mathematical function is used to reduce partial differential
equations to equations with constant coefficients?

5.6 ANSWERS TO ‘CHECK YOUR PROGRESS’


1. The linear differential equations with constant coefficients are of the form,

d^n y/dx^n + P_1 d^{n−1}y/dx^{n−1} + P_2 d^{n−2}y/dx^{n−2} + … + P_{n−1} dy/dx + P_n y = Q

where P_1, P_2, …, P_n are constants and Q is a function of x.

2. When m_1 = m_2, the complementary function will be y = (c_1 + c_2 x) e^{m_1 x}, where c_1 and c_2 are arbitrary constants.
3. Any particular solution of F(D)y = f(x) is known as its Particular Integral (P.I.). The P.I. of F(D)y = f(x) is symbolically written as,

P.I. = [1/F(D)] {f(x)}, where 1/F(D) is the operator.
4. The three types of equations are the elliptic type, the parabolic type and the hyperbolic type.

5. Let m_1, m_2, …, m_n be the roots of the auxiliary equation; then C.F. = φ_1(y + m_1x) + φ_2(y + m_2x) + …… + φ_n(y + m_nx), where the φ_i's are arbitrary functions.

6. Let f(D, D')z = V(x, y). Then if,

f(D, D') = A_0 D^n + A_1 D^{n−1} D' + A_2 D^{n−2} D'^2 + … + A_n D'^n

where A_0, A_1, …, A_n are constants, the equation is a homogeneous linear equation with constant coefficients.


7. The equation f(D, D')z = V(x, y) is said to be reducible if the symbolic function f(D, D') can be resolved into factors, each of which is of first degree in D and D'.

8. The logarithm function is used to reduce partial differential equations to equations with constant coefficients.

5.7 SUMMARY
 The general form of a linear differential equation of nth order is,

d^n y/dx^n + P_1 d^{n−1}y/dx^{n−1} + P_2 d^{n−2}y/dx^{n−2} + … + P_{n−1} dy/dx + P_n y = Q
 The solutions y = y_1(x), y = y_2(x), y = y_3(x), …, y = y_n(x) are said to be linearly independent if the Wronskian of the functions is not zero.
 The Complementary Function (C.F.) which is the complete primitive of the
Reduced Equation (R.E.) and is of the form
y = c1 y1 + c2 y2 + ... + cn yn containing n arbitrary constants.
 The Particular Integral (P.I.) which is a solution of F (D) y = Q containing
no arbitrary constant.
 If the two roots m_1 and m_2 of the auxiliary equation are equal, each equal to m, the corresponding part of the general solution will be (c_1 + c_2x)e^{mx}; if the three roots m_3, m_4, m_5 are equal to α, the corresponding part of the solution is (c_3 + c_4x + c_5x²)e^{αx}; and if the others are distinct, the general solution will be,

y = (c_1 + c_2x)e^{mx} + (c_3 + c_4x + c_5x²)e^{αx} + c_6e^{m_6 x} + …… + c_ne^{m_n x}

 If a pair of imaginary roots α ± iβ occurs twice, the corresponding part of the general solution will be,

e^{αx}[(c_1 + c_2x) cos βx + (c_3 + c_4x) sin βx]

 The operator 1/F(D) is defined as that operator which, when operated on f(x), gives a function φ(x) such that F(D)φ(x) = f(x).

 Consider the following linear partial differential equation of the second order in two independent variables,

A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D ∂u/∂x + E ∂u/∂y + Fu = G

where A, B, C, D, E, F and G are functions of x and y.

 Laplace equation,

∂²u/∂x² + ∂²u/∂y² = 0

This equation is of elliptic type.

 One-dimensional heat flow equation,

∂u/∂t = c² ∂²u/∂x²

This equation is of parabolic type.

 One-dimensional wave equation,

∂²u/∂t² = c² ∂²u/∂x²

This equation is of hyperbolic type.

 An equation is said to be reducible if the symbolic function f(D, D') can be resolved into factors, each of which is of first degree in D and D', and irreducible otherwise.

5.8 KEY TERMS


 Reducible: An equation is said to be reducible if the symbolic function f(D, D') can be resolved into factors, each of which is of first degree in D and D', and irreducible otherwise.
 Fundamental mode: The first normal mode is referred as the fundamental
mode.
 Complementary function: Consider the equation,

(A_0 D^n + A_1 D^{n−1} D' + A_2 D^{n−2} D'^2 + … + A_n D'^n)z = 0

Let z = φ(y + mx); the complementary function is built from solutions of this form.

5.9 SELF-ASSESSMENT QUESTIONS AND


EXERCISES

Short-Answer Questions
1. Define partial differential equations with suitable examples.
2. How will you identify the order of a partial differential equation?
3. How will you determine the degree of the partial differential equation?

4. Define Wronskian of functions.
5. Give the rules for finding the complementary function.
6. Explain the partial differential equation of the second order.
7. Give examples of parabolic, elliptic and hyperbolic type equations.
8. What is the difference between homogeneous and non homogeneous
differential equations?
9. Explain the reducible non homogeneous equations.
Long-Answer Questions
1. Solve the equations:

(i) D 2  DDs  1D3 z  0 . 
(ii) D 3

 3D 2 D  4D3 z  0 .
2. Solve the equations:

(i) (D² − 2DD' + D'²)z = 12xy.

(ii) (D² + 2DD' − 15D'²)z = 12xy.

(iii) (D² − 6DD' + 9D'²)z = 12x² + 16xy.

(iv) (D³ − 7DD'² − 6D'³)z = x² + xy² + y³.

(v) (D²D' − 2DD'² + D'³)z = 1/x².
3. Solve the equations:

(i) (D² + DD' − 2D'²)z = x + y.

(ii) (D² − 3DD' + 2D'²)z = x + y.

(iii) (4D² − 4DD' + D'²)z = 16 log(x + 2y).

(iv) (D³ − 7DD'² − 6D'³)z = cos(x + y) + x² + xy² + y³.

(v) (D³ − 7DD'² − 6D'³)z = sin(x + 2y) + e^{3x+y}.

(vi) (D³ − 3DD'² + 2D'³)z = x + 2y.

(vii) (D³ − 4D²D' + 5DD'² − 2D'³)z = e^{y+2x} + √(y + x).

4. Solve the equations:

(i) (D³ − 3DD'² + 2D'³)z = cos(x + 2y).

(ii) (D² − 5DD' + 5D'²)z = x sin(3x − 2y).
5. Solve the equations:

(i) (D² − DD' − 2D'²)z = (y − 1)e^x.

(ii) (D³ − 3DD'² + 2D'³)z = cos(x + 2y) + e^y(3 + 2x).
6. Solve the equations:

(i) (DD' + D'² − 3D')z = 0.

(ii) (2D − D' + 1)²(D + 2D' − 2)³z = 0.
7. Solve the equations:

(i) (2D² − D'² + D)z = 0.

(ii) (D² − DD' + D + D' − 1)z = 0.
8. Solve the equations:

(i) (D − D' − 1)(D − D' − 2)z = e^{2x−y}.

(ii) (D² − D')z = e^{x+y}.
9. Solve the equations:

(i) (D² − DD' − 2D')z = cos(3x + 4y).

(ii) (D² − D')z = A cos(lx + my), where A, l, m are constants.
10. Solve the equations:

(i) (D + D' − 1)(D + 2D' − 3)z = 4 + 3x + 6y.
x2
(ii) D  DD  D  DDz 
3 2 2
.
x3
 
(iii) D 2  D y  2 y  x 2 .
11. Solve the equations:

(i) (D − D'²)z = cos(x − 3y).

(ii) (D + D' − 1)(D + D' − 3)(D + D')z = e^{x+y} sin(2x + y).

(iii) (D² − DD' + D' − 1)z = 4 sinh x.
(iv) D D  D
2 2

 2 z  2 y sin 3x  e  cos 2 y .
12. Solve the equations:

(i) (x²D² − y²D'²)z = xy.

(ii) (x²D² + 2xyDD' + y²D'²)z = x²y².

(iii) (x²D² + 2xyDD' − 3y²D'² + xD − 3yD')z = x²y cos(log x³).
13. Solve (D³ − 2D²D' − DD'² + 2D'³)z = e^{x+y}.

14. Solve (D³ + D'³ + D''³ − 3DD'D'')u = x³ + 3xyz.
15. Solve the following equations:

(i) r = x²e^y.

(ii) xys = 1.
16. Solve the following equations:
(i) t  xq   sin y  x cos y .
(ii) t  xq  x 2 .
(iii) yt  q  xy .
17. Solve the following equations:
(i) xr  ys  p  10xy 3 .
(ii) 2 yt  xs  2q  4 yx 3 .
(iii) z  r  x cosx  y  .

18. Solve the differential equation, r − 2yp + y²z = (y − 2)e^{2x+3y}.

5.10 FURTHER READING


K. P. Gupta and J. K. Goyal. 2013. Integral Transform. Meerut (UP): Pragati
Prakashan.
Sharma, J. N. and R. K. Gupta. 2015. Differential Equations (Paperback
Edition). Meerut (UP): Krishna Prakashan Media (P) Ltd.
Raisinghania, M. D. 2013. Ordinary and Partial Differential Equations. New
Delhi: S. Chand Publishing.
Coddington, Earl A. and N. Levinson. 1972. Theory of Ordinary Differential Equations. New Delhi: Tata McGraw-Hill.
Coddington, Earl A. 1987. An Introduction to Ordinary Differential Equations.
New Delhi: Prentice Hall of India.
Boyce, W. E. and Richard C. DiPrima. 1986. Elementary Differential Equations
and Boundary Value Problems. New York: John Wiley and Sons, Inc.
Ross, S. L. 1984. Differential Equations, 3rd Edition. New York: John Wiley
and Sons.
Sneddon, I. N. 1986. Elements of Partial Differential Equations. New York:
McGraw-Hill Education.
