
Engineering Mathematics I

Chia-Yu Lin
Department of Chemical Engineering,
National Cheng Kung University
E-mail: [email protected]
Ext:62664
Chapter 5_ Series Solutions of ODEs.
Special Functions
1. Power Series Method
2. Theory of the Power Series Method
3. Legendre’s Equation. Legendre Polynomials Pn(x)
4. Frobenius Method
5. Bessel’s Equation. Bessel Functions Jν(x)
6. Bessel Functions of the Second Kind Yν(x)
7. Sturm-Liouville Problems. Orthogonal Functions
8. Orthogonal Eigenfunction Expansions
9. Summary of Chapter 5
Power Series
• Some differential equations whose coefficients are functions of x, e.g., Bessel's equation, Legendre's
equation, and the hypergeometric equation, can be solved by the power series method. These equations
are worth solving because they are important in engineering applications.
• What is a power series?
 A power series (in powers of x – x0) is an infinite series of the form

(1)  Σ_{m=0}^∞ am (x – x0)^m = a0 + a1(x – x0) + a2(x – x0)² + ‥‥

Here, x is a variable.
a0, a1, a2, ‥‥ are constants, called the coefficients of the series.
x0 is a constant, called the center of the series. In particular,

 if x0 = 0, we obtain a power series in powers of x:

(2)  Σ_{m=0}^∞ am x^m = a0 + a1x + a2x² + ‥‥

We assume that all variables and constants are real.

4
Example: Maclaurin series
Familiar examples with center x0 = 0 are the Maclaurin series, e.g. 1/(1 – x) = Σ x^m (geometric series, |x| < 1), e^x = Σ x^m/m!, cos x = Σ (–1)^m x^{2m}/(2m)!, sin x = Σ (–1)^m x^{2m+1}/(2m+1)!.
5
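As a quick illustration (not part of the original slides; a sketch assuming SymPy is available), the Maclaurin series above can be generated symbolically:

```python
# Sketch: generate a few Maclaurin series (power series with center x0 = 0) with sympy.
import sympy as sp

x = sp.symbols('x')
for f in (1/(1 - x), sp.exp(x), sp.cos(x), sp.sin(x)):
    # series(f, x, 0, 8) expands about x0 = 0 up to (but not including) x^8
    print(f, '=', sp.series(f, x, 0, 8))
```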
Idea of the Power Series Method
• For a given ODE
y'' + p(x)y' + q(x)y = 0
1. represent p(x) and q(x) by power series in powers of x (or of x – x0 if solutions in powers of x – x0 are
wanted). Often p(x) and q(x) are polynomials, and then nothing needs to be done in this first step.
2. assume a solution in the form of a power series with unknown coefficients,

(3)  y(x) = Σ_{m=0}^∞ am x^m = a0 + a1x + a2x² + ‥‥

and insert this series and the series obtained by termwise differentiation,

(4a)  y'(x) = Σ_{m=1}^∞ m am x^{m–1} = a1 + 2a2x + 3a3x² + ‥‥
(4b)  y''(x) = Σ_{m=2}^∞ m(m – 1) am x^{m–2} = 2a2 + 3·2a3x + 4·3a4x² + ‥‥

into the ODE.


6
Idea of the Power Series Method

3. Collect like powers of x and equate the sum of the coefficients of each
occurring power of x to zero, starting with the constant terms, then taking
the terms containing x, then the terms in x², and so on.
➔ This yields equations that determine the unknown coefficients of (3) successively.

7
Example
• Solve the following ODE by power series.
y' = 2xy.
• Solution.
insert (3) and (4a) into the given ODE, obtaining
a1 + 2a2x + 3a3x2 + ‥‥ = 2x(a0 + a1x + a2x2 + ‥‥).
do multiplication by 2x on the right,
➔ a1 + 2a2x + 3a3x2 + 4a4x3 + 5a5x4 + 6a6x5 + ‥‥
= 2a0x + 2a1x2 + 2a2x3 + 2a3x4 + 2a4x5 +‥‥ .
 Coefficients of every power of x on both sides must be equal, that is:
a1 = 0, 2a2 = 2a0, 3a3 = 2a1,
4a4 = 2a2, 5a5 = 2a3, 6a6 = 2a4, ‥‥ .

8
Example
 Coefficients of every power of x on both sides must be equal, that is:
a1 = 0, 2a2 = 2a0, 3a3 = 2a1,
4a4 = 2a2, 5a5 = 2a3, 6a6 = 2a4, ‥‥ .
Hence a3 = 0, a5 = 0, ‥‥ and for the coefficients with even subscripts

a2 = a0,  a4 = a2/2 = a0/2!,  a6 = a4/3 = a0/3!,  ‥‥ , in general a_{2m} = a0/m!.

a0 remains arbitrary. With these coefficients the series (3) gives the following
solution (confirm by the method of separating variables):

y = a0 (1 + x² + x⁴/2! + x⁶/3! + ‥‥) = a0 e^{x²}.

9
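As a numerical cross-check (not from the original slides; a minimal Python sketch), the recursion a1 = 0, (s + 2)a_{s+2} = 2a_s reproduces a0·e^{x²}:

```python
# Sketch: build the coefficients of y' = 2xy from the recursion
# a1 = 0, (s+2) a_{s+2} = 2 a_s (taking a0 = 1), then compare the partial
# sum of the series with exp(x^2).
import math

N = 20
a = [0.0] * (N + 1)
a[0] = 1.0          # a0 arbitrary, taken as 1
a[1] = 0.0          # forced by equating the constant terms
for s in range(N - 1):
    a[s + 2] = 2.0 * a[s] / (s + 2)

x = 0.7
series_val = sum(a[m] * x**m for m in range(N + 1))
print(series_val, math.exp(x**2))   # both approximately 1.63232
```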
Example
 More rapidly, (3) and (4) give for the ODE y' = 2xy

to get the same general power on both sides, make a “shift of index” on the left by setting m =
s + 2, thus m – 1 = s + 1.
➔ am becomes as+2 and xm-1 becomes xs+1. Also the summation, which started with m = 2, now
starts with s = 0 because s = m – 2.
On the right, simply make a change of notation m = s, hence am = as and xm+1 = xs+1; also the
summation now starts with s = 0. This altogether gives

10
Example

 Every occurring power of x must have the same coefficient on both sides; hence a1 = 0 and

(s + 2) a_{s+2} = 2 a_s   for s = 0, 1, 2, ‥‥

➔ a2 = (2/2)a0, a3 = (2/3)a1 = 0, a4 = (2/4)a2, ‥‥ as before.

11
Example 2
• Solve
y" + y = 0.
• Solution.
 insert (3) and (4b) into the ODE,

Σ_{m=2}^∞ m(m – 1) am x^{m–2} + Σ_{m=0}^∞ am x^m = 0.
To obtain the same general power on both series, set m = s + 2 in the first series and m = s in the
second, and take the second term to the right side.

• Each power xs must have the same coefficient on both sides. Hence (s + 2)(s + 1)as+2 = –as. This
gives the recursion formula

a_{s+2} = – a_s / [(s + 2)(s + 1)],   s = 0, 1, 2, ‥‥
12
Example 2
➔ thus obtain successively

a2 = –a0/2!,  a3 = –a1/3!,  a4 = –a2/(4·3) = a0/4!,  a5 = –a3/(5·4) = a1/5!,

and so on.
 a0 and a1 remain arbitrary. With these coefficients, (3) becomes

y = a0 + a1x – (a0/2!)x² – (a1/3!)x³ + (a0/4!)x⁴ + (a1/5!)x⁵ – ‥‥ .

Reordering terms (which is permissible for a power series), write this in the form

y = a0 (1 – x²/2! + x⁴/4! – + ‥‥) + a1 (x – x³/3! + x⁵/5! – + ‥‥)
and we recognize the familiar general solution y = a0 cos x + a1 sin x.


13
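As with the first example, the recursion can be checked numerically (a sketch, not from the original slides):

```python
# Sketch: coefficients of y'' + y = 0 from a_{s+2} = -a_s / ((s+2)(s+1)),
# compared with the closed form a0*cos(x) + a1*sin(x).
import math

N = 25
a = [0.0] * (N + 1)
a[0], a[1] = 2.0, -3.0          # a0, a1 arbitrary; any values will do
for s in range(N - 1):
    a[s + 2] = -a[s] / ((s + 2) * (s + 1))

x = 1.3
series_val = sum(a[m] * x**m for m in range(N + 1))
print(series_val, 2.0 * math.cos(x) - 3.0 * math.sin(x))   # agree to ~1e-15
```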
Chapter 5_ Series Solutions of ODEs.
Special Functions
1. Power Series Method
2. Theory of the Power Series Method
3. Legendre’s Equation. Legendre Polynomials Pn(x)
4. Frobenius Method
5. Bessel’s Equation. Bessel Functions Jν(x)
6. Bessel Functions of the Second Kind Yν(x)
7. Sturm-Liouville Problems. Orthogonal Functions
8. Orthogonal Eigenfunction Expansions
9. Summary of Chapter 5
Basics of concepts
• Power series.

(1)

• the variable x, the center x0, and the coefficients a0, a1, ‥‥ are real.

• The nth partial sum of (1) is

(2)  sn(x) = a0 + a1(x – x0) + a2(x – x0)² + ‥‥ + an(x – x0)^n

where n = 0, 1, ‥‥ .

• The remainder of (1) after the term an(x – x0)^n:

(3)  Rn(x) = a_{n+1}(x – x0)^{n+1} + a_{n+2}(x – x0)^{n+2} + ‥‥ .

15
Basics of concepts

16
Basics of concepts
• Convergence

The series (1) converges at x = x1 if the sequence of its partial sums converges, that is, if

lim_{n→∞} sn(x1) = s(x1) exists.

The number s(x1) is called the value or sum of (1) at x1, and we write

(4)  s(x1) = Σ_{m=0}^∞ am (x1 – x0)^m,   so that   s(x1) = sn(x1) + Rn(x1).

• Divergence

The series (1) diverges at x1 if the sequence sn(x1) has no limit as n → ∞.

17
Basics of concepts
• The partial sums s1(x1), s2(x1), ‥‥ converge to s(x1) if for every ε > 0 (positive but not zero) there is an N such that

| sn(x1) – s(x1) | < ε for every n > N,

or, equivalently,

(5)  | Rn(x1) | = | s(x1) – sn(x1) | < ε for every n > N,

that is, Rn(x1) → 0 as n → ∞.

• See Fig 102.

• Geometrically, this means that all sn(x1) with n > N lie between s(x1) –ε and s(x1) +ε. Practically, this
means that in the case of convergence we can approximate the sum s(x1) of (1) at x1 by sn(x1) as
accurately as we please, by taking n large enough.

18
Convergence Interval. Radius of Convergence

• The convergence of the power series (1):

three cases: the useless Case 1, the usual Case 2, and the best Case 3, as follows.

• Case 1. The series (1) always converges at x = x0, because for x = x0 all its terms are zero, perhaps
except for the first one, a0. In exceptional cases x = x0 may be the only x for which (1) converges.
Such a series is of no practical interest.

• Case 2. Convergence interval: the x other than x0 for which the series converges form an interval. If this
interval is finite, it has the midpoint x0, so that it is of the form
(6)  │x – x0│< R : the series converges;   │x – x0│> R : the series diverges
(x0 is the midpoint; R is called the radius of convergence).

See Fig. 103.

Fig. 103. Convergence interval (6) of a power series with center x0   19
Example 1 The useless Case 1 of convergence only at the
center

• In the case of the series Σ_{m=0}^∞ m! x^m = 1 + x + 2x² + 6x³ + ‥‥ we have am = m!, so

| a_{m+1} x^{m+1} / (am x^m) | = (m + 1)|x| → ∞ as m → ∞ for every fixed x ≠ 0.

Thus this series converges only at the center x = 0. Such a series is useless.

20
Example 2 The usual Case 2 of convergence in a finite
interval. Geometric Series

• For the geometric series Σ_{m=0}^∞ x^m = 1 + x + x² + ‥‥ ,

am = 1 for all m,  ➔  |a_{m+1}/am| = 1,

and from (7) → R = 1.

 The geometric series converges and represents 1/(1 – x) when│x│< 1.

21
Example 3 The best Case 3 of convergence for All x

• In the case of the series Σ_{m=0}^∞ x^m/m! = 1 + x + x²/2! + ‥‥ ,

am = 1/m!. Hence in (7b),

|a_{m+1}/am| = m!/(m + 1)! = 1/(m + 1) → 0 as m → ∞,

so that the series converges for all x (R = ∞). (This is the Maclaurin series of e^x.)

22
Example 4 Hint for some of the problems

• Find the radius of convergence of the series

• Solution. This is a series in powers of t = x³ with coefficients am = (–1)^m/8^m, so that in (7b),

|a_{m+1}/am| = 8^m/8^{m+1} = 1/8.

Thus R = 8. Hence the series converges for │t│ = │x³│< 8, that is,│x│< 2.

23
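A numerical sketch (not from the original slides) of the ratio test used here, assuming the standard form R = lim |a_m/a_{m+1}| when that limit exists:

```python
# Sketch: ratio-test estimate of the radius of convergence for Example 4,
# viewed as a series in t = x^3 with a_m = (-1)^m / 8^m.
from fractions import Fraction

def a(m):
    return Fraction((-1)**m, 8**m)

for m in (5, 10, 20):
    R_t = abs(Fraction(a(m), a(m + 1)))          # |a_m / a_{m+1}|
    print(m, float(R_t), float(R_t) ** (1 / 3))  # R in t is 8, so |x| < 8**(1/3) = 2
```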
Operations on Power Series
• Termwise Differentiation
A power series may be differentiated term by term. More precisely: if

y(x) = Σ_{m=0}^∞ am (x – x0)^m

converges for│x – x0│< R, where R > 0, then the series obtained by differentiating term by term
also converges for those x and represents the derivative y' of y for those x, that is,

y'(x) = Σ_{m=1}^∞ m am (x – x0)^{m–1}   (│x – x0│< R).

Similarly, for the second derivative,

y''(x) = Σ_{m=2}^∞ m(m – 1) am (x – x0)^{m–2}   (│x – x0│< R).
24
Operations on Power Series

• Termwise Addition
Two power series may be added term by term. More precisely: if the series

(8)  ƒ(x) = Σ_{m=0}^∞ am (x – x0)^m   and   g(x) = Σ_{m=0}^∞ bm (x – x0)^m

have positive radii of convergence Ra and Rb, then the sum of the series is

ƒ(x) + g(x) = Σ_{m=0}^∞ (am + bm)(x – x0)^m,

 and its radius of convergence is at least equal to the smaller of Ra and Rb.

25
Operations on Power Series
• Termwise Multiplication

Two power series may be multiplied term by term. More precisely: suppose that the series (8) have
positive radii of convergence and let ƒ(x) and g(x) be their sums.

• Multiplying each term of the first series by each term of the second series and collecting like
powers of x – x0, ➔ the series

Σ_{m=0}^∞ (a0bm + a1b_{m–1} + ‥‥ + amb0)(x – x0)^m
= a0b0 + (a0b1 + a1b0)(x – x0) + (a0b2 + a1b1 + a2b0)(x – x0)² + ‥‥

converges and represents ƒ(x)g(x).

 Its radius of convergence is at least equal to the smaller of Ra and Rb.
• Vanishing of all coefficients.
If a power series has a positive radius of convergence and a sum that is identically zero throughout
its interval of convergence, then each coefficient of the series is zero.
26
Operations on Power Series
• Shifting summation indices.

An index of summation is a dummy variable and may be renamed. Ex. Σ_{m=1}^∞ 3^m m²/m! = Σ_{k=1}^∞ 3^k k²/k!

An index of summation can also be shifted.

Ex. Σ_{m=2}^∞ m(m – 1) am x^{m–2}: if we set m = s + 2, then s = m – 2 and

➔ Σ_{s=0}^∞ (s + 2)(s + 1) a_{s+2} x^s = 2a2 + 6a3x + 12a4x² + ‥‥

• Ex. Needed in writing the sum of two series as a single series:

x² Σ_{m=2}^∞ m(m – 1) am x^{m–2} + 2 Σ_{m=1}^∞ m am x^{m–1}
  = x² (2a2 + 6a3x + 12a4x² + ‥‥) + 2 (a1 + 2a2x + 3a3x² + ‥‥)

➔ Σ_{m=2}^∞ m(m – 1) am x^m + 2 Σ_{m=1}^∞ m am x^{m–1}

set m = s in the first sum and m – 1 = s in the second:

➔ Σ_{s=2}^∞ s(s – 1) as x^s + Σ_{s=0}^∞ 2(s + 1) a_{s+1} x^s = Σ_s [s(s – 1) as + 2(s + 1) a_{s+1}] x^s.

Note that the summation limits are shifted accordingly.   27


Existence of Power Series Solutions of ODEs.
Real Analytic Functions

DEFINITION
Real Analytic Function

A real function ƒ(x) is called analytic at a point x =


x0 if it can be represented by a power series in
powers of x – x0 with radius of convergence R > 0.

28
Existence of Power Series Solutions of ODEs.
Real Analytic Functions

THEOREM 1
Existence of Power Series Solutions
If p, q, and r in (9) are analytic at x = x0, then every
solution of (9) is analytic at x = x0 and can thus be
represented by a power series in powers of x – x0
with radius of convergence R > 0.

29
Chapter 5_ Series Solutions of ODEs.
Special Functions
1. Power Series Method
2. Theory of the Power Series Method
3. Legendre’s Equation. Legendre Polynomials Pn(x)
4. Frobenius Method
5. Bessel’s Equation. Bessel Functions Jν(x)
6. Bessel Functions of the Second Kind Yν(x)
7. Sturm-Liouville Problems. Orthogonal Functions
8. Orthogonal Eigenfunction Expansions
9. Summary of Chapter 5
Legendre’s Equation

• This is Legendre’s equation


(1) (1 – x2)y'' – 2xy' + n(n + 1)y = 0
 n is a given constant, a real number.
Any solution of (1) is called a Legendre function.
The equation arises in systems with spherical symmetry.
The coefficients of (1) are analytic at x = 0.
Dividing (1) by the coefficient 1 – x² of y'',
➔ the coefficients –2x/(1 – x²) and n(n + 1)/(1 – x²) of the new equation are analytic at x = 0.
• Hence by Theorem 1 in Sec. 5.2, Legendre's equation has power series solutions of the form
(2)  y(x) = Σ_{m=0}^∞ am x^m.

31
Legendre’s Equation
• Substituting (2) and its derivatives into (1), and set k = n(n + 1),

• By writing the first expression as two separate series we have the equation

• To obtain the same general power xs in all four series, set m – 2 = s (thus m = s + 2) in the first
series and simply write s instead of m in the other three series. This gives

32
Legendre’s Equation
Rewriting in column form, with each column containing one power of x:

2·1·a2 + 3·2·a3 x + 4·3·a4 x² + ‥‥ + (s+2)(s+1) a_{s+2} x^s + ‥‥
                  –  2·1·a2 x² – ‥‥ – s(s–1) as x^s – ‥‥
       – 2·1·a1 x – 2·2·a2 x² – ‥‥ – 2s as x^s – ‥‥
+ k a0 +  k a1 x  +  k a2 x²  + ‥‥ + k as x^s + ‥‥  = 0

Collecting coefficients of the same power (with k = n(n + 1)):

 x⁰  (3a)  2a2 + n(n+1) a0 = 0
 x   (3b)  6a3 + [–2 + n(n+1)] a1 = 0
 ⋯
 x^s (3c)  (s+2)(s+1) a_{s+2} + [–s(s–1) – 2s + n(n+1)] as = 0

Since –s(s–1) – 2s + n(n+1) = (n – s)(n + s + 1), the general formula (recursion formula) is

(4)  a_{s+2} = – (n – s)(n + s + 1) / [(s + 2)(s + 1)] · as ,   s = 0, 1, 2, ‥‥

and so on.

33
Legendre’s Equation
By inserting these expressions for the coefficients into (2) we obtain
(5)  y(x) = a0 y1(x) + a1 y2(x)
where a0 and a1 are arbitrary constants, and

(6)  y1(x) = 1 – n(n+1)/2! x² + (n–2)n(n+1)(n+3)/4! x⁴ – + ‥‥

(7)  y2(x) = x – (n–1)(n+2)/3! x³ + (n–3)(n–1)(n+2)(n+4)/5! x⁵ – + ‥‥

These series converge for│x│< 1.


34
Legendre Polynomials Pn(x)
• In various applications, power series solutions of ODEs reduce to polynomials, that is, they
terminate after finitely many terms.

• For Legendre's equation this happens when the parameter n is a nonnegative integer, because
then the numerator (n – s)(n + s + 1) in (4) is zero for s = n, so that a_{n+2} = 0, a_{n+4} = 0, a_{n+6} = 0, ‥‥.

• Hence if n is even, y1(x) reduces to a polynomial of degree n.

• If n is odd, the same is true for y2(x).

• These polynomials, multiplied by some constants, are called Legendre polynomials and are
denoted by Pn(x).

35
Legendre Polynomials Pn(x)
• The standard choice of the constant is made as follows. We choose the coefficient an of the highest
power x^n as

(8)  an = (2n)! / (2^n (n!)²) = 1·3·5 ‥‥ (2n – 1) / n!   (n a positive integer)

and an = 1 if n = 0.

Then calculate the other coefficients from (4), solved for as in terms of a_{s+2}, that is,

(9)  as = – (s + 2)(s + 1) / [(n – s)(n + s + 1)] · a_{s+2}   (s ≤ n – 2).

The reason for this choice of an: (8) makes Pn(1) = 1 for every n (see Fig. 104).

36
Legendre Polynomials Pn(x)
• What is am for this choice of an?

From (8) and (9) one obtains, for the coefficient of x^{n–2m},

(10)  a_{n–2m} = (–1)^m (2n – 2m)! / [2^n m! (n – m)! (n – 2m)!]   (n – 2m ≥ 0).

37
Legendre Polynomials Pn(x)

38
Legendre Polynomials Pn(x)
• The resulting solution of Legendre’s differential equation (1) is called the Legendre polynomial of
degree n and is denoted by Pn(x). From (10) we obtain

(11)  Pn(x) = Σ_{m=0}^M (–1)^m (2n – 2m)! / [2^n m! (n – m)! (n – 2m)!] · x^{n–2m}

• where M = n/2 or (n – 1)/2, whichever is an integer. The first few of these functions are (Fig. 104)

P0(x) = 1,   P1(x) = x,   P2(x) = (3x² – 1)/2,   P3(x) = (5x³ – 3x)/2,
P4(x) = (35x⁴ – 30x² + 3)/8,   P5(x) = (63x⁵ – 70x³ + 15x)/8,

• and so on.

39
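As a quick check (a sketch, not from the original slides, assuming NumPy is available), the Legendre polynomials above and the normalization Pn(1) = 1 can be reproduced numerically:

```python
# Sketch: power-basis coefficients of the first few Legendre polynomials
# and the check Pn(1) = 1 that motivates the choice (8).
import numpy as np
from numpy.polynomial import legendre as L

for n in range(6):
    c = [0] * n + [1]                         # coefficient vector selecting P_n
    print(n, L.leg2poly(c), L.legval(1.0, c)) # e.g. P2: [-0.5, 0, 1.5], P2(1) = 1.0
```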
Legendre Polynomials Pn(x)
• The orthogonality of the Legendre polynomials is an important property that makes these functions
especially useful in applications (see Sec. 5.7).

40
Chapter 5_ Series Solutions of ODEs.
Special Functions
1. Power Series Method
2. Theory of the Power Series Method
3. Legendre’s Equation. Legendre Polynomials Pn(x)
4. Frobenius Method
5. Bessel’s Equation. Bessel Functions Jν(x)
6. Bessel Functions of the Second Kind Yν(x)
7. Sturm-Liouville Problems. Orthogonal Functions
8. Orthogonal Eigenfunction Expansions
9. Summary of Chapter 5
Frobenius Method

• An extension of power series method.


• Why is an extension needed?
The power series method works for the linear differential equation
y'' + p(x) y' + q(x) y = 0
when p(x) and q(x) are analytic for all x for which the equation is considered.
Some equations of practical importance have a "singular point", at which p(x) or
q(x) is no longer analytic,
e.g. Bessel's equation.

42
Frobenius Method

THEOREM 1
Frobenius Method(1)
Let b(x) and c(x) be any functions that are analytic at x = 0.
Then the ODE
(1)  y'' + (b(x)/x) y' + (c(x)/x²) y = 0
has at least one solution that can be represented in the form
(2)  y(x) = x^r Σ_{m=0}^∞ am x^m = x^r (a0 + a1x + a2x² + ‥‥),

where the exponent r may be any (real or complex) number


(and r is chosen so that a0 ≠ 0).

43
Frobenius Method

THEOREM 1
Frobenius Method(2)
The ODE (1) also has a second solution (such that
these two solutions are linearly independent) that may
be similar to (2) (with a different r and different
coefficients) or may contain a logarithmic term. (Details
in Theorem 2 below.)

44
Regular and Singular Points

• Regular and Singular Points


. A regular point of
y" + p(x)y' + q(x)y = 0
is a point x0 at which the coefficients p and q are analytic. Then the power series
method can be applied.
If x0 is not regular, it is called singular.
Similarly, a regular point of the ODE
h̃(x)y'' + p̃(x)y' + q̃(x)y = 0
is an x0 at which h̃, p̃, q̃ are analytic and h̃(x0) ≠ 0 (so that we can divide by h̃ and get
the previous standard form). If x0 is not regular, it is called singular.

45
Indicial Equation, Indicating the Form of Solutions
• How to solve (1) ?

First, multiply (1) by x2 gives the more convenient form

(1') x2y" + xb(x)y' + c(x)y = 0.

• Step 1. Expand b(x) and c(x) in power series,

b(x) = b0 + b1x + b2x2 + ‥‥ ,


c(x) = c0 + c1x + c2x2 + ‥‥ , or do nothing if b(x) and c(x) are polynomials.

• Step 2. Differentiate (2) term by term, finding

(2*)  y'(x) = Σ_{m=0}^∞ (m + r) am x^{m+r–1},   y''(x) = Σ_{m=0}^∞ (m + r)(m + r – 1) am x^{m+r–2}.
46
Indicial Equation, Indicating the Form of Solutions

• Step 3. Inserting all these series into (1'), obtain

(3) xr[r(r – 1)a0 + ‥‥] + (b0 + b1x + ‥‥)xr(ra0 + ‥‥) + (c0 + c1x + ‥‥)xr(a0 + a1x + ‥‥) = 0.

• Step 4. Equate the sum of the coefficients of each power xr, xr+1, xr+2, ‥‥ to zero. This yields a
system of equations involving the unknown coefficients am.

xr : [r(r – 1) + b0r + c0]a0 = 0.

Since by assumption a0 ≠ 0,

 (4) r(r – 1) + b0r + c0 = 0.

• This important quadratic equation is called the indicial equation of the ODE (1).

47
Indicial Equation, Indicating the Form of Solutions

• Solution of (1).

One of the two solutions will always be of the form (2), where r is a root of (4).

The form of the other solution depends on the roots of (4); there are three cases:

- Case 1. r = r1, r2 with | r1 – r2 | not equal to an integer 1, 2, 3, ‥‥

- Case 2. r = r1 = r2 (double root)

- Case 3. r = r1, r2 with | r1 – r2 | equal to an integer 1, 2, 3, ‥‥

48
Indicial Equation, Indicating the Form of Solutions

THEOREM 2
Frobenius Method. Basis of Solutions.
Three Cases(1)
Suppose that the ODE (1) satisfies the assumptions in
Theorem 1. Let r1 and r2 be the roots of the indicial
equation (4). Then we have the following three cases.
Case 1. Distinct Roots Not Differing by an Integer. A
basis is
(5)  y1(x) = x^{r1} (a0 + a1x + a2x² + ‥‥)
and
(6)  y2(x) = x^{r2} (A0 + A1x + A2x² + ‥‥).

49
Indicial Equation, Indicating the Form of Solutions

THEOREM 2
Frobenius Method. Basis of Solutions.
Three Cases(2)
Case 2. Double Root r1 = r2 = r. A basis is
(7)  y1(x) = x^r (a0 + a1x + a2x² + ‥‥)
(of the same general form as before) and
(8)  y2(x) = y1(x) ln x + x^r (A1x + A2x² + ‥‥)   (x > 0).

50
Indicial Equation, Indicating the Form of Solutions

THEOREM 2
Frobenius Method. Basis of Solutions.
Three Cases(3)
Case 3. Roots Differing by an Integer. A basis is
(9)  y1(x) = x^{r1} (a0 + a1x + a2x² + ‥‥)
(of the same general form as before) and
(10)  y2(x) = k y1(x) ln x + x^{r2} (A0 + A1x + A2x² + ‥‥),
where the roots are so denoted that r1 – r2 > 0 and k may
turn out to be zero.

51
Example 1 Euler–Cauchy Equation, Illustrating
Cases 1 and 2 and Case 3 without a Logarithm

• For the Euler–Cauchy equation (Sec. 2.5)


x2y" + b0xy' + c0y = 0 (b0, c0 constant)
substitution of y = xr gives the auxiliary equation
r(r – 1) + b0r + c0 = 0,

which is the indicial equation [and y = x^r is a very special form of (2)!]. For different roots r1, r2 we
get a basis y1 = x^{r1}, y2 = x^{r2}, and for a double root r we get the basis x^r, x^r ln x. Accordingly, for this
simple ODE, Case 3 plays no extra role.

52
Example 2 Illustration of Case 2 (Double Root)

• Solve the ODE

(11) x(x – 1)y" + (3x – 1)y' + y = 0.

(This is a special hypergeometric equation, as we shall see in the problem set.)

• Solution. Writing (11) in the standard form (1), we see that it satisfies the assumptions in
Theorem 1. [What are b(x) and c(x) in (11)?] By inserting (2) and its derivatives (2*) into (11) we
obtain

(12)

53
Example 2 Illustration of Case 2 (Double Root)

• The smallest power is xr-1, occurring in the second and the fourth series; by equating the sum of
its coefficients to zero

➔ [–r(r – 1) – r] a0 = –r² a0 = 0, thus r² = 0.

Hence this indicial equation has the double root r = 0.

• First Solution. We insert this value r = 0 into (12) and equate the sum of the coefficients of the
power xs to zero, obtaining

s(s – 1)as – (s + 1)sas+1 + 3sas – (s + 1)as+1 + as = 0

➔ as+1 = as.

➔ a0 = a1 = a2 = ‥‥

If a0 = 1, we obtain the solution

y1(x) = Σ_{m=0}^∞ x^m = 1/(1 – x)   (│x│< 1).
54
Example 2 Illustration of Case 2 (Double Root)

• Second Solution. y2 = uy1, get a second independent solution y2 by the method of reduction of
order (Sec. 2.1),

substituting y2 = uy1 and its derivatives into the equation. This leads to (9), Sec. 2.1,

u = ∫ U dx,   U = (1/y1²) e^{–∫ p dx},

where y'' + p(x) y' + q(x) y = 0 is the ODE in standard form,
➔ p = (3x – 1)/(x² – x), the coefficient of y' in (11) in standard form. By partial fractions,

–∫ p dx = –∫ [1/x + 2/(x – 1)] dx = – ln x – 2 ln(x – 1),   so   e^{–∫ p dx} = 1/[x(x – 1)²].

• Hence (9), Sec. 2.1, becomes

U = (1 – x)² / [x(x – 1)²] = 1/x,   u = ∫ U dx = ln x,   y2 = u y1 = (ln x)/(1 – x).
55
Example 2 Illustration of Case 2 (Double Root)

• y1 and y2 are shown in Fig. 106. These functions are linearly independent and thus form a basis
on the interval 0 < x < 1 (as well as on 1 < x < ∞).

56
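As an independent check (a sketch, not from the original slides, assuming SymPy is available), both solutions found above can be substituted back into (11) symbolically:

```python
# Sketch: verify that y1 = 1/(1 - x) and y2 = ln(x)/(1 - x) satisfy
# x(x - 1) y'' + (3x - 1) y' + y = 0.
import sympy as sp

x = sp.symbols('x', positive=True)
ode = lambda y: x*(x - 1)*sp.diff(y, x, 2) + (3*x - 1)*sp.diff(y, x) + y

y1 = 1/(1 - x)
y2 = sp.log(x)/(1 - x)
print(sp.simplify(ode(y1)), sp.simplify(ode(y2)))   # both print 0
```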
Example 3 Case 3, Second Solution with Logarithmic Term
• Solve the ODE

(13) (x2 – x)y" – xy' + y = 0.

• Solution. Substituting (2) and (2*) into (13), we have

 take x2, x, and x inside the summations and collect all terms with power xm+r and simplify algebraically,

In the first series we set m = s and in the second m = s + 1, thus s = m – 1. Then

57
Example 3 Case 3, Second Solution with Logarithmic Term
The lowest power is xr-1 (take s = –1 in the second series) and gives the indicial equation

r(r – 1) = 0.

The roots are r1 = 1 and r2 = 0. They differ by an integer. This is Case 3.

• First Solution. From (14) with r = r1 = 1 we have

This gives the recurrence relation

Hence a1 = 0, a2 = 0, ‥‥ successively. Taking a0 = 1, we get as a first solution

y1(x) = x.
58
Example 3 Case 3, Second Solution with Logarithmic Term
• Second Solution. r = r2 = 0

Applying reduction of order (Sec. 2.1), substitute

y2 = y1u = xu, y'2 = xu' + u and y''2 = xu'' + 2u' into the ODE, obtaining

(x2 – x)(xu'' + 2u') – x(xu' + u) + xu = 0 or (x2 – x)u'' + (x – 2)u' = 0.

From this, using partial fractions and integrating (taking the integration constant zero), we get

ln u' = ln(x – 1) – 2 ln x.

• Taking exponents and integrating (again taking the integration constant zero), we obtain

u' = (x – 1)/x² = 1/x – 1/x²,   u = ln x + 1/x,   y2 = x u = x ln x + 1.
➔ y1 and y2 are linearly independent, and y2 has a logarithmic term. Hence y1 and y2 constitute a basis
of solutions for positive x.
59
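The same kind of symbolic check works here (a sketch, not from the original slides):

```python
# Sketch: verify that y1 = x and y2 = x*ln(x) + 1 satisfy
# (x^2 - x) y'' - x y' + y = 0.
import sympy as sp

x = sp.symbols('x', positive=True)
ode = lambda y: (x**2 - x)*sp.diff(y, x, 2) - x*sp.diff(y, x) + y

print(sp.simplify(ode(x)), sp.simplify(ode(x*sp.log(x) + 1)))   # both print 0
```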
Chapter 5_ Series Solutions of ODEs.
Special Functions
1. Power Series Method
2. Theory of the Power Series Method
3. Legendre’s Equation. Legendre Polynomials Pn(x)
4. Frobenius Method
5. Bessel’s Equation. Bessel Functions Jν(x)
6. Bessel Functions of the Second Kind Yν(x)
7. Sturm-Liouville Problems. Orthogonal Functions
8. Orthogonal Eigenfunction Expansions
9. Summary of Chapter 5
Bessel’s equation
• Bessel’s equation,

(1) x2y" + xy' + (x2 –ν2)y = 0. (ν - real and nonnegative)

or, in standard form (after division by x²):

y'' + (1/x) y' + (1 – ν²/x²) y = 0.
Source of equations: electric fields, heat conduction, mass transfer, and vibrations, etc., with
cylindrical symmetry (just as Legendre’s equation may appear in cases of spherical symmetry).

Method of solution: Frobenius method.

Set

(2)  y(x) = Σ_{m=0}^∞ am x^{m+r}   (a0 ≠ 0).
Insert it and its derivatives into (1). This gives


61
Bessel’s equation
 Coefficient of x^{r+s}:

from the first term (x²y''), m = s; from the second term (xy'), m = s;
from the third term (x²y), m = s – 2; from the last term (–ν²y), m = s.

This gives
(3a)  s = 0:  [r(r – 1) + r – ν²] a0 = (r² – ν²) a0 = 0
(3b)  s = 1:  [(r + 1)r + (r + 1) – ν²] a1 = 0
(3c)  s ≥ 2:  (s + r + ν)(s + r – ν) as + a_{s–2} = 0.

From (3a), since a0 ≠ 0, we obtain the indicial equation
(4)  (r + ν)(r – ν) = 0.
The roots are r1 = ν (≧ 0) and r2 = –ν.

62
Bessel’s equation
• Coefficient Recurrence in the case of r = r1 = ν.

For r = r1 = ν,

(3b) ➔ [(r + 1)r + (r + 1) – ν²] a1 = [(ν+1)ν + (ν+1) – ν²] a1 = (2ν + 1) a1 = 0   ➔  a1 = 0

(3c) ➔ (s + r + ν)(s + r – ν) as + a_{s–2} = 0

➔ (5)  (s + 2ν) s as + a_{s–2} = 0;   since a1 = 0, it follows that a3 = a5 = ‥‥ = 0.

For s = 2m, (5) gives

(6)  a_{2m} = – a_{2m–2} / (2² m (ν + m)),   m = 1, 2, 3, ‥‥

From (6) we can determine a2, a4, ‥‥ successively. This gives

a2 = – a0 / [2²(ν + 1)],   a4 = – a2 / [2²·2(ν + 2)] = a0 / [2⁴ 2! (ν + 1)(ν + 2)],

and so on, and in general

(7)  a_{2m} = (–1)^m a0 / [2^{2m} m! (ν + 1)(ν + 2) ‥‥ (ν + m)],   m = 1, 2, ‥‥
63
Bessel Functions Jn(x) For Integer ν= n
• Integer values of ν are denoted by n. This is standard. For ν= n the relation (7) becomes

(8)  a_{2m} = (–1)^m a0 / [2^{2m} m! (n + 1)(n + 2) ‥‥ (n + m)],   m = 1, 2, ‥‥

a0 is still arbitrary, so that the series (2) with these coefficients would contain this arbitrary factor a0.

Choose

(9)  a0 = 1 / (2^n n!).

Purpose: then n!(n + 1) ‥‥ (n + m) = (m + n)! in (8), so that (8) simply becomes

(10)  a_{2m} = (–1)^m / [2^{2m+n} m! (n + m)!],   m = 1, 2, ‥‥

This simplicity of the denominator of (10) partially motivates the choice (9).

64
Bessel Functions Jn(x) For Integer ν= n

For r = r1 = ν = n, a particular solution of (1) is obtained,

denoted by Jn(x) and given by

(11)  Jn(x) = x^n Σ_{m=0}^∞ (–1)^m x^{2m} / [2^{2m+n} m! (n + m)!].

Jn(x) is called the Bessel function of the first kind of order n.

The series (11) converges for all x. In fact, it converges very rapidly because of the factorials in the denominator.

65
Example 1 Bessel Functions J0(x) and J1(x)
• For n = 0, obtain from (11) the Bessel function of order 0,

(12)  J0(x) = Σ_{m=0}^∞ (–1)^m x^{2m} / [2^{2m} (m!)²] = 1 – x²/4 + x⁴/64 – + ‥‥ ,

which looks similar to a cosine (Fig. 107).

• For n = 1, obtain the Bessel function of order 1,

(13)  J1(x) = Σ_{m=0}^∞ (–1)^m x^{2m+1} / [2^{2m+1} m! (m + 1)!] = x/2 – x³/16 + x⁵/384 – + ‥‥ ,

which looks similar to a sine (Fig. 107).

• The zeros of these functions are not completely regularly spaced (see also Table A1 in App. 5).
 The height of the "waves" decreases with increasing x.
 For large x, the term n²/x² in (1) in standard form [(1) divided by x²] is zero (if n = 0) or small in
absolute value, and so is y'/x.
➔ Bessel's equation then approaches y'' + y = 0,
with solutions cos x and sin x;
66
Example 1 Bessel Functions J0(x) and J1(x)
y'/x acts as a “damping term,” in part responsible for the decrease in height. One can show that for
large x,

(14)  Jn(x) ~ √(2/(πx)) cos(x – nπ/2 – π/4),

where ~ is read “asymptotically equal” and means that for fixed n the quotient of the two sides
approaches 1 as x → ∞.

• Formula (14) is surprisingly accurate even for smaller x (> 0). For instance, it will give you good
starting values in a computer program for the basic task of computing zeros. For example, for the
first three zeros of J0 you obtain the values 2.356 (2.405 exact to 3 decimals, error 0.049), 5.498
(5.520, error 0.022), 8.639 (8.654, error 0.015), etc.

Fig. 107. Bessel functions of the first kind J0 and J1 67
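A numerical sketch (not from the original slides, assuming SciPy is available) comparing J0 with the asymptotic form (14) and its zero estimates quoted above:

```python
# Sketch: compare J0 with the asymptotic form (14) and its zero estimates.
import numpy as np
from scipy.special import j0, jn_zeros

x = np.array([5.0, 10.0, 20.0])
asym = np.sqrt(2/(np.pi*x)) * np.cos(x - np.pi/4)   # (14) with n = 0
print(j0(x))
print(asym)                                          # already close for moderate x

# zeros of the asymptotic form: cos(x - pi/4) = 0  ->  x = 3*pi/4 + m*pi
print(3*np.pi/4 + np.arange(3)*np.pi)   # 2.356, 5.498, 8.639
print(jn_zeros(0, 3))                   # 2.405, 5.520, 8.654 (exact zeros of J0)
```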


Bessel Functions Jν(x) for any ν≥ 0. Gamma Function
• Extend (11) for ν= n to any ν ≥ 0.

• Definition of the Gamma function, Γ(ν):

(15)  Γ(ν) = ∫_0^∞ e^{–t} t^{ν–1} dt   (ν > 0)

• Integration by parts gives

Γ(ν + 1) = ∫_0^∞ e^{–t} t^ν dt = [–e^{–t} t^ν]_0^∞ + ν ∫_0^∞ e^{–t} t^{ν–1} dt

➔ (16)  Γ(ν + 1) = ν Γ(ν)

• From (15), set ν = 1:

Γ(1) = ∫_0^∞ e^{–t} dt = 1

 Γ(2) = 1·Γ(1) = 1!,  Γ(3) = 2·Γ(2) = 2!  ➔  in general, (17)  Γ(n + 1) = n!,  n = 0, 1, 2, ‥‥

68
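A small numerical check of (16) and (17) (a sketch, not from the original slides):

```python
# Sketch: Gamma(n+1) = n! and Gamma(nu+1) = nu*Gamma(nu) with scipy.
from math import factorial
from scipy.special import gamma

print([(gamma(n + 1), factorial(n)) for n in range(6)])   # pairs agree
nu = 2.7
print(gamma(nu + 1), nu * gamma(nu))                      # both the same value
```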
Bessel Functions Jν(x) for any ν≥ 0. Gamma Function
• a0 for any ν: in (9), a0 = 1 / (2^n n!), that is, a0 = 1 / (2^n Γ(n + 1)).

➔ This suggests the choice

(18)  a0 = 1 / (2^ν Γ(ν + 1)).

Then (7) becomes

(19)  a_{2m} = (–1)^m / [2^{2m+ν} m! Γ(ν + m + 1)],   m = 1, 2, ‥‥

69
Bessel Functions Jν(x) for any ν≥ 0. Gamma Function
• Hence because of our (standard!) choice (18) of a0, the coefficients (7) are simply the a_{2m} in (19).

• Solution. With these coefficients and r = r1 = ν we get from (2) a particular solution of (1), denoted by
Jν(x) and given by

(20)  Jν(x) = x^ν Σ_{m=0}^∞ (–1)^m x^{2m} / [2^{2m+ν} m! Γ(ν + m + 1)].

Jν(x) is called the Bessel function of the first kind of order ν. The series (20) converges for all x,
as one can verify by the ratio test.

70
General Solution for Noninteger ν. Solution J–ν

• Second root: r = r2 = –ν, ν not an integer.

• Replacing ν by –ν in (20) gives

(21)  J_{–ν}(x) = x^{–ν} Σ_{m=0}^∞ (–1)^m x^{2m} / [2^{2m–ν} m! Γ(m – ν + 1)].
If ν≠ integer, J ν and J –ν are linearly independent.

THEOREM 1
General Solution of Bessel’s Equation
If ν is not an integer, a general solution of Bessel’s
equation for all x ≠ 0 is
(22) y(x) = c1Jν(x) + c2J –ν(x).
71
General Solution for Noninteger ν. Solution J–ν

THEOREM 2
Linear Dependence of
Bessel Functions Jn and J–n

For integer ν = n the Bessel functions Jn(x) and


J–n(x) are linearly dependent, because
(23) J–n(x) = (–1)nJn(x) (n = 1, 2, ‥‥).

72
General Solution for Noninteger ν. Solution J–ν

• Proof:
The Gamma function Γ(α) is defined by (15) for α > 0.

For α < 0 (α not zero or a negative integer), the Gamma function is defined by

Γ(α) = Γ(α + k + 1) / [α(α + 1) ‥‥ (α + k)],   with k chosen so that α + k + 1 > 0.

As α → –n (n = 0, 1, 2, ‥‥), |Γ(α)| → ∞, so 1/Γ(α) → 0.
73
General Solution for Noninteger ν. Solution J–ν

➔ As ν → –n the coefficients of the first n terms in (21) tend to 0 (since 1/Γ(m – n + 1) → 0 for m < n), and for the remaining terms

Γ(m – n + 1) = (m – n)!   (m ≥ n).

Setting m = n + s, so that s = m – n, the remaining sum becomes

J_{–n}(x) = Σ_{s=0}^∞ (–1)^{n+s} x^{2s+n} / [2^{2s+n} (n + s)! s!] = (–1)^n Jn(x).

74
Discovery of Properties From Series

THEOREM 3
Derivatives, Recursions
The derivative of Jν(x) with respect to x can be expressed
by J_{ν–1}(x) or J_{ν+1}(x) by the formulas

(24a)  [x^ν Jν(x)]' = x^ν J_{ν–1}(x)
(24b)  [x^{–ν} Jν(x)]' = – x^{–ν} J_{ν+1}(x).

Furthermore, Jν(x) and its derivative satisfy the
recurrence relations

(24c)  J_{ν–1}(x) + J_{ν+1}(x) = (2ν/x) Jν(x)
(24d)  J_{ν–1}(x) – J_{ν+1}(x) = 2 Jν'(x).
75
Discovery of Properties From Series

• Proof:

76
77
Example 2 Application of Theorem 3 in Evaluation and
Integration
• Formula (24c) can be used recursively in the form

J_{ν+1}(x) = (2ν/x) Jν(x) – J_{ν–1}(x)
for calculating Bessel functions of higher order from those of lower order.
• For instance, J2(x) = 2J1(x)/x – J0(x), so that J2 can be obtained from tables of J0 and J1 (in App. 5
or, more accurately, in Ref. [GR1] in App. 1).
• Use (24b) with ν = 3, integrated on both sides. This evaluates, for instance, the integral

∫_1^2 x^{–3} J4(x) dx = [–x^{–3} J3(x)]_1^2 = –(1/8) J3(2) + J3(1).

A table of J3 (of Ref. [GR1]) or your CAS will give you the numerical value.

78
Example 2 Application of Theorem 3 in Evaluation and
Integration
• Alternatively, obtain J3 from (24c), first using (24c) with ν = 2, that is,
J3 = 4x^{–1} J2 – J1,
then (24c) with ν = 1, that is,
J2 = 2x^{–1} J1 – J0. Together,

J3(x) = (8x^{–2} – 1) J1(x) – 4x^{–1} J0(x),

and with this the integral evaluates to 0.003445448.
79
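A short numerical sketch (not from the original slides) of the recurrence and of the integral just evaluated:

```python
# Sketch: check the recurrence (24c) and the integral from Example 2.
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

x = 1.7
print(jv(2, x), 2*jv(1, x)/x - jv(0, x))                 # J2 from J1, J0
print(jv(3, x), (8/x**2 - 1)*jv(1, x) - 4*jv(0, x)/x)    # J3 from J1, J0

val, _ = quad(lambda t: t**-3 * jv(4, t), 1, 2)          # integral of x^-3 J4
print(val, -jv(3, 2.0)/8 + jv(3, 1.0))                   # both approximately 0.003445448
```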
Example 2 Application of Theorem 3 in Evaluation and
Integration

THEOREM 4
Elementary Jν for Half-Integer Order ν
Bessel functions Jν of orders ±1/2, ±3/2, ±5/2 ‥‥are
elementary; they can be expressed by finitely many
cosines and sines and powers of x. In particular,

(25)  J_{1/2}(x) = √(2/(πx)) sin x,   J_{–1/2}(x) = √(2/(πx)) cos x.

80
Example 2 Application of Theorem 3 in Evaluation and
Integration
• Proof:

81
82
Example 3 Further Elementary Bessel Functions

• From (24c) with ν = 1/2 and ν = –1/2 and (25) we obtain

J_{3/2}(x) = √(2/(πx)) (sin x / x – cos x)   and   J_{–3/2}(x) = –√(2/(πx)) (cos x / x + sin x),

respectively, and so on.

83
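The elementary closed forms can be confirmed numerically (a sketch, not from the original slides):

```python
# Sketch: elementary closed forms of J_{1/2}, J_{-1/2}, J_{3/2}.
import numpy as np
from scipy.special import jv

x = 2.3
c = np.sqrt(2/(np.pi*x))
print(jv(0.5, x),  c*np.sin(x))                    # (25)
print(jv(-0.5, x), c*np.cos(x))                    # (25)
print(jv(1.5, x),  c*(np.sin(x)/x - np.cos(x)))    # from (24c) with nu = 1/2
```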
Chapter 5_ Series Solutions of ODEs.
Special Functions
1. Power Series Method
2. Theory of the Power Series Method
3. Legendre’s Equation. Legendre Polynomials Pn(x)
4. Frobenius Method
5. Bessel’s Equation. Bessel Functions Jν(x)
6. Bessel Functions of the Second Kind Yν(x)
7. Sturm-Liouville Problems. Orthogonal Functions
8. Orthogonal Eigenfunction Expansions
9. Summary of Chapter 5
Bessel Functions of the Second Kind Yν(x)

• Getting a general solution of Bessel equation for integer ν= n


(note that, for ν≠ integer, y = c1 J ν +c2 J –ν)
 y = c1 Jn + c2 Yn, Yn = ?
• n = 0: Bessel Function of the Second Kind Y0(x)
When n = 0, Bessel’s equation can be written
(1) xy" + y' + xy = 0.
With n = 0 the indicial equation (4) in Sec. 5.5 becomes
(r + ν)(r – ν) = r² = 0  ➔  r = 0, a double root.
By Theorem 2, Case 2, in Sec. 5.4 ➔ we so far have only one solution, J0(x).

85
Bessel Functions of the Second Kind Yν(x)
From (8) in Sec. 5.4, the desired second solution must be of the form

(2)  y2(x) = J0(x) ln x + Σ_{m=1}^∞ Am x^m.

Substitute y2 and its derivatives

y2' = J0' ln x + J0/x + Σ_{m=1}^∞ m Am x^{m–1},
y2'' = J0'' ln x + 2J0'/x – J0/x² + Σ_{m=1}^∞ m(m – 1) Am x^{m–2}

into (1) xy'' + y' + xy = 0.

➔ (ln x)(x J0'' + J0' + x J0) + 2 J0' + Σ_{m=1}^∞ m² Am x^{m–1} + Σ_{m=1}^∞ Am x^{m+1} = 0

86
Bessel Functions of the Second Kind Yν(x)
Since J0 is a solution of (1),

➔ x J0'' + J0' + x J0 = 0, so the logarithmic term drops out, leaving

2 J0' + Σ_{m=1}^∞ m² Am x^{m–1} + Σ_{m=1}^∞ Am x^{m+1} = 0.

87
Bessel Functions of the Second Kind Yν(x)
• Collect the coefficients of the powers x⁰, x², ‥‥ , x^{2s}:

x⁰:  A1 = 0
x^{2s}:  (2s + 1)² A_{2s+1} + A_{2s–1} = 0   ➔  A1 = A3 = A5 = ‥‥ = 0.

• Collect the coefficients of the powers x¹, x³, ‥‥ , x^{2s+1} (here 2J0' contributes):

x¹:  –1 + 4 A2 = 0  ➔  A2 = ¼

x^{2s+1}:  (–1)^{s+1} / [2^{2s} (s + 1)! s!] + (2s + 2)² A_{2s+2} + A_{2s} = 0.

88
Bessel Functions of the Second Kind Yν(x)
 Using the short notations

(4)  h1 = 1,   hs = 1 + 1/2 + ‥‥ + 1/s   (s = 2, 3, ‥‥),   so that   A_{2s} = (–1)^{s–1} hs / [2^{2s} (s!)²],

and inserting (4) and A1 = A3 = ‥‥ = 0 into (2), we obtain the result

(5)  y2(x) = J0(x) ln x + Σ_{s=1}^∞ (–1)^{s–1} hs / [2^{2s} (s!)²] x^{2s} = J0(x) ln x + x²/4 – 3x⁴/128 + – ‥‥

Since J0 and y2 are linearly independent functions, they form a basis of (1) for x > 0.

89
Bessel Functions of the Second Kind Yν(x)

• Since J0 and y2 are linearly independent functions, they form a basis of (1) for x > 0.

• Another basis:

replace y2 by an independent particular solution of the form a(y2 + bJ0),

where a (≠ 0) and b are constants.

It is customary to choose a = 2/π and b = γ – ln 2,

where the number γ = 0.577 215 664 90 ‥‥ is the so-called Euler constant, which is defined
as the limit of

1 + 1/2 + ‥‥ + 1/s – ln s

as s approaches infinity.

90
Bessel Functions of the Second Kind Yν(x)

The standard particular solution thus obtained is called the Bessel function of the second kind of
order zero (Fig. 109) or Neumann’s function of order zero and is denoted by Y0(x). Thus [see (4)]

(6)  Y0(x) = (2/π) [ J0(x) (ln(x/2) + γ) + Σ_{s=1}^∞ (–1)^{s–1} hs / [2^{2s} (s!)²] x^{2s} ].

• For small x > 0 the function Y0(x) behaves about like ln x (see Fig. 109, why?).

• Y0(x) → –∞ as x → 0.

Fig. 109. Bessel functions of the second kind Y0 and Y1. (For a small table, see App. 5.)   91
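A numerical sketch (not from the original slides) of the small-x behavior claimed above: near x = 0, J0(x) ≈ 1 in (6), so Y0(x) is approximately (2/π)(ln(x/2) + γ), which tends to –∞:

```python
# Sketch: Y0(x) behaves like (2/pi)*(ln(x/2) + gamma) for small x.
import numpy as np
from scipy.special import y0

gamma_euler = 0.5772156649015329
for x in (0.1, 0.01, 0.001):
    print(x, y0(x), (2/np.pi)*(np.log(x/2) + gamma_euler))  # agreement improves as x -> 0
```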
Bessel Functions of the Second Kind Yn(x)

• For ν = n = 1, 2, 3, ‥‥ ,
the second solution can be obtained by assuming y2(x) = k y1(x) ln x + Σ_{m=0}^∞ Am x^{m–n}.

• The standard second solution Yν(x) is defined for all ν by the formula

(7a)  Yν(x) = [Jν(x) cos νπ – J_{–ν}(x)] / sin νπ   (ν not an integer),
(7b)  Yn(x) = lim_{ν→n} Yν(x)   (n an integer).

This function is called the Bessel function of the second kind of order ν or
Neumann’s function of order ν. Figure 109 shows Y0(x) and Y1(x).

92
Bessel Functions of the Second Kind Yn(x)

• Jν & Yν are linearly independent.


Proof:
1. ν – noninteger
 Yν(x) is a solution of Bessel equation since Jν(x) & J-ν(x) are solutions.
 Yν(x) is independent of Jν(x) since Yν(x) contains J-ν(x) which is linearly independent of Jν(x).
2. ν – integer
 Yn(x) exists and is independent of Jn(x), since Yn(x) contains a ln x term.
• How to obtain Yn .
1. Insert (20), (21) in Sec. 5.5 for Jν(x) & J_{–ν}(x) into (7a) and let ν ➔ n.

93
Bessel Functions of the Second Kind Yn(x)

2. Use L’Hopital rule,

(8)

where x > 0, n = 0, 1, ‥ , and [as in (4)] h0 = 0, h1 = 1,

When n = 0, last term in (8) is replaced by 0.

• Y-n(x) = (-1)n Yn(x).

94
Bessel Functions of the Second Kind Yn(x)

THEOREM 1
General Solution of Bessel’s Equation
A general solution of Bessel’s equation for all values of
ν (and x > 0) is
(9) y(x) = C1Jν(x) + C2Yν(x).

• Bessel functions of the third kind of order ν (or first and second Hankel functions of order ν) are defined by Hν⁽¹⁾(x) = Jν(x) + iYν(x) and Hν⁽²⁾(x) = Jν(x) – iYν(x).

95
Orthogonality
• gm(x) and gn(x) – real-valued functions, defined on a ≤ x ≤ b, for which ∫_a^b gm(x) gn(x) dx exists; write
(1)  ( gm, gn ) = ∫_a^b gm(x) gn(x) dx.
• gm(x), gn(x) are orthogonal on a ≤ x ≤ b if
(2)  ( gm, gn ) = ∫_a^b gm(x) gn(x) dx = 0   (m ≠ n).
• Orthogonal set of functions on a ≤ x ≤ b:
g1, g2, ‥‥ , gm, ‥‥ satisfying (1) and (2).
Ex. Orthogonality of the Legendre polynomials:
∫_{–1}^1 Pm(x) Pn(x) dx = 0   (m ≠ n).

96
Orthogonality

• Orthogonality with respect to a weight function

For some functions g1, g2, ‥‥ , gm, ‥‥ and some nonnegative function p(x),

∫_a^b p(x) gm(x) gn(x) dx = 0   (m ≠ n)

➔ the functions are orthogonal with respect to the weight function p(x) on a ≤ x ≤ b.

Ex. Orthogonality of Bessel functions Jn(x).

97
Chapter 5_ Series Solutions of ODEs.
Special Functions
1. Power Series Method
2. Theory of the Power Series Method
3. Legendre’s Equation. Legendre Polynomials Pn(x)
4. Frobenius Method
5. Bessel’s Equation. Bessel Functions Jν(x)
6. Bessel Functions of the Second Kind Yν(x)
7. Sturm-Liouville Problems. Orthogonal Functions
8. Orthogonal Eigenfunction Expansions
9. Summary of Chapter 5
Sturm–Liouville Problems

• Practical importance:
Bessel's and Legendre's functions form bases for the series representations of given
functions occurring in mechanics, heat conduction, electricity, and other physical
applications.
• So far we have considered initial value problems. We recall from Sec. 2.1 that
such a problem consists of an ODE, say, of second order, and initial conditions
y(x0) = K0, y'(x0) = K1 referring to the same point (initial point) x = x0. We now turn
to boundary value problems. A boundary value problem consists of an ODE and
given boundary conditions referring to the two boundary points (endpoints) x = a
and x = b of a given interval a ≤ x ≤ b. To solve such a problem means to find a
solution of the ODE on the interval a ≤ x ≤ b satisfying the boundary conditions.

99
General form of Sturm–Liouville equation

• We shall see that Legendre’s, Bessel’s, and other ODEs of importance in engineering can be written
as a Sturm–Liouville equation

(1) [p(x)y']' + [q(x) + λr(x)]y = 0

involving a parameterλ. The boundary value problem consisting of an ODE (1) and given Sturm–
Liouville boundary conditions

(a) k1y(a) + k2y'(a) = 0


(2)
(b) l1y(b) + l2y'(b) = 0

is called a Sturm–Liouville problem. We shall see further that these problems lead to useful series
developments in terms of particular solutions of (1), (2).

100
General form of Sturm–Liouville equation

These problems lead to useful series developments in terms of particular solutions of (1), (2).
➔ The crucial connection is orthogonality.
➔ Eigenfunction: a solution y(x) ≢ 0 satisfying (1) & (2)
(y ≡ 0 is the trivial solution).
➔ Eigenvalue: the corresponding λ.
 Assumptions:
(1) p, q, r, and p' are continuous on a ≤ x ≤ b, and
r(x) > 0 (a ≤ x ≤ b).
(2) k1, k2 are given constants, not both zero, and so are l1, l2, not both zero.

101
Example 1 Legendre’s and Bessel’s Equations are
Sturm–Liouville Equations

• Legendre’s equation (1 – x2)y" – 2xy' + n(n + 1)y = 0 may be written

[(1 – x2)y']' + λy = 0 λ = n(n + 1).

This is (1) with p = 1 – x2, q = 0, and r = 1.

• In Bessel's equation

x̃² d²y/dx̃² + x̃ dy/dx̃ + (x̃² – n²) y = 0,

as a model in physics or elsewhere, one often likes to have another parameter k in addition to n. For
this reason we set x̃ = kx. Then by the chain rule dy/dx̃ = y'/k and d²y/dx̃² = y''/k². In the
first two terms, k² and k drop out and we get

x²y'' + xy' + (k²x² – n²)y = 0.

102
Example 1 Legendre’s and Bessel’s Equations are
Sturm–Liouville Equations

• Division by x gives the Sturm–Liouville equation

[xy']' + (–n²/x + λx) y = 0,   λ = k².

This is (1) with p = x, q = –n²/x, and r = x.

• Clearly, y ≡ 0 is a solution—the “trivial solution”—for any λ because (1) is homogeneous and (2)
has zeros on the right. This is of no interest. We want to find eigenfunctions y(x), that is, solutions
of (1) satisfying (2) without being identically zero. We call a number λ for which an eigenfunction
exists an eigenvalue of the Sturm–Liouville problem (1), (2).

103
Example 2 Trigonometric Functions as Eigenfunctions.
Vibrating String

• Find the eigenvalues and eigenfunctions of the Sturm–Liouville problem


(3) y" + λy = 0, y(0) = 0, y(π) = 0.
This problem arises, for instance, if an elastic string (a violin string, for
example) is stretched a little and then fixed at its ends x = 0 and x = π and
allowed to vibrate. Then y(x) is the “space function” of the deflection u(x, t)
of the string, assumed in the form u(x, t) = y(x)w(t), where t is time. (This
model will be discussed in great detail in Secs. 12.2–12.4.)

104
Example 2 Trigonometric Functions as Eigenfunctions.
Vibrating String
• Solution. From (1) and (2) we see that p = 1, q = 0, r = 1 in (1), and a = 0, b = π, k1 = l1 = 1, k2 = l2
= 0 in (2). For negative λ = –ν2 a general solution of the ODE in (3) is y(x) = c1eνx + c2e-νx. From the
boundary conditions we obtain c1 = c2 = 0, so that y ≡ 0, which is not an eigenfunction. For λ = 0
the situation is similar. For positive λ = ν2 a general solution is

y(x) = A cosνx + B sinνx.

From the first boundary condition we obtain y(0) = A = 0. The second boundary condition then yields

y(π) = B sinνπ = 0, thus ν= 0, ±1, ±2, ‥‥ .

For ν = 0 we have y ≡ 0. For λ = ν2 = 1, 4, 9, 16, ‥‥ , taking B = 1, we obtain

y(x) = sinνx (ν = 1, 2, ‥‥).

Hence the eigenvalues of the problem are λ = ν2, where ν = 1, 2, ‥‥ , and corresponding
eigenfunctions are y(x) = sinνx, where ν = 1, 2, ‥‥ .

105
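As a numerical illustration (not from the original slides), the eigenfunctions sin mx found here are mutually orthogonal on [0, π]:

```python
# Sketch: orthogonality of the eigenfunctions sin(m x) on [0, pi].
import numpy as np
from scipy.integrate import quad

for m in range(1, 4):
    for n in range(1, 4):
        val, _ = quad(lambda x: np.sin(m*x)*np.sin(n*x), 0, np.pi)
        print(m, n, round(val, 10))     # pi/2 on the diagonal, 0 otherwise
```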
Example 2 Trigonometric Functions as Eigenfunctions.
Vibrating String

• Existence of Eigenvalues

Eigenvalues of a Sturm-Liouville problem (1), (2), even infinitely many, exist under
rather general conditions on p, q, r in (1).

• Reality of eigenvalues

If p, q, r, and p’ in (1) are real-valued and continuous on the interval a ≤ x ≤ b and r is


positive(or negative) throughout that interval, then all the eigenvalues of the Sturm-
Liouville problem (1) & (2) are real.

( eigenvalues in engineering problem are often related to frequencies, energies, …..)

106
Orthogonality

DEFINITION
Orthogonality(1)
Functions y1(x), y2(x), ‥‥ defined on some interval a ≤ x ≤
b are called orthogonal on this interval with respect to
the weight function r(x) > 0 if for all m and all n different
from m,

(4)  (ym, yn) = ∫_a^b r(x) ym(x) yn(x) dx = 0   (m ≠ n).

The norm ║ym║ of ym is defined by

(5)  ║ym║ = √( (ym, ym) ) = √( ∫_a^b r(x) ym(x)² dx ).

107
Orthogonality

DEFINITION
Orthogonality(2)
Note that this is the square root of the integral in (4) with n =
m.
The functions y1, y2, ‥‥ are called orthonormal on a ≤ x ≤ b if
they are orthogonal on this interval and all have norm 1.
If r(x) = 1, we more briefly call the functions orthogonal instead
of orthogonal with respect to r(x) = 1; similarly for
orthonormality. Then

108
Example 3 Orthogonal Functions. Orthonormal Functions

• The functions ym(x) = sin mx, m = 1, 2, ‥‥ , form an orthogonal set on the interval –π ≤ x ≤ π,
because for m ≠ n we obtain by integration [see (11) in App. A3.1]

∫_{–π}^π sin mx sin nx dx = ½ ∫_{–π}^π cos((m – n)x) dx – ½ ∫_{–π}^π cos((m + n)x) dx = 0.

The norm ║ym║ equals √π, because

║ym║² = ∫_{–π}^π sin² mx dx = π.

Hence the corresponding orthonormal set, obtained by division by the norm, is

sin x/√π,  sin 2x/√π,  sin 3x/√π,  ‥‥
109
Orthogonality of Eigenfunctions

THEOREM 1
Orthogonality of Eigenfunctions(1)
Suppose that the functions p, q, r, and p' in the Sturm–
Liouville equation (1) are real-valued and continuous and
r(x) > 0 on the interval a ≤ x ≤ b. Let ym(x) and yn(x) be
eigenfunctions of the Sturm–Liouville problem (1), (2)
that correspond to different eigenvalues λm and λn,
respectively. Then ym, yn are orthogonal on that interval
with respect to the weight function r, that is,

(6)  ∫_a^b r(x) ym(x) yn(x) dx = 0   (m ≠ n).

110
Orthogonality of Eigenfunctions

THEOREM 1
Orthogonality of Eigenfunctions(2)
If p(a) = 0, then (2a) can be dropped from the problem. If
p(b) = 0, then (2b) can be dropped. [It is then required
that y and y' remain bounded at such a point, and the
problem is called singular, as opposed to a regular
problem in which (2) is used.]
If p(a) = p(b), then (2) can be replaced by the “periodic
boundary conditions”
(7) y(a) = y(b), y'(a) = y'(b).

111
Proof..

112
Proof..

113
Proof..

114
Example 4 Application of Theorem 1. Vibrating Elastic
String

• The ODE in Example 2 is a Sturm–Liouville equation with p = 1, q = 0,


and r = 1. From Theorem 1 it follows that the eigenfunctions ym = sin
mx (m = 1, 2, ‥‥) are orthogonal on the interval 0 ≤ x ≤ π.

115
Example 5 Application of Theorem 1. Orthogonality of the
Legendre Polynomials

• Legendre’s equation is a Sturm–Liouville equation (see Example 1)


[(1 – x2)y']' + λy = 0, λ = n(n + 1)
with p = 1 – x2, q = 0, and r = 1. Since p(–1) = p(1) = 0, we need no boundary
conditions, but have a singular Sturm—Liouville problem on the interval –1 ≤
x ≤ 1. We know that for n = 0, 1, ‥‥ , hence λ = 0, 1 ∙2, 2 ∙3, ‥‥ , the Legendre
polynomials Pn(x) are solutions of the problem. Hence these are the
eigenfunctions. From Theorem 1 it follows that they are orthogonal on that
interval, that is,
(10)  ∫_{–1}^1 Pm(x) Pn(x) dx = 0   (m ≠ n).

116
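A quick numerical check of (10) (a sketch, not from the original slides), which also exhibits the norms 2/(2n + 1) used later in Sec. 5.8:

```python
# Sketch: orthogonality of the Legendre polynomials on [-1, 1].
from scipy.integrate import quad
from scipy.special import eval_legendre

for m in range(4):
    for n in range(4):
        val, _ = quad(lambda x: eval_legendre(m, x)*eval_legendre(n, x), -1, 1)
        print(m, n, round(val, 10), 0 if m != n else 2/(2*n + 1))
```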
Example 6 Application of Theorem 1. Orthogonality of the
Bessel Functions Jn(x)

• The Bessel function Jn(x̃) with fixed integer n ≥ 0 satisfies Bessel's equation (Sec.
5.5)

x̃² d²y/dx̃² + x̃ dy/dx̃ + (x̃² – n²) y = 0,

where x̃ = kx. In Example 1 we transformed this equation, by
setting x̃ = kx, into the Sturm–Liouville equation

[xy']' + (–n²/x + λx) y = 0,   λ = k²,
117
Example 6 Application of Theorem 1. Orthogonality of the
Bessel Functions Jn(x)

with p(x) = x, q(x) = –n2/x, r(x) = x, and parameter λ = k2. Since p(0) = 0,
Theorem 1 implies orthogonality on an interval 0 ≤ x ≤ R (R given, fixed) of
those solutions Jn(kx) that are zero at x = R, that is,
(11) Jn(kR) = 0 (n fixed).
[Note that q(x) = –n2/x is discontinuous at 0, but this does not affect the proof
of Theorem 1.] It can be shown (see Ref. [A13]) that Jn(x̃) has infinitely many
positive zeros, say, x̃ = αn,1 < αn,2 < ‥‥ (see Fig. 107 in Sec. 5.5 for n = 0 and 1). Hence
we must have
(12)  kR = αn,m,   thus   kn,m = αn,m/R   (m = 1, 2, ‥‥).
• This proves the following orthogonality property.

118
THEOREM 2
Orthogonality of Bessel Functions
For each fixed nonnegative integer n the sequence of
Bessel functions of the first kind Jn(kn,1x), Jn(kn,2x), ‥‥
with kn,m as in (12) forms an orthogonal set on the
interval 0 ≤ x ≤ R with respect to the weight function
r(x) = x, that is,

(13)  ∫_0^R x Jn(kn,m x) Jn(kn,j x) dx = 0   (j ≠ m).

119
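A numerical sketch of (13) (not from the original slides), with n = 0 and R = 1:

```python
# Sketch: orthogonality of J0(k_{0,m} x) on [0, 1] with weight x, where
# k_{0,m} = alpha_{0,m} are the positive zeros of J0.
import numpy as np
from scipy.special import j0, j1, jn_zeros
from scipy.integrate import quad

alpha = jn_zeros(0, 3)                      # 2.405, 5.520, 8.654
for m in range(3):
    for j in range(3):
        val, _ = quad(lambda x: x*j0(alpha[m]*x)*j0(alpha[j]*x), 0, 1)
        # off-diagonal entries vanish; diagonal entries equal J1(alpha_m)^2 / 2
        print(m, j, round(val, 10), 0 if m != j else j1(alpha[m])**2/2)
```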
Example 7 Eigenvalues from Graphs

• Solve the Sturm–Liouville problem y" + λy = 0, y(0) + y'(0) = 0, y(π) – y'(π) = 0.

• Solution. A general solution of the ODE and its derivative are

y = A cos kx + B sin kx,   y' = –Ak sin kx + Bk cos kx   (k = √λ).
The first boundary condition gives y(0) + y'(0) = A + Bk = 0, hence A = –Bk. The second
boundary condition and substitution of A = –Bk give

y(π) – y'(π) = A cosπk + B sinπk + Ak sinπk – Bk cosπk


= –Bk cosπk + B sinπk – Bk2 sinπk – Bk cosπk = 0

• We must have B ≠ 0 since otherwise B = A = 0, hence y = 0, which is not an eigenfunction.


Division by B cos πk gives

tan πk = 2k/(1 – k²) = –2k/(k² – 1).
120
Example 7 Eigenvalues from Graphs

• The graph in Fig. 110 now shows us where to look for eigenvalues. These correspond to the k-values
of the points of intersection of tanπk and the right side –2k/(k2 – 1) of the last equation. The
eigenvalues are λm = km2, where λ0 = 0 with eigenfunction y0 = 1 and the other λm are located near 22,
32, 42, ‥‥ , with eigenfunctions cos kmx and sin kmx, m = 1, 2, ‥‥ . The precise numeric determination
of the eigenvalues would require a root-finding method (such as those given in Sec. 19.2).

Fig. 110. Example 7. Circles mark the intersections of tan πk and –2k/(k² – 1)   121
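The intersection points read off the graph can also be located numerically (a sketch, not from the original slides), by solving tan πk = –2k/(k² – 1) on intervals that avoid the poles of tan πk:

```python
# Sketch: k-values of Example 7 near k = 2, 3, 4 by root finding.
import numpy as np
from scipy.optimize import brentq

f = lambda k: np.tan(np.pi*k) + 2*k/(k**2 - 1)
for m in (2, 3, 4):
    k = brentq(f, m - 0.49, m + 0.49)   # interval excludes the poles at half-integers
    print(m, k, k**2)                   # eigenvalues lambda = k^2 located near m^2
```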
Chapter 5_ Series Solutions of ODEs.
Special Functions
1. Power Series Method
2. Theory of the Power Series Method
3. Legendre’s Equation. Legendre Polynomials Pn(x)
4. Frobenius Method
5. Bessel’s Equation. Bessel Functions Jν(x)
6. Bessel Functions of the Second Kind Yν(x)
7. Sturm-Liouville Problems. Orthogonal Functions
8. Orthogonal Eigenfunction Expansions
9. Summary of Chapter 5
Orthogonal Eigenfunction Expansions

• Kronecker’s delta δmn is defined by δmn = 0 if m ≠ n and δmn = 1 if m = n (thus


δnn = 1). Hence for orthonormal functions y0, y1, y2, ‥‥ with respect to weight r(x)
(> 0) on a ≤ x ≤ b we can now simply write (ym, yn) = δmn, written out

(1)  (ym, yn) = ∫_a^b r(x) ym(x) yn(x) dx = δmn.

Also, for the norm we can now write

(2)  ║ym║ = √( (ym, ym) ) = √( ∫_a^b r(x) ym(x)² dx )   (= 1 for orthonormal ym).

123
Orthogonal Series

• Let y0, y1, y2, ‥‥ be an orthogonal set with respect to weight r(x) on an interval a ≤ x ≤ b. Let ƒ(x)
be a function that can be represented by a convergent series

(3)  ƒ(x) = Σ_{m=0}^∞ am ym(x) = a0 y0(x) + a1 y1(x) + ‥‥

This is called an orthogonal expansion or generalized Fourier series. If the ym are


eigenfunctions of a Sturm–Liouville problem, we call (3) an eigenfunction expansion.

To determine the coefficients, multiply both sides of (3) by r(x) yn(x) (n fixed) and integrate over a ≤ x ≤ b. Then we obtain

(ƒ, yn) = ∫_a^b r ƒ yn dx = Σ_{m=0}^∞ am (ym, yn).
• Because of the orthogonality all the integrals on the right are zero, except when m = n. Hence the
whole infinite series reduces to the single term

an(yn, yn) = an║yn║2.

124
Orthogonal Series

• Assuming that all the functions yn have nonzero norm, we can divide by ║yn║2;
writing again m for n, to be in agreement with (3), we get the desired formula for
the Fourier constants

(4)  am = (ƒ, ym) / ║ym║² = (1/║ym║²) ∫_a^b r(x) ƒ(x) ym(x) dx   (m = 0, 1, ‥‥).

125
Example 1 Fourier Series
• A most important class of eigenfunction expansions is obtained from the periodic Sturm–Liouville
problem

y" + λy = 0, y(π) = y(–π), y'(π) = y'(–π).

A general solution of the ODE is y = A cos kx + B sin kx, where k = √λ. Substituting y and its
derivative into the boundary conditions, we obtain

A cos kπ + B sin kπ = A cos (–kπ) + B sin (–kπ)


–kA sin kπ + kB cos kπ = –kA sin (–kπ) + kB cos (–kπ).

Since cos (–α) = cos α, the cosine terms cancel, so that these equations give no condition for these
terms. Since sin (–α) = –sin α, the equations gives the condition sin kπ = 0, hence kπ = mπ, k = m = 0,
1, 2, ‥‥ , so that the eigenfunctions are

cos 0 = 1, cos x, sin x, cos 2x, sin 2x, ‥‥ ,


cos mx, sin mx, ‥‥
corresponding pairwise to the eigenvalues λ = k2 = 0, 1, 4, ‥‥ , m2, ‥ . (sin 0 = 0 is not an eigenfunction.)
126
Example 1 Fourier Series

• By Theorem 1 in Sec. 5.7, any two of these belonging to different eigenvalues are
orthogonal on the interval –π ≤ x ≤ π (note that r(x) = 1 for the present ODE). The
orthogonality of cos mx and sin mx for the same m follows by integration,

∫_{–π}^π cos mx sin mx dx = ½ ∫_{–π}^π sin 2mx dx = 0.

For the norms we get ║1║ = √(2π), and √π for all the others, as you may verify by
integrating 1, cos² x, sin² x, etc., from –π to π. This gives the series (with a slight
extension of notation since we have two functions for each eigenvalue 1, 4, 9, ‥‥)

(5)  ƒ(x) = a0 + Σ_{m=1}^∞ (am cos mx + bm sin mx).

127
Example 1 Fourier Series

According to (4) the coefficients (with m = 1, 2, ‥‥ ) are

(6)  a0 = (1/2π) ∫_{–π}^π ƒ(x) dx,   am = (1/π) ∫_{–π}^π ƒ(x) cos mx dx,   bm = (1/π) ∫_{–π}^π ƒ(x) sin mx dx.

The series (5) is called the Fourier series of ƒ(x). Its coefficients are called the Fourier
coefficients of ƒ(x), as given by the so-called Euler formulas (6) (not to be confused with the
Euler formula (11) in Sec. 2.2).

• For instance, for the “periodic rectangular wave” in Fig. 111, given by

128
Example 1 Fourier Series

we get from (6) the values a0 = 0 and

• Hence the Fourier series of the periodic rectangular wave is

129
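The Euler formulas (6) can be evaluated numerically (a sketch, not from the original slides). The wave in Fig. 111 is not reproduced here; the code below assumes the usual odd rectangular wave f(x) = –1 on (–π, 0), +1 on (0, π), for which am = 0 and bm = 4/(mπ) for odd m:

```python
# Sketch: Fourier coefficients (6) of an assumed odd rectangular wave.
import numpy as np
from scipy.integrate import quad

f = lambda x: np.sign(x)
for m in range(1, 6):
    am, _ = quad(lambda x: f(x)*np.cos(m*x), -np.pi, np.pi, points=[0.0])
    bm, _ = quad(lambda x: f(x)*np.sin(m*x), -np.pi, np.pi, points=[0.0])
    print(m, round(am/np.pi, 8), round(bm/np.pi, 8), 4/(m*np.pi) if m % 2 else 0.0)
```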
Example 2 Fourier–Legendre Series
• A Fourier–Legendre series is an eigenfunction expansion

in terms of Legendre polynomials (Sec. 5.3). The latter are the eigenfunctions of the Sturm–
Liouville problem in Example 5 of Sec. 5.7 on the interval –1 ≤ x ≤ 1. We have r(x) = 1 for
Legendre’s equation, and (4) gives
(7)  am = (2m + 1)/2 ∫_{–1}^1 ƒ(x) Pm(x) dx   (m = 0, 1, ‥‥),

because the norm is

(8)  ║Pm║ = √( ∫_{–1}^1 Pm(x)² dx ) = √( 2/(2m + 1) )   (m = 0, 1, ‥‥),

as we state without proof. (The proof is tricky; it uses Rodrigues’s formula in Problem Set 5.3
and a reduction of the resulting integral to a quotient of gamma functions.)

130
Example 2 Fourier–Legendre Series
• For instance, let ƒ(x) = sin πx. Then we obtain the coefficients

am = (2m + 1)/2 ∫_{–1}^1 sin(πx) Pm(x) dx.
• Hence the Fourier–Legendre series of sin πx is

sin πx = 0.95493P1(x) – 1.15824P3(x) + 0.21429P5(x)


– 0.01664P7(x) + 0.00068P9(x) – 0.00002P11(x) + ‥‥ .

• The coefficient of P13 is about 3‧10-7. The sum of the first three nonzero terms gives a
curve that practically coincides with the sine curve. Can you see why the even-
numbered coefficients are zero? Why a3 is the absolutely biggest coefficient?

131
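The coefficients quoted above can be reproduced numerically (a sketch, not from the original slides):

```python
# Sketch: Fourier-Legendre coefficients (7) of f(x) = sin(pi*x), compared
# with the values quoted above (0.95493, -1.15824, 0.21429, ...).
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

for m in range(1, 8, 2):                      # even-numbered coefficients vanish (f is odd)
    integral, _ = quad(lambda x: np.sin(np.pi*x)*eval_legendre(m, x), -1, 1)
    print(m, round((2*m + 1)/2 * integral, 5))
```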
Example 3 Fourier–Bessel Series
• In Example 6 of Sec. 5.7 we obtained infinitely many orthogonal sets of Bessel functions, one for
each of J0, J1, J2, ‥‥ . Each set is orthogonal on an interval 0 ≤ x ≤ R with a fixed positive R of our
choice and with respect to the weight x. The orthogonal set for Jn is Jn(kn,1x), Jn(kn,2x), Jn(kn,3x), ‥‥ ,
where n is fixed and kn,m is given in (12), Sec. 5.7. The corresponding Fourier–Bessel series is

(9)  ƒ(x) = Σ_{m=1}^∞ am Jn(kn,m x) = a1 Jn(kn,1 x) + a2 Jn(kn,2 x) + ‥‥   (0 ≤ x ≤ R).

The coefficients are (with αn,m = kn,m R)

(10)  am = 2 / [R² J_{n+1}²(αn,m)] ∫_0^R x ƒ(x) Jn(kn,m x) dx   (m = 1, 2, ‥‥),

because the square of the norm is

(11)  ║Jn(kn,m x)║² = ∫_0^R x Jn²(kn,m x) dx = (R²/2) J_{n+1}²(αn,m),

as we state without proof (which is tricky; see the discussion beginning of [A13]).
132
Example 3 Fourier–Bessel Series

• For instance, let us consider ƒ(x) = 1 – x2 and take R = 1 and n = 0 in the series (9), simply writing
λ for α0,m. Then kn,m = α0,m = λ = 2.405, 5.520, 8.654, 11.792, etc. (use a CAS or Table A1 in App.
5). Next we calculate the coefficients am by (10),

am = 2/J1²(λ) ∫_0^1 x (1 – x²) J0(λx) dx.

This can be integrated by a CAS or by formulas as follows. First use [xJ1(λx)]' = λxJ0(λx) from
Theorem 3 in Sec. 5.5 and then integration by parts,

∫_0^1 x (1 – x²) J0(λx) dx = [(1 – x²) x J1(λx)/λ]_0^1 + (2/λ) ∫_0^1 x² J1(λx) dx.
133
Example 3 Fourier–Bessel Series

• The integral-free part is zero. The remaining integral can be evaluated by [x²J2(λx)]' = λx²J1(λx)
from Theorem 3 in Sec. 5.5. This gives

∫_0^1 x (1 – x²) J0(λx) dx = 2 J2(λ)/λ²,   hence   am = 4 J2(λ) / [λ² J1²(λ)].
• Numeric values can be obtained from a CAS (or from the table of Ref. [GR1] in App. 1, together
with the formula J2 = 2x-1J1 – J0 in Theorem 3 of Sec. 5.5). This gives the eigenfunction expansion
of 1 – x2 in terms of Bessel functions J0, that is,
1 – x2 = 1.1081J0(2.405x) – 0.1398J0(5.520x) +
0.0455J0(8.654x) – 0.0210J0(11.792x) + ‥‥ .
• A graph would show that the curve of 1 – x2 and that of the sum of the first three terms practically
coincide.

134
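The expansion coefficients just quoted can be checked directly from (10) (a sketch, not from the original slides):

```python
# Sketch: Fourier-Bessel coefficients (10) of f(x) = 1 - x^2 with R = 1, n = 0,
# compared with the expansion quoted above (1.1081, -0.1398, 0.0455, -0.0210).
import numpy as np
from scipy.special import j0, j1, jn_zeros
from scipy.integrate import quad

alpha = jn_zeros(0, 4)
for lam in alpha:
    integral, _ = quad(lambda x: x*(1 - x**2)*j0(lam*x), 0, 1)
    a_m = 2*integral/j1(lam)**2          # (10) with R = 1, n = 0
    print(round(lam, 3), round(a_m, 4))
```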
Mean Square Convergence. Completeness of
Orthonormal Sets

• Convergence is convergence in the norm, also called mean-square convergence; that


is, a sequence of functions ƒk is called convergent with the limit ƒ if
(12*)  lim_{k→∞} ║ƒk – ƒ║ = 0,
written out by (2) (where we can drop the square root, as this does not affect the limit)
(12)  lim_{k→∞} ∫_a^b r(x) (ƒk(x) – ƒ(x))² dx = 0.

• Accordingly, the series (3) converges and represents ƒ if


(13)  lim_{k→∞} ∫_a^b r(x) (sk(x) – ƒ(x))² dx = 0,

135
Mean Square Convergence. Completeness of
Orthonormal Sets

where sk is the kth partial sum of (3),

(14)  sk(x) = Σ_{m=0}^k am ym(x).

• By definition, an orthonormal set y0, y1, ‥‥ on an interval a ≤ x ≤ b is complete in a set of functions


S defined on a ≤ x ≤ b if we can approximate every ƒ belonging to S arbitrarily closely by a linear
combination a0y0 + a1y1 + ‥‥ + akyk, that is, technically, if for every ε > 0 we can find constants
a0, ‥‥ , ak (with k large enough) such that

(15)  ║ƒ – (a0y0 + ‥‥ + akyk)║ < ε.

Hence in the case of completeness every ƒ in S satisfies the so-called Parseval equality

Σ_{m=0}^∞ am² = ║ƒ║² = ∫_a^b r(x) ƒ(x)² dx.
136
THEOREM 1
Completeness
Let y0, y1, ‥‥ be a complete orthonormal set on
a ≤ x ≤ b in a set of functions S. Then if a
function ƒ belongs to S and is orthogonal to
every ym, it must have norm zero. In particular,
if ƒ is continuous, then ƒ must be identically
zero.

137
Example 4 Fourier Series

• The orthonormal set in Example 1 is complete in the set of continuous


functions on –π ≤ x ≤ π. Verify directly that ƒ(x) ≡ 0 is the only
continuous function orthogonal to all the functions of that set.
• Solution. Let ƒ be any continuous function orthogonal to all functions of that set. By the orthogonality (we
can omit the constant factors 1/√π and 1/√(2π)),

∫_{–π}^π ƒ(x) cos mx dx = 0   and   ∫_{–π}^π ƒ(x) sin mx dx = 0   for all m.

Hence am = 0 and bm = 0 in (6) for all m, so that (3) reduces to ƒ(x) ≡ 0.

138
Chapter 5_ Series Solutions of ODEs.
Special Functions
1. Power Series Method
2. Theory of the Power Series Method
3. Legendre’s Equation. Legendre Polynomials Pn(x)
4. Frobenius Method
5. Bessel’s Equation. Bessel Functions Jν(x)
6. Bessel Functions of the Second Kind Yν(x)
7. Sturm-Liouville Problems. Orthogonal Functions
8. Orthogonal Eigenfunction Expansions
9. Summary of Chapter 5
Summary

• The power series method gives solutions of linear ODEs


(1) y" + p(x)y' + q(x)y = 0
with variable coefficients p and q in the form of a power series (with any
center x0, e.g., x0 = 0)

(2)  y(x) = Σ_{m=0}^∞ am (x – x0)^m = a0 + a1(x – x0) + a2(x – x0)² + ‥‥

Such a solution is obtained by substituting (2) and its derivatives into (1). This
gives a recurrence formula for the coefficients. You may program this formula
(or even obtain and graph the whole solution) on your CAS.

140
Summary

• If p and q are analytic at x0 (that is, representable by a power series in powers


of x – x0 with positive radius of convergence; Sec. 5.2), then (1) has solutions of
this form (2). The same holds if h̃, p̃, q̃ in

h̃(x)y'' + p̃(x)y' + q̃(x)y = 0

are analytic at x0 and h̃(x0) ≠ 0, so that we can divide by h̃ and obtain the
standard form (1). Legendre’s equation is solved by the power series method
in Sec. 5.3.

141
Summary

• The Frobenius method (Sec. 5.4) extends the power series method to ODEs

(3)  y'' + (a(x)/(x – x0)) y' + (b(x)/(x – x0)²) y = 0,

whose coefficients are singular (i.e., not analytic) at x0, but are “not too bad,”
namely, such that a and b are analytic at x0. Then (3) has at least one solution
of the form
(4)  y(x) = (x – x0)^r Σ_{m=0}^∞ am (x – x0)^m = (x – x0)^r [a0 + a1(x – x0) + ‥‥],

142
Summary

• where r can be any real (or even complex) number and is determined by substituting
(4) into (3) from the indicial equation (Sec. 5.4), along with the coefficients of (4). A
second linearly independent solution of (3) may be of a similar form (with different r and
am’s) or may involve a logarithmic term. Bessel’s equation is solved by the Frobenius
method in Secs. 5.5 and 5.6.

• “Special functions” is a common name for higher functions, as opposed to the usual
functions of calculus. Most of them arise either as nonelementary integrals [see (24)–
(44) in App. 3.1] or as solutions of (1) or (3). They get a name and notation and are
included in the usual CASs if they are important in application or in theory. Of this kind,
and particularly useful to the engineer and physicist, are Legendre’s equation and
polynomials P0, P1, ‥‥ (Sec. 5.3), Gauss’s hypergeometric equation and functions
F(a, b, c; x) (Sec. 5.4), and Bessel’s equation and functions Jν and Yν (Secs. 5.5,
5.6).

143
Summary

• Modeling involving ODEs usually leads to initial value problems (Chaps. 1–3) or
boundary value problems. Many of the latter can be written in the form of
Sturm–Liouville problems (Sec. 5.7). These are eigenvalue problems
involving a parameter λ that is often related to frequencies, energies, or other
physical quantities. Solutions of Sturm–Liouville problems, called
eigenfunctions, have many general properties in common, notably the highly
important orthogonality (Sec. 5.7), which is useful in eigenfunction
expansions (Sec. 5.8) in terms of cosine and sine (“Fourier series”, the topic of
Chap. 11), Legendre polynomials, Bessel functions (Sec. 5.8), and other
eigenfunctions.

144
