
Chapter 7

Modelling with Partial Differential Equations

7.1 Partial Differential Equations

A partial differential equation (PDE) is an

equation containing an unknown function u(x, y, . . .)

of two or more independent variables x, y, . . . and

its partial derivatives with respect to these variables.

We call u the dependent variable.

These equations allow us to deal with situations where

[for example] something depends on SPACE as well

as time − − − nearly all of our models so far have

involved variations with TIME only. So for exam-

ple suppose you introduce some lions into a national



park: you might want to know WHERE they are, as

well as how many you have!

7.1.1 Separation of Variables for PDE

This method can be used to solve PDE involving

two independent variables, say x and y, that can be

‘separated’ from each other in the PDE. There are

similarities between this method and the technique

of separating variables for ODE in Chapter 1. We first

make an observation:

Suppose u(x, y) = X(x)Y (y).

Then

(i) ux(x, y) = X ′(x)Y (y)

(ii) uy (x, y) = X(x)Y ′(y)



(iii) uxx(x, y) = X ′′(x)Y (y)

(iv) uyy (x, y) = X(x)Y ′′(y)

(v) uxy (x, y) = X ′(x)Y ′(y)

Notice that each derivative of u remains ‘separated’

as a product of a function of x and a function of y.

We exploit this feature as follows:

7.1.2 Illustration of Separation of Variables

Consider a PDE of the form

ux = f (x)g(y)uy .

If a solution of the form u(x, y) = X(x)Y (y) exists,


then we obtain

X′(x)Y(y) = f(x)g(y)X(x)Y′(y),

i.e., (1/f(x)) · X′(x)/X(x) = g(y) · Y′(y)/Y(y).

LHS is a function of x only while RHS is a function

of y only. We conclude that

LHS = RHS = some constant k.

Thus, we obtain two ODEs:

(1/f(x)) · X′(x)/X(x) = k  ⇒  X′(x) = k f(x) X(x)        (1)

g(y) · Y′(y)/Y(y) = k  ⇒  Y′(y) = (k/g(y)) Y(y)        (2)

Note that (1) is an ODE with independent variable x and dependent variable X, while (2) is an ODE with independent variable y and dependent variable Y.



By solving (1) and (2) respectively for X(x) and

Y (y), we obtain the solution u(x, y) = X(x)Y (y).

7.1.3 Example

Solve ux + xuy = 0.

Solution: If a solution u(x, y) = X(x)Y (y) exists,

then we obtain

X′(x)Y(y) + xX(x)Y′(y) = 0,

i.e., (1/x) · X′(x)/X(x) = −Y′(y)/Y(y).        (3)

This gives two ODEs: setting LHS of (3) = k gives X′ = kxX. This ODE has general solution

X(x) = A e^(kx²/2).        (a)


Similarly, RHS of (3) = k gives Y′ = −kY.

This ODE has general solution

Y(y) = B e^(−ky).        (b)

Multiplying (a) and (b), we obtain a solution of the

PDE

u(x, y) = X(x)Y(y) = C e^(k(x²/2 − y)).
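A quick way to sanity-check this solution is to substitute it back into the PDE with a computer algebra system. The following is a minimal sketch using sympy; the symbol names are our own choices, not part of the notes.

import sympy as sp

x, y, k, C = sp.symbols('x y k C')
u = C * sp.exp(k * (x**2 / 2 - y))      # the separated solution found above

# for the PDE ux + x*uy = 0 the residual should simplify to zero
residual = sp.diff(u, x) + x * sp.diff(u, y)
print(sp.simplify(residual))            # prints 0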

7.2 The Wave Equation

Suppose you have a very flexible string [meaning that

it does not resist bending at all] which lies stretched

tightly along the x axis and has its ends fixed at x =

0 and x = π. Then you pull it in the y-direction so

that it is stationary and has some specified shape, y

= f(x) at time t = 0 [so that f(0)=0 and f(π) = 0].



We can assume that f(x) is continuous and bounded,

but we will let it have some sharp corners, as long as there are only a finite number of them. (Actually, we can allow more

general functions, even discontinuous ones: see below

when we talk about Fourier series.)

What will happen if you now let the string go? Clearly

the string will start to move. We assume that the

only forces acting are those due to the tension in the

string, and that the pieces of the string will only move

in the y-direction.

Now the y-coordinate of any point on the string will

become a function of time as well as a function of

x. So it becomes a function y(t,x) of both t and


x, and we have to use partial derivatives when we

differentiate it. This function satisfies

y(t, 0) = 0,  y(t, π) = 0

for all t, because the ends are nailed down, also

y(0, x) = f (x)

and
∂y/∂t (0, x) = 0,
because the string is initially stationary. Notice that

we need four pieces of information here, and it is

useful to remember that.

Suppose that the mass per unit length of the string

is constant and equal to µ. Then the mass of a small

piece of the string is µdx. [We assume that we don’t



pull the string too far, so it never bends much, hence

the length of the small piece can be approximated by

dx.] So the mass times the acceleration of the small


piece is µ dx ∂²y/∂t². The force acting in the y direction is

just the difference between the y-components of the

tension at the two ends of the piece, and this turns


out to be d(K ∂y/∂x) where K is a certain positive con-

stant. [Look at the note at the end of the solutions of

Tutorial 2 if you are interested in the details.] Using

Newton’s second law we get


d(K ∂y/∂x) = µ dx ∂²y/∂t²,

or

c² ∂²y/∂x² = ∂²y/∂t²,

where c² is a positive constant. This is the famous

WAVE EQUATION. Notice that it involves FOUR

derivatives altogether, two involving x, and two in-

volving t. That matches up with the fact we men-

tioned earlier, that we needed FOUR pieces of infor-

mation to nail down a solution [the endpoints, the

initial position, the initial velocity].

Note that the following function solves the wave equa-

tion [with those four conditions]:

y(t, x) = (1/2) [ f(x + ct) + f(x − ct) ].

Here f(x) gives the initial shape of the string, as

above. [Verify that this does satisfy the PDE, that

y(0,x) = f(x), and that ∂y/∂t = 0 for all x when t = 0, as


it should be. Initially f (x) was only defined between

0 and π, but we can extend it to be an odd, periodic

function of period 2π, and then you can also verify

that y(t,0) = y(t,π) = 0. See Tutorial 11.] This is

called d’Alembert’s solution of the wave equation.

[ASIDE: Let’s expand on that “odd function” remark

in case you are unfamiliar with the procedure. Sup-

pose f (x) is only defined on [0, π]. (It might actually

be defined outside that interval, but just ignore that

− − − we don’t care what is happening outside, so

we can re-define it if we want!) Now DEFINE f (x)

on [−π, π] by setting, for any negative number −a in

[−π, 0], f (−a) ≡ −f (a). You now have an odd func-


tion, defined on [−π, π], which coincides with f (x)

on the positive side. Next, having done that, you can

extend it to the whole real line: consider for example

any number in (π, 3π]: define f (a) ≡ f (a − 2π), and

so on: you just pick up the original function, defined

on [−π, π], and put down copies of it along the whole

real axis. The result is of course periodic with period

2π. Thus, any f (x) defined on [0, π] can be extended

to an odd periodic function with period 2π.]

You can think about f(x-ct) in the following way.

First, think about f(x): it is a function with some

definite shape. Now what is f(x-1)? It is exactly the

same shape, but shifted to the right by one unit.


Similarly f(x-2) is the same shape shifted 2 units

to the right, and similarly f(x-ct) is f(x) shifted to

the right by ct. But if t is time, then ct is some-

thing which increases linearly with time, at a rate

controlled by c. In other words, f(x-ct) represents

the shape f(x) moving to the right at a constant

speed c. In other words, it represents a WAVE [of ar-

bitrary shape] moving to the right at constant speed

c. Similarly f(x+ct) represents a wave [with the same

shape as f(x)] moving to the left at constant speed

c. So d’Alembert’s solution says that the solutions of

the wave equation [with the given boundary and ini-

tial conditions] can be found by superimposing two


waves of those forms.
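A small numerical sketch of d'Alembert's solution may help make this concrete. The initial shape f below is just an illustrative choice (any f with f(0) = f(π) = 0 would do), the speed is set to c = 1, and the odd 2π-periodic extension is built exactly as described above.

import numpy as np

def f(x):
    # an example initial shape on [0, pi] with f(0) = f(pi) = 0
    return np.sin(x) + 0.3 * np.sin(3 * x)

def f_ext(x):
    # odd, 2*pi-periodic extension of f
    x = (x + np.pi) % (2 * np.pi) - np.pi     # reduce to [-pi, pi)
    return np.where(x >= 0, f(x), -f(-x))

def y(t, x, c=1.0):
    # d'Alembert: y(t, x) = (f(x + ct) + f(x - ct)) / 2
    return 0.5 * (f_ext(x + c * t) + f_ext(x - c * t))

xs = np.linspace(0.0, np.pi, 201)
print(np.allclose(y(0.0, xs), f(xs)))          # initial shape recovered at t = 0
print(y(0.7, np.array([0.0, np.pi])))          # the fixed ends stay (numerically) at zero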

7.2.1 Separation of Variables for the Wave Equation

We can also solve the wave equation using the method

of separation of variables. We want to solve

c² yxx = ytt,

y(t, 0) = y(t, π) = 0, y(0, x) = f (x), yt(0, x) = 0,

where f(x) is a given (“reasonable”, see below) func-

tion which is zero at 0 and π.

We separate the variables:

y(t, x) = u(x)v(t),

and get
u′′(x)/u(x) = v′′(t)/(c² v(t)) = −λ.

The usual separation argument now implies that λ is

a constant, and we have

u′′ + λu = 0,

v′′ + λc²v = 0.

Now let’s force u(x) to vanish at 0 and π, so we can

enforce y(t,0) = u(0)v(t) = 0 and y(t,π) = u(π)v(t)

= 0. NOTICE that this is very different from what

we normally do when we try to solve second-order

ODEs. Normally we give some information about

the function at ONE point, for example we might

ask for solutions of u′′ + λu = 0 where u(0) and u′(0)

are given, as in Chapter 1. But here we are giving

the information at TWO different points.



If λ is negative, then u(0) = 0 implies that all solu-

tions of u′′ + λu = 0 are proportional to a hyperbolic

sine function [sinh(x)], and such a function cannot

cut the x axis twice. So λ cannot be negative. Sim-

ilarly, if λ = 0, then u(x) is a straight line function

which cannot cut the x axis twice. So λ has to be pos-

itive, and we can write it as λ = n² for some constant

n, and u(x) = C cos(nx) + D sin(nx) for arbitrary

constants C and D. Since u(0) = 0, we have C = 0

and so u(x) = D sin(nx). If we want u(π) = 0, this

means sin(nπ) = 0 and that is correct PROVIDED

that n is an integer [whole number]. So we have to

enforce that.


Solving the other equation for v(t), we get now

v(t) = A cos(nct) + B sin(nct)

for arbitrary constants A and B. We force v(t) to

satisfy v’(0) = 0 because we want yt(0, x) = u(x)v’(0)

= 0. So now B = 0 and we are left with v(t) =

A cos(nct). So our complete solution is

y(t, x) = bn sin(nx) cos(nct),

where bn is an arbitrary constant and n is any integer.

This satisfies all of our conditions... except one! We

still have not satisfied y(0,x) = f(x).


7.2.2 Fourier Series: Completing the Solution of the Wave Equation

We all know that, in order to talk about vectors in

R3, we need to specify three numbers, the compo-

nents relative to i, j, k. One way to obtain those

components, for a vector V , is by taking a scalar

product: the three components are i·V , j·V , k·V .

Now think about the set of all piecewise continuous

functions[1] on [0, π]. It is clearly a vector space over

R. Can we find a basis for it? The answer is that we

can[2]! Well, apart from certain harmless exceptions

to be discussed.

We claim that the basis is given simply by sin(nx),


[1] For technical reasons we will also assume that the functions have well-defined left and right derivatives at every point.
[2] If necessary we will extend these functions to odd functions, periodic with period 2π.


where n is any positive integer. That is, we claim

that ANY function g(x) of this sort can be expressed,

away from discontinuous points (see below) as




g(x) = ∑_{n=1}^{∞} bn sin(nx),

for certain real numbers bn which we can think of as

the components: they are given, in analogy with i·V ,

j·V , k·V , by
bn = (2/π) ∫_0^π g(x) sin(nx) dx.

That is, the integral (times a constant[3]) plays the

role of the scalar product here. (Note that the series

is well-defined outside [0, π]: out there, it converges

(again, away from discontinuous points) to the odd


[3] To see why the 2/π is needed, ask what the Fourier series of sin(x) itself is: of course it is just 1 × sin(x), so we need b1 = 1. But the integral of sin²(x) from 0 to π is π/2, so you need the 2/π to cancel this.


periodic extension of g(x), as discussed earlier. Of

course, the series itself is clearly odd and periodic

with period 2π, since each term in it is so.)
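As a concrete illustration, the coefficient formula can be evaluated numerically. This is only a sketch: the quadrature rule and the test function are our own choices.

import numpy as np

def fourier_sine_coeffs(g, n_max, n_grid=2000):
    # bn = (2/pi) * integral_0^pi g(x) sin(n x) dx, via a midpoint rule
    dx = np.pi / n_grid
    x = (np.arange(n_grid) + 0.5) * dx
    return np.array([(2.0 / np.pi) * np.sum(g(x) * np.sin(n * x)) * dx
                     for n in range(1, n_max + 1)])

# sanity check: g(x) = sin(x) should give b1 = 1 and every other bn = 0
print(np.round(fourier_sine_coeffs(np.sin, 5), 4))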

The series here is called the FOURIER (sine) SE-

RIES of g(x). It’s hard to overstate the importance

of this rather amazing fact: the Fourier series al-

lows us to express ANY function on this interval as a

string of NUMBERS, the components bn! And num-

bers can be talked about. That is, if I have some

function which hasn’t been named yet (meaning of

course the overwhelming majority of functions), I can

communicate what it is to you by giving you a string

of numbers. Better still: the numbers bn typically


become smaller very rapidly with n, so I can approx-

imate and give you this information in the form of

a FINITE string of numbers, obtained by neglecting

all of the bn beyond some value of n.

At points of discontinuity, it turns out that the Fourier

series doesn’t (necessarily) converge to the function

value at that point. Instead, it converges to the AV-

ERAGE of the left and right limits of the function

at that point. Which is also very cool. I won’t keep

saying this however, I leave it to you.

Now let’s return to the problem of solving the wave

equation. Remember that we have extended the func-

tion f(x) to be an odd function of period 2π. So it has


a Fourier sine series, and since f(x) is continuous and

has only a finite number of sharp corners, we have




f(x) = ∑_{n=1}^{∞} bn sin(nx).

We are supposed to be given f(x), so we know how to

compute the coefficients bn of the Fourier sine series:


bn = (2/π) ∫_0^π f(x) sin(nx) dx.

So we know all of these numbers if we are given f(x).

Now consider the series




∑_{n=1}^{∞} bn sin(nx) cos(nct),

where the numbers bn are the coefficients in the Fourier

series of f(x). Then we notice two things:

[1] If we put t = 0 into this series, then we get exactly

f(x), expressed as its Fourier sine series; and



[2] Since the wave equation is linear, and each term

in this series is a solution of the wave equation, then

this series is also a solution of the wave equation, as

above.

So we have solved the problem: this series is just

exactly what we want, y(t,x)!

Summary: the solution of

c² yxx = ytt,

y(t, 0) = y(t, π) = 0, y(0, x) = f (x), yt(0, x) = 0,

is just


y(t, x) = ∑_{n=1}^{∞} bn sin(nx) cos(nct),

where the bn are just the Fourier sine coefficients of

f(x) [regarded as an odd periodic function of period



2π.]
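The summary above translates directly into a short computation. This is a sketch, assuming a truncation of the series at some finite n and an example initial shape of our own choosing.

import numpy as np

def wave_solution(f, c, n_max=50, n_grid=2000):
    # truncated series y(t, x) = sum_n bn sin(n x) cos(n c t) on [0, pi]
    dx = np.pi / n_grid
    xg = (np.arange(n_grid) + 0.5) * dx
    bn = np.array([(2.0 / np.pi) * np.sum(f(xg) * np.sin(n * xg)) * dx
                   for n in range(1, n_max + 1)])
    def y(t, x):
        n = np.arange(1, n_max + 1)[:, None]
        return np.sum(bn[:, None] * np.sin(n * x) * np.cos(n * c * t), axis=0)
    return y

f = lambda x: np.minimum(x, np.pi - x)     # a "plucked string" shape (our choice)
y = wave_solution(f, c=1.0)
xs = np.linspace(0.0, np.pi, 400)
print(np.max(np.abs(y(0.0, xs) - f(xs))))  # small truncation error at t = 0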

REMARK: We have done everything for the interval

[0, π]. But it is straightforward to get all this to

work for a general interval [0, L] of any length L: the


basis functions are now sin(nπx/L) (which are clearly

periodic with period 2L instead of 2π). The Fourier

formulae are now



g(x) = ∑_{n=1}^{∞} bn sin(nπx/L),

and
bn = (2/L) ∫_0^L g(x) sin(nπx/L) dx,

f will now be a function that vanishes at 0 and L,

and the solution of the wave equation will take the


form

y(t, x) = ∑_{n=1}^{∞} bn sin(nπx/L) cos(nπct/L).

7.2.3 Example

TSUNAMI!!

A tsunami is a wave in the ocean, usually generated

by an undersea earthquake. Out at sea, it moves

extremely fast (≈ 800 km/hour) and is quite small

in height (≈ 1 metre). Both are bad, because they

make early warning extremely difficult; there are a

lot of differently shaped one-metre waves out in the

open ocean. One of the most important of all math-

ematical modelling problems is to understand the


characteristic shape of a tsunami when it is still far

away from shore: if you can recognise the shape (us-

ing satellite radar, for example Elon Musk’s STAR-

LINK constellation would be excellent for this pur-

pose) then you might be able to warn people that

one is coming.

Using the physics of fluids, one can model a surface

wave in SHALLOW water by means of the PDE



∂t η + √(gh) ∂x η + (3/2)√(g/h) η ∂x η + (1/6) h²√(gh) ∂x³ η = 0,

where η is the elevation above mean sea level, h is

the depth of the sea, and g is the acceleration due

to gravity. This non-linear third-order PDE is called

the Korteweg-de Vries equation.


It is very important to note that this equation only

works for waves in SHALLOW water. Now you might

think that the ocean isn’t exactly shallow (typical

depths far away from land are around 3 to 4 km)....

but to a tsunami, it IS shallow, because the wave-

length out there is typically around 200 km! The

point is this: if we can solve the KdV equation, the

shape of the wave we find will be the shape of a

TSUNAMI — not of any other kind of wave. So the

shape we find will allow us, in theory, to IDENTIFY

tsunami — that is, to distinguish them from all of

the other waves on the ocean.

This model is *only* intended to treat the tsunami


when it is far from shore; things get much more com-

plicated near to shore, when the tsunami becomes

much slower and taller. But that’s ok, after all we

really want to identify tsunami precisely when they

are far away.

We now look for a wave solution. From what we have

seen above, that means that we want a solution of the

form E(x − ct), a wave of fixed shape (given by the

form of the function E) moving to the right at speed

c. Substituting this into the KdV equation we get


(−c + √(gh)) E′ + (3/2)√(g/h) E E′ + (1/6) h²√(gh) E′′′ = 0,

where the prime denotes ordinary differentiation. Let’s


write this as

−2A E ′ + 6B E E ′ + 2C E ′′′ = 0,

where the constants A, B, C are defined in the obvi-

ous way. (For physical values of the parameters they

are all positive.) We can do one integral immediately:

−2A E + 3B E² + 2C E′′ = 0,

where for simplicity I have taken the constant of in-

tegration to be zero (which it won’t be in general).

Using the trick we learned long ago we can now in-

tegrate again, this time with respect to E:

−A E² + B E³ + C (E′)² = K,

where K is another constant of integration. And this

is a separable first order ODE!



A typical solution of this equation is a simple multiple

of the function sech². So our model predicts that a

tsunami should move at a steady speed with that

exact shape, which of course you can easily graph.
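For a quick picture of such a wave, one can plot a sech² profile moving to the right at constant speed. This is a sketch only: the amplitude, width and speed below are illustrative choices, not values derived from the KdV coefficients.

import numpy as np
import matplotlib.pyplot as plt

a, w, c = 1.0, 10.0, 2.0                      # illustrative amplitude, width, speed
E = lambda s: a / np.cosh(s / w) ** 2         # sech^2 profile

x = np.linspace(-100, 100, 1000)
for t in (0.0, 10.0, 20.0):
    plt.plot(x, E(x - c * t), label=f"t = {t}")
plt.xlabel("x"); plt.ylabel("elevation"); plt.legend()
plt.show()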

Of course, this model is far too simple, but we can

learn something from it. We saw that the *linear*

wave equation can represent a wave of ANY shape.

In other words, a model based on the linear wave

equation would tell us nothing about the shape of the

wave. But if we use the *non-linear* KdV equation,

we see that the model dictates what the shape MUST

be: when we went looking for a wave-like solution,

we had no choice about the shape. This gives us


hope that a more sophisticated model, also based on

some non-linear PDE, might allow us to predict the

shape of a tsunami, and then we would know how to

distinguish them from other ocean waves of the same

size. Then we can detect them and warn people.

7.3 Heat Equation

Consider the temperature in a long thin bar or wire

of constant cross section and homogeneous material,

which is oriented along the x-axis and is perfectly

insulated laterally, so that heat only flows in x-

direction. Then the temperature u depends only on

x and t and is given by the one-dimensional heat


equation

ut = c² uxx,        (4)

where c² is a positive constant called the thermal diffusivity [units length²/time]. It measures how quickly

heat moves through the bar, and depends on what it

is made of.

7.3.1 Zero temperature at ends of rod

Let’s assume that the ends x = 0 and x = L of the

bar are kept at temperature zero, so that we have the

boundary conditions

u(0, t) = 0, u(L, t) = 0 for all t (5)

and the initial temperature of the bar is f (x), so that


we have the initial condition

u(x, 0) = f (x). (6)

Here we will assume that, when f(x) is extended to

be an odd function, it is then equal to its Fourier sine

series everywhere. [Remember that this can happen,

even if f(x) is discontinuous at some points.] We

call the p.d.e. (4) together with the conditions (5) and (6) a boundary value heat equation problem.

Notice that, unlike the wave equation, which needs

four pieces of data, here we only need three, which

matches the fact that the heat equation only involves

a total of three derivatives [two in the spatial direc-

tion, but only one in the time direction].


The Heat Equation is particularly useful in modelling

for the following reason. Think of an ordinary func-

tion, g(x). We can think of its second derivative,

g ′′(x), as a measure of the extent to which its graph

is not a straight line (the second derivative is zero ev-

erywhere if and only if g(x) is a linear function). We

say that g ′′(x) measures the curvature of the graph.

Now the Heat Equation says that the second spatial

derivative of u is equal to its time derivative. So as

time goes by, if the graph of u as a function of x is

concave up, then u will increase; whereas if the graph

is concave down, then it tends to decrease. The ef-

fect in both cases is to reduce the curvature. So we


can picture the equation as something that, given an

initial shape described by f (x), tries to “straighten

it out”. And of course that is how we expect heat

to behave! That is, it flows from hot places to cold,

trying to even out its distribution (in a theoretically

irreversible way — notice that the equation is NOT

invariant under a reversal of time, as the Wave Equa-

tion is).

This is a very useful property, because it applies to

many things other than heat. For example, animals

introduced into a confined area at some point will

normally spread out away from that point and fill up

the space more or less uniformly. The “Heat” Equa-


tion applies to ANYTHING THAT TENDS TO DIF-

FUSE, and so it can be used to model ANYTHING

that has a tendency to spread out! So please don’t

think of the “Heat” equation as something that be-

longs only to physics.

7.3.2 Example

Solve

ut = 2uxx, 0 < x < 3, t > 0,

given boundary conditions u(0, t) = 0, u(3, t) = 0,

and initial condition u(x, 0) = 5 sin 4πx.

Solution:

We use the method of separation of variables. Let


u(x, t) = X(x)T (t). Then ut = 2uxx gives

XT ′ = 2X ′′T,

or equivalently
X′′/X = T′/(2T).
Each side must be a constant k. So

X ′′ − kX = 0 (A)

T ′ − 2kT = 0 (B)

The solutions of (A) are, as in the cases we studied

earlier, of three types; but clearly we want X(x) to

vanish at TWO values of x [the two ends of the bar].

Of course, hyperbolic sine [k positive] and straight-

line [k = 0] functions cannot do that. So we have to

choose k to be negative, so as to get trigonometric



functions which can vanish at two values of x. So we

have
X(x) = a cos(√(−k) x) + b sin(√(−k) x).        (7)

The solutions of (B) are

T(t) = d e^(2kt)

for the same negative value of k as in (A).

So now we have a simple solution of the heat equa-

tion, by just multiplying X(x) into T (t).

Now we just have to use the boundary conditions.

They can be written as

X(0)T (t) = 0 and X(3)T (t) = 0.

Now since T(t) = d e^(2kt) ≠ 0 for any t, we conclude

that X(0) = 0 and X(3) = 0.



Substituting x = 0 and 3 separately into our expres-

sion for X(x), we get

X(0) = a = 0, and X(3) = a cos(3√(−k)) + b sin(3√(−k)) = 0.

Solving these two equations, we get a = 0 and b sin(3√(−k)) = 0.

Since we do not want a and b both zero, this implies sin(3√(−k)) = 0, which gives √(−k) = nπ/3, i.e. k = −n²π²/9, where n = 0, 1, 2, . . .

Putting this back into our general solution and ab-

sorbing d into b [since the product of arbitrary con-


stants is another arbitrary constant] we have

un(x, t) = bn e^(−2n²π²t/9) sin(nπx/3)        (HSn)

is a solution for each n = 1, 2, 3, . . ..

We have used up two of our pieces of information.

But one remains: we have to satisfy

u(x, 0) = 5 sin 4πx.

We want to construct a solution from among (HSn)

that satisfies this initial condition.

Now substituting t = 0 into (HSn) for any n,


un(x, 0) = bn sin(nπx/3).
If we take n = 12 and b12 = 5, we have

u12(x, 0) = 5 sin(12πx/3) = 5 sin 4πx

Hence, the particular solution that will also satisfy

the initial condition is

u12(x, t) = 5 e^(−32π²t) sin 4πx.

Now of course this particular example is very simple,

because u(x, 0) = 5 sin 4πx has a Fourier sine series

with only one term! In general, u(x, 0) has an infinite

Fourier sine series. But that is no problem, to get the

time dependence we just have to multiply each term

in the series by a suitable exponential function. The

sum will then give us the full solution.

SUMMARY

In this example, the thermal diffusivity c² was 2 and


the length of the sample was 3. But you can easily

work through the above discussion for the general

case, where the thermal diffusivity is any value c²

and the length of the sample is L. Then you will find

that the solution of

ut = c² uxx,

u(0, t) = u(L, t) = 0, u(x, 0) = f (x),

is just

u(x, t) = ∑_{n=1}^{∞} bn sin(nπx/L) exp(−π²n²c²t/L²),

where the bn are just the Fourier sine coefficients of

f(x) [regarded as an odd periodic function of period

2L.]
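That general formula is easy to evaluate numerically. The following sketch truncates the series (our choice of cutoff) and re-uses the worked example above, with c² = 2, L = 3 and u(x, 0) = 5 sin 4πx, as a check.

import numpy as np

def heat_solution(f, c2, L, n_max=50, n_grid=2000):
    # truncated series u(x, t) = sum_n bn sin(n pi x / L) exp(-pi^2 n^2 c2 t / L^2)
    dx = L / n_grid
    xg = (np.arange(n_grid) + 0.5) * dx
    bn = np.array([(2.0 / L) * np.sum(f(xg) * np.sin(n * np.pi * xg / L)) * dx
                   for n in range(1, n_max + 1)])
    def u(x, t):
        n = np.arange(1, n_max + 1)[:, None]
        decay = np.exp(-np.pi**2 * n**2 * c2 * t / L**2)
        return np.sum(bn[:, None] * np.sin(n * np.pi * x / L) * decay, axis=0)
    return u

u = heat_solution(lambda x: 5 * np.sin(4 * np.pi * x), c2=2.0, L=3.0)
xs = np.linspace(0.0, 3.0, 300)
exact = 5 * np.exp(-32 * np.pi**2 * 0.01) * np.sin(4 * np.pi * xs)
print(np.max(np.abs(u(xs, 0.01) - exact)))   # agrees with the particular solution found above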


7.4 Fisher’s Equation

Life on dry land took a long time to evolve: ani-

mals and plants had lived in the sea for hundreds of

millions of years before that happened, roughly 450

million years ago. Of course it must have started

along the sea shore, that is, along a line. There must

have been some kind of marine plant growing along

the shore line; a mutation occurred (helped by the

extreme exposure to sunlight) which made one of

them, at some particular time and place, better able

to tolerate drying out. The descendants of that in-

dividual had a tremendous advantage over the non-

mutated neighbours, because sometimes there is a


succession of exceptionally low tides which leave the

plants dry for a long time. So they would have out-

competed their neighbours, and the mutation would

have spread along the shoreline like a wave. Even-

tually the result would be a plant that could survive

out of the water full-time.

The process of spreading along the shoreline is clearly

irreversible, so we need an equation like the heat

equation, not the wave equation: we need a “heat

equation with a wave-like solution”! On the other

hand, we don’t want the effect to go away, like the

temperature going down as heat dissipates. What we

need is a combination of the Heat Equation with


our model of the spread of a rumour [Tutorial 2].

In 1937 the great RA Fisher proposed the following

equation to model this situation:

ut = αuxx + β u (1 − u) ,

where u(x, t) is the fraction of the plants at any given

place and time which have mutated (so 1 − u(x, t) is

the fraction which haven’t). This is indeed a combi-

nation of the Heat Equation with the rumour equa-

tion! The constant α tells you how quickly the muta-

tion tends to spread in space, while β measures how

quickly it grows in time at a specific point in space.

(They have different units of course).

This is a fairly formidable object, a non-linear par-



tial differential equation, and finding all of its solu-

tions is very difficult. But it is important because

it has many other applications, for example to the

theory of how flames move and to the theory of how

nuclear reactors work.

What we should do of course is to follow what we did

with the Heat Equation, that is, specify some initial

function f (x) = u(x, 0) and then try to evolve it

forward in time. In this case a good model for f (x)

would be a delta function. This problem has been

studied, and there are interesting existence theorems

etc, but because of the non-linearity it is very hard

to find out, in this way, whether there is a solution


of the Fisher equation that really behaves in the way

we want.

We can however do this in a different way, as follows.

We want a wave solution. From our study of the

wave equation (in this Chapter!), we know what that

means: we seek a solution of the form

u(x, t) = U(x − ct).

This will be a wave, with a shape given by the func-

tion U (s), s ≡ x−ct, moving to the right at constant

speed c, starting at x = 0. (The wave moving to the

left can of course be found by using x + ct instead; so

the left x axis has to be treated separately, and the

two results joined up at the end of the analysis.) So


U (s) is a function of a single variable, s. Note that as

x → ∞, we also have s → ∞; BUT as t → ∞, we

have INSTEAD s → −∞, an important difference.

Now uxx = U ′′, where the primes denote differenti-

ation with respect to s; also we have ut = − c U ′.

Thus in this special case Fisher’s equation reduces

to an ORDINARY (but still non-linear!) differential

equation,

αU′′ + cU′ + βU − βU² = 0.

But having absorbed Chapter 5 of this course, we

know what to do: we turn this into a pair of (still

non-linear) first-order equations:

U′ = V,

V′ = −(c/α) V − (β/α) U + (β/α) U².

This system has two equilibria: (U, V ) = (0, 0) and

(U, V ) = (1, 0). The meanings of these two equilibria

are as follows.

If U (s) = 0 for all s = x − ct, that means that,

setting t = t0 for any time t0, u(x, t0) = 0. In other

words, this equilibrium corresponds to u being a flat,

zero graph as a function of x (at any time). That

is, the mutation does not occur. Similarly U (s) = 1

means that u is a flat graph equal to 1 for all x, at

any time: the mutation has completely taken over.


The Jacobian is

J(U, V) = [ 0                1
            (β/α)(2U − 1)   −c/α ].

At (1, 0) this is

J(1, 0) = [ 0     1
            β/α  −c/α ],

which is always a saddle. At (0, 0) it is

J(0, 0) = [ 0      1
            −β/α  −c/α ],

which is a SPIRAL SINK if c < 2√(αβ). It’s easy

to see that a spiral sink at the origin will inevitably

lead to negative values for U , which does not make

sense. So we reject this option, and insist that

c ≥ 2√(αβ).

That is, the model predicts that the wave cannot

move more slowly than this, and this gives us a pos-

sible way of verifying it.
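A numerical look at this phase portrait supports the picture. The sketch below is not part of the notes: α, β and the value of c are illustrative choices, and we start just off the saddle (1, 0) along its outgoing direction.

import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 1.0, 1.0
c = 2.5 * np.sqrt(alpha * beta)              # any c >= 2*sqrt(alpha*beta)

def rhs(s, y):
    # the first-order system for the travelling-wave profile U(s), V = U'
    U, V = y
    return [V, -(c / alpha) * V - (beta / alpha) * U + (beta / alpha) * U**2]

sol = solve_ivp(rhs, (0.0, 60.0), [1.0 - 1e-6, -1e-6],
                dense_output=True, rtol=1e-9)

s = np.linspace(0.0, 60.0, 7)
print(sol.sol(s)[0])   # U should fall from ~1 towards 0 and stay positive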


In this second case, (0, 0) is a NODAL SINK[4], so U

doesn’t have to be negative (though V does, but that

is acceptable). In fact it is clear from the phase dia-

gram (Wolfram “stream plot(y, −2.3y + x^2 − x), x = −0.5..1.5, y = −0.3..0.3”) that there is a unique trajec-

tory connecting the two equilibria, flowing out of

the saddle and into the nodal sink. (You should

also Wolframize “stream plot(y, −y + x^2 − x), x = −0.5..1.5, y = −0.3..0.3” to see that there is indeed a spiral sink at the origin when c = 1 × √(αβ). Note

that U clearly goes negative in this case.) Notice that

trajectories coming out of the saddle in the direction


[4] When c = 2√(αβ), we have something called a degenerate node. It behaves very much like a
node (there is only one real eigenvector instead of two, but still, the point is that it is indeed real,
so no complex numbers are involved!), in the sense that there is no spiral behaviour; so this case is
ok too, hence the ≥ instead of >. Of course I don’t believe in = signs, this is just a parting gift for
those of you who do.


of increasing U do not make any sense, because ob-

viously U ≤ 1.

Now remember that “time” in this phase diagram is

really s, not t. That is, when s → −∞ we are at

the saddle (1, 0), and when s → +∞ we are at the

node (0, 0). This means, as discussed earlier, that

when t → −∞, we are at the node (0, 0) (at the

origin, the equilibrium corresponding to a flat graph

equal to zero), and then as time goes by (t → +∞)

we move towards the saddle (1, 0) (the equilibrium

corresponding to a flat graph equal to 1). The wave

has moved out, converting the flat zero graph to a

flat graph equal to 1. Which is what we wanted!


The mutation propagates outward from the initial

spot until it takes over the whole shoreline.

We can see how this happens in a bit more detail by

fixing t = t0 at some time and asking what happens

when x is a large negative number. Then s is a large

negative number, so we are somewhere near the sad-

dle (1, 0): in other words, far to the left, the wave

looks like a flat graph equal to 1. By the same logic,

far to the right it must look flat and near to zero,

since that corresponds to s → ∞, moving closer to

the node at the origin (0, 0). (In between it has to

interpolate somehow.) As time goes by and the wave

moves, this part of the graph moves to the right, eat-


ing up everything in its path and forcing more and

more of the graph of u to be a flat graph equal to 1. A

similar thing is happening as the other wave (the one

described by x + ct) moves to the left. So eventually

everything gets eaten and the whole shoreline is cov-

ered by the mutants. Try Graphmatica y = (1/(1 + e^(5(x−n)))^2)*step(x) + (1/(1 + e^(5(−x−n)))^2)*step(−x)

for n = 1, 2, 3, 4, 5 to get an idea of the pattern.

The actual shape of the wave is given by solving

αU′′ + cU′ + βU − βU² = 0 and fixing time.

(We know what it looks like far to the left, but not

the exact shape to the right, except that it must be

asymptotic to zero, as explained above. Note that we


would have to solve this equation as a boundary value

problem, demanding U (−∞) = 1, U (+∞) = 0, and

not as an initial value problem, and this is diffi-

cult.) To see what it looks like, Wolframize “plot 1/(1 + e^(x/√6))^2” (this is an actual solution of the ODE for α = β = 1 and a certain special choice of c = 5/√6 ≈ 2.04 > 2). Now imagine that thing

moving to the right, and you get the picture.

Notice that, for given α and β (they are fixed by bi-

ological data on the specific kind of plant), the shape

of the wave is fixed by its speed (because the speed,

c, is one of the coefficients in the equation for U );

this is very different from the Wave Equation, where


the shape could be prescribed arbitrarily. (This is

typical of wave solutions of non-linear PDEs; we saw

the same thing in the case of the Korteweg-de Vries

equation.) In principle this gives us another way of

checking Fisher’s model.

A final observation: in Fisher’s model, the mutated

organism is so superior that it completely wipes out

its predecessor. But suppose that it is less success-

ful: that it eventually constitutes only a fraction

0 < γ ≤ 1 of the total population at each point

when it establishes itself. This can be modelled by

an equation of the form

ut = αuxx + β u (γ − u) .


We could go through the full analysis all over again,

but instead here is a trick: define a new variable

w(x, t) ≡ u(x, t)/γ. Then dividing the equation

throughout by γ, we find

wt = αwxx + βγ w (1 − w) ,

and this is Fisher’s equation again! The only dif-

ference is that (because of the non-linearity) the

parameter β has been replaced by βγ. Otherwise

everything we have said remains: there is a wave of

w moving out from the point of the initial mutation,

meaning that regions of w = 0 will be eaten up by

regions of w = 1 (that is, u = γ ≤ 1); the wave moves at a minimum speed of 2√(αβγ), that is, more slowly than before by a factor of √γ.

7.5 The Diffusion of Lions

Suppose you are running a National Park in Botswana.

The park is roughly square in shape, with three sides

on land, but the fourth, northern side abutting a wide

river (too wide for any land animal to swim). You

want to study the spatial distribution of lions in the

park.

You force the animals to stay away from the land

boundaries, for their own safety, but you don’t need

to do that with the river. In fact the lions like to stay

near the river, where there is plenty of prey all the

time. The members of the most dominant lion pride


distribute themselves along the river and stay near

there, but the less dominant lions are driven away

by the dominant ones, and they wander out to fill

up the main area of the park. Lion prides don’t get

along very well with other prides; they have a natu-

ral tendency to disperse: they won’t clump together

unnecessarily. (I mean that they DO clump together

within each pride, but not with other prides.)

The density of lions is in general a function u(t, x, y)

of time *and* of two dimensions of space. (No flying

lions please.)

Let’s assume that we know the density along the

boundaries of the park (zero on three sides, some


definite function of position (determined by the rela-

tionships within the dominant pride) along the river).

Our objective is to predict the density inside the park.

We can model the diffusion of the less dominant lions

into the interior of the park by imagining that the

density is like a temperature. So we construct the

model by using the Heat equation. But now we need

to consider *two* dimensions of space, say x and y,

so the Heat equation is

ut = c² (uxx + uyy).        (8)

Let’s focus on the situation where everything has set-

tled down to a steady state, so u(t, x, y) is no longer

a function of time. So now we think of it as a func-



tion of the spatial coordinates only, u(x, y). In this

case the Heat equation becomes another very famous

PDE, the Laplace equation,

uxx + uyy = 0.

Let’s set up a model of our National Park, as a square

domain in the plane, say 0 ≤ x ≤ π, 0 ≤ y ≤ π. We

need four boundary conditions because there are four

derivatives in the Laplace equation. Let’s take them,

for example, to be u(x,0) = u(0,y) = u(π,y) = 0, and

u(x,π) = f(x) for some well-behaved function f(x);

so the function u(x,y) vanishes along three sides of

the square, but it can be some complicated function

along one side [the top of the square in this case]. So



the function f(x) describes the density of lions along

the border with the river, which corresponds to the

top of the square.

We can separate the variables in this case too: put

u(x,y) = X(x)Y(y) and get

X ′′Y + XY ′′ = 0.

Separating and using the usual argument we get

−X′′/X = λ = Y′′/Y,

where λ is a constant. So

X ′′ = −λX, Y ′′ = +λY.

Since u(0,y) = u(π,y) = 0, we want X(0) = X(π) =

0, and as in the cases of the wave and heat equations,


this means that we need trigonometric solutions for

X(x); thus we want λ to be positive [otherwise the

equation for X(x) will have exponential or linear so-

lutions which don’t cut the x axis twice]. We set λ

= n² and so X(x) = sin(nx). Then the equation for

Y(y) [which should also satisfy Y(0) = 0 since u(x,0)

= 0] has solutions of the form Y(y) = cn sinh(ny) for

constants cn. The linearity of the Laplace equation

then gives us


u(x, y) = ∑_{n=1}^{∞} cn sin(nx) sinh(ny).

Putting u(x,π) = f(x) we have




f(x) = ∑_{n=1}^{∞} cn sin(nx) sinh(nπ),

so we see that the numbers cn sinh(nπ) are the Fourier



sine coefficients of the odd extension of f(x); in other

words,
cn sinh(nπ) = (2/π) ∫_0^π f(x) sin(nx) dx.

Since sinh(nπ), n ≥ 1, is never zero, it can be taken

to the right side of this equation, and so finally we

get
cn = [ (2/π) ∫_0^π f(x) sin(nx) dx ] / sinh(nπ).

By the way, what would we do if u(x,y) were non-

zero only on the *bottom* of the square, instead of

the top? (This means that the river runs along the

southern edge of the park.) That is, what do we do

if the boundary conditions are u(x,0) = f(x) [along

the bottom of the square], u(0,y) = u(π,y) = 0, and



u(x,π) = 0 [on the top of the square]? Then nothing

changes until we get to solving for the function Y(y).

Now we don’t want Y(0) = 0 any more, instead we

want Y(π) = 0. So we use the following trick: set

Y(y) = cn sinh[n(π−y)]. You can check that this still

satisfies the differential equation for Y(y), and that

indeed it does satisfy Y(π) = 0! Now we proceed just

as before:


u(x, y) = ∑_{n=1}^{∞} cn sin(nx) sinh[n(π − y)].

Putting u(x, 0) = f(x) we have




f(x) = ∑_{n=1}^{∞} cn sin(nx) sinh(nπ),

so we see that the numbers cn sinh(nπ) are, just as be-

fore, the Fourier sine coefficients of the odd extension



of f(x); in other words, we still have



cn = [ (2/π) ∫_0^π f(x) sin(nx) dx ] / sinh(nπ).

Similarly you can handle the situations where the

boundary conditions make u(x,y) non-zero only along

one of the vertical sides of the square.

Going back to the case where there are lions at the

top: let’s consider for example f(x) = sin(x) + 0.2sin(4x).

(This is designed so that it vanishes at 0 and π but is

positive everywhere in between, as a density of lions

should be.) Then of course the Fourier sine series is

just sin(x) + 0.2sin(4x), so

u(x, y) = c1 sin(x) sinh(y) + c4 sin(4x) sinh(4y)

and of course c1 = 1/ sinh(π), c4 = 0.2/ sinh(4π) so



the lions are distributed through the park according

to

u(x, y) = sin(x) sinh(y)/sinh(π) + 0.2 sin(4x) sinh(4y)/sinh(4π),

which can be graphed. The point is that we now

know where *all* of our lions are, starting only with

their distribution along the borders of the park. Of

course you can consider more complicated situations,

more complicated f (x), two non-trivial borders, etc

etc etc!

GOODBYE, GOOD LUCK, AND THANKS!!!

