Laplace's Equation

∂²u/∂x² + ∂²u/∂y² = 0    0 ≤ x ≤ L, 0 ≤ y ≤ H
u(0,y) = u(L,y) = u(x,0) = 0    u(x,H) = u_N(x) [1]
We seek a separation of variables solution of the form

u(x,y) = X(x)Y(y) [2]

We do not know, in advance, if this solution will work. However, we assume that
it will and we substitute it for u in equation [1]. Since X(x) is a function of x only
and Y(y) is a function of y only, we obtain the following result when we substitute
equation [2] into equation [1].
∂²u/∂x² + ∂²u/∂y² = ∂²[X(x)Y(y)]/∂x² + ∂²[X(x)Y(y)]/∂y² = Y(y) d²X(x)/dx² + X(x) d²Y(y)/dy² = 0 [3]
If we divide the final equation through by the product X(x)Y(y), and move the y
derivative to the other side of the equal sign, we obtain the following result.
[1/X(x)] d²X(x)/dx² = −[1/Y(y)] d²Y(y)/dy² [4]
The left hand side of equation [4] is a function of x only; the right hand side is a
function of y only. The only way that this can be correct is if both sides equal a
constant. This also shows that the separation of variables solution works. In
order to simplify the solution, we choose the constant¹ to be equal to −λ². This
gives us two ordinary differential equations to solve.
[1/X(x)] d²X(x)/dx² = −λ²    −[1/Y(y)] d²Y(y)/dy² = −λ² [5]
Equation [5] shows that we have two separate differential equations, each of
which has a known general solution. These equations and their general
solutions are shown below.
¹ The choice of −λ² for the constant, as opposed to just plain λ, comes from experience. Choosing
the constant to have this form now gives a more convenient result later. If we chose the constant
to be simply λ, we would obtain the same result, but the expression of the constant would be
awkward. In solutions to Laplace's equation in rectangular geometries we will select the constant
as −λ² to give us an ordinary differential equation whose solution results in sines and cosines in
the direction for which we have a Sturm-Liouville problem (with homogeneous boundary
conditions).
² As usual, you can confirm that this solution satisfies the differential equation by substituting the
solution into the differential equation.
d²X(x)/dx² + λ²X(x) = 0 ⇒ X(x) = A sin(λx) + B cos(λx) [6]

d²Y(y)/dy² − λ²Y(y) = 0 ⇒ Y(y) = C sinh(λy) + D cosh(λy) [7]
From the solutions in equations [6] and [7], we can write the general solution for
u(x,y) = X(x)Y(y) as follows.

u(x,y) = [A sin(λx) + B cos(λx)][C sinh(λy) + D cosh(λy)] [8]

We now apply the boundary conditions shown with the original equation [1] to
evaluate the constants A, B, C, and D. If we substitute the boundary condition at
x = 0 into equation [8], we get the following result.

u(0,y) = [A sin(0) + B cos(0)][C sinh(λy) + D cosh(λy)] = B[C sinh(λy) + D cosh(λy)] = 0 [9]

Because sin(0) = 0 and cos(0) = 1, equation [9] will be satisfied for all y only if B
= 0. Thus, we set B = 0. Next we apply the solution in equation [8] (with B = 0)
to the boundary condition at y = 0.

u(x,0) = A sin(λx)[C sinh(0) + D cosh(0)] = AD sin(λx) = 0 [10]

Since sinh(0) = 0 and cosh(0) = 1, this boundary condition will be satisfied only if
D = 0. The third boundary condition is that u = 0 at x = L. At this point we
have the following result, using the solution in [8] with B = 0 and D = 0 as found
previously.

u(L,y) = AC sin(λL) sinh(λy) = 0 [11]
Equation [11] can only be satisfied if the sine term is zero. This will be true only if
λL is an integer multiple of π. If n denotes an integer, we must have

λL = nπ  or  λ = nπ/L [12]
Since any integral value of n gives a solution to the original differential equations,
with the boundary conditions that u = 0 at both boundaries, the most general
solution is one that is a sum of all possible solutions, each multiplied by a
different constant. In the general solution for one value of n, which we can now
write as A sin(λ_n x) C sinh(λ_n y), with λ_n = nπ/L, we can write the product of two
constants, AC, as the single constant, C_n, which may be different for each value
of n. The general solution, which is a sum of all solutions with different values of
n, is written as follows.
u(x,y) = Σ_{n=1}^∞ C_n sin(λ_n x) sinh(λ_n y)    with λ_n = nπ/L [13]
The final boundary condition, the one at y = H, states that u(x,H) is a given
function of x, uN(x). Setting u(x,H) = uN(x) in equation [13] gives the following
result.
u(x,H) = u_N(x) = Σ_{n=1}^∞ C_n sin(nπx/L) sinh(nπH/L) [14]
In reviewing the solution for u(x,y), we see that the differential equation for X(x) is
a Sturm-Liouville problem. This is a result of both the form of the differential
equation for X(x) and the boundary conditions for X(x), implied by the boundary
conditions that u(0,y) = 0 and u(L,y) = 0. These boundary conditions can be
satisfied only if X(0) = 0 and X(L) = 0, giving the homogeneous boundary
conditions required for a Sturm-Liouville problem.
Because we have a Sturm-Liouville problem for X(x), we know that the solutions
sin(nπx/L) form a complete orthogonal set over the region 0 ≤ x ≤ L. We can use
this fact in equation [14] to get a solution for C_m. If we multiply both sides by
sin(mπx/L), where m is another integer, and integrate from a lower limit of zero to
an upper limit of L, we get the following result.
∫₀ᴸ u_N(x) sin(mπx/L) dx = ∫₀ᴸ Σ_{n=1}^∞ C_n sin(mπx/L) sin(nπx/L) sinh(nπH/L) dx
= Σ_{n=1}^∞ ∫₀ᴸ C_n sin(mπx/L) sin(nπx/L) sinh(nπH/L) dx = C_m sinh(mπH/L) ∫₀ᴸ sin²(mπx/L) dx [15]
In the second row of equation [15] we can reverse the order of summation and
integration because these operations commute. We then recognize that the
integrals in the summation all vanish unless m = n, leaving only the sine-squared
integral to evaluate. Solving for C_m and evaluating the last integral³ in equation
[15] gives the following result.
³ Using a standard integral table, and the fact that the sine of zero and the sine of an integer
multiple of 2π are both zero, we have

∫₀ᴸ sin²(mπx/L) dx = [x/2 − (L/4mπ) sin(2mπx/L)]₀ᴸ = L/2 − (L/4mπ) sin(2mπ) = L/2
C_m = ∫₀ᴸ u_N(x) sin(mπx/L) dx / [sinh(mπH/L) ∫₀ᴸ sin²(mπx/L) dx] = 2 ∫₀ᴸ u_N(x) sin(mπx/L) dx / [L sinh(mπH/L)] [16]
For any boundary function u_N(x), then, we can perform the integral on the right hand side
of equation [16] to compute the values of C_m and substitute the result into
equation [13] to compute u(x,y). For example, consider the simplest case where
u_N(x) = U_N, a constant. In this case we find C_m from the usual equation for the
integral of sin(ax).
sinh(mπH/L) C_m = (2/L) ∫₀ᴸ u_N(x) sin(mπx/L) dx = (2U_N/L) ∫₀ᴸ sin(mπx/L) dx = (2U_N/L) [−(L/mπ) cos(mπx/L)]₀ᴸ
= (2U_N/L) [−(L/mπ) cos(mπ) + (L/mπ) cos(0)] = (2U_N/mπ) [1 − cos(mπ)] [17]
C_m = 4U_N / [mπ sinh(mπH/L)]    m odd
C_m = 0    m even [18]
Substituting this result into equation [13] gives the following solution to Laplace's
equation when u(x,H) = u_N(x) = U_N, a constant, and all other boundary
values of u are zero.
u(x,y) = (4U_N/π) Σ_{n=1,3,5,…} sin(λ_n x) sinh(λ_n y) / [n sinh(λ_n H)]    λ_n = nπ/L [19]
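The series in equation [19] is easy to check numerically. The sketch below (plain Python; the values L = H = U_N = 1 are assumptions for illustration) sums the odd-n terms, writing the sinh ratio in exponential form so that neither hyperbolic sine overflows at large n. A symmetry argument (adding the four rotations of this boundary-value problem gives u ≡ U_N) implies the value at the plate center is exactly U_N/4.

```python
import math

L = H = 1.0   # plate dimensions (assumed for illustration)
U_N = 1.0     # constant boundary value at y = H

def u(x, y, n_terms=50000):
    """Partial sum of the series solution in equation [19] at (x, y)."""
    total = 0.0
    for k in range(n_terms):
        n = 2 * k + 1                      # odd n only
        lam = n * math.pi / L
        # sinh(lam*y)/sinh(lam*H) rewritten with exponentials so that
        # neither sinh overflows for large n
        ratio = (math.exp(lam * (y - H)) - math.exp(-lam * (y + H))) / (1.0 - math.exp(-2.0 * lam * H))
        total += math.sin(lam * x) * ratio / n
    return 4.0 * U_N / math.pi * total

print(u(0.5, 0.5))   # value at the center; symmetry implies exactly U_N/4
```

The series converges slowly on the y = H boundary itself (it is a Fourier square-wave there), which is why many terms are used.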
If we substitute the equation for λ_n into the summation terms we get the following
result.
u(x,y) = (4U_N/π) Σ_{n=1,3,5,…} sin(nπx/L) sinh(nπy/L) / [n sinh(nπH/L)] [20]
If we compare this equation to equation [19] in the notes on the solution of the
diffusion equation, we see that the sine terms are the same. The exponential
time term, e^(−n²π²αt/x_max²), in the diffusion equation has been replaced by the
hyperbolic sine terms in equation [20]. This similarity arises from the fact that we
have the same ordinary differential equation, with the same zero boundary
conditions, for the variable X(x) in each problem. This is an important feature of
separation of variables solutions. The contribution to the solution from a term like
∂²u/∂x² will be the same in different partial differential equations where the
separation of variables solution works and the boundary conditions for that
term are the same.
Equation [20] shows that the ratio u(x,y)/U_N depends on the dimensionless
parameters x/L, y/L, and H/L. Thus we can compute universal plots of the
solution for different values of H/L. A contour plot of the solution u(x,y) where
U_N = 1 and H/L = 1 is shown in Figure 1 below.
Figure 1. MATLAB Plot of Laplace Equation Solution
∂²u/∂x² + ∂²u/∂y² = 0    0 ≤ x ≤ L, 0 ≤ y ≤ H
u(0,y) = u_W(y)    u(L,y) = u_E(y)    u(x,0) = u_S(x)    u(x,H) = u_N(x) [21]
(The subscripts N, S, E, and W refer to the north, south, east, and west sides of
the rectangular region.) To solve this general problem, we solve four separate
problems like we just did for equation [1] for four separate potentials, u₁(x,y),
u₂(x,y), u₃(x,y), and u₄(x,y). The desired result is the sum of these potentials.
The solution posed in equation [22] with the boundary conditions in equation [23]
is a complete solution to the differential equation and boundary conditions in
equation [21]. Since each function u_i(x,y) satisfies the homogeneous differential
equation, the sum of these four functions also satisfies the differential equation.
Furthermore, equation [24] shows that the boundary conditions on the individual
u_i(x,y) solutions give the following boundary conditions on u(x,y).
Thus, the solution proposed in equation [22], with the boundary conditions in
equation [23] satisfies the differential equation and the boundary conditions of the
original problem in equation [21].
The solution for u1(x,y) is the one found above and is given by equation [13].
u₁(x,y) = Σ_{n=1}^∞ C_n⁽¹⁾ sin(nπx/L) sinh(nπy/L) [25]
C_n⁽¹⁾ = 2 ∫₀ᴸ u_N(x) sin(nπx/L) dx / [L sinh(nπH/L)] [26]
The solution for u3(x,y) is similar to the one for u1(x,y) except that the roles of the
x and y coordinates, and the corresponding maximum lengths, H and L, are
reversed. Exchanging x and y and reversing H and L in equations [25] and [26]
give us the solution for u3(x,y).
u₃(x,y) = Σ_{n=1}^∞ C_n⁽³⁾ sin(nπy/H) sinh(nπx/H) [27]
C_n⁽³⁾ = 2 ∫₀ᴴ u_W(y) sin(nπy/H) dy / [H sinh(nπL/H)] [28]
The x dependence of the solution for u2(x,y) will be the same as in the solution
for u1(x,y). This holds because the solution obtained by separation of variables is
a product of two separate solutions X(x) and Y(y) and the x boundary conditions
for u2(x,y) are the same as those for u1(x,y). If we make a coordinate
transformation to a new coordinate, y’ = H – y, the boundary conditions in the y’
coordinate system for u2 will have the same form as those for u1 in the y
coordinate system. Furthermore, the differential equation has only the second
derivative in y. We can find the second derivative for the new y’ coordinate as
follows.
∂u/∂y = (∂y'/∂y)(∂u/∂y') = (−1) ∂u/∂y'    and    ∂²u/∂y² = (∂y'/∂y) ∂/∂y' (∂u/∂y) = (−1) ∂/∂y' (−∂u/∂y') = ∂²u/∂(y')² [29]
Thus Laplace's equation has the same form, ∂²u/∂x² + ∂²u/∂(y')² = 0, for both the y and the
y' coordinate systems. This means that the solution for u₂ in the y' coordinate
system will be the same as u1 in the y coordinate system. Thus we can take the
solution for u1 in equations [25] and [26] and replace y in that solution by y’ = H –
y to get the correct solution for u2. (We also replace the old boundary condition
for uN by the one for uS.)
u₂(x,y) = Σ_{n=1}^∞ C_n⁽²⁾ sin(nπx/L) sinh(nπ(H−y)/L) [30]
C_n⁽²⁾ = 2 ∫₀ᴸ u_S(x) sin(nπx/L) dx / [L sinh(nπH/L)] [31]
Except for the substitution of the "south" boundary condition, u_S(x), that applies at
y = 0, the equation for C_n⁽²⁾ is the same as that for C_n⁽¹⁾. We can apply the same
coordinate transformation in the x direction and use the same logic to obtain the
solution for u₄ from the solution for u₃, by replacing x by L − x in that solution.
u₄(x,y) = Σ_{n=1}^∞ C_n⁽⁴⁾ sin(nπy/H) sinh(nπ(L−x)/H) [32]
C_n⁽⁴⁾ = 2 ∫₀ᴴ u_E(y) sin(nπy/H) dy / [H sinh(nπL/H)] [33]
These results show how we can obtain a solution with general boundary
conditions, while retaining the ability to develop eigenfunction expansions which
rely upon the existence of zero boundary conditions.
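The superposition recipe can be exercised numerically. The sketch below (plain Python) builds the coefficients of equations [26] and [31] by midpoint quadrature and checks that u₁ + u₂ reproduces the boundary data on both the north and south sides. The values L = H = 1 and the smooth boundary data u_N(x) = sin(πx) and u_S(x) = sin(2πx) are assumptions chosen for illustration, so that the corner values are zero and the series converge quickly.

```python
import math

L = H = 1.0
uN = lambda x: math.sin(math.pi * x / L)        # assumed "north" data at y = H
uS = lambda x: math.sin(2 * math.pi * x / L)    # assumed "south" data at y = 0
N_TERMS = 20

def coef(f, n, samples=2000):
    """Coefficient 2/(L sinh(n pi H/L)) * integral f(x) sin(n pi x/L) dx, eq [26]/[31]."""
    dx = L / samples
    s = sum(f((i + 0.5) * dx) * math.sin(n * math.pi * (i + 0.5) * dx / L)
            for i in range(samples)) * dx      # midpoint quadrature
    return 2.0 * s / (L * math.sinh(n * math.pi * H / L))

C1 = [coef(uN, n) for n in range(1, N_TERMS + 1)]
C2 = [coef(uS, n) for n in range(1, N_TERMS + 1)]

def u(x, y):
    """u1 from equation [25] plus u2 from equation [30]."""
    u1 = sum(C1[n - 1] * math.sin(n * math.pi * x / L) * math.sinh(n * math.pi * y / L)
             for n in range(1, N_TERMS + 1))
    u2 = sum(C2[n - 1] * math.sin(n * math.pi * x / L) * math.sinh(n * math.pi * (H - y) / L)
             for n in range(1, N_TERMS + 1))
    return u1 + u2

print(u(0.3, 1.0), uN(0.3))   # should agree: u2 vanishes at y = H
print(u(0.3, 0.0), uS(0.3))   # should agree: u1 vanishes at y = 0
```

Because each sub-solution vanishes on the other three sides, the sum automatically satisfies all four boundary conditions of the two-sided problem.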
The rules for operations with complex numbers are shown below. In these rules
z₁, z₂, and z₃ are complex numbers defined such that z_k = x_k + i y_k = r_k e^(iθ_k),
and c is a real number.
z₃ = z₁/z₂ ⇒ x₃ = (x₁x₂ + y₁y₂)/(x₂² + y₂²)  and  y₃ = (y₁x₂ − x₁y₂)/(x₂² + y₂²) [45]
z₃ = z₁/z₂ ⇒ r₃ = r₁/r₂  and  θ₃ = θ₁ − θ₂ [46]
We can have some complex variables which are functions of other complex
variables. Such a functional relationship is usually written in terms of the real
part, u(x,y), and the imaginary part, v(x,y), as follows:

f(z) = f(x + iy) = u(x,y) + i v(x,y) [47]
df(z)/dz = lim_{Δz→0} [f(z + Δz) − f(z)]/Δz [48]
If ∂u/∂x = ∂v/∂y and ∂v/∂x = −∂u/∂y, then df/dz = ∂u/∂x + i ∂v/∂x = ∂v/∂y − i ∂u/∂y [49]
∂u/∂x = 2x = ∂v/∂y = 2x    and    ∂v/∂x = 2y = −∂u/∂y = 2y [50]
Thus the Cauchy-Riemann conditions are satisfied and we can write the
derivative df/dz as follows.
df/dz = ∂u/∂x + i ∂v/∂x = 2x + i2y    or    df/dz = ∂v/∂y − i ∂u/∂y = 2x − i(−2y) [51]
Note that this result is simply the result we would have obtained by taking the
derivative of f(z) = z2 directly as df/dz = 2z = 2(x +iy). Functions that satisfy the
Cauchy-Riemann conditions and are continuous are called analytic functions.
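For the example f(z) = z², the Cauchy-Riemann conditions can be verified numerically. The sketch below (plain Python; the step size and test point are assumptions) uses central finite differences on u = x² − y² and v = 2xy and confirms both conditions in equation [49], along with df/dz = 2z.

```python
u = lambda x, y: x * x - y * y   # real part of z^2
v = lambda x, y: 2 * x * y       # imaginary part of z^2
h = 1e-6                          # finite-difference step (assumed)

def d(f, x, y, wrt):
    """Central finite-difference partial derivative of f at (x, y)."""
    if wrt == 'x':
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 0.7, -0.4               # arbitrary test point (assumed)
ux, uy = d(u, x0, y0, 'x'), d(u, x0, y0, 'y')
vx, vy = d(v, x0, y0, 'x'), d(v, x0, y0, 'y')

print(ux - vy, vx + uy)          # Cauchy-Riemann conditions: both near zero
print(ux, vx)                    # df/dz = ux + i*vx, which should equal 2*z0
```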
The Cauchy-Riemann conditions provide the link between complex analysis
and the solutions to Laplace's equation. If we take ∂/∂x of the first condition
and ∂/∂y of the second condition, we have the following results.
∂/∂x [∂u/∂x = ∂v/∂y] ⇒ ∂²u/∂x² = ∂²v/∂x∂y    and    ∂/∂y [∂v/∂x = −∂u/∂y] ⇒ ∂²v/∂y∂x = −∂²u/∂y² [52]
Since ∂²v/∂x∂y = ∂²v/∂y∂x, subtracting the two results in equation [52] shows that u
satisfies Laplace's equation.
[∂²u/∂x² = ∂²v/∂x∂y] − [∂²v/∂y∂x = −∂²u/∂y²] ⇒ ∂²u/∂x² + ∂²u/∂y² = 0 [53]
In a similar way, we could take ∂/∂y of the first Cauchy-Riemann condition and
∂/∂x of the second condition, and use the same manipulations as in equations [52]
and [53] to show that the imaginary part of the analytic function also satisfies
Laplace's equation.
∂²v/∂x² + ∂²v/∂y² = 0 [54]
We can show that these solutions are mutually perpendicular as follows. The
vector gradients of the two components, u and v can be expressed in terms of
the unit vectors in the x and y directions (i and j) as follows.
∇u = (∂u/∂x) i + (∂u/∂y) j    and    ∇v = (∂v/∂x) i + (∂v/∂y) j [55]
We find the angle between these two gradients by taking their dot product.
Recalling that i•i = j•j = 1 and j•i = i•j = 0, we get the following result for the dot
product of these two vectors.
∇u·∇v = (∂u/∂x)(∂v/∂x) i·i + (∂u/∂y)(∂v/∂y) j·j + [(∂u/∂x)(∂v/∂y) + (∂v/∂x)(∂u/∂y)] i·j = (∂u/∂x)(∂v/∂x) + (∂u/∂y)(∂v/∂y) [56]
We can use the Cauchy-Riemann conditions to show that this vector dot product
is zero. Substituting those conditions from equation [49] gives
∇u·∇v = (∂u/∂x)(∂v/∂x) + (∂u/∂y)(∂v/∂y) = (∂v/∂y)(∂v/∂x) − (∂v/∂x)(∂v/∂y) = 0 [56]
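For the example f(z) = z², this orthogonality can be seen directly. The sketch below (plain Python; the sample points are arbitrary assumptions) evaluates the exact gradients of u = x² − y² and v = 2xy and confirms that their dot product vanishes everywhere, as the Cauchy-Riemann substitution above predicts.

```python
def grad_u(x, y):
    return (2 * x, -2 * y)   # exact gradient of u = x^2 - y^2

def grad_v(x, y):
    return (2 * y, 2 * x)    # exact gradient of v = 2xy

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

for (x, y) in [(1.0, 2.0), (-0.5, 0.3), (3.0, -4.0)]:
    print(dot(grad_u(x, y), grad_v(x, y)))   # 4xy - 4xy = 0 at every point
```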
v_x ≡ −∂Φ/∂x    and    v_y ≡ −∂Φ/∂y [55]

∂v_x/∂x + ∂v_y/∂y = 0 [56]
Substituting equations [55] into equation [56] shows that the velocity potential
satisfies Laplace’s equation.
∂²Φ/∂x² + ∂²Φ/∂y² = 0 [57]

∂²Ψ/∂x² + ∂²Ψ/∂y² = 0 [58]
∂Φ(x,y)/∂x = ∂Ψ(x,y)/∂y    and    ∂Ψ(x,y)/∂x = −∂Φ(x,y)/∂y [59]
From these conditions and the definition of the velocity potential, Φ, in equation
[55], we have the following definitions of the velocity components in terms of
gradients of the stream function, Ψ.
v_x ≡ −∂Φ/∂x = −∂Ψ/∂y    and    v_y ≡ −∂Φ/∂y = ∂Ψ/∂x [60]
2. If the boundary has a specified value (Dirichlet problem) that is the same
at all points of the boundary, the solution for the entire region is that
boundary value.
The theory of complex integration is a significant topic in its own right and we do
not have time to cover it here. However these results, which are based on the
treatment of solutions to Laplace’s equation as complex variables, are useful
ones for making a reality check on our solutions. If we find a point in the region
where the solution has a value that is greater than the maximum boundary value
or less than the minimum boundary value we can be sure that we have an error.
In contrast to the uniqueness of the solution to the Dirichlet problem, the solution
to the problem where we specify a gradient at all points of the boundary (the
Neumann problem) is not unique. This is easy to see. If we have a solution, u₁,
that satisfies Laplace's equation and satisfies the gradient boundary conditions,
any other solution u₂ = u₁ + C, where C is a constant, will also satisfy the
differential equation and the gradient boundary conditions.
In addition to the results from complex analysis for Laplace’s equation, we can
also obtain results from vector calculus regarding solutions to Laplace’s equation.
In particular if there is a flux, f, which is found as the gradient of the potential u,
the solution of Laplace’s equation, we can show that the net flow of the flux f
across the boundaries of the region in which Laplace’s equation applies is zero.
This is true for any coordinate system.
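This zero-net-flux property is easy to verify for a concrete harmonic function. The sketch below (plain Python; the choice u = x² − y² on the unit square is an assumption for illustration) integrates the outward normal derivative ∂u/∂n around the boundary by midpoint quadrature. The fluxes through the four sides cancel, as the divergence theorem applied to ∇²u = 0 requires.

```python
N = 1000
h = 1.0 / N

# u = x^2 - y^2 is harmonic; its exact partial derivatives:
dudx = lambda x, y: 2 * x
dudy = lambda x, y: -2 * y

flux = 0.0
for i in range(N):
    t = (i + 0.5) * h                 # midpoint along each side
    flux += dudx(1.0, t) * h          # right side, outward normal +x
    flux += -dudx(0.0, t) * h         # left side, outward normal -x
    flux += dudy(t, 1.0) * h          # top side, outward normal +y
    flux += -dudy(t, 0.0) * h         # bottom side, outward normal -y

print(flux)   # net flux of grad(u) across the whole boundary: zero
```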
Radial geometry
(1/r) ∂/∂r (r ∂u/∂r) + ∂²u/∂z² = 0 [61]
Substituting these results into equation [62] and dividing the result through by the
product P(r)Z(z) gives the following.
[1/P(r)] (1/r) ∂/∂r (r ∂P(r)/∂r) + [1/Z(z)] ∂²Z(z)/∂z² = 0 ⇒ [1/P(r)] (1/r) ∂/∂r (r ∂P(r)/∂r) = −[1/Z(z)] ∂²Z(z)/∂z² [64]
Equation [64] says that a function of r only equals a function of z only. This can
only be true if each term equals a constant. As before, we want to choose the
constant to give us a Sturm-Liouville problem in the direction that has
homogeneous boundary conditions. We set the term with homogeneous boundary
conditions equal to −λ². Then the other term will be equal to +λ². For a solid
cylinder, no boundary condition is required at r = 0. It is sufficient to require that
u remain finite at r = 0 to get a Sturm-Liouville problem. If we have a zero
boundary condition in the radial direction, we can get a solution to the radial
problem in terms of Bessel functions and the solutions in the z direction will be in
terms of hyperbolic sines and cosines. If we have homogeneous boundary
conditions in the z direction, we can obtain eigenfunction solutions in terms of
sines and cosines in this direction and the radial solution will be in terms of
modified Bessel functions.
We start with a simple set of boundary conditions for a solid cylinder: u(R,z) =
u(r,0) = 0 and u(r,H) = u_N(r); this gives a Sturm-Liouville problem in the radial
direction, so we will want to set the radial term equal to −λ²; this requires us to set
the z term to +λ² to satisfy equation [64]. Doing this gives us two ordinary
differential equations to solve.
d²Z(z)/dz² − λ²Z(z) = 0 ⇒ Z(z) = A sinh(λz) + B cosh(λz) [65]
d/dr (r dP(r)/dr) + λ²rP(r) = 0 ⇒ P(r) = C J₀(λr) + D Y₀(λr) [66]
Here J₀ and Y₀ are the Bessel functions of the first and second kind with zero
order.⁴ Thus, our general solution for u(r,z) = P(r)Z(z) becomes

u(r,z) = [A sinh(λz) + B cosh(λz)][C J₀(λr) + D Y₀(λr)] [67]
⁴ Bessel functions, like sines and cosines, are solutions to differential equations. Although Bessel
functions occur less frequently than sines and cosines, they are functions that can be found in
tables or in computer function calculations. The general form of Bessel's equation used for
obtaining series solutions is d²y(x)/dx² + (1/x) dy(x)/dx + [(x² − ν²)/x²] y(x) = 0. This equation can be
As r → 0, Y₀(r) → −∞; to keep the solution finite, we require that D = 0. To satisfy
the condition that u(R,z) = 0, we must satisfy the following equation.

J₀(λR) = 0 [68]

For example, the first five points at which J₀ is zero are α₁₀ = 2.40483, α₂₀ =
5.52008, α₃₀ = 8.65373, α₄₀ = 11.79153, and α₅₀ = 14.93092. There are an
infinite number of such values, and J₀(λ_m r), with λ_m R = α_m0, provides a
complete set of orthogonal eigenfunctions that can be used to represent any
other function over 0 ≤ r ≤ R. These α_mn values are called the zeros of J_n.
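The zeros quoted above are easy to reproduce. The sketch below (plain Python; the term count and brackets are assumptions) evaluates J₀ from its power series, J₀(x) = Σ (−1)ᵏ (x/2)²ᵏ/(k!)², and brackets the first two zeros by bisection. The series form is practical only for moderate arguments, so this is a sketch rather than a production implementation.

```python
def J0(x):
    """Power series for the Bessel function J0; adequate for moderate x."""
    term, total = 1.0, 1.0
    for k in range(1, 60):
        term *= -(x * x / 4.0) / (k * k)   # ratio of successive series terms
        total += term
    return total

def bisect_zero(f, a, b, tol=1e-10):
    """Simple bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

alpha_10 = bisect_zero(J0, 2.0, 3.0)   # expect about 2.40483
alpha_20 = bisect_zero(J0, 5.0, 6.0)   # expect about 5.52008
print(alpha_10, alpha_20)
```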
We can apply the boundary condition that u(r,0) = 0 to equation [67] (with D set
equal to zero to eliminate the Y₀ term) to obtain B = 0, since cosh(0) = 1 while
sinh(0) = 0. The general solution to the problem is the sum of all
eigenvalue solutions, each multiplied by a different constant.
u(r,z) = Σ_{m=1}^∞ C_m sinh(λ_m z) J₀(λ_m r)    λ_m R = α_m0 [71]
We still have to satisfy the boundary condition that u(r,H) = u_N(r). We can do this
by using an eigenfunction expansion. Setting z = H in equation [71] gives the
following equation for the boundary condition at z = H.
transformed into (d/dz)(z dy/dz) + (−ν²/z + k²z) y = 0, whose solution is A J_ν(kz) + B Y_ν(kz). We see
that this second equation has the same form as (∂/∂r)(r ∂P(r)/∂r) + λ²rP(r) = 0, provided that we
set ν = 0. This gives the result above that the solutions are J₀ and Y₀. The factor of r in the
λ²rP(r) term is a weighting factor that must be included in the definition for the inner product of
solutions to the radial portion of the diffusion equation. The derivative formula
d/dx [x^m J_m(x)] = x^m J_{m−1}(x) gives dJ₀(x)/dx = d/dx [x⁰ J₀(x)] = x⁰ J₀₋₁(x) = J₋₁(x) = −J₁(x).
u(r,H) = u_N(r) = Σ_{m=1}^∞ C_m sinh(λ_m H) J₀(λ_m r) [72]
The values of C_m are found from the general equation for orthogonal
eigenfunction expansions, which includes a weighting function. Here the
weighting function p(r) is equal to r. For any boundary function u_N(r), we find the
values of C_m from the following equation. (In the second equation below, the
integral in the denominator has been evaluated using integral tables and the fact
that J₀(λ_m R) = 0.)
C_m = ∫₀ᴿ r J₀(λ_m r) u_N(r) dr / [sinh(λ_m H) ∫₀ᴿ r [J₀(λ_m r)]² dr] = ∫₀ᴿ r J₀(λ_m r) u_N(r) dr / [sinh(λ_m H) (R²/2) [J₁(λ_m R)]²] [73]
If we consider the simple case where u_N(r) is a constant equal to U, we have the
following result for C_m.
C_m = ∫₀ᴿ r J₀(λ_m r) U dr / [sinh(λ_m H) (R²/2) [J₁(λ_m R)]²] = 2U [r J₁(λ_m r)/λ_m]₀ᴿ / [R² sinh(λ_m H) [J₁(λ_m R)]²]
= 2U R J₁(λ_m R) / [λ_m R² sinh(λ_m H) [J₁(λ_m R)]²] = 2U / [λ_m R J₁(λ_m R) sinh(λ_m H)] = 2U / [α_m0 J₁(α_m0) sinh(λ_m H)] [74]
With this result for C_m, our solution for u(r,z) shown in equation [71], with the
constant boundary condition u_N(r) = U, is written as shown below, using the result
from equation [71] that λ_m = α_m0/R.
u(r,z) = Σ_{m=1}^∞ 2U J₀(α_m0 r/R) sinh(α_m0 z/R) / [α_m0 J₁(α_m0) sinh(α_m0 H/R)] [75]
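Equation [75] can also be checked numerically. The sketch below (plain Python; R = H = U = 1 assumed) uses power series for J₀ and J₁ and the five zeros α_m0 quoted earlier in the text. With only five terms the partial sum is crude in the interior, so the checks are limited to the homogeneous boundaries, where the series behaves well.

```python
import math

ALPHA = [2.40483, 5.52008, 8.65373, 11.79153, 14.93092]  # zeros of J0 from the text
R = H = U = 1.0   # cylinder dimensions and boundary value (assumed)

def J0(x):
    """Power series for J0; adequate for moderate x."""
    term, total = 1.0, 1.0
    for k in range(1, 60):
        term *= -(x * x / 4.0) / (k * k)
        total += term
    return total

def J1(x):
    """Power series for J1: (x/2) * sum of (-1)^k (x/2)^(2k) / (k!(k+1)!)."""
    term, total = x / 2.0, x / 2.0
    for k in range(1, 60):
        term *= -(x * x / 4.0) / (k * (k + 1))
        total += term
    return total

def u(r, z):
    """Five-term partial sum of equation [75]."""
    s = 0.0
    for a in ALPHA:
        s += (2.0 * U * J0(a * r / R) * math.sinh(a * z / R)
              / (a * J1(a) * math.sinh(a * H / R)))
    return s

print(u(0.5, 0.5))              # interior value, between 0 and U
print(u(R, 0.5), u(0.5, 0.0))   # homogeneous boundaries, near zero
```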
Dividing this equation by U shows that the important variables are u(r,z)/U, r/R, z/R,
and H/R. The solution to this equation for u(r,z)/U with H/R = 1 is shown below.
Solution with a constant boundary condition at r = R – Different boundary
conditions give rise to different solutions. If we change the boundary condition at
r = R to be a constant, u_R, instead of zero, and leave all other boundary conditions
the same,⁵ we can solve the problem by introducing a change of variable v(r,z) =
u(r,z) − u_R. The solution for v(r,z) proceeds exactly as above to give v(r,z) =
Σ_m C_m sinh(α_m0 z/R) J₀(α_m0 r/R), but the equation for C_m is different in this case.
⁵ With this change, the full set of boundary conditions in this case is
u(R,z) = u_R, u(r,0) = 0, u(r,H) = u_N(r), and u remains finite at r = 0.
C_m = ∫₀ᴿ r J₀(λ_m r) (u_N(r) − u_R) dr / [sinh(λ_m H) (R²/2) [J₁(λ_m R)]²] [76]
The solution for u(r,z) in equation [71] is replaced by the solution shown below.

u(r,z) = Σ_{m=1}^∞ C_m sinh(λ_m z) J₀(λ_m r) + u_R    λ_m = α_m0/R [77]
Applying the same integration that was used to obtain equation [74] gives the
final solution for u_N(r) = U, a constant, as

u(r,z) − u_R = Σ_{m=1}^∞ 2(U − u_R) J₀(α_m0 r/R) sinh(α_m0 z/R) / [α_m0 J₁(α_m0) sinh(α_m0 H/R)] [78]
Dividing this equation by U − u_R shows that the important variables are
[u(r,z) − u_R]/(U − u_R), r/R, z/R, and H/R. The result has the same right-hand side as
equation [75]; the left-hand side changes from u(r,z)/U in equation [75] to
[u(r,z) − u_R]/(U − u_R) in the dimensionless form of equation [78].
d²Z(z)/dz² + λ²Z(z) = 0 ⇒ Z(z) = A sin(λz) + B cos(λz) [80]

d/dr (r dP(r)/dr) − λ²rP(r) = 0 ⇒ P(r) = C I₀(λr) + D K₀(λr) [81]
The functions I₀ and K₀ are known as the modified Bessel functions of the first
and second kind, respectively.⁶ Since K₀ becomes infinite as r approaches zero,
we must set D = 0. Satisfying the boundary conditions that u(r,0) = u(r,H) = 0
leads to the same result that we found in the analysis of a rectangular region.
We must have B = 0 and λ = nπ/H, where n is a positive, nonzero integer. These
boundary conditions lead to a solution for u that is a sum of all the eigenfunction
solutions.
u(r,z) = Σ_{m=1}^∞ C_m sin(λ_m z) I₀(λ_m r)    λ_m = mπ/H [82]
The usual eigenfunction expansion is applied to the boundary condition that u(R,z) =
u_R(z). Solving this eigenfunction expansion for C_m gives the following.
C_m = ∫₀ᴴ u_R(z) sin(mπz/H) dz / [I₀(mπR/H) ∫₀ᴴ sin²(mπz/H) dz] = 2 ∫₀ᴴ u_R(z) sin(mπz/H) dz / [H I₀(mπR/H)] [83]
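A convenient check of equations [82] and [83]: if the side boundary data is a single eigenfunction, u_R(z) = sin(πz/H), then only the m = 1 term survives and u(r,z) = sin(πz/H) I₀(πr/H)/I₀(πR/H). The sketch below (plain Python; R = H = 1 assumed) evaluates I₀ from its all-positive power series and confirms that the side boundary value is recovered while the top and bottom stay at zero.

```python
import math

R = H = 1.0   # cylinder dimensions (assumed)

def I0(x):
    """Power series for the modified Bessel function I0 (all terms positive)."""
    term, total = 1.0, 1.0
    for k in range(1, 60):
        term *= (x * x / 4.0) / (k * k)
        total += term
    return total

def u(r, z):
    """Single-eigenfunction solution for side data u_R(z) = sin(pi z / H)."""
    lam = math.pi / H
    return math.sin(lam * z) * I0(lam * r) / I0(lam * R)

print(u(R, 0.3))                # should equal sin(0.3*pi), the side data
print(u(0.5, 0.0), u(0.5, H))   # zero on the bottom and top boundaries
```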
The solutions in equations [71] and [82] (along with the corresponding equations
for Cm in equations [73] and [83]) can be used along with the principle of
superposition to obtain solutions to Laplace’s equation in a solid cylinder for more
complex boundary conditions.
⁶ These modified Bessel functions may be regarded as functions to be found in tables or from
software. They are the solutions to a modified Bessel equation in which the final term has a
negative instead of a positive sign. From this equation one can show that I_ν(z) = i^(−ν) J_ν(iz).
This definition means that I_ν is real valued if z is real valued.
(1/r) ∂/∂r (r ∂u/∂r) + ∂²u/∂z² = 0    R_i ≤ r ≤ R_o, 0 ≤ z ≤ H
u(r,0) = u(r,H) = u(R_i,z) = 0    u(R_o,z) = u_o(z) [84]
The solution to this equation proceeds in exactly the same way as the solution to
the equation [79] problem. However, we cannot eliminate the K₀ term in this
case. With the eigenfunction solutions in the z direction, a typical two-dimensional
solution can be written as follows.
u_m(r,z) = [C_m I₀(λ_m r) + D_m K₀(λ_m r)] sin(λ_m z) = C_m P_m(r) sin(λ_m z)    λ_m = mπ/H [85]
Applying the inner-radius boundary condition that u(R_i,z) = 0 to equation [85]
gives the following relationship between C_m and D_m.

D_m = −C_m I₀(λ_m R_i) / K₀(λ_m R_i) [86]
With this definition we can write our radial solution from equation [85], for one
eigenfunction, P_m, as follows.

C_m P_m(r) = C_m [I₀(λ_m r) − (I₀(λ_m R_i)/K₀(λ_m R_i)) K₀(λ_m r)] [87]
Substituting this result into equation [85] and summing over all values of m gives
the general solution for u(r,z).
u(r,z) = Σ_{m=1}^∞ C_m sin(λ_m z) P_m(r)    λ_m = mπ/H [88]
We can fit the boundary condition that u(R_o,z) = u_o(z) by an eigenfunction
expansion of this boundary condition. In the same way that equation [83] was
derived, we can write
C_m = ∫₀ᴴ u_o(z) sin(mπz/H) dz / [P_m(R_o) ∫₀ᴴ sin²(mπz/H) dz] = 2 ∫₀ᴴ u_o(z) sin(mπz/H) dz / [H P_m(R_o)] [89]
Using the definition of the radial solution from equation [87] gives the solution to
the problem in equation [84] as follows.

u(r,z) = Σ_{m=1}^∞ [(2/H) ∫₀ᴴ u_o(z) sin(mπz/H) dz] sin(mπz/H) × [I₀(mπr/H) − (I₀(mπR_i/H)/K₀(mπR_i/H)) K₀(mπr/H)] / [I₀(mπR_o/H) − (I₀(mπR_i/H)/K₀(mπR_i/H)) K₀(mπR_o/H)] [90]
The solution to this problem proceeds in the same way as the solution of the
initial radial geometry problem that led to equation [67]. Applying the zero
boundary conditions at the inner and outer radii to the radial solution
C J₀(λr) + D Y₀(λr) gives a pair of homogeneous algebraic equations that can be
written in the matrix form shown below.
[ J₀(λR_i)  Y₀(λR_i) ] [ C ]   [ 0 ]
[ J₀(λR_o)  Y₀(λR_o) ] [ D ] = [ 0 ]    [93]
This pair of equations will have only the trivial solution C = D = 0 unless the
determinant of the coefficients vanishes.
Det [ J₀(λR_i)  Y₀(λR_i) ; J₀(λR_o)  Y₀(λR_o) ] = J₀(λR_i) Y₀(λR_o) − J₀(λR_o) Y₀(λR_i) = 0 [94]
Equation [94] is a transcendental equation that can be solved for the eigenvalues
for this problem. If we define α = λR_o, we can rewrite equation [94] as follows.

J₀(αR_i/R_o) Y₀(α) − J₀(α) Y₀(αR_i/R_o) = 0 [95]
Thus, the eigenvalues in this problem depend on the ratio of the inner radius to
the outer radius. A plot of equation [95] for three different radius ratios is shown
in Figure 2.
J₀(α_m R_i/R_o) Y₀(α_m) − J₀(α_m) Y₀(α_m R_i/R_o) = 0
Because the determinant in the equation for C and D is zero, there are an infinite
number of solutions for these variables, but all solutions have the same ratio of C
to D. We can find this ratio from the second row of the matrix equation [93]:
D = −C J₀(λR_o)/Y₀(λR_o). With this equation, we can write our radial solution in
equation [67] as follows.
C J₀(λr) + D Y₀(λr) = C [J₀(λr) − (J₀(λR_o)/Y₀(λR_o)) Y₀(λr)] = [C/Y₀(λR_o)] [Y₀(λR_o) J₀(λr) − J₀(λR_o) Y₀(λr)] = E P₀(λr) [97]
The two new terms, E and P₀(λr), in this equation are defined in equation [98].

E = C/Y₀(λR_o)    and    P₀(λr) = Y₀(λR_o) J₀(λr) − J₀(λR_o) Y₀(λr) [98]
We can replace the radial solution in equation [67] with E P₀(λr) and set B = 0 in
that equation to satisfy the boundary condition that u(r,0) = 0. As usual, the
general solution for u(r,z) is the sum of all the eigenfunction solutions, as shown
in equation [100]. In the general solution we can combine the constant product
AE into a single constant C_m, which will be different for each eigenfunction.
u(r,z) = Σ_{m=1}^∞ C_m sinh(λ_m z) P₀(λ_m r) [100]
The boundary condition that u(r,H) = u_N(r) can be satisfied with an eigenfunction
expansion. The normalization integral⁷ in this expansion is given below.
⁷ Carslaw and Jaeger, Conduction of Heat in Solids, Oxford University Press, 1958.
∫_{R_i}^{R_o} r [P₀(λ_m r)]² dr = 2 [J₀²(λ_m R_i) − J₀²(λ_m R_o)] / [πλ_m J₀(λ_m R_i)]² [101]

C_m = ∫_{R_i}^{R_o} r u_N(r) P₀(λ_m r) dr / [sinh(λ_m H) ∫_{R_i}^{R_o} r [P₀(λ_m r)]² dr] [102]
The case of a constant boundary potential at the top of the cylinder, u(r,H) = U,
gives the following integral.
∫_{R_i}^{R_o} r u_N(r) P₀(λ_m r) dr = ∫_{R_i}^{R_o} r U P₀(λ_m r) dr = 2U [J₀(λ_m R_i) − J₀(λ_m R_o)] / [πλ_m² J₀(λ_m R_i)] [103]
C_m = π U J₀(λ_m R_i) / [sinh(λ_m H) [J₀(λ_m R_i) + J₀(λ_m R_o)]] [104]
Substituting this expression for C_m into equation [100] gives the following solution
to the problem in equation [91], where u(r,H) = u_N(r) = U, a constant; the values of
λ_m = α_m/R_o are found from equation [95].
u(r,z) = Uπ Σ_{m=1}^∞ sinh(λ_m z) J₀(λ_m R_i) P₀(λ_m r) / {sinh(λ_m H) [J₀(λ_m R_i) + J₀(λ_m R_o)]} [105]
Using the roots of the eigenvalue equation, α_m = λ_m R_o, allows us to rewrite this
equation as follows.
u(r,z) = Uπ Σ_{m=1}^∞ sinh(α_m z/R_o) J₀(α_m R_i/R_o) P₀(α_m r/R_o) / {sinh(α_m H/R_o) [J₀(α_m R_i/R_o) + J₀(α_m)]} [105b]
Using the definition of P₀ from equation [98], with λ_m replaced by α_m/R_o, gives the
full solution as follows.
$$\frac{u(r,z)}{U} = \pi\sum_{m=1}^{\infty}\frac{\sinh\!\left(\alpha_m\dfrac{z}{R_0}\right)J_0\!\left(\alpha_m\dfrac{R_i}{R_0}\right)\left[Y_0(\alpha_m)J_0\!\left(\alpha_m\dfrac{r}{R_0}\right)-J_0(\alpha_m)Y_0\!\left(\alpha_m\dfrac{r}{R_0}\right)\right]}{\sinh\!\left(\alpha_m\dfrac{H}{R_0}\right)\left[J_0\!\left(\alpha_m\dfrac{R_i}{R_0}\right)+J_0(\alpha_m)\right]} \qquad [105c]$$
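To make this series concrete, it can be evaluated numerically. The sketch below assumes SciPy and illustrative values Ri = 0.5, R0 = H = U = 1 (none of these come from the text); it locates the roots αm of the eigenvalue equation by scanning for sign changes and then checks that the truncated series vanishes on the three homogeneous boundaries.

```python
# Numerical evaluation of the truncated series solution [105c].
# Assumed illustrative values (not from the text): Ri = 0.5, R0 = H = U = 1.
import numpy as np
from scipy.special import j0, y0
from scipy.optimize import brentq

Ri, R0, H, U = 0.5, 1.0, 1.0, 1.0

def P0(alpha, r):
    # Radial eigenfunction from equation [98], written with alpha = lambda * R0.
    return y0(alpha) * j0(alpha * r / R0) - j0(alpha) * y0(alpha * r / R0)

# Roots alpha_m of the eigenvalue equation P0(alpha, Ri) = 0, found by
# scanning a grid for sign changes and refining each bracket with brentq.
grid = np.linspace(0.1, 80.0, 4000)
vals = P0(grid, Ri)
alphas = [brentq(lambda a: P0(a, Ri), a1, a2)
          for a1, a2, v1, v2 in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
          if v1 * v2 < 0]

def u(r, z):
    # Truncated form of equation [105c].
    total = 0.0
    for am in alphas:
        num = np.sinh(am * z / R0) * j0(am * Ri / R0) * P0(am, r)
        den = np.sinh(am * H / R0) * (j0(am * Ri / R0) + j0(am))
        total += num / den
    return np.pi * U * total
```

With the roots in hand, u is essentially zero at r = Ri and r = R0 (the eigenfunctions vanish there by construction) and exactly zero at z = 0 because of the sinh factor.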
Consider the simple modification to the problem in equation [1] in which the boundary conditions u(0,y) = u(L,y) = 0 are replaced by the condition that the gradient of u is zero at both boundaries.
$$\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=0 \qquad 0\le x\le L,\quad 0\le y\le H$$
$$\left.\frac{\partial u}{\partial x}\right|_{(0,y)} = \left.\frac{\partial u}{\partial x}\right|_{(L,y)} = u(x,0)=0 \qquad u(x,H)=u_N(x) \qquad [106]$$
We can still apply the separation of variables approach that led to equation [8]; however, the zero-gradient boundary conditions give cosine eigenfunctions, so the solution to equation [106] has the following form in place of the solution in equation [13].
$$u(x,y) = u_0 + \sum_{n=1}^{\infty} C_n\cos(\lambda_n x)\sinh(\lambda_n y) \qquad \text{with}\quad \lambda_n = \frac{n\pi}{L} \qquad [107]$$
In this solution, unlike the solution with sine eigenfunctions, the n = 0 eigenvalue corresponds to a nonzero eigenfunction, cos(0) = 1. We have separated out the n = 0 eigenfunction in this solution because it has a different form. This different form arises from the basic separation-of-variables solution in equation [5]. When n = 0, the separation constant λ = 0, and the ordinary differential equations that result from the separation of variables have the following form in place of equation [5].
$$\frac{1}{X_0(x)}\frac{d^2 X_0(x)}{dx^2}=0 \qquad \frac{1}{Y_0(y)}\frac{d^2 Y_0(y)}{dy^2}=0 \qquad \lambda_0=\frac{n_0\pi}{L}=0 \qquad [108]$$
The general solution of each equation in [108] is linear in its independent variable. The zero-gradient conditions at x = 0 and x = L reduce X0 to a constant, B, and the condition u(x,0) = 0 eliminates the constant term in Y0, leaving Y0(y) = Cy. The n = 0 solution is therefore
$$u_0(x,y) = X_0(x)Y_0(y) = BCy = C_0\,y \qquad [110]$$
With this solution for u0, the general solution for u(x,y) in equation [107] becomes
$$u(x,y) = C_0\,y + \sum_{n=1}^{\infty} C_n\cos(\lambda_n x)\sinh(\lambda_n y) \qquad \text{with}\quad \lambda_n=\frac{n\pi}{L} \qquad [111]$$
We have the usual eigenfunction expansion for matching the boundary condition at y = H, but in this case we have a separate normalization integral for the n = 0 eigenfunction, cos(0) = 1. Thus our general equations for the constants Cn are
$$C_0=\frac{1}{H}\,\frac{1}{L}\int_0^L u_N(x)\,dx \qquad C_n=\frac{1}{\sinh\!\left(\dfrac{n\pi H}{L}\right)}\,\frac{2}{L}\int_0^L u_N(x)\cos\!\left(\frac{n\pi x}{L}\right)dx \quad (n\ge 1) \qquad [112]$$
This difference in the prefactors, 1/L in the n = 0 case and 2/L for the n ≥ 1 coefficients, is the same as the difference for the cosine terms in a Fourier series.
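Equations [111] and [112] can be illustrated with a quick numerical check. The sketch below uses assumed values L = 2, H = 1, U = 3 (not taken from the text): for a constant top boundary uN(x) = U, every n ≥ 1 cosine integral vanishes, so the solution collapses to the linear profile u = Uy/H.

```python
# Check of equations [111]-[112] for a constant top boundary u_N(x) = U.
# Assumed illustrative values (not from the text): L = 2, H = 1, U = 3.
import numpy as np

L, H, U = 2.0, 1.0, 3.0
N_QUAD = 2000

# Midpoint-rule quadrature points for the integrals in [112].
x = (np.arange(N_QUAD) + 0.5) * L / N_QUAD
dx = L / N_QUAD

def coefficients(uN_vals, n_max=20):
    C0 = np.sum(uN_vals) * dx / (L * H)               # n = 0 case of [112]
    Cn = np.array([2.0 / L * np.sum(uN_vals * np.cos(n * np.pi * x / L)) * dx
                   / np.sinh(n * np.pi * H / L)       # n >= 1 case of [112]
                   for n in range(1, n_max + 1)])
    return C0, Cn

def u(xp, yp, C0, Cn):
    # Truncated series [111].
    n = np.arange(1, len(Cn) + 1)
    lam = n * np.pi / L
    return C0 * yp + np.sum(Cn * np.cos(lam * xp) * np.sinh(lam * yp))

C0, Cn = coefficients(U * np.ones_like(x))
```

Here C0 comes out equal to U/H and every Cn vanishes to machine precision, so u(x, y) reduces to the expected linear profile.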
Problems with gradients in the y-direction will have similar results to those shown
above with the sine or cosine terms in the y direction and the sinh term in the x
direction.
Solution by separation of variables gives the same result shown in equation [67].
The Y0 term in the radial solution is dropped to keep the solution finite as r approaches zero. This leaves the radial solution as P(r) = J0(λr). The gradient boundary condition at r = R requires the radial-solution derivative to vanish, dP(r)/dr = 0 at r = R. This radial boundary condition requires8
$$\left.\frac{dP(r)}{dr}\right|_{r=R} = \left.\frac{dJ_0(\lambda r)}{dr}\right|_{r=R} = -\lambda J_1(\lambda R) = 0 \qquad [116]$$
In this problem, the eigenvalues are λm = αm1/R, where the αm1 are the zeros of the Bessel function J1. Except for the difference in the definition of the eigenfunction,
8 The derivative of J0 is an application of the general equation for Bessel function derivatives combined with the result that J-m(x) = (-1)^m Jm(x):
$$\frac{d}{dx}\left[x^m J_m(x)\right] = x^m J_{m-1}(x) \quad\Rightarrow\quad \frac{dJ_0(x)}{dx} = \frac{d}{dx}\left[x^0 J_0(x)\right] = x^0 J_{-1}(x) = J_{-1}(x) = -J_1(x)$$
the solution to the problem is the same as the solution given for the problem
where u(R,z) = 0 shown in equation [71].
$$u(r,z)=\sum_{m=1}^{\infty} C_m\sinh(\lambda_m z)\,J_0(\lambda_m r) \qquad \lambda_m=\frac{\alpha_{m1}}{R} \qquad [117]$$
The only difference between this equation and the C m equation in [73] is the
appearance of J0 instead of J1 in the denominator of the second term. This
comes about because the normalization integral has the following result.
$$\int_0^R r\left[J_0(\lambda_m r)\right]^2 dr = \frac{R^2}{2}\left[J_0^2(\lambda_m R)+J_1^2(\lambda_m R)\right] \qquad [119]$$
In the problem solved here, the eigenvalues were given by the equation J1(λmR) = 0; in the Cm equation in [73], the eigenvalues were given by the equation J0(λmR) = 0.
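The normalization result [119] can also be verified numerically for the J1-zero eigenvalues. The sketch below assumes SciPy and takes R = 1 purely for the check (the text does not fix a value); with J1(λmR) = 0, the right-hand side of [119] reduces to (R²/2)J0²(λmR).

```python
# Numerical check of the normalization integral [119] when the eigenvalues
# satisfy J1(lambda_m * R) = 0.  Assumed radius (not from the text): R = 1.
import numpy as np
from scipy.special import j0, j1, jn_zeros
from scipy.integrate import quad

R = 1.0
lams = jn_zeros(1, 5) / R   # first five positive zeros of J1 give lambda_m

# Left side of [119] by quadrature, right side from the closed form.
lhs = np.array([quad(lambda r: r * j0(lam * r) ** 2, 0.0, R)[0] for lam in lams])
rhs = R ** 2 / 2 * (j0(lams * R) ** 2 + j1(lams * R) ** 2)
```

The two sides agree to quadrature accuracy for every eigenvalue, and since J1 vanishes at each λmR the result is just (R²/2)J0²(λmR), which is why J0 rather than J1 appears in the Cm denominator for this problem.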