Calculus of Variations Project
Transversality, Weierstrass excess function and sufficient conditions, Lagrange multipliers, isoperimetric problem

Tyler Bolles

Dr. Mesterton-Gibbons
Calculus of Variations
Third Assignment

Part I
Consider the functional to be minimized,
J = ∫₀ᵇ F(x, y, y′) dx = ∫₀ᵇ y′²(x + 1) dx    (1)

subject to y(0) = 0 and b > 0, where (b, β) must lie on y = 1 + ln(x + 1). This is a problem with a variable end point, which we can picture as, say, optimizing the time for two particles to meet when one of them is given a prescribed path. As we have done for most necessary conditions, we can derive one to suit this problem by first imagining a J minimized over a fixed set of boundaries; we then perturb J by perturbing the boundaries and seek implications for the minimizing curve. In general, for functions of a single variable, there are four types of constraint on a boundary point: absolutely fixed (as we are used to), freedom along a line x = constant, freedom along a line y = constant, or freedom along a parametrized curve. The implications for the minimizer are F_{y′}|_b = 0 when there is freedom along some line x = constant, H(x, φ, φ′)|_b = 0 when x is free along some line y = constant, and, most generally, F_{y′} (dy/dx) = H(x, φ, φ′)|_b, where b marks the boundary and the derivative dy/dx is taken along the boundary curve. The only remaining complication is to ensure that the system of equations at the endpoint is consistent. To begin we solve the Euler–Lagrange equation:
0 = 2y′ + y″(2x + 2)

y″(1 + x) + y′ = 0

Let v(x) = y′(x), v′(x) = y″(x):

(1 + x) dv/dx + v = 0  ⟹  dv/v = −dx/(1 + x)

ln v = −ln(1 + x) + const  ⟹  v = y′ = k/(1 + x)

∫ dy = ∫ k dx/(1 + x)

y = k ln(1 + x) + l

then the boundary conditions:

y(0) = k ln(1 + 0) + l = l = 0    (2)

making

y = φ = k ln(1 + x)    (3)

Then since (b, β) lies on y = ln(1 + x) + 1, the general necessary condition

F_{y′} (dy/dx) = H(x, φ, φ′)|_b    (4)

becomes

[1/(1 + b)] · {2φ′(1 + b)} = φ′²(1 + b)    (5)

φ′ = 2/(1 + b) = k/(1 + b)  ⟹  k = 2    (6)

Finally,

β = ln(1 + b) + 1 = 2 ln(1 + b)  ⟹  ln(1 + b) = 1  ⟹  b = e − 1    (7)

β = 2 ln(1 + e − 1) = 2    (8)

Thus in all,

φ = 2 ln(1 + x)    (9)

is an admissible extremal, since it solves the Euler–Lagrange equation and since the system of equations in b and β has a solution.
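As a quick check of this result (not part of the original derivation), the extremal and the transversality condition can be verified symbolically; this is a sketch assuming SymPy is available.

    import sympy as sp

    x = sp.symbols('x')
    phi = 2*sp.log(1 + x)                 # candidate extremal (9)

    # Euler-Lagrange equation from above: y''(1 + x) + y' = 0
    print(sp.simplify(phi.diff(x, 2)*(1 + x) + phi.diff(x)))    # 0

    # transversality F_{y'} dy/dx = H at b = e - 1, boundary y = 1 + ln(1+x)
    b   = sp.E - 1
    yp  = phi.diff(x)
    Fyp = 2*yp*(x + 1)                    # F_{y'} for F = y'^2 (x+1)
    H   = yp*Fyp - yp**2*(x + 1)          # y' F_{y'} - F
    g   = 1 + sp.log(1 + x)
    print(sp.simplify((Fyp*g.diff(x) - H).subs(x, b)))          # 0
    print(phi.subs(x, b), g.subs(x, b))   # both equal 2 at the endpoint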

Part II
Now consider a collection of problems that will be solved with a combination of techniques.
Given a general minimization of
J[y] = ∫₀¹ (y′² + y′y + y′ + ½y) dx    (1)

we can check various conditions under which our solution applies and, what's more, achieves a minimum. We begin with the first integral of the Euler–Lagrange equation, the Hamiltonian:

H = y′F_{y′} − F = y′(2y′ + y + 1) − y′² − y′y − y′ − ½y = y′² − ½y = k    (2)

We note that this ODE has the same form as in Part V (eqns. (2)–(7)) and guess a quadratic as the solution,

y = ax² + bx + c    (3)

so by substituting,

(2ax + b)² − ½(ax² + bx + c) = k

whose x²-coefficient forces 4a² − a/2 = 0, i.e. a = 1/8, so that the general solution is

y = x²/8 + bx + c    (4)
Now the first particular case we investigate is y(0) = 0 and y(1) = β. In this case the boundaries give c = 0 and b = β − 1/8. If we apply the transversality condition corresponding to vertical freedom of a single endpoint, we see that for φ(1) = β,

2φ′(1) + φ(1) + 1 = (½ + 2b) + (⅛ + b) + 1 = 3b + 13/8 = 0  ⟹  b = −13/24,  β = −5/12    (5)

so that our particular solution becomes

φ = x²/8 − 13x/24    (6)

We can then verify that this value of β achieves a minimum by minimizing J with ordinary calculus:

J(b) = ∫₀¹ {(x/4 + b)² + (x/4 + b)(x²/8 + bx) + (x/4 + b) + ½(x²/8 + bx)} dx = 3b²/2 + 13b/8 + 67/384    (7)

Then the minimizing value of J corresponds to the only critical point of this concave-up polynomial in b = β − 1/8. The one-to-one relationship between b and β means dJ/db = (dJ/dβ)(dβ/db) = dJ/dβ. Thus

d/db [3b²/2 + 13b/8 + 67/384] = 3b + 13/8 = 0  ⟹  b = b* = −13/24,  β* = b* + 1/8 = −5/12    (8)

and the β which satisfies the transversality condition also minimizes J, so we say a minimum is achieved.
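The direct-minimization check above is easy to reproduce; the following sketch (not part of the original assignment, assuming SymPy) recomputes (7) and its critical point.

    import sympy as sp

    x, b = sp.symbols('x b')
    phi = x**2/8 + b*x                    # case-1 family, c = 0
    p   = phi.diff(x)

    # J(b) for F = y'^2 + y'y + y' + y/2 over [0, 1]
    J = sp.integrate(p**2 + p*phi + p + phi/2, (x, 0, 1)).expand()
    print(J)                              # 3*b**2/2 + 13*b/8 + 67/384
    print(sp.solve(J.diff(b), b))         # [-13/24], so beta = -5/12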
The second case we look into is where y(0) = α and y(1) = 0. Here the conditions make it obvious that c = α and b = −(α + ⅛), so that

φ(x) = x²/8 − (α + ⅛)x + α    (9)

The same transversality condition applies, so that for φ(0) = α,

2φ′(0) + φ(0) + 1 = −2α − ¼ + α + 1 = 0  ⟹  α = ¾    (10)

giving the specialized solution

φ = x²/8 − 7x/8 + ¾    (11)

Again, we check whether this value of α minimizes J; with b = −(α + ⅛) and c = α,

J(α) = ∫₀¹ {(x/4 + b)² + (x/4 + b)(x²/8 + bx + c) + (x/4 + b) + ½(x²/8 + bx + c)} dx    (12)

= 3b²/2 + bc + 13b/8 + 5c/8 + 67/384 = α²/2 − 3α/4 − 1/192    (13)
making

dJ/dα = α − ¾    (14)

d²J/dα² = 1 > 0    (15)

and

α = α* = ¾    (16)
Thus once more we have found agreement between the minimizing parameter α of J and the parameter which satisfies the transversality condition, so a minimum is once more achieved.
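Again a short symbolic check (a sketch assuming SymPy, not part of the original assignment) reproduces (13)–(16):

    import sympy as sp

    x, al = sp.symbols('x alpha')
    b, c  = -(al + sp.Rational(1, 8)), al      # case 2: y(0) = alpha, y(1) = 0
    phi   = x**2/8 + b*x + c
    p     = phi.diff(x)

    J = sp.integrate(p**2 + p*phi + p + phi/2, (x, 0, 1)).expand()
    print(J)                                   # alpha**2/2 - 3*alpha/4 - 1/192
    print(sp.solve(J.diff(al), al))            # [3/4]
    print(J.diff(al, 2))                       # 1 > 0, so a minimum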
Finally we consider the case when both y-values of the boundary are free, y(0) = α and y(1) = β. The only difference we must note is the additional variable and equation in the system, corresponding to the other boundary. Thus after recognizing c = α,

φ = x²/8 + bx + α    (17)

means that

β = ⅛ + b + α  ⟹  b = β − α − ⅛    (18)

then for φ(0) = α,

2φ′(0) + φ(0) + 1 = 2b + α + 1 = 2β − α + ¾ = 0  ⟹  α = 2β + ¾    (19)

and for φ(1) = β,

2φ′(1) + φ(1) + 1 = 2(¼ + b) + β + 1 = 3β − 2α + 5/4 = 0    (20)

which, upon substituting α = 2β + ¾, gives −β − ¼ = 0, so

α = ¼,  β = −¼    (21)

so that upon substituting,

φ = x²/8 − 5x/8 + ¼    (22)
Now we check whether these values of α and β achieve a minimum. Our job is made easier by noting that, for arbitrary polynomial coefficients a, b ≠ 0, the minimization problem is identical to that set up in (12) and the first half of (13), so if we substitute c = α and b = β − α − ⅛ the expression becomes

f(α, β) = (3/2)(β − α − ⅛)² + α(β − α − ⅛) + (13/8)(β − α − ⅛) + (5/8)α + 67/384    (23)

With a small bit of foresight, we note that this bivariate function is quadratic in each of its arguments. Thus the second derivatives are constant:

∂²f/∂α² = 1    (24)

∂²f/∂β² = 3    (25)

∂²f/∂α∂β = −2    (26)

but the second-derivative test for bivariate functions,

D = f_αα f_ββ − f_αβ² = (1)(3) − (−2)² = −1 < 0    (27)

implies that the only extremum of the function is a saddle point, so that no matter the values of α and β there are always lower values of J, and a minimum is never achieved. We note that when a minimum existed for a set of boundary conditions, the parameters in question coincided with those given by the necessary transversality conditions, and in the case when the minimum did not exist, the values given by said conditions gave only a saddle point of the functional.
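The saddle-point conclusion can be confirmed with a Hessian computation; this sketch (assuming SymPy, not part of the original assignment) uses (23) directly.

    import sympy as sp

    al, be = sp.symbols('alpha beta')
    b, c = be - al - sp.Rational(1, 8), al
    f = sp.Rational(3, 2)*b**2 + b*c + sp.Rational(13, 8)*b \
        + sp.Rational(5, 8)*c + sp.Rational(67, 384)

    H = sp.hessian(f, (al, be))
    print(H, H.det())     # Matrix([[1, -2], [-2, 3]]), det = -1 < 0: a saddle
    print(sp.solve([f.diff(al), f.diff(be)], [al, be]))
    # {alpha: 1/4, beta: -1/4}, matching (21)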

Part III
Imagine we obtain a general solution to the Euler–Lagrange equation. Then, without fixing the arbitrary constants of integration, we can generate a field of potential extremals, φ, by varying those parameters. One can then construct an integral over a path that, within the field, yields the same result for the same bounds of integration regardless of the choice of parameter and therefore of path. In this context we refer to such an integral as Hilbert's invariant integral,

K[φ] = ∫ₐᵇ {F(x, y, φ) + (dy/dx − φ) F_{y′}(x, y, φ)} dx    (1)

and its invariance will prove a useful tool in deriving more necessary and eventually sufficient
conditions for minimization. Let us elaborate on the idea with a concrete example. Consider
J[y] = ∫₀² √(1 + y²y′²) dx    (2)

subject to y(0) = 1 and y(2) = 3. Before we construct any fields we must calculate the solution of the Euler–Lagrange equation. The partial derivatives are

F_y = y y′² / √(1 + y²y′²),    F_{y′} = y² y′ / √(1 + y²y′²),

F_{yy′} = 2yy′/√(1 + y²y′²) − y³y′³/(1 + y²y′²)^{3/2},    F_{y′y′} = y²/√(1 + y²y′²) − y⁴y′²/(1 + y²y′²)^{3/2}

Factoring out y/(1 + y²y′²)^{3/2}

and substituting:

y′²(1 + y²y′²) = 2y′²(1 + y²y′²) − y²y′⁴ + y″y(1 + y²y′² − y′²y²)    (3)

y²y′⁴ = y′²(1 + y′²y²) + y″y

y″y + y′² = 0    (4)
let v(y) = y′(x), so that dv/dx = (dv/dy)(dy/dx) = v′v:

v′vy + v² = 0  ⟹  dv/v = −dy/y

ln v = −ln y + const  ⟹  v = dy/dx = k/y

∫ y dy = ∫ k dx

y = φ = √(kx + l)    (5)

(after relabeling the constants of integration),

where the boundary conditions give l = l* = 1 and k = k* = 4. Noting these values, we form the direction field φ by finding the slope of the extremal family and eliminating one parameter (k or l), using the admissible value of the other and substituting:
y′ = k/(2√(kx + l))    (6)

Then y² − kx = l, so with k = k*,

φ = k*/(2√(k*x + y² − k*x)) = k*/(2y)    (7)

and we can check that this direction field is valid for this functional by making sure it satisfies the Euler–Lagrange equation, since it represents all possible extremals of the field. Thus we substitute φ for y′ everywhere in (4), first noting

y″ = −k²/(4(kx + l)^{3/2}) = −k²/(4y³)    (8)

so that

y″y + y′² = −(k*)²/(4y²) + (k*)²/(4y²) = 0    (9)

Similarly,

k = (y² − l*)/x    (10)

φ = [(y² − l*)/x] / [2√((y² − l*) + l*)] = (y² − l*)/(2xy)    (11)

and

y″y + y′² = −(y² − l*)²/(4x²y²) + (y² − l*)²/(4x²y²) = 0    (12)
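Both slope fields can be checked mechanically; this sketch (assuming SymPy, not part of the original assignment) substitutes each field into (4), using y″ = ∂φ/∂x + φ ∂φ/∂y along a curve of the field.

    import sympy as sp

    x, y, k, l = sp.symbols('x y k l', positive=True)

    # the extremal family itself solves (4)
    curve = sp.sqrt(k*x + l)
    print(sp.simplify(curve.diff(x, 2)*curve + curve.diff(x)**2))   # 0

    phi1 = k/(2*y)                  # field (7), l eliminated
    phi2 = (y**2 - l)/(2*x*y)       # field (11), k eliminated
    for phi in (phi1, phi2):
        ypp = phi.diff(x) + phi*phi.diff(y)     # y'' along the field
        print(sp.simplify(ypp*y + phi**2))      # 0 for both fields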

Part IV
Equipped with Hilbert's invariant integral ((1), Part III), we make an argument for sufficiency. Note that the second term in the integral vanishes as φ → dy/dx, which is of course only possible if φ* is embedded in the field, as it usually can be with a general enough choice of k and l. In this case K[φ*] = J[φ*], where φ* now refers to the minimizing curve. Thus we have an alternative representation of the original problem which, as we have noted earlier, remains invariant over the choice of φ, i.e., K[φ*] = K[φ] for every φ in the field. Then we can invoke our usual perturbation argument for a minimum, J[y] − J[φ*] ≥ 0, or, with the above relations, J[y] − K[φ] ≥ 0, which can be re-written as a single integral,

∫ₐᵇ {F(x, y, y′) − F(x, y, φ) − (y′ − φ) F_{y′}(x, y, φ)} dx ≥ 0    (1)

which is commonly re-written as the integral of Weierstrass's excess function,

∫ₐᵇ E(x, y, φ, y′) dx ≥ 0    (2)

where y′ corresponds to the slope of the comparison curve anywhere over [a, b]. However, upon expanding the last term of the excess function with Taylor's theorem with remainder, there exists some θ ∈ (0, 1) which, upon cancellation of the resulting additive-inverse pairs, gives

E = ½ (y′ − φ)² F_{y′y′}(x, y, φ + θ(y′ − φ))    (3)

which is clearly positive for all regular problems (those satisfying Legendre's strong condition, F_{y′y′} > 0). Thus so long as our problem is regular, we need only show that the minimizing curve can be embedded in a field, as in the previous problem. Now let us apply this new technique to our heart's satisfaction; i.e., to minimize

J[y] = ∫₀¹ (y² + y′² + 2y e²ˣ) dx    (4)

subject to y(0) = ⅓ and y(1) = ⅓e². The Euler–Lagrange equation:


2(y + e²ˣ) = 2y″    (5)

y″ − y = e²ˣ    (6)

The characteristic equation gives

y_hom = k eˣ + l e⁻ˣ    (7)

We simply guess that the particular solution is a multiple of the right-hand side, which readily verifies itself (with coefficient ⅓) to give the general solution

y = k eˣ + l e⁻ˣ + ⅓ e²ˣ    (8)

Then the boundary conditions make both arbitrary constants zero. Proceeding as in the previous problem,

y′ = k eˣ − l e⁻ˣ + ⅔ e²ˣ    (9)

l = (y − k eˣ − ⅓ e²ˣ) eˣ    (10)

so

φ = k* eˣ − (y − k* eˣ − ⅓ e²ˣ) eˣ e⁻ˣ + ⅔ e²ˣ = 2k* eˣ + e²ˣ − y    (11)

Similarly, we can eliminate k to get

φ = y − 2l* e⁻ˣ + ⅓ e²ˣ    (12)

Then since either field contains the extremal of this problem (k* = l* = 0) and F_{y′y′} = 2 > 0,

φ* = ⅓ e²ˣ    (13)

is the minimizer.
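A final sketch (assuming SymPy, not part of the original assignment) confirms that φ* solves (6), meets both boundary values, and that the problem is regular; since F_{y′y′} is constant here, the excess function (3) can even be computed exactly.

    import sympy as sp

    x, y, yp, ph = sp.symbols('x y yp phi')
    phi = sp.exp(2*x)/3                       # claimed minimizer (13)

    print(sp.simplify(phi.diff(x, 2) - phi - sp.exp(2*x)))   # 0: solves (6)
    print(phi.subs(x, 0), phi.subs(x, 1))     # 1/3 and exp(2)/3

    F = y**2 + yp**2 + 2*y*sp.exp(2*x)        # integrand of (4)
    print(sp.diff(F, yp, 2))                  # 2 > 0: Legendre's condition

    # excess function for this F, with no Taylor remainder needed:
    E = F - F.subs(yp, ph) - (yp - ph)*sp.diff(F, yp).subs(yp, ph)
    print(sp.factor(E))                       # (phi - yp)**2 >= 0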

Part V
Recall from ordinary calculus that, when trying to extremize some function f(x, y) constrained by some other g(x, y) = C, the normal procedure is to eliminate y in terms of x and set the derivative of the resulting function equal to zero for the x's at which the function is extremized; one then finds that the gradients of f and g are actually parallel, scaled by a factor λ, also known as a Lagrange multiplier. So it is when we speak of functionals rather than functions, granted we have at least the same bounds of integration. Consider

J[y] = ∫₀² y′² dx    subject to    ∫₀² y dx = 8    (1)

Because the gradient of these two functionals is given by the solution of the Euler–Lagrange equation, we form the problem as

Λ = F − λG = y′² − λy    (2)

and instead of calculating the Euler–Lagrange equation for F and G we do so for Λ; its first integral is

H(x, y, y′) = y′Λ_{y′} − Λ = y′(2y′) − y′² + λy = y′² + λy = k    (3)

Thus

∫ dy/√(k − λy) = ∫ dx    (4)

so let u = k − λy, dy = −du/λ, making

−(1/λ) ∫ du/√u = −(2/λ)√u = −(2/λ)√(k − λy) = x + l

or

√(k − λy) = −(λ/2)(x + l),    y = (1/λ)[k − (λ²/4)(x + l)²]    (5)

but upon rewriting we see that

y = −(λ/4)x² − (λl/2)x + (k/λ − λl²/4)    (6)

so if we choose a = −λ/4, b = −λl/2 and c = k/λ − λl²/4, we get the familiar

y = ax² + bx + c    (7)

Now we use our first trio of constraints, the right integral in (1) with y(0) = 1 and y(2) = 3, to solve for our three unknowns a, b, and c:

y(0) = 1  ⟹  c = 1

y(2) = 3  ⟹  b = 1 − 2a

∫₀² y dx = ∫₀² [ax² + (1 − 2a)x + 1] dx = 8a/3 + 2(1 − 2a) + 2 = 8  ⟹  a = −3

Thus upon substituting a, b and c into (7),

y = −3x² + 7x + 1    (8)

Of course if we alter the constraints, we can use the same equations and substitutions to get a different particular solution. Thus for y(1) = 2, y(3) = 4 and

∫₁³ y dx = 2    (9)

we find that for

y = ax² + bx + c    (10)

the endpoint conditions give b = 1 − 4a and c = 1 + 3a, while the integral constraint, ∫₁³ y dx = −4a/3 + 6 = 2, gives a = 3, or

y = 3x² − 11x + 10    (11)

To conclude, we restate that (8) and (11) both satisfy the modified Euler–Lagrange problem given by the theory of Lagrange multipliers as applied to functionals, and both satisfy their respective boundary conditions and constraints, so each is an admissible extremal.
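Both admissible extremals are easy to validate; the sketch below (assuming SymPy, not part of the original assignment) checks the boundary values, the integral constraints, and that y′² + λy is constant along each solution (with λ = −2y″ from (3)).

    import sympy as sp

    x, a = sp.symbols('x a')

    # first problem: y(0) = 1, y(2) = 3, integral over [0, 2] equal to 8
    y1 = -3*x**2 + 7*x + 1
    print(y1.subs(x, 0), y1.subs(x, 2), sp.integrate(y1, (x, 0, 2)))  # 1 3 8
    lam = -2*y1.diff(x, 2)                   # multiplier from the first integral
    print(sp.simplify(sp.diff(y1.diff(x)**2 + lam*y1, x)))            # 0

    # second problem: b = 1 - 4a, c = 1 + 3a; integral over [1, 3] equal to 2
    y2 = a*x**2 + (1 - 4*a)*x + (1 + 3*a)
    sol = sp.solve(sp.integrate(y2, (x, 1, 3)) - 2, a)
    print(sol, sp.expand(y2.subs(a, sol[0])))   # [3], 3*x**2 - 11*x + 10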

