
OPERATIONS RESEARCH

Chapter 10
Non-linear Programming

Prof. Bibhas C. Giri

Department of Mathematics
Jadavpur University
Kolkata, India
Email: [email protected]
MODULE - 1: NLPP with Equality
Constraints: Lagrange Multiplier Method

1.1 Nonlinear Programming Problem (NLPP)

Let z be a real-valued function of n variables defined by z = f(x1, x2, · · · , xn).

Let {b1, b2, · · · , bm} be a set of constants such that

g1(x1, x2, · · · , xn) {≤, =, ≥} b1
g2(x1, x2, · · · , xn) {≤, =, ≥} b2
· · · · · ·
gm(x1, x2, · · · , xn) {≤, =, ≥} bm
xj ≥ 0, j = 1, 2, · · · , n,

where the gi's are real-valued functions and, in each constraint, exactly one of the signs ≤, =, ≥ holds.


If either f(x1, x2, · · · , xn) or some gi(x1, x2, · · · , xn), i = 1, 2, · · · , m, or both are nonlinear, then
the problem of determining (x1, x2, · · · , xn) which makes z a maximum or minimum and
satisfies the constraints is called a Nonlinear Programming Problem (NLPP).

In matrix notation, the NLPP may be written as follows:

Optimize z = f(x)
subject to gi(x) {≤, =, ≥} bi, i = 1, 2, · · · , m,
x ≥ 0, x = (x1, x2, · · · , xn) ∈ R^n,

where either f(x) or some gi(x) or both are nonlinear in x. Sometimes we write the
constraints gi(x) {≤, =, ≥} bi as hi(x) {≤, =, ≥} 0, where hi(x) = gi(x) − bi.

1.2 NLPP with Equality Constraints

Lagrange Multiplier Method - If an NLPP is composed of a differentiable objective
function and equality constraints, then the optimization may be achieved by the
Lagrange multiplier technique. We shall first develop this method for a function of
two variables and later generalize the arguments to any number of variables.

Suppose that we would like to find an optimal value of a differentiable function
f(x, y) whose variables are subject to a constraint g(x, y) = 0, where g is also
differentiable. If such an optimum occurs at a point (x0, y0) at which at least one of
the partial derivatives ∂g/∂x or ∂g/∂y does not vanish, then we can proceed as follows:

Near (x0, y0), the equation of the curve g(x, y) = 0 can be written in the form y =
h(x). Since g vanishes along the curve, we have

d/dx [g(x, h(x))] = ∂g/∂x + (∂g/∂y) · (dh/dx) = 0 at (x0, y0)      (1.1)

and since (x0, y0) gives the constrained optimum value, we also have

d/dx [f(x, h(x))] = ∂f/∂x + (∂f/∂y) · (dh/dx) = 0 at (x0, y0)      (1.2)

Let ∂g/∂y ≠ 0 at (x0, y0). Then we can define a parameter λ by ∂f/∂y − λ ∂g/∂y = 0 at
(x0, y0). If equation (1.1) is multiplied by λ and the result is subtracted from
equation (1.2), then we obtain

∂f/∂x − λ ∂g/∂x = 0 at (x0, y0).

Thus, the equations

∂f/∂x − λ ∂g/∂x = 0 and ∂f/∂y − λ ∂g/∂y = 0      (1.3)

hold at (x0, y0). If we set

L(x, y, λ) = f(x, y) − λ g(x, y)      (1.4)

then equation (1.3) can be written as ∂L/∂x = 0 and ∂L/∂y = 0, and the original constraint
g(x, y) = 0 as ∂L/∂λ = 0. Thus we see that the necessary conditions for an unconstrained
optimum of L are also necessary conditions for a constrained optimum of f(x, y). The
function L defined in (1.4) is called the Lagrangian function and λ is called the
Lagrange multiplier.
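To see the mechanics end to end, here is a minimal sympy sketch of the two-variable method. The problem f = xy subject to x + y = 10 is an illustrative choice of mine, not one from the notes:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# Illustrative problem (not from the notes): optimize f = x*y
# subject to g = x + y - 10 = 0.
f = x * y
g = x + y - 10

# Lagrangian as in (1.4): L = f - lam*g
L = f - lam * g

# Necessary conditions: dL/dx = dL/dy = dL/dlam = 0
eqs = [sp.diff(L, v) for v in (x, y, lam)]
print(sp.solve(eqs, (x, y, lam), dict=True))  # [{x: 5, y: 5, lam: 5}]
```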
1.2.1 Generalized Lagrangian Method
The Lagrangian method developed above can be readily generalized as follows:
Consider a differentiable function z = f(x), x = (x1, x2, · · · , xn) ∈ R^n. The variables are
subject to m (≤ n) constraints gi(x) = 0, i = 1, 2, · · · , m, and x ≥ 0.

The necessary conditions for a maximum (or minimum) of f(x) are given by the following
system of (m + n) equations:

∂L/∂xj = ∂f/∂xj − Σ_{i=1}^{m} λi ∂gi/∂xj = 0,  j = 1, 2, · · · , n
∂L/∂λi = −gi = 0,  i = 1, 2, · · · , m,      (1.5)

where L(x, λ) = f(x) − Σ_{i=1}^{m} λi gi(x) and λ1, λ2, · · · , λm are the Lagrange
multipliers. These necessary conditions are also sufficient for a maximum (minimum) of
the objective function if the objective function is concave (convex) and the constraints
are equations.

Example 1.1: Minimize z = f(x1, x2) = 3e^(2x1+1) + 2e^(x2+5), subject to x1 + x2 = 7 and
x1, x2 ≥ 0.

Solution: We construct the Lagrangian function

L(x1, x2, λ) = f(x1, x2) − λ(x1 + x2 − 7) = 3e^(2x1+1) + 2e^(x2+5) − λ(x1 + x2 − 7),

where λ is the Lagrange multiplier.
The necessary conditions for the minimum of f(x1, x2) are given by

∂L/∂x1 = 6e^(2x1+1) − λ = 0,  ∂L/∂x2 = 2e^(x2+5) − λ = 0,
∂L/∂λ = −(x1 + x2 − 7) = 0.

From the first two equations, 6e^(2x1+1) = 2e^(x2+5); taking logarithms and using
x2 = 7 − x1 gives x1 = (1/3)[11 − log 3] and x2 = 7 − (1/3)[11 − log 3], which minimize z.
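As a check (a sketch of mine, not part of the original text), one can verify symbolically that this point satisfies all three necessary conditions:

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', real=True)
L = 3*sp.exp(2*x1 + 1) + 2*sp.exp(x2 + 5) - lam*(x1 + x2 - 7)

# Claimed minimizer: x1 = (11 - log 3)/3, x2 = 7 - x1; lam follows
# from dL/dx1 = 0, i.e. lam = 6*exp(2*x1 + 1).
x1s = (11 - sp.log(3)) / 3
point = {x1: x1s, x2: 7 - x1s, lam: 6*sp.exp(2*x1s + 1)}

# Each necessary condition vanishes at the claimed point.
for v in (x1, x2, lam):
    print(sp.simplify(sp.diff(L, v).subs(point)))  # 0, 0, 0
```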

Example 1.2: Find the dimensions of a rectangular parallelepiped with the largest
volume, with sides parallel to the coordinate planes, that can be inscribed in the
ellipsoid g(x, y, z) = x²/a² + y²/b² + z²/c² − 1 = 0.

Solution: Let the dimensions of the rectangular parallelepiped be x, y and z. Its
volume, which is to be maximized, is v(x, y, z) = xyz. The Lagrangian function L is
given by

L(x, y, z, λ) = v(x, y, z) − λ g(x, y, z).

The necessary conditions are

∂L/∂x = 0 ⇒ yz − 2λx/a² = 0
∂L/∂y = 0 ⇒ xz − 2λy/b² = 0
∂L/∂z = 0 ⇒ xy − 2λz/c² = 0
∂L/∂λ = 0 ⇒ x²/a² + y²/b² + z²/c² − 1 = 0

Multiplying the first three equations by x, y, z, respectively, adding, and making use of
the last equation, we obtain 3v(x, y, z) − 2λ = 0. This gives λ = (3/2)v(x, y, z) =
(3/2)xyz. Substituting this λ back into the first three equations gives
yz(1 − 3x²/a²) = 0, xz(1 − 3y²/b²) = 0 and xy(1 − 3z²/c²) = 0, whence
x = a/√3, y = b/√3, z = c/√3.
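As a quick check (a sketch with hypothetical concrete semi-axes a = 1, b = 2, c = 3 rather than symbolic ones), sympy reproduces this solution:

```python
import sympy as sp

# Hypothetical concrete semi-axes for a check: a = 1, b = 2, c = 3.
a, b, c = 1, 2, 3
x, y, z, lam = sp.symbols('x y z lam', positive=True)

L = x*y*z - lam*(x**2/a**2 + y**2/b**2 + z**2/c**2 - 1)
eqs = [sp.diff(L, v) for v in (x, y, z, lam)]
sol = sp.solve(eqs, (x, y, z, lam), dict=True)[0]

# Differences from x = a/sqrt(3), y = b/sqrt(3), z = c/sqrt(3) are all zero.
print([sp.simplify(sol[v] - s/sp.sqrt(3)) for v, s in [(x, a), (y, b), (z, c)]])
```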

1.2.2 Sufficient Conditions for Optimum


Case I: Single equality constraint
Consider an NLPP involving n variables and a single constraint:

max (or min) z = f(x), x = (x1, x2, · · · , xn) ∈ R^n
subject to g(x) = 0, x ≥ 0.

The Lagrangian function L is given by

L(x, λ) = f (x) − λg(x).

The necessary conditions for a stationary point to be a maximum or minimum are:

∂L/∂xj = ∂f/∂xj − λ ∂g/∂xj = 0,  j = 1, 2, · · · , n
∂L/∂λ = −g(x) = 0.

The value of λ is obtained from λ = (∂f/∂xj) / (∂g/∂xj) for any j = 1, 2, · · · , n.
The sufficient conditions for a maximum or minimum require the computation of the
(n − 1) leading principal minors ∆3, ∆4, · · · , ∆n+1 of the following bordered
determinant at each stationary point:

         | 0         ∂g/∂x1   ∂g/∂x2   · · ·   ∂g/∂xn |
         | ∂g/∂x1    L11      L12      · · ·   L1n    |
∆n+1 =   | ∂g/∂x2    L21      L22      · · ·   L2n    |
         | · · ·     · · ·    · · ·    · · ·   · · ·  |
         | ∂g/∂xn    Ln1      Ln2      · · ·   Lnn    |

where Ljk = ∂²f/∂xj∂xk − λ ∂²g/∂xj∂xk, and ∆k denotes the leading principal minor of
order k.
If ∆3 > 0, ∆4 < 0, ∆5 > 0, · · · , i.e., the signs alternate starting with positive, the
stationary point is a local maximum.
If ∆3 < 0, ∆4 < 0, ∆5 < 0, · · · , i.e., the signs are all negative, the stationary point
is a local minimum.
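These sign tests are mechanical and easy to automate. The following is a minimal numerical sketch (the helper names are my own, not from the notes) that builds the bordered determinant from ∇g and the second derivatives of L, then applies the two rules:

```python
import numpy as np

def bordered_minors(grad_g, hess_L):
    """Leading principal minors Delta_3, ..., Delta_{n+1} of the bordered
    determinant: first row/column is (0, grad g), the remainder is the
    matrix of second derivatives of L = f - lam*g w.r.t. x."""
    n = len(grad_g)
    H = np.zeros((n + 1, n + 1))
    H[0, 1:] = H[1:, 0] = grad_g
    H[1:, 1:] = hess_L
    return [np.linalg.det(H[:k, :k]) for k in range(3, n + 2)]

def classify(minors):
    """Alternating +, -, +, ... => local maximum; all negative => local
    minimum; any other pattern => the test is inconclusive."""
    if all((-1)**k * d > 0 for k, d in enumerate(minors)):
        return 'local maximum'
    if all(d < 0 for d in minors):
        return 'local minimum'
    return 'inconclusive'
```

For instance, classify(bordered_minors([1, 1, 1], np.diag([4, 2, 6]))) returns 'local minimum'; these are exactly the data of Example 1.3 below.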

Example 1.3: Solve the NLPP: Optimize z = 2x1² + x2² + 3x3² + 10x1 + 8x2 + 6x3 − 100
subject to x1 + x2 + x3 = 20 and x1, x2, x3 ≥ 0.

Solution: The Lagrangian function L is given by

L(x, λ) = 2x1² + x2² + 3x3² + 10x1 + 8x2 + 6x3 − 100 − λ(x1 + x2 + x3 − 20).

The necessary conditions for an optimum are

∂L/∂x1 = 4x1 + 10 − λ = 0;  ∂L/∂x2 = 2x2 + 8 − λ = 0
∂L/∂x3 = 6x3 + 6 − λ = 0;  ∂L/∂λ = −(x1 + x2 + x3 − 20) = 0

Solving these equations, we get the stationary point x∗ = (x1, x2, x3) = (5, 11, 4) and
λ∗ = 30. We now evaluate the n − 1 = 3 − 1 = 2 principal minors at the stationary point
x∗. Since the constraint is linear, the λ-terms vanish and Ljk = ∂²f/∂xj∂xk, so

     | 0  1  1 |
∆3 = | 1  4  0 | = −6
     | 1  0  2 |

and

     | 0  1  1  1 |
∆4 = | 1  4  0  0 | = −44.
     | 1  0  2  0 |
     | 1  0  0  6 |

Since ∆3 < 0 and ∆4 < 0, the stationary point x∗ = (5, 11, 4) is a local minimum. At
this point, the value of the objective function is z∗ = 281.
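A direct numerical evaluation of the two minors (a small numpy sketch, not part of the original notes) confirms the sign pattern:

```python
import numpy as np

# Border: grad g = (1, 1, 1); second derivatives of L: diag(4, 2, 6)
# (the lambda-terms vanish because the constraint is linear).
H = np.array([[0, 1, 1, 1],
              [1, 4, 0, 0],
              [1, 0, 2, 0],
              [1, 0, 0, 6]], dtype=float)

print(round(np.linalg.det(H[:3, :3])))  # -6
print(round(np.linalg.det(H)))          # -44 -> both negative: local minimum
```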

Case II: More than one equality constraint

Consider the following NLPP:

max (or min) z = f(x), x = (x1, x2, · · · , xn) ∈ R^n
subject to gi(x) = 0, i = 1, 2, · · · , m (< n),
x ≥ 0.

Then the Lagrangian function is given by

L(x, λ) = f(x) − Σ_{i=1}^{m} λi gi(x),

where λ = (λ1, λ2, · · · , λm). One may verify that the necessary conditions for an
optimum of L, i.e.,

∂L/∂xj = 0 for j = 1, 2, · · · , n
∂L/∂λi = 0 for i = 1, 2, · · · , m,

are also the necessary conditions for an optimum of f(x). So, the optimization of f(x)
subject to gi(x) = 0, i = 1, 2, · · · , m, is equivalent to the optimization of L(x, λ).

Suppose that the functions L(x, λ), f(x) and gi(x) all possess partial derivatives of
first and second order with respect to the decision variables.

Let M = [∂²L(x, λ)/∂xi∂xj]_{n×n} be the matrix of second partial derivatives of
L(x, λ) with respect to the decision variables, and let V = [∂gi(x)/∂xj]_{m×n}, where
i = 1, 2, · · · , m and j = 1, 2, · · · , n.

We define the (m + n) × (m + n) square matrix

HB = | O    V |
     | V^T  M |

where O is an m × m null matrix. The matrix HB is called the bordered Hessian matrix.

The sufficient conditions for a maximum and a minimum are given below.
Let (x∗, λ∗) be a stationary point of the Lagrangian function L(x, λ) and let HB∗ be
the bordered Hessian matrix evaluated at (x∗, λ∗). Then

(i) x∗ is a maximum point if, starting with the principal minor of order (2m + 1), the
last (n − m) leading principal minors of HB∗ alternate in sign, the first having the
sign of (−1)^(m+1);

(ii) x∗ is a minimum point if, starting with the principal minor of order (2m + 1), the
last (n − m) leading principal minors of HB∗ all have the sign of (−1)^m.

Note: The above conditions are sufficient for identifying an extreme point, but not
necessary. That is, a stationary point may be an extreme point without satisfying the
above conditions.
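The test is mechanical to apply once HB is assembled. Below is a minimal numpy sketch (the helper name is my own, not from the notes) that builds HB from V and M and returns the last n − m leading principal minors:

```python
import numpy as np

def last_principal_minors(V, M):
    """Assemble HB = [[O, V], [V.T, M]] for V (m x n) and M (n x n), and
    return the leading principal minors of orders 2m+1, ..., m+n."""
    m, n = V.shape
    HB = np.zeros((m + n, m + n))
    HB[:m, m:] = V
    HB[m:, :m] = V.T
    HB[m:, m:] = M
    return [np.linalg.det(HB[:k, :k]) for k in range(2*m + 1, m + n + 1)]
```

A minimum then requires every returned minor to carry the sign of (−1)^m; a maximum requires the alternating pattern starting with the sign of (−1)^(m+1).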

Example 1.4: Optimize z = x1² + x2² + x3², subject to x1 + x2 + 3x3 = 2,
5x1 + 2x2 + x3 = 5, and x1, x2 ≥ 0.
Solution: The Lagrangian function is

L(x, λ) = x1² + x2² + x3² − λ1(x1 + x2 + 3x3 − 2) − λ2(5x1 + 2x2 + x3 − 5).

The stationary point (x∗, λ∗) can be obtained from the following necessary conditions:

∂L/∂x1 = 2x1 − λ1 − 5λ2 = 0;  ∂L/∂x2 = 2x2 − λ1 − 2λ2 = 0
∂L/∂x3 = 2x3 − 3λ1 − λ2 = 0;  ∂L/∂λ1 = −(x1 + x2 + 3x3 − 2) = 0
∂L/∂λ2 = −(5x1 + 2x2 + x3 − 5) = 0

Solving these equations simultaneously, we get

x∗ = (x1, x2, x3) = (37/46, 16/46, 13/46) and λ∗ = (λ1, λ2) = (2/23, 7/23).

For this stationary point (x∗, λ∗), the bordered Hessian matrix is obtained as

      | 0  0 | 1  1  3 |
      | 0  0 | 5  2  1 |
HB∗ = | 1  5 | 2  0  0 |
      | 1  2 | 0  2  0 |
      | 3  1 | 0  0  2 |

Here we have m = 2 and n = 3, so n − m = 1 and 2m + 1 = 5.

Hence we have to check only the determinant of HB∗ itself, which must have a positive
sign (i.e., the sign of (−1)²) for a minimum of z. Since |HB∗| = 460 > 0, the
stationary point x∗ is a minimum point.
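Both the classification and the stationary point itself can be verified numerically; the sketch below (numpy, my own construction) computes |HB∗| and solves the linear system formed by the stationarity conditions and the two constraints:

```python
import numpy as np

V = np.array([[1., 1., 3.],
              [5., 2., 1.]])   # constraint gradients
M = 2 * np.eye(3)              # Hessian of x1^2 + x2^2 + x3^2

# Bordered Hessian and its determinant
HB = np.block([[np.zeros((2, 2)), V],
               [V.T,              M]])
print(round(np.linalg.det(HB)))   # 460 > 0, the sign of (-1)^2 -> minimum

# Stationarity 2x - V.T @ lam = 0 together with V @ x = (2, 5) is linear.
A = np.block([[M, -V.T],
              [V, np.zeros((2, 2))]])
rhs = np.array([0., 0., 0., 2., 5.])
print(np.linalg.solve(A, rhs))    # x ~ (0.804, 0.348, 0.283), lam ~ (0.087, 0.304)
```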
