Chapter 10
Non-linear Programming
Department of Mathematics
Jadavpur University
Kolkata, India
Email: [email protected]
MODULE - 1: NLPP with Equality
Constraints: Lagrange Multiplier Method
The general NLPP is to optimize z = f(x1, x2, ..., xn) subject to the constraints

g1(x1, x2, ..., xn) (≤, =, or ≥) b1
g2(x1, x2, ..., xn) (≤, =, or ≥) b2
... ...
gm(x1, x2, ..., xn) (≤, =, or ≥) bm
xj ≥ 0, j = 1, 2, ..., n.
1.2 NLPP with Equality Constraints
Near (x0, y0), the equation of the curve g(x, y) = 0 can be written in the form y = h(x). Since g vanishes along the curve, we have

d/dx [g(x, h(x))] = ∂g/∂x + (∂g/∂y)·(dh/dx) = 0 at (x0, y0)     (1.1)

and since (x0, y0) gives the constrained optimum value, we also have

d/dx [f(x, h(x))] = ∂f/∂x + (∂f/∂y)·(dh/dx) = 0 at (x0, y0)     (1.2)

Let ∂g/∂y ≠ 0 at (x0, y0). Then we can define a parameter λ by ∂f/∂y − λ ∂g/∂y = 0 at (x0, y0). If equation (1.1) is multiplied by λ and the result is subtracted from equation (1.2), then we obtain

∂f/∂x − λ ∂g/∂x = 0 at (x0, y0)

Thus, the equations

∂f/∂x − λ ∂g/∂x = 0 and ∂f/∂y − λ ∂g/∂y = 0     (1.3)

hold at (x0, y0). If we set

L(x, y, λ) = f(x, y) − λ g(x, y),     (1.4)

then equation (1.3) can be written as ∂L/∂x = 0 and ∂L/∂y = 0, and the original constraint g(x, y) = 0 as ∂L/∂λ = 0. Thus we see that the necessary conditions for an unconstrained optimum of L are also necessary conditions for a constrained optimum of f(x, y). The function L defined in (1.4) is called the Lagrangian function, and λ is called the Lagrange multiplier.
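The conditions (1.3) are easy to verify numerically. A minimal sketch (the toy problem f(x, y) = xy with g(x, y) = x + y − 10 = 0 and its optimum (5, 5) are illustrative choices, not from the text):

```python
# Numerical check of the Lagrange conditions (1.3) for a toy problem:
# maximize f(x, y) = x*y subject to g(x, y) = x + y - 10 = 0.

def f(x, y):
    return x * y

def g(x, y):
    return x + y - 10

def partial(fun, point, idx, h=1e-6):
    """Central-difference partial derivative of fun at point w.r.t. variable idx."""
    p_hi = list(point); p_hi[idx] += h
    p_lo = list(point); p_lo[idx] -= h
    return (fun(*p_hi) - fun(*p_lo)) / (2 * h)

# Candidate constrained optimum and multiplier: x0 = y0 = 5, lam = 5,
# where lam is defined by f_y - lam*g_y = 0.
x0, y0, lam = 5.0, 5.0, 5.0

# Equations (1.3): f_x - lam*g_x = 0 and f_y - lam*g_y = 0 at (x0, y0).
r1 = partial(f, (x0, y0), 0) - lam * partial(g, (x0, y0), 0)
r2 = partial(f, (x0, y0), 1) - lam * partial(g, (x0, y0), 1)

print(abs(r1) < 1e-6, abs(r2) < 1e-6, abs(g(x0, y0)) < 1e-12)
```

Both residuals of (1.3) vanish at the candidate point, and the constraint holds, as the derivation predicts.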
1.2.1 Generalized Lagrangian Method
The Lagrangian method developed above can be readily generalized as follows:
Consider a differentiable function z = f (x), x = (x1 , x2 , · · · , xn ) ∈ Rn . The variables are
subject to m(≤ n) constraints gi (x) = 0, i = 1, 2, · · · , m and x ≥ 0.
The necessary conditions for a maximum (or minimum) of f(x) are given by the system of (m + n) equations:

∂L/∂xj = ∂f/∂xj − Σ_{i=1}^{m} λi ∂gi/∂xj = 0, j = 1, 2, ..., n
∂L/∂λi = −gi = 0, i = 1, 2, ..., m.     (1.5)
Example 1.1: Minimize z = 3e^(2x1+1) + 2e^(x2+5) subject to x1 + x2 = 7.
Solution: The Lagrangian function is L(x1, x2, λ) = 3e^(2x1+1) + 2e^(x2+5) − λ(x1 + x2 − 7). The necessary conditions are

∂L/∂x1 = 6e^(2x1+1) − λ = 0,  ∂L/∂x2 = 2e^(x2+5) − λ = 0
∂L/∂λ = −(x1 + x2 − 7) = 0

From these equations, we get x1 = (1/3)(11 − log 3) and x2 = 7 − (1/3)(11 − log 3), which minimize z.
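The stationary point can be checked by confirming that both stationarity conditions give the same multiplier λ and that the constraint holds; a short sketch:

```python
import math

# Stationary point of the problem above:
# x1 = (11 - log 3)/3, x2 = 7 - x1, with lambda = 6*exp(2*x1 + 1).
x1 = (11 - math.log(3)) / 3
x2 = 7 - x1
lam = 6 * math.exp(2 * x1 + 1)

# The second stationarity condition must give the same multiplier,
# and the constraint x1 + x2 = 7 holds by construction.
print(math.isclose(2 * math.exp(x2 + 5), lam))
print(math.isclose(x1 + x2, 7))
```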
Example 1.2: Find the dimensions of a rectangular parallelopiped with the largest volume, whose sides are parallel to the coordinate planes, to be inscribed in the ellipsoid

g(x, y, z) = x²/a² + y²/b² + z²/c² − 1 = 0

Solution: Let the dimensions of the rectangular parallelopiped be x, y and z. Its volume, which is to be maximized, is v(x, y, z) = xyz. The Lagrangian function is

L(x, y, z, λ) = v(x, y, z) − λ g(x, y, z)
The necessary conditions are

∂L/∂x = 0 ⇒ yz − 2λx/a² = 0
∂L/∂y = 0 ⇒ xz − 2λy/b² = 0
∂L/∂z = 0 ⇒ xy − 2λz/c² = 0
∂L/∂λ = 0 ⇒ x²/a² + y²/b² + z²/c² − 1 = 0
Multiplying the first three equations by x, y, z, respectively, adding, and making use of the last equation, we obtain 3v(x, y, z) − 2λ = 0. This gives λ = (3/2)v(x, y, z) = (3/2)xyz. Substituting this value of λ back into the first three equations, we have x = a/√3, y = b/√3, z = c/√3.
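The solution can be verified numerically for any choice of semi-axes; a minimal sketch (the values of a, b, c below are arbitrary illustrative choices):

```python
import math

# Sample semi-axes (illustrative values, not from the text).
a, b, c = 2.0, 3.0, 5.0
x, y, z = a / math.sqrt(3), b / math.sqrt(3), c / math.sqrt(3)
lam = 1.5 * x * y * z          # lambda = (3/2) * v

# Stationarity conditions and the ellipsoid constraint:
ok1 = math.isclose(y * z, 2 * lam * x / a**2)
ok2 = math.isclose(x * z, 2 * lam * y / b**2)
ok3 = math.isclose(x * y, 2 * lam * z / c**2)
ok4 = math.isclose(x**2/a**2 + y**2/b**2 + z**2/c**2, 1)
print(ok1, ok2, ok3, ok4)
```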
Example 1.3: Optimize z = 2x1² + x2² + 3x3² + 10x1 + 8x2 + 6x3 − 100 subject to x1 + x2 + x3 = 20 and x1, x2, x3 ≥ 0.
Solution: The Lagrangian function is L(x1, x2, x3, λ) = z − λ(x1 + x2 + x3 − 20). The necessary conditions are

∂L/∂x1 = 4x1 + 10 − λ = 0;  ∂L/∂x2 = 2x2 + 8 − λ = 0
∂L/∂x3 = 6x3 + 6 − λ = 0;  ∂L/∂λ = −(x1 + x2 + x3 − 20) = 0
Solving these equations, we get the stationary point x* = (x1, x2, x3) = (5, 11, 4) and λ* = 30. We now evaluate the last n − m = 3 − 1 = 2 principal minors of the bordered Hessian at the stationary point x* as follows:

∆3 = | 0 1 1 |
     | 1 4 0 | = −6
     | 1 0 2 |

and

∆4 = | 0 1 1 1 |
     | 1 4 0 0 |
     | 1 0 2 0 | = −44.
     | 1 0 0 6 |

Since ∆3 < 0 and ∆4 < 0, both principal minors have the sign of (−1)^m = (−1)^1; therefore, the extreme point x* = (5, 11, 4) is a local minimum. At this point, the value of the objective function is z* = 281.
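The two minors can be checked independently. A minimal sketch by cofactor expansion, with the bordered Hessian assembled from the constraint gradient (1, 1, 1) and the diagonal of second partials (4, 2, 6):

```python
def det(m):
    """Determinant by cofactor expansion along the first row (fine for small matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

# Bordered Hessian of the example: border from g = x1 + x2 + x3 - 20,
# and M = diag(4, 2, 6) from the second partials of L.
HB = [[0, 1, 1, 1],
      [1, 4, 0, 0],
      [1, 0, 2, 0],
      [1, 0, 0, 6]]

d3 = det([row[:3] for row in HB[:3]])   # leading principal minor of order 3
d4 = det(HB)                            # full determinant
print(d3, d4)                           # -6 -44
```

Both minors are negative, i.e., both carry the sign of (−1)^1, which is the pattern for a minimum.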
The necessary conditions (1.5) for an unconstrained optimum of L(x, λ) are also the necessary conditions for an optimum of f(x). So, the optimization of f(x) subject to gi(x) = 0, i = 1, 2, ..., m, is equivalent to the optimization of L(x, λ).
Suppose that the functions L(x, λ), f(x) and gi(x) all possess partial derivatives of first and second order with respect to the decision variables.

Let M = [∂²L(x, λ)/∂xi∂xj]_{n×n}, for all i and j, be the matrix of second partial derivatives of L(x, λ) with respect to the decision variables, and let

V = [∂gi(x)/∂xj]_{m×n}, where i = 1, 2, ..., m; j = 1, 2, ..., n.

We define the square matrix of order (m + n) × (m + n)

HB = [ O    V ]
     [ V^T  M ]

where O is an m × m null matrix. The matrix HB is called the bordered Hessian matrix.
The sufficient conditions for maximum and minimum are given below:
Let (x∗ , λ∗ ) be the stationary point for the Lagrangian function L(x, λ) and HB∗ be the
value of the corresponding bordered Hessian matrix. Then
(i) x* is a maximum point, if, starting with the principal minor of order (2m + 1), the last (n − m) principal minors of HB* form an alternating sign pattern starting with the sign of (−1)^(m+1).
(ii) x* is a minimum point, if, starting with the principal minor of order (2m + 1), the last (n − m) principal minors of HB* all have the sign of (−1)^m.
Note: The above conditions are only sufficient for identifying an extreme point but
not necessary. That is, a stationary point may be an extreme point without satisfying
the above conditions.
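The block structure of HB can be assembled directly from M and V. A minimal sketch with plain nested lists (the function name is illustrative):

```python
def bordered_hessian(M, V):
    """Assemble HB = [[O, V], [V^T, M]], where O is the m x m zero block."""
    m, n = len(V), len(V[0])
    top = [[0] * m + list(V[i]) for i in range(m)]                          # [O | V]
    bottom = [[V[i][j] for i in range(m)] + list(M[j]) for j in range(n)]   # [V^T | M]
    return top + bottom

# Example: m = 1 constraint, n = 3 variables (the data of Example 1.3).
M = [[4, 0, 0], [0, 2, 0], [0, 0, 6]]
V = [[1, 1, 1]]
HB = bordered_hessian(M, V)
print(HB)
```

The result is a symmetric (m + n) × (m + n) matrix; its leading principal minors from order 2m + 1 onward are the determinants used in the sign tests above.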
Example 1.4: Optimize z = x1² + x2² + x3², subject to x1 + x2 + 3x3 = 2, 5x1 + 2x2 + x3 = 5, and x1, x2 ≥ 0.
Solution: The Lagrangian function is L(x, λ) = x1² + x2² + x3² − λ1(x1 + x2 + 3x3 − 2) − λ2(5x1 + 2x2 + x3 − 5). The stationary point (x*, λ*) can be obtained from the following necessary conditions:
∂L/∂x1 = 2x1 − λ1 − 5λ2 = 0;  ∂L/∂x2 = 2x2 − λ1 − 2λ2 = 0
∂L/∂x3 = 2x3 − 3λ1 − λ2 = 0;  ∂L/∂λ1 = −(x1 + x2 + 3x3 − 2) = 0
∂L/∂λ2 = −(5x1 + 2x2 + x3 − 5) = 0
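Since all five conditions are linear, the stationary point can be found exactly. A minimal sketch using rational arithmetic (the elimination routine is illustrative, not from the text):

```python
from fractions import Fraction

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting over exact fractions."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))   # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * d for a, d in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Unknowns ordered as (x1, x2, x3, lam1, lam2); the rows are the five
# necessary conditions above.
A = [[2, 0, 0, -1, -5],
     [0, 2, 0, -1, -2],
     [0, 0, 2, -3, -1],
     [1, 1, 3,  0,  0],
     [5, 2, 1,  0,  0]]
b = [0, 0, 0, 2, 5]
x1, x2, x3, l1, l2 = solve(A, b)
print(x1, x2, x3, l1, l2)
```

This yields the stationary point x* = (37/46, 8/23, 13/46) with λ1 = 2/23 and λ2 = 7/23, which satisfies both constraints exactly.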