Unit-9 (1)

This document covers the topic of constrained optimization with equality constraints, detailing methods for finding stationary values, including the method of substitution and the Lagrange multiplier method. It explains the importance of constraints in optimization problems, particularly in economic applications such as consumer equilibrium. The document also discusses second-order conditions and provides examples to illustrate the concepts presented.

UNIT 9 CONSTRAINED OPTIMISATION
WITH EQUALITY CONSTRAINTS

Structure
9.0 Objectives
9.1 Introduction
9.2 Finding the Stationary Values
9.2.1 The Method of Substitution
9.2.2 The Lagrange Multiplier Method
9.3 Second Order Conditions
9.4 Economic Applications
9.4.1 Consumer Equilibrium
9.4.2 Cost and Supply
9.5 Let Us Sum Up
9.6 Answers/Hints to Check Your Progress Exercises

9.0 OBJECTIVES
After reading the Unit you should be able to:

• Explain the effect of a constraint on an optimisation problem;


• Describe the process of obtaining the stationary values in a constrained
optimisation problem;
• Describe the second-order conditions in constrained optimisation
problems; and
• Discuss some economic applications of constrained optimization
problems.

9.1 INTRODUCTION
In the previous unit, you studied the topic of optimisation. This finds
numerous applications in economics. You were made familiar with the case
where the dependent variable is a function of several independent variables.
However, the optimisation was unconstrained. In economic applications,
unconstrained optimisation is a relatively rare case. More often, we find
instances of constrained optimisation, that is optimisation subject to a
constraint. What this means is that there is a side condition on the
optimisation exercise. The domain of the function that is sought to be
optimised is restricted by one or more side relations. Consider the case of
utility maximization by a consumer. An individual as consumer has unlimited
wants. But she is constrained by her budget set. So she maximizes her utility
subject to her budget constraint. Similarly a producer would want to
minimize her cost subject to a given level of output.
This unit is concerned with an exposition of constrained optimisation. The
unit begins with a discussion of how to find stationary values of the objective
function. It is shown that in simple cases we can use the method of
substitution. But in more complex cases, different methods have to be
employed. We will see the importance of the Lagrange multiplier and how
it is used in static constrained optimisation. The unit then discusses
second-order conditions for constrained optimisation, and finally takes up
some economic applications.

9.2 FINDING THE STATIONARY VALUES


9.2.1 The Method of Substitution
With the help of an example from co-ordinate geometry, we shall learn to
maximize/minimize a function subject to a constraint which restricts the
domain of the function.

Consider a simple example. Find the smallest circle centred on (0,0) which
has a point common with the straight line x+y = 10.
The equation for the circle is x2+y2 = r2. The smallest circle will be one with
the smallest radius. The restriction is that the circle must have a point in
common with a given straight line. Without this restriction, the smallest
circle can easily be seen to be a circle with radius zero, i.e. a point.

Thus our problem is the following:


Minimise x 2 + y 2 subject to x + y = 10

Here the unconstrained solution x=0, y=0 will not be available. The
constraint prohibits this solution. What we have to do is to consider as the
domain only those values of x and y for which x + y = 10. So we see that the
constraint has diminished the domain. How do we find the solution? From the
constraint we find y = 10 − x. Now if we substitute this into the minimand
x² + y² we get x² + (10 − x)², which incorporates the constraint. Let us now
minimise this expression:


d/dx [x² + (10 − x)²] = 2x + 2(10 − x)(−1) = 4x − 20

For a stationary value, 4x − 20 = 0, or x = 5.

To check whether the stationary value is truly a minimum we differentiate the


function once again with respect to x. Thus,

d²/dx² [x² + (10 − x)²] = d/dx (4x − 20) = 4 > 0

which proves that we have minimised the expression x² + y².


We know that the constraint makes y = 10 − x = 5, thus giving us the solution
x = 5, y = 5, for which x² + y² = 50.
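The substitution method for this problem can be checked with a short numerical sketch (a hypothetical grid search, not part of the original text):

```python
# Method of substitution: minimise x^2 + y^2 subject to x + y = 10.
def objective(x):
    y = 10 - x          # constraint substituted into the minimand
    return x ** 2 + y ** 2

# Scan a grid on [0, 10]; the calculus above says the minimum is at x = 5.
xs = [i / 100 for i in range(0, 1001)]
best_x = min(xs, key=objective)
print(best_x, objective(best_x))   # 5.0 50.0
```

The grid minimum agrees with the analytical solution x = y = 5 and objective value 50.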

Let us check on the method used for the problem above, and see whether we
can formulate a general method for all such problems. We have a function,
in general f(x, y), which we shall call the objective function. This is to be
maximised or minimised (as specified), subject to the constraint equation,
which has the general form g(x, y) = 0. (The actual constraint above should
be written as x + y − 10 = 0 to conform to the general form.)

In solving the problem our method involved several steps:


Step 1 : Solve the constraint equation to get one variable in terms of the
other. Above we found y in terms of x: y = 10 − x.

Step 2 : Substitute this solution in the objective function to ensure that the
domain of the function is a set of pairs of values of x and y which
satisfy the constraint. Note that the objective function thus
modified is a function of x only.
Step 3 : Differentiate this modified function to get the first derivative.
Find the value of x for which this derivative equals zero.
Step 4 : Put this value of x in the constraint to find the value of y.
Step 5 : Calculate the value of the objective function for this pair of
values of x and y.
Step 6 : Check the second order condition to find whether the stationary
point is an extremum.

In general it may not be easy or indeed possible to solve the constraint


equation. In such cases it would appear that we are stuck at step (1) of the
procedure described above. For instance, we may have a constraint equation
like x³ + 2x²y + 9y³ − 2y − 117 = 0. To find y in terms of x, or x in terms of y
from this complicated equation is extremely difficult. Suppose now that the
problem has been referred to a mathematician and while we wait for the
solution we prepare the ground for our computation.

When the mathematician finds the solution it will be of the form y = h(x).
Let us proceed to the next step. In step(2) the objective function f(x,y) is
transformed into f(x, h(x)). Step (3) requires us to differentiate this function,
and what we shall get is f_x + f_y · (dh(x)/dx). If we know dh(x)/dx, then we set the
expression equal to zero and solve for x, completing step (3). The rest of the
steps present no problem.
Note that the exact form of the function h(x) is not necessary in the solution
of our problem. What we need is the derivative of this function in order to
follow our method of steps (1) to (6). But how could we find the derivative
of a function without knowing the function itself?

Strangely enough there is a theorem which tells us how and the conditions
when this is possible. It is the implicit function theorem.

The Implicit Function Theorem


If F(x,y) is continuous with continuous derivatives Fx and Fy and if F(x0,y0) =
0 but Fy(x0,y0)≠0, then
i) there exists a rectangle x1 ≤ x ≤ x2, and y1 ≤ y ≤ y2 such that for every x ∈
[x1, x2] the equation F(x,y) = 0 determines exactly one value y = m (x), y
∈ [y1, y2] i.e., within this rectangle, y can be expressed as a function of
x, y = m(x).
ii) This function satisfies y0 = m(x0), and for every x ∈ [x1, x2], F(x, m(x)) =
0.
iii) This function m(x) is continuous and differentiable, and its derivative
is dy/dx = dm(x)/dx = −F_x / F_y.

(This is because taking the total differential of the function F(x, y) = 0 gives
F_x dx + F_y dy = 0, i.e., dy/dx = −F_x / F_y.)
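The formula dy/dx = −F_x/F_y can be checked numerically on a simple case (an illustrative sketch with an assumed example, F(x, y) = x² + y² − 25, not from the text):

```python
import math

# F(x, y) = x^2 + y^2 - 25 = 0; at (3, 4), F_y = 8 is non-zero, so locally
# y = m(x) = sqrt(25 - x^2) and the theorem gives m'(x) = -F_x / F_y.
def F_x(x, y): return 2 * x
def F_y(x, y): return 2 * y

slope_theorem = -F_x(3, 4) / F_y(3, 4)     # -3/4
h = 1e-6                                    # central-difference check
slope_numeric = (math.sqrt(25 - (3 + h) ** 2) - math.sqrt(25 - (3 - h) ** 2)) / (2 * h)
print(slope_theorem, slope_numeric)         # both about -0.75
```

The finite-difference slope of the explicitly solved branch matches the implicit-function formula.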

We shall not prove this theorem here, but let us translate the theorem into
informal language. Let us go back to our problem of finding y in terms of x
from our constraint equation, whose general form is g(x, y) = 0. Thus our
problem is to maximise (or minimise) f(x, y) subject to g(x, y) = 0. The
theorem says that if

i) the function g is continuous with continuous derivatives and


ii) g_y(x, y) ≠ 0 at a point (x₀, y₀) which satisfies the equation,

then we can use the steps outlined earlier even without finding the function
y = h(x). In the third step we will need h′(x), and the theorem states that
h′(x) = −g_x / g_y.

Let us apply this method to the following problem.

Find the maximum value of f(x, y) = x² + y² subject to x² + y² − 4x − 2y + 4 = 0.


g_y = ∂/∂y (x² + y² − 4x − 2y + 4) = 2y − 2 (for y ≠ 1 this is non-zero).

By the implicit function theorem, whenever g_y ≠ 0 we can express y as a
function of x, namely y = h(x) (even though we do not know the exact form).
Substituting into f(x,y) we get

z = f(x, h(x)) = x² + (h(x))²

A stationary value of z would make dz/dx = 0.

dz/dx = 2x + 2h(x)h′(x) = 2x + 2y·h′(x) = 2x + 2y(−g_x/g_y)

= 2x + 2y(−(2x − 4)/(2y − 2)) = 2x − 2y(x − 2)/(y − 1)

Setting this equal to zero gives x(y − 1) = y(x − 2); that is, we must have
x = 2y at the stationary value of z.


Applying step (4) as used above (substituting x = 2y into the constraint) gives
us 5y² − 10y + 4 = 0, from which y = 1 ± 1/√5 and thus x = 2(1 ± 1/√5). The
objective function (step 5) now becomes

x² + y² = 5(1 ± 1/√5)² = 2(3 ± √5)

The last step would involve a somewhat complicated procedure involving


second order conditions, but we postpone this till later. In this particular case
2(3 + √5) is a maximum and 2(3 − √5) is a minimum.
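A quick numerical check of these two values (a sketch; the observation that the constraint is the circle of centre (2, 1) and radius 1 is not stated in the text):

```python
import math

# The constraint x^2 + y^2 - 4x - 2y + 4 = 0 is the circle
# (x-2)^2 + (y-1)^2 = 1; parametrise it and scan f = x^2 + y^2 along it.
def f(theta):
    x = 2 + math.cos(theta)
    y = 1 + math.sin(theta)
    return x * x + y * y

values = [f(2 * math.pi * k / 100000) for k in range(100000)]
print(max(values), 2 * (3 + math.sqrt(5)))   # both about 10.472
print(min(values), 2 * (3 - math.sqrt(5)))   # both about 1.528
```

The scanned extrema agree with 2(3 ± √5) to high accuracy.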

We now discuss a method which takes a roundabout route and derives the
answer by initially complicating the problem further through the introduction
of another variable. This is known as the Lagrange multiplier method.

9.2.2 The Lagrange Multiplier Method


Consider once more the problem:

Maximise f(x, y) subject to g(x, y) = 0

If gy(x, y) ≠ 0, then y = h(x), so that the problem is transformed into:


Maximise f(x,h(x))

The first-order condition gives us f_x + f_y h′(x) = 0 ……(i)


g(x,h(x)) = 0 is an identity so that we have

g_x + g_y h′(x) = 0 ……(ii)

Now define λ = f_y / g_y. Multiplying (ii) by λ, we get

λg_x + f_y h′(x) = 0 ……(iii)

From (i) and (iii), f_x − λg_x = 0 ……(1)


From λ = f_y / g_y, f_y − λg_y = 0 ……(2)

Also the constraint is g(x,y) = 0 …….(3)


(1), (2) and (3) are three equations in three variables, x, y and λ. When we
solve these three simultaneous equations we get the values of x and y which
solve our problem; we also get the value of λ, the Lagrange multiplier,
something which does not seem to have any relevance to our problem. We
shall see presently that it is full of importance and meaning.

We started this exercise with the assumption g_y(x, y) ≠ 0. You should work
out the consequences of instead assuming g_x(x, y) ≠ 0, proceeding in the
same way as above, except that now one should define λ = f_x / g_x. The
result is that we land up with the same set of equations as (1), (2) and (3)
above. In a problem with a large number of variables, say x1, x2, ……, x17,
as long as one partial derivative of the constraint function with respect to
some variable is non-zero, we shall get the same set of equations, which are
symmetrical with respect to all variables.

Let us apply the Lagrange multiplier method to the following problem:

Example
Minimise f(x, y) = (x − 1)² + y² subject to g(x, y) = y² − 4x = 0.
Define a new function L(x, y, λ) = f(x, y) − λg(x, y)
= (x − 1)² + y² − λ(y² − 4x)

Find the stationary point of L by considering the first order equations.


L_x = f_x − λg_x = 2(x − 1) + 4λ = 0 ……(i)
L_y = f_y − λg_y = 2y − 2λy = 0 ……(ii)
L_λ = −g(x, y) = −(y² − 4x) = 0 ……(iii)
From (ii), y(1 − λ) = 0. For y = 0, we get from the other equations x = 0,
λ = 1/2. The other alternative, λ = 1, gives us from (i) x = −1, which
conflicts with (iii). The stationary point therefore is at x = 0, y = 0, which
gives the constrained minimum as (0 − 1)² + (0)² = 1. We shall here ignore
the by-product λ = 1/2.
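The Lagrange solution can be sanity-checked by scanning the objective along the constraint (a hypothetical brute-force sketch, not part of the original method):

```python
# Scan the parabola y^2 = 4x (i.e. x = y^2/4) for the minimum of (x-1)^2 + y^2.
def f(y):
    x = y * y / 4
    return (x - 1) ** 2 + y * y

ys = [-3 + 6 * k / 100000 for k in range(100001)]   # y from -3 to 3
y_best = min(ys, key=f)
print(y_best, f(y_best))   # 0.0 1.0
```

The scan confirms the stationary point y = 0 (so x = 0) with constrained minimum value 1.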

Note carefully that we have not suggested that the stationary value of
L(x, y, λ) is either a maximum or a minimum. Indeed, where a constrained
extremum can be found, the associated Lagrangian function does not have
either a maximum or a minimum point. For the function L(x, y, λ) the
stationary point is a saddle point: a minimum in one direction and a maximum
in another direction. A saddle point is a higher-dimensional analogue of an
inflection point.
Also note that function L has been artfully devised so that its first order
conditions coincide with the conditions necessary for the solutions of the
constrained extremum. It offers the desirable characteristic of symmetry with
respect to all variables .
We see that at the stationary value, f_x / f_y = g_x / g_y. This really means
that for the x, y which solve the problem we have tangency between the
constraint and a level curve of the function f(x, y) obtained from the equation
f(x, y) = c. For various values of c we get various level curves of f, and the
constrained extremum occurs at a point where a level curve of f just touches
the constraint curve.
Now let us see how this method is useful in Economics. Constrained
optimisation problems abound in Economics. Frequently some function is
maximised subject to some constraint. For example, in consumer
equilibrium, we maximise satisfaction subject to an income or budget
constraint; similarly, for producer equilibrium, we obtain a given output at
minimum factor cost subject to a resource constraint. For constrained
optimisation we use the Lagrange multiplier. We explain below the various
steps needed for this.
1) State clearly the function to be optimised. This is called the objective
function (OF).
2) Identify the constraint function (CF) and reduce it to the form
C − ax − by = 0 (i.e., an implicit function).
3) Create another function (z or v) such that v = OF + λ·CF, where λ is
some proportion (λ is read as lambda).
4) Find v_x = 0, v_y = 0 and solve for x and y. If clear values of x and y
are not obtainable, then find v_λ = 0 (this will be the constraint
function). With the help of v_x = 0, v_y = 0 and v_λ = 0, the required
solution (i.e., the values of x and y) can be obtained.
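The four steps above can be traced on a toy problem (hypothetical numbers, not from the text: maximise xy subject to x + 2y = 12):

```python
# Step 3: v = xy + lam*(12 - x - 2y)
# Step 4: v_x = y - lam = 0  and  v_y = x - 2*lam = 0
#         give lam = y and lam = x/2, hence x = 2y.
# v_lam = 12 - x - 2y = 0 with x = 2y then pins down the solution.
y = 12 / 4           # substitute x = 2y into the constraint: 4y = 12
x = 2 * y
lam = y              # the multiplier, recovered from v_x = 0
print(x, y, lam)     # 6.0 3.0 3.0
```

Here the multiplier emerges as a by-product of steps 3 and 4, exactly as in the worked examples that follow.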

Let us consider an example:


Maximise 5x² + 6y² − xy, subject to the constraint x + 2y = 24.
Solution: Given OF: 5x² + 6y² − xy
CF: x + 2y = 24, i.e., 24 − x − 2y = 0
Let v = OF + λ·CF = 5x² + 6y² − xy + λ(24 − x − 2y)
v_x = 10x − y − λ = 0, or λ = 10x − y …(1)
v_y = 12y − x − 2λ = 0, or λ = (12y − x)/2 …(2)

From (1) and (2) we get 2(10x − y) = 12y − x, i.e., 20x − 2y = 12y − x, so
that x = (2/3)y …(3)
Since we do not get clear values of x and y,
we therefore find v_λ = 24 − x − 2y = 0 …(4)

From (3) and (4), we get

24 − (2/3)y − 2y = 0, or (8/3)y = 24, or y = 9

From (3), x = (2/3)(9) = 6

Hence the constrained optimum occurs when x = 6 and y = 9.
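The answer can be verified by substituting the constraint x = 24 − 2y back into the objective and scanning (an illustrative check; the substituted quadratic is convex in y, so its stationary point is located here with a grid minimum):

```python
# Substituting x = 24 - 2y into 5x^2 + 6y^2 - xy gives a quadratic in y
# whose stationary point should match the Lagrange solution x = 6, y = 9.
def g(y):
    x = 24 - 2 * y
    return 5 * x * x + 6 * y * y - x * y

ys = [k / 1000 for k in range(24001)]   # y from 0 to 24 in steps of 0.001
y_star = min(ys, key=g)
x_star = 24 - 2 * y_star
print(x_star, y_star)   # 6.0 9.0
```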

A consumer's utility function is given as u = (y + 1)(x + 2). If his budget
constraint is 2x + 5y = 51, how much of x and y should he consume to
maximise his satisfaction?
Sol. Given OF: u = (y + 1)(x + 2) = xy + x + 2y + 2

CF: 51 − 2x − 5y = 0

Let v = OF + λ·CF, where λ is the Lagrange multiplier.

or v = xy + x + 2 y + 2 + λ ( 51 − 2 x − 5 y )
= xy + x + 2 y + 2 + 51λ − 2λ x − 5λ y

v_x = y + 1 − 2λ = 0, or λ = (y + 1)/2 …(5)

v_y = x + 2 − 5λ = 0, or λ = (x + 2)/5 …(6)

From (5) and (6), we get (y + 1)/2 = (x + 2)/5, or
5y + 5 = 2x + 4, so x = (5y + 1)/2 …(7)
Since no clear solution emerges, we therefore find v_λ.

v_λ = 51 − 2x − 5y = 0 …(8)

From (7) and (8), we get

51 − (5y + 1) − 5y = 0, or 50 − 10y = 0, so y = 5

x = (5y + 1)/2 = (5·5 + 1)/2 = 26/2 = 13

Therefore, the solution is x = 13, y = 5. That is, the consumer will consume
13 units of x and 5 units of y.
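As a check, utility can be scanned along the budget line (an illustrative sketch, not part of the original solution):

```python
# Budget line 2x + 5y = 51, so x = (51 - 5y)/2; scan u = (y+1)(x+2).
def utility(y):
    x = (51 - 5 * y) / 2
    return (y + 1) * (x + 2)

ys = [k / 1000 for k in range(10201)]   # y from 0 to 10.2 (budget spent on y)
y_star = max(ys, key=utility)
x_star = (51 - 5 * y_star) / 2
print(x_star, y_star)   # 13.0 5.0
```

The grid maximum reproduces the Lagrange solution x = 13, y = 5.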
Another example:
Use the method of the Lagrange multiplier to find the equilibrium consumption
of two goods x and y on the basis of the following information.

The utility function of the consumer is u = xy + 2x

Price of x = Rs. 4/-, price of y = Rs. 2/-, and the consumer's money income = Rs. 60/-

Solution: We first obtain the constraint function. It is given by

x·p_x + y·p_y = M, or 4x + 2y = 60, or 60 − 4x − 2y = 0

We are also given the objective function: u = xy + 2x

Applying the Lagrange multiplier, we get v = OF + λ·CF as

v = xy + 2x + λ(60 − 4x − 2y)

v_x = y + 2 − 4λ = 0, or λ = (y + 2)/4 …(9)

v_y = x − 2λ = 0, or λ = x/2 …(10)

From (9) and (10), we get x/2 = (y + 2)/4, or x = y/2 + 1 …(11)

Since no clear solution emerges, we therefore find v_λ:

v_λ = 60 − 4x − 2y = 0 …(12)

From (11) and (12), we get 60 − 4(y/2 + 1) − 2y = 0

or 56 − 4y = 0, so y = 56/4 = 14

x = y/2 + 1 = 14/2 + 1 = 8. Thus the solution is x = 8, y = 14. The consumer
is in equilibrium (or gets maximum satisfaction) when he purchases 8 units of
good x and 14 units of y.

Check Your Progress 1


1) Explain the concept of : (a) objective function (b) constraint.

……………………………………………………………………………
……………………………………………………………………………

……………………………………………………………………………

……………………………………………………………………………
2) Outline the method of substitution in obtaining a solution to a
constrained optimisation problem.
……………………………………………………………………………

……………………………………………………………………………

……………………………………………………………………………

……………………………………………………………………………
……………………………………………………………………………

3) Describe the method of Lagrange multiplier to obtain a solution to a


constrained optimisation problem, outlining the various steps.
……………………………………………………………………………

……………………………………………………………………………

……………………………………………………………………………
……………………………………………………………………………

……………………………………………………………………………

9.3 SECOND ORDER CONDITIONS


As explained earlier, optimisation consists of both maximisation and
minimisation. We want to maximise what is useful to us and minimise
what is costly to us. For example, we want to maximise bus service in a city
and minimise pollution. A student would like to maximise marks with
minimum effort.

We would like to obtain conditions of sufficiency for optimisation subject to
constraints. We saw that both for maximisation and for minimisation, the
first-order conditions are the same. Since we were searching for stationary
values, we formed a Lagrangian function and set the first-order partial
derivatives to zero. The second-order conditions help us determine conditions
for maximisation or minimisation. Let us study the second-order conditions
now. For that we will need to build up some tools, to which we now turn. The
first of these is the concept of the total differential. You have already studied
this.

Total Differential
The concept of total differential is very useful in constrained optimisation. In
discussing the concept of total differential, remember that if we have a
function f(x, y) then the symbol f_x stands for ∂f/∂x.

If z = f(x, y) is a differentiable function, then the total
differential dz can be expressed as:

dz = f_x dx + f_y dy = (∂z/∂x)dx + (∂z/∂y)dy (approximately)

It may be noted that this formula holds good whether x and y are dependent
or independent variables. The expression dz shows the increment in the
function z = f(x, y) when there are infinitesimal increments in x as well
as y. For example,

if z = x³ + y³, then the total differential can be expressed as

dz = f_x dx + f_y dy = 3x² dx + 3y² dy
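The approximation dz ≈ f_x dx + f_y dy can be checked numerically for this example (an illustrative sketch with assumed values x = 2, y = 1):

```python
# z = x^3 + y^3: compare the exact change with dz = 3x^2 dx + 3y^2 dy.
def z(x, y):
    return x ** 3 + y ** 3

x0, y0 = 2.0, 1.0
dx, dy = 1e-5, 2e-5
exact_change = z(x0 + dx, y0 + dy) - z(x0, y0)
dz = 3 * x0 ** 2 * dx + 3 * y0 ** 2 * dy
print(exact_change, dz)   # agree to about 1e-9
```

The discrepancy is of second order in the increments, which is what "approximately" means above.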

The following rules on total differentials will be found useful. Let
w = f(x, y) and z = g(x, y) be two functions of x and y. Then:

1) d(w + z) = dw + dz = (f_x dx + f_y dy) + (g_x dx + g_y dy)

2) d(wz) = w·dz + z·dw
   = w(g_x dx + g_y dy) + z(f_x dx + f_y dy) (Product rule)

3) d(w/z) = (z·dw − w·dz)/z²
   = [z(f_x dx + f_y dy) − w(g_x dx + g_y dy)]/z² (Quotient rule)

4) If z = f(u) and u = g(x, y), then
   dz = f′(u) du, where du is the differential of u, which in turn is a
   function of x and y.
   Example: if z = uⁿ, where u = f(x, y), then
   dz = (d/du)(uⁿ) du = n·uⁿ⁻¹ du
Let us now solve some problems on total differentials.
1) Find du when u = 3x³ + 2y² + y³

Sol. The total differential du is given by:

du = f_x dx + f_y dy = 9x² dx + (4y + 3y²) dy
= 9x² dx + y(4 + 3y) dy

2) Find the total differentials of the following functions:

a) u = (x² − y²)/(x² + y²)  b) w = e^(x² + y²)  c) u = log(x² + y²)

Sol. a) u = (x² − y²)/(x² + y²); apply the quotient rule with w = x² − y² and
z = x² + y²:

du = (z·dw − w·dz)/z²

= [(x² + y²)·d(x² − y²) − (x² − y²)·d(x² + y²)] / (x² + y²)²

= [(x² + y²)(2x dx − 2y dy) − (x² − y²)(2x dx + 2y dy)] / (x² + y²)²

= (4xy² dx − 4x²y dy) / (x² + y²)²
2
b) w = e^(x² + y²)

Put u = x² + y², so that w = e^u and dw = e^u du …(13)

Also, du = d(x²) + d(y²) = 2x dx + 2y dy …(14)

From (13) and (14), we get

dw = e^(x² + y²)(2x dx + 2y dy) = 2x·e^(x² + y²) dx + 2y·e^(x² + y²) dy

c) u = log(x² + y²)

Let us try it by using the formula:

du = f_x dx + f_y dy = [1/(x² + y²)]·2x dx + [1/(x² + y²)]·2y dy

= [2x/(x² + y²)] dx + [2y/(x² + y²)] dy = (2x dx + 2y dy)/(x² + y²)
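The answer to (a) can be verified with a finite-difference check (an illustrative sketch with an assumed point (1, 2), not from the text):

```python
# u = (x^2 - y^2)/(x^2 + y^2): check du = (4xy^2 dx - 4x^2 y dy)/(x^2+y^2)^2.
def u(x, y):
    return (x * x - y * y) / (x * x + y * y)

x0, y0 = 1.0, 2.0
dx, dy = 1e-6, -1e-6
du = (4 * x0 * y0 ** 2 * dx - 4 * x0 ** 2 * y0 * dy) / (x0 ** 2 + y0 ** 2) ** 2
change = u(x0 + dx, y0 + dy) - u(x0, y0)
print(du, change)   # both about 9.6e-7
```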

But for studying second-order conditions in the case of constrained


optimization, we need to be able to compute second order differentials.
We may state here a basic result. Given the equation z = f ( x, y ) , its second-
order total differential is:
d²z = f_xx dx² + 2f_xy dxdy + f_yy dy² + f_y d²y

Now let us consider the second-order sufficiency conditions in the case of


constrained optimum. Before proceeding, it is requested to go through the
previous unit again carefully, as there the concepts of total differentials and
quadratic forms were discussed, and these come in handy to study
constrained optimisation.
To proceed, consider the objective function

z = f(x, y)

subject to the constraint

g(x, y) = c

Here c is a constant.

Proceeding systematically, we write the Lagrangian function as

L = f(x, y) + λ[c − g(x, y)]

For stationary values of L, regarded as a function of the three variables
λ, x, y, the necessary conditions are

L_x = f_x − λg_x = 0

L_y = f_y − λg_y = 0

L_λ = c − g(x, y) = 0

The second-order necessary and sufficient conditions are still related to
the algebraic sign of the second-order total differential d²z evaluated at a
stationary point, as in the case of unconstrained optimisation that you studied
in Unit 8 in Block 4, but there is one difference in the case of constrained
optimisation. In constrained optimisation, we are concerned with the sign
definiteness or semi-definiteness of d²z not for all possible values of
dx and dy, but only for those dx and dy values satisfying the linear
constraint g_x dx + g_y dy = 0. Thus the second-order necessary conditions are:

For a maximum of z: d²z negative semidefinite, subject to dg = 0

For a minimum of z: d²z positive semidefinite, subject to dg = 0.

The second-order sufficient conditions are:


For a maximum of z: d²z negative definite, subject to dg = 0

For a minimum of z: d²z positive definite, subject to dg = 0.

Let us concentrate on the second-order sufficient conditions. In unit 8, we


saw that the second-order sufficient condition could be expressed using the
Hessian determinant. However, in the case of a constrained extremum, we
come across what is called a bordered Hessian. The determinant used in this
case is nothing but the original determinant with a border (row) placed on top
and a similar border (column) on the left. Moreover, this border is merely
composed of the coefficients from the constraint, with a zero on the principal
diagonal. We had expressed the second-order differential as: given the
equation z = f(x, y), its second-order total differential is:

d²z = f_xx dx² + 2f_xy dxdy + f_yy dy² + f_y d²y.

Now recall the first-order conditions that we mentioned a little while ago:

L_x = f_x − λg_x = 0

L_y = f_y − λg_y = 0

L_λ = c − g(x, y) = 0

We can partially differentiate the derivatives again to get:


L_xx = f_xx − λg_xx

L_yy = f_yy − λg_yy

L_xy = f_xy − λg_xy = L_yx

Making use of the Lagrangian, we can express d²z as:

d²z = L_xx dx² + L_xy dxdy + L_yx dydx + L_yy dy²

When we apply the bordered Hessian to the above case, we get the conditions:

d²z is positive definite subject to dg = 0 if the determinant

| 0    g_x   g_y  |
| g_x  L_xx  L_xy |  < 0
| g_y  L_yx  L_yy |

The condition for negative definiteness is similar, with the sign of the
bordered Hessian reversed (> 0).
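The sign test can be illustrated on the earlier Lagrange example, minimise (x − 1)² + y² subject to y² − 4x = 0, at its stationary point x = 0, y = 0, λ = 1/2 (a sketch; the numerical partials follow from that example):

```python
# Bordered Hessian for: minimise (x-1)^2 + y^2 subject to y^2 - 4x = 0,
# evaluated at the stationary point x = 0, y = 0, lam = 1/2 found earlier.
def det3(m):
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

y, lam = 0.0, 0.5
gx, gy = -4.0, 2 * y                 # partials of g = y^2 - 4x
Lxx, Lxy = 2.0, 0.0                  # with L = f - lam*g: f_xx = 2, g_xx = 0
Lyy = 2.0 - 2 * lam                  # f_yy = 2, g_yy = 2
H = [[0.0, gx, gy], [gx, Lxx, Lxy], [gy, Lxy, Lyy]]
print(det3(H))   # -16.0 < 0: d2z positive definite, so a minimum
```

The negative determinant confirms that the stationary point found in Section 9.2.2 is indeed a constrained minimum.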

For the general n-variable case, the condition for a relative constrained
extremum for

z = f(x1, ..., xn)

subject to g(x1, ..., xn) = c

with

L = f(x1, ..., xn) + λ[c − g(x1, ..., xn)]

is as follows. The first-order necessary condition for a maximum is

L_λ = L_1 = ... = L_n = 0

Here the subscript denotes the partial derivative of L with respect to that
variable. The first-order necessary condition for a minimum is the same.

The second-order sufficient condition for a maximum is

H̄_2 > 0; H̄_3 < 0; H̄_4 > 0; ..... (−1)ⁿ H̄_n > 0

For a minimum, the second-order condition is that

H̄_2, H̄_3, ..., H̄_n < 0

Here H̄ (the H with the bar on top) stands for the bordered Hessian, and the
subscripts stand for the various orders of the determinant.
Check Your Progress 2

1) What is a bordered Hessian?

……………………………………………………………………………

……………………………………………………………………………

……………………………………………………………………………

……………………………………………………………………………

……………………………………………………………………………
……………………………………………………………………………

2) State the second-order sufficient condition for a minimum in the case of


constrained minimum.
……………………………………………………………………………

……………………………………………………………………………

……………………………………………………………………………

……………………………………………………………………………
……………………………………………………………………………

9.4 ECONOMIC APPLICATIONS


9.4.1 Consumer’s Equilibrium
We first consider a simple case: maximisation of a utility function subject to
a budget constraint. Let the utility function be u(x, y) and the budget
constraint p_x·x + p_y·y = M. Using the Lagrange multiplier method we find
that the necessary first-order conditions require that u_x/u_y = p_x/p_y and
that the values of x and y are such that the budget constraint is satisfied. The
second-order condition would involve second-order partial derivatives of the
function u, but these partial derivatives have to be defined on the set of x, y
which satisfies the constraint. The constraint equation p_x·x + p_y·y = M
yields p_x dx + p_y dy = 0. We can solve for dx = −(p_y/p_x) dy. Now let us
go back to the concept of the quadratic form we discussed earlier. For
maximum utility we must have a stationary point at which the quadratic form
d²u = u_xx dx² + 2u_xy dxdy + u_yy dy² is negative definite. In the
constrained maximisation at hand we do not examine all dx and dy; instead
we limit ourselves to the dx, dy which fit the constraint, namely where
dx = −(p_y/p_x) dy is satisfied. To ensure this we substitute and get

d²u = u_xx (−(p_y/p_x) dy)² + 2u_xy (−(p_y/p_x) dy) dy + u_yy dy²

= [ (p_y²/p_x²) u_xx − 2(p_y/p_x) u_xy + u_yy ] dy²

= (u_xx p_y² − 2u_xy p_x p_y + u_yy p_x²) · dy²/p_x²

      | 0     p_x    p_y  |
  = − | p_x   u_xx   u_xy |  ·  dy²/p_x²
      | p_y   u_yx   u_yy |

The determinant is known as the bordered Hessian determinant for obvious


reasons.
For a maximum the quadratic form defined by d²u has to be negative; when
we constrain the function the quadratic form is transformed, but this form
also must be negative. As dy² and p_x² are both positive and the entire
expression starts with a negative sign, the bordered Hessian determinant must
be positive for a maximum. For a minimum, by a similar set of arguments, the
determinant must be negative.
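For a concrete case, take u = xy (so u_xx = u_yy = 0 and u_xy = 1) with p_x = p_y = 1 (an illustrative assumption, not from the text); the bordered determinant is then positive, confirming a maximum:

```python
# Bordered Hessian | 0 px py; px uxx uxy; py uyx uyy | for u = xy, px = py = 1.
def det3(m):
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

px, py = 1.0, 1.0
uxx, uyy, uxy = 0.0, 0.0, 1.0
H = [[0.0, px, py], [px, uxx, uxy], [py, uxy, uyy]]
print(det3(H))   # 2.0 > 0: d2u < 0 along the budget line, a maximum
```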

The utility maximisation problem discussed above is simple on two counts:


(i) the constraint is linear; and (ii) only two variables are considered. A
problem involving a non-linear constraint is to minimise the cost of attaining
a given level of utility. Formally this problem can be stated as:

Minimise p_x·x + p_y·y subject to u(x, y) = ū.

Here the constraint is a given indifference curve, which is a curve convex to
the origin. Again, any problem involving n variables, n > 2, cannot be handled
by the technique discussed above. The mathematics for both types of complexity
the technique discussed above. The mathematics for both types of complexity
is too involved to present here but we shall state the general result without
proof.

Suppose that the problem is:


Maximise f(x1, x2, ……, xn) subject to g(x1, x2, ……, xn) = 0, where g is a
non-linear function. Then the second-order condition is that the bordered
Hessian determinant D,

    | 0    g1          g2          ...  gn         |
    | g1   f11 − λg11  f12 − λg12  ...  f1n − λg1n |
D = | g2   f21 − λg21  f22 − λg22  ...  f2n − λg2n |
    | ...                                          |
    | gn   fn1 − λgn1  fn2 − λgn2  ...  fnn − λgnn |
has the sign (−1)ⁿ, and the principal minors have the sign (−1)ᵗ, where t is
the order of the principal minor, t ≥ 2.
For a minimum, D and its principal minors must all be negative. Let us try
this on the problem of minimising the cost of attaining a given level of utility.

9.4.2 Cost and Supply


Assume a smooth production function with variable inputs, say L and K,
standing for labour and capital. Let the production function be Q = Q ( L, K )
where Q_L, Q_K > 0. If w and r are the prices of labour and capital
respectively, the problem here is one of minimising the cost

C = wL + rK

subject to the output constraint Q(L, K) = Q₀

Hence the Lagrangian function is

Z = wL + rK + λ[Q₀ − Q(L, K)].

The first-order conditions are

Z_λ = Q₀ − Q(L, K) = 0

Z_L = w − λ(∂Q/∂L) = 0

Z_K = r − λ(∂Q/∂K) = 0
The last two conditions imply the condition
w/(∂Q/∂L) = r/(∂Q/∂K) = λ
The denominators above denote the marginal products of the inputs.

Thus at the point of optimal input combination, the price-marginal product


ratio must be the same for each input. Since this ratio measures the amount of
outlay per marginal product of each input, the Lagrangian multiplier can be
interpreted as the marginal cost of production in the optimum state.

The above equation can also be written as


w/r = MP_L / MP_K

This says that the ratio of marginal products, which is the marginal rate of
technical substitution, is equal to the ratio of the input prices.
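A numeric illustration (a hypothetical technology, not from the text: Q = √(LK), w = 4, r = 1, required output Q₀ = 10; the condition w/MP_L = r/MP_K gives K = 4L, hence L = 5, K = 20):

```python
# Least-cost inputs for Q = sqrt(L*K) = 10 with w = 4, r = 1.
def cost(L):
    K = 100.0 / L        # chosen so that sqrt(L*K) = 10 exactly
    return 4 * L + 1 * K

Ls = [k / 1000 for k in range(1000, 20001)]   # L from 1 to 20
L_star = min(Ls, key=cost)
print(L_star, 100.0 / L_star, cost(L_star))   # 5.0 20.0 40.0
```

The brute-force scan along the isoquant reproduces the tangency solution, with minimised cost 40.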

Check Your Progress 3
1) Briefly set out how the consumer’s equilibrium can be found using the
Lagrange multiplier method.

……………………………………………………………………………

……………………………………………………………………………

……………………………………………………………………………
……………………………………………………………………………

……………………………………………………………………………

2) How would you use the concept of constrained optimization to explain


least-cost combination of input use by a firm?

……………………………………………………………………………

……………………………………………………………………………
……………………………………………………………………………

……………………………………………………………………………
……………………………………………………………………………

9.5 LET US SUM UP


This unit extended the discussion about optimisation started in the previous unit.
The previous unit had discussed unconstrained optimization. This unit
provided a discussion about optimization in the presence of constraints.
Constraints, as we saw, introduce restrictions so that the constrained optimum
in general differs from the free optimum. The constraint, or side relation,
essentially narrows down the domain, and therefore the range of the objective
function.

The unit then went on to a discussion of how to find stationary values. It


mentioned that for very simple problems, a simple way could be substituting
the conditions of the constraint into the objective function, but pointed out
that this method may not work in more complicated cases. Then the Lagrange
multiplier method, a crucial technique and one of the key tools for a student
of economics, was discussed. The way the Lagrangian function is set up was
explained, and the idea of the multiplier was discussed.

Following this, the unit went on to a discussion of second-order conditions


for constrained optimization. The unit elaborated on the concept of bordered
Hessian and its use in deriving second-order condition for constrained
optimization. Finally the unit went on to discuss applications of constrained
optimization in economics. Instances of constrained optimization in
economics are ubiquitous, and constrained optimization forms a central tool
and technique in economics.
9.6 ANSWERS/HINTS TO CHECK YOUR PROGRESS EXERCISES

Check Your Progress 1


1) See subsection 9.2.1
2) See subsection 9.2.1

3) See subsection 9.2.2

Check Your Progress 2


1) See section 9.3

2) See section 9.3

Check Your Progress 3


1) See subsection 9.4.1

2) See subsection 9.4.2
