Algorithm For Unconstrained-Multivariable Case-2 (CH 6)
Muhammad Shafiq
Department of Industrial Engineering
University of Engineering and Technology
RANDOM WALK METHOD

$X_{i+1} = X_i + \lambda u_i$

in which
$\lambda$ is a predefined scalar step length
$u_i$ is a randomly generated unit vector in the $i$-th iteration
$u = \dfrac{1}{\sqrt{r_1^2 + r_2^2 + \cdots + r_n^2}} \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix}$

where $r_1, r_2, \ldots, r_n$ are random numbers, each lying in the interval $[-1, 1]$.

It should be noted that the unit vector $u$ generated above will be employed only if $R = r_1^2 + r_2^2 + \cdots + r_n^2 \le 1$; otherwise, another set of random numbers should be generated.
Compute the reduced step length $\lambda = \lambda/2$. If the new step length is smaller than or equal to $\epsilon$, go to step 9. Otherwise, go to step 4.
9. Stop the procedure: $X_{opt} \simeq X_1$, $f_{opt} \simeq f_1$
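A minimal NumPy sketch of this random walk procedure, under the assumptions flagged in the comments (the trial budget `max_fails` per step length and the function name `random_walk` are illustrative choices; the surviving steps only specify halving $\lambda$ and stopping once $\lambda \le \epsilon$):

```python
import numpy as np

def random_walk(f, x1, lam=1.0, eps=1e-3, max_fails=100, seed=0):
    """Random walk with step-length halving (a sketch of the steps above)."""
    rng = np.random.default_rng(seed)
    n = len(x1)
    x_best, f_best = np.asarray(x1, dtype=float), f(x1)
    while lam > eps:
        fails = 0
        while fails < max_fails:
            # Acceptance-rejection: keep r only if R = sum(r_i^2) <= 1
            r = rng.uniform(-1.0, 1.0, n)
            R = np.sum(r**2)
            if R > 1.0:
                continue
            u = r / np.sqrt(R)              # random unit vector
            x_new = x_best + lam * u        # X_{i+1} = X_i + lam * u_i
            f_new = f(x_new)
            if f_new < f_best:              # accept only improving steps
                x_best, f_best = x_new, f_new
                fails = 0
            else:
                fails += 1
        lam /= 2.0                          # reduce the step length and retry
    return x_best, f_best

# Example on the test function used later in these notes
f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
print(random_walk(f, [0.0, 0.0]))          # approaches X* = (-1.0, 1.5)
```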
The method can also be used with an optimized step length along each random direction:

$X_{i+1} = X_i + \lambda_i^* u_i$

in which $\lambda_i^*$ can be determined by solving:

$f_{i+1} = f(X_i + \lambda_i^* u_i) = \min_{\lambda_i} f(X_i + \lambda_i u_i)$
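A sketch of one such iteration; the use of scipy.optimize.minimize_scalar for the 1-D minimization is an assumption (any line-search routine would do):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def random_direction_step(f, x, seed=None):
    """One iteration: pick a random unit direction u, then choose the step
    length by a 1-D minimization of f along u."""
    rng = np.random.default_rng(seed)
    while True:
        r = rng.uniform(-1.0, 1.0, len(x))
        R = np.sum(r**2)
        if R <= 1.0:                      # same acceptance test as above
            break
    u = r / np.sqrt(R)
    res = minimize_scalar(lambda lam: f(x + lam * u))  # lam_i^* = argmin f(X_i + lam*u)
    return x + res.x * u

f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
x = np.zeros(2)
for i in range(50):
    x = random_direction_step(f, x, seed=i)
print(x, f(x))                            # drifts toward the minimizer (-1.0, 1.5)
```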
Notes:
The method can work for functions that are discontinuous or nondifferentiable at some points, or when the objective function possesses several relative minima (maxima).
The method is not very efficient. However, it can be used in the early stages of optimization to help detect the region where the global optimum is likely to be found. Then, other more efficient techniques can be used to find the precise optimum point.
Q #1
Write the Dichotomous search / Golden section method procedure (conditions, iterative process, termination condition, etc.) for a minimization problem.
Q #2
The gradient of $f(X)$ is the vector of its first partial derivatives:

$\nabla f = \begin{bmatrix} \partial f/\partial x_1 \\ \partial f/\partial x_2 \\ \vdots \\ \partial f/\partial x_n \end{bmatrix}$

If we move from $X$ by a small step $dX = u\,ds$ (where $u$ is a unit vector and $ds$ is the step length), the change in the function is

$df = \sum_{i=1}^{n} \dfrac{\partial f}{\partial x_i}\, dx_i = \nabla f(X)^T\, dX$

And hence, the rate of change of the function w.r.t. the step length $ds$ is given by

$\dfrac{df}{ds} = \sum_{i=1}^{n} \dfrac{\partial f}{\partial x_i} \dfrac{dx_i}{ds} = \nabla f(X)^T \dfrac{dX}{ds} = \nabla f(X)^T u$

or

$\dfrac{df}{ds} = \|\nabla f(X)\|\,\|u\| \cos\theta$

($\theta$: the angle between the vectors $\nabla f(X)$ and $u$)

So,
1. $\dfrac{df}{ds}$ will be maximum when $\theta = 0$, i.e., when $u$ is along $\nabla f(X)$ (the direction of ascent, since $\dfrac{df}{ds} > 0$)
2. $\dfrac{df}{ds}$ will be minimum when $\theta = 180^{\circ}$, i.e., when $u$ is along $-\nabla f(X)$ (the direction of descent, since $\dfrac{df}{ds} < 0$)
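A quick numerical check of the relation above (the test point and the finite-difference step are arbitrary choices made for illustration):

```python
import numpy as np

# Verify df/ds = grad(f)^T u by comparing against a finite-difference slope.
f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
grad = lambda x: np.array([1 + 4*x[0] + 2*x[1], -1 + 2*x[0] + 2*x[1]])

x = np.array([0.5, -0.3])                  # arbitrary test point
g = grad(x)
u = -g / np.linalg.norm(g)                 # descent direction (theta = 180 deg)

ds = 1e-6
fd_slope = (f(x + ds * u) - f(x)) / ds     # finite-difference df/ds
print(fd_slope, g @ u, -np.linalg.norm(g)) # all three agree: the most negative slope
```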
Convergence criteria
1. Change in function value in two consecutive iterations is small:
$\left| \dfrac{f(X_{i+1}) - f(X_i)}{f(X_i)} \right| \le \epsilon_1$
2. The partial derivatives are small:
$\left| \dfrac{\partial f}{\partial x_i} \right| \le \epsilon_2, \quad i = 1, 2, \ldots, n$
3. Change in the design vector in two consecutive iterations is small:
$\| X_{i+1} - X_i \| \le \epsilon_3$
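A direct translation of the three tests above (a sketch; the tolerance values and the choice to accept any one satisfied criterion are assumptions):

```python
import numpy as np

def converged(f_prev, f_curr, grad_curr, x_prev, x_curr,
              eps1=1e-6, eps2=1e-6, eps3=1e-6):
    """The three stopping tests listed above; tolerances are placeholders."""
    rel_change = abs(f_curr - f_prev) / max(abs(f_prev), 1e-12)  # guard against f_prev = 0
    c1 = rel_change <= eps1                                      # small relative change in f
    c2 = np.all(np.abs(grad_curr) <= eps2)                       # all partial derivatives small
    c3 = np.linalg.norm(np.asarray(x_curr) - np.asarray(x_prev)) <= eps3  # small move in X
    return c1 or c2 or c3                                        # any one criterion may be used
```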
Example (steepest descent method):
Minimize $f(x_1, x_2) = x_1 - x_2 + 2x_1^2 + 2x_1 x_2 + x_2^2$, starting from $X_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$.
We have
$\nabla f(X) = \begin{bmatrix} 1 + 4x_1 + 2x_2 \\ -1 + 2x_1 + 2x_2 \end{bmatrix}$
Iteration 1:
$\nabla f_1 = \nabla f(X_1) = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$
$S_1 = -\nabla f_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$
Find the optimal step length $\lambda_1^*$ by minimizing:
$f(X_1 + \lambda_1 S_1) = f(-\lambda_1,\; \lambda_1) = \lambda_1^2 - 2\lambda_1$
So,
$\lambda_1^* = 1$
$X_2 = X_1 + \lambda_1^* S_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$
$X_2$ is not optimum because $\nabla f_2 = \nabla f(X_2) = \begin{bmatrix} -1 \\ -1 \end{bmatrix} \ne 0$
Iteration 2:
$S_2 = -\nabla f_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$
Find the optimal step length $\lambda_2^*$ by minimizing:
$f(X_2 + \lambda_2 S_2) = f(-1 + \lambda_2,\; 1 + \lambda_2) = 5\lambda_2^2 - 2\lambda_2 - 1$
So,
$\lambda_2^* = 1/5$
$X_3 = X_2 + \lambda_2^* S_2 = \begin{bmatrix} -0.8 \\ 1.2 \end{bmatrix}$
$X_3$ is not optimum because $\nabla f_3 = \nabla f(X_3) = \begin{bmatrix} 0.2 \\ -0.2 \end{bmatrix} \ne 0$
Iteration 3:
$S_3 = -\nabla f_3 = \begin{bmatrix} -0.2 \\ 0.2 \end{bmatrix}$
Find the optimal step length $\lambda_3^*$ by minimizing:
$f(X_3 + \lambda_3 S_3) = f(-0.8 - 0.2\lambda_3,\; 1.2 + 0.2\lambda_3) = 0.04\lambda_3^2 - 0.08\lambda_3 - 1.20$
So,
$\lambda_3^* = 1.0$
$X_4 = X_3 + \lambda_3^* S_3 = \begin{bmatrix} -1.0 \\ 1.4 \end{bmatrix}$
$X_4$ is not optimum because $\nabla f_4 = \nabla f(X_4) = \begin{bmatrix} -0.2 \\ -0.2 \end{bmatrix} \ne 0$
The process is continued in this manner until the convergence criteria are satisfied.
The optimum point:
$X^* = \begin{bmatrix} -1.0 \\ 1.5 \end{bmatrix}, \qquad f(X^*) = -1.25$
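A compact NumPy sketch that reproduces the iterations above, using the closed-form exact line search available for a quadratic (the Hessian matrix A is written out explicitly for that purpose):

```python
import numpy as np

f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
grad = lambda x: np.array([1 + 4*x[0] + 2*x[1], -1 + 2*x[0] + 2*x[1]])
A = np.array([[4.0, 2.0], [2.0, 2.0]])     # Hessian of f (constant for this quadratic)

x = np.zeros(2)                            # X1 = (0, 0)
for i in range(1, 100):
    g = grad(x)
    if np.linalg.norm(g) <= 1e-6:          # convergence test on the gradient
        break
    s = -g                                 # steepest-descent direction S_i = -grad f_i
    lam = -(g @ s) / (s @ A @ s)           # exact line search for a quadratic
    x = x + lam * s                        # X_{i+1} = X_i + lam_i^* S_i
    print(i, x, f(x))                      # X2=(-1,1), X3=(-0.8,1.2), X4=(-1,1.4), ...
print("optimum:", x)                       # approaches X* = (-1.0, 1.5)
```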
Note:
The method of steepest descent is not really effective in most
problems because the steepest descent direction is a local
property. So, it is good for local search, not global!
CONJUGATE GRADIENT (FLETCHER-REEVES) METHOD
$X_2 = X_1 + \lambda_1^* S_1$
where $\lambda_1^*$ is the optimal step length in the direction $S_1 = -\nabla f_1$.
Set $i = 2$ and go to the next step.
4. Find $\nabla f_i = \nabla f(X_i)$ and set
$S_i = -\nabla f_i + \dfrac{\|\nabla f_i\|^2}{\|\nabla f_{i-1}\|^2}\, S_{i-1}$
Example:
Minimize $f(x_1, x_2) = x_1 - x_2 + 2x_1^2 + 2x_1 x_2 + x_2^2$, starting from $X_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$.
We have
$\nabla f(X) = \begin{bmatrix} 1 + 4x_1 + 2x_2 \\ -1 + 2x_1 + 2x_2 \end{bmatrix}$
Iteration 1:
$\nabla f_1 = \nabla f(X_1) = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$
$S_1 = -\nabla f_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$
Find the optimal step length $\lambda_1^*$ by minimizing:
$f(X_1 + \lambda_1 S_1) = f(-\lambda_1,\; \lambda_1) = \lambda_1^2 - 2\lambda_1$
So,
$\lambda_1^* = 1$
$X_2 = X_1 + \lambda_1^* S_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$
$X_2$ is not optimum because $\nabla f_2 = \nabla f(X_2) = \begin{bmatrix} -1 \\ -1 \end{bmatrix} \ne 0$
Iteration 2:
The next search direction is
$S_2 = -\nabla f_2 + \dfrac{\|\nabla f_2\|^2}{\|\nabla f_1\|^2}\, S_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \dfrac{2}{2}\begin{bmatrix} -1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \end{bmatrix}$
Find the optimal step length $\lambda_2^*$ by minimizing:
$f(X_2 + \lambda_2 S_2) = f(-1,\; 1 + 2\lambda_2) = 4\lambda_2^2 - 2\lambda_2 - 1$
So,
$\lambda_2^* = 1/4$
$X_3 = X_2 + \lambda_2^* S_2 = \begin{bmatrix} -1 \\ 1.5 \end{bmatrix}$
$X_3$ is optimum because $\nabla f_3 = \nabla f(X_3) = \begin{bmatrix} 0 \\ 0 \end{bmatrix} = 0$
(Check: with $\|\nabla f_2\|^2 = 2$ and $\nabla f_3 = 0$, the next search direction would be $S_3 = -\nabla f_3 + \dfrac{\|\nabla f_3\|^2}{\|\nabla f_2\|^2}\, S_2 = 0$, confirming that the search terminates at $X_3$.)
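A short NumPy sketch of the Fletcher-Reeves iteration on the same quadratic, again using an exact line search; for this two-variable quadratic it terminates at $X_3 = (-1, 1.5)$ in two iterations:

```python
import numpy as np

f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
grad = lambda x: np.array([1 + 4*x[0] + 2*x[1], -1 + 2*x[0] + 2*x[1]])
A = np.array([[4.0, 2.0], [2.0, 2.0]])        # Hessian, used only for the exact line search

x = np.zeros(2)
g = grad(x)
s = -g                                        # S1 = -grad f1
for i in range(1, 10):
    lam = -(g @ s) / (s @ A @ s)              # exact step length along S_i
    x = x + lam * s
    g_new = grad(x)
    print(i, x, f(x))
    if np.linalg.norm(g_new) <= 1e-8:         # optimum reached
        break
    beta = (g_new @ g_new) / (g @ g)          # Fletcher-Reeves coefficient
    s = -g_new + beta * s                     # S_{i+1} = -grad f_{i+1} + beta * S_i
    g = g_new
print("optimum:", x)                          # X* = (-1.0, 1.5)
```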
NEWTON'S METHOD
The Newton-Raphson method presented before for single-variable optimization can be generalized to the multivariable case.
Consider the quadratic approximation of $f(X)$ at $X = X_i$:

$f(X) \approx f(X_i) + \nabla f(X_i)^T (X - X_i) + \tfrac{1}{2}(X - X_i)^T [J_i] (X - X_i)$

in which $[J_i]$ is the Hessian matrix of $f(X)$ evaluated at $X = X_i$:

$[J_i] = \left.\left[\nabla^2 f(X)\right]\right|_{X = X_i}$

Setting the partial derivatives of $f(X)$ equal to 0 (i.e., $\nabla f(X) = 0$), we have:

$\nabla f(X_i) + [J_i](X - X_i) = 0$

If $[J_i]$ is nonsingular, an improved approximation can be obtained as:

$X_{i+1} = X_i - [J_i]^{-1} \nabla f(X_i)$
Example:
Minimize $f(x_1, x_2) = x_1 - x_2 + 2x_1^2 + 2x_1 x_2 + x_2^2$, starting from $X_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$.
We have:
$\nabla f(X) = \begin{bmatrix} 1 + 4x_1 + 2x_2 \\ -1 + 2x_1 + 2x_2 \end{bmatrix}$
Hence,
$\nabla f(X_1) = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$
$[J_1] = \begin{bmatrix} 4 & 2 \\ 2 & 2 \end{bmatrix}, \qquad [J_1]^{-1} = \dfrac{1}{4}\begin{bmatrix} 2 & -2 \\ -2 & 4 \end{bmatrix} = \begin{bmatrix} 1/2 & -1/2 \\ -1/2 & 1 \end{bmatrix}$
$X_2 = X_1 - [J_1]^{-1} \nabla f(X_1) = \begin{bmatrix} 0 \\ 0 \end{bmatrix} - \begin{bmatrix} 1/2 & -1/2 \\ -1/2 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} -1 \\ 3/2 \end{bmatrix}$
$\nabla f(X_2) = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$: Stop
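A minimal NumPy sketch of the Newton step on this example; it reaches the optimum in a single iteration, as the proof that follows explains:

```python
import numpy as np

grad = lambda x: np.array([1 + 4*x[0] + 2*x[1], -1 + 2*x[0] + 2*x[1]])
hess = lambda x: np.array([[4.0, 2.0], [2.0, 2.0]])   # Hessian [J] of f (constant here)

x = np.zeros(2)                                       # X1 = (0, 0)
for i in range(1, 20):
    g = grad(x)
    if np.linalg.norm(g) <= 1e-10:
        break
    x = x - np.linalg.solve(hess(x), g)               # X_{i+1} = X_i - [J_i]^{-1} grad f(X_i)
    print(i, x)                                       # prints (-1.0, 1.5) after one step
```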
It should be noted that Newton's method will find the optimum of a quadratic function in only one iteration. This can be proved as follows:
Consider the general form of a quadratic function:

$f(X) = \tfrac{1}{2} X^T A X + B^T X + C$

The optimum of $f(X)$ is the solution of:

$\nabla f(X) = AX + B = 0 \;\Rightarrow\; X^* = -A^{-1}B$

Using Newton's method (with $[J_1] = A$):

$X_2 = X_1 - [J_1]^{-1} \nabla f(X_1) = X_1 - A^{-1}(A X_1 + B) = -A^{-1}B = X^*$ (Q.E.D.)
Notes:
Newton's method will converge if the initial point is sufficiently close to the solution.
To prevent divergence or convergence to saddle points/relative maxima, the method can be modified as

$X_{i+1} = X_i + \lambda_i^* S_i = X_i - \lambda_i^* [J_i]^{-1} \nabla f(X_i)$

in which $\lambda_i^*$ is the minimizing step length in the direction $S_i = -[J_i]^{-1} \nabla f(X_i)$.
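A sketch of this modified (step-length-controlled) Newton iteration; the use of scipy.optimize.minimize_scalar for $\lambda_i^*$ is an assumption, and the function and starting point are the ones used throughout these notes:

```python
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
grad = lambda x: np.array([1 + 4*x[0] + 2*x[1], -1 + 2*x[0] + 2*x[1]])
hess = lambda x: np.array([[4.0, 2.0], [2.0, 2.0]])

x = np.zeros(2)
for i in range(1, 20):
    g = grad(x)
    if np.linalg.norm(g) <= 1e-8:
        break
    s = -np.linalg.solve(hess(x), g)                      # S_i = -[J_i]^{-1} grad f(X_i)
    lam = minimize_scalar(lambda t: f(x + t * s)).x       # lam_i^* minimizes f along S_i
    x = x + lam * s                                       # X_{i+1} = X_i + lam_i^* S_i
print("optimum:", x)                                      # (-1.0, 1.5); here lam_1^* = 1
```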