Optimization Lesson 4 - Numerical Solutions of Unconstrained Single-Variable Optimization

This document discusses numerical solution techniques for unconstrained single-variable optimization problems, focusing on methods such as bracketing, region elimination, and gradient-based techniques. It details algorithms for the Exhaustive Search, Golden Section Search, Bisection, and Newton-Raphson methods, emphasizing their application in finding minima of objective functions. Additionally, it covers the numerical calculation of first and second derivatives using the central difference method.


DESIGN OF FRICTIONAL MACHINE ELEMENTS

(ME3202)

Module 7: Optimization in Design


Lesson 4: Numerical Solution Techniques for the
Unconstrained Single-Variable Optimization Problems

B. Tech., Mech. Engg., 6th Sem, 2022-23


Numerical Solution to the
Unconstrained Single Variable Optimization
• The objective function consists of only one design variable, which is not constrained.
• The minimum of a single-variable objective function is found in two phases. First, a lower and an upper bound on the minimum are estimated by a crude technique; methods of this first phase are known as the 1) Bracketing methods.
• Techniques categorized under the bracketing methods are:
a) Exhaustive Search technique
b) Bounding Phase technique
Bracketing Method – Exhaustive Search technique
The optimum of a function is bracketed by calculating the function values at a number of
equally spaced points. Usually, the search begins from a lower bound and three consecutive
function values are compared at a time.

If the optimum is not bracketed by the current triplet, the search is continued by shifting all three points towards the upper bound by one interval each. The search continues until the minimum is bracketed.
Exhaustive Search technique – Algorithm
Step 1 Set, 𝑥1 = 𝑎
Calculate, ∆𝑥 = (𝑏 − 𝑎)/𝑛, where 𝑛 is the number of intermediate points
Calculate, 𝑥2 = 𝑥1 + ∆𝑥
Calculate, 𝑥3 = 𝑥2 + ∆𝑥
Step 2 Check, if 𝑓(𝑥1) ≥ 𝑓(𝑥2) ≤ 𝑓(𝑥3) then, the minimum lies in (𝑥1, 𝑥3), and Terminate;
else, Set, 𝑥1 = 𝑥2 ,
Set, 𝑥2 = 𝑥3 ,
Calculate, 𝑥3 = 𝑥2 + ∆𝑥,
Step 3 Check, if 𝑥3 ≤ 𝑏 then, go to Step 2,
else, print: ‘no minimum exists in (𝑎, 𝑏) or any of the boundary points (𝑎 or 𝑏) is the
minimum point’
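
The steps above translate almost directly into code. Below is a minimal Python sketch of the Exhaustive Search, assuming a user-supplied objective f, bounds (a, b) and a chosen number of intermediate points n; the objective in the usage line is only a hypothetical example, not part of the lesson.

def exhaustive_search(f, a, b, n):
    """Bracket the minimum of a unimodal f on (a, b) using n intermediate points.

    Returns a pair (x1, x3) that brackets the minimum, or None if no interior
    minimum is found (the minimum may then lie at a boundary point a or b).
    """
    dx = (b - a) / n                      # Step 1: spacing of the equally spaced points
    x1, x2, x3 = a, a + dx, a + 2 * dx
    while x3 <= b:                        # Step 3: stop once x3 leaves the interval
        if f(x1) >= f(x2) <= f(x3):       # Step 2: three-point comparison
            return x1, x3                 # the minimum lies in (x1, x3)
        x1, x2, x3 = x2, x3, x3 + dx      # shift the triplet one interval towards b
    return None

# Illustrative use on a hypothetical objective, whose minimum is near x = 3:
print(exhaustive_search(lambda x: x**2 + 54.0 / x, 0.5, 5.0, 10))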
Region Elimination Method
• After a suitable bracketing, in the second phase a more sophisticated method is used to search within these limits and find the optimal solution to the desired accuracy; such methods are called the 2) Region Elimination methods.
• Let us consider two points 𝑥1 and 𝑥2 which lie in
the interval (𝑎, 𝑏) and satisfy 𝑥1 < 𝑥2 . For
minimization of an unimodal function 𝑓(𝑥), we
can conclude the following:
– If 𝑓(𝑥1) > 𝑓(𝑥2) then the minimum does not lie in (𝑎, 𝑥1).
– If 𝑓(𝑥1) < 𝑓(𝑥2) then the minimum does not lie in (𝑥2, 𝑏).
– If 𝑓(𝑥1) = 𝑓(𝑥2) then the minimum does not lie in (𝑎, 𝑥1) or in (𝑥2, 𝑏).
This fundamental region-elimination rule is sketched in code after the list of methods below.
• Methods like: a) Interval Halving technique, b) Fibonacci Search technique and c) Golden
Section Search technique are categorized under region elimination methods.
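
As a minimal sketch of the fundamental region-elimination rule stated above (assuming a unimodal objective f supplied by the caller):

def eliminate(f, a, b, x1, x2):
    """Apply the fundamental region-elimination rule on (a, b), with x1 < x2.

    Returns the reduced interval that must still contain the minimum of the
    unimodal function f.
    """
    f1, f2 = f(x1), f(x2)
    if f1 > f2:
        return x1, b        # the minimum does not lie in (a, x1)
    if f1 < f2:
        return a, x2        # the minimum does not lie in (x2, b)
    return x1, x2           # the minimum does not lie in (a, x1) or (x2, b)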
Golden Section Search technique
The search space (𝑎, 𝑏) is first linearly mapped to a unit-interval search space (0, 1). Then, two points are chosen at a distance 𝑇 𝐿𝑤 from either end of the current search space of width 𝐿𝑤, so that at every iteration a fraction (1 − 𝑇) of the remaining region is eliminated. 𝑇 is the conjugate golden ratio, 𝑇 = 0.618.
Golden Section Search – Algorithm
Step 1 Choose a lower bound 𝑎 and an upper bound 𝑏.
Choose a small number 𝜀 (minimum allowable width of the interval)
Normalize the variable 𝑥 by using: 𝑤 = (𝑥 − 𝑎)/(𝑏 − 𝑎); thus, 𝑎𝑤 = 0, 𝑏𝑤 = 1
Set, 𝑖 = 1
Step 2 Calculate 𝐿𝑤 = 𝑏𝑤 − 𝑎𝑤
Set, 𝑤1 = 𝑎𝑤 + 0.618 𝐿𝑤 and 𝑤2 = 𝑏𝑤 − 0.618 𝐿𝑤 .
Compute 𝑓(𝑤1 ) or 𝑓(𝑤2 ), depending on whichever was not evaluated earlier.
Step 3 Use the fundamental region-elimination rule to eliminate a region.
Step 4 Set new 𝑎𝑤 and 𝑏𝑤 for the remaining region
Calculate 𝐿𝑤 = 𝑏𝑤 − 𝑎𝑤
Step 5 Check, if |𝐿𝑤| < 𝜀 then Terminate,
else, set, 𝑖 = 𝑖 + 1, go to Step 2.
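
A minimal Python sketch of the algorithm above, assuming a unimodal objective f on (a, b). For brevity both interior points are re-evaluated in every iteration, whereas Step 2 of the algorithm reuses the point carried over from the previous iteration.

def golden_section_search(f, a, b, eps=1e-5):
    """Golden Section Search for the minimum of a unimodal f on (a, b),
    working on the normalized variable w = (x - a)/(b - a)."""
    T = 0.618                                 # conjugate golden ratio
    fw = lambda w: f(a + w * (b - a))         # objective in the w-space

    aw, bw = 0.0, 1.0                         # Step 1: normalized bounds
    while True:
        Lw = bw - aw                          # Step 2: current interval width
        w1, w2 = aw + T * Lw, bw - T * Lw     # note that w2 < w1
        if fw(w2) > fw(w1):                   # Step 3: region-elimination rule
            aw = w2                           # minimum not in (aw, w2)
        else:
            bw = w1                           # minimum not in (w1, bw)
        if abs(bw - aw) < eps:                # Step 5: interval small enough
            break
    return a + 0.5 * (aw + bw) * (b - a)      # map the midpoint back to x

# Illustrative use on the same hypothetical objective as before:
print(golden_section_search(lambda x: x**2 + 54.0 / x, 0.5, 5.0))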
Point Estimation method
• Instead of relying on a relative comparison of the function values at different points, as in the previous methods, the 3) Point Estimation methods evaluate the function at suitably chosen points and use those values to estimate the location of the minimum in the search space, as in the a) Successive Quadratic Estimation technique.

Successive Quadratic Estimation technique


• The evaluated points (𝑥𝑖, 𝑓(𝑥𝑖)), i.e. the function values 𝑓(𝑥𝑖) at different locations 𝑥𝑖, are used to fit a known unimodal function (for example a quadratic function 𝑞(𝑥)), whose known minimum 𝑥̄ gives an estimate of the actual minimum (𝑥∗) of the original objective function 𝑓(𝑥).
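
A minimal sketch of the quadratic estimate, assuming three points x1 < x2 < x3 that bracket the minimum are already available; the divided-difference formula below is the stationary point of the parabola fitted through the three points.

def quadratic_estimate(f, x1, x2, x3):
    """Estimate x_bar, the minimizer of the quadratic q(x) fitted through
    (x1, f(x1)), (x2, f(x2)) and (x3, f(x3))."""
    f1, f2, f3 = f(x1), f(x2), f(x3)
    a1 = (f2 - f1) / (x2 - x1)                      # first divided difference
    a2 = ((f3 - f1) / (x3 - x1) - a1) / (x3 - x2)   # second divided difference (> 0 for a minimum)
    return 0.5 * (x1 + x2) - a1 / (2.0 * a2)        # stationary point of q(x)

# Illustrative use with three points around the minimum of the same hypothetical objective:
print(quadratic_estimate(lambda x: x**2 + 54.0 / x, 1.0, 3.0, 5.0))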
Gradient Based Methods
• In many real-world problems it is difficult to obtain derivative information, either due to the nature of the problem or due to the computations involved in calculating the derivatives. 4) Gradient-based methods are effective where the derivative information is available or can be calculated easily.
• The optimality property that ‘at a local or a global optimum the gradient is zero’ can be
used to terminate the search process.
• Most popular gradient based methods are: a) Bisection technique, b) Newton-Raphson
technique, c) Secant technique, and d) Cubic search technique.
Bisection technique
• The Bisection method uses the function values and the sign of the first derivative at two points to eliminate a part of the search space. It does not use the second derivative.
• It is, to some extent, similar to the region-elimination methods.
• Unimodality of the function within the search space is required.
• If the gradients 𝑓′(𝑎) and 𝑓′(𝑏) at the two bounds 𝑎 and 𝑏 are of opposite signs, the optimum lies within the bounds.
• The gradient of the function changes sign across the optimum point and becomes zero at the optimum point.
Bisection technique – Algorithm
Step 1: Assume the bounds 𝑎, 𝑏 such that 𝑓′(𝑎) < 0 and 𝑓′(𝑏) > 0
Set tolerance 𝜀
Step 2: Compute 𝑥∗ = (𝑎 + 𝑏)/2
Evaluate 𝑓′(𝑥∗)
Step 3: Check, if |𝑓′(𝑥∗)| < 𝜀, then Terminate
Else if 𝑓′(𝑎) 𝑓′(𝑥∗) > 0, then Set 𝑎 = 𝑥∗ and go to Step 2
Else if 𝑓′(𝑏) 𝑓′(𝑥∗) > 0, then Set 𝑏 = 𝑥∗ and go to Step 2
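
A minimal sketch of the bisection steps above, assuming the first derivative is available as a function df with df(a) < 0 < df(b); the derivative in the usage line belongs to the same hypothetical objective used earlier.

def bisection(df, a, b, eps=1e-6, max_iter=100):
    """Bisection on the first derivative df to locate the minimum in (a, b)."""
    for _ in range(max_iter):
        x = 0.5 * (a + b)              # Step 2: midpoint of the current bounds
        g = df(x)
        if abs(g) < eps:               # Step 3: derivative close enough to zero
            break
        if df(a) * g > 0:              # same sign as at a -> replace the lower bound
            a = x
        else:                          # same sign as at b -> replace the upper bound
            b = x
    return x

# Illustrative use with the analytical derivative f'(x) = 2x - 54/x^2:
print(bisection(lambda x: 2 * x - 54.0 / x**2, 0.5, 5.0))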
Newton-Raphson’s technique
• For the function to attain an extremum at 𝑥𝑖 + ℎ, where ℎ is the departure from the current point 𝑥𝑖 in the 𝑖-th iteration, the first derivative of 𝑓(𝑥𝑖 + ℎ) with respect to ℎ is expanded using the Taylor series and equated to zero to find the next guess:
(𝑑/𝑑ℎ) 𝑓(𝑥𝑖 + ℎ) = (𝑑/𝑑ℎ) [ 𝑓(𝑥𝑖) + ℎ 𝑓′(𝑥𝑖) + (ℎ²/2!) 𝑓′′(𝑥𝑖) + ⋯ higher order terms ] = 0
• As ℎ is small, the higher order terms can be neglected, giving 𝑓′(𝑥𝑖) + ℎ 𝑓′′(𝑥𝑖) ≈ 0, i.e. ℎ = −𝑓′(𝑥𝑖)/𝑓′′(𝑥𝑖).
• The next, (𝑖 + 1)-th, guess: 𝑥𝑖+1 = 𝑥𝑖 − 𝑓′(𝑥𝑖)/𝑓′′(𝑥𝑖).

• The convergence depends on the initial point and the nature of the objective function.
Newton-Raphson’s technique – Algorithm
Step 1 Set 𝑖 = 1.
Choose an initial guess 𝑥𝑖
Compute 𝑓′(𝑥𝑖)
Choose a small number 𝜀 (minimum allowable magnitude of the first derivative).
Step 2 Compute 𝑓′′(𝑥𝑖)
Step 3 Calculate 𝑥𝑖+1 = 𝑥𝑖 − 𝑓′(𝑥𝑖)/𝑓′′(𝑥𝑖)
Step 4 Compute 𝑓′(𝑥𝑖+1)
Step 5 Check, if |𝑓′(𝑥𝑖+1)| < 𝜀 then Terminate
Else, set 𝑖 = 𝑖 + 1 and go to Step 2
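
A minimal sketch of the algorithm, assuming the first and second derivatives are available as functions df and d2f; as noted above, convergence depends on the initial guess x0.

def newton_raphson(df, d2f, x0, eps=1e-6, max_iter=50):
    """Newton-Raphson search for a point where the first derivative vanishes."""
    x = x0
    for _ in range(max_iter):
        if abs(df(x)) < eps:           # Step 5: |f'(x)| small enough, terminate
            break
        x = x - df(x) / d2f(x)         # Step 3: Newton update for x_{i+1}
    return x

# Illustrative use with the analytical derivatives of the same hypothetical objective:
print(newton_raphson(lambda x: 2 * x - 54.0 / x**2,   # f'(x)
                     lambda x: 2 + 108.0 / x**3,      # f''(x)
                     x0=1.0))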
Numerical Calculation of First & Second Derivative
• In practice, the gradients have to be computed numerically.
• At a point 𝑥, the first and second derivatives are computed as follows, using the central
difference method:

𝑓′(𝑥) = [𝑓(𝑥 + ∆𝑥) − 𝑓(𝑥 − ∆𝑥)] / (2 ∆𝑥)

and,

𝑓′′(𝑥) = [𝑓(𝑥 + ∆𝑥) − 2 𝑓(𝑥) + 𝑓(𝑥 − ∆𝑥)] / (∆𝑥)²
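
A minimal Python sketch of these central-difference estimates; ∆x is a small, user-chosen step, and the two helpers can stand in for the analytical derivatives required by the gradient-based methods above.

def first_derivative(f, x, dx=1e-4):
    """Central-difference estimate of f'(x)."""
    return (f(x + dx) - f(x - dx)) / (2.0 * dx)

def second_derivative(f, x, dx=1e-4):
    """Central-difference estimate of f''(x)."""
    return (f(x + dx) - 2.0 * f(x) + f(x - dx)) / dx**2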
