Optimization in Rubber Industry
COLLEGE OF ENGINEERING
AHMEDABAD – 380 015
Index (3724003)

Sr. No | Title | Pages | Date of Start | Date of Completion | Initial of Staff
1  | Tutorial - 01 (Newton's Method)            | 01-03 | 23-02-2024 | 01-03-2024 |
2  | Tutorial - 02 (Newton-Raphson Method)      | 04-13 | 01-03-2024 | 15-03-2024 |
3  | Tutorial - 03 (Secant Method)              | 14-16 | 15-03-2024 | 22-03-2024 |
4  | Tutorial - 04 (Word based problems)        | 17-20 | 22-03-2024 | 05-04-2024 |
5  | Tutorial - 05 (Golden Section Method)      | 21-22 | 05-04-2024 | 12-04-2024 |
6  | Tutorial - 06 (Fibonacci Search Method)    | 23-26 | 12-04-2024 | 19-04-2024 |
7  | Tutorial - 07 (Simplex Method)             | 27-31 | 19-04-2024 | 26-04-2024 |
8  | Tutorial - 08 (Box Complex Method)         | 32-34 | 26-04-2024 | 03-05-2024 |
9  | Tutorial - 09 (Genetic Algorithm)          | 35-41 | 03-05-2024 | 17-05-2024 |
10 | Tutorial - 10 (Mixed Integer Linear Programming - MILP) | 42-44 | 17-05-2024 | 24-05-2024 |
Tutorial - 01
Newton’s Method
1.1 Find the minimum of f(x) = (1/2) x² - sin x, where x0 = 0.5.

f(x) = (1/2) x² - sin x

f'(x) = x - cos x

f''(x) = 1 + sin x

As per the formula,

x_{n+1} = x_n - f'(x_n) / f''(x_n)

x_1 = x_0 - f'(x_0) / f''(x_0)
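The iteration above can be sketched in Python (a minimal sketch; the function and parameter names are mine, not from the tutorial):

```python
import math

def newton_min(df, d2f, x0, tol=1e-12, max_iter=50):
    """Newton's method for a 1-D minimum: iterate x <- x - f'(x)/f''(x)."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = (1/2) x^2 - sin x, so f'(x) = x - cos(x) and f''(x) = 1 + sin(x)
x_star = newton_min(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x), 0.5)
print(x_star)  # the minimizer solves x = cos x, about 0.739085
```

Setting f'(x) = 0 here is the fixed-point equation x = cos x, so the iterate converges to its well-known solution 0.739085.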
1.2 Find the minimum of g(x) = x³ - 12.2 x² + 7.45 x + 42, where x0 = 12.

g(x) = x³ - 12.2 x² + 7.45 x + 42

g'(x) = 3x² - 24.4 x + 7.45

g''(x) = 6x - 24.4

As per the formula,

x_{n+1} = x_n - g'(x_n) / g''(x_n)

x_1 = x_0 - g'(x_0) / g''(x_0)
1.3 Minimize the function f(x) = x² - x, where x0 = 3.

f(x) = x² - x

f'(x) = 2x - 1

f''(x) = 2

As per the formula,

x_{n+1} = x_n - f'(x_n) / f''(x_n)

x_1 = x_0 - f'(x_0) / f''(x_0)
Tutorial - 02
Newton-Raphson Method
2.1 Use the Newton method to find a root of y³ - 2y - 5 = 0.

f(y) = y³ - 2y - 5

f'(y) = 3y² - 2

y_{n+1} = y_n - f(y_n) / f'(y_n)

∴ y_{n+1} = y_n - (y_n³ - 2 y_n - 5) / (3 y_n² - 2)

∴ y_{n+1} = (3 y_n³ - 2 y_n - y_n³ + 2 y_n + 5) / (3 y_n² - 2)

∴ y_{n+1} = (2 y_n³ + 5) / (3 y_n² - 2)

n | y_n | 2 y_n³ + 5 | 3 y_n² - 2 | y_{n+1}
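The simplified recurrence can be run directly in Python (a sketch; the starting guess 2.0 is my choice):

```python
def f(y):
    return y**3 - 2*y - 5

def df(y):
    return 3*y**2 - 2

y = 2.0                      # starting guess near the root
for _ in range(20):
    y = y - f(y) / df(y)     # equivalently y = (2 y^3 + 5) / (3 y^2 - 2)

print(y)  # about 2.0945515
```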
2.2 Find all solutions of e^(2x) = x + 6, correct to 4 decimal places; use the Newton method.

f(x) = e^(2x) - x - 6

f'(x) = 2 e^(2x) - 1

x_{n+1} = x_n - f(x_n) / f'(x_n)

Since f(0.95) < 0 and f(1) > 0, by the intermediate value theorem a root lies between 0.95 and 1; because f(0.95) is the closer of the two values to zero, the root lies nearer 0.95. Take x_0 = 0.97 as the starting point.
2.3 Find all solutions of 5x + ln x = 10000, correct to 4 decimal places; use the Newton method.

f(x) = 5x + ln x - 10000

f'(x) = 5 + 1/x

x_{n+1} = x_n - f(x_n) / f'(x_n)

n | x_n | 5 x_n + ln x_n - 10000 | 5 + 1/x_n | x_{n+1}
2.4 Use the Newton-Raphson method with 3 as the starting point to find a fraction that is within 10⁻⁸ of √10. Show that your answer is indeed within 10⁻⁸ of the truth.

Here, x_0 = 3

f(x) = x² - 10

f'(x) = 2x

x_{n+1} = x_n - f(x_n) / f'(x_n)

n | x_n | x_n² - 10 | 2 x_n | x_{n+1}
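Because the problem asks for a fraction, the iteration can be run in exact rational arithmetic (a sketch using Python's `fractions` module; the choice of four steps is mine):

```python
from fractions import Fraction
import math

x = Fraction(3)                      # keep the iterate as an exact fraction
for _ in range(4):
    x = x - (x * x - 10) / (2 * x)   # Newton step for f(x) = x^2 - 10

error = abs(float(x) - math.sqrt(10))
print(x, error)  # the error is already below 1e-8 after four steps
```

Quadratic convergence roughly doubles the number of correct digits per step, so 4 steps from x_0 = 3 comfortably beat the 10⁻⁸ target.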
2.5 Let f(x) = x² - a. Show that the Newton method leads to the recurrence

x_{n+1} = (1/2) (x_n + a / x_n)

f(x) = x² - a

f'(x) = 2x

x_{n+1} = x_n - f(x_n) / f'(x_n)

∴ x_{n+1} = x_n - (x_n² - a) / (2 x_n)

∴ x_{n+1} = (2 x_n² - x_n² + a) / (2 x_n)

∴ x_{n+1} = (x_n² + a) / (2 x_n)

∴ x_{n+1} = x_n / 2 + a / (2 x_n)

∴ x_{n+1} = (1/2) (x_n + a / x_n)
2.6 Use the equation 1/x = 1.37 and the Newton method to find 1/1.37 correct to 8 decimal places.

f(x) = 1.37 - 1/x = a - 1/x and f'(x) = 1/x²

x_{n+1} = x_n - f(x_n) / f'(x_n)

∴ x_{n+1} = x_n - (a - 1/x_n) / (1/x_n²)

∴ x_{n+1} = x_n - [x_n² (a - 1/x_n)]

∴ x_{n+1} = x_n - [a x_n² - x_n]

∴ x_{n+1} = x_n + x_n - a x_n²

∴ x_{n+1} = x_n (2 - a x_n)

∴ x_{n+1} = x_n (2 - 1.37 x_n)

n | x_n | 2 - 1.37 x_n | x_{n+1} = x_n (2 - 1.37 x_n)
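Notice that the final recurrence uses no division at all, which is why this scheme is historically used to compute reciprocals. A quick Python sketch (the starting value 0.7 is my choice; any start in (0, 2/a) converges):

```python
a = 1.37
x = 0.7                      # any start in (0, 2/a) converges
for _ in range(10):
    x = x * (2 - a * x)      # division-free Newton update for 1/a

print(f"{x:.8f}")  # 1/1.37 = 0.72992701 to 8 decimal places
```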
2.7(a) A devotee of Newton-Raphson used the method to solve the equation x¹⁰⁰ = 0, using the initial estimate x_0 = 0.1. Calculate the next 5 Newton estimates.

f(x) = x¹⁰⁰

f'(x) = 100 x⁹⁹

x_{n+1} = x_n - f(x_n) / f'(x_n)

∴ x_{n+1} = x_n - x_n¹⁰⁰ / (100 x_n⁹⁹)

∴ x_{n+1} = x_n - x_n / 100

∴ x_{n+1} = (99/100) x_n

∴ x_{n+1} = 0.99 x_n

n | x_n | x_{n+1} = 0.99 x_n
0 | 0.100000 | 0.099000
1 | 0.099000 | 0.098010
2 | 0.098010 | 0.097030
3 | 0.097030 | 0.096060
4 | 0.096060 | 0.095099
2.7(b) The devotee then tried to use the method to solve 3 x^(1/3) = 0, using x_0 = 0.1. Calculate the next 10 estimates.

f(x) = 3 x^(1/3)

f'(x) = x^(-2/3)

x_{n+1} = x_n - f(x_n) / f'(x_n)

∴ x_{n+1} = x_n - 3 x_n^(1/3) / x_n^(-2/3)

∴ x_{n+1} = x_n - 3 x_n^(1/3 + 2/3)

∴ x_{n+1} = x_n - 3 x_n

∴ x_{n+1} = -2 x_n

n | x_n | x_{n+1} = -2 x_n

Each estimate is -2 times the previous one, so the iterates x_n = (-2)ⁿ (0.1) grow in magnitude at every step and the method diverges.
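The divergence is easy to demonstrate by generating the table (a short Python sketch):

```python
x = 0.1
estimates = []
for n in range(10):
    x = -2 * x               # x_{n+1} = -2 x_n
    estimates.append(x)

print(estimates)  # -0.2, 0.4, -0.8, ..., 102.4: magnitude doubles every step
```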
2.8 Suppose that f(x) = e^(-1/x²) if x ≠ 0, and f(x) = 0 if x = 0. Show that if x_0 = 0.0001, it takes more than 100 million iterations of the Newton method to get below the value of 0.00005.

f(x) = e^(-1/x²) and f'(x) = e^(-1/x²) · (2/x³) = 2 e^(-1/x²) / x³

x_{n+1} = x_n - f(x_n) / f'(x_n)

∴ x_{n+1} = x_n - [e^(-1/x_n²) ÷ (2 e^(-1/x_n²) / x_n³)]

∴ x_{n+1} = x_n - x_n³ e^(-1/x_n²) / (2 e^(-1/x_n²))

∴ x_{n+1} = x_n - x_n³ / 2

∴ x_n - x_{n+1} = x_n³ / 2

Each step therefore reduces x_n by x_n³/2 ≤ x_0³/2 = 5 × 10⁻¹³. To move from x_0 = 0.0001 down to 0.00005 the iterates must cover a distance of 5 × 10⁻⁵, which needs at least (5 × 10⁻⁵) / (5 × 10⁻¹³) = 10⁸ steps; since the decrement shrinks as x_n decreases, it in fact takes more than 100 million iterations.
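The counting argument can be checked numerically without running 10⁸ iterations (a sketch of the lower bound only):

```python
x0, target = 1e-4, 5e-5
max_step = x0**3 / 2                      # largest possible decrement, 5e-13
n_lower_bound = (x0 - target) / max_step  # distance / largest step
print(n_lower_bound)  # about 1e8, i.e. at least 100 million iterations
```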
2.9 Use the Newton method to find the smallest and the second smallest positive roots of the equation tan x = 4x, correct to 4 decimal places.

f(x) = tan x - 4x and f'(x) = sec² x - 4

x_{n+1} = x_n - f(x_n) / f'(x_n)

∴ x_{n+1} = x_n - (tan x_n - 4 x_n) / (sec² x_n - 4)
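A Python sketch of the iteration (the two starting points are my choices, placed near each crossing so Newton stays on the right branch of tan):

```python
import math

def newton_tan(x, iters=50):
    """Newton iteration for f(x) = tan x - 4x."""
    for _ in range(iters):
        f = math.tan(x) - 4 * x
        fp = 1 / math.cos(x) ** 2 - 4    # sec^2 x - 4
        x -= f / fp
    return x

r1 = newton_tan(1.4)    # near the first positive crossing, in (0, pi/2)
r2 = newton_tan(4.66)   # near the second, just below 3*pi/2
print(round(r1, 4), round(r2, 4))  # roughly 1.3932 and 4.6587
```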
Tutorial - 03
Secant Method
3.1 Using the secant method, solve f(x) = 0 where f(x) = x² - 2.

x² - 2 = 0

As per the formula,

x_{n+2} = [x_n f(x_{n+1}) - x_{n+1} f(x_n)] / [f(x_{n+1}) - f(x_n)]

Taking x_0 = 1 and x_1 = 1.5, so that f(x_0) = -1 and f(x_1) = 0.25.

Taking n = 0:

x_2 = [(1)(0.25) - (1.5)(-1)] / [(0.25) - (-1)] = (0.25 + 1.5) / (0.25 + 1) = 1.75/1.25 = 1.4

f(x_2) = -0.04

x_3 = [(1.5)(-0.04) - (1.4)(0.25)] / [(-0.04) - (0.25)] = (-0.06 - 0.35) / (-0.29) = (-0.41)/(-0.29) = 1.4137931

f(x_3) = -0.00118907

x_4 = [(1.4)(-0.00118907) - (1.4137931)(-0.04)] / [(-0.00118907) - (-0.04)] = (-0.0016647 + 0.0565517) / (0.0388109) = 0.0548870/0.0388109 = 1.4142157

which is already close to √2 = 1.4142136.
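The same computation can be automated (a sketch; names and the stopping tolerance are mine):

```python
import math

def f(x):
    return x * x - 2

def secant(f, x0, x1, tol=1e-12, max_iter=60):
    # x_{n+2} = (x_n f(x_{n+1}) - x_{n+1} f(x_n)) / (f(x_{n+1}) - f(x_n))
    for _ in range(max_iter):
        denom = f(x1) - f(x0)
        if denom == 0:
            break
        x2 = (x0 * f(x1) - x1 * f(x0)) / denom
        x0, x1 = x1, x2
        if abs(f(x1)) < tol:
            break
    return x1

root = secant(f, 1.0, 1.5)
print(root)  # converges to sqrt(2) = 1.4142135...
```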
3.2 Use the secant method to find the root of f(x) = cos x + 2 sin x + x² to 5 decimal places.

As per the formula,

x_{n+2} = [x_n f(x_{n+1}) - x_{n+1} f(x_n)] / [f(x_{n+1}) - f(x_n)]
3.3 Use the secant method to find the root of f(x) = x³ - 4 to 5 decimal places.

As per the formula,

x_{n+2} = [x_n f(x_{n+1}) - x_{n+1} f(x_n)] / [f(x_{n+1}) - f(x_n)]
Tutorial - 04
4.1 It is required to replace a length of piping within a process plant. Since the site is cramped, the pipe run must be brought onto the site in sections and welded in situ. To reduce the cost of welding, the pipe should be brought in sections that are as long as possible. If the headroom is low and the pipe has to be carried horizontally, what is the longest section that can be passed around the corner from a 10-feet-wide corridor into a 6-feet-wide corridor?

[Figure: a pipe of length y at angle α to the 10-feet corridor, turning the corner into the 6-feet corridor]

y = 10/sin α + 6/cos α

∴ y' = -10 cos α / sin² α + 6 sin α / cos² α = 0

∴ 6 sin³ α - 10 cos³ α = 0

∴ tan³ α = 10/6

∴ tan α = 1.186

∴ α = tan⁻¹(1.186) = 49.85°

∴ y = 22.38 feet
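The result can be verified numerically (a sketch; the scan over α confirms that the stationary point really is the minimum of y):

```python
import math

# Optimality condition from dy/d(alpha) = 0:  tan^3(alpha) = 10/6
alpha = math.atan((10 / 6) ** (1 / 3))
y = 10 / math.sin(alpha) + 6 / math.cos(alpha)

# Coarse scan over (0, pi/2) to confirm this is the minimum of y(alpha)
scan = min(10 / math.sin(a) + 6 / math.cos(a)
           for a in (i * 0.0001 for i in range(1, 15707)))
print(math.degrees(alpha), y)  # about 49.86 degrees and 22.39 feet
```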
4.2 An open-top box is to be made out of a piece of cardboard measuring 2 m × 2 m by cutting equal squares of side x from the corners and turning up the sides. Find the height of the box for maximum volume.

[Figure: 2 m × 2 m sheet with a square of side x cut from each corner]

V = (2 - 2x)(2 - 2x)(x)

∴ V = (4 - 4x - 4x + 4x²)(x)

∴ V = 4x³ - 8x² + 4x

dV/dx = 12x² - 16x + 4 = 0

∴ x = 1 or x = 1/3

x = 1 gives V = 0, so the maximum is at height x = 1/3 m:

∴ V_max = 16/27 ≈ 0.5926 m³
4.3 Find the specification of an open-top rectangular tank whose total area is to be 108 m², if maximum volume is required.

V = xyz

A = xy + (2x + 2y)z = 108

∴ xy + 2xz + 2yz = 108

∴ xy + 2xz = 108 - 2yz

∴ x(y + 2z) = 108 - 2yz

∴ x = (108 - 2yz) / (y + 2z)
Substituting for x gives V = yz(108 - 2yz)/(y + 2z). Setting the partial derivative with respect to z to zero:

∂V/∂z = (108y² - 4y³z - 4y²z²) / (y + 2z)² = 0

Solving ∂V/∂y = 0 and ∂V/∂z = 0 together gives y² = 4z², i.e. y = 2z, and then 3z² = 27, so z = 3 and y = 2z = 2(3) = 6

and x = (108 - 2yz) / (y + 2z)

∴ x = (108 - 2(6)(3)) / (6 + 2(3))

∴ x = (108 - 36) / (6 + 6)

∴ x = 72/12

∴ x = 6

So, V = xyz = (6)(6)(3) = 108 m³
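A brute-force check of the optimum x = y = 6, z = 3 (note that (6)(6)(3) = 108 m³). This sketch scans a grid over y and z and recovers x from the area constraint; the grid spacing is arbitrary:

```python
# Grid search over y and z; x follows from the area constraint
best = (0.0, 0.0, 0.0, 0.0)              # (V, x, y, z)
steps = [i * 0.05 for i in range(1, 400)]
for y in steps:
    for z in steps:
        if 108 - 2 * y * z <= 0:
            continue
        x = (108 - 2 * y * z) / (y + 2 * z)
        V = x * y * z
        if V > best[0]:
            best = (V, x, y, z)

V, x, y, z = best
print(V, x, y, z)  # close to (108, 6, 6, 3)
```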
Tutorial - 05
5.1 Find the value of x in the interval (0, 1) which minimizes the function f(x) = x(x - 1.5) using the Golden Section Method.

[Interval diagram: a = 0, x1 at 0.382, x2 at 0.618, b = 1]

As per the formula,

x1 = b - 0.618 (b - a)

x2 = a + 0.618 (b - a)

Hence, x* = (0.708 + 0.798)/2 = 1.506/2 = 0.753 and f(x*) = -0.562
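The interval-shrinking loop can be sketched in Python (20 iterations is an arbitrary choice; each one multiplies the bracket length by 0.618):

```python
def f(x):
    return x * (x - 1.5)

a, b = 0.0, 1.0
for _ in range(20):
    x1 = b - 0.618 * (b - a)
    x2 = a + 0.618 * (b - a)
    if f(x1) < f(x2):
        b = x2               # minimum lies in [a, x2]
    else:
        a = x1               # minimum lies in [x1, b]

x_star = (a + b) / 2
print(x_star, f(x_star))  # about 0.75 and -0.5625
```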
5.2 Minimize the function f(x) = x² - x over the interval (0, 1) using the Golden Section Method.

[Interval diagram: a = 0, x1 at 0.382, x2 at 0.618, b = 1]

As per the formula,

x1 = b - 0.618 (b - a)

x2 = a + 0.618 (b - a)

Hence, x* = (0.0 + 0.70809)/2 = 0.70809/2 = 0.35404 and f(x*) = -0.229
Tutorial - 06
6.1 Find the value of x in the interval (0, 1) which minimizes the function f(x) = x(x - 1.5) with accuracy ±0.05, using the Fibonacci Search Method.

Here the given accuracy is ±0.05, so:

L_n ≤ 0.05 × L_0

∴ L_n / L_0 ≤ 0.05

∴ 1/F_n ≤ 0.05   (∵ L_n / L_0 = 1/F_n)

∴ F_n ≥ 20

Now taking the Fibonacci series:

F0 = 1, F1 = 1, F2 = 2, F3 = 3, F4 = 5, F5 = 8, F6 = 13, F7 = 21

So n = 7 (F7 = 21 ≥ 20). The interior points at each step are

x2 = a + (F_{n-k} / F_{n-k+1}) (b - a)

x1 = a + b - x2
k | F(n-k)/F(n-k+1) | a | b | x1 | x2 | f(x1) | f(x2) | Minimum function
1 | F6/F7 | 0.0000 | 1.0000 | 0.3810 | 0.6190 | -0.4263 | -0.5454 | f(x2)
2 | F5/F6 | 0.3810 | 1.0000 | 0.6190 | 0.7619 | -0.5454 | -0.5624 | f(x2)
3 | F4/F5 | 0.6190 | 1.0000 | 0.7619 | 0.8571 | -0.5624 | -0.5510 | f(x1)
4 | F3/F4 | 0.6190 | 0.8571 | 0.7143 | 0.7619 | -0.5612 | -0.5624 | f(x2)
5 | F2/F3 | 0.7143 | 0.8571 | 0.7619 | 0.8095 | -0.5624 | -0.5590 | f(x1)
6 | F1/F2 | 0.7143 | 0.8095 | 0.7619 | 0.7619 | -0.5624 | -0.5624 | f(x1) = f(x2)
7 | F0/F1 | 0.7143 | 0.7619 | 0.7143 | 0.7619 | -0.5612 | -0.5624 | f(x2)

Hence, x* = (0.7143 + 0.7619)/2 = 1.4762/2 = 0.7381 and f(x*) = -0.562
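The table can be reproduced with a short Python sketch of the Fibonacci search (variable names are mine; the update rule is exactly the one used above):

```python
def f(x):
    return x * (x - 1.5)

F = [1, 1, 2, 3, 5, 8, 13, 21]      # F0..F7; F7 = 21 >= 20, so n = 7
n = 7
a, b = 0.0, 1.0
for k in range(1, n + 1):
    x2 = a + F[n - k] / F[n - k + 1] * (b - a)
    x1 = a + b - x2
    if f(x1) <= f(x2):
        b = x2                       # minimum lies in [a, x2]
    else:
        a = x1                       # minimum lies in [x1, b]

x_star = (a + b) / 2
print(x_star)  # within +/-0.05 of the true minimizer 0.75
```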
6.2 Minimize the function f(x) = -(1/(x - 1)²)(log x - 2(x - 1)/(x + 1)) by the Fibonacci algorithm in the range [1.5, 4.5], such that 0.07 is the required accuracy.

[Interval diagram: a = 1.5, x1, x2, b = 4.5]

∴ 1/F_n ≤ 0.07   (∵ L_n / L_0 = 1/F_n)

∴ F_n ≥ 14.29, so n = 7 (F7 = 21)

As well as,

x2 = a + (F_{n-k} / F_{n-k+1}) (b - a)

x1 = a + b - x2
k | F(n-k)/F(n-k+1) | a | b | x1 | x2 | f(x1) | f(x2) | Minimum function
1 | F6/F7 | 1.5000 | 4.5000 | 2.6429 | 3.3571 | 0.1778 | 0.1001 | f(x2)
2 | F5/F6 | 2.6429 | 4.5000 | 3.3571 | 3.7857 | 0.1001 | 0.0755 | f(x2)
3 | F4/F5 | 3.3571 | 4.5000 | 3.7857 | 4.0714 | 0.0755 | 0.0638 | f(x2)
4 | F3/F4 | 3.7857 | 4.5000 | 4.0714 | 4.2143 | 0.0638 | 0.0589 | f(x2)
5 | F2/F3 | 4.0714 | 4.5000 | 4.2143 | 4.3571 | 0.0589 | 0.0545 | f(x2)
6 | F1/F2 | 4.2143 | 4.5000 | 4.3571 | 4.3571 | 0.0545 | 0.0545 | f(x1) = f(x2)
7 | F0/F1 | 4.3571 | 4.5000 | 4.3571 | 4.5000 | 0.0545 | 0.0506 | f(x2)

Hence, x* = (4.3571 + 4.5)/2 = 8.8571/2 = 4.4286 and f(x*) = 0.052
Tutorial - 07
Simplex Method
7.1 Maximize y = 2x1 - x2 subject to the constraints,

x1 ≥ 0; x2 ≥ 0;
-3x1 + 2x2 ≤ 2;
2x1 - 4x2 ≤ 3;
x1 + x2 ≤ 6

Array 1:

Basis | b | x1 | x2 | Ratio
x3 | 2 | -3 | 2 | -0.667
x4 | 3 | 2 | -4 | 1.5   <- pivot row
x5 | 6 | 1 | 1 | 6
y | 0 | -2 | 1 |

Pivot column: x1; pivot element = 2.
New transformed Array 2:

Basis | b | x4 | x2 | Ratio
x3 | 6.5 | 1.5 | -4 | -1.625
x1 | 1.5 | 0.5 | -2 | -0.75
x5 | 4.5 | -0.5 | 3 | 1.5   <- pivot row
y | 3 | 1 | -3 |

Pivot column: x2.

Array 3:

Basis | b | x4 | x5
x3 | 12.5 | 0.833 | 1.333
x1 | 4.5 | 0.167 | 0.667
x2 | 1.5 | -0.167 | 0.333
y | 7.5 | 0.5 | 1.0

All entries in the y row are now non-negative, so the optimum is y = 7.5 at x1 = 4.5, x2 = 1.5.
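The simplex result can be cross-checked by enumerating the vertices of the feasible region (a sketch; the objective y = 2x1 - x2 is the one implied by the y row of the tableau, and an optimum of a bounded LP always lies at a vertex):

```python
from itertools import combinations

# Constraints written as a*x1 + b*x2 <= c, including x1 >= 0, x2 >= 0
A = [(-3, 2, 2), (2, -4, 3), (1, 1, 6), (-1, 0, 0), (0, -1, 0)]

def feasible(x1, x2, eps=1e-9):
    return all(a * x1 + b * x2 <= c + eps for a, b, c in A)

vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(A, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue                       # parallel boundary lines
    x1 = (c1 * b2 - c2 * b1) / det     # Cramer's rule for the intersection
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        vertices.append((x1, x2))

best = max(vertices, key=lambda v: 2 * v[0] - v[1])
print(best, 2 * best[0] - best[1])  # (4.5, 1.5) and 7.5
```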
7.2 Maximize y = 6x1 + 5x2 subject to the constraints,

2x1 + 5x2 ≤ 20;
5x1 + x2 ≥ 5;
3x1 + 11x2 ≥ 33

With slack variable x3, surplus variables x4, x5, and artificial variables x100, x101:

5x1 + x2 - x4 + x100 = 5;
3x1 + 11x2 - x5 + x101 = 33

Array 1:

Basis | b | x1 | x2 | x4 | x5
x3 | 20 | 2 | 5 | 0 | 0
x100 | 5 | 5 | 1 | -1 | 0
x101 | 33 | 3 | 11 | 0 | -1   <- pivot row
y | 0 | -6 | -5 | 0 | 0
(P) | -38 | -8 | -12 | 1 | 1

Pivot column: x2.
Array 2:

Basis | b | x1 | x101 | x4 | x5
x3 | 5 | 0.636 | -0.455 | 0 | 0.455
x100 | 2 | 4.727 | -0.091 | -1 | 0.091
x2 | 3 | 0.273 | 0.091 | 0 | -0.091
y | 15 | -4.636 | 0.455 | 0 | -0.455
(P) | -2 | -4.727 | 1.091 | 1 | -0.091

Dropping the x101 column:

Basis | b | x1 | x4 | x5
x3 | 5 | 0.636 | 0 | 0.455
x100 | 2 | 4.727 | -1 | 0.091   <- pivot row
x2 | 3 | 0.273 | 0 | -0.091
y | 15 | -4.636 | 0 | -0.455
(P) | -2 | -4.727 | 1 | -0.091

Pivot column: x1.

Array 3:

Basis | b | x100 | x4 | x5
x3 | 4.731 | -0.135 | 0.135 | 0.442
x1 | 0.423 | 0.212 | -0.212 | 0.019
x2 | 2.885 | -0.058 | 0.058 | -0.096
y | 16.962 | 0.981 | -0.981 | -0.365
(P) | 0 | 1 | -1 | 0

Dropping the x100 column (P = 0, so both artificial variables have left the basis):

Basis | b | x4 | x5
x3 | 4.731 | 0.135 | 0.442   <- pivot row
x1 | 0.423 | -0.212 | 0.019
x2 | 2.885 | 0.058 | -0.096
y | 16.962 | -0.981 | -0.365

Pivot column: x4.

Final solution is:

Basis | b | x3 | x5
x4 | 35.141 | 7.428 | 3.286
x1 | 7.857 | 1.571 | 0.714
x2 | 0.857 | -0.429 | -0.286
y | 51.429 | 7.285 | 2.857

So the optimum is y = 51.429 at x1 = 7.857, x2 = 0.857.
7.3 Maximize the function y = 100 - (10 - x1)² - (5 - x2)² with a = 2, using the Sequential Simplex Method.

Given a = 2, calculate p and q:

p = (a / (n√2)) [√(n + 1) + (n - 1)]

p = (2 / (2√2)) [√3 + 1] = 1.9318

q = (a / (n√2)) [√(n + 1) - 1]

q = (2 / (2√2)) [√3 - 1] = 0.5176

At each step the rejected point x_R is reflected through the centroid of the remaining n points:

x_i^new = 2 [ (Σ_{j=1}^{n+1} x_ij - x_i^R) / n ] - x_i^R
Point j | x1j | x2j | yj | Point rejected | Points in simplex
1 | 0 | 0 | -25 | - | starting
2 | 1.9318 | 0.5176 | 14.81 | - | simplex:
3 | 0.5176 | 1.9318 | 0.670 | - | 1 2 3
4 | 2.4494 | 2.4494 | 36.48 | 1 | 4 2 3
5 | 3.864 | 1.0352 | 46.6298 | 3 | 4 2 5
6 | 4.3816 | 2.967 | 64.3004 | 2 | 4 6 5
7 | 5.7962 | 1.5528 | 70.2171 | 4 | 7 6 5
8 | 6.3138 | 3.4846 | 84.1154 | 5 | 7 6 8
9 | 7.7284 | 2.0704 | 86.2572 | 6 | 7 9 8
10 | 8.2460 | 4.0022 | 95.9278 | 7 | 10 9 8
11 | 9.6606 | 2.588 | 94.0671 | 8 | 10 9 11
12 | 10.1782 | 4.5198 | 99.7376 | 9 | 10 12 11
13 | 8.7636 | 5.934 | 97.5989 | 11 | 10 12 13
14 | 10.6958 | 6.4516 | 97.4087 | 10 | 14 12 13
15 | 12.1104 | 5.0374 | 95.5448 | 13 | 14 12 15
16 | 11.5928 | 3.1056 | 93.8742 | 14 | 16 12 15
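The search can be sketched in Python. This is my own minimal implementation of the reflection rule above (with the standard safeguard of reflecting the second-worst point when the worst is the one just added, so the simplex does not bounce straight back); the first reflections reproduce points 4 and 5 of the table:

```python
def y(p):
    x1, x2 = p
    return 100 - (10 - x1)**2 - (5 - x2)**2

p_, q_ = 1.9318, 0.5176                  # from a = 2, n = 2
pts = [(0.0, 0.0), (p_, q_), (q_, p_)]   # starting simplex
last_added = None
best = max(y(p) for p in pts)

for _ in range(20):
    order = sorted(range(3), key=lambda i: y(pts[i]))
    worst = order[0]
    if pts[worst] == last_added:         # don't reject the point just added
        worst = order[1]
    others = [pts[i] for i in range(3) if i != worst]
    cx = sum(p[0] for p in others) / 2   # centroid of the remaining points
    cy = sum(p[1] for p in others) / 2
    new = (2 * cx - pts[worst][0], 2 * cy - pts[worst][1])
    pts[worst] = new
    last_added = new
    best = max(best, y(new))

print(best)  # climbs above 95, close to the true maximum of 100
```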
Tutorial - 08
Box Complex Method.
8.1 Define a suitable search region and a feasible initial base point for the complex method of search in minimizing y = 4x1 + x2 + 2x3, subject to the restriction that xi ≥ 0 and

x1 + x2 + x3 ≤ 6;
5x1 - x2 + x3 ≤ 4;
x1 + 3x2 + 2x3 ≥ 1

A suitable search region is one in which every vertex used by the complex method satisfies all of the imposed constraints.

Choose any trial point and check all the constraints; if some are violated, minimize

S = Σ_k g_k

the summation being taken only over those constraints which are violated. A more formal statement of this procedure is to minimize

S = Σ_k H(g_k) g_k

where H(g_k) = 1 if g_k ≥ 0 and H(g_k) = 0 if g_k < 0,

so that the summation S is indeed only over the violated constraints. When S vanishes, we have found a feasible initial base point.
Now, minimize

y = 4x1 + x2 + 2x3

subject to 0 ≤ x1 ≤ 1; 2 ≤ x2 ≤ 3; 0 ≤ x3 ≤ 1.

So, from these four points, the worst point is the 3rd point.

The worst point is reflected through the centroid as x_i^new = (1 + α) x_i^m - α x_i^R, where x_i^m is the centroid, x_i^R is the worst point, and we take α = 1.3.

Here the worst point is the 7th point.

The best point found is (x1, x2, x3) = (0, 2, 0), with y = 2.

Here the worst point is the 11th point, so after 3 cycles we get the minimum value y = 2.
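A grid search over the stated box confirms the minimum y = 2 at (0, 2, 0) (a sketch; the 0.05 grid spacing is arbitrary):

```python
from itertools import product

def feasible(x1, x2, x3):
    return (x1 + x2 + x3 <= 6 and
            5 * x1 - x2 + x3 <= 4 and
            x1 + 3 * x2 + 2 * x3 >= 1)

best = None
grid01 = [i / 20 for i in range(21)]     # 0, 0.05, ..., 1
for x1, t, x3 in product(grid01, repeat=3):
    x2 = 2 + t                           # x2 runs over [2, 3]
    if feasible(x1, x2, x3):
        y = 4 * x1 + x2 + 2 * x3
        if best is None or y < best[0]:
            best = (y, x1, x2, x3)

print(best)  # (2.0, 0.0, 2.0, 0.0): y = 2 at x = (0, 2, 0)
```

Since all three objective coefficients are positive, the minimum over the box must sit at the lower bounds x1 = 0, x2 = 2, x3 = 0, which is exactly what the search finds.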
Tutorial - 09
Genetic Algorithm using Inbuilt Scilab Code.
9.1 Theory
Science arises from the very human desire to understand and control the world. Over the course of
history, we humans have gradually built up a grand edifice of knowledge that enables us to predict,
to varying extents, the weather, the motions of the planets, solar and lunar eclipses, the courses of
diseases, the rise and fall of economic growth, the stages of language development in children, and a
vast panorama of other natural, social, and cultural phenomena. More recently we have even come to
understand some fundamental limits to our abilities to predict. Over the eons we have developed
increasingly complex means to control many aspects of our lives and our interactions with nature,
and we have learned, often the hard way, the extent to which other aspects are uncontrollable. The
advent of electronic computers has arguably been the most revolutionary development in the history
of science and technology. This ongoing revolution is profoundly increasing our ability to predict and
control nature in ways that were barely conceived of even half a century ago. For many, the crowning achievements of this revolution will be the creation, in the form of computer programs, of new species of intelligent beings, and even of new forms of life.
The goals of creating artificial intelligence and artificial life can be traced back to the very beginnings
of the computer age. The earliest computer scientists (Alan Turing, John von Neumann, Norbert Wiener, and others) were motivated in large part by visions of imbuing computer programs with
intelligence, with the life-like ability to self-replicate, and with the adaptive capability to learn and to
control their environments. These early pioneers of computer science were as much interested in
biology and psychology as in electronics, and they looked to natural systems as guiding metaphors
for how to achieve their visions. It should be no surprise, then, that from the earliest days computers
were applied not only to calculating missile trajectories and deciphering military codes but also to
modelling the brain, mimicking human learning, and simulating biological evolution. These
biologically motivated computing activities have waxed and waned over the years, but since the early
1980s they have all undergone a resurgence in the computation research community. The first has
grown into the field of neural networks, the second into machine learning, and the third into what is
now called "evolutionary computation," of which genetic algorithms are the most prominent
example.
35
3724003 Tutorial - 09
9.2 GA Operators
The simplest form of genetic algorithm involves three types of operators: selection, crossover (single
point), and mutation.
9.3 Selection
This operator selects chromosomes in the population for reproduction. The fitter the chromosome,
the more times it is likely to be selected to reproduce.
9.4 Crossover
This operator randomly chooses a locus and exchanges the subsequence before and after that locus
between two chromosomes to create two offspring. For example, the strings 10000100 and
11111111 could be crossed over after the third locus in each to produce the two offspring 10011111
and 11100100. The crossover operator roughly mimics biological recombination between two
single-chromosome (haploid) organisms.
9.5 Mutation
This operator randomly flips some of the bits in a chromosome. For example, the string 00000100
might be mutated in its second position to yield 01000100. Mutation can occur at each bit position
in a string with some probability, usually very small (e.g., 0.001)
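The three operators can be illustrated with a toy bit-string GA in Python (a OneMax example, where fitness is the number of 1-bits; the problem, parameter values, and function names are mine and are separate from the Scilab program that follows):

```python
import random

random.seed(42)

N_BITS, POP, GENS = 20, 30, 40
P_CROSS, P_MUT = 0.9, 0.01

def fitness(c):                    # OneMax: count the 1-bits
    return sum(c)

def select(pop):                   # roulette-wheel (fitness-proportional)
    total = sum(fitness(c) for c in pop)
    r = random.uniform(0, total)
    acc = 0
    for c in pop:
        acc += fitness(c)
        if acc >= r:
            return c
    return pop[-1]

def crossover(p1, p2):             # single-point crossover
    if random.random() < P_CROSS:
        cut = random.randint(1, N_BITS - 1)
        return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
    return p1[:], p2[:]

def mutate(c):                     # flip each bit with probability P_MUT
    return [b ^ 1 if random.random() < P_MUT else b for b in c]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
best0 = fitness(max(pop, key=fitness))
for _ in range(GENS):
    nxt = [max(pop, key=fitness)[:]]       # elitism: keep the best unchanged
    while len(nxt) < POP:
        c1, c2 = crossover(select(pop), select(pop))
        nxt += [mutate(c1), mutate(c2)]
    pop = nxt[:POP]

best = fitness(max(pop, key=fitness))
print(best0, best, N_BITS)
```

Because the elite individual is copied unmutated into each new generation, the best fitness can never decrease from one generation to the next.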
Given a clearly defined problem to be solved and a bit string representation for candidate solutions, a simple GA works as follows:
function y = myFun1(x)
endfunction

function y = myFun2(x)
    P = 1E4;
endfunction
PopSize = 100;
Proba_cross = 0.9;
Proba_mut = 0.2;
NbGen = 100;
NbCouples = 110;
Log = %T;
Pressure = 0.05;
ga_params = init_param();
ga_params = add_param(ga_params, "dimension", 2);
ga_params = add_param(ga_params, "beta", 0);
ga_params = add_param(ga_params, "delta", 0.1);
ga_params = add_param(ga_params, "init_func", init_ga_default);
ga_params = add_param(ga_params, "crossover_func", crossover_ga_default);
ga_params = add_param(ga_params, "mutation_func", mutation_ga_default);
ga_params = add_param(ga_params, "codage_func", coding_ga_identity);
ga_params = add_param(ga_params, "selection_func", selection_ga_elitist);
// ga_params = add_param(ga_params, "selection_func", selection_ga_random);
ga_params = add_param(ga_params, "nb_couples", NbCouples);
disp([min(fobj_pop_opt) mean(fobj_pop_opt) max(fobj_pop_opt)])
// Get the best x (i.e. the one which achieves the minimum function value)
[fmin, k] = min(fobj_pop_opt)
Xmin = pop_opt(k)
[fmax, k] = max(fobj_pop_opt)
Xmax = pop_opt(k)
9.8 Output
optim_ga: iteration 5/100 - min / max value found = 0.678666/6.983001
...
-->[fmin, k] = min(fobj_pop_opt)
k = 75
fmin = 0.499975
-->xmin = pop_opt(k)
xmin = 2.5000249
       1.5000251
-->[fmax, k] = max(fobj_pop_opt)
k = 62
fmax = 0.499975
-->xmax = pop_opt(k)
xmax = 2.5000275
1.5000225
Validation:
The result depends on the population size and the number of generations: changing either of them changes the solution obtained.
Tutorial - 10
To study Mixed Integer Linear Programming (MILP).
10.1 Introduction
Many problems in plant operation, design, location, and scheduling involve variables that are not
continuous but instead have integer values. Decision variables whose levels are a dichotomy (to install or not to install a new piece of equipment, for example) are termed "0-1" or binary variables.
Sometimes we can treat integer variables as if they were continuous, especially when the range of a
variable contains a large number of integers, such as 100 trays in a distillation column, and round the
optimal solution to the nearest integer value. But when there is a smaller range available, rounding
up to an optimal solution becomes more difficult.
These problems relating to the optimization of discrete variables are dealt with by Mixed Integer Programming (MIP).
Here, the objective function depends on two sets of variables, x and y, x is a vector of continuous
variables and y is a vector of integer variables. Many MIP problems are linear in the objective function
and constraints and hence are subject to solution by linear programming. These problems are called
mixed-integer linear programming (MILP) problems.
10.2 Example: the knapsack problem

Suppose we have n objects. The weight of the ith object is w_i and its value is v_i. Select a subset of the objects such that their total weight does not exceed W (the capacity of the knapsack) and their total value is a maximum.

Maximize: f(y) = Σ_{i=1}^{n} v_i y_i

Subject to: Σ_{i=1}^{n} w_i y_i ≤ W,  y_i = 0, 1,  i = 1, 2, ..., n
The binary variable 𝑦𝑖 indicates whether an object 𝑖 is selected (𝑦𝑖 = 1) or not selected (𝑦𝑖 = 0).
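For a small instance, the formulation can be solved by enumerating all 2ⁿ binary vectors (a sketch; the weights, values, and capacity are made-up illustrative data, not from the text):

```python
from itertools import product

# Illustrative data: weights, values, and knapsack capacity
w = [10, 20, 30]
v = [60, 100, 120]
W = 50

best_value, best_y = 0, None
for y in product((0, 1), repeat=len(w)):      # all binary vectors y
    weight = sum(wi * yi for wi, yi in zip(w, y))
    value = sum(vi * yi for vi, yi in zip(v, y))
    if weight <= W and value > best_value:
        best_value, best_y = value, y

print(best_y, best_value)  # (0, 1, 1) 220
```

Enumeration is exponential in n, which is exactly why the branch-and-bound approach of the next section matters for larger problems.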
10.3 Approaches for MILP problems
Approaches for MILP and MINLP are capable of finding an optimal solution and verifying that they have done so. Specifically, we consider branch-and-bound (BB) and outer linearization (OL) methods.
BB can be applied to both linear and nonlinear problems, but OL is used for nonlinear problems by
solving a sequence of MILPs.
Example:
y1, y2, y3 = 0 or 1
One or more of the integer constraints 𝒚𝒊 = 𝟎 𝒐𝒓 𝟏 are replaced by the relaxed condition 0 ≤ 𝑦𝑖 ≤ 1,
which includes the original integers, but also all of the real values in between.
1. The optimal solution has one fractional (non-integer) variable (y2) and an objective function
value of 129.1. Because the feasible region of the relaxed problem includes the feasible region
of the initial IP problem, 129.1 is an upper bound on the value of the objective function of the
KP. If we knew a feasible binary solution, its objective value would be a lower bound on the
value of the objective function, but none is assumed here, so the lower bound is set to -∞.
2. At node 1, y2 is the only fractional variable, and hence any feasible integer solution must
satisfy either y2 = 0 or y2 = 1. We create two new relaxations represented by nodes 2 and 3 by
imposing these two integer constraints. The process of creating these two relaxed sub-
problems is called branching.
3. If the relaxed IP problem at a given node has an optimal binary solution, that solution solves
the IP, and there is no need to proceed further. This node is said to be fathomed, because we
do not need to branch from it.
Gap / (1.0 + |lb|) ≤ tol
A tol value of 10⁻⁴ would be a tight tolerance, 0.01 would be neither tight nor loose, and 0.03 or higher
would be loose. The termination criterion used in the Microsoft Excel Solver has a default tol value of
0.05.
Branch-and-bound tree for the example:

Node 1 (root relaxation): 0 ≤ y1 ≤ 1, 0 ≤ y2 ≤ 1, 0 ≤ y3 ≤ 1.
Continuous LP optimum y* = (1, 0.776, 1), f = 129.10.
Upper bound = 129.1; lower bound = -∞; no incumbent.

Branching on y2:

Node 2 (y2 = 0; 0 ≤ y1 ≤ 1, 0 ≤ y3 ≤ 1): LP optimum y* = (1, 0, 1), f = 126.00. This is an integer solution, so it becomes the incumbent and the lower bound rises to 126.00. IP optimum.

Node 3 (y2 = 1; 0 ≤ y1 ≤ 1, 0 ≤ y3 ≤ 1): LP optimum y* = (0.978, 1, 1), f = 128.11; upper bound = 128.11.

Branching from node 3 on y1:

Node 4 (y1 = 0, y2 = 1; 0 ≤ y3 ≤ 1): LP optimum y* = (0, 1, 1), f = 44.00.

Node 5 (y1 = 1, y2 = 1; 0 ≤ y3 ≤ 1): LP optimum y* = (1, 1, 0.595), f = 113.81.

Both node 4 and node 5 fall below the incumbent value of 126.00, so the IP optimum is y = (1, 0, 1) with f = 126.00.