
L. D. COLLEGE OF ENGINEERING
AHMEDABAD – 380 015

Name: Maulik Maheshbhai Chauhan

Semester: M.E.-II Enrollment No.: 230280788001

Subject: Optimization in Rubber Industries (3724003) Year: 2023-24


Certificate

This is to certify that Maulik Maheshbhai Chauhan, Enrollment No. 230280788001, of M.E. Semester-II of the Rubber Technology Department has satisfactorily completed the course in the subject of Optimization in Rubber Industries (3724003) within the four walls of L. D. College of Engineering, Ahmedabad – 380015.

Date of Submission : ______________________________________

Staff in-Charge : ______________________________________

Head of Department : ______________________________________

Certificate

This is to certify that the above term work of Maulik Maheshbhai Chauhan, University Exam No. 230280788001, of M.E. Semester-II of the Rubber Technology Department has been assessed for the University examination on ___________________.

Int. Examiner ______________________________________

Ext. Examiner ______________________________________


Index
Sr. No   Title                                                      Pages     Date of Start   Date of Completion   Initial of Staff
1        Tutorial – 01 (Newton's Method)                            01–03     23-02-2024      01-03-2024
2        Tutorial – 02 (Newton-Raphson Method)                      04–13     01-03-2024      15-03-2024
3        Tutorial – 03 (Secant Method)                              14–16     15-03-2024      22-03-2024
4        Tutorial – 04 (Word based problems)                        17–20     22-03-2024      05-04-2024
5        Tutorial – 05 (Golden Section Method)                      21–22     05-04-2024      12-04-2024
6        Tutorial – 06 (Fibonacci Search Method)                    23–26     12-04-2024      19-04-2024
7        Tutorial – 07 (Simplex Method)                             27–31     19-04-2024      26-04-2024
8        Tutorial – 08 (Box Complex Method)                         32–34     26-04-2024      03-05-2024
9        Tutorial – 09 (Genetic Algorithm)                          35–41     03-05-2024      17-05-2024
10       Tutorial – 10 (Mixed Integer Linear Programming - MILP)    42–44     17-05-2024      24-05-2024

Tutorial - 01

Newton’s Method

1.1 Find the minimum of f(x) = (1/2)x² − sin x, where x₀ = 0.5.

f(x) = (1/2)x² − sin x

f′(x) = x − cos x

f″(x) = 1 + sin x

As per formula,

xₙ₊₁ = xₙ − f′(xₙ)/f″(xₙ)

x₁ = x₀ − f′(x₀)/f″(x₀)

𝒏 𝒙𝒏 𝒇′(𝒙) = 𝒙 − 𝐜𝐨𝐬 𝒙 𝒇′′(𝒙) = 𝟏 + 𝐬𝐢𝐧 𝒙 𝒙𝒏+𝟏

0 0.500 -0.3776 1.4794 0.7552

1 0.7552 0.0271 1.6855 0.7391

2 0.7391 0.0001 1.6737 0.7391
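
A short Scilab sketch of this iteration (the tolerance, iteration cap and variable names below are my own choices, not part of the original tutorial):

// Newton's method for minimizing f(x) = (1/2)x^2 - sin(x), starting at x0 = 0.5
function y = fp(x)          // first derivative f'(x)
    y = x - cos(x);
endfunction

function y = fpp(x)         // second derivative f''(x)
    y = 1 + sin(x);
endfunction

x = 0.5;
for k = 1:20
    xnew = x - fp(x)/fpp(x);        // Newton update
    if abs(xnew - x) < 1e-6 then
        break;
    end
    x = xnew;
end
disp(x);    // converges to about 0.7391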

1.2 Find the minimum of g(x) = x³ − 12.2x² + 7.45x + 42, where x₀ = 12.

g(x) = x³ − 12.2x² + 7.45x + 42

g′(x) = 3x² − 24.4x + 7.45

g″(x) = 6x − 24.4

As per formula,

xₙ₊₁ = xₙ − g′(xₙ)/g″(xₙ)

x₁ = x₀ − g′(x₀)/g″(x₀)

𝒏 𝒙𝒏 𝒈′(𝒙) = 𝟑 𝒙𝟐 − 𝟐𝟒. 𝟒 𝒙 + 𝟕. 𝟒𝟓 𝒈′′ (𝒙) = 𝟔 𝒙 − 𝟐𝟒. 𝟒 𝒙𝒏+𝟏

0 12.000 146.6500 47.6000 8.9191

1 8.9191 28.4755 29.1147 7.9411

2 7.9411 2.8697 23.2464 7.8176

3 7.8176 0.0457 22.5057 7.8156

4 7.8156 0.0000 22.4936 7.8156

1.3 Minimize the function f(x) = x² − x, where x₀ = 3.

f(x) = x² − x

f′(x) = 2x − 1

f″(x) = 2

As per formula,

xₙ₊₁ = xₙ − f′(xₙ)/f″(xₙ)

x₁ = x₀ − f′(x₀)/f″(x₀)

𝒏 𝒙𝒏 𝒇′(𝒙) = 𝟐 𝒙 − 𝟏 𝒇′′(𝒙) = 𝟐 𝒙𝒏+𝟏

0 3.0 5.0 2.0 0.5

1 0.5 0.0 2.0 0.5


Tutorial - 02

Newton-Raphson Method

2.1 Newton’s equation 𝒚𝟑 − 𝟐 𝒚 − 𝟓 = 𝟎, has a root near 𝒚 = 𝟎. Starting with 𝒚𝟎 = 𝟐,


compute 𝒚𝟏 , 𝒚𝟐 and 𝒚𝟑 the three Newton-Raphson estimates for the root.

f(y) = y³ − 2y − 5

f′(y) = 3y² − 2

As per the Newton-Raphson method,

yₙ₊₁ = yₙ − f(yₙ)/f′(yₙ)

∴ yₙ₊₁ = yₙ − (yₙ³ − 2yₙ − 5)/(3yₙ² − 2)

∴ yₙ₊₁ = (3yₙ³ − 2yₙ − yₙ³ + 2yₙ + 5)/(3yₙ² − 2)

∴ yₙ₊₁ = (2yₙ³ + 5)/(3yₙ² − 2)

𝒏 𝒚𝒏 𝟐 𝒚𝒏 𝟑 + 𝟓 𝟑 𝒚𝒏 𝟐 − 𝟐 𝒚𝒏+𝟏

0 2.0000 21.0000 10.0000 2.1000

1 2.1000 23.5220 11.2300 2.0946

2 2.0946 23.3786 11.1616 2.0946
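
A short Scilab sketch of the same iteration (the tolerance and iteration cap are my own assumptions):

// Newton-Raphson iteration for f(y) = y^3 - 2y - 5, starting at y0 = 2
y = 2;
for k = 1:20
    fy   = y^3 - 2*y - 5;       // f(y)
    dfy  = 3*y^2 - 2;           // f'(y)
    ynew = y - fy/dfy;          // Newton-Raphson update
    if abs(ynew - y) < 1e-6 then
        break;
    end
    y = ynew;
end
disp(y);    // approximately 2.0946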

2.2 Find all solutions of e^(2x) = x + 6, correct to 4 decimal places; use the Newton method.

f(x) = e^(2x) − x − 6

f′(x) = 2e^(2x) − 1

As per the Newton-Raphson method,

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)

For the initial value,

f(1) = e^(2·1) − 1 − 6 = 7.3890 − 7 = 0.3890

f(0.95) = e^(2·0.95) − 0.95 − 6 = 6.6859 − 6.95 = −0.2641

Since f(0.95) < 0 and f(1) > 0, the intermediate value theorem places a root between 0.95 and 1; because |f(0.95)| is the smaller of the two values, the root lies closer to 0.95. Take 0.97 as the starting value.

Thus, x₀ = 0.97

𝒏 𝒙𝒏 𝒇(𝒙) = 𝒆𝟐𝒙 − 𝒙 − 𝟔 𝒇′(𝒙) = 𝟐 𝒆𝟐𝒙 − 𝟏 𝒙𝒏+𝟏

0 0.9700 -0.0112 12.9175 0.9709

1 0.9709 0.0000 12.9418 0.9709

2.3 Find all solutions of 𝟓𝒙 + 𝒍𝒏 𝒙 = 𝟏𝟎𝟎𝟎𝟎, correct to 4 decimal places; use the Newton
method.

f(x) = 5x + ln x − 10000

f′(x) = 5 + 1/x

Here ln x is very small compared with 5x, so we choose a starting point by neglecting ln x: 5x ≈ 10000 gives x ≈ 2000.

f(2000) = 5(2000) + ln(2000) − 10000 = 7.6009

So assume x₀ = 2000.

n    xₙ           f(xₙ) = 5xₙ + ln xₙ − 10000    f′(xₙ) = 5 + 1/xₙ    xₙ₊₁

0 2000.0000 7.6009 5.0005 1998.4800

1 1998.4800 0.0000 5.0005 1998.4800

2.4 Use the Newton-Raphson method with 3 as the starting point to find a fraction that is within 10⁻⁸ of √10. Show that your answer is indeed within 10⁻⁸ of the truth.

Here, x₀ = 3

f(x) = x² − 10

f′(x) = 2x

As per the Newton-Raphson method,

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)

𝒏 𝒙𝒏 𝒙𝟐 − 𝟏𝟎 𝟐𝒙 𝒙𝒏+𝟏

0 3.0000 -1.0000 6.0000 3.1667

1 3.1667 0.0278 6.3333 3.1623

2 3.1623 0.0000 6.3246 3.1623

2.5 Let f(x) = x² − a. Show that the Newton method leads to the recurrence

xₙ₊₁ = (1/2)(xₙ + a/xₙ).

Heron of Alexandria used an algebraic version of this formula.

f(x) = x² − a

f′(x) = 2x

As per the Newton-Raphson method,

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)

∴ xₙ₊₁ = xₙ − (xₙ² − a)/(2xₙ)

∴ xₙ₊₁ = (2xₙ² − xₙ² + a)/(2xₙ)

∴ xₙ₊₁ = (xₙ² + a)/(2xₙ)

∴ xₙ₊₁ = xₙ/2 + a/(2xₙ)

∴ xₙ₊₁ = (1/2)(xₙ + a/xₙ)
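
As a quick numerical check, a Scilab sketch of this recurrence for a = 10 (the starting value and tolerance are my own assumptions):

// Heron / Newton recurrence x_{n+1} = (x_n + a/x_n)/2 for sqrt(a)
a = 10;
x = 3;
for k = 1:20
    xnew = 0.5*(x + a/x);
    if abs(xnew - x) < 1e-10 then
        break;
    end
    x = xnew;
end
disp(x);          // about 3.1622777
disp(sqrt(a));    // built-in value, for comparison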

2.6 Use the equation 1/x = 1.37 and the Newton method to find 1/1.37 correct to 8 decimal places.

Let us assume that a = 1.37.

f(x) = 1.37 − 1/x = a − 1/x   and   f′(x) = 1/x²

As per the Newton-Raphson method,

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)

∴ xₙ₊₁ = xₙ − (a − 1/xₙ)/(1/xₙ²)

∴ xₙ₊₁ = xₙ − xₙ² (a − 1/xₙ)

∴ xₙ₊₁ = xₙ − (a xₙ² − xₙ)

∴ xₙ₊₁ = xₙ + xₙ − a xₙ²

∴ xₙ₊₁ = 2xₙ − a xₙ²

∴ xₙ₊₁ = xₙ (2 − a xₙ)

∴ xₙ₊₁ = xₙ (2 − 1.37 xₙ)

Let’s take 𝑥0 = 0.75

𝒏 𝒙𝒏 𝟐 − 𝟏. 𝟑𝟕 𝒙𝒏 𝒙𝒏 𝒙𝒏+𝟏

0 0.75000000 0.97250000 0.75000000 0.72937500

1 0.72937500 1.00075625 0.72937500 0.72992659

2 0.72992659 1.00000057 0.72992659 0.72992701

3 0.72992701 1.00000000 0.72992701 0.72992701
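
The recurrence xₙ₊₁ = xₙ(2 − a·xₙ) involves no division, which is why this form is often used to compute reciprocals. A minimal Scilab sketch (the iteration count is an assumption):

// Division-free Newton iteration for 1/a with a = 1.37
a = 1.37;
x = 0.75;                   // starting guess used in the tutorial
for k = 1:6
    x = x*(2 - a*x);        // x_{n+1} = x_n (2 - a x_n)
end
mprintf("%.8f\n", x);       // about 0.72992701 = 1/1.37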

2.7(a) A devotee of Newton-Raphson used the method to solve the equation x¹⁰⁰ = 0, using the initial estimate x₀ = 0.1. Calculate the next 5 Newton estimates.

f(x) = x¹⁰⁰

f′(x) = 100 x⁹⁹

As per the Newton-Raphson method,

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)

∴ xₙ₊₁ = xₙ − xₙ¹⁰⁰/(100 xₙ⁹⁹)

∴ xₙ₊₁ = xₙ − xₙ/100

∴ xₙ₊₁ = 99 xₙ/100

∴ xₙ₊₁ = 0.99 xₙ

𝒏 𝒙𝒏 𝒙𝒏+𝟏 = 𝟎. 𝟗𝟗 𝒙𝒏

0 0.100000 0.099000

1 0.099000 0.098010

2 0.098010 0.097030

3 0.097030 0.096060

4 0.096060 0.095099

2.7(b) The devotee then tried to use the method to solve 3x^(1/3) = 0, using x₀ = 0.1. Calculate the next 10 estimates.

f(x) = 3 x^(1/3)

f′(x) = x^(−2/3)

As per the Newton-Raphson method,

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)

∴ xₙ₊₁ = xₙ − 3 xₙ^(1/3) / xₙ^(−2/3)

∴ xₙ₊₁ = xₙ − 3 xₙ^(1/3 + 2/3)

∴ xₙ₊₁ = xₙ − 3 xₙ

∴ xₙ₊₁ = −2 xₙ

𝒏 𝒙𝒏 𝒙𝒏+𝟏 = −𝟐 𝒙𝒏 𝒏 𝒙𝒏 𝒙𝒏+𝟏 = −𝟐 𝒙𝒏

0 0.100 -0.200 5 -3.200 6.400

1 -0.200 0.400 6 6.400 -12.80

2 0.400 -0.800 7 -12.80 25.60

3 -0.800 1.600 8 25.60 -51.20

4 1.600 -3.200 9 -51.20 102.40

2.8 Suppose that f(x) = e^(−1/x²) if x ≠ 0, and f(x) = 0 if x = 0.

Show that if x₀ = 0.0001, it takes more than 100 million iterations of Newton's method to get below the value 0.00005.

f(x) = e^(−1/x²)   and   f′(x) = e^(−1/x²) · (2/x³) = 2 e^(−1/x²) / x³

As per the Newton-Raphson method,

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)

∴ xₙ₊₁ = xₙ − e^(−1/xₙ²) ÷ (2 e^(−1/xₙ²) / xₙ³)

∴ xₙ₊₁ = xₙ − xₙ³/2

∴ xₙ − xₙ₊₁ = xₙ³/2

With x₀ = 0.0001, each step reduces x by only xₙ³/2 ≈ (10⁻⁴)³/2 = 5 × 10⁻¹³. To move from 0.0001 down to 0.00005, a distance of 5 × 10⁻⁵, therefore requires at least (5 × 10⁻⁵)/(5 × 10⁻¹³) = 10⁸ steps, i.e. more than 100 million iterations of Newton's method.
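
A quick numerical check of this estimate in Scilab (the bound is conservative, since later steps are even smaller than the first):

x0   = 1e-4;
step = x0^3/2;                     // size of the first (largest) step
disp((x0 - 0.5e-4)/step);          // about 1e8 steps needed to reach 0.00005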

2.9 Use the Newton Method to find the smallest and the second smallest positive roots of
the equation 𝒕𝒂𝒏 𝒙 = 𝟒𝒙, correct to 4 decimal places.

f(x) = tan x − 4x   and   f′(x) = sec² x − 4

As per the Newton-Raphson method,

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)

∴ xₙ₊₁ = xₙ − (tan xₙ − 4xₙ)/(sec² xₙ − 4)

Let’s take 𝑥0 = 1.4

𝒏 𝒙𝒏 𝒇(𝒙) = 𝐭𝐚𝐧 𝒙 − 𝟒𝒙 𝒇′(𝒙) = 𝐬𝐞𝐜 𝟐 𝒙 − 𝟒 𝒙𝒏+𝟏

0 1.4000 0.1979 30.6155 1.3935

1 1.3935 0.0081 28.1612 1.3932


Tutorial - 03

Secant Method

3.1 Using secant method, solve the function 𝒇(𝒙) = 𝟎 where 𝒇(𝒙) = 𝒙𝟐 − 𝟐.

𝑥2 − 2 = 0

∴ 𝑥 = √2 (Neglecting negative value for simplicity)


Let choose 𝑥0 = 1 and 𝑥1 = 1.5 for 𝑓(𝑥) = 𝑥 2 − 2
𝑓(𝑥0 ) = −1
𝑓(𝑥1 ) = 0.25

As per formula,
𝑥𝑛 𝑓(𝑥𝑛+1 ) − 𝑥𝑛+1 𝑓(𝑥𝑛 )
𝑥𝑛+2 =
𝑓(𝑥𝑛+1 ) − 𝑓(𝑥𝑛 )
Taking n = 0:

x₂ = [(1)(0.25) − (1.5)(−1)] / [(0.25) − (−1)] = (0.25 + 1.5)/1.25 = 1.4

f(x₂) = −0.04

x₃ = [(1.5)(−0.04) − (1.4)(0.25)] / [(−0.04) − (0.25)] = (−0.41)/(−0.29) = 1.4137931

f(x₃) = −0.00118907

The remaining iterations are tabulated below.

𝒏 𝒙𝒏 𝒙𝒏+𝟏 𝒇(𝒙𝒏 ) 𝒇(𝒙𝒏+𝟏 ) 𝒙𝒏+𝟐

0 1 1.5 -1 0.25 1.4

1 1.5 1.4 0.25 -0.04 1.413793

2 1.4 1.413793 -0.04 -0.00119 1.414216

3 1.413793 1.414216 -0.00119 6.01E-06 1.414214


The iteration converges to x ≈ 1.414214, which agrees with √2 to six decimal places.
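
A short Scilab sketch of the secant iteration (the tolerance and iteration cap are my own assumptions):

// Secant method for f(x) = x^2 - 2, starting from x0 = 1 and x1 = 1.5
function y = f(x)
    y = x^2 - 2;
endfunction

x0 = 1; x1 = 1.5;
for k = 1:20
    x2 = (x0*f(x1) - x1*f(x0)) / (f(x1) - f(x0));   // secant update
    if abs(x2 - x1) < 1e-8 then
        break;
    end
    x0 = x1;
    x1 = x2;
end
disp(x2);    // about 1.4142136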

3.2 Use the secant method to find the root of the function f(x) = cos x + 2 sin x + x², correct to 5 decimal places.

As per formula,
𝑥𝑛 𝑓(𝑥𝑛+1 ) − 𝑥𝑛+1 𝑓(𝑥𝑛 )
𝑥𝑛+2 =
𝑓(𝑥𝑛+1 ) − 𝑓(𝑥𝑛 )

𝒏 𝒙𝒏 𝒙𝒏+𝟏 𝒇(𝒙𝒏 ) 𝒇(𝒙𝒏+𝟏 ) 𝒙𝒏+𝟐

0 0.00000 -0.10000 1.00000 0.80534 -0.51371

1 -0.10000 -0.51371 0.80534 0.15200 -0.60996

2 -0.51371 -0.60996 0.15200 0.04605 -0.65180

3 -0.60996 -0.65180 0.04605 0.00660 -0.65880

4 -0.65180 -0.65880 0.00660 0.00041 -0.65926

5 -0.65880 -0.65926 0.00041 0.00000 -0.65927

6 -0.65926 -0.65927 0.00000 0.00000 -0.65927

3.3 Use secant method to find the root of 𝒇(𝒙) = 𝒙𝟑 − 𝟒 to 5 decimal places.

As per given function,


x³ − 4 = 0
∴ x³ = 4
∴ x = 4^(1/3)
∴ x = 1.58740

As per formula,
𝑥𝑛 𝑓(𝑥𝑛+1 ) − 𝑥𝑛+1 𝑓(𝑥𝑛 )
𝑥𝑛+2 =
𝑓(𝑥𝑛+1 ) − 𝑓(𝑥𝑛 )

𝒏 𝒙𝒏 𝒙𝒏+𝟏 𝒇(𝒙𝒏 ) 𝒇(𝒙𝒏+𝟏 ) 𝒙𝒏+𝟐

0 1.00000 1.50000 -3.00000 -0.62500 1.63158

1 1.50000 1.63158 -0.62500 0.34334 1.58493

2 1.63158 1.58493 0.34334 -0.01869 1.58733

3 1.58493 1.58733 -0.01869 -0.00051 1.58740

4 1.58733 1.58740 -0.00051 0.00000 1.58740


Tutorial - 04

Word based problems

4.1 It is required to replace a length of piping within a process plant. Since the site is cramped, the pipe must be brought onto the site in sections and welded in place. To reduce the cost of welding, the sections should obviously be as long as possible. If the head room is low and the pipe has to be carried horizontally, what is the longest section that can be brought in if the restriction is the passage of the pipe around the corner from a 10-feet-wide corridor into a 6-feet-wide corridor?

[Figure: a pipe section carried horizontally around the corner between the 10-feet-wide and the 6-feet-wide corridors, making an angle α with the corridor wall.]

y = 10/sin α + 6/cos α

∴ y′ = −10 cos α/sin² α + 6 sin α/cos² α = 0
∴ 6 sin³ α − 10 cos³ α = 0
∴ tan³ α = 10/6
∴ tan α = 1.186
∴ α = tan⁻¹(1.186) = 49.85°
∴ y = 22.38 feet

The minimum of y over α gives the longest section that can be taken around the corner, about 22.38 feet.

4.2 An open-top box is to be made out of a piece of cardboard measuring 2 m × 2 m by cutting equal squares from the corners and turning up the sides. Find the height of the box that gives the maximum volume.

[Figure: 2 m × 2 m cardboard sheet with equal squares of side x cut from each corner.]

V = (L − 2x)(W − 2x)(x)

where L = length, W = breadth and x = height.

∴ V = (2 − 2x)(2 − 2x)(x)
∴ V = (4 − 4x − 4x + 4x²)(x)
∴ V = 4x³ − 8x² + 4x

dV/dx = 12x² − 16x + 4 = 0

∴ x = 1 or x = 1/3 ≈ 0.33

x = 1 gives V = 0, so the maximum is at x = 1/3 m.
∴ V_max = 0.5926 m³
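
The stationary points of V can also be checked with Scilab's roots function (a quick verification, not part of the original working):

// dV/dx = 12x^2 - 16x + 4 = 0; roots() takes the coefficients in decreasing powers of x
x = roots([12 -16 4]);
disp(x);                        // 1 and 1/3
V = 4*x.^3 - 8*x.^2 + 4*x;
disp(V);                        // 0 at x = 1, about 0.5926 at x = 1/3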

4.3 Find the specification of an open top rectangular tank whose total are is to be 108 m2.
If a maximum volume is required.

𝑉 = 𝑥𝑦𝑧
𝐴 = 𝑥𝑦 + (2𝑥 + 2𝑦)𝑧 = 108
∴ 𝑥𝑦 + 2𝑥𝑧 + 2𝑦𝑧 = 108
∴ 𝑥𝑦 + 2𝑥𝑧 = 108 − 2𝑦𝑧
∴ 𝑥(𝑦 + 2𝑧) = 108 − 2𝑦𝑧
108 − 2𝑦𝑧
∴𝑥=
𝑦 + 2𝑧

Placing value of 𝑥 in volume equation,


108 − 2𝑦𝑧
𝑉=( ) 𝑦𝑧
𝑦 + 2𝑧
108𝑦𝑧 − 2𝑦 2 𝑧 2
∴𝑉=( )
𝑦 + 2𝑧
𝑑𝑉 (𝑦 + 2𝑧)(108𝑧 − 4𝑦𝑧 2 ) − (108𝑦𝑧 − 2𝑦 2 𝑧 2 )(1 + 0)
=
𝑑𝑦 (𝑦 + 2𝑧)2
108𝑦𝑧 − 4𝑦 2 𝑧 2 + 216𝑧 2 − 8𝑦𝑧 3 − 108𝑦𝑧 + 2𝑦 2 𝑧 2
∴ =0
(𝑦 + 2𝑧)2
216𝑧 2 − 2𝑦 2 𝑧 2 − 8𝑦𝑧 3
∴ =0
(𝑦 + 2𝑧)2

𝑑𝑉 (𝑦+2𝑧)(108𝑦−4𝑦 2 𝑧)−(108𝑦𝑧−2𝑦 2 𝑧 2 )(0+2)


Now, 𝑑𝑧 = (𝑦+2𝑧)2

108𝑦 2 − 4𝑦 3 𝑧 + 216𝑦𝑧 − 8𝑦 2 𝑧 2 − 216𝑦𝑧 + 4𝑦 2 𝑧 2


∴ =0
(𝑦 + 2𝑧)2

108𝑦 2 − 4𝑦 3 𝑧 − 4𝑦 2 𝑧 2
∴ =0
(𝑦 + 2𝑧)2

𝑑𝑉 𝑑𝑉
From the 𝑑𝑦 and 𝑑𝑧

2𝑧 2 (108 − 𝑦 2 − 4𝑦𝑧) = 0 and


𝑦 2 (108 − 4𝑦𝑧 − 4𝑧 2 ) = 0

So, (108 − 𝑦 2 − 4𝑦𝑧) − (108 − 4𝑦𝑧 − 4𝑧 2 ) = 0 − 0


∴ 4𝑧 2 − 𝑦 2 = 0
∴ 4𝑧 2 = 𝑦 2
∴ 𝑦 = 2𝑧

Now, value of y putting into the equation;


108 − 4𝑦𝑧 − 4𝑧 2 = 0
∴ 108 − 4(2𝑧)𝑧 − 4𝑧 2 = 0
∴ 108 − 8𝑧 2 − 4𝑧 2 = 0
∴ 108 = 12𝑧 2
∴ 𝑧2 = 9
∴𝑧=3

Now, 𝑦 = 2𝑧 = 2(3) = 6
108−2𝑦𝑧
and 𝑥= 𝑦+2𝑧
108−2(6)(3)
∴𝑥= 6+2(3)
108−36
∴𝑥= 6+6
72
∴𝑥=
12

∴𝑥=6
So, 𝑉 = 𝑥𝑦𝑧
∴ 𝑉 = (6)(6)(3)
∴ 𝑉 = 72 𝑚3


Tutorial - 05

Golden Section Method

5.1 Find the value of x in the interval (0, 1) which minimizes the function f(x) = x(x − 1.5), using the Golden Section Method.

a 0.382 0.618 b
0 𝑥1 𝑥2 1
As per Formula,
𝑥1 = 𝑏 − 0.618(𝑏 − 𝑎)
𝑥2 = 𝑎 + 0.618(𝑏 − 𝑎)

𝒌 𝒂 𝒃 𝒙𝟏 𝒙𝟐 𝒇(𝒙𝟏 ) 𝒇(𝒙𝟐 ) 𝑴𝒊𝒏𝒊𝒎𝒖𝒎 𝑭𝒖𝒏𝒄𝒕𝒊𝒐𝒏

1 0.000 1.000 0.382 0.618 -0.427 -0.545 𝒇(𝒙𝟐 )

2 0.382 1.000 0.618 0.764 -0.545 -0.562 𝒇(𝒙𝟐 )

3 0.618 1.000 0.764 0.854 -0.562 -0.552 𝒇(𝒙𝟏 )

4 0.618 0.854 0.708 0.764 -0.561 -0.562 𝒇(𝒙𝟐 )

5 0.708 0.854 0.764 0.798 -0.562 -0.560 𝒇(𝒙𝟏 )

6 0.708 0.798 0.743 0.764 -0.562 -0.562 𝒇(𝒙𝟏 ) = 𝒇(𝒙𝟐 )

Thus, x_min ∈ [0.708, 0.798].

Hence, x* = (0.708 + 0.798)/2 = 1.506/2 = 0.753 and f(x*) = −0.562.
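
A short Scilab sketch of the Golden Section search (the stopping width of 0.01 is my own assumption):

// Golden Section search for the minimum of f(x) = x(x - 1.5) on [0, 1]
function y = f(x)
    y = x.*(x - 1.5);
endfunction

a = 0; b = 1;
r = 0.618;                      // golden-ratio fraction used in the tutorial
while (b - a) > 0.01
    x1 = b - r*(b - a);
    x2 = a + r*(b - a);
    if f(x1) < f(x2) then
        b = x2;                 // minimum lies in [a, x2]
    else
        a = x1;                 // minimum lies in [x1, b]
    end
end
disp((a + b)/2);                // about 0.75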

5.2 Minimize the function f(x) = x² − x by the Golden Section Method, taking the initial interval as [0, 3] (x₀ = 3).

a 0.382 0.618 b
0 𝑥1 𝑥2 1

As per Formula,
𝑥1 = 𝑏 − 0.618(𝑏 − 𝑎)
𝑥2 = 𝑎 + 0.618(𝑏 − 𝑎)

𝒌 𝒂 𝒃 𝒙𝟏 𝒙𝟐 𝒇(𝒙𝟏 ) 𝒇(𝒙𝟐 ) 𝑴𝒊𝒏𝒊𝒎𝒖𝒎 𝑭𝒖𝒏𝒄𝒕𝒊𝒐𝒏

1 0.00000 3.00000 1.14600 1.85400 0.16732 1.58332 𝒇(𝒙𝟏 )

2 0.00000 1.85400 0.70823 1.14577 -0.20664 0.16702 𝒇(𝒙𝟏 )

3 0.00000 1.14577 0.43768 0.70809 -0.24612 -0.20670 𝒇(𝒙𝟏 )

4 0.00000 0.70809 0.27049 0.43760 -0.19732 -0.24611 𝒇(𝒙𝟏 ) = 𝒇(𝒙𝟐 )

Thus, x_min ∈ [0.0, 0.70809].

Hence, x* = (0.0 + 0.70809)/2 = 0.70809/2 = 0.35404 and f(x*) = −0.229.


Tutorial - 06

Fibonacci Search Method

6.1 Find the value of x in the interval (0, 1) which minimizes the function f(x) = x(x − 1.5), with accuracy ±0.05, using the Fibonacci Search Method.

a 0.382 0.618 b
0 𝑥1 𝑥2 1
Here the given accuracy is ±0.05, so

Lₙ ≤ 0.05 × L₀
∴ Lₙ/L₀ ≤ 0.05
∴ 1/Fₙ ≤ 0.05    (since Lₙ/L₀ = 1/Fₙ)
∴ Fₙ ≥ 20
Now taking the Fibonacci Series;
𝐹0 = 1 𝐹4 = 5
𝐹1 = 1 𝐹5 = 8
𝐹2 = 2 𝐹6 = 13
𝐹3 = 3 𝐹7 = 21

The smallest n satisfying Fₙ ≥ 20 is n = 7, since F₆ = 13 < 20 ≤ 21 = F₇. So take n = 7.
As well as,

x₂ = a + (Fₙ₋ₖ/Fₙ₋ₖ₊₁)(b − a)

x₁ = a + b − x₂


k   Fₙ₋ₖ/Fₙ₋ₖ₊₁   a        b        x₁       x₂       f(x₁)     f(x₂)     Minimum Function
1   F₆/F₇         0.0000   1.0000   0.3810   0.6190   -0.4263   -0.5454   f(x₂)
2   F₅/F₆         0.3810   1.0000   0.6190   0.7619   -0.5454   -0.5624   f(x₂)
3   F₄/F₅         0.6190   1.0000   0.7619   0.8571   -0.5624   -0.5510   f(x₁)
4   F₃/F₄         0.6190   0.8571   0.7143   0.7619   -0.5612   -0.5624   f(x₂)
5   F₂/F₃         0.7143   0.8571   0.7619   0.8095   -0.5624   -0.5590   f(x₁)
6   F₁/F₂         0.7143   0.8095   0.7619   0.7619   -0.5624   -0.5624   f(x₁) = f(x₂)
7   F₀/F₁         0.7143   0.7619   0.7143   0.7619   -0.5612   -0.5624   f(x₂)

Thus, x_min ∈ [0.7143, 0.7619].

Hence, x* = (0.7143 + 0.7619)/2 = 1.4762/2 = 0.7381 and f(x*) = −0.562.
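
A short Scilab sketch of the Fibonacci search as used above (keeping the lower sub-interval when the two function values are equal reproduces the table):

// Fibonacci search for the minimum of f(x) = x(x - 1.5) on [0, 1], n = 7
function y = f(x)
    y = x.*(x - 1.5);
endfunction

F = [1 1 2 3 5 8 13 21];        // F(1) = F0, ..., F(8) = F7
n = 7;
a = 0; b = 1;
for k = 1:n
    r  = F(n-k+1)/F(n-k+2);     // ratio F(n-k)/F(n-k+1); indices shifted by 1
    x2 = a + r*(b - a);
    x1 = a + b - x2;
    if f(x1) <= f(x2) then
        b = x2;
    else
        a = x1;
    end
end
disp([a b]);                    // final interval, about [0.7143, 0.7619]
disp((a + b)/2);                // estimate of the minimizer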

6.2 Minimize the function f(x) = −(1/(x − 1)²)·(log x − 2(x − 1)/(x + 1)) in the range [1.5, 4.5] by the Fibonacci algorithm, such that 0.07 is the required accuracy.

a 0.382 0.618 b
1.5 𝑥1 𝑥2 4.5

Here the given accuracy is 0.07, so

Lₙ ≤ 0.07 × L₀
∴ Lₙ/L₀ ≤ 0.07
∴ 1/Fₙ ≤ 0.07    (since Lₙ/L₀ = 1/Fₙ)
∴ Fₙ ≥ 14.29

Now taking the Fibonacci Series;


𝐹0 = 1 𝐹4 = 5
𝐹1 = 1 𝐹5 = 8
𝐹2 = 2 𝐹6 = 13
𝐹3 = 3 𝐹7 = 21

The smallest n satisfying Fₙ ≥ 14.29 is n = 7, since F₆ = 13 < 14.29 ≤ 21 = F₇. So take n = 7.

As well as,

x₂ = a + (Fₙ₋ₖ/Fₙ₋ₖ₊₁)(b − a)

x₁ = a + b − x₂


k   Fₙ₋ₖ/Fₙ₋ₖ₊₁   a        b        x₁       x₂       f(x₁)    f(x₂)    Minimum Function
1   F₆/F₇         1.5000   4.5000   2.6429   3.3571   0.1778   0.1001   f(x₂)
2   F₅/F₆         2.6429   4.5000   3.3571   3.7857   0.1001   0.0755   f(x₂)
3   F₄/F₅         3.3571   4.5000   3.7857   4.0714   0.0755   0.0638   f(x₂)
4   F₃/F₄         3.7857   4.5000   4.0714   4.2143   0.0638   0.0589   f(x₂)
5   F₂/F₃         4.0714   4.5000   4.2143   4.3571   0.0589   0.0545   f(x₂)
6   F₁/F₂         4.2143   4.5000   4.3571   4.3571   0.0545   0.0545   f(x₁) = f(x₂)
7   F₀/F₁         4.3571   4.5000   4.3571   4.5000   0.0545   0.0506   f(x₂)

Thus, x_min ∈ [4.3571, 4.5].

Hence, x* = (4.3571 + 4.5)/2 = 8.8571/2 = 4.4286 and f(x*) = 0.052.


Tutorial - 07
Simplex Method

7.1 Maximize the function 𝒚 = 𝟐𝒙𝟏 − 𝒙𝟐 subject to the constraints,

𝒙𝟏 ≥ 𝟎; 𝒙𝟐 ≥ 𝟎;

−3𝒙𝟏 + 𝟐𝒙𝟐 ≤ 𝟐;
2𝒙𝟏 − 𝟒𝒙𝟐 ≤ 𝟑;
𝒙𝟏 + 𝒙𝟐 ≤ 𝟔

Step 1: Equation can be transformed using slack variable as


−3𝒙𝟏 + 𝟐𝒙𝟐 + 𝒙𝟑 = 𝟐;
2𝒙𝟏 − 𝟒𝒙𝟐 + 𝒙𝟒 = 𝟑;
𝒙𝟏 + 𝒙𝟐 + 𝒙𝟓 = 𝟔

Step 2: Basic feasible solution = (0, 0, 2, 3, 6)

Step 3: x₃, x₄ and x₅ are the basic variables and x₁ and x₂ are the non-basic variables. The equations can be rearranged as:

x₃ − 3x₁ + 2x₂ = 2
x₄ + 2x₁ − 4x₂ = 3
x₅ + x₁ + x₂ = 6
y − 2x₁ + x₂ = 0
Array 1:

𝒙𝟏 𝒙𝟐 Ratio
𝒙𝟑 2 -3 2 -0.667
𝒙𝟒 3 2 -4 Pivot Row 1.5
𝒙𝟓 6 1 1 6
𝒚 0 -2 1

Pivot Column
Pivot element = 2

New transformed Array 2:

𝒙𝟒 𝒙𝟐 Ratio
𝒙𝟑 6.5 1.5 -4 -1.625
𝒙𝟏 1.5 0.5 -2 -0.75
𝒙𝟓 4.5 -0.5 3 Pivot Row 1.5
𝒚 3 1 -3

Pivot Column

Array 3:

𝒙𝟒 𝒙𝟓
𝒙𝟑 12.5 0.833 1.333
𝒙𝟏 4.5 0.167 0.667
𝒙𝟐 1.5 -0.167 0.333
𝒚 7.5 0.5 1.0
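
Array 3 has no negative coefficients left in the y-row, so the optimum is x₁ = 4.5, x₂ = 1.5 with y = 7.5. A quick Scilab check by enumerating the corner points of the feasible region (plain brute force used only as a verification, not the simplex method itself):

// Constraint boundaries written as rows of A*x = b; the last two are the axes x1 = 0, x2 = 0
A = [-3 2; 2 -4; 1 1; 1 0; 0 1];
b = [2; 3; 6; 0; 0];
best = -%inf; xbest = [];
for i = 1:4
    for j = i+1:5
        M = [A(i,:); A(j,:)];
        if abs(det(M)) > 1e-10 then
            x = M \ [b(i); b(j)];           // intersection of two boundary lines
            ok = and([-3 2; 2 -4; 1 1]*x <= [2; 3; 6] + 1e-9) & and(x >= -1e-9);
            if ok then
                y = 2*x(1) - x(2);
                if y > best then
                    best = y; xbest = x';
                end
            end
        end
    end
end
disp(xbest);    // (4.5, 1.5)
disp(best);     // 7.5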

7.2 Maximize 𝒚 = 𝟔𝒙𝟏 + 𝟓𝒙𝟐 subject to the constraints,

2𝒙𝟏 + 𝟓𝒙𝟐 ≤ 𝟐𝟎;

5𝒙𝟏 + 𝒙𝟐 ≥ 𝟓;
3𝒙𝟏 + 𝟏𝟏𝒙𝟐 ≥ 𝟑𝟑

Introducing slack and artificial variables wherever needed;

𝟐𝒙𝟏 + 𝟓𝒙𝟐 + 𝒙𝟑 = 𝟐𝟎;

𝟓𝒙𝟏 + 𝒙𝟐 − 𝒙𝟒 + 𝒙𝟏𝟎𝟎 = 𝟓;
𝟑𝒙𝟏 + 𝟏𝟏𝒙𝟐 − 𝒙𝟓 + 𝒙𝟏𝟎𝟏 = 𝟑𝟑

First basic solution


𝒙𝟏 = 𝟎, 𝒙𝟐 = 𝟎, 𝒙𝟑 = 𝟐𝟎, 𝒙𝟒 = 𝟎, 𝒙𝟓 = 𝟎, 𝒙𝟏𝟎𝟎 = 𝟓, 𝒙𝟏𝟎𝟏 = 𝟑𝟑
Where 𝒙𝟑 , 𝒙𝟏𝟎𝟎 and 𝒙𝟏𝟎𝟏 form the basis.

Adding a penalty function to the main equation, we get;

𝒚 = 𝟔𝒙𝟏 + 𝟓𝒙𝟐 − 𝑷(𝒙𝟏𝟎𝟎 + 𝒙𝟏𝟎𝟏 )


or in terms of non-basis variable.
𝒚 = 𝟔𝒙𝟏 + 𝟓𝒙𝟐 − 𝑷(𝟑𝟖 − 𝟖𝒙𝟏 − 𝟏𝟐𝒙𝟐 + 𝒙𝟒 + 𝒙𝟓 )

Array 1:

𝒙𝟏 𝒙𝟐 𝒙𝟒 𝒙𝟓
𝒙𝟑 20 2 5 0 0
𝒙𝟏𝟎𝟎 5 5 1 -1 0
𝒙𝟏𝟎𝟏 33 3 11 0 -1 Pivot Row
𝒚 0 -6 -5 0 0
(𝑷) -38 -8 -12 1 1

Pivot Column

Array 2:

𝒙𝟏 𝒙𝟏𝟎𝟏 𝒙𝟒 𝒙𝟓
𝒙𝟑 5 0.636 -0.455 0 0.455
𝒙𝟏𝟎𝟎 2 4.727 -0.091 -1 0.091
𝒙𝟐 3 0.273 0.091 0 -0.091
𝒚 15 -4.636 0.455 0 -0.455
(𝑷) -2 -4.727 1.091 1 -0.091

Now 𝒙𝟏𝟎𝟏 is zero, we can drop this column.

𝒙𝟏 𝒙𝟒 𝒙𝟓
𝒙𝟑 5 0.636 0 0.455
𝒙𝟏𝟎𝟎 2 4.727 -1 0.091
𝒙𝟐 3 0.273 0 -0.091 Pivot Row
𝒚 15 -4.636 0 -0.455
(𝑷) -2 -4.727 1 -0.091

Pivot Column

Array 3:
𝒙𝟏𝟎𝟎 𝒙𝟒 𝒙𝟓
𝒙𝟑 4.731 -0.135 0.135 0.442
𝒙𝟏 0.423 0.212 -0.212 0.019
𝒙𝟐 2.885 -0.058 0.058 -0.096
𝒚 16.962 0.981 -0.981 -0.365
(𝑷) 0 1 -1 0

Now 𝒙𝟏𝟎𝟎 is zero, so we remove it from the array.

𝒙𝟒 𝒙𝟓
𝒙𝟑 4.731 0.135 0.442 Pivot Row
𝒙𝟏 0.423 -0.212 0.019
𝒙𝟐 2.885 0.058 -0.096
𝒚 16.962 -0.981 -0.365

Pivot Column
The final array gives the solution x₁ = 7.857, x₂ = 0.857 with maximum y = 51.429:
𝒙𝟑 𝒙𝟓
𝒙𝟒 35.141 7.428 3.286
𝒙𝟏 7.857 1.571 0.714
𝒙𝟐 0.857 -0.429 -0.286
𝒚 51.429 7.285 2.857
7.3 Maximize the function 𝒚 = 𝟏𝟎𝟎 − (𝟏𝟎 − 𝒙𝟏 )𝟐 − (𝟓 − 𝒙𝟐 )𝟐 with 𝒂 = 𝟐, using Sequential
Simplex Method.

As can be understood, maximum is obtained at 𝒙𝟏 = 𝟏𝟎 and 𝒙𝟐 = 𝟓.

Given 𝒂 = 𝟐.
Let’s calculate 𝒑 and 𝒒.

p = (a/(n√2)) [√(n+1) + (n − 1)]

p = (2/(2√2)) [√3 + 1] = 1.9318

q = (a/(n√2)) [√(n+1) − 1]

q = (2/(2√2)) [√3 − 1] = 0.5176

To find the next point after rejecting the worst point,

x_j(new) = 2 [ ( Σ x_ij − x_j^R ) / n ] − x_j^R,   the sum running over i = 1, …, n+1,

where x_j^R is the j-th coordinate of the rejected (worst) point.

Point
Point 𝒋 𝒙𝟏𝒋 𝒙𝟐𝒋 𝒚𝒊 Points in simplex
Rejected
1 0 0 -25
Starting
2 1.9318 0.5176 14.81 1 2 3
Simplex
3 0.5176 1.9318 0.670
4 2.4494 2.4494 36.48 1 4 2 3
5 3.864 1.0352 46.6298 3 4 2 5
6 4.3816 2.967 64.3004 2 4 6 5
7 5.7962 1.5528 70.2171 4 7 6 5
8 6.3138 3.4846 84.1154 5 7 6 8
9 7.7284 2.0704 86.2572 6 7 9 8
10 8.8246 4.0022 95.9278 7 10 9 8
11 9.6606 2.588 94.0671 8 10 9 11
12 10.1782 4.5198 99.7376 9 10 12 11
13 8.7636 5.934 97.5989 11 10 12 13
14 10.6958 6.4516 97.4087 10 14 12 13
15 12.1104 5.0374 95.5448 12 14 15 13
16 11.5928 3.1056 93.8742 13 14 15 16


Tutorial - 08
Box Complex Method.

8.1 Define a suitable search region and a feasible initial base point for the complex method
of search in minimizing 𝒚 = 𝟒𝒙𝟏 + 𝒙𝟐 + 𝟐𝒙𝟑 , subject to the restriction that 𝒙𝒊 ≥ 𝟎 and

𝒙𝟏 + 𝒙𝟐 + 𝒙𝟑 ≤ 𝟔;

𝟓𝒙𝟏 − 𝒙𝟐 + 𝒙𝟑 ≤ 𝟒;

𝒙𝟏 + 𝟑𝒙𝟐 + 𝟐𝒙𝟑 ≥ 𝟏

Set up a complex method of search and carry out three cycles of search.

The vertices used in the complex method must each satisfy all of the imposed constraints; a region in which this holds is called a suitable search region.

Choose any trial point and check all of the constraints. If some of them are violated, a feasible point is found by minimizing

S = Σₖ gₖ

the summation being taken only over those constraints which are violated. A more formal presentation of this procedure is to minimize

S = Σₖ H(gₖ) gₖ

where H(gₖ) is the Heaviside function defined as

H(gₖ) = 1 if gₖ ≥ 0
H(gₖ) = 0 if gₖ < 0

so that the summation S is indeed taken only over the violated constraints. When S vanishes, we have found a feasible initial base point.

Now, Minimize

𝒚 = 𝟒𝒙𝟏 + 𝒙𝟐 + 𝟐𝒙𝟑

First define a range -

𝟎 ≤ 𝒙𝟏 ≤ 𝟏; 𝟐 ≤ 𝒙𝟐 ≤ 𝟑; 𝟎 ≤ 𝒙𝟑 ≤ 𝟏

Here there are 3 variables, so we take n + 1 = 3 + 1 = 4 points.

Each coordinate is generated as xᵢ = xᵢᴸ + rᵢⱼ (xᵢᵁ − xᵢᴸ), where xᵢᴸ and xᵢᵁ are the lower and upper bounds and rᵢⱼ is a random number in (0, 1). (A short Scilab sketch of this set-up step is given after the table below.)

No. 𝒙𝟏 𝒙𝟐 𝒙𝟑 𝒚 Random Point

1 0.5 2.5 0.5 5.5 0.5 0.5 0.5

2 0.4 2.3 0.6 5.1 0.4 0.3 0.6

3 0.7 2.8 0.4 6.4 0.7 0.8 0.4

4 0.2 2.7 0.1 3.8 0.2 0.7 0.8
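
A minimal Scilab sketch of this set-up step (the random numbers will of course differ from the values tabulated above):

// Generate n+1 = 4 random starting points inside the chosen ranges
// and evaluate y = 4x1 + x2 + 2x3
lb = [0 2 0];                   // lower bounds for x1, x2, x3
ub = [1 3 1];                   // upper bounds
for j = 1:4
    r = rand(1, 3);             // random numbers r_ij in (0, 1)
    x = lb + r.*(ub - lb);
    y = 4*x(1) + x(2) + 2*x(3);
    mprintf("point %d: x = (%.2f, %.2f, %.2f), y = %.2f\n", j, x(1), x(2), x(3), y);
end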

So, from these four points, the worst point is the 3rd point.

For next cycle we have the formula-

(𝛼 + 1)𝑥𝑖𝑚 − 𝛼𝑥𝑖𝑅 = 𝑥𝑖𝑛

where, 𝑥𝑖𝑚 is centroid, 𝑥𝑖𝑅 is worst point and we will take 𝛼 = 1.3.

No. 𝒙𝟏 𝒙𝟐 𝒙𝟑 𝒚 Random Point

5 0.0 2.11 0.4 2.91

6 0.5 2.60 0.5 5.60 0.1 0.6 0.5

7 0.9 2.90 0.8 8.10 0.4 0.9 0.8

8 0.1 2.20 0.3 3.20 0.7 0.2 0.5

Here worst point is 7th point.

No. 𝒙𝟏 𝒙𝟐 𝒙𝟑 𝒚 Random Point

9 -0.71 1.5276 -0.12

0 2 0 2

8 0.1 2.1 0.1 2.7 0.1 0.1 0.1

10 0.2 2.3 0.1 3.3 0.2 0.3 0.1

11 0.3 2.1 0.1 3.5 0.3 0.1 0.1

12 0.2 2.2 0.1 3.2 0.2 0.2 0.1

Here the worst point is the 11th point; after 3 cycles the minimum value obtained is y = 2.


Tutorial - 09
Genetic Algorithm using Inbuilt Scilab Code.

9.1 Theory

Science arises from the very human desire to understand and control the world. Over the course of
history, we humans have gradually built up a grand edifice of knowledge that enables us to predict,
to varying extents, the weather, the motions of the planets, solar and lunar eclipses, the courses of
diseases, the rise and fall of economic growth, the stages of language development in children, and a
vast panorama of other natural, social, and cultural phenomena. More recently we have even come to
understand some fundamental limits to our abilities to predict. Over the eons we have developed increasingly complex means to control many aspects of our lives and our interactions with nature, and we have learned, often the hard way, the extent to which other aspects are uncontrollable. The advent of electronic computers has arguably been the most revolutionary development in the history of science and technology. This ongoing revolution is profoundly increasing our ability to predict and control nature in ways that were barely conceived of even half a century ago. For many, the crowning achievements of this revolution will be the creation, in the form of computer programs, of new species of intelligent beings, and even of new forms of life.

The goals of creating artificial intelligence and artificial life can be traced back to the very beginnings
of the computer age. The earliest computer scientists (Alan Turing, John von Neumann, Norbert Wiener, and others) were motivated in large part by visions of imbuing computer programs with intelligence, with the life-like ability to self-replicate, and with the adaptive capability to learn and to
control their environments. These early pioneers of computer science were as much interested in
biology and psychology as in electronics, and they looked to natural systems as guiding metaphors
for how to achieve their visions. It should be no surprise, then, that from the earliest days computers
were applied not only to calculating missile trajectories and deciphering military codes but also to
modelling the brain, mimicking human learning, and simulating biological evolution. These
biologically motivated computing activities have waxed and waned over the years, but since the early
1980s they have all undergone a resurgence in the computation research community. The first has
grown into the field of neural networks, the second into machine learning, and the third into what is
now called "evolutionary computation," of which genetic algorithms are the most prominent
example.

9.2 GA Operators

The simplest form of genetic algorithm involves three types of operators: selection, crossover (single
point), and mutation.

9.3 Selection

This operator selects chromosomes in the population for reproduction. The fitter the chromosome,
the more times it is likely to be selected to reproduce.

9.4 Crossover

This operator randomly chooses a locus and exchanges the subsequence before and after that locus
between two chromosomes to create two offspring. For example, the strings 10000100 and
11111111 could be crossed over after the third locus in each to produce the two offspring 10011111
and 11100100. The crossover operator roughly mimics biological recombination between two
single-chromosome (haploid) organisms.

9.5 Mutation

This operator randomly flips some of the bits in a chromosome. For example, the string 00000100
might be mutated in its second position to yield 01000100. Mutation can occur at each bit position
in a string with some probability, usually very small (e.g., 0.001)

9.6 A Simple Genetic Algorithm

Given a clearly defined problem to be solved and a bit string representation for candidate solutions,
a simple

GA works as follows:

1. Start with a randomly generated population of n l-bit chromosomes (candidate solutions to a problem).
2. Calculate the fitness 𝑓(𝑥) of each chromosome x in the population.
3. Repeat the following steps until n offspring have been created:
a. Select a pair of parent chromosomes from the current population, the probability of
selection being an increasing function of fitness. Selection is done "with replacement,"
meaning that the same chromosome can be selected more than once to become a parent.
        b. With probability pc (the "crossover probability" or "crossover rate"), cross over the pair
           at a randomly chosen point (chosen with uniform probability) to form two offspring. If
           no crossover takes place, form two offspring that are exact copies of their respective
           parents. (Note that here the crossover rate is defined to be the probability that two
           parents will cross over at a single point. There are also "multi-point crossover" versions
           of the GA in which the crossover rate for a pair of parents is the number of points at which
           a crossover takes place.)
c. Mutate the two offspring at each locus with probability pm (the mutation probability or
mutation rate), and place the resulting chromosomes in the new population. If n is odd,
one new population member can be discarded at random.
4. Replace the current population with the new population.
5. Go to step 2.

9.7 Programme in Scilab

GA: PENALTY FUNCTION

function y = myFun1(x)
    y = 100*(x(2,:) - x(1,:).^2).^2 + (1 - x(1,:)).^2;
endfunction

function y = myFun2(x)
    P = 1e4;
    y = (x(1,:) - 3).^2 + (x(2,:) - 2).^2 + P*(x(1,:) + x(2,:) - 4).^2;
endfunction

PopSize = 100;
Proba_cross = 0.9;
Proba_mut = 0.2;
NbGen = 100;
NbCouples = 110;
Log = %T;
Pressure = 0.05;

ga_params = init_param();

// Parameters to adapt to the shape of the optimization problem
ga_params = add_param(ga_params, "minbound", [-10; -10]);
ga_params = add_param(ga_params, "maxbound", [10; 10]);
ga_params = add_param(ga_params, "dimension", 2);
ga_params = add_param(ga_params, "beta", 0);
ga_params = add_param(ga_params, "delta", 0.1);

// Parameters to fine-tune the genetic algorithm.
// All these parameters are optional for continuous optimization.
ga_params = add_param(ga_params, "init_func", init_ga_default);
ga_params = add_param(ga_params, "crossover_func", crossover_ga_default);
ga_params = add_param(ga_params, "mutation_func", mutation_ga_default);
ga_params = add_param(ga_params, "codage_func", coding_ga_identity);
ga_params = add_param(ga_params, "selection_func", selection_ga_elitist);
// ga_params = add_param(ga_params, "selection_func", selection_ga_random);
ga_params = add_param(ga_params, "nb_couples", NbCouples);
ga_params = add_param(ga_params, "pressure", Pressure);

// The penalised objective myFun2 is the one optimised in the run reported below.
[pop_opt, fobj_pop_opt, pop_init, fobj_pop_init] = ..
    optim_ga(myFun2, PopSize, NbGen, Proba_mut, Proba_cross, Log, ga_params);

// Display basic statistics:
// min, mean and max function values of the population.
disp([min(fobj_pop_opt) mean(fobj_pop_opt) max(fobj_pop_opt)])

// Get the best x (i.e. the one which achieves the minimum function value)
[fmin, k] = min(fobj_pop_opt)
xmin = pop_opt(k)

// Get the worst x
[fmax, k] = max(fobj_pop_opt)
xmax = pop_opt(k)

9.8 Output

optim ga: Initialization of the population

optim_ga: iteration 1/100 - min / max value found = 70.244198/41050.080081

optim_ga: iteration 2/100 - min / max value found = 0.720068/2469.146131

optim_ga: iteration 3/100 - min / max value found = 0.720068/260.936590

optim_ga: iteration 4/100 - min / max value found = 0.720068/36.992600

optim_ga: iteration 5/100 - min / max value found = 0.678666/6.983001

optim_ga: iteration 6/100 - min /max value found = 0.597847/2.377016

optim ga: iteration 7/100 - min /max value found = 0.500013/1.015235

optim ga: iteration 8/100 - min /max value found 0.500013/0.633310

optim ga: iteration 9/100 - min /max value found 0.500013/0.53338

……………………………………………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………………………………………….

optim ga: iteration 95/100 - min / max value found = 0.499975/0.499975

optim ga: iteration 96/100 - min / max value found = 0.499975/0.499975

optim ga: iteration 97/100 - min / max value found = 0.499975/0.499975

optim_ga: iteration 98/100 - min / max value found = 0.499975/0.499975

optim ga: iteration 99/100 - min / max value found = 0.499975/0.499975

optim ga: iteration 100/100 - min / max value found = 0.499975/0.499975

0.499975 0.499975 0.499975

disp([min(fobj_pop_opt) mean(fobj_pop_opt) max(fobj_pop_opt)])

0.499975 0.499975 0.499975

-->[fmin, k] = min(fobj_pop_opt)

k = 75

fmin = 0.499975

-->xmin = 2.5000249

1.5000251

-->[fmax, k] = max(fobj_pop_opt)

k = 62

fmax = 0.499975

-->xmax = pop_opt(k)

xmax = 2.5000275

1.5000225

Validation:

If we change the population size and the number of generations, the result changes accordingly; the quality of the solution therefore depends on the population size and the number of generations.


Tutorial - 10
To study about Mixed Integer Linear Programming (MILP).

10.1 Introduction

Many problems in plant operation, design, location, and scheduling involve variables that are not continuous but instead have integer values. Decision variables for which the levels are a dichotomy (to install or not install a new piece of equipment, for example) are termed "0-1" or binary variables. Sometimes we can treat integer variables as if they were continuous, especially when the range of a variable contains a large number of integers, such as 100 trays in a distillation column, and round the optimal solution to the nearest integer value. But when only a smaller range is available, rounding to an optimal solution becomes more difficult.

These problems relating to the optimization of discrete variables are dealt with Mixed Integer
Programming (MIP).

Here, the objective function depends on two sets of variables, x and y, x is a vector of continuous
variables and y is a vector of integer variables. Many MIP problems are linear in the objective function
and constraints and hence are subject to solution by linear programming. These problems are called
mixed-integer linear programming (MILP) problems.

10.2 Algorithm/Problem Formulation

Suppose we have n objects. The weight of the i-th object is wᵢ and its value is vᵢ. Select a subset of the objects such that their total weight does not exceed W (the capacity of the knapsack) and their total value is a maximum.

Maximize:   f(y) = Σᵢ₌₁ⁿ vᵢ yᵢ

Subject to:   Σᵢ₌₁ⁿ wᵢ yᵢ ≤ W,   yᵢ = 0, 1,   i = 1, 2, …, n

The binary variable 𝑦𝑖 indicates whether an object 𝑖 is selected (𝑦𝑖 = 1) or not selected (𝑦𝑖 = 0).

10.3 Approaches for MILP problems

Approaches for MILP and MINLP are capable of finding an optimal solution and verifying that they have done so. Specifically, we consider branch-and-bound (BB) and outer linearization (OL) methods.

BB can be applied to both linear and nonlinear problems, but OL is used for nonlinear problems by
solving a sequence of MILPs.

Example:

𝑴𝒂𝒙𝒊𝒎𝒊𝒛𝒆: 𝒇 = 𝟖𝟔𝒚𝟏 + 𝟒𝒚𝟐 + 𝟒𝟎𝒚𝟑

𝑺𝒖𝒃𝒋𝒆𝒄𝒕 𝒕𝒐: 𝟕𝟕𝟒𝒚𝟏 + 𝟕𝟔𝒚𝟐 + 𝟒𝟐𝒚𝟑 ≤ 𝟖𝟕𝟓

𝟔𝟕𝒚𝟏 + 𝟐𝟕𝒚𝟐 + 𝟓𝟑𝒚𝟑 ≤ 𝟖𝟕𝟓

𝒚𝟏 , 𝒚𝟐 , 𝒚𝟑 = 𝟎, 𝟏

One or more of the integer constraints 𝒚𝒊 = 𝟎 𝒐𝒓 𝟏 are replaced by the relaxed condition 0 ≤ 𝑦𝑖 ≤ 1,
which includes the original integers, but also all of the real values in between.

1. The optimal solution has one fractional (non-integer) variable (y2) and an objective function
value of 129.1. Because the feasible region of the relaxed problem includes the feasible region
of the initial IP problem, 129.1 is an upper bound on the value of the objective function of the
KP. If we knew a feasible binary solution, its objective value would be a lower bound on the
value of the objective function, but none is assumed here, so the lower bound is set to -∞.

2. At node 1, y₂ is the only fractional variable, and hence any feasible integer solution must satisfy either y₂ = 0 or y₂ = 1. We create two new relaxations, represented by nodes 2 and 3, by imposing these two integer constraints. The process of creating these two relaxed sub-problems is called branching.

3. If the relaxed IP problem at a given node has an optimal binary solution, that solution solves
the IP, and there is no need to proceed further. This node is said to be fathomed, because we
do not need to branch from it.

The difference (ub - lb) is called the "gap"



Gap / (1.0 + |lb|) ≤ tol

A tol value of 10⁻⁴ would be a tight tolerance, 0.01 would be neither tight nor loose, and 0.03 or higher would be loose. The termination criterion used in the Microsoft Excel Solver has a default tol value of 0.05.

Branch-and-bound tree for the example:

Node 1 (root; 0 ≤ y₁ ≤ 1, 0 ≤ y₂ ≤ 1, 0 ≤ y₃ ≤ 1): continuous LP optimum y* = (1, 0.776, 1), f = 129.10.
  Upper bound = 129.1, lower bound = −∞, no incumbent.

Branching on y₂:
Node 2 (y₂ = 0): y* = (1, 0, 1), f = 126.00. All-integer, so it becomes the incumbent; this is the IP optimum.
Node 3 (y₂ = 1): y* = (0.978, 1, 1), f = 128.11.
  Upper bound = 128.11, lower bound = 126.00, incumbent = (1, 0, 1).

Branching node 3 on y₁:
Node 4 (y₁ = 0, y₂ = 1): y* = (0, 1, 1), f = 44.00.
Node 5 (y₁ = 1, y₂ = 1): y* = (1, 1, 0.595), f = 113.81.

Both values fall below the incumbent value 126.00, so nodes 4 and 5 are fathomed and the IP optimum is y = (1, 0, 1) with f = 126.
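
Because there are only 2³ = 8 candidate binary vectors, the reported optimum can be verified by brute-force enumeration in Scilab (a check only, not branch-and-bound itself):

// Brute-force check of the 0-1 example
v = [86 4 40];                      // objective coefficients
A = [774 76 42; 67 27 53];          // constraint coefficients
b = [875; 875];
best = -%inf; ybest = [];
for y1 = 0:1
    for y2 = 0:1
        for y3 = 0:1
            y = [y1; y2; y3];
            if and(A*y <= b) then
                f = v*y;
                if f > best then
                    best = f; ybest = y';
                end
            end
        end
    end
end
disp(ybest);    // (1, 0, 1)
disp(best);     // 126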
