Short Course of Optimization Technique

This document discusses optimization techniques and modeling concepts. It defines optimization as the use of techniques to achieve the most favorable operating conditions in engineering by finding the maximum or minimum value of a function. The document outlines common terms used in optimization, such as objectives, variables, and constraints, and the steps of the optimization process. It provides an example of specifying the optimization of a steam condenser. It also discusses modeling concepts, such as the definition of a confounding variable and the comparison of fundamental versus empirical models. The key goal of optimization techniques is to utilize resources efficiently while minimizing environmental impact through analytical modeling approaches.


See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/330204775

Optimization Technique

Technical Report · October 2016

Author: Ghanim M. Alwan, Madenat Al-Elem University College (MAUC)

All content following this page was uploaded by Ghanim M. Alwan on 19 May 2023.


Optimization Technique

Dr. Ghanim M. Alwan
Visiting Scholar, Missouri University of Science and Technology.
E-mail: [email protected]
What is optimization?

• Optimization technique in the field of engineering is a powerful tool to utilize resources in an efficient way as well as to reduce the environmental impact of a process. Application of the optimization process helps us achieve the most favorable operating conditions.

• The primary focus of using optimization techniques is to find the maximum or minimum value of a function, depending on the circumstances.

• Any engineering or research discipline involving design, maintenance and manufacturing requires certain technical decisions to be taken at different stages. The desired outcome of these decisions is to maximize profit with minimum utilization of resources.
• Optimization searches are also significant for exposing weaknesses of a simulated model and for guiding the software toward the best execution. Robust optimization can handle uncertain parameters and complex confounding (hidden) variables in interacting nonlinear systems.

• Knowledge of optimization theory as well as its practical application is essential for all engineers.
Terms commonly used in the optimization process:
Objectives to be minimized: cost, energy, loss, waste, processing time, raw material consumption, etc.
Objectives to be maximized: profit, conversion, yield, utility, efficiency, capacity, etc.

+ LP, linear programming problem: the objective f(x) and the constraints ci(x) are all linear.
+ NLP, nonlinear programming problem: f(x) is linear/nonlinear and ci(x) is nonlinear/linear, i.e., at least one of them is nonlinear.
+ Equality constraints: represented by equations.
+ Inequality constraints: represented by inequality relations.
* Static and dynamic problems:
Most optimization problems are based on steady-state models that can be formulated from experimental data. Optimization problems involving dynamic models are better suited to "optimal control".
** Continuous and discrete variables:
A continuous variable is any process variable that takes real values, such as pressure, temperature, concentration, etc., while a discrete variable (design variable) takes integer values only, such as the number of tubes in a heat exchanger or the number of trays in a distillation column. Optimization problems without discrete variables are far easier to solve. The reliability of an optimization technique depends on capturing the discrete variables.
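As a concrete LP instance (the numbers here are illustrative, not from this document), the optimum of a small product-mix problem lies at a vertex of the feasible region, which a few lines of code can enumerate directly:

```python
from itertools import combinations

# LP sketch: maximize profit 3*x1 + 5*x2 subject to linear constraints a.x <= b.
# The optimum of an LP lies at a vertex of the feasible polygon, so for two
# variables we can simply enumerate all constraint-line intersections.
A = [(1.0, 0.0), (0.0, 2.0), (3.0, 2.0), (-1.0, 0.0), (0.0, -1.0)]
b = [4.0, 12.0, 18.0, 0.0, 0.0]        # last two rows encode x1 >= 0, x2 >= 0
profit = lambda x: 3.0 * x[0] + 5.0 * x[1]

def intersect(i, j):
    (a1, a2), (c1, c2) = A[i], A[j]
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        return None                    # parallel constraint lines: no vertex
    return ((b[i] * c2 - a2 * b[j]) / det,
            (a1 * b[j] - b[i] * c1) / det)

vertices = [p for i, j in combinations(range(len(A)), 2)
            if (p := intersect(i, j)) is not None
            and all(a1 * p[0] + a2 * p[1] <= bi + 1e-9
                    for (a1, a2), bi in zip(A, b))]

best = max(vertices, key=profit)
print(best, profit(best))              # optimal vertex and maximum profit
```

For this toy data the best vertex is (2, 6) with profit 36; a production LP solver would of course be used for larger problems.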

Optimization process steps:

1. Selecting the objectives
2. Selecting the effective decision variables
3. Setting the constraints
4. Formulating a reliable optimization model
5. Implementing a suitable optimization algorithm.
Example: Specifying the Optimization of a Steam Condenser
Variables, Objectives, Constraints
Figure below shows a steam condenser. The designer needs to design a condenser that will
cost a minimum amount and condense a specified amount of steam, mmin. Steam flow rate, ms,
steam condition, x, water flow rate, mw, water temperature, Tw, water pressure Pw, and
materials are specified. Variables under the designer’s control include the outside diameter
of the shell, D; tube wall thickness, t; length of tubes, L; number of passes, N; number of
tubes per pass, n; baffle spacing, B, tube diameter, d. The model calculates the actual steam
condensed, mcond, the corrosion potential, CP, condenser pressure drop, ∆Pcond, cost and
overall size, Vcond. The overall size must be less than Vmax and the designer would like to limit
overall pressure drop to be less than ∆Pmax.

Figure: Schematic of steam condenser.

From this description it appears we have the following:


Design Variables:                        Process Variables:
Outside diameter of the shell, D         Steam flow rate, ms
Tube wall thickness, t                   Steam condition, x
Length of tubes, L                       Water flow rate, mw
Number of passes, N                      Water temperature, Tw
Number of tubes per pass, n              Water pressure, Pw
Baffle spacing, B                        Material properties, ρ, CP
Tube diameter, d

Constraints:                Objective Function:
Vcond ≤ Vmax                Minimize cost
mcond ≥ mmin
∆Pcond ≤ ∆Pmax

Model outputs entering the constraints and objective: overall size, Vcond; steam condensed, mcond; corrosion potential, CP; condenser pressure drop, ∆Pcond.
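Before any solver is chosen, a specification like the condenser problem can be recorded as plain data; the sketch below uses the variable names from the description above, but the data structure itself is an illustrative assumption:

```python
# Steam-condenser optimization specification recorded as plain data.
# Names mirror the description above; the structure itself is illustrative.
condenser_spec = {
    "design_variables": ["D", "t", "L", "N", "n", "B", "d"],
    "process_variables": ["ms", "x", "mw", "Tw", "Pw", "rho", "CP_material"],
    # Each constraint: (model output, relation, limit parameter)
    "constraints": [
        ("Vcond", "<=", "Vmax"),
        ("mcond", ">=", "mmin"),
        ("dPcond", "<=", "dPmax"),
    ],
    "objective": ("cost", "minimize"),
}

# N (number of passes) and n (tubes per pass) are discrete design variables,
# per the continuous/discrete distinction introduced earlier.
integer_variables = {"N", "n"}
print(condenser_spec["objective"], sorted(integer_variables))
```

Writing the specification down this way makes the later choice of algorithm (NLP versus mixed-integer) an explicit consequence of which variables are discrete.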
Concept of Modeling and Simulation

1.1 Definition: Modeling and simulation is a discipline for developing a level of understanding of the behavior of the parts of a system, and of the system as a whole. Modeling and simulation is very much an art. A model is a simplified representation of a system at some particular point, intended to promote understanding of the real system. The word model comes from the Latin modus, which means a measure. It takes talent, practice, and experience to become a successful mathematical modeler. A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. In general, there are five basic groups of variables: input variables, decision variables, confounding variables, random variables, and output variables. Three groups of variables are commonly used to describe a chemical process: input variables, confounding variables and output variables.

1.2 What is a Confounding Variable?

A confounding variable is an "extra" variable that you didn't account for. Confounding variables can ruin an experiment and give you useless results; they can suggest a correlation where in fact there is none. That is why it is important to know what a confounding variable is and how to avoid getting one into your experiment in the first place.
In an experiment, the independent variable typically has an effect on your dependent variable. Confounding variables are any other variables that also have an effect on your dependent variable; they act like extra independent variables having a hidden effect on your dependent variable. Confounding variables can cause two major problems:

• Increased variance
• Bias.

What happens if you don't control confounding variables?

In research studies, confounding variables influence both the cause and the effect that the researchers are assessing. Consequently, if the analysts do not include these confounders in their statistical model, it can exaggerate or mask the real relationship between two other variables. Reliable models (especially regression models) are flexible enough to eliminate the effects of confounders.
1.3 Fundamental model vs. empirical model:
The fundamental models, also called first-principles models, are based on physical–chemical relationships.
Actually, these models are derived by applying the conservation principle and may also include transport
phenomena, reaction kinetics, and thermodynamic (e.g., phase equilibrium) relationships. The fundamental
models offer several potential benefits. Since these models include detailed physical–chemical relationships,
they can better represent the nonlinear behavior and process dynamics; this allows the model to be used
beyond the operating range in which the model was constructed. Another advantage of utilizing the first-
principles approach is that the states are generally physical variables such as temperature or concentration
that can be directly measured. However, the fundamental models are time consuming to develop and they
often have a large number of equations with many parameters that need to be estimated.
The empirical model is generally developed for use when the actual process is too complex, when the underlying phenomena are not well understood, when the numerical solution of the fundamental model is quite difficult, or when the empirical model provides satisfactory predictions of the process characteristics. Experimental plant data are used to develop a relationship between the process input and the process output as an empirical model, using a mathematical framework such as an artificial neural network (ANN). Although the time required to obtain this type of model is often significantly reduced, empirical models generally can be used with confidence only within the operating range over which they were constructed.
Another type of model is the mixed or hybrid model. The mixed models are developed, as the name suggests,
by combining the fundamental and empirical models, thus utilizing the benefits of both. As an example, the
mixed modeling techniques have been used to model the polymerization reactors. The mass balance equations
for the reactants are developed within the fundamental modeling approach, whereas the unknown rates of the
reactions taking place are modeled within the empirical approach.
Steps of Machine Learning
• Gathering data
• Preparing the data
• Choosing a model
• Training the model
• Evaluating the model
• Tuning the hyperparameters
• Prediction
Optimization of a Machine Learning model: hyperparameter tuning

 Machine learning algorithms have been used widely in various applications and
areas. To fit a machine learning model to different problems, its hyperparameters
(HPs) must be tuned. Selecting the best hyperparameter configuration
for a machine learning model has a direct impact on the model's performance.

 Two types of parameters exist in machine learning models: those that can be
initialized and updated through the data-learning process (e.g., the weights of
the neurons in a neural network), named model parameters; and those that
cannot be directly estimated from data learning and must be set before training
an ML model because they define its architecture, named hyperparameters.

 Hyperparameters are the parameters used either to configure an ML
model (e.g., the penalty parameter C in a support vector machine, the variable
K in the K-NN algorithm, and the learning rate used to train a neural network) or to specify
the algorithm used to minimize the loss function (e.g., the activation function and
optimizer types in a neural network, and the kernel type in a support vector
machine).

 To build an optimal ML model, a range of possibilities must be explored. The
process of designing the ideal model architecture with an optimal
hyperparameter configuration is named hyperparameter optimization (HPO),
or hyperparameter tuning. Tuning the hyperparameters is considered a key
component of building an effective ML model, especially for tree-based ML
models and deep neural networks, which have many hyperparameters.

 The hyperparameter tuning process differs among ML algorithms because of
their different types of hyperparameters, including categorical, discrete, and
continuous hyperparameters.

 Among the available optimization methods, the hybrid technique of Genetic
Algorithm (GA) and Pattern Search Optimization (PSO) is a prevalent
metaheuristic approach for HPO problems. GA detects well-performing
hyperparameter combinations in each generation and passes them to the
next generation until the best-performing combination is identified. In the PS
algorithm, the parameter values are refined in each iteration until the final
global optimum is detected.
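A minimal tuning loop needs no ML library at all; the sketch below stands in a synthetic validation loss for real train-and-evaluate runs (an assumption for illustration) and grid-searches two hyperparameters:

```python
from itertools import product

# Synthetic validation loss standing in for real train-and-evaluate runs;
# by construction its minimum sits at learning_rate=0.1, n_estimators=200.
def validation_loss(learning_rate, n_estimators):
    return (learning_rate - 0.1) ** 2 + ((n_estimators - 200) / 100.0) ** 2

grid = {
    "learning_rate": [0.01, 0.1, 0.5],   # continuous hyperparameter, discretized
    "n_estimators": [100, 200, 400],     # discrete hyperparameter
}

best_config, best_loss = None, float("inf")
for lr, n in product(grid["learning_rate"], grid["n_estimators"]):
    loss = validation_loss(lr, n)        # real code would train and evaluate here
    if loss < best_loss:
        best_config, best_loss = {"learning_rate": lr, "n_estimators": n}, loss

print(best_config, best_loss)
```

Metaheuristics such as GA and PS replace the exhaustive grid with guided sampling, which matters once the grid grows beyond a handful of hyperparameters.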
Example: optimum tuning of controller hyperparameters

[Figure: closed-loop response (0 to 1.4) versus time (0 to 100) for the optimally tuned controller.]
Optimization Methods:
1. Deterministic optimization: implemented for convex (differentiable) equations, for LP and NLP, single- and multi-variable functions:

-Lagrange multipliers
-Successive quadratic programming
-Newton's method
-Quasi-Newton method
-Marquardt method
-Levenberg-Marquardt algorithm
-Simplex method
-Non-simplex method
-Secant method
-Modified Hooke-Jeeves method

2. Statistical optimization: for correlation and design of experiments

-Least-squares method
-Design of Experiments / Response Surface Methodology
-ANOVA study

3. Stochastic optimization: for complex non-convex (non-differentiable) equations.


-Genetic Algorithm

- Pattern Search optimization

-Firefly Algorithm

Software Tools: MATLAB; LINGO; GAMS and gPROMS


Classification of Model Programs:
❖ General Algebraic Modeling System (GAMS):
GAMS is used to model and analyze mixed-integer, linear and nonlinear
optimization problems. It is useful for analyzing large and complex
systems, and the GAMS tool is widely used to solve complex optimization
problems of various power and energy systems.

❖ gPROMS is a next-generation advanced process-modeling environment for the
design and operation of high-performance process plants.

gPROMS can optimize whole process flowsheets involving tens of continuous
and/or integer (discrete) decision variables, in steady-state or dynamic
optimization mode, to arrive at a truly optimal process design and
operation.
Engineering Models in Optimization:

Engineering models play a key role in engineering optimization. In this section we will discuss
some further aspects of engineering models. We refer to engineering models as analysis
models.
In a very general sense, analysis models can be viewed as shown in Fig 1 below. A model
requires some inputs in order to make calculations. These inputs are called analysis variables.
Analysis variables include design variables (the variables we can change) plus other quantities
such as material properties, boundary conditions, etc. which typically would not be design
variables. When all values for all the analysis variables have been set, the analysis model can
be evaluated. The analysis model computes outputs called analysis functions. These functions
represent what we need to determine the “goodness” of a design.
For example, analysis functions might be stresses, deflections, cost, efficiency, heat transfer,
pressure drop, etc. It is from the analysis functions that we will select the design functions,
i.e., the objectives and constraints.

Fig. 1. The operation of analysis models

Thus from a very general viewpoint, analysis models require inputs—analysis variables—
and compute outputs—analysis functions. Essentially all analysis models can be viewed this
way.
Models and Optimization by Trial-and-Error:

The analysis model is to compute the values of analysis functions. The designer
specifies values for analysis variables, and the model computes the corresponding
functions.

Note that the analysis software does not make any kind of “judgment” regarding the goodness
of the design. If an engineer is designing a reactor, for example, and has software to predict
conversion and yield, the analysis software merely reports those values—it does not suggest
how to change the reactor design to increase conversion in a particular location. Determining
how to improve the design is the job of the designer.

To improve the design, the designer will often use the model in an iterative fashion, as shown
in Fig. 2 below. The designer specifies a set of inputs, evaluates the model, and examines the
outputs. Suppose, in some respect, the outputs are not satisfactory. Using intuition and
experience, the designer proposes a new set of inputs which he or she feels will result in a better
set of outputs. The model is evaluated again. This process may be repeated many times.

Fig. 2. Common “trial-and-error” iterative design process.

We refer to this process as “optimization by design trial-and-error.” This is the way most analysis
software is used. Often the design process ends when time and/or money run out.
Optimization with Computer Algorithms:
Computer-based optimization is an attempt to bring some high-tech help to the
decision-making side of Fig. 2. With this approach, the designer is taken out of
the trial-and-error loop. The computer is now used both to evaluate the model
and to search for better operating conditions. This process is illustrated in Fig. 3.

Fig. 3. Moving the designer out of the trial-and-error loop with computer-based
optimization software.

The researcher now operates at a higher level. Instead of adjusting variables
and interpreting function values, the designer specifies goals for the design
problem and interprets optimization results. Usually a better design can be
found in a shorter time.
Energy and Process Optimization
Five Ways to Improve Energy Efficiency

The five ways in which improved energy efficiency can be achieved within plant
processes are highlighted below:

• Minimizing wastes and losses
• Optimizing process operation
• Achieving better heat recovery
• Determining process changes
• Optimizing the energy supply system
Modelling and Optimization of Crude Oil
Hydrotreating Process in Trickle Bed Reactor:
Energy Consumption and Recovery Issues
Aysar T. Jarullah, Iqbal M. Mujtaba, and Alastair S. Wood

Abstract
Energy consumption is a very important consideration for reducing environmental impact and
maximizing the profitability of operations. Since high temperatures are employed in hydrotreating
(HDT) processes, hot effluents can be used to heat other cold process streams. The aim of the
present paper is to describe and analyze the heat integration (during hydrotreating of crude oil in
trickle bed reactor) of a hydrotreating plant’s process based upon experimental work.

In this work, crude oil is hydrotreated over a commercial cobalt-molybdenum on alumina
catalyst presulfided at specified conditions. Detailed pilot-plant experiments are conducted in a
continuous-flow isothermal trickle bed reactor (TBR) in which the main hydrotreating reactions
are hydrodesulfurization (HDS), hydrodenitrogenation (HDN), hydrodeasphaltenization (HDAs)
and hydrodemetallization (HDM); the latter includes hydrodevanadization (HDV) and
hydrodenickelation (HDNi). The reaction temperature, the hydrogen pressure, and the liquid
hourly space velocity (LHSV) are varied within certain ranges, with a constant hydrogen-to-oil ratio
(H2/Oil).

Experimental information obtained from a pilot plant, together with kinetics and reactor
modeling tools and commercial process data, is employed to build the heat-integration process model.
The optimization problem, which minimizes the overall annual cost, is formulated as a Non-Linear
Programming (NLP) problem and solved using Successive Quadratic Programming (SQP)
within gPROMS.

KEYWORDS: hydrotreating, trickle-bed reactor, integrated process, energy recovery


Chemical Product and Process Modeling, Vol. 6 [2011], Iss. 2, Art. 3

Figure 1: Process of heat integrated reaction system

As depicted in Figure 1, the crude oil feedstock (cold stream) is pumped


by P1 before preheating from TC0 to TC1 in heat exchanger H.E1. Then, the crude
oil is fed into furnace F1 in order to preheat from TC1 to the reaction temperature
(TR). The second main feedstock, which is hydrogen (cold stream) is fed into heat
exchanger H.E2 to preheat from TH0 to TH1. After this, its temperature rises from
TH1 to the reaction temperature (TR) by the furnace F1. The product stream
leaving the reactor (hot stream) is cooled from TP1 to TP2 by contacting with the
main crude oil feedstock in heat exchanger H.E1. Due to high reaction products

http://www.bepress.com/cppm/vol6/iss2/3
DOI: 10.2202/1934-2659.1600

Ct ($/yr) = Annualized Capital Cost ($/yr) + Operating Cost ($/yr) (27)

To calculate the annualized capital cost (ACC) from capital cost (CC), the
following equation is used (Smith, 2005):

i (1  i ) n
ACC  CC  (28)
(1  i ) n  1

n is number of years and i is the fractional interest per year; n = 10 years, i = 5%


(Smith, 2005).
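Equation (28) is the standard capital-recovery (annuity) factor; a quick check in code with the stated values n = 10 and i = 0.05:

```python
def annualized_capital_cost(cc, i=0.05, n=10):
    """Annualize a capital cost CC over n years at fractional interest i (Eq. 28)."""
    factor = i * (1 + i) ** n / ((1 + i) ** n - 1)
    return cc * factor

# Example: a $1,000,000 capital cost annualizes to about $129,505/yr.
print(round(annualized_capital_cost(1_000_000), 2))
```

The dollar figure is an illustrative input, not a value from the paper.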

Capital Cost (CC, $) = Reactor Cost (CR) + Compressor Cost (CComp) + Heat
Exchanger Cost (CHE) + Pump Cost (CP) + Furnace Cost (CF) (29)

The operating cost is calculated as shown below:

Operating Cost ($/yr) = Heating Cost (CH) + Compression Cost (CCmpr) +


Pumping Cost (CPU) + Cooling Cost (CCol) (30)

The capital costs of equipment can be estimated using the following


equations (Douglas, 1988; Smith, 2005; Quintero and Villamil, 2009):

a) Reactor Cost (CR)

CR ($) = (M&S/280) × 101.9 × DR^1.066 × LR^0.802 × (2.18 + FC)   (31)

FC = Fm × Fp   (32)

b) Compressor Cost (CComp)

CComp ($) = (M&S/280) × 517.5 × (bhp)^0.82 × (2.11 + Fd)   (33)

bhp = hp / ηise   (34)

hp = (3.03 × 10^-5 / γ) × Pin × Qin × [(Pout/Pin)^γ − 1]   (35)

γ = [(cp/cv)H2 − 1] / (cp/cv)H2   (36)

cvG = cpG − R   (37)

c) Heat Exchanger Cost (CHE)

CHE ($) = (M&S/280) × 210.78 × At^0.65 × (2.29 + FC)   (38)

FC = (Fd + Fp) × Fm   (39)

d) Pump Cost (CP)

CP ($) = (M&S/280) × 9.84 × 10^3 × FC × (QP/4)^0.55   (40)

FC = Fm × Fp × FT   (41)

e) Furnace Cost (CF)

CF ($) = (M&S/280) × 5.52 × 10^3 × QF^0.85 × (1.27 + FC)   (42)

FC = Fd + Fm + Fp   (43)

M&S is the Marshall and Swift index for cost escalation (M&S = 1468.6
(Chemical Engineering, 2010)), bhp is the brake horsepower required by the
compressor motor, hp is the compressor horsepower, ηise is the isentropic efficiency, γ
is the specific-heat ratio term, Qin is the volumetric flow rate at the compressor suction, cvG
is the specific heat capacity at constant volume, and Pin and Pout are the pressures at
the compressor inlet and outlet, respectively. ηise ranges between 70-90% (here it
is assumed to be 80%) (Douglas, 1988). QP is the pump power, QF is the heat duty
of the furnace, LR is the reactor length, DR is the reactor diameter, and FC, Fm, Fp,
FT and Fd are dimensionless factors that are functions of the construction
material, operating pressure and temperature, and design type.

Published by Berkeley Electronic Press, 2011


Example 1: objective function Y = 2X^2 − 4X − 5 with the constraint −2 < X < 5

Find the optimum value; is it a minimum or a maximum?

Answer: first differentiate the objective function with respect to X and set the result equal to zero:

dY/dX = 4X − 4 = 0, so X = 1 is the singular (optimum) value of X, and Yopt = 2(1)^2 − 4(1) − 5 = −7

To check the nature of the optimum value, take the 2nd derivative of the objective function:

d2Y/dX2 = 4, a positive value, so the optimum is a minimum (the function is convex there), as shown in the figure.

Example 2: objective function Y = −2X^2 − 4X − 5 with the constraint −10 < X < 5

Find the optimum value; is it a minimum or a maximum?

Answer: first differentiate the objective function with respect to X and set the result equal to zero:

dY/dX = −4X − 4 = 0, so X = −1 is the singular (optimum) value of X, and Yopt = −2(−1)^2 − 4(−1) − 5 = −3

To check the nature of the optimum value, take the 2nd derivative of the objective function:

d2Y/dX2 = −4, a negative value, so the optimum is a maximum (the function is concave there), as shown in the figure.


Example 3:
Find the optimal points of the objective function Y = 3X^3 − 16X + 5

1st derivative: 9X^2 − 16 = 0, so X^2 = 16/9 and X1 = 4/3, X2 = −4/3

Then Y1 = 3(4/3)^3 − 16(4/3) + 5 = −9.22 and Y2 = 3(−4/3)^3 − 16(−4/3) + 5 = 19.22

We have two singular points: (−4/3, 19.22) and (4/3, −9.22)

2nd derivative: d2Y/dX2 = 18X. Because the 2nd derivative is not constant, an inflection point exists, meaning both types of optimal point are present (a minimax model). Setting d2Y/dX2 = 18X = 0 gives X = 0 and Y = 5, so the inflection point is (0, 5).

Now test each singular point with the 2nd derivative:

Pt. 1: d2Y/dX2 = 18(−4/3) = −24 (negative), so the objective is locally concave there: a local maximum
Pt. 2: d2Y/dX2 = 18(4/3) = 24 (positive), so the objective is locally convex there: a local minimum

[Figure: plot of Y = 3X^3 − 16X + 5 over X = −5 to 5 (Y from −300 to 300), showing the local maximum, the local minimum, and the inflection point.]
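The three single-variable examples above can be verified numerically; this sketch replaces the symbolic derivatives with central finite differences:

```python
def d1(f, x, h=1e-5):
    """Central-difference first derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    """Central-difference second derivative."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

f1 = lambda x: 2 * x**2 - 4 * x - 5      # Example 1: min at X = 1, Y = -7
f2 = lambda x: -2 * x**2 - 4 * x - 5     # Example 2: max at X = -1, Y = -3
f3 = lambda x: 3 * x**3 - 16 * x + 5     # Example 3: stationary at X = +/- 4/3

for f, x_star in [(f1, 1.0), (f2, -1.0), (f3, 4 / 3), (f3, -4 / 3)]:
    slope, curvature = d1(f, x_star), d2(f, x_star)
    kind = "minimum" if curvature > 0 else "maximum"
    print(f"X = {x_star:+.3f}: dY/dX = {slope:+.2e}, local {kind}, Y = {f(x_star):.2f}")
```

The sign of the numerical second derivative reproduces the classification done by hand in the examples.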
Example: multi-variable function: prove that the following function is convex (has a minimum):
Y = f(X) = 5X1^2 + X2^2 − 3X1X2, starting point [X0] = [1 1]T
Using Newton's method with the Hessian matrix:

Newton's method for minimum search:

[XN] = [XO] − [HO]^-1 ∇f(XO)

For two variables, H = [d2f/dX1^2, d2f/dX1dX2; d2f/dX2dX1, d2f/dX2^2]

df/dX1 = 10X1 − 3X2, d2f/dX1^2 = 10, d2f/dX1dX2 = −3
df/dX2 = 2X2 − 3X1, d2f/dX2dX1 = −3, d2f/dX2^2 = 2

So H = [10, −3; −3, 2] and H^-1 = [0.1818, 0.2727; 0.2727, 0.9091]

∇f(XO) = [df/dX1; df/dX2] = [7; −1]

Then [X1] = [1; 1] − [0.1818, 0.2727; 0.2727, 0.9091][7; −1] = [0; 0]

A single Newton step lands on the stationary point [0; 0], and because H is positive definite the objective is convex (a minimum).
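The single Newton step above can be reproduced in code; the 2x2 Hessian is inverted analytically, so no linear-algebra library is needed:

```python
def newton_step_2d(grad, hess, x):
    """One Newton step x_new = x - H^-1 * grad(x) for a 2-variable function."""
    (a, b), (c, d) = hess(x)
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))   # analytic 2x2 inverse
    g1, g2 = grad(x)
    return (x[0] - (inv[0][0] * g1 + inv[0][1] * g2),
            x[1] - (inv[1][0] * g1 + inv[1][1] * g2))

# f(X) = 5*X1^2 + X2^2 - 3*X1*X2
grad = lambda x: (10 * x[0] - 3 * x[1], 2 * x[1] - 3 * x[0])
hess = lambda x: ((10, -3), (-3, 2))                   # constant for a quadratic

x_new = newton_step_2d(grad, hess, (1.0, 1.0))
print(x_new)  # one Newton step reaches the stationary point near (0, 0)
```

Because the objective is quadratic, a single Newton step is exact; for non-quadratic functions the step would be iterated.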
Example: Find the optimal value and its type for the objective f(X, Y) = Z = X^2 + 3Y^2, starting point [X0] = [4 5]:

For two variables, H = [d2f/dX2, d2f/dXdY; d2f/dYdX, d2f/dY2]

df/dX = 2X, d2f/dX2 = 2, d2f/dXdY = 0
df/dY = 6Y, d2f/dYdX = 0, d2f/dY2 = 6

H = [2, 0; 0, 6] is positive definite (both diagonal entries positive, off-diagonal terms zero), so the optimum is a minimum, located at (0, 0).


Lagrange multipliers method:

Example: Test the optimal values of the following multi-variable system:

Y = f(X1, X2) = 2X1^2 + 2X2^2 − 3X1X2

For two variables, the H-matrix = [d2f/dX1^2, d2f/dX1dX2; d2f/dX2dX1, d2f/dX2^2]

df/dX1 = 4X1 − 3X2, d2f/dX1^2 = 4, d2f/dX1dX2 = −3
df/dX2 = 4X2 − 3X1, d2f/dX2dX1 = −3, d2f/dX2^2 = 4

So H = [4, −3; −3, 4], and by the Lagrange method H(β) = [4 − β, −3; −3, 4 − β]

Set the determinant of H(β) to zero, where the β's are the eigenvalues:

det(β) = (4 − β)(4 − β) − (−3)(−3) = 0
16 − 8β + β^2 − 9 = 0, or β^2 − 8β + 7 = 0, so β1 = 7 and β2 = 1

Conditions:
all β's > 0: positive definite, the function is convex, so the objective Y is a minimum
all β's < 0: negative definite, the function is concave (a maximum)
some β = 0 or β's of mixed sign: saddle/critical point
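The eigenvalues follow from the quadratic formula applied to the characteristic polynomial β^2 − (trace)β + det = 0, which a short check confirms:

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial."""
    trace, det = a + d, a * d - b * c
    disc = math.sqrt(trace**2 - 4 * det)   # real for symmetric matrices
    return (trace + disc) / 2, (trace - disc) / 2

b1, b2 = eigenvalues_2x2(4, -3, -3, 4)
print(b1, b2)               # 7.0 and 1.0, both positive
assert b1 > 0 and b2 > 0    # positive definite: Y is convex (a minimum)
```

Both eigenvalues being positive reproduces the conclusion above that Y has a minimum.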
Flow-chart of Modified Hooke-Jeeves Method
Multi-objective optimization methods:
The relevance and importance of multi-objective optimization (MOO) in
chemical engineering is increasing. The solutions of an MOO problem are
known as the Pareto-optimal solutions. Several methods deal with multi-objective
problems:

A. Deterministic methods:

1. Phase-plane method:

Objective space: plot f1(x) against f2(x).

2. Weighting method:

Convert the MOO problem to an SOO (single-objective optimization) problem, using the same constraints:

Y = w·f1(x) + (1 − w)·f2(x)

where w is a weight factor (0-1).

Even though the weighting method is conceptually straightforward, choosing suitable values of w to find many Pareto-optimal solutions is difficult.

3. ε-constraint method:

The MOO problem is converted to an SOO problem by using one objective and converting the others into inequality constraints, for example:

Minimize f1(x)   (1)
Maximize f2(x)   (2)
Subject to XL ≤ x ≤ XU   (3)
h(x) = 0   (4)
g(x) ≤ 0   (5)

Suppose it is desired to retain f2(x): maximize f2(x) with respect to x, subject to the bounds and constraints in equations (3) to (5), as well as an additional constraint:

f1(x) ≤ ε   (6)

Obviously, the user has to select which objective to retain and the value of ε. The difficulties in this method are the selection of the ε value and solving the optimization problem.
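The weighting method can be sketched for two toy conflicting objectives (chosen here for illustration, not taken from the document); sweeping w traces out a set of Pareto-optimal compromises:

```python
# Weighting method: scalarize two conflicting objectives and sweep w over (0, 1).
# The objectives are toy examples: f1 prefers x = 0, f2 prefers x = 2.
f1 = lambda x: x ** 2
f2 = lambda x: (x - 2) ** 2

xs = [i / 1000 for i in range(-1000, 3001)]   # coarse grid over -1 <= x <= 3

pareto = []
for k in range(1, 10):
    w = k / 10
    x_best = min(xs, key=lambda x: w * f1(x) + (1 - w) * f2(x))
    pareto.append((x_best, f1(x_best), f2(x_best)))

for x_best, v1, v2 in pareto:
    print(f"w-sweep point: x = {x_best:.2f}, f1 = {v1:.3f}, f2 = {v2:.3f}")
```

As w grows, the compromise point moves from favoring f2 toward favoring f1; each point is one Pareto-optimal trade-off, illustrating why many w values are needed to map the whole front.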

B. Stochastic methods: Genetic Algorithm

Stochastic optimization:
Deterministic algorithms for function optimization are generally limited to convex, regular
functions. However, many functions are either not differentiable or need a lot of difficult
mathematical treatment (discretization, sensitivity computation, etc.) to differentiate.
Therefore, stochastic sampling methods have been found suitable for optimizing such
functions; examples are the Genetic Algorithm (GA), Pattern Search (PS) and the Firefly Algorithm (FA).
The Genetic Algorithm (GA) is a search algorithm based on the mechanics of natural selection
and natural genetics. Philosophically, GA is based on Darwin's theory. Genetic algorithms
have the following advantages over traditional methods:

 GAs search from a population of points, not a single point. Hence GAs are said to
be global optimization techniques.
 GAs use only the values of the objective function. Derivatives
are not used in the search process.
 GAs use probabilistic transition rules, not deterministic rules.
 Genetic algorithms are the most popular form of evolutionary algorithms. A
population of chromosomes represents a set of possible solutions. These solutions are
ranked by an evaluation function, which assigns better values (fitness) to better
solutions. The simplest representation is a value representation, where the
chromosome consists of the values of the design variables placed side by side. For
example, suppose we have 6 discrete design variables whose values are integers
ranging from 1 to 5, and 4 continuous design variables
whose values are real numbers ranging from 3.000 to 9.000. A possible
chromosome is shown in the following figure.

8.157 5.893 6.594 3.572 5 2 3 1 3 4

Figure: Chromosome
Fitness Function:

Flow-chart of GA
Case study: Spouted bed – Optimization of Operating Conditions

Process variables:

Air velocity Vg = 0.74, 0.95 and 1.0 m/s; particle density ρs = 2400 kg/m^3 (glass), 7400 kg/m^3 (steel); particle diameter dp = 1.09 mm (glass), 2.18 mm (steel)

The objectives are the global UI (uniformity index) of the solid particles and the global PD (pressure drop)
across the bed, which are correlated with the three decision variables (Vg, ρs and dp).
After several trials, the modified conflicting optimization functions are:

Max UI = 0.184 Vg^(-0.214) ρs^(0.12) dp^(-0.267)   (1)

Min PD = 0.037 Vg^(0.38) ρs^(0.407) dp^(-0.221)   (2)

Subject to the inequality constraints:

0.74 ≤ Vg ≤ 1.0
2400.0 ≤ ρs ≤ 7400.0   (3)
1.09 ≤ dp ≤ 2.18

The spouted bed is a highly nonlinear, interacting process. GA is the best global search method for solving
the optimization problem of the process.
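Reading equations (1) and (2) as power-law correlations (the exponent placement is inferred from the garbled source text, so treat it as an assumption), the conflict between the two objectives shows up in a brute-force scan over the bounds of equation (3):

```python
# Power-law objectives as read from Eqs. (1)-(2); the exponent placement is an
# assumption recovered from the garbled source text.
UI = lambda vg, rho, dp: 0.184 * vg**-0.214 * rho**0.12 * dp**-0.267
PD = lambda vg, rho, dp: 0.037 * vg**0.38 * rho**0.407 * dp**-0.221

# Brute-force scan over the bounds in Eq. (3): maximize UI, minimize PD.
points = [(vg, rho, dp)
          for vg in (0.74, 0.95, 1.0)
          for rho in (2400.0, 7400.0)
          for dp in (1.09, 2.18)]

best_ui = max(points, key=lambda p: UI(*p))
best_pd = min(points, key=lambda p: PD(*p))
print("max UI at", best_ui)
print("min PD at", best_pd)
```

The two optima disagree on density and particle diameter, which is exactly why a multi-objective GA is used to trade the objectives off along a Pareto front.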
Adapted hyperparameters of the multi-objective GA:

Parameter            Value
Population size      80
Crossover function   Scattered
Crossover fraction   0.8
Mutation function    Adaptive feasible
Migration direction  Forward
Migration fraction   0.2
Hybrid function      PS
[Figure: Pareto front of PD (0.65 to 1.15) versus UI (−0.56 to −0.4) obtained by the multi-objective GA, together with the table of corresponding UI, PD, Vg, ρs and dp values.]
Example of the application of the optimization process:
The objective is a function of two variables:

Y=f(X1,X2)
 Step 1: study the effect of each variable on the objective Y:

[Fig. 1: variation of Y with X1 (Y from 0 to 25, X1 from −2 to 4).]

[Fig. 2: variation of Y with X2 (Y from 0 to 25, X2 from −2 to 4).]
 Step 2: formulating the objective function:

Depending on the available data:

A- Refined experimental data.

B- Reliable simulated data.

The objective Y is formulated in terms of the critical variables selected as
decision variables (for example, X1 and X2). Several regression methods
can be implemented with the aid of MATLAB and Statistica software.

The best-fit form of the optimization problem equation is:

Objective Y = 2X1^2 - 3X1X2 + 2X2^2      (1)

The global plot of the objective function:

Fig.3. Mesh plot of global objective function Y.


 Step 3: Optimization Solution:

I. Deterministic method:

A-Analytical technique: using two methods:

Method 1: by differentiation:

Y = 2X1^2 - 3X1X2 + 2X2^2      (1)

∂Y/∂X1 = 4X1 - 3X2 = 0      (2)

∂Y/∂X2 = 4X2 - 3X1 = 0      (3)

∂²Y/∂X1² = ∂²Y/∂X2² = 4      (4)

Notes:
1. Because the second derivatives (Eq. 4) are positive (and the Hessian determinant 4×4 - 3×3 = 7 is also positive, so the Hessian is positive definite), the objective has a minimum with respect to X1 and X2, as shown in Figs. 1-2.

2. The optimum values of the decision variables (X1 and X2) are estimated by
solving Eqs. 2 and 3 simultaneously.
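Solving Eqs. 2 and 3 simultaneously is a 2×2 homogeneous linear system; a quick check by Cramer's rule confirms the only stationary point is the origin:

```python
# Stationary-point system from Eqs. (2)-(3):
#   4*X1 - 3*X2 = 0
#  -3*X1 + 4*X2 = 0
a, b = 4.0, -3.0                     # coefficients of Eq. (2)
c, d = -3.0, 4.0                     # coefficients of Eq. (3)
det = a * d - b * c                  # determinant = 16 - 9 = 7, nonzero
x1, x2 = 0.0, 0.0                    # unique solution of the homogeneous system

def y(x1, x2):
    # Objective of Eq. (1)
    return 2 * x1**2 - 3 * x1 * x2 + 2 * x2**2

print(det, x1, x2, y(x1, x2))  # 7.0 0.0 0.0 0.0
```

Since the determinant is nonzero, X1 = X2 = 0 is the unique stationary point, with minimum Y = 0.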
B- Using MATLAB:

Every deterministic method needs a starting point.

1. Unconstrained non-linear solving technique (fsolve):

The search is based on the Levenberg-Marquardt method, applied here to the gradient equations (2) and (3).

Start point: [1 1]

Figure: fsolve convergence (current point and function value versus iteration; the function value falls to about 6×10^-8 within 12 iterations)

Number of iterations = 12

At the final iteration: minimum Y = 6.06×10^-8 at X1 = 2.45×10^-4 and X2 = 2.46×10^-4
2. Constrained non-linear minimization (fmincon):

The search is based on the Hessian of the Lagrangian.

Start point: [1 1]

Constraints:
Lower = [-2 -2], Upper = [4 4]
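The MATLAB listings are not reproduced in these notes. As a rough stand-in for bound-constrained minimization, the following Python sketch applies projected gradient descent to Eq. (1) from the same start point and bounds. It is only an illustration; fmincon itself uses more sophisticated interior-point or SQP updates.

```python
# Projected-gradient minimization of Y = 2*X1^2 - 3*X1*X2 + 2*X2^2
# subject to [-2, -2] <= (X1, X2) <= [4, 4]. Illustrative only.
def grad(x1, x2):
    # Gradient from Eqs. (2)-(3)
    return (4 * x1 - 3 * x2, 4 * x2 - 3 * x1)

def clip(v, lo, hi):
    # Projection onto the box constraints
    return max(lo, min(hi, v))

x1, x2, step = 1.0, 1.0, 0.1     # start point [1 1], fixed step size
for _ in range(200):
    g1, g2 = grad(x1, x2)
    x1 = clip(x1 - step * g1, -2.0, 4.0)
    x2 = clip(x2 - step * g2, -2.0, 4.0)

y = 2 * x1**2 - 3 * x1 * x2 + 2 * x2**2
print(round(y, 12))  # prints 0.0 (Y driven effectively to zero at the origin)
```

The step size 0.1 is stable here because the Hessian eigenvalues are 1 and 7, so both error components contract at each iteration.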

Figure: fmincon convergence (current point and function value versus iteration; the function value falls to about 2.3×10^-17 within 5 iterations)
Iteration

Number of iterations = 5

At the final iteration: minimum Y = 2.29×10^-17 at X1 = -4.8×10^-9 and X2 = -4.8×10^-9

Comparing the two methods, the constrained method is more reliable (Y ≈ 0) and needs less computation (5 iterations versus 12).
II. Stochastic method:

1. Implementing the Genetic Algorithm (GA)

 GAs use probabilistic transition rules, not deterministic rules.

 GAs use only the values of the objective function; derivatives are not used in the search process.

 GAs search from a population of points, not a single point; hence
GAs are said to be global optimization techniques.
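These points can be made concrete with a minimal real-coded GA for the same objective Y = 2X1^2 - 3X1X2 + 2X2^2 on the bounds [-2, 4]. This Python sketch (tournament selection, blend crossover, Gaussian mutation) is only an illustration and does not reproduce MATLAB's ga() operators or settings.

```python
import random

# Minimal real-coded GA minimizing Y = 2*X1^2 - 3*X1*X2 + 2*X2^2
# with both variables bounded to [-2, 4]. Illustrative only.
rng = random.Random(1)
LO, HI, POP, GENS = -2.0, 4.0, 60, 100

def fitness(ind):
    x1, x2 = ind
    return 2 * x1**2 - 3 * x1 * x2 + 2 * x2**2

def clip(v):
    return max(LO, min(HI, v))

pop = [[rng.uniform(LO, HI), rng.uniform(LO, HI)] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    for _ in range(POP):
        p1 = min(rng.sample(pop, 3), key=fitness)   # tournament selection
        p2 = min(rng.sample(pop, 3), key=fitness)
        w = rng.random()                            # blend crossover
        child = [clip(w * a + (1 - w) * b) for a, b in zip(p1, p2)]
        if rng.random() < 0.1:                      # Gaussian mutation
            child = [clip(v + rng.gauss(0, 0.1)) for v in child]
        nxt.append(child)
    pop = nxt

best = min(pop, key=fitness)
print(best, fitness(best))  # best individual approaches the origin, Y near 0
```

Note that the search uses only fitness values, never the derivatives of Y, and works on a whole population per generation, as stated above.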

Results of GA:
1. Unconstrained:

Figure: GA convergence without bounds (best fitness 0.000586665, mean fitness 0.419652 versus generation) and the current best individual for the 2 variables

Number of generations = 60

At the final generation: minimum Y = 5×10^-4 at X1 = 0.015 and X2 = 0.024
2. Constrained:

Lower bound [-2 -2] and upper bound

Figure: GA convergence with bounds (best fitness 3.65286e-07, mean fitness 1.6445e-06 versus generation) and the current best individual for the 2 variables

Number of generations = 50

At the final generation: minimum Y = 3.65×10^-7 at X1 = 6.1×10^-4 and X2 = 3.8×10^-4.

Note:
The GA search is enhanced when bound constraints are supplied (the best Y improves from 5×10^-4 to 3.65×10^-7).
Firefly Algorithm:
The firefly algorithm (FA) is one of the newer metaheuristic algorithms for optimization problems.
The algorithm is inspired by the flashing behavior of fireflies. The algorithm assumes that all
fireflies are unisex, meaning any firefly can be attracted by any other firefly; the
attractiveness of a firefly is directly proportional to its brightness, which depends on the
objective function. FA is controlled by three parameters: randomness, absorption and
brightness. The brightness parameter is based on the light intensity seen between the fireflies.

Figure: Flow chart of the Firefly Algorithm


For example: f1 = exp(-(x-4)^2-(y-4)^2) + exp(-(x+4)^2-(y-4)^2)
             f2 = 2*exp(-x^2-(y+4)^2) + 2*exp(-x^2-y^2)

Using 24 fireflies, subject to X = [-5 5], Y = [-5 5]

alpha = 0.4  (randomness)
gamma = 1    (absorption coefficient)
delta = 0.97 (brightness coefficient)
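A minimal Python sketch of the firefly update rule applied to this example follows. It assumes the brightness of a firefly is f1 + f2 evaluated at its position (the notes do not state how the two functions are combined) and treats delta = 0.97 as a per-generation reduction of the randomness alpha, as in common FA implementations; both choices are assumptions, not from the notes.

```python
import math
import random

# Firefly-algorithm sketch: 24 fireflies in [-5, 5]^2, alpha = 0.4,
# gamma = 1, delta = 0.97 (assumed here to scale alpha each generation).
def f(x, y):
    # Assumed combined brightness f1 + f2 (four Gaussian peaks).
    f1 = math.exp(-(x - 4)**2 - (y - 4)**2) + math.exp(-(x + 4)**2 - (y - 4)**2)
    f2 = 2 * math.exp(-x**2 - (y + 4)**2) + 2 * math.exp(-x**2 - y**2)
    return f1 + f2

rng = random.Random(3)
alpha, gamma, delta = 0.4, 1.0, 0.97
flies = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(24)]

for _ in range(100):
    bright = [f(x, y) for x, y in flies]
    for i in range(24):
        for j in range(24):
            if bright[j] > bright[i]:           # move i toward brighter j
                r2 = (flies[i][0] - flies[j][0])**2 + (flies[i][1] - flies[j][1])**2
                beta = math.exp(-gamma * r2)    # attractiveness decays with distance
                for k in range(2):
                    flies[i][k] += beta * (flies[j][k] - flies[i][k]) \
                                   + alpha * (rng.random() - 0.5)
                    flies[i][k] = max(-5.0, min(5.0, flies[i][k]))
    alpha *= delta                              # shrink the random step over time

best = max(flies, key=lambda p: f(p[0], p[1]))
print(best, f(best[0], best[1]))  # the best firefly settles near one of the peaks
```

Because the brightest firefly never moves within a generation, the best brightness found never decreases, and the swarm gradually clusters around the brightest peaks.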

Figure: final firefly positions in the (X, Y) plane, showing the optimal results of X and Y
