MULTI-OBJECTIVE OPTIMIZATION AND TRADE-OFFS USING PARETO OPTIMALITY
                Amogh Mundhekar
                  Nikhil Aphale
           [University at Buffalo, SUNY]
Multi-Objective Optimization
• Involves the simultaneous optimization of several incommensurable and often competing objectives.
• The optimal solutions of such problems are termed Pareto optimal solutions.
• The Pareto optimal set contains the solutions that cannot be improved in one objective function without deteriorating their performance in at least one of the rest.
• The objectives are usually conflicting in nature (e.g., minimize cost, maximize productivity).
• Designers are therefore required to resolve trade-offs.
Pareto Optimal means:

         “Take from Peter to pay Paul”
Typical Multi-Objective
     Optimization Formulation
Minimize {f1(x), …, fn(x)}^T
where fi(x) = ith objective function to be minimized,
      n = number of objectives

Subject to:
      g(x) ≤ 0
      h(x) = 0
      x_min ≤ x ≤ x_max
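
To make the formulation concrete, here is a minimal Python sketch of such a problem. The two objectives f1 and f2, the constraint g, and the bounds are illustrative assumptions (they do not appear in the slides); later sketches reuse this toy setup.

```python
import numpy as np

# Illustrative bi-objective problem (assumed, not from the slides):
# minimize f1 and f2 over x in [0, 5]^2, subject to g(x) <= 0.
def f1(x):
    return x[0] ** 2 + x[1] ** 2                      # e.g., "cost"

def f2(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2      # e.g., distance from a target design

def g(x):
    return x[0] + x[1] - 6.0                          # feasible if g(x) <= 0

bounds = [(0.0, 5.0), (0.0, 5.0)]                     # x_min <= x <= x_max
```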
Basic Terminology

   Search space or design space: the set of all possible combinations of the design variables.
   Pareto Optimal Solution: achieves a trade-off; any improvement in one objective results in the worsening of at least one other objective.
   Pareto Optimal Set: the Pareto optimal solution is not unique; there exists a set of such solutions known as the Pareto Optimal Set. It represents the complete set of solutions for a Multi-Objective Optimization (MOO) problem.
   Pareto Frontier: a plot of the entire Pareto set in the design objective space (with design objectives plotted along each axis) gives the Pareto Frontier.
Dominated & Non-Dominated Points

   A dominated design point is one for which there exists at least one feasible design point that is better than it in all design objectives.
   A non-dominated point is one for which no feasible design point is better than it. Pareto optimal points are non-dominated and hence are also known as non-dominated points.
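
The following sketch filters a randomly sampled design set down to its non-dominated points. It reuses the illustrative toy objectives from the earlier sketch; everything here is an assumption for demonstration, not the slides' own example.

```python
import numpy as np

def is_dominated(p, points):
    """True if some other point is no worse in every objective and
    strictly better in at least one (all objectives minimized)."""
    return any(np.all(q <= p) and np.any(q < p) for q in points)

def non_dominated(points):
    """Return the non-dominated subset of a set of objective vectors."""
    points = np.asarray(points, dtype=float)
    return np.array([p for p in points if not is_dominated(p, points)])

# Random sampling of the toy problem, then Pareto filtering.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=(500, 2))                    # random designs
F = np.column_stack([(X ** 2).sum(axis=1),                  # f1(x) = x1^2 + x2^2
                     ((X - 2.0) ** 2).sum(axis=1)])         # f2(x) = (x1-2)^2 + (x2-2)^2
pareto_points = non_dominated(F)
```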
Challenges in the Multi-Objective Optimization Problem

Challenge 1: Populate the Pareto Set

Challenge 2: Select the Best Solution

Challenge 3: Find the corresponding Design Variables
Solution methods for Challenge 1
Methods discussed in earlier lectures:
 Random Sampling
 Weighting Method
 Distance Method
 Constrained Trade-off method
Solution methods for Challenge 1
Methods to be discussed today:
 Random Sampling
 Weighting Method
 Distance Method
 Constrained Trade-off method
 Normal Boundary Intersection method
 Goal Programming
 Pareto Genetic Algorithm
Weighted Sum Approach

   Uses weight functions to reflect the importance of each objective.
   Involves relative preferences.
   Inter-criteria preference: preference among several objectives (e.g., cost > aesthetics).
   Intra-criterion preference: preference within an objective (e.g., 100 < mass < 200).
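
Below is a minimal sketch of the weighted sum approach on the toy problem from earlier, using scipy.optimize.minimize; the objectives, bounds, and weight sweep are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy objectives and bounds (assumed for illustration).
f1 = lambda x: x[0] ** 2 + x[1] ** 2
f2 = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
bounds = [(0.0, 5.0), (0.0, 5.0)]

def weighted_sum_point(w1, w2, x0=np.array([1.0, 1.0])):
    """Minimize w1*f1 + w2*f2 for fixed weights; each weight pair yields one Pareto candidate."""
    res = minimize(lambda x: w1 * f1(x) + w2 * f2(x), x0, bounds=bounds)
    return res.x, (f1(res.x), f2(res.x))

# Sweep the weights to populate (the convex part of) the Pareto front.
front = [weighted_sum_point(w, 1.0 - w)[1] for w in np.linspace(0.0, 1.0, 11)]
```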
Drawbacks of Weighted Sum Method
   Varying the weighting coefficients does not reliably produce the desired points on the Pareto front.
   Small changes in w may cause dramatic changes in the objective vectors, whereas large changes in w may result in almost unnoticeable changes. This makes the relation between weights and performance complicated and non-intuitive.
   Uneven sampling of the Pareto front.
   Requires scaling.
Drawbacks of Weighted Sum Method (continued)
   For an even spread of the weights, the optimal solutions in the criterion space are usually not evenly distributed.
   Weighted sum method is essentially subjective, in that a
    Decision Maker needs to provide the weights.
   This approach cannot identify all non-dominated solutions.
    Only solutions located on the convex part of the Pareto front
    can be found. If the Pareto set is not convex, the Pareto points
    on the concave parts of the trade-off surface will be missed.
   Does not provide the means to effectively specify intra-
    criterion preferences.
Normal Boundary Intersections (NBI)
   NBI is a solution methodology developed by Das and Dennis (1998) for generating the Pareto surface in nonlinear multiobjective optimization problems.
   The method is independent of the relative scales of the objective functions and produces an evenly distributed set of points on the Pareto surface from an evenly distributed set of parameters, which is an advantage over the most common multiobjective approaches, the weighting method and the ε-constraint method.
   It finds several Pareto optimal points for a general nonlinear multicriteria optimization problem, aiming to capture the trade-off among the various conflicting objectives.
Payoff Matrix (n x n)
Each objective is first minimized on its own; row i of the payoff matrix holds the values of all n objectives evaluated at the minimizer of the ith objective. Its diagonal gives the Utopia point, and the worst entry in each column gives an estimate of the Nadir point.
Pareto Frontier using NBI
[Figure: Pareto frontier in objective space, spanning between the Nadir and Utopia points.]
Where,
         fN – Nadir point
         fU – Utopia point
Convex Hull of Individual Minima (CHIM)
   The set of points in objective space that are convex combinations of the rows of the payoff matrix is referred to as the Convex Hull of Individual Minima (CHIM).
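
Below is a minimal sketch of computing the payoff matrix, the Utopia and Nadir points, and a point on the CHIM for the toy bi-objective problem used earlier; the objectives, bounds, and starting point are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy objectives and bounds (assumed for illustration).
objectives = [lambda x: x[0] ** 2 + x[1] ** 2,
              lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2]
bounds = [(0.0, 5.0), (0.0, 5.0)]
x0 = np.array([1.0, 1.0])

# Individual minimizers: x_i* minimizes the i-th objective on its own.
minimizers = [minimize(f, x0, bounds=bounds).x for f in objectives]

# Payoff matrix: entry (i, j) = f_j evaluated at x_i* (row i = individual minimum of f_i).
payoff = np.array([[f(xi) for f in objectives] for xi in minimizers])

utopia = payoff.diagonal()      # best attainable value of each objective
nadir = payoff.max(axis=0)      # worst value over the individual minima (Nadir estimate)

# A point on the CHIM is a convex combination of the rows of the payoff matrix.
beta = np.array([0.5, 0.5])     # beta_i >= 0, sum(beta) = 1
chim_point = beta @ payoff
```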
Formulation of NBI Sub-Problem

For a given weight vector β (βi ≥ 0, Σ βi = 1), the NBI sub-problem is:

        maximize t over (x, t)
        subject to:
                Σi βi F(xi*) + t·n̂ = F(x)
                g(x) ≤ 0
                h(x) = 0

Where,
         n̂    : unit normal to the CHIM pointing towards the origin of the shifted objective space (the Utopia point)
         Σi βi F(xi*) : a point on the CHIM; the points Σi βi F(xi*) + t·n̂ trace the normal through it
         β    : weight vector
         xi*  : minimizer of the ith objective alone (row i of the payoff matrix)
   The vector (equality) constraint ensures that the point x is actually mapped by F to a point on the normal, while the remaining constraints ensure feasibility of x with respect to the original problem (MOP).
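
A sketch of solving the NBI sub-problem for the two-objective toy setup, assuming SLSQP from scipy; the CHIM normal is approximated by the negative unit diagonal (a common simplification), and all problem data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem (assumed for illustration).
def F(x):
    return np.array([x[0] ** 2 + x[1] ** 2,
                     (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2])

bounds = [(0.0, 5.0), (0.0, 5.0)]
x0 = np.array([1.0, 1.0])

# Payoff matrix (rows = individual minima) and a quasi-normal direction.
minimizers = [minimize(lambda x, i=i: F(x)[i], x0, bounds=bounds).x for i in range(2)]
payoff = np.array([F(xi) for xi in minimizers])
n_hat = -np.ones(2) / np.sqrt(2.0)          # points towards the Utopia point

def nbi_subproblem(beta):
    """Maximize t subject to (CHIM point) + t * n_hat = F(x)."""
    chim_point = beta @ payoff                              # convex combination of the rows
    z0 = np.concatenate([x0, [0.0]])                        # variables z = (x, t)
    cons = {"type": "eq",
            "fun": lambda z: chim_point + z[2] * n_hat - F(z[:2])}
    res = minimize(lambda z: -z[2], z0,                     # maximize t == minimize -t
                   bounds=bounds + [(None, None)],
                   constraints=[cons], method="SLSQP")
    return F(res.x[:2])

# Evenly spaced weights give a (roughly) evenly spread set of Pareto points.
pareto_points = [nbi_subproblem(np.array([b, 1.0 - b])) for b in np.linspace(0.0, 1.0, 9)]
```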
NBI algorithm
Advantages of NBI
   Finds a uniform spread of Pareto points.
   NBI improves on other traditional methods such as goal programming in that it never requires any prior knowledge of 'feasible goals'.
   It improves on multilevel optimization techniques from the trade-off standpoint, since multilevel techniques can usually improve only a few of the 'most important' objectives, leaving no compromise for the rest.
Goal Programming
[Figure: goal-programming geometry for one objective, showing the objective value zi, its target value, Mi, the one-sided weights wGP− and wGP+, and the deviational variables dGP− and dGP+.]
History
Goal programming was first used by Charnes and Cooper in 1955.

The first engineering application of goal programming was by Ignizio in 1962:
designing and placing the antennas on the second stage of the Saturn V.
How does this method work?
Requirements:

1) Choose whether to maximize or minimize each objective.
2) Set a target or goal value for each objective.
3) The designer specifies wGP− and wGP+, which indicate the penalties for deviating from the target on either side.

Basic principle:

Minimize the deviation of each design objective from its target value, measured by the deviational variables dGP− and dGP+.
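
A minimal sketch of weighted goal programming on the toy problem, assuming scipy's SLSQP; the targets and one-sided weights are illustrative assumptions, not values from the slides.

```python
import numpy as np
from scipy.optimize import minimize

# Toy objectives, targets, and one-sided weights (all assumed for illustration).
objectives = [lambda x: x[0] ** 2 + x[1] ** 2,
              lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2]
targets = np.array([2.0, 2.0])          # goal value for each objective
w_minus = np.array([1.0, 1.0])          # penalty per unit of under-achievement
w_plus = np.array([3.0, 3.0])           # penalty per unit of over-achievement

n, m = 2, len(objectives)
# Decision vector z = (x, d_minus, d_plus); deviational variables are non-negative.
z0 = np.concatenate([np.ones(n), np.zeros(m), np.zeros(m)])
zbounds = [(0.0, 5.0)] * n + [(0.0, None)] * (2 * m)

def weighted_deviation(z):
    d_minus, d_plus = z[n:n + m], z[n + m:]
    return w_minus @ d_minus + w_plus @ d_plus

def goal_balance(z):
    """Enforce f_i(x) + d_i^- - d_i^+ = target_i for every objective."""
    x, d_minus, d_plus = z[:n], z[n:n + m], z[n + m:]
    f = np.array([obj(x) for obj in objectives])
    return f + d_minus - d_plus - targets

res = minimize(weighted_deviation, z0, bounds=zbounds,
               constraints=[{"type": "eq", "fun": goal_balance}], method="SLSQP")
x_gp = res.x[:n]        # goal-programming design
```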
The Game
Advantages and disadvantages

   Advantages:
   1) Simplicity and ease of use.
   2) It is better than the weighted sum method because the designer specifies two different weight values for each objective, one on each side of the target value.

   Disadvantages:
   1) Specifying weights that reflect the designer's preferences is not easy.
   2) What about…?
Testing for Pareto
Why was the solution not Pareto optimal?
Because the designer set a pessimistic target value.
Larbani & Aouni method
Goal programming method: (Program 1)




                             The output of Program 1 is X1
Pareto Method: (Program 2)




                             The output of Program 2 is X2

   If X1 is also a solution of Program 2, then it is a Pareto optimal solution, and vice versa.
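
The exact formulations of Programs 1 and 2 are not reproduced above (they were figures), so the sketch below uses a standard Pareto-optimality test in their place: starting from a candidate point such as the goal-programming solution, maximize the total improvement achievable without worsening any objective; if no improvement is possible, the candidate is Pareto optimal. Objectives, bounds, and the candidate point are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy objectives (assumed) and a candidate design x1 (e.g. the goal-programming solution).
objectives = [lambda x: x[0] ** 2 + x[1] ** 2,
              lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2]
bounds = [(0.0, 5.0), (0.0, 5.0)]
x1 = np.array([1.0, 1.0])
f_at_x1 = np.array([obj(x1) for obj in objectives])

# Maximize the total slack s_i >= 0 subject to f_i(x) + s_i = f_i(x1):
# a positive optimum means some objective can still be improved for free.
n, m = len(x1), len(objectives)
z0 = np.concatenate([x1, np.zeros(m)])                  # variables z = (x, s)
zbounds = bounds + [(0.0, None)] * m

def neg_total_slack(z):
    return -z[n:].sum()

def improvement_balance(z):
    x, s = z[:n], z[n:]
    return np.array([obj(x) for obj in objectives]) + s - f_at_x1

res = minimize(neg_total_slack, z0, bounds=zbounds,
               constraints=[{"type": "eq", "fun": improvement_balance}], method="SLSQP")
x1_is_pareto_optimal = -res.fun < 1e-8      # total achievable improvement is (numerically) zero
```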
Multiobjective Optimization using Pareto Genetic Algorithm
Genetic Algorithms (GAs):
Adaptive heuristic search algorithms based on the evolutionary ideas of natural selection.
Darwin's theory:
The individuals that best adapt to the environment are the ones most likely to survive.
Important Concepts in GAs
1. Fitness:
 Each nondominated point in a model should be equally
  important and considered an optimal goal.
 Nondominated rank procedure.
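
A minimal sketch of a nondominated ranking procedure (rank 1 for nondominated points, then peel them off and rank the next layer, and so on); the example objective vectors are made up.

```python
import numpy as np

def dominates(p, q):
    """p dominates q (minimization): no worse in every objective, strictly better in one."""
    return np.all(p <= q) and np.any(p < q)

def nondominated_rank(F):
    """Rank 1 = nondominated; remove them and rank the next layer, and so on."""
    F = np.asarray(F, dtype=float)
    ranks = np.zeros(len(F), dtype=int)
    remaining = set(range(len(F)))
    rank = 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks

# Lower rank means closer to the Pareto front; the rank is used as the GA fitness.
F = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 3.5], [4.0, 1.0], [3.0, 4.0]])
print(nondominated_rank(F))     # -> [1 1 2 1 3]
```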
2. Reproduction
a. Crossover:
Produces new individuals by combining the information contained in two or more parents.

b. Mutation:
Randomly alters individuals with a small probability.
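
A sketch of simple real-coded reproduction operators; the blend crossover and Gaussian mutation shown here are common generic choices and are assumptions, since the slides do not specify the operators.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossover(parent_a, parent_b):
    """Blend crossover: each gene of the child is a random convex mix of the two parents."""
    alpha = rng.uniform(size=parent_a.shape)
    return alpha * parent_a + (1.0 - alpha) * parent_b

def mutate(individual, rate=0.1, scale=0.5, lo=0.0, hi=5.0):
    """With a small per-gene probability, add Gaussian noise and clip to the design bounds."""
    mask = rng.uniform(size=individual.shape) < rate
    noise = rng.normal(0.0, scale, size=individual.shape)
    return np.clip(individual + mask * noise, lo, hi)

child = mutate(crossover(np.array([1.0, 4.0]), np.array([3.0, 2.0])))
```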
3. Niche
   Maintaining variety
   Genetic Drift
4. Pareto Set Filter
 Reproduction cannot guarantee that the best characteristics of the parents are inherited by the next generation.
 Some of the points lost in this way may be Pareto optimal points.
 The filter therefore pools the nondominated (rank 1) points at each generation and drops any points that become dominated.
[Flowchart: Pareto set filter update, where NNP = number of nondominated points and PFS = Pareto set filter size.]
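
A minimal sketch of the filter update step, assuming minimization and a bounded filter; how to thin the filter when it exceeds PFS is not specified on the slides, so random thinning is used here purely as a placeholder.

```python
import numpy as np

def dominates(p, q):
    return np.all(p <= q) and np.any(p < q)

def update_pareto_filter(filter_points, rank1_points, max_size, seed=0):
    """Pool the generation's rank-1 points into the filter, drop dominated points,
    and cap the filter at max_size (PFS)."""
    pool = [np.asarray(p, dtype=float) for p in list(filter_points) + list(rank1_points)]
    nondom = [p for p in pool
              if not any(dominates(q, p) for q in pool if q is not p)]
    if len(nondom) > max_size:              # thinning rule assumed, not from the slides
        idx = np.random.default_rng(seed).choice(len(nondom), size=max_size, replace=False)
        nondom = [nondom[i] for i in idx]
    return nondom
```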
Detailed Algorithm
[Flowchart of the Pareto GA, where Pn = population size.]
Constrained Multiobjective Optimization via GAs
 Transform a constrained optimization problem into an
  unconstrained one via penalty function method.
         Minimize F(x)
                subject to,
                 g(x) <= 0
                 h(x) = 0
    Transform to,
         Minimize Φ(x) = F(x) + rp P(x)
   A penalty term is added to the fitness of an infeasible
    point so that its fitness never attains that of a feasible
    point.
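
A minimal sketch of the penalty transformation Φ(x) = F(x) + rp·P(x); the toy objective, constraints, the quadratic form of P, and the value of rp are all illustrative assumptions.

```python
# Toy example of the penalty transformation (all values assumed).
F = lambda x: x[0] ** 2 + x[1] ** 2          # objective (or GA fitness) to minimize
g = lambda x: 1.0 - x[0] - x[1]              # inequality constraint: g(x) <= 0
h = lambda x: x[0] - 2.0 * x[1]              # equality constraint:   h(x) = 0

def penalized_fitness(x, r_p=100.0):
    """Phi(x) = F(x) + r_p * P(x), with P the sum of squared constraint violations,
    so an infeasible point is always ranked worse than a comparable feasible one."""
    P = max(0.0, g(x)) ** 2 + h(x) ** 2
    return F(x) + r_p * P
```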
Fuzzy Logic (FL) Penalty Function Method
     Derives from the fact that classes and concepts for natural phenomena tend to be fuzzy rather than crisp.
     Fuzzy Set:
       A point is identified with its degree of membership in that set.
       A fuzzy set A in X (a collection of objects) is defined by a membership function μA, a mapping from X to the unit interval [0, 1]:
          0: worst possible case
          1: best possible case

    When treating a point's amount of constraint violation, a fuzzy quantity (such as describing the point's relationship to the feasible zone as very close, close, far, or very far) can provide the information required for GA ranking.
    Fuzzy penalty function:
    For any point k, the KD (penalty) value depends on its membership function.
   Entire search space is divided into zones.
   Penalty value increases
    from zone to zone.
   Same penalty for points
    in the same zone.
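
A minimal sketch of a zone-based fuzzy penalty, assuming minimization: the membership function shape, the number of zones, and the per-zone penalties are all assumptions, since the slides give these only in figures that are not reproduced here.

```python
import numpy as np

def violation(x, g_funcs, h_funcs):
    """Total constraint violation of a point (0 means feasible)."""
    return (sum(max(0.0, g(x)) for g in g_funcs)
            + sum(abs(h(x)) for h in h_funcs))

def membership(v, v_max=4.0):
    """Degree of membership in the feasible set: 1 = feasible (best), 0 = far away (worst).
    A simple linear membership function is assumed here."""
    return float(np.clip(1.0 - v / v_max, 0.0, 1.0))

def zone_penalty(mu, zone_penalties=(0.0, 1.0, 3.0, 6.0, 10.0)):
    """Map the membership value into zones ('very close', 'close', 'far', ...):
    every point in the same zone gets the same penalty, and the penalty
    grows from zone to zone."""
    zone = min(int((1.0 - mu) * len(zone_penalties)), len(zone_penalties) - 1)
    return zone_penalties[zone]
```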
   Advantages of GAs
       Do not require gradient information.
       The only input information required from the given problem is the fitness of each point in the present model population.
       Produce multiple optima rather than a single local optimum.

   Disadvantages
       Not well suited when function evaluations are expensive.
       Require a large amount of computation.
