MODULE 5 Softcomputing

The document discusses multi-objective optimization, focusing on concepts like Pareto optimality, Pareto front, and various optimization methods such as weighted sum and epsilon-constraint. It highlights the significance of trade-offs, dominance, and the importance of convergence and diversity in achieving optimal solutions. Additionally, it covers real-world applications, challenges, and comparisons between different optimization algorithms and approaches.


1. Pareto Optimality in Multi-Objective Optimization

Pareto optimality refers to a state in multi-objective optimization where no objective can be improved without worsening at least one other objective. In other words, a solution is Pareto optimal if it is non-dominated: no other solution is at least as good in every objective and strictly better in at least one.

Significance:

 Helps in identifying a set of optimal trade-off solutions.


 Useful when there is no single best solution, and multiple conflicting objectives must be
considered.
 Forms the basis of decision-making in fields like economics, engineering design, and
machine learning.

2. Pareto Front

The Pareto front (or Pareto frontier) is the set of all Pareto optimal solutions in the objective
space. Graphically, it represents the boundary where no objective can be improved without
sacrificing another.

Use in Evaluation:

 Helps visualize the trade-offs among conflicting objectives.


 Provides decision-makers with a range of optimal solutions instead of a single one.
 A solution is evaluated based on whether it lies on or near the Pareto front.

3. Single-Objective vs Multi-Objective Optimization

Feature | Single-Objective Optimization | Multi-Objective Optimization
Objectives | One | Two or more (often conflicting)
Solution | One optimal solution | A set of Pareto optimal solutions
Evaluation | Easy to compare values | Solutions must be compared using dominance concepts
Challenges | Finding the global optimum | Balancing trade-offs, high computation, decision-making complexity
Examples | Maximize profit | Maximize profit and minimize environmental impact
4. Dominance and Non-Dominance

Dominance:
A solution A dominates solution B if:

 A is no worse than B in all objectives, and


 A is strictly better in at least one objective.

Non-Dominance:
A solution is non-dominated if there is no other solution that dominates it.

Example:

Suppose we want to minimize cost and maximize quality for a product. Consider the following
solutions:

Solution Cost Quality


A 100 80
B 120 85
C 100 90

 Solution C dominates A (same cost, better quality).


 C also dominates B (lower cost, higher quality).
 C is non-dominated.
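These checks can be written in a few lines of Python (a minimal sketch; quality is negated so that both objectives are minimized):

```python
def dominates(a, b):
    """True if a dominates b: no worse in every objective and
    strictly better in at least one (all objectives minimized)."""
    no_worse = all(x <= y for x, y in zip(a, b))
    strictly_better = any(x < y for x, y in zip(a, b))
    return no_worse and strictly_better

# Solutions from the table as (cost, -quality), so both are minimized.
A = (100, -80)
B = (120, -85)
C = (100, -90)

print(dominates(C, A))  # True: same cost, better quality
print(dominates(C, B))  # True: lower cost, higher quality
print(dominates(A, B))  # False: A is cheaper but has lower quality
```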

5. Trade-offs in Multi-Objective Optimization

Trade-offs arise because improving one objective may degrade another (e.g., increasing quality
increases cost).

Dealing with Trade-offs:

 Weighted sum approach: Assign weights to objectives based on importance.


 ε-constraint method: Optimize one objective while treating others as constraints.
 Interactive decision-making: Human decision-makers iteratively choose among Pareto-
optimal solutions.
 Preference-based methods: Incorporate user preferences into the optimization process.

6. Weighted Sum Method in Multi-Objective Optimization

Role:

The weighted sum method transforms a multi-objective optimization problem into a single-objective problem by assigning a weight to each objective and summing them:

Minimize f(x) = w1·f1(x) + w2·f2(x) + … + wn·fn(x)

Where:

 fi(x): individual objective functions
 wi: weights (positive constants, usually summing to 1)

Limitations:

 Cannot find solutions on non-convex regions of the Pareto front.


 Choice of weights is difficult and can be subjective.
 Poor scalability with many objectives.
 No guarantee of finding a diverse set of Pareto solutions.
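Despite the limitations, the mechanics are simple. A minimal sketch on a hypothetical two-objective problem (f1(x) = x² and f2(x) = (x − 2)² are illustrative choices; a brute-force grid search stands in for a real optimizer):

```python
def f1(x): return x * x            # illustrative objective 1
def f2(x): return (x - 2) ** 2     # illustrative objective 2

def weighted_sum(x, w1, w2):
    return w1 * f1(x) + w2 * f2(x)

xs = [i / 100 for i in range(0, 201)]   # grid over x in [0, 2]

# Each weight pair yields one point of the (convex) Pareto front.
for w1 in (0.25, 0.5, 0.75):
    w2 = 1 - w1
    best = min(xs, key=lambda x: weighted_sum(x, w1, w2))
    print(w1, w2, best, f1(best), f2(best))
```

This also illustrates the convexity limitation: varying the weights only reaches points on convex regions of the front, so a non-convex front would leave gaps no weight pair can fill.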

7. Epsilon-Constraint Method

Explanation:

In this method, one objective is optimized while the others are converted into constraints with upper bounds (ε):

Minimize f1(x)
subject to f2(x) ≤ ε2, f3(x) ≤ ε3, …

Example:

Suppose you want to minimize cost and emissions:

 Objective 1: Minimize cost f1(x)
 Objective 2: Emissions f2(x)

Convert f2(x) to a constraint:

Minimize f1(x) subject to f2(x) ≤ 50

By varying ε, you can explore the Pareto front.
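The ε sweep can be sketched as follows (hypothetical objectives and a brute-force grid search stand in for a real solver):

```python
def f1(x): return x * x            # illustrative cost objective
def f2(x): return (x - 2) ** 2     # illustrative emissions objective

xs = [i / 100 for i in range(-100, 301)]   # grid over x in [-1, 3]

def eps_constraint(eps):
    """Minimize f1 subject to f2(x) <= eps over the grid."""
    feasible = [x for x in xs if f2(x) <= eps]
    return min(feasible, key=f1)

# Varying eps traces out the Pareto front point by point.
for eps in (0.25, 1.0, 2.25):
    x = eps_constraint(eps)
    print(eps, x, f1(x), f2(x))
```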


8. Evolutionary Multi-Objective Optimization (EMO)

Definition:

EMO algorithms use evolutionary algorithms (like genetic algorithms) to evolve a population
of solutions toward the Pareto front.

Differences from Traditional Methods:

Feature | Traditional Methods | EMO Algorithms
Search strategy | Deterministic/gradient-based | Population-based, stochastic
Handles non-linearity | Poorly | Very well
Diversity maintenance | Not inherent | Built-in (e.g., crowding, sharing)
Multiple solutions | One at a time | Many (Pareto set) in one run

9. NSGA-II (Non-dominated Sorting Genetic Algorithm II)

Working Steps:

1. Initialization: Generate an initial random population.


2. Non-dominated Sorting: Classify solutions into different Pareto fronts.
3. Crowding Distance Assignment: Measure the density around each solution.
4. Selection: Choose individuals using a binary tournament based on rank and crowding
distance.
5. Crossover & Mutation: Generate offspring.
6. Combine & Reduce: Merge parent and offspring, sort them, and select the best
individuals for the next generation.
7. Repeat: Until stopping criteria are met.

✅ Advantages: Fast, elitist, maintains diversity well.
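Step 2, the fast non-dominated sorting, can be sketched as follows (a minimal sketch; all objectives are minimized, and the function returns lists of indices, one list per front):

```python
def fast_nondominated_sort(objs):
    """Sort objective vectors into Pareto fronts (minimization)."""
    def dom(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    n = len(objs)
    dominated = [[] for _ in range(n)]   # S_i: indices solution i dominates
    count = [0] * n                      # n_i: how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if dom(objs[i], objs[j]):
                dominated[i].append(j)
            elif dom(objs[j], objs[i]):
                count[i] += 1

    fronts, current = [], [i for i in range(n) if count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated[i]:
                count[j] -= 1
                if count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

print(fast_nondominated_sort([(1, 1), (1, 2), (2, 1), (2, 2)]))
# [[0], [1, 2], [3]]
```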

10. Fitness Sharing and Crowding Distance

Fitness Sharing:

 Encourages diversity by penalizing similar solutions.


 Similar individuals share fitness, reducing the likelihood that a crowded area dominates.

Crowding Distance:
 Measures how close an individual is to its neighbors in the objective space.
 Individuals in less crowded regions are preferred to preserve diversity.

Why Important?

 Avoids premature convergence to a single region.


 Ensures a well-distributed Pareto front.
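Crowding distance as used in NSGA-II can be sketched as follows (a minimal sketch for a single front; boundary solutions receive infinite distance so the extremes of the front are always preserved):

```python
def crowding_distance(objs):
    """Crowding distance for one front; objs is a list of objective tuples."""
    n, m = len(objs), len(objs[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objs[i][k])
        lo, hi = objs[order[0]][k], objs[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary points
        if hi == lo:
            continue
        for j in range(1, n - 1):
            # gap between the two neighbors, normalized per objective
            dist[order[j]] += (objs[order[j + 1]][k]
                               - objs[order[j - 1]][k]) / (hi - lo)
    return dist

front = [(1.0, 5.0), (2.0, 3.0), (3.0, 2.0), (5.0, 1.0)]
print(crowding_distance(front))  # [inf, 1.25, 1.25, inf]
```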


11. Importance of Convergence and Diversity in Multi-Objective Optimization

✅ Convergence

 Definition: How close the solutions are to the true (optimal) Pareto front.
 Importance: Poor convergence means you're not solving the actual problem well.
 Measurement:
o Generational Distance (GD): Average distance from obtained solutions to the
true Pareto front.
o Inverted Generational Distance (IGD): Average distance from each point of the
true Pareto front to the nearest obtained solution; reflects both convergence and diversity.

✅ Diversity

 Definition: How well the solutions are spread across the Pareto front.
 Importance: Helps decision-makers explore trade-offs across the full range of
objectives.
 Measurement:
o Spacing Metric: Measures the evenness of spacing between solutions.
o Spread (Δ): Evaluates the extent and uniformity of spread.
o Crowding Distance: Used in NSGA-II to promote diversity.
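Generational Distance, for example, can be computed as follows (a minimal sketch; the reference and obtained fronts are illustrative):

```python
import math

def gd(obtained, reference):
    """Average Euclidean distance from each obtained solution to the
    nearest point of the reference (true) Pareto front."""
    return sum(min(math.dist(p, q) for q in reference)
               for p in obtained) / len(obtained)

ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]        # assumed true front
approx = [(0.1, 1.0), (0.6, 0.6), (1.0, 0.1)]     # obtained solutions
print(gd(approx, ref))
```

A GD of zero would mean every obtained solution lies exactly on the reference front.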

12. Pareto Dominance vs Weak Pareto Dominance

✅ Pareto Dominance

A solution A dominates B if:

 A is no worse than B in all objectives, and
 A is strictly better in at least one objective.

✅ Weak Pareto Dominance

A solution A weakly dominates B if:

 A is no worse than B in all objectives (a strict improvement is not required).

🔍 Example (assume minimization):

 A = (3, 5), B = (4, 6), C = (3, 5)
 A dominates B (better in both).
 A weakly dominates C (equal in both).

13. Scalarization in Multi-Objective Optimization

✅ Definition:

Scalarization converts a multi-objective optimization problem into a single-objective one by


combining all objectives using a scalar function.

✅ Role:

Allows the use of classical single-objective optimization techniques.

✅ Methods:

 Weighted Sum: f(x) = Σ wi·fi(x)
 Epsilon-Constraint: One objective optimized, others treated as constraints.
 Tchebycheff Scalarization: Minimizes the maximum weighted deviation from the ideal point.

🔍 Example:

To minimize cost f1 and time f2:

Minimize 0.6·f1 + 0.4·f2
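Tchebycheff scalarization can be sketched as follows (the ideal point, weights, and candidate objective values are illustrative):

```python
def tchebycheff(f, w, z_star):
    """Maximum weighted deviation from the ideal point z*."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z_star))

z_star = (0.0, 0.0)                      # assumed ideal point
w = (0.6, 0.4)                           # preference weights
candidates = [(2.0, 8.0), (4.0, 4.0), (8.0, 2.0)]

best = min(candidates, key=lambda f: tchebycheff(f, w, z_star))
print(best)  # (4.0, 4.0): its worst weighted deviation (2.4) is smallest
```

Unlike the weighted sum, minimizing the Tchebycheff function over varying weights can also reach points on non-convex regions of the Pareto front.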

14. NSGA-II vs SPEA2 (Strength Pareto Evolutionary Algorithm 2)

Feature | NSGA-II | SPEA2
Sorting method | Non-dominated sorting into ranks | Strength values based on dominance relationships
Diversity mechanism | Crowding distance | k-nearest-neighbor density estimation
Archiving | No separate archive; elitism within the population | Maintains an external archive of Pareto solutions
Complexity | O(MN²) | O(MN²)
Strength | Simple and effective | Strong elitism and better density estimation
Used when | General problems needing good diversity and speed | Stronger archiving and better diversity control are needed

15. Selecting the Most Preferred Solution from a Pareto Set

✅ Common Approaches:

1. A Priori Methods:
o Preferences are defined before optimization (e.g., weights).
o Example: Weighted sum or utility function.
2. A Posteriori Methods:
o Decision-maker analyzes the full Pareto front after optimization.
o Uses visual tools or selects based on objectives of interest.
3. Interactive Methods:
o Iterative process where preferences are updated during optimization.
o Human-in-the-loop approach.

🔍 Tools/Methods for Selection:

 Multi-Criteria Decision Making (MCDM) tools: AHP, TOPSIS, ELECTRE.


 Visualization: Parallel coordinate plots, scatter plots.
 Knee point selection: Selects solutions with the best trade-off (e.g., maximum curvature
on the Pareto front).


16. Decomposition-Based Methods in Multi-Objective Optimization

✅ Definition:

Decomposition-based methods transform a multi-objective problem into multiple scalar


subproblems, each optimized separately.

✅ Basic Working:
1. The objective space is decomposed using weight vectors or scalarizing functions (e.g.,
Tchebycheff, Weighted Sum).
2. Each weight vector defines a subproblem.
3. A population of solutions is evolved to optimize each subproblem in parallel.
4. Solutions collectively approximate the Pareto front.

✅ Example:

 MOEA/D (Multi-Objective Evolutionary Algorithm based on Decomposition) is a


popular decomposition-based EMO algorithm.

17. Preference-Based Approach in Multi-Objective Optimization

✅ Definition:

Preference-based methods incorporate the decision-maker’s preferences during or before the


optimization process to guide the search toward desirable regions of the Pareto front.

✅ Types:

1. A priori: Preferences defined before optimization (e.g., weights, aspiration levels).


2. A posteriori: Preferences applied after obtaining the Pareto front.
3. Interactive: Preferences updated dynamically during optimization.

✅ Benefits:

 Reduces computational effort by focusing on relevant solutions.


 Leads to more user-satisfactory outcomes.

🔍 Example:

A designer wants low cost and moderate performance, so the algorithm prioritizes solutions that
align with that trade-off preference.

18. Visualization Techniques in Multi-Objective Optimization

✅ Purpose:

Helps understand trade-offs, assess solution quality, and support decision-making.

✅ Common Techniques:
Technique | Description
Pareto front plot | Plots solutions in objective space (only feasible for 2–3 objectives).
Parallel coordinate plot | Each axis represents an objective; lines show solutions across axes. Useful for 3+ objectives.
Heatmaps & radar charts | Show the distribution of objective values.
Dimensionality reduction (PCA, t-SNE) | Visualizes high-dimensional objective spaces in 2D.

19. Real-World Applications of Multi-Objective Optimization

✅ Engineering Design:

 Automotive Design: Optimize weight, fuel efficiency, and safety.


 Aerospace: Trade-offs between lift, drag, and cost.

✅ Supply Chain Management:

 Objectives: Minimize cost, delivery time, emissions.


 Applications: Route optimization, inventory management.

✅ Finance:

 Portfolio optimization: Risk vs. return.

✅ Healthcare:

 Treatment planning: Maximize recovery, minimize cost and side effects.

20. Challenges in Large-Scale and High-Dimensional MOO

✅Key Challenges:

1. Curse of Dimensionality: As objectives grow, most solutions become non-dominated.


2. Scalability: Maintaining diversity and convergence in high dimensions is hard.
3. Visualization: Interpreting solutions becomes nearly impossible beyond 3–4 objectives.
4. Computational Cost: Solving many subproblems or large populations is resource-
intensive.

✅ Solutions:

 Dimensionality Reduction: Use PCA or t-SNE for objective reduction.


 Objective Selection: Filter or cluster objectives based on correlation.
 Many-Objective Algorithms: NSGA-III, MOEA/DD are designed for >3 objectives.
 Surrogate Models: Approximate expensive evaluations using machine learning.


21. Compare: Pareto-based vs. Non-Pareto-based Approaches

Feature | Pareto-Based Approaches | Non-Pareto-Based Approaches
Principle | Use Pareto dominance to rank/compare solutions | Use scalarization, aggregation, or other fitness-based methods
Examples | NSGA-II, SPEA2, MOGA | Weighted Sum, ε-constraint, Goal Programming
Output | Pareto front (set of non-dominated solutions) | Single or limited solutions depending on preferences
Advantages | Finds diverse trade-offs; no prior preferences needed | Simpler, easier to implement, efficient in low-dimensional cases
Disadvantages | Computationally expensive, complex | Biased by weighting or constraint choices

22. In NSGA, Distinguish Between:

i. Assigning Dummy Fitness Value

 Done after non-dominated sorting.


 All solutions in the same Pareto front are assigned a dummy fitness based on their rank
(e.g., Front 1 → rank 1, Front 2 → rank 2).
 Helps in selection—lower ranks are better.

ii. Sharing the Fitness Value

 Within each front, solutions that are too close share fitness to reduce crowding.
 Penalizes solutions in densely populated regions, promoting diversity.
 Uses a sharing function based on distance between individuals.

23. Decision Space vs. Objective Space

Aspect | Decision Space | Objective Space
Definition | Space defined by the decision variables (inputs) | Space defined by the objective function values (outputs)
Dimensionality | Depends on the number of variables | Depends on the number of objectives
Purpose | Where the optimization search occurs | Where trade-offs and performance are evaluated
Example | Variables: (x₁, x₂, ..., xₙ) | Objectives: (f₁(x), f₂(x), ..., fₖ(x))

24. Conditions of Pareto Dominance

Let solution A dominate solution B if:

1. A is no worse than B in all objectives:


∀i: fᵢ(A) ≤ fᵢ(B)
2. A is strictly better than B in at least one objective:
∃j: fⱼ(A) < fⱼ(B)

This ensures A offers a meaningful improvement over B without being worse in any objective.

25. Compare: NSGA vs. NSGA-II

Feature | NSGA | NSGA-II
Sorting | Non-dominated sorting | Improved fast non-dominated sorting
Fitness assignment | Dummy fitness + sharing | Rank-based + crowding distance
Diversity mechanism | Sharing function (computationally expensive) | Crowding distance (simpler and faster)
Computational complexity | O(MN³) | O(MN²)
Elitism | Not present | Elitism through combined parent-offspring selection
Efficiency | Less scalable | More efficient and widely adopted

Note: question 28 refers to a figure that is not provided, so a sample dataset of five solutions with two objectives is assumed: f1 to be maximized and f2 to be minimized. Questions 28-30 are answered against this assumed data.

26. Framework of Non-dominated Sorting Genetic Algorithm (NSGA)


NSGA Framework:

1. Initialization:
o Generate an initial population of solutions randomly.
2. Non-dominated Sorting:
o Rank individuals by dominance level.
o First front: non-dominated solutions.
o Second front: dominated by solutions in the first front only, and so on.
3. Fitness Assignment:
o Assign dummy fitness values based on ranks.
o Lower rank = higher fitness.
4. Fitness Sharing:
o Share fitness among individuals in the same front based on proximity.
o Reduces crowding and promotes diversity.
5. Selection:
o Apply selection (e.g., binary tournament) based on fitness and sharing.
6. Crossover & Mutation:
o Generate offspring using standard GA operators.
7. Replacement:
o Replace the old population with new offspring (no elitism in NSGA).
8. Repeat:
o Loop until termination criteria (e.g., max generations) are met.

27. Mathematical Formulation of a Multi-Objective Optimization Problem (MOOP)

General Form:

Minimize/Maximize F(x) = [f1(x), f2(x), ..., fk(x)]

Subject to:

gj(x) ≤ 0,  j = 1, 2, ..., J   (inequality constraints)
hl(x) = 0,  l = 1, 2, ..., L   (equality constraints)
xi(min) ≤ xi ≤ xi(max),  i = 1, 2, ..., n   (variable bounds)

Where:

 x = (x1, x2, ..., xn) ∈ ℝⁿ are the decision variables.
 fi(x) are the objective functions.
 k is the number of objectives.
28. Assume Sample Data (f1: Maximize, f2: Minimize)

Solution f1 (maximize) f2 (minimize)


A 7 8
B 6 5
C 8 9
D 5 4
E 9 6
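The dominance relations for this sample can be checked programmatically (a minimal sketch; f1 is negated so that both objectives can be treated as minimized):

```python
sols = {"A": (7, 8), "B": (6, 5), "C": (8, 9), "D": (5, 4), "E": (9, 6)}

def dominates(a, b):
    a = (-a[0], a[1])   # negate f1: maximize -> minimize
    b = (-b[0], b[1])
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

pairs = [(p, q) for p in sols for q in sols
         if p != q and dominates(sols[p], sols[q])]
print(pairs)    # [('E', 'A'), ('E', 'C')]: E dominates A and C, nothing else

front1 = sorted(p for p in sols
                if not any(dominates(sols[q], sols[p]) for q in sols))
print(front1)   # ['B', 'D', 'E'] are mutually non-dominated
```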

29. Dominance Table

Dominates A B C D E
A - No No No No
B No - No No No
C No No - No No
D No No No - No
E Yes No Yes No -

🟢 Explanation:

 E dominates A (f1: 9 > 7, f2: 6 < 8) and C (f1: 9 > 8, f2: 6 < 9).
 E does not dominate B or D: B (f2 = 5) and D (f2 = 4) beat E (f2 = 6) in the minimized objective.
 No other pair is comparable: in every remaining pair, each solution is better in one objective and worse in the other.

30. Non-Dominated Fronts

 Front 1 (non-dominated): B, D, E — none of these is dominated by any other solution.
 Front 2: A, C — each is dominated by E, but neither dominates the other.

✅ Non-Dominated Fronts:

Front Solutions
Front 1 B, D, E
Front 2 A, C

31. Characteristics of a Self-Organized System

A self-organized system is a complex system in which global order arises from local interactions
among its components without external control. Key characteristics include:

1. Decentralized Control – No single point of control; decisions emerge from interactions.


2. Emergence – Complex patterns arise from simple rules.
3. Adaptability – Can adjust to changes in the environment dynamically.
4. Robustness – Tolerant to disturbances and component failures.
5. Nonlinearity – Small changes can cause large effects.
6. Feedback Loops – Positive and negative feedback influence the system's behavior.
7. Distributed Intelligence – Each agent has limited knowledge, yet the system achieves
complex goals.

32. Significance of the Pareto Front in Multi-Objective Decision Making

The Pareto front represents solutions where no objective can be improved without degrading
another. It helps decision-makers by:

 Visualizing trade-offs: Clarifies how improvement in one objective affects others.


 Reducing solution space: Filters out dominated and sub-optimal solutions.
 Supporting preference-based selection: Decision-makers can choose based on priorities
(e.g., minimizing cost vs. maximizing quality).
 Enabling interactive exploration: Visual tools (like Pareto charts) allow exploration of
“what-if” scenarios.

➡️ Value: It transforms a difficult multi-dimensional decision problem into a comprehensible set


of meaningful options.

33. Computational Efficiency and Scalability of Multi-Objective Optimization


Algorithms

Efficiency Factors:

 Population size
 Number of generations
 Evaluation cost of objectives
 Sorting and diversity-preservation methods

Algorithm | Efficiency | Scalability (objectives) | Notes
NSGA-II | Moderate | Up to ~3-4 objectives | Fast non-dominated sorting; struggles in high dimensions
SPEA2 | Moderate | Similar to NSGA-II | Good elitism and archive maintenance
MOEA/D | High | Scales better to many objectives | Uses decomposition (parallel subproblems)
NSGA-III | High | Designed for many-objective problems (5+) | Uses reference points instead of crowding distance
ε-MOEA | High | Moderate | Good convergence control, but may lack diversity
MOPSO (swarm-based) | Variable | Moderate | May converge faster but may lack diversity

Scalability Issues:

 Many-objective problems (MaOPs) suffer from:


o Loss of selection pressure
o High computational cost in sorting
o Poor visualization
 Solutions:
o Use reference vectors (NSGA-III)
o Decomposition-based methods (MOEA/D)
o Hybrid/metaheuristic combinations

34. MOOP Framework with Qualitative Preferences and Linguistic Variables

Framework Overview:

1. Objective Functions:
Quantitative (e.g., cost, time) and qualitative (e.g., comfort, aesthetics).
2. Preference Modeling:
Use fuzzy logic to convert linguistic terms into numerical scores.
3. Fuzzy Representation Scheme:
o "High", "Medium", "Low" → fuzzy sets.
o Example: Comfort = {Low: 0–3, Medium: 4–7, High: 8–10}
4. Weighted Aggregation:
Combine crisp and fuzzy objectives using a fuzzy weighted sum or multi-criteria fuzzy
decision-making (e.g., TOPSIS, Fuzzy AHP).
5. Fuzzy Pareto Dominance:
Use fuzzy dominance rules for comparing solutions.
6. Outcome:
o Preserves user subjectivity.
o Handles ambiguity.
o Enhances decision relevance.
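Step 3, the fuzzy representation, can be sketched with triangular membership functions (the linguistic terms and breakpoints below are illustrative):

```python
def tri(x, a, b, c):
    """Triangular membership: rises on [a, b], falls on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic variable "comfort" on a 0-10 scale (assumed breakpoints).
comfort = {
    "Low":    lambda x: tri(x, -1, 0, 4),
    "Medium": lambda x: tri(x, 2, 5, 8),
    "High":   lambda x: tri(x, 6, 10, 11),
}

x = 6.5
print({term: round(mu(x), 3) for term, mu in comfort.items()})
# {'Low': 0.0, 'Medium': 0.5, 'High': 0.125}
```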

35. MOOP: Maximize Y and Minimize K

Given:

Y = x³ − (x − 2)² + 3
K = (x − 2)² + 3

Let's compute values for some example points of x:

x | Y | K
0 | 0 − 4 + 3 = −1 | 4 + 3 = 7
1 | 1 − 1 + 3 = 3 | 1 + 3 = 4
2 | 8 − 0 + 3 = 11 | 0 + 3 = 3
3 | 27 − 1 + 3 = 29 | 1 + 3 = 4
4 | 64 − 4 + 3 = 63 | 4 + 3 = 7

Now, find non-dominated solutions (maximize Y, minimize K):

1. (x=0): Y = -1, K = 7 → Dominated


2. (x=1): Y = 3, K = 4 → Dominated by (x=2) and (x=3)
3. (x=2): Y = 11, K = 3 → Not dominated
4. (x=3): Y = 29, K = 4 → Not dominated
5. (x=4): Y = 63, K = 7 → Not dominated

➡️ Non-Dominated Front:

 Solutions at x = 2, 3, 4

These are trade-offs:

 x = 2: Best K, moderate Y
 x = 3: Balanced
 x = 4: Best Y, worst K
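The enumeration above can be reproduced programmatically (a minimal sketch over the same sample grid x = 0..4):

```python
def Y(x): return x**3 - (x - 2)**2 + 3   # maximize
def K(x): return (x - 2)**2 + 3          # minimize

points = {x: (Y(x), K(x)) for x in range(5)}

def dominates(a, b):
    """a dominates b: Y no smaller, K no larger, different somewhere."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

front = [x for x, p in points.items()
         if not any(dominates(q, p) for q in points.values())]
print(front)  # [2, 3, 4]
```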



20.1: Given a set of points and a MOP (Multi-Objective Optimization Problem), is it
always true that one point dominates the others?

Answer:
No, it is not always true.
In multi-objective optimization, especially in real-world problems, it's common that no single
point dominates all others. Instead, we often find a set of non-dominated (Pareto optimal)
solutions, where each offers a trade-off.

20.2: Does every MOP have a Pareto set?

Answer:
Yes, every well-defined MOP has at least one Pareto optimal solution, so it has a Pareto set.
Even if no solution dominates another (e.g., in conflicting objectives), the set of non-dominated
points forms the Pareto set.

20.3: Sketch and explain a convex Pareto front for a MOP for the following cases:

We consider a 2D objective space with f1 on the x-axis and f2 on the y-axis.

(a) Minimize f1, Maximize f2

 Direction: left (minimize f1) and up (maximize f2)
 Convex Pareto front curves like this:

f2 ↑
| .
| .
| .
| .
|.
+-------------→ f1

(b) Maximize f1, Minimize f2

 Direction: right (maximize f1) and down (minimize f2)
 Convex front curves downward:

f2 ↑
|.
| .
| .
| .
| .
+-------------→ f1

(c) Maximize both f1 and f2

 Direction: right and up
 Convex Pareto front curves upward:

f2 ↑
|.
| .
| .
| .
| .
+-------------→ f1

Each curve shows the best trade-off region given the optimization direction.

20.4: Given the objective values for 4 points for a multi-objective minimization problem:

Point | f1 | f2
x(1) | 1 | 1
x(2) | 1 | 2
x(3) | 2 | 1
x(4) | 2 | 2

(a) Which point dominates all the others?

 x(1) dominates all: it has the lowest f1 and the lowest f2.
✅ Answer: x(1)

(b) Which points do x(2) and x(3) dominate?

 x(2) dominates x(4): better f1, same f2.
 x(3) dominates x(4): same f1, better f2.
✅ Answer: Both x(2) and x(3) dominate x(4)

(c) Which point is non-dominated?

 Only x(1) is not dominated by any other point.
✅ Answer: x(1)

(d) Which point is Pareto optimal?

 The Pareto optimal set is exactly the set of non-dominated points. Since x(2), x(3), and x(4) are all dominated by x(1), only x(1) is Pareto optimal.
✅ Answer: x(1)


20.5:

Given points from 20.4:

Point | f1 | f2
x(1) | 1 | 1
x(2) | 1 | 2
x(3) | 2 | 1
x(4) | 2 | 2

a) Additive ε-dominance:

Point a ε-dominates point b additively if:

fi(a) ≤ fi(b) + ε for all i

Does x(2) ε-dominate x(1)?

 f1(x(2)) = 1 ≤ 1 + ε → ε ≥ 0
 f2(x(2)) = 2 ≤ 1 + ε → ε ≥ 1

✅ So: ε ≥ 1 for x(2) to additively ε-dominate x(1)

Same for x(3):

 f1(x(3)) = 2 ≤ 1 + ε → ε ≥ 1
 f2(x(3)) = 1 ≤ 1 + ε → ε ≥ 0

✅ So: ε ≥ 1 for x(3) to additively ε-dominate x(1)

For x(4):

 f1 = 2 ≤ 1 + ε → ε ≥ 1
 f2 = 2 ≤ 1 + ε → ε ≥ 1

✅ So: ε ≥ 1 also for x(4)

b) Multiplicative ε-dominance:

Point a ε-dominates point b multiplicatively if:

fi(a) ≤ (1 + ε) · fi(b) for all i

Test for x(2) vs x(1):

 f1 = 1 ≤ (1 + ε) · 1 → holds for any ε ≥ 0
 f2 = 2 ≤ (1 + ε) · 1 → ε ≥ 1

Same for x(3):

 f1 = 2 ≤ (1 + ε) · 1 → ε ≥ 1
 f2 = 1 ≤ (1 + ε) · 1 → ε ≥ 0

x(4):

 f1 = 2 ≤ (1 + ε) · 1 → ε ≥ 1
 f2 = 2 ≤ (1 + ε) · 1 → ε ≥ 1

✅ Multiplicative ε ≥ 1 for all three points to ε-dominate x(1)
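Both ε-dominance variants can be written as simple predicates (a minimal sketch, minimization assumed):

```python
def add_eps_dominates(a, b, eps):
    """Additive: f_i(a) <= f_i(b) + eps in every objective."""
    return all(x <= y + eps for x, y in zip(a, b))

def mul_eps_dominates(a, b, eps):
    """Multiplicative: f_i(a) <= (1 + eps) * f_i(b) in every objective."""
    return all(x <= (1 + eps) * y for x, y in zip(a, b))

x1, x2 = (1, 1), (1, 2)
print(add_eps_dominates(x2, x1, 1.0))   # True: eps = 1 suffices
print(add_eps_dominates(x2, x1, 0.5))   # False: eps = 1 is the threshold
print(mul_eps_dominates(x2, x1, 1.0))   # True
```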

20.6

Example of two points where neither multiplicatively ε-dominates the other for any ε > 0:

With strictly positive objective values no such pair exists: for any points a and b, a large enough ε satisfies fi(a) ≤ (1 + ε)·fi(b) in every objective. (For the pair A = (1, 4), B = (4, 1), for instance, either point ε-dominates the other once ε ≥ 3.) A zero-valued objective is therefore needed.

Let (minimization):

 A = (0, 4)
 B = (4, 0)

For A to ε-dominate B: 4 ≤ (1 + ε)·0 = 0 must hold in the second objective, which fails for every ε.
For B to ε-dominate A: 4 ≤ (1 + ε)·0 = 0 must hold in the first objective, which also fails for every ε.

✅ Conclusion: Neither point multiplicatively ε-dominates the other for any ε > 0.

20.7

Example of two Pareto fronts P1 and P2 with the same number of points:

Define in 2D (minimization):

 P1 = {(1, 5), (3, 3), (5, 1)}
 P2 = {(2, 4), (3, 3), (4, 2)}

Each front has 3 points.

 The hypervolume dominated by P2 (the area between the front and a reference point) can be larger, because its points (2, 4) and (4, 2) lie closer to the origin than (1, 5) and (5, 1).
 At the same time, the region dominated by both fronts is smaller relative to P1's total coverage, because P1 has the broader spread and covers extreme regions that P2 misses.

✅ This shows such a case is possible.

20.8

Given:

Front X (4 points):

 x₁ = (3,4)
 x₂ = (3,3)
 x₃ = (2,2)
 x₄ = (5,2)

Front Y (3 points):

 y₁ = (1,3)
 y₂ = (4,3)
 y₃ = (4,1)

a) Coverage of X relative to Y (C(X,Y)):

The coverage metric C(A, B) is the proportion of solutions in B dominated by at least one solution in A.

Check which y-points are dominated by any x-point (minimization):

 y₁ = (1,3): no x-point has f1 ≤ 1 → not dominated
 y₂ = (4,3): dominated by x₂ = (3,3) and x₃ = (2,2)
 y₃ = (4,1): not dominated — x₃ = (2,2) has f2 = 2 > 1, and no other x-point has f2 ≤ 1

→ 1 out of 3 points in Y is dominated by X

✅ C(X,Y) = 1/3

b) Coverage of Y relative to X (C(Y,X)):

Which x-points are dominated by any y-point?

 x₁ = (3,4): dominated by y₁ = (1,3)
 x₂ = (3,3): dominated by y₁ = (1,3)
 x₃ = (2,2): dominated by none
 x₄ = (5,2): dominated by y₃ = (4,1)

→ 3 out of 4 points in X are dominated by Y

✅ C(Y,X) = 3/4

c) Which is better based on coverage?

Since C(Y,X) = 3/4 > C(X,Y) = 1/3, the Y front dominates a larger fraction of X than vice versa.

✅ Answer: Y is the better approximation of the Pareto front based on the coverage metric.
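The coverage values can be recomputed directly (a minimal sketch, minimization assumed):

```python
from fractions import Fraction

def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def coverage(A, B):
    """C(A, B): fraction of B dominated by at least one member of A."""
    return Fraction(sum(any(dominates(a, b) for a in A) for b in B), len(B))

X = [(3, 4), (3, 3), (2, 2), (5, 2)]
Y = [(1, 3), (4, 3), (4, 1)]
print(coverage(X, Y))   # 1/3: only (4,3) is dominated (by (3,3) and (2,2))
print(coverage(Y, X))   # 3/4: (3,4), (3,3) and (5,2) are dominated
```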



20.9 Why must all objectives be non-negative in the product aggregation method
(Equation 20.26)?

In the product aggregation method, multiple objectives are combined by multiplying them:

Aggregate objective = f1(x) × f2(x) × … × fn(x)

This approach assumes all objective function values are non-negative. If any objective is
negative:

 Sign Reversal: Multiplying an odd number of negative objectives can result in a negative
aggregate, potentially misrepresenting the solution's quality.
 Zero Value: If any objective is zero, the entire product becomes zero, possibly
undervaluing a solution that performs well on other objectives.
 Interpretation Issues: Negative or zero aggregate values can complicate the
interpretation and comparison of solutions.

Therefore, ensuring all objectives are non-negative maintains the method's validity and
interpretability.

20.10 Difference between elitism and an archive in evolutionary multi-objective


optimization

Elitism:

 Involves retaining the best solutions (typically non-dominated) from the current
generation to the next.
 Ensures that high-quality solutions are not lost due to stochastic variations.
 Directly influences the selection and reproduction process.

Archive:

 A separate storage (external to the main population) that maintains a record of the best solutions found so far.
 May not directly participate in generating new solutions but serves as a repository for decision-makers.
 Helps in preserving diversity and provides a comprehensive set of trade-off solutions.

In summary, while both aim to preserve quality solutions, elitism focuses on immediate
generational retention, whereas an archive serves as a long-term memory.
20.11 Maximum number of individuals in the archive of an e-MOEA

In ε-MOEA and similar evolutionary multi-objective optimization algorithms, the archive stores non-dominated solutions.

 Two Objectives: The maximum number of non-dominated solutions that can be stored is
theoretically unbounded, as any number of solutions can lie on the Pareto front. However,
practical implementations often set a fixed archive size to manage computational
resources.
 Three Objectives: Similarly, the number of non-dominated solutions can be vast,
forming a surface in three-dimensional space. Again, a fixed archive size is typically
enforced in practice.

Therefore, while there's no strict theoretical limit, practical considerations necessitate setting a
maximum archive size.

20.12 Crowding distance calculation using the largest enclosing rectangle in


NSGA-II

In NSGA-II, crowding distance is a measure of solution density, calculated based on neighboring


solutions in each objective dimension.

If we modify this approach to use the largest rectangle (in 2D) or hyperrectangle (in higher
dimensions) that encloses a solution without including any other points, the crowding distance
becomes the perimeter or volume of this rectangle.

This method emphasizes the isolation of a solution, potentially providing a more accurate
representation of sparsity in the solution space.

20.13 Relationship between d₁(x) and d₂(x) in NSGA-II

Let:

 d₁(x): Crowding distance calculated using nearest neighbors in each objective dimension.
 d₂(x): Crowding distance calculated using the largest rectangle enclosing x without
including other points.

If d₁(x) < d₁(y), it indicates that x is in a denser region than y based on immediate neighbors.
However, this doesn't necessarily imply that d₂(x) < d₂(y), as d₂ considers the overall space
around a point, not just immediate neighbors.

Therefore, d₁(x) < d₁(y) does not guarantee d₂(x) < d₂(y).
20.14 Raw cost and strength values in SPEA

In the Strength Pareto Evolutionary Algorithm (SPEA):

 Strength (S): For each individual, the number of solutions it dominates.
 Raw Fitness (R): For each individual, the sum of the strengths of the individuals that dominate it.

For example, if an individual is dominated by two others with strengths 3 and 2, its raw fitness is
3 + 2 = 5.

These metrics help in selecting individuals for the next generation, balancing convergence and
diversity.
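Both values can be computed in a few lines (a minimal sketch on an illustrative three-point population, minimization assumed):

```python
def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def spea_strength_raw(objs):
    """Strength S_i = number of solutions i dominates; raw fitness R_i =
    sum of the strengths of the solutions that dominate i."""
    n = len(objs)
    strength = [sum(dominates(objs[i], objs[j]) for j in range(n))
                for i in range(n)]
    raw = [sum(strength[j] for j in range(n) if dominates(objs[j], objs[i]))
           for i in range(n)]
    return strength, raw

pop = [(1, 1), (2, 2), (3, 3)]
s, r = spea_strength_raw(pop)
print(s)  # [2, 1, 0]: (1,1) dominates two solutions, (2,2) one
print(r)  # [0, 2, 3]: (3,3) is dominated by strengths 2 and 1
```

Non-dominated individuals end up with raw fitness 0, matching the convention that lower raw fitness is better.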

