The document discusses artificial neural networks and backpropagation. It provides an overview of backpropagation algorithms, including how they were developed over time, the basic methodology of propagating errors backwards, and typical network architectures. It also gives examples of applying backpropagation to problems like robotics, space robots, handwritten digit recognition, and face recognition.
This document outlines a presentation on knowledge representation. It begins with an introduction to propositional logic, including its syntax, semantics, and properties. Several inference methods for propositional logic are discussed, including truth tables, deductive systems, and resolution. Predicate logic and semantic networks are also mentioned as topics to be covered. The overall document provides an outline of the key concepts to be presented on knowledge representation using logic.
Guest lecture on genetic algorithms in the course ECE657: Computational Intelligence/Intelligent Systems Design, Spring 2016, Electrical and Computer Engineering (ECE) Department, University of Waterloo, Canada.
This document provides information about the CS407 Neural Computation course. It outlines the lecturer, timetable, assessment, textbook recommendations, and covers topics from today's lecture including an introduction to neural networks, their inspiration from the brain, a brief history, applications, and an overview of topics to be covered in the course.
Neural networks are inspired by biological neural networks and are composed of interconnected processing elements called neurons. Neural networks can learn complex patterns and relationships through a learning process without being explicitly programmed. They are widely used for applications like pattern recognition, classification, forecasting and more. The document discusses neural network concepts like architecture, learning methods, activation functions and applications. It provides examples of biological and artificial neurons and compares their characteristics.
This document provides an overview and introduction to the course "Knowledge Representation & Reasoning" taught by Ms. Jawairya Bukhari. It discusses the aims of developing skills in knowledge representation and reasoning using different representation methods. It outlines prerequisites like artificial intelligence, logic, and programming. Key topics covered include symbolic and non-symbolic knowledge representation methods, types of knowledge, languages for knowledge representation like propositional logic, and what knowledge representation encompasses.
This document summarizes artificial neural networks. It discusses how neural networks are composed of interconnected neurons that can learn complex behaviors through simple principles. Neural networks can be used for applications like pattern recognition, noise reduction, and prediction. The key components of neural networks are neurons, synapses, weights, thresholds, and activation functions. Neural networks offer advantages like adaptability and fault tolerance, though they are not exact and can be complex. Examples of neural network applications discussed include object trajectory learning, radiosity for virtual reality, speechreading, target detection and tracking, and robotics.
Here is a MATLAB program to implement logic functions using a McCulloch-Pitts neuron:
% McCulloch-Pitts neuron implementing the AND function
% The neuron fires (y = 1) only when x1*w1 + x2*w2 >= theta
w1 = 1; w2 = 1;                  % excitatory weights
theta = 2;                       % firing threshold
inputs = [0 0; 0 1; 1 0; 1 1];   % full truth table
for i = 1:size(inputs, 1)
    x1 = inputs(i, 1);
    x2 = inputs(i, 2);
    net = x1*w1 + x2*w2;         % net input
    if net >= theta              % step activation function
        y = 1;
    else
        y = 0;
    end
    fprintf('x1=%d x2=%d -> y=%d\n', x1, x2, y);
end
This implements a basic AND logic gate using a McCulloch-Pitts neuron.
Neural networks can be biological models of the brain or artificial models created through software and hardware. The human brain consists of interconnected neurons that transmit signals through connections called synapses. Artificial neural networks aim to mimic this structure using simple processing units called nodes that are connected by weighted links. A feed-forward neural network passes information in one direction from input to output nodes through hidden layers. Backpropagation is a common supervised learning method that uses gradient descent to minimize error by calculating error terms and adjusting weights between layers in the network backwards from output to input. Neural networks have been applied successfully to problems like speech recognition, character recognition, and autonomous vehicle navigation.
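The backward pass described above can be sketched numerically. The following Python fragment (the weights, learning rate, and target are illustrative values, not taken from the summarized document) runs one forward pass through a tiny 2-2-1 sigmoid network, propagates the error term from the output back to the hidden layer, applies a gradient-descent weight update, and checks that the squared error decreases:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hand-picked illustrative weights for a 2-2-1 network.
w1 = [[0.1, 0.2], [0.3, 0.4]]   # w1[j][i] connects input i to hidden unit j
b1 = [0.1, 0.1]
w2 = [0.5, 0.6]                 # hidden-to-output weights
b2 = 0.1
lr = 0.5

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(2)) + b2)
    return h, y

x, t = [1.0, 0.0], 1.0
h, y = forward(x)
loss_before = 0.5 * (y - t) ** 2

# Backward pass: error terms propagate from output to hidden layer.
delta_out = (y - t) * y * (1 - y)
delta_h = [delta_out * w2[j] * h[j] * (1 - h[j]) for j in range(2)]

# Gradient-descent updates, output layer first, then hidden layer.
for j in range(2):
    w2[j] -= lr * delta_out * h[j]
    b1[j] -= lr * delta_h[j]
    for i in range(2):
        w1[j][i] -= lr * delta_h[j] * x[i]
b2 -= lr * delta_out

_, y_new = forward(x)
loss_after = 0.5 * (y_new - t) ** 2
print(loss_after < loss_before)  # → True: the single update reduces the error
```

In a real training run this forward/backward/update cycle repeats over many examples and epochs.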
Introduction and architecture of expert system - premdeshmane
An expert system is an interactive computer program that uses knowledge acquired from experts to solve complex problems in a specific domain. It consists of an inference engine that applies rules and logic to the facts contained within a knowledge base in order to provide recommendations or advice to users. The first expert system was called DENDRAL and was developed in the 1970s at Stanford University to identify unknown organic molecules. Expert systems are used in applications like diagnosis, financial planning, configuration, and more to perform tasks previously requiring human expertise. They have benefits like increased productivity and quality, reduced costs and errors, and the ability to capture scarce human knowledge. However, they also have limitations such as difficulty acquiring and representing human expertise and an inability to operate outside their narrow domain of expertise.
Competitive Learning [Deep Learning And Neural Networks].pptx - raghavaram5555
Neural networks can use unsupervised learning techniques like competitive learning. Competitive learning involves nodes competing to respond to input data, with the winning node updating its weights. Models include winner-take-all nets, self-organizing maps, and learning vector quantization. Specific competitive learning algorithms discussed include MAXNET, Mexican Hat networks, and Hamming nets. Kohonen self-organizing maps are a type of competitive learning network where neurons organize themselves in a topological map based on input patterns.
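The winner-take-all idea can be sketched in Python. In this illustrative fragment (cluster centers, learning rate, and the deliberately separated initial weights are assumptions, not from the summarized document), two units compete for 2-D inputs drawn around two clusters; only the winning unit moves its weights, so each unit settles near one cluster center:

```python
import random

random.seed(3)

# Two hypothetical cluster centers the inputs are drawn around.
clusters = [(0.0, 0.0), (4.0, 4.0)]

def sample():
    cx, cy = random.choice(clusters)
    return (cx + random.uniform(-0.5, 0.5), cy + random.uniform(-0.5, 0.5))

# Initial weights are placed apart to avoid the classic dead-unit problem.
weights = [[0.5, 0.5], [3.5, 3.5]]
lr = 0.1

def dist2(w, x):
    return (w[0] - x[0]) ** 2 + (w[1] - x[1]) ** 2

for _ in range(500):
    x = sample()
    win = min(range(2), key=lambda j: dist2(weights[j], x))  # competition
    for d in range(2):
        weights[win][d] += lr * (x[d] - weights[win][d])     # only winner updates

print([[round(v, 1) for v in w] for w in weights])  # each near a cluster center
```

Self-organizing maps extend this scheme by also updating the winner's topological neighbors, which is what produces the map structure mentioned above.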
Genetic programming is an evolutionary computation technique that can automatically solve problems without requiring the user to specify the form of the solution in advance. It works by generating an initial population of computer programs randomly, then uses genetic operations like crossover and mutation to breed new programs, evaluating their fitness at each generation. The fittest programs survive and produce offspring for the next generation. Programs in GP are represented as syntax trees with functions as internal nodes and variables/constants as leaves. GP has been successfully applied to problems like circuit design, predictive modeling, and control systems optimization.
FOIL is an algorithm for inductive logic programming that learns sets of first-order rules from examples to perform tasks like predicting relationships between people. FOIL extends earlier rule learning algorithms to handle first-order logic representations using predicates, variables, and quantification. It searches for the most specific rules that cover positive training examples, removing covered examples and searching for additional rules. FOIL's use of first-order logic makes the rules it learns more generally applicable than propositional rules.
This presentation discusses the following ANN concepts:
Introduction
Characteristics
Learning methods
Taxonomy
Evolution of neural networks
Basic models
Important technologies
Applications
Artificial neural networks are a form of artificial intelligence inspired by biological neural networks. They are composed of interconnected processing units that can learn patterns from data through training. Neural networks are well-suited for tasks like pattern recognition, classification, and prediction. They learn by example without being explicitly programmed, similarly to how the human brain learns.
Artificial Neural Network Lect4: Single Layer Perceptron Classifiers - Mohammed Bennamoun
This document provides an overview of single layer perceptrons (SLPs) and classification. It defines a perceptron as the simplest form of neural network consisting of adjustable weights and a bias. SLPs can perform binary classification of linearly separable patterns by adjusting weights during training. The document outlines limitations of SLPs, including their inability to represent non-linearly separable functions like XOR. It introduces Bayesian decision theory and how it can be used for optimal classification by comparing posterior probabilities given prior probabilities and likelihood functions. Decision boundaries are defined for dividing a feature space into non-overlapping regions to classify patterns.
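The weight-adjustment procedure for an SLP can be made concrete with a short Python sketch (the learning rate, initial weights, and epoch cap are illustrative choices, not from the summarized document). It learns the linearly separable AND function; the same loop would never converge on XOR, which is the limitation noted above:

```python
# Perceptron learning rule on the linearly separable AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.1

def predict(x):
    # Threshold activation: fire when the weighted sum plus bias is >= 0.
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0

for epoch in range(20):            # a few epochs suffice for AND
    errors = 0
    for x, t in data:
        err = t - predict(x)       # +1, 0, or -1
        if err:
            errors += 1
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    if errors == 0:                # converged: every pattern classified
        break

print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

The perceptron convergence theorem guarantees this loop terminates whenever the two classes are linearly separable.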
This document provides an overview of different agent architectures, including reactive, deliberative, and hybrid architectures. It discusses key concepts like the types of environments agents can operate in, including accessible vs inaccessible, deterministic vs non-deterministic, episodic vs non-episodic, and static vs dynamic environments. Reactive architectures are focused on fast reactions to environmental changes with minimal internal representation and computation. Deliberative architectures emphasize long-term planning and goal-driven behavior using symbolic representations. Rodney Brooks proposed that intelligence can emerge from the interaction of simple agents following stimulus-response rules, without complex internal models, as seen in ant colonies.
The document discusses gradient descent methods for unconstrained convex optimization problems. It introduces gradient descent as an iterative method to find the minimum of a differentiable function by taking steps proportional to the negative gradient. It describes the basic gradient descent update rule and discusses convergence conditions such as Lipschitz continuity, strong convexity, and condition number. It also covers techniques like exact line search, backtracking line search, coordinate descent, and steepest descent methods.
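The basic update rule x_{k+1} = x_k - eta * grad f(x_k) can be demonstrated on a toy function (the step size and iteration count here are illustrative; f(x) = (x - 3)^2 has Lipschitz gradient with L = 2, so eta = 0.1 satisfies the usual eta <= 1/L condition):

```python
# Gradient descent on f(x) = (x - 3)^2, whose minimum is at x = 3.
f_grad = lambda x: 2 * (x - 3)

x = 0.0
eta = 0.1
for _ in range(100):
    x -= eta * f_grad(x)   # x_{k+1} = x_k - eta * grad f(x_k)

print(round(x, 4))  # → 3.0
```

Because this f is strongly convex, the iterates contract toward the minimizer geometrically, which is the kind of convergence rate the conditions above (strong convexity, condition number) quantify.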
Deep learning and neural networks are inspired by biological neurons. Artificial neural networks (ANN) can have multiple layers and learn through backpropagation. Deep neural networks with multiple hidden layers did not work well until recent developments in unsupervised pre-training of layers. Experiments on MNIST digit recognition and NORB object recognition datasets showed deep belief networks and deep Boltzmann machines outperform other models. Deep learning is now widely used for applications like computer vision, natural language processing, and information retrieval.
Genetic algorithms are optimization techniques inspired by Darwin's theory of evolution. They use operations like selection, crossover and mutation to evolve solutions to problems by iteratively trying random variations. The document outlines the history, concepts, process and applications of genetic algorithms, including using them to optimize engineering design, routing, computer games and more. It describes how genetic algorithms encode potential solutions and use fitness functions to guide the evolution toward better outcomes.
The document discusses the 8-puzzle problem and the A* algorithm. The 8-puzzle problem involves a 3x3 grid with 8 numbered tiles and 1 blank space that can be moved. The A* algorithm maintains a tree of paths from the initial to final state, extending the paths one step at a time until the final state is reached. It is complete and optimal but depends on the accuracy of the heuristic used to estimate costs.
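The A* loop for the 8-puzzle can be sketched in Python. The misplaced-tiles heuristic used here is one common admissible choice (the summary above does not specify which heuristic the document uses); states are 9-tuples with 0 marking the blank:

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 marks the blank space

def misplaced(state):
    """Admissible heuristic: number of tiles out of place."""
    return sum(1 for i, t in enumerate(state) if t and t != GOAL[i])

def neighbors(state):
    b = state.index(0)
    r, c = divmod(b, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            n = nr * 3 + nc
            s = list(state)
            s[b], s[n] = s[n], s[b]   # slide a tile into the blank
            yield tuple(s)

def astar(start):
    frontier = [(misplaced(start), 0, start)]   # (f = g + h, g, state)
    best_g = {start: 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == GOAL:
            return g                  # optimal number of moves
        for nxt in neighbors(state):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + misplaced(nxt), g + 1, nxt))
    return None                       # unsolvable configuration

# A start state two moves away from the goal:
print(astar((1, 2, 3, 4, 5, 6, 0, 7, 8)))  # → 2
```

Because the heuristic never overestimates the remaining cost, the first time the goal is popped the path length is optimal, matching the completeness and optimality claims above.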
This document provides an introduction to genetic algorithms. It describes genetic algorithms as probabilistic optimization algorithms inspired by biological evolution, using concepts like natural selection and genetic inheritance. The key components of a genetic algorithm are described, including encoding solutions, initializing a population, selecting parents, applying genetic operators like crossover and mutation, evaluating fitness, and establishing termination criteria. An example problem of maximizing binary string ones is used to illustrate how a genetic algorithm works over multiple generations.
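The binary-string maximization example (often called OneMax) can be sketched as follows; the population size, generation count, tournament selection, single-point crossover, and mutation rate are illustrative choices, not parameters from the summarized document:

```python
import random

random.seed(42)  # reproducibility of this sketch

LENGTH, POP, GENS = 20, 30, 60

def fitness(bits):
    return sum(bits)  # OneMax: count the 1s

def select(pop):
    # Tournament selection of size 2.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, LENGTH)   # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.02):
    return [b ^ (random.random() < rate) for b in bits]  # flip bits with prob. rate

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    elite = max(pop, key=fitness)   # elitism: carry the best forward unchanged
    pop = [elite] + [mutate(crossover(select(pop), select(pop)))
                     for _ in range(POP - 1)]

best = max(pop, key=fitness)
print(fitness(best))
```

Selection pressure plus elitism makes the best fitness non-decreasing across generations, which is why the population climbs toward the all-ones string.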
This document discusses genetic algorithms and their application to optimization problems. It begins with an introduction to genetic algorithms, including their biological inspiration, search space, working principles and basic genetic algorithm approach. It then covers encoding techniques for genetic algorithms, including binary, value, permutation and tree encoding. The document concludes with examples of genetic algorithm operators like selection, crossover and mutation, and provides basic examples of applying genetic algorithms to optimization problems.
The document discusses general problem solving in artificial intelligence. It defines key concepts like problem space, state space, operators, initial and goal states. Problem solving involves searching the state space to find a path from the initial to the goal state. Different search algorithms can be used, like depth-first search and breadth-first search. Heuristic functions can guide searches to improve efficiency. Constraint satisfaction problems are another class of problems that can be solved using techniques like backtracking.
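Searching a state space for a path from initial to goal state can be sketched with breadth-first search (the graph here is a small hypothetical state space invented for illustration; BFS finds a shortest path in moves when all operators cost the same):

```python
from collections import deque

# A small hypothetical state space: S is the initial state, G the goal.
graph = {
    "S": ["A", "B"],
    "A": ["C"],
    "B": ["C", "G"],
    "C": ["G"],
    "G": [],
}

def bfs(start, goal):
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                   # goal unreachable

print(bfs("S", "G"))  # → ['S', 'B', 'G']
```

Depth-first search is the same loop with a stack (pop from the right) instead of a queue, trading the shortest-path guarantee for lower memory use.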
This document provides an introduction to optimization methods and strategies. It defines optimization as selecting the best solution from available alternatives according to some criteria. Traditional gradient-based algorithms can find local optima but not global optima. Gradient-free methods avoid limitations of gradient methods but are less efficient. Metaheuristic methods like genetic algorithms and particle swarm optimization are inspired by nature and can find global optima without derivatives. Engineering optimization applies these techniques to achieve design goals. Nature-inspired metaheuristics use randomization and local search to find high-quality solutions without guaranteeing optimality.
SCA: A Sine Cosine Algorithm for solving optimization problems - laxmanLaxman03209
The document proposes a new population-based optimization algorithm called the Sine Cosine Algorithm (SCA) for solving optimization problems. SCA creates multiple random initial solutions and uses sine and cosine functions to fluctuate the solutions outward or toward the best solution, emphasizing exploration and exploitation. The performance of SCA is evaluated on test functions, qualitative metrics, and by optimizing the cross-section of an aircraft wing, showing it can effectively explore, avoid local optima, converge to the global optimum, and solve real problems with constraints.
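The fluctuation of solutions around the best one can be sketched in Python under the commonly published form of the SCA update, X = X + r1*sin(r2)*|r3*P - X| (or the cosine variant); the constant a = 2, the bounds, and the toy sphere objective are illustrative assumptions, not values from this summary:

```python
import math
import random

random.seed(1)

DIM, POP, ITERS, A = 2, 30, 300, 2.0

def f(x):
    return sum(v * v for v in x)   # sphere function, minimum 0 at the origin

sols = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
best = min(sols, key=f)[:]

for t in range(ITERS):
    r1 = A - t * (A / ITERS)       # shrinks: exploration gives way to exploitation
    for x in sols:
        for d in range(DIM):
            r2 = random.uniform(0, 2 * math.pi)
            r3 = random.uniform(0, 2)
            if random.random() < 0.5:   # sine or cosine branch
                x[d] += r1 * math.sin(r2) * abs(r3 * best[d] - x[d])
            else:
                x[d] += r1 * math.cos(r2) * abs(r3 * best[d] - x[d])
        if f(x) < f(best):
            best = x[:]

print(f(best))  # best objective found; non-increasing over iterations
```

Because the step magnitude is proportional to r1 times the distance to the best solution, early iterations range widely (exploration) while late iterations contract around the incumbent (exploitation), which is the balance the paper emphasizes.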
This document provides a review of non-traditional optimization algorithms that have been used to solve simultaneous scheduling problems in industrial and production environments. It discusses several metaheuristic algorithms such as genetic algorithms, simulated annealing, particle swarm optimization, artificial immune systems, differential evolution, and ant colony optimization. These algorithms are able to find good solutions to combinatorial optimization problems like scheduling that are NP-complete and cannot be solved optimally in polynomial time using traditional methods. The review concludes that these non-traditional techniques can yield global optimal solutions and efficiently explore solution spaces, making them useful approaches for simultaneous scheduling problems.
Solving Multidimensional Multiple Choice Knapsack Problem By Genetic Algorith... - Shubhashis Shil
This document summarizes a study that used a genetic algorithm to solve the multidimensional multiple choice knapsack problem (MMKP) and measured its performance against traditional approaches. The genetic algorithm was able to obtain near-optimal revenue solutions for large-scale MMKP problems in less time than traditional methods like Branch and Bound with Linear Programming (BBLP), Modified Heuristic (M-HEU), and Multiple Upgrade of Heuristic (MU-HEU). While the revenue obtained was nearly the same across all methods, the genetic algorithm had significantly better timing complexity and its effectiveness increased as the problem constraints grew larger.
A Genetic Algorithm Problem Solver For Archaeology - Amy Cernava
This document introduces a genetic algorithm workbench that can be used to solve archaeology problems. Genetic algorithms are inspired by natural selection and use populations of individuals that evolve over generations to optimize solutions. The workbench allows users to define problems in terms of data types and variables, and evaluate solutions. It implements genetic operators like selection, crossover and mutation to iteratively improve populations. Archaeologists are invited to test genetic algorithms on their problems by using the freely available workbench.
2 - Structural optimisation and inverse analysis strategies for masonry structures
Corrado Chisari
Dept. of Civil and Environmental Engineering, Imperial College London
This document proposes and evaluates a new metaheuristic optimization algorithm called Current Search (CS) and applies it to optimize PID controller parameters for DC motor speed control. The CS is inspired by electric current flow and aims to balance exploration and exploitation. It outperforms genetic algorithm, particle swarm optimization, and adaptive tabu search on benchmark optimization problems, finding better solutions faster. When applied to optimize a PID controller for DC motor speed control, the CS successfully controlled motor speed.
The document discusses the optimal synthesis of a crank rocker mechanism for point-to-point path generation using genetic algorithms. It introduces the concept of orientation structure error of the fixed link and presents a new optimal synthesis method. A mathematical model is developed using the mechanism's link lengths and angles as design variables. The objective is to minimize the crank link length. Genetic algorithms are applied to optimize the mechanism, with the crank length as the objective function and the link dimensions as variables coded in binary strings.
The document discusses various optimization techniques including evolutionary computing techniques such as particle swarm optimization and genetic algorithms. It provides an overview of the goal of optimization problems and discusses black-box optimization approaches. Evolutionary algorithms and swarm intelligence techniques that are inspired by nature are also introduced. The document then focuses on particle swarm optimization, providing details on the concepts, mathematical equations, components and steps involved in PSO. It also discusses genetic algorithms at a high level.
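The PSO velocity and position updates mentioned above can be sketched on a toy sphere objective; the inertia weight and acceleration coefficients (w = 0.7, c1 = c2 = 1.5) are common textbook defaults, not values taken from this summary:

```python
import random

random.seed(0)

DIM, SWARM, ITERS = 2, 20, 100
w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive, and social coefficients

def f(x):
    return sum(v * v for v in x)   # sphere function, minimum 0 at the origin

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]              # each particle's personal best
gbest = min(pbest, key=f)[:]             # swarm's global best

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]       # x = x + v
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
            if f(pos[i]) < f(gbest):
                gbest = pos[i][:]

print(f(gbest))  # close to 0, the global minimum
```

Each particle is pulled toward its own best position (cognitive term) and the swarm's best (social term), while the inertia weight damps the velocity so the swarm converges.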
The document discusses using an NSGA-III-based meta decision tree to classify real estate data. NSGA-III is a many-objective metaheuristic optimization technique that searches for near-optimal solutions. It is used to iteratively optimize the parameters of a meta-J48 decision tree model to improve classification accuracy on real estate data. Experimental results found the proposed NSGA-III technique outperformed other methods in terms of accuracy, true positive rate, true negative rate, precision, and F-measure, making it applicable for real-world real estate classification.
Operational research (OR) is a discipline that deals with applying advanced analytical methods to help make better decisions. OR uses scientific methods and especially mathematical modeling to study complex problems. It is considered a subfield of applied mathematics. Some key applications of OR include scheduling, facility planning, planning and forecasting, credit scoring, marketing, and defense planning. OR takes a systems approach, uses interdisciplinary teams, and aims to optimize objectives subject to constraints through quantitative modeling and analysis.
Applied Artificial Intelligence Unit 4 Semester 3 MSc IT Part 2 Mumbai Univer... - Madhav Mishra
The document discusses various topics related to evolutionary computation and artificial intelligence, including:
- Evolutionary computation concepts like genetic algorithms, genetic programming, evolutionary programming, and swarm intelligence approaches like ant colony optimization and particle swarm optimization.
- The use of intelligent agents in artificial intelligence and differences between single and multi-agent systems.
- Soft computing techniques involving fuzzy logic, machine learning, probabilistic reasoning and other approaches.
- Specific concepts discussed in more depth include genetic algorithms, genetic programming, swarm intelligence, ant colony optimization, and metaheuristics.
This document provides information about an algorithms course, including the course syllabus and topics that will be covered. The course topics include introduction to algorithms, analysis of algorithms, algorithm design techniques like divide and conquer, greedy algorithms, dynamic programming, backtracking, and branch and bound. It also covers NP-hard and NP-complete problems. The syllabus outlines 5 units that will analyze performance, teach algorithm design methods, and solve problems using techniques like divide and conquer, dynamic programming, and backtracking. It aims to help students choose appropriate algorithms and data structures for applications and understand how algorithm design impacts program performance.
Learning agents are intelligent systems that can perceive their environment, learn from experiences, and improve over time. They employ machine learning algorithms to adapt based on feedback and interactions. Designing learning agents involves defining a performance measure, the environment, actuators, sensors, the learning element, knowledge representation, and a feedback mechanism. Applications include autonomous vehicles, recommender systems, robotics, and gaming.
Performance Analysis of Genetic Algorithm as a Stochastic Optimization Tool i...paperpublications3
Abstract: Engineering design problems are complex by nature because of their critical objective functions involving many variables and Constraints. Engineers have to ensure the compatibility with the imposed specifications keeping the manufacturing costs low. Moreover, the methodology may vary according to the design problem.
The main issue is to choose the proper tool for optimization. In the earlier days, a design problem was optimized by some of the conventional optimization techniques like gradient Search, evolutionary optimization, random search etc. These are known as classical methods.
The method is to be properly Chosen depending on the nature of the problem- an incorrect choice may sometimes fail to give the optimal solution. So the methods are less robust.
Now-a-days soft-computing techniques are being widely used for optimizing a function. These are more robust. Genetic algorithm is one such method. It is an effective tool in the realm of stochastic optimization (non-classical). The algorithm produces many strings and generation to reach the optimal point.
The main objective of the paper is to optimize engineering design problems using Genetic Algorithm and to analyze how the algorithm reaches the optima effectively and closely. We choose a mathematical expression for the objective function in terms of the design variables and optimize the same under given constraints using GA.
Artificial Intelligence in Robot Path Planningiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document discusses graph algorithms and directed acyclic graphs (DAGs). It explains that the edges in a graph can be identified as tree, back, forward, or cross edges based on the color of vertices during depth-first search (DFS). It also defines DAGs as directed graphs without cycles and describes how to perform a topological sort of a DAG by inserting vertices into a linked list based on their finishing times from DFS. Finally, it discusses how to find strongly connected components (SCCs) in a graph using DFS on the original graph and its transpose.
This document discusses string matching algorithms. It begins with an introduction to the naive string matching algorithm and its quadratic runtime. Then it proposes three improved algorithms: FC-RJ, FLC-RJ, and FMLC-RJ, which attempt to match patterns by restricting comparisons based on the first, first and last, or first, middle, and last characters, respectively. Experimental results show that these three proposed algorithms outperform the naive algorithm by reducing execution time, with FMLC-RJ working best for three-character patterns.
The document discusses shortest path problems and algorithms. It defines the shortest path problem as finding the minimum weight path between two vertices in a weighted graph. It presents the Bellman-Ford algorithm, which can handle graphs with negative edge weights but detects negative cycles. It also presents Dijkstra's algorithm, which only works for graphs without negative edge weights. Key steps of the algorithms include initialization, relaxation of edges to update distance estimates, and ensuring the shortest path property is satisfied.
The document discusses strongly connected component decomposition (SCCD) which uses depth-first search (DFS) to separate a directed graph into subsets of mutually reachable vertices. It describes running DFS on the original graph and its transpose to find these subsets in Θ(V+E) time, then provides an example applying the three step process of running DFS on the graph and transpose, finding two strongly connected components.
Red-black trees are self-balancing binary search trees. They guarantee an O(log n) running time for operations by ensuring that no path from the root to a leaf is more than twice as long as any other. Nodes are colored red or black, and properties of the coloring are designed to keep the tree balanced. Inserting and deleting nodes may violate these properties, so rotations are used to restore the red-black properties and balance of the tree.
This document discusses recurrences and the master method for solving recurrence relations. It defines a recurrence as an equation that describes a function in terms of its value on smaller functions. The master method provides three cases for solving recurrences of the form T(n) = aT(n/b) + f(n). If f(n) is asymptotically smaller than nlogba, the solution is Θ(nlogba). If f(n) is Θ(nlogba), the solution is Θ(nlogba lgn). If f(n) is asymptotically larger and the regularity condition holds, the solution is Θ(f(n)). It provides examples of applying
The document discusses the Rabin-Karp algorithm for string matching. It defines Rabin-Karp as a string search algorithm that compares hash values of strings rather than the strings themselves. It explains that Rabin-Karp works by calculating a hash value for the pattern and text subsequences to compare, and only does a brute force comparison when hash values match. The worst-case complexity is O(n-m+1)m but the average case is O(n+m) plus processing spurious hits. Real-life applications include bioinformatics to find protein similarities.
The document discusses minimum spanning trees (MST) and two algorithms for finding them: Prim's algorithm and Kruskal's algorithm. It begins by defining an MST as a spanning tree (connected acyclic graph containing all vertices) with minimum total edge weight. Prim's algorithm grows a single tree by repeatedly adding the minimum weight edge connecting the growing tree to another vertex. Kruskal's algorithm grows a forest by repeatedly merging two components via the minimum weight edge connecting them. Both algorithms produce optimal MSTs by adding only "safe" edges that cannot be part of a cycle.
This document discusses the analysis of insertion sort and merge sort algorithms. It covers the worst-case and average-case analysis of insertion sort. For merge sort, it describes the divide-and-conquer technique, the merge sort algorithm including recursive calls, how it works to merge elements, and analyzes merge sort through constructing a recursion tree to prove its runtime is O(n log n).
The document discusses loop invariants and uses insertion sort as an example. The invariant for insertion sort is that at the start of each iteration of the outer for loop, the elements in A[1...j-1] are sorted. It shows that this invariant is true before the first iteration, remains true after each iteration by how insertion sort works, and when the loops terminate the entire array A[1...n] will be sorted, proving correctness.
Linear sorting algorithms like counting sort, bucket sort, and radix sort can sort arrays of numbers in linear O(n) time by exploiting properties of the data. Counting sort works for integers within a range [0,r] by counting the frequency of each number and using the frequencies to place numbers in the correct output positions. Bucket sort places numbers uniformly distributed between 0 and 1 into buckets and sorts the buckets. Radix sort treats multi-digit numbers as strings by sorting based on individual digit positions from least to most significant.
The document discusses heap data structures and algorithms. A heap is a binary tree that satisfies the heap property of a parent being greater than or equal to its children. Common operations on heaps like building
Greedy algorithms make locally optimal choices at each step in the hope of finding a globally optimal solution. The activity selection problem involves choosing a maximum set of activities that do not overlap in time. The greedy algorithm for this problem sorts activities by finish time and selects the earliest finishing activity at each step. This algorithm is optimal because the activity selection problem exhibits the optimal substructure property and the greedy algorithm satisfies the greedy-choice property at each step.
Quantum Computing Quick Research Guide by Arthur MorganArthur Morgan
This is a Quick Research Guide (QRG).
QRGs include the following:
- A brief, high-level overview of the QRG topic.
- A milestone timeline for the QRG topic.
- Links to various free online resource materials to provide a deeper dive into the QRG topic.
- Conclusion and a recommendation for at least two books available in the SJPL system on the QRG topic.
QRGs planned for the series:
- Artificial Intelligence QRG
- Quantum Computing QRG
- Big Data Analytics QRG
- Spacecraft Guidance, Navigation & Control QRG (coming 2026)
- UK Home Computing & The Birth of ARM QRG (coming 2027)
Any questions or comments?
- Please contact Arthur Morgan at [email protected].
100% human made.
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungenpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-und-verwaltung-von-multiuser-umgebungen/
HCL Nomad Web wird als die nächste Generation des HCL Notes-Clients gefeiert und bietet zahlreiche Vorteile, wie die Beseitigung des Bedarfs an Paketierung, Verteilung und Installation. Nomad Web-Client-Updates werden “automatisch” im Hintergrund installiert, was den administrativen Aufwand im Vergleich zu traditionellen HCL Notes-Clients erheblich reduziert. Allerdings stellt die Fehlerbehebung in Nomad Web im Vergleich zum Notes-Client einzigartige Herausforderungen dar.
Begleiten Sie Christoph und Marc, während sie demonstrieren, wie der Fehlerbehebungsprozess in HCL Nomad Web vereinfacht werden kann, um eine reibungslose und effiziente Benutzererfahrung zu gewährleisten.
In diesem Webinar werden wir effektive Strategien zur Diagnose und Lösung häufiger Probleme in HCL Nomad Web untersuchen, einschließlich
- Zugriff auf die Konsole
- Auffinden und Interpretieren von Protokolldateien
- Zugriff auf den Datenordner im Cache des Browsers (unter Verwendung von OPFS)
- Verständnis der Unterschiede zwischen Einzel- und Mehrbenutzerszenarien
- Nutzung der Client Clocking-Funktion
Procurement Insights Cost To Value Guide.pptxJon Hansen
Procurement Insights integrated Historic Procurement Industry Archives, serves as a powerful complement — not a competitor — to other procurement industry firms. It fills critical gaps in depth, agility, and contextual insight that most traditional analyst and association models overlook.
Learn more about this value- driven proprietary service offering here.
TrustArc Webinar: Consumer Expectations vs Corporate Realities on Data Broker...TrustArc
Most consumers believe they’re making informed decisions about their personal data—adjusting privacy settings, blocking trackers, and opting out where they can. However, our new research reveals that while awareness is high, taking meaningful action is still lacking. On the corporate side, many organizations report strong policies for managing third-party data and consumer consent yet fall short when it comes to consistency, accountability and transparency.
This session will explore the research findings from TrustArc’s Privacy Pulse Survey, examining consumer attitudes toward personal data collection and practical suggestions for corporate practices around purchasing third-party data.
Attendees will learn:
- Consumer awareness around data brokers and what consumers are doing to limit data collection
- How businesses assess third-party vendors and their consent management operations
- Where business preparedness needs improvement
- What these trends mean for the future of privacy governance and public trust
This discussion is essential for privacy, risk, and compliance professionals who want to ground their strategies in current data and prepare for what’s next in the privacy landscape.
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On...Aqusag Technologies
In late April 2025, a significant portion of Europe, particularly Spain, Portugal, and parts of southern France, experienced widespread, rolling power outages that continue to affect millions of residents, businesses, and infrastructure systems.
HCL Nomad Web – Best Practices and Managing Multiuser Environmentspanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-and-managing-multiuser-environments/
HCL Nomad Web is heralded as the next generation of the HCL Notes client, offering numerous advantages such as eliminating the need for packaging, distribution, and installation. Nomad Web client upgrades will be installed “automatically” in the background. This significantly reduces the administrative footprint compared to traditional HCL Notes clients. However, troubleshooting issues in Nomad Web present unique challenges compared to the Notes client.
Join Christoph and Marc as they demonstrate how to simplify the troubleshooting process in HCL Nomad Web, ensuring a smoother and more efficient user experience.
In this webinar, we will explore effective strategies for diagnosing and resolving common problems in HCL Nomad Web, including
- Accessing the console
- Locating and interpreting log files
- Accessing the data folder within the browser’s cache (using OPFS)
- Understand the difference between single- and multi-user scenarios
- Utilizing Client Clocking
Artificial Intelligence is providing benefits in many areas of work within the heritage sector, from image analysis, to ideas generation, and new research tools. However, it is more critical than ever for people, with analogue intelligence, to ensure the integrity and ethical use of AI. Including real people can improve the use of AI by identifying potential biases, cross-checking results, refining workflows, and providing contextual relevance to AI-driven results.
News about the impact of AI often paints a rosy picture. In practice, there are many potential pitfalls. This presentation discusses these issues and looks at the role of analogue intelligence and analogue interfaces in providing the best results to our audiences. How do we deal with factually incorrect results? How do we get content generated that better reflects the diversity of our communities? What roles are there for physical, in-person experiences in the digital world?
Special Meetup Edition - TDX Bengaluru Meetup #52.pptxshyamraj55
We’re bringing the TDX energy to our community with 2 power-packed sessions:
🛠️ Workshop: MuleSoft for Agentforce
Explore the new version of our hands-on workshop featuring the latest Topic Center and API Catalog updates.
📄 Talk: Power Up Document Processing
Dive into smart automation with MuleSoft IDP, NLP, and Einstein AI for intelligent document workflows.
Role of Data Annotation Services in AI-Powered ManufacturingAndrew Leo
From predictive maintenance to robotic automation, AI is driving the future of manufacturing. But without high-quality annotated data, even the smartest models fall short.
Discover how data annotation services are powering accuracy, safety, and efficiency in AI-driven manufacturing systems.
Precision in data labeling = Precision on the production floor.
AI EngineHost Review: Revolutionary USA Datacenter-Based Hosting with NVIDIA ...SOFTTECHHUB
I started my online journey with several hosting services before stumbling upon Ai EngineHost. At first, the idea of paying one fee and getting lifetime access seemed too good to pass up. The platform is built on reliable US-based servers, ensuring your projects run at high speeds and remain safe. Let me take you step by step through its benefits and features as I explain why this hosting solution is a perfect fit for digital entrepreneurs.
AI and Data Privacy in 2025: Global TrendsInData Labs
In this infographic, we explore how businesses can implement effective governance frameworks to address AI data privacy. Understanding it is crucial for developing effective strategies that ensure compliance, safeguard customer trust, and leverage AI responsibly. Equip yourself with insights that can drive informed decision-making and position your organization for success in the future of data privacy.
This infographic contains:
-AI and data privacy: Key findings
-Statistics on AI data privacy in the today’s world
-Tips on how to overcome data privacy challenges
-Benefits of AI data security investments.
Keep up-to-date on how AI is reshaping privacy standards and what this entails for both individuals and organizations.
Generative Artificial Intelligence (GenAI) in BusinessDr. Tathagat Varma
My talk for the Indian School of Business (ISB) Emerging Leaders Program Cohort 9. In this talk, I discussed key issues around adoption of GenAI in business - benefits, opportunities and limitations. I also discussed how my research on Theory of Cognitive Chasms helps address some of these issues
Increasing Retail Store Efficiency How can Planograms Save Time and Money.pptxAnoop Ashok
In today's fast-paced retail environment, efficiency is key. Every minute counts, and every penny matters. One tool that can significantly boost your store's efficiency is a well-executed planogram. These visual merchandising blueprints not only enhance store layouts but also save time and money in the process.
Spark is a powerhouse for large datasets, but when it comes to smaller data workloads, its overhead can sometimes slow things down. What if you could achieve high performance and efficiency without the need for Spark?
At S&P Global Commodity Insights, having a complete view of global energy and commodities markets enables customers to make data-driven decisions with confidence and create long-term, sustainable value. 🌍
Explore delta-rs + CDC and how these open-source innovations power lightweight, high-performance data applications beyond Spark! 🚀
Enhancing ICU Intelligence: How Our Functional Testing Enabled a Healthcare I...Impelsys Inc.
Impelsys provided a robust testing solution, leveraging a risk-based and requirement-mapped approach to validate ICU Connect and CritiXpert. A well-defined test suite was developed to assess data communication, clinical data collection, transformation, and visualization across integrated devices.
Enhancing ICU Intelligence: How Our Functional Testing Enabled a Healthcare I...Impelsys Inc.
Fundamentals of Genetic Algorithms (Soft Computing)
RC Chakraborty, www.myreaders.info
Genetic Algorithms & Modeling : Soft Computing Course Lectures 37–40, notes, slides
www.myreaders.info, RC Chakraborty, e-mail [email protected], Dec. 01, 2010
http://www.myreaders.info/html/soft_computing.html
Genetic algorithms & modeling, topics : Introduction, why genetic algorithms? Search optimization methods; evolutionary algorithms (EAs); genetic algorithms (GAs) - biological background, working principles; basic genetic algorithm; flow chart for genetic programming. Encoding : binary encoding, value encoding, permutation encoding, tree encoding. Operators of genetic algorithm : random population; reproduction or selection - roulette wheel selection, Boltzmann selection; fitness function; crossover - one-point crossover, two-point crossover, uniform crossover, arithmetic, heuristic; mutation - flip bit, boundary, Gaussian, non-uniform, and uniform. Basic genetic algorithm : examples - maximize the function f(x) = x² and the two bar pendulum.
Topics
(Lectures 37, 38, 39, 40 : 4 hours)
1. Introduction
What are Genetic Algorithms and why Genetic Algorithms? Search optimization methods; Evolutionary Algorithms (EAs); Genetic Algorithms (GAs) : biological background, working principles, basic genetic algorithm, flow chart for Genetic Programming.
2. Encoding
Binary Encoding, Value Encoding, Permutation Encoding, Tree Encoding.
3. Operators of Genetic Algorithm
Random population; Reproduction or Selection : Roulette wheel selection, Boltzmann selection; Fitness function; Crossover : One-point crossover, Two-point crossover, Uniform crossover, Arithmetic, Heuristic; Mutation : Flip bit, Boundary, Gaussian, Non-uniform, and Uniform.
4. Basic Genetic Algorithm
Examples : Maximize the function f(x) = x² and the two bar pendulum.
5. References
What are GAs ?
• Genetic Algorithms (GAs) are adaptive heuristic search algorithms based on the evolutionary ideas of natural selection and genetics.
• GAs are a part of evolutionary computing, a rapidly growing area of artificial intelligence. They are inspired by Darwin's theory of evolution - "survival of the fittest".
• GAs represent an intelligent exploitation of random search applied to optimization problems.
• Although randomized, GAs exploit historical information to direct the search into regions of better performance within the search space.
• In nature, competition among individuals for scanty resources results in the fittest individuals dominating over the weaker ones.
SC – GA - Introduction
1. Introduction
Solving a problem means looking for a solution that is best among others. Finding the solution to a problem is often thought of :
− in computer science and AI, as a process of search through the space of possible solutions. The set of possible solutions defines the search space (also called state space) for a given problem. Solutions or partial solutions are viewed as points in the search space.
− in engineering and mathematics, as a process of optimization. The problems are first formulated as mathematical models expressed in terms of functions; to find a solution, one discovers the parameters that optimize the model or the function components that provide optimal system performance.
• Why Genetic Algorithms ?
GAs are more robust than conventional AI search :
- unlike older AI systems, GAs do not break easily when the inputs change slightly or in the presence of reasonable noise;
- while searching a large state-space, a multi-modal state-space, or an n-dimensional surface, genetic algorithms offer significant benefits over many typical search optimization techniques such as linear programming, heuristic search, depth-first search, and breadth-first search.
"Genetic Algorithms are good at taking large, potentially huge search spaces and navigating them, looking for optimal combinations of things, the solutions one might not otherwise find in a lifetime." - Salvatore Mangano, Computer Design, May 1995.
1.1 Optimization
Optimization is a process that finds a best, or optimal, solution for a problem. Optimization problems are centered on three factors :
■ An objective function, which is to be minimized or maximized. Examples :
1. In manufacturing, we want to maximize the profit or minimize the cost.
2. In designing an automobile panel, we want to maximize the strength.
■ A set of unknowns or variables that affect the objective function. Examples :
1. In manufacturing, the variables are the amount of resources used or the time spent.
2. In the panel design problem, the variables are the shape and dimensions of the panel.
■ A set of constraints that allow the unknowns to take on certain values but exclude others. Examples :
1. In manufacturing, one constraint is that all "time" variables be non-negative.
2. In the panel design, we want to limit the weight and constrain the shape.
An optimization problem is thus defined as : finding values of the variables that minimize or maximize the objective function while satisfying the constraints.
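The three factors above can be sketched in code. In this minimal sketch the quadratic objective, the grid of candidate values, and the non-negativity constraint are illustrative assumptions, not taken from the notes:

```python
# Minimal sketch of the three factors of an optimization problem:
# an objective function, a set of variables, and a set of constraints.
# The function and grid below are illustrative, not from the notes.

def objective(x, y):
    """Objective function to be minimized (e.g. a cost)."""
    return (x - 3) ** 2 + (y - 2) ** 2

def feasible(x, y):
    """Constraints: both variables must be non-negative."""
    return x >= 0 and y >= 0

# Crude search over a grid of candidate variable values:
best = None
for xi in range(0, 10):
    for yi in range(0, 10):
        if feasible(xi, yi):
            cost = objective(xi, yi)
            if best is None or cost < best[0]:
                best = (cost, xi, yi)

print(best)  # minimum at x = 3, y = 2 with cost 0
```

The point is only the shape of the problem: later sections replace the crude grid scan with smarter search strategies over the same three ingredients.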
• Optimization Methods
Many optimization methods exist; they are categorized as shown below. The suitability of a method depends on one or more characteristics of the problem being optimized to meet one or more objectives such as :
− low cost,
− high performance,
− low loss.
These characteristics are not all necessarily obtainable, and choosing among the methods requires knowledge about the problem.
Fig. Optimization Methods
  Optimization Methods
  − Linear Programming
  − Non-Linear Programming
      − Classical Methods
      − Enumerative Methods
      − Stochastic Methods
Each of these methods is briefly discussed below, indicating the nature of the problems to which it is most applicable.
■ Linear Programming
Intends to obtain the optimal solution to problems that are perfectly represented by a set of linear equations; it thus requires a priori knowledge of the problem. Here
− the function to be minimized or maximized is called the objective function,
− the set of linear equations are called restrictions,
− the optimal solution is the one that minimizes (or maximizes) the objective function.
Example : the "traveling salesman" problem, seeking a minimal traveling distance (strictly, this is usually cast as an integer linear program).
■ Non-Linear Programming
Intended for problems described by non-linear equations. The methods are divided into three large groups : Classical, Enumerative and Stochastic.
Classical search uses a deterministic approach to find the best solution. These methods require knowledge of gradients or higher-order derivatives. In many practical problems some desired information is not available, which means deterministic algorithms are inappropriate. The techniques are subdivided into :
− Direct methods, e.g. Newton or Fibonacci,
− Indirect methods.
Enumerative search goes through every point (one point at a time) related to the function's domain space. At each point, all possible solutions are generated and tested to find the optimum solution. It is easy to implement but usually requires significant computation. In the field of artificial intelligence, the enumerative methods are subdivided into two categories :
− Uninformed methods, e.g. the Mini-Max algorithm,
− Informed methods, e.g. Alpha-Beta and A*.
Stochastic search deliberately introduces randomness into the search process. The injected randomness may provide the necessary impetus to move away from a local solution when searching for a global optimum, e.g. a gradient vector criterion for "smoothing" problems. Stochastic methods add robustness to the optimization process. Among the stochastic techniques, the most widely used are :
− Evolutionary Strategies (ES),
− Genetic Algorithms (GA), and
− Simulated Annealing (SA).
ES and GA emulate nature's evolutionary behavior, while SA is based on the physical process of annealing a material.
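As one concrete instance of a stochastic technique, simulated annealing can be sketched as follows. The objective function, step size, and cooling schedule here are illustrative assumptions, not from the notes:

```python
import math
import random

random.seed(0)

# Simulated annealing sketch: accept downhill (worse) moves with a
# probability that shrinks as the "temperature" cools, which lets the
# search escape local minima early on. The objective and cooling
# schedule below are illustrative choices.

def energy(x):
    return (x - 2.0) ** 2   # minimum-energy state at x = 2

x = 10.0
temperature = 5.0
while temperature > 1e-3:
    candidate = x + random.uniform(-1, 1)
    delta = energy(candidate) - energy(x)
    # Always accept improvements; sometimes accept worse moves.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.99      # geometric cooling schedule

print(round(x, 1))  # settles near the minimum at x = 2
```

The acceptance rule is the thermodynamic analogy the notes refer to: high temperature means frequent random jumps, low temperature means nearly greedy descent.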
1.2 Search Optimization
Among the three non-linear search methodologies just mentioned, our immediate concern is Stochastic search, which covers
− Evolutionary Strategies (ES),
− Genetic Algorithms (GA), and
− Simulated Annealing (SA).
The two other search methodologies, the Classical and the Enumerative methods, are briefly explained first; the Stochastic methods are then discussed in detail. All of these methods belong to non-linear search.
Fig. Non-Linear search methods
  Search Optimization
  − Classical Search
  − Enumerative Search
  − Stochastic Search (Guided Random Search)
      − Evolutionary Strategies (ES)
      − Genetic Algorithms (GA)
      − Simulated Annealing (SA)
• Classical or Calculus-based search
Uses a deterministic approach to find the best solutions of an optimization problem :
− the solutions satisfy a set of necessary and sufficient conditions of the optimization problem;
− the techniques are subdivided into direct and indirect methods.
◊ Direct or Numerical methods :
− examples : Newton or Fibonacci;
− try to find extremes by "hopping" around the search space and assessing the gradient at the new point, which guides the search;
− apply the concept of "hill climbing", finding the best local point by climbing the steepest permissible gradient;
− can be used only on a restricted set of "well behaved" functions.
◊ Indirect methods :
− search for local extremes by solving the (usually non-linear) set of equations that results from setting the gradient of the objective function to zero;
− search for possible solutions (function peaks) by restricting themselves to points with zero slope in all directions.
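The "hill climbing" idea used by direct methods can be sketched as below. The single-peak objective, step size, and iteration count are illustrative assumptions, not from the notes:

```python
import random

def f(x):
    # Illustrative objective with a single peak at x = 2 (not from the notes).
    return -(x - 2.0) ** 2

def hill_climb(x, step=0.1, iterations=1000):
    """Probe a random neighbour each iteration; move only if it is uphill."""
    for _ in range(iterations):
        neighbour = x + random.uniform(-step, step)
        if f(neighbour) > f(x):   # accept only improvements
            x = neighbour
    return x

random.seed(0)
best = hill_climb(x=-5.0)
print(round(best, 2))  # climbs toward the peak at x = 2
```

Because only improving moves are accepted, the method is fast on "well behaved" unimodal functions but gets trapped at the nearest local peak on multi-modal ones, which is exactly the weakness the stochastic methods later address.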
• Enumerative Search
Here the search goes through every point (one point at a time) related to the function's domain space.
− At each point, all possible solutions are generated and tested to find the optimum solution.
− It is easy to implement but usually requires significant computation; these techniques are therefore not suitable for applications with large domain spaces.
In the field of artificial intelligence, the enumerative methods are subdivided into two categories : Uninformed and Informed methods.
◊ Uninformed or blind methods :
− example : the Mini-Max algorithm;
− search all points in the space in a predefined order;
− used in game playing.
◊ Informed methods :
− examples : Alpha-Beta and A*;
− perform a more sophisticated search;
− use domain-specific knowledge in the form of a cost function or heuristic to reduce the cost of the search.
The taxonomy of enumerative search in the AI domain is shown below.
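A minimal sketch of enumerative search over a small discrete domain, assuming an illustrative two-variable function (not from the notes):

```python
# Enumerative search: visit every point of the domain, one point at a
# time, test it, and keep the best. Trivially correct, but only
# feasible for small domain spaces.

def g(x, y):
    # Illustrative function to maximize (not from the notes).
    return 10 - abs(x - 1) - abs(y - 4)

# 11 x 11 = 121 candidate points; every single one is tested.
domain = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]

best_point = max(domain, key=lambda p: g(*p))
print(best_point, g(*best_point))  # (1, 4) 10
```

Doubling the range of each variable quadruples the work, which illustrates why the notes rule these techniques out for large domain spaces.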
[Ref : Enumerative search, above]
The enumerative search techniques follow the traditional search and control strategies in the domain of Artificial Intelligence :
− the search methods explore the search space "intelligently", i.e. they evaluate possibilities without investigating every single possibility;
− there are many control structures for search; depth-first search and breadth-first search are two common search strategies;
− the taxonomy of search algorithms in the AI domain is given below.
Fig. Enumerative Search Algorithms in AI Domain
  Enumerative Search G (State, Operator, Cost)
  − Uninformed Search (no h(n) present)
      − Depth-First Search (LIFO stack)
      − Breadth-First Search (FIFO queue)
      − Cost-First Search (priority queue on g(n))
      − Depth Limited Search (impose fixed depth limit)
      − Iterative Deepening DFS (gradually increase fixed depth limit)
  − Informed Search (uses heuristics h(n))
      − Generate-and-test
      − Hill Climbing
      − Problem Reduction
      − Constraint satisfaction
      − Means-end analysis
      − Best first search (priority queue on h(n))
      − A* Search (priority queue on f(n) = h(n) + g(n))
      − AO* Search
• Stochastic Search
Here the search methods include heuristics and an element of
randomness (non-determinism) in traversing the search space. Unlike
the previous two search methodologies :
− the stochastic search algorithm moves from one point to another in
the search space in a non-deterministic manner, guided by heuristics.
− the stochastic search techniques are usually called Guided random
search techniques.
The stochastic search techniques are grouped into two major subclasses :
− Simulated annealing and
− Evolutionary algorithms.
Both these classes follow the principles of evolutionary processes.
◊ Simulated annealing (SA)
− uses a thermodynamic evolution process to search for
minimum-energy states.
◊ Evolutionary algorithms (EAs)
− use natural selection principles.
− the search evolves over generations, improving the
features of potential solutions by means of biologically
inspired operations.
− Genetic Algorithms (GAs) are a good example of this
technique.
The next slide shows the taxonomy of evolutionary search algorithms.
It also includes the other two classes, the Enumerative search and Calculus-
based techniques, for a better understanding of non-linear search
methodologies in their entirety.
SC – GA - Introduction
• Taxonomy of Search Optimization
Fig. below shows different types of Search Optimization algorithms.
Fig. Taxonomy of Search Optimization techniques
We are interested in Evolutionary search algorithms.
Our main concern is to understand the evolutionary algorithms :
- how to describe the process of search,
- how to implement and carry out search,
- what are the elements required to carry out search, and
- the different search strategies
The Evolutionary Algorithms include :
- Genetic Algorithms and
- Genetic Programming
[Figure: Search Optimization splits into three branches —
Calculus-Based Techniques : Indirect method; Direct method (Newton, Fibonacci).
Enumerative Techniques : Uninformed Search; Informed Search.
Guided Random Search techniques : Simulated Annealing; Hill Climbing;
Tabu Search; Evolutionary Algorithms (Genetic Algorithms, Genetic
Programming).]
SC – GA - Introduction
1.3 Evolutionary Algorithm (EAs)
Evolutionary Algorithms (EAs) are a subset of Evolutionary Computation (EC),
which is a subfield of Artificial Intelligence (AI).
Evolutionary Computation (EC) is a general term for several
computational techniques. Evolutionary Computation represents a powerful
search and optimization paradigm influenced by the biological mechanisms of
evolution : natural selection and genetics.
Evolutionary Algorithms (EAs) refer to evolutionary computational
models using randomness and genetics-inspired operations. EAs
involve selection, recombination, random variation and competition among the
individuals in a population of adequately represented potential solutions.
The candidate solutions are referred to as chromosomes or individuals.
Genetic Algorithms (GAs) represent the main paradigm of Evolutionary
Computation.
- GAs simulate natural evolution, mimicking the processes nature uses :
Selection, Crossover, Mutation and Accepting.
- GAs simulate the survival of the fittest among individuals over
consecutive generations to solve a problem.
Development History
EC = GP + ES + EP + GA
− Genetic Programming (GP) : Koza, 1992
− Evolution Strategies (ES) : Rechenberg, 1960–1965
− Evolutionary Programming (EP) : Fogel, 1962
− Genetic Algorithms (GA) : Holland, 1970
SC – GA - Introduction
1.4 Genetic Algorithms (GAs) - Basic Concepts
Genetic algorithms (GAs) are the main paradigm of evolutionary
computing. GAs are inspired by Darwin's theory about evolution – the
"survival of the fittest". In nature, competition among individuals for
scanty resources results in the fittest individuals dominating over the
weaker ones.
− GAs are ways of solving problems by mimicking the processes nature
uses, i.e., Selection, Crossover, Mutation and Accepting, to evolve a
solution to a problem.
− GAs are adaptive heuristic search algorithms based on the evolutionary ideas
of natural selection and genetics.
− GAs are an intelligent exploitation of random search used in optimization
problems.
− GAs, although randomized, exploit historical information to direct the
search into the region of better performance within the search space.
The biological background (basic genetics), the scheme of evolutionary
processes, the working principles and the steps involved in GAs are
illustrated in the next few slides.
SC – GA - Introduction
• Biological Background – Basic Genetics
‡ Every organism has a set of rules, describing how that organism
is built. All living organisms consist of cells.
‡ In each cell there is the same set of chromosomes. Chromosomes are
strings of DNA and serve as a model for the whole organism.
‡ A chromosome consists of genes, blocks of DNA.
‡ Each gene encodes a particular protein that represents a trait
(feature), e.g., color of eyes.
‡ Possible settings for a trait (e.g. blue, brown) are called alleles.
‡ Each gene has its own position in the chromosome called its locus.
‡ Complete set of genetic material (all chromosomes) is called a
genome.
‡ Particular set of genes in a genome is called genotype.
‡ The physical expression of the genotype (the organism itself after
birth) is called the phenotype, its physical and mental characteristics,
such as eye color, intelligence etc.
‡ When two organisms mate they share their genes; the resultant
offspring may end up having half the genes from one parent and half
from the other. This process is called recombination (cross over) .
‡ The newly created offspring can then be mutated. Mutation means
that elements of the DNA are slightly changed. These changes are mainly
caused by errors in copying genes from the parents.
‡ The fitness of an organism is measured by the success of the organism
in its life (survival).
SC – GA - Introduction
[ continued from previous slide - Biological background ]
Below shown, the general scheme of evolutionary process in genetic
along with pseudo-code.
Fig. General Scheme of Evolutionary process
Pseudo-Code
BEGIN
INITIALISE population with random candidate solutions;
EVALUATE each candidate;
REPEAT UNTIL (termination condition) is satisfied DO
1. SELECT parents;
2. RECOMBINE pairs of parents;
3. MUTATE the resulting offspring;
4. SELECT individuals for the next generation;
END.
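The pseudo-code above can be sketched directly in Python. This is a minimal illustration, not from the slides: individuals are bit strings, fitness is the simple OneMax count of 1-bits, and the population size, rates and generation count are arbitrary choices.

```python
import random

random.seed(1)
GENES, POP, GENERATIONS = 16, 20, 60

def fitness(ind):                 # OneMax: count the 1-bits
    return sum(ind)

# INITIALISE population with random candidate solutions
pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
init_best = max(map(fitness, pop))

for _ in range(GENERATIONS):      # REPEAT UNTIL (termination condition)
    # 1. SELECT parents (binary tournament)
    parents = [max(random.sample(pop, 2), key=fitness) for _ in range(POP)]
    # 2. RECOMBINE pairs of parents (one-point crossover)
    offspring = []
    for p1, p2 in zip(parents[::2], parents[1::2]):
        cut = random.randrange(1, GENES)
        offspring += [p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]]
    # 3. MUTATE the resulting offspring (bit-flip with low probability)
    for ind in offspring:
        for i in range(GENES):
            if random.random() < 0.01:
                ind[i] = 1 - ind[i]
    # 4. SELECT individuals for the next generation (elitist truncation)
    pop = sorted(pop + offspring, key=fitness, reverse=True)[:POP]

best = max(pop, key=fitness)
```

Because the survivor selection keeps the fittest of parents plus offspring, the best fitness in the population can never decrease across generations.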
[Figure: the evolutionary cycle — Initialization → Population → parent
selection → Parents → Recombination and Mutation → Offspring → Survivor
selection → Population, repeated until Termination.]
SC – GA - Introduction
• Search Space
In solving problems, some solution will be the best among others.
The space of all feasible solutions (among which the desired solution
resides) is called search space (also called state space).
− Each point in the search space represents one possible solution.
− Each possible solution can be "marked" by its value (or fitness) for
the problem.
− The GA looks for the best solution among a number of possible
solutions, each represented by one point in the search space.
− Looking for a solution is then equivalent to looking for some extreme value
(minimum or maximum) in the search space.
− At times the search space may be well defined, but usually only a few
points in the search space are known.
In using GA, the process of finding solutions generates other points
(possible solutions) as evolution proceeds.
SC – GA - Introduction
• Working Principles
Before getting into GAs, it is necessary to explain a few terms.
− Chromosome : a set of genes; a chromosome contains the solution in
form of genes.
− Gene : a part of a chromosome; a gene contains a part of the solution.
The genes together determine the solution. E.g., 16743 is a chromosome
and 1, 6, 7, 4 and 3 are its genes.
− Individual : same as chromosome.
− Population: number of individuals present with same length of
chromosome.
− Fitness : the value assigned to an individual based on how far or
close the individual is from the solution; the greater the fitness value, the
better the solution it contains.
− Fitness function : a function that assigns fitness value to the individual.
It is problem specific.
− Breeding : taking two fit individuals and intermingling their
chromosomes to create two new individuals.
− Mutation : changing a random gene in an individual.
− Selection : selecting individuals for creating the next generation.
Working principles :
Genetic algorithm begins with a set of solutions (represented by
chromosomes) called the population.
− Solutions from one population are taken and used to form a new
population. This is motivated by the possibility that the new population
will be better than the old one.
− Solutions are selected according to their fitness to form new solutions
(offspring); the more suitable they are, the more chances they have to
reproduce.
− This is repeated until some condition (e.g. number of populations or
improvement of the best solution) is satisfied.
SC – GA - Introduction
• Outline of the Basic Genetic Algorithm
1. [Start] Generate random population of n chromosomes (i.e. suitable
solutions for the problem).
2. [Fitness] Evaluate the fitness f(x) of each chromosome x in the
population.
3. [New population] Create a new population by repeating following
steps until the new population is complete.
(a) [Selection] Select two parent chromosomes from the population
according to their fitness (the better the fitness, the bigger the
chance to be selected).
(b) [Crossover] With a crossover probability, cross over the parents
to form new offspring (children). If no crossover is performed, the
offspring are exact copies of the parents.
(c) [Mutation] With a mutation probability, mutate the new offspring at
each locus (position in chromosome).
(d) [Accepting] Place the new offspring in the new population.
4. [Replace] Use the newly generated population for a further run of the
algorithm.
5. [Test] If the end condition is satisfied, stop, and return the best
solution in the current population.
6. [Loop] Go to step 2.
Note : The genetic algorithm's performance is largely influenced by two
operators called crossover and mutation. These two operators are the
most important parts of GA.
SC – GA - Introduction
• Flow chart for Genetic Programming
Fig. Genetic Algorithm – program flow chart
[Flow chart: Start → Genesis: seed population (generate N individuals) →
Scoring: assign fitness to each individual → loop { Natural Selection:
select two individuals (Parent 1, Parent 2) → Reproduction/Recombination:
use the crossover operator to produce offspring → Scoring: assign fitness
to the offspring → Natural Selection: select one offspring → Mutation:
apply the mutation operator to produce a mutated offspring → Scoring:
assign fitness to the offspring → Survival of the Fittest: apply the
replacement operator to incorporate the new individual into the
population } with decision points "Crossover finished?", "Mutation
finished?" and "Terminate?" → Finish.]
SC – GA - Encoding
2. Encoding
Before a genetic algorithm can be put to work on any problem, a method is
needed to encode potential solutions to that problem in a form that a
computer can process.
− One common approach is to encode solutions as binary strings: sequences
of 1's and 0's, where the digit at each position represents the value of
some aspect of the solution.
Example :
A Gene represents some data (eye color, hair color, sight, etc.).
A chromosome is an array of genes. In binary form
a Gene looks like : (11100010)
a Chromosome looks like: Gene1 Gene2 Gene3 Gene4
(11000010, 00001110, 00111010, 10100011)
A chromosome should in some way contain information about the solution
it represents; it thus requires encoding. The most popular way of
encoding is a binary string, like :
Chromosome 1 : 1101100100110110
Chromosome 2 : 1101111000011110
Each bit in the string represents some characteristic of the solution.
− There are many other ways of encoding, e.g., encoding values as integers or
real numbers, or as some permutation, and so on.
− The suitability of an encoding method depends on the problem to be worked on.
SC – GA - Encoding
• Binary Encoding
Binary encoding is the most common way to represent information.
In genetic algorithms, it was used first because of its relative simplicity.
− In binary encoding, every chromosome is a string of bits : 0 or 1, like
Chromosome 1: 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 0 1 1 1 0 0 1 0 1
Chromosome 2: 1 1 1 1 1 1 1 0 0 0 0 0 1 1 0 0 0 0 0 1 1 1 1 1
− Binary encoding gives many possible chromosomes even with a small
number of alleles, i.e. possible settings for a trait (feature).
− This encoding is often not natural for many problems and sometimes
corrections must be made after crossover and/or mutation.
Example 1:
A one-variable function over the integers 0 to 15; each numeric value is
represented by a 4-bit binary string.
Numeric value   4-bit string
0               0 0 0 0
1               0 0 0 1
2               0 0 1 0
3               0 0 1 1
4               0 1 0 0
5               0 1 0 1
6               0 1 1 0
7               0 1 1 1
8               1 0 0 0
9               1 0 0 1
10              1 0 1 0
11              1 0 1 1
12              1 1 0 0
13              1 1 0 1
14              1 1 1 0
15              1 1 1 1
SC – GA - Encoding
[ continued binary encoding ]
Example 2 :
A two-variable function represented by a 4-bit string for each variable.
Let the two variables X1, X2 be coded as (1011 0110).
Every variable has both lower and upper limits, XLi ≤ Xi ≤ XUi.
Because a 4-bit string can represent integers from 0 to 15,
the strings (0000 0000) and (1111 1111) represent the points
(XL1, XL2) and (XU1, XU2) for X1, X2 respectively.
Thus, an n-bit string can represent integers from 0 to 2^n − 1,
i.e. 2^n integers.

Let Xi be coded as a substring Si of length ni. Then the decoded value of
the binary substring Si is
    decoded value = Σ (k = 0 to ni − 1) 2^k sk
where sk can be 0 or 1 and the string S is represented as
s(ni−1) . . . s3 s2 s1 s0.

Example : for the 4-bit string Si = 1 0 1 0,
    decoded value = 2^3 x 1 + 2^2 x 0 + 2^1 x 1 + 2^0 x 0
                  = 8 + 0 + 2 + 0 = 10

Example : Decoding value
Consider a 4-bit string (0111),
− the decoded value is equal to
    2^3 x 0 + 2^2 x 1 + 2^1 x 1 + 2^0 x 1 = 7
− Knowing XLi and XUi corresponding to (0000) and (1111),
the equivalent value for any 4-bit string can be obtained as
    Xi = XLi + [ (XUi − XLi) / (2^ni − 1) ] x (decoded value of string)
− For example, for a variable Xi, let XLi = 2 and XUi = 17; find what value
the 4-bit string Xi = (1010) would represent. First get the decoded value
for Si = 1010 :
    2^3 x 1 + 2^2 x 0 + 2^1 x 1 + 2^0 x 0 = 10, then
    Xi = 2 + [ (17 − 2) / (2^4 − 1) ] x 10 = 12
The accuracy obtained with a 4-bit code is 1/16 of the search space.
Increasing the string length by 1 bit increases the accuracy to 1/32.
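The decode-and-scale rule above can be sketched as a small helper. The limits XL = 2, XU = 17 and the string 1010 are taken from the worked example in the text.

```python
def decode(bits, x_low, x_high):
    """Map an n-bit substring to a value in [x_low, x_high] using
    Xi = XL + (XU - XL) / (2**n - 1) * (decoded integer value)."""
    n = len(bits)
    value = int(bits, 2)            # decoded integer, 0 .. 2**n - 1
    return x_low + (x_high - x_low) / (2**n - 1) * value

# Worked example from the text: XL = 2, XU = 17, string 1010 -> 12
x = decode("1010", 2, 17)
```

The end points map to the limits exactly: `decode("0000", 2, 17)` gives 2.0 and `decode("1111", 2, 17)` gives 17.0.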
SC – GA - Encoding
• Value Encoding
The Value encoding can be used in problems where values such as real
numbers are used. Use of binary encoding for this type of problems
would be difficult.
1. In value encoding, every chromosome is a sequence of some values.
2. The Values can be anything connected to the problem, such as :
real numbers, characters or objects.
Examples :
Chromosome A 1.2324 5.3243 0.4556 2.3293 2.4545
Chromosome B ABDJEIFJDHDIERJFDLDFLFEGT
Chromosome C (back), (back), (right), (forward), (left)
3. Value encoding often requires developing new, problem-specific types of
crossover and mutation operators.
SC – GA - Encoding
• Permutation Encoding
Permutation encoding can be used in ordering problems, such as traveling
salesman problem or task ordering problem.
1. In permutation encoding, every chromosome is a string of numbers
that represent a position in a sequence.
Chromosome A 1 5 3 2 6 4 7 9 8
Chromosome B 8 5 6 7 2 3 1 4 9
2. Permutation encoding is useful for ordering problems. For some
problems, crossover and mutation corrections must be made to
leave the chromosome consistent.
Examples :
1. The Traveling Salesman problem:
There are cities with given distances between them. The traveling
salesman has to visit all of them, but he does not want to travel more
than necessary. Find a sequence of cities with minimal travelled
distance. Here, the encoded chromosomes describe the order of cities the
salesman visits.
2. The Eight Queens problem :
There are eight queens. Find a way to place them on a chess board
so that no two queens attack each other. Here, the encoding
describes the position of a queen in each row.
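For the TSP case, a permutation chromosome is evaluated by summing the distances along the tour it encodes. The 4-city distance matrix below is a made-up illustration.

```python
def tour_length(perm, dist):
    """Total distance of a closed tour visiting cities in 'perm' order
    (the tour returns from the last city to the first)."""
    n = len(perm)
    return sum(dist[perm[i]][perm[(i + 1) % n]] for i in range(n))

# Hypothetical symmetric distances between 4 cities (indices 0..3).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]

length = tour_length([0, 1, 3, 2], dist)   # tour 0 -> 1 -> 3 -> 2 -> 0
```

A GA over this encoding would minimise `tour_length`, using permutation-preserving crossover and mutation as the slide notes.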
SC – GA - Encoding
• Tree Encoding
Tree encoding is used mainly for evolving programs or expressions.
For genetic programming :
− In tree encoding, every chromosome is a tree of some objects, such as
functions or commands in programming language.
− Tree encoding is useful for evolving programs or any other structures
that can be encoded in trees.
− The crossover and mutation can be done in a relatively easy way.
Example :
Chromosome A Chromosome B
( + x ( / 5 y ) ) ( do until step wall )
Fig. Example of Chromosomes with tree encoding
Note : Tree encoding is good for evolving programs. The programming
language LISP is often used. Programs in LISP can be easily parsed as a
tree, so crossover and mutation are relatively easy.
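A tree chromosome like Chromosome A above can be represented with nested tuples and evaluated recursively. This is an illustrative sketch; the variable bindings x = 3, y = 5 are arbitrary.

```python
import operator

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def evaluate(tree, env):
    """Recursively evaluate an expression tree (op, left, right);
    leaves are either variable names or numeric constants."""
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, env), evaluate(right, env))
    if isinstance(tree, str):       # variable leaf
        return env[tree]
    return tree                     # constant leaf

chromosome_a = ('+', 'x', ('/', 5, 'y'))     # the tree  ( + x ( / 5 y ) )
result = evaluate(chromosome_a, {'x': 3, 'y': 5})   # 3 + 5/5
```

Subtree crossover then amounts to swapping tuples between two such trees, which is why this encoding suits genetic programming.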
SC – GA - Operators
3. Operators of Genetic Algorithm
Genetic operators used in genetic algorithms maintain genetic diversity.
Genetic diversity or variation is a necessity for the process of evolution.
Genetic operators are analogous to those which occur in the natural world:
− Reproduction (or Selection) ;
− Crossover (or Recombination); and
− Mutation.
In addition to these operators, there are some parameters of GA.
One important parameter is Population size.
− Population size says how many chromosomes are in a population (in one
generation).
− If there are only a few chromosomes, then the GA has few opportunities
to perform crossover and only a small part of the search space is explored.
− If there are too many chromosomes, the GA slows down.
− Research shows that after some limit, it is not useful to increase population
size, because it does not help in solving the problem faster. The population
size depends on the type of encoding and the problem.
SC – GA - Operators
3.1 Reproduction, or Selection
Reproduction is usually the first operator applied on population. From
the population, the chromosomes are selected to be parents to crossover
and produce offspring.
The problem is how to select these chromosomes ?
According to Darwin's evolution theory "survival of the fittest" – the best
ones should survive and create new offspring.
− The Reproduction operators are also called Selection operators.
− Selection means extracting a subset of genes from an existing population,
according to some definition of quality. Every gene has a meaning, so
one can derive from the gene a kind of quality measurement called a
fitness function. Following this quality (fitness value), selection can be
performed.
− Fitness function quantifies the optimality of a solution (chromosome) so
that a particular solution may be ranked against all the other solutions.
The function depicts the closeness of a given ‘solution’ to the desired
result.
Many reproduction operators exist, and they all essentially do the same thing:
they pick from the current population the strings of above-average fitness and
insert multiple copies of them in the mating pool in a probabilistic manner.
The most commonly used methods of selecting chromosomes for parents
to crossover are :
− Roulette wheel selection, − Rank selection
− Boltzmann selection, − Steady state selection.
− Tournament selection,
The Roulette wheel and Boltzmann selections methods are illustrated next.
SC – GA - Operators
• Example of Selection
The problem for the Evolutionary Algorithm is to maximize the function
f(x) = x^2 with x in the integer interval [0, 31], i.e., x = 0, 1, . . . , 30, 31.
1. The first step is encoding of chromosomes; use binary representation
for integers; 5-bits are used to represent integers up to 31.
2. Assume that the population size is 4.
3. Generate initial population at random. They are chromosomes or
genotypes; e.g., 01101, 11000, 01000, 10011.
4. Calculate the fitness value for each individual.
(a) Decode the individual into an integer (called a phenotype),
01101 → 13; 11000 → 24; 01000 → 8; 10011 → 19;
(b) Evaluate the fitness according to f(x) = x^2 :
13 → 169; 24 → 576; 8 → 64; 19 → 361.
5. Select parents (two individuals) for crossover based on their fitness.
Out of the many methods for selecting the best chromosomes, if
roulette-wheel selection is used, then the probability of the i-th string
in the population being selected is
    pi = Fi / [ Σ (j = 1 to n) Fj ] , where
    Fi is the fitness of string i in the population, expressed as f(x),
    pi is the probability of string i being selected,
    n is the number of individuals in the population (population size), n = 4,
    n x pi is the expected count.

String No  Initial Population  X value  Fitness Fi = f(x) = x^2  pi     Expected count n x pi
1          0 1 1 0 1           13       169                      0.14   0.58
2          1 1 0 0 0           24       576                      0.49   1.97
3          0 1 0 0 0           8        64                       0.06   0.22
4          1 0 0 1 1           19       361                      0.31   1.23
Sum                                     1170                     1.00   4.00
Average                                 293                      0.25   1.00
Max                                     576                      0.49   1.97

String no 2 has the maximum chance of selection.
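The selection probabilities and expected counts in the table can be computed directly; the fitness values 169, 576, 64, 361 are the ones from the x^2 example above.

```python
def selection_probabilities(fitnesses):
    """Roulette-wheel probabilities pi = Fi / sum(Fj)."""
    total = sum(fitnesses)
    return [f / total for f in fitnesses]

# Fitness values from the example: 13^2, 24^2, 8^2, 19^2
fit = [169, 576, 64, 361]
p = selection_probabilities(fit)
expected_count = [len(fit) * pi for pi in p]    # n * pi, with n = 4
```

String 2 (fitness 576) gets the largest probability, about 0.49, and an expected count of roughly 1.97 copies in the mating pool, matching the table.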
SC – GA - Operators
• Roulette wheel selection (Fitness-Proportionate Selection)
Roulette-wheel selection, also known as Fitness Proportionate Selection, is
a genetic operator, used for selecting potentially useful solutions for
recombination.
In fitness-proportionate selection :
− the chance of an individual being selected is proportional to its
fitness, greater or less than its competitors' fitness.
− conceptually, this can be thought of as a game of Roulette.
Fig. Roulette-wheel shows 8 individuals with fitness
[Figure: a roulette wheel whose 8 segments are sized by fitness —
1 : 5%, 2 : 9%, 3 : 13%, 4 : 17%, 5 : 20%, 6 : 8%, 7 : 8%, 8 : 20%.]
The Roulette-wheel simulates 8 individuals with fitness values Fi,
marked at its circumference; e.g.,
− the 5th individual has a higher fitness than most others, so the wheel
would choose the 5th individual more often than the other individuals.
− selection is carried out as the wheel is spun n = 8 times, each time
selecting an instance of the string chosen by the wheel pointer.
The probability of the i-th string being selected is
    pi = Fi / [ Σ (j = 1 to n) Fj ] , where
n = number of individuals, called population size; pi = probability of the
i-th string being selected; Fi = fitness of the i-th string in the population.
Because the circumference of the wheel is marked according to a string's
fitness, the Roulette-wheel mechanism is expected to make Fi / F̄ copies of
the i-th string, where F̄ is the average fitness :
    Average fitness F̄ = [ Σ (j = 1 to n) Fj ] / n ;
    Expected count = n (= 8) x pi ;
    Cumulative probability of string 5 = Σ (i = 1 to 5) pi .
SC – GA - Operators
• Boltzmann Selection
Simulated annealing is a method used to minimize or maximize a function.
− This method simulates the process of slow cooling of molten metal to
achieve the minimum function value in a minimization problem.
− The cooling phenomenon is simulated by controlling a temperature-like
parameter introduced with the concept of the Boltzmann probability
distribution.
− A system in thermal equilibrium at a temperature T has its energy
distributed according to the probability
P(E) = exp ( − E / kT ), where k is the Boltzmann constant.
− This expression suggests that a system at a higher temperature has
almost uniform probability at any energy state, but at lower
temperature it has a small probability of being at a higher energy state.
− Thus, by controlling the temperature T and assuming that the search
process follows Boltzmann probability distribution, the convergence of
the algorithm is controlled.
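The temperature effect described above is easy to demonstrate numerically. In this sketch the Boltzmann constant k is folded into T (a common simplification in simulated annealing, not stated on the slide).

```python
import math

def boltzmann(delta_e, temperature):
    """Probability of being at a state delta_e above the current energy,
    P = exp(-dE / T), with Boltzmann's constant folded into T."""
    return math.exp(-delta_e / temperature)

hot = boltzmann(1.0, 100.0)   # high T: higher-energy states nearly as likely
cold = boltzmann(1.0, 0.1)    # low T: higher-energy states very unlikely
```

At high temperature the probability is close to 1 (near-uniform over energy states); at low temperature the same energy gap becomes practically inaccessible, which is exactly how lowering T controls convergence.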
SC – GA - Operators
3.2 Crossover
Crossover is a genetic operator that combines (mates) two chromosomes
(parents) to produce a new chromosome (offspring). The idea behind
crossover is that the new chromosome may be better than both of the
parents if it takes the best characteristics from each of the parents.
Crossover occurs during evolution according to a user-definable crossover
probability. Crossover selects genes from parent chromosomes and
creates a new offspring.
The Crossover operators are of many types.
− one simple way is One-Point crossover.
− the others are Two-Point, Uniform, Arithmetic, and Heuristic crossovers.
The operators are selected based on the way the chromosomes are encoded.
SC – GA - Operators
• One-Point Crossover
One-Point crossover operator randomly selects one crossover point and
then copies everything before this point from the first parent and
everything after the crossover point from the second parent. The
crossover would then look as shown below.
Consider the two parents selected for crossover.
Parent 1 1 1 0 1 1 | 0 0 1 0 0 1 1 0 1 1 0
Parent 2 1 1 0 1 1 | 1 1 0 0 0 0 1 1 1 1 0
Interchanging the parent chromosomes after the crossover point -
The Offspring produced are :
Offspring 1 1 1 0 1 1 | 1 1 0 0 0 0 1 1 1 1 0
Offspring 2 1 1 0 1 1 | 0 0 1 0 0 1 1 0 1 1 0
Note : The symbol, a vertical line, | is the chosen crossover point.
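The operation described above can be sketched in a few lines; the two parent strings and the crossover point (after position 5) are taken from the example.

```python
import random

def one_point_crossover(p1, p2, point=None):
    """Swap everything after a (possibly random) crossover point."""
    if point is None:
        point = random.randrange(1, len(p1))
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

parent1 = "1101100100110110"
parent2 = "1101111000011110"
child1, child2 = one_point_crossover(parent1, parent2, point=5)
```

With the point fixed at 5, `child1` takes the head of parent 1 and the tail of parent 2, and `child2` the reverse, reproducing the offspring shown above.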
SC – GA - Operators
• Two-Point Crossover
Two-Point crossover operator randomly selects two crossover points within
a chromosome then interchanges the two parent chromosomes between
these points to produce two new offspring.
Consider the two parents selected for crossover :
Parent 1 1 1 0 1 1 | 0 0 1 0 0 1 1 | 0 1 1 0
Parent 2 1 1 0 1 1 | 1 1 0 0 0 0 1 | 1 1 1 0
Interchanging the parent chromosomes between the crossover points -
The Offspring produced are :
Offspring 1 1 1 0 1 1 | 1 1 0 0 0 0 1 | 0 1 1 0
Offspring 2 1 1 0 1 1 | 0 0 1 0 0 1 1 | 1 1 1 0
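A sketch of the two-point operator, using the same parents with crossover points after positions 5 and 12:

```python
def two_point_crossover(p1, p2, a, b):
    """Exchange the segment between crossover points a and b."""
    return (p1[:a] + p2[a:b] + p1[b:],
            p2[:a] + p1[a:b] + p2[b:])

parent1 = "1101100100110110"
parent2 = "1101111000011110"
child1, child2 = two_point_crossover(parent1, parent2, 5, 12)
```

Each child keeps its own head and tail and receives the other parent's middle segment.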
SC – GA - Operators
• Uniform Crossover
Uniform crossover operator decides (with some probability – known as the
mixing ratio) which parent will contribute each of the gene values in the
offspring chromosomes. The crossover operator allows the parent
chromosomes to be mixed at the gene level rather than the segment
level (as with one- and two-point crossover).
Consider the two parents selected for crossover.
Parent 1 1 1 0 1 1 0 0 1 0 0 1 1 0 1 1 0
Parent 2 1 1 0 1 1 1 1 0 0 0 0 1 1 1 1 0
If the mixing ratio is approximately 0.5, then about half of the genes in the
offspring will come from parent 1 and the other half from parent 2.
The possible set of offspring after uniform crossover would be:
Offspring 1 1(1) 1(2) 0(2) 1(1) 1(1) 1(2) 1(2) 0(2) 0(1) 0(1) 0(2) 1(1) 1(2) 1(1) 1(1) 0(2)
Offspring 2 1(2) 1(1) 0(1) 1(2) 1(2) 0(1) 0(1) 1(1) 0(2) 0(2) 1(1) 1(2) 0(1) 1(2) 1(2) 0(1)
Note: The subscripts indicate which parent the gene came from.
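A sketch of uniform crossover: each gene position independently goes to one child or the other, so across the two children every position holds exactly the two parent genes.

```python
import random

def uniform_crossover(p1, p2, mixing_ratio=0.5):
    """Choose each gene independently from one of the two parents;
    the second child receives the complementary choice."""
    c1, c2 = [], []
    for g1, g2 in zip(p1, p2):
        if random.random() < mixing_ratio:
            c1.append(g1); c2.append(g2)
        else:
            c1.append(g2); c2.append(g1)
    return "".join(c1), "".join(c2)

random.seed(0)
p1, p2 = "1101100100110110", "1101111000011110"
child1, child2 = uniform_crossover(p1, p2)
```

Unlike one- and two-point crossover, the parents are mixed gene by gene rather than in contiguous segments.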
SC – GA - Operators
• Arithmetic
Arithmetic crossover operator linearly combines two parent chromosome
vectors to produce two new offspring according to the equations:
Offspring1 = a * Parent1 + (1- a) * Parent2
Offspring2 = (1 – a) * Parent1 + a * Parent2
where a is a random weighting factor chosen before each crossover
operation.
Consider two parents (each of 4 float genes) selected for crossover:
Parent 1 (0.3) (1.4) (0.2) (7.4)
Parent 2 (0.5) (4.5) (0.1) (5.6)
Applying the above two equations with the weighting factor a = 0.7,
we get the two resulting offspring.
The possible set of offspring after arithmetic crossover would be:
Offspring 1 (0.36) (2.33) (0.17) (6.86)
Offspring 2 (0.44) (3.57) (0.13) (6.14)
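The blend equations translate directly into code; the parent vectors and a = 0.7 are the values from the example.

```python
def arithmetic_crossover(p1, p2, a):
    """Linear blend of two real-valued parents:
    c1 = a*p1 + (1-a)*p2,  c2 = (1-a)*p1 + a*p2."""
    c1 = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
    c2 = [(1 - a) * x + a * y for x, y in zip(p1, p2)]
    return c1, c2

parent1 = [0.3, 1.4, 0.2, 7.4]
parent2 = [0.5, 4.5, 0.1, 5.6]
child1, child2 = arithmetic_crossover(parent1, parent2, a=0.7)
```

With a = 0.7 this gives offspring 1 ≈ (0.36, 2.33, 0.17, 6.86) and offspring 2 ≈ (0.44, 3.57, 0.13, 6.14); both children lie on the line segment between the parents.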
SC – GA - Operators
• Heuristic
Heuristic crossover operator uses the fitness values of the two parent
chromosomes to determine the direction of the search.
The offspring are created according to the equations:
Offspring1 = BestParent + r * (BestParent − WorstParent)
Offspring2 = BestParent
where r is a random number between 0 and 1.
It is possible that offspring1 will not be feasible. It can happen if r is
chosen such that one or more of its genes fall outside of the allowable
upper or lower bounds. For this reason, heuristic crossover has a user
defined parameter n for the number of times to try and find an r
that results in a feasible chromosome. If a feasible chromosome is not
produced after n tries, the worst parent is returned as offspring1.
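The retry logic can be sketched as follows; the parent vectors and the bounds (0, 10) are illustrative choices, and which parent is "best" is assumed to be decided by fitness beforehand.

```python
import random

def heuristic_crossover(best, worst, bounds, tries=10):
    """Offspring1 = best + r*(best - worst), retried until every gene is
    within bounds; falls back to the worst parent after 'tries' failures.
    Offspring2 is always a copy of the best parent."""
    lo, hi = bounds
    for _ in range(tries):
        r = random.random()
        child = [b + r * (b - w) for b, w in zip(best, worst)]
        if all(lo <= g <= hi for g in child):
            return child, list(best)
    return list(worst), list(best)

random.seed(3)
best_parent = [0.5, 4.5, 0.1, 5.6]
worst_parent = [0.3, 1.4, 0.2, 7.4]
child1, child2 = heuristic_crossover(best_parent, worst_parent,
                                     bounds=(0.0, 10.0))
```

Moving from the worst parent past the best one biases the search in the direction of improving fitness, which is what distinguishes this operator from the purely blind crossovers above.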
SC – GA - Operators
3.3 Mutation
After a crossover is performed, mutation takes place.
Mutation is a genetic operator used to maintain genetic diversity from
one generation of a population of chromosomes to the next.
Mutation occurs during evolution according to a user-definable mutation
probability, usually set to a fairly low value; 0.01 is a good first choice.
Mutation alters one or more gene values in a chromosome from its initial
state. This can result in entirely new gene values being added to the
gene pool. With the new gene values, the genetic algorithm may be able
to arrive at a better solution than was previously possible.
Mutation is an important part of the genetic search; it helps to prevent
the population from stagnating at any local optimum. Mutation is intended
to prevent the search from falling into a local optimum of the state space.
The Mutation operators are of many types.
− one simple way is Flip Bit.
− the others are Boundary, Non-Uniform, Uniform, and Gaussian.
The operators are selected based on the way the chromosomes are
encoded.
SC – GA - Operators
• Flip Bit
The mutation operator simply inverts the value of the chosen gene.
i.e. 0 goes to 1 and 1 goes to 0.
This mutation operator can only be used for binary genes.
Consider the two original off-springs selected for mutation.
Original offspring 1 1 1 0 1 1 1 1 0 0 0 0 1 1 1 1 0
Original offspring 2 1 1 0 1 1 0 0 1 0 0 1 1 0 1 1 0
Invert the value of the chosen gene as 0 to 1 and 1 to 0
The Mutated Off-spring produced are :
Mutated offspring 1 1 1 0 0 1 1 1 0 0 0 0 1 1 1 1 0
Mutated offspring 2 1 1 0 1 1 0 1 1 0 0 1 1 0 1 0 0
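The flip-bit operator is a one-liner; applying it gene by gene with a per-gene probability is the usual practice, though the probability value here is an arbitrary illustration.

```python
import random

def flip_bit(chromosome, p_mutation=0.01):
    """Invert each binary gene independently with probability p_mutation."""
    return "".join(str(1 - int(b)) if random.random() < p_mutation else b
                   for b in chromosome)

random.seed(0)
mutated = flip_bit("1101111000011110", p_mutation=0.1)
```

Setting `p_mutation=1.0` flips every bit, so `flip_bit("1010", p_mutation=1.0)` returns `"0101"`.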
SC – GA - Operators
• Boundary
The mutation operator replaces the value of the chosen gene with either
the upper or lower bound for that gene (chosen randomly).
This mutation operator can only be used for integer and float genes.
• Non-Uniform
The mutation operator increases the probability that the amount of the
mutation will be close to 0 as the generation number increases. This
mutation operator keeps the population from stagnating in the early
stages of the evolution, then allows the genetic algorithm to fine-tune
the solution in the later stages of evolution.
This mutation operator can only be used for integer and float genes.
• Uniform
The mutation operator replaces the value of the chosen gene with a
uniform random value selected between the user-specified upper and
lower bounds for that gene.
This mutation operator can only be used for integer and float genes.
• Gaussian
The mutation operator adds a unit Gaussian distributed random value to
the chosen gene. The new gene value is clipped if it falls outside of the
user-specified lower or upper bounds for that gene.
This mutation operator can only be used for integer and float genes.
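The Gaussian operator described above can be sketched for a float chromosome; the bounds, mutation probability and sigma below are illustrative values, not from the slides.

```python
import random

def gaussian_mutation(chromosome, lower, upper, p_mutation=0.1, sigma=1.0):
    """Add N(0, sigma) noise to each gene with probability p_mutation,
    clipping the result to the user-specified lower/upper bounds."""
    out = []
    for gene in chromosome:
        if random.random() < p_mutation:
            gene += random.gauss(0.0, sigma)
        out.append(min(max(gene, lower), upper))    # clip to bounds
    return out

random.seed(7)
mutated = gaussian_mutation([1.2324, 5.3243, 0.4556, 2.3293], 0.0, 6.0)
```

The clipping step is what keeps mutated genes inside the feasible range; the uniform operator differs only in drawing the replacement value uniformly between the bounds instead of perturbing the old one.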
44. SC – GA - Examples
4. Basic Genetic Algorithm :
Examples to demonstrate and explain : Random population, Fitness, Selection,
Crossover, Mutation, and Accepting.
• Example 1 :
Maximize the function f(x) = x² over the range of integers from 0 . . . 31.
Note : This function could be solved by a variety of traditional methods,
such as a hill-climbing algorithm which uses the derivative.
One way is to :
− Start from any integer x in the domain of f
− Evaluate the derivative f ' at this point x
− Observing that the derivative is +ve, pick a new x which is at a small
distance in the +ve direction from the current x
− Repeat until x = 31
Let us see how a genetic algorithm would approach this problem.
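For comparison, the derivative-driven search described above can be sketched as a simple integer hill climb (the function name and the one-step neighborhood are illustrative):

```python
def hill_climb(f, x, lo=0, hi=31):
    # Move one step toward the better neighbor until no neighbor improves f.
    while True:
        neighbors = [n for n in (x - 1, x + 1) if lo <= n <= hi]
        best = max(neighbors, key=f)
        if f(best) <= f(x):
            return x
        x = best

print(hill_climb(lambda x: x * x, 3))  # climbs to 31
```

Since f(x) = x² is monotonically increasing on [0, 31], the climb always ends at x = 31; a GA reaches the same optimum without using the derivative at all.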
45. SC – GA - Examples
[ continued from previous slide ]
Genetic Algorithm approach to the problem - Maximize the function f(x) = x²
1. Devise a means to represent a solution to the problem :
Assume, we represent x with five-digit unsigned binary integers.
2. Devise a heuristic for evaluating the fitness of any particular solution :
The function f(x) is simple, so it is easy to use the f(x) value itself to rate
the fitness of a solution; otherwise we might have considered a simpler
heuristic that would more or less serve the same purpose.
3. Coding - Binary and the string length :
GAs often process binary representations of solutions. This works well
because crossover and mutation can be clearly defined for binary solutions.
A binary string of length 5 can represent 32 numbers (0 to 31).
4. Randomly generate a set of solutions :
Here, we consider a population of four solutions. However, larger populations
are used in real applications to explore a larger part of the search space.
Assume four randomly generated solutions : 01101, 11000, 01000, 10011.
These are chromosomes or genotypes.
5. Evaluate the fitness of each member of the population :
The fitness values for each individual are calculated as follows -
(a) Decode the individual into an integer (called the phenotype),
01101 → 13; 11000 → 24; 01000 → 8; 10011 → 19;
(b) Evaluate the fitness according to f(x) = x² ,
13 → 169; 24 → 576; 8 → 64; 19 → 361.
(c) Expected count = N * Prob i , where Prob i = f(x i ) / Σ f(x) and N is the
number of individuals in the population, called the population size; here N = 4.
Thus the evaluation of the initial population is summarized in the table below.
String No i | Initial Population (chromosome) | X value (phenotype) | Fitness f(x) = x² | Prob i (fraction of total) | Expected count N * Prob i
1 | 0 1 1 0 1 | 13 | 169 | 0.14 | 0.58
2 | 1 1 0 0 0 | 24 | 576 | 0.49 | 1.97
3 | 0 1 0 0 0 | 8 | 64 | 0.06 | 0.22
4 | 1 0 0 1 1 | 19 | 361 | 0.31 | 1.23
Total (sum) | | | 1170 | 1.00 | 4.00
Average | | | 293 | 0.25 | 1.00
Max | | | 576 | 0.49 | 1.97
Thus, the string no 2 has maximum chance of selection.
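Steps (a)-(c) above can be reproduced with a short Python sketch (the function name evaluate is illustrative):

```python
def evaluate(population):
    # Decode 5-bit strings, score with f(x) = x^2, and compute
    # selection probabilities and expected counts.
    xs = [int(s, 2) for s in population]          # phenotypes
    fits = [x * x for x in xs]                    # fitness f(x) = x^2
    total = sum(fits)
    probs = [f / total for f in fits]             # Prob i = f_i / sum f
    expected = [len(population) * p for p in probs]
    return xs, fits, probs, expected

pop = ["01101", "11000", "01000", "10011"]
xs, fits, probs, expected = evaluate(pop)
print(xs)         # [13, 24, 8, 19]
print(fits)       # [169, 576, 64, 361]
print(sum(fits))  # 1170
```

Rounding probs and expected to two places reproduces the 0.14/0.49/0.06/0.31 and 0.58/1.97/0.22/1.23 columns of the table.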
46. SC – GA - Examples
6. Produce a new generation of solutions by picking from the existing
pool of solutions with a preference for solutions which are better
suited than others:
We divide the range into four bins, sized according to the relative fitness of
the solutions which they represent.
Strings | Prob i | Associated Bin
0 1 1 0 1 | 0.14 | 0.00 . . . 0.14
1 1 0 0 0 | 0.49 | 0.14 . . . 0.63
0 1 0 0 0 | 0.06 | 0.63 . . . 0.69
1 0 0 1 1 | 0.31 | 0.69 . . . 1.00
By generating 4 uniform (0, 1) random values and seeing which bin they fall
into we pick the four strings that will form the basis for the next generation.
Random No | Falls into bin | Chosen string
0.08 | 0.00 . . . 0.14 | 0 1 1 0 1
0.24 | 0.14 . . . 0.63 | 1 1 0 0 0
0.52 | 0.14 . . . 0.63 | 1 1 0 0 0
0.87 | 0.69 . . . 1.00 | 1 0 0 1 1
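The binning-and-sampling procedure above can be sketched in Python (roulette_select is an illustrative name; the random values are the slide's four draws):

```python
import bisect

def roulette_select(population, probs, randoms):
    # Build cumulative bin edges, then pick the string whose bin
    # each random number falls into.
    edges, acc = [], 0.0
    for p in probs:
        acc += p
        edges.append(acc)
    return [population[bisect.bisect_left(edges, r)] for r in randoms]

pop = ["01101", "11000", "01000", "10011"]
probs = [0.14, 0.49, 0.06, 0.31]
print(roulette_select(pop, probs, [0.08, 0.24, 0.52, 0.87]))
# ['01101', '11000', '11000', '10011']
```

Note how the fittest string (11000) is drawn twice while the least fit (01000) is dropped, matching the table.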
7. Randomly pair the members of the new generation :
The random number generator decides for us to mate the first two strings
together and the second two strings together.
8. Within each pair, swap parts of the members' solutions to create
offspring which are a mixture of the parents :
For the first pair of strings: 0 1 1 0 1 , 1 1 0 0 0
− We randomly select the crossover point to be after the fourth digit.
Crossing these two strings at that point yields:
0 1 1 0 1 ⇒ 0 1 1 0 |1 ⇒ 0 1 1 0 0
1 1 0 0 0 ⇒ 1 1 0 0 |0 ⇒ 1 1 0 0 1
For the second pair of strings: 1 1 0 0 0 , 1 0 0 1 1
− We randomly select the crossover point to be after the second digit.
Crossing these two strings at that point yields:
1 1 0 0 0 ⇒ 1 1 |0 0 0 ⇒ 1 1 0 1 1
1 0 0 1 1 ⇒ 1 0 |0 1 1 ⇒ 1 0 0 0 0
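Single-point crossover as used in step 8 can be sketched as (the function name is illustrative):

```python
def single_point_crossover(p1, p2, point):
    # Swap the tails of the two parent strings after the crossover point.
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

# First pair, crossover after the fourth digit:
print(single_point_crossover("01101", "11000", 4))  # ('01100', '11001')
# Second pair, crossover after the second digit:
print(single_point_crossover("11000", "10011", 2))  # ('11011', '10000')
```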
47. SC – GA - Examples
9. Randomly mutate a very small fraction of genes in the population :
With a typical (small) per-bit mutation probability, it happens that none of the
bits in our population are mutated.
10. Go back and re-evaluate fitness of the population (new generation) :
This would be the first step in generating a new generation of solutions.
However it is also useful in showing the way that a single iteration of the
genetic algorithm has improved this sample.
String No | New population (chromosome) | X value (phenotype) | Fitness f(x) = x² | Prob i (fraction of total) | Expected count
1 | 0 1 1 0 0 | 12 | 144 | 0.082 | 0.328
2 | 1 1 0 0 1 | 25 | 625 | 0.356 | 1.424
3 | 1 1 0 1 1 | 27 | 729 | 0.415 | 1.660
4 | 1 0 0 0 0 | 16 | 256 | 0.145 | 0.580
Total (sum) | | | 1754 | 1.000 | 4.000
Average | | | 439 | 0.250 | 1.000
Max | | | 729 | 0.415 | 1.660
Observe that :
1. The initial population at step 5 was
0 1 1 0 1 , 1 1 0 0 0 , 0 1 0 0 0 , 1 0 0 1 1
After one cycle, the new population at step 10, which acts as the initial
population for the next generation, is
0 1 1 0 0 , 1 1 0 0 1 , 1 1 0 1 1 , 1 0 0 0 0
2. The total fitness has gone from 1170 to 1754 in a single generation.
3. The algorithm has already come up with the string 11011 (i.e. x = 27) as
a possible solution.
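Steps 1-10 strung together give a complete generation loop. A hedged Python sketch (all names, the consecutive-pairing scheme, and the 0.001 per-bit mutation rate are illustrative assumptions; the epsilon guards against a zero total weight):

```python
import random

def step_generation(pop, f, p_mut=0.001):
    """One GA generation for fixed-length binary strings:
    roulette selection, pairing, single-point crossover, bit-flip mutation."""
    fits = [f(int(s, 2)) for s in pop]
    # Roulette-wheel selection of the mating pool.
    pool = random.choices(pop, weights=[fv + 1e-9 for fv in fits], k=len(pop))
    nxt = []
    # Pair consecutive pool members and cross them at a random point.
    for a, b in zip(pool[::2], pool[1::2]):
        cut = random.randint(1, len(a) - 1)
        nxt += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
    # Flip each bit with a small probability.
    return ["".join("10"[int(c)] if random.random() < p_mut else c for c in s)
            for s in nxt]

random.seed(3)
pop = ["01101", "11000", "01000", "10011"]
for _ in range(10):
    pop = step_generation(pop, lambda x: x * x)
print(pop)
```

Over a few generations the population tends to concentrate near the optimum x = 31, mirroring the single-generation improvement shown above.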
48. SC – GA - Examples
• Example 2 : Two bar pendulum
Two uniform bars are connected by pins at A and B and supported
at A. Let a horizontal force P act at C.
Fig. Two bar pendulum
Given : Force P = 2, lengths of bars ℓ1 = 2, ℓ2 = 2,
bar weights W1 = 2, W2 = 2; angles = Xi .
Find : The equilibrium configuration of the system if
friction at all joints is neglected.
Solution : Since there are two unknowns θ1 and
θ2 , we use a 4-bit binary string for each unknown.
Accuracy = (XU − XL) / (2⁴ − 1) = (90 − 0) / 15 = 6°
Hence, the binary coding and the corresponding angles Xi are given as
Xi = XiL + [(XiU − XiL) / (2⁴ − 1)] · Si
where Si is the decoded value of the i-th chromosome.
e.g. the 6th chromosome binary code (0 1 0 1) has the decoded value
Si = 2³ × 0 + 2² × 1 + 2¹ × 0 + 2⁰ × 1 = 5
and the corresponding angle
Xi = 0 + [(90 − 0) / 15] × 5 = 30°.
The binary coding and the angles are given in the table below.
S. No. | Binary code Si | Angle Xi || S. No. | Binary code Si | Angle Xi
1 | 0 0 0 0 | 0 || 9 | 1 0 0 0 | 48
2 | 0 0 0 1 | 6 || 10 | 1 0 0 1 | 54
3 | 0 0 1 0 | 12 || 11 | 1 0 1 0 | 60
4 | 0 0 1 1 | 18 || 12 | 1 0 1 1 | 66
5 | 0 1 0 0 | 24 || 13 | 1 1 0 0 | 72
6 | 0 1 0 1 | 30 || 14 | 1 1 0 1 | 78
7 | 0 1 1 0 | 36 || 15 | 1 1 1 0 | 84
8 | 0 1 1 1 | 42 || 16 | 1 1 1 1 | 90
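The decoding rule above can be checked with a few lines of Python (decode_angle is an illustrative name):

```python
def decode_angle(bits, x_lo=0.0, x_hi=90.0):
    # Xi = XiL + (XiU - XiL) / (2**len(bits) - 1) * Si
    s = int(bits, 2)  # decoded value Si of the chromosome
    return x_lo + (x_hi - x_lo) / (2 ** len(bits) - 1) * s

print(decode_angle("0101"))  # 30.0  (6th entry in the table)
print(decode_angle("1111"))  # 90.0
```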
Note : The total potential for two bar pendulum is written as
∏ = - P[(ℓ1 sinθ1 + ℓ2 sinθ2 )] - (W1 ℓ1 /2)cosθ1 - W2 [(ℓ2 /2) cosθ2 + ℓ1 cosθ1] (Eq.1)
Substituting the values for P, W1 , W2 , ℓ1 , ℓ2 all as 2 , we get ,
∏ (θ1 , θ2 ) = - 4 sinθ1 - 6 cosθ1 - 4 sinθ2 - 2 cosθ2 = function f (Eq. 2)
θ1 , θ2 lie between 0 and 90, both inclusive, i.e. 0 ≤ θ1 , θ2 ≤ 90 (Eq. 3)
The equilibrium configuration is the one which makes ∏ a minimum.
Since the objective function is –ve, instead of minimizing the function f let us
maximize –f = f '. The value of f ' at θ1 = θ2 = 0 is 8.
Hence the fitness function F is taken as F = – f – 7 = f ' – 7 (Eq. 4)
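Eq. 2 and Eq. 4 can be evaluated directly to reproduce the tabulated fitness values up to rounding (the function name is illustrative):

```python
import math

def fitness(theta1, theta2):
    # f' = -PI with P = W1 = W2 = l1 = l2 = 2 (Eq. 2); F = f' - 7 (Eq. 4).
    t1, t2 = math.radians(theta1), math.radians(theta2)
    f_prime = (4 * math.sin(t1) + 6 * math.cos(t1)
               + 4 * math.sin(t2) + 2 * math.cos(t2))
    return f_prime - 7.0

print(round(fitness(0, 0), 2))    # 1.0
print(round(fitness(36, 60), 2))  # 4.67
```

The (36, 60) value agrees with the 4.66 in the population table on the next slide up to rounding.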
49. SC – GA - Examples
[ continued from previous slide ]
First, randomly generate a population of 8 individuals with 8-bit strings, as shown in the table below.
Population No. | Population of 8-bit strings (randomly generated) | Corresponding angles θ1 , θ2 (from table above) | F = – f – 7
1 | 0 0 0 0 0 0 0 0 | 0 , 0 | 1
2 | 0 0 1 0 0 0 0 1 | 12 , 6 | 2.1
3 | 0 0 0 1 0 1 0 1 | 6 , 30 | 3.11
4 | 0 0 1 0 1 0 0 0 | 12 , 48 | 4.01
5 | 0 1 1 0 1 0 1 0 | 36 , 60 | 4.66
6 | 1 1 1 0 1 0 0 0 | 84 , 48 | 1.91
7 | 1 1 1 0 1 1 0 1 | 84 , 78 | 1.93
8 | 0 1 1 1 1 1 0 0 | 42 , 72 | 4.55
These angles and the corresponding fitness function values are shown below.
Fig. Fitness function F for various population
The above table and the figure illustrate that :
− GA begins with a population of random strings.
− Then, each string is evaluated to find its fitness value.
− The population is then operated on by three operators –
Reproduction, Crossover and Mutation – to create a new population.
− The new population is further evaluated and tested for termination.
− If the termination criteria are not met, the population is iteratively operated
on by the three operators and evaluated until the termination criteria are met.
− One cycle of these operations and the subsequent evaluation is
known as a Generation in GA terminology.
50. SC – GA – References
5. References : Textbooks
1. "Neural Network, Fuzzy Logic, and Genetic Algorithms - Synthesis and
Applications", by S. Rajasekaran and G.A. Vijayalakshmi Pai, (2005), Prentice
Hall, Chapter 8-9, page 225-293.
2. "Genetic Algorithms in Search, Optimization, and Machine Learning", by David E.
Goldberg, (1989), Addison-Wesley, Chapter 1-5, page 1-214.
3. "An Introduction to Genetic Algorithms", by Melanie Mitchell, (1998), MIT Press,
Chapter 1-5, page 1-155.
4. "Genetic Algorithms: Concepts and Designs", by K. F. Man, K. S. Tang, and S.
Kwong, (201), Springer, Chapter 1-2, page 1-42.
5. "Practical Genetic Algorithms", by Randy L. Haupt and Sue Ellen Haupt, (2004),
John Wiley & Sons Inc, Chapter 1-5, page 1-127.
6. Related documents from open sources, mainly the internet. An exhaustive list is
being prepared for inclusion at a later date.