First Chapter: 1. What Is Artificial Intelligence?
Examples are:
Smart assistants like Siri and Alexa
Manufacturing and drone robots
Spam filters on email
Conversational bots for marketing and customer services
Disease mapping and prediction tools
Social media monitoring tools for dangerous content or false news
Song or TV show recommendations from Spotify and Netflix
Artificial intelligence is gaining popularity at a quicker pace, transforming the way we
live, interact, and improve customer experience. There is much more to come in the
upcoming years, with further improvement, development, and governance.
6. Ethical challenges
One of the major AI problems yet to be tackled is ethics and morality. Developers
are technically grooming AI bots to the point where they can flawlessly imitate
human conversation, making it increasingly tough to tell a machine apart from a
real customer service rep.
8. Legal Challenges
An AI application with an erroneous algorithm or poor data governance can cause
legal challenges for the company. This is yet again one of the biggest Artificial
Intelligence problems that a developer faces in the real world.
AI Techniques:
1. Heuristics.
2. Support Vector Machines.
3. Artificial Neural Networks.
4. Markov Decision Process.
5. Natural Language Processing.
Heuristics
Robotics
Planning
Expert Systems
Machine Learning
Theorem Proving
Symbolic Mathematics
Game Playing
5. Explain AI with proper example.
1. Siri
Siri is one of the most popular personal assistants, offered by Apple on the iPhone
and iPad. The friendly, voice-activated assistant interacts with the user on a daily
basis. It helps us find information, get directions, send messages, make voice
calls, open applications, and add events to the calendar.
2. Tesla
Not only smartphones but automobiles are also shifting towards Artificial
Intelligence. Tesla is something you are missing if you are a car geek; it is one of
the best automobiles available so far. The car has not only achieved many
accolades but also offers features like self-driving, predictive capabilities, and
absolute technological innovation.
If you are a technology geek who has dreamt of owning a car like the ones shown
in Hollywood movies, Tesla is the one you need in your garage. The car is getting
smarter day by day through over-the-air updates.
SECOND CHAPTER
SHORT NOTE ON:
1. DFS
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph
data structures. The algorithm starts at the root node (selecting some arbitrary
node as the root node in the case of a graph) and explores as far as possible
along each branch before backtracking.
Example:
Question. Which solution would DFS find to move from node S to
node G if run on the graph below?
Solution. The equivalent search tree for the above graph is as follows. As DFS
traverses the tree “deepest node first”, it would always pick the deeper branch until it
reaches the solution (or it runs out of nodes, and goes to the next branch). The
traversal is shown in blue arrows.
Time complexity: equivalent to the number of nodes traversed in DFS.
Space complexity: equivalent to how large the fringe can get.
Completeness: DFS is complete if the search tree is finite, meaning that for a given
finite search tree, DFS will come up with a solution if it exists.
Optimality: DFS is not optimal, meaning the number of steps in reaching the
solution, or the cost spent in reaching it, may be high.
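The traversal above can be sketched in Python with an explicit stack; the graph below is a hypothetical stand-in, since the original figure is not reproduced here:

```python
# A minimal DFS sketch on a hypothetical graph, using an explicit stack
# (the LIFO order is what makes it explore the deepest branch first).
graph = {
    'S': ['A', 'B'],
    'A': ['C', 'D'],
    'B': ['G'],
    'C': [],
    'D': [],
    'G': [],
}

def dfs(graph, start, goal):
    """Return one path from start to goal, exploring deepest nodes first."""
    stack = [(start, [start])]          # fringe: (node, path-so-far)
    visited = set()
    while stack:
        node, path = stack.pop()        # LIFO -> depth-first order
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph[node]:
            stack.append((neighbour, path + [neighbour]))
    return None

print(dfs(graph, 'S', 'G'))   # ['S', 'B', 'G']
```

Note that DFS returns the first path it finds down a branch, which need not be the cheapest one.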
2. BFS
Breadth First Search
Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph
data structures. It starts at the tree root (or some arbitrary node of a graph,
sometimes referred to as a ‘search key’), and explores all of the neighbor nodes at
the present depth prior to moving on to the nodes at the next depth level.
Example:
Question. Which solution would BFS find to move from node S to node G if run on
the graph below?
Solution. The equivalent search tree for the above graph is as follows. As BFS
traverses the tree “shallowest node first”, it would always pick the shallower branch
until it reaches the solution (or it runs out of nodes, and goes to the next branch).
The traversal is shown in blue arrows.
Time complexity: equivalent to the number of nodes traversed in BFS until the
shallowest solution.
Space complexity: equivalent to how large the fringe can get.
Completeness: BFS is complete, meaning for a given search tree, BFS will come
up with a solution if it exists.
Optimality: BFS is optimal as long as the costs of all edges are equal.
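BFS can be sketched the same way with a FIFO queue instead of a stack; again, the graph is a hypothetical stand-in for the missing figure:

```python
from collections import deque

# A minimal BFS sketch: the FIFO queue makes it explore all nodes at the
# current depth before moving to the next depth level.
graph = {
    'S': ['A', 'B'],
    'A': ['C', 'D'],
    'B': ['G'],
    'C': [],
    'D': [],
    'G': [],
}

def bfs(graph, start, goal):
    """Return the shallowest path from start to goal."""
    queue = deque([(start, [start])])   # fringe: (node, path-so-far)
    visited = {start}
    while queue:
        node, path = queue.popleft()    # FIFO -> shallowest first
        if node == goal:
            return path
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, path + [neighbour]))
    return None

print(bfs(graph, 'S', 'G'))   # ['S', 'B', 'G']
```

The only structural difference from the DFS sketch is `popleft()` versus `pop()`, which is exactly the queue-versus-stack distinction in the notes.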
DFS
The Depth First Search (DFS) algorithm traverses a graph in a depthward motion
and uses a stack to remember the next vertex from which to start a search when a
dead end occurs in any iteration.
3. UCS
UCS (Uniform Cost Search) is different from BFS and DFS because here the costs
come into play. In other words, traversing via different edges might not have the
same cost. The goal is to find a path where the cumulative sum of costs is the least.
Cost of a node is defined as:
cost(node) = cumulative cost of all nodes from root
cost(root) = 0
Example:
Question. Which solution would UCS find to move from node S to node G if run on
the graph below?
Solution. The equivalent search tree for the above graph is as follows. Cost of each
node is the cumulative cost of reaching that node from the root. Based on UCS
strategy, the path with least cumulative cost is chosen. Note that due to the many
options in the fringe, the algorithm explores most of them so long as their cost is low,
and discards them when a lower cost path is found; these discarded traversals are
not shown below. The actual traversal is shown in blue.
Time complexity: O(b^(1 + C*/ε)), where C* is the cost of the optimal solution and ε
is the minimum edge cost.
Space complexity: also O(b^(1 + C*/ε)), since the fringe can hold that many nodes.
Advantages:
· UCS is complete.
· UCS is optimal.
Disadvantages:
· Explores options in every “direction”.
· No information on goal location.
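The UCS strategy can be sketched with a priority queue ordered by cumulative cost; the weighted graph below is hypothetical, since the original figure is not shown:

```python
import heapq

# A minimal UCS sketch. Hypothetical weighted graph:
# {node: [(edge_cost, neighbour), ...]}
graph = {
    'S': [(1, 'A'), (12, 'G')],
    'A': [(3, 'B')],
    'B': [(1, 'G')],
    'G': [],
}

def ucs(graph, start, goal):
    """Expand the cheapest fringe node first; return (cost, path)."""
    fringe = [(0, start, [start])]      # (cumulative cost, node, path)
    explored = set()
    while fringe:
        cost, node, path = heapq.heappop(fringe)   # least cumulative cost
        if node == goal:
            return cost, path
        if node in explored:
            continue                    # a cheaper path already expanded this node
        explored.add(node)
        for edge_cost, neighbour in graph[node]:
            heapq.heappush(fringe, (cost + edge_cost, neighbour, path + [neighbour]))
    return None

print(ucs(graph, 'S', 'G'))   # (5, ['S', 'A', 'B', 'G'])
```

Note how the direct S -> G edge (cost 12) stays in the fringe but is never chosen, because the cumulative cost via A and B is lower.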
Here, the algorithms have information on the goal state, which helps in more efficient
searching. This information is obtained by something called a heuristic.
In this section, we will discuss the following search algorithms.
1. Greedy Search
2. A* Tree Search
3. A* Graph Search
Search Heuristics: In an informed search, a heuristic is a function that estimates
how close a state is to the goal state. Examples include the Manhattan distance,
the Euclidean distance, etc. (The lesser the distance, the closer the goal.)
Different heuristics are used in different informed algorithms discussed below.
Greedy Search
In greedy search, we expand the node closest to the goal node. The “closeness” is
estimated by a heuristic h(x) .
Heuristic: A heuristic h is defined as-
h(x) = Estimate of distance of node x from the goal node.
The lower the value of h(x), the closer the node is to the goal.
Strategy: Expand the node closest to the goal state, i.e. expand the node with the
lowest h value.
Example:
Question. Find the path from S to G using greedy search. The heuristic value h of
each node is given below the name of the node.
Solution. Starting from S, we can traverse to A(h=9) or D(h=5). We choose D, as it
has the lower heuristic cost. Now from D, we can move to B(h=4) or E(h=3). We
choose E with lower heuristic cost. Finally, from E, we go to G(h=0). This entire
traversal is shown in the search tree below, in blue.
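The traversal above can be sketched with a priority queue ordered by h alone; the edges are reconstructed from the worked example (the original figure is not reproduced, so the graph is an assumption):

```python
import heapq

# Graph and heuristic values reconstructed from the worked example above.
graph = {
    'S': ['A', 'D'],
    'A': [],
    'D': ['B', 'E'],
    'B': ['C', 'E'],
    'E': ['G'],
    'C': ['G'],
    'G': [],
}
h = {'S': 7, 'A': 9, 'D': 5, 'B': 4, 'E': 3, 'C': 2, 'G': 0}

def greedy_search(graph, h, start, goal):
    """Always expand the fringe node with the lowest heuristic value h."""
    fringe = [(h[start], start, [start])]
    visited = set()
    while fringe:
        _, node, path = heapq.heappop(fringe)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph[node]:
            heapq.heappush(fringe, (h[neighbour], neighbour, path + [neighbour]))
    return None

print(greedy_search(graph, h, 'S', 'G'))   # ['S', 'D', 'E', 'G'], as in the example
```

Since only h is used, greedy search ignores path costs entirely, which is why it is fast but not optimal.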
A* Tree Search
The fringe evolves as follows (h = heuristic value, g = cumulative cost from S,
f = g + h):

Path                   h   g           f
S                      7   0           7
S -> A                 9   3           12
S -> D                 5   2           7
S -> D -> B            4   3 (2 + 1)   7
S -> D -> E            3   6 (2 + 4)   9
S -> D -> B -> C       2   5 (3 + 2)   7
S -> D -> B -> E       3   4 (3 + 1)   7
S -> D -> B -> C -> G  0   9 (5 + 4)   9
A* Graph Search
A* tree search works well, except that it wastes time re-exploring branches it
has already explored. In other words, if the same node is expanded twice in
different branches of the search tree, A* search might explore both of those
branches, thus wasting time.
A* Graph Search, or simply Graph Search, removes this limitation by
adding this rule: do not expand the same node more than once.
Heuristic. Graph search is optimal only when the forward cost between two
successive nodes A and B, given by h(A) - h(B), is less than or equal to the
backward cost between those two nodes, g(A -> B). This property of a graph
search heuristic is called consistency.
Consistency: h(A) - h(B) <= g(A -> B) for every pair of successive nodes A and B.
Example
Question. Use graph search to find path from S to G in the following graph.
Solution. We solve this question in much the same way as the last question, but in
this case we keep track of the nodes explored, so that we do not re-explore them.
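A* graph search can be sketched as below. The edge costs are taken from the tree-search table above; the E -> G edge cost is an assumption, since the original figure is missing:

```python
import heapq

# Weighted graph reconstructed from the A* tree-search table above;
# the E -> G cost of 6 is an assumption.
graph = {
    'S': [('A', 3), ('D', 2)],
    'A': [],
    'D': [('B', 1), ('E', 4)],
    'B': [('C', 2), ('E', 1)],
    'E': [('G', 6)],
    'C': [('G', 4)],
    'G': [],
}
h = {'S': 7, 'A': 9, 'D': 5, 'B': 4, 'E': 3, 'C': 2, 'G': 0}

def a_star(graph, h, start, goal):
    """A* graph search: expand lowest f = g + h, each node at most once."""
    fringe = [(h[start], 0, start, [start])]     # (f, g, node, path)
    closed = set()
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return g, path
        if node in closed:                       # graph-search rule:
            continue                             # never expand a node twice
        closed.add(node)
        for neighbour, cost in graph[node]:
            g2 = g + cost
            heapq.heappush(fringe, (g2 + h[neighbour], g2, neighbour, path + [neighbour]))
    return None

print(a_star(graph, h, 'S', 'G'))   # (9, ['S', 'D', 'B', 'C', 'G'])
```

The returned cost of 9 matches the f value of the final row in the tree-search table.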
5. Heuristic Function
Heuristic function: A heuristic is a function used in informed search to find the most
promising path. It takes the current state of the agent as its input and produces an
estimate of how close the agent is to the goal. The heuristic method might not
always give the best solution, but it is guaranteed to find a good solution in
reasonable time. A heuristic function estimates how close a state is to the goal. It
is represented by h(n), and it estimates the cost of an optimal path between a pair
of states. The value of the heuristic function is always positive.
6. Hill Climbing
Hill Climbing is a heuristic search used for mathematical optimization problems in
the field of Artificial Intelligence.
Given a large set of inputs and a good heuristic function, it tries to find a sufficiently
good solution to the problem. This solution may not be the global optimum.
In the above definition, 'mathematical optimization problems' implies that hill
climbing solves problems where we need to maximize or minimize a given real
function by choosing values from the given inputs.
Example-Travelling salesman problem where we need to minimize the
distance traveled by the salesman.
‘Heuristic search’ means that this search algorithm may not find the
optimal solution to the problem. However, it will give a good solution
in reasonable time.
A heuristic function is a function that ranks all the possible alternatives at any
branching step of a search algorithm, based on the available information. It helps
the algorithm select the best route out of the possible routes.
1. Simple Hill climbing: It examines the neighboring nodes one by one and selects
the first neighboring node which optimizes the current cost as the next node.
2. Steepest-Ascent Hill climbing: It first examines all the neighboring nodes and
then selects the node closest to the solution state as the next node. Its algorithm is:
Step 1 : Evaluate the initial state. If it is a goal state then exit, else make the current
state the initial state.
Step 2 : Repeat these steps until a solution is found or the current state does not
change:
i. Let 'target' be a state such that any successor of the current state will be better
than it;
ii. for each operator that applies to the current state:
a. apply the new operator and create a new state
b. evaluate the new state
c. if this state is a goal state then quit, else compare it with 'target'
d. if this state is better than 'target', set this state as 'target'
e. if 'target' is better than the current state, set the current state to 'target'
Step 3 : Exit
3. Stochastic hill climbing: It does not examine all the neighboring nodes before
deciding which node to select. It just selects a neighboring node at random and
decides (based on the amount of improvement in that neighbor) whether to move
to that neighbor or to examine another.
A state space diagram is a graphical representation of the set of states our search
algorithm can reach, plotted against the value of our objective function (the
function we wish to maximize).
X-axis : denotes the state space, i.e. the states or configurations our algorithm
may reach.
Y-axis : denotes the values of the objective function corresponding to a particular
state. The best solution will be the state where the objective function has its
maximum value (global maximum).
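Simple hill climbing can be sketched over a toy one-dimensional state space; the objective function here is a hypothetical stand-in for the diagram described above:

```python
# A minimal simple hill-climbing sketch, maximizing a one-dimensional
# objective over integer states (a toy stand-in for the state space diagram).
def objective(x):
    return -(x - 3) ** 2 + 9          # global maximum of 9 at x = 3

def hill_climb(start):
    current = start
    while True:
        neighbours = [current - 1, current + 1]
        # pick the first neighbour that improves on the current state
        better = [n for n in neighbours if objective(n) > objective(current)]
        if not better:
            return current             # no improving neighbour: stop here
        current = better[0]

print(hill_climb(0))   # climbs 0 -> 1 -> 2 -> 3 and stops
```

On this objective every run reaches the global maximum; on a function with several peaks, the same loop would stop at whichever local maximum it climbs to first, which is exactly the limitation the notes describe.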
7. A* Search algorithm
A* Search algorithm is one of the best and popular technique used in path-finding
and graph traversals.
Why A* Search Algorithm ?
Informally speaking, A* Search, unlike other traversal techniques, has "brains".
What this means is that it is a really smart algorithm, which separates it from the
other conventional algorithms. This fact is explained in detail in the sections below.
It is also worth mentioning that many games and web-based maps use this
algorithm to find the shortest path very efficiently (by approximation).
Explanation
Consider a square grid having many obstacles and we are given a starting cell and
a target cell. We want to reach the target cell (if possible) from the starting cell as
quickly as possible. Here A* Search Algorithm comes to the rescue.
What the A* Search Algorithm does is that at each step it picks the node according
to a value 'f', which is a parameter equal to the sum of two other parameters, 'g'
and 'h'. At each step it picks the node/cell having the lowest 'f', and processes
that node/cell.
We define 'g' and 'h' as simply as possible below.
g = the movement cost to move from the starting point to a given square on the
grid, following the path generated to get there.
h = the estimated movement cost to move from that given square on the grid to
the final destination. This is often referred to as the heuristic, which is nothing but
a kind of smart guess. We really don’t know the actual distance until we find the
path, because all sorts of things can be in the way (walls, water, etc.). There can
be many ways to calculate this ‘h’ which are discussed in the later sections.
So suppose, as in the figure below, we want to reach the target cell from the
source cell; the A* Search algorithm would follow the path shown. Note that the
figure was made considering the Euclidean distance as the heuristic.
DEFINE IN DETAIL:
1. Constraint Satisfaction
2. Means-Ends Analysis
The means-ends analysis process centers around finding the difference between
the current state and the goal state. The problem space of means-ends analysis
has an initial state and one or more goal states, a set of operators with a set of
preconditions for their application, and difference functions that compute the
difference between two states s(i) and s(j). A problem is solved using means-ends
analysis by repeatedly computing the difference between the current state and the
goal state, selecting an operator that reduces this difference, and applying it (first
satisfying its preconditions as subproblems) until no difference remains.
3. Simulated Annealing
DIFFERENCE BETWEEN:
Informed and uninformed search
Sr. No. | Key            | BFS                                                    | DFS
1       | Definition     | BFS stands for Breadth First Search.                   | DFS stands for Depth First Search.
2       | Data structure | BFS uses a Queue to find the shortest path.            | DFS uses a Stack to find the shortest path.
3       | Source         | BFS is better when the target is closer to the source. | DFS is better when the target is far from the source.
Game playing can be formalized as search as follows: the initial state is the
initial board position. The operators define the set of legal moves from any
position. The final test determines when the game is over. The utility function
gives a numeric outcome for the game.
Both players try to win the game, so both of them try to make the best possible
move at each turn. Searching techniques like BFS (Breadth First Search) are not
practical for this, as the branching factor is very high, so searching will take a lot
of time. Hence, we need other search procedures that improve on this:
A Generate procedure, so that only good moves are generated.
A Test procedure, so that the best move can be explored first.
MOVEGEN: It generates all the possible moves that can be generated from
the current position.
STATICEVALUATION: It returns a value depending upon the goodness of a
position from the viewpoint of the two players.
This algorithm is for a two-player game, so we call the first player PLAYER1 and
the second player PLAYER2. The value of each node is backed up from its
children. For PLAYER1 the backed-up value is the maximum value of its children,
and for PLAYER2 the backed-up value is the minimum value of its children. It
provides the most promising move to PLAYER1, assuming that PLAYER2 makes
the best move. It is a recursive algorithm, as the same procedure occurs at each
level.
Figure 1: Before backing-up of values
We assume that PLAYER1 will start the game. 4 levels are generated. The
value to nodes H, I, J, K, L, M, N, O is provided by STATICEVALUATION
function. Level 3 is maximizing level, so all nodes of level 3 will take
maximum values of their children. Level 2 is minimizing level, so all its
nodes will take minimum values of their children. This process continues.
The value of A is 23. That means A should choose C move to win.
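The backing-up of values can be sketched recursively; the tree and leaf values below are hypothetical, since the original figure is not reproduced:

```python
# A minimal minimax sketch: MAX levels back up the maximum of their
# children, MIN levels the minimum. Leaf values stand in for the
# STATICEVALUATION function.
def minimax(node, maximizing, tree, values):
    if node in values:                 # leaf: static evaluation value
        return values[node]
    child_scores = [minimax(c, not maximizing, tree, values) for c in tree[node]]
    return max(child_scores) if maximizing else min(child_scores)

# Hypothetical 3-level game tree: A is a MAX node, B and C are MIN nodes.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
values = {'D': 3, 'E': 5, 'F': 2, 'G': 9}

print(minimax('A', True, tree, values))   # min(3,5)=3, min(2,9)=2 -> max = 3
```

Here PLAYER1 (MAX) at A should choose the move to B, whose backed-up value is 3.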
The main condition required for alpha-beta pruning is:
1. α >= β
Let's take an example of two-player search tree to understand the working of Alpha-
beta pruning
Step 1: At the first step, the Max player will make the first move from node A,
where α = -∞ and β = +∞. These values of alpha and beta are passed down to
node B, where again α = -∞ and β = +∞, and node B passes the same values to
its child D.
Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value
of α is compared first with 2 and then with 3, and max(2, 3) = 3 will be the value of
α at node D; the node value will also be 3.
Step 3: Now the algorithm backtracks to node B, where the value of β will change,
as this is Min's turn: β = min(+∞, 3) = 3. Hence at node B, α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is
node E, and the values α = -∞ and β = 3 are passed down as well.
Step 4: At node E, Max takes its turn, and the value of alpha changes. The current
value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and
β = 3. Since α >= β, the right successor of E is pruned, the algorithm does not
traverse it, and the value at node E is 5.
Step 5: At the next step, the algorithm again backtracks the tree, from node B to
node A. At node A, the value of alpha is changed to the maximum available value,
3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right
successor of A, which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared, first with the left child, which
is 0 (max(3, 0) = 3), and then with the right child, which is 1 (max(3, 1) = 3); α
remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here
the value of beta is changed, as it is compared with 1: min(+∞, 1) = 1. Now at C,
α = 3 and β = 1, and again the condition α >= β is satisfied, so the next child of C,
which is G, is pruned, and the algorithm does not compute the entire subtree of G.
Step 8: C now returns the value 1 to A. Here the best value for A is max(3, 1) = 3.
Following is the final game tree, showing the nodes which were computed and the
nodes which were never computed. Hence the optimal value for the maximizer is
3 for this example.
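The steps above can be sketched as a recursive procedure. The leaf values match the walkthrough where they are given; the pruned leaves (E's right child and G's children) are never visited, so their values here are placeholders:

```python
import math

# Alpha-beta pruning sketch on the example tree walked through above.
# A and D/E/F/G are MAX levels, B and C are MIN levels; pruned leaf
# values (8, and G's children 7 and 5) are placeholder assumptions.
tree = {
    'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
    'D': [2, 3], 'E': [5, 8], 'F': [0, 1], 'G': [7, 5],
}

def alphabeta(node, maximizing, alpha, beta):
    if isinstance(node, int):              # leaf: static evaluation value
        return node
    best = -math.inf if maximizing else math.inf
    for child in tree[node]:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if alpha >= beta:                  # the pruning condition
            break                          # remaining children are skipped
    return best

print(alphabeta('A', True, -math.inf, math.inf))   # 3, matching the example
```

The `break` fires exactly where the walkthrough prunes: at E's right child and at the whole subtree of G.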
Move Ordering in Alpha-Beta pruning:
o Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune
any of the leaves of the tree and works exactly like the minimax algorithm. In this
case, it also consumes more time because of the alpha-beta factors; such an
ordering is called worst ordering. Here the best move occurs on the right side of
the tree. The time complexity for such an order is O(b^m).
o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of
pruning happens in the tree, and the best moves occur on the left side of the tree.
We apply DFS, so it searches the left side of the tree first and goes twice as deep
as the minimax algorithm in the same amount of time. The complexity for ideal
ordering is O(b^(m/2)).
5. Explain the different methods of planning.
Choose the best rule to apply next, based on the best available heuristics.
Apply the chosen rule for computing the new problem state.
Detect when a solution has been found.
Detect dead ends so that they can be abandoned and the system’s effort is
directed in more fruitful directions.
Detect when an almost correct solution has been found.
The stack is used in the algorithm to hold the actions and satisfy the goal. A
knowledge base is used to hold the current state and actions.
Goal stack is similar to a node in a search tree, where the branches are
created if there is a choice of an action.
The important steps of the algorithm are as stated below:
i. Start by pushing the original goal on the stack. Repeat this until the stack
becomes empty. If stack top is a compound goal, then push its unsatisfied subgoals
on the stack.
ii. If stack top is a single unsatisfied goal then, replace it by an action and push the
action’s precondition on the stack to satisfy the condition.
iii. If stack top is an action, pop it from the stack, execute it and change the
knowledge base by the effects of the action.
iv. If stack top is a satisfied goal, pop it from the stack.
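Steps i-iv can be sketched as a compact loop. The goal/action records below are hypothetical, and compound goals are omitted for brevity (this is a sketch of the control loop, not a full STRIPS planner):

```python
# A compact sketch of the goal-stack loop in steps i-iv, over a toy domain.
def goal_stack_plan(goal, knowledge_base, actions):
    """actions maps a goal to (preconditions, effects) that achieve it."""
    stack = [goal]
    plan = []
    while stack:                                   # step i: until stack is empty
        top = stack.pop()
        if isinstance(top, tuple) and top[0] == 'ACTION':
            _, name, effects = top                 # step iii: execute the action,
            knowledge_base |= effects              # updating the knowledge base
            plan.append(name)
        elif top in knowledge_base:
            pass                                   # step iv: satisfied goal, pop it
        else:                                      # step ii: replace the goal by an
            preconditions, effects = actions[top]  # action and push its unsatisfied
            stack.append(('ACTION', top, effects)) # preconditions
            stack.extend(p for p in preconditions if p not in knowledge_base)
    return plan

# Hypothetical toy domain: to have a table we need wood and tools.
actions = {
    'have_table': ({'have_wood', 'have_tools'}, {'have_table'}),
    'have_tools': (set(), {'have_tools'}),
}
print(goal_stack_plan('have_table', {'have_wood'}, actions))
```

The precondition 'have_tools' is pushed above the action that needs it, so it is achieved first, exactly as the stack discipline in steps i-iv dictates.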
Non-linear planning
Non-linear planning uses a goal set instead of a goal stack, and its search space
includes all possible subgoal orderings. It handles goal interactions by the
interleaving method.
It requires a larger search space, since all possible goal orderings are taken into
consideration.
It is a complex algorithm to understand.
Algorithm
1. Choose a goal 'g' from the goalset
2. If 'g' does not match the state, then