
FIRST CHAPTER

1. What is artificial intelligence?


 Artificial intelligence is a set of algorithms and intelligence that tries to mimic
human intelligence. Machine learning is one of them, and deep learning is one
of those machine learning techniques.
 According to the father of Artificial Intelligence, John McCarthy, it is "The
science and engineering of making intelligent machines, especially intelligent
computer programs".
 It is the branch of computer science that emphasizes the development of
intelligent machines that think and work like humans. For example:
speech recognition, learning and planning, problem solving.

Examples are
 Smart assistants like Siri and Alexa
 Manufacturing and drone robots
 Spam filters on email
 Conversational bots for marketing and customer service
 Disease mapping and prediction tools
 Social media monitoring tools for dangerous content or false news
 Song or TV show recommendations from Spotify and Netflix
Artificial intelligence is gaining popularity at a quick pace and is changing the way
we live, interact, and improve customer experience. There's much more to come in
the upcoming years, with more improvement, development, and governance.

2. What are Artificial Intelligence problems or challenges?


1. Lack of technical knowledge
 To integrate, deploy and implement AI applications in the enterprise, the
organization must have knowledge of current AI advancements and
technologies as well as their shortcomings. The lack of technical know-how is
hindering the adoption of this niche domain in most organizations.
2. The price factor
 Small and mid-sized organizations struggle a lot when it comes to adopting AI
technologies, as it is a costly affair. Even big firms like Facebook, Apple,
Microsoft, Google and Amazon (FAMGA) allocate a separate budget for adopting
and implementing AI technologies.

3. Data acquisition and storage


 One of the biggest Artificial Intelligence problems is data acquisition and
storage. Business AI systems depend on sensor data as their input. A
mountain of sensor data is collected for validating AI. Irrelevant and noisy
datasets can cause obstruction, as they are hard to store and analyze.

4. Rare and expensive workforce


 As mentioned above, adoption and deployment of AI technologies require
specialists like data scientists, data engineers and other SMEs (Subject Matter
Experts). These experts are expensive and rare in the current marketplace.
5. Issue of responsibility
 The implementation of AI applications comes with great responsibility. A
specific individual must bear the burden of any sort of hardware malfunction.

6. Ethical challenges
 One of the major AI problems yet to be tackled is ethics and morality.
Developers are technically grooming AI bots to a level of perfection where
they can flawlessly imitate human conversation, making it increasingly tough
to spot the difference between a machine and a real customer service rep.

7. Lack of computation speed


 AI, machine learning and deep learning solutions require a high degree of
computation speed offered only by high-end processors. The larger
infrastructure requirements and pricing associated with these processors have
become a hindrance to general adoption of AI technology.

8. Legal Challenges
 An AI application with an erroneous algorithm and poor data governance can
cause legal challenges for the company. This is yet again one of the biggest
Artificial Intelligence problems that a developer faces in the real world.

9. Difficulty of assessing vendors


Tech procurement is quite challenging in any emerging field, and AI is particularly
vulnerable. Businesses face a lot of problems knowing how exactly they can use AI
effectively, as many non-AI companies engage in AI washing and some
organizations overstate their AI capabilities.

3. List out AI Techniques.

AI Techniques:

1. Heuristics.
2. Support Vector Machines.
3. Artificial Neural Networks.
4. Markov Decision Process.
5. Natural Language Processing.
Heuristics

 It is one of the most popular search algorithms used in Artificial Intelligence.
 It is implemented to solve problems faster than the classic methods, or to
find solutions for problems that classic methods cannot solve.
 Heuristic techniques basically employ a heuristic for their moves and
are used to reduce the total number of alternatives for the results.
 This technique is one of the most basic techniques used for AI and is
based on the principle of trial and error. It learns from its mistakes.
 Heuristics is one of the best options for solving difficult problems. For
instance, to know the shortest route to any destination, the best way
is to identify all the possible routes and then identify the shortest
one.

Support Vector Machines

 Support Vector Machine is a supervised machine learning algorithm used for
regression challenges or classification problems.
 However, in the majority of cases, it is used for classification only; for
instance, email systems use support vector machines to classify email
as Social, Promotion or any other category. The system categorizes each mail
according to the respective category.
 This technique is widely used for face recognition, text recognition
and image recognition systems. A small code sketch follows this list.
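To make the classification idea concrete, here is a minimal sketch using scikit-learn's SVC, assuming scikit-learn is installed; the digits dataset and parameters are illustrative choices, not from the text.

```python
# A hedged sketch of SVM classification with scikit-learn (assumed available).
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = datasets.load_digits(return_X_y=True)   # toy image-recognition data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", gamma="scale")         # RBF-kernel SVM classifier
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```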

Artificial Neural Network

 Neural networks are generally found in the brains of living organisms.
 These are basically the neural circuits which help living beings to
transmit and process information.
 For this purpose, there are billions of neurons which help to make
the neural systems for making decisions in day-to-day life and learning
new things.
 These natural neural networks have inspired the design of the
Artificial Neural Network. Instead of neurons, Artificial Neural
Networks are composed of nodes.
 These networks help in identifying patterns in the data and then
learning from them.
 For this purpose, they use different learning methods such as
supervised learning, unsupervised learning and reinforcement learning.
 From an application perspective, they are used in machine learning, deep
learning and pattern recognition. A small code sketch of a single node
follows this list.
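As a concrete illustration of a single node, the sketch below trains a one-node network (a perceptron) on the logical AND function; the data and learning rate are assumptions for demonstration only.

```python
# A minimal single-node (perceptron) sketch in NumPy; illustrative only.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
y = np.array([0, 0, 0, 1])                       # AND targets

w, b, lr = np.zeros(2), 0.0, 0.1                 # weights, bias, learning rate
for _ in range(20):                              # a few supervised passes
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0        # step activation
        err = target - pred
        w += lr * err * xi                       # perceptron update rule
        b += lr * err

print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```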

Markov Decision Process

 A Markov Decision Process (MDP) is a framework for decision-making
modeling where in some situations the outcome is partly random and
partly based on the input of the decision maker.
 Another application where MDP is used is optimized planning. The
basic goal of MDP is to find a policy for the decision maker,
indicating what particular action should be taken in what state.
 An MDP model consists of the following parts (a small value-iteration
sketch follows this list):
1. A set of possible states: for example, this can refer to a grid world of
a robot or the states of a door (open or closed).
2. A set of possible actions: a fixed set of actions that e.g. a robot can
take, such as going north, east, south or west, or, with respect to a
door, closing or opening it.
3. Transition probabilities: the probability of going from one state
to another. For example, what is the probability that the door is
closed after the action of closing the door has been performed?
4. Rewards: these are used to direct the planning. For instance, a robot
may want to move north to reach its destination, so actually going north
will result in a higher reward.
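The sketch below shows these four parts in code and solves a toy door MDP with value iteration; all states, probabilities and rewards are invented for illustration, not taken from the text.

```python
# A minimal value-iteration sketch for a toy MDP (numbers are assumptions).
states = ["door_open", "door_closed"]
actions = ["open", "close", "wait"]

# P[(state, action)] -> list of (probability, next_state, reward)
P = {
    ("door_open", "close"):  [(0.9, "door_closed", 1.0), (0.1, "door_open", 0.0)],
    ("door_open", "wait"):   [(1.0, "door_open", 0.0)],
    ("door_closed", "open"): [(0.8, "door_open", 1.0), (0.2, "door_closed", 0.0)],
    ("door_closed", "wait"): [(1.0, "door_closed", 0.0)],
}

gamma = 0.9                               # discount factor
V = {s: 0.0 for s in states}              # state values

for _ in range(100):                      # value iteration
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[(s, a)])
                for a in actions if (s, a) in P)
         for s in states}

# The policy picks, in each state, the action with the best expected value.
policy = {s: max((a for a in actions if (s, a) in P),
                 key=lambda a: sum(p * (r + gamma * V[s2])
                                   for p, s2, r in P[(s, a)]))
          for s in states}
print(V, policy)
```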

Natural Language Processing

 Basically, it is a technique used by computers to understand, interpret
and manipulate human language. Going by its use, it is helpful for
speech recognition and speech synthesis.
 This technique is already used for several applications by a myriad
of companies. Apple's Siri, Google Assistant, Microsoft's Cortana
and Amazon's Alexa are some of the applications which use Natural
Language Processing techniques.
 Additionally, it is also used for parsing techniques, part-of-speech
tagging, and text recognition.

4. What are the applications of AI?


AI has applications in all fields of human study, such as finance and
economics, environmental engineering, chemistry, computer science, and so
on.
Some of the applications of AI are listed below:
 Perception
■ Machine vision
■ Speech understanding
■ Touch (tactile or haptic) sensation
 Robotics
 Natural Language Processing
■ Natural Language Understanding
■ Speech Understanding
■ Language Generation
■ Machine Translation
 Planning
 Expert Systems
 Machine Learning
 Theorem Proving
 Symbolic Mathematics
 Game Playing
5. Explain AI with proper example.

1. Siri

Siri is one of the most popular personal assistants offered by Apple on the iPhone
and iPad. The friendly female voice-activated assistant interacts with the user
in their daily routine. It assists us to find information, get directions, send
messages, make voice calls, open applications and add events to the
calendar.

Siri uses machine-learning technology in order to get smarter and more capable
of understanding natural language questions and requests. It is surely one of the
most iconic examples of the machine learning abilities of gadgets.

2. Tesla

Not only smartphones but automobiles are also shifting towards Artificial
Intelligence. Tesla is something you are missing if you are a car geek. It is
one of the best automobiles available so far. The car has not only
achieved many accolades but also offers features like self-driving, predictive
capabilities, and absolute technological innovation.
If you are a technology geek who has dreamt of owning a car like those shown in
Hollywood movies, Tesla is the one you need in your garage. The car is getting
smarter day by day through over-the-air updates.
SECOND CHAPTER
SHORT NOTE ON:

1. DFS
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph
data structures. The algorithm starts at the root node (selecting some arbitrary
node as the root node in the case of a graph) and explores as far as possible
along each branch before backtracking.
Example:
Question. Which solution would DFS find to move from node S to
node G if run on the graph below?

Solution. The equivalent search tree for the above graph is as follows. As DFS
traverses the tree “deepest node first”, it would always pick the deeper branch until it
reaches the solution (or it runs out of nodes, and goes to the next branch). The
traversal is shown in blue arrows.

Path:   S -> A -> B -> C -> G


Let d = the depth of the search tree (the number of levels of the search tree),
and n^i = the number of nodes in level i.
Time complexity: equivalent to the number of nodes traversed in DFS,
T(n) = 1 + n + n^2 + ... + n^d = O(n^d).
Space complexity: equivalent to how large the fringe can get, S(n) = O(n * d).
Completeness: DFS is complete if the search tree is finite, meaning for a given
finite search tree, DFS will come up with a solution if it exists.
Optimality: DFS is not optimal, meaning the number of steps in reaching the
solution, or the cost spent in reaching it, may be high.
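A minimal runnable sketch of this traversal is given below; the adjacency list is an assumption standing in for the document's missing figure, chosen so the result matches the path above.

```python
# A hedged DFS sketch over an adjacency-list graph (graph is illustrative).
def dfs(graph, start, goal):
    """Return one path from start to goal using depth-first search."""
    stack = [[start]]                       # stack of partial paths
    visited = set()
    while stack:
        path = stack.pop()                  # deepest partial path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in reversed(graph.get(node, [])):
            stack.append(path + [neighbor])
    return None

graph = {"S": ["A", "D"], "A": ["B"], "B": ["C"], "C": ["G"], "D": ["G"]}
print(dfs(graph, "S", "G"))                 # -> ['S', 'A', 'B', 'C', 'G']
```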

2. BFS
Breadth First Search
Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph
data structures. It starts at the tree root (or some arbitrary node of a graph,
sometimes referred to as a ‘search key’), and explores all of the neighbor nodes at
the present depth prior to moving on to the nodes at the next depth level.
Example:
Question. Which solution would BFS find to move from node S to node G if run on
the graph below?

Solution. The equivalent search tree for the above graph is as follows. As BFS
traverses the tree “shallowest node first”, it would always pick the shallower branch
until it reaches the solution (or it runs out of nodes, and goes to the next branch).
The traversal is shown in blue arrows.

Path:   S -> D -> G


Let s = the depth of the shallowest solution, and n^i = the number of nodes in
level i.
Time complexity: equivalent to the number of nodes traversed in BFS until the
shallowest solution, T(n) = 1 + n + n^2 + ... + n^s = O(n^s).
Space complexity: equivalent to how large the fringe can get, S(n) = O(n^s).
Completeness: BFS is complete, meaning for a given search tree, BFS will come
up with a solution if it exists.
Optimality: BFS is optimal as long as the costs of all edges are equal.
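The analogous BFS sketch below uses a FIFO queue; again the graph is an assumed stand-in for the figure, chosen so the result matches the path above.

```python
# A hedged BFS sketch using a FIFO queue (graph is illustrative).
from collections import deque

def bfs(graph, start, goal):
    """Return the shallowest path from start to goal."""
    queue = deque([[start]])                # queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()              # shallowest partial path first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

graph = {"S": ["A", "D"], "A": ["B"], "B": ["C"], "C": ["G"], "D": ["G"]}
print(bfs(graph, "S", "G"))                 # -> ['S', 'D', 'G']
```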

BFS and DFS are graph traversal algorithms.


BFS
The Breadth First Search (BFS) algorithm traverses a graph in a breadthward motion
and uses a queue to remember the next vertex to start a search from when a dead
end occurs in any iteration.

DFS
The Depth First Search (DFS) algorithm traverses a graph in a depthward motion and
uses a stack to remember the next vertex to start a search from when a dead end
occurs in any iteration.

3.Uniform Cost Search

UCS is different from BFS and DFS because here the costs come into play. In other
words, traversing via different edges might not have the same cost. The goal is to
find a path where the cumulative sum of costs is least.
Cost of a node is defined as:
cost(node) = cumulative cost of all nodes from root
cost(root) = 0
Example:
Question. Which solution would UCS find to move from node S to node G if run on
the graph below?

Solution. The equivalent search tree for the above graph is as follows. Cost of each
node is the cumulative cost of reaching that node from the root. Based on UCS
strategy, the path with least cumulative cost is chosen. Note that due to the many
options in the fringe, the algorithm explores most of them so long as their cost is low,
and discards them when a lower cost path is found; these discarded traversals are
not shown below. The actual traversal is shown in blue.

Path:   S -> A -> B -> G


Cost:   5
Let C = the cost of the optimal solution, and ε = the minimum arc cost.

Then C/ε gives the effective depth of the search.

Time complexity: O(n^(C/ε))

Space complexity: O(n^(C/ε))
(A runnable priority-queue sketch of UCS follows the list of advantages and
disadvantages below.)
Advantages:
· UCS is complete.
· UCS is optimal.
Disadvantages:
· Explores options in every “direction”.
· No information on goal location.
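Here is the promised sketch. UCS is essentially BFS with a priority queue ordered by cumulative cost; the edge weights are assumptions chosen so the answer matches the worked example above (path S -> A -> B -> G, cost 5).

```python
# A hedged uniform-cost search sketch (edge costs are illustrative).
import heapq

def ucs(graph, start, goal):
    """Return (cost, path) with the least cumulative edge cost."""
    frontier = [(0, start, [start])]        # (cumulative cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest first
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return None

graph = {"S": [("A", 1), ("D", 3)], "A": [("B", 2)],
         "B": [("G", 2), ("C", 1)], "C": [("G", 4)], "D": [("G", 5)]}
print(ucs(graph, "S", "G"))                 # -> (5, ['S', 'A', 'B', 'G'])
```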

4.Informed Search Algorithms

Here, the algorithms have information on the goal state, which helps in more efficient
searching. This information is obtained by something called a heuristic.
In this section, we will discuss the following search algorithms.
1. Greedy Search
2. A* Tree Search
3. A* Graph Search
Search Heuristics: In an informed search, a heuristic is a function that estimates
how close a state is to the goal state. For examples – Manhattan distance, Euclidean
distance, etc. (Lesser the distance, closer the goal.)
Different heuristics are used in different informed algorithms discussed below.

Greedy Search

In greedy search, we expand the node closest to the goal node. The “closeness” is
estimated by a heuristic h(x) .
Heuristic: A heuristic h is defined as-
h(x) = Estimate of distance of node x from the goal node.
Lower the value of h(x), closer is the node from the goal.
Strategy: Expand the node closest to the goal state, i.e. expand the node with
lower h value.
Example:
Question. Find the path from S to G using greedy search. The heuristic value h of
each node is shown below the name of the node.
Solution. Starting from S, we can traverse to A(h=9) or D(h=5). We choose D, as it
has the lower heuristic cost. Now from D, we can move to B(h=4) or E(h=3). We
choose E with lower heuristic cost. Finally, from E, we go to G(h=0). This entire
traversal is shown in the search tree below, in blue.

Path:   S -> D -> E -> G


Advantage: Works well with informed search problems, with fewer steps to reach a
goal.
Disadvantage: Can turn into unguided DFS in the worst case.
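A minimal sketch of greedy best-first search follows; the graph and h values are assumptions arranged to reproduce the path above.

```python
# A hedged greedy best-first search sketch ordered by h(x) only.
import heapq

h = {"S": 7, "A": 9, "D": 5, "B": 4, "E": 3, "C": 2, "G": 0}
graph = {"S": ["A", "D"], "A": ["B"], "D": ["B", "E"],
         "B": ["C"], "E": ["G"], "C": ["G"]}

def greedy_search(graph, h, start, goal):
    frontier = [(h[start], start, [start])]   # ordered by heuristic alone
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nb in graph.get(node, []):
            heapq.heappush(frontier, (h[nb], nb, path + [nb]))
    return None

print(greedy_search(graph, h, "S", "G"))      # -> ['S', 'D', 'E', 'G']
```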

A* Tree Search

A* Tree Search, or simply A* Search, combines the strengths of uniform-cost
search and greedy search. In this search, the heuristic is the summation of the
cost in UCS, denoted by g(x), and the cost in greedy search, denoted by h(x). The
summed cost is denoted by f(x).
Heuristic: The following points should be noted with respect to heuristics in A*
search.
· Here, h(x) is called the forward cost, and is an estimate of the distance of the
current node from the goal node.
· And, g(x) is called the backward cost, and is the cumulative cost of a node from
the root node.
· A* search is optimal only when, for all nodes, the forward cost h(x) for a
node underestimates the actual cost h*(x) to reach the goal. This property of the
A* heuristic is called admissibility.
Admissibility: h(x) <= h*(x) for all nodes x.
Strategy: Choose the node with lowest f(x) value.
Example:
Question. Find the path to reach from S to G using A* search.

Solution. Starting from S, the algorithm computes g(x) + h(x) for all nodes in the
fringe at each step, choosing the node with the lowest sum. The entire working is
shown in the table below.
Note that in the fourth set of iterations, we get two paths with an equal summed
cost f(x), so we expand them both in the next set. The path with the lower cost on
further expansion is the chosen path.

Path                     h(x)   g(x)        f(x)
S                        7      0           7
S -> A                   9      3           12
S -> D                   5      2           7
S -> D -> B              4      2 + 1 = 3   7
S -> D -> E              3      2 + 4 = 6   9
S -> D -> B -> C         2      3 + 2 = 5   7
S -> D -> B -> E         3      3 + 1 = 4   7
S -> D -> B -> C -> G    0      5 + 4 = 9   9
S -> D -> B -> E -> G    0      4 + 3 = 7   7

Path:   S -> D -> B -> E -> G


Cost:   7
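The sketch below orders the fringe by f(x) = g(x) + h(x); the edge weights and heuristic values are assumptions recovering the document's missing figure, taken to be consistent with the table above so the result reproduces the answer.

```python
# A hedged A* tree-search sketch ordered by f(x) = g(x) + h(x).
import heapq

h = {"S": 7, "A": 9, "D": 5, "B": 4, "E": 3, "C": 2, "G": 0}
graph = {"S": [("A", 3), ("D", 2)], "D": [("B", 1), ("E", 4)],
         "B": [("C", 2), ("E", 1)], "C": [("G", 4)], "E": [("G", 3)]}

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]       # (f, g, node, path)
    while frontier:
        f, g, node, path = heapq.heappop(frontier)   # lowest f first
        if node == goal:
            return g, path
        for nb, step in graph.get(node, []):
            heapq.heappush(frontier,
                           (g + step + h[nb], g + step, nb, path + [nb]))
    return None

print(a_star(graph, h, "S", "G"))   # -> (7, ['S', 'D', 'B', 'E', 'G'])
```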

A* Graph Search
 A* tree search works well, except that it takes time re-exploring
branches it has already explored. In other words, if the same node is
expanded twice in different branches of the search tree, A* search might
explore both of those branches, thus wasting time.
 A* Graph Search, or simply Graph Search, removes this limitation by
adding this rule: do not expand the same node more than once.
 Heuristic. Graph search is optimal only when the forward cost between
two successive nodes A and B, given by h(A) - h(B), is less than or equal
to the backward cost between those two nodes, g(A -> B). This property of
the graph search heuristic is called consistency.
Consistency: h(A) - h(B) <= g(A -> B) for every pair of successive nodes A and B.
Example
Question. Use graph search to find path from S to G in the following graph.
Solution. We solve this question pretty much the same way we solved last question,
but in this case, we keep a track of nodes explored so that we don’t re-explore them.

Path:   S -> D -> B -> C -> E -> G


Cost:   7

5. Heuristic Function
Heuristic function: A heuristic is a function used in informed search to find the
most promising path. It takes the current state of the agent as its input and
produces an estimate of how close the agent is to the goal. The heuristic method
might not always give the best solution, but it is guaranteed to find a good
solution in reasonable time. A heuristic function estimates how close a state is to
the goal. It is represented by h(n), and it estimates the cost of an optimal path
between the pair of states. The value of the heuristic function is always positive.
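The Manhattan and Euclidean distances mentioned earlier are easy to write down; the sketch below assumes grid cells given as (x, y) tuples.

```python
# Two common grid heuristics (coordinates are illustrative (x, y) tuples).
import math

def manhattan(node, goal):
    """Sum of axis distances; admissible on a 4-connected grid."""
    return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

def euclidean(node, goal):
    """Straight-line distance; admissible when diagonal moves are allowed."""
    return math.hypot(node[0] - goal[0], node[1] - goal[1])

print(manhattan((0, 0), (3, 4)))   # -> 7
print(euclidean((0, 0), (3, 4)))   # -> 5.0
```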

Heuristic Search Techniques in AI


 Best-First Search.
 A* Search.
 Bidirectional Search.
 Beam Search.
 Simulated Annealing.
 Hill Climbing.

6. Hill Climbing
Hill Climbing is a heuristic search used for mathematical optimization problems in
the field of Artificial Intelligence.
Given a large set of inputs and a good heuristic function, it tries to find a sufficiently
good solution to the problem. This solution may not be the global maximum.
 In the above definition, mathematical optimization problems imply
that hill climbing solves problems where we need to maximize or
minimize a given real function by choosing values from the given inputs.
Example: the travelling salesman problem, where we need to minimize the
distance traveled by the salesman.
 'Heuristic search' means that this search algorithm may not find the
optimal solution to the problem. However, it will give a good solution
in reasonable time.
 A heuristic function is a function that ranks all the possible
alternatives at any branching step in a search algorithm based on the
available information. It helps the algorithm to select the best route out of
the possible routes.

Features of Hill Climbing

1. Variant of generate and test algorithm: It is a variant of the generate and
test algorithm. The generate and test algorithm is as follows:

a. Generate possible solutions.
b. Test to see if this is the expected solution.
c. If the solution has been found, quit; else go to step a.

Hence we call hill climbing a variant of the generate and test algorithm, as
it takes the feedback from the test procedure. This feedback is then
utilized by the generator in deciding the next move in the search space.
2. Uses the greedy approach: At any point in the state space, the search
moves only in the direction which optimizes the cost function, with the
hope of finding the optimal solution at the end.

Types of Hill Climbing

1. Simple Hill climbing: It examines the neighboring nodes one by one
and selects the first neighboring node which optimizes the current cost as
the next node.

Algorithm for Simple Hill climbing:

Step 1: Evaluate the initial state. If it is a goal state, then stop and return
success. Otherwise, make the initial state the current state.
Step 2: Loop until a solution state is found or there are no new
operators present which can be applied to the current state.
a) Select an operator that has not yet been applied to the current state and
apply it to produce a new state.
b) Perform these steps to evaluate the new state:
        i. If the new state is a goal state, then stop and return success.
        ii. If it is better than the current state, then make it the current state and
proceed further.
        iii. If it is not better than the current state, then continue in the loop
until a solution is found.

Step 3: Exit.

2. Steepest-Ascent Hill climbing: It first examines all the neighboring nodes
and then selects the node closest to the solution state as the next node.

Step 1: Evaluate the initial state. If it is a goal state, then exit; else make the
current state the initial state.
Step 2: Repeat these steps until a solution is found or the current state does not
change.
i. Let 'target' be a state such that any successor of the current state will be better
than it.
ii. For each operator that applies to the current state:
     a. apply the new operator and create a new state
     b. evaluate the new state
        c. if this state is a goal state then quit, else compare it with 'target'
        d. if this state is better than 'target', set this state as 'target'
        e. if 'target' is better than the current state, set the current state to 'target'
Step 3: Exit
3. Stochastic hill climbing: It does not examine all the neighboring
nodes before deciding which node to select. It just selects a neighboring
node at random and decides (based on the amount of improvement in
that neighbor) whether to move to that neighbor or to examine another.
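A minimal sketch of the simple variant is given below; the objective function and neighborhood are assumptions for illustration.

```python
# A hedged simple hill-climbing sketch (objective/neighbors are illustrative).
def hill_climb(start, objective, neighbors):
    current = start
    while True:
        # simple variant: take the first neighbor that improves the objective
        better = next((n for n in neighbors(current)
                       if objective(n) > objective(current)), None)
        if better is None:
            return current          # a local (not necessarily global) maximum
        current = better

objective = lambda x: -(x - 7) ** 2            # single peak at x = 7
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(0, objective, neighbors))     # -> 7
```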

State Space diagram for Hill Climbing

The state space diagram is a graphical representation of the set of states our search
algorithm can reach versus the value of our objective function (the function which
we wish to maximize).
X-axis: denotes the state space, i.e. the states or configurations our algorithm may
reach.
Y-axis: denotes the values of the objective function corresponding to a particular
state. The best solution will be the state where the objective function has its
maximum value (global maximum).

7. A* Search algorithm
The A* Search algorithm is one of the best and most popular techniques used in
path-finding and graph traversals.
Why A* Search Algorithm?
Informally speaking, the A* Search algorithm, unlike other traversal techniques,
has "brains". It is a really smart algorithm, which separates it from the other
conventional algorithms. This is explained in detail in the sections below.
It is also worth mentioning that many games and web-based maps use this
algorithm to find the shortest path very efficiently (by approximation).

Explanation
Consider a square grid having many obstacles, where we are given a starting cell
and a target cell. We want to reach the target cell (if possible) from the starting cell
as quickly as possible. Here the A* Search algorithm comes to the rescue.
What the A* Search algorithm does is that at each step it picks the node according
to a value 'f', which is a parameter equal to the sum of two other parameters, 'g'
and 'h'. At each step it picks the node/cell having the lowest 'f', and processes that
node/cell.
We define 'g' and 'h' as simply as possible below:
g = the movement cost to move from the starting point to a given square on the
grid, following the path generated to get there.
h = the estimated movement cost to move from that given square on the grid to
the final destination. This is often referred to as the heuristic, which is nothing but
a kind of smart guess. We really don't know the actual distance until we find the
path, because all sorts of things can be in the way (walls, water, etc.). There can
be many ways to calculate this 'h', which are discussed in later sections.
So suppose, as in the figure below, we want to reach the target cell from the
source cell; the A* Search algorithm would then follow the path shown. Note
that the figure is made by considering Euclidean distance as the heuristic.
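The grid version can be sketched as below; the grid, the obstacles, and the use of Manhattan distance (instead of the figure's Euclidean distance, to keep this 4-connected example admissible) are all assumptions.

```python
# A hedged grid A* sketch with obstacles (grid layout is illustrative).
import heapq

def a_star_grid(grid, start, target):
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - target[0]) + abs(c[1] - target[1])  # Manhattan
    frontier = [(h(start), 0, start, [start])]     # (f, g, cell, path)
    seen = set()
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == target:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],      # 1 = obstacle
        [0, 0, 0]]
print(a_star_grid(grid, (0, 0), (2, 0)))   # routes around the obstacles
```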
DEFINE IN DETAIL:
1. Constraint Satisfaction

 A constraint satisfaction problem (or CSP) is a special kind of problem
that satisfies some additional structural properties beyond those of general
problems.
 In a CSP, the states are defined by the values of a set of variables, and
the goal test specifies a set of constraints that the values must follow.
 A CSP solution gives values for all the variables such that the
constraints are satisfied.
For example: the 8-queens problem can be formulated as a CSP. Here, the
variables are the locations of each of the eight queens, the squares on the
board denote the possible values, and the constraint is that no two queens
can be placed in the same row, column or diagonal (a small backtracking
sketch follows).
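The sketch referenced above formulates N-queens as a CSP solved by backtracking: the variables are one queen per row, the values are columns, and the constraint check rejects shared columns and diagonals.

```python
# A hedged CSP backtracking sketch for the N-queens problem.
def solve_queens(n, placed=()):
    row = len(placed)                 # next row to fill (one queen per row)
    if row == n:
        return placed                 # all constraints satisfied
    for col in range(n):
        # constraint: no shared column or diagonal with earlier queens
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            result = solve_queens(n, placed + (col,))
            if result is not None:
                return result
    return None                       # dead end; backtrack

print(solve_queens(8))                # e.g. (0, 4, 7, 5, 2, 6, 1, 3)
```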

2. Means-Ends Analysis

Most of the search strategies either reason forward or backward; however,
often a mixture of the two directions is appropriate. Such a mixed strategy would
make it possible to solve the major parts of a problem first and then solve the
smaller problems that arise when combining them together. Such a technique is
called "Means-Ends Analysis".

The means-ends analysis process centers around finding the difference
between the current state and the goal state. The problem space of means-ends
analysis has an initial state and one or more goal states, a set of operators with a
set of preconditions for their application, and a difference function that computes
the difference between two states s(i) and s(j). A problem is solved using
means-ends analysis by:

1. Comparing the current state s1 to a goal state s2 and computing their
difference D12.
2. Selecting some recommended operator OP whose preconditions are satisfied,
in order to reduce the difference D12.
3. Applying the operator OP if possible. If not, the current state is saved, a
subgoal is created, and means-ends analysis is applied recursively to reduce the
subgoal.
4. If the subgoal is solved, the state is restored and work resumes on the original
problem.
(The first AI program to use means-ends analysis was GPS, the General
Problem Solver.)
Means-ends analysis is useful for many human planning activities.
Consider the example of planning for an office worker. Suppose we have a
difference table of three rules (a rough code sketch follows the rules):
1. If in our current state we are hungry, and in our goal state we are not
hungry, then either the "visit hotel" or "visit canteen" operator is
recommended.
2. If in our current state we do not have money, and in our goal state we
have money, then the "visit our bank" operator or the "visit secretary"
operator is recommended.
3. If in our current state we do not know where something is, and in our goal
state we do know, then either the "visit office enquiry", "visit secretary" or
"visit co-worker" operator is recommended.

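The difference table above can be driven mechanically; the sketch below is a loose illustration where states are sets of facts and the operators (with invented preconditions and effects) stand in for the table's rules.

```python
# A hedged means-ends analysis sketch; operator definitions are assumptions.
operators = {
    "visit canteen":        {"pre": {"have money"}, "add": {"not hungry"}},
    "visit bank":           {"pre": set(),          "add": {"have money"}},
    "visit office enquiry": {"pre": set(),          "add": {"know location"}},
}

def mea(state, goals):
    plan = []
    def achieve(fact):
        if fact in state:
            return                            # no difference; nothing to do
        # pick an operator whose add-list reduces this difference
        name, op = next((n, o) for n, o in operators.items()
                        if fact in o["add"])
        for pre in op["pre"]:
            achieve(pre)                      # recursively satisfy subgoals
        state.update(op["add"])
        plan.append(name)
    for fact in goals:
        achieve(fact)
    return plan

print(mea(set(), ("not hungry", "know location")))
# -> ['visit bank', 'visit canteen', 'visit office enquiry']
```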
3. Simulated Annealing
DIFFERENCE BETWEEN:
 Informed and uninformed search

 DFS AND BFS

Sr. No. | Key | BFS | DFS
1 | Definition | BFS stands for Breadth First Search. | DFS stands for Depth First Search.
2 | Data structure | BFS uses a queue to find the shortest path. | DFS uses a stack to find the shortest path.
3 | Source | BFS is better when the target is closer to the source. | DFS is better when the target is far from the source.
4 | Suitability for decision trees | As BFS considers all neighbours, it is not suitable for the decision trees used in puzzle games. | DFS is more suitable for decision trees: with one decision, we need to traverse further to augment the decision; if we reach the conclusion, we have won.
5 | Speed | BFS is slower than DFS. | DFS is faster than BFS.
6 | Time complexity | Time complexity of BFS is O(V+E), where V is vertices and E is edges. | Time complexity of DFS is also O(V+E), where V is vertices and E is edges.

 Hill climbing and simulated annealing


FOURTH CHAPTER

1. Write a short note on game playing?

Game playing is a discrete, structured task, which made it an early favorite
of AI researchers. Most work focuses on games of perfect information such
as tic-tac-toe, chess, backgammon, etc. Initially it was thought that the
computer would play a successful game by cleverly choosing among
different strategies from a database. Experience has shown that
straightforward search does better. This is what Deep Blue used in 1997 to
beat grandmaster Kasparov.

Game playing can be formalized as search as follows: the initial state is the
initial board position; the operators define the set of legal moves from any
position; the terminal test determines when the game is over; and the utility
function gives a numeric outcome for the game.

Game playing is an important domain of artificial intelligence. Games don't
require much knowledge; the only knowledge we need to provide is the
rules, legal moves and the conditions of winning or losing the game.

Both players try to win the game, so both of them try to make the best move
possible at each turn. Searching techniques like BFS (Breadth First Search)
are not suitable for this, as the branching factor is very high, so searching
will take a lot of time. Hence, we need other search procedures that improve –
 the generate procedure, so that only good moves are generated;
 the test procedure, so that the best move can be explored first.

The most common search technique in game playing is the Minimax search
procedure. It is a depth-first, depth-limited search procedure. It is used for
games like chess and tic-tac-toe.

2. Explain the Minimax algorithm with one example.

Minimax is an algorithm that searches game trees. It assumes that the
players take alternate moves. It uses a utility function whose values are
good for player 1 when they are large and good for player 2 when they are
small. Thus, the first player's (MAX's) goal is to select a move that
maximizes the utility function, and the second player's (MIN's) goal is to
select a move that minimizes the utility function (hence the name of the
algorithm). Minimax maximizes the utility under the assumption that the
opponent will play perfectly. The time complexity of minimax is
O(b^m) and the space complexity is O(bm), where b is the number of legal
moves at each point and m is the maximum depth of the tree.

N-move look-ahead is a variation of minimax that is applied when there is
no time to search all the way to the leaves of the tree. The search stops
earlier and a heuristic evaluation function is applied to the leaves instead
of a utility function. The evaluation function should agree with the utility
function on terminal states, should not take too long to calculate, and
should reflect the actual chances of winning (e.g., for chess, use the
difference in the number of pieces belonging to MAX and MIN, or even
assign a different weight to each piece).
Problems: evaluation to a fixed ply-depth can be misleading (the horizon
effect problem).

The minimax algorithm uses two functions –

MOVEGEN: generates all the possible moves from the current position.
STATICEVALUATION: returns a value indicating the goodness of a position
from the viewpoint of the two players.

This algorithm is for a two-player game, so we call the first player PLAYER1
and the second player PLAYER2. The value of each node is backed up
from its children. For PLAYER1 the backed-up value is the maximum value
of its children, and for PLAYER2 the backed-up value is the minimum value
of its children. It provides the most promising move to PLAYER1, assuming
that PLAYER2 makes the best move. It is a recursive algorithm, as the
same procedure occurs at each level.
Figure 1: Before backing-up of values
 

Figure 2: After backing-up of values

We assume that PLAYER1 will start the game. Four levels are generated. The
values of nodes H, I, J, K, L, M, N, O are provided by the STATICEVALUATION
function. Level 3 is a maximizing level, so all nodes of level 3 take the
maximum values of their children. Level 2 is a minimizing level, so all its
nodes take the minimum values of their children. This process continues.
The value of A is 23. That means A should choose the move to C to win.
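The backing-up of values can be sketched compactly over an explicit tree; the leaf values below are illustrative, not those of the figures.

```python
# A hedged minimax sketch over an explicit game tree (values illustrative).
def minimax(node, maximizing):
    if isinstance(node, int):           # leaf: static evaluation value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Root is MAX; its children are MIN nodes whose children are leaves.
tree = [[3, 12], [2, 4], [14, 5]]
print(minimax(tree, True))   # -> max(min(3,12), min(2,4), min(14,5)) = 5
```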

3. Explain Alpha-beta cut-off (pruning algorithm) with one example.

o Alpha-beta pruning is a modified version of the minimax algorithm. It is an
optimization technique for the minimax algorithm.
o As we have seen, the number of game states the minimax search algorithm
has to examine is exponential in the depth of the tree. While we cannot
eliminate the exponent, we can cut it in half. There is a technique
by which, without checking each node of the game tree, we can compute the
correct minimax decision; this technique is called pruning. It involves
two threshold parameters, alpha and beta, for future expansion, so it is
called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it
prunes not only the tree leaves but entire sub-trees.
o The two parameters can be defined as:

1. Alpha: The best (highest-value) choice we have found so far at any
point along the path of the Maximizer. The initial value of alpha is -∞.
2. Beta: The best (lowest-value) choice we have found so far at any point
along the path of the Minimizer. The initial value of beta is +∞.
o Alpha-beta pruning returns the same move as the standard minimax
algorithm does, but it removes all the nodes which do not really affect the
final decision but make the algorithm slow. Hence, by pruning these nodes,
it makes the algorithm fast.

Condition for Alpha-beta pruning:

The main condition required for alpha-beta pruning (i.e., for a cut-off to occur) is:

1. α >= β

Key points about alpha-beta pruning:

o The Max player will only update the value of alpha.


o The Min player will only update the value of beta.
o While backtracking the tree, the node values will be passed to upper nodes
instead of values of alpha and beta.
o We will only pass the alpha, beta values to the child nodes.

Working of Alpha-Beta Pruning:

Let's take an example of a two-player search tree to understand the working of
alpha-beta pruning.
Step 1: At the first step, the Max player makes the first move from node A, where
α = -∞ and β = +∞. These values of alpha and beta are passed down to node B,
where again α = -∞ and β = +∞, and node B passes the same values to its child D.

Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of
α is compared first with 2 and then with 3, and max(2, 3) = 3 becomes the value of
α at node D; the node value will also be 3.

Step 3: Now the algorithm backtracks to node B, where the value of β will change,
as this is Min's turn. Now β = +∞ is compared with the available subsequent node's
value, i.e. min(∞, 3) = 3; hence at node B now α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is
node E, and the values α = -∞ and β = 3 are passed down as well.

Step 4: At node E, Max takes its turn, and the value of alpha changes. The
current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E
α = 5 and β = 3, where α >= β, so the right successor of E is pruned, and the
algorithm will not traverse it. The value at node E becomes 5.
Step 5: At the next step, the algorithm again backtracks the tree, from node B to
node A. At node A, the value of alpha is changed; the maximum available value is
3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right
successor of A, which is node C.

At node C, α = 3 and β = +∞, and the same values are passed on to node F.

Step 6: At node F, the value of α is again compared with its left child, which is 0,
and max(3, 0) = 3; it is then compared with its right child, which is 1, and
max(3, 1) = 3, so α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here
the value of beta is changed by comparing it with 1, so min(∞, 1) = 1. Now at C,
α = 3 and β = 1, and again the condition α >= β is satisfied, so the next child of C,
which is G, is pruned, and the algorithm does not compute the entire sub-tree G.
Step 8: C now returns the value 1 to A. Here the best value for A is max(3, 1) = 3.
Following is the final game tree, showing the nodes that were computed and the
nodes that were never computed. Hence the optimal value for the maximizer is
3 for this example.
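The cut-off rule α >= β can be sketched as below; this mirrors the minimax sketch above, again with illustrative leaf values.

```python
# A hedged alpha-beta pruning sketch (tree values are illustrative).
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, int):               # leaf: static evaluation value
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)       # Max only updates alpha
            if alpha >= beta:
                break                       # cut-off: prune remaining children
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)             # Min only updates beta
        if alpha >= beta:
            break                           # cut-off
    return value

tree = [[3, 12], [2, 4], [14, 5]]
print(alphabeta(tree, True))   # -> 5, the same answer as plain minimax
```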
Move Ordering in Alpha-Beta pruning:

The effectiveness of alpha-beta pruning is highly dependent on the order in which
each node is examined. Move ordering is an important aspect of alpha-beta pruning.

It can be of two types:

o Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune
any of the leaves of the tree and works exactly like the minimax algorithm. In this
case, it also consumes more time because of the alpha-beta bookkeeping; such a
move ordering is called worst ordering. Here, the best move occurs on the
right side of the tree. The time complexity for such an order is O(b^m).
o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when lots of
pruning happens in the tree, and the best moves occur on the left side of the tree.
We apply DFS, so it searches the left of the tree first and can go twice as deep as
the minimax algorithm in the same amount of time. The complexity for ideal
ordering is O(b^(m/2)).

Rules to find good ordering:

Following are some rules to find good ordering in alpha-beta pruning:

o Try the best move from the shallowest node first.
o Order the nodes in the tree such that the best nodes are checked first.
o Use domain knowledge while finding the best move. For example, in chess, try
this order: captures first, then threats, then forward moves, then backward moves.
o We can bookkeep the states, as there is a possibility that states may repeat.

4. What is planning? Explain goal stack planning with one example.


 Planning in Artificial Intelligence is about the decision-making tasks
performed by robots or computer programs to achieve a specific
goal.
 The execution of planning is about choosing a sequence of actions with
a high likelihood of completing the specific task.

Goal stack planning

This is one of the most important planning algorithms, and it is specifically
used by STRIPS.
 A stack is used in the algorithm to hold the actions and goals to
satisfy. A knowledge base is used to hold the current state and actions.
 The goal stack is similar to a node in a search tree, where branches
are created if there is a choice of action.

The important steps of the algorithm are as stated below (a small sketch of the
loop follows the steps):

i. Start by pushing the original goal onto the stack. Repeat the following until the
stack becomes empty. If the stack top is a compound goal, push its unsatisfied
subgoals onto the stack.
ii. If the stack top is a single unsatisfied goal, replace it by an action and
push the action's preconditions onto the stack to satisfy the condition.
iii. If the stack top is an action, pop it from the stack, execute it and change the
knowledge base by the effects of the action.
iv. If the stack top is a satisfied goal, pop it from the stack.
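The loop can be sketched as below for a tiny blocks-world fragment; the operator definitions and start state are assumptions, not the document's figure.

```python
# A hedged goal-stack planning sketch in the STRIPS spirit (all assumptions).
actions = {
    "unstack(A,B)": {"pre": {"on(A,B)", "clear(A)"},
                     "add": {"holding(A)", "clear(B)"},
                     "del": {"on(A,B)", "clear(A)"}},
    "putdown(A)":   {"pre": {"holding(A)"},
                     "add": {"ontable(A)", "clear(A)"},
                     "del": {"holding(A)"}},
}

def goal_stack_plan(state, goals):
    stack, plan = list(goals), []
    while stack:
        top = stack.pop()
        if isinstance(top, tuple):                 # compound goal
            unsatisfied = [g for g in top if g not in state]
            if unsatisfied:
                stack.append(top)                  # re-check it later
                stack.extend(unsatisfied)          # push unsatisfied subgoals
        elif top in actions:                       # an action: execute it
            state = (state - actions[top]["del"]) | actions[top]["add"]
            plan.append(top)
        elif top not in state:                     # single unsatisfied goal
            act = next(a for a, d in actions.items() if top in d["add"])
            stack.append(act)                      # replace goal by an action
            stack.extend(g for g in actions[act]["pre"] if g not in state)
        # a goal already satisfied is simply popped
    return plan

start = {"on(A,B)", "clear(A)", "ontable(B)"}
print(goal_stack_plan(start, [("ontable(A)", "ontable(B)")]))
# -> ['unstack(A,B)', 'putdown(A)']
```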
5. Explain the different methods of planning.

Blocks-world planning problem

 The blocks-world problem is known as the Sussman anomaly.
 Non-interleaved planners of the early 1970s were unable to solve this
problem; hence it is considered anomalous.
 When two subgoals G1 and G2 are given, a non-interleaved planner produces
either a plan for G1 concatenated with a plan for G2, or vice versa.
 In the blocks-world problem, three blocks labeled 'A', 'B', 'C' are allowed to rest
on a flat surface. The given condition is that only one block can be moved
at a time to achieve the goal.
 The start state and goal state are shown in the following diagram.

Components of a Planning System

Planning consists of the following important steps:

 Choose the best rule to apply next, based on the best available heuristics.
 Apply the chosen rule to compute the new problem state.
 Detect when a solution has been found.
 Detect dead ends so that they can be abandoned and the system's effort
directed in more fruitful directions.
 Detect when an almost correct solution has been found.

Goal stack planning

This is the same goal stack planning algorithm, with the same steps, described
in Question 4 above.
Non-linear planning
This planning uses a goal stack, but its search space includes all possible
subgoal orderings. It handles goal interactions by the interleaving method.

Advantage of non-linear planning

Non-linear planning may find an optimal solution with respect to plan length
(depending on the search strategy used).

Disadvantages of non-linear planning

 It has a larger search space, since all possible goal orderings are taken into
consideration.
 The algorithm is more complex to understand.
Algorithm
1. Choose a goal 'g' from the goalset.
2. If 'g' does not match the state, then
 choose an operator 'o' whose add-list matches goal g,
 push 'o' on the opstack,
 add the preconditions of 'o' to the goalset.
3. While all preconditions of the operator on top of the opstack are met in the state:
 pop operator o from the top of the opstack,
 state = apply(o, state),
 plan = [plan; o].

Other Planning Techniques


Triangle tables
 Provide a way of recording the goals that each operator is expected to satisfy,
as well as the goals that must be true for it to execute correctly.
Meta-planning
 A technique for reasoning not just about the problem being solved but also about
the planning process itself.
Macro-operators
 Allow a planner to build new operators that represent commonly used sequences
of operators.
Case-based planning
 Re-uses old plans to make new ones.
6. Explain non-linear planning, hierarchical planning, and reactive systems
planning.
7. Examples based on Minimax and Alpha-beta pruning:
SAME AS QUESTIONS 2 AND 3
