UNIT-2-AI-Notes
Uploaded by vanamsanthosh54

Unit-2: Adversarial Search: Games, Optimal Decisions in Games, Alpha-Beta Pruning, Imperfect Real-Time Decisions.

Adversarial Search: In an environment with multiple agents, each agent is an opponent of the others and plays against them, choosing its actions while considering the actions of the other agents and their effect on its own performance. Searches in which agents with conflicting goals explore the same search space for a solution are called adversarial searches, also known as games.
Adversarial search examines the problems that arise when we try to plan ahead in a world where other agents are planning against us.

Game: A situation in which two or more agents, each the opponent of the others, search for a solution or goal in the same search space is called game playing.

Two main factors help to model and solve games in AI:
• A search problem
• A heuristic evaluation function

Types of Games in AI:

• Perfect information: A game with perfect information is one in which agents can see the complete board. Agents have all the information about the game, and they can also see each other's moves.
Example: Chess, Checkers, Go, etc.
• Imperfect information: If the agents do not have all the information about the game and are not aware of everything going on, such games are called games with imperfect information. Example: Battleship, blind tic-tac-toe, bridge, etc.
• Deterministic games: Deterministic games follow a strict pattern and set of rules, and there is no randomness associated with them.
Example: Chess, Checkers, Go, Tic-Tac-Toe, etc.

• Non-deterministic games: Non-deterministic games do not follow a strict pattern; they include various unpredictable events, and a factor of chance or luck is introduced by dice or cards. These games are random, and the response to each action is not fixed. Example: Backgammon, Monopoly, Poker, etc.

An adversarial search or game has the following elements:

A game can be defined as a kind of search problem in AI, formalized with the following elements:
• Initial state: It specifies how the game is set up at the start.
• Player(s): It specifies which player has the move in state s.
• Actions(s): It specifies the set of legal moves in state s.
• Result(s, a): The transition model, which specifies the result of taking move a in state s.
• Terminal-Test(s): The terminal test is true if the game is over and false otherwise. States where the game has ended are called terminal states.
• Utility(s, p): A utility function (also called a payoff function) gives the final numeric value for a game that ends in terminal state s for player p. For chess, the outcomes with their values are: a win as +1, a loss as -1, and a draw as 0.

Optimal Decisions in Games:

In adversarial search, the result depends on the players, who jointly decide the outcome of the game. The solution for the goal state should be an optimal one, because each player tries to win the game along the shortest path and under limited time. In this setting two players play the game: one is called MAX and the other MIN.

The minimax procedure works as follows:

• It aims to find the optimal strategy for MAX to win the game.
• It follows the approach of depth-first search.
• In the game tree, the optimal leaf node could appear at any depth of the tree.
• Minimax values are propagated up the tree from the terminal nodes to the root.
In a given game tree, the optimal strategy can be determined from the minimax value of each node, written MINIMAX(n). MAX prefers to move to a state of maximum value and MIN prefers to move to a state of minimum value, so:

MINIMAX(s) =
   UTILITY(s)                                        if TERMINAL-TEST(s)
   max over a in Actions(s) of MINIMAX(RESULT(s, a)) if PLAYER(s) = MAX
   min over a in Actions(s) of MINIMAX(RESULT(s, a)) if PLAYER(s) = MIN

Types of adversarial search algorithms:

• Minimax algorithm
• Alpha-beta pruning
Mini-Max Algorithm in Artificial Intelligence:
• The mini-max algorithm is a recursive, backtracking algorithm used in decision-making and game theory. It provides an optimal move for the player assuming that the opponent also plays optimally.
• The two players are opponents of each other: each fights so that the opponent gets the minimum benefit while it gets the maximum benefit. MAX selects the maximized value and MIN selects the minimized value.
• The minimax algorithm performs a depth-first exploration of the complete game tree.
• The minimax algorithm proceeds all the way down to the terminal nodes of the tree and then backs the values up the tree as the recursion unwinds.

Min-Max Algorithm:

Step 1: The algorithm generates the entire game tree and applies the utility function to obtain the utility values for the terminal states. Let node A be the initial state of the tree. The initial value for MAX is -∞ and for MIN is +∞.

Step 2: Compare the maximizer's initial value -∞ with each terminal value and determine the values of the nodes one level up. It finds the maximum among them all.
• For node D: max(-1, -∞) => max(-1, 4) = 4
• For node E: max(2, -∞) => max(2, 6) = 6
• For node F: max(-3, -∞) => max(-3, -5) = -3
• For node G: max(0, -∞) => max(0, 7) = 7
Step 3: Compare the minimizer's initial value +∞ with the values of nodes B and C to find the third-layer node values.
• For node B: min(4, 6) = 4
• For node C: min(-3, 7) = -3

Step 4: The maximizer again chooses the maximum of the values of nodes B and C to find the value of the root node A.
In this game tree there are only 4 layers, so we reach the root node immediately; in real games there will be many more layers.
• For node A: max(4, -3) = 4
That was the complete workflow of the mini-max two-player game.
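The worked example above can be sketched in a few lines of Python (a minimal illustration, not part of the original notes; the game tree is represented as nested lists whose leaves are the terminal utilities):

```python
def minimax(node, maximizing):
    """Return the minimax value of a game-tree node.

    A node is either a number (terminal utility) or a list of child nodes.
    """
    if not isinstance(node, list):      # terminal state: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Game tree from the worked example:
# A (MAX) -> B, C (MIN) -> D, E, F, G (MAX) -> terminal utilities
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, True))  # prints 4, the value found for root node A
```

Running it reproduces the step-by-step values: D=4, E=6, F=-3, G=7, B=4, C=-3, A=4.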
Properties of the Mini-Max algorithm:
• Complete: The min-max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
• Optimal: The min-max algorithm is optimal if both opponents play optimally.
• Time complexity: Since it performs DFS over the game tree, the time complexity of the min-max algorithm is O(b^m), where b is the branching factor of the game tree and m is its maximum depth.
• Space complexity: The space complexity of the mini-max algorithm is similar to DFS: O(bm).

Limitations of the minimax algorithm:

• The main drawback of the minimax algorithm is that it gets really slow for complex games such as chess, Go, etc.
• Such games have a huge branching factor, and the player has many choices to decide among.

This limitation of the minimax algorithm can be addressed with alpha-beta pruning.
Alpha-Beta Pruning:
• Alpha-beta pruning is a modified version of the minimax algorithm: an optimization technique that computes the correct minimax decision without checking every node of the game tree, often cutting away large subtrees.

The technique of computing the correct minimax decision without checking each node of the game tree is called pruning.
It involves two threshold parameters, alpha and beta, for future expansion, hence the name alpha-beta pruning.
Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only tree leaves but entire subtrees.

• The two parameters are defined as:

a) Alpha: The best (highest-value) choice found so far at any point along the path for the Maximizer. The initial value of alpha is -∞.
b) Beta: The best (lowest-value) choice found so far at any point along the path for the Minimizer. The initial value of beta is +∞.

Alpha-beta pruning applied to a standard minimax tree returns the same move as the standard algorithm does, but it removes all the nodes that do not affect the final decision and only slow the algorithm down. Pruning these nodes therefore makes the algorithm fast.

Condition for Alpha-beta pruning:


The main condition required for alpha-beta pruning is: α >= β

Key points about alpha-beta pruning:


• The MAX player only updates the value of alpha.
• The MIN player only updates the value of beta.
• While backtracking up the tree, node values (not the alpha and beta values) are passed to the upper nodes.
• The alpha and beta values are passed only to the child nodes.

Alpha-Beta Pruning algorithm:

Step 1: The MAX player makes the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.

Step 2: At node D it is MAX's turn, so the value of α is calculated. The value of α is compared first with 2 and then with 3, and max(2, 3) = 3 becomes the value of α at node D; the node value is also 3.

Step 3: The algorithm now backtracks to node B, where the value of β changes, as it is MIN's turn. Now β = +∞ is compared with the available successor node values, i.e. min(∞, 3) = 3; hence at node B, α = -∞ and β = 3.

Next, the algorithm traverses the other successor of node B, node E, and the values α = -∞ and β = 3 are passed down as well.

Step 4: At node E it is MAX's turn, so the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right successor of E is pruned: the algorithm will not traverse it, and the value at node E is 5.

Pseudo-code for Alpha-beta Pruning:

function minimax(node, depth, alpha, beta, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node
    if maximizingPlayer then            // for the Maximizer
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, false)
            maxEva = max(maxEva, eva)
            alpha = max(alpha, maxEva)
            if beta <= alpha then
                break                   // beta cut-off
        return maxEva
    else                                // for the Minimizer
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, true)
            minEva = min(minEva, eva)
            beta = min(beta, minEva)
            if beta <= alpha then
                break                   // alpha cut-off
        return minEva
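The pseudo-code above can be turned into runnable Python as follows (a minimal sketch; the nested-list tree representation with numeric leaves is an assumption for illustration):

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing):
    # A node is either a number (static evaluation) or a list of children.
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        best = -math.inf
        for child in node:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:        # beta cut-off: MIN will avoid this branch
                break
        return best
    else:
        best = math.inf
        for child in node:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:        # alpha cut-off: MAX already has better
                break
        return best

# Same tree as the minimax worked example; pruning skips node G entirely.
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(alphabeta(tree, 3, -math.inf, math.inf, True))  # prints 4
```

Note that it returns the same root value (4) as plain minimax while examining fewer nodes.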

Imperfect Real-Time Decisions:

It is not feasible to consider the whole game tree (even with alpha-beta), so programs should cut the search off at some earlier point and apply a heuristic evaluation function to states in the search, effectively turning nonterminal nodes into terminal leaves.
That is, alter minimax or alpha-beta in two ways:

1) replace the utility function with a heuristic evaluation function EVAL, which estimates the position's utility;
2) replace the terminal test with a cutoff test that decides when to apply EVAL.
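The two alterations can be sketched together as a heuristic minimax with a depth cutoff (an illustrative version; `eval_fn` and `successors` here stand in for the problem-specific EVAL and move generator):

```python
def h_minimax(state, depth, maximizing, eval_fn, successors, cutoff=3):
    """Minimax with a cutoff test and heuristic evaluation.

    `eval_fn` replaces UTILITY; the depth check replaces TERMINAL-TEST.
    """
    children = successors(state)
    if depth >= cutoff or not children:   # CUTOFF-TEST
        return eval_fn(state)             # apply EVAL instead of searching on
    values = [h_minimax(c, depth + 1, not maximizing, eval_fn, successors, cutoff)
              for c in children]
    return max(values) if maximizing else min(values)

# Tiny demo: states are numbers, moves increment or double, EVAL is identity.
succ = lambda s: [s + 1, 2 * s] if s < 20 else []
print(h_minimax(1, 0, True, lambda s: s, succ, cutoff=3))  # prints 6
```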

Evaluation function
An evaluation function returns an estimate of the expected utility of the game
from a given position.

How do we design good evaluation functions?

1) The evaluation function should order the terminal states in the same way as the true
utility function.
2) The computation must not take too long.
3) For nonterminal states, the evaluation function should be strongly correlated with the
actual chances of winning.
Features of the state: Most evaluation functions work by calculating various features of the state.
Example: in chess, the number of white pawns, black pawns, white queens, black queens, etc.
Categories: The features, taken together, define various categories (a.k.a. equivalence classes) of states; the states in each category have the same values for all the features.

Any given category will contain some states that lead to wins, draws, or losses; the evaluation function can return a single value that reflects the proportion of states with each outcome.

Two ways to design an evaluation function:

a. Expected value (requires too many categories, and hence too much experience, to estimate). Example:

72% of the states encountered in the two-pawns vs. one-pawn category lead to a win (utility +1), 20% to a loss (0), and 8% to a draw (1/2).

Then a reasonable evaluation for states in the category is the expected value:
(0.72 × 1) + (0.20 × 0) + (0.08 × 1/2) = 0.76.
As with terminal states, the evaluation function need not return actual expected values as long as the ordering of the states is the same.

b. Weighted linear function (most evaluation functions use this).

We can compute separate numerical contributions from each feature and then combine them to find the total value:

EVAL(s) = w1 f1(s) + w2 f2(s) + ... + wn fn(s)

Each wi is a weight and each fi is a feature of the position. For chess, the fi could be the numbers of each kind of piece on the board, and the wi could be the values of the pieces (1 for pawn, 3 for bishop, etc.).
Adding up the values of features involves a strong assumption (that the contribution of each feature is independent of the values of the other features), so current game programs also use nonlinear combinations of features.
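A weighted linear function can be sketched as follows (the piece values are the conventional chess material weights; encoding each feature as the difference between own and opponent piece counts is an assumption for illustration):

```python
# Material weights: standard chess piece values (assumed for illustration).
WEIGHTS = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def eval_material(features):
    """Weighted linear evaluation: EVAL(s) = sum_i w_i * f_i(s).

    `features` maps each piece type to (own count - opponent count).
    """
    return sum(WEIGHTS[piece] * diff for piece, diff in features.items())

# Being up one pawn but down one bishop gives 1*1 + 3*(-1):
print(eval_material({"pawn": 1, "bishop": -1}))  # prints -2
```

In practice the weights would be tuned, and strong programs combine features nonlinearly, as noted above.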
UNIT-2: Constraint Satisfaction Problems: Defining Constraint Satisfaction Problems, Constraint Propagation, Backtracking Search for CSPs, Local Search for CSPs, The Structure of Problems.

Defining Constraint Satisfaction Problems:

The problem-solving technique in which agents operate under given conditions is known as the constraint satisfaction technique. As the name suggests, constraint satisfaction means solving a problem under certain constraints or rules.
Constraint (or condition) satisfaction is a technique in which a problem is solved when its variable values satisfy certain constraints or rules of the problem.

This type of technique leads to a deeper understanding of the problem structure as well as its complexity.

Constraint satisfaction depends on three components:

• Variables X: a set of variables.
• Domain D: a set of domains, the spaces where the variables reside. There is a specific domain for each variable.
• Constraints C: a set of constraints that the variables must satisfy.

Each constraint consists of a pair {scope, rel}:

Scope -> a tuple of the variables that participate in the constraint.
Rel -> a relation that includes the list of values the variables can take to satisfy the constraints of the problem.

Solving Constraint Satisfaction Problems:

The requirements to solve a constraint satisfaction problem (CSP) are:
• A state space
• The notion of a solution.
A state in the state space is defined by assigning values to some or all variables, such as {X1 = v1, X2 = v2, and so on}.

Values can be assigned to variables in three ways:

• Consistent or Legal Assignment: An assignment that does not violate any constraint.
• Complete Assignment: An assignment in which every variable is assigned a value and the solution to the CSP remains consistent.
• Partial Assignment: An assignment that assigns values to only some of the variables.
Types of Domains in CSP:
Two types of domains are used by the variables:

• Discrete Domain: A domain whose values are distinct, countable elements; it may be infinite, such as the set of all integers or all strings, so a variable can take any one of infinitely many values.
• Finite Domain: A domain with a finite number of values for each variable, such as the colors {red, green, blue} in map coloring. A domain of real-valued quantities, by contrast, is called a continuous domain.

Constraint Types in CSP: With respect to the variables, there are basically 3 types of constraints:

• Unary Constraints: The simplest type of constraint, restricting the value of a single variable.
• Binary Constraints: Constraints that relate two variables. For example, the requirement that X2 lie between X1 and X3 can be expressed as two binary constraints, one relating X2 to X1 and one relating X2 to X3.
• Global Constraints: Constraints that involve an arbitrary number of variables.
Some special types of constraints:
• Linear Constraints: Commonly used in linear programming, where each (integer-valued) variable appears only in linear form.
• Non-linear Constraints: Used in non-linear programming, where variables may appear in non-linear form.
Note: A special kind of constraint that arises in the real world is the preference constraint.

Constraint Propagation:
Constraint propagation is a special type of inference that helps reduce the number of legal values for the variables. The idea behind constraint propagation is local consistency.
In local consistency, variables are treated as nodes, and each binary constraint is treated as an arc in the given problem.
The following local consistencies are discussed below:
• Node Consistency: A single variable is node-consistent if all the values in the variable's domain satisfy the unary constraints on the variable.
• Arc Consistency: A variable is arc-consistent if every value in its domain satisfies the binary constraints on the variable.
• Path Consistency: A set of two variables is path-consistent with respect to a third variable if a consistent assignment to the pair can be extended to that third variable while satisfying all the binary constraints. It is a stronger notion than arc consistency.
• K-consistency: This type of consistency is used to define progressively stronger forms of propagation; here we examine the k-consistency of the variables.
CSP Problems:
Constraint satisfaction includes problems that contain constraints to be respected while solving the problem. Examples include:
• Sudoku: The gameplay where the constraint is that no number from 1-9 can be repeated in the same row or column.

Backtracking search for CSPs:

Backtracking search: a depth-first search that chooses values for one variable at a time and backtracks when a variable has no legal values left to assign.

Commutativity: CSPs are all commutative. A problem is commutative if the order of application of any given set of actions has no effect on the outcome.

The backtracking algorithm repeatedly chooses an unassigned variable and then tries all values in the domain of that variable in turn, trying to find a solution. If an inconsistency is detected, BACKTRACK returns failure, causing the previous call to try another value.

There is no need to supply BACKTRACKING-SEARCH with a domain-specific initial state, action function, transition model, or goal test.
BACKTRACKING-SEARCH keeps only a single representation of a state and alters that representation rather than creating new ones.
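A minimal backtracking search can be sketched in Python using the Australia map-coloring CSP that the later examples refer to (the static variable order and the absence of any inference step are simplifications for illustration):

```python
# Australia map-coloring CSP: adjacent regions must differ in color.
NEIGHBORS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLORS = ["red", "green", "blue"]

def consistent(var, value, assignment):
    # A value is legal if no already-assigned neighbor has the same color.
    return all(assignment.get(n) != value for n in NEIGHBORS[var])

def backtrack(assignment):
    if len(assignment) == len(NEIGHBORS):
        return assignment                   # complete, consistent assignment
    var = next(v for v in NEIGHBORS if v not in assignment)  # static order
    for value in COLORS:
        if consistent(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]             # undo and try the next value
    return None                             # no legal value left: backtrack

solution = backtrack({})
print(solution)
```

The heuristics discussed next (MRV, degree, least-constraining-value) would replace the static variable and value orderings used here.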

To solve CSPs efficiently without domain-specific knowledge, we address the following questions:

1) SELECT-UNASSIGNED-VARIABLE: which variable should be assigned next?
   ORDER-DOMAIN-VALUES: in what order should its values be tried?
2) INFERENCE: what inferences should be performed at each step in the search?
3) When the search arrives at an assignment that violates a constraint, can the search avoid repeating this failure?

1. Variable and value ordering: SELECT-UNASSIGNED-VARIABLE

Variable selection: fail first.
Minimum-remaining-values (MRV) heuristic:
Choose the variable with the fewest "legal" values. Also known as the "most constrained variable" or "fail-first" heuristic, it picks the variable that is most likely to cause a failure soon, thereby pruning the search tree. If some variable X has no legal values left, the MRV heuristic will select X and the failure will be detected immediately, avoiding pointless searches through other variables.

E.g. after the assignments WA = red and NT = green, there is only one possible value for SA, so it makes sense to assign SA = blue next rather than assigning Q.
[A powerful guide.]

Degree heuristic: The degree heuristic attempts to reduce the branching factor on future choices by selecting the variable that is involved in the largest number of constraints on other unassigned variables. [A useful tie-breaker.]

E.g. SA is the variable with the highest degree, 5; the other variables have degree 2 or 3; T has degree 0.

ORDER-DOMAIN-VALUES: value selection, fail last. If we are trying to find all the solutions to a problem (not just the first one), then the ordering does not matter.

Least-constraining-value heuristic: prefers the value that rules out the fewest choices for the neighboring variables in the constraint graph. (Try to leave the maximum flexibility for subsequent variable assignments.)
E.g. suppose we have generated the partial assignment WA = red, NT = green and our next choice is for Q. Blue would be a bad choice, because it eliminates the last legal value for Q's neighbor SA; the heuristic therefore prefers red to blue.

The minimum-remaining-values and degree heuristics are domain-independent methods for deciding which variable to choose next in a backtracking search. The least-constraining-value heuristic helps in deciding which value to try first for a given variable.

MAC (Maintaining Arc Consistency) algorithm: [more powerful than forward checking at detecting inconsistencies]. After a variable Xi is assigned a value, the INFERENCE procedure calls AC-3, but instead of a queue of all arcs in the CSP, we start with only the arcs (Xj, Xi) for all unassigned variables Xj that are neighbors of Xi. From there, AC-3 does constraint propagation in the usual way, and if any variable has its domain reduced to the empty set, the call to AC-3 fails and we know to backtrack immediately.
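AC-3 itself can be sketched as follows (a minimal version; the `constraint` predicate and the dictionary representation of domains and neighbors are assumptions for illustration):

```python
from collections import deque

def ac3(domains, neighbors, constraint):
    """AC-3: remove domain values with no consistent partner in a neighbor.

    `constraint(x, y)` is True when values x and y are compatible.
    Returns False if some domain becomes empty (inconsistency detected).
    """
    queue = deque((xi, xj) for xi in neighbors for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        # Values of xi with no supporting value in xj's domain are removed.
        removed = {x for x in domains[xi]
                   if not any(constraint(x, y) for y in domains[xj])}
        if removed:
            domains[xi] -= removed
            if not domains[xi]:
                return False            # empty domain: backtrack
            for xk in neighbors[xi]:    # revise the arcs into xi again
                if xk != xj:
                    queue.append((xk, xi))
    return True

# Two-region map-coloring example: once WA is fixed to red,
# AC-3 removes red from SA's domain.
domains = {"WA": {"red"}, "SA": {"red", "green", "blue"}}
neighbors = {"WA": ["SA"], "SA": ["WA"]}
print(ac3(domains, neighbors, lambda x, y: x != y), domains["SA"])
```

MAC would call this with only the arcs into the just-assigned variable seeded into the queue, rather than all arcs.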
Intelligent backtracking

Chronological backtracking: the BACKTRACKING-SEARCH of Fig. 6.5. When a branch of the search fails, back up to the preceding variable and try a different value for it. (The most recent decision point is revisited.)
e.g.
Suppose we have generated the partial assignment

{Q = red, NSW = green, V = blue, T = red}.

When we try the next variable, SA, we see that every value violates a constraint.
We back up to T and try a new color; this cannot resolve the problem.

Intelligent backtracking: backtrack to a variable that was responsible for making one of the possible values of the next variable (e.g. SA) impossible.

Conflict set for a variable: the set of assignments that are in conflict with some value for that variable.
(e.g. the set {Q = red, NSW = green, V = blue} is the conflict set for SA.)

Backjumping method: backtracks to the most recent assignment in the conflict set.
(e.g. backjumping would jump over T and try a new value for V.)

Local search for CSPs:

Local search algorithms for CSPs use a complete-state formulation: the initial state assigns a value to every variable, and the search changes the value of one variable at a time.
The min-conflicts heuristic: in choosing a new value for a variable, select the value that results in the minimum number of conflicts with other variables.
General local search techniques can be used in local search for CSPs.
The landscape of a CSP under the min-conflicts heuristic usually has a series of plateaux. Simulated annealing and plateau search (i.e. allowing sideways moves to another state with the same score) can help local search find its way off a plateau. This wandering on the plateau can be directed with tabu search: keeping a small list of recently visited states and forbidding the algorithm to return to those states.
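The min-conflicts heuristic can be sketched as a complete-state local search (an illustrative version on the Australia map-coloring problem; the fixed random seed and step limit are choices made here for repeatability, not part of the notes):

```python
import random

NEIGHBORS = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
             "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}
COLORS = ["red", "green", "blue"]

def conflicts(var, value, assignment):
    # Number of violated constraints: neighbors already holding `value`.
    return sum(assignment[n] == value for n in NEIGHBORS[var])

def min_conflicts(variables, domains, max_steps=10000, seed=0):
    rng = random.Random(seed)
    # Complete-state formulation: start from a full random assignment.
    assignment = {v: rng.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment               # no violated constraints: solved
        var = rng.choice(conflicted)        # pick a conflicted variable
        # Reassign it to the value with the minimum number of conflicts.
        assignment[var] = min(domains[var],
                              key=lambda val: conflicts(var, val, assignment))
    return None                             # step limit reached

sol = min_conflicts(list(NEIGHBORS), {v: COLORS for v in NEIGHBORS})
print(sol)
```

Tabu search or constraint weighting, discussed next, could be layered on top of this loop to escape plateaux.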

Constraint weighting: a technique that can help concentrate the search on the important constraints.
Each constraint is given a numeric weight Wi, initially all 1.
At each step, the algorithm chooses a variable/value pair to change that will result in the lowest total weight of all violated constraints.
The weights are then adjusted by incrementing the weight of each constraint that is violated by the current assignment.

Local search can be used in an online setting when the problem changes; this is particularly important in scheduling problems.
The structure of problems:

1. The structure of the constraint graph

The structure of the problem, as represented by the constraint graph, can be used to find solutions quickly.
e.g. The Australia problem can be decomposed into 2 independent subproblems: coloring T and coloring the mainland.

Tree: a constraint graph is a tree when any two variables are connected by only one path.
Directed arc consistency (DAC): a CSP is defined to be directed arc-consistent under an ordering of variables X1, X2, ..., Xn if and only if every Xi is arc-consistent with each Xj for j > i.
By using DAC, any tree-structured CSP can be solved in time linear in the number of variables.
How to solve a tree-structured CSP:

Pick any variable to be the root of the tree.

Choose an ordering of the variables such that each variable appears after its parent in the tree (a topological sort).
Any tree with n nodes has n-1 arcs, so we can make this graph directed arc-consistent in O(n) steps, each of which must compare up to d possible domain values for 2 variables, for a total time of O(nd^2).
Once we have a directed arc-consistent graph, we can just march down the list of variables and choose any remaining value.
Since each link from a parent to its child is arc-consistent, we won't have to backtrack and can move linearly through the variables.
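The two passes above, backward arc-consistency then forward assignment, can be sketched as follows (an illustrative version; the parent map and topological order are assumed to be given rather than computed):

```python
def tree_csp_solve(parent, domains, constraint, order):
    """Solve a tree-structured CSP in O(n d^2).

    `order` is a topological ordering (each variable after its parent);
    `parent[v]` is v's parent (None for the root);
    `constraint(x, y)` is True when parent value x and child value y agree.
    """
    # Backward pass: make every (parent, child) arc consistent, leaves first.
    for child in reversed(order[1:]):
        p = parent[child]
        domains[p] = {x for x in domains[p]
                      if any(constraint(x, y) for y in domains[child])}
        if not domains[p]:
            return None                     # empty domain: inconsistent CSP
    # Forward pass: assign in order; no backtracking is ever needed.
    assignment = {}
    for v in order:
        if parent[v] is None:
            assignment[v] = next(iter(domains[v]))
        else:
            assignment[v] = next(y for y in domains[v]
                                 if constraint(assignment[parent[v]], y))
    return assignment

# Toy chain A - B - C with an inequality constraint on each arc:
parent = {"A": None, "B": "A", "C": "B"}
domains = {"A": {1}, "B": {1, 2}, "C": {2, 3}}
sol = tree_csp_solve(parent, domains, lambda x, y: x != y, ["A", "B", "C"])
print(sol)
```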
Unit-2: Propositional Logic: Knowledge-Based Agents in AI, The Wumpus World, Logic, Propositional Logic.

Knowledge-Based Agents:

A human brain uses basic knowledge to know and understand things logically and is thereby able to do them in a better way; the analogous element in AI systems is known as a knowledge-based agent.

Logic: Logic is the key behind any knowledge: it filters the necessary information from the bulk and draws conclusions. In artificial intelligence, the representation of knowledge is done via logics.

There are three main components of logic, which are as follows:

• Syntax: Syntax is the representation a language follows in order to form a sentence. Example: ax^2 + bx + c is well-formed syntax for a quadratic expression.
• Semantics: The sentence or syntax which a logic follows should be meaningful. Semantics defines the sense of the sentence in relation to the real world. Example: "Indian people celebrate Diwali every year." This sentence states a true fact about the country and its people.
• Logical Inference: Logical inference is considering all the possible reasons which could lead to a proper result. To infer means to draw conclusions about some fact or problem.

Types of Knowledge: There are mainly 5 types of knowledge.

• Meta Knowledge: It is information/knowledge about knowledge itself.
• Heuristic Knowledge: It is experiential knowledge regarding a specific topic.
• Procedural Knowledge: It gives information about how to achieve something.
• Declarative Knowledge: It describes a particular object and its attributes.
• Structural Knowledge: It describes the relationships between objects.

Knowledge-based agents: So far we have studied intelligent agents that acquire knowledge about the world to make better decisions in the real world. A knowledge-based agent uses task-specific knowledge to solve problems efficiently.

A knowledge-based system comprises two distinguishable features:

• A knowledge base
• An inference engine
Knowledge base: A knowledge base represents the actual facts which exist in the real world. It is the central component of a knowledge-based agent. It is a set of sentences which describes the information related to the world.
Note: Here, a sentence is not an English-language sentence; it is represented in a language known as a knowledge representation language.

Inference engine: The engine of a knowledge-based system, which allows the system to infer new knowledge.

Actions of an Agent:
When there is a need to add or update information or sentences in the knowledge-based system, we require an inference system. Also, to know what information is already known to the agent, we require the inference system. The technical words used to describe the mechanism of the inference system are TELL and ASK.
When the agent solves a problem, it calls the agent program each time. The agent program performs three things:

1. It TELLs the knowledge base what it has perceived from the environment.
2. It ASKs the knowledge base what action it should take.
3. It TELLs the knowledge base which action was chosen, and finally the agent executes that action.
The details of the knowledge representation language are abstracted under these three functions. These functions create an interface between the two main components of an intelligent agent, i.e., sensors and actuators.

The functions are discussed below:

• MAKE-PERCEPT-SENTENCE(): This function returns a sentence stating what the agent perceived at a given time.
• MAKE-ACTION-QUERY(): This function returns a sentence asking what action the agent should take at the current time.
• MAKE-ACTION-SENTENCE(): This function returns a sentence stating that an action has been selected and executed.
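The TELL/ASK loop can be sketched as follows (a toy illustration; the string-based knowledge base and verbatim-match ASK are stand-ins for a real knowledge representation language and inference engine):

```python
class KnowledgeBase:
    # A toy KB: sentences are plain strings; ASK is a verbatim lookup.
    def __init__(self):
        self.sentences = []

    def tell(self, sentence):
        self.sentences.append(sentence)

    def ask(self, query):
        return query in self.sentences

def kb_agent(kb, percept, t):
    """One step of the generic knowledge-based agent program."""
    kb.tell(f"percept {percept} at {t}")     # MAKE-PERCEPT-SENTENCE
    # MAKE-ACTION-QUERY: ask the KB what to do (trivially answered here).
    action = "act" if kb.ask(f"percept {percept} at {t}") else "noop"
    kb.tell(f"did {action} at {t}")          # MAKE-ACTION-SENTENCE
    return action

kb = KnowledgeBase()
print(kb_agent(kb, "stench", 0))  # prints: act
```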

Levels of a Knowledge-Based Agent:

• Knowledge Level: At this level, the behavior of an agent is decided by specifying the following:
• The agent's current knowledge, as it has perceived it.
• The goal of the agent.
• Implementation Level: This level is the physical representation of the knowledge level. Here we consider how the knowledge-based agent actually implements its stored knowledge.

For example, consider an automated air conditioner. The built-in knowledge stored in its system is that it should adjust its temperature according to the weather. This represents the knowledge level of the agent. The actual working and adjustment mechanism define the implementation level of the knowledge-based agent.
Approaches used to build a knowledge-based agent:
There are two approaches used to design the behavior of a knowledge-based system:

• Declarative Approach: A knowledge-based system is designed by starting from an empty system and feeding it the necessary information. The agent designer TELLs sentences to the empty system one by one until the system becomes knowledgeable enough to deal with the environment.

• Procedural Approach: The desired behavior is stored, operation by operation, directly as program code; designing the behavior of the system via coding is called the procedural approach. It is the opposite of the declarative approach.

Propositional logic:
Propositional logic (PL) is the simplest form of logic, in which all statements are made of propositions. A proposition is a declarative statement which is either true or false. It is a technique for representing knowledge in logical and mathematical form.
Example:
a) It is Sunday.
b) The Sun rises in the West. (false proposition)
c) 3 + 3 = 7 (false proposition)
d) 5 is a prime number.
Following are some basic facts about propositional logic:
• Propositional logic is also called Boolean logic, as it works on 0 and 1: a proposition can be either true or false, but not both.
• In propositional logic, we use symbolic variables to represent the logic, and we can use any symbol to represent a proposition, such as A, B, C, P, Q, R, etc.
• Propositional logic consists of objects, relations or functions, and logical connectives. These connectives, also called logical operators, connect two sentences.
• Propositions and connectives are the basic elements of propositional logic.
• A propositional formula which is always true is called a tautology; it is also called a valid sentence.
• A propositional formula which is always false is called a contradiction.
• Questions, commands, and opinions are not propositions: sentences such as "Where is Rohini?", "How are you?", and "What is your name?" are not propositions.

Syntax of propositional logic:


The syntax of propositional logic defines the allowable sentences for the knowledge
representation. There are two types of Propositions:
1. Atomic Propositions
2. Compound propositions.

1. Atomic Proposition: Atomic propositions are simple propositions. Each consists of a single proposition symbol. These are sentences that must be either true or false.
Example:
1. 2+2 is 4, it is an atomic proposition as it is a true fact.
2. "The Sun is cold" is also a proposition as it is a false fact.

2. Compound proposition: Compound propositions are constructed by combining simpler or atomic propositions using parentheses and logical connectives.
Example:
a) "It is raining today, and street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."

Logical Connectives:
Logical connectives are used to connect two simpler propositions or to represent a sentence logically. We can create compound propositions with the help of logical connectives. There are mainly five connectives, which are given as follows:
1. Negation: A sentence such as ¬P is called the negation of P. A literal is either a positive literal (P) or a negative literal (¬P).
2. Conjunction: A sentence which has ∧ connective such as, P ∧ Q is called a
conjunction.
Example: Rohan is intelligent and hardworking. It can be written as,
P= Rohan is intelligent,
Q= Rohan is hardworking. → P∧ Q.
3. Disjunction: A sentence which has a ∨ connective, such as P ∨ Q, is called a disjunction, where P and Q are the propositions.
Example: "Ritika is a doctor or Engineer",
Here P= Ritika is Doctor, Q= Ritika is Engineer, so we can write it as P ∨ Q.
4. Implication: A sentence such as P → Q, is called an implication. Implications are
also known as if-then rules. It can be represented as
If it is raining, then the street is wet.
Let P= It is raining, and Q= Street is wet, so it is represented as P → Q
5. Biconditional: A sentence such as P ⇔ Q is a biconditional sentence. Example: "I am breathing if and only if I am alive."
P= I am breathing, Q= I am alive; it can be represented as P ⇔ Q.
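The five connectives can be modelled directly with Python's boolean operators. This is an illustrative sketch; implication has no built-in operator, so it is expressed using the standard identity P → Q = ¬P ∨ Q:

```python
def negation(p):          # ¬P
    return not p

def conjunction(p, q):    # P ∧ Q
    return p and q

def disjunction(p, q):    # P ∨ Q
    return p or q

def implication(p, q):    # P → Q, false only when P is true and Q is false
    return (not p) or q

def biconditional(p, q):  # P ⇔ Q, true when both sides have the same value
    return p == q

print(implication(True, False))     # False
print(biconditional(False, False))  # True
```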

Following is the summarized table for Propositional Logic Connectives:

Symbol   Technical Term   Example
¬        Negation         ¬P (not P)
∧        Conjunction      P ∧ Q (P and Q)
∨        Disjunction      P ∨ Q (P or Q)
→        Implication      P → Q (if P then Q)
⇔        Biconditional    P ⇔ Q (P if and only if Q)
Truth Table:
In propositional logic, we need to know the truth values of propositions in all possible
scenarios. We can combine all the possible combination with logical connectives, and the
representation of these combinations in a tabular format is called Truth table.

Following is the combined truth table for all logical connectives:

P  Q  ¬P  P∧Q  P∨Q  P→Q  P⇔Q
T  T  F   T    T    T    T
T  F  F   F    T    F    F
F  T  T   F    T    T    F
F  F  T   F    F    T    T

Truth table with three propositions:

We can build a proposition composed of three propositions P, Q, and R. Its truth table has 2³ = 8 rows, one for each combination of truth values of the three proposition symbols.
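The eight rows for three propositions can be generated programmatically. The sketch below uses the compound proposition (P ∧ Q) ∨ R purely as an illustration:

```python
from itertools import product

# Enumerate all 2**3 = 8 truth assignments for P, Q, R and
# evaluate the sample compound proposition (P ∧ Q) ∨ R on each.
rows = [(p, q, r, (p and q) or r)
        for p, q, r in product([True, False], repeat=3)]

print("P      Q      R      (P∧Q)∨R")
for p, q, r, result in rows:
    print(f"{p!s:<7}{q!s:<7}{r!s:<7}{result}")
print(len(rows))  # 8
```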
Precedence of connectives:-
Just like arithmetic operators, there is a precedence order for propositional
connectors or logical operators. This order should be followed while evaluating a
propositional problem. Following is the list of the precedence order for operators:

Precedence Operators
First Precedence Parenthesis
Second Precedence Negation
Third Precedence Conjunction(AND)
Fourth Precedence Disjunction(OR)
Fifth Precedence Implication
Six Precedence Biconditional

Note: For better understanding use parenthesis to make sure of the correct interpretations.
Such as ¬R∨ Q, It can be interpreted as (¬R) ∨ Q.
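Python's boolean operators happen to follow the same relative precedence (`not` binds tighter than `and`, which binds tighter than `or`), which makes the interpretation easy to demonstrate:

```python
R, Q = True, True

# Without parentheses, ¬R ∨ Q is read as (¬R) ∨ Q,
# because negation has higher precedence than disjunction.
print(not R or Q)    # evaluated as (not R) or Q -> True

# Parentheses force the other interpretation, ¬(R ∨ Q).
print(not (R or Q))  # -> False
```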

Logical equivalence:
Logical equivalence is one of the features of propositional logic. Two propositions are said
to be logically equivalent if and only if the columns in the truth table are identical to each
other.
Let's take two propositions A and B. For logical equivalence, we can write A ⇔ B. In the truth table below, we can see that the columns for ¬A ∨ B and A → B are identical, hence ¬A ∨ B is equivalent to A → B.

A  B  ¬A  ¬A ∨ B  A → B
T  T  F   T       T
T  F  F   F       F
F  T  T   T       T
F  F  T   T       T
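This equivalence can be verified exhaustively in a few lines. The sketch below exploits the fact that, for Python booleans, `a <= b` computes material implication (it is false only for `True <= False`):

```python
from itertools import product

# Compare the ¬A ∨ B and A → B columns on every row of the truth table.
for a, b in product([True, False], repeat=2):
    assert ((not a) or b) == (a <= b)  # a <= b plays the role of A → B
print("¬A ∨ B and A → B have identical truth tables")
```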

Properties of Operators:
• Commutativity:
• P∧ Q= Q ∧ P, or
• P ∨ Q = Q ∨ P.
• Associativity:
• (P ∧ Q) ∧ R= P ∧ (Q ∧ R),
• (P ∨ Q) ∨ R= P ∨ (Q ∨ R)
• Identity element:
• P ∧ True = P,
• P ∨ False = P (note that P ∨ True = True).
• Distributive:
• P∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R).
• P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R).
• DE Morgan's Law:
• ¬ (P ∧ Q) = (¬P) ∨ (¬Q)
• ¬ (P ∨ Q) = (¬ P) ∧ (¬Q).
• Double-negation elimination:
• ¬ (¬P) = P.
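All of these laws can be confirmed by brute force over every truth assignment, which is a useful sanity check when learning them:

```python
from itertools import product

# Check several of the properties above on all 2**3 assignments.
for p, q, r in product([True, False], repeat=3):
    assert (p and q) == (q and p)                        # commutativity
    assert (not (p and q)) == ((not p) or (not q))       # De Morgan's law
    assert (not (p or q)) == ((not p) and (not q))       # De Morgan's law
    assert (p and (q or r)) == ((p and q) or (p and r))  # distributivity
    assert (not (not p)) == p                            # double negation
print("All checked laws hold on every assignment")
```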
Limitations of Propositional logic:
• We cannot represent quantified relations like all, some, or none with propositional logic.
Example:
a. All the girls are intelligent.
b. Some apples are sweet.
• Propositional logic has limited expressive power.
• In propositional logic, we cannot describe statements in terms of their properties or logical relationships.
