AI Record

Uploaded by Yogi Nambula

1. Aim: To write a Python program for solving the given water-jug problem.
Source Code:
def waterJug(target, ljug, sjug):
    jug1 = jug2 = 0
    step = 1
    print(jug1, jug2)
    jug2 = sjug                      # fill the small jug
    step += 1
    print(jug1, jug2)
    jug1 += jug2                     # pour it into the large jug
    jug2 = 0
    step += 1
    print(jug1, jug2)
    while jug1 != target:
        jug2 = sjug                  # refill the small jug
        step += 1
        print(jug1, jug2)
        space = ljug - jug1          # free capacity in the large jug
        if space <= jug2:
            jug1 += space            # top up the large jug
            jug2 -= space            # keep the leftover in the small jug
            print(jug1, jug2)
            jug1 = 0                 # empty the large jug
            jug1 += jug2             # pour the leftover into it
            step += 1
        else:
            jug1 += jug2             # the whole small jug fits
            jug2 = 0
            step += 1
            print(jug1, jug2)
        if jug2 == target:
            jug1 = 0
            step += 1
            print(jug1, jug2)
            jug1 += jug2
            step += 1
            jug2 = 0
            break
    print(jug1, jug2)
    print("Water in jug1 is: ", jug1)
    print("Number of steps involved are: ", step)

target = int(input("Enter the target amount of water: "))
ljug = int(input("Enter the capacity of large jug: "))
sjug = int(input("Enter the capacity of small jug: "))
print("The States are: ")
waterJug(target, ljug, sjug)
Output 1:
Enter the target amount of water: 4
Enter the capacity of large jug: 8
Enter the capacity of small jug: 3
The States are:
0 0
0 3
3 0
3 3
6 0
6 3
8 1
1 3
4 0
4 0
Water in jug1 is: 4
Number of steps involved are: 9
Output 2:
Enter the target amount of water: 2
Enter the capacity of large jug: 4
Enter the capacity of small jug: 3
The States are:
0 0
0 3
3 0
3 3
4 2
0 2
2 0
Water in jug1 is: 2
Number of steps involved are: 7
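The pour sequence above is a fixed strategy. As a cross-check (not part of the record), the same problem can be solved with a breadth-first search over (large, small) states, which is guaranteed to find a minimum-length pour sequence. A minimal sketch, assuming the same meaning for target, ljug and sjug:

```python
from collections import deque

def water_jug_bfs(target, ljug, sjug):
    # States are (large, small) pairs; BFS finds the shortest pour sequence.
    start = (0, 0)
    parent = {start: None}
    que = deque([start])
    while que:
        a, b = que.popleft()
        if a == target or b == target:
            path = []                      # rebuild the path via parent links
            state = (a, b)
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        moves = [
            (ljug, b), (a, sjug),          # fill either jug
            (0, b), (a, 0),                # empty either jug
            (min(ljug, a + b), max(0, a + b - ljug)),  # pour small -> large
            (max(0, a + b - sjug), min(sjug, a + b)),  # pour large -> small
        ]
        for nxt in moves:
            if nxt not in parent:
                parent[nxt] = (a, b)
                que.append(nxt)
    return None

print(water_jug_bfs(4, 8, 3))
```

Because BFS explores states level by level, the first time any state containing the target amount is dequeued, the path to it is already shortest.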

2. Aim: To develop a simple reflex agent program in Python for the vacuum-cleaner
world problem.
Source Code:
def action1(status, location):
    if status == 1:              # current square is dirty
        return "CLEAN"
    elif location == "A":
        return "RIGHT"
    elif location == "B":
        return "LEFT"

def goal_test(percept):
    location, status = percept
    return status == 0

def run_test(initial_state):
    print("Initial State:", initial_state)
    steps = []
    percept = initial_state
    while not goal_test(percept):
        action = action1(percept[1], percept[0])
        steps.append(action)
        if action == "CLEAN":
            percept = (percept[0], 0)
        elif action == "RIGHT":
            percept = ("B", 1)
        elif action == "LEFT":
            percept = ("A", 1)
    print("Actions:", steps)
    print("Path Cost:", len(steps))

def vacuum_world():
    a = int(input("Enter Room A State (1-Dirty/0-Clean): "))
    b = int(input("Enter Room B State (1-Dirty/0-Clean): "))
    initial_state_1 = ("A", a)
    run_test(initial_state_1)
    initial_state_2 = ("B", b)
    run_test(initial_state_2)

vacuum_world()
Output 1:
Enter Room A State (1-Dirty/0-Clean): 1
Enter Room B State (1-Dirty/0-Clean): 1
Initial State: ('A', 1)
Actions: ['CLEAN']
Path Cost: 1
Initial State: ('B', 1)
Actions: ['CLEAN']
Path Cost: 1
Output 2:
Enter Room A State (1-Dirty/0-Clean): 0
Enter Room B State (1-Dirty/0-Clean): 1
Initial State: ('A', 0)
Actions: []
Path Cost: 0
Initial State: ('B', 1)
Actions: ['CLEAN']
Path Cost: 1
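The condition-action rules inside action1 can equivalently be stored as a lookup table, which is the textbook form of a simple reflex agent. A small sketch (not part of the record) of that alternative encoding:

```python
# (location, status) -> action; status 1 means dirty.
RULES = {
    ("A", 1): "CLEAN",
    ("B", 1): "CLEAN",
    ("A", 0): "RIGHT",
    ("B", 0): "LEFT",
}

def reflex_agent(percept):
    # The agent consults only the current percept; it keeps no state.
    return RULES[percept]

print(reflex_agent(("A", 1)))  # CLEAN
```

The table makes the agent's complete behaviour visible at a glance, at the cost of one entry per possible percept.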

3. Aim: To implement Breadth First Search for the given 8-puzzle Problem.
Source Code:
def get_blank_pos(puzzle):
    # Return the (row, col) of the blank tile (0).
    for i in range(len(puzzle)):
        for j in range(len(puzzle)):
            if puzzle[i][j] == 0:
                return i, j

def actions_done(preState):
    # Generate every state reachable by sliding a tile into the blank.
    nextState = []
    i, j = get_blank_pos(preState)
    if i > 0:
        new = [x[:] for x in preState]
        new[i][j], new[i-1][j] = new[i-1][j], new[i][j]
        nextState.append(new)
    if i < 2:
        new = [x[:] for x in preState]
        new[i][j], new[i+1][j] = new[i+1][j], new[i][j]
        nextState.append(new)
    if j > 0:
        new = [x[:] for x in preState]
        new[i][j], new[i][j-1] = new[i][j-1], new[i][j]
        nextState.append(new)
    if j < 2:
        new = [x[:] for x in preState]
        new[i][j], new[i][j+1] = new[i][j+1], new[i][j]
        nextState.append(new)
    return nextState

def bfs(initial, goal):
    que = [(initial, [])]
    visited = set()
    visited.add(tuple(map(tuple, initial)))
    while que:
        state, path = que.pop(0)
        if state == goal:
            return path
        for nxt in actions_done(state):
            if tuple(map(tuple, nxt)) not in visited:
                que.append((nxt, path + [nxt]))
                visited.add(tuple(map(tuple, nxt)))
    return None

initial_state = []
goal_state = []
print("Enter the initial state:")
for i in range(3):
    initial_state.append(list(map(int, input().split())))
print("Enter the goal state:")
for i in range(3):
    goal_state.append(list(map(int, input().split())))
path = bfs(initial_state, goal_state)
step = 1
if path:
    print("Solution found in", len(path), "steps: \n")
    for state in path:
        print("Step:", step)
        step += 1
        for row in state:
            print(row, end=' ')
        print()
else:
    print("No path")
Output 1:
Enter the initial state:
1 2 3
8 0 4
7 6 5
Enter the goal state:
2 8 3
1 6 4
7 0 5
Solution found in 5 steps:
Step: 1
[1, 2, 3] [0, 8, 4] [7, 6, 5]
Step: 2
[0, 2, 3] [1, 8, 4] [7, 6, 5]
Step: 3
[2, 0, 3] [1, 8, 4] [7, 6, 5]
Step: 4
[2, 8, 3] [1, 0, 4] [7, 6, 5]
Step: 5
[2, 8, 3] [1, 6, 4] [7, 0, 5]
Output 2:
Enter the initial state:
1 2 3
4 5 6
8 7 0
Enter the goal state:
1 2 3
4 5 6
7 8 0
No path
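Output 2 reports "No path" because the two configurations lie in different permutation-parity classes, so no sequence of slides connects them. A sketch (not part of the record) of the standard inversion-count solvability check for a 3x3 board:

```python
def solvable_pair(initial, goal):
    # For a 3x3 puzzle (odd width), one state can reach another iff the
    # number of inversions (out-of-order tile pairs, blank excluded)
    # has the same parity in both.
    def inversions(state):
        tiles = [t for row in state for t in row if t != 0]
        return sum(
            1
            for a in range(len(tiles))
            for b in range(a + 1, len(tiles))
            if tiles[a] > tiles[b]
        )
    return inversions(initial) % 2 == inversions(goal) % 2

print(solvable_pair([[1, 2, 3], [4, 5, 6], [8, 7, 0]],
                    [[1, 2, 3], [4, 5, 6], [7, 8, 0]]))  # False
```

Running this check before the search avoids exhausting the whole reachable state space on an impossible instance.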

4. Aim: To implement Depth First Search for the given 8-puzzle Problem.
Source Code:
def get_blank_pos(puzzle):
    # Return the (row, col) of the blank tile (0).
    for i in range(len(puzzle)):
        for j in range(len(puzzle)):
            if puzzle[i][j] == 0:
                return i, j

def actions_done(preState):
    # Generate every state reachable by sliding a tile into the blank.
    nextState = []
    i, j = get_blank_pos(preState)
    if i > 0:
        new = [x[:] for x in preState]
        new[i][j], new[i-1][j] = new[i-1][j], new[i][j]
        nextState.append(new)
    if i < 2:
        new = [x[:] for x in preState]
        new[i][j], new[i+1][j] = new[i+1][j], new[i][j]
        nextState.append(new)
    if j > 0:
        new = [x[:] for x in preState]
        new[i][j], new[i][j-1] = new[i][j-1], new[i][j]
        nextState.append(new)
    if j < 2:
        new = [x[:] for x in preState]
        new[i][j], new[i][j+1] = new[i][j+1], new[i][j]
        nextState.append(new)
    return nextState

def dls(state, goal, depth_limit):
    # Depth-limited search: plain DFS cut off at depth_limit.
    if state == goal:
        return [state]
    if depth_limit == 0:
        return None
    for next_state in actions_done(state):
        path = dls(next_state, goal, depth_limit - 1)
        if path is not None:
            return [state] + path
    return None

def ids(initial, goal, max_depth):
    # Iterative deepening: rerun DFS with growing depth limits.
    for depth_limit in range(max_depth + 1):
        path = dls(initial, goal, depth_limit)
        if path is not None:
            return path
    return None

initial_state = []
goal_state = []
print("Enter the initial state:")
for i in range(3):
    initial_state.append(list(map(int, input().split())))
print("Enter the goal state:")
for i in range(3):
    goal_state.append(list(map(int, input().split())))
max_depth = 10
path = ids(initial_state, goal_state, max_depth)
step = 1
if path:
    print("Solution found in", len(path)-1, "steps:\n")
    for state in path:
        print("Step:", step)
        step += 1
        for row in state:
            print(row)
        print()
else:
    print("No path found within the depth limit")
Output 1:
Enter the initial state:
1 2 3
8 0 4
7 6 5
Enter the goal state:
2 8 3
1 6 4
7 0 5
Solution found in 5 steps:

Step: 1
[1, 2, 3]
[8, 0, 4]
[7, 6, 5]

Step: 2
[1, 2, 3]
[0, 8, 4]
[7, 6, 5]

Step: 3
[0, 2, 3]
[1, 8, 4]
[7, 6, 5]

Step: 4
[2, 0, 3]
[1, 8, 4]
[7, 6, 5]

Step: 5
[2, 8, 3]
[1, 0, 4]
[7, 6, 5]

Step: 6
[2, 8, 3]
[1, 6, 4]
[7, 0, 5]
Output 2:
Enter the initial state:
1 2 3
4 5 6
8 7 0
Enter the goal state:
1 2 3
4 5 6
7 8 0
No path found within the depth limit

5. Aim: To implement Greedy Best First Search for the given map.
Source Code:
graph = {
    'A': {'B': 6, 'F': 3},
    'B': {'C': 3, 'D': 2},
    'C': {'E': 5},
    'D': {'E': 8},
    'E': {'J': 5, 'I': 5},
    'F': {'G': 1, 'H': 7},
    'G': {'I': 3},
    'H': {'I': 2},
    'I': {'J': 3},
    'J': {}
}

heuristic = {'A': 10, 'B': 8, 'C': 5, 'D': 7, 'E': 3, 'F': 6, 'G': 5, 'H': 3, 'I': 1, 'J': 0}

def gbfs(start, dest):
    openList = []
    closeList = []
    openList.append(start)
    while openList:
        # Expand the open node with the smallest heuristic value.
        cur = openList[0]
        x = 0
        for i in range(len(openList)):
            if heuristic[openList[i]] < heuristic[cur]:
                cur = openList[i]
                x = i
        openList.pop(x)
        closeList.append(cur)
        if cur == dest:
            return closeList
        for adj in graph[cur]:
            if adj not in openList and adj not in closeList:
                openList.append(adj)
    return None

start = input("Enter Starting Point: ")
end = input("Enter Destination: ")
path = gbfs(start, end)
if path:
    for i in path:
        print('[', i, ',', heuristic[i], ']', end=' ')
else:
    print("None")
Output 1:
Enter Starting Point: A
Enter Destination: J
[ A , 10 ] [ F , 6 ] [ H , 3 ] [ I , 1 ] [ J , 0 ]
Output 2:
Enter Starting Point: F
Enter Destination: A
None
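Output 2 returns None because the adjacency map is directed: every edge points away from A, so nothing is reachable from F toward A. If two-way travel is intended, the map can be mirrored first; a sketch (not part of the record, using a small hypothetical subgraph for illustration):

```python
def undirected(graph):
    # Mirror every edge so travel is allowed in both directions.
    g = {node: dict(edges) for node, edges in graph.items()}
    for node, edges in graph.items():
        for adj, w in edges.items():
            g.setdefault(adj, {})[node] = w
    return g

g2 = undirected({'A': {'B': 6, 'F': 3}, 'B': {'C': 3, 'D': 2},
                 'C': {}, 'D': {}, 'F': {}})
print(sorted(g2['B']))  # ['A', 'C', 'D']
```

Passing undirected(graph) to gbfs would then let searches run in either direction.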
6. Aim: To develop Mini-max search strategy for Tic-Tac-Toe game.
Source Code:
import math

board = [' ' for _ in range(9)]

human_player = 'X'
computer_player = 'O'

def print_board(board):
    for i in range(0, 9, 3):
        print("|".join(board[i:i+3]))

def get_empty_cells(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

def is_winner(board, player):
    winning_combinations = [
        [0, 1, 2], [3, 4, 5], [6, 7, 8],  # Rows
        [0, 3, 6], [1, 4, 7], [2, 5, 8],  # Columns
        [0, 4, 8], [2, 4, 6]              # Diagonals
    ]
    return any(all(board[i] == player for i in combo) for combo in winning_combinations)

def is_board_full(board):
    return ' ' not in board

def evaluate(board):
    if is_winner(board, computer_player):
        return 1    # Computer wins
    elif is_winner(board, human_player):
        return -1   # Human wins
    else:
        return 0    # Draw

def minimax(board, depth, maximizing_player):
    if (depth == 0 or is_winner(board, human_player)
            or is_winner(board, computer_player) or is_board_full(board)):
        return evaluate(board)
    if maximizing_player:
        max_eval = -math.inf
        for cell in get_empty_cells(board):
            board[cell] = computer_player
            score = minimax(board, depth - 1, False)
            board[cell] = ' '
            max_eval = max(max_eval, score)
        return max_eval
    else:
        min_eval = math.inf
        for cell in get_empty_cells(board):
            board[cell] = human_player
            score = minimax(board, depth - 1, True)
            board[cell] = ' '
            min_eval = min(min_eval, score)
        return min_eval

def find_best_move(board):
    best_eval = -math.inf
    best_move = -1
    for cell in get_empty_cells(board):
        board[cell] = computer_player
        score = minimax(board, 9, False)
        board[cell] = ' '
        if score > best_eval:
            best_eval = score
            best_move = cell
    return best_move

def play_game():
    current_player = human_player
    while (not is_winner(board, human_player)
           and not is_winner(board, computer_player)
           and not is_board_full(board)):
        if current_player == human_player:
            print("Your turn (", human_player, ")")
            move = int(input("Enter your move (0-8): "))
            if move not in get_empty_cells(board):
                print("Invalid move. Try again.")
                continue
        else:
            print("Computer's turn (", computer_player, ")")
            move = find_best_move(board)
        board[move] = current_player
        print_board(board)
        current_player = human_player if current_player == computer_player else computer_player
    if is_winner(board, human_player):
        print("You win!")
    elif is_winner(board, computer_player):
        print("Computer wins!")
    else:
        print("It's a draw!")

print("Welcome to Tic Tac Toe!")
print("Here is the initial board:")
print_board(board)
play_game()
Output:
Welcome to Tic Tac Toe!
Here is the initial board:
 | | 
 | | 
 | | 
Your turn ( X )
Enter your move (0-8): 0
X| | 
 | | 
 | | 
Computer's turn ( O )
X| | 
 |O| 
 | | 
Your turn ( X )
Enter your move (0-8): 6
X| | 
 |O| 
X| | 
Computer's turn ( O )
X| | 
O|O| 
X| | 
Your turn ( X )
Enter your move (0-8): 5
X| | 
O|O|X
X| | 
Computer's turn ( O )
X|O| 
O|O|X
X| | 
Your turn ( X )
Enter your move (0-8): 7
X|O| 
O|O|X
X|X| 
Computer's turn ( O )
X|O| 
O|O|X
X|X|O
Your turn ( X )
Enter your move (0-8): 1
Invalid move. Try again.
Your turn ( X )
Enter your move (0-8): 2
X|O|X
O|O|X
X|X|O
It's a draw!

7. Aim: To implement Alpha-Beta pruning strategy.


Source Code:
import math

def minimax(index, depth, alpha, beta, maximizing_player):
    if depth == 0:
        return scores[index]
    if maximizing_player:
        max_eval = -math.inf
        left_child_index = 2 * index
        right_child_index = 2 * index + 1
        if left_child_index < len(scores):
            max_eval = max(max_eval, minimax(left_child_index, depth - 1, alpha, beta, False))
            alpha = max(alpha, max_eval)
            if beta <= alpha:        # beta cut-off: prune the right subtree
                return max_eval
        if right_child_index < len(scores):
            max_eval = max(max_eval, minimax(right_child_index, depth - 1, alpha, beta, False))
            alpha = max(alpha, max_eval)
            if beta <= alpha:
                return max_eval
        return max_eval
    else:
        min_eval = math.inf
        left_child_index = 2 * index
        right_child_index = 2 * index + 1
        if left_child_index < len(scores):
            min_eval = min(min_eval, minimax(left_child_index, depth - 1, alpha, beta, True))
            beta = min(beta, min_eval)
            if beta <= alpha:        # alpha cut-off: prune the right subtree
                return min_eval
        if right_child_index < len(scores):
            min_eval = min(min_eval, minimax(right_child_index, depth - 1, alpha, beta, True))
            beta = min(beta, min_eval)
            if beta <= alpha:
                return min_eval
        return min_eval

scores = list(map(int, input("Enter Scores: ").split()))
depth = int(math.log2(len(scores)))
print("The Optimal Value is:", minimax(0, depth, -math.inf, math.inf, True))
Output 1:
Enter Scores: 2 3 5 9 0 1 7 5
The Optimal Value is: 3
Output 2:
Enter Scores: 3 5 6 9 1 2 0 -1
The Optimal Value is: 5
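The program above hard-codes a binary tree through index arithmetic. As a comparison (not part of the record), the same pruning can be written generically over a game tree given as nested lists, where the cut-off test appears once per player:

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    # Leaves are plain numbers; internal nodes are lists of children.
    if not isinstance(node, list):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:
                break            # remaining children cannot affect the result
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:
            break
    return value

# Same leaves as Output 1, arranged as a depth-3 max/min/max tree.
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(tree, -math.inf, math.inf, True))  # 3
```

This form handles any branching factor, not only complete binary trees.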

8. Aim: To implement a logical agent for the Wumpus world problem.


Source Code:
import random

size = 4
visited = [[0, 0]]               # (0, 0) is the agent's start; keep it empty
grid = [['-' for _ in range(size)] for _ in range(size)]
agent_position = (0, 0)

def generate_world():
    # Place the wumpus
    wumpus_pos = get_random_position()
    grid[wumpus_pos[0]][wumpus_pos[1]] = 'W'
    # Place the gold
    gold_pos = get_random_position()
    grid[gold_pos[0]][gold_pos[1]] = 'G'
    # Place the pits
    num_pits = size // 2
    for _ in range(num_pits):
        pit_pos = get_random_position()
        grid[pit_pos[0]][pit_pos[1]] = 'P'

def get_random_position():
    while True:
        x = random.randint(0, size - 1)
        y = random.randint(0, size - 1)
        if [x, y] not in visited:
            visited.append([x, y])
            return x, y

def move_agent(direction):
    global agent_position
    x, y = agent_position
    if direction == 'up' and x > 0:
        agent_position = (x - 1, y)
    elif direction == 'down' and x < size - 1:
        agent_position = (x + 1, y)
    elif direction == 'left' and y > 0:
        agent_position = (x, y - 1)
    elif direction == 'right' and y < size - 1:
        agent_position = (x, y + 1)

def is_game_over():
    x, y = agent_position
    if grid[x][y] == 'W':
        return "You were eaten by the Wumpus! Game over."
    elif grid[x][y] == 'P':
        return "You fell into a pit! Game over."
    elif grid[x][y] == 'G':
        return "Congratulations! You found the gold and won the game."
    else:
        return False

def print_grid():
    for row in grid:
        print(" ".join(row))

generate_world()
print("Actions: up, down, left, right, quit")
while True:
    grid[agent_position[0]][agent_position[1]] = 'A'
    print_grid()
    grid[agent_position[0]][agent_position[1]] = '-'
    print(f"Current position: {agent_position}")
    action = input("Enter your action: ")
    if action == 'quit':
        break
    move_agent(action)
    result = is_game_over()
    if result:
        print(result)
        break
    else:
        print("Keep exploring!")
Output 1:
Actions: up, down, left, right, quit
A - - -
- W - P
- - - G
- - P -
Current position: (0, 0)
Enter your action: down
Keep exploring!
- - - -
A W - P
- - - G
- - P -
Current position: (1, 0)
Enter your action: down
Keep exploring!
- - - -
- W - P
A - - G
- - P -
Current position: (2, 0)
Enter your action: right
Keep exploring!
- - - -
- W - P
- A - G
- - P -
Current position: (2, 1)
Enter your action: right
Keep exploring!
- - - -
- W - P
- - A G
- - P -
Current position: (2, 2)
Enter your action: right
Congratulations! You found the gold and won the game.
Output 2:
Actions: up, down, left, right, quit
A - P -
- - - -
- - G P
- - - W
Current position: (0, 0)
Enter your action: right
Keep exploring!
- A P -
- - - -
- - G P
- - - W
Current position: (0, 1)
Enter your action: right
You fell into a pit! Game over.
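The program above shows the whole grid, whereas a real Wumpus-world agent senses only local percepts. As an illustrative extension (not part of the record), a sketch of computing breeze, stench, and glitter percepts for the agent's square, using a hypothetical demo grid in the same format:

```python
def get_percepts(grid, position, size=4):
    # Stench if the Wumpus is adjacent, breeze if a pit is adjacent,
    # glitter if gold is in the current square.
    x, y = position
    percepts = []
    neighbours = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    for nx, ny in neighbours:
        if 0 <= nx < size and 0 <= ny < size:
            if grid[nx][ny] == 'W' and 'Stench' not in percepts:
                percepts.append('Stench')
            if grid[nx][ny] == 'P' and 'Breeze' not in percepts:
                percepts.append('Breeze')
    if grid[x][y] == 'G':
        percepts.append('Glitter')
    return percepts

demo = [['-', '-', '-', '-'],
        ['-', 'W', '-', 'P'],
        ['-', '-', '-', 'G'],
        ['-', '-', 'P', '-']]
print(get_percepts(demo, (1, 0)))  # ['Stench']
```

A logical agent would feed these percepts into its knowledge base instead of reading the grid directly.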

9. Aim: To implement the constraint satisfaction approach for the given map-coloring problem.
Source Code:
n = 7
m = 3
variables = ["Alaska", "Maldives", "Central City", "Mystic Falls",
             "New Orleans", "Small Ville", "London"]
g = [
    [0, 1, 1, 0, 0, 0, 0],
    [1, 0, 1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1, 1, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [0, 0, 1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 0]
]
colors = ["Red", "Green", "Blue"]

def isSafe(curr, color, c):
    # Color c is safe if no already-colored neighbour of curr uses it.
    for i in range(n):
        if g[curr][i] == 1 and color[i] == c:
            return False
    return True

def graphColor(curr, n, color):
    if curr == n:
        return True
    for i in range(1, m + 1):
        if isSafe(curr, color, i):
            color[curr] = i
            if graphColor(curr + 1, n, color):
                return True
            color[curr] = 0          # backtrack
    return False

color = [0] * n
if graphColor(0, n, color):
    c = 0
    for j in color:
        print(variables[c] + ": " + colors[j - 1])
        c += 1
else:
    print("No possibility to color")
Output:
Alaska: Red
Maldives: Green
Central City: Blue
Mystic Falls: Red
New Orleans: Green
Small Ville: Red
London: Red

10. Aim: To write a Prolog program for the given knowledge base.
1) Marcus was a man.
2) Marcus was a Pompeian.
3) All Pompeians were Romans.
4) Caesar was a ruler.
5) All Romans were either loyal to Caesar or hated him.
6) Everyone is loyal to someone.
7) People only try to assassinate rulers they are not loyal to.
8) Marcus tried to assassinate Caesar.
Query:
1) Was Marcus loyal to Caesar?
2) Did Marcus hate Caesar?
Source Code:
% facts
man(marcus).
pompeian(marcus).
ruler(caesar).
trytoassassinate(marcus, caesar).

% rules
people(X) :-
    man(X).
roman(X) :-
    pompeian(X).
not_loyalto(X, Y) :-
    people(X),
    ruler(Y),
    trytoassassinate(X, Y).
hate(X, caesar) :-
    roman(X),
    not_loyalto(X, caesar).
loyalto(X, caesar) :-
    roman(X),
    \+ hate(X, caesar).
Output:
Query 1: loyalto(marcus,caesar).
Result: false
Query 2: hate(marcus,caesar).
Result: true
11. Prolog 2nd Program:
Source Code:
% facts
owns(nono, m1).
enemy(nono, america).
missile(m1).
american(west).

% rules
weapon(X) :-
    missile(X).
hostile(X) :-
    enemy(X, america).
sells(west, X, nono) :-
    missile(X),
    owns(nono, X).
criminal(X) :-
    american(X),
    weapon(Y),
    sells(X, Y, Z),
    hostile(Z).
Output:
Query: criminal(west).
Result: true
