AI & ML Lab Manual
1. UNINFORMED SEARCH ALGORITHMS
2. INFORMED SEARCH ALGORITHMS
3. NAÏVE BAYES MODEL
4. BAYESIAN NETWORK
5. REGRESSION MODELS
6. DECISION TREES AND RANDOM FORESTS
7. SVM MODEL
8. ENSEMBLE TECHNIQUES
9. CLUSTERING ALGORITHMS
10. EM FOR BAYESIAN NETWORK
11. NEURAL NETWORK MODEL
12. DEEP LEARNING NEURAL NETWORK MODEL
EX.NO:1
UNINFORMED SEARCH ALGORITHMS
DATE:
AIM:
To implement uninformed search algorithms such as Breadth-First Search (BFS) using Python.
ALGORITHM:
1. Create a graph.
2. Initialize a starting node.
3. Pass the graph and the starting node as parameters to the bfs function.
4. Mark the starting node as visited and push it into the queue.
5. Dequeue the front node, explore it, and add its neighbours to the queue.
6. For each neighbour, check whether it has already been visited.
7. If not, mark it as visited and enqueue it.
8. Repeat this process until the queue becomes empty and every reachable node has been visited.
PROGRAM:
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}
visited = []   # nodes already discovered, in discovery order
queue = []     # FIFO queue of nodes waiting to be explored

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        s = queue.pop(0)          # dequeue the oldest node
        print(s, end=" ")
        for neighbour in graph[s]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

bfs(visited, graph, 'A')
2022-2023
KIT & KIM TECHNICAL CAMPUS
OUTPUT:
ALGORITHM:
PROGRAM:
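The second program of this exercise is blank in the source; it is presumably Depth-First Search, the other standard uninformed search. A minimal recursive sketch over the same graph as the BFS program above:

```python
# Same adjacency list as in the BFS program.
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}

def dfs(visited, graph, node):
    # Visit the node, then recurse into each unvisited neighbour,
    # going as deep as possible before backtracking.
    if node not in visited:
        visited.append(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)
    return visited

order = dfs([], graph, 'A')
print(' '.join(order))  # A B D E F C
```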
OUTPUT:
RESULT:
Thus the above programs are executed and the outputs are verified.
EX.NO:2
INFORMED SEARCH ALGORITHMS
DATE:
AIM:
To implement informed search algorithms such as A* using Python.
ALGORITHM:
1. Place the starting node into OPEN and compute its f(n) value.
2. Remove the node with the smallest f(n) value from OPEN. If it is the goal node, stop and return success.
3. Otherwise, find all of its successors.
4. Compute the f(n) value of each successor, place the successors into OPEN, and move the removed node into CLOSED.
5. Go to Step 2.
6. Exit.
PROGRAM:
class Graph:
    def __init__(self, adjacency_list):
        self.adjacency_list = adjacency_list

    def get_neighbors(self, v):
        return self.adjacency_list[v]

    def h(self, n):
        # Heuristic values; the source page omitted them, so a uniform
        # heuristic of 1 is assumed for every node.
        H = {'A': 1, 'B': 1, 'C': 1, 'D': 1}
        return H[n]

    def a_star_algorithm(self, start_node, stop_node):
        open_list = set([start_node])
        closed_list = set([])
        g = {}
        g[start_node] = 0
        parents = {}
        parents[start_node] = start_node
        while len(open_list) > 0:
            n = None
            # pick the open node with the smallest f(n) = g(n) + h(n)
            for v in open_list:
                if n is None or g[v] + self.h(v) < g[n] + self.h(n):
                    n = v
            if n is None:
                print('Path does not exist!')
                return None
            if n == stop_node:
                reconst_path = []
                while parents[n] != n:
                    reconst_path.append(n)
                    n = parents[n]
                reconst_path.append(start_node)
                reconst_path.reverse()
                print('Path found: {}'.format(reconst_path))
                return reconst_path
            for (m, weight) in self.get_neighbors(n):
                if m not in open_list and m not in closed_list:
                    open_list.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                else:
                    if g[m] > g[n] + weight:
                        g[m] = g[n] + weight
                        parents[m] = n
                        if m in closed_list:
                            closed_list.remove(m)
                            open_list.add(m)
            open_list.remove(n)
            closed_list.add(n)
        print('Path does not exist!')
        return None
adjacency_list = {
    'A': [('B', 1), ('C', 3), ('D', 7)],
    'B': [('D', 5)],
    'C': [('D', 12)]
}
graph1 = Graph(adjacency_list)
graph1.a_star_algorithm('A', 'D')
OUTPUT:
ALGORITHM:
1. Initialize the OPEN and CLOSED lists.
2. Initialize the starting node.
3. Expand the node with the lowest total weight.
4. Compute each successor's total weight as the cost so far plus the edge weight plus the node's heuristic.
5. When the goal node is reached, recover the shortest path and its weight.
6. Exit.
PROGRAM:
nodes = {
    'A': [['B', 6], ['F', 3]],
    'B': [['A', 6], ['C', 3], ['D', 2]],
    'C': [['B', 3], ['D', 1], ['E', 5]],
    'D': [['B', 2], ['C', 1], ['E', 8]],
    'E': [['C', 5], ['D', 8], ['I', 5], ['J', 5]],
    'F': [['A', 3], ['G', 1], ['H', 7]],
    'G': [['F', 1], ['I', 3]],
    'H': [['F', 7], ['I', 2]],
    'I': [['G', 3], ['H', 2], ['E', 5], ['J', 3]],
    'J': [['E', 5], ['I', 3]]
}
h = {
    'A': 10,
    'B': 8,
    'C': 5,
    'D': 7,
    'E': 3,
    'F': 6,
    'G': 5,
    'H': 3,
    'I': 1,
    'J': 0
}
def astar(start, goal):
    # opened holds [node, f] pairs with f = g + h; g holds the best
    # known cost from the start. (The body of this function was lost
    # to a page break; the search loop below is a reconstruction
    # around the surviving lines.)
    opened = [[start, h[start]]]
    closed = []
    g = {start: 0}
    while opened:
        min = 1000
        best = None
        for i in opened:
            if i[1] < min:
                min = i[1]
                best = i
        opened.remove(best)
        node = best[0]
        if node in closed:
            continue
        closed.append(node)
        if node == goal:
            break
        for nei, weight in nodes[node]:
            if nei not in closed and g[node] + weight < g.get(nei, 1000):
                g[nei] = g[node] + weight
                opened.append([nei, g[nei] + h[nei]])
    # prune the expansion order down to the actual path: walk it
    # backwards and drop nodes not adjacent to the next path node
    closed = closed[::-1]
    lens = len(closed)
    i = 0
    while i < lens-1:
        nei = []
        for j in nodes[closed[i]]:
            nei.append(j[0])
        if closed[i+1] not in nei:
            del closed[i+1]
            lens -= 1
        else:
            i += 1
    closed = closed[::-1]
    return closed, min
print(astar('A', 'J'))
OUTPUT:
RESULT:
Thus the above programs are executed and the outputs are verified.
EX.NO:3
NAÏVE BAYES MODEL
DATE:
AIM:
To implement a Naïve Bayes classification model using Python.
ALGORITHM:
PROGRAM:
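This exercise is left blank in the source. A minimal sketch, assuming scikit-learn's GaussianNB on the built-in Iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Gaussian Naive Bayes: one Gaussian per feature per class,
# combined under the conditional-independence assumption.
model = GaussianNB()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
acc = accuracy_score(y_test, y_pred)
print("Accuracy:", acc)
```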
OUTPUT:
RESULT:
Thus the above program is executed and the output is verified.
EX.NO:4
BAYESIAN NETWORK
DATE:
AIM:
To construct a Bayesian network for the burglary alarm problem and compute posterior probabilities using Python.
ALGORITHM:
1. Import the necessary packages and modules.
2. Build the Bayesian network model.
3. Define the conditional probability table for each node in the network.
4. Compute the posterior probability of Burglary given that John calls and Mary calls, and of Alarm given that a burglary and an earthquake occur.
PROGRAM:
import pgmpy.models
import pgmpy.inference
import pgmpy.factors.discrete
import networkx as nx
import pylab as plt

model = pgmpy.models.BayesianModel([('Burglary', 'Alarm'),
                                    ('Earthquake', 'Alarm'),
                                    ('Alarm', 'JohnCalls'),
                                    ('Alarm', 'MaryCalls')])
cpd_burglary = pgmpy.factors.discrete.TabularCPD('Burglary', 2, [[0.001], [0.999]])
cpd_earthquake = pgmpy.factors.discrete.TabularCPD('Earthquake', 2, [[0.002], [0.998]])
cpd_alarm = pgmpy.factors.discrete.TabularCPD('Alarm', 2, [[0.95, 0.94, 0.29, 0.001],
                                                           [0.05, 0.06, 0.71, 0.999]],
                                              evidence=['Burglary', 'Earthquake'],
                                              evidence_card=[2, 2])
cpd_john = pgmpy.factors.discrete.TabularCPD('JohnCalls', 2, [[0.90, 0.05],
                                                              [0.10, 0.95]],
                                             evidence=['Alarm'],
                                             evidence_card=[2])
cpd_mary = pgmpy.factors.discrete.TabularCPD('MaryCalls', 2, [[0.70, 0.01],
                                                              [0.30, 0.99]],
                                             evidence=['Alarm'],
                                             evidence_card=[2])
model.add_cpds(cpd_burglary, cpd_earthquake, cpd_alarm, cpd_john, cpd_mary)
model.check_model()
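The inference code that should follow check_model() appears to have been lost to a page break. As a cross-check on the posterior named in step 4 of the algorithm, P(Burglary = true | JohnCalls = true, MaryCalls = true) can be computed by direct enumeration over the same CPTs (state index 0 of each TabularCPD above is taken as "true"):

```python
# CPT values copied from the pgmpy program above; True maps to state 0.
P_b = {True: 0.001, False: 0.999}          # P(Burglary)
P_e = {True: 0.002, False: 0.998}          # P(Earthquake)
P_a = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}  # P(Alarm=true | B, E)
P_j = {True: 0.90, False: 0.05}            # P(JohnCalls=true | Alarm)
P_m = {True: 0.70, False: 0.01}            # P(MaryCalls=true | Alarm)

num = 0.0   # P(Burglary=true, JohnCalls=true, MaryCalls=true)
den = 0.0   # P(JohnCalls=true, MaryCalls=true)
for b in (True, False):
    for e in (True, False):
        for a in (True, False):
            pa = P_a[(b, e)] if a else 1.0 - P_a[(b, e)]
            w = P_b[b] * P_e[e] * pa * P_j[a] * P_m[a]
            den += w
            if b:
                num += w

posterior = num / den
print(round(posterior, 4))  # ~0.2842
```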
OUTPUT:
RESULT:
Thus the above programs are executed and the outputs are verified.
EX.NO:5
REGRESSION MODELS
DATE:
AIM:
To build regression models using Python.
ALGORITHM:
PROGRAM:
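This exercise is blank in the source. A minimal linear-regression sketch, assuming scikit-learn and synthetic data with a known slope and intercept:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data: y = 3x + 7 plus a little Gaussian noise.
rng = np.random.RandomState(0)
X = rng.rand(100, 1) * 10
y = 3.0 * X[:, 0] + 7.0 + rng.randn(100) * 0.5

# Ordinary least squares recovers the slope and intercept.
model = LinearRegression().fit(X, y)
slope, intercept = model.coef_[0], model.intercept_
print("slope ~", round(slope, 2), "intercept ~", round(intercept, 2))
```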
OUTPUT:
RESULT:
Thus the above program is executed and the output is verified.
EX.NO:6
DECISION TREES AND RANDOM FORESTS
DATE:
AIM:
To build decision tree and random forest classifiers using Python.
ALGORITHM:
PROGRAM:
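This exercise is blank in the source. A minimal sketch, assuming scikit-learn's tree and forest classifiers on the Iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1)

# A single decision tree versus a bagged forest of 100 trees.
tree = DecisionTreeClassifier(random_state=1).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=100,
                                random_state=1).fit(X_train, y_train)

tree_acc = tree.score(X_test, y_test)
forest_acc = forest.score(X_test, y_test)
print("tree:", tree_acc, "forest:", forest_acc)
```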
OUTPUT:
ALGORITHM:
PROGRAM:
OUTPUT:
RESULT:
Thus the above programs are executed and the outputs are verified.
EX.NO:7
SVM MODEL
DATE:
AIM:
To build an SVM classification model using Python.
ALGORITHM:
PROGRAM:
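This exercise is blank in the source. A minimal sketch, assuming scikit-learn's SVC with an RBF kernel on the Iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# RBF-kernel support vector classifier with default C and gamma.
svm = SVC(kernel='rbf').fit(X_train, y_train)
svm_acc = svm.score(X_test, y_test)
print("accuracy:", svm_acc)
```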
OUTPUT:
RESULT:
EX.NO:8
ENSEMBLE TECHNIQUES
DATE:
AIM:
To implement ensemble techniques using Python.
ALGORITHM:
PROGRAM:
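This exercise is blank in the source. One common ensemble technique is voting; a minimal sketch, assuming scikit-learn's VotingClassifier over three base models on the Iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Hard voting: each base model gets one vote per sample, and the
# majority class wins.
ensemble = VotingClassifier(estimators=[
    ('lr', LogisticRegression(max_iter=1000)),
    ('dt', DecisionTreeClassifier(random_state=0)),
    ('svm', SVC())
], voting='hard').fit(X_train, y_train)

ens_acc = ensemble.score(X_test, y_test)
print("ensemble accuracy:", ens_acc)
```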
OUTPUT:
RESULT:
Thus the above program is executed and the output is verified.
EX.NO:9
CLUSTERING ALGORITHMS
DATE:
AIM:
To implement clustering algorithms using Python.
ALGORITHM:
PROGRAM:
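This exercise is blank in the source. A minimal k-means sketch, assuming scikit-learn's KMeans on the Iris dataset (labels ignored):

```python
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans

X, _ = load_iris(return_X_y=True)

# k-means with k=3, since Iris has three species; n_init restarts
# guard against bad random initialisations.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_
print("cluster sizes:", [list(labels).count(c) for c in range(3)])
```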
OUTPUT:
RESULT:
Thus the above program is executed and the output is verified.
EX.NO:10
EM FOR BAYESIAN NETWORK
DATE:
AIM:
To apply the EM algorithm to estimate the parameters of a Bayesian network using Python.
ALGORITHM:
PROGRAM:
import numpy as np
import time
graphNodes = ["a", "b", "c", "d", "e", "f", "g", "h"]
graphNodeIndices = {}
for idx, node in enumerate(graphNodes):
    graphNodeIndices[node] = idx
graphNodeNumStates = {
"a": 3,
"b": 4,
"c": 5,
"d": 4,
"e": 3,
"f": 4,
"g": 5,
"h": 4
}
nodeParents = {
    "a": [],
    "b": [],
    "c": ["a"],
    "d": ["a", "b"],
    # the remaining entries were lost to a page break; plausible
    # parent sets are assumed here so the program runs end-to-end
    "e": ["c"],
    "f": ["d"],
    "g": ["e", "f"],
    "h": ["g"]
}
tensorNodeOrder = {}
for node in graphNodes:
    # each CPT tensor is indexed by the node itself first, then its parents
    tensorNodeOrder[node] = [node] + nodeParents[node]

def randomTensorGenerator(shape):
    return np.random.uniform(0.0, 1.0, shape)

def conditionNodeOnParents(tensor, node, order):
    # Normalise along the node's own axis (axis 0, since the node is
    # listed first in tensorNodeOrder) so the tensor becomes a valid
    # CPT. This helper was missing from the source; a minimal
    # reconstruction is assumed here.
    return tensor / tensor.sum(axis=0, keepdims=True)
np.random.seed(0)
p = {}
for node in graphNodes:
    tensorDimensions = [graphNodeNumStates[x] for x in tensorNodeOrder[node]]
    p[node] = randomTensorGenerator(tensorDimensions)
for node in p:
    p[node] = conditionNodeOnParents(p[node], tensorNodeOrder[node][0], tensorNodeOrder[node])
    print("p(" + node + "|" + str(nodeParents[node]) + ") dimensions: " + str(p[node].shape))
np.random.seed(int(time.time()))
phat = {}
for node in p:
    phat[node] = randomTensorGenerator(p[node].shape)
    phat[node] = conditionNodeOnParents(phat[node], tensorNodeOrder[node][0], tensorNodeOrder[node])
    print("phat(" + node + "|" + str(nodeParents[node]) + ") dimensions: " + str(phat[node].shape))
OUTPUT:
RESULT:
Thus the above program is executed and the output is verified.
EX.NO:11
NEURAL NETWORK MODEL
DATE:
AIM:
To build a simple neural network model using Python.
ALGORITHM:
PROGRAM:
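This exercise is blank in the source. A minimal sketch, assuming scikit-learn's MLPClassifier with a single hidden layer on the Iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1)

# One hidden layer of 10 units, trained by backpropagation.
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=1).fit(X_train, y_train)
nn_acc = net.score(X_test, y_test)
print("accuracy:", nn_acc)
```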
OUTPUT:
RESULT:
Thus the above program is executed and the output is verified.
EX.NO:12
DEEP LEARNING NEURAL NETWORK MODEL
DATE:
AIM:
To build a deep learning neural network model using Python.
ALGORITHM:
PROGRAM:
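This exercise is blank in the source. A deep-learning exercise would normally use TensorFlow/Keras; as a self-contained stand-in, a deeper multi-layer perceptron from scikit-learn on the handwritten-digits dataset:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0   # scale pixel values into [0, 1]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Two hidden layers; "deep" only by comparison with exercise 11.
deep_net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                         random_state=0).fit(X_train, y_train)
deep_acc = deep_net.score(X_test, y_test)
print("accuracy:", deep_acc)
```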
OUTPUT:
RESULT:
Thus the above program is executed and the output is verified.