
INDEX

Sr No.  Date      Practical

1   12-09-24  Breadth First Search & Iterative Depth First Search
    ● Implement the Breadth First Search algorithm to solve a given problem.
    ● Implement the Iterative Depth First Search algorithm to solve the same problem.

2   19-09-24  A* Search and Recursive Best-First Search
    ● Implement the A* Search algorithm for solving a pathfinding problem.
    ● Implement the Recursive Best-First Search algorithm for the same problem.

3   06-10-24  Decision Tree Learning
    ● Implement the Decision Tree Learning algorithm to build a decision tree for a given dataset.
    ● Evaluate the accuracy and effectiveness of the decision tree on test data.
    ● Visualize and interpret the generated decision tree.

4   10-10-24  Feed Forward Backpropagation Neural Network
    ● Implement the Feed Forward Backpropagation algorithm to train a neural network.
    ● Use a given dataset to train the neural network for a specific task.
    ● Evaluate the performance of the trained network on test data.

5   12-09-24  Adaboost Ensemble Learning
    ● Implement the Adaboost algorithm to create an ensemble of weak classifiers.
    ● Train the ensemble model on a given dataset and evaluate its performance.
    ● Compare the results with individual weak classifiers.

6   12-09-24  Support Vector Machines (SVM)
    ● Implement the SVM algorithm for binary classification.
    ● Train an SVM model using a given dataset and optimize its parameters.
    ● Evaluate the performance of the SVM model on test data and analyze the results.

7   06-10-24  K-Nearest Neighbors (K-NN)
    ● Implement the K-NN algorithm for classification or regression.
    ● Apply the K-NN algorithm to a given dataset and predict the class or value for test data.
    ● Evaluate the accuracy or error of the predictions and analyze the results.
PRACTICAL NO : 1

BREADTH FIRST SEARCH & ITERATIVE DEPTH FIRST SEARCH

Aim : 1A. Implement the Breadth First Search algorithm to solve a given problem.

CODE :

import queue as Q
from RMP import dict_gn

start='Arad'
goal='Bucharest'
result=''

def BFS(city, cityq, visitedq):
    global result
    if city == start:
        result = result + ' ' + city
    for eachcity in dict_gn[city].keys():
        if eachcity == goal:
            result = result + ' ' + eachcity
            return
        if eachcity not in cityq.queue and eachcity not in visitedq.queue:
            cityq.put(eachcity)
            result = result + ' ' + eachcity
    visitedq.put(city)
    # expand the next frontier city in FIFO order
    BFS(cityq.get(), cityq, visitedq)

def main():
    cityq = Q.Queue()       # frontier (FIFO)
    visitedq = Q.Queue()    # cities already expanded
    BFS(start, cityq, visitedq)
    print("BFS Traversal from", start, "to", goal, "is:")
    print(result)

main()
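
Note: the listings in this journal import dict_gn and dict_hn from a local RMP module whose source is not reproduced here. As a minimal sketch (the exact city set and the spelling 'Rimnicu' are assumptions; the distances and straight-line heuristics follow the standard Romania road map these programs conventionally use), RMP.py might look like:

# RMP.py -- assumed contents: a subset of the Romania road map
# dict_gn: road distances between adjacent cities (step costs g(n))
dict_gn = {
    'Arad': {'Zerind': 75, 'Timisoara': 118, 'Sibiu': 140},
    'Zerind': {'Arad': 75, 'Oradea': 71},
    'Oradea': {'Zerind': 71, 'Sibiu': 151},
    'Timisoara': {'Arad': 118, 'Lugoj': 111},
    'Lugoj': {'Timisoara': 111, 'Mehadia': 70},
    'Mehadia': {'Lugoj': 70, 'Drobeta': 75},
    'Drobeta': {'Mehadia': 75, 'Craiova': 120},
    'Craiova': {'Drobeta': 120, 'Rimnicu': 146, 'Pitesti': 138},
    'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu': 80},
    'Rimnicu': {'Sibiu': 80, 'Craiova': 146, 'Pitesti': 97},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Pitesti': {'Rimnicu': 97, 'Craiova': 138, 'Bucharest': 101},
    'Bucharest': {'Fagaras': 211, 'Pitesti': 101},
}
# dict_hn: straight-line-distance heuristic h(n) to Bucharest
dict_hn = {
    'Arad': 366, 'Zerind': 374, 'Oradea': 380, 'Timisoara': 329,
    'Lugoj': 244, 'Mehadia': 241, 'Drobeta': 242, 'Craiova': 160,
    'Sibiu': 253, 'Rimnicu': 193, 'Fagaras': 176, 'Pitesti': 100,
    'Bucharest': 0,
}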

OUTPUT :

Aim : 1B. Implement the Iterative Depth First Search algorithm to solve the same
problem.

CODE :

import queue as Q
from RMP import dict_gn

start='Arad'
goal='Bucharest'
result=''

def DLS(city, visitedstack, startlimit, endlimit):
    # depth-limited DFS: never search deeper than endlimit
    global result
    found = 0
    result = result + city + ' '
    visitedstack.append(city)
    if city == goal:
        return 1
    if startlimit == endlimit:
        return 0
    for eachcity in dict_gn[city].keys():
        if eachcity not in visitedstack:
            found = DLS(eachcity, visitedstack, startlimit + 1, endlimit)
            if found:
                return found
    return 0

def IDDFS(city, visitedstack, endlimit):
    # iterative deepening: run DLS with limits 0, 1, 2, ...
    global result
    for i in range(0, endlimit):
        print("Searching at Limit:", i)
        found = DLS(city, visitedstack, 0, i)
        if found:
            print("Found")
            break
        else:
            print("Not Found!")
            print(result)
            print("-----")
            result = ''
            visitedstack = []

def main():
    visitedstack = []
    IDDFS(start, visitedstack, 9)
    print("IDDFS Traversal from", start, "to", goal, "is:")
    print(result)

main()

OUTPUT :

PRACTICAL NO : 2

A* SEARCH AND RECURSIVE BEST-FIRST SEARCH

Aim : 2A. Implement the A* Search algorithm for solving a pathfinding problem.

CODE :

import queue as Q
from RMP import dict_gn
from RMP import dict_hn

start='Arad'
goal='Bucharest'
result=''

def get_fn(citystr):
    # f(n) = g(n) + h(n) for the path encoded in citystr
    cities = citystr.split(" , ")
    hn = gn = 0
    for ctr in range(0, len(cities) - 1):
        gn = gn + dict_gn[cities[ctr]][cities[ctr + 1]]   # accumulated road cost g(n)
    hn = dict_hn[cities[len(cities) - 1]]                  # heuristic of the last city h(n)
    return hn + gn

def expand(cityq):
    # pop the path with the lowest f(n) and grow it by one city
    global result
    tot, citystr, thiscity = cityq.get()
    if thiscity == goal:
        result = citystr + " : : " + str(tot)
        return
    for cty in dict_gn[thiscity]:
        cityq.put((get_fn(citystr + " , " + cty), citystr + " , " + cty, cty))
    expand(cityq)

def main():
    cityq = Q.PriorityQueue()
    thiscity = start
    cityq.put((get_fn(start), start, thiscity))
    expand(cityq)
    print("The A* path with the total is:")
    print(result)

main()

OUTPUT :

Aim : 2B. Implement the Recursive Best-First Search algorithm for the same
problem.

CODE :

import queue as Q
from RMP import dict_gn
from RMP import dict_hn

start='Arad'
goal='Bucharest'
result=''

def get_fn(citystr):
    # f(n) = g(n) + h(n) for the path encoded in citystr
    cities = citystr.split(',')
    hn = gn = 0
    for ctr in range(0, len(cities) - 1):
        gn = gn + dict_gn[cities[ctr]][cities[ctr + 1]]
    hn = dict_hn[cities[len(cities) - 1]]
    return hn + gn

def printout(cityq):
    # show the current frontier of candidate paths
    for i in range(0, cityq.qsize()):
        print(cityq.queue[i])

def expand(cityq):
    global result
    tot, citystr, thiscity = cityq.get()
    nexttot = 999
    if not cityq.empty():
        # f(n) of the second-best alternative path
        nexttot, nextcitystr, nextthiscity = cityq.queue[0]
    if thiscity == goal and tot < nexttot:
        result = citystr + '::' + str(tot)
        return
    print("Expanded city------------------------------", thiscity)
    print("Second best f(n)------------------------------", nexttot)
    tempq = Q.PriorityQueue()
    for cty in dict_gn[thiscity]:
        tempq.put((get_fn(citystr + ',' + cty), citystr + ',' + cty, cty))
    # keep successors only while they beat the second-best alternative;
    # otherwise back up to the alternative path
    for ctr in range(1, 3):
        ctrtot, ctrcitystr, ctrthiscity = tempq.get()
        if ctrtot < nexttot:
            cityq.put((ctrtot, ctrcitystr, ctrthiscity))
        else:
            cityq.put((ctrtot, citystr, thiscity))
            break
    printout(cityq)
    expand(cityq)

def main():
    cityq = Q.PriorityQueue()
    thiscity = start
    cityq.put((999, "NA", "NA"))   # sentinel so a second-best entry always exists
    cityq.put((get_fn(start), start, thiscity))
    expand(cityq)
    print(result)

main()

OUTPUT :

PRACTICAL NO : 3

DECISION TREE LEARNING

Aim : Implement the Decision Tree Learning algorithm to build a decision tree for
a given dataset.

CODE :
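
The code page for this practical did not survive the scan. A minimal sketch, assuming scikit-learn's DecisionTreeClassifier and the bundled Iris dataset (the journal does not name the dataset actually used):

# Assumed approach: scikit-learn decision tree on the Iris dataset
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

# build the tree on the training split
clf = DecisionTreeClassifier(criterion='entropy', random_state=42)
clf.fit(X_train, y_train)

# evaluate accuracy on held-out test data
y_pred = clf.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))

# visualize and interpret the learned tree
plot_tree(clf, feature_names=iris.feature_names,
          class_names=iris.target_names, filled=True)
plt.show()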
OUTPUT :

PRACTICAL NO : 4

Feed Forward Backpropagation Neural Network

Aim : Implement the Feed Forward Backpropagation algorithm to train a neural network.

CODE :
import numpy as np

class NeuralNetwork():
    def __init__(self):
        # seeding the random number generator (fixed seed so runs are reproducible)
        np.random.seed(1)

        # initializing weights as a 3 by 1 matrix with values in [-1, 1)
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

    def sigmoid(self, x):
        # applying the sigmoid activation function
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # derivative of the sigmoid (x is already a sigmoid output)
        return x * (1 - x)

    def train(self, training_inputs, training_outputs, training_iterations):
        # training the model to make accurate predictions while adjusting weights
        for iteration in range(training_iterations):
            # siphon the training data via the neuron
            output = self.think(training_inputs)

            error = training_outputs - output

            # performing weight adjustments (gradient of the error w.r.t. the weights)
            adjustments = np.dot(training_inputs.T, error * self.sigmoid_derivative(output))
            self.synaptic_weights += adjustments

    def think(self, inputs):
        # passing the inputs via the neuron to get output
        # converting values to floats
        inputs = inputs.astype(float)
        output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
        return output

if __name__ == "__main__":
    # initializing the neuron class
    neural_network = NeuralNetwork()
    print("Beginning randomly generated weights: ")
    print(neural_network.synaptic_weights)

    # training data consisting of 4 examples -- 3 inputs & 1 output
    training_inputs = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
    training_outputs = np.array([[0, 1, 1, 0]]).T

    # training taking place
    neural_network.train(training_inputs, training_outputs, 15000)

    print("Ending weights after training: ")
    print(neural_network.synaptic_weights)

    user_input_one = str(input("User Input One: "))
    user_input_two = str(input("User Input Two: "))
    user_input_three = str(input("User Input Three: "))

    print("Considering new situation: ", user_input_one, user_input_two, user_input_three)
    print("New output data: ")
    print(neural_network.think(np.array([user_input_one, user_input_two, user_input_three])))

OUTPUT :

PRACTICAL NO : 5

ADABOOST ENSEMBLE LEARNING

Aim : Implement the Adaboost algorithm to create an ensemble of weak classifiers. Train the ensemble model on a given dataset and evaluate its performance.

CODE :

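The code pages for this practical are missing from the scan. A minimal sketch, assuming scikit-learn's AdaBoostClassifier (whose default weak learner is a depth-1 decision stump) on the bundled breast-cancer dataset, with a single stump trained alongside it for comparison; the dataset choice is an assumption:

# Assumed approach: sklearn AdaBoost ensemble vs. a single decision stump
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# a single weak classifier: a depth-1 decision tree ("stump")
stump = DecisionTreeClassifier(max_depth=1, random_state=42)
stump.fit(X_train, y_train)
print("Single stump accuracy:", accuracy_score(y_test, stump.predict(X_test)))

# Adaboost: 50 stumps, each trained on examples reweighted by the last one's errors
ada = AdaBoostClassifier(n_estimators=50, random_state=42)
ada.fit(X_train, y_train)
print("Adaboost ensemble accuracy:", accuracy_score(y_test, ada.predict(X_test)))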
PRACTICAL NO : 6

SUPPORT VECTOR MACHINE (SVM)

Aim : Implement the SVM algorithm for binary classification.

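CODE :

The code pages for this practical are missing from the scan. A minimal sketch, assuming scikit-learn's SVC on the bundled breast-cancer dataset (a binary classification task), with feature scaling and a small grid search to optimize the parameters; the dataset and the parameter grid are assumptions:

# Assumed approach: sklearn SVC with a grid search over C and kernel
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# scale features, then fit the SVM; tune C and the kernel by cross-validation
pipe = make_pipeline(StandardScaler(), SVC())
param_grid = {'svc__C': [0.1, 1, 10], 'svc__kernel': ['linear', 'rbf']}
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X_train, y_train)

print("Best parameters:", grid.best_params_)
y_pred = grid.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))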
PRACTICAL NO : 7

K-NEAREST NEIGHBORS (K-NN)

Aim : Implement the K-NN algorithm for classification or regression. Apply the
K-NN algorithm to a given dataset and predict the class or value for test data.

CODE :
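
The code pages for this practical are missing from the scan. A minimal sketch, assuming scikit-learn's KNeighborsClassifier on the bundled Iris dataset (both assumptions):

# Assumed approach: sklearn K-NN classification on Iris, k = 3
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# each test point is classified by a majority vote of its 3 nearest neighbours
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

y_pred = knn.predict(X_test)
print("Predicted classes:", y_pred)
print("Accuracy:", accuracy_score(y_test, y_pred))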

