AI&ML- Lab Manual
EX. NO: 1 IMPLEMENTATION OF UNINFORMED SEARCH ALGORITHMS (BFS, DFS)
DATE:
AIM:
To write a program in python to solve problems using uninformed search
algorithms (BFS, DFS).
ALGORITHM:
Step 1: Create an empty queue (for BFS) or stack (for DFS) and add the initial state to it.
Step 2: Remove the first state from the queue (or the last state from the stack).
Step 3: If the state is the goal state, return the path from the initial state to the
current state. Otherwise, for each action, generate the resulting state and check if it
has been visited before. If it has not been visited, add it to the queue (or stack) and
mark it as visited.
Step 4: If the queue (or stack) is empty and no goal state has been found, return failure.
PROGRAM:
from collections import deque

class Graph:
    def __init__(self):
        self.graph = {}

    def add_edge(self, u, v):
        self.graph.setdefault(u, []).append(v)

    def bfs(self, start):
        visited = set()
        result = []
        queue = deque([start])
        while queue:
            vertex = queue.popleft()
            if vertex not in visited:
                result.append(vertex)
                visited.add(vertex)
                if vertex in self.graph:
                    for neighbor in self.graph[vertex]:
                        queue.append(neighbor)
        return result

    def dfs(self, start, visited=None, result=None):
        if visited is None:
            visited, result = set(), []
        visited.add(start)
        result.append(start)
        for neighbor in self.graph.get(start, []):
            if neighbor not in visited:
                self.dfs(neighbor, visited, result)
        return result

# Example usage
graph = Graph()
graph.add_edge(0, 1)
graph.add_edge(0, 2)
graph.add_edge(1, 2)
graph.add_edge(2, 0)
graph.add_edge(2, 3)
graph.add_edge(3, 3)
print("BFS:", graph.bfs(2))
print("DFS:", graph.dfs(2))
OUTPUT:
BFS: [2, 0, 3, 1]
DFS: [2, 0, 1, 3]
RESULT :-
Thus the python program for implementation of uninformed search algorithms (BFS,
DFS) has been written and executed successfully.
EX. NO: 2A IMPLEMENTATION OF INFORMED SEARCH ALGORITHMS (A*)
DATE:
AIM:
To write a program in python to solve problems using the informed search
algorithm A*.
ALGORITHM:
Step 1: Create an open set and a closed set, both initially empty.
Step 2: Add the initial state to the open set with a cost of 0 and an estimated total cost
(f-score) equal to the heuristic value of the initial state.
Step 3: Repeat until the goal is found:
a. Choose the state with the lowest f-score from the open set.
b. If this state is the goal state, return the path from the initial state to this state.
c. For each action, generate the resulting state and compute the cost to get to that
state by adding the cost of the current state to the cost of the action.
d. If the resulting state is not in the closed set, or the new cost to get there is less
than the old cost, update its cost and estimated total cost in the open set and add
it to the open set.
Step 4: If the open set is empty and no goal state has been found, return failure.
PROGRAM :
import heapq

class Node:
    def __init__(self, state, parent=None, g=0, h=0):
        self.state = state
        self.parent = parent
        self.g = g  # cost from start node to current node
        self.h = h  # heuristic cost from current node to goal node

    def f(self):
        return self.g + self.h

    def __lt__(self, other):
        # Needed so heapq can break ties between equal f-scores
        return self.f() < other.f()

def heuristic(state, goal_state):
    # Manhattan distance of every tile from its goal position
    goal_pos = {v: (r, c) for r, row in enumerate(goal_state) for c, v in enumerate(row)}
    return sum(abs(r - goal_pos[v][0]) + abs(c - goal_pos[v][1])
               for r, row in enumerate(state) for c, v in enumerate(row) if v != 0)

def successors(state):
    # Generate the states reachable by sliding a tile into the blank (0)
    r, c = next((r, c) for r, row in enumerate(state) for c, v in enumerate(row) if v == 0)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            s = [list(row) for row in state]
            s[r][c], s[nr][nc] = s[nr][nc], s[r][c]
            yield tuple(map(tuple, s))

def astar(start_state, goal_state):
    start = Node(state=start_state, h=heuristic(start_state, goal_state))
    frontier = [(start.f(), start)]
    explored = set()
    while frontier:
        _, current_node = heapq.heappop(frontier)
        if current_node.state == goal_state:
            return current_node  # Goal found
        explored.add(current_node.state)
        for successor_state in successors(current_node.state):
            if successor_state in explored:
                continue
            successor_node = Node(state=successor_state,
                                  parent=current_node,
                                  g=current_node.g + 1,  # each move costs 1
                                  h=heuristic(successor_state, goal_state))
            heapq.heappush(frontier, (successor_node.f(), successor_node))
    return None

# Example usage (states are tuples of tuples so they can be stored in a set)
start_state = ((1, 2, 3),
               (4, 5, 6),
               (7, 8, 0))  # Initial state of 8-puzzle problem
goal_state = ((1, 2, 3),
              (4, 5, 6),
              (7, 8, 0))   # Goal state of 8-puzzle problem
result = astar(start_state, goal_state)
if result:
    # Print path from the initial state to the goal
    path = []
    while result:
        path.append(result.state)
        result = result.parent
    path.reverse()
    for state in path:
        print(state)
else:
    print("Goal not found.")
OUTPUT :
RESULT :-
Thus the python program for implementation of informed search algorithms A* has
been written and executed successfully.
EX. NO: 2B INFORMED SEARCH ALGORITHMS (MEMORY-BOUNDED A*)
DATE:
AIM:
To write a program in python to solve problems using the informed search
algorithm memory-bounded A*.
ALGORITHM:
Step 1: Create an open set and a closed set, both initially empty.
Step 2: Add the initial state to the open set with a cost of 0 and an estimated total cost
(f-score) equal to the heuristic value of the initial state.
Step 3: Repeat until the goal is found:
a. Choose the state with the lowest f-score from the open set.
b. If this state is the goal state, return the path from the initial state to this state.
c. For each action, generate the resulting state and compute the cost to get to that
state by adding the cost of the current state to the cost of the action.
d. If the resulting state is not in the closed set, or the new cost to get there is less
than the old cost, update its cost and estimated total cost in the open set and add
it to the open set.
e. If the size of the closed set plus the open set exceeds the maximum memory usage,
remove the state with the highest estimated total cost to stay within the bound.
Step 4: If the open set is empty and no goal state has been found, return failure.
PROGRAM :
import heapq

class Node:
    def __init__(self, state, parent=None, g=0, h=0):
        self.state = state
        self.parent = parent
        self.g = g  # cost from start node to current node
        self.h = h  # heuristic cost from current node to goal node

    def f(self):
        return self.g + self.h

    def __lt__(self, other):
        # Needed so heapq can break ties between equal f-scores
        return self.f() < other.f()

def heuristic(state, goal_state):
    # Manhattan distance of every tile from its goal position
    goal_pos = {v: (r, c) for r, row in enumerate(goal_state) for c, v in enumerate(row)}
    return sum(abs(r - goal_pos[v][0]) + abs(c - goal_pos[v][1])
               for r, row in enumerate(state) for c, v in enumerate(row) if v != 0)

def successors(state):
    # Generate the states reachable by sliding a tile into the blank (0)
    r, c = next((r, c) for r, row in enumerate(state) for c, v in enumerate(row) if v == 0)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            s = [list(row) for row in state]
            s[r][c], s[nr][nc] = s[nr][nc], s[r][c]
            yield tuple(map(tuple, s))

def memory_bounded_astar(start_state, goal_state, max_nodes=1000):
    start = Node(state=start_state, h=heuristic(start_state, goal_state))
    frontier = [(start.f(), start)]
    explored = set()
    while frontier:
        _, current_node = heapq.heappop(frontier)
        if current_node.state == goal_state:
            return current_node  # Goal found
        explored.add(current_node.state)
        for successor_state in successors(current_node.state):
            if successor_state in explored:
                continue
            successor_node = Node(state=successor_state,
                                  parent=current_node,
                                  g=current_node.g + 1,  # each move costs 1
                                  h=heuristic(successor_state, goal_state))
            heapq.heappush(frontier, (successor_node.f(), successor_node))
        # Memory bound: if total stored nodes exceed the limit,
        # drop the frontier node with the highest estimated total cost
        while len(frontier) > 1 and len(frontier) + len(explored) > max_nodes:
            frontier.remove(max(frontier))
            heapq.heapify(frontier)
    return None

# Example usage (states are tuples of tuples so they can be stored in a set)
start_state = ((1, 2, 3),
               (4, 5, 6),
               (7, 8, 0))  # Initial state of 8-puzzle problem
goal_state = ((1, 2, 3),
              (4, 5, 6),
              (7, 8, 0))   # Goal state of 8-puzzle problem
result = memory_bounded_astar(start_state, goal_state)
if result:
    # Print path from the initial state to the goal
    path = []
    while result:
        path.append(result.state)
        result = result.parent
    path.reverse()
    for state in path:
        print(state)
else:
    print("Goal not found within memory limit.")
OUTPUT :
RESULT :-
Thus the python program for informed search algorithms (Memory-Bounded A*) has
been written and executed successfully.
EX. NO: 3A IMPLEMENT NAIVE BAYES MODELS (GAUSSIAN NAIVE BAYES)
DATE:
AIM:
To write a program in python to implement the naive Bayes model
(Gaussian Naive Bayes).
ALGORITHM:
Input:
• Training dataset with features X and corresponding labels Y
• Test dataset with features X_test
Output:
• Predicted labels for test dataset Y_pred
Step 1: Calculate the prior probabilities of each class in the training dataset, i.e., P(Y = c),
where c is the class label.
Step 2: Calculate the mean and standard deviation of each feature for each class in the
training dataset.
Step 3: For each test instance in X_test, calculate the posterior probability of each class c,
i.e., P(Y = c | X = x_test), using the Gaussian probability density function:
P(Y = c | X = x_test) ∝ P(Y = c) * (1 / (sqrt(2*pi)*sigma_c)) * exp(-((x_test - mu_c)^2) / (2 * sigma_c^2)),
where mu_c and sigma_c are the mean and standard deviation of the feature values for
class c, respectively.
Step 4: For each test instance in X_test, assign the class label with the highest posterior
probability as the predicted label Y_pred.
PROGRAM :
import numpy as np

class GaussianNaiveBayes:
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.parameters = {}
        for c in self.classes:
            X_c = X[y == c]
            self.parameters[c] = {
                'mean': X_c.mean(axis=0),
                'std': X_c.std(axis=0),
                'prior': len(X_c) / len(X)
            }

    def predict(self, X):
        predictions = []
        for x in X:
            posteriors = []
            for c in self.classes:
                p = self.parameters[c]
                # Log prior plus the sum of per-feature Gaussian log-likelihoods
                likelihood = -0.5 * np.sum(np.log(2 * np.pi * p['std'] ** 2)) \
                             - 0.5 * np.sum(((x - p['mean']) / p['std']) ** 2)
                posteriors.append(np.log(p['prior']) + likelihood)
            predictions.append(self.classes[np.argmax(posteriors)])
        return np.array(predictions)

# Example usage
X_train = np.array([[5, 2], [3, 4], [6, 4], [7, 3], [2, 3], [5, 3]])
y_train = np.array([0, 0, 1, 1, 0, 1])  # Example binary classes
model = GaussianNaiveBayes()
model.fit(X_train, y_train)
X_test = np.array([[4, 3], [6, 4]])
predictions = model.predict(X_test)
print("Predictions:", predictions)
OUTPUT :
Predictions: [0 1]
RESULT :-
Thus the python program to implement naive Bayes models (Gaussian Naive Bayes) has
been written and executed successfully.
EX. NO: 3B IMPLEMENT NAIVE BAYES MODELS (MULTINOMIAL NAIVE BAYES)
DATE:
AIM:
To write a program in python to implement the naive Bayes model
(Multinomial Naive Bayes).
ALGORITHM:
Step 1: Convert the training dataset into a frequency table where each row represents a
document, and each column represents a word in the vocabulary. The values in the table
represent the frequency of each word in each document.
Step 2: Calculate the prior probabilities of each class label by dividing the number of
documents in each class by the total number of documents.
Step 3: Calculate the conditional probabilities of each word given each class label. This
involves calculating the frequency of each word in each class and dividing it by the total
number of words in that class.
Step 4: For each document in the test dataset, calculate the posterior probability of each
class label using the Naive Bayes formula:
P(class | doc) ∝ P(class) * P(word1 | class) * P(word2 | class) * ... * P(wordn | class),
where word1, word2, ..., wordn are the words in the document and P(word | class) is
the conditional probability of that word given the class label.
Step 5: Predict the class label with the highest posterior probability for each document in
the test dataset.
Step 6: Return the predicted class labels for the test dataset.
PROGRAM :
import numpy as np

class MultinomialNaiveBayes:
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.class_probabilities = {}
        self.feature_probabilities = {}
        for c in self.classes:
            X_c = X[y == c]
            self.class_probabilities[c] = len(X_c) / len(X)
            # Laplace-smoothed word probabilities for class c
            counts = X_c.sum(axis=0) + 1
            self.feature_probabilities[c] = counts / counts.sum()

    def predict(self, X):
        predictions = []
        for x in X:
            scores = [np.log(self.class_probabilities[c])
                      + np.sum(x * np.log(self.feature_probabilities[c]))
                      for c in self.classes]
            predictions.append(self.classes[np.argmax(scores)])
        return np.array(predictions)

# Example usage
X_train = np.array([[1, 0, 1, 0], [1, 1, 0, 1], [0, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1]])
y_train = np.array([0, 1, 0, 1, 0])  # Example binary classes
model = MultinomialNaiveBayes()
model.fit(X_train, y_train)
X_test = np.array([[0, 1, 0, 0], [1, 0, 1, 0]])  # test documents (missing from the manual; reconstructed)
predictions = model.predict(X_test)
print("Predictions:", predictions)
OUTPUT :
Predictions: [0 0]
RESULT :-
Thus the python program to implement naive Bayes models (Multinomial Naive Bayes)
has been written and executed successfully.
EX. NO: 4
IMPLEMENT BAYESIAN NETWORKS
DATE:
AIM:
To write a program in python to implement Bayesian networks.
ALGORITHM:
Step 1: Import the required classes (BayesianModel, TabularCPD, VariableElimination)
from pgmpy.
Step 2: Define the structure of the Bayesian network by creating a BayesianModel object
and specifying the nodes and their dependencies.
Step 3: Define the conditional probability distributions (CPDs) for each node using the
TabularCPD class.
Step 4: Add the CPDs to the model using the add_cpds method.
Step 5: Check if the model is valid using the check_model method. If the model is not
valid, an error message will be raised.
Step 6: Create a VariableElimination object and use its query method to compute the
probability of the Letter node being good given that the Intelligence node is high and
the Difficulty node is low.
PROGRAM :
# Perform variable elimination to compute the probability of getting a good letter given
high intelligence and low difficulty
inference = VariableElimination(model)
query = inference.query(variables=['Letter'], evidence={'Intelligence': 1, 'Difficulty': 0},
show_progress=False)
print("P(Letter=Good | Intelligence=High, Difficulty=Low) =", query.values[0])
OUTPUT :
RESULT :-
Thus the python program to implement Bayesian networks has been written and
executed successfully.
EX. NO: 5
BUILD REGRESSION MODELS
DATE:
AIM:
To write a python program to build regression models.
ALGORITHM:
1. Import numpy, pandas, and the required scikit-learn modules.
2. Load the dataset into a pandas DataFrame using read_csv and separate the features
from the target variable.
3. Split the data into training and testing sets using the train_test_split function from
scikit-learn.
4. Train a linear regression model using the training data by creating an instance of the
LinearRegression class and calling its fit method with the training data.
5. Evaluate the performance of the model using mean squared error on both the training
and testing data. Print the results to the console.
PROGRAM :
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
# Load data
data = pd.read_csv(r'c:/data.csv')

# Split into features and target (assumes the last column is the target)
X = data.iloc[:, :-1]
y = data.iloc[:, -1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a linear regression model
reg = LinearRegression()
reg.fit(X_train, y_train)

# Evaluate model
train_pred = reg.predict(X_train)
test_pred = reg.predict(X_test)
print('Train MSE:', mean_squared_error(y_train, train_pred))
print('Test MSE:', mean_squared_error(y_test, test_pred))
OUTPUT :
RESULT :-
Thus the python program to build regression models has been written and executed
successfully.
EX. NO: 6A
BUILD DECISION TREES
DATE:
AIM:
To write a python program to build decision trees.
ALGORITHM:
1. Load the housing dataset into a pandas DataFrame.
2. Separate the features X from the target variable MEDV.
3. Split the data into training and testing sets using train_test_split.
4. Train a decision tree regressor on the training data.
5. Evaluate the model on the test data and print the R2 score.
PROGRAM :
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

# Load the dataset
dataset = pd.read_csv(r'c:/housing.csv')

# Split the dataset into training and testing sets
X = dataset.drop('MEDV', axis=1)
y = dataset['MEDV']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a decision tree regressor and report the R2 score on the test set
tree = DecisionTreeRegressor(random_state=42)
tree.fit(X_train, y_train)
print('R2 score:', r2_score(y_test, tree.predict(X_test)))
OUTPUT :
RESULT :-
Thus the python program to build decision trees has been written and executed
successfully.
EX. NO: 6B
BUILD RANDOM FORESTS
DATE:
AIM:
To write a python program to build random forests.
ALGORITHM:
1. Load the iris dataset using load_iris from scikit-learn.
2. Split the data into training and testing sets using train_test_split.
3. Train a random forest classifier on the training data.
4. Predict on the test data and print the accuracy score.
PROGRAM :
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load the dataset
iris = load_iris()
X = iris.data
y = iris.target

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a random forest classifier and evaluate its accuracy on the test set
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
print("Random Forest Accuracy:", accuracy_score(y_test, rf.predict(X_test)))
OUTPUT :
Random Forest Accuracy: 1.0
RESULT :-
Thus the python program to build random forests has been written and executed
successfully.
EX. NO: 7
BUILD SVM MODELS
DATE:
AIM:
To write a python program to build SVM models.
ALGORITHM:
1. Import the required scikit-learn modules.
2. Load the iris dataset.
3. Split the data into training and testing sets.
4. Create an SVM classifier:
a. Choose the kernel function
b. Set the regularization parameter C
c. Specify the gamma value (if applicable)
5. Train the SVM model using the training data.
6. Predict on the test data and print the accuracy score.
PROGRAM :
# Import necessary libraries
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Load the iris dataset
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target,
                                                    test_size=0.2, random_state=42)

# Train an SVM classifier with a linear kernel and evaluate it
svm = SVC(kernel='linear')
svm.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, svm.predict(X_test)))
OUTPUT :
Accuracy: 1.0
RESULT :-
Thus the python program to build SVM models has been written and executed
successfully.
EX. NO: 8
IMPLEMENT ENSEMBLING TECHNIQUES
DATE:
AIM:
To write a python program to implement ensembling techniques.
ALGORITHM:
Step 1: Load the breast cancer dataset and split the data into training and testing sets using
train_test_split() function.
Step 2: Train 10 random forest models using bagging by randomly selecting 50% of the training
data for each model, and fit a random forest classifier with 100 trees to the selected data.
Step 3: Test each model on the testing set and calculate the accuracy of each model using
accuracy_score() function.
Step 4: Combine the predictions of the 10 models by taking the average of the predicted
probabilities for each class, round the predicted probabilities to the nearest integer, and calculate the
accuracy of the ensemble model using accuracy_score() function.
Step 5: Print the accuracy of each individual model and the ensemble model.
PROGRAM :
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

data = load_breast_cancer()
X = data.data
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train 10 random forests, each on a random 50% bag of the training data
models = []
for i in range(10):
    X_bag, _, y_bag, _ = train_test_split(X_train, y_train, test_size=0.5)
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_bag, y_bag)
    y_pred = model.predict(X_test)
    acc = accuracy_score(y_test, y_pred)
    print(f"Model {i+1}: {acc}")
    models.append(model)

# Average the individual predictions and round to get the ensemble vote
y_preds = []
for model in models:
    y_pred = model.predict(X_test)
    y_preds.append(y_pred)
y_ensemble = sum(y_preds) / len(y_preds)
y_ensemble = [int(round(y)) for y in y_ensemble]
acc_ensemble = accuracy_score(y_test, y_ensemble)
print(f"Ensemble: {acc_ensemble}")
OUTPUT :
Model 1: 0.9649122807017544
Model 2: 0.9473684210526315
Model 3: 0.956140350877193
Model 4: 0.9649122807017544
Model 5: 0.956140350877193
Model 6: 0.9649122807017544
Model 7: 0.956140350877193
Model 8: 0.956140350877193
Model 9: 0.956140350877193
Model 10: 0.9736842105263158
Ensemble: 0.956140350877193
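Bagging by hand, as above, is one way to ensemble; scikit-learn also ships a ready-made combiner. A brief sketch with VotingClassifier on the same dataset, combining two different base learners by majority vote (the choice of estimators here is illustrative):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Combine two different base learners with hard (majority) voting
ensemble = VotingClassifier(estimators=[
    ('rf', RandomForestClassifier(n_estimators=100, random_state=42)),
    ('lr', LogisticRegression(max_iter=5000)),
], voting='hard')
ensemble.fit(X_train, y_train)
acc = accuracy_score(y_test, ensemble.predict(X_test))
print("Voting accuracy:", acc)
```

Soft voting (`voting='soft'`) averages predicted probabilities instead, which usually helps when the base learners are well calibrated.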
RESULT :-
Thus the python program to implement ensembling techniques has been written and
executed successfully.
EX. NO: 9A
IMPLEMENT CLUSTERING ALGORITHMS (HIERARCHICAL CLUSTERING)
DATE:
AIM:
To write a python program to implement clustering algorithms (hierarchical
clustering).
ALGORITHM:
Step 1: Begin with a dataset containing n data points.
Step 2: Calculate the pairwise distance between all data points.
Step 3: Create n clusters, one for each data point.
Step 4: Find the closest pair of clusters based on the pairwise distance between their data points.
Step 5: Merge the two closest clusters into a new cluster.
Step 6: Update the pairwise distance matrix to reflect the distance between the new cluster
andthe remaining clusters.
Step 7: Repeat steps 4-6 until all data points are in a single cluster.
PROGRAM:
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
import matplotlib.pyplot as plt
# Generate sample data
X = np.array([[1,2], [1,4], [1,0], [4,2], [4,4], [4,0]])
# Perform hierarchical clustering
Z = linkage(X, 'ward')

# Plot dendrogram
plt.figure(figsize=(10, 5))
dendrogram(Z)
plt.show()
OUTPUT :
RESULT :-
Thus the python program to implement clustering algorithms (hierarchical clustering)
has been written and executed successfully.
EX. NO: 9B
IMPLEMENT CLUSTERING ALGORITHMS (DENSITY-BASED CLUSTERING)
DATE:
AIM:
To write a python program to implement clustering algorithms (density-based
clustering).
ALGORITHM:
Step 1: Choose an appropriate distance metric (e.g., Euclidean distance) to measure the
similarity between data points.
Step 2: Choose the value of the radius eps around each data point that will be considered
when identifying dense regions. This value determines the sensitivity of the algorithm to
noise and outliers.
Step 3: Choose the minimum number of points min_samples that must be found within a
radius of eps around a data point for it to be considered a core point. Points with fewer
neighbors are considered border points, while those with no neighbors are considered noise
points.
Step 4: Randomly choose an unvisited data point p from the dataset.
Step 5: Determine whether p is a core point, border point, or noise point based on the
number of points within a radius of eps around p.
Step 6: If p is a core point, create a new cluster and add p and all its density-reachable
neighbors to the cluster.
Step 7: If p is a border point, add it to any neighboring cluster that has not reached its
min_samples threshold.
Step 8: Mark p as visited.
Step 9: Repeat steps 4-8 until all data points have been visited.
Step 10: Merge clusters that share border points.
PROGRAM:
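The manual leaves this program blank. A minimal sketch using scikit-learn's DBSCAN on the six points from the hierarchical-clustering exercise plus one outlier; the eps and min_samples values below are illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Sample data: two dense groups of three points each, plus one far-away outlier
X = np.array([[1, 2], [1, 4], [1, 0],
              [4, 2], [4, 4], [4, 0],
              [25, 80]])

# eps: neighbourhood radius; min_samples: points (including the point itself)
# required within eps for a point to count as a core point
db = DBSCAN(eps=2, min_samples=2)
labels = db.fit_predict(X)
print("Cluster labels:", labels)  # label -1 marks noise points
```

With these settings the first three points form cluster 0, the next three form cluster 1, and the outlier is labelled -1 as noise.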
RESULT :-
Thus the python program to implement clustering algorithms (density-based clustering)
has been written and executed successfully.
EX. NO: 10
IMPLEMENT EM FOR BAYESIAN NETWORKS
DATE:
AIM:
To write a python program to implement EM for Bayesian networks.
ALGORITHM:
Step 1: Define the structure of the Bayesian network and initialize its parameters.
Step 2: E-step: using the current parameters, compute the posterior distribution of the
hidden variable for each data record.
Step 3: M-step: re-estimate the parameters of the network from the expected counts
computed in the E-step.
Step 4: Repeat the E-step and M-step until the parameters converge or a fixed number of
iterations is reached.
PROGRAM :
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination
import numpy as np
# Define the EM algorithm
for i in range(10):
    # E-step: compute the expected sufficient statistics of the hidden variable C
    infer = VariableElimination(model)
    evidence = data.to_dict('records')
    qs = infer.query(['C'], evidence=evidence)
    p_C = qs['C'].values
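The fragment above omits the model and the data, so it cannot run as-is. The same E-step/M-step loop can be illustrated self-contained with plain NumPy on a two-node network C → F (hidden C, observed F, both binary); the observations and initial parameters below are invented for illustration:

```python
import numpy as np

# Observed binary values of F; C is hidden (data made up for illustration)
F = np.array([0, 1, 1, 0, 1, 1, 1, 0, 1, 1])

# Initial parameter guesses
p_c = 0.5                   # P(C=1)
p_f = np.array([0.4, 0.6])  # P(F=1 | C=0), P(F=1 | C=1)

for _ in range(20):
    # E-step: posterior responsibility P(C=1 | F=f) for every record
    lik1 = p_c * np.where(F == 1, p_f[1], 1 - p_f[1])
    lik0 = (1 - p_c) * np.where(F == 1, p_f[0], 1 - p_f[0])
    gamma = lik1 / (lik1 + lik0)
    # M-step: re-estimate P(C) and P(F | C) from the expected counts
    p_c = gamma.mean()
    p_f = np.array([((1 - gamma) * F).sum() / (1 - gamma).sum(),
                    (gamma * F).sum() / gamma.sum()])

print("P(C=1) =", p_c)
print("P(F=1 | C) =", p_f)
```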
OUTPUT :
╒═════╤═════╤═════╕
│ C   │ C_0 │ C_1 │
├─────┼─────┼─────┤
│ F_0 │ 0.8 │ 0.3 │
├─────┼─────┼─────┤
│ F_1 │ 0.2 │ 0.7 │
╘═════╧═════╧═════╛
RESULT :-
Thus the python program to implement EM for bayesian networks has been written and
executed successfully.
EX. NO: 11
BUILD SIMPLE NN MODELS
DATE:
AIM:
To write a python program to build simple NN models.
ALGORITHM:
Step 1: Define the XOR input matrix X and target vector y.
Step 2: Build a Sequential model with one hidden Dense layer of 4 ReLU units and a
single sigmoid output unit.
Step 3: Compile the model with binary cross-entropy loss and the Adam optimizer.
Step 4: Train the model on X and y, then predict on the test data and print the outputs.
PROGRAM :
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Define the model architecture
model = Sequential()
model.add(Dense(4, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Train the model
model.fit(X, y, epochs=1000, batch_size=4, verbose=0)

# Test the model
test_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
predictions = model.predict(test_data)
print(predictions)
OUTPUT :
[[0.5]
[0.5]
[0.5]
[0.5]]
RESULT :-
Thus the python program to build simple NN models has been written and executed
successfully.
EX. NO: 12
BUILD DEEP LEARNING NN MODELS
DATE:
AIM:
To write a python program to build deep learning NN models.
ALGORITHM:
Step 1: Load the MNIST dataset using mnist.load_data() from the keras.datasets module.
Step 2: Preprocess the data by reshaping the input data to a 1D array, converting the data type to
float32, normalizing the input data to values between 0 and 1, and converting the target variable
to categorical using np_utils.to_categorical().
Step 3: Define the neural network architecture using the Sequential() class from Keras. The
model should have an input layer of 784 nodes, two hidden layers of 512 nodes each with ReLU
activation and dropout layers with a rate of 0.2, and an output layer of 10 nodes with softmax
activation.
Step 4: Compile the model using compile() with 'categorical_crossentropy' as the loss function,
'adam' as the optimizer, and 'accuracy' as the evaluation metric.
Step 5: Train the model using fit() with the preprocessed training data, the batch size of 128, the
number of epochs of 10, and the validation data. Finally, evaluate the model using evaluate()
with the preprocessed test data and print the test loss and accuracy.
PROGRAM :
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import np_utils

# Load the MNIST dataset and reshape each 28x28 image into a 784-element vector
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(-1, 784)
X_test = X_test.reshape(-1, 784)

# Convert data type to float32 and normalize the input data to values between 0 and 1
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
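The listing stops after normalization. The remaining steps of the algorithm (model definition, compilation, training) can be sketched as follows; the layer sizes come from Step 3, and the fit/evaluate calls are shown as comments because they require the preprocessed MNIST arrays from above:

```python
from keras.models import Sequential
from keras.layers import Dense, Dropout

# Step 3: 784 inputs, two 512-unit ReLU hidden layers with 0.2 dropout,
# and a 10-way softmax output layer
model = Sequential()
model.add(Dense(512, input_shape=(784,), activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))

# Step 4: compile with categorical cross-entropy, Adam, and accuracy
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Step 5 (sketch): after one-hot encoding the labels, train and evaluate, e.g.
#   y_train = np_utils.to_categorical(y_train, 10)
#   y_test = np_utils.to_categorical(y_test, 10)
#   model.fit(X_train, y_train, batch_size=128, epochs=10,
#             validation_data=(X_test, y_test))
#   loss, acc = model.evaluate(X_test, y_test, verbose=0)
model.summary()
```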
RESULT :-
Thus the python program to build deep learning NN models has been written and
executed successfully.