AI LAB manual
LIST OF EXPERIMENTS:
1. Implementation of Uninformed search algorithms (BFS, DFS)
2. Implementation of Informed search algorithms (A*, memory-bounded A*)
3. Implement naïve Bayes models
4. Implement Bayesian Networks
5. Build Regression models
6. Build decision trees and random forests
7. Build SVM models
8. Implement ensembling techniques
9. Implement clustering algorithms
10. Implement EM for Bayesian networks
11. Build simple NN models
12. Build deep learning NN models
EX. NO: 1(A) IMPLEMENTATION OF BREADTH FIRST SEARCH ALGORITHM
DATE:
AIM:
To implement Breadth First Search (BFS) algorithm using python.
THEORY:
Breadth-First Search (BFS) is an algorithm for traversing graphs or trees, where traversing
means visiting each node of the graph. BFS explores all the vertices of a graph or tree level by
level, using a queue to keep track of the next vertex to visit. In Python, BFS can be implemented
using data structures such as a dictionary and lists. BFS on a tree and on a graph is almost the
same; the only difference is that a graph may contain cycles, so we might otherwise visit the
same node again.
ALGORITHM:
1. Pick any node, visit the adjacent unvisited vertex, mark it as visited, display it, and insert
it in a queue.
2. If there are no remaining adjacent vertices left, remove the first vertex from the queue.
3. Repeat steps 1 and 2 until the queue is empty or the desired node is found.
EXAMPLE:
Let us use an undirected graph with 5 vertices.
Starting from the vertex P, the BFS algorithm begins by putting P in the Visited list and placing
all its adjacent vertices in the queue.
Next, we visit the element at the front of the queue, i.e. Q, and visit its adjacent nodes. Since P
has already been visited, we visit R instead.
Vertex R has an unvisited adjacent vertex T, so we add it to the rear of the queue and visit S,
which is at the front of the queue.
Now only T remains in the queue, since the only adjacent node of S, i.e. P, is already visited.
We visit T.
Once the queue is empty, we have completed the traversal of the graph.
PROGRAM:
graph = {
    'A' : ['B', 'C'],
    'B' : ['D', 'E'],
    'C' : ['F'],
    'D' : [],
    'E' : ['F'],
    'F' : []
}
visited = []   # List to keep track of visited nodes.
queue = []     # Initialize a queue

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        s = queue.pop(0)
        print(s, end=" ")
        for neighbour in graph[s]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
bfs(visited, graph, 'A')
OUTPUT:
A B C D E F
RESULT:
Thus the program to implement BFS algorithm using python is executed and verified
successfully.
EX. NO: 1(B) IMPLEMENTATION OF DEPTH FIRST SEARCH ALGORITHM
DATE:
AIM:
To implement Depth First Search (DFS) algorithm using python.
ALGORITHM:
1. Pick any node. If it is unvisited, mark it as visited and recur on all its adjacent nodes.
2. Repeat until all the nodes are visited, or the node to be searched is found.
EXAMPLE:
Starting from the vertex P, the DFS algorithm begins by putting P in the Visited list and placing
all its adjacent vertices on the stack.
Next, we visit the element at the top of the stack, i.e. Q, and go to its adjacent nodes. Since P
has already been visited, we visit R instead.
Vertex R has an unvisited adjacent vertex T, so we add it to the top of the stack and visit it.
At last, we visit the remaining vertex S; it does not have any unvisited adjacent nodes, so we
have completed the Depth First Traversal of the graph.
PROGRAM:
graph = {
    'A' : ['B', 'C'],
    'B' : ['D', 'E'],
    'C' : ['F'],
    'D' : [],
    'E' : ['F'],
    'F' : []
}
visited = set()   # Set to keep track of visited nodes.

def dfs(visited, graph, node):
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
dfs(visited, graph, 'A')
OUTPUT:
A
B
D
E
F
C
RESULT:
Thus the program to implement Depth First Search algorithm using python is executed and
verified successfully.
EX. NO: 2(A) IMPLEMENTATION OF A* ALGORITHM
DATE:
AIM:
To implement A* search algorithm using python.
ALGORITHM:
1. Place the start node in the open list, set g(start) = 0, and make the start node its own parent.
2. While the open list is not empty, pick the node n with the lowest f(n) = g(n) + h(n), where
g(n) is the path cost from the start and h(n) is the heuristic estimate to the goal.
3. For each neighbour m of n with edge weight w: if m is new, add it to the open list with
g(m) = g(n) + w and parent n; otherwise, if g(n) + w is less than the current g(m), update g(m)
and the parent of m.
4. If n is the goal node, reconstruct the path by following the parent pointers back to the start
and return it.
5. Move n from the open list to the closed list and repeat from step 2. If the open list becomes
empty first, report that no path exists.
PROGRAM:
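A minimal opening for aStarAlgo, assuming the standard open/closed-list formulation of A*; the adjacency list Graph_nodes used here is defined after the heuristic function below:

def aStarAlgo(start_node, stop_node):
    open_set = set([start_node])       # discovered nodes not yet expanded
    closed_set = set()                 # nodes already expanded
    g = {start_node: 0}                # cheapest known cost from the start
    parents = {start_node: start_node}
    while len(open_set) > 0:
        n = None
        # choose the open node with the lowest f(v) = g(v) + h(v)
        for v in open_set:
            if n is None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v
        if n != stop_node and Graph_nodes.get(n) is not None:
            for (m, weight) in Graph_nodes[n]:
                # a new node: record its parent and path cost
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight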
                else:
                    if g[m] > g[n] + weight:
                        # update g(m)
                        g[m] = g[n] + weight
                        # change parent of m to n
                        parents[m] = n
                        # reopen m if it had already been expanded
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)
        if n is None:
            print('Path does not exist!')
            return None
        # if the goal is reached, rebuild the path from the parent pointers
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        # move n from the open list to the closed list
        open_set.remove(n)
        closed_set.add(n)
    print('Path does not exist!')
    return None
def heuristic(n):
    H_dist = {
        'A': 10,
        'B': 8,
        'C': 5,
        'D': 7,
        'E': 3,
        'F': 6,
        'G': 5,
        'H': 3,
        'I': 1,
        'J': 0
    }
    return H_dist[n]
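# Adjacency list for the graph. The graph itself is not shown in the source
# listing; the one below is assumed to be the classic 10-node example that
# matches the heuristic table above (edge weights illustrative).
Graph_nodes = {
    'A': [('B', 6), ('F', 3)],
    'B': [('C', 3), ('D', 2)],
    'C': [('D', 1), ('E', 5)],
    'D': [('C', 1), ('E', 8)],
    'E': [('I', 5), ('J', 5)],
    'F': [('G', 1), ('H', 7)],
    'G': [('I', 3)],
    'H': [('I', 2)],
    'I': [('E', 5), ('J', 3)],
}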
aStarAlgo('A', 'J')
OUTPUT:
RESULT:
Thus the program to implement A* algorithm using python is executed and verified
successfully.
EX. NO: 2(B) IMPLEMENTATION OF SIMPLIFIED MEMORY BOUNDED A* ALGORITHM
DATE:
AIM:
To implement the simplified memory bounded A* (SMA*) algorithm using python.
ALGORITHM:
1. Initialize g_scores, f_scores, and parents for the start state, an empty closed_set, an
open_set containing the start state, and an expansion counter set to zero.
2. While open_set is not empty and expansions is less than max_expansions, do the following:
a. Pop the state with the lowest f-score from open_set. This will be the current state.
b. Check if the goal has been reached by calling goal_test(current_state). If so, return the
final path by calling construct_path(parents, current_state).
c. Add the current state to closed_set and increment the expansion counter.
d. Expand the current state by generating its successor states using the successors function.
For each successor:
i. Calculate the tentative g-score by adding the successor cost to the current state's g-score:
tentative_g_score = g_scores[current_state] + successor_cost.
ii. Check if the successor state is already in closed_set. If so, skip this successor state and
move on to the next one.
iii. If the successor state is not in g_scores, or if the tentative g-score is less than the current
g-score for the successor state, update its g_scores, f_scores, and parents as follows:
- `g_scores[successor_state] = tentative_g_score`
- `f_scores[successor_state] = tentative_g_score + suboptimality * heuristic(successor_state)`
- `parents[successor_state] = current_state`
If the successor state is not already in `open_set`, add it to `open_set` with
`(f_scores[successor_state], successor_state)`.
3. To construct the final path, start with the goal state and repeatedly follow its parent
pointers until the start state is reached. Return the path as a list of states in order from start to
goal.
PROGRAM:
import heapq

def sma_star(start_state, goal_test, successors, heuristic, suboptimality, max_expansions):
    # Initialize data structures
    f_scores = {start_state: heuristic(start_state)}
    g_scores = {start_state: 0}
    parents = {start_state: None}
    closed_set = set()
    open_set = []
    heapq.heappush(open_set, (f_scores[start_state], start_state))
    expansions = 0
    while open_set and expansions < max_expansions:
        # Pop the state with the lowest f-score from the open set
        current_state = heapq.heappop(open_set)[1]
        # Check if the goal has been reached
        if goal_test(current_state):
            return construct_path(parents, current_state)
        closed_set.add(current_state)
        expansions += 1
        # Expand the current state
        for successor in successors(current_state):
            successor_state, successor_cost = successor
            # Calculate tentative g-score
            tentative_g_score = g_scores[current_state] + successor_cost
            # Check if the successor is already closed
            if successor_state in closed_set:
                continue
            # Record the successor if it is new or the new path is cheaper
            if successor_state not in g_scores or tentative_g_score < g_scores[successor_state]:
                parents[successor_state] = current_state
                g_scores[successor_state] = tentative_g_score
                f_scores[successor_state] = tentative_g_score + suboptimality * heuristic(successor_state)
                heapq.heappush(open_set, (f_scores[successor_state], successor_state))
    # Expansion limit reached without finding the goal
    return None
def successors(state):
if state == 1:
return [(2, 1), (3, 2)]
elif state == 2:
return [(4, 3), (5, 4)]
elif state == 3:
return [(4, 1), (5, 2)]
else:
return []
def heuristic(state):
return abs(state - 5)
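# goal_test and construct_path are used by sma_star and the driver below but
# are not shown in the source listing; minimal versions consistent with the
# algorithm steps and the printed output (goal state assumed to be 5):
def goal_test(state):
    return state == 5

def construct_path(parents, state):
    # follow parent pointers from the goal back to the start
    path = [state]
    while parents[state] is not None:
        state = parents[state]
        path.append(state)
    path.reverse()
    return path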
start_state = 1
suboptimality = 2
max_expansions = 10
path = sma_star(start_state, goal_test, successors, heuristic, suboptimality, max_expansions)
print(path)
OUTPUT:
[1, 3, 5]
RESULT:
Thus the program to implement the SMA* algorithm using python is executed and verified
successfully.
EX. NO: 3 IMPLEMENTATION OF NAÏVE BAYES MODELS
DATE:
AIM:
To implement naïve Bayes models using python.
ALGORITHM:
1. Load the dataset: Load the dataset that you want to classify using the naïve Bayes algorithm.
The dataset should have labeled data points with attributes and their corresponding classes.
2. Split the dataset: Split the dataset into training and testing sets. Use the training set to train
the naïve Bayes classifier and the testing set to evaluate its performance.
3. Preprocess the data: Preprocess the data by removing any irrelevant or noisy attributes,
cleaning the text data (if applicable), and converting the data into numerical form.
4. Compute the prior probabilities: Calculate the prior probability of each class by dividing the
number of training instances in each class by the total number of training instances.
5. Compute the likelihood probabilities: Compute the likelihood of each attribute given each
class by calculating the conditional probability of each attribute given each class.
6. Apply Bayes theorem: Apply Bayes theorem to compute the posterior probability of each
class given the attributes of a test instance. Choose the class with the highest posterior
probability as the predicted class for the test instance.
7. Evaluate the model: Evaluate the performance of the naïve Bayes classifier on the testing set
using metrics such as accuracy, precision, recall, and F1 score.
8. Tune the hyperparameters: Tune the hyperparameters of the naïve Bayes classifier to improve
its performance on the testing set.
9. Deploy the model: Deploy the naïve Bayes classifier in a real-world application to classify
new instances.
PROGRAM:
import numpy as np

class NaiveBayes:
    def __init__(self):
        self.prior = None
        self.likelihood = None
        self.classes = None

    def fit(self, X, y):
        # Learn class priors and per-feature likelihoods from the training data
        self.classes = np.unique(y)
        n_classes = len(self.classes)
        n_features = X.shape[1]
        self.prior = np.zeros(n_classes)
        self.likelihood = np.zeros((n_classes, n_features))
        for i, c in enumerate(self.classes):
            X_c = X[y == c]
            self.prior[i] = X_c.shape[0] / X.shape[0]
            self.likelihood[i, :] = (X_c.sum(axis=0) / X_c.sum()).flatten()

    def predict(self, X):
        y_pred = []
        for x in X:
            posterior = []
            for i, c in enumerate(self.classes):
                likelihood = np.prod(self.likelihood[i, :] ** x)
                posterior.append(self.prior[i] * likelihood)
            y_pred.append(self.classes[np.argmax(posterior)])
        return y_pred
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Load iris dataset
iris = load_iris()
# Split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(iris.data,
iris.target, test_size=0.2, random_state=42)
# Fit Naive Bayes classifier on train data
nb = NaiveBayes()
nb.fit(X_train, y_train)
# Make predictions on test data
y_pred = nb.predict(X_test)
# Evaluate model performance
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
OUTPUT:
Accuracy: 0.3333333333333333
RESULT:
Thus the program to implement naïve Bayes models using python is executed and
verified successfully.
EX. NO: 4 IMPLEMENTATION OF BAYESIAN NETWORKS
DATE:
AIM:
To implement Bayesian networks using python.
ALGORITHM:
1. Import the necessary libraries and classes from pgmpy, which is a Python library for
working with probabilistic graphical models.
2. Define the structure of our Bayesian network by creating a new BayesianNetwork object and
specifying the edges between nodes using a list of tuples. In this case, we have two nodes B
and E that each point to a third node A.
3. Create conditional probability tables (CPDs) for each node using the TabularCPD class. The
first argument is the name of the node, the second argument is the number of states it can be in
(in this case, 2 for binary variables), and the third argument is the actual table of probabilities.
4. Add the CPDs to the model.
5. Use VariableElimination to query the posterior distribution of A given evidence on B and E.
PROGRAM:
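A minimal construction of the network described above, assuming the classic two-cause alarm structure; the probability values in the CPDs are illustrative:

from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# two parent nodes B and E, both pointing to A
model = BayesianNetwork([('B', 'A'), ('E', 'A')])

# CPDs for the root nodes (2 states each; values are illustrative)
cpd_b = TabularCPD('B', 2, [[0.99], [0.01]])
cpd_e = TabularCPD('E', 2, [[0.98], [0.02]])

# CPD for A given its parents B and E (columns range over the B,E assignments)
cpd_a = TabularCPD('A', 2,
                   [[0.999, 0.71, 0.06, 0.05],
                    [0.001, 0.29, 0.94, 0.95]],
                   evidence=['B', 'E'], evidence_card=[2, 2])

model.add_cpds(cpd_b, cpd_e, cpd_a)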
inference = VariableElimination(model)
posterior_a = inference.query(['A'], evidence={'B': 1, 'E': 0})
print(posterior_a)
OUTPUT:
RESULT:
Thus the program to implement Bayesian networks using python is executed and verified
successfully.
EX. NO: 5 BUILD REGRESSION MODELS
DATE:
AIM:
To build regression models using python.
ALGORITHM:
1. Import the required libraries (numpy, pandas, matplotlib, and scikit-learn).
2. Generate sample data for three features (feature_1, feature_2, and feature_3) and a target
variable using numpy.random.rand(), adding some random noise to the target variable.
3. Create a pandas DataFrame from the generated sample data and save it to a CSV file using
the to_csv() function.
4. Define the features and target variable by extracting the appropriate columns from the
DataFrame and assigning them to X and y, respectively.
5. Create a simple linear regression model using feature_1 alone.
6. Create a multiple regression model in the same way, but this time including all three features.
7. Create a polynomial regression model by transforming feature_1 with polynomial features
and fitting a linear regression to them.
8. Plot the data and the three regression models using matplotlib.pyplot. The scatter plot shows
the relationship between feature_1 and the target, and the regression lines represent the
predictions of the linear, multiple, and polynomial regression models. The plot also includes a
legend to differentiate between the different regression models.
PROGRAM:
SIMPLE LINEAR REGRESSION:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
x = np.array([1, 2, 3, 4, 5]).reshape((-1, 1))
y = np.array([2, 3, 5, 6, 8])

# Create a linear regression object
model = LinearRegression()

# Fit the model to the data and report the fitted parameters
model.fit(x, y)
print('Coefficient:', model.coef_)
print('Intercept:', model.intercept_)
OUTPUT:

MULTIPLE LINEAR REGRESSION:
import numpy as np
from sklearn.linear_model import LinearRegression
# Generate some random data
x = np.array([[1, 2, 3], [2, 4, 6], [3, 6, 9], [4, 8, 12]])
y = np.array([6, 12, 18, 24])
# Create a linear regression object
model = LinearRegression()
# Fit the model to the data
model.fit(x, y)
# Predict the output for new inputs
x_new = np.array([[5, 10, 15]])
y_new = model.predict(x_new)
print(y_new) # Output: [30.]
# Print the coefficients and intercept
print('Coefficients:', model.coef_)
print('Intercept:', model.intercept_)
OUTPUT:
[30.]
Coefficients: [0.42857143 0.85714286 1.28571429]
Intercept: 7.105427357601002e-15
POLYNOMIAL REGRESSION:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
# Generate some random data
x = np.array([1, 2, 3, 4, 5]).reshape((-1, 1))
y = np.array([1, 3, 8, 13, 20])
# Create a polynomial features object
poly = PolynomialFeatures(degree=2)
x_poly = poly.fit_transform(x)
# Create a linear regression object
model = LinearRegression()
# Fit the model to the data
model.fit(x_poly, y)
# Predict the output for new inputs
x_new = np.array([[6]])
x_new_poly = poly.transform(x_new)
y_new = model.predict(x_new_poly)
print(y_new)
OUTPUT:
RESULT:
Thus the program to implement regression models using python is executed and verified
successfully.
EX. NO: 6 BUILD DECISION TREES AND RANDOM FORESTS
DATE:
AIM:
To build decision tree and random forest models using python.
ALGORITHM:
1. Import the required libraries.
2. Load the iris dataset from sklearn.datasets.
3. Convert the dataset into a pandas dataframe and split into features (X) and target (y).
4. Split the data into training and test sets using train_test_split from sklearn.model_selection.
5. Create a decision tree classifier using DecisionTreeClassifier from sklearn.tree.
6. Fit the decision tree model to the training data using the fit() method.
7. Evaluate the decision tree model on the test data by making predictions using the
predict() method, and then calculating the accuracy score using accuracy_score from
sklearn.metrics.
8. Create a random forest classifier using RandomForestClassifier from sklearn.ensemble, and
fit it to the training data using the fit() method.
9. Evaluate the random forest model on the test data in the same way, and print the accuracy
scores of both models.
PROGRAM:
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
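# Load the iris dataset, build the dataframe, and split it, as described in the
# algorithm steps (test_size and random_state assumed)
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
X = df
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)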
# Build decision tree
tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X_train, y_train)
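tree_pred = tree.predict(X_test)

# Build random forest (n_estimators assumed) and evaluate both models;
# a completion sketch consistent with the algorithm steps and the output shown
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)
forest_pred = forest.predict(X_test)

print("Decision Tree Accuracy:", accuracy_score(y_test, tree_pred))
print("Random Forest Accuracy:", accuracy_score(y_test, forest_pred))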
OUTPUT:
Decision Tree Accuracy: 1.0
Random Forest Accuracy: 1.0
RESULT:
Thus the program to build decision tree and random forest models using python is
executed and verified successfully.
EX. NO: 07 IMPLEMENTATION OF SVM MODEL
DATE:
AIM:
To implement SVM models using python.
ALGORITHM:
1. Import the required libraries and load a labeled dataset.
2. Split the dataset into training and test sets.
3. Define a grid of hyperparameters (C, gamma, kernel) to search over.
4. Run a grid search with cross-validation (GridSearchCV) to find the best hyperparameters.
5. Train an SVM model (SVC) with the best hyperparameters on the training data.
6. Predict the classes of the test set and evaluate the model using the accuracy score and
confusion matrix.
PROGRAMS:
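A minimal setup sketch for the listing below, assuming the iris dataset and a 48-candidate grid (4 values of C × 4 of gamma × 3 kernels, matching the fit count in the output):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

# load a labeled dataset and split it (dataset choice assumed)
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)

# hyperparameter grid: 4 x 4 x 3 = 48 candidates
param_grid = {'C': [0.1, 1, 10, 100],
              'gamma': [0.1, 0.01, 0.001, 0.0001],
              'kernel': ['linear', 'rbf', 'poly']}

# grid search with 3-fold cross-validation
grid = GridSearchCV(SVC(), param_grid, cv=3, verbose=2)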
grid.fit(X_train, y_train)
# best hyperparameters
print("Best hyperparameters: ", grid.best_params_)
# create SVM model with best hyperparameters
svm = SVC(C=grid.best_params_['C'], gamma=grid.best_params_['gamma'],
kernel=grid.best_params_['kernel'])
# fit SVM model to training data
svm.fit(X_train, y_train)
# predict classes of test set
y_pred = svm.predict(X_test)
# calculate accuracy score and confusion matrix of SVM model
accuracy = accuracy_score(y_test, y_pred)
confusion = confusion_matrix(y_test, y_pred)
print("Accuracy: ", accuracy)
print("Confusion matrix:\n", confusion)
OUTPUT:
Fitting 3 folds for each of 48 candidates, totalling 144 fits
[CV] kernel=linear, C=0.1, gamma=0.1 .................................
[CV] kernel=linear, C=0.1, gamma=0.1, score=0.975609756098, total=
RESULT:
Thus the program to implement SVM models using python is executed and verified successfully.
EX. NO: 08 IMPLEMENTATION OF ENSEMBLING TECHNIQUES
DATE:
AIM:
To implement ensembling techniques using python.
ALGORITHM:
1. Import the required libraries and load a labeled dataset.
2. Split the dataset into training and test sets.
3. Train two base classifiers (a random forest and a gradient boosting classifier) on the
training data.
4. Predict the classes of the test set with each base classifier.
5. Combine the predictions using a majority vote to obtain the ensemble predictions.
6. Calculate and print the accuracy score of the ensemble model.
PROGRAMS:
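A minimal setup sketch for the listing below, assuming the iris dataset and default classifier settings:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score

# load a labeled dataset and split it (dataset choice assumed)
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)

# train the two base classifiers
rfc = RandomForestClassifier(random_state=42).fit(X_train, y_train)
gbc = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)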
y_pred_rfc = rfc.predict(X_test)
y_pred_gbc = gbc.predict(X_test)
# Combine the predictions using a majority vote
ensemble_preds = []
for i in range(len(X_test)):
preds = [y_pred_rfc[i], y_pred_gbc[i]]
ensemble_preds.append(max(set(preds), key=preds.count))
# Calculate the accuracy score of the ensemble model
ensemble_accuracy = accuracy_score(y_test, ensemble_preds)
print("Ensemble model accuracy:", ensemble_accuracy)
OUTPUT:
RESULT:
Thus the program to implement ensembling techniques using python is executed and
verified successfully.
EX. NO: 09 IMPLEMENTATION OF CLUSTERING ALGORITHMS
DATE:
AIM:
To implement clustering algorithms using python.
ALGORITHMS:
1. Import the required libraries
2. Generate random data using the make_blobs function from scikit-learn, with 500 samples,
4 centers, a standard deviation of 1.0, and a random seed of 0
3. Standardize the data using the StandardScaler function from scikit-learn
4. Define a range of clusters to evaluate, from 2 to 9
5. Initialize empty lists to store the evaluation metrics
6. Loop through the range of clusters, and for each number of clusters
7. Initialize the K-Means model and fit the standardized data
8. Get the cluster labels and centroids from the fitted model
9. Calculate the sum of squared distances (SSE) and silhouette score for the current number
of clusters
10. Visualize the clustering results using a scatter plot of the data points with
cluster assignments indicated by color, and the cluster centroids indicated by red
crosses
11. After all iterations of the loop, plot the SSE and silhouette scores for each number of
clusters using subplots
12. Display the final plots to the user.
PROGRAMS:
import numpy as np
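# A sketch implementing the steps listed above; plot styling and any parameters
# beyond those stated in the algorithm are assumed.
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Generate and standardize the data
X, _ = make_blobs(n_samples=500, centers=4, cluster_std=1.0, random_state=0)
X = StandardScaler().fit_transform(X)

# Evaluate K-Means for 2 to 9 clusters
sse, silhouettes = [], []
cluster_range = range(2, 10)
for k in cluster_range:
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    labels, centroids = km.labels_, km.cluster_centers_
    sse.append(km.inertia_)                       # sum of squared distances
    silhouettes.append(silhouette_score(X, labels))
    # Scatter plot of cluster assignments, centroids marked with red crosses
    plt.scatter(X[:, 0], X[:, 1], c=labels)
    plt.scatter(centroids[:, 0], centroids[:, 1], c='red', marker='x')
    plt.title('K-Means with k = %d' % k)
    plt.show()

# Plot SSE and silhouette score against the number of clusters
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
ax1.plot(list(cluster_range), sse, marker='o')
ax1.set_xlabel('Number of clusters'); ax1.set_ylabel('SSE')
ax2.plot(list(cluster_range), silhouettes, marker='o')
ax2.set_xlabel('Number of clusters'); ax2.set_ylabel('Silhouette score')
plt.show()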
OUTPUT:
RESULT:
Thus the program to implement clustering using python is executed and verified
successfully.
EX. NO: 10 IMPLEMENTATION OF EXPECTATION MAXIMIZATION (EM)
DATE:
AIM:
To write a python program to implement the expectation maximization (EM) algorithm.
ALGORITHM:
1. Import the required packages.
2. Generate and plot the cluster model.
3. Make an initial guess of the parameter θ using a random function.
4. Given the current estimates for θ, in the expectation step EM computes the cluster
posterior probabilities P(Ci | xj) via Bayes theorem.
5. In the maximization step, using the weights P(Ci | xj), EM re-estimates θ, that is, it
re-estimates the parameters for each cluster.
6. Repeat steps 4 and 5 until convergence.
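For a two-component Gaussian mixture, steps 4 and 5 take the following standard form (a reference sketch; in the program below, eval1 holds the responsibilities γ of the second cluster):

\[
\gamma_{ij} = P(C_i \mid x_j) = \frac{\pi_i\, \mathcal{N}(x_j \mid \mu_i, \Sigma_i)}{\sum_k \pi_k\, \mathcal{N}(x_j \mid \mu_k, \Sigma_k)} \qquad \text{(E-step)}
\]
\[
\mu_i = \frac{\sum_j \gamma_{ij}\, x_j}{\sum_j \gamma_{ij}}, \qquad
\Sigma_i = \frac{\sum_j \gamma_{ij}\,(x_j-\mu_i)(x_j-\mu_i)^{T}}{\sum_j \gamma_{ij}}, \qquad
\pi_i = \frac{1}{n}\sum_j \gamma_{ij} \qquad \text{(M-step)}
\]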
PROGRAM:
import random
import numpy as np                              # import numpy
from numpy.linalg import inv                    # for matrix inverse
import matplotlib.pyplot as plt                 # import matplotlib.pyplot for plotting framework
from scipy.stats import multivariate_normal    # for generating pdf

m1 = [1, 1]                                     # consider random mean and covariance values
m2 = [7, 7]
cov1 = [[3, 2], [2, 3]]
cov2 = [[2, -1], [-1, 2]]
x = np.random.multivariate_normal(m1, cov1, size=(200,))   # generating 200 samples for each mean and covariance
y = np.random.multivariate_normal(m2, cov2, size=(200,))
d = np.concatenate((x, y), axis=0)
plt.figure(figsize=(10, 10))
plt.scatter(d[:, 0], d[:, 1], marker='o')
plt.axis('equal')
plt.xlabel('X-Axis', fontsize=16)
plt.ylabel('Y-Axis', fontsize=16)
plt.title('Ground Truth', fontsize=22)
plt.grid()
plt.show()
m1 = random.choice(d)
m2 = random.choice(d)
cov1 = np.cov(np.transpose(d))
cov2 = np.cov(np.transpose(d))
pi = 0.5
x1 = np.linspace(-4, 11, 200)
x2 = np.linspace(-4, 11, 200)
X, Y = np.meshgrid(x1, x2)
Z1 = multivariate_normal(m1, cov1)
Z2 = multivariate_normal(m2, cov2)
pos = np.empty(X.shape + (2,))   # a new array of given shape and type, without initializing entries
pos[:, :, 0] = X; pos[:, :, 1] = Y
plt.figure(figsize=(10, 10))     # creating the figure and assigning the size
plt.scatter(d[:, 0], d[:, 1], marker='o')
plt.contour(X, Y, Z1.pdf(pos), colors="r", alpha=0.5)
plt.contour(X, Y, Z2.pdf(pos), colors="b", alpha=0.5)
plt.axis('equal')                # making both the axes equal
plt.xlabel('X-Axis', fontsize=16)
plt.ylabel('Y-Axis', fontsize=16)
plt.title('Initial State', fontsize=22)
plt.grid()                       # displaying gridlines
plt.show()
## Expectation step
def Estep(lis1):
    m1 = lis1[0]
    m2 = lis1[1]
    cov1 = lis1[2]
    cov2 = lis1[3]
    pi = lis1[4]
    pt2 = multivariate_normal.pdf(d, mean=m2, cov=cov2)
    pt1 = multivariate_normal.pdf(d, mean=m1, cov=cov1)
    w1 = pi * pt2
    w2 = (1 - pi) * pt1
    eval1 = w1 / (w1 + w2)
    return eval1
## Maximization step
def Mstep(eval1):
    num_mu1, din_mu1, num_mu2, din_mu2 = 0, 0, 0, 0
    for i in range(0, len(d)):
        num_mu1 += (1 - eval1[i]) * d[i]
        din_mu1 += (1 - eval1[i])
        num_mu2 += eval1[i] * d[i]
        din_mu2 += eval1[i]
    mu1 = num_mu1 / din_mu1
    mu2 = num_mu2 / din_mu2
    num_s1, din_s1, num_s2, din_s2 = 0, 0, 0, 0
    for i in range(0, len(d)):
        q1 = np.matrix(d[i] - mu1)
        num_s1 += (1 - eval1[i]) * np.dot(q1.T, q1)
        din_s1 += (1 - eval1[i])
        q2 = np.matrix(d[i] - mu2)
        num_s2 += eval1[i] * np.dot(q2.T, q2)
        din_s2 += eval1[i]
    s1 = num_s1 / din_s1
    s2 = num_s2 / din_s2
    pi = sum(eval1) / len(d)
    lis2 = [mu1, mu2, s1, s2, pi]
    return lis2
def plot(lis1):
    mu1 = lis1[0]
    mu2 = lis1[1]
    s1 = lis1[2]
    s2 = lis1[3]
    Z1 = multivariate_normal(mu1, s1)
    Z2 = multivariate_normal(mu2, s2)
    pos = np.empty(X.shape + (2,))   # a new array of given shape and type, without initializing entries
    pos[:, :, 0] = X; pos[:, :, 1] = Y
    plt.figure(figsize=(10, 10))     # creating the figure and assigning the size
    plt.scatter(d[:, 0], d[:, 1], marker='o')
    plt.contour(X, Y, Z1.pdf(pos), colors="r", alpha=0.5)
    plt.contour(X, Y, Z2.pdf(pos), colors="b", alpha=0.5)
    plt.axis('equal')                # making both the axes equal
    plt.xlabel('X-Axis', fontsize=16)
    plt.ylabel('Y-Axis', fontsize=16)
    plt.grid()                       # displaying gridlines
    plt.show()
iterations = 20
lis1 = [m1, m2, cov1, cov2, pi]
for i in range(0, iterations):
    lis2 = Mstep(Estep(lis1))
    lis1 = lis2
    if i == 0 or i == 4 or i == 9 or i == 14 or i == 19:
        plot(lis1)
OUTPUT:
RESULT:
Thus the python program for expectation maximization on clustering is written and
executed successfully.
EX. NO: 11 SIMPLE NEURAL NETWORK
DATE:
AIM:
To write a python program to build simple neural networks.
ALGORITHM:
1. Import the libraries. For example: import numpy as np
2. Define/create input data. For example, use numpy to create a dataset and an array of
data values.
3. Add weights and bias (if applicable) to input features. These are learnable
parameters, meaning that they can be adjusted during training.
Weights = input parameters that influence the output
Bias = an extra threshold value added to the output
4. Train the network against known, good data in order to find the correct values for
the weights and biases.
5. Test the Network against a set of test data to see how it performs.
PROGRAM:
import numpy as np
class NeuralNetwork():
    def __init__(self):
        # seeding for random number generation
        np.random.seed(1)
        # converting weights to a 3 by 1 matrix with values from -1 to 1 and mean of 0
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

    def sigmoid(self, x):
        # applying the sigmoid function
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # computing the derivative of the sigmoid function
        return x * (1 - x)

    def train(self, training_inputs, training_outputs, training_iterations):
        # training the model to make accurate predictions while adjusting weights continually
        for iteration in range(training_iterations):
            # siphon the training data via the neuron
            output = self.think(training_inputs)
            # computing the error rate for back-propagation
            error = training_outputs - output
            # performing weight adjustments
            adjustments = np.dot(training_inputs.T, error * self.sigmoid_derivative(output))
            self.synaptic_weights += adjustments

    def think(self, inputs):
        # passing the inputs via the neuron to get the output, converting values to floats
        inputs = inputs.astype(float)
        output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
        return output

if __name__ == "__main__":
    # initializing the neuron class
    neural_network = NeuralNetwork()
    print("Beginning Randomly Generated Weights: ")
    print(neural_network.synaptic_weights)
    # training data consisting of 4 examples -- 3 input values and 1 output
    training_inputs = np.array([[0, 0, 1],
                                [1, 1, 1],
                                [1, 0, 1],
                                [0, 1, 1]])
    training_outputs = np.array([[0, 1, 1, 0]]).T
    # training taking place
    neural_network.train(training_inputs, training_outputs, 15000)
    print("Ending Weights After Training: ")
    print(neural_network.synaptic_weights)
    user_input_one = str(input("User Input One: "))
    user_input_two = str(input("User Input Two: "))
    user_input_three = str(input("User Input Three: "))
    print("Considering New Situation: ", user_input_one, user_input_two, user_input_three)
    print("New Output data: ")
    print(neural_network.think(np.array([user_input_one, user_input_two, user_input_three])))
    print("Wow, we did it!")
OUTPUT:
RESULT:
Thus the python program for building simple neural network model is executed
successfully.
EX. NO: 12 DEEP LEARNING NEURAL NETWORK
DATE:
AIM:
To write a python program to build deep learning neural network models.
ALGORITHM:
1. Import the necessary libraries and load the data.
Use the NumPy library to load the dataset and two classes from the Keras
library to define the model.
Use the Pima Indians onset of diabetes dataset.
2. Create a Sequential model and add layers one at a time until satisfied with the
architecture. The model expects rows of data with 8 variables (the input_shape=(8,)
argument).
The first hidden layer has 12 nodes and uses the relu activation function.
The second hidden layer has 8 nodes and uses the relu activation function.
The output layer has one node and uses the sigmoid activation function.
3. Compile the model by specifying the loss function to use to evaluate a set of weights, the
optimizer used to search through different weights for the network and metrics for
reporting the training.
4. Train or fit the model on loaded data by calling the fit() function on the model.
5. Evaluate the model on the training dataset using the evaluate() function and pass it the
same input and output used to train the model.
PROGRAM:
from numpy import loadtxt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# load the dataset
from google.colab import files
uploaded = files.upload()
dataset = loadtxt('diabetes.csv', delimiter=',', dtype=float, skiprows=1)
# split into input (X) and output (y) variables
X = dataset[:,0:8]
y = dataset[:,8]
# define the keras model
model = Sequential()
model.add(Dense(12, input_shape=(8,), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10, verbose=0)
# make class predictions with the model
predictions = (model.predict(X) > 0.5).astype(int)
# summarize the first 5 cases
for i in range(5):
    print('%s => %d (expected %d)' % (X[i].tolist(), predictions[i], y[i]))
OUTPUT:
RESULT:
Thus the python program for building deep learning neural network model is executed
successfully.