Aids Lab PDF
Experiment 01
A Bayesian network (also known as a Bayes network, Bayes net, belief network, or decision network) is a
probabilistic graphical model that represents a set of variables and their conditional dependencies using a
directed acyclic graph (DAG).
Bayesian networks are ideal for taking an observed event and estimating the likelihood that any of
several known causes contributed to it. For example, a Bayesian network could represent the probabilistic
relationships between diseases and symptoms. Given a set of symptoms, the network can be used to
compute the probability that particular diseases are present.
A Bayesian network is a probability model defined over a directed acyclic graph. The joint distribution is
factored into one conditional probability distribution per variable, conditioned on that variable's parents in
the graph. Simple rules of probability underpin Bayesian models, so let us first define conditional
probability and the joint probability distribution.
Conditional probability:
Conditional probability is a measure of the likelihood of an event occurring given that another event has
already occurred (through assumption, supposition, statement, or evidence). If A is the event of interest and
B is known or assumed to have occurred, the conditional probability of A given B is usually written as
P(A|B) or, less frequently, PB(A). It can also be expressed as the ratio of the probability of A and B
occurring together to the probability of B:
P(A|B) = P(A ∩ B) / P(B)
Joint Probability
The probability of two (or more) events occurring together is known as the joint probability. The joint
probability distribution is the probability distribution over two or more random variables considered jointly.
For example, the joint probability of events A and B is written formally as P(A and B), where the "and"
(conjunction) is denoted by the intersection operator "∩" (an upside-down capital "U") or, in some
notations, by a comma:
P(A ∩ B)
P(A, B)
For independent events A and B, the joint probability is calculated by multiplying the probability of event A
by the probability of event B; in general, P(A, B) = P(A | B) P(B).
Posterior Probability
In Bayesian statistics, the posterior probability of a random event or an uncertain proposition is its conditional
probability given the relevant evidence or background. "Posterior" here means "after taking into account the
evidence relevant to the particular case under consideration."
The posterior probability distribution is the probability distribution of an unknown quantity, treated as a random
variable, conditional on the data obtained from an experiment or survey.
In this demonstration, we will use a Bayesian network to solve the well-known Monty Hall problem. For those
unfamiliar with it, the Monty Hall problem goes as follows:
A contestant on a game show must choose one of three doors, one of which conceals a prize. After the contestant
has chosen, the show's host (Monty) opens one of the remaining doors that is empty and asks the contestant
whether he wants to switch to the other unopened door.
The decision is whether to stay with the original door or switch. It is better to switch, because the probability of
winning the prize behind the other door is higher (2/3 versus 1/3). To resolve this apparent ambiguity, let us model
the problem with a Bayesian network.
For this demonstration we use pgmpy, a Python package for Bayesian networks written entirely in Python with a
focus on modularity and flexibility. It provides implementations of structure learning, parameter estimation,
approximate (sampling-based) and exact inference, and causal inference.
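The extracted pages do not show the model definition itself, so the following is only a minimal sketch of how the three-node Monty Hall network (Guest, Price, Host) could be set up in pgmpy; the CPD values encode the host's behaviour and are an assumption consistent with the rules of the game, and the class name may vary slightly between pgmpy versions:
from pgmpy.models import BayesianNetwork   # older pgmpy versions use BayesianModel
from pgmpy.factors.discrete import TabularCPD

# Guest's choice and the Price location both influence which door the Host opens
model = BayesianNetwork([('Guest', 'Host'), ('Price', 'Host')])

# Guest and Price are uniform over the three doors
cpd_guest = TabularCPD('Guest', 3, [[1/3], [1/3], [1/3]])
cpd_price = TabularCPD('Price', 3, [[1/3], [1/3], [1/3]])

# Host never opens the Guest's door or the Price door; otherwise he chooses uniformly
cpd_host = TabularCPD('Host', 3,
                      [[0, 0, 0, 0, 0.5, 1, 0, 1, 0.5],
                       [0.5, 0, 1, 0, 0, 0, 1, 0, 0.5],
                       [0.5, 1, 0, 1, 0.5, 0, 0, 0, 0]],
                      evidence=['Guest', 'Price'], evidence_card=[3, 3])

model.add_cpds(cpd_guest, cpd_price, cpd_host)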
Now we will check the model structure and the associated conditional probability distributions by calling
check_model(), which returns True if everything is consistent and raises an error otherwise.
model.check_model()
True
Now let us run inference on the network to check which door the host will open at the next step. For that,
we need the posterior probability from the network, and we must pass the evidence to the query function.
Evidence is required when evaluating a posterior probability; in our task the evidence is simply which door
the guest selected and where the prize is.
from pgmpy.inference import VariableElimination
infer = VariableElimination(model)
posterior_p = infer.query(['Host'], evidence={'Guest': 2, 'Price': 2})
print(posterior_p)
The resulting probability distribution for the Host clearly matches the rules of the contest: with the guest's
choice and the prize both on door 2, the host will certainly not open door 2; he will open door 0 or door 1
with equal probability, and that is exactly what the simulation above shows.
Now, let us plot the model. This can be done with the help of NetworkX and PyLab. NetworkX is a Python
package for creating, manipulating, and studying the structure, dynamics, and function of complex networks
represented as graphs with nodes and edges. PyLab is a procedural interface to the object-oriented plotting
toolkit Matplotlib.
import networkx as nx
import matplotlib.pyplot as plt
nx.draw(model, with_labels=True)
plt.savefig('model.png')
plt.close()
Conclusion
We have discussed what a Bayesian network is, how it can be represented using a DAG, and the simple
mathematical concepts associated with it: conditional, joint, and posterior probability. Lastly, we saw a
practical implementation of a Bayesian network with the help of the Python package pgmpy, and plotted the
DAG of our model using NetworkX and PyLab.
Experiment 02
RESULT:
Conclusion:
● The purpose of cognitive computing is to build a computing framework that can solve complicated
problems without frequent human intervention.
● Cognitive computing doesn’t bring a drastic novelty into the AI and big data industry.
● Rather, it urges digital solutions to meet human-centric requirements, i.e. to act, think, and behave like a
human, in order to achieve maximum synergy from human-machine interaction.
● It is believed that soon every digital system will be measured based on its cognitive abilities.
● Hence we successfully implemented a cognitive computing Healthcare application.
Experiment 03
● neural networks
● machine learning
● deep learning
● speech recognition
CODE:
COGNITIVE INSURANCE APPLICATION:
RESULT:
Conclusion:
● The purpose of cognitive computing is to build a computing framework that can solve complicated
problems without frequent human intervention.
● Cognitive computing doesn’t bring a drastic novelty into the AI and big data industry.
● Rather, it urges digital solutions to meet human-centric requirements, i.e. to act, think, and behave like a
human, in order to achieve maximum synergy from human-machine interaction.
● It is believed that soon every digital system will be measured based on its cognitive abilities.
● Hence, we successfully implemented a cognitive computing insurance application.
Experiment 04
A fuzzy inference system is the core part of any fuzzy logic system, and fuzzification is the first step in a
fuzzy inference system.
Formally, a membership function for a fuzzy set A on the universe of discourse X is defined as µA: X → [0,
1], where each element of X is mapped to a value between 0 and 1. This value, called membership
value or degree of membership, quantifies the grade of membership of the element in X to the fuzzy set A.
Here, X is the universal set and A is the fuzzy set derived from X.
A fuzzy membership function is a graphical way of visualizing the degree of membership of any value in a
given fuzzy set. In the graph, the X axis represents the universe of discourse and the Y axis represents the
degree of membership in the range [0, 1].
Implementation:
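The code below calls a domain array and a set of membership-function helpers (zero_mf, const_mf, singleton_mf, tri_mf, ltri_mf, rtri_mf, trapezoid_mf, gaussian_mf) that are not shown in the extract. The following is a minimal NumPy sketch of how such helpers might be defined; the parameter conventions (last parameter as height, etc.) are assumptions inferred from the calls below, not the original definitions:
import numpy as np
import matplotlib.pyplot as plt

# Universe of discourse sampled on [0, 1]
domain = np.linspace(0., 1., 201)

def zero_mf(x):
    # Zero membership everywhere
    return np.zeros_like(x)

def const_mf(x, p):
    # Constant membership p[0] over the whole domain
    return np.full_like(x, p[0])

def singleton_mf(x, p):
    # Membership p[1] at the single point p[0], zero elsewhere
    return np.where(np.isclose(x, p[0]), p[1], 0.)

def tri_mf(x, p):
    # Triangular MF: feet at p[0] and p[2], peak at p[1], height p[3]
    a, b, c, h = p
    return h * np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.)

def ltri_mf(x, p):
    # Left triangular MF: full membership up to p[0], falling to zero at p[1], height p[2] (assumed convention)
    a, b, h = p
    return h * np.clip((b - x) / (b - a), 0., 1.)

def rtri_mf(x, p):
    # Right triangular MF: zero up to p[1], rising to full membership at p[0], height p[2] (assumed convention)
    a, b, h = p
    return h * np.clip((x - b) / (a - b), 0., 1.)

def trapezoid_mf(x, p):
    # Trapezoidal MF: corners at p[0]..p[3], height p[4]
    a, b, c, d, h = p
    return h * np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0., 1.)

def gaussian_mf(x, p):
    # Gaussian MF: mean p[0], standard deviation p[1], height p[2]
    m, s, h = p
    return h * np.exp(-0.5 * ((x - m) / s) ** 2)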
zero = zero_mf(domain)
singleton = singleton_mf(domain, [0.5, 1.])
const = const_mf(domain, [1.])
tri = tri_mf(domain, [0., 0.5, 1., 1.])
ltri = ltri_mf(domain, [0.5, 1., 1.])
rtri = rtri_mf(domain, [0.5, 0., 1.])
trapezoid = trapezoid_mf(domain, [0., 0.25, 0.75, 1., 1.])
gaussian = gaussian_mf(domain, [0.5, 0.1, 1.])
plt.figure()
plt.plot(domain, zero, label="All zero MF")
plt.plot(domain, const, label="Const MF")
plt.grid(True)
plt.legend()
plt.xlabel("Domain")
plt.ylabel("Membership function")
plt.show()
plt.figure()
plt.plot(domain, singleton, label="Singleton MF")
plt.grid(True)
plt.legend()
plt.xlabel("Domain")
plt.ylabel("Membership function")
plt.show()
plt.figure()
plt.plot(domain, tri, label="Triangular MF")
plt.plot(domain, ltri, label="Left triangular MF")
plt.plot(domain, rtri, label="Right triangular MF")
plt.grid(True)
plt.legend()
plt.xlabel("Domain")
plt.ylabel("Membership function")
plt.show()
plt.figure()
plt.plot(domain, trapezoid, label="Trapezoid MF")
plt.grid(True)
plt.legend()
plt.xlabel("Domain")
plt.ylabel("Membership function")
plt.show()
plt.figure()
plt.plot(domain, gaussian, label="Gaussian MF")
plt.grid(True)
plt.legend()
plt.xlabel("Domain")
plt.ylabel("Membership function")
plt.show()
OUTPUT:
Conclusion:
Implemented various fuzzy membership functions, namely singleton, triangular, trapezoidal, and Gaussian, successfully.
Experiment 05
Union:
In the union of crisp sets, repeated elements are simply taken only once. In the case of fuzzy sets, when an
element is common to both fuzzy sets, we take the maximum of its membership values.
The union of two fuzzy sets A and B is a fuzzy set C, written as C = A ∪ B, with membership function
μC(x) = max(μA(x), μB(x)).
Graphically, the union operation can be represented as follows: the membership functions μA and μB give the
fuzzy values of elements in sets A and B, respectively. Wherever these functions overlap, we take the point
with the maximum membership value.
Intersection:
In the intersection of crisp sets, we simply select the elements common to both sets. In the case of fuzzy sets,
when an element is common to both fuzzy sets, we take the minimum of its membership values. The intersection
of fuzzy sets A and B is a fuzzy set C = A ∩ B with membership function μC(x) = min(μA(x), μB(x)).
Graphically, the intersection operation can be represented as follows: the membership functions μA and μB give
the fuzzy values of elements in sets A and B, respectively. Wherever these functions overlap, we take the point
with the minimum membership value.
Complement:
The fuzzy complement is analogous to the crisp complement operation: the membership value of every element
in the fuzzy set is complemented with respect to 1, i.e. μĀ(x) = 1 − μA(x).
Implementation:
import numpy as np
class FuzzySet:
    def __init__(self, iterable: any):
        # Elements are stored as (element, membership) tuples
        self.f_set = set(iterable)
        self.f_list = list(iterable)
        self.f_len = len(iterable)
        for elem in self.f_set:
            if not isinstance(elem, tuple):
                raise TypeError("Fuzzy set elements must be (element, membership) tuples")
            if not isinstance(elem[1], float):
                raise ValueError("Membership values not assigned to elements")

    def __invert__(self):
        # Fuzzy complement: each membership value is subtracted from 1
        f_set = [x for x in self.f_set]
        for indx, elem in enumerate(f_set):
            f_set[indx] = (elem[0], float(round(1 - elem[1], 2)))
        return FuzzySet(f_set)

    def __sub__(self, other):
        # Fuzzy difference: A - B = A intersected with the complement of B
        if len(self) != len(other):
            raise ValueError("Length of the sets is different")
        return self & ~other

    @staticmethod
    def max_min(array1: np.ndarray, array2: np.ndarray):
        # Max-min composition of two fuzzy relations
        tmp = np.zeros((array1.shape[0], array2.shape[1]))
        t = list()
        for i in range(len(array1)):
            for j in range(len(array2[0])):
                for k in range(len(array2)):
                    t.append(round(min(array1[i][k], array2[k][j]), 2))
                tmp[i][j] = max(t)
                t.clear()
        return tmp

    def __len__(self):
        self.f_len = sum([1 for i in self.f_set])
        return self.f_len

    def __str__(self):
        return f'{[x for x in self.f_set]}'

    def __iter__(self):
        for i in range(len(self)):
            yield self[i]
r = np.array([[0.6, 0.6, 0.8, 0.9], [0.1, 0.2, 0.9, 0.8], [0.9, 0.3, 0.4, 0.8], [0.9, 0.8, 0.1, 0.2]])
s = np.array([[0.1, 0.2, 0.7, 0.9], [1.0, 1.0, 0.4, 0.6], [0.0, 0.0, 0.5, 0.9], [0.9, 1.0, 0.8, 0.2]])
print(f"Max Min: of \n{r} \nand \n{s}\n:\n\n")
print(FuzzySet.max_min(r, s))
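The conclusion below mentions union, intersection, inversion, and subtraction, but the corresponding methods (and the __and__ and __getitem__ helpers that __sub__ and __iter__ rely on) do not appear in the extracted listing. As a standalone illustration only, not the original class code, the standard max/min fuzzy-set operations on assumed example values look like this:
# Illustrative sketch of the standard fuzzy-set operations (example membership values are assumptions)
A = {'x1': 0.2, 'x2': 0.7, 'x3': 1.0}
B = {'x1': 0.5, 'x2': 0.3, 'x3': 0.8}

union = {k: max(A[k], B[k]) for k in A}            # mu_C(x) = max(mu_A(x), mu_B(x))
intersection = {k: min(A[k], B[k]) for k in A}     # mu_C(x) = min(mu_A(x), mu_B(x))
complement_A = {k: round(1 - A[k], 2) for k in A}  # mu_C(x) = 1 - mu_A(x)

print("Union:", union)
print("Intersection:", intersection)
print("Complement of A:", complement_A)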
OUTPUT:
Conclusion:
We covered the fuzzy set operations, namely union, intersection, inversion (complement), and subtraction. These
operations are generalizations of the crisp set operations. Thus, we have successfully implemented fuzzy set operations.
Experiment 06
Aim: To study and implement a fuzzy logic control system for a washing machine using Python
Theory:
Implementation:
from skfuzzy import control as ctrl
import skfuzzy as fuzz
import numpy as np
class washing_machine:
    # Rule application
    rule1 = ctrl.Rule(degree_dirt['High'] | type_dirt['Fat'], wash_time['VeryLong'])
    rule2 = ctrl.Rule(degree_dirt['Medium'] | type_dirt['Fat'], wash_time['long'])
    rule3 = ctrl.Rule(degree_dirt['Low'] | type_dirt['Fat'], wash_time['long'])
    rule4 = ctrl.Rule(degree_dirt['High'] | type_dirt['Medium'], wash_time['long'])
    rule5 = ctrl.Rule(degree_dirt['Medium'] | type_dirt['Medium'], wash_time['medium'])
    rule6 = ctrl.Rule(degree_dirt['Low'] | type_dirt['Medium'], wash_time['medium'])
    rule7 = ctrl.Rule(degree_dirt['High'] | type_dirt['NonFat'], wash_time['medium'])
    rule8 = ctrl.Rule(degree_dirt['Medium'] | type_dirt['NonFat'], wash_time['short'])
    rule9 = ctrl.Rule(degree_dirt['Low'] | type_dirt['NonFat'], wash_time['very_short'])

def fuzzify_laundry(fuzz_type, fuzz_degree):
    # Feed the crisp inputs into the control-system simulation and compute the output
    washing_machine.washing.input['type_dirt'] = fuzz_type
    washing_machine.washing.input['degree_dirt'] = fuzz_degree
    washing_machine.washing.compute()
    washing_machine.wash_time.view(sim=washing_machine.washing)
    return washing_machine.washing.output['wash_time']

def compute_washing_parameters(type_of_dirt, degree_of_dirt):
    type_fuzzy = fuzzify_laundry(type_of_dirt, degree_of_dirt)
    return type_fuzzy

if __name__ == "__main__":
    type_of_dirt = float(input("Enter Type of Dirtiness [0-100]"))
    degree_of_dirt = float(input("Enter Degree of Dirtiness [0-100]"))
    washing_parameters = compute_washing_parameters(type_of_dirt, degree_of_dirt)
    print(washing_parameters)
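The extracted listing omits the fuzzy variable and control-system definitions that the rules and fuzzify_laundry rely on (degree_dirt, type_dirt, wash_time, washing). The following is only a hedged sketch of what those definitions might look like with scikit-fuzzy; the variable names and label names are taken from the listing, while the universes, membership-function shapes, and automf-style choices are assumptions:
# Sketch of the definitions the listing above assumes but does not show.
# The antecedents/consequent would be defined before the rules, and the
# control system / simulation after them.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Fuzzy input variables on an assumed 0-100 universe
degree_dirt = ctrl.Antecedent(np.arange(0, 101, 1), 'degree_dirt')
type_dirt = ctrl.Antecedent(np.arange(0, 101, 1), 'type_dirt')
# Fuzzy output variable: wash time in minutes (assumed 0-60 range)
wash_time = ctrl.Consequent(np.arange(0, 61, 1), 'wash_time')

# Membership functions; label names mirror those used in the rules
degree_dirt['Low'] = fuzz.trimf(degree_dirt.universe, [0, 0, 50])
degree_dirt['Medium'] = fuzz.trimf(degree_dirt.universe, [0, 50, 100])
degree_dirt['High'] = fuzz.trimf(degree_dirt.universe, [50, 100, 100])

type_dirt['NonFat'] = fuzz.trimf(type_dirt.universe, [0, 0, 50])
type_dirt['Medium'] = fuzz.trimf(type_dirt.universe, [0, 50, 100])
type_dirt['Fat'] = fuzz.trimf(type_dirt.universe, [50, 100, 100])

wash_time['very_short'] = fuzz.trimf(wash_time.universe, [0, 0, 15])
wash_time['short'] = fuzz.trimf(wash_time.universe, [0, 15, 30])
wash_time['medium'] = fuzz.trimf(wash_time.universe, [15, 30, 45])
wash_time['long'] = fuzz.trimf(wash_time.universe, [30, 45, 60])
wash_time['VeryLong'] = fuzz.trimf(wash_time.universe, [45, 60, 60])

# Control system built from rule1..rule9 (the rules defined in the listing above),
# plus the simulation object that fuzzify_laundry feeds inputs into
washing_ctrl = ctrl.ControlSystem([rule1, rule2, rule3, rule4, rule5,
                                   rule6, rule7, rule8, rule9])
washing = ctrl.ControlSystemSimulation(washing_ctrl)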
OUTPUT:
CONCLUSION:
By using a fuzzy logic control system, we are able to obtain a wash time for a given percentage of dirt and
percentage of grease. The system uses the min-of-maximum technique for defuzzification. Situation-analysis
ability has been incorporated into the machine, which makes it much more automatic and reflects the
decision-making power of the new arrangement. Thus, we have successfully implemented a fuzzy control
system using a fuzzy tool.
Experiment 07
1.Activation Function:
There are several types of activation functions, but the most popular is the Rectified Linear Unit, also known
as the ReLU function. It is known to work better than the sigmoid and tanh functions because gradient descent
converges faster with it. For sigmoid and tanh, when x (or z) is very large in magnitude the slope becomes very
small, which slows gradient descent significantly; this is not the case for the ReLU function.
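As a small illustration of this point (not part of the original write-up), the following NumPy snippet compares the gradients of the sigmoid and ReLU functions at a few sample points:
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

# The sigmoid gradient vanishes for large |z|, while the ReLU gradient stays at 1 for z > 0
z = np.array([-10., -1., 0., 1., 10.])
print("sigmoid'(z):", sigmoid(z) * (1 - sigmoid(z)))   # close to 0 at |z| = 10
print("relu'(z):   ", (z > 0).astype(float))            # 1 for every z > 0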
2.Cost Function
A cost function for a neural network is similar to a cost function for any other machine learning model. It is a
measure of how "good" a neural network is with respect to the values it predicts compared to the actual values.
The cost function is inversely related to the quality of the model: the better the model, the lower the cost, and
vice versa.
The purpose of a cost function is to have a value to optimize. By minimizing the cost function of a neural
network, you obtain the optimal weights and parameters of the model, thereby maximizing its performance.
Commonly used cost functions include the quadratic cost, cross-entropy cost, exponential cost, Hellinger
distance, and Kullback-Leibler divergence.
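As an illustration with made-up prediction values, the quadratic and cross-entropy costs from the list above can be computed like this:
import numpy as np

# True one-hot labels and model predictions for a small batch (values are illustrative)
y_true = np.array([[1, 0, 0], [0, 1, 0]], dtype=float)
y_pred = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])

quadratic = 0.5 * np.mean(np.sum((y_pred - y_true) ** 2, axis=1))
cross_entropy = -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
print(f"quadratic cost:     {quadratic:.4f}")
print(f"cross-entropy cost: {cross_entropy:.4f}")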
3. Backpropagation
Backpropagation is an algorithm closely tied to the cost function: specifically, it is the algorithm used to compute
the gradient of the cost function. It has gained a lot of popularity because of its speed and efficiency compared
with other approaches.
Its name stems from the fact that the calculation of the gradient starts with the gradient of the final layer of
weights and moves backwards to the gradient of the first layer of weights. Consequently, the error at layer k
depends on the next layer, k+1.
Generally, backpropagation works as follows (a small numeric sketch follows this list):
1. Calculate the forward phase for each input-output pair.
2. Calculate the backward phase for each pair.
3. Combine the individual gradients.
4. Update the weights based on the learning rate and the total gradient.
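The following minimal NumPy sketch (layer sizes and learning rate are illustrative choices, not from the original document) shows the forward phase, the backward phase starting at the output layer, and the weight update for a single training pair:
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=(3, 1)), np.array([[1.0]])
W1, W2 = rng.normal(size=(4, 3)) * 0.1, rng.normal(size=(1, 4)) * 0.1
lr = 0.1

# Forward phase
h = np.maximum(0.0, W1 @ x)                 # ReLU hidden layer
y_hat = W2 @ h                              # linear output
loss = 0.5 * ((y_hat - y) ** 2).item()      # quadratic cost

# Backward phase: start at the output and move backwards
d_yhat = y_hat - y                          # dL/dy_hat
dW2 = d_yhat @ h.T                          # gradient of the last layer first
d_h = W2.T @ d_yhat * (h > 0)               # error at the hidden layer depends on the layer after it
dW1 = d_h @ x.T

# Update the weights using the learning rate and the gradients
W2 -= lr * dW2
W1 -= lr * dW1
print(f"loss = {loss:.4f}")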
A Recurrent Neural Network (RNN) is another type of neural network that works exceptionally well with sequential
data, thanks to its ability to ingest inputs of varying sizes. RNNs consider the current input as well as the inputs they
were given previously, which means the same input can produce a different output depending on the earlier inputs.
Technically speaking, RNNs are a type of neural network in which the connections between nodes form a directed
graph along a temporal sequence, allowing them to use their internal memory to process variable-length sequences
of inputs.
7. Weight Initialization
The point of weight initialization is to make sure that a neural network does not converge to a trivial solution.
If the weights are all initialized to the same value (e.g. zero), every unit receives exactly the same signal and every
layer behaves as if it were a single cell. Therefore, the weights should be randomly initialized near zero, but not
equal to zero, which is what the stochastic optimization algorithm used to train the model expects.
CONCLUSION:
Thus, we have studied the key deep learning concepts described above.
Experiment 08
CODE:
Example of image recognition with Keras, from loading the data to evaluation.
The first thing we should do is import the necessary libraries. I'll show how these imports are used as we go, but for
now know that we'll be making use of Numpy, and various modules associated with Keras:
import numpy
from tensorflow import keras
from keras.constraints import maxnorm
from keras.utils import np_utils
from keras.datasets import cifar10
We're going to use a fixed random seed here so that the results achieved in this experiment can be replicated,
which is why we need NumPy:
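The seed-setting line itself is not in the extract; it would look something like this (the seed value is an arbitrary choice):
# Fix the NumPy random seed so results are reproducible
seed = 21
numpy.random.seed(seed)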
Now let's load in the dataset. We can do so simply by specifying which variables we want to load the data into, and
then using the load_data() function:
# Loading in the data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
Next, we normalize the input data:
# Normalize the inputs from 0-255 to between 0 and 1 by dividing by 255
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train = X_train / 255.0
X_test = X_test / 255.0
# One-hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
class_num = y_test.shape[1]
We can create the model object and add layers to it one at a time with add(), or we can pass each layer as an element
of a list in the Sequential() constructor call:
model = keras.Sequential([
keras.layers.layer1,
keras.layers.layer2,
keras.layers.layer3])
model = keras.Sequential()
model.add(keras.layers.Conv2D(32, (3, 3), input_shape=X_train.shape[1:], padding='same'))
model.add(keras.layers.Activation('relu'))
model.add(keras.layers.Conv2D(32, 3, input_shape=(32, 32, 3), activation='relu', padding='same'))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Conv2D(64, 3, activation='relu', padding='same'))
model.add(keras.layers.MaxPooling2D(2))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Conv2D(64, 3, activation='relu', padding='same'))
model.add(keras.layers.MaxPooling2D(2))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Conv2D(128, 3, activation='relu', padding='same'))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Flatten())
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(32, activation='relu'))
model.add(keras.layers.Dropout(0.3))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(class_num, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
We can print out the model summary to see what the whole model looks like.
print(model.summary())
Printing out the summary will give us quite a bit of information and can be used to cross-check your own architecture
against the one laid out above:
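The training and evaluation calls themselves are not in the extract; a plausible version consistent with the logged output below (25 epochs, 782 steps per epoch, i.e. a batch size of 64 on CIFAR-10's 50,000 training images) would be:
# Train the network; epoch count and batch size are inferred from the log below
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=25, batch_size=64)

# Evaluate on the test set (this is where the final accuracy figure comes from)
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1] * 100))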
This results in:
Epoch 1/25
782/782 [==============================] - 12s 15ms/step - loss: 1.4851 - accuracy: 0.4721 - val_loss:
1.1805 - val_accuracy: 0.5777
...
Epoch 25/25
782/782 [==============================] - 11s 14ms/step - loss: 0.4154 - accuracy: 0.8538 - val_
RESULT: Accuracy: 82.01%
CONCLUSION:
Successfully implemented image classification using deep learning.
Experiment 09
The image dataset consists of more than 50,000 pictures of various traffic signs (speed limits, crossings, traffic
signals, etc.), divided into around 43 different classes for image classification. The classes vary considerably in
size: some have very few images while others have a vast number. The dataset does not take much time or space
to download, as the file size is around 314.36 MB. It contains two separate folders, train and test; the train folder
contains one sub-folder per class, and every class folder contains a number of images.
Implementation:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt#to plot accuracy
import cv2
import tensorflow as tf
from PIL import Image
import os
from sklearn.model_selection import train_test_split #to split training and testing data
from keras.utils import to_categorical#to convert the labels present in y_train and t_test into one-hot encoding
from keras.models import Sequential, load_model
from keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout#to create CNN
data = []
labels = []
classes = 43
cur_path = os.getcwd()
#Retrieving the images and their labels
for i in range(classes):
    path = os.path.join(cur_path, 'train', str(i))
    images = os.listdir(path)
    for a in images:
        try:
            image = Image.open(os.path.join(path, a))
            image = image.resize((30,30))
            image = np.array(image)
            #sim = Image.fromarray(image)
            data.append(image)
            labels.append(i)
        except:
            print("Error loading image")
#Converting lists into numpy arrays
data = np.array(data)
labels = np.array(labels)
print(data.shape, labels.shape)
#Splitting training and testing dataset
X_t1, X_t2, y_t1, y_t2 = train_test_split(data, labels, test_size=0.2, random_state=42)
print(X_t1.shape, X_t2.shape, y_t1.shape, y_t2.shape)
#Converting the labels into one hot encoding
y_t1 = to_categorical(y_t1, 43)
y_t2 = to_categorical(y_t2, 43)
#Building the model
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu', input_shape=X_t1.shape[1:]))
model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.25))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(43, activation='softmax'))
#Compilation of the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
eps = 15
anc = model.fit(X_t1, y_t1, batch_size=32, epochs=eps, validation_data=(X_t2, y_t2))
model.save("my_model.h5")
#plotting graphs for accuracy
plt.figure(0)
plt.plot(anc.history['accuracy'], label='training accuracy')
plt.plot(anc.history['val_accuracy'], label='val accuracy')
plt.title('Accuracy')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.legend()
plt.show()
plt.figure(1)
plt.plot(anc.history['loss'], label='training loss')
plt.plot(anc.history['val_loss'], label='val loss')
plt.title('Loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend()
plt.show()
#testing accuracy on test dataset
from sklearn.metrics import accuracy_score
y_test = pd.read_csv('Test.csv')
labels = y_test["ClassId"].values
imgs = y_test["Path"].values
data=[]
for img in imgs:
    image = Image.open(img)
    image = image.resize((30,30))
    data.append(np.array(image))
X_test=np.array(data)
pred = model.predict_classes(X_test)
#Accuracy with the test data
from sklearn.metrics import accuracy_score
print(accuracy_score(labels, pred))
model.save('traffic_classifier.h5')
import tkinter as tk
from tkinter import filedialog
from tkinter import *
from PIL import ImageTk, Image
import numpy
#load the trained model to classify sign
from keras.models import load_model
model = load_model('traffic_classifier.h5')
#dictionary to label all traffic signs class.
classes = { 1:'Speed limit (20km/h)',
2:'Speed limit (30km/h)',
3:'Speed limit (50km/h)',
4:'Speed limit (60km/h)',
5:'Speed limit (70km/h)',
6:'Speed limit (80km/h)',
7:'End of speed limit (80km/h)',
8:'Speed limit (100km/h)',
9:'Speed limit (120km/h)',
10:'No passing',
11:'No passing veh over 3.5 tons',
12:'Right-of-way at intersection',
13:'Priority road',
14:'Yield',
15:'Stop',
16:'No vehicles',
17:'Veh > 3.5 tons prohibited',
18:'No entry',
19:'General caution',
20:'Dangerous curve left',
21:'Dangerous curve right',
22:'Double curve',
23:'Bumpy road',
24:'Slippery road',
25:'Road narrows on the right',
26:'Road work',
27:'Traffic signals',
28:'Pedestrians',
29:'Children crossing',
30:'Bicycles crossing',
31:'Beware of ice/snow',
32:'Wild animals crossing',
33:'End speed + passing limits',
34:'Turn right ahead',
35:'Turn left ahead',
36:'Ahead only',
37:'Go straight or right',
38:'Go straight or left',
39:'Keep right',
40:'Keep left',
41:'Roundabout mandatory',
42:'End of no passing',
43:'End no passing vehicle with a weight greater than 3.5 tons' }
#initialise GUI
top=tk.Tk()
top.geometry('800x600')
top.title('Traffic sign classification')
top.configure(background='#CDCDCD')
label=Label(top,background='#CDCDCD', font=('arial',15,'bold'))
sign_image = Label(top)
def classify(file_path):
    global label_packed
    image = Image.open(file_path)
    image = image.resize((30,30))
    image = numpy.expand_dims(image, axis=0)
    image = numpy.array(image)
    pred = model.predict_classes([image])[0]
    sign = classes[pred+1]
    print(sign)
    label.configure(foreground='#011638', text=sign)
def show_classify_button(file_path):
    classify_b=Button(top,text="Classify Image",command=lambda: classify(file_path),padx=10,pady=5)
    classify_b.configure(background='#364156', foreground='white',font=('arial',10,'bold'))
    classify_b.place(relx=0.79,rely=0.46)
def upload_image():
    try:
        file_path=filedialog.askopenfilename()
        uploaded=Image.open(file_path)
        uploaded.thumbnail(((top.winfo_width()/2.25),(top.winfo_height()/2.25)))
        im=ImageTk.PhotoImage(uploaded)
        sign_image.configure(image=im)
        sign_image.image=im
        label.configure(text='')
        show_classify_button(file_path)
    except:
        pass
upload=Button(top,text="Upload an image",command=upload_image,padx=10,pady=5)
upload.configure(background='#364156', foreground='white',font=('arial',10,'bold'))
upload.pack(side=BOTTOM,pady=50)
sign_image.pack(side=BOTTOM,expand=True)
label.pack(side=BOTTOM,expand=True)
heading = Label(top, text="check traffic sign",pady=20, font=('arial',20,'bold'))
heading.configure(background='#CDCDCD',foreground='#364156')
heading.pack()
top.mainloop()
Output:
Conclusion:
We created a CNN model to identify traffic signs and classify them with about 95% accuracy. We observed how the
accuracy and loss change over a large dataset. The GUI built on top of this model makes it easy to see how signs are
classified into the different classes.
Experiment 10
Aim: Implementation of Supervised learning like
Ada-Boosting
Random Forests
Theory:
Supervised Machine Learning
Supervised learning is the type of machine learning in which machines are trained using well-labelled training
data, and on the basis of that data they predict the output. Labelled data means that the input data is already
tagged with the correct output.
In supervised learning, the training data provided to the machine works as a supervisor that teaches the machine
to predict the output correctly. It applies the same concept as a student learning under the supervision of a
teacher.
Supervised learning is the process of providing input data as well as correct output data to the machine learning
model. The aim of a supervised learning algorithm is to find a mapping function that maps the input variable (x)
to the output variable (y).
In the real world, supervised learning can be used for risk assessment, image classification, fraud detection,
spam filtering, and so on.
The working of supervised learning can be easily understood from the example and diagram below:
This figure shows how the first model is built and how the errors from the first model are noted by the algorithm.
The records that are incorrectly classified are used as input for the next model. This process is repeated until the
specified condition is met. As shown in the figure, 'n' models are built, each learning from the errors of the
previous one; this is how boosting works. The models 1, 2, 3, ..., N are individual models, for example decision
trees. All types of boosting models work on the same principle.
Implementation:
Importing libraries
So our final model will be the weighted mean of individual models.
Implementation: -
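The implementation pages were not extracted, so the following is only a minimal sketch of the two algorithms named in the aim using scikit-learn; the dataset (iris) and the hyperparameters are illustrative assumptions, not the values used in the original listing:
# Hedged sketch: AdaBoost and Random Forest with scikit-learn
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# AdaBoost: sequentially reweights misclassified samples; the final prediction
# is a weighted vote of the weak learners
ada = AdaBoostClassifier(n_estimators=50, random_state=42).fit(X_train, y_train)
print("AdaBoost accuracy:     ", accuracy_score(y_test, ada.predict(X_test)))

# Random Forest: bagging of decorrelated decision trees, combined by majority vote
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
print("Random Forest accuracy:", accuracy_score(y_test, rf.predict(X_test)))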
Conclusion:
Successfully Executed Supervised Learning algorithms such as Ada Boost and Random Forest.
EXPERIMENT 11
Theory:
Data Science Process: Data science is a field that involves working with huge amounts of data, developing
algorithms, working with machine learning, and more, in order to come up with business insights. Various
processes are involved in deriving insights from the source data, such as data extraction, data preparation,
model planning, model building, and so on. The image below depicts the various processes of data science.
Classification: As the name suggests, classification is the task of "classifying things" into sub-categories, but
by a machine! If that does not sound like much, imagine your computer being able to differentiate between
you and a stranger, between a potato and a tomato, or between an A grade and an F. Now it sounds interesting.
In machine learning and statistics, classification is the problem of identifying to which of a set of categories
(sub-populations) a new observation belongs, on the basis of a training set of data containing observations
whose category membership is known.
Binary Classification: the given data has to be categorized into 2 distinct classes. Example: on the basis of a
person's health parameters, determine whether the person has a certain disease or not.
Multiclass Classification: the number of classes is more than 2. Example: on the basis of data about different
species of flowers, determine to which species an observation belongs.
Various Classification Algorithms: -
Logistic Regression
Naive Bayes
K-Nearest Neighbors
Decision Tree
Support Vector Machines
Code: -
Checking algorithms and importing plotly
Comparing Algorithms
Making Predictions
Confusion Matrix
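The code pages for the steps named above were not extracted, so the following is only a hedged sketch of that workflow (checking and comparing algorithms, making predictions, and printing a confusion matrix) using scikit-learn; the dataset (iris) and model choices are assumptions:
# Hedged sketch of the classification workflow outlined above
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# Checking and comparing algorithms with 5-fold cross-validation
models = {
    'LR': LogisticRegression(max_iter=1000),
    'NB': GaussianNB(),
    'KNN': KNeighborsClassifier(),
    'CART': DecisionTreeClassifier(),
    'SVM': SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train, cv=5)
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")

# Making predictions with one of the models and printing the confusion matrix
best = SVC().fit(X_train, y_train)
pred = best.predict(X_test)
print("Accuracy:", accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred))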
Results:
Conclusion: -