ML LAB Rec
3. The probability that it is Friday and that a student is absent is 3%. Since there are 5 school days in
a week, the probability that it is Friday is 20%. What is the probability that a student is absent given
that today is Friday? Apply Bayes' rule in Python to get the result. (Ans: 15%)
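A minimal sketch of the calculation (the variable names are illustrative):

```python
# P(F)       = probability that today is Friday
# P(A and F) = probability that it is Friday AND a student is absent
p_friday = 0.20
p_absent_and_friday = 0.03

# Bayes' rule / definition of conditional probability:
# P(A | F) = P(A and F) / P(F)
p_absent_given_friday = p_absent_and_friday / p_friday
print("P(absent | Friday) = {:.0%}".format(p_absent_given_friday))  # 15%
```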
import math

# classify the unknown point p by a majority vote among its k nearest neighbours
def classify(points, p, k):
    distance = []
    for g in points:                          # g is the class label (0 or 1)
        for xy in points[g]:
            eucdis = math.sqrt((xy[0]-p[0])**2 + (xy[1]-p[1])**2)
            distance.append((eucdis, g))
    distance = sorted(distance)[:k]           # keep the k closest points
    f1, f2 = 0, 0
    for d in distance:
        if d[1] == 0:
            f1 += 1
        elif d[1] == 1:
            f2 += 1
    return 0 if f1 > f2 else 1                # majority vote

points = {1: [(1, 1), (2, 2), (3, 1)], 0: [(5, 3), (4, 4), (6, 5)]}
p = (1, 2)
k = 3
ans = classify(points, p, k)
print("The class assigned to the unknown point is: {}".format(ans))
OUTPUT:
6. Given the following data, which specifies classifications for nine combinations of VAR1 and VAR2,
predict a classification for a case where VAR1 = 0.906 and VAR2 = 0.606, using the result of k-means
clustering with 3 means (i.e., 3 centroids).
VAR1 VAR2 CLASS
1.713 1.586 0
0.180 1.786 1
0.353 1.240 1
0.940 1.566 0
1.486 0.759 1
1.266 1.106 0
1.540 0.419 1
0.459 1.799 1
0.773 0.186 1
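One way to sketch this with scikit-learn's KMeans (the data is hard-coded from the table above; the query point is assigned to its nearest centroid and then takes the majority CLASS of that cluster):

```python
import numpy as np
from sklearn.cluster import KMeans

# the nine (VAR1, VAR2) points and their CLASS labels from the table
X = np.array([
    [1.713, 1.586], [0.180, 1.786], [0.353, 1.240],
    [0.940, 1.566], [1.486, 0.759], [1.266, 1.106],
    [1.540, 0.419], [0.459, 1.799], [0.773, 0.186],
])
classes = np.array([0, 1, 1, 0, 1, 0, 1, 1, 1])

# cluster into 3 groups, then find the cluster nearest to the query point
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
query = np.array([[0.906, 0.606]])
cluster = km.predict(query)[0]

# predict the majority CLASS among the training points in that cluster
members = classes[km.labels_ == cluster]
prediction = np.bincount(members).argmax()
print("Predicted class:", prediction)
```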
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, recall_score, precision_score, confusion_matrix

# Xtrain, Xtest, Ytrain, Ytest are assumed to come from an earlier train/test split
count_vect = CountVectorizer()
Xtrain_dims = count_vect.fit_transform(Xtrain)
Xtest_dims = count_vect.transform(Xtest)
df = pd.DataFrame(Xtrain_dims.toarray(), columns=count_vect.get_feature_names_out())
clf = MultinomialNB()
# fit the model on the training data
clf.fit(Xtrain_dims, Ytrain)
# predict the test data
prediction = clf.predict(Xtest_dims)
print('******** Accuracy Metrics *********')
print('Accuracy : ', accuracy_score(Ytest, prediction))
print('Recall : ', recall_score(Ytest, prediction))
print('Precision : ', precision_score(Ytest, prediction))
print('Confusion Matrix : \n', confusion_matrix(Ytest, prediction))
print(10*"-")
# predict an input statement
test_stmt = [input("Enter any statement to predict :")]
test_dims = count_vect.transform(test_stmt)
pred = clf.predict(test_dims)
for stmt, lbl in zip(test_stmt, pred):
    if lbl == 1:
        print("Statement is Positive")
    else:
        print("Statement is Negative")
OUTPUT:
import random

# search for (x, y, z) satisfying 6x^3 + 9y^2 + 90z - 25 = 0
def func(x, y, z):
    return 6*x**3 + 9*y**2 + 90*z - 25

# fitness is the inverse of the distance from 0 (higher is better)
def fitness(x, y, z):
    ans = func(x, y, z)
    if ans == 0:
        return 9999
    else:
        return abs(1/ans)

# initial population of 1000 random candidate solutions
solutions = []
for s in range(1000):
    solutions.append((random.uniform(0, 1000),
                      random.uniform(0, 1000),
                      random.uniform(0, 1000)))

# evolve the population over a number of generations
for gen in range(100):
    # rank the candidates by fitness, best first
    ranksol = []
    for s in solutions:
        ranksol.append((fitness(s[0], s[1], s[2]), s))
    ranksol.sort()
    ranksol.reverse()
    bestsol = ranksol[:10]          # keep the 10 fittest solutions
    # gene pool drawn from the best solutions
    elements = []
    for s in bestsol:
        elements.append(s[1][0])
        elements.append(s[1][1])
        elements.append(s[1][2])
    # next generation: recombine genes with a small random mutation
    NewGen = []
    for _ in range(1000):
        e1 = random.choice(elements)*random.uniform(0.99, 1.01)
        e2 = random.choice(elements)*random.uniform(0.99, 1.01)
        e3 = random.choice(elements)*random.uniform(0.99, 1.01)
        NewGen.append((e1, e2, e3))
    solutions = NewGen

print("Top 10 best Solutions")
for x in bestsol:
    print(x)
OUTPUT:
9. The following training examples map descriptions of individuals onto high, medium
and low credit-worthiness.
medium skiing design single twenties no -> highRisk
high golf trading married forties yes -> lowRisk
low speedway transport married thirties yes -> medRisk
medium football banking single thirties yes -> lowRisk
high flying media married fifties yes -> highRisk
low football security single twenties no -> medRisk
medium golf media single thirties yes -> medRisk
medium golf transport married forties yes -> lowRisk
high skiing banking single thirties yes -> highRisk
low golf unemployed married forties yes -> highRisk
Input attributes are (from left to right) income, recreation, job, status, age-group, home-owner.
Find the unconditional probability of 'golf' and the conditional probability of 'single' given
'medRisk' in the dataset.
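The two probabilities can be computed by direct counting over the ten records (a minimal sketch with the data hard-coded from the table above):

```python
# each record: (income, recreation, job, status, age_group, home_owner, risk)
data = [
    ("medium", "skiing",   "design",     "single",  "twenties", "no",  "highRisk"),
    ("high",   "golf",     "trading",    "married", "forties",  "yes", "lowRisk"),
    ("low",    "speedway", "transport",  "married", "thirties", "yes", "medRisk"),
    ("medium", "football", "banking",    "single",  "thirties", "yes", "lowRisk"),
    ("high",   "flying",   "media",      "married", "fifties",  "yes", "highRisk"),
    ("low",    "football", "security",   "single",  "twenties", "no",  "medRisk"),
    ("medium", "golf",     "media",      "single",  "thirties", "yes", "medRisk"),
    ("medium", "golf",     "transport",  "married", "forties",  "yes", "lowRisk"),
    ("high",   "skiing",   "banking",    "single",  "thirties", "yes", "highRisk"),
    ("low",    "golf",     "unemployed", "married", "forties",  "yes", "highRisk"),
]

# unconditional probability of 'golf': count of golf records / total records
p_golf = sum(1 for r in data if r[1] == "golf") / len(data)

# conditional probability of 'single' given 'medRisk':
# count of single among medRisk records / count of medRisk records
med = [r for r in data if r[6] == "medRisk"]
p_single_given_med = sum(1 for r in med if r[3] == "single") / len(med)

print("P(golf) =", p_golf)                          # 4/10 = 0.4
print("P(single | medRisk) =", p_single_given_med)  # 2/3
```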
# single-layer perceptron with a fixed step (threshold) activation
x_input = [0.1, 0.5, 0.2]
w_weights = [0.4, 0.3, 0.6]
threshold = 0.5

# step activation: fire (1) only if the weighted sum exceeds the threshold
def step(weight_sum):
    if weight_sum > threshold:
        return 1
    else:
        return 0

def perceptron():
    weight_sum = 0
    for x, w in zip(x_input, w_weights):
        weight_sum += x*w        # accumulate the weighted inputs
    return step(weight_sum)

print("Output: ", perceptron())
OUTPUT:
Output: 0
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import tree

# load the play-tennis dataset and encode each categorical column as integers
data = pd.read_csv("data.csv")
data["outlook"] = data["outlook"].map({"sunny": 1, "overc": 2, "rain": 3})
data["temp"] = data["temp"].map({"hot": 1, "mild": 2, "cool": 3})
data["humidity"] = data["humidity"].map({"high": 1, "normal": 2})
data["wind"] = data["wind"].map({"weak": 1, "strong": 2})
data["playtennis"] = data["playtennis"].map({"yes": 1, "no": 0})

# features are all columns except the last; the label is playtennis
x = data.iloc[:, :-1]
y = data.iloc[:, -1]

# fit a decision tree and visualise it
clf = tree.DecisionTreeClassifier().fit(x, y)
tree.plot_tree(clf)
plt.show()
OUTPUT: