
A3CIL202 - AI Tools and Techniques Lab    22331A0597

WEEK - 2
Exp# 1    Week# 2    Date#

Aim :
Write a program to input a graph and display the adjacent nodes for a node given by the user.

Libraries, Methods & Variables Used:

NONE

● adj(graph, node): returns the adjacent nodes of a particular node

Variables: graph (dict), ad (list), ele (char)

Program:
def adj(graph, ele):
    return graph[ele]

graph = {
    "A": ["B", "C"],
    "B": ["A", "C", "E", "D"],
    "C": ["A", "B", "D", "E"],
    "D": ["B", "C", "E", "F"],
    "E": ["B", "C", "D", "F"],
    "F": ["D", "E"]
}
ad = adj(graph, "B")
print("adjacent elements of B : ", ad)

Output:
adjacent elements of B :  ['A', 'C', 'E', 'D']

Observation:
● The graph is represented using a dictionary.
● The adjacent nodes of each vertex are stored as key-value pairs in the dictionary.
● Querying node "A" displays ['B', 'C']; the sample run above queries node "B" instead of reading the node from the user.
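To match the aim (node given by the user), a minimal sketch continuing from the program above; the prompt text is an assumption:

node = input("Enter a node: ").strip().upper()   # assumed prompt wording
if node in graph:
    print("adjacent elements of", node, ":", adj(graph, node))
else:
    print(node, "is not in the graph")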

Exp# 2    Week# 2    Date#

Aim :
To demonstrate the operations of a queue data structure.

Libraries, Methods & Variables Used:

NONE

● enqueue(queue, ele): adds an element to the rear of the queue
● dequeue(): removes and returns the element at the front of the queue
● size(): returns the number of elements in the queue

Variables: n (int), k (input), d (element)

Program:
def enqueue(queue, ele):
    queue.append(ele)

def dequeue():
    return queue.pop(0)

def size():
    return len(queue)

queue = []
print("Enter number of elements : ")
n = int(input())
print("Enter n elements to enqueue : ")
for i in range(0, n):
    k = input()
    enqueue(queue, k)
print("Queue elements : ", queue)
d = dequeue()
print("Dequeue element : ", d)
print(queue)
print("size of queue : ", size())

Output:
Enter number of elements :
3
Enter n elements to enqueue :
2
1
4
Queue elements :  ['2', '1', '4']
Dequeue element :  2
['1', '4']
size of queue :  2
Observation:
● Input elements can be of any data type; input() stores them as strings.
● A queue is FIFO: insertion happens only at the rear and deletion only at the front, unlike a deque where both ends allow both operations.
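Note that list.pop(0) is O(n) because every remaining element shifts left. A minimal sketch using collections.deque from the standard library, which gives O(1) appends and pops at either end:

from collections import deque

q = deque()
q.append("2")        # enqueue at the rear
q.append("1")
q.append("4")
print(q.popleft())   # dequeue from the front -> '2'
print(len(q))        # size -> 2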

Exp# 3    Week# 2    Date#

Aim :
To demonstrate the operations of a stack data structure.

Libraries, Methods & Variables Used:

NONE

● push(stack, ele): adds an element to the top of the stack
● pop(): removes and returns the element at the top of the stack
● size(): returns the number of elements in the stack

Variables: n (int), stack (list), k (input), p (element)

Program:
def push(stack, i):
    stack.append(i)

def pop():
    return stack.pop()

def size():
    return len(stack)

stack = []
print("Enter no of elements to push : ")
n = int(input())
print("Enter n elements to push : ")
for i in range(0, n):
    k = input()
    push(stack, k)
print("stack elements : ", stack)
p = pop()
print("pop element : ", p)
print("Size of stack : ", size())

Output:
Enter no of elements to push :
3
Enter n elements to push :
hi
hey
hello
stack elements :  ['hi', 'hey', 'hello']
pop element :  hello
Size of stack :  2

Observation:
● A peek (get) operation only retrieves the top value without removing it, whereas pop removes it.
● Stack operations are performed at one end only (the top), so the stack is LIFO.
● Input elements can be of any data type; input() stores them as strings.
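A minimal peek sketch for the same list-backed stack, assuming the stack is non-empty when called:

def peek():
    # Return the top element without removing it (assumes non-empty stack)
    return stack[-1]

print("top element : ", peek())   # the stack is unchanged afterwards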

Exp# 4    Week# 2    Date#

Aim :
Demonstrate the Breadth-First Search (BFS) uninformed AI search algorithm.

Libraries, Methods & Variables Used:

NONE

● append(): adds an element to the end of a list
● GetNode(ele): returns the adjacent elements of a node
● enqueue(list, ele): adds an element to the queue

Variables: queue (list), vis (list), graph (dict)

Program:
queue = []
vis = []

def enqueue(val, q):
    q.append(val)

def GetNode(c):
    return graph[c]

def bfs(start):
    enqueue(start, queue)
    # Iterating over the list while it grows acts like processing a queue:
    # newly enqueued neighbours are visited after the current level.
    for i in queue:
        if i not in vis:
            for j in GetNode(i):
                enqueue(j, queue)
            vis.append(i)

graph = {
    "A": ["B", "C"],
    "B": ["A", "C", "D", "E"],
    "C": ["A", "B", "D", "E"],
    "E": ["B", "C", "D", "F"],
    "D": ["B", "C", "E", "F"],
    "F": ["D", "E"]
}
bfs('A')
print(vis)

Output:
['A', 'B', 'C', 'D', 'E', 'F']

Observation:
● The graph is represented as a dictionary where each key is a node and its corresponding value is a list of neighbouring nodes.
● A queue is used to implement Breadth-First Search.
● BFS explores all the neighbours at the present depth level and then moves on to the next depth level.
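The loop above relies on Python allowing a list to grow while it is iterated. A more conventional sketch with collections.deque and an explicit pop from the front:

from collections import deque

def bfs_deque(graph, start):
    vis, q = [], deque([start])
    while q:
        node = q.popleft()           # dequeue the frontier node
        if node not in vis:
            vis.append(node)
            q.extend(graph[node])    # enqueue all neighbours
    return vis

print(bfs_deque(graph, 'A'))         # ['A', 'B', 'C', 'D', 'E', 'F']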

Exp# 5    Week# 2    Date#

Aim :
Demonstrate the Depth-First Search (DFS) uninformed AI search algorithm.

Libraries, Methods & Variables Used:

NONE

● append(): adds an element to the end of a list
● get(ele): returns the adjacent elements of a node
● push(list, ele): adds an element to the stack
● pop(): removes and returns the element at the top of the stack

Variables: stack (list), vis (list), graph (dict), n (char), i (int), j (int)

Program:
stack = []
vis = []

def push(val, s):
    s.append(val)

def pop(s):
    return s.pop()

def get(val):
    return graph[val]

def dfs(val):
    push(val, stack)
    while stack:
        n = pop(stack)
        if n not in vis:
            vis.append(n)
            for j in get(n):
                push(j, stack)

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "E"], "D": ["B", "E"], "E": ["C", "D"]}
dfs('A')
print(vis)

Output:
['A', 'C', 'E', 'D', 'B']

Observation:
● A stack is used to implement Depth-First Search.
● The vis list keeps track of visited nodes to avoid revisiting them.
● The graph is represented as a dictionary where each key is a node and its corresponding value is a list of neighbouring nodes.
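DFS can also be written recursively. A minimal sketch over the same graph; it visits neighbours in list order, so the traversal order differs from the stack version:

def dfs_rec(graph, node, vis=None):
    if vis is None:
        vis = []
    vis.append(node)
    for nb in graph[node]:
        if nb not in vis:
            dfs_rec(graph, nb, vis)   # recurse into unvisited neighbours
    return vis

print(dfs_rec(graph, 'A'))            # ['A', 'B', 'D', 'E', 'C']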
WEEK - 3
Exp# 6    Week# 3    Date#

Aim :
Calculate the distance between two nodes in a graph using Euclidean distance.

Libraries, Methods & Variables Used:

math is a built-in Python library that provides access to mathematical functions.

● pow(): returns a number raised to a power
● sqrt(): a function from the math library that returns the square root of a number

Variables: a (int), b (int), c (int), d (int), dis (float)

Program:
import math

a = int(input())
b = int(input())
c = int(input())
d = int(input())
dis = math.sqrt(pow((d - b), 2) + pow((c - a), 2))
print('Distance between two points : ', dis)

Output:
2
3
4
5
Distance between two points :  2.8284271247461903

Observation:
● The Euclidean distance is calculated using the Pythagorean theorem: the square root of the sum of the squares of the differences between the x-coordinates and the y-coordinates of the two nodes.
● pow() could equally be replaced by the ** operator in this program.
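Since Python 3.8 the same result is available directly from the standard library; a minimal sketch with math.dist:

import math

p1 = (2, 3)   # (x1, y1)
p2 = (4, 5)   # (x2, y2)
print(math.dist(p1, p2))   # 2.8284271247461903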
‭A3CIL202 - AI Tools and Techniques Lab‬ ‭22331A0597‬

Exp# 7    Week# 3    Date#

Aim :
Calculate the distance between two nodes in a graph using Manhattan distance.

Libraries, Methods & Variables Used:

NONE

● abs(): returns the absolute value of a given number

Variables: a (int), b (int), c (int), d (int), mdis (int)

Program:
a = int(input())
b = int(input())
c = int(input())
d = int(input())
mdis = abs(d - b) + abs(c - a)
print("manhattan distance : ", mdis)

Output:
2
3
4
5
manhattan distance :  4

Observation:
● Giving a non-integer value raises a ValueError in int().
● The distance is calculated with the formula abs(x2 - x1) + abs(y2 - y1).
● math.fabs() could be used instead of abs(), but it always returns a float.
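The same formula generalises to any number of coordinates. A minimal sketch over point tuples (the helper name is illustrative):

def manhattan(p, q):
    # Sum of absolute coordinate differences, dimension by dimension
    return sum(abs(x - y) for x, y in zip(p, q))

print(manhattan((2, 3), (4, 5)))         # 4
print(manhattan((1, 2, 3), (4, 6, 3)))   # 7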

Exp# 8    Week# 3    Date#

Aim :
Demonstrate the Best-First Search informed AI search algorithm.

Libraries, Methods & Variables Used:

heapq: helps you find the minimum or maximum element of a Python list while new elements are being added to it periodically.

● bfs(graph, ele): used to find the shortest or most optimal path in a graph by always expanding the cheapest frontier node first

Variables: hq (list), vis (set), graph (dict), res (list)

Program:
import heapq

def bfs(graph, sc):
    hq = [(0, sc)]
    vis = set()
    while hq:
        s, d = heapq.heappop(hq)   # pop the node with the smallest cost
        if d not in vis:
            vis.add(d)
            for x, wt in graph[d]:
                heapq.heappush(hq, (wt, x))
    return list(vis)

graph = {
    'A': [('B', 5), ('C', 7)],
    'B': [('A', 5), ('D', 4)],
    'C': [('A', 7), ('E', 6)],
    'D': [('B', 4), ('E', 2)],
    'E': [('C', 6), ('D', 2)]
}
sc = 'A'
res = bfs(graph, sc)
print(res)

Output:
['B', 'A', 'C', 'E', 'D']

Observation:
● Because vis is a set, the printed order is arbitrary and does not reflect the order in which nodes were expanded.
● heapq orders tuples by their first element, so negative edge costs are also handled.
● A KeyError is raised if a neighbour is not present as a key in the graph dictionary.
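A minimal sketch that keeps vis as a list so the expansion order of greedy best-first search is preserved (same weighted graph as above):

import heapq

def best_first(graph, start):
    hq, vis = [(0, start)], []
    while hq:
        cost, node = heapq.heappop(hq)      # cheapest frontier node first
        if node not in vis:
            vis.append(node)
            for nb, wt in graph[node]:
                if nb not in vis:
                    heapq.heappush(hq, (wt, nb))
    return vis

print(best_first(graph, 'A'))   # ['A', 'B', 'D', 'E', 'C']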
WEEK - 4
Exp 9    Week 4    Date

Aim :
Demonstrate various NumPy array creation, manipulation, and statistical functions.

Libraries, Methods & Variables Used:

numpy: NumPy is a Python library used for working with arrays. It also has functions for working in the domains of linear algebra, Fourier transforms, and matrices. NumPy stands for Numerical Python.

● ndim: returns the number of dimensions of the array
● dtype: the NumPy array object has a property called dtype that returns the data type of the array
● reshape(): reshaping means changing the shape of an array; the shape of an array is the number of elements in each dimension
● arange(): creates an array with a range of values
● add(): computes the element-wise addition of two arrays
● subtract(): computes the element-wise difference of two arrays
● transpose(): transposes an array in one line using NumPy's transpose method
● dot(): dot product of two arrays
● sin(): calculates the trigonometric sine for all x
● cos(): calculates the trigonometric cosine for all x
● tan(): calculates the trigonometric tangent for all x
● sum(): returns the sum of all items in an iterable
● power(): each element of the first array is raised to the power of the corresponding element of the second (element-wise)
● min(): returns the smallest element of an array along an axis
● max(): returns the largest element
● mean(): calculates the mean/average of the input values or data set
● average(): calculates the (optionally weighted) mean of the given data set

Variables: arr, rarr, a, ar, arr1, arr2, arr3 (NumPy arrays)

PROGRAM:
import numpy as np

arr = np.array([1, 2, 3, 4, 5, 6])
print("Given Array : ", arr)
print("Type : ", type(arr))
print("No of dimensions : ", arr.ndim)
print("Data type : ", arr.dtype)
rarr = arr.reshape(2, 3)
print("Reshaped array : ", rarr)
a = np.random.randint(1, 10)
print("Random number : ", a)
ar = np.array([1., 2., 3., 4., 0.5])
print("Reciprocal of array : ", np.reciprocal(ar))
print("Power of array : ", np.power(arr, 2))
print("Mod 2 of array : ", np.mod(arr, 2))
# Arithmetic operations
arr1 = np.array([1, 2, 3, 4])
arr2 = np.array([5, 6, 7, 8])
print("Addition : ", np.add(arr1, arr2))
print("Subtraction : ", np.subtract(arr1, arr2))
print("Multiplication : ", np.multiply(arr1, arr2))
print("Division : ", np.divide(arr1, arr2))
print("Sum of array : ", np.sum(arr1))
print("dot product : ", np.dot(arr1, 2))
print("Sin func : ", np.sin(arr1))
print("cos func : ", np.cos(arr1))
print("tan func : ", np.tan(arr1))
arr3 = np.array([[1, 2, 3], [4, 5, 6]])
print("Transpose matrix : ", arr3.transpose())
print("Length of array : ", len(arr1))
print("Arranged values : ", np.arange(4))
print("Min of arr2 : ", np.min(arr2))
print("Max of arr2 : ", np.max(arr2))
print("mean of arr1 : ", np.mean(arr1))
print("Average of arr2 : ", np.average(arr2))

Output:
Given Array :  [1 2 3 4 5 6]
Type :  <class 'numpy.ndarray'>
No of dimensions :  1
Data type :  int64
Reshaped array :  [[1 2 3]
 [4 5 6]]
Random number :  3
Reciprocal of array :  [1.         0.5        0.33333333 0.25       2.        ]
Power of array :  [ 1  4  9 16 25 36]
Mod 2 of array :  [1 0 1 0 1 0]
Addition :  [ 6  8 10 12]
Subtraction :  [-4 -4 -4 -4]
Multiplication :  [ 5 12 21 32]
Division :  [0.2        0.33333333 0.42857143 0.5       ]
Sum of array :  10
dot product :  [2 4 6 8]
Sin func :  [ 0.84147098  0.90929743  0.14112001 -0.7568025 ]
cos func :  [ 0.54030231 -0.41614684 -0.9899925  -0.65364362]
tan func :  [ 1.55740772 -2.18503986 -0.14254654  1.15782128]
Transpose matrix :  [[1 4]
 [2 5]
 [3 6]]
Length of array :  4
Arranged values :  [0 1 2 3]
Min of arr2 :  5
Max of arr2 :  8
mean of arr1 :  2.5
Average of arr2 :  6.5

Observation:
● NumPy arrays are different from Python lists and are optimized for numerical operations.
● import numpy as np imports the NumPy library into a Python program under the short alias np, i.e. we can write np.array instead of numpy.array to create an array.
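One optimization worth noting is broadcasting: NumPy applies arithmetic element-wise and stretches compatible shapes without Python loops. A minimal sketch:

import numpy as np

m = np.array([[1, 2, 3],
              [4, 5, 6]])      # shape (2, 3)
row = np.array([10, 20, 30])   # shape (3,) broadcasts across both rows
print(m + row)                 # [[11 22 33] [14 25 36]]
print(m * 2)                   # scalar broadcast: [[ 2  4  6] [ 8 10 12]]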

Exp 10    Week 4    Date

Aim :
Demonstrate Pandas methods that read and analyze a dataset, performing tasks such as data loading, display, manipulation, grouping, sorting, plotting, and handling missing values.

Libraries, Methods & Variables Used:

pandas as pd: a data manipulation and analysis library that provides data structures like DataFrame and Series.

Methods:
● pd.read_csv(): reads a CSV file into a DataFrame.
● .head(): returns the first few rows (default is 5) of the DataFrame to give a preview of the data.
● .tail(): returns the last few rows (default is 5) of the DataFrame to give a preview of the data.
● .info(): provides a summary of the DataFrame, including the number of rows, columns, column data types, and memory usage.
● .shape: returns the shape of the DataFrame, which is a tuple containing the number of rows and columns.
● .columns: returns the column labels of the DataFrame.
● .isnull(): detects missing values in the DataFrame.
● .unique(): finds the unique values in a column.
● sort_values(): sorts the DataFrame by the specified label.
● nunique(): returns a series with the total number of unique observations along the specified axis.
● rename(): renames columns in a DataFrame using the rename() function.

Variables:
df = data of a CSV file
new = DataFrame after renaming a column
PROGRAM:
import pandas as pd

df = pd.read_csv('std.csv')
print("Content of std : \n", df.to_string())
print("Last two rows : \n", df.tail(2))
print("First Two rows : \n", df.head(2))
print("Attribute data types : \n", df.dtypes)
print("Size of std : ", df.size)
print("Shape of std : ", df.shape)
print("Unique data : ", df['Name'].unique())
print("std Column names : ", df.columns)
print("Highest Ages : \n", df.nlargest(2, 'Age'))
new = df.rename(columns={'Name': 'Sname'})
print("Rename the Name : \n", new.head(2))
new['clg'] = 'MVGR'
print("Adding new column : \n", new.tail(2))
Output:
Content of std :
       Name  Age Gender
0   Gowtham   19      M
1     Bhanu   19      M
2     Lilli   18      F
3   Bhaskar   20      M
4     Bhanu   18      M
Last two rows :
      Name  Age Gender
3  Bhaskar   20      M
4    Bhanu   18      M
First Two rows :
      Name  Age Gender
0  Gowtham   19      M
1    Bhanu   19      M
Attribute data types :
Name      object
Age        int64
Gender    object
dtype: object
Size of std :  15
Shape of std :  (5, 3)
Unique data :  ['Gowtham' 'Bhanu' 'Lilli' 'Bhaskar']
std Column names :  Index(['Name', 'Age', 'Gender'], dtype='object')
Highest Ages :
      Name  Age Gender
3  Bhaskar   20      M
0  Gowtham   19      M
Rename the Name :
     Sname  Age Gender
0  Gowtham   19      M
1    Bhanu   19      M
Adding new column :
     Sname  Age Gender   clg
3  Bhaskar   20      M  MVGR
4    Bhanu   18      M  MVGR
Observation:
● An AttributeError occurs if inplace is used with methods that do not support it. Confirm that the method supports the inplace parameter; for instance, fillna() supports inplace=True, but other methods may not.
● pd.options.display.max_rows = 999 lets us display up to 999 rows.
● astype() is used to change the datatype of a column.
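The aim also mentions grouping, sorting, and plotting; a minimal sketch on the same df (assumes the std.csv columns shown above, and matplotlib for the plot):

# Sort rows by Age, descending
print(df.sort_values('Age', ascending=False))

# Group by Gender and compute the mean age per group
print(df.groupby('Gender')['Age'].mean())

# Quick bar plot of ages
df.plot(kind='bar', x='Name', y='Age')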
WEEK - 5
Exp 11    Week 5    Date

Aim :
Demonstrate loading of remote datasets using pandas.

Libraries, Methods & Variables Used:

pandas as pd: data manipulation and analysis library that provides data structures like DataFrame and Series.
sklearn.datasets import load_iris: load_iris is a function from sklearn. The object it returns is dictionary-like; the data and target are NumPy arrays, and target_names holds the possible targets as text. The dataset is often used in data mining, classification, and clustering examples and to test algorithms.

● load_iris(): loads the Iris dataset and returns a dictionary-like object
● .map(): applies a given function (or mapping) to every item of a Series/iterable and returns the mapped result
● .target_names: the array of class names for the targets
● .data: the feature data
● .feature_names: the attribute (column) names
● enumerate(): loops over an iterable and automatically provides an index for each item

Variables: iris, df_iris

PROGRAM:
# Import necessary libraries
import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
# Convert to DataFrame
df_iris = pd.DataFrame(data=iris.data, columns=iris.feature_names)
print(df_iris.columns)
# print(df_iris.to_string())
df_iris['species'] = iris.target
print(df_iris["species"])
# Map target values to species names
df_iris['species'] = df_iris['species'].map(dict(enumerate(iris.target_names)))
print(df_iris["species"])
Output:
Index(['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)',
       'petal width (cm)'],
      dtype='object')
0      0
1      0
2      0
3      0
4      0
      ..
145    2
146    2
147    2
148    2
149    2
Name: species, Length: 150, dtype: int64
0         setosa
1         setosa
2         setosa
3         setosa
4         setosa
         ...
145    virginica
146    virginica
147    virginica
148    virginica
149    virginica
Name: species, Length: 150, dtype: object

Observation:
1. Data Loaded: the Iris dataset is loaded into a DataFrame.
2. Data Structure: the data has 150 rows and 5 columns (4 features + 1 target).
3. Data Summary: the summary statistics show the central tendency and variability of the features.
4. Data Types: the features are numeric, while the target is categorical.
5. Data Size: the dataset has 150 samples.
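The aim mentions remote datasets, and pd.read_csv also accepts a URL directly. A minimal sketch, assuming network access; the URL below is a commonly mirrored Iris CSV, and any raw CSV link works the same way:

import pandas as pd

url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv"
remote_df = pd.read_csv(url)   # pandas downloads and parses the file
print(remote_df.head())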

Exp 12    Week 5    Date

Aim :
Demonstrate loading of toy/real-world datasets using sklearn.datasets.

Libraries, Methods & Variables Used:

pandas as pd
sklearn.datasets import load_iris

● load_iris()
● .data
● .feature_names
● .target
● .target_names
● Categorical.from_codes(iris.target, iris.target_names)

Variables: iris_df

PROGRAM:
from sklearn.datasets import load_iris
import pandas as pd

# Load the Iris dataset
iris = load_iris()

# Create a DataFrame from the dataset for easier manipulation
iris_df = pd.DataFrame(data=iris.data, columns=iris.feature_names)
iris_df['species'] = pd.Categorical.from_codes(iris.target, iris.target_names)

# Print the first few rows of the DataFrame
print(iris_df.head())

# Print a summary of the DataFrame
print(iris_df.describe())

Output:
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm) \
0                5.1               3.5                1.4               0.2
1                4.9               3.0                1.4               0.2
2                4.7               3.2                1.3               0.2
3                4.6               3.1                1.5               0.2
4                5.0               3.6                1.4               0.2

  species
0  setosa
1  setosa
2  setosa
3  setosa
4  setosa
       sepal length (cm)  sepal width (cm)  petal length (cm) \
count         150.000000        150.000000         150.000000
mean            5.843333          3.057333           3.758000
std             0.828066          0.435866           1.765298
min             4.300000          2.000000           1.000000
25%             5.100000          2.800000           1.600000
50%             5.800000          3.000000           4.350000
75%             6.400000          3.300000           5.100000
max             7.900000          4.400000           6.900000

       petal width (cm)
count        150.000000
mean           1.199333
std            0.762238
min            0.100000
25%            0.300000
50%            1.300000
75%            1.800000
max            2.500000

Observation:
The dataset includes four features (measurements) for each flower:
1. Sepal length (cm)
2. Sepal width (cm)
3. Petal length (cm)
4. Petal width (cm)
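Other toy datasets load the same way; a minimal sketch with load_wine, which is also in sklearn.datasets:

from sklearn.datasets import load_wine
import pandas as pd

wine = load_wine()
wine_df = pd.DataFrame(wine.data, columns=wine.feature_names)
wine_df['target'] = wine.target
print(wine_df.shape)       # (178, 14): 13 features + 1 target column
print(wine.target_names)   # ['class_0' 'class_1' 'class_2']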

Exp 13    Week 5    Date

Aim :
Demonstrate identification of missing values in a dataset and apply various cleaning techniques using pandas.

Libraries, Methods & Variables Used:

NumPy (numpy): NumPy is a fundamental package for scientific computing in Python. It provides support for arrays, matrices, and various mathematical functions.
Pandas (pandas): Pandas is a powerful data manipulation and analysis library. It provides data structures like DataFrames, which are highly versatile for handling tabular data. Role in machine learning: it can be used to load, clean, and prepare datasets for model training, making it easier to manage large datasets.

Methods:
● pd.read_csv(): reads a CSV file into a DataFrame.
● .head(): returns the first few rows (default is 5) of the DataFrame to give a preview of the data.
● .tail(): returns the last few rows (default is 5) of the DataFrame to give a preview of the data.
● .info(): provides a summary of the DataFrame, including the number of rows, columns, column data types, and memory usage.
● .shape: returns the shape of the DataFrame, which is a tuple containing the number of rows and columns.
● .columns: returns the column labels of the DataFrame.
● .isnull(): detects missing values in the DataFrame.
● .unique(): finds the unique values in a column.
● sort_values(): sorts the DataFrame by the specified label.
● nunique(): returns a series with the total number of unique observations along the specified axis.
● rename(): renames columns in a DataFrame.
● sample(): generates a random sample of rows or columns from the calling DataFrame.
● describe(): a convenient way to get a quick statistical overview of the data.
● to_string(): converts a DataFrame or Series into a string representation.
● dataframe.isnull().sum(): returns the number of missing values per column in the dataset.
● duplicated(): returns a Series of True/False values describing which rows in the DataFrame are duplicates.

Variables:
u_c = the used-cars dataset

PROGRAM:
import pandas as pd
import numpy as np

u_c = pd.read_csv("used_cars.csv")
u_c.head(5)
u_c.columns
u_c.describe()
u_c.shape
u_c.dtypes
u_c.info()
# The inplace=True argument modifies the DataFrame directly and returns None,
# so assigning the result to a new variable would store None.
u_c.dropna(inplace=True)
# Now u_c contains the DataFrame with NA rows dropped.
u_c.isnull().sum()  # Check for missing values in the modified u_c
print(u_c)
# Cleaning
u_c.duplicated()
u_c.drop_duplicates(inplace=True)
u_c.isnull().sum()
# Note: dropna above already removed rows with missing fuel_type,
# so this fillna only has an effect if it runs before dropna.
u_c.fuel_type = u_c.fuel_type.fillna("unknown")
u_c.isnull().sum()
# Changing datatype
u_c['model_year'] = u_c['model_year'].astype(int)
Output:
<class 'pandas.core.frame.DataFrame'>
<
‭RangeIndex: 4009 entries, 0 to 4008‬
‭Data columns (total 12 columns):‬
‭# Column Non-Null Count Dtype‬
‭--- ------ -------------- -----‬
‭0 brand 4009 non-null object‬
‭1 model 4009 non-null object‬
‭2 model_year 4009 non-null int64‬
‭3 milage 4009 non-null object‬
‭4 fuel_type 3839 non-null object‬
‭5 engine 4009 non-null object‬
‭6 transmission 4009 non-null object‬
‭7 ext_col 4009 non-null object‬
‭8 int_col 4009 non-null object‬
‭9 accident 3896 non-null object‬
‭10 clean_title 3413 non-null object‬
‭11 price 4009 non-null object‬
‭dtypes: int64(1), object(11)‬
‭Index(['brand', 'model', 'model_year', 'milage', 'fuel_type', 'engine',‬
‭'transmission', 'ext_col', 'int_col', 'accident', 'clean_title',‬
‭'price'],‬
‭dtype='object')‬
‭memory usage: 376.0+ KB‬
‭brand model model_year milage \‬
‭0 Ford Utility Police Interceptor Base 2013 51,000 mi.‬
‭1 Hyundai Palisade SEL 2021 34,742 mi.‬
‭3 INFINITI Q50 Hybrid Sport 2015 88,900 mi.‬
‭6 Audi S3 2.0T Premium Plus 2017 84,000 mi.‬
‭7 BMW 740 iL 2001 242,000 mi.‬
‭... ... ... ... ...‬
‭4003 Mercedes-Benz E-Class E 300 4MATIC 2018 53,705 mi.‬
‭4004 Bentley Continental GT Speed 2023 714 mi.‬
‭4005 Audi S4 3.0T Premium Plus 2022 10,900 mi.‬
4007 Ford F-150 Raptor 2020 33,000 mi.
4008 BMW X3 xDrive30i 2020 43,000 mi.

fuel_type engine \
‭0 E85 Flex Fuel 300.0HP 3.7L V6 Cylinder Engine Flex Fuel Capa...‬
‭1 Gasoline 3.8L V6 24V GDI DOHC‬
‭3 Hybrid 354.0HP 3.5L V6 Cylinder Engine Gas/Electric H...‬
‭6 Gasoline 292.0HP 2.0L 4 Cylinder Engine Gasoline Fuel‬
‭7 Gasoline 282.0HP 4.4L 8 Cylinder Engine Gasoline Fuel‬
‭... ... ...‬
‭4003 Gasoline 241.0HP 2.0L 4 Cylinder Engine Gasoline Fuel‬
‭4004 Gasoline 6.0L W12 48V PDI DOHC Twin Turbo‬
‭4005 Gasoline 349.0HP 3.0L V6 Cylinder Engine Gasoline Fuel‬
‭4007 Gasoline 450.0HP 3.5L V6 Cylinder Engine Gasoline Fuel‬
‭4008 Gasoline 248.0HP 2.0L 4 Cylinder Engine Gasoline Fuel‬

‭transmission ext_col int_col \‬


0‭ 6-Speed A/T Black Black‬
‭1 8-Speed Automatic Moonlight Cloud Gray‬
‭3 7-Speed A/T Black Black‬
‭6 6-Speed A/T Blue Black‬
‭7 A/T Green Green‬
‭... ... ... ...‬
‭4003 A/T Black Black‬
‭4004 8-Speed Automatic with Auto-Shift C / C Hotspur‬
‭4005 Transmission w/Dual Shift Mode Black Black‬
‭4007 A/T Blue Black‬
‭4008 A/T Gray Brown‬

‭accident clean_title price‬


0‭ At least 1 accident or damage reported Yes 10,300‬
‭1 At least 1 accident or damage reported Yes 38,005‬
‭3 None reported Yes 15,500‬
‭6 None reported Yes 31,000‬
‭7 None reported Yes 7,300‬
‭... ... ... ...‬
‭4003 At least 1 accident or damage reported Yes 25,900‬
‭4004 None reported Yes 3,49,950‬
‭4005 None reported Yes 53,900‬
‭4007 None reported Yes 62,999‬
‭4008 At least 1 accident or damage reported Yes 40,000‬
‭brand 0‬
‭model 0‬
‭model_year 0‬
‭milage 0‬
‭fuel_type 170‬
engine 0
‭transmission 0‬
‭ext_col 0‬
‭int_col 0‬
‭accident 113‬
‭clean_title 596‬
‭price 0‬
‭dtype: int64‬
‭brand 0‬
‭model 0‬
‭model_year 0‬
‭milage 0‬
‭fuel_type 0‬
‭engine 0‬
‭transmission 0‬
‭ext_col 0‬
‭int_col 0‬
‭accident 0‬
‭clean_title 0‬
‭price 0‬
‭0 False‬
‭1 False‬
‭2 False‬
‭3 False‬
‭4 False‬
‭...‬
‭4004 False‬
‭4005 False‬
‭4006 False‬
‭4007 False‬
‭4008 False‬
‭Length: 4009, dtype: bool‬
‭brand 0‬
‭model 0‬
‭model_year 0‬
‭milage 0‬
‭fuel_type 0‬
‭engine 0‬
‭transmission 0‬
‭ext_col 0‬
‭int_col 0‬
‭accident 113‬
‭clean_title 596‬
‭price 0‬
‭dtype: int64‬
‭[3269 rows x 12 columns]‬
Observation:
● Using dropna first removes the rows that fillna would have filled, so for a given column choose one strategy: either fill the missing values or drop the rows.
● An AttributeError occurs if inplace is used with methods that do not support it. Confirm that the method supports the inplace parameter; for instance, fillna() supports inplace=True, but other methods may not.
● pd.options.display.max_rows = 999 lets us display up to 999 rows.
● astype() is used to change the datatype of a column.
● Data structure: the data is stored in a pandas DataFrame, which is a 2-dimensional labeled data structure.
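A minimal sketch of a per-column strategy, run before any dropna so the fills can take effect; the column names come from the dataset above, and the "No" fill value for clean_title is an assumption:

import pandas as pd

u_c = pd.read_csv("used_cars.csv")
# Fill columns where a placeholder is acceptable...
u_c['fuel_type'] = u_c['fuel_type'].fillna("unknown")
u_c['clean_title'] = u_c['clean_title'].fillna("No")   # assumed default
# ...then drop rows that still have missing values (e.g. accident)
u_c.dropna(inplace=True)
print(u_c.isnull().sum())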

Exp 14    Week 5    Date

Aim :
Demonstrate the Logistic Regression supervised ML algorithm with any dataset.

Libraries, Methods & Variables Used:

● pandas as pd: a powerful data manipulation and analysis library.
● sklearn.model_selection: a submodule of scikit-learn that provides utilities for model selection and validation.
● sklearn.linear_model: a submodule of scikit-learn that provides linear models for classification and regression.
● sklearn.metrics: a submodule of scikit-learn that provides tools for evaluating the performance of models.

● train_test_split(): a function from sklearn.model_selection that splits the dataset into training and testing sets.
● LogisticRegression(): a class from sklearn.linear_model that implements logistic regression.
● fit(): a method that fits the model to the training data.
● predict(): a method that uses the trained model to predict the target variable.
● accuracy_score(): a function from sklearn.metrics that computes the ratio of correctly predicted labels to total labels.
● confusion_matrix(): a function from sklearn.metrics that computes the confusion matrix to evaluate the accuracy of a classification.

Variables: data, X, y, model, accuracy, conf_matrix, X_train, X_test, y_train, y_test
PROGRAM:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score

# Create the dataset
data = {
    'Study_Hours': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    'Attendance': [50, 60, 65, 70, 80, 85, 90, 95, 100, 100],
    'Pass': [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
}
# Convert to DataFrame
df = pd.DataFrame(data)
# Define independent variables (features) and dependent variable (target)
X = df[['Study_Hours', 'Attendance']]
y = df['Pass']
# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
print(X_train)
# Create and fit the logistic regression model
model = LogisticRegression()
model.fit(X_train, y_train)
# Predict on the test set
y_pred = model.predict(X_test)
print(y_pred)
# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
conf_matrix = confusion_matrix(y_test, y_pred)
print('Accuracy: ', accuracy)
print("Confusion Matrix:")
print(conf_matrix)
# Visualize the data points coloured by pass/fail class
plt.scatter(df['Study_Hours'], df['Attendance'], c=df['Pass'], cmap='bwr', alpha=0.7,
            edgecolor='k')
plt.xlabel('Study Hours')
plt.ylabel('Attendance Percentage')
plt.title('Logistic Regression: Pass/Fail Status')
plt.show()

Output:
   Study_Hours  Attendance
0            1          50
7            8          95
2            3          65
9           10         100
4            5          80
3            4          70
6            7          90
[1 0 1]
Accuracy:  1.0
Confusion Matrix:
[[1 0]
 [0 2]]

Observation:
1. Data Preparation: the code builds the simple dataset, separates features from the target, and splits the data into training and testing sets.
2. Model Training: a Logistic Regression model is trained on the training data.
3. Model Evaluation: the model is evaluated on the test data, and the accuracy and confusion matrix are calculated.
4. Model Performance: an accuracy of 1.0 and a purely diagonal confusion matrix show that every test sample was classified correctly.
5. The scatter plot shows the two classes (pass/fail) separating cleanly as study hours and attendance increase.
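Logistic regression also outputs class probabilities, not just labels. A minimal sketch continuing from the fitted model above; the new student's values are hypothetical:

# Probability of [fail, pass] for each test sample
print(model.predict_proba(X_test))

# Predict for a new (hypothetical) student: 6 study hours, 75% attendance
new_student = pd.DataFrame({'Study_Hours': [6], 'Attendance': [75]})
print(model.predict(new_student))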

Exp 15    Week 5    Date

Aim :
Demonstrate the Linear Regression supervised ML algorithm with any dataset.

Libraries, Methods & Variables Used:

● numpy as np: a fundamental package for scientific computing with Python, used for efficient array operations.
● pandas as pd: a powerful data manipulation and analysis library.
● sklearn.model_selection: a submodule of scikit-learn that provides utilities for model selection and validation.
● sklearn.linear_model: a submodule of scikit-learn that provides linear models for classification and regression.
● sklearn.metrics: a submodule of scikit-learn that provides tools for evaluating the performance of models.

● LinearRegression(): linear regression models the relationship between a dependent variable (label) and one or more independent variables (features) by fitting a linear equation to the data.
● corr(): useful for checking the relationships between different features; can help identify multicollinearity.
● bar(): useful for visualizing categorical data or the distribution of a variable.
● drop(): a method in pandas used to drop rows or columns from a DataFrame.
● fit(): trains the linear regression model on the training data.
● intercept_: this attribute of the trained linear regression model represents the intercept (constant term) of the regression line.
● coef_: after fitting the model, the coefficients can be accessed through this attribute.
● predict(): generates predictions using the trained linear regression model.

Variables: LR, df, df1, X, y, X_train, X_test, y_train, y_test
PROGRAM:
SIMPLE LINEAR REGRESSION:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('Salary_dataset.csv')
df1 = df[['YearsExperience', 'Salary']]
plt.scatter(df1["YearsExperience"], df1["Salary"])
df1.corr()
plt.bar(df1["YearsExperience"], df1["Salary"])
from sklearn.model_selection import train_test_split
X = df1.drop('Salary', axis=1)
# print(X)
y = df1.Salary
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.30)
from sklearn.linear_model import LinearRegression
LR = LinearRegression()
LR.fit(X_train, y_train)
print(LR.intercept_)
print(LR.coef_)
LR.predict(X_test)

MULTIPLE LINEAR REGRESSION:
import pandas as pd
import numpy as np
u_c = pd.read_csv("used_cars.csv")
print(u_c.head(5))
u_c.columns
u_c.describe()
print(u_c.shape)
print(u_c.dtypes)
u_c.info()
print(u_c.isnull().sum())
u_c.dropna(inplace=True)
print(u_c.isnull().sum())
# Cleaning
print(u_c.duplicated())
u_c.drop_duplicates(inplace=True)
# Strip the thousands separators so price becomes numeric
u_c['price'] = u_c['price'].str.replace(',', '', regex=True).astype(float)
from sklearn.model_selection import train_test_split
X = u_c.drop('price', axis=1)
X = pd.get_dummies(X)   # one-hot encode the categorical columns
y = u_c.price
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.300)
from sklearn.linear_model import LinearRegression
LR = LinearRegression()
LR.fit(x_train, y_train)
print(LR.intercept_)
print(LR.coef_)
y_pred = LR.predict(x_test)
print(y_pred)
Output:
25842.36521257826
[9360.26128619]
‭array([ 40818.78327049, 123189.08258899, 65155.46261459, 63283.41035735,‬
‭115700.87356004, 108212.66453108, 116636.89968866, 64219.43648597,‬
‭76387.77615802])‬

‭Multiple linear regression:‬


‭brand model model_year milage \‬
‭0 Ford Utility Police Interceptor Base 2013 51,000 mi.‬
‭1 Hyundai Palisade SEL 2021 34,742 mi.‬

fuel_type engine \
0‭ E85 Flex Fuel 300.0HP 3.7L V6 Cylinder Engine Flex Fuel Capa...‬
‭1 Gasoline 3.8L V6 24V GDI DOHC‬

‭transmission ext_col int_col \‬


‭0 6-Speed A/T Black Black‬
‭1 8-Speed Automatic Moonlight Cloud Gray‬

accident clean_title price


0‭ At least 1 accident or damage reported Yes 10,300‬
‭1 At least 1 accident or damage reported Yes 38,005‬

‭(4009, 12)‬
brand object
‭model object‬
‭model_year int64‬
‭milage object‬
‭fuel_type object‬
‭engine object‬
‭transmission object‬
‭ext_col object‬
‭int_col object‬
‭accident object‬
‭clean_title object‬
‭price object‬
‭dtype: object‬
‭<class 'pandas.core.frame.DataFrame'>‬
‭RangeIndex: 4009 entries, 0 to 4008‬
‭Data columns (total 12 columns):‬
‭# Column Non-Null Count Dtype‬
‭--- ------ -------------- -----‬
‭0 brand 4009 non-null object‬
‭1 model 4009 non-null object‬
‭2 model_year 4009 non-null int64‬
‭3 milage 4009 non-null object‬
‭4 fuel_type 3839 non-null object‬
‭5 engine 4009 non-null object‬
‭6 transmission 4009 non-null object‬
‭7 ext_col 4009 non-null object‬
‭8 int_col 4009 non-null object‬
‭9 accident 3896 non-null object‬
‭10 clean_title 3413 non-null object‬
‭11 price 4009 non-null object‬
‭dtypes: int64(1), object(11)‬
‭memory usage: 376.0+ KB‬
‭fuel_type 170‬
‭accident 113‬
‭clean_title 596‬
‭dtype: int64‬
‭0 False‬
‭1 False‬
‭3 False‬
‭6 False‬
‭7 False‬
‭...‬
‭4003 False‬
‭4004 False‬
‭4005 False‬
‭4007 False‬
‭4008 False‬
‭Length: 3269, dtype: bool‬
‭-1947876.7036248783‬
‭[ 1006.12786195 -22963.95727474 -16012.05946607 ... 624.13596452‬
‭-352.07534734 0. ]‬
‭[ 3.46184624e+04 3.94432492e+04 1.37342567e+03 7.32900762e+04‬
‭5.67404888e+04 7.01105267e+04 1.47821619e+05 6.11777246e+04‬
‭-4.13817338e+02 4.37912418e+03]‬
Observation:
● The code loads the salary and used-cars datasets and splits them into training and testing sets. This allows evaluating the model's performance on unseen data.
● A LinearRegression model is trained using the training data and makes predictions on the test data.
● A ValueError is raised if train_test_split is applied incorrectly, leading to mismatched training and testing sets.
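The models above are never scored. A minimal sketch evaluating the simple-regression predictions with sklearn.metrics (X_test and y_test from the first program):

from sklearn.metrics import mean_squared_error, r2_score

y_pred = LR.predict(X_test)
print("MSE :", mean_squared_error(y_test, y_pred))   # lower is better
print("R2  :", r2_score(y_test, y_pred))             # closer to 1 is better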

Exp 16    Week 5    Date

Aim :
Demonstrate various plots using matplotlib.

Libraries, Methods & Variables Used:

matplotlib import pyplot as pt: this imports pyplot under an alias (plt is the alias most commonly used in the Python community; this program uses pt).

● bar(): used to create bar charts in matplotlib
● show(): displays the plot
● stem(): creates stem plots (a type of plot that displays data as lines with markers on top)
● plot(): plots lines or markers between points
● title(): sets the title of the plot
● legend(): adds a legend to the plot
● hist(): creates a histogram
● groupby(...).sum().plot(kind='pie', y=...): a pandas method chain for plotting pie charts

Variables: x = 1-d data, y = 1-d data

PROGRAM:
# plot
from matplotlib import pyplot as pt
x = [1, 3, 5, 7, 9]
y = [6, 9, 3, 15, 17]
pt.plot(x, y)
pt.show()

# bar
from matplotlib import pyplot as pt
x = [1, 3, 5, 7, 9]
y = [6, 9, 3, 15, 17]
pt.bar(x, y)
pt.show()

# stem
from matplotlib import pyplot as pt
x = [1, 3, 5, 7, 9]
y = [6, 9, 3, 15, 17]
pt.stem(x, y)
pt.show()

# scatter
from matplotlib import pyplot as pt
x = [1, 3, 5, 7, 9]
y = [6, 9, 3, 15, 17]
pt.scatter(x, y)
pt.show()

# legend
from matplotlib import pyplot as pt
x = [1, 3, 5, 7, 9]
y = [6, 9, 3, 15, 17]
pt.plot(x)
pt.plot(y)
pt.legend(['Weight', 'Height'])
pt.show()

# pie chart
import pandas as pd
df = pd.DataFrame({'party': ['ycp', 'tdp', 'janasena', 'bjp'],
                   'result': [11, 138, 21, 5]})
df.groupby(['party']).sum().plot(kind='pie', y='result')

Output:
[Figures: line plot, bar chart, stem plot, scatter plot, line plot with legend, and pie chart of results by party.]

Observation:
● pt.plot() draws a line plot between the given points, and pt.show() displays it in a window.
● pt.scatter() creates a scatter plot using x and y as the coordinates for the points.
● pt.bar() creates a bar plot using x for the bar positions and y for the bar heights.
● pt.show() likewise displays the stem, legend, and pie plots in a window.
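None of the snippets above label their axes. A minimal sketch adding a title and axis labels; the label text is illustrative:

from matplotlib import pyplot as pt

x = [1, 3, 5, 7, 9]
y = [6, 9, 3, 15, 17]
pt.plot(x, y, marker='o')
pt.title('Sample Line Plot')   # illustrative title
pt.xlabel('x values')
pt.ylabel('y values')
pt.show()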
WEEK - 6
Exp 17    Week 6    Date

Aim :
Using the handwritten digits dataset (or any other dataset), classify using an SVM classifier.

Libraries, Methods & Variables Used:

sklearn import datasets, svm, metrics: imports the datasets module, which provides datasets like Iris, digits, breast cancer, etc., useful for machine learning practice and experiments.
sklearn.model_selection import train_test_split: imports the train_test_split function, which is used to split a dataset into training and testing subsets.
import matplotlib.pyplot as plt: imports matplotlib.pyplot as plt for creating visualizations such as line plots, histograms, and scatter plots.

● load_breast_cancer(): loads the breast cancer dataset from sklearn.datasets, which is commonly used for binary classification tasks.
● feature_names: gives the names of the features in the dataset (e.g., mean radius, mean texture, etc.).
● target: provides the labels (target values) of the dataset, where 0 indicates malignant and 1 indicates benign tumors.
● shape: the shape of the data (number of samples and number of features) is available via the .shape attribute.
● accuracy_score(y_test, pre): computes the accuracy, which is the ratio of correctly predicted labels to total labels.

Variables: cancer, clf, pre, acc, x_train, x_test, y_train, y_test

PROGRAM:
from sklearn import datasets, svm, metrics
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt

cancer = datasets.load_breast_cancer()
data, target = cancer.data, cancer.target
print("Data shape : ", data.shape)
print("Target shape : ", target.shape)
print("Feature name : ", cancer.feature_names)
clf = svm.SVC(gamma=0.001)
x_train, x_test, y_train, y_test = train_test_split(data, target, test_size=0.5, shuffle=True, random_state=42)
clf.fit(x_train, y_train)
pre = clf.predict(x_test)
print("classification report for classifier", clf)
print(metrics.classification_report(y_test, pre))
acc = accuracy_score(y_test, pre)
print("Accuracy :", f"{acc*100:.2f}")

Output:
Data shape :  (569, 30)
Target shape :  (569,)
Feature name :  ['mean radius' 'mean texture' 'mean perimeter' 'mean area'
 'mean smoothness' 'mean compactness' 'mean concavity'
 'mean concave points' 'mean symmetry' 'mean fractal dimension'
 'radius error' 'texture error' 'perimeter error' 'area error'
 'smoothness error' 'compactness error' 'concavity error'
 'concave points error' 'symmetry error' 'fractal dimension error'
 'worst radius' 'worst texture' 'worst perimeter' 'worst area'
 'worst smoothness' 'worst compactness' 'worst concavity'
 'worst concave points' 'worst symmetry' 'worst fractal dimension']
classification report for classifier SVC(gamma=0.001)
              precision    recall  f1-score   support

           0       0.86      0.97      0.91        98
           1       0.98      0.92      0.95       187

    accuracy                           0.94       285
   macro avg       0.92      0.94      0.93       285
weighted avg       0.94      0.94      0.94       285
Accuracy : 93.68

Observation:
● SVM is best suited for classification tasks.
● The primary objective of the SVM algorithm is to identify the optimal hyperplane in an N-dimensional space that can effectively separate data points into different classes in the feature space.
● The algorithm ensures that the margin between the closest points of different classes, known as support vectors, is maximized.
● The dimension of the hyperplane depends on the number of features. For instance, if there are two input features, the hyperplane is simply a line, and if there are three input features, the hyperplane becomes a 2-D plane.
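SVMs are sensitive to feature scale. A minimal sketch that standardizes the features before fitting, continuing from the split above; scaling typically helps SVM performance, though the exact score will differ from the unscaled run:

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
x_train_s = scaler.fit_transform(x_train)   # fit the scaler on training data only
x_test_s = scaler.transform(x_test)         # reuse the same scaling on test data

clf_s = svm.SVC(gamma=0.001)
clf_s.fit(x_train_s, y_train)
print("Scaled accuracy :", accuracy_score(y_test, clf_s.predict(x_test_s)))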

Exp 18    Week 6    Date

Aim :
Using any dataset of your choice, cluster the dataset using the K-Means algorithm.

Libraries, Methods & Variables Used:

sklearn.datasets import make_blobs
sklearn.cluster import KMeans
matplotlib.pyplot as plt

● make_blobs(): generates synthetic data points grouped into clusters, useful for testing clustering algorithms like KMeans
● KMeans(): an unsupervised machine learning algorithm that groups data points into a predefined number of clusters based on feature similarity
● fit(X): fits the KMeans model on the dataset X
● kmeans.labels_: after fitting the KMeans model, labels_ gives the cluster assignment (label) for each data point
● cluster_centers_: provides the coordinates of the cluster centers
● scatter(): the scatter() function from matplotlib.pyplot used to create scatter plots

Variables: n_samples, X, y, random_state, labels, cluster_centers

PROGRAM:
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

n_samples = 300
random_state = 170
X, y = make_blobs(n_samples=n_samples, random_state=random_state)
kmeans = KMeans(n_clusters=3, random_state=random_state, n_init="auto")
# Fit the model to the data
kmeans.fit(X)
# Get the cluster labels assigned to each data point
labels = kmeans.labels_
# Get the coordinates of the cluster centers
cluster_centers = kmeans.cluster_centers_
# Plot the data points and cluster centers
plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis')
plt.scatter(cluster_centers[:, 0], cluster_centers[:, 1], c='white', s=200, alpha=0.75)
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('K-Means Clustering')
plt.show()

Output:
[Figure: scatter plot of the three clusters with the cluster centers marked in white.]
Observation:
● The choice of the number of clusters (K) is crucial for KMeans clustering.
● If K is too small, you might under-cluster your data (combine distinct clusters).
● If K is too large, you might overfit (create unnecessary clusters).
● Visualizing the results of KMeans clustering can help in understanding the cluster structure.
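A common way to pick K is the elbow method: plot the within-cluster sum of squares (the model's inertia_) for several K and look for the bend. A minimal sketch on the same X:

inertias = []
ks = range(1, 9)
for k in ks:
    km = KMeans(n_clusters=k, random_state=random_state, n_init="auto").fit(X)
    inertias.append(km.inertia_)   # within-cluster sum of squares

plt.plot(ks, inertias, marker='o')
plt.xlabel('K (number of clusters)')
plt.ylabel('Inertia')
plt.title('Elbow Method')
plt.show()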
WEEK - 7
Exp 19    Week 7    Date

Aim :
Demonstrate different image processing operations on colour images.
1. Reading an image and displaying the image.
2. Reading a colour image and converting it into grayscale.
3. Program to rotate an image by 90/180 degrees clockwise using OpenCV.
4. Program to write and display an image using OpenCV.
5. Program for extracting image properties using OpenCV.
6. Program for accessing and modifying pixel values.
7. Program for colour mixing in OpenCV.

Libraries, Methods & Variables Used:

cv2: OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library; its tasks include image processing and real-time computer vision.
numpy as np: NumPy (Numerical Python) is a fundamental package for scientific computing in Python, providing support for arrays and matrices.
matplotlib.pyplot as plt: Matplotlib is a plotting library for Python and its numerical mathematics extension NumPy.

● cv2.imread(): reads an image from the specified file
● cv2.cvtColor(): converts an image from one color space to another
● cv2.rotate(): rotates an image by specified angles
● cv2.imwrite(): writes an image to a specified file
● image.shape: returns a tuple containing the dimensions of the image: (height, width, channels)
● image.size: returns the total number of pixels (elements) in the image array (height × width × channels)
● cv2.addWeighted(): blends two images together based on specified weights
● plt.show(): part of Matplotlib, used for displaying images

Variables: image, color_image, gray_image, rotated_90, rotated_180, saved_image, height, width, channels, red_image, blue_image, mixed_image

PROGRAM:
1)
import cv2
import matplotlib.pyplot as plt

# Read the image
image = cv2.imread('dog.jpg')  # Replace 'dog.jpg' with your image file
# Display the image
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.title('Original Image')
plt.axis('off')
plt.show()

2)
# Read the color image
color_image = cv2.imread('dog.jpg')
# Convert the image to grayscale
gray_image = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
# Display the grayscale image
plt.imshow(gray_image, cmap='gray')
plt.title('Grayscale Image')
plt.axis('off')
plt.show()

3)
# Read the image
image = cv2.imread('dog.jpg')
# Rotate the image by 90 degrees clockwise
rotated_90 = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)
# Rotate the image by 180 degrees
rotated_180 = cv2.rotate(image, cv2.ROTATE_180)
# Display the rotated images
plt.figure(figsize=(10, 5))
plt.subplot(1, 2, 1)
plt.imshow(cv2.cvtColor(rotated_90, cv2.COLOR_BGR2RGB))
plt.title('Rotated 90 Degrees Clockwise')
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(cv2.cvtColor(rotated_180, cv2.COLOR_BGR2RGB))
plt.title('Rotated 180 Degrees')
plt.axis('off')
plt.show()

4)
# Read the image
image = cv2.imread('dog.jpg')
# Write the image to a new file
cv2.imwrite('output_image.jpg', image)
# Display the saved image
saved_image = cv2.imread('output_image.jpg')
plt.imshow(cv2.cvtColor(saved_image, cv2.COLOR_BGR2RGB))
plt.title('Saved Image')
plt.axis('off')
plt.show()

5)
# Read the image
image = cv2.imread('dog.jpg')
# Get image properties
height, width, channels = image.shape
size = image.size
# Print image properties
print(f"Image Height: {height}")
print(f"Image Width: {width}")
print(f"Number of Channels: {channels}")
print(f"Image Size (in bytes): {size}")

6)
# Read the image
image = cv2.imread('dog.jpg')
# Accessing a pixel value at (x=100, y=50)
pixel_value = image[50, 100]  # BGR format
print(f"Original Pixel Value at (100, 50): {pixel_value}")
# Modify the pixel value to a new color (e.g., pure red)
image[50, 100] = [0, 0, 255]  # BGR format for red
# Display the modified image
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.title('Modified Image')
plt.axis('off')
plt.show()

7)
import numpy as np
# Create two solid color images (red and blue)
red_image = np.zeros((300, 300, 3), dtype=np.uint8)
red_image[:] = [0, 0, 255]  # BGR for red
blue_image = np.zeros((300, 300, 3), dtype=np.uint8)
blue_image[:] = [255, 0, 0]  # BGR for blue
# Mix the images (50% each)
mixed_image = cv2.addWeighted(red_image, 0.5, blue_image, 0.5, 0)
# Display the mixed image
plt.imshow(cv2.cvtColor(mixed_image, cv2.COLOR_BGR2RGB))
plt.title('Color Mixing: Red and Blue')
plt.axis('off')
plt.show()

Output:
[Figures: the original, grayscale, rotated, saved, modified, and colour-mixed images.]
Image Height: 1280
Image Width: 853
Number of Channels: 3
Image Size (in bytes): 3275520
Original Pixel Value at (100, 50): [34 28 5]

Observation:
Image reading and displaying: you can load and visualize images using cv2.imread() and matplotlib.pyplot.
Grayscale conversion: colour images can be converted to grayscale for simpler analysis.
Image rotation: OpenCV provides straightforward functions for rotating images by specific degrees.
Image writing: you can save images to disk using cv2.imwrite().
Image properties: you can extract and print properties like the dimensions and size of an image.
Pixel access and modification: you can directly access and modify pixel values using NumPy array indexing.
Colour mixing: use cv2.addWeighted() to blend two colours or images.
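Two further everyday operations follow the same pattern; a minimal sketch (the target size and crop window are arbitrary):

import cv2

image = cv2.imread('dog.jpg')
# Resize to a fixed size: cv2.resize takes (width, height)
small = cv2.resize(image, (200, 300))
# Crop with NumPy slicing: rows (y) first, then columns (x)
crop = image[100:400, 50:350]
cv2.imwrite('small.jpg', small)
cv2.imwrite('crop.jpg', crop)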

Exp 20    Week 7    Date

Aim :
Demonstrate face and eye detection using the Haar cascade classifier in OpenCV.

Libraries, Methods & Variables Used:

cv2: OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library; its tasks include image processing and real-time computer vision.
matplotlib.pyplot as plt: Matplotlib is a plotting library for Python and its numerical mathematics extension NumPy.

● cv2.CascadeClassifier: mainly used for tasks such as face detection, eye detection, or other object detection (e.g., pedestrians, cars)
● cv2.cvtColor(): converts an image from one color space to another
● face_cascade.detectMultiScale(): detects objects (in this case, faces) in an image using the Haar cascade classifier
● cv2.rectangle(): draws a rectangle on an image
● eye_cascade.detectMultiScale(): similar to face_cascade.detectMultiScale(), but used for detecting eyes in an image
● imshow(): displays an image in a window

Variables: face_cascade, eye_cascade, image, gray_image, faces, roi_gray, roi_color, eyes

PROGRAM:
import cv2
import matplotlib.pyplot as plt

# Step 1: Load the Haar cascades for face and eye detection
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')

# Step 2: Load an image and convert it to grayscale
image = cv2.imread('group1.jpg')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Step 3: Detect faces in the image
faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.3, minNeighbors=5)

# Step 4: Draw rectangles around faces and detect eyes within each face
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2)  # Blue rectangle for the face
    # Define the region of interest (ROI) within the face
    roi_gray = gray_image[y:y + h, x:x + w]
    roi_color = image[y:y + h, x:x + w]
    # Step 5: Detect eyes within the face ROI
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)  # Green rectangles for eyes

# Step 6: Display the result using matplotlib
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.title('Face and Eye Detection')
plt.axis('off')
plt.show()
‭Output:‬

Observation:
Haar Cascade Efficiency: Haar cascades are fast and efficient for detecting objects, especially faces. This method can detect faces in real time if applied to video streams (a short sketch follows these notes).
Detection Parameters: The choice of scaleFactor, minNeighbors, and minSize parameters affects the performance and accuracy of the detection.
Multiple Face Detection: Because the program loops over every detection returned by detectMultiScale(), it can mark all faces found in a well-lit image, not just one.
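
As referenced above, a minimal sketch of the same cascade applied to a live webcam stream; it assumes a camera is available at index 0 and uses only standard OpenCV calls. Press 'q' to quit.

import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)  # default webcam (assumption: one is attached)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow('Live Face Detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()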

‭WEEK - 8‬
‭Exp‬ ‭21‬ ‭Week‬ ‭8‬ ‭Date‬

‭Aim :‬
Perform corner detection using the Harris corner and Shi-Tomasi corner detection algorithms.

‭Libraries, Methods & Variables Used:‬


c‭ v2:‬‭OpenCV (Open Source Computer Vision Library)‬‭is an open-source computer vision‬
‭and machine learning software library.‬
‭numpy as np‬‭:NumPy (Numerical Python) is a fundamental‬‭package for scientific‬
‭computing in Python, providing support for arrays and matrices.‬
‭matplotlib.pyplot as plt:‬
‭Matplotlib is a plotting library for Python and its numerical mathematics extension NumPy.‬

cv2.imread(): Reads an image from a file and returns it as a NumPy array.
cv2.cvtColor(): Converts an image from one color space to another.
cv2.cornerHarris(): Detects corners in an image using the Harris corner detection algorithm.
cv2.dilate(): Applies a dilation operation to an image, typically used to expand the white regions of an image or enhance features like edges or corners.
dst.max(): Not an OpenCV method but a NumPy method; if dst is a NumPy array, dst.max() returns the maximum value in the array.
cv2.addWeighted(): Blends two images by adding them together with specific weights.
imshow(): Displays an image in a window.

Variables: image, gray_image, dst, threshold, corners, result, x, y
‭PROGRAM:‬
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Step 1: Load the image and convert it to grayscale
image = cv2.imread('geeks.jpg')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Step 2: Convert to float32, as required by cv2.cornerHarris()
gray_image = np.float32(gray_image)

# Step 3: Harris corner detection
dst = cv2.cornerHarris(gray_image, blockSize=2, ksize=3, k=0.04)

# Step 4: Dilate the result to mark the corners
dst = cv2.dilate(dst, None)

# Step 5: Thresholding to identify strong corners
threshold = 0.01 * dst.max()
corners = np.zeros_like(image)
corners[dst > threshold] = [0, 0, 255]  # Mark corners in red (BGR)

# Step 6: Display the original image with corners marked
result = cv2.addWeighted(image, 0.7, corners, 0.3, 0)
plt.imshow(cv2.cvtColor(result, cv2.COLOR_BGR2RGB))
plt.title('Harris Corners')
plt.axis('off')
plt.show()
Shi-Tomasi
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Step 1: Load the image and convert it to grayscale
image = cv2.imread('geeks.jpg')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Step 2: Shi-Tomasi corner detection
corners = cv2.goodFeaturesToTrack(gray_image, maxCorners=100, qualityLevel=0.01, minDistance=10)

# Step 3: Draw the corners on the image
for corner in corners:
    x, y = corner.ravel()
    # Convert x and y to integers before passing to cv2.circle
    x, y = int(x), int(y)
    cv2.circle(image, (x, y), 3, (0, 255, 0), -1)

# Step 4: Display the image with corners marked
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.title('Shi-Tomasi Corners')
plt.axis('off')
plt.show()

‭Output:‬

Observation:
A threshold is applied to retain only the strongest corners. Points with values in the corner response image (dst) greater than 1% of the maximum response (0.01 * dst.max()) are considered corners. These corners are marked in red ([0, 0, 255]); a short sketch of how this threshold choice behaves follows this list.
Harris Corner Detection is sensitive to changes in the image, making it effective in detecting sharp edges or corner-like features.
Shi-Tomasi Corner Detection:
● Selects only the strongest corners, which results in fewer, more distinct corners.
● More computationally efficient than Harris and performs better in noisy environments.
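
As referenced above, a minimal sketch (reusing the same 'geeks.jpg' image; the threshold fractions are illustrative assumptions) that counts how many pixels survive different fractions of dst.max(), showing how the 1% choice trades corner count against corner strength:

import cv2
import numpy as np

# Recompute the Harris response for the same image
gray = cv2.cvtColor(cv2.imread('geeks.jpg'), cv2.COLOR_BGR2GRAY)
dst = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
# Sweep a few threshold fractions and report surviving corner pixels
for frac in (0.001, 0.01, 0.05, 0.1):
    count = int(np.sum(dst > frac * dst.max()))
    print(f'threshold = {frac} * dst.max() -> {count} corner pixels')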

‭Exp‬ ‭22‬ ‭Week‬ ‭8‬ ‭Date‬

Aim :
Predict whether two images are similar using SIFT for feature detection and the BF matcher algorithm.
‭Libraries, Methods & Variables Used:‬

cv2: OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library.
matplotlib.pyplot as plt: Matplotlib is a plotting library for Python and its numerical mathematics extension NumPy.

detect_and_match(img1_path, img2_path): Function that reads both images, detects SIFT keypoints and descriptors, matches them, and reports whether the images are similar (the reading, matching, and decision steps are combined in this one function).
cv2.imread(): Reads each image directly in grayscale via the cv2.IMREAD_GRAYSCALE flag.
cv2.SIFT_create(): Creates the SIFT detector object.
detectAndCompute(): Used in OpenCV to detect keypoints and compute descriptors in a single step.
cv2.BFMatcher: Brute-Force Matcher (BFMatcher) is a simple, yet powerful algorithm to match descriptors between two images.
cv2.drawMatches(): Draws the strongest matches between the two images side by side.
title(): Adds a title to the plot.
axis(): Turns the axis on or off for the plot.
show(): Displays the current plot.

Variables: img1, img2, sift, kp1, kp2, des1, des2, bf, matches, matched_img, num_matches, img1_path, img2_path
‭PROGRAM:‬
import cv2
import matplotlib.pyplot as plt

def detect_and_match(img1_path, img2_path):
    # Read images in grayscale
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    # Initialize SIFT detector
    sift = cv2.SIFT_create()
    # Detect keypoints and descriptors with SIFT
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    # BFMatcher with default params (L2 norm)
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    # Match descriptors
    matches = bf.match(des1, des2)
    # Sort them in the order of their distances
    matches = sorted(matches, key=lambda x: x.distance)
    # Draw the best 50 matches
    matched_img = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None,
                                  flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
    # Display the image with matches
    plt.figure(figsize=(12, 6))
    plt.imshow(matched_img, cmap='gray')
    plt.title(f'Number of Matches: {len(matches)}')
    plt.axis('off')
    plt.show()
    # Calculate the number of good matches
    num_matches = len(matches)
    # Define a threshold for determining similarity
    similarity_threshold = 10  # Adjust this based on experimentation
    if num_matches > similarity_threshold:
        return f"Images are similar with {num_matches} good matches."
    else:
        return f"Images are not similar with only {num_matches} matches."

# Example usage
img1_path = 'dog.jpg'
img2_path = 'dog1.jpg'
result = detect_and_match(img1_path, img2_path)
print(result)
‭Output:‬

‭Images are similar with 243 good matches.‬


‭Observation:‬
‭●‬ T ‭ he Brute-Force Matcher (BFMatcher) in OpenCV is a straightforward and effective‬
‭algorithm for matching feature descriptors between images‬
‭●‬ ‭BFMatcher compares each descriptor from the first set to every descriptor in the‬
‭second set. It computes the distance between each pair of descriptors and identifies the‬
‭best matches.‬
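
A minimal sketch of a stricter alternative, assuming des1 and des2 from the program above are available: knnMatch with Lowe's ratio test, which filters ambiguous matches instead of relying on a raw match count. The 0.75 ratio is a commonly used value, not a requirement.

import cv2

# crossCheck must stay off for knnMatch to return two neighbours per descriptor
bf = cv2.BFMatcher(cv2.NORM_L2)
pairs = bf.knnMatch(des1, des2, k=2)
# Keep a match only if it is clearly better than the second-best candidate
good = []
for pair in pairs:
    if len(pair) == 2:
        m, n = pair
        if m.distance < 0.75 * n.distance:
            good.append(m)
print(len(good), 'matches pass the 0.75 ratio test')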

‭WEEK - 9‬
‭Exp‬ ‭23‬ ‭Week‬ ‭9‬ ‭Date‬

‭Aim :‬
Demonstrate the implementation of a chatbot in Botpress.
Description:
A chatbot is artificial intelligence (AI) software that can simulate a conversation with a user in natural language through messaging applications, websites, mobile apps, or over the telephone. It is designed to automate interactions and provide instant responses to user inquiries, helping to streamline customer service, enhance user engagement, and improve productivity.
‭Output:‬

Observation:
● The distinction between rule-based and AI-powered chatbots helps readers understand the different levels of chatbot capabilities, from handling simple to complex queries.
● They enhance user experience through personalised interactions and continuous learning while being flexible and easy to manage.
● The platform's visual tools and modular design make it accessible for users with varying technical skills.
