AI Mini Report

This document provides a mini report on an artificial intelligence project to develop a hand gesture recognition system. The system will use a camera to record video and identify specific gestures from snapshots using models trained on labeled image data. Work will be distributed among three students to collect and annotate image data, train and optimize machine learning models, and develop the user interface. The project will use Python libraries like OpenCV, MediaPipe, NumPy, and TensorFlow. It will follow an iterative process of data collection, model training and testing, and interface development over the period of January to May 2023.


School of Computer Science and Engineering

ARTIFICIAL INTELLIGENCE PROJECT MINI REPORT


(Project Term Jan – May, 2023)

HAND GESTURE RECOGNITION

Submitted by
NAME OF STUDENTS REGISTRATION NUMBER SECTION
Rishu Raj 12204901 K21SP
Sofiya khan 12113394 K21SP
Yallakkagari Vinay Kumar 12107392 K21SP

Submitted to: Gurleen Kaur Walia


INTRODUCTION

The project is based on "Hand gesture detection and recognition".


The project introduces an application that uses computer vision for hand gesture
recognition. A camera records a live video stream, from which a snapshot is taken with
the help of the interface. The system is trained to recognize specific hand gestures
(all the best, victory sign, etc.), and to test the system a test gesture is presented
to it for recognition.
In the past, people have used aids such as gloves or markers to provide hand gesture
input to a system. Our system imposes no such constraint: the user can make a hand
gesture naturally within the camera's view.

We are going to build this project in Python with the help of the following libraries & tools:
• OpenCV for creating and controlling the camera interface.
• MediaPipe, a tool from Google, for hand detection and tracking.
• NumPy for image matrix manipulation.
• TensorFlow to identify the gesture.
• SQL for the database.
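As a small illustration of the NumPy-based image matrix manipulation listed above, the sketch below converts a BGR frame (the channel order OpenCV delivers) to grayscale and normalises it to the [0, 1] range. The random array is a stand-in for a real camera snapshot, so this runs without any camera attached.

```python
import numpy as np

def preprocess_frame(frame):
    """Convert a BGR uint8 frame to a normalised grayscale float array."""
    # Standard luminance weights; OpenCV stores channels as B, G, R.
    b, g, r = frame[..., 0], frame[..., 1], frame[..., 2]
    gray = 0.114 * b + 0.587 * g + 0.299 * r
    # Scale pixel values from [0, 255] down to [0, 1] for the model.
    return (gray / 255.0).astype(np.float32)

# Stand-in for a captured snapshot: a 120x160 BGR frame.
frame = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)
processed = preprocess_frame(frame)
print(processed.shape)  # (120, 160)
```

In the real pipeline the `frame` would come from `cv2.VideoCapture(0).read()`, with the rest of the function unchanged.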

Work plan:
Here is a work plan for building the hand gesture recognition AI among the three members:

Define the scope and requirements of the project:
• Discuss the goals of the hand gesture recognition AI, such as detecting hand gestures, recognizing finger movements, or identifying hand shapes.
• Determine the necessary hardware and software components required to develop the AI, such as cameras, sensors, programming languages, and machine learning frameworks.
Collect and annotate hand image data:
• Gather a large dataset of hand images to train and test the AI.
• Annotate the images with relevant labels, such as hand position, hand shape, and hand movement.
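The annotation step might produce records like the following; the file names and label names here are illustrative, not part of the report. Storing labels as JSON keeps them independent of any one training framework, and counting examples per class early helps spot class imbalance before training.

```python
import json

# Hypothetical annotations: one record per captured hand image.
annotations = [
    {"image": "data/img_0001.jpg", "label": "victory", "hand": "right"},
    {"image": "data/img_0002.jpg", "label": "all_the_best", "hand": "left"},
    {"image": "data/img_0003.jpg", "label": "victory", "hand": "right"},
]

# Serialise to JSON so any tool in the pipeline can read the labels.
serialized = json.dumps(annotations, indent=2)

# Count examples per gesture class to check dataset balance.
counts = {}
for record in annotations:
    counts[record["label"]] = counts.get(record["label"], 0) + 1
print(counts)  # {'victory': 2, 'all_the_best': 1}
```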
Train and optimize the machine learning model:
• Choose an appropriate machine learning algorithm, such as Convolutional Neural Networks (CNNs) or Support Vector Machines (SVMs).
• Train the model on the annotated hand image dataset.
• Optimize the model's performance by adjusting hyperparameters, such as learning rate, batch size, and number of layers.
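The hyperparameter tuning above can be organised as a simple grid search. In this sketch the `validate` function is a stub standing in for a full training run that would return validation accuracy; only the search loop itself is the point here.

```python
import itertools

def validate(learning_rate, batch_size):
    """Stub: a real version would train the CNN and return validation
    accuracy. A deterministic formula stands in so the loop is testable."""
    return 1.0 - abs(learning_rate - 0.001) * 100 - abs(batch_size - 32) / 1000

# Candidate hyperparameter values to sweep over.
learning_rates = [0.01, 0.001, 0.0001]
batch_sizes = [16, 32, 64]

best_score, best_config = float("-inf"), None
for lr, bs in itertools.product(learning_rates, batch_sizes):
    score = validate(lr, bs)
    if score > best_score:
        best_score, best_config = score, (lr, bs)

print(best_config)  # (0.001, 32)
```

Swapping the stub for a real training-and-evaluation call turns this into a usable tuning loop; adding more lists (e.g. number of layers) extends the grid.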
Develop the user interface and integration:
• Design a user-friendly interface for users to interact with the AI.
• Integrate the AI with the user interface, ensuring smooth communication between the AI and the hardware components, such as cameras or sensors.
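One way to wire the recognizer into the interface is a dispatch table mapping predicted gesture labels to UI actions. The labels and the `show_message` helper below are placeholders for whatever the actual interface exposes.

```python
# Placeholder UI callback; a real interface would update the screen.
def show_message(text):
    return f"display: {text}"

# Map predicted gesture labels to interface actions.
actions = {
    "victory": lambda: show_message("Victory sign detected"),
    "all_the_best": lambda: show_message("All the best!"),
}

def handle_prediction(label):
    """Dispatch a recognized gesture; unknown labels fall back safely."""
    action = actions.get(label)
    return action() if action else "display: unrecognized gesture"

print(handle_prediction("victory"))  # display: Victory sign detected
print(handle_prediction("fist"))     # display: unrecognized gesture
```

The fallback branch matters in practice: the model will sometimes emit labels the UI was not built for, and the interface should degrade gracefully rather than crash.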
Test and evaluate the AI:
• Evaluate the AI's performance on a testing dataset, using metrics such as accuracy, precision, and recall.
• Refine the model and user interface based on feedback from users and testing results.
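The metrics named above follow directly from confusion-matrix counts. A small worked example for a single gesture class, with hypothetical counts on a 100-image test set:

```python
def precision_recall_accuracy(tp, fp, fn, tn):
    """Compute the three evaluation metrics from confusion-matrix counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    precision = tp / (tp + fp)          # of predicted positives, how many were right
    recall = tp / (tp + fn)             # of actual positives, how many were found
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Hypothetical results for the "victory" class on 100 test images.
p, r, a = precision_recall_accuracy(tp=40, fp=10, fn=5, tn=45)
print(round(p, 2), round(r, 2), round(a, 2))  # 0.8 0.89 0.85
```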
Document the project and prepare for deployment:
• Document the development process, including the algorithms, datasets, and testing results.
• Prepare the AI for deployment on a target device, such as a mobile app or web application.
• Create user manuals and tutorials for users to learn how to use the AI.
Assignments for each member:
• Sofiya Khan: Collect and annotate hand image data.
• Yallakkagari Vinay Kumar: Train and optimize the machine learning model.
• Rishu Raj: Develop the user interface and integration.

[Gantt chart: project timeline from 22nd March to 31st April, covering the Research, Analysis, Resource gathering, Implementation, and Project Testing phases.]
