
A_10 PPT

The project aims to develop a real-time system that converts sign language into text using artificial intelligence and computer vision technologies, making communication easier for deaf and mute individuals. It addresses the limitations of existing systems by operating on standard computers without the need for expensive hardware. The proposed work includes gesture recognition, video processing, and text display, with future enhancements planned for dynamic gesture recognition and mobile app development.


Conversion Of Sign Language To Text

Presented by:

Rutika Sachin Patil


Rutuja Ramesh Patil
Shreya Shivgonda Patil
Sanika Santosh Sonandkar

Under the Guidance of:

Prof. K. N. Kamble
INTRODUCTION
Sign language is an important means of communication for deaf and mute people.
However, many people do not understand sign language, which often leads to
communication barriers. This project aims to convert hand signs into text using
artificial intelligence and computer vision technologies. It operates in real-time through
a standard webcam, making it convenient and accessible without requiring any special
or costly devices. The primary goal is to facilitate better communication for everyone
and promote inclusion. This system can be effectively used in various environments
such as homes, schools, and public places to support seamless interaction.
Motivation
• Helping People Communicate
Many people in India can't hear or speak. Most others don’t know sign language, so
they find it hard to talk to each other.

• Low-Cost and Easy to Use
Other systems are expensive or hard to use. Our project works on normal computers
and is free and open to everyone.

• Using Technology for Good
We use computer vision and machine learning to make the system better and help
more people be part of the digital world.
Existing System
MotionSavvy Uni:
Used a Leap Motion sensor to track hand movements. An AI-powered app converted
signs into text or speech, enabling real-time sign-to-speech translation via tablet.

SignAll:
Used multiple cameras to capture signs. AI processed and translated sign language
into text/speech, enabling real-time communication between deaf and hearing users.

Hand Talk:
Mobile app translating spoken Portuguese into Libras (Brazilian Sign Language).
Used a 3D avatar, "Hugo", to demonstrate signs, aiming to bridge communication
for Libras users.
Drawbacks of Existing System
• Hardware Limits: Many current systems need extra devices such as special gloves
or specialized cameras.
• Only Still Signs: A lot of models can only understand signs that don’t move, which
means they can’t recognize many words or expressions.
• Hard to Set Up: Some apps are difficult to install and use, needing technical skills.
• Delay Problems: Real-time systems can be slow because they need a lot of
computer power.
Problem Statement
Deaf and mute individuals struggle with communication due to limited understanding
of sign language and inaccurate translation tools. We need to develop a system that
quickly and accurately converts sign language from video into text, thereby improving
real-time communication for these individuals.
Proposed Work

• Gesture Recognition (recognizeGesture)
Detects the hand using MediaPipe, applies rules or a CNN to recognize the gesture,
and returns it as text.
• Video Processing (processVideo)
Captures webcam video, runs recognizeGesture on each frame, displays result
in real time.
• Text Display (displayDetectedText)
Shows recognized text in Streamlit, with options to change color and clear it.
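The split between recognizeGesture and processVideo above can be sketched in plain Python. The 21-point landmark layout follows MediaPipe's hand model, but the finger-counting rule, the label mapping, and the `detector` stand-in are illustrative assumptions, not the project's actual classifier (MediaPipe detection plus a CNN would replace them in the real system):

```python
# Sketch of the recognizeGesture / processVideo split described above.
# In the real system, MediaPipe supplies 21 (x, y) hand landmarks per
# frame; here landmarks are plain tuples so the logic runs standalone.
# The rule below (counting extended fingers) is an illustrative
# assumption, not the project's actual classifier.

FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky fingertip indices
FINGER_PIPS = [6, 10, 14, 18]   # corresponding middle-joint indices

def recognize_gesture(landmarks):
    """Map 21 (x, y) landmarks to a text label via a simple rule."""
    extended = sum(
        1 for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)
        if landmarks[tip][1] < landmarks[pip][1]   # tip above joint => extended
    )
    # Toy mapping: number of extended fingers -> label.
    return {0: "FIST", 1: "ONE", 2: "TWO", 3: "THREE", 4: "OPEN"}[extended]

def process_video(frames, detector):
    """Run the recognizer on each frame, as processVideo does per webcam frame."""
    texts = []
    for frame in frames:
        landmarks = detector(frame)   # stand-in for MediaPipe hand detection
        if landmarks is not None:
            texts.append(recognize_gesture(landmarks))
    return texts
```

In the full system, `frames` would come from a webcam loop (e.g. OpenCV's `cv2.VideoCapture`) and the resulting text would be passed to the Streamlit display step.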
System Architecture Diagram
Future Work

• Dynamic gesture recognition (continuous signing)

• Voice output using text-to-speech

• Two-way translator: Text to Animated Sign

• Android/iOS app

• Multi-hand and regional gesture support


Software Requirements
• OS :
Windows (latest).
• Languages :
Python
• Frameworks/Libraries :
OpenCV, Keras, MediaPipe, Streamlit, TensorFlow, NumPy (the CNN is a model
architecture built with Keras/TensorFlow, not a separate library)
• IDE :
Visual Studio Code
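The stack above could be set up roughly as follows (a configuration sketch: the PyPI package names are assumed and versions are left unpinned):

```shell
# Hypothetical environment setup for the listed libraries.
pip install opencv-python mediapipe keras tensorflow streamlit numpy
```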
Hardware Requirements
• Processor :
Intel Core i5+ / AMD Ryzen 5+.

• RAM :
8GB minimum (16GB recommended).

• Storage :
256GB+ SSD.
Conclusion

This project helps deaf and mute people communicate easily. It uses a webcam to
detect hand signs and converts them into text. Tools like MediaPipe, OpenCV, and a
CNN model make the system accurate and fast. It works in real time, is easy to use, and
does not need costly devices. This system supports inclusive communication and
shows how technology can help society.
Thank You!!!!
