
THANGAVELU ENGINEERING COLLEGE

DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

REAL TIME COMMUNICATION POWERED BY AI FOR THE SPECIALLY ABLED

SUBMITTED BY

ABINAYA B        312621243001
ARTHI C          312631242003
KAVIYALAXMI P    312621233017
GOAL

Our goal is to design a human-computer interface system that can accurately recognize, using AI, the sign language of people with hearing and speech impairments.
ARTIFICIAL INTELLIGENCE

Artificial intelligence (AI) refers to smart machines or algorithms that are capable of performing cognitive tasks usually carried out by humans.
ABSTRACT

For individuals with physical disabilities, particularly those who rely on sign language for communication, real-time interaction can be challenging in environments where sign language is not widely understood. Artificial intelligence (AI) has emerged as a transformative tool for bridging this communication gap. This paper explores AI-driven solutions that enable real-time sign language recognition, translation, and synthesis into spoken or written language. Leveraging machine learning, computer vision, and natural language processing (NLP), AI-powered systems can interpret hand gestures, facial expressions, and body movements with high accuracy.
INTRODUCTION

People with speech impairments usually face problems in everyday communication with other people in society. It has been observed that they sometimes find it difficult to make themselves understood through gestures alone. Because people with hearing impairments cannot communicate through speech, they have to depend on a form of visual communication in most cases. To overcome these problems, we propose a system that uses a camera to capture videos of hand gestures and convert them into speech that hearing people can understand.
PROBLEM STATEMENT

● People who have a hearing disability and/or a speech disability need a way to communicate other than vocal communication.
● They resort to sign language to communicate with each other.
● However, sign language requires a lot of training to learn and understand, and not every person may know what the sign language gestures mean.
EXISTING SYSTEM

Several AI-powered systems have been developed to facilitate real-time communication for physically disabled individuals who rely on sign language. These systems use technologies such as computer vision, deep learning, natural language processing (NLP), and wearable sensors to translate sign language into spoken or written text.

Sign Language Recognition Using Computer Vision
•AI models trained on large datasets of sign language gestures interpret hand movements, facial expressions, and body postures (see the classification sketch after this slide).

AI-Powered Mobile Applications
•Apps such as Google's Live Transcribe provide real-time speech-to-text captioning, while HandTalk and SignAll use smartphone cameras to recognize and translate sign language.
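To make the computer-vision approach concrete, here is a minimal sketch of frame-level gesture classification with a pre-trained deep learning model in TensorFlow. The model file sign_model.h5, its 224x224 input size, and the label list are hypothetical placeholders, not part of any system named above.

```python
# Minimal sketch: classify one video frame with a pre-trained gesture model.
# The model file, input size, and labels below are hypothetical placeholders.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("sign_model.h5")  # hypothetical trained classifier
labels = ["hello", "thanks", "yes", "no"]            # hypothetical gesture classes

def classify_frame(frame):
    # Resize and normalize the BGR frame to the model's assumed 224x224 input.
    img = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    return labels[int(np.argmax(probs))]
```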
SYSTEM REQUIREMENTS
Software Requirements
a. Operating System
•Windows 10/11, macOS, or Linux (for PC-based applications)
•Android/iOS (for mobile-based applications)

b. Programming Languages
•Python (AI & ML development)
•C++/C# (for hardware integration)
•JavaScript (React, Node.js) (for web-based applications)
•Swift/Kotlin (for mobile app development)

c. AI & Machine Learning Frameworks
•TensorFlow / PyTorch (for deep learning-based sign language recognition)
•OpenCV (for real-time computer vision)
•MediaPipe (for hand and gesture tracking; see the sketch after this list)
•Google BERT / OpenAI GPT (for NLP-based text processing)
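As an illustration of how OpenCV and MediaPipe work together, the sketch below tracks hand landmarks on a live webcam feed. The camera index, one-hand limit, and confidence threshold are arbitrary illustrative choices, not project requirements.

```python
# Minimal sketch: real-time hand landmark tracking with OpenCV + MediaPipe.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures frames in BGR order.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            mp.solutions.drawing_utils.draw_landmarks(
                frame, hand, mp.solutions.hands.HAND_CONNECTIONS)
    cv2.imshow("Hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```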

d. Speech Processing Tools
•Google Text-to-Speech, Amazon Polly, or Microsoft Azure Speech API (for voice synthesis; a minimal example follows)
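One lightweight way to prototype the voice-synthesis step is the gTTS Python package, an unofficial wrapper around Google's text-to-speech service; choosing it here rather than the cloud APIs listed above is an assumption, and the phrase and file name are illustrative.

```python
# Minimal sketch: turn recognized text into speech with gTTS
# (an unofficial Google text-to-speech wrapper, used here as an assumption).
from gtts import gTTS

speech = gTTS(text="Hello, how are you?", lang="en")  # illustrative phrase
speech.save("output.mp3")  # the MP3 can then be played back to the listener
```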

e. Cloud & Database
•Google Firebase / AWS / Azure (for cloud storage and AI model hosting)
•MySQL / PostgreSQL / MongoDB (for storing user interactions and communication logs; a logging sketch follows)
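As a sketch of how a communication log entry might be stored, the snippet below writes one recognized gesture to MongoDB via pymongo. The connection string, database, collection, and field names are all hypothetical.

```python
# Minimal sketch: log one recognized gesture to MongoDB (all names hypothetical).
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local MongoDB instance
logs = client["sign_app"]["communication_logs"]    # hypothetical database and collection

logs.insert_one({
    "user": "demo_user",
    "gesture": "hello",
    "recognized_at": datetime.now(timezone.utc),
})
```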
HARDWARE REQUIREMENTS
a. Input Devices
•High-Resolution Camera (for sign language recognition)
• Minimum: 1080p resolution
• Recommended: Depth-sensing cameras (Intel RealSense, Microsoft Kinect, or LiDAR)
•Wearable Sensors (if applicable)
• Smart gloves with accelerometers and gyroscopes for motion detection
• EEG sensors for Brain-Computer Interface (BCI) systems
b. Processing Unit
•CPU
• Minimum: Intel Core i5 (or equivalent)
• Recommended: Intel Core i7/i9, AMD Ryzen 7/9, or Apple M-series
•GPU (for AI model processing)
• Minimum: NVIDIA GTX 1650 / AMD Radeon RX 5500
• Recommended: NVIDIA RTX 3060+ / AMD Radeon RX 6800+
•RAM
• Minimum: 8GB
• Recommended: 16GB or more
•Storage
• Minimum: 256GB SSD
• Recommended: 512GB SSD or more (for AI models and datasets)
c. Output Devices
•Display (for text-based communication and UI interaction)
•Speakers and Microphone (for voice output and speech recognition if required)
PROPOSED SYSTEM

● The proposed system consists of a camera which captures a video feed.
● This video feed is processed frame by frame.
● The OpenCV library is used to process the video feed.
● The hand in each frame is isolated by thresholding the image so that it appears as a white region against a darkened background.
● This white outline is used to extract the contours of the hand.
● The contours are then used to identify the type of symbol shown in the video feed (a minimal OpenCV sketch follows).
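A minimal sketch of this contour pipeline with OpenCV is shown below. The Otsu threshold, the camera index, and the largest-contour heuristic are illustrative choices, and the final classification step is only indicated in a comment.

```python
# Minimal sketch: isolate the hand contour in a live video feed with OpenCV.
import cv2

cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Grayscale, blur, and Otsu-threshold the frame; this assumes the hand
    # is brighter than the background, so it shows up as a white region.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Extract external contours; take the largest one as the hand.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        cv2.drawContours(frame, [hand], -1, (0, 255, 0), 2)
        # The hand contour would then be passed to a classifier to identify the symbol.
    cv2.imshow("Hand contours", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```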
