Design Project 2
Communicating with a differently abled person who cannot speak or hear is quite difficult,
especially when you do not know sign language.
To ease this problem, today you will learn to make a Sign Language Translator device that
converts sign language into spoken language. This device will be based on an ML model
that can recognize the different sign language gestures for accurate translation.
INTRODUCTION
There are different sign languages practiced in different countries. For example, India uses
Indian Sign Language (ISL) while the USA uses American Sign Language (ASL), so you first
need to decide which one you wish to implement. There are more than 70 million people who
are mute or hard of hearing. Thanks to sign languages, these people can communicate via
postures, body movements, eyes, eyebrows, and hand gestures. People with hearing and/or
speech impairments use sign language as their natural mode of communication. Signs are
gestures made with one or both hands, accompanied by facial expressions with specific
meanings.

Although deaf, hard-of-hearing, and mute people can easily communicate with each other,
integration into educational, social, and work environments remains a significant barrier
for the differently abled. A communication barrier exists between an unimpaired person who
does not know the sign language system and an impaired person who wishes to communicate.
The use of computers to identify and render sign language has progressed significantly in
recent years. The way the world works is rapidly changing and improving due to technological
advancements, and as programs over the last two decades have progressed, barriers for the
differently abled are dissipating. Researchers are working to build hardware and software
that will help these people interact and learn, using image processing, artificial
intelligence, and pattern-matching approaches.

The project's goal is to help people overcome these obstacles by developing a vision-based
technology for recognizing and translating sign language into text. We want to create a
Raspberry Pi application for real-time motion gesture recognition using webcam input in
Python. This project combines real-time motion detection and gesture recognition: the user
performs a specific gesture, and the webcam captures and recognizes it (from a set of known
gestures) and displays the corresponding text. A minimal sketch of this loop is shown below.
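The sketch uses OpenCV for capture, MediaPipe for hand-landmark extraction, and a Keras
model for classification. The model file sign_model.h5 and the gesture names in LABELS are
illustrative assumptions, not fixed parts of the project: you would train your own classifier
on landmark data for the sign language (ISL or ASL) you chose.

import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf

# Hypothetical trained classifier and label set; train your own model
# on hand-landmark data for the sign language you selected
model = tf.keras.models.load_model("sign_model.h5")
LABELS = ["hello", "thanks", "yes", "no"]  # placeholder gesture names

mp_hands = mp.solutions.hands
hands = mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7)

cap = cv2.VideoCapture(0)  # webcam input
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0]
        # Flatten the 21 (x, y, z) hand landmarks into one feature vector
        features = np.array([[p.x, p.y, p.z] for p in lm.landmark]).flatten()
        probs = model.predict(features[np.newaxis, :], verbose=0)[0]
        label = LABELS[int(np.argmax(probs))]
        cv2.putText(frame, label, (10, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("Sign Language Translator", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()

The text shown on screen could later be passed to a text-to-speech engine to produce the
spoken output described in the introduction.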
COMPONENTS
HARDWARE:
Raspberry Pi 3 Model B+
Zebion Webcam 20 Megapixel
SD Card 32 GB
Desktop or Laptop
SOFTWARE:
Python IDLE
Packages (a quick import check is sketched after this list):
a. OpenCV (cv2)
b. NumPy
c. MediaPipe
d. TensorFlow
e. tf.keras
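Before starting, it helps to confirm that each of these packages imports correctly on the
Raspberry Pi. A minimal check (tf.keras ships inside TensorFlow, so it needs no separate
install):

# Verify that all required packages are installed
import cv2
import numpy
import mediapipe
import tensorflow

print("OpenCV:", cv2.__version__)
print("NumPy:", numpy.__version__)
print("MediaPipe:", mediapipe.__version__)
print("TensorFlow:", tensorflow.__version__)  # tf.keras is bundled with TensorFlow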
DESCRIPTION
Raspberry Pi 3 Model B+
1. This computing device supports Windows- and Android-based operating systems in addition
to Linux.
2. The RPi is essentially a low-cost machine for learning and testing system programming
and administration.
3. The BCM2837B0 system-on-chip (SoC) features a 1.4 GHz quad-core ARMv8 64-bit
processor and a powerful VideoCore IV GPU.
4. Snappy Ubuntu Core, Raspbian, Fedora, and Arch Linux are among the ARM GNU/Linux
distributions that can be run on the Raspberry Pi; Microsoft Windows 10 IoT Core is also
supported.
Zebion Webcam 20 MP
This full-featured 20 MP camera can deliver smooth and detailed high-quality video. A quick
capture test is sketched below.
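Before building the recognizer, it is worth confirming that OpenCV can read frames from the
webcam. A minimal sketch (camera index 0 is an assumption; adjust it if the Pi enumerates
the camera differently):

import cv2

# Open the first attached camera; on the Pi the index may differ
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Webcam not detected")

ok, frame = cap.read()
if ok:
    print("Captured frame of shape:", frame.shape)  # (height, width, channels)
    cv2.imwrite("test_frame.jpg", frame)  # save one frame for inspection
cap.release()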
FUTURE SCOPE
Image processing could be enhanced so that the system communicates in both directions,
i.e. transforms conventional language into sign language and vice versa. The system could
also be improved by allowing multi-language display and speech conversion.