Sign Language Recognition Using Deep Learning
Sign language is a language that uses gestures made with the hands and other body parts,
including facial expressions and postures of the body.
It is used primarily by people who are deaf or hard of hearing.
There are many different sign languages, such as British, Indian, and American Sign Language.
It bridges the communication gap between hearing- and speech-impaired people and the rest of the community.
Problem Statement
Speech-impaired people use hand signs and gestures to communicate, but most other people do not understand these signs.
Hence there is a need for a system that recognizes the different signs and gestures and conveys the information to non-signers.
Such a system bridges the gap between speech-impaired people and the rest of the community.
Proposed Solution
Obscure the Image: Some existing systems have limitations in finding the edges and
features of images. To overcome this, the proposed system uses a masking process to hide
some portions of an image.
Detecting the Hand Region: Most existing systems are limited in background
subtraction. In the proposed system, the hand region is tracked for background
elimination, which gives better results than existing systems.
Color Conversion: Some existing systems process color images directly, which makes
computation complex because a color image carries more bits per pixel; the proposed
system therefore converts frames to grayscale before further processing.
Rescaling: Some existing systems omit the resizing step in pre-processing.
Image resizing is important to bring every input to a fixed total number of pixels.
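The four pre-processing steps above can be sketched in NumPy as follows. This is a minimal illustration, not the deck's exact implementation: the RGB skin-color thresholds and the 64x64 target size are assumed values chosen for the example.

```python
import numpy as np

def detect_hand_region(rgb):
    """Hand-region mask via a classic RGB skin-color heuristic (assumed thresholds)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def mask_image(rgb, mask):
    """Obscure the image: zero out every pixel outside the mask."""
    return rgb * mask[..., None].astype(rgb.dtype)

def to_grayscale(rgb):
    """Color conversion: 3-channel RGB -> 1-channel luminance."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def rescale(gray, out_h=64, out_w=64):
    """Rescaling: nearest-neighbour resize to a fixed pixel count."""
    h, w = gray.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return gray[rows[:, None], cols]

# Chain the four steps on one frame:
frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
hand = detect_hand_region(frame)
masked = mask_image(frame, hand)
gray = to_grayscale(masked)
small = rescale(gray)  # shape (64, 64), a fixed-size input for the network
```

In practice a library such as OpenCV (`cv2.cvtColor`, `cv2.resize`) would replace the hand-rolled grayscale and resize helpers.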
Literature survey
S No.: 1
Paper: American Sign Language Recognition using Deep Learning and Computer Vision
Authors: Kshitij Bantupalli and Ying Xie, Department of Computer Science, Kennesaw State University, Kennesaw, USA ([email protected], [email protected])
Description: The proposed model creates a vision-based application that offers sign language translation to text, thus aiding communication between signers and non-signers. The model takes video sequences and extracts temporal and spatial features from them.
Drawbacks: One of the problems the model faced is with facial features and skin tones. The model also suffered a loss of accuracy with the inclusion of faces: as the faces of signers vary, the model ends up training on incorrect features from the videos.
S No.: 2
Paper: Arabic Sign Language Recognition with 3D Convolutional Neural Networks
Authors: Menna ElBadawy, Scientific Computing Department, Ain Shams University, Cairo, Egypt ([email protected])
Description: In this paper, a feature extractor with deep behavior was used to deal with the minor details of Arabic Sign Language.
Drawback: The scoring algorithm uses the frames obtained after dividing the video. These frames are resized and processed with a Canny edge detector to get binary images containing sharp edges.
Due to brightness and contrast, the webcam can sometimes hardly detect the expected skin
color.
Because the background color of the tracking environment is similar to the skin color, the SLR
system picks up unexpected pixels.
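One common mitigation for the background/skin color overlap, sketched below as an illustration rather than the system's exact method, is to subtract a background frame captured before the signer enters the scene; the threshold value is an assumed tuning parameter.

```python
import numpy as np

def subtract_background(frame, background, thresh=25):
    """Keep only pixels that differ noticeably from the stored background.

    `thresh` is an assumed tuning knob: raise it under noisy lighting,
    lower it when the hand and background colors are close.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    moving = diff.max(axis=-1) > thresh  # any channel changed enough
    return frame * moving[..., None].astype(frame.dtype)
```

Because the skin-colored wall or furniture pixels are stationary, they match the stored background and are zeroed out, even when a pure color threshold would have kept them.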
Python (3.7.4)
I. Cohen, N. Sebe, A. Garg, L. S. Chen and T. S. Huang, "Facial expression recognition from
video sequences: temporal and static modeling," Computer Vision and Image Understanding,
vol. 91, 2003.
A. Nandy, J. S. Prasad, S. Mondal, P. Chakraborty and G. C. Nandi, "Recognition of Isolated
Indian Sign Language Gesture in Real Time," Communications in Computer and Information
Science book series (CCIS), vol. 70.
K. Anetha and P. J. Rejina, "Hand Talk – A Sign Language Recognition Based on Accelerometer
and sEMG Data," International Journal of Innovative Research in Computer and
Communication Engineering, vol. 2, no. 3, 2014.
J. Singha and K. Das, "Recognition of Indian Sign Language in Live Video," International Journal
of Computer Applications (0975 – 8887), vol. 70, no. 19, 2013.
Thank You