The document summarizes recent work on sign language recognition using machine learning techniques. It reviews several papers on sign language recognition that use techniques like CNNs, RNNs, and MediaPipe for hand tracking. Some approaches achieved over 90% accuracy but were limited to static images. Recent work has obtained close to real-time prediction speeds and over 97.5% accuracy on videos by using techniques like CNNs and Squeezenet. However, most current methods still have limitations like lower accuracy in dim lighting and inability to recognize full words. The document concludes that while solutions show potential to improve communication for the deaf community, further improvements are needed to address practical challenges.


ICEARS 2023 CONFERENCE

SIGN LANGUAGE PREDICTION USING MACHINE LEARNING TECHNIQUES: A REVIEW

Paper Id : ICEARS272

Authors:
Dr. Deepti Aggarwal
Som Ahirwar
Yash Goel
Sankalp Srivastava
Sujal Verma
Contents
• Introduction
• Literature Survey
• Summary Of Literature Survey
• Technologies
• Limitations
• Conclusion
• References
Introduction
• Communication can be a challenge for people with disabilities, particularly those who are
deaf or mute. Today, the communication gap between deaf or mute individuals and hearing
people remains very wide.

• In India, there are approximately 17 lakh people who are deaf or mute, yet there is little
focus on their needs. To address this issue, dedicated sign languages have been developed
to ease communication for deaf and mute individuals.

• American Sign Language (ASL) is one such language. It is a complete, natural language
with the same linguistic properties as spoken languages, but with grammar that differs
from English. ASL is expressed through movements of the hands and body, and is an
essential tool for communication within the deaf community.
Literature Survey
• Mohammedali, A. H., Abbas, H. H., & Shahadi, H. I. (2022). Real-time
sign language recognition system. International Journal of Health Sciences
:- Achieved about 97.5% accuracy in real time, with a computation time of about
3.3 s to capture the image, predict the sign, and convert it to text and a
spoken sentence. The system achieved this result without any image
preprocessing, which kept it simple and computationally cheap.

• Li, Dongxu and Rodriguez, Cristian and Yu, Xin and Li, Hongdong.
Word-level Deep Sign Language Recognition from Video: A New Large-
scale Dataset and Methods Comparison :- Their results show that pose-
based and appearance-based models achieve comparable performance, up to
62.63% top-10 accuracy on 2,000 words/glosses, demonstrating the validity
and challenges of their dataset.
Literature Survey (Cont.)
• Sood, Anchal, and A. Mishra. "AAWAAZ: A communication system for deaf
and dumb." Reliability, Infocom Technologies and Optimization :- They
created a sign-language-based communication system for the deaf and mute
that takes sign language as input and outputs both text and audio.
Conversely, given text input, it displays the corresponding sign image.
Literature Survey Table

| Paper | Techniques | Advantages | Disadvantages |
|---|---|---|---|
| Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison, 2020 | 2D Conv. RNN, Pose TGCN, Pose RNN, 3D Conv | Achieved 62.63% accuracy and works on video. | Computational cost and prediction time are high. |
| Real-time sign language recognition system. International Journal of Health Sciences, 6(S4), 10384–10407, 2022 | CNN model and SqueezeNet | Accuracy in offline testing was 100%; real-time accuracy was 97.5%. | Based on image input; does not work on videos. |
| AAWAAZ: A communication system for deaf and dumb | Harris Algorithm | Translates hand gestures into speech for ordinary people to understand. | Recognizes only image data, not video input; also recognizes only letters, not words. |
| Static Sign Language Recognition Using Deep Learning, 2019 | CNN Model | Achieves 98% training accuracy and 90.04% testing accuracy. | Works only on static images. |
Technologies
• Convolutional Neural Network (CNN) - a type of deep learning network
architecture used for tasks such as image recognition and processing of pixel
data.

• MediaPipe - provides ready-made machine learning models for tasks such as
hand tracking, removing a development barrier that is common across a range
of machine learning applications.
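The core operation of a CNN is a small filter slid across the pixel grid. As a self-contained illustration (not taken from any of the surveyed papers), the sketch below implements a "valid" 2-D convolution in plain Python; deep learning frameworks perform the same computation, just vectorized and with learned kernel weights.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as CNN layers compute it).

    image and kernel are lists of lists of numbers; the output shrinks by
    (kernel size - 1) in each dimension, as no padding is applied.
    """
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Multiply the kernel against the image patch and sum.
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out
```

For example, sliding the horizontal-difference kernel `[[1, -1]]` over an image whose left half is 1s and right half is 0s responds only at the boundary column, which is exactly how early CNN layers pick out edges in a hand image.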
Limitations
• Earlier research has shown several shortcomings in sign language recognition,
including lower accuracy in dim lighting, and some algorithms with high
computational cost and prediction time in both training and testing.

• Additionally, prediction is done only on individual letters of the alphabet, not on entire words.

• The application requires colored images in order to detect signs and may have
low accuracy on black-and-white images. These limitations must be considered
when using the tool for sign language recognition.
Conclusion
• Our interface solution provides a practical and easy way to achieve real-time conversion
of hand gestures into English sentences that can be easily understood by all.

• While practical adaptation of the interface for visually impaired and blind
users is limited by its simplicity and usability in real-world scenarios, our focus is
primarily on converting videos into readable English text.

• With the ability to recognize hand gestures and convert them into corresponding words,
our application is a powerful tool for communication and understanding between the deaf
community and the general public. Overall, our solution has the potential to greatly
improve the lives of individuals with hearing impairments and promote
inclusivity in society.
References

[1] Mohammedali, A. H., Abbas, H. H., & Shahadi, H. I. (2022). Real-time sign language
recognition system. International Journal of Health Sciences, 6(S4), 10384–10407.
https://doi.org/10.53730/ijhs.v6nS4.12206

[2] Li, D., Rodriguez, C., Yu, X., & Li, H. Word-level Deep Sign Language Recognition from
Video: A New Large-scale Dataset and Methods Comparison.

[3] Halder, A., & Tayade, A. Real-time Vernacular Sign Language Recognition using MediaPipe
and Machine Learning, Vol. 2, Issue 5, pp. 9–17, 2021.

[4] Muppidi, A., Thodupunoori, A., & Lalitha. Real Time Sign Language Detection for the Deaf
and Dumb, Volume 11, pp. 153–157, August 6, 2022.
THANK YOU
