Final Conf PPT
Paper Id : ICEARS272
Authors:
Dr. Deepti Aggarwal
Som Ahirwar
Yash Goel
Sankalp Srivastava
Sujal Verma
Contents
• Introduction
• Literature Survey
• Summary Of Literature Survey
• Technologies
• Limitations
• Conclusion
• References
Introduction
• Communication is a challenge for people with disabilities, particularly those who are
deaf or mute. Today, the communication gap between deaf or mute individuals and
hearing individuals remains very wide.
• In India, there are approximately 17 lakh (1.7 million) people who are deaf or mute, yet
little attention is paid to their needs. To address this issue, sign languages have been
developed to ease communication for deaf and mute individuals.
• American Sign Language (ASL) is one such language. It is a complete, natural language
with the same linguistic properties as spoken languages, but with grammar that differs
from English. ASL is expressed through movements of the hands and body, and is an
essential tool for communication within the deaf community.
Literature Survey
• Mohammedali, A. H., Abbas, H. H., & Shahadi, H. I. (2022). Real-time
sign language recognition system. International Journal of Health Sciences
:- The system achieved about 97.5% accuracy in real time, with a computation time of
about 3.3 s for capturing the image, predicting it, and converting it to a text and
spoken sentence. It achieved this result without any image preprocessing, which kept the
system simple and the computing time low (a hedged sketch of this capture-predict-speak
pipeline follows the literature survey table).
• Li, Dongxu and Rodriguez, Cristian and Yu, Xin and Li, Hongdong.
Word-level Deep Sign Language Recognition from Video: A New Large-
scale Dataset and Methods Comparison :- Their results show that pose-
based and appearance-based models achieve comparable performance of up to
62.63% top-10 accuracy on 2,000 words/glosses, demonstrating both the validity
and the difficulty of their dataset (a brief sketch of how top-10 accuracy is
computed is given below).
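Top-10 accuracy, as quoted above, counts a prediction as correct when the true gloss is among the model's ten highest-scoring classes. The following is a minimal NumPy sketch with made-up scores, purely for illustration (not code from the paper):

import numpy as np

def top_k_accuracy(scores, labels, k=10):
    # scores: (n_samples, n_classes) model scores; labels: (n_samples,) true class ids
    top_k = np.argsort(scores, axis=1)[:, -k:]   # indices of the k highest-scoring classes
    hits = [labels[i] in top_k[i] for i in range(len(labels))]
    return float(np.mean(hits))

# toy example: 3 samples over 2,000 gloss classes with random scores
rng = np.random.default_rng(0)
scores = rng.random((3, 2000))
labels = rng.integers(0, 2000, size=3)
print(top_k_accuracy(scores, labels, k=10))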
Literature Survey(Cont.)
• Sood, Anchal, and A. Mishra. "AAWAAZ: A communication system for deaf
and dumb." Reliability, Infocom Technologies and Optimization :- They
created a sign-language-based communication system for the deaf and dumb
that takes sign language as input and produces both text and audio as output.
Conversely, if text is entered, it displays the corresponding sign image (a hedged
sketch of this text-to-sign lookup is given below).
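The reverse, text-to-sign direction described above can be sketched as a simple lookup from letters to stored sign images. The signs/<LETTER>.png folder layout and the use of OpenCV for display are assumptions made for illustration, not details taken from the AAWAAZ paper.

import os
import cv2   # used here only to load and display stored sign images

SIGN_DIR = "signs"   # assumed folder holding A.png ... Z.png

def show_signs_for_text(text):
    # Show the stored sign image for each alphabetic character of the input text.
    for ch in text.upper():
        if not ch.isalpha():
            continue
        img = cv2.imread(os.path.join(SIGN_DIR, ch + ".png"))
        if img is None:
            print("No sign image found for", ch)
            continue
        cv2.imshow("Sign for " + ch, img)
        cv2.waitKey(500)          # display each sign briefly
    cv2.destroyAllWindows()

show_signs_for_text("hello")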
Literature Survey Table
• Paper: Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison, 2020
  Techniques: 2D Conv. RNN, Pose TGCN, Pose RNN and 3D Conv
  Advantages: Achieved 62.63% (top-10) accuracy and works on video.
  Disadvantages: Computational cost and prediction time are high.
• Paper: Real-time sign language recognition system. International Journal of Health Sciences, 6(S4), 10384–10407, 2022
  Techniques: CNN model and SqueezeNet
  Advantages: Accuracy in offline testing was 100%, whereas accuracy in real time was 97.5%.
  Disadvantages: Based on image input; does not work on videos.
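The real-time system in the second row above captures a frame, classifies it with a CNN, and speaks the result. Below is a minimal sketch of that capture-predict-speak loop; the model file sign_model.h5, the 224x224 input size, and the A-Z label set are assumptions made for illustration, not details from the paper.

import cv2                      # camera capture and resizing
import numpy as np
import pyttsx3                  # offline text-to-speech
from tensorflow.keras.models import load_model

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]   # assumed label set
model = load_model("sign_model.h5")                        # assumed pre-trained CNN
engine = pyttsx3.init()

cap = cv2.VideoCapture(0)
ok, frame = cap.read()                                     # grab a single frame
cap.release()
if ok:
    img = cv2.resize(frame, (224, 224)).astype("float32") / 255.0   # assumed input size
    probs = model.predict(img[np.newaxis, ...])[0]
    letter = LABELS[int(np.argmax(probs))]
    print("Predicted sign:", letter)
    engine.say(letter)                                     # speak the prediction aloud
    engine.runAndWait()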
Limitations
• Additionally, prediction is performed only on individual letters of the alphabet, not on entire words.
• The application requires colored images in order to detect signs and may have low
accuracy on black-and-white images. These limitations must be considered when using
the tool for sign language recognition (a hedged sketch illustrating both points is given below).
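To make these limitations concrete, the sketch below (reusing the assumed model and LABELS from the pipeline sketch above) rejects non-colour input before predicting and assembles per-letter predictions into a word, since the classifier recognizes single letters rather than whole words.

import numpy as np
import cv2

def predict_word(frames, model, labels):
    # frames: list of BGR images, one per signed letter; the classifier predicts letters only.
    letters = []
    for frame in frames:
        if frame.ndim != 3 or frame.shape[2] != 3:
            raise ValueError("Colour (3-channel) input expected; grayscale input lowers accuracy.")
        img = cv2.resize(frame, (224, 224)).astype("float32") / 255.0   # assumed input size
        probs = model.predict(img[np.newaxis, ...])[0]
        letters.append(labels[int(np.argmax(probs))])
    return "".join(letters)    # e.g. ["H", "I"] -> "HI"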
Conclusion
• Our interface solution provides a practical and easy way to achieve real-time conversion
of hand gestures into English sentences that can be easily understood by all.
• While the practical adaptation of the interface solution for visually impaired and blind
people is limited by simplicity and usability in practical scenarios, our focus is primarily
on the conversion of videos to readable English text.
• With the ability to recognize hand gestures and convert them into corresponding words,
our application is a powerful tool for communication and understanding between the deaf
community and the general public. Overall, our solution has the potential to greatly
improve the lives of individuals with hearing impairments and promote
inclusivity in society.
References
[1] Mohammedali, A. H., Abbas, H. H., & Shahadi, H. I. (2022). Real-time sign language
recognition system. International Journal of Health Sciences, 6(S4), 10384–10407.
https://doi.org/10.53730/ijhs.v6nS4.12206
[2] Li, D., Rodriguez, C., Yu, X., & Li, H. (2020). Word-level Deep Sign Language Recognition
from Video: A New Large-scale Dataset and Methods Comparison.
[3] Halder, A., & Tayade, A. (2021). Real-time Vernacular Sign Language Recognition using
MediaPipe and Machine Learning. Vol. 2, Issue 5, pp. 9-17.
[4] Muppidi, A., Thodupunoori, A., & Lalitha. (2022). Real Time Sign Language Detection for
the Deaf and Dumb. Volume 11, pp. 153-157, August 06, 2022.
THANK YOU