
Proceedings of the Fourth International Conference on Smart Electronics and Communication (ICOSEC-2023)
IEEE Xplore Part Number: CFP23V90-ART; ISBN: 979-8-3503-0088-8
979-8-3503-0088-8/23/$31.00 © 2023 IEEE | DOI: 10.1109/ICOSEC58147.2023.10275924

Assistive Device based on Machine Learning Approach for Communication of Visually Challenged and Muted Community

1 Dr Smitha Sasi, Department of Electronics and Telecommunication Engineering, Dayananda Sagar College of Engineering, Bangalore, India, [email protected]
2 Dr Srividya B V, Department of Electronics and Telecommunication Engineering, Dayananda Sagar College of Engineering, Bangalore, India, [email protected]
3 Dr A R Aswatha, Department of Medical Electronics Engineering, Dayananda Sagar College of Engineering, Bangalore, India, [email protected]

Abstract— It is a great challenge to find a means of communication for people who suffer from visual or hearing impairment, and also for those who are speechless. This research study aims to develop a Raspberry Pi based device which can communicate with the visually challenged by converting messages to audio. The proposed device also helps people with hearing loss by converting audio to text and displaying it. It further helps the speech impaired by converting sign language into text or audio using image-to-text conversion.

Keywords— Raspberry Pi, Assistive Device, Deaf, KNN Classifier, Sign Language, Braille

I. INTRODUCTION

Globally, 1.5 billion people have some degree of vision impairment: 200 million have a mild condition, 220 million have a moderate to severe condition, and 35 million are blind. Most of the world's blind individuals are thought to reside in India. The number of mute and deaf people worldwide also runs into the hundreds of millions; almost 5% of the global population, or 466 million people, suffer from disabling hearing loss [1]. Technology is always evolving, and over the past few decades it has improved our quality of life and comfort. Yet people with physical disabilities have not received enough attention in our society. They often do not benefit from scientific breakthroughs, and they still face a range of challenges every day. Human existence is impossible without communication, and here there is a gap: Braille and the sign language through which they communicate are understood by few of the people around them. They are therefore frequently compelled to enhance their communication abilities or to rely on outside help, such as another person. This paper's main goal is to close that gap by giving them the self-assurance and the communication skills to interact with other people.

The gadget is precise, efficient, and robust thanks to its two main parts, the Raspberry Pi and the Google API. The system consists of three main modules, one for each of the three impairments: visual, auditory, and verbal. It uses a Raspberry Pi, supported by the Google API, together with a camera, microphone, speaker, and screen. For people who are blind or visually challenged, the built-in camera takes a picture of the printed text, which is subsequently transformed into digital text by the Google Vision API. The text is then translated to audio using the TTS (Text-to-Speech) module to produce output that sounds like the original book or paper.

The vocally handicapped can benefit from recording voice or audio, converting that data into text, and displaying it for them to read. A message may also be typed on a monitor using a custom keyboard, and for those with vocal impairments the gadget will read it out: with the help of the TTS library the text is rendered as speech, so the user's input is delivered in a synthesized voice [2].

According to the World Health Organization, there are 285 million blind people, 300 million people who are hard of hearing, and 1 million mute people in the world. Communication is a common challenge in daily life for those who are mute, deaf, or blind. This fact is the primary subject of this paper, which aims to develop a new technology that makes it easier for people who are blind, deaf, or mute to interact with other people in social situations.
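As an illustration of this capture, recognize, and speak chain, the following minimal Python sketch runs a captured image through OCR and saves the spoken result. It is a sketch only, not the authors' code: it assumes the google-cloud-vision and gTTS packages, configured Google Cloud credentials, and placeholder file names.

# Minimal sketch: printed page -> Google Vision OCR -> speech file.
# Assumes google-cloud-vision and gTTS are installed and Google Cloud
# credentials are configured; file paths are placeholders.
from google.cloud import vision
from gtts import gTTS

def read_page_aloud(image_path, audio_path="page.mp3"):
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)    # OCR of the photo
    annotations = response.text_annotations
    text = annotations[0].description if annotations else ""
    if text:
        gTTS(text=text, lang="en").save(audio_path)  # text-to-speech
    return text

print(read_page_aloud("captured_page.jpg"))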


II. OBJECTIVES

➢ To provide protracted communication methods employing wireless technologies for people who are blind, deaf, or hard of hearing.
➢ To create a durable and lightweight wearable device in the form of a glove, with attached sensors that capture and evaluate the different hand movements made by a deaf-mute person.
➢ To provide a means of Braille communication for the visually impaired.
➢ To develop distinctive testing strategies, specifically for speaker-dependent testing, using deep learning techniques for feature extraction.
➢ To develop an effective way of text-to-speech conversion for information delivery by fusing it with Artificial Intelligence.

A. Rationale for taking up the project
• The key objective of the prototype is to enable efficient interaction among all three types of disability, viz. visually, hearing, and speech impaired individuals, as well as with normal individuals.
• 24 different input-output combinations are made possible in this prototype by transmitting and receiving messages suited to the disabled individual's state(s).
• The prototype is wireless, which makes it highly suitable for long-distance communication.
• The project mainly serves a social cause by giving visually, hearing, and speech impaired individuals a way to bridge the communication gap between themselves and society.

III. LITERATURE SURVEY

Earlier work has focused on developing new technology so that impaired persons can communicate easily with other normal people or with others of their own type, that is, technology that can assist people struggling with blindness, deafness, or speechlessness. The Sharojan Bridge, for instance, is based on wearable technology, so the user can wear the device and move around with ease; communications between disabled persons were transmitted using an Arduino circuit board and Texas Instruments circuitry [5].

Another project contains three modules: one for the blind, one for the deaf, and one for the mute. For communication with other people, blind persons use the microphone of the blind module; there is also an app through which a blind person can communicate with a specific contact using his own gestures. Anyone can use the terminal to communicate with a deaf person in the deaf module, and anything they type is visible to everyone else on the terminal page [6].

People who are mute, blind, or deaf cannot communicate effectively with others, and one method improves their communication using a Bluetooth-enabled Arduino board and flex sensors. The flex detectors are put on gloves and connected to the Arduino board so that they flex in response to finger motion. The LCD and speaker modules are connected to the Arduino, which is programmed to display a certain message on the LCD when a sensor is off-centre and to output sound on the speaker module. The application is created with the Arduino IDE. By attaching a Bluetooth module to the Arduino, the message is also communicated via an app on a smartphone and shown on an LCD; the current location is tracked by a GPS module and likewise shown on the LCD screen [6].

The goal of another study is to offer a straightforward, speedy, accurate, and cost-effective solution. For people who are blind, deaf, or unable to speak, the project uses a Google API and Raspberry Pi based solution. Due to image-to-text and text-to-speech conversion, this method enables blind people to listen to audio. Speech input through the microphone for a person who is deaf is translated into text and shown in a pop-up window on the user's screen. The on-screen keyboard is used by mute people to input text, which is then translated into voice and spoken through the speaker [8].

A further study aims to develop a clever method that allows blind people to read Braille and hear audio messages converted from text delivered in both Braille and text form. Deaf people, for whom reading text on a screen is more comfortable than listening to sounds, read the messages on screen, while technology that recognizes sign language and converts it into on-screen text and speaker sound lets mute people communicate with everyone else. A small instrument and a Raspberry Pi are used in the development of this system. The device has a QWERTY keyboard connected to an LCD display on one side and a 3-cell Braille display on the other. A person with normal vision types on the QWERTY keyboard, and a blind person can read the displayed text by placing their fingertips on the Braille display; to reply, the blind person uses the Braille input keys, and the sighted individual reads the message on the LCD [9].

Owing to advancements in science and innovation, human existence has become better and easier. The World Health Organization (WHO) estimates that there are 285 million blind people, 300 million deaf people, and 1 million mute people in the world, and one prototype aims to offer these disabled people a communication channel. The gadget accepts flex-sensor input from a deaf-mute person through a sensor glove that recognizes hand motions, text input from a blind person through a Braille keypad, and input from a normal person through a web application, so that all three of the aforementioned disabled users can interact effectively with other people and with one another. The resulting message is displayed on an LCD screen, the speech output is produced through a speaker, and the Braille output is produced by four solenoid motors arranged in a manner that resembles Braille characters.

IV. METHODOLOGY

This research focuses on how the system recognizes sign language and translates it into voice and text. Figure 1 shows the block diagram of the device. The LCD display, speaker, SD card, and camera are all attached to the Raspberry Pi, which serves as the prototype's main component. The system works for vocally, visually, and audibly challenged people alike: the camera captures the sign language used by the vocally impaired, the gadget converts it to voice and text, the audio output through the speaker serves the visually impaired, and the message shown on the LCD module serves the audibly impaired individual [20].
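The capture, classify, and output loop described here can be sketched in a few lines of Python. This is an illustrative skeleton rather than the authors' code: it assumes OpenCV and gTTS are installed, and classify_sign() is a placeholder standing in for the SIFT + K-NN model of Sections IV and V.

# Illustrative skeleton of the device loop: capture a frame, classify the
# sign, display the text (for deaf users) and synthesize speech (for blind
# users). classify_sign() is a placeholder for the SIFT + K-NN model.
import cv2
from gtts import gTTS

def classify_sign(frame):
    """Placeholder: feature extraction and K-NN voting would go here."""
    return "A"

cap = cv2.VideoCapture(0)            # Pi camera or USB webcam
ret, frame = cap.read()
cap.release()
if ret:
    letter = classify_sign(frame)
    print("Detected sign:", letter)                 # LCD text in the prototype
    gTTS(text=letter, lang="en").save("sign.mp3")   # spoken over the speaker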


Mute users rely on the camera to recognize hand motions. The camera feed is passed through an algorithm that converts the various movements into text and audio, delivered through a speaker and an LCD display. This method allows numerous hand-gesture patterns to be identified: the message can be heard over the speaker by the blind and seen on the LCD by the deaf. Additionally, the microphone records any user's voice, which is subsequently converted into text by an algorithm and shown on the LCD to help deaf persons understand the intended message.

Fig 1: Block diagram of the device

Figure 2 summarizes this methodology [19]: gestures captured by the camera module are converted into text and audio samples, the blind hear the message over the speaker, the deaf read it on the LCD, and speech recorded by the microphone is converted into text and presented on the LCD.

Fig 2: Methodology

The device operates as follows for each user group.

For the blind:
Step 1: The camera on the device, attached to the Raspberry Pi, takes a picture of the sign the user is holding up.
Step 2: Based on the number and positioning of the hand's edges, the image is compared with the pre-trained model, and the message is shown on the LCD.
Step 3: The collected text is then converted to speech using the Google TTS API.
Step 4: The Raspberry Pi is linked to a high-quality speaker which outputs the audio, allowing a blind person to decipher the message by listening to it.
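Steps 3 and 4 amount to a text-to-speech call followed by playback. A hedged sketch follows: it assumes gTTS for the Google text-to-speech service and the mpg123 command-line player on the Raspberry Pi, neither of which is named explicitly in the paper.

# Steps 3-4 sketch: recognized text -> Google TTS -> speaker. Assumes the
# gTTS package and the mpg123 player are installed; paths are placeholders.
import subprocess
from gtts import gTTS

def speak(text):
    gTTS(text=text, lang="en").save("/tmp/msg.mp3")          # text-to-speech
    subprocess.run(["mpg123", "/tmp/msg.mp3"], check=False)  # audio output

speak("Detected letter A")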
For the deaf:
Step 1: The USB microphone attached to the Raspberry Pi captures the sound or words being spoken and saves them as an MP3 file for the user, who in this case may not be able to hear.
Step 2: The Google Speech API takes this audio file and converts it into text the user can read.
Step 3: The translated text is shown on the device's LCD screen in a pop-up window created specifically for this module in Python, so the user quickly and efficiently comprehends everything said to him.
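A minimal sketch of this microphone-to-pop-up path, assuming the SpeechRecognition package (with PyAudio for microphone access) and the standard tkinter toolkit; it listens once and shows the transcript in a message box:

# Deaf-user path sketch: microphone -> Google Speech API -> pop-up window.
# Assumes SpeechRecognition and PyAudio are installed; tkinter is standard.
import tkinter as tk
from tkinter import messagebox
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:                  # USB microphone on the Pi
    audio = recognizer.listen(source)
try:
    text = recognizer.recognize_google(audio)    # Google Speech API
except (sr.UnknownValueError, sr.RequestError):
    text = "(speech not recognized)"

root = tk.Tk()
root.withdraw()                                  # show only the pop-up
messagebox.showinfo("Message", text)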

The primary focus of this research is on normal individuals who wish to use alphabet sign language to communicate with the deaf and mute. A convenient, high-accuracy, low-cost gadget with a simple procedure is required. Several methods have been used for image classification, K-Nearest Neighbor (K-NN) [1] being one example; because of its low complexity and ease of use, K-NN has become a popular approach for classifying images. Plain K-NN, however, is not very accurate at classifying images. When alphabet sign language is used, the weighting value in the K-NN is extremely important for classification accuracy, so to acquire the best possible results the weight value is optimized for the highest prediction accuracy in this research [18]. Figure 3 shows the sign language detection flow of the proposed method, and Figure 4 shows the alphabet signs.

Fig 3: Proposed sign language detection

Fig 4: Alphabet sign language
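In practice, the weighting scheme and the neighbor count k can be chosen by cross-validated grid search. The sketch below uses scikit-learn (an assumption; the paper does not name its tooling) on stand-in feature vectors and labels in place of the real sign-image dataset:

# Sketch of tuning k and the K-NN weighting value by cross-validated grid
# search with scikit-learn. X and y are random stand-ins for SIFT-derived
# descriptors and letter labels; real data would come from the sign dataset.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))      # stand-in 128-D SIFT-like descriptors
y = rng.integers(0, 5, size=200)     # stand-in labels (5 letters, for demo)

grid = GridSearchCV(
    KNeighborsClassifier(),
    {"n_neighbors": [3, 5, 7, 9], "weights": ["uniform", "distance"]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))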


Fig 5: Hand detection outcome

The hand detection technique is shown in Fig. 5: the hand area is photographed with a camera, and the skin region is segmented using the thresholds 130 ≤ Cr ≤ 180, 130 ≤ Cb ≤ 180, and 0.01 ≤ H. A binary image is then created from the segmented image. The binary image is produced in the following steps: first the segmented image from the skin detection is converted into a grayscale image using the luminance algorithm, and the Otsu method is then used to produce the binary image. Finally, the bounding-box method, based on the top-end x and y coordinates, is used to crop the binary image [17].

A. Feature Extraction Algorithms

In machine learning, pattern recognition, and image processing, feature extraction is used to derive values (features) from a set of measured data that are meant to be informative and non-redundant. This strategy speeds up learning and in some cases improves human interpretation. Dimension reduction and feature extraction work together seamlessly: when the input data is too extensive to evaluate or is deemed redundant, it may be condensed to a more manageable collection of attributes. The process of selecting a portion of the original attributes is known as feature selection; the chosen features are expected to contain the pertinent information of the full representation, so that the desired task can be carried out on the reduced representation instead of the complete one [16].

The purpose of feature extraction is thus to describe enormous amounts of data with minimal resources. The vast number of variables needed to analyze complex data is one of the key challenges, and feature extraction integrates the many variables to get around this while still accurately characterizing the data. Many software packages for data analysis provide dimension-reduction and feature-extraction capabilities. This method uses SIFT (Scale-Invariant Feature Transform), one of many feature extraction approaches.

SIFT is a complicated algorithm whose process mainly comprises five steps:
• Scale-space peak selection: potential sites for finding features.
• Precise key point localization of the feature key points.
• Assigning key points an orientation: this is known as key point assignment.
• Key point descriptor: a multidimensional vector that holds the crucial data.
• Key point matching: key points between two images are matched by identifying their nearest neighbors.
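These five steps are available off the shelf in OpenCV. The following hedged sketch detects and matches SIFT key points between two images; the file names are placeholders, and the 0.75 ratio test is Lowe's usual heuristic rather than a value taken from the paper:

# SIFT sketch with OpenCV: detect key points, compute descriptors, and
# match them between two images with a nearest-neighbor ratio test.
import cv2

img1 = cv2.imread("sign_query.png", cv2.IMREAD_GRAYSCALE)   # placeholder
img2 = cv2.imread("sign_train.png", cv2.IMREAD_GRAYSCALE)   # placeholder

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # key points + descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]      # Lowe's ratio test
print(len(good), "good key point matches")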


KNN CLASSIFIER

One of the most fundamental supervised machine learning algorithms is K-Nearest Neighbor. On the premise that a new instance is comparable to previously seen cases, the K-NN technique places the new instance in the group it most closely resembles among the existing categories. The algorithm stores all previously available information and categorizes additional data based on similarity, which means that new data can be reliably and quickly categorized with the K-NN approach [10]. Most of its use and study is in classification problems, even though it can be applied to both classification and regression.

K-NN is a non-parametric method that makes no assumptions about the underlying data. It is also known as the "lazy learner" algorithm, because it keeps the training dataset rather than learning from it right away and only sorts an input when asked to classify it. The approach effectively keeps the training data unchanged during the training phase and, on receiving new data, classifies it into the category most similar to the stored data [11].

V. WORKING OF K-NN

The working of K-NN may be described by the following algorithm:
1: Choose the number K of neighbors.
2: Compute the Euclidean distance from the new point to the training points.
3: Using the computed Euclidean distances, select the K closest neighbors.
4: Count the number of data points in each category among these K neighbors.
5: Assign the new data point to the category with the most members among the K neighbors.

Consider the situation where a new data point must be categorized. Look at the illustration below:

Fig 6: KNN classifier data collection
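Steps 1 to 5 translate directly into a few lines of NumPy. This is a from-scratch sketch for illustration, not the authors' implementation; X_train and y_train are assumed to be NumPy arrays of feature vectors and labels:

# From-scratch transcription of steps 1-5: Euclidean distances, pick the
# K nearest training points, and vote by category count.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=5):
    dists = np.linalg.norm(X_train - x_new, axis=1)   # step 2: distances
    nearest = np.argsort(dists)[:k]                   # step 3: K closest
    votes = Counter(y_train[i] for i in nearest)      # step 4: count
    return votes.most_common(1)[0][0]                 # step 5: majority

X_train = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
y_train = np.array(["A", "A", "B"])
print(knn_predict(X_train, y_train, np.array([0.5, 0.5]), k=3))  # -> A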


● First decide the number of neighbors; here, k = 5.
● Then compute the Euclidean distance between the data points. The Euclidean distance, well known from geometry, is the distance between two points.

Fig 7: KNN classifier Euclidean distance calculation

● The Euclidean distances identify the nearest neighbors: as Fig. 7 shows, three of the nearest neighbors lie in category A and just two in category B.
As can be observed, the new data point must therefore fall into category A, because three of its nearest neighbors are likewise in category A.
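The worked example can be reproduced with scikit-learn on toy coordinates (invented here, since the figure's actual values are not given); the five neighbors split three to two in favor of category A:

# Toy reproduction of the worked example: with k = 5, three of the five
# nearest neighbors are category A and two are category B, so the new
# point is labeled A. The coordinates are invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

X = [[0, 0], [1, 0], [0, 1], [3, 3], [4, 3], [5, 5], [6, 6]]
y = ["A", "A", "A", "B", "B", "B", "B"]
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(clf.predict([[1.5, 1.5]])[0])   # -> A (vote: 3 x A vs 2 x B)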

VI. RESULTS AND DISCUSSION

The proposed gadget is built around a Raspberry Pi board, and Python is used for the programming, for displaying the message on the LCD, and for broadcasting speech through the speaker. VNC Viewer and the Raspbian operating system are used for this [13]. The experimental setup is shown in Figure 8.

Fig 8: Experimental setup of the project

The camera takes a photo of the hand and recognizes the letter in it from the number and positioning of edge points in the displayed sign, then outputs the detected speech over the speaker. Figure 9 shows a detected alphabet sign displayed on the LCD.

Fig 9: Display of sign detected on LCD

In a similar manner, Figures 10 and 11 depict the edge-point detection of the sign shown, identification of the letter, and display of the corresponding detected sign on the LCD screen.

Fig 10: Detection of the letter

Fig 11: Display of alphabet sign detected on LCD


VII. APPLICATIONS

• By utilizing the sign detection paradigm, international meetings may be made simple for individuals with disabilities to follow, and the significance of their labour can be recognized. The dataset may be easily modified and enlarged to accommodate the user's needs, and it may prove to be a crucial step in bridging the communication gap between the deaf and the mute [14].
• Anyone with a basic understanding of technology can use the method, which is accessible to all.
• This approach can be used in primary schools to introduce sign language to children as early as feasible.

Advantages:
• Each sign serves a particular purpose, such as providing text or audio output, and is typically used by mute and deaf people.
• Accurate feature extraction.
• Simpler algorithmic structures.
• Trustworthy edge detection.

Limitations:
• The model is subject to various limitations, such as environmental factors that impair detection accuracy, for example dim lighting and an uncontrolled background. Under such conditions it is difficult to find the right gradients, even though processing itself is quick.

VIII. CONCLUSION AND FUTURE SCOPE

A sign language detection system's main objective is to give deaf and hearing people a practical way of communicating through hand gestures. This system is used in conjunction with a Pi camera that recognizes and processes edge points. We can infer from the model results that the proposed system generates trustworthy results when the light and intensity are controlled. Additionally, new gestures can be easily added, and the model will become more accurate with more photographs taken from different perspectives and frames; as a result, the model may simply be scaled up by expanding the dataset. Through this study, a ground-breaking device for helping visually, vocally, and audibly challenged people has been developed. The device is accessible, adaptable, and mobile, thanks to recent and popular technologies. The technology suggested in this research can significantly assist in addressing some of the numerous difficulties faced by the differently abled.

REFERENCES

[1] Netchanok Tanyawiwat and Surapa Thiemjarus, "Design of an Assistive Communication Glove using Combined Sensory Channels," Ninth International Conference on Wearable and Implantable Body Sensor Networks, 2012.
[2] M. Mohandes and S. Buraiky, "Automation of the Arabic sign language recognition using the power glove," AIML Journal, vol. 7, no. 1, pp. 41-46, 2007.
[3] Nikolaos Bourbakis, Anna Esposito, and D. Kabraki, "Multi-modal Interfaces for Interaction-Communication between Hearing and Visually Impaired Individuals: Problems & Issues," 19th IEEE International Conference on Tools with Artificial Intelligence.
[4] en.wikipedia.org/wiki/American_Sign_Language
[5] https://medium.com/data-breach/introduction-to-sift-scale-invariant-feature-transform-65d7f3a72d40
[6] "Implementation of Flex Sensor and Electronic Compass for Hand Gesture Based Wireless Automation of Material Handling Robot," International Journal of Scientific and Research Publications, vol. 2, issue 12, December 2012, ISSN 2250-3153.
[7] Sangeetha P et al., "Novel Approaches for Robotic Control Using Flex Sensor," Int. Journal of Engineering Research and Applications, ISSN 2248-9622, vol. 5, issue 2 (part 2), February 2015, pp. 79-8.
[8] Rohit Rastogi, Shashank Mittal, and Sajan Agarwal, "A Novel Approach for Communication among Blind, Deaf, and Dumb People," 2nd International Conference on Computing for Sustainable Global Development, 11-13 March 2015, IEEE Conference ID: 35071.
[9] B. Buvaneswari, T. Hemalatha, G. Kalaivani, P. Pavithra, and A. R. Preethisree, "Communication among blind, deaf and dumb people," International Journal of Advanced Engineering, Management and Science (IJAEMS), vol. 6, issue 4, April 2020, ISSN 2454-1311.
[10] Kasi Viswanathan G, Sathya Seelan C, and S. Praveen Kumar, "A Novel Approach on Communication between Blind, Deaf and Dumb People using Flex Sensors and Bluetooth," International Journal of Engineering & Technology, 7 (3.12) (2018), 485-490.
[11] Karmel A, Anushka Sharma, Muktak Pandya, and Diksha Garg, "IoT based Assistive Device for Deaf, Dumb and Blind People," International Conference on Recent Trends in Advanced Computing (ICRTAC), 2019.
[12] Rajyashree, O. Deepak, Naresh Rengaswamy, and K. S. Vishal, "Communication Assistant for Deaf, Dumb and Blind," International Journal of Recent Technology and Engineering (IJRTE), ISSN 2277-3878, vol. 8, issue 2S11, September 2019.
[13] Kasie, F. M. (2013). Combining simple multiple attribute rating technique and analytical hierarchy process for designing multi-criteria performance measurement framework. Global Journal of Researches in Engineering: Industrial Engineering, 13(1), 1-15.
[14] Silva, L. A., & Del-Moral-Hernandez, E. (2011). A SOM combined with KNN for classification task. Proceedings of the International Joint Conference on Neural Networks, San Jose, California.
[15] Rahman, A. M., Ahsan, U., & Aktaruzzaman, M. (2011). Recognition of static hand gestures of alphabet in ASL. ISSN 2218-5224 (Online), 2(1).
[16] Chai, J., Liu, J. N. K., & Ngai, E. W. T. (2012). Application of decision making techniques in supplier selection: A systematic review of literature. Elsevier Ltd.
[17] Nachamai, M. (2013). Alphabet recognition of American sign language: A hand gesture recognition approach using SIFT algorithm. International Journal of Artificial Intelligence and Applications (IJAIA), 4(1).
[18] Somawirata, I. K., Uchimura, K., & Koutaki, G. (2012). Image enlargement using adaptive manipulation interpolation kernel based on local image data. Proceedings of the International Conference on Signal Processing, Communication and Computing (pp. 474-478). Hong Kong.
[19] Utaminingrum, F., & Mufarroha, F. A. (2017). Hand gesture recognition using adaptive network based fuzzy inference system and K-nearest neighbor. International Journal of Technology (IJTech), 8(3), 559-567.
[20] Somawirata, I. K., & Utaminingrum, F. (2016). Road detection based on the color space and cluster connecting. Proceedings of the International Conference on Signal and Image Processing (pp. 118-122). Beijing, China.
[21] Utaminingrum, F., Sari, Y. A., & Syauqy, H. (2018). Left-right head movement for controlling smart wheelchair by using centroid coordinates distance. Journal of Theoretical and Applied Information Technology, 96(10), 2852-2861.
[22] Utaminingrum, F., Sari, Y. A., & Prasetya, R. P. (2016). Image processing for rapid eye detection based on robust Haar sliding window. International Journal of Electrical and Computer Engineering (IJECE), 7(2), 31-37.

Author Contributions: Conceptualization, S.S.; methodology, S.S. and S.B.V.; software, S.S. and A.R.A.; validation, S.S. and S.B.V.; formal analysis, S.S. and S.B.V.; investigation, S.S. and S.B.V.; resources, S.S.; data curation, A.R.A. and S.S.; writing—original draft preparation, S.S.; writing—review and editing, S.B.V., S.S. and A.R.A.; visualization, S.S. and S.B.V.; supervision, S.B.V.
Funding: This research received funding from the Karnataka State Science and Technology Faculty Proposal Scheme.
Data Availability Statement: This research mainly focuses on the communication of text and image data; the data and materials used are dummy data.
Acknowledgements: We would like to thank the management of Dayananda Sagar College of Engineering for the support rendered.
Conflicts of Interest: The authors declare no conflict of interest.
