
Carlos Hilado Memorial State University

Alijis Campus | Binalbagan Campus | Fortune Towne Campus | Talisay Campus

To be a leading GREEN institution of higher learning in the global community by 2030


(Good governance, Research-oriented, Extension-driven, Education for Sustainable Development & Nation-building)

SignMo: Real-time Sign Language Speech Translation

A Technical Project

Presented to the

Faculty of College of Engineering

Carlos Hilado Memorial State University

Alijis Campus, Bacolod City

In Partial Fulfillment

Of the Requirement for the Degree

BACHELOR OF SCIENCE IN COMPUTER ENGINEERING

By

Flora, Era Marie M.

Lachica, Esteban L.

Velasco, John Patrick M.

October, 2023

TABLE OF CONTENTS

Page

Title Page i

Table of Contents ii

List of Tables v

List of Figures vi

CHAPTERS

I INTRODUCTION 1

Background of the Study 1

Objectives of the Study 3

Conceptual Framework 4

Significance of the Study 5

Scope and Limitations 7

Definition of Terms 9

II REVIEW OF RELATED LITERATURE 11

Hearing and Speech Impaired 11

Sign Language Real-time Translation 14

Gesture and Motion Recognition 17



Sign Language Translation Techniques 21

Summary of Insights 25

III METHODOLOGY 28

Research Design 28

Design Criteria 28

Design Plan Preparation and Fabrication 30

Evaluation Procedure 40

Instrumentation 41

Validity and Reliability of Research Instrument 41

Data to be Gathered 42

Data Analysis 43

Parameters for Analysis 44

Project Cost 46

REFERENCES 48

APPENDICES 55

Appendix A: Realtime Sign language to Speech Translation Survey Form 55

Appendix B: Letter to the Owner of the Research Instrument 58

Appendix C: Informed Consent Form 59



Appendix D: Letter to the Dean 60

Appendix E: Letter to the School President 61

Appendix F: System Block Diagram 62

Appendix G: Proposed Design for Device 63



LIST OF TABLES

Table Page

1 Parts and function of the Computer Vision 34

2 Parts and function of the Text to Speech 35

3 Parts and function of the Auto-start script 36

4 List of Materials 36

5 Test Parameters 38

6 Interpretative Scale for Prototype Quality 43

7 Parameters for Analysis 44

8 Project Cost 46

LIST OF FIGURES

Figure Page

1 Paradigm of the Study 10

2 Prototype Development and Procedures 37

3 Flow Chart of the Ideal Working Conditions of the System 38

4 Flow Chart of the Ideal Working Conditions of the Computer Vision 39

5 Flow Chart of the Ideal Working Conditions of the Text to Speech 40



Chapter I

INTRODUCTION

Background of the Study

Individuals who are mute or have a hearing impairment use sign language, a visual and gestural form of communication. Although its grammar and syntax make it an expressive and sophisticated language, people who do not know it face a significant communication barrier (Hommes, Borash, Hartwig, et al., 2018). Individuals who are mute or hearing impaired often face challenges when interacting with the hearing community, which hampers their inclusion in and access to various aspects of society (Kushalnagar, 2019). This communication barrier becomes even more evident when a sign language interpreter is unavailable, particularly in healthcare facilities, schools, and everyday encounters (Haug et al., 2018).

The communication gap between the hearing public and the Deaf and hard-of-hearing communities has long been acknowledged as a severe problem. Text-based communication and sign language interpreters have served as traditional alternatives; however, these approaches are not always feasible, effective, or easily accessible (Nakamura et al., 2019). Additionally, they might not offer spontaneous, in-the-moment communication, which is crucial in various situations (Woll, 2018).

Communication is more effective than ever because of technological developments and improvements in computer vision, machine learning, and natural language processing. These advancements have enabled the creation of real-time systems for translating sign language gestures into spoken language,

facilitating much simpler communication for deaf and mute individuals (Hassan et al., 2021). Thanks to research, sign language recognition systems have advanced significantly in recent years. These systems capture and analyze the movements and gestures of sign language users using computer vision techniques, and machine learning algorithms are then used to identify the corresponding signs (Wadhawan & Kumar, 2020). One notable development in gesture recognition is the creation of large sign language datasets for training and evaluation (Prasetya & Sarno, 2021).

Moreover, the use of accessible and affordable hardware platforms such as

Raspberry Pi has made it possible to construct small, portable sign language

translation systems (Kumar et al., 2022). These mobile gadgets have expanded the

availability of real-time translation in a variety of settings.

It is impossible to overstate the importance of closing the communication gap between the hearing public and the Deaf and hard-of-hearing community. It affects social inclusion, prospects for work, and the provision of healthcare and education. To help the Deaf and mute community communicate more easily and spontaneously, this thesis will develop a real-time sign language translation system on a Raspberry Pi that combines computer vision, machine learning, and a text-to-speech engine.

Objectives of the Study

This study generally aims to design, develop and test a Real-time Sign Language Speech

Motion Translation for the 2nd Semester, School Year 2023 – 2024.

Specifically, this study sought to:

1. Design and develop a Real-time Sign Language Speech Motion Translation with the

following technical features:

a. Computer Vision;

b. Text to Speech; and

c. Auto-start Script;

2. Test the quality of the Real-time Sign Language Speech Motion Translation in terms of:

a. Functionality;

b. Portability; and

c. Performance Efficiency

3. Develop a user’s manual.



Conceptual Framework

This section presents the elements of the study. It outlines the guidelines and process by which the study will be conducted, as well as its inputs and outputs.

Figure 1. Paradigm of the Study

Figure 1 presents a three-phase roadmap for developing SignMo, a real-time sign language speech motion translation device. Phase 1 lays out the technological resources to be used in developing the device. In Phase 2, the prototype is refined to improve the user experience. Phase 3 ensures that SignMo fulfills its requirements and functions as a seamless communication tool.



Significance of the Study

This study will be of considerable interest to healthcare providers and professionals, the general public, educational institutions and teachers, future researchers, and hearing-impaired individuals.

Healthcare Providers and Professionals

Healthcare providers and professionals can benefit from this study, as it helps them communicate well with patients who are deaf or hard of hearing, leading to better care and a better understanding of patients' needs.

General Public

The general public can benefit from the Real-time Sign Language Speech Motion Translation device as it improves interactions and communication among individuals, especially in public places.

Educational Institutions and Teachers

Educational institutions and teachers can benefit from this study by using Real-time Sign

Language Speech Motion Translation as it enhances the communication and learning

experiences of students who are deaf or hard of hearing.

Future Researchers

Future researchers can benefit from this study by using it to gather data for further research on real-time sign language translation systems and future developments. The study's

findings, methodologies, and challenges encountered can guide future researchers toward more advanced and efficient communication solutions for deaf and hard-of-hearing individuals.

Hearing Impaired Individuals

Hearing-impaired individuals can benefit from the Real-time Sign Language Speech Motion Translation device, as it allows them to communicate with hearing individuals for easier access to information and enhanced communication in daily interactions.



Scope and Limitations of the Study

This study is concerned with the design, development, and testing of SignMo: Real-time Sign Language Speech Motion Translation during the second semester of School Year 2023-2024. The study also intends to develop a user's manual.

The primary scope of this research includes the creation of a device capable of recognizing and translating sign language gestures into speech in real time. The proposed device will utilize a Raspberry Pi single-board computer as its main processing unit and a 5-megapixel camera module for sign-language hand gesture detection. For portability and convenience, the device is equipped with a rechargeable battery as its power supply, and a speaker will be used for the sign-language-to-speech feature. The device will be trained using data sets that will be utilized during the sign-language detection and translation process through the Computer Vision feature. These data sets will be the foundation of the entire machine learning cycle of the device, from training to evaluating and improving its performance.

The respondents of the study will be composed of experts in the field of Computer Engineering; sign language users and/or Deaf and hard-of-hearing students within Bacolod City; students currently taking up the Bachelor of Special Needs Education at CHMSU Talisay Campus; and Computer Engineering students at Carlos Hilado Memorial State University – Alijis Campus for the 2nd Semester, School Year 2023-2024.

However, various limitations have been set in the course of this study. Notably, the

system will be limited to the detection of single-hand sign language gestures only. Furthermore,

the reliance on built-in data sets may restrict the system's adaptability to diverse sign language

variations. Additionally, the proficiency of the system is limited to simple and common

sentences only, and its effectiveness may be compromised in areas with inadequate lighting

conditions due to the camera module's dependency on good lighting.



Definition of Terms

For better understanding, the following terms are defined conceptually and operationally.

Auto-start Script. It refers to a file that performs tasks during the startup process of a virtual

machine (VM) instance. Startup scripts can apply to all VMs in a project or to a single VM.

(Using Startup Scripts on Linux VMs, n.d.)

In this study, auto-start script refers to the pre-programmed instructions that automatically

execute a specific program when the system is activated.

Deaf. It describes anyone who does not hear very much; it is sometimes also used to refer to people who are severely hard of hearing. Deaf people tend to communicate in sign language as their first language (SignHealth, 2023).

In this study, the term refers to individuals who have hearing loss and use sign language as a primary mode of communication.

Sign Language. It refers to the fundamental means of communication for those with hearing or speech impairments. Sign languages are also the primary carriers of Deaf culture, with their own beliefs, behaviors, literary traditions, history, and values (Foggetti, 2023).

In this study, sign language refers to a form of communication used by deaf and hard-of-hearing individuals through hand and body gestures.

Computer Vision. It refers to a field of computer science that focuses on enabling computers to

identify and understand objects and people in images and videos. (What Is Computer Vision? |

Microsoft Azure, n.d.-b)



In this study, computer vision refers to the use of a computer to capture visual information and recognize sign language gestures.

Text-to-Speech. It is a technology that converts text into speech. Trivedi et al. (2018) define text-to-speech as a process in which input text is first analyzed, then processed and understood, and finally converted to digital audio and spoken.

In this study, text-to-speech refers to the conversion of the written text derived from the sign language gesture into audio output.

Functionality. It refers to how useful a computer or other machine is, or how many functions it can perform ("FUNCTIONALITY Definition and Meaning | Collins English Dictionary," 2023).

In this study, functionality refers to the ability of the system to accurately perform real-time sign

language speech motion translation.

Portability. It refers to the ease with which a system, software, or data can be transferred and

used in different environments. (DevX, 2023)

In this study, portability refers to the ease with which the device can be transferred or carried to a different place.

Performance Efficiency. It refers to a characteristic that represents the performance relative to

the amount of resources used under stated conditions. (ISO 25010, n.d.)

In this study, performance efficiency refers to the ability of the system to process the signs and respond in real time.



CHAPTER II

REVIEW OF RELATED LITERATURE

This chapter presents the conceptual, research, and prior art literature from foreign and

local sources such as journals, books, and electronic media.

Hearing and Speech Impaired

Effective communication is essential between individuals. People who are specially abled due to speech or hearing impairments, that is, "mute" or "Deaf" individuals respectively, constantly rely on visual communication (Goel et al., 2022). Language plays a pivotal role in everyday life, serving as a complex system for expressing our personality and facilitating effective communication with others. We interact with people in various contexts through words, gestures, and vocal tones, conveying our emotions, desires, and inquiries. Individuals with severe or profound hearing loss naturally rely on sign language as their mode of communication. About 5% of the global population, approximately 466 million people, have some form of hearing impairment; by 2050, this number is expected to rise to 900 million, equivalent to one in every ten individuals (World Health Organization, 2023). For hearing- and speech-impaired people, the means of communication is sign language.

Sign language supports understanding and expression for persons with speech and hearing impairments worldwide. It uses complete signs formed with facial expressions, the hands, and other body parts, and every country's sign language has its own syntactical and

grammatical structure (Antony et al., 2020). About 90% of deaf children are born to hearing families who are uninformed and have little knowledge about deafness (Terry, 2023). Many deaf children grow up in environments that offer no alternative to oral communication; however, relying solely on cochlear implants (CIs) or hearing aids for auditory information might not guarantee full access to language (Humphries et al., 2019). By their very nature, sign languages convey linguistic information directly through articulations of different body parts – an often overlooked advantage for linguistic analysis (Sandler, 2018).

The sense of hearing is a crucial channel of input for all kinds of information important to a child's development. A deaf person's sense of hearing provides only limited information, which he or she must instead acquire through other channels, especially visual communication. The consequences of early childhood deafness are therefore far-reaching and varied, and some rudimentary knowledge of the linguistic, cognitive, social, and psychological aspects of human development is necessary before any specialized area can be understood (Meadow, 2023).

Communication barriers exist for hearing and speech-impaired individuals, especially

since many people don't know how to use sign language. Exploring second language acquisition

through research in signed and spoken languages, which operate in distinct modalities, offers

significant potential to expand our comprehension of learning mechanisms and emphasize their

significance in our interconnected world (Schönström, 2021). Unlike people with other types of

disabilities, Deaf communities have sign languages that enable them to communicate with each

other.

The deaf community has formed its culture and identity, fostering a sense of pride

(Becerra Sepúlveda, 2020). This pride has led them to embrace a social perspective, symbolized

by using the uppercase "D" to name their deafness. Here, "deaf" signifies a clinical or oralist

viewpoint, while "Deaf" represents individuals aligning with a linguistic and cultural minority

(Solano et al., 2018). Many individuals with hearing impairments strive to achieve as much as hearing persons do in order to gain acceptance in the hearing community.



Sign Language Real-time Translation

Sign language translation systems employ diverse statistical methods to convert sign

language into spoken or written language, emphasizing intensive early interventions to enhance

communication skills for individuals with such disorders (Papatsimouli et al., 2023b). Although

deaf, hard-of-hearing, and mute people can easily communicate with one another, integration into educational, social, and work environments remains a significant barrier for the differently abled. There is a

communication barrier between an unimpaired person unaware of the sign language system and

an impaired person who wishes to communicate (Pawar et al., 2022).

To address this communication gap, Manikandan et al. (2018) developed a translation device for mute, deaf, and hard-of-hearing people: a sensing glove integrated with WiFi/XBee technology that used an Arduino UNO microcontroller as its central processing unit. With the proposed device, a hearing-impaired user can manually store any of the 26 most frequently used words in daily life in Sentence Mode, making everyday communication easier. The user accesses this mode by switching the mode control switch on; the mode-indicating LED then turns on to show that the device is operating in Sentence Mode. In Character Mode, on the other hand, each gesture the user forms transmits a single digital code corresponding to an alphabet character. This mode is used when the user wants to transmit a word that is not part of Sentence Mode; by forming gestures, the user can thus generate any word or sentence.



Paasa (2022) also conducted a study in the same context of sign language translation. He developed an assistance-oriented model that utilized a Leap Motion device to track the user's hand and finger movements. His device is limited to translating Filipino Sign Language into digital text; despite this limitation, it helps Filipinos with disabilities, especially those who cannot speak.

In 2021, Murillo et al. conducted a study to develop a web-based real-time application

that recognizes Filipino Sign Language (FSL) and converts it into text. To assess the device's

level of acceptability in terms of content, design, and functionality, they did a purposive

sampling for the 30 selected respondents, which were 9 Special Education Students, 7 Special

Education Teachers, and 14 Non-Disabled People. According to the three sets of respondents, the

level of acceptability of the web-based real-time converter application in terms of content,

design, and functionality falls under the "Very Highly Acceptable" bracket. The very high

acceptability of the application among the three sets of respondents suggests that the application

was user-friendly and beneficial for the respondents in closing the communication gap.

Pawar et al. (2022) highlighted "Vision-based sign language recognition" in their study.

Their study suggests an algorithm or approach for an application that will aid in recognizing

Indian Sign Language's various signs. The approach has been designed with a single user in

mind, meaning the real-time images will be captured first and then saved in the directory. By

using the SIFT (Scale-Invariant Feature Transform) algorithm, it will be possible to determine

which sign has been articulated by the user. The comparison will be done in reverse, and the result will be generated based on the key points matched between the input image and the image already saved for a particular letter or word. In Indian Sign Language, twenty-six signs correspond to the letters of the alphabet, and the proposed algorithm delivers 95% accuracy.
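To make this keypoint-matching idea concrete, the following is a minimal sketch of SIFT-based matching between a captured gesture image and a stored reference image using OpenCV. The file names and the ratio-test threshold are illustrative assumptions and are not taken from Pawar et al. (2022).

```python
# Minimal sketch of SIFT keypoint matching between a captured gesture image and a
# stored reference image. Assumptions: OpenCV with SIFT support is installed
# (opencv-contrib-python); the file names and the 0.75 ratio-test threshold are
# illustrative, not values reported by the cited study.
import cv2

captured = cv2.imread("captured_gesture.png", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("reference_letter_a.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(captured, None)   # key points and descriptors of the input
kp2, des2 = sift.detectAndCompute(reference, None)  # key points and descriptors of the stored sign

# Brute-force matching with Lowe's ratio test to keep only distinctive matches.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# The stored sign with the highest number of good matches would be reported as the
# recognized letter or word.
print(f"Matched key points: {len(good)}")
```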

Sign language is a remarkable development that has evolved over time; unfortunately, it also has some disadvantages (Pawar et al., 2022). The development of real-time sign language translation is therefore a great help in addressing the communication gap. This study builds on this work to develop a more advanced sign language translator based on a Raspberry Pi, with a specific feature: real-time translation of sign language into speech.



Gesture and Motion Recognition

Sign language serves as a non-verbal mode of communication, especially in the speech

and hearing-impaired community. Sign language recognition systems have made it easier for speech-impaired and hard-of-hearing individuals to connect. Antony et al. (2022) developed a sign language

recognition system that translates gestures into understandable forms. The system has two main

methods for implementing sign language motion recognition: i) Sensor-based approach and ii)

Vision-based approach. In the vision-based approach, cameras are used to capture images of

signs, and in the sensor-based approach, a glove is constructed using sensors that will track the

signs made by hand.

Sign language recognition poses significant challenges due to the intricate hand gestures,

body postures, and facial expressions, which often incorporate rapid and complex movements

(Jiang et al., 2021). Hand gesture recognition, in particular, is a complex aspect of sign language

recognition, characterized by high inter-class similarities, significant intra-class variation, and

frequent obstructions in hand morphologies, leading to substantial complexity and variability

(Tao et al., 2018).

Hand gesture recognition systems play a crucial role in various applications, including

natural Human-Computer Interaction (HCI) (Barbhuiya et al., 2020; Tan et al., 2021), virtual

object manipulation, multimedia and gaming interaction (Wong et al., 2021), smart homes, in-

vehicle infotainment systems (Chevtchenko et al., 2018), and sign language recognition (Saxena

et al., 2022).

Human interaction is essential for sharing ideas, thoughts, and abilities, but there are deaf

and hard-of-hearing individuals who face challenges in everyday communication. Kute et al.

(2020) proposed a smart glove system that converts sign language into speech output. The

system recognizes basic hand gestures and converts them into electrical signals using motion sensors. Its gesture recognition module uses flex sensors fixed on hand gloves; the sensors recognize the English alphabet and a few words and then convert them into speech output through speakers.

With contemporary technology, deaf people can communicate through hand gestures, each movement of which can be converted into a different audio message. A microcontroller analyzes the data from an accelerometer and flex sensors used to capture hand motions, and the signals generated by the flex sensors are used to produce recorded sound signals for delivery through a speaker. Hand motions are thus translated into spoken or printed words by this sign language recognition tool. An Arduino microcontroller setup and a glove make up the bulk of the device; the data glove has four strategically placed flex sensors. Hand signals can be translated almost instantly, and all 26 letters can be recognized simply from the positions of the fingers (Tippannavar et al., 2023).

To gather visual-spatial features, Amangeldy et al. (2023) used the MediaPipe Holistic pipeline in a multi-stage method to extract gesture key points from video data. The system consists of multiple stages, each intended to target specific limitations of individual pose models or hand components. This is achieved by training a carefully

designed and structured multilayer perceptron model on the stage-by-stage extracted posture and hand properties. A multi-stage pipeline that extracts the spatiotemporal characteristics of sign language was used in this method. A natural language processing (NLP) module interprets the sentences produced by gesture recognition, which comprise words in their base form. The authors conclude that researchers should strive for practical and unobtrusive solutions, such as computer-vision-based approaches, in continuous sign language recognition.
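As an illustration of the kind of key-point extraction described above, the following minimal sketch uses the MediaPipe Holistic solution to pull pose and hand landmarks from video frames. The video file name and confidence thresholds are illustrative assumptions; they are not the configuration used by Amangeldy et al. (2023).

```python
# Minimal sketch of extracting pose and hand key points from video frames with
# MediaPipe Holistic. The video file name and confidence values are illustrative
# assumptions; a live camera stream could be processed the same way.
import cv2
import mediapipe as mp

holistic = mp.solutions.holistic.Holistic(min_detection_confidence=0.5,
                                          min_tracking_confidence=0.5)
cap = cv2.VideoCapture("sign_clip.mp4")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input, while OpenCV delivers BGR frames.
    results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    # Pose and hand landmarks form the per-frame feature vector for a downstream model.
    if results.right_hand_landmarks:
        wrist = results.right_hand_landmarks.landmark[0]
        print(f"Right wrist at ({wrist.x:.2f}, {wrist.y:.2f})")

cap.release()
holistic.close()
```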

Sampaga et al. (2023) aimed to create a real-time two-way communication device for

Filipino Sign Language (FSL) users, addressing grammatical differences among sign languages.

Employing image processing and recognition systems, the device translates FSL gestures

and facial expressions into speech, utilizing Convolutional Neural Networks (CNNs) for

enhanced accuracy and speed. Additionally, it incorporates a speech-to-text (STT) feature,

allowing non-signers to communicate with the deaf without an interpreter. The system operates

in real-time, achieving a 93% accuracy rate in recognizing gestures, converting sign language to

speech in 1.84 seconds and speech to text in 2.74 seconds on average. Feedback from Manila

High School participants indicated an 85.50% approval rating, suggesting its effectiveness in

fostering two-way communication and overcoming communication barriers.

The Filipino deaf community continues to lag behind the fast-paced and technology-

driven society in the Philippines. Filipino Sign Language (FSL) has improved communication

for deaf people; however, most Filipinos do not understand FSL. In developing the FSL sign

language recognition model, Montefalcon et al. (2021) utilized computer vision to obtain the

images. They used a Convolutional Neural Network (CNN) with the ResNet architecture to build the automated FSL recognizer. The FSL number recognition model can recognize Filipino number signs from 0 to 9; however, it does not perform recognition in real time.

Sign language has become a crucial instrument for impaired people to communicate with

others. However, the lack of knowledge and mastery regarding sign language became a major

impediment in society, creating a communication barrier between impaired and non-impaired

people. The concept of a sign language translator addresses the language barrier between deaf and mute individuals and ordinary people. The software provides translations corresponding to the hand

gestures presented in front of the camera or monitor of the system. The system analyzes the hand

gestures captured from the camera or monitor and then translates them into spoken or text

representations (Eser et al., 2023).



Sign Language Translation Techniques

Computer vision methods have significantly advanced sign language interpretation,

benefiting from recent deep learning breakthroughs in natural language processing and

image/video captioning. Sign language, being visual-spatial, poses challenges due to its

continuous nature, requiring context for meaning. Ananthanarayana et al. (2021) explored

diverse machine translation models, from basic sequence-to-sequence approaches to more

intricate networks like attention-based, reinforcement learning, and the transformer model.

Implementing translation methods across German (GSL), American (ASL), and Chinese sign

languages (CSL), along with input embeddings from ResNet50 or pose-based landmark features,

revealed the transformer model's superiority. It outperformed other sequence-to-sequence

models.

The limitations of existing glove-based solutions for sign language recognition are

discussed. These solutions can only recognize discrete single gestures, such as numerals, letters,

or words, rather than complete sentences. Wen, Zhang, He, and Lee (2021) propose an AI-based

sign language recognition and communication system to address this. The segmentation

technique divides complete sentence signals into word units, allowing the DL model to recognize

all word elements. The proposed model achieves an average accuracy rate of 86.67% in

recognizing novel sentences formed by recombining word elements.



Carlock (2021) developed a communication system to handle various inputs and outputs,

including text and audio. This system contains a translation engine that interacts with the

communication device to generate translations between sign language and word content. It can

also translate word content found within text inputs. The translation engine's function involves

comparing sign language content segments with content indicators related to representations of

word content. The communication device can capture, display, and process video streams, while

the translation engine is precisely engineered to identify sign language content segments within

these video streams.

Sumadeep et al. (2019) proposed a hardware-based solution to recognize the hand

motion and translate it into speech. This device consists of two components: the first circuit is

the transmitter circuit, and the other circuit is the receiver circuit. The transmitter circuit

comprises a microcontroller, an accelerometer, and a flex sensor. The receiver circuit comprises

an audio module, an amplifier, and a speaker. With the help of an accelerometer and flex sensor,

when a gesture is detected, the analog-to-digital converter produces the necessary digital output. The information is sent to the microcontroller, which then looks up the corresponding values in its database; these values are then transmitted to the receiver.

Garcia et al. (2022) developed a CNN-based translator for fingerspelling American Sign

Language (ASL). They employed transfer learning using pre-trained GoogLeNet architecture,

which was trained on ASL and ILSVRC2012 datasets. The models created accurately recognize

letters from a to e, while the other set works from a to k. One issue highlighted in the paper, however, is the authors' claim that accuracy and efficiency could improve with additional datasets; relying on such speculation raises concerns about reliability and applicability in real-life situations.
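To illustrate the transfer-learning idea behind this approach, the sketch below adapts a pre-trained ImageNet backbone for fingerspelling classification in Keras. InceptionV3 is used here only as an accessible stand-in for GoogLeNet; the dataset directory, class count, and hyperparameters are illustrative assumptions rather than values from Garcia et al. (2022).

```python
# Minimal transfer-learning sketch in Keras: reuse a pre-trained ImageNet backbone and
# train only a small classification head for fingerspelled letters. InceptionV3 stands in
# for GoogLeNet; the dataset path, class count, and hyperparameters are illustrative.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                          input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(26, activation="softmax"),  # one class per fingerspelled letter
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_fingerspelling/train", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)
```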

Ang et al. (2022) implemented a hand-gesture Filipino Sign Language recognition model on a Raspberry Pi. Numerous studies on Filipino Sign Language (FSL) identify a letter using a glove and a plain background, an approach that may struggle in a more complex environment, and only limited research has implemented YOLO-Lite and MobileNetV2 on FSL. Their model demonstrated dependability against a variety of complex backgrounds. However, the researchers encountered challenges in recognizing the letters Q, J, and Z, and N is sometimes mistakenly interpreted as M due to their similar hand structures.

Technological developments have been made to help the majority communicate with the

Deaf and Mute Community. Unfortunately, these have not reached the Deaf population of the

Philippines. The study aims to recognize Filipino Sign Language (FSL) movements using the

Kinect V2 and machine learning classification models in RapidMiner. Additionally, most

Filipino Sign Language translation technology research focuses more on finger signing and facial

expressions. This leaves out one of the most essential things in sign language, the arm and hand

movement. The resulting feature data was then manually synced with the annotated data, and the synced data was grouped into ten frames to simulate motion (Cronin et al.).

Sign language is the primary language used by the Deaf and Hard-of-Hearing (DHH)

community in the Philippines. Herrera et al. (2023) developed a millimeter wave (mmWave)

technology system that uses gesture recognition applications in sign language. A mmWave-based

FSL recognition system translates isolated signs into their equivalent gloss. The system captures

raw data from a user's motion in front of the radar sensor. The captured data from the TI IWR1443

radar sensor is then fed into the recognition module, starting with the processing algorithm to

clean the data. It is then provided through the deep learning model to classify the data and return

the gloss of the sign. The researchers conducted simple tests to determine the semi-real-time

capability of the system. The system automatically inferred the gloss corresponding to the

performed sign with some delay. These delays were measured to compute the overall recognition

latency of the system.



Summary of Insights

The related literature and prior research presented in this study provide valuable insights for researchers studying hearing and speech impairments, sign language, and the development of real-time sign language translation devices.

The first theme highlighted the importance of sign language for hearing- and speech-impaired people. The prior research and studies included in this theme enlightened the researchers of this study about the communication gap between hearing- and speech-impaired individuals and abled individuals. Halim & Abbas made it clear that sign language is a mode of communication used by hearing- and speech-impaired individuals worldwide but, unfortunately, it is not understood by people who do not use or study this language. Because of this gap, a communication barrier exists for hearing- and speech-impaired individuals across different countries, and Deaf and speech-impaired individuals are often overlooked due to their lower visibility. The researchers of this study will therefore strive to address this gap and bridge the communication between abled individuals and hearing- and speech-impaired individuals.

The second theme focused on prior studies that utilized real-time sign language translation systems. The approaches to real-time sign language translation employed by different researchers will be very beneficial in the course of this study. Most of the studies included in this theme were limited to translating sign language into text only; despite this limitation, these studies have a great impact on this research. The vision-

based technique and the built-in data sets approach introduced in the prior studies are the crucial approaches that will be utilized in this study.

The third theme focused mainly on studies about Gesture and Motion Recognition. Prior

research about gesture and motion recognition highlighted that the use of gesture and motion

recognition technology for sign language recognition has made communication easier for speech

and hearing-impaired individuals. In the context of gesture and motion recognition, there were

two main methods for implementing sign language motion recognition: sensor-based and vision-

based. In the course of this study, the vision-based approach will be utilized.

The last theme, on the other hand, focused mainly on sign language translation techniques. Various techniques for sign language translation have become increasingly important as a means of enabling effective communication between deaf or hard-of-hearing individuals and others who do not know sign language. One of the techniques that prior studies have utilized is the hardware-based solution proposed by Sumadeep et al. (2019), which uses two circuits: a transmitter circuit and a receiver circuit. The transmitter circuit comprises a microcontroller, an accelerometer, and a flex sensor, while the receiver circuit comprises an audio module, an amplifier, and a speaker. Another technique is computer-vision-based models, which require at minimum one camera plus image processing techniques to classify and categorize motions. The main difficulty with these systems is transporting a camera and CPU inside a box or container. Lighting also has a significant impact: without good

lighting, the system may fail to recognize the hand gesture or may interpret the displayed sign mistakenly, making it difficult for an ordinary user to operate the system. As mentioned above, this vision-based approach will be utilized in the course of this research.

Understanding the difficulties and addressing the limitations of the prior research, the researchers came up with the idea of developing a sign language translation device that incorporates the applicable techniques and approaches introduced in that research, through this study titled "SignMo: Real-time Sign Language to Speech Translation," a device that translates sign language into speech in real time using a Raspberry Pi.

Chapter III

METHODOLOGY

This chapter discusses the research design, respondents, measures, procedures, and data

analysis that will be employed in the study.

Research Design

This quantitative study will utilize descriptive and developmental research methods to

design, develop, and test a Realtime Sign Language to Speech Translation. The developmental

phase will include the design criteria, parameters for analysis, and the design plan preparation

and fabrication of the system. Moreover, the descriptive research will include the evaluation

procedure, instrumentation, and the data to be gathered.

Design Criteria

Design criteria are the specific guidelines or requirements applied when creating an object. They encompass various aspects that ensure the final product's effectiveness, functionality, and practicality (Willis, 2018). In this study, the researchers aim for better sign language translation that will benefit speech-impaired and deaf and hard-of-hearing individuals through the technical features of Computer Vision, Text-to-Speech, and an Auto-start Script.



Computer Vision

This feature uses optical equipment that allows the system to recognize and interpret sign language gestures. Computer vision is crucial to sign language translation because it captures and analyzes the user's sign movements in real time. The data gathered from the visual component is used to translate the hand or sign gestures into spoken language, bridging individuals who use sign language as communication and those who do not.
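A minimal sketch of this capture-and-detection step is shown below, assuming OpenCV for frame capture and MediaPipe Hands for single-hand landmark detection; the camera index and confidence value are illustrative, and the study's final model and parameters may differ.

```python
# Minimal sketch of the computer vision feature: grab one frame from the camera and
# locate a single hand with MediaPipe Hands. The camera index and confidence value are
# illustrative assumptions.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1,          # scope: single-hand gestures only
                                 min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)  # 0 = default camera device

ok, frame = cap.read()
if ok:
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # 21 (x, y, z) landmarks per hand; these become the input to the gesture classifier.
        landmarks = results.multi_hand_landmarks[0].landmark
        print(f"Detected a hand with {len(landmarks)} landmarks")

cap.release()
hands.close()
```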

Text to Speech

This second feature of the sign language translation system bridges the gap between the interpreted sign language gestures and spoken language. The TTS engine takes the text produced from the gestures captured by computer vision and converts it into spoken language.

Auto-start Script

An Auto-start Script is a crucial part of the sign language translation system because it consists of instructions or commands that automatically launch the sequence of steps involved in capturing, processing, and translating sign language gestures into spoken language.
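One common way to realize such a script on a Raspberry Pi is to register the translator's entry point so that it launches at boot. The sketch below is a minimal illustration under that assumption; the file path and the crontab registration line are hypothetical examples, not the study's actual configuration.

```python
# Minimal sketch of the program entry point that an auto-start mechanism would launch.
# Assumption (illustrative only): the script is registered to run at boot, for example
# with a crontab entry such as
#     @reboot python3 /home/pi/signmo/main.py
# or an equivalent systemd service on Raspberry Pi OS.
import time

def run_translator() -> None:
    """Initialize the camera, gesture model, and TTS engine, then loop indefinitely."""
    while True:
        # capture frame -> detect gesture -> convert to text -> speak
        time.sleep(0.1)  # placeholder for the real processing loop

if __name__ == "__main__":
    run_translator()
```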



Design Plan Preparation and Fabrication

The proponents will employ the following procedures to ensure the quality of the prototype to be developed. Successful utilization of these methods supports the successful design, development, and testing of SignMo: Real-time Sign Language to Speech Translation. The approach includes well-defined steps: planning, component gathering, assembling, coding, testing, revising, and finalizing. These well-organized stages ensure the systematic and efficient development of SignMo: Real-time Sign Language to Speech Translation.

Figure 2. Prototype Development and Procedures



Planning

The researchers started the planning process by creating a flowchart showing how the device would work; Figure 3 shows the ideal working conditions of the device. The researchers will define the physical form of the sign language-to-speech translation device and how sign language input interfaces with it. They will also establish a comprehensive set of design criteria and select the necessary components for the device.

Figure 3. Flow Chart of the Ideal Working Conditions of the System

As shown in Figure 3, the device will start by capturing a sign language gesture as input data. The captured input is analyzed to determine whether a gesture is detected. If a sign language gesture is detected, the neural network will identify the captured gesture.

When the neural network successfully recognizes the sign in the hand gesture, it will generate the corresponding text and process it to produce the corresponding audio output.
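The flow in Figure 3 can be summarized as a simple processing loop. The sketch below is only a skeleton: the helper functions are hypothetical stubs standing in for the components described in this chapter, not the study's actual implementation.

```python
# Skeleton of the ideal working loop shown in Figure 3. The helpers are hypothetical
# stubs standing in for the real camera, neural network, and TTS components.
from typing import Optional

def capture_frame() -> object:
    """Stub: would grab one frame from the camera module."""
    return object()

def detect_gesture(frame: object) -> Optional[object]:
    """Stub: would return hand landmarks if a gesture is present, otherwise None."""
    return None

def classify_gesture(gesture: object) -> str:
    """Stub: would run the neural network and return the recognized text."""
    return "hello"

def speak(text: str) -> None:
    """Stub: would hand the recognized text to the text-to-speech engine."""
    print(text)

def translation_loop() -> None:
    while True:
        frame = capture_frame()            # 1. capture a sign language gesture as input
        gesture = detect_gesture(frame)    # 2. check whether a gesture was detected
        if gesture is None:
            continue                       #    nothing detected: keep capturing
        text = classify_gesture(gesture)   # 3. the neural network identifies the sign
        speak(text)                        # 4. generate the text and emit the audio output
```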

Figure 4. Flow Chart of the Ideal Working Conditions of the Computer Vision

As shown in Figure 4, the device will start capturing an image using a camera, which

serves as raw data for the hand detection and recognition system. After analyzing the image, the

system proceeds to the neural network, which contains a dataset of images and their corresponding

labels. The neural network attempts to identify the specific gesture performed by the hand in the

image. If the neural network successfully recognizes the hand gesture, it will generate the

corresponding speech output. If the hand-detection step does not detect a hand and no gesture recognition can be executed, the process will end.

Figure 5. Flow Chart of the Ideal Working Conditions of the Text to Speech

Figure 5 shows the flowchart of the text-to-speech feature, where the process begins with the text that needs to be converted into speech. This text comes from the sign language performed by the signer. Running on the Raspberry Pi, the TTS engine uses various algorithms to generate voice data, creating an audio representation that corresponds to the sign gestures being performed. This process converts the text derived from the sign gestures into spoken words, making the gestures understandable to a wider audience.


Component Gathering

The researchers will acquire or place an order for the needed components, such as camera

modules, speakers, microcontrollers, and other electronic components. They will also ensure that

the selected components are necessary for the successful operation of the device and satisfy the

project’s needs.

Table 1

Parts and function of the Computer Vision

Part: Raspberry Pi Camera Module
Function: Captures the live video input of sign language gestures, enabling the computer vision feature to analyze and recognize the gestures.

Part: Raspberry Pi
Function: Receives and processes the sign language data, runs algorithms for sign language gesture interpretation, interfaces with the TTS engine, and outputs an accurate interpretation as written and spoken language.

Table 1 presents the parts of the computer vision subsystem and their corresponding functions. Computer vision is the most crucial part of the device, as it serves as the eyes of the device that capture the

live video of the sign language gestures and recognizes the hand gestures in it, enabling real-time translation of the data gathered from the camera module.

Table 2

Parts and function of the Text to Speech

Parts          Function
Speaker        ● It outputs the audio corresponding to the synthesized speech generated by the text-to-speech (TTS) engine.
Raspberry Pi   ● It receives and processes the sign language data, runs algorithms for sign language gesture interpretation, interfaces with the TTS engine, and outputs an accurate interpretation in spoken language.

Table 2 presents the parts of the text-to-speech feature and their corresponding functions. The TTS engine translates the interpreted text into spoken language and outputs the synthesized speech

through the speaker. This feature enables seamless communication between individuals who use

sign language and those who do not.



Table 3

Parts and function of the Auto-start Script

Parts          Function
Raspberry Pi   ● It receives and processes the sign language data, runs algorithms for sign language gesture interpretation, interfaces with the TTS engine, and outputs an accurate interpretation in spoken language.

Table 3 presents the parts of the Auto-start Script and its corresponding function. The

Raspberry Pi receives and processes the sign language data from the camera. The auto-start

script manages the automatic launch and initialization of the translation function upon the startup

of the device.
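The exact auto-start mechanism is not prescribed here; one common, assumed approach on Raspberry Pi OS is to register the translator's entry-point script so that it is launched at boot, for example through a cron @reboot entry. The sketch below shows such a hypothetical entry point, with the registration command kept in a comment.

# Hypothetical entry point, e.g. /home/pi/signmo/main.py, launched at boot by the
# auto-start script. One assumed way to register it is a cron entry such as:
#     @reboot python3 /home/pi/signmo/main.py
# (the path and mechanism are illustrative, not the project's confirmed configuration).
import logging

def run_translator():
    """Placeholder for the capture, recognition, and text-to-speech loop shown earlier."""
    ...

if __name__ == "__main__":
    logging.basicConfig(filename="/home/pi/signmo/boot.log", level=logging.INFO)
    logging.info("SignMo translator starting up")  # timestamped entry, useful for boot-time checks
    run_translator()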

Table 4
List of Materials

Components                    Description
MicroSD Card                  ● A small removable memory card on which data can be stored.
Speaker                       ● A transducer that converts the translated sign language gestures into audible sound output.
Power Supply                  ● An electrical device that ensures a stable power source for continuous operation.
Raspberry Pi Camera Module    ● A custom-designed add-on module for Raspberry Pi hardware that captures images; it attaches to the Raspberry Pi hardware through a custom CSI interface.
Raspberry Pi 4B               ● A credit-card-sized, low-cost single-board computer that has a dedicated processor, memory, and a graphics driver, just like a PC. It also has its own operating system, Raspberry Pi OS.

Table 4 presents the components to be used in making the device and their corresponding descriptions. The components listed above comprise all the parts used in creating the prototype.

Assembling

The researchers will assemble the physical prototype and incorporate the parts such as the camera module, displays, microprocessor, and speaker.

Coding

The researchers will write the code for the device system operation using the chosen

programming language.

Testing

The researchers will check the correctness and function of the components and verify if each

component performs its intended functions. They will also assess the device system's

vulnerabilities, implement necessary security measures, and evaluate its

performance based on the user interface and experience.



Table 5

Test Parameters

Technical Features    Test Parameters
Computer Vision       ● Hand Gesture Recognition: tests the accuracy and speed of the system in recognizing hand gestures.
                      ● Sign Language Fingerspelling Recognition: tests the accuracy of sign language gesture recognition.
Text to Speech        ● Pronunciation Accuracy: assesses the device's ability to accurately pronounce the synthesized speech from the sign language gestures.
                      ● Speech/Sound Audibility: assesses the quality and audibility of the sound produced by the device.
Auto-start Script     ● Boot Time: assesses the time it takes for the system to boot up and become operational and functional.

Table 5 outlines the test parameters for assessing the performance of a sign language

translation device across its technical features. The identified technical features include

Computer Vision, Text-to-speech, and Auto-start Script. Each technical feature is associated with

specific test parameters, and corresponding test rubrics are established to systematically measure

and score the system's capabilities.
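As an illustration of how the accuracy and speed parameters in Table 5 could be quantified, the sketch below scores a classifier over a labeled set of test images and reports the proportion of correct recognitions together with the average processing time per image; classify_image() and the test set are placeholders for the project's own model and data.

import time

def evaluate(test_samples, classify_image):
    """test_samples: list of (image, expected_label) pairs."""
    correct = 0
    total_time = 0.0
    for image, expected in test_samples:
        start = time.perf_counter()
        predicted = classify_image(image)               # placeholder for the trained model
        total_time += time.perf_counter() - start
        if predicted == expected:
            correct += 1
    accuracy = correct / len(test_samples)              # fraction of correctly recognized gestures
    average_latency = total_time / len(test_samples)    # seconds of processing per image
    return accuracy, average_latency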

Revising

The researchers will analyze the results and identify the problems in the device. They will

prioritize the issues based on their severity, and focus on resolving these errors. The researchers

will also perform a code review to ensure the quality of the code, address the identified code issues, and retest.

Finalizing

The researchers will perform the final rounds of testing to ensure all issues are resolved. They

will prepare for the deployment of the device and consider potential additional improvements.

Evaluation Procedure

The data gathering procedure will begin with a written request to the Executive Director of

Carlos Hilado Memorial State University – Alijis campus through the Dean of the College of

Computer Studies, which will seek approval to conduct the study on the campus. This approval

letter will authorize the proponents to coordinate with the Program Head of the Bachelor of Science

in Computer Engineering (BSCPE), students of the BSCPE program, different experts in the field

of Computer Engineering, Sign Language users, and/or Deaf and Hard of hearing students within

Bacolod City, and Students currently taking up Bachelor of Special Need Education in CHMSU

Talisay Campus as respondents of the study. The researchers will conform to the ethical

requirements of research (i.e., informed consent, anonymity, privacy, and confidentiality) during the conduct of the study.

To test the quality of the prototype, the respondents will evaluate it using the survey questionnaire adopted by the researchers. The items in the instrument have already been subjected to validity and reliability testing and approved by a panel of evaluators to determine the quality of the prototype. In addition, the instrument will be administered for a period of one (1) month during the second semester of School Year 2023-2024. Moreover, the researchers will utilize purposive sampling to provide easy and fast data collection for the proponents.

Instrumentation

To measure the quality of the prototype, the proponent will utilize the research instrument

developed by Flaviano L. Urera Jr. in 2019. The instrument’s content underwent validation by a

panel of experts, ensuring its reliability and accuracy in measuring the desired parameters. The

survey questionnaire is an eleven (11) item instrument divided into three (3) parts, which will employ a Likert-type scale with five responses: 1 = Poor; 2 = Fair; 3 = Satisfactory; 4 = Very Good; and 5 = Excellent.

Validity and Reliability of Research Instrument

To establish the validity and reliability of the survey questionnaire, the researchers will

utilize a pre-existing research instrument that was created, validated, and used by Mr. Flaviano

L. Urera Jr. and Mr. Francis F. Balahadia in their 2019 study entitled “ICTeachMUPO: An

Evaluation of Information E-Learning Module System for Faculty and Students.” The research

instrument will no longer undergo validity and reliability testing, as it is already standardized and has already undergone said process.

Data to be Gathered

The respondents of the study will be composed of experts in the field of Computer Engineering, Sign Language users and/or Deaf and Hard of Hearing students within Bacolod City, students currently taking up Bachelor of Special Need Education in CHMSU Talisay Campus, and Computer Engineering students at Carlos Hilado Memorial State University – Alijis Campus for the 2nd Semester, School Year 2023-2024. In addition, the research instrument will be administered online through Google Forms, and face-to-face through physical survey questionnaires with the experts in the field, sign language users, and Deaf and hard-of-hearing individuals. This will proceed for one (1) month during the 2nd semester of School Year 2023-2024.



This study will utilize the purposive sampling method to select the respondents of the study.

However, the accumulated respondents should not be less than thirty (30) to determine the

reliability of the instrument. Purposive sampling, or judgmental sampling, is a non-probability

technique where units are deliberately chosen based on specific characteristics needed for the

study. This method relies on the researcher's judgment to select individuals, cases, or events that

best contribute to meeting the study's objectives (Nikolopoulou, 2023).

Data Analysis

The quantitative data gathered from the conduct of the instrument will be analyzed to

determine the quality of the prototype.

Table 6
Interpretative Scale for Prototype Quality

Mean Scale     Interpretation    Description
4.21 – 5.00    Excellent         The device met 100% of the specified objectives in the instrument.
3.41 – 4.20    Very Good         The device met 75% of the specified objectives in the instrument.
2.61 – 3.40    Satisfactory      The device met 50% of the specified objectives in the instrument.
1.81 – 2.60    Fair              The device met 25% of the specified objectives in the instrument.
1.00 – 1.80    Poor              The device did not meet any of the specified objectives in the instrument.

As presented in Table 6, there are five scales to interpret the quality of the prototype. A

mean scale of 4.21 – 5.00 will be interpreted as Excellent; 3.41 – 4.20 as Very Good; 2.61 – 3.40

as Satisfactory; 1.81 – 2.60 as Fair; and 1.00 – 1.80 as Poor.

To determine the quality of the prototype, the mean and standard deviation will be

utilized. The analysis will provide valuable insights into the overall performance and consistency

of the proposed project. By applying these standards, it is possible to guarantee a thorough

assessment procedure that will accurately evaluate the prototype’s strengths and areas for

improvement.
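A short sketch of this analysis is shown below: it computes the mean and standard deviation of the Likert responses for one parameter and maps the mean onto the interpretative scale of Table 6. The example ratings are hypothetical.

import statistics

def interpret(mean_score):
    """Map a mean rating onto the interpretative scale in Table 6."""
    if mean_score >= 4.21:
        return "Excellent"
    if mean_score >= 3.41:
        return "Very Good"
    if mean_score >= 2.61:
        return "Satisfactory"
    if mean_score >= 1.81:
        return "Fair"
    return "Poor"

def summarize(ratings):
    """ratings: list of 1-5 responses for one parameter (e.g., Functionality)."""
    mean_score = statistics.mean(ratings)
    standard_deviation = statistics.stdev(ratings)   # spread, i.e., consistency of responses
    return mean_score, standard_deviation, interpret(mean_score)

# Hypothetical responses from five evaluators: mean 4.2 -> "Very Good"
# print(summarize([4, 5, 4, 3, 5]))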

Parameters for Analysis

To measure the quality of SignMo: Real-time Sign Language to Speech Translation, the device will be tested based on the parameters stated in the ISO 25010 criteria, which define eight characteristics for assessing whether a prototype is of excellent quality. Three of the eight characteristics will be assessed to ensure that there will be no failures once the device is implemented.

Table 7

Parameters for Analysis

Parameters: Functionality (Completeness, Correctness, Appropriateness)
Description: The degree to which a device is able to:
● Fulfill all the requirements and user objectives.
● Provide the correct result with the needed level of accuracy.
● Complete the specified tasks and objectives.
Success Indicator: The survey results should achieve a mean score of at least 3.41 on average.

Parameters: Portability (Adaptability, Durability, Installability, Replaceability, Affordability)
Description: The degree to which a device is able to:
● Be effectively and efficiently adapted for different hardware, software, or operational usage environments.
● Withstand technology evolution and changes without costly redesign.
● Be successfully installed and uninstalled in a specified environment.
● Replace another specified product for the same purpose in the same environment.
● Increase efficiency and productivity by reducing the time and costs in delivering instruction.
Success Indicator: The survey results should achieve a mean score of at least 3.41 on average.

Parameters: Performance Efficiency (Time-behavior, Resource Utilization, Capacity)
Description: The degree to which a device is able to:
● Ensure that response and processing times meet the objectives.
● Ensure that the resources used by the device meet the requirements.
● Ensure that the maximum limits of the device parameters meet the requirements.
Success Indicator: The survey results should achieve a mean score of at least 3.41 on average.

Table 7 presents the test parameters for analysis that will be employed in the study. This emphasizes that in order for the prototype to be referred to as functionally suitable, portable, and performance efficient, it should obtain a mean score of at least 3.41 from the survey questionnaire.

Project Cost

Table 8
Project Cost

Description                   Qty     Unit Price    Amount (Php)
Raspberry Pi 4B               1 pc    7,989.00      7,989.00
Camera Module                 1 pc    321.00        321.00
Speaker                       1 pc    137.00        137.00
MicroSD Card                  1 pc    629.00        629.00
Voltage Regulator             1 pc    299.00        299.00
Battery Capacity Indicator    1 pc    59.00         59.00
3.7 V Lithium Battery         1 pc    709.00        709.00

Total = 10,143.00

Table 8 presents the components to be employed in the completion of the device. Thus far, the estimated total cost of creating the prototype, not counting the fees for the printing and production of the study, amounts to Php 10,143.00.



References

Amangeldy, N., Milosz, M., Kudubayeva, S., Kassymova, A., Kalakova, G., & Zhetkenbay, L.
(2023). A Real-Time Dynamic Gesture Variability Recognition Method Based on
Convolutional Neural Networks. Applied Sciences, 13(19), 10799.

Ananthanarayana, T., Srivastava, P., Chintha, A., Santha, A., Landy, B. P., Panaro, J., Webster,
A., Kotecha, N., Sah, S., Sarchet, T., Ptucha, R., & Nwogu, I. (2021). Deep Learning
Methods for Sign Language Translation. ACM Transactions on Accessible Computing.
https://doi.org/10.1145/3477498

Ang, M. C., Taguibao, K. R. C., & Manlises, C. O. (2022, September). Hand Gesture
Recognition for Filipino Sign Language Under Different Backgrounds. In 2022 IEEE
International Conference on Artificial Intelligence in Engineering and Technology
(IICAIET) (pp. 1-6). IEEE.

Antony, A. S., Santhosh, K. B., Salimath, N., Tanmaya, S. H., Ramyapriya, Y., & Suchith, M.
(2022, January). Sign Language Recognition using Sensor and Vision Based Approach.
In 2022 International Conference on Advances in Computing, Communication and
Applied Informatics (ACCAI) (pp. 1-8). IEEE.

Antony, R., Paul, S., & Alex, S. (2020). Sign language translation system. International Journal
of Scientific Research & Engineering Trends, 6.

Barbhuiya, A. A., Karsh, R. K., & Jain, R. (2020). CNN based feature extraction and
classification for sign language. Multimedia Tools and Applications, 80(2), 3051–3069.
https://doi.org/10.1007/s11042-020-09829-y

Becerra Sepúlveda, C. A. (2020). Inclusión e interculturalidad para la cultura Sorda: caminos recorridos y desafíos pendientes. Rev. de Invest. Edu. de la Red, 11, 1–23. doi: 10.33010/ierierediech.v11i0.792

Carlock, J. (2021, April 9). US20220327309A1 - METHODS, SYSTEMS, and MACHINE-


READABLE MEDIA FOR TRANSLATING SIGN LANGUAGE CONTENT INTO
WORD CONTENT and VICE VERSA - Google Patents.
https://patents.google.com/patent/US20220327309A1/en?q=(Real-
time+Sign+Language+Speech+Motion+Translation)&oq=Real-
time+Sign+Language+Speech+Motion+Translation

Chevtchenko, S. F., Vale, R., & Macario, V. (2018). Multi-objective optimization for hand
posture recognition. Expert Systems With Applications, 92, 170–181.
https://doi.org/10.1016/j.eswa.2017.09.046

Cronin, K., Ducusin, R., Sia, J., Tuaño, C., & Rivera, J. The Use of Motion Sensing to
Recognize Filipino Sign Language Movements.

DevX. (2023, September 18). Portability - DevX. https://www.devx.com/terms/portability/



Eser, A. J., Flores, A., & Vallarta, J. C. (2023). A Filipino Sign Language (FSL) Software:
Conversion of FSL to Text and Speech Using Deep Learning. Ascendens Asia Journal
of Multidisciplinary Research Abstracts, 5(2), 78-78.

F. N. H. Al Nuaimy, “Design and implementation of interaction system for the deaf and mute,”
International Engineering Technology Conference (ICET), pp. 1–6, 2017.

Foggetti, F. (2023, April 18). 5 Interesting Facts about Sign Languages. Hand Talk - Learn ASL
Today. https://www.handtalk.me/en/blog/nteresting-facts-about-sign-languages/

FUNCTIONALITY definition and meaning | Collins English Dictionary. (2023). In Collins


Dictionaries. https://www.collinsdictionary.com/dictionary/english/functionality

Garcia B and Viesca S, “A Real-time American sign language recognition with convolutional
neural networks”, Convolutional Neural Networks for Visual Recognition, 2022, pp. 225-
232.

Goel, P., Sharma, A., Goel, V., & Jain, V. (2022, November). Real-Time Sign Language to Text
and Speech Translation and Hand Gesture Recognition using the LSTM Model. In 2022
3rd International Conference on Issues and Challenges in Intelligent Computing
Techniques (ICICT) (pp. 1-6). IEEE.

Hassan, M. R., et al. (2021). Sign Language Recognition: A Comprehensive Review. IEEE Access,
9, 63289-63321.

Haug, T., & Mann, W. (2018). Understanding the Deaf culture and community. In Cultural and
Language Diversity and the Deaf Experience (pp. 13-24). Routledge.

Herrera, J. A., Muro, A. A., Tuason III, P. L., Alpano, P. V., & Pedrasa, J. R. (2023, June).
Check for updates Millimeter Wave Radar Sensing Technology for Filipino Sign
Language Recognition. In Pervasive Computing Technologies for Healthcare: 16th EAI
International Conference, PervasiveHealth 2022, Thessaloniki, Greece, December 12-
14, 2022, Proceedings (Vol. 488, p. 274). Springer Nature.

Hommes, R.E., Borash, A.I., Hartwig, K., et al. (2018). American Sign Language Interpreters'
Perceptions of Barriers to Healthcare Communication in Deaf and Hard of Hearing

Patients. Journal of Community Health, 43(5), 956-961. https://doi.org/10.1007/s10900-018-0511-3

Humphries, T., Kushalnagar, P., Mathur, G., Napoli, D. J., Rathmann, C., & Smith, S. (2019b).
Support for parents of deaf children: Common questions and informed, evidence-based
answers. International Journal of Pediatric Otorhinolaryngology, 118, 134–142.
https://doi.org/10.1016/j.ijporl.2018.12.036

ISO 25010. (n.d.). https://iso25000.com/index.php/en/iso-25000-standards/iso-25010


Jadhav, A. J., & Joshi, M. P., “Hand Gesture Recognition System for Speech Impaired People,” International Research Journal of Engineering and Technology (IRJET), 2016, pp. 1171-1175.

Jiang, S. (2021). Skeleton aware Multi-Modal sign language recognition.


http://openaccess.thecvf.com/content/CVPR2021W/ChaLearn/html/Jiang_Skeleton_Awa
re_Multi-Modal_Sign_Language_Recognition_CVPRW_2021_paper.html

Jung, W. S. (2016, February 11). US10089901B2 - Apparatus for bi-directional sign


language/speech translation in real time and method - Google Patents.
https://patents.google.com/patent/US10089901B2/en?q=(Real-
time+Sign+Language+Speech+Motion+Translation)&oq=Real-
time+Sign+Language+Speech+Motion+Translation

Kumar, S., Rachna, P., Hiremath, R. B., Ramadurgam, V. S., & Shaw, D. K. (2022, December).
Survey on implementation of TinyML for real-time sign language recognition using smart
gloves. In 2022 Fourth International Conference on Emerging Research in Electronics,
Computer Science and Technology (ICERECT) (pp. 1-7). IEEE.

Kushalnagar, R. (2019). Deafness and hearing loss. Web Accessibility: A Foundation for
Research, 35-47.

Kusumika Krori Dutta, Satheesh Kumar Raju K, Anil Kumar G S, Sunny Arokia Swamy B, “Double handed Indian Sign Language to speech and text”, IEEE, 2015 Third International Conference on Image Information Processing.

Kute, S., Chinchole, M. G., & Bansode, R. S. (2020). Sign language to digital voice conversion
device. International Research Journal of Modernization in Engineering Technology and
Science, 7(2), 462-466.

Manikandan, S. A., Vidhya, S. S., Chandragiri, V., Sriram, T. M., & Yuvaraja, K. B. (2022).
DESIGN OF LOW COST AND EFFICIENT SIGN LANGUAGE INTERPRETER FOR

THE SPEECH AND HEARING IMPAIRED. www.arpnjournals.com.


https://www.arpnjournals.org/jeas/research_papers/rp_2018/jeas_0518_7098
Meadow, K. P. (2023). Deafness and child development. Univ of California Press.

Montefalcon, M. D., Padilla, J. R., & Llabanes Rodriguez, R. (2021, August). Filipino sign
language recognition using deep learning. In 2021 5th International Conference on E-
Society, E-Education and E-Technology (pp. 219-225).
Murillo, S. M., Villanueva, M. E., Tamayo, K. M., Apolinario, M. V., & Lopez, M. D. (2021,
August). Speak the Sign: A Real-Time Sign Language to Text Converter Application for
Basic Filipino Words and Phrases.
http://cajmtcs.centralasianstudies.org/index.php/CAJMTCS.
https://cajmtcs.centralasianstudies.org/index.php/CAJMTCS/article/view/92/74
Nakamura, K., Yamada, K., & Kawai, Y. (2019). Sign Language Translation and Its Applications.
In Advances in Computer Vision and Pattern Recognition (pp. 21-39). Springer.

Nikolopoulou, K. (2023, June 22). What is purposive sampling? | Definition & Examples.
Scribbr. https://www.scribbr.com/methodology/purposive-
sampling/#:~:text=Purposive%20sampling%20refers%20to%20a,on%20purpose%E2%8
0%9D%20in%20purposive%20sampling.

Paasa, P. (2022). FILIPINO SIGN LANGUAGE TRANSLATOR USING LEAP MOTION.


Ramon Magsaysay Memorial Colleges. https://www.studocu.com/ph/document/ramon-
magsaysay-memorial-colleges/secondary-education/thesis-asd/34450766
Papatsimouli, M., Sarigiannidis, P., & Fragulis, G. F. (2023). A Survey of Advancements in
Real-Time Sign Language Translators: Integration with IoT Technology. Technologies
(Basel), 11(4), 83. https://doi.org/10.3390/technologies11040083
Pawar, S., Bamgude, A., Kamthe, S., Patil, A., & Barapte, R. (2022, August). Gesture Language
Translator Using Raspberry Pi. International Journal for Research in Applied Science &
Engineering Technology (IJRASET). https://www.ijraset.com/best-journal/gesture-
language-translator-using-raspberry-pi
Pawar, S., Bamgude, A., Kamthe, S., Patil, A., & Barapte, R. (2020). Gesture Language Translator Using Raspberry Pi. https://www.ijraset.com/research-paper/gesture-language-translator-using-raspberry-pi
Prasetya, M. L., & Sarno, R. (2021). Sign language recognition using convolutional neural
networks. International Journal of Interactive Mobile Technologies, 15(2), 93-108.

S, R. A. (2022, December 12). What is Raspberry Pi? Here’s the best guide to get started.
Simplilearn.com. https://www.simplilearn.com/tutorials/programming-tutorial/what-is-
raspberry-
pi#:~:text=The%20Raspberry%20Pi%20is%20a,a%20modified%20version%20of%20Li
nux.

Sampaga, U., Toledo, A., Peret, M. a. L. D., Genodiala, L. M., Aguilar, S. L. C., & Antoja, G. a.
M. (2023). Real-Time Vision-Based Sign Language Bilateral Communication Device for
Signers and Non-Signers using Convolutional Neural Network. World Journal of
Advanced Research and Reviews, 18(3), 934–943.
https://doi.org/10.30574/wjarr.2023.18.3.1169

Sandler, W. (2018). The body as evidence for the nature of language. Frontiers in Psychology, 9.
https://doi.org/10.3389/fpsyg.2018.01782
Saxena, S., Paygude, A., Jain, P., Memon, A., & Naik, V. (2022, July). Hand Gesture
Recognition using YOLO Models for Hearing and Speech Impaired People. In 2022
IEEE Students Conference on Engineering and Systems (SCES) (pp. 1-6). IEEE.

Schönström, K. (2021). Sign languages and second language acquisition research: An introduction. Journal of the European Second Language Association, 5(1), 30–43. https://doi.org/10.22599/jesla.73

Shanthi, K. G., Manikandan, A., Vidhya, S. S., Chandragiri, V. P. P., Sriram, T. M., & Yuvaraja,
K. B. (2018). Design of low cost and efficient sign language interpreter for the speech
and hearing impaired. ARPN Journal of Engineering and Applied Sciences, 13(10), 3530-
3535.

SignHealth. (2023, August 7). What is the difference between deaf and Deaf? - SignHealth.
https://signhealth.org.uk/resources/learn-about-deafness/deaf-or-deaf/

Solano, C. I. H., Barraza, J. A. V., Avelar, R. S., and Bustos, G. N. (2018). No a la discapacidad:
La Sordera como minoría lingüística y cultural. Revista Nacional e Internacional de
Educación Inclusiva, 11, 63–80. Available online at:
https://revistaeducacioninclusiva.es/index.php/REI/article/view/384 (accessed February
3,2023)

Sumadeep, J., Aparna, V., Ramani, K., Sairam, V., Kumar, O. P and Krishna, R. L. P, “Hand
Gesture Recognition And Voice Conversion System for Dumb People”, 2019

Tan, Y. S., Lim, K. M., & Lee, C. P. (2021). Hand gesture recognition via enhanced densely
connected convolutional neural network. Expert Systems With Applications, 175, 114797.
https://doi.org/10.1016/j.eswa.2021.114797

Tao, W., Leu, M., & Yin, Z. (2018). American Sign Language alphabet recognition using
Convolutional Neural Networks with multiview augmentation and inference fusion.
Engineering Applications of Artificial Intelligence, 76, 202–213.
https://doi.org/10.1016/j.engappai.2018.09.006

Terry, J. (2023). Enablers and barriers for hearing parents with deaf children: Experiences of
parents and workers in Wales, UK. Health Expectations, 26(6), 2666–2683.
https://doi.org/10.1111/hex.13864

Tippannavar, S. S., Shivprasad, N., & Yashwanth, S. D. (2023, February). Smart Gloves—A tool
to assist Individuals with Hearing difficulties. In 2023 International Conference on
Recent Trends in Electronics and Communication (ICRTEC) (pp. 1-5). IEEE.

Trivedi, A., Pant, N., Shah, P., Sonik, S., & Agrawal, S. (2018). Speech to text and text to speech
recognition systems - A review. IOSR J. Comput. Eng., 20(2), 36-43.

Using startup scripts on Linux VMs. (n.d.). Google Cloud.


https://cloud.google.com/compute/docs/instances/startup-scripts/linux

W. K. Wong, F. H. Juwono and B. T. T. Khoo, "Multi-Features Capacitive Hand Gesture


Recognition Sensor: A Machine Learning Approach," in IEEE Sensors Journal, vol. 21,
no. 6, pp. 8441-8450, 15 March 2021, doi: 10.1109/JSEN.2021.3049273.

Wadhawan, A., & Kumar, P. (2020). Deep learning-based sign language recognition system for
static signs. Neural computing and applications, 32, 7957-7968.

Wen, F., Zhang, Z., He, T., & Lee, C. (2021). AI enabled sign language recognition and VR
space bidirectional communication using triboelectric smart glove. Nature
communications, 12(1), 5378.

What is Computer Vision? | Microsoft Azure. (n.d.-b). https://azure.microsoft.com/en-


us/resources/cloud-computing-dictionary/what-is-computer-vision

Willis, K. D. D. (2018, November 9). US20230056614A1 - Conversion of geometry to boundary


representation with facilitated editing for computer aided design and 2.5-axis subtractive
manufacturing - Google Patents.
https://patents.google.com/patent/US20230056614A1/en?q=(design+criteria)&oq=design
+criteria

Woll, B. (2018). Deaf people: Linguistic and social issues. In The Oxford Handbook of Deaf
Studies in Language (pp. 1-17). Oxford University Press.

Working with Raspberry Pi Camera Board - MATLAB & Simulink Example. (n.d.).
https://www.mathworks.com/help/supportpkg/raspberrypiio/ref/working-with-raspberry-
pi-camera-
board.html#:~:text=The%20Raspberry%20Pi%20Camera%20Board,at%2030%20frames
%20per%20second.

World Health Organization: WHO. (2023b, February 27). Deafness and hearing loss.
https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss

APPENDIX A

REALTIME SIGN LANGUAGE TO SPEECH TRANSLATION

SURVEY FORM

Instruction: Below are statements that relate to the quality of the SignMo: Real-time Sign

Language to Speech Translation Based on the ISO 25010 criteria. Using the scale below, kindly

rate by checking (✓) the box that corresponds to your response to the given statements in the

criteria below.

Numerical Rating Verbal Interpretation

5 Excellent

4 Very Good

3 Satisfactory

2 Fair

1 Poor

Parameters 5 4 3 2 1

A. Functional Suitability
1. The set of functions covers all the
specified tasks and user objectives.
Ang buong sistema ay sumasaklaw sa lahat ng
tinukoy na mga Gawain at mga layunin ng
gumagamit.
2. The function provides the correct results
with the needed degree of precision.
Ang Sistema ay nagbibigay ng tamang resulta
sa kinakailangang antas ng katumpakan.

3. The function facilitates the


accomplishment of specified tasks and
objectives.
Ang paggamit sa Sistema ay mangangasiwa sa
pagtupad ng tiyakang mga Gawain at layunin.

B. Portability
1. A product or a system can effectively and
efficiently be adapted for different or
evolving software or other operational
usage environments.
Ang produkto o Sistema ay maaaring epektibo
at mahusay na maiaakma para sa iba’t ibang
hardware, software o iba pang mga uri ng
pagpapatakbo o paggamit.

2. A product or system can withstand


technology evolution and changes without
costly redesign, reconfiguration or
recoding.
Ang produkto o Sistema ay maaaring tumagal
sa ebolusyon ng teknolohiya at pagbabago ng
hindi mahal na muling idisenyo, pagsasaayos
o pakukudigo.

3. A product or system can be successfully


installed and/or uninstalled in a specified
environment.
Ang produkto o sistema ay maaaring
matagumpay na maikabit at matanggal ng
naaayon sa pangangailangan.

4. A product can replace another specified


software product for the same purpose in
the same environment.

Ang produkto ay maaaring palitan ng isa pang


tiyak na produkto na software para sa
parehong layunin sa parehong kaligiran.

5. A product or a system can increase


efficiency and productivity by reducing the
time and costs involved in delivering
instruction.
Ang produkto ay maaaring tumaas ang
kakayahan at pagiging produktibo sa
pamamagitan ng pabawas ng oras at paggugol
sa katulad na kapaligiran.

C. Performance Efficiency

1. The response and processing times and


throughput rates of a product or system,
when performing its functions, meet
requirements.
Nakatugon ang Sistema sa mga kinakailangan
oras ng pagtugon at pagproseso at mga antas
ng throughput ng isang produkto o Sistema,
kapag nakapagsagawa ng tungkulin nito.

2. The amounts and types of resources used


by product or system, when performing its
functions, meet requirements.
Ang halaga at uri ng mga mapagkukunan na
ginamit ng Sistema, kapag gumaganap ng
tungkulin nito ay nakatugon sa mga
pangangailangan.

3. The maximum limits of the product or


system parameter meet requirements.
Nagtutugunan ng pinakamataas ng limitasyon
o parametro ng Sistema ang mga
pangangailangan.

Comments/ Recommendations.
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________

APPENDIX B

Letter to the Owner of the Research Instrument


January 9, 2024
Sir Flaviano L. Urera Jr.
College of Computer Studies
Laguna State Polytechnic University, Philippines

Good day sir!

As we are in the process of creating our thesis paper, one of the requirements is a research instrument to obtain, measure, and analyze the data gathered in our study.

In line with this, the researchers (Flora, Lachica, Velasco) would like to request your consent to use your research instrument. We acknowledge the time and effort you have put into creating this instrument; thus, it will be a great help to us in successfully completing our study. Rest assured, we commit to maintaining the confidentiality of the instrument, and no modifications nor adjustments will be made without your consent.

We are hoping for your kind and positive response. Thank you very much!

Sincerely,
The Researchers
ERA MARIE M. FLORA ESTEBAN L. LACHICA JOHN PATRICK M. VELASCO

Approved By:

Flaviano L. Urera Jr.



APPENDIX C

Informed Consent Form

Carlos Hilado Memorial State University

College of Engineering
Brgy. Alijis, Bacolod City

Dear participant,
Greetings!
The goal of this study entitled “SignMo: Real-time Sign Language to Speech
Translation” is to evaluate the device based on the objectives it seeks to accomplish. The
researchers are requesting your time so that you can provide insightful feedback about the device
that has been presented based on your observations and evaluation. The researchers also ensure
that your personal information is kept confidential and that no disclosure of information will
happen without your permission.
Please review the study’s information, and feel free to ask the researchers if you have any questions.
Thank you!

Respectfully yours,

The Researchers

ERA MARIE M. FLORA ESTEBAN L. LACHICA JOHN PATRICK M. VELASCO

Approved by:
ENGR. NEIL GABRIEL ESGRA
THESIS Adviser

Appendix D

Letter to the Dean


January 13, 2024
Dr. Joe Marie D. Dormido
Dean, College of Computer Studies
Carlos Hilado Memorial State University - Alijis

Good day, Sir!


We are presently undertaking action research entitled “SignMo: Real-time Sign Language
to Speech Translation” in compliance with the requirements of the CPEDS1 – CpE Practice and
Design 1 for the degree Bachelor of Science in Computer Engineering at Carlos Hilado
Memorial State University – Alijis. The study employs developmental research design with the
experts in the field of Computer Engineering and any other related fields, Sign Language users,
and/or Deaf and Hard of hearing students within Bacolod City, and Students currently taking up
Bachelor of Special Need Education in CHMSU Talisay Campus.
In line with this, we would like to ask permission from your office to conduct our study in
the institution. The following requirements are also being sought for your approval:
1. To allow the respondents to fill out their survey questionnaire as soon as possible.
2. To seek the assistance of the faculty and staff in facilitating the survey.

Very truly yours,


The Researchers Approved by:
ERA MARIE M. FLORA ENGR. NEIL GABRIEL ESGRA
ESTEBAN L. LACHICA THESIS Adviser
JOHN PATRICK M. VELASCO

Appendix E

Letter to the School President

January 13, 2024


Norberto P. Mangulabnan, PhD
President, Carlos Hilado Memorial State University
Carlos Hilado Memorial State University

Good day, Sir!


We are presently undertaking action research entitled “SignMo: Real-time Sign Language
to Speech Translation” in compliance with the requirements of the CPEDS1 – CpE Practice and
Design 1 for the degree Bachelor of Science in Computer Engineering at Carlos Hilado
Memorial State University – Alijis. The study employs developmental research design with the
experts in the field of Computer Engineering and any other related fields, Sign Language users,
and/or Deaf and Hard of hearing students within Bacolod City, and Students currently taking up
Bachelor of Special Need Education in CHMSU Talisay Campus.
In line with this, we would like to ask permission from your office to conduct our study in
the institution. We are likewise seeking your approval to administer the survey questionnaire
during the most convenient time of the respondents.
We highly appreciate your support by granting our requests.

Very truly yours,

The Researchers Approved By:


ERA MARIE M. FLORA ENGR. NEIL GABRIEL ESGRA
ESTEBAN L. LACHICA THESIS Adviser
JOHN PATRICK M. VELASCO

Appendix F

System Block Diagram



Appendix G

Proposed Design for Device
