Revised Chapter I - III - SignMo
A Technical Project
Presented to the
In Partial Fulfillment
By
Lachica, Esteban L.
October, 2023
Carlos Hilado Memorial State University
Alijis Campus | Binalbagan Campus | Fortune Towne Campus | Talisay Campus
TABLE OF CONTENTS
Title Page
Table of Contents
List of Tables
List of Figures
CHAPTERS
I INTRODUCTION
Conceptual Framework
Definition of Terms
II REVIEW OF RELATED LITERATURE
Summary of Insights
III METHODOLOGY
Research Design
Design Criteria
Evaluation Procedure
Instrumentation
Data to be Gathered
Data Analysis
Project Cost
REFERENCES
APPENDICES
LIST OF TABLES
Table
4 List of Materials
5 Test Parameters
8 Project Cost
LIST OF FIGURES
Figure
Chapter I
INTRODUCTION
Individuals who are mute or hearing-impaired use sign language, a visual and gestural form of communication. Although its grammar and syntax make it an expressive and sophisticated language, people who do not speak it face a significant communication barrier (Hommes, Borash, Hartwig, et al., 2018). Individuals who are mute or hearing-impaired often face challenges when interacting with the hearing community, which hinders their inclusion and access to various aspects of society (Kushalnagar, 2019). In situations where a sign language interpreter is not available, written communication has traditionally been used alongside sign language interpreters as an alternative. However, these approaches are not always feasible, effective, or easily accessible (Nakamura et al., 2019). Additionally, they might not offer spontaneous, in-the-moment communication. This has motivated the development of real-time systems for translating sign language gestures into spoken language,
facilitating much simpler communication for the deaf and mute (Hassan et al., 2021). Thanks to research, systems for recognizing sign language have advanced significantly in recent years. These systems capture and analyze the movements and gestures of sign language users with computer vision techniques, and machine learning algorithms are then used to identify the corresponding signs (Wadhawan & Kumar, 2020). Large sign language datasets have been created for translation systems (Kumar et al., 2022), and mobile devices have expanded their reach. Nevertheless, a communication gap persists between the hearing public and the Deaf and hard of hearing community. It affects social inclusion, prospects for work, and the provision of healthcare and education. In order to help the Deaf and mute community communicate more easily and spontaneously, this thesis will develop a real-time sign language translation system using a Raspberry Pi that combines computer vision, machine learning, and a text-to-speech engine.
Objectives of the Study
This study generally aims to design, develop, and test a Real-time Sign Language Speech Motion Translation for the 2nd Semester, School Year 2023-2024. Specifically, it aims to:
1. Design and develop a Real-time Sign Language Speech Motion Translation with the following technical features:
a. Computer Vision;
b. Text to Speech; and
c. Auto-start Script;
2. Test the quality of the Real-time Sign Language Speech Motion Translation in terms of:
a. Functionality;
b. Portability; and
c. Performance Efficiency.
Conceptual Framework
This section presents the elements of the research paper. The given details show the guidelines and process of how the study will be conducted, as well as its inputs and outputs.
Figure 1 shows the framework, which presents a three-phase roadmap for developing SignMo, a real-time sign language speech motion translation device. Phase 1 lays out the technological resources to be used in developing the device. In Phase 2, the prototype is subjected to enhancements to improve the user experience. Phase 3 ensures that SignMo fulfills its requirements and objectives.
Significance of the Study
This study will be of considerable interest to healthcare providers, the general public, educational institutions, hearing-impaired individuals, and future researchers.
Healthcare Providers
Healthcare providers and professionals can benefit from this study as they can communicate well with patients who are deaf or hard of hearing, leading to better care and a better understanding of the patients' needs.
General Public
The general public can benefit from the Real-time Sign Language Speech Motion Translation device as it allows them to communicate with deaf and hard-of-hearing individuals even without knowledge of sign language.
Educational Institutions
Educational institutions and teachers can benefit from this study by using the Real-time Sign Language Speech Motion Translation device as a communication aid for learners who are deaf or hard of hearing.
Future Researchers
Future researchers can benefit from this study by using it to gather data for further research studies on real-time sign language translation systems for future developments. The study's
findings, methodologies, and challenges encountered can guide future researchers toward more advanced and efficient communication solutions for deaf and hard of hearing individuals.
Hearing-Impaired Individuals
Hearing-impaired individuals can benefit from the Real-time Sign Language Speech Motion Translation device, as they can communicate with hearing individuals for easier access to information and services.
Scope and Limitations of the Study
This study is concerned with the design, development, and testing of SignMo: Real-time Sign Language Speech Motion Translation during the second semester of School Year 2023-2024.
The primary scope of this research includes the creation of a device that is capable of
recognizing and translating sign language gestures into speech in real-time. The proposed device
will utilize a Raspberry Pi microprocessor as its main processing unit. Additionally, it will utilize
a 5-megapixel camera module for sign-language hand gesture detection. For portability and
convenience, the device is packed with a rechargeable battery as its power supply. Moreover, a
speaker will be used for the sign-language to speech feature. The device will be trained using the
data sets that will be utilized during the sign-language detection and translation process through
the Computer Vision feature. These data sets will be the foundation in the entire machine
learning cycle of the device, from training to evaluating and improving its performance.
The respondents of the study will be composed of experts in the field of Computer Engineering, Sign Language users, and/or Deaf and Hard of Hearing students within Bacolod City, students currently taking up Bachelor of Special Need Education in CHMSU Talisay Campus, and Computer Engineering students at Carlos Hilado Memorial State University - Alijis Campus.
However, various limitations have been set in the course of this study. Notably, the
system will be limited to the detection of single-hand sign language gestures only. Furthermore,
the reliance on built-in data sets may restrict the system's adaptability to diverse sign language
variations. Additionally, the proficiency of the system is limited to simple and common sentences only, and its effectiveness may be compromised in areas with inadequate lighting conditions.
Definition of Terms
For better understanding, the following terms are defined conceptually and operationally.
Auto-start Script. It refers to a file that performs tasks during the startup process of a virtual
machine (VM) instance. Startup scripts can apply to all VMs in a project or to a single VM.
In this study, auto-start script refers to the pre-programmed instructions that automatically launch and initialize the translation function upon the startup of the device.
Deaf. It is used to describe anyone who does not hear very much; sometimes it is also used to refer to people who are severely hard of hearing. Deaf people tend to communicate in sign language (SignHealth, 2023).
In this study, this refers to individuals who have hearing loss and use sign language as a primary mode of communication.
Sign Language. It refers to the fundamental means of communication for those with hearing or
speech impairments. Apart from that, they are also the primary carriers of Deaf culture, with
different beliefs, behaviors, literary traditions, history, and values. (Foggetti, 2023)
In this study, sign language refers to a form of communication used by deaf and hard-of-hearing individuals.
Computer Vision. It refers to a field of computer science that focuses on enabling computers to identify and understand objects and people in images and videos (What Is Computer Vision?, n.d.).
In this study, computer vision refers to the use of a computer and a camera to capture visual information, specifically sign language hand gestures.
Text-to-Speech. It is described as a technology that converts text into speech. Trivedi et al. (2018) define text-to-speech as a process in which input text is first analyzed, then processed and understood, and then converted to digital audio and spoken.
In this study, text-to-speech refers to the spoken output of the written text converted from the sign language gestures.
Functionality. It refers to the range of operations that can be run on a system and how many functions it can perform ("Functionality Definition and Meaning," Collins English Dictionary, n.d.).
In this study, functionality refers to the ability of the system to accurately perform real-time sign language translation.
Portability. It refers to the ease with which a system, software, or data can be transferred and used in different environments.
In this study, portability refers to the ease with which the device can be transferred or carried to a different place.
Performance Efficiency. It refers to performance relative to the amount of resources used under stated conditions (ISO 25010, n.d.).
In this study, performance efficiency refers to the ability of the system to execute the sign-to-speech translation promptly while making efficient use of the device's resources.
CHAPTER II
REVIEW OF RELATED LITERATURE
This chapter presents the conceptual, research, and prior art literature from foreign and local sources. Deaf and hard-of-hearing individuals
constantly rely on visual communication (Goel et al., 2022). Language plays a pivotal role in
everyday life, serving as a complex system for expressing our personality and facilitating
effective communication with others. We interact with people in various contexts through words,
gestures, and vocal tones, conveying our emotions, desires, and inquiries. Individuals with severe
or profound hearing loss naturally rely on sign language as their mode of communication. 5% of
the global population, approximately 466 million people, have some form of hearing impairment.
By 2050, this number is expected to rise to 900 million, equivalent to one in every ten
individuals (World Health Organization, 2023). Sign language is the primary means of communication for the hearing- and speech-impaired.
Sign language supports communication for persons with speech and hearing impairments worldwide. It utilizes complete signs together with facial expressions, hands, and other body parts. Every country has its own sign language with its own syntactical and
grammatical meaning (Antony et al., 2020). Ninety percent of children who are deaf are born to hearing families who are uninformed and have little knowledge about deafness (Terry, 2023). Many deaf children grow up in environments with zero tolerance for oral communication alternatives; however, relying solely on cochlear implants (CIs) or hearing aids for auditory information might not guarantee full access to language (Humphries et al., 2019). By their very nature, sign languages convey linguistic information directly through articulations of different body parts, such as the hands, face, and body.
The sense of hearing is a crucial channel of input for all kinds of information that is important in a child's development. A deaf person's sense of hearing provides limited information, which he or she must instead acquire through others, especially through visual communication channels. However, the consequences of early childhood deafness are far-reaching and varied. Some rudimentary knowledge of the linguistic, cognitive, social, and psychological aspects of deafness is needed to understand these consequences (Meadow, 2023).
Communication barriers persist since many people do not know how to use sign language. Exploring second language acquisition through research in signed and spoken languages, which operate in distinct modalities, offers significant potential to expand our comprehension of learning mechanisms and emphasize their significance in our interconnected world (Schönström, 2021). Unlike people with other types of
disabilities, Deaf communities have sign languages that enable them to communicate with each
other.
The deaf community has formed its culture and identity, fostering a sense of pride
(Becerra Sepúlveda, 2020). This pride has led them to embrace a social perspective, symbolized
by using the uppercase "D" to name their deafness. Here, "deaf" signifies a clinical or oralist
viewpoint, while "Deaf" represents individuals aligning with a linguistic and cultural minority
(Solano et al., 2018). Many individuals with hearing impairments desire to strive as much as any hearing person.
Sign language translation systems employ diverse statistical methods to convert sign
language into spoken or written language, emphasizing intensive early interventions to enhance
communication skills for individuals with such disorders (Papatsimouli et al., 2023b). Although deaf, hard-of-hearing, and mute people can easily communicate with one another, integration in educational, social, and work environments remains a significant barrier for the differently abled. There is a communication barrier between an unimpaired person unaware of the sign language system and a sign language user. One prior study proposed a wearable communication device for mute and deaf and hard-of-hearing people: a sensing glove integrated with WiFi/XBee technology that utilized an Arduino UNO microcontroller as its central processing unit.
Through their proposed device, a hearing-impaired user can manually store any of the 26 most frequently used words in his daily life in the Sentence Mode, making it easy for him to communicate. The user can access this mode by flipping the mode control switch to the ON state, and the mode-indicating LED turns on, indicating that the mode of operation is Sentence Mode. On the other hand, in Character Mode, any gesture formation created by the user will transmit a single digital character with an equivalent alphabet character. This mode is used when the user wants to transmit a word that is not part of Sentence Mode. Thus, using this mode, the user can generate any word letter by letter.
Paasa (2022) also conducted a study in the same context of sign language translation. In his study, he developed an assistance-oriented model that utilized a Leap Motion device, tracking the hand and finger movements of the user. His device is limited to translating Filipino Sign Language into digital text; despite this limitation, however, it helps Filipino people with disability, especially those who cannot speak.
Another local study developed an application that recognizes Filipino Sign Language (FSL) and converts it into text. To assess the device's level of acceptability in terms of content, design, and functionality, the researchers did purposive sampling for the 30 selected respondents: 9 Special Education students, 7 Special Education teachers, and 14 non-disabled people. According to the three sets of respondents, the content, design, and functionality fall under the "Very Highly Acceptable" bracket. The very high acceptability of the application among the three sets of respondents suggests that the application was user-friendly and beneficial for the respondents in closing the communication gap.
Pawar et al. (2022) highlighted "Vision-based sign language recognition" in their study.
Their study suggests an algorithm or approach for an application that will aid in recognizing
Indian Sign Language's various signs. The approach has been designed with a single user in
mind, meaning the real-time images will be captured first and then saved in the directory. By
using the SIFT (Scale-Invariant Feature Transform) algorithm, it will be possible to determine
which sign has been articulated by the user. The comparison will be done in reverse, and the
result will be generated based on matched critical points from the input image to the image that
has already been saved for a particular letter or word. In Indian Sign Language, twenty-six signs
match each letter of the alphabet, and the proposed algorithm delivers 95% accuracy.
Sign language is a remarkable development that has evolved over time. Unfortunately, there are some disadvantages associated with this language (Pawar et al., 2022). Thus, the development of real-time sign language translation offers great help in addressing the communication gap. Our study will build on this process to develop a more advanced sign language translation by utilizing a Raspberry Pi microprocessor with a specific feature: real-time sign language to speech translation.
Gesture and motion recognition technology has made communication easier for the speech- and hearing-impaired community. A sign language recognition system helps speech-impaired and hard-of-hearing individuals connect easily with others. Antony et al. (2022) developed a sign language
recognition system that translates gestures into understandable forms. The system has two main
methods for implementing sign language motion recognition: i) Sensor-based approach and ii)
Vision-based approach. In the vision-based approach, cameras are used to capture images of
signs, and in the sensor-based approach, a glove is constructed using sensors that will track the movements of the signer's hands.
Sign language recognition poses significant challenges due to the intricate hand gestures,
body postures, and facial expressions, which often incorporate rapid and complex movements
(Jiang et al., 2021). Hand gesture recognition, in particular, is a complex aspect of sign language recognition.
Hand gesture recognition systems play a crucial role in various applications, including
natural Human-Computer Interaction (HCI) (Barbhuiya et al., 2020; Tan et al., 2021), virtual
object manipulation, multimedia and gaming interaction (Wong et al., 2021), smart homes, in-
vehicle infotainment systems (Chevtchenko et al., 2018), and sign language recognition (Saxena
et al., 2022).
Human interaction is essential for sharing ideas, thoughts, and abilities, but there are deaf
and hard-of-hearing individuals who face challenges in everyday communication. Kute et al.
(2020) proposed a smart glove system that converts sign language into speech output. The
system acknowledges basic hand gestures and converts them into electrical signals using motion
sensors. Through a gesture recognition module, the system uses flex sensors fixed on hand gloves. The sensors recognize the English alphabet and a few words and then convert them into audio messages, in which each movement of the hands is converted into a different audio message. The data from the accelerometer and the flex sensors that capture hand motions is analyzed by a microcontroller. In order to create recorded sound signals for speaker delivery, the technology uses the signals generated by the flex sensors. In a related approach, hand motions are translated into spoken or printed words using a dedicated tool for sign language recognition; an Arduino microcontroller setup and a data glove with four strategically placed flex sensors make up the bulk of the proposed gadget. The hand signals can be translated in an instant, and all 26 letters can be recognized.
In order to gather visual-spatial trait characteristics, Amangeldy et al. (2023) used the MediaPipe Holistic pipeline in a multi-stage method to extract gesture key points from video data. The system consists of multiple stages, each intended to target specific limitations of individual pose models or hand components. These may be achieved by training a carefully
designed and structured multilayer perceptron model using the stage-by-stage extracted posture and hand properties. A multi-stage pipeline that extracts the spatiotemporal characteristics of sign language was used in this method. The natural language processing (NLP) processor interprets the sentences produced by gesture recognition, which comprise words in their base form. Therefore, researchers should strive for practical and unobtrusive solutions like computer-vision-based systems.
Filipino Sign Language (FSL) users, addressing grammatical differences among sign languages.
Employing image processing and recognition systems, the device translates FSL gestures and facial expressions into speech, utilizing Convolutional Neural Networks (CNNs) for gesture classification and allowing non-signers to communicate with the deaf without an interpreter. The system operates in real-time, achieving a 93% accuracy rate in recognizing gestures, converting sign language to speech in 1.84 seconds and speech to text in 2.74 seconds on average. Feedback from Manila High School participants indicated an 85.50% approval rating, suggesting its effectiveness in bridging the communication gap.
The Filipino deaf community continues to lag behind the fast-paced and technology-
driven society in the Philippines. Filipino Sign Language (FSL) has improved communication
for deaf people; however, most Filipinos do not understand FSL. In developing the FSL sign
language recognition model, Montefalcon et al. (2021) utilized computer vision to obtain the
images. They used a Convolutional Neural Network (CNN) with the ResNet architecture to build the automated FSL recognition model, which can recognize Filipino number signs from 0 upward.
Sign language has become a crucial instrument for impaired people to communicate with
others. However, the lack of knowledge and mastery regarding sign language became a major hindrance for ordinary people. With the concept of a sign language translator, the language barrier between deaf and mute individuals and ordinary people is addressed. The software provides translations corresponding to the hand gestures presented in front of the camera or monitor of the system. The system analyzes the hand gestures captured from the camera or monitor and then translates them into spoken or text form.
Sign language translation has been benefiting from recent deep learning breakthroughs in natural language processing and image/video captioning. Sign language, being visual-spatial, poses challenges due to its continuous nature, requiring context for meaning. Ananthanarayana et al. (2021) explored intricate networks such as attention-based networks, reinforcement learning, and the transformer model. Implementing translation methods across German (GSL), American (ASL), and Chinese (CSL) sign languages, along with input embeddings from ResNet50 or pose-based landmark features, their work compared the performance of these models.
The limitations of existing glove-based solutions for sign language recognition are
discussed. These solutions can only recognize discrete single gestures, such as numerals, letters,
or words, rather than complete sentences. Wen, Zhang, He, and Lee (2021) propose an AI-based
sign language recognition and communication system to address this. The segmentation
technique divides complete sentence signals into word units, allowing the DL model to recognize
all word elements. The proposed model achieves an average accuracy rate of 86.67% in recognizing sentences.
Carlock (2021) developed a communication system to handle various inputs and outputs,
including text and audio. This system contains a translation engine that interacts with the
communication device to generate translations between sign language and word content. It can
also translate word content found within text inputs. The translation engine's function involves
comparing sign language content segments with content indicators related to representations of
word content. The communication device can capture, display, and process video streams, while
the translation engine is precisely engineered to identify sign language content segments within these streams.
Sumadeep et al. (2019) proposed a system that captures hand motion and translates it into speech. This device consists of two components: the first circuit is
the transmitter circuit, and the other circuit is the receiver circuit. The transmitter circuit
comprises a microcontroller, an accelerometer, and a flex sensor. The receiver circuit comprises
an audio module, an amplifier, and a speaker. With the help of an accelerometer and flex sensor,
when a gesture is detected, the A-to-D converter produces the necessary digital output. The information is sent to the microcontroller. The microcontroller then looks up the values in the database, and the matching values are transmitted to the receiver.
Garcia et al. (2022) developed a CNN-based translator for fingerspelling American Sign
Language (ASL). They employed transfer learning using pre-trained GoogLeNet architecture,
which was trained on ASL and ILSVRC2012 datasets. The models created accurately recognize
letters from a to e, and the other set works from a to k. However, an issue highlighted in the paper is the authors' claim regarding the potential for accuracy and efficiency improvements with additional datasets; relying on such research speculations can cause challenges in validating the approach.
Ang et al. (2022) implemented a hand gesture Filipino Sign Language recognition model using a Raspberry Pi. Numerous studies on Filipino Sign Language (FSL) frequently identify a letter with a glove and a plain background, which may be challenging if implemented in a more complex, real-world environment. In contrast, their model demonstrated dependability in a variety of complex backgrounds.
Technological developments have been made to help the majority communicate with the
Deaf and Mute Community. Unfortunately, these have not reached the Deaf population of the
Philippines. The study aims to recognize Filipino Sign Language (FSL) movements using motion sensing. Filipino Sign Language translation technology research focuses more on finger signing and facial expressions. This leaves out one of the most essential things in sign language: the arm and hand
movement. Subsequently, the resulting feature data was manually synced with the annotated data. This synced data was then grouped into ten frames to simulate motion (Cronin et al., n.d.).
Sign language is the primary language used by the Deaf and Hard-of-Hearing (DHH)
community in the Philippines. Herrera et al. (2023) developed a millimeter wave (mmWave)
technology system that uses gesture recognition applications in sign language. A mmWave-based
FSL recognition system translates isolated signs into their equivalent gloss. The system captures
raw data from a user's motion in front of the radar sensor. The captured data from the TI IWR1443
radar sensor is then fed into the recognition module, starting with the processing algorithm to
clean the data. It is then provided through the deep learning model to classify the data and return
the gloss of the sign. The researchers conducted simple tests to determine the semi-real-time
capability of the system. The system automatically inferred the gloss corresponding to the
performed sign with some delay. These delays were measured to compute the overall recognition time of the system.
Summary of Insights
The related literature and prior research presented in this study provide valuable insights for researchers studying hearing and speech impairments, sign language, and the technologies developed to bridge the communication gap.
The first theme highlighted the importance of sign language for hearing and speech
impaired people. Prior research and studies included in this theme enlightened the researchers of this study about the communication gap between hearing- and speech-impaired individuals and abled individuals. Halim & Abbas made it clear that sign language is a means of communication used by hearing- and speech-impaired people that is difficult to understand for anyone who does not use or study this language. Because of this gap, a communication barrier has existed for hearing- and speech-impaired individuals across different countries, and Deaf and speech-impaired individuals are often overlooked due to their lower visibility. With this, the researchers of this study will strive to address this gap and bridge the communication between abled individuals and the hearing- and speech-impaired community.
The second theme focused on the prior studies that utilized sign language real-time
translation systems. Several approaches in sign language real-time translation that were utilized
by different researchers in their study will be very beneficial in the course of this study.
However, most of the studies included in this theme were limited to translating sign language into text only; despite this limitation, these studies have a great impact on this research. The vision-
based technique and the built-in data sets approach introduced in the prior studies will be adopted in the course of this study.
The third theme focused mainly on studies about Gesture and Motion Recognition. Prior
research about gesture and motion recognition highlighted that the use of gesture and motion
recognition technology for sign language recognition has made communication easier for speech
and hearing-impaired individuals. In the context of gesture and motion recognition, there were
two main methods for implementing sign language motion recognition: sensor-based and vision-based approaches.
The last theme on the other hand focused mainly on Sign Language Translation
Techniques. Various techniques for sign language translation have become increasingly important in bridging the gap between hearing-impaired
individuals and others who don't know sign language. One of the techniques that the prior studies
have utilized is the hardware-based solution proposed by Sumadeep et al. (2019), which uses two
circuits, a transmitter circuit, and a receiver circuit. The transmitter circuit comprises a
microcontroller, an accelerometer, and a flex sensor, while the receiver circuit comprises an
audio module, an amplifier, and a speaker. Another technique is computer vision-based models,
which require a minimum requirement of one camera and image processing techniques to
classify and categorize motions. The main difficulty with these systems is transporting a camera
and CPU inside a box or a container. Lighting conditions also have a significant impact: under poor
lighting, the system may not recognize the hand gesture and may interpret the displayed signal mistakenly, making it difficult for the ordinary user to operate the system.
Understanding the difficulties and addressing the limitations of the prior research related to this study, the researchers came up with the idea of developing a sign language translation device that incorporates the applicable techniques and approaches introduced in the prior research. This is pursued through the present study, titled "SignMo: Real-time Sign Language to Speech Translation", a device that translates sign language into speech in real time using a Raspberry Pi.
Chapter III
METHODOLOGY
This chapter discusses the research design, respondents, measures, procedures, and data analysis of the study.
Research Design
This quantitative study will utilize descriptive and developmental research methods to
design, develop, and test a Real-time Sign Language to Speech Translation. The developmental
phase will include the design criteria, parameters for analysis, and the design plan preparation
and fabrication of the system. Moreover, the descriptive research will include the evaluation of the prototype's quality.
Design Criteria
Design criteria are the specific guidelines or requirements observed when creating an object. They encompass various aspects to ensure the final product's effectiveness, functionality, and practicality (Willis, 2018). In this study, the researchers aim for better sign language translation that will benefit speech-impaired and deaf and hard-of-hearing individuals using the following technical features.
Computer Vision
This feature is the optical component that allows the system to recognize and interpret sign language gestures. Computer vision is crucial in sign language translation because it captures and analyzes the real-time movement input of the user's signs. The data gathered from the visual component is used to translate the hand or sign gestures into spoken language, bridging individuals who use sign language as communication and those who do not.
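To make this feature concrete, the following is a minimal sketch of how frame capture and single-hand detection could be implemented on the Raspberry Pi. It assumes the OpenCV and MediaPipe Python libraries, which the study does not prescribe; it is an illustrative sketch rather than the final implementation, and it is restricted to one hand in keeping with the single-hand limitation stated in Chapter I.

import cv2
import mediapipe as mp

# Open the default camera; on a Raspberry Pi the camera module is
# typically exposed as video device 0.
cap = cv2.VideoCapture(0)

# Detect at most one hand, consistent with the study's single-hand scope.
with mp.solutions.hands.Hands(max_num_hands=1,
                              min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # 21 (x, y, z) landmarks per hand; these coordinates are the
            # raw features a gesture classifier would consume.
            landmarks = results.multi_hand_landmarks[0].landmark
            print(f"hand detected with {len(landmarks)} landmarks")
cap.release()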
Text to Speech
This second feature within the sign language translation system bridges the gap between the interpreted sign language gestures and spoken language. The TTS engine processes the text produced from the gestures captured by computer vision and converts it into spoken language.
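As a minimal sketch of this feature, the snippet below uses the pyttsx3 Python library (an assumed choice; the study does not name a specific TTS engine) to voice the text produced by the recognizer through the device's speaker.

import pyttsx3

# Initialize the TTS engine; on Raspberry Pi OS, pyttsx3 typically
# drives the espeak synthesizer installed on the system.
engine = pyttsx3.init()
engine.setProperty("rate", 150)  # speaking speed in words per minute

def speak(text):
    """Convert recognized gesture text into audible speech."""
    engine.say(text)
    engine.runAndWait()  # blocks until playback through the speaker finishes

speak("hello")  # e.g., the word recognized from a sign gesture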
Auto-start Script
An Auto-start Script is a crucial part of the sign language translation system because it consists of instructions or commands that automate the sequence of processes involved in capturing and translating sign gestures as soon as the device starts up; one possible realization is sketched below.
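On Raspberry Pi OS, one common way to realize such a script is a systemd service. The unit file below is an illustrative sketch in which the file paths and the script name signmo.py are hypothetical placeholders.

# /etc/systemd/system/signmo.service (hypothetical path and file names)
[Unit]
Description=SignMo real-time sign language translation
After=multi-user.target

[Service]
# Launch the translation program as soon as the system is up.
ExecStart=/usr/bin/python3 /home/pi/signmo/signmo.py
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target

The service would then be enabled once with sudo systemctl enable signmo.service so that the translation function starts automatically on every boot.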
The proponent will employ the following procedures to ensure the quality of the prototype that will be developed. The successful utilization of these methods provides for the successful design, development, and testing of the SignMo: Real-time Sign Language to Speech Translation. This will involve the stages of planning, component gathering, assembling, coding, testing, revising, and finalizing. These well-organized stages ensure the systematic and efficient development of the device.
Planning
The researchers will start the planning process by creating a flowchart showing how the device will work. Figure 3 shows the ideal working conditions of the device. The researchers will define the physical form of the device and its interface as a sign language-to-speech translation device. They will also establish a comprehensive set of design criteria and select the components that satisfy them.
As shown in Figure 3, the device will start by capturing a sign language gesture as input data. The captured input will be analyzed to determine whether a gesture is detected or not. If a sign language gesture is detected, the neural network will identify the captured gesture.
When the neural network successfully recognizes sign language in the hand gesture, it will generate the corresponding text and process it to produce the corresponding audio output.
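The loop below is a minimal sketch of this overall flow; detect_gesture() and speak() are hypothetical stand-ins for the computer vision and text-to-speech components described under the Design Criteria.

import time
from typing import Optional

def detect_gesture() -> Optional[str]:
    """Placeholder: capture a frame and return a recognized sign label, or None."""
    return None

def speak(text: str) -> None:
    """Placeholder: forward recognized text to the TTS engine."""
    print("speaking:", text)

while True:
    label = detect_gesture()   # capture input and try to recognize a gesture
    if label is not None:      # a sign language gesture was detected
        speak(label)           # generate the corresponding audio output
    time.sleep(0.1)            # brief pause between capture attempts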
Figure 4. Flow Chart of the Ideal Working Conditions of the Computer Vision
As shown in Figure 4, the device will start capturing an image using a camera, which
serves as raw data for the hand detection and recognition system. After analyzing the image, the
system proceeds to the neural network, which contains a dataset of images and their corresponding
labels. The neural network attempts to identify the specific gesture performed by the hand in the
image. If the neural network successfully recognizes the hand gesture, it will generate the
corresponding speech output. If step 3 does not detect a hand and no gesture recognition can be performed, the system returns to capturing a new image. A sketch of the recognition step is shown below.
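The snippet below is a minimal sketch of the recognition step. It assumes a TensorFlow Lite model trained on hand features; the file name gesture_model.tflite, the label list, and the 0.8 confidence threshold are all hypothetical.

import numpy as np
from tflite_runtime.interpreter import Interpreter  # lightweight runtime suited to the Raspberry Pi

# Hypothetical artifacts: a trained gesture model and its label list.
interpreter = Interpreter(model_path="gesture_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
LABELS = ["hello", "thank you", "yes", "no"]  # placeholder gesture classes

def classify(features):
    """Map a flattened feature vector to a gesture label, or None if unsure."""
    x = np.asarray(features, dtype=np.float32)[np.newaxis, :]
    interpreter.set_tensor(input_details[0]["index"], x)
    interpreter.invoke()
    probs = interpreter.get_tensor(output_details[0]["index"])[0]
    best = int(np.argmax(probs))
    # Rejecting low-confidence predictions implements the "gesture not
    # recognized" branch of the flowchart: no speech is produced.
    return LABELS[best] if probs[best] >= 0.8 else None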
Figure 5. Flow Chart of the Ideal Working Conditions of the Text to Speech
Figure 5 shows the flowchart of the text-to-speech feature, where the process begins with the text that needs to be converted into speech. The text comes from the sign language performed by the signer. With the help of the Raspberry Pi, the TTS engine will utilize various algorithms to create voice data and produce an audio representation that corresponds to the sign gestures being performed. This process converts the synthesized text of sign gestures into spoken words, enabling audible communication.
Component Gathering
The researchers will acquire or place an order for the needed components, such as camera
modules, speakers, microcontrollers, and other electronic components. They will also ensure that the selected components are suitable for the successful operation of the device and satisfy the project's needs.
Table 1
Parts of the Computer Vision
Parts Function
Table 1 presents the parts of Computer Vision and its corresponding function. Computer
vision is the most crucial part of the device as it serves as the eyes of the device that captures the
live video of the sign language gestures, enabling real-time translation of the data gathered from the camera module.
Table 2
Parts of the Text to Speech
Parts Function
Table 2 presents the parts of the Text to Speech and their corresponding function. The
TTS engine translates the interpreted text into spoken language and outputs the synthesized text
through the speaker. This feature enables seamless communication between individuals who use sign language and those who do not.
Table 3
Parts of the Auto-start Script
Parts Function
Table 3 presents the parts of the Auto-start Script and its corresponding function. The
Raspberry Pi receives and processes the sign language data from the camera. The auto-start
script manages the automatic launch and initialization of the translation function upon the startup
of the device.
Table 4
List of Materials
Components Description
Table 4 presents the components to be used in making the device and their corresponding
descriptions. The components above are the overall components used in creating the prototype.
Assembling
The researchers will assemble the physical prototype and incorporate the parts, such as the camera module, speaker, battery, and Raspberry Pi, into the device's enclosure.
Coding
The researchers will write the code for the device system operation using the chosen
programming language.
Testing
The researchers will check the correctness and function of the components and verify that each component performs its intended functions. They will also assess the device system's overall performance based on the test parameters below.
Table 5
Test Parameters
Auto-start Script ● Boot time: assess the time it takes for the system
to boot up and become operational and
functional.
Table 5 outlines the test parameters for assessing the performance of a sign language
translation device across its technical features. The identified technical features include
Computer Vision, Text-to-speech, and Auto-start Script. Each technical feature is associated with
specific test parameters, and corresponding test rubrics are established to systematically measure each parameter.
Revising
The researchers will analyze the results and identify the problems in the device. They will prioritize the issues based on their severity and focus on resolving these errors. The researchers will also perform a code review to ensure the quality of the code and improve the necessary code segments.
Finalizing
The researchers will perform the final rounds of testing to ensure all issues are resolved. They
will prepare for the deployment of the device and consider potential additional improvements.
Evaluation Procedure
The data gathering procedure will begin with a written request to the Executive Director of
Carlos Hilado Memorial State University – Alijis campus through the Dean of the College of
Computer Studies, which will seek approval to conduct the study on the campus. This approval
letter will authorize the proponents to coordinate with the Program Head of the Bachelor of Science
in Computer Engineering (BSCPE), students of the BSCPE program, different experts in the field
of Computer Engineering, Sign Language users, and/or Deaf and Hard of hearing students within
Bacolod City, and students currently taking up Bachelor of Special Need Education in CHMSU Talisay Campus as respondents of the study. The researchers will conform to the ethical requirements of research (i.e., informed consent, anonymity, privacy, and confidentiality) during the conduct of the study.
To test the quality of the prototype, the respondents will evaluate it using the survey questionnaire adopted by the researchers. The items in the instrument were subjected to validity and reliability testing and approved by a panel of evaluators to determine the quality of the prototype. In addition, the instrument will be administered for a period of one (1) month during the second semester of School Year 2023-2024. Moreover, the researchers will utilize purposive sampling to provide easy and fast data collection for the proponent.
Instrumentation
To measure the quality of the prototype, the proponent will utilize the research instrument
developed by Flaviano L. Urera Jr. in 2019. The instrument’s content underwent validation by a
panel of experts, ensuring its reliability and accuracy in measuring the desired parameters. The
survey questionnaire is an eleven (11)-item instrument divided into three (3) parts, which will employ a Likert-type scale with five responses: 1 = Poor; 2 = Fair; 3 = Satisfactory; 4 = Very Good; and 5 = Excellent. It requires the respondents to evaluate the prototype.
To establish the validity and reliability of the survey questionnaire, the researchers will
utilize a pre-existing research instrument that was created, validated, and used by Mr. Flaviano
L. Urera Jr. and Mr. Francis F. Balahadia in their 2019 study entitled “ICTeachMUPO: An
Evaluation of Information E-Learning Module System for Faculty and Students.” The research
instrument will no longer undergo validity and reliability testing, for it is already standardized.
Data to be Gathered
The respondents of the study will be composed of experts in the field of Computer
Engineering, Sign Language users, and/or Deaf and Hard of Hearing students within Bacolod
City, students currently taking up Bachelor of Special Need Education in CHMSU Talisay Campus, and Computer Engineering students at Carlos Hilado Memorial State University - Alijis Campus for the 2nd Semester, School Year 2023-2024. In addition, the research instrument will be administered online to the students through Google Forms, and face-to-face with the experts in the field, sign language users, and deaf and/or hard-of-hearing individuals through the use of physical survey questionnaires. This will proceed for one (1) month during the second semester of School Year 2023-2024.
This study will utilize the purposive sampling method to select the respondents of the study. However, the accumulated respondents should not be less than thirty (30) to determine the quality of the prototype. Purposive sampling is a non-probability sampling technique where units are deliberately chosen based on specific characteristics needed for the study. This method relies on the researcher's judgment to select individuals, cases, or events that can supply the needed information (Nikolopoulou, 2023).
Data Analysis
The quantitative data gathered from the administration of the instrument will be analyzed to determine the quality of the prototype.
Table 6
Interpretative Scale for Prototype Quality
Mean Scale Interpretation Description
4.21 – 5.00 Excellent The device met all of the specified objectives in the instrument.
3.41 – 4.20 Very Good The device met 75% of the specified objectives in the instrument.
2.61 – 3.40 Satisfactory The device met 50% of the specified objectives in the instrument.
1.81 – 2.60 Fair The device met 25% of the specified objectives in the instrument.
1.0 – 1.80 Poor The device did not meet any of the specified objectives in the instrument.
As presented in Table 6, there are five scales to interpret the quality of the prototype. A mean scale of 4.21 – 5.00 will be interpreted as Excellent; 3.41 – 4.20 as Very Good; 2.61 – 3.40 as Satisfactory; 1.81 – 2.60 as Fair; and 1.0 – 1.80 as Poor.
To determine the quality of the prototype, the mean and standard deviation will be utilized. The analysis will provide valuable insights into the overall performance and consistency of the respondents' ratings, establishing an assessment procedure that will accurately evaluate the prototype's strengths and areas for improvement.
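As an illustrative sketch of this analysis, the snippet below computes the mean and standard deviation of a set of ratings and maps the mean to the interpretative scale in Table 6; the sample ratings are hypothetical.

from statistics import mean, stdev

def interpret(score):
    """Map a mean rating to the interpretative scale in Table 6."""
    if score >= 4.21:
        return "Excellent"
    if score >= 3.41:
        return "Very Good"
    if score >= 2.61:
        return "Satisfactory"
    if score >= 1.81:
        return "Fair"
    return "Poor"

# Hypothetical respondent ratings for one parameter (1-5 Likert items).
ratings = [5, 4, 4, 5, 3, 4]
m, sd = mean(ratings), stdev(ratings)
print(f"Mean = {m:.2f} ({interpret(m)}), SD = {sd:.2f}")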
To measure the quality of the SignMo: Real-time Sign Language to Speech Translation, it will be tested based on the parameters stated in the ISO 25010 criteria, which define eight characteristics for assessing whether a certain prototype is of excellent quality or not. Three out of the eight characteristics will be assessed to ensure that there will be no failures once the device is implemented.
Table 7
Test Parameters for Analysis
Table 7 presents the test parameters for analysis that will be employed in the study. This emphasizes that in order for the prototype to be referred to as functionally suitable, portable, and performance efficient, it should obtain a mean score of at least 3.41 from the survey questionnaire.
Project Cost
Table 8
Project Cost
Description Qty Unit Price Amount (Php)
Raspberry Pi 4B 1 pc 7,989.00 7,989.00
Table 8 presents the components needed for the completion of the device. The estimated total cost of creating the prototype, not counting the printing fees, is presented above.
REFERENCES
Amangeldy, N., Milosz, M., Kudubayeva, S., Kassymova, A., Kalakova, G., & Zhetkenbay, L.
(2023). A Real-Time Dynamic Gesture Variability Recognition Method Based on
Convolutional Neural Networks. Applied Sciences, 13(19), 10799.
Ananthanarayana, T., Srivastava, P., Chintha, A., Santha, A., Landy, B. P., Panaro, J., Webster,
A., Kotecha, N., Sah, S., Sarchet, T., Ptucha, R., & Nwogu, I. (2021). Deep Learning
Methods for Sign Language Translation. ACM Transactions on Accessible Computing.
https://doi.org/10.1145/3477498
Ang, M. C., Taguibao, K. R. C., & Manlises, C. O. (2022, September). Hand Gesture
Recognition for Filipino Sign Language Under Different Backgrounds. In 2022 IEEE
International Conference on Artificial Intelligence in Engineering and Technology
(IICAIET) (pp. 1-6). IEEE.
Antony, A. S., Santhosh, K. B., Salimath, N., Tanmaya, S. H., Ramyapriya, Y., & Suchith, M.
(2022, January). Sign Language Recognition using Sensor and Vision Based Approach.
In 2022 International Conference on Advances in Computing, Communication and
Applied Informatics (ACCAI) (pp. 1-8). IEEE.
Antony, R., Paul, S., & Alex, S. (2020). Sign language translation system. International Journal
of Scientific Research & Engineering Trends, 6.
Barbhuiya, A. A., Karsh, R. K., & Jain, R. (2020). CNN based feature extraction and
classification for sign language. Multimedia Tools and Applications, 80(2), 3051–3069.
https://doi.org/10.1007/s11042-020-09829-y
Chevtchenko, S. F., Vale, R., & Macario, V. (2018). Multi-objective optimization for hand
posture recognition. Expert Systems With Applications, 92, 170–181.
https://doi.org/10.1016/j.eswa.2017.09.046
Cronin, K., Ducusin, R., Sia, J., Tuaño, C., & Rivera, J. The Use of Motion Sensing to
Recognize Filipino Sign Language Movements.
Eser, A. J., Flores, A., & Vallarta, J. C. (2023). A Filipino Sign Language (FSL) Software:
Conversion of FSL to Text and Speech Using Deep Learning. Ascendens Asia Journal
of Multidisciplinary Research Abstracts, 5(2), 78-78.
Al Nuaimy, F. N. H. (2017). Design and implementation of interaction system for the deaf and mute. International Engineering Technology Conference (ICET), 1-6.
Foggetti, F. (2023, April 18). 5 Interesting Facts about Sign Languages. Hand Talk - Learn ASL
Today. https://www.handtalk.me/en/blog/nteresting-facts-about-sign-languages/
Garcia, B., & Viesca, S. (2022). A real-time American Sign Language recognition with convolutional neural networks. Convolutional Neural Networks for Visual Recognition, 225-232.
Goel, P., Sharma, A., Goel, V., & Jain, V. (2022, November). Real-Time Sign Language to Text
and Speech Translation and Hand Gesture Recognition using the LSTM Model. In 2022
3rd International Conference on Issues and Challenges in Intelligent Computing
Techniques (ICICT) (pp. 1-6). IEEE.
Hassan, M. R., et al. (2021). Sign Language Recognition: A Comprehensive Review. IEEE Access,
9, 63289-63321.
Haug, T., & Mann, W. (2018). Understanding the Deaf culture and community. In Cultural and
Language Diversity and the Deaf Experience (pp. 13-24). Routledge.
Herrera, J. A., Muro, A. A., Tuason III, P. L., Alpano, P. V., & Pedrasa, J. R. (2023, June). Millimeter wave radar sensing technology for Filipino Sign Language recognition. In Pervasive Computing Technologies for Healthcare: 16th EAI International Conference, PervasiveHealth 2022, Thessaloniki, Greece, December 12-14, 2022, Proceedings (Vol. 488, p. 274). Springer Nature.
Hommes, R. E., Borash, A. I., Hartwig, K., et al. (2018). American Sign Language interpreters' perceptions of barriers to healthcare communication in Deaf and hard of hearing patients. Journal of Community Health, 43(5), 956-961.
Humphries, T., Kushalnagar, P., Mathur, G., Napoli, D. J., Rathmann, C., & Smith, S. (2019b).
Support for parents of deaf children: Common questions and informed, evidence-based
answers. International Journal of Pediatric Otorhinolaryngology, 118, 134–142.
https://doi.org/10.1016/j.ijporl.2018.12.036
Kumar, S., Rachna, P., Hiremath, R. B., Ramadurgam, V. S., & Shaw, D. K. (2022, December).
Survey on implementation of TinyML for real-time sign language recognition using smart
gloves. In 2022 Fourth International Conference on Emerging Research in Electronics,
Computer Science and Technology (ICERECT) (pp. 1-7). IEEE.
Kushalnagar, R. (2019). Deafness and hearing loss. Web Accessibility: A Foundation for
Research, 35-47.
Dutta, K. K., Satheesh Kumar Raju, K., Anil Kumar, G. S., & Sunny Arokia Swamy, B. (2015). Double handed Indian Sign Language to speech and text. 2015 Third International Conference on Image Information Processing (ICIIP). IEEE.
Kute, S., Chinchole, M. G., & Bansode, R. S. (2020). Sign language to digital voice conversion
device. International Research Journal of Modernization in Engineering Technology and
Science, 7(2), 462-466.
Manikandan, S. A., Vidhya, S. S., Chandragiri, V., Sriram, T. M., & Yuvaraja, K. B. (2022). Design of low cost and efficient sign language interpreter for the speech and hearing impaired.
Montefalcon, M. D., Padilla, J. R., & Llabanes Rodriguez, R. (2021, August). Filipino sign
language recognition using deep learning. In 2021 5th International Conference on E-
Society, E-Education and E-Technology (pp. 219-225).
Murillo, S. M., Villanueva, M. E., Tamayo, K. M., Apolinario, M. V., & Lopez, M. D. (2021, August). Speak the Sign: A real-time sign language to text converter application for basic Filipino words and phrases. https://cajmtcs.centralasianstudies.org/index.php/CAJMTCS/article/view/92/74
Nakamura, K., Yamada, K., & Kawai, Y. (2019). Sign Language Translation and Its Applications.
In Advances in Computer Vision and Pattern Recognition (pp. 21-39). Springer.
Nikolopoulou, K. (2023, June 22). What is purposive sampling? | Definition & Examples.
Scribbr. https://www.scribbr.com/methodology/purposive-
sampling/#:~:text=Purposive%20sampling%20refers%20to%20a,on%20purpose%E2%8
0%9D%20in%20purposive%20sampling.
S, R. A. (2022, December 12). What is Raspberry Pi? Here’s the best guide to get started.
Simplilearn.com. https://www.simplilearn.com/tutorials/programming-tutorial/what-is-
raspberry-
pi#:~:text=The%20Raspberry%20Pi%20is%20a,a%20modified%20version%20of%20Li
nux.
Sampaga, U., Toledo, A., Peret, M. a. L. D., Genodiala, L. M., Aguilar, S. L. C., & Antoja, G. a.
M. (2023). Real-Time Vision-Based Sign Language Bilateral Communication Device for
Signers and Non-Signers using Convolutional Neural Network. World Journal of
Advanced Research and Reviews, 18(3), 934–943.
https://doi.org/10.30574/wjarr.2023.18.3.1169
Sandler, W. (2018). The body as evidence for the nature of language. Frontiers in Psychology, 9.
https://doi.org/10.3389/fpsyg.2018.01782
Saxena, S., Paygude, A., Jain, P., Memon, A., & Naik, V. (2022, July). Hand Gesture
Recognition using YOLO Models for Hearing and Speech Impaired People. In 2022
IEEE Students Conference on Engineering and Systems (SCES) (pp. 1-6). IEEE.
Shanthi, K. G., Manikandan, A., Vidhya, S. S., Chandragiri, V. P. P., Sriram, T. M., & Yuvaraja,
K. B. (2018). Design of low cost and efficient sign language interpreter for the speech
and hearing impaired. ARPN Journal of Engineering and Applied Sciences, 13(10), 3530-
3535.
SignHealth. (2023, August 7). What is the difference between deaf and Deaf? - SignHealth.
https://signhealth.org.uk/resources/learn-about-deafness/deaf-or-deaf/
Solano, C. I. H., Barraza, J. A. V., Avelar, R. S., & Bustos, G. N. (2018). No a la discapacidad: La sordera como minoría lingüística y cultural [Not disability: Deafness as a linguistic and cultural minority]. Revista Nacional e Internacional de Educación Inclusiva, 11, 63-80. https://revistaeducacioninclusiva.es/index.php/REI/article/view/384 (accessed February 3, 2023)
Sumadeep, J., Aparna, V., Ramani, K., Sairam, V., Kumar, O. P., & Krishna, R. L. P. (2019). Hand gesture recognition and voice conversion system for dumb people.
Tan, Y. S., Lim, K. M., & Lee, C. P. (2021). Hand gesture recognition via enhanced densely
connected convolutional neural network. Expert Systems With Applications, 175, 114797.
https://doi.org/10.1016/j.eswa.2021.114797
Tao, W., Leu, M., & Yin, Z. (2018). American Sign Language alphabet recognition using
Convolutional Neural Networks with multiview augmentation and inference fusion.
Engineering Applications of Artificial Intelligence, 76, 202–213.
https://doi.org/10.1016/j.engappai.2018.09.006
Terry, J. (2023). Enablers and barriers for hearing parents with deaf children: Experiences of
parents and workers in Wales, UK. Health Expectations, 26(6), 2666–2683.
https://doi.org/10.1111/hex.13864
Tippannavar, S. S., Shivprasad, N., & Yashwanth, S. D. (2023, February). Smart Gloves—A tool
to assist Individuals with Hearing difficulties. In 2023 International Conference on
Recent Trends in Electronics and Communication (ICRTEC) (pp. 1-5). IEEE.
Trivedi, A., Pant, N., Shah, P., Sonik, S., & Agrawal, S. (2018). Speech to text and text to speech
recognition systems: A review. IOSR Journal of Computer Engineering, 20(2), 36-43.
Wadhawan, A., & Kumar, P. (2020). Deep learning-based sign language recognition system for
static signs. Neural computing and applications, 32, 7957-7968.
Wen, F., Zhang, Z., He, T., & Lee, C. (2021). AI enabled sign language recognition and VR
space bidirectional communication using triboelectric smart glove. Nature
communications, 12(1), 5378.
Woll, B. (2018). Deaf people: Linguistic and social issues. In The Oxford Handbook of Deaf
Studies in Language (pp. 1-17). Oxford University Press.
Working with Raspberry Pi Camera Board - MATLAB & Simulink Example. (n.d.).
https://www.mathworks.com/help/supportpkg/raspberrypiio/ref/working-with-raspberry-
pi-camera-
board.html#:~:text=The%20Raspberry%20Pi%20Camera%20Board,at%2030%20frames
%20per%20second.
World Health Organization: WHO. (2023b, February 27). Deafness and hearing loss.
https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss
APPENDIX A
SURVEY FORM
Instruction: Below are statements that relate to the quality of the SignMo: Real-time Sign Language to Speech Translation based on the ISO 25010 criteria. Using the scale below, kindly rate by checking (✓) the box that corresponds to your response to each of the given statements.
5 Excellent
4 Very Good
3 Satisfactory
2 Fair
1 Poor
Parameters 5 4 3 2 1
A. Functional Suitability
1. The set of functions covers all the
specified tasks and user objectives.
Ang buong sistema ay sumasaklaw sa lahat ng
tinukoy na mga Gawain at mga layunin ng
gumagamit.
2. The function provides the correct results
with the needed degree of precision.
Ang Sistema ay nagbibigay ng tamang resulta
sa kinakailangang antas ng katumpakan.
B. Portability
1. A product or a system can effectively and
efficiently be adapted for different or
evolving software or other operational
usage environments.
Ang produkto o Sistema ay maaaring epektibo
at mahusay na maiaakma para sa iba’t ibang
hardware, software o iba pang mga uri ng
pagpapatakbo o paggamit.
C. Performance Efficiency
Comments/ Recommendations.
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________
APPENDIX B
As we are in the process of creating our thesis paper, one of the requirements is the research instrument to obtain, measure, and analyze the data relevant to our study.
In line with this, the researchers (Flora, Lachica, Velasco) would like to request your consent to use your research instrument. We acknowledge the time and effort you have put into creating this instrument; thus, it will be a great help to us in successfully completing our study. Rest assured, we commit to maintaining the confidentiality of the instrument, and no modifications nor adjustments will be made without your consent.
We are hoping for your kind and positive response. Thank you very much!
Sincerely,
The Researchers
ERA MARIE M. FLORA ESTEBAN L. LACHICA JOHN PATRICK M. VELASCO
Approved By:
APPENDIX C
College of Engineering
Brgy. Alijis, Bacolod City
Dear participant,
Greetings!
The goal of this study entitled “SignMo: Real-time Sign Language to Speech
Translation” is to evaluate the device based on the objectives it seeks to accomplish. The
researchers are requesting your time so that you can provide insightful feedback about the device
that has been presented based on your observations and evaluation. The researchers also ensure
that your personal information is kept confidential and that no disclosure of information will
happen without your permission.
Please review the study’s information so that you can ask them if you have any questions.
Thank you!
Respectfully yours,
The Researchers
Approved by:
ENGR. NEIL GABRIEL ESGRA
Thesis Adviser
Appendix D
Appendix E
Appendix F
Appendix G