
SIGN LANGUAGE RECOGNITION OF BANGLA WORDS AND LETTERS USING HAND GESTURES

Abstract—For the deaf and dumb (D&D) people, sign language is one of the primary and most used methods of communication. All over the world, the D&D community faces daily difficulties while communicating with the general public. Most of the time, they need an interpreter to communicate with others, and an interpreter may not always be available. Bangla Sign Language (BdSL) is a complete and independent natural sign language with its own linguistic characteristics. Our system relies solely on images of bare hands, which allows users to interact with the system in a natural way. We have collected in total 51,800 different hand signs for the 47 BdSL alphabets and 10 digits along with 30 Bengali words. We applied both deep learning and machine learning algorithms in our study and found that the deep learning models achieved comparatively higher accuracy (96.15%) than the machine learning models (94.09%).

Index Terms—Communication, BdSL, Image Augmentation, Convolutional Neural Network (CNN), KPI, MobileNet, VGG16

I. INTRODUCTION

Sign language, popularly known as silent conversation, serves as a visual, gesture-based primary communication medium for hearing-impaired individuals. At present, deafness is one of the major health problems in the world, and sign language is a key means of addressing it. In a world abundant with different spoken languages, Bengali is used in communication by millions of people, yet often finds itself overshadowed. Within this vast linguistic tapestry exists a unique and often overlooked form of communication: Bangla Sign Language (BdSL). Unlike spoken languages, it depends on hand shape, palm orientation, body gesture, and facial expression to express meaning. Although Bengali is one of the most spoken languages, there is minimal research on Bangla Sign Language, particularly for word-level detection.

BdSL possesses complex two-handed gestures along with simultaneous body movements that make it distinct and, at the same time, challenging compared to other sign languages. Some of the key challenges include: firstly, to the best of our knowledge, large-scale diverse datasets for BdSL do not exist; secondly, the currently existing approaches for recognizing BdSL often do not provide high scalability and accuracy. It is also evident that no efficient system has yet been built that can recognize BdSL gestures in real time, which is clearly important for a number of applications, such as translation devices and assistive technology tools.

Deaf people can use sign language to share their feelings and express their emotions. People with various forms of disabilities make up 15% of the global population. Over five percent of the population is deaf, which is over 466 million people. These people face difficulties in interacting with others, especially in the workforce, education, healthcare, and transportation. According to the Department of Social Services, there are 153,776 vocally disabled people, 73,507 hearing disabled people, and 9,625 hearing and visually disabled people in Bangladesh [1]. A digital Bangla Sign Language interpretation system can overcome this communication barrier between vocal-hearing disabled people and the general public. Approximately 71 million people worldwide use this spatial-movement-based language for their primary interactions. There are over 3 million deaf and hard-of-hearing people in Bangladesh [2], where it is considered the second most prevalent type of disability.

The WHO (World Health Organization) has stated that around 466 million people have a hearing disability, which is over 5% of the world's population [3]. According to the National Census 2011, 0.38% of the total population of Bangladesh have speech and hearing disabilities. Approximately 15% of the world's population have some degree of hearing loss, and many of them are children [4]. In 2013, the WHO reported that over 5% of the world's population have hearing loss, jeopardizing their daily life and livelihood.

One possible approach is to apply the Scale-Invariant Feature Transform (SIFT) for robust detection of keypoints and invariant feature descriptors of Bangla sign words and letters [3]. The main objective of our research is therefore to provide sufficient support to deaf-mute people in their daily life and to develop an improved and efficient machine learning model for recognizing Bangla sign words.
II. LITERATURE REVIEW

In one of the studied articles, we found that Bangla Ishara Bhasha Obhidhan (Bangla Sign Language Dictionary, 1994, 1997) and Ishara Bhashay Jogajog (Communication in Sign Language, 2005, 2015) try to bridge the gap in communication. Islam et al. (2018) created "Ishara-Lipi," the first dataset for isolated Bangla characters. Rahaman et al. (2014) presented a real-time computer vision-based BSL recognition system with a vowel recognition accuracy of 98.17%. Most of the existing models have focused on letters or numerical digits, and most of the approaches do not scale to dynamic gestures or larger BSL vocabularies [5]. Previous works concentrated mainly on alphabet and digit recognition, leaving the detection of static-gesture words in BSL largely unexplored.

Research on Bangla Sign Language recognition is relatively scarce compared to other sign languages such as American Sign Language or Indian Sign Language. 2D and 3D tracking sensors have been used for depth information and segmentation, and machine learning models, including but not limited to HMM, CRF, and SVM, have been used for identification, feature extraction from gestures, and gesture recognition. Deep learning approaches, especially CNNs, have become popular owing to their high-level feature extraction capability and higher accuracy. VGG16, VGG19, custom CNN architectures, and other pre-trained models were applied to static and dynamic gestures and showed high accuracy on isolated datasets [15]. Very few works are reported on Bangla Sign Language, and these have also focused on static gestures for alphabets and digits. While some efforts have been directed at translating BSL into text and identifying static hand gestures, research on sentence construction and dynamic gestures remains few and far between. Large and diverse datasets for word-level or sentence-level recognition are also limited [6].

There is clear evidence that various studies on sign language detection have been conducted around the world, most of them based on American Sign Language, Thai Sign Language, and Arabic Sign Language. Methods involving YOLOv3 for the real-time conversion of ASL, and CNNs for detection and speech generation from Arabic Sign Language, have also given promising results. Turning to Bangla sign recognition, there is a method based on SIFT and PCA feature extraction that was used to detect 38 Bangla signs; it converted the images from RGB to HSV color space. BdSL datasets have also been incomplete and lacking in diversity. One of the popular datasets for hand gesture classification is the "Ishara-Lipi" dataset, but it is not suitable for real-time object detection because of its low resolution [3].

Another article we studied states that, unlike widely studied sign languages such as American Sign Language (ASL), BSL possesses complex grammar and limited resources, making detection and translation difficult. There is also a real-time BSL alphabet recognizer built with deep learning which utilized a dataset of 3,000 images categorized into original, binary, and segmented formats (Das & Islam, 2021). Accuracies of 92.5% for Bengali words and alphabets were achieved using the dataset mentioned in (Miah et al., 2022). It was also found that most of the earlier works use small or single datasets; thus, such models do not generalize easily [2].

To our knowledge, there is no video dataset available for BdSL apart from a few image-based datasets. This creates a gap in research on video-based BdSL, which is the central focus of that research work [8]. There now exists a comprehensive dataset named BdSL47 which can be a valuable resource for researchers working on computer vision-based Bangla sign language recognition [14]. Researchers and developers can explore multimodal deep learning architectures to correctly identify Bangla hand signs, because the dataset contains both RGB images and depth keypoints of each sign. The dataset covers Bangla hand signs of digits and alphabets under challenging conditions that reflect real-life scenarios and pose challenges for researchers and developers.

III. METHODOLOGY AND IMPLEMENTATION

Our proposed system is designed to make the life of people with hearing and speaking disabilities much easier. The system's design will enable communities with disabilities to communicate among themselves and with others. The system is efficient enough to allow its users to communicate via Bengali sign language. We have conducted our research according to the procedure illustrated in Figure 1.

Fig. 1: Methodology of our proposed system

A. Dataset Collection

Acquiring the data has been a crucial part of this work, since there are not enough datasets available for use, and the task was not easy due to the large number of alphabets. At first, we collected two datasets consisting of hand images which express sign words and letters. The first one (dataset-1) is BdSL47, which contains 47,000 RGB input images of 47 signs (10 digits, 37 letters) of Bangla Sign Language [14]. The second (dataset-2) consists of 1,200 images categorized into 30 different classes. Each class represents a distinct sign in Bangla Sign Language (BSL), with each class containing 40 images.
The images in the dataset are in RGB color space. Both datasets are available as open-source resources.

[Fig. 2: Overview of our merged dataset — a grid of representative hand-sign images labeled with the corresponding Bangla letters (অ/য় through হ, with combined classes such as ই/ঈ, উ/ঊ, জ/য, ণ/ন, শ/ষ/স) and digits]

After that, we merged both datasets to increase the robustness of the model. The merged dataset (dataset-1+2) is intended to have balanced gesture classes and to better represent real-world variations. This dataset has been used primarily to evaluate various deep learning models as well as different machine learning models.

B. Image Preprocessing

In the next step of our research, we preprocessed all the images obtained after merging the datasets. All input images were resized and normalized to the range 0 to 1 to ensure consistency and help the machine learning and deep learning models learn correctly and effectively. We also made some adjustments to the brightness of the images to enhance the robustness of the models. In general, we applied fine-tuning only when the images in our dataset were not drastically different in context from the dataset on which the pre-trained model was originally trained. After that, as part of image augmentation, we converted the images into four distinct types:

i. Grayscale: We converted all images to grayscale to remove color dependencies and reduce computational complexity. The images were resized to 128×128 pixels.
ii. Gaussian Blur: We applied a slight blur using a 5×5 kernel to introduce small variations in image texture.
iii. High Contrast: We increased the brightness of the corresponding images (alpha=1.2, beta=30).
iv. Low Contrast: We also decreased the brightness of the hand-gesture images (alpha=0.8, beta=-30).

Image augmentation is the process of creating new training examples from existing ones. The samples in Figure 3 illustrate some of the dataset images we used for Bangla Sign Language recognition, consisting of various hand gestures. Each row represents a different gesture, while each column shows a different augmentation technique applied to the original image.

The first picture of every row, (a) in Figure 3, shows the original hand-gesture image of the sign word, i.e., the colored image. After collecting such samples, we enhanced our dataset using four key augmentation techniques, placed sequentially as (b) through (e) of every row in Figure 3: (b) grayscale conversion, (c) Gaussian blur, (d) high contrast, and (e) low contrast. Such preprocessing improves the generalization of the model by introducing variations in lighting, texture, and noise conditions.

This diverse dataset ensures robustness in the recognition of gestures under varying real-world conditions, making the model more effective for practical applications. Besides, the preprocessing steps greatly helped our system learn the hand gestures quickly.
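The resize-and-normalize step and the four augmentation variants above map directly onto standard OpenCV calls. The following is a minimal sketch of that pipeline, assuming OpenCV is used; the function names and loading convention are ours, not the paper's:

```python
import cv2
import numpy as np

IMG_SIZE = 128  # images are resized to 128x128 pixels

def preprocess(path):
    """Load an image, resize it, and scale pixel values to [0, 1]."""
    img = cv2.imread(path)                       # BGR image
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
    return img.astype(np.float32) / 255.0

def augment(img):
    """Return the four augmented variants described above."""
    img_u8 = (img * 255).astype(np.uint8)
    gray = cv2.cvtColor(img_u8, cv2.COLOR_BGR2GRAY)          # (b) grayscale
    blur = cv2.GaussianBlur(img_u8, (5, 5), 0)               # (c) 5x5 Gaussian blur
    high = cv2.convertScaleAbs(img_u8, alpha=1.2, beta=30)   # (d) high contrast
    low  = cv2.convertScaleAbs(img_u8, alpha=0.8, beta=-30)  # (e) low contrast
    return gray, blur, high, low
```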
[Fig. 3: Preprocessed images of some Bangla sign words in the dataset: 1. Aaj, 2. Bagh, 3. Basha, 4. Biyog, 5. Bondhu, 6. Chamra. Each row shows panels (a)-(e): the original image followed by the four augmented variants.]
C. Dataset Organization

Preprocessed images were saved in a structured directory hierarchy based on their class labels. This preprocessing enriches the dataset by incorporating slight variations while retaining the essential features of the images. Each class of static hand-gesture images was stored in a separate subdirectory, with the directory name representing the class label (e.g., "Aaj", "Baagh", "Basha", "Chamra"). This structure facilitated efficient data loading and class identification during the training process.
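A class-per-subdirectory layout like this is the convention that common data-loading utilities expect, so labels can be inferred from folder names. A sketch under that assumption (the `dataset/` root and the Keras loader are illustrative choices, not details from the paper):

```python
# dataset/
#   Aaj/     aaj_001.png, aaj_002.png, ...
#   Baagh/   baagh_001.png, ...
#   Basha/   ...
#   Chamra/  ...
import tensorflow as tf

# Infers one class per subdirectory and yields (image, label) batches.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset",
    image_size=(128, 128),   # matches the preprocessing above
    batch_size=32,
    label_mode="int",
)
class_names = train_ds.class_names  # e.g. ["Aaj", "Baagh", "Basha", ...]
```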
D. Developing Deep Learning Model

Accordingly, in the next step of our research work, we focused on the development of a deep learning model for recognizing static hand gestures that works well in the context of Bengali Sign Language (BSL). The model architecture was devised around CNNs, the most fitting framework for image-based tasks. The augmentation techniques (rotation, flipping, scaling, and contrast adjustment) made the models more robust.

A CNN architecture is formed by a stack of distinct layers that transform the input volume into an output volume (for instance, holding the class scores) through differentiable functions, as illustrated in Figure 4.

Fig. 4: CNN architecture

The hyperparameters that were tuned include the learning rate, batch size, and number of epochs, in order to obtain optimal performance. To improve generalization, Dropout and L2 regularization were incorporated. Model performance was evaluated using accuracy, recall, and F1-score for reliability and effectiveness.
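The paper does not specify the exact layer stack, so the following Keras model is only a hypothetical instance of the architecture described: stacked convolution and pooling layers for 128×128 inputs, with Dropout and L2 regularization and a softmax output holding the class scores. Filter counts, the dropout rate, and the L2 coefficient are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

NUM_CLASSES = 77  # 47 alphabet/digit signs + 30 word signs in the merged dataset

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Rescaling(1.0 / 255),                      # normalize to [0, 1]
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2 regularization
    layers.Dropout(0.5),                                     # Dropout regularization
    layers.Dense(NUM_CLASSES, activation="softmax"),         # class scores
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```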
E. Proper Training of the Model

Finally, we divided our dataset into two parts: one for training and the other for testing. We maintained a train-test split of 80:20 to ensure unbiased evaluation. Preprocessing was critical in maintaining data consistency; all input images were resized to 128×128 pixels so that the models could learn properly. Grayscale conversion, which reduces computation, was done without losing the key information about the gestures. Training continued for multiple epochs, while the performance of the model on the training and validation datasets was monitored. Early stopping was implemented to halt training when validation performance plateaued, to prevent overfitting. Model checkpointing ensured that the model with the best validation accuracy was saved for the final evaluation. Finally, the testing phase checked generalization capability, and the performance metrics indicated that the deep learning models outperform conventional machine learning models in recognizing BdSL static gestures.
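Hypothetically, the training procedure described here — an 80:20 split, early stopping on a validation plateau, and a checkpoint keeping the best-validation model — could be wired up as follows (patience values, the file name, and the validation fraction are assumptions):

```python
from sklearn.model_selection import train_test_split
import tensorflow as tf

# X: image array, y: integer labels; 80:20 train-test split as in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42)

callbacks = [
    # Stop when validation accuracy plateaus, restoring the best weights.
    tf.keras.callbacks.EarlyStopping(
        monitor="val_accuracy", patience=5, restore_best_weights=True),
    # Save the model with the best validation accuracy seen so far.
    tf.keras.callbacks.ModelCheckpoint(
        "best_model.keras", monitor="val_accuracy", save_best_only=True),
]

history = model.fit(X_train, y_train,
                    validation_split=0.1,   # held-out validation for the callbacks
                    epochs=50, batch_size=32,
                    callbacks=callbacks)
```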

F. Evaluation of the Model

After training the machine learning and deep learning models, we performed extensive testing to measure how well the models could distinguish Bengali Sign Language gestures. The testing used performance metrics that included accuracy, recall, F1-score, and confusion-matrix assessment.

To ensure the reliability of our models, we tested them on a separate dataset that was not used during training. The evaluation procedure entailed comparing the predicted gesture labels with the true labels in order to evaluate classification performance. Additionally, we used cross-validation techniques to reduce overfitting and improve generalization across different samples. The evaluation's findings informed subsequent improvements to the model architecture and hyperparameter tuning, aiming at peak performance for real-time Bengali Sign Language identification.
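The reported metrics can be computed in a few lines with scikit-learn; a minimal sketch, reusing names from the earlier examples:

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# Predict class indices for the held-out test set.
y_pred = np.argmax(model.predict(X_test), axis=1)

# Accuracy, plus per-class recall and F1-score, in one report.
print(classification_report(y_test, y_pred, target_names=class_names))

# Confusion matrix: rows are true classes, columns are predictions.
cm = confusion_matrix(y_test, y_pred)
```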

IV. RESULTS AND ANALYSIS

The system has been evaluated with 9,640 test images in 77 classes which were not used in training. We calculated the metrics (Key Performance Indicators) Recall, F1-score, and Accuracy to assess the performance of our hand-gesture detection model.

TABLE I: Evaluation metrics of dataset-1 using deep learning algorithms

Algorithm Name   Test Accuracy (%)   F1 score (%)   Recall (%)
CNN              98.09               98.09          98.09
MobileNet v2     96.57               96.56          96.57
VGG16            94.64               94.64          94.64

In our experiment, we implemented the various deep learning models listed in Table I. It was found that the CNN (convolutional neural network) model provides the highest test accuracy (98.09%) along with the highest F1 score and recall among the deep learning algorithms on our first dataset, while VGG16 provided the lowest accuracy (94.64%).

TABLE II: Evaluation metrics of dataset-2 using deep learning algorithms

Algorithm Name   Test Accuracy (%)   F1 score (%)   Recall (%)
CNN              77.92               77.96          77.92
MobileNet v2     94.58               94.62          94.58
VGG16            97.08               97.07          97.08

On our second dataset, VGG16 performed much better than the other deep learning models, as shown in Table II. The highest accuracy obtained here is 97.08%, whereas CNN provided the lowest accuracy, 77.92%.

TABLE III: Evaluation metrics of merged dataset-1 & 2 using deep learning algorithms

Algorithm Name   Test Accuracy (%)   F1 score (%)   Recall (%)
CNN              98.44               98.44          98.44
MobileNet v2     94.2                94.17          94.2
VGG16            97.08               89.67          89.64

After merging both datasets, we trained our system using the previously mentioned deep learning algorithms. The KPI values obtained in this case are given in Table III. CNN was found to perform better than all the other deep learning algorithms we evaluated, giving the highest test accuracy (98.44%) along with the highest F1 score and recall. In addition, MobileNet v2 has the lowest test accuracy (94.2%), and the overall lowest F1 score and recall were obtained from VGG16: 89.67% and 89.64%, respectively. We have achieved better results compared to some existing research works [2] [15].

TABLE IV: KPI values of the merged dataset using machine learning algorithms

Algorithm Name   Test Accuracy (%)   F1 score (%)   Recall (%)
KNN              96.05               96.01          96.05
RandomForest     96.24               96.19          96.24

Similarly, Table IV shows the values we obtained in our experiment using machine learning algorithms. In this case, RandomForest gave comparatively better results than KNN according to our observation.
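The paper does not state how the images were featurized for these classifiers or which hyperparameters were used; one plausible baseline flattens the preprocessed images into vectors and reports the same three KPIs (all settings below are assumptions):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, f1_score, recall_score

# Flatten each preprocessed image into a single feature vector.
Xf_train = X_train.reshape(len(X_train), -1)
Xf_test = X_test.reshape(len(X_test), -1)

for clf in (KNeighborsClassifier(n_neighbors=5),
            RandomForestClassifier(n_estimators=200, random_state=42)):
    clf.fit(Xf_train, y_train)
    pred = clf.predict(Xf_test)
    print(type(clf).__name__,
          accuracy_score(y_test, pred),
          f1_score(y_test, pred, average="weighted"),
          recall_score(y_test, pred, average="weighted"))
```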
[Fig. 5: Accuracy results on the augmented-image training set]

It is evident from Figure 5 that our model's performance improves as the number of epochs increases. The training accuracy (blue curve) rises steadily and converges near 90%, while the validation accuracy (orange curve) stabilizes above 90%. This indicates that the model generalizes effectively to unseen data with prolonged training.

[Fig. 6: Model loss on the augmented-image training set]

Figure 6 depicts the loss of the model we built for Bengali sign language recognition. Both the blue line (training loss) and the orange line (validation loss) trend downwards, which implies the model is learning effectively. The loss starts high, drops quickly in the early epochs, and then gradually settles. The validation loss is lower than the training loss, which implies good generalization with minimal overfitting.

Our merged dataset improved robustness across all models while keeping overfitting minimized and generalization maximized, especially for the DL models. The DL models consistently did better than the ML models with respect to accuracy, F1 score, and recall, whereas the ML models were more time-efficient in training and had lower computational requirements, making them suitable for resource-constrained environments.

V. CONCLUSION

Humans with physical disabilities of hearing and speaking face problems in their daily life regarding communication. This paper aims to work for the welfare of people with physical impairment. Our research has demonstrated the potential of deep learning models to effectively recognize Bangla sign language for individual words using hand gestures. Our current work implements deep learning techniques via CNNs to give promising results in the accurate classification of a large vocabulary of Bangla signs. A limitation of our proposed system is that detection accuracy may fall for complex backgrounds; we plan to address this issue in future work. We consider our system an important contribution towards creating a more accessible and equitable society.

ACKNOWLEDGEMENT

We acknowledge the persons who have contributed to building the dataset. We give great thanks to S M Rayeed, Sidratul Tamzida Tuba, Hasan Mahmud, Mumtahin Habib Ullah Mazumder, Md. Saddam Hossain, Md. Kamrul Hasan, and Muhammad Ibrahim for their patience in capturing the images and their heartwarming support in completing the task. We also want to thank Abir Munna for his dedication in building the dataset that contains 40 words with 1,200 images.

REFERENCES

[1] K. K. Podder et al., "Bangla Sign Language (BdSL) Alphabets and Numerals Classification Using a Deep Learning Model," Sensors, vol. 22, no. 2, p. 574, Jan. 2022, doi: https://doi.org/10.3390/s22020574.
[2] S. Siddique, S. Islam, E. E. Neon, T. Sabbir, I. T. Naheen, and R. Khan, "Deep Learning-based Bangla Sign Language Detection with an Edge Device," Intelligent Systems with Applications, vol. 18, p. 200224, May 2023, doi: https://doi.org/10.1016/j.iswa.2023.200224.
[3] D. Talukder and F. Jahara, "Real-Time Bangla Sign Language Detection with Sentence and Speech Generation," Dec. 2020, doi: https://doi.org/10.1109/iccit51783.2020.9392693.
[4] P. P. Urmee, M. A. Al Mashud, J. Akter, A. S. M. M. Jameel, and S. Islam, "Real-time Bangla sign language detection using Xception model with augmented dataset," in 2019 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE), Nov. 2019, pp. 1-5. https://ieeexplore.ieee.org/abstract/document/9019934.
[5] K. A. Lipi, S. F. K. Adrita, Z. F. Tunny, A. H. Munna, and A. Kabir, "Static-gesture word recognition in Bangla sign language using convolutional neural network," TELKOMNIKA (Telecommunication Computing Electronics and Control), vol. 20, no. 5, p. 1109, Oct. 2022, doi: https://doi.org/10.12928/telkomnika.v20i5.24096.
[6] S. A. Shurid et al., "Bangla Sign Language Recognition and Sentence Building Using Deep Learning," in 2020 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE), Dec. 2020, doi: https://doi.org/10.1109/csde50874.2020.9411523.
[7] M. A. Rahaman, M. Jasim, Md. H. Ali, and Md. Hasanuzzaman, "Bangla language modeling algorithm for automatic recognition of hand-sign-spelled Bangla sign language," Frontiers of Computer Science, vol. 14, no. 3, Dec. 2019, doi: https://doi.org/10.1007/s11704-018-7253-3.
[8] A. Sams, A. H. Akash, and M. Rahman, "SignBD-Word: Video-Based Bangla Word-Level Sign Language and Pose Translation," Jul. 2023, doi: https://doi.org/10.1109/icccnt56998.2023.10306914.
[9] M. Bin Munir, F. R. Alam, S. Ishrak, S. Hussain, Md. Shalahuddin, and M. N. Islam, "A Machine Learning Based Sign Language Interpretation System for Communication with Deaf-mute People," in Proceedings of the XXI International Conference on Human Computer Interaction, Aug. 2021, doi: https://doi.org/10.1145/3471391.3471422.
[10] A. M. Rafi, N. Nawal, Nur, L. Nima, C. Shahnaz, and S. A. Fattah, "Image-based Bengali Sign Language Alphabet Recognition for Deaf and Dumb Community," Oct. 2019, doi: https://doi.org/10.1109/ghtc46095.2019.9033031.
[11] M. A. Uddin and S. A. Chowdhury, "Hand sign language recognition for Bangla alphabet using Support Vector Machine," IEEE Xplore, 2016. https://ieeexplore.ieee.org/document/7856479.
[12] A. A. J. Jim, I. Rafi, Md. Z. Akon, U. Biswas, and A.-A. Nahid, "KU-BdSL: An open dataset for Bengali sign language recognition," Data in Brief, vol. 51, p. 109797, Nov. 2023, doi: https://doi.org/10.1016/j.dib.2023.109797.
[13] K. Tiku, J. Maloo, A. Ramesh, and I. R., "Real-time Conversion of Sign Language to Text and Speech," in 2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA), Jul. 2020, doi: https://doi.org/10.1109/icirca48905.2020.9182877.
[14] S. M. Rayeed, "BdSL47: A Complete Depth-based Bangla Sign Alphabet and Digit Dataset," Mendeley Data, vol. 3, Nov. 2023, doi: https://doi.org/10.17632/pbb3w3f92y.3.
[15] M. S. Islalm, M. M. Rahman, Md. H. Rahman, M. Arifuzzaman, R. Sassi, and M. Aktaruzzaman, "Recognition Bangla Sign Language using Convolutional Neural Network," in 2019 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), Sep. 2019, doi: https://doi.org/10.1109/3ict.2019.8910301.
[16] M. A. Rahaman, M. Jasim, Md. H. Ali, and Md. Hasanuzzaman, "Bangla language modeling algorithm for automatic recognition of hand-sign-spelled Bangla sign language," Frontiers of Computer Science, vol. 14, no. 3, Dec. 2019, doi: https://doi.org/10.1007/s11704-018-7253-3.
[17] P. Roy, S. M. M. Uddin, Md. A. Rahman, Md. M. Rahman, Md. S. Alam, and Md. S. Rashid Mahin, "Bangla Sign Language Conversation Interpreter Using Image Processing," IEEE Xplore, May 2019. https://ieeexplore.ieee.org/abstract/document/8934614 (accessed Nov. 22, 2022).
[18] C. M. Jin, Z. Omar, and M. H. Jaward, "A mobile application of American Sign Language translation via image processing algorithms," IEEE Xplore, May 2016. https://ieeexplore.ieee.org/abstract/document/7519386.
