Abstract
The recognition and detection of traffic signs are crucial for the development of intelligent
transportation systems, particularly in regions with unique and diverse sign designs like Ethiopia.
This study presents a novel approach to Ethiopian traffic sign identification using transfer
learning with reduced trainable parameters. By leveraging pre-trained deep learning models, we
fine-tune the networks specifically for the Ethiopian context, thus mitigating the need for
extensive computational resources and large datasets typically required for training from scratch.
Our methodology involves the use of transfer learning to adapt well-established convolutional
neural networks (CNNs) to recognize Ethiopian traffic signs accurately. By freezing the majority
of the network layers and only fine-tuning a small number of layers, we significantly reduce the
number of trainable parameters. This approach not only enhances computational efficiency but
also improves the model’s generalization capabilities, making it feasible for deployment in
resource-constrained environments.
We conduct extensive experiments to evaluate the performance of our model on a curated dataset
of Ethiopian traffic signs. The results demonstrate that our approach achieves high accuracy and
robustness, outperforming traditional training methods in both efficiency and effectiveness. This
research provides a promising solution for implementing advanced traffic sign recognition
systems in Ethiopia, contributing to safer and more reliable transportation networks.
Keywords: Ethiopian traffic sign recognition, transfer learning, deep learning, reduced
trainable parameters, intelligent transportation systems.
Introduction:
Traffic sign recognition plays a crucial role in enhancing road safety and traffic management
systems. With the rapid expansion of road networks and urban development in Ethiopia, there is
a growing need for robust and efficient systems to categorize Ethiopian traffic signs. However,
building accurate traffic sign recognition systems for Ethiopia faces challenges, including the
scarcity of annotated datasets tailored to the country's specific signage and regulatory
requirements.
In recent years, deep learning techniques, particularly convolutional neural networks (CNNs),
have demonstrated remarkable performance in various computer vision tasks, including object
recognition and classification. Transfer learning, a subfield of deep learning, has emerged as a
powerful strategy for adapting pre-trained models to new domains with limited labeled data. By
leveraging knowledge learned from large-scale datasets, transfer learning enables the
development of effective models even when training data is scarce.
In this study, we propose a novel approach for the categorization of Ethiopian traffic signs using
transfer learning techniques. Our goal is to develop a robust and accurate traffic sign recognition
system tailored to the Ethiopian context. The utilization of transfer learning allows us to leverage
pre-trained CNN models, which have been trained on extensive datasets such as ImageNet, to
bootstrap the learning process for Ethiopian traffic signs. The specific objectives of this study are to:
1. Explore and adapt state-of-the-art CNN architectures, including ResNet, VGG, and Inception, for
the categorization of Ethiopian traffic signs.
2. Investigate deep learning methodologies to fine-tune pre-trained CNN models on a limited
dataset of Ethiopian traffic signs.
3. Evaluate the performance of the proposed approach in terms of classification accuracy,
robustness, and efficiency.
By employing transfer learning techniques, we aim to overcome the challenges posed by the lack
of annotated data specific to Ethiopian traffic signs. The proposed approach has the potential to
contribute to the development of intelligent transportation systems tailored to the Ethiopian
context, ultimately enhancing road safety and traffic efficiency across the country.
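To make the fine-tuning strategy concrete, the following minimal sketch (assuming a ResNet50 backbone, 224x224 RGB inputs, and a placeholder class count, none of which are the final configuration of this work) freezes the ImageNet-pretrained convolutional base and trains only a small classification head; this is where the reduction in trainable parameters comes from.

```python
# Minimal transfer learning sketch: frozen pre-trained base, small trainable head.
# Assumptions: ResNet50 backbone, 224x224 RGB inputs, NUM_CLASSES placeholder.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 43  # placeholder; replace with the actual number of Ethiopian sign classes

# Load an ImageNet-pretrained backbone without its classification head.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
base.trainable = False  # freeze all backbone layers; they receive no gradient updates

# Attach a small head; only these layers contribute trainable parameters.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # reports trainable vs. non-trainable parameter counts
```

In a typical second stage, the last few backbone blocks can be unfrozen and the model re-compiled with a much lower learning rate for further fine-tuning.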
Traditional approaches to traffic sign recognition often rely on handcrafted features and classical
machine learning algorithms, which may struggle to generalize across diverse sign types and
environmental conditions. In contrast, deep learning techniques, particularly convolutional neural
networks (CNNs), automatically learn hierarchical representations directly from raw image data.
By exploiting this ability to capture complex patterns and features, deep learning offers a
promising avenue for addressing the diversity, complexity, and unique characteristics of Ethiopian
traffic signage.
By combining deep learning with transfer learning, this research aims to advance the state of the
art in Ethiopian traffic sign recognition, overcoming the limitations of traditional methods and
contributing to safer, more efficient, and more adaptive transportation infrastructure across the
country.
Related work
13r. The experimental method involves building a CNN model based on a modified LeNet
architecture with four convolutional layers, two max-pooling layers, and two dense layers.
The model is trained and tested on the German Traffic Sign Recognition Benchmark
(GTSRB) dataset, and parameter tuning over different combinations of learning rate and
number of epochs is performed to improve its performance. The tuned model is then used to
classify images captured by a camera in real time. Graphs of the model's accuracy and loss
before and after parameter tuning are presented, and high probability scores are reported
when classifying traffic sign images from the camera. The results show that the proposed
model achieved 95% accuracy with an optimal number of epochs.
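A hedged sketch of what such a modified LeNet-style network might look like in Keras is given below; the filter counts, kernel sizes, and 32x32 input resolution are assumptions, since the cited work only specifies the layer counts (four convolutional, two max-pooling, two dense).

```python
# Illustrative modified LeNet-style CNN: four conv layers, two max-pooling
# layers, two dense layers. Filter sizes and 32x32 input are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

def build_modified_lenet(num_classes=43):  # GTSRB has 43 sign classes
    return keras.Sequential([
        keras.Input(shape=(32, 32, 3)),
        layers.Conv2D(32, 5, activation="relu", padding="same"),
        layers.Conv2D(32, 5, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_modified_lenet()
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```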
14r. This paper proposes Traffic Sign Yolo (TS-Yolo), a convolutional neural network model
designed to improve the detection and recognition accuracy of traffic signs, especially under
low visibility and severely restricted viewing conditions. The experimental results showed
that, using the YoloV5 dataset with augmentation, precision reached 71.92, an increase of
34.56 over the unaugmented data, and mean average precision (mAP@0.5) reached 80.05, an
increase of 33.11 over the unaugmented data. When MixConv and AFF were added to the
TS-Yolo model, precision rose to 74.53 (2.61 higher than with data augmentation alone) and
mAP@0.5 rose to 83.73 (3.68 higher than with augmentation alone). Overall, the performance
of the proposed method was competitive with recent traffic sign detection approaches.
15r. A camera-based traffic sign recognition system was developed to assist drivers and
self-driving vehicles. After being trained on a large amount of data, including synthetic traffic
signs and street-view photographs, the proposed multi-task convolutional neural network
(CNN) refines detections and classifies them into their precise categories. A post-processing
step analyses all input frames before a recognition decision is made. The efficacy of the
proposed methodology was demonstrated through experimentation.
16r. In this study, traffic sign recognition and classification are implemented using transfer
learning. Compared to state-of-the-art architectures such as VGG, AlexNet, and some ResNet
models, the focus is on smaller CNN architectures. Three pre-trained models were examined:
InceptionV3, ResNet50, and Xception, and the results from each were compared in terms of
efficiency and accuracy. The transfer learning models were trained on the German Traffic
Sign Recognition Benchmark (GTSRB) dataset. Of the three, the highest accuracy, 97.15%,
was achieved with the InceptionV3 architecture.
18r. The purpose of this research is to examine how deep transfer learning can use
knowledge gathered from an established, standard traffic sign dataset from one country or
region to improve recognition of traffic signs from another country or region within a deep
learning framework. This helps users avoid the difficulty of data gathering and labelling by
exploiting an established dataset from another location to aid recognition of a target dataset.
Using the VGG-16 deep transfer learning approach, an accuracy of 95.61% was achieved.
The analysis shows that transferring knowledge between deep learning classifiers can yield
higher traffic sign recognition accuracy than a deep learning model trained without such
transfer.
19r. In this paper, an experiment is implemented to evaluate the performance of the latest
version of YOLOv5 on the authors' own dataset for Traffic Sign Recognition (TSR), showing
how well this deep learning model for visual object recognition suits TSR through a
comprehensive comparison with SSD (single shot multibox detector). In the experimental
results, YOLOv5 achieves 97.70% mAP@0.5 over all classes, while SSD obtains 90.14% on
the same metric; YOLOv5 also outperforms SSD in recognition speed.
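For readers unfamiliar with the metric, mAP@0.5 averages per-class average precision over all classes, counting a detection as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. The helper below is a generic illustration of the IoU computation, not code from the cited experiment.

```python
# Generic IoU computation for axis-aligned boxes in (x1, y1, x2, y2) format.
# A detection counts toward mAP@0.5 when its IoU with a matching ground-truth
# box is >= 0.5 (illustrative helper, not the cited authors' code).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143, below the 0.5 threshold
```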
20r. This paper presents a deep-learning-based autonomous scheme for the recognition of traffic
signs in India. Automatic traffic sign detection and recognition is achieved through end-to-end
learning based on a Refined Mask R-CNN (RM R-CNN). The proposed approach was evaluated
on a new dataset of 6480 images containing 7056 instances of Indian traffic signs grouped into
87 categories, including highly challenging categories not reported in previous works; the
images for training and testing were captured in real time on Indian roads. Several refinements
to the Mask R-CNN model are presented, both in architecture and in data augmentation. The
evaluation results indicate an error rate below 3%. Furthermore, RM R-CNN was compared with
conventional deep neural network architectures such as Faster R-CNN and Mask R-CNN, and
the proposed model achieved a precision of 97.08%, higher than both.
21r. Five efficient transfer learning models available in the Keras libraries are explored (the
Xception network, InceptionV3, ResNet50, VGG-16, and EfficientNetB0) for the detection,
recognition, and classification of GTSRB traffic signs. The main focus of the paper is to apply
and compare these five recent and successful deep learning strategies to determine which model
stands out in feature extraction and classification of the available traffic sign data. Accuracy,
loss, training time, and the number of model parameters are considered in grading the models.
The Xception network proved the most successful in terms of accuracy (95.04%), minimum loss
value (0.2311), and affordable speed and training time, whereas ResNet50 and EfficientNetB0
obtained good accuracy with fewer model parameters for traffic sign detection, recognition, and
classification.
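Since model size is one of the grading criteria above, the parameter counts of these Keras Applications backbones can be inspected directly, as in the hedged sketch below (classification heads are excluded and default input sizes are assumed; this is an illustration, not the cited authors' evaluation code).

```python
# Compare parameter counts of the Keras Applications backbones mentioned above.
# include_top=False drops the ImageNet classification head; weights=None avoids downloads.
from tensorflow.keras import applications

backbones = {
    "Xception": applications.Xception,
    "InceptionV3": applications.InceptionV3,
    "ResNet50": applications.ResNet50,
    "VGG16": applications.VGG16,
    "EfficientNetB0": applications.EfficientNetB0,
}

for name, ctor in backbones.items():
    model = ctor(include_top=False, weights=None)
    print(f"{name:15s} {model.count_params():>12,d} parameters")
```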
23r. This study exploits and investigates the effectiveness of transfer learning feature-combining
models for classifying traffic signs. The images were gathered from the GTSRB dataset and cover
10 types of traffic signs: warning, stop, repair, no entry, traffic light, turn right, speed limit
(80 km/h), speed limit (50 km/h), speed limit (60 km/h), and turn left. A total of 7000 images were
split into a 70:30 train-test ratio using stratified sampling. VGG16 and VGG19 transfer-learning
feature extractors were combined with two classifiers, Random Forest (RF) and a neural network,
giving six different pipelines that were trained and tested. The best pipeline was VGG16+VGG19
features with the RF classifier, which yielded an average classification accuracy of 0.9838. The
findings show that the feature-combining model classifies traffic signs considerably better than a
single transfer-learning feature model, which is useful for traffic sign classification applications
such as ADAS.
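A hedged sketch of such a feature-combining pipeline is shown below: pooled features from frozen VGG16 and VGG19 backbones are concatenated and fed to a Random Forest. The 224x224 input size, the forest size, and the placeholder arrays are illustrative assumptions, not the cited paper's exact setup.

```python
# Illustrative feature-combining pipeline: concatenate pooled VGG16 and VGG19
# features, then classify with a Random Forest (sizes are assumptions).
import numpy as np
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications.vgg16 import preprocess_input as prep16
from tensorflow.keras.applications.vgg19 import preprocess_input as prep19
from sklearn.ensemble import RandomForestClassifier

vgg16 = VGG16(include_top=False, weights="imagenet", pooling="avg")
vgg19 = VGG19(include_top=False, weights="imagenet", pooling="avg")

def combined_features(images):
    """images: float array of shape (N, 224, 224, 3) with values in [0, 255]."""
    f16 = vgg16.predict(prep16(images.copy()), verbose=0)  # (N, 512)
    f19 = vgg19.predict(prep19(images.copy()), verbose=0)  # (N, 512)
    return np.concatenate([f16, f19], axis=1)              # (N, 1024)

# x_train, y_train, x_test below are placeholders for the stratified 70:30 split.
# clf = RandomForestClassifier(n_estimators=200).fit(combined_features(x_train), y_train)
# preds = clf.predict(combined_features(x_test))
```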
24r. The proposed approach is based on a review of different Traffic Sign Detection (TSD) and
Traffic Sign Classification (TSC) methods, aiming to choose the best in terms of accuracy and
processing time. The proposed methodology therefore combines the Haar cascade technique with
a deep CNN classifier. The developed TSC model is trained on the GTSRB dataset and then tested
on various categories of road signs, reaching a testing accuracy of 98.56%. To further improve
classification performance, a new attention-based deep convolutional neural network is proposed,
which outperforms other traffic sign classification studies with a testing accuracy of 99.91% and
an F1-measure of 99%. The developed TSR system is evaluated and validated on a Raspberry Pi 4
board, and the experimental results confirm the reliable performance of the suggested approach.
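The detection-then-classification split described above can be prototyped roughly as follows. The cascade file name, the model file name, and the 48x48 classifier input size are hypothetical placeholders, and a trained Keras classifier is assumed to exist; this is a sketch of the general technique, not the cited system.

```python
# Rough prototype of Haar-cascade detection followed by CNN classification.
# "sign_cascade.xml", "tsc_model.h5", and the 48x48 input size are hypothetical
# placeholders; `classifier` is assumed to be an already-trained Keras model.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

cascade = cv2.CascadeClassifier("sign_cascade.xml")  # hypothetical cascade file
classifier = load_model("tsc_model.h5")              # hypothetical trained classifier

def detect_and_classify(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    rois = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in rois:
        crop = cv2.resize(frame_bgr[y:y + h, x:x + w], (48, 48))
        probs = classifier.predict(crop[np.newaxis] / 255.0, verbose=0)[0]
        results.append(((x, y, w, h), int(np.argmax(probs)), float(np.max(probs))))
    return results  # list of (bounding box, predicted class id, confidence)
```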
27r. The purpose of this work is to develop a real-time road sign recognition system for drivers
and to increase its classification accuracy using deep learning methods. A convolutional neural
network (CNN) is used to build the system for identifying and recognizing road signs, and the
proposed system can recognize road sign images in real time. A model is trained on 43 different
road sign classes using existing datasets together with locally collected road sign images. The
resulting traffic sign detection and recognition system uses an eight-layer convolutional neural
network that learns its feature representations by being trained on different types of traffic signs.
Traffic signs are an important part of modern transportation systems because they keep roads safe
and guide drivers. There are two types of traffic signs: symbol-based and text-based. The ability to
read traffic signs is essential for real-world applications such as autonomous driving, traffic
monitoring, and driver safety. Traffic sign identification remains a difficult problem, since
variations in size, illumination, and noise affect detection and recognition, and CNN-based
detection systems additionally face difficulties in adverse weather conditions, a shortage of
training data, and the inherent difficulty of object detection. Training in most of the works
reviewed above is done on the German Traffic Sign dataset and results in high traffic sign
recognition rates.
Methodology: