


Traffic Sign Detection and Recognition Using Deep Learning Approach

Umma Saima Rahman(B) and Maruf

Port City International University, Chattogram 4225, Bangladesh


[email protected]

Abstract. The use of traffic signs ensures the safety of drivers and pedestrians. Driving demands a high level of focus because the driver must perform perceptual and motor functions simultaneously. Inattention or other factors can cause drivers to miss a traffic sign, resulting in an accident. A traffic sign detection and recognition system can assist drivers in locating and following traffic signs. Research on the topic is required since the ability to read and understand traffic signs has become increasingly important. The goal of this research is to find a traffic sign in an image and determine what kind of sign it is. Traffic sign classification is performed using CNN, InceptionV3, and AlexNet models. Because no comparable dataset existed from Bangladesh's perspective, we collected 7000 photographs of 70 different types of traffic signs from Bangladesh, resulting in 70 classes in the dataset.

Keywords: Deep learning · detection · recognition · classification · CNN · InceptionV3 · AlexNet

1 Introduction
Traffic signs and road signs are the common names for the signs we encounter on
the side of the road on a daily basis. Pedestrians and drivers alike will benefit from
the information these signs provide. Lisbon’s earliest traffic signs, dating from 1686,
were commissioned by King Peter II to help with traffic management [1]. The volume
of traffic has increased significantly since then. To make them easier to understand,
many countries have transformed their traffic signs into pictorial representations. It has
become imperative to be able to recognize these signs. According to a road accident
monitoring report [2], there were 4,891 traffic accidents in Bangladesh in 2020, resulting
in 6,686 deaths and 8,600 injuries. Today’s efficient traffic flow is heavily reliant on traffic
signs. Safety on the road can be improved by informing and reminding drivers of the
rules of the road. Automatic and intelligent driver assistance systems, intelligent vehicle
development, and road maintenance all depend on the accurate detection and recognition
of traffic signs. With this system, drivers can be alerted to potentially dangerous road
conditions, which could help cut down on the number of accidents on the road.


Traffic sign detection and recognition (TSDR) has grown in relevance as technology has advanced. TSDR is a technology that helps control and guide traffic in order to make roads safer.
The purpose of road signs is to ensure the safety of motorists. They help drivers steer
their automobiles in the right direction on highways. If a driver does not understand the
sign’s meaning, their safety on the road may be in jeopardy. The purpose of this project
is to make it easier for motorists to read and understand traffic signs. Road signs can be
confusing, and this approach is designed to help people better understand their meaning.
This approach will serve to remind them of the unique meaning of the sign.
Our work achieved the following: first, images of street signs were collected from Bangladeshi cities and a dataset was built from the collected photos. The dataset's effectiveness was then determined by comparing the accuracy of multiple deep learning models, and we assessed the models using the confusion matrix, precision, recall, and F1 score. Finally, a sign was located in an input image and classified by the model.

2 Literature Review

TSDR has become an essential aspect of modern research as it prepares the way for a
safer transportation system. Due to a variety of environmental conditions, detecting and
identifying traffic signs has always been difficult. This technology has been implemented
by a number of automotive manufacturers. Various methods have been used to correctly
detect traffic signs. Over the last ten years, TSDR systems have advanced incredibly.
CNN is used to categorize the images. Learning methods based on hand-crafted features
and deep learning methods are the two primary groups of classification algorithms.
The TSDR system serves a vital role in the current traffic control system. In recent
years, many researchers have worked on traffic sign recognition and tried to associate
the risk factors that may occur during a drive.
In many foreign nations, traffic signs are colored differently depending on what
they indicate and how urgent or important they are for all vehicles on the road. As a
result, many researchers choose to use the color-based TSDR approach. M. Benallal
et al. attempted to investigate the difference in the colour of traffic signs dependent on
the illumination surrounding them in [3], then determined that the effect was not so
noticeable and proposed an easy-to-understand segmentation algorithm. Liu et al. used
the color quantization method in the HSV color model [4], followed by ROI scaling and
border tracing.
With the introduction of the object recognition system, the possibilities for creating a more efficient TSDR system have skyrocketed. Furthermore, machine learning approaches provide the most commonly used tools in this regard. This
paper by A. De La Escalera et al. [5] demonstrates the use of color thresholding and
shape analysis in association with machine learning techniques for the identification and
recognition of road signs. However, in [6], the author released a study demonstrating the
use of LUTs in HSI to detect ROIs and a genetic algorithm for detection, claiming that
it provided a more successful detection system regardless of distance, position, angle,
damage, or obstacles in the path of observation. Another strategy was investigated by
Wali et al. [7], in which the image was segmented. Subsequently, the traffic sign was detected using SVM while considering any signs of damage. As can be observed, several studies use both color segmentation and template matching methods to improve the effectiveness and efficiency of their TSDR systems.
Mrinal Haloi’s research revealed that each image was a systematic collection of
visual subjects [8]. His approach employed a cutting-edge PLSA (Probabilistic Latent
Semantic Analysis) based classification technique. This method was critical for dis-
playing features. The GTSRB (German Traffic Sign Recognition Benchmark) was used
to test this method, and it produced an impressive result. Systems for the detection and recognition of signs necessitate high-performance computer vision and machine learning. The three main components of this system were detection, feature extraction, and classification. Samira El Margae and Berraho Sanae [9] focused their research on feature extraction using the block-based Local Binary Pattern (LBP) approach, implemented with a KNN classifier. When tested on GTSRB, this approach demonstrated very high accuracy in image classification.
Deep learning has revolutionized image classification approaches, allowing for more precise detection and classification systems. Rongqiang Qian and Bailing Zhang [10]
proposed a high-performance classification system based on deep convolutional neural
networks. Their system had a high rate of performance and recognition accuracy. Their
main goal was to categorize signs based on digits, English letters, and Chinese characters
because their system was tested on a Chinese traffic sign dataset. R-CNN was the source
of inspiration for their work. High precision and low processing time were critical in
developing a perfect TSR system.

3 Methodology

This section describes our research methodology, which includes using multiple Deep
Learning models to detect and recognize traffic signs. Figure 1 depicts the overall
functioning of the system.

3.1 Image Collection

A benchmark image dataset is required for successful research. This study is based on Bangladeshi traffic signs; hence, it must use a dataset that includes them. However, no standard dataset of Bangladeshi traffic signs is available, so a new one had to be created. We roamed the streets of Dhaka, took pictures of various traffic signs, and gathered 7000 photos of 70 different signs. As a result, there are 70 different classes for detecting and recognizing traffic signs (Fig. 2).

Fig. 1. Methodology: image collection for the dataset, preprocessing, splitting into train data (80%) and test data (20%), classification of traffic signs (CNN, InceptionV3, AlexNet), and detection of the sign in a single input image using a Haar cascade file.

Fig. 2. Collected Images.

3.2 Preprocessing
The Keras image data generator class generates tensor image data in batches with real-time data augmentation. At first, all photos are resized to 128 × 128 pixels, which serves as the input size for the CNN models. Zoom range, shear range, rotation range, and other augmentation parameters are applied.
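For illustration, a minimal sketch of this augmentation pipeline using the Keras ImageDataGenerator API follows; the directory name and the exact augmentation ranges are assumptions, since the paper does not report them.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation parameters are illustrative; the paper does not state the exact values.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # normalize pixel values
    rotation_range=10,       # random rotations
    zoom_range=0.1,          # random zoom
    shear_range=0.1,         # random shear
    validation_split=0.2,    # 80/20 train/test split as in Fig. 1
)

train_gen = datagen.flow_from_directory(
    "dataset/",              # hypothetical directory with 70 class subfolders
    target_size=(128, 128),  # images resized to 128 x 128 as described above
    batch_size=32,
    class_mode="categorical",
    subset="training",
)
val_gen = datagen.flow_from_directory(
    "dataset/", target_size=(128, 128), batch_size=32,
    class_mode="categorical", subset="validation",
)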

3.3 Detection Method


Object detection is a computer technique for locating objects in photos and videos.
Computer vision, image processing, and deep learning are all related to it. In this work,
Haar cascades [11] are employed to detect signs.
Detection with OpenCV. OpenCV is an essential open-source library used in modern systems to support real-time applications. It can detect objects, faces, and even human handwriting in photos and videos. Detecting objects with Haar cascade classifiers is a useful technique. Haar cascade is a machine learning-based method that requires training the classifier on a large number of positive and negative images.
Positive Images: photos that contain the object our classifier should recognize.
Negative Images: images of everything else that is not the object.
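A minimal sketch of this detection step with OpenCV follows, assuming a cascade has already been trained; the file and image names are hypothetical.

import cv2

# Load a trained Haar cascade; the file name is an assumption.
cascade = cv2.CascadeClassifier("traffic_sign_cascade.xml")

image = cv2.imread("road_scene.jpg")            # single input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # cascades operate on grayscale

# Detect candidate sign regions; scaleFactor and minNeighbors are typical defaults.
signs = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in signs:
    roi = image[y:y + h, x:x + w]               # crop the region for classification
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)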

3.4 Classification Method

We employed three approaches for classification: CNN, InceptionV3, and AlexNet.


Convolutional Neural Network (CNN). A CNN model [12] for classification is con-
structed with ten convolution layers, five maxpooling layers, six dropout layers, one
flatten layer and two dense layers.
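A minimal Keras sketch of one plausible layout matching this layer count is given below; the filter counts and dropout rates are assumptions, as the paper does not report the exact hyperparameters.

from tensorflow.keras import layers, models

model = models.Sequential()
# Five blocks of (Conv2D x2 -> MaxPooling2D -> Dropout): 10 conv, 5 pooling, 5 dropout layers.
for i, filters in enumerate((32, 64, 64, 128, 128)):   # filter counts are illustrative
    if i == 0:
        model.add(layers.Conv2D(filters, (3, 3), activation="relu",
                                padding="same", input_shape=(128, 128, 3)))
    else:
        model.add(layers.Conv2D(filters, (3, 3), activation="relu", padding="same"))
    model.add(layers.Conv2D(filters, (3, 3), activation="relu", padding="same"))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Dropout(0.25))

model.add(layers.Flatten())
model.add(layers.Dense(256, activation="relu"))
model.add(layers.Dropout(0.5))                          # sixth dropout layer
model.add(layers.Dense(70, activation="softmax"))       # 70 traffic-sign classes

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])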
InceptionV3. InceptionV3 [13] is a CNN model trained specifically for image classification. In the Inception architecture, the auxiliary classifiers contribute little until late in the training phase, when accuracy is close to saturation; consequently, BatchNorm is applied to the auxiliary classifiers in InceptionV3. The loss function of this model includes a regularizing component that prevents the network from overfitting, a technique known as label smoothing. In addition, factorized 7 × 7 convolutions are used in InceptionV3, and the model incorporates upgrades from prior Inception versions. The model is trained on the ImageNet dataset [14], which contains nearly 14 million images divided into a thousand classes.
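A transfer-learning sketch along these lines is shown below, assuming the Keras InceptionV3 application with ImageNet weights and a new 70-class head; the head size and label-smoothing factor are assumptions, since the paper does not specify the fine-tuning setup.

import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

# ImageNet-pretrained backbone without its original 1000-class head.
base = InceptionV3(weights="imagenet", include_top=False, input_shape=(128, 128, 3))

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)         # head size is illustrative
outputs = layers.Dense(70, activation="softmax")(x)
model = models.Model(inputs=base.input, outputs=outputs)

# Label smoothing supplies the regularizing component mentioned above.
model.compile(optimizer="adam",
              loss=tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1),
              metrics=["accuracy"])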
AlexNet. AlexNet [15] has eight layers with learnable parameters: five convolutional layers, some followed by max pooling, and three fully connected layers. Except for the output layer, ReLU activation is used in all of these layers. As an activation function, ReLU accelerated the training process by roughly a factor of six. Dropout layers are also employed to prevent overfitting of the model.
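A compact Keras sketch of such an AlexNet-style network follows; the filter counts mirror the original AlexNet, while the 128 × 128 input and 70-class output are adapted to this dataset, and all details are assumptions rather than the authors' exact configuration.

from tensorflow.keras import layers, models

# AlexNet-style network: five convolutional layers (with max pooling)
# followed by three fully connected layers; ReLU everywhere except the output.
alexnet = models.Sequential([
    layers.Conv2D(96, (11, 11), strides=4, activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D((3, 3), strides=2),
    layers.Conv2D(256, (5, 5), padding="same", activation="relu"),
    layers.MaxPooling2D((3, 3), strides=2),
    layers.Conv2D(384, (3, 3), padding="same", activation="relu"),
    layers.Conv2D(384, (3, 3), padding="same", activation="relu"),
    layers.Conv2D(256, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((3, 3), strides=2),
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),
    layers.Dropout(0.5),                        # dropout against overfitting
    layers.Dense(4096, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(70, activation="softmax"),     # output layer, no ReLU
])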

4 Experimental Results
4.1 CNN

Each model was trained for 35 epochs with 175 steps per epoch. For the CNN model, the validation loss is greater than the training loss.
The training and validation losses are visualized in Fig. 3. The training accuracy remained stable, whereas the validation accuracy showed some degree of variation; despite this, the training and validation accuracies are nearly identical. The difference between the training loss and the validation loss is relatively small, which suggests that the CNN model performed very well on the dataset.
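For reference, a training call consistent with this schedule might look as follows, assuming the generators sketched in Sect. 3.2 and one of the compiled models; this is a sketch, not the authors' exact script.

import matplotlib.pyplot as plt

history = model.fit(
    train_gen,
    steps_per_epoch=175,        # 175 steps per epoch, as reported above
    epochs=35,                  # 35 epochs per model
    validation_data=val_gen,
)

# Curves such as those in Fig. 3 can be drawn from the recorded history.
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.legend()
plt.show()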

Fig. 3. Training and validation curve of CNN.

Fig. 4. Training and validation curve of InceptionV3.

4.2 InceptionV3

The number of epochs and the number of steps per epoch remain the same across all models; this model was likewise trained for 35 epochs of 175 steps each. For the InceptionV3 model, the accuracy and loss curves are shown in Fig. 4. This model generates the most accurate results on our dataset.
Compared to the other models, the training and validation data show no noticeable difference in accuracy. As seen in Fig. 4, the InceptionV3 model achieves very good results on the dataset, and for most of the training there is little gap between the validation loss and the training loss. When the training and validation losses stay this close together, a better outcome can be anticipated than for the other models. Consequently, the InceptionV3 model delivered an outstanding performance on the dataset.

4.3 AlexNet

In a similar fashion, this model was trained for 35 epochs of 175 steps each. Although the model got off to a good start, the validation curve in Fig. 5 exhibited some instability toward the end. The training loss decreased gradually, while the validation loss remained noticeably higher and increased slightly.

Fig. 5. Training and validation curve of AlexNet.

As can be seen in Fig. 5, the training loss and validation loss in the later epochs are comparable to one another. When the training loss and validation loss are this close, there is a good chance of achieving a high level of accuracy.

5 Model Evaluation and Analysis

5.1 Classification Report of the Models

CNN Model: Table 1 displays the precision, recall, and f1-score of the CNN model.
We achieved a 97% accuracy rate.

Table 1. Classification Report of CNN.



Class Precision Recall F1-score Class Precision Recall F1-score Class Precision Recall F1-score
0 1 1 1 24 0.91 1 0.95 48 1 0.9 0.95
1 1 1 1 25 1 1 1 49 1 1 1
2 1 1 1 26 0.91 1 0.95 50 1 1 1
3 1 1 1 27 1 1 1 51 0.71 1 0.83
4 0.91 1 0.95 28 0.91 1 0.95 52 1 0.7 0.82
5 1 1 1 29 0.95 1 0.98 53 1 0.75 0.86
6 1 1 1 30 0.95 1 0.98 54 1 1 1
7 1 1 1 31 1 1 1 55 1 0.9 0.95
8 1 1 1 32 0.94 0.85 0.89 56 0.83 1 0.91
9 1 1 1 33 1 1 1 57 1 1 1
10 1 0.95 0.97 34 1 0.8 0.89 58 1 1 1
11 1 1 1 35 0.87 1 0.93 59 1 0.95 0.97
12 1 0.8 0.89 36 1 0.9 0.95 60 1 1 1
13 0.91 1 0.95 37 1 1 1 61 1 1 1
14 1 1 1 38 1 0.9 0.95 62 1 1 1
15 1 1 1 39 1 1 1 63 0.87 1 0.93
16 1 1 1 40 0.91 1 0.95 64 0.95 0.95 0.95
17 1 0.95 0.97 41 1 1 1 65 1 1 1
18 0.95 0.9 0.92 42 1 0.95 0.97 66 1 0.9 0.95
19 1 1 1 43 1 0.95 0.97 67 0.87 1 0.93
20 0.95 1 0.98 44 1 0.95 0.97 68 1 1 1
21 1 1 1 45 0.87 1 0.93 69 1 0.95 0.97
22 1 1 1 46 0.95 1 0.98 Accuracy 0.97
23 1 1 1 47 1 0.95 0.97

InceptionV3 Model: The classification report in Table 2 shows the precision, recall,
and f1-score for the InceptionV3 model. We attained a 99% accuracy rate.

Table 2. Classification Report of InceptionV3.

Class Precision Recall F1-score Class Precision Recall F1-score Class Precision Recall F1-score
0 1 1 1 24 1 1 1 48 1 1 1
1 1 1 1 25 1 1 1 49 1 1 1
2 1 1 1 26 1 1 1 50 1 1 1
3 1 1 1 27 1 1 1 51 1 1 1
4 0.95 1 0.98 28 1 1 1 52 1 1 1
5 1 0.95 0.97 29 1 1 1 53 1 1 1
6 1 1 1 30 1 1 1 54 1 1 1
7 1 1 1 31 1 1 1 55 1 1 1
8 1 1 1 32 1 1 1 56 1 1 1
9 1 1 1 33 1 0.95 0.97 57 1 1 1
10 1 1 1 34 1 0.95 0.97 58 1 1 1
11 1 1 1 35 0.91 1 0.95 59 0.95 1 0.98
12 1 1 1 36 1 1 1 60 1 1 1
13 1 1 1 37 1 1 1 61 1 1 1
14 1 1 1 38 1 1 1 62 1 1 1
15 1 1 1 39 1 1 1 63 1 1 1
16 1 1 1 40 1 1 1 64 1 1 1
17 1 0.95 0.97 41 1 0.95 0.97 65 1 1 1
18 1 1 1 42 1 1 1 66 1 1 1
19 1 1 1 43 1 1 1 67 1 1 1
20 1 1 1 44 0.95 1 0.98 68 1 1 1
21 1 1 1 45 1 1 1 69 1 1 1
22 1 1 1 46 1 1 1 Accuracy 0.99
23 1 1 1 47 1 1 1

AlexNet Model: The precision, recall, and f1-score for the AlexNet model can be found
in Table 3 and the accuracy rate is 97%.

Table 3. Classification Report of AlexNet.

Class Precision Recall F1-score Class Precision Recall F1-score Class Precision Recall F1-score
0 1 1 1 24 0.95 1 0.98 48 1 1 1
1 1 1 1 25 1 1 1 49 0.95 1 0.98
2 1 1 1 26 1 1 1 50 1 0.95 0.97
3 1 1 1 27 1 1 1 51 1 1 1
4 0.91 1 0.95 28 0.86 0.95 0.9 52 1 1 1
5 0.95 1 0.98 29 0.87 1 0.93 53 1 0.75 0.86
6 1 1 1 30 1 1 1 54 1 1 1
7 1 1 1 31 0.95 1 0.98 55 0.91 1 0.95
8 1 1 1 32 1 1 1 56 1 1 1
9 1 1 1 33 1 0.95 0.97 57 1 1 1
10 1 1 1 34 1 0.8 0.89 58 0.91 1 0.95
11 1 1 1 35 0.91 1 0.95 59 1 0.8 0.89
12 0.83 0.95 0.88 36 1 0.95 0.97 60 1 1 1
13 0.95 1 0.98 37 1 1 1 61 1 1 1
14 1 1 1 38 1 0.9 0.95 62 1 0.95 0.97
15 1 1 1 39 1 1 1 63 1 1 1
16 1 0.9 0.95 40 0.95 1 0.98 64 0.91 1 0.95
17 0.83 1 0.91 41 0.91 1 0.95 65 1 1 1
18 0.89 0.8 0.94 42 1 0.7 0.82 66 1 0.95 0.97
19 1 1 1 43 1 1 1 67 1 1 1
20 1 1 1 44 0.95 1 1 68 1 1 1
21 1 1 1 45 1 1 1 69 1 0.95 0.97
22 1 0.95 0.97 46 1 1 1 Accuracy 0.97
23 1 1 1 47 0.91 1 0.95
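The per-class precision, recall, and F1-scores in Tables 1, 2 and 3 are the standard quantities produced by scikit-learn's classification_report; the paper does not state which tool was used, so the following is only a sketch, assuming the data generator and a trained model from the earlier sections.

import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# A non-shuffled test generator keeps predictions aligned with the true labels.
test_gen = datagen.flow_from_directory(
    "dataset/", target_size=(128, 128), batch_size=32,
    class_mode="categorical", subset="validation", shuffle=False)

y_pred = np.argmax(model.predict(test_gen), axis=1)
y_true = test_gen.classes

print(classification_report(y_true, y_pred, digits=2))   # per-class precision, recall, F1
print(confusion_matrix(y_true, y_pred))                   # confusion matrix mentioned in Sect. 1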

5.2 Prediction Results

After model training is completed, the prediction results can be considered as output, shown in Figs. 6, 7 and 8. We used 20 images to examine the models' prediction outcomes, which are good because the models correctly predict all of the given signs, as shown in Fig. 9. For test data, we used 1400 images.
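A sketch of how a single detected sign can be classified end to end is given below, assuming the Haar cascade from Sect. 3.3, a trained classifier, and the training generator for class names; the image name and preprocessing details are assumptions.

import cv2
import numpy as np

labels = list(train_gen.class_indices)             # the 70 class names from the dataset

image = cv2.imread("test_scene.jpg")               # hypothetical test image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    roi = image[y:y + h, x:x + w]
    roi = cv2.cvtColor(roi, cv2.COLOR_BGR2RGB)     # OpenCV loads BGR; training images are RGB
    roi = cv2.resize(roi, (128, 128)) / 255.0      # match the training preprocessing
    class_id = int(np.argmax(model.predict(roi[np.newaxis, ...])))
    print("Detected sign:", labels[class_id])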

Fig. 6. Prediction result of CNN model.

5.3 Comparative Analysis

Figure 10 indicates that the CNN model has a 97% accuracy, the InceptionV3 model
has a 99% accuracy, and the AlexNet model has a 97% accuracy. As can be seen, the
InceptionV3 model has the highest accuracy.

Fig. 7. Prediction result of InceptionV3 model.

Fig. 8. Prediction result of AlexNet model.

Fig. 9. Detection and recognition result.



Fig. 10. Accuracy comparison (CNN: 97%, InceptionV3: 99%, AlexNet: 97%).

6 Conclusion
Road accidents are escalating at an alarming rate in Bangladesh. The majority of acci-
dents are caused by a failure to recognize traffic signs. It is essential to recognize traffic
signs to reduce the incidence of accidents. The primary goal of our work is to present a
traffic sign detection and recognition system from Bangladesh’s perspective. The study
also discovered several severe issues with our country’s traffic signs. On the data, three
models, CNN, InceptionV3, and AlexNet, are evaluated, and InceptionV3 is found to have the highest accuracy on the dataset, around 99.71%. In future work, more data will be collected from various Bangladeshi roads under varied weather and lighting conditions, which should allow the system to reach a higher accuracy rate. The same road sign can appear with different shapes, fonts, and colors, which added to the complication; however, future research is expected to improve the results.

References
1. Toh, C.K., Cano, J.C., Fernandez-Laguia, C., et al.: Wireless digital traffic signs of the future. IET Netw. 8, 74–78 (2019). https://doi.org/10.1049/IET-NET.2018.5127
2. Rayed, A.M., Ur, M.A., Tariq, R., et al.: An analysis of driving behavior of educated youth in Bangladesh considering physiological, cultural and socioeconomic variables (2022). https://doi.org/10.3390/su14095134
3. Bénallal, M., Meunier, J.: Real-time color segmentation of road signs. In: Canadian Conference on Electrical and Computer Engineering, vol. 3, pp. 1823–1826 (2003). https://doi.org/10.1109/CCECE.2003.1226265
4. Liu, Y.S., Duh, D.J., Chen, S.Y., et al.: Scale and skew-invariant road sign recognition. Int. J. Imaging Syst. Technol. 17, 28–39 (2007). https://doi.org/10.1002/IMA.20095
5. De la Escalera, A., Armingol, J.M., Mata, M.: Traffic sign recognition and analysis for intelligent vehicles. Image Vis. Comput. 21, 247–258 (2003). https://doi.org/10.1016/S0262-8856(02)00156-7
6. De la Escalera, A., Moreno, L.E., Salichs, M.A., Armingol, J.M.: Road traffic sign detection and classification. IEEE Trans. Ind. Electron. 44, 848–859 (1997). https://doi.org/10.1109/41.649946
7. Wali, S.B., Hannan, M.A., Hussain, A., Samad, S.A.: An automatic traffic sign detection and recognition system based on colour segmentation, shape matching, and SVM. Math. Probl. Eng. 2015 (2015). https://doi.org/10.1155/2015/250461
8. Haloi, M.: A novel pLSA based traffic signs classification system (2015). https://doi.org/10.48550/arXiv.1503.06643
9. El Margae, S., Sanae, B., Mounir, A.K., Youssef, F.: Traffic sign recognition based on multi-block LBP features using SVM with normalization. In: 2014 9th International Conference on Intelligent Systems: Theories and Applications, SITA 2014 (2014). https://doi.org/10.1109/SITA.2014.6847283
10. Qian, R., Zhang, B., Yue, Y., et al.: Robust Chinese traffic sign detection and recognition with deep convolutional neural network. In: Proceedings of the International Conference on Natural Computation, pp. 791–796 (2016). https://doi.org/10.1109/ICNC.2015.7378092
11. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features (2001)
12. Shin, H.-C., Roth, H.R., Gao, M., et al.: Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans. Med. Imaging 35, 1285 (2016). https://doi.org/10.1109/TMI.2016.2528162
13. Lin, C., Li, L., Luo, W., et al.: Transfer learning based traffic sign recognition using Inception-v3 model. Period. Polytech. Transp. Eng. 47, 242–250 (2019). https://doi.org/10.3311/PPTR.11480
14. Deng, J., Dong, W., Socher, R., et al.: ImageNet: a large-scale hierarchical image database, pp. 248–255 (2010). https://doi.org/10.1109/CVPR.2009.5206848
15. Alom, M.Z., Taha, T.M., Yakopcic, C., et al.: The history began from AlexNet: a comprehensive survey on deep learning approaches (2018)

