
Article (Editor's Choice)

Intelligent System for Vehicles Number Plate Detection and Recognition Using Convolutional Neural Networks

Nur-A-Alam, Mominul Ahsan, Md. Abdul Based and Julfikar Haider

Special Issue: Networking, Computing and Immersive Technologies for Smart Environments
Edited by Prof. Dr. Konstantinos Oikonomou and Dr. Vasileios Komianos

https://doi.org/10.3390/technologies9010009
Article
Intelligent System for Vehicles Number Plate Detection and
Recognition Using Convolutional Neural Networks
Nur-A-Alam 1, Mominul Ahsan 2,*, Md. Abdul Based 3 and Julfikar Haider 2

1 Department of Computer Science & Engineering, Mawlana Bhashani Science and Technology University,
Tangail 1902, Bangladesh; [email protected]
2 Department of Engineering, Manchester Metropolitan University, Chester St, Manchester M15 6BH, UK;
[email protected]
3 Department of Electrical, Electronics and Telecommunication Engineering, Dhaka International University,
Dhaka 1205, Bangladesh; [email protected]
* Correspondence: [email protected]

Abstract: Vehicles on the road are rising in extensive numbers, particularly in proportion to the industrial revolution and growing economy. The significant use of vehicles has increased the probability of traffic rule violations, causing unexpected accidents and triggering traffic crimes. In order to overcome these problems, an intelligent traffic monitoring system is required. The intelligent system can play a vital role in traffic control through the number plate detection of the vehicles. In this research work, a system is developed for detecting and recognizing vehicle number plates using a convolutional neural network (CNN), a deep learning technique. This system comprises two parts: number plate detection and number plate recognition. In the detection part, a vehicle's image is captured through a digital camera. Then the system segments the number plate region from the image frame. After extracting the number plate region, a super resolution method is applied to convert the low-resolution image into a high-resolution image. The super resolution technique is used with the convolutional layer of the CNN to reconstruct the pixel quality of the input image. Each character of the number plate is segmented using a bounding box method. In the recognition part, features are extracted and classified using the CNN technique. The novelty of this research is the development of an intelligent system employing CNN to recognize number plates, which have low resolution and are written in the Bengali language.

Keywords: number plate detection; super resolution technique; convolutional neural networks; deep learning; bounding box method

Citation: Alam, N.-A.; Ahsan, M.; Based, M.A.; Haider, J. Intelligent System for Vehicles Number Plate Detection and Recognition Using Convolutional Neural Networks. Technologies 2021, 9, 9. https://doi.org/10.3390/technologies9010009

Received: 24 December 2020
Accepted: 18 January 2021
Published: 20 January 2021

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Vehicle Number Plate Recognition (VNPR) is a popular and effective research modality in the field of computer vision [1]. As there are an increasing number of vehicles on the road, it is highly challenging to monitor and control the vehicles using existing systems (such as manual monitoring and monitoring by traffic police). An intelligent system can be used to overcome this problem in a convenient and efficient way. Real time detection of number plates from moving vehicles is needed, not only for monitoring traffic systems, but also for traffic law enforcement. However, development in this area is slow and very challenging to implement from a practical point of view [2].

Recognizing vehicle number plates can help with authorization (for example, when a vehicle enters a restricted premise). VNPR can strengthen security policies where such issues are crucial. This research work aims to detect and recognize number plates in an intelligent way. The tests were carried out on vehicles in Dhaka city, in Bangladesh, although the work can be extended to any country. Dhaka is a densely populated city with huge amounts of traffic, and people frequently break the traffic rules.


In Bangladesh, the Bangladesh Road Transport Authority (BRTA) has the authority to register vehicles. According to the annual report of BRTA [3], the number of vehicles is increasing rapidly every year in Bangladesh (Figure 1). This can be linked to the increasing number of road traffic accidents and related deaths. The Bengali alphabet and Bengali numerals are used on Bangladeshi vehicle number plates, and the international vehicle registration code for Bangladesh is BD. Two types of vehicles are used in Bangladesh: civil vehicles and army vehicles. Samples of Bangladeshi vehicle number plates are shown in Figure 2.

Figure 1. Number of vehicle registrations in Bangladesh in the past eight years [3].

Figure 2. Bangladeshi vehicle number plates: (a) civil vehicle number plate, (b) army vehicle number plate.

The authorized letters and numerals for vehicle number plates in Bangladesh, with their corresponding English equivalents, are presented in Table 1.

Table 1. Bangla letters and their corresponding English equivalents used in the vehicle number plates.

Bangla letter:  অ ই উ এ ক খ গ ঘ ঙ চ ছ জ ঝ ত থ ড
English letter: a i u e ka kha ga gha na ca cha ja jha ta tha da
Bangla letter:  ঢ ট ঠ দ ধ ন প ফ ব ভ ম য র ল শ স
English letter: dha ta tha da dha na pa pha ba bha ma ya ra la sha sa
Bangla number:  ০ ১ ২ ৩ ৪ ৫ ৬ ৭ ৮ ৯
English number: 0 1 2 3 4 5 6 7 8 9
The format of vehicle number plates in Bangladesh is "city name - class letter of the vehicle and its number - vehicle number". For example, "DHAKA METRO - GA 15-0568". Here, "DHAKA METRO" represents the city name and "GA" represents the vehicle class in the Bangla alphabet. The second line (the number line) contains six digits, where the first two digits (15) denote the vehicle class number and the last four digits (0568) represent the vehicle registration number in Bangla numerals. Figure 3 shows this representation on the number plate.

Figure 3. Representation of a vehicle number plate in Bangladesh.

This research work developed an intelligent system that can efficiently recognize vehicle number plates using Convolutional Neural Networks (CNN). The recognition system consists of five major steps: image pre-processing, detection of the number plate from the captured image, a learning-based super resolution technique to produce a high-resolution image, segmentation, and recognition of each character. Segmentation is the most important task, as the result of the entire number plate analysis depends on it. The main objective of segmentation is to determine each region according to the vehicle city, type, and number. Segmentation of blurred number plates was more challenging, and this was overcome by using the super resolution method, which transformed the blurred number plate image into a clear image.
To obtain an accurate segmentation result, the bounding box method was used. Then the system used CNN for extracting features of the number plate and recognizing the vehicle number.
The main steps of this system are organized as follows:
• Localization of the number plate region: template matching algorithm is used for
extracting the number plate region from the input image frame of the vehicle.
• Super resolution and segmentation techniques: the super resolution technique is used
to get a clear number plate with good resolution and the bounding box method is used
for segmenting each character of the number plate. The method segments the vehicle
city, type, and number from the plate region.
• Feature extraction: the system used 700 number plate images for training the CNN, which provided 4096 features per character for correct recognition. The number plate images used in this investigation were collected from the Bangladesh Road Transport Authority (https://service.brta.gov.bd/).
The paper is organized as follows: Section 2 describes the related work; Section 3 presents the methodology; Section 4 presents the simulation results; finally, the conclusion is drawn in Section 5.

2. Related Works
In the literature, a large number of systems were proposed and applied for vehicle
number plate recognition: digital image enhancement, detection of the number plate area
from the captured image, segmentation of each character, and recognition of the character
form the core steps in the recognition systems.
Cheokman et al. [4] used morphological operators to pre-process the image. After preprocessing, the template matching approach was used for recognition of each character, applied to Macao-style vehicle registration plates. In [5], scaling and cross-validation were applied for removing outliers and finding suitable parameters, using the Support Vector Machine (SVM) method. When recognizing characters via the SVM method, the accuracy rate was higher than that of the Neural Network (NN) system.
Prabhakar et al. [6] used a webcam for capturing images. This system can local‑
ize several sizes of number plates from the captured images. After localizing the plate,
characters are segmented and recognized using several NNs.

A Sobel color detector for detecting vertical edges was used in [7], where ineffective edges were removed. The plate region was discovered by using the template matching
approach. Mathematical morphology and connected component analysis were used for
segmentation. Chirag Patel [8] proposed mathematical morphology and connected com‑
ponent analysis for segmentation and recognition of characters using the radial basis func‑
tion of the neural network.
The number plate detection system in [9] used plate background and character color
to find the plate location. For segmentation, the column sum vector was adopted. Artificial
Neural Network (ANN) was used for character recognition.
The system in [10] was used for Chinese number plate recognition. The number plate image is converted into a binary image, and image noise is removed. Then, features are extracted from the image and the image is normalized to 8 × 16 pixels. After normalization, a back-propagation neural network is used for recognition.
Ziya et al. proposed Fuzzy geometry to locate the number plate, and segmented the
plate by using Fuzzy C‑Means [11]. The segmentation technique, by using blob labeling
and clustering, provides a segmentation accuracy of 94.24% [12]. In [13], Gabor filter,
threshold, and connected component labeling were used for finding the number plate.
A self-organizing map (SOM) neural network was used for character recognition after segmentation [13]. In [14], a two-layer Markov network was used for segmentation and
character recognition. Similar works on number plate detection are published in [15,16].
Maulidia et al. [17] presented a method where the accuracy of Otsu and K‑nearest
neighbor (KNN) were obtained for converting an RGB image into a binary image, extract‑
ing characteristics of the image. Feature extraction in pattern recognition was used for
converting pixels into binary form. Feature extraction was performed by the Otsu method
where KNN classified the image by comparing the neighborhood test data to the training
data. Test data were determined by using the learning algorithm through a classification
process, which groups the test data into classes. The Otsu method was developed based
on a pattern recognition process with a binary vector without influencing the threshold
value. Adjustment of distribution of the pixel values of the image was performed to ob‑
tain binary segmentation. KNN classification proved to be a great boon in recognizing the
vehicle number plate. However, the authors did not provide the recognition capability of
the system under extreme weather conditions.
Liu et al. [18] presented a supervised K‑means machine learning algorithm to segre‑
gate the characters of the number plate into subgroups, which were classified further by
the Support Vector Machine (SVM). Their system recognized blurred number plate im‑
ages and improved the classification accuracy. This system differentiated the obstacles in
character recognition due to the angle of the camera, speed of the vehicle, and surround‑
ing light and shadow. The camera captured faint and unrecognizable character images.
A huge number of samples increased the workload of SVM classifiers; thus, affecting the
accuracy.
Quiros et al. [19] used the KNN algorithm for classifying characters from number
plates. An image processing camera was installed on a highway in their proposed sys‑
tem and analyzed the feed received, capturing the images of vehicles. Contours within
the number plates were computed as if they were valid characters, along with their sizes,
and afterwards, the plates were segmented from the detected contours. Each contour was
classified using the KNN algorithm, which was trained using different sets of data, con‑
taining 36 characters, comprised of 26 alphabets and 10 numerical digits. The algorithm
was tested on previously segmented characters and compared with the character recogni‑
tion techniques, such as artificial neural networks. Their proposed system did not report the character recognition performance in comparison with the literature.
Thangallapally et al. [20] implemented a technique to recognize the characters on
number plates and to upload details into a server. This, in turn, was segregated to ex‑
tract the image of the vehicle number plate. The process led to compartmentalizing the
characters from the number plate, where KNN was applied to extract the characters up‑
loaded in the server. The hindrance of this process was recognizing the number plates
from blurred or ambiguous images.
Singh and Roy [21] proposed a vehicle number plate recognition system in India,
where various issues were observed, including a plethora of font sizes, different colors,
double line number plates, etc. Artificial neural network (ANN) and SVM were employed
to recognize characters and to detect plate contours, respectively. Although a number of algorithms were employed in the literature to remove noise and to enhance plate recognition, ANN showed good results while easing camera constraints.
Sanchez [22] implemented a recognition system of vehicle number plates in the UK
using machine learning algorithms, including SVM, ANN, and KNN. The system received
the car image, processed and analyzed with the Machine Learning (ML) algorithm, and com‑
puter vision techniques. The results from the investigation showed that the system could
identify the number plate of the car from the images.
Panahi and Gholampour [23] proposed a system to detect unclear number plates dur‑
ing rough weather and high‑speed vehicles in different traffic situations. The image data of
the vehicle number plates were collected from different roads, streets, and highways dur‑
ing the day and night. The proposed system was robustly receptive to variations in light,
size, and clarity of the number plates. The aforementioned techniques helped in compil‑
ing a dedicated set of solutions to problems and challenges involved in the formation of a
number plate recognition system in various intelligent transportation system applications.
Subhadhira et al. [24] used a deep learning method for the training process to classify vehicle number plates accurately. This system consisted of two parts:
(1) pre‑processed and extracted features using the histogram of oriented gradients (HOG)
and (2) the second part classified each number and alphabetical character that appeared on
the number plate to be analyzed and segregated. The extreme learning machine (ELM) was
used as a classifier, whereas HOG extracted important features from the plate to recognize
Thai characters on the number plate. The ELM system performed better due to its high
speed and acceptable testing and training tenets.
In previous works, different techniques, such as template matching, and several classifiers, such as SVM and ANN, were used to recognize characters. Template matching techniques did not deal well with ambiguous characters, and SVM could not cope with varying illumination conditions or with the orientation of characters on damaged number plates. Consid‑
ering these issues, in this research, the number plate region is extracted from the captured
vehicle image by using a template matching method and the super resolution technique
is applied to improve the resolution quality. Then, segmentation of characters from the
number plate region, and feature extraction to recognize the characters, are performed us‑
ing CNN. CNN uses a gradient-based learning algorithm with modern activation functions, including the Rectified Linear Unit (ReLU), which can deal with the diminishing gradient problem [25]. The gradient descent-based method performs training and creates models that minimize the errors and update the weights accordingly. Thus, better prediction accuracy
is obtained through producing highly optimized weights during training. CNN is capa‑
ble of representing two‑dimensional (2D) or 3D images with meaningful features, which
can help achieve superior recognition performance. In particular, the max‑pooling layer
of CNN can deal with shape, as well as scale invariant image problems. In addition, the al‑
gorithm uses a considerably low number of network parameters compared to traditional
neural networks with similar sizes.

3. Proposed Methodology
This proposed system has the capability of detecting and recognizing vehicle number
plates in any language. To detect each character of a number plate, this system trains
all alphabetic letters from the plate using machine learning. Number plate characters in the majority of countries across the world are limited to A to Z and 0 to 9, so they can be detected relatively easily, whereas Bangla number plate detection and recognition is very challenging due to the complex alphanumeric characters. Therefore, Bangla number plates have been
considered as a case study in this research.
The basic steps of the proposed methodology are (a) pre‑processing; (b) localization
of the number plate region; (c) super resolution techniques to get clear images; (d) seg‑
mentation of characters; (e) feature extraction; and (f) recognition of characters, shown in
Figure 4.

Figure 4. Overview of the proposed system for vehicle number plate recognition.

3.1. Pre‑Processing
In this system, there are two parts: one part is for number plate detection of moving
vehicles and another part is for number plate recognition. The first part extracts the num‑
ber plate region from the captured image of the vehicle by using the template matching
technique. The number plate recognition part consists of three activities. (1) The super
resolution method is used in the number plate region for converting a low‑resolution im‑
age to a high‑resolution image. Then it converts RGB into a gray image. (2) The bounding
box method is used for segmenting characters. (3) Finally, features from the authorized
alphabets and numbers are extracted using CNN. The CNN model provides 4096 features
for recognizing each character.

3.2. Localization of the Number Plate Region


The template matching technique was applied to locate the plate region in the vehicle image, that is, the portion of the image that is most similar to a predefined template. In this method, the template is slid over the target image and a similarity measure is calculated at each position. The localization process moves the template to every position in the vehicle image and computes numerical indexes that quantify how well the template matches the image pixel by pixel in that portion. Finally, the positions with the strongest similarity are identified as the matched pattern positions. Figure 5 illustrates the template matching procedure. The various portions of the input image are compared with the predefined template image and a naive template matching approach is performed to extract the matched region. Then, the extracted
plate region is resized to 127 × 127 pixels. An example of the extracted plate region from
the vehicle image frame using the template matching technique is shown in Figure 6.
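As an illustration of this localization step, the following is a minimal sketch using OpenCV's normalized cross-correlation template matching; the function name, the choice of matching score, and the cropping details are illustrative assumptions rather than the exact implementation used in the paper.

import cv2

def localize_plate(frame_gray, template_gray):
    # Slide the template over the vehicle frame and score similarity at every position.
    scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)
    h, w = template_gray.shape
    x, y = best_loc
    plate = frame_gray[y:y + h, x:x + w]
    # The paper resizes the extracted plate region to 127 x 127 pixels.
    return cv2.resize(plate, (127, 127)), best_score

A low best_score would indicate that no plate-like region was found in the frame.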
Figure 5. Procedure of template matching technique for detecting the vehicle number plate.


Figure 6. Localization of the number plate region by using the template matching technique: (a) input image; (b) template
image; and (c) output image.

3.3. Super Resolution Technique


High-quality imaging chips and optical components are expensive and, in practice, are not used in surveillance cameras. The quality of the surveillance camera and the configuration of the hardware components limit the resolution of the captured image, so the characters of the number plate cannot be detected from a vague plate region. In this research, a spatial super resolution (SR) technique is used to overcome this limitation.
SR techniques construct a high-resolution image from several captured low-resolution images. The concept of SR is to organize the non-redundant data contained in abundant low-resolution frames to produce a high-resolution image. The main objective of super resolution is to acquire a high-resolution image from multiscale low-resolution images through the spatial resolution approach. The spatial super resolution is applied to get a high-resolution number plate image. First, the system used a downsampling approach and extracted image features such as local gradients, alignment vectors, and local statistics. Then, a kernel was applied to the downsampled images to construct the high-resolution RGB image. The high-resolution image was converted from RGB to a gray-scale image (I), according to Equation (1).

I = Wr * R + Wg * G + Wb * B (1)

The R, G, and B are the values of the monochrome colors (red, green, and blue, respectively) of the RGB color image, which are linear in luminance. Wr, Wg, and Wb are the coefficients (fixed weights) of the red, green, and blue colors, with values of 0.299, 0.587, and 0.114, respectively. The summation of the three weights (Wr, Wg, and Wb) is equal to 1.
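A one-line NumPy sketch of Equation (1) is shown below; the channel ordering (R, G, B) of the input array is an assumption.

import numpy as np

def rgb_to_gray(rgb):
    # Weighted grayscale conversion of Equation (1).
    # rgb: H x W x 3 array with channels ordered R, G, B.
    weights = np.array([0.299, 0.587, 0.114])  # Wr, Wg, Wb (sum to 1)
    return rgb @ weights                       # I = Wr*R + Wg*G + Wb*B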
Figure 7 describes the process to reconstruct images using the super resolution (SR)
technique. SR techniques are a good choice to get clear images to segment from blurred
images. There are various super resolution techniques; among these, better results are achieved by the spatial super resolution technique. This work calculates the peak signal-to-noise ratio (PSNR) value for different super resolution techniques, and the spatial super resolution technique achieves a better PSNR value. Multiple low-resolution frames are down‑
sampled, shifting subpixels from the high‑resolution scene between one another. The con‑
struction of SR aligns with the low‑resolution observances and combines subpixels into
high‑resolution image grids to overcome the limitation of a camera’s image processing.
The proposed super resolution is summarized in Algorithm 1. This algorithm combines many low-resolution images from the original image [26]. Then, a kernel is added to the combined image to get a clearer image.

Algorithm 1: Super Resolution of Plate Region

Super_resolution (input: original low-resolution frames [O1, ..., Ok]; output: SR_Frame)
    H = BicubicInterpolation(Avr(O1, ..., Ok))
    For i = 1 to N
        For k = 1 to K
            SumFrame = SumFrame + Fk^-1(Fk(H, Ok) - Ok);
        End loop;
        H = H - SumFrame;
    End loop;
    SR_Frame = H;
    Return SR_Frame;

Figure 7. Overview of the super resolution technique to improve the visibility of detected number plates.

In Algorithm 1, an enlarged hypothesis frame, H, was first created from several se‑
quential low‑resolution frames. The initial hypothesis frame can be either a simple bicubic
interpolation of one of the frames, or a bicubic interpolation of the average of all of the low‑
resolution frames. Afterwards, this hypothesis frame was iteratively altered and adjusted
with the information from all of the low‑resolution frames. Algorithm 1 runs for N itera‑
tions specified by the user. The hypothesis frame, H, was adjusted in each of the iterations
of the algorithm. Fn was a function used to align the hypothesis frame with the original
low-resolution frames. These were then subtracted from each of the original, individual low-resolution frames, On. Fn−1 was a function that reversed the alignment and enlarged the error frame. The error frames were summed over the number, n, of low-resolution frames, so that the sum could be used to adjust the hypothesis frame, H.
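A simplified sketch of this iterative back-projection idea is given below. It replaces the alignment functions Fk and Fk^-1 with plain resampling and averages the accumulated error over the frames for numerical stability, so it is an illustrative approximation of Algorithm 1 rather than the authors' implementation.

import numpy as np
import cv2

def super_resolve(frames, scale=2, n_iter=10):
    # Illustrative iterative back-projection super resolution.
    # frames: list of roughly aligned low-resolution grayscale frames of equal shape.
    frames = [f.astype(np.float32) for f in frames]
    h, w = frames[0].shape
    # Initial hypothesis H: bicubic interpolation of the average frame.
    H = cv2.resize(sum(frames) / len(frames), (w * scale, h * scale),
                   interpolation=cv2.INTER_CUBIC)
    for _ in range(n_iter):
        error_sum = np.zeros_like(H)
        for lr in frames:
            # Project the hypothesis down to the low-resolution grid (stand-in for Fk).
            simulated = cv2.resize(H, (w, h), interpolation=cv2.INTER_AREA)
            # Enlarge the residual back to the high-resolution grid (stand-in for Fk^-1).
            error_sum += cv2.resize(simulated - lr, (w * scale, h * scale),
                                    interpolation=cv2.INTER_CUBIC)
        # Adjust the hypothesis with the accumulated error (averaged for stability).
        H -= error_sum / len(frames)
    return H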
The algorithm shows that H is a hypothesis frame that is first created from several consecutive low-resolution frames. The initial hypothesis frame can be either a simple bicubic interpolation of one of the frames or a bicubic interpolation of the average of all low-resolution frames. This hypothesis frame is altered and adjusted with the information
from all of the low‑resolution frames. The images, after applying the super resolution al‑
gorithm in the plate region shown in Figure 8, clearly demonstrate that image quality was
significantly improved with the correct balance of brightness and contrast.

3.4. Segmentation of Character


The character segmentation process partitions the number plate image into multiple sub-images, where each sub-image contains one character. In this work, segmentation is the most significant part, as the successful recognition of each character relies on accurate segmentation. If segmentation is not performed correctly, then recognition will not be accurate.

Figure 8. Implementing the super resolution technique on the original images taken under different conditions (stand-by vehicle, moving vehicle, and stand-by vehicle in low light).

The bounding box method is used to segment the exact region of each character.
This method is efficient at identifying the boundaries of each character. It surrounds the
labeled region with a rectangular box, as shown in Figure 9. Then, it determines the upper-left and lower-right corners of the rectangle by their x and y coordinates, and it provides a label for each character. The bounding box method is described in Equation (2) [27] and
Algorithm 2. This algorithm is a bounding box algorithm to describe the target location.
The bounding box is a rectangular box that can be determined by the x‑ and y‑axis coordi‑
nates in the upper‑left corner, and the x‑ and y‑axis coordinates in the lower‑right corner
of the rectangle.

E(x) = Σ_{p∈β} U_p × x_p + Σ_{{p,q}∈ε} V_pq × |x_p − x_q|,   x_p ∈ {0, 1}   (2)

where β is an image with a set of pixels p ∈ β; x_p is the label of an individual pixel, taking the value 1 for foreground and 0 for background; and ε, U_p, and V_pq denote the pairs of adjacent pixels, the unary potentials, and the pairwise potentials, respectively.

Algorithm 2: Bounding Box Method of Plate Region

For i = 1 : N (each video segment) do
    For t = 1 : L_i (each frame) do
        Detect character as foreground object with a target bounding box;
        Form the target image region R_i defined by the target bounding box;
        Normalize the Region of Interest (ROI) with preservation of the aspect ratio;
        Compute shape descriptor p_i ∈ M from the normalized ROI, where M is the underlying Riemannian manifold;
    End
    Collect the set of manifold points {p_i}, t = 1, ..., L_i;
    Compute the final feature vector x_i;
End
Construct the training set X = {(x_i, y_i)}, i = 1, ..., N;
Train the CNN classifier using X with cross-validation
Figure 9. Segmentation performed on each part of three different number plates.

The algorithm selects a frame and detects characters as foreground objects with a target bounding box. Then, the target image region defined by the target bounding box is formed and the ROI is normalized (preserving the aspect ratio). Afterwards, a feature vector was extracted for the target image. The training data were matched with the features of the target image to segment the image. This process was continued until all of the sections were segmented.
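A minimal sketch of bounding-box character segmentation with OpenCV follows; Otsu thresholding, the noise-area cut-off, and the 28 × 28 output size are illustrative assumptions, not values taken from the paper.

import cv2

def segment_characters(plate_gray):
    # Binarize the plate image, find connected foreground regions, and crop
    # each region using its rectangular bounding box.
    _, binary = cv2.threshold(plate_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = sorted(cv2.boundingRect(c) for c in contours)  # left-to-right order
    chars = []
    for x, y, w, h in boxes:
        if w * h > 50:  # drop tiny noise blobs (threshold is an assumption)
            chars.append(cv2.resize(binary[y:y + h, x:x + w], (28, 28)))
    return chars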
The features were extracted according to three classes for the segmented number
plates. The first class consisted of 64 categories (64 districts of Bangladesh). In the sec‑
ond class, there were 19 categories according to the vehicle type. The last class consisted
of 10 categories (0–9), which identified the vehicle number. A deep learning technique,
CNN, was employed to train the classes of the vehicle city, type number using 700 num‑
ber plate images of different angles, and resolutions to introduce a high‑resolution train‑
able image. The CNN extracted features from different categories and stored the features ×
×
in three separate vectors [28]. Figure 10 shows the proposed CNN AlexNet architecture.
The AlexNet model consisted of five convolutional × layers, three max‑pooling layers, two
×
fully connected layers, and one Softmax layer. Each convolutional layer took on convo‑
×
lutional filters and a nonlinear activation function ReLU. The pooling layers were used
to perform max pooling. The first convolution layer contained 96 filters with a filter × size
of 11 × 11 and a Stride of 4. On the other hand, Layer 2 ×had Output
a layer
size of 55 × 55 and the
96 filters of number of filters was 256. Layers 3 and 4Max contained
size 11×11 256 filters of poling × filter sizes of 13 × 13, with 384 filters.
The lastsize
convolutional
5×5 layer contained
Filters ×
a
size filter size
layer 5 of 13 × 13, with 256 filters. The two fully
384 filters of
Input connected layers× provided 1 × 40961×1features to classify the output by the Softmax layer.
size 3×3
Image
108 classes
Output layer
96 filters of
size 11×11 256 filters of Max poling
Input image Convolutional
layer 5
224×224×1 size 5×5 Filters size
384 filters of Layer 5
Input 1×1 Fully Fully
size 3×3 Convolutional
Image Convolutional connected connected
+Max poling layer 1 layer
1082classes
+Max poling Convolutional Convolutional Layer 4
Layer 1 +Max poling +Max poling
Input image Layer 2 Layer 3 Convolutional
224×224×1 Layer 5
Fully Fully
Convolutional connected connected
Convolutional +Max poling
+Max poling Convolutional Convolutional layer 1 layer 2
Layer 4
Layer 1 +Max poling +Max poling
Layer 2 Layer 3

Figure 10. Proposed convolutional neural network (CNN) architecture for vehicle number plate recognition.
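To make the architecture concrete, the following is a hypothetical PyTorch rendering of the AlexNet-style network described above (grayscale 224 × 224 input, five convolutional layers, three max-pooling layers, two 4096-unit fully connected layers, and 108 output classes); padding values, pooling sizes, and other hyper-parameters not stated in the text follow the standard AlexNet and are assumptions.

import torch.nn as nn

class PlateAlexNet(nn.Module):
    # Illustrative AlexNet-style CNN for the number plate character classes.
    def __init__(self, num_classes=108):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 96, kernel_size=11, stride=4, padding=2),  # 55 x 55 maps
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                  # 27 x 27
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                  # 13 x 13
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                  # 6 x 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),  # Softmax is applied by the loss function
        )

    def forward(self, x):
        return self.classifier(self.features(x))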
After segmentation, the number plate sub-images for vehicle city, type, and number were obtained. The CNN extracted features of these images to recognize the vehicle city, type, and number separately, by transforming the image text into characters. An image size of 224 × 224 × 1 was used for feature extraction, and the system used AlexNet as the CNN model. Figure 11 shows the recognition process of each character.

Figure 11. Recognition of each character to text after extracting features from the detected num‑
ber plates.

Table 2 presents a comparison of detection, segmentation, and recognition techniques


used in this investigation and in other relevant literature. The key novelty in the proposed
system compared to the literature was the capability of producing high‑resolution images
from blurred images, by using the super resolution method to recognize the number plate
characters with high accuracy. Furthermore, the employment of machine learning based
on the AlexNet model can identify text from images correctly, to obtain a recognition rate
higher than the other techniques.

Table 2. Detection, segmentation, and recognition techniques of the applied method and existing methods.

Reference | Detection | Segmentation | Recognition
Proposed system | Template matching | Bounding box method for extracting the plate region; super resolution techniques used to get clear images | CNN
[29] | Sobel edge detection with additional morphological operations | Line segmentation, word segmentation based on area filtering | Feed forward neural network
[30] | Connected component technique | - | Template matching
[31] | Sobel edge detector | Line segment orientation (LSO) algorithm | Template matching
[25] | CNN | CNN | CNN

4. Simulation Results and Discussions


An experiment was carried out on MATLAB R2018a simulator (Image Processing
Toolbox™) for recognition of vehicle number plates. This detection and recognition pro‑
cess consisted of the following steps:
• Step 1: the system captured a video of the vehicle. Then, the system extracted the
vehicle frame from the video and localized the number plate from the vehicle image.
The number plate images were converted to high‑resolution images to perform accu‑
rate segmentation. For extracting the number plate, the template matching method
was used.
• Step 2: for segmentation, the system used the bounding box method to segment each
character. Each letter or word was mapped with a box value and extracted groups
of characters. Figure 12 illustrates the segmented characters from the vehicle number
plate images.
• Step 3: the system used CNN for extracting features and tested number plates on the VLPR vehicle dataset. In order to evaluate the experimental results, 700 vehicle images were used. The AlexNet model was employed for training the CNN. The system performed a maximum of 70 iterations for each input set, and the iterations were stopped when the minimum error rate specified by the user was reached. The error rate for this system was 1.8%. After training, the CNN acquired 98.2% accuracy based on the validation set, and attained 98.1% accuracy based on the testing set (a minimal training-loop sketch is given after this list).
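Below is a minimal, hypothetical PyTorch training-loop sketch for the CNN above; the data loader, optimizer, and learning rate are illustrative assumptions, since the paper only states that training ran for a maximum of 70 iterations per input set and stopped at a user-specified minimum error rate.

import torch
import torch.nn as nn

def train(model, loader, max_epochs=70, target_error=0.018, device="cpu"):
    # Illustrative training loop: stop after max_epochs or once the running
    # error rate falls below the user-specified minimum.
    model.to(device).train()
    optimiser = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(max_epochs):
        wrong, total = 0, 0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimiser.zero_grad()
            logits = model(images)
            loss = loss_fn(logits, labels)
            loss.backward()
            optimiser.step()
            wrong += (logits.argmax(dim=1) != labels).sum().item()
            total += labels.size(0)
        if wrong / total <= target_error:  # e.g., the reported 1.8% error rate
            break
    return model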

Figure 12. Example of character segmentation from a vehicle number plate.

The error rate Ei of an individual run depends on the number of samples incorrectly classified (false positives plus false negatives), and is evaluated by Equation (3):

Ei = (f / n) × 100    (3)
where f is the number of sample cases incorrectly classified, and n is the total number of
sample cases.
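As a small worked illustration of Equation (3) (the counts below are hypothetical, chosen only to reproduce the reported 1.8% figure):

def error_rate(false_positives, false_negatives, n_samples):
    # Equation (3): percentage of incorrectly classified samples.
    f = false_positives + false_negatives
    return f * 100.0 / n_samples

# Hypothetical example: 9 misclassified samples out of 500 training images
# gives the 1.8% error rate reported above.
print(error_rate(5, 4, 500))  # 1.8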
Table 3 shows comparative accuracies and computational processing time between
the proposed system and the systems employed in relevant literature. In [30], the authors
estimated that 1.3 s was taken to complete the image processing, whereas the proposed
system in this experiment took, on average, only 111 milliseconds to complete the whole
process. A Graphics Processing Unit (GEFORCE RTX 2070 super) with 16 GB RAM was
used in this system to perform quick computation. With a comparable sample size for
training and testing, the accuracy of the proposed system was found much higher than
the others available in the literature.

Table 3. Comparative accuracy of the applied system and the existing systems for image processing.

Reference | Sample Size | Localization | Accuracy | Processing Time
Proposed system | Training: 500; Testing: 200 | 100% | 98.2% | 111 milliseconds for the whole process
[29] | Testing: 300 | 84% | 80% | -
[30] | Testing: 120 | - | - | 1.3 s
[31] | Testing: 119 | 95.8% | 84.87% | -
[25] | Training: 450; Testing: 50 | 88.67% | - | -
A comparison of the prediction accuracies for detecting number plates between different CNN models, such as the scratch model [32], ResNet50 [33], VGG 16 [34], and the model employed in this work (AlexNet) [35], is presented in Figure 13. The total number of true positives (TP) and true negatives (TN) was divided by the total number of TP, TN, false positives (FP), and false negatives (FN) to measure the accuracy (Equation (4)).

Accuracy (ACC) = (TP + TN) / (TP + TN + FP + FN)    (4)

Figure 13. Comparative study on prediction accuracy of different CNN models.

It can be concluded that AlexNet outperforms (98.2%) the other models for image pro‑
cessing. Figure 14 shows the performance comparison in terms of peak signal to noise ra‑
tio (PSNR) using different super resolution techniques, such as interpolation‑based SR [36]
and reconstruction‑based SR [37]. The spatial super resolution technique used in this study
showed high PSNR (33.7845 dB) compared to other related works, such as interpolation‑
based (32.3676 dB) and reconstruction‑based (32.4787 dB) super resolution techniques, in‑
dicating a better quality of the compressed or reconstructed images.
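For reference, the PSNR between a reference image and its reconstruction can be computed as in the sketch below; this is the standard definition and not code taken from the paper.

import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    # Peak signal-to-noise ratio in dB between two images of equal shape.
    mse = np.mean((reference.astype(np.float64) -
                   reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)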
In addition, 700 number plates collected in different backgrounds and different illumi‑
nation conditions were used to test the number plate detection. Among them, 605 number
plates were correctly identified equating to a recognition rate of 86.50%. A total of 133 valid
characters were checked and 121 of them were correctly identified, which showed that the
character recognition rate was significantly high (90.9%). The average recognition time
for a single number plate was calculated as 707 ms from a sample of 700 number plates.
The recognition rates and times for characters are summarized in Table 4.

Figure 14. Comparative study on peak signal to noise ratio (PSNR) of different super resolution
techniques.

Table 4. Identification performance of each character.

Object | Parameter | Value
Letters | Recognition Rate (%) | 86.5
Letters | Recognition Time (ms) | 48.6
Numbers | Recognition Rate (%) | 97.8
Numbers | Recognition Time (ms) | 48.9
Characters (Letters & Numbers) | Recognition Rate (%) | 90.9
Characters (Letters & Numbers) | Recognition Time (ms) | 52.3

A system presented in [38] might have failed if the texture of the plate was not clear
and the number plate region was short of the threshold of the projection operation. To solve
this problem, this proposed system brought novelty in employing the super resolution
technique to organize the non‑redundant data contained in abundant low‑resolution
frames, and to produce a high‑resolution image. The technique converted blur images
to clear images to recognize the texts correctly. A vehicle number plate detection system
presented in [39] was unable to properly recognize some characters and numbers, such as
2, 0, 7, and more, due to having a lower recognition rate compared to the other charac‑
ters. In this system, the machine learning based CNN model was employed to recognize
all characters and numbers correctly. Thus, the recognition rate was higher than the other
techniques reported in the literature. Capturing images by directly pointing the camera to‑
wards the number plates was not feasible for real‑world usage due to complex background
and environment [40,41]. This system employed a novel template matching technique to
detect vehicle number plate images from the video in order to extract the target region
more accurately.
Furthermore, a similar system was proposed for Bangla number plate detection in [42],
which showed a limitation in correctly recognizing characters from heavily blurred images.
However, the proposed system could solve this issue by employing the super resolution
technique to get high‑resolution images for easy detection. The previous system recog‑
nized only Bangla number plates in contrast to this system having the capability of recog‑
nizing the vehicle number plates from other countries. Furthermore, the previous system
used only 200 number plate images in comparison to 700 number plate images contain‑
ing very high and very low resolutions for training the proposed system. Therefore, low‑
resolution images during testing were detected and recognized accurately by the system.
Vehicle number plate recognition system (VNPRS) can play a vital role in implement‑
ing technologies for smart cities, such as traffic control, smart parking, toll automation,
driverless car, air quality monitoring, security, etc. [43,44]. A conceptual framework for
integrating VNPRS with the smart city systems is presented in Figure 15. The key theme in
the smart city is about collecting data from the individual systems by sensors and cameras,
communicating within different systems, and taking action from the hidden information
within the data. Automatic vehicle number identification could provide a base data to the
network of the smart city. For example, in case of monitoring and managing security for
a particular location within a city, VNPRS can serve as the tracking aid for the security
authority. The VNPRS can be connected to a cloud‑based system where all registered ve‑
hicle numbers will be stored. A vehicle number plate recognized by the system will be
directed to the cloud system for matching with the database, and identifying the vehicle
user information, for taking further action by the relevant authority.

Figure 15. Connecting the vehicle number plate recognition system with a smart city.

5. Conclusions
In this research, a system is proposed for detecting and recognizing vehicle number
plates in Bangladesh, which are written in the Bengali language. In this system, the images
of the vehicles are captured and then the number plate regions are extracted using the tem‑
plate matching method. Then, the segmentation of each character is performed. Finally,
a convolutional neural network (CNN) is used for extracting features of each character
that classifies the vehicle city, type, and number, to recognize the characters of the num‑
ber plate. The CNN provides a large number of features to help with accurate recognition
of characters from the number plate. This research used super resolution techniques to
recognize characters with high resolution. In order to evaluate the experiment results,
700 vehicle images (with 70 iterations for each input set) were used. After training,
the CNN acquired 98.2% accuracy based on the validation set, and attained 98.1% accu‑
racy based on the testing set. This system can also be used for the number plates written
in other languages in the same way.

Author Contributions: Conceptualization: N.‑A.‑A., M.A.B.; methodology: N.‑A.‑A., M.A.B.; for‑


mal analysis and investigation: N.‑A.‑A., M.A., J.H., M.A.B.; writing—original draft preparation:
N.‑A.‑A., M.A.B.; writing—review and editing: N.‑A.‑A., M.A., J.H., M.A.B.; funding acquisition:
N.‑A.‑A., M.A.B., resources: N.‑A.‑A., M.A.B., supervision: M.A., J.H., M.A.B. All authors have read
and agreed to the published version of the manuscript.

Funding: The research received no funding.


Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available on request from the
corresponding author. The data are not publicly available due to privacy.
Acknowledgments: The authors would like to thank Bangladesh Road Transport Authority (BRTA)
for providing the number plate data sets.
Conflicts of Interest: On behalf of all authors, the corresponding authors state that there is no conflict
of interest.

References
1. Kocer, H.E.; Cevik, K.K. Artificial neural networks‑based vehicle license plate recognition. Procedia Comput. Sci. 2011, 3, 1033–
1037. [CrossRef]
2. Balaji, G.N.; Rajesh, D. Smart Vehicle Number Plate Detection System for Different Countries Using an Improved Segmentation
Method. Imp. J. Interdiscip. Res. 2017, 3, 263–268.
3. Annual Report, Bangladesh Road Transport Authority (BRTA). Available online: http://www.brta.gov.bd/ (accessed on 20 April
2020).
4. Wu, C.; On, L.C.; Weng, C.H.; Kuan, T.S.; Ng, K. A Macao license plate recognition system Machine Learning and Cybernetics.
In Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, Guangzhou, China, 18–21 August
2005; IEEE: New York, NY, USA, 2005; 7, pp. 18–21. [CrossRef]
5. Lopez, J.M.; Gonzalez, J.; Galindo, C.; Cabello, J. A Simple Method for Chinese License Plate Recognition Based on Support Vector
Machine Communications. In Proceedings of the 2006 International Conference on Communications, Circuits and Systems,
Guilin, China, 25–28 June 2006; IEEE: New York, NY, USA, 2006; 3, pp. 2141–2145. [CrossRef]
6. Prabhakar, P.; Anupama, P. A novel design for vehicle license plate detection and recognition. In Proceedings of the Second
International Conference on Current Trends in Engineering and Technology‑ICCTET, Coimbatore, India, 8 July 2014; IEEE: New
York, NY, USA, 2014. [CrossRef]
7. Anagnostopoulos, C.N.E.; Anagnostopoulos, I.E.; Psoroulas, I.D.; Loumos, V.; Kayafas, E. License plate recognition from still
images and video sequences: A survey. IEEE Trans. Intell. Transp. Syst. 2008, 9, 377–391. [CrossRef]
8. Patel, C.; Shah, D. Automatic Number Plate Recognition System. Int. J. Comput. Appl. 2013, 69, 1–5.
9. Du, S.; Ibrahim, M.; Shehata, M.; Badawy, W. Automatic license plate recognition (ALPR): A state‑of the‑art review. IEEE Trans.
Circuits Syst. Video Technol. 2013, 23, 311–325. [CrossRef]
10. Zhao, Z.; Yang, S.; Ma, X. Chinese License Plate Recognition Using a Convolutional Neural Network. In Proceedings of the 2008
IEEE Pacific‑Asia Workshop on Computational Intelligence and Industrial Application, Wuhan, China, 19–20 December 2008;
IEEE: New York, NY, USA, 2008; pp. 27–30. [CrossRef]
11. Telatar, Z.; Camasircioglu, E. Plate Detection and Recognition by using Color Information and ANN. In Proceedings of the 2007
IEEE 15th Signal Processing and Communications Applications, Eskisehir, Turkey, 11–13 June 2007; IEEE: New York, NY, USA,
2007. [CrossRef]
12. Khan, N.Y.; Imran, A.S.; Ali, N. Distance and Color Invariant Automatic License Plate Recognition System. In Proceedings of
the 2007 International Conference on Emerging Technologies , Islamabad, Pakistan, 12–13 November 2007; IEEE: New York, NY,
USA, 2007; pp. 232–237. [CrossRef]
13. Juntanasub, R.; Sureerattanan, N. Car license plate recognition through Hausdorff distance technique, Tools with Artificial Intel‑
ligence. In Proceedings of the 17th IEEE International Conference on Tools with Artificial Intelligence (ICTAI’05), Hong Kong,
China, 14–16 November 2005; IEEE: New York, NY, USA, 2005. [CrossRef]
14. Feng, Y.; Fan, Y. Character recognition using parallel BP neural network, International Conference on Language and Image
Processing. In Proceedings of the 2008 International Conference on Audio, Language and Image Processing, Shanghai, China,
7–9 July 2008; IEEE: New York, NY, USA, 2008; pp. 1595–1599. [CrossRef]
15. Patel, S.G. Vehicle License Plate Recognition Using Morphology and Neural Network. Int. J. Mach. Learn. Cybern. (IJCI) 2013, 2,
1–7. [CrossRef]
16. Syed, Y.A.; Sarfraz, M. Color edge enhancement based fuzzy segmentation of license plates. In Proceedings of the Ninth In‑
ternational Conference on Information Visualisation (IV’05), London, UK, 6–8 July 2005; IEEE: New York, NY, USA, 2005; pp.
227–232. [CrossRef]
17. Hidayah, M.R.; Akhlis, I.; Sugiharti, E. Recognition Number of the Vehicle Plate Using Otsu Method and K‑Nearest Neighbour
Classification. Sci. J. Inform. 2017, 4, 66–75. [CrossRef]
18. Liu, W.‑C.; Lin, C.H. A hierarchical license plate recognition system using supervised K‑means and Support Vector Machine. In
Proceedings of the 2017 International Conference on Applied System Innovation (ICASI); Sapporo, Japan, 13–17 May 2017, IEEE: New
York, NY, USA, 2017; pp. 1622–1625.
19. Quiros, A.R.F.; Bedruz, R.A.; Uy, A.C.; Abad, A.; Bandala, A.; Dadios, E.P.; La Salle, D. A kNN‑based approach for the machine
vision of character recognition of license plate numbers. In Proceedings of the TENCON 2017—2017 IEEE Region 10 Conference,
Penang, Malaysia, 5–8 November 2017; IEEE: New York, NY, USA, 2017; pp. 1081–1086.
20. Thangallapally, S.K.; Maripeddi, R.; Banoth, V.K.; Naveen, C.; Satpute, V.R. E‑Security System for Vehicle Number Tracking
at Parking Lot (Application for VNIT Gate Security). In Proceedings of the 2018 IEEE International Students’ Conference on
Electrical, Electronics and Computer Science (SCEECS), Bhopal, India, 24–25 February 2018; IEEE: New York, NY, USA, 2018;
pp. 1–4.
21. Singh, A.K.; Roy, S. ANPR Indian system using surveillance cameras. In Proceedings of the 2015 Eighth International Conference
on Contemporary Computing (IC3), Noida, India, 20–22 August 2015; IEEE: New York, NY, USA, 2015; pp. 291–294.
22. Sanchez, L.F. Automatic Number Plate Recognition System Using Machine Learning Techniques. Ph.D. Thesis, Cranfield Uni‑
versity, Cranfield, UK, 2017.
23. Panahi, R.; Gholampour, I. Accurate detection and recognition of dirty vehicle plate numbers for high‑speed applications. IEEE
Trans. Intell. Transp. Syst. 2016, 18, 767–779. [CrossRef]
24. Subhadhira, S.; Juithonglang, U.; Sakulkoo, P.; Horata, P. License plate recognition application using extreme learning machines.
In Proceedings of the 2014 Third ICT International Student Project Conference (ICT‑ISPC), Nakhon Pathom, Thailand, 26–27
March 2014; IEEE: New York, NY, USA, 2014; pp. 103–106.
25. Rahman, M.S.; Mostakim, M.; Nasrin, M.S.; Alom, M.Z. Bangla License Plate Recognition Using Convolutional Neural Networks
(CNN). In Proceedings of the 2019 22nd International Conference on Computer and Information Technology (ICCIT), Dhaka,
Bangladesh, 18–20 December 2019; IEEE: New York, NY, USA, 2019; pp. 1–6.
26. Leung, B.; Memik, S.O. Exploring super‑resolution implementations across multiple platforms. EURASIP J. Adv. Signal Process.
2013, 1, 116. [CrossRef]
27. Lempitsky, V.; Kohli, P.; Rother, C.; Sharp, T. Image segmentation with a bounding box prior. In Proceedings of the 2009 IEEE
12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; IEEE: New York, NY, USA, 2009;
pp. 277–284. [CrossRef]
28. Youcef, M.‑A.H. Convolutional Neural Network for Image Classification with Implementation on Python Using PyTorch. 2019.
Available online: https://mc.ai/convolutional‑neural‑network‑for‑image‑classification‑with‑implementation‑on‑python‑using‑
pytorch/ (accessed on 4 March 2020).
29. Ghosh, A.K.; Sharma, S.K.D.; Islam, M.N.; Biswas, S.; Akter, S. Automatic license plate recognition (ALPR) for Bangladeshi vehicles.
Glob. J. Comput. Sci. Technol. 2011, 11, 1–6.
30. Baten, R.A.; Omair, Z.; Sikder, U. Bangla license plate reader for metropolitan cities of Bangladesh using template matching. In
Proceedings of the 8th International Conference on Electrical and Computer Engineering, Dhaka, Bangladesh, 20–22 December
2014; IEEE: New York, NY, USA, 2014; pp. 776–779. [CrossRef]
31. Haque, M.R.; Hossain, S.; Roy, S.; Alam, N.; Islam, M.J. Line segmentation and orientation algorithm for automatic Bengali
license plate localization and recognition. Int. J. Comput. Appl. 2016, 154, 21–28. [CrossRef]
32. Khan, A.; Sohail, A.; Zahoora, U.; Qureshi, A.S. A survey of the recent architectures of deep convolutional neural networks. Artif.
Intell. Rev. 2019, 53, 5455–5516. [CrossRef]
33. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM
2017, 60, 84–90. [CrossRef]
34. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large‑scale image recognition. arXiv 2014, arXiv:1409.1556.
35. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [CrossRef]
36. Ramos‑Llordén, G.; Vegas‑Sánchez‑Ferrero, G.; Martin‑Fernandez, M.; Alberola‑López, C.; Aja‑Fernández, S. Anisotropic Dif‑
fusion Filter with Memory Based on Speckle Statistics for Ultrasound Images. IEEE Trans. Image Process. 2014, 24, 345–358.
[CrossRef] [PubMed]
37. Tsai, R. Multiframe Image Restoration and Registration. Adv. Comput. Vis. Image Process. 1984, 1, 317–339.
38. Li, B.; Zeng, Z.Y.; Zhou, J.Z.; Dong, H.L. An algorithm for license plate recognition using radial basis function neural network.
In Proceedings of the 2008 International Symposium on Computer Science and Computational Technology, Shanghai, China,
20–22 December 2008; IEEE: New York, NY, USA, 2008; pp. 569–572.
39. Shan, B. Vehicle License Plate Recognition Based on Text‑line Construction and Multilevel RBF Neural Network. J. Comput. Sci.
2011, 6, 246–253. [CrossRef]
40. Mutholib, A.; Gunawan, T.S.; Kartiwi, M. Design and implementation of automatic number plate recognition on android plat‑
form. In Proceedings of the 2012 International Conference on Computer and Communication Engineering (ICCCE), Kuala
Lumpur, Malaysia, 3–5 July 2012; IEEE: New York, NY, USA, 2012; pp. 540–543.
41. Romadhon, R.K.; Ilham, M.; Munawar, N.I.; Tan, S.; Hedwig, R. Android‑based license plate recognition using pre‑trained neural
network. Internet Work. Indones. J. 2012, 4, 15–18.
42. Saif, N.; Ahmmed, N.; Pasha, S.; Shahrin, M.S.K.; Hasan, M.M.; Islam, S.; Jameel, A.S.M.M. Automatic License Plate Recognition
System for Bangla License Plates using Convolutional Neural Network. In Proceedings of the TENCON 2019—2019 IEEE Region
10 Conference (TENCON), Kochi, India, 17–20 October 2019; IEEE: New York, NY, USA, 2019; pp. 925–930.
43. Polishetty, R.; Roopaei, M.; Rad, P. A next‑generation secure cloud‑based deep learning license plate recognition for smart cities.
In Proceedings of the 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Anaheim, CA,
USA, 18–20 December 2016; IEEE: New York, NY, USA, 2016; pp. 286–293.
44. Chen, Y.S.; Lin, C.K.; Kan, Y.W. An advanced ICTVSS model for real‑time vehicle traffic applications. Sensors 2019, 19, 4134.
[CrossRef] [PubMed]
