
Journal of Electrical Engineering & Technology (2023) 18:2275–2285
https://doi.org/10.1007/s42835-022-01314-w

ORIGINAL ARTICLE

Lightweight Deep Learning-Based Model for Traffic Prediction in Fog-Enabled Dense Deployed IoT Networks

Abdelhamied A. Ateya1,2 · Naglaa F. Soliman3 · Reem Alkanhel3 · Amel A. Alhussan4 · Ammar Muthanna5 · Andrey Koucheryavy2

Received: 10 January 2022 / Revised: 20 October 2022 / Accepted: 8 November 2022 / Published online: 23 November 2022
© The Author(s) under exclusive licence to The Korean Institute of Electrical Engineers 2022
Abstract
Internet of Things (IoT) is one of the promising technologies, announced as one of the primary use cases of the fifth-generation cellular systems (5G). It has many applications that cover many fields, moving from indoor applications, e.g., smart homes, smart metering, and healthcare applications, to outdoor applications, including smart agriculture, smart city, and surveillance applications. This produces massive heterogeneous traffic that loads the IoT network and other integrated communication networks, e.g., 5G, which represents a significant challenge in designing IoT networks, especially with dense deployment scenarios. To this end, this work considers developing a novel artificial intelligence (AI)-based framework for predicting traffic over IoT networks with dense deployment. This facilitates traffic management and avoids network congestion. The developed AI algorithm is a deep learning model based on the convolutional neural network; it is a lightweight algorithm that can be implemented by a distributed edge computing node, e.g., a fog node, with limited computing capabilities. The considered IoT model deploys distributed edge computing to enable dense deployment, increase network availability, reliability, and energy efficiency, and reduce communication latency. The developed framework has been evaluated, and the results are introduced to validate the proposed prediction model.

Keywords Tactile Internet · Cloud · 5G · Mobile edge computing · Latency

1 Introduction

The Internet of Things (IoT) is one of the most promising communication systems, introducing a massive number of applications and services in all life fields [1, 2]. It represents the third evolution of the traditional Internet that enables the paradigm of machine-to-machine (M2M) communications [3]. IoT has been announced as a fifth-generation cellular system (5G) use case; however, integrating IoT with the cellular system faces many challenges.

* Abdelhamied A. Ateya, [email protected]
Naglaa F. Soliman, [email protected]
Reem Alkanhel, [email protected]
Amel A. Alhussan, [email protected]
Ammar Muthanna, [email protected]
Andrey Koucheryavy, [email protected]

1 Department of Electronics and Communications Engineering, Zagazig University, Zagazig 44519, Sharqia, Egypt
2 Department of Telecommunication Networks and Data Transmission, St. Petersburg State University of Telecommunication, St. Petersburg, Russia 193232
3 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
4 Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
5 Department of Applied Probability and Informatics, Peoples' Friendship University of Russia (RUDN University), Moscow, Russia 117198

Heterogeneous massive traffic is the main challenge facing IoT-based cellular systems [4, 5].

By 2025, IoT-connected devices are expected to be ten times the currently existing number [6]. This increase in connected devices will introduce IoT connections in many applications. Figure 1 illustrates the percentage increase in IoT connections in 2025 compared with the existing number in 2021. The percentage increase is introduced per application [7].

Fig. 1 Expected percentage increase in IoT connections per application in 2025 compared to 2021 (applications covered: smart home, consumer electronics, smart vehicles, wearables, healthcare, smart utilities, smart manufacturing, smart retail, smart buildings, smart city, and others)

This massive number of applications introduces huge heterogeneous network traffic that represents a challenge in IoT network design, especially for cellular-based IoT systems, e.g., Narrow-band IoT (NB-IoT) [8, 9]. One of the main deployment scenarios supported by 5G systems is dense deployment, and IoT should support dense deployment scenarios and high scalability [10]. This is due to the recent innovation in sensor manufacturing that results in a surprising number of available sensors.

Many tools and solutions have been introduced to reduce the effect of the plurality and the vast amount of IoT network traffic. Machine learning (ML) is one of the most effective tools to minimize such traffic effects and maintain network performance at higher levels, even if the network traffic is high. ML is used to classify IoT network traffic and predict network traffic over a certain period [11–13]. These ML-based classification and prediction models are implemented at the core network and application server to facilitate network management operations. However, recent proposals consider developing and implementing such models at the edge of the access network.

Distributed edge computing, e.g., fog computing, is another effective paradigm used for IoT networks to increase network scalability and enable dense deployment [14]. Recently, fog computing has been introduced for IoT networks to provide computing resources to the battery-operated end devices. This achieves many benefits to the IoT networks that can be summarized in the following points [15, 16]:

• Reducing the communication latency,
• Reducing the network congestion,
• Improving network management operations,
• Providing a path for data offloading,
• Increasing the overall network reliability,
• Increasing the overall network scalability, and
• Increasing the energy efficiency and thus the battery life of end devices.

The introduction of fog computing facilitates the implementation and integration of ML algorithms developed for IoT networks at the edge of the access network [17]. However, this integration has a certain level of complexity due to the limited computing resources, e.g., storage and processing, of fog nodes [18]. This work aims to develop a novel ML algorithm based on a convolutional neural network (CNN) to predict the network traffic in dense deployed IoT networks. The algorithm is lightweight so that it can be implemented on fog nodes. The main contributions of the work can be summarized in the following points.

• Design and development of a novel framework of IoT network based on distributed fog computing,
• Design and development of a lightweight ML algorithm based on CNN to predict the network traffic over a certain period,
• Training the developed ML algorithm using two datasets,
• Evaluating the performance of the developed framework.

The rest of the article is organized as follows: Sect. 2 provides the related works to the developed framework. Section 3 introduces the proposed ML-based framework for IoT traffic prediction. Section 4 presents the performance evaluation of the proposed work.


2 Related Works

The traditional cloud-based IoT architecture cannot meet the demanding requirements of 5G systems, given the growth of wireless smart devices and communication technologies. Recently, the proliferation of network applications has resulted in an explosion of network traffic. Given the number of connected devices and the real-time nature of many links, it is even more critical for IoT networks. Predicting IoT traffic in the modern era has garnered considerable attention to maximize bandwidth and channel capacity utilization. Network management necessitates using technology to classify network traffic without the operator's intervention. Numerous studies have concentrated on accurately classifying network traffic.

Many existing proposals consider developing ML algorithms for traffic management in IoT networks [19, 20]. Some existing literature considers ML for IoT traffic classification [21]. Many existing works consider developing ML algorithms to classify IoT traffic at the IoT gateway and thus enable the network service providers to detect unauthorized traffic and facilitate the network management process.

The other part of the existing ML-based works for IoT traffic considers predicting network traffic [22]. Such predictors can provide the future status of the IoT network, over a certain time interval, based on specific parameters, such as the previous network status. This section considers the recent existing literature that develops ML algorithms for predicting IoT network traffic. Only the works most related to our developed framework are included.

In [23], the authors introduced a novel cost-sensitive CNN model for network traffic classification that facilitates the feature extraction process of the network traffic. The work has mainly considered the problem of data unbalancing during the deep neural network training process. A cost matrix has been used to assign higher values to classes with low frequency and lower values to classes with high rates. This cost function achieves higher accuracy in the classification process. The developed model achieved 98% accuracy for two-class classification, with only 2% of the network traffic misclassified. The authors have not considered the implementation evaluation of the proposed model or its efficiency for dense networks with heterogeneous traffic.

In [24], the authors considered traffic sensing from social media by extracting traffic-related microblogs from the Sina Weibo platform, representing the most critical way to extract detailed traffic information, such as the location of a traffic incident. ML models turn the problem into a short text classification problem. The authors developed a deep neural network model to classify microblogs into two classes: traffic with high significance and others with no critical relevance. A continuous bag-of-words (CBOW) model was used to learn word embedding representations from a dataset of three billion unlabeled microblogs. Experiments demonstrated that the developed deep neural network model outperforms the support vector machine (SVM) and multilayer perceptron (MLP) methods.

In [25], the authors introduced datasets of network packets to train five considered deep neural networks using convolutional neural networks (CNNs) and residual networks (ResNets). The packets were turned into images used as inputs to the neural networks. Deep learning models were used to classify network traffic based on CNNs and ResNets. Moreover, a cross-validated grid model was used to get the optimum hyper-parameters that achieve the optimum performance of the deep learning model. Results indicated that the developed model classified the network traffic with high performance.

In [26], the authors investigated the neural network's ability to classify time series data with Long-Range Dependence of the Internet, using the Hurst exponent as a measure of self-similarity. Synthetic data was generated using Fractional Gaussian noise to model the real data. The trained model is shown to classify synthetic data derived from the Pareto distribution and real-world traffic data. Various cost function optimizers and different numbers of convolutional layers were used for evaluation. Results indicated that individual optimizers achieved comparable performance; thus, training models with multiple types of optimizers is a good practice for determining the model with the highest accuracy.

In [27], the authors developed a method based on data conversion and CNN to predict IoT traffic. The CNN is a lightweight one that forms the feature map based on a spatio-temporal model. Moreover, another lightweight neural network has been developed to reduce traffic prediction computations. This neural network has been introduced to minimize mean errors and improve the accuracy of the prediction. Firstly, data is processed to get the required spatio-temporal features. Then, a lightweight prediction algorithm is introduced to predict traffic in IoT networks. The algorithm is based on deep learning, and the required parameters are optimized at first. The algorithm has been trained using a dataset from a real service provider, i.e., Telecom Italia, with 90% of the data for training. The rest, 10%, has been used to test the algorithm. Data introduced in the dataset has been collected over 50 days, with a data interval of 10 min. This work differs from our developed model, as our proposed model has a different CNN structure with a different data pre-processing method. Furthermore, our proposed work is trained using two datasets with a time interval of 30 min. Also, our framework depends on implementing the developed algorithm on distributed fog nodes, which increases the prediction process's performance and effect.

In [28], the authors developed a CNN model to classify Tor traffic. The packet headers are pre-processed to turn a portion of the packet, i.e., the first 54 bytes, into decimal format and provide them as input to the CNN. The developed model was trained using the UNB-CIC dataset of Tor traffic, with 80% of data for training and 20% for testing. The system achieved an accuracy of 99.3% while testing.

In [29], the authors developed a supervised deep-learning algorithm for traffic prediction in IoT networks. The developed model is an adaptive gradient boosting (GB) model based on learning blocks. The developed neural network model performs the prediction process faster and more easily due to the employment of GB.


The developed neural network model was trained end-to-end using a dataset of real traffic recorded from a mobile operator. The deep learning model was trained using a dataset collected from 6214 mobile users, with heterogeneous activities over 26 days. The developed model predicts for six hours, i.e., six output predicted values, based on a time interval of one hour. The main advantages of the developed model include easy training, speed of prediction, and accurately predicted results. However, the work had not considered the implementation of the developed model and the computation cost.

In [30], the authors considered the problem of network traffic change using CNN. The prediction of network traffic has been turned into a classification process, and a developed deep learning model has been introduced for such classification problems. The output of the classifier is one of the network traffic classes. The CNN was trained using the EDU1 dataset, with 80% of data for training, and the other 20% was used for testing. The developed algorithm was evaluated and validated with an accuracy of 92.6%.

In [31], the authors developed an intelligent traffic prediction model with cognitive caching to enable real-time prediction in fog-based radio access networks (RANs). The considered network structure is a fog-based RAN, with a real-time connection to the core network. The traffic flow prediction model was developed using attention-based long short-term memory (LSTM) and a collaborative filtering-based cognitive caching strategy. The developed model was trained and tested using a constructed dataset containing 5000 historical logs collected from fog access points. The data consists of text data, image data, and video data. The system was evaluated, and the results indicated that the developed strategy could accurately predict traffic flow type and efficiently lower communication delay.

In [32], the authors considered the IoT network traffic prediction problem using ML. The prediction process is based on transfer learning. The work provides a prediction solution based on time series learning algorithms, including the gated recurrent unit (GRU-NN) and LSTM. The developed GRU-NN maintains the traffic characteristics of the IoT network for an extended period, allowing the system to forecast future traffic based on the recorded traffic. The work considered a gradient boosting training model to improve IoT traffic prediction accuracy and transfer learning to remove the barrier of low amounts of captured traffic data. Results indicated that the model outperforms other existing traffic predictors in statistical performance evaluation metrics.

To present the novelty of the proposed model compared to existing proposals, Table 1 compares existing proposals for IoT traffic classification and prediction. The novelty of our developed framework comes from the introduction of fog computing for implementing the CNN model. The proposed CNN is a lightweight one that is executed on fog-distributed nodes. To the best of our knowledge, this is the first work to consider implementing a lightweight traffic prediction algorithm at the edge of the access network, i.e., fog nodes.

Table 1 Comparison between existing traffic management models

Refs.           Year  Prediction  Classification  Traffic       Fog  Lightweight  Imp  KPIs
[24]            2017  Χ           √               Social media  Χ    Χ            Χ    Recall, Precision
[28]            2018  Χ           √               Tor           Χ    Χ            Χ    Recall, Precision
[25]            2019  Χ           √               General       Χ    Χ            Χ    Recall, Precision
[29]            2019  √           Χ               IoT           Χ    Χ            Χ    Mean squared error
[33]            2019  √           Χ               IoT           Χ    Χ            Χ    Packet loss
[30]            2020  √           √               General       Χ    Χ            Χ    Accuracy
[31]            2020  √           Χ               Cellular      √    Χ            √    Delay
[26]            2020  Χ           √               Internet      Χ    Χ            Χ    Accuracy
[23]            2021  Χ           √               General       Χ    Χ            Χ    Accuracy
[32]            2021  √           Χ               IoT           Χ    Χ            Χ    Mean squared error
[27]            2021  √           Χ               IoT           Χ    √            Χ    Mean absolute error
Proposed model        √           Χ               IoT           √    √            √    Accuracy, Mean squared error

3 Proposed System

The considered IoT network has the system-level structure introduced in Fig. 2. The system consists of four levels: the device layer, the distributed edge computing layer, the access network layer, and the application layer. The distributed edge computing layer is introduced between IoT end-devices and the access network to provide computing capabilities and energy resources within one communication hop range. The considered edge computing technology is the fog computing paradigm, with high deployment flexibility and a level of mobility. Fog nodes are distributed near IoT end-devices with limited computing resources and low mobility.


Fig. 2 Layer system of the considered IoT network

For dense deployment scenarios, the fog layer is highly demanded to achieve the required network availability and scalability. The introduction of fog nodes facilitates the implementation of network algorithms and methods, including ML-based techniques. Methods and algorithms for network traffic management, data processing, and energy harvesting can be implemented and executed by fog nodes instead of IoT end-devices. This achieves higher network efficiency and conserves end-device batteries. This work considers the problems associated with massive network traffic and traffic congestion in dense deployed IoT networks. The considered IoT network has the previously introduced network structure. An AI-based model is developed to predict the network traffic in dense deployed IoT networks. The developed algorithm is a lightweight one implemented at the fog layer.

The developed traffic prediction model is based on the convolutional neural network (CNN), which is widely used in feature extraction-based applications. This makes CNN fit well for traffic prediction applications. Our developed CNN is a lightweight model to be implemented on the distributed edge computing units, i.e., fog nodes. The developed lightweight traffic prediction CNN is referred to as LTP-CNN; it predicts the network traffic according to the current network traffic status recorded over a pre-defined time interval. Figure 2 presents the IoT network structure with the LTP-CNN-enabled distributed edge computing units.

LTP-CNN is a one-dimensional CNN, as the process is carried out over one-dimensional input data, i.e., a packet segment. Moreover, the developed LTP is lightweight, and most of the existing literature proved that the one-dimensional CNN has the highest performance in network traffic prediction. The prediction process is made on the basis of a classification process. The network traffic is classified, and the traffic prediction is performed based on the number of detected traffic classes. Figure 3 presents a flowchart of the developed traffic prediction model. The received packets are pre-processed to be fed to the developed CNN, which extracts the features of the traffic to classify the network traffic into classes. If a match occurs, the network traffic is classified, and then the prediction is made based on the number of detected classes.

Fig. 3 Flowchart of the developed network traffic prediction model

The developed LTP-CNN classifies the input network traffic based on features that rely on the raw data, i.e., the packet segment. The proposed LTP-CNN consists of an input layer, convolutional layers, pooling layers, fully connected hidden layers, and an output layer.
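This layer stack can be sketched directly in Keras, the framework used for the evaluation in Sect. 4.2. The snippet below is a minimal illustration rather than the authors' exact implementation: the filter counts, 3 × 1 kernels, and stride of 2 follow Table 2, while the input segment length, number of output classes, padding mode, and optimizer are assumptions made only to keep the example self-contained and runnable.

```python
# Illustrative sketch of a three-convolution LTP-CNN-like stack.
# The paper uses standalone Keras 2.2.0 over Python 3.6; tf.keras is assumed here.
from tensorflow import keras
from tensorflow.keras import layers

def build_ltp_cnn(segment_len=128, num_classes=8):
    """segment_len and num_classes are assumptions, not values fixed by the paper."""
    model = keras.Sequential([
        layers.BatchNormalization(input_shape=(segment_len, 1)),             # normalization layer (Sect. 3)
        layers.Conv1D(16, 3, strides=2, padding="same", activation="relu"),  # Conv.1: 16 filters, 3x1, stride 2
        layers.MaxPooling1D(2),
        layers.Conv1D(32, 3, strides=2, padding="same", activation="relu"),  # Conv.2: 32 filters
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 3, strides=2, padding="same", activation="relu"),  # Conv.3: 64 filters
        layers.MaxPooling1D(2),
        layers.Flatten(),
        layers.Dense(num_classes, activation="softmax"),                     # fully connected output layer
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```

Only the forward pass of such a compact model needs to execute on the fog node, which is what keeps the predictor lightweight.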


Fig. 4 Structure of the developed LTP-CNN

Figure 4 presents a flow diagram of the developed three-layer LTP-CNN, and Table 2 provides its structure, with the size of each considered convolution layer, the stride, and the filter size. A batch normalization layer is introduced for normalizing the data and avoiding the overfitting problem. The batch normalization process is used to reduce the number of epochs needed for training the deep neural network and thus makes the process more stable. Once the network data is normalized, the next layer becomes independent. The considered convolutional layer is the feature extraction tool that extracts the different features of the input data. This mainly depends on the kernel size, as the total number of detected features corresponds to the kernel size. The considered convolution layer has an activation function, which is the Rectified Linear Unit (ReLU). ReLU is used to add non-linearity to the CNN model. Thus, unsupervised pre-training is not necessary. ReLU is used as the activation function, instead of a sigmoid activation function, to achieve faster and more efficient training of the CNN. Moreover, ReLU is more efficient in computations, making it suitable for our proposed LTP-CNN, since the algorithm is implemented by an embedded processor with limited capabilities, i.e., an IoT gateway with a fog node.

The pooling layer is introduced for reducing features and computations. Pooling reduces the pixel windows of the feature map into one-pixel windows. Two common types of pooling are used: max-pooling and average-pooling. Max-pooling is a discrete process that is introduced to down-sample the input data. This is done by selecting the maximum value in each window, which makes max-pooling more efficient for images with dim backgrounds and distributed lighter pixels. However, average-pooling takes the mean value of all pixels in the window, making it helpful in extracting smooth features. The considered pooling layer in the developed LTP-CNN is max-pooling with a size of (2 × 2). The last convolution layer provides the extracted features from the introduced neural network and feeds them to the output layer.

Table 2 Main layers of the developed LTP-CNN

LTP-CNN layer   Filter   Size   Stride
Conv.1          16       3×1    2
Max. Pooling    –        –      –
Conv.2          32       3×1    2
Max. Pooling    –        –      –
Conv.3          64       3×1    2
Max. Pooling    –        –      –
FC              –        –      –

4 Performance Evaluation

In this section, the proposed framework is evaluated. At first, the developed LTP-CNN is trained, and then the test process is introduced. The first part of this section introduces the considered datasets and the preparation of such datasets for CNN training and testing. The second part considers the evaluation process and the obtained results.

4.1 Dataset Pre-Processing

We considered two heterogeneous datasets for the training and testing processes of the developed LTP-CNN. The labelled packet capture (PCAP) traces of the detected traffic introduced in each dataset are used. Packet analyzers, e.g., TCPDUMP and Wireshark tools, can be used to detect and store the network packets in PCAP trace files.

The first dataset, dataset (I), is the dataset introduced in [34] and available at [35]. It consists of 1,262,022 captured flows during 66 days. This contains more than 35 GB of artificially built packets. The second dataset, dataset (II), is the UNI2 dataset introduced in [36] and available at [37]. The traces of both datasets are turned into PCAP files with a time interval of one hour.

For the efficient use of both datasets for the developed LTP-CNN, the PCAP files are transferred to arrays of data. This can be done by extracting the network traffic information from the allocated PCAP traces over a certain period and turning it into arrays of data. These arrays are introduced as an input to the LTP-CNN. Figure 5 presents examples of the sampled packets, i.e., the first 40 samples, with sampling intervals of 2 and 4 s, for both considered datasets.

4.2 Simulation Setup and Results

The developed LTP-CNN is implemented with the specifications introduced in the simulation parameters table, Table 3. The experimental evaluation is carried out using the libraries of Keras 2.2.0 over the Python 3.6 environment. Packets in the datasets are divided into 90% for training, and the other 10% are allocated for testing. Threefold cross-validation is used for the training process, with the patch size introduced in Table 2. The developed LTP-CNN is evaluated using the two considered datasets, with accuracy as the performance metric. The accuracy of LTP-CNN is calculated as the percentage of the sum of true positives, T+, and true negatives, T−, to the total population, as introduced in (1).

$$\text{Accuracy} = \frac{\sum \left(T^{+} + T^{-}\right)}{\sum \left(T^{+} + T^{-} + F^{+} + F^{-}\right)} \times 100 \quad (1)$$
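For multi-class outputs, Eq. (1) reduces to the fraction of correctly classified samples. A direct NumPy implementation could look like the following sketch; the variable names and the commented usage line are illustrative only.

```python
import numpy as np

def accuracy_percent(y_true, y_pred):
    """Accuracy of Eq. (1): (T+ + T-) / (T+ + T- + F+ + F-) * 100."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    correct = np.sum(y_true == y_pred)   # true positives plus true negatives
    total = y_true.size                  # total population, including F+ and F-
    return 100.0 * correct / total

# Example, assuming the 90%/10% split described above (names are placeholders):
# print(accuracy_percent(test_labels, model.predict(test_segments).argmax(axis=1)))
```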

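As a concrete illustration of the PCAP-to-array conversion described in Sect. 4.1, the sketch below aggregates a trace into a one-dimensional packet-count series over a fixed sampling interval. Scapy is only one possible reader (the paper names TCPDUMP and Wireshark as capture tools, not a parsing library), and the interval and the choice of packet counts as the feature are assumptions.

```python
import numpy as np
from scapy.all import rdpcap  # assumption: any PCAP reader (e.g., dpkt) would work as well

def pcap_to_series(pcap_path, interval_s=2.0):
    """Turn a PCAP trace into a 1-D array of packets per sampling interval."""
    packets = rdpcap(pcap_path)
    if not packets:
        return np.zeros(0)
    t0 = float(packets[0].time)
    # Index of the interval each packet falls into
    bins = [int((float(p.time) - t0) // interval_s) for p in packets]
    series = np.zeros(max(bins) + 1)
    for b in bins:
        series[b] += 1  # packet count per interval (byte counts could be used instead)
    return series

# Example: first 40 samples at a 2 s interval, as plotted in Fig. 5 (file name is a placeholder)
# segment = pcap_to_series("trace.pcap", interval_s=2.0)[:40]
```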

Table 3 Specifications of the implementing machine

Parameter                   Value
GPU                         GeForce RTX 3080
GPU-Accelerated Libraries   NVIDIA CUDA-X
CPU                         Intel Core i9
RAM                         64 GB

Fig. 5 Sampled data of a number of detected packets: a and b are samples from dataset I, and c and d are samples from dataset II

Fig. 6 Accuracy of the dataset (I)

Fig. 7 Accuracy of the dataset (II)

Figures 6 and 7 provide the accuracy results for dataset I and dataset II. Results indicate that the accuracy increases gradually and stabilizes at around 90% for both datasets. The developed algorithm reaches the steady state quickly, after an average of 30 epochs.

In order to evaluate the efficiency of the prediction process compared to the existing traditional CNN, we consider performing a statistical analysis to compare the mean square error (MSE) of a part of the predicted values using LTP-CNN with that of the traditional CNN. These values are twenty real predicted values obtained after implementing the LTP-CNN and a traditional CNN over a fog node of our previously developed IoT testbed introduced in [38]. The statistical analysis has been carried out using the GraphPad Prism 5 tool, using the Student's t-test.
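The same unpaired comparison can be reproduced outside GraphPad with SciPy. The sketch below computes the per-point squared errors of two models on the same targets and applies the Student's t-test; the loading step and the array names are placeholders, since the paper does not publish the twenty predicted values themselves.

```python
import numpy as np
from scipy import stats

def squared_errors(y_true, y_pred):
    """Squared error of each predicted value (the quantity summarized in Figs. 8 and 9)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return (y_true - y_pred) ** 2

# Placeholder data: twenty predictions from LTP-CNN and from a traditional CNN on the same targets
# y_true, ltp_pred, cnn_pred = load_fog_node_predictions()   # hypothetical loader
# ltp_err = squared_errors(y_true, ltp_pred)
# cnn_err = squared_errors(y_true, cnn_pred)
# t_stat, p_value = stats.ttest_ind(ltp_err, cnn_err)        # two-sided Student's t-test
# print(f"P-value: {p_value:.4f}")  # the paper reports 0.0013 for LTP-CNN vs. the traditional CNN
```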


We consider two main scenarios: in the first scenario, we compare the MSE of LTP-CNN trained one time with dataset I and the other time with dataset II, while the second scenario is introduced to compare the MSE of the prediction of LTP-CNN with that of the traditional model. Figures 8 and 9 provide the obtained results.

Fig. 8 MSE of the predicted values using LTP-CNN trained using datasets I and II

Fig. 9 MSE of the predicted values using LTP-CNN compared with the traditional CNN

Figure 8 presents the results of the first scenario, where the performance of LTP-CNN, for dataset I and dataset II, is compared. In the first scenario, results indicate no significant difference between the MSE of the LTP-CNN trained by each dataset, at a P-value of 0.1243. This indicates that the performance of the developed LTP-CNN in predicting network traffic is very stable and the prediction accuracy is always very close for all networks. Figure 9 presents the results of the second scenario, where the performance of LTP-CNN, in traffic prediction, is compared with the traditional predicting scheme [21]. In the second scenario, results indicate that the MSE of the prediction process using LTP-CNN significantly differs from that of the traditional CNN, with a P-value of 0.0013. This indicates that the developed LTP-CNN achieves a lower MSE than the traditional scheme while predicting network traffic, and thus, it predicts with higher efficiency than the existing models.

In order to evaluate the performance of the developed network traffic prediction model in terms of implementation, we performed a real experiment. The developed LTP-CNN was implemented over a fog node of our previously developed IoT testbed introduced in [38]. Moreover, we implemented another existing work that deploys a traditional neural network for traffic prediction [21]. The specifications and the experimental setup parameters are presented in Table 4.

Table 4 IoT testbed and experimental setup parameters

Parameter               Value
Number of end-devices   300
IoT node                Raspberry Pi 3
Fog-Processing          Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10 GHz, 32 cores
Fog-RAM                 64 GB
Fog-HDD                 500 GB

Figure 10 provides the experimental measures of the resource utilization of the fog node while implementing the developed LTP-CNN and the traditional CNN. It provides the percentage of utilization of storage resources for both considered models. Moreover, the percentage of utilization of processing resources is presented. The developed model uses fewer storage resources by an average of 21%, and 18% less processing resources, for implementing the developed predictor compared with traditional existing predictors.

Fig. 10 Percentage of resources utilization of LTP-CNN compared with the traditional CNN
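The paper does not state how the storage and processing utilization in Fig. 10 was sampled on the fog node. One simple way to collect comparable numbers on a Linux-based fog machine is to poll psutil while the predictor is serving requests, as in the hedged sketch below; the sampling period, duration, and the use of psutil itself are assumptions.

```python
import time
import psutil

def sample_utilization(duration_s=60, period_s=1.0):
    """Average CPU and disk utilization (%) on the fog node while the predictor is running."""
    cpu, disk = [], []
    end = time.time() + duration_s
    while time.time() < end:
        cpu.append(psutil.cpu_percent(interval=period_s))  # processing utilization
        disk.append(psutil.disk_usage("/").percent)        # storage utilization
    return sum(cpu) / len(cpu), sum(disk) / len(disk)

# Example: run once while LTP-CNN serves predictions and once with the traditional CNN,
# then compare the two averages (the paper reports roughly 18% and 21% savings, respectively).
# avg_cpu, avg_disk = sample_utilization()
```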


5 Conclusion

With the dramatic increase of IoT traffic, powerful traffic classification and prediction tools become a demand. The work introduced a novel lightweight CNN, LTP-CNN, that predicts the IoT network traffic based on the network status for a previous interval. The developed algorithm is implemented over fog nodes that represent a main part of the IoT/5G networks. LTP-CNN has been trained and tested using two common datasets. Results indicate that the developed LTP-CNN can predict IoT network traffic with an accuracy of around 90%. Comparing LTP-CNN with the traditional existing CNN models for a practical situation, experimental results indicate that LTP-CNN achieves higher prediction efficiency.

Acknowledgements Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R66), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

References

1. Stoyanova M, Nikoloudakis Y, Panagiotakis S, Pallis E, Markakis EK (2020) A survey on the internet of things (IoT) forensics: challenges, approaches, and open issues. IEEE Commun Surv Tutor 22(2):1191–1221
2. Nižetić S, Šolić P, González-de DLDI, Patrono L (2020) Internet of Things (IoT): Opportunities, issues and challenges towards a smart and sustainable future. J Clean Prod 274:122877
3. Prasad R, Rohokale V (2020) Internet of Things (IoT) and Machine to Machine (M2M) communication. Cyber security: the lifeline of information and communication technology. Springer Series in Wireless Technology. Springer, Cham. https://doi.org/10.1007/978-3-030-31703-4_9
4. Tahaei H, Afifi F, Asemi A, Zaki F, Anuar NB (2020) The rise of traffic classification in IoT networks: a survey. J Netw Comput Appl 154:102538
5. Popli S, Jha RK, Jain S (2021) Green IoT: a short survey on technical evolution & techniques. Wirel Pers Commun 123(1):1–29
6. Ateya A, Al-Bahri M, Muthanna A, Koucheryavy A (2018) End-to-end system structure for latency sensitive applications of 5G. Электросвязь 6:56–61
7. Polepaka S, Swami Das M, Ram Kumar RP (2020) Internet of Things and its applications: an overview. Adv Cybern Cognit Mach Learn Commun Technol 643:67–75
8. Ugwuanyi S, Paul G, Irvine J (2021) Survey of IoT for developing countries: performance analysis of LoRaWAN and cellular NB-IoT networks. Electronics 10(18):2224
9. Al-Shargabi B, Sabri O (2017) Internet of Things: an exploration study of opportunities and challenges. In: 2017 international conference on engineering & MIS (ICEMIS). IEEE, pp 1–4
10. Ateya AA, Muthanna A, Makolkina M, Koucheryavy A (2018) Study of 5G services standardization: specifications and requirements. In: 2018 10th international congress on ultra modern telecommunications and control systems and workshops (ICUMT). IEEE, pp 1–6
11. Adi E, Anwar A, Baig Z, Zeadally S (2020) Machine learning and data analytics for the IoT. Neural Comput Appl 32(20):16205–16233
12. Li W, Chai Y, Khan F, Jan SRU, Verma S, Menon VG, Li X (2021) A comprehensive survey on machine learning-based big data analytics for IoT-enabled smart healthcare system. Mob Netw Appl 26:1–19
13. Abbasi M, Shahraki A, Taherkordi A (2021) Deep learning for network traffic monitoring and analysis (NTMA): a survey. Comput Commun. https://doi.org/10.1016/j.comcom.2021.01.021
14. Muthanna A, Khakimov A, Ateya AA, Paramonov A, Koucheryavy A (2018) Enabling M2M communication through MEC and SDN. In: International conference on distributed computer and communication networks. Springer, Cham, pp 95–105
15. Rafique W, Qi L, Yaqoob I, Imran M, Rasool RU, Dou W (2020) Complementing IoT services through software defined networking and edge computing: a comprehensive survey. IEEE Commun Surv Tutor 22(3):1761–1804
16. Magesh S, Indumathi J, RamMohan R, Niveditha V, Prabha P (2020) Concepts and contributions of edge computing in internet of things (IoT): a survey. Int J Comput Netw Appl 7:146–156
17. Akbar A, Ibrar M, Jan MA, Bashir AK, Wang L (2020) SDN-enabled adaptive and reliable communication in IoT-Fog environment using machine learning and multiobjective optimization. IEEE Internet Things J 8(5):3057–3065
18. Ungurean I, Gaitan NC (2021) Software architecture of a fog computing node for industrial Internet of Things. Sensors 21(11):3715
19. Laha S, Chowdhury N, Karmakar R (2020) How can machine learning impact on wireless network and IoT? A survey. In: 2020 11th international conference on computing, communication and networking technologies (ICCCNT). IEEE, pp 1–7
20. Li Y, Tu W (2020) Traffic modelling for IoT networks: a survey. In: Proceedings of the 2020 10th international conference on information communication and management, pp 4–9
21. Shahraki A, Abbasi M, Taherkordi A, Jurcut AD (2021) Active learning for network traffic classification: a technical survey. arXiv preprint arXiv:2106.06933
22. Khedkar SP, Canessane RA, Najafi ML (2021) Prediction of traffic generated by IoT devices using statistical learning time series algorithms. Wirel Commun Mob Comput. https://doi.org/10.1155/2021/5366222
23. Khodaverdian Z, Sadr H, Edalatpanah SA, Solimandarabi MN (2021) Combination of convolutional neural network and gated recurrent unit for energy aware resource allocation. arXiv preprint arXiv:2106.12178
24. Chen Y, Lv Y, Wang X, Wang FY (2017) A convolutional neural network for traffic information sensing from social media text. In: 2017 IEEE 20th international conference on intelligent transportation systems (ITSC). IEEE, pp 1–6
25. Lim HK, Kim JB, Heo JS, Kim K, Hong YG, Han YH (2019) Packet-based network traffic classification using deep learning. In: 2019 international conference on artificial intelligence in information and communication (ICAIIC). IEEE, pp 046–051
26. Filus K, Domański A, Domańska J, Marek D, Szyguła J (2020) Long-range dependent traffic classification with convolutional neural networks based on Hurst exponent analysis. Entropy 22(10):1159
27. Chien WC, Huang YM (2021) A lightweight model with spatial–temporal correlation for cellular traffic prediction in Internet of Things. J Supercomput 77:1–17
28. Kim M, Anpalagan A (2018) Tor traffic classification from raw packet header using convolutional neural network. In: 2018 1st IEEE international conference on knowledge innovation and invention (ICKII). IEEE, pp 187–190


29. Lopez-Martin M, Carro B, Sanchez-Esguevillas A (2019) Neural network architecture based on gradient boosting for IoT traffic prediction. Future Gener Comput Syst 100:656–673
30. Ko T, Raza SM, Binh DT, Kim M, Choo H (2020) Network prediction with traffic gradient classification using convolutional neural networks. In: 2020 14th international conference on ubiquitous information management and communication (IMCOM). IEEE, pp 1–4
31. Hu L, Miao Y, Yang J, Ghoneim A, Hossain MS, Alrashoud M (2020) If-rans: intelligent traffic prediction and cognitive caching toward fog-computing-based radio access networks. IEEE Wirel Commun 27(2):29–35
32. Patil SA, Raj LA, Singh BK (2021) Prediction of IoT traffic using the gated recurrent unit neural network (GRU-NN)-based predictive model. Secur Commun Netw. https://doi.org/10.1155/2021/1425732
33. Abdellah AR, Mahmood OAK, Paramonov A, Koucheryavy A (2019) IoT traffic prediction using multi-step ahead prediction with neural network. In: 2019 11th international congress on ultra modern telecommunications and control systems and workshops (ICUMT). IEEE, pp 1–4
34. Carela-Español V, Bujlow T, Barlet-Ros P (2014) Is our ground-truth for traffic classification reliable? In: International conference on passive and active network measurement. Springer, Cham, pp 98–108
35. Traffic Classification at the Universitat Politècnica de Catalunya (UPC) [Online]. Available: http://www.cba.upc.edu/monitoring/traffic-classification
36. Benson T, Akella A, Maltz DA (2010) Network traffic characteristics of data centers in the wild. In: Proceedings of the 10th ACM SIGCOMM conference on Internet measurement, pp 267–280
37. UNI2 dataset [Online]. Available: http://pages.cs.wisc.edu/~tbenson/IMC10_Data.html
38. Muthanna A, Ateya AA, Khakimov A, Gudkova I, Abuarqoub A, Samouylov K, Koucheryavy A (2019) Secure and reliable IoT networks using fog computing with software-defined networking and blockchain. J Sens Actuator Netw 8(1):15

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Abdelhamied A. Ateya received the B.Sc. and M.Sc. in Electrical Engineering from Zagazig University, Egypt, in 2010 and 2014, respectively. In 2019, he received the Ph.D. from Saint Petersburg State University of Telecommunications, Russia. He is currently an Assistant Professor at the Department of Electronics and Communications Engineering, Faculty of Engineering, Zagazig University. He is a member of many scientific communities. Abdelhamied is a Senior Member of the IEEE and an ACM professional member. He has been an active member of several international journals and conferences, with contributions as an author, a reviewer, an editor, or a member of program committees. His current research interests include machine learning applications in communication networks, 5G/6G communications, Internet of Things, Tactile Internet and its standardization, and vehicular communications.

Naglaa F. Soliman received the B.Sc., M.Sc., and Ph.D. degrees from the Faculty of Engineering, Zagazig University, Egypt, in 1999, 2004, and 2011, respectively. She has been working at the Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia, since 2015. She was a Teaching Staff Member with the Department of Electronics and Communications Engineering, Faculty of Engineering, Zagazig University, Egypt, from 2011 up to 2015. Her current research interests are digital and wireless communications and OFDM (channel equalization and channel estimation, signal processing for 5G and IoT network applications), encoding and compression systems, optical communication systems, power line communications, sensor networks and applications, underwater acoustic communication systems, image processing (enhancement of old images and images acquired under bad illumination conditions, medical images, infrared images, restoration of degraded and noisy images, multi-channel image processing, image interpolation and resizing, super resolution reconstruction of images, color image processing, image watermarking, encryption, and data hiding), signal processing (spectral estimation, wavelet processing, signal separation, and speech processing), video processing (3D video watermarking, steganography, and encryption), machine learning and deep learning, information security, computer vision, human-computer interaction, medical diagnostic applications, biometric systems, and cancelable biometric systems.

Reem Alkanhel received the B.S. degree in computer sciences from King Saud University, Riyadh, Saudi Arabia, in 1996, the M.S. degree in information technology (computer networks and information security) from Queensland University of Technology, Brisbane, Australia, in 2007, and the Ph.D. degree in information technology (networks and communication systems) from Plymouth University, Plymouth, United Kingdom, in 2019. She has been with Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, since 1997. She is currently an Associate Professor at the College of Computer and Information Sciences. Her current research interests include communication systems, networking, IoT, information security, and quality of service.


Amel A. Alhussan received B.Sc., M.Sc., and Ph.D. degrees in computer and information sciences from King Saud University, Saudi Arabia. Her M.Sc. thesis is in software engineering, and her Ph.D. is in artificial intelligence. She is currently an Associate Professor in the Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University (PNU), Saudi Arabia. She has worked in her college in various administrative and academic positions. Her research interests include machine learning, networking, and software engineering.

Ammar Muthanna is an Associate Professor at the Department of Telecommunication Networks and Head of the SDN Laboratory (sdnlab.ru), Saint Petersburg State University of Telecommunications, Russia. He received his B.Sc. (2009), M.Sc. (2011), and Ph.D. (2016) degrees from Saint Petersburg State University of Telecommunications. In 2012/2013 he took part in the Erasmus student program at the University of Ljubljana, Faculty of Electrical Engineering. Area of research: wireless communications, 4G/5G cellular systems, IoT applications, and software defined networking.

Andrey Koucheryavy was born in Leningrad, USSR, on 02.02.1952. After graduation from Leningrad University of Telecommunication in 1974, he joined the Telecommunication Research Institute LONIIS, where he worked up to October 2003 (from 1986 up to 2003 as the First Deputy Director). He obtained Ph.D. and Dr.Sc. degrees in 1982 and 1994, respectively. Since 1998, A. Koucheryavy has been a professor at St. Petersburg State University of Telecommunication (SUT). He has been a Chaired Professor of the department "Telecommunication Networks and Data Transmission" since 2011. He is the founder and scientific school chair of "Internet of Things and self-organizing networks" at SUT (2010 up to now); a steering committee member of the IEEE technically co-sponsored series of conferences ICACT and NEW2AN; SG11 ITU-T vice-chairman 2005–2008 and 2009–2012; WP3/WP4 SG11 chairman 2006–2012; WP4 SG11 vice-chairman 2015–2016; and Chairman of SG11 in the study period 2017–March 2022. He is a co-founder of the International Testing Center for new telecommunications technologies at ZNIIS under ITU-D competence, and was host and technical program committee member of "Kaleidoscope 2014" at SUT. His scientific areas of interest are network planning, teletraffic theory, IoT and its enablers.