
Hindawi

Wireless Communications and Mobile Computing


Volume 2022, Article ID 2954987, 13 pages
https://doi.org/10.1155/2022/2954987

Research Article
Joint Optimization of Offloading and Resource Allocation for
SDN-Enabled IoV

Li Lin and Lei Zhang


College of Information Science and Technology, Donghua University, Shanghai 201620, China

Correspondence should be addressed to Lei Zhang; [email protected]

Received 24 December 2021; Revised 12 February 2022; Accepted 21 February 2022; Published 4 March 2022

Academic Editor: Changqing Luo

Copyright © 2022 Li Lin and Lei Zhang. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.

With the development of various vehicle applications, such as vehicle social networking, pattern recognition, and augmented reality, diverse and complex tasks have to be executed by vehicle terminals. To extend the computing capability, the nearest roadside unit (RSU) is used to offload the tasks. Nevertheless, for intensive tasks, excessive load not only leads to poor communication links but also results in ultrahigh transmission latency and computational delay. To overcome these problems, this paper proposes a joint optimization approach for offloading and resource allocation in the Internet of Vehicles (IoV). Specifically, the tasks assigned to vehicles in the system model are partially offloaded to RSUs and executed in parallel. Moreover, a software-defined networking (SDN) assisted routing and control protocol is introduced to divide the IoV system into two independent layers: a data transmission layer and a control layer. A joint approach optimizing the offloading decision, offloading ratio, and resource allocation (ODRR) is proposed to minimize the average system delay while satisfying the quality of service (QoS) demands. By comparison with conventional offloading strategies, the proposed approach is shown to be optimal and effective for SDN-enabled IoV.

1. Introduction

The Internet of Vehicles (IoV) is one of the most promising applications of Internet of Things (IoT) technology. Currently, it is fuelled by advances in related technologies and industries, such as the decline in sensor costs, widespread wireless connectivity, the improvement of computing capabilities, the development of cloud computing, and advances in wireless transmission and positioning. In IoV, large amounts of heterogeneous data (such as voice, program code, and video) need to be transmitted over various links, e.g., vehicle to vehicle, vehicle to roadside equipment, and vehicle to the cloud [1-4]. These requirements are challenging for cloud infrastructure and wireless access networks. Upgraded services such as ultralow latency, continuity of user experience, and high reliability have been proposed to promote local services at the edge of the network, close to the terminals. The basic idea of Mobile Edge Computing (MEC) is to migrate the cloud-computing platform to the edge of the mobile access network. The traditional cellular network is deeply integrated with Internet services to reduce the end-to-end delay of mobile service delivery and improve the user experience. With the development of MEC, mobile devices with limited resources can implement various new applications by offloading computing tasks to the MEC server, such as autonomous driving, augmented reality, and image processing [5-7]. The MEC server is owned by the network operator and is directly implemented in cellular base stations (BSs) or local wireless access points (APs) as a general-purpose computing platform [8, 9]. On the one hand, higher requirements on computing resources and storage capacity are put forward to carry the various vehicular applications. On the other hand, the computing power of in-vehicle devices is limited by size and portability. Therefore, MEC, with its distributed computing capabilities, rich computing resources, and flexible wireless accessibility, is promising for IoV. For instance, a selected part of an in-vehicle task can be offloaded to an appropriate MEC server for parallel processing, and the execution result is fed back to the vehicle-mounted terminal through the neighboring BSs or APs.

Although the MEC server has richer resources than local equipment, excessive load has a great impact on the transmission link. Considering the transmission delay on the link [10], this is detrimental to delay-sensitive tasks. As the number of vehicles increases, the communication environment becomes worse, resulting in high transmission delays. In addition, the heterogeneous nature of vehicle-mounted tasks places higher requirements on the entire system. There are two types of separable task offloading: (1) one is a bit-type task, which can be arbitrarily divided into several independent parts [11, 12] that can be processed in parallel on different platforms; (2) the other is a code-oriented task, which is composed of various components [13, 14] with dependencies between them, so the components need to be executed in an orderly or continuous manner. Therefore, a reliable computation offloading solution is needed to support low-latency, highly reliable IoV services.

The edge computing capabilities of nearby RSUs have been leveraged to meet task-intensive requests and strict latency requirements, i.e., part or all of the divisible bit-type tasks with stringent delay requirements are selected to be offloaded to nearby RSUs; then, the computing delay can be greatly reduced by parallel computing. Nevertheless, the task intensity is rarely uniform: some RSUs may have to handle extra requests beyond their capabilities, while other RSUs are relatively idle. Benefiting from the software-defined network (SDN) architecture, SDN controllers with global information are able to coordinate edge computing resources [15, 16]. Based on the current global situation, the requested tasks are controlled and forwarded to the corresponding target nodes through the SDN controller, which can effectively integrate global resources.

A lot of research work has focused on task offloading in IoV [17-20], mostly considering only one part of the problem (the offloading strategy, the offloading ratio, or resource allocation) and lacking the utilization of SDN to efficiently solve the load balancing problem. In this paper, we jointly optimize the offloading strategy, offloading ratio, and resource allocation in order to minimize the system delay of SDN-enabled IoV. Moreover, the effects of different task complexities on transmission and execution are also considered. The main contributions of this paper are as follows:

(i) A task-divisible system model with a software-defined network is proposed based on two-layer transmission

(ii) A Particle Swarm Optimization- (PSO-) based heuristic approach for the overall optimization is proposed, which can effectively solve the offloading strategy problem of multiuser and multiobjective nodes. This approach works by decomposing the problem into three subproblems: (1) offloading decision of vehicles; (2) resource allocation by RSUs; and (3) offload ratio of vehicles. It greatly reduces the complexity of the problem and is very effective for the multivehicle and multi-MEC offloading scenario

(iii) The offloading decisions, local offloading ratios, and resource allocation (ODRR) are jointly optimized in a complete way, to maximize the system performance

By comparison with conventional offloading strategies, simulation results show that the proposed ODRR approach achieves the best performance.

2. Related Work

The offloading strategy has been a hot research topic for IoV, and various offloading models have been proposed for different application scenarios. Considering the standby time of mobile devices and the latency sensitivity of tasks, most work in edge computing or fog computing focuses on the optimization of energy consumption or latency. Therefore, we survey the related work according to the optimization goals, as shown in Table 1.

2.1. Optimizing the Energy Consumption. For mobile users, processing various application tasks consumes a lot of energy, so improving the standby time of the device has always been a concern. Some work is devoted to reducing local computing power consumption to improve standby time. The energy cost of task computation and file transmission has been studied in [21], which combines the multiaccess characteristics of 5G heterogeneous networks and jointly optimizes offloading and wireless resource allocation to minimize energy consumption within delay constraints. A multiuser MEC system with wireless power supply has been modeled in [22]; to address a practical scenario that requires a delay limit and reduces the total energy consumption of the AP, it jointly optimizes the energy transmission beamforming of the access point, the frequency of the central processing unit, the number of bits offloaded by users, and the time allocation among users. [23] has proposed an energy-optimal dynamic computation offloading scheme to minimize system energy consumption under energy overhead and other resource constraints.

2.2. Optimizing the Delay of the System. For latency-sensitive tasks, researchers are devoted to reducing system latency and improving user experience by optimizing local computing resources and edge node resources. [10] aims to minimize the maximum delay among users by jointly optimizing the offloading decision, computing resource allocation, and resource block and power allocation. [17] investigates the calculation rate maximization problem in a UAV-enabled MEC wireless power supply system, subject to energy harvesting causal constraints and UAV speed constraints. A new device-to-device multiassistant MEC system that distributes local users' tasks to nearby auxiliary devices for collaborative computing has been designed in [18]. By optimizing task allocation and resource allocation to minimize the execution delay, a collaborative method has been
Table 1: Summary and comparison of the most relevant references.

Work                  | MEC/fog  | User     | Load balancing | Partial offloading | Resource allocation by computing nodes | Optimizing goals
Du et al. [10]        | Multiple | Multiple | Yes | --  | Yes | Minimize the total energy consumption of the system with delay constraints.
Zhang et al. [21]     | Multiple | Multiple | --  | --  | --  | Minimize the total energy consumption of the system with delay constraints.
Wang et al. [22]      | Single   | Multiple | --  | --  | Yes | Minimize the total energy consumption.
Zhou et al. [17]      | Single   | Multiple | --  | --  | Yes | Maximize the calculation rate.
Xing et al. [18]      | Multiple | Single   | --  | --  | --  | Minimize the total delay of the system with energy constraints.
Tran and Pompili [19] | Multiple | Multiple | --  | --  | Yes | Minimize the total energy consumption and the delay of the system.
Zhou et al. [20]      | Single   | Multiple | --  | Yes | Yes | Minimize the maximal task completion time.
Chen et al. [23]      | Multiple | Multiple | --  | Yes | --  | Minimize the total energy consumption of the system with delay and energy consumption constraints.
Chen et al. [24]      | Multiple | Multiple | Yes | Yes | --  | Minimize the completion latency of the task with a maximum delay constraint.
Wang and Chen [25]    | Multiple | Multiple | Yes | --  | Yes | Minimize the completion latency with energy consumption constraints.
Yadav et al. [26]     | Multiple | Multiple | Yes | --  | Yes | Minimize the energy consumption and delay in vehicular fog computing.
Yadav et al. [27]     | Multiple | Multiple | Yes | --  | --  | Minimize the total energy consumption of the system with delay constraints.
This work             | Multiple | Multiple | Yes | Yes | Yes | Minimize the average delay for the SDN-enabled IoV system.

proposed in [20] for the parallel computing and transmission of virtual reality: the task is divided into two subtasks and offloaded to the MEC server and the vehicle side in order to shorten the completion time of the virtual reality application. Moreover, an offloading scheme with efficient privacy protection based on fog-assisted computing has been proposed in [24] and solved by a joint optimization algorithm to minimize the completion delay.

2.3. Optimizing Both System Delay and Energy Consumption. The user experience and the standby time of the device have been optimized together through a weight parameter, i.e., when the battery is low, the weight of the energy consumption is set larger; when the battery is sufficient, a larger delay weight is set. [19] considers a multicell wireless network that supports MEC and assists mobile users in performing computationally intensive tasks through task offloading. A heuristic algorithm is proposed to combine task offloading and resource allocation to maximize the system utility, which is measured by the weighted sum of task completion time and energy consumption reduction. The system latency has been minimized under energy consumption constraints by jointly optimizing offloading decisions, local computing power, and fog node computing resources in [25]. The cloudlet overload risk has been alleviated by offloading user tasks to vehicle nodes, and a heuristic algorithm has been proposed to balance energy and delay to minimize system overhead in [26]. In order to solve the problem of high power consumption and delay sensitivity of portable devices, a reinforcement learning scheme has been proposed in [27] to search for optimal available resource nodes to minimize delay and energy consumption. In addition, fog computing and cloud computing have been discussed in [24-27]. Since cloud servers are located far away from cities, there is a large transmission delay, so tasks that are not sensitive to delay are offloaded to the cloud for computing, while intensive and delay-sensitive tasks are computed locally or offloaded to the fog to improve system performance.

In this paper, we mainly focus on delay-sensitive and task-intensive scenarios, considering that the computing resources of edge nodes and the number of tasks received in each period are limited. In order to prevent some edge computing nodes from being overloaded while other edge nodes are relatively idle, we use the SDN-enabled IoV to control the task offloading decision-making by monitoring the global situation, which effectively improves the utilization of resources and reduces system latency.

3. System Model

In this section, the system transmission model, execution model, and optimization problem formulation are presented. As shown in Figure 1, we assume a vehicular network system composed of N vehicles and M RSUs. Each RSU is equipped with a MEC server, which has the computing ability to process offloaded tasks. Generally, the MEC can be a physical device or a virtual machine provided by the operator. Taking into account the complexity of the
Figure 1: Sketch of communication and offloading framework for SDN-enabled IoV system.

vehicular network, this system is a software-defined IoV that supports edge computing and the configuration of edge computing nodes. The edge nodes are coordinated and controlled by the software-defined network (SDN), which aims to reduce system delay and improve overall performance. As a coordinator, the SDN divides the IoV system into two independent layers through software definition and virtualization technology: the data layer and the control layer. The edge nodes uniformly obey SDN scheduling and follow the OpenFlow protocol [28]. These edge nodes transmit and process information according to SDN control instructions. Control and processing are separated by the network entities to effectively integrate resources and improve utilization. The edge computing node equipped on each RSU connects the edge node cluster and the SDN controller through a broadband connection. The physical communication on the control layer is independent of the physical communication channel on the data layer. The data layer is composed of an OpenFlow-based SDN controller and network nodes; refer to [29, 30]. The SDN controller broadcasts the global status, including Channel Status Information (CSI), available resources, and task priority. When the SDN receives a vehicle's offloading request, it looks for the best solution (including offloading decision and resource allocation) at the control layer and then sends control instructions. The data layer performs data transmission according to the received control instructions. Each vehicle generates a task in a period of time, and the heterogeneity of vehicle tasks (data volume, delay sensitivity, and difference in computational complexity) is taken into account. For real-time tasks that require minimal delay and/or have a large amount of data, local execution cannot meet the requirements. Conversely, tasks that are not sensitive to delay can be processed locally. If all tasks are executed locally or all offloaded to the RSUs, this can cause timeout failures, waste of local resources, and a very poor communication environment due to interference. Therefore, different offloading strategies need to be set for different task types. Let V = {1, 2, ..., N} and R = {1, 2, ..., M} represent the set of vehicles and the set of RSUs, respectively. For ease of reference, the key symbols used in this article are summarized in Table 2.

3.1. Communication Model. We assume that each vehicle terminal n ∈ V has one task to execute at a time, denoted as V_n. Each task has three parameters, <d_n, c_n, t_n^max>, in which d_n defines the size of the input data of task V_n of vehicle terminal n (usually in bits), and c_n defines the computing resources required by the task of terminal n, measured in CPU cycles. Parameters d_n and c_n can be obtained from task analysis. t_n^max is the maximum allowable delay for task transmission and execution, i.e., if t_n^max is exceeded by the time the result is received, the task fails by timeout. Once a vehicle receives the offloading decision, its task is not allowed to be interrupted before the execution is completed. Typically, the speed of cars on a conventional road is 5 to 16 meters per second; thus, we assume the radio channel does not vary so radically, due to severe fading, as to interrupt the execution of the tasks. When a vehicle generates a task, it first sends an offloading request to the nearby RSU; then, the RSU routes the request command to the SDN controller. The SDN controller synthesizes the current global information and provides the optimal offloading decision. Finally, the decision plan is sent to the targeted node through the control layer.
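The task model above can be made concrete in a few lines of Python (an illustrative sketch; only the triple <d_n, c_n, t_n^max> and the timeout rule come from the paper, while the class and function names are our own):

```python
from dataclasses import dataclass

@dataclass
class VehicleTask:
    d: float      # input data size d_n, in bits
    c: float      # required computing resources c_n, in CPU cycles
    t_max: float  # maximum allowable delay t_n^max, in seconds

def completes_in_time(task: VehicleTask, total_delay: float) -> bool:
    # A task fails by timeout when the total transmission-plus-execution
    # delay exceeds t_n^max.
    return total_delay <= task.t_max

task = VehicleTask(d=2e6, c=1e9, t_max=0.5)
print(completes_in_time(task, 0.4))  # True
print(completes_in_time(task, 0.6))  # False
```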
Table 2: Parameter notations.

Symbol       | Definition
d_n          | The input data size of task V_n generated by vehicle n.
B^RV         | The bandwidth of the vehicle-to-RSU channel.
B^RR         | The bandwidth of the RSU-to-RSU channel.
c_n          | The computing resources required by the vehicle's task.
C_n^m        | Indicator of the selection of RSU m by vehicle n.
t_n^max      | The maximum allowable delay for task execution.
h_n^s        | The uplink gain of the vehicle n to routing RSU s channel.
l_n          | The computing capability of vehicle n.
b_n          | The proportion of the task executed locally.
r_n^s        | The transmission rate from the vehicle to the routing RSU.
r_s^m        | The transmission rate from the routing RSU to the targeted RSU.
T_n^loc      | The local task execution time.
f_m          | The computing resource of the RSU server m.
f_n^m        | The computing resources allocated by the RSU server m to vehicle n.
T_n^{m,up}   | The transmission time for vehicle n to upload.
T_n^s        | The transmission time from vehicle n to the routing RSU s.
T_s^m        | The transmission time from the routing RSU s to the targeted RSU m.
T_n^{m,comp} | The execution time of vehicle n's task on RSU m.
T_n          | The delay for vehicle n to complete its task.
ϕ            | The computing resources (CPU cycles) for 1-bit processing.
ζ_n^m        | Indicator of the routing decision from vehicle n to the targeted RSU m.
ℵ            | The path loss index.
h_0          | The complex Gaussian channel coefficient.
x_i          | The position of particle i.
(x_n, y_n)   | The coordinates of vehicle n.
(X_m, Y_m)   | The coordinates of RSU m.
Ξ_n^m        | The distance between vehicle n and RSU m.
σ^2          | The additive white Gaussian noise of the channel.

We introduce a binary variable of task offloading selection, C_n^m. If C_n^m = 1, the RSU m is selected by vehicle n to perform task offloading; on the contrary, C_n^m = 0 means that vehicle n does not select RSU m. Therefore, the constraint on C_n^m is defined as

∑_{m∈R} C_n^m = 1, ∀n ∈ V. (1)

Equation (1) means that each vehicle can offload its task to a unique RSU for execution. We define the coordinates of the RSU set in the two-dimensional plane as {(X_1, Y_1), (X_2, Y_2), ..., (X_M, Y_M)}, the coordinates of vehicle n as (x_n, y_n), and the coordinates of the nearest RSU s as (X_s, Y_s). Then the distance between vehicle n and the nearest RSU s is given by

Ξ_n^s = sqrt((x_n − X_s)^2 + (y_n − Y_s)^2). (2)

The transmission rates of the vehicle n to routing RSU s channel and the routing RSU s to target RSU m channel are described as [31]

r_n^s = B^RV log2(1 + h_n^s (Ξ_n^s)^{−ℵ} |h_0|^2 / σ^2),
r_s^m = B^RR log2(1 + h_s^m (Ξ_s^m)^{−ℵ} |h_0|^2 / σ^2), (3)

where B^RV and B^RR represent the bandwidth allocations of the vehicle n to routing RSU s channel and the routing RSU s to target RSU m channel, respectively. σ^2 represents the additive white Gaussian noise of the channel, h_n^s represents the uplink gain of the vehicle n to routing RSU s channel, and h_0 is the complex Gaussian channel coefficient [31] following the complex normal distribution CN(0, 1). ℵ is the path loss index.

We introduce b_n to represent the local execution ratio of the vehicle; thus, (1 − b_n) denotes the offloading ratio. Since the execution result is usually relatively small, the feedback delays from the targeted node to the routing node and from the routing node to the vehicle can be ignored. We define the binary parameter ζ_n^m, where ζ_n^m = 1 means that the vehicle's task needs to be routed onward to the targeted node, and vice versa. Assuming the routing node is RSU s, based on the above formulas, the transmission time for uploading the (1 − b_n)d_n data from the vehicle to the targeted RSU m is divided into two parts: (1) the delay of uploading from the vehicle to the routing node and (2) the delay of uploading from the routing node to the targeted node, which are expressed as

T_n^s = (1 − b_n)d_n / r_n^s, ∀n ∈ V,
T_s^m = (1 − b_n)d_n / r_s^m, ∀n ∈ V. (4)

Considering that the routing node itself can be the targeted node, the uploading time T_n^{m,up} is defined as

T_n^{m,up} = ζ_n^m (T_n^s + T_s^m) + (1 − ζ_n^m) T_n^s, ∀n ∈ V. (5)

3.2. Computing Model. In this section, we introduce the computing model of the vehicle. Considering the parallel offloading mode, the computing model is mainly divided into two parts: (1) the local computing model and (2) the RSU computing model.
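Equations (2)-(5) translate directly into code. The sketch below is illustrative only (the helper names and the sample link numbers are our own assumptions, not values from the paper):

```python
import math

def distance(xn, yn, Xs, Ys):
    # Equation (2): Euclidean distance between vehicle n and routing RSU s
    return math.hypot(xn - Xs, yn - Ys)

def shannon_rate(bandwidth, gain, dist, path_loss, h0_sq, noise):
    # Equation (3): achievable rate over a link with distance-dependent
    # path loss (bandwidth in Hz, result in bit/s)
    return bandwidth * math.log2(1.0 + gain * dist ** (-path_loss) * h0_sq / noise)

def upload_time(b, d, r_ns, r_sm, routed):
    # Equations (4)-(5): the (1 - b) * d offloaded bits always take the
    # vehicle-to-routing-RSU hop; the RSU-to-RSU hop is added only when
    # the task is routed onward (zeta = 1)
    t_hop1 = (1 - b) * d / r_ns
    t_hop2 = (1 - b) * d / r_sm if routed else 0.0
    return t_hop1 + t_hop2

# Offloading half of a 2 Mbit task over a 10 Mbit/s uplink and a
# 20 Mbit/s RSU-to-RSU link, with routing to a remote target RSU:
print(round(upload_time(b=0.5, d=2e6, r_ns=1e7, r_sm=2e7, routed=True), 6))  # 0.15
```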
3.2.1. Local Computing Model. Let l_n denote the computing capability of vehicle n; then, the local execution time T_n^loc of the b_n share of the task can be defined as

T_n^loc = b_n c_n / l_n, ∀n ∈ V. (6)

3.2.2. RSU Computing Model. Besides local computing, the rest of the task is offloaded to the RSU for further computing, which requires (1 − b_n)c_n computing resources from the RSU. Note that an RSU can be selected by multiple vehicles, while a vehicle can only select one RSU for execution. Since an RSU has limited computing resources, it is necessary to allocate RSU resources to different vehicles in a reasonable manner to improve resource utilization and reduce system delay. Define f_m as the computing resource of RSU m, and let f_n^m denote the computing resources allocated by the RSU server m to vehicle n. The sum of the resources allocated by the RSU to the vehicles cannot exceed its own computing resources, as constrained by Equations (7) and (8), and the execution time of vehicle n on RSU m is defined by Equation (9):

∑_{n∈V} f_n^m ≤ f_m, ∀m ∈ R, (7)

f_n^m / f_m ≤ C_n^m, ∀m ∈ R, (8)

T_n^{m,comp} = C_n^m (1 − b_n) c_n / f_n^m, ∀n ∈ V. (9)

Assuming there exist M RSUs in the network, the execution time for vehicle n is defined as

T_n^RSU = ∑_{m=1}^{M} C_n^m (1 − b_n) c_n / f_n^m, ∀n ∈ V. (10)

For vehicle n, the task is divided into two subtasks, which are processed in parallel by the local unit and by the edge RSU node. The task completion time is accordingly divided into two parts: local processing time and edge RSU node processing time. The edge RSU node processing also incurs the transmission delay T_n^{m,up} and the execution delay T_n^{m,comp}. Therefore, the time for completing the task generated by vehicle n is determined by the maximum of the two delays, which is defined by

T_n = max{T_n^loc, ∑_{m=1}^{M} (T_n^{m,comp} + T_n^{m,up})}, ∀n ∈ V. (11)

3.3. Problem Formulation. For delay-sensitive applications, delay is a problem that must be considered. Therefore, it is vital to formulate the most suitable offloading strategy based on the global status. For intensive tasks, meeting the performance requirements of the vehicles and making full use of the available resources are of importance. In addition, considering the transmission and execution delay caused by offloading, the system delay is defined by Equation (12). Therefore, offloading decision-making and resource allocation must be jointly optimized to improve system performance. The goal is to provide all vehicles with optimized offloading strategies χ, computing resource allocation F, and offloading ratios B, aiming to reduce the average delay. Finally, the optimization problem is described as follows:

P1: min_{B,χ,F} (1/N) ∑_{n=1}^{N} T_n (12)

s.t. T_n ≤ t_n^max, ∀n ∈ V (13a)

∑_{m∈R} C_n^m = 1, ∀n ∈ V (13b)

0 ≤ b_n ≤ 1, ∀n ∈ V (13c)

C_n^m ∈ {0, 1}, ∀n ∈ V, ∀m ∈ R (13d)

0 ≤ ∑_{n=1}^{N} C_n^m f_n^m ≤ f_m, ∀m ∈ R (13e)

ζ_n^m ∈ {0, 1}, ∀n ∈ V, ∀m ∈ R. (13f)

The above constraints are explained as follows: constraint (13a) ensures that the total delay of a task does not exceed the maximum allowable delay; constraints (13b) and (13d) mean that each vehicle can transmit its task to only one RSU and that the offloading decision is a binary variable; constraint (13c) is the offloading ratio constraint, a real number between 0 and 1; constraint (13e) ensures that the resources allocated by an RSU to the vehicles do not exceed its own capacity; finally, (13f) is a binary constraint representing whether the task needs to be routed from the vehicle to the targeted RSU.

4. Problem Solving

The fact that the offloading decision is a binary variable makes problem P1 a mixed integer programming problem, which is nonconvex and NP-hard. In order to reduce the complexity of the problem, we divide it into three subproblems, i.e., the offloading decisions of the vehicles, the proportion of each task performed locally, and the resource allocation to vehicles by the RSUs.

4.1. Offloading Strategy Making and Load Balance. Given the local computing ratio B and resource allocation F, problem P1 is transformed into P1.1 as follows:

P1.1: min_χ (1/N) ∑_{n=1}^{N} T_n (14)

s.t. (13a), (13b), (13d), (13e), (13f). (15)
Obviously, P1:1 is still an NP-hard problem. For such impact on the task execution of the objective function; thus,
problems, heuristic algorithm is a rational solution. The the converted problem P1:2 can be presented as
Particle Swarm Optimization algorithm (PSO) originated
from the study of bird predation behavior is a evolutionary P1:2 : min 〠 T n
F
heuristic algorithm, which has efficient global search capa-
n∈N m ð19Þ
bilities. It has achieved great success in image processing s:t:ð12aÞ, ð12eÞ:
and neural network training. Hence, we propose a offloading
decision-making algorithm based on PSO. The basic idea of Substituting formulas (4), (5), and (6) into P1:2, and per-
PSO is to find the optimal solution through collaboration forming equivalent transformation, the overall optimization
and information sharing between individuals in the group. In this paper, particles have only two attributes: velocity and position. Velocity represents the speed of movement, and position represents the direction of the particles. Each particle searches for the optimal solution separately in the search space and records its current individual extreme value, then shares that individual extreme value with the other particles. All particles in the swarm adjust their velocity and position according to the individual extreme value they have found and the current global optimal solution shared by the others. Therefore, through iteration, the particles of the entire population eventually converge to the optimal solution.

Following the velocity update formula proposed by Clerc and Kennedy [32], we introduce a compression factor $\rho$. The update formulas for velocity and position are

$$V_i(I+1) = \rho\big(wV_i(I) + c_1 r_1 (P_{best} - X_i(I)) + c_2 r_2 (G_{best} - X_i(I))\big), \quad (16)$$

$$X_i(I+1) = X_i(I) + V_i(I), \quad (17)$$

where $i = 1, 2, \cdots, Ncmax$, and $Ncmax$ is the number of particles. $w$ is the inertia coefficient: the larger $w$ is, the stronger the global search ability and the weaker the local search ability. $c_1, c_2$ are the learning factors, $V_i(I)$ is the velocity of particle $i$ at the $I$-th iteration, and $X_i(I)$ is the current position of particle $i$ at the $I$-th iteration. $V_{max}$ denotes the maximum velocity.

The compression factor $\rho$ is given by

$$\rho = \frac{2}{\left|2 - \varphi - \sqrt{\varphi^2 - 4\varphi}\right|}, \quad (18)$$

where $\varphi = c_1 + c_2$. The compression factor guarantees the convergence of the particles and prevents the explosion of the velocity. The specific process of the improved PSO algorithm is described in Algorithm 1; the fitness function refers to Equation (12).

4.2. Resource Allocation Optimization. Assuming the local computing ratio $B$ and the offloading decision $\chi$ are given, the problem is transformed into minimizing the vehicle delay at each RSU. Define the vehicle set $N_m$ of the tasks on RSU $m$, with $N_m$ vehicles. The variable $f_n^m$ only has an effect within its own RSU, so the problem is changed to the following form:

$$\mathrm{P1.3:}\ \min_F \sum_{n \in N_m} T_n$$

$$\text{s.t.}\quad T_n \ge \frac{b_n c_n}{l_n}, \quad (20)$$

$$T_n \ge \frac{(1-b_n)d_n}{r_n^s} + \frac{\zeta_n^m (1-b_n)d_n}{r_s^m} + \frac{(1-b_n)c_n}{f_n^m},$$

$$(12a), (12e).$$

As P1.3 shows, $T_n$ is not differentiable with respect to $f_n^m$. Substituting formulas (5), (6), and (9) into P1.3, $T_n$ is approximated as

$$T_n \le \frac{b_n c_n}{l_n} + \frac{(1-b_n)d_n}{r_n^s} + \frac{\zeta_n^m (1-b_n)d_n}{r_s^m} + \frac{(1-b_n)c_n}{f_n^m} = \frac{(1-b_n)c_n}{f_n^m} + b_n\left(\frac{c_n}{l_n} - \frac{d_n}{r_n^s} - \frac{\zeta_n^m d_n}{r_s^m}\right) + \Gamma_n^m, \quad (21)$$

where $\Gamma_n^m = d_n/r_n^s + \zeta_n^m d_n/r_s^m$, so that $(1-b_n)c_n/f_n^m + b_n\big(c_n/l_n - d_n/r_n^s - \zeta_n^m d_n/r_s^m\big) + \Gamma_n^m$ is an upper bound on $T_n$. Substituting this upper bound into P1.3, P1.3 can be bounded with the worst-case delay as

$$\mathrm{P1.4:}\ \min_F \sum_{n \in N_m} \left[\frac{(1-b_n)c_n}{f_n^m} + b_n\left(\frac{c_n}{l_n} - \frac{d_n}{r_n^s} - \frac{\zeta_n^m d_n}{r_s^m}\right) + \Gamma_n^m\right] \quad (22)$$

$$\text{s.t.}\quad \frac{(1-b_n)c_n}{f_n^m} + b_n\left(\frac{c_n}{l_n} - \frac{d_n}{r_n^s} - \frac{\zeta_n^m d_n}{r_s^m}\right) + \Gamma_n^m \le t_n^{max}, \quad (23a)$$

$$(12e). \quad (24)$$

P1.4 is a convex optimization problem, with linear constraints (13e) and convex inequality constraints (23a). Therefore, we use Lagrangian duality theory to solve it. The Lagrangian function of P1.4 is given by

$$L(f, \lambda, \theta) = \sum_{n=1}^{N_m}\left[\frac{(1-b_n)c_n}{f_n^m} + b_n\left(\frac{c_n}{l_n} - \frac{d_n}{r_n^s} - \frac{\zeta_n^m d_n}{r_s^m}\right) + \Gamma_n^m\right] + \sum_{n=1}^{N_m} \lambda_n\left(\frac{(1-b_n)c_n}{f_n^m} + \tau\right) + \theta_m\left(\sum_{n \in N_m} f_n^m - f_m\right), \quad (25)$$
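As a numerical sanity check, the delay bound in Equation (21) can be evaluated in both its term-by-term form and its rearranged form, which must agree. This is an illustrative sketch; the function names and parameter values below are assumptions for illustration, not from the paper:

```python
def delay_upper_bound(b_n, c_n, d_n, l_n, r_ns, r_sm, zeta, f_nm):
    """Worst-case delay of Equation (21) as the sum of its four terms."""
    local = b_n * c_n / l_n                  # local computing: b_n c_n / l_n
    upload = (1 - b_n) * d_n / r_ns          # V2R upload: (1 - b_n) d_n / r_n^s
    forward = zeta * (1 - b_n) * d_n / r_sm  # RSU-to-RSU forwarding: zeta_n^m (1 - b_n) d_n / r_s^m
    edge = (1 - b_n) * c_n / f_nm            # edge computing: (1 - b_n) c_n / f_n^m
    return local + upload + forward + edge

def delay_rearranged(b_n, c_n, d_n, l_n, r_ns, r_sm, zeta, f_nm):
    """Same bound rearranged around Gamma_n^m = d_n/r_n^s + zeta_n^m d_n/r_s^m."""
    gamma = d_n / r_ns + zeta * d_n / r_sm
    return ((1 - b_n) * c_n / f_nm
            + b_n * (c_n / l_n - d_n / r_ns - zeta * d_n / r_sm)
            + gamma)
```

Expanding the rearranged form term by term recovers the four delay components, which is why P1.4 can optimize the rearranged expression directly.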
8 Wireless Communications and Mobile Computing
Input: the particle swarm parameters c1, c2, w, r1, r2, Mcmax, Ncmax.
Output: Gbest, ⊝Gbest.
1: For j = 1; j <= Mcmax; j++ do
2:   For each i ∈ [1, Ncmax] do
3:     Initialize the velocity and position of particle i: V_i(0), X_i(0)
4:   End for
5:   Obtain ζ_n^m via Algorithm 2, then record each particle's current position and fitness as its individual extreme option and value Pbest, ⊝Pbest.
6:   Record the smallest fitness and the corresponding position as ⊝Gbest, Gbest.
7:   While the number of iteration steps is not 0 do
8:     For i = 1; i <= Ncmax; i++ do
9:       Update the velocity V_i(j) of particle i using Equation (16)
         Update the position X_i(j) of particle i using Equation (17)
         Evaluate the fitness of particle i using Equation (14)
10:      If fitness(X_i(j)) < ⊝Pbest then
11:        ⊝Pbest = fitness(X_i(j))
12:      End if
13:      If fitness(X_i(j)) < ⊝Gbest then
14:        ⊝Gbest = fitness(X_i(j))
15:      End if
16:    End for
17:  End while
18: End for
19: Return Gbest, ⊝Gbest

Algorithm 1: An offloading decision-making algorithm based on PSO.
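The constriction-factor update of Equations (16)–(18) that drives Algorithm 1 can be sketched as follows. This is a generic minimization sketch with assumed initialization bounds and an abstract fitness callback; in the paper, the fitness comes from the offloading model, and the routing step of Algorithm 2 is folded into that fitness evaluation:

```python
import math
import random

def constriction_factor(c1, c2):
    # Equation (18): rho = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|, with phi = c1 + c2 >= 4
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def pso_minimize(fitness, dim, n_particles=100, n_iter=50, c1=2.0, c2=2.0, w=0.9):
    rho = constriction_factor(c1, c2)
    # Assumed initialization: positions uniform in [0, 1]^dim, zero velocities
    xs = [[random.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                      # individual extreme positions
    pbest_val = [fitness(x) for x in xs]
    g = min(range(n_particles), key=lambda k: pbest_val[k])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global optimum shared by all
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Equation (16): constriction-factor velocity update
                vs[i][d] = rho * (w * vs[i][d]
                                  + c1 * r1 * (pbest[i][d] - xs[i][d])
                                  + c2 * r2 * (gbest[d] - xs[i][d]))
                # Equation (17): position update
                xs[i][d] += vs[i][d]
            val = fitness(xs[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = xs[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = xs[i][:], val
    return gbest, gbest_val
```

Note that with the paper's setting c1 = c2 = 2 we get φ = 4, so ρ = 1 and the update reduces to the classic inertia-weight form.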
where $\tau = b_n\left(c_n/l_n - d_n/r_n^s - \zeta_n^m d_n/r_s^m\right) + \Gamma_n^m - t_n^{max}$, and $\lambda_n$ and $\theta_m$ are the Lagrangian multipliers. According to the KKT conditions, we have

$$f_n^m = \left[\sqrt{\frac{(1+\lambda_n)(1-b_n)c_n}{\theta_m}}\,\right]^+. \quad (26)$$

The Lagrange dual function is then given by

$$D(\lambda, \theta) = \min_f L(f, \lambda, \theta). \quad (27)$$

Then, the dual problem of P1.4 is

$$\max\ D(\lambda, \theta) \quad \text{s.t.}\ \lambda_n, \theta_m \ge 0. \quad (28)$$

As the Lagrange function is differentiable, the gradients with respect to the Lagrange multipliers can be obtained as

$$\frac{\partial D(\lambda, \theta)}{\partial \lambda_n} = \frac{(1-b_n)c_n}{f_n^m} + \tau, \qquad \frac{\partial D(\lambda, \theta)}{\partial \theta_m} = \sum_{n \in N_m} f_n^m - f_m. \quad (29)$$

The Lagrange multipliers are then updated iteratively by gradient descent:

$$\lambda_n(t+1) = \left[\lambda_n(t) - \eta_1 \frac{\partial D}{\partial \lambda_n}\right]^+, \quad (30)$$

$$\theta_m(t+1) = \left[\theta_m(t) - \eta_2 \frac{\partial D}{\partial \theta_m}\right]^+, \quad (31)$$

where $\eta_1, \eta_2$ are the gradient steps, $t$ is the gradient iteration index, and $[\cdot]^+$ denotes $\max(0, \cdot)$. We summarize the procedure for solving problem P1.4 in Algorithm 3.

Input: offloading decision C_n^m.
Output: routing information ζ_n^m.
1: Initialize the locations of the vehicles.
2: For each i ∈ [1, N] do
3:   Obtain the access RSU m of vehicle i, set ζ_n^m = 0.
4:   For j = 1; j <= M; j++ do
5:     If C_i^j = 1 and j != m then
6:       ζ_n^m = 1.
7:     End if
8:   End for
9: End for
10: Return ζ_n^m

Algorithm 2: Routing confirmation (RC).

4.3. Offloading Ratio Allocation. Given χ and F, problem P1.4 can be transformed into a linear programming problem in b_n.
Input: λ_n, θ_m.
Output: f_n^m.
1: Repeat
2:   Calculate the resource allocation f_n^m based on Equation (26)
3:   Update λ_n(t), θ_m(t) using Equations (30) and (31)
4: Until convergence
5: Return f_n^m

Algorithm 3: Resource allocation (RA).
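Equation (26) ties the optimal f_n^m to the multipliers that Algorithm 3 iterates on. In the simplified special case where all delay constraints are inactive (λ_n = 0), θ_m can be chosen in closed form so that the RSU budget binds, which makes the square-root structure of the allocation easy to see. The sketch below is under that simplifying assumption, not the full Algorithm 3:

```python
import math

def ra_closed_form(b, c, f_total):
    """Equation (26) with lambda_n = 0: f_n^m = sqrt((1 - b_n) c_n / theta_m).
    theta_m is chosen so the capacity constraint sum_n f_n^m = f_m is tight."""
    roots = [math.sqrt((1.0 - bn) * cn) for bn, cn in zip(b, c)]
    theta = (sum(roots) / f_total) ** 2   # from sum(roots) / sqrt(theta) = f_total
    return [r / math.sqrt(theta) for r in roots]
```

Each vehicle thus receives CPU resources proportional to sqrt((1 − b_n)c_n): heavier offloaded workloads get more resources, but with diminishing returns. When some delay constraints bind, the corresponding λ_n grow away from zero through the gradient updates (30)–(31), tilting the allocation toward those vehicles.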
1: Set the offloading decision X_0, resource allocation F_0, and local executing ratio B_0
2: t = 0
3: While t < T do
4:   Obtain the offloading decision X_t by Algorithm 1 based on F_{t-1}, B_{t-1}
5:   Calculate the resource allocation F_t by Algorithm 3 based on B_{t-1}, X_t
6:   Update B_t using Equations (13c) and (23a) based on F_t, X_t
7:   t = t + 1
8: End while
9: Return B_t, X_t, F_t

Algorithm 4: Joint approach for offloading decision, resource allocation, and ratio (ODRR).
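Structurally, Algorithm 4 is block-coordinate (alternating) optimization: each of the three variable blocks is re-optimized while the other two are held fixed. A minimal skeleton, where the three solver callbacks are placeholders standing in for Algorithm 1, Algorithm 3, and the boundary rule of Section 4.3:

```python
def odrr(solve_decision, solve_allocation, solve_ratio, x0, f0, b0, rounds=10):
    """Alternating optimization loop of Algorithm 4 (ODRR).
    Each callback optimizes one block of variables given the other two fixed."""
    x, f, b = x0, f0, b0
    for _ in range(rounds):
        x = solve_decision(f, b)     # offloading decision chi (Algorithm 1, PSO)
        f = solve_allocation(b, x)   # resource allocation F (Algorithm 3, RA)
        b = solve_ratio(f, x)        # local ratio B via the boundary rule (13c), (23a)
    return x, f, b
```

If each block solver does not increase the objective, the delay sequence is non-increasing; with a stochastic solver such as PSO this holds only approximately, which is why a fixed round budget T is used as in the paper.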
Therefore, the function value takes its extreme value at the boundary or at a stagnation point, as shown in P1.5:

$$\mathrm{P1.5:}\ W = \min_B \sum_{n \in N_m}\left[\frac{(1-b_n)c_n}{f_n^m} + b_n\left(\frac{c_n}{l_n} - \frac{d_n}{r_n^s} - \frac{\zeta_n^m d_n}{r_s^m}\right)\right] + \sum_{n \in N_m} \Gamma_n^m$$

$$\text{s.t.}\ (12c), (20a). \quad (32)$$

Based on the above formula, $\partial W / \partial b_n$ is a constant, so the minimum is attained at the boundary. According to constraints (13c) and (23a), $b_n$ can be obtained.

Taken together, we propose an approach combining the whole process of offloading strategy, offloading ratio, and resource allocation control (ODRR), as described in Algorithm 4.

5. Numerical Results

In this section, simulation configurations and results are presented and analyzed to verify the effectiveness of the proposed algorithm.

5.1. Simulation Configurations. The scenario in this paper is as follows: the system consists of 3 RSUs and N vehicles (N = 10, 20, 30, ⋯), which carry tasks with random parameters, different data volumes, computing resources, and allowable delays. The computing resources of the 3 RSUs are [15G, 15G, 15G], and the local resource of each vehicle is 1G. The vehicles' task data volume, computing complexity, and maximum allowable delay are drawn from random(100 KB, 300 KB), random(1000, 9000), and random(1 s, 2 s), respectively. Referring to [31, 33, 34], the communication parameter settings are shown in Table 3.

Table 3: Simulation configurations.

| Parameter | Value |
|---|---|
| The bandwidth B^RV, B^RR | 1 MHz |
| The path loss index (ℵ) | 4 |
| The additive white Gaussian noise σ² | -100 dBm |
| h_n^s | 20 dBm |
| h_s^m | 46 dBm |
| The learning factors c1, c2 | 2 |
| The inertia weight ω | 0.9 |

The settings of the parameters in Algorithm 1 are as follows. The number of particles is 100 and the maximum number of iterations is 50. The learning factors c1 and c2 are both set to 2, because the learning factors adjust the step size: if they are set too large, the particles move fast and fly over the optimal point; if set too small, the optimization is slow. A larger inertia weight gives stronger global search ability but slower convergence. To avoid falling into a local optimum while keeping a fast convergence speed, ω is set to 0.9.
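For reproducibility, the random task parameters of Section 5.1 can be drawn as in the sketch below (the dictionary keys and unit interpretations are assumptions for illustration):

```python
import random

def sample_tasks(n_vehicles, seed=None):
    """Per-vehicle task parameters as described in Section 5.1:
    data volume ~ U(100, 300) KB, computing complexity ~ U(1000, 9000),
    maximum allowable delay ~ U(1, 2) s."""
    rng = random.Random(seed)
    return [{"data_kb": rng.uniform(100, 300),
             "complexity": rng.uniform(1000, 9000),
             "t_max_s": rng.uniform(1, 2)}
            for _ in range(n_vehicles)]
```

Fixing the seed makes the comparison between offloading strategies use identical task sets, so delay differences come from the strategies rather than the draw.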
[Figure 2: The system delay of the cases with different vehicle numbers (N = 10, 20, ⋯, 50). Average delay versus the number of vehicle terminals for Offload-proportion-by-ODRR, Execute-Locally, Offload-Whole-to-RSUs, and Offload-proportion-by-SA.]
To verify the effectiveness of the proposed offloading strategy on the proposed Internet of Vehicles architecture, we compare it with the following offloading strategies:

(i) Offload-Whole-to-RSUs: offload the whole task to edge nodes, including the access node and remote nodes.

(ii) Execute-Locally: the vehicle terminal directly executes the task locally.

(iii) Offload-proportion-by-ODRR: joint optimization of the offloading decision, local calculation ratio, and resource allocation (ODRR) based on the proposed algorithm.

(iv) Offload-proportion-by-SA: offload a proportion of the task to RSUs using the simulated annealing (SA) algorithm.

5.2. Simulation Results. In this section, we present the performance of the proposed ODRR algorithm and compare it with the other conventional offloading strategies.

Figure 2 plots the average execution time of the vehicles as the number of connected vehicles increases. The proposed ODRR partial offloading algorithm performs best and keeps the average computation delay at a minimum, with the partial offloading SA algorithm behind it. When the number of vehicles is less than 35, offloading the whole task to RSUs is better than executing locally: RSUs have far more computing resources than the vehicles, with a computing capability dozens of times that of local execution, so the computing performance of executing locally is worse than offloading the whole task to RSUs. But when the number of vehicles exceeds this limit, the situation changes. On the one hand, the resources of the RSUs are limited: when the number of offloading vehicles exceeds a certain number, the load capacity of the RSUs is exceeded, resulting in a sharp performance decrease. On the other hand, more vehicles mean a worse communication environment, which leads to longer communication delays. Compared with the other three strategies, the latency of the proposed ODRR algorithm is always the smallest as the number of vehicles increases. Compared with the Offload-Whole-to-RSUs scheme, ODRR reduces the delay by 42.7% in the best case and 17.5% in the worst case; compared with the Execute-Locally scheme, by 52.6% in the best case and 24% in the worst case; compared with the Offload-proportion-by-SA scheme, by at most 16.7% and at least 7.8%. It can be concluded that, compared with the two conventional strategies, ODRR can reduce the delay by up to nearly half, and compared with the Offload-proportion-by-SA scheme, the reduction is up to 10%.

Figure 3 plots the impact of different execution complexities (ϕ = c_n/d_n) on the system delay. The number of connected vehicles is set to 30, and the average delay of all four offloading strategies increases linearly with the task complexity. The ODRR algorithm given in this paper again performs clearly best; the partial offloading SA algorithm is second, whole offloading to RSUs is third, and
[Figure 3: The average delay of the cases with different execution complexities. Average delay versus computational complexity (1000–9000) for the four offloading strategies.]
[Figure 4: The average delay of the cases with different input data sizes. Average delay versus input data size (100–900 KB) for the four offloading strategies.]
executing locally has the worst performance. The higher the task complexity, the more CPU resources are needed to process each byte of data. The local computing resources are much smaller than those of the RSUs, so executing locally incurs the highest latency. However, offloading the whole task to RSUs causes uneven distribution and wastes local resources. The partial offloading ODRR algorithm takes into account both resource allocation and the offloading ratio, which greatly improves
system performance. Compared with the Offload-Whole-to-RSUs scheme, ODRR reduces the delay by 14% to 29%; compared with the Execute-Locally scheme, by 14% to 49.7%, and the greater the computational complexity, the better the reduction effect. Compared with the Offload-proportion-by-SA scheme, the reduction is between 7% and 9%.

When N = 30 and ϕ = 2000, Figure 4 shows how the average delay changes as the amount of input data increases. The average execution delay of the vehicles increases linearly with the input data size. The proposed ODRR algorithm obtains the minimum delay, followed by the SA algorithm; whole offloading to RSUs is third, and executing locally is last. The larger the input data, the more transmission delay is incurred. The algorithm proposed in this paper jointly optimizes the offloading ratio, offloading decision-making, and resource allocation, thus greatly improving the system performance, and the larger the amount of input data, the better the proposed ODRR reduces the delay. Compared with the Execute-Locally scheme, the reduction is between 36.6% and 47.5%; compared with the Offload-Whole-to-RSUs scheme, ODRR can reduce the delay by 30.4% to 42%; compared with the Offload-proportion-by-SA scheme, the decline rate is nearly 10%.

6. Conclusion

In this paper, we propose a multiuser, multi-RSU system architecture based on SDN-enabled IoV. The loads of the RSUs are effectively balanced by using the characteristics of SDN. To reduce the delay of task offloading in IoV, a joint approach is proposed to optimize the offloading ratio, offloading decision-making, and resource allocation. Compared with the conventional strategy of executing locally, the system performance increases by 36.6%–47.5%; compared with fully offloading to RSUs, by 30.4%–42%; and compared with the SA-based offloading strategy, by about 10%. The simulation results show that the joint optimization approach proposed in this paper is more effective than conventional strategies in dealing with the delay problem of a multiuser, multi-RSU system and can effectively solve this multidimensional problem. Although it greatly improves system performance, there is still room for improvement. For instance, the possibility of task failure caused by transmission link or edge node failure has not been considered in this work. Therefore, a reinforcement learning-based approach that accounts for a failure retransmission mechanism of the task is of interest for future work.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC) under grant No. 61901104 and the Science and Technology Research Project of Shanghai Songjiang District No. 20SJKJGG4 (Corresponding author: Lei Zhang).

References

[1] J. Cheng, M. Zhou, F. Liu, S. Gao, and C. Liu, "Routing in internet of vehicles: a review," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 5, pp. 2339–2352, 2015.

[2] M. Ashritha and C. S. Sridhar, "RSU based efficient vehicle authentication mechanism for VANETs," in 2015 IEEE 9th International Conference on Intelligent Systems and Control (ISCO), pp. 1–5, Coimbatore, India, 2015.

[3] Z. Wang, J. Zheng, Y. Wu, and N. Mitton, "A centrality-based RSU deployment approach for vehicular ad hoc networks," in 2017 IEEE International Conference on Communications (ICC), pp. 1–5, Paris, France, 2017.

[4] M. Bahrami, "Cloud computing for emerging mobile cloud apps," in 2015 3rd IEEE International Conference on Mobile Cloud Computing, Services, and Engineering, pp. 4-5, San Francisco, CA, USA, 2015.

[5] B. Brik, P. A. Frangoudis, and A. Ksentini, "Service-oriented MEC applications placement in a federated edge cloud architecture," in ICC 2020 - 2020 IEEE International Conference on Communications (ICC), pp. 1–6, Dublin, Ireland, 2020.

[6] S. Lee, S. Lee, and M.-K. Shin, "Low cost MEC server placement and association in 5G networks," in 2019 International Conference on Information and Communication Technology Convergence (ICTC), pp. 879–882, Jeju, Korea (South), 2019.

[7] Y. Wang, J. Wang, Y. Ge, B. Yu, C. Li, and L. Li, "MEC support for C-V2X system architecture," in 2019 IEEE 19th International Conference on Communication Technology (ICCT), pp. 1375–1379, Xi'an, China, 2019.

[8] Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young, "Mobile edge computing–a key technology towards 5G," ETSI White Paper, vol. 11, 2015.

[9] J. Du, L. Zhao, J. Feng, and X. Chu, "Computation offloading and resource allocation in mixed fog/cloud computing systems with min-max fairness guarantee," IEEE Transactions on Communications, vol. 66, no. 4, pp. 1594–1608, 2018.

[10] J. Du, L. Zhao, X. Chu, F. R. Yu, J. Feng, and I. Chih-Lin, "Enabling low-latency applications in LTE-A based mixed fog/cloud computing systems," IEEE Transactions on Vehicular Technology, vol. 68, no. 2, pp. 1757–1771, 2019.

[11] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, "A survey on mobile edge computing: the communication perspective," IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2322–2358, 2017.

[12] H. Wang, Z. Lin, and T. Lv, "Energy and delay minimization of partial computing offloading for D2D-assisted MEC systems," in 2021 IEEE Wireless Communications and Networking Conference (WCNC), pp. 1–6, Nanjing, China, 2021.

[13] J. Wang, T. Lv, P. Huang, and P. T. Mathiopoulos, "Mobility-aware partial computation offloading in vehicular networks: a deep reinforcement learning based scheme," China Communications, vol. 17, no. 10, pp. 31–49, 2020.
[14] Z. Ning, P. Dong, X. Kong, and F. Xia, "A cooperative partial computation offloading scheme for mobile edge computing enabled internet of things," IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4804–4814, 2019.

[15] H. Lee, H. Kim, and Y. Kim, "A practical SDN-based data offloading framework," in 2017 International Conference on Information Networking (ICOIN), pp. 604–607, Da Nang, Vietnam, 2017.

[16] H. Zhang, Z. Wang, and K. Liu, "V2X offloading and resource allocation in SDN-assisted MEC-based vehicular networks," China Communications, vol. 17, no. 5, pp. 266–283, 2020.

[17] F. Zhou, Y. Wu, R. Q. Hu, and Y. Qian, "Computation rate maximization in UAV-enabled wireless-powered mobile-edge computing systems," IEEE Journal on Selected Areas in Communications, vol. 36, no. 9, pp. 1927–1941, 2018.

[18] H. Xing, L. Liu, J. Xu, and A. Nallanathan, "Joint task assignment and resource allocation for D2D-enabled mobile-edge computing," IEEE Transactions on Communications, vol. 67, no. 6, pp. 4193–4207, 2019.

[19] T. X. Tran and D. Pompili, "Joint task offloading and resource allocation for multi-server mobile-edge computing networks," IEEE Transactions on Vehicular Technology, vol. 68, no. 1, pp. 856–868, 2019.

[20] J. Zhou, F. Wu, K. Zhang, Y. Mao, and S. Leng, "Joint optimization of offloading and resource allocation in vehicular networks with mobile edge computing," in 2018 10th International Conference on Wireless Communications and Signal Processing (WCSP), pp. 1–6, Hangzhou, China, 2018.

[21] K. Zhang, Y. Mao, S. Leng et al., "Energy-efficient offloading for mobile edge computing in 5G heterogeneous networks," IEEE Access, vol. 4, pp. 5896–5907, 2016.

[22] F. Wang, J. Xu, X. Wang, and S. Cui, "Joint offloading and computing optimization in wireless powered mobile-edge computing systems," IEEE Transactions on Wireless Communications, vol. 17, no. 3, pp. 1784–1797, 2018.

[23] S. Chen, Y. Zheng, W. Lu, V. Varadarajan, and K. Wang, "Energy-optimal dynamic computation offloading for industrial IoT in fog computing," IEEE Transactions on Green Communications and Networking, vol. 4, no. 2, pp. 566–576, 2020.

[24] S. Chen, X. Zhu, H. Zhang, C. Zhao, G. Yang, and K. Wang, "Efficient privacy preserving data collection and computation offloading for fog-assisted IoT," IEEE Transactions on Sustainable Computing, vol. 5, no. 4, pp. 526–540, 2020.

[25] Q. Wang and S. Chen, "Latency-minimum offloading decision and resource allocation for fog-enabled Internet of Things networks," Transactions on Emerging Telecommunications Technologies, vol. 31, no. 12, 2020.

[26] R. Yadav, W. Zhang, O. Kaiwartya, H. Song, and S. Yu, "Energy-latency tradeoff for dynamic computation offloading in vehicular fog computing," IEEE Transactions on Vehicular Technology, vol. 69, no. 12, pp. 14198–14211, 2020.

[27] R. Yadav, W. Zhang, I. A. Elgendy et al., "Smart healthcare: RL-based task offloading scheme for edge-enable sensor networks," IEEE Sensors Journal, vol. 21, no. 22, pp. 24910–24918, 2021.

[28] F. Hu, Q. Hao, and K. Bao, "A survey on software-defined network and OpenFlow: from concept to implementation," IEEE Communications Surveys & Tutorials, vol. 16, no. 4, pp. 2181–2206, 2014.

[29] J. Liu, J. Wan, B. Zeng, Q. Wang, H. Song, and M. Qiu, "A scalable and quick-response software defined vehicular network assisted by mobile edge computing," IEEE Communications Magazine, vol. 55, no. 7, pp. 94–100, 2017.

[30] I. Ku, Y. Lu, M. Gerla, R. L. Gomes, F. Ongaro, and E. Cerqueira, "Towards software-defined VANET: architecture and services," in 2014 13th Annual Mediterranean Ad Hoc Networking Workshop (MED-HOC-NET), pp. 103–110, Piran, Slovenia, 2014.

[31] D. Ye, M. Wu, S. Tang, and R. Yu, "Scalable fog computing with service offloading in bus networks," in 2016 IEEE 3rd International Conference on Cyber Security and Cloud Computing (CSCloud), pp. 247–251, Beijing, China, 2016.

[32] M. Clerc and J. Kennedy, "The particle swarm - explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.

[33] T. Q. Dinh, J. Tang, Q. D. La, and T. Q. S. Quek, "Offloading in mobile edge computing: task allocation and computational frequency scaling," IEEE Transactions on Communications, vol. 65, no. 8, pp. 3571–3584, 2017.

[34] Y. Qi, H. Wang, L. Zhang, and B. Wang, "Optimal access mode selection and resource allocation for cellular-VANET heterogeneous networks," IET Communications, vol. 11, no. 13, pp. 2012–2019, 2017.