Research Article
Joint Optimization of Offloading and Resource Allocation for
SDN-Enabled IoV
Received 24 December 2021; Revised 12 February 2022; Accepted 21 February 2022; Published 4 March 2022
Copyright © 2022 Li Lin and Lei Zhang. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
With the development of various vehicle applications, such as vehicle social networking, pattern recognition, and augmented reality, diverse and complex tasks have to be executed by vehicle terminals. To extend the computing capability, the nearest roadside unit (RSU) is used to offload the tasks. Nevertheless, for intensive tasks, excessive load not only leads to poor communication links but also results in ultrahigh latency and computational delay. To overcome these problems, this paper proposes a joint optimization approach for offloading and resource allocation in the Internet of Vehicles (IoV). Specifically, the divisible tasks assigned to the vehicles in the system model are assumed to be partially offloaded to RSUs and executed in parallel. Moreover, a software-defined networking (SDN) assisted routing and control protocol is introduced to divide the IoV system into two independent layers: the data transmission layer and the control layer. A joint approach that optimizes the offloading decision, offloading ratio, and resource allocation (ODRR) is proposed to minimize the system average delay, on the premise of satisfying the demand of quality of service (QoS). By comparing with conventional offloading strategies, the proposed approach is shown to be optimal and effective for SDN-enabled IoV.
offloaded to the appropriate MEC for parallel processing, and the execution result is fed back to the vehicle-mounted terminal through the neighboring BSs or APs.

Although the MEC server has richer resources than local equipment, excessive load has a great impact on the transmission link. Considering the transmission delay on the link [10], this is detrimental to delay-sensitive tasks. As the number of vehicles increases, the communication environment becomes worse, resulting in high transmission delays. In addition, the heterogeneous nature of vehicle-mounted tasks places higher requirements on the entire system. There are two types of separable task offloading: (1) one is the bit-type task, which can be arbitrarily divided into several independent parts [11, 12], and these parts can be processed in parallel on different platforms; (2) the other is the code-oriented task, which is composed of various components [13, 14]. There are dependencies between task components, and they need to be executed in an orderly or continuous manner. Therefore, a reliable computation offloading solution is needed to support low-latency, highly reliable IoV services.

The edge computing capabilities of nearby RSUs have been leveraged to meet task-intensive requests and strict latency requirements, i.e., part or all of the divisible bit-type tasks with high delay requirements are selected to be offloaded to nearby RSUs; then, the computing delay can be greatly reduced by parallel computing. Nevertheless, the task intensity is seldom uniform: some RSUs may have to handle extra requests beyond their capabilities, while other RSUs are relatively idle. Benefiting from the software-defined network (SDN) architecture, SDN controllers with global information are able to coordinate edge computing resources [15, 16]. Based on the current global situation, the requested tasks are controlled and forwarded to the corresponding target nodes through the SDN controller, which can effectively integrate global resources.

A lot of research work has focused on task offloading in IoV [17–20], mostly considering only one part of the offloading strategy, offloading ratio, or resource allocation, and lacking the utilization of SDN to efficiently solve the load balancing problem. In this paper, we jointly optimize the offloading strategy, offloading ratio, and resource allocation in order to minimize the system delay of SDN-enabled IoV. Moreover, the effects of different task complexities on transmission and execution are also considered. The main contributions of this paper are as follows:

(i) A task-divisible system model with a software-defined network is proposed based on two-layer transmission

(ii) A Particle Swarm Optimization- (PSO-) based heuristic approach for the overall optimization is proposed, which can effectively solve the offloading strategy problem of multiuser and multiobjective nodes. This approach works by decomposing the problem into three subproblems: (1) the offloading decision of vehicles; (2) the resource allocation by RSUs; and (3) the offloading ratio of vehicles. It greatly reduces the complexity of the problem and is very effective for solving the multivehicle and multi-MEC offloading scenario

(iii) The offloading decisions, local offloading ratios, and resource allocation (ODRR) are jointly optimized in a complete way, to maximize the system performance

By comparing with the conventional offloading strategies, simulation results show that the proposed ODRR approach achieves the best performance.

2. Related Work

The offloading strategy has been a hot research topic for IoV, and various offloading models have been proposed in different application scenarios. Considering the standby time of mobile devices and the latency sensitivity of tasks, most work in edge computing or fog computing focuses on the optimization of energy consumption or latency. Therefore, we survey the related work according to the optimization goals, as shown in Table 1.

2.1. Optimizing the Energy Consumption. For mobile users, processing various application tasks consumes a lot of energy, so improving the standby time of the device has always been a concern. Some work is devoted to reducing local computing power consumption to improve standby time. The energy cost of task computation and file transmission has been studied in [21]: combining the multiaccess characteristics of 5G heterogeneous networks, offloading and wireless resource allocation are jointly optimized to minimize energy consumption within delay constraints. A multiuser MEC system with wireless power supply has been modeled in [22]; in order to address a practical scenario with a delay limit and to reduce the total energy consumption of the AP, the energy transmission beamforming of the access point, the CPU frequency, the number of bits offloaded by users, and the time allocation among users are jointly optimized. Reference [23] has proposed an energy-optimal dynamic computation offloading algorithm to minimize system energy consumption under energy overhead and other resource constraints.

2.2. Optimizing the Delay of the System. For latency-sensitive tasks, researchers are devoted to reducing system latency and improving user experience by optimizing local computing resources and edge node resources. The work in [10] aims to minimize the maximum delay among users by joint optimization of the offloading decision, computing resource allocation, and resource block and power allocation. Reference [17] investigates the computation rate maximization problem in a UAV-enabled wireless-powered MEC system, which is subject to energy harvesting causality constraints and UAV speed constraints. A new device-to-device multiassistant MEC system, in which local users request nearby auxiliary devices for collaborative computing, has been designed in [18]. By optimizing task allocation and resource allocation to minimize the execution delay, a collaborative method has been
proposed in [20] for parallel computing and transmission of virtual reality. The task is divided into two subtasks and offloaded to the MEC server and the vehicle side in order to shorten the completion time of the virtual reality application. Moreover, an offloading scheme with efficient privacy protection based on fog-assisted computing has been proposed and solved by a joint optimization algorithm to minimize the completion delay in [24].

2.3. Optimizing Both System Delay and Energy Consumption. The user experience and the standby time of the device have been optimized together through a weight parameter, i.e., when the power is low, the weight of the energy consumption is set larger; when the power is sufficient, a larger delay weight is set. Reference [19] considers a multicell wireless network that supports MEC, which assists mobile users in performing computationally intensive tasks through task offloading; a heuristic algorithm is proposed to combine task offloading and resource allocation to maximize the system utility, which is measured by the weighted sum of task completion time and energy consumption reduction. The system latency has been minimized under energy consumption constraints by jointly optimizing offloading decisions, local computing power, and fog node computing resources in [25]. The cloudlet overload risk has been alleviated by offloading user tasks to vehicle nodes, and a heuristic algorithm has been proposed to balance energy and delay and minimize system overhead in [26]. In order to solve the problem of high power consumption and delay sensitivity of portable devices, a reinforcement learning scheme has been proposed in [27] to search for optimal available resource nodes to minimize delay and energy consumption. In addition, fog computing and cloud computing have been discussed in [24–27]: since cloud servers are located in areas far away from cities, there is a large transmission delay, so tasks that are not sensitive to delay are offloaded to the cloud for computing, while intensive and delay-sensitive tasks are computed locally or offloaded to the fog to improve system performance.

In this paper, we mainly focus on delay-sensitive and task-intensive scenarios, considering that the computing resources of edge nodes and the number of tasks received in each period are limited. In order to prevent some edge computing nodes from being overloaded while other edge nodes are relatively idle, we use the SDN-enabled IoV to control the task offloading decision-making by monitoring the global situation, which effectively improves the utilization of resources and reduces system latency.

3. System Model

In this section, the system transmission model, execution model, and optimization problem formulation are presented. As shown in Figure 1, we assume a vehicular network system composed of N vehicles and M RSUs. Each RSU is equipped with a MEC server, which has the computing ability to process offloaded tasks. Generally, the MEC can be a physical device or a virtual machine provided by the operator.
Figure 1: Sketch of the communication and offloading framework for the SDN-enabled IoV system.
Taking into account the complexity of the vehicular network, this system is a software-defined IoV that supports edge computing and the configuration of edge computing nodes. The edge nodes are coordinated and controlled by the software-defined network (SDN), which aims to reduce system delay and improve overall performance. As a coordinator, the SDN divides the IoV system into two independent layers through software definition and virtualization technology: the data layer and the control layer. The edge nodes uniformly obey SDN scheduling and follow the OpenFlow protocol [28]. These edge nodes transmit and process information according to SDN control instructions. The control and processing are separated by the network entities, to effectively integrate resources and improve utilization. The edge computing node equipped on the RSU connects the edge node cluster and the SDN controller through a broadband connection. The physical communication on the control layer is independent of the physical communication channel on the data layer. The data layer is composed of an OpenFlow-based SDN controller and network nodes, as in [29, 30]. The SDN controller broadcasts the global status, including channel status information (CSI), available resources, and task priority. When the SDN receives a vehicle's offloading request, it looks for the best solution (including offloading decision and resource allocation) at the control layer and then sends control instructions. The data layer performs data transmission according to the instructions received from the control layer. Each vehicle generates a task in a period of time, and the heterogeneity of vehicle tasks (data volume, delay sensitivity, and difference in computational complexity) is taken into account. When real-time tasks require minimal delay and/or have a large amount of data, local execution cannot meet the requirements; otherwise, tasks that are not sensitive to delay can be processed locally. If all tasks are executed locally or all are offloaded to the RSUs, this can cause timeout failures, waste of local resources, and a very poor communication environment due to interference. Therefore, different offloading strategies need to be set for different task types. Let V = {1, 2, ⋯, N} and R = {1, 2, ⋯, M} represent the set of vehicles and the set of RSUs, respectively. For ease of reference, the key symbols used in this article are summarized in Table 2.

3.1. Communication Model. We assume that each vehicle terminal n ∈ V has one task to execute at a time, denoted as V_n. Each task has three parameters, ⟨d_n, c_n, t_n^max⟩, in which d_n defines the size of the input data of the task V_n of the vehicle terminal n (usually in bits), and c_n defines the computing resources required by the task of terminal n, measured in CPU cycles. Parameters d_n and c_n can be obtained from task analysis. t_n^max is the maximum allowable delay for task transmission and execution, i.e., if t_n^max is exceeded by the time the result is received, the task fails by timeout. Once a vehicle receives the offloading decision, the task is not allowed to be interrupted before its execution is completed. Typically, the speed of cars on a conventional road is 5 to 16 meters per second; thus, we assume the radio channel does not vary so severely that fading interrupts the execution of the tasks. When a vehicle generates a task, it first sends an offloading request to the nearby RSU; then, the RSU routes the request command to the SDN controller. The SDN synthesizes the current global information and provides the optimal offloading decision. Finally, the decision plan is sent to the targeted node through the control layer.
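To make the notation concrete, the following minimal Python sketch encodes the task tuple ⟨d_n, c_n, t_n^max⟩ described above; the class and field names are illustrative assumptions, not part of the paper.

# Hypothetical container for the task tuple <d_n, c_n, t_n^max> of Section 3.1.
from dataclasses import dataclass

@dataclass
class VehicleTask:
    d_n: float    # input data size of the task (bits)
    c_n: float    # computing resources required by the task (CPU cycles)
    t_max: float  # maximum allowable delay before the task fails by timeout (s)

# Example with assumed values: a 1 Mbit task needing 10^9 cycles within 0.5 s.
task = VehicleTask(d_n=1e6, c_n=1e9, t_max=0.5)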
The offloading decision variable C_n^m indicates whether vehicle n offloads its task to RSU m and satisfies

∑_{m∈R} C_n^m = 1, ∀n ∈ V.  (1)

Equation (1) means that each vehicle can offload its task to a unique RSU for execution. We define the coordinates

3.2. Computing Model. In this section, we introduce the computing model of the vehicle. Considering the parallel offloading mode, the computing model is mainly divided into two parts: (1) the local computing model and (2) the RSU computing model.
3.2.1. Local Computing Model. Let l_n denote the computing capability of vehicle n; then, the local execution time T_n^loc for the b_n portion of the data can be defined as

T_n^loc = b_n c_n / l_n, ∀n ∈ V.  (6)

3.2.2. RSU Computing Model. Besides the local computing, the rest of the task is offloaded to the RSU for further computing, which requires (1 − b_n) c_n computing resources from the RSU. Note that an RSU can be selected by multiple vehicles, while a vehicle can only select one RSU for execution. Since an RSU has limited computing resources, it is necessary to allocate the RSU resources to different vehicles in a reasonable manner to improve resource utilization and reduce system delay. Define f_m as the computing resource of RSU m, and f_n^m as the computing resources allocated by the RSU server m to vehicle n. The sum of the resources allocated by an RSU to its vehicles cannot exceed its own computing resources, as constrained by Equations (7) and (8), and the execution time of vehicle n on RSU m is defined by Equation (9):

∑_{n∈V} f_n^m ≤ f_m, ∀m ∈ R,  (7)

f_n^m / f_m ≤ C_n^m, ∀m ∈ R,  (8)

T_n^{m,comp} = C_n^m (1 − b_n) c_n / f_n^m, ∀n ∈ V.  (9)

Assuming there exist M RSUs in the network, the RSU execution time for vehicle n is defined as

T_n^RSU = ∑_{m=1}^{M} C_n^m (1 − b_n) c_n / f_n^m, ∀n ∈ V.  (10)

For vehicle n, the task is divided into two subtasks, which are processed in parallel by the local unit and by the edge RSU node. The task completion time is accordingly divided into two parts: the local processing time and the edge RSU node processing time. The edge RSU node processing also incurs a transmission delay T_n^{m,up} and an execution delay T_n^{m,comp}. Therefore, the time for completing the task generated by vehicle n is determined by the larger of the two delays, which is defined by

T_n = max{ T_n^loc, ∑_{m=1}^{M} (T_n^{m,comp} + T_n^{m,up}) }, ∀n ∈ V.  (11)

3.3. Problem Formulation. For delay-sensitive applications, delay is a problem that must be considered. Therefore, it is vital to formulate the most suitable offloading strategy based on the global status. For intensive tasks, meeting the performance requirements of the vehicles and making full use of the effective resources are of importance. In addition, considering the transmission and execution delay caused by offloading, the system delay is defined by Equation (12). Therefore, offloading decision-making and resource allocation must be jointly optimized to improve system performance. The goal is to provide all vehicles with an optimized offloading strategy χ, computing resource allocation F, and offloading ratio B, aiming to reduce the average delay. Finally, the optimization problem is described as follows:

P1: D_n = min_{B,χ,F} (1/N) ∑_{n=1}^{N} T_n  (12)

s.t. T_n ≤ t_n^max, ∀n ∈ V,  (13a)

∑_{m∈R} C_n^m = 1, ∀n ∈ V,  (13b)

0 ≤ b_n ≤ 1, ∀n ∈ V,  (13c)

C_n^m ∈ {0, 1}, ∀n ∈ V, ∀m ∈ R,  (13d)

0 ≤ ∑_{n=1}^{N} C_n^m f_n^m ≤ f_m, ∀m ∈ R,  (13e)

ζ_n^m ∈ {0, 1}, ∀n ∈ V, ∀m ∈ R.  (13f)

The above constraints are explained as follows: constraint (13a) ensures that the total delay of a task does not exceed the maximum allowable delay; constraints (13b) and (13d) mean that each vehicle can transmit its task to only one RSU and that the offloading decision is a binary variable; constraint (13c) is the offloading ratio constraint, which is a decimal between 0 and 1; and constraint (13e) ensures that the resources allocated by an RSU to its vehicles do not exceed its own capacity. Finally, there is the binary constraint (13f), which represents whether the vehicle needs to be routed to the targeted RSU.

4. Problem Solving

The fact that the offloading decision is a binary variable makes problem P1 a mixed integer programming problem, which is nonconvex and NP-hard. In order to reduce the complexity of the problem, we divide it into three subproblems, i.e., the offloading decisions of the vehicles, the proportional distribution of the tasks performed locally, and the resource allocation for the vehicles by the RSUs.

4.1. Offloading Strategy Making and Load Balance. Given the local computing ratio B and the resource allocation F, problem P1 is transformed into P1.1 as follows:

P1.1: D_n = min_χ (1/N) ∑_{n=1}^{N} T_n  (14)

s.t. (13a), (13b), (13d), (13e), (13f).  (15)
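As an illustration of how a candidate solution would be scored, the sketch below evaluates the completion time T_n of Equation (11) and the average-delay objective of Equations (12) and (14). It is a minimal sketch, not the paper's implementation: the function names are assumptions, the uplink rates r_up stand in for the transmission model of Section 3.1 that is only partly reproduced here, and the single-hop upload term ignores the routing indicator ζ_n^m.

# Hypothetical evaluation of Equations (6)-(12) for a candidate solution.
# All inputs (rates, cycles, capacities) are illustrative placeholders.
import numpy as np

def completion_time(b, C, f_alloc, d, c, l, r_up):
    """T_n per Eq. (11): the slower of the local part and the offloaded part."""
    N, M = C.shape
    T = np.zeros(N)
    for n in range(N):
        t_loc = b[n] * c[n] / l[n]                          # local delay, Eq. (6)
        t_rsu = 0.0
        for m in range(M):
            if C[n, m] == 1:                                # vehicle n served by RSU m
                t_up = (1 - b[n]) * d[n] / r_up[n, m]       # upload delay T_n^{m,up}
                t_comp = (1 - b[n]) * c[n] / f_alloc[n, m]  # RSU execution, Eq. (9)
                t_rsu += t_up + t_comp
        T[n] = max(t_loc, t_rsu)                            # Eq. (11)
    return T

def average_delay(T):
    """Objective of P1 / P1.1, Eqs. (12) and (14)."""
    return float(np.mean(T))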
Obviously, P1.1 is still an NP-hard problem. For such problems, a heuristic algorithm is a rational solution. The Particle Swarm Optimization (PSO) algorithm, which originated from the study of bird predation behavior, is an evolutionary heuristic algorithm with efficient global search capabilities, and it has achieved great success in image processing and neural network training. Hence, we propose an offloading decision-making algorithm based on PSO. The basic idea of PSO is to find the optimal solution through collaboration and information sharing between the individuals of a group. In this paper, particles have only two attributes: speed and position. Speed represents the speed of movement, and position represents the direction of the particle. Each particle searches for the optimal solution separately in the search space and records its current individual extreme value, then shares this individual extreme value with the other particles. All particles in the swarm adjust their speed and position according to the current individual extreme value they have found and the current global optimal solution shared by the others. Therefore, through iteration, the particles of the entire population eventually converge to the optimal solution.

According to the speed update formula proposed by Clerc and Kennedy [32], we introduce a compression factor ρ. The update formulas for speed and position are presented as

V_i(I + 1) = ρ(w V_i(I) + c_1 r_1 (Pbest − X_i(I)) + c_2 r_2 (Gbest − X_i(I))),  (16)

X_i(I + 1) = X_i(I) + V_i(I),  (17)

where i = 1, 2, ⋯, Ncmax, and Ncmax is the number of particles; w is the coefficient of inertia (the larger w is, the stronger the global optimization ability and the weaker the local optimization ability); c_1 and c_2 are the learning factors; V_i(I) is the velocity of particle i at the I-th iteration; and X_i(I) is the current position of particle i at the I-th iteration.

4.2. Resource Allocation Optimization. Assuming the local computing ratio B and the offloading decision χ are given, the problem is transformed into minimizing the delay of the vehicles served by each RSU. Defining the vehicle set N_m of the tasks on RSU m, and dropping the terms that have no impact on the task execution of the objective function, the converted problem P1.2 can be presented as

P1.2: min_F ∑_{n∈N_m} T_n  (19)

s.t. (13a), (13e).

Substituting formulas (4), (5), and (6) into P1.2 and performing an equivalent transformation, the overall optimization problem is changed to the following form:

P1.3: min_F ∑_{n∈N_m} T_n  (20)

s.t. T_n ≥ b_n c_n / l_n,

T_n ≥ (1 − b_n) d_n / r_n^s + ζ_n^m (1 − b_n) d_n / r_n^m + (1 − b_n) c_n / f_n^m,

(13a), (13e).

As P1.3 shows, T_n is not differentiable with respect to f_n^m. Substituting formulas (5), (6), and (9) into P1.3, T_n is approximated as

T_n ≤ b_n c_n / l_n + (1 − b_n) d_n / r_n^s + ζ_n^m (1 − b_n) d_n / r_n^m + (1 − b_n) c_n / f_n^m
    = (1 − b_n) c_n / f_n^m + b_n (c_n / l_n − d_n / r_n^s − ζ_n^m d_n / r_n^m) + Γ_n^m,  (21)

where Γ_n^m = d_n / r_n^s + ζ_n^m d_n / r_n^m, and (1 − b_n) c_n / f_n^m + b_n ((c_n / l_n) − (d_n / r_n^s) − (ζ_n^m d_n / r_n^m)) + Γ_n^m is the upper bound of T_n. Substituting this upper bound into P1.3, P1.3 can be bounded with the worst-case delay as

P1.4: min_F ∑_{n∈N_m} [ (1 − b_n) c_n / f_n^m + b_n (c_n / l_n − d_n / r_n^s − ζ_n^m d_n / r_n^m) + Γ_n^m ]

+ ∑_{n=1}^{N_m} λ_n ( (1 − b_n) c_n / f_n^m + τ ) + θ_m ( ∑_{n∈N_m} f_n^m − f_m ),
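As a quick numerical check of the decomposition in Equation (21), the sketch below splits the worst-case delay bound into the part that depends on the allocated RSU resource f_n^m and the constant Γ_n^m. The rate symbols r_n^s and r_n^m belong to the transmission model that is not reproduced in this excerpt, and every numerical value here is an assumed placeholder.

# Hypothetical check of the upper bound in Eq. (21): a 1/f term plus a constant.
def worst_case_delay(b_n, d_n, c_n, l_n, r_s, r_m, zeta, f_nm):
    gamma = d_n / r_s + zeta * d_n / r_m                   # Gamma_n^m
    variable_part = (1 - b_n) * c_n / f_nm                 # part that depends on f_n^m
    constant_part = b_n * (c_n / l_n - d_n / r_s - zeta * d_n / r_m) + gamma
    return variable_part + constant_part                   # upper bound of T_n

# Example with assumed numbers: a 1 Mbit task, 10^9 cycles, half executed locally,
# zeta = 1 meaning the task is routed to the targeted RSU (constraint (13f)).
print(worst_case_delay(b_n=0.5, d_n=1e6, c_n=1e9, l_n=1e9,
                       r_s=1e7, r_m=1e8, zeta=1, f_nm=5e9))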
Input:
  The input parameters of the particle swarm: c_1, c_2, w, r_1, r_2, Mcmax, Ncmax.
Output:
  Gbest, ΘGbest
1: For j = 1; j <= Mcmax; j++ do
2:   For each i ∈ [1, Ncmax] do
3:     Initialize the velocity and position of particle i: V_i(0), X_i(0)
4:   End for
5:   Obtain ζ_n^m from Algorithm 2; then record the current position and fitness as the particle's individual extreme position and value Pbest, ΘPbest.
6:   Record the smallest fitness and the corresponding position as ΘGbest, Gbest.
7:   While the number of iteration steps is not 0 do
8:     For i = 1; i <= Ncmax; i++ do
9:       Update the velocity V_i(j) of particle i using Equation (16)
         Update the position X_i(j) of particle i using Equation (17)
         Evaluate the fitness of particle i using Equation (14)
10:      If fitness(X_i(j)) < ΘPbest then
11:        ΘPbest = fitness(X_i(j)), Pbest = X_i(j)
12:      End if
13:      If fitness(X_i(j)) < ΘGbest then
14:        ΘGbest = fitness(X_i(j)), Gbest = X_i(j)
15:      End if
16:    End for
17:  End while
18: End for
19: Return Gbest, ΘGbest
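For readers who want to experiment with the PSO-based offloading decision search, the following Python sketch applies the velocity and position updates of Equations (16) and (17) to a generic fitness function. It is a minimal sketch under stated assumptions: the fitness argument is a placeholder for the average-delay objective (14), and the encoding of a particle position as one RSU index per vehicle, the parameter values, and the function name are illustrative choices rather than the paper's exact settings.

# Hypothetical PSO sketch for the offloading-decision search (Eqs. (16)-(17)).
# The fitness function and all parameter values are illustrative assumptions.
import numpy as np

def pso_offloading(fitness, N, M, n_particles=30, n_iters=100,
                   w=0.7, c1=1.5, c2=1.5, rho=0.73, seed=0):
    rng = np.random.default_rng(seed)
    # Each particle encodes a candidate RSU index (0..M-1) for every vehicle.
    X = rng.uniform(0, M, size=(n_particles, N))
    V = np.zeros_like(X)
    decode = lambda x: np.floor(x).astype(int) % M          # continuous -> RSU indices
    pbest = X.copy()
    pbest_val = np.array([fitness(decode(x)) for x in X])
    g = int(np.argmin(pbest_val))
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for _ in range(n_iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        # Eq. (16): compressed velocity update; Eq. (17): position update.
        V = rho * (w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X))
        X = np.clip(X + V, 0, M - 1e-9)
        vals = np.array([fitness(decode(x)) for x in X])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        if vals.min() < gbest_val:
            gbest, gbest_val = X[np.argmin(vals)].copy(), float(vals.min())
    return decode(gbest), gbest_val

In the joint ODRR procedure, each fitness evaluation would presumably embed the offloading ratio and the resource allocation updates of the following listing, so that Equation (14) is scored consistently for the candidate decision.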
Input:
  λ_n, θ_m
Output:
  f_n^m
1: Repeat
2:   Calculate the resource allocation f_n^m based on Equation (26)
3:   Update λ_n(t), θ_m(t) using Equation (30)
4: Until convergence
5: Return f_n^m
Algorithm 4: Joint approach for offloading decision, resource allocation and ratio (ODRR).
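Equations (26) and (30) referenced in the listing above are not reproduced in this excerpt, so the Python sketch below is only a rough stand-in for the iterative structure: it pairs a KKT-style closed-form allocation with a subgradient update of the capacity multiplier θ_m, both of which are assumptions rather than the paper's exact update rules, and the λ_n update is omitted.

# Hypothetical stand-in for the resource-allocation loop; Eqs. (26) and (30)
# are not shown in this excerpt, so both update rules below are assumptions.
import numpy as np

def allocate_resources(workload, f_m, theta0=1.0, step=0.1, iters=200, eps=1e-9):
    """workload[n] = (1 - b_n) * c_n for the vehicles in N_m served by RSU m."""
    theta = theta0                                  # price on the capacity constraint
    f = np.zeros_like(workload)
    for _ in range(iters):
        # Allocation step: minimizing sum(workload/f) + theta*sum(f) gives f = sqrt(w/theta).
        f = np.sqrt(workload / max(theta, eps))
        # Subgradient step on the constraint sum_n f_n^m <= f_m (Eq. (7)/(13e)).
        theta = max(eps, theta + step * (f.sum() - f_m))
    # Final scaling so the RSU capacity constraint holds exactly.
    return f * (f_m / f.sum())

# Example with assumed workloads (cycles) and an RSU with 10 GHz of capacity.
print(allocate_resources(np.array([2e9, 1e9, 5e8]), f_m=10e9))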
Figure 2: The system delay of the cases with different vehicle numbers (N = 10, 20, 30, ⋯, 50). The vertical axis is the average delay; the curves are Offload-proportion-by-ODRR, Offload-whole-to-RSUs, Execute-locally, and Offload-proportion-by-SA.
To verify the performance of the proposed offloading strategy on the proposed Internet of Vehicles architecture, we compare it with the following offloading strategies:

(i) Offload-Whole-to-RSUs: offload the whole task to edge nodes, including the access node and remote nodes

(ii) Execute-Locally: the vehicle terminal directly executes the task locally

(iii) Offload-proportion-by-ODRR: joint optimization of the offloading decision, local calculation ratio, and resource allocation (ODRR) based on the proposed algorithm

(iv) Offload-proportion-by-SA: offload a proportion of the task to RSUs using the simulated annealing (SA) algorithm

5.2. Simulation Results. In this section, we present the performance of the proposed ODRR algorithm and compare it with the other conventional offloading strategies.

Figure 2 plots the average execution time of the vehicles as the number of connected vehicles increases. It can be seen that the performance of the proposed ODRR partial offloading algorithm is the best and keeps the average computation delay to a minimum, with the partial offloading SA algorithm behind it. When the number of vehicles is less than 35, the effect of offloading the whole task to RSUs is better than executing locally, because RSUs have more computing resources than the vehicles; the computing capability of an RSU is dozens of times that of local execution. Therefore, the computing performance of executing locally is worse than offloading the whole task to RSUs. But when the number exceeds this limit, the situation becomes different. On the one hand, the resources of the RSUs are limited: when the number of offloading vehicles exceeds a certain number, the load capacity of the RSUs is exceeded, resulting in a great performance decrease. On the other hand, more vehicles mean a worse communication environment, which leads to more communication delay. Compared with the other three conventional strategies, the latency of the proposed ODRR algorithm is always the smallest as the number of vehicles increases. Compared with the Offload-Whole-to-RSUs scheme, the ODRR delay can be reduced by 42.7% in the best case and 17.5% in the worst case; compared with the Execute-Locally scheme, the ODRR delay can be reduced by 52.6% in the best case and 24% in the worst case; compared with the Offload-proportion-by-SA scheme, the highest reduction is 16.7% and the lowest is 7.8%. It can be concluded that, compared with the other two conventional strategies, ODRR can reduce the delay by up to nearly half; compared to the Offload-proportion-by-SA scheme, the reduction is also up to 10%.

Figure 3 plots the impact of different execution complexities (ϕ = c_n/d_n) on the system delay. The number of connected vehicles is set to 30, and it is found that the average delay of the four offloading strategies increases linearly with the increase of task complexity. The performance of the ODRR algorithm given in this paper is clearly the best compared with the other strategies. The partial offloading by SA algorithm is the second, the whole offloading to RSUs is the third, and
Figure 3: The average delay of the cases with different execution complexities. The vertical axis is the average delay; the curves are Offload-proportion-by-ODRR, Offload-whole-to-RSUs, Execute-locally, and Offload-proportion-by-SA.
Figure 4: The average delay of the cases with different input data sizes. The vertical axis is the average delay; the curves are Offload-proportion-by-ODRR, Offload-whole-to-RSUs, Execute-locally, and Offload-proportion-by-SA.
executing locally has the worst performance. The higher the task complexity, the more CPU resources are needed to process each byte of data. The local computing resources are much smaller than those of the RSUs, so executing locally gets the highest latency. However, offloading the whole task to RSUs causes uneven distribution and local resource waste. The partial offloading ODRR algorithm takes into account both resource allocation and offloading ratio, which greatly improves
[14] Z. Ning, P. Dong, X. Kong, and F. Xia, “A cooperative partial computation offloading scheme for mobile edge computing enabled internet of things,” IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4804–4814, 2019.
[15] H. Lee, H. Kim, and Y. Kim, “A practical SDN-based data offloading framework,” in 2017 International Conference on Information Networking (ICOIN), pp. 604–607, Da Nang, Vietnam, 2017.
[16] H. Zhang, Z. Wang, and K. Liu, “V2X offloading and resource allocation in SDN-assisted MEC-based vehicular networks,” China Communications, vol. 17, no. 5, pp. 266–283, 2020.
[17] F. Zhou, Y. Wu, R. Q. Hu, and Y. Qian, “Computation rate maximization in UAV-enabled wireless-powered mobile-edge computing systems,” IEEE Journal on Selected Areas in Communications, vol. 36, no. 9, pp. 1927–1941, 2018.
[18] H. Xing, L. Liu, J. Xu, and A. Nallanathan, “Joint task assignment and resource allocation for D2D-enabled mobile-edge computing,” IEEE Transactions on Communications, vol. 67, no. 6, pp. 4193–4207, 2019.
[19] T. X. Tran and D. Pompili, “Joint task offloading and resource allocation for multi-server mobile-edge computing networks,” IEEE Transactions on Vehicular Technology, vol. 68, no. 1, pp. 856–868, 2019.
[20] J. Zhou, F. Wu, K. Zhang, Y. Mao, and S. Leng, “Joint optimization of offloading and resource allocation in vehicular networks with mobile edge computing,” in 2018 10th International Conference on Wireless Communications and Signal Processing (WCSP), pp. 1–6, Hangzhou, China, 2018.
[21] K. Zhang, Y. Mao, S. Leng et al., “Energy-efficient offloading for mobile edge computing in 5G heterogeneous networks,” IEEE Access, vol. 4, pp. 5896–5907, 2016.
[22] F. Wang, J. Xu, X. Wang, and S. Cui, “Joint offloading and computing optimization in wireless powered mobile-edge computing systems,” IEEE Transactions on Wireless Communications, vol. 17, no. 3, pp. 1784–1797, 2018.
[23] S. Chen, Y. Zheng, W. Lu, V. Varadarajan, and K. Wang, “Energy-optimal dynamic computation offloading for industrial IoT in fog computing,” IEEE Transactions on Green Communications and Networking, vol. 4, no. 2, pp. 566–576, 2020.
[24] S. Chen, X. Zhu, H. Zhang, C. Zhao, G. Yang, and K. Wang, “Efficient privacy preserving data collection and computation offloading for fog-assisted IoT,” IEEE Transactions on Sustainable Computing, vol. 5, no. 4, pp. 526–540, 2020.
[25] Q. Wang and S. Chen, “Latency-minimum offloading decision and resource allocation for fog-enabled Internet of Things networks,” Transactions on Emerging Telecommunications Technologies, vol. 31, no. 12, 2020.
[26] R. Yadav, W. Zhang, O. Kaiwartya, H. Song, and S. Yu, “Energy-latency tradeoff for dynamic computation offloading in vehicular fog computing,” IEEE Transactions on Vehicular Technology, vol. 69, no. 12, pp. 14198–14211, 2020.
[27] R. Yadav, W. Zhang, I. A. Elgendy et al., “Smart healthcare: RL-based task offloading scheme for edge-enable sensor networks,” IEEE Sensors Journal, vol. 21, no. 22, pp. 24910–24918, 2021.
[28] F. Hu, Q. Hao, and K. Bao, “A survey on software-defined network and OpenFlow: from concept to implementation,” IEEE Communications Surveys & Tutorials, vol. 16, no. 4, pp. 2181–2206, 2014.
[29] J. Liu, J. Wan, B. Zeng, Q. Wang, H. Song, and M. Qiu, “A scalable and quick-response software defined vehicular network assisted by mobile edge computing,” IEEE Communications Magazine, vol. 55, no. 7, pp. 94–100, 2017.
[30] I. Ku, Y. Lu, M. Gerla, R. L. Gomes, F. Ongaro, and E. Cerqueira, “Towards software-defined VANET: architecture and services,” in 2014 13th Annual Mediterranean Ad Hoc Networking Workshop (MED-HOC-NET), pp. 103–110, Piran, Slovenia, 2014.
[31] D. Ye, M. Wu, S. Tang, and R. Yu, “Scalable fog computing with service offloading in bus networks,” in 2016 IEEE 3rd International Conference on Cyber Security and Cloud Computing (CSCloud), pp. 247–251, Beijing, China, 2016.
[32] M. Clerc and J. Kennedy, “The particle swarm - explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
[33] T. Q. Dinh, J. Tang, Q. D. La, and T. Q. S. Quek, “Offloading in mobile edge computing: task allocation and computational frequency scaling,” IEEE Transactions on Communications, vol. 65, no. 8, pp. 3571–3584, 2017.
[34] Y. Qi, H. Wang, L. Zhang, and B. Wang, “Optimal access mode selection and resource allocation for cellular-VANET heterogeneous networks,” IET Communications, vol. 11, no. 13, pp. 2012–2019, 2017.