
SPECIAL SECTION ON MOBILE EDGE COMPUTING AND MOBILE CLOUD COMPUTING:

ADDRESSING HETEROGENEITY AND ENERGY ISSUES OF COMPUTE AND NETWORK RESOURCES

Received July 25, 2019, accepted August 11, 2019, date of publication September 16, 2019, date of current version October 31, 2019.
Digital Object Identifier 10.1109/ACCESS.2019.2941741

Task Data Offloading and Resource Allocation in Fog Computing With Multi-Task Delay Guarantee
MITHUN MUKHERJEE 1 (Member, IEEE), SUMAN KUMAR 2, QI ZHANG 3, RAKESH MATAM 4 (Member, IEEE),
CONSTANDINOS X. MAVROMOUSTAKIS 5, YUNRONG LV 1, AND GEORGE MASTORAKIS 6


1 Guangdong Provincial Key Laboratory of Petrochemical Equipment Fault Diagnosis, Guangdong University of Petrochemical Technology,
Maoming 525000, China
2 Department of Mathematics, IGNTU, Amarkantak 484886, India
3 DIGIT, Department of Engineering, Aarhus University, 8000 Aarhus, Denmark
4 Department of Computer Science and Engineering, Indian Institute of Information Technology Guwahati, Guwahati 781015, India
5 Mobile Systems Laboratory (MoSys Lab), Department of Computer Science, University of Nicosia, 1700 Nicosia, Cyprus
6 Department of Management Science and Technology, Hellenic Mediterranean University, 72100 Crete, Greece

Corresponding author: Yunrong Lv ([email protected])


This work was supported by the National Key Research and Development Program under Grant 2018YFC0808600.

ABSTRACT With the emergence of delay-sensitive task completion, computational offloading becomes increasingly desirable due to the end-user's limitations in performing computation-intensive applications. Interestingly, fog computing enables computational offloading for the end-users towards delay-sensitive task provisioning. In this paper, we study computational offloading for multiple tasks with various delay requirements, initiated one task at a time at the end-user side. In our scenario, the end-user offloads the task data to its primary fog node. However, due to the limited computing resources of fog nodes compared to the remote cloud server, it becomes a challenging issue to entirely process the task data at the primary fog node within the delay deadline imposed by the applications initiated by the end-users. In fact, the primary fog node is mainly responsible for deciding the amount of task data to be offloaded to the secondary fog node and/or the remote cloud. Moreover, the computational resource allocation in terms of CPU cycles to process each bit of the task data at a fog node and the transmission resource allocation between a fog node and the remote cloud are also important factors to be considered. We formulate the above problem as a Quadratically Constrained Quadratic Program (QCQP) and provide a solution. Our extensive simulation results demonstrate the effectiveness of the proposed offloading scheme under different delay deadlines and traffic intensity levels.

INDEX TERMS 5G and beyond, computation offloading, mobile edge computing, fog computing, resource
allocation, offloading decision.

I. INTRODUCTION
With the emergence of ultra-reliable and low-latency communications (uRLLC) [1]–[4], latency- and reliability-aware mission-critical applications are growing rapidly. A few examples are autonomous driving, virtual and augmented reality, cloud robotics, remote surgery, and factory automation. However, at the same time, the end-user's computational resources limit the user's experience (e.g., latency and reliability) for computation-intensive applications. Cloud computing has already proven its significance in processing computation-intensive tasks; however, the physical distance between the end-user and the remote cloud data center and the burden on the fronthaul link are the major barriers for low-latency-aware applications. To address the above challenges, fog computing [5], [6], often viewed as a middleware between the end-user and the cloud, extends the computational, communication, and storage resources of cloud computing close to the network edge.

The associate editor coordinating the review of this manuscript and approving it for publication was Christos Verikoukis.

A. MOTIVATION
For computation-intensive task processing in a fog computing scenario, the end-user offloads the data either partially or entirely to the nearby fog computing node(s). It would be an ideal solution if a single fog computing node (hereinafter referred to as fog node) is able to compute, process the task data, and deliver the results for the tasks
received from the end-user. However, the computational and storage resources of a single fog node are insufficient to handle all the task data under the delay deadline. As a result, the fog node either finds an assistive fog node in its vicinity or uploads the task to a remote cloud for further computational resources. Both cases retain challenges: a) the assistive fog node also does not always have enough available resources, and b) offloading to the cloud still creates a burden on the upload link, resulting in a delay in the task processing. Thus, it becomes a challenging task to decide where to offload (e.g., assistive fog node or remote cloud) and how much partial task data should be offloaded under the delay guarantee imposed by the end-user's application.

B. RELATED WORK
In the last decade, task offloading has been extensively investigated in both academia and industry under different nomenclatures/technologies, e.g., mobile cloud computing [7], mobile edge computing [8], [9], cloudlets [10], and computing access points [11]. Recently, Chen et al. provided an optimal solution for deciding between a fog node (similar to a computing access point) and a remote cloud server to offload the task data, considering a single user with a single task [12], a single user with multiple tasks [13], and multiple users with more than one task per user [11]. Basically, all these approaches select the remote cloud if the fog node does not meet the latency and energy consumption deadline requirement – fog node collaboration was not considered in the network model. Most recently, with the assumption that the end-user has dual connectivity [14], one link with an access point (which can be referred to as a fog node) and another with a base station (with higher computational capability), an offloading strategy was suggested. Several works [15], [16] considered fog node collaboration with transmission delay between fog nodes; however, these works did not provide any insights considering multi-user and multi-delay guarantees. In our recent work [17], a joint optimization of task data offloading and computational resource allocation for a fog network is addressed. In this work, we further study the multi-task scenario with a different delay deadline for each task, which was not considered in [17].

C. OBJECTIVE AND CONTRIBUTIONS
In this paper, a fog network is considered where the end-user partially uploads its task data to a nearby fog node.1 Considering multiple tasks with different delay deadlines received from the end-users, it becomes a challenging issue for the fog node to allocate computing resources for each task. In addition, a fog node takes tasks from its neighbor nodes; therefore, the fog node has to optimize the offloading decision to the neighbor fog node and the remote cloud. The main contributions of this paper are summarized as follows.
• We focus on the offloading decision and the amount of task data to be offloaded considering the delay deadline. We consider multiple tasks with different delays imposed by the applications initiated by the end-users.
• To address these challenges, we show a comprehensive delay model considering computational and transmission delay and formulate a multi-task offloading optimization problem that is transformed into a Quadratically Constrained Quadratic Programming (QCQP) problem. We further devise a heuristic approach to solve this problem and show that the proposed solution is able to effectively guarantee the latency deadline compared to fixed computing resource allocation.

The rest of the paper is organized as follows. Section II presents the system model. The total delay model, including local task execution delay and transmission delay, is discussed in Section III. The task offloading and computational resource allocation in primary and secondary fog nodes are presented in Section IV. The simulation results are presented in Section V. Finally, conclusions are drawn in Section VI.

1 An interesting future work is to further consider fog node selection [18].
2 The energy consumption issue, although novel, is a part of future work.

FIGURE 1. Illustration of a fog network for task data offloading with multiple applications.

II. SYSTEM MODEL
Consider a fog network with a set of fog computing nodes N = {1, 2, . . . , N}, a set of end-users K = {1, 2, . . . , K}, and one remote cloud server, as shown in Fig. 1. We consider that the fog nodes and end-users are uniformly distributed over the network. In general, we take a time-slotted system indexed by t ∈ {0, 1, . . .}, where the length of each time slot is Δt (in s). Assuming one task arrives at the kth end-user at time slot t, the kth end-user aims to process the task data by itself. However, due to resource constraints (CPU rate and energy consumption2), the end-users are often unable to process the data within the specified delay threshold when the total required computation cycles are high. Therefore, the end-user uploads either a part of or the entire task to the nearest
fog node, which acts as a primary (often termed master) fog node. Assuming two disjoint task queues maintained by the end-user's task scheduler, one for local task data processing at the end-user and another for task data offloading, we consider that the end-user simultaneously executes and offloads the task data. The fog computing node has higher computing and storage resources compared to the end-users; however, it has fewer resources in comparison with the remote cloud server. Therefore, the primary fog node selects a set of fog nodes within its proximity and/or uploads the task data to the remote cloud for task processing within the deadline on the delay imposed by the applications. Although the transmission rate between the fog nodes plays a significant role in task data offloading to other fog nodes, we assume that the fog nodes are interconnected3 with Ethernet, so the inter-fog transmission delay is ignored compared to the other delays involved.

Normally, it is assumed that one fog node can serve as the primary fog node for several end-users. Let M_i = {1, 2, . . . , M_i}, with Σ_{i=1}^{N} M_i = K, be the set of end-users that select the ith fog node as their primary fog node. Moreover, we consider that these sets are disjoint in nature, M_i ∩ M_i′ = ∅ for i ≠ i′, because one end-user is not allowed to offload its task data directly to more than one primary fog node. In fact, only the primary fog node decides whether or not to further offload the end-user's task data (to a secondary fog node and/or the remote cloud). We further assume that the ith primary fog node offloads to the remote cloud only the task data of end-users k ∈ M_i. The ith fog node cannot offload to the cloud the task data that has been received from other end-users k′ ∈ K \ M_i via the J_i neighbor fog nodes, where J_i = {1, 2, . . . , J_i}, J_i ⊆ N, is the set of fog nodes that can select the ith fog node to offload their task data. The reason is that the offloading decision to the remote cloud (and to other fog nodes) is coordinated by the primary fog node of the kth end-user, i.e., the ith fog node that the end-user selected. Note that a trade-off exists between the computational and transmission latency among the tasks offloaded to other fog nodes and the cloud. As shown in Fig. 1, the ith fog node receives the task data from the end-users k ∈ M_i that select the ith fog node as their primary fog node and from other end-users k′ ∈ K \ M_i via the J_i neighbor fog nodes.

A. APPLICATION TYPE
We consider a large-scale industrial application where the data (such as state information) collected by industrial sensors are processed for the assistance of delay-sensitive decision-making applications. Some examples are manufacturing processes, factory automation, and fault detection. Considering a heterogeneous application scenario, an end-user (e.g., an industrial sensor in an industrial application) initializes only one task at a time from a finite application set A = {1, 2, . . . , A}. Each application requires a different number of CPU cycles to process each bit of the task data, i.e., the processing density is different. Moreover, each application is bounded by a different delay requirement. If the kth end-user initializes the ath application, we denote the processing density by L_a and the deadline on the delay by τ_a^task.

B. TASK AT THE END-USER SIDE
Let D_k(t) (in bits) be the task data size arriving at the kth end-user at the beginning of time slot t. This task data can be processed starting from the next time slot, i.e., (t + 1). As the end-user is assumed to initialize one task at a time, the end-user selects a task, say task a, from the application set A. Generally, a larger task that cannot be processed in one time slot can be divided into small sub-tasks that can each be computed in a single time slot. For the sake of simplicity, we omit t in the rest of the paper.

Let D_k^CPU be the amount of task data (in bits) locally computed at the kth end-user side. Based on the task data size, the processing density, and the available computing resources, if the end-user estimates that the task data cannot be processed within the tolerable delay τ_a^task, then the task scheduler in the end-user starts to offload4 the task data to the primary fog node in parallel with the local task processing. Therefore, we have

D_k = D_k^CPU + μ_{k,i} D_{k,i}^OL,   (1)

where D_{k,i}^OL is the task data (in bits) offloaded from the kth end-user to the ith fog node, and μ_{k,i} is the offloading decision variable at the end-user side, expressed as μ_{k,i} = 1 if the kth end-user selects the ith fog node as its primary fog node to offload the task data, and 0 otherwise.

C. TASK AT THE FOG NODE SIDE
The primary fog node receives the task data from the end-users under its coverage. However, due to the resource constraints, the primary fog node is not able to process all the task data offloaded by the end-users within the deadlines imposed by the different applications. Thus, the fog node has to offload a part of the task to a neighbor fog node (we call it the secondary fog node) that has a sufficient amount of resources.

We define β_{k,i,j} as the inter-fog offloading decision variable, expressed as β_{k,i,j} = 1 if the ith primary fog node offloads the kth end-user's task data to the jth secondary fog node, and 0 otherwise. Moreover, we introduce another variable, the fog-cloud offloading decision variable λ_{k,i}, which equals 1 if the ith primary fog node offloads the kth end-user's task data to the remote cloud, and 0 otherwise.

Let D_{k,i}^{CPU,fog} be the locally processed task data of the kth end-user at the ith primary fog node. Therefore,

D_{k,i}^OL = D_{k,i}^{CPU,fog} + β_{k,i,j} D_{k,i→j}^OL + λ_{k,i} D_{k,i→c}^OL,   (2)

where D_{k,i→j}^OL and D_{k,i→c}^OL are the offloaded task data of the kth end-user from the ith primary fog node to the jth secondary fog node and to the remote cloud, respectively.

3 In several cases, the fog nodes are connected via WiFi Direct (using IEEE 802.11n) with a data rate of more than 300 Mbps.
4 Several tasks, e.g., OS-level processing, cannot be offloaded. We consider offloading only the tasks that can be offloaded, to avail higher computational resources for processing.
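To make the bookkeeping in (1) and (2) concrete, the following is a minimal Python sketch. It is illustrative only: the class, field names, and numbers are stand-ins that mirror the notation above and are not code from the paper. It splits one task into the locally computed part and the parts handled at the primary fog node, a secondary fog node, and the cloud, and checks that (1) and (2) are mutually consistent.

```python
from dataclasses import dataclass

@dataclass
class TaskSplit:
    """Split of one end-user task (all sizes in bits), mirroring eqs. (1)-(2)."""
    D_k: float        # total task data of end-user k
    D_cpu: float      # part computed locally at the end-user (D_k^CPU)
    mu: int           # end-user -> primary fog offloading decision (mu_{k,i})
    D_fog_cpu: float  # part processed at the primary fog node (D_{k,i}^{CPU,fog})
    beta: int         # primary -> secondary fog offloading decision (beta_{k,i,j})
    D_to_j: float     # part offloaded to the secondary fog node (D_{k,i->j}^OL)
    lam: int          # primary fog -> cloud offloading decision (lambda_{k,i})
    D_to_c: float     # part offloaded to the remote cloud (D_{k,i->c}^OL)

    @property
    def D_ol(self) -> float:
        """Data offloaded to the primary fog node, from (1): D_k = D_k^CPU + mu * D_{k,i}^OL."""
        return (self.D_k - self.D_cpu) / self.mu if self.mu else 0.0

    def consistent(self, tol: float = 1e-9) -> bool:
        """Check (2): D_{k,i}^OL = D_{k,i}^{CPU,fog} + beta*D_{k,i->j}^OL + lam*D_{k,i->c}^OL."""
        rhs = self.D_fog_cpu + self.beta * self.D_to_j + self.lam * self.D_to_c
        return abs(self.D_ol - rhs) <= tol

# Assumed example: a 10 Mbit task, 4 Mbit kept locally, 3 Mbit processed at the
# primary fog node, 2 Mbit at a secondary fog node, and 1 Mbit at the cloud.
split = TaskSplit(D_k=10e6, D_cpu=4e6, mu=1, D_fog_cpu=3e6,
                  beta=1, D_to_j=2e6, lam=1, D_to_c=1e6)
print(split.D_ol, split.consistent())   # 6000000.0 True
```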


III. DELAY MODEL: LOCAL TASK EXECUTION DELAY AND TRANSMISSION DELAY
For the delay model, we only consider a) the local task execution delay and b) the transmission delay. An interesting future work is to further consider a task queue model, task prefetching, and resource allocation delay.

A. LOCAL TASK EXECUTION DELAY
The task execution delay mainly depends on the processing density, i.e., the required cycles to process the task data, and the CPU clock speed. Consider that the kth end-user initializes task a. Then, the local task execution delay at the kth end-user is

τ_k^CPU = (L_a D_k^CPU) / f_k   [s],   (3)

where L_a is the processing density (in cycles/bit) for the ath task served by the kth user and f_k denotes the CPU clock speed (in cycles/s) of the kth end-user.

In a similar way, the local task execution delay (in [s]) for the kth end-user's task at the ith fog node becomes

τ_{k,i}^{CPU,fog} = (L_a D_{k,i}^{CPU,fog}) / f_{k,i}   [s],   (4)

where f_{k,i} refers to the CPU clock speed (in cycles/s) of the ith fog node assigned for the kth user's task data processing. As the offloaded task data for the kth user from the ith primary fog node to the jth secondary fog node must be processed at the secondary fog node itself, the local task execution delay for the kth end-user's task at the jth secondary fog node is given by

τ_{k,j}^{CPU,fog} = (L_a D_{k,i→j}^OL) / f_{k,j}   [s].   (5)

In our present work, we will not consider the local task processing time at the remote cloud since the cloud is generally equipped with a sufficient amount of computational and storage resources [19]. Therefore, compared to the resource-constrained fog nodes and end-users, the task execution delay is significantly lower in the cloud server.

B. TRANSMISSION DELAY
The transmission delay mainly depends on the transmission rate (sometimes called the offloading rate). In general, the total transmission delay consists of both uploading and downloading time. Similar to [20], in our system model, the downloading time is ignored due to the small data size of the results compared to the uploaded task data size from the end-user to the fog node, from the fog node to the cloud, and from the primary fog node to the secondary fog node.

1) END-USER TO PRIMARY FOG NODE
The transmission delay between the kth end-user and the ith primary fog node is

τ_{k,i}^Tx = (μ_{k,i} D_{k,i}^OL) / r_{k,i}   [s],   (6)

where r_{k,i} denotes the transmission rate between the kth end-user and the ith fog node.

2) INTER-FOG TRANSMISSION DELAY
It is assumed that the fog nodes can be interconnected via IEEE 802.3ah/av 1/10 Gbps Ethernet. Thus, compared to the transmission rates between the end-user and the primary fog node and between the primary fog node and the remote cloud, the inter-fog transmission delay can be ignored.

3) FOG NODE TO CLOUD TRANSMISSION DELAY
We consider that the fog nodes use orthogonal bands to upload data to the cloud as in 4G cellular networks [21], [22]. The transmission delay between the ith fog node and the cloud for the kth end-user is

τ_{k,i→c}^Tx = (λ_{k,i} D_{k,i→c}^OL) / r_{k,i,c}   [s],   (7)

where r_{k,i,c} is the offloading rate for the kth user from the ith fog node to the cloud.

C. TOTAL DELAY
Since the task scheduler at the end-user simultaneously executes and uploads the task data, the total delay will be the maximum of the local task execution delay and the sum of the transmission delay and the task execution delay of the offloaded task data. Moreover, the primary fog node simultaneously a) locally executes the task data, b) offloads the task data to the secondary fog node, and c) offloads the task data to the remote cloud. Therefore, the maximum of τ_{k,i}^{CPU,fog}, τ_{k,j}^{CPU,fog}, and τ_{k,i→c}^Tx will mainly contribute to the total delay. Therefore, the total delay is expressed as

τ_k = max( τ_k^CPU , τ_{k,i}^Tx + max( τ_{k,i}^{CPU,fog}, τ_{k,j}^{CPU,fog}, τ_{k,i→c}^Tx ) ),   (8)

where the first term is the delay of the part executed at the end-user and the second term is the delay for the offloaded task data.

IV. PROBLEM FORMULATION FOR TASK OFFLOADING AND RESOURCE ALLOCATION
The main objective is to complete the task execution within the delay deadline (i.e., τ_a^task) imposed by the certain application initiated by the end-user. As we do not consider the energy consumption issue, we let the end-user locally execute the task data until the delay deadline τ_a^task. Therefore, we drop τ_k^CPU in (9) with the assumption that τ_k^CPU ≃ τ_a^task. As a result, we aim to obtain the following: min τ_k′, where

τ_k′ = τ_{k,i}^Tx + max( τ_{k,i}^{CPU,fog}, τ_{k,j}^{CPU,fog}, τ_{k,i→c}^Tx ).   (9)

Therefore,

D_{k,i}^OL = D_k − D_k^CPU ≡ D_k − (τ_a^task f_k) / L_a.   (10)
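For a quick numerical reading of (3)–(10), the sketch below (a minimal illustration with assumed parameter values; the function and variable names simply mirror the notation above) computes how much data the end-user must offload from (10) and the resulting total delay from (8), assuming for simplicity that the primary fog node processes the entire offloaded portion (μ_{k,i} = 1, no secondary fog or cloud offloading).

```python
def offloaded_bits(D_k, tau_task, f_k, L_a):
    """Eq. (10): data the end-user cannot finish by the deadline, D^OL = D_k - tau*f_k/L_a."""
    return max(0.0, D_k - tau_task * f_k / L_a)

def total_delay(D_cpu, D_ol, D_fog_cpu, D_to_j, D_to_c,
                f_k, f_ki, f_kj, r_ki, r_kic, L_a):
    """Eq. (8): max of local execution and (uplink transmission + slowest remote branch).
    Inter-fog transmission is ignored, as in Section III-B2; mu_{k,i} = lambda_{k,i} = 1 assumed."""
    tau_cpu   = L_a * D_cpu     / f_k      # (3) execution at the end-user
    tau_fog_i = L_a * D_fog_cpu / f_ki     # (4) execution at the primary fog node
    tau_fog_j = L_a * D_to_j    / f_kj     # (5) execution at the secondary fog node
    tau_tx_ui = D_ol  / r_ki               # (6) end-user -> primary fog uplink
    tau_tx_ic = D_to_c / r_kic             # (7) primary fog -> cloud uplink
    return max(tau_cpu, tau_tx_ui + max(tau_fog_i, tau_fog_j, tau_tx_ic))

# Assumed values (units: bits, cycles/bit, cycles/s, bits/s).
L_a = 1900 / 8          # ~237.5 cycles/bit (1900 cycles/byte, as used later in Section V)
D_k = 8e6               # 1 MB task
f_k = 600e6             # end-user CPU clock speed
tau = 2.0               # deadline [s]

D_ol  = offloaded_bits(D_k, tau, f_k, L_a)     # bits that must be offloaded
D_cpu = D_k - D_ol
delay = total_delay(D_cpu, D_ol, D_fog_cpu=D_ol, D_to_j=0.0, D_to_c=0.0,
                    f_k=f_k, f_ki=5e9, f_kj=5e9, r_ki=10e6, r_kic=1e6, L_a=L_a)
print(f"offload {D_ol/8e6:.2f} MB, total delay {delay:.2f} s")
```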

As an end-user executes one task at a time, we allocate the maximum CPU clock speed f_k^max to the task data processing, i.e., f_k = f_k^max. As in [23], we take the assumption that a fog node adjusts its CPU rate to meet the different amounts of CPU resources needed for processing a certain task's data. Let f_i^max be the maximum CPU rate for the ith fog node; then

Σ_{k=1}^{M_i} μ_{k,i} f_{k,i} + Σ_{j=1}^{J_i} Σ_{k′=1}^{M_j} β_{k′,j,i} f_{k′,i} ≤ f_i^max,   (11)

where the first sum is over ∀k ∈ M_i and the second over ∀k′ ∈ K \ M_i.

A. PROBLEM FORMULATION
Our main objective is to find the optimal way (where to offload, i.e., secondary fog node or remote cloud server, and the amount of task data to offload) to offload task data that cannot be entirely processed at the end-user side within the latency deadline. As discussed, a primary fog node receives the task data from the end-users directly under its coverage and from its neighbor fog nodes. Thus, we need to jointly optimize the computing resources (CPU rate allocated for the end-user's task data execution) and the transmission resources for offloading task data from the primary fog node to the remote cloud. At the same time, the interference from the end-users under the same primary fog node also plays an important role in the transmission rate between the end-user and the primary fog node, since these end-users share the same channel to offload their task data. Therefore, the number of end-users that select the ith fog node as their primary node, M_i, is also an important factor to be considered.

We aim to jointly optimize the computing and transmission resource allocation in the fog nodes (both primary and secondary) to guarantee the minimum delay for each end-user's task completion, considering tasks that arrive from multiple users. The task offloading vector for the kth user is defined as Θ_k = [μ_{k,i}, β_{k,i,j}, λ_{k,i}, D_{k,i}^{CPU,fog}, D_{k,i→j}^OL, D_{k,i→c}^OL]. Next, we formulate the above optimization problem as:

minimize_{M_i, Θ_k, x_i, f_{k,i}, f_{k,j}}   τ_k′   ∀k   (12a)
subject to
μ_{k,i}, β_{k,i,j}, λ_{k,i} ∈ {0, 1},   (12b)
D_{k,i}^{CPU,fog} + D_{k,i→j}^OL + D_{k,i→c}^OL ≥ D_{k,i}^OL,   (12c)
Σ_{k=1}^{M_i} r_{k,i} ≤ r_i^max,   (12d)
Σ_{i=1}^{N} λ_{k,i} r_{k,i,c} ≤ r_{k,c}^max,   (12e)
and (11),   (12f)

where the constraint (12b) gives the offloading decision variables for the kth end-user. The constraint (12c) corresponds to the condition that the total offloaded task data of the kth end-user to the primary fog node must be processed at the primary and secondary fog nodes and the cloud server. The constraint (12d) denotes that the total transmission rate between the ith fog node and all its users is under the maximum value r_i^max. The constraint (12e) corresponds to the condition that the total transmission rate between the fog nodes and the cloud for the kth user is limited by the maximum value r_{k,c}^max.

First, the constraint (12b) is transformed into a quadratic equation x(x − 1) = 0, where x ∈ {μ_{k,i}, β_{k,i,j}, λ_{k,i}}. Then, we introduce auxiliary variables to convert the above optimization problem into a convex QCQP. The CVX toolbox [24] is used to obtain the optimum points; a feasibility analysis is left for future work. Afterward, we introduce the auxiliary variables ζ_{k,i}^L, ζ_{k,j}^O, and ζ_{k,c}^O, such that

τ_{k,i}^{CPU,fog} ≤ ζ_{k,i}^L,   (13a)
τ_{k,j}^{CPU,fog} ≤ ζ_{k,j}^O,   (13b)
τ_{k,i→c}^Tx ≤ ζ_{k,c}^O.   (13c)

Let α_k = max[ζ_{k,i}^L, ζ_{k,j}^O, ζ_{k,c}^O], such that {ζ_{k,i}^L, ζ_{k,j}^O, ζ_{k,c}^O} ≤ α_k ∀k ∈ M_i. Let w_k = [μ_{k,i}, β_{k,i,j}, λ_{k,i}, ζ_{k,i}^L, f_{k,i}, ζ_{k,j}^O, f_{k,j}, r_{k,i,c}, ζ_{k,c}^O, r_{k,i}]^T of size 10×1 denote the variable vector, where (·)^T denotes the transpose. Let b_{k,i}^L = [L_a D_{k,i}^{CPU,fog}, 0_{1×9}]^T, b_{k,j}^O = [L_a D_{k,i→j}^OL, 0_{1×9}]^T, b_{k,c}^O = [0, 0, D_{k,i→c}^OL, 0_{1×7}]^T, and e_q = [0_{1×(q−1)}, 1, 0_{1×(10−q)}]^T for 1 ≤ q ≤ 10.

Then, we rewrite (13a)–(13c) as

e_1^T b_{k,i}^L + w_k^T A_{k,i}^L w_k ≤ 0,   (14a)
e_1^T b_{k,j}^O + w_k^T A_{k,j}^O w_k ≤ 0,   (14b)
w_k^T b_{k,c}^O + w_k^T A_{k,c}^O w_k ≤ 0,   (14c)

where

A_{k,i}^L = [ 0_{3×3}  0_{3×2}  0_{3×5} ;  0_{2×3}  Ā_{k,i}^L  0_{2×5} ;  0_{5×3}  0_{5×2}  0_{5×5} ],   Ā_{k,i}^L = [ 0  −1/2 ;  −1/2  0 ],

A_{k,j}^O = [ 0_{5×5}  0_{5×2}  0_{5×3} ;  0_{2×5}  Ā_{k,j}^O  0_{2×3} ;  0_{3×5}  0_{3×2}  0_{3×3} ],   Ā_{k,j}^O = [ 0  −1/2 ;  −1/2  0 ],

and

A_{k,c}^O = [ 0_{7×7}  0_{7×2}  0_{7×1} ;  0_{2×7}  Ā_{k,c}^O  0_{2×1} ;  0_{1×7}  0_{1×2}  0_{1×1} ],   Ā_{k,c}^O = [ 0  −1/2 ;  −1/2  0 ].

Let D_k^1 = [D_{k,i}^OL, −D_{k,i}^{CPU,fog}, −D_{k,i→j}^OL, −D_{k,i→c}^OL, 0_{1×6}]^T. Then, (12c) becomes (e_1 + e_2 + e_3 + e_4)^T D_k^1 ≤ 0, and (12d) becomes e_{10}^T w_k ≤ r_i^max ∀i ∈ N. We further rewrite (12e) as

Σ_{i=1}^{N} w_k^T A_{k,i,c} w_k ≤ r_{k,c}^max,   (15)

where A_{k,i,c} is the 10×10 symmetric matrix with 1/2 in the entries coupling λ_{k,i} and r_{k,i,c}, and zeros elsewhere:

A_{k,i,c} = [ 0_{2×10} ;  0_{1×7}  1/2  0_{1×2} ;  0_{4×10} ;  0_{1×2}  1/2  0_{1×7} ;  0_{2×10} ].
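The constraints (13)–(15) are all of the quadratic form w^T A w + b^T w ≤ c, so the problem is a (generally non-convex) QCQP. The paper reports solving its convexified formulation with the CVX toolbox [24] together with a heuristic; as a generic illustration of how such a lifted QCQP can be relaxed and solved numerically, the sketch below applies the standard Shor semidefinite relaxation in CVXPY (a Python analogue of CVX). The matrices P0, P1 and vectors q0, q1 are small random stand-ins, not the paper's A, b, or Q matrices.

```python
import numpy as np
import cvxpy as cp

# Generic Shor (semidefinite) relaxation of a small QCQP of the form
#   minimize    w^T P0 w + q0^T w
#   subject to  w^T P1 w + q1^T w <= r1,  and  w[0] binary, i.e., w[0](w[0] - 1) = 0.
n = 10                                   # dimension of w_k in the paper
rng = np.random.default_rng(0)
P0 = np.eye(n)
q0 = rng.standard_normal(n)
P1 = rng.standard_normal((n, n))
P1 = (P1 + P1.T) / 2                     # symmetric, possibly indefinite stand-in
q1 = rng.standard_normal(n)
r1 = 5.0

# Lift Z = [[W, w], [w^T, 1]] >= 0, where W stands for w w^T; dropping rank(Z) = 1
# turns the non-convex QCQP into a convex semidefinite program.
Z = cp.Variable((n + 1, n + 1), PSD=True)
W, w = Z[:n, :n], Z[:n, n]
constraints = [
    Z[n, n] == 1,
    cp.trace(P1 @ W) + q1 @ w <= r1,     # lifted quadratic constraint
    W[0, 0] == w[0],                     # lifted form of w[0](w[0] - 1) = 0, cf. (12b)
    cp.trace(W) <= 100.0,                # keeps this toy relaxation bounded
]
prob = cp.Problem(cp.Minimize(cp.trace(P0 @ W) + q0 @ w), constraints)
prob.solve(solver=cp.SCS)
print(prob.status, round(prob.value, 3))
```

If the optimal Z happens to be rank one, the relaxation is exact and w is recovered directly; otherwise a rounding step (e.g., projecting the lifted binary entries back to {0, 1}) yields a heuristic feasible point, which is the general flavor of heuristic post-processing used on top of such relaxations.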

Moreover, we rewrite the constraint (12f) as

Σ_{k=1}^{M_i} w_k^T Q_{k,i} w_k + Σ_{j=1}^{J_i} Σ_{k′=1}^{M_j} w_k^T Q_{k′,i,j} w_k ≤ f_i^max,   (16)

where Q_{k,i} and Q_{k′,i,j} are 10×10 symmetric matrices with 1/2 in the entries that pick out the bilinear terms of (11), and zeros elsewhere, so that w_k^T Q_{k,i} w_k = μ_{k,i} f_{k,i} and w_k^T Q_{k′,i,j} w_k = β_{k′,j,i} f_{k′,i}.

V. SIMULATION RESULTS
This section evaluates the performance of the proposed solution for task offloading in multi-task delay-sensitive fog networks with Monte Carlo simulations. We consider that N = 5 fog nodes and K = 15 end-users in total are uniformly distributed over the network. We set r_{k,c}^max = 1 Mbps, r_i^max = 10 Mbps, f_k^max = 600 × 10^6 [cycles/s], f_i^max = 5 × 10^9 [cycles/s], and f_c = 10 × 10^9 [cycles/s]. For multiple tasks, we consider two tasks with different processing densities: L_a = 1900 [cycles/byte] (e.g., x264 constant-bit-rate encoding [25]) for task 1 and L_a = 2500 [cycles/byte] for task 2.

We show the performance of the average total delay versus the input task data size in Fig. 2. The total delay for the task increases with the increase of the input data size. We further compare the performance of the proposed scheme with a baseline approach, called fixed resource allocation, where the transmission resources are equally distributed over all the fog nodes and the fog node allocates an equal amount of CPU resources to each task. It is interesting to observe that the proposed approach outperforms the fixed resource allocation.

FIGURE 2. Total delay with task data size.

Moreover, Fig. 3 demonstrates the impact of the delay deadlines of the two tasks with different task processing densities on the delay violation (i.e., the probability that the total delay does not meet the delay deadline). From Fig. 3, we see that the delay violation is higher for task 1 compared to task 2 because task 1 has a stricter delay deadline than task 2. In case 2, we reduce the delay deadline for task 2 from 4 s to 2 s. Although we keep the same delay deadline for task 1 in case 2 as in case 1, because of the stricter deadline on task 2, the delay violation increases for both tasks. It is worthwhile to note that in case 2, although both tasks have the same delay deadline, due to the higher processing density of task 2 compared to task 1, the delay violation is slightly higher for task 2 than for task 1. Furthermore, we have relaxed the delay deadline for task 1 in case 3 compared to the deadline in case 2. As evident, the delay violation decreases with the increase of the delay deadline. However, the reduction in delay violation is smaller for task 2 (note that its delay deadline is the same as in the previous case) compared to the reduction for task 1. Thus, we can say that if we relax the delay deadline for the tasks with lower processing density, it has a negligible impact on the delay violation of the tasks with high processing density.

FIGURE 3. Delay violation with different deadlines for the tasks. We consider the following cases. Case 1: [τ_{a1}^task = 2 s, τ_{a2}^task = 4 s]; Case 2: [τ_{a1}^task = 2 s, τ_{a2}^task = 2 s]; Case 3: [τ_{a1}^task = 4 s, τ_{a2}^task = 2 s].

FIGURE 4. Delay violation with the task data size.
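To make the notion of delay violation and the fixed-allocation baseline concrete, the following Monte Carlo sketch estimates the violation probability for the two task types. It is an illustrative approximation only: the exponential task-size distribution, the equal-share resource split, and the single primary-fog path are our own assumptions, and this is not the simulation code behind Figs. 2–4.

```python
import random

# Illustrative parameters, partly taken from the setup above (Section V).
L = {1: 1900 / 8, 2: 2500 / 8}   # processing density [cycles/bit] for tasks 1 and 2
deadline = {1: 2.0, 2: 4.0}      # delay deadlines [s], Case 1 of Fig. 3
f_user = 600e6                   # end-user CPU [cycles/s]
r_uplink = 10e6 / 3              # assumed fixed split: ~K/N = 3 users share the 10 Mbps uplink
f_fog = 5e9 / 3                  # assumed fixed split: equal CPU share per task at a fog node
TRIALS = 10_000

def violation_probability(task: int, mean_bits: float) -> float:
    """Estimate P(total delay > deadline) for one task type under the fixed baseline."""
    violations = 0
    for _ in range(TRIALS):
        D = random.expovariate(1.0 / mean_bits)                    # assumed task-size draw
        D_ol = max(0.0, D - deadline[task] * f_user / L[task])     # eq. (10)
        delay = D_ol / r_uplink + L[task] * D_ol / f_fog           # eqs. (6) + (4), fog-only path
        violations += delay > deadline[task]
    return violations / TRIALS

for t in (1, 2):
    print(f"task {t}: estimated delay violation = {violation_probability(t, 4e6):.3f}")
```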


Finally, Fig. 4 shows the results for the delay violation with the task data size. It is clearly observed that the task data size has an adverse impact on the delay violation for both tasks.

VI. CONCLUSION
In this paper, we address the issues of task data offloading in fog computing considering different delay deadlines for the tasks initiated by the end-users. Our approach takes into account the delay deadline for different tasks, the transmission delay between the primary fog node and the cloud, and the secondary fog node's available computing resources while offloading the task data. Simulation results have shown that the proposed solution outperforms fixed computational and transmission resource allocation in satisfying the delay deadline. Moreover, a trade-off between the deadline on the latency and the delay violation is observed under numerous parameter settings. Further extensions of this work may include the investigation of task offloading for carrier-grade reliability and latency constraints with joint and competitive caching designs based on network utility maximization.

REFERENCES
[1] K. S. Kim, D. K. Kim, C.-B. Chae, S. Choi, Y.-C. Ko, J. Kim, Y.-G. Lim, M. Yang, S. Kim, B. Lim, K. Lee, and K. L. Ryu, "Ultrareliable and low-latency communication techniques for tactile Internet services," Proc. IEEE, vol. 107, no. 2, pp. 376–393, Feb. 2019.
[2] J. Liu and Q. Zhang, "Code-partitioning offloading schemes in mobile edge computing for augmented reality," IEEE Access, vol. 7, pp. 11222–11236, 2019.
[3] J. Liu and Q. Zhang, "Offloading schemes in mobile edge computing for ultra-reliable low latency communications," IEEE Access, vol. 6, pp. 12825–12837, 2018.
[4] P.-V. Mekikis, K. Ramantas, L. Sanabria-Russo, J. Serra, A. Antonopoulos, D. Pubill, E. Kartsakli, and C. Verikoukis, "NFV-enabled experimental platform for 5G Tactile Internet support in industrial environments," IEEE Trans. Ind. Informat., to be published.
[5] M. Mukherjee, L. Shu, and D. Wang, "Survey of fog computing: Fundamental, network applications, and research challenges," IEEE Commun. Surveys Tuts., vol. 20, no. 3, pp. 1826–1857, 3rd Quart., 2018.
[6] R. Vilalta, V. Lopez, A. Giorgetti, S. Peng, V. Orsini, L. Velasco, R. Serral-Gracia, D. Morris, S. De Fina, F. Cugini, P. Castoldi, A. Mayoral, R. Casellas, R. Martinez, C. Verikoukis, and R. Munoz, "TelcoFog: A unified flexible fog and cloud computing architecture for 5G networks," IEEE Commun. Mag., vol. 55, no. 8, pp. 36–43, Aug. 2017.
[7] S. Kosta, A. Aucinas, P. Hui, R. Mortier, and X. Zhang, "ThinkAir: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading," in Proc. IEEE INFOCOM, Mar. 2012, pp. 945–953.
[8] L. Jiao, H. Yin, H. Huang, D. Guo, and Y. Lyu, "Computation offloading for multi-user mobile edge computing," in Proc. IEEE HPCC/SmartCity/DSS, Jun. 2018, pp. 422–429.
[9] I. Sarrigiannis, E. Kartsakli, K. Ramantas, A. Antonopoulos, and C. Verikoukis, "Application and network VNF migration in a MEC-enabled 5G architecture," in Proc. 23rd IEEE Int. Workshop Comput. Aided Modeling Design Commun. Links Netw. (CAMAD), Sep. 2018, pp. 1–6.
[10] M. Satyanarayanan, P. Bahl, R. Caceres, and N. Davies, "The case for VM-based cloudlets in mobile computing," IEEE Pervasive Comput., vol. 8, no. 4, pp. 14–23, Oct. 2009.
[11] M.-H. Chen, B. Liang, and M. Dong, "Multi-user multi-task offloading and resource allocation in mobile cloud systems," IEEE Wireless Commun., vol. 17, no. 10, pp. 6790–6805, Oct. 2018.
[12] M.-H. Chen, M. Dong, and B. Liang, "Joint offloading decision and resource allocation for mobile cloud with computing access point," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), Mar. 2016, pp. 3516–3520.
[13] M.-H. Chen, B. Liang, and M. Dong, "A semidefinite relaxation approach to mobile cloud offloading with computing access point," in Proc. IEEE 16th Int. Workshop Signal Process. Adv. Wireless Commun. (SPAWC), Jun./Jul. 2015, pp. 1–5.
[14] Y. Wu, Y. He, L. Qian, J. Huang, and X. S. Shen, "Optimal resource allocations for mobile data offloading via dual-connectivity," IEEE Trans. Mobile Comput., vol. 17, no. 10, pp. 2349–2365, Oct. 2018.
[15] Y.-Y. Shih, W.-H. Chung, A.-C. Pang, T.-C. Chiu, and H.-Y. Wei, "Enabling low-latency applications in fog-radio access networks," IEEE Netw., vol. 31, no. 1, pp. 52–58, Jan. 2017.
[16] M. Mukherjee, Y. Liu, J. Lloret, L. Guo, R. Matam, and M. Aazam, "Transmission and latency-aware load balancing for fog radio access networks," in Proc. IEEE GLOBECOM, Dec. 2018, pp. 1–6.
[17] M. Mukherjee, S. Kumar, M. Shojafar, Q. Zhang, and C. X. Mavromoustakis, "Joint task offloading and resource allocation for delay-sensitive fog networks," in Proc. IEEE ICC, May 2019, pp. 1–7.
[18] E. Balevi and R. D. Gitlin, "Optimizing the number of fog nodes for cloud-fog-thing networks," IEEE Access, vol. 6, pp. 11173–11183, Feb. 2018.
[19] Y. Mao, J. Zhang, S. H. Song, and K. B. Letaief, "Power-delay tradeoff in multi-user mobile-edge computing systems," in Proc. IEEE GLOBECOM, Dec. 2016, pp. 1–6.
[20] S.-W. Ko, K. Huang, S.-L. Kim, and H. Chae, "Live prefetching for mobile computation offloading," IEEE Trans. Wireless Commun., vol. 16, no. 5, pp. 3057–3071, May 2017.
[21] J. Huang, V. G. Subramanian, R. Agrawal, and R. Berry, "Joint scheduling and resource allocation in uplink OFDM systems for broadband wireless access networks," IEEE J. Sel. Areas Commun., vol. 27, no. 2, pp. 226–234, Feb. 2009.
[22] G. Yu, Y. Jiang, L. Xu, and G. Y. Li, "Multi-objective energy-efficient resource allocation for multi-RAT heterogeneous networks," IEEE J. Sel. Areas Commun., vol. 33, no. 10, pp. 2118–2127, Oct. 2015.
[23] J. Kwak, Y. Kim, J. Lee, and S. Chong, "DREAM: Dynamic resource and task allocation for energy minimization in mobile cloud systems," IEEE J. Sel. Areas Commun., vol. 33, no. 12, pp. 2510–2523, Dec. 2015.
[24] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, 2004.
[25] A. P. Miettinen and J. K. Nurminen, "Energy efficiency of mobile clients in cloud computing," in Proc. 2nd USENIX Conf. Hot Topics Cloud Comput. (HotCloud), Jun. 2010, p. 4.

MITHUN MUKHERJEE (S'10–M'16) received the B.E. degree in electronics and communication engineering from the University Institute of Technology, Burdwan University, Bardhaman, India, in 2007, the M.E. degree in information and communication engineering from the Indian Institute of Science and Technology, Shibpur, India, in 2009, and the Ph.D. degree in electrical engineering from the Indian Institute of Technology Patna, Patna, India, in 2015. He is currently an Assistant Professor with the Guangdong Provincial Key Laboratory of Petrochemical Equipment Fault Diagnosis, Guangdong University of Petrochemical Technology, Maoming, China. He has (co)authored more than 80 publications in peer-reviewed international transactions/journals and conferences. Dr. Mukherjee was a recipient of the 2016 EAI International Wireless Internet Conference, the 2017 International Conference on Recent Advances on Signal Processing, Telecommunications and Computing, the 2018 IEEE SYSTEMS JOURNAL, and the 2018 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS) Best Paper Awards. He has been an Associate Editor of IEEE ACCESS and a Guest Editor of the IEEE INTERNET OF THINGS JOURNAL, the IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, ACM/Springer Mobile Networks and Applications, and Sensors. His current research interests include wireless communications, fog computing, and ultra-reliable low-latency communications.


SUMAN KUMAR received the M.Sc. degree in mathematics from the University of Hyderabad and the Ph.D. degree in mathematics from IIT Patna, India. He has done research in mathematical control theory. He is currently an Assistant Professor of mathematics with IGNTU Amarkantak, India. He has also served as a member of the organizing committee in numerous international conferences. His current research interests include control theory, delay differential systems, abstract linear and nonlinear systems, and modeling and mathematical analysis of wireless communication systems.

QI ZHANG received the M.Sc. and Ph.D. degrees in telecommunications from the Technical University of Denmark (DTU), Denmark, in 2005 and 2008, respectively. She is currently an Associate Professor with DIGIT, Department of Engineering, Aarhus University, Denmark. Besides her academic experience, she has various industrial experiences. Her current research interests include the tactile Internet, the IoT, URLLC, mobile edge computing, massive machine-type communication, non-orthogonal multiple access (NOMA), and compressed sensing. She was a Co-Chair of the Co-operative and Cognitive Mobile Networks (CoCoNet) Workshop at the ICC conference 2010–2015 and a TPC Co-Chair of BodyNets 2015. She is serving as an Editor of the Journal on Wireless Communications and Networking (EURASIP).

RAKESH MATAM (M'14) received the bachelor's degree in computer science from Jawaharlal Nehru Technological University at Hyderabad, the master's degree from Kakatiya University, Warangal, India, and the Ph.D. degree in computer science from IIT Patna, in 2014. In 2014, he joined the Department of Computer Science, Indian Institute of Information Technology Guwahati (IIIT Guwahati), as an Assistant Professor, where he is currently a member of the Design and Innovation Center, and a Principal Investigator of a funded research project sponsored by the Government of India. His current research interests include wireless networks, network security, and cloud computing. He has also served as a member of the organizing committee in numerous international conferences.

CONSTANDINOS X. MAVROMOUSTAKIS received a five-year Dipl.Eng. degree (B.Sc., B.Eng., and M.Eng.) in electronic and computer engineering from the Technical University of Crete, Greece, the M.Sc. degree in telecommunications from the University College of London, U.K., and the Ph.D. degree from the Department of Informatics, Aristotle University of Thessaloniki, Greece. He is currently a Professor with the Department of Computer Science, University of Nicosia, Cyprus. Prof. Mavromoustakis is leading the Mobile Systems Laboratory (MoSys Lab., http://www.mosys.unic.ac.cy/), Department of Computer Science, University of Nicosia. He has been an Active Member (Vice-Chair) of the IEEE/R8 regional Cyprus section, since 2016, and since 2009, he has been serving as the Chair of the C16 Computer Society Chapter of the Cyprus IEEE section. He has a dense research output in mobile and wearable computing systems and the Internet-of-Things (IoT), consisting of numerous refereed publications including several books (IDEA/IGI, Springer, and Elsevier). He has served as a Consultant to many industrial bodies including Intel Corporation LLC (www.intel.com), and he is a Management Member of the IEEE Communications Society (ComSoc) Radio Communications Committee (RCC) and a Board Member of the IEEE-SA Standards IEEE SCC42 WG2040. He has participated in several FP7/H2020/Eureka and national projects.

YUNRONG LV received the Ph.D. degree from Zhejiang University, China. He is currently a Distinguished Full Professor with the Guangdong University of Petrochemical Technology and an Executive Director of the Guangdong Provincial Key Laboratory of Petrochemical Equipment Fault Diagnosis, China. He was a Chief Scientist in several national and international companies. He has been involved with several key projects in Guangdong. He leads many industrial Internet-related projects in several national and international companies.

GEORGE MASTORAKIS received the B.Eng. degree in electronic engineering from UMIST, in 2000, the M.Sc. degree in telecommunications from UCL, in 2001, and the Ph.D. degree in telecommunications from the University of the Aegean, in 2008. He is currently an Associate Professor with the Department of Management Science and Technology, Hellenic Mediterranean University, Greece. He has more than 250 publications in various international conference proceedings, workshops, scientific journals, and book chapters. His current research interests include cognitive radio networks, the Internet of Things, energy-efficient networks, big data analytics, and mobile computing.
