Real-Time Scheduling on Hierarchical Heterogeneous Fog Networks
Abstract—Cloud computing is widely used to support offloaded data processing for various applications. However, latency-constrained data processing has requirements that are not always suitable for cloud-based processing. Fog computing brings processing closer to data generation sources, reducing propagation and data transfer delays, and is therefore a viable alternative for processing tasks with real-time requirements. We propose a scheduling algorithm, RTH2S (Real Time Heterogeneous Hierarchical Scheduling), for a set of real-time tasks on a heterogeneous integrated fog-cloud architecture. We consider a hierarchical model for fog nodes, with nodes at higher tiers having greater computational capacity than nodes at lower tiers, though with greater latency from data generation sources. Tasks with various profiles have been considered. For regular profile jobs, we use least laxity first (LLF) to find the preferred fog node for scheduling. For "tagged" profiles, based on their tag values, the jobs are either split in order to finish execution before the deadline, or the LLF heuristic is used. Using HPC2N workload traces covering 3.5 years of activity, the real-time performance of RTH2S versus comparable algorithms is demonstrated. We also consider Microsoft Azure-based costs for the proposed algorithm. Our proposed approach is validated using both simulation (to demonstrate scale-up) as well as a lab-based testbed.
Index Terms—Fog computing, cloud computing, real-time scheduling, fog node hierarchy
(i) a multi-tier hierarchical fog-cloud real time scheduling algorithm RTH2S taking account of device heterogeneity. We propose a mathematical model for an n-tier fog cloud architecture that schedules jobs onto fog/cloud processors while meeting their deadline requirements.
(ii) RTH2S works for both regular and tagged job profiles. The algorithm either finds a preferred fog node for job execution, or splits the job based on a combination of its size and deadline requirements.
(iii) Using both simulation and a prototype test bed, we demonstrate the performance of the proposed algorithm RTH2S in enhancing a key metric used to measure benefit: Success Ratio (SR) – while considering task load, propagation delay, heterogeneity and job profiles. Further, the impact of the tiered fog architecture on the scheduling performance is also discussed.

This paper is organized as follows. Section 2 includes a discussion of related work. The system model, notation and problem formulation are described in Section 3. An orchestration protocol to support automatic system functioning is discussed in Section 4. The proposed algorithm is presented in Section 5. Section 6 discusses results. Finally, Section 7 concludes the paper and discusses future work.

2 RELATED WORK

The Open Fog Consortium (involving a number of industry partners, e.g., Cisco, Intel, Microsoft, Dell) has proposed a reference architecture [2] with several use cases for fog computing: smart transportation, smart buildings, airport security, and so on. The extension of network resources from cloud to fog nodes yields a rich environment which can provide storage, computation and communication resources over the network [18]. Capacity planning and optimisation of a fog-based system may be analysed using the iFogSim simulator [3], enabling various resource management strategies in fog-cloud architectures to be considered. iFogSim matches fog node capability (as Million Instructions Per Second (MIPS), memory, and network connectivity) with task capability (defined using similar metrics as fog node capability). The simulator enables understanding the trade-off between the computational capability and power consumption of a fog node, and the latency of executing an application task. Surveys of fog computing [4], [13] explore a number of research trends – differentiating characteristics of fog and cloud computing. In [16], the authors pitch fog computing as a crucial element for the Internet of Things (IoT), and develop a mathematical model to assess the suitability of fog computing in IoT [20]. In [5], the authors observe that fog nodes/cloudlets provide an acceptable interactive response in human cognition, owing to their physical proximity and one-hop network latency. Several papers, given the context of fog-based system usage, have focused on minimising latency in such environments [21].

In our previous work, we considered the real-time scheduling of a single tier of homogeneous fog nodes [11], i.e., all the fog nodes were assumed to have identical processing capabilities, with the interpretation that a job will have identical execution costs on all fog nodes. In [9], the authors schedule tasks in real time on identical processors based on their deadline requirements. An energy-efficient fog computing framework has been proposed in [8]: computation resources are shared with multiple neighbor helper nodes and an optimal scheduling decision is determined for a task node. In [10], the authors proposed a real time algorithm called DEBTS for achieving balanced system performance in terms of service delay and energy consumption; however, the authors have not considered heterogeneous fog nodes in their work. In [17], a fog based delay-optimal task scheduling algorithm has been proposed. The authors consider a heterogeneous fog network as a part of dynamic wireless networks in [19]. In [14], the placement of tasks on heterogeneous fog nodes has been explored, on the basis of privacy tags. In [33], the authors discuss resource allocation by ranking fog devices based on processing, bandwidth and latency, and assigning processors to deadline-based tasks. In [34], the authors propose a dynamic request dispatching algorithm, which minimizes energy consumption and timeliness by using the Lyapunov Optimization Technique. The authors propose an adaptive queuing weight (AQW) resource allocation and real-time offloading technique in a heterogeneous fog environment in [35]. In [36], the authors reduce the waiting time of delay-sensitive tasks by using a multilevel-feedback queue and minimizing the starvation problem of low priority tasks. However, all these approaches focus on a single tier of fog nodes between the edge and cloud systems, and cannot be applied directly to multi-tier fog cloud architectures.

There has been some work in multi-tier hierarchical fog cloud scheduling. In [24], the authors proposed a hierarchical edge computing architecture with identical resources in each tier, however without consideration of real time scheduling. In [27], the authors propose a multi-tier fog cloud architecture, and divide tasks into low & high priority. In [28], a hierarchical fog cloud architecture (limited to 2-tier cloudlets) and a workload allocation scheme is proposed, which attempts to minimise the response time of user requests. In [29], [30] the authors proposed a multi-layer heterogeneous architecture for task offloading to minimise response time, without considering real time tasks. In [31], a component based scheduler for a multi-tier fog cloud architecture is proposed. However, the authors consider only two tiers of fog nodes in their work, and measure the results using simulation – without taking account of a real workload. The effective mapping of jobs to a group of heterogeneous fog nodes is no doubt a challenging problem. To the best of our knowledge, no work has looked into heterogeneous, hierarchical real time scheduling of "regular" as well as "tagged" profiled tasks on fog cloud architectures, supporting both "inter-level heterogeneity" as well as "intra-level heterogeneity".

3 SYSTEM MODEL

3.1 Proposed Architecture
The proposed architecture is illustrated in Fig. 1. Table 1 summarises the notation used in our approach – where a set of fog nodes is given by FN. We assume that a hierarchy of fog nodes exists – as outlined in [2]. At the lowest level (i.e., closest to the user), we have tier-1 fog nodes, followed by tier-2 fog nodes at the next level, and then tier-3 fog nodes
1360 IEEE TRANSACTIONS ON SERVICES COMPUTING, VOL. 16, NO. 2, MARCH/APRIL 2023
TABLE 2
Representative Smart Car Tasks

TABLE 3
Priority Level Assignment

              tight (T)  moderate (M)  loose (L)
Small job js     P1          P2           P3
Medium job jm    P1          P2           P3
Large job jl     P1          P2           P3

TABLE 4
Tag Assignment

              tight (T)  moderate (M)  loose (L)
Small job js      x         tag2         tag2
Medium job jm   tag1          x          tag2
Large job jl    tag1        tag1           x

ft(jk, fn) = ct(jk, fn) + t(jk, fn) + pd(jk, fn).    (1)

The jobs are real-time and need to finish by their deadlines. For unsplit jobs, there are no splitting overheads:

ft(jk, fn) ≤ d(jk).    (2)

Since this is a heterogeneous system, we need to exercise caution while assigning jobs to fog nodes and the cloud. We use the concept of a preferred fog node pfn of job jk – a node on which the job is most likely to meet its deadline requirements. By selecting a pfn for jk, the algorithm takes processor heterogeneity into account.

For jobs with regular or tag2 profiles, we use the job laxity for finding the preferred fog node. The laxity of a job jk is denoted by l(jk), and is the difference between its finish time and its deadline [7]. Formally

l(jk) = d(jk) − ft(jk, fn).    (3)
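To make the node selection concrete, the following is a minimal sketch (not the authors' implementation; the job and node field names `length`, `mips`, `ready_at` and `pd` are our own illustrative choices) of the finish-time computation of Eq. (1), the deadline check of Eq. (2), the laxity of Eq. (3), and picking the feasible node with minimum finish time as the preferred fog node:

```python
# Illustrative sketch: finish time (Eq. 1), deadline check (Eq. 2),
# laxity (Eq. 3), and preferred-fog-node selection by minimum finish time.

def finish_time(ct, exec_time, pd):
    """Eq. (1): commencement time + execution time + propagation delay."""
    return ct + exec_time + pd

def laxity(deadline, ft):
    """Eq. (3): slack between a job's deadline and its finish time."""
    return deadline - ft

def preferred_fog_node(job, nodes):
    """Among nodes satisfying Eq. (2), return the one with minimum ft
    (the ftmin selection described in the text), or None if infeasible."""
    best = None
    for n in nodes:
        exec_time = job["length"] / n["mips"]          # heterogeneous speeds
        ft = finish_time(n["ready_at"], exec_time, n["pd"])
        if laxity(job["deadline"], ft) >= 0:           # Eq. (2) as laxity >= 0
            if best is None or ft < best[1]:
                best = (n, ft)
    return best[0] if best else None

nodes = [{"id": "fn1", "mips": 1000, "ready_at": 0.0, "pd": 0.002},
         {"id": "fn2", "mips": 3000, "ready_at": 0.0, "pd": 0.006}]
job = {"length": 3000, "deadline": 2.0}
print(preferred_fog_node(job, nodes)["id"])  # fn2
```

Here the slower tier-1 node cannot meet the deadline, so the faster (higher-latency) node is preferred, which is the trade-off the hierarchy is designed around.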
w_left = jk − Σi wi.    (8)

The overhead Q involved in splitting large jobs into smaller ones has three components: (i) the delay involved in transmitting jobs to fog nodes – job jk is split into smaller chunks denoted by wi; (ii) the finish time ft of the job jk; (iii) the delay in receiving results. The output of each sub-job with input wi is denoted by wo. The bandwidth of the network connection between user ui and fog node fn is denoted by bw.

Q(jk) = Σi wi / bw(ui, fn) + ft(jk) + Σi wo / bw(fn, ui).    (9)

The jobs are real-time and need to finish by their deadlines, so we need to take these overheads into account for split jobs.

Queuing can occur in fog nodes when the jobs are large in number. The scheduled jobs on cdc cx can generally execute without any queuing delay. Due to the limited processing capability of fog nodes, we assume that each fog node maintains a queue to buffer the jobs. The queue length of fn at the (t+1)th instance can be defined as follows [38]:

q(fn, t+1) = max(q(fn, t) + a(fn, t) − μ(fn, t), 0).    (15)

Here, q(fn, t+1) is the queue length of fn at the (t+1)th instance, and q(fn, t) is the queue length at the tth instance. μ(fn, t) represents the number of jobs leaving the queue of fn in the tth time slot (jobs processed by the fog node), and a(fn, t) denotes the number of jobs arriving at fn in the tth time slot. We add the queuing delay to Equation (1) to calculate the finish times.
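The splitting overhead of Eq. (9) and the queue-length recurrence of Eq. (15) can be sketched as below. This is an illustration only: the per-chunk sizes and the separate uplink/downlink bandwidths (`bw_up`, `bw_down`, standing in for bw(ui, fn) and bw(fn, ui)) are hypothetical names.

```python
# Sketch of the splitting overhead (Eq. 9) and queue evolution (Eq. 15).

def split_overhead(w_in, w_out, bw_up, bw_down, ft):
    """Eq. (9): time to transmit input chunks, execute, and receive results."""
    return sum(w / bw_up for w in w_in) + ft + sum(w / bw_down for w in w_out)

def next_queue_len(q, arrivals, departures):
    """Eq. (15): q(fn, t+1) = max(q(fn, t) + a(fn, t) - mu(fn, t), 0)."""
    return max(q + arrivals - departures, 0)

# Two 10-unit input chunks, two 2-unit result chunks, 3 time units of execution.
print(split_overhead([10, 10], [2, 2], bw_up=5.0, bw_down=4.0, ft=3.0))  # 8.0
print(next_queue_len(3, arrivals=2, departures=4))                        # 1
```

The `max(..., 0)` clamp in Eq. (15) is what keeps the queue length non-negative when more jobs depart than are buffered.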
are the total number of jobs submitted to the cloud data center. Here, cx is the only cloud data center and ji, d(ji) ∈ J″. Overall, the Success Ratio of the system is given by

SR_system = SR_FN1 + SR_FN2 + ... + SR_FNn + SR_C.    (21)

Given this context and set of definitions, we can formally define the research problem as:

"Given a set of jobs J(JS, JM, JL), a set of fog nodes FN(FN1, FN2, FN3) and a cloud data center cx, with heterogeneous execution capacity, schedule the jobs on their preferred fog nodes pfn, or split the jobs onto fog tiers according to the priority assignment of Table 3 and the tag assignment of Table 4, s.t. SR_system is maximised".

4 ORCHESTRATION PROTOCOL

We adopt a decentralised fog cloud architecture driven by Orchestrating Agents (OAs), as proposed in [6]. Fig. 2 shows the conceptual architecture of the orchestration mechanism for the distributed fog cloud architecture. Here, FNn represents the nth fog node tier of the architecture. In this work, we consider n = 3, though n can be varied based on the application requirement. An OA is present on each computing device, and OAs create job-specific instances. The OAs cooperate with each other to achieve the goal of the scheduling algorithm: minimising the overall latency of the system, or increasing the success ratio of the system. As demonstrated in the figure, a user can submit jobs to the fog devices or the cloud data center. Each user has a network connection to FN1. The FN1 nodes are the fog nodes which can execute jobs with the least latency. The fog node tier FN1 is further connected to the next tier of fog nodes, i.e., FN2, followed by FN3. Finally, we have a cloud data center at the topmost layer of the hierarchy.

5 PROPOSED ALGORITHM

In this section, we describe our proposed scheduling scheme RTH2S. As mentioned in Section 3, we consider three types of jobs: small (JS), medium (JM), and large (JL). Likewise, we have resources of diverse execution capacities: tier-1 fog nodes (FN1), tier-2 fog nodes (FN2), tier-3 fog nodes (FN3), and the cloud data center (cx), which are heterogeneous with respect to each other. For a particular job jk ∈ J, the goal is to finish its execution within its deadline. More specifically, for the regular profile jobs, the aim is to minimise the laxity of the job by assigning the job to its preferred fog node (pfn), while finishing the job within its deadline. For the tagged profile jobs, we need to make a call, based on the tag value, on whether to split the job or schedule it using the LLF heuristic.

Algorithm 1. RTH2S
Input: Set of jobs
Output: Optimal Schedule
1: Populate Q1, Q2, and Q3 with priority levels P1, P2, and P3 respectively;
2: Sort queues Q1, Q2, and Q3 in ascending order of deadlines;
3: Assign tags to the jobs;
4: scheduledlist S = empty, Qpj = empty;
5: for k = 1 to size(Q1) do
6:   if tag(jk) = x then
7:     Preferred-fn(1);
8:   end
9:   if tag(jk) = tag1 then
10:    Preempt the currently scheduled jobs and add the jobs to Qpj;
11:    ScaleUp();
12:    Resume the jobs present in Qpj;
13:  end
14: end
15: for k = 1 to size(Q2) do
16:  if tag(jk) = x or tag(jk) = tag2 then
17:    Preferred-fn(2);
18:    if jk is unscheduled then
19:      Preferred-fn(3);
20:    end
21:  end
22:  if tag(jk) = tag1 then
23:    Preempt the currently scheduled jobs and add the jobs to Qpj;
24:    ScaleUp();
25:    Resume the jobs present in Qpj;
26:  end
27: end
28: for k = 1 to size(Q3) do
29:  if tag(jk) = x or tag(jk) = tag2 then
30:    Preferred-fn(3);
31:  else
32:    schedule jk on cdc;
33:    estimate the MC using Eq. (17);
34:    remove job jk from queue Q, add job jk to scheduledlist S;
35:  end
36: end
37: Calculate SR(sys) ∀ FN, C

The algorithm RTH2S works as follows. The input data for the algorithm is the set of jobs. This set consists of jobs of various sizes along with their deadlines. The first step is to populate the set of jobs J into three queues Q1, Q2, and Q3, based on the priority level assignment of Table 3.
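The control flow of Algorithm 1, together with a simplified version of the ScaleUp splitting step it invokes, can be sketched as follows. This is not the authors' code: `preferred_fn` is a caller-supplied stand-in for the Preferred-fn(n) subroutine, proportional chunking stands in for the paper's Eqs. (5)-(7), and the job field names are illustrative.

```python
# Compact sketch of RTH2S dispatch: three deadline-sorted priority queues;
# tag1 jobs are split across fog nodes (simplified ScaleUp), others go to a
# preferred fog node for the matching tier.

def scale_up(job, node_mips):
    """Accumulate fog-node MIPS until the minimum rate needed to meet the
    deadline is reached, then split the job proportionally (our
    simplification of Eqs. (5)-(7)); returns chunk sizes or None."""
    min_mips = job["length"] / job["deadline"]   # minimum aggregate rate
    chosen, total = [], 0.0
    for mips in node_mips:
        chosen.append(mips)
        total += mips
        if total >= min_mips:
            break
    if total < min_mips:
        return None                              # job cannot be admitted
    return [job["length"] * m / total for m in chosen]

def rth2s(jobs, node_mips, preferred_fn):
    queues = {1: [], 2: [], 3: []}
    for job in jobs:                             # Table 3 priority assignment
        queues[job["priority"]].append(job)
    schedule = {}
    for tier in (1, 2, 3):
        queues[tier].sort(key=lambda j: j["deadline"])   # EDF order
        for job in queues[tier]:
            if job.get("tag") == "tag1":         # split across fog nodes
                schedule[job["id"]] = scale_up(job, node_mips)
            else:                                # regular / tag2 profile
                schedule[job["id"]] = preferred_fn(job, tier)
    return schedule

jobs = [{"id": 1, "priority": 1, "deadline": 5.0, "length": 500, "tag": None},
        {"id": 2, "priority": 1, "deadline": 2.0, "length": 6000, "tag": "tag1"}]
out = rth2s(jobs, node_mips=[1000, 1500, 2000],
            preferred_fn=lambda j, tier: f"tier{tier}-pfn")
print(out[1], round(sum(out[2])))  # tier1-pfn 6000
```

Note that this sketch omits the tier-2/tier-3 fallback and the preempt/resume handling of Qpj; those follow the numbered steps of the pseudocode above.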
The queues Q1, Q2, and Q3 are sorted in ascending order of deadlines. The rationale behind this sorting is to align it with the Earliest Deadline First algorithm. We form a list named scheduledlist S, which is initially empty. We consider a queue Qpj, which holds the preempted jobs; this queue is also initially empty. The tags are assigned to the jobs as per Table 4. The jobs are executed according to their priority levels, i.e., P1 being the highest priority, and P3 being the lowest priority. Initially, the jobs present in Q1 are scheduled. As soon as a job arrives, we examine its tag. If the job has no tag, i.e., small jobs with tight deadlines, then the preferred fog node is estimated. In Preferred-fn(1), 1 stands for fog tier-1. Initially, the ft of the job jk is calculated for the tier-1 fog nodes. We calculate the minimum finish time ftmin among the calculated finish times, and estimate pfn for jk by using Equation (4). In the next step, we check two conditions: whether the task's requirement is within the preferred fog node's capacity, and whether its laxity is less than zero or not. The latter check determines whether the job jk finishes before its deadline. If both conditions are satisfied, then the job jk is scheduled on the pfn. After this, we calculate the associated Monetary Cost MC on the preferred fog node pfn, and the job is added to the scheduledlist S. If the tag of the job is tag1, then the jobs scheduled on tier-1 are preempted and the ScaleUp algorithm is called. The preempted jobs are added to Qpj. The ScaleUp algorithm works as follows. First, we find the minimum MIPS required for finishing the job before its deadline. We form a variable sum which is initialised to zero. We loop over the fog nodes at tier-1. The associated value of Y(fn) is estimated using Equation (6). If the fog node has spare capacity, then we estimate the sub-job wi by using Equation (7). After this step, we calculate the finish time of the sub-job on the selected fog node. Once the job jk obtains the minimal MIPS required for execution, the loop breaks. The overhead for job jk is then estimated. If the job jk finishes before its deadline, then the job jk is scheduled: it is removed from queue Q1 and added to the scheduled list S. Otherwise, the job jk can't be submitted to the scheduler. The jobs in Qpj are resumed on their respective fog nodes. After traversing Q1, the algorithm moves to priority P2. For each incoming job, the tag is examined. If there is no tag, or the tag is tag2, then Preferred-fn is run for fog tier-2. If the job is still unscheduled, then the preferred fog node is examined at tier-3. For tag1, preemption at fog node tier-2 is performed and the ScaleUp algorithm is called. For the last queue, i.e., Q3, the algorithm tries to run the jobs on fog tier-3. If the queue still has unscheduled jobs, then they are scheduled on the cdc. Finally, the SR for all the jobs is calculated.

Algorithm 2. Preferred-fn(n)
Input: Job jk with tag = x or tag2
Output: pfn
1: for y = 1 to m do
2:   estimate ft of job jk on fn_y^n using Eq. (1);
3:   find pfn with ftmin for all ft using Eq. (4);
4: end
5: if R(pfn) ≥ r(jk, pfn) and laxity(pfn) ≥ 0 then
6:   schedule job jk on preferred fog node fn;
7:   estimate the MC on pfn using Eq. (16);
8:   add job jk to scheduledlist S;
9: end

Algorithm 3. ScaleUp
Input: Job jk with tag = tag1
Output: Optimal Schedule
1: Calculate min. MIPS for job jk to finish before deadline;
2: sum = 0;
3: for p = 1 to m do
4:   Get MIPS of pth fog node;
5:   sum = sum + MIPS(pth);
6:   Estimate Y on pth fog node using Eq. (6);
7:   if Equation (5) holds true then
8:     Calculate sub-job wi on fog nodes using Eq. (7);
9:   end
10:  Estimate ft(wi) on pth fog node;
11:  if sum ≥ minimal MIPS then
12:    break;
13:  end
14: end
15: Estimate Q(jk) using Equation (9);
16: if Q(jk) ≤ deadline then
17:   schedule the job jk on the fog nodes;
18:   estimate the MC on fog nodes using Equation (16);
19:   add job jk to scheduledlist S;
20: else
21:   job jk can't be submitted;
22: end

6 SIMULATION RESULTS

In this section, we discuss the simulation results carried out for the performance evaluation of the proposed algorithm RTH2S. We consider sample scenarios that align with our fog architecture depicted in Fig. 1. The jobs may be run on: tier-1 fog nodes FN1, tier-2 fog nodes FN2, tier-3 fog nodes FN3, or on the cloud data center cx. In our work, we consider three tiers of fog nodes. The proposed model can be readily extended to support more tiers, based on the application requirements.

The jobs are executed on the basis of the priority assigned. Priority P1 jobs run on FN1 nodes, priority P2 jobs run on FN2 or FN3 nodes, and priority P3 jobs run on FN3 nodes, or on the cloud data center cx. This ensures that the utilization of all nodes is maximised. We compare our proposed scheduling algorithm RTH2S with cdc only and a scheduling algorithm for Heterogeneous Fog Computing Architectures proposed in [15]. In cdc only, the fog nodes are not considered in executing jobs, i.e., only the cloud data center cx is used for executing all the jobs. In [15], the authors propose the LTF (Longest Time First) scheduling algorithm for heterogeneous fog networks. The LTF algorithm schedules the jobs with the longest execution time to the fastest node. Prior to execution, LTF sorts the jobs in descending order based on their deadlines.

6.1 Workload
We have used a real workload called HPC2N (High Performance Computing Center North) [12], [23]. This is a joint operation between various facilities and educational institutes. This workload is a result of about 3.5 years of activity, carried out on the Seth cluster of the HPC center in Sweden. The Linux cluster consists of 120 dual CPU
KAUR ET AL.: REAL-TIME SCHEDULING ON HIERARCHICAL HETEROGENEOUS FOG NETWORKS 1365
nodes. Each node in the cluster consists of 2 AMD Athlon MP2000+ CPUs, with a clock frequency of 1.67 GHz. The peak performance of this cluster is 800 Gigaflops. Each node has access to 1 GB of RAM, which is shared by both CPUs. The communication framework consists of a 3D SCI interconnect and fast Ethernet. This workload consists of over 500,000 jobs of various lengths, and is suited to cloud, grid and fog computing. Each task has various parameters associated with it – such as Job ID, burst time (t), memory usage, and arrival time. For each job, we take the arrival time as 0. We have divided the jobs into three categories as per the job length by using k-means: small, medium and large. The ranges of job lengths considered for each category are as follows – small: 1-95, medium: 96-205, large: 206-400. The fog network consists of 8 FN1 nodes, 4 FN2 nodes, 1 FN3 node and 1 cdc cx. The propagation delay (pd) from a user Ui to a tier-1 fog node is 2 milliseconds, from Ui to a tier-2 fog node is 6 milliseconds, from Ui to a tier-3 fog node is 12 milliseconds, and from Ui to the cdc is 137 milliseconds (12 milliseconds from Ui to the proxy server and 125 milliseconds from the proxy server to the cdc). The capacity of each fog node present at tier-1, c(fn_y^1), varies from 1000 MIPS to 2000 MIPS. Likewise, the capacity of each fog node present at tier-2, c(fn_y^2), varies from 2500 MIPS to 4000 MIPS, the capacity of the fog node present at tier-3, c(fn_y^3), has been taken as 5800 MIPS, and the capacity of the cdc, c(cx), has been taken as 70000 MIPS. The number of jobs (i.e., job set JS) varies from 250 to 500 and the execution costs of these jobs (i.e., t) vary from 100 to 8500 MIPS. The size-wise break up of the jobs is as follows: small jobs make up 41% of the workload, medium jobs 34%, and large jobs 25%. Note that the values in Table 2 are representative values, and they can be changed based on the user requirements, without affecting the working of the RTH2S algorithm.

6.2 Simulation Setup and Parameters
We have used the iFogSim [3] simulator for the implementation of our proposed algorithm RTH2S. iFogSim is rooted in CloudSim – a very widely used discrete event cloud simulator. iFogSim, therefore, allows us to model the characteristics of a cloud platform more realistically (CloudSim has > 4K downloads) [32], a key basis for some of the simulation that this work is based on. We have modelled various features of the fog nodes and the cdc in this simulator. By using iFogSim, one can evaluate different fog and cloud scheduling strategies. This simulator is appropriate for fog enabled devices, as it follows a representation of the sensor → processor → actuator model. A class named HierarchicalFog has been implemented in the simulator. This class reads the dataset from a text file and stores the job-id, the job-length, the deadline, and the priority. In addition to this, the following quantities have also been added to the class: the propagation delay (pd) of all FN and C, the execution capacity (c), and the module allocation. A FogDevice class present in iFogSim contains a function named updateAllocatedMips. The task of this function is to allocate the MIPS requirements of various execution modules. In order to take job deadlines into account, certain modifications have been made to this class. We have created job queues Q1, Q2, and Q3 in the simulator. The queues are sorted in increasing order of deadlines, e.g., the task with the tightest deadline appears at the head of the queue. As per the priority assignment of Table 3, jobs are allocated to tier-1 FN1, tier-2 FN2, tier-3 FN3, and the cdc cx. Note that each data point is an average of five simulation runs. A 95% confidence interval is used in the graphs. We now describe the parameters used in our simulations:

1) Success Ratio (SR). This is defined as (N′/N) * 100, i.e., the percentage of the number of jobs finishing execution before their deadlines to the total number of jobs considered for scheduling.

2) Task Load (TL). There is a MIPS requirement associated with all jobs considered for scheduling. The MIPS value of each job was uniformly selected from the range (100, 8500). Next, we calculated the average MIPS value for all jobs. In order to get a range of task loads, this MIPS value is multiplied by 1 to 5.

3) Propagation Delay (PD). This quantity is defined as the range of delay factors between the jobs, the fog nodes and the cloud data center (cdc). A lower value indicates smaller delay. We set a delay factor (pd) of 2, 6 and 12 milliseconds between the user and fog tier-1, tier-2, and tier-3 respectively. A value of 137 milliseconds was set between the user and the cdc. These values are increased by 10 milliseconds in each iteration to get new values.

4) Deadline Factor (DF). Job deadlines are changed over a range to observe the effect of tight and loose deadlines on performance. A higher value implies tight deadlines, and vice versa. A job's initial deadline is considered. Next, we calculate the average of all such deadlines. To get a range of deadline values, we divide this average deadline by a factor of 1 to 5. The tight deadlines lie in the range 1-24, moderate deadlines lie in the range 25-74, and loose deadlines lie in the range 75+.

5) Heterogeneity Level (HL). The Heterogeneity Level (HL) signifies the degree of heterogeneity of fog nodes – measuring the variation in computational capacity of fog nodes within each level. A low HL value implies that the execution capacities of the fog nodes are similar. The heterogeneity level of any nth tier of fog nodes is given by

HL_FNn = (c(fn_n^max) − c(fn_n^min)) / average(c(fn_n^j)),    (22)

where c(fn_n^max) represents a tier-n fog node with the maximum capacity:

fn_n^max = fn_n^j : c(fn_n^j) > c(fn_n^X),    (23)

in which fn_n^j, fn_n^X ∈ FNn and X ≠ j. c(fn_n^min) represents a tier-n fog node with the minimum capacity:

fn_n^min = fn_n^j : c(fn_n^j) < c(fn_n^X),    (24)

in which fn_n^j, fn_n^X ∈ FNn and X ≠ j. We can replace n in FNn to get the heterogeneity level of a fog node tier. Finally, the heterogeneity level of the system is given by

HL_system = HL_FN1 + HL_FN2 + ... + HL_FNn + HL_C.    (25)
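The metrics above can be sketched directly. The capacities below are illustrative values, not the paper's configuration:

```python
# Sketch of the evaluation metrics: Success Ratio (SR), per-tier
# Heterogeneity Level (Eq. 22), and system Heterogeneity Level (Eq. 25).

def success_ratio(n_met, n_total):
    """SR = (N'/N) * 100: percentage of jobs meeting their deadlines."""
    return n_met / n_total * 100.0

def tier_hl(capacities):
    """Eq. (22): capacity spread over mean capacity for one fog tier."""
    return (max(capacities) - min(capacities)) / (sum(capacities) / len(capacities))

def system_hl(tiers):
    """Eq. (25): the system HL is the sum of the per-tier HLs."""
    return sum(tier_hl(c) for c in tiers)

tier1 = [1000, 1500, 2000]          # illustrative MIPS capacities
tier2 = [2500, 4000]
print(success_ratio(450, 500))      # 90.0
print(round(tier_hl(tier1), 3))     # (2000-1000)/1500 = 0.667
print(round(system_hl([tier1, tier2]), 3))  # 1.128
```

A tier whose nodes all have the same capacity contributes an HL of 0, matching the statement that low HL implies similar execution capacities.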
Fig. 4. Effect of propagation delay on SR.
Fig. 5. Effect of task load on SR.
communication. This results in an increase in the com- tiers with less propagation delays which leads to higher suc-
mencement time ðctÞ at the fog nodes present at tier-1 cess ratios SR. It is important to note that as we add fog
ðFN1 Þ, tier-2 ðFN2 Þ, tier-3 FN3 and at the cloud data center node tiers, there is an addition of fog nodes in the network,
ðcx Þ. Hence, the finish time ðftÞ of the jobs often overshoots leading to an increase in the total computation power.
their deadlines ðdÞ, so a lesser number of jobs finish execu- Though, the propagation delay increases as well, but this
tion before their deadlines, which results in a low Success delay is smaller as compared to sending jobs to the cloud.
Ratio SR in all tiers. We observe similar results in LTF . The We have observed that the 3-tier fog based algorithm
induced delay between slow and fast fog nodes results in RTH 2 S outperforms all compared scheduling strategies, for
smaller values for SR. Likewise, the increased pd effects the all metrics considered. The 2TF network and 1TF network
SR in WALL. The pd added at each iteration increases the offer lesser computation power. Though, there is an
completion time of the jobs in both tiers and cdc. Overall, increased communication delay due to the presence of more
we observe that an increase in the pd reduces the SR in all fog tiers in RTH 2 S, this delay is smaller as compared to
six scheduling strategies. sending jobs to the cloud data center for execution. Our pro-
In the next simulation, we show the impact of Task load posed algorithm outperforms cdc only owing to the large
ðTLÞ on Success Ratio ðSRÞ. Fig. 5 depicts the results for this communication delay involved in sending the jobs to cdc
simulation. We increase the task load ðTLÞ from 1 to 5. As only. It outperforms LTF due to their sorting of jobs in an
we increase the TL value, more tasks are added to the sys- opposite direction, which leads to small jobs being sched-
tem. This results in reducing the SR, as a large number of uled too late. RTH 2 S outperforms WALL as it provides the
jobs start missing their deadlines. This behaviour is shown by all six scheduling strategies: RTH2S, 1TF, 2TF, cdc only, LTF, and WALL. However, RTH2S takes advantage of the fog nodes present at tier-1, tier-2, and tier-3, due to which a larger number of jobs are able to meet their deadlines. Note that these jobs are unable to meet their deadlines on cdc only. This happens because the fog nodes are in closer proximity to the end users, and hence the propagation delay (pd) from user to fog nodes is lower. In contrast, jobs that execute on cdc face significant propagation delays (pd), which results in deadline misses. The LTF algorithm sorts jobs in decreasing order of deadlines. Its SR values are lower than those of the proposed algorithm, as we sort in the opposite order: small deadline → large deadline. Hence, a larger number of jobs are able to meet their deadlines in a given time interval. The WALL algorithm sorts jobs in descending order of sizes, which affects the tight/moderate deadlines of small and medium jobs. For 1TF, due to less computation power, the fog nodes are not able to finish the jobs before their deadlines. It is tough for a single tier to finish the P1 or P2 priority jobs before their deadlines. On the other hand, in 2TF, due to the addition of one more tier, more jobs can be executed before their deadlines. However, once the tiers do not have sufficient capacity to execute, the jobs are transferred to the cloud data center cdc. Due to the significant propagation delay between a user and the cloud data center, the jobs start missing their deadlines. For the 3-tier fog network, i.e., RTH2S, more jobs can be accommodated on the fog nodes via splits of the large jobs with tight deadlines, rather than assigning them as a whole to the fog node, which increases the finish time of the jobs. Also, WALL selects the users with the maximum job size first, giving less priority to small/medium jobs with tight deadlines.

Effect of Heterogeneity Level (HL) on Success Ratio (SR). We examine the impact of fog node heterogeneity on the system performance. The results of this simulation are shown in Fig. 6. We increase the Heterogeneity Level HL from 0 to 1.2. The number of fog nodes at tier-1, tier-2, and tier-3 have been fixed at 8, 2, and 1 respectively. The capacity of fog nodes has been varied from 300 MIPS to 6000 MIPS. We compare the performance of six scheduling algorithms: RTH2S, 1TF, 2TF, LTF, WALL, and cdc only. As cdc only does not employ fog nodes, a significant number of jobs miss their deadlines.

Fig. 6. Effect of HL on SR.
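The deadline-ordering argument above can be illustrated with a toy single-node sketch (this is not the paper's full RTH2S; the job sizes, deadlines, and the 3000 MIPS capacity are hypothetical):

```python
# Compare two job orderings on a single fog node: smallest deadline
# first (as in the proposed heuristic) vs. largest deadline first
# (LTF-style). All numbers below are hypothetical.

def success_ratio(jobs, capacity_mips):
    """Run jobs sequentially in the given order; count deadline hits."""
    t, met = 0.0, 0
    for length_mi, deadline_s in jobs:
        t += length_mi / capacity_mips  # execution time on this node
        if t <= deadline_s:
            met += 1
    return met / len(jobs)

# (job length in million instructions, deadline in seconds)
jobs = [(3000, 2.0), (1500, 1.0), (6000, 8.0), (1000, 1.5)]

edf_order = sorted(jobs, key=lambda j: j[1])                # small deadline first
ltf_order = sorted(jobs, key=lambda j: j[1], reverse=True)  # large deadline first

print(success_ratio(edf_order, 3000), success_ratio(ltf_order, 3000))  # 1.0 0.25
```

With the same workload and capacity, ordering small deadlines first lets every job finish in time, while the LTF-style ordering delays the tight-deadline jobs behind the large one.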
1368 IEEE TRANSACTIONS ON SERVICES COMPUTING, VOL. 16, NO. 2, MARCH/APRIL 2023
TABLE 5. Cost ($/hour, May 2021, Asia Pacific Region) of Microsoft Azure.
TABLE 6. Effect of Task Load on Monetary Cost.
TABLE 7. Effect of Deadline Factor on Monetary Cost.
Fig. 9. Effect of queuing delay on SR.
Fig. 11. Synthetic dataset: Effect of pd on SR.
Fig. 13. Effect of DF on SR.
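The monetary costs reported in Tables 5, 6, and 7 are presumably driven by an instance-hours model: hours of resource use multiplied by an hourly rate. A minimal sketch, with placeholder rates rather than the actual May 2021 Azure prices:

```python
# Instance-hours cost sketch. The hourly rates below are placeholders,
# not the actual Microsoft Azure prices from Table 5.

RATE_PER_HOUR = {"fog_tier1": 0.05, "fog_tier2": 0.20, "cdc": 0.90}  # $ (assumed)

def monetary_cost(busy_hours):
    """busy_hours: mapping of resource -> hours of use during the schedule."""
    return sum(RATE_PER_HOUR[r] * h for r, h in busy_hours.items())

# Example: most work stays on cheap tier-1 fog nodes, little on the cdc.
print(monetary_cost({"fog_tier1": 10.0, "fog_tier2": 2.0, "cdc": 1.0}))
```

Under such a model, keeping jobs on the cheaper lower-tier fog nodes whenever they can still meet their deadlines reduces the total bill, which matches the trend the tables report.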
As we increase the delay factor, we see a decrease in the success ratio for all the jobs under all four scheduling strategies: RTH2S, LTF, WALL, and cdc only. This happens because the jobs' finish times increase with the increase in pd. This is visible in the results shown in Fig. 11. Due to the reasons mentioned in the previous sections, we observe the following SR ordering among the algorithms: RTH2S > WALL > LTF > cdc only.
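The role of pd in these results can be sketched with a toy finish-time model (the delays, capacities, and job size below are hypothetical, not measured values from the paper):

```python
# Why propagation delay (pd) causes deadline misses on the remote cloud
# data center (cdc) but not on a nearby fog node: the job must travel to
# the node and its result must travel back, adding 2*pd to the finish time.

def finish_time(length_mi, capacity_mips, pd_s):
    # upload (pd) + execution + result download (pd)
    return pd_s + length_mi / capacity_mips + pd_s

def meets_deadline(length_mi, capacity_mips, pd_s, deadline_s):
    return finish_time(length_mi, capacity_mips, pd_s) <= deadline_s

job_mi, deadline_s = 4000, 2.0
print(meets_deadline(job_mi, 3000, 0.05, deadline_s))   # tier-1 fog: slow but near -> True
print(meets_deadline(job_mi, 10000, 0.9, deadline_s))   # cdc: fast but far -> False
```

Even though the cdc in this sketch is over three times faster, its round-trip propagation delay alone consumes most of the deadline, so the nearby fog node wins.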
The average improvement in Success Ratio (SR) offered by RTH2S over cdc only and LTF is shown in Table 8.

7 CONCLUSION

Significant propagation delays between users and the cloud data center may act as a deterrent for executing deadline-driven real-time jobs. This delay can be reduced by employing fog nodes for the execution of such jobs. In addition, it may very well be the case that there is a hierarchy of fog nodes [2]. Typically, fog nodes in various tiers (and even within a particular tier) are heterogeneous. In this paper, we propose RTH2S, an algorithm that schedules real-time jobs on a multi-tiered fog network by taking diverse job profiles into account. Using a real-life workload, RTH2S is validated using a simulator as well as a prototype. We observe that RTH2S offers better real-time results in terms of higher Success Ratios and reduced Monetary Costs. We also observe that job profiles impact the real-time system performance. An increase in the number of tag1 profile jobs impacts the regular profile jobs, leading to deadline misses and lower SR values. Our future work involves the use of multiple cloud data centers. We also plan to develop "schedulability" and performance bounds for real-time tasks on such multi-tier fog-cloud architectures.

REFERENCES

[1] "Fog Computing and the Internet of Things: Extend the Cloud to Where the Things Are," Cisco White Paper, 2015.
[2] OpenFog Consortium, "OpenFog Reference Architecture for Fog Computing," 2017. [Online]. Available: https://www.openfogconsortium.org/wp-content/uploads/OpenFog_Reference_Architecture_2_09_17-FINAL.pdf
[3] H. Gupta, A. V. Dastjerdi, S. K. Ghosh, and R. Buyya, "iFogSim: A toolkit for modeling and simulation of resource management techniques in Internet of Things, edge and fog computing environments," 2017. [Online]. Available: http://arxiv.org/abs/1606.02007
[4] R. K. Naha, S. Garg, D. Georgakopoulos, P. R. Jayaraman, Y. Xiang, and R. Ranjan, "Fog computing: Survey of trends, architectures, requirements, and research directions," IEEE Access, vol. 6, pp. 47980–48009, 2018.
[5] M. Satyanarayanan, P. Bahl, R. Caceres, and N. Davies, "The case for VM-based cloudlets in mobile computing," IEEE Pervasive Comput., vol. 8, no. 4, pp. 14–23, Fourth Quarter 2009.
[6] N. Auluck, O. Rana, S. Nepal, A. Jones, and A. Singh, "Scheduling real time security aware tasks in fog networks," IEEE Trans. Serv. Comput., vol. 14, no. 6, pp. 1981–1994, Nov./Dec. 2021.
[7] S. Han and H. Park, "Predictability of least laxity first scheduling algorithm on multiprocessor real-time systems," in Proc. Int. Conf. Embedded Ubiquitous Comput., 2006, pp. 755–764.
[8] Y. Yang, K. Wang, G. Zhang, X. Chen, X. Luo, and M. T. Zhou, "MEETS: Maximal energy efficient task scheduling in homogeneous fog networks," IEEE Internet Things J., vol. 5, no. 5, pp. 4076–4087, Oct. 2018.
[9] K. Fizza, N. Auluck, and A. Azim, "Improving the schedulability of real-time tasks using fog computing," IEEE Trans. Serv. Comput., vol. 15, no. 1, pp. 372–385, Jan./Feb. 2022.
[10] Y. Yang, S. Zhao, W. Zhang, Y. Chen, X. Luo, and J. Wang, "DEBTS: Delay energy balanced task scheduling in homogeneous fog networks," IEEE Internet Things J., vol. 5, no. 3, pp. 2094–2106, Jun. 2018.
[11] A. Singh, N. Auluck, O. Rana, A. Jones, and S. Nepal, "RT-SANE: Real time security aware scheduling on the network edge," in Proc. 10th IEEE/ACM Int. Conf. Utility Cloud Comput., 2017, pp. 131–140.
[12] I. A. Moschakis and H. D. Karatza, "A meta-heuristic optimization approach to the scheduling of Bag-of-Tasks applications on heterogeneous Clouds with multi-level arrivals and critical jobs," Simul. Modelling Pract. Theory, vol. 57, pp. 1–25, 2015.
[13] C. Mouradian, D. Naboulsi, S. Yangui, R. H. Glitho, M. J. Morrow, and P. A. Polakos, "A comprehensive survey on fog computing: State-of-the-art and research challenges," IEEE Commun. Surv. Tuts., vol. 20, no. 1, pp. 416–464, First Quarter 2018.
[14] K. Fizza, N. Auluck, O. Rana, and L. Bittencourt, "PASHE: Privacy aware scheduling in a heterogeneous fog environment," in Proc. IEEE 6th Int. Conf. Future Internet Things Cloud, 2018, pp. 333–340.
[15] H. Wu and C. Lee, "Energy efficient scheduling for heterogeneous fog computing architectures," in Proc. IEEE 42nd Annu. Comput. Softw. Appl. Conf., 2018, pp. 555–560.
[16] A. Yousefpour, G. Ishigaki, and J. P. Jue, "Fog computing: Towards minimizing delay in the Internet of Things," in Proc. IEEE Int. Conf. Edge Comput., 2017, pp. 17–24.
[17] G. Zhang, F. Shen, Y. Zhang, R. Yang, Y. Yang, and E. A. Jorswieck, "Delay minimized task scheduling in fog-enabled IoT networks," in Proc. 10th Int. Conf. Wireless Commun. Signal Process., 2018, pp. 1–6.
[18] N. Chen, Y. Yang, T. Zhang, M. Zhou, X. Luo, and J. K. Zao, "Fog as a service technology," IEEE Commun. Mag., vol. 56, no. 11, pp. 95–101, Nov. 2018.
[19] S. Zhao, Y. Yang, Z. Shao, X. Yang, H. Qian, and C. Wang, "FEMOS: Fog-enabled multi-tier operations scheduling in dynamic wireless networks," IEEE Internet Things J., vol. 5, no. 2, pp. 1169–1183, Apr. 2018.
[20] S. Sarkar, S. Chatterjee, and S. Misra, "Assessment of the suitability of fog computing in the context of Internet of Things," IEEE Trans. Cloud Comput., vol. 6, no. 1, pp. 46–59, First Quarter 2018.
[21] A.-C. Pang, W.-H. Chung, T.-C. Chiu, and J. Zhang, "Latency-driven cooperative task computing in multi-user fog-radio access networks," in Proc. IEEE 37th Int. Conf. Distrib. Comput. Syst., 2017, pp. 615–624.
[22] S. Malik, S. Ahmad, B. W. Kim, D. H. Park, and D. Kim, "Hybrid inference based scheduling mechanism for efficient real time task and resource management in smart cars for safe driving," Electronics, vol. 8, 2019, Art. no. 344.
[23] T. N'takpe and F. Suter, "Don't hurry be happy: A deadline-based backfilling approach," in Proc. Workshop Job Scheduling Strategies Parallel Process., 2017, pp. 62–82.
[24] L. Tong, Y. Li, and W. Gao, "A hierarchical edge cloud architecture for mobile computing," in Proc. 35th Annu. IEEE Int. Conf. Comput. Commun., 2016, pp. 1–9.
[25] A. K. Mishra, J. L. Hellerstein, W. Cirne, and C. R. Das, "Towards characterizing cloud backend workloads: Insights from Google compute clusters," ACM SIGMETRICS Perform. Eval. Rev., vol. 37, no. 4, pp. 34–41, 2010.
[26] P. Han, C. Du, J. Chen, and X. Du, "Minimizing monetary costs for deadline constrained workflows in cloud environments," IEEE Access, vol. 8, pp. 25060–25074, 2020.
[27] D. A. Chekired, L. Khoukhi, and H. T. Mouftah, "Industrial IoT data scheduling based on hierarchical fog computing: A key for enabling smart factory," IEEE Trans. Ind. Informat., vol. 14, no. 10, pp. 4590–4602, Oct. 2018.
[28] Q. Fan and N. Ansari, "Workload allocation in hierarchical cloudlet networks," IEEE Commun. Lett., vol. 22, no. 4, pp. 820–823, Apr. 2018.
[29] P. Wang, Z. Zheng, B. Di, and L. Song, "HetMEC: Latency-optimal task assignment and resource allocation for heterogeneous multi-layer mobile edge computing," IEEE Trans. Wireless Commun., vol. 18, no. 10, pp. 4942–4956, Oct. 2019.
[30] E. El Haber, T. M. Nguyen, and C. Assi, "Joint optimization of computational cost and devices energy for task offloading in multi-tier edge-clouds," IEEE Trans. Commun., vol. 67, no. 5, pp. 3407–3421, May 2019.
[31] M. Peixoto, T. Genez, and L. F. Bittencourt, "Hierarchical scheduling mechanisms in multi-level fog computing," IEEE Trans. Serv. Comput., to be published, doi: 10.1109/TSC.2021.3079110.
[32] R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. De Rose, and R. Buyya, "CloudSim: A toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms," Softw.: Pract. Experience, vol. 41, no. 1, pp. 23–50, 2011.
[33] R. K. Naha, S. Garg, A. Chan, and S. K. Battula, "Deadline-based dynamic resource allocation and provisioning algorithms in fog-cloud environment," Future Gener. Comput. Syst., vol. 104, pp. 131–141, 2020.
[34] A. Karimiafshar, M. R. Hashemi, M. R. Heidarpour, and A. N. Toosi, "An energy-conservative dispatcher for fog-enabled IIoT systems: When stability and timeliness matter," IEEE Trans. Serv. Comput., to be published, doi: 10.1109/TSC.2021.3114964.
[35] L. Li, Q. Guan, L. Jin, and M. Guo, "Resource allocation and task offloading for heterogeneous real-time tasks with uncertain duration time in a fog queueing system," IEEE Access, vol. 7, pp. 9912–9925, 2019.
[36] M. Adhikari, M. Mukherjee, and S. N. Srirama, "DPTO: A deadline and priority-aware task offloading in fog computing framework leveraging multilevel feedback queueing," IEEE Internet Things J., vol. 7, no. 7, pp. 5773–5782, Jul. 2020.
[37] E. Deelman et al., "Pegasus, a workflow management system for science automation," Future Gener. Comput. Syst., vol. 46, pp. 17–35, 2015.
[38] L. Li, M. Guo, L. Ma, H. Mao, and Q. Guan, "Online workload allocation via fog-fog-cloud cooperation to reduce IoT task service delay," Sensors, vol. 19, no. 18, 2019, Art. no. 3830.
[39] C. Sonmez, A. Ozgovde, and C. Ersoy, "Fuzzy workload orchestration for edge computing," IEEE Trans. Netw. Service Manage., vol. 16, no. 2, pp. 769–782, Jun. 2019.
[40] J. Almutairi and M. Aldossary, "A novel approach for IoT task offloading in edge-cloud environments," J. Cloud Comput., vol. 10, no. 1, pp. 1–19, 2021.

Nitin Auluck is currently an associate professor in the Department of Computer Science & Engineering, Indian Institute of Technology Ropar, Punjab, India. His research interests include fog computing, real-time systems, and parallel and distributed systems.

Omer Rana (Member, IEEE) is currently professor of performance engineering with the School of Computer Science & Informatics, Cardiff University, U.K.