
IEEE TRANSACTIONS ON SERVICES COMPUTING, VOL. 16, NO. 2, MARCH/APRIL 2023

Real-Time Scheduling on Hierarchical Heterogeneous Fog Networks

Amanjot Kaur, Nitin Auluck, and Omer Rana, Member, IEEE

Amanjot Kaur and Nitin Auluck are with the Department of Computer Science and Engineering, Indian Institute of Technology Ropar, Punjab 140001, India. E-mail: {2017csz0014, nitin}@iitrpr.ac.in.
Omer Rana is with the School of Computer Science and Informatics, Cardiff University, CF10 3AT Cardiff, U.K. E-mail: [email protected].
Manuscript received 11 July 2021; revised 9 Jan. 2022; accepted 25 Feb. 2022. Date of publication 3 Mar. 2022; date of current version 10 Apr. 2023. (Corresponding author: Nitin Auluck.) Recommended for acceptance by J. Kolodziej. Digital Object Identifier no. 10.1109/TSC.2022.3155783.

Abstract—Cloud computing is widely used to support offloaded data processing for various applications. However, latency-constrained data processing has requirements that may not always be suitable for cloud-based processing. Fog computing brings processing closer to data generation sources, reducing propagation and data transfer delays, and is therefore a viable alternative for processing tasks with real-time requirements. We propose a scheduling algorithm, RTH²S (Real Time Heterogeneous Hierarchical Scheduling), for a set of real-time tasks on a heterogeneous integrated fog-cloud architecture. We consider a hierarchical model for fog nodes, with nodes at higher tiers having greater computational capacity than nodes at lower tiers, though with greater latency from data generation sources. Tasks with various profiles are considered. For regular profile jobs, we use least laxity first (LLF) to find the preferred fog node for scheduling. For "tagged" profiles, based on their tag values, the jobs are either split in order to finish execution before the deadline, or the LLF heuristic is used. Using HPC2N workload traces covering 3.5 years of activity, the real-time performance of RTH²S versus comparable algorithms is demonstrated. We also consider Microsoft Azure-based costs for the proposed algorithm. Our proposed approach is validated using both simulation (to demonstrate scale-up) and a lab-based testbed.

Index Terms—Fog computing, cloud computing, real-time scheduling, fog node hierarchy

1 INTRODUCTION

Fog computing involves the use of a number of nodes/micro data centers located in close proximity to users and data generation sources [1]. As there could be significant propagation delays between the data generation sources and the cloud data center, fog computing provides computing capability closer to the data source. An example of such a job could be a surveillance camera at a security facility that detects an intruder and alerts relevant authorities. For such jobs, processing times need to be in the sub-second range – a constraint that a remotely located cloud data center may not be able to guarantee. In contrast, fog nodes, owing to their proximity to users and data generation sources, can execute such critical tasks with a lower invocation latency. To support the diverse execution requirements of real-time applications, the fog node architecture may be hierarchical [2]. Nodes in proximity to a user (lower tier) are considered to have lower computational capability than fog nodes at higher tiers, which are at a greater geographical distance (i.e., higher latency) from data sources. A trade-off therefore exists between the processing capability of fog nodes and the propagation delay to users.

"Smart" transportation provides a relevant scenario in this context – as discussed by the Open Fog Consortium [2]. The data volume generated by connected cars can become challenging to transfer to the cloud for processing, as this could result in network congestion and an increase in processing times – leading to missed deadlines for tasks that process this data. A smart transportation scenario can involve a number of sensors present on roadside units, such as weather sensors for ice, snow and water, and roadside sensors for speed, volume and traffic monitoring (e.g., cameras). Smart cars communicate with these roadside sensors or with other cars in order to make specific decisions – offering services such as infotainment, supporting collision avoidance, and processing prior information regarding poor road conditions or traffic congestion. Note that these jobs can have diverse execution requirements and deadlines. An example of a small job with a tight deadline is real-time "sensing" or monitoring of a parameter (or calculating averages or max./min. across these parameters over a time window), such as temperature, wind speed, humidity or rainfall. Such jobs are typically in the milliseconds range and need to be scheduled on a tier-1 fog node located in proximity to the user. On the other hand, jobs with less stringent deadlines but greater execution requirements could include control tasks, such as managing the properties of an infotainment system. This is an example of a medium-sized job and may be executed at tier-2 or tier-3 fog nodes. Finally, large jobs with loose deadlines could be executed at the cloud. An example of such a job could be batch data processing, e.g., determining how many vehicles crossed an intersection (involving integration across multiple sensors), analysis of traffic patterns at a junction or road intersection, etc. It is pertinent to mention that these fog nodes would be distributed and would remain available even in the event of a data communications network outage, making the system more resilient. The key contributions of the paper are as follows:

(i) a multi-tier hierarchical fog-cloud real-time scheduling algorithm, RTH²S, taking account of device heterogeneity. We propose a mathematical model for an n-tier fog-cloud architecture that schedules jobs onto fog/cloud processors while meeting their deadline requirements.
(ii) RTH²S works for both regular and tagged job profiles. The algorithm either finds a preferred fog node for job execution, or splits the job based on a combination of its size and deadline requirements.
(iii) Using both simulation and a prototype testbed, we demonstrate the performance of the proposed algorithm RTH²S in enhancing a key benefit metric, the Success Ratio (SR), while considering task load, propagation delay, heterogeneity and job profiles. Further, the impact of the tiered fog architecture on scheduling performance is also discussed.

This paper is organized as follows. Section 2 includes a discussion of related work. The system model, notation and problem formulation are described in Section 3. An orchestration protocol to support automatic system functioning is discussed in Section 4. The proposed algorithm is presented in Section 5. Section 6 discusses results. Finally, Section 7 concludes the paper and discusses future work.

2 RELATED WORK

The Open Fog Consortium (involving a number of industry partners, e.g., Cisco, Intel, Microsoft, Dell) has proposed a reference architecture [2] with several use cases for fog computing: smart transportation, smart buildings, airport security, and so on. The extension of network resources from cloud to fog nodes yields a rich environment which can provide storage, computation and communication resources over the network [18]. Capacity planning and optimisation of a fog-based system may be analysed using the iFogSim simulator [3], enabling various resource management strategies in fog-cloud architectures to be considered. iFogSim matches fog node capability (as Million Instructions Per Second (MIPS), memory, and network connectivity) with task capability (defined using similar metrics as fog node capability). The simulator enables understanding the trade-off between the computational capability and power consumption of a fog node, and the latency of executing an application task. A survey of fog computing [4], [13] explores a number of research trends – differentiating characteristics of fog and cloud computing. In [16], the authors pitch fog computing as a crucial element for the Internet of Things (IoT), and develop a mathematical model to assess the suitability of fog computing in IoT [20]. In [5], the authors observe that fog nodes/cloudlets provide an acceptable interactive response for human cognition, owing to their physical proximity and one-hop network latency. Several papers, given the context of fog-based system usage, have focused on minimising latency in such environments [21].

In our previous work [11], we considered real-time scheduling on a single tier of homogeneous fog nodes, i.e., all fog nodes were assumed to have identical processing capabilities, with the interpretation that a job will have identical execution costs on all fog nodes. In [9], the authors schedule tasks in real time on identical processors based on their deadline requirements. An energy-efficient fog computing framework has been proposed in [8], in which computation resources are shared with multiple neighbouring helper nodes and an optimal scheduling decision is determined for a task node. In [10], the authors proposed a real-time algorithm called DEBTS for achieving balanced system performance in terms of service delay and energy consumption. However, the authors have not considered heterogeneous fog nodes in their work. In [17], a fog-based delay-optimal task scheduling algorithm has been proposed. The authors consider a heterogeneous fog network as a part of dynamic wireless networks in [19]. In [14], the placement of tasks on heterogeneous fog nodes has been explored on the basis of privacy tags. In [33], the authors discuss resource allocation by ranking fog devices based on processing, bandwidth and latency, and assigning processors to deadline-based tasks. In [34], the authors propose a dynamic request dispatching algorithm which minimizes energy consumption and timeliness by using the Lyapunov Optimization Technique. The authors of [35] propose an adaptive queuing weight (AQW) resource allocation and real-time offloading technique in a heterogeneous fog environment. In [36], the authors reduce the waiting time of delay-sensitive tasks by using a multilevel-feedback queue and minimizing the starvation problem of low-priority tasks. However, all these approaches focus on a single tier of fog nodes between the edge and cloud systems, and cannot be applied directly to multi-tier fog-cloud architectures.

There has been some work on multi-tier hierarchical fog-cloud scheduling. In [24], the authors proposed a hierarchical edge computing architecture with identical resources in each tier, however without consideration of real-time scheduling. In [27], the authors propose a multi-tier fog-cloud architecture and divide tasks into low and high priority. In [28], a hierarchical fog-cloud architecture (limited to 2-tier cloudlets) and a workload allocation scheme are proposed, which attempt to minimise the response time of user requests. In [29], [30], the authors propose a multi-layer heterogeneous architecture for task offloading to minimise response time, without considering real-time tasks. In [31], a component-based scheduler for a multi-tier fog-cloud architecture is proposed. However, the authors consider only two tiers of fog nodes in their work, and measure the results using simulation – without taking account of a real workload. The effective mapping of jobs to a group of heterogeneous fog nodes is no doubt a challenging problem. To the best of our knowledge, no work has looked into heterogeneous, hierarchical real-time scheduling of "regular" as well as "tagged" profiled tasks on fog-cloud architectures, supporting both "inter-level heterogeneity" as well as "intra-level heterogeneity".

3 SYSTEM MODEL

3.1 Proposed Architecture
The proposed architecture is illustrated in Fig. 1. Table 1 summarises the notation used in our approach – where a set of fog nodes is given by FN. We assume that a hierarchy of fog nodes exists – as outlined in [2]. At the lowest level (i.e., closest to the user), we have tier-1 fog nodes, followed by tier-2 fog nodes at the next level, and then tier-3 fog nodes at a higher level (i.e., closer to the data center). As a general rule, as one moves up the hierarchy, the execution capacity of the fog nodes increases. However, on the flip side, the communication distance to the data generation sources also increases, increasing the propagation delay. Note that Fig. 1 depicts n tiers of fog nodes. In this work, we consider three tiers of fog nodes, i.e., n = 3. Based on the system requirements, the architecture can be extended to n tiers of fog nodes.

Fig. 1. Fog architecture.

TABLE 1
Key Notation

C               cloud data center set
c_x             cloud data center, c_x ∈ C
FN              set of all fog nodes
FN_1            set of all tier-1 fog nodes
fn_1^y          yth tier-1 fog node ∈ FN_1
FN_2            set of all tier-2 fog nodes
fn_2^y          yth tier-2 fog node ∈ FN_2
FN_3            set of all tier-3 fog nodes
fn_3^y          yth tier-3 fog node ∈ FN_3
J               set of all jobs
N               total number of jobs
JS              set of all small jobs ∈ J
j_s^k           kth small job ∈ JS
JM              set of all medium jobs ∈ J
j_m^k           kth medium job ∈ JM
JL              set of all large jobs ∈ J
j_l^k           kth large job ∈ JL
c(fn_i^j)       capacity of the jth fog node at the ith tier
c(c_x)          capacity of cloud data center c_x
d(j_k)          deadline of the kth job
pd(j_k, fn_1)   propagation delay between a job and a tier-1 fog node
pd(j_k, fn_2)   propagation delay between a job and a tier-2 fog node
pd(j_k, fn_3)   propagation delay between a job and a tier-3 fog node
pd(j_k, c_x)    propagation delay between a job and the cloud data center
t(j_k, fn)      job execution cost on a fog node
pfn_z(j_k)      zth preferred fog node of job j_k

3.2 Notation
The set of all tier-1 fog nodes is given by FN_1, the set of all tier-2 fog nodes is given by FN_2, and the set of all tier-3 fog nodes is given by FN_3. The kth fog node at tier 1, 2, and 3 is given by fn_1^k, fn_2^k, and fn_3^k respectively. At the top of the hierarchy, we have a cloud data center c ∈ C, where C is the set of all cloud data centers. Each cloud data center could potentially belong to a different cloud provider, e.g., Google, Amazon, Microsoft. In our work, we consider a single cloud data center, but the proposed approach can be generalised to multiple providers. The capacity of a particular fog node or the cloud data center is given by c. This execution capacity is given in terms of Millions of Instructions per Second, or MIPS. The popular fog simulator iFogSim [3] models computational node execution capacity in MIPS, so we have chosen MIPS in our model – to make our model compatible with iFogSim. An application that needs to be executed consists of a set of jobs; the set of all jobs in the system is denoted by J, such that J = {J_1, J_2, J_3, J_4, ...}. Throughout this paper, we use the terms job and task interchangeably. The deadline of the kth job is denoted by d(j_k). The propagation delay between job j_k and tier-1 fog node fn_1 is given by pd(j_k, fn_1). Likewise, pd(j_k, fn_2) and pd(j_k, fn_3) represent the propagation delay between job j_k and tier-2 fog node fn_2, and between job j_k and tier-3 fog node fn_3, respectively. Finally, the propagation delay between job j_k and cloud data center c_x is represented as pd(j_k, c_x). Note that fn_1 could be any tier-1 fog node ∈ FN_1, fn_2 could be any tier-2 fog node ∈ FN_2, and so on. The execution cost of the kth job on a tier-1 fog node is denoted by t(j_k, fn_1).

In order to classify the jobs based on their execution requirements, we consider three sets of jobs: small (JS), medium (JM), and large (JL). So, j_k could be any small, medium, or large job ∈ JS, JM, JL. Small jobs (JS) are jobs that require little processing power to execute, medium jobs (JM) are jobs that require moderate processing power to run, and large jobs (JL) are jobs that need high processing power for execution. Although there are a number of smart car data sets that focus on speed, traffic patterns and car images, we could not find any data set that specifies the CPU execution requirements or memory usage of various smart car tasks. Some representative tasks from the smart automobile use case are given in Table 2. The nature of these jobs has been inspired by [22]. We consider three types of deadlines: tight (T), moderate (M), and loose (L). The deadline category is decided from the Deadline Factor (DF) defined in Section 6.2. Table 3 depicts the priority assignment based on the job sizes and their deadlines; we have considered the nine (3 × 3) combinations of job sizes and deadlines. In general, as deadlines may be directly proportional to execution costs, small jobs have lower execution costs and tight deadlines. Hence, they are assigned to tier-1 fog nodes located in the closest proximity to users. Typically, medium jobs have higher execution costs and looser deadlines than small jobs, and are assigned to tier-2 or tier-3 fog nodes. Finally, the cloud runs large jobs with loose deadlines. All these jobs can be considered to have regular job profiles. The set of regular job profiles is denoted as R.

TABLE 2
Representative Smart Car Tasks

Job     Execution cost (MIPS)   Memory usage (GB)   Job type   Job description
j_1^s   250                     0.34                small      rainfall
j_2^s   155                     0.28                small      temperature
j_1^m   3000                    0.81                medium     object recognition
j_2^m   1850                    0.67                medium     wiper control
j_1^l   6700                    1.78                large      traffic patterns
j_2^l   7200                    2.01                large      non-critical updates
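To make the notation concrete, the following Python sketch models fog nodes and jobs as plain data structures with capacities in MIPS and per-node propagation delays. The class and field names (FogNode, Job, deadline_ms, etc.) and the sample deadline values are assumptions introduced for illustration only; they are not part of the paper's implementation.

from dataclasses import dataclass

@dataclass
class FogNode:
    """A fog node (or the cloud data center) in the n-tier hierarchy."""
    name: str
    tier: int               # 1, 2, 3 for fog tiers; a higher value here denotes the cloud
    capacity_mips: float    # execution capacity c(fn) in MIPS
    prop_delay_ms: float    # propagation delay pd(., fn) from the data source

@dataclass
class Job:
    """A real-time job j_k, classified as small/medium/large with a deadline."""
    job_id: str
    exec_cost_mips: float   # execution requirement t(j_k, .) expressed in MIPS
    memory_gb: float
    size: str               # "small", "medium" or "large"
    deadline_ms: float      # d(j_k), relative to arrival (arrival time taken as 0)

# A toy 3-tier system loosely following Fig. 1 and Table 2 (all values illustrative).
fog_nodes = [
    FogNode("fn1_1", tier=1, capacity_mips=1500,  prop_delay_ms=2),
    FogNode("fn2_1", tier=2, capacity_mips=3000,  prop_delay_ms=6),
    FogNode("fn3_1", tier=3, capacity_mips=5800,  prop_delay_ms=12),
    FogNode("cdc",   tier=4, capacity_mips=70000, prop_delay_ms=137),
]
jobs = [Job("j1s", 250, 0.34, "small", deadline_ms=20),
        Job("j1m", 3000, 0.81, "medium", deadline_ms=120)]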

TABLE 3
Priority Level Assignment

                 tight (T)   moderate (M)   loose (L)
Small job j_s    P1          P2             P3
Medium job j_m   P1          P2             P3
Large job j_l    P1          P2             P3

TABLE 4
Tag Assignment

                 tight (T)   moderate (M)   loose (L)
Small job j_s    x           tag2           tag2
Medium job j_m   tag1        x              tag2
Large job j_l    tag1        tag1           x

However, based on the user requirements, jobs may not fit into the above profiles. We call such job profiles "tagged" – denoted as T. For example, if a medium job has a tight deadline, or a large job has a tight/moderate deadline, then despite being medium/large, it needs to execute in the lower fog layers to meet its deadline requirements. We propose to split such jobs so that they can be executed on lower-capability resources. All these jobs are assigned tag1 and need to be assigned a high priority. As another example, small jobs may have moderate/loose deadlines and medium jobs may have loose deadlines. These jobs are tagged as tag2 and can be assigned a lower priority. The set T has two subsets: the T1 subset for tag1 jobs, and the T2 subset for tag2 jobs. Table 4 depicts the tag assignment.
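Tables 3 and 4 amount to a simple lookup from (job size, deadline category) to a priority level and a tag. A minimal sketch of that lookup, with "x" standing for the regular (untagged) profile as in Table 4; the function name and dictionary layout are illustrative:

# Priority levels from Table 3: the priority depends only on the deadline category.
PRIORITY = {"tight": "P1", "moderate": "P2", "loose": "P3"}

# Tags from Table 4: "x" means the regular (untagged) profile.
TAG = {
    ("small",  "tight"): "x",    ("small",  "moderate"): "tag2", ("small",  "loose"): "tag2",
    ("medium", "tight"): "tag1", ("medium", "moderate"): "x",    ("medium", "loose"): "tag2",
    ("large",  "tight"): "tag1", ("large",  "moderate"): "tag1", ("large",  "loose"): "x",
}

def classify(size: str, deadline_cat: str) -> tuple:
    """Return (priority, tag) for a job of the given size and deadline category."""
    return PRIORITY[deadline_cat], TAG[(size, deadline_cat)]

# Example: a large job with a moderate deadline gets priority P2 and tag1,
# i.e., it must be split to meet its deadline on the lower fog layers.
assert classify("large", "moderate") == ("P2", "tag1")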
We consider various kinds of heterogeneity in our model. By their nature, tier-1 fog nodes (FN_1), tier-2 fog nodes (FN_2), tier-3 fog nodes (FN_3), and the cloud data center (c_x) are heterogeneous with respect to each other. This means that the execution capacity of FN_1 nodes is different from that of FN_2 and FN_3 nodes, which is in turn different from that of c_x. In general, the order of execution capacity is: c_x > FN_3 > FN_2 > FN_1. Higher execution capacity implies faster execution rates, so assigned tasks will finish earlier. We call such heterogeneity "inter-level heterogeneity". In addition, we consider "intra-level heterogeneity".

The total number of jobs is given by J. This is the sum of all small, medium and large jobs: J = JS + JM + JL. The commencement time of a job j_k ∈ J on a fog node fn ∈ FN is denoted by ct(j_k, fn). In this work, we consider that there are no precedence constraints among the jobs, so all jobs are independent of each other. However, if a job is split across various fog nodes, then we consider the aggregate finish time from the entry fog node to the exit fog node. A job may start at time 0. Note that j_k can be a whole job or a part of a split job. The execution cost of a job j_k ∈ J on a fog node fn ∈ FN is given by t(j_k, fn). Since the processing elements are heterogeneous, the execution cost of a job can be different on different processing units FN, C.

3.3 Real-Time Constraints
The finish time of a job j_k ∈ J on a fog node fn ∈ FN is denoted by ft(j_k, fn). The finish time of a job j_k on a fog node fn may be modelled as

ft(j_k, fn) = ct(j_k, fn) + t(j_k, fn) + pd(j_k, fn).   (1)

The jobs are real-time and need to finish by their deadlines. For unsplit jobs, no splitting overheads are incurred, so

ft(j_k, fn) ≤ d(j_k).   (2)

Since this is a heterogeneous system, we need to exercise caution while assigning jobs to fog nodes and the cloud. We use the concept of a preferred fog node pfn of job j_k – a node on which the job is most likely to meet its deadline requirements. By selecting a pfn for j_k, the algorithm takes processor heterogeneity into account.

For jobs with regular or tag2 profiles, we use the job laxity to find the preferred fog node. The laxity of a job j_k is denoted by l(j_k), and is the difference between its finish time and its deadline [7]. Formally,

l(j_k) = ft_min(j_k) − d(j_k).   (3)

Since each job j_k can have different finish times on different fog nodes, we consider the finish time that has the minimum value in our estimation of laxity. Hence, pfn_1(j_k) is given as

pfn_1(j_k) = pu : l(j_k, pu) < l(j_k, pu′).   (4)

In Eq. (4) above, pu, pu′ ∈ FN, C and pu ≠ pu′. In other words, assigning a job j_k to its first preferred fog node pfn_1 results in the minimum laxity value for the job. Likewise, the fog node that results in the next-lowest laxity value becomes pfn_2, and so on.

The following equation makes sure that the job's requirement is not more than the fog node capacity:

R(fn) ≥ r(j_k, fn).   (5)

Here, R(fn) represents the resource capability of fn and r(j_k, fn) represents the resource requirement of j_k.
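Eqs. (1)-(4) can be prototyped directly: estimate a finish time on each candidate node, derive the laxity against the deadline, and rank the candidates. The sketch below reuses the FogNode/Job structures from the earlier illustration and approximates execution time as the job's MIPS requirement divided by the node capacity – an assumption for the example, not the paper's exact cost model.

def finish_time(job, node, commence_ms=0.0):
    """Eq. (1): ft = commencement time + execution time + propagation delay.
    Execution time is approximated as required MIPS / node capacity (assumption)."""
    exec_ms = 1000.0 * job.exec_cost_mips / node.capacity_mips
    return commence_ms + exec_ms + node.prop_delay_ms

def laxity(job, node):
    """Eq. (3): laxity = finish time - deadline (<= 0 means the deadline is met)."""
    return finish_time(job, node) - job.deadline_ms

def preferred_nodes(job, nodes):
    """Eq. (4): order candidate processors by increasing laxity; pfn_1 is the head,
    pfn_2 the next-lowest laxity, and so on."""
    return sorted(nodes, key=lambda n: laxity(job, n))

# Example usage with the earlier toy system:
# pfn_1 = preferred_nodes(jobs[1], fog_nodes)[0]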
3.4 Job Splitting Constraints
For tag1 jobs, it may not be possible to execute the whole job on a single fog node, due to the limited computational capacity of the node and the strict deadline requirement. Hence, we propose splitting such jobs and assigning the generated sub-jobs to various fog nodes. While splitting the jobs, we need to take care of each fog node's propagation delay and computation power. A fog node with a low propagation delay (close to the user) may have low computation power. On the other hand, a fog node with high computation power may be located farther from the user. So, we consider the inverse of both parameters to calculate the value of Y(fn) for fn:

Y(fn) = 1/pd(j_k, fn) + 1/c(j_k, fn).   (6)

To divide the job j_k into sub-jobs w_i, we use Eq. (7):

w_i(fn) = j_k × Y(fn).   (7)

The sub-job size w_i is thus calculated by considering the propagation delay pd and capacity c of the fog node. The number of sub-jobs depends upon the size of the job and the characteristics of the fog nodes. The remaining portion of job j_k is calculated in Eq. (8):
uthorized licensed use limited to: NATIONAL INSTITUTE OF TECHNOLOGY WARANGAL. Downloaded on November 07,2023 at 09:59:42 UTC from IEEE Xplore. Restrictions apply
1362 IEEE TRANSACTIONS ON SERVICES COMPUTING, VOL. 16, NO. 2, MARCH/APRIL 2023

w_left = j_k − Σ_i w_i.   (8)

The overhead Q involved in splitting large jobs into smaller ones has three components: (i) the delay involved in transmitting the jobs to fog nodes, where job j_k is split into smaller chunks denoted by w_i; (ii) the finish time ft of the job j_k; and (iii) the delay in receiving the results. The output of each sub-job with input w_i is denoted by w_o. The bandwidth of the network connection between user u_i and fog node fn is denoted by bw.

Q(j_k) = Σ_i w_i / bw(u_i, fn) + ft(j_k) + Σ_o w_o / bw(fn, u_i).   (9)

The jobs are real-time and need to finish by their deadlines, so we need to take these overheads into account for split jobs:

Q(j_k) ≤ d(j_k).   (10)
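As a rough illustration of Eqs. (6)-(9), the sketch below computes Y(fn) per candidate node, derives sub-job sizes w_i and the leftover w_left, and accumulates the splitting overhead Q. The bandwidth values and the output/input size ratio are placeholders, and pd and c are treated here as per-node quantities (a simplification); none of these are parameters taken from the paper.

def split_weight(node):
    """Eq. (6): Y(fn) combines the inverses of propagation delay and capacity."""
    return 1.0 / node.prop_delay_ms + 1.0 / node.capacity_mips

def sub_jobs(job, nodes):
    """Eq. (7)/(8): size each sub-job w_i as the job size times Y(fn);
    w_left is what remains (clamped at zero for this sketch)."""
    parts = {n.name: job.exec_cost_mips * split_weight(n) for n in nodes}
    w_left = job.exec_cost_mips - sum(parts.values())
    return parts, max(w_left, 0.0)

def split_overhead(parts, ft_ms, bw_up=100.0, bw_down=100.0, out_ratio=0.1):
    """Eq. (9): transmission of the sub-jobs + finish time + return of the results.
    Bandwidths and the output/input ratio are illustrative assumptions."""
    send = sum(parts.values()) / bw_up
    recv = sum(parts.values()) * out_ratio / bw_down
    return send + ft_ms + recv

# A tag1 job is schedulable via splitting only if Eq. (10) holds:
# split_overhead(parts, ft_ms) <= job.deadline_ms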


3.5 Job Precedence Model
A workflow is defined as a set of small interdependent jobs modeled as a Directed Acyclic Graph (DAG). Each DAG is defined by the tuple (J, E, t, cc), where J is the set of jobs and E is the set of edges defining data dependencies between them. Let J = (j_1, j_2, ..., j_t) be the set of t jobs in the workflow. t is the set of execution costs, and the cost for job j_k ∈ J is denoted by t(j_k). The set cc consists of communication costs, and each edge from job j_k to job j_i, e_{k,i} ∈ E, has a cost cc(j_k, j_i) associated with it. The data flow dependency from job j_k to job j_i is defined by an edge e_{k,i} ∈ E, with a precedence constraint that job j_i can only start after the completion of job j_k. Suppose a job j_k is scheduled on a fog node fn. Let mst(j_k, fn) and mct(j_k, fn) be the minimum start time and minimum completion time for job j_k on fn respectively, when fn is available for the execution of job j_k.

mst(j_k, fn) = 0, if pred(j_k) = ∅   (11)

mst(j_k, fn) = max(act(j_p) + cc(j_p, j_k)), for each j_p ∈ pred(j_k)   (12)

mct(j_k, fn) = mst(j_k, fn) + t(j_k, fn).   (13)

If both the parent and child jobs are assigned to the same fog node, the cost cc will be zero. After a job is assigned to a fog node, the mst and mct become the job's actual start time (ast) and actual completion time (act), respectively. Finally, the workflow's finish time (ft) is equal to the actual completion time of the last job, j_exit:

ft(DAG) = act(j_exit).   (14)

We take a DAG as input for the dependent tasks, convert it into an ordered task list, and then submit these tasks for execution. For workflow/DAG execution, a tool like Pegasus [37] may be used.
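For the workflow model of Eqs. (11)-(14), minimum start and completion times follow from one topological pass over the DAG. The sketch below assumes a dict-based DAG description and zeroes the communication cost when parent and child share a node, as stated above; it is a minimal illustration under those assumptions, not a workflow engine such as Pegasus.

def topo_order(preds):
    """Kahn-style topological sort over a predecessor map (assumes an acyclic graph)."""
    remaining = {j: set(ps) for j, ps in preds.items()}
    order = []
    while remaining:
        ready = [j for j, ps in remaining.items() if not ps]
        order.extend(ready)
        for j in ready:
            del remaining[j]
        for ps in remaining.values():
            ps.difference_update(ready)
    return order

def workflow_schedule(exec_cost, preds, cc, placement):
    """Compute actual completion times via Eqs. (11)-(13) for a given placement.
    exec_cost[j][node]: t(j, node); preds[j]: predecessors of j;
    cc[(p, j)]: communication cost on edge p -> j; placement[j]: chosen node."""
    act = {}
    for j in topo_order(preds):
        if not preds[j]:                                    # Eq. (11)
            mst = 0.0
        else:                                               # Eq. (12), cc = 0 on the same node
            mst = max(act[p] + (0.0 if placement[p] == placement[j] else cc[(p, j)])
                      for p in preds[j])
        act[j] = mst + exec_cost[j][placement[j]]           # Eq. (13): mct = mst + t
    return act                                              # ft(DAG) = act of the exit job (Eq. 14)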
3.6 Queuing Delay
As the processing capability of the fog nodes FN is considerably less than the processing capability of the cdc c_x, queuing can occur at fog nodes when jobs are large in number. The jobs scheduled on the cdc c_x can generally execute without any queuing delay. Due to the limited processing capability of fog nodes, we assume that each fog node maintains a queue to buffer the jobs. The queue length of fn at the (t+1)th instance can be defined as follows [38]:

q(fn, t+1) = max(q(fn, t) + a(fn, t) − μ(fn, t), 0).   (15)

Here, q(fn, t+1) is the queue of fn at the (t+1)th instance, and q(fn, t) is the queue of fn at the tth instance. μ(fn, t) represents the number of jobs leaving the queue of fn in the tth time slot (jobs processed by the fog node), and a(fn, t) denotes the number of jobs arriving at fn in the tth time slot. We add the queuing delay to Equation (1) to calculate the finish times.
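Eq. (15) is a standard max-plus (Lindley-type) queue recursion; a minimal sketch with illustrative arrival/service counts:

def queue_length(q_t, arrivals_t, served_t):
    """Eq. (15): q(fn, t+1) = max(q(fn, t) + a(fn, t) - mu(fn, t), 0)."""
    return max(q_t + arrivals_t - served_t, 0)

# Example evolution of one fog node's queue over three slots (illustrative numbers):
q = 0
for a, mu in [(5, 3), (2, 3), (0, 3)]:
    q = queue_length(q, a, mu)   # successive values: 2, 1, 0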
3.7 Cost and Objective Function Constraints
Monetary Cost (MC) is estimated by considering the weighted average of the execution cost t and propagation delay pd of j_k on a fog node fn. This weighted average is multiplied by the price of the fog node onto which the job is being offloaded [26].

MC(j_k, fn) = price(fn) × (w_1 · t(j_k, fn) + w_2 · pd(j_k, fn)) / (w_1 + w_2).   (16)

Similarly, the Monetary Cost (MC) of the cloud data center c_x can be defined as follows:

MC(j_k, c_x) = price(c_x) × (w_1 · t(j_k, c_x) + w_2 · pd(j_k, c_x)) / (w_1 + w_2).   (17)

The overall Monetary Cost of the system is given by

MC_system = Σ_{k=1}^{J} Σ_{n=1}^{N} Σ_{j=1}^{m} MC(j_k, fn_n^j) + Σ_{k=1}^{J} Σ_{x=1}^{C} MC(j_k, c_x).   (18)

The Success Ratio (SR) at the nth fog tier is the percentage ratio of n-tier jobs that finish execution before their deadline to the total number of jobs submitted to fog nodes FN_n. The Success Ratio of nth-tier fog nodes is given by

SR_{FN_n} = (j′(FN_n) / j(FN_n)) × 100.   (19)

Here, j′(FN_n) is the total number of FN_n-bound jobs that finish before their deadlines, i.e., ft(j_i, fn_n) ≤ d(j_i), and j(FN_n) is the total number of jobs submitted to the nth fog tier, where fn_n ∈ FN_n and j_i, d(j_i) ∈ j′.

The Success Ratio (SR_C) on the cloud data center can be defined as follows:

SR_C = (j″(c_x) / j(c_x)) × 100.   (20)

Here, j″ is the total number of cloud-bound jobs that have finished before their deadlines, i.e., ft(j_i, c_x) ≤ d(j_i), and j(c_x) is the total number of jobs submitted to the cloud data center. Here, c_x is the only cloud data center and j_i, d(j_i) ∈ j″. Overall, the Success Ratio of the system is given by

SR_system = SR_{FN_1} + SR_{FN_2} + ... + SR_{FN_n} + SR_C.   (21)

Given this context and set of definitions, we can formally define the research problem as:

"Given a set of jobs J (JS, JM, JL), a set of fog nodes FN (FN_1, FN_2, FN_3) and a cloud data center c_x with heterogeneous execution capacities, schedule the jobs on their preferred fog nodes pfn, or split the jobs across fog tiers according to the priority assignment of Table 3 and the tag assignment of Table 4, such that SR_system is maximised."
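The cost and success-ratio metrics of Eqs. (16)-(21) reduce to simple aggregations once a schedule is known. In the sketch below, the weights w1, w2 and the per-node prices are placeholders; only the formulas themselves come from the paper.

def monetary_cost(price, exec_cost, prop_delay, w1=0.5, w2=0.5):
    """Eqs. (16)/(17): price times the weighted average of execution cost and delay."""
    return price * (w1 * exec_cost + w2 * prop_delay) / (w1 + w2)

def success_ratio(finished_on_time, submitted):
    """Eqs. (19)/(20): percentage of submitted jobs that met their deadline."""
    return 100.0 * finished_on_time / submitted if submitted else 0.0

def system_success_ratio(per_tier_counts):
    """Eq. (21): the paper's system-level SR is the *sum* of per-tier SRs
    (per_tier_counts: iterable of (finished_on_time, submitted) per tier and cloud)."""
    return sum(success_ratio(f, s) for f, s in per_tier_counts)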
4 ORCHESTRATION PROTOCOL

We adopt a decentralised fog-cloud architecture driven by Orchestrating Agents (OAs), as proposed in [6]. Fig. 2 shows the conceptual architecture of the orchestration mechanism for the distributed fog-cloud architecture. Here, FN_n represents the nth fog node tier of the architecture. In this work, we consider n = 3, though n can be varied based on the application requirements. An OA is present on each computing device, and a job-specific instance is created by the OAs. The OAs cooperate with each other to achieve the goal of the scheduling algorithm: minimising the overall latency of the system, or increasing the success ratio of the system. As demonstrated in the figure, a user can submit jobs to the fog devices or the cloud data center. Each user has a network connection to FN_1. The FN_1 nodes are the fog nodes which can execute jobs with the least latency. The fog node tier FN_1 is further connected to the next tier of fog nodes, i.e., FN_2, followed by FN_3. Finally, we have a cloud data center at the top-most layer of the hierarchy.

Fig. 2. Distributed fog cloud architecture.
5 PROPOSED ALGORITHM

In this section, we describe our proposed scheduling scheme RTH²S. As mentioned in Section 3, we consider three types of jobs: small (JS), medium (JM), and large (JL). Likewise, we have resources of diverse execution capacities – tier-1 fog nodes (FN_1), tier-2 fog nodes (FN_2), tier-3 fog nodes (FN_3), and the cloud data center (c_x) – which are heterogeneous with respect to each other. For a particular job j_k ∈ J, the goal is to finish its execution within its deadline. More specifically, for regular profile jobs, the aim is to minimise the laxity of the job by assigning the job to its preferred fog node (pfn), while finishing the job within its deadline. For tagged profile jobs, we need to decide whether the job needs to be split or not. If yes, the scaling-up algorithm is invoked, which splits the job among various fog nodes in order to finish within the deadline. Otherwise, the job is assigned to the preferred fog node (pfn).

Initially, we divide the job set J into small (JS), medium (JM), and large (JL) jobs. We use the k-means algorithm to partition the jobs into small j_s, medium j_m, and large j_l sizes [25]. We use the job duration and memory usage of the jobs as the input data. The k-means clustering is applied using k = 3; this provides the breakpoints used to categorise the jobs.
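The size classification described above can be reproduced with an off-the-shelf k-means implementation over (job duration, memory usage) pairs. The scikit-learn call below is one illustrative way to obtain the three clusters and map them to small/medium/large; it is not the authors' preprocessing code.

import numpy as np
from sklearn.cluster import KMeans

def classify_job_sizes(durations, memory_usage):
    """Cluster jobs into small/medium/large (k = 3) using duration and memory as features."""
    features = np.column_stack([durations, memory_usage])
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
    # Order the clusters by mean duration so they map to small / medium / large.
    order = np.argsort([np.mean(np.asarray(durations)[labels == c]) for c in range(3)])
    names = {cluster: name for cluster, name in zip(order, ["small", "medium", "large"])}
    return [names[c] for c in labels]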

Algorithm 1. RTH²S
Input: Set of jobs
Output: Optimal schedule
1: Populate Q1, Q2, and Q3 with priority levels P1, P2, and P3 respectively;
2: Sort queues Q1, Q2, and Q3 in ascending order of deadlines;
3: Assign tags to the jobs;
4: scheduledlist S = empty, Qpj = empty;
5: for k = 1 to size(Q1) do
6:   if tag(j_k) → x then
7:     Preferred-fn(1);
8:   end
9:   if tag(j_k) → tag1 then
10:    Preempt the currently scheduled jobs and add them to Qpj;
11:    ScaleUp();
12:    Resume the jobs present in Qpj;
13:  end
14: end
15: for k = 1 to size(Q2) do
16:  if tag(j_k) → x || tag(j_k) → tag2 then
17:    Preferred-fn(2);
18:    if j_k is unscheduled then
19:      Preferred-fn(3);
20:    end
21:  end
22:  if tag(j_k) → tag1 then
23:    Preempt the currently scheduled jobs and add them to Qpj;
24:    ScaleUp();
25:    Resume the jobs present in Qpj;
26:  end
27: end
28: for k = 1 to size(Q3) do
29:  if tag(j_k) → x || tag(j_k) → tag2 then
30:    Preferred-fn(3);
31:  else
32:    schedule j_k on the cdc;
33:    estimate the MC using Eq. (17);
34:    remove job j_k from queue Q, add job j_k to scheduledlist S;
35:  end
36: end
37: Calculate SR(sys) ∀ FN, C

The algorithm RTH²S works as follows. The input to the algorithm is the set of jobs; this set consists of jobs of various sizes along with their deadlines. The first step is to populate the set of jobs J into three queues Q1, Q2, and Q3, based on the priority level assignment of Table 3. The queues Q1, Q2 and Q3 are sorted in ascending order of deadlines; the rationale behind this sorting is to align it with the Earliest Deadline First algorithm. We form a list named scheduledlist S, which is initially empty. We also consider a queue Qpj, which holds preempted jobs and is initially empty. Tags are assigned to the jobs as per Table 4. The jobs are executed according to their priority levels, with P1 being the highest priority and P3 the lowest.

Initially, the jobs present in Q1 are scheduled. As soon as a job arrives, we examine its tag. If the job has no tag, i.e., a small job with a tight deadline, then its preferred fog node is estimated. In Preferred-fn(1), 1 stands for fog tier-1. First, the ft of job j_k is calculated for the tier-1 fog nodes. We calculate the minimum finish time ft_min among the calculated finish times, and estimate the pfn for j_k using Equation (4). In the next step, we check two conditions: whether the task's requirement is within the preferred fog node's (pfn) capacity, and whether its laxity is less than zero. The latter check indicates whether job j_k finishes before its deadline. If both conditions are satisfied, then job j_k is scheduled on the pfn. After this, we calculate the associated Monetary Cost MC on the preferred fog node pfn, and the job is added to the scheduledlist S. If the tag of the job is tag1, then the jobs scheduled on tier-1 are preempted and the ScaleUp algorithm is called. The preempted jobs are added to Qpj.

The ScaleUp algorithm works as follows. First, we find the minimum MIPS required to finish the job before its deadline. We form a variable sum, which is initialised to zero, and loop over the fog nodes at tier-1. The associated value of Y(fn) is estimated using Equation (6). If the fog node has spare capacity, then we estimate the sub-job w_i using Equation (7). After this step, we calculate the finish time of the sub-job on the selected fog node. Once job j_k has obtained the minimal MIPS required for execution, the loop breaks. The overhead for job j_k is then estimated. If job j_k finishes before its deadline, then it is scheduled: it is removed from queue Q1 and added to the scheduled list S. Otherwise, job j_k cannot be submitted to the scheduler. The jobs in Qpj are then resumed on their respective fog nodes.

After traversing Q1, the algorithm moves to priority P2. For each incoming job, the tag is examined. If there is no tag, or the tag is tag2, then Preferred-fn is run for fog tier-2. If the job is still unscheduled, then the preferred fog node is examined at tier-3. For tag1, preemption at fog tier-2 is performed and the ScaleUp algorithm is called. For the last queue, i.e., Q3, the algorithm tries to run the jobs on fog tier-3. If the queue still has some unscheduled jobs, then they are scheduled on the cdc. Finally, the SR for all the jobs is calculated.
Algorithm 2. Preferred-fn(n)
Input: Job j_k with tag → x or tag2
Output: pfn
1: for y = 1 to m do
2:   estimate ft of job j_k on fn_n^y using Eq. (1);
3:   find pfn with ft_min over all ft using Eq. (4);
4: end
5: if R(pfn) ≥ r(j_k, pfn) and laxity(pfn) ≤ 0 then
6:   schedule job j_k on preferred fog node pfn;
7:   estimate the MC on pfn using Eq. (16);
8:   add job j_k to scheduledlist S;
9: end

Algorithm 3. ScaleUp
Input: Job j_k with tag → tag1
Output: Optimal schedule
1: Calculate the min. MIPS for job j_k to finish before its deadline;
2: sum ← 0;
3: for p = 1 to m do
4:   Get the MIPS of the pth fog node;
5:   sum = sum + MIPS(p);
6:   Estimate Y on the pth fog node using Eq. (6);
7:   if Equation (5) holds true then
8:     Calculate sub-job w_i on the fog node using Eq. (7);
9:   end
10:  Estimate ft(w_i) on the pth fog node;
11:  if sum ≥ minimal MIPS then
12:    break;
13:  end
14: end
15: Estimate Q(j_k) using Equation (9);
16: if Q(j_k) ≤ deadline then
17:   schedule job j_k on the fog nodes;
18:   estimate the MC on the fog nodes using Equation (16);
19:   add job j_k to scheduledlist S;
20: else
21:   job j_k can't be submitted;
22: end
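Read together, Algorithms 1-3 amount to a queue-by-queue dispatch loop: untagged and tag2 jobs go to a laxity-based preferred node, tag1 jobs trigger preemption and ScaleUp-style splitting, and leftover Q3 jobs fall back to the cloud. The sketch below outlines only that control flow; preferred_fn, scale_up, preempt and resume are callables supplied by the caller (e.g., built from the earlier sketches), and jobs are assumed to carry tag and deadline_ms attributes – these are assumptions, not functions or fields from the paper.

def rth2s(q1, q2, q3, tiers, cdc, preferred_fn, scale_up, preempt, resume):
    """Sketch of the RTH2S dispatch order over the three priority queues.
    tiers[t] is the list of tier-t fog nodes; cdc is the cloud data center."""
    scheduled = []
    for queue, tier_order in ((q1, (1,)), (q2, (2, 3)), (q3, (3,))):
        for job in sorted(queue, key=lambda j: j.deadline_ms):   # EDF-style ordering
            placed = None
            if job.tag in ("x", "tag2"):
                for t in tier_order:                 # try the preferred node tier by tier
                    placed = preferred_fn(job, tiers[t])
                    if placed is not None:
                        break
                if placed is None and queue is q3:
                    placed = cdc                     # Q3 leftovers fall back to the cloud
            else:                                    # tag1: preempt the tier and split the job
                preempted = preempt(tiers[tier_order[0]])
                placed = scale_up(job, tiers[tier_order[0]])
                resume(preempted)
            if placed is not None:
                scheduled.append((job, placed))
    return scheduled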
6 SIMULATION RESULTS

In this section, we discuss the simulation results carried out for the performance evaluation of the proposed algorithm RTH²S. We consider sample scenarios that align with the fog architecture depicted in Fig. 1. The jobs may be run on tier-1 fog nodes FN_1, tier-2 fog nodes FN_2, tier-3 fog nodes FN_3, or on the cloud data center c_x. In our work, we consider three tiers of fog nodes. The proposed model can be readily extended to support more tiers, based on the application requirements.

The jobs are executed on the basis of the assigned priority. Priority P1 jobs run on FN_1 nodes, priority P2 jobs run on FN_2 or FN_3 nodes, and priority P3 jobs run on FN_3 nodes or on the cloud data center c_x. This ensures that the utilization of all nodes is maximised. We compare our proposed scheduling algorithm RTH²S with cdc-only and with a scheduling algorithm for heterogeneous fog computing architectures proposed in [15]. In cdc-only, the fog nodes are not considered for executing jobs, i.e., only the cloud data center c_x is used to execute all the jobs. In [15], the authors propose the LTF (Longest Time First) scheduling algorithm for heterogeneous fog networks. The LTF algorithm schedules the jobs with the longest execution time on the fastest node. Prior to execution, LTF sorts the jobs in descending order based on their deadlines.

6.1 Workload

We have used a real workload called HPC2N (High Performance Computing Center North) [12], [23]. This is a joint operation between various facilities and educational institutes. The workload is the result of about 3.5 years of activity, carried out on the Seth cluster of the HPC center in Sweden. The Linux cluster consists of 120 dual-CPU nodes; each node consists of 2 AMD Athlon MP2000+ CPUs with a clock frequency of 1.67 GHz. The peak performance of this cluster is 800 Gigaflops. Each node has access to 1 GB of RAM, which is shared by both CPUs. The communication framework consists of a 3D SCI interconnect and fast Ethernet. This workload consists of over 500,000 jobs of various lengths, and is suited to cloud, grid and fog computing. Each task has various parameters associated with it – such as Job ID, burst time (t), memory usage, and arrival time. For each job, we take the arrival time as 0. We have divided the jobs into three categories by job length using k-means: small, medium and large. The ranges of job lengths considered for each category are as follows – small: 1-95, medium: 96-205, large: 206-400. The fog network consists of 8 FN_1 nodes, 4 FN_2 nodes, 1 FN_3 node and 1 cdc c_x. The propagation delay (pd) from a user U_i to a tier-1 fog node is 2 milliseconds, from U_i to a tier-2 fog node is 6 milliseconds, from U_i to a tier-3 fog node is 12 milliseconds, and from U_i to the cdc is 137 milliseconds (12 milliseconds from U_i to the proxy server and 125 milliseconds from the proxy server to the cdc). The capacity of each fog node at tier-1, c(fn_1^y), varies from 1000 MIPS to 2000 MIPS. Likewise, the capacity of each fog node at tier-2, c(fn_2^y), varies from 2500 MIPS to 4000 MIPS, the capacity of the fog node at tier-3, c(fn_3^y), has been taken as 5800 MIPS, and the capacity of the cdc, c(c_x), has been taken as 70000 MIPS. The number of jobs (i.e., the job set JS) varies from 250 to 500, and the execution costs of these jobs (i.e., t) vary from 100 to 8500 MIPS. The size-wise break-up of the jobs is as follows: small jobs make up 41% of the workload, medium jobs 34%, and large jobs 25%. Note that the values in Table 2 are representative values, and they can be changed based on the user requirements without affecting the working of the RTH²S algorithm.
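The simulated fog network described above can be written down as a small configuration. The sketch below builds the 8/4/1 topology with capacities drawn uniformly from the stated ranges, reusing the FogNode structure from the earlier illustration; the fixed seed is only for reproducibility of the example and is not part of the paper's setup.

import random

def build_fog_network(seed=0):
    """8 tier-1, 4 tier-2 and 1 tier-3 fog nodes plus one cdc, with the capacities
    and propagation delays used in the simulation setup."""
    rng = random.Random(seed)
    nodes = []
    nodes += [FogNode(f"fn1_{i}", 1, rng.uniform(1000, 2000), 2) for i in range(8)]
    nodes += [FogNode(f"fn2_{i}", 2, rng.uniform(2500, 4000), 6) for i in range(4)]
    nodes += [FogNode("fn3_0", 3, 5800, 12)]
    nodes += [FogNode("cdc", 4, 70000, 137)]   # 12 ms to the proxy + 125 ms proxy-to-cdc
    return nodes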
5) Heterogeneity Level (HL). Heterogeneity Level (HL) sig-
nifies the degree of heterogeneity of fog nodes – measuring
6.2 Simulation Setup and Parameters the variation in computational capacity of fog nodes within
We have used the iFogSim [3] simulator for the implementa- each level. A low HL value implies that the execution capac-
tion of our proposed algorithm RTH 2 S. iFogSim is rooted in ities of the fog nodes are similar. The Heterogeneity level of
CloudSim – a very widely used discrete event cloud simula- any nth tier fog node is given by
tor. iFogSim, therefore, allows us to model the characteris-
n Þ  cðfnn Þ
cðfnmax min
tics of a cloud platform more realistically (CloudSim has
HLFNn ¼ ; (22)
> 4K downloads) [32], a key basis for some of the simula- averageðcðfnjn ÞÞ
tion that this work is based on. We have modelled various
features of fog nodes and the cdc in this simulator. By using n Þ represents a tier-n fog node with the maximum
cðfnmax
iFogSim, one can evaluate different fog and cloud schedul- capacity.
ing strategies. This simulator is appropriate for fog enabled
devices, as it follows a representation of the sensor ! pro- fnmax
n ¼ fnjn : cðfnjn Þ > cðfnX
n Þ: (23)
cessor ! actuator model. A class named HierarchicalFog
n 2 FNn && X 6¼ j. cðfnn Þ represents a
In Eq. (23), fnjn ; fnX min
has been implemented in the simulator. This class reads the
tier-n fog node with the minimum capacity.
dataset from a text file and stores the job-id, the job-length,
the deadline, and the priority. In addition to this, the follow-
fnmin
n ¼ fnjn : cðfnjn Þ < cðfnX
n Þ: (24)
ing quantities have also been added to the class : the propa-
gation delay ðpdÞ of all FN and C, execution capacity ðcÞ
n 2 FNn && X 6¼ j. We can replace n in
In Eq. (24), fnjn ; fnX
and the module allocation. A FogDevice class present in FNn to get the heterogeneity level of a fog node. Finally, the
iFogSim contains a function named updateAllocatedMips. heterogeneity level of the system is given by
The task of this function is to allocate the MIPS require-
ments of various execution modules. In order to take job HLsystem ¼ HLFN1 þ HLFN2 þ . . . . . . þ HLFNn þ HLC : (25)
deadlines into account, certain modifications have been

6) Monetary Cost (MC). This quantity is defined as the cost associated with executing a job on the fog nodes FN or on the cloud data center cdc. This metric depends upon the execution cost and propagation delay of j_k on fn.

6.3 Results and Discussion
In this section we describe the results of various experiments that evaluate and compare our approach across both real-world and synthetic datasets.

Effect of Fog Resources on Performance. We evaluate the capacity improvement of using fog nodes together with the cloud data center cdc, using the Success Ratio (SR) as the performance metric. The scheduling algorithms RTH²S, 1TF (1-tier fog), 2TF (2-tier fog), cdc-only, LTF [15], and WALL [28] are compared. In cdc-only, we forward all the jobs to the cloud data center for execution. In RTH²S, the number of fog nodes at tier-1, tier-2 and tier-3 has been fixed at 8, 4 and 1 respectively. We assume one cdc.

LTF considers two kinds of fog nodes: fast and slow. In order to achieve parity, we consider one fast node and two slow nodes in this section of the simulation. The computation power of the fast nodes is greater than that of the slow nodes; however, the fast nodes consume more power. Our algorithm RTH²S dispatches the jobs in increasing order of deadlines. On the other hand, LTF sorts the jobs in decreasing order of deadlines and sends the jobs with the largest execution time to the fastest node. The Workload ALLocation algorithm WALL targets a hierarchical cloudlet network and assigns jobs to suitable cloudlets/fog nodes such that the average response time is minimized; it sorts the users based on decreasing workload size and schedules the jobs onto the cloudlet so as to minimise the response time. We also consider various fog node tiers in our simulation. The propagation delay (pd) from a user U_i to the cdc c_x has been fixed at 125 milliseconds. In 1-tier fog, i.e., 1TF, we consider only one tier of fog nodes (FN_1) and a cloud data center (c_x). The capacity c(fn_1^y) of the tier-1 fog nodes has been fixed at 3500 MIPS, and the propagation delay (pd) from a user U_i to tier-1 FN_1 in 1TF has been fixed at 2 milliseconds. In 2-tier fog, i.e., 2TF, we consider two tiers of fog nodes (FN_1, FN_2) and a cloud data center (c_x). The propagation delay (pd) from a user U_i to tier-1 FN_1 and from U_i to tier-2 FN_2 has been fixed at 2 milliseconds and 6 milliseconds respectively. The propagation delay (pd) from a user U_i to tier-1 FN_1, from U_i to tier-2 FN_2, and from U_i to tier-3 FN_3 in 3TF has been fixed at 2 milliseconds, 6 milliseconds, and 12 milliseconds respectively.

Fig. 3. Effect of DF on SR.

In the first simulation scenario, we increase the Deadline Factor (DF) and observe its impact on the Success Ratio (SR). The delay factor (pd) is taken as 2, 6, and 12 milliseconds between the user and fog tier-1, tier-2, and tier-3 respectively. A value of 125 milliseconds is taken between the user and the cdc in RTH²S. The deadline factor (DF) of the jobs has been varied from 1 to 5. Tasks have the loosest deadlines when DF = 1, and the tightest deadlines when DF = 5. The results of this simulation are shown in Fig. 3. It is observed that as we increase the DF value, deadlines become more "tight", and we notice a corresponding decrease in the SR value for all scheduling algorithms. As such, a large number of jobs are unable to finish their execution before their deadlines, which is reflected in the decreased SR.

Our proposed algorithm RTH²S provides higher SR values than the cdc-only algorithm. RTH²S schedules the jobs as per their size, priority and deadlines. On the other hand, in the case of the cdc-only algorithm, all jobs, irrespective of their size and priority, are forwarded to the cloud data center (c_x) for execution. This has an adverse effect on both tight and moderate deadline jobs, due to the large propagation delay between the user U_i and the cloud data center (c_x). Hence, the SR values of our proposed algorithm RTH²S are higher than those offered by cdc-only. The LTF algorithm sorts the jobs from longest to shortest, and then assigns the long jobs to the fast nodes and the short jobs to the slow nodes. Hence, short jobs are executed at the end, by which time they may have already missed their deadlines. The large jobs execute at the fast nodes, whereas the medium and small jobs execute at the slow nodes. Due to the modest power of the slow nodes, only a small number of jobs are accommodated at the lower level. Hence, LTF offers a smaller Success Ratio SR value than RTH²S. In 2TF, jobs with P1 priority are executed at tier-1, jobs with P2 priority are executed at tier-2, and jobs with P3 priority may get executed on the cloud data center c_x, depending on the size and deadline requirements. In the absence of the third tier, more jobs are sent to the cloud for execution, which leads to smaller SR values for 2TF. On the other hand, 1TF offers low SR values, despite having a reduced propagation delay between U_i and fn_1^y. This happens due to the reduced overall computation capacity of the tier-1 fog nodes, which results in transferring more jobs to the cloud data center c_x. The WALL algorithm takes the user with the maximum job size and assigns it to the fog node that offers the minimum response time. This approach negatively affects small and medium jobs with tight deadlines. Also, the computation power of fog nodes is relatively modest for executing large jobs, which further increases the finish times, leading to deadline misses. In RTH²S, we split the large jobs to complete them within their deadlines. The SR provided by the different algorithms is ordered as follows: RTH²S > 2TF > 1TF > WALL > LTF > cdc-only.

In the second simulation, we examine the impact of Propagation Delay (pd) on the SR. The initial pd from the users to the fog nodes has been fixed as follows: 2 milliseconds to tier-1, 6 milliseconds to tier-2, 12 milliseconds to tier-3 and 125 milliseconds to the cdc. In order to increase this delay, we have added 10 milliseconds at tier-1, tier-2, tier-3 and the cdc in each iteration. Fig. 4 depicts the results.
Fig. 4. Effect of propagation delay on SR. Fig. 5. Effect of task load on SR.

This results in an increase in the commencement time (ct) at the fog nodes present at tier-1 (FN1), tier-2 (FN2), tier-3 (FN3) and at the cloud data center (cx). Hence, the finish time (ft) of the jobs often overshoots their deadlines (d), so fewer jobs finish execution before their deadlines, which results in a low Success Ratio SR in all tiers. We observe similar results in LTF: the induced delay between slow and fast fog nodes results in smaller values of SR. Likewise, the increased pd affects the SR in WALL; the pd added at each iteration increases the completion time of the jobs in both tiers and the cdc. Overall, we observe that an increase in the pd reduces the SR in all six scheduling strategies.

In the next simulation, we show the impact of Task Load (TL) on Success Ratio (SR). Fig. 5 depicts the results for this simulation. We increase the task load (TL) from 1 to 5. As we increase the TL value, more tasks are added to the system. This reduces the SR, as a large number of jobs start missing their deadlines. This behaviour is shown by all six scheduling strategies: RTH2S, 1TF, 2TF, cdc-only, LTF, and WALL. However, RTH2S takes advantage of the fog nodes present at tier-1, tier-2, and tier-3, due to which a larger number of jobs are able to meet their deadlines. Note that these jobs are unable to meet their deadlines on cdc-only. This happens because the fog nodes are in closer proximity to the end users, and hence, the propagation delay (pd) from the user to the fog nodes is small. Contrarily, jobs which use the cdc to execute face significant propagation delays (pd), which results in deadline misses. The LTF algorithm sorts jobs in decreasing order of deadlines. Its SR values are lower than those of the proposed algorithm, as we sort in the opposite order: small deadline → large deadline. Hence, a larger number of jobs are able to meet their deadlines in a given time interval. The WALL algorithm sorts jobs in descending order of sizes, which affects the tight/moderate deadlines of small and medium jobs. For 1TF, due to less computation power, the fog nodes are not able to finish the jobs before the deadlines; it is tough for a single tier to finish the P1 or P2 priority jobs in time. On the other hand, in 2TF, due to the addition of one more tier, more jobs can be executed before their deadlines. However, once the tiers do not have sufficient capacity to execute, the jobs are transferred to the cloud data center cx. Due to the significant propagation delay between a user and the cloud data center, the jobs start missing their deadlines. For the 3-tier fog network, i.e., RTH2S, more jobs can be accommodated on the fog node tiers with smaller propagation delays, which leads to higher success ratios SR. It is important to note that as we add fog node tiers, fog nodes are added to the network, leading to an increase in the total computation power. Though the propagation delay increases as well, this delay is smaller as compared to sending jobs to the cloud.

We have observed that the 3-tier fog based algorithm RTH2S outperforms all compared scheduling strategies, for all metrics considered. The 2TF and 1TF networks offer lesser computation power. Though there is an increased communication delay due to the presence of more fog tiers in RTH2S, this delay is smaller as compared to sending jobs to the cloud data center for execution. Our proposed algorithm outperforms cdc-only owing to the large communication delay involved in sending the jobs to the cdc. It outperforms LTF due to LTF's sorting of jobs in the opposite direction, which leads to small jobs being scheduled too late. RTH2S outperforms WALL as it splits large jobs with tight deadlines rather than assigning them as a whole to a fog node, which would increase the finish time of the jobs. Also, WALL selects the users with the maximum job size first, giving less priority to small/medium jobs with tight deadlines.
lines. Note that these jobs are unable to meet their deadlines Effect of Heterogeneity Level ðHLÞ on Success Ratio ðSRÞ. We
on cdc  only. This happens as the fog nodes are in closer examine the impact of fog node heterogeneity on the system
proximity to the end users, and hence, the propagation performance. The results of this simulation are shown in
delay ðpdÞ from user to fog nodes is less. Contrarily, jobs Fig. 6. We increase the Heterogeneity Level HL from 0 to
which are using cdc to execute face significant propagation 1.2. The number of fog nodes at tier-1, tier-2, and tier-3 have
delays ðpdÞ, which results in deadline misses. The LTF algo- been fixed at 8, 2 and 1 respectively. The capacity of fog
rithm sorts jobs in a decreasing order of deadlines. It’s SR nodes has been varied from 300 MIPS to 6000 MIPS. We
values are lower than those of the proposed algorithm’s SR compare the performance of six scheduling algorithms:
values, as we sort in the opposite order: small deadline ! RTH 2 S, 1TF , 2TF , LTF , WALL and cdc  only. As cdc 
large deadline. Hence, a larger number of jobs are able to only does not employ fog nodes, a significant number of
meet their deadlines in a given time interval. The WALL
algorithm sorts jobs in descending order of sizes, which
effects the tight/moderate deadlines of small and medium
jobs. For 1TF node, due to less computation power, these
fog nodes are not able to finish the jobs before the deadlines.
It is tough for a single tier to finish the P1 or P2 priority jobs
before their deadlines. On the other hand, in 2TF , due to
addition of one more tier, more number of jobs can be exe-
cuted before the deadlines. However, once the tiers don’t
have sufficient capacity to execute, the jobs are transferred
to cloud data center cdc cx . Due to the significant propaga-
tion delay between a user and the cloud data center, the
jobs start missing their deadlines. For 3-tier fog node i.e
RTH 2 S, more jobs can be accommodated on the fog node Fig. 6. Effect of HL on SR.
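As a rough illustration of this heterogeneity setting, the sketch below draws per-node capacities whose spread around a base value grows with HL. This mapping is our assumption for illustration only; the paper's formal definition of HL is given in its system model and may differ.

```python
# One possible way to realise a heterogeneity level (HL): draw per-node
# capacities whose spread around a base value grows with HL. Illustrative
# assumption only; not the paper's formal HL definition.
import random

def sample_capacities(n_nodes, hl, base_mips=3000, lo=300, hi=6000, seed=1):
    rng = random.Random(seed)
    caps = []
    for _ in range(n_nodes):
        factor = 1.0 + rng.uniform(-hl / 2.0, hl / 2.0)   # spread widens with HL
        caps.append(min(hi, max(lo, base_mips * factor)))
    return caps

print(sample_capacities(n_nodes=8, hl=0.0))   # all nodes identical when HL = 0
print(sample_capacities(n_nodes=8, hl=1.2))   # wide spread at HL = 1.2
```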
Fig. 7. Effect of tag mix 1 on SR.
Fig. 8. Effect of tag mix 2 on SR.
Moreover, we observe a constant Success Ratio for cdc-only, i.e., increasing fog node heterogeneity has no effect on cdc-only's SR. This is because we consider only one cdc in our approach, so there is no heterogeneity in the cdc. Our proposed model can be easily extended to consider heterogeneity in the cdc; we omit this experiment due to space constraints. For RTH2S, we observe that increasing HL leads to an increase in the SR. This is because increasing HL increases the variation in the execution capacity of fog nodes. Hence, the probability of picking a faster fog node increases. This behaviour is also observed in 1TF, 2TF, WALL and LTF.

However, we observe that RTH2S offers a higher SR than LTF. This is because LTF orders the jobs from longest → shortest, i.e., it is not real-time aware. Hence, the short jobs start late and miss their deadlines. On the other hand, RTH2S sorts jobs from smallest → largest in terms of deadlines, so the number of jobs meeting their deadlines is maximised. WALL performs better than LTF, as WALL has two tiers, while LTF has just one tier. Hence, RTH2S performs better than 1TF, 2TF, cdc-only, WALL and LTF, as it provides higher SR values.

Effect of Tag Mix on Success Ratio (SR). We study the impact of tag assignment on the success ratio SR of RTH2S, WALL, LTF, and cdc-only. We consider two separate tag mixes in this simulation:

Tag Mix 1: the number of un-tagged (i.e., regular profile) and tag2 jobs is constant, and the number of tag1 jobs is increased by 1/4 at every x-axis data point. Based on Table 4, all three types of tag1 jobs are increased in equal proportion, by 1/12 each.

Tag Mix 2: the number of tag1 and un-tagged jobs is constant, and the number of tag2 jobs is increased by 1/4 at every x-axis data point. Based on Table 4, all three types of tag2 jobs are increased in equal proportion, by 1/12 each.

Initially, we considered 160 jobs. The results for tag mix 1 and mix 2 are shown in Figs. 7 and 8 respectively. The format of each x-axis data point is as follows: (# of tag1 jobs, # of tag2 jobs, # of no tag jobs). Fig. 7 shows the result of tag mix 1. By increasing large and medium jobs (tag1) with tight deadlines, and large jobs with moderate deadlines, we increase the load on the lower tiers of fog nodes. Due to an increase in tag1 jobs, the algorithm preempts the currently scheduled jobs. This negatively impacts regular profile jobs: small jobs with tight deadlines, or medium jobs with moderate deadlines. As large tasks cannot be directly accommodated on a single fog node, they need to be split before scheduling on fog nodes. This increases the commencement time of the jobs, which may lead to a deadline miss. This decreases the overall success ratio SR of RTH2S. The minimum SR is exhibited by cdc-only owing to the distance between user Ui and the cloud data center cx. Also, the cloud data center is not suitable for handling jobs with tight deadlines. On the other hand, LTF sorts the jobs in decreasing order of deadlines, which results in missing most of the tight deadlines. WALL sorts the users in decreasing order of workloads. Moreover, it does not do any splitting of the jobs. Due to the modest capacity of fog nodes, it takes more time to finish the large jobs. This results in missing most of the job deadlines.

Fig. 8 shows the result of tag mix 2, where we increase tag2 jobs, i.e., small jobs with moderate deadlines, small jobs with loose deadlines and medium jobs with moderate deadlines. This decreases the performance of the RTH2S algorithm due to an increase in the number of jobs. However, in this case, no job preemption is necessary, as the job sizes are not so large and the deadlines are not so tight. The cdc-only algorithm performs the worst as it does not employ any fog node tier for offloading the computation. On the other hand, LTF executes large jobs first by employing fast fog nodes for job execution. This results in missing jobs of small and moderate sizes. We observe a similar pattern in WALL, as it gives preference to large jobs, resulting in performance degradation due to misses of the tight deadlines of small and medium jobs. We observe that as we increase the number of tag1 jobs, there is a significant decrease in the SR values. This happens because tagged jobs with tight/moderate deadlines and large/medium sizes lead to regular profile jobs being unable to execute before their deadlines.
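The tag-mix-1 series can be generated mechanically from the recipe above, as in the sketch below; the fixed tag2 and un-tagged counts used here are assumptions, since only the 160-job starting point and the 1/4 and 1/12 increments are stated.

```python
# Sketch of how the tag-mix-1 x-axis points can be generated: tag1 jobs grow by
# one quarter of the base job count per data point, split equally (1/12 each)
# across the three tag1 sub-profiles, while tag2 and un-tagged counts stay fixed.
# The fixed counts and sub-profile labels used here are assumptions.

BASE_JOBS = 160
TAG1_SUBTYPES = ("large/tight", "medium/tight", "large/moderate")   # assumed labels

def tag_mix1_points(steps=4, n_tag2=40, n_untagged=80):
    points = []
    for step in range(steps + 1):
        per_subtype = round(BASE_JOBS * step / 12)     # 1/12 of the base per sub-profile
        n_tag1 = per_subtype * len(TAG1_SUBTYPES)      # grows by ~1/4 of the base per step
        points.append((n_tag1, n_tag2, n_untagged))    # matches the x-axis label format
    return points

print(tag_mix1_points())   # [(0, 40, 80), (39, 40, 80), (81, 40, 80), ...]
```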
Effect of Task Load on Monetary Cost (MC). We consider the effect of task load on the monetary cost MC for RTH2S using Microsoft Azure pricing in our simulations, as shown in Table 5, with results in Table 6. We have taken the weights w1 and w2 as 0.75 and 0.25, respectively. As we increase the task load of the system, we observe an increase in the monetary cost. As more jobs are added to the system, more work has to be done by the FN and the cdc. With the increase in the task load, the jobs are sent to the higher tiers for execution. The price of higher-tier fog nodes is more than that of the lower-tier fog nodes. Also, this increases the propagation delay in the system. As the monetary cost is directly proportional to the execution cost and propagation delay of jobs, this is reflected in the results of RTH2S, WALL and LTF. The monetary cost of LTF is more than that of RTH2S, as LTF sends jobs with the largest execution time to the fastest node. The monetary cost of WALL is less than that of LTF, as LTF sends more jobs to the cdc as the task load increases. This increases the propagation delay, which increases the overall monetary cost of LTF. The monetary cost of WALL is higher than that of RTH2S, as WALL prefers the larger workloads initially. This algorithm can finish only a very small number of small and medium jobs.
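A hedged sketch of how the weighted monetary cost can be computed from the Table 5 prices is shown below. We assume MC = w1 × execution cost + w2 × communication cost with w1 = 0.75 and w2 = 0.25; the exact cost formula and the price attached to propagation delay are defined in the paper's cost model, so treat this only as an illustration of how the weights enter.

```python
# Hedged sketch of the weighted monetary-cost metric. Assumes
# MC = w1 * execution cost + w2 * communication (propagation-delay) cost,
# with the Table 5 per-hour prices; the communication price is an assumption.

PRICE_PER_HOUR = {"tier1": 0.034, "tier2": 0.34, "tier3": 3.4, "cdc": 0.08}
W1, W2 = 0.75, 0.25
COMM_PRICE_PER_HOUR = 0.01          # assumed network price, not from the paper

def monetary_cost(busy_hours, comm_hours):
    """busy_hours: execution hours per node type; comm_hours: total propagation time."""
    exec_cost = sum(PRICE_PER_HOUR[node] * h for node, h in busy_hours.items())
    comm_cost = COMM_PRICE_PER_HOUR * comm_hours
    return W1 * exec_cost + W2 * comm_cost

# Example: a task load that keeps tier-1 busy for 2 h, tier-2 for 0.5 h and the
# cdc for 1 h, with 0.2 h of accumulated propagation delay.
print(round(monetary_cost({"tier1": 2.0, "tier2": 0.5, "cdc": 1.0}, 0.2), 4))  # 0.239
```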
TABLE 5
Cost ($/hour, May 2021, Asia Pacific Region) of Microsoft Azure

Instance type   Cost per hour ($)
tier-1          $0.034
tier-2          $0.34
tier-3          $3.4
cdc             $0.08

TABLE 6
Effect of Task Load on Monetary Cost

TL   RTH2S (TL↑)   WALL (TL↑)   LTF (TL↑)
1    $1.02         $2.41        $3.29
2    $2.28         $4.02        $5.12
3    $3.53         $5.69        $7.93
4    $5.31         $8.23        $9.89
5    $7.31         $10.72       $13.23
Task Deadline and Monetary Cost (MC). We investigate the effect of the deadline factor on the monetary cost MC for the proposed algorithm RTH2S, WALL and LTF. The results are shown in Table 7. With the increase in the deadline factor DF, the deadlines become tighter and the jobs have to run in the fog tiers to finish their execution before their deadlines. As more and more jobs run on the fog tiers, the monetary cost MC increases. Besides, as the deadlines become tighter with an increase in the deadline factor, fewer jobs are able to finish with the cdc. This happens as jobs have to travel farther to execute on the cdc. By the time the tight-deadline jobs reach the cdc, it is already too late. RTH2S utilises the fog tiers' resources to finish the job execution before the deadlines. This further increases the overall MC of our proposed algorithm RTH2S. RTH2S outperforms LTF and WALL due to the usage of better heuristics. Both LTF and WALL execute large jobs first, and the execution cost of large jobs is higher than that of small and medium jobs. With deadlines becoming tighter, both algorithms run a significant portion of large jobs, leading to higher monetary costs.

TABLE 7
Effect of Deadline Factor on Monetary Cost

DF   RTH2S (DF↑)   WALL (DF↑)   LTF (DF↑)
1    $1.13         $1.91        $2.98
2    $1.5          $2.53        $4.08
3    $2.33         $3.45        $5.91
4    $4.98         $6.43        $7.18
5    $6.05         $8.49        $10.23

Fig. 9. Effect of queuing delay on SR.

Task Success Ratio (SR) and Queuing Delay. We investigate the effect of task load on system performance, while considering the queuing delay. In Fig. 9, we compare the performance of three scheduling algorithms: RTH2S, WALL, and LTF, by increasing the task load TL from 1 to 5. We calculate the finish time of the jobs with and without the queuing delay (q). Increasing the queuing delay leads to jobs being unable to meet their deadlines, and therefore the system success ratio SR decreases in proportion to the number of jobs. RTH2S offers better performance, as it considers job priority, size and deadline. On the other hand, LTF sorts jobs from largest to smallest, leading to higher deadline misses. WALL chooses the user having the largest job among all the users, leading to short jobs missing most of their deadlines.
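The "with and without queuing delay" comparison reduces to adding q to the commencement time before the deadline test, as in the minimal sketch below (the numbers are hypothetical).

```python
# Tiny sketch of the queuing-delay comparison: the queuing delay q is added to
# the commencement time before the deadline test. Numbers are hypothetical.

def meets_deadline(ct_ms, exec_ms, deadline_ms, q_ms=0.0):
    return ct_ms + q_ms + exec_ms <= deadline_ms

job = {"ct_ms": 6, "exec_ms": 740, "deadline_ms": 760}
print(meets_deadline(**job))            # True  (no queuing delay)
print(meets_deadline(**job, q_ms=25))   # False (misses once queued behind others)
```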
6.4 Performance Analysis Using Synthesized Dataset
We consider synthesized datasets to evaluate the performance of our proposed algorithm RTH2S, LTF, WALL, and cdc-only. Our dataset comprises [100-300] jobs in the job set JS. We randomly generate jobs between 2000–45000 MI, with memory usage between 0.15 GB–2.5 GB and a deadline range of 250 ms–10000 ms, based on [39], [40].
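A sketch of such a synthesized job set, using the ranges quoted above, is shown below; the underlying distributions used in [39], [40] may differ.

```python
# Sketch of the synthesized job set described above (ranges taken from the text;
# uniform sampling is an assumption).
import random

def synth_job_set(n=None, seed=0):
    rng = random.Random(seed)
    n = n if n is not None else rng.randint(100, 300)    # |JS| in [100, 300]
    return [{
        "id": i,
        "size_mi": rng.uniform(2000, 45000),       # job length in MI
        "memory_gb": rng.uniform(0.15, 2.5),       # memory footprint
        "deadline_ms": rng.uniform(250, 10000),    # relative deadline
    } for i in range(n)]

job_set = synth_job_set()
print(len(job_set), job_set[0])
```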
DF and Success Ratio (SR). We consider the impact on SR while increasing DF from 1 to 5; higher values of DF make it more difficult for jobs to finish execution within their deadlines. From Fig. 10, we observe a similar trend in the performance of RTH2S, cdc-only, LTF, and WALL.

Fig. 10. Synthetic dataset: Effect of DF on SR.

Propagation Delay (pd) and Success Ratio (SR). We illustrate the impact of pd on SR. We take the delay factor (pd) as 2, 6, and 12 milliseconds between the user and fog tier-1, tier-2, and tier-3, respectively. We consider a value of 125 milliseconds between the user and the cdc.
Fig. 11. Synthetic dataset: Effect of pd on SR.
Fig. 13. Effect of DF on SR.
As we increase the delay factor, we see a decrease in the success ratio of the jobs in all four scheduling strategies: RTH2S, LTF, WALL, and cdc-only. This happens as the jobs' finish time increases with the increase in the pd. This is visible in the results shown in Fig. 11. Due to the reasons mentioned in the previous sections, we observe the following SR ordering among the algorithms: RTH2S > WALL > LTF > cdc-only.

6.5 Fog Cloud Test-Bed
Our prototype considers a single tier of fog nodes FN1 followed by the cloud data center cx. Tier-1 FN1 consists of two heterogeneous fog nodes. We used an RPi4 Model B with 4 GB RAM, and a desktop with the Ubuntu 16.04 operating system, as fn11 and fn21 respectively. We used a VM instance on Amazon EC2 as cx. The first fog node is an RPi4 (Broadcom BCM2711, Quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz, running Raspbian OS). The second fog node is an Intel i7-7700 CPU, 3.60 GHz × 8, 64-bit OS with 7.7 GiB RAM. We used a cloud VM instance with 1 vCPU and 1 GiB memory as cx. A regular IPv4 Internet connection is used to connect the user to the cloud. We do not show the results for the WALL algorithm, as its performance is similar to LTF due to the consideration of one tier of fog nodes.

Effect of TL on Success Ratio SR. In the first experiment, we study the real-time performance of the proposed algorithm RTH2S, LTF and cx. Three kinds of job priorities are considered, i.e., P1, P2, P3. Initially, we consider eight jobs. We gradually increase the number of jobs (up to 48), and observe the impact on the Success Ratio SR. The results are shown in Fig. 12. The SR decreases with the increase of jobs in all three cases. However, our proposed algorithm RTH2S outperforms the others due to the usage of a superior heuristic: it incorporates fog nodes in job execution, while following the earliest deadline first (EDF) algorithm. Moreover, the jobs are assigned according to their priority. The cdc-only approach performs worst owing to the significant propagation delay from user ui to cx.

Fig. 12. Effect of TL on SR.

Effect of DF on Success Ratio SR. The deadline factors for the tight, moderate and large deadlines are 0 s, 2 s, and 10 s respectively. We increase all three deadlines by 1 s in each iteration. As shown in Fig. 13, the best performance is offered by RTH2S. With a loose deadline, more jobs are able to finish their execution, leading to an increase in the SR for all three approaches.

Fig. 14. Effect of TL on average response time.

Effect of TL on Average Response Time. We estimate the average response time by varying the task load for RTH2S, LTF and cdc-only. Initially, we consider 7 jobs, followed by an increase of 5 per iteration. The average response time is the sum of the execution time and the communication delay. With the increase in the number of jobs, the average response time increases. The results of this experiment are shown in Fig. 14. The highest average response time is exhibited by cdc-only owing to the high communication delay from user Ui to cx. The lowest average response time is exhibited by RTH2S, followed by LTF; as LTF sorts the deadlines in descending order, short-deadline jobs execute last, which also increases the average response time of all the jobs.
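The response-time metric of Fig. 14 is the per-job execution time plus communication delay, averaged over all jobs, as in the sketch below (the per-job numbers are made up for illustration).

```python
# Sketch of the average response time reported in Fig. 14: per-job response
# time is execution time plus communication delay; the numbers are hypothetical.

def average_response_time_ms(records):
    """records: list of (execution_time_ms, communication_delay_ms) pairs."""
    return sum(e + c for e, c in records) / len(records)

runs = [(120, 4), (350, 4), (900, 130)]     # e.g., two fog executions, one cloud execution
print(average_response_time_ms(runs))       # ~502.7 ms
```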
The average improvement in Success Ratio (SR) offered by RTH2S over cdc-only, LTF and WALL is shown in Table 8.

TABLE 8
Average Improvement

Performance metric         cdc-only   LTF     WALL
Deadline Factor (DF)       81%        69%     46%
Propagation Delay (PD)     134%       107%    39%
Heterogeneity Level (HL)   106%       43.5%   29%
Task Load (TL)             81%        64%     42%

7 CONCLUSION
Significant propagation delays between users and the cloud data center may act as a deterrent for executing deadline-driven real-time jobs. This delay can be reduced by employing fog nodes for the execution of such jobs. In addition, it may very well be the case that there is a hierarchy of fog nodes [2]. Typically, fog nodes in various tiers (and even within a particular tier) are heterogeneous. In this paper, we propose RTH2S, an algorithm that schedules real-time jobs on a multi-tiered fog network by taking diverse job profiles into account. Using a real-life workload, RTH2S is validated using a simulator as well as a prototype. We observe that RTH2S offers better real-time results in terms of higher Success Ratios and reduced Monetary Costs. We also observe that job profiles impact the real-time system performance: an increase in the number of tag1 profile jobs impacts the regular profile jobs, leading to deadline misses and lower SR values. Our future work involves the use of multiple cloud data centers. We also plan to develop "schedulability" and performance bounds for real-time tasks on such multi-tier fog-cloud architectures.

REFERENCES
[1] Fog Computing and the Internet of Things: Extend the Cloud to Where the Things Are, Cisco White Paper, 2015.
[2] OpenFog Consortium, OpenFog Reference Architecture for Fog Computing, 2017. [Online]. Available: https://www.openfogconsortium.org/wp-content/uploads/OpenFog_Reference_Architecture_2_09_17-FINAL.pdf
[3] H. Gupta, A. V. Dastjerdi, S. K. Ghosh, and R. Buyya, "iFogSim: A toolkit for modeling and simulation of resource management techniques in Internet of Things, edge and fog computing environments," 2017. [Online]. Available: https://arxiv.org/abs/1606.02007
[4] R. K. Naha, S. Garg, D. Georgakopoulos, P. R. Jayaraman, Y. Xiang, and R. Ranjan, "Fog computing: Survey of trends, architectures, requirements, and research directions," IEEE Access, vol. 6, pp. 47980–48009, 2018.
[5] M. Satyanarayanan, P. Bahl, R. Caceres, and N. Davies, "The case for VM-based cloudlets in mobile computing," IEEE Pervasive Comput., vol. 8, no. 4, pp. 14–23, Fourth Quarter 2009.
[6] N. Auluck, O. Rana, S. Nepal, A. Jones, and A. Singh, "Scheduling real time security aware tasks in fog networks," IEEE Trans. Serv. Comput., vol. 14, no. 6, pp. 1981–1994, Nov./Dec. 2021.
[7] S. Han and H. Park, "Predictability of least laxity first scheduling algorithm on multiprocessor real-time systems," in Proc. Int. Conf. Embedded Ubiquitous Comput., 2006, pp. 755–764.
[8] Y. Yang, K. Wang, G. Zhang, X. Chen, X. Luo, and M. T. Zhou, "MEETS: Maximal energy efficient task scheduling in homogeneous fog networks," IEEE Internet Things J., vol. 5, no. 5, pp. 4076–4087, Oct. 2018.
[9] K. Fizza, N. Auluck, and A. Azim, "Improving the schedulability of real-time tasks using fog computing," IEEE Trans. Serv. Comput., vol. 15, no. 1, pp. 372–385, Jan./Feb. 2022.
[10] Y. Yang, S. Zhao, W. Zhang, Y. Chen, X. Luo, and J. Wang, "DEBTS: Delay energy balanced task scheduling in homogeneous fog networks," IEEE Internet Things J., vol. 5, no. 3, pp. 2094–2106, Jun. 2018.
[11] A. Singh, N. Auluck, O. Rana, A. Jones, and S. Nepal, "RT-SANE: Real time security aware scheduling on the network edge," in Proc. 10th IEEE/ACM Int. Conf. Utility Cloud Comput., 2017, pp. 131–140.
[12] I. A. Moschakis and H. D. Karatza, "A meta-heuristic optimization approach to the scheduling of Bag-of-Tasks applications on heterogeneous Clouds with multi-level arrivals and critical jobs," Simul. Modelling Pract. Theory, vol. 57, pp. 1–25, 2015.
[13] C. Mouradian, D. Naboulsi, S. Yangui, R. H. Glitho, M. J. Morrow, and P. A. Polakos, "A comprehensive survey on fog computing: State-of-the-art and research challenges," IEEE Commun. Surv. Tuts., vol. 20, no. 1, pp. 416–464, First Quarter 2018.
[14] K. Fizza, N. Auluck, O. Rana, and L. Bittencourt, "PASHE: Privacy aware scheduling in a heterogeneous fog environment," in Proc. IEEE 6th Int. Conf. Future Internet Things Cloud, 2018, pp. 333–340.
[15] H. Wu and C. Lee, "Energy efficient scheduling for heterogeneous fog computing architectures," in Proc. IEEE 42nd Annu. Comput. Softw. Appl. Conf., 2018, pp. 555–560.
[16] A. Yousefpour, G. Ishigaki, and J. P. Jue, "Fog computing: Towards minimizing delay in the Internet of Things," in Proc. IEEE Int. Conf. Edge Comput., 2017, pp. 17–24.
[17] G. Zhang, F. Shen, Y. Zhang, R. Yang, Y. Yang, and E. A. Jorswieck, "Delay minimized task scheduling in fog-enabled IoT networks," in Proc. 10th Int. Conf. Wireless Commun. Signal Process., 2018, pp. 1–6.
[18] N. Chen, Y. Yang, T. Zhang, M. Zhou, X. Luo, and J. K. Zao, "Fog as a service technology," IEEE Commun. Mag., vol. 56, no. 11, pp. 95–101, Nov. 2018.
[19] S. Zhao, Y. Yang, Z. Shao, X. Yang, H. Qian, and C. Wang, "FEMOS: Fog-enabled multi-tier operations scheduling in dynamic wireless networks," IEEE Internet Things J., vol. 5, no. 2, pp. 1169–1183, Apr. 2018.
[20] S. Sarkar, S. Chatterjee, and S. Misra, "Assessment of the suitability of fog computing in the context of Internet of Things," IEEE Trans. Cloud Comput., vol. 6, no. 1, pp. 46–59, First Quarter 2018.
[21] A.-C. Pang, W.-H. Chung, T.-C. Chiu, and J. Zhang, "Latency-driven cooperative task computing in multi-user fog-radio access networks," in Proc. IEEE 37th Int. Conf. Distrib. Comput. Syst., 2017, pp. 615–624.
[22] S. Malik, S. Ahmad, B. W. Kim, D. H. Park, and D. Kim, "Hybrid inference based scheduling mechanism for efficient real time task and resource management in smart cars for safe driving," Electronics, vol. 8, 2019, Art. no. 344.
[23] T. N'takpe and F. Suter, "Don't hurry be happy: A deadline-based backfilling approach," in Proc. Workshop Job Scheduling Strategies Parallel Process., 2017, pp. 62–82.
[24] L. Tong, Y. Li, and W. Gao, "A hierarchical edge cloud architecture for mobile computing," in Proc. 35th Annu. IEEE Int. Conf. Comput. Commun., 2016, pp. 1–9.
[25] A. K. Mishra, J. L. Hellerstein, W. Cirne, and C. R. Das, "Towards characterizing cloud backend workloads: Insights from Google compute clusters," ACM SIGMETRICS Perform. Eval. Rev., vol. 37, no. 4, pp. 34–41, 2010.
[26] P. Han, C. Du, J. Chen, and X. Du, "Minimizing monetary costs for deadline constrained workflows in cloud environments," IEEE Access, vol. 8, pp. 25060–25074, 2020.
[27] D. A. Chekired, L. Khoukhi, and H. T. Mouftah, "Industrial IoT data scheduling based on hierarchical fog computing: A key for enabling smart factory," IEEE Trans. Ind. Informat., vol. 14, no. 10, pp. 4590–4602, Oct. 2018.
[28] Q. Fan and N. Ansari, "Workload allocation in hierarchical cloudlet networks," IEEE Commun. Lett., vol. 22, no. 4, pp. 820–823, Apr. 2018.
[29] P. Wang, Z. Zheng, B. Di, and L. Song, "HetMEC: Latency-optimal task assignment and resource allocation for heterogeneous multi-layer mobile edge computing," IEEE Trans. Wireless Commun., vol. 18, no. 10, pp. 4942–4956, Oct. 2019.
[30] E. El Haber, T. M. Nguyen, and C. Assi, "Joint optimization of computational cost and devices energy for task offloading in multi-tier edge-clouds," IEEE Trans. Commun., vol. 67, no. 5, pp. 3407–3421, May 2019.
[31] M. Peixoto, T. Genez, and L. F. Bittencourt, "Hierarchical scheduling mechanisms in multi-level fog computing," IEEE Trans. Serv. Comput., to be published, doi: 10.1109/TSC.2021.3079110.
[32] R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. De Rose, and R. Buyya, "CloudSim: A toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms," Softw.: Pract. Experience, vol. 41, no. 1, pp. 23–50, 2011.
[33] R. K. Naha, S. Garg, A. Chan, and S. K. Battula, "Deadline-based dynamic resource allocation and provisioning algorithms in fog-cloud environment," Future Gener. Comput. Syst., vol. 104, pp. 131–141, 2020.
[34] A. Karimiafshar, M. R. Hashemi, M. R. Heidarpour, and A. N. Toosi, "An energy-conservative dispatcher for fog-enabled IIoT systems: When stability and timeliness matter," IEEE Trans. Serv. Comput., to be published, doi: 10.1109/TSC.2021.3114964.
[35] L. Li, Q. Guan, L. Jin, and M. Guo, "Resource allocation and task offloading for heterogeneous real-time tasks with uncertain duration time in a fog queueing system," IEEE Access, vol. 7, pp. 9912–9925, 2019.
[36] M. Adhikari, M. Mukherjee, and S. N. Srirama, "DPTO: A deadline and priority-aware task offloading in fog computing framework leveraging multilevel feedback queueing," IEEE Internet Things J., vol. 7, no. 7, pp. 5773–5782, Jul. 2020.
[37] E. Deelman et al., "Pegasus, a workflow management system for science automation," Future Gener. Comput. Syst., vol. 46, pp. 17–35, 2015.
[38] L. Li, M. Guo, L. Ma, H. Mao, and Q. Guan, "Online workload allocation via fog-fog-cloud cooperation to reduce IoT task service delay," Sensors, vol. 19, no. 18, 2019, Art. no. 3830.
[39] C. Sonmez, A. Ozgovde, and C. Ersoy, "Fuzzy workload orchestration for edge computing," IEEE Trans. Netw. Service Manage., vol. 16, no. 2, pp. 769–782, Jun. 2019.
[40] J. Almutairi and M. Aldossary, "A novel approach for IoT task offloading in edge-cloud environments," J. Cloud Comput., vol. 10, no. 1, pp. 1–19, 2021.

Amanjot Kaur is currently working toward the PhD degree at the Indian Institute of Technology, Ropar, India. Her research interests include cloud computing, fog and edge computing.

Nitin Auluck is currently an associate professor in the Department of Computer Science & Engineering, Indian Institute of Technology Ropar, Punjab, India. His research interests include fog computing, real-time systems, and parallel and distributed systems.

Omer Rana (Member, IEEE) is currently a professor of performance engineering with the School of Computer Science & Informatics, Cardiff University, U.K.

For more information on this or any other computing topic, please visit our Digital Library at www.computer.org/csdl.